Loading…
Type: Breakout: Breaker Track clear filter
Thursday, May 29
 

11:30am CEST

Hacking Your Enterprise Copilot: A Direct Guide to Indirect Prompt Injections
Thursday May 29, 2025 11:30am - 12:15pm CEST
Enterprise copilots, from Microsoft Copilot to Salesforce’s Einstein, are adopted by every major enterprise. Grounded into your personal enterprise data they offer major productivity gains. But what happens when they get compromised? And how exactly can that happen?

In this talk we will see how we can turn these trusted enterprise AI assistants into our own malicious insiders within the victim organization. Spreading misinformation, tricking innocent employees into making fatal mistakes, routing users to our phishing sites, and even directly exfiltrating sensitive data!

We’ll go through the process of building these attack techniques from scratch, presenting a mental framework for how to hack any enterprise copilot, no prior experience needed. Starting from system prompt extraction techniques to crafting reliable and robust indirect prompt injections (IPIs) using our extracted system prompt. Showing a step by step process of how we arrived at each of the results we’ve mentioned above, and how you can replicate them to any enterprise copilot of your choosing.

To demonstrate the efficacy of our methods, we will use Microsoft Copilot as our guinea pig for the session, seeing how our newly found techniques manage to circumvent Microsoft’s responsible AI security layer.

Join us to explore the unique attack surface of enterprise copilots, and learn how to harden your own enterprise copilot to protect against the vulnerabilities we were able to discover.
Speakers
avatar for Tamir Ishay Sharbat

Tamir Ishay Sharbat

Software Engineer and Security Researcher, Zenity
Tamir Ishay Sharbat is a software engineer and security researcher with a particular passion for AI security. His current focus is on identifying vulnerabilities in enterprise AI products such as Microsoft Copilot, Microsoft Copilot Studio, Salesforce Einstein, Google Gemini and more... Read More →
Thursday May 29, 2025 11:30am - 12:15pm CEST
Room 113

1:15pm CEST

Beyond the Surface: Exploring Attacker Persistence Strategies in Kubernetes
Thursday May 29, 2025 1:15pm - 2:00pm CEST
Kubernetes has been put to great use by a wide variety of organizations to manage their workloads, as it hides away a lot of the complexity of managing and scheduling containers. But with each added layer of abstraction, there can be new places for attackers to hide in darkened corners.

This talk will examine how attackers can (ab)use little known features of Kubernetes and the components that are commonly deployed as part of cloud-native containerized workloads to persist in compromised systems, sometimes for years at a time. We'll also pinpoint places where, if you don't detect the initial attack, it might be very difficult to spot the attacker lurking in your cluster.

  rorym@mccune.org.uk
 linkedin.com/in/rorym/
 raesene.github.io (blog)
 datadoghq.com (company)
 infosec.exchange/@raesene (Mastodon)
 bsky.app/profile/m... (Bluesky )
Speakers
avatar for Rory McCune

Rory McCune

Senior Advocate, Datadog
Rory is a senior advocate for Datadog who has extensive experience with Cyber security and Cloud native computing. In addition to his work as a security reviewer and architect on containerization technologies like Kubernetes and Docker he has presented at Kubecon EU and NA, as well... Read More →
Thursday May 29, 2025 1:15pm - 2:00pm CEST
Room 113

2:15pm CEST

Builders and Breakers: A Collaborative Look at Securing LLM-Integrated Apps
Thursday May 29, 2025 2:15pm - 3:00pm CEST
As Large Language Models (LLMs) become an integral part of modern applications, they not only enable new functionalities but also introduce unique security vulnerabilities. In this collaborative talk, we bring together two perspectives: a builder who has experience developing and defending LLM-integrated apps, and a penetration tester who specialises in AI red teaming. Together, we’ll dissect the evolving landscape of AI security.

On the defensive side, we’ll explore strategies like prompt injection prevention, input validation frameworks, and continuous testing to protect AI systems from adversarial attacks. From the offensive perspective, we’ll showcase how techniques like data poisoning and prompt manipulation are used to exploit vulnerabilities, as well as the risks tied to generative misuse that can lead to data leaks or unauthorised actions.

Through live demonstrations and real-world case studies, participants will witness both the attack and defence in action, gaining practical insights into securing AI-driven applications. Whether you’re developing AI apps or testing them for weaknesses, you’ll leave this session equipped with actionable knowledge on the latest methods for protecting LLM systems. This collaborative session offers a comprehensive look into AI security, combining the expertise of two professionals with distinct backgrounds - builder and breaker.
Speakers
avatar for Javan Rasokat

Javan Rasokat

Senior Application Security Specialist, Sage
Javan is a Senior Application Security Specialist at Sage, helping product teams enhance security throughout the software development lifecycle. On the side, he lectures Secure Coding at DHBW University in Germany. His journey as an ethical hacker began young, where he began to automate... Read More →
avatar for Rico Komenda

Rico Komenda

Senior Security Consultant, adesso SE
Rico is a senior security consultant at adesso SE. His main security areas are in application security, cloud security, offensive security and AI security.For him, general security intelligence in various aspects is a top priority. Today’s security world is constantly changing and... Read More →
Thursday May 29, 2025 2:15pm - 3:00pm CEST
Room 113

3:30pm CEST

To BI or Not to BI? Data Leakage Tragedies with Power BI Reports
Thursday May 29, 2025 3:30pm - 4:15pm CEST
In this session, we will expose a major data leakage vulnerability in Microsoft Fabric (Power BI) that has already affected tens of thousands of reports, putting thousands of enterprises and organizations at risk. We’ll demonstrate how a Power BI report viewer, especially for reports published to the web, can access unintended data by manipulating API requests to reveal the underlying data model.

We will also showcase PBAnalyzer, an open-source tool to help organizations identify their exposure, and unveil a new attack vector: DAX Injection. This vulnerability stems from improper handling of variables in DAX queries, which we will demonstrate using a Power Automate flow that leaks sensitive data to an external anonymous user.

The session will conclude with actionable steps to secure Power BI reports and prevent unnecessary data exposure.
Speakers
avatar for Uriya Elkayam

Uriya Elkayam

Senior Security Researcher, Nokod Security
Uriya Elkayam is a senior security researcher at Nokod Security. His research focuses on application security aspects of low-code/ No-code platforms such as MS Power Platform, UiPath, and OutSystems. He has a passion for both finding vulnerabilities and new mitigation techniques... Read More →
Thursday May 29, 2025 3:30pm - 4:15pm CEST
Room 113
 
Friday, May 30
 

10:30am CEST

Doors of (AI)pportunity: The Front and Backdoors of LLMs
Friday May 30, 2025 10:30am - 11:15am CEST
The question “What is AI security?” followed by “No, not image classification, LLMs!” has become a frequent conversation for us at conferences around the world. So, we decided to answer the real question.

Having spent the last year actively trying to break LLMs as attackers and defenders, as external entities, and as insider threats, we have gathered and created many techniques to jailbreak, trick, and control LLMs, and have distilled previously complex techniques in a way everyone can understand. We will teach you how to exploit control tokens, much like when we hacked Google’s Gemini for Workspace. You will see how to get an LLM to pop a shell with an image of a seashell, and we’ll even provide the tools to automatically extract pop-culture exploits for your very own KROP gadgets. We will reveal how an insider threat could implant hidden logic or backdoors into your LLM, enabling an attacker to control outputs, change inputs, or even make the LLM refuse to say the word “OWASP”. We will enable you to take full control over their local LLMs, even demonstrating how an LLM can be fully and permanently jailbroken in minutes with a CPU rather than with dozens of hours on multiple GPUs. By the end, our audience will be able to make any LLM say whatever they want.
Speakers
avatar for Kasimir Schulz

Kasimir Schulz

Principal Security Researcher, HiddenLayer,
Kasimir Schulz, Principal Security Researcher at HiddenLayer, is a leading expert in uncovering zero-day exploits and supply chain vulnerabilities in AI. His work has been featured in BleepingComputer and Dark Reading, and he has spoken at conferences such as FS-ISAC and Black Hat... Read More →
avatar for Kenneth Yeung

Kenneth Yeung

AI Threat Researcher, HiddenLayer
Kenneth Yeung is an AI Threat Researcher at HiddenLayer, specializing in adversarial machine learning and AI security. He is known for identifying LLM vulnerabilities in AI systems like Google Gemini, and his work has been featured in publications like Forbes and DarkReading. Kenneth... Read More →
Friday May 30, 2025 10:30am - 11:15am CEST
Room 113

11:30am CEST

Restless Guests: From Subscription to Backdoor Intruder
Friday May 30, 2025 11:30am - 12:15pm CEST
Through novel research our team uncovered a critical vulnerability in Azure's guest user model, revealing that guest users can create and own subscriptions in external tenants they've joined—even without explicit privileges. This capability, which is often overlooked by Azure administrators, allows attackers to exploit these subscriptions to expand their access, move laterally within resource tenants, and create stealthy backdoor identities in the Entra directory. Alarmingly, Microsoft has confirmed real-world attacks using this method, highlighting a significant gap in many Azure threat models. This talk will share the findings from this first of its kind research into this exploit found in the wild.

We'll dive into how subscriptions, intended to act as security boundaries, make it possible for any guest to create and control a subscription undermines this premise. We'll provide examples of attackers leveraging this pathway to exploit known attack vectors to escalate privileges and establish persistent access, a threat most Azure admins do not anticipate when inviting guest users. While Microsoft plans to introduce preventative options in the future, this gap leaves organizations exposed to risks they may not even realize exist––but should definitely know about!
Speakers
avatar for Simon Maxwell-Stewart

Simon Maxwell-Stewart

Security Researcher and Data Scientist, BeyondTrust
Simon Maxwell-Stewart is a seasoned data scientist with over a decade of experience in big data environments and a passion for pushing the boundaries of analytics. A Physics graduate from the University of Oxford, Simon began his career tackling complex data challenges and has since... Read More →
Friday May 30, 2025 11:30am - 12:15pm CEST
Room 113

1:15pm CEST

Abusing misconfigurations in CI/CD to hijack apps and clouds
Friday May 30, 2025 1:15pm - 2:00pm CEST
Writing and maintaining secure applications is hard enough, and in today's paradigm with DevOps and CI/CD developers are often tasked with integrating and automating a full code-to-cloud pipeline. This introduces new control plane to application risks. Some of these instances can lead to full compromise if exploited by a threat actor.

In this talk we will break down the core components of a modern CI/CD-workflow such as OIDC, GitHub Actions and Workload Identities. Then we will describe the security properties of these components, and present a threat model for the code-to-cloud flow. Based on this we will showcase and demonstrate common flaws that could lead to full application and cloud compromise.

To increase the capacity of organizations to detect such flaws we will release an open source tool, developed by the presenters, to discover and triage these issues. In the session the tool will be demonstrated and discussed. Attendees will get actionable knowledge and tooling that can be applied when leaving the room. The talk and tool is based on findings and experiences from cloud and application security assessment conducted by the presenters.
Speakers
avatar for Håkon Nikolai Stange Sørum

Håkon Nikolai Stange Sørum

Principal Security Architect and Partner, O3 Cyber
Håkon has extensive knowledge on implementing secure software development practices for modern DevOps teams, designing and implementing cloud security architectures, and securely operating cloud infrastructure. Håkon offers industry insights into the implementation of secure design... Read More →
avatar for Karim El-Melhaoui

Karim El-Melhaoui

Principal Security Architect at O3 Cyber, Microsoft Security MVP, O3 Cyber
Karim is a seasoned and renowned thought leader within cloud security. At O3 Cyber, he conducts research and development and works with our clients, primarily in Financial Industry. Karim has a background in building and operating platform services for security on private and public... Read More →
Friday May 30, 2025 1:15pm - 2:00pm CEST
Room 113

2:15pm CEST

Compromised at the Source: Supply Chain Risks in Open-Source AI
Friday May 30, 2025 2:15pm - 3:00pm CEST
Step into the shadowy world of AI tools and ask yourself: How secure are they? This session dives deep into the architecture of AI models, exposing their most vulnerable points. Moreover, you will learn how malicious actors can weaponize AI, turning powerful tools into threats based on an example of a ‘Malicious Copilot’ IDE plugin. It will reveal how a code-completion model can be trained to embed harmful behavior, target victims, and execute attacks. Finally, you will take home actionable strategies for organizations leveraging generative AI and LLMs, ensuring security isn’t left to chance.
Speakers
avatar for Tal Folkman

Tal Folkman

Security Research Team Lead, Checkmarx
Tal brings over 8 years of experience to her role as a supply chain security research team lead within Checkmarx Supply Chain Security group. She is in charge of detecting tracking and stopping Opensource attacks. linkedin.com/in/tal-folkman/ medium.com/@tal.folk... (blog... Read More →
Friday May 30, 2025 2:15pm - 3:00pm CEST
Room 113

3:30pm CEST

When Regulation Backfires: How a Vulnerable Plugin Led to an XSS Pandemic
Friday May 30, 2025 3:30pm - 4:15pm CEST
What began as a simple WAF bypass challenge on a single website turned into the discovery of a vulnerability affecting thousands of organizations. Join us in the journey of how an accessibility plugin, mandated by regulation, became the perfect vehicle for a widespread XSS vulnerability. We’ll explore the real-world impact of compromised sensitive systems, from government and military to healthcare and finance, showing how a single regulatory requirement led to an ecosystem-wide security breach.

We’ll also analyze the plugin’s source code to understand how and why this XSS vulnerability occurs, along with a behavior analysis that suggests the plugin may also be tracking users without consent, indicating potential malicious intent. Additionally, we’ll share the methodology and tools used to uncover and validate these vulnerabilities at scale.
Speakers
avatar for Eilon Cohen

Eilon Cohen

Security Analyst, Checkmarx Research
That kid who took apart all his toys to see how they worked.Currently breaking (and fixing) things in the Research group at Checkmarx. Educational spans from Mechanical Engineering and Robotics to Computer science, but a self-made security personnel. Ex-IBM as a security engineer... Read More →
avatar for Ori Ron

Ori Ron

Senior AppSec Researcher, Checkmarx
Ori Ron is a Senior Application Security Researcher at Checkmarx with over 8 years of experience. He works to find and help fix security vulnerabilities and enjoys sharing security knowledge through talks and write-ups. linkedin.com/in/ori-ron-40099912b/ checkmarx.com/author/or... Read More →
Friday May 30, 2025 3:30pm - 4:15pm CEST
Room 113
 
Share Modal

Share this link via

Or copy link

Filter sessions
Apply filters to sessions.