Loading…
Venue: Room 113 clear filter
arrow_back View All Dates
Thursday, May 29
 

10:30am CEST

Leveraging AI for Secure React Development with Effective Prompt Engineering
Thursday May 29, 2025 10:30am - 11:15am CEST
Practical and usable advice on how to harness the power of AI to create secure React applications by using prompt engineering best practices. We will discuss practical methods for guiding AI models to produce safe, high-quality React code that reduces common vulnerabilities, such as cross-site scripting (XSS) and injection flaws.

Attendees will learn foundational techniques for crafting precise prompts, incorporating secure coding patterns, and validating AI-generated outputs.

By the end of this session, you will be equipped with actionable steps to integrate AI-driven development into your workflow and strengthen the overall security of your React and other software projects.
Speakers
avatar for Jim Manico

Jim Manico

Founder, Manicode Security
Jim Manico is the founder of Manicode Security, where he trains software developers on secure coding and security engineering. He is also an investor/advisor for 10Security, Aiya, MergeBase, Nucleus Security, KSOC, and Inspectiv. Jim is a frequent speaker on secure software practices... Read More →
Thursday May 29, 2025 10:30am - 11:15am CEST
Room 113

11:30am CEST

Hacking Your Enterprise Copilot: A Direct Guide to Indirect Prompt Injections
Thursday May 29, 2025 11:30am - 12:15pm CEST
Enterprise copilots, from Microsoft Copilot to Salesforce’s Einstein, are adopted by every major enterprise. Grounded into your personal enterprise data they offer major productivity gains. But what happens when they get compromised? And how exactly can that happen?

In this talk we will see how we can turn these trusted enterprise AI assistants into our own malicious insiders within the victim organization. Spreading misinformation, tricking innocent employees into making fatal mistakes, routing users to our phishing sites, and even directly exfiltrating sensitive data!

We’ll go through the process of building these attack techniques from scratch, presenting a mental framework for how to hack any enterprise copilot, no prior experience needed. Starting from system prompt extraction techniques to crafting reliable and robust indirect prompt injections (IPIs) using our extracted system prompt. Showing a step by step process of how we arrived at each of the results we’ve mentioned above, and how you can replicate them to any enterprise copilot of your choosing.

To demonstrate the efficacy of our methods, we will use Microsoft Copilot as our guinea pig for the session, seeing how our newly found techniques manage to circumvent Microsoft’s responsible AI security layer.

Join us to explore the unique attack surface of enterprise copilots, and learn how to harden your own enterprise copilot to protect against the vulnerabilities we were able to discover.
Speakers
avatar for Tamir Ishay Sharbat

Tamir Ishay Sharbat

Software Engineer and Security Researcher, Zenity
Tamir Ishay Sharbat is a software engineer and security researcher with a particular passion for AI security. His current focus is on identifying vulnerabilities in enterprise AI products such as Microsoft Copilot, Microsoft Copilot Studio, Salesforce Einstein, Google Gemini and more... Read More →
Thursday May 29, 2025 11:30am - 12:15pm CEST
Room 113

1:15pm CEST

Beyond the Surface: Exploring Attacker Persistence Strategies in Kubernetes
Thursday May 29, 2025 1:15pm - 2:00pm CEST
Kubernetes has been put to great use by a wide variety of organizations to manage their workloads, as it hides away a lot of the complexity of managing and scheduling containers. But with each added layer of abstraction, there can be new places for attackers to hide in darkened corners.

This talk will examine how attackers can (ab)use little known features of Kubernetes and the components that are commonly deployed as part of cloud-native containerized workloads to persist in compromised systems, sometimes for years at a time. We'll also pinpoint places where, if you don't detect the initial attack, it might be very difficult to spot the attacker lurking in your cluster.

  rorym@mccune.org.uk
 linkedin.com/in/rorym/
 raesene.github.io (blog)
 datadoghq.com (company)
 infosec.exchange/@raesene (Mastodon)
 bsky.app/profile/m... (Bluesky )
Speakers
avatar for Rory McCune

Rory McCune

Senior Advocate, Datadog
Rory is a senior advocate for Datadog who has extensive experience with Cyber security and Cloud native computing. In addition to his work as a security reviewer and architect on containerization technologies like Kubernetes and Docker he has presented at Kubecon EU and NA, as well... Read More →
Thursday May 29, 2025 1:15pm - 2:00pm CEST
Room 113

2:15pm CEST

Builders and Breakers: A Collaborative Look at Securing LLM-Integrated Apps
Thursday May 29, 2025 2:15pm - 3:00pm CEST
As Large Language Models (LLMs) become an integral part of modern applications, they not only enable new functionalities but also introduce unique security vulnerabilities. In this collaborative talk, we bring together two perspectives: a builder who has experience developing and defending LLM-integrated apps, and a penetration tester who specialises in AI red teaming. Together, we’ll dissect the evolving landscape of AI security.

On the defensive side, we’ll explore strategies like prompt injection prevention, input validation frameworks, and continuous testing to protect AI systems from adversarial attacks. From the offensive perspective, we’ll showcase how techniques like data poisoning and prompt manipulation are used to exploit vulnerabilities, as well as the risks tied to generative misuse that can lead to data leaks or unauthorised actions.

Through live demonstrations and real-world case studies, participants will witness both the attack and defence in action, gaining practical insights into securing AI-driven applications. Whether you’re developing AI apps or testing them for weaknesses, you’ll leave this session equipped with actionable knowledge on the latest methods for protecting LLM systems. This collaborative session offers a comprehensive look into AI security, combining the expertise of two professionals with distinct backgrounds - builder and breaker.
Speakers
avatar for Javan Rasokat

Javan Rasokat

Senior Application Security Specialist, Sage
Javan is a Senior Application Security Specialist at Sage, helping product teams enhance security throughout the software development lifecycle. On the side, he lectures Secure Coding at DHBW University in Germany. His journey as an ethical hacker began young, where he began to automate... Read More →
avatar for Rico Komenda

Rico Komenda

Senior Security Consultant, adesso SE
Rico is a senior security consultant at adesso SE. His main security areas are in application security, cloud security, offensive security and AI security.For him, general security intelligence in various aspects is a top priority. Today’s security world is constantly changing and... Read More →
Thursday May 29, 2025 2:15pm - 3:00pm CEST
Room 113

3:30pm CEST

To BI or Not to BI? Data Leakage Tragedies with Power BI Reports
Thursday May 29, 2025 3:30pm - 4:15pm CEST
In this session, we will expose a major data leakage vulnerability in Microsoft Fabric (Power BI) that has already affected tens of thousands of reports, putting thousands of enterprises and organizations at risk. We’ll demonstrate how a Power BI report viewer, especially for reports published to the web, can access unintended data by manipulating API requests to reveal the underlying data model.

We will also showcase PBAnalyzer, an open-source tool to help organizations identify their exposure, and unveil a new attack vector: DAX Injection. This vulnerability stems from improper handling of variables in DAX queries, which we will demonstrate using a Power Automate flow that leaks sensitive data to an external anonymous user.

The session will conclude with actionable steps to secure Power BI reports and prevent unnecessary data exposure.
Speakers
avatar for Uriya Elkayam

Uriya Elkayam

Senior Security Researcher, Nokod Security
Uriya Elkayam is a senior security researcher at Nokod Security. His research focuses on application security aspects of low-code/ No-code platforms such as MS Power Platform, UiPath, and OutSystems. He has a passion for both finding vulnerabilities and new mitigation techniques... Read More →
Thursday May 29, 2025 3:30pm - 4:15pm CEST
Room 113
 
Share Modal

Share this link via

Or copy link

Filter sessions
Apply filters to sessions.
Filtered by Date -