Loading…
Venue: Room 114 clear filter
Thursday, May 29
 

10:30am CEST

False Positives, Begone! Harnessing AI for Efficient SAST Triage
Thursday May 29, 2025 10:30am - 11:15am CEST
False positives are one of the biggest pain points in running a Static Application Security Testing (SAST) program. While SAST tools are valuable for identifying security issues in a codebase—flagging critical vulnerabilities like Remote Code Execution and SQL Injection—they often generate significant noise due to their lack of contextual awareness. SAST testing is relatively easy to set up, requires no accounts or credentials, and can uncover issues in multi-step processes that would be difficult to detect with dynamic security testing. However, the high volume of false positives leads to alert fatigue and demands considerable effort to triage, making it challenging to identify the relatively small number of true vulnerabilities.

This research addresses that challenge by combining Program Analysis with Large Language Models (LLMs) to simulate the manual triage process for SAST findings. Our approach leverages a carefully designed LLM agent that enhances context around vulnerable code, identifies conditions that make exploitation infeasible, and determines whether a clear execution path exists from a user-controlled input to the vulnerable line flagged by SAST.

We will demonstrate this novel approach in action, showcasing how it can be integrated with any SAST tooling to streamline triage. By reducing false positives and prioritizing actionable findings, this method allows security engineers and developers to focus on the vulnerabilities that truly matter.
Speakers
avatar for Elliot Ward

Elliot Ward

Staff Security Researcher, Snyk Security Labs
Elliot is a Staff security researcher at software security company Snyk. He has a background in software engineering and application security. securitylabs.snyk.io (blog)securitylabs.snyk.io (company... Read More →
Thursday May 29, 2025 10:30am - 11:15am CEST
Room 114

11:30am CEST

Emerging Frontiers: Ransomware Attacks in AI Systems
Thursday May 29, 2025 11:30am - 12:15pm CEST
This session will delve into the convergence of ransomware and Artificial Intelligence/Machine Learning (AI/ML) systems, providing attendees with a comprehensive understanding of the evolving ransomware landscape in AI environments. The presentation will cover:

The progression of ransomware from traditional attacks to AI-driven variants.
Vulnerabilities in AI/ML systems, such as supply chains, models, and training pipelines, that adversaries can exploit for ransomware attacks.
Real-world examples of potential ransomware exploits in predictive AI (e.g., OWASP ML06: 2023 ML Supply Chain Attacks) and generative AI (e.g., OWASP LLM06: Excessive Agency).
Practical strategies and AI-driven solutions to detect, protect against, and mitigate ransomware threats.

Attendees will gain actionable insights into adapting traditional ransomware defenses to safeguard modern AI infrastructures and explore open challenges in standardizing defenses for AI/ML systems. The session will also provide references to OWASP frameworks and insights from the OWASP AI Exchange.
Speakers
avatar for Behnaz Karimi

Behnaz Karimi

Senior Cyber Security Analyst, Accenture
Behnaz Karimi is a Senior Cyber Security Analyst at Accenture and a Co-Author and Co-Lead of OWASP AI Exchange, where she also serves as the Lead for AI Red Teaming. She has actively contributed to OWASP initiatives, including participating in the development of the GenAI Red Teaming... Read More →
avatar for Yuvaraj Govindarajulu

Yuvaraj Govindarajulu

Head of Research, AIShield (Powered by Bosch)
Yuvaraj Govindarajulu is a dynamic technical leader with over a decade of experience in AI, Cybersecurity and Embedded Systems R&D. He is the Head of Research at AIShield, a startup of Bosch with a mission to secure AI systems of the world, from development to deployment. His key... Read More →
Thursday May 29, 2025 11:30am - 12:15pm CEST
Room 114

1:15pm CEST

From Prompt to Protect: LLMs as Next-Gen WAF's
Thursday May 29, 2025 1:15pm - 2:00pm CEST
When exploring the use of Large Language Models (LLMs) in application security, a new frontier emerges for Web Application Firewalls (WAFs). Traditionally, WAFs operate on structured rules to detect and block application attacks, but what if we could leverage the unique capabilities of an LLM? In this talk, we will delve into the potential of using LLMs as WAFs, evaluating their strengths, challenges, and implications.

During this talk attendees will learn how existing applications may need to evolve to align with LLM capabilities, as well as discussing how LLMs can not only help detect threats and reduce false positives but also adapt better to zero-day vulnerabilities.

Through live demonstrations and a practical breakdown of potential architectures, this talk will equip attendees with actionable insights into how LLMs can transform application security while addressing the challenges they bring to the table.
Speakers
avatar for Juan Berner

Juan Berner

Principal Security Engineer, Booking.com
Juan Berner is a security researcher with over 13 years of experience in the field, currently working as a Principal Security Engineer at Booking.com, as SME for Application Security and Architect for security solutions.He has given talks in the past on how to build an open source... Read More →
Thursday May 29, 2025 1:15pm - 2:00pm CEST
Room 114

2:15pm CEST

Living the SBOM life - the good, the bad and the evil parts
Thursday May 29, 2025 2:15pm - 3:00pm CEST
The Software Bill of Materials (SBOM) are in the limelight as the silver bullet for many things - open source license compliance, vulnerability management, copyright management, identifying technical debt and the path towards a healthy, secure and legislation-certified happy state of a binary life. But behind all this marketing and makeup is a fairly simple syntax and a lot of missing pieces in the puzzle. Let’s dive into the SBOM lifestyle together and look at the current status, the hopes and the vision for a toolset with less hype, but more real benefits for compliance, developers, product managers, with a chance of being a workhorse in risk management as well as the automatic vulnerability management toolchain. Help us make the SBOM dream come true, listen to the talk and then walk the SBOM walk!
Speakers
avatar for Olle E. Johansson

Olle E. Johansson

Leader OWASP Project Koala, Edvina AB
Olle E. Johansson is an experienced and appreciated speaker, teacher as well as an Open Source developer and consultant. He is currently project lead for OWASP Project Koala - developing the Transparency Exchange API (TEA), member of the CycloneDX industry working group, the OWASP... Read More →
avatar for Anthony Harrison

Anthony Harrison

Founder and Director, APH10
I am the Founder and Director of APH10 which helps organisations more efficiently manage software risks in their applications, in particular risks from vulnerabilities in 3rd party components and compliance with open-source licences.Has been an active member of the open source community... Read More →
Thursday May 29, 2025 2:15pm - 3:00pm CEST
Room 114

3:30pm CEST

Current challenges of GraphQL security
Thursday May 29, 2025 3:30pm - 4:15pm CEST
GraphQL’s capability to fetch precisely what’s needed and nothing more, its efficient handling of real-time data, and its ease of integration with modern architectures make it a compelling choice for modern web and mobile applications. As developers seek more efficiency and better performance from their applications, GraphQL is increasingly becoming the go-to technology for API development. However, building and maintaining GraphQL applications requires careful consideration of security.

In this talk, security engineers will strengthen their GraphQL security skills by learning key techniques such as complexity management, batching, aliasing, sanitization, and depth limit enforcement. They will also learn to implement customizable middleware with their development team, like GraphQL Armor, for various GraphQL server engines.

Participants will explore different techniques and packages, and apply them to enhance the safety of their GraphQL applications. By the end of the talk, attendees will be equipped with practical knowledge to build secure and efficient GraphQL APIs.
Speakers
avatar for Maxence Lecanu

Maxence Lecanu

Technical Lead, Escape
Maxence is Technical Lead at Escape, where, as a founding engineer, he played a key role in shaping the platform from the ground up—helping security teams detect and mitigate business logic vulnerabilities at scale. With over 6 years of experience across software engineering and... Read More →
avatar for Antoine Carossio

Antoine Carossio

Cofounder & CTO, Escape.tech
Former pentester for the French Intelligence Services.Former Machine Learning Research @ Apple. linkedin.com/in/acarossio/ escape.tech (company) @iCarossio escape.tech (blog... Read More →
Thursday May 29, 2025 3:30pm - 4:15pm CEST
Room 114
 
Friday, May 30
 

10:30am CEST

Think Before You Prompt: Securing Large Language Models from a Code Perspective
Friday May 30, 2025 10:30am - 11:15am CEST
As Large Language Models (LLMs) become integral to modern applications, securing them at the code level is critical to preventing prompt injection attacks, poisoned models, unauthorized modifications, and other vulnerabilities. This talk delves into common pitfalls and effective mitigations when integrating LLMs into software systems, whether working with cloud vendors or hosting your own models. By focusing on LLM security from a developer's perspective rather than runtime defenses, we emphasize a shift-left approach—embedding security early in the software development lifecycle to proactively mitigate threats and minimize risks before deployment.

We'll examine practical security challenges faced during LLM integration, including input sanitization, output validation, and model pinning. Through detailed code examples and a live demonstration of model tampering, attendees will witness firsthand how attackers can exploit inadequate security controls to compromise LLM systems. The demonstration will showcase a real-world scenario where a legitimate model is swapped with a malicious one, highlighting the critical importance of robust model integrity verification and secure deployment practices.

Participants will learn concrete implementation patterns and security controls that can prevent such attacks, with practical code samples they can apply to their own projects. The session will cover essential defensive techniques including proper API key management, secure model loading and validation, and safe handling of sensitive data in prompts. Whether you're building applications using cloud-based LLM services or deploying your own models, you'll leave with actionable code-level strategies to enhance your application's security posture and protect against emerging AI-specific threats.
Speakers
avatar for Yaron Avital

Yaron Avital

Security Researcher, Palo Alto Networks
Yaron Avital is a seasoned professional with a diverse background in the technology and cybersecurity fields. Yaron's career has spanned over 15 years in the private sector as a software engineer and team lead at global companies and startups.Driven by a passion for cybersecurity... Read More →
avatar for Tomer Segev

Tomer Segev

Security Researcher, Palo Alto Networks
 Tomer Segev is a cybersecurity professional with a strong background in software development and security research. He began his career at 17 as a developer before serving as a cyber researcher in the top cyber unit of the IDF, where he gained hands-on experience in the most advanced... Read More →
Friday May 30, 2025 10:30am - 11:15am CEST
Room 114

11:30am CEST

Surviving prioritisation when CVE stands for “Customer Very Enthusiastic"
Friday May 30, 2025 11:30am - 12:15pm CEST
Everybody talks about problems with the width of CVE space - too many, coming too fast, how to prioritise them. This talk takes the problem into 3D - let’s talk about the depth of the space!

How a single medium risk CVE can consume crazy amounts of time of an AppSec team?

We will look into couple of examples of CVEs in a product that my team protects and trace their journey through the ecosystem. On the journey we will meet various dragons, hydras, and other dangerous creatures:

- LLM-empowered scanners hallucinating CVSS scores, packages, versions, anything;
- Good research teams making mistakes translating between different versions of CVSS
- Glory-chasing “research teams” writing their own advisories for no apparent reason
- Consensus based approach in CVE ecosystem guarantees security team cannot sleep until EVERY scanner has calmed down;
- And my favourite troll under the bridge: customers saying “I don’t care it’s not reachable in your context, I can’t deploy your product until my scanner is happy”.

The soundtrack for the quest is provided by the vendors continuously messaging you with fantastic promises to solve everything.

Can your character survive the quest and what loot do you need?
Speakers
avatar for Irene Michlin

Irene Michlin

Application Security Lead, Neo4j
Irene Michlin is an application security lead at Neo4j. Before going into application security, Irene worked as software engineer, architect, and technical lead at companies ranging from startups to corporate giants. Her professional interests include securing development life-cycles... Read More →
Friday May 30, 2025 11:30am - 12:15pm CEST
Room 114

1:15pm CEST

Signing is Sassy, but CI/CD Security Pays the Bills
Friday May 30, 2025 1:15pm - 2:00pm CEST
This talk is primarily aimed at AppSec practitioners, DevOps & SecOps Engineers as well as Makers and Breakers. If this is not you but you have a professional interest in CI/CD and Security then we’d love you to join us.

Modern software development practices rely entirely on CI/CD systems to deliver change at scale and speed. These systems are highly privileged environments with many actors and entities ( internal, external, human, machine ), and known attack vectors. The risk of compromise is severe because attacks can easily go undetected for extended dwell times resulting in an exponential blast radius. Just ask SolarWinds.

Now that we’ve set the scene it’s time to buckle up because we’re going to share what we’ve learnt, what can be done and what is the art of the possible. And what might the future look like.

This talk will focus on what good security looks like for CI/CD systems and lessons from the field. Spoiler: It’s challenging at scale because security solutions aren’t keeping pace. We will talk about our journey navigating complex CI/CD setups, where we recognise ways these systems can be exploited, and propose ways to tackle with some of the challenges. We’ll also see how signing could get us closer to securing the DevOps environment.

We’ll talk about the need to balance security with engineering imperatives. Enhancing your security posture is an investment that draws down on precious engineering resource, acting as a drag on productivity and cadence. Therefore, expect engineering functions to challenge it, hard and rightly so. Being able to influence key stakeholders so that they are onboard and committed is a must – we’ll show you how we approach this.

This talk will help you prepare for those tough conversations. At the end of the talk we want you to understand how to build a business case for CI/CD Security adoption in your organisation including how to implement in your workplace. The starting point is knowing how much risk your organisation’s build environment is exposed to and how much is tolerable.
Speakers
avatar for Patricia R.

Patricia R.

Root
Automation, innovation and correctness. Three principles constantly on my mind.Working in security consultancy and engineering, endeavoring in exciting projects. Strive to deliver impact and change in the realms of cloud (security), identity and architecture. @ytimyno linkedin.co... Read More →
avatar for Chris Snowden

Chris Snowden

Enterprise Security Architect
Accidental Application Security Architect! Software Engineer by trade. linkedin.com/in/csn0wden/
Friday May 30, 2025 1:15pm - 2:00pm CEST
Room 114

2:15pm CEST

GenAI Security - Insights and Current Gaps in OS LLM Vulnerability scanners and Guardrails
Friday May 30, 2025 2:15pm - 3:00pm CEST
As Large Language Models (LLMs) become integral to various applications, securing them against evolving threats—such as **information leakage, jailbreak attacks, and prompt injection—**remains a critical challenge. This presentation provides a comparative analysis of open-source vulnerability scanners—Garak, Giskard, PyRIT, and CyberSecEval—that leverage red-teaming methodologies to uncover these risks. We explore their capabilities, limitations, and design principles, while conducting quantitative evaluations that expose key gaps in their ability to reliably detect attacks.

However, vulnerability detection alone is not enough. Proactive security measures, such as AI guardrails, are essential to mitigating real-world threats. We will discuss how guardrail mechanisms—including **input/output filtering, policy enforcement, and real-time anomaly detection—**can complement scanner-based assessments to create a holistic security approach for LLM deployments. Additionally, we present a preliminary labeled dataset, aimed at improving scanner effectiveness and enabling more robust guardrail implementations.

Beyond these tools, we will share our experience in developing a comprehensive GenAI security framework at Fujitsu, designed to integrate both scanning and guardrail solutions within an enterprise AI security strategy. This framework emphasizes multi-layered protection, balancing LLM risk assessments, red-teaming methodologies, and runtime defenses to proactively mitigate emerging threats.

Finally, based on our findings, we will provide strategic recommendations for organizations looking to enhance their LLM security posture, including:

Selecting the right scanners for red-teaming and vulnerability assessments
Implementing guardrails to ensure real-time policy enforcement and risk mitigation
Adopting a structured framework for securing GenAI systems at scale
This session aims to bridge theory and practice, equipping security professionals with actionable insights to fortify LLM deployments in real-world environments.
Speakers
avatar for Roman Vainshtein

Roman Vainshtein

Head of the GenAI Trust, Fujitsu Research of Europe
I am the Head of the Generative AI Trust and Security Research team at Fujitsu Research of Europe, where I lead efforts to enhance the security, trustworthiness, and resilience of Generative AI systems. My work focuses on bridging the gap between AI security, red-teaming methodologies... Read More →
Friday May 30, 2025 2:15pm - 3:00pm CEST
Room 114

3:30pm CEST

Know Thy Judge: Uncovering Vulnerabilities of AI Evaluators
Friday May 30, 2025 3:30pm - 4:15pm CEST
Current methods for evaluating the safety of Large Language Models (LLMs) risk creating a false sense of security. Organizations deploying generative AI often rely on automated “judges” to detect safety violations like jailbreak attacks, as scaling evaluations with human experts is impractical. These judges—typically built with LLMs—underpin key safety processes such as offline benchmarking and automated red-teaming, as well as online guardrails designed to minimize risks from attacks. However, this raises a crucial question of meta-evaluation: can we trust the evaluations provided by these evaluators?

In this talk, we examine how popular LLM-as-judge systems were initially evaluated—typically using narrow datasets, constrained attack scenarios, and limited human validation—and why these approaches can fall short. We highlight two critical challenges: (i) evaluations in the wild, where factors like prompt sensitivity and distribution shifts can affect performance, and (ii) adversarial attacks that target the judges themselves. Through practical examples, we demonstrate how minor changes in data or attack strategies that do not affect the underlying safety nature of the model outputs can significantly reduce a judge’s ability to assess jailbreak success.

Our aim is to underscore the need for rigorous threat modeling and clearer applicability domains for LLM-as-judge systems. Without these measures, low attack success rates may not reliably indicate robust safety, leaving deployed models vulnerable to unseen risks.
Speakers
avatar for Francisco Girbal Eiras

Francisco Girbal Eiras

Machine Learning Research Scientist, DynamoAI
Francisco is an ML Research Scientist at Dynamo AI, a leading startup building enterprise solutions that enable private, secure, and compliant generative AI systems. He earned his PhD in trustworthy machine learning from the University of Oxford as part of the Autonomous Intelligent... Read More →
avatar for Eliott Zemour

Eliott Zemour

Senior ML Research Engineer, Dynamo AI
DR

Dan Ross

Head of AI Compliance Strategy, Dynamo AI
Dan Ross, Head of AI Compliance Strategy at Dynamo AI, focuses on aligning artificial intelligence, policy, risk and security management, and business application. Prior to Dynamo, Dan spent close to a decade at Promontory Financial Group, a premier risk and regulatory advisory firm... Read More →
Friday May 30, 2025 3:30pm - 4:15pm CEST
Room 114
 
Share Modal

Share this link via

Or copy link

Filter sessions
Apply filters to sessions.