I am the Head of the Generative AI Trust and Security Research team at Fujitsu Research of Europe, where I lead efforts to enhance the security, trustworthiness, and resilience of Generative AI systems. My work focuses on bridging the gap between AI security, red-teaming methodologies, and enterprise risk management, ensuring that AI technologies remain both innovative and secure.
With a background in cybersecurity, AI governance, and system architecture, I have helped develop AI security frameworks, vulnerability assessment tools, and guardrail mechanisms that protect LLMs and AI-driven applications. My research explores the evolving threat landscape of AI agents and workflows, emphasizing proactive defense strategies to mitigate jailbreak attacks, adversarial manipulation, and data leakage risks.
I am actively engaged in AI security research communities and regularly contribute to academic and industry conferences. My recent work, including comparative analyses of open-source LLM vulnerability scanners and AI guardrails, has been recognized at top venues such as the IEEE/ACM International Conference on Software Engineering (ICSE 2025).
linkedin.com/in/romanva/