As AI-powered coding assistants become more prevalent, developers are increasingly incorporating AI-generated code snippets into their projects...
But how reliable and secure is that code?
The latest DORA report made it clear: there is a 7.2% drop in delivery stability for every 25% increase in AI adoption!
In this webinar, we’ll explore the risks AI-generated code poses to development environments. From quality issues to outright security vulnerabilities, we’ll break down real-world examples of how AI-generated code can introduce security flaws and compliance issues, and reduce overall code quality.
Key takeaways include:
• The code-quality and security pitfalls of AI-generated code.
• How blind implementation of GenAI code can lead to increased technical and security debt.
• Why traditional reviews often fail to catch AI-induced issues.
• Best practices for validating, securing, and integrating AI-generated code responsibly.
Join us for an engaging session where we go beyond the hype and take a critical look at the risks, while offering actionable strategies to mitigate them, without stifling innovation!
Chief Executive Officer
@ Symbiotic Security
Jérôme Robert is the co‑founder and CEO of Symbiotic Security, an AI‑driven cybersecurity startup focused on embedding security coaching directly into developers’ workflows. With over 20 years of experience in cybersecurity and more than 15 years in C‑level leadership, he began his career in mathematics and engineering before transitioning to business strategy and innovation.
Chief Executive Officer
@ Packmind
Laurent Py is an entrepreneur, GM and product leader specializing in SaaS and AI, with over 20 years of experience building high‑performing engineering teams and scaling software products. As CEO of Packmind, he drives innovation at the intersection of generative AI and enterprise solutions, regularly sharing his expertise on Medium’s Packmind publication.
© 2025 D2 Emerge LLC.