Podcast
GenAI Security: Unseen Attack Surfaces & AI Pentesting Lessons
Explore unseen GenAI threats and AI pentesting insights in this podcast. Learn best practices to secure AI systems against evolving threats and vulnerabilities.
Anton Chuvakin
Ante Gojsalić
DATE
Nov 11, 2024
TIME & LENGTH
27 min
STATUS
Available on demand
LANGUAGE
English
The 198th episode of the Google Cloud Security Podcast features host Anton Chuvakin, Senior Security Staff in the Office of the CISO at Google, and Ante Gojsalić, Co-Founder and CTO at SplxAI. Together they take a deep dive into the unseen attack surfaces and pentesting lessons of GenAI security. Main topics covered in this episode include: the unique attack surfaces of LLM applications, evolving threats, the most exploited GenAI risks, common mistakes enterprises make (drawn from previous customer engagements), and lessons learned from automating pentesting and red teaming of GenAI applications.
Simulating many different conversational scenarios plays a key role in ensuring secure interactions with your GenAI app
Traditional pentesting and red teaming, typically performed only a few times a year, are not sufficient for GenAI applications: because of their non-deterministic nature, these applications must be re-tested continuously after any update to the model or an attached database.
Automated assessments can help determine whether AI firewalls are misconfigured and overly strict, which leads to poor user experiences and wasted resources.
Manual pentesting of GenAI systems requires significant time and resources. The SplxAI platform discovers up to 95% of the AI risk surface through automation, significantly speeding up time to production.