Podcast

GenAI Security: Unseen Attack Surfaces & AI Pentesting Lessons

Explore unseen GenAI threats and AI pentesting insights in this podcast. Learn best practices to secure AI systems against evolving threats and vulnerabilities.

Google Cloud - Anton Chuvakin
Google Cloud - Anton Chuvakin

Anton Chuvakin

SplxAI - Ante Gojsalic
SplxAI - Ante Gojsalic

Ante Gojsalić

SplxAI - Date
SplxAI - Date

DATE

Nov 11, 2024

SplxAI - Time
SplxAI - Time

TIME & LENGTH

27 min

SplxAI - Status
SplxAI - Status
SplxAI - Status

STATUS

Available on demand

SplxAI - Language
SplxAI - Language
SplxAI - Language

LANGUAGE

English

Google Cloud Security - SplxAI Podcast Cover
Google Cloud Security - SplxAI Podcast Cover
Google Cloud Security - SplxAI Podcast Cover

The 198th episode of the Google Cloud Security Podcast features host Anton Chuvakin, Senior Security Staff of the CISO office at Google, and Ante Gojsalić, Co-Founder and CTO at SplxAI and takes a deep dive into the unseen attack surfaces and pentesting lessons in GenAI security. Some of the main topics covered in this podcast include: the unique attack surfaces of LLM applications, evolving threats, the most exploited GenAI risks, common mistakes made by enterprises from previous experiences with customers, and lessons learned from automating pentesting and red teaming of GenAI applications.

Simulating many different conversational scenarios plays a key role in ensuring secure interactions with your GenAI app

Traditional pentesting and red teaming, usually performed a few times yearly, are not sufficient for GenAI applications due to their non-deterministic nature and their need to be continuously tested after any update to the model or attached database.

Automated assessments can help determine whether AI firewalls are misconfigured and too strict, which can lead to bad user experiences and a waste of resources.

Manual pentesting of GenAI systems requires lots of time and resources. The SplxAI platform discovers up to 95% of the AI risk surface through automation and significantly speeds up time to production.

Available on demand

Available on demand

Available on demand

Supercharged security for your AI systems

Don’t wait for an incident to happen. Make sure your AI apps are safe and trustworthy.

SplxAI - Background Pattern

Supercharged security for your AI systems

Don’t wait for an incident to happen. Make sure your AI apps are safe and trustworthy.

Supercharged security for your AI systems

Don’t wait for an incident to happen. Make sure your AI apps are safe and trustworthy.

SplxAI - Background Pattern
SplxAI - Accelerator Programs
SplxAI Logo

For a future of safe and trustworthy AI.

Subscribe to our newsletter

By clicking "Subscribe" you agree to our privacy policy.

SplxAI Logo

For a future of safe and trustworthy AI.

Subscribe to our newsletter

By clicking "Subscribe" you agree to our privacy policy.

SplxAI - Accelerator Programs
SplxAI Logo

For a future of safe and trustworthy AI.

Subscribe to our newsletter

By clicking "Subscribe" you agree to our privacy policy.