Podcast

AI Security – Can We Unpack The Blackbox?

Join top experts live at RSA as they unpack real-world AI security risks, broken standards, and what it takes to secure AI beyond the black box.

Ante Gojsalić

Reet Kaur

Savanah Frisk

John Sotiropoulos

DATE

Apr 28, 2025

TIME & LENGTH

2:00pm PDT – 45 min

STATUS

Upcoming

LANGUAGE

English

The Elephant in AppSec – AI Security

In this live episode of The Elephant in AppSec Podcast, recorded at RSA Conference 2025, industry experts tackle one of the biggest questions in the field: is AI security a black box problem? The panel features Ante Gojsalić, CTO & Co-Founder of SplxAI; Reet Kaur, former CISO and GRC leader; Savanah Frisk, AI Safety and Security Lead at Snap Inc.; and John Sotiropoulos, OWASP GenAI Security Project Co-Lead and Sr. Security Architect at Kainos. Together they explore the limitations of current AI security tools and standards – and what it will really take to secure increasingly complex and autonomous AI systems.

The discussion will explore why many AI security promises fall short, how organizations can implement effective AI security posture management, and which vulnerabilities are most critical to address today. With architectures like retrieval-augmented generation (RAG), agentic workflows, and multimodal AI systems on the rise, security teams must rethink how they assess risk and ensure integrity at scale.

Whether you're building AI or securing it, this episode will deliver real-world insights from practitioners, tool builders, and policy shapers working on the front lines of AI security.

What You’ll Learn About Securing AI in the Age of Autonomy

Why AI Red Teaming Needs to Go Beyond the Prompt: Discover how traditional testing methods fall short for agentic AI systems — and why continuous, automated red teaming is essential for identifying real-world exploits like jailbreaks, data leakage, and alignment failures.

How to Strengthen Runtime Security for Dynamic AI Systems: Learn what runtime security looks like in modern GenAI environments, and how to monitor, manage, and defend systems that use real-time decision-making, external tools, and live data sources.

Building a Resilient AI Security Posture Across the Stack: Explore practical strategies for AI security posture management, from policy to implementation — including how to assess and harden systems using RAG, agentic frameworks, and multimodal models.

Coming up soon

Deploy secure AI Assistants and Agents with confidence.

Don’t wait for an incident to happen. Proactively identify and remediate your AI's vulnerabilities to ensure you're protected at all times.


For a future of safe and trustworthy AI.

Subscribe to our newsletter

By clicking "Subscribe" you agree to our privacy policy.
