Podcast

Continuous Red Teaming for AI: Insights from OWASP Experts - GenAI Security Ep.6

Discover how AI red teaming must evolve for agents, RAG, and multimodal AI apps in this podcast episode with SplxAI founders and OWASP's Aubrey King.

Aubrey King – OWASP
Kristian Kamber – SplxAI
Ante Gojsalić – SplxAI

DATE: Apr 5, 2025
TIME & LENGTH: 27 min
STATUS: Available on demand
LANGUAGE: English


In this episode of the OWASP GenAI Security Podcast, host Aubrey King sits down with SplxAI co-founders Kristian Kamber and Ante Gojsalić to discuss the evolving role of red teaming in AI security. As generative AI systems become more autonomous, using tools, writing code, and making real-time decisions, relying on black-box testing alone is no longer enough.

The discussion highlights why continuous and automated red teaming is essential for proactively identifying vulnerabilities like jailbreaks, data poisoning, and harmful outputs. With architectures like retrieval-augmented generation (RAG), agentic frameworks, and multimodal systems introducing new layers of complexity, red teaming must keep evolving alongside them.

From scaling testing workflows to anticipating real-world threats, this episode offers practical insights for practitioners looking to adopt AI with confidence — while ensuring compliance, resilience, and reduced risk across the lifecycle of their AI deployments.

Securing the Future of AI: Why Red Teaming Must Evolve With the Tech It Protects

Continuous Red Teaming is Essential: As AI applications grow more complex, ongoing red teaming becomes crucial to proactively identify and mitigate emerging vulnerabilities.

Automation Enhances Security Testing: Implementing automated red teaming workflows allows organizations to scale their security testing and keep pace with the rate at which new AI vulnerabilities emerge.

Addressing Unique Risks in Advanced AI Systems: Security challenges in retrieval-augmented generation (RAG), multimodal systems, and agentic frameworks require special attention, as traditional black-box testing must evolve into more adaptive gray-box approaches.


Deploy secure AI Assistants and Agents with confidence.

Don’t wait for an incident to happen. Proactively identify and remediate your AI's vulnerabilities to ensure you're protected at all times.
