Podcast

AI Security is Coming: How can we protect GenAI Apps from Cyber Crime?

Discover how to secure GenAI apps against cybercrime, address emerging AI security risks, and implement proactive "Security by Design" strategies.


Michael Döhmen


Kristian Kamber


DATE

Nov 14, 2024


TIME & LENGTH

42 min


STATUS

Available on demand


LANGUAGE

German


This episode of the Cybersecurity Basement Podcast, featuring Michael Döhmen, CMO at SureSecure GmbH, and Kristian Kamber, Co-Founder and CEO at SplxAI, explores the still relatively unknown realm of AI security and how GenAI applications can be effectively safeguarded against cybercrime. Security by design must be a core consideration when building GenAI applications. Regular tests and audits are necessary to identify potential vulnerabilities in AI systems before malicious actors can exploit them. Continuous monitoring is equally crucial for recognizing adversarial activity once AI apps are live in production.

Security measures need to be integrated early in the development phase of GenAI apps

The deployment of GenAI applications expands the digital attack surface, making systems more susceptible to new and more sophisticated cyberattacks.

Incorporating security practices from the start of GenAI application development is crucial to safeguard against potential vulnerabilities and adhere to regulatory standards.

Regulatory frameworks, such as the EU AI Act and DORA, will require regular security and safety audits of AI systems, and non-compliance can lead to heavy financial penalties.


Supercharged security for your AI systems

Don’t wait for an incident to happen. Make sure your AI apps are safe and trustworthy.


For a future of safe and trustworthy AI.

Subscribe to our newsletter

By clicking "Subscribe" you agree to our privacy policy.
