Announcement

Mar 10, 2025

5 min read

Introducing Agentic Radar: The New OSS Tool for AI Workflow Transparency

Assess Agentic AI systems for operational insights and identify potential vulnerabilities to enhance AI security

The SplxAI Team

SplxAI - Agentic Radar Cover

The SplxAI Team is excited to announce the release of Agentic Radar, our open-source contribution to the AI security community. As AI systems grow more autonomous, the need for transparency, security, and explainability becomes more urgent. With Agentic Radar, we are taking a significant step toward securing agentic AI workflows by giving practitioners a powerful tool for gaining deep insights into the decision-making paths of AI systems and their security vulnerabilities. This will help security teams meet the requirements of AI compliance policies, which demand explainability of AI systems and disclosure of the AI-BOM (AI Bill of Materials) within them.


Why we built Agentic Radar

Today, AI security faces a critical gap: to properly secure AI agent workflows, we need to understand how these systems operate. As described in our recent article about AI Transparency, simple black-box security testing won't be enough to proactively identify and patch vulnerabilities. With Agentic Radar, practitioners gain deeper insights into how agentic AI systems function, how they are connected, and which tools are integrated – enabling them to perform more targeted AI red teaming with a gray-box approach.

Key Security Challenges in Agentic AI:

  • Lack of visibility: What tools are integrated into the data flow?

  • Unclear AI risks: How many LLMs are connected and what are each of their vulnerabilities?

  • AI compliance: New AI regulations demand explainability and transparency in AI workflows.

  • Black-box testing is not enough: To assess agentic AI effectively, we need insight into its architecture.

This is why we built Agentic Radar, a powerful tool that automatically assesses AI-powered workflows, maps out vulnerabilities, and enables security-by-design from early development stages. By having a clear view of agentic architectures, AI security teams are equipped to stay compliant and run more specific and targeted risk assessments of their AI systems.


What is Agentic Radar?

Agentic Radar is an open-source scanner for agentic systems that helps security teams and AI engineers understand how AI agents interact with tools, external components, and each other. By visualizing an AI system’s architecture through static code analysis, it reveals hidden workflows and potential vulnerabilities, allowing security teams to secure them proactively. The tool supports a variety of agentic frameworks, and our team will continuously ship more integrations.

Agentic Radar enables AI security practitioners to:

  • Visualize AI workflows: Generate a graph of an AI system’s components – showing how agents and tools form decision paths.

  • Identify external tools: Detect all tools, APIs, and services integrated within the workflow.

  • Map AI vulnerabilities: Identify potential vulnerabilities in agentic workflows and align the findings with established LLM security frameworks.

  • See instant remediation steps: Get clear and actionable fixes to mitigate risks and strengthen the security of your agentic systems.

The results of Agentic Radar's workflow assessments are delivered in an HTML report for easy access and distribution. Below you can see an example of a visualized agentic workflow graph:

SplxAI - Agentic Workflow Graph

Tool vulnerabilities are shown with a detailed description, security framework mappings, and remediation steps for instant response:

SplxAI - Agentic Workflow Tool Vulnerabilities


Let's stop the guesswork in Agentic AI Security

Securing agentic AI workflows starts with transparency. Without understanding how agents, tools, and data flows interact, it's impossible to conduct precise security testing or ensure compliance. Agentic Radar is the first tool of its kind, giving AI security teams real visibility into agentic workflows and potential vulnerabilities, enabling targeted risk assessments and a more robust AI security posture.

At SplxAI, we believe securing complex AI systems should be accessible and efficient. We’re committed to supporting the AI security community, which is why we decided to make Agentic Radar fully open source.

Try it out for yourself – scan the source code of your own agentic system and see the results firsthand. Our public repo includes a detailed guide and some great examples to get started. Feel free to leave a star if you want to support our cause for AI transparency and secure agentic workflows!
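To get a feel for the flow, here is a minimal sketch of installing the scanner and producing a report. The package name matches the public repo, but the exact flags, paths, and any required framework argument are assumptions on our part – the repository README is the authoritative reference for the CLI.

```shell
# Minimal sketch only – flags and paths below are illustrative
# assumptions; see the Agentic Radar repo README for exact usage.
pip install agentic-radar

# Point the scanner at your agent's source tree and write an HTML
# report. "my_agent/" and "report.html" are placeholder paths, and
# the CLI may additionally expect the target framework (e.g. for
# LangGraph or CrewAI projects) to be specified.
agentic-radar scan -i my_agent/ -o report.html
```

Opening the generated HTML report in a browser shows the workflow graph and the mapped vulnerabilities described above.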


Ready to adopt Generative AI with confidence?


Leverage GenAI technology securely with SplxAI

Join the enterprises that trust SplxAI for their AI security needs: CX platforms, sales platforms, conversational AI, finance & banking, insurance, and CPaaS providers.

  • 300+ tested GenAI apps

  • 100k+ vulnerabilities found

  • 1,000+ unique attack scenarios

  • 12x accelerated deployments

Security you can trust: GDPR compliant, CCPA compliant, ISO 27001 certified, SOC 2 Type II compliant, OWASP contributors.


Supercharged security for your AI systems

Don’t wait for an incident to happen. Make sure your AI apps are safe and trustworthy.



For a future of safe and trustworthy AI.

Subscribe to our newsletter

By clicking "Subscribe" you agree to our privacy policy.
