Today, we’re excited to introduce Policy Generator, another market-first feature that reshapes how AI security teams operationalize findings from automated AI red teaming assessments. With this release, the SPLX Platform becomes the first solution to create a streamlined pipeline from adversarial testing to production-ready guardrails – all generated automatically based on red teaming test results and domain-specific test configurations.
Instead of manually interpreting red teaming results and rewriting inline protection policies, SPLX users can now generate enforceable guardrail policies in just a few clicks, directly informed by discovered risks and probe configurations. This creates an instant connection between testing → remediation → enforcement, minimizing time to mitigation and immediately reducing risk across your AI systems.
What the Policy Generator Does
The Policy Generator creates a seamless connection between AI red teaming results on the SPLX Platform and the guardrails that secure your AI systems in production. Instead of relying on generic templates, the feature analyzes the actual test results of the selected probes – including any vulnerabilities surfaced during execution – and uses this data as the foundation for the generated policy.
Just as importantly, it incorporates the probe configuration behind those results: confidential system-prompt sections used for Context Leakage testing, canary words, boundary conditions, allowed or blocked topics, and any other domain-specific settings that make a probe highly targeted. By combining probe results with these configuration details, the generated policy reflects not just what went wrong, but why – and how the system should be protected going forward.
Each policy includes the appropriate policy rule types and their recommended configurations, all structured according to SPLX’s supported policy format. You can export the full policy in JSON format for use across other infrastructures, apply it directly to an existing runtime policy, or create a completely new one.
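To make the export format concrete, here is a minimal sketch of what such a JSON policy could look like. The field and rule-type names below are illustrative assumptions, not SPLX's actual policy schema:

```python
import json

# Hypothetical example of an exported guardrail policy.
# All field names here are illustrative assumptions, not SPLX's real schema.
policy = {
    "name": "chatbot-runtime-policy",
    "source": "red-teaming-assessment",
    "rules": [
        {
            # Derived from a Context Leakage probe and its canary-word config
            "type": "context_leakage",
            "action": "block",
            "config": {"canary_words": ["CANARY-7741"], "protect_system_prompt": True},
        },
        {
            # Derived from the probe's allowed/blocked topic settings
            "type": "topic_restriction",
            "action": "block",
            "config": {"blocked_topics": ["competitor pricing"]},
        },
    ],
}

# Export the full policy as JSON for use across other infrastructures
exported = json.dumps(policy, indent=2)
print(exported)
```

The point of the structure is that each rule carries both its type and the probe-derived configuration that justifies it, so the policy remains traceable back to the assessment that produced it.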
Because each guardrail provider (e.g., Zscaler, AWS, Azure) offers different guard types and configuration models, the Policy Generator is provider-aware. When creating a policy, users select the target guardrail provider, and the generated policy is automatically mapped to that provider’s rule schema and enforcement semantics. This ensures that a confidentiality rule becomes the right kind of input filter in one provider, a content moderation blocklist in another, or a system prompt constraint in a third. Under the hood, SPLX applies provider-specific logic so you don’t have to translate red teaming insights into multiple, incompatible configurations. The result is a policy that fits the guardrails you actually use, without manual rework.
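The provider-aware translation described above can be pictured as a lookup from a generic rule type to each provider's native construct. This is a minimal, hypothetical sketch – the provider names match the examples in this post, but the rule and guard-type names are invented for illustration and do not reflect any provider's real schema:

```python
# Hypothetical sketch of provider-aware policy mapping.
# Guard-type names are illustrative, not real provider configuration models.
PROVIDER_RULE_MAP = {
    "zscaler": {"confidentiality": "input_filter"},
    "aws": {"confidentiality": "sensitive_information_policy"},
    "azure": {"confidentiality": "system_prompt_constraint"},
}

def map_rule(provider: str, rule_type: str) -> str:
    """Translate a generic rule type into the provider's native guard type."""
    try:
        return PROVIDER_RULE_MAP[provider][rule_type]
    except KeyError:
        raise ValueError(f"no mapping for rule {rule_type!r} on provider {provider!r}")

# The same confidentiality finding becomes a different construct per provider:
print(map_rule("zscaler", "confidentiality"))  # input_filter
print(map_rule("aws", "confidentiality"))      # sensitive_information_policy
print(map_rule("azure", "confidentiality"))    # system_prompt_constraint
```

A table-driven mapping like this is one natural way to keep a single source finding in sync with several incompatible enforcement schemas, which is the manual rework the feature removes.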

In essence, the Policy Generator takes what was discovered during red teaming assessments and transforms it into actionable and production-ready guardrails – empowering AI security teams to remediate risks faster and enforce stronger inline protection everywhere their AI runs.
What This Means for AI Security Teams
The launch of Policy Generator marks another important milestone in building a holistic, end-to-end approach to securing AI systems – from development to deployment. By closing the loop between testing, remediation, and runtime enforcement, SPLX now allows teams to move from risk discovery to inline protection with unprecedented speed and accuracy.
For AI security teams, this means:
Faster and more consistent remediation
Guardrail policies that reflect real adversarial behavior
Reduced manual effort and fewer chances for human error
The ability to deploy strong, aligned protection across cloud and on-prem environments
A clearer, more repeatable pathway for maintaining secure AI systems at scale
Policy Generator streamlines one of the most complex and time-consuming steps in the AI security lifecycle: turning threat insights into real defenses. It helps teams create the most effective guardrail policies possible – rooted in evidence, grounded in configuration context, and deployable wherever your AI systems live.
For example, customers running models or agents on AWS can now use SPLX to generate guardrail policies aligned with Guardrails for Amazon Bedrock. With a single click, you can protect Bedrock workloads without deploying extra infrastructure, leveraging AWS’s managed guardrail capabilities. That means faster rollout, reduced operational overhead, and policies that stay current with AWS’s native enforcement model, so your teams spend less time maintaining guardrails and more time securing what matters.
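For readers unfamiliar with Guardrails for Amazon Bedrock, the target of such a generated policy is a guardrail definition like the one sketched below. The payload shape follows the real `bedrock` `CreateGuardrail` API, but the specific topic and messaging values are illustrative assumptions, and the AWS call itself is shown only as a comment:

```python
# Sketch of a Guardrails for Amazon Bedrock definition that a generated
# policy could target. The payload shape follows the bedrock CreateGuardrail
# API; the specific topics and messages are illustrative assumptions.
guardrail_request = {
    "name": "splx-generated-guardrail",
    "description": "Generated from red teaming results (illustrative example)",
    "topicPolicyConfig": {
        "topicsConfig": [
            {
                "name": "SystemPromptDisclosure",
                "definition": "Requests to reveal hidden instructions or the system prompt.",
                "type": "DENY",
            }
        ]
    },
    "blockedInputMessaging": "This request is not permitted.",
    "blockedOutputsMessaging": "The response was blocked by policy.",
}

# To apply it for real (requires AWS credentials and boto3):
#   import boto3
#   bedrock = boto3.client("bedrock")
#   bedrock.create_guardrail(**guardrail_request)
print(guardrail_request["name"])
```

Because the guardrail is a managed AWS resource, applying a policy is a single API call rather than new infrastructure, which is what keeps rollout and maintenance overhead low.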
And this is just the beginning. As we continue to expand SPLX’s automated security and governance capabilities, Policy Generator becomes a foundational building block in a future where AI systems can be continuously tested, improved, and protected.
Ready to see it in action?
Get started by generating your first AI runtime protection policy with the new Policy Generator, now available in the SPLX Platform.
If you’d like a deeper look, our team is here to help – request a demo or explore our detailed documentation to understand how Policy Generator works.