The Recommendations feature lets you move directly from identifying AI system
vulnerabilities through Red Teaming assessments to implementing security
controls that address your specific risks. It closes the critical gap between
AI risk assessment and mitigation by transforming vulnerability findings into
actionable remediation plans. Remediation recommendations are included in all
Attack Library and Agent Scan Reports.
When you run AI Red Teaming evaluations on your AI models, applications, or
agents, this integrated solution automatically analyzes the discovered
security, safety, brand reputation, and compliance risks and generates
contextual remediation recommendations that directly address your specific
vulnerabilities.
Each contextual remediation recommendation includes two distinct components:
For organizations deploying AI systems in production environments, this
capability ensures that your runtime security configurations and remediation
measures are informed by actual risk insights rather than generic best practices,
resulting in more effective protection against the specific threats your AI systems
face.
The remediation recommendations appear directly in your AI Red Teaming scan
reports, providing actionable guidance. You can then manually create the
recommended security profiles and attach them to your desired workloads,
turning AI risk management from a reactive process into a proactive workflow
that connects vulnerability discovery with targeted protection.
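The findings-to-profiles workflow described above can be sketched in a few lines of code. This is an illustrative mock only: the risk categories, profile names, and the `recommend_profiles` helper are assumptions for demonstration, not the product's actual API or report format.

```python
# Hypothetical sketch of the recommendation workflow: scan findings are
# grouped by workload, and each distinct risk category maps to a security
# profile you might create and attach to that workload. All names below
# are illustrative assumptions, not product identifiers.

# Assumed mapping from a finding's risk category to a candidate profile.
PROFILE_FOR_CATEGORY = {
    "security": "strict-input-filtering",
    "safety": "content-moderation",
    "brand_reputation": "tone-and-claims-guardrails",
    "compliance": "data-handling-policy",
}

def recommend_profiles(findings):
    """Propose one security profile per distinct risk category
    discovered on each workload."""
    recommendations = {}
    for finding in findings:
        profile = PROFILE_FOR_CATEGORY.get(finding["category"])
        if profile is None:
            continue  # unknown category: leave for manual review
        recommendations.setdefault(finding["workload"], set()).add(profile)
    # Sort for stable, readable output.
    return {w: sorted(p) for w, p in recommendations.items()}

# Example findings as they might come out of a scan report.
findings = [
    {"workload": "support-agent", "category": "security"},
    {"workload": "support-agent", "category": "safety"},
    {"workload": "billing-bot", "category": "compliance"},
]
print(recommend_profiles(findings))
```

The point of the sketch is the shape of the workflow, not the mapping itself: recommendations are derived from the actual risks found on each workload, rather than applied as one generic baseline everywhere.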