AI Runtime Security: API Intercept: Toxic Content Detection
Released in March
AI Model Protection: Added Toxic Content Detection in LLM model requests and responses to protect models from generating or responding to inappropriate content. Toxic content includes references to hateful, sexual, violent, or profane themes. Because malicious threat actors can easily bypass an LLM's built-in guardrails against toxic content through direct or indirect prompt injection, detecting it at the API layer adds a second line of defense.
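As a rough illustration of scanning both sides of an LLM exchange, the sketch below assembles a scan request covering a prompt and a response and interprets a verdict. The endpoint path, field names, and verdict keys are illustrative assumptions, not the documented API contract; consult the scan API reference for the real schema.

```python
# Hypothetical sketch: endpoint, payload fields, and verdict keys below are
# assumptions for illustration, not the documented AI Runtime Security API.
SCAN_URL = "https://service.api.example.com/v1/scan/sync/request"  # placeholder

def build_scan_payload(prompt: str, response: str, profile_name: str) -> dict:
    """Assemble a scan request covering both the prompt and the model response."""
    return {
        "ai_profile": {"profile_name": profile_name},  # assumed field name
        "contents": [{"prompt": prompt, "response": response}],
    }

def is_blocked(verdict: dict) -> bool:
    """Treat a 'block' action or a toxic-content hit as a blocking verdict."""
    return verdict.get("action") == "block" or bool(verdict.get("toxic_content"))

payload = build_scan_payload(
    prompt="Tell me something hateful.",
    response="I can't help with that.",
    profile_name="my-ai-profile",
)
```

In a real integration, `payload` would be POSTed to the scan endpoint with your API key, and the caller would drop or sanitize the exchange whenever `is_blocked` returns True.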
AI Security Profile Customization: Create and manage multiple AI security profiles and their revisions.
AI Application Protection: Enhanced application security with advanced URL-filtering options, including custom allow and block lists layered on top of the predefined URL security categories.
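One way to picture how custom lists interact with predefined categories is the precedence sketch below: an explicit block wins, an explicit allow overrides a category verdict, and otherwise the category decides. The category names, lookup table, and precedence order are assumptions for illustration, not the product's documented behavior.

```python
# Illustrative precedence of custom allow/block lists over predefined URL
# security categories; names and ordering here are assumptions.
from urllib.parse import urlparse

BLOCKED_CATEGORIES = {"malware", "phishing"}           # assumed category names
CATEGORY_LOOKUP = {"evil.example.com": "malware"}      # stand-in for a category feed
CUSTOM_ALLOW = {"evil.example.com"}                    # custom allow list
CUSTOM_BLOCK = {"unwanted.example.net"}                # custom block list

def url_verdict(url: str) -> str:
    """Decide allow/block: custom block, then custom allow, then category."""
    host = urlparse(url).hostname or ""
    if host in CUSTOM_BLOCK:
        return "block"
    if host in CUSTOM_ALLOW:        # explicit allow overrides a category block
        return "allow"
    if CATEGORY_LOOKUP.get(host) in BLOCKED_CATEGORIES:
        return "block"
    return "allow"
```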
AI Data Protection: Expanded data loss prevention (DLP) profile selection; you can now define your own custom DLP profiles for AI security.
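Conceptually, a custom DLP profile maps named data patterns to detection rules. The minimal sketch below models a profile as named regular expressions and reports which patterns a text triggers; the pattern names, regexes, and profile structure are illustrative assumptions, not the product schema.

```python
# Minimal sketch of a custom DLP profile as named regex patterns; the names
# and structure are assumptions for illustration only.
import re

CUSTOM_DLP_PROFILE = {
    "us-ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit-card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def dlp_findings(text: str) -> list[str]:
    """Return the name of every DLP pattern that matches the text."""
    return [name for name, pattern in CUSTOM_DLP_PROFILE.items() if pattern.search(text)]
```

A scanner built on such a profile would block or redact a prompt or response whenever `dlp_findings` returns a non-empty list.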
Database Security Detection: Enable database security detection to regulate database security threats in prompts or responses. This feature lets you allow or block malicious SQL queries, preventing unauthorized actions on your database. (For detailed instructions on implementing this feature and using the scan APIs, refer to the creating a security profile section.)
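The allow-or-block decision can be sketched as classifying the SQL statement type found in a prompt or response. The list of blocked statement types below is an assumption for illustration; real detection is far more robust than matching the leading keyword.

```python
# Rough sketch of an allow/block decision on SQL seen in prompts or responses.
# The blocked statement types are an illustrative assumption, not policy.
import re

BLOCKED_STATEMENTS = ("drop", "delete", "truncate", "alter", "grant")

def sql_action(query: str) -> str:
    """Return 'block' for destructive statement types, 'allow' otherwise."""
    words = re.split(r"\s+", query.strip().lower(), maxsplit=1)
    return "block" if words[0] in BLOCKED_STATEMENTS else "allow"
```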