Create and Configure API Security Profile
Focus
Focus
AI Runtime Security

Create and Configure API Security Profile

Table of Contents

Create and Configure API Security Profile

Create and configure a security profile and define the protections to be enabled.
On this page you will create an API security profile and configure AI model protection, AI application protection, and AI data protection.
Where Can I Use This?What Do I Need?
  • AI Runtime Security
Follow these steps if you are onboarding the AI Runtime Security: API intercept for the first time, or when you have already onboarded the AI API intercept and want to manage the API security profile.
  1. For the first time onboarding, ensure you Onboard AI Runtime Security: API Intercept in Strata Cloud Manager. For updating an exisiting API security profile, navigate to Insights → AI Runtime Security and click Manage from the top right corner and select Security Profiles.
  2. Create Security profile or edit and existing one with the following configurations:
    1. Enter a Security Profile Name.
    2. Select the following protections with Allow or Block actions:
      Protection TypeConfigurations
      AI Model Protection
      • Enable Prompt Injection Detection and set it to Allow or Block
        The feature supports the following languages: English, Spanish, Russian, German, French, Japanese, Portuguese, and Italian.
        .
      • Enable Toxic Content Detection in LLM model requests or responses.
        This feature helps protect the LLM models from generating or responding to inappropriate content.
        The actions include Allow or Block, with the following severity levels:
        • Moderate: Detects content that some users may consider toxic, but which may be more ambiguous. The default value is Allow.
        • High: Content with a high likelihood of most users considering it toxic. The default value is Allow.
      AI Application Protection
      • Enable Malicious Code Detection
        • This feature is used to analyze code snippets generated by Large Language Models (LLMs) and identify potential security threats.
        • Set the action to Block to prevent the execution of potentially malicious code or set it to Allow to ignore the “malicious” verdict if needed.
        • To test your LLMs, trigger a scan API with a response containing malicious code for supported languages (Javascript, Python, VBScript, Powershell, Batch, Shell, and Perl).
        • The system provides a verdict on the code snippet and generates a detailed report with SHA-256, file type, known verdict, code action, and malware analysis.
      • Enable Malicious URL Detection
        • Basic: Enable the Malicious URL Detection in a prompt or AI model response and set the action to Allow or Block. This is to detect the predefined malicious categories.
        • Advanced: Provide URL security exceptions:
          The default action (Allow or Block) is applied to all the predefined URL security categories.
          In the URL Security Exceptions table, you can override the default behavior by specifying actions for individual URL categories.
          Select the plus (+) icon to add the predefined URL categories and set an action for each.
      Refer to the API reference docs to trigger the scan APIs with the intended detections. Also, generate the reports with report ID and scan ID displayed in the output snippet.
      AI Data ProtectionEnable Sensitive Data Detection
      • Basic: Enable sensitive DLP protection for predefined data patterns with Allow or Block action.
      • Advanced: Select the predefined or custom DLP profile.
        The drop down list shows your custom DLP profiles and all the DLP profiles linked to the tenant service group (TSG) associated with your AI Runtime Security: API intercept deployment profile.
        Navigate to Manage > Configuration > Data Loss Preventions > Data Profiles to create a new DLP profile (Add a data profile).
        The prompts and responses will be run against this DLP profile attached to the AI security profile and action (allow or block) will be taken based on the output of the DLP profile.
      • Enable Database Security Detection:
        This detection is for AI applications using genAI to generate database queries and regulate the types of queries generated.
        Set an Allow or Block action on the database queries (Create, Read, Update, and Delete) to prevent unauthorized actions.
        Refer to the Use Cases: AI Runtime Security: API Intercept for details on the request and response report.
      You can use a single API key to manage multiple AI security profiles for testing.
    3. Latency Configuration: Define acceptable API response times by setting a latency threshold in seconds. This threshold will determine when API responses exceed the permissible limit, impacting how quickly threats are detected and actions are executed. You can set the action to Allow or Block when the latency threshold exceeds.
    4. Create Profile.
xThanks for visiting https://docs.paloaltonetworks.com. To improve your experience when accessing content across our site, please add the domain to the allow list on your ad blocker application.