AI Runtime Security
Addressed-Known Issues Consolidated List
ISSUE ID | STATUS | DESCRIPTION | ADDRESSED IN RELEASE/S | UNRESOLVED IN RELEASE/S |
PLUG-13393 | Verified | On Azure plugin 5.1.0, after you configure a Monitoring Definition on the Panorama UI, the Monitoring Definition does not appear in the UI until a commit is performed. Workaround: After adding the first Monitoring Definition, initiate a commit; once the commit completes, the UI displays the Monitoring Definition. Refresh your browser window after the commit. | - | 5.1.1 |
PLUG-11732 | Assigned | The command debug plugins azure azure-tags dump-all monitoring-definition does not work if the tags contain non-ASCII characters. | - | 4.1.0 |
PLUG-10972 | Verified | Following an upgrade of Azure plugin from 3.2.0 to 3.2.1, an exception related to MS connection is observed in the monitoring logs. | - | 3.2.1 |
PLUG-3478 | Verified | Spaces and special characters in user-defined tags are now treated differently. In previous releases, both spaces and special characters caused a tag to be ignored. In the current release, user-defined tags containing spaces can be retrieved, provided they do not include special characters. A space in a user-defined tag is replaced with “/”, allowing the tag to be retrieved. For example, if your tag is finance and accounts, the tag can be retrieved. User-defined tags with special characters are ignored and not retrieved. For example, if your tag is finance&accounts, the tag is ignored and the log shows the following message: admin@Panorama> less plugins-log plugin_azure.log 2020-02-27 12:20:46.018 -0800 DEBUG:: Tag azure.tag.Tag-spcl-char.<finance>&<accounts> has unsupported chars.. Ignoring... Workaround: Modify the tag to remove special characters. This issue is fixed in the Panorama plugin for Azure, version 2.0.2. | 2.0.2 | 2.0.1 |
PLUG-1931 | Verified | Unable to configure the Azure plugin on Panorama when it is installed on an M-600 appliance. Fixed in the Panorama plugin for Azure 2.0.1. | - | 1.0.0 |
PLUG-10840 | Verified | Fixed an issue where the InvalidResourceReference error was observed while adding an inbound stack to an existing deployment and redeploying it. | - | 3.2.0, 4.0.1, 3.2.1 |
PLUG-13661 | Resolved | When a request from the Azure plugin to Panorama for device group information fails, the plugin cannot process IP-tag information. When this issue occurs, the plugin generates the following syslog entry: Azure plugin: Tags used in policy was not successfully retrieved, please disable tag pruning from CLI. Workaround: If you receive the syslog entry shown above, disable tag pruning to retrieve IP tags. Use the following command to disable tag pruning: request plugins azure set-tag-pruning-flag value False. On a High Availability Panorama setup, the tag pruning CLI commands must be configured on both HA peers. | 11.0.4 | 11.0.2 |
PLUG-10792 | Verified | While using CIDR notations below /22 in Azure deployments, ensure that the vNet address space does not overlap with any other vNet in the same organization. Workaround: It is recommended to use CIDR notations below /16, although /22 is allowed. Fixed in Panorama plugin for Azure 3.2.1. | 3.2.1 | 3.2.0, 3.2.1 |
PLUG-6674 | Assigned | If an authcode, device certificate PIN ID, device certificate PIN value, or jumbo frame configuration is changed and a deployment update is done, an automatic rolling update is not triggered. These new changes will only apply to newly deployed firewalls. Workaround: Because rolling updates do not support authcodes, device certificate information, or jumbo frame configuration, you must manually delete the firewalls in the VMSS one by one. The changes will be applied to the new firewalls that come up. | - | 3.0.0 |
PLUG-10435 | Resolved | In fully scaled environments on an HA setup, the plugin installation is observed to take around 20 minutes on the Primary node. | - | 3.2.0 |
PLUG-6860 | Verified | Deployment update fails when the minimum firewall count is changed. Fixed in the Panorama plugin for Azure, version 3.0.1. | - | 3.0.0 |
FWAAS-9055 | Verified | Fixed an issue that caused Cloud NGFW to enter an unhealthy state and lose connectivity to Panorama after the cloud device group was renamed in Panorama and the configuration was pushed. For Cloud NGFW, the Device Group and Template Stack names could not be renamed, because renaming prevented Cloud NGFW auto-scaled devices from registering to Panorama with the original names. This issue is fixed in the Panorama plugin for Azure 5.1.1. | Azure Post GA | - |
PLUG-11011 | Resolved | When a proxy configuration is deleted and committed on Panorama, you might observe logs reaching the proxy intermittently. | - | 3.2.1 |
PLUG-8744 | Verified | Fixed an issue where, when multiple subscriptions were monitored and some were in a failed state while others were in a success state, the plugin processed only the updates from the last subscription and sent them to Panorama. | - | 3.0.1, 3.1.0 |
PLUG-11954 | NEEDSMOREINFO | When you update the number of firewalls in the Azure Virtual Machine Scale Set (VMSS) and attempt to redeploy, the deployment status becomes stuck in the Deploying state and some firewalls are displayed as disconnected. Additionally, the new number of firewalls is not reflected in the summary. After 30 or more minutes, the firewalls display as Connected and the correct number of firewalls is displayed. | - | 4.1.0 |
FWAAS-7948 | Done | The selection and deselection of individual VMs as a target when pushing policies on Panorama to cloud NGFW is disabled for Azure plugin 5.1.0. | - | - |
PLUG-4477 | Verified | Fixed an issue that caused the Panorama plugin for Azure to print the error message Proxy server not set in configuration when using the Validate button. The plugin logs on Panorama show that the validation is successful in spite of the printed error message. | - | 2.0.2 |
PLUG-12401 | Resolved | When a public frontend IP is configured on application gateway v2, the Panorama plugin for Azure 4.1.0 treats it as a public application gateway and creates a static route toward the untrust interface with the subnet CIDR as the source. The plugin recognizes application gateway v2 as an external load balancer because Azure requires a public IP frontend for application gateway v2. You cannot use application gateway v2 as an internal load balancer. | - | 4.1.0 |
PLUG-10122 | Verified | When viewed in Mozilla Firefox, the Configuration General tab of the Panorama plugin for Azure 4.0.0 might appear distorted or unclickable. Workaround: Use other browsers such as Google Chrome or Microsoft Edge. | - | 4.0.0 |
PLUG-10478 | Verified | Out-of-memory issues are observed on smaller Panorama instances with resources of around 4/16 CPU/memory that are running more than 100 subscriptions. | - | 3.2.0 |
PLUG-7780 | Resolved | When the monitoring definition service principal for VM monitoring in Azure is configured correctly on the Panorama plugin for Azure 3.0.x with PAN-OS 10.0.x, the service principal validation check displays as failed under Panorama > Azure > Setup > Service Principal. | - | 2.0.3, 3.0.0, 2.0.1 |
PLUG-10898 | Verified | Fixed an issue where the vNet Resource Groups were not refreshed when the regions were changed in the UI. | - | 3.2.1 |
PLUG-5389 | Verified | When a deployment is first added on Panorama, the status displays Commit Changes directing you to perform a commit. However, when you make a change to the deployment configuration, the status does not change although a commit is required before your Azure stack is updated. Workaround: Perform a commit on Panorama. | - | 3.0.0 |
FWAAS-10146 | Verified | You cannot select a firewall instance as a User ID Master Device when configuring a template. | - | - |
PLUG-11905 | Resolved | On a Panorama HA pair, you might see tracebacks in the monitoring logs on the new primary node after a failover event. Additionally, monitoring fails while the tracebacks occur. Workaround: Perform a commit on the primary active node to recover monitoring. | - | 4.1.0 |
FWAAS-9739 | Verified | The interface section in the DNS proxy configuration dialog box in the Panorama console is now disabled for CNGFW templates and template stacks. | - | - |
PLUG-718 | Closed | For a Dynamic address group that is not referenced in a Security policy rule, the list of registered IP addresses displayed on Objects > Address Groups is not accurate. This is a display issue only, and security policy is properly enforced on all your running VMs in the VPC. Workaround: Use the Dynamic address group in a Security policy to see the most current list of registered IP addresses on the firewall, or use the CLI command show object dynamic-address-group all for an up-to-date list of IP addresses. | 1.0.0 | 1.0.0 |
PLUG-1766 | Resolved | On the Azure AutoScaling Definition, it can take three to five minutes to list the Protected Applications and Services. | - | 2.0.0 |
PLUG-12220 | Verified | Fixed an Azure resource deletion issue caused by the unavailability of the Panorama plugin for Azure configuration. | - | 4.1.0 |
PLUG-13778 | Resolved | Fixed an issue that caused partial undeployment of Azure plugin orchestration without any Administrator actions. | - | 4.1.0 |
PLUG-1901 | Resolved | If the service name is not unique across namespaces (for example, Staging and Production), the IP addresses associated with both services are mapped to the same tag and policy enforcement is the same for the services across both namespaces. Instead of using the default tags on Panorama, use the label selector to filter tags based on namespace, and use the filter results as part of the address group. | - | 2.0.0 |
PLUG-13860 | Assigned | When validating credentials using an invalid secret, the process fails. An error message appears displaying extraneous information. | - | 5.1.1, 5.0.0 |
FWAAS-9343 | Verified | If you push security rules in non-cloud device groups to Cloud NGFW devices, the security rules pushed from Panorama to Target Devices are not present on autoscaled firewalls by default. | 1.7.0 | - |
PLUG-10338 | Verified | After a deployment is successfully deleted, the PA-VMs are not deleted from the device summary and are displayed as disconnected under Managed Devices. This issue is observed in PAN-OS 10.1.x and later. | - | 3.2.1, 4.0.0, 4.1.0 |
PLUG-10134 | Verified | In Azure environments, the Retry logic fails intermittently with HTTP Error 429 during monitoring. | - | 3.2.0, 3.1.1, 4.0.0 |
PLUG-3797 | Resolved | When upgrading the Panorama plugin for Azure on peers configured as an HA pair, if you upgrade the plugin on the secondary peer first and the peer becomes active, the primary (now passive) cannot function as an HA peer. Workaround: When upgrading the Panorama plugin for Azure on peers that are configured as an HA pair, install the plugin on the primary peer first and commit your changes immediately, and then install the same plugin version on the secondary peer and commit your changes immediately. This issue is fixed in the Panorama plugin for Azure, version 2.0.2. | 2.0.2 | 2.0.0 |
PLUG-9905 | Verified | When a proxy is configured on Panorama, the Monitoring status fails because the connection is rejected. Workaround: Manually configure the public IP of the Panorama at the interface level. | - | 3.1.1 |
PLUG-7143 | Verified | Azure deployment fails in United Arab Emirates regions with the error: Failed to get zones for entered region. Fixed in the Panorama plugin for Azure, version 3.0.1. With this fix, the plugin detects the valid instance types for a region before the deployment. | - | 3.0.0 |
PLUG-10409 | Closed | In environments with many subscriptions, tags updated on Azure are reflected on the Panorama with a delay. | - | 3.2.0 |
PLUG-13010 | N/A | On Azure plugin 5.1.0, if DAGs overlap and one of the DAGs is part of a policy, the IP addresses show up in both DAGs, including the one that is not part of a policy. | - | - |
PLUG-9994 | Verified | Static IPs are not recognized when "and" operators are used with an IP CIDR range. | - | 4.0.0 |
PLUG-9978 | Verified | In the plugin_client.log file, it is observed that the Proxy password is displayed as plain text, instead of being masked. | - | 3.1.1 |
PLUG-6991 | Verified | After a successful deployment using the PAN-OS 10.0.1 image, if you add a front end with a new public IP type and add related load balancing rules, the new front end functions; however, updating to PAN-OS 10.0.2 deletes the new public IP and load balancing rules. Fixed in the Panorama plugin for Azure, version 3.0.1. This plugin fix works with PAN-OS 10.0.1 and later. | - | 3.0.0 |
PLUG-10476 | Closed | If you have one or more other Panorama plugins installed, the upgrade and installation of the Panorama plugin for Azure to version 3.2.0 can take longer than expected. | - | 3.2.0 |
DIT-40519 | Incident Resolved | If the vm_auth_key that the firewall uses to connect with Panorama expires, is deleted, or becomes invalid, new firewalls cannot connect with Panorama. | - | - |
PLUG-10775 | Verified | Fixed an issue where the Azure plugin would not process a subscription successfully if the tags contained special characters. | 3.2.1 | 3.2.0 |
PLUG-996 | Resolved | For firewalls running PAN-OS 8.1, if the total number of tags exceeds 7000 for a device group that contains a firewall or a group of firewalls, an XML parsing error displays. This parsing error causes the failure to register tags to the firewalls. For firewalls running PAN-OS 8.0.x, this XML parsing error limit is met at 2500 tags. | - | 2.0.0, 1.0.0 |
FWAAS-9738 | Verified | This release resolves an issue where after configuring a DNS proxy object (Device > Setup > Services) you could not use the Panorama interface to modify or delete it. | - | - |
PLUG-11906 | Verified | When deployed in the North Central US region of Azure, Panorama does not properly handle subnets with overlapping IP addresses. The plugin is expected to use the next available subnet IP range but instead returns an error message and the Azure Orchestrated deployment fails. | - | 3.2.0, 3.2.1 |
PLUG-676 | Resolved | If the memory allocation on a Panorama virtual appliance is lower than the minimum recommendation, you cannot access and configure the plugin. Make sure to size your Panorama appliance properly so that you can install the plugin. | - | 1.0.0 |
PLUG-1874 | Reopened | On rare occasions, the license server reuses the serial number of an active device, and Panorama deactivates the device. Remove the device from the auto scale group. | - | 2.0.0 |
PLUG-14014 | Verified | The firewall delicensing process is improved and now includes additional verification checks before the firewall is delicensed. | - | 5.1.1 |
PLUG-13535 | Verified | Fixed an issue that caused VM-Series firewalls in all device groups to be deactivated when only Cloud NGFW for Azure firewall instances should have been disconnected. Additionally, this fixes an issue that prevented the last disconnected timestamp from being updated. | - | 5.1.0 |
PLUG-13162 | Closed | The CLI allows you to configure unavailable regions. You must ensure that the region you want to configure is available for the corresponding service principal. | - | 5.1.0 |
PLUG-10707 | Verified | Fixed an issue where the Azure plugin failed to upgrade the PAN-OS version of the deployed firewalls through redeploy. | 3.2.1, 4.0.1, 4.1.0 | 3.2.0, 4.0.0 |
PLUG-6543 | TODO | When deploying or updating multiple deployments, the Panorama plugin for Azure might fail to commit your changes when too many commits have been issued by the plugin. This occurs because Panorama allows a maximum of 10 administrator-initiated commits. See Panorama Commit, Validation, and Preview Operations for more information. Workaround: To resolve this issue, perform a manual commit. | - | 3.0.0 |
PLUG-6990 | Verified | In a configuration where a service principal is valid for both Azure monitoring and deployments, the user interface incorrectly displays the following error for monitoring: Failed to process subscription <subscription-id> with exception: local variable ‘service_tag_response’ referenced before assignment Fixed in the Panorama plugin for Azure, version 3.0.1. | - | 3.0.0 |
FWAAS-11870 | Verified | The MongoDB synchronization fails between primary and secondary nodes while storing the vm auth key, pin-id, and pin-value. Also, the failover to the secondary node requires regeneration of the registration string. | - | - |
PLUG-11917 | Resolved | VM-Series firewalls are not cleaned up from the device summary, templates, and device groups on the secondary passive Panorama node after undeploy of an upgraded and redeployed Panorama plugin for Azure 4.1.0 orchestration. | - | 4.1.0 |
PLUG-6343 | Resolved | Dynamic address groups that include the resource group tag retrieve the IP addresses for application gateways and load balancers, but not the IP addresses of VM instances in the resource group. This occurs because the Azure API sometimes returns the resource group tag string in all capital letters and sometimes in all lower case letters. Workaround: When creating a dynamic address group with a resource group tag, add the all-capital tag and the all-lower-case tag separated by the OR operator (see the match criteria sketch after this table). | - | 3.0.0 |
PLUG-13934 | Resolved | Fixed an issue that caused the Azure plugin orchestration to incorrectly determine that an Azure resource group was deleted. | - | azure |
PLUG-10437 | Resolved | After a successful VM-Series firewall deployment with PAN-OS 10.0.9 and later and Azure plugin 3.2.0, the deployment status displays as Warning even though the firewalls are successfully connected to Panorama. | - | 3.2.0, 3.2.1, 4.1.0 |
PLUG-1613 | Resolved | Downgrade is not supported for plugin versions. | - | 2.0.0 |
PLUG-13846 | Resolved | When you change the region on a successfully deployed monitoring definition, a commit is required to populate the IP-tag information that corresponds to the new region. | - | 5.1.1 |
PLUG-15904 | Resolved | The Panorama orchestrated VM-Series firewall deployment fails when the VM-Series image from the custom resource group is used. | - | 5.1.2 |
PLUG-1711 | Resolved | VNET peering between AKS clusters and Inbound Resource Groups sometimes causes a delay in scheduling and pods are in the Pending, Terminating, or Unknown state. If this happens, restart the nodes. | - | 2.0.0 |
PLUG-11312 | Verified | Fixed an issue that caused redeployment to fail after upgrading VM-Series for Azure firewalls with BYOL licenses. To change the license type of deployed VM-Series firewalls, you must delicense all the firewalls and then redeploy them using the new auth code. Note that once the firewalls are delicensed, they can no longer process traffic and traffic is dropped. | 4.1.0 | 3.2.1, 4.1.0 |
PLUG-14214 | Resolved | Fixed an issue that caused disassociating cloud devices from the log collector group to fail when cleaning up stale devices, and that issued a partial commit every second when the CNGFW daemon failed to clean up stale devices in some cases. | - | 5.1.1 |
PLUG-10452 | Resolved | Device groups are not listed in the plugin debug command debug plugins azure azure-tags dump-all monitoring-definition MD1 action send-to-file when multiple monitoring definitions are configured. | - | 3.2.0 |
PLUG-11909 | Resolved | After performing an undeploy of a successful orchestration with the Panorama plugin for Azure 4.1.0, when attempting to commit a new configuration, you might see device group, template, or template stack settings from the previous deployment in the commit list. | - | 4.1.0 |
PLUG-10785 | Verified | Fixed an issue where an undeploy triggered after a failed deployment did not clear the template stacks and device groups that were created during the deployment. | 3.2.1 | 3.2.0 |
PLUG-10896 | Verified | In Azure 3.2.1 deployments where a vNet is connected to vWAN, an InvalidAuthenticationTokenTenant error is observed in the deployment logs. However, no change in functionality is observed. | - | 3.2.1 |
PLUG-7434 | Verified | The Panorama plugin for Azure only supports general access releases. Fixed in the Panorama plugin for Azure, version 3.0.1. With this fix, the plugin supports other release types. | - | 3.0.0 |
PLUG-6987 | Verified | If you create an Azure deployment with a hub stack only, launch the deployment, and edit the deployment configuration to add an inbound stack, the plugin UI does not allow you to choose a device group and a VM size. Fixed in the Panorama plugin for Azure, version 3.0.1. | - | 3.0.0 |
PLUG-12896 | Verified | Fixed an issue that incorrectly displayed the firewall deployment status in the warning state (firewall not connected to Panorama after 20 minutes) even though the deployed firewall is in a success state. | 5.1.1 | 5.0.1 |
PLUG-12279 | Verified | Fixed an issue that caused resource group validation to fail due to capitalization differences between the configured resource group name on Azure and the resource group name returned in API responses from Azure. | - | 4.1.0 |
PLUG-2074 | Resolved | After a VM Scale Set (VMSS) is deleted, wait until the resource group deletion is complete before you attempt to delicense the VMs. When the deletion is complete, issue the following command: request plugins azure force-delicense-deleted-vms | - | 2.0.0 |
PLUG-10774 | Verified | Orchestration deployment might fail in unsupported regions such as canadaeast, because the migration to a workspace-based Application Insights instance is not yet implemented. | - | 3.2.0, 3.2.1, 4.0.0 |
PLUG-8194 | Verified | Fixed an issue that caused Panorama orchestrated deployment on Azure to fail when an inbound stack was added to an existing, successful deployment. | - | 3.0.1 |
PLUG-12945 | Resolved | Fixed an issue where the Azure plugin on Panorama showed all possible service tags, but when creating a dynamic address group, the parent-level service objects did not contain all IP ranges listed by Microsoft. | - | 4.1.0 |
PLUG-1876 | Resolved | On rare occasions, you see a message indicating the template configuration is out of sync. Check the syslog and push the configuration to your managed devices. | - | 2.0.0 |
PLUG-5793 | Resolved | The VM-Series firewall on Azure can only handle traffic that originated in the same region where the firewall is deployed. Traffic originating from a different region is not seen by the firewall. | - | 3.0.0 |
PLUG-1646 | Resolved | Azure plugin 2.0 does not support deployments with a proxy server. | - | 2.0.0 |
PLUG-7024 | Verified | Some parts of the user interface use the terms Egress Private IP and Egress Public IP, while documentation and other parts of the UI use Hub Private IP and Hub Public IP. Fixed in the Panorama plugin for Azure, version 3.0.1. With this fix, the user interface replaces “Egress” with “Hub”. | - | 3.0.0 |
FWAAS-9041 | Verified | Fixed an issue where device Server Profiles options like LDAP, Syslog, and RADIUS appeared in a disabled state in Panorama Template for CNGFW devices. This issue is fixed in the Panorama plugin for Azure 5.1.1. | Azure Post GA | - |
PLUG-13852 | TODO | Downgrading from version 5.1.1 is not supported. Palo Alto Networks recommends that you uninstall version 5.1.1 and then install the desired version. When downgrading to a previous version, the displayed error message might be misleading. Ignore this error message. | - | 5.1.1 |
PLUG-4572 | Verified | In an Azure deployment orchestrated from Panorama, outbound ICMP traffic cannot be handled by the VM-Series firewall due to a limitation in the Azure load balancer. | Cl-Int-2020-June | 3.0.0 |
PLUG-14947 | Closed | Fixed an issue that prevented the Panorama plugin for Azure from connecting to Azure. | 3.2.2 | 2.0.3 |
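The following is a minimal sketch of the PLUG-6343 workaround. The tag key and resource group name are hypothetical; the azure.tag.<key>.<value> form follows the log excerpt shown under PLUG-3478, so confirm the exact tag strings registered in your environment before using them. The dynamic address group match criteria combine the all-capital and all-lower-case forms of the resource group tag with the or operator:

'azure.tag.resource-group.FINANCE-RG' or 'azure.tag.resource-group.finance-rg'

With both forms listed, the dynamic address group matches the tag regardless of the capitalization the Azure API returns.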