Known Issues in Kubernetes Plugin 2.0.0

The following list describes known issues in the Panorama plugin for Kubernetes version 2.0.0.

PLUG-8446

When configuring a monitoring definition, the monitoring watcher might become stuck in the initialization stage and stop receiving new updates.
Workaround: Restart the plugin with the following command: request plugins reset-plugin plugin-name kubernetes
This issue is fixed in Panorama Plugin for Kubernetes 2.0.1.

PLUG-5569

On occasion, CN-MGMT pods fail to connect to Panorama.
Workaround: Commit the Panorama configuration after the CN-MGMT pod successfully registers with Panorama.
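To confirm that a CN-MGMT pod has registered, you can, for example, verify that its serial number appears in the output of the following Panorama CLI command before you commit:
show devices connected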

PLUG-5320

If you did not allocate adequate tokens and some CN-NGFW pods are running on the 4-hour license grace period, then on a Panorama HA failover the CN-NGFW pods that connect first are allocated licenses. As a result, a CN-NGFW pod that was licensed can become unlicensed based on the order in which pods connect to Panorama. To ensure continuous coverage, verify that you have allocated the required number of tokens for securing your deployment.
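For example (the per-pod token cost here is illustrative and depends on your CN-Series license), if each CN-NGFW pod consumes one token and you run 20 pods, allocate at least 20 tokens so that no pod has to rely on the 4-hour grace period during a failover.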

PLUG-5250

When you uninstall the Kubernetes plugin on Panorama, you must also delete the default template (K8S-Network-Setup) that the plugin automatically creates. If you downgrade to Panorama 9.1 or an earlier version without manually deleting this template, commits on Panorama fail because Panorama 9.1 and earlier versions support only up to 30 interfaces per template, and this template has 60 interfaces.
Workaround: Delete the K8S-Network-Setup template either before or after downgrading. The commit on Panorama succeeds after you remove this template.
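As a sketch, assuming CLI access to Panorama, you can delete the template in configuration mode (you can also delete it from the web interface under Panorama > Templates):
configure
delete template K8S-Network-Setup
commit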

PLUG-4888

If you update the number of tokens allocated for licensing the CN-Series firewalls (Kubernetes > Setup > Licenses), the license Issued Date is updated to match the current date on Panorama when the plugin next communicates with the licensing server.

PLUG-4724

When the Kubernetes cluster becomes unreachable from the Panorama plugin, all the nodes that were licensed up to that point remain licensed. Tokens can be returned to the licensing server only when cluster connectivity exists, so you must reestablish cluster connectivity in the Panorama plugin to reclaim tokens.
Workaround: Delete the cluster configuration on Panorama.

PLUG-4664

When you deploy Panorama in an HA setup, each active Panorama peer consumes a license token for each CN-NGFW pod that it manages.
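For example, a Panorama HA pair managing 10 CN-NGFW pods consumes 20 tokens in total (10 per peer) rather than 10, so size your token allocation for both peers.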

PLUG-4549

API calls from the Kubernetes plugin to the Kubernetes API server sometimes become unresponsive due to network connectivity issues. When this issue occurs, a system log is generated with the following message:
Subprocess hanging for <mon-def-name>. Plugin watcher may be in bad state. To resolve please run following command: request plugins reset-plugin plugin-name kubernetes
Workaround: When this issue occurs, restart the plugin from the CLI with the following command: request plugins reset-plugin plugin-name kubernetes
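To check basic network reachability of the Kubernetes API server before restarting the plugin, you can, for example, query its version endpoint from a host on the same network path (the address and port are placeholders, and depending on your cluster's settings the endpoint might require authentication):
curl -k https://<api-server-address>:6443/version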

PLUG-3543

When you add a large number of Kubernetes clusters to Panorama, it might take up to 10 minutes to create the service objects and register the IP addresses on Panorama, depending on the number of services running on each cluster.
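To gauge how long this might take, you can, for example, count the services in a cluster with kubectl (assuming you have kubectl access to that cluster):
kubectl get services --all-namespaces --no-headers | wc -l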

PLUG-3524

If you reference the same device group in more than one monitoring definition on Panorama (Plugins > Kubernetes > Monitoring Definition), the tags associated with one cluster and its monitoring definition might be shared with the clusters associated with the other monitoring definitions.