WildFire Cluster Upgrade Prerequisites
Before proceeding with the WildFire cluster upgrade tasks, verify that each node in the
cluster is ready to undergo the upgrade. Perform these verification tasks
immediately before the scheduled WildFire cluster upgrade.
- Perform validation checks on all WildFire appliance cluster nodes:
- The following examples show the CLI prompt for a WildFire appliance passive controller; however, these commands apply to all WildFire appliance cluster node roles.
- Verify that all jobs running on the WildFire appliance have completed (a scripted check is sketched at the end of this section):
admin@WF-500(passive-controller)> show jobs pending
admin@WF-500(passive-controller)> show jobs processed
- Verify that the appliance has sufficient disk space, that all interfaces are up with the expected ARP entries, and that the interface counters do not show any anomalies (a counter-scanning sketch appears at the end of this section):
admin@WF-500(passive-controller)> show system disk-space
admin@WF-500(passive-controller)> show interface all
admin@WF-500(passive-controller)> show arp all
admin@WF-500(passive-controller)> show interface eth1
admin@WF-500(passive-controller)> show interface eth2
admin@WF-500(passive-controller)> show interface eth3
admin@WF-500(passive-controller)> show counter interface management
admin@WF-500(passive-controller)> show counter interface eth1
admin@WF-500(passive-controller)> show counter interface eth2
admin@WF-500(passive-controller)> show counter interface eth3
- Verify that the WildFire services that handle configuration and the task queue are running (a diagnostics-capture sketch appears at the end of this section):
admin@WF-500(passive-controller)> debug cluster diagnostic
admin@WF-500(passive-controller)> debug cluster agent connectivity
admin@WF-500(passive-controller)> debug cluster agent dump-kv
- Determine the cluster controller pair roles. For each WildFire appliance in the cluster, refer to the Node mode and HA priority fields, and note the role assigned to each appliance; this dictates the order in which the nodes are upgraded (a field-extraction sketch appears at the end of this section). Refer to the example values below.
admin@WF-500(passive-controller)> show cluster membership
- Active (Controller Node Pair)—Node mode: controller Self, Server role: True, HA priority: primary
- Passive (Controller Node Pair)—Node mode: controller Peer, Server role: True, HA priority: secondary
- Determine the cluster worker node roles. On one of the WildFire appliances in the cluster, refer to the Mode and Server fields; this command retrieves the worker node role details for all appliances enrolled in the cluster. Note the role assigned to each appliance; this dictates the order in which the nodes are upgraded (a table-parsing sketch appears at the end of this section). Refer to the output example below.
admin@WF-500(passive-controller)> show cluster all-peers
- Server Node (Cluster Server)—Mode: worker, Server: True
- Worker Node(s) (Cluster Client)—Mode: worker, Server: False
For example:

Address      Mode             Server   Node Name
-------      ----             ------   ---------
2.2.2.204    controller Self  True     wf204
               Service: infra signature wfcore wfpc
               Status: Connected, Server role assigned
               Changed: Mon, 10 Mar 2025 02:47:33 -0700
               WF App:  global-queue-service: ReadyLeader
                        global-db-service: ReadyLeader
                        siggen-db: ReadyMaster
                        wildfire-management-service: Done
                        wildfire-apps-service: Ready
2.2.2.205    controller Peer  True     wf205
               Service: infra signature wfcore wfpc
               Status: Connected, Server role assigned
               Changed: Mon, 10 Mar 2025 02:47:33 -0700
               WF App:  global-queue-service: JoinedCluster
                        global-db-service: Ready
                        siggen-db: ReadySlave
                        wildfire-management-service: Done
                        wildfire-apps-service: Ready
2.2.2.202    worker           True     wf202B
               Service: infra wfcore wfpc
               Status: Connected, Server role assigned
               Changed: Mon, 10 Mar 2025 02:47:33 -0700
               WF App:  global-queue-service: JoinedCluster
                        global-db-service: JoinedCluster
                        siggen-db: Stopped
                        wildfire-management-service: Done
                        wildfire-apps-service: Ready
2.2.2.206    worker           False    wf206B
               Service: infra wfpc
               Status: Connected
               Changed: Tue, 18 Mar 2025 09:07:54 -0700
               WF App:  global-queue-service: StandbyAsWorker
                        global-db-service: StandbyAsWorker
                        siggen-db: Deregistered
                        wildfire-management-service: Done
                        wildfire-apps-service: Ready
Diag report:
    2.2.2.202: reported leader '2.2.2.204', age 0.
    2.2.2.204: local node passed sanity check.
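The job-queue check can be scripted across nodes. The following is a minimal sketch, assuming Python 3 with the paramiko library; the hostnames, credentials, and the assumption that the appliance accepts non-interactive SSH command execution (some PAN-OS versions require an interactive shell instead) are placeholders, not values from this document.

# Minimal sketch: flag cluster nodes that still report pending jobs.
# Hostnames and credentials are placeholders; adapt to your environment.
import paramiko

NODES = ["wf204.example.net", "wf205.example.net"]  # placeholder hostnames

def run_command(host, command, username="admin", password="changeme"):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username, password=password)
    _, stdout, _ = client.exec_command(command)
    output = stdout.read().decode()
    client.close()
    return output

for node in NODES:
    pending = run_command(node, "show jobs pending")
    # Heuristic: any non-header, non-separator line is treated as a queued job.
    lines = [l for l in pending.splitlines()[1:] if l.strip() and not l.startswith("---")]
    if lines:
        print(f"{node}: pending jobs remain -- wait before upgrading")
    else:
        print(f"{node}: no pending jobs")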
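If you save each show counter interface output to a file (for example, with your terminal emulator's session logging), scanning for anomalies can be automated. A minimal sketch in Python; the assumption that error- and drop-related counters appear as a name followed by a numeric value is ours, not this document's, so adjust the pattern to your PAN-OS version.

# Minimal sketch: scan saved "show counter interface ..." output for
# nonzero error/drop counters. The "name ... value" layout is assumed.
import re
import sys

def anomalies(text):
    hits = []
    for line in text.splitlines():
        m = re.search(r"([\w-]*(?:error|drop)[\w-]*)\D+(\d+)", line, re.IGNORECASE)
        if m and int(m.group(2)) > 0:
            hits.append((m.group(1), int(m.group(2))))
    return hits

if __name__ == "__main__":
    with open(sys.argv[1]) as f:          # e.g. counters-eth1.txt
        for name, value in anomalies(f.read()):
            print(f"possible anomaly: {name} = {value}")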
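Because the cluster diagnostic output is long, capturing all three debug commands per node into timestamped files makes side-by-side review easier. A minimal sketch under the same paramiko and non-interactive SSH assumptions as above:

# Minimal sketch: capture cluster diagnostics from each node to a file.
import datetime
import paramiko

NODES = ["wf204.example.net", "wf205.example.net"]   # placeholder hostnames
COMMANDS = [
    "debug cluster diagnostic",
    "debug cluster agent connectivity",
    "debug cluster agent dump-kv",
]

stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
for node in NODES:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(node, username="admin", password="changeme")  # placeholders
    with open(f"{node}-cluster-diag-{stamp}.txt", "w") as f:
        for cmd in COMMANDS:
            _, stdout, _ = client.exec_command(cmd)
            f.write(f"### {cmd}\n{stdout.read().decode()}\n")
    client.close()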
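To record the controller roles programmatically, the Node mode and HA priority fields can be pulled out of saved show cluster membership output. A minimal sketch, assuming the fields appear as "Node mode: ..." and "HA priority: ..." lines; the exact layout may differ by PAN-OS version.

# Minimal sketch: extract role fields from saved
# "show cluster membership" output; the field layout is assumed.
import re
import sys

with open(sys.argv[1]) as f:              # e.g. membership-wf204.txt
    text = f.read()
mode = re.search(r"Node mode:\s*(\S+)", text)
prio = re.search(r"HA priority:\s*(\S+)", text)
print("Node mode  :", mode.group(1) if mode else "not found")
print("HA priority:", prio.group(1) if prio else "not found")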
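Similarly, the Mode and Server flags for every node can be pulled from saved show cluster all-peers output. A minimal sketch keyed to the column layout of the example above; verify the pattern against your own output before relying on it.

# Minimal sketch: list Mode and Server flag per node from saved
# "show cluster all-peers" output, keyed to the example layout above.
import re
import sys

with open(sys.argv[1]) as f:              # e.g. all-peers.txt
    text = f.read()
# Rows look like: "2.2.2.204  controller Self  True  wf204"
row = re.compile(
    r"^(\d+\.\d+\.\d+\.\d+)\s+(controller(?:\s+\w+)?|worker)\s+(True|False)\s+(\S+)",
    re.MULTILINE,
)
for addr, mode, server, name in row.findall(text):
    print(f"{name:10} {addr:15} mode={mode:16} server={server}")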