Cumulus Networks

NVIDIA NetQ 3.2 User Guide

NVIDIA® Cumulus® NetQ is a highly-scalable, modern network operations tool set that utilizes telemetry for deep troubleshooting, visibility, and automated workflows from a single GUI, reducing maintenance and network downtime. It combines the ability to easily upgrade, configure, and deploy network elements with a full suite of operations capabilities, such as visibility, troubleshooting, validation, trace, and comparative look-back functionality.

This guide is intended for network administrators who are responsible for deploying, configuring, monitoring and troubleshooting the network in their data center or campus environment. NetQ 3.2 offers the ability to easily monitor and manage your network infrastructure and operational health. This guide provides instructions and information about monitoring individual components of the network, the network as a whole, and the NetQ software applications using the NetQ command line interface (NetQ CLI), NetQ (graphical) user interface (NetQ UI), and NetQ Admin UI.

What's New

NVIDIA NetQ 3.2 eases deployment and maintenance activities for your data center networks with new configuration, performance, and security features and improvements.

What’s New in NetQ 3.2.1

NetQ 3.2.1 contains bug fixes.

What’s New in NetQ 3.2.0

NetQ 3.2.0 includes the following new features and improvements:

Upgrade paths for customers include:

Upgrades from NetQ 2.3.x and earlier require a fresh installation.

For information regarding bug fixes and known issues present in this release, refer to the release notes.

NetQ CLI Changes

A number of commands have changed in this release to accommodate the addition of new options or to simplify their syntax. Additionally, new commands have been added and others have been removed. A summary of those changes is provided here.

New Commands

The following table summarizes the new commands available with this release. They include commands for IP address and neighbor history, premise selection, and MAC commentary. Example invocations follow the table.

Command: netq [<hostname>] show address-history <text-prefix> [ifname <text-ifname>] [vrf <text-vrf>] [diff] [between <text-time> and <text-endtime>] [listby <text-list-by>] [json]
Summary: Shows the history for a given IP address and prefix.
Version: 3.2.0

Command: netq [<hostname>] show neighbor-history <text-ipaddress> [ifname <text-ifname>] [diff] [between <text-time> and <text-endtime>] [listby <text-list-by>] [json]
Summary: Shows the neighbor history for a given IP address.
Version: 3.2.0

Command: netq [<hostname>] show mac-commentary <mac> vlan <1-4096> [between <text-time> and <text-endtime>] [json]
Summary: Shows commentary information for a given MAC address.
Version: 3.2.0

Command: netq config add agent wjh-threshold (latency|congestion) <text-tc-list> <text-port-list> <text-th-hi> <text-th-lo>
Summary: Configures latency and congestion thresholds for Mellanox What Just Happened (WJH).
Version: 3.2.0

Command: netq config del agent wjh-threshold (latency|congestion) <text-tc-list>
Summary: Removes a Mellanox WJH threshold configuration.
Version: 3.2.0

Command: netq config select cli premise <text-premise>
Summary: Specifies which premise to use.
Version: 3.2.0
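
The following invocations are a minimal sketch of how these new commands might be used from a system with the NetQ CLI installed. The hostname, prefix, IP address, MAC address, VLAN, traffic class list, port list, threshold, and premise values are placeholders for illustration only, not values from this release.

# IP address and neighbor history for illustrative addresses on leaf01
cumulus@switch:~$ netq leaf01 show address-history 10.1.10.0/24
cumulus@switch:~$ netq leaf01 show neighbor-history 10.1.10.2
# MAC commentary for an example MAC address on VLAN 10
cumulus@switch:~$ netq leaf01 show mac-commentary 44:38:39:00:00:5e vlan 10
# WJH latency thresholds for example traffic classes and ports
cumulus@switch:~$ netq config add agent wjh-threshold latency 0,1 swp1,swp2 10 1
# select a premise for the CLI to use (premise name is hypothetical)
cumulus@switch:~$ netq config select cli premise datacenter-west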

Modified Commands

The following table summarizes the commands that have been changed with this release; example invocations of the new options follow the table.

Updated Command: netq add tca [event_id <text-event-id-anchor>] [tca_id <text-tca-id-anchor>] [scope <text-scope-anchor>] [severity info | severity critical] [is_active true | is_active false] [suppress_until <text-suppress-ts>] [threshold_type user_set | threshold_type vendor_set] [ threshold <text-threshold-value> ] [channel <text-channel-name-anchor> | channel drop <text-drop-channel-name>]
Old Command: netq add tca [event_id <text-event-id-anchor>] [scope <text-scope-anchor>] [tca_id <text-tca-id-anchor>] [severity info | severity critical] [is_active true | is_active false] [suppress_until <text-suppress-ts>] [ threshold <text-threshold-value> ] [channel <text-channel-name-anchor> | channel drop <text-drop-channel-name>]
What Changed: Added the threshold_type option to indicate user-configured or vendor-configured thresholds. Also switched the positions of the tca_id and scope options.
Version: 3.2.0

Updated Command: netq config show agent [kubernetes-monitor|loglevel|stats|sensors|frr-monitor|wjh|wjh-threshold|cpu-limit] [json]
Old Command: netq config show agent [kubernetes-monitor|loglevel|stats|sensors|frr-monitor|wjh|cpu-limit] [json]
What Changed: The command now shows Mellanox WJH latency and congestion thresholds.
Version: 3.2.0

Updated Command: netq [<hostname>] show events [level info | level error | level warning | level critical | level debug] [type clsupport | type ntp | type mtu | type configdiff | type vlan | type trace | type vxlan | type clag | type bgp | type interfaces | type interfaces-physical | type agents | type ospf | type evpn | type macs | type services | type lldp | type license | type os | type sensors | type btrfsinfo | type lcm] [between <text-time> and <text-endtime>] [json]
Old Command: netq [<hostname>] show events [level info | level error | level warning | level critical | level debug] [type clsupport | type ntp | type mtu | type configdiff | type vlan | type trace | type vxlan | type clag | type bgp | type interfaces | type interfaces-physical | type agents | type ospf | type evpn | type macs | type services | type lldp | type license | type os | type sensors | type btrfsinfo] [between <text-time> and <text-endtime>] [json]
What Changed: Added the type lcm option for lifecycle management event information.
Version: 3.2.0

Updated Command: netq bootstrap master (interface <text-opta-ifname>|ip-addr <text-ip-addr>) tarball <text-tarball-name> [pod-ip-range <text-pod-ip-range>]
Old Command: netq bootstrap master (interface <text-opta-ifname>|ip-addr <text-ip-addr>) tarball <text-tarball-name>
What Changed: Added the pod-ip-range <text-pod-ip-range> option, enabling you to specify a range of IP addresses for the pod.
Version: 3.2.0

Updated Command: netq [<hostname>] show dom type (module_temp|module_voltage) [interface <text-dom-port-anchor>] [around <text-time>] [json]
Old Command: netq [<hostname>] show dom type (module_temperature|module_voltage) [interface <text-dom-port-anchor>] [around <text-time>] [json]
What Changed: Renamed the module_temperature variable to module_temp.
Version: 3.2.0

Updated Command: netq [<hostname>] show wjh-drop <text-drop-type> [ingress-port <text-ingress-port>] [severity <text-severity>] [reason <text-reason>] [src-ip <text-src-ip>] [dst-ip <text-dst-ip>] [proto <text-proto>] [src-port <text-src-port>] [dst-port <text-dst-port>] [src-mac <text-src-mac>] [dst-mac <text-dst-mac>] [egress-port <text-egress-port>] [traffic-class <text-traffic-class>] [rule-id-acl <text-rule-id-acl>] [between <text-time> and <text-endtime>] [around <text-time>] [json]
Old Command: netq [<hostname>] show wjh-drop <text-drop-type> [ingress-port <text-ingress-port>] [reason <text-reason>] [src-ip <text-src-ip>] [dst-ip <text-dst-ip>] [proto <text-proto>] [src-port <text-src-port>] [dst-port <text-dst-port>] [src-mac <text-src-mac>] [dst-mac <text-dst-mac>] [egress-port <text-egress-port>] [traffic-class <text-traffic-class>] [rule-id-acl <text-rule-id-acl>] [between <text-time> and <text-endtime>] [around <text-time>] [json]
What Changed: Added the severity <text-severity> option.
Version: 3.2.0

Updated Command: netq [<hostname>] show wjh-drop [ingress-port <text-ingress-port>] [severity <text-severity>] [details] [between <text-time> and <text-endtime>] [around <text-time>] [json]
Old Command: netq [<hostname>] show wjh-drop [ingress-port <text-ingress-port>] [details] [between <text-time> and <text-endtime>] [around <text-time>] [json]
What Changed: Added the severity <text-severity> option.
Version: 3.2.0
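
As a sketch of the modified commands in use, the following hypothetical invocations exercise the options added in this release; the drop type, severity, event ID, scope, threshold, and channel values are illustrative assumptions rather than prescribed values.

# lifecycle management events via the new 'type lcm' option
cumulus@switch:~$ netq show events type lcm
# digital optics temperature using the renamed module_temp keyword
cumulus@switch:~$ netq show dom type module_temp
# WJH drop events filtered with the new 'severity' option
cumulus@switch:~$ netq show wjh-drop l1 severity Error
# threshold-crossing alert created with the new 'threshold_type' option
cumulus@switch:~$ netq add tca event_id TCA_SENSOR_TEMPERATURE_UPPER scope leaf01 threshold_type user_set threshold 75 channel slack-netq-alerts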

Get Started

This topic provides overviews of NetQ components, architecture, and the CLI and UI interfaces. These provide the basis for understanding and following the instructions contained in the rest of the user guide.

Cumulus NetQ Overview

Cumulus® NetQ is a highly-scalable, modern network operations tool set that provides visibility and troubleshooting of your overlay and underlay networks in real-time. NetQ delivers actionable insights and operational intelligence about the health of your data center - from the container, virtual machine, or host, all the way to the switch and port. NetQ correlates configuration and operational status, and instantly identifies and tracks state changes while simplifying management for the entire Linux-based data center. With NetQ, network operations change from a manual, reactive, box-by-box approach to an automated, informed and agile one.

Cumulus NetQ performs three primary functions:

NetQ is available as an on-site or in-cloud deployment.

Unlike other network operations tools, NetQ delivers significant operational improvements to your network management and maintenance processes. It reduces the complexity of the data center network through real-time visibility into hardware and software status, and it eliminates the guesswork from investigating issues by analyzing and presenting detailed, focused data.

Demystify Overlay Networks

While overlay networks provide significant advantages in network management, it can be difficult to troubleshoot issues that occur in the overlay when working one box at a time. You cannot correlate which events (configuration changes, power outages, and so forth) may have caused problems in the network, or when they occurred, and only a sampling of data is available for your analysis. By contrast, with Cumulus NetQ deployed, you have a networkwide view of the overlay network, you can correlate events with what is happening now or in the past, and you have real-time data to fill out the complete picture of your network health and operation.

In summary:

Without NetQ | With NetQ
Difficult to debug overlay network | View networkwide status of overlay network
Hard to find out what happened in the past | View historical activity with time-machine view
Periodically sampled data | Real-time collection of telemetry data for a more complete data set

Protect Network Integrity with NetQ Validation

Network configuration changes can cause numerous trouble tickets because you are not able to test a new configuration before deploying it. When the tickets start pouring in, you are stuck with a large amount of data collected and stored in multiple tools, making it difficult at best to correlate events with the required resolution. Isolating faults in the past is challenging. By contrast, with Cumulus NetQ deployed, you can proactively verify a configuration change, catching inconsistencies and misconfigurations prior to deployment. And historical data is readily available to correlate past events with current issues.

In summary:

Troubleshoot Issues Across the Network

Troubleshooting networks is challenging in the best of times, but trying to do so manually, one box at a time, while digging through a series of long and ugly logs makes the job harder than it needs to be. Cumulus NetQ provides rolled-up and correlated network status on a regular basis, enabling you to get to the root of the problem quickly, whether it occurred recently or over a week ago. The graphical user interface presents this information visually to speed the analysis.

In summary:

Track Connectivity with NetQ Trace

Conventional trace only traverses the data path looking for problems, and does so on a node-to-node basis. For paths with a small number of hops that might be fine, but in larger networks it can become extremely time consuming. With Cumulus NetQ, both the data and control paths are verified, providing additional information. It discovers misconfigurations along all of the hops in one pass, speeding the time to resolution.

In summary:

Without NetQ | With NetQ
Trace covers only data path; hard to check control path | Both data and control paths are verified
View portion of entire path | View all paths between devices all at once to find problem paths
Node-to-node check on misconfigurations | View any misconfigurations along all hops from source to destination

Cumulus NetQ Components

Cumulus NetQ contains the following applications and key components:

While these functions apply to both the on-site and in-cloud solutions, where they reside varies, as shown here.

NetQ interfaces with event notification applications and third-party analytics tools.

Each of the NetQ components used to gather, store, and process data about the network state is described here.

NetQ Agents

NetQ Agents are software installed and running on every monitored node in the network - including Cumulus® Linux® switches, Linux bare-metal hosts, and virtual machines. The NetQ Agents push network data regularly and event information immediately to the NetQ Platform.

Switch Agents

The NetQ Agents running on Cumulus Linux switches gather the following network data via Netlink:

for the following protocols:

The NetQ Agent is supported on Cumulus Linux 3.3.2 and later.

Host Agents

The NetQ Agents running on hosts gather the same information as that for switches, plus the following network data:

The NetQ Agent obtains container information by listening to the Kubernetes orchestration tool.

The NetQ Agent is supported on hosts running Ubuntu 16.04, Red Hat® Enterprise Linux 7, and CentOS 7 Operating Systems.

NetQ Core

The NetQ core performs the data collection, storage, and processing for delivery to various user interfaces. It comprises a collection of scalable components running entirely within a single server. The NetQ software queries this server rather than individual devices, enabling greater scalability of the system. Each of these components is described briefly here.

Data Aggregation

The data aggregation component collects data coming from all of the NetQ Agents. It then filters, compresses, and forwards the data to the streaming component. The server monitors for missing messages and also monitors the NetQ Agents themselves, providing alarms when appropriate. In addition to the telemetry data collected from the NetQ Agents, the aggregation component collects information from the switches and hosts, such as vendor, model, version, and basic operational state.

Data Stores

Two types of data stores are used in the NetQ product. The first stores the raw data, data aggregations, and discrete events needed for quick response to data requests. The second stores data based on correlations, transformations and processing of the raw data.

Real-time Streaming

The streaming component processes the incoming raw data from the aggregation server in real time. It reads the metrics and stores them as a time series, and triggers alarms based on anomaly detection, thresholds, and events.

Network Services

The network services component monitors the operation of protocols and services, both individually and on a networkwide basis, and stores status details.

User Interfaces

NetQ data is available through several user interfaces:

The CLI and UI query the RESTful API for the data to present. Standard integrations with third-party notification tools can also be configured.

Data Center Network Deployments

Three deployment types are commonly used for network management in the data center:

A summary of each type is provided here.

Cumulus NetQ operates over layer 3, and can be used in both layer 2 bridged and layer 3 routed environments. Cumulus Networks recommends layer 3 routed environments whenever possible.

Out-of-band Management Deployment

Cumulus Networks recommends deploying NetQ on an out-of-band (OOB) management network to separate network management traffic from standard network data traffic, but it is not required. This figure shows a sample CLOS-based network fabric design for a data center using an OOB management network overlaid on top, where NetQ is deployed.

The physical network hardware includes:

The diagram shows physical connections (in the form of grey lines) between Spine 01 and four Leaf devices and two Exit devices, and Spine 02 and the same four Leaf devices and two Exit devices. Leaf 01 and Leaf 02 are connected to each other over a peerlink and act as an MLAG pair for Server 01 and Server 02. Leaf 03 and Leaf 04 are connected to each other over a peerlink and act as an MLAG pair for Server 03 and Server 04. The Edge is connected to both Exit devices, and the Internet node is connected to Exit 01.

Data Center Network Example

The physical management hardware includes:

These switches are connected to each of the physical network devices through a virtual network overlay, shown with purple lines.

In-band Management Deployment

While not the preferred deployment method, you might choose to implement NetQ within your data network. In this scenario, there is no overlay and all traffic to and from the NetQ Agents and the NetQ Platform traverses the data paths along with your regular network traffic. The roles of the switches in the CLOS network are the same, except that the NetQ Platform performs the aggregation function that the OOB management switch performed. If your network goes down, you might not have access to the NetQ Platform for troubleshooting.

High Availability Deployment

NetQ supports a high availability deployment for users who prefer a solution in which the collected data and processing provided by the NetQ Platform remains available through alternate equipment should the platform fail for any reason. In this configuration, three NetQ Platforms are deployed, with one as the master and two as workers (or replicas). Data from the NetQ Agents is sent to all three platforms so that if the master NetQ Platform fails, one of the replicas automatically becomes the master and continues to store and provide the telemetry data. This example is based on an OOB management configuration, and modified to support high availability for NetQ.

Cumulus NetQ Operation

In either in-band or out-of-band deployments, NetQ offers networkwide configuration and device management, proactive monitoring capabilities, and performance diagnostics for complete management of your network. Each component of the solution provides a critical element to make this possible.

The NetQ Agent

From a software perspective, a network switch has software associated with the hardware platform, the operating system, and communications. For data centers, the software on a Cumulus Linux network switch would be similar to the diagram shown here.

The NetQ Agent interacts with the various components and software on switches and hosts and provides the gathered information to the NetQ Platform. You can view the data using the NetQ CLI or UI.

The NetQ Agent polls the user space applications for information about the performance of the various routing protocols and services that are running on the switch. Cumulus Networks supports the BGP and OSPF protocols of FRRouting (FRR), as well as static addressing. Cumulus Linux also supports LLDP and MSTP among other protocols, and a variety of services such as systemd and sensors. For hosts, the NetQ Agent also polls for performance of containers managed with Kubernetes. All of this information is used to provide the current health of the network and verify it is configured and operating correctly.

For example, if the NetQ Agent learns that an interface has gone down, a new BGP neighbor has been configured, or a container has moved, it provides that information to the NetQ Platform. That information can then be used to notify users of the operational state change through various channels. By default, data is logged in the database, but you can use the CLI (netq show events) or configure the Event Service in NetQ to send the information to a third-party notification application as well. NetQ supports PagerDuty and Slack integrations.
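
For instance, using the netq show events syntax listed in the command tables above, a query along these lines displays recent BGP or critical events from the CLI; the type and level values here are only illustrative:

# BGP-related events recorded in the database
cumulus@switch:~$ netq show events type bgp
# all events logged at critical level
cumulus@switch:~$ netq show events level critical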

The NetQ Agent interacts with the Netlink communications between the Linux kernel and the user space, listening for changes to the network state, configurations, routes and MAC addresses. NetQ uses this information to enable notifications about these changes so that network operators and administrators can respond quickly when changes are not expected or favorable.

For example, if a new route is added or a MAC address removed, the NetQ Agent records these changes and sends that information to the NetQ Platform. Based on the configuration of the Event Service, these changes can be sent to a variety of locations for end user response.

The NetQ Agent also interacts with the hardware platform to obtain performance information about various physical components, such as fans and power supplies, on the switch. Operational states and temperatures are measured and reported, along with cabling information to enable management of the hardware and cabling, and proactive maintenance.

For example, as thermal sensors in the switch indicate that it is becoming very warm, various levels of alarms are generated. These are then communicated through notifications according to the Event Service configuration.

The NetQ Platform

Once the collected data is sent to and stored in the NetQ database, you can:

Validate Configurations

The NetQ CLI enables validation of your network health through two sets of commands: netq check and netq show. These commands extract information from the Network Services component and the Event Service. The Network Services component continually validates the connectivity and configuration of the devices and protocols running on the network. The netq check and netq show commands display the status of the various components and services on a networkwide and complete software stack basis. For example, you can perform a networkwide check on all sessions of BGP with a single netq check bgp command. The command lists any devices that have misconfigurations or other operational errors in seconds. When errors or misconfigurations are present, the netq show bgp command displays the BGP configuration on each device so that you can compare and contrast the devices, looking for potential causes. netq check and netq show commands are available for numerous components and services, as shown in the following table.

Component or Service | Check | Show
Agents | X | X
BGP | X | X
CLAG (MLAG) | X | X
Events |  | X
EVPN | X | X
Interfaces | X | X
Inventory |  | X
IPv4/v6 |  | X
Kubernetes |  | X
License | X |
LLDP |  | X
MACs |  | X
MTU | X |
NTP | X | X
OSPF | X | X
Sensors | X | X
Services |  | X
VLAN | X | X
VXLAN | X | X

Monitor Communication Paths

The trace engine is used to validate the available communication paths between two network devices. The corresponding netq trace command enables you to view all of the paths between the two devices and whether there are any breaks in the paths. This example shows two successful paths between server12 and leaf11, both with an MTU of 9152. The first command shows the output in path-by-path tabular mode. The second command shows the same output as a tree.

cumulus@switch:~$ netq trace 10.0.0.13 from 10.0.0.21
Number of Paths: 2
Number of Paths with Errors: 0
Number of Paths with Warnings: 0
Path MTU: 9152
Id  Hop Hostname    InPort          InTun, RtrIf    OutRtrIf, Tun   OutPort
--- --- ----------- --------------- --------------- --------------- ---------------
1   1   server12                                                    bond1.1002
    2   leaf12      swp8                            vlan1002        peerlink-1
    3   leaf11      swp6            vlan1002                        vlan1002
--- --- ----------- --------------- --------------- --------------- ---------------
2   1   server12                                                    bond1.1002
    2   leaf11      swp8                                            vlan1002
--- --- ----------- --------------- --------------- --------------- ---------------
 
 
cumulus@switch:~$ netq trace 10.0.0.13 from 10.0.0.21 pretty
Number of Paths: 2
Number of Paths with Errors: 0
Number of Paths with Warnings: 0
Path MTU: 9152
 hostd-12 bond1.1002 -- swp8 leaf12 <vlan1002> peerlink-1 -- swp6 <vlan1002> leaf11 vlan1002
          bond1.1002 -- swp8 leaf11 vlan1002

This output is read as:

If the MTU does not match across the network, or any of the paths or parts of the paths have issues, that data is called out in the summary at the top of the output and shown in red along the paths, giving you a starting point for troubleshooting.

View Historical State and Configuration

All of the check, show, and trace commands can be run for the current status and for a prior point in time. For example, this is useful when you receive messages from the night before but are not seeing any problems now. You can use the netq check command to look for configuration or operational issues around the time the messages are timestamped. Then use the netq show commands to see how the devices in question were configured at that time or whether there were any changes in a given timeframe. Optionally, you can use the netq trace command to see what the connectivity looked like between any problematic nodes at that time. This example shows that problems occurred on spine01, leaf04, and server03 last night. The network administrator received notifications and wants to investigate. The diagram is followed by the commands to run to determine the cause of a BGP error on spine01. Note that the commands use the around option to see the results for last night and that they can be run from any switch in the network.

cumulus@switch:~$ netq check bgp around 30m
Total Nodes: 25, Failed Nodes: 3, Total Sessions: 220 , Failed Sessions: 24,
Hostname          VRF             Peer Name         Peer Hostname     Reason                                        Last Changed
----------------- --------------- ----------------- ----------------- --------------------------------------------- -------------------------
exit-1            DataVrf1080     swp6.2            firewall-1        BGP session with peer firewall-1 swp6.2: AFI/ 1d:2h:6m:21s
                                                                      SAFI evpn not activated on peer              
exit-1            DataVrf1080     swp7.2            firewall-2        BGP session with peer firewall-2 (swp7.2 vrf  1d:1h:59m:43s
                                                                      DataVrf1080) failed,                         
                                                                      reason: Peer not configured                  
exit-1            DataVrf1081     swp6.3            firewall-1        BGP session with peer firewall-1 swp6.3: AFI/ 1d:2h:6m:21s
                                                                      SAFI evpn not activated on peer              
exit-1            DataVrf1081     swp7.3            firewall-2        BGP session with peer firewall-2 (swp7.3 vrf  1d:1h:59m:43s
                                                                      DataVrf1081) failed,                         
                                                                      reason: Peer not configured                  
exit-1            DataVrf1082     swp6.4            firewall-1        BGP session with peer firewall-1 swp6.4: AFI/ 1d:2h:6m:21s
                                                                      SAFI evpn not activated on peer              
exit-1            DataVrf1082     swp7.4            firewall-2        BGP session with peer firewall-2 (swp7.4 vrf  1d:1h:59m:43s
                                                                      DataVrf1082) failed,                         
                                                                      reason: Peer not configured                  
exit-1            default         swp6              firewall-1        BGP session with peer firewall-1 swp6: AFI/SA 1d:2h:6m:21s
                                                                      FI evpn not activated on peer                
exit-1            default         swp7              firewall-2        BGP session with peer firewall-2 (swp7 vrf de 1d:1h:59m:43s
...
 
cumulus@switch:~$ netq exit-1 show bgp
Matching bgp records:
Hostname          Neighbor                     VRF             ASN        Peer ASN   PfxRx        Last Changed
----------------- ---------------------------- --------------- ---------- ---------- ------------ -------------------------
exit-1            swp3(spine-1)                default         655537     655435     27/24/412    Fri Feb 15 17:20:00 2019
exit-1            swp3.2(spine-1)              DataVrf1080     655537     655435     14/12/0      Fri Feb 15 17:20:00 2019
exit-1            swp3.3(spine-1)              DataVrf1081     655537     655435     14/12/0      Fri Feb 15 17:20:00 2019
exit-1            swp3.4(spine-1)              DataVrf1082     655537     655435     14/12/0      Fri Feb 15 17:20:00 2019
exit-1            swp4(spine-2)                default         655537     655435     27/24/412    Fri Feb 15 17:20:00 2019
exit-1            swp4.2(spine-2)              DataVrf1080     655537     655435     14/12/0      Fri Feb 15 17:20:00 2019
exit-1            swp4.3(spine-2)              DataVrf1081     655537     655435     14/12/0      Fri Feb 15 17:20:00 2019
exit-1            swp4.4(spine-2)              DataVrf1082     655537     655435     13/12/0      Fri Feb 15 17:20:00 2019
exit-1            swp5(spine-3)                default         655537     655435     28/24/412    Fri Feb 15 17:20:00 2019
exit-1            swp5.2(spine-3)              DataVrf1080     655537     655435     14/12/0      Fri Feb 15 17:20:00 2019
exit-1            swp5.3(spine-3)              DataVrf1081     655537     655435     14/12/0      Fri Feb 15 17:20:00 2019
exit-1            swp5.4(spine-3)              DataVrf1082     655537     655435     14/12/0      Fri Feb 15 17:20:00 2019
exit-1            swp6(firewall-1)             default         655537     655539     73/69/-      Fri Feb 15 17:22:10 2019
exit-1            swp6.2(firewall-1)           DataVrf1080     655537     655539     73/69/-      Fri Feb 15 17:22:10 2019
exit-1            swp6.3(firewall-1)           DataVrf1081     655537     655539     73/69/-      Fri Feb 15 17:22:10 2019
exit-1            swp6.4(firewall-1)           DataVrf1082     655537     655539     73/69/-      Fri Feb 15 17:22:10 2019
exit-1            swp7                         default         655537     -          NotEstd      Fri Feb 15 17:28:48 2019
exit-1            swp7.2                       DataVrf1080     655537     -          NotEstd      Fri Feb 15 17:28:48 2019
exit-1            swp7.3                       DataVrf1081     655537     -          NotEstd      Fri Feb 15 17:28:48 2019
exit-1            swp7.4                       DataVrf1082     655537     -          NotEstd      Fri Feb 15 17:28:48 2019

Manage Network Events

The NetQ notifier manages the events that occur for the devices and components, protocols and services that it receives from the NetQ Agents. The notifier enables you to capture and filter events that occur to manage the behavior of your network. This is especially useful when an interface or routing protocol goes down and you want to get them back up and running as quickly as possible, preferably before anyone notices or complains. You can improve resolution time significantly by creating filters that focus on topics appropriate for a particular group of users. You can easily create filters around events related to BGP and MLAG session states, interfaces, links, NTP and other services, fans, power supplies, and physical sensor measurements.

For example, for operators responsible for routing, you can create an integration with a notification application that notifies them of routing issues as they occur. This is an example of a Slack message received on a netq-notifier channel indicating that the BGP session on switch leaf04 interface swp2 has gone down.

Timestamps in NetQ

Every event or entry in the NetQ database is stored with a timestamp of when the event was captured by the NetQ Agent on the switch or server. This timestamp is based on the switch or server time where the NetQ Agent is running, and is pushed in UTC format. It is important to ensure that all devices are NTP synchronized to prevent events from being displayed out of order or not displayed at all when looking for events that occurred at a particular time or within a time window.

Interface state, IP addresses, routes, ARP/ND table (IP neighbor) entries and MAC table entries carry a timestamp that represents the time the event happened (such as when a route is deleted or an interface comes up) - except the first time the NetQ agent is run. If the network has been running and stable when a NetQ agent is brought up for the first time, then this time reflects when the agent was started. Subsequent changes to these objects are captured with an accurate time of when the event happened.

Data that is captured and saved based on polling, and just about all other data in the NetQ database, including control plane state (such as BGP or MLAG), has a timestamp of when the information was captured rather than when the event actually happened, though NetQ compensates for this if the data extracted provides additional information to compute a more precise time of the event. For example, BGP uptime can be used to determine when the event actually happened in conjunction with the timestamp.

When retrieving the timestamp, command outputs display the time in three ways:

This example shows the difference between the timestamp displays.

cumulus@switch:~$ netq show bgp
Matching bgp records:
Hostname          Neighbor                     VRF             ASN        Peer ASN   PfxRx        Last Changed
----------------- ---------------------------- --------------- ---------- ---------- ------------ -------------------------
exit-1            swp3(spine-1)                default         655537     655435     27/24/412    Fri Feb 15 17:20:00 2019
exit-1            swp3.2(spine-1)              DataVrf1080     655537     655435     14/12/0      Fri Feb 15 17:20:00 2019
exit-1            swp3.3(spine-1)              DataVrf1081     655537     655435     14/12/0      Fri Feb 15 17:20:00 2019
exit-1            swp3.4(spine-1)              DataVrf1082     655537     655435     14/12/0      Fri Feb 15 17:20:00 2019
exit-1            swp4(spine-2)                default         655537     655435     27/24/412    Fri Feb 15 17:20:00 2019
exit-1            swp4.2(spine-2)              DataVrf1080     655537     655435     14/12/0      Fri Feb 15 17:20:00 2019
exit-1            swp4.3(spine-2)              DataVrf1081     655537     655435     14/12/0      Fri Feb 15 17:20:00 2019
exit-1            swp4.4(spine-2)              DataVrf1082     655537     655435     13/12/0      Fri Feb 15 17:20:00 2019
...
 
cumulus@switch:~$ netq show agents
Matching agents records:
Hostname          Status           NTP Sync Version                              Sys Uptime                Agent Uptime              Reinitialize Time          Last Changed
----------------- ---------------- -------- ------------------------------------ ------------------------- ------------------------- -------------------------- -------------------------
border01          Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:04:54 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:38 2020
border02          Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:04:57 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:33 2020
fw1               Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:04:44 2020  Tue Sep 29 21:24:48 2020  Tue Sep 29 21:24:48 2020   Thu Oct  1 16:07:26 2020
fw2               Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:04:42 2020  Tue Sep 29 21:24:48 2020  Tue Sep 29 21:24:48 2020   Thu Oct  1 16:07:22 2020
leaf01            Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 16:49:04 2020  Tue Sep 29 21:24:49 2020  Tue Sep 29 21:24:49 2020   Thu Oct  1 16:07:10 2020
leaf02            Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:14 2020  Tue Sep 29 21:24:49 2020  Tue Sep 29 21:24:49 2020   Thu Oct  1 16:07:30 2020
leaf03            Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:37 2020  Tue Sep 29 21:24:49 2020  Tue Sep 29 21:24:49 2020   Thu Oct  1 16:07:24 2020
leaf04            Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:35 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:13 2020
oob-mgmt-server   Fresh            yes      3.1.1-ub18.04u29~1599111022.78b9e43  Mon Sep 21 16:43:58 2020  Mon Sep 21 17:55:00 2020  Mon Sep 21 17:55:00 2020   Thu Oct  1 16:07:31 2020
server01          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:07 2020  Tue Sep 29 21:13:07 2020   Thu Oct  1 16:07:16 2020
server02          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:07 2020  Tue Sep 29 21:13:07 2020   Thu Oct  1 16:07:24 2020
server03          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:56 2020  Tue Sep 29 21:13:07 2020  Tue Sep 29 21:13:07 2020   Thu Oct  1 16:07:12 2020
server04          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:07 2020  Tue Sep 29 21:13:07 2020   Thu Oct  1 16:07:17 2020
server05          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:10 2020  Tue Sep 29 21:13:10 2020   Thu Oct  1 16:07:25 2020
server06          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:10 2020  Tue Sep 29 21:13:10 2020   Thu Oct  1 16:07:21 2020
server07          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:06:48 2020  Tue Sep 29 21:13:10 2020  Tue Sep 29 21:13:10 2020   Thu Oct  1 16:07:28 2020
server08          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:06:45 2020  Tue Sep 29 21:13:10 2020  Tue Sep 29 21:13:10 2020   Thu Oct  1 16:07:31 2020
spine01           Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:34 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:20 2020
spine02           Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:33 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:16 2020
spine03           Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:34 2020  Tue Sep 29 21:25:07 2020  Tue Sep 29 21:25:07 2020   Thu Oct  1 16:07:20 2020
spine04           Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:32 2020  Tue Sep 29 21:25:07 2020  Tue Sep 29 21:25:07 2020   Thu Oct  1 16:07:33 2020
 
cumulus@switch:~$ netq show agents json
{
    "agents":[
        {
            "hostname":"border01",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707894.0,
            "agentUptime":1601414698.0,
            "reinitializeTime":1601414698.0,
            "lastChanged":1601568519.0
        },
        {
            "hostname":"border02",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707897.0,
            "agentUptime":1601414698.0,
            "reinitializeTime":1601414698.0,
            "lastChanged":1601568515.0
        },
        {
            "hostname":"fw1",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707884.0,
            "agentUptime":1601414688.0,
            "reinitializeTime":1601414688.0,
            "lastChanged":1601568506.0
        },
        {
            "hostname":"fw2",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707882.0,
            "agentUptime":1601414688.0,
            "reinitializeTime":1601414688.0,
            "lastChanged":1601568503.0
        },
        {
            "hostname":"leaf01",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600706944.0,
            "agentUptime":1601414689.0,
            "reinitializeTime":1601414689.0,
            "lastChanged":1601568522.0
        },
        {
            "hostname":"leaf02",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707794.0,
            "agentUptime":1601414689.0,
            "reinitializeTime":1601414689.0,
            "lastChanged":1601568512.0
        },
        {
            "hostname":"leaf03",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707817.0,
            "agentUptime":1601414689.0,
            "reinitializeTime":1601414689.0,
            "lastChanged":1601568505.0
        },
        {
            "hostname":"leaf04",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707815.0,
            "agentUptime":1601414698.0,
            "reinitializeTime":1601414698.0,
            "lastChanged":1601568525.0
        },
        {
            "hostname":"oob-mgmt-server",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.1.1-ub18.04u29~1599111022.78b9e43",
            "sysUptime":1600706638.0,
            "agentUptime":1600710900.0,
            "reinitializeTime":1600710900.0,
            "lastChanged":1601568511.0
        },
        {
            "hostname":"server01",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-ub18.04u30~1601393774.104fb9e",
            "sysUptime":1600708797.0,
            "agentUptime":1601413987.0,
            "reinitializeTime":1601413987.0,
            "lastChanged":1601568527.0
        },
        {
            "hostname":"server02",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-ub18.04u30~1601393774.104fb9e",
            "sysUptime":1600708797.0,
            "agentUptime":1601413987.0,
            "reinitializeTime":1601413987.0,
            "lastChanged":1601568504.0
        },
        {
            "hostname":"server03",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-ub18.04u30~1601393774.104fb9e",
            "sysUptime":1600708796.0,
            "agentUptime":1601413987.0,
            "reinitializeTime":1601413987.0,
            "lastChanged":1601568522.0
        },
        {
            "hostname":"server04",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-ub18.04u30~1601393774.104fb9e",
            "sysUptime":1600708797.0,
            "agentUptime":1601413987.0,
            "reinitializeTime":1601413987.0,
            "lastChanged":1601568497.0
        },
        {
            "hostname":"server05",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-ub18.04u30~1601393774.104fb9e",
            "sysUptime":1600708797.0,
            "agentUptime":1601413990.0,
            "reinitializeTime":1601413990.0,
            "lastChanged":1601568506.0
        },
        {
            "hostname":"server06",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-ub18.04u30~1601393774.104fb9e",
            "sysUptime":1600708797.0,
            "agentUptime":1601413990.0,
            "reinitializeTime":1601413990.0,
            "lastChanged":1601568501.0
        },
        {
            "hostname":"server07",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-ub18.04u30~1601393774.104fb9e",
            "sysUptime":1600708008.0,
            "agentUptime":1601413990.0,
            "reinitializeTime":1601413990.0,
            "lastChanged":1601568508.0
        },
        {
            "hostname":"server08",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-ub18.04u30~1601393774.104fb9e",
            "sysUptime":1600708005.0,
            "agentUptime":1601413990.0,
            "reinitializeTime":1601413990.0,
            "lastChanged":1601568511.0
        },
        {
            "hostname":"spine01",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707814.0,
            "agentUptime":1601414698.0,
            "reinitializeTime":1601414698.0,
            "lastChanged":1601568502.0
        },
        {
            "hostname":"spine02",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707813.0,
            "agentUptime":1601414698.0,
            "reinitializeTime":1601414698.0,
            "lastChanged":1601568497.0
        },
        {
            "hostname":"spine03",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707814.0,
            "agentUptime":1601414707.0,
            "reinitializeTime":1601414707.0,
            "lastChanged":1601568501.0
        },
        {
            "hostname":"spine04",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707812.0,
            "agentUptime":1601414707.0,
            "reinitializeTime":1601414707.0,
            "lastChanged":1601568514.0
        }
    ],
    "truncatedResult":false
}

If a NetQ Agent is restarted on a device, the timestamps for existing objects are not updated to reflect this new restart time. Their timestamps are preserved relative to the original start time of the Agent. A rare exception is if the device is rebooted between the time the Agent is stopped and restarted; in this case, the time is once again relative to the start time of the Agent.

Exporting NetQ Data

Data from the NetQ Platform can be exported in a couple of ways:

Example Using the CLI

You can check the state of BGP on your network with netq check bgp:

cumulus@leaf01:~$ netq check bgp
Total Nodes: 25, Failed Nodes: 3, Total Sessions: 220 , Failed Sessions: 24,
Hostname          VRF             Peer Name         Peer Hostname     Reason                                        Last Changed
----------------- --------------- ----------------- ----------------- --------------------------------------------- -------------------------
exit01            DataVrf1080     swp6.2            firewall01        BGP session with peer firewall01 swp6.2: AFI/ Tue Feb 12 18:11:16 2019
                                                                      SAFI evpn not activated on peer              
exit01            DataVrf1080     swp7.2            firewall02        BGP session with peer firewall02 (swp7.2 vrf  Tue Feb 12 18:11:27 2019
                                                                      DataVrf1080) failed,                         
                                                                      reason: Peer not configured                  
exit01            DataVrf1081     swp6.3            firewall01        BGP session with peer firewall01 swp6.3: AFI/ Tue Feb 12 18:11:16 2019
                                                                      SAFI evpn not activated on peer              
exit01            DataVrf1081     swp7.3            firewall02        BGP session with peer firewall02 (swp7.3 vrf  Tue Feb 12 18:11:27 2019
                                                                      DataVrf1081) failed,                         
                                                                      reason: Peer not configured                  
...

When you show the output in JSON format, this same command looks like this:

cumulus@leaf01:~$ netq check bgp json
{
    "failedNodes":[
        {
            "peerHostname":"firewall01",
            "lastChanged":1549995080.0,
            "hostname":"exit01",
            "peerName":"swp6.2",
            "reason":"BGP session with peer firewall01 swp6.2: AFI/SAFI evpn not activated on peer",
            "vrf":"DataVrf1080"
        },
        {
            "peerHostname":"firewall02",
            "lastChanged":1549995449.7279999256,
            "hostname":"exit01",
            "peerName":"swp7.2",
            "reason":"BGP session with peer firewall02 (swp7.2 vrf DataVrf1080) failed, reason: Peer not configured",
            "vrf":"DataVrf1080"
        },
        {
            "peerHostname":"firewall01",
            "lastChanged":1549995080.0,
            "hostname":"exit01",
            "peerName":"swp6.3",
            "reason":"BGP session with peer firewall01 swp6.3: AFI/SAFI evpn not activated on peer",
            "vrf":"DataVrf1081"
        },
        {
            "peerHostname":"firewall02",
            "lastChanged":1549995449.7349998951,
            "hostname":"exit01",
            "peerName":"swp7.3",
            "reason":"BGP session with peer firewall02 (swp7.3 vrf DataVrf1081) failed, reason: Peer not configured",
            "vrf":"DataVrf1081"
        },
...
 
    ],
    "summary": {
        "checkedNodeCount": 25,
        "failedSessionCount": 24,
        "failedNodeCount": 3,
        "totalSessionCount": 220
    }
}

Example Using the UI

Open the full screen Switch Inventory card, select the data to export, and click Export.

Important File Locations

To aid in troubleshooting issues with NetQ, the following configuration and log files can provide insight into the root cause of the issue (example commands for inspecting these files follow the table):

File | Description
/etc/netq/netq.yml | The NetQ configuration file. This file appears only if you installed either the netq-apps package or the NetQ Agent on the system.
/var/log/netqd.log | The NetQ daemon log file for the NetQ CLI. This log file appears only if you installed the netq-apps package on the system.
/var/log/netq-agent.log | The NetQ Agent log file. This log file appears only if you installed the NetQ Agent on the system.
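
For example, when investigating an agent or CLI problem on a switch, you might inspect these files with standard Linux tools. This is a generic sketch; the search pattern is arbitrary and elevated privileges may be required to read the log files.

# review the NetQ configuration
cumulus@switch:~$ cat /etc/netq/netq.yml
# check the most recent agent activity
cumulus@switch:~$ sudo tail -n 50 /var/log/netq-agent.log
# search the CLI daemon log for errors
cumulus@switch:~$ sudo grep -i error /var/log/netqd.log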

NetQ User Interface Overview

The NetQ 3.x graphical user interface (UI) enables you to access NetQ capabilities through a web browser as opposed to through a terminal window using the Command Line Interface (CLI). Visual representations of the health of the network, inventory, and system events make it easy to both find faults and misconfigurations, and to fix them.

The UI is accessible from both on-premises and cloud deployments. It is supported on Google Chrome. Other popular browsers may be used, but have not been tested and may have some presentation issues.

Before you get started, you should refer to the release notes for this version.

Access the NetQ UI

The NetQ UI is a web-based application. Logging in and logging out are simple and quick. Users working with a cloud deployment of NetQ can reset their password if it is forgotten.

Log In to NetQ

To log in to the UI:

  1. Open a new Chrome browser window or tab.

  2. Enter the following URL into the address bar:

  3. Sign in.

    Default usernames and passwords for UI access:

    • NetQ On-premises: admin, admin
    • NetQ Cloud: Use credentials provided by Cumulus Networks via email titled Welcome to Cumulus NetQ!
  1. Enter your username.

  2. Enter your password.

  3. Enter a new password.

  4. Enter the new password again to confirm it.

  5. Click Update and Accept after reading the Terms of Use.

    The default Cumulus Workbench opens, with your username shown in the upper right corner of the application.

  1. Enter your username.

  2. Enter your password.

    The user-specified home workbench is displayed. If a home workbench is not specified, then the Cumulus Default workbench is displayed.

Reset a Forgotten Password

For cloud deployments, you can reset your password if it has been forgotten.

To reset a password:

  1. Enter https://netq.cumulusnetworks.com in your browser to open the login page.

  2. Click Forgot Password?

  3. Enter an email address where you want instructions to be sent for resetting the password.

  4. Click Send Reset Email, or click Cancel to return to login page.

  5. Log in to the email account where you sent the reset message. Look for a message with a subject of NetQ Password Reset Link from netq-sre@cumulusnetworks.com.

  6. Click on the link provided to open the Reset Password dialog.

  7. Enter a new password.

  8. Enter the new password again to confirm it.

  9. Click Reset.

    A confirmation message is shown on successful reset.

  10. Click Login to access NetQ with your username and new password.

Log Out of NetQ

To log out of the NetQ UI:

  1. Click at the top right of the application.

  2. Select Log Out.

Application Layout

The NetQ UI contains two main areas:

Found in the application header, click to open the main menu which provides navigation to:

Recent Actions

Found in the header, Recent Actions keeps track of every action you take on your workbench and then saves each action with a timestamp. This enables you to go back to a previous state or repeat an action.

To open Recent Actions, click . Click on any of the actions to perform that action again.

The Global Search field in the UI header enables you to search for devices and cards. It behaves like most searches and can help you quickly find device information. For more detail on creating and running searches, refer to Create and Run Searches.

Clicking on the Cumulus logo takes you to your favorite workbench. For details about specifying your favorite workbench, refer to Set User Preferences.

Quick Network Health View

Found in the header, the graph and performance rating provide a view into the health of your network at a glance.

On initial start up of the application, it may take up to an hour to reach an accurate health indication as some processes only run every 30 minutes.

Workbenches

A workbench is comprised of a given set of cards. A pre-configured default workbench, Cumulus Workbench, is available to get you started. It contains Device Inventory, Switch Inventory, Alarm and Info Events, and Network Health cards. On initial login, this workbench is opened. You can create your own workbenches and add or remove cards to meet your particular needs. For more detail about managing your data using workbenches, refer to Focus Your Monitoring Using Workbenches.

Cards

Cards present information about your network for monitoring and troubleshooting. This is where you can expect to spend most of your time. Each card describes a particular aspect of the network. Cards are available in multiple sizes, from small to full screen. The level of the content on a card varies in accordance with the size of the card, with the highest level of information on the smallest card to the most detailed information on the full-screen view. Cards are collected onto a workbench where you see all of the data relevant to a task or set of tasks. You can add and remove cards from a workbench, move between cards and card sizes, and make copies of cards to show different levels of data at the same time. For details about working with cards, refer to Access Data with Cards.

User Settings

Each user can customize the NetQ application display, change their account password, and manage their workbenches. This is all performed from User Settings > Profile & Preferences. For details, refer to Set User Preferences.

Format Cues

Color is used to indicate links, options, and status within the UI.

Item | Color
Hover on item | Blue
Clickable item | Black
Selected item | Green
Highlighted item | Blue
Link | Blue
Good/Successful results | Green
Result with critical severity event | Pink
Result with high severity event | Red
Result with medium severity event | Orange
Result with low severity event | Yellow

Create and Run Searches

The Global Search field in the UI header enables you to search for devices or cards. You can create new searches or run existing searches.

As with most search fields, simply begin entering the criteria in the search field. As you type, items that match the search criteria are shown in the search history dropdown along with the last time the search was viewed. Wildcards are not allowed, but this predictive matching eliminates the need for them. By default, the most recent searches are shown. If more have been performed, they can be accessed. This provides a quicker search by reducing entry specifics and suggesting recent searches. Selecting a suggested search from the list provides a preview of the search results to the right.

To create a new search:

  1. Click in the Global Search field.

  2. Enter your search criteria.

  3. Click the device hostname or card workflow in the search list to open the associated information.

    If you have more matches than fit in the window, click the See All # Results link to view all found matches. The count represents the number of devices found. It does not include cards found.

You can re-run a recent search, saving time if you are comparing data from two or more devices.

To re-run a recent search:

  1. Click in the Global Search field.

  2. When the desired search appears in the suggested searches list, select it.

    You may need to click See All # Results to find the desired search. If you do not find it in the list, you may still be able to find it in the Recent Actions list.

Focus Your Monitoring Using Workbenches

Workbenches are an integral structure of the Cumulus NetQ UI. They are where you collect and view the data that is important to you.

There are two types of workbenches: default workbenches, which ship with the application, and custom workbenches, which you create yourself.

Both types of workbenches display a set of cards. Default workbenches are public (available for viewing by all users), whereas custom workbenches are private (viewable only by the user who created them).

Default Workbenches

In this release, only one default workbench is available, the Cumulus Workbench, to get you started. It contains Device Inventory, Switch Inventory, Alarm and Info Events, and Network Health cards, giving you a high-level view of how your network is operating.

On initial login, the Cumulus Workbench is opened. On subsequent logins, the last workbench you had displayed is opened.

Custom Workbenches

Users with either administrative or user roles can create and save as many custom workbenches as suit their needs. For example, a user might create a workbench that shows only the cards relevant to a particular site, task, or set of devices.

Create a Workbench

To create a workbench:

  1. Click in the workbench header.

  2. Enter a name for the workbench.

  3. Click Create to open a blank new workbench, or Cancel to discard the workbench.

  4. Add cards to the workbench as described in Add Cards to Your Workbench or Add Switch Cards to Your Workbench.

Refer to Access Data with Cards for information about interacting with cards on your workbenches.

Remove a Workbench

Once you have created a number of custom workbenches, you might find that you no longer need some of them. As an administrative user, you can remove any workbench, except for the default Cumulus Workbench. Users with a user role can only remove workbenches they have created.

To remove a workbench:

  1. Click the User Settings icon in the application header to open the User Settings options.

  2. Click Profile & Preferences.

  3. Locate the Workbenches card.

  4. Hover over the workbench you want to remove, and click Delete.

Open an Existing Workbench

There are several options for opening workbenches:

Manage Auto-refresh for Your Workbenches

With NetQ 2.3.1 and later, you can specify how often to update the data displayed on your workbenches. Three refresh rates are available:

By default, auto-refresh is enabled and configured to update every 30 seconds.

Disable/Enable Auto-refresh

To disable or pause auto-refresh of your workbenches, simply click the Refresh icon. The icon toggles between two states: Paused, which indicates auto-refresh is currently disabled, and Running, which indicates it is currently enabled.

While having the workbenches update regularly is good most of the time, you may find that you want to pause the auto-refresh feature when you are troubleshooting and you do not want the data to change on a given set of cards temporarily. In this case, you can disable the auto-refresh and then enable it again when you are finished.

View Current Settings

To view the current auto-refresh rate and operational status, hover over the Refresh icon on a workbench header to open its tool tip.

Change Settings

To modify the auto-refresh setting:

  1. Click on the Refresh icon.

  2. Select the refresh rate you want. The refresh rate is applied immediately. A check mark is shown next to the current selection.

Manage Workbenches

To manage your workbenches as a group, either:

Both of these open the Profiles & Preferences page. Look for the Workbenches card and refer to Manage Your Workbenches for more information.

Access Data with Cards

Cards present information about your network for monitoring and troubleshooting. This is where you can expect to spend most of your time. Each card describes a particular aspect of the network. Cards are available in multiple sizes, from small to full screen. The level of the content on a card varies in accordance with the size of the card, with the highest level of information on the smallest card to the most detailed information on the full-screen card. Cards are collected onto a workbench where you see all of the data relevant to a task or set of tasks. You can add and remove cards from a workbench, move between cards and card sizes, change the time period of the data shown on a card, and make copies of cards to show different levels of data at the same time.

Card Sizes

The various sizes of cards enable you to view your content at just the right level. For each aspect that you are monitoring there is typically a single card that presents increasing amounts of data over its four sizes. For example, a snapshot of your total inventory may be sufficient, but monitoring the distribution of hardware vendors may require a bit more space.

Small Cards

Small cards are most effective at providing a quick view of the performance or statistical value of a given aspect of your network. They are commonly comprised of an icon to identify the aspect being monitored, summary performance or statistics in the form of a graph and/or counts, and often an indication of any related events. Other content items may be present. Some examples include a Devices Inventory card, a Switch Inventory card, an Alarm Events card, an Info Events card, and a Network Health card, as shown here:

Medium Cards

Medium cards are most effective at providing the key measurements for a given aspect of your network. They are commonly comprised of an icon to identify the aspect being monitored and one or more key measurements that make up the overall performance. Additional information is often included, such as related events or components. Some examples include a Devices Inventory card, a Switch Inventory card, an Alarm Events card, an Info Events card, and a Network Health card, as shown here. Compare these with their related small- and large-sized cards.

Large Cards

Large cards are most effective at providing the detailed information for monitoring specific components or functions of a given aspect of your network. These can aid in isolating and resolving existing issues or preventing potential issues. They are commonly comprised of detailed statistics and graphics. Some large cards also have tabs for additional detail about a given statistic or other related information. Some examples include a Devices Inventory card, an Alarm Events card, and a Network Health card, as shown here. Compare these with their related small- and medium-sized cards.

Full-Screen Cards

Full-screen cards are most effective for viewing all available data about an aspect of your network in one place. When you cannot find what you need in the small, medium, or large cards, it is likely on the full-screen card. Most full-screen cards display data in a grid, or table; however, some contain visualizations. Some examples include the All Events card and the All Switches card, as shown here.

Card Size Summary

Card Workflows

The UI provides a number of card workflows. Card workflows focus on a particular aspect of your network and are a linked set of cards of each size: a small card, a medium card, one or more large cards, and one or more full-screen cards. The following card workflows are available:

Access a Card Workflow

You can access a card workflow in multiple ways:

If you have multiple cards open on your workbench already, you might need to scroll down to see the card you have just added.

To open the card workflow through an existing workbench:

  1. Click in the workbench task bar.

  2. Select the relevant workbench.

    The workbench opens, hiding your previous workbench.

To open the card workflow from Recent Actions:

  1. Click the Recent Actions icon in the application header.

  2. Look for an “Add: <card name>” item.

  3. If it is still available, click the item.

    The card appears on the current workbench, at the bottom.

To access the card workflow by adding the card:

  1. Click in the workbench task bar.

  2. Follow the instructions in Add Cards to Your Workbench or Add Switch Cards to Your Workbench.

    The card appears on the current workbench, at the bottom.

To access the card workflow by searching for the card:

  1. Click in the Global Search field.

  2. Begin typing the name of the card.

  3. Select it from the list.

    The card appears on the current workbench, at the bottom.

Card Interactions

Every card contains a standard set of interactions, including the ability to switch between card sizes, and change the time period of the presented data. Most cards also have additional actions that can be taken, in the form of links to other cards, scrolling, and so forth. The four sizes of cards for a particular aspect of the network are connected into a flow; however, you can have duplicate cards displayed at the different sizes. Cards with tabular data provide filtering, sorting, and export of data. The medium and large cards have descriptive text on the back of the cards.

To access the time period, card size, and additional actions, hover over the card. These options appear, covering the card header, enabling you to select the desired option.

Add Cards to Your Workbench

You can add one or more cards to a workbench at any time. To add Devices|Switches cards, refer to Add Switch Cards to Your Workbench. For all other cards, follow the steps in this section.

To add one or more cards:

  1. Click the card icon in the workbench header to open the Cards modal.

  2. Scroll down until you find the card you want to add, select the category of cards, or use Search to find the card you want to add.

    This example uses the category tab to narrow the search for a card.

  3. Click on each card you want to add.

    As you select each card, it is grayed out and marked as selected. If you have selected one or more cards using the category option, you can select another category without losing your current selection. Note that the total number of cards selected for addition to your workbench is shown at the bottom.

    If you change your mind and do not want to add a particular card you have selected, simply click it again to remove it from the cards to be added. The total number of cards selected decreases with each card you remove.

  4. When you have selected all of the cards you want to add to your workbench, you can confirm which cards have been selected by clicking the Cards Selected link. Modify your selection as needed.

  5. Click Open Cards to add the selected cards, or Cancel to return to your workbench without adding any cards.

The cards are placed at the end of the set of cards currently on the workbench. You might need to scroll down to see them. By default, the medium size of the card is added to your workbench for all except the Validation and Trace cards. These are added in the large size by default. You can rearrange the cards as described in Reposition a Card on Your Workbench.

Add Switch Cards to Your Workbench

You can add switch cards to a workbench at any time. For all other cards, follow the steps in Add Cards to Your Workbench. You can either add the card through the Switches icon on a workbench header or by searching for it through Global Search.

To add a switch card using the icon:

  1. Click the Switches icon to open the Add Switch Card modal.

  2. Begin entering the hostname of the switch you want to monitor.

  3. Select the device from the suggestions that appear.

    If you attempt to enter a hostname that is unknown to NetQ, a pink border appears around the entry field and you are unable to select Add. Try checking for spelling errors. If you feel your entry is valid, but not an available choice, consult with your network administrator.

  4. Optionally select the small or large size to display instead of the medium size.

  5. Click Add to add the switch card to your workbench, or Cancel to return to your workbench without adding the switch card.

To open the switch card by searching:

  1. Click in Global Search.

  2. Begin typing the name of a switch.

  3. Select it from the options that appear.

Remove Cards from Your Workbench

Removing cards is handled one card at a time.

To remove a card:

  1. Hover over the card you want to remove.

  2. Click the More Actions menu.

  3. Click Remove.

The card is removed from the workbench, but not from the application.

Change the Time Period for the Card Data

All cards have a default time period for the data shown on the card, typically the last 24 hours. You can change the time period to view the data during a different time range to aid analysis of previous or existing issues.

To change the time period for a card:

  1. Hover over any card.

  2. Click in the header.

  3. Select a time period from the dropdown list.

Changing the time period in this manner only changes the time period for the given card.

Switch to a Different Card Size

You can switch between the different card sizes at any time. Only one size is visible at a time. To view the same card in different sizes, open a second copy of the card.

To change the card size:

  1. Hover over the card.

  2. Hover over the Card Size Picker and move the cursor to the right or left until the desired size option is highlighted.

    Single width opens a small card. Double width opens a medium card. Triple width opens large cards. Full width opens full-screen cards.

  3. Click the Picker.
    The card changes to the selected size, and may move its location on the workbench.

View a Description of the Card Content

When you hover over a medium or large card, the bottom right corner turns up and is highlighted. Click the corner to flip the card over and view a description of the card and any relevant tabs. Hover and click again to return to the front of the card.

Reposition a Card on Your Workbench

You can also move cards around on the workbench, using a simple drag and drop method.

To move a card:

  1. Simply click and drag the card to the left or right of another card, next to where you want to place it.

  2. Release your hold on the card when the other card becomes highlighted with a dotted line. In this example, we are moving the medium Network Health card to the left of the medium Devices Inventory card.

Table Settings

You can manipulate the data in a data grid in a full-screen card in several ways. The available options are displayed above each table. The options vary depending on the card and what is selected in the table.

Action                    Description
------------------------  -----------------------------------------------------------------------------------------------
Select All                Selects all items in the list.
Clear All                 Clears all existing selections in the list.
Add Item                  Adds item to the list.
Edit                      Edits the selected item.
Delete                    Removes the selected items.
Filter                    Filters the list using available parameters. Refer to Filter Table Data for more detail.
Generate/Delete AuthKeys  Creates or removes NetQ CLI authorization keys.
Open Cards                Opens the corresponding validation or trace card(s).
Assign role               Opens role assignment options for switches.
Export                    Exports selected data into either a .csv or JSON-formatted file. Refer to Export Data for more detail.

When there are numerous items in a table, NetQ loads the first 25 by default and provides the rest in additional table pages. In this case, pagination is shown under the table.

From there, you can:

Change Order of Columns

You can rearrange the columns within a table. Click and hold on a column header, then drag it to the location where you want it.

Sort Table Data by Column

You can sort tables on full-screen cards (up to 10,000 rows) by a given column. The data is sorted in ascending or descending order: A to Z, Z to A, 1 to n, or n to 1.

To sort table data by column:

  1. Open a full-screen card.

  2. Hover over a column header.

  3. Click the header to toggle between ascending and descending sort order.

For example, this IP Addresses table is sorted by hostname in a descending order. Click the Hostname header to sort the data in ascending order. Click the IfName header to sort the same table by interface name.

Sorted by descending hostname

Sorted by ascending hostname

Sorted by descending interface name

Filter Table Data

The filter option associated with tables on full-screen cards can be used to filter the data by any parameter (column name). The parameters available vary according to the table you are viewing. Some tables offer the ability to filter on more than one parameter.

Tables that Support a Single Filter

Tables that allow a single filter to be applied let you select the parameter and set the value. You can use partial values.

For example, to set the filter to show only BGP sessions using a particular VRF:

  1. Open the full-screen Network Services | All BGP Sessions card.

  2. Click the All Sessions tab.

  3. Click the filter icon above the table.

  4. Select VRF from the Field dropdown.

  5. Enter the name of the VRF of interest. In our example, we chose vrf1.

  6. Click Apply.

    The filter icon displays a red dot to indicate filters are applied.

  7. To remove the filter, click the filter icon (now showing the red dot).

  8. Click Clear.

  9. Close the Filters dialog.

Tables that Support Multiple Filters

For tables that offer filtering by multiple parameters, the Filter dialog is slightly different. For example, to filter the list of IP Addresses in your system by hostname and interface:

  1. Open the main menu.

  2. Select IP Addresses under Network.

  3. Click the filter icon above the table.

  4. Enter a hostname and interface name in the respective fields.

  5. Click Apply.

    The filter icon displays a red dot to indicate filters are applied, and each filter is presented above the table.

  6. To remove a filter, simply click on the filter, or to remove all filters at once, click Clear All Filters.

Export Data

You can export tabular data from a full-screen card to a CSV- or JSON-formatted file.

To export all data:

  1. Click the export icon above the table.

  2. Select the export format.

  3. Click Export to save the file to your downloads directory.

To export selected data:

  1. Select the individual items from the list by clicking in the checkbox next to each item.

  2. Click the export icon above the table.

  3. Select the export format.

  4. Click Export to save the file to your downloads directory.

Set User Preferences

Each user can customize the NetQ application display, change their account password, and manage their workbenches.

Configure Display Settings

The Display card contains the options for setting the application theme, language, time zone, and date formats. There are two themes available: a Light theme and a Dark theme (default). The screen captures in this document are all displayed with the Dark theme. English is the only language available for this release. You can choose to view data in the time zone where you or your data center resides. You can also select the date and time format, choosing words or number format and a 12- or 24-hour clock. All changes take effect immediately.

To configure the display settings:

  1. Click the User Settings icon in the application header to open the User Settings options.

  2. Click Profile & Preferences.

  3. Locate the Display card.

  4. In the Theme field, click to select your choice of theme. This figure shows the light theme. Switch back and forth as desired.

  5. In the Time Zone field, click to change the time zone from the default.
    By default, the time zone is set to the user’s local time zone. If a time zone has not been selected, NetQ defaults to the current local time zone where NetQ is installed. All time values are based on this setting. This is displayed in the application header, and is based on Greenwich Mean Time (GMT).

    Tip: You can also change the time zone from the header display.

    If your deployment is not local to you (for example, you want to view the data from the perspective of a data center in another time zone) you can change the display to another time zone. The following table presents a sample of time zones:

    Time Zone   Description                                Abbreviation
    ----------  -----------------------------------------  ------------
    GMT +12     New Zealand Standard Time                  NST
    GMT +11     Solomon Standard Time                      SST
    GMT +10     Australian Eastern Time                    AET
    GMT +9:30   Australia Central Time                     ACT
    GMT +9      Japan Standard Time                        JST
    GMT +8      China Taiwan Time                          CTT
    GMT +7      Vietnam Standard Time                      VST
    GMT +6      Bangladesh Standard Time                   BST
    GMT +5:30   India Standard Time                        IST
    GMT +5      Pakistan Lahore Time                       PLT
    GMT +4      Near East Time                             NET
    GMT +3:30   Middle East Time                           MET
    GMT +3      Eastern African Time/Arab Standard Time    EAT/AST
    GMT +2      Eastern European Time                      EET
    GMT +1      European Central Time                      ECT
    GMT         Greenwich Mean Time                        GMT
    GMT -1      Central African Time                       CAT
    GMT -2      Uruguay Summer Time                        UYST
    GMT -3      Argentina Standard/Brazil Eastern Time     AGT/BET
    GMT -4      Atlantic Standard Time/Puerto Rico Time    AST/PRT
    GMT -5      Eastern Standard Time                      EST
    GMT -6      Central Standard Time                      CST
    GMT -7      Mountain Standard Time                     MST
    GMT -8      Pacific Standard Time                      PST
    GMT -9      Alaskan Standard Time                      AST
    GMT -10     Hawaiian Standard Time                     HST
    GMT -11     Samoa Standard Time                        SST
    GMT -12     New Zealand Standard Time                  NST
  6. In the Date Format field, select the date and time format you want displayed on the cards.

    The four options include the date displayed in words or abbreviated with numbers, and either a 12- or 24-hour time representation. The default is the third option.

  7. Return to your workbench by selecting a workbench from the NetQ list.

Change Your Password

You can change your account password at any time, for example if you suspect someone has compromised your account or your administrator requests that you do so.

To change your password:

  1. Click the User Settings icon in the application header to open the User Settings options.

  2. Click Profile & Preferences.

  3. Locate the Basic Account Info card.

  4. Click Change Password.

  5. Enter your current password.

  6. Enter and confirm a new password.

  7. Click Save to change to the new password, or click Cancel to discard your changes.

  8. Return to your workbench by selecting a workbench from the NetQ list.

Manage Your Workbenches

You can view all of your workbenches in list form, making it possible to manage various aspects of them. There are public and private workbenches. Public workbenches are visible to all users. Private workbenches are visible only to the user who created them. From the Workbenches card, you can:

To manage your workbenches:

  1. Click the User Settings icon in the application header to open the User Settings options.

  2. Click Profile & Preferences.

  3. Locate the Workbenches card.

  4. To specify a home workbench, click to the left of the desired workbench name. An indicator is placed there to mark its status as your favorite workbench.

  5. To search the workbench list by name, access type, and cards present on the workbench, click the relevant header and begin typing your search criteria.

  6. To sort the workbench list, click the relevant column header.

  7. To delete a workbench, hover over the workbench name to view the Delete button. As an administrator, you can delete both private and public workbenches.

  8. Return to your workbench by selecting a workbench from the NetQ list.

NetQ Command Line Overview

The NetQ CLI provides access to all of the network state and event information collected by the NetQ Agents. It behaves the same way most CLIs do: commands are grouped to display related information, you can use TAB completion when entering commands, and you can get help for given commands and options. The commands are grouped into four categories: check, show, config, and trace.

The NetQ command line interface only runs on switches and server hosts implemented with Intel x86 or ARM-based architectures. If you are unsure what architecture your switch or server employs, check the Cumulus Hardware Compatibility List and verify the value in the Platforms tab > CPU column.

CLI Access

When NetQ is installed or upgraded, the CLI may also be installed and enabled on your NetQ server or appliance and hosts. Refer to the Install NetQ topic for details.

To access the CLI from a switch or server:

  1. Log in to the device. This example uses the default username of cumulus and a hostname of switch.

    <computer>:~<username>$ ssh cumulus@switch
    
  2. Enter your password to reach the command prompt. The default password is CumulusLinux! For example:

    Enter passphrase for key '/Users/<username>/.ssh/id_rsa': <enter CumulusLinux! here>
    Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-112-generic x86_64)
        * Documentation:  https://help.ubuntu.com
        * Management:     https://landscape.canonical.com
        * Support:        https://ubuntu.com/advantage
    Last login: Tue Sep 15 09:28:12 2019 from 10.0.0.14
    cumulus@switch:~$
    
  3. Run commands. For example:

    cumulus@switch:~$ netq show agents
    cumulus@switch:~$ netq check bgp
    

Command Line Basics

This section describes the core structure and behavior of the NetQ CLI. It includes the following:

Command Line Structure

The Cumulus NetQ command line has a flat structure as opposed to a modal structure: all commands can be run from the primary prompt rather than only within a specific mode. Some other command lines require the administrator to switch between a configuration mode and an operation mode, where configuration commands can be run only in configuration mode and operational commands only in operation mode; switching back and forth to run commands can be tedious and time consuming. The Cumulus NetQ command line enables the administrator to run all of its commands at the same level.
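
For example, a configuration command, a monitoring command, and a validation command can all be run back to back from the same Linux shell prompt, with no mode changes (each of these commands appears with full output later in this guide):

# Configuration-style command
cumulus@switch:~$ netq config show agent
# Monitoring-style command, same prompt, no mode change
cumulus@switch:~$ netq show agents
# Validation-style command, same prompt
cumulus@switch:~$ netq check bgp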

Command Syntax

NetQ CLI commands all begin with netq. Cumulus NetQ commands fall into one of four syntax categories: validation (check), monitoring (show), configuration, and trace.

netq check <network-protocol-or-service> [options]
netq show <network-protocol-or-service> [options]
netq config <action> <object> [options]
netq trace <destination> from <source> [options]
Symbol                 Meaning
---------------------  ------------------------------------------------------------------------------------------------------------
Parentheses ( )        Grouping of required parameters. Choose one.
Square brackets [ ]    Single or group of optional parameters. If more than one object or keyword is available, choose one.
Angle brackets < >     Required variable. Value for a keyword or option; enter according to your deployment nomenclature.
Pipe |                 Separates object and keyword options, also separates value options; enter one object or keyword and zero or one value.

For example, in the netq check command, netq and check are required keywords, <network-protocol-or-service> is a required variable that you replace with the protocol or service to validate (such as bgp), and the options are optional parameters that vary by protocol or service, such as a set of hostnames, a VRF, a time frame, or an output format.

Thus some valid commands are:
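
(The following use bgp as the object; the values vrf1, leaf01, and 2h are illustrative only, and the full set of bgp options appears in the help output later in this section.)

cumulus@switch:~$ netq check bgp
cumulus@switch:~$ netq check bgp vrf vrf1
cumulus@switch:~$ netq check bgp hostnames leaf01
cumulus@switch:~$ netq check bgp around 2h
cumulus@switch:~$ netq check bgp json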

Command Output

The command output presents results in color for many commands. Results with errors are shown in red, and warnings are shown in yellow. Results without errors or warnings are shown in either black or green. VTEPs are shown in blue. A node in the pretty output is shown in bold, and a router interface is wrapped in angle brackets (< >). To view the output with only black text, run the netq config del color command. You can view output with colors again by running netq config add color.
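
For example, to toggle colored output off and then back on (these are the same commands named above):

# Display output with only black text
cumulus@switch:~$ netq config del color
# Restore colored output
cumulus@switch:~$ netq config add color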

All check and show commands are run with a default timeframe of now to one hour ago, unless you specify an approximate time using the around keyword. For example, running netq check bgp shows the status of BGP over the last hour. Running netq show bgp around 3h shows the status of BGP three hours ago.
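
For example, using BGP (the around keyword works the same way with other check and show commands):

# Status of BGP over the default time frame, the last hour
cumulus@switch:~$ netq check bgp
# Status of BGP as of approximately three hours ago
cumulus@switch:~$ netq show bgp around 3h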

Command Prompts

NetQ code examples use the following prompts:

To use the NetQ CLI, the switches must be running the Cumulus Linux operating system (OS), NetQ Platform or NetQ Collector software, the NetQ Agent, and the NetQ CLI. The hosts must be running CentOS, RHEL, or Ubuntu OS, the NetQ Agent, and the NetQ CLI. Refer to the Install NetQ topic for details.

Command Completion

As you enter commands, you can get help with the valid keywords or options using the Tab key. For example, using Tab completion with netq check displays the possible objects for the command, and returns you to the command prompt to complete the command.

cumulus@switch:~$ netq check <<press Tab>>
    agents      :  Netq agent
    bgp         :  BGP info
    cl-version  :  Cumulus Linux version
    clag        :  Cumulus Multi-chassis LAG
    evpn        :  EVPN
    interfaces  :  network interface port
    license     :  License information
    mlag        :  Multi-chassis LAG (alias of clag)
    mtu         :  Link MTU
    ntp         :  NTP
    ospf        :  OSPF info
    sensors     :  Temperature/Fan/PSU sensors
    vlan        :  VLAN
    vxlan       :  VXLAN data path
cumulus@switch:~$ netq check

Command Help

As you enter commands, you can get help with command syntax by entering help at various points within a command entry. For example, to find out what options are available for a BGP check, enter help after entering a portion of the netq check command. In this example, you can see that there are no additional required parameters and several optional parameters, such as hostnames, vrf, and around, that can be used with a BGP check.

cumulus@switch:~$ netq check bgp help
Commands:
    netq check bgp [label <text-label-name> | hostnames <text-list-hostnames>] [vrf <vrf>] [include <bgp-number-range-list> | exclude <bgp-number-range-list>] [around <text-time>] [json | summary]

To see an exhaustive list of commands, run:

cumulus@switch:~$ netq help list verbose

To see a list of all NetQ commands and keyword help, run:

cumulus@switch:~$ netq help list

Command History

The CLI stores commands issued within a session, which enables you to review and rerun commands that have already been run. At the command prompt, press the Up Arrow and Down Arrow keys to move back and forth through the list of commands previously entered. When you have found a given command, you can run the command by pressing Enter, just as you would if you had entered it manually. Optionally you can modify the command before you run it.
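
Because NetQ commands are entered at the Linux shell prompt, standard shell history tools also apply. A minimal sketch, assuming a bash shell:

# List the netq commands entered in this shell session
cumulus@switch:~$ history | grep netq
# Re-run the most recent command that began with netq (bash history expansion)
cumulus@switch:~$ !netq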

Command Categories

While the CLI has a flat structure, the commands can be conceptually grouped into four functional categories: validation (check), monitoring (show), configuration, and trace.

Validation Commands

The netq check commands enable the network administrator to validate the current or historical state of the network by looking for errors and misconfigurations in the network. The commands run fabric-wide validations against various configured protocols and services to determine how well the network is operating. Validation checks can be performed for the protocols and services listed by tab completion for netq check, including agents, BGP, CLAG/MLAG, Cumulus Linux version, EVPN, interfaces, license, MTU, NTP, OSPF, sensors, VLAN, and VXLAN.

The commands take the form of netq check <network-protocol-or-service> [options], where the options vary according to the protocol or service.

This example shows the output for the netq check bgp command, followed by the same command using the json option. If there had been any failures, they would have been listed below the summary results or in the failed_node_set section, respectively.

cumulus@switch:~$ netq check bgp
bgp check result summary:

Checked nodes       : 8
Total nodes         : 8
Rotten nodes        : 0
Failed nodes        : 0
Warning nodes       : 0

Additional summary:
Total Sessions      : 30
Failed Sessions     : 0

Session Establishment Test   : passed
Address Families Test        : passed
Router ID Test               : passed

cumulus@switch:~$ netq check bgp json
{
    "tests":{
        "Session Establishment":{
            "suppressed_warnings":0,
            "errors":[

            ],
            "suppressed_errors":0,
            "passed":true,
            "warnings":[

            ],
            "duration":0.0000853539,
            "enabled":true,
            "suppressed_unverified":0,
            "unverified":[

            ]
        },
        "Address Families":{
            "suppressed_warnings":0,
            "errors":[

            ],
            "suppressed_errors":0,
            "passed":true,
            "warnings":[

            ],
            "duration":0.0002634525,
            "enabled":true,
            "suppressed_unverified":0,
            "unverified":[

            ]
        },
        "Router ID":{
            "suppressed_warnings":0,
            "errors":[

            ],
            "suppressed_errors":0,
            "passed":true,
            "warnings":[

            ],
            "duration":0.0001821518,
            "enabled":true,
            "suppressed_unverified":0,
            "unverified":[

            ]
        }
    },
    "failed_node_set":[

    ],
    "summary":{
        "checked_cnt":8,
        "total_cnt":8,
        "rotten_node_cnt":0,
        "failed_node_cnt":0,
        "warn_node_cnt":0
    },
    "rotten_node_set":[

    ],
    "warn_node_set":[

    ],
    "additional_summary":{
        "total_sessions":30,
        "failed_sessions":0
    },
    "validation":"bgp"
}

Monitoring Commands

The netq show commands enable the network administrator to view details about the current or historical configuration and status of the various protocols or services. The configuration and status can be shown for the following:

The commands take the form of netq [<hostname>] show <network-protocol-or-service> [options], where the options vary according to the protocol or service. The commands can be restricted from showing the information for all devices to showing information for a selected device using the hostname option.

This example shows the standard and restricted output for the netq show agents command.

cumulus@switch:~$ netq show agents
Matching agents records:
Hostname          Status           NTP Sync Version                              Sys Uptime                Agent Uptime              Reinitialize Time          Last Changed
----------------- ---------------- -------- ------------------------------------ ------------------------- ------------------------- -------------------------- -------------------------
border01          Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:04:54 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:38 2020
border02          Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:04:57 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:33 2020
fw1               Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:04:44 2020  Tue Sep 29 21:24:48 2020  Tue Sep 29 21:24:48 2020   Thu Oct  1 16:07:26 2020
fw2               Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:04:42 2020  Tue Sep 29 21:24:48 2020  Tue Sep 29 21:24:48 2020   Thu Oct  1 16:07:22 2020
leaf01            Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 16:49:04 2020  Tue Sep 29 21:24:49 2020  Tue Sep 29 21:24:49 2020   Thu Oct  1 16:07:10 2020
leaf02            Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:14 2020  Tue Sep 29 21:24:49 2020  Tue Sep 29 21:24:49 2020   Thu Oct  1 16:07:30 2020
leaf03            Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:37 2020  Tue Sep 29 21:24:49 2020  Tue Sep 29 21:24:49 2020   Thu Oct  1 16:07:24 2020
leaf04            Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:35 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:13 2020
oob-mgmt-server   Fresh            yes      3.1.1-ub18.04u29~1599111022.78b9e43  Mon Sep 21 16:43:58 2020  Mon Sep 21 17:55:00 2020  Mon Sep 21 17:55:00 2020   Thu Oct  1 16:07:31 2020
server01          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:07 2020  Tue Sep 29 21:13:07 2020   Thu Oct  1 16:07:16 2020
server02          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:07 2020  Tue Sep 29 21:13:07 2020   Thu Oct  1 16:07:24 2020
server03          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:56 2020  Tue Sep 29 21:13:07 2020  Tue Sep 29 21:13:07 2020   Thu Oct  1 16:07:12 2020
server04          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:07 2020  Tue Sep 29 21:13:07 2020   Thu Oct  1 16:07:17 2020
server05          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:10 2020  Tue Sep 29 21:13:10 2020   Thu Oct  1 16:07:25 2020
server06          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:10 2020  Tue Sep 29 21:13:10 2020   Thu Oct  1 16:07:21 2020
server07          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:06:48 2020  Tue Sep 29 21:13:10 2020  Tue Sep 29 21:13:10 2020   Thu Oct  1 16:07:28 2020
server08          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:06:45 2020  Tue Sep 29 21:13:10 2020  Tue Sep 29 21:13:10 2020   Thu Oct  1 16:07:31 2020
spine01           Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:34 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:20 2020
spine02           Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:33 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:16 2020
spine03           Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:34 2020  Tue Sep 29 21:25:07 2020  Tue Sep 29 21:25:07 2020   Thu Oct  1 16:07:20 2020
spine04           Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:32 2020  Tue Sep 29 21:25:07 2020  Tue Sep 29 21:25:07 2020   Thu Oct  1 16:07:33 2020
cumulus@switch:~$ netq show agents json
{
    "agents":[
        {
            "hostname":"border01",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707894.0,
            "agentUptime":1601414698.0,
            "reinitializeTime":1601414698.0,
            "lastChanged":1601568519.0
        },
        {
            "hostname":"border02",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707897.0,
            "agentUptime":1601414698.0,
            "reinitializeTime":1601414698.0,
            "lastChanged":1601568515.0
        },
        {
            "hostname":"fw1",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707884.0,
            "agentUptime":1601414688.0,
            "reinitializeTime":1601414688.0,
            "lastChanged":1601568506.0
        },
        {
            "hostname":"fw2",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707882.0,
            "agentUptime":1601414688.0,
            "reinitializeTime":1601414688.0,
            "lastChanged":1601568503.0
        },
        {
            "hostname":"leaf01",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600706944.0,
            "agentUptime":1601414689.0,
            "reinitializeTime":1601414689.0,
            "lastChanged":1601568522.0
        },
        {
            "hostname":"leaf02",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707794.0,
            "agentUptime":1601414689.0,
            "reinitializeTime":1601414689.0,
            "lastChanged":1601568512.0
        },
        {
            "hostname":"leaf03",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707817.0,
            "agentUptime":1601414689.0,
            "reinitializeTime":1601414689.0,
            "lastChanged":1601568505.0
        },
        {
            "hostname":"leaf04",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707815.0,
            "agentUptime":1601414698.0,
            "reinitializeTime":1601414698.0,
            "lastChanged":1601568525.0
        },
        {
            "hostname":"oob-mgmt-server",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.1.1-ub18.04u29~1599111022.78b9e43",
            "sysUptime":1600706638.0,
            "agentUptime":1600710900.0,
            "reinitializeTime":1600710900.0,
            "lastChanged":1601568511.0
        },
        {
            "hostname":"server01",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-ub18.04u30~1601393774.104fb9e",
            "sysUptime":1600708797.0,
            "agentUptime":1601413987.0,
            "reinitializeTime":1601413987.0,
            "lastChanged":1601568527.0
        },
        {
            "hostname":"server02",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-ub18.04u30~1601393774.104fb9e",
            "sysUptime":1600708797.0,
            "agentUptime":1601413987.0,
            "reinitializeTime":1601413987.0,
            "lastChanged":1601568504.0
        },
        {
            "hostname":"server03",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-ub18.04u30~1601393774.104fb9e",
            "sysUptime":1600708796.0,
            "agentUptime":1601413987.0,
            "reinitializeTime":1601413987.0,
            "lastChanged":1601568522.0
        },
        {
            "hostname":"server04",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-ub18.04u30~1601393774.104fb9e",
            "sysUptime":1600708797.0,
            "agentUptime":1601413987.0,
            "reinitializeTime":1601413987.0,
            "lastChanged":1601568497.0
        },
        {
            "hostname":"server05",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-ub18.04u30~1601393774.104fb9e",
            "sysUptime":1600708797.0,
            "agentUptime":1601413990.0,
            "reinitializeTime":1601413990.0,
            "lastChanged":1601568506.0
        },
        {
            "hostname":"server06",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-ub18.04u30~1601393774.104fb9e",
            "sysUptime":1600708797.0,
            "agentUptime":1601413990.0,
            "reinitializeTime":1601413990.0,
            "lastChanged":1601568501.0
        },
        {
            "hostname":"server07",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-ub18.04u30~1601393774.104fb9e",
            "sysUptime":1600708008.0,
            "agentUptime":1601413990.0,
            "reinitializeTime":1601413990.0,
            "lastChanged":1601568508.0
        },
        {
            "hostname":"server08",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-ub18.04u30~1601393774.104fb9e",
            "sysUptime":1600708005.0,
            "agentUptime":1601413990.0,
            "reinitializeTime":1601413990.0,
            "lastChanged":1601568511.0
        },
        {
            "hostname":"spine01",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707814.0,
            "agentUptime":1601414698.0,
            "reinitializeTime":1601414698.0,
            "lastChanged":1601568502.0
        },
        {
            "hostname":"spine02",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707813.0,
            "agentUptime":1601414698.0,
            "reinitializeTime":1601414698.0,
            "lastChanged":1601568497.0
        },
        {
            "hostname":"spine03",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707814.0,
            "agentUptime":1601414707.0,
            "reinitializeTime":1601414707.0,
            "lastChanged":1601568501.0
        },
        {
            "hostname":"spine04",
            "status":"Fresh",
            "ntpSync":"yes",
            "version":"3.2.0-cl4u30~1601410518.104fb9ed",
            "sysUptime":1600707812.0,
            "agentUptime":1601414707.0,
            "reinitializeTime":1601414707.0,
            "lastChanged":1601568514.0
	}
    ],
    "truncatedResult":false
}
cumulus@switch:~$ netq leaf01 show agents
Matching agents records:
Hostname          Status           NTP Sync Version                              Sys Uptime                Agent Uptime              Reinitialize Time          Last Changed
----------------- ---------------- -------- ------------------------------------ ------------------------- ------------------------- -------------------------- -------------------------
leaf01            Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 16:49:04 2020  Tue Sep 29 21:24:49 2020  Tue Sep 29 21:24:49 2020   Thu Oct  1 16:26:33 2020

Configuration Commands

The netq config and netq notification commands enable the network administrator to manage the NetQ Agent and CLI server configuration, set up container monitoring, and configure event notification.

NetQ Agent Configuration

The agent commands enable the network administrator to configure individual NetQ Agents. Refer to Cumulus NetQ Components for a description of NetQ Agents, to Manage NetQ Agents, or to Install NetQ Agents for more detailed usage examples.

The agent configuration commands enable you to add and remove agents from switches and hosts, start and stop agent operations, debug the agent, specify default commands, and enable or disable a variety of monitoring features (including Kubernetes, sensors, FRR (FRRouting), CPU usage limit, and What Just Happened).

Commands apply to one agent at a time, and are run from the switch or host where the NetQ Agent resides.

The agent configuration commands include:

netq config (add|del|show) agent
netq config (start|stop|status|restart) agent

This example shows how to configure the agent to send sensor data.

cumulus@switch~:$ netq config add agent sensors

This example shows how to start monitoring with Kubernetes.

cumulus@switch:~$ netq config add agent kubernetes-monitor poll-period 15

This example shows how to view the NetQ Agent configuration.

cumulus@switch:~$ netq config show agent
netq-agent             value      default
---------------------  ---------  ---------
enable-opta-discovery  True       True
exhibitport
agenturl
server                 127.0.0.1  127.0.0.1
exhibiturl
vrf                    default    default
agentport              8981       8981
port                   31980      31980

After making configuration changes to your agents, you must restart the agent for the changes to take effect. Use the netq config restart agent command.
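
For example, after enabling sensor or Kubernetes monitoring as shown above:

# Restart the agent so the new configuration takes effect
cumulus@switch:~$ netq config restart agent
# Confirm the agent is running again
cumulus@switch:~$ netq config status agent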

CLI Configuration

The CLI commands enable the network administrator to configure and manage the CLI component. These commands enable you to add or remove the CLI (essentially enabling or disabling the service), start and restart it, and view its configuration.

Commands apply to one device at a time, and are run from the switch or host where the CLI is run.

The CLI configuration commands include:

netq config add cli server
netq config del cli server
netq config show cli premises [json]
netq config show (cli|all) [json]
netq config (status|restart) cli

This example shows how to restart the CLI instance.

cumulus@switch~:$ netq config restart cli

This example shows how to enable the CLI on a NetQ On-premises Appliance or Virtual Machine (VM).

cumulus@switch~:$ netq config add cli server 10.1.3.101

This example shows how to enable the CLI on a NetQ Cloud Appliance or VM for the Chicago premises and the default port.

netq config add cli server api.netq.cumulusnetworks.com access-key <user-access-key> secret-key <user-secret-key> premises chicago port 443

Event Notification Commands

The notification configuration commands enable you to add, remove and show notification application integrations. These commands create the channels, filters, and rules needed to control event messaging. The commands include:

netq (add|del|show) notification channel
netq (add|del|show) notification rule
netq (add|del|show) notification filter
netq (add|del|show) notification proxy

An integration includes at least one channel (PagerDuty, Slack, or syslog), at least one filter (defined by rules you create), and at least one rule.

This example shows how to configure a PagerDuty channel:

cumulus@switch:~$ netq add notification channel pagerduty pd-netq-events integration-key c6d666e210a8425298ef7abde0d1998
Successfully added/updated channel pd-netq-events
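
To complete a minimal integration, you then tie the channel to events with a rule and a filter. The sketch below illustrates the typical pattern; the rule and filter names (ifdownRule, ifdownFilter) and the key/value pair are example choices, and the exact options are documented in Configure Notifications:

# Create a rule that matches events for a particular interface (illustrative key/value)
cumulus@switch:~$ netq add notification rule ifdownRule key ifname value swp52
# Send events matching that rule to the PagerDuty channel created above
cumulus@switch:~$ netq add notification filter ifdownFilter rule ifdownRule channel pd-netq-events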

Refer to Configure Notifications for details about using these commands and additional examples.

Trace Commands

The trace commands enable the network administrator to view the available paths between two nodes on the network currently and at a time in the past. You can perform a layer 2 or layer 3 trace, and view the output in one of three formats (json, pretty, and detail). JSON output provides the output in a JSON file format for ease of importing to other applications or software. Pretty output lines up the paths in a pseudo-graphical manner to help visualize multiple paths. Detail output is useful for traces with higher hop counts where the pretty output wraps lines, making it harder to interpret the results. The detail output displays a table with a row for each path.

The trace command syntax is:

netq trace <mac> [vlan <1-4096>] from (<src-hostname>|<ip-src>) [vrf <vrf>] [around <text-time>] [json|detail|pretty] [debug]
netq trace <ip> from (<src-hostname>|<ip-src>) [vrf <vrf>] [around <text-time>] [json|detail|pretty] [debug]

This example shows how to run a trace based on the destination IP address, in pretty output with a small number of resulting paths:

cumulus@switch:~$ netq trace 10.0.0.11 from 10.0.0.14 pretty
Number of Paths: 6
    Inconsistent PMTU among paths
Number of Paths with Errors: 0
Number of Paths with Warnings: 0
Path MTU: 9000
    leaf04 swp52 -- swp4 spine02 swp2 -- swp52 leaf02 peerlink.4094 -- peerlink.4094 leaf01 lo
                                                    peerlink.4094 -- peerlink.4094 leaf01 lo
    leaf04 swp51 -- swp4 spine01 swp2 -- swp51 leaf02 peerlink.4094 -- peerlink.4094 leaf01 lo
                                                    peerlink.4094 -- peerlink.4094 leaf01 lo
    leaf04 swp52 -- swp4 spine02 swp1 -- swp52 leaf01 lo
    leaf04 swp51 -- swp4 spine01 swp1 -- swp51 leaf01 lo

This example shows how to run a trace based on the destination IP address, in detail output with a small number of resulting paths:

cumulus@switch:~$ netq trace 10.0.0.11 from 10.0.0.14 detail
Number of Paths: 6
    Inconsistent PMTU among paths
Number of Paths with Errors: 0
Number of Paths with Warnings: 0
Path MTU: 9000
Id  Hop Hostname        InPort          InVlan InTunnel              InRtrIf         InVRF           OutRtrIf        OutVRF          OutTunnel             OutPort         OutVlan
--- --- --------------- --------------- ------ --------------------- --------------- --------------- --------------- --------------- --------------------- --------------- -------
1   1   leaf04                                                                                       swp52           default                               swp52
    2   spine02         swp4                                         swp4            default         swp2            default                               swp2
    3   leaf02          swp52                                        swp52           default         peerlink.4094   default                               peerlink.4094
    4   leaf01          peerlink.4094                                peerlink.4094   default                                                               lo
--- --- --------------- --------------- ------ --------------------- --------------- --------------- --------------- --------------- --------------------- --------------- -------
2   1   leaf04                                                                                       swp52           default                               swp52
    2   spine02         swp4                                         swp4            default         swp2            default                               swp2
    3   leaf02          swp52                                        swp52           default         peerlink.4094   default                               peerlink.4094
    4   leaf01          peerlink.4094                                peerlink.4094   default                                                               lo
--- --- --------------- --------------- ------ --------------------- --------------- --------------- --------------- --------------- --------------------- --------------- -------
3   1   leaf04                                                                                       swp51           default                               swp51
    2   spine01         swp4                                         swp4            default         swp2            default                               swp2
    3   leaf02          swp51                                        swp51           default         peerlink.4094   default                               peerlink.4094
    4   leaf01          peerlink.4094                                peerlink.4094   default                                                               lo
--- --- --------------- --------------- ------ --------------------- --------------- --------------- --------------- --------------- --------------------- --------------- -------
4   1   leaf04                                                                                       swp51           default                               swp51
    2   spine01         swp4                                         swp4            default         swp2            default                               swp2
    3   leaf02          swp51                                        swp51           default         peerlink.4094   default                               peerlink.4094
    4   leaf01          peerlink.4094                                peerlink.4094   default                                                               lo
--- --- --------------- --------------- ------ --------------------- --------------- --------------- --------------- --------------- --------------------- --------------- -------
5   1   leaf04                                                                                       swp52           default                               swp52
    2   spine02         swp4                                         swp4            default         swp1            default                               swp1
    3   leaf01          swp52                                        swp52           default                                                               lo
--- --- --------------- --------------- ------ --------------------- --------------- --------------- --------------- --------------- --------------------- --------------- -------
6   1   leaf04                                                                                       swp51           default                               swp51
    2   spine01         swp4                                         swp4            default         swp1            default                               swp1
    3   leaf01          swp51                                        swp51           default                                                               lo
--- --- --------------- --------------- ------ --------------------- --------------- --------------- --------------- --------------- --------------------- --------------- -------

This example shows how to run a trace based on the destination MAC address, in pretty output:

cumulus@switch:~$ netq trace A0:00:00:00:00:11 vlan 1001 from Server03 pretty
Number of Paths: 6
Number of Paths with Errors: 0
Number of Paths with Warnings: 0
Path MTU: 9152
    
    Server03 bond1.1001 -- swp7 <vlan1001> Leaf02 vni: 34 swp5 -- swp4 Spine03 swp7 -- swp5 vni: 34 Leaf04 swp6 -- swp1.1001 Server03 <swp1.1001>
                                                        swp4 -- swp4 Spine02 swp7 -- swp4 vni: 34 Leaf04 swp6 -- swp1.1001 Server03 <swp1.1001>
                                                        swp3 -- swp4 Spine01 swp7 -- swp3 vni: 34 Leaf04 swp6 -- swp1.1001 Server03 <swp1.1001>
            bond1.1001 -- swp7 <vlan1001> Leaf01 vni: 34 swp5 -- swp3 Spine03 swp7 -- swp5 vni: 34 Leaf04 swp6 -- swp1.1001 Server03 <swp1.1001>
                                                        swp4 -- swp3 Spine02 swp7 -- swp4 vni: 34 Leaf04 swp6 -- swp1.1001 Server03 <swp1.1001>
                                                        swp3 -- swp3 Spine01 swp7 -- swp3 vni: 34 Leaf04 swp6 -- swp1.1001 Server03 <swp1.1001>
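
The same trace can also be produced in a machine-readable form for scripting. The following is a minimal sketch, not part of the output above; it assumes the trace command accepts the json option (as the NetQ show commands do) and that python3 is available on the switch for pretty-printing (jq works equally well if installed):

cumulus@switch:~$ netq trace A0:00:00:00:00:11 vlan 1001 from Server03 json | python3 -m json.tool

The JSON form carries the same path information as the pretty output and is easier to consume from monitoring or automation scripts.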

Manage Deployment

This topic is intended for network administrators who are responsible for installation, setup, and maintenance of Cumulus NetQ in their data center or campus environment. NetQ offers the ability to monitor and manage your network infrastructure and operational health with simple tools based on open source Linux. This topic provides instructions and information about installing, backing up, and upgrading NetQ. It also contains instructions for integrating with an LDAP server and Grafana.

Before you get started, you should review the release notes for this version.

Install NetQ

The Cumulus NetQ software contains several components that must be installed, including the NetQ applications, the database, and the NetQ Agents. NetQ can be deployed in two arrangements:

  • Hosted entirely on premises (the on-premises solution)
  • Hosted using the NetQ Cloud service (the cloud solution)

In either arrangement, the NetQ Agents reside on the switches and hosts being monitored in your network.

For the on-premises solution, the NetQ Agents collect and transmit data from the switches and/or hosts back to the NetQ On-premises Appliance or Virtual Machine running the NetQ Platform, which in turn processes and stores the data in its database. This data is then provided for display through several user interfaces.

For the cloud solution, the NetQ Agents function in exactly the same way, except that they send their collected data to the NetQ Collector, which contains only the aggregation and forwarding application. The NetQ Collector then transmits this data to the Cumulus Networks cloud-based infrastructure for further processing and storage. This data is then provided for display through the same user interfaces as the on-premises solution. In this arrangement, the browser interface can be pointed to the local NetQ Cloud Appliance or VM, or directly to netq.cumulusnetworks.com.

Installation Choices

There are several choices that you must make to determine what steps you need to perform to install the NetQ solution. First, you must determine whether you intend to deploy the solution fully on your premises or to deploy the cloud solution. Second, you must decide whether to deploy a virtual machine on your own hardware or use one of the Cumulus NetQ appliances. Third, you must determine whether to install the software on a single server or as a server cluster. Finally, if you have an existing on-premises solution and want to save your existing NetQ data, you must back up that data before installing the new software.

The documentation walks you through these choices and then provides the instructions specific to your selections.

Installation Workflow Summary

No matter how you answer the questions above, the installation workflow can be summarized as follows:

  1. Prepare physical server or virtual machine.
  2. Install the software (NetQ Platform or NetQ Collector).
  3. Install and configure NetQ Agents on switches and hosts.
  4. Install and configure NetQ CLI on switches and hosts (optional, but useful).

Where to Go Next

Follow the instructions in Install the NetQ System to begin installation of Cumulus NetQ.

Install the NetQ System

This topic walks you through the NetQ System installation decisions and then provides installation steps based on those choices. If you are already comfortable with your installation choices, you may use the matrix in Install NetQ Quick Start to go directly to the installation steps.

To install NetQ 3.2.x, you must first decide whether you want to install the NetQ System in an on-premises or cloud deployment. Both deployment options provide secure access to data and features useful for monitoring and troubleshooting your network, and each has its benefits.

It is common to select an on-premises deployment model if you want to host all required hardware and software at your location, and you have the in-house skill set to install, configure, and maintain it, including performing data backups, acquiring and maintaining hardware and software, and handling integration and license management. This model is also a good choice if you want very limited or no access to the Internet from switches and hosts in your network. Some companies simply want complete control of their network, with no outside involvement.

If, however, you find that you want to host only a small server on your premises and leave the details up to Cumulus Networks, then a cloud deployment might be the right choice for you. With a cloud deployment, a small local server connects to the NetQ Cloud service over selected ports or through a proxy server. Only data aggregation and forwarding is supported. The majority of the NetQ applications are hosted and data storage is provided in the cloud. Cumulus handles the backups and maintenance of the application and storage. This model is often chosen when it is untenable to support deployment in-house or if you need the flexibility to scale quickly, while also reducing capital expenses.

Click the deployment model you want to use to continue with installation:

Install NetQ as an On-premises Deployment

On-premises deployments of NetQ can use a single server or a server cluster. In either case, you can use either the Cumulus NetQ Appliance or your own server running a KVM or VMware Virtual Machine (VM). This topic walks you through the installation for each of these on-premises options.

The next installation step is to decide whether you are deploying a single server or a server cluster. Both options provide the same services and features. The biggest difference is in the number of servers to be deployed and in the continued availability of services running on those servers should hardware failures occur.

A single server is easier to set up, configure, and manage, but it can limit your ability to scale your network monitoring quickly. A server cluster is a bit more complicated to set up, but it limits potential downtime and increases availability by having more than one server that can run the software and store the data.

Select the standalone single-server arrangement for smaller, simpler deployments. Be sure to consider the capabilities and resources needed on this server to support the size of your final deployment.

Select the server cluster arrangement to obtain scalability and high availability for your network. You can configure one master node and up to nine worker nodes.

Click the server arrangement you want to use to begin installation:

Install NetQ as a Cloud Deployment

Cloud deployments of NetQ can use a single server or a server cluster on site. The NetQ database remains in the cloud either way. You can use either the Cumulus NetQ Cloud Appliance or your own server running a KVM or VMware Virtual Machine (VM). This topic walks you through the installation for each of these cloud options.

The next installation step is to decide whether you are deploying a single server or a server cluster. Both options provide the same services and features. The biggest difference is in the number of servers to be deployed and in the continued availability of services running on those servers should hardware failures occur.

A single server is easier to set up, configure, and manage, but it can limit your ability to scale your network monitoring quickly. A server cluster is a bit more complicated to set up, but it limits potential downtime and increases availability by having more than one server that can run the software and store the data.

Click the server arrangement you want to use to begin installation:

Set Up Your VMware Virtual Machine for a Single On-premises Server

Follow these steps to set up and configure your VM on a single server in an on-premises deployment:

  1. Verify that your system meets the VM requirements.

    When using a VM, the following system resources must be allocated:
    Resource                   Minimum Requirement
    Processor                  Eight (8) virtual CPUs
    Memory                     64 GB RAM
    Local disk storage         256 GB (2 TB max) SSD with a minimum of 1000 disk IOPS for a standard 4 KB block size
                               (Note: This must be an SSD; use of other storage options can lead to system instability and is not supported.)
    Network interface speed    1 Gb NIC
    Hypervisor                 VMware ESXi™ 6.5 or later (OVA image) for servers running Cumulus Linux, CentOS, Ubuntu, and RedHat operating systems
  2. Confirm that the needed ports are open for communications.

    You must open the following ports on your NetQ Platform:
    Port     Protocol   Component Access
    8443     TCP        Admin UI
    443      TCP        NetQ UI
    31980    TCP        NetQ Agent communication
    32708    TCP        API Gateway
    22       TCP        SSH

    Port 32666 is no longer used for the NetQ UI.

  3. Download the NetQ Platform image.

    Access to the software downloads depends on whether you are an existing customer who downloaded Cumulus Networks software before September 1, 2020, or a new customer. Follow the instructions that apply to you.

    Existing customer who has downloaded Cumulus Networks software before September 1, 2020:
    1. On the MyMellanox Downloads page, select NetQ from the Software -> Cumulus Software list.
    2. Click 3.2 from the Version list, and then select 3.2.1 from the submenu.
    3. Select VMware from the HyperVisor/Platform list.

    4. Scroll down to view the image, and click Download. This downloads the NetQ-3.2.1.tgz installation package.

    New customer downloading Cumulus Networks software on or after September 1, 2020:
    1. On the My Mellanox support page, log in to your account. If needed, create a new account and then log in.

      Your username is based on your email address. For example, user1@domain.com.mlnx.
    2. Open the Downloads menu.
    3. Click Software.
    4. Open the Cumulus Software option.
    5. Click All downloads next to Cumulus NetQ.
    6. Select 3.2.1 from the NetQ Version dropdown.
    7. Select VMware from the Hypervisor dropdown.
    8. Click Show Download.
    9. Verify this is the correct image, then click Download.

    Ignore the Firmware, Documentation, and More files options as these do not apply to NetQ.

  4. Set up and configure your VM.

    Open your hypervisor and set up your VM. You can use this example for reference or use your own hypervisor instructions.

    VMware Example Configuration

    This example shows the VM setup process using an OVA file with VMware ESXi.
    1. Enter the address of the hardware in your browser.

    2. Log in to VMware using credentials with root access.

    3. Click Storage in the Navigator to verify you have an SSD installed.

    4. Click Create/Register VM at the top of the right pane.

    5. Select Deploy a virtual machine from an OVF or OVA file, and click Next.

    6. Provide a name for the VM, for example NetQ.

      Tip: Make note of the name used during install as this is needed in a later step.

    7. Drag and drop the NetQ Platform image file you downloaded in Step 3 above.

    8. Click Next.

    9. Select the storage type and data store for the image to use, then click Next. In this example, only one is available.

    10. Accept the default deployment options or modify them according to your network needs. Click Next when you are finished.

    11. Review the configuration summary. Click Back to change any of the settings, or click Finish to continue with the creation of the VM.

      The progress of the request is shown in the Recent Tasks window at the bottom of the application. This may take some time, so continue with your other work until the upload finishes.

    12. Once completed, view the full details of the VM and hardware.

  5. Log in to the VM and change the password.

    Use the default credentials to log in the first time:

    • Username: cumulus
    • Password: cumulus
    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
    You are required to change your password immediately (root enforced)
    System information as of Thu Dec  3 21:35:42 UTC 2020
    System load:  0.09              Processes:           120
    Usage of /:   8.1% of 61.86GB   Users logged in:     0
    Memory usage: 5%                IP address for eth0: <ipaddr>
    Swap usage:   0%
    WARNING: Your password has expired.
    You must change your password now and login again!
    Changing password for cumulus.
    (current) UNIX password: cumulus
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    Connection to <ipaddr> closed.
    

    Log in again with your new password.

    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
      System information as of Thu Dec  3 21:35:59 UTC 2020
      System load:  0.07              Processes:           121
      Usage of /:   8.1% of 61.86GB   Users logged in:     0
      Memory usage: 5%                IP address for eth0: <ipaddr>
      Swap usage:   0%
    Last login: Thu Dec  3 21:35:43 2020 from <local-ipaddr>
    cumulus@ubuntu:~$
    
  6. Verify the platform is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check
  7. Change the hostname for the VM from the default value.

    The default hostname for the NetQ Virtual Machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.

    Kubernetes requires that hostnames are composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.

    The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').

    Use the following command (a small sketch that checks a candidate hostname against these rules follows these setup steps):

    cumulus@hostname:~$ sudo hostnamectl set-hostname NEW_HOSTNAME
  8. Run the Bootstrap CLI. Be sure to replace the eth0 interface used in this example with the interface on the server used to listen for NetQ Agents.

    cumulus@:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz

    Allow about five to ten minutes for this to complete, and only then continue to the next step.

    If this step fails for any reason, you can run netq bootstrap reset [purge-db|keep-db] and then try again.

    If you have changed the IP address or hostname of the NetQ On-premises VM after this step, you need to re-register this address with NetQ as follows:

    Reset the VM, indicating whether you want to purge any NetQ DB data or keep it.

    cumulus@hostname:~$ netq bootstrap reset [purge-db|keep-db]

    Re-run the Bootstrap CLI. This example uses interface eth0. Replace this with your updated IP address, hostname or interface using the interface or ip-addr option.

    cumulus@:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz
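
The following is an optional sketch for checking a candidate hostname against the label rules in step 7 before applying it with hostnamectl. It uses only standard shell tooling; the hostname value shown (netq-ts-01.example.com) is a placeholder, not a required name:

cumulus@hostname:~$ NEW_HOSTNAME=netq-ts-01.example.com    # placeholder; substitute your own name
cumulus@hostname:~$ # The full name must be 253 characters or fewer; each dot-separated label must be
cumulus@hostname:~$ # 1-63 characters of lowercase letters, digits, or hyphens, and must not start or end with a hyphen.
cumulus@hostname:~$ if [ ${#NEW_HOSTNAME} -le 253 ] && echo "$NEW_HOSTNAME" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?(\.[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?)*$'; then echo valid; else echo invalid; fi
valid
cumulus@hostname:~$ sudo hostnamectl set-hostname "$NEW_HOSTNAME"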

Considerations for Container Environments

Flannel Virtual Networks

If you are using Flannel with a container environment on your network, you may need to change its default IP address ranges if they conflict with other addresses on your network. This can only be done one time during the first installation. You do this by running the bootstrap command.

The address range is 10.244.0.0/16. NetQ overrides the original Flannel default, which is 10.1.0.0/16.

To change the default address range, use the bootstrap CLI with the pod-ip-range option. For example:

cumulus@hostname:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.0.tgz pod-ip-range 10.255.0.0/16
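
To confirm afterward which pod IP range is in effect, one option (assuming kubectl is available on the NetQ server once the software is installed) is to list the pod CIDR assigned to each node; every node receives a slice of the configured range:

cumulus@hostname:~$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
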
Docker Default Bridge Interface

The default Docker bridge interface is disabled in NetQ. If you need to re-enable the interface, contact support.
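
To confirm this on your own server, a quick check with standard iproute2 tooling (not a NetQ command) is shown below; if the bridge has been re-enabled by support, the command prints the interface details instead:

cumulus@hostname:~$ ip link show docker0
Device "docker0" does not exist.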

Install and Activate the NetQ Software

The final step is to install and activate the Cumulus NetQ software. You can do this using the Admin UI or the CLI.

Click the installation and activation method you want to use to complete installation:

Set Up Your VMware Virtual Machine for a Single Cloud Server

Follow these steps to set up and configure your VM for a cloud deployment:

  1. Verify that your system meets the VM requirements.

    When using a VM, the following system resources must be allocated:
    Resource                   Minimum Requirement
    Processor                  Four (4) virtual CPUs
    Memory                     8 GB RAM
    Local disk storage         For NetQ 3.2.x and later: 64 GB (2 TB max)
                               For NetQ 3.1 and earlier: 32 GB (2 TB max)
    Network interface speed    1 Gb NIC
    Hypervisor                 VMware ESXi™ 6.5 or later (OVA image) for servers running Cumulus Linux, CentOS, Ubuntu, and RedHat operating systems
  2. Confirm that the needed ports are open for communications.

    You must open the following ports on your NetQ Platform:
    Port     Protocol   Component Access
    8443     TCP        Admin UI
    443      TCP        NetQ UI
    31980    TCP        NetQ Agent communication
    32708    TCP        API Gateway
    22       TCP        SSH

    Port 32666 is no longer used for the NetQ UI.

  3. Download the NetQ Platform image.

    Access to the software downloads depends on whether you are an existing customer who downloaded Cumulus Networks software before September 1, 2020, or a new customer. Follow the instructions that apply to you.

    Existing customer who has downloaded Cumulus Networks software before September 1, 2020:
    1. On the MyMellanox Downloads page, select NetQ from the Software -> Cumulus Software list.
    2. Click 3.2 from the Version list, and then select 3.2.1 from the submenu.
    3. Select VMware (Cloud) from the HyperVisor/Platform list.

    4. Scroll down to view the image, and click Download. This downloads the NetQ-3.2.1-opta.tgz installation package.

    New customer downloading Cumulus Networks software on or after September 1, 2020:
    1. On the My Mellanox support page, log in to your account. If needed, create a new account and then log in.

      Your username is based on your email address. For example, user1@domain.com.mlnx.
    2. Open the Downloads menu.
    3. Click Software.
    4. Open the Cumulus Software option.
    5. Click All downloads next to Cumulus NetQ.
    6. Select 3.2.1 from the NetQ Version dropdown.
    7. Select VMware (cloud) from the Hypervisor dropdown.
    8. Click Show Download.
    9. Verify this is the correct image, then click Download.

    Ignore the Firmware, Documentation, and More files options as these do not apply to NetQ.

  4. Set up and configure your VM.

    Open your hypervisor and set up your VM. You can use this example for reference or use your own hypervisor instructions.

    VMware Example Configuration

    This example shows the VM setup process using an OVA file with VMware ESXi.
    1. Enter the address of the hardware in your browser.

    2. Log in to VMware using credentials with root access.

    3. Click Storage in the Navigator to verify you have an SSD installed.

    4. Click Create/Register VM at the top of the right pane.

    5. Select Deploy a virtual machine from an OVF or OVA file, and click Next.

    6. Provide a name for the VM, for example NetQ.

      Tip: Make note of the name used during install as this is needed in a later step.

    7. Drag and drop the NetQ Platform image file you downloaded in Step 3 above.

    8. Click Next.

    9. Select the storage type and data store for the image to use, then click Next. In this example, only one is available.

    10. Accept the default deployment options or modify them according to your network needs. Click Next when you are finished.

    11. Review the configuration summary. Click Back to change any of the settings, or click Finish to continue with the creation of the VM.

      The progress of the request is shown in the Recent Tasks window at the bottom of the application. This may take some time, so continue with your other work until the upload finishes.

    12. Once completed, view the full details of the VM and hardware.

  5. Log in to the VM and change the password.

    Use the default credentials to log in the first time:

    • Username: cumulus
    • Password: cumulus
    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
    You are required to change your password immediately (root enforced)
    System information as of Thu Dec  3 21:35:42 UTC 2020
    System load:  0.09              Processes:           120
    Usage of /:   8.1% of 61.86GB   Users logged in:     0
    Memory usage: 5%                IP address for eth0: <ipaddr>
    Swap usage:   0%
    WARNING: Your password has expired.
    You must change your password now and login again!
    Changing password for cumulus.
    (current) UNIX password: cumulus
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    Connection to <ipaddr> closed.
    

    Log in again with your new password.

    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
      System information as of Thu Dec  3 21:35:59 UTC 2020
      System load:  0.07              Processes:           121
      Usage of /:   8.1% of 61.86GB   Users logged in:     0
      Memory usage: 5%                IP address for eth0: <ipaddr>
      Swap usage:   0%
    Last login: Thu Dec  3 21:35:43 2020 from <local-ipaddr>
    cumulus@ubuntu:~$
    
  6. Verify the platform is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check-cloud
  7. Change the hostname for the VM from the default value.

    The default hostname for the NetQ Virtual Machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.

    Kubernetes requires that hostnames are composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.

    The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').

    Use the following command:

    cumulus@hostname:~$ sudo hostnamectl set-hostname NEW_HOSTNAME
  8. Run the Bootstrap CLI. Be sure to replace the eth0 interface used in this example with the interface on the server used to listen for NetQ Agents.

    cumulus@:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz

    Allow about five to ten minutes for this to complete, and only then continue to the next step.

    If this step fails for any reason, you can run netq bootstrap reset and then try again.

    If you have changed the IP address or hostname of the NetQ Cloud VM after this step, you need to re-register this address with NetQ as follows:

    Reset the VM.

    cumulus@hostname:~$ netq bootstrap reset

    Re-run the Bootstrap CLI. This example uses interface eth0. Replace this with your updated IP address, hostname or interface using the interface or ip-addr option.

    cumulus@:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz

Considerations for Container Environments

Flannel Virtual Networks

If you are using Flannel with a container environment on your network, you may need to change its default IP address ranges if they conflict with other addresses on your network. This can only be done one time during the first installation. You do this by running the bootstrap command.

The address range is 10.244.0.0/16. NetQ overrides the original Flannel default, which is 10.1.0.0/16.

To change the default address range, use the bootstrap CLI with the pod-ip-range option. For example:

cumulus@hostname:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.0.tgz pod-ip-range 10.255.0.0/16
Docker Default Bridge Interface

The default Docker bridge interface is disabled in NetQ. If you need to re-enable the interface, contact support.

Install and Activate the NetQ Software

The final step is to install and activate the Cumulus NetQ software. You can do this using the Admin UI or the CLI.

Click the installation and activation method you want to use to complete installation:

Set Up Your VMware Virtual Machine for an On-premises Server Cluster

First configure the VM on the master node, and then configure the VM on each worker node.

Follow these steps to set up and configure your VM cluster for an on-premises deployment:

  1. Verify that your master node meets the VM requirements.

    When using a VM, the following system resources must be allocated:
    Resource                   Minimum Requirement
    Processor                  Eight (8) virtual CPUs
    Memory                     64 GB RAM
    Local disk storage         256 GB (2 TB max) SSD with a minimum of 1000 disk IOPS for a standard 4 KB block size
                               (Note: This must be an SSD; use of other storage options can lead to system instability and is not supported.)
    Network interface speed    1 Gb NIC
    Hypervisor                 VMware ESXi™ 6.5 or later (OVA image) for servers running Cumulus Linux, CentOS, Ubuntu, and RedHat operating systems
  2. Confirm that the needed ports are open for communications.

    You must open the following ports on your NetQ Platforms:
    Port     Protocol   Component Access
    8443     TCP        Admin UI
    443      TCP        NetQ UI
    31980    TCP        NetQ Agent communication
    32708    TCP        API Gateway
    22       TCP        SSH

    Additionally, for internal cluster communication, you must open these ports:

    Port     Protocol   Component Access
    8080     TCP        Admin API
    5000     TCP        Docker registry
    8472     UDP        Flannel port for VXLAN
    6443     TCP        Kubernetes API server
    10250    TCP        kubelet health probe
    2379     TCP        etcd
    2380     TCP        etcd
    7072     TCP        Kafka JMX monitoring
    9092     TCP        Kafka client
    7071     TCP        Cassandra JMX monitoring
    7000     TCP        Cassandra cluster communication
    9042     TCP        Cassandra client
    7073     TCP        Zookeeper JMX monitoring
    2888     TCP        Zookeeper cluster communication
    3888     TCP        Zookeeper cluster communication
    2181     TCP        Zookeeper client

    Port 32666 is no longer used for the NetQ UI.

  3. Download the NetQ Platform image.

    Access to the software downloads depends on whether you are an existing customer who downloaded Cumulus Networks software before September 1, 2020, or a new customer. Follow the instructions that apply to you.

    Existing customer who has downloaded Cumulus Networks software before September 1, 2020:
    1. On the MyMellanox Downloads page, select NetQ from the Software -> Cumulus Software list.
    2. Click 3.2 from the Version list, and then select 3.2.1 from the submenu.
    3. Select VMware from the HyperVisor/Platform list.

    4. Scroll down to view the image, and click Download. This downloads the NetQ-3.2.1.tgz installation package.

    New customer downloading Cumulus Networks software on or after September 1, 2020:
    1. On the My Mellanox support page, log in to your account. If needed, create a new account and then log in.

      Your username is based on your email address. For example, user1@domain.com.mlnx.
    2. Open the Downloads menu.
    3. Click Software.
    4. Open the Cumulus Software option.
    5. Click All downloads next to Cumulus NetQ.
    6. Select 3.2.1 from the NetQ Version dropdown.
    7. Select VMware from the Hypervisor dropdown.
    8. Click Show Download.
    9. Verify this is the correct image, then click Download.

    Ignore the Firmware, Documentation, and More files options as these do not apply to NetQ.

  4. Set up and configure your VM.

    Open your hypervisor and set up your VM. You can use this example for reference or use your own hypervisor instructions.

    VMware Example Configuration

    This example shows the VM setup process using an OVA file with VMware ESXi.
    1. Enter the address of the hardware in your browser.

    2. Log in to VMware using credentials with root access.

    3. Click Storage in the Navigator to verify you have an SSD installed.

    4. Click Create/Register VM at the top of the right pane.

    5. Select Deploy a virtual machine from an OVF or OVA file, and click Next.

    6. Provide a name for the VM, for example NetQ.

      Tip: Make note of the name used during install as this is needed in a later step.

    7. Drag and drop the NetQ Platform image file you downloaded in Step 3 above.

    8. Click Next.

    9. Select the storage type and data store for the image to use, then click Next. In this example, only one is available.

    10. Accept the default deployment options or modify them according to your network needs. Click Next when you are finished.

    11. Review the configuration summary. Click Back to change any of the settings, or click Finish to continue with the creation of the VM.

      The progress of the request is shown in the Recent Tasks window at the bottom of the application. This may take some time, so continue with your other work until the upload finishes.

    12. Once completed, view the full details of the VM and hardware.

  5. Log in to the VM and change the password.

    Use the default credentials to log in the first time:

    • Username: cumulus
    • Password: cumulus
    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
    You are required to change your password immediately (root enforced)
    System information as of Thu Dec  3 21:35:42 UTC 2020
    System load:  0.09              Processes:           120
    Usage of /:   8.1% of 61.86GB   Users logged in:     0
    Memory usage: 5%                IP address for eth0: <ipaddr>
    Swap usage:   0%
    WARNING: Your password has expired.
    You must change your password now and login again!
    Changing password for cumulus.
    (current) UNIX password: cumulus
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    Connection to <ipaddr> closed.
    

    Log in again with your new password.

    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
      System information as of Thu Dec  3 21:35:59 UTC 2020
      System load:  0.07              Processes:           121
      Usage of /:   8.1% of 61.86GB   Users logged in:     0
      Memory usage: 5%                IP address for eth0: <ipaddr>
      Swap usage:   0%
    Last login: Thu Dec  3 21:35:43 2020 from <local-ipaddr>
    cumulus@ubuntu:~$
    
  6. Verify the master node is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check
  7. Change the hostname for the VM from the default value.

    The default hostname for the NetQ Virtual Machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.

    Kubernetes requires that hostnames are composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.

    The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').

    Use the following command:

    cumulus@hostname:~$ sudo hostnamectl set-hostname NEW_HOSTNAME
  8. Run the Bootstrap CLI. Be sure to replace the eth0 interface used in this example with the interface on the server used to listen for NetQ Agents.

    cumulus@:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz

    Allow about five to ten minutes for this to complete, and only then continue to the next step.

    If this step fails for any reason, you can run netq bootstrap reset [purge-db|keep-db] and then try again.

    If you have changed the IP address or hostname of the NetQ On-premises VM after this step, you need to re-register this address with NetQ as follows:

    Reset the VM, indicating whether you want to purge any NetQ DB data or keep it.

    cumulus@hostname:~$ netq bootstrap reset [purge-db|keep-db]

    Re-run the Bootstrap CLI. This example uses interface eth0. Replace this with your updated IP address, hostname or interface using the interface or ip-addr option.

    cumulus@:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz
  9. Verify that your first worker node meets the VM requirements, as described in Step 1.

  10. Confirm that the needed ports are open for communications, as described in Step 2.

  11. Open your hypervisor and set up the VM in the same manner as for the master node.

    Make a note of the private IP address you assign to the worker node. It is needed for later installation steps.

  12. Log in to the VM and change the password.

    Use the default credentials to log in the first time:

    • Username: cumulus
    • Password: cumulus
    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
    You are required to change your password immediately (root enforced)
    System information as of Thu Dec  3 21:35:42 UTC 2020
    System load:  0.09              Processes:           120
    Usage of /:   8.1% of 61.86GB   Users logged in:     0
    Memory usage: 5%                IP address for eth0: <ipaddr>
    Swap usage:   0%
    WARNING: Your password has expired.
    You must change your password now and login again!
    Changing password for cumulus.
    (current) UNIX password: cumulus
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    Connection to <ipaddr> closed.
    

    Log in again with your new password.

    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
      System information as of Thu Dec  3 21:35:59 UTC 2020
      System load:  0.07              Processes:           121
      Usage of /:   8.1% of 61.86GB   Users logged in:     0
      Memory usage: 5%                IP address for eth0: <ipaddr>
      Swap usage:   0%
    Last login: Thu Dec  3 21:35:43 2020 from <local-ipaddr>
    cumulus@ubuntu:~$
    
  13. Verify the worker node is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check
  14. Run the Bootstrap CLI on the worker node.

    cumulus@:~$ netq bootstrap worker tarball /mnt/installables/netq-bootstrap-3.2.1.tgz master-ip <master-ip>

    Provide a password using the password option if required. Allow about five to ten minutes for this to complete, and only then continue to the next step.

    If this step fails for any reason, you can run netq bootstrap reset [purge-db|keep-db] on the new worker node and then try again.

  15. Repeat Steps 9 through 14 for each additional worker node you want in your cluster. A sketch for verifying that all nodes joined the cluster follows these steps.
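
Once the master and all worker nodes have been bootstrapped, you can optionally confirm that every node joined the Kubernetes cluster. This is a minimal check, assuming kubectl is available to the cumulus user on the master node:

cumulus@hostname:~$ kubectl get nodes

Each master and worker node should appear in the list with a STATUS of Ready before you continue with the NetQ software installation.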

Considerations for Container Environments

Flannel Virtual Networks

If you are using Flannel with a container environment on your network, you may need to change its default IP address ranges if they conflict with other addresses on your network. This can only be done one time during the first installation. You do this by running the bootstrap command.

The address range is 10.244.0.0/16. NetQ overrides the original Flannel default, which is 10.1.0.0/16.

To change the default address range, use the bootstrap CLI with the pod-ip-range option. For example:

cumulus@hostname:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.0.tgz pod-ip-range 10.255.0.0/16
Docker Default Bridge Interface

The default Docker bridge interface is disabled in NetQ. If you need to re-enable the interface, contact support.

Install and Activate the NetQ Software

The final step is to install and activate the Cumulus NetQ software. You can do this using the Admin UI or the CLI.

Click the installation and activation method you want to use to complete installation:

Set Up Your VMware Virtual Machine for a Cloud Server Cluster

First configure the VM on the master node, and then configure the VM on each worker node.

Follow these steps to set up and configure your VM on a cluster of servers in a cloud deployment:

  1. Verify that your master node meets the VM requirements.

    When using a VM, the following system resources must be allocated:
    Resource                   Minimum Requirement
    Processor                  Four (4) virtual CPUs
    Memory                     8 GB RAM
    Local disk storage         For NetQ 3.2.x and later: 64 GB (2 TB max)
                               For NetQ 3.1 and earlier: 32 GB (2 TB max)
    Network interface speed    1 Gb NIC
    Hypervisor                 VMware ESXi™ 6.5 or later (OVA image) for servers running Cumulus Linux, CentOS, Ubuntu, and RedHat operating systems
  2. Confirm that the needed ports are open for communications.

    You must open the following ports on your NetQ Platforms:
    Port     Protocol   Component Access
    8443     TCP        Admin UI
    443      TCP        NetQ UI
    31980    TCP        NetQ Agent communication
    32708    TCP        API Gateway
    22       TCP        SSH

    Additionally, for internal cluster communication, you must open these ports:

    Port     Protocol   Component Access
    8080     TCP        Admin API
    5000     TCP        Docker registry
    8472     UDP        Flannel port for VXLAN
    6443     TCP        Kubernetes API server
    10250    TCP        kubelet health probe
    2379     TCP        etcd
    2380     TCP        etcd
    7072     TCP        Kafka JMX monitoring
    9092     TCP        Kafka client
    7071     TCP        Cassandra JMX monitoring
    7000     TCP        Cassandra cluster communication
    9042     TCP        Cassandra client
    7073     TCP        Zookeeper JMX monitoring
    2888     TCP        Zookeeper cluster communication
    3888     TCP        Zookeeper cluster communication
    2181     TCP        Zookeeper client

    Port 32666 is no longer used for the NetQ UI.

  3. Download the NetQ Platform image.

    Access to the software downloads depends on whether you are an existing customer who downloaded Cumulus Networks software before September 1, 2020, or a new customer. Follow the instructions that apply to you.

    Existing customer who has downloaded Cumulus Networks software before September 1, 2020:
    1. On the MyMellanox Downloads page, select NetQ from the Software -> Cumulus Software list.
    2. Click 3.2 from the Version list, and then select 3.2.1 from the submenu.
    3. Select VMware (Cloud) from the HyperVisor/Platform list.

    4. Scroll down to view the image, and click Download. This downloads the NetQ-3.2.1-opta.tgz installation package.

    New customer downloading Cumulus Networks software on or after September 1, 2020:
    1. On the My Mellanox support page, log in to your account. If needed, create a new account and then log in.

      Your username is based on your email address. For example, user1@domain.com.mlnx.
    2. Open the Downloads menu.
    3. Click Software.
    4. Open the Cumulus Software option.
    5. Click All downloads next to Cumulus NetQ.
    6. Select 3.2.1 from the NetQ Version dropdown.
    7. Select VMware (cloud) from the Hypervisor dropdown.
    8. Click Show Download.
    9. Verify this is the correct image, then click Download.

    Ignore the Firmware, Documentation, and More files options as these do not apply to NetQ.

  4. Set up and configure your VM.

    Open your hypervisor and set up your VM. You can use this example for reference or use your own hypervisor instructions.

    VMware Example Configuration

    This example shows the VM setup process using an OVA file with VMware ESXi.
    1. Enter the address of the hardware in your browser.

    2. Log in to VMware using credentials with root access.

    3. Click Storage in the Navigator to verify you have an SSD installed.

    4. Click Create/Register VM at the top of the right pane.

    5. Select Deploy a virtual machine from an OVF or OVA file, and click Next.

    6. Provide a name for the VM, for example NetQ.

      Tip: Make note of the name used during install as this is needed in a later step.

    7. Drag and drop the NetQ Platform image file you downloaded in Step 3 above.

    8. Click Next.

    9. Select the storage type and data store for the image to use, then click Next. In this example, only one is available.

    10. Accept the default deployment options or modify them according to your network needs. Click Next when you are finished.

    11. Review the configuration summary. Click Back to change any of the settings, or click Finish to continue with the creation of the VM.

      The progress of the request is shown in the Recent Tasks window at the bottom of the application. This may take some time, so continue with your other work until the upload finishes.

    12. Once completed, view the full details of the VM and hardware.

  5. Log in to the VM and change the password.

    Use the default credentials to log in the first time:

    • Username: cumulus
    • Password: cumulus
    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
    You are required to change your password immediately (root enforced)
    System information as of Thu Dec  3 21:35:42 UTC 2020
    System load:  0.09              Processes:           120
    Usage of /:   8.1% of 61.86GB   Users logged in:     0
    Memory usage: 5%                IP address for eth0: <ipaddr>
    Swap usage:   0%
    WARNING: Your password has expired.
    You must change your password now and login again!
    Changing password for cumulus.
    (current) UNIX password: cumulus
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    Connection to <ipaddr> closed.
    

    Log in again with your new password.

    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
      System information as of Thu Dec  3 21:35:59 UTC 2020
      System load:  0.07              Processes:           121
      Usage of /:   8.1% of 61.86GB   Users logged in:     0
      Memory usage: 5%                IP address for eth0: <ipaddr>
      Swap usage:   0%
    Last login: Thu Dec  3 21:35:43 2020 from <local-ipaddr>
    cumulus@ubuntu:~$
    
  6. Verify the master node is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check-cloud
  7. Change the hostname for the VM from the default value.

    The default hostname for the NetQ Virtual Machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.

    Kubernetes requires that hostnames are composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.

    The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').

    Use the following command:

    cumulus@hostname:~$ sudo hostnamectl set-hostname NEW_HOSTNAME
  8. Run the Bootstrap CLI. Be sure to replace the eth0 interface used in this example with the interface on the server used to listen for NetQ Agents.

    cumulus@:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz

    Allow about five to ten minutes for this to complete, and only then continue to the next step.

    If this step fails for any reason, you can run netq bootstrap reset and then try again.

    If you have changed the IP address or hostname of the NetQ Cloud VM after this step, you need to re-register this address with NetQ as follows:

    Reset the VM.

    cumulus@hostname:~$ netq bootstrap reset

    Re-run the Bootstrap CLI. This example uses interface eth0. Replace this with your updated IP address, hostname or interface using the interface or ip-addr option.

    cumulus@:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz
  9. Verify that your first worker node meets the VM requirements, as described in Step 1.

  10. Confirm that the needed ports are open for communications, as described in Step 2.

  11. Open your hypervisor and set up the VM in the same manner as for the master node.

    Make a note of the private IP address you assign to the worker node. It is needed for later installation steps.

  12. Log in to the VM and change the password.

    Use the default credentials to log in the first time:

    • Username: cumulus
    • Password: cumulus
    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
    You are required to change your password immediately (root enforced)
    System information as of Thu Dec  3 21:35:42 UTC 2020
    System load:  0.09              Processes:           120
    Usage of /:   8.1% of 61.86GB   Users logged in:     0
    Memory usage: 5%                IP address for eth0: <ipaddr>
    Swap usage:   0%
    WARNING: Your password has expired.
    You must change your password now and login again!
    Changing password for cumulus.
    (current) UNIX password: cumulus
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    Connection to <ipaddr> closed.
    

    Log in again with your new password.

    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
      System information as of Thu Dec  3 21:35:59 UTC 2020
      System load:  0.07              Processes:           121
      Usage of /:   8.1% of 61.86GB   Users logged in:     0
      Memory usage: 5%                IP address for eth0: <ipaddr>
      Swap usage:   0%
    Last login: Thu Dec  3 21:35:43 2020 from <local-ipaddr>
    cumulus@ubuntu:~$
    
  13. Verify the worker node is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check-cloud
  14. Run the Bootstrap CLI on the worker node.

    cumulus@:~$ netq bootstrap worker tarball /mnt/installables/netq-bootstrap-3.2.1.tgz master-ip <master-ip>

    Provide a password using the password option if required. Allow about five to ten minutes for this to complete, and only then continue to the next step.

    If this step fails for any reason, you can run netq bootstrap reset on the new worker node and then try again.

  15. Repeat Steps 9 through 14 for each additional worker node you want in your cluster.

Considerations for Container Environments

Flannel Virtual Networks

If you are using Flannel with a container environment on your network, you may need to change its default IP address ranges if they conflict with other addresses on your network. This can only be done one time during the first installation. You do this by running the bootstrap command.

The address range is 10.244.0.0/16. NetQ overrides the original Flannel default, which is 10.1.0.0/16.

To change the default address range, use the bootstrap CLI with the pod-ip-range option. For example:

cumulus@hostname:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.0.tgz pod-ip-range 10.255.0.0/16
Docker Default Bridge Interface

The default Docker bridge interface is disabled in NetQ. If you need to re-enable the interface, contact support.

Install and Activate the NetQ Software

The final step is to install and activate the Cumulus NetQ software. You can do this using the Admin UI or the CLI.

Click the installation and activation method you want to use to complete installation:

Set Up Your KVM Virtual Machine for a Single On-premises Server

Follow these steps to set up and configure your VM on a single server in an on-premises deployment:

  1. Verify that your system meets the VM requirements.

    When using a VM, the following system resources must be allocated:
    Resource                   Minimum Requirement
    Processor                  Eight (8) virtual CPUs
    Memory                     64 GB RAM
    Local disk storage         256 GB (2 TB max) SSD with a minimum of 1000 disk IOPS for a standard 4 KB block size
                               (Note: This must be an SSD; use of other storage options can lead to system instability and is not supported.)
    Network interface speed    1 Gb NIC
    Hypervisor                 KVM/QCOW (QEMU Copy on Write) image for servers running CentOS, Ubuntu, and RedHat operating systems
  2. Confirm that the needed ports are open for communications.

    You must open the following ports on your NetQ Platform:
    Port     Protocol   Component Access
    8443     TCP        Admin UI
    443      TCP        NetQ UI
    31980    TCP        NetQ Agent communication
    32708    TCP        API Gateway
    22       TCP        SSH

    Port 32666 is no longer used for the NetQ UI.

  3. Download the NetQ Platform image.

    Access to the software downloads depends on whether you are an existing customer who downloaded Cumulus Networks software before September 1, 2020, or a new customer. Follow the instructions that apply to you.

    Existing customer who has downloaded Cumulus Networks software before September 1, 2020:
    1. On the MyMellanox Downloads page, select NetQ from the Software -> Cumulus Software list.
    2. Click 3.2 from the Version list, and then select 3.2.1 from the submenu.
    3. Select KVM from the HyperVisor/Platform list.

    4. Scroll down to view the image, and click Download. This downloads the NetQ-3.2.1.tgz installation package.

    New customer downloading Cumulus Networks software on or after September 1, 2020:
    1. On the My Mellanox support page, log in to your account. If needed, create a new account and then log in.

      Your username is based on your email address. For example, user1@domain.com.mlnx.
    2. Open the Downloads menu.
    3. Click Software.
    4. Open the Cumulus Software option.
    5. Click All downloads next to Cumulus NetQ.
    6. Select 3.2.1 from the NetQ Version dropdown.
    7. Select KVM from the Hypervisor dropdown.
    8. Click Show Download.
    9. Verify this is the correct image, then click Download.

    Ignore the Firmware, Documentation, and More files options as these do not apply to NetQ.

  4. Set up and configure your VM.

    Open your hypervisor and set up your VM. You can use this example for reference or use your own hypervisor instructions.

    KVM Example Configuration

    This example shows the VM setup process for a system with Libvirt and KVM/QEMU installed.

    1. Confirm that the SHA256 checksum matches the one posted on the Cumulus Downloads website to ensure the image download has not been corrupted.

      $ sha256sum ./Downloads/netq-3.2.1-ubuntu-18.04-ts-qemu.qcow2
      F4EF2B16C41EBF92ECCECD0A6094A49EB30AD59508F027B18B9DDAE7E57F0A6F ./Downloads/netq-3.2.1-ubuntu-18.04-ts-qemu.qcow2
    2. Copy the QCOW2 image to a directory where you want to run it.

      Tip: Copy, rather than move, the original QCOW2 image that you downloaded so that you do not need to download it again if you have to repeat this process.

      $ sudo mkdir /vms
      $ sudo cp ./Downloads/netq-3.2.1-ubuntu-18.04-ts-qemu.qcow2 /vms/ts.qcow2
    3. Create the VM.

      For a Direct VM, where the VM uses a MACVLAN interface to sit on the host interface for its connectivity:

      $ virt-install --name=netq_ts --vcpus=8 --memory=65536 --os-type=linux --os-variant=generic --disk path=/vms/ts.qcow2,format=qcow2,bus=virtio,cache=none --network=type=direct,source=eth0,model=virtio --import --noautoconsole

      Replace the disk path value with the location where the QCOW2 image is to reside. Replace the network source value (eth0 in the above example) with the name of the interface where the VM is connected to the external network.

      Or, for a Bridged VM, where the VM attaches to a bridge which has already been set up to allow for external access:

      $ virt-install --name=netq_ts --vcpus=8 --memory=65536 --os-type=linux --os-variant=generic --disk path=/vms/ts.qcow2,format=qcow2,bus=virtio,cache=none --network=bridge=br0,model=virtio --import --noautoconsole

      Replace the network bridge value (br0 in the above example) with the name of the (pre-existing) bridge interface where the VM is connected to the external network.

      Make note of the name used during install as this is needed in a later step and in the verification sketch that follows these setup steps.

    4. Watch the boot process in another terminal window.
      $ virsh console netq_ts
  5. Log in to the VM and change the password.

    Use the default credentials to log in the first time:

    • Username: cumulus
    • Password: cumulus
    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
    You are required to change your password immediately (root enforced)
    System information as of Thu Dec  3 21:35:42 UTC 2020
    System load:  0.09              Processes:           120
    Usage of /:   8.1% of 61.86GB   Users logged in:     0
    Memory usage: 5%                IP address for eth0: <ipaddr>
    Swap usage:   0%
    WARNING: Your password has expired.
    You must change your password now and login again!
    Changing password for cumulus.
    (current) UNIX password: cumulus
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    Connection to <ipaddr> closed.
    

    Log in again with your new password.

    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
      System information as of Thu Dec  3 21:35:59 UTC 2020
      System load:  0.07              Processes:           121
      Usage of /:   8.1% of 61.86GB   Users logged in:     0
      Memory usage: 5%                IP address for eth0: <ipaddr>
      Swap usage:   0%
    Last login: Thu Dec  3 21:35:43 2020 from <local-ipaddr>
    cumulus@ubuntu:~$
    
  6. Verify the platform is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check
  7. Change the hostname for the VM from the default value.

    The default hostname for the NetQ Virtual Machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.

    Kubernetes requires that hostnames are composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.

    The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').

    Use the following command:

    cumulus@hostname:~$ sudo hostnamectl set-hostname NEW_HOSTNAME
  8. Run the Bootstrap CLI. Be sure to replace the eth0 interface used in this example with the interface on the server used to listen for NetQ Agents.

    cumulus@:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz

    Allow about five to ten minutes for this to complete, and only then continue to the next step.

    If this step fails for any reason, you can run netq bootstrap reset [purge-db|keep-db] and then try again.

    If you have changed the IP address or hostname of the NetQ On-premises VM after this step, you need to re-register this address with NetQ as follows:

    Reset the VM, indicating whether you want to purge any NetQ DB data or keep it.

    cumulus@hostname:~$ netq bootstrap reset [purge-db|keep-db]

    Re-run the Bootstrap CLI. This example uses interface eth0. Replace this with your updated IP address, hostname or interface using the interface or ip-addr option.

    cumulus@:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz
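
    For example, to register by IP address rather than by interface name (<new-ip-addr> is a placeholder; confirm the exact option syntax in the NetQ CLI reference):

    cumulus@:~$ netq bootstrap master ip-addr <new-ip-addr> tarball /mnt/installables/netq-bootstrap-3.2.1.tgz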

Considerations for Container Environments

Flannel Virtual Networks

If you are using Flannel with a container environment on your network, you may need to change its default IP address ranges if they conflict with other addresses on your network. This can only be done one time during the first installation. You do this by running the bootstrap command.

The address range is 10.244.0.0/16. NetQ overrides the original Flannel default, which is 10.1.0.0/16.

To change the default address range, use the bootstrap CLI with the pod-ip-range option. For example:

cumulus@hostname:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.0.tgz pod-ip-range 10.255.0.0/16
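
Before overriding the range, you can check whether the candidate range already appears in the host routing table. This is an informal sanity check only, using 10.255.0.0/16 from the example above:

cumulus@hostname:~$ ip route | grep -F '10.255.'

No output means there are no existing routes in that range on this host.
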
Docker Default Bridge Interface

The default Docker bridge interface is disabled in NetQ. If you need to re-enable the interface, contact support.

Install and Activate the NetQ Software

The final step is to install and activate the Cumulus NetQ software. You can do this using the Admin UI or the CLI.

Click the installation and activation method you want to use to complete installation:

Set Up Your KVM Virtual Machine for a Single Cloud Server

Follow these steps to set up and configure your VM on a single server in a cloud deployment:

  1. Verify that your system meets the VM requirements.

    When using a VM, the following system resources must be allocated:
    Resource | Minimum Requirement
    Processor | Four (4) virtual CPUs
    Memory | 8 GB RAM
    Local disk storage | For NetQ 3.2.x and later: 64 GB (2 TB max); for NetQ 3.1 and earlier: 32 GB (2 TB max)
    Network interface speed | 1 Gb NIC
    Hypervisor | KVM/QCOW (QEMU Copy on Write) image for servers running CentOS, Ubuntu and RedHat operating systems
  2. Confirm that the needed ports are open for communications.

    You must open the following ports on your NetQ Platform:
    Port | Protocol | Component Access
    8443 | TCP | Admin UI
    443 | TCP | NetQ UI
    31980 | TCP | NetQ Agent communication
    32708 | TCP | API Gateway
    22 | TCP | SSH

    Port 32666 is no longer used for the NetQ UI.
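
    After the NetQ software is installed and running, you can spot-check reachability of these ports from a client machine. This is a minimal sketch, assuming netcat (nc) is installed and <netq-ip> is the platform address:

    $ for port in 22 443 8443 31980 32708; do nc -z -w 3 -v <netq-ip> $port; done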

  3. Download the NetQ Platform image.

    Access to the software downloads depends on whether you downloaded Cumulus Networks software before September 1, 2020 or are a new customer. Follow the instructions that apply to you.

    Existing customer who has downloaded Cumulus Networks software before September 1, 2020:
    1. On the MyMellanox Downloads page, select NetQ from the Software -> Cumulus Software list.
    2. Click 3.2 from the Version list, and then select 3.2.1 from the submenu.
    3. Select KVM (Cloud) from the HyperVisor/Platform list.

    4. Scroll down to view the image, and click Download. This downloads the NetQ-3.2.1-opta.tgz installation package.

    New customer downloading Cumulus Networks software on or after September 1, 2020:
    1. On the My Mellanox support page, log in to your account. If needed, create a new account and then log in.

      Your username is based on your email address. For example, user1@domain.com.mlnx.
    2. Open the Downloads menu.
    3. Click Software.
    4. Open the Cumulus Software option.
    5. Click All downloads next to Cumulus NetQ.
    6. Select 3.2.1 from the NetQ Version dropdown.
    7. Select KVM (cloud) from the Hypervisor dropdown.
    8. Click Show Download.
    9. Verify this is the correct image, then click Download.

    Ignore the Firmware, Documentation, and More files options as these do not apply to NetQ.

  4. Set up and configure your VM.

    Open your hypervisor and set up your VM. You can use this example for reference or use your own hypervisor instructions.

    KVM Example Configuration

    This example shows the VM setup process for a system with Libvirt and KVM/QEMU installed.

    1. Confirm that the SHA256 checksum matches the one posted on the Cumulus Downloads website to ensure the image download has not been corrupted.

      $ sha256sum ./Downloads/netq-3.2.1-ubuntu-18.04-tscloud-qemu.qcow2
      DDC24C25CD50DF5C6F1C0D7070ACA8317A6C4AB52F3A95EA005BA9777849981E ./Downloads/netq-3.2.1-ubuntu-18.04-tscloud-qemu.qcow2
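
      To avoid comparing the digest by eye, you can script the comparison against the published value. A minimal sketch using the expected checksum shown above:

      EXPECTED=DDC24C25CD50DF5C6F1C0D7070ACA8317A6C4AB52F3A95EA005BA9777849981E
      # Prints "checksum OK" only if the computed digest matches the expected value.
      [ "$(sha256sum ./Downloads/netq-3.2.1-ubuntu-18.04-tscloud-qemu.qcow2 | awk '{print toupper($1)}')" = "$EXPECTED" ] && echo "checksum OK" || echo "checksum MISMATCH"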
    2. Copy the QCOW2 image to a directory where you want to run it.

      Tip: Copy, rather than move, the original QCOW2 image you downloaded so that you do not have to download it again if you need to repeat this process.

      $ sudo mkdir /vms
      $ sudo cp ./Downloads/netq-3.2.1-ubuntu-18.04-tscloud-qemu.qcow2 /vms/ts.qcow2
    3. Create the VM.

      For a Direct VM, where the VM uses a MACVLAN interface to sit on the host interface for its connectivity:

      $ virt-install --name=netq_ts --vcpus=4 --memory=8192 --os-type=linux --os-variant=generic --disk path=/vms/ts.qcow2,format=qcow2,bus=virtio,cache=none --network=type=direct,source=eth0,model=virtio --import --noautoconsole

      Replace the disk path value with the location where the QCOW2 image is to reside. Replace the network source value (eth0 in the example above) with the name of the host interface that connects the VM to the external network.

      Or, for a Bridged VM, where the VM attaches to a bridge that has already been set up to allow external access:

      $ virt-install --name=netq_ts --vcpus=4 --memory=8192 --os-type=linux --os-variant=generic --disk path=/vms/ts.qcow2,format=qcow2,bus=virtio,cache=none --network=bridge=br0,model=virtio --import --noautoconsole

      Replace the network bridge value (br0 in the example above) with the name of the existing bridge interface that connects the VM to the external network.

      Make a note of the name used during the installation; you need it in a later step.

    4. Watch the boot process in another terminal window.
      $ virsh console netq_ts
  5. Log in to the VM and change the password.

    Use the default credentials to log in the first time:

    • Username: cumulus
    • Password: cumulus
    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
    You are required to change your password immediately (root enforced)
    System information as of Thu Dec  3 21:35:42 UTC 2020
    System load:  0.09              Processes:           120
    Usage of /:   8.1% of 61.86GB   Users logged in:     0
    Memory usage: 5%                IP address for eth0: <ipaddr>
    Swap usage:   0%
    WARNING: Your password has expired.
    You must change your password now and login again!
    Changing password for cumulus.
    (current) UNIX password: cumulus
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    Connection to <ipaddr> closed.
    

    Log in again with your new password.

    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
      System information as of Thu Dec  3 21:35:59 UTC 2020
      System load:  0.07              Processes:           121
      Usage of /:   8.1% of 61.86GB   Users logged in:     0
      Memory usage: 5%                IP address for eth0: <ipaddr>
      Swap usage:   0%
    Last login: Thu Dec  3 21:35:43 2020 from <local-ipaddr>
    cumulus@ubuntu:~$
    
  6. Verify the platform is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check-cloud
  7. Change the hostname for the VM from the default value.

    The default hostname for the NetQ Virtual Machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.

    Kubernetes requires that hostnames are composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.

    The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').

    Use the following command:

    cumulus@hostname:~$ sudo hostnamectl set-hostname NEW_HOSTNAME
  8. Run the Bootstrap CLI. Be sure to replace the eth0 interface used in this example with the interface on the server used to listen for NetQ Agents.

    cumulus@:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz

    Allow about five to ten minutes for this to complete, and only then continue to the next step.

    If this step fails for any reason, you can run netq bootstrap reset and then try again.

    If you have changed the IP address or hostname of the NetQ Cloud VM after this step, you need to re-register this address with NetQ as follows:

    Reset the VM.

    cumulus@hostname:~$ netq bootstrap reset

    Re-run the Bootstrap CLI. This example uses interface eth0. Replace this with your updated IP address, hostname or interface using the interface or ip-addr option.

    cumulus@:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz

Considerations for Container Environments

Flannel Virtual Networks

If you are using Flannel with a container environment on your network, you may need to change its default IP address ranges if they conflict with other addresses on your network. This can only be done one time during the first installation. You do this by running the bootstrap command.

The address range is 10.244.0.0/16. NetQ overrides the original Flannel default, which is 10.1.0.0/16.

To change the default address range, use the bootstrap CLI with the pod-ip-range option. For example:

cumulus@hostname:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.0.tgz pod-ip-range 10.255.0.0/16
Docker Default Bridge Interface

The default Docker bridge interface is disabled in NetQ. If you need to re-enable the interface, contact support.

Install and Activate the NetQ Software

The final step is to install and activate the Cumulus NetQ software. You can do this using the Admin UI or the CLI.

Click the installation and activation method you want to use to complete installation:

Set Up Your KVM Virtual Machine for an On-premises Server Cluster

First configure the VM on the master node, and then configure the VM on each worker node.

Follow these steps to set up and configure your VM on a cluster of servers in an on-premises deployment:

  1. Verify that your master node meets the VM requirements.

    When using a VM, the following system resources must be allocated:
    Resource | Minimum Requirement
    Processor | Eight (8) virtual CPUs
    Memory | 64 GB RAM
    Local disk storage | 256 GB (2 TB max) SSD with minimum disk IOPS of 1000 for a standard 4 KB block size (Note: this must be an SSD; other storage options can lead to system instability and are not supported.)
    Network interface speed | 1 Gb NIC
    Hypervisor | KVM/QCOW (QEMU Copy on Write) image for servers running CentOS, Ubuntu and RedHat operating systems
  2. Confirm that the needed ports are open for communications.

    You must open the following ports on your NetQ Platforms:
    Port | Protocol | Component Access
    8443 | TCP | Admin UI
    443 | TCP | NetQ UI
    31980 | TCP | NetQ Agent communication
    32708 | TCP | API Gateway
    22 | TCP | SSH
    Additionally, for internal cluster communication, you must open these ports:
    Port | Protocol | Component Access
    8080 | TCP | Admin API
    5000 | TCP | Docker registry
    8472 | UDP | Flannel port for VXLAN
    6443 | TCP | Kubernetes API server
    10250 | TCP | kubelet health probe
    2379 | TCP | etcd
    2380 | TCP | etcd
    7072 | TCP | Kafka JMX monitoring
    9092 | TCP | Kafka client
    7071 | TCP | Cassandra JMX monitoring
    7000 | TCP | Cassandra cluster communication
    9042 | TCP | Cassandra client
    7073 | TCP | Zookeeper JMX monitoring
    2888 | TCP | Zookeeper cluster communication
    3888 | TCP | Zookeeper cluster communication
    2181 | TCP | Zookeeper client

    Port 32666 is no longer used for the NetQ UI.
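
    Before bootstrapping, you can also confirm that none of these ports is already claimed by another service on the node. A minimal sketch using ss from iproute2; no output means the ports are free:

    cumulus@hostname:~$ sudo ss -tulnp | grep -E ':(8080|5000|8472|6443|10250|2379|2380|7072|9092|7071|7000|9042|7073|2888|3888|2181)[[:space:]]'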

  3. Download the NetQ Platform image.

    Access to the software downloads depends on whether you downloaded Cumulus Networks software before September 1, 2020 or are a new customer. Follow the instructions that apply to you.

    Existing customer who has downloaded Cumulus Networks software before September 1, 2020:
    1. On the MyMellanox Downloads page, select NetQ from the Software -> Cumulus Software list.
    2. Click 3.2 from the Version list, and then select 3.2.1 from the submenu.
    3. Select KVM from the HyperVisor/Platform list.

    4. Scroll down to view the image, and click Download. This downloads the NetQ-3.2.1.tgz installation package.

    New customer downloading Cumulus Networks software on or after September 1, 2020:
    1. On the My Mellanox support page, log in to your account. If needed, create a new account and then log in.

      Your username is based on your email address. For example, user1@domain.com.mlnx.
    2. Open the Downloads menu.
    3. Click Software.
    4. Open the Cumulus Software option.
    5. Click All downloads next to Cumulus NetQ.
    6. Select 3.2.1 from the NetQ Version dropdown.
    7. Select KVM from the Hypervisor dropdown.
    8. Click Show Download.
    9. Verify this is the correct image, then click Download.

    Ignore the Firmware, Documentation, and More files options as these do not apply to NetQ.

  4. Set up and configure your VM.

    Open your hypervisor and set up your VM. You can use this example for reference or use your own hypervisor instructions.

    KVM Example Configuration

    This example shows the VM setup process for a system with Libvirt and KVM/QEMU installed.

    1. Confirm that the SHA256 checksum matches the one posted on the Cumulus Downloads website to ensure the image download has not been corrupted.

      $ sha256sum ./Downloads/netq-3.2.1-ubuntu-18.04-ts-qemu.qcow2
      F4EF2B16C41EBF92ECCECD0A6094A49EB30AD59508F027B18B9DDAE7E57F0A6F ./Downloads/netq-3.2.1-ubuntu-18.04-ts-qemu.qcow2
    2. Copy the QCOW2 image to a directory where you want to run it.

      Tip: Copy, rather than move, the original QCOW2 image you downloaded so that you do not have to download it again if you need to repeat this process.

      $ sudo mkdir /vms
      $ sudo cp ./Downloads/netq-3.2.1-ubuntu-18.04-ts-qemu.qcow2 /vms/ts.qcow2
    3. Create the VM.

      For a Direct VM, where the VM uses a MACVLAN interface to sit on the host interface for its connectivity:

      $ virt-install --name=netq_ts --vcpus=8 --memory=65536 --os-type=linux --os-variant=generic --disk path=/vms/ts.qcow2,format=qcow2,bus=virtio,cache=none --network=type=direct,source=eth0,model=virtio --import --noautoconsole

      Replace the disk path value with the location where the QCOW2 image is to reside. Replace the network source value (eth0 in the example above) with the name of the host interface that connects the VM to the external network.

      Or, for a Bridged VM, where the VM attaches to a bridge that has already been set up to allow external access:

      $ virt-install --name=netq_ts --vcpus=8 --memory=65536 --os-type=linux --os-variant=generic --disk path=/vms/ts.qcow2,format=qcow2,bus=virtio,cache=none --network=bridge=br0,model=virtio --import --noautoconsole

      Replace the network bridge value (br0 in the example above) with the name of the existing bridge interface that connects the VM to the external network.

      Make a note of the name used during the installation; you need it in a later step.

    4. Watch the boot process in another terminal window.
      $ virsh console netq_ts
  5. Log in to the VM and change the password.

    Use the default credentials to log in the first time:

    • Username: cumulus
    • Password: cumulus
    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
    You are required to change your password immediately (root enforced)
    System information as of Thu Dec  3 21:35:42 UTC 2020
    System load:  0.09              Processes:           120
    Usage of /:   8.1% of 61.86GB   Users logged in:     0
    Memory usage: 5%                IP address for eth0: <ipaddr>
    Swap usage:   0%
    WARNING: Your password has expired.
    You must change your password now and login again!
    Changing password for cumulus.
    (current) UNIX password: cumulus
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    Connection to <ipaddr> closed.
    

    Log in again with your new password.

    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
      System information as of Thu Dec  3 21:35:59 UTC 2020
      System load:  0.07              Processes:           121
      Usage of /:   8.1% of 61.86GB   Users logged in:     0
      Memory usage: 5%                IP address for eth0: <ipaddr>
      Swap usage:   0%
    Last login: Thu Dec  3 21:35:43 2020 from <local-ipaddr>
    cumulus@ubuntu:~$
    
  6. Verify the master node is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check
  7. Change the hostname for the VM from the default value.

    The default hostname for the NetQ Virtual Machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.

    Kubernetes requires that hostnames are composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.

    The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').

    Use the following command:

    cumulus@hostname:~$ sudo hostnamectl set-hostname NEW_HOSTNAME
  8. Run the Bootstrap CLI on the master node. Be sure to replace the eth0 interface used in this example with the interface on the server used to listen for NetQ Agents.

    cumulus@:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz

    Allow about five to ten minutes for this to complete, and only then continue to the next step.

    If this step fails for any reason, you can run netq bootstrap reset [purge-db|keep-db] and then try again.

    If you have changed the IP address or hostname of the NetQ On-premises VM after this step, you need to re-register this address with NetQ as follows:

    Reset the VM, indicating whether you want to purge any NetQ DB data or keep it.

    cumulus@hostname:~$ netq bootstrap reset [purge-db|keep-db]

    Re-run the Bootstrap CLI. This example uses interface eth0. Replace this with your updated IP address, hostname or interface using the interface or ip-addr option.

    cumulus@:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz
  9. Verify that your first worker node meets the VM requirements, as described in Step 1.

  10. Confirm that the needed ports are open for communications, as described in Step 2.

  11. Open your hypervisor and set up the VM in the same manner as for the master node.

    Make a note of the private IP address you assign to the worker node. It is needed for later installation steps.
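
    One way to find it, assuming the VM's management interface is eth0 as in the examples above, is to log in on the VM console and run:

    $ ip -4 -brief addr show eth0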

  12. Log in to the VM and change the password.

    Use the default credentials to log in the first time:

    • Username: cumulus
    • Password: cumulus
    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
    You are required to change your password immediately (root enforced)
    System information as of Thu Dec  3 21:35:42 UTC 2020
    System load:  0.09              Processes:           120
    Usage of /:   8.1% of 61.86GB   Users logged in:     0
    Memory usage: 5%                IP address for eth0: <ipaddr>
    Swap usage:   0%
    WARNING: Your password has expired.
    You must change your password now and login again!
    Changing password for cumulus.
    (current) UNIX password: cumulus
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    Connection to <ipaddr> closed.
    

    Log in again with your new password.

    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
      System information as of Thu Dec  3 21:35:59 UTC 2020
      System load:  0.07              Processes:           121
      Usage of /:   8.1% of 61.86GB   Users logged in:     0
      Memory usage: 5%                IP address for eth0: <ipaddr>
      Swap usage:   0%
    Last login: Thu Dec  3 21:35:43 2020 from <local-ipaddr>
    cumulus@ubuntu:~$
    
  13. Verify the worker node is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check
  14. Run the Bootstrap CLI on the worker node.

    cumulus@:~$ netq bootstrap worker tarball /mnt/installables/netq-bootstrap-3.2.1.tgz master-ip <master-ip>

    Provide a password using the password option if required. Allow about five to ten minutes for this to complete, and only then continue to the next step.
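
    For example, with the password option (<master-ip> and <password> are placeholders; confirm the exact syntax in the NetQ CLI reference):

    cumulus@:~$ netq bootstrap worker tarball /mnt/installables/netq-bootstrap-3.2.1.tgz master-ip <master-ip> password <password>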

    If this step fails for any reason, you can run netq bootstrap reset [purge-db|keep-db] on the new worker node and then try again.

  15. Repeat Steps 9 through 14 for each additional worker node you want in your cluster.

Considerations for Container Environments

Flannel Virtual Networks

If you are using Flannel with a container environment on your network, you may need to change its default IP address ranges if they conflict with other addresses on your network. This can only be done one time during the first installation. You do this by running the bootstrap command.

The address range is 10.244.0.0/16. NetQ overrides the original Flannel default, which is 10.1.0.0/16.

To change the default address range, use the bootstrap CLI with the pod-ip-range option. For example:

cumulus@hostname:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.0.tgz pod-ip-range 10.255.0.0/16
Docker Default Bridge Interface

The default Docker bridge interface is disabled in NetQ. If you need to re-enable the interface, contact support.

Install and Activate the NetQ Software

The final step is to install and activate the Cumulus NetQ software. You can do this using the Admin UI or the CLI.

Click the installation and activation method you want to use to complete installation:

Set Up Your KVM Virtual Machine for a Cloud Server Cluster

First configure the VM on the master node, and then configure the VM on each worker node.

Follow these steps to set up and configure your VM on a cluster of servers in a cloud deployment:

  1. Verify that your master node meets the VM requirements.

    When using a VM, the following system resources must be allocated:
    Resource | Minimum Requirement
    Processor | Four (4) virtual CPUs
    Memory | 8 GB RAM
    Local disk storage | For NetQ 3.2.x and later: 64 GB (2 TB max); for NetQ 3.1 and earlier: 32 GB (2 TB max)
    Network interface speed | 1 Gb NIC
    Hypervisor | KVM/QCOW (QEMU Copy on Write) image for servers running CentOS, Ubuntu and RedHat operating systems
  2. Confirm that the needed ports are open for communications.

    You must open the following ports on your NetQ Platforms:
    Port | Protocol | Component Access
    8443 | TCP | Admin UI
    443 | TCP | NetQ UI
    31980 | TCP | NetQ Agent communication
    32708 | TCP | API Gateway
    22 | TCP | SSH
    Additionally, for internal cluster communication, you must open these ports:
    Port | Protocol | Component Access
    8080 | TCP | Admin API
    5000 | TCP | Docker registry
    8472 | UDP | Flannel port for VXLAN
    6443 | TCP | Kubernetes API server
    10250 | TCP | kubelet health probe
    2379 | TCP | etcd
    2380 | TCP | etcd
    7072 | TCP | Kafka JMX monitoring
    9092 | TCP | Kafka client
    7071 | TCP | Cassandra JMX monitoring
    7000 | TCP | Cassandra cluster communication
    9042 | TCP | Cassandra client
    7073 | TCP | Zookeeper JMX monitoring
    2888 | TCP | Zookeeper cluster communication
    3888 | TCP | Zookeeper cluster communication
    2181 | TCP | Zookeeper client

    Port 32666 is no longer used for the NetQ UI.

  3. Download the NetQ Platform image.

    Access to the software downloads depends on whether you downloaded Cumulus Networks software before September 1, 2020 or are a new customer. Follow the instructions that apply to you.

    Existing customer who has downloaded Cumulus Networks software before September 1, 2020:
    1. On the MyMellanox Downloads page, select NetQ from the Software -> Cumulus Software list.
    2. Click 3.2 from the Version list, and then select 3.2.1 from the submenu.
    3. Select KVM (Cloud) from the HyperVisor/Platform list.

    4. Scroll down to view the image, and click Download. This downloads the NetQ-3.2.1-opta.tgz installation package.

    New customer downloading Cumulus Networks software on or after September 1, 2020:
    1. On the My Mellanox support page, log in to your account. If needed, create a new account and then log in.

      Your username is based on your email address. For example, user1@domain.com.mlnx.
    2. Open the Downloads menu.
    3. Click Software.
    4. Open the Cumulus Software option.
    5. Click All downloads next to Cumulus NetQ.
    6. Select 3.2.1 from the NetQ Version dropdown.
    7. Select KVM (cloud) from the Hypervisor dropdown.
    8. Click Show Download.
    9. Verify this is the correct image, then click Download.

    Ignore the Firmware, Documentation, and More files options as these do not apply to NetQ.

  4. Set up and configure your VM.

    Open your hypervisor and set up your VM. You can use this example for reference or use your own hypervisor instructions.

    KVM Example Configuration

    This example shows the VM setup process for a system with Libvirt and KVM/QEMU installed.

    1. Confirm that the SHA256 checksum matches the one posted on the Cumulus Downloads website to ensure the image download has not been corrupted.

      $ sha256sum ./Downloads/netq-3.2.1-ubuntu-18.04-tscloud-qemu.qcow2
      DDC24C25CD50DF5C6F1C0D7070ACA8317A6C4AB52F3A95EA005BA9777849981E ./Downloads/netq-3.2.1-ubuntu-18.04-tscloud-qemu.qcow2
    2. Copy the QCOW2 image to a directory where you want to run it.

      Tip: Copy, rather than move, the original QCOW2 image you downloaded so that you do not have to download it again if you need to repeat this process.

      $ sudo mkdir /vms
      $ sudo cp ./Downloads/netq-3.2.1-ubuntu-18.04-tscloud-qemu.qcow2 /vms/ts.qcow2
    3. Create the VM.

      For a Direct VM, where the VM uses a MACVLAN interface to sit on the host interface for its connectivity:

      $ virt-install --name=netq_ts --vcpus=4 --memory=8192 --os-type=linux --os-variant=generic --disk path=/vms/ts.qcow2,format=qcow2,bus=virtio,cache=none --network=type=direct,source=eth0,model=virtio --import --noautoconsole

      Replace the disk path value with the location where the QCOW2 image is to reside. Replace the network source value (eth0 in the example above) with the name of the host interface that connects the VM to the external network.

      Or, for a Bridged VM, where the VM attaches to a bridge that has already been set up to allow external access:

      $ virt-install --name=netq_ts --vcpus=4 --memory=8192 --os-type=linux --os-variant=generic --disk path=/vms/ts.qcow2,format=qcow2,bus=virtio,cache=none --network=bridge=br0,model=virtio --import --noautoconsole

      Replace the network bridge value (br0 in the example above) with the name of the existing bridge interface that connects the VM to the external network.

      Make a note of the name used during the installation; you need it in a later step.

    4. Watch the boot process in another terminal window.
      $ virsh console netq_ts
  5. Log in to the VM and change the password.

    Use the default credentials to log in the first time:

    • Username: cumulus
    • Password: cumulus
    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
    You are required to change your password immediately (root enforced)
    System information as of Thu Dec  3 21:35:42 UTC 2020
    System load:  0.09              Processes:           120
    Usage of /:   8.1% of 61.86GB   Users logged in:     0
    Memory usage: 5%                IP address for eth0: <ipaddr>
    Swap usage:   0%
    WARNING: Your password has expired.
    You must change your password now and login again!
    Changing password for cumulus.
    (current) UNIX password: cumulus
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    Connection to <ipaddr> closed.
    

    Log in again with your new password.

    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
      System information as of Thu Dec  3 21:35:59 UTC 2020
      System load:  0.07              Processes:           121
      Usage of /:   8.1% of 61.86GB   Users logged in:     0
      Memory usage: 5%                IP address for eth0: <ipaddr>
      Swap usage:   0%
    Last login: Thu Dec  3 21:35:43 2020 from <local-ipaddr>
    cumulus@ubuntu:~$
    
  6. Verify the master node is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check-cloud
  7. Change the hostname for the VM from the default value.

    The default hostname for the NetQ Virtual Machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.

    Kubernetes requires that hostnames are composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.

    The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').

    Use the following command:

    cumulus@hostname:~$ sudo hostnamectl set-hostname NEW_HOSTNAME
  8. Run the Bootstrap CLI. Be sure to replace the eth0 interface used in this example with the interface on the server used to listen for NetQ Agents.

    cumulus@:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz

    Allow about five to ten minutes for this to complete, and only then continue to the next step.

    If this step fails for any reason, you can run netq bootstrap reset and then try again.

    If you have changed the IP address or hostname of the NetQ Cloud VM after this step, you need to re-register this address with NetQ as follows:

    Reset the VM.

    cumulus@hostname:~$ netq bootstrap reset

    Re-run the Bootstrap CLI. This example uses interface eth0. Replace this with your updated IP address, hostname or interface using the interface or ip-addr option.

    cumulus@:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz
  9. Verify that your first worker node meets the VM requirements, as described in Step 1.

  10. Confirm that the needed ports are open for communications, as described in Step 2.

  11. Open your hypervisor and set up the VM in the same manner as for the master node.

    Make a note of the private IP address you assign to the worker node. It is needed for later installation steps.

  12. Log in to the VM and change the password.

    Use the default credentials to log in the first time:

    • Username: cumulus
    • Password: cumulus
    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
    You are required to change your password immediately (root enforced)
    System information as of Thu Dec  3 21:35:42 UTC 2020
    System load:  0.09              Processes:           120
    Usage of /:   8.1% of 61.86GB   Users logged in:     0
    Memory usage: 5%                IP address for eth0: <ipaddr>
    Swap usage:   0%
    WARNING: Your password has expired.
    You must change your password now and login again!
    Changing password for cumulus.
    (current) UNIX password: cumulus
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    Connection to <ipaddr> closed.
    

    Log in again with your new password.

    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 18.04.5 LTS
    cumulus@<ipaddr>'s password:
      System information as of Thu Dec  3 21:35:59 UTC 2020
      System load:  0.07              Processes:           121
      Usage of /:   8.1% of 61.86GB   Users logged in:     0
      Memory usage: 5%                IP address for eth0: <ipaddr>
      Swap usage:   0%
    Last login: Thu Dec  3 21:35:43 2020 from <local-ipaddr>
    cumulus@ubuntu:~$
    
  13. Verify the worker node is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check-cloud
  14. Run the Bootstrap CLI on the worker node.

    cumulus@:~$ netq bootstrap worker tarball /mnt/installables/netq-bootstrap-3.2.1.tgz master-ip <master-ip>

    Provide a password using the password option if required. Allow about five to ten minutes for this to complete, and only then continue to the next step.

    If this step fails for any reason, you can run netq bootstrap reset on the new worker node and then try again.

  15. Repeat Steps 9 through 14 for each additional worker node you want in your cluster.

Considerations for Container Environments

Flannel Virtual Networks

If you are using Flannel with a container environment on your network, you may need to change its default IP address ranges if they conflict with other addresses on your network. This can only be done one time during the first installation. You do this by running the bootstrap command.

The address range is 10.244.0.0/16. NetQ overrides the original Flannel default, which is 10.1.0.0/16.

To change the default address range, use the bootstrap CLI with the pod-ip-range option. For example:

cumulus@hostname:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.0.tgz pod-ip-range 10.255.0.0/16
Docker Default Bridge Interface

The default Docker bridge interface is disabled in NetQ. If you need to re-enable the interface, contact support.

Install and Activate the NetQ Software

The final step is to install and activate the Cumulus NetQ software. You can do this using the Admin UI or the CLI.

Click the installation and activation method you want to use to complete installation:

Install the NetQ On-premises Appliance

This topic describes how to prepare your single, NetQ On-premises Appliance for installation of the NetQ Platform software.

Inside the box that was shipped to you, you’ll find:

For more detail about hardware specifications (including LED layouts and FRUs like the power supply or fans, and accessories like included cables) or safety and environmental information, refer to the user manual and quick reference guide.

Install the Appliance

After you unbox the appliance:
  1. Mount the appliance in the rack.
  2. Connect it to power following the procedures described in your appliance's user manual.
  3. Connect the Ethernet cable to the 1G management port (eno1).
  4. Power on the appliance.

If your network runs DHCP, you can configure Cumulus NetQ over the network. If DHCP is not enabled, then you configure the appliance using the console cable provided.

Configure the Password, Hostname and IP Address

Change the password and specify the hostname and IP address for the appliance before installing the NetQ software.

  1. Log in to the appliance using the default login credentials:

    • Username: cumulus
    • Password: cumulus
  2. Change the password using the passwd command:

    cumulus@hostname:~$ passwd
    Changing password for cumulus.
    (current) UNIX password: cumulus
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    
  3. The default hostname for the NetQ On-premises Appliance is netq-appliance. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.

    Kubernetes requires that hostnames are composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.

    The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').

    Use the following command:

    cumulus@hostname:~$ sudo hostnamectl set-hostname NEW_HOSTNAME
    
  4. Identify the IP address.

    The appliance contains two Ethernet ports. Port eno1 is dedicated to out-of-band management. This is where NetQ Agents should send the telemetry data collected from your monitored switches and hosts. By default, eno1 uses DHCPv4 to get its IP address. You can view the assigned IP address using the following command:

    cumulus@hostname:~$ ip -4 -brief addr show eno1
    eno1             UP             10.20.16.248/24
    

    Alternatively, you can configure the interface with a static IP address by editing the /etc/netplan/01-ethernet.yaml Ubuntu Netplan configuration file.

    For example, to set your network interface eno1 to a static IP address of 192.168.1.222 with gateway 192.168.1.1 and DNS servers 8.8.8.8 and 8.8.4.4:

    # This file describes the network interfaces available on your system
    # For more information, see netplan(5).
    network:
        version: 2
        renderer: networkd
        ethernets:
            eno1:
                dhcp4: no
                addresses: [192.168.1.222/24]
                gateway4: 192.168.1.1
                nameservers:
                    addresses: [8.8.8.8,8.8.4.4]
    

    Apply the settings.

    cumulus@hostname:~$ sudo netplan apply
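
    If you are connected over the interface you are changing, you can optionally test the configuration first with netplan try, if your netplan version supports it; the change rolls back automatically after a timeout unless you confirm it.

    cumulus@hostname:~$ sudo netplan try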
    

Verify NetQ Software and Appliance Readiness

Now that the appliance is up and running, verify that the software is available and the appliance is ready for installation.

  1. Verify that the needed packages are present and of the correct release, version 3.2.1 and update 31.

    cumulus@hostname:~$ dpkg -l | grep netq
    ii  netq-agent   3.2.1-ub18.04u31~1603789872.6f62fad_amd64   Cumulus NetQ Telemetry Agent for Ubuntu
    ii  netq-apps    3.2.1-ub18.04u31~1603789872.6f62fad_amd64   Cumulus NetQ Fabric Validation Application for Ubuntu
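
    You can also print just the installed versions for a quick comparison. A minimal sketch using dpkg-query:

    cumulus@hostname:~$ dpkg-query -W -f='${Package} ${Version}\n' netq-agent netq-apps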
  2. Verify the installation images are present and of the correct release, version 3.2.1.

    cumulus@hostname:~$ cd /mnt/installables/
    cumulus@hostname:/mnt/installables$ ls
    NetQ-3.2.1.tgz  netq-bootstrap-3.2.1.tgz
  3. Verify the appliance is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check
  4. Run the Bootstrap CLI. Be sure to replace the eno1 interface used in this example with the interface or IP address on the appliance used to listen for NetQ Agents.

    cumulus@:~$ netq bootstrap master interface eno1 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz

    Allow about five to ten minutes for this to complete, and only then continue to the next step.

    If this step fails for any reason, you can run netq bootstrap reset [purge-db|keep-db] and then try again.

    If you have changed the IP address or hostname of the NetQ On-premises Appliance after this step, you need to re-register this address with NetQ as follows:

    Reset the appliance, indicating whether you want to purge any NetQ DB data or keep it.

    cumulus@hostname:~$ netq bootstrap reset [purge-db|keep-db]

    Re-run the Bootstrap CLI on the appliance. This example uses interface eno1. Replace this with your updated IP address, hostname or interface using the interface or ip-addr option.

    cumulus@:~$ netq bootstrap master interface eno1 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz

Considerations for Container Environments

Flannel Virtual Networks

If you are using Flannel with a container environment on your network, you may need to change its default IP address ranges if they conflict with other addresses on your network. This can only be done one time during the first installation. You do this by running the bootstrap command.

The address range is 10.244.0.0/16. NetQ overrides the original Flannel default, which is 10.1.0.0/16.

To change the default address range, use the bootstrap CLI with the pod-ip-range option. For example:

cumulus@hostname:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.0.tgz pod-ip-range 10.255.0.0/16
Docker Default Bridge Interface

The default Docker bridge interface is disabled in NetQ. If you need to re-enable the interface, contact support.

Install and Activate the NetQ Software

The final step is to install and activate the Cumulus NetQ software. You can do this using the Admin UI or the NetQ CLI.

Click the installation and activation method you want to use to complete installation:

Install the NetQ Cloud Appliance

This topic describes how to prepare your single, NetQ Cloud Appliance for installation of the NetQ Collector software.

Inside the box that was shipped to you, you’ll find:

If you’re looking for hardware specifications (including LED layouts and FRUs like the power supply or fans and accessories like included cables) or safety and environmental information, check out the appliance’s user manual.

Install the Appliance

After you unbox the appliance:
  1. Mount the appliance in the rack.
  2. Connect it to power following the procedures described in your appliance's user manual.
  3. Connect the Ethernet cable to the 1G management port (eno1).
  4. Power on the appliance.

If your network runs DHCP, you can configure Cumulus NetQ over the network. If DHCP is not enabled, then you configure the appliance using the console cable provided.

Configure the Password, Hostname and IP Address

  1. Log in to the appliance using the default login credentials:

    • Username: cumulus
    • Password: cumulus
  2. Change the password using the passwd command:

    cumulus@hostname:~$ passwd
    Changing password for cumulus.
    (current) UNIX password: cumulus
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    
  3. The default hostname for the NetQ Cloud Appliance is netq-appliance. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.

    Kubernetes requires that hostnames are composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.

    The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').

    Use the following command:

    cumulus@hostname:~$ sudo hostnamectl set-hostname NEW_HOSTNAME
    
  4. Identify the IP address.

    The appliance contains two Ethernet ports. Port eno1 is dedicated to out-of-band management. This is where NetQ Agents should send the telemetry data collected from your monitored switches and hosts. By default, eno1 uses DHCPv4 to get its IP address. You can view the assigned IP address using the following command:

    cumulus@hostname:~$ ip -4 -brief addr show eno1
    eno1             UP             10.20.16.248/24
    

    Alternatively, you can configure the interface with a static IP address by editing the /etc/netplan/01-ethernet.yaml Ubuntu Netplan configuration file.

    For example, to set your network interface eno1 to a static IP address of 192.168.1.222 with gateway 192.168.1.1 and DNS servers 8.8.8.8 and 8.8.4.4:

    # This file describes the network interfaces available on your system
    # For more information, see netplan(5).
    network:
        version: 2
        renderer: networkd
        ethernets:
            eno1:
                dhcp4: no
                addresses: [192.168.1.222/24]
                gateway4: 192.168.1.1
                nameservers:
                    addresses: [8.8.8.8,8.8.4.4]
    

    Apply the settings.

    cumulus@hostname:~$ sudo netplan apply
    

Verify NetQ Software and Appliance Readiness

Now that the appliance is up and running, verify that the software is available and the appliance is ready for installation.

  1. Verify that the needed packages are present and of the correct release, version 3.2.1 and update 31.

    cumulus@hostname:~$ dpkg -l | grep netq
    ii  netq-agent   3.2.1-ub18.04u31~1603789872.6f62fad_amd64   Cumulus NetQ Telemetry Agent for Ubuntu
    ii  netq-apps    3.2.1-ub18.04u31~1603789872.6f62fad_amd64   Cumulus NetQ Fabric Validation Application for Ubuntu
  2. Verify the installation images are present and of the correct release, version 3.2.1.

    cumulus@hostname:~$ cd /mnt/installables/
    cumulus@hostname:/mnt/installables$ ls
    NetQ-3.2.1-opta.tgz  netq-bootstrap-3.2.1.tgz
  3. Verify the appliance is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check-cloud
  4. Run the Bootstrap CLI. Be sure to replace the eno1 interface used in this example with the interface or IP address on the appliance used to listen for NetQ Agents.

    cumulus@:~$ netq bootstrap master interface eno1 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz

    Allow about five to ten minutes for this to complete, and only then continue to the next step.

    If this step fails for any reason, you can run netq bootstrap reset and then try again.

    If you have changed the IP address or hostname of the NetQ Cloud Appliance after this step, you need to re-register this address with NetQ as follows:

    Reset the appliance.

    cumulus@hostname:~$ netq bootstrap reset

    Re-run the Bootstrap CLI on the appliance. This example uses interface eno1. Replace this with your updated IP address, hostname or interface using the interface or ip-addr option.

    cumulus@:~$ netq bootstrap master interface eno1 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz

Considerations for Container Environments

Flannel Virtual Networks

If you are using Flannel with a container environment on your network, you may need to change its default IP address ranges if they conflict with other addresses on your network. This can only be done one time during the first installation. You do this by running the bootstrap command.

The address range is 10.244.0.0/16. NetQ overrides the original Flannel default, which is 10.1.0.0/16.

To change the default address range, use the bootstrap CLI with the pod-ip-range option. For example:

cumulus@hostname:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.0.tgz pod-ip-range 10.255.0.0/16
Docker Default Bridge Interface

The default Docker bridge interface is disabled in NetQ. If you need to re-enable the interface, contact support.

Install and Activate the NetQ Software

The final step is to install and activate the Cumulus NetQ software. You can do this using the Admin UI or the NetQ CLI.

Click the installation and activation method you want to use to complete installation:

Install a NetQ On-premises Appliance Cluster

This topic describes how to prepare your cluster of NetQ On-premises Appliances for installation of the NetQ Platform software.

Inside each box that was shipped to you, you’ll find:

For more detail about hardware specifications (including LED layouts and FRUs like the power supply or fans, and accessories like included cables) or safety and environmental information, refer to the user manual and quick reference guide.

Install Each Appliance

After you unbox the appliance:
  1. Mount the appliance in the rack.
  2. Connect it to power following the procedures described in your appliance's user manual.
  3. Connect the Ethernet cable to the 1G management port (eno1).
  4. Power on the appliance.

If your network runs DHCP, you can configure Cumulus NetQ over the network. If DHCP is not enabled, then you configure the appliance using the console cable provided.

Configure the Password, Hostname and IP Address

Change the password and specify the hostname and IP address for each appliance before installing the NetQ software.

  1. Log in to the appliance using the default login credentials:

    • Username: cumulus
    • Password: cumulus
  2. Change the password using the passwd command:

    cumulus@hostname:~$ passwd
    Changing password for cumulus.
    (current) UNIX password: cumulus
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    
  3. The default hostname for the NetQ On-premises Appliance is netq-appliance. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.

    Kubernetes requires that hostnames are composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.

    The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').

    Use the following command:

    cumulus@hostname:~$ sudo hostnamectl set-hostname NEW_HOSTNAME
    
  4. Identify the IP address.

    The appliance contains two Ethernet ports. Port eno1 is dedicated to out-of-band management. This is where NetQ Agents should send the telemetry data collected from your monitored switches and hosts. By default, eno1 uses DHCPv4 to get its IP address. You can view the assigned IP address using the following command:

    cumulus@hostname:~$ ip -4 -brief addr show eno1
    eno1             UP             10.20.16.248/24
    

    Alternatively, you can configure the interface with a static IP address by editing the /etc/netplan/01-ethernet.yaml Ubuntu Netplan configuration file.

    For example, to set your network interface eno1 to a static IP address of 192.168.1.222 with gateway 192.168.1.1 and DNS servers 8.8.8.8 and 8.8.4.4:

    # This file describes the network interfaces available on your system
    # For more information, see netplan(5).
    network:
        version: 2
        renderer: networkd
        ethernets:
            eno1:
                dhcp4: no
                addresses: [192.168.1.222/24]
                gateway4: 192.168.1.1
                nameservers:
                    addresses: [8.8.8.8,8.8.4.4]
    

    Apply the settings.

    cumulus@hostname:~$ sudo netplan apply
    
  5. Repeat these steps for each of the worker node appliances.

Verify NetQ Software and Appliance Readiness

Now that the appliances are up and running, verify that the software is available and the appliance is ready for installation.

  1. On the master node, verify that the needed packages are present and of the correct release, version 3.2.1 and update 31 or later.

    cumulus@hostname:~$ dpkg -l | grep netq
    ii  netq-agent   3.2.1-ub18.04u31~1603789872.6f62fad_amd64   Cumulus NetQ Telemetry Agent for Ubuntu
    ii  netq-apps    3.2.1-ub18.04u31~1603789872.6f62fad_amd64   Cumulus NetQ Fabric Validation Application for Ubuntu
  2. Verify the installation images are present and of the correct release, version 3.2.1.

    cumulus@hostname:~$ cd /mnt/installables/
    cumulus@hostname:/mnt/installables$ ls
    NetQ-3.2.1.tgz  netq-bootstrap-3.2.1.tgz
  3. Verify the master node is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check
  4. Run the Bootstrap CLI. Be sure to replace the eno1 interface used in this example with the interface or IP address on the appliance used to listen for NetQ Agents.

    cumulus@:~$ netq bootstrap master interface eno1 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz

    Allow about five to ten minutes for this to complete, and only then continue to the next step.

    If this step fails for any reason, you can run netq bootstrap reset [purge-db|keep-db] and then try again.

    If you have changed the IP address or hostname of the NetQ On-premises Appliance after this step, you need to re-register this address with NetQ as follows:

    Reset the appliance, indicating whether you want to purge any NetQ DB data or keep it.

    cumulus@hostname:~$ netq bootstrap reset [purge-db|keep-db]

    Re-run the Bootstrap CLI on the appliance. This example uses interface eno1. Replace this with your updated IP address, hostname or interface using the interface or ip-addr option.

    cumulus@:~$ netq bootstrap master interface eno1 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz
  5. On one of your worker nodes, verify that the needed packages are present and of the correct release, version 3.2.1 and update 31 or later.

    cumulus@hostname:~$ dpkg -l | grep netq
    ii  netq-agent   3.2.1-ub18.04u31~1603789872.6f62fad_amd64   Cumulus NetQ Telemetry Agent for Ubuntu
    ii  netq-apps    3.2.1-ub18.04u31~1603789872.6f62fad_amd64   Cumulus NetQ Fabric Validation Application for Ubuntu
  6. Configure the IP address, hostname, and password using the same steps as for the master node. Refer to Configure the Password, Hostname and IP Address.

    Make a note of the private IP addresses you assign to the master and worker nodes. They are needed for the later installation steps.

  7. Verify that the needed packages are present and of the correct release, version 3.2.1 and update 31.

    cumulus@hostname:~$ dpkg -l | grep netq
    ii  netq-agent   3.2.1-ub18.04u31~1603789872.6f62fad_amd64   Cumulus NetQ Telemetry Agent for Ubuntu
    ii  netq-apps    3.2.1-ub18.04u31~1603789872.6f62fad_amd64   Cumulus NetQ Fabric Validation Application for Ubuntu
  8. Verify that the needed files are present and of the correct release.

    cumulus@hostname:~$ cd /mnt/installables/
    cumulus@hostname:/mnt/installables$ ls
    NetQ-3.2.1.tgz  netq-bootstrap-3.2.1.tgz
  9. Verify the appliance is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check
  10. Run the Bootstrap CLI on the worker node.

    cumulus@:~$ netq bootstrap worker tarball /mnt/installables/netq-bootstrap-3.2.1.tgz master-ip <master-ip>

    Provide a password using the password option if required. Allow about five to ten minutes for this to complete, and only then continue to the next step.

  11. Repeat Steps 5-10 for each additional worker node (NetQ On-premises Appliance).

Considerations for Container Environments

Flannel Virtual Networks

If you are using Flannel with a container environment on your network, you might need to change its default IP address range if it conflicts with other addresses on your network. You can only change it once, during the first installation, by running the bootstrap command.

NetQ uses the address range 10.244.0.0/16, overriding the original Flannel default of 10.1.0.0/16.

To change the default address range, use the bootstrap CLI with the pod-ip-range option. For example:

cumulus@hostname:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.0.tgz pod-ip-range 10.255.0.0/16
Docker Default Bridge Interface

The default Docker bridge interface is disabled in NetQ. If you need to re-enable the interface, contact support.

Install and Activate the NetQ Software

The final step is to install and activate the Cumulus NetQ software on each appliance in your cluster. You can do this using the Admin UI or the NetQ CLI.

Click the installation and activation method you want to use to complete installation:

Install a NetQ Cloud Appliance Cluster

This topic describes how to prepare your cluster of NetQ Cloud Appliances for installation of the NetQ Collector software.

Inside each box that was shipped to you, you’ll find:

For more detail about hardware specifications (including LED layouts and FRUs like the power supply or fans and accessories like included cables) or safety and environmental information, refer to the user manual.

Install Each Appliance

After you unbox the appliance:
  1. Mount the appliance in the rack.
  2. Connect it to power following the procedures described in your appliance's user manual.
  3. Connect the Ethernet cable to the 1G management port (eno1).
  4. Power on the appliance.

If your network runs DHCP, you can configure Cumulus NetQ over the network. If DHCP is not enabled, then you configure the appliance using the console cable provided.

Configure the Password, Hostname and IP Address

Change the password and specify the hostname and IP address for each appliance before installing the NetQ software.

  1. Log in to the appliance using the default login credentials:

    • Username: cumulus
    • Password: cumulus
  2. Change the password using the passwd command:

    cumulus@hostname:~$ passwd
    Changing password for cumulus.
    (current) UNIX password: cumulus
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    
  3. The default hostname for the NetQ Cloud Appliance is netq-appliance. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.

    Kubernetes requires that hostnames are composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.

    The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').

    Use the following command:

    cumulus@hostname:~$ sudo hostnamectl set-hostname NEW_HOSTNAME
    
  4. Identify the IP address.

    The appliance contains two Ethernet ports. Port eno1 is dedicated to out-of-band management. This is where NetQ Agents should send the telemetry data collected from your monitored switches and hosts. By default, eno1 uses DHCPv4 to get its IP address. You can view the assigned IP address using the following command:

    cumulus@hostname:~$ ip -4 -brief addr show eno1
    eno1             UP             10.20.16.248/24
    

    Alternatively, you can configure the interface with a static IP address by editing the /etc/netplan/01-ethernet.yaml Ubuntu Netplan configuration file.

    For example, to set your network interface eno1 to a static IP address of 192.168.1.222 with gateway 192.168.1.1 and DNS server as 8.8.8.8 and 8.8.4.4:

    # This file describes the network interfaces available on your system
    # For more information, see netplan(5).
    network:
        version: 2
        renderer: networkd
        ethernets:
            eno1:
                dhcp4: no
                addresses: [192.168.1.222/24]
                gateway4: 192.168.1.1
                nameservers:
                    addresses: [8.8.8.8,8.8.4.4]
    

    Apply the settings.

    cumulus@hostname:~$ sudo netplan apply
    
  5. Repeat these steps for each of the worker node appliances.
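
After completing these steps on an appliance, you can confirm that the new hostname and the eno1 address took effect before moving on. A quick check (a sketch; the hostname netq-ts01 and the address shown are placeholders taken from the examples above and will differ on your appliance):

cumulus@netq-ts01:~$ hostname
netq-ts01
cumulus@netq-ts01:~$ ip -4 -brief addr show eno1
eno1             UP             192.168.1.222/24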

Verify NetQ Software and Appliance Readiness

Now that the appliances are up and running, verify that the software is available and each appliance is ready for installation.

  1. On the master NetQ Cloud Appliance, verify that the needed packages are present and of the correct release, version 3.2.1 and update 31.

    cumulus@hostname:~$ dpkg -l | grep netq
    ii  netq-agent   3.2.1-ub18.04u31~1603789872.6f62fad_amd64   Cumulus NetQ Telemetry Agent for Ubuntu
    ii  netq-apps    3.2.1-ub18.04u31~1603789872.6f62fad_amd64   Cumulus NetQ Fabric Validation Application for Ubuntu
  2. Verify the installation images are present and of the correct release, version 3.2.1.

    cumulus@hostname:~$ cd /mnt/installables/
    cumulus@hostname:/mnt/installables$ ls
    NetQ-3.2.1-opta.tgz  netq-bootstrap-3.2.1.tgz
  3. Verify the master NetQ Cloud Appliance is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check-cloud
  4. Run the Bootstrap CLI. Be sure to replace the eno1 interface used in this example with the interface or IP address on the appliance used to listen for NetQ Agents.

    cumulus@:~$ netq bootstrap master interface eno1 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz

    Allow about five to ten minutes for this to complete, and only then continue to the next step.

    If this step fails for any reason, you can run netq bootstrap reset and then try again.

    If you have changed the IP address or hostname of the NetQ Cloud Appliance after this step, you need to re-register this address with NetQ as follows:

    Reset the appliance.

    cumulus@hostname:~$ netq bootstrap reset

    Re-run the Bootstrap CLI on the appliance. This example uses interface eno1. Replace this with your updated IP address, hostname or interface using the interface or ip-addr option.

    cumulus@:~$ netq bootstrap master interface eno1 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz
  5. On one of your worker NetQ Cloud Appliances, verify that the needed packages are present and of the correct release, version 3.2.1 and update 31.

    cumulus@hostname:~$ dpkg -l | grep netq
    ii  netq-agent   3.2.1-ub18.04u31~1603789872.6f62fad_amd64   Cumulus NetQ Telemetry Agent for Ubuntu
    ii  netq-apps    3.2.1-ub18.04u31~1603789872.6f62fad_amd64   Cumulus NetQ Fabric Validation Application for Ubuntu
  6. Configure the IP address, hostname, and password using the same steps as for the master node. Refer to Configure the Password, Hostname and IP Address.

    Make a note of the private IP addresses you assign to the master and worker nodes. They are needed for later installation steps.

  7. Verify that the needed packages are present and of the correct release, version 3.2.1 and update 31 or later.

    cumulus@hostname:~$ dpkg -l | grep netq
    ii  netq-agent   3.2.1-ub18.04u31~1603789872.6f62fad_amd64   Cumulus NetQ Telemetry Agent for Ubuntu
    ii  netq-apps    3.2.1-ub18.04u31~1603789872.6f62fad_amd64   Cumulus NetQ Fabric Validation Application for Ubuntu
  8. Verify that the needed files are present and of the correct release.

    cumulus@hostname:~$ cd /mnt/installables/
    cumulus@hostname:/mnt/installables$ ls
    NetQ-3.2.1-opta.tgz  netq-bootstrap-3.2.1.tgz
  9. Verify the appliance is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check-cloud
  10. Run the Bootstrap CLI on the worker node.

    cumulus@:~$ netq bootstrap worker tarball /mnt/installables/netq-bootstrap-3.2.1.tgz master-ip <master-ip>

    Provide a password using the password option if required. Allow about five to ten minutes for this to complete, and only then continue to the next step.

    If this step fails for any reason, you can run netq bootstrap reset on the new worker node and then try again.

  11. Repeat Steps 5-10 for each additional worker NetQ Cloud Appliance.

Considerations for Container Environments

Flannel Virtual Networks

If you are using Flannel with a container environment on your network, you might need to change its default IP address range if it conflicts with other addresses on your network. You can only change it once, during the first installation, by running the bootstrap command.

NetQ uses the address range 10.244.0.0/16, overriding the original Flannel default of 10.1.0.0/16.

To change the default address range, use the bootstrap CLI with the pod-ip-range option. For example:

cumulus@hostname:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.0.tgz pod-ip-range 10.255.0.0/16
Docker Default Bridge Interface

The default Docker bridge interface is disabled in NetQ. If you need to re-enable the interface, contact support.

Install and Activate the NetQ Software

The final step is to install and activate the Cumulus NetQ software on each appliance in your cluster. You can do this using the Admin UI or the CLI.

Click the installation and activation method you want to use to complete installation:

Prepare Your Existing NetQ Appliances for a NetQ 3.2 Deployment

This topic describes how to prepare a NetQ 2.4.x or earlier NetQ Appliance before installing NetQ 3.x. The steps are the same for both the on-premises and cloud appliances. The only difference is the software you download for each platform. On completion of the steps included here, you will be ready to perform a fresh installation of NetQ 3.x.

The preparation workflow is summarized in this figure:

To prepare your appliance:

  1. Verify that your appliance is a supported hardware model.

  2. For on-premises solutions using the NetQ On-premises Appliance, optionally back up your NetQ data.

    1. Run the backup script to create a backup file in /opt/<backup-directory>.

      Be sure to replace backup-directory with the name of the directory you want to use for the backup file. This location must be somewhere off of the appliance so that the backup is not overwritten during these preparation steps.

      cumulus@<hostname>:~$ ./backuprestore.sh --backup --localdir /opt/<backup-directory>
      
    2. Verify the backup file has been created.

      cumulus@<hostname>:~$ cd /opt/<backup-directory>
      cumulus@<hostname>:~/opt/<backup-directory># ls
      netq_master_snapshot_2020-01-09_07_24_50_UTC.tar.gz
      
  3. Install Ubuntu 18.04 LTS

    Follow the instructions here to install Ubuntu.

    Note these tips when installing:

    • Ignore the instructions for MAAS.

    • Ubuntu OS should be installed on the SSD disk. Select the Micron SSD with ~900 GB at step 9 of the aforementioned instructions.

    • Set the default username to cumulus and password to CumulusLinux!.

    • When prompted, select Install SSH server.

  4. Configure networking.

    Ubuntu uses Netplan for network configuration. You can give your appliance an IP address using DHCP or a static address.

    • To use DHCP, create and/or edit the /etc/netplan/01-ethernet.yaml Netplan configuration file:

      # This file describes the network interfaces available on your system
      # For more information, see netplan(5).
      network:
          version: 2
          renderer: networkd
          ethernets:
              eno1:
                  dhcp4: yes
      
    • Apply the settings.

      $ sudo netplan apply
      
    • To use a static IP address, create and/or edit the /etc/netplan/01-ethernet.yaml Netplan configuration file:

      In this example the interface, eno1, is given a static IP address of 192.168.1.222 with a gateway at 192.168.1.1 and DNS server at 8.8.8.8 and 8.8.4.4.

      # This file describes the network interfaces available on your system
      # For more information, see netplan(5).
      network:
          version: 2
          renderer: networkd
          ethernets:
              eno1:
                  dhcp4: no
                  addresses: [192.168.1.222/24]
                  gateway4: 192.168.1.1
                  nameservers:
                      addresses: [8.8.8.8,8.8.4.4]
      
    • Apply the settings.

      $ sudo netplan apply
      
  5. Update the Ubuntu repository.

    1. Reference and update the local apt repository.

      root@ubuntu:~# wget -O- https://apps3.cumulusnetworks.com/setup/cumulus-apps-deb.pubkey | apt-key add -
      
    2. Add the Ubuntu 18.04 repository.

      Create the file /etc/apt/sources.list.d/cumulus-apps-deb-bionic.list and add the following line:

      root@ubuntu:~# vi /etc/apt/sources.list.d/cumulus-apps-deb-bionic.list
      ...
      deb [arch=amd64] https://apps3.cumulusnetworks.com/repos/deb bionic netq-latest
      ...
      

      Using netq-latest in this example means that a pull from the repository always retrieves the latest version of NetQ, even after a major version update. If you want to pin the repository to a specific version, such as netq-3.1, use that component name instead.

  6. Install Python.

    Run the following commands:

    root@ubuntu:~# apt-get update
    root@ubuntu:~# apt-get install python python2.7 python-apt python3-lib2to3 python3-distutils
    
  7. Obtain the latest NetQ Agent and CLI package.

    Run the following commands:

    root@ubuntu:~# apt-get update
    root@ubuntu:~# apt-get install netq-agent netq-apps
    
  8. Download the bootstrap and NetQ installation tarballs.

    Download the software from the MyMellanox downloads page.

    1. Select NetQ from the Product list.

    2. Select 3.2 from the Version list, and then select 3.2.1 from the submenu.

    3. Select Bootstrap from the Hypervisor/Platform list. Note that the bootstrap file is the same for both appliances.

    4. Scroll down and click Download.

    5. Select Appliance for the NetQ On-premises Appliance or Appliance (Cloud) for the NetQ Cloud Appliance from the Hypervisor/Platform list.

      Make sure you select the right install choice based on whether you are preparing the on-premises or cloud version of the appliance.

    6. Scroll down and click Download.

    7. Copy these two files, netq-bootstrap-3.2.1.tgz and either NetQ-3.2.1.tgz (on-premises) or NetQ-3.2.1-opta.tgz (cloud), to the /mnt/installables/ directory on the appliance.

    8. Verify that the needed files are present and of the correct release. This example shows on-premises files. The only difference for cloud files is that it should list NetQ-3.2.1-opta.tgz instead of NetQ-3.2.1.tgz.

      cumulus@<hostname>:~$ dpkg -l | grep netq
      ii  netq-agent   3.2.1-ub18.04u31~1603789872.6f62fad_amd64   Cumulus NetQ Telemetry Agent for Ubuntu
      ii  netq-apps    3.2.1-ub18.04u31~1603789872.6f62fad_amd64   Cumulus NetQ Fabric Validation Application for Ubuntu
      
      cumulus@<hostname>:~$ cd /mnt/installables/
      cumulus@<hostname>:/mnt/installables$ ls
      NetQ-3.2.1.tgz  netq-bootstrap-3.2.1.tgz
      
    9. Run the following commands to disable and stop the apt daily update and motd-news timers:

      sudo systemctl disable apt-{daily,daily-upgrade}.{service,timer}
      sudo systemctl stop apt-{daily,daily-upgrade}.{service,timer}
      sudo systemctl disable motd-news.{service,timer}
      sudo systemctl stop motd-news.{service,timer}
      
  9. Run the Bootstrap CLI.

    Run the Bootstrap CLI on your appliance, as shown in the sketch below. Be sure to replace the eth0 interface used in the example with the interface or IP address on the appliance used to listen for NetQ Agents.
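
    The command matches the bootstrap invocations shown earlier in this guide (a sketch; the bootstrap tarball is the same for the on-premises and cloud appliances):

    cumulus@<hostname>:~$ netq bootstrap master interface eth0 tarball /mnt/installables/netq-bootstrap-3.2.1.tgz

    Allow about five to ten minutes for this to complete before continuing.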

If you are creating a server cluster, you need to prepare each of those appliances as well. Repeat these steps if you are using a previously deployed appliance or refer to Install the NetQ System for a new appliance.

You are now ready to install the NetQ Software. Refer to Install NetQ Using the Admin UI (recommended) or Install NetQ Using the CLI.

Install NetQ Using the Admin UI

You can now install the NetQ software through the Admin UI, using either the default basic installation or an advanced installation.

This is the final set of steps for installing NetQ. If you have not already performed the installation preparation steps, go to Install the NetQ System before continuing here.

Install NetQ

To install NetQ:

  1. Log in to your NetQ On-premises Appliance, NetQ Cloud Appliance, the master node of your cluster, or VM.

    In your browser address field, enter https://<hostname-or-ipaddr>:8443.

  2. Enter your NetQ credentials to enter the application.

    The default username is admin and the default password is admin.

  3. Click Begin Installation.

  4. Choose an installation type: basic or advanced.

    Read the descriptions carefully to be sure to select the correct type. Then follow these instructions based on your selection.

  1. Select Basic Install, then click to continue.
  2. Select a deployment type.

    Choose which type of deployment model you want to use. Both options provide secure access to data and features useful for monitoring and troubleshooting your network.

  3. Install the NetQ software according to your deployment type.

Installation Results

If the installation succeeds, you are directed to the Health page of the Admin UI. Refer to View NetQ System Health.

If the installation fails, a failure indication is given.

  1. Click to view the reason.
  2. Determine whether the error can be resolved by moving to the advanced configuration flow:

    • No: Close the Admin UI, resolve the error, then reopen the Admin UI to start the installation again.
    • Yes: Click to be taken to the advanced installation flow and retry the failed task. Refer to the Advanced tab for instructions.
  1. Select Advanced Install, then click to continue.
  2. Select your deployment type.

    Choose the deployment model you want to use. Both options provide secure access to data and features useful for monitoring and troubleshooting your network.

  3. Monitor the initialization of the master node. When complete, click to continue.
  4. For on-premises deployments only, select your install method. For cloud deployments, skip to Step 5.

    Choose between restoring data from a previous version of NetQ or performing a fresh installation.

  5. Select your server arrangement.

    Select whether you want to deploy your infrastructure as a single stand-alone server or as a cluster of servers.

  6. Install the NetQ software.

    You install the NetQ software using the installation files (NetQ-3.2.1.tgz for on-premises deployments or NetQ-3.2.1-opta.tgz for cloud deployments) that you downloaded and stored previously.

    For on-premises: Accept the suggested path and filename, or modify these to reflect where you stored your installation file, then click to continue. Alternatively, upload the file.

    For cloud: Accept the suggested path and filename, or modify these to reflect where you stored your installation file. Enter your configuration key, then click to continue.

If the installation fails, a failure indication is given.

Click to download an error file in JSON format, or click to return to the previous step.

  7. Activate NetQ.

    This final step activates the software and enables you to view the health of your NetQ system. For cloud deployments, you must enter your configuration key.

View NetQ System Health

When the installation and activation are complete, the NetQ System Health dashboard is visible for tracking the status of key components in the system. The cards displayed represent the deployment chosen:

Server Arrangement | Deployment Type | Node Card/s | Pod Card | Kafka Card | Zookeeper Card | Cassandra Card
Standalone server | On-premises | Master | Yes | Yes | Yes | Yes
Standalone server | Cloud | Master | Yes | No | No | No
Server cluster | On-premises | Master, 2+ Workers | Yes | Yes | Yes | Yes
Server cluster | Cloud | Master, 2+ Workers | Yes | No | No | No

This example shows a standalone server in an on-premises deployment.

If you have deployed an on-premises solution, you can add a custom signed certificate. Refer to Install a Certificate for instructions.

Click Open NetQ to enter the NetQ application.

Install NetQ Using the CLI

You can now install the NetQ software using the NetQ CLI.

This is the final set of steps for installing NetQ. If you have not already performed the installation preparation steps, go to Install the NetQ System before continuing here.

To install NetQ:

  1. Log in to your NetQ platform server, NetQ Appliance, NetQ Cloud Appliance or the master node of your cluster.

  2. Install the software.

    For a standalone on-premises deployment, run the following command on your NetQ platform server or NetQ Appliance:

    cumulus@hostname:~$ netq install standalone full interface eth0 bundle /mnt/installables/NetQ-3.2.1.tgz
    

    Run the netq show opta-health command to verify all applications are operating properly. Please allow 10-15 minutes for all applications to come up and report their status.

    cumulus@hostname:~$ netq show opta-health
    Application                                            Status    Namespace      Restarts    Timestamp
    -----------------------------------------------------  --------  -------------  ----------  ------------------------
    cassandra-rc-0-w7h4z                                   READY     default        0           Fri Apr 10 16:08:38 2020
    cp-schema-registry-deploy-6bf5cbc8cc-vwcsx             READY     default        0           Fri Apr 10 16:08:38 2020
    kafka-broker-rc-0-p9r2l                                READY     default        0           Fri Apr 10 16:08:38 2020
    kafka-connect-deploy-7799bcb7b4-xdm5l                  READY     default        0           Fri Apr 10 16:08:38 2020
    netq-api-gateway-deploy-55996ff7c8-w4hrs               READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-address-deploy-66776ccc67-phpqk               READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-admin-oob-mgmt-server                         READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-bgp-deploy-7dd4c9d45b-j9bfr                   READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-clagsession-deploy-69564895b4-qhcpr           READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-configdiff-deploy-ff54c4cc4-7rz66             READY     default        0           Fri Apr 10 16:08:38 2020
    ...
    

    For an on-premises server cluster, run the following command on your master node, using the IP addresses of your worker nodes:

    cumulus@<hostname>:~$ netq install cluster full interface eth0 bundle /mnt/installables/NetQ-3.2.1.tgz workers <worker-1-ip> <worker-2-ip>
    

    Run the netq show opta-health command to verify all applications are operating properly. Please allow 10-15 minutes for all applications to come up and report their status.

    cumulus@hostname:~$ netq show opta-health
    Application                                            Status    Namespace      Restarts    Timestamp
    -----------------------------------------------------  --------  -------------  ----------  ------------------------
    cassandra-rc-0-w7h4z                                   READY     default        0           Fri Apr 10 16:08:38 2020
    cp-schema-registry-deploy-6bf5cbc8cc-vwcsx             READY     default        0           Fri Apr 10 16:08:38 2020
    kafka-broker-rc-0-p9r2l                                READY     default        0           Fri Apr 10 16:08:38 2020
    kafka-connect-deploy-7799bcb7b4-xdm5l                  READY     default        0           Fri Apr 10 16:08:38 2020
    netq-api-gateway-deploy-55996ff7c8-w4hrs               READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-address-deploy-66776ccc67-phpqk               READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-admin-oob-mgmt-server                         READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-bgp-deploy-7dd4c9d45b-j9bfr                   READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-clagsession-deploy-69564895b4-qhcpr           READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-configdiff-deploy-ff54c4cc4-7rz66             READY     default        0           Fri Apr 10 16:08:38 2020
    ...
    

    For a standalone cloud deployment, run the following command on your NetQ Cloud Appliance with the config-key sent by Cumulus Networks in an email titled “A new site has been added to your Cumulus NetQ account.”

    cumulus@<hostname>:~$ netq install opta standalone full interface eth0 bundle /mnt/installables/NetQ-3.2.1-opta.tgz config-key <your-config-key-from-email> proxy-host <proxy-hostname> proxy-port <proxy-port>
    

    Run the netq show opta-health command to verify all applications are operating properly.

    cumulus@hostname:~$ netq show opta-health
    OPTA is healthy
    

    For a cloud server cluster, run the following command on your master NetQ Cloud Appliance with the config-key sent by Cumulus Networks in an email titled “A new site has been added to your Cumulus NetQ account.”

    cumulus@<hostname>:~$ netq install opta cluster full interface eth0 bundle /mnt/installables/NetQ-3.2.1-opta.tgz config-key <your-config-key-from-email> workers <worker-1-ip> <worker-2-ip> proxy-host <proxy-hostname> proxy-port <proxy-port>
    

    Run the netq show opta-health command to verify all applications are operating properly.

    cumulus@hostname:~$ netq show opta-health
    OPTA is healthy
    

Install NetQ Quick Start

If you know how you would answer the key installation questions, you can go directly to the instructions for those choices using the table here.

Do not skip the normal installation flow until you have performed this process multiple times and are fully familiar with it.

Deployment Type | Server Arrangement | System | Hypervisor | Installation Instructions
On premises | Single server | Cumulus NetQ Appliance | NA | Start Install
On premises | Single server | Own Hardware plus VM | KVM | Start Install
On premises | Single server | Own Hardware plus VM | VMware | Start Install
On premises | Server cluster | Cumulus NetQ Appliance | NA | Start Install
On premises | Server cluster | Own Hardware plus VM | KVM | Start Install
On premises | Server cluster | Own Hardware plus VM | VMware | Start Install
Cloud | Single server | Cumulus NetQ Cloud Appliance | NA | Start Install
Cloud | Single server | Own Hardware plus VM | KVM | Start Install
Cloud | Single server | Own Hardware plus VM | VMware | Start Install
Cloud | Server cluster | Cumulus NetQ Cloud Appliance | NA | Start Install
Cloud | Server cluster | Own Hardware plus VM | KVM | Start Install
Cloud | Server cluster | Own Hardware plus VM | VMware | Start Install

Install NetQ Switch and Host Software

After installing your Cumulus NetQ Platform or Collector software, the next step is to install the NetQ switch and host software on all switches and host servers that you want to monitor in your network. This includes the NetQ Agent and, optionally, the NetQ CLI. While the CLI is optional, being able to access a switch or host through the command line can be very useful for troubleshooting or device management. The NetQ Agent on each switch or host sends its telemetry data to your NetQ Platform or Collector on your NetQ On-premises or Cloud Appliance or VM.

Install NetQ Agents

Cumulus NetQ Agents can be installed on switches or hosts running Cumulus Linux, Ubuntu, Red Hat Enterprise Linux (RHEL), or CentOS operating systems (OSs). Install the NetQ Agent based on the OS:

Install and Configure the NetQ Agent on Cumulus Linux Switches

After installing your Cumulus NetQ software, you should install the NetQ 3.2.1 Agents on each switch you want to monitor. NetQ Agents can be installed on switches running:

Prepare for NetQ Agent Installation on a Cumulus Linux Switch

For switches running Cumulus Linux, you need to:

If your network uses a proxy server for external connections, you should first configure a global proxy so apt-get can access the software package in the Cumulus Networks repository.
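
One common way to do this is to set the proxy in an apt configuration snippet. A minimal sketch, assuming a hypothetical proxy at http://proxy.example.com:3128 (the snippet filename 00proxy is arbitrary; refer to the Cumulus Linux documentation on configuring a global proxy for the full procedure):

cumulus@switch:~$ echo 'Acquire::http::Proxy "http://proxy.example.com:3128/";' | sudo tee -a /etc/apt/apt.conf.d/00proxy
cumulus@switch:~$ echo 'Acquire::https::Proxy "http://proxy.example.com:3128/";' | sudo tee -a /etc/apt/apt.conf.d/00proxy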

Verify NTP is Installed and Configured

Verify that NTP is running on the switch. The switch must be in time synchronization with the NetQ Platform or NetQ Appliance to enable useful statistical analysis.

cumulus@switch:~$ sudo systemctl status ntp
[sudo] password for cumulus:
● ntp.service - LSB: Start NTP daemon
        Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
        Active: active (running) since Fri 2018-06-01 13:49:11 EDT; 2 weeks 6 days ago
          Docs: man:systemd-sysv-generator(8)
        CGroup: /system.slice/ntp.service
                └─2873 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -c /var/lib/ntp/ntp.conf.dhcp -u 109:114

If NTP is not installed, install and configure it before continuing.

If NTP is not running:

If you are running NTP in your out-of-band management network with VRF, specify the VRF (ntp@<vrf-name> versus just ntp) in the above commands.
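
For example, if the management VRF is named mgmt, the NTP service instance is addressed as ntp@mgmt (a sketch, assuming the standard service-per-VRF naming used on Cumulus Linux):

cumulus@switch:~$ sudo systemctl enable ntp@mgmt
cumulus@switch:~$ sudo systemctl start ntp@mgmt
cumulus@switch:~$ sudo systemctl status ntp@mgmt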

Obtain NetQ Agent Software Package

To install the NetQ Agent you need to install netq-agent on each switch or host. This is available from the Cumulus Networks repository.

To obtain the NetQ Agent package:

Edit the /etc/apt/sources.list file to add the appropriate repository for Cumulus NetQ. Note that NetQ has a separate repository from Cumulus Linux.

For switches running Cumulus Linux 3.x, add:

cumulus@switch:~$ sudo nano /etc/apt/sources.list
...
deb http://apps3.cumulusnetworks.com/repos/deb CumulusLinux-3 netq-3.2
...

For switches running Cumulus Linux 4.x, add this repository instead:

cumulus@switch:~$ sudo nano /etc/apt/sources.list
...
deb http://apps3.cumulusnetworks.com/repos/deb CumulusLinux-4 netq-3.2
...

Add the apps3.cumulusnetworks.com authentication key to Cumulus Linux:

cumulus@switch:~$ wget -qO - https://apps3.cumulusnetworks.com/setup/cumulus-apps-deb.pubkey | sudo apt-key add -

Install the NetQ Agent on Cumulus Linux Switch

After completing the preparation steps, you can successfully install the agent onto your switch.

To install the NetQ Agent:

  1. Update the local apt repository, then install the NetQ software on the switch.

    cumulus@switch:~$ sudo apt-get update
    cumulus@switch:~$ sudo apt-get install netq-agent
    
  2. Verify you have the correct version of the Agent.

    cumulus@switch:~$ dpkg-query -W -f '${Package}\t${Version}\n' netq-agent
    
    You should see version 3.2.1 and update 30 or 31 in the results. For example:
    • Cumulus Linux 3.3.2-3.7.x
      • netq-agent_3.2.1-cl3u30~1603788322.6f62fadf_armel.deb
      • netq-agent_3.2.1-cl3u30~1603788322.6f62fadf_amd64.deb
    • Cumulus Linux 4.0.0-4.1.x
      • netq-agent_3.2.1-cl4u31~1603788322.6f62fadf_armel.deb
      • netq-agent_3.2.1-cl4u31~1603788322.6f62fadf_amd64.deb
  3. Restart rsyslog so log files are sent to the correct destination.

    cumulus@switch:~$ sudo systemctl restart rsyslog.service
    
  4. Continue with NetQ Agent configuration in the next section.

Configure the NetQ Agent on a Cumulus Linux Switch

After the NetQ Agent and CLI have been installed on the switches you want to monitor, the NetQ Agents must be configured to obtain useful and relevant data.

The NetQ Agent is aware of and communicates through the designated VRF. If you do not specify one, the default VRF (named default) is used. If you later change the VRF configured for the NetQ Agent (using a lifecycle management configuration profile, for example), you might cause the NetQ Agent to lose communication.
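
If you are not sure which VRF the agent is currently using, one quick check is to look at the agent section of the /etc/netq/netq.yml configuration file (a sketch; the values shown are illustrative and will reflect whatever is configured on your switch):

cumulus@switch:~$ sudo grep -A 3 'netq-agent' /etc/netq/netq.yml
netq-agent:
  port: 31980
  server: 192.168.1.254
  vrf: mgmt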

Two methods are available for configuring a NetQ Agent:

Configure NetQ Agents Using a Configuration File

You can configure the NetQ Agent in the netq.yml configuration file contained in the /etc/netq/ directory.

  1. Open the netq.yml file using your text editor of choice. For example:

    cumulus@switch:~$ sudo nano /etc/netq/netq.yml
    
  2. Locate the netq-agent section, or add it.

  3. Set the parameters for the agent as follows:

    • port: 31980 (default configuration)
    • server: IP address of the NetQ Appliance or VM where the agent should send its collected data
    • vrf: default (default) or one that you specify

    Your configuration should be similar to this:

    netq-agent:
      port: 31980
      server: 127.0.0.1
      vrf: default
    

Configure NetQ Agents Using the NetQ CLI

If the CLI is configured, you can use it to configure the NetQ Agent to send telemetry data to the NetQ Appliance or VM. To configure the NetQ CLI, refer to Install and Configure the NetQ CLI on Cumulus Linux Switches.

If you intend to use VRF, refer to Configure the Agent to Use VRF. If you intend to specify a port for communication, refer to Configure the Agent to Communicate over a Specific Port.

Use the following command to configure the NetQ Agent:

netq config add agent server <text-opta-ip> [port <text-opta-port>] [vrf <text-vrf-name>]

This example uses an IP address of 192.168.1.254 and the default port and VRF for the NetQ Appliance or VM.

cumulus@switch:~$ sudo netq config add agent server 192.168.1.254
Updated agent server 192.168.1.254 vrf default. Please restart netq-agent (netq config restart agent).
cumulus@switch:~$ sudo netq config restart agent

Configure Advanced NetQ Agent Settings on a Cumulus Linux Switch

A couple of additional options are available for configuring the NetQ Agent. If you are using VRF, you can configure the agent to communicate over a specific VRF. You can also configure the agent to use a particular port.

Configure the Agent to Use a VRF

While optional, Cumulus strongly recommends that you configure NetQ Agents to communicate with the NetQ Appliance or VM only via a VRF, including a management VRF. To do so, you need to specify the VRF name when configuring the NetQ Agent. For example, if the management VRF is configured and you want the agent to communicate with the NetQ Appliance or VM over it, configure the agent like this:

cumulus@leaf01:~$ sudo netq config add agent server 192.168.1.254 vrf mgmt
cumulus@leaf01:~$ sudo netq config restart agent

Configure the Agent to Communicate over a Specific Port

By default, NetQ uses port 31980 for communication between the NetQ Appliance or VM and NetQ Agents. If you want the NetQ Agent to communicate with the NetQ Appliance or VM via a different port, you need to specify the port number when configuring the NetQ Agent, like this:

cumulus@leaf01:~$ sudo netq config add agent server 192.168.1.254 port 7379
cumulus@leaf01:~$ sudo netq config restart agent

Install and Configure the NetQ Agent on Ubuntu Servers

After installing your Cumulus NetQ software, you should install the NetQ 3.2.1 Agent on each server you want to monitor. NetQ Agents can be installed on servers running:

Prepare for NetQ Agent Installation on an Ubuntu Server

For servers running Ubuntu OS, you need to:

If your network uses a proxy server for external connections, you should first configure a global proxy so apt-get can access the agent package on the Cumulus Networks repository.

Verify Service Package Versions

Before you install the NetQ Agent on an Ubuntu server, make sure the following packages are installed and running these minimum versions:

Verify the Server is Running lldpd

Make sure you are running lldpd, not lldpad. Ubuntu does not include lldpd by default, but it is required for the installation.

To install this package, run the following commands:

root@ubuntu:~# sudo apt-get update
root@ubuntu:~# sudo apt-get install lldpd
root@ubuntu:~# sudo systemctl enable lldpd.service
root@ubuntu:~# sudo systemctl start lldpd.service
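
To confirm that lldpd is the daemon that is actually running and enabled, you can check the service state (a sketch using standard systemd commands):

root@ubuntu:~# systemctl is-active lldpd.service
active
root@ubuntu:~# systemctl is-enabled lldpd.service
enabled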

Install and Configure Network Time Server

If NTP is not already installed and configured, follow these steps:

  1. Install NTP on the server, if not already installed. Servers must be in time synchronization with the NetQ Platform or NetQ Appliance to enable useful statistical analysis.

    root@ubuntu:~# sudo apt-get install ntp
    
  2. Configure the network time server.

    1. Open the /etc/ntp.conf file in your text editor of choice.

    2. Under the Server section, specify the NTP server IP address or hostname.

    3. Enable and start the NTP service.

      root@ubuntu:~# sudo systemctl enable ntp
      root@ubuntu:~# sudo systemctl start ntp
      
    4. Verify NTP is operating correctly. Look for an asterisk (*) or a plus sign (+) that indicates the clock is synchronized.

      root@ubuntu:~# ntpq -pn
      remote           refid            st t when poll reach   delay   offset  jitter
      ==============================================================================
      +173.255.206.154 132.163.96.3     2 u   86  128  377   41.354    2.834   0.602
      +12.167.151.2    198.148.79.209   3 u  103  128  377   13.395   -4.025   0.198
      2a00:7600::41    .STEP.          16 u    - 1024    0    0.000    0.000   0.000
      *129.250.35.250 249.224.99.213   2 u  101  128  377   14.588   -0.299   0.243
      
    If you prefer to use chrony instead of ntpd, follow these steps:

    1. Install chrony if needed.

      root@ubuntu:~# sudo apt install chrony
      
    2. Start the chrony service.

      root@ubuntu:~# sudo /usr/local/sbin/chronyd
      
    3. Verify it installed successfully.

      root@ubuntu:~# chronyc activity
      200 OK
      8 sources online
      0 sources offline
      0 sources doing burst (return to online)
      0 sources doing burst (return to offline)
      0 sources with unknown address
      
    4. View the time servers chrony is using.

      root@ubuntu:~# chronyc sources
      210 Number of sources = 8
      
      MS Name/IP address         Stratum Poll Reach LastRx Last sample
      ===============================================================================
      ^+ golem.canonical.com           2   6   377    39  -1135us[-1135us] +/-   98ms
      ^* clock.xmission.com            2   6   377    41  -4641ns[ +144us] +/-   41ms
      ^+ ntp.ubuntu.net              2   7   377   106   -746us[ -573us] +/-   41ms
      ...
      

      Open the chrony.conf configuration file (by default at /etc/chrony/) and edit if needed.

      Example with individual servers specified:

      server golem.canonical.com iburst
      server clock.xmission.com iburst
      server ntp.ubuntu.com iburst
      driftfile /var/lib/chrony/drift
      makestep 1.0 3
      rtcsync
      

      Example when using a pool of servers:

      pool pool.ntp.org iburst
      driftfile /var/lib/chrony/drift
      makestep 1.0 3
      rtcsync
      
    5. View the server chrony is currently tracking.

      root@ubuntu:~# chronyc tracking
      Reference ID    : 5BBD59C7 (golem.canonical.com)
      Stratum         : 3
      Ref time (UTC)  : Mon Feb 10 14:35:18 2020
      System time     : 0.0000046340 seconds slow of NTP time
      Last offset     : -0.000123459 seconds
      RMS offset      : 0.007654410 seconds
      Frequency       : 8.342 ppm slow
      Residual freq   : -0.000 ppm
      Skew            : 26.846 ppm
      Root delay      : 0.031207654 seconds
      Root dispersion : 0.001234590 seconds
      Update interval : 115.2 seconds
      Leap status     : Normal
      

Obtain NetQ Agent Software Package

To install the NetQ Agent you need to install netq-agent on each server. This is available from the Cumulus Networks repository.

To obtain the NetQ Agent package:

  1. Reference and update the local apt repository.

     root@ubuntu:~# sudo wget -O- https://apps3.cumulusnetworks.com/setup/cumulus-apps-deb.pubkey | apt-key add -

  2. Add the Ubuntu repository:

    For Ubuntu 16.04, create the file /etc/apt/sources.list.d/cumulus-apps-deb-xenial.list and add the following line:

    root@ubuntu:~# vi /etc/apt/sources.list.d/cumulus-apps-deb-xenial.list
    ...
    deb [arch=amd64] https://apps3.cumulusnetworks.com/repos/deb xenial netq-latest
    ...
    

    For Ubuntu 18.04, create the file /etc/apt/sources.list.d/cumulus-apps-deb-bionic.list and add the following line:

    root@ubuntu:~# vi /etc/apt/sources.list.d/cumulus-apps-deb-bionic.list
    ...
    deb [arch=amd64] https://apps3.cumulusnetworks.com/repos/deb bionic netq-latest
    ...
    

    Using netq-latest in these examples means that a pull from the repository always retrieves the latest version of NetQ, even after a major version update. If you want to pin the repository to a specific version, such as netq-3.1, use that component name instead, as in the sketch below.
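
    For example, to pin an Ubuntu 18.04 host to the 3.2 release train rather than tracking netq-latest, the repository line would look like this (a sketch, assuming the netq-3.2 component is published for bionic, consistent with the netq-3.2 components used for Cumulus Linux elsewhere in this guide):

    deb [arch=amd64] https://apps3.cumulusnetworks.com/repos/deb bionic netq-3.2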

Install NetQ Agent on an Ubuntu Server

After completing the preparation steps, you can successfully install the agent software onto your server.

To install the NetQ Agent:

  1. Install the software packages on the server.

    root@ubuntu:~# sudo apt-get update
    root@ubuntu:~# sudo apt-get install netq-agent
    
  2. Verify you have the correct version of the Agent.

    root@ubuntu:~# dpkg-query -W -f '${Package}\t${Version}\n' netq-agent
    
    You should see version 3.2.1 and update 30 or 31 in the results. For example:
    • netq-agent_3.2.1-ub18.04u31~1603789872.6f62fad_amd64.deb
    • netq-agent_3.2.1-ub16.04u31~1603788317.6f62fad_amd64.deb
  3. Restart rsyslog so log files are sent to the correct destination.

     root@ubuntu:~# sudo systemctl restart rsyslog.service

  4. Continue with NetQ Agent Configuration in the next section.

Configure the NetQ Agent on an Ubuntu Server

After the NetQ Agent and CLI have been installed on the servers you want to monitor, the NetQ Agents must be configured to obtain useful and relevant data.

The NetQ Agent is aware of and communicates through the designated VRF. If you do not specify one, the default VRF (named default) is used. If you later change the VRF configured for the NetQ Agent (using a lifecycle management configuration profile, for example), you might cause the NetQ Agent to lose communication.

Two methods are available for configuring a NetQ Agent:

Configure the NetQ Agents Using a Configuration File

You can configure the NetQ Agent in the netq.yml configuration file contained in the /etc/netq/ directory.

  1. Open the netq.yml file using your text editor of choice. For example:

     root@ubuntu:~# sudo nano /etc/netq/netq.yml

  2. Locate the netq-agent section, or add it.

  3. Set the parameters for the agent as follows:

     • port: 31980 (default configuration)
     • server: IP address of the NetQ Appliance or VM where the agent should send its collected data
     • vrf: default (default) or one that you specify

     Your configuration should be similar to this:

     netq-agent:
         port: 31980
         server: 127.0.0.1
         vrf: default

Configure NetQ Agents Using the NetQ CLI

If the CLI is configured, you can use it to configure the NetQ Agent to send telemetry data to the NetQ Server or Appliance. If it is not configured, refer to Configure the NetQ CLI on an Ubuntu Server and then return here.

If you intend to use VRF, skip to Configure the Agent to Use VRF. If you intend to specify a port for communication, skip to Configure the Agent to Communicate over a Specific Port.

Use the following command to configure the NetQ Agent:

netq config add agent server <text-opta-ip> [port <text-opta-port>] [vrf <text-vrf-name>]

This example uses an IP address of 192.168.1.254 and the default port and VRF for the NetQ hardware.

root@ubuntu:~# sudo netq config add agent server 192.168.1.254
Updated agent server 192.168.1.254 vrf default. Please restart netq-agent (netq config restart agent).
root@ubuntu:~# sudo netq config restart agent

Configure Advanced NetQ Agent Settings

A couple of additional options are available for configuring the NetQ Agent. If you are using VRF, you can configure the agent to communicate over a specific VRF. You can also configure the agent to use a particular port.

Configure the NetQ Agent to Use a VRF

While optional, Cumulus strongly recommends that you configure NetQ Agents to communicate with the NetQ Platform only via a VRF, including a management VRF. To do so, you need to specify the VRF name when configuring the NetQ Agent. For example, if the management VRF is configured and you want the agent to communicate with the NetQ Platform over it, configure the agent like this:

root@ubuntu:~# sudo netq config add agent server 192.168.1.254 vrf mgmt
root@ubuntu:~# sudo netq config restart agent

Configure the NetQ Agent to Communicate over a Specific Port

By default, NetQ uses port 31980 for communication between the NetQ Platform and NetQ Agents. If you want the NetQ Agent to communicate with the NetQ Platform via a different port, you need to specify the port number when configuring the NetQ Agent like this:

root@ubuntu:~# sudo netq config add agent server 192.168.1.254 port 7379
root@ubuntu:~# sudo netq config restart agent

Install and Configure the NetQ Agent on RHEL and CentOS Servers

After installing your Cumulus NetQ software, you should install the NetQ 3.2.1 Agents on each server you want to monitor. NetQ Agents can be installed on servers running:

Prepare for NetQ Agent Installation on a RHEL or CentOS Server

For servers running RHEL or CentOS, you need to:

If your network uses a proxy server for external connections, you should first configure a global proxy so yum can access the software package in the Cumulus Networks repository.

Verify Service Package Versions

Before you install the NetQ Agent on a Red Hat or CentOS server, make sure the following packages are installed and running these minimum versions:

Verify the Server is Running lldpd and wget

Make sure you are running lldpd, not lldpad. CentOS does not include lldpd by default, nor does it include wget, which is required for the installation.

To install these packages, run the following commands:

root@rhel7:~# sudo yum -y install epel-release
root@rhel7:~# sudo yum -y install lldpd
root@rhel7:~# sudo systemctl enable lldpd.service
root@rhel7:~# sudo systemctl start lldpd.service
root@rhel7:~# sudo yum install wget
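
As with the Ubuntu procedure, you can confirm that lldpd and wget are installed and that lldpd is running before moving on (a sketch using standard rpm and systemd commands; rpm -q prints the installed package versions, which will vary):

root@rhel7:~# rpm -q lldpd wget
root@rhel7:~# systemctl is-active lldpd.service
active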

Install and Configure NTP

If NTP is not already installed and configured, follow these steps:

  1. Install NTP on the server. Servers must be in time synchronization with the NetQ Platform or NetQ Appliance to enable useful statistical analysis.

    root@rhel7:~# sudo yum install ntp
    
  2. Configure the NTP server.

    1. Open the /etc/ntp.conf file in your text editor of choice.

    2. Under the Server section, specify the NTP server IP address or hostname.

  3. Enable and start the NTP service.

    root@rhel7:~# sudo systemctl enable ntp
    root@rhel7:~# sudo systemctl start ntp
    

    If you are running NTP in your out-of-band management network with VRF, specify the VRF (ntp@<vrf-name> versus just ntp) in the above commands.

  4. Verify NTP is operating correctly. Look for an asterisk (*) or a plus sign (+) that indicates the clock is synchronized.

    root@rhel7:~# ntpq -pn
    remote           refid            st t when poll reach   delay   offset  jitter
    ==============================================================================
    +173.255.206.154 132.163.96.3     2 u   86  128  377   41.354    2.834   0.602
    +12.167.151.2    198.148.79.209   3 u  103  128  377   13.395   -4.025   0.198
    2a00:7600::41    .STEP.          16 u    - 1024    0    0.000    0.000   0.000
    *129.250.35.250 249.224.99.213   2 u  101  128  377   14.588   -0.299   0.243
    

Obtain NetQ Agent Software Package

To install the NetQ Agent you need to install netq-agent on each switch or host. This is available from the Cumulus Networks repository.

To obtain the NetQ Agent package:

  1. Reference and update the local yum repository.

    root@rhel7:~# sudo rpm --import https://apps3.cumulusnetworks.com/setup/cumulus-apps-rpm.pubkey
    root@rhel7:~# sudo wget -O- https://apps3.cumulusnetworks.com/setup/cumulus-apps-rpm-el7.repo > /etc/yum.repos.d/cumulus-host-el.repo
    
  2. Edit /etc/yum.repos.d/cumulus-host-el.repo to set the enabled=1 flag for the two NetQ repositories.

    root@rhel7:~# vi /etc/yum.repos.d/cumulus-host-el.repo
    ...
    [cumulus-arch-netq-3.2]
    name=Cumulus netq packages
    baseurl=https://apps3.cumulusnetworks.com/repos/rpm/el/7/netq-3.2/$basearch
    gpgcheck=1
    enabled=1
    [cumulus-noarch-netq-3.2]
    name=Cumulus netq architecture-independent packages
    baseurl=https://apps3.cumulusnetworks.com/repos/rpm/el/7/netq-3.2/noarch
    gpgcheck=1
    enabled=1
    ...
    

Install NetQ Agent on a RHEL or CentOS Server

After completing the preparation steps, you can successfully install the agent software onto your server.

To install the NetQ Agent:

  1. Install the Bash completion and NetQ packages on the server.

    root@rhel7:~# sudo yum -y install bash-completion
    root@rhel7:~# sudo yum install netq-agent
    
  2. Verify you have the correct version of the Agent.

    root@rhel7:~# rpm -q netq-agent
    
    You should see version 3.2.1 and update 30 or 31 in the results. For example:
    • netq-agent-3.2.1-rh7u30~1603791304.6f62fad.x86_64.rpm
  3. Restart rsyslog so log files are sent to the correct destination.

    root@rhel7:~# sudo systemctl restart rsyslog
    
  4. Continue with NetQ Agent Configuration in the next section.

Configure the NetQ Agent on a RHEL or CentOS Server

After the NetQ Agent and CLI have been installed on the servers you want to monitor, the NetQ Agents must be configured to obtain useful and relevant data.

The NetQ Agent is aware of and communicates through the designated VRF. If you do not specify one, the default VRF (named default) is used. If you later change the VRF configured for the NetQ Agent (using a lifecycle management configuration profile, for example), you might cause the NetQ Agent to lose communication.

Two methods are available for configuring a NetQ Agent:

Configure the NetQ Agents Using a Configuration File

You can configure the NetQ Agent in the netq.yml configuration file contained in the /etc/netq/ directory.

  1. Open the netq.yml file using your text editor of choice. For example:

    root@rhel7:~# sudo nano /etc/netq/netq.yml
    
  2. Locate the netq-agent section, or add it.

  3. Set the parameters for the agent as follows:

    • port: 31980 (default) or one that you specify
    • server: IP address of the NetQ server or appliance where the agent should send its collected data
    • vrf: default (default) or one that you specify

    Your configuration should be similar to this:

    netq-agent:
      port: 31980
      server: 127.0.0.1
      vrf: default
    

Configure NetQ Agents Using the NetQ CLI

If the CLI is configured, you can use it to configure the NetQ Agent to send telemetry data to the NetQ Server or Appliance. If it is not configured, refer to Configure the NetQ CLI on a RHEL or CentOS Server and then return here.

If you intend to use VRF, skip to Configure the Agent to Use VRF. If you intend to specify a port for communication, skip to Configure the Agent to Communicate over a Specific Port.

Use the following command to configure the NetQ Agent:

netq config add agent server <text-opta-ip> [port <text-opta-port>] [vrf <text-vrf-name>]

This example uses an IP address of 192.168.1.254 and the default port and VRF for the NetQ hardware.

root@rhel7:~# sudo netq config add agent server 192.168.1.254
Updated agent server 192.168.1.254 vrf default. Please restart netq-agent (netq config restart agent).
root@rhel7:~# sudo netq config restart agent

Configure Advanced NetQ Agent Settings

A couple of additional options are available for configuring the NetQ Agent. If you are using VRF, you can configure the agent to communicate over a specific VRF. You can also configure the agent to use a particular port.

Configure the NetQ Agent to Use a VRF

While optional, Cumulus strongly recommends that you configure NetQ Agents to communicate with the NetQ Platform only via a VRF, including a management VRF. To do so, you need to specify the VRF name when configuring the NetQ Agent. For example, if the management VRF is configured and you want the agent to communicate with the NetQ Platform over it, configure the agent like this:

root@rhel7:~# sudo netq config add agent server 192.168.1.254 vrf mgmt
root@rhel7:~# sudo netq config restart agent

Configure the NetQ Agent to Communicate over a Specific Port

By default, NetQ uses port 31980 for communication between the NetQ Platform and NetQ Agents. If you want the NetQ Agent to communicate with the NetQ Platform via a different port, you need to specify the port number when configuring the NetQ Agent like this:

root@rhel7:~# sudo netq config add agent server 192.168.1.254 port 7379
root@rhel7:~# sudo netq config restart agent

Install NetQ CLI

When installing NetQ 3.2.x, you are not required to install the NetQ CLI on your NetQ Appliances or VMs, or on monitored switches and hosts. However, the CLI provides new features, important bug fixes, and the ability to manage your network from multiple points in the network.

Use the instructions in the following sections based on the OS installed on the switch or server:

Install and Configure the NetQ CLI on Cumulus Linux Switches

After installing your Cumulus NetQ software and the NetQ 3.2.1 Agent on each switch you want to monitor, you can also install the NetQ CLI on switches running:

Install the NetQ CLI on a Cumulus Linux Switch

A simple process installs the NetQ CLI on a Cumulus Linux switch.

To install the NetQ CLI you need to install netq-apps on each switch. This is available from the Cumulus Networks repository.

If your network uses a proxy server for external connections, you should first configure a global proxy so apt-get can access the software package in the Cumulus Networks repository.

To obtain the NetQ CLI package:

Edit the /etc/apt/sources.list file to add the appropriate repository for Cumulus NetQ. Note that NetQ has a separate repository from Cumulus Linux.

For switches running Cumulus Linux 3.x, add:

cumulus@switch:~$ sudo nano /etc/apt/sources.list
...
deb http://apps3.cumulusnetworks.com/repos/deb CumulusLinux-3 netq-3.2
...

For switches running Cumulus Linux 4.x, add instead:

cumulus@switch:~$ sudo nano /etc/apt/sources.list
...
deb http://apps3.cumulusnetworks.com/repos/deb CumulusLinux-4 netq-3.2
...
  1. Update the local apt repository and install the software on the switch.

    cumulus@switch:~$ sudo apt-get update
    cumulus@switch:~$ sudo apt-get install netq-apps
    
  2. Verify you have the correct version of the CLI.

    cumulus@switch:~$ dpkg-query -W -f '${Package}\t${Version}\n' netq-apps
    
    You should see version 3.2.1 and update 30 or 31 in the results. For example:
    • Cumulus Linux 3.3.2-3.7.x
      • netq-apps_3.2.1-cl3u30~1603788322.6f62fad_armel.deb
      • netq-apps_3.2.1-cl3u30~1603788322.6f62fad_amd64.deb
    • Cumulus Linux 4.0.0-4.1.x
      • netq-apps_3.2.1-cl4u31~1603788322.6f62fadf_armel.deb
      • netq-apps_3.2.1-cl4u31~1603788322.6f62fadf_amd64.deb
  3. Continue with NetQ CLI configuration in the next section.

Configure the NetQ CLI on a Cumulus Linux Switch

Two methods are available for configuring the NetQ CLI on a switch:

By default, the NetQ CLI is not configured during the NetQ installation. The configuration is stored in /etc/netq/netq.yml.

While the CLI is not configured, you can run only netq config and netq help commands, and you must use sudo to run them.

At minimum, you need to configure the NetQ CLI and NetQ Agent to communicate with the telemetry server. To do so, configure the NetQ Agent and the NetQ CLI so that they are running in the VRF where the routing tables are set for connectivity to the telemetry server. Typically this is the management VRF.

To configure the NetQ CLI, run the following command, then restart the NetQ CLI. This example assumes the telemetry server is reachable via the IP address 10.0.1.1 over port 32000 and the management VRF (mgmt).

cumulus@switch:~$ sudo netq config add cli server 10.0.1.1 vrf mgmt port 32000
cumulus@switch:~$ sudo netq config restart cli

Restarting the CLI stops the current running instance of netqd and starts netqd in the specified VRF.

To configure the NetQ Agent, read the Configure Advanced NetQ Agent Settings topic.
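
For completeness, the matching agent configuration for the same telemetry server and VRF would look like this (a sketch based on the agent commands shown earlier in this guide; the agent uses its own default port, 31980, so no port needs to be specified here unless you have changed it):

cumulus@switch:~$ sudo netq config add agent server 10.0.1.1 vrf mgmt
cumulus@switch:~$ sudo netq config restart agent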

Configure NetQ CLI Using the CLI

The steps to configure the CLI are different depending on whether the NetQ software has been installed for an on-premises or cloud deployment. Follow the instructions for your deployment type.

Use the following command to configure the CLI:

netq config add cli server <text-gateway-dest> [vrf <text-vrf-name>] [port <text-gateway-port>]

Restart the CLI afterward to activate the configuration.

This example uses an IP address of 192.168.1.0 and the default port and VRF.

cumulus@switch:~$ sudo netq config add cli server 192.168.1.0
cumulus@switch:~$ sudo netq config restart cli

To access and configure the CLI on your NetQ Cloud Appliance or VM, you must have your username and password to access the NetQ UI to generate AuthKeys. These keys provide authorized access (access key) and user authentication (secret key). Your credentials and NetQ Cloud addresses were provided by Cumulus Networks via an email titled Welcome to Cumulus NetQ!

To generate AuthKeys:

  1. In your Internet browser, enter netq.cumulusnetworks.com into the address field to open the NetQ UI login page.

  2. Enter your username and password.

  3. Click the Main Menu, then select Management in the Admin column.

  4. Click Manage on the User Accounts card.

  5. Select your user and click above the table.

  6. Copy these keys to a safe place.

  • store the file wherever you like, for example in /home/cumulus/ or /etc/netq
  • name the file whatever you like, for example credentials.yml, creds.yml, or keys.yml

However, the file must have the following format:

access-key: <user-access-key-value-here>
secret-key: <user-secret-key-value-here>
  1. Now that you have your AuthKeys, use the following command to configure the CLI:

    netq config add cli server <text-gateway-dest> [access-key <text-access-key> secret-key <text-secret-key> premises <text-premises-name> | cli-keys-file <text-key-file> premises <text-premises-name>] [vrf <text-vrf-name>] [port <text-gateway-port>]
    
  2. Restart the CLI afterward to activate the configuration.

    This example uses the individual access key, a premises of datacenterwest, and the default Cloud address, port and VRF. Be sure to replace the key values with your generated keys if you are using this example on your server.

    cumulus@switch:~$ sudo netq config add cli server api.netq.cumulusnetworks.com access-key 123452d9bc2850a1726f55534279dd3c8b3ec55e8b25144d4739dfddabe8149e secret-key /vAGywae2E4xVZg8F+HtS6h6yHliZbBP6HXU3J98765= premises datacenterwest
    Successfully logged into NetQ cloud at api.netq.cumulusnetworks.com:443
    Updated cli server api.netq.cumulusnetworks.com vrf default port 443. Please restart netqd (netq config restart cli)
    
    cumulus@switch:~$ sudo netq config restart cli
    Restarting NetQ CLI... Success!
    

    This example uses an optional keys file. Be sure to replace the keys filename and path with the full path and name of your keys file, and the datacenterwest premises name with your premises name if you are using this example on your server.

    cumulus@switch:~$ sudo netq config add cli server api.netq.cumulusnetworks.com cli-keys-file /home/netq/nq-cld-creds.yml premises datacenterwest
    Successfully logged into NetQ cloud at api.netq.cumulusnetworks.com:443
    Updated cli server api.netq.cumulusnetworks.com vrf default port 443. Please restart netqd (netq config restart cli)
    
    cumulus@switch:~$ netq config restart cli
    Restarting NetQ CLI... Success!
    

Configure NetQ CLI Using a Configuration File

You can configure the NetQ CLI in the netq.yml configuration file contained in the /etc/netq/ directory.

  1. Open the netq.yml file using your text editor of choice. For example:

    cumulus@switch:~$ sudo nano /etc/netq/netq.yml
    
  2. Locate the netq-cli section, or add it.

  3. Set the parameters for the CLI.

    Specify the following parameters:

    • netq-user: User who can access the CLI
    • server: IP address of the NetQ server or NetQ Appliance
    • port (default): 32708
    netq-cli:
      netq-user: admin@company.com
      port: 32708
      server: 192.168.0.254
    

    Specify the following parameters:

    • netq-user: User who can access the CLI
    • server: api.netq.cumulusnetworks.com
    • port (default): 443
    • premises: Name of premises you want to query
    netq-cli:
      netq-user: admin@company.com
      port: 443
      premises: datacenterwest
      server: api.netq.cumulusnetworks.com
    

Install and Configure the NetQ CLI on Ubuntu Servers

After installing your Cumulus NetQ software and the NetQ 3.2.1 Agents on each switch you want to monitor, you can also install the NetQ CLI on servers running:

Prepare for NetQ CLI Installation on an Ubuntu Server

For servers running Ubuntu OS, you need to:

If your network uses a proxy server for external connections, you should first configure a global proxy so apt-get can access the software package in the Cumulus Networks repository.
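
A minimal sketch of a global apt proxy configuration, assuming a proxy reachable at proxy.example.com on port 3128 (replace with your proxy address; the 99proxy file name is arbitrary):

root@ubuntu:~# sudo nano /etc/apt/apt.conf.d/99proxy
...
Acquire::http::Proxy "http://proxy.example.com:3128/";
Acquire::https::Proxy "http://proxy.example.com:3128/";
...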

Verify Service Package Versions

Before you install the NetQ Agent on an Ubuntu server, make sure the following packages are installed and running these minimum versions:

Verify the Server is Running lldpd

Make sure you are running lldpd, not lldpad. Ubuntu does not install lldpd by default, but it is required for the installation.

To install this package, run the following commands:

root@ubuntu:~# sudo apt-get update
root@ubuntu:~# sudo apt-get install lldpd
root@ubuntu:~# sudo systemctl enable lldpd.service
root@ubuntu:~# sudo systemctl start lldpd.service

Install and Configure Network Time Server

If NTP is not already installed and configured, follow these steps:

  1. Install NTP on the server, if not already installed. Servers must be in time synchronization with the NetQ Platform or NetQ Appliance to enable useful statistical analysis.

    root@ubuntu:~# sudo apt-get install ntp
    
  2. Configure the network time server.

    1. Open the /etc/ntp.conf file in your text editor of choice.

    2. Under the Server section, specify the NTP server IP address or hostname.

    3. Enable and start the NTP service.

      root@ubuntu:~# sudo systemctl enable ntp
      root@ubuntu:~# sudo systemctl start ntp
      
    1. Verify NTP is operating correctly. Look for an asterisk (*) or a plus sign (+) that indicates the clock is synchronized.

      root@ubuntu:~# ntpq -pn
      remote           refid            st t when poll reach   delay   offset  jitter
      ==============================================================================
      +173.255.206.154 132.163.96.3     2 u   86  128  377   41.354    2.834   0.602
      +12.167.151.2    198.148.79.209   3 u  103  128  377   13.395   -4.025   0.198
      2a00:7600::41    .STEP.          16 u    - 1024    0    0.000    0.000   0.000
      *129.250.35.250 249.224.99.213   2 u  101  128  377   14.588   -0.299   0.243
      
      
    1. Install chrony if needed.

      root@ubuntu:~# sudo apt install chrony
      
    2. Start the chrony service.

      root@ubuntu:~# sudo /usr/local/sbin/chronyd
      
    3. Verify it installed successfully.

      root@ubuntu:~# chronyc activity
      200 OK
      8 sources online
      0 sources offline
      0 sources doing burst (return to online)
      0 sources doing burst (return to offline)
      0 sources with unknown address
      
    4. View the time servers chrony is using.

      root@ubuntu:~# chronyc sources
      210 Number of sources = 8
      
      MS Name/IP address         Stratum Poll Reach LastRx Last sample
      ===============================================================================
      ^+ golem.canonical.com           2   6   377    39  -1135us[-1135us] +/-   98ms
      ^* clock.xmission.com            2   6   377    41  -4641ns[ +144us] +/-   41ms
      ^+ ntp.ubuntu.net              2   7   377   106   -746us[ -573us] +/-   41ms
      ...
      

      Open the chrony.conf configuration file (by default at /etc/chrony/) and edit if needed.

      Example with individual servers specified:

      server golem.canonical.com iburst
      server clock.xmission.com iburst
      server ntp.ubuntu.com iburst
      driftfile /var/lib/chrony/drift
      makestep 1.0 3
      rtcsync
      

      Example when using a pool of servers:

      pool pool.ntp.org iburst
      driftfile /var/lib/chrony/drift
      makestep 1.0 3
      rtcsync
      
    5. View the server chrony is currently tracking.

      root@ubuntu:~# chronyc tracking
      Reference ID    : 5BBD59C7 (golem.canonical.com)
      Stratum         : 3
      Ref time (UTC)  : Mon Feb 10 14:35:18 2020
      System time     : 0.0000046340 seconds slow of NTP time
      Last offset     : -0.000123459 seconds
      RMS offset      : 0.007654410 seconds
      Frequency       : 8.342 ppm slow
      Residual freq   : -0.000 ppm
      Skew            : 26.846 ppm
      Root delay      : 0.031207654 seconds
      Root dispersion : 0.001234590 seconds
      Update interval : 115.2 seconds
      Leap status     : Normal
      

Obtain NetQ CLI Software Package

To install the NetQ Agent you need to install netq-apps on each server. This is available from the Cumulus Networks repository.

To obtain the NetQ CLI package:

  1. Reference and update the local apt repository.

    root@ubuntu:~# sudo wget -O- https://apps3.cumulusnetworks.com/setup/cumulus-apps-deb.pubkey | apt-key add -
    
  2. Add the Ubuntu repository:

    Create the file /etc/apt/sources.list.d/cumulus-apps-deb-xenial.list and add the following line:

    root@ubuntu:~# vi /etc/apt/sources.list.d/cumulus-apps-deb-xenial.list
    ...
    deb [arch=amd64] https://apps3.cumulusnetworks.com/repos/deb xenial netq-latest
    ...
    

    Create the file /etc/apt/sources.list.d/cumulus-apps-deb-bionic.list and add the following line:

    root@ubuntu:~# vi /etc/apt/sources.list.d/cumulus-apps-deb-bionic.list
    ...
    deb [arch=amd64] https://apps3.cumulusnetworks.com/repos/deb bionic netq-latest
    ...
    

    The use of netq-latest in these examples means that a pull from the repository always retrieves the latest version of NetQ, even when a major version update has been made. If you want to keep the repository on a specific version - such as netq-3.1 - use that version string instead, as shown in the example below.
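
    For example, to pin an Ubuntu 18.04 (bionic) server to the NetQ 3.1 repository, the line would look similar to this sketch (adjust the distribution codename to match your server):

    deb [arch=amd64] https://apps3.cumulusnetworks.com/repos/deb bionic netq-3.1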

Install NetQ CLI on an Ubuntu Server

A simple process installs the NetQ CLI on an Ubuntu server.

  1. Install the CLI software on the server.

    root@ubuntu:~# sudo apt-get update
    root@ubuntu:~# sudo apt-get install netq-apps
    
  2. Verify you have the correct version of the CLI.

    root@ubuntu:~# dpkg-query -W -f '${Package}\t${Version}\n' netq-apps
    
    You should see version 3.2.1 and update 30 or 31 in the results. For example:
    • netq-apps_3.2.1-ub18.04u31~1603789872.6f62fad_amd64.deb
    • netq-apps_3.2.1-ub16.04u31~1603788317.6f62fad_amd64.deb
  3. Continue with NetQ CLI configuration in the next section.

Configure the NetQ CLI on an Ubuntu Server

Two methods are available for configuring the NetQ CLI on a server:

By default, the NetQ CLI is not configured during the NetQ installation. The configuration is stored in /etc/netq/netq.yml.

While the CLI is not configured, you can run only netq config and netq help commands, and you must use sudo to run them.

At minimum, you need to configure the NetQ CLI and NetQ Agent to communicate with the telemetry server. To do so, configure the NetQ Agent and the NetQ CLI so that they are running in the VRF where the routing tables are set for connectivity to the telemetry server. Typically this is the management VRF.

To configure the NetQ CLI, run the following command, then restart the NetQ CLI. This example assumes the telemetry server is reachable via the IP address 10.0.1.1 over port 32000 and the management VRF (mgmt).

root@host:~# sudo netq config add cli server 10.0.1.1 vrf mgmt port 32000
root@host:~# sudo netq config restart cli

Restarting the CLI stops the current running instance of netqd and starts netqd in the specified VRF.

To configure the NetQ Agent, read the Configure Advanced NetQ Agent Settings topic.

Configure NetQ CLI Using the CLI

The steps to configure the CLI are different depending on whether the NetQ software has been installed for an on-premises or cloud deployment. Follow the instructions for your deployment type.

Use the following command to configure the CLI:

netq config add cli server <text-gateway-dest> [vrf <text-vrf-name>] [port <text-gateway-port>]

Restart the CLI afterward to activate the configuration.

This example uses an IP address of 192.168.1.0 and the default port and VRF.

root@ubuntu:~# sudo netq config add cli server 192.168.1.0
root@ubuntu:~# sudo netq config restart cli

To access and configure the CLI on your NetQ Platform or NetQ Cloud Appliance, you must have your username and password to access the NetQ UI to generate AuthKeys. These keys provide authorized access (access key) and user authentication (secret key). Your credentials and NetQ Cloud addresses were provided by Cumulus Networks via an email titled Welcome to Cumulus NetQ!

To generate AuthKeys:

  1. In your Internet browser, enter netq.cumulusnetworks.com into the address field to open the NetQ UI login page.

  2. Enter your username and password.

  3. From the Main Menu, select Management in the Admin column.

  1. Click Manage on the User Accounts card.

  2. Select your user and click above the table.

  3. Copy these keys to a safe place.

  • store the file wherever you like, for example in /home/cumulus/ or /etc/netq
  • name the file whatever you like, for example credentials.yml, creds.yml, or keys.yml

However, the file must have the following format:

access-key: <user-access-key-value-here>
secret-key: <user-secret-key-value-here>
  1. Now that you have your AuthKeys, use the following command to configure the CLI:

    netq config add cli server <text-gateway-dest> [access-key <text-access-key> secret-key <text-secret-key> premises <text-premises-name> | cli-keys-file <text-key-file> premises <text-premises-name>] [vrf <text-vrf-name>] [port <text-gateway-port>]
    
  2. Restart the CLI afterward to activate the configuration.

    This example uses the individual access key, a premises of datacenterwest, and the default Cloud address, port and VRF. Be sure to replace the key values with your generated keys if you are using this example on your server.

    root@ubuntu:~# sudo netq config add cli server api.netq.cumulusnetworks.com access-key 123452d9bc2850a1726f55534279dd3c8b3ec55e8b25144d4739dfddabe8149e secret-key /vAGywae2E4xVZg8F+HtS6h6yHliZbBP6HXU3J98765= premises datacenterwest
    Successfully logged into NetQ cloud at api.netq.cumulusnetworks.com:443
    Updated cli server api.netq.cumulusnetworks.com vrf default port 443. Please restart netqd (netq config restart cli)
    
    root@ubuntu:~# sudo netq config restart cli
    Restarting NetQ CLI... Success!
    

    This example uses an optional keys file. Be sure to replace the keys filename and path with the full path and name of your keys file, and the datacenterwest premises name with your premises name if you are using this example on your server.

    root@ubuntu:~# sudo netq config add cli server api.netq.cumulusnetworks.com cli-keys-file /home/netq/nq-cld-creds.yml premises datacenterwest
    Successfully logged into NetQ cloud at api.netq.cumulusnetworks.com:443
    Updated cli server api.netq.cumulusnetworks.com vrf default port 443. Please restart netqd (netq config restart cli)
    
    root@ubuntu:~# sudo netq config restart cli
    Restarting NetQ CLI... Success!
    

Configure NetQ CLI Using Configuration File

You can configure the NetQ CLI in the netq.yml configuration file contained in the /etc/netq/ directory.

  1. Open the netq.yml file using your text editor of choice. For example:

    root@ubuntu:~# sudo nano /etc/netq/netq.yml
    
  2. Locate the netq-cli section, or add it.

  3. Set the parameters for the CLI.

    Specify the following parameters:

    • netq-user: User who can access the CLI
    • server: IP address of the NetQ server or NetQ Appliance
    • port (default): 32708
    netq-cli:
      netq-user: admin@company.com
      port: 32708
      server: 192.168.0.254
    

    Specify the following parameters:

    • netq-user: User who can access the CLI
    • server: api.netq.cumulusnetworks.com
    • port (default): 443
    • premises: Name of premises you want to query
    netq-cli:
      netq-user: admin@company.com
      port: 443
      premises: datacenterwest
      server: api.netq.cumulusnetworks.com
    

Install and Configure the NetQ CLI on RHEL and CentOS Servers

After installing your Cumulus NetQ software and the NetQ 3.2.1 Agents on each switch you want to monitor, you can also install the NetQ CLI on servers running:

Prepare for NetQ CLI Installation on a RHEL or CentOS Server

For servers running RHEL or CentOS, you need to:

If your network uses a proxy server for external connections, you should first configure a global proxy so yum can access the software package in the Cumulus Networks repository.
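
A minimal sketch of a global yum proxy configuration, assuming a proxy reachable at proxy.example.com on port 3128 (replace with your proxy address); add the proxy line to the [main] section of /etc/yum.conf:

root@rhel7:~# sudo vi /etc/yum.conf
...
[main]
proxy=http://proxy.example.com:3128
...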

Verify Service Package Versions

Before you install the NetQ CLI on a Red Hat or CentOS server, make sure the following packages are installed and running these minimum versions:

Verify the Server is Running lldpd and wget

Make sure you are running lldpd, not lldpad. CentOS does not include lldpd by default, nor does it include wget, which is required for the installation.

To install these packages, run the following commands:

root@rhel7:~# sudo yum -y install epel-release
root@rhel7:~# sudo yum -y install lldpd
root@rhel7:~# sudo systemctl enable lldpd.service
root@rhel7:~# sudo systemctl start lldpd.service
root@rhel7:~# sudo yum install wget

Install and Configure NTP

If NTP is not already installed and configured, follow these steps:

  1. Install NTP on the server. Servers must be in time synchronization with the NetQ Appliance or VM to enable useful statistical analysis.

    root@rhel7:~# sudo yum install ntp
    
  2. Configure the NTP server.

    1. Open the /etc/ntp.conf file in your text editor of choice.

    2. Under the Server section, specify the NTP server IP address or hostname (a sample snippet is shown after these steps).

  3. Enable and start the NTP service.

    root@rhel7:~# sudo systemctl enable ntp
    root@rhel7:~# sudo systemctl start ntp
    

    If you are running NTP in your out-of-band management network with VRF, specify the VRF (ntp@<vrf-name> versus just ntp) in the above commands.

  4. Verify NTP is operating correctly. Look for an asterisk (*) or a plus sign (+) that indicates the clock is synchronized.

    root@rhel7:~# ntpq -pn
    remote           refid            st t when poll reach   delay   offset  jitter
    ==============================================================================
    +173.255.206.154 132.163.96.3     2 u   86  128  377   41.354    2.834   0.602
    +12.167.151.2    198.148.79.209   3 u  103  128  377   13.395   -4.025   0.198
    2a00:7600::41    .STEP.          16 u    - 1024    0    0.000    0.000   0.000
    *129.250.35.250 249.224.99.213   2 u  101  128  377   14.588   -0.299   0.243
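
    As a reference, the Server section of /etc/ntp.conf might look similar to this sketch; the addresses shown are placeholders, so substitute your own NTP server IP address or hostname.

    server 192.168.0.254 iburst
    server 0.pool.ntp.org iburst
    server 1.pool.ntp.org iburst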
    

Install NetQ CLI on a RHEL or CentOS Server

A simple process installs the NetQ CLI on a RHEL or CentOS server.

  1. Reference and update the local yum repository and key.

    root@rhel7:~# rpm --import https://apps3.cumulusnetworks.com/setup/cumulus-apps-rpm.pubkey
    root@rhel7:~# wget -O- https://apps3.cumulusnetworks.com/setup/cumulus-apps-rpm-el7.repo > /etc/yum.repos.d/cumulus-host-el.repo
    
  2. Edit /etc/yum.repos.d/cumulus-host-el.repo to set the enabled=1 flag for the two NetQ repositories.

    root@rhel7:~# vi /etc/yum.repos.d/cumulus-host-el.repo
    ...
    [cumulus-arch-netq-3.2]
    name=Cumulus netq packages
    baseurl=https://apps3.cumulusnetworks.com/repos/rpm/el/7/netq-3.2/$basearch
    gpgcheck=1
    enabled=1
    [cumulus-noarch-netq-3.2]
    name=Cumulus netq architecture-independent packages
    baseurl=https://apps3.cumulusnetworks.com/repos/rpm/el/7/netq-3.2/noarch
    gpgcheck=1
    enabled=1
    ...
    
  3. Install the Bash completion and CLI software on the server.

    root@rhel7:~# sudo yum -y install bash-completion
    root@rhel7:~# sudo yum install netq-apps
    
  4. Verify you have the correct version of the CLI.

    root@rhel7:~# rpm -q netq-apps
    
    You should see version 3.2.1 and update 30 or 31 in the results. For example:
    • netq-apps-3.2.1-rh7u30~1603791304.6f62fad.x86_64.rpm
  5. Continue with the next section.

Configure the NetQ CLI on a RHEL or CentOS Server

Two methods are available for configuring the NetQ CLI on a server:

By default, the NetQ CLI is not configured during the NetQ installation. The configuration is stored in /etc/netq/netq.yml.

While the CLI is not configured, you can run only netq config and netq help commands, and you must use sudo to run them.

At minimum, you need to configure the NetQ CLI and NetQ Agent to communicate with the telemetry server. To do so, configure the NetQ Agent and the NetQ CLI so that they are running in the VRF where the routing tables are set for connectivity to the telemetry server. Typically this is the management VRF.

To configure the NetQ CLI, run the following command, then restart the NetQ CLI. This example assumes the telemetry server is reachable via the IP address 10.0.1.1 over port 32000 and the management VRF (mgmt).

root@host:~# sudo netq config add cli server 10.0.1.1 vrf mgmt port 32000
root@host:~# sudo netq config restart cli

Restarting the CLI stops the current running instance of netqd and starts netqd in the specified VRF.

To configure the NetQ Agent, read the Configure Advanced NetQ Agent Settings topic.

Configure NetQ CLI Using the CLI

The steps to configure the CLI are different depending on whether the NetQ software has been installed for an on-premises or cloud deployment. Follow the instructions for your deployment type.

Use the following command to configure the CLI:

netq config add cli server <text-gateway-dest> [vrf <text-vrf-name>] [port <text-gateway-port>]

Restart the CLI afterward to activate the configuration.

This example uses an IP address of 192.168.1.0 and the default port and VRF.

root@rhel7:~# sudo netq config add cli server 192.168.1.0
root@rhel7:~# sudo netq config restart cli

To access and configure the CLI on your NetQ Platform or NetQ Cloud Appliance, you must have your username and password to access the NetQ UI to generate AuthKeys. These keys provide authorized access (access key) and user authentication (secret key). Your credentials and NetQ Cloud addresses were provided by Cumulus Networks via an email titled Welcome to Cumulus NetQ!

To generate AuthKeys:

  1. In your Internet browser, enter netq.cumulusnetworks.com into the address field to open the NetQ UI login page.

  2. Enter your username and password.

  3. From the Main Menu, select Management in the Admin column.

  1. Click Manage on the User Accounts card.

  2. Select your user and click above the table.

  3. Copy these keys to a safe place.

  • store the file wherever you like, for example in /home/cumulus/ or /etc/netq
  • name the file whatever you like, for example credentials.yml, creds.yml, or keys.yml

However, the file must have the following format:

access-key: <user-access-key-value-here>
secret-key: <user-secret-key-value-here>
  1. Now that you have your AuthKeys, use the following command to configure the CLI:

    netq config add cli server <text-gateway-dest> [access-key <text-access-key> secret-key <text-secret-key> premises <text-premises-name> | cli-keys-file <text-key-file> premises <text-premises-name>] [vrf <text-vrf-name>] [port <text-gateway-port>]
    
  2. Restart the CLI afterward to activate the configuration.

    This example uses the individual access key, a premises of datacenterwest, and the default Cloud address, port and VRF. Be sure to replace the key values with your generated keys if you are using this example on your server.

    root@rhel7:~# sudo netq config add cli server api.netq.cumulusnetworks.com access-key 123452d9bc2850a1726f55534279dd3c8b3ec55e8b25144d4739dfddabe8149e secret-key /vAGywae2E4xVZg8F+HtS6h6yHliZbBP6HXU3J98765= premises datacenterwest
    Successfully logged into NetQ cloud at api.netq.cumulusnetworks.com:443
    Updated cli server api.netq.cumulusnetworks.com vrf default port 443. Please restart netqd (netq config restart cli)
    
    root@rhel7:~# sudo netq config restart cli
    Restarting NetQ CLI... Success!
    

    This example uses an optional keys file. Be sure to replace the keys filename and path with the full path and name of your keys file, and the datacenterwest premises name with your premises name if you are using this example on your server.

    root@rhel7:~# sudo netq config add cli server api.netq.cumulusnetworks.com cli-keys-file /home/netq/nq-cld-creds.yml premises datacenterwest
    Successfully logged into NetQ cloud at api.netq.cumulusnetworks.com:443
    Updated cli server api.netq.cumulusnetworks.com vrf default port 443. Please restart netqd (netq config restart cli)
    
    root@rhel7:~# sudo netq config restart cli
    Restarting NetQ CLI... Success!
    

Configure NetQ CLI Using Configuration File

You can configure the NetQ CLI in the netq.yml configuration file contained in the /etc/netq/ directory.

  1. Open the netq.yml file using your text editor of choice. For example:

    root@rhel7:~# sudo nano /etc/netq/netq.yml
    
  2. Locate the netq-cli section, or add it.

  3. Set the parameters for the CLI.

    Specify the following parameters:

    • netq-user: User who can access the CLI
    • server: IP address of the NetQ server or NetQ Appliance
    • port (default): 32708
    netq-cli:
      netq-user: admin@company.com
      port: 32708
      server: 192.168.0.254
    

    Specify the following parameters:

    • netq-user: User who can access the CLI
    • server: api.netq.cumulusnetworks.com
    • port (default): 443
    • premises: Name of premises you want to query
    netq-cli:
      netq-user: admin@company.com
      port: 443
      premises: datacenterwest
      server: api.netq.cumulusnetworks.com
    

Install NetQ Agent and CLI

To collect network telemetry data, the NetQ Agents must be installed on the relevant switches and hosts. Updating the NetQ Agent and CLI at the same time saves time, but is not required. Updating the NetQ Agents is always recommended. The NetQ CLI is optional, but can be very useful.

Use the instructions in the following sections based on the OS installed on the switch or server to install both the NetQ Agent and the CLI at the same time.

Install and Configure the NetQ Agent and CLI on Cumulus Linux Switches

After installing your Cumulus NetQ software, you can install the NetQ 3.2.1 Agents and CLI on each switch you want to monitor. These can be installed on switches running:

Prepare for NetQ Agent and CLI Installation on a Cumulus Linux Switch

For servers running Cumulus Linux, you need to:

If your network uses a proxy server for external connections, you should first configure a global proxy so apt-get can access the software package in the Cumulus Networks repository.

Verify NTP is Installed and Configured

Verify that NTP is running on the switch. The switch must be in time synchronization with the NetQ Platform or NetQ Appliance to enable useful statistical analysis.

cumulus@switch:~$ sudo systemctl status ntp
[sudo] password for cumulus:
● ntp.service - LSB: Start NTP daemon
        Loaded: loaded (/etc/init.d/ntp; bad; vendor preset: enabled)
        Active: active (running) since Fri 2018-06-01 13:49:11 EDT; 2 weeks 6 days ago
          Docs: man:systemd-sysv-generator(8)
        CGroup: /system.slice/ntp.service
                └─2873 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -c /var/lib/ntp/ntp.conf.dhcp -u 109:114

If NTP is not installed, install and configure it before continuing.

If NTP is not running, enable and start the NTP service.

If you are running NTP in your out-of-band management network with VRF, specify the VRF (ntp@<vrf-name> versus just ntp) in these commands, as shown in the sketch below.
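
The following is a minimal sketch; the second pair of commands assumes your management VRF is named mgmt, so adjust the VRF name to match your configuration.

cumulus@switch:~$ sudo systemctl enable ntp
cumulus@switch:~$ sudo systemctl start ntp

When NTP runs in the management VRF:

cumulus@switch:~$ sudo systemctl enable ntp@mgmt
cumulus@switch:~$ sudo systemctl start ntp@mgmt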

Obtain NetQ Agent and CLI Software Packages

To install the NetQ Agent you need to install netq-agent on each switch or host. To install the NetQ CLI you need to install netq-apps on each switch. These are available from the Cumulus Networks repository.

To obtain the NetQ packages:

Edit the /etc/apt/sources.list file to add the repository for Cumulus NetQ.

Note that NetQ has a separate repository from Cumulus Linux.

For Cumulus Linux 3.x, add the repository:

cumulus@switch:~$ sudo nano /etc/apt/sources.list
...
deb http://apps3.cumulusnetworks.com/repos/deb CumulusLinux-3 netq-3.2
...

For Cumulus Linux 4.x, add the repository:

cumulus@switch:~$ sudo nano /etc/apt/sources.list
...
deb http://apps3.cumulusnetworks.com/repos/deb CumulusLinux-4 netq-3.2
...

Add the apps3.cumulusnetworks.com authentication key to Cumulus Linux:

cumulus@switch:~$ wget -qO - https://apps3.cumulusnetworks.com/setup/cumulus-apps-deb.pubkey | sudo apt-key add -

Install the NetQ Agent and CLI on a Cumulus Linux Switch

After completing the preparation steps, you can install the NetQ Agent and CLI onto your switch.

To install the NetQ Agent and CLI:

  1. Update the local apt repository, then install the NetQ software on the switch.

    cumulus@switch:~$ sudo apt-get update
    cumulus@switch:~$ sudo apt-get install netq-agent netq-apps
    
  2. Verify you have the correct version of the Agent and CLI.

    cumulus@switch:~$ dpkg-query -W -f '${Package}\t${Version}\n' netq-agent
    
    You should see version 3.2.1 and update 30 or 31 in the results. For example:
    • Cumulus Linux 3.3.2-3.7.x
      • netq-agent_3.2.1-cl3u30~1603788322.6f62fadf_armel.deb
      • netq-agent_3.2.1-cl3u30~1603788322.6f62fadf_amd64.deb
    • Cumulus Linux 4.0.0-4.1.x
      • netq-agent_3.2.1-cl4u31~1603788322.6f62fadf_armel.deb
      • netq-agent_3.2.1-cl4u31~1603788322.6f62fadf_amd64.deb
    cumulus@switch:~$ dpkg-query -W -f '${Package}\t${Version}\n' netq-apps
    
    You should see version 3.2.1 and update 30 or 31 in the results. For example:
    • Cumulus Linux 3.3.2-3.7.x
      • netq-apps_3.2.1-cl3u30~1603788322.6f62fad_armel.deb
      • netq-apps_3.2.1-cl3u30~1603788322.6f62fad_amd64.deb
    • Cumulus Linux 4.0.0-4.1.x
      • netq-apps_3.2.1-cl4u31~1603788322.6f62fadf_armel.deb
      • netq-apps_3.2.1-cl4u31~1603788322.6f62fadf_amd64.deb
  3. Restart rsyslog so log files are sent to the correct destination.

    cumulus@switch:~$ sudo systemctl restart rsyslog.service
    
  4. Continue with NetQ Agent and CLI configuration in the next section.

Configure the NetQ Agent and CLI on a Cumulus Linux Switch

After the NetQ Agent and CLI have been installed on the servers you want to monitor, the NetQ Agents must be configured to obtain useful and relevant data.

The NetQ Agent is aware of and communicates through the designated VRF. If you do not specify one, the default VRF (named default) is used. If you later change the VRF configured for the NetQ Agent (using a lifecycle management configuration profile, for example), you might cause the NetQ Agent to lose communication.

Two methods are available for configuring a NetQ Agent:

Configure NetQ Agent and CLI Using a Configuration File

You can configure the NetQ Agent and CLI in the netq.yml configuration file contained in the /etc/netq/ directory.

  1. Open the netq.yml file using your text editor of choice. For example:

    cumulus@switch:~$ sudo nano /etc/netq/netq.yml
    
  2. Locate the netq-agent section, or add it.

  3. Set the parameters for the agent as follows:

    • port: 31980 (default configuration)
    • server: IP address of the NetQ Appliance or VM where the agent should send its collected data
    • vrf: default (default) or one that you specify

    Your configuration should be similar to this:

    netq-agent:
      port: 31980
      server: 127.0.0.1
      vrf: default
    
  4. Locate the netq-cli section, or add it.

  5. Set the parameters for the CLI based on your deployment type.

    Specify the following parameters:

    • netq-user: User who can access the CLI
    • server: IP address of the NetQ server or NetQ Appliance
    • port (default): 32708
    netq-cli:
      netq-user: admin@company.com
      port: 32708
      server: 192.168.0.254
    

    Specify the following parameters:

    • netq-user: User who can access the CLI
    • server: api.netq.cumulusnetworks.com
    • port (default): 443
    • premises: Name of premises you want to query
    netq-cli:
      netq-user: admin@company.com
      port: 443
      premises: datacenterwest
      server: api.netq.cumulusnetworks.com
    

Configure NetQ Agent and CLI Using the NetQ CLI

If the CLI is configured, you can use it to configure the NetQ Agent to send telemetry data to the NetQ Appliance or VM.

If you intend to use VRF, refer to Configure the Agent to Use VRF. If you intend to specify a port for communication, refer to Configure the Agent to Communicate over a Specific Port.

Use the following command to configure the NetQ Agent:

netq config add agent server <text-opta-ip> [port <text-opta-port>] [vrf <text-vrf-name>]

This example uses an IP address of 192.168.1.254 and the default port and VRF for the NetQ Appliance or VM.

cumulus@switch:~$ sudo netq config add agent server 192.168.1.254
Updated agent server 192.168.1.254 vrf default. Please restart netq-agent (netq config restart agent).
cumulus@switch:~$ sudo netq config restart agent

The steps to configure the CLI are different depending on whether the NetQ software has been installed for an on-premises or cloud deployment. Follow the instructions for your deployment type.

Use the following command to configure the CLI:

netq config add cli server <text-gateway-dest> [vrf <text-vrf-name>] [port <text-gateway-port>]

Restart the CLI afterward to activate the configuration.

This example uses an IP address of 192.168.1.0 and the default port and VRF.

cumulus@switch:~$ sudo netq config add cli server 192.168.1.0
cumulus@switch:~$ sudo netq config restart cli

To access and configure the CLI on your NetQ Cloud Appliance or VM, you must have your username and password to access the NetQ UI to generate AuthKeys. These keys provide authorized access (access key) and user authentication (secret key). Your credentials and NetQ Cloud addresses were provided by Cumulus Networks via an email titled Welcome to Cumulus NetQ!

To generate AuthKeys:

  1. In your Internet browser, enter netq.cumulusnetworks.com into the address field to open the NetQ UI login page.

  2. Enter your username and password.

  3. From the Main Menu, select Management in the Admin column.

  1. Click Manage on the User Accounts card.

  2. Select your user and click above the table.

  3. Copy these keys to a safe place.

  • store the file wherever you like, for example in /home/cumulus/ or /etc/netq
  • name the file whatever you like, for example credentials.yml, creds.yml, or keys.yml

However, the file must have the following format:

access-key: <user-access-key-value-here>
secret-key: <user-secret-key-value-here>
  1. Now that you have your AuthKeys, use the following command to configure the CLI:

    netq config add cli server <text-gateway-dest> [access-key <text-access-key> secret-key <text-secret-key> premises <text-premises-name> | cli-keys-file <text-key-file> premises <text-premises-name>] [vrf <text-vrf-name>] [port <text-gateway-port>]
    
  2. Restart the CLI afterward to activate the configuration.

    This example uses the individual access key, a premises of datacenterwest, and the default Cloud address, port and VRF. Be sure to replace the key values with your generated keys if you are using this example on your server.

    cumulus@switch:~$ sudo netq config add cli server api.netq.cumulusnetworks.com access-key 123452d9bc2850a1726f55534279dd3c8b3ec55e8b25144d4739dfddabe8149e secret-key /vAGywae2E4xVZg8F+HtS6h6yHliZbBP6HXU3J98765= premises datacenterwest
    Successfully logged into NetQ cloud at api.netq.cumulusnetworks.com:443
    Updated cli server api.netq.cumulusnetworks.com vrf default port 443. Please restart netqd (netq config restart cli)
    
    cumulus@switch:~$ sudo netq config restart cli
    Restarting NetQ CLI... Success!
    

    This example uses an optional keys file. Be sure to replace the keys filename and path with the full path and name of your keys file, and the datacenterwest premises name with your premises name if you are using this example on your server.

    cumulus@switch:~$ sudo netq config add cli server api.netq.cumulusnetworks.com cli-keys-file /home/netq/nq-cld-creds.yml premises datacenterwest
    Successfully logged into NetQ cloud at api.netq.cumulusnetworks.com:443
    Updated cli server api.netq.cumulusnetworks.com vrf default port 443. Please restart netqd (netq config restart cli)
    
    cumulus@switch:~$ netq config restart cli
    Restarting NetQ CLI... Success!
    

Configure Advanced NetQ Agent Settings on a Cumulus Linux Switch

A couple of additional options are available for configuring the NetQ Agent. If you are using VRF, you can configure the agent to communicate over a specific VRF. You can also configure the agent to use a particular port.

Configure the Agent to Use a VRF

While optional, Cumulus strongly recommends that you configure NetQ Agents to communicate with the NetQ Appliance or VM only via a VRF, including a management VRF. To do so, you need to specify the VRF name when configuring the NetQ Agent. For example, if the management VRF is configured and you want the agent to communicate with the NetQ Appliance or VM over it, configure the agent like this:

cumulus@leaf01:~$ sudo netq config add agent server 192.168.1.254 vrf mgmt
cumulus@leaf01:~$ sudo netq config restart agent

Configure the Agent to Communicate over a Specific Port

By default, NetQ uses port 31980 for communication between the NetQ Appliance or VM and NetQ Agents. If you want the NetQ Agent to communicate with the NetQ Appliance or VM via a different port, you need to specify the port number when configuring the NetQ Agent, like this:

cumulus@leaf01:~$ sudo netq config add agent server 192.168.1.254 port 7379
cumulus@leaf01:~$ sudo netq config restart agent

Install and Configure the NetQ Agent and CLI on Ubuntu Servers

After installing your Cumulus NetQ software, you should install the NetQ 3.2.1 Agent on each server you want to monitor. NetQ Agents can be installed on servers running:

Prepare for NetQ Agent Installation on an Ubuntu Server

For servers running Ubuntu OS, you need to:

If your network uses a proxy server for external connections, you should first configure a global proxy so apt-get can access the agent package on the Cumulus Networks repository.

Verify Service Package Versions

Before you install the NetQ Agent on an Ubuntu server, make sure the following packages are installed and running these minimum versions:

Verify the Server is Running lldpd

Make sure you are running lldpd, not lldpad. Ubuntu does not install lldpd by default, but it is required for the installation.

To install this package, run the following commands:

root@ubuntu:~# sudo apt-get update
root@ubuntu:~# sudo apt-get install lldpd
root@ubuntu:~# sudo systemctl enable lldpd.service
root@ubuntu:~# sudo systemctl start lldpd.service

Install and Configure Network Time Server

If NTP is not already installed and configured, follow these steps:

  1. Install NTP on the server, if not already installed. Servers must be in time synchronization with the NetQ Platform or NetQ Appliance to enable useful statistical analysis.

    root@ubuntu:~# sudo apt-get install ntp
    
  2. Configure the network time server.

    1. Open the /etc/ntp.conf file in your text editor of choice.

    2. Under the Server section, specify the NTP server IP address or hostname.

    3. Enable and start the NTP service.

      root@ubuntu:~# sudo systemctl enable ntp
      root@ubuntu:~# sudo systemctl start ntp
      
    1. Verify NTP is operating correctly. Look for an asterisk (*) or a plus sign (+) that indicates the clock is synchronized.

      root@ubuntu:~# ntpq -pn
      remote           refid            st t when poll reach   delay   offset  jitter
      ==============================================================================
      +173.255.206.154 132.163.96.3     2 u   86  128  377   41.354    2.834   0.602
      +12.167.151.2    198.148.79.209   3 u  103  128  377   13.395   -4.025   0.198
      2a00:7600::41    .STEP.          16 u    - 1024    0    0.000    0.000   0.000
      *129.250.35.250 249.224.99.213   2 u  101  128  377   14.588   -0.299   0.243
      
    1. Install chrony if needed.

      root@ubuntu:~# sudo apt install chrony
      
    2. Start the chrony service.

      root@ubuntu:~# sudo /usr/local/sbin/chronyd
      
    3. Verify it installed successfully.

      root@ubuntu:~# chronyc activity
      200 OK
      8 sources online
      0 sources offline
      0 sources doing burst (return to online)
      0 sources doing burst (return to offline)
      0 sources with unknown address
      
    4. View the time servers chrony is using.

      root@ubuntu:~# chronyc sources
      210 Number of sources = 8
      
      MS Name/IP address         Stratum Poll Reach LastRx Last sample
      ===============================================================================
      ^+ golem.canonical.com           2   6   377    39  -1135us[-1135us] +/-   98ms
      ^* clock.xmission.com            2   6   377    41  -4641ns[ +144us] +/-   41ms
      ^+ ntp.ubuntu.net              2   7   377   106   -746us[ -573us] +/-   41ms
      ...
      

      Open the chrony.conf configuration file (by default at /etc/chrony/) and edit if needed.

      Example with individual servers specified:

      server golem.canonical.com iburst
      server clock.xmission.com iburst
      server ntp.ubuntu.com iburst
      driftfile /var/lib/chrony/drift
      makestep 1.0 3
      rtcsync
      

      Example when using a pool of servers:

      pool pool.ntp.org iburst
      driftfile /var/lib/chrony/drift
      makestep 1.0 3
      rtcsync
      
    5. View the server chrony is currently tracking.

      root@ubuntu:~# chronyc tracking
      Reference ID    : 5BBD59C7 (golem.canonical.com)
      Stratum         : 3
      Ref time (UTC)  : Mon Feb 10 14:35:18 2020
      System time     : 0.0000046340 seconds slow of NTP time
      Last offset     : -0.000123459 seconds
      RMS offset      : 0.007654410 seconds
      Frequency       : 8.342 ppm slow
      Residual freq   : -0.000 ppm
      Skew            : 26.846 ppm
      Root delay      : 0.031207654 seconds
      Root dispersion : 0.001234590 seconds
      Update interval : 115.2 seconds
      Leap status     : Normal
      

Obtain NetQ Agent Software Package

To install the NetQ Agent you need to install netq-agent on each server. This is available from the Cumulus Networks repository.

To obtain the NetQ Agent package:

  1. Reference and update the local apt repository.

    root@ubuntu:~# sudo wget -O- https://apps3.cumulusnetworks.com/setup/cumulus-apps-deb.pubkey | apt-key add -

  2. Add the Ubuntu repository:

    Create the file /etc/apt/sources.list.d/cumulus-apps-deb-xenial.list and add the following line:

    root@ubuntu:~# vi /etc/apt/sources.list.d/cumulus-apps-deb-xenial.list
    ...
    deb [arch=amd64] https://apps3.cumulusnetworks.com/repos/deb xenial netq-latest
    ...
    

    Create the file /etc/apt/sources.list.d/cumulus-apps-deb-bionic.list and add the following line:

    root@ubuntu:~# vi /etc/apt/sources.list.d/cumulus-apps-deb-bionic.list
    ...
    deb [arch=amd64] https://apps3.cumulusnetworks.com/repos/deb bionic netq-latest
    ...
    

    The use of netq-latest in these examples means that a pull from the repository always retrieves the latest version of NetQ, even when a major version update has been made. If you want to keep the repository on a specific version - such as netq-2.4 - use that version string instead.

Install NetQ Agent on an Ubuntu Server

After completing the preparation steps, you can successfully install the agent software onto your server.

To install the NetQ Agent:

  1. Install the software packages on the server.

    root@ubuntu:~# sudo apt-get update
    root@ubuntu:~# sudo apt-get install netq-agent
    
  2. Verify you have the correct version of the Agent.

    root@ubuntu:~# dpkg-query -W -f '${Package}\t${Version}\n' netq-agent
    
    You should see version 3.0.0 and update 27 or later in the results. For example:
    • netq-agent_3.0.0-ub18.04u27~1588242914.9fb5b87_amd64.deb
    • netq-agent_3.0.0-ub16.04u27~1588242914.9fb5b87_amd64.deb
  3. Restart rsyslog so log files are sent to the correct destination.

    root@ubuntu:~# sudo systemctl restart rsyslog.service

  4. Continue with NetQ Agent Configuration in the next section.

Configure the NetQ Agent on an Ubuntu Server

After the NetQ Agent and CLI have been installed on the servers you want to monitor, the NetQ Agents must be configured to obtain useful and relevant data.

The NetQ Agent is aware of and communicates through the designated VRF. If you do not specify one, the default VRF (named default) is used. If you later change the VRF configured for the NetQ Agent (using a lifecycle management configuration profile, for example), you might cause the NetQ Agent to lose communication.

Two methods are available for configuring a NetQ Agent:

Configure the NetQ Agents Using a Configuration File

You can configure the NetQ Agent in the netq.yml configuration file contained in the /etc/netq/ directory.

  1. Open the netq.yml file using your text editor of choice. For example:

    root@ubuntu:~# sudo nano /etc/netq/netq.yml

  2. Locate the netq-agent section, or add it.

  3. Set the parameters for the agent as follows:

    • port: 31980 (default) or one that you specify
    • server: IP address of the NetQ Appliance or VM where the agent should send its collected data
    • vrf: default (default) or one that you specify

    Your configuration should be similar to this:

    netq-agent:
      port: 31980
      server: 127.0.0.1
      vrf: default

Configure NetQ Agents Using the NetQ CLI

If the CLI is configured, you can use it to configure the NetQ Agent to send telemetry data to the NetQ Server or Appliance. If it is not configured, refer to Configure the NetQ CLI on an Ubuntu Server and then return here.

If you intend to use VRF, skip to Configure the Agent to Use VRF. If you intend to specify a port for communication, skip to Configure the Agent to Communicate over a Specific Port.

Use the following command to configure the NetQ Agent:

netq config add agent server <text-opta-ip> [port <text-opta-port>] [vrf <text-vrf-name>]

This example uses an IP address of 192.168.1.254 and the default port and VRF for the NetQ hardware.

root@ubuntu:~# sudo netq config add agent server 192.168.1.254
Updated agent server 192.168.1.254 vrf default. Please restart netq-agent (netq config restart agent).
root@ubuntu:~# sudo netq config restart agent

Configure Advanced NetQ Agent Settings

A couple of additional options are available for configuring the NetQ Agent. If you are using VRF, you can configure the agent to communicate over a specific VRF. You can also configure the agent to use a particular port.

Configure the NetQ Agent to Use a VRF

While optional, Cumulus strongly recommends that you configure NetQ Agents to communicate with the NetQ Platform only via a VRF, including a management VRF. To do so, you need to specify the VRF name when configuring the NetQ Agent. For example, if the management VRF is configured and you want the agent to communicate with the NetQ Platform over it, configure the agent like this:

root@ubuntu:~# sudo netq config add agent server 192.168.1.254 vrf mgmt
root@ubuntu:~# sudo netq config restart agent

Configure the NetQ Agent to Communicate over a Specific Port

By default, NetQ uses port 31980 for communication between the NetQ Platform and NetQ Agents. If you want the NetQ Agent to communicate with the NetQ Platform via a different port, you need to specify the port number when configuring the NetQ Agent like this:

root@ubuntu:~# sudo netq config add agent server 192.168.1.254 port 7379
root@ubuntu:~# sudo netq config restart agent

Install and Configure the NetQ Agent and CLI on RHEL and CentOS Servers

After installing your Cumulus NetQ software, you can install the NetQ 3.2.1 Agent and CLI on each server you want to monitor. These can be installed on servers running:

Prepare for NetQ Agent and CLI Installation on a RHEL or CentOS Server

For servers running RHEL or CentOS, you need to:

If your network uses a proxy server for external connections, you should first configure a global proxy so yum can access the software package in the Cumulus Networks repository.

Verify Service Package Versions

Before you install the NetQ Agent and CLI on a Red Hat or CentOS server, make sure the following packages are installed and running these minimum versions:

Verify the Server is Running lldpd and wget

Make sure you are running lldpd, not lldpad. CentOS does not include lldpd by default, nor does it include wget, which is required for the installation.

To install these packages, run the following commands:

root@rhel7:~# sudo yum -y install epel-release
root@rhel7:~# sudo yum -y install lldpd
root@rhel7:~# sudo systemctl enable lldpd.service
root@rhel7:~# sudo systemctl start lldpd.service
root@rhel7:~# sudo yum install wget

Install and Configure NTP

If NTP is not already installed and configured, follow these steps:

  1. Install NTP on the server. Servers must be in time synchronization with the NetQ Platform or NetQ Appliance to enable useful statistical analysis.

    root@rhel7:~# sudo yum install ntp
    
  2. Configure the NTP server.

    1. Open the /etc/ntp.conf file in your text editor of choice.

    2. Under the Server section, specify the NTP server IP address or hostname.

  3. Enable and start the NTP service.

    root@rhel7:~# sudo systemctl enable ntp
    root@rhel7:~# sudo systemctl start ntp
    

    If you are running NTP in your out-of-band management network with VRF, specify the VRF (ntp@<vrf-name> versus just ntp) in the above commands.

  4. Verify NTP is operating correctly. Look for an asterisk (*) or a plus sign (+) that indicates the clock is synchronized.

    root@rhel7:~# ntpq -pn
    remote           refid            st t when poll reach   delay   offset  jitter
    ==============================================================================
    +173.255.206.154 132.163.96.3     2 u   86  128  377   41.354    2.834   0.602
    +12.167.151.2    198.148.79.209   3 u  103  128  377   13.395   -4.025   0.198
    2a00:7600::41    .STEP.          16 u    - 1024    0    0.000    0.000   0.000
    *129.250.35.250 249.224.99.213   2 u  101  128  377   14.588   -0.299   0.243
    

Obtain NetQ Agent and CLI Package

To install the NetQ Agent you need to install netq-agent on each switch or host. To install the NetQ CLI you need to install netq-apps on each switch or host. These are available from the Cumulus Networks repository.

To obtain the NetQ packages:

  1. Reference and update the local yum repository.

    root@rhel7:~# sudo rpm --import https://apps3.cumulusnetworks.com/setup/cumulus-apps-rpm.pubkey
    root@rhel7:~# sudo wget -O- https://apps3.cumulusnetworks.com/setup/cumulus-apps-rpm-el7.repo > /etc/yum.repos.d/cumulus-host-el.repo
    
  2. Edit /etc/yum.repos.d/cumulus-host-el.repo to set the enabled=1 flag for the two NetQ repositories.

    root@rhel7:~# vi /etc/yum.repos.d/cumulus-host-el.repo
    ...
    [cumulus-arch-netq-3.2]
    name=Cumulus netq packages
    baseurl=https://apps3.cumulusnetworks.com/repos/rpm/el/7/netq-3.2/$basearch
    gpgcheck=1
    enabled=1
    [cumulus-noarch-netq-3.2]
    name=Cumulus netq architecture-independent packages
    baseurl=https://apps3.cumulusnetworks.com/repos/rpm/el/7/netq-3.2/noarch
    gpgcheck=1
    enabled=1
    ...
    

Install NetQ Agent and CLI on a RHEL or CentOS Server

After completing the preparation steps, you can successfully install the NetQ Agent and CLI software onto your server.

To install the NetQ software:

  1. Install the Bash completion and NetQ packages on the server.

    root@rhel7:~# sudo yum -y install bash-completion
    root@rhel7:~# sudo yum install netq-agent netq-apps
    
  2. Verify you have the correct version of the Agent and CLI.

    root@rhel7:~# rpm -q netq-agent
    
    You should see version 3.2.1 and update 30 or 31 in the results. For example:
    • netq-agent-3.2.1-rh7u30~1603791304.6f62fad.x86_64.rpm
    root@rhel7:~# rpm -q netq-apps
    
    You should see version 3.2.1 and update 30 or 31 in the results. For example:
    • netq-apps-3.2.1-rh7u30~1603791304.6f62fad.x86_64.rpm
  3. Restart rsyslog so log files are sent to the correct destination.

    root@rhel7:~# sudo systemctl restart rsyslog
    
  4. Continue with NetQ Agent and CLI Configuration in the next section.

Configure the NetQ Agent and CLI on a RHEL or CentOS Server

After the NetQ Agent and CLI have been installed on the servers you want to monitor, the NetQ Agents must be configured to obtain useful and relevant data.

The NetQ Agent is aware of and communicates through the designated VRF. If you do not specify one, the default VRF (named default) is used. If you later change the VRF configured for the NetQ Agent (using a lifecycle management configuration profile, for example), you might cause the NetQ Agent to lose communication.

Two methods are available for configuring a NetQ Agent:

Configure the NetQ Agent and CLI Using a Configuration File

You can configure the NetQ Agent and CLI in the netq.yml configuration file contained in the /etc/netq/ directory.

  1. Open the netq.yml file using your text editor of choice. For example:

    root@rhel7:~# sudo nano /etc/netq/netq.yml
    
  2. Locate the netq-agent section, or add it.

  3. Set the parameters for the agent as follows:

    • port: 31980 (default) or one that you specify
    • server: IP address of the NetQ server or appliance where the agent should send its collected data
    • vrf: default (default) or one that you specify

    Your configuration should be similar to this:

    netq-agent:
      port: 31980
      server: 127.0.0.1
      vrf: default
    

  4. Locate the netq-cli section, or add it.

  5. Set the parameters for the CLI based on your deployment type.

    Specify the following parameters:

    • netq-user: User who can access the CLI
    • server: IP address of the NetQ server or NetQ Appliance
    • port (default): 32708
    netq-cli:
      netq-user: admin@company.com
      port: 32708
      server: 192.168.0.254
    

    Specify the following parameters:

    • netq-user: User who can access the CLI
    • server: api.netq.cumulusnetworks.com
    • port (default): 443
    • premises: Name of premises you want to query
    netq-cli:
      netq-user: admin@company.com
      port: 443
      premises: datacenterwest
      server: api.netq.cumulusnetworks.com
    

Configure NetQ Agent and CLI Using the NetQ CLI

If the CLI is configured, you can use it to configure the NetQ Agent to send telemetry data to the NetQ Server or Appliance.

If you intend to use VRF, skip to Configure the Agent to Use VRF. If you intend to specify a port for communication, skip to Configure the Agent to Communicate over a Specific Port.

Use the following command to configure the NetQ Agent:

netq config add agent server <text-opta-ip> [port <text-opta-port>] [vrf <text-vrf-name>]

This example uses an IP address of 192.168.1.254 and the default port and VRF for the NetQ hardware.

root@rhel7:~# sudo netq config add agent server 192.168.1.254
Updated agent server 192.168.1.254 vrf default. Please restart netq-agent (netq config restart agent).
root@rhel7:~# sudo netq config restart agent

The steps to configure the CLI are different depending on whether the NetQ software has been installed for an on-premises or cloud deployment. Follow the instructions for your deployment type.

Use the following command to configure the CLI:

netq config add cli server <text-gateway-dest> [vrf <text-vrf-name>] [port <text-gateway-port>]

Restart the CLI afterward to activate the configuration.

This example uses an IP address of 192.168.1.0 and the default port and VRF.

root@rhel7:~# sudo netq config add cli server 192.168.1.0
root@rhel7:~# sudo netq config restart cli

To access and configure the CLI on your NetQ Platform or NetQ Cloud Appliance, you must have your username and password to access the NetQ UI to generate AuthKeys. These keys provide authorized access (access key) and user authentication (secret key). Your credentials and NetQ Cloud addresses were provided by Cumulus Networks via an email titled Welcome to Cumulus NetQ!

To generate AuthKeys:

  1. In your Internet browser, enter netq.cumulusnetworks.com into the address field to open the NetQ UI login page.

  2. Enter your username and password.

  3. From the Main Menu, select Management in the Admin column.

  1. Click Manage on the User Accounts card.

  2. Select your user and click above the table.

  3. Copy these keys to a safe place.

  • store the file wherever you like, for example in /home/cumulus/ or /etc/netq
  • name the file whatever you like, for example credentials.yml, creds.yml, or keys.yml

However, the file must have the following format:

access-key: <user-access-key-value-here>
secret-key: <user-secret-key-value-here>
  1. Now that you have your AuthKeys, use the following command to configure the CLI:

    netq config add cli server <text-gateway-dest> [access-key <text-access-key> secret-key <text-secret-key> premises <text-premises-name> | cli-keys-file <text-key-file> premises <text-premises-name>] [vrf <text-vrf-name>] [port <text-gateway-port>]
    
  2. Restart the CLI afterward to activate the configuration.

    This example uses the individual access key, a premises of datacenterwest, and the default Cloud address, port and VRF. Be sure to replace the key values with your generated keys if you are using this example on your server.

    root@rhel7:~# sudo netq config add cli server api.netq.cumulusnetworks.com access-key 123452d9bc2850a1726f55534279dd3c8b3ec55e8b25144d4739dfddabe8149e secret-key /vAGywae2E4xVZg8F+HtS6h6yHliZbBP6HXU3J98765= premises datacenterwest
    Successfully logged into NetQ cloud at api.netq.cumulusnetworks.com:443
    Updated cli server api.netq.cumulusnetworks.com vrf default port 443. Please restart netqd (netq config restart cli)
    
    root@rhel7:~# sudo netq config restart cli
    Restarting NetQ CLI... Success!
    

    This example uses an optional keys file. Be sure to replace the keys filename and path with the full path and name of your keys file, and the datacenterwest premises name with your premises name if you are using this example on your server.

    root@rhel7:~# sudo netq config add cli server api.netq.cumulusnetworks.com cli-keys-file /home/netq/nq-cld-creds.yml premises datacenterwest
    Successfully logged into NetQ cloud at api.netq.cumulusnetworks.com:443
    Updated cli server api.netq.cumulusnetworks.com vrf default port 443. Please restart netqd (netq config restart cli)
    
    root@rhel7:~# sudo netq config restart cli
    Restarting NetQ CLI... Success!
    

Configure Advanced NetQ Agent Settings

A couple of additional options are available for configuring the NetQ Agent. If you are using VRF, you can configure the agent to communicate over a specific VRF. You can also configure the agent to use a particular port.

Configure the NetQ Agent to Use a VRF

While optional, Cumulus strongly recommends that you configure NetQ Agents to communicate with the NetQ Platform only via a VRF, including a management VRF. To do so, you need to specify the VRF name when configuring the NetQ Agent. For example, if the management VRF is configured and you want the agent to communicate with the NetQ Platform over it, configure the agent like this:

root@rhel7:~# sudo netq config add agent server 192.168.1.254 vrf mgmt
root@rhel7:~# sudo netq config restart agent

Configure the NetQ Agent to Communicate over a Specific Port

By default, NetQ uses port 31980 for communication between the NetQ Platform and NetQ Agents. If you want the NetQ Agent to communicate with the NetQ Platform via a different port, you need to specify the port number when configuring the NetQ Agent like this:

root@rhel7:~# sudo netq config add agent server 192.168.1.254 port 7379
root@rhel7:~# sudo netq config restart agent
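
If the agent does not appear to be reporting after a port change, one generic way to check whether it has established a connection to the NetQ Platform on the configured port is to inspect the TCP sessions on the device. This is not a NetQ command, and the port number shown is the example value used above.

# Look for an established TCP session to the NetQ Platform on port 7379.
root@rhel7:~# sudo ss -tnp | grep 7379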

Upgrade NetQ

This topic describes how to upgrade from your current NetQ 2.4.1-3.2.0 installation to the NetQ 3.2.1 release to take advantage of new capabilities and bug fixes (refer to the release notes).

You must upgrade your NetQ On-premises or Cloud Appliance(s) or Virtual Machines (VMs). While NetQ 2.x Agents are compatible with NetQ 3.x, upgrading NetQ Agents is always recommended. If you want access to new and updated commands, you can upgrade the CLI on your physical servers or VMs, and monitored switches and hosts as well.

To complete the upgrade for either an on-premises or a cloud deployment:

Upgrade NetQ Appliances and Virtual Machines

The first step in upgrading your NetQ 2.4.1 - 3.2.0 installation to NetQ 3.2.1 is to upgrade your NetQ appliance(s) or VM(s). This topic describes how to upgrade this for both on-premises and cloud deployments.

Prepare for Upgrade

Three important steps are required to prepare for upgrade of your NetQ Platform:

Optionally, you can choose to back up your NetQ Data before performing the upgrade.

To complete the preparation:

  1. For on-premises deployments only, optionally back up your NetQ data. Refer to Back Up and Restore NetQ.

  2. Download the relevant software.

    1. Go to the MyMellanox downloads page, and select NetQ from the Product list.

    2. Select 3.2 from the Version list, and then click 3.2.1 in the submenu.

    3. Select the relevant software from the HyperVisor/Platform list:

      If you are upgrading NetQ Platform software for a NetQ On-premises Appliance or VM, select Appliance to download the NetQ-3.2.1.tgz file. If you are upgrading NetQ Collector software for a NetQ Cloud Appliance or VM, select Appliance (Cloud) to download the NetQ-3.2.1-opta.tgz file.

    4. Scroll down and click Download on the on-premises or cloud NetQ Appliance image.

      You can ignore the note on the image card because, unlike during installation, you do not need to download the bootstrap file for an upgrade.

  3. Copy the file to the /mnt/installables/ directory on your appliance or VM.

  4. Update /etc/apt/sources.list.d/cumulus-netq.list to netq-3.2 as follows:

    cat /etc/apt/sources.list.d/cumulus-netq.list
    deb [arch=amd64] https://apps3.cumulusnetworks.com/repos/deb bionic netq-3.2
    
  5. Update the NetQ debian packages.

    cumulus@<hostname>:~$ sudo apt-get update
    Get:1 http://apps3.cumulusnetworks.com/repos/deb bionic InRelease [13.8 kB]
    Get:2 http://apps3.cumulusnetworks.com/repos/deb bionic/netq-3.2 amd64 Packages [758 B]
    Hit:3 http://archive.ubuntu.com/ubuntu bionic InRelease
    Get:4 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
    Get:5 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
    ...
    Get:24 http://archive.ubuntu.com/ubuntu bionic-backports/universe Translation-en [1900 B]
    Fetched 4651 kB in 3s (1605 kB/s)
    Reading package lists... Done
    
    cumulus@<hostname>:~$ sudo apt-get install -y netq-agent netq-apps
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    ...
    The following NEW packages will be installed:
    netq-agent netq-apps
    ...
    Fetched 39.8 MB in 3s (13.5 MB/s)
    ...
    Unpacking netq-agent (3.2.1-ub18.04u31~1603789872.6f62fad) ...
    ...
    Unpacking netq-apps (3.2.1-ub18.04u31~1603789872.6f62fad) ...
    Setting up netq-apps (3.2.1-ub18.04u31~1603789872.6f62fad) ...
    Setting up netq-agent (3.2.1-ub18.04u31~1603789872.6f62fad) ...
    Processing triggers for rsyslog (8.32.0-1ubuntu4) ...
    Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
    
  6. If you are upgrading NetQ as a VM in the cloud from version 3.1.0 or earlier, you must increase the root volume disk image size for proper operation of the lifecycle management feature.

    1. For a VMware-based VM, check the size of the existing disk in the VM to confirm it is 32 GB. In this example, the number of 1 MB blocks is 31583, or 32 GB.

      cumulus@netq-310-cloud:~$ df -hm /
      Filesystem     1M-blocks  Used Available Use% Mounted on
      /dev/sda1          31583  4771     26797  16% /
      
    2. Shut down the VM.

    3. After the VM is shut down (the Shut down button is grayed out), click Edit.
    4. In the Edit settings > Virtual Hardware > Hard disk field, change the 32 to 64 on the server hosting the VM.
    5. Click Save.

    6. Start the VM and log back in.

    7. From step 1 we know the name of the root disk is /dev/sda1. Use that to run the following commands on the partition.

      cumulus@netq-310-cloud:~$ sudo growpart /dev/sda 1
      CHANGED: partition=1 start=227328 old: size=66881503 end=67108831 new: size=133990367,end=134217695
      
      cumulus@netq-310-cloud:~$ sudo resize2fs /dev/sda1
      resize2fs 1.44.1 (24-Mar-2018)
      Filesystem at /dev/sda1 is mounted on /; on-line resizing required
      old_desc_blocks = 4, new_desc_blocks = 8
      The filesystem on /dev/sda1 is now 16748795 (4k) blocks long.
      
    8. Verify the disk is now configured with 64 GB. In this example, the number of 1 MB blocks is now 63341, or 64 GB.

      cumulus@netq-310-cloud:~$ df -hm /
      Filesystem     1M-blocks  Used Available Use% Mounted on
      /dev/sda1          63341  4772     58554   8% /
      
    1. For a KVM-based VM, check the size of the existing hard disk in the VM to confirm it is 32 GB. In this example, the number of 1 MB blocks is 31583, or 32 GB.

      cumulus@netq-310-cloud:~$ df -hm /
      Filesystem     1M-blocks  Used Available Use% Mounted on
      /dev/vda1          31583  1192     30375   4% /
      
    2. Shut down the VM.

    3. Check the size of the existing disk on the server hosting the VM to confirm it is 32 GB. In this example, the size is shown in the virtual size field.

      root@server:/var/lib/libvirt/images# qemu-img info netq-3.1.0-ubuntu-18.04-tscloud-qemu.qcow2
      image: netq-3.1.0-ubuntu-18.04-tscloud-qemu.qcow2
      file format: qcow2
      virtual size: 32G (34359738368 bytes)
      disk size: 1.3G
      cluster_size: 65536
      Format specific information:
          compat: 1.1
          lazy refcounts: false
          refcount bits: 16
          corrupt: false
      
    4. Add 32 GB to the image.

      root@server:/var/lib/libvirt/images# qemu-img resize netq-3.1.0-ubuntu-18.04-tscloud-qemu.qcow2 +32G
      Image resized.
      
    5. Verify the change.

      root@server:/var/lib/libvirt/images# qemu-img info netq-3.1.0-ubuntu-18.04-tscloud-qemu.qcow2
      image: netq-3.1.0-ubuntu-18.04-tscloud-qemu.qcow2
      file format: qcow2
      virtual size: 64G (68719476736 bytes)
      disk size: 1.3G
      cluster_size: 65536
      Format specific information:
          compat: 1.1
          lazy refcounts: false
          refcount bits: 16
          corrupt: false
      
    6. Start the VM and log back in.

    7. From step 1 we know the name of the root disk is /dev/vda1. Use that to run the following commands on the partition.

      cumulus@netq-310-cloud:~$ sudo growpart /dev/vda 1
      CHANGED: partition=1 start=227328 old: size=66881503 end=67108831 new: size=133990367,end=134217695
      
      cumulus@netq-310-cloud:~$ sudo resize2fs /dev/vda1
      resize2fs 1.44.1 (24-Mar-2018)
      Filesystem at /dev/vda1 is mounted on /; on-line resizing required
      old_desc_blocks = 4, new_desc_blocks = 8
      The filesystem on /dev/vda1 is now 16748795 (4k) blocks long.
      
    8. Verify the disk is now configured with 64 GB. In this example, the number of 1 MB blocks is now 63341, or 64 GB.

    cumulus@netq-310-cloud:~$ df -hm /
    Filesystem     1M-blocks  Used Available Use% Mounted on
    /dev/vda1          63341  1193     62132   2% /
    

You can now upgrade your appliance using the NetQ Admin UI, in the next section. Alternately, you can upgrade using the CLI here: Upgrade Your Platform Using the NetQ CLI.

Upgrade Your Platform Using the NetQ Admin UI

After completing the preparation steps, upgrading your NetQ On-premises or Cloud Appliance(s) or VMs is simple using the Admin UI.

To upgrade your NetQ software:

  1. Run the bootstrap CLI to upgrade the Admin UI application.
cumulus@<hostname>:~$ netq bootstrap master upgrade /mnt/installables/NetQ-3.2.1.tgz
2020-04-28 15:39:37.016710: master-node-installer: Extracting tarball /mnt/installables/NetQ-3.2.1.tgz
2020-04-28 15:44:48.188658: master-node-installer: Upgrading NetQ Admin container
2020-04-28 15:47:35.667579: master-node-installer: Removing old images
-----------------------------------------------
Successfully bootstrap-upgraded the master node
For a cloud deployment, run the same command with the cloud tarball instead:

netq bootstrap master upgrade /mnt/installables/NetQ-3.2.1-opta.tgz
  2. Open the Admin UI by entering http://<hostname-or-ipaddress>:8443 in your browser address field.

  3. Enter your NetQ credentials to enter the application.

    The default username is admin and the default password is admin.

    On-premises deployment (cloud deployment only has Node and Pod cards)

  4. Click Upgrade.

  5. Enter NetQ-3.2.1.tgz or NetQ-3.2.1-opta.tgz and click .

    The button is only visible after you enter your tar file information.

  6. Monitor the progress. Click to monitor each step in the jobs.

    The following example is for an on-premises upgrade. The jobs for a cloud upgrade are slightly different.

  7. When it completes, click to be returned to the Health dashboard.

Upgrade Your Platform Using the NetQ CLI

After completing the preparation steps, upgrading your NetQ On-premises/Cloud Appliance(s) or VMs is simple using the NetQ CLI.

To upgrade:

  1. Run the appropriate netq upgrade command.
netq upgrade bundle /mnt/installables/NetQ-3.2.1.tgz
netq upgrade bundle /mnt/installables/NetQ-3.2.1-opta.tgz
  2. After the upgrade completes, confirm the upgrade was successful.

    cumulus@<hostname>:~$ cat /etc/app-release
    BOOTSTRAP_VERSION=3.2.1
    APPLIANCE_MANIFEST_HASH=74ac3017d5
    APPLIANCE_VERSION=3.2.1
    

Upgrade NetQ Agents

Cumulus Networks strongly recommends that you upgrade your NetQ Agents when you install or upgrade to a new release. If you are using NetQ Agent 2.4.0 update 24 or earlier, you must upgrade to ensure proper operation.

Upgrade NetQ Agents on Cumulus Linux Switches

The following instructions are applicable to both Cumulus Linux 3.x and 4.x, and for both on-premises and cloud deployments.

To upgrade the NetQ Agent:

  1. Log in to your switch or host.

  2. Update and install the new NetQ debian package.

    sudo apt-get update
    sudo apt-get install -y netq-agent
    
    sudo yum update
    sudo yum install netq-agent
    
  3. Restart the NetQ Agent.

    netq config restart agent
    

Refer to Install and Configure the NetQ Agent on Cumulus Linux Switches to complete the upgrade.

Upgrade NetQ Agents on Ubuntu Servers

The following instructions are applicable to both NetQ Platform and NetQ Appliances running Ubuntu 16.04 or 18.04 in on-premises and cloud deployments.

To upgrade the NetQ Agent:

  1. Log in to your NetQ Platform or Appliance.

  2. Update your NetQ repository.

root@ubuntu:~# sudo apt-get update
  3. Install the agent software.
root@ubuntu:~# sudo apt-get install -y netq-agent
  4. Restart the NetQ Agent.
root@ubuntu:~# netq config restart agent

Refer to Install and Configure the NetQ Agent on Ubuntu Servers to complete the upgrade.

Upgrade NetQ Agents on RHEL or CentOS Servers

The following instructions are applicable to both on-premises and cloud deployments.

To upgrade the NetQ Agent:

  1. Log in to your NetQ Platform.

  2. Update your NetQ repository.

root@rhel7:~# sudo yum update
  3. Install the agent software.
root@rhel7:~# sudo yum install netq-agent
  4. Restart the NetQ Agent.
root@rhel7:~# netq config restart agent

Refer to Install and Configure the NetQ Agent on RHEL and CentOS Servers to complete the upgrade.

Verify NetQ Agent Version

You can verify the version of the agent software you have deployed as described in the following sections.

For Switches Running Cumulus Linux 3.x or 4.x

Run the following command to view the NetQ Agent version.

cumulus@switch:~$ dpkg-query -W -f '${Package}\t${Version}\n' netq-agent
You should see version 3.2.1 and update 30 or 31 in the results. For example:

If you see an older version, refer to Upgrade NetQ Agents on Cumulus Linux Switches.

For Servers Running Ubuntu 16.04 or 18.04

Run the following command to view the NetQ Agent version.

root@ubuntu:~# dpkg-query -W -f '${Package}\t${Version}\n' netq-agent
You should see version 3.2.1 and update 30 or 31 in the results. For example:

If you see an older version, refer to Upgrade NetQ Agents on Ubuntu Servers.

For Servers Running RHEL7 or CentOS

Run the following command to view the NetQ Agent version.

root@rhel7:~# rpm -q netq-agent
You should see version 3.2.1 and update 30 or 31 in the results. For example:

If you see an older version, refer to Upgrade NetQ Agents on RHEL or CentOS Servers.

Upgrade NetQ CLI

While it is not required to upgrade the NetQ CLI on your monitored switches and hosts when you upgrade to NetQ 3.2.1, doing so gives you access to new features and important bug fixes. Refer to the release notes for details.

To upgrade the NetQ CLI:

  1. Log in to your switch or host.

  2. Update and install the new NetQ debian package.

    sudo apt-get update
    sudo apt-get install -y netq-apps
    
    sudo yum update
    sudo yum install netq-apps
    
  3. Restart the CLI.

    netq config restart cli
    

To complete the upgrade, refer to the relevant configuration topic:

Upgrade NetQ Agents and CLI on Cumulus Linux Switches

The following instructions are applicable to both Cumulus Linux 3.x and 4.x, and for both on-premises and cloud deployments.

To upgrade the NetQ Agent and CLI on a switch or host:

  1. Log in to your switch or host.

  2. Update and install the new NetQ debian packages.

    sudo apt-get update
    sudo apt-get install -y netq-agent netq-apps
    
    sudo yum update
    sudo yum install netq-agent netq-apps
    
  3. Restart the NetQ Agent and CLI.

    netq config restart agent
    netq config restart cli
    

Refer to Install and Configure the NetQ Agent on Cumulus Linux Switches to complete the upgrade.

Upgrade NetQ Agents and CLI on Ubuntu Servers

The following instructions are applicable to both NetQ Platform and NetQ Appliances running Ubuntu 16.04 or 18.04 in on-premises and cloud deployments.

To upgrade the NetQ Agent and CLI:

  1. Log in to your NetQ Platform or Appliance.

  2. Update your NetQ repository.

root@ubuntu:~# sudo apt-get update
  3. Install the agent and CLI software.
root@ubuntu:~# sudo apt-get install -y netq-agent netq-apps
  4. Restart the NetQ Agent and CLI.
root@ubuntu:~# netq config restart agent
root@ubuntu:~# netq config restart cli

Refer to Install and Configure the NetQ Agent on Ubuntu Servers to complete the upgrade.

Upgrade NetQ Agents and CLI on RHEL or CentOS Servers

The following instructions are applicable to both on-premises and cloud deployments.

To upgrade the NetQ Agent and CLI:

  1. Log in to your NetQ Platform.

  2. Update your NetQ repository.

root@rhel7:~# sudo yum update
  3. Install the agent and CLI software.
root@rhel7:~# sudo yum install netq-agent netq-apps
  4. Restart the NetQ Agent and CLI.
root@rhel7:~# netq config restart agent
root@rhel7:~# netq config restart cli

Refer to Install and Configure the NetQ Agent on RHEL and CentOS Servers to complete the upgrade.

Back Up and Restore NetQ

It is recommended that you back up your NetQ data according to your company policy. Typically, this means backing up after key configuration changes and on a regular schedule.

These topics describe how to back up and restore your NetQ data for NetQ On-premises Appliances and VMs.

These procedures do not apply to your NetQ Cloud Appliance or VM. Data backup is handled automatically with the NetQ cloud service.

Back Up Your NetQ Data

NetQ data is stored in a Cassandra database. A backup is performed by running scripts provided with the software and located in the /usr/sbin directory. When a backup is performed, a single tar file is created. The file is stored on a local drive that you specify and is named netq_master_snapshot_<timestamp>.tar.gz. Currently, only one backup file is supported; it includes the entire set of data tables and is replaced each time a new backup is created.
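
If you want to see what a backup archive contains without restoring it, you can list the contents of the tar file. This is a generic check, not a NetQ command; the timestamped filename below is an example.

# List the first files stored in the backup archive (filename is an example).
cumulus@switch:~$ tar -tzf /opt/<backup-directory>/netq_master_snapshot_2019-06-04_07_24_50_UTC.tar.gz | head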

If the rollback option is selected during the lifecycle management upgrade process (the default behavior), a backup is created automatically.

To manually create a backup:

  1. If you are backing up data from NetQ 2.4.0 or earlier, or you upgraded from NetQ 2.4.0 to 2.4.1, obtain an updated backuprestore script. If you installed NetQ 2.4.1 as a fresh install, you can skip this step. Replace <version> in these commands with 2.4.1 or later release version.

    cumulus@switch:~$ tar -xvzf  /mnt/installables/NetQ-<version>.tgz  -C /tmp/ ./netq-deploy-<version>.tgz
    cumulus@switch:~$ tar -xvzf /tmp/netq-deploy-<version>.tgz   -C /usr/sbin/ --strip-components 1 --wildcards backuprestore/*.sh
    
  2. Run the backup script to create a backup file in /opt/<backup-directory>, being sure to replace <backup-directory> with the name of the directory you want to use for the backup file.

    cumulus@switch:~$ ./backuprestore.sh --backup --localdir /opt/<backup-directory>
    

    You can abbreviate the backup and localdir options of this command to -b and -l to reduce typing. If the backup directory identified does not already exist, the script creates the directory during the backup process.

    This is a sample of what you see as the script is running:

    [Fri 26 Jul 2019 02:35:35 PM UTC] - Received Inputs for backup ...
    [Fri 26 Jul 2019 02:35:36 PM UTC] - Able to find cassandra pod: cassandra-0
    [Fri 26 Jul 2019 02:35:36 PM UTC] - Continuing with the procedure ...
    [Fri 26 Jul 2019 02:35:36 PM UTC] - Removing the stale backup directory from cassandra pod...
    [Fri 26 Jul 2019 02:35:36 PM UTC] - Able to successfully cleanup up /opt/backuprestore from cassandra pod ...
    [Fri 26 Jul 2019 02:35:36 PM UTC] - Copying the backup script to cassandra pod ....
    /opt/backuprestore/createbackup.sh: line 1: cript: command not found
    [Fri 26 Jul 2019 02:35:48 PM UTC] - Able to exeute /opt/backuprestore/createbackup.sh script on cassandra pod
    [Fri 26 Jul 2019 02:35:48 PM UTC] - Creating local directory:/tmp/backuprestore/ ...  
    Directory /tmp/backuprestore/ already exists..cleaning up
    [Fri 26 Jul 2019 02:35:48 PM UTC] - Able to copy backup from cassandra pod  to local directory:/tmp/backuprestore/ ...
    [Fri 26 Jul 2019 02:35:48 PM UTC] - Validate the presence of backup file in directory:/tmp/backuprestore/
    [Fri 26 Jul 2019 02:35:48 PM UTC] - Able to find backup file:netq_master_snapshot_2019-07-26_14_35_37_UTC.tar.gz
    [Fri 26 Jul 2019 02:35:48 PM UTC] - Backup finished successfully!
    
  3. Verify the backup file has been created.

    cumulus@switch:~$ cd /opt/<backup-directory>
    cumulus@switch:~/opt/<backup-directory># ls
    netq_master_snapshot_2019-06-04_07_24_50_UTC.tar.gz
    

To create a scheduled backup, add ./backuprestore.sh --backup --localdir /opt/<backup-directory> to an existing cron job, or create a new one.
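
For example, a crontab entry along the following lines runs the backup nightly. The schedule and the /opt/netq-backups directory are illustrative; the script path assumes the /usr/sbin location described above.

# Illustrative crontab entry (edit with crontab -e): run the NetQ backup every day at 02:00.
0 2 * * * /usr/sbin/backuprestore.sh --backup --localdir /opt/netq-backups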

Restore Your NetQ Data

You can restore NetQ data using the backup file you created above in Back Up Your NetQ Data. You can restore your instance to the same NetQ Platform or NetQ Appliance or to a new platform or appliance. You do not need to stop the server where the backup file resides to perform the restoration, but logins to the NetQ UI fail during the restoration process. The restore option of the backup script copies the data from the backup file to the database, decompresses it, verifies the restoration, and starts all necessary services. You should not see any data loss as a result of a restore operation.

To restore NetQ on the same hardware where the backup file resides:

  1. If you are restoring data from NetQ 2.4.0 or earlier, or you upgraded from NetQ 2.4.0 to 2.4.1, obtain an updated backuprestore script. If you installed NetQ 2.4.1 as a fresh install, you can skip this step. Replace <version> in these commands with 2.4.1 or later release version.

    cumulus@switch:~$ tar -xvzf  /mnt/installables/NetQ-<version>.tgz  -C /tmp/ ./netq-deploy-<version>.tgz
    cumulus@switch:~$ tar -xvzf /tmp/netq-deploy-<version>.tgz   -C /usr/sbin/ --strip-components 1 --wildcards backuprestore/*.sh
    
  2. Run the restore script, being sure to replace <backup-directory> with the name of the directory where the backup file resides.

    cumulus@switch:~$ ./backuprestore.sh --restore --localdir /opt/<backup-directory>
    

    You can abbreviate the restore and localdir options of this command to -r and -l to reduce typing.

    This is a sample of what you see while the script is running:

    [Fri 26 Jul 2019 02:37:49 PM UTC] - Received Inputs for restore ...
    WARNING: Restore procedure wipes out the existing contents of Database.
      Once the Database is restored you loose the old data and cannot be recovered.
    "Do you like to continue with Database restore:[Y(yes)/N(no)]. (Default:N)"
    

    You must answer the above question to continue the restoration. After entering Y or yes, the output continues as follows:

    [Fri 26 Jul 2019 02:37:50 PM UTC] - Able to find cassandra pod: cassandra-0
    [Fri 26 Jul 2019 02:37:50 PM UTC] - Continuing with the procedure ...
    [Fri 26 Jul 2019 02:37:50 PM UTC] - Backup local directory:/tmp/backuprestore/ exists....
    [Fri 26 Jul 2019 02:37:50 PM UTC] - Removing any stale restore directories ...
    Copying the file for restore to cassandra pod ....
    [Fri 26 Jul 2019 02:37:50 PM UTC] - Able to copy the local directory contents to cassandra pod in /tmp/backuprestore/.
    [Fri 26 Jul 2019 02:37:50 PM UTC] - copying the script to cassandra pod in dir:/tmp/backuprestore/....
    Executing the Script for restoring the backup ...
    /tmp/backuprestore//createbackup.sh: line 1: cript: command not found
    [Fri 26 Jul 2019 02:40:12 PM UTC] - Able to exeute /tmp/backuprestore//createbackup.sh script on cassandra pod
    [Fri 26 Jul 2019 02:40:12 PM UTC] - Restore finished successfully!
    

To restore NetQ on new hardware:

  1. Copy the backup file from /opt/<backup-directory> on the older hardware to the backup directory on the new hardware, as shown in the example after these steps.

  2. Run the restore script on the new hardware, being sure to replace the backup-directory option with the name of the directory where the backup file resides.

    cumulus@switch:~$ ./backuprestore.sh --restore --localdir /opt/<backup-directory>
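
    As referenced in step 1, you can copy the backup file with scp, for example. The hostnames and the snapshot filename below are illustrative.

    # Copy the backup archive from the old appliance to the new one (values are examples).
    cumulus@new-netq:~$ scp cumulus@old-netq:/opt/<backup-directory>/netq_master_snapshot_2019-06-04_07_24_50_UTC.tar.gz /opt/<backup-directory>/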
    

Configure Integrations

After you have completed the installation of Cumulus NetQ, you may want to configure some of the additional capabilities that NetQ offers or integrate it with third-party software or hardware.

This topic describes how to:

Integrate NetQ with Your LDAP Server

With this release and an administrator role, you are able to integrate the NetQ role-based access control (RBAC) with your lightweight directory access protocol (LDAP) server in on-premises deployments. NetQ maintains control over role-based permissions for the NetQ application. Currently there are two roles, admin and user. With the integration, user authentication is handled through LDAP and your directory service, such as Microsoft Active Directory, Kerberos, OpenLDAP, and Red Hat Directory Service. A copy of each user from LDAP is stored in the local NetQ database.

Integrating with an LDAP server does not prevent you from configuring local users (stored and managed in the NetQ database) as well.

Read Get Started to become familiar with LDAP configuration parameters, or skip to Create an LDAP Configuration if you are already an LDAP expert.

Get Started

LDAP integration requires information about how to connect to your LDAP server, the type of authentication you plan to use, bind credentials, and, optionally, search attributes.

Provide Your LDAP Server Information

To connect to your LDAP server, you need the URI and bind credentials. The URI identifies the location of the LDAP server. It comprises an FQDN (fully qualified domain name) or IP address and the port of the LDAP server where the LDAP client can connect. For example: myldap.mycompany.com or 192.168.10.2. Typically port 389 is used for connections over TCP or UDP. In production environments, a secure connection with SSL can be deployed; in this case, the port used is typically 636. Setting the Enable SSL toggle automatically sets the server port to 636.

Specify Your Authentication Method

Two methods of user authentication are available: anonymous and basic.

If you are unfamiliar with the configuration of your LDAP server, contact your administrator to ensure you select the appropriate authentication method and credentials.

Define User Attributes

Two attributes are required to define a user entry in a directory:

Optionally, you can specify the first name, last name, and email address of the user.

Set Search Attributes

While optional, specifying search scope indicates where to start and how deep a given user can search within the directory. The data to search for is specified in the search query.

Search scope options include:

A typical search query for users would be {userIdAttribute}={userId}.
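
Before entering these values in NetQ, it can help to confirm that the server, bind credentials, and search filter behave as expected. The following ldapsearch invocation is a sketch, not a NetQ command; it assumes the OpenLDAP client tools are installed, and the host, bind DN, base DN, and filter are illustrative values.

# Test a simple bind and user search against the LDAP server (values are illustrative).
ldapsearch -x -H ldap://myldap.mycompany.com:389 \
    -D "uid=admin,ou=netops,dc=mycompany,dc=com" -W \
    -b "dc=mycompany,dc=com" "(uid=jsmith)"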

Now that you are familiar with the various LDAP configuration parameters, you can configure the integration of your LDAP server with NetQ using the instructions in the next section.

Create an LDAP Configuration

One LDAP server can be configured per bind DN (distinguished name). Once LDAP is configured, you can validate the connectivity (and configuration) and save the configuration.

To create an LDAP configuration:

  1. Click , then select Management under Admin.

  2. Locate the LDAP Server Info card, and click Configure LDAP.

  3. Fill out the LDAP Server Configuration form according to your particular configuration. Refer to Overview for details about the various parameters.

    Note: Items with an asterisk (*) are required. All others are optional.

  4. Click Save to complete the configuration, or click Cancel to discard the configuration.

The LDAP configuration cannot be changed once it has been created. If you need to change the configuration, you must delete the current LDAP configuration and create a new one. Note that if you change the LDAP server in this way, all users created against the original LDAP server remain in the NetQ database and continue to be visible, but are no longer viable. You must manually delete those users if you do not want to see them.

Example LDAP Configurations

A variety of example configurations are provided here. Scenarios 1-3 are based on using an OpenLDAP or similar authentication service. Scenario 4 is based on using the Active Directory service for authentication.

Scenario 1: Base Configuration

In this scenario, we are configuring the LDAP server with anonymous authentication, a User ID based on an email address, and a search scope of base.

ParameterValue
Host Server URLldap1.mycompany.com
Host Server Port389
AuthenticationAnonymous
Base DNdc=mycompany,dc=com
User IDemail
Search ScopeBase
Search Query{userIdAttribute}={userId}

Scenario 2: Basic Authentication and Subset of Users

In this scenario, we are configuring the LDAP server with basic authentication, for access only by the persons in the network operators group, and a limited search scope.

ParameterValue
Host Server URLldap1.mycompany.com
Host Server Port389
AuthenticationBasic
Admin Bind DNuid=admin,ou=netops,dc=mycompany,dc=com
Admin Bind Passwordnqldap!
Base DNdc=mycompany,dc=com
User IDUID
Search ScopeOne Level
Search Query{userIdAttribute}={userId}

Scenario 3: Scenario 2 with Widest Search Capability

In this scenario, we are configuring the LDAP server with basic authentication, for access only by the persons in the network administrators group, and an unlimited search scope.

ParameterValue
Host Server URL192.168.10.2
Host Server Port389
AuthenticationBasic
Admin Bind DNuid=admin,ou=netadmin,dc=mycompany,dc=com
Admin Bind Password1dap*netq
Base DNdc=mycompany, dc=net
User IDUID
Search ScopeSubtree
Search Query{userIdAttribute}={userId}

Scenario 4: Scenario 3 with Active Directory Service

In this scenario, we are configuring the LDAP server with basic authentication, for access only by the persons in the given Active Directory group, and an unlimited search scope.

ParameterValue
Host Server URL192.168.10.2
Host Server Port389
AuthenticationBasic
Admin Bind DNcn=netq,ou=45,dc=mycompany,dc=com
Admin Bind Passwordnq&4mAd!
Base DNdc=mycompany, dc=net
User IDsAMAccountName
Search ScopeSubtree
Search Query{userIdAttribute}={userId}

Add LDAP Users to NetQ

  1. Click , then select Management under Admin.

  2. Locate the User Accounts card, and click Manage.

  3. On the User Accounts tab, click Add User.

  4. Select LDAP User.

  5. Enter the user’s ID.

  6. Enter your administrator password.

  7. Click Search.

  8. If the user is found, the email address, first and last name fields are automatically filled in on the Add New User form. If searching is not enabled on the LDAP server, you must enter the information manually.

    If the fields are not automatically filled in, and searching is enabled on the LDAP server, you might require changes to the mapping file.

  9. Select the NetQ user role for this user, admin or user, in the User Type dropdown.

  10. Enter your admin password, and click Save, or click Cancel to discard the user account.

    LDAP user passwords are not stored in the NetQ database and are always authenticated against LDAP.

  11. Repeat these steps to add additional LDAP users.

Remove LDAP Users from NetQ

You can remove LDAP users in the same manner as local users.

  1. Click , then select Management under Admin.

  2. Locate the User Accounts card, and click Manage.

  3. Select the user or users you want to remove.

  4. Click in the Edit menu.

If an LDAP user is deleted in LDAP it is not automatically deleted from NetQ; however, the login credentials for these LDAP users stop working immediately.

Integrate NetQ with Grafana

Switches collect statistics about the performance of their interfaces. The NetQ Agent on each switch collects these statistics every 15 seconds and then sends them to your NetQ Appliance or Virtual Machine.

NetQ collects statistics for physical interfaces; it does not collect statistics for virtual interfaces, such as bonds, bridges, and VXLANs. NetQ collects these statistics from two data sources: Net-Q and Net-Q-Ethtool.

Net-Q displays:

Net-Q-Ethtool displays:

You can use Grafana version 6.x, an open source analytics and monitoring tool, to view these statistics. The fastest way to achieve this is by installing Grafana on an application server or locally per user, and then installing the NetQ plug-in containing the prepared NetQ dashboard.

If you do not have Grafana installed already, refer to grafana.com for instructions on installing and configuring the Grafana tool.

Install NetQ Plug-in for Grafana

Use the Grafana CLI to install the NetQ plug-in. For more detail about this command, refer to the Grafana CLI documentation.

grafana-cli --pluginUrl https://netq-grafana-dsrc.s3-us-west-2.amazonaws.com/dist.zip plugins install netq-dashboard
installing netq-dashboard @ 
from: https://netq-grafana-dsrc.s3-us-west-2.amazonaws.com/dist.zip
into: /usr/local/var/lib/grafana/plugins

✔ Installed netq-dashboard successfully

Restart grafana after installing plugins . <service grafana-server restart>

Set Up the Pre-configured NetQ Dashboard

The quickest way to view the interface statistics for your Cumulus Linux network is to make use of the pre-configured dashboard installed with the plug-in. Once you are familiar with that dashboard, you can create new dashboards or add new panels to the NetQ dashboard.

  1. Open the Grafana user interface.

  2. Log in using your application credentials.

    The Home Dashboard appears.

  3. Click Add data source or > Data Sources.

  4. Enter Net-Q or Net-Q-Ethtool in the search box. Alternately, scroll down to the Other category, and select one of these sources from there.

  5. Enter Net-Q or Net-Q-Ethtool into the Name field.

  6. Enter the URL used to access the database:

    • Cloud: api.netq.cumulusnetworks.com
    • On-premises: <hostname-or-ipaddr-of-netq-appl-or-vm>/api
    • Cumulus in the Cloud (CITC): air.netq.cumulusnetworks.com
  7. Select which statistics you want to view from the Module dropdown; either procdevstats or ethtool.

  8. Enter your credentials (the ones used to login).

  9. For NetQ cloud deployments only, if you have more than one premises configured, you can select the premises you want to view, as follows:

    • If you leave the Premises field blank, the first premises name is selected by default

    • If you enter a premises name, that premises is selected for viewing

      Note: If multiple premises are configured with the same name, then the first premises of that name is selected for viewing

  10. Click Save & Test.

  11. Go to analyzing your data.

Create a Custom Dashboard

You can create a dashboard with only the statistics of interest to you.

To create your own dashboard:

  1. Click to open a blank dashboard.

  2. Click (Dashboard Settings) at the top of the dashboard.

  3. Click Variables.

  4. Enter hostname into the Name field.

  5. Enter Hostname into the Label field.

  6. Select Net-Q or Net-Q-Ethtool from the Data source list.

  7. Enter hostname into the Query field.

  8. Click Add.

    You should see a preview at the bottom of the hostname values.

  9. Click to return to the new dashboard.

  10. Click Add Query.

  11. Select Net-Q or Net-Q-Ethtool from the Query source list.

  12. Select the interface statistic you want to view from the Metric list.

  13. Click the General icon.

  14. Select hostname from the Repeat list.

  15. Set any other parameters around how to display the data.

  16. Return to the dashboard.

  17. Add additional panels with other metrics to complete your dashboard.

Analyze the Data

Once you have your dashboard configured, you can start analyzing the data:

  1. Select the hostname from the variable list at the top left of the charts to see the statistics for that switch or host.

  2. Review the statistics, looking for peaks and valleys, unusual patterns, and so forth.

  3. Explore the data more by modifying the data view in one of several ways using the dashboard tool set:

    • Select a different time period for the data by clicking the forward or back arrows. The default time range is dependent on the width of your browser window.
    • Zoom in on the dashboard by clicking the magnifying glass.
    • Manually refresh the dashboard data, or set an automatic refresh rate for the dashboard from the down arrow.
    • Add a new variable by clicking the cog wheel, then selecting Variables
    • Add additional panels
    • Click any chart title to edit or remove it from the dashboard
    • Rename the dashboard by clicking the cog wheel and entering the new name

Uninstall NetQ

You can remove the NetQ software from your system server and switches when necessary.

Remove the NetQ Agent and CLI from a Cumulus Linux Switch or Ubuntu Host

Use the apt-get purge command to remove the NetQ agent or CLI package from a Cumulus Linux switch or an Ubuntu host.

cumulus@switch:~$ sudo apt-get update
cumulus@switch:~$ sudo apt-get purge netq-agent netq-apps
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
  netq-agent* netq-apps*
0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded.
After this operation, 310 MB disk space will be freed.
Do you want to continue? [Y/n] Y
Creating pre-apt snapshot... 2 done.
(Reading database ... 42026 files and directories currently installed.)
Removing netq-agent (3.0.0-cl3u27~1587646213.c5bc079) ...
/usr/sbin/policy-rc.d returned 101, not running 'stop netq-agent.service'
Purging configuration files for netq-agent (3.0.0-cl3u27~1587646213.c5bc079) ...
dpkg: warning: while removing netq-agent, directory '/etc/netq/config.d' not empty so not removed
Removing netq-apps (3.0.0-cl3u27~1587646213.c5bc079) ...
/usr/sbin/policy-rc.d returned 101, not running 'stop netqd.service'
Purging configuration files for netq-apps (3.0.0-cl3u27~1587646213.c5bc079) ...
dpkg: warning: while removing netq-apps, directory '/etc/netq' not empty so not removed
Processing triggers for man-db (2.7.0.2-5) ...
grep: extra.services.enabled: No such file or directory
Creating post-apt snapshot... 3 done.

If you only want to remove the agent or the CLI, but not both, specify just the relevant package in the apt-get purge command.

To verify the packages have been removed from the switch, run:

cumulus@switch:~$ dpkg-query -l netq-agent
dpkg-query: no packages found matching netq-agent
cumulus@switch:~$ dpkg-query -l netq-apps
dpkg-query: no packages found matching netq-apps

Remove the NetQ Agent and CLI from a RHEL7 or CentOS Host

Use the yum remove command to remove the NetQ agent or CLI package from a RHEL7 or CentOS host.

root@rhel7:~# sudo yum remove netq-agent netq-apps
Loaded plugins: fastestmirror
Resolving Dependencies
--> Running transaction check
---> Package netq-agent.x86_64 0:3.1.0-rh7u28~1594097110.8f00ba1 will be erased
--> Processing Dependency: netq-agent >= 3.2.0 for package: cumulus-netq-3.1.0-rh7u28~1594097110.8f00ba1.x86_64
--> Running transaction check
---> Package cumulus-netq.x86_64 0:3.1.0-rh7u28~1594097110.8f00ba1 will be erased
--> Finished Dependency Resolution

Dependencies Resolved

...

Removed:
  netq-agent.x86_64 0:3.1.0-rh7u28~1594097110.8f00ba1

Dependency Removed:
  cumulus-netq.x86_64 0:3.1.0-rh7u28~1594097110.8f00ba1

Complete!

If you only want to remove the agent or the CLI, but not both, specify just the relevant package in the yum remove command.

To verify the packages have been removed from the switch, run:

root@rhel7:~# rpm -q netq-agent
package netq-agent is not installed
root@rhel7:~# rpm -q netq-apps
package netq-apps is not installed

Uninstall NetQ from the System Server

First remove the data collected to free up used disk space. Then remove the software.

  1. Log on to the NetQ system server.

  2. Remove the data.

netq bootstrap reset purge-db
  3. Remove the software.

Use the apt-get purge command.

cumulus@switch:~$ sudo apt-get update
cumulus@switch:~$ sudo apt-get purge netq-agent netq-apps
  4. Verify the packages have been removed.
cumulus@switch:~$ dpkg-query -l netq-agent
dpkg-query: no packages found matching netq-agent
cumulus@switch:~$ dpkg-query -l netq-apps
dpkg-query: no packages found matching netq-apps
  5. Delete the Virtual Machine according to the usual VMware or KVM practice.

For VMware, delete the virtual machine from the host computer using one of the following methods:

  • Right-click the name of the virtual machine in the Favorites list, then select Delete from Disk
  • Select the virtual machine and choose VM > Delete from disk

For KVM, delete the virtual machine from the host computer using one of the following methods:

  • Run virsh undefine <vm-domain> --remove-all-storage
  • Run virsh undefine <vm-domain> --wipe-storage

Manage Configurations

A network has numerous configurations that must be managed. From initial configuration and provisioning of devices to events and notifications, administrators and operators are responsible for setting up and managing the configuration of the network. The topics in this section provide instructions for managing the NetQ UI, physical and software inventory, events and notifications, and for provisioning your devices and network.

Refer to Monitor Operations and Validate Operations for tasks related to monitoring and validating devices and network operations.

Manage the NetQ UI

As an administrator, you can manage access to and various application-wide settings for the Cumulus NetQ UI from a single location.

Individual users have the ability to set preferences specific to their workspaces. This information is covered separately. Refer to Set User Preferences.

NetQ Management Workbench

The NetQ Management workbench is accessed from the main menu. For the user(s) responsible for maintaining the application, this is a good place to start each day.

To open the workbench, click , and select Management under the Admin column.

For cloud deployments, the LDAP Server Info card is not available. Refer to Integrate NetQ with Your LDAP server for details.

Manage User Accounts

From the NetQ Management workbench, you can view the number of users with accounts in the system. As an administrator, you can also add, modify, and delete user accounts using the User Accounts card.

Add New User Account

For each user that monitors at least one aspect of your data center network, a user account is needed. Adding a local user is described here. Refer to Integrate NetQ with Your LDAP server for instructions for adding LDAP users.

To add a new user account:

  1. Click Manage on the User Accounts card to open the User Accounts tab.

  2. Click Add User.

  3. Enter the user’s email address, along with their first and last name.

    Be especially careful entering the email address as you cannot change it once you save the account. If you save a mistyped email address, you must delete the account and create a new one.

  4. Select the user type: Admin or User.

  5. Enter your password in the Admin Password field (only users with administrative permissions can add users).

  6. Create a password for the user.

    1. Enter a password for the user.
    2. Re-enter the user password. If you do not enter a matching password, it will be underlined in red.
  7. Click Save to create the user account, or Cancel to discard the user account.

    By default the User Accounts table is sorted by Role.

  8. Repeat these steps to add all of your users.

Edit a User Name

If a user’s first or last name was incorrectly entered, you can fix them easily.

To change a user name:

  1. Click Manage on the User Accounts card to open the User Accounts tab.

  2. Click the checkbox next to the account you want to edit.

  3. Click above the account list.

  4. Modify the first and/or last name as needed.

  5. Enter your admin password.

  6. Click Save to commit the changes or Cancel to discard them.

Change a User’s Password

If a user forgets their password, or for security reasons, you can change the password for a particular user account.

To change a password:

  1. Click Manage on the User Accounts card to open the User Accounts tab.

  2. Click the checkbox next to the account you want to edit.

  3. Click above the account list.

  4. Click Reset Password.

  5. Enter your admin password.

  6. Enter a new password for the user.

  7. Re-enter the user password. Tip: If the password you enter does not match, Save is gray (not activated).

  8. Click Save to commit the change, or Cancel to discard the change.

Change a User’s Access Permissions

If a particular user has only standard user permissions and they need administrator permissions to perform their job (or the opposite, they have administrator permissions, but only need user permissions), you can modify their access rights.

To change access permissions:

  1. Click Manage on the User Accounts card to open the User Accounts tab.

  2. Click the checkbox next to the account you want to edit.

  3. Click above the account list.

  4. Select the appropriate user type from the dropdown list.

  5. Enter your admin password.

  6. Click Save to commit the change, or Cancel to discard the change.

Correct a Mistyped User ID (Email Address)

You cannot edit a user’s email address, because this is the identifier the system uses for authentication. If you need to change an email address, you must create a new account for this user; refer to Add New User Account. Then delete the incorrect user account: select the user account, and click .

Export a List of User Accounts

You can export user account information at any time using the User Accounts tab.

To export information for one or more user accounts:

  1. Click Manage on the User Accounts card to open the User Accounts tab.

  2. Select one or more accounts that you want to export by clicking the checkbox next to them. Alternately select all accounts by clicking .

  3. Click to export the selected user accounts.

Delete a User Account

NetQ application administrators should remove user accounts associated with users that are no longer using the application.

To delete one or more user accounts:

  1. Click Manage on the User Accounts card to open the User Accounts tab.

  2. Select one or more accounts that you want to remove by clicking the checkbox next to them.

  3. Click to remove the accounts.

Manage User Login Policies

NetQ application administrators can configure a session expiration time and the number of times users can refresh the application before they must log in again.

To configure these login policies:

  1. Click (main menu), and select Management under the Admin column.

  2. Locate the Login Management card.

  3. Click Manage.

  4. Select how long a user may be logged in before logging in again; 30 minutes, 1, 3, 5, or 8 hours.

    Default for on-premises deployments is 6 hours. Default for cloud deployments is 30 minutes.

  5. Indicate the number of times (between 1 and 100) the application can be refreshed before the user must log in again. Default is unspecified.

  6. Enter your admin password.

  7. Click Update to save the changes, or click Cancel to discard them.

    The Login Management card shows the configuration.

Monitor User Activity

NetQ application administrators can audit user activity in the NetQ UI using the Activity Log or in the CLI by checking syslog.

To view the log, click (main menu), then click Activity Log under the Admin column.

Click to filter the log by username, action, resource, and time period. Click to export the log one page at a time.

NetQ maintains an audit trail of user activity in syslog. Information logged includes when a user logs in or out of NetQ as well as when the user changes a configuration and what that change is.

cumulus@switch:~$ sudo tail /var/log/syslog
...

2020-10-16T11:43:04.976557-07:00 switch sshd[14568]: Accepted password for cumulus from 192.168.200.250 port 56930 ssh2
2020-10-16T11:43:04.977569-07:00 switch sshd[14568]: pam_unix(sshd:session): session opened for user cumulus by (uid=0)
...
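
To narrow the audit trail to NetQ-related entries, you can filter the log. This is a generic example rather than a NetQ command.

# Show the most recent syslog lines that mention netq (case-insensitive).
cumulus@switch:~$ sudo grep -i netq /var/log/syslog | tail -n 20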

Manage Scheduled Traces

From the NetQ Management workbench, you can view the number of traces scheduled to run in the system. A set of default traces are provided with the NetQ GUI. As an administrator, you can run one or more scheduled traces, add new scheduled traces, and edit or delete existing traces.

Add a Scheduled Trace

You can create a scheduled trace to provide regular status about a particularly important connection between a pair of devices in your network or for temporary troubleshooting.

To add a trace:

  1. Click Manage on the Scheduled Traces card to open the Scheduled Traces tab.

  2. Click Add Trace to open the large New Trace Request card.

  3. Enter source and destination addresses.

    For layer 2 traces, the source must be a hostname and the destination must be a MAC address. For layer 3 traces, the source can be a hostname or IP address, and the destination must be an IP address.

  4. Specify a VLAN for a layer 2 trace or (optionally) a VRF for a layer 3 trace.

  5. Set the schedule for the trace, by selecting how often to run the trace and when to start it the first time.

  6. Click Save As New to add the trace. You are prompted to enter a name for the trace in the Name field.

    If you want to run the new trace right away for a baseline, select the trace you just added from the dropdown list, and click Run Now.

Delete a Scheduled Trace

If you do not want to run a given scheduled trace any longer, you can remove it.

To delete a scheduled trace:

  1. Click Manage on the Scheduled Trace card to open the Scheduled Traces tab.

  2. Select at least one trace by clicking on the checkbox next to the trace.

  3. Click .

Export a Scheduled Trace

You can export a scheduled trace configuration at any time using the Scheduled Traces tab.

To export one or more scheduled trace configurations:

  1. Click Manage on the Scheduled Trace card to open the Scheduled Traces tab.

  2. Select one or more traces by clicking on the checkbox next to the trace. Alternately, click to select all traces.

  3. Click to export the selected traces.

Manage Scheduled Validations

From the NetQ Management workbench, you can view the total number of validations scheduled to run in the system. A set of default scheduled validations are provided and pre-configured with the NetQ UI. These are not included in the total count. As an administrator, you can view and export the configurations for all scheduled validations, or add a new validation.

View Scheduled Validation Configurations

You can view the configuration of a scheduled validation at any time. This can be useful when you are trying to determine if the validation request needs to be modified to produce a slightly different set of results (editing or cloning) or if it would be best to create a new one.

To view the configurations:

  1. Click Manage on the Scheduled Validations card to open the Scheduled Validations tab.

  2. Click in the top right to return to your NetQ Management cards.

Add a Scheduled Validation

You can add a scheduled validation at any time using the Scheduled Validations tab.

To add a scheduled validation:

  1. Click Manage on the Scheduled Validations card to open the Scheduled Validations tab.

  2. Click Add Validation to open the large Validation Request card.

  3. Configure the request. Refer to Validate Network Protocol and Service Operations for details.

Delete Scheduled Validations

You can remove a scheduled validation that you created (one of the 15 allowed) at any time. You cannot remove the default scheduled validations included with NetQ.

To remove a scheduled validation:

  1. Click Manage on the Scheduled Validations card to open the Scheduled Validations tab.

  2. Select one or more validations that you want to delete.

  3. Click above the validations list.

Export Scheduled Validation Configurations

You can export one or more scheduled validation configurations at any time using the Scheduled Validations tab.

To export a scheduled validation:

  1. Click Manage on the Scheduled Validations card to open the Scheduled Validations tab.

  2. Select one or more validations by clicking the checkbox next to the validation. Alternately, click to select all validations.

  3. Click to export selected validations.

Manage Threshold Crossing Rules

NetQ supports a set of events that are triggered by crossing a user-defined threshold. These events allow detection and prevention of network failures for selected interface, utilization, sensor, forwarding, and ACL events.

A notification configuration must contain one rule. Each rule must contain a scope and a threshold.
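
For reference, a threshold crossing rule can also be created from the NetQ CLI with the netq add tca command. The example below is a sketch: the event ID and threshold correspond to the supported events listed in the next section, while the scope pattern and the channel name (which must already exist) are illustrative values.

# Illustrative TCA rule: raise a critical event when CPU utilization on any device exceeds 95%.
netq add tca event_id TCA_CPU_UTILIZATION_UPPER scope '*' severity critical threshold 95 channel syslog-netq-events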

Supported Events

The following events are supported:

CategoryEvent IDDescription
Interface StatisticsTCA_RXBROADCAST_UPPERrx_broadcast bytes per second on a given switch or host is greater than maximum threshold
Interface StatisticsTCA_RXBYTES_UPPERrx_bytes per second on a given switch or host is greater than maximum threshold
Interface StatisticsTCA_RXMULTICAST_UPPERrx_multicast per second on a given switch or host is greater than maximum threshold
Interface StatisticsTCA_TXBROADCAST_UPPERtx_broadcast bytes per second on a given switch or host is greater than maximum threshold
Interface StatisticsTCA_TXBYTES_UPPERtx_bytes per second on a given switch or host is greater than maximum threshold
Interface StatisticsTCA_TXMULTICAST_UPPERtx_multicast bytes per second on a given switch or host is greater than maximum threshold
Resource UtilizationTCA_CPU_UTILIZATION_UPPERCPU utilization (%) on a given switch or host is greater than maximum threshold
Resource UtilizationTCA_DISK_UTILIZATION_UPPERDisk utilization (%) on a given switch or host is greater than maximum threshold
Resource UtilizationTCA_MEMORY_UTILIZATION_UPPERMemory utilization (%) on a given switch or host is greater than maximum threshold
SensorsTCA_SENSOR_FAN_UPPERSwitch sensor reported fan speed on a given switch or host is greater than maximum threshold
SensorsTCA_SENSOR_POWER_UPPERSwitch sensor reported power (Watts) on a given switch or host is greater than maximum threshold
SensorsTCA_SENSOR_TEMPERATURE_UPPERSwitch sensor reported temperature (°C) on a given switch or host is greater than maximum threshold
SensorsTCA_SENSOR_VOLTAGE_UPPERSwitch sensor reported voltage (Volts) on a given switch or host is greater than maximum threshold
Forwarding ResourcesTCA_TCAM_TOTAL_ROUTE_ENTRIES_UPPERNumber of routes on a given switch or host is greater than maximum threshold
Forwarding ResourcesTCA_TCAM_TOTAL_MCAST_ROUTES_UPPERNumber of multicast routes on a given switch or host is greater than maximum threshold
Forwarding ResourcesTCA_TCAM_MAC_ENTRIES_UPPERNumber of MAC addresses on a given switch or host is greater than maximum threshold
Forwarding ResourcesTCA_TCAM_IPV4_ROUTE_UPPERNumber of IPv4 routes on a given switch or host is greater than maximum threshold
Forwarding ResourcesTCA_TCAM_IPV4_HOST_UPPERNumber of IPv4 hosts on a given switch or host is greater than maximum threshold
Forwarding ResourcesTCA_TCAM_IPV6_ROUTE_UPPERNumber of IPv6 routes on a given switch or host is greater than maximum threshold
Forwarding ResourcesTCA_TCAM_IPV6_HOST_UPPERNumber of IPv6 hosts on a given switch or host is greater than maximum threshold
Forwarding ResourcesTCA_TCAM_ECMP_NEXTHOPS_UPPERNumber of equal cost multi-path (ECMP) next hop entries on a given switch or host is greater than maximum threshold
ACL ResourcesTCA_TCAM_IN_ACL_V4_FILTER_UPPERNumber of ingress ACL filters for IPv4 addresses on a given switch or host is greater than maximum threshold
ACL ResourcesTCA_TCAM_EG_ACL_V4_FILTER_UPPERNumber of egress ACL filters for IPv4 addresses on a given switch or host is greater than maximum threshold
ACL ResourcesTCA_TCAM_IN_ACL_V4_MANGLE_UPPERNumber of ingress ACL mangles for IPv4 addresses on a given switch or host is greater than maximum threshold
ACL ResourcesTCA_TCAM_EG_ACL_V4_MANGLE_UPPERNumber of egress ACL mangles for IPv4 addresses on a given switch or host is greater than maximum threshold
ACL ResourcesTCA_TCAM_IN_ACL_V6_FILTER_UPPERNumber of ingress ACL filters for IPv6 addresses on a given switch or host is greater than maximum threshold
ACL ResourcesTCA_TCAM_EG_ACL_V6_FILTER_UPPERNumber of egress ACL filters for IPv6 addresses on a given switch or host is greater than maximum threshold
ACL ResourcesTCA_TCAM_IN_ACL_V6_MANGLE_UPPERNumber of ingress ACL mangles for IPv6 addresses on a given switch or host is greater than maximum threshold
ACL ResourcesTCA_TCAM_EG_ACL_V6_MANGLE_UPPERNumber of egress ACL mangles for IPv6 addresses on a given switch or host is greater than maximum threshold
ACL ResourcesTCA_TCAM_IN_ACL_8021x_FILTER_UPPERNumber of ingress ACL 802.1 filters on a given switch or host is greater than maximum threshold
ACL ResourcesTCA_TCAM_ACL_L4_PORT_CHECKERS_UPPERNumber of ACL port range checkers on a given switch or host is greater than maximum threshold
ACL ResourcesTCA_TCAM_ACL_REGIONS_UPPERNumber of ACL regions on a given switch or host is greater than maximum threshold
ACL ResourcesTCA_TCAM_IN_ACL_MIRROR_UPPERNumber of ingress ACL mirrors on a given switch or host is greater than maximum threshold
ACL ResourcesTCA_TCAM_ACL_18B_RULES_UPPERNumber of ACL 18B rules on a given switch or host is greater than maximum threshold
ACL ResourcesTCA_TCAM_ACL_32B_RULES_UPPERNumber of ACL 32B rules on a given switch or host is greater than maximum threshold
ACL ResourcesTCA_TCAM_ACL_54B_RULES_UPPERNumber of ACL 54B rules on a given switch or host is greater than maximum threshold
ACL ResourcesTCA_TCAM_IN_PBR_V4_FILTER_UPPERNumber of ingress policy-based routing (PBR) filters for IPv4 addresses on a given switch or host is greater than maximum threshold
ACL ResourcesTCA_TCAM_IN_PBR_V6_FILTER_UPPERNumber of ingress policy-based routing (PBR) filters for IPv6 addresses on a given switch or host is greater than maximum threshold

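The evaluation behind these upper-threshold events is a simple comparison: a measured value is checked against the rule's maximum threshold, and an event is generated when the value exceeds it. The following Python sketch illustrates the idea only; it is not NetQ's implementation, and the sample rule and reading are hypothetical.

# Illustrative only: a minimal upper-threshold check, not NetQ's implementation.

def crosses_upper_threshold(measured_value, max_threshold):
    """Return True when the measured value exceeds the configured maximum."""
    return measured_value > max_threshold

# Hypothetical TCA_CPU_UTILIZATION_UPPER rule with a 90% threshold.
rule = {"event_id": "TCA_CPU_UTILIZATION_UPPER", "threshold": 90}

# Hypothetical reading from a switch.
reading = {"hostname": "leaf01", "cpu_utilization": 95}

if crosses_upper_threshold(reading["cpu_utilization"], rule["threshold"]):
    print(f"{rule['event_id']} event for {reading['hostname']}: "
          f"{reading['cpu_utilization']}% > {rule['threshold']}%")
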
Define a Scope

A scope is used to filter the events generated by a given rule. Scope values are set on a per TCA rule basis. All rules can be filtered on Hostname. Some rules can also be filtered by other parameters, as shown in this table:

| Category | Event ID | Scope Parameters |
|---|---|---|
| Interface Statistics | TCA_RXBROADCAST_UPPER | Hostname, Interface |
| Interface Statistics | TCA_RXBYTES_UPPER | Hostname, Interface |
| Interface Statistics | TCA_RXMULTICAST_UPPER | Hostname, Interface |
| Interface Statistics | TCA_TXBROADCAST_UPPER | Hostname, Interface |
| Interface Statistics | TCA_TXBYTES_UPPER | Hostname, Interface |
| Interface Statistics | TCA_TXMULTICAST_UPPER | Hostname, Interface |
| Resource Utilization | TCA_CPU_UTILIZATION_UPPER | Hostname |
| Resource Utilization | TCA_DISK_UTILIZATION_UPPER | Hostname |
| Resource Utilization | TCA_MEMORY_UTILIZATION_UPPER | Hostname |
| Sensors | TCA_SENSOR_FAN_UPPER | Hostname, Sensor Name |
| Sensors | TCA_SENSOR_POWER_UPPER | Hostname, Sensor Name |
| Sensors | TCA_SENSOR_TEMPERATURE_UPPER | Hostname, Sensor Name |
| Sensors | TCA_SENSOR_VOLTAGE_UPPER | Hostname, Sensor Name |
| Forwarding Resources | TCA_TCAM_TOTAL_ROUTE_ENTRIES_UPPER | Hostname |
| Forwarding Resources | TCA_TCAM_TOTAL_MCAST_ROUTES_UPPER | Hostname |
| Forwarding Resources | TCA_TCAM_MAC_ENTRIES_UPPER | Hostname |
| Forwarding Resources | TCA_TCAM_ECMP_NEXTHOPS_UPPER | Hostname |
| Forwarding Resources | TCA_TCAM_IPV4_ROUTE_UPPER | Hostname |
| Forwarding Resources | TCA_TCAM_IPV4_HOST_UPPER | Hostname |
| Forwarding Resources | TCA_TCAM_IPV6_ROUTE_UPPER | Hostname |
| Forwarding Resources | TCA_TCAM_IPV6_HOST_UPPER | Hostname |
| ACL Resources | TCA_TCAM_IN_ACL_V4_FILTER_UPPER | Hostname |
| ACL Resources | TCA_TCAM_EG_ACL_V4_FILTER_UPPER | Hostname |
| ACL Resources | TCA_TCAM_IN_ACL_V4_MANGLE_UPPER | Hostname |
| ACL Resources | TCA_TCAM_EG_ACL_V4_MANGLE_UPPER | Hostname |
| ACL Resources | TCA_TCAM_IN_ACL_V6_FILTER_UPPER | Hostname |
| ACL Resources | TCA_TCAM_EG_ACL_V6_FILTER_UPPER | Hostname |
| ACL Resources | TCA_TCAM_IN_ACL_V6_MANGLE_UPPER | Hostname |
| ACL Resources | TCA_TCAM_EG_ACL_V6_MANGLE_UPPER | Hostname |
| ACL Resources | TCA_TCAM_IN_ACL_8021x_FILTER_UPPER | Hostname |
| ACL Resources | TCA_TCAM_ACL_L4_PORT_CHECKERS_UPPER | Hostname |
| ACL Resources | TCA_TCAM_ACL_REGIONS_UPPER | Hostname |
| ACL Resources | TCA_TCAM_IN_ACL_MIRROR_UPPER | Hostname |
| ACL Resources | TCA_TCAM_ACL_18B_RULES_UPPER | Hostname |
| ACL Resources | TCA_TCAM_ACL_32B_RULES_UPPER | Hostname |
| ACL Resources | TCA_TCAM_ACL_54B_RULES_UPPER | Hostname |
| ACL Resources | TCA_TCAM_IN_PBR_V4_FILTER_UPPER | Hostname |
| ACL Resources | TCA_TCAM_IN_PBR_V6_FILTER_UPPER | Hostname |

Scopes are displayed as regular expressions in the rule card.

| Scope | Display in Card | Result |
|---|---|---|
| All devices | hostname = * | Show events for all devices |
| All interfaces | ifname = * | Show events for all devices and all interfaces |
| All sensors | s_name = * | Show events for all devices and all sensors |
| Particular device | hostname = leaf01 | Show events for leaf01 switch |
| Particular interfaces | ifname = swp14 | Show events for swp14 interface |
| Particular sensors | s_name = fan2 | Show events for the fan2 fan |
| Set of devices | hostname ^ leaf | Show events for switches having names starting with leaf |
| Set of interfaces | ifname ^ swp | Show events for interfaces having names starting with swp |
| Set of sensors | s_name ^ fan | Show events for sensors having names starting with fan |

When a rule is filtered by more than one parameter, each parameter is displayed on the card. Leaving a value blank for a parameter defaults to all: all hostnames, interfaces, sensors, forwarding resources, and ACL resources.

Specify Notification Channels

The notification channel specified by a TCA rule tells NetQ where to send the notification message. Refer to Create a Channel.

Create a TCA Rule

Now that you know which events are supported and how to set the scope, you can create a basic rule to deliver one of the TCA events to a notification channel.

To create a TCA rule:

  1. Click to open the Main Menu.

  2. Click Threshold Crossing Rules under Notifications.

  3. Click to add a rule.

    The Create TCA Rule dialog opens. Four steps create the rule.

    You can move forward and backward until you are satisfied with your rule definition.

  4. On the Enter Details step, enter a name for your rule, choose your TCA event type, and assign a severity.

    The rule name has a maximum of 20 characters (including spaces).

  5. Click Next.

  6. On the Choose Event step, select the attribute to measure against.

    The attributes presented depend on the event type chosen in the Enter Details step. This example shows the attributes available when Resource Utilization was selected.

  7. Click Next.

  8. On the Set Threshold step, enter a threshold value.

  9. Define the scope of the rule.

    • If you want to restrict the rule to a particular device, enter values for one or more of the available parameters.

    • If you want the rule to apply to all devices, click the scope toggle.

  10. Click Next.

  11. Optionally, select a notification channel where you want the events to be sent. If no channel is selected, the notifications are only available from the database. You can add a channel at a later time. Refer to Modify TCA Rules.

  12. Click Finish.

This example shows two rules. The rule on the left triggers an informational event when switch leaf01 exceeds the maximum CPU utilization of 87%. The rule on the right triggers a critical event when any device exceeds the maximum CPU utilization of 93%. Note that the cards indicate both rules are currently Active.

View All TCA Rules

You can view all of the threshold-crossing event rules you have created by clicking and then selecting Threshold Crossing Rules under Notifications.

Modify TCA Rules

You can modify the threshold value and scope of any existing rules.

To edit a rule:

  1. Click to open the Main Menu.

  2. Click Threshold Crossing Rules under Notifications.

  3. Locate the rule you want to modify and hover over the card.

  4. Click .

  5. Modify the rule, changing the threshold, scope or associated channel.

    If you want to modify the rule name or severity after creating the rule, you must delete the rule and recreate it.

  6. Click Update Rule.

Manage TCA Rules

After you have created a number of rules, you might need to manage them: suppress a rule, disable a rule, or delete a rule.

Rule States

The TCA rules have three possible states:

  • Active: the rule is operating and generates events when its threshold is crossed
  • Suppressed: the rule is inactive until a specified date and time, when it is automatically re-enabled
  • Disabled: the rule is inactive until you manually re-enable it

Suppress a Rule

To suppress a rule for a designated amount of time, you must change the state of the rule.

To suppress a rule:

  1. Click to open the Main Menu.

  2. Click Threshold Crossing Rules under Notifications.

  3. Locate the rule you want to suppress.

  4. Click Disable.

  5. Click in the Date/Time field to set when you want the rule to be automatically re-enabled.

  6. Click Disable.

    Note the changes in the card:

    • The state is now marked as Inactive, but remains green
    • The date and time at which the rule will be automatically re-enabled is noted in the Suppressed field
    • The Disable option has changed to Disable Forever. Refer to Disable a Rule for information about this change.

Disable a Rule

To disable a rule until you want to manually re-enable it, you must change the state of the rule.

To disable a rule that is currently active:

  1. Click to open the Main Menu.

  2. Click Threshold Crossing Rules under Notifications.

  3. Locate the rule you want to disable.

  4. Click Disable.

  5. Leave the Date/Time field blank.

  6. Click Disable.

    Note the changes in the card:

    • The state is now marked as Inactive and is red
    • The rule definition is grayed out
    • The Disable option has changed to Enable to reactivate the rule when you are ready

To disable a rule that is currently suppressed:

  1. Click to open the Main Menu.

  2. Click Threshold Crossing Rules under Notifications.

  3. Locate the rule you want to disable.

  4. Click Disable Forever.

    Note the changes in the card:

    • The state is now marked as Inactive and is red
    • The rule definition is grayed out
    • The Disable option has changed to Enable to reactivate the rule when you are ready

Delete a Rule

You might find that you no longer want to receive event notifications for a particular TCA event. In that case, you can either disable the rule, if you think you might want to receive the notifications again later, or delete the rule altogether. Refer to Disable a Rule in the first case. Follow the instructions here to remove the rule. The rule can be in any of the three states.

To delete a rule:

  1. Click to open the Main Menu.

  2. Click Threshold Crossing Rules under Notifications.

  3. Locate the rule you want to remove and hover over the card.

  4. Click .

Resolve Scope Conflicts

The scopes defined by multiple rules for a given TCA event can overlap. In such cases, the TCA rule with the most specific scope that still matches the event is used to generate the event.

To clarify this, consider an example in which three events have occurred and NetQ matches each event's hostname and interface name against three TCA rules with different scopes. The outcome is summarized in the following table:

| Input Event | Scope Parameters | Rule 1, Scope 1 | Rule 2, Scope 2 | Rule 3, Scope 3 | Scope Applied |
|---|---|---|---|---|---|
| leaf01, swp1 | Hostname, Interface | hostname=leaf01, ifname=swp1 | hostname ^ leaf, ifname=* | hostname=*, ifname=* | Scope 1 |
| leaf01, swp3 | Hostname, Interface | hostname=leaf01, ifname=swp1 | hostname ^ leaf, ifname=* | hostname=*, ifname=* | Scope 2 |
| spine01, swp1 | Hostname, Interface | hostname=leaf01, ifname=swp1 | hostname ^ leaf, ifname=* | hostname=*, ifname=* | Scope 3 |

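The following Python sketch mimics this "most specific matching scope wins" selection using the three rules and events from the table above. It is an illustration of the behavior only, not NetQ's internal logic; ranking scopes by counting exact, prefix, and wildcard matches is an assumption made for the example, and prefix scopes are written here as "^leaf" rather than "hostname ^ leaf".

# Illustration of "most specific matching scope wins"; not NetQ's internal logic.

def matches(pattern, value):
    """Match a scope pattern: '*' matches anything, '^prefix' matches a prefix,
    anything else must match exactly."""
    if pattern == "*":
        return True
    if pattern.startswith("^"):
        return value.startswith(pattern[1:])
    return value == pattern

def specificity(pattern):
    """Rank patterns: exact match (2) > prefix match (1) > wildcard (0)."""
    if pattern == "*":
        return 0
    return 1 if pattern.startswith("^") else 2

# The three scopes from the table above.
scopes = {
    "Scope 1": {"hostname": "leaf01", "ifname": "swp1"},
    "Scope 2": {"hostname": "^leaf", "ifname": "*"},
    "Scope 3": {"hostname": "*", "ifname": "*"},
}

def applied_scope(event):
    candidates = [
        (sum(specificity(p) for p in scope.values()), name)
        for name, scope in scopes.items()
        if all(matches(scope[key], event[key]) for key in scope)
    ]
    return max(candidates)[1]  # highest total specificity wins

for event in ({"hostname": "leaf01", "ifname": "swp1"},
              {"hostname": "leaf01", "ifname": "swp3"},
              {"hostname": "spine01", "ifname": "swp1"}):
    print(event, "->", applied_scope(event))
# leaf01/swp1 -> Scope 1, leaf01/swp3 -> Scope 2, spine01/swp1 -> Scope 3
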
Manage Notification Channels

NetQ supports Slack, PagerDuty, and syslog notification channels for reporting system and threshold-based events. You can access channel configuration in one of two ways:

In either case, the Channels view is opened.

Determine the type of channel you want to add and follow the instructions for the selected type.

Specify Slack Channels

To specify Slack channels:

  1. Create one or more channels using Slack.

  2. In NetQ, click Slack in the Channels view.

  3. When no channels have been specified, click on the note. When at least one channel has been specified, click above the table.

  4. Provide a unique name for the channel. Note that spaces are not allowed. Use dashes or camelCase instead.

  5. Copy and paste the incoming webhook URL for a channel you created in Step 1 (or earlier).

  6. Click Add.

  7. Repeat to add additional Slack channels as needed.

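If you want to confirm that a Slack incoming webhook URL is valid before adding it as a NetQ channel, you can post a test message to it directly. Slack incoming webhooks accept an HTTP POST with a small JSON body; the sketch below uses only the Python standard library, and the webhook URL shown is a placeholder.

# Post a test message to a Slack incoming webhook (the URL below is a placeholder).
import json
import urllib.request

webhook_url = "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"

payload = json.dumps({"text": "NetQ channel test message"}).encode("utf-8")
request = urllib.request.Request(
    webhook_url,
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    # Slack returns the plain text "ok" on success.
    print(response.status, response.read().decode())
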
Specify PagerDuty Channels

To specify PagerDuty channels:

  1. Create one or more channels using PagerDuty.

  2. In NetQ, click PagerDuty in the Channels view.

  3. When no channels have been specified, click on the note. When at least one channel has been specified, click above the table.

  4. Provide a unique name for the channel. Note that spaces are not allowed. Use dashes or camelCase instead.

  5. Copy and paste the integration key for a PagerDuty channel you created in Step 1 (or earlier).

  6. Click Add.

  7. Repeat to add additional PagerDuty channels as needed.

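Similarly, you can verify a PagerDuty integration key outside of NetQ by sending a test event, assuming the key was created for PagerDuty's Events API v2 (the endpoint and payload fields below follow that API; the routing key value is a placeholder).

# Send a test event through PagerDuty's Events API v2 (routing key is a placeholder).
import json
import urllib.request

event = {
    "routing_key": "YOUR_32_CHARACTER_INTEGRATION_KEY",
    "event_action": "trigger",
    "payload": {
        "summary": "NetQ channel test event",
        "source": "netq-test",
        "severity": "info",
    },
}

request = urllib.request.Request(
    "https://events.pagerduty.com/v2/enqueue",
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode())
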
Specify a Syslog Channel

To specify a Syslog channel:

  1. Click Syslog in the Channels view.

  2. When no channels have been specified, click on the note. When at least one channel has been specified, click above the table.

  3. Provide a unique name for the channel. Note that spaces are not allowed. Use dashes or camelCase instead.

  4. Enter the IP address and port of the Syslog server.

  5. Click Add.

  6. Repeat to add additional Syslog channels as needed.

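To confirm that the syslog server address and port you plan to enter are reachable, you can send a test message with the Python standard library's syslog handler. The server address and port below are placeholders.

# Send a test message to a remote syslog server over UDP
# (the address and port below are placeholders).
import logging
import logging.handlers

syslog_server = ("192.168.0.254", 514)

logger = logging.getLogger("netq-channel-test")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=syslog_server))

logger.info("NetQ syslog channel test message")
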
Remove Notification Channels

You can view your notification channels at any time. If you create new channels or retire selected channels, you might need to add or remove them from NetQ as well. To add channels refer to Specify Notification Channels.

To remove channels:

  1. Click , and then click Channels in the Notifications column.

    This opens the Channels view.

  2. Click the tab for the type of channel you want to remove (Slack, PagerDuty, or Syslog).

  3. Select one or more channels.

  4. Click .

Configure Multiple Premises

The NetQ Management dashboard provides the ability to configure a single NetQ UI and CLI for monitoring data from multiple external premises in addition to your local premises.

A complete NetQ deployment is required at each premises. The NetQ appliance or VM of one of the deployments acts as the primary (similar to a proxy) for the premises in the other deployments. A list of these external premises is stored with the primary deployment. After the multiple premises are configured, you can view this list of external premises, change the name of premises on the list, and delete premises from the list.

To configure monitoring of external premises:

  1. Sign in to the primary NetQ Appliance or VM.

  2. In the NetQ UI, click Main Menu.

  3. Select Management from the Admin column.

  4. Locate the External Premises card.

  5. Click Manage.

  6. Click to open the Add Premises dialog.

  7. Specify an external premises.

    • Enter an IP address for the API gateway on the external NetQ Appliance or VM in the Hostname field (required)
    • Enter the access credentials
  8. Click Next.

  9. Select from the available premises associated with this deployment by clicking on their names.

  10. Click Finish.

  11. Add more external premises by repeating Steps 6-10.

System Server Information

You can easily view the configuration of the physical server or VM from the NetQ Management dashboard.

To view the server information:

  1. Click Main Menu.

  2. Select Management from the Admin column.

  3. Locate the System Server Info card.

    If no data is present on this card, it is likely that the NetQ Agent on your server or VM is not running properly or the underlying streaming services are impaired.

Integrate with Your LDAP Server

For on-premises deployments you can integrate your LDAP server with NetQ to provide access to NetQ using LDAP user accounts instead of, or in addition to, the NetQ user accounts. Refer to Integrate NetQ with Your LDAP Server for more detail.

Provision Your Devices and Network

NetQ enables you to provision your switches using the lifecycle management feature in the NetQ UI or the NetQ CLI. Also included here are management procedures for NetQ Agents and optional post-installation configurations.

Manage Switches through Their Lifecycle

Only administrative users can perform the tasks described in this topic.

As an administrator, you want to manage the deployment of Cumulus Networks product software onto your network devices (servers, appliances, and switches) in the most efficient way and with the most information about the process as possible. With this release, NetQ expands its lifecycle management (LCM) capabilities to support configuration management for Cumulus Linux switches.

Using the NetQ UI or CLI, lifecycle management enables you to:

This feature is fully enabled for on-premises deployments and fully disabled for cloud deployments. Contact your local Cumulus Networks sales representative or submit a support ticket to activate LCM on cloud deployments.

Access Lifecycle Management Features in the NetQ UI

To manage the various lifecycle management features from any workbench, click (Switches) in the workbench header, then select Manage switches.

The first time you open the Manage Switch Assets view, it provides a summary card for switch inventory, uploaded Cumulus Linux images, uploaded NetQ images, NetQ configuration profiles, and switch access settings. Additional cards appear after that based on your activity.

You can also access this view by clicking Main Menu (Main Menu) and selecting Manage Switches from the Admin section.

NetQ CLI Lifecycle Management Commands Summary

The NetQ CLI provides a number of netq lcm commands to perform the various LCM capabilities. The syntax of these commands is:

netq lcm upgrade name <text-job-name> cl-version <text-cumulus-linux-version> netq-version <text-netq-version> hostnames <text-switch-hostnames> [run-restore-on-failure] [run-before-after]
netq lcm add credentials username <text-switch-username> (password <text-switch-password> | ssh-key <text-ssh-key>)
netq lcm add role (superspine | spine | leaf | exit) switches <text-switch-hostnames>
netq lcm del credentials
netq lcm show credentials [json]
netq lcm show switches [version <text-cumulus-linux-version>] [json]
netq lcm show status <text-lcm-job-id> [json]
netq lcm add cl-image <text-image-path>
netq lcm add netq-image <text-image-path>
netq lcm del image <text-image-id>
netq lcm show images [<text-image-id>] [json]
netq lcm show upgrade-jobs [json]
netq [<hostname>] show events [level info | level error | level warning | level critical | level debug] type lcm [between <text-time> and <text-endtime>] [json]

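Because most of these commands accept a json option, they are straightforward to drive from scripts. As a minimal sketch (assuming the netq CLI is on the PATH and that the json option emits a parseable JSON document, as shown later for netq lcm show images), the following Python snippet runs netq lcm show upgrade-jobs json and pretty-prints the result:

# Run a NetQ LCM command and pretty-print its JSON output.
# Assumes the netq CLI is on the PATH and that `json` emits parseable JSON.
import json
import subprocess

result = subprocess.run(
    ["netq", "lcm", "show", "upgrade-jobs", "json"],
    capture_output=True, text=True, check=True,
)

jobs = json.loads(result.stdout)
print(json.dumps(jobs, indent=2))
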
Manage Cumulus Linux and NetQ Images

You can manage both Cumulus Linux and Cumulus NetQ images with LCM. They are managed in a similar manner.

Cumulus Linux binary images can be uploaded to a local LCM repository for upgrade of your switches. Cumulus NetQ debian packages can be uploaded to the local LCM repository for installation or upgrade. You can upload images from an external drive.

The Linux and NetQ images are available in several variants based on the software version (x.y.z), the CPU architecture (ARM, x86), the platform (based on ASIC vendor, Broadcom or Mellanox), the SHA checksum, and so forth. When LCM discovers Cumulus Linux switches running NetQ 2.x or later in your network, it extracts the metadata needed to select the appropriate image for a given switch. Similarly, LCM discovers and extracts the metadata from NetQ images.

The Cumulus Linux Images and NetQ Images cards in the NetQ UI provide a summary of image status in LCM. They show the total number of images in the repository, a count of missing images, and the starting points for adding and managing your images.

The netq lcm show images command also displays a summary of the images uploaded to the LCM repo on the NetQ appliance or VM.

Default Cumulus Linux or Cumulus NetQ Version Assignment

In the NetQ UI, you can assign a specific Cumulus Linux or Cumulus NetQ version as the default version to use during installation or upgrade of switches. It is recommended that you choose the newest version that you intend to install or upgrade on all, or the majority, of your switches. The default selection can be overridden during individual installation and upgrade job creation if an alternate version is needed for a given set of switches.

Missing Images

You should upload images for each variant of Cumulus Linux and Cumulus NetQ currently installed on the switches in your inventory if you want to support rolling back to a known good version should an installation or upgrade fail. The NetQ UI prompts you to upload any missing images to the repository.

For example, if you have both Cumulus Linux 3.7.3 and 3.7.11 versions, some running on ARM and some on x86 architectures, then LCM verifies the presence of each of these images. If only the 3.7.3 x86, 3.7.3 ARM, and 3.7.11 x86 images are in the repository, the NetQ UI would list the 3.7.11 ARM image as missing. For Cumulus NetQ, you need both the netq-apps and netq-agent packages for each release variant.

If you have specified a default Cumulus Linux and/or Cumulus NetQ version, the NetQ UI also verifies that the necessary versions of the default image are available based on the known switch inventory, and if not, lists those that are missing.

While it is not required that you upload images that NetQ determines to be missing, not doing so may cause failures when you attempt to upgrade your switches.

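Conceptually, the missing-image check is a set difference between the image variants implied by your switch inventory and the variants already uploaded. The following Python sketch reproduces the example above (Cumulus Linux 3.7.3 and 3.7.11 on ARM and x86); it only illustrates the idea and is not NetQ's implementation.

# Illustration of the missing-image check as a set difference;
# not NetQ's implementation.

# Variants (version, CPU architecture) found on switches in the inventory.
inventory_variants = {
    ("3.7.3", "x86"), ("3.7.3", "ARM"),
    ("3.7.11", "x86"), ("3.7.11", "ARM"),
}

# Variants already uploaded to the LCM repository.
uploaded_variants = {
    ("3.7.3", "x86"), ("3.7.3", "ARM"), ("3.7.11", "x86"),
}

missing = sorted(inventory_variants - uploaded_variants)
for version, cpu in missing:
    print(f"Missing Cumulus Linux {version} image for {cpu}")
# Missing Cumulus Linux 3.7.11 image for ARM
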
Upload Images

For fresh installations of NetQ 3.2, no images have yet been uploaded to the LCM repository. If you are upgrading from NetQ 3.0.0 or 3.1.0, the Cumulus Linux images you have previously added are still present.

In preparation for Cumulus Linux upgrades, the recommended image upload flow is:

  1. In a fresh NetQ install, add images that match your current inventory: Upload Missing Images

  2. Add images you want to use for upgrade: Upload Upgrade Images

  3. In NetQ UI, optionally specify a default version for upgrades: Specify a Default Upgrade Image

In preparation for Cumulus NetQ installation or upgrade, the recommended image upload flow is:

  1. Add images you want to use for installation or upgrade: Upload Upgrade Images

  2. Add any missing images: Upload Missing Images

  3. In NetQ UI, optionally specify a default version for installation or upgrade: Specify a Default Upgrade Image

Upload Missing Images

Use the following instructions to upload missing Cumulus Linux and NetQ images:

For Cumulus Linux images:

  1. On the Cumulus Linux Images card, click the View # missing CL images link to see which images you need. This opens the list of missing images.

  2. Select one or more of the missing images and make note of the version, ASIC Vendor, and CPU architecture for each.

  3. Download the Cumulus Linux disk images (.bin files) needed for upgrade from the MyMellanox downloads page, selecting the appropriate version, CPU, and ASIC. Place them in an accessible part of your local network.

  4. Back in the UI, click (Add Image) above the table.

  5. Provide the .bin file from an external drive that matches the criteria for the selected image(s), either by dragging and dropping it onto the dialog or by selecting it from a directory.

  6. Click Import.

  7. Click Done.

  8. Click Uploaded to verify the image is in the repository.

  9. Click to return to the LCM dashboard.

    The Cumulus Linux Images card now shows the number of images you uploaded.

To upload missing Cumulus Linux images using the NetQ CLI instead:

  1. Download the Cumulus Linux disk images (.bin files) needed for upgrade from the MyMellanox downloads page, selecting the appropriate version, CPU, and ASIC. Place them in an accessible part of your local network.

  2. Upload the images to the LCM repository. This example uses a Cumulus Linux 4.1.0 disk image.

    cumulus@switch:~$ netq lcm add cl-image /path/to/download/cumulus-linux-4.1.0-vx-amd64.bin
    
  3. Repeat Step 2 for each image you need to upload to the LCM repository.

For Cumulus NetQ images:

  1. On the NetQ Images card, click the View # missing NetQ images link to see which images you need. This opens the list of missing images.

  2. Select one or all of the missing images and make note of the OS version, CPU architecture, and image type. Remember that you need both image types for NetQ to perform the installation or upgrade.

  3. Download the Cumulus NetQ Debian packages needed for upgrade from the MyMellanox downloads page, selecting the appropriate version and hypervisor/platform. Place them in an accessible part of your local network.

  4. Back in the UI, click (Add Image) above the table.

  5. Provide the .deb file(s) from an external drive that matches the criteria for the selected image, either by dragging and dropping it onto the dialog or by selecting it from a directory.

  6. Click Import.

  7. Click Done.

  8. Click Uploaded to verify the images are in the repository.

    When all of the missing images have been uploaded, the Missing list is empty.

  9. Click to return to the LCM dashboard.

    The NetQ Images card now shows the number of images you uploaded.

To upload missing NetQ images using the NetQ CLI instead:

  1. Download the Cumulus NetQ Debian packages needed for upgrade from the MyMellanox downloads page, selecting the appropriate version and hypervisor/platform. Place them in an accessible part of your local network.

  2. Upload the images to the LCM repository. This example uploads the two packages (netq-agent and netq-apps) needed for NetQ version 3.2.1 for a NetQ appliance or VM running Ubuntu 18.04 with an x86 architecture.

    cumulus@switch:~$ netq lcm add netq-image /path/to/download/netq-agent_3.2.1-ub18.04u31~1603789872.6f62fad_amd64
    cumulus@switch:~$ netq lcm add netq-image /path/to/download/netq-apps_3.2.1-ub18.04u31~1603789872.6f62fad_amd64
    

Upload Upgrade Images

To upload the Cumulus Linux or Cumulus NetQ images that you want to use for upgrade:

First download the Cumulus Linux disk images (.bin files) and Cumulus NetQ Debian packages needed for upgrade from the MyMellanox downloads page. Place them in an accessible part of your local network.

If you are upgrading Cumulus Linux on switches with different ASIC vendors or CPU architectures, you will need more than one image. For NetQ, you need both the netq-apps and netq-agent packages for each variant.

Then continue with the instructions here based on whether you want to use the NetQ UI or CLI.

  1. Click Add Image on the Cumulus Linux Images or NetQ Images card.

  2. Provide one or more images from an external drive, either by dragging and dropping them onto the dialog or by selecting them from a directory.

  3. Click Import.

  4. Monitor the progress until it completes. Click Done.

  5. Click to return to the LCM dashboard.

    The Cumulus Linux Images or NetQ Images card is updated to show the number of additional images you uploaded.

Use the netq lcm add cl-image <text-image-path> and netq lcm add netq-image <text-image-path> commands to upload the images. Run the relevant command for each image that needs to be uploaded.

Cumulus Linux images:

cumulus@switch:~$ netq lcm add cl-image /path/to/download/cumulus-linux-4.2.0-mlx-amd64.bin

Cumulus NetQ images:

cumulus@switch:~$ netq lcm add netq-image /path/to/download/netq-agent_3.2.1-ub18.04u31~1603789872.6f62fad_amd64
cumulus@switch:~$ netq lcm add netq-image /path/to/download/netq-apps_3.2.1-ub18.04u31~1603789872.6f62fad_amd64

Specify a Default Upgrade Version

Lifecycle management does not have a default Cumulus Linux or Cumulus NetQ upgrade version specified automatically. With the NetQ UI, you can specify the version that is appropriate for your network to ease the upgrade process.

To specify a default Cumulus Linux or Cumulus NetQ version in the NetQ UI:

  1. Click the Click here to set the default CL version link in the middle of the Cumulus Linux Images card, or click the Click here to set the default NetQ version link in the middle of the NetQ Images card.

  2. Select the version you want to use as the default for switch upgrades.

  3. Click Save. The default version is now displayed on the relevant Images card.

After you have specified a default version, you have the option to change it.

To change the default Cumulus Linux or Cumulus NetQ version:

  1. Click change next to the currently identified default image on the Cumulus Linux Images or NetQ Images card.

  2. Select the image you want to use as the default version for upgrades.

  3. Click Save.

Export Images

You can export a listing of the Cumulus Linux and NetQ images stored in the LCM repository for reference.

To export image listings:

  1. Open the LCM dashboard.

  2. Click Manage on the Cumulus Linux Images or NetQ Images card.

  3. Optionally, use the filter option above the table on the Uploaded tab to narrow down a large listing of images.

  4. Click above the table.

  5. Choose the export file type and click Export.

Use the json option with the netq lcm show images command to output a list of the Cumulus Linux image files stored in the LCM repository.

cumulus@switch:~$ netq lcm show images json
[
    {
        "id": "image_cc97be3955042ca41857c4d0fe95296bcea3e372b437a535a4ad23ca300d52c3",
        "name": "cumulus-linux-4.2.0-vx-amd64-1594775435.dirtyzc24426ca.bin",
        "clVersion": "4.2.0",
        "cpu": "x86_64",
        "asic": "VX",
        "lastChanged": 1600726385400.0
    },
    {
        "id": "image_c6e812f0081fb03b9b8625a3c0af14eb82c35d79997db4627c54c76c973ce1ce",
        "name": "cumulus-linux-4.1.0-vx-amd64.bin",
        "clVersion": "4.1.0",
        "cpu": "x86_64",
        "asic": "VX",
        "lastChanged": 1600717860685.0
    }
]

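Because the JSON output above has a stable structure (id, name, clVersion, cpu, asic, lastChanged), you can filter it in a script, for example to find the image IDs for a version you plan to retire (see Remove Images from Local Repository below). A minimal sketch, assuming the netq CLI is on the PATH:

# List image IDs for a given Cumulus Linux version using the JSON output
# of `netq lcm show images json`. Assumes the netq CLI is on the PATH.
import json
import subprocess

TARGET_VERSION = "4.1.0"   # version you plan to remove

result = subprocess.run(
    ["netq", "lcm", "show", "images", "json"],
    capture_output=True, text=True, check=True,
)

for image in json.loads(result.stdout):
    if image["clVersion"] == TARGET_VERSION:
        print(image["id"], image["name"])
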
Remove Images from Local Repository

Once you have upgraded all of your switches beyond a particular release of Cumulus Linux or NetQ, you may want to remove those images from the LCM repository to save space on the server.

To remove images:

  1. Open the LCM dashboard.

  2. Click Manage on the Cumulus Linux Images or NetQ Images card.

  3. On Uploaded, select the images you want to remove. Use the filter option above the table to narrow down a large listing of images.

  4. Click .

To remove images using the NetQ CLI, use the following commands:

netq lcm show images [json]
netq lcm del image <text-image-id>

  1. Determine the ID of the image you want to remove.

    cumulus@switch:~$ netq lcm show images
    [
        {
            "id": "image_cc97be3955042ca41857c4d0fe95296bcea3e372b437a535a4ad23ca300d52c3",
            "name": "cumulus-linux-4.2.0-vx-amd64-1594775435.dirtyzc24426ca.bin",
            "clVersion": "4.2.0",
            "cpu": "x86_64",
            "asic": "VX",
            "lastChanged": 1600726385400.0
        },
        {
            "id": "image_c6e812f0081fb03b9b8625a3c0af14eb82c35d79997db4627c54c76c973ce1ce",
            "name": "cumulus-linux-4.1.0-vx-amd64.bin",
            "clVersion": "4.1.0",
            "cpu": "x86_64",
            "asic": "VX",
            "lastChanged": 1600717860685.0
        }
    ]
    
  2. Remove the image you no longer need.

    cumulus@switch:~$ netq lcm del image image_c6e812f0081fb03b9b8625a3c0af14eb82c35d79997db4627c54c76c973ce1ce
    
  3. Verify it has been removed.

    cumulus@switch:~$ netq lcm show images
    [
        {
            "id": "image_cc97be3955042ca41857c4d0fe95296bcea3e372b437a535a4ad23ca300d52c3",
            "name": "cumulus-linux-4.2.0-vx-amd64-1594775435.dirtyzc24426ca.bin",
            "clVersion": "4.2.0",
            "cpu": "x86_64",
            "asic": "VX",
            "lastChanged": 1600726385400.0
        }
    ]
    

Manage Switch Credentials

Switch access credentials are needed for performing installations and upgrades of software. You can choose between basic authentication (SSH username/password) and SSH (Public/Private key) authentication. These credentials apply to all switches. If some of your switches have alternate access credentials, you must change them or modify the credential information before attempting installations or upgrades with the lifecycle management feature.

Specify Switch Credentials

Switch access credentials are not specified by default. You must add these.

To specify access credentials:

  1. Open the LCM dashboard.

  2. Click the Click here to add Switch access link on the Access card.

  3. Select the authentication method you want to use: SSH or Basic Authentication. Basic authentication is selected by default.

To configure basic authentication, run:

cumulus@switch:~$ netq lcm add credentials username cumulus password cumulus

The default credentials for Cumulus Linux have changed from cumulus/CumulusLinux! to cumulus/cumulus for releases 4.2 and later. For details, refer to the Cumulus Linux User Accounts documentation.

To configure SSH authentication using a public/private key:

  1. If the keys do not yet exist, create a pair of SSH private and public keys.

    ssh-keygen -t rsa -C "<USER>"
    
  2. Copy the SSH public key to each switch that you want to upgrade using one of the following methods:

    • Manually copy the SSH public key to the /home/<USER>/.ssh/authorized_keys file on each switch, or
    • Run ssh-copy-id USER@<switch_ip> on the server where the SSH key pair was generated for each switch
  3. Add these credentials to NetQ.

    cumulus@switch:~$ netq lcm add credentials ssh-key PUBLIC_SSH_KEY
    

View Switch Credentials

You can view the type of credentials being used to access your switches in the NetQ UI. You can view the details of the credentials using the NetQ CLI.

  1. Open the LCM dashboard.

  2. On the Access card, either Basic or SSH is indicated.

To see the credentials, run netq lcm show credentials.

If an SSH key is used for the credentials, the public key is displayed in the command output:

cumulus@switch:~$ netq lcm show credentials
Type             SSH Key        Username         Password         Last Changed
---------------- -------------- ---------------- ---------------- -------------------------
SSH              MY-SSH-KEY                                       Tue Apr 28 19:08:52 2020

If a username and password is used for the credentials, the username is displayed in the command output but the password is masked:

cumulus@switch:~$ netq lcm show credentials
Type             SSH Key        Username         Password         Last Changed
---------------- -------------- ---------------- ---------------- -------------------------
BASIC                           cumulus          **************   Tue Apr 28 19:10:27 2020

Modify Switch Credentials

You can modify your switch access credentials at any time. You can change between authentication methods or change values for either method.

To change your access credentials:

  1. Open the LCM dashboard.

  2. On the Access card, click the Click here to change access mode link in the center of the card.

  3. Select the authentication method you want to use: SSH or Basic Authentication. Basic authentication is selected by default.

  4. Based on your selection:

    • Basic: Enter a new username and/or password
    • SSH: Copy and paste a new SSH private key
  5. Click Save.

To change the basic authentication credentials, run the add credentials command with the new username and/or password. This example changes the password for the cumulus account created above:

cumulus@switch:~$ netq lcm add credentials username cumulus password Admin#123

To configure SSH authentication using a public/private key:

  1. If the new keys do not yet exist, create a pair of SSH private and public keys.

    ssh-keygen -t rsa -C "<USER>"
    
  2. Copy the SSH public key to each switch that you want to upgrade using one of the following methods:

    • Manually copy the SSH public key to the /home/<USER>/.ssh/authorized_keys file on each switch, or
    • Run ssh-copy-id USER@<switch_ip> on the server where the SSH key pair was generated for each switch
  3. Add these new credentials to NetQ.

    cumulus@switch:~$ netq lcm add credentials ssh-key PUBLIC_SSH_KEY
    

Remove Switch Credentials

You can remove the access credentials for switches using the NetQ CLI. Note that without valid credentials, you will not be able to upgrade your switches.

To remove the credentials, run netq lcm del credentials. Verify they are removed by running netq lcm show credentials.

Manage Switch Inventory and Roles

On initial installation, the lifecycle management feature provides an inventory of switches that have been automatically discovered by NetQ 3.x and are available for software installation or upgrade through NetQ. This includes all switches running Cumulus Linux 3.6 or later and Cumulus NetQ Agent 2.4 or later in your network. You assign network roles to switches and select switches for software installation and upgrade from this inventory listing.

View the LCM Switch Inventory

The switch inventory can be viewed from the NetQ UI and the NetQ CLI.

A count of the switches NetQ was able to discover and the Cumulus Linux versions that are running on those switches is available from the LCM dashboard.

To view a list of all switches known to lifecycle management, click Manage on the Switches card.

Review the list:

  • Sort the list by any column: hover over the column title and click to toggle between ascending and descending order
  • Filter the list: click and enter the parameter value of interest

To view a list of all switches known to lifecycle management, run:

netq lcm show switches [version <text-cumulus-linux-version>] [json]

Use the version option to only show switches with a given Cumulus Linux version, X.Y.Z.

This example shows all switches known by lifecycle management.

cumulus@switch:~$ netq lcm show switches
Hostname          Role       IP Address                MAC Address        CPU      CL Version           NetQ Version             Last Changed
----------------- ---------- ------------------------- ------------------ -------- -------------------- ------------------------ -------------------------
leaf01            leaf       192.168.200.11            44:38:39:00:01:7A  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Wed Sep 30 21:55:37 2020
                                                                                                        104fb9ed
spine04           spine      192.168.200.24            44:38:39:00:01:6C  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Tue Sep 29 21:25:16 2020
                                                                                                        104fb9ed
leaf03            leaf       192.168.200.13            44:38:39:00:01:84  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Wed Sep 30 21:55:56 2020
                                                                                                        104fb9ed
leaf04            leaf       192.168.200.14            44:38:39:00:01:8A  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Wed Sep 30 21:55:07 2020
                                                                                                        104fb9ed
border02                     192.168.200.64            44:38:39:00:01:7C  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Wed Sep 30 21:56:49 2020
                                                                                                        104fb9ed
border01                     192.168.200.63            44:38:39:00:01:74  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Wed Sep 30 21:56:37 2020
                                                                                                        104fb9ed
fw2                          192.168.200.62            44:38:39:00:01:8E  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Tue Sep 29 21:24:58 2020
                                                                                                        104fb9ed
spine01           spine      192.168.200.21            44:38:39:00:01:82  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Tue Sep 29 21:25:07 2020
                                                                                                        104fb9ed
spine02           spine      192.168.200.22            44:38:39:00:01:92  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Tue Sep 29 21:25:08 2020
                                                                                                        104fb9ed
spine03           spine      192.168.200.23            44:38:39:00:01:70  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Tue Sep 29 21:25:16 2020
                                                                                                        104fb9ed
fw1                          192.168.200.61            44:38:39:00:01:8C  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Tue Sep 29 21:24:58 2020
                                                                                                        104fb9ed
leaf02            leaf       192.168.200.12            44:38:39:00:01:78  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Wed Sep 30 21:55:53 2020
                                                                                                        104fb9ed

This listing is the starting point for Cumulus Linux upgrades or Cumulus NetQ installations and upgrades. If the switches you want to upgrade are not present in the list, you can:

Role Management

Four pre-defined switch roles are available based on the Clos architecture: Superspine, Spine, Leaf, and Exit. With this release, you cannot create your own roles.

Switch roles are used to:

When roles are assigned, the upgrade process begins with switches having the superspine role, then continues with the spine switches, leaf switches, exit switches, and finally switches with no role assigned. All switches with a given role must be successfully upgraded before the switches with the closest dependent role can be upgraded.

For example, a group of seven switches are selected for upgrade. Three are spine switches and four are leaf switches. After all of the spine switches are successfully upgraded, then the leaf switches are upgraded. If one of the spine switches were to fail the upgrade, the other two spine switches are upgraded, but the upgrade process stops after that, leaving the leaf switches untouched, and the upgrade job fails.

When only some of the selected switches have roles assigned in an upgrade job, the switches with roles are upgraded first and then all the switches with no roles assigned are upgraded.

While role assignment is optional, using roles can prevent switches from becoming unreachable due to dependencies between switches or single attachments. And when MLAG pairs are deployed, switch roles avoid upgrade conflicts. For these reasons, Cumulus Networks highly recommends assigning roles to all of your switches.

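The role-based ordering described above can be summarized as: group the selected switches by role, walk the groups from superspine to spine to leaf to exit to unassigned, and stop the job if any switch in a group fails. The following Python sketch illustrates that ordering with the seven-switch example (three spine, four leaf); the upgrade call is a stand-in, not a real NetQ API.

# Illustration of role-ordered upgrades with stop-on-failure;
# upgrade_switch() is a stand-in, not a real NetQ API.

ROLE_ORDER = ["superspine", "spine", "leaf", "exit", None]  # None = no role assigned

switches = [
    ("spine01", "spine"), ("spine02", "spine"), ("spine03", "spine"),
    ("leaf01", "leaf"), ("leaf02", "leaf"), ("leaf03", "leaf"), ("leaf04", "leaf"),
]

def upgrade_switch(hostname):
    """Stand-in for the real upgrade; returns True on success."""
    return True

def run_upgrade_job(switches):
    for role in ROLE_ORDER:
        group = [name for name, r in switches if r == role]
        # Every switch in the group is attempted, even if one fails...
        results = [upgrade_switch(name) for name in group]
        # ...but the job stops before moving to the next dependent role.
        if not all(results):
            print(f"Upgrade failed in the {role} group; job stops here.")
            return False
    print("All selected switches upgraded.")
    return True

run_upgrade_job(switches)
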
Assign Switch Roles

Roles can be assigned to one or more switches using the NetQ UI or the NetQ CLI.

  1. Open the LCM dashboard.

  2. On the Switches card, click Manage.

  3. Select one switch or multiple switches that should be assigned to the same role.

  4. Click .

  5. Select the role that applies to the selected switch(es).

  6. Click Assign.

    Note that the Role column is updated with the role assigned to the selected switch(es).

  7. Continue selecting switches and assigning roles until most or all switches have roles assigned.

A bonus of assigning roles to switches is that you can then filter the list of switches by their roles by clicking the appropriate tab.

To add a role to one or more switches, run:

netq lcm add role (superspine | spine | leaf | exit) switches <text-switch-hostnames>

For a single switch, run:

netq lcm add role leaf switches leaf01

For multiple switches to be assigned the same role, separate the hostnames with commas (no spaces). This example configures leaf01 through leaf04 switches with the leaf role:

netq lcm add role leaf switches leaf01,leaf02,leaf03,leaf04

View Switch Roles

You can view the roles assigned to the switches in the LCM inventory at any time.

  1. Open the LCM dashboard.

  2. On the Switches card, click Manage.

    The assigned role is displayed in the Role column of the listing.

To view all switch roles, run:

netq lcm show switches [version <text-cumulus-linux-version>] [json]

Use the version option to only show switches with a given Cumulus Linux version, X.Y.Z.

This example shows the role of all switches in the Role column of the listing.

cumulus@switch:~$ netq lcm show switches
Hostname          Role       IP Address                MAC Address        CPU      CL Version           NetQ Version             Last Changed
----------------- ---------- ------------------------- ------------------ -------- -------------------- ------------------------ -------------------------
leaf01            leaf       192.168.200.11            44:38:39:00:01:7A  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Wed Sep 30 21:55:37 2020
                                                                                                        104fb9ed
spine04           spine      192.168.200.24            44:38:39:00:01:6C  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Tue Sep 29 21:25:16 2020
                                                                                                        104fb9ed
leaf03            leaf       192.168.200.13            44:38:39:00:01:84  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Wed Sep 30 21:55:56 2020
                                                                                                        104fb9ed
leaf04            leaf       192.168.200.14            44:38:39:00:01:8A  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Wed Sep 30 21:55:07 2020
                                                                                                        104fb9ed
border02                     192.168.200.64            44:38:39:00:01:7C  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Wed Sep 30 21:56:49 2020
                                                                                                        104fb9ed
border01                     192.168.200.63            44:38:39:00:01:74  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Wed Sep 30 21:56:37 2020
                                                                                                        104fb9ed
fw2                          192.168.200.62            44:38:39:00:01:8E  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Tue Sep 29 21:24:58 2020
                                                                                                        104fb9ed
spine01           spine      192.168.200.21            44:38:39:00:01:82  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Tue Sep 29 21:25:07 2020
                                                                                                        104fb9ed
spine02           spine      192.168.200.22            44:38:39:00:01:92  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Tue Sep 29 21:25:08 2020
                                                                                                        104fb9ed
spine03           spine      192.168.200.23            44:38:39:00:01:70  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Tue Sep 29 21:25:16 2020
                                                                                                        104fb9ed
fw1                          192.168.200.61            44:38:39:00:01:8C  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Tue Sep 29 21:24:58 2020
                                                                                                        104fb9ed
leaf02            leaf       192.168.200.12            44:38:39:00:01:78  x86_64   4.1.0                3.2.0-cl4u30~1601410518. Wed Sep 30 21:55:53 2020
                                                                                                        104fb9ed

Change the Role of a Switch

If you accidentally assign an incorrect role to a switch, it can easily be changed to the correct role.

To change a switch role:

  1. Open the LCM dashboard.

  2. On the Switches card, click Manage.

  3. Select the switches with the incorrect role from the list.

  4. Click .

  5. Select the correct role. (Note that you can select No Role here as well to remove the role from the switches.)

  6. Click Assign.

You use the same command to assign a role as you use to change the role.

For a single switch, run:

netq lcm add role exit switches border01

For multiple switches to be assigned the same role, separate the hostnames with commas (no spaces). For example:

cumulus@switch:~$ netq lcm add role exit switches border01,border02

Export List of Switches

Using the Switch Management feature you can export a listing of all or a selected set of switches.

To export the switch listing:

  1. Open the LCM dashboard.

  2. On the Switches card, click Manage.

  3. Select one or more switches, filtering as needed, or select all switches (click ).

  4. Click .

  5. Choose the export file type and click Export.

Use the json option with the netq lcm show switches command to output a list of all switches in the LCM repository. Alternately, output only switches running a particular version of Cumulus Linux by including the version option.

cumulus@switch:~$ netq lcm show switches json

cumulus@switch:~$ netq lcm show switches version 3.7.11 json

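If you prefer a file you can process further, the JSON output can be converted to CSV with a short script. This sketch assumes the netq CLI is on the PATH and that the json output is a non-empty list of flat records; the column names are taken from the records themselves rather than hard-coded.

# Convert `netq lcm show switches json` output to CSV.
# Assumes the netq CLI is on the PATH and returns a non-empty list of flat records;
# column names are taken from the first record rather than hard-coded.
import csv
import json
import subprocess

result = subprocess.run(
    ["netq", "lcm", "show", "switches", "json"],
    capture_output=True, text=True, check=True,
)
records = json.loads(result.stdout)

with open("lcm-switches.csv", "w", newline="") as outfile:
    writer = csv.DictWriter(outfile, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)

print(f"Wrote {len(records)} switches to lcm-switches.csv")
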
Manage Switch Configurations

You can use the NetQ UI to configure switches using one or more switch configurations. To enable consistent application of configurations, switch configurations can contain network templates for SNMP, NTP, and user accounts, and configuration profiles for Cumulus NetQ Agents.

If you intend to use network templates or configuration profiles, the recommended workflow is as follows:

Manage Network Templates

Network templates provide administrators the option to create switch configuration profiles that can be applied to multiple switches. They can help reduce inconsistencies with switch configuration and speed the process of initial configuration and upgrades. No default templates are provided.

View Network Templates

You can view existing templates using the Network Templates card.

  1. Open the lifecycle management (Manage Switch Assets) dashboard.

  2. Locate the Network Templates card.

  3. Click Manage to view the list of existing switch templates.

Create Network Templates

No default templates are provided on installation of NetQ. This enables you to create configurations that match your specifications.

To create a network template:

  1. Open the lifecycle management (Manage Switch Assets) dashboard.

  2. Click Add on the Network Templates card.

  3. Click Create New.

  4. Decide which aspects of configuration you want included in this template: SNMP, NTP, and/or User accounts.

    You can specify your template in any order, but to complete the configuration, you must open the User form to click Save and Finish.

  5. Configure the template using the following instructions.

SNMP provides a way to query, monitor, and manage your devices in addition to NetQ.

To create a network template with SNMP parameters included:

  1. Provide a name for the template. This field is required and can be a maximum of 22 characters, including spaces.

    All other parameters are optional. Configure those as desired, as described here.

  2. Enter a comma-separated list of IP addresses of the SNMP Agents on the switches and hosts in your network.

  3. Accept the management VRF or change to the default VRF.

  4. Enter contact information for the SNMP system administrator, including an email address or phone number, their location, and name.

  5. Restrict the hosts that should accept SNMP packets:

    1. Click .

    2. Enter the name of an IPv4 or IPv6 community string.

    3. Indicate which hosts should accept messages:

      Accept any to indicate all hosts are to accept messages (default), or enter the hostnames or IP addresses of the specific hosts that should accept messages.

    4. Click to add additional community strings.

  6. Specify traps to be included:

    1. Click .

    2. Specify the traps as follows:

      | Parameter | Description |
      |---|---|
      | Load (1 min) | Threshold CPU load must cross within a minute to trigger a trap |
      | Trap link down frequency | Toggle on to set the frequency at which to collect link down trap information. Default value is 60 seconds. |
      | Trap link up frequency | Toggle on to set the frequency at which to collect link up trap information. Default value is 60 seconds. |
      | IQuery Secname | Security name for SNMP query |
      | Trap Destination IP | IPv4 or IPv6 address where the trap information is to be sent. This can be a local host or other valid location. |
      | Community Password | Authorization password. Any valid string, where an exclamation mark (!) is the only allowed special character. |
      | Version | SNMP version to use |
  7. If you are using SNMP version 3, specify relevant V3 support parameters:

    1. Enter the user name of someone who has full access to the SNMP server.

    2. Enter the user name of someone who has only read access to the SNMP server.

    3. Toggle Authtrap to enable authentication for users accessing the SNMP server.

    4. Select an authorization type.

      For either MD5 or SHA, enter an authorization key and optionally specify AES or DES encryption.

  8. Click Save and Continue.

Switches and hosts must be kept in time synchronization with the NetQ appliance or VM to ensure accurate data reporting. NTP is one protocol that can be used to synchronize the clocks of these devices. None of the parameters are required. Specify those which apply to your configuration.

To create a network template with NTP parameters included:

  1. Click NTP.

  2. Enter the address of one or more of your NTP servers. Toggle to choose between Burst and IBurst to specify whether the server should send a burst of packets when the server is reachable or unreachable, respectively.

  3. Specify either the Default or Management VRF for communication with the NTP server.

  4. Enter the interfaces that the NTP server should listen to for synchronization. This can be an IP, broadcast, manycastclient, or reference clock address.

  5. Enter the timezone of the NTP server.

  6. Specify advanced parameters:

    1. Click Advanced.

    2. Specify the location of a Drift file containing the frequency offset between the NTP server clock and the UTC clock. It is used to adjust the system clock frequency on every system or service start. Be sure that the location you enter can be written by the NTP daemon.

    3. Enter an interface for the NTP server to ignore. Click to add more interfaces to be ignored.

    4. Enter one or more interfaces that the NTP server should drop. Click to add more interfaces to be dropped.

    5. Restrict query/configuration access to the NTP server.

      Enter restrict <values>. Common values include:

      | Value | Description |
      |---|---|
      | default | Block all queries except as explicitly indicated |
      | kod (kiss-o-death) | Block all, but time and statistics queries |
      | nomodify | Block changes to NTP configuration |
      | notrap | Block control message protocol traps |
      | nopeer | Block the creation of a peer |
      | noquery | Block NTP daemon queries, but allow time queries |

      Click to add more access control restrictions.

    6. Restrict administrative control (host) access to the NTP server.

      Enter the IP address for a host or set of hosts, with or without a mask, followed by a restriction value (as described in step 5.) If no mask is provided, 255.255.255.255 is used. If default is specified for query/configuration access, entering the IP address and mask for a host or set of hosts in this field allows query access for these hosts (explicit indication).

      Click to add more administrative control restrictions.

  7. Click Save and Continue.

A User template controls which users or accounts can access the switch and what permissions they have with respect to the data found (read/write/execute). You can also control access using groups of users. No parameters are required. Specify the parameters that apply to your configuration.

To create a network template with user parameters included:

  1. Click User.

  2. For individual users or accounts:

    1. Enter a username and password for the individual or account.

    2. Provide a description of the user.

    3. Toggle Should Expire to require the password to expire on a given date.

      The current date and time are automatically provided to show the correct entry format. Modify this to the appropriate expiration date.

  3. Specify advanced parameters:

    1. Click .

    2. If you do not want a home folder created for this user or account, toggle Create home folder.

    3. Generate an SSH key pair for this user or account. Toggle Generate SSH key. When generation is selected, the key pair is stored in the /home/<user>/.ssh directory.

    4. If you are looking to remove access for the user or account, toggle Delete user if present. If you do not want to remove the directories associated with this user or account at the same time, toggle Delete user directory.

    5. Identify this account as a system account. Toggle Is system account.

    6. To specify a group this user or account belongs to, enter the group name in the Groups field.

      Click to add additional groups.

  4. Click Save and Finish.

  5. Once you have finished the template configuration, you are returned to the network templates library.

    This shows the new template you created and which forms have been included in the template. You may only have one or two of the forms in a given template.

Modify Network Templates

For each template that you have created, you can edit, clone, or discard it altogether.

Edit a Network Template

You can change a switch configuration template at any time. The process is similar to creating the template.

To edit a network template:

  1. Enter template edit mode in one of two ways:

    • Hover over the template , then click (edit).

    • Click , then select Edit.

  2. Modify the parameters of the SNMP, NTP, or User forms in the same manner as when you created the template.

  3. Click User, then Save and Finish.

Clone a Network Template

You can take advantage of a template that is significantly similar to another template that you want to create by cloning an existing template. This can save significant time and reduce errors.

To clone a network template:

  1. Enter template clone mode in one of two ways:

    • Hover over the template , then click (clone).

    • Click , then select Clone.

  2. Modify the parameters of the SNMP, NTP, or User forms in the same manner as when you created the template to create the new template.

  3. Click User, then Save and Finish.

    The newly cloned template is now visible on the template library.

Delete a Network Template

You can remove a template when it is no longer needed.

To delete a network template, do one of the following:

The template is no longer visible in the network templates library.

Manage NetQ Configuration Profiles

You can set up a configuration profile to indicate how you want NetQ configured when it is installed or upgraded on your Cumulus Linux switches.

The default configuration profile, NetQ default config, is set up to run in the management VRF and provide info level logging. Both WJH and CPU Limiting are disabled.

You can view, add, and remove NetQ configuration profiles at any time.

View Cumulus NetQ Configuration Profiles

To view existing profiles:

  1. Click (Switches) in the workbench header, then click Manage switches, or click Main Menu (Main Menu) and select Manage Switches.

  2. Click Manage on the NetQ Configurations card.

    Note that on first installation of NetQ, only one profile is listed. This is the default profile provided with NetQ.

  3. Review the profiles.

Create Cumulus NetQ Configuration Profiles

You can specify four options when creating NetQ configuration profiles: the VRF the NetQ Agent runs in, whether WJH is enabled, the logging level, and a CPU usage limit for the NetQ Agent.

To create a profile:

  1. Click (Switches) in the workbench header, then click Manage switches, or click Main Menu (Main Menu) and select Manage Switches.

  2. Click Manage on the NetQ Configurations card.

  3. Click Add Config Profile (Add Config) above the listing.

  4. Enter a name for the profile. This is required.

  5. If you do not want the NetQ Agent to run in the management VRF, select either Default or Custom. The Custom option lets you enter the name of a user-defined VRF.

  6. Optionally enable WJH.

    Refer to WJH for information about this feature. WJH is only available on Mellanox switches.

  7. To set a logging level, click Advanced, then choose the desired level.

  8. Optionally set a CPU usage limit for the NetQ Agent. Click Enable and drag the dot to the desired limit. Refer to this Knowledge Base article for information about this feature.

  9. Click Add to complete the configuration or Close to discard the configuration.

    This example shows the addition of a profile with the CPU limit set to 75 percent.
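
    If you prefer to set a CPU limit directly on an individual switch rather than through a profile, the NetQ CLI also provides an agent CPU limit setting. A minimal sketch, assuming the cpu-limit option is available in your NetQ CLI release:

    cumulus@switch:~$ netq config add agent cpu-limit 75    # assumed option; limits agent CPU usage to 75 percent
    cumulus@switch:~$ netq config restart agent             # restart the agent so the new limit takes effect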

Remove Cumulus NetQ Configuration Profiles

To remove a NetQ configuration profile:

  1. Click (Switches) in the workbench header, then click Manage switches, or click Main Menu (Main Menu) and select Manage Switches.

  2. Click Manage on the NetQ Configurations card.

  3. Select the profile(s) you want to remove and click (Delete).

Manage Switch Configuration

To ease the consistent configuration of your switches, NetQ enables you to create and manage multiple switch configuration profiles. Each configuration can contain Cumulus Linux- and NetQ Agent-related settings. These can then be applied to a group of switches at once.

You can view, create, and modify switch configuration profiles and their assignments at any time using the Switch Configurations card.

View Switch Configuration Profiles

You can view existing switch configuration profiles using the Switch Configurations card.

  1. Open the lifecycle management (Manage Switch Assets) dashboard.

  2. Locate the Switch Configurations card.

  3. Click Manage to view the list of existing switch templates.

Create Switch Configuration Profiles

No default configurations are provided on installation of NetQ. This enables you to create configurations that match your specifications.

To create a switch configuration profile:

  1. Open the lifecycle management (Manage Switch Assets) dashboard.

  2. Click Add on the Switch Configurations card.

  3. Enter a name for the configuration. This is required and must be a maximum of 22 characters, including spaces.

  4. Decide which aspects of configuration you want included in this template: CL configuration and/or NetQ Agent configuration profiles.

  5. Specify the settings for each using the following instructions.

Three configuration options are available for the Cumulus Linux configuration portion of the switch configuration profile. Note that two of those are required.

  1. Select either the Default or Management interface to be used for communications with the switches with this profile assigned. Typically the default interface is xxx and the management interface is either eth0 or eth1.

  2. Select the type of switch that will have this configuration assigned from the Choose Switch type dropdown. Currently this includes Mellanox SN series of switches.

  3. If you want to include network settings in this configuration, click Add.

    This opens the Network Template forms. You can select an existing network template to pre-populate the parameters already specified in that template, or you can start from scratch to create a different set of network settings.

  1. Select the template from the dropdown.

  2. If you have selected a network template that has any SNMP parameters specified, you must specify the additional required parameters, then click Continue or click NTP.

  3. If the selected network template has any NTP parameters specified, you must specify the additional required parameters, then click Continue or click User.

  4. If the selected network template has any User parameters specified, you must specify the additional required parameters, then click Done.

  5. If you think this Cumulus Linux configuration is one that you will use regularly, you can make it a template. Enter a name for the configuration and click Yes.

  1. Select the SNMP, NTP, or User forms to specify parameters for this configuration. Note that certain parameters on each form are required, indicated by red asterisks (*). Refer to Create Network Templates for a description of the fields.

  2. When you have completed the network settings, click Done.

    If you are not on the User form, you need to go to that tab for the Done option to appear.

In either case, if you change your mind about including network settings, click to exit the form.

  1. Click NetQ Agent Configuration.
  1. Select an existing NetQ Configuration profile or create a custom one.

    To use an existing configuration profile as a starting point:

    1. Select the configuration profile from the dropdown.

    2. Modify any of the parameters as needed or click Continue.

    To create a new configuration profile:

    1. Select values as appropriate for your situation. Refer to Create NetQ Configuration Profiles for descriptions of these parameters.

    2. Click Continue.

The final step is to assign the switch configuration that you have just created to one or more switches.

To assign the configuration:

  1. Click Switches.

    A few items to note on this tab:

    • Above the switches (left) are the number of switches that can be assigned and the number of switches that have already been assigned.
    • Above the switches (right) are management tools to help you find the switches you want to assign to this configuration, including select all, clear, filter, and search.
  1. Select the switches to be assigned this configuration.

    In this example, we searched for all leaf switches, then clicked select all.

  1. Click Save and Finish.

  2. To run the job to apply the configuration, you first have the option to change the hostnames of the selected switches.

    Either change the hostnames and then click Continue or just click Continue without changing the hostnames.

  3. Enter a name for the job (maximum of 22 characters including spaces), then click Continue.

    This opens the monitoring page for the assignment jobs, similar to the upgrade jobs. The job title bar indicates the name of the switch configuration being applied and the number of switches to be assigned the configuration. (After you have multiple switch configurations created, you might have more than one configuration being applied in a single job.) Each switch element indicates its hostname, IP address, installed Cumulus Linux and NetQ versions, a note indicating this is a new assignment, the switch configuration being applied, and a menu that provides the detailed steps being executed. The last is useful when the assignment fails, as any errors are included in this popup.

  1. Click to return to the switch configuration page, where you can create another configuration and apply it. If you are finished assigning switch configurations to switches, click to return to the lifecycle management dashboard.

  2. When you return to the dashboard, your Switch Configurations card shows the new configurations, and the Config Assignment History card appears, showing a summary status of all configuration assignment jobs attempted.

  1. Click View on the Config Assignment History card to open the details of all assignment jobs. Refer to Manage Switch Configurations for more detail about this card.

Edit a Switch Configuration

You can edit a switch configuration at any time. After you have made changes to the configuration, you can apply it to the same set of switches or modify the switches using the configuration as part of the editing process.

To edit a switch configuration:

  1. Locate the Switch Configurations card on the lifecycle management dashboard.

  2. Click Manage.

  3. Locate the configuration you want to edit. Scroll down or filter the listing to help find the configuration when there are multiple configurations.

  4. Click , then select Edit.

  5. Follow the instructions in Create Switch Configuration Profiles, starting at Step 5, to make any required edits.

Clone a Switch Configuration

You can clone a switch configuration assignment job at any time.

To clone an assignment job:

  1. Locate the Switch Configurations card on the lifecycle management dashboard.

  2. Click Manage.

  3. Locate the configuration you want to clone. Scroll down or filter the listing to help find the configuration when there are multiple configurations.

  4. Click , then select Clone.

  5. Click , then select Edit.

  6. Change the Configuration Name.

  7. Follow the instructions in Create Switch Configuration Profiles, starting at Step 5, to make any required edits.

Remove a Switch Configuration

You can remove a switch configuration at any time; however if there are switches with the given configuration assigned, you must first assign an alternate configuration to those switches.

To remove a switch configuration:

  1. Locate the Switch Configurations card on the lifecycle management dashboard.

  2. Click Manage.

  3. Locate the configuration you want to remove. Scroll down or filter the listing to help find the configuration when there are multiple configurations.

  4. Click , then select Delete.

    • If any switches are assigned to this configuration, an error message appears. Assign a different switch configuration to the relevant switches and repeat the removal steps.

    • Otherwise, confirm the removal by clicking Yes.

Assign Existing Switch Configuration Profiles

You can assign existing switch configurations to one or more switches at any time. You can also change the switch configuration already assigned to a switch.

If you need to create a new switch configuration, follow the instructions in Create Switch Configuration Profiles.

Add an Assignment

As new switches are added to your network, you might want to use a switch configuration to speed the process and make sure it matches the configuration of similarly designated switches.

To assign an existing switch configuration to switches:

  1. Locate the Switch Configurations card on the lifecycle management dashboard.

  2. Click Manage.

  3. Locate the configuration you want to assign.

    Scroll down or filter the listing by:

    • Time Range: Enter a range of time in which the switch configuration was created, then click Done.
    • All switches: Search for or select individual switches from the list, then click Done.
    • All switch types: Search for or select individual switch series, then click Done.
    • All users: Search for or select individual users who created a switch configuration, then click Done.
    • All filters: Display all filters at once to apply multiple filters at once. Additional filter options are included here. Click Done when satisfied with your filter criteria.

    By default, a filter shows all items of the given filter type until it is restricted by these settings.

  4. Click Select switches in the switch configuration summary.

  5. Select the switches that you want to assign to the switch configuration.

    Scroll down or use the select all, clear, filter, and Search options to help find the switches of interest. You can filter by role, Cumulus Linux version, or NetQ version. The badge on the filter icon indicates the number of filters applied. Colors on filter options are used only to distinguish between options; they carry no other meaning.

    In this example, we have one role defined, and we have selected that role.

    The result is two switches. Note that only the switches that meet the criteria and have no switch configuration assigned are shown. In this example, there are two additional switches with the spine role, but they already have a switch configuration assigned to them. Click on the link above the list to view those switches.

    Continue narrowing the list of switches until all or most of the switches are visible.

  6. Hover over the switches and click or click select all.

  7. Click Done.

  8. To run the job to apply the configuration, you first have the option to change the hostnames of the selected switches.

    Either change the hostnames and then click Continue or just click Continue without changing the hostnames.

  9. If you have additional switches that you want to assign a different switch configuration, follow Steps 3-7 for each switch configuration.

    If you do this, multiple assignment configurations are listed in the bottom area of the page. They all become part of a single assignment job.

  10. When you have all the assignments configured, click Start Assignment to start the job.

  11. Enter a name for the job (maximum of 22 characters including spaces), then click Continue.

  12. Watch the progress, or click to return to the switch configuration page, where you can create another configuration and apply it. If you are finished assigning switch configurations to switches, click to return to the lifecycle management dashboard.

    The Config Assignment History card is updated to include the status of the job you just ran.

Change the Configuration Assignment on a Switch

You can change the switch configuration assignment at any time. For example, you might have a switch that is starting to experience reduced performance, so you want to run What Just Happened (WJH) on it to see if there is a particular problem area. You can reassign the switch to a new configuration with WJH enabled on the NetQ Agent while you test it, and then change it back to its original assignment.

To change the configuration assignment on a switch:

  1. Locate the Switch Configurations card on the lifecycle management dashboard.

  2. Click Manage.

  3. Locate the configuration you want to assign. Scroll down or filter the listing to help find the configuration when there are multiple configurations.

  4. Click Select switches in the switch configuration summary.

  5. Select the switches that you want to assign to the switch configuration.

    Scroll down or use the select all, clear, filter, and Search options to help find the switch(es) of interest.

  6. Hover over the switches and click or click select all.

  7. Click Done.

  8. Click Start Assignment.

  9. Watch the progress.

    On completion, each switch shows the previous assignment and the newly applied configuration assignment.

  10. Click to return to the switch configuration page, where you can create another configuration and apply it. If you are finished assigning switch configurations to switches, click to return to the lifecycle management dashboard.

    The Config Assignment History card is updated to include the status of the job you just ran.

View Switch Configuration History

You can view a history of switch configuration assignments using the Config Assignment History card.

To view a summary, locate the Config Assignment History card on the lifecycle management dashboard.

To view details of the assignment jobs, click View.

Above the jobs, a number of filters are provided to help you find a particular job. To the right of those is a status summary of all jobs. Click in the job listing to see the details of that job. Click to return to the lifecycle management dashboard.

Upgrade Cumulus NetQ Agent Using LCM

The lifecycle management (LCM) feature enables you to upgrade to Cumulus NetQ 3.2.0 on switches with an existing NetQ Agent 2.4.x, 3.0.0, or 3.1.0 release using the NetQ UI. You can upgrade only the NetQ Agent or upgrade both the NetQ Agent and the NetQ CLI at the same time. Up to five jobs can be run simultaneously; however, a given switch can only be contained in one running job at a time.

The upgrade workflow includes the following steps:

Upgrades can be performed from NetQ Agents of 2.4.x, 3.0.0, and 3.1.0 releases to the NetQ 3.2.0 release. Lifecycle management does not support upgrades from NetQ 2.3.1 or earlier releases; you must perform a new installation in these cases. Refer to Install NetQ Agents.

Prepare for a Cumulus NetQ Agent Upgrade

Prepare for NetQ Agent upgrade on switches as follows:

  1. Click (Switches) in the workbench header, then click Manage switches, or click (Main Menu) and select Manage Switches.

  2. Add the upgrade images.

  3. Optionally, specify a default upgrade version.

  4. Verify or add switch access credentials.

  5. Optionally, create a new switch configuration profile.

Your LCM dashboard should look similar to this after you have completed the above steps:

  1. Verify or add switch access credentials.

  2. Configure switch roles to determine the order in which the switches get upgraded.

  3. Upload the Cumulus Linux install images.

Perform a Cumulus NetQ Agent Upgrade

You can upgrade Cumulus NetQ Agents on switches as follows:

  1. Click Manage on the Switches card.

  2. Select the individual switches (or click to select all switches) with older NetQ releases that you want to upgrade. If needed, use the filter to narrow the listing and find the relevant switches.

  3. Click (Upgrade NetQ) above the table.

    From this point forward, the software walks you through the upgrade process, beginning with a review of the switches that you selected for upgrade.

  1. Verify that the number of switches selected for upgrade matches your expectation.

  2. Enter a name for the upgrade job. The name can contain a maximum of 22 characters (including spaces).

  3. Review each switch:

    • Is the NetQ Agent version between 2.4.0 and 3.1.1? If not, this switch can only be upgraded through the switch discovery process.
    • Is the configuration profile the one you want to apply? If not, click Change config, then select an alternate profile to apply to all selected switches.

You can apply different profiles to switches in a single upgrade job by selecting a subset of switches (click the checkbox for each switch) and then choosing a different profile. You can also change the profile on a per-switch basis by clicking the current profile link and selecting an alternate one.

Scroll down to view all selected switches or use Search to find a particular switch of interest.

  1. After you are satisfied with the included switches, click Next.

  2. Review the summary indicating the number of switches and the configuration profile to be used. If either is incorrect, click Back and review your selections.

  1. Select the version of NetQ Agent for upgrade. If you have designated a default version, keep the Default selection. Otherwise, select an alternate version by clicking Custom and selecting it from the list.
  1. Click Next.

  2. Several checks are performed to eliminate preventable problems during the upgrade process.

  1. Watch the progress of the upgrade job.
  1. Click to return to the Switches listing.

    For the switches you upgraded, you can verify the version is correctly listed in the NetQ_Version column. Click to return to the lifecycle management dashboard.

    The NetQ Install and Upgrade History card is now visible and shows the status of this upgrade job.

To upgrade the NetQ Agent on one or more switches, run:

netq lcm upgrade name <text-job-name> cl-version <text-cumulus-linux-version> netq-version <text-netq-version> hostnames <text-switch-hostnames> [run-restore-on-failure] [run-before-after]

This example creates a NetQ Agent upgrade job called upgrade-cl410-nq320. It upgrades the NetQ Agent on the spine01 and spine02 switches to version 3.2.0.

cumulus@switch:~$ netq lcm upgrade name upgrade-cl410-nq320 cl-version 4.1.0 netq-version 3.2.0 hostnames spine01,spine02

Including the run-restore-on-failure option restores the switches to their earlier NetQ Agent version should the upgrade fail. The run-before-after option generates a network snapshot before the upgrade begins and another when it is completed. The snapshots are visible in the NetQ UI.
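
For example, the following command runs the same job as above with both options enabled:

cumulus@switch:~$ netq lcm upgrade name upgrade-cl410-nq320 cl-version 4.1.0 netq-version 3.2.0 hostnames spine01,spine02 run-restore-on-failure run-before-after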

Analyze the NetQ Agent Upgrade Results

After starting the upgrade you can monitor the progress in the NetQ UI. Progress can be monitored from the preview page or the Upgrade History page.

From the preview page, a green circle with rotating arrows is shown on each switch as it is working. Alternately, you can close the detail of the job and see a summary of all current and past upgrade jobs on the NetQ Install and Upgrade History page. The job started most recently is shown at the top, and the data is refreshed periodically.

If you are disconnected while the job is in progress, it may appear as if nothing is happening. Try closing (click ) and reopening your view (click ), or refreshing the page.

Monitor the NetQ Agent Upgrade Job

Several viewing options are available for monitoring the upgrade job.

Sample Successful NetQ Agent Upgrade

This example shows that all four of the selected switches were upgraded successfully. You can see the results in the Switches list as well.

Sample Failed NetQ Agent Upgrade

This example shows that an error has occurred trying to upgrade two of the four switches in a job. The error indicates that the access permissions for the switches are invalid. In this case, you need to modify the switch access credentials and then create a new upgrade job.

If you were watching this job from the LCM dashboard view, click View on the NetQ Install and Upgrade History card to return to the detailed view to resolve any issues that occurred.

Reasons for NetQ Agent Upgrade Failure

Upgrades can fail at any of the stages of the process, including when backing up data, upgrading the Cumulus NetQ software, and restoring the data. Failures can also occur when attempting to connect to a switch or perform a particular task on the switch.

Some of the common reasons for upgrade failures and the errors they present:

Reason: Switch is not reachable via SSH
Error message: Data could not be sent to remote host “192.168.0.15”. Make sure this host can be reached over ssh: ssh: connect to host 192.168.0.15 port 22: No route to host

Reason: Switch is reachable, but user-provided credentials are invalid
Error message: Invalid/incorrect username/password. Skipping remaining 2 retries to prevent account lockout: Warning: Permanently added ‘<hostname-ipaddr>’ to the list of known hosts. Permission denied, please try again.

Reason: Switch is reachable, but a valid Cumulus Linux license is not installed
Error message: 1587866683.880463 2020-04-26 02:04:43 license.c:336 CRIT No license file. No license installed!

Reason: Upgrade task could not be run
Error message: The failure message depends on why the task could not be run. For example: /etc/network/interfaces: No such file or directory

Reason: Upgrade task failed
Error message: Failed at- <task that failed>. For example: Failed at- MLAG check for the peerLink interface status

Reason: Retry failed after five attempts
Error message: FAILED In all retries to process the LCM Job

Upgrade Cumulus Linux Using LCM

LCM provides the ability to upgrade Cumulus Linux on one or more switches in your network through the NetQ UI or the NetQ CLI. Up to five upgrade jobs can be run simultaneously; however, a given switch can only be contained in one running job at a time.

Upgrades can be performed between Cumulus Linux 3.x releases, and between Cumulus Linux 4.x releases. Lifecycle management does not support upgrades from Cumulus Linux 3.x to 4.x releases.

Workflows for Cumulus Linux Upgrades Using LCM

There are three methods available through LCM for upgrading Cumulus Linux on your switches based on whether the NetQ Agent is already installed on the switch or not, and whether you want to use the NetQ UI or the NetQ CLI:

The workflows vary slightly with each approach:

Upgrade Cumulus Linux on Switches with NetQ Agent Installed

You can upgrade Cumulus Linux on switches that already have a NetQ Agent (version 2.4.x or later) installed using either the NetQ UI or NetQ CLI.

Prepare for Upgrade

  1. Click (Switches) in any workbench header, then click Manage switches.

  2. Upload the Cumulus Linux upgrade images.

  3. Optionally, specify a default upgrade version.

  4. Verify the switches you want to manage are running NetQ Agent 2.4 or later. Refer to Manage Switches.

  5. Optionally, create a new NetQ configuration profile.

  6. Configure switch access credentials.

  7. Assign a role to each switch (optional, but recommended).

Your LCM dashboard should look similar to this after you have completed these steps:

  1. Verify network access to the relevant Cumulus Linux license file.

  2. Upload the Cumulus Linux upgrade images.

  3. Verify the switches you want to manage are running NetQ Agent 2.4 or later. Refer to Manage Switches.

  4. Configure switch access credentials.

  5. Assign a role to each switch (optional, but recommended).

Perform a Cumulus Linux Upgrade

Upgrade Cumulus Linux on switches through either the NetQ UI or NetQ CLI:

  1. Click (Switches) in any workbench header, then select Manage switches.

  2. Click Manage on the Switches card.

  1. Select the individual switches (or click to select all switches) that you want to upgrade. If needed, use the filter to narrow the listing and find the relevant switches.
  1. Click (Upgrade CL) above the table.

    From this point forward, the software walks you through the upgrade process, beginning with a review of the switches that you selected for upgrade.

  1. Give the upgrade job a name. This is required, but can be no more than 22 characters, including spaces and special characters.

  2. Verify that the switches you selected are included, and that they have the correct IP address and roles assigned.

    • If you accidentally included a switch that you do NOT want to upgrade, hover over the switch information card and click to remove it from the upgrade job.
    • If the role is incorrect or missing, click , then select a role for that switch from the dropdown. Click to discard a role change.
  1. When you are satisfied that the list of switches is accurate for the job, click Next.

  2. Verify that you want to use the default Cumulus Linux or NetQ version for this upgrade job. If not, click Custom and select an alternate image from the list.

  1. Note that the switch access authentication method, Using global access credentials, indicates you have chosen either basic authentication with a username and password or SSH key-based authentication for all of your switches. Authentication on a per-switch basis is not currently available.

  2. Click Next.

  3. Verify the upgrade job options.

    By default, NetQ takes a network snapshot before the upgrade and another after the upgrade is complete. It also rolls back to the original Cumulus Linux version on any switch that fails to upgrade.

    You can exclude selected services and protocols from the snapshots. By default, node and services are included, but you can deselect any of the other items. Click on one to remove it; click again to include it. This is helpful when you are not running a particular protocol or you have concerns about the amount of time it will take to run the snapshot. Note that removing services or protocols from the job may produce non-equivalent results compared with prior snapshots.

    While these options provide a smoother upgrade process and are highly recommended, you can disable either one by clicking No next to it.

  1. Click Next.

  2. After the pre-checks have completed successfully, click Preview. If there are failures, refer to Precheck Failures.

    These checks verify the following:

    • Selected switches are not currently scheduled for, or in the middle of, a Cumulus Linux or NetQ Agent upgrade
    • Selected versions of Cumulus Linux and NetQ Agent are valid upgrade paths
    • All mandatory parameters have valid values, including MLAG configurations
    • All switches are reachable
    • The order to upgrade the switches, based on roles and configurations
  1. Review the job preview.

    When all of your switches have roles assigned, this view displays the chosen job options (top center), the pre-checks status (top right and left in Pre-Upgrade Tasks), the order in which the switches are planned for upgrade (center; upgrade starts from the left), and the post-upgrade tasks status (right).

  1. When you are happy with the job specifications, click Start Upgrade.

  2. Click Yes to confirm that you want to continue with the upgrade, or click Cancel to discard the upgrade job.

Perform the upgrade using the netq lcm upgrade command, providing a name for the upgrade job, the Cumulus Linux and NetQ version, and the hostname(s) to be upgraded:

cumulus@switch:~$ netq lcm upgrade name upgrade-cl410 cl-version 4.1.0 netq-version 3.1.0 hostnames spine01,spine02

Optionally, you can apply some job options, including creation of network snapshots and previous version restoration if a failure occurs.

Network Snapshot Creation

You can also generate a Network Snapshot before and after the upgrade by adding the run-before-after option to the command:

cumulus@switch:~$ netq lcm upgrade name upgrade-3712 cl-version 3.7.12 netq-version 3.1.0 hostnames spine01,spine02,leaf01,leaf02 order spine,leaf run-before-after

Restore on an Upgrade Failure

You can have LCM restore the previous version of Cumulus Linux if the upgrade job fails by adding the run-restore-on-failure option to the command. This is highly recommended.

cumulus@switch:~$ netq lcm upgrade name upgrade-3712 cl-version 3.7.12 netq-version 3.1.0 hostnames spine01,spine02,leaf01,leaf02 order spine,leaf run-restore-on-failure

Precheck Failures

If one or more of the pre-checks fail, resolve the related issue and start the upgrade again. In the NetQ UI, these failures appear on the Upgrade Preview page. In the NetQ CLI, they appear as error messages in the netq lcm show upgrade-jobs command output.

Expand the following dropdown to view common failures, their causes and corrective actions.

Precheck Failure Messages

Analyze Results

After starting the upgrade you can monitor the progress of your upgrade job and the final results. While the views are different, essentially the same information is available from either the NetQ UI or the NetQ CLI.

You can track the progress of your upgrade job from the Preview page or the Upgrade History page of the NetQ UI.

From the preview page, a green circle with rotating arrows is shown above each step as it is working. Alternately, you can close the detail of the job and see a summary of all current and past upgrade jobs on the Upgrade History page. The job started most recently is shown at the bottom, and the data is refreshed every minute.

If you are disconnected while the job is in progress, it may appear as if nothing is happening. Try closing (click ) and reopening your view (click ), or refreshing the page.

Several viewing options are available for monitoring the upgrade job.

  • Monitor the job with full details open on the Preview page:
  • Monitor the job with summary information only in the CL Upgrade History page. Open this view by clicking in the full details view:
  • Monitor the job through the CL Upgrade History card on the LCM dashboard. Click twice to return to the LCM dashboard. As you perform more upgrades the graph displays the success and failure of each job.

Sample Successful Upgrade

On successful completion, you can:

  • Compare the network snapshots taken before and after the upgrade.
  • Download details about the upgrade in the form of a JSON-formatted file, by clicking Download Report.

  • View the changes on the Switches card of the LCM dashboard.

    Click , then Upgrade Switches.

Sample Failed Upgrade

If an upgrade job fails for any reason, you can view the associated error(s):

  1. From the CL Upgrade History dashboard, find the job of interest.
  1. Click .

  2. Click .

  1. To view what step in the upgrade process failed, click and scroll down. Click to close the step list.
  1. To view details about the errors, either double-click the failed step or click Details and scroll down as needed. Click to collapse the step detail. Click to close the detail popup.

To see the progress of current upgrade jobs and the history of previous upgrade jobs, run netq lcm show upgrade-jobs:

cumulus@switch:~$ netq lcm show upgrade-jobs
Job ID       Name            CL Version           Pre-Check Status                 Warnings         Errors       Start Time
------------ --------------- -------------------- -------------------------------- ---------------- ------------ --------------------
job_cl_upgra Leafs upgr to C 4.2.0                COMPLETED                                                      Fri Sep 25 17:16:10
de_ff9c35bc4 L410                                                                                                2020
950e92cf49ac
bb7eb4fc6e3b
7feca7d82960
570548454c50
cd05802
job_cl_upgra Spines to 4.2.0 4.2.0                COMPLETED                                                      Fri Sep 25 16:37:08
de_9b60d3a1f                                                                                                     2020
dd3987f787c7
69fd92f2eef1
c33f56707f65
4a5dfc82e633
dc3b860
job_upgrade_ 3.7.12 Upgrade  3.7.12               WARNING                                                        Fri Apr 24 20:27:47
fda24660-866                                                                                                     2020
9-11ea-bda5-
ad48ae2cfafb
job_upgrade_ DataCenter      3.7.12               WARNING                                                        Mon Apr 27 17:44:36
81749650-88a                                                                                                     2020
e-11ea-bda5-
ad48ae2cfafb
job_upgrade_ Upgrade to CL3. 3.7.12               COMPLETED                                                      Fri Apr 24 17:56:59
4564c160-865 7.12                                                                                                2020
3-11ea-bda5-
ad48ae2cfafb

To see details of a particular upgrade job, run netq lcm show status job-ID:

cumulus@switch:~$ netq lcm show status job_upgrade_fda24660-8669-11ea-bda5-ad48ae2cfafb
Hostname    CL Version    Backup Status    Backup Start Time         Restore Status    Restore Start Time        Upgrade Status    Upgrade Start Time
----------  ------------  ---------------  ------------------------  ----------------  ------------------------  ----------------  ------------------------
spine02     4.1.0         FAILED           Fri Sep 25 16:37:40 2020  SKIPPED_ON_FAILURE  N/A                   SKIPPED_ON_FAILURE  N/A
spine03     4.1.0         FAILED           Fri Sep 25 16:37:40 2020  SKIPPED_ON_FAILURE  N/A                   SKIPPED_ON_FAILURE  N/A
spine04     4.1.0         FAILED           Fri Sep 25 16:37:40 2020  SKIPPED_ON_FAILURE  N/A                   SKIPPED_ON_FAILURE  N/A
spine01     4.1.0         FAILED           Fri Sep 25 16:40:26 2020  SKIPPED_ON_FAILURE  N/A                   SKIPPED_ON_FAILURE  N/A

Postcheck Failures

Upgrades can be considered successful and still have post-check warnings. For example, the OS has been updated, but not all services are fully up and running after the upgrade. If one or more of the post-checks fail, warning messages are provided in the Post-Upgrade Tasks section of the preview. Click on the warning category to view the detailed messages.

Expand the following dropdown to view common failures, their causes and corrective actions.

Post-check Failure Messages

Reasons for Upgrade Job Failure

Upgrades can fail at any of the stages of the process, including when backing up data, upgrading the Cumulus Linux software, and restoring the data. Failures can occur when attempting to connect to a switch or perform a particular task on the switch.

Some of the common reasons for upgrade failures and the errors they present:

Reason: Switch is not reachable via SSH
Error message: Data could not be sent to remote host “192.168.0.15”. Make sure this host can be reached over ssh: ssh: connect to host 192.168.0.15 port 22: No route to host

Reason: Switch is reachable, but user-provided credentials are invalid
Error message: Invalid/incorrect username/password. Skipping remaining 2 retries to prevent account lockout: Warning: Permanently added ‘<hostname-ipaddr>’ to the list of known hosts. Permission denied, please try again.

Reason: Switch is reachable, but a valid Cumulus Linux license is not installed
Error message: 1587866683.880463 2020-04-26 02:04:43 license.c:336 CRIT No license file. No license installed!

Reason: Upgrade task could not be run
Error message: The failure message depends on why the task could not be run. For example: /etc/network/interfaces: No such file or directory

Reason: Upgrade task failed
Error message: Failed at- <task that failed>. For example: Failed at- MLAG check for the peerLink interface status

Reason: Retry failed after five attempts
Error message: FAILED In all retries to process the LCM Job

Upgrade Cumulus Linux on Switches Without NetQ Agent Installed

When you want to update Cumulus Linux on switches without NetQ installed, NetQ provides the LCM switch discovery feature. The feature browses your network to find all Cumulus Linux switches, with and without NetQ currently installed, and determines the versions of Cumulus Linux and NetQ installed. The results of switch discovery are then used to install or upgrade Cumulus Linux and Cumulus NetQ on all discovered switches in a single procedure rather than in two steps. Up to five jobs can be run simultaneously; however, a given switch can only be contained in one running job at a time.

If all of your Cumulus Linux switches already have NetQ 2.4.x or later installed, you can upgrade them directly. Refer to Upgrade Cumulus Linux.

To discover switches running Cumulus Linux and upgrade Cumulus Linux and NetQ on them:

  1. Click Main Menu (Main Menu) and select Upgrade Switches, or click (Switches) in the workbench header, then click Manage switches.

  2. On the Switches card, click Discover.

  3. Enter a name for the scan.

  4. Choose whether you want to look for switches by entering IP address ranges OR import switches using a comma-separated values (CSV) file.

    If you do not have a switch listing, then you can manually add the address ranges where your switches are located in the network. This has the advantage of catching switches that may have been missed in a file.

    To discover switches using address ranges:

    1. Enter an IP address range in the IP Range field.

      Ranges can be contiguous, for example 192.168.0.24-64, or non-contiguous, for example 192.168.0.24-64,128-190,235, but they must be contained within a single subnet.

    2. Optionally, enter another IP address range (in a different subnet) by clicking .

      For example, 198.51.100.0-128 or 198.51.100.0-128,190,200-253.

    3. Add additional ranges as needed. Click to remove a range if needed.

    If you decide to use a CSV file instead, the ranges you entered will remain if you return to using IP ranges again.

    If you have a file of switches that you want to import, it can be easier to use that than to enter the IP address ranges manually.

    To import switches through a CSV file:

    1. Click Browse.

    2. Select the CSV file containing the list of switches.

      The CSV file must include a header containing hostname, ip, and port. They can be in any order you like, but the data must match that order. For example, a CSV file that represents the Cumulus reference topology could look like this:
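
      A minimal sketch of such a file is shown below; the hostnames follow the reference topology naming used elsewhere in this guide, while the IP addresses and SSH port are hypothetical placeholders:

        hostname,ip,port
        spine01,192.168.200.21,22
        spine02,192.168.200.22,22
        leaf01,192.168.200.11,22
        leaf02,192.168.200.12,22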

    Click Remove if you decide to use a different file or want to use IP address ranges instead. If you had entered ranges prior to selecting the CSV file option, they are retained.

  5. Note that the switch access credentials defined in Manage Switch Credentials are used to access these switches. If you have issues accessing the switches, you may need to update your credentials.

  6. Click Next.

    When the network discovery is complete, NetQ presents the number of Cumulus Linux switches it has found. They are displayed in categories:

    • Discovered without NetQ: Switches found without NetQ installed
    • Discovered with NetQ: Switches found with some version of NetQ installed
    • Discovered but Rotten: Switches found that are unreachable
    • Incorrect Credentials: Switches found that cannot be reached because the provided access credentials do not match those for the switches
    • OS not Supported: Switches found that are running a Cumulus Linux version not supported by the LCM upgrade feature
    • Not Discovered: IP addresses which did not have an associated Cumulus Linux switch

    If no switches are found for a particular category, that category is not displayed.

  7. Select which switches you want to upgrade from each category by clicking the checkbox on each switch card.

  8. Click Next.

  9. Verify that the number of switches identified for upgrade and the configuration profile to be applied are correct.

  10. Accept the default NetQ version or click Custom and select an alternate version.

  11. By default, the NetQ Agent and CLI are upgraded on the selected switches. If you do not want to upgrade the NetQ CLI, click Advanced and change the selection to No.

  12. Click Next.

  13. Several checks are performed to eliminate preventable problems during the install process.

    These checks verify the following:

    • Selected switches are not currently scheduled for, or in the middle of, a Cumulus Linux or NetQ Agent upgrade
    • Selected versions of Cumulus Linux and NetQ Agent are valid upgrade paths
    • All mandatory parameters have valid values, including MLAG configurations
    • All switches are reachable
    • The order to upgrade the switches, based on roles and configurations

    If any of the pre-checks fail, review the error messages and take appropriate action.

    If all of the pre-checks pass, click Install to initiate the job.

  14. Monitor the job progress.

    After starting the upgrade you can monitor the progress from the preview page or the Upgrade History page.

    From the preview page, a green circle with rotating arrows is shown on each switch as it is working. Alternately, you can close the detail of the job and see a summary of all current and past upgrade jobs on the NetQ Install and Upgrade History page. The job started most recently is shown at the top, and the data is refreshed periodically.

    If you are disconnected while the job is in progress, it may appear as if nothing is happening. Try closing (click ) and reopening your view (click ), or refreshing the page.

    Several viewing options are available for monitoring the upgrade job.

    • Monitor the job with full details open:

    • Monitor the job with only summary information in the NetQ Install and Upgrade History page. Open this view by clicking in the full details view; useful when you have multiple jobs running simultaneously

    • Monitor the job through the NetQ Install and Upgrade History card on the LCM dashboard. Click twice to return to the LCM dashboard.

  15. Investigate any failures and create new jobs to reattempt the upgrade.

Manage Network Snapshots

Creating and comparing network snapshots can be useful to validate that the network state has not changed. Snapshots are typically created when you upgrade or change the configuration of your switches in some way. This section describes the Snapshot card and content, as well as how to create and compare network snapshots at any time. Snapshots can be automatically created during the upgrade process for Cumulus Linux. Refer to Perform a Cumulus Linux Upgrade.

Create a Network Snapshot

The snapshot feature makes it simple to capture the current state of your network, or its state at a time in the past.

To create a snapshot:

  1. From any workbench in the NetQ UI, click in the workbench header.

  2. Click Create Snapshot.

  3. Enter a name for the snapshot.

  4. Choose the time for the snapshot:

    • For the current network state, click Now.

    • For the network state at a previous date and time, click Past, then click in the Start Time field and use the calendar to select the date and time. You may need to scroll down to see the entire calendar.

  5. Choose the services to include in the snapshot.

    In the Choose options field, click any service name to remove that service from the snapshot. This is appropriate if you do not support a particular service, or you are concerned that including it might cause the snapshot to take an excessive amount of time to complete. The checkmark and the service name are grayed out when the service is removed. Click the service again to re-include it in the snapshot; the checkmark turns green and the name is no longer grayed out.

    The Node and Services options are mandatory and cannot be deselected.

    If you remove services, be aware that snapshots taken in the past or future may not be equivalent when performing a network state comparison.

    This example removes the OSPF and Route services from the snapshot being created.

  6. Optionally, scroll down and click in the Notes field to add descriptive text for the snapshot to remind you of its purpose. For example: “This was taken before adding MLAG pairs,” or “Taken after removing the leaf36 switch.”

  7. Click Finish.

    A medium Snapshot card appears on your desktop. Spinning arrows are visible while it works. When it finishes you can see the number of items that have been captured, and if any failed. This example shows a successful result.

    If you have already created other snapshots, Compare is active. Otherwise it is inactive (grayed-out).

  8. When you are finished viewing the snapshot, click Dismiss to close the snapshot. The snapshot is not deleted, merely removed from the workbench.

Compare Network Snapshots

You can compare the state of your network before and after an upgrade or other configuration change to validate that the changes have not created an unwanted change in your network state.

To compare network snapshots:

  1. Create a snapshot (as described in previous section) before you make any changes.

  2. Make your changes.

  3. Create a second snapshot.

  4. Compare the results of the two snapshots.

    Depending on what, if any, cards are open on your workbench:

  1. Put the cards next to each other to view a high-level comparison. Scroll down to see all of the items.

  2. To view a more detailed comparison, click Compare on one of the cards. Select the other snapshot from the list.

  1. Click Compare on the open card.

  2. Select the other snapshot to compare.

  1. Click .

  2. Click Compare Snapshots.

  3. Click on the two snapshots you want to compare.

  4. Click Finish. Note that two snapshots must be selected before Finish is active.

In the latter two cases, the large Snapshot card opens. The only difference is in the card title. If you opened the comparison card from a snapshot on your workbench, the title includes the name of that card. If you open the comparison card through the Snapshot menu, the title is generic, indicating a comparison only. Functionally, you have reached the same point.

Scroll down to view all element comparisons.

Interpreting the Comparison Data

For each network element that is compared, count values and changes are shown:

In this example, a change was made to the VLAN. The snapshot taken before the change (17Apr2020) had a total count of 765 neighbors. The snapshot taken after the change (20Apr2020) had a total count of 771 neighbors. Between the two totals you can see the number of neighbors added and removed from one time to the next, resulting in six new neighbors after the change.

The red and green coloring indicates only that items were removed (red) or added (green). The coloring does not indicate whether the removal or addition of these items is bad or good.

From this card, you can also change which snapshots to compare. Select an alternate snapshot from one of the two snapshot dropdowns and then click Compare.

View Change Details

You can view additional details about the changes that have occurred between the two snapshots by clicking View Details. This opens the full screen Detailed Snapshot Comparison card.

From this card you can:

The following table describes the information provided for each element type when changes are present:

BGP
  • Hostname: Name of the host running the BGP session
  • VRF: Virtual route forwarding interface, if used
  • BGP Session: Session that was removed or added
  • ASN: Autonomous system number
CLAG
  • Hostname: Name of the host running the CLAG session
  • CLAG Sysmac: MAC address for a bond interface pair that was removed or added
Interface
  • Hostname: Name of the host where the interface resides
  • IF Name: Name of the interface that was removed or added
IP Address
  • Hostname: Name of the host where the address was removed or added
  • Prefix: IP address prefix
  • Mask: IP address mask
  • IF Name: Name of the interface that owns the address
Links
  • Hostname: Name of the host where the link was removed or added
  • IF Name: Name of the link
  • Kind: Bond, bridge, eth, loopback, macvlan, swp, vlan, vrf, or vxlan
LLDP
  • Hostname: Name of the discovered host that was removed or added
  • IF Name: Name of the interface
MAC Address
  • Hostname: Name of the host where the MAC address resides
  • MAC address: MAC address that was removed or added
  • VLAN: VLAN associated with the MAC address
Neighbor
  • Hostname: Name of the neighbor peer that was removed or added
  • VRF: Virtual route forwarding interface, if used
  • IF Name: Name of the neighbor interface
  • IP address: Neighbor IP address
Node
  • Hostname: Name of the network node that was removed or added
OSPF
  • Hostname: Name of the host running the OSPF session
  • IF Name: Name of the associated interface that was removed or added
  • Area: Routing domain for this host device
  • Peer ID: Network subnet address of router with access to the peer device
Route
  • Hostname: Name of the host running the route that was removed or added
  • VRF: Virtual route forwarding interface associated with the route
  • Prefix: IP address prefix
Sensors
  • Hostname: Name of the host where the sensor resides
  • Kind: Power supply unit, fan, or temperature
  • Name: Name of the sensor that was removed or added
Services
  • Hostname: Name of the host where the service is running
  • Name: Name of the service that was removed or added
  • VRF: Virtual route forwarding interface associated with the service

Manage Network Snapshots

You can create as many snapshots as you like and view them at any time. When a snapshot becomes old and no longer useful, you can remove it.

To view an existing snapshot:

  1. From any workbench, click in the workbench header.

  2. Click View/Delete Snapshots.

  3. Click View.

  4. Click one or more snapshots you want to view, then click Finish.

    Click Back or Choose Action to cancel viewing of your selected snapshot(s).

To remove an existing snapshot:

  1. From any workbench, click in the workbench header.

  2. Click View/Delete Snapshots.

  3. Click Delete.

  4. Click one or more snapshots you want to remove, then click Finish.

    Click Back or Choose Action to cancel the deletion of your selected snapshot(s).

Decommission Switches

You can decommission a switch or host at any time. You might need to do this when you:

Decommissioning the switch or host removes information about the switch or host from the NetQ database.

To decommission a switch or host:

  1. On the given switch or host, stop and disable the NetQ Agent service.

    cumulus@switch:~$ sudo systemctl stop netq-agent
    cumulus@switch:~$ sudo systemctl disable netq-agent
    
  2. On the NetQ On-premises or Cloud Appliance or VM, decommission the switch or host.

    cumulus@netq-appliance:~$ netq decommission <hostname>
    

Manage NetQ Agents

At various points in time, you might want to change which network nodes are being monitored by NetQ or look more closely at a network node for troubleshooting purposes. Adding the NetQ Agent to a switch or host is described in Install NetQ. Viewing the status of an Agent, disabling an Agent, managing NetQ Agent logging, and configuring the events the agent collects are presented here.

View NetQ Agent Status

To view the health of your NetQ Agents, run:

netq [<hostname>] show agents [fresh | dead | rotten | opta] [around <text-time>] [json]

You can view the status for a given switch, host, or NetQ Appliance or Virtual Machine. You can also filter by status and view the status at a time in the past.
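
For example, you can list only agents in the rotten state, or view agent status as it was about an hour ago. The status keywords come from the syntax above; the "1h" time value is shown as an assumed example of the around format:

cumulus@switch:~$ netq show agents rotten
cumulus@switch:~$ netq show agents around 1h    # '1h' time value format is an assumed example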

To view the current status of all NetQ Agents:

cumulus@switch:~$ netq show agents
Matching agents records:
Hostname          Status           NTP Sync Version                              Sys Uptime                Agent Uptime              Reinitialize Time          Last Changed
----------------- ---------------- -------- ------------------------------------ ------------------------- ------------------------- -------------------------- -------------------------
border01          Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:04:54 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:38 2020
border02          Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:04:57 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:33 2020
fw1               Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:04:44 2020  Tue Sep 29 21:24:48 2020  Tue Sep 29 21:24:48 2020   Thu Oct  1 16:07:26 2020
fw2               Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:04:42 2020  Tue Sep 29 21:24:48 2020  Tue Sep 29 21:24:48 2020   Thu Oct  1 16:07:22 2020
leaf01            Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 16:49:04 2020  Tue Sep 29 21:24:49 2020  Tue Sep 29 21:24:49 2020   Thu Oct  1 16:07:10 2020
leaf02            Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:14 2020  Tue Sep 29 21:24:49 2020  Tue Sep 29 21:24:49 2020   Thu Oct  1 16:07:30 2020
leaf03            Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:37 2020  Tue Sep 29 21:24:49 2020  Tue Sep 29 21:24:49 2020   Thu Oct  1 16:07:24 2020
leaf04            Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:35 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:13 2020
oob-mgmt-server   Fresh            yes      3.1.1-ub18.04u29~1599111022.78b9e43  Mon Sep 21 16:43:58 2020  Mon Sep 21 17:55:00 2020  Mon Sep 21 17:55:00 2020   Thu Oct  1 16:07:31 2020
server01          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:07 2020  Tue Sep 29 21:13:07 2020   Thu Oct  1 16:07:16 2020
server02          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:07 2020  Tue Sep 29 21:13:07 2020   Thu Oct  1 16:07:24 2020
server03          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:56 2020  Tue Sep 29 21:13:07 2020  Tue Sep 29 21:13:07 2020   Thu Oct  1 16:07:12 2020
server04          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:07 2020  Tue Sep 29 21:13:07 2020   Thu Oct  1 16:07:17 2020
server05          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:10 2020  Tue Sep 29 21:13:10 2020   Thu Oct  1 16:07:25 2020
server06          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:10 2020  Tue Sep 29 21:13:10 2020   Thu Oct  1 16:07:21 2020
server07          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:06:48 2020  Tue Sep 29 21:13:10 2020  Tue Sep 29 21:13:10 2020   Thu Oct  1 16:07:28 2020
server08          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:06:45 2020  Tue Sep 29 21:13:10 2020  Tue Sep 29 21:13:10 2020   Thu Oct  1 16:07:31 2020
spine01           Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:34 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:20 2020
spine02           Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:33 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:16 2020
spine03           Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:34 2020  Tue Sep 29 21:25:07 2020  Tue Sep 29 21:25:07 2020   Thu Oct  1 16:07:20 2020
spine04           Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:32 2020  Tue Sep 29 21:25:07 2020  Tue Sep 29 21:25:07 2020   Thu Oct  1 16:07:33 2020

To view NetQ Agents that are not communicating, run:

cumulus@switch:~$ netq show agents rotten
No matching agents records found
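
You can also check agent status as of an earlier point in time by adding the around option shown in the syntax above. For example, to view the status as it was one hour ago (the time value shown here is illustrative):

cumulus@switch:~$ netq show agents around 1h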

To view NetQ Agent status on the NetQ appliance or VM, run:

cumulus@switch:~$ netq show agents opta
Matching agents records:
Hostname          Status           NTP Sync Version                              Sys Uptime                Agent Uptime              Reinitialize Time          Last Changed
----------------- ---------------- -------- ------------------------------------ ------------------------- ------------------------- -------------------------- -------------------------
netq-ts           Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 16:46:53 2020  Tue Sep 29 21:13:07 2020  Tue Sep 29 21:13:07 2020   Thu Oct  1 16:29:51 2020

View NetQ Agent Configuration

You can view the current configuration of a NetQ Agent to determine what data is being collected and where it is being sent. To view this configuration, run:

netq config show agent [kubernetes-monitor|loglevel|stats|sensors|frr-monitor|wjh|wjh-threshold|cpu-limit] [json]

This example shows a NetQ Agent in an on-premises deployment, talking to an appliance or VM at 127.0.0.1 using the default ports and VRF. No special configuration is included to monitor Kubernetes, FRR, interface statistics, sensors, or WJH. No limit has been set on CPU usage, and the default logging level has not been changed.

cumulus@switch:~$ netq config show agent
netq-agent             value      default
---------------------  ---------  ---------
exhibitport
exhibiturl
server                 127.0.0.1  127.0.0.1
cpu-limit              100        100
agenturl
enable-opta-discovery  True       True
agentport              8981       8981
port                   31980      31980
vrf                    default    default
()

To view the configuration of a particular aspect of a NetQ Agent, use the various options.

This example shows a NetQ Agent that has been configured with a CPU limit of 60%.

cumulus@switch:~$ netq config show agent cpu-limit
CPU Quota
-----------
60%
()

Modify the Configuration of the NetQ Agent on a Node

The agent configuration commands enable you to do the following:

Commands apply to one agent at a time, and are run from the switch or host where the NetQ Agent resides.

Add and Remove a NetQ Agent

Adding or removing a NetQ Agent adds or removes the IP address (and port and VRF, when specified) of the appliance or VM from the NetQ configuration file (at /etc/netq/netq.yml). This information tells the agent where to send the data it collects.

To use the NetQ CLI to add or remove a NetQ Agent on a switch or host, run:

netq config add agent server <text-opta-ip> [port <text-opta-port>] [vrf <text-vrf-name>]
netq config del agent server

If you want to use a specific port on the appliance or VM, use the port option. If you want the data sent over a particular virtual route interface, use the vrf option.

This example shows how to add a NetQ Agent and tell it to send the data it collects to the NetQ Appliance or VM at the IPv4 address of 10.0.0.23 using the default port (on-premises = 31980; cloud = 443) and vrf (default).

cumulus@switch:~$ netq config add agent server 10.0.0.23
cumulus@switch:~$ netq config restart agent
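
This example (using illustrative values) also specifies a particular port and VRF with the port and vrf options from the syntax above, sending the data over the mgmt VRF to port 31980 on the appliance or VM:

cumulus@switch:~$ netq config add agent server 10.0.0.23 port 31980 vrf mgmt
cumulus@switch:~$ netq config restart agent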

Disable and Re-enable a NetQ Agent

You can temporarily disable NetQ Agent on a node. Disabling the NetQ Agent maintains the data already collected in the NetQ database, but stops the NetQ Agent from collecting new data until it is re-enabled.

To disable a NetQ Agent, run:

cumulus@switch:~$ netq config stop agent

To re-enable a NetQ Agent, run:

cumulus@switch:~$ netq config restart agent

Configure a NetQ Agent to Limit Switch CPU Usage

Although it is not typically an issue, you can restrict the NetQ Agent from using more than a configurable percentage of a switch's CPU resources. This setting requires the switch to be running Cumulus Linux 3.6 or later, or 4.1.0 or later.

For more detail about this feature, refer to this Knowledge Base article.

This example limits a NetQ Agent from consuming more than 40% of the CPU resources on a Cumulus Linux switch.

cumulus@switch:~$ netq config add agent cpu-limit 40
cumulus@switch:~$ netq config restart agent

To remove the limit, run:

cumulus@switch:~$ netq config del agent cpu-limit
cumulus@switch:~$ netq config restart agent

Configure a NetQ Agent to Collect Data from Selected Services

You can enable and disable collection of data from the FRR (FR Routing), Kubernetes, sensors, and WJH (What Just Happened) by the NetQ Agent.

To configure the agent to start or stop collecting FRR data, run:

cumulus@switch:~$ netq config add agent frr-monitor
cumulus@switch:~$ netq config restart agent

cumulus@switch:~$ netq config del agent frr-monitor
cumulus@switch:~$ netq config restart agent

To configure the agent to start or stop collecting Kubernetes data, run:

cumulus@switch:~$ netq config add agent kubernetes-monitor
cumulus@switch:~$ netq config restart agent

cumulus@switch:~$ netq config del agent kubernetes-monitor
cumulus@switch:~$ netq config restart agent

To configure the agent to start or stop collecting chassis sensor data, run:

cumulus@chassis:~$ netq config add agent sensors
cumulus@chassis:~$ netq config restart agent

cumulus@chassis:~$ netq config del agent sensors
cumulus@chassis:~$ netq config restart agent

This command is only valid when run on a chassis, not a switch.

To configure the agent to start or stop collecting WJH data, run:

cumulus@switch:~$ netq config add agent wjh
cumulus@switch:~$ netq config restart agent

cumulus@switch:~$ netq config del agent wjh
cumulus@switch:~$ netq config restart agent

Configure a NetQ Agent to Send Data to a Server Cluster

If you have a server cluster arrangement for NetQ, you will want to configure the NetQ Agent to send the data it collects to all of the servers in the cluster.

To configure the agent to send data to the servers in your cluster, run:

netq config add agent cluster-servers <text-opta-ip-list> [port <text-opta-port>] [vrf <text-vrf-name>]

The list of IP addresses must be separated by commas, but no spaces. You can optionally specify a port or VRF.

This example configures the NetQ Agent on a switch to send the data to three servers located at 10.0.0.21, 10.0.0.22, and 10.0.0.23 using the rocket VRF.

cumulus@switch:~$ netq config add agent cluster-servers 10.0.0.21,10.0.0.22,10.0.0.23 vrf rocket

To stop a NetQ Agent from sending data to a server cluster, run:

cumulus@switch:~$ netq config del agent cluster-servers

Configure Logging to Troubleshoot a NetQ Agent

The logging level used for a NetQ Agent determines what types of events are logged about the NetQ Agent on the switch or host.

First, you need to decide what level of logging you want to configure. You can configure the logging level to be the same for every NetQ Agent, or selectively increase or decrease the logging level for a NetQ Agent on a problematic node.

Logging Level  Description
debug          Sends notifications for all debugging-related, informational, warning, and error messages.
info           Sends notifications for informational, warning, and error messages (default).
warning        Sends notifications for warning and error messages.
error          Sends notifications for error messages.

You can view the NetQ Agent log directly. Messages have the following structure:

<timestamp> <node> <service>[PID]: <level>: <message>

Element          Description
timestamp        Date and time event occurred in UTC format
node             Hostname of network node where event occurred
service [PID]    Service and Process IDentifier that generated the event
level            Logging level in which the given event is classified; debug, error, info, or warning
message          Text description of event, including the node where the event occurred

For example:

This example shows a portion of a NetQ Agent log with debug level logging.

...
2020-02-16T18:45:53.951124+00:00 spine-1 netq-agent[8600]: INFO: OPTA Discovery exhibit url switch.domain.com port 4786
2020-02-16T18:45:53.952035+00:00 spine-1 netq-agent[8600]: INFO: OPTA Discovery Agent ID spine-1
2020-02-16T18:45:53.960152+00:00 spine-1 netq-agent[8600]: INFO: Received Discovery Response 0
2020-02-16T18:46:54.054160+00:00 spine-1 netq-agent[8600]: INFO: OPTA Discovery exhibit url switch.domain.com port 4786
2020-02-16T18:46:54.054509+00:00 spine-1 netq-agent[8600]: INFO: OPTA Discovery Agent ID spine-1
2020-02-16T18:46:54.057273+00:00 spine-1 netq-agent[8600]: INFO: Received Discovery Response 0
2020-02-16T18:47:54.157985+00:00 spine-1 netq-agent[8600]: INFO: OPTA Discovery exhibit url switch.domain.com port 4786
2020-02-16T18:47:54.158857+00:00 spine-1 netq-agent[8600]: INFO: OPTA Discovery Agent ID spine-1
2020-02-16T18:47:54.171170+00:00 spine-1 netq-agent[8600]: INFO: Received Discovery Response 0
2020-02-16T18:48:54.260903+00:00 spine-1 netq-agent[8600]: INFO: OPTA Discovery exhibit url switch.domain.com port 4786
...

To configure debug-level logging:

  1. Set the logging level to debug.

    cumulus@switch:~$ netq config add agent loglevel debug
    
  2. Restart the NetQ Agent.

    cumulus@switch:~$ netq config restart agent
    
  3. Optionally, verify connection to the NetQ appliance or VM by viewing the netq-agent.log messages.
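
    For example, you can follow the log as new messages arrive; this assumes the default agent log location of /var/log/netq-agent.log on the switch:

    cumulus@switch:~$ sudo tail -f /var/log/netq-agent.log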

To configure warning-level logging:

cumulus@switch:~$ netq config add agent loglevel warning
cumulus@switch:~$ netq config restart agent

Disable Agent Logging

If you have set the logging level to debug for troubleshooting, it is recommended that you either change the logging level to a less verbose mode or disable agent logging when you are finished troubleshooting.

To change the logging level from debug to another level, run:

cumulus@switch:~$ netq config add agent loglevel [info|warning|error]
cumulus@switch:~$ netq config restart agent

To disable all logging:

cumulus@switch:~$ netq config del agent loglevel
cumulus@switch:~$ netq config restart agent

Change NetQ Agent Polling Data and Frequency

The NetQ Agent contains a pre-configured set of modular commands that run periodically and send event and resource data to the NetQ appliance or VM. You can fine-tune which data the agent polls for and vary the polling frequency using the NetQ CLI.

For example, if your network is not running OSPF, you can disable the commands that poll for OSPF data. Or you can poll for LLDP data less frequently by increasing the polling interval for the lldp-json command from its default of 120 seconds. By not polling for selected data, or by polling less frequently, you can reduce the NetQ Agent's CPU usage on the switch.

Depending on the switch platform, the NetQ Agent might not execute some of the supported protocol commands. For example, if a switch has no VXLAN capability, the agent skips all VXLAN-related commands.

You cannot create new commands in this release.

Supported Commands

To see the list of supported modular commands, run:

cumulus@switch:~$ netq config show agent commands
 Service Key               Period  Active       Command
-----------------------  --------  --------  ---------------------------------------------------------------------
bgp-neighbors                  60  yes       ['/usr/bin/vtysh', '-c', 'show ip bgp vrf all neighbors json']
evpn-vni                       60  yes       ['/usr/bin/vtysh', '-c', 'show bgp l2vpn evpn vni json']
lldp-json                     120  yes       /usr/sbin/lldpctl -f json
clagctl-json                   60  yes       /usr/bin/clagctl -j
dpkg-query                  21600  yes       dpkg-query --show -f ${Package},${Version},${Status}\n
ptmctl-json                   120  yes       ptmctl
mstpctl-bridge-json            60  yes       /sbin/mstpctl showall json
cl-license                  21600  yes       /usr/sbin/switchd -lic
ports                        3600  yes       Netq Predefined Command
proc-net-dev                   30  yes       Netq Predefined Command
agent_stats                   300  yes       Netq Predefined Command
agent_util_stats               30  yes       Netq Predefined Command
tcam-resource-json            120  yes       /usr/cumulus/bin/cl-resource-query -j
btrfs-json                   1800  yes       /sbin/btrfs fi usage -b /
config-mon-json               120  yes       Netq Predefined Command
running-config-mon-json        30  yes       Netq Predefined Command
cl-support-json               180  yes       Netq Predefined Command
resource-util-json            120  yes       findmnt / -n -o FS-OPTIONS
smonctl-json                   30  yes       /usr/sbin/smonctl -j
sensors-json                   30  yes       sensors -u
ssd-util-json               86400  yes       sudo /usr/sbin/smartctl -a /dev/sda
ospf-neighbor-json             60  yes       ['/usr/bin/vtysh', '-c', 'show ip ospf vrf all neighbor detail json']
ospf-interface-json            60  yes       ['/usr/bin/vtysh', '-c', 'show ip ospf vrf all interface json']

The NetQ predefined commands are described as follows:

Modify the Polling Frequency

You can change the polling frequency of a modular command. The frequency is specified in seconds. For example, to change the polling frequency of the lldp-json command to 60 seconds from its default of 120 seconds, run:

cumulus@switch:~$ netq config add agent command service-key lldp-json poll-period 60
Successfully added/modified Command service lldpd command /usr/sbin/lldpctl -f json

cumulus@switch:~$ netq config show agent commands
 Service Key               Period  Active       Command
-----------------------  --------  --------  ---------------------------------------------------------------------
bgp-neighbors                  60  yes       ['/usr/bin/vtysh', '-c', 'show ip bgp vrf all neighbors json']
evpn-vni                       60  yes       ['/usr/bin/vtysh', '-c', 'show bgp l2vpn evpn vni json']
lldp-json                      60  yes       /usr/sbin/lldpctl -f json
clagctl-json                   60  yes       /usr/bin/clagctl -j
dpkg-query                  21600  yes       dpkg-query --show -f ${Package},${Version},${Status}\n
ptmctl-json                   120  yes       /usr/bin/ptmctl -d -j
mstpctl-bridge-json            60  yes       /sbin/mstpctl showall json
cl-license                  21600  yes       /usr/sbin/switchd -lic
ports                        3600  yes       Netq Predefined Command
proc-net-dev                   30  yes       Netq Predefined Command
agent_stats                   300  yes       Netq Predefined Command
agent_util_stats               30  yes       Netq Predefined Command
tcam-resource-json            120  yes       /usr/cumulus/bin/cl-resource-query -j
btrfs-json                   1800  yes       /sbin/btrfs fi usage -b /
config-mon-json               120  yes       Netq Predefined Command
running-config-mon-json        30  yes       Netq Predefined Command
cl-support-json               180  yes       Netq Predefined Command
resource-util-json            120  yes       findmnt / -n -o FS-OPTIONS
smonctl-json                   30  yes       /usr/sbin/smonctl -j
sensors-json                   30  yes       sensors -u
ssd-util-json               86400  yes       sudo /usr/sbin/smartctl -a /dev/sda
ospf-neighbor-json             60  no        ['/usr/bin/vtysh', '-c', 'show ip ospf vrf all neighbor detail json']
ospf-interface-json            60  no        ['/usr/bin/vtysh', '-c', 'show ip ospf vrf all interface json']

Disable a Command

You can disable any of these commands if they are not needed on your network. This can help reduce the compute resources the NetQ Agent consumes on the switch. For example, if your network does not run OSPF, you can disable the two OSPF commands:

cumulus@switch:~$ netq config add agent command service-key ospf-interface-json enable False
Command Service ospf-interface-json is disabled

cumulus@switch:~$ netq config add agent command service-key ospf-neighbor-json enable False
Command Service ospf-neighbor-json is disabled

cumulus@switch:~$ netq config show agent commands
 Service Key               Period  Active       Command
-----------------------  --------  --------  ---------------------------------------------------------------------
bgp-neighbors                  60  yes       ['/usr/bin/vtysh', '-c', 'show ip bgp vrf all neighbors json']
evpn-vni                       60  yes       ['/usr/bin/vtysh', '-c', 'show bgp l2vpn evpn vni json']
lldp-json                      60  yes       /usr/sbin/lldpctl -f json
clagctl-json                   60  yes       /usr/bin/clagctl -j
dpkg-query                  21600  yes       dpkg-query --show -f ${Package},${Version},${Status}\n
ptmctl-json                   120  yes       /usr/bin/ptmctl -d -j
mstpctl-bridge-json            60  yes       /sbin/mstpctl showall json
cl-license                  21600  yes       /usr/sbin/switchd -lic
ports                        3600  yes       Netq Predefined Command
proc-net-dev                   30  yes       Netq Predefined Command
agent_stats                   300  yes       Netq Predefined Command
agent_util_stats               30  yes       Netq Predefined Command
tcam-resource-json            120  yes       /usr/cumulus/bin/cl-resource-query -j
btrfs-json                   1800  yes       /sbin/btrfs fi usage -b /
config-mon-json               120  yes       Netq Predefined Command
running-config-mon-json        30  yes       Netq Predefined Command
cl-support-json               180  yes       Netq Predefined Command
resource-util-json            120  yes       findmnt / -n -o FS-OPTIONS
smonctl-json                   30  yes       /usr/sbin/smonctl -j
sensors-json                   30  yes       sensors -u
ssd-util-json               86400  yes       sudo /usr/sbin/smartctl -a /dev/sda
ospf-neighbor-json             60  no        ['/usr/bin/vtysh', '-c', 'show ip ospf vrf all neighbor detail json']
ospf-interface-json            60  no        ['/usr/bin/vtysh', '-c', 'show ip ospf vrf all interface json']
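
If you later want to re-enable a single command without resetting every command to its default (described below), you can set enable back to True; this mirrors the disable syntax shown above:

cumulus@switch:~$ netq config add agent command service-key ospf-interface-json enable True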

Reset to Default

To quickly revert to the original command settings, run:

cumulus@switch:~$ netq config agent factory-reset commands
Netq Command factory reset successful

cumulus@switch:~$ netq config show agent commands
 Service Key               Period  Active       Command
-----------------------  --------  --------  ---------------------------------------------------------------------
bgp-neighbors                  60  yes       ['/usr/bin/vtysh', '-c', 'show ip bgp vrf all neighbors json']
evpn-vni                       60  yes       ['/usr/bin/vtysh', '-c', 'show bgp l2vpn evpn vni json']
lldp-json                     120  yes       /usr/sbin/lldpctl -f json
clagctl-json                   60  yes       /usr/bin/clagctl -j
dpkg-query                  21600  yes       dpkg-query --show -f ${Package},${Version},${Status}\n
ptmctl-json                   120  yes       /usr/bin/ptmctl -d -j
mstpctl-bridge-json            60  yes       /sbin/mstpctl showall json
cl-license                  21600  yes       /usr/sbin/switchd -lic
ports                        3600  yes       Netq Predefined Command
proc-net-dev                   30  yes       Netq Predefined Command
agent_stats                   300  yes       Netq Predefined Command
agent_util_stats               30  yes       Netq Predefined Command
tcam-resource-json            120  yes       /usr/cumulus/bin/cl-resource-query -j
btrfs-json                   1800  yes       /sbin/btrfs fi usage -b /
config-mon-json               120  yes       Netq Predefined Command
running-config-mon-json        30  yes       Netq Predefined Command
cl-support-json               180  yes       Netq Predefined Command
resource-util-json            120  yes       findmnt / -n -o FS-OPTIONS
smonctl-json                   30  yes       /usr/sbin/smonctl -j
sensors-json                   30  yes       sensors -u
ssd-util-json               86400  yes       sudo /usr/sbin/smartctl -a /dev/sda
ospf-neighbor-json             60  yes       ['/usr/bin/vtysh', '-c', 'show ip ospf vrf all neighbor detail json']
ospf-interface-json            60  yes       ['/usr/bin/vtysh', '-c', 'show ip ospf vrf all interface json']

Post Installation Configuration Options

This topic describes how to configure deployment options that can only be performed after installation or upgrade of NetQ is complete.

Install a Custom Signed Certificate

The NetQ UI version 3.0.x and later ships with a self-signed certificate which is sufficient for non-production environments or cloud deployments. For on-premises deployments, however, you receive a warning from your browser that this default certificate is not trusted when you first log in to the NetQ UI. You can avoid this by installing your own signed certificate.

The following items are needed to perform the certificate installation:

You can install a certificate using the Admin UI or the NetQ CLI.

  1. Enter https://<hostname-or-ipaddr-of-netq-appliance-or-vm>:8443 in your browser address bar to open the Admin UI.

  2. From the Health page, click Settings.

  3. Click Edit.

  4. Enter the hostname, certificate and certificate key in the relevant fields.

  5. Click Lock.

  1. Log in to the NetQ On-premises Appliance or VM via SSH and copy your certificate and key file there.

  2. Generate a Kubernetes secret called netq-gui-ingress-tls.

    cumulus@netq-ts:~$ kubectl create secret tls netq-gui-ingress-tls \
        --namespace default \
        --key <name of your key file>.key \
        --cert <name of your cert file>.crt
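
    For example, with hypothetical certificate files named netq-ui.key and netq-ui.crt in the current directory:

    cumulus@netq-ts:~$ kubectl create secret tls netq-gui-ingress-tls \
        --namespace default \
        --key netq-ui.key \
        --cert netq-ui.crt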
    
  3. Verify that the secret is created.

    cumulus@netq-ts:~$ kubectl get secret
    
    NAME                               TYPE                                  DATA   AGE
    netq-gui-ingress-tls               kubernetes.io/tls                     2      5s
    
  4. Update the ingress rule file so that it uses your new certificate.

    1. Create a new file called ingress.yaml.

    2. Copy and add this content to the file.

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        annotations:
          kubernetes.io/ingress.class: "ingress-nginx"
          nginx.ingress.kubernetes.io/ssl-passthrough: "true"
          nginx.ingress.kubernetes.io/ssl-redirect: "true"
          nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
          nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
          nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
          nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
          nginx.ingress.kubernetes.io/proxy-body-size: 10g
          nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
        name: netq-gui-ingress-external
        namespace: default
      spec:
        rules:
        - host: <your-hostname>
          http:
            paths:
            - backend:
                serviceName: netq-gui
                servicePort: 80
        tls:
        - hosts:
          - <your-hostname>
          secretName: netq-gui-ingress-tls
      
    3. Replace <your-hostname> with the FQDN of the NetQ On-premises Appliance or VM.

  5. Apply the new rule.

    cumulus@netq-ts:~$ kubectl apply -f ingress.yaml
    ingress.extensions/netq-gui-ingress-external configured
    

    A message like the one here is shown if your ingress rule is successfully configured.
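
    Optionally, you can also confirm that the ingress object exists using a standard kubectl query (this is a generic Kubernetes check, not a NetQ-specific step):

    cumulus@netq-ts:~$ kubectl get ingress netq-gui-ingress-external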

Your custom certificate should now be working. Verify this by opening the NetQ UI at https://<your-hostname-or-ipaddr> in your browser.
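
You can also inspect the certificate being served from a shell. This is a generic OpenSSL check rather than a NetQ command, and the hostname is a placeholder:

cumulus@netq-ts:~$ echo | openssl s_client -connect <your-hostname-or-ipaddr>:443 2>/dev/null | openssl x509 -noout -subject -issuer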

Update Your Cloud Activation Key

The cloud activation key, called the config-key, is used to access the Cloud services; it is not the same as the authorization keys used for configuring the CLI. Cumulus Networks provides the key when your premises is set up.

There are occasions when you might want to update your cloud service activation key, for example, if you mistyped the key during installation and your existing key does not work, or if you received a new key for your premises from Cumulus Networks.

Update the activation key using the Admin UI or NetQ CLI:

  1. Open the Admin UI by entering https://<master-hostname-or-ipaddress>:8443 in your browser address field.

  2. Click Settings.

  3. Click Activation.

  4. Click Edit.

  5. Enter your new configuration key in the designated text box.

  6. Click Apply.

Run the following command on your standalone or master NetQ Cloud Appliance or VM replacing text-opta-key with your new key.

cumulus@<hostname>:~$ netq install standalone activate-job config-key <text-opta-key>

Add More Nodes to Your Server Cluster

Installation of NetQ with a server cluster sets up the master and two worker nodes. To expand your cluster to include up to a total of nine worker nodes, use the Admin UI.

Adding additional worker nodes increases availability, but does not increase scalability at this time. A maximum of 1000 nodes is supported regardless of the number of worker nodes in your cluster.

To add more worker nodes:

  1. Prepare the nodes. Refer to the relevant server cluster instructions in Install the NetQ System.

  2. Open the Admin UI by entering https://<master-hostname-or-ipaddress>:8443 in your browser address field.

    This opens the Health dashboard for NetQ.

  3. Click Cluster to view your current configuration.

    On-premises deployment

    This opens the Cluster dashboard, with the details about each node in the cluster.

  4. Click Add Worker Node.

  5. Enter the private IP address of the node you want to add.

  6. Click Add.

    Monitor the progress of the three jobs by clicking next to the jobs.

    On completion, a card for the new node is added to the Cluster dashboard.

    If the addition fails for any reason, download the log file by clicking , run netq bootstrap reset on this new worker node (see the example after these steps), and then try again.

  7. Repeat this process to add more worker nodes as needed.
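
If you need to reset a failed worker node before retrying (as noted in step 6), run the reset command on that node. The hostname shown here is hypothetical:

cumulus@netq-worker3:~$ netq bootstrap reset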

Manage Inventory

This topic describes how to use the Cumulus NetQ UI and CLI to monitor your inventory from networkwide and device-specific perspectives.

You can monitor all of the hardware and software components installed and running on the switches and hosts across the entire network. This is extremely useful for understanding your dependence on various vendors and versions when planning upgrades or assessing the scope of any other required changes.

From a networkwide view, you can monitor all of the switches and hosts together, or all of the switches on their own. You cannot currently monitor all hosts separately from switches.

Monitor Networkwide Inventory

With the NetQ UI and CLI, a user can monitor the inventory on a networkwide basis for all switches and hosts, or all switches. Inventory includes such items as the number of each device and what operating systems are installed. Additional details are available about the hardware and software components on individual switches, such as the motherboard, ASIC, microprocessor, disk, memory, fan and power supply information. This is extremely useful for understanding the dependence on various vendors and versions when planning upgrades or evaluating the scope of any other required changes.

The commands and cards available to obtain this type of information help you to answer questions such as:

To monitor the inventory of a given switch, refer to Monitor Switch Inventory.

Access Networkwide Inventory Data

The Cumulus NetQ UI provides the Inventory|Devices card for monitoring networkwide inventory information for all switches and hosts. The Inventory|Switches card provides a more detailed view of inventory information for all switches (no hosts) on a networkwide basis.

Access these cards from the Cumulus Workbench, or add them to your own workbench by clicking (Add card) > Inventory > Inventory|Devices card or Inventory|Switches card > Open Cards.

    

The NetQ CLI provides detailed network inventory information through its netq show inventory command.

View Networkwide Inventory Summary

All of the devices in your network can be viewed from either the NetQ UI or NetQ CLI.

View the Number of Each Device Type in Your Network

You can view the number of switches and hosts deployed in your network. As you grow your network this can be useful for validating that devices have been added as scheduled.

To view the quantity of devices in your network, locate or open the small or medium Inventory|Devices card. The medium-sized card provides the operating system distribution across the network in addition to the device count.

View All Switches

You can view all stored attributes for all switches in your network from either inventory card:

  • Open the full-screen Inventory|Devices card and click All Switches
  • Open the full-screen Inventory|Switches card and click Show All

To return to your workbench, click in the top right corner of the card.

View All Hosts

You can view all stored attributes for all hosts in your network. To view all host details, open the full screen Inventory|Devices card and click All Hosts.

To return to your workbench, click in the top right corner of the card.

To view a list of devices in your network, run:

netq show inventory brief [json]

This example shows that we have four spine switches, three leaf switches, two border switches, two firewall switches, seven hosts (servers), and an out-of-band management server in this network. For each of these we see the type of switch, operating system, CPU and ASIC.

cumulus@switch:~$ netq show inventory brief
Matching inventory records:
Hostname          Switch               OS              CPU      ASIC            Ports
----------------- -------------------- --------------- -------- --------------- -----------------------------------
border01          VX                   CL              x86_64   VX              N/A
border02          VX                   CL              x86_64   VX              N/A
fw1               VX                   CL              x86_64   VX              N/A
fw2               VX                   CL              x86_64   VX              N/A
leaf01            VX                   CL              x86_64   VX              N/A
leaf02            VX                   CL              x86_64   VX              N/A
leaf03            VX                   CL              x86_64   VX              N/A
oob-mgmt-server   N/A                  Ubuntu          x86_64   N/A             N/A
server01          N/A                  Ubuntu          x86_64   N/A             N/A
server02          N/A                  Ubuntu          x86_64   N/A             N/A
server03          N/A                  Ubuntu          x86_64   N/A             N/A
server04          N/A                  Ubuntu          x86_64   N/A             N/A
server05          N/A                  Ubuntu          x86_64   N/A             N/A
server06          N/A                  Ubuntu          x86_64   N/A             N/A
server07          N/A                  Ubuntu          x86_64   N/A             N/A
spine01           VX                   CL              x86_64   VX              N/A
spine02           VX                   CL              x86_64   VX              N/A
spine03           VX                   CL              x86_64   VX              N/A
spine04           VX                   CL              x86_64   VX              N/A

View Networkwide Hardware Inventory

You can view hardware components deployed on all switches and hosts, or on all of the switches in your network.

View Components Summary

It can be useful to know the quantity and ratio of many components deployed in your network to determine the scope of upgrade tasks, balance vendor reliance, or for detailed troubleshooting. Hardware and software component summary information is available from the NetQ UI and NetQ CLI.

  1. Locate the Inventory|Devices card on your workbench.

  2. Hover over the card, and change to the large size card using the size picker.

    By default the Switches tab is shown displaying the total number of switches, ASIC vendors, OS versions, license status, NetQ Agent versions, and specific platforms deployed across all of your switches.

Additionally, sympathetic highlighting is used to show the related component types relevant to the highlighted segment and the number of unique component types associated with this type (shown in blue here).

  1. Locate the Inventory|Switches card on your workbench.

  2. Hover over any of the segments in the distribution chart to highlight a specific component.

  1. Change to the large size card. The same information is shown separated by hardware and software, and sympathetic highlighting is used to show the related component types relevant to the highlighted segment and the number of unique component types associated with this type (shown in blue here).

To view switch components, run:

netq show inventory brief [json]

This example shows the operating systems (Cumulus Linux and Ubuntu), CPU architecture (all x86_64), ASIC (virtual), and ports (none, since virtual) for each device in the network. You can manually count the number of each of these, or export to a spreadsheet tool to sort and filter the list.

cumulus@switch:~$ netq show inventory brief
Matching inventory records:
Hostname          Switch               OS              CPU      ASIC            Ports
----------------- -------------------- --------------- -------- --------------- -----------------------------------
border01          VX                   CL              x86_64   VX              N/A
border02          VX                   CL              x86_64   VX              N/A
fw1               VX                   CL              x86_64   VX              N/A
fw2               VX                   CL              x86_64   VX              N/A
leaf01            VX                   CL              x86_64   VX              N/A
leaf02            VX                   CL              x86_64   VX              N/A
leaf03            VX                   CL              x86_64   VX              N/A
oob-mgmt-server   N/A                  Ubuntu          x86_64   N/A             N/A
server01          N/A                  Ubuntu          x86_64   N/A             N/A
server02          N/A                  Ubuntu          x86_64   N/A             N/A
server03          N/A                  Ubuntu          x86_64   N/A             N/A
server04          N/A                  Ubuntu          x86_64   N/A             N/A
server05          N/A                  Ubuntu          x86_64   N/A             N/A
server06          N/A                  Ubuntu          x86_64   N/A             N/A
server07          N/A                  Ubuntu          x86_64   N/A             N/A
spine01           VX                   CL              x86_64   VX              N/A
spine02           VX                   CL              x86_64   VX              N/A
spine03           VX                   CL              x86_64   VX              N/A
spine04           VX                   CL              x86_64   VX              N/A

View ASIC Information

ASIC information is available from the NetQ UI and NetQ CLI.

  1. Locate the medium Inventory|Devices card on your workbench.

  2. Hover over the card, and change to the large size card using the size picker.

  3. Click a segment of the ASIC graph in the component distribution charts.

  1. Select the first option from the popup, Filter ASIC. The card data is filtered to show only the components associated with the selected component type. A filter tag appears next to the total number of switches indicating the filter criteria.
  1. Hover over the segments to view the related components.
  1. To return to the full complement of components, click in the filter tag.

  2. Hover over the card, and change to the full-screen card using the size picker.

  1. Scroll to the right to view the ASIC information.

  2. To return to your workbench, click in the top right corner of the card.

  1. Locate the Inventory|Switches card on your workbench.

  2. Hover over a segment of the ASIC graph in the distribution chart.

    The same information is available on the summary tab of the large size card.

  1. Hover over the card header and click to view the ASIC vendor and model distribution.

  2. Hover over charts to view the name of the ASIC vendors or models, how many switches have that vendor or model deployed, and the percentage of this number compared to the total number of switches.

  1. Change to the full-screen card to view all of the available ASIC information. Note that if you are running CumulusVX switches, no detailed ASIC information is available.
  1. To return to your workbench, click in the top right corner of the card.

To view information about the ASIC installed on your devices, run:

netq show inventory asic [vendor <asic-vendor>|model <asic-model>|model-id <asic-model-id>] [json]

If you are running NetQ on a CumulusVX setup, there is no physical hardware to query and thus no ASIC information to display.

This example shows the ASIC information for all devices in your network:

cumulus@switch:~$ netq show inventory asic
Matching inventory records:
Hostname          Vendor               Model                          Model ID                  Core BW        Ports
----------------- -------------------- ------------------------------ ------------------------- -------------- -----------------------------------
dell-z9100-05     Broadcom             Tomahawk                       BCM56960                  2.0T           32 x 100G-QSFP28
mlx-2100-05       Mellanox             Spectrum                       MT52132                   N/A            16 x 100G-QSFP28
mlx-2410a1-05     Mellanox             Spectrum                       MT52132                   N/A            48 x 25G-SFP28 & 8 x 100G-QSFP28
mlx-2700-11       Mellanox             Spectrum                       MT52132                   N/A            32 x 100G-QSFP28
qct-ix1-08        Broadcom             Tomahawk                       BCM56960                  2.0T           32 x 100G-QSFP28
qct-ix7-04        Broadcom             Trident3                       BCM56870                  N/A            32 x 100G-QSFP28
st1-l1            Broadcom             Trident2                       BCM56854                  720G           48 x 10G-SFP+ & 6 x 40G-QSFP+
st1-l2            Broadcom             Trident2                       BCM56854                  720G           48 x 10G-SFP+ & 6 x 40G-QSFP+
st1-l3            Broadcom             Trident2                       BCM56854                  720G           48 x 10G-SFP+ & 6 x 40G-QSFP+
st1-s1            Broadcom             Trident2                       BCM56850                  960G           32 x 40G-QSFP+
st1-s2            Broadcom             Trident2                       BCM56850                  960G           32 x 40G-QSFP+

You can filter the results of the command to view devices with a particular vendor, model, or modelID. This example shows ASIC information for all devices with a vendor of Mellanox.

cumulus@switch:~$ netq show inventory asic vendor Mellanox
Matching inventory records:
Hostname          Vendor               Model                          Model ID                  Core BW        Ports
----------------- -------------------- ------------------------------ ------------------------- -------------- -----------------------------------
mlx-2100-05       Mellanox             Spectrum                       MT52132                   N/A            16 x 100G-QSFP28
mlx-2410a1-05     Mellanox             Spectrum                       MT52132                   N/A            48 x 25G-SFP28 & 8 x 100G-QSFP28
mlx-2700-11       Mellanox             Spectrum                       MT52132                   N/A            32 x 100G-QSFP28

View Motherboard/Platform Information

Motherboard and platform information is available from the NetQ UI and NetQ CLI.

  1. Locate the Inventory|Devices card on your workbench.

  2. Hover over the card, and change to the full-screen card using the size picker.

  3. The All Switches tab is selected by default. Scroll to the right to view the various Platform parameters for your switches. Optionally drag and drop the relevant columns next to each other.

  1. Click All Hosts.

  2. Scroll to the right to view the various Platform parameters for your hosts. Optionally drag and drop the relevant columns next to each other.

To return to your workbench, click in the top right corner of the card.

  1. Locate the Inventory|Switches card on your workbench.

  2. Hover over the card, and change to the large card using the size picker.

  3. Hover over the header and click .

  1. Hover over a segment in the Vendor or Platform graphic to view how many switches deploy the specified vendor or platform.

    Context-sensitive highlighting is also employed here, such that when you select a vendor, the corresponding platforms are also highlighted, and vice versa. Note that you can also see the status of the Cumulus Linux license for each switch.

  2. Click either Show All link to open the full-screen card.

  3. Click Platform.

  1. To return to your workbench, click in the top right corner of the card.

To view a list of motherboards installed in your switches and hosts, run:

netq show inventory board [vendor <board-vendor>|model <board-model>] [json]

This example shows all of the motherboard data for all devices.

cumulus@switch:~$ netq show inventory board
Matching inventory records:
Hostname          Vendor               Model                          Base MAC           Serial No                 Part No          Rev    Mfg Date
----------------- -------------------- ------------------------------ ------------------ ------------------------- ---------------- ------ ----------
dell-z9100-05     DELL                 Z9100-ON                       4C:76:25:E7:42:C0  CN03GT5N779315C20001      03GT5N           A00    12/04/2015
mlx-2100-05       Penguin              Arctica 1600cs                 7C:FE:90:F5:61:C0  MT1623X10078              MSN2100-CB2FO    N/A    06/09/2016
mlx-2410a1-05     Mellanox             SN2410                         EC:0D:9A:4E:55:C0  MT1734X00067              MSN2410-CB2F_QP3 N/A    08/24/2017
mlx-2700-11       Penguin              Arctica 3200cs                 44:38:39:00:AB:80  MT1604X21036              MSN2700-CS2FO    N/A    01/31/2016
qct-ix1-08        QCT                  QuantaMesh BMS T7032-IX1       54:AB:3A:78:69:51  QTFCO7623002C             1IX1UZZ0ST6      H3B    05/30/2016
qct-ix7-04        QCT                  IX7                            D8:C4:97:62:37:65  QTFCUW821000A             1IX7UZZ0ST5      B3D    05/07/2018
qct-ix7-04        QCT                  T7032-IX7                      D8:C4:97:62:37:65  QTFCUW821000A             1IX7UZZ0ST5      B3D    05/07/2018
st1-l1            CELESTICA            Arctica 4806xp                 00:E0:EC:27:71:37  D2060B2F044919GD000011    R0854-F1004-01   Redsto 09/20/2014
                                                                                                                                    ne-XP
st1-l2            CELESTICA            Arctica 4806xp                 00:E0:EC:27:6B:3A  D2060B2F044919GD000060    R0854-F1004-01   Redsto 09/20/2014
                                                                                                                                    ne-XP
st1-l3            Penguin              Arctica 4806xp                 44:38:39:00:70:49  N/A                       N/A              N/A    N/A
st1-s1            Dell                 S6000-ON                       44:38:39:00:80:00  N/A                       N/A              N/A    N/A
st1-s2            Dell                 S6000-ON                       44:38:39:00:80:81  N/A                       N/A              N/A    N/A

You can filter the results of the command to capture only those devices with a particular motherboard vendor or model. This example shows only the devices with a Celestica motherboard.

cumulus@switch:~$ netq show inventory board vendor celestica
Matching inventory records:
Hostname          Vendor               Model                          Base MAC           Serial No                 Part No          Rev    Mfg Date
----------------- -------------------- ------------------------------ ------------------ ------------------------- ---------------- ------ ----------
st1-l1            CELESTICA            Arctica 4806xp                 00:E0:EC:27:71:37  D2060B2F044919GD000011    R0854-F1004-01   Redsto 09/20/2014
                                                                                                                                    ne-XP
st1-l2            CELESTICA            Arctica 4806xp                 00:E0:EC:27:6B:3A  D2060B2F044919GD000060    R0854-F1004-01   Redsto 09/20/2014
                                                                                                                                    ne-XP

View CPU Information

CPU information is available from the NetQ UI and NetQ CLI.

  1. Locate the Inventory|Devices card on your workbench.

  2. Hover over the card, and change to the full-screen card using the size picker.

  3. The All Switches tab is selected by default. Scroll to the right to view the various CPU parameters. Optionally drag and drop relevant columns next to each other.

  1. Click All Hosts to view the CPU information for your host servers.
  1. To return to your workbench, click in the top right corner of the card.
  1. Locate the Inventory|Switches card on your workbench.

  2. Hover over a segment of the CPU graph in the distribution chart.

    The same information is available on the summary tab of the large size card.

  1. Hover over the card, and change to the full-screen card using the size picker.

  2. Click CPU.

  1. To return to your workbench, click in the top right corner of the card.

To view CPU information for all devices in your network, run:

netq show inventory cpu [arch <cpu-arch>] [json]

This example shows the CPU information for all devices.

cumulus@switch:~$ netq show inventory cpu
Matching inventory records:
Hostname          Arch     Model                          Freq       Cores
----------------- -------- ------------------------------ ---------- -----
dell-z9100-05     x86_64   Intel(R) Atom(TM) C2538        2.40GHz    4
mlx-2100-05       x86_64   Intel(R) Atom(TM) C2558        2.40GHz    4
mlx-2410a1-05     x86_64   Intel(R) Celeron(R)  1047UE    1.40GHz    2
mlx-2700-11       x86_64   Intel(R) Celeron(R)  1047UE    1.40GHz    2
qct-ix1-08        x86_64   Intel(R) Atom(TM) C2558        2.40GHz    4
qct-ix7-04        x86_64   Intel(R) Atom(TM) C2558        2.40GHz    4
st1-l1            x86_64   Intel(R) Atom(TM) C2538        2.41GHz    4
st1-l2            x86_64   Intel(R) Atom(TM) C2538        2.41GHz    4
st1-l3            x86_64   Intel(R) Atom(TM) C2538        2.40GHz    4
st1-s1            x86_64   Intel(R) Atom(TM)  S1220       1.60GHz    4
st1-s2            x86_64   Intel(R) Atom(TM)  S1220       1.60GHz    4

You can filter the results of the command to view which switches employ a particular CPU architecture using the arch keyword. This example shows how to determine which architectures are deployed in your network, and then shows all devices with an x86_64 architecture.

cumulus@switch:~$ netq show inventory cpu arch
    x86_64  :  CPU Architecture
    
cumulus@switch:~$ netq show inventory cpu arch x86_64
Matching inventory records:
Hostname          Arch     Model                          Freq       Cores
----------------- -------- ------------------------------ ---------- -----
leaf01            x86_64   Intel Core i7 9xx (Nehalem Cla N/A        1
                            ss Core i7)
leaf02            x86_64   Intel Core i7 9xx (Nehalem Cla N/A        1
                            ss Core i7)
leaf03            x86_64   Intel Core i7 9xx (Nehalem Cla N/A        1
                            ss Core i7)
leaf04            x86_64   Intel Core i7 9xx (Nehalem Cla N/A        1
                            ss Core i7)
oob-mgmt-server   x86_64   Intel Core i7 9xx (Nehalem Cla N/A        1
                            ss Core i7)
server01          x86_64   Intel Core i7 9xx (Nehalem Cla N/A        1
                            ss Core i7)
server02          x86_64   Intel Core i7 9xx (Nehalem Cla N/A        1
                            ss Core i7)
server03          x86_64   Intel Core i7 9xx (Nehalem Cla N/A        1
                            ss Core i7)
server04          x86_64   Intel Core i7 9xx (Nehalem Cla N/A        1
                            ss Core i7)
spine01           x86_64   Intel Core i7 9xx (Nehalem Cla N/A        1
                            ss Core i7)
spine02           x86_64   Intel Core i7 9xx (Nehalem Cla N/A        1
                            ss Core i7)

View Disk Information

Disk information is available from the NetQ UI and NetQ CLI.

  1. Locate the Inventory|Devices card on your workbench.

  2. Hover over the card, and change to the full-screen card using the size picker.

  1. The All Switches tab is selected by default. Locate the Disk Total Size column.

  2. Click All Hosts to view the total disk size of all host servers.

  1. To return to your workbench, click in the top right corner of the card.
  1. Locate the Inventory|Switches card on your workbench.

  2. Hover over a segment of the disk graph in the distribution chart.

    The same information is available on the summary tab of the large size card.

  1. Hover over the card, and change to the full-screen card using the size picker.

  2. Click Disk.

  1. To return to your workbench, click in the top right corner of the card.

To view disk information for your switches, run:

netq show inventory disk [name <disk-name>|transport <disk-transport>|vendor <disk-vendor>] [json]

This example shows the disk information for all devices.

cumulus@switch:~$ netq show inventory disk
Matching inventory records:
Hostname          Name            Type             Transport          Size       Vendor               Model
----------------- --------------- ---------------- ------------------ ---------- -------------------- ------------------------------
leaf01            vda             disk             N/A                6G         0x1af4               N/A
leaf02            vda             disk             N/A                6G         0x1af4               N/A
leaf03            vda             disk             N/A                6G         0x1af4               N/A
leaf04            vda             disk             N/A                6G         0x1af4               N/A
oob-mgmt-server   vda             disk             N/A                256G       0x1af4               N/A
server01          vda             disk             N/A                301G       0x1af4               N/A
server02          vda             disk             N/A                301G       0x1af4               N/A
server03          vda             disk             N/A                301G       0x1af4               N/A
server04          vda             disk             N/A                301G       0x1af4               N/A
spine01           vda             disk             N/A                6G         0x1af4               N/A
spine02           vda             disk             N/A                6G         0x1af4               N/A
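
You can filter the results using the name, transport, or vendor options from the syntax above. For example, to list only the disks named vda (matching the sample output above), run:

cumulus@switch:~$ netq show inventory disk name vda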

View Memory Information

Memory information is available from the NetQ UI and NetQ CLI.

  1. Locate the Inventory|Devices card on your workbench.

  2. Hover over the card, and change to the full-screen card using the size picker.

  1. The All Switches tab is selected by default. Locate the Memory Size column.

  2. Click All Hosts to view the memory size for all host servers.

  1. To return to your workbench, click in the top right corner of the card.
  1. Locate the medium Inventory|Switches card on your workbench.

  2. Hover over a segment of the memory graph in the distribution chart.

    The same information is available on the summary tab of the large size card.

  1. Hover over the card, and change to the full-screen card using the size picker.

  2. Click Memory.

  1. To return to your workbench, click in the top right corner of the card.

To view memory information for your switches and host servers, run:

netq show inventory memory [type <memory-type>|vendor <memory-vendor>] [json]

This example shows all of the memory characteristics for all devices.

cumulus@switch:~$ netq show inventory memory
Matching inventory records:
Hostname          Name            Type             Size       Speed      Vendor               Serial No
----------------- --------------- ---------------- ---------- ---------- -------------------- -------------------------
dell-z9100-05     DIMM0 BANK 0    DDR3             8192 MB    1600 MHz   Hynix                14391421
mlx-2100-05       DIMM0 BANK 0    DDR3             8192 MB    1600 MHz   InnoDisk Corporation 00000000
mlx-2410a1-05     ChannelA-DIMM0  DDR3             8192 MB    1600 MHz   017A                 87416232
                    BANK 0
mlx-2700-11       ChannelA-DIMM0  DDR3             8192 MB    1600 MHz   017A                 73215444
                    BANK 0
mlx-2700-11       ChannelB-DIMM0  DDR3             8192 MB    1600 MHz   017A                 73215444
                    BANK 2
qct-ix1-08        N/A             N/A              7907.45MB  N/A        N/A                  N/A
qct-ix7-04        DIMM0 BANK 0    DDR3             8192 MB    1600 MHz   Transcend            00211415
st1-l1            DIMM0 BANK 0    DDR3             4096 MB    1333 MHz   N/A                  N/A
st1-l2            DIMM0 BANK 0    DDR3             4096 MB    1333 MHz   N/A                  N/A
st1-l3            DIMM0 BANK 0    DDR3             4096 MB    1600 MHz   N/A                  N/A
st1-s1            A1_DIMM0 A1_BAN DDR3             8192 MB    1333 MHz   A1_Manufacturer0     A1_SerNum0
                    K0
st1-s2            A1_DIMM0 A1_BAN DDR3             8192 MB    1333 MHz   A1_Manufacturer0     A1_SerNum0
                    K0

You can filter the results of the command to view devices with a particular memory type or vendor. This example shows all of the devices with memory from QEMU.

cumulus@switch:~$ netq show inventory memory vendor QEMU
Matching inventory records:
Hostname          Name            Type             Size       Speed      Vendor               Serial No
----------------- --------------- ---------------- ---------- ---------- -------------------- -------------------------
leaf01            DIMM 0          RAM              1024 MB    Unknown    QEMU                 Not Specified
leaf02            DIMM 0          RAM              1024 MB    Unknown    QEMU                 Not Specified
leaf03            DIMM 0          RAM              1024 MB    Unknown    QEMU                 Not Specified
leaf04            DIMM 0          RAM              1024 MB    Unknown    QEMU                 Not Specified
oob-mgmt-server   DIMM 0          RAM              4096 MB    Unknown    QEMU                 Not Specified
server01          DIMM 0          RAM              512 MB     Unknown    QEMU                 Not Specified
server02          DIMM 0          RAM              512 MB     Unknown    QEMU                 Not Specified
server03          DIMM 0          RAM              512 MB     Unknown    QEMU                 Not Specified
server04          DIMM 0          RAM              512 MB     Unknown    QEMU                 Not Specified
spine01           DIMM 0          RAM              1024 MB    Unknown    QEMU                 Not Specified
spine02           DIMM 0          RAM              1024 MB    Unknown    QEMU                 Not Specified
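
You can filter by memory type in the same way. For example, to list only DDR3 modules such as those shown in the earlier output, you could run:

netq show inventory memory type DDR3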

View Sensor Information

Fan, power supply unit (PSU), and temperature sensors are available to provide additional data about the NetQ system operation.

Sensor information is available from the NetQ UI and NetQ CLI.

Power Supply Unit Information

  1. Click (main menu), then click Sensors in the Network heading.

  2. The PSU tab is displayed by default.

  3. To return to your workbench, click in the top right corner of the card.

Fan Information

  1. Click (main menu), then click Sensors in the Network heading.

  2. Click Fan.

  3. To return to your workbench, click in the top right corner of the card.

Temperature Information

  1. Click (main menu), then click Sensors in the Network heading.

  2. Click Temperature.

  3. To return to your workbench, click in the top right corner of the card.

View All Sensor Information

To view information for power supplies, fans, and temperature sensors on all switches and host servers, run:

netq show sensors all [around <text-time>] [json]

Use the around option to view sensor information for a time in the past.
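
For example, to view the sensor state as it was roughly a week ago, you could run:

netq show sensors all around 7d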

This example shows all of the sensors on all devices.

cumulus@switch:~$ netq show sensors all
Matching sensors records:
Hostname          Name            Description                         State      Message                             Last Changed
----------------- --------------- ----------------------------------- ---------- ----------------------------------- -------------------------
border01          fan5            fan tray 3, fan 1                   ok                                             Fri Aug 21 18:51:11 2020
border01          fan6            fan tray 3, fan 2                   ok                                             Fri Aug 21 18:51:11 2020
border01          fan1            fan tray 1, fan 1                   ok                                             Fri Aug 21 18:51:11 2020
...
fw1               fan2            fan tray 1, fan 2                   ok                                             Thu Aug 20 19:16:12 2020
...
fw2               fan3            fan tray 2, fan 1                   ok                                             Thu Aug 20 19:14:47 2020
...
leaf01            psu2fan1        psu2 fan                            ok                                             Fri Aug 21 16:14:22 2020
...
leaf02            fan3            fan tray 2, fan 1                   ok                                             Fri Aug 21 16:14:14 2020
...
leaf03            fan2            fan tray 1, fan 2                   ok                                             Fri Aug 21 09:37:45 2020
...
leaf04            psu1fan1        psu1 fan                            ok                                             Fri Aug 21 09:17:02 2020
...
spine01           psu2fan1        psu2 fan                            ok                                             Fri Aug 21 05:54:14 2020
...
spine02           fan2            fan tray 1, fan 2                   ok                                             Fri Aug 21 05:54:39 2020
...
spine03           fan4            fan tray 2, fan 2                   ok                                             Fri Aug 21 06:00:52 2020
...
spine04           fan2            fan tray 1, fan 2                   ok                                             Fri Aug 21 05:54:09 2020
...
border01          psu1temp1       psu1 temp sensor                    ok                                             Fri Aug 21 18:51:11 2020
border01          temp2           board sensor near virtual switch    ok                                             Fri Aug 21 18:51:11 2020
border01          temp3           board sensor at front left corner   ok                                             Fri Aug 21 18:51:11 2020
...
border02          temp1           board sensor near cpu               ok                                             Fri Aug 21 18:46:05 2020
...
fw1               temp4           board sensor at front right corner  ok                                             Thu Aug 20 19:16:12 2020
...
fw2               temp5           board sensor near fan               ok                                             Thu Aug 20 19:14:47 2020
...
leaf01            psu1temp1       psu1 temp sensor                    ok                                             Fri Aug 21 16:14:22 2020
...
leaf02            temp5           board sensor near fan               ok                                             Fri Aug 21 16:14:14 2020
...
leaf03            psu2temp1       psu2 temp sensor                    ok                                             Fri Aug 21 09:37:45 2020
...
leaf04            temp4           board sensor at front right corner  ok                                             Fri Aug 21 09:17:02 2020
...
spine01           psu1temp1       psu1 temp sensor                    ok                                             Fri Aug 21 05:54:14 2020
...
spine02           temp3           board sensor at front left corner   ok                                             Fri Aug 21 05:54:39 2020
...
spine03           temp1           board sensor near cpu               ok                                             Fri Aug 21 06:00:52 2020
...
spine04           temp3           board sensor at front left corner   ok                                             Fri Aug 21 05:54:09 2020
...
border01          psu1            N/A                                 ok                                             Fri Aug 21 18:51:11 2020
border01          psu2            N/A                                 ok                                             Fri Aug 21 18:51:11 2020
border02          psu1            N/A                                 ok                                             Fri Aug 21 18:46:05 2020
border02          psu2            N/A                                 ok                                             Fri Aug 21 18:46:05 2020
fw1               psu1            N/A                                 ok                                             Thu Aug 20 19:16:12 2020
fw1               psu2            N/A                                 ok                                             Thu Aug 20 19:16:12 2020
fw2               psu1            N/A                                 ok                                             Thu Aug 20 19:14:47 2020
fw2               psu2            N/A                                 ok                                             Thu Aug 20 19:14:47 2020
leaf01            psu1            N/A                                 ok                                             Fri Aug 21 16:14:22 2020
leaf01            psu2            N/A                                 ok                                             Fri Aug 21 16:14:22 2020
leaf02            psu1            N/A                                 ok                                             Fri Aug 21 16:14:14 2020
leaf02            psu2            N/A                                 ok                                             Fri Aug 21 16:14:14 2020
leaf03            psu1            N/A                                 ok                                             Fri Aug 21 09:37:45 2020
leaf03            psu2            N/A                                 ok                                             Fri Aug 21 09:37:45 2020
leaf04            psu1            N/A                                 ok                                             Fri Aug 21 09:17:02 2020
leaf04            psu2            N/A                                 ok                                             Fri Aug 21 09:17:02 2020
spine01           psu1            N/A                                 ok                                             Fri Aug 21 05:54:14 2020
spine01           psu2            N/A                                 ok                                             Fri Aug 21 05:54:14 2020
spine02           psu1            N/A                                 ok                                             Fri Aug 21 05:54:39 2020
spine02           psu2            N/A                                 ok                                             Fri Aug 21 05:54:39 2020
spine03           psu1            N/A                                 ok                                             Fri Aug 21 06:00:52 2020
spine03           psu2            N/A                                 ok                                             Fri Aug 21 06:00:52 2020
spine04           psu1            N/A                                 ok                                             Fri Aug 21 05:54:09 2020
spine04           psu2            N/A                                 ok                                             Fri Aug 21 05:54:09 2020

View Only Power Supply Sensors

To view information from all PSU sensors or PSU sensors with a given name on your switches and host servers, run:

netq show sensors psu [<psu-name>] [around <text-time>] [json]

Use the psu-name option to view all PSU sensors with a particular name. Use the around option to view sensor information for a time in the past.
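
For example, assuming a PSU named psu1 (as listed in the tab completion output below) and a look-back of seven days, you could combine both options:

netq show sensors psu psu1 around 7d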

Use tab completion to determine the names of the PSUs in your switches:

cumulus@switch:~$ netq show sensors psu <press tab>
around  :  Go back in time to around ...
json    :  Provide output in JSON
psu1    :  Power Supply
psu2    :  Power Supply
<ENTER>

This example shows information from all PSU sensors on all switches and hosts.

cumulus@switch:~$ netq show sensor psu

Matching sensors records:
Hostname          Name            State      Pin(W)       Pout(W)        Vin(V)       Vout(V)        Message                             Last Changed
----------------- --------------- ---------- ------------ -------------- ------------ -------------- ----------------------------------- -------------------------
border01          psu1            ok                                                                                                     Tue Aug 25 21:45:21 2020
border01          psu2            ok                                                                                                     Tue Aug 25 21:45:21 2020
border02          psu1            ok                                                                                                     Tue Aug 25 21:39:36 2020
border02          psu2            ok                                                                                                     Tue Aug 25 21:39:36 2020
fw1               psu1            ok                                                                                                     Wed Aug 26 00:08:01 2020
fw1               psu2            ok                                                                                                     Wed Aug 26 00:08:01 2020
fw2               psu1            ok                                                                                                     Wed Aug 26 00:02:13 2020
fw2               psu2            ok                                                                                                     Wed Aug 26 00:02:13 2020
leaf01            psu1            ok                                                                                                     Wed Aug 26 16:14:41 2020
leaf01            psu2            ok                                                                                                     Wed Aug 26 16:14:41 2020
leaf02            psu1            ok                                                                                                     Wed Aug 26 16:14:08 2020
leaf02            psu2            ok                                                                                                     Wed Aug 26 16:14:08 2020
leaf03            psu1            ok                                                                                                     Wed Aug 26 14:41:57 2020
leaf03            psu2            ok                                                                                                     Wed Aug 26 14:41:57 2020
leaf04            psu1            ok                                                                                                     Wed Aug 26 14:20:22 2020
leaf04            psu2            ok                                                                                                     Wed Aug 26 14:20:22 2020
spine01           psu1            ok                                                                                                     Wed Aug 26 10:53:17 2020
spine01           psu2            ok                                                                                                     Wed Aug 26 10:53:17 2020
spine02           psu1            ok                                                                                                     Wed Aug 26 10:54:07 2020
spine02           psu2            ok                                                                                                     Wed Aug 26 10:54:07 2020
spine03           psu1            ok                                                                                                     Wed Aug 26 11:00:44 2020
spine03           psu2            ok                                                                                                     Wed Aug 26 11:00:44 2020
spine04           psu1            ok                                                                                                     Wed Aug 26 10:52:00 2020
spine04           psu2            ok                                                                                                     Wed Aug 26 10:52:00 2020

This example shows all PSUs with the name psu2.

cumulus@switch:~$ netq show sensors psu psu2
Matching sensors records:
Hostname          Name            State      Message                             Last Changed
----------------- --------------- ---------- ----------------------------------- -------------------------
exit01            psu2            ok                                             Fri Apr 19 16:01:17 2019
exit02            psu2            ok                                             Fri Apr 19 16:01:33 2019
leaf01            psu2            ok                                             Sun Apr 21 20:07:12 2019
leaf02            psu2            ok                                             Fri Apr 19 16:01:41 2019
leaf03            psu2            ok                                             Fri Apr 19 16:01:44 2019
leaf04            psu2            ok                                             Fri Apr 19 16:01:36 2019
spine01           psu2            ok                                             Fri Apr 19 16:01:52 2019
spine02           psu2            ok                                             Fri Apr 19 16:01:08 2019

View Only Fan Sensors

To view information from all fan sensors or fan sensors with a given name on your switches and host servers, run:

netq show sensors fan [<fan-name>] [around <text-time>] [json]

Use the around option to view sensor information for a time in the past.

Use tab completion to determine the names of the fans in your switches:

cumulus@switch:~$ netq show sensors fan <press tab>
   around : Go back in time to around ...
   fan1 : Fan Name
   fan2 : Fan Name
   fan3 : Fan Name
   fan4 : Fan Name
   fan5 : Fan Name
   fan6 : Fan Name
   json : Provide output in JSON
   psu1fan1 : Fan Name
   psu2fan1 : Fan Name
   <ENTER>

This example shows the state of all fans.

cumulus@switch:~$ netq show sensor fan

Matching sensors records:
Hostname          Name            Description                         State      Speed      Max      Min      Message                             Last Changed
----------------- --------------- ----------------------------------- ---------- ---------- -------- -------- ----------------------------------- -------------------------
border01          fan5            fan tray 3, fan 1                   ok         2500       29000    2500                                         Tue Aug 25 21:45:21 2020
border01          fan6            fan tray 3, fan 2                   ok         2500       29000    2500                                         Tue Aug 25 21:45:21 2020
border01          fan1            fan tray 1, fan 1                   ok         2500       29000    2500                                         Tue Aug 25 21:45:21 2020
border01          fan4            fan tray 2, fan 2                   ok         2500       29000    2500                                         Tue Aug 25 21:45:21 2020
border01          psu1fan1        psu1 fan                            ok         2500       29000    2500                                         Tue Aug 25 21:45:21 2020
border01          fan3            fan tray 2, fan 1                   ok         2500       29000    2500                                         Tue Aug 25 21:45:21 2020
border01          fan2            fan tray 1, fan 2                   ok         2500       29000    2500                                         Tue Aug 25 21:45:21 2020
border01          psu2fan1        psu2 fan                            ok         2500       29000    2500                                         Tue Aug 25 21:45:21 2020
border02          fan2            fan tray 1, fan 2                   ok         2500       29000    2500                                         Tue Aug 25 21:39:36 2020
border02          psu2fan1        psu2 fan                            ok         2500       29000    2500                                         Tue Aug 25 21:39:36 2020
border02          psu1fan1        psu1 fan                            ok         2500       29000    2500                                         Tue Aug 25 21:39:36 2020
border02          fan4            fan tray 2, fan 2                   ok         2500       29000    2500                                         Tue Aug 25 21:39:36 2020
border02          fan1            fan tray 1, fan 1                   ok         2500       29000    2500                                         Tue Aug 25 21:39:36 2020
border02          fan6            fan tray 3, fan 2                   ok         2500       29000    2500                                         Tue Aug 25 21:39:36 2020
border02          fan5            fan tray 3, fan 1                   ok         2500       29000    2500                                         Tue Aug 25 21:39:36 2020
border02          fan3            fan tray 2, fan 1                   ok         2500       29000    2500                                         Tue Aug 25 21:39:36 2020
fw1               fan2            fan tray 1, fan 2                   ok         2500       29000    2500                                         Wed Aug 26 00:08:01 2020
fw1               fan5            fan tray 3, fan 1                   ok         2500       29000    2500                                         Wed Aug 26 00:08:01 2020
fw1               psu1fan1        psu1 fan                            ok         2500       29000    2500                                         Wed Aug 26 00:08:01 2020
fw1               fan4            fan tray 2, fan 2                   ok         2500       29000    2500                                         Wed Aug 26 00:08:01 2020
fw1               fan3            fan tray 2, fan 1                   ok         2500       29000    2500                                         Wed Aug 26 00:08:01 2020
fw1               psu2fan1        psu2 fan                            ok         2500       29000    2500                                         Wed Aug 26 00:08:01 2020
fw1               fan6            fan tray 3, fan 2                   ok         2500       29000    2500                                         Wed Aug 26 00:08:01 2020
fw1               fan1            fan tray 1, fan 1                   ok         2500       29000    2500                                         Wed Aug 26 00:08:01 2020
fw2               fan3            fan tray 2, fan 1                   ok         2500       29000    2500                                         Wed Aug 26 00:02:13 2020
fw2               psu2fan1        psu2 fan                            ok         2500       29000    2500                                         Wed Aug 26 00:02:13 2020
fw2               fan2            fan tray 1, fan 2                   ok         2500       29000    2500                                         Wed Aug 26 00:02:13 2020
fw2               fan6            fan tray 3, fan 2                   ok         2500       29000    2500                                         Wed Aug 26 00:02:13 2020
fw2               fan1            fan tray 1, fan 1                   ok         2500       29000    2500                                         Wed Aug 26 00:02:13 2020
fw2               fan4            fan tray 2, fan 2                   ok         2500       29000    2500                                         Wed Aug 26 00:02:13 2020
fw2               fan5            fan tray 3, fan 1                   ok         2500       29000    2500                                         Wed Aug 26 00:02:13 2020
fw2               psu1fan1        psu1 fan                            ok         2500       29000    2500                                         Wed Aug 26 00:02:13 2020
leaf01            psu2fan1        psu2 fan                            ok         2500       29000    2500                                         Wed Aug 26 16:14:41 2020
leaf01            fan5            fan tray 3, fan 1                   ok         2500       29000    2500                                         Wed Aug 26 16:14:41 2020
leaf01            fan3            fan tray 2, fan 1                   ok         2500       29000    2500                                         Wed Aug 26 16:14:41 2020
leaf01            fan1            fan tray 1, fan 1                   ok         2500       29000    2500                                         Wed Aug 26 16:14:41 2020
leaf01            fan6            fan tray 3, fan 2                   ok         2500       29000    2500                                         Wed Aug 26 16:14:41 2020
leaf01            fan2            fan tray 1, fan 2                   ok         2500       29000    2500                                         Wed Aug 26 16:14:41 2020
leaf01            psu1fan1        psu1 fan                            ok         2500       29000    2500                                         Wed Aug 26 16:14:41 2020
leaf01            fan4            fan tray 2, fan 2                   ok         2500       29000    2500                                         Wed Aug 26 16:14:41 2020
leaf02            fan3            fan tray 2, fan 1                   ok         2500       29000    2500                                         Wed Aug 26 16:14:08 2020
...
spine04           fan4            fan tray 2, fan 2                   ok         2500       29000    2500                                         Wed Aug 26 10:52:00 2020
spine04           psu1fan1        psu1 fan                            ok         2500       29000    2500                                         Wed Aug 26 10:52:00 2020

This example shows the state of all fans with the name fan1.

cumulus@switch:~$ netq show sensors fan fan1
Matching sensors records:
Hostname          Name            Description                         State      Speed      Max      Min      Message                             Last Changed
----------------- --------------- ----------------------------------- ---------- ---------- -------- -------- ----------------------------------- -------------------------
border01          fan1            fan tray 1, fan 1                   ok         2500       29000    2500                                         Tue Aug 25 21:45:21 2020
border02          fan1            fan tray 1, fan 1                   ok         2500       29000    2500                                         Tue Aug 25 21:39:36 2020
fw1               fan1            fan tray 1, fan 1                   ok         2500       29000    2500                                         Wed Aug 26 00:08:01 2020
fw2               fan1            fan tray 1, fan 1                   ok         2500       29000    2500                                         Wed Aug 26 00:02:13 2020
leaf01            fan1            fan tray 1, fan 1                   ok         2500       29000    2500                                         Tue Aug 25 18:30:07 2020
leaf02            fan1            fan tray 1, fan 1                   ok         2500       29000    2500                                         Tue Aug 25 18:08:38 2020
leaf03            fan1            fan tray 1, fan 1                   ok         2500       29000    2500                                         Tue Aug 25 21:20:34 2020
leaf04            fan1            fan tray 1, fan 1                   ok         2500       29000    2500                                         Wed Aug 26 14:20:22 2020
spine01           fan1            fan tray 1, fan 1                   ok         2500       29000    2500                                         Wed Aug 26 10:53:17 2020
spine02           fan1            fan tray 1, fan 1                   ok         2500       29000    2500                                         Wed Aug 26 10:54:07 2020
spine03           fan1            fan tray 1, fan 1                   ok         2500       29000    2500                                         Wed Aug 26 11:00:44 2020
spine04           fan1            fan tray 1, fan 1                   ok         2500       29000    2500                                         Wed Aug 26 10:52:00 2020

View Only Temperature Sensors

To view information from all temperature sensors or temperature sensors with a given name on your switches and host servers, run:

netq show sensors temp [<temp-name>] [around <text-time>] [json]

Use the around option to view sensor information for a time in the past.

Use tab completion to determine the names of the temperature sensors on your devices:

cumulus@switch:~$ netq show sensors temp <press tab>
    around     :  Go back in time to around ...
    json       :  Provide output in JSON
    psu1temp1  :  Temp Name
    psu2temp1  :  Temp Name
    temp1      :  Temp Name
    temp2      :  Temp Name
    temp3      :  Temp Name
    temp4      :  Temp Name
    temp5      :  Temp Name
    <ENTER>

This example shows the state of all temperature sensors.

cumulus@switch:~$ netq show sensor temp

Matching sensors records:
Hostname          Name            Description                         State      Temp     Critical Max      Min      Message                             Last Changed
----------------- --------------- ----------------------------------- ---------- -------- -------- -------- -------- ----------------------------------- -------------------------
border01          psu1temp1       psu1 temp sensor                    ok         25       85       80       5                                            Tue Aug 25 21:45:21 2020
border01          temp2           board sensor near virtual switch    ok         25       85       80       5                                            Tue Aug 25 21:45:21 2020
border01          temp3           board sensor at front left corner   ok         25       85       80       5                                            Tue Aug 25 21:45:21 2020
border01          temp1           board sensor near cpu               ok         25       85       80       5                                            Tue Aug 25 21:45:21 2020
border01          temp4           board sensor at front right corner  ok         25       85       80       5                                            Tue Aug 25 21:45:21 2020
border01          psu2temp1       psu2 temp sensor                    ok         25       85       80       5                                            Tue Aug 25 21:45:21 2020
border01          temp5           board sensor near fan               ok         25       85       80       5                                            Tue Aug 25 21:45:21 2020
border02          temp1           board sensor near cpu               ok         25       85       80       5                                            Tue Aug 25 21:39:36 2020
border02          temp5           board sensor near fan               ok         25       85       80       5                                            Tue Aug 25 21:39:36 2020
border02          temp3           board sensor at front left corner   ok         25       85       80       5                                            Tue Aug 25 21:39:36 2020
border02          temp4           board sensor at front right corner  ok         25       85       80       5                                            Tue Aug 25 21:39:36 2020
border02          psu2temp1       psu2 temp sensor                    ok         25       85       80       5                                            Tue Aug 25 21:39:36 2020
border02          psu1temp1       psu1 temp sensor                    ok         25       85       80       5                                            Tue Aug 25 21:39:36 2020
border02          temp2           board sensor near virtual switch    ok         25       85       80       5                                            Tue Aug 25 21:39:36 2020
fw1               temp4           board sensor at front right corner  ok         25       85       80       5                                            Wed Aug 26 00:08:01 2020
fw1               temp3           board sensor at front left corner   ok         25       85       80       5                                            Wed Aug 26 00:08:01 2020
fw1               psu1temp1       psu1 temp sensor                    ok         25       85       80       5                                            Wed Aug 26 00:08:01 2020
fw1               temp1           board sensor near cpu               ok         25       85       80       5                                            Wed Aug 26 00:08:01 2020
fw1               temp2           board sensor near virtual switch    ok         25       85       80       5                                            Wed Aug 26 00:08:01 2020
fw1               temp5           board sensor near fan               ok         25       85       80       5                                            Wed Aug 26 00:08:01 2020
fw1               psu2temp1       psu2 temp sensor                    ok         25       85       80       5                                            Wed Aug 26 00:08:01 2020
fw2               temp5           board sensor near fan               ok         25       85       80       5                                            Wed Aug 26 00:02:13 2020
fw2               temp2           board sensor near virtual switch    ok         25       85       80       5                                            Wed Aug 26 00:02:13 2020
fw2               psu2temp1       psu2 temp sensor                    ok         25       85       80       5                                            Wed Aug 26 00:02:13 2020
fw2               temp3           board sensor at front left corner   ok         25       85       80       5                                            Wed Aug 26 00:02:13 2020
fw2               temp4           board sensor at front right corner  ok         25       85       80       5                                            Wed Aug 26 00:02:13 2020
fw2               temp1           board sensor near cpu               ok         25       85       80       5                                            Wed Aug 26 00:02:13 2020
fw2               psu1temp1       psu1 temp sensor                    ok         25       85       80       5                                            Wed Aug 26 00:02:13 2020
leaf01            psu1temp1       psu1 temp sensor                    ok         25       85       80       5                                            Wed Aug 26 16:14:41 2020
leaf01            temp5           board sensor near fan               ok         25       85       80       5                                            Wed Aug 26 16:14:41 2020
leaf01            temp4           board sensor at front right corner  ok         25       85       80       5                                            Wed Aug 26 16:14:41 2020
leaf01            temp1           board sensor near cpu               ok         25       85       80       5                                            Wed Aug 26 16:14:41 2020
leaf01            temp2           board sensor near virtual switch    ok         25       85       80       5                                            Wed Aug 26 16:14:41 2020
leaf01            temp3           board sensor at front left corner   ok         25       85       80       5                                            Wed Aug 26 16:14:41 2020
leaf01            psu2temp1       psu2 temp sensor                    ok         25       85       80       5                                            Wed Aug 26 16:14:41 2020
leaf02            temp5           board sensor near fan               ok         25       85       80       5                                            Wed Aug 26 16:14:08 2020
...
spine04           psu2temp1       psu2 temp sensor                    ok         25       85       80       5                                            Wed Aug 26 10:52:00 2020
spine04           temp5           board sensor near fan               ok         25       85       80       5                                            Wed Aug 26 10:52:00 2020

This example shows the state of all temperature sensors with the name psu2temp1.

cumulus@switch:~$ netq show sensors temp psu2temp1
Matching sensors records:
Hostname          Name            Description                         State      Temp     Critical Max      Min      Message                             Last Changed
----------------- --------------- ----------------------------------- ---------- -------- -------- -------- -------- ----------------------------------- -------------------------
border01          psu2temp1       psu2 temp sensor                    ok         25       85       80       5                                            Tue Aug 25 21:45:21 2020
border02          psu2temp1       psu2 temp sensor                    ok         25       85       80       5                                            Tue Aug 25 21:39:36 2020
fw1               psu2temp1       psu2 temp sensor                    ok         25       85       80       5                                            Wed Aug 26 00:08:01 2020
fw2               psu2temp1       psu2 temp sensor                    ok         25       85       80       5                                            Wed Aug 26 00:02:13 2020
leaf01            psu2temp1       psu2 temp sensor                    ok         25       85       80       5                                            Tue Aug 25 18:30:07 2020
leaf02            psu2temp1       psu2 temp sensor                    ok         25       85       80       5                                            Tue Aug 25 18:08:38 2020
leaf03            psu2temp1       psu2 temp sensor                    ok         25       85       80       5                                            Tue Aug 25 21:20:34 2020
leaf04            psu2temp1       psu2 temp sensor                    ok         25       85       80       5                                            Wed Aug 26 14:20:22 2020
spine01           psu2temp1       psu2 temp sensor                    ok         25       85       80       5                                            Wed Aug 26 10:53:17 2020
spine02           psu2temp1       psu2 temp sensor                    ok         25       85       80       5                                            Wed Aug 26 10:54:07 2020
spine03           psu2temp1       psu2 temp sensor                    ok         25       85       80       5                                            Wed Aug 26 11:00:44 2020
spine04           psu2temp1       psu2 temp sensor                    ok         25       85       80       5                                            Wed Aug 26 10:52:00 2020

View Digital Optics Information

Digital optics information is available for any digital optics modules in the system from the NetQ UI and NetQ CLI.

Use the filter option to view laser power and bias current for a given interface and channel on a switch, and temperature and voltage for a given module. Select the relevant tab to view the data.

  1. Click (main menu), then click Digital Optics in the Network heading.

  2. The Laser Rx Power tab is displayed by default.

  3. Click each of the other Laser or Module tabs to view that information for all devices.

To view digital optics information for your switches and host servers, run one of the following:

netq show dom type (laser_rx_power|laser_output_power|laser_bias_current) [interface <text-dom-port-anchor>] [channel_id <text-channel-id>] [around <text-time>] [json]
netq show dom type (module_temperature|module_voltage) [interface <text-dom-port-anchor>] [around <text-time>] [json]
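
For example, to view receive power on a single interface and channel, assuming an interface such as swp53s0 (which appears in the output below) and a channel ID of 1, you could run:

netq show dom type laser_rx_power interface swp53s0 channel_id 1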

This example shows module temperature information for all devices.

cumulus@switch:~$ netq show dom type module_temperature
Matching dom records:
Hostname          Interface  type                 high_alarm_threshold low_alarm_threshold  high_warning_thresho low_warning_threshol value                Last Updated
                                                                                            ld                   d
----------------- ---------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- ------------------------
...
spine01           swp53s0    module_temperature   {'degree_c': 85,     {'degree_c': -10,    {'degree_c': 70,     {'degree_c': 0,      {'degree_c': 32,     Wed Jul  1 15:25:56 2020
                                                  'degree_f': 185}     'degree_f': 14}      'degree_f': 158}     'degree_f': 32}      'degree_f': 89.6}
spine01           swp35      module_temperature   {'degree_c': 75,     {'degree_c': -5,     {'degree_c': 70,     {'degree_c': 0,      {'degree_c': 27.82,  Wed Jul  1 15:25:56 2020
                                                  'degree_f': 167}     'degree_f': 23}      'degree_f': 158}     'degree_f': 32}      'degree_f': 82.08}
spine01           swp55      module_temperature   {'degree_c': 75,     {'degree_c': -5,     {'degree_c': 70,     {'degree_c': 0,      {'degree_c': 26.29,  Wed Jul  1 15:25:56 2020
                                                  'degree_f': 167}     'degree_f': 23}      'degree_f': 158}     'degree_f': 32}      'degree_f': 79.32}
spine01           swp9       module_temperature   {'degree_c': 78,     {'degree_c': -13,    {'degree_c': 73,     {'degree_c': -8,     {'degree_c': 25.57,  Wed Jul  1 15:25:56 2020
                                                  'degree_f': 172.4}   'degree_f': 8.6}     'degree_f': 163.4}   'degree_f': 17.6}    'degree_f': 78.02}
spine01           swp56      module_temperature   {'degree_c': 78,     {'degree_c': -10,    {'degree_c': 75,     {'degree_c': -5,     {'degree_c': 29.43,  Wed Jul  1 15:25:56 2020
                                                  'degree_f': 172.4}   'degree_f': 14}      'degree_f': 167}     'degree_f': 23}      'degree_f': 84.97}
...

View Software Inventory across the Network

You can view software components deployed on all switches and hosts, or on all of the switches in your network.

View the Operating Systems Information

Knowing which operating systems (OSs) are deployed across your network is useful for upgrade planning and for understanding your relative dependence on a given OS.

OS information is available from the NetQ UI and NetQ CLI.

  1. Locate the medium Inventory|Devices card on your workbench.
  2. Hover over the pie charts to view the total number of devices with a given operating system installed.

  3. Change to the large card using the size picker.

  4. Hover over a segment in the OS distribution chart to view the total number of devices with a given operating system installed.

    Note that sympathetic highlighting (in blue) is employed to show which versions of the other switch components are associated with this OS.

  5. Click a segment in the OS distribution chart.

  6. Click Filter OS at the top of the popup.

  7. The card updates to show only the components associated with switches running the selected OS. To return to all OSs, click X in the OS tag to remove the filter.

  8. Change to the full-screen card using the size picker.

  9. The All Switches tab is selected by default. Scroll to the right to locate all of the OS parameter data.

  10. Click All Hosts to view the OS parameters for all host servers.

  11. To return to your workbench, click in the top right corner of the card.

  1. Locate the Inventory|Switches card on your workbench.

  2. Hover over a segment of the OS graph in the distribution chart.

    The same information is available on the summary tab of the large size card.

  3. Hover over the card, and change to the full-screen card using the size picker.

  4. Click OS.

  5. To return to your workbench, click in the top right corner of the card.

To view OS information for your switches and host servers, run:

netq show inventory os [version <os-version>|name <os-name>] [json]

This example shows the OS information for all devices.

cumulus@switch:~$ netq show inventory os
Matching inventory records:
Hostname          Name            Version                              Last Changed
----------------- --------------- ------------------------------------ -------------------------
border01          CL              3.7.13                               Tue Jul 28 18:49:46 2020
border02          CL              3.7.13                               Tue Jul 28 18:44:42 2020
fw1               CL              3.7.13                               Tue Jul 28 19:14:27 2020
fw2               CL              3.7.13                               Tue Jul 28 19:12:50 2020
leaf01            CL              3.7.13                               Wed Jul 29 16:12:20 2020
leaf02            CL              3.7.13                               Wed Jul 29 16:12:21 2020
leaf03            CL              3.7.13                               Tue Jul 14 21:18:21 2020
leaf04            CL              3.7.13                               Tue Jul 14 20:58:47 2020
oob-mgmt-server   Ubuntu          18.04                                Mon Jul 13 21:01:35 2020
server01          Ubuntu          18.04                                Mon Jul 13 22:09:18 2020
server02          Ubuntu          18.04                                Mon Jul 13 22:09:18 2020
server03          Ubuntu          18.04                                Mon Jul 13 22:09:20 2020
server04          Ubuntu          18.04                                Mon Jul 13 22:09:20 2020
server05          Ubuntu          18.04                                Mon Jul 13 22:09:20 2020
server06          Ubuntu          18.04                                Mon Jul 13 22:09:21 2020
server07          Ubuntu          18.04                                Mon Jul 13 22:09:21 2020
server08          Ubuntu          18.04                                Mon Jul 13 22:09:22 2020
spine01           CL              3.7.12                               Mon Aug 10 19:55:06 2020
spine02           CL              3.7.12                               Mon Aug 10 19:55:07 2020
spine03           CL              3.7.12                               Mon Aug 10 19:55:09 2020
spine04           CL              3.7.12                               Mon Aug 10 19:55:08 2020

You can filter the results of the command to view only devices with a particular operating system or version. This can be especially helpful when you suspect that a particular device has not been upgraded as expected.

This example shows all devices with the Cumulus Linux version 3.7.12 installed.

cumulus@switch:~$ netq show inventory os version 3.7.12

Matching inventory records:
Hostname          Name            Version                              Last Changed
----------------- --------------- ------------------------------------ -------------------------
spine01           CL              3.7.12                               Mon Aug 10 19:55:06 2020
spine02           CL              3.7.12                               Mon Aug 10 19:55:07 2020
spine03           CL              3.7.12                               Mon Aug 10 19:55:09 2020
spine04           CL              3.7.12                               Mon Aug 10 19:55:08 2020
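
You can also filter by OS name. For example, to list only the Ubuntu hosts shown earlier, you could run:

netq show inventory os name Ubuntu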

View Cumulus Linux License Information

The state of a Cumulus Linux license can impact the function of your switches. If the license status is Bad or Missing, the license must be updated or applied for a switch to operate properly. Hosts do not require a Cumulus Linux or NetQ license.

Cumulus Linux license information is available from the NetQ UI and NetQ CLI.

  1. Locate the Inventory|Devices card on your workbench.

  2. Change to the large card using the size picker.

  3. Hover over the distribution chart for license to view the total number of devices with a given license installed.

  4. Alternatively, change to the full-screen card using the size picker.

  5. Scroll to the right to locate the License State and License Name columns. Based on these values:

    • OK: no action is required
    • Bad: validate the correct license is installed and has not expired
    • Missing: install a valid Cumulus Linux license
    • N/A: This device does not require a license; typically a host.

  6. To return to your workbench, click in the top right corner of the card.

  1. Locate the medium Inventory|Switches card on your workbench.

  2. Hover over a segment of the license graph in the distribution chart.

    The same information is available on the summary tab of the large size card.

  3. Hover over the card, and change to the full-screen card using the size picker.

  4. The Show All tab is displayed by default. Scroll to the right to locate the License State and License Name columns. Based on the state values:

    • OK: no action is required
    • Bad: validate the correct license is installed and has not expired
    • Missing: install a valid Cumulus Linux license
    • N/A: This device does not require a license; typically a host.
  5. To return to your workbench, click in the top right corner of the card.

To view license information for your switches, run:

netq show inventory license [cumulus] [status ok | status missing] [around <text-time>] [json]

Use the cumulus option to list only Cumulus Linux licenses. Use the status option to list only the switches with that status.
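
For example, to list only the switches whose Cumulus Linux license is currently missing, you could run:

netq show inventory license cumulus status missing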

This example shows the license information for all switches.

cumulus@switch:~$ netq show inventory license

Matching inventory records:
Hostname          Name            State      Last Changed
----------------- --------------- ---------- -------------------------
border01          Cumulus Linux   missing    Tue Jul 28 18:49:46 2020
border02          Cumulus Linux   missing    Tue Jul 28 18:44:42 2020
fw1               Cumulus Linux   missing    Tue Jul 28 19:14:27 2020
fw2               Cumulus Linux   missing    Tue Jul 28 19:12:50 2020
leaf01            Cumulus Linux   missing    Wed Jul 29 16:12:20 2020
leaf02            Cumulus Linux   missing    Wed Jul 29 16:12:21 2020
leaf03            Cumulus Linux   missing    Tue Jul 14 21:18:21 2020
leaf04            Cumulus Linux   missing    Tue Jul 14 20:58:47 2020
oob-mgmt-server   Cumulus Linux   N/A        Mon Jul 13 21:01:35 2020
server01          Cumulus Linux   N/A        Mon Jul 13 22:09:18 2020
server02          Cumulus Linux   N/A        Mon Jul 13 22:09:18 2020
server03          Cumulus Linux   N/A        Mon Jul 13 22:09:20 2020
server04          Cumulus Linux   N/A        Mon Jul 13 22:09:20 2020
server05          Cumulus Linux   N/A        Mon Jul 13 22:09:20 2020
server06          Cumulus Linux   N/A        Mon Jul 13 22:09:21 2020
server07          Cumulus Linux   N/A        Mon Jul 13 22:09:21 2020
server08          Cumulus Linux   N/A        Mon Jul 13 22:09:22 2020
spine01           Cumulus Linux   missing    Mon Aug 10 19:55:06 2020
spine02           Cumulus Linux   missing    Mon Aug 10 19:55:07 2020
spine03           Cumulus Linux   missing    Mon Aug 10 19:55:09 2020
spine04           Cumulus Linux   missing    Mon Aug 10 19:55:08 2020

Based on the state value:

  • OK: no action is required
  • Bad: validate the correct license is installed and has not expired
  • Missing: install a valid Cumulus Linux license
  • N/A: This device does not require a license; typically a host.

You can view the historical state of licenses using the around keyword. This example shows the license state for all devices about 7 days ago. Remember to use measurement units on the time values.

cumulus@switch:~$ netq show inventory license around 7d

Matching inventory records:
Hostname          Name            State      Last Changed
----------------- --------------- ---------- -------------------------
edge01            Cumulus Linux   N/A        Tue Apr 2 14:01:18 2019
exit01            Cumulus Linux   ok         Tue Apr 2 14:01:13 2019
exit02            Cumulus Linux   ok         Tue Apr 2 14:01:38 2019
leaf01            Cumulus Linux   ok         Tue Apr 2 20:07:09 2019
leaf02            Cumulus Linux   ok         Tue Apr 2 14:01:46 2019
leaf03            Cumulus Linux   ok         Tue Apr 2 14:01:41 2019
leaf04            Cumulus Linux   ok         Tue Apr 2 14:01:32 2019
server01          Cumulus Linux   N/A        Tue Apr 2 14:01:55 2019
server02          Cumulus Linux   N/A        Tue Apr 2 14:01:55 2019
server03          Cumulus Linux   N/A        Tue Apr 2 14:01:55 2019
server04          Cumulus Linux   N/A        Tue Apr 2 14:01:55 2019
spine01           Cumulus Linux   ok         Tue Apr 2 14:01:49 2019
spine02           Cumulus Linux   ok         Tue Apr 2 14:01:05 2019

View the Supported Cumulus Linux Packages

When you are troubleshooting an issue with a switch, you might want to know what versions of the Cumulus Linux operating system are supported on that switch and on a switch that is not having the same issue.

To view package information for your switches, run:

netq show cl-manifest [json]

This example shows the OS packages supported for all switches.

cumulus@switch:~$ netq show cl-manifest

Matching manifest records:
Hostname          ASIC Vendor          CPU Arch             Manifest Version
----------------- -------------------- -------------------- --------------------
border01          vx                   x86_64               3.7.6.1
border01          vx                   x86_64               3.7.10
border01          vx                   x86_64               3.7.11
border01          vx                   x86_64               3.6.2.1
...
fw1               vx                   x86_64               3.7.6.1
fw1               vx                   x86_64               3.7.10
fw1               vx                   x86_64               3.7.11
fw1               vx                   x86_64               3.6.2.1
...
leaf01            vx                   x86_64               4.1.0
leaf01            vx                   x86_64               4.0.0
leaf01            vx                   x86_64               3.6.2
leaf01            vx                   x86_64               3.7.2
...
leaf02            vx                   x86_64               3.7.6.1
leaf02            vx                   x86_64               3.7.10
leaf02            vx                   x86_64               3.7.11
leaf02            vx                   x86_64               3.6.2.1
...
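
If you want to process the manifest data programmatically, you can request JSON output instead; for example:

netq show cl-manifest json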

View All Software Packages Installed

If you are having an issue with several switches, you may want to verify what software packages are installed on them and compare that to the recommended packages for a given Cumulus Linux release.

To view installed package information for your switches, run:

netq show cl-pkg-info [<text-package-name>] [around <text-time>] [json]

Use the text-package-name option to narrow the results to a particular package or the around option to narrow the output to a particular time range.

This example shows all installed software packages for all devices.

cumulus@switch:~$ netq show cl-pkg-info
Matching package_info records:
Hostname          Package Name             Version              CL Version           Package Status       Last Changed
----------------- ------------------------ -------------------- -------------------- -------------------- -------------------------
border01          libcryptsetup4           2:1.6.6-5            Cumulus Linux 3.7.13 installed            Mon Aug 17 18:53:50 2020
border01          libedit2                 3.1-20140620-2       Cumulus Linux 3.7.13 installed            Mon Aug 17 18:53:50 2020
border01          libffi6                  3.1-2+deb8u1         Cumulus Linux 3.7.13 installed            Mon Aug 17 18:53:50 2020
...
border02          libdb5.3                 9999-cl3u2           Cumulus Linux 3.7.13 installed            Mon Aug 17 18:48:53 2020
border02          libnl-cli-3-200          3.2.27-cl3u15+1      Cumulus Linux 3.7.13 installed            Mon Aug 17 18:48:53 2020
border02          pkg-config               0.28-1               Cumulus Linux 3.7.13 installed            Mon Aug 17 18:48:53 2020
border02          libjs-sphinxdoc          1.2.3+dfsg-1         Cumulus Linux 3.7.13 installed            Mon Aug 17 18:48:53 2020
...
fw1               libpcap0.8               1.8.1-3~bpo8+1       Cumulus Linux 3.7.13 installed            Mon Aug 17 19:18:57 2020
fw1               python-eventlet          0.13.0-2             Cumulus Linux 3.7.13 installed            Mon Aug 17 19:18:57 2020
fw1               libapt-pkg4.12           1.0.9.8.5-cl3u2      Cumulus Linux 3.7.13 installed            Mon Aug 17 19:18:57 2020
fw1               libopts25                1:5.18.4-3           Cumulus Linux 3.7.13 installed            Mon Aug 17 19:18:57 2020
...

This example shows the installed switchd package version.

cumulus@switch:~$ netq spine01 show cl-pkg-info switchd

Matching package_info records:
Hostname          Package Name             Version              CL Version           Package Status       Last Changed
----------------- ------------------------ -------------------- -------------------- -------------------- -------------------------
spine01           switchd                  1.0-cl3u40           Cumulus Linux 3.7.12 installed            Thu Aug 27 01:58:47 2020
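
To narrow the output to a point in time as well, you can combine the package name with the around option. For example, assuming the around option accepts relative times such as 24h (the format used with the between filter elsewhere in this guide), the following command would show the ntp package as reported a day ago; the output uses the same columns as the examples above:

netq show cl-pkg-info ntp around 24h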

You can determine whether any of your switches are using a software package other than the default package associated with the Cumulus Linux release that is running on the switches. Use this list to determine which packages to install/upgrade on all devices. Additionally, you can determine if a software package is missing.

To view recommended package information for your switches, run:

netq show recommended-pkg-version [release-id <text-release-id>] [package-name <text-package-name>] [json]

The output may be rather lengthy if this command is run for all releases and packages. If desired, run the command using the release-id and/or package-name options to shorten the output.

This example looks for switches running Cumulus Linux 3.7.1 and switchd. The result is a single switch, leaf12, that has older software and is recommended for update.

cumulus@switch:~$ netq show recommended-pkg-version release-id 3.7.1 package-name switchd
Matching manifest records:
Hostname          Release ID           ASIC Vendor          CPU Arch             Package Name         Version              Last Changed
----------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------------
leaf12            3.7.1                vx                   x86_64               switchd              1.0-cl3u30           Wed Feb  5 04:36:30 2020

This example looks for switches running Cumulus Linux 3.7.1 and ptmd. The result is a single switch, server01, that has older software and is recommended for update.

cumulus@switch:~$ netq show recommended-pkg-version release-id 3.7.1 package-name ptmd
Matching manifest records:
Hostname          Release ID           ASIC Vendor          CPU Arch             Package Name         Version              Last Changed
----------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------------
server01          3.7.1                vx                   x86_64               ptmd                 3.0-2-cl3u8          Wed Feb  5 04:36:30 2020

This example looks for switches running Cumulus Linux 3.7.1 and lldpd. The result is a single switch, server01, that has older software and is recommended for update.

cumulus@switch:~$ netq show recommended-pkg-version release-id 3.7.1 package-name lldpd
Matching manifest records:
Hostname          Release ID           ASIC Vendor          CPU Arch             Package Name         Version              Last Changed
----------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------------
server01          3.7.1                vx                   x86_64               lldpd                0.9.8-0-cl3u11       Wed Feb  5 04:36:30 2020

This example looks for switches running Cumulus Linux 3.6.2 and switchd. The result is a single switch, leaf04, that has older software and is recommended for update.

cumulus@noc-pr:~$ netq show recommended-pkg-version release-id 3.6.2 package-name switchd
Matching manifest records:
Hostname          Release ID           ASIC Vendor          CPU Arch             Package Name         Version              Last Changed
----------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------------
leaf04            3.6.2                vx                   x86_64               switchd              1.0-cl3u27           Wed Feb  5 04:36:30 2020

View ACL Resources

Using the NetQ CLI, you can monitor the incoming and outgoing access control lists (ACLs) configured on all switches, currently or at a time in the past.

To view ACL resources for all of your switches, run:

netq show cl-resource acl [ingress | egress] [around <text-time>] [json]

Use the egress or ingress options to show only the outgoing or incoming ACLs. Use the around option to show this information for a time in the past.

This example shows the ACL resources for all configured switches:

cumulus@switch:~$ netq show cl-resource acl
Matching cl_resource records:
Hostname          In IPv4 filter       In IPv4 Mangle       In IPv6 filter       In IPv6 Mangle       In 8021x filter      In Mirror            In PBR IPv4 filter   In PBR IPv6 filter   Eg IPv4 filter       Eg IPv4 Mangle       Eg IPv6 filter       Eg IPv6 Mangle       ACL Regions          18B Rules Key        32B Rules Key        54B Rules Key        L4 Port range Checke Last Updated
                                                                                                                                                                                                                                                                                                                                                                  rs
----------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- ------------------------
act-5712-09       40,512(7%)           0,0(0%)              30,768(3%)           0,0(0%)              0,0(0%)              0,0(0%)              0,0(0%)              0,0(0%)              32,256(12%)          0,0(0%)              0,0(0%)              0,0(0%)              0,0(0%)              0,0(0%)              0,0(0%)              0,0(0%)              2,24(8%)             Tue Aug 18 20:20:39 2020
mlx-2700-04       0,0(0%)              0,0(0%)              0,0(0%)              0,0(0%)              0,0(0%)              0,0(0%)              0,0(0%)              0,0(0%)              0,0(0%)              0,0(0%)              0,0(0%)              0,0(0%)              4,400(1%)            2,2256(0%)           0,1024(0%)           2,1024(0%)           0,0(0%)              Tue Aug 18 20:19:08 2020

The same information can be output to JSON format:

cumulus@noc-pr:~$ netq show cl-resource acl json
{
    "cl_resource":[
        {
            "egIpv6Mangle":"0,0(0%)",
            "egIpv6Filter":"0,0(0%)",
            "inIpv6Mangle":"0,0(0%)",
            "egIpv4Mangle":"0,0(0%)",
            "egIpv4Filter":"32,256(12%)",
            "inIpv4Mangle":"0,0(0%)",
            "in8021XFilter":"0,0(0%)",
            "inPbrIpv4Filter":"0,0(0%)",
            "inPbrIpv6Filter":"0,0(0%)",
            "l4PortRangeCheckers":"2,24(8%)",
            "lastUpdated":1597782039.632999897,
            "inMirror":"0,0(0%)",
            "hostname":"act-5712-09",
            "54bRulesKey":"0,0(0%)",
            "18bRulesKey":"0,0(0%)",
            "32bRulesKey":"0,0(0%)",
            "inIpv6Filter":"30,768(3%)",
            "aclRegions":"0,0(0%)",
            "inIpv4Filter":"40,512(7%)"
        },
        {
            "egIpv6Mangle":"0,0(0%)",
            "egIpv6Filter":"0,0(0%)",
            "inIpv6Mangle":"0,0(0%)",
            "egIpv4Mangle":"0,0(0%)",
            "egIpv4Filter":"0,0(0%)",
            "inIpv4Mangle":"0,0(0%)",
            "in8021XFilter":"0,0(0%)",
            "inPbrIpv4Filter":"0,0(0%)",
            "inPbrIpv6Filter":"0,0(0%)",
            "l4PortRangeCheckers":"0,0(0%)",
            "lastUpdated":1597781948.3259999752,
            "inMirror":"0,0(0%)",
            "hostname":"mlx-2700-04",
            "54bRulesKey":"2,1024(0%)",
            "18bRulesKey":"2,2256(0%)",
            "32bRulesKey":"0,1024(0%)",
            "inIpv6Filter":"0,0(0%)",
            "aclRegions":"4,400(1%)",
            "inIpv4Filter":"0,0(0%)"
        }
    ],
    "truncatedResult":false
}
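
To focus on one direction only, include the ingress or egress keyword. For example, the following command (shown without its output, which uses the same table layout as above) lists only the incoming ACL resources:

netq show cl-resource acl ingress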

View Forwarding Resources

With the NetQ CLI, you can monitor the amount of forwarding resources used by all devices, currently or at a time in the past.

To view forwarding resources for all of your switches, run:

netq show cl-resource forwarding [around <text-time>] [json]

Use the around option to show this information for a time in the past.

This example shows forwarding resources for all configured switches:

cumulus@noc-pr:~$ netq show cl-resource forwarding
Matching cl_resource records:
Hostname          IPv4 host entries    IPv6 host entries    IPv4 route entries   IPv6 route entries   ECMP nexthops        MAC entries          Total Mcast Routes   Last Updated
----------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- ------------------------
act-5712-09       0,16384(0%)          0,0(0%)              0,131072(0%)         23,20480(0%)         0,16330(0%)          0,32768(0%)          0,8192(0%)           Tue Aug 18 20:20:39 2020
mlx-2700-04       0,32768(0%)          0,16384(0%)          0,65536(0%)          4,28672(0%)          0,4101(0%)           0,40960(0%)          0,1000(0%)           Tue Aug 18 20:19:08 2020
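
To see these counters as they were at an earlier time, add the around option. For example, assuming a relative time value such as 24h is accepted (the same format used with the between filter elsewhere in this guide):

netq show cl-resource forwarding around 24h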

View NetQ Agents

NetQ Agent information is available from the NetQ UI and NetQ CLI.

To view the NetQ Agents on all switches and hosts:

  1. Click to open the Main menu.

  2. Select Agents from the Network column.

  3. View the Version column to determine which release of the NetQ Agent is running on your devices. Ideally, this version should match the NetQ release you are running and should be the same across all of your devices.

It is recommended that when you upgrade NetQ, you also upgrade the NetQ Agents. You can verify that you have upgraded all of your agents using the medium or large Switch Inventory card. To view the NetQ Agent distribution by version:

  1. Open the medium Switch Inventory card.

  2. View the number in the Unique column next to Agent.

  1. If the number is greater than one, you have multiple NetQ Agent versions deployed.

  2. If you have multiple versions, hover over the Agent chart to view the count of switches using each version.

  3. For more detail, switch to the large Switch Inventory card.

  4. Hover over the card and click to open the Software tab.

  1. Hover over the chart on the right to view the number of switches using the various versions of the NetQ Agent.

  2. Hover over the Operating System chart to see which NetQ Agent versions are being run on each OS.

  1. Click either chart to focus on a particular OS or agent version.

  2. To return to the full view, click in the filter tag.

  3. Filter the data on the card by switches that are having trouble communicating, by selecting Rotten Switches from the dropdown above the charts.

  4. Open the full screen Inventory|Switches card. The Show All tab is displayed by default, and shows the NetQ Agent status and version for all devices.

To view the NetQ Agents on all switches and hosts, run:

netq show agents [fresh | rotten ] [around <text-time>] [json]

Use the fresh keyword to view only the NetQ Agents that are in current communication with the NetQ Platform or NetQ Collector. Use the rotten keyword to view those that are not. Use the around keyword to view the state of NetQ Agents at an earlier time.

This example shows the current NetQ Agent state on all devices. View the Status column, which indicates whether the agent is up and current (labelled Fresh) or down and stale (labelled Rotten). Additional information is provided about the agent status, including whether it is time synchronized, how long it has been up, and the last time its state changed. You can also see the version running. Ideally, this version should match the NetQ release you are running and should be the same across all of your devices.


cumulus@switch:~$ netq show agents
Matching agents records:
Hostname          Status           NTP Sync Version                              Sys Uptime                Agent Uptime              Reinitialize Time          Last Changed
----------------- ---------------- -------- ------------------------------------ ------------------------- ------------------------- -------------------------- -------------------------
border01          Fresh            yes      3.1.0-cl3u28~1594095615.8f00ba1      Tue Jul 28 18:48:31 2020  Tue Jul 28 18:49:46 2020  Tue Jul 28 18:49:46 2020   Sun Aug 23 18:56:56 2020
border02          Fresh            yes      3.1.0-cl3u28~1594095615.8f00ba1      Tue Jul 28 18:43:29 2020  Tue Jul 28 18:44:42 2020  Tue Jul 28 18:44:42 2020   Sun Aug 23 18:49:57 2020
fw1               Fresh            yes      3.1.0-cl3u28~1594095615.8f00ba1      Tue Jul 28 19:13:26 2020  Tue Jul 28 19:14:28 2020  Tue Jul 28 19:14:28 2020   Sun Aug 23 19:24:01 2020
fw2               Fresh            yes      3.1.0-cl3u28~1594095615.8f00ba1      Tue Jul 28 19:11:27 2020  Tue Jul 28 19:12:51 2020  Tue Jul 28 19:12:51 2020   Sun Aug 23 19:21:13 2020
leaf01            Fresh            yes      3.1.0-cl3u28~1594095615.8f00ba1      Tue Jul 14 21:04:03 2020  Wed Jul 29 16:12:22 2020  Wed Jul 29 16:12:22 2020   Sun Aug 23 16:16:09 2020
leaf02            Fresh            yes      3.1.0-cl3u28~1594095615.8f00ba1      Tue Jul 14 20:59:10 2020  Wed Jul 29 16:12:23 2020  Wed Jul 29 16:12:23 2020   Sun Aug 23 16:16:48 2020
leaf03            Fresh            yes      3.1.0-cl3u28~1594095615.8f00ba1      Tue Jul 14 21:04:03 2020  Tue Jul 14 21:18:23 2020  Tue Jul 14 21:18:23 2020   Sun Aug 23 21:25:16 2020
leaf04            Fresh            yes      3.1.0-cl3u28~1594095615.8f00ba1      Tue Jul 14 20:57:30 2020  Tue Jul 14 20:58:48 2020  Tue Jul 14 20:58:48 2020   Sun Aug 23 21:09:06 2020
oob-mgmt-server   Fresh            yes      3.1.0-ub18.04u28~1594095612.8f00ba1  Mon Jul 13 17:07:59 2020  Mon Jul 13 21:01:35 2020  Tue Jul 14 19:36:19 2020   Sun Aug 23 15:45:05 2020
server01          Fresh            yes      3.1.0-ub18.04u28~1594095612.8f00ba1  Mon Jul 13 18:30:46 2020  Mon Jul 13 22:09:19 2020  Tue Jul 14 19:36:22 2020   Sun Aug 23 19:43:34 2020
server02          Fresh            yes      3.1.0-ub18.04u28~1594095612.8f00ba1  Mon Jul 13 18:30:46 2020  Mon Jul 13 22:09:19 2020  Tue Jul 14 19:35:59 2020   Sun Aug 23 19:48:07 2020
server03          Fresh            yes      3.1.0-ub18.04u28~1594095612.8f00ba1  Mon Jul 13 18:30:46 2020  Mon Jul 13 22:09:20 2020  Tue Jul 14 19:36:22 2020   Sun Aug 23 19:47:47 2020
server04          Fresh            yes      3.1.0-ub18.04u28~1594095612.8f00ba1  Mon Jul 13 18:30:46 2020  Mon Jul 13 22:09:20 2020  Tue Jul 14 19:35:59 2020   Sun Aug 23 19:47:52 2020
server05          Fresh            yes      3.1.0-ub18.04u28~1594095612.8f00ba1  Mon Jul 13 18:30:46 2020  Mon Jul 13 22:09:20 2020  Tue Jul 14 19:36:02 2020   Sun Aug 23 19:46:27 2020
server06          Fresh            yes      3.1.0-ub18.04u28~1594095612.8f00ba1  Mon Jul 13 18:30:46 2020  Mon Jul 13 22:09:21 2020  Tue Jul 14 19:36:37 2020   Sun Aug 23 19:47:37 2020
server07          Fresh            yes      3.1.0-ub18.04u28~1594095612.8f00ba1  Mon Jul 13 17:58:02 2020  Mon Jul 13 22:09:21 2020  Tue Jul 14 19:36:01 2020   Sun Aug 23 18:01:08 2020
server08          Fresh            yes      3.1.0-ub18.04u28~1594095612.8f00ba1  Mon Jul 13 17:58:18 2020  Mon Jul 13 22:09:23 2020  Tue Jul 14 19:36:03 2020   Mon Aug 24 09:10:38 2020
spine01           Fresh            yes      3.1.0-cl3u28~1594095615.8f00ba1      Mon Jul 13 17:48:43 2020  Mon Aug 10 19:55:07 2020  Mon Aug 10 19:55:07 2020   Sun Aug 23 19:57:05 2020
spine02           Fresh            yes      3.1.0-cl3u28~1594095615.8f00ba1      Mon Jul 13 17:47:39 2020  Mon Aug 10 19:55:09 2020  Mon Aug 10 19:55:09 2020   Sun Aug 23 19:56:39 2020
spine03           Fresh            yes      3.1.0-cl3u28~1594095615.8f00ba1      Mon Jul 13 17:47:40 2020  Mon Aug 10 19:55:12 2020  Mon Aug 10 19:55:12 2020   Sun Aug 23 19:57:29 2020
spine04           Fresh            yes      3.1.0-cl3u28~1594095615.8f00ba1      Mon Jul 13 17:47:56 2020  Mon Aug 10 19:55:11 2020  Mon Aug 10 19:55:11 2020   Sun Aug 23 19:58:23 2020
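
To list only the agents that are no longer communicating, use the rotten keyword described above. For example, the following command (shown without output; any results use the same columns as above) displays only stale agents:

netq show agents rotten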

Monitor Switch Inventory

With the NetQ UI and NetQ CLI, you can monitor your inventory of switches across the network or individually. A user can monitor such items as operating system, motherboard, ASIC, microprocessor, disk, memory, fan and power supply information. Being able to monitor this inventory aids in upgrades, compliance, and other planning tasks.

The commands and cards available to obtain this type of information help you answer a variety of inventory questions about your switches.

To monitor networkwide inventory, refer to Monitor Networkwide Inventory.

Access Switch Inventory Data

The Cumulus NetQ UI provides the Inventory | Switches card for monitoring the hardware and software component inventory on switches running NetQ in your network. Access this card from the Cumulus Workbench, or add it to your own workbench by clicking (Add card) > Inventory > Inventory|Switches card > Open Cards.

The CLI provides detailed switch inventory information through its netq <hostname> show inventory command.

View Switch Inventory Summary

Component information for all of the switches in your network can be viewed from both the NetQ UI and NetQ CLI.

View the Number of Types of Any Component Deployed

For each of the components monitored on a switch, NetQ displays the variety of those components by way of a count. For example, if you have three operating systems running on your switches, say Cumulus Linux, Ubuntu and RHEL, NetQ indicates a total unique count of three OSs. If you only use Cumulus Linux, then the count shows as one.

To view this count for all of the components on the switch:

  1. Open the medium Switch Inventory card.
  1. Note the number in the Unique column for each component.

    In the above example, there are four different disk sizes deployed, four different OSs running, four different ASIC vendors and models deployed, and so forth.

  2. Scroll down to see additional components.

By default, the data is shown for switches with a fresh communication status. You can choose to look at the data for switches in the rotten state instead. For example, if you wanted to see whether there was any correlation between the OS version and a switch having a rotten status, you could select Rotten Switches from the dropdown at the top of the card and check whether they all use the same OS (the count would be 1). It may not be the cause of the lack of communication, but it can help you narrow down the possibilities.

View the Distribution of Any Component Deployed

NetQ monitors a number of switch components. For each component you can view the distribution of versions or models or vendors deployed across your network for that component.

To view the distribution:

  1. Locate the Inventory|Switches card on your workbench.

  2. From the medium or large card, view the distribution of hardware and software components across the network.

  1. Hover over any of the segments in the distribution chart to highlight a specific component. Scroll down to view additional components.

On the large Switch Inventory card, hovering also highlights the related components for the selected component (highlighted in blue in the UI).

  1. Choose Rotten Switches from the dropdown to see which, if any, switches are currently not communicating with NetQ.
  1. Return to your fresh switches, then hover over the card header and change to the small size card using the size picker.

To view the hardware and software components for a switch, run:

netq <hostname> show inventory brief

This example shows the type of switch (Cumulus VX), operating system (Cumulus Linux), CPU (x86_64), and ASIC (virtual) for the spine01 switch.

cumulus@switch:~$ netq spine01 show inventory brief
Matching inventory records:
Hostname          Switch               OS              CPU      ASIC            Ports
----------------- -------------------- --------------- -------- --------------- -----------------------------------
spine01           VX                   CL              x86_64   VX              N/A

This example shows the components on the NetQ On-premises or Cloud Appliance.

cumulus@switch:~$ netq show inventory brief opta
Matching inventory records:
Hostname          Switch               OS              CPU      ASIC            Ports
----------------- -------------------- --------------- -------- --------------- -----------------------------------
netq-ts           N/A                  Ubuntu          x86_64   N/A             N/A

View Switch Hardware Inventory

You can view hardware components deployed on each switch in your network.

View ASIC Information for a Switch

ASIC information for a switch can be viewed from either the NetQ CLI or NetQ UI.

  1. Locate the medium Inventory|Switches card on your workbench.

  2. Change to the full-screen card and click ASIC.

  1. Click to quickly locate a switch that does not appear on the first page of the switch list.

  2. Select hostname from the Field dropdown.

  3. Enter the hostname of the switch you want to view, and click Apply.

  1. To return to your workbench, click in the top right corner of the card.

To view information about the ASIC on a switch, run:

netq [<hostname>] show inventory asic [opta] [json]

This example shows the ASIC information for the leaf02 switch.

cumulus@switch:~$ netq leaf02 show inventory asic
Matching inventory records:
Hostname          Vendor               Model                          Model ID                  Core BW        Ports
----------------- -------------------- ------------------------------ ------------------------- -------------- -----------------------------------
leaf02            Mellanox             Spectrum                       MT52132                   N/A            32 x 100G-QSFP28

This example shows the ASIC information for the NetQ On-premises or Cloud Appliance.

cumulus@switch:~$ netq show inventory asic opta
Matching inventory records:
Hostname          Vendor               Model                          Model ID                  Core BW        Ports
----------------- -------------------- ------------------------------ ------------------------- -------------- -----------------------------------
netq-ts           Mellanox             Spectrum                       MT52132                   N/A            32 x 100G-QSFP28

View Motherboard Information for a Switch

Motherboard/platform information is available from the NetQ UI and NetQ CLI.

  1. Locate the medium Inventory|Switches card on your workbench.

  2. Hover over the card, and change to the full-screen card using the size picker.

  3. Click Platform.

  1. Click to quickly locate a switch that does not appear on the first page of the switch list.

  2. Select hostname from the Field dropdown.

  3. Enter the hostname of the switch you want to view, and click Apply.

  1. To return to your workbench, click in the top right corner of the card.

To view a list of motherboards installed in a switch, run:

netq [<hostname>] show inventory board [opta] [json]

This example shows all of the motherboard data for the spine01 switch.

cumulus@switch:~$ netq spine01 show inventory board
Matching inventory records:
Hostname          Vendor               Model                          Base MAC           Serial No                 Part No          Rev    Mfg Date
----------------- -------------------- ------------------------------ ------------------ ------------------------- ---------------- ------ ----------
spine01           Dell                 S6000-ON                       44:38:39:00:80:00  N/A                       N/A              N/A    N/A

Use the opta option without the hostname option to view the motherboard data for the NetQ On-premises or Cloud Appliance. No motherboard data is available for NetQ On-premises or Cloud VMs.

View CPU Information for a Switch

CPU information is available from the NetQ UI and NetQ CLI.

  1. Locate the Inventory|Switches card on your workbench.

  2. Hover over the card, and change to the full-screen card using the size picker.

  3. Click CPU.

  1. Click to quickly locate a switch that does not appear on the first page of the switch list.

  2. Select hostname from the Field dropdown. Then enter the hostname of the switch you want to view.

  1. To return to your workbench, click in the top right corner of the card.

To view CPU information for a switch in your network, run:

netq [<hostname>] show inventory cpu [arch <cpu-arch>] [opta] [json]

This example shows CPU information for the server02 switch.

cumulus@switch:~$ netq server02 show inventory cpu
Matching inventory records:
Hostname          Arch     Model                          Freq       Cores
----------------- -------- ------------------------------ ---------- -----
server02          x86_64   Intel Core i7 9xx (Nehalem Cla N/A        1
                            ss Core i7)

This example shows the CPU information for the NetQ On-premises or Cloud Appliance.

cumulus@switch:~$ netq show inventory cpu opta
Matching inventory records:
Hostname          Arch     Model                          Freq       Cores
----------------- -------- ------------------------------ ---------- -----
netq-ts           x86_64   Intel Xeon Processor (Skylake, N/A        8
                           IBRS)
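
The syntax above also accepts an arch filter. For example, the following command (shown without output; the columns match the examples above) would limit the results to devices with the x86_64 architecture:

netq show inventory cpu arch x86_64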

View Disk Information for a Switch

Disk information is available from the NetQ UI and NetQ CLI.

  1. Locate the Inventory|Switches card on your workbench.

  2. Hover over the card, and change to the full-screen card using the size picker.

  3. Click Disk.

  1. Click to quickly locate a switch that does not appear on the first page of the switch list.

  2. Select hostname from the Field dropdown. Then enter the hostname of the switch you want to view.

  1. To return to your workbench, click in the top right corner of the card.

To view disk information for a switch in your network, run:

netq [<hostname>] show inventory disk [opta] [json]

This example shows the disk information for the leaf03 switch.

cumulus@switch:~$ netq leaf03 show inventory disk
Matching inventory records:
Hostname          Name            Type             Transport          Size       Vendor               Model
----------------- --------------- ---------------- ------------------ ---------- -------------------- ------------------------------
leaf03            vda             disk             N/A                6G         0x1af4               N/A

This example shows the disk information for the NetQ On-premises or Cloud Appliance.

cumulus@switch:~$ netq show inventory disk opta

Matching inventory records:
Hostname          Name            Type             Transport          Size       Vendor               Model
----------------- --------------- ---------------- ------------------ ---------- -------------------- ------------------------------
netq-ts           vda             disk             N/A                265G       0x1af4               N/A

View Memory Information for a Switch

Memory information is available from the NetQ UI and NetQ CLI.

  1. Locate the medium Inventory|Switches card on your workbench.

  2. Hover over the card, and change to the full-screen card using the size picker.

  3. Click Memory.

  1. Click to quickly locate a switch that does not appear on the first page of the switch list.

  2. Select hostname from the Field dropdown. Then enter the hostname of the switch you want to view.

  1. To return to your workbench, click in the top right corner of the card.

To view memory information for your switches and host servers, run:

netq [<hostname>] show inventory memory [opta] [json]

This example shows all of the memory characteristics for the leaf01 switch.

cumulus@switch:~$ netq leaf01 show inventory memory
Matching inventory records:
Hostname          Name            Type             Size       Speed      Vendor               Serial No
----------------- --------------- ---------------- ---------- ---------- -------------------- -------------------------
leaf01            DIMM 0          RAM              768 MB     Unknown    QEMU                 Not Specified

This example shows the memory information for the NetQ On-premises or Cloud Appliance.

cumulus@switch:~$ netq show inventory memory opta
Matching inventory records:
Hostname          Name            Type             Size       Speed      Vendor               Serial No
----------------- --------------- ---------------- ---------- ---------- -------------------- -------------------------
netq-ts           DIMM 0          RAM              16384 MB   Unknown    QEMU                 Not Specified
netq-ts           DIMM 1          RAM              16384 MB   Unknown    QEMU                 Not Specified
netq-ts           DIMM 2          RAM              16384 MB   Unknown    QEMU                 Not Specified
netq-ts           DIMM 3          RAM              16384 MB   Unknown    QEMU                 Not Specified

View Switch Software Inventory

You can view software components deployed on a given switch in your network.

View Operating System Information for a Switch

OS information is available from the NetQ UI and NetQ CLI.

  1. Locate the Inventory|Switches card on your workbench.

  2. Hover over the card, and change to the full-screen card using the size picker.

  3. Click OS.

  1. Click to quickly locate a switch that does not appear on the first page of the switch list.

  2. Enter a hostname, then click Apply.

  1. To return to your workbench, click in the top right corner of the card.

To view OS information for a switch, run:

netq [<hostname>] show inventory os [opta] [json]

This example shows the OS information for the leaf02 switch.

cumulus@switch:~$ netq leaf02 show inventory os
Matching inventory records:
Hostname          Name            Version                              Last Changed
----------------- --------------- ------------------------------------ -------------------------
leaf02            CL              3.7.5                                Fri Apr 19 16:01:46 2019

This example shows the OS information for the NetQ On-premises or Cloud Appliance.

cumulus@switch:~$ netq show inventory os opta

Matching inventory records:
Hostname          Name            Version                              Last Changed
----------------- --------------- ------------------------------------ -------------------------
netq-ts           Ubuntu          18.04                                Tue Jul 14 19:27:39 2020

View Cumulus Linux License Information for a Switch

It is important to know when switches have invalid or missing Cumulus Linux licenses, because not all features are operational without a valid license. If the license status is Bad or Missing, the license must be updated or applied for the switch to operate properly. Hosts do not require a Cumulus Linux or NetQ license.

Cumulus Linux license information is available from the NetQ UI and NetQ CLI.

  1. Locate the Inventory|Switches card on your workbench.

  2. Hover over the card, and change to the full-screen card using the size picker.

  3. The Show All tab is displayed by default.

  4. Click to quickly locate a switch that does not appear on the first page of the switch list.

  5. Select hostname from the Field dropdown. Then enter the hostname of the switch you want to view.

  1. To return to your workbench, click in the top right corner of the card.

To view license information for a switch, run:

netq <hostname> show inventory license [opta] [around <text-time>] [json]

This example shows the license status for the leaf02 switch.

cumulus@switch:~$ netq leaf02 show inventory license
Matching inventory records:
Hostname          Name            State      Last Changed
----------------- --------------- ---------- -------------------------
leaf02            Cumulus Linux   ok         Fri Apr 19 16:01:46 2020
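
To check the license state at an earlier time, add the around option from the syntax above. For example, assuming a relative time value such as 24h is accepted:

netq leaf02 show inventory license around 24h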

View the Cumulus Linux Packages on a Switch

When you are troubleshooting an issue with a switch, you might want to know which versions of the Cumulus Linux operating system are supported on that switch, and compare them with the versions supported on a switch that is not experiencing the same issue.

To view package information for your switches, run:

netq <hostname> show cl-manifest [json]

This example shows the Cumulus Linux OS versions supported for the leaf01 switch, which uses the vx ASIC vendor (a virtual, simulated ASIC) and the x86_64 CPU architecture.

cumulus@switch:~$ netq leaf01 show cl-manifest

Matching manifest records:
Hostname          ASIC Vendor          CPU Arch             Manifest Version
----------------- -------------------- -------------------- --------------------
leaf01            vx                   x86_64               3.7.6.1
leaf01            vx                   x86_64               3.7.10
leaf01            vx                   x86_64               3.6.2.1
leaf01            vx                   x86_64               3.7.4
leaf01            vx                   x86_64               3.7.2.5
leaf01            vx                   x86_64               3.7.1
leaf01            vx                   x86_64               3.6.0
leaf01            vx                   x86_64               3.7.0
leaf01            vx                   x86_64               3.4.1
leaf01            vx                   x86_64               3.7.3
leaf01            vx                   x86_64               3.2.0
...

View All Software Packages Installed on Switches

If you are having an issue with a particular switch, you may want to verify what software is installed and whether it needs updating.

To view package information for a switch, run:

netq <hostname> show cl-pkg-info [<text-package-name>] [around <text-time>] [json]

Use the text-package-name option to narrow the results to a particular package or the around option to narrow the output to a particular time range.

This example shows all installed software packages for spine01.

cumulus@switch:~$ netq spine01 show cl-pkg-info
Matching package_info records:
Hostname          Package Name             Version              CL Version           Package Status       Last Changed
----------------- ------------------------ -------------------- -------------------- -------------------- -------------------------
spine01           libfile-fnmatch-perl     0.02-2+b1            Cumulus Linux 3.7.12 installed            Wed Aug 26 19:58:45 2020
spine01           screen                   4.2.1-3+deb8u1       Cumulus Linux 3.7.12 installed            Wed Aug 26 19:58:45 2020
spine01           libudev1                 215-17+deb8u13       Cumulus Linux 3.7.12 installed            Wed Aug 26 19:58:45 2020
spine01           libjson-c2               0.11-4               Cumulus Linux 3.7.12 installed            Wed Aug 26 19:58:45 2020
spine01           atftp                    0.7.git20120829-1+de Cumulus Linux 3.7.12 installed            Wed Aug 26 19:58:45 2020
                                           b8u1
spine01           isc-dhcp-relay           4.3.1-6-cl3u14       Cumulus Linux 3.7.12 installed            Wed Aug 26 19:58:45 2020
spine01           iputils-ping             3:20121221-5+b2      Cumulus Linux 3.7.12 installed            Wed Aug 26 19:58:45 2020
spine01           base-files               8+deb8u11            Cumulus Linux 3.7.12 installed            Wed Aug 26 19:58:45 2020
spine01           libx11-data              2:1.6.2-3+deb8u2     Cumulus Linux 3.7.12 installed            Wed Aug 26 19:58:45 2020
spine01           onie-tools               3.2-cl3u6            Cumulus Linux 3.7.12 installed            Wed Aug 26 19:58:45 2020
spine01           python-cumulus-restapi   0.1-cl3u10           Cumulus Linux 3.7.12 installed            Wed Aug 26 19:58:45 2020
spine01           tasksel                  3.31+deb8u1          Cumulus Linux 3.7.12 installed            Wed Aug 26 19:58:45 2020
spine01           ncurses-base             5.9+20140913-1+deb8u Cumulus Linux 3.7.12 installed            Wed Aug 26 19:58:45 2020
                                           3
spine01           libmnl0                  1.0.3-5-cl3u2        Cumulus Linux 3.7.12 installed            Wed Aug 26 19:58:45 2020
spine01           xz-utils                 5.1.1alpha+20120614- Cumulus Linux 3.7.12 installed            Wed Aug 26 19:58:45 2020
...

This example shows the ntp package on the spine01 switch.

cumulus@switch:~$ netq spine01 show cl-pkg-info ntp
Matching package_info records:
Hostname          Package Name             Version              CL Version           Package Status       Last Changed
----------------- ------------------------ -------------------- -------------------- -------------------- -------------------------
spine01           ntp                      1:4.2.8p10-cl3u2     Cumulus Linux 3.7.12 installed            Wed Aug 26 19:58:45 2020

If you have a software manifest, you can determine what software packages and versions are recommended based on the Cumulus Linux release. You can then compare that to what is installed on your switch(es) to determine if it differs from the manifest. Such a difference might occur if one or more packages have been upgraded separately from the Cumulus Linux software itself.

To view recommended package information for a switch, run:

netq <hostname> show recommended-pkg-version [release-id <text-release-id>] [package-name <text-package-name>] [json]

This example shows packages that are recommended for upgrade on the leaf12 switch, namely switchd.

cumulus@switch:~$ netq leaf12 show recommended-pkg-version
Matching manifest records:
Hostname          Release ID           ASIC Vendor          CPU Arch             Package Name         Version              Last Changed
----------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------------
leaf12            3.7.1                vx                   x86_64               switchd              1.0-cl3u30           Wed Feb  5 04:36:30 2020

This example shows packages that are recommended for upgrade on the server01 switch, namely lldpd.

cumulus@switch:~$ netq server01 show recommended-pkg-version
Matching manifest records:
Hostname          Release ID           ASIC Vendor          CPU Arch             Package Name         Version              Last Changed
----------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------------
server01          3.7.1                vx                   x86_64               lldpd                0.9.8-0-cl3u11       Wed Feb  5 04:36:30 2020

This example shows the version of the switchd package that is recommended for use with Cumulus Linux 3.7.2.

cumulus@switch:~$ netq act-5712-09 show recommended-pkg-version release-id 3.7.2 package-name switchd
Matching manifest records:
Hostname          Release ID           ASIC Vendor          CPU Arch             Package Name         Version              Last Changed
----------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------------
act-5712-09       3.7.2                bcm                  x86_64               switchd              1.0-cl3u31           Wed Feb  5 04:36:30 2020

This example shows the version of the switchd package that is recommended for use with Cumulus Linux 3.1.0. Note the version difference from the example for Cumulus Linux 3.7.2.

cumulus@noc-pr:~$ netq act-5712-09 show recommended-pkg-version release-id 3.1.0 package-name switchd
Matching manifest records:
Hostname          Release ID           ASIC Vendor          CPU Arch             Package Name         Version              Last Changed
----------------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------------
act-5712-09       3.1.0                bcm                  x86_64               switchd              1.0-cl3u4            Wed Feb  5 04:36:30 2020

Validate NetQ Agents are Running

You can confirm that NetQ Agents are running on switches and hosts (if installed) using the netq show agents command. The Status column of the output indicates whether the agent is up and current (labelled Fresh) or down and stale (labelled Rotten). Additional information is provided about the agent status, including whether it is time synchronized, how long it has been up, and the last time its state changed.

This example shows NetQ Agent state on all devices.

cumulus@switch:~$ netq show agents
Matching agents records:
Hostname          Status           NTP Sync Version                              Sys Uptime                Agent Uptime              Reinitialize Time          Last Changed
----------------- ---------------- -------- ------------------------------------ ------------------------- ------------------------- -------------------------- -------------------------
border01          Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:04:54 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:38 2020
border02          Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:04:57 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:33 2020
fw1               Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:04:44 2020  Tue Sep 29 21:24:48 2020  Tue Sep 29 21:24:48 2020   Thu Oct  1 16:07:26 2020
fw2               Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:04:42 2020  Tue Sep 29 21:24:48 2020  Tue Sep 29 21:24:48 2020   Thu Oct  1 16:07:22 2020
leaf01            Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 16:49:04 2020  Tue Sep 29 21:24:49 2020  Tue Sep 29 21:24:49 2020   Thu Oct  1 16:07:10 2020
leaf02            Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:14 2020  Tue Sep 29 21:24:49 2020  Tue Sep 29 21:24:49 2020   Thu Oct  1 16:07:30 2020
leaf03            Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:37 2020  Tue Sep 29 21:24:49 2020  Tue Sep 29 21:24:49 2020   Thu Oct  1 16:07:24 2020
leaf04            Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:35 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:13 2020
oob-mgmt-server   Fresh            yes      3.1.1-ub18.04u29~1599111022.78b9e43  Mon Sep 21 16:43:58 2020  Mon Sep 21 17:55:00 2020  Mon Sep 21 17:55:00 2020   Thu Oct  1 16:07:31 2020
server01          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:07 2020  Tue Sep 29 21:13:07 2020   Thu Oct  1 16:07:16 2020
server02          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:07 2020  Tue Sep 29 21:13:07 2020   Thu Oct  1 16:07:24 2020
server03          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:56 2020  Tue Sep 29 21:13:07 2020  Tue Sep 29 21:13:07 2020   Thu Oct  1 16:07:12 2020
server04          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:07 2020  Tue Sep 29 21:13:07 2020   Thu Oct  1 16:07:17 2020
server05          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:10 2020  Tue Sep 29 21:13:10 2020   Thu Oct  1 16:07:25 2020
server06          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:19:57 2020  Tue Sep 29 21:13:10 2020  Tue Sep 29 21:13:10 2020   Thu Oct  1 16:07:21 2020
server07          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:06:48 2020  Tue Sep 29 21:13:10 2020  Tue Sep 29 21:13:10 2020   Thu Oct  1 16:07:28 2020
server08          Fresh            yes      3.2.0-ub18.04u30~1601393774.104fb9e  Mon Sep 21 17:06:45 2020  Tue Sep 29 21:13:10 2020  Tue Sep 29 21:13:10 2020   Thu Oct  1 16:07:31 2020
spine01           Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:34 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:20 2020
spine02           Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:33 2020  Tue Sep 29 21:24:58 2020  Tue Sep 29 21:24:58 2020   Thu Oct  1 16:07:16 2020
spine03           Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:34 2020  Tue Sep 29 21:25:07 2020  Tue Sep 29 21:25:07 2020   Thu Oct  1 16:07:20 2020
spine04           Fresh            yes      3.2.0-cl4u30~1601410518.104fb9ed     Mon Sep 21 17:03:32 2020  Tue Sep 29 21:25:07 2020  Tue Sep 29 21:25:07 2020   Thu Oct  1 16:07:33 2020

You can narrow your focus in several ways, for example by showing only fresh or rotten agents, or by using the around option to view agent state at an earlier time.

Monitor Software Services

Cumulus Linux and NetQ run a number of services to deliver the various features of these products. You can monitor their status using the netq show services command. The services related to system-level operation are described here. Monitoring of other services, such as those related to routing, is described with those topics. NetQ automatically monitors a number of system services; the Monitored column in the command output indicates whether a given service is monitored on a device.

The CLI syntax for viewing the status of services is:

netq [<hostname>] show services [<service-name>] [vrf <vrf>] [active|monitored] [around <text-time>] [json]
netq [<hostname>] show services [<service-name>] [vrf <vrf>] status (ok|warning|error|fail) [around <text-time>] [json]
netq [<hostname>] show events [level info | level error | level warning | level critical | level debug] type services [between <text-time> and <text-endtime>] [json]
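
For example, using the second form of the syntax above (the status filter), the following command (shown without output; results use the same columns as the examples below) lists only the services reporting an error status:

netq show services status error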

View All Services on All Devices

This example shows all of the available services on each device and whether each is enabled, active, and monitored, along with how long the service has been running and the last time it was changed.

It is useful to have colored output for this show command. To configure colored output, run the netq config add color command.

cumulus@switch:~$ netq show services
Hostname          Service              PID   VRF             Enabled Active Monitored Status           Uptime                    Last Changed
----------------- -------------------- ----- --------------- ------- ------ --------- ---------------- ------------------------- -------------------------
leaf01            bgpd                 2872  default         yes     yes    yes       ok               1d:6h:43m:59s             Fri Feb 15 17:28:24 2019
leaf01            clagd                n/a   default         yes     no     yes       n/a              1d:6h:43m:35s             Fri Feb 15 17:28:48 2019
leaf01            ledmgrd              1850  default         yes     yes    no        ok               1d:6h:43m:59s             Fri Feb 15 17:28:24 2019
leaf01            lldpd                2651  default         yes     yes    yes       ok               1d:6h:43m:27s             Fri Feb 15 17:28:56 2019
leaf01            mstpd                1746  default         yes     yes    yes       ok               1d:6h:43m:35s             Fri Feb 15 17:28:48 2019
leaf01            neighmgrd            1986  default         yes     yes    no        ok               1d:6h:43m:59s             Fri Feb 15 17:28:24 2019
leaf01            netq-agent           8654  mgmt            yes     yes    yes       ok               1d:6h:43m:29s             Fri Feb 15 17:28:54 2019
leaf01            netqd                8848  mgmt            yes     yes    yes       ok               1d:6h:43m:29s             Fri Feb 15 17:28:54 2019
leaf01            ntp                  8478  mgmt            yes     yes    yes       ok               1d:6h:43m:29s             Fri Feb 15 17:28:54 2019
leaf01            ptmd                 2743  default         yes     yes    no        ok               1d:6h:43m:59s             Fri Feb 15 17:28:24 2019
leaf01            pwmd                 1852  default         yes     yes    no        ok               1d:6h:43m:59s             Fri Feb 15 17:28:24 2019
leaf01            smond                1826  default         yes     yes    yes       ok               1d:6h:43m:27s             Fri Feb 15 17:28:56 2019
leaf01            ssh                  2106  default         yes     yes    no        ok               1d:6h:43m:59s             Fri Feb 15 17:28:24 2019
leaf01            syslog               8254  default         yes     yes    no        ok               1d:6h:43m:59s             Fri Feb 15 17:28:24 2019
leaf01            zebra                2856  default         yes     yes    yes       ok               1d:6h:43m:59s             Fri Feb 15 17:28:24 2019
leaf02            bgpd                 2867  default         yes     yes    yes       ok               1d:6h:43m:55s             Fri Feb 15 17:28:28 2019
leaf02            clagd                n/a   default         yes     no     yes       n/a              1d:6h:43m:31s             Fri Feb 15 17:28:53 2019
leaf02            ledmgrd              1856  default         yes     yes    no        ok               1d:6h:43m:55s             Fri Feb 15 17:28:28 2019
leaf02            lldpd                2646  default         yes     yes    yes       ok               1d:6h:43m:30s             Fri Feb 15 17:28:53 2019
...

You can also view services information in JSON format:

cumulus@switch:~$ netq show services json
{
    "services":[
        {
            "status":"ok",
            "uptime":1550251734.0,
            "monitored":"yes",
            "service":"ntp",
            "lastChanged":1550251734.4790000916,
            "pid":"8478",
            "hostname":"leaf01",
            "enabled":"yes",
            "vrf":"mgmt",
            "active":"yes"
        },
        {
            "status":"ok",
            "uptime":1550251704.0,
            "monitored":"no",
            "service":"ssh",
            "lastChanged":1550251704.0929999352,
            "pid":"2106",
            "hostname":"leaf01",
            "enabled":"yes",
        "vrf":"default",
        "active":"yes"
    },
    {
        "status":"ok",
        "uptime":1550251736.0,
        "monitored":"yes",
        "service":"lldpd",
        "lastChanged":1550251736.5160000324,
        "pid":"2651",
        "hostname":"leaf01",
        "enabled":"yes",
        "vrf":"default",
        "active":"yes"
    },
    {
        "status":"ok",
        "uptime":1550251704.0,
        "monitored":"yes",
        "service":"bgpd",
        "lastChanged":1550251704.1040000916,
        "pid":"2872",
        "hostname":"leaf01",
        "enabled":"yes",
        "vrf":"default",
        "active":"yes"
    },
    {
        "status":"ok",
        "uptime":1550251704.0,
        "monitored":"no",
        "service":"neighmgrd",
        "lastChanged":1550251704.0969998837,
        "pid":"1986",
        "hostname":"leaf01",
        "enabled":"yes",
        "vrf":"default",
        "active":"yes"
    },
...

If you want to view the service information for a given device, simply use the hostname option when running the command.
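
For example, the following command (shown without output; the columns match the examples above) would list the services on the leaf01 switch only:

netq leaf01 show services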

View Information about a Given Service on All Devices

You can view the status of a given service at the current time or at a prior point in time, or view the changes that have occurred for the service during a specified timeframe.

This example shows how to view the status of the NTP service across the network. In this case, VRF is configured, so the NTP service runs in the management (mgmt) VRF on some devices and in the default VRF on others. You can run the same command for other services, such as bgpd, lldpd, and clagd.

cumulus@switch:~$ netq show services ntp
Matching services records:
Hostname          Service              PID   VRF             Enabled Active Monitored Status           Uptime                    Last Changed
----------------- -------------------- ----- --------------- ------- ------ --------- ---------------- ------------------------- -------------------------
exit01            ntp                  8478  mgmt            yes     yes    yes       ok               1d:6h:52m:41s             Fri Feb 15 17:28:54 2019
exit02            ntp                  8497  mgmt            yes     yes    yes       ok               1d:6h:52m:36s             Fri Feb 15 17:28:59 2019
firewall01        ntp                  n/a   default         yes     yes    yes       ok               1d:6h:53m:4s              Fri Feb 15 17:28:31 2019
hostd-11          ntp                  n/a   default         yes     yes    yes       ok               1d:6h:52m:46s             Fri Feb 15 17:28:49 2019
hostd-21          ntp                  n/a   default         yes     yes    yes       ok               1d:6h:52m:37s             Fri Feb 15 17:28:58 2019
hosts-11          ntp                  n/a   default         yes     yes    yes       ok               1d:6h:52m:28s             Fri Feb 15 17:29:07 2019
hosts-13          ntp                  n/a   default         yes     yes    yes       ok               1d:6h:52m:19s             Fri Feb 15 17:29:16 2019
hosts-21          ntp                  n/a   default         yes     yes    yes       ok               1d:6h:52m:14s             Fri Feb 15 17:29:21 2019
hosts-23          ntp                  n/a   default         yes     yes    yes       ok               1d:6h:52m:4s              Fri Feb 15 17:29:31 2019
noc-pr            ntp                  2148  default         yes     yes    yes       ok               1d:6h:53m:43s             Fri Feb 15 17:27:52 2019
noc-se            ntp                  2148  default         yes     yes    yes       ok               1d:6h:53m:38s             Fri Feb 15 17:27:57 2019
spine01           ntp                  8414  mgmt            yes     yes    yes       ok               1d:6h:53m:30s             Fri Feb 15 17:28:05 2019
spine02           ntp                  8419  mgmt            yes     yes    yes       ok               1d:6h:53m:27s             Fri Feb 15 17:28:08 2019
spine03           ntp                  8443  mgmt            yes     yes    yes       ok               1d:6h:53m:22s             Fri Feb 15 17:28:13 2019
leaf01            ntp                  8765  mgmt            yes     yes    yes       ok               1d:6h:52m:52s             Fri Feb 15 17:28:43 2019
leaf02            ntp                  8737  mgmt            yes     yes    yes       ok               1d:6h:52m:46s             Fri Feb 15 17:28:49 2019
leaf11            ntp                  9305  mgmt            yes     yes    yes       ok               1d:6h:49m:22s             Fri Feb 15 17:32:13 2019
leaf12            ntp                  9339  mgmt            yes     yes    yes       ok               1d:6h:49m:9s              Fri Feb 15 17:32:26 2019
leaf21            ntp                  9367  mgmt            yes     yes    yes       ok               1d:6h:49m:5s              Fri Feb 15 17:32:30 2019
leaf22            ntp                  9403  mgmt            yes     yes    yes       ok               1d:6h:52m:57s             Fri Feb 15 17:28:38 2019
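
Because NTP runs in different VRFs on different devices here, you could also filter by VRF using the vrf option from the syntax above. For example (shown without output; the columns are the same as above):

netq show services ntp vrf mgmt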

This example shows the status of the BGP daemon.

cumulus@switch:~$ netq show services bgpd
Matching services records:
Hostname          Service              PID   VRF             Enabled Active Monitored Status           Uptime                    Last Changed
----------------- -------------------- ----- --------------- ------- ------ --------- ---------------- ------------------------- -------------------------
exit01            bgpd                 2872  default         yes     yes    yes       ok               1d:6h:54m:37s             Fri Feb 15 17:28:24 2019
exit02            bgpd                 2867  default         yes     yes    yes       ok               1d:6h:54m:33s             Fri Feb 15 17:28:28 2019
firewall01        bgpd                 21766 default         yes     yes    yes       ok               1d:6h:54m:54s             Fri Feb 15 17:28:07 2019
spine01           bgpd                 2953  default         yes     yes    yes       ok               1d:6h:55m:27s             Fri Feb 15 17:27:34 2019
spine02           bgpd                 2948  default         yes     yes    yes       ok               1d:6h:55m:23s             Fri Feb 15 17:27:38 2019
spine03           bgpd                 2953  default         yes     yes    yes       ok               1d:6h:55m:18s             Fri Feb 15 17:27:43 2019
leaf01            bgpd                 3221  default         yes     yes    yes       ok               1d:6h:54m:48s             Fri Feb 15 17:28:13 2019
leaf02            bgpd                 3177  default         yes     yes    yes       ok               1d:6h:54m:42s             Fri Feb 15 17:28:19 2019
leaf11            bgpd                 3521  default         yes     yes    yes       ok               1d:6h:51m:18s             Fri Feb 15 17:31:43 2019
leaf12            bgpd                 3527  default         yes     yes    yes       ok               1d:6h:51m:6s              Fri Feb 15 17:31:55 2019
leaf21            bgpd                 3512  default         yes     yes    yes       ok               1d:6h:51m:1s              Fri Feb 15 17:32:00 2019
leaf22            bgpd                 3536  default         yes     yes    yes       ok               1d:6h:54m:54s             Fri Feb 15 17:28:07 2019

To view changes over a given time period, use the netq show events command. For more detailed information about events, refer to Manage Events and Notifications.

In this example, we want to view changes to the bgpd service in the last 48 hours.

cumulus@switch:/$ netq show events type bgp between now and 48h
Matching events records:
Hostname          Message Type Severity Message                             Timestamp
----------------- ------------ -------- ----------------------------------- -------------------------
leaf01            bgp          info     BGP session with peer spine-1 swp3. 1d:6h:55m:37s
                                        3 vrf DataVrf1081 state changed fro
                                        m failed to Established
leaf01            bgp          info     BGP session with peer spine-2 swp4. 1d:6h:55m:37s
                                        3 vrf DataVrf1081 state changed fro
                                        m failed to Established
leaf01            bgp          info     BGP session with peer spine-3 swp5. 1d:6h:55m:37s
                                        3 vrf DataVrf1081 state changed fro
                                        m failed to Established
leaf01            bgp          info     BGP session with peer spine-1 swp3. 1d:6h:55m:37s
                                        2 vrf DataVrf1080 state changed fro
                                        m failed to Established
leaf01            bgp          info     BGP session with peer spine-3 swp5. 1d:6h:55m:37s
                                        2 vrf DataVrf1080 state changed fro
                                        m failed to Established
leaf01            bgp          info     BGP session with peer spine-2 swp4. 1d:6h:55m:37s
                                        2 vrf DataVrf1080 state changed fro
                                        m failed to Established
leaf01            bgp          info     BGP session with peer spine-3 swp5. 1d:6h:55m:37s
                                        4 vrf DataVrf1082 state changed fro
                                        m failed to Established

Monitor System Inventory

In addition to network and switch inventory, the Cumulus NetQ UI provides tabular, network-wide views of the current status and configuration of the software network constructs. These views are helpful when you want to see all of the data for a particular type of element across your network for troubleshooting, or when you want to export a list view.

Some of these views provide data that is also available through the card workflows, but these views are not treated like cards. They only provide the current status; you cannot change the time period of the views, or graph the data within the UI.

Access these tables through the Main Menu, under the Network heading.

Tables can be manipulated using the settings above the tables, as described in Table Settings.

Pagination options are shown when there are more than 25 results.
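
Much of the same inventory data is also available from the NetQ CLI if you prefer working at the command line. The commands below are a minimal sketch of some common equivalents; option support can vary by release, so run netq help to confirm what is available on your system.

cumulus@switch:~$ netq show agents         # NetQ Agent status
cumulus@switch:~$ netq show macs           # MAC address table entries
cumulus@switch:~$ netq show vlan           # VLAN information
cumulus@switch:~$ netq show ip routes      # IPv4 routes (use "netq show ipv6 routes" for IPv6)
cumulus@switch:~$ netq show ip addresses   # IPv4 addresses (use "netq show ipv6 addresses" for IPv6)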

View All NetQ Agents

The Agents view provides all available parameter data about all NetQ Agents in the system.

| Parameter | Description |
| --------- | ----------- |
| Hostname | Name of the switch or host |
| Timestamp | Date and time the data was captured |
| Last Reinit | Date and time that the switch or host was reinitialized |
| Last Update Time | Date and time that the switch or host was updated |
| Lastboot | Date and time that the switch or host was last booted up |
| NTP State | Status of NTP synchronization on the switch or host; yes = in synchronization, no = out of synchronization |
| Sys Uptime | Amount of time the switch or host has been continuously up and running |
| Version | NetQ version running on the switch or host |

View All Events

The Events view provides all available parameter data about all events in the system.

| Parameter | Description |
| --------- | ----------- |
| Hostname | Name of the switch or host that experienced the event |
| Timestamp | Date and time the event was captured |
| Message | Description of the event |
| Message Type | Network service or protocol that generated the event |
| Severity | Importance of the event. Values include critical, warning, info, and debug. |

View All MACs

The MACs (media access control addresses) view provides all available parameter data about all MAC addresses in the system.

| Parameter | Description |
| --------- | ----------- |
| Hostname | Name of the switch or host where the MAC address resides |
| Timestamp | Date and time the data was captured |
| Egress Port | Port where traffic exits the switch or host |
| Is Remote | Indicates if the address was learned from a remote switch or host (true) or locally (false) |
| Is Static | Indicates if the address is a static (true) or dynamic (false) assignment |
| MAC Address | MAC address |
| Nexthop | Next hop for traffic hitting this MAC address on this switch or host |
| Origin | Indicates if the address is owned by this switch or host (true) or by a peer (false) |
| VLAN | VLAN associated with the MAC address, if any |

View All VLANs

The VLANs (virtual local area networks) view provides all available parameter data about all VLANs in the system.

| Parameter | Description |
| --------- | ----------- |
| Hostname | Name of the switch or host where the VLAN(s) reside(s) |
| Timestamp | Date and time the data was captured |
| If Name | Name of the interface used by the VLAN(s) |
| Last Changed | Date and time when this information was last updated |
| Ports | Ports on the switch or host associated with the VLAN(s) |
| SVI | Switch virtual interface associated with a bridge interface |
| VLANs | VLANs associated with the switch or host |

View IP Routes

The IP Routes view provides all available parameter data about all IP routes. The list of routes can be filtered to view only the IPv4 or IPv6 routes by selecting the relevant tab.

| Parameter | Description |
| --------- | ----------- |
| Hostname | Name of the switch or host where the route resides |
| Timestamp | Date and time the data was captured |
| Is IPv6 | Indicates if the address is an IPv6 (true) or IPv4 (false) address |
| Message Type | Network service or protocol; always Route in this table |
| Nexthops | Possible ports/interfaces where traffic can be routed to next |
| Origin | Indicates if this switch or host is the source of this route (true) or not (false) |
| Prefix | IPv4 or IPv6 address prefix |
| Priority | Rank of this route relative to others, where the lower the number, the less likely the route is to be used; the value is determined by the routing protocol |
| Protocol | Protocol responsible for this route |
| Route Type | Type of route |
| Rt Table ID | Identifier of the routing table where the route resides |
| Src | Prefix of the address where the route is coming from (the previous hop) |
| VRF | Virtual route interface associated with this route |

View IP Neighbors

The IP Neighbors view provides all available parameter data about all IP neighbors. The list of neighbors can be filtered to view only the IPv4 or IPv6 neighbors by selecting the relevant tab.

| Parameter | Description |
| --------- | ----------- |
| Hostname | Name of the neighboring switch or host |
| Timestamp | Date and time the data was captured |
| IF Index | Index of the interface used to communicate with this neighbor |
| If Name | Name of the interface used to communicate with this neighbor |
| IP Address | IPv4 or IPv6 address of the neighbor switch or host |
| Is IPv6 | Indicates if the address is an IPv6 (true) or IPv4 (false) address |
| Is Remote | Indicates if the address was learned from a remote switch or host (true) or locally (false) |
| MAC Address | MAC address of the neighbor switch or host |
| Message Type | Network service or protocol; always Neighbor in this table |
| VRF | Virtual route interface associated with this neighbor |

View IP Addresses

The IP Addresses view provides all available parameter data about all IP addresses. The list of addresses can be filtered to view only the IPv4 or IPv6 addresses by selecting the relevant tab.

| Parameter | Description |
| --------- | ----------- |
| Hostname | Name of the switch or host where the address resides |
| Timestamp | Date and time the data was captured |
| If Name | Name of the interface where the address is configured |
| Is IPv6 | Indicates if the address is an IPv6 (true) or IPv4 (false) address |
| Mask | Host portion of the address |
| Prefix | Network portion of the address |
| VRF | Virtual route interface associated with this address prefix and interface on this switch or host |

Monitor Container Environments Using Kubernetes API Server

The NetQ Agent monitors many aspects of containers on your network by integrating with the Kubernetes API server; the resources it tracks and the commands for viewing them are described in the sections that follow.

This topic assumes a reasonable familiarity with Kubernetes terminology and architecture.

Use NetQ with Kubernetes Clusters

The NetQ Agent interfaces with the Kubernetes API server and listens for Kubernetes events. It monitors the network identity and physical network connectivity of Kubernetes resources such as pods, daemon sets, and services. NetQ works with any container network interface (CNI), such as Calico or Flannel.

The NetQ Kubernetes integration helps network administrators identify changes within a Kubernetes cluster and determine whether those changes had an adverse effect on network performance (caused by a noisy neighbor, for example). Additionally, NetQ helps the infrastructure administrator determine how Kubernetes workloads are distributed within a network.

Requirements

The NetQ Agent supports Kubernetes version 1.9.2 or later.
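
As a quick check before enabling monitoring, you can confirm the cluster version from the Kubernetes master node. The kubectl command below is standard Kubernetes tooling rather than part of the NetQ CLI, and the versions shown are only illustrative:

cumulus@host:~$ kubectl version --short
Client Version: v1.9.2
Server Version: v1.9.2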

Command Summary

There is a large set of commands available to monitor Kubernetes configurations, including clusters, nodes, daemon sets, deployments, pods, replica sets, replication controllers, and services. Run netq show kubernetes help to see all the possible commands.

netq [<hostname>] show kubernetes cluster [name <kube-cluster-name>] [around <text-time>] [json]
netq [<hostname>] show kubernetes node [components] [name <kube-node-name>] [cluster <kube-cluster-name> ] [label <kube-node-label>] [around <text-time>] [json]
netq [<hostname>] show kubernetes daemon-set [name <kube-ds-name>] [cluster <kube-cluster-name>] [namespace <namespace>] [label <kube-ds-label>] [around <text-time>] [json]
netq [<hostname>] show kubernetes daemon-set [name <kube-ds-name>] [cluster <kube-cluster-name>] [namespace <namespace>] [label <kube-ds-label>] connectivity [around <text-time>] [json]
netq [<hostname>] show kubernetes deployment [name <kube-deployment-name>] [cluster <kube-cluster-name>] [namespace <namespace>] [label <kube-deployment-label>] [around <text-time>] [json]
netq [<hostname>] show kubernetes deployment [name <kube-deployment-name>] [cluster <kube-cluster-name>] [namespace <namespace>] [label <kube-deployment-label>] connectivity [around <text-time>] [json]
netq [<hostname>] show kubernetes pod [name <kube-pod-name>] [cluster <kube-cluster-name> ] [namespace <namespace>] [label <kube-pod-label>] [pod-ip <kube-pod-ipaddress>] [node <kube-node-name>] [around <text-time>] [json]
netq [<hostname>] show kubernetes replication-controller [name <kube-rc-name>] [cluster <kube-cluster-name>] [namespace <namespace>] [label <kube-rc-label>] [around <text-time>] [json]
netq [<hostname>] show kubernetes replica-set [name <kube-rs-name>] [cluster <kube-cluster-name>] [namespace <namespace>] [label <kube-rs-label>] [around <text-time>] [json]
netq [<hostname>] show kubernetes replica-set [name <kube-rs-name>] [cluster <kube-cluster-name>] [namespace <namespace>] [label <kube-rs-label>] connectivity [around <text-time>] [json]
netq [<hostname>] show kubernetes service [name <kube-service-name>] [cluster <kube-cluster-name>] [namespace <namespace>] [label <kube-service-label>] [service-cluster-ip <kube-service-cluster-ip>] [service-external-ip <kube-service-external-ip>] [around <text-time>] [json]
netq [<hostname>] show kubernetes service [name <kube-service-name>] [cluster <kube-cluster-name>] [namespace <namespace>] [label <kube-service-label>] [service-cluster-ip <kube-service-cluster-ip>] [service-external-ip <kube-service-external-ip>] connectivity [around <text-time>] [json]
netq <hostname> show impact kubernetes service [master <kube-master-node>] [name <kube-service-name>] [cluster <kube-cluster-name>] [namespace <namespace>] [label <kube-service-label>] [service-cluster-ip <kube-service-cluster-ip>] [service-external-ip <kube-service-external-ip>] [around <text-time>] [json]
netq <hostname> show impact kubernetes replica-set [master <kube-master-node>] [name <kube-rs-name>] [cluster <kube-cluster-name>] [namespace <namespace>] [label <kube-rs-label>] [around <text-time>] [json]
netq <hostname> show impact kubernetes deployment [master <kube-master-node>] [name <kube-deployment-name>] [cluster <kube-cluster-name>] [namespace <namespace>] [label <kube-deployment-label>] [around <text-time>] [json]
netq config add agent kubernetes-monitor [poll-period <text-duration-period>]
netq config del agent kubernetes-monitor
netq config show agent kubernetes-monitor [json]
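
For reference, the following sketch strings together the configuration commands from the list above: enabling the Kubernetes monitor with an example 30-second poll period (any value between 10 and 120 seconds is accepted), checking the resulting configuration, and removing the monitor when it is no longer needed. Restart the NetQ Agent after any of these changes, as described in the procedure that follows.

cumulus@host:~$ netq config add agent kubernetes-monitor poll-period 30
cumulus@host:~$ netq config show agent kubernetes-monitor json
cumulus@host:~$ netq config del agent kubernetes-monitor
cumulus@host:~$ netq config restart agent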

Enable Kubernetes Monitoring

For Kubernetes monitoring, the NetQ Agent must be installed, running, and enabled on the host(s) providing the Kubernetes service.

To enable NetQ Agent monitoring of the containers using the Kubernetes API, you must configure the following on the Kubernetes master node:

  1. Install and configure the NetQ Agent and CLI on the master node.

    Follow the steps outlined in Install NetQ Agents and Install NetQ CLI.

  2. Enable Kubernetes monitoring by the NetQ Agent on the master node.

    You can specify a polling period between 10 and 120 seconds; 15 seconds is the default.

    cumulus@host:~$ netq config add agent kubernetes-monitor poll-period 20
    Successfully added kubernetes monitor. Please restart netq-agent.
    
  3. Restart the NetQ Agent.

    cumulus@host:~$ netq config restart agent
    
  4. After waiting for a minute, run the show command to view the cluster.

    cumulus@host:~$ netq show kubernetes cluster
    
  5. Next, you must enable the NetQ Agent on all of the worker nodes for complete insight into your container network. Repeat steps 2 and 3 on each worker node, as shown in the example below.
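
For example, on a hypothetical worker node named server13, repeating steps 2 and 3 looks like this (the 20-second poll period simply mirrors the master-node example above):

cumulus@server13:~$ netq config add agent kubernetes-monitor poll-period 20
Successfully added kubernetes monitor. Please restart netq-agent.
cumulus@server13:~$ netq config restart agent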

View Status of Kubernetes Clusters

Run the netq show kubernetes cluster command to view the status of all Kubernetes clusters in the fabric. In this example, there are two clusters: one with server11 as the master server and the other with server12 as the master server. Both are healthy and their associated worker nodes are listed.

cumulus@host:~$ netq show kubernetes cluster
Matching kube_cluster records:
Master                   Cluster Name     Controller Status    Scheduler Status Nodes
------------------------ ---------------- -------------------- ---------------- --------------------
server11:3.0.0.68        default          Healthy              Healthy          server11 server13 se
                                                                                rver22 server11 serv
                                                                                er12 server23 server
                                                                                24
server12:3.0.0.69        default          Healthy              Healthy          server12 server21 se
                                                                                rver23 server13 serv
                                                                                er14 server21 server
                                                                                22

For deployments with multiple clusters, you can use the hostname option to filter the output. This example shows filtering of the list by server11:

cumulus@host:~$ netq server11 show kubernetes cluster
Matching kube_cluster records:
Master                   Cluster Name     Controller Status    Scheduler Status Nodes
------------------------ ---------------- -------------------- ---------------- --------------------
server11:3.0.0.68        default          Healthy              Healthy          server11 server13 se
                                                                                rver22 server11 serv
                                                                                er12 server23 server
                                                                                24

Optionally, use the json option to present the results in JSON format.

cumulus@host:~$ netq show kubernetes cluster json
{
    "kube_cluster":[
        {
            "clusterName":"default",
            "schedulerStatus":"Healthy",
            "master":"server12:3.0.0.69",
            "nodes":"server12 server21 server23 server13 server14 server21 server22",
            "controllerStatus":"Healthy"
        },
        {
            "clusterName":"default",
            "schedulerStatus":"Healthy",
            "master":"server11:3.0.0.68",
            "nodes":"server11 server13 server22 server11 server12 server23 server24",
            "controllerStatus":"Healthy"
        }
    ],
    "truncatedResult":false
}

View Changes to a Cluster

If data collection from the NetQ Agents is not occurring as it once was, you can verify that no changes have been made to the Kubernetes cluster configuration using the around option. Be sure to include the unit of measure with the around value; valid units include seconds (s), minutes (m), hours (h), and days (d).

This example shows changes made to the cluster in the last hour: the addition of the two master nodes and the various worker nodes for each cluster.

cumulus@host:~$ netq show kubernetes cluster around 1h
Matching kube_cluster records:
Master                   Cluster Name     Controller Status    Scheduler Status Nodes                                    DBState  Last changed
------------------------ ---------------- -------------------- ---------------- ---------------------------------------- -------- -------------------------
server11:3.0.0.68        default          Healthy              Healthy          server11 server13 server22 server11 serv Add      Fri Feb  8 01:50:50 2019
                                                                                er12 server23 server24
server12:3.0.0.69        default          Healthy              Healthy          server12 server21 server23 server13 serv Add      Fri Feb  8 01:50:50 2019
                                                                                er14 server21 server22
server12:3.0.0.69        default          Healthy              Healthy          server12 server21 server23 server13      Add      Fri Feb  8 01:50:50 2019
server11:3.0.0.68        default          Healthy              Healthy          server11                                 Add      Fri Feb  8 01:50:50 2019
server12:3.0.0.69        default          Healthy              Healthy          server12                                 Add      Fri Feb  8 01:50:50 2019

View Kubernetes Pod Information

You can show configuration and status of the pods in a cluster, including the names, labels, addresses, associated cluster and containers, and whether the pod is running. This example shows pods for FRR, nginx, Calico, and various Kubernetes components sorted by master node.

cumulus@host:~$ netq show kubernetes pod
Matching kube_pod records:
Master                   Namespace    Name                 IP               Node         Labels               Status   Containers               Last Changed
------------------------ ------------ -------------------- ---------------- ------------ -------------------- -------- ------------------------ ----------------
server11:3.0.0.68        default      cumulus-frr-8vssx    3.0.0.70         server13     pod-template-generat Running  cumulus-frr:f8cac70bb217 Fri Feb  8 01:50:50 2019
                                                                                         ion:1 name:cumulus-f
                                                                                         rr controller-revisi
                                                                                         on-hash:3710533951
server11:3.0.0.68        default      cumulus-frr-dkkgp    3.0.5.135        server24     pod-template-generat Running  cumulus-frr:577a60d5f40c Fri Feb  8 01:50:50 2019
                                                                                         ion:1 name:cumulus-f
                                                                                         rr controller-revisi
                                                                                         on-hash:3710533951
server11:3.0.0.68        default      cumulus-frr-f4bgx    3.0.3.196        server11     pod-template-generat Running  cumulus-frr:1bc73154a9f5 Fri Feb  8 01:50:50 2019
                                                                                         ion:1 name:cumulus-f
                                                                                         rr controller-revisi
                                                                                         on-hash:3710533951
server11:3.0.0.68        default      cumulus-frr-gqqxn    3.0.2.5          server22     pod-template-generat Running  cumulus-frr:3ee0396d126a Fri Feb  8 01:50:50 2019
                                                                                         ion:1 name:cumulus-f
                                                                                         rr controller-revisi
                                                                                         on-hash:3710533951
server11:3.0.0.68        default      cumulus-frr-kdh9f    3.0.3.197        server12     pod-template-generat Running  cumulus-frr:94b6329ecb50 Fri Feb  8 01:50:50 2019
                                                                                         ion:1 name:cumulus-f
                                                                                         rr controller-revisi
                                                                                         on-hash:3710533951
server11:3.0.0.68        default      cumulus-frr-mvv8m    3.0.5.134        server23     pod-template-generat Running  cumulus-frr:b5845299ce3c Fri Feb  8 01:50:50 2019
                                                                                         ion:1 name:cumulus-f
                                                                                         rr controller-revisi
                                                                                         on-hash:3710533951
server11:3.0.0.68        default      httpd-5456469bfd-bq9 10.244.49.65     server22     app:httpd            Running  httpd:79b7f532be2d       Fri Feb  8 01:50:50 2019
                                      zm
server11:3.0.0.68        default      influxdb-6cdb566dd-8 10.244.162.128   server13     app:influx           Running  influxdb:15dce703cdec    Fri Feb  8 01:50:50 2019
                                      9lwn
server11:3.0.0.68        default      nginx-8586cf59-26pj5 10.244.9.193     server24     run:nginx            Running  nginx:6e2b65070c86       Fri Feb  8 01:50:50 2019
server11:3.0.0.68        default      nginx-8586cf59-c82ns 10.244.40.128    server12     run:nginx            Running  nginx:01b017c26725       Fri Feb  8 01:50:50 2019
server11:3.0.0.68        default      nginx-8586cf59-wjwgp 10.244.49.64     server22     run:nginx            Running  nginx:ed2b4254e328       Fri Feb  8 01:50:50 2019
server11:3.0.0.68        kube-system  calico-etcd-pfg9r    3.0.0.68         server11     k8s-app:calico-etcd  Running  calico-etcd:f95f44b745a7 Fri Feb  8 01:50:50 2019
                                                                                         pod-template-generat
                                                                                         ion:1 controller-rev
                                                                                         ision-hash:142071906
                                                                                         5
server11:3.0.0.68        kube-system  calico-kube-controll 3.0.2.5          server22     k8s-app:calico-kube- Running  calico-kube-controllers: Fri Feb  8 01:50:50 2019
                                      ers-d669cc78f-4r5t2                                controllers                   3688b0c5e9c5
server11:3.0.0.68        kube-system  calico-node-4px69    3.0.2.5          server22     k8s-app:calico-node  Running  calico-node:1d01648ebba4 Fri Feb  8 01:50:50 2019
                                                                                         pod-template-generat          install-cni:da350802a3d2
                                                                                         ion:1 controller-rev
                                                                                         ision-hash:324404111
                                                                                         9
server11:3.0.0.68        kube-system  calico-node-bt8w6    3.0.3.196        server11     k8s-app:calico-node  Running  calico-node:9b3358a07e5e Fri Feb  8 01:50:50 2019
                                                                                         pod-template-generat          install-cni:d38713e6fdd8
                                                                                         ion:1 controller-rev
                                                                                         ision-hash:324404111
                                                                                         9
server11:3.0.0.68        kube-system  calico-node-gtmkv    3.0.3.197        server12     k8s-app:calico-node  Running  calico-node:48fcc6c40a6b Fri Feb  8 01:50:50 2019
                                                                                         pod-template-generat          install-cni:f0838a313eff
                                                                                         ion:1 controller-rev
                                                                                         ision-hash:324404111
                                                                                         9
server11:3.0.0.68        kube-system  calico-node-mvslq    3.0.5.134        server23     k8s-app:calico-node  Running  calico-node:7b361aece76c Fri Feb  8 01:50:50 2019
                                                                                         pod-template-generat          install-cni:f2da6bc36bf8
                                                                                         ion:1 controller-rev
                                                                                         ision-hash:324404111
                                                                                         9
server11:3.0.0.68        kube-system  calico-node-sjj2s    3.0.5.135        server24     k8s-app:calico-node  Running  calico-node:6e13b2b73031 Fri Feb  8 01:50:50 2019
                                                                                         pod-template-generat          install-cni:fa4b2b17fba9
                                                                                         ion:1 controller-rev
                                                                                         ision-hash:324404111
                                                                                         9
server11:3.0.0.68        kube-system  calico-node-vdkk5    3.0.0.70         server13     k8s-app:calico-node  Running  calico-node:fb3ec9429281 Fri Feb  8 01:50:50 2019
                                                                                         pod-template-generat          install-cni:b56980da7294
                                                                                         ion:1 controller-rev
                                                                                         ision-hash:324404111
                                                                                         9
server11:3.0.0.68        kube-system  calico-node-zzfkr    3.0.0.68         server11     k8s-app:calico-node  Running  calico-node:c1ac399dd862 Fri Feb  8 01:50:50 2019
                                                                                         pod-template-generat          install-cni:60a779fdc47a
                                                                                         ion:1 controller-rev
                                                                                         ision-hash:324404111
                                                                                         9
server11:3.0.0.68        kube-system  etcd-server11        3.0.0.68         server11     tier:control-plane c Running  etcd:dde63d44a2f5        Fri Feb  8 01:50:50 2019
                                                                                         omponent:etcd
server11:3.0.0.68        kube-system  kube-apiserver-hostd 3.0.0.68         server11     tier:control-plane c Running  kube-apiserver:0cd557bbf Fri Feb  8 01:50:50 2019
                                      -11                                                omponent:kube-apiser          2fe
                                                                                         ver
server11:3.0.0.68        kube-system  kube-controller-mana 3.0.0.68         server11     tier:control-plane c Running  kube-controller-manager: Fri Feb  8 01:50:50 2019
                                      ger-server11                                       omponent:kube-contro          89b2323d09b2
                                                                                         ller-manager
server11:3.0.0.68        kube-system  kube-dns-6f4fd4bdf-p 10.244.34.64     server23     k8s-app:kube-dns     Running  dnsmasq:284d9d363999 kub Fri Feb  8 01:50:50 2019
                                      lv7p                                                                             edns:bd8bdc49b950 sideca
                                                                                                                       r:fe10820ffb19
server11:3.0.0.68        kube-system  kube-proxy-4cx2t     3.0.3.197        server12     k8s-app:kube-proxy p Running  kube-proxy:49b0936a4212  Fri Feb  8 01:50:50 2019
                                                                                         od-template-generati
                                                                                         on:1 controller-revi
                                                                                         sion-hash:3953509896
server11:3.0.0.68        kube-system  kube-proxy-7674k     3.0.3.196        server11     k8s-app:kube-proxy p Running  kube-proxy:5dc2f5fe0fad  Fri Feb  8 01:50:50 2019
                                                                                         od-template-generati
                                                                                         on:1 controller-revi
                                                                                         sion-hash:3953509896
server11:3.0.0.68        kube-system  kube-proxy-ck5cn     3.0.2.5          server22     k8s-app:kube-proxy p Running  kube-proxy:6944f7ff8c18  Fri Feb  8 01:50:50 2019
                                                                                         od-template-generati
                                                                                         on:1 controller-revi
                                                                                         sion-hash:3953509896
server11:3.0.0.68        kube-system  kube-proxy-f9dt8     3.0.0.68         server11     k8s-app:kube-proxy p Running  kube-proxy:032cc82ef3f8  Fri Feb  8 01:50:50 2019
                                                                                         od-template-generati
                                                                                         on:1 controller-revi
                                                                                         sion-hash:3953509896
server11:3.0.0.68        kube-system  kube-proxy-j6qw6     3.0.5.135        server24     k8s-app:kube-proxy p Running  kube-proxy:10544e43212e  Fri Feb  8 01:50:50 2019
                                                                                         od-template-generati
                                                                                         on:1 controller-revi
                                                                                         sion-hash:3953509896
server11:3.0.0.68        kube-system  kube-proxy-lq8zz     3.0.5.134        server23     k8s-app:kube-proxy p Running  kube-proxy:1bcfa09bb186  Fri Feb  8 01:50:50 2019
                                                                                         od-template-generati
                                                                                         on:1 controller-revi
                                                                                         sion-hash:3953509896
server11:3.0.0.68        kube-system  kube-proxy-vg7kj     3.0.0.70         server13     k8s-app:kube-proxy p Running  kube-proxy:8fed384b68e5  Fri Feb  8 01:50:50 2019
                                                                                         od-template-generati
                                                                                         on:1 controller-revi
                                                                                         sion-hash:3953509896
server11:3.0.0.68        kube-system  kube-scheduler-hostd 3.0.0.68         server11     tier:control-plane c Running  kube-scheduler:c262a8071 Fri Feb  8 01:50:50 2019
                                      -11                                                omponent:kube-schedu          3cb
                                                                                         ler
server12:3.0.0.69        default      cumulus-frr-2gkdv    3.0.2.4          server21     pod-template-generat Running  cumulus-frr:25d1109f8898 Fri Feb  8 01:50:50 2019
                                                                                         ion:1 name:cumulus-f
                                                                                         rr controller-revisi
                                                                                         on-hash:3710533951
server12:3.0.0.69        default      cumulus-frr-b9dm5    3.0.3.199        server14     pod-template-generat Running  cumulus-frr:45063f9a095f Fri Feb  8 01:50:50 2019
                                                                                         ion:1 name:cumulus-f
                                                                                         rr controller-revisi
                                                                                         on-hash:3710533951
server12:3.0.0.69        default      cumulus-frr-rtqhv    3.0.2.6          server23     pod-template-generat Running  cumulus-frr:63e802a52ea2 Fri Feb  8 01:50:50 2019
                                                                                         ion:1 name:cumulus-f
                                                                                         rr controller-revisi
                                                                                         on-hash:3710533951
server12:3.0.0.69        default      cumulus-frr-tddrg    3.0.5.133        server22     pod-template-generat Running  cumulus-frr:52dd54e4ac9f Fri Feb  8 01:50:50 2019
                                                                                         ion:1 name:cumulus-f
                                                                                         rr controller-revisi
                                                                                         on-hash:3710533951
server12:3.0.0.69        default      cumulus-frr-vx7jp    3.0.5.132        server21     pod-template-generat Running  cumulus-frr:1c20addfcbd3 Fri Feb  8 01:50:50 2019
                                                                                         ion:1 name:cumulus-f
                                                                                         rr controller-revisi
                                                                                         on-hash:3710533951
server12:3.0.0.69        default      cumulus-frr-x7ft5    3.0.3.198        server13     pod-template-generat Running  cumulus-frr:b0f63792732e Fri Feb  8 01:50:50 2019
                                                                                         ion:1 name:cumulus-f
                                                                                         rr controller-revisi
                                                                                         on-hash:3710533951
server12:3.0.0.69        kube-system  calico-etcd-btqgt    3.0.0.69         server12     k8s-app:calico-etcd  Running  calico-etcd:72b1a16968fb Fri Feb  8 01:50:50 2019
                                                                                         pod-template-generat
                                                                                         ion:1 controller-rev
                                                                                         ision-hash:142071906
                                                                                         5
server12:3.0.0.69        kube-system  calico-kube-controll 3.0.5.132        server21     k8s-app:calico-kube- Running  calico-kube-controllers: Fri Feb  8 01:50:50 2019
                                      ers-d669cc78f-bdnzk                                controllers                   6821bf04696f
server12:3.0.0.69        kube-system  calico-node-4g6vd    3.0.3.198        server13     k8s-app:calico-node  Running  calico-node:1046b559a50c Fri Feb  8 01:50:50 2019
                                                                                         pod-template-generat          install-cni:0a136851da17
                                                                                         ion:1 controller-rev
                                                                                         ision-hash:490828062
server12:3.0.0.69        kube-system  calico-node-4hg6l    3.0.0.69         server12     k8s-app:calico-node  Running  calico-node:4e7acc83f8e8 Fri Feb  8 01:50:50 2019
                                                                                         pod-template-generat          install-cni:a26e76de289e
                                                                                         ion:1 controller-rev
                                                                                         ision-hash:490828062
server12:3.0.0.69        kube-system  calico-node-4p66v    3.0.2.6          server23     k8s-app:calico-node  Running  calico-node:a7a44072e4e2 Fri Feb  8 01:50:50 2019
                                                                                         pod-template-generat          install-cni:9a19da2b2308
                                                                                         ion:1 controller-rev
                                                                                         ision-hash:490828062
server12:3.0.0.69        kube-system  calico-node-5z7k4    3.0.5.133        server22     k8s-app:calico-node  Running  calico-node:9878b0606158 Fri Feb  8 01:50:50 2019
                                                                                         pod-template-generat          install-cni:489f8f326cf9
                                                                                         ion:1 controller-rev
                                                                                         ision-hash:490828062
...

You can filter this information to focus on pods on a particular node:

cumulus@host:~$ netq show kubernetes pod node server11
Matching kube_pod records:
Master                   Namespace    Name                 IP               Node         Labels               Status   Containers               Last Changed
------------------------ ------------ -------------------- ---------------- ------------ -------------------- -------- ------------------------ ----------------
server11:3.0.0.68        kube-system  calico-etcd-pfg9r    3.0.0.68         server11     k8s-app:calico-etcd  Running  calico-etcd:f95f44b745a7 2d:14h:0m:59s
                                                                                         pod-template-generat
                                                                                         ion:1 controller-rev
                                                                                         ision-hash:142071906
                                                                                         5
server11:3.0.0.68        kube-system  calico-node-zzfkr    3.0.0.68         server11     k8s-app:calico-node  Running  calico-node:c1ac399dd862 2d:14h:0m:59s
                                                                                         pod-template-generat          install-cni:60a779fdc47a
                                                                                         ion:1 controller-rev
                                                                                         ision-hash:324404111
                                                                                         9
server11:3.0.0.68        kube-system  etcd-server11        3.0.0.68         server11     tier:control-plane c Running  etcd:dde63d44a2f5        2d:14h:1m:44s
                                                                                         omponent:etcd
server11:3.0.0.68        kube-system  kube-apiserver-serve 3.0.0.68         server11     tier:control-plane c Running  kube-apiserver:0cd557bbf 2d:14h:1m:44s
                                      r11                                                omponent:kube-apiser          2fe
                                                                                         ver
server11:3.0.0.68        kube-system  kube-controller-mana 3.0.0.68         server11     tier:control-plane c Running  kube-controller-manager: 2d:14h:1m:44s
                                      ger-server11                                       omponent:kube-contro          89b2323d09b2
                                                                                         ller-manager
server11:3.0.0.68        kube-system  kube-proxy-f9dt8     3.0.0.68         server11     k8s-app:kube-proxy p Running  kube-proxy:032cc82ef3f8  2d:14h:0m:59s
                                                                                         od-template-generati
                                                                                         on:1 controller-revi
                                                                                         sion-hash:3953509896
server11:3.0.0.68        kube-system  kube-scheduler-serve 3.0.0.68         server11     tier:control-plane c Running  kube-scheduler:c262a8071 2d:14h:1m:44s
                                      r11                                                omponent:kube-schedu          3cb
                                                                                         ler

View Kubernetes Node Information

You can view detailed information about a node, including its role in the cluster, pod CIDR, and kubelet status. This example shows all of the nodes in the cluster with server11 as the master. Note that server11 also acts as a worker node, along with the other nodes in the cluster: server12, server13, server22, server23, and server24.

cumulus@host:~$ netq server11 show kubernetes node
Matching kube_cluster records:
Master                   Cluster Name     Node Name            Role       Status           Labels               Pod CIDR                 Last Changed
------------------------ ---------------- -------------------- ---------- ---------------- -------------------- ------------------------ ----------------
server11:3.0.0.68        default          server11             master     KubeletReady     node-role.kubernetes 10.224.0.0/24            14h:23m:46s
                                                                                           .io/master: kubernet
                                                                                           es.io/hostname:hostd
                                                                                           -11 beta.kubernetes.
                                                                                           io/arch:amd64 beta.k
                                                                                           ubernetes.io/os:linu
                                                                                           x
server11:3.0.0.68        default          server13             worker     KubeletReady     kubernetes.io/hostna 10.224.3.0/24            14h:19m:56s
                                                                                           me:server13 beta.kub
                                                                                           ernetes.io/arch:amd6
                                                                                           4 beta.kubernetes.io
                                                                                           /os:linux
server11:3.0.0.68        default          server22             worker     KubeletReady     kubernetes.io/hostna 10.224.1.0/24            14h:24m:31s
                                                                                           me:server22 beta.kub
                                                                                           ernetes.io/arch:amd6
                                                                                           4 beta.kubernetes.io
                                                                                           /os:linux
server11:3.0.0.68        default          server11             worker     KubeletReady     kubernetes.io/hostna 10.224.2.0/24            14h:24m:16s
                                                                                           me:server11 beta.kub
                                                                                           ernetes.io/arch:amd6
                                                                                           4 beta.kubernetes.io
                                                                                           /os:linux
server11:3.0.0.68        default          server12             worker     KubeletReady     kubernetes.io/hostna 10.224.4.0/24            14h:24m:16s
                                                                                           me:server12 beta.kub
                                                                                           ernetes.io/arch:amd6
                                                                                           4 beta.kubernetes.io
                                                                                           /os:linux
server11:3.0.0.68        default          server23             worker     KubeletReady     kubernetes.io/hostna 10.224.5.0/24            14h:24m:16s
                                                                                           me:server23 beta.kub
                                                                                           ernetes.io/arch:amd6
                                                                                           4 beta.kubernetes.io
                                                                                           /os:linux
server11:3.0.0.68        default          server24             worker     KubeletReady     kubernetes.io/hostna 10.224.6.0/24            14h:24m:1s
                                                                                           me:server24 beta.kub
                                                                                           ernetes.io/arch:amd6
                                                                                           4 beta.kubernetes.io
                                                                                           /os:linux

To display the kubelet or Docker version, use the components option with the show command. This example lists the kubelet and kube-proxy versions, the container runtime, and the node status for the server11 master and worker nodes.

cumulus@host:~$ netq server11 show kubernetes node components
Matching kube_cluster records:
                         Master           Cluster Name         Node Name    Kubelet      KubeProxy         Container Runt
                                                                                                           ime
------------------------ ---------------- -------------------- ------------ ------------ ----------------- --------------
server11:3.0.0.68        default          server11             v1.9.2       v1.9.2       docker://17.3.2   KubeletReady
server11:3.0.0.68        default          server13             v1.9.2       v1.9.2       docker://17.3.2   KubeletReady
server11:3.0.0.68        default          server22             v1.9.2       v1.9.2       docker://17.3.2   KubeletReady
server11:3.0.0.68        default          server11             v1.9.2       v1.9.2       docker://17.3.2   KubeletReady
server11:3.0.0.68        default          server12             v1.9.2       v1.9.2       docker://17.3.2   KubeletReady
server11:3.0.0.68        default          server23             v1.9.2       v1.9.2       docker://17.3.2   KubeletReady
server11:3.0.0.68        default          server24             v1.9.2       v1.9.2       docker://17.3.2   KubeletReady

To view only the details for a selected node, use the name option with the hostname of that node after the components option:

cumulus@host:~$ netq server11 show kubernetes node components name server13
Matching kube_cluster records:
                         Master           Cluster Name         Node Name    Kubelet      KubeProxy         Container Runt
                                                                                                           ime
------------------------ ---------------- -------------------- ------------ ------------ ----------------- --------------
server11:3.0.0.68        default          server13             v1.9.2       v1.9.2       docker://17.3.2   KubeletReady

View Kubernetes Replica Set on a Node

You can view information about the replica set, including the name, labels, and number of replicas present for each application. This example shows the number of replicas for each application in the server11 cluster:

cumulus@host:~$ netq server11 show kubernetes replica-set
Matching kube_replica records:
Master                   Cluster Name Namespace        Replication Name               Labels               Replicas                           Ready Replicas Last Changed
------------------------ ------------ ---------------- ------------------------------ -------------------- ---------------------------------- -------------- ----------------
server11:3.0.0.68        default      default          influxdb-6cdb566dd             app:influx           1                                  1              14h:19m:28s
server11:3.0.0.68        default      default          nginx-8586cf59                 run:nginx            3                                  3              14h:24m:39s
server11:3.0.0.68        default      default          httpd-5456469bfd               app:httpd            1                                  1              14h:19m:28s
server11:3.0.0.68        default      kube-system      kube-dns-6f4fd4bdf             k8s-app:kube-dns     1                                  1              14h:27m:9s
server11:3.0.0.68        default      kube-system      calico-kube-controllers-d669cc k8s-app:calico-kube- 1                                  1              14h:27m:9s
                                                       78f                            controllers

View the Daemon-sets on a Node

You can view information about the daemon sets running in the cluster. This example shows that six copies of the cumulus-frr daemon are running in the cluster whose master is server11:

cumulus@host:~$ netq server11 show kubernetes daemon-set namespace default
Matching kube_daemonset records:
Master                   Cluster Name Namespace        Daemon Set Name                Labels               Desired Count Ready Count Last Changed
------------------------ ------------ ---------------- ------------------------------ -------------------- ------------- ----------- ----------------
server11:3.0.0.68        default      default          cumulus-frr                    k8s-app:cumulus-frr  6             6           14h:25m:37s

View Pods on a Node

You can view information about the pods on the node. The first example shows all pods running nginx in the default namespace for the server11 cluster. The second example shows all pods with an app label in the default namespace for the server11 cluster.

cumulus@host:~$ netq server11 show kubernetes pod namespace default label nginx
Matching kube_pod records:
Master                   Namespace    Name                 IP               Node         Labels               Status   Containers               Last Changed
------------------------ ------------ -------------------- ---------------- ------------ -------------------- -------- ------------------------ ----------------
server11:3.0.0.68        default      nginx-8586cf59-26pj5 10.244.9.193     server24     run:nginx            Running  nginx:6e2b65070c86       14h:25m:24s
server11:3.0.0.68        default      nginx-8586cf59-c82ns 10.244.40.128    server12     run:nginx            Running  nginx:01b017c26725       14h:25m:24s
server11:3.0.0.68        default      nginx-8586cf59-wjwgp 10.244.49.64     server22     run:nginx            Running  nginx:ed2b4254e328       14h:25m:24s
 
cumulus@host:~$ netq server11 show kubernetes pod namespace default label app
Matching kube_pod records:
Master                   Namespace    Name                 IP               Node         Labels               Status   Containers               Last Changed
------------------------ ------------ -------------------- ---------------- ------------ -------------------- -------- ------------------------ ----------------
server11:3.0.0.68        default      httpd-5456469bfd-bq9 10.244.49.65     server22     app:httpd            Running  httpd:79b7f532be2d       14h:20m:34s
                                      zm
server11:3.0.0.68        default      influxdb-6cdb566dd-8 10.244.162.128   server13     app:influx           Running  influxdb:15dce703cdec    14h:20m:34s
                                      9lwn

View Status of the Replication Controller on a Node

When replicas have been created, you can view information about the replication controller. In this example, no replication controllers are in use, so no matching records are found:

cumulus@host:~$ netq server11 show kubernetes replication-controller
No matching kube_replica records found

View Kubernetes Deployment Information

For each deployment, you can view the number of replicas associated with an application. This example shows information for a deployment of the nginx application:

cumulus@host:~$ netq server11 show kubernetes deployment name nginx
Matching kube_deployment records:
Master                   Namespace       Name                 Replicas                           Ready Replicas Labels                         Last Changed
------------------------ --------------- -------------------- ---------------------------------- -------------- ------------------------------ ----------------
server11:3.0.0.68        default         nginx                3                                  3              run:nginx                      14h:27m:20s
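
Because the output is whitespace-separated, you can script simple health checks around it. The following sketch assumes the column layout shown above, with the desired and ready replica counts in the fourth and fifth fields, and prints a warning when they differ; the awk filter is a shell convenience, not part of the netq CLI:

cumulus@host:~$ netq server11 show kubernetes deployment name nginx | \
    awk '/^server/ { if ($4 != $5) print "WARNING: " $3 " has " $5 " of " $4 " replicas ready" }'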

Search Using Labels

You can search for information about your Kubernetes clusters using labels. A label search is similar to a “contains” regular expression search. In the following example, we are looking for all replica sets whose name or label contains kube:

cumulus@host:~$ netq server11 show kubernetes replica-set label kube
Matching kube_replica records:
Master                   Cluster Name Namespace        Replication Name               Labels               Replicas                           Ready Replicas Last Changed
------------------------ ------------ ---------------- ------------------------------ -------------------- ---------------------------------- -------------- ----------------
server11:3.0.0.68        default      kube-system      kube-dns-6f4fd4bdf             k8s-app:kube-dns     1                                  1              14h:30m:41s
server11:3.0.0.68        default      kube-system      calico-kube-controllers-d669cc k8s-app:calico-kube- 1                                  1              14h:30m:41s
                                                       78f                            controllers
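
The same contains-style search works for any substring. For example, reusing the syntax above to find replica sets whose name or label contains calico:

cumulus@host:~$ netq server11 show kubernetes replica-set label calico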

View Container Connectivity

You can view the connectivity graph of a Kubernetes pod at the replica set, deployment, or service level. The connectivity graph starts with the server where the pod is deployed and shows the peer for each server interface. This data is displayed in a similar manner to the netq trace command output, showing the interface name, the outbound port on that interface, and the inbound port on the peer.

This example shows connectivity at the deployment level, where the nginx-8586cf59-wjwgp replica is in a pod on the server22 node. It has four possible communication paths through interfaces swp1 through swp4, out to peer interfaces swp7 and swp20 on the torc-21, torc-22, edge01, and edge02 nodes. Similarly, the connections are shown for the two additional nginx replicas.

cumulus@host:~$ netq server11 show kubernetes deployment name nginx connectivity
nginx -- nginx-8586cf59-wjwgp -- server22:swp1:torbond1 -- swp7:hostbond3:torc-21
                              -- server22:swp2:torbond1 -- swp7:hostbond3:torc-22
                              -- server22:swp3:NetQBond-2 -- swp20:NetQBond-20:edge01
                              -- server22:swp4:NetQBond-2 -- swp20:NetQBond-20:edge02
      -- nginx-8586cf59-c82ns -- server12:swp2:NetQBond-1 -- swp23:NetQBond-23:edge01
                              -- server12:swp3:NetQBond-1 -- swp23:NetQBond-23:edge02
                              -- server12:swp1:swp1 -- swp6:VlanA-1:tor-1
      -- nginx-8586cf59-26pj5 -- server24:swp2:NetQBond-1 -- swp29:NetQBond-29:edge01
                              -- server24:swp3:NetQBond-1 -- swp29:NetQBond-29:edge02
                              -- server24:swp1:swp1 -- swp8:VlanA-1:tor-2
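
To focus on the paths for a single replica or server, you can filter the connectivity output with grep (a shell convenience, not a netq option). For example, to show only the lines that involve server22:

cumulus@host:~$ netq server11 show kubernetes deployment name nginx connectivity | grep server22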

View Kubernetes Services Information

You can show details about the Kubernetes services in a cluster, including the service name, labels associated with the service, type of service, associated IP address, an external address if it is a public service, and the ports used. This example shows the services available in the Kubernetes cluster:

cumulus@host:~$ netq show kubernetes service
Matching kube_service records:
Master                   Namespace        Service Name         Labels       Type       Cluster IP       External IP      Ports                               Last Changed
------------------------ ---------------- -------------------- ------------ ---------- ---------------- ---------------- ----------------------------------- ----------------
server11:3.0.0.68        default          kubern