Set Up Your Virtual Machine for an On-premises HA Scale Cluster

Follow these steps to set up and configure your VM on a cluster of servers in an on-premises deployment. First configure the VM on the master node, and then configure the VM on each additional node. NVIDIA recommends installing the virtual machines on different servers to increase redundancy in the event of a hardware failure.

NetQ 4.12.0 supports a 3-node HA scale cluster consisting of one master node and two HA worker nodes.


System Requirements

Verify that each node in your cluster meets the VM requirements.

Resource                 Minimum Requirements
-----------------------  -----------------------------------------------------------------
Processor                48 virtual CPUs
Memory                   512GB RAM
Local disk storage       3.2TB SSD with minimum disk IOPS of 1000 for a standard 4kb block size (this must be an SSD; other storage options can lead to system instability and are not supported)
Network interface speed  10 Gbps NIC
Hypervisor               KVM/QCOW (QEMU Copy on Write) image for servers running Ubuntu; VMware ESXi™ 6.5 or later (OVA image) for servers running Cumulus Linux or Ubuntu
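
To spot-check a node against these requirements before installing, you can run a few standard Linux commands on it. The fio invocation below is a minimal sketch for measuring random-write IOPS at a 4k block size; it assumes the fio package is installed, and it writes a temporary 1GB test file, so point --filename at the SSD you plan to use and delete the file afterward:

$ nproc                          # virtual CPU count
$ free -g                        # memory in GB
$ lsblk -d -o NAME,SIZE,ROTA     # ROTA=0 indicates an SSD
$ sudo fio --name=iops-check --filename=<ssd-mount-point>/fio-test --size=1G --bs=4k --rw=randwrite --direct=1 --ioengine=libaio --iodepth=32 --runtime=30 --time_based --group_reporting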

Port Requirements

Confirm that the required ports are open for communications.

Port or Protocol Number  Protocol     Component Access
-----------------------  -----------  -------------------------------------
4                        IP Protocol  Calico networking (IP-in-IP Protocol)
22                       TCP          SSH
80                       TCP          nginx
179                      TCP          Calico networking (BGP)
443                      TCP          NetQ UI
2379                     TCP          etcd datastore
4789                     UDP          Calico networking (VxLAN)
5000                     TCP          Docker registry
6443                     TCP          kube-apiserver
30001                    TCP          DPU communication
31980                    TCP          NetQ Agent communication
31982                    TCP          NetQ Agent SSL communication
32708                    TCP          API Gateway

Additionally, for internal cluster communication, you must open these ports:

Port or Protocol Number  Protocol  Component Access
-----------------------  --------  -------------------------------
8080                     TCP       Admin API
5000                     TCP       Docker registry
6443                     TCP       Kubernetes API server
10250                    TCP       kubelet health probe
2379                     TCP       etcd
2380                     TCP       etcd
7072                     TCP       Kafka JMX monitoring
9092                     TCP       Kafka client
7071                     TCP       Cassandra JMX monitoring
7000                     TCP       Cassandra cluster communication
9042                     TCP       Cassandra client
7073                     TCP       Zookeeper JMX monitoring
2888                     TCP       Zookeeper cluster communication
3888                     TCP       Zookeeper cluster communication
2181                     TCP       Zookeeper client
36443                    TCP       Kubernetes control plane
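
Before installing, you can probe these ports between nodes to confirm that no firewall is blocking them. The loop below is a minimal sketch using netcat (nc); replace <node-ipaddr> with the address of the peer node you are testing. Because the NetQ services are not running yet, an open path typically reports "Connection refused", while a timeout suggests a firewall is dropping the traffic:

$ for port in 8080 5000 6443 10250 2379 2380 9092 9042 2181; do
>   nc -zvw3 <node-ipaddr> $port
> done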

Installation and Configuration

  1. Download the NetQ image.

    a. Log in to your NVIDIA Application Hub account.
    b. Select NVIDIA Licensing Portal.
    c. Select Software Downloads from the menu.
    d. Click Product Family and select NetQ.
    e. For deployments using KVM, download the NetQ SW 4.12.0 KVM Scale image. For deployments using VMware, download the NetQ SW 4.12.0 VMware Scale image.
    f. If prompted, read the license agreement and proceed with the download.

NVIDIA employees can download NetQ directly from the NVIDIA Licensing Portal.

  2. Open your hypervisor and configure your VM. You can use the following examples for reference or use your own hypervisor instructions.
KVM Example Configuration

This example shows the VM setup process for a system with Libvirt and KVM/QEMU installed.

  1. Confirm that the SHA256 checksum matches the one posted on the NVIDIA Application Hub to ensure the image download has not been corrupted.

    $ sha256sum ./Downloads/netq-4.12.0-ubuntu-20.04-ts-qemu-scale.qcow2
    a0d9a4f9ce8925b7dfb90a5a44616cadbf3fc667013abae07cd774555c08ff6f ./Downloads/netq-4.12.0-ubuntu-20.04-ts-qemu-scale.qcow2
  2. Copy the QCOW2 image to a directory where you want to run it.

    Tip: Copy, rather than move, the downloaded QCOW2 image so that you do not need to download it again if you repeat this process.

    $ sudo mkdir /vms
    $ sudo cp ./Downloads/netq-4.12.0-ubuntu-20.04-ts-qemu-scale.qcow2 /vms/ts.qcow2
  3. Create the VM.

    For a Direct VM, where the VM uses a MACVLAN interface to sit on the host interface for its connectivity:

    $ virt-install --name=netq_ts --vcpus=48 --memory=524288 --os-type=linux --os-variant=generic --disk path=/vms/ts.qcow2,format=qcow2,bus=virtio,cache=none --network=type=direct,source=eth0,model=virtio --import --noautoconsole

    Replace the disk path value with the location where the QCOW2 image resides. Replace the network source value (eth0 in the example above) with the name of the host interface that connects the VM to the external network.

    Or, for a Bridged VM, where the VM attaches to a bridge that has already been set up to allow external access:

    $ virt-install --name=netq_ts --vcpus=48 --memory=524288 --os-type=linux --os-variant=generic --disk path=/vms/ts.qcow2,format=qcow2,bus=virtio,cache=none --network=bridge=br0,model=virtio --import --noautoconsole

    Replace the network bridge value (br0 in the example above) with the name of the pre-existing bridge interface that connects the VM to the external network.

    Make note of the name used during install, as you will need it in a later step.

  4. Watch the boot process in another terminal window.
    $ virsh console netq_ts
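
    To confirm that the VM is running and to find the IP address it acquired, you can query libvirt from the host. These are standard virsh commands; note that virsh domifaddr often returns no output for direct (MACVLAN) interfaces, in which case check your DHCP server's leases for the VM's MAC address instead.

    $ virsh list --all
    $ virsh domifaddr netq_ts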
VMware Example Configuration

This example shows the VM setup process using an OVA file with VMware ESXi.
  1. Enter the address of the hardware in your browser.

  2. Log in to VMware using credentials with root access.

  3. Click Storage in the Navigator to verify you have an SSD installed.

  4. Click Create/Register VM at the top of the right pane.

  5. Select Deploy a virtual machine from an OVF or OVA file, and click Next.

  6. Provide a name for the VM, for example NetQ.

    Tip: Make note of the name used during install as this is needed in a later step.

  7. Drag and drop the NetQ Platform image file you downloaded in Step 1 above.

  8. Click Next.

  9. Select the storage type and data store for the image to use, then click Next. In this example, only one is available.

  10. Accept the default deployment options or modify them according to your network needs. Click Next when you are finished.

  11. Review the configuration summary. Click Back to change any of the settings, or click Finish to continue with the creation of the VM.

    The progress of the request is shown in the Recent Tasks window at the bottom of the application. This may take some time, so continue with your other work until the upload finishes.

  12. Once completed, view the full details of the VM and hardware.

  3. Log in to the VM and change the password.

Use the default credentials to log in the first time:

  • Username: cumulus
  • Password: cumulus
$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
You are required to change your password immediately (root enforced)
System information as of Thu Dec  3 21:35:42 UTC 2020
System load:  0.09              Processes:           120
Usage of /:   8.1% of 61.86GB   Users logged in:     0
Memory usage: 5%                IP address for eth0: <ipaddr>
Swap usage:   0%
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for cumulus.
(current) UNIX password: cumulus
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Connection to <ipaddr> closed.

Log in again with your new password.

$ ssh cumulus@<ipaddr>
Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
Ubuntu 20.04 LTS
cumulus@<ipaddr>'s password:
  System information as of Thu Dec  3 21:35:59 UTC 2020
  System load:  0.07              Processes:           121
  Usage of /:   8.1% of 61.86GB   Users logged in:     0
  Memory usage: 5%                IP address for eth0: <ipaddr>
  Swap usage:   0%
Last login: Thu Dec  3 21:35:43 2020 from <local-ipaddr>
cumulus@ubuntu:~$
  4. Verify that the master node is ready for installation. Fix any errors before installing the NetQ software.
cumulus@hostname:~$ sudo opta-check scale
  5. Change the hostname for the VM from the default value.

The default hostname for the NetQ virtual machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.

Kubernetes requires hostnames to be composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.

The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').

Use the following command:

cumulus@hostname:~$ sudo hostnamectl set-hostname NEW_HOSTNAME

Add the same NEW_HOSTNAME value to /etc/hosts on your VM for the localhost entry. For example:

127.0.0.1 localhost NEW_HOSTNAME
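
As a quick sanity check before setting the hostname, the following sketch validates a candidate name against the label rules above using a grep regular expression. The regex is an assumption derived from the constraints described here rather than a NetQ-provided tool, and it checks label syntax only, not the 253-character total limit:

cumulus@hostname:~$ echo "netq-master.example.com" | grep -qP '^([a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?\.)*[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$' && echo valid || echo invalid
valid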
  6. Open your hypervisor and set up the VM for the additional nodes in the same manner as for the master node.

  7. Run the following command on each node to verify that the node is ready for a NetQ software installation. Fix any errors indicated before installing the software.

cumulus@hostname:~$ sudo opta-check scale
  8. Install and activate the NetQ software using the CLI.

Run the following command on your master node to initialize the cluster. Copy the output of the command to use on your HA worker nodes:

cumulus@<hostname>:~$ netq install cluster master-init
    Please run the following command on all worker nodes:
    netq install cluster worker-init c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFDM2NjTTZPdVM3dQN9MWTU1a
  9. Run the netq install cluster worker-init <ssh-key> command on each of your worker nodes.

  10. Run the netq install cluster config generate command on your master node to generate a template for the cluster configuration JSON file:

cumulus@netq-server:~$ netq install cluster config generate
2024-10-28 17:29:53.260462: master-node-installer: Writing cluster installation configuration template file @ /tmp/cluster-install-config.json
  11. Edit the cluster configuration JSON file with the desired values for each attribute:
cumulus@netq-server:~$ vim /tmp/cluster-install-config.json 
{
        "version": "v2.0",
        "interface": "<INPUT>",
        "cluster-vip": "<INPUT>",
        "master-ip": "<INPUT>",
        "is-ipv6": false,
        "ha-nodes": [
                {
                        "ip": "<INPUT>"
                },
                {
                        "ip": "<INPUT>"
                }
        ]
}
Attribute    Description
-----------  ------------------------------------------------------------------
interface    The local network interface on your master node used for NetQ connectivity.
cluster-vip  The cluster virtual IP address must be an unused IP address allocated from the same subnet assigned to the default interface for your master and worker nodes.
master-ip    The IP address assigned to the interface on your master node used for NetQ connectivity.
is-ipv6      Set the value to true if your network connectivity and node address assignments are IPv6.
ha-nodes     The IP addresses of each of the HA nodes in your cluster.
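
Before committing to a cluster-vip value, you can verify that the candidate address is not already in use on the subnet. A minimal sketch, assuming the candidate VIP is 10.176.235.101 as in the example below and that the arping utility is installed; if ping receives replies or arping reports a MAC address, choose a different address:

cumulus@netq-server:~$ ping -c 3 10.176.235.101
cumulus@netq-server:~$ sudo arping -c 3 -I eth0 10.176.235.101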

NetQ uses the 10.244.0.0/16 (pod-ip-range) and 10.96.0.0/16 (service-ip-range) networks for internal communication by default. If you are using these networks, you must override each range by specifying new subnets for these parameters in the cluster configuration JSON file:

cumulus@netq-server:~$ vim /tmp/cluster-install-config.json 
{
        "version": "v2.0",
        "interface": "eth0",
        "cluster-vip": "10.176.235.101",
        "master-ip": "10.176.235.50",
        "is-ipv6": false,
        "pod-ip-range": "192.168.0.1/32",
	"service-ip-range": "172.168.0.1/32",
        "ha-nodes": [
                {
                        "ip": "10.176.235.51"
                },
                {
                        "ip": "10.176.235.52"
                }
        ]
}
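
To check whether your environment already uses these default ranges before deciding to override them, you can inspect the routing table on the master node. This is a minimal check using the standard iproute2 tools; any output indicates a potential conflict:

cumulus@netq-server:~$ ip route | grep -E '10\.244\.|10\.96\.'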

  12. Run the following command on your master HA node, using the JSON configuration file created in step 11:
cumulus@<hostname>:~$ netq install cluster bundle /mnt/installables/NetQ-4.12.0.tgz /tmp/cluster-install-config.json

If this step fails for any reason, run netq bootstrap reset and then try again.

Verify Installation Status

To view the status of the installation, use the netq show status [verbose] command. The following example shows a successful on-premises installation:

State: Active
    NetQ Live State: Active
    Installation Status: FINISHED
    Version: 4.12.0
    Installer Version: 4.12.0
    Installation Type: Cluster
    Activation Key: EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIixPSUJCOHBPWUFnWXI2dGlGY2hTRzExR2E5aSt6ZnpjOUvpVVTaDdpZEhFPQ==
    Master SSH Public Key: c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCZ1FDNW9iVXB6RkczNkRC
    Is Cloud: False
    
    Kubernetes Cluster Nodes Status:
    IP Address    Hostname     Role    NodeStatus    Virtual IP
    ------------  -----------  ------  ------------  ------------
    10.213.7.52   10.213.7.52  Worker  Ready         10.213.7.53
    10.213.7.51   10.213.7.51  Worker  Ready         10.213.7.53
    10.213.7.49   10.213.7.49  Master  Ready         10.213.7.53
    
    In Summary, Live state of the NetQ is... Active

Run the netq show opta-health command to verify that all applications are operating properly. Allow at least 15 minutes for all applications to come up and report their status.

cumulus@hostname:~$ netq show opta-health
    Application                                            Status    Namespace      Restarts    Timestamp
    -----------------------------------------------------  --------  -------------  ----------  ------------------------
    cassandra-rc-0-w7h4z                                   READY     default        0           Fri Apr 10 16:08:38 2020
    cp-schema-registry-deploy-6bf5cbc8cc-vwcsx             READY     default        0           Fri Apr 10 16:08:38 2020
    kafka-broker-rc-0-p9r2l                                READY     default        0           Fri Apr 10 16:08:38 2020
    kafka-connect-deploy-7799bcb7b4-xdm5l                  READY     default        0           Fri Apr 10 16:08:38 2020
    netq-api-gateway-deploy-55996ff7c8-w4hrs               READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-address-deploy-66776ccc67-phpqk               READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-admin-oob-mgmt-server                         READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-bgp-deploy-7dd4c9d45b-j9bfr                   READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-clagsession-deploy-69564895b4-qhcpr           READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-configdiff-deploy-ff54c4cc4-7rz66             READY     default        0           Fri Apr 10 16:08:38 2020
    ...

If any of the applications or services display a DOWN status after 30 minutes, open a support ticket and attach the output of the opta-support command.

After NetQ is installed, you can log in to NetQ from your browser.