
Set Up Your VMware Virtual Machine for an On-premises Server Cluster

First configure the VM on the master node, and then configure the VM on each worker node.

Follow these steps to set up and configure your VM cluster for an on-premises deployment:

  1. Verify that your master node meets the VM requirements.

    Resource                 | Minimum Requirements
    -------------------------|------------------------------------------------------------------------------
    Processor                | Sixteen (16) virtual CPUs
    Memory                   | 64 GB RAM
    Local disk storage       | 500 GB SSD with a minimum of 1000 disk IOPS for a standard 4 KB block size
                             | (Note: This must be an SSD; use of other storage options can lead to system instability and is not supported.)
    Network interface speed  | 1 Gb NIC
    Hypervisor               | VMware ESXi™ 6.5 or later (OVA image) for servers running Cumulus Linux, CentOS, Ubuntu, and RedHat operating systems
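
    To confirm that the VM's storage can sustain the required 1000 IOPS at a 4 KB block size, one option is a short random-read test with fio run from inside the deployed VM. This is an illustrative sketch only; fio is not part of the NetQ image, the test file path is arbitrary, and installing fio requires internet access from the VM:

    cumulus@hostname:~$ sudo apt-get install -y fio
    cumulus@hostname:~$ fio --name=iops-check --filename=/tmp/fio-test --direct=1 --ioengine=libaio --rw=randread --bs=4k --size=1G --iodepth=16 --runtime=60 --time_based
    cumulus@hostname:~$ rm /tmp/fio-test

    Compare the "read: IOPS=" value in the fio output against the 1000 IOPS minimum.
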
  2. Confirm that the needed ports are open for communications.

    You must open the following ports on your NetQ on-premises servers:
    Port or Protocol Number | Protocol    | Component Access
    ------------------------|-------------|---------------------------------------
    4                       | IP Protocol | Calico networking (IP-in-IP Protocol)
    22                      | TCP         | SSH
    80                      | TCP         | Nginx
    179                     | TCP         | Calico networking (BGP)
    443                     | TCP         | NetQ UI
    2379                    | TCP         | etcd datastore
    4789                    | UDP         | Calico networking (VxLAN)
    5000                    | TCP         | Docker registry
    6443                    | TCP         | kube-apiserver
    31980                   | TCP         | NetQ Agent communication
    31982                   | TCP         | NetQ Agent SSL communication
    32708                   | TCP         | API Gateway
    Additionally, for internal cluster communication, you must open these ports:
    Port  | Protocol | Component Access
    ------|----------|--------------------------------
    8080  | TCP      | Admin API
    5000  | TCP      | Docker registry
    6443  | TCP      | Kubernetes API server
    10250 | TCP      | kubelet health probe
    2379  | TCP      | etcd
    2380  | TCP      | etcd
    7072  | TCP      | Kafka JMX monitoring
    9092  | TCP      | Kafka client
    7071  | TCP      | Cassandra JMX monitoring
    7000  | TCP      | Cassandra cluster communication
    9042  | TCP      | Cassandra client
    7073  | TCP      | Zookeeper JMX monitoring
    2888  | TCP      | Zookeeper cluster communication
    3888  | TCP      | Zookeeper cluster communication
    2181  | TCP      | Zookeeper client
    36443 | TCP      | Kubernetes control plane

    Port 32666 is no longer used for the NetQ UI.
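
    Before you install NetQ, you can also check that no other service on the server is already bound to one of the required TCP ports. This is an illustrative check only; apart from sshd on port 22, the list should be empty on a freshly deployed VM:

    cumulus@hostname:~$ sudo ss -tlnp | grep -E ':(22|80|179|443|2379|5000|6443|31980|31982|32708)[[:space:]]'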

  3. Download the NetQ Platform image.

    1. On the NVIDIA Application Hub, log in to your account.
    2. Select NVIDIA Licensing Portal.
    3. Select Software Downloads from the menu.
    4. Click Product Family and select NetQ.
    5. Locate the NetQ SW 4.3 VMware image and select Download.
    6. If prompted, agree to the license agreement and proceed with the download.

    For enterprise customers, if you do not see a link to the NVIDIA Licensing Portal on the NVIDIA Application Hub, contact NVIDIA support.


    For NVIDIA employees, download NetQ directly from the NVIDIA Licensing Portal.

  4. Set up and configure your VM.

    Open your hypervisor and set up your VM. You can use this example for reference or use your own hypervisor instructions.

    VMware Example Configuration

    This example shows the VM setup process using an OVA file with VMware ESXi.
    1. Enter the address of the hardware in your browser.

    2. Log in to VMware using credentials with root access.

    3. Click Storage in the Navigator to verify you have an SSD installed.

    4. Click Create/Register VM at the top of the right pane.

    5. Select Deploy a virtual machine from an OVF or OVA file, and click Next.

    6. Provide a name for the VM, for example NetQ.

      Tip: Make note of the name used during install as this is needed in a later step.

    7. Drag and drop the NetQ Platform image file you downloaded in Step 3 above.

    8. Click Next.

    9. Select the storage type and data store for the image to use, then click Next. In this example, only one is available.

    10. Accept the default deployment options or modify them according to your network needs. Click Next when you are finished.

    11. Review the configuration summary. Click Back to change any of the settings, or click Finish to continue with the creation of the VM.

      The progress of the request is shown in the Recent Tasks window at the bottom of the application. This may take some time, so continue with your other work until the upload finishes.

    12. Once completed, view the full details of the VM and hardware.

  5. Log in to the VM and change the password.

    Use the default credentials to log in the first time:

    • Username: cumulus
    • Password: cumulus
    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 20.04 LTS
    cumulus@<ipaddr>'s password:
    You are required to change your password immediately (root enforced)
    System information as of Thu Dec  3 21:35:42 UTC 2020
    System load:  0.09              Processes:           120
    Usage of /:   8.1% of 61.86GB   Users logged in:     0
    Memory usage: 5%                IP address for eth0: <ipaddr>
    Swap usage:   0%
    WARNING: Your password has expired.
    You must change your password now and login again!
    Changing password for cumulus.
    (current) UNIX password: cumulus
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    Connection to <ipaddr> closed.
    

    Log in again with your new password.

    $ ssh cumulus@<ipaddr>
    Warning: Permanently added '<ipaddr>' (ECDSA) to the list of known hosts.
    Ubuntu 20.04 LTS
    cumulus@<ipaddr>'s password:
      System information as of Thu Dec  3 21:35:59 UTC 2020
      System load:  0.07              Processes:           121
      Usage of /:   8.1% of 61.86GB   Users logged in:     0
      Memory usage: 5%                IP address for eth0: <ipaddr>
      Swap usage:   0%
    Last login: Thu Dec  3 21:35:43 2020 from <local-ipaddr>
    cumulus@ubuntu:~$
    
  6. Verify the master node is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check
  7. Change the hostname for the VM from the default value.

    The default hostname for the NetQ Virtual Machines is ubuntu. Change the hostname to fit your naming conventions while meeting Internet and Kubernetes naming standards.

    Kubernetes requires that hostnames are composed of a sequence of labels concatenated with dots. For example, “en.wikipedia.org” is a hostname. Each label must be from 1 to 63 characters long. The entire hostname, including the delimiting dots, has a maximum of 253 ASCII characters.

    The Internet standards (RFCs) for protocols specify that labels may contain only the ASCII letters a through z (in lower case), the digits 0 through 9, and the hyphen-minus character ('-').

    Use the following command:

    cumulus@hostname:~$ sudo hostnamectl set-hostname NEW_HOSTNAME

    Add the same NEW_HOSTNAME value to /etc/hosts on your VM for the localhost entry. Example:

    127.0.0.1 localhost NEW_HOSTNAME
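
    To confirm the change, you can check the static hostname and the /etc/hosts entry, and optionally test that the new name satisfies the label rules above (the pattern below also rejects labels that begin or end with a hyphen, per DNS convention). The hostname netq-master-01 is a placeholder:

    cumulus@hostname:~$ hostnamectl --static
    netq-master-01
    cumulus@hostname:~$ grep localhost /etc/hosts
    127.0.0.1 localhost netq-master-01
    cumulus@hostname:~$ echo netq-master-01 | grep -Eq '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$' && echo "valid label" || echo "invalid label"
    valid label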
  8. Verify that your first worker node meets the VM requirements, as described in Step 1.

  9. Confirm that the needed ports are open for communications, as described in Step 2.

  10. Open your hypervisor and set up the VM in the same manner as for the master node.

    Make a note of the private IP address you assign to the worker node. You need it for later installation steps.
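
    One way to confirm the address is to query the interface from inside the worker VM, assuming the management interface is eth0 as in the earlier examples:

    cumulus@hostname:~$ ip -4 addr show eth0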

  11. Verify the worker node is ready for installation. Fix any errors indicated before installing the NetQ software.

    cumulus@hostname:~$ sudo opta-check
  12. Repeat Steps 8 through 11 for each additional worker node you want in your cluster.

  13. The final step is to install and activate the NetQ software using the CLI:

Run the following command on your master node to initialize the cluster. Copy the output of the command to use on your worker nodes:

cumulus@<hostname>:~$ netq install cluster master-init
    Please run the following command on all worker nodes:
    netq install cluster worker-init c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFDM2NjTTZPdVVUWWJ5c2Q3NlJ4SHdseHBsOHQ4N2VMRWVGR05LSWFWVnVNcy94OEE4RFNMQVhKOHVKRjVLUXBnVjdKM2lnMGJpL2hDMVhmSVVjU3l3ZmhvVDVZM3dQN1oySVZVT29ZTi8vR1lOek5nVlNocWZQMDNDRW0xNnNmSzVvUWRQTzQzRFhxQ3NjbndIT3dwZmhRYy9MWTU1a

Run the netq install cluster worker-init <ssh-key> command on each of your worker nodes, as shown below.
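
For example, on each worker node, replacing <ssh-key> with the full key string printed by the master node in the previous step:

cumulus@<worker-hostname>:~$ netq install cluster worker-init <ssh-key>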

Run the following command on your master node, using the IP addresses of your worker nodes:

cumulus@<hostname>:~$ netq install cluster full interface eth0 bundle /mnt/installables/NetQ-4.3.0.tgz workers <worker-1-ip> <worker-2-ip>

You can specify the IP address instead of the interface name: use ip-addr <IP address> in place of interface <ifname> above, as in the example below.
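
For example, with hypothetical addresses where the master node uses 10.1.1.10 and the worker nodes use 10.1.1.11 and 10.1.1.12:

cumulus@<hostname>:~$ netq install cluster full ip-addr 10.1.1.10 bundle /mnt/installables/NetQ-4.3.0.tgz workers 10.1.1.11 10.1.1.12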

If you have changed the IP address or hostname of the NetQ On-premises VM after this step, you need to re-register this address with NetQ as follows:

Reset the VM, indicating whether you want to purge any NetQ DB data or keep it.

cumulus@hostname:~$ netq bootstrap reset [purge-db|keep-db]

Re-run the install CLI on the appliance. This example uses interface eno1. Replace this with your updated IP address, hostname or interface using the interface or ip-addr option.

cumulus@hostname:~$ netq install cluster full interface eno1 bundle /mnt/installables/NetQ-4.3.0.tgz workers <worker-1-ip> <worker-2-ip>

If this step fails for any reason, you can run netq bootstrap reset and then try again.

Verify Installation Status

To view the status of the installation, use the netq show status [verbose] command. The following example shows a successful on-premises installation:

State: Active
    Version: 4.3.0
    Installer Version: 4.3.0
    Installation Type: Standalone
    Activation Key: PKrgipMGEhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIixUQmFLTUhzZU80RUdTL3pOT01uQ2lnRnrrUhTbXNPUGRXdnUwTVo5SEpBPTIHZGVmYXVsdDoHbmV0cWRldgz=
    Master SSH Public Key: a3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFEazliekZDblJUajkvQVhOZ0hteXByTzZIb3Y2cVZBWFdsNVNtKzVrTXo3dmMrcFNZTGlOdWl1bEhZeUZZVDhSNmU3bFdqS3NrSE10bzArNFJsQVd6cnRvbVVzLzlLMzQ4M3pUMjVZQXpIU2N1ZVhBSE1TdTZHZ0JyUkpXYUpTNjJ2RTkzcHBDVjBxWWJvUFo3aGpCY3ozb0VVWnRsU1lqQlZVdjhsVjBNN3JEWW52TXNGSURWLzJ2eks3K0x2N01XTG5aT054S09hdWZKZnVOT0R4YjFLbk1mN0JWK3hURUpLWW1mbTY1ckoyS1ArOEtFUllrr5TkF3bFVRTUdmT3daVHF2RWNoZnpQajMwQ29CWDZZMzVST2hDNmhVVnN5OEkwdjVSV0tCbktrWk81MWlMSDAyZUpJbXJHUGdQa2s1SzhJdGRrQXZISVlTZ0RwRlpRb3Igcm9vdEBucXRzLTEwLTE4OC00NC0xNDc=
    Is Cloud: False
    
    Cluster Status:
    IP Address     Hostname       Role    Status
    -------------  -------------  ------  --------
    10.188.44.147  10.188.44.147  Role    Ready
    
    NetQ... Active
    

Run the netq show opta-health command to verify all applications are operating properly. Allow 10-15 minutes for all applications to come up and report their status.

cumulus@hostname:~$ netq show opta-health
    Application                                            Status    Namespace      Restarts    Timestamp
    -----------------------------------------------------  --------  -------------  ----------  ------------------------
    cassandra-rc-0-w7h4z                                   READY     default        0           Fri Apr 10 16:08:38 2020
    cp-schema-registry-deploy-6bf5cbc8cc-vwcsx             READY     default        0           Fri Apr 10 16:08:38 2020
    kafka-broker-rc-0-p9r2l                                READY     default        0           Fri Apr 10 16:08:38 2020
    kafka-connect-deploy-7799bcb7b4-xdm5l                  READY     default        0           Fri Apr 10 16:08:38 2020
    netq-api-gateway-deploy-55996ff7c8-w4hrs               READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-address-deploy-66776ccc67-phpqk               READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-admin-oob-mgmt-server                         READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-bgp-deploy-7dd4c9d45b-j9bfr                   READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-clagsession-deploy-69564895b4-qhcpr           READY     default        0           Fri Apr 10 16:08:38 2020
    netq-app-configdiff-deploy-ff54c4cc4-7rz66             READY     default        0           Fri Apr 10 16:08:38 2020
    ...
    

If any of the applications or services display Status as DOWN after 30 minutes, open a support ticket and attach the output of the opta-support command.