NVIDIA Tegra
NVIDIA DRIVE OS 5.1 Linux SDK

Developer Guide
5.1.9.0 Release


 
Setting Up Networking on the Host and Target
 
Configuring the Network Interface
Connecting the Target to the Host Using the Network Interface
Configuring the Private LAN to the Target Network
Configuring the DHCP and NFS Server on the Host
Networking through Comms VM
Packet Filter Settings on Comms VM
Domain Name Resolution for the Guest VMs
SSH/SCP/Telnet Access to the Guest VMs
Use the information in this topic to set up the network between an Ubuntu host and your NVIDIA DRIVE platform. Setting up networking includes configuring networking on the host computer and configuring the DHCP and NFS servers.
Configuring the Network Interface
Configuring the network interface for your device requires:
Connecting the board to the host Linux development system.
Configuring the network interface.
Connecting the Target to the Host Using the Network Interface
Use the procedure below to connect your target board to the Linux Ubuntu host machine using a private LAN. This private LAN is not the LAN connecting the host to the Internet. The methods to connect include:
USB-to-Ethernet dongle-type adapter
Onboard 100Mb/1Gb HSD connectors
Note:
To avoid auto-detection conflicts with the adapter, do not configure these interfaces with the network manager.
To use USB-to-Ethernet to connect the board and host on the private LAN
1. If using a USB-to-Ethernet dongle-type adapter on the host, plug the USB type A male end of the first adapter (D-Link DUB-E100) into a USB type A jack on the host, and connect one end of a CAT-6 crossover cable to its RJ45 port. If using an additional onboard or PCIe-based Ethernet NIC instead, connect one end of the CAT-6 crossover cable to the RJ45 port of the NIC.
2. Plug the other male end of the CAT-6 crossover cable into the RJ45 jack of a second USB-to-Ethernet adapter (D-Link DUB-E100).
3. Plug the USB type A male end of that adapter into the designated USB type A jack on the target.
 
To use onboard HSD connector to connect the board and host on the private LAN
1. Obtain one HSD-to-RJ45 dongle and one HSD cable.
2. Connect one end of the HSD cable to one of the 100Mb/1Gb HSD ports of the target, and the other end to the HSD-to-RJ45 dongle.
3. Obtain a CAT-6 crossover patch cable and plug one of its RJ45 male ends into the RJ45 port of the HSD-to-RJ45 dongle.
4. Plug the other male end of the CAT-6 crossover patch cable into the RJ45 port of the host machine, or into the RJ45 port of a USB-to-Ethernet adapter (D-Link DUB-E100) connected to your host machine.
Note:
Consult the Connecting the Platform topic to locate the 100Mb/1Gb HSD target ports.
Configuring the Private LAN to the Target Network
Use the following procedure to configure the host interface for the private LAN connected to the target platform. The procedure assumes eth1 is the Ethernet port on the host PC connected to the NVIDIA board.
To configure the private LAN to the target
1. Determine which host eth<n> port is connected to the target, where <n> is the port instance.
Find the eth device with the following command:
dmesg | grep -i eth
In the grep results, identify the eth<n> port for the smsc95xx or similar USB Ethernet adapter.
For example, the following dmesg result indicates that the eth1 port is connected to the target:
[1310932.166153] smsc95xx 2-5.1:1.0: eth1: register 'smsc95xx' at usb-0000:00:1d.7-5.1, smsc95xx USB 2.0 Ethernet, 00:04:4b:1b:32:6b
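The interface name can also be extracted from the dmesg output programmatically. A minimal sketch, using the sample dmesg line above as input:

```shell
#!/bin/sh
# Sample dmesg line from above; on a real host you would pipe
# `dmesg | grep -i eth` into the same grep instead.
line="[1310932.166153] smsc95xx 2-5.1:1.0: eth1: register 'smsc95xx' at usb-0000:00:1d.7-5.1, smsc95xx USB 2.0 Ethernet, 00:04:4b:1b:32:6b"

# Pull out the first eth<n> token on the line.
iface=$(printf '%s\n' "$line" | grep -o 'eth[0-9][0-9]*' | head -n 1)
echo "$iface"   # prints: eth1
```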
2. On the host, find and edit the following file:
/etc/network/interfaces
This file is read-only, so you must open it with administrative privileges, for example:
sudo vim /etc/network/interfaces
3. Depending on your connection to the target, modify the interfaces file:
Additional NIC card/adapter: Add the following to the interfaces file:
auto eth1
iface eth1 inet static
address 10.0.0.1
netmask 255.255.255.0
USB Ethernet adapter: Add the following to the interfaces file:
auto eth1
allow-hotplug eth1
iface eth1 inet static
address 10.0.0.1
netmask 255.255.255.0
post-up /etc/init.d/isc-dhcp-server restart
4. Restart the host’s networking with the following command:
sudo /etc/init.d/networking restart
5. Hard reboot the host system.
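The /etc/network/interfaces mechanism above assumes the ifupdown stack (the default on Ubuntu 16.04). On hosts where netplan manages networking instead (Ubuntu 18.04 and later), an equivalent static configuration is a sketch like the following; the file name is an assumption:

```yaml
# /etc/netplan/01-private-lan.yaml (hypothetical file name)
# Equivalent static address for the private LAN interface.
network:
  version: 2
  renderer: networkd
  ethernets:
    eth1:
      addresses: [10.0.0.1/24]
```

Apply the configuration with sudo netplan apply. Note that netplan has no direct equivalent of the post-up hook shown above; restart the DHCP server manually if needed.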
Configuring the DHCP and NFS Server on the Host
The DHCP server on the host assigns an IP address to the target board, and the NFS server exports the root file system that the NVIDIA target board mounts over NFS.
If the DHCP and NFS servers are not yet installed on the host, the installer installs and configures them. Alternatively, those servers can be installed and configured as follows.
To set the DHCP server
1. Install the DHCP server on the host:
sudo apt-get install isc-dhcp-server
2. Specify the interface on which the server listens, so that it can lease an IP address to the target over the private LAN.
Locate and edit the following file:
/etc/default/isc-dhcp-server
This file is read-only, so you must open it with administrator privileges.
Modify the isc-dhcp-server file to set INTERFACES to the eth<n> connection you determined when connecting your network interface.
For example, add the following line if the DHCP server should listen on the eth1 interface:
INTERFACES="eth1"
Changing the interface name (for example, to eth3) can result in the following error:
udevd[148]: error changing net interface name eth0 to eth3: Device or resource busy
To resolve this error, delete the /etc/udev/rules.d/70-persistent-net.rules file.
3. Configure your host DHCP server for the target interface.
Locate and edit the following file:
/etc/dhcp/dhcpd.conf
Because the file is read-only, open it with administrator privileges.
Modify the dhcpd.conf file to contain the following:
ddns-update-style none;
allow bootp;
subnet 10.0.0.0 netmask 255.255.255.0 {
option routers 10.0.0.1;
option domain-name "<domain_name>";
option domain-name-servers <DNS1>, <DNS2>, ... ;
default-lease-time 345600;
max-lease-time 31557600;
range 10.0.0.2 10.0.0.254;
option root-path "10.0.0.1:/<top>/drive-t186ref-linux/targetfs,wsize=8192,rsize=8192,v3";
}
Where:
<domain_name> is your company domain name.
<DNS1>, <DNS2> are the DNS servers that you already added to the /etc/resolv.conf file on your host system. Multiple DNS servers are separated by commas. For example, the Google public DNS IP addresses are formatted as:
8.8.8.8, 8.8.4.4
4. Restart the DHCP server:
sudo /etc/init.d/isc-dhcp-server restart
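As a sanity check on the addressing above, the router address and the lease range must fall inside the declared subnet 10.0.0.0/255.255.255.0. A minimal sketch of that check in shell arithmetic:

```shell
#!/bin/bash
# Sanity check (a sketch): confirm that the router address and lease
# range from dhcpd.conf above lie inside 10.0.0.0/255.255.255.0.
in_subnet() {
  # $1 = address, $2 = network, $3 = netmask (all dotted quads)
  local i a n m
  for i in 1 2 3 4; do
    a=$(echo "$1" | cut -d. -f"$i")
    n=$(echo "$2" | cut -d. -f"$i")
    m=$(echo "$3" | cut -d. -f"$i")
    # Each address octet ANDed with the mask must equal the network octet.
    [ $((a & m)) -eq "$n" ] || return 1
  done
}
in_subnet 10.0.0.1   10.0.0.0 255.255.255.0 && echo "router: in subnet"
in_subnet 10.0.0.2   10.0.0.0 255.255.255.0 && echo "range start: in subnet"
in_subnet 10.0.0.254 10.0.0.0 255.255.255.0 && echo "range end: in subnet"
```

You can also ask the server itself to validate the file syntax with sudo dhcpd -t -cf /etc/dhcp/dhcpd.conf.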
To set the NFS server
1. Install the NFS server on the host using apt-get:
sudo apt-get install nfs-kernel-server nfs-common portmap
2. Locate and edit the following file:
/etc/exports
Add the corresponding path to the target file system:
<top>/drive-t186ref-linux *(async,rw,no_root_squash,no_all_squash,no_subtree_check)
This change exports the target file system.
3. Restart the NFS server:
sudo /etc/init.d/nfs-kernel-server restart
sudo exportfs -a
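An NFS root mount needs the export to allow writes and preserve root ownership, so the rw and no_root_squash options above matter. A small sketch that pulls the option list out of the exports line and confirms both are present (the line is reproduced from above; <top> is the SDK install path placeholder):

```shell
#!/bin/sh
# Export line reproduced from the /etc/exports entry above.
export_line='<top>/drive-t186ref-linux *(async,rw,no_root_squash,no_all_squash,no_subtree_check)'

# Extract the comma-separated option list between the parentheses.
opts=$(printf '%s\n' "$export_line" | sed 's/.*(\(.*\))/\1/')

# Check each required option against the list.
for need in rw no_root_squash; do
  case ",$opts," in
    *",$need,"*) echo "$need: present" ;;
    *)           echo "$need: MISSING" ;;
  esac
done
```

After restarting the server, sudo exportfs -v lists the active exports with their options.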
To enable Internet access from the target
1. On the Linux host, enter these commands to enable IP forwarding and NAT:
$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo iptables -F
$ sudo iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
Where eth1 is the host interface connected to the Internet.
$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Where eth0 is the host's private LAN interface connected to the target.
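The sysctl -w setting above does not survive a reboot. To persist IP forwarding across reboots, the usual approach is to add the key to /etc/sysctl.conf:

```
# /etc/sysctl.conf (appended line; enables IPv4 forwarding at boot)
net.ipv4.ip_forward=1
```

Reload the file without rebooting using sudo sysctl -p.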
Networking through Comms VM
In the network topology with Comms and Security, a dedicated virtual machine (Comms VM) acts as the central gateway for all communication services. The Guest OS VMs are server nodes of an internal private network and are exposed to the public domain via a Network Address Translator (NAT), which maps network address information (IP address and port number) between the private and public domains and allows nodes in the private network to share the same physical Ethernet link.
Access to the external network is through the Comms VM, which dynamically retrieves the IP address for the platform's physical Ethernet interface from an external DHCP server. The Comms VM owns the Ethernet interface (1G/10G), and the Guest VMs have only the internal static IP configuration defined in the IP Network topic.
Packet Filter Settings on Comms VM
As explained in the IP Network topic, packet filter settings on the Comms VM allow the Guest VMs to be accessed via pre-defined ports.
Use these settings to provide the Guest VMs with:
Internet access
SSH/Telnet access
Domain name resolution
To find out the packet filter settings defined in Comms VM, execute the following command on the Comms VM console:
# pfctl -s nat
 
nat on eq0 inet from ! (eq0) port 0:1023 to any -> (eq0:0) port 0:1023
nat on eq0 inet from ! (eq0) port 1024:65535 to any -> (eq0:0) port 1024:65535
rdr on hv0 inet proto udp from any to (hv0) port = domain -> <dns_server_ips> port 53 round-robin sticky-address
rdr on hv1 inet proto udp from any to (hv1) port = domain -> <dns_server_ips> port 53 round-robin sticky-address
rdr on eq0 inet proto tcp from any to (eq0:0) port = 1000 -> 192.168.10.4 port 22
rdr on eq0 inet proto udp from any to (eq0:0) port = 1000 -> 192.168.10.4 port 22
rdr on eq0 inet proto tcp from any to (eq0:0) port = 1001 -> 192.168.11.4 port 22
rdr on eq0 inet proto udp from any to (eq0:0) port = 1001 -> 192.168.11.4 port 22
rdr on eq0 inet proto tcp from any to (eq0:0) port = 1002 -> 192.168.12.4 port 5555
rdr on eq0 inet proto udp from any to (eq0:0) port = 1002 -> 192.168.12.4 port 5555
rdr on eq0 inet proto tcp from any to (eq0:0) port = 6253 -> 192.168.12.4 port 6253
rdr on eq0 inet proto udp from any to (eq0:0) port = 6253 -> 192.168.12.4 port 6253
rdr on eq0 inet proto tcp from any to (eq0:0) port = telnet -> 192.168.10.4 port 23
rdr on eq0 inet proto udp from any to (eq0:0) port = 23 -> 192.168.10.4 port 23
rdr on eq0 inet proto tcp from any to (eq0:0) port = 1003 -> 192.168.11.4 port 23
 
This command lists the pre-defined rules in Comms VM with respect to external network access, ssh/telnet access, etc.
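The rdr rules above can be condensed into an "external port -> guest IP:port" table for quick reference. A sketch using awk, with three of the sample TCP rules copied from the listing above as input; on the Comms VM you would pipe pfctl -s nat into the same awk program:

```shell
#!/bin/sh
# Sample TCP redirect rules copied from the `pfctl -s nat` output above.
rules='rdr on eq0 inet proto tcp from any to (eq0:0) port = 1000 -> 192.168.10.4 port 22
rdr on eq0 inet proto tcp from any to (eq0:0) port = 1001 -> 192.168.11.4 port 22
rdr on eq0 inet proto tcp from any to (eq0:0) port = telnet -> 192.168.10.4 port 23'

table=$(printf '%s\n' "$rules" | awk '/^rdr/ && /proto tcp/ {
  for (i = 1; i <= NF; i++) {
    if ($i == "=")  ext = $(i + 1)   # external port (token after "port =")
    if ($i == "->") gst = $(i + 1)   # guest IP (token after "->")
  }
  printf "%s -> %s:%s\n", ext, gst, $NF  # $NF is the guest port
}')
echo "$table"
```

This prints one mapping per rule, for example 1000 -> 192.168.10.4:22.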
Domain Name Resolution for the Guest VMs
Domain name resolution is handled via the packet filter settings on the Comms VM (the DNS redirect rules on hv0/hv1). No additional configuration is required on the Guest VMs.
SSH/SCP/Telnet Access to the Guest VMs
From the NAT table on Comms, you can identify which port each Guest VM uses for ssh/telnet (the ssh port is 22 and the telnet port is 23). Guest VM IP addresses can be found in the IP Network topic.
In the table above:
rdr on eq0 inet proto tcp from any to (eq0:0) port = 1000 -> 192.168.10.4 port 22
rdr on eq0 inet proto tcp from any to (eq0:0) port = ssh -> 192.168.10.4 port 22
indicate that you can ssh to the Guest VM (192.168.10.4) either on port 1000 of the Comms eq0 IP address, or directly on the standard ssh port of the Comms eq0 IP address.
From Host:
ssh root@<comms_eq0_IP_address>
-or-
ssh -p 1000 root@<comms_eq0_IP_address>
To access the Comms partition on ssh from Host, port 2000 is used.
For example, from the Host PC:
ssh root@<comms_IP> -p 2000
Without -p 2000, ssh from the Host PC to the Comms IP logs you into the Guest VM, because the NAT rule (rdr on eq0 inet proto tcp from any to (eq0:0) port = ssh -> 192.168.10.4 port 22) redirects the default ssh port to the Guest VM.
SCP to Comms also needs to follow the above rule. For example, use port 2000 to scp to and from Comms.
For SCP_UL from Guest VM to QNX Comms:
# scp -P 2000 <file_to_transfer> root@<comms_eq0_IP_address>:
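The port selection logic above can be wrapped in a small helper so scripts do not hard-code port numbers. A hypothetical sketch; the VM names guest1, guest2, and comms are illustrative only, and the port assignments come from the NAT rules shown earlier:

```shell
#!/bin/sh
# Hypothetical helper mapping a VM name to the SSH port forwarded by
# the Comms NAT rules above (names are assumptions for illustration).
ssh_port() {
  case "$1" in
    guest1) echo 1000 ;;  # -> 192.168.10.4 port 22
    guest2) echo 1001 ;;  # -> 192.168.11.4 port 22
    comms)  echo 2000 ;;  # the Comms partition itself
    *)      echo "unknown VM: $1" >&2; return 1 ;;
  esac
}

# Usage from the host, e.g.:
#   ssh -p "$(ssh_port guest1)" root@<comms_eq0_IP_address>
#   scp -P "$(ssh_port comms)" <file_to_transfer> root@<comms_eq0_IP_address>:
```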