Advanced Framework Configuration

Securing a notebook server

The Jupyter notebook web application is based on a server-client structure. This guide describes how you can secure a notebook server.

Important

The following scripts do not take Jupyter notebook security into consideration. To properly secure your Jupyter notebook server, follow the guide linked above.

Startup Scripts for Individual Containers

Startup Scripts for Jupyter

  1. Create a dataset directory to store the datasets used by your Jupyter notebooks.

    mkdir ~/dataset
    
  2. Create a startup script and place it in your home directory.

    vim /home/nvidia/startup.sh
    

RAPIDS Container

Add the following content to the startup.sh script created in the previous section.

#!/bin/bash
docker rm -f $(docker ps -a -q)
docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8888:8888 --name rapids_cont -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvaie/nvidia-rapids:21.08-cuda11.4-ubuntu20.04-py3.8 jupyter-notebook --allow-root  --ip='0.0.0.0'

Note

Replace /home/nvidia with your home path. Do not use $HOME; this script requires the absolute path.
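Before wiring startup.sh into systemd, it can help to syntax-check it. A minimal sketch, using a throwaway stand-in script under /tmp (so nothing in your home directory is touched); the same `bash -n` check works against your real /home/nvidia/startup.sh:

```shell
# Sketch: write a minimal stand-in for startup.sh, then parse it with
# bash -n, which checks the syntax without executing any docker commands.
cat > /tmp/startup_check.sh <<'EOF'
#!/bin/bash
docker rm -f $(docker ps -a -q)
EOF
bash -n /tmp/startup_check.sh && echo "syntax OK"
```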

TensorFlow1 Container

Add the following content to the startup.sh script created in the previous section.

#!/bin/bash
docker rm -f $(docker ps -a -q)
docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8888:8888 -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvaie/tensorflow:21.07-tf1-py3 jupyter-notebook --allow-root  --ip='0.0.0.0'

Note

Replace /home/nvidia with your home path. Do not use $HOME; this script requires the absolute path.

TensorFlow2 Container

Add the following content to the startup.sh script created in the previous section.

#!/bin/bash
docker rm -f $(docker ps -a -q)
docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8888:8888 -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvaie/tensorflow:21.07-tf2-py3 jupyter-notebook --allow-root  --ip='0.0.0.0'

Note

Replace /home/nvidia with your home path. Do not use $HOME; this script requires the absolute path.

PyTorch Container

Add the following content to the startup.sh script created in the previous section.

#!/bin/bash
docker rm -f $(docker ps -a -q)
docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8888:8888 -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvaie/pytorch:21.07-py3 jupyter-notebook --allow-root  --ip='0.0.0.0'

Note

Replace /home/nvidia with your home path. Do not use $HOME; this script requires the absolute path.

Combined Startup Script

The script below automatically starts a Jupyter notebook for each of the NVIDIA AI Enterprise containers on a single VM. In this example, the Jupyter notebooks for PyTorch, TensorFlow1, TensorFlow2, and RAPIDS are started on ports 8888, 8889, 8890, and 8891, respectively.

#!/bin/bash
docker rm -f $(docker ps -a -q)
docker run -d --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8888:8888  -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvaie/pytorch:21.07-py3 jupyter-notebook --allow-root  --NotebookApp.token='' --ip='0.0.0.0' --port 8888
docker run -d --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8889:8889 -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvaie/tensorflow:21.07-tf1-py3 jupyter-notebook --allow-root --NotebookApp.token=''  --ip='0.0.0.0' --port 8889
docker run -d --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8890:8890 -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvaie/tensorflow:21.07-tf2-py3 jupyter-notebook --allow-root  --NotebookApp.token='' --ip='0.0.0.0' --port 8890
docker run -d --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p 8891:8891 --name rapids_cont -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvaie/nvidia-rapids:21.08-cuda11.4-ubuntu20.04-py3.8 jupyter-notebook --allow-root --NotebookApp.token='' --ip='0.0.0.0' --port 8891
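With the combined script in place, each framework maps to one port. A short sketch that prints the notebook URL for each mapping; `localhost` is a placeholder for the VM's IP, and the commented `curl` line shows how you would probe a live VM:

```shell
#!/bin/bash
# Sketch: print the notebook URL for each framework/port pair used by the
# combined startup script above. localhost stands in for the VM's IP.
VM_IP=localhost
frameworks=(pytorch tensorflow1 tensorflow2 rapids)
ports=(8888 8889 8890 8891)
for i in "${!frameworks[@]}"; do
  echo "${frameworks[$i]}: http://${VM_IP}:${ports[$i]}"
done
# On the live VM, probe readiness with (not run here):
#   curl -s -o /dev/null -w '%{http_code}\n' "http://${VM_IP}:8888"
```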

Enabling Startup Script

  1. Make the script executable.

    chmod +x /home/nvidia/startup.sh
    

    Note

    Replace /home/nvidia with your home path. Do not use $HOME; this script requires the absolute path.

  2. Create a systemd service for the startup script.

    sudo vim /etc/systemd/system/jupyter.service
    
  3. Add the following content to the jupyter.service file.

    [Unit]
    Description=Starts Jupyter server

    [Service]
    # Use your home path in place of /home/nvidia
    ExecStart=/home/nvidia/startup.sh

    [Install]
    WantedBy=multi-user.target
    
  4. Start the service now and enable it to start on reboot.

    sudo systemctl start jupyter.service
    
    sudo systemctl enable jupyter.service
    
  5. Reboot the VM.

    Note

    For the Combined Startup Script, you can skip the remaining steps: the combined script disables the notebook token, so the PyTorch, TensorFlow1, TensorFlow2, and RAPIDS Jupyter notebooks are directly accessible at http://VM_IP:8888, http://VM_IP:8889, http://VM_IP:8890, and http://VM_IP:8891, respectively.

  6. To open the Jupyter Notebook you will need the token. The token prevents unauthorized access to the notebook. To retrieve it, view the Jupyter service logs using the command below.

    journalctl -f -u jupyter.service
    
  7. The logs display the full URL of the Jupyter Notebook, including the token.

    Sep 15 16:33:58 triton-inference-server startup.sh[6315]: To access the notebook,
    http://341eed905e2a:8888/?token=0a13f9068c4ea9bb2f1ca5d8ad212a26accc085da896a368
    
  8. As an IT administrator, provide the data scientist with the VM's IP address and the token, using the URL format below.

    http://VM_IP:8888/?token=<token_from_the_logs>
    

    Example:

    http://192.168.100.10:8888/?token=0a13f9068c4ea9bb2f1ca5d8ad212a26accc085da896a368
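If this hand-off is scripted, the token can be pulled out of the log line directly. A sketch against the sample log format from step 7; the hard-coded `line` is not live output, and on a live VM you would pipe `journalctl -u jupyter.service` through the same grep instead:

```shell
# Sketch: extract just the token from a Jupyter log line. The sample line
# mirrors the format shown in step 7.
line='http://341eed905e2a:8888/?token=0a13f9068c4ea9bb2f1ca5d8ad212a26accc085da896a368'
token=$(printf '%s\n' "$line" | grep -oE 'token=[0-9a-f]+' | cut -d= -f2)
echo "http://VM_IP:8888/?token=${token}"
```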
    

Startup Scripts for Triton Inference Server

  1. Create a triton directory inside the VM for the AI Practitioner to host the model.

    mkdir ~/triton
    
  2. Pull the Triton Inference Server container.

    sudo docker pull nvcr.io/nvaie/tritonserver:21.07-py3
    
  3. Create a startup script to run Triton Inference Server automatically on the template clone VM.

    vim ~/startup.sh
    
  4. Add the following content to the startup.sh file.

    #!/bin/bash
    docker rm -f $(docker ps -a -q)
    docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8000:8000 -p8001:8001 -p8002:8002 --name triton_server_cont -v /home/nvidia/triton:/models nvcr.io/nvaie/tritonserver:21.07-py3 tritonserver --model-store=/models --strict-model-config=false --log-verbose=1

    Note

    Replace /home/nvidia with your home path. Do not use $HOME; this script requires the absolute path.
  5. Make the startup script executable.

    chmod +x ~/startup.sh
    
  6. Create a systemd service for the startup script.

    sudo vim /etc/systemd/system/triton.service
    
  7. Add the following content to the triton.service file.

    [Unit]
    Description=Starts Triton server

    [Service]
    # Use your home path in place of /home/nvidia
    ExecStart=/home/nvidia/startup.sh

    [Install]
    WantedBy=multi-user.target
    
  8. Start the service now and enable it to start on reboot.

    sudo systemctl start triton.service
    
    sudo systemctl enable triton.service
    
  9. Reboot the VM.
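After the reboot, Triton's HTTP endpoint on port 8000 can confirm the service came up via its KServe readiness route. A sketch; `localhost` is a placeholder for the VM's IP, and the `curl` probe is commented out since it needs the live server:

```shell
# Sketch: build the readiness URL for Triton's HTTP endpoint (port 8000,
# as mapped in the startup script above). localhost stands in for the VM.
TRITON_HOST=localhost
url="http://${TRITON_HOST}:8000/v2/health/ready"
echo "probe: ${url}"
# On the live VM (not run here):
#   curl -sf "${url}" && echo "Triton is ready"
```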