Advanced Framework Configuration#

Securing a notebook server#

The Jupyter notebook web application is based on a server-client structure. This document describes how you can secure a notebook server.

Important

The following scripts do not take Jupyter Notebook security into consideration. To properly secure your Jupyter Notebook, use the guide listed above.
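The scripts below either rely on Jupyter's automatically generated token or, in the combined script, disable the token entirely. As a minimal hardening sketch (not a replacement for the guide above), you can generate a long random token on the host and pass it explicitly to the jupyter-notebook command used in the startup scripts; the token value shown is a placeholder you would substitute yourself.

# Generate a random token value on the host (any secure generator works)
openssl rand -hex 24

# Pass the generated value to the notebook command instead of leaving the token empty
jupyter-notebook --allow-root --ip='0.0.0.0' --NotebookApp.token='<generated-token>'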

Startup Scripts for Individual Containers#

Startup Scripts for Jupyter#

  1. Create a dataset directory to store all of the datasets used by your Jupyter notebooks.

    mkdir ~/dataset
    
  2. Create a startup script and place it in the home directory.

    vim /home/nvidia/startup.sh
    

RAPIDS Container#

Add the following contents to the startup.sh script created in the previous section.

#!/bin/bash
docker rm -f $(docker ps -a -q)
docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8888:8888 -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvidia/rapidsai/notebooks:<CONTAINER-TAG> jupyter-notebook --allow-root --ip='0.0.0.0'

Note

Replace /home/nvidia with your home path. Do not use $HOME; this script requires the absolute path.

Tip

Example: nvcr.io/nvidia/rapidsai/notebooks:24.08-cuda11.8-py3.9
Container location: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/rapidsai/containers/notebooks
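Because the startup script runs at boot, you may optionally pull the image once in advance so the first boot does not block on a large download. A minimal sketch using the example tag from the tip above:

sudo docker pull nvcr.io/nvidia/rapidsai/notebooks:24.08-cuda11.8-py3.9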

TensorFlow1 Container#

Add the following contents to the startup.sh script created in the previous section.

#!/bin/bash
docker rm -f $(docker ps -a -q)
docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8888:8888 -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvidia/tensorflow:<CONTAINER-TAG> jupyter-notebook --allow-root --ip='0.0.0.0'

Note

Replace /home/nvidia with your home path. Do not use $HOME; this script requires the absolute path.

Tip

Example: nvcr.io/nvidia/tensorflow:23.03-tf1-py3
Container location: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorflow/tags
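Optionally, you can sanity-check that the container sees the GPU before wiring it into the startup script. The sketch below uses the example tag from the tip above and the TensorFlow 1.x API:

sudo docker run --rm --gpus all nvcr.io/nvidia/tensorflow:23.03-tf1-py3 python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"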

TensorFlow2 Container#

Add the following contents to the startup.sh script created in the previous section.

#!/bin/bash
docker rm -f $(docker ps -a -q)
docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8888:8888 -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvidia/tensorflow:<CONTAINER-TAG> jupyter-notebook --allow-root --ip='0.0.0.0'

Note

Replace /home/nvidia with your home path. Do not use $HOME; this script requires the absolute path.

Tip

Example: nvcr.io/nvidia/tensorflow:24.09-tf2-py3
Container location: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorflow/tags
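A similar optional GPU check for the TensorFlow 2 container, using the TF2 device API and the example tag from the tip above:

sudo docker run --rm --gpus all nvcr.io/nvidia/tensorflow:24.09-tf2-py3 python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"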

PyTorch Container#

Add the following contents to the startup.sh script created in the previous section.

#!/bin/bash
docker rm -f $(docker ps -a -q)
docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8888:8888 -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvidia/pytorch:<CONTAINER-TAG> jupyter-notebook --allow-root --ip='0.0.0.0'

Note

Replace /home/nvidia with your home path. Do not use $HOME; this script requires the absolute path.

Tip

Example: nvcr.io/nvidia/pytorch:24.09-py3
Container location: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch/tags
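The equivalent optional check for the PyTorch container, using the example tag from the tip above:

sudo docker run --rm --gpus all nvcr.io/nvidia/pytorch:24.09-py3 python -c "import torch; print(torch.cuda.is_available())"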

Combined Startup Script#

The script below automatically starts Jupyter notebooks for all of the NVIDIA AI Enterprise containers together on a single VM. In this example, the Jupyter notebooks for PyTorch, TensorFlow1, TensorFlow2, and RAPIDS are started on ports 8888, 8889, 8890, and 8891, respectively.

#!/bin/bash
docker rm -f $(docker ps -a -q)
docker run -d --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8888:8888 --name pytorch_cont -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvaie/pytorch:<NVAIE-CONTAINER-TAG> jupyter-notebook --allow-root --NotebookApp.token='' --ip='0.0.0.0' --port 8888
docker run -d --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8889:8889 --name tensorflow1_cont -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvaie/tensorflow:<NVAIE-CONTAINER-TAG> jupyter-notebook --allow-root --NotebookApp.token='' --ip='0.0.0.0' --port 8889
docker run -d --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8890:8890 --name tensorflow2_cont -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvaie/tensorflow:<NVAIE-CONTAINER-TAG> jupyter-notebook --allow-root --NotebookApp.token='' --ip='0.0.0.0' --port 8890
docker run -d --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8891:8891 --name rapids_cont -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvaie/nvidia-rapids-:<NVAIE-CONTAINER-TAG> jupyter-notebook --allow-root --NotebookApp.token='' --ip='0.0.0.0' --port 8891
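After the combined script runs, you can optionally confirm that all four containers are up and mapped to their ports. This check is not part of the script itself:

sudo docker ps --filter name=pytorch_cont --filter name=tensorflow1_cont --filter name=tensorflow2_cont --filter name=rapids_cont --format 'table {{.Names}}\t{{.Ports}}\t{{.Status}}'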

Enabling Startup Script#

  1. Give execution privileges to the script.

    chmod +x /home/nvidia/startup.sh
    

    Note

    Replace /home/nvidia with your home path. Do not use $HOME; this script requires the absolute path.

  2. Create a systemd service for the startup script.

    sudo vim /etc/systemd/system/jupyter.service
    
  3. Add the following content to the jupyter.service file.

    [Unit]
    Description=Starts Jupyter server

    [Service]
    # Replace /home/nvidia with your home path
    ExecStart=/home/nvidia/startup.sh

    [Install]
    WantedBy=multi-user.target
    
  4. Start the service and enable it to start automatically on reboot.

    sudo systemctl start jupyter.service
    
    sudo systemctl enable jupyter.service
    
  5. Reboot the VM.

    Note

    For the Combined Startup Script, the token is disabled, so you can skip the next step and directly access the PyTorch, TensorFlow1, TensorFlow2, and RAPIDS Jupyter notebooks at http://VM_IP:8888, http://VM_IP:8889, http://VM_IP:8890, and http://VM_IP:8891, respectively.

  6. To open the Jupyter Notebook, you will need the token/password, which prevents unauthorized access. To retrieve the token, look at the Jupyter service logs using the command below.

    journalctl -f -u jupyter.service
    
  7. The logs will display the full URL of the Jupyter Notebook, including the token. If no URL appears, see the service status check after these steps.

    Sep 15 16:33:58 triton-inference-server startup.sh[6315]: To access the notebook,
    http://341eed905e2a:8888/?token=0a13f9068c4ea9bb2f1ca5d8ad212a26accc085da896a368
    
  8. As an IT Administrator, you need to provide the data scientist with the IP of the VM and the token, in the format below.

    http://VM_IP:8888/?token=<token_from_the_logs>
    

    Example:

    http://192.168.100.10:8888/?token=0a13f9068c4ea9bb2f1ca5d8ad212a26accc085da896a368
    

Startup Scripts for Triton Inference Server#

  1. Create a triton directory inside the VM for the AI Practitioner to host the model.

    mkdir ~/triton
    
  2. Pull the latest Triton Inference Server container.

    sudo docker pull nvcr.io/nvidia/tritonserver:<CONTAINER-TAG>
    
  3. Create a startup script to run Triton Inference Server automatically on the Template Clone VM.

    vim ~/startup.sh
    
  4. Add the following content to the startup.sh file.

    #!/bin/bash
    docker rm -f $(docker ps -a -q)
    docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8000:8000 -p8001:8001 -p8002:8002 --name triton_server_cont -v /home/nvidia/triton:/models nvcr.io/nvidia/tritonserver:<CONTAINER-TAG> tritonserver --model-store=/models --strict-model-config=false --log-verbose=1
    
  5. Make the startup script executable.

    chmod +x ~/startup.sh
    
  6. Create a systemd service for the startup script.

    sudo vim /etc/systemd/system/triton.service
    
  7. Add the following content to the triton.service file.

    [Unit]
    Description=Starts Triton server

    [Service]
    ExecStart=/home/nvidia/startup.sh

    [Install]
    WantedBy=multi-user.target
    
  8. Start the service and enable it to start automatically on reboot.

    sudo systemctl start triton.service
    
    sudo systemctl enable triton.service
    
  9. Reboot the VM.
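After the VM reboots, you can optionally verify that the service started and that Triton is ready to serve. The commands below are standard systemd and Triton HTTP health checks; they assume the model repository in /home/nvidia/triton has been populated, and localhost can be replaced with the VM IP when checking remotely.

sudo systemctl status triton.service --no-pager
# Returns HTTP 200 once the server and its loaded models are ready
curl -v http://localhost:8000/v2/health/ready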