Advanced Framework Configuration#
Added in version 2.0.
Securing a notebook server#
The Jupyter Notebook web application is based on a server-client structure. The Jupyter project's Securing a notebook server guide describes how you can secure a notebook server.
Important
The following scripts do not take Jupyter Notebook security into consideration. To properly secure your Jupyter Notebook server, use the guide listed above.
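If access protection is desired, one lightweight option is to set a hashed password for the notebook server instead of running it unauthenticated. The commands below are an illustrative sketch only, assuming the classic Jupyter Notebook CLI is available in the container image; they are not part of the startup scripts that follow.
# Generates a password hash and stores it under ~/.jupyter/ (prompts interactively)
jupyter notebook password
# Alternatively, pass a pre-generated hash on the command line instead of an empty token:
# jupyter-notebook --allow-root --ip='0.0.0.0' --NotebookApp.password='<password-hash>'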
Startup Scripts for Individual Containers#
Startup Scripts for Jupyter#
Create a dataset directory to store all the datasets you use with Jupyter notebooks.
mkdir ~/dataset
Create a startup script and place it in the home directory.
vim /home/nvidia/startup.sh
RAPIDS Container#
Add the following contents to the startup.sh script created in the Startup Scripts for Jupyter section.
#!/bin/bash
docker rm -f $(docker ps -a -q)
docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8888:8888 -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvidia/rapidsai/notebooks:<CONTAINER-TAG> jupyter-notebook --allow-root --ip='0.0.0.0'
Note
Replace /home/nvidia with your home path. Do not use $HOME; this script requires the absolute path.
Tip
Example: nvcr.io/nvidia/rapidsai/notebooks:24.08-cuda11.8-py3.9
Container location: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/rapidsai/containers/notebooks
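After the script has run, you can optionally confirm that the RAPIDS notebook server is up and read its access token from the container logs. This check assumes the notebook container is the most recently started container on the system.
# List the running notebook container
docker ps --filter ancestor=nvcr.io/nvidia/rapidsai/notebooks:<CONTAINER-TAG>
# Print the login URL (including the token) from the most recently created container
docker logs $(docker ps -lq) 2>&1 | grep token=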
TensorFlow1 Container#
Add the following contents to the startup.sh script created in the Startup Scripts for Jupyter section.
#!/bin/bash
docker rm -f $(docker ps -a -q)
docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8888:8888 -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvidia/tensorflow:<CONTAINER-TAG> jupyter-notebook --allow-root --ip='0.0.0.0'
Note
Replace /home/nvidia with your home path. Do not use $HOME; this script requires the absolute path.
Tip
Example: nvcr.io/nvidia/tensorflow:23.03-tf1-py3
Container location: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorflow/tags
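As an optional sanity check, you can verify that the container sees the GPUs passed through by --gpus all. The command below assumes the TensorFlow container is the most recently started container on the system.
docker exec $(docker ps -lq) nvidia-smi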
TensorFlow2 Container#
Add the following contents to the startup.sh script created in the Startup Scripts for Jupyter section.
#!/bin/bash
docker rm -f $(docker ps -a -q)
docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8888:8888 -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvidia/tensorflow:<CONTAINER-TAG> jupyter-notebook --allow-root --ip='0.0.0.0'
Note
Replace /home/nvidia with your home path. Do not use $HOME; this script requires the absolute path.
Tip
Example: nvcr.io/nvidia/tensorflow:24.09-tf2-py3
Container location: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorflow/tags
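You can also confirm that TensorFlow inside the container enumerates the GPUs. The command below is a sketch that assumes the TensorFlow2 container is the most recently started container and that python is on its default path.
docker exec $(docker ps -lq) python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"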
PyTorch Container#
Add the following contents to the startup.sh script created in the Startup Scripts for Jupyter section.
#!/bin/bash
docker rm -f $(docker ps -a -q)
docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8888:8888 -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvidia/pytorch:<CONTAINER-TAG> jupyter-notebook --allow-root --ip='0.0.0.0'
Note
Replace /home/nvidia with your home path. Do not use $HOME; this script requires the absolute path.
Tip
Example: nvcr.io/nvidia/pytorch:24.09-py3
Container location: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch/tags
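A quick way to confirm that PyTorch in the container can reach the GPUs, assuming the PyTorch container is the most recently started container on the system:
docker exec $(docker ps -lq) python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"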
Combined Startup Script#
The script below automatically starts a Jupyter notebook server for each of the NVIDIA AI Enterprise containers on a single system. In this example, the Jupyter notebooks for PyTorch, TensorFlow1, TensorFlow2, and RAPIDS are started on ports 8888, 8889, 8890, and 8891, respectively.
#!/bin/bash
docker rm -f $(docker ps -a -q)
docker run -d --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8888:8888 --name pytorch_cont -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvaie/pytorch:<NVAIE-CONTAINER-TAG> jupyter-notebook --allow-root --NotebookApp.token='' --ip='0.0.0.0' --port 8888
docker run -d --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8889:8889 --name tensorflow1_cont -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvaie/tensorflow:<NVAIE-CONTAINER-TAG> jupyter-notebook --allow-root --NotebookApp.token='' --ip='0.0.0.0' --port 8889
docker run -d --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8890:8890 --name tensorflow2_cont -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvaie/tensorflow:<NVAIE-CONTAINER-TAG> jupyter-notebook --allow-root --NotebookApp.token='' --ip='0.0.0.0' --port 8890
docker run -d --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8891:8891 --name rapids_cont -v /home/nvidia/dataset:/workspace/dataset nvcr.io/nvaie/nvidia-rapids-:<NVAIE-CONTAINER-TAG> jupyter-notebook --allow-root --NotebookApp.token='' --ip='0.0.0.0' --port 8891
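Once the combined script has run, you can confirm that all four notebook containers are up and listening on their expected ports. curl is assumed to be installed on the host.
# Show container names, port mappings, and status
docker ps --format 'table {{.Names}}\t{{.Ports}}\t{{.Status}}'
# Probe each notebook port; an HTTP response indicates the server is answering
for port in 8888 8889 8890 8891; do curl -s -o /dev/null -w "port ${port}: %{http_code}\n" http://localhost:${port}; done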
Enabling the Startup Script#
Give execution privileges to the script.
chmod +x /home/nvidia/startup.sh
Note
Replace /home/nvidia with your home path. Do not use $HOME; this script requires the absolute path.
Create a systemd process for the startup script.
sudo vim /etc/systemd/system/jupyter.service
Add the following content to the jupyter.service file.
[Unit]
Description=Starts Jupyter server

[Service]
# Use your home path in ExecStart
ExecStart=/home/nvidia/startup.sh

[Install]
WantedBy=multi-user.target
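If systemd was already running when the unit file was created, reload its unit definitions before starting the service.
sudo systemctl daemon-reload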
Start the service and enable it to run on reboot.
sudo systemctl start jupyter.service
sudo systemctl enable jupyter.service
Reboot the system.
Note
If you used the Combined Startup Script, you can skip the next step and directly access the PyTorch, TensorFlow1, TensorFlow2, and RAPIDS Jupyter notebooks at http://system_IP:8888, http://system_IP:8889, http://system_IP:8890, and http://system_IP:8891, respectively.
To open a Jupyter notebook you will need its token or password; this prevents unauthorized access to the notebook. To retrieve the token, check the Jupyter service logs using the command below.
journalctl -f -u jupyter.service
The logs will display the full URL of the Jupyter Notebook including the token.
Sep 15 16:33:58 triton-inference-server startup.sh[6315]: To access the notebook, http://341eed905e2a:8888/?token=0a13f9068c4ea9bb2f1ca5d8ad212a26accc085da896a368
As an IT Administrator, you need to provide the data scientist with the system IP and the token, in the form shown below.
http://system_IP:8888/?token=<token_from_the_logs>
Example:
http://192.168.100.10:8888/?token=0a13f9068c4ea9bb2f1ca5d8ad212a26accc085da896a368
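If you only need the token itself, a one-liner such as the following can extract it from the service logs. This is a convenience sketch; the grep pattern assumes the default hexadecimal token format.
journalctl -u jupyter.service | grep -oE 'token=[0-9a-f]+' | tail -n 1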
Startup Scripts for Triton Inference Server#
Create a triton directory on the system for the AI Practitioner to host the model.
mkdir ~/triton
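Triton Inference Server expects the mounted directory to follow its model repository layout: one subdirectory per model, each containing numbered version subdirectories. The commands below are purely illustrative; my_model and model.onnx are hypothetical names that depend on your model and backend.
# Illustrative model repository skeleton (hypothetical model name and file)
mkdir -p ~/triton/my_model/1
cp /path/to/model.onnx ~/triton/my_model/1/model.onnx
# config.pbtxt is optional here because the startup script below passes --strict-model-config=false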
Pull the latest Triton Inference Server container.
sudo docker pull nvcr.io/nvaie/tritonserver-<NVAIE-MAJOR-VERSION>:<NVAIE-CONTAINER-TAG>
Create a startup script to run Triton Inference Server automatically on the system.
vim ~/startup.sh
Add the following content to the startup.sh file.
#!/bin/bash
docker rm -f $(docker ps -a -q)
docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8000:8000 -p8001:8001 -p8002:8002 --name triton_server_cont -v /home/nvidia/triton:/models nvcr.io/nvaie/tritonserver-<NVAIE-MAJOR-VERSION>:<NVAIE-CONTAINER-TAG> tritonserver --model-store=/models --strict-model-config=false --log-verbose=1
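Before wiring the script into systemd, you can optionally start the container once by hand (detached) to confirm that the server loads the models from the mounted repository. This is only a convenience check; the container name matches the one used in the script.
sudo docker run -d --gpus all --shm-size=1g -p8000:8000 --name triton_server_cont -v /home/nvidia/triton:/models nvcr.io/nvaie/tritonserver-<NVAIE-MAJOR-VERSION>:<NVAIE-CONTAINER-TAG> tritonserver --model-store=/models
# Watch the model load status (press Ctrl+C to stop following), then clean up before enabling the systemd service
sudo docker logs -f triton_server_cont
sudo docker rm -f triton_server_cont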
Make the startup script executable.
chmod +x ~/startup.sh
Create a systemd process for the startup script.
sudo vim /etc/systemd/system/triton.service
Add the following content to the triton.service file.
[Unit]
Description=Starts Triton server

[Service]
ExecStart=/home/nvidia/startup.sh

[Install]
WantedBy=multi-user.target
Start the service and enable it to run on reboot.
sudo systemctl start triton.service
sudo systemctl enable triton.service
Reboot the system.
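After the reboot, you can confirm that the service came up and that the server reports ready over its HTTP endpoint (port 8000 in the script above). curl is assumed to be installed on the host.
sudo systemctl status triton.service
# A 200 response indicates the server is ready to serve inference requests
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/v2/health/ready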