NVIDIA Cloud Native Stack (Latest Version)

Step #2: Deploy Sample Applications

As the NVIDIA LaunchPad system is already configured with NVIDIA Cloud Native Stack, you can run GPU-accelerated sample applications on it. For more information, refer to Validate Sample Application with NVIDIA Cloud Native Stack.
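
Before deploying the samples, you can optionally confirm that the cluster advertises GPU resources to Kubernetes. The check below is a minimal sketch, assuming the GPU Operator or device plugin installed by Cloud Native Stack exposes nvidia.com/gpu on the worker nodes:

kubectl describe nodes | grep -i "nvidia.com/gpu"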

Run the below command on the NVIDIA LaunchPad system to create an nvidia-smi.yaml pod specification that is used to verify nvidia-smi on Kubernetes.

cat <<EOF | tee nvidia-smi.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nvidia-smi
spec:
  restartPolicy: OnFailure
  containers:
    - name: nvidia-smi
      image: "nvidia/cuda:11.8.0-base-ubuntu20.04"
      args: ["nvidia-smi"]
EOF
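
The manifest above does not request a GPU explicitly; on Cloud Native Stack the NVIDIA container runtime typically makes the GPUs visible to the CUDA base image anyway. If you want the Kubernetes scheduler to account for the GPU, you would normally add an explicit resource request. A hedged variant is sketched below; the file name nvidia-smi-explicit.yaml is illustrative and not part of the original procedure:

cat <<EOF | tee nvidia-smi-explicit.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nvidia-smi
spec:
  restartPolicy: OnFailure
  containers:
    - name: nvidia-smi
      image: "nvidia/cuda:11.8.0-base-ubuntu20.04"
      args: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1   # explicit GPU request handled by the NVIDIA device plugin
EOF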

  1. Execute the below command to create an nvidia-smi pod.

    kubectl apply -f nvidia-smi.yaml


  2. Execute the below command to see the result.

    kubectl logs pod/nvidia-smi


Expected Output:

Wed Oct 27 17:17:10 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.61.07    Driver Version: 520.61.07    CUDA Version: 11.8     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:14:00.0 Off |                  Off |
| N/A   47C    P8    16W /  70W |      0MiB / 16127MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
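
If kubectl logs returns nothing, the pod may not have completed yet. A minimal sketch for watching the pod and cleaning it up afterwards, using standard kubectl commands:

# Watch the pod until its STATUS column shows Completed (Ctrl+C to stop watching).
kubectl get pod nvidia-smi --watch

# Remove the pod once you have reviewed the output.
kubectl delete -f nvidia-smi.yaml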

  1. Create a cuda-samples.yaml file with the help of the below command.

cat <<EOF | tee cuda-samples.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-vector-add
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
EOF


  2. Execute the below command to create a sample pod.

    kubectl apply -f cuda-samples.yaml


  3. Once the container has run to completion, execute the below command to see the result (a sketch for waiting on the pod and cleaning up follows this list).

    kubectl logs pod/cuda-vector-add
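
The cuda-vector-add pod runs a short CUDA vectorAdd test and then exits, so the logs are only available once the pod has finished. The sketch below waits for completion and cleans up both sample pods; kubectl wait with --for=jsonpath requires a reasonably recent kubectl, so kubectl get --watch is noted as a fallback:

# Wait for the sample pod to finish (or use: kubectl get pod cuda-vector-add --watch).
kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/cuda-vector-add --timeout=120s

# Clean up the sample pods once you have reviewed the logs.
kubectl delete -f cuda-samples.yaml
kubectl delete -f nvidia-smi.yaml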


© Copyright 2022-2023, NVIDIA. Last updated on Jan 23, 2023.