Step #2: Deploy Sample Applications
As the NVIDIA LaunchPad system is already configured with NVIDIA Cloud Native Stack, you can run sample GPU-accelerated applications on it. For more information, please refer to the Validate Sample Application with NVIDIA Cloud Native Stack documentation.
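Before deploying the samples, you can optionally confirm that the cluster advertises GPU resources. This is a minimal sketch, assuming the NVIDIA GPU Operator (installed as part of Cloud Native Stack) exposes the nvidia.com/gpu resource on the node; the reported count depends on your LaunchPad instance.

# Show the nvidia.com/gpu capacity and allocatable entries on the node
kubectl describe nodes | grep -i nvidia.com/gpu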
Run the below command on the NVIDIA LaunchPad system to verify that nvidia-smi works on Kubernetes.
cat <<EOF | tee nvidia-smi.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nvidia-smi
spec:
  restartPolicy: OnFailure
  containers:
    - name: nvidia-smi
      image: "nvidia/cuda:11.8.0-base-ubuntu20.04"
      args: ["nvidia-smi"]
EOF
Execute the below command to create the nvidia-smi pod.
kubectl apply -f nvidia-smi.yaml
Once the pod has completed, execute the below command to see the result.
kubectl logs pod/nvidia-smi
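If the logs are empty, the pod may still be pulling the CUDA image or running. As a quick optional check (not part of the original flow), you can inspect the pod status and wait for it to report Completed before reading the logs.

# Check the status of the nvidia-smi pod created above
kubectl get pod nvidia-smi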
Expected Output:
Wed Oct 27 17:17:10 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.61.07    Driver Version: 520.61.07    CUDA Version: 11.8     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:14:00.0 Off |                  Off |
| N/A   47C    P8    16W /  70W |      0MiB / 16127MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
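The nvidia-smi pod is a one-shot job, so once you have verified the output you can remove it. This cleanup step is optional and not part of the original validation flow.

# Optionally delete the completed nvidia-smi pod
kubectl delete -f nvidia-smi.yaml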
Create a cuda-samples.yaml with the help of the below command.
cat <<EOF | tee cuda-samples.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-vector-add
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
EOF
Execute the below command to create the sample pod.
kubectl apply -f cuda-samples.yaml
Once the pod run completes, execute the below command to see the result.
kubectl logs pod/cuda-vector-add
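If the logs are not available yet, the pod may still be pulling the image or running; you can confirm its status first and, as with the nvidia-smi pod, optionally delete it once you have seen the result. Both steps below are optional additions, not part of the original flow.

# Confirm the cuda-vector-add pod has finished, then optionally clean it up
kubectl get pod cuda-vector-add
kubectl delete -f cuda-samples.yaml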