Step #6: Running Sample GPU Workloads
Navigate to Workloads > Pods.
Click Create Pod.
Paste in the following YAML.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dcgmproftester
  namespace: nvidia-gpu-operator
spec:
  restartPolicy: OnFailure
  containers:
  - name: dcgmproftester11
    image: nvidia/samples:dcgmproftester-2.0.10-cuda11.0-ubuntu18.04
    args: ["--no-dcgm-validation", "-t 1004", "-d 30"]
    resources:
      limits:
        nvidia.com/gpu: 1
    securityContext:
      capabilities:
        add: ["SYS_ADMIN"]
```
The pod will be scheduled onto one of the GPU-enabled nodes, which pulls the image and starts the container.
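If you prefer the command line, you can watch the pod start and follow the workload's output with `oc`. This is a sketch that assumes the pod name and namespace from the YAML above and that your `oc` session is logged in to the cluster:

```shell
# Watch pods in the operator namespace until dcgmproftester reaches Running
oc get pods -n nvidia-gpu-operator -w

# Once it is running, follow the load generator's output
oc logs -f dcgmproftester -n nvidia-gpu-operator
```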
Once the pod is running, navigate to your custom Grafana instance and open the NVIDIA DCGM Exporter Dashboard. Watch the GPU temperature, power draw, and utilization rise while the workload runs.
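As a cross-check alongside the dashboard, you can query the DCGM exporter's Prometheus endpoint directly. This is a sketch that assumes the exporter runs in the same namespace, carries the label `app=nvidia-dcgm-exporter` (the default in typical GPU Operator deployments), and serves metrics on its default port 9400; substitute the actual pod name for the placeholder:

```shell
# List the dcgm-exporter pod(s); the label selector is an assumption
oc get pods -n nvidia-gpu-operator -l app=nvidia-dcgm-exporter

# Scrape the metrics endpoint and filter for the GPU utilization metric
oc exec -n nvidia-gpu-operator <dcgm-exporter-pod> -- \
  curl -s localhost:9400/metrics | grep DCGM_FI_DEV_GPU_UTIL
```

The `DCGM_FI_DEV_GPU_UTIL` value should climb while dcgmproftester is generating load and drop back once the run completes.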