Optimize AI and Data Science Workloads (VMware Tanzu) (Latest Version)

Step #6: Running Sample GPU Workloads

  1. Navigate to Workloads > Pods.

  2. Click Create Pod.

  3. Paste in the following YAML.


    apiVersion: v1
    kind: Pod
    metadata:
      name: dcgmproftester
      namespace: nvidia-gpu-operator
    spec:
      restartPolicy: OnFailure
      containers:
      - name: dcgmproftester11
        image: nvidia/samples:dcgmproftester-2.0.10-cuda11.0-ubuntu18.04
        args: ["--no-dcgm-validation", "-t 1004", "-d 30"]
        resources:
          limits:
            nvidia.com/gpu: 1
        securityContext:
          capabilities:
            add: ["SYS_ADMIN"]


  4. Click Create.

  5. The pod will be scheduled on one of the GPU-enabled nodes, which then pulls the container image.

  6. Once the pod is running, navigate to your custom Grafana instance and open the NVIDIA DCGM Exporter Dashboard. Observe the GPU temperature, power draw, and utilization increase while the workload runs.
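If you prefer the command line to the Tanzu UI, the same pod can be created and monitored with kubectl. This is a sketch that assumes your kubeconfig points at the workload cluster and that the manifest above has been saved to a file named dcgmproftester.yaml (a hypothetical filename):

```shell
# Create the pod from the manifest above (filename is an assumption).
kubectl apply -f dcgmproftester.yaml

# Watch the pod until it is scheduled onto a GPU-enabled node and
# reaches the Running state (Ctrl-C to stop watching).
kubectl get pod dcgmproftester -n nvidia-gpu-operator -w

# Stream the load generator's output; per the manifest args,
# -t 1004 selects the profiling test to run and -d 30 runs it
# for 30 seconds.
kubectl logs -f dcgmproftester -n nvidia-gpu-operator
```

Because the pod uses restartPolicy: OnFailure, it will show a Completed status once the 30-second test finishes rather than restarting.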

© Copyright 2022-2023, NVIDIA. Last updated on Jan 10, 2023.