Customizing NVIDIA GPU Driver Parameters during Installation#

The NVIDIA Driver kernel modules accept a number of parameters that can be used to customize the behavior of the driver. By default, the GPU Operator loads the kernel modules with their default values. On a machine with the driver already installed, you can list the current parameter names and values with the cat /proc/driver/nvidia/params command. You can pass custom parameters to the kernel modules that are loaded as part of the NVIDIA Driver installation (nvidia, nvidia-modeset, nvidia-uvm, and nvidia-peermem).
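
For reference, the following command shows how to inspect the current values on a node where the driver is already loaded. The output is an illustrative excerpt; the exact set of parameters varies by driver version:

    $ cat /proc/driver/nvidia/params

    Example Output

    ResmanDebugLevel: 4294967295
    RmLogonRC: 1
    ModifyDeviceFiles: 1
    ...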

Configure Custom Driver Parameters#

To pass custom parameters, execute the following steps.

  1. Create a configuration file named <module>.conf, where <module> is the name of the kernel module that the parameters apply to. The file must contain the parameters as key-value pairs, one parameter per line.

    The following example shows the GPU firmware logging parameter being passed to the nvidia module.

    $ cat nvidia.conf
    NVreg_EnableGpuFirmwareLogs=2
    
  2. Create a ConfigMap for the configuration file. If multiple modules are being configured, pass multiple files when creating the ConfigMap. You can verify the ConfigMap contents as shown after these steps.

    $ kubectl create configmap kernel-module-params -n gpu-operator --from-file=nvidia.conf=./nvidia.conf
    
  3. Install the GPU Operator and set driver.kernelModuleConfig.name to the name of the ConfigMap containing the kernel module parameters. If the GPU Operator is already installed, see the helm upgrade sketch after these steps.

    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
       --version=v25.3.0 \
       --set driver.kernelModuleConfig.name="kernel-module-params"
    
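
Before installing, you can confirm that the ConfigMap contains the expected file and parameters. The following output is trimmed for brevity:

    $ kubectl get configmap kernel-module-params -n gpu-operator -o yaml

    Example Output

    apiVersion: v1
    data:
      nvidia.conf: |
        NVreg_EnableGpuFirmwareLogs=2
    kind: ConfigMap
    metadata:
      name: kernel-module-params
      namespace: gpu-operator
      ...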
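
If the GPU Operator is already installed, you can apply the same setting with helm upgrade instead of a fresh install. The following is a minimal sketch: the release name gpu-operator-1719361263 is a placeholder (find yours with helm list -n gpu-operator), and --reuse-values keeps your existing chart values while merging in the new one:

    $ helm upgrade --wait gpu-operator-1719361263 \
       -n gpu-operator \
       nvidia/gpu-operator \
       --version=v25.3.0 \
       --reuse-values \
       --set driver.kernelModuleConfig.name="kernel-module-params"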

Example Using the nvidia-uvm Module#

This example shows Heterogeneous Memory Management (HMM) being disabled in the nvidia-uvm module.

  1. Create a configuration file named nvidia-uvm.conf:

    $ cat nvidia-uvm.conf
    uvm_disable_hmm=1
    
  2. Create a ConfigMap for the configuration file. If multiple modules are being configured, pass multiple files when creating the ConfigMap.

    $ kubectl create configmap kernel-module-params -n gpu-operator --from-file=nvidia-uvm.conf=./nvidia-uvm.conf
    
  3. Install the GPU Operator and set driver.kernelModuleConfig.name to the name of the ConfigMap containing the kernel module parameters.

    $ helm install --wait --generate-name \
       -n gpu-operator --create-namespace \
       nvidia/gpu-operator \
       --version=v25.3.0 \
       --set driver.kernelModuleConfig.name="kernel-module-params"
    
  4. To verify that the parameter has been applied, list the contents of /sys/module/nvidia_uvm/parameters/ on the node:

    $ ls /sys/module/nvidia_uvm/parameters/
    

    Example Output

    ...
    uvm_disable_hmm                               uvm_perf_access_counter_migration_enable  uvm_perf_prefetch_min_faults
    uvm_downgrade_force_membar_sys                uvm_perf_access_counter_threshold         uvm_perf_prefetch_threshold
    ...
    

    Then check the value of the parameter:

    $ cat /sys/module/nvidia_uvm/parameters/uvm_disable_hmm
    

    Example Output

    Y
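
    If you do not have direct shell access to the node, you can run the same check from the driver container. This is a minimal sketch that assumes the driver daemonset created by the GPU Operator is named nvidia-driver-daemonset; adjust the name to match your cluster:

    $ kubectl exec -n gpu-operator ds/nvidia-driver-daemonset -- cat /sys/module/nvidia_uvm/parameters/uvm_disable_hmm

    Example Output

    Y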