CloudXR Experimental Server Sample

The NVIDIA CloudXR SDK includes a new server sample application that uses an experimental CloudXR direct interface to run without SteamVR. It is currently supplied only as a prebuilt binary. It supports both AR and VR clients, displaying a 3D angel model and allowing some basic manipulations to help exercise client controller input. The ability to exercise a client application to a high degree without needing SteamVR is extremely helpful for initial client development, as well as for general ongoing sanity testing.

CloudXR Server Sample Prerequisites


  • PC or laptop with an NVIDIA GPU, including Quadro GPUs, based on the NVIDIA Pascal™ GPU architecture or later.

    • Running Windows 10 or later

    • NVIDIA driver 473.11 or later (Windows)

    • Vulkan runtime >= v1.3.216.0 (provided by your GPU driver)


    The server sample is currently supported only on Windows. Linux support will come in a subsequent release.

Installing the CloudXR Server Sample

While the experimental CloudXR Server Sample is a standalone application, it is currently installed by the SDK installer, alongside the SteamVR server driver.

Running the CloudXR Server Sample

  1. Navigate to the default CloudXR driver install location C:\Program Files\NVIDIA Corporation\CloudXR\VRDriver\CloudXRRemoteHMD\bin\win64 in Explorer or terminal.

  2. Double-click CloudXRServerSample.exe from Windows Explorer, or from a terminal just type CloudXRServerSample.exe. If you want to run with options, such as -w to show the debug rendering window, you need to use a command prompt or provide a launch options file. See Command-Line Options for a detailed overview, including the Windows section, which details the different methods of providing a raw Windows command line versus using a launch options file.


    If you use a launch options file, it is applied whether you launch from a terminal or by double-clicking the app. The options file is loaded first, and any raw command line is processed second, allowing the command line to override settings in the options file.
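The precedence described above can be sketched as a simple two-pass merge. This is an illustrative sketch, not SDK code; MergeOptions is a hypothetical helper, and it assumes options arrive as simple "-flag value" pairs.

```cpp
#include <map>
#include <sstream>
#include <string>

// Hypothetical sketch of the described precedence: tokens from the launch
// options file are applied first, then the raw command line, so a flag given
// on the command line overwrites the same flag from the options file.
std::map<std::string, std::string> MergeOptions(const std::string& fileOpts,
                                                const std::string& cmdLine)
{
    std::map<std::string, std::string> merged;
    auto apply = [&merged](const std::string& text) {
        std::istringstream stream(text);
        std::string flag, value;
        while (stream >> flag >> value)  // assumes simple "-flag value" pairs
            merged[flag] = value;        // later source overwrites earlier
    };
    apply(fileOpts);  // options file first
    apply(cmdLine);   // raw command line second, overriding duplicates
    return merged;
}
```

With this shape, a server address in the options file is silently replaced when the same flag is also passed on the command line, while file-only flags survive untouched.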

Connecting to the CloudXR Server Sample

The CloudXR 4.0 samples should all be able to connect to the Server Sample, and the server should support most of the client launch options that apply to server rendering and functionality. It understands VR vs AR clients, and will serve the proper streams, rendering an angel statue model.


At the core, you just point the client to the server IP address, the same as you would for SteamVR. For some clients there is UI to input the IP address, for others it needs to be provided on command-line or in a launch options file. Again, see Command-Line Options for more details on all the methods to provide options, and tables of options currently available.

The internal action list supported includes:

  • /model/rotate

  • /model/scale

  • /model/move/x and /model/move/z

  • /model/color/red, /model/color/green, /model/color/blue, /model/color/alpha, and /model/color/white

  • /test/LDAT

The current Server Sample loads a hardcoded profile that maps left- and right-hand client controller inputs to one of the above actions. In an effort to support the many controllers on the market, it maps multiple inputs to the same action, such as trackpad vs. joystick. The following is the latest set of bindings as of this documentation update:

std::map<std::string, std::string> genericProfile[2] =
{
   { // ##### LEFT HAND
      {"/input/application_menu/click", "/test/LDAT"},
      {"/input/trigger/click", "/model/color/alpha"},
      {"/input/trigger/value", "/model/color/alpha"},
      {"/input/grip/click", "/model/color/red"},
      {"/input/grip/value", "/model/color/red"},

      {"/input/trackpad/x", "/model/rotate"},
      {"/input/thumbstick/x", "/model/rotate"},
      {"/input/joystick/x", "/model/rotate"},

      {"/input/a/click", "/model/color/red"},
      {"/input/b/click", "/model/color/blue"},
      {"/input/x/click", "/model/color/red"},
      {"/input/y/click", "/model/color/blue"},
   },
   { // ##### RIGHT HAND
      {"/input/trigger/click", "/model/scale"},
      {"/input/trigger/value", "/model/scale"},
      {"/input/grip/click", "/model/color/green"},
      {"/input/grip/value", "/model/color/green"},

      {"/input/trackpad/x", "/model/move/x"},
      {"/input/trackpad/y", "/model/move/z"},
      {"/input/thumbstick/x", "/model/move/x"},
      {"/input/thumbstick/y", "/model/move/z"},
      {"/input/joystick/x", "/model/move/x"},
      {"/input/joystick/y", "/model/move/z"},

      {"/input/a/click", "/model/color/green"},
      {"/input/b/click", "/model/color/white"},
      {"/input/x/click", "/model/color/green"},
      {"/input/y/click", "/model/color/white"},
   }
};
A few notes on differences between clients at this time. First, the AR clients do not apply an initial scale to the scene, so the model is huge, and you may need to take a step back to see it. The iOS sample has a scale slider, and in testing a scale of around 0.2 fits the screen. The ARKit client has scale and rotation mapped to sliders, but the ARCore client has no interface at the moment and thus cannot manipulate the scene other than by moving around it.

For VR clients, the above ‘profile’ should be largely self-describing as to which inputs you can generate on the client and which action each triggers on the server. Scale and rotate are mapped to analog inputs, with scale on the right trigger and rotate on the left stick/trackpad. The move actions are mapped to the right stick/trackpad and translate the model along the X and Z axes up to a multiplier limit. The remaining inputs affect the color multiplier on the model, and should generally ‘mix’ live if you use multiple inputs at the same time.
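The "translate up to a multiplier limit" behavior can be sketched as clamping an accumulated offset. This is an illustrative sketch, not SDK code: ApplyMoveAxis, kMoveLimit, and kMoveSpeed are assumed names and values, not taken from the server sample.

```cpp
#include <algorithm>

// Assumed constants for illustration only (not from the SDK).
constexpr float kMoveLimit = 2.0f;   // cap on the total offset along an axis
constexpr float kMoveSpeed = 0.05f;  // per-frame step at full stick deflection

// Applies one frame of analog deflection (axis in [-1, 1]) to the model's
// offset along one axis, clamped to the multiplier limit.
float ApplyMoveAxis(float offset, float axis)
{
    offset += axis * kMoveSpeed;
    return std::clamp(offset, -kMoveLimit, kMoveLimit);
}
```

Under this scheme, holding the stick moves the model steadily until the offset reaches the limit, after which further input has no effect until the stick is reversed.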

Note that if you look at the code for the Windows client sample, the windowed-mode code implements ‘fake’ mappings of keys to input paths, to enable some simulation of inputs and actions from the windowed mode app. See the source code for the current key mappings.