Customizing the Default Animation Graph

Warning

Creating custom animation graphs is not yet officially supported, but we provide some initial information about the internals to help you debug the microservice in case of issues.

Warning

Note that the Animation Graph features required to configure the microservice, such as the pose provider node, are currently not available in the public USD Composer release. These features will be available in a future release.

The default animation graph is made up of the following parts:

  • Postures

  • Gestures

  • Positions

  • Facial Gestures

  • Audio2Face Input

Animation Graph Generation

Parts of this Animation Graph can be adjusted manually to some degree, but the state machines for postures, gestures, and facial gestures are so large that they need to be generated by a Python script. The script, named animation_graph_builder_behavior_script.py, is linked inside the Animation Graph prim. On initialization it checks whether the mentioned state machines exist. If they do not, it generates them from the animation clips under the prim Animations.
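
As an illustration of this check, the following minimal sketch uses the USD Python API to test whether the generated state machines are present and to list the clips the builder script would read. This is not the actual behavior script; the scene file name and the state machine prim paths are placeholders invented for the example, and only the prim Animations and the script's behavior come from the description above.

   # Minimal sketch, not the actual animation_graph_builder_behavior_script.py.
   from pxr import Usd, UsdSkel

   STAGE_PATH = "Avatar_Scene.usda"        # assumed scene file name
   ANIMATIONS_SCOPE = "/World/Animations"  # animation clips the builder script reads
   STATE_MACHINE_PATHS = [                 # hypothetical paths to the generated state machines
       "/World/AnimationGraph/PostureStateMachine",
       "/World/AnimationGraph/GestureStateMachine",
       "/World/AnimationGraph/FacialGestureStateMachine",
   ]

   stage = Usd.Stage.Open(STAGE_PATH)

   # Check whether the state machines already exist on the stage.
   missing = [p for p in STATE_MACHINE_PATHS if not stage.GetPrimAtPath(p).IsValid()]
   if missing:
       # The real builder script would generate the missing state machines here.
       anim_scope = stage.GetPrimAtPath(ANIMATIONS_SCOPE)
       clips = []
       if anim_scope.IsValid():
           clips = [
               prim.GetPath()
               for prim in Usd.PrimRange(anim_scope)
               if prim.IsA(UsdSkel.Animation)
           ]
       print(f"State machines missing ({missing}); would build from {len(clips)} clips")
   else:
       print("State machines already exist; nothing to generate")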

Adding or Removing Animations from the Default Graph

To change the set of animations the graph uses, there is no need to build a new graph. Follow the steps in Add custom animations to add or remove animations.

Customizing the Animation Graph

Further altering the default animation graph is not recommended, unless you want to change the mechanics described above. In that case, please consider the following restrictions.

The Structure of the Scene

The scene is used in two ways: 1) to control the avatar animation with the Animation Graph microservice and 2) to render a character in the Omniverse Renderer microservice. The same scene is currently used for both services. For the avatar animation, a generic NVIDIA humanoid character, which is invisible in the scene, handles the animation in a generic way. The retargeting and rendering of that animation happen in the renderer microservice, using whatever character should appear in the final result. This way the Animation Graph microservice doesn't need to consider that final character at all and only uses one generic rig. In order for this to work, this generic rig and its associated prims need to be structured in a fixed way.

Hardcoded Prim Paths

Some paths to prims in the scene are hardcoded in the microservice. These prims need to be located and named according to the table below.

Variable Name: scene_skel_roots_scope_prim_path
Prim Path: /World/SkelRoots
Description: The parent prim of the character. The microservice will make multiple copies up to the maximum supported stream capacity.

Variable Name: scene_skel_root_prim_path
Prim Path: /World/SkelRoots/Rig_Retarget/SkelRoot
Description: The SkelRoot is the prim that the Animation Graph is assigned to.

Variable Name: scene_skeleton_prim_path
Prim Path: /World/SkelRoots/Rig_Retarget/SkelRoot/Skeleton
Description: The Skeleton defines the joint naming, hierarchy, and transforms as well as retargeting tags.

Variable Name: scene_character_anim_pose_prim_path
Prim Path: /World/SkelRoots/Rig_Retarget/SkelRoot/Skeleton/AnimGraphOutputPose
Description: The resulting poses from the Animation Graph. This prim is created automatically during runtime.

Variable Name: scene_skel_animation_prim_path
Prim Path: /World/Animations/Rig_Retarget/SkelRoot/Skeleton/ACE_Animation_Target
Description: The animation clip to which the microservice writes the poses on the renderer side.

Variable Name: scene_camera_prim_path
Prim Path: /World/SkelRoots/Rig_Camera/SkelRoot/Skeleton/root/camera_location/camera_body/camera_main
Description: The location of the main camera in the scene. This isn't currently used, but might be in the future.
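
A minimal sketch of how a custom scene could be checked against these expectations with the USD Python API is shown below; the scene file name is an assumption, and scene_character_anim_pose_prim_path is skipped because that prim is only created at runtime.

   # Minimal sketch: verify that the prims the microservice expects at the
   # hardcoded paths above are present in a scene.
   from pxr import Usd

   HARDCODED_PRIM_PATHS = {
       "scene_skel_roots_scope_prim_path": "/World/SkelRoots",
       "scene_skel_root_prim_path": "/World/SkelRoots/Rig_Retarget/SkelRoot",
       "scene_skeleton_prim_path": "/World/SkelRoots/Rig_Retarget/SkelRoot/Skeleton",
       "scene_skel_animation_prim_path": "/World/Animations/Rig_Retarget/SkelRoot/Skeleton/ACE_Animation_Target",
       "scene_camera_prim_path": "/World/SkelRoots/Rig_Camera/SkelRoot/Skeleton/root/camera_location/camera_body/camera_main",
   }

   stage = Usd.Stage.Open("Avatar_Scene.usda")  # assumed scene file name
   for variable_name, prim_path in HARDCODED_PRIM_PATHS.items():
       prim = stage.GetPrimAtPath(prim_path)
       status = "ok" if prim.IsValid() else "MISSING"
       print(f"{variable_name}: {prim_path} -> {status}")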

Custom Layer Data

Any Animation Graph variables that should be exposed by the microservice need to be explicitly specified in the customLayerData section of the main USD scene file. The Animation Graph microservice will then dynamically generate the necessary endpoints at startup.

Example of custom layer data from the default Avatar_Scene.usda
   #usda 1.0
   (
      customLayerData = {
         dictionary animation_graph_microservice_api_mapping = {
            dictionary animation_graphs = {
               dictionary avatar = {
                  string animation_graph_prim_path = "/World/..."
                  dictionary variable_routes = {
                     dictionary gesture_state = {
                        string variable_name = "gesture_state"
                     }
                     dictionary posture_state = {
                        string variable_name = "posture_state"
                     }
                     dictionary facial_gesture_state = {
                        string variable_name = "facial_gesture_state"
                     }
                     dictionary position_state = {
                        string variable_name = "position_state"
                     }
                  }
               }
            }
         }
      }
   )
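
The customLayerData dictionary can be inspected programmatically as well. The following minimal sketch reads the mapping from the scene's root layer with the USD Python API to show which variable routes would be picked up; the scene file name is an assumption.

   # Minimal sketch: read the animation_graph_microservice_api_mapping dictionary
   # from the scene's root layer and list the variable routes it declares.
   from pxr import Usd

   stage = Usd.Stage.Open("Avatar_Scene.usda")  # assumed scene file name
   layer_data = stage.GetRootLayer().customLayerData
   mapping = layer_data.get("animation_graph_microservice_api_mapping", {})

   for graph_name, graph_info in mapping.get("animation_graphs", {}).items():
       print(f"graph '{graph_name}' at {graph_info.get('animation_graph_prim_path')}")
       for route, route_info in graph_info.get("variable_routes", {}).items():
           print(f"  route '{route}' -> variable '{route_info.get('variable_name')}'")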

Note on Facial Gestures

Facial gestures work similarly to (body) gestures, except that they are additive. This means a facial gesture won't interrupt an ongoing (body) gesture or posture, but combines with the other animations playing at the same time. Since the additive mode of the blend node is still in early development, it does not yet subtract the character's rest pose from the result. This means that if you were to add two neutral poses, they would stack up and the character would visually explode. For this reason, all animations used as facial gestures need to already have the rest pose subtracted from them (use relative transformations). Consequently, the animations for facial gestures only work in this specific context. They only include blendshape animations as well as movements of the head and neck joints in relative space.

An example of how combining two animations additively leads to unpredictable results if the rest pose is not subtracted from one of the animations.
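
The following toy calculation, a minimal sketch not tied to the microservice's blend node, illustrates the stacking problem on a single hypothetical joint value.

   # Toy numeric illustration: adding two absolute poses doubles the rest pose,
   # while subtracting the rest pose from the additive layer first keeps the
   # result close to the base pose.
   rest_pose = {"head_pitch": 10.0}           # hypothetical joint value in degrees
   base_pose = {"head_pitch": 10.0}           # neutral body pose (equals the rest pose)
   facial_gesture_abs = {"head_pitch": 12.0}  # facial gesture stored as an absolute pose

   # Naive additive blend of two absolute poses: the values stack up.
   naive = {j: base_pose[j] + facial_gesture_abs[j] for j in base_pose}
   print(naive)    # {'head_pitch': 22.0} -> far from either input pose

   # Facial gesture authored relative to the rest pose (rest pose subtracted).
   facial_gesture_rel = {j: facial_gesture_abs[j] - rest_pose[j] for j in rest_pose}
   correct = {j: base_pose[j] + facial_gesture_rel[j] for j in base_pose}
   print(correct)  # {'head_pitch': 12.0} -> small offset on top of the base pose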