Character Animation (required for Audio2Face, Animation Stream)#

The Audio2Face and Animation Stream features work well with MetaHuman characters in Unreal Engine. If you don’t already have a MetaHuman character in your project, import one using Quixel Bridge and place it in a level in your project. Then use the following steps to enable NVIDIA ACE animation on your character.

Open Quixel Bridge

If you are animating a non-MetaHuman character, also read the Notes for non-MetaHuman characters section below.

Initial Editor Setup#

  1. From the Unreal Editor, select Edit > Editor Preferences.

  2. Disable the Level Editor / Miscellaneous / Create New Audio Device for Play in Editor setting. If this setting is enabled, you may not hear audio playback received from ACE in the editor.

Editor Preferences window

Add NVIDIA ACE Face Animation Support to MetaHumans#

  1. Find the Face_AnimBP blueprint:

    1. Edit the blueprint for your character, then select Face in the Components tab.

    2. Click the magnifying glass next to the Face_AnimBP_C Anim Class. This opens the browser and auto-selects the Face_AnimBP blueprint. Double click to open it.

  2. Open the animation graph My Blueprint > Animation Graphs > AnimGraph.

  3. Add an Apply ACE Face Animations animation node to the graph, before the ARKit pose mapping, which is typically mh_arkit_mapping_pose or mh_arkit_mapping_pose_A2F.

After this change, Face_AnimBP might look like the following example. Your specific MetaHuman blueprint may look a bit different, but you must add the Apply ACE Face Animations node before the ARKit pose mapping.

Apply ACE Face Animations animgraph node

What it does: The Apply ACE Face Animations animation blueprint node adds ARKit-compatible curves using data received from NVIDIA ACE technologies such as Audio2Face. The ARKit pose mapping then applies those curves to the MetaHuman facial animation sequences.
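Conceptually, the node overlays the ACE-provided curve values onto whatever curves the animation graph has already set, and the pose mapping downstream converts the result into facial poses. The following Python sketch illustrates that curve-merge step only; the function and data structures are hypothetical, not the plugin’s actual API.

```python
# Illustrative sketch of the curve-merge step performed by the
# Apply ACE Face Animations node. All names here are hypothetical,
# not the ACE plugin's real API.

def apply_ace_face_animations(pose_curves: dict, ace_curves: dict) -> dict:
    """Overlay ACE curve values (e.g. from Audio2Face) onto existing curves."""
    merged = dict(pose_curves)   # keep curves the graph already set
    merged.update(ace_curves)    # ACE values win for shared curve names
    return merged

pose = {"EyeBlinkLeft": 0.0, "JawOpen": 0.1}
ace = {"JawOpen": 0.65, "MouthClose": 0.2}   # received from Audio2Face
print(apply_ace_face_animations(pose, ace))
# {'EyeBlinkLeft': 0.0, 'JawOpen': 0.65, 'MouthClose': 0.2}
```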

Face_AnimBP is a common blueprint shared by all MetaHuman characters, so you typically do this once per project.

Bypass the Default MetaHuman MouthClose Animation#

NVIDIA ACE features output MouthClose curves. The default MetaHuman Face_AnimBP blueprint has additional animations based on the MouthClose curve that interfere with Audio2Face-provided lip movement. It’s recommended to remove these additional animations or bypass them when Audio2Face is being used.

To remove additional lip movements that would interfere with Audio2Face lip animation:

  1. Select the MetaHuman blueprint, then select Face in the Components tab.

  2. Click the magnifying glass next to the Face_AnimBP_C Anim Class.

  3. This opens the browser and auto-selects the Face_AnimBP blueprint. Double click to open it.

  4. Find the Mouth Close block and bypass its Modify Curve anim node.

Bypass Mouth Close animation

Update the ARKit Pose Mapping and Animation Sequence#

NVIDIA has modified the default MetaHuman ARKit pose asset and the corresponding animation sequence asset; both are included in the ACE plugin’s Content folder. The curves (blend shapes) that are modified from the defaults are BrowDownLeft, BrowDownRight, BrowInnerUp, MouthClose, and MouthRollLower.

You can customize facial animations around the brow and mouth area to produce results that may look better with NVIDIA ACE output.

To use these optional alternate assets:

  1. Select the MetaHuman blueprint, then select Face in the Components tab.

  2. Click the magnifying glass next to the Face_AnimBP_C Anim Class.

  3. This opens the browser and auto-selects the Face_AnimBP blueprint. Double click to open it.

  4. Find the mh_arkit_mapping_pose node and change the Pose Asset to mh_arkit_mapping_pose_A2F.

    • If this asset is not visible, change the Content visibility to Show Plugin Content.

mh_arkit_mapping_pose_A2F animgraph node

Add ACE Audio Component to a Character#

The ACE Audio Curve Source component receives audio and curves from NVIDIA ACE technologies such as Audio2Face. The audio is played back and the curves are kept synchronized with the audio. If an animation blueprint attached to the same character contains an Apply ACE Face Animations node, it can read and apply the curves produced by this component. This results in synchronized speech and facial animations.
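The synchronization works because the curves are timestamped against the audio stream, so each animation frame can sample them at the current audio playback time. The sketch below illustrates that idea with a simple linear curve interpolator; the sample rate and curve data are example values, not anything the plugin exposes.

```python
# Hypothetical sketch of keeping animation curves synchronized with
# audio playback: sample the curve track at the time corresponding to
# the audio backend's playback position.

def curve_value_at(keys, t):
    """Linearly interpolate a list of (time, value) curve keys at time t."""
    if t <= keys[0][0]:
        return keys[0][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return keys[-1][1]

SAMPLE_RATE = 16000                               # assumed sample rate
jaw_open = [(0.0, 0.0), (0.1, 0.8), (0.2, 0.0)]   # example curve keys

samples_played = 2400                 # reported by the audio backend
t = samples_played / SAMPLE_RATE      # 0.15 s into playback
print(round(curve_value_at(jaw_open, t), 3))      # 0.4
```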

  1. Edit the blueprint for your character. You can select your character in the main viewport, then click the Edit link in the Outliner pane.

  2. In the Components tab, select Face, click +Add and choose ACE Audio Curve Source.

  3. Repeat for any other characters that you want to animate using NVIDIA ACE.

ACE Audio Curve Source component

Spatial Audio (Optional)#

With spatialized audio, sounds seem to come from the character the component is attached to. Without spatialized audio, sounds are flat and are the same regardless of the character’s location relative to the listener. See Unreal sound attenuation documentation for details of how sound attenuation works to enable spatialized audio in the engine.
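As a rough mental model, attenuation scales the voice’s volume by the listener’s distance from the character: full volume inside an inner radius, silence beyond a falloff distance. The sketch below uses a simple linear falloff with made-up distances; Unreal’s Sound Attenuation assets support several falloff shapes and different defaults.

```python
# Illustrative linear distance attenuation, roughly how a Sound
# Attenuation asset spatializes a voice. The radii are example values,
# not the asset's actual defaults.

def attenuate(volume, distance, inner_radius=400.0, falloff=3600.0):
    """Scale volume by distance: full inside inner_radius, silent beyond."""
    if distance <= inner_radius:
        return volume
    if distance >= inner_radius + falloff:
        return 0.0
    return volume * (1.0 - (distance - inner_radius) / falloff)

print(attenuate(1.0, 200.0))    # 1.0  (inside inner radius)
print(attenuate(1.0, 2200.0))   # 0.5  (halfway through the falloff)
print(attenuate(1.0, 5000.0))   # 0.0  (beyond audible range)
```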

To enable spatialized audio on a character, add attenuation settings to the character’s ACE Audio Curve Source component. These settings are in the Attenuation section of the component’s Details tab.

The attenuation properties are similar to the identically named properties on Unreal’s UAudioComponent:

  • Override Attenuation: If checked, the component’s inline Attenuation Overrides are used. If unchecked, the Attenuation Settings asset is used if one exists.

  • Attenuation Settings: Reference to a Sound Attenuation asset.

  • Attenuation Overrides: Inline sound attenuation settings.

If you haven’t added attenuation settings in your project yet, you can add spatialized audio to an ACE Audio Curve Source component by creating a new attenuation asset within the component.

  1. From the component’s Details tab, select Attenuation > Attenuation Settings.

  2. Open the drop-down menu and select Sound Attenuation under CREATE NEW ASSET.

  3. Type a name for the new Sound Attenuation asset.

  4. Optionally customize the new asset. It provides spatialized audio without any additional customization.

See the Unreal sound attenuation documentation for details on further customizing the sound attenuation.

ACE Audio Curve Source new attenuation asset

If you have an existing Sound Attenuation asset, you can add the asset to any ACE Audio Curve Source component:

  1. Select the Sound Attenuation asset you want to use in the content drawer.

  2. From the ACE Audio Curve Source component’s Details tab, select Attenuation > Attenuation Settings.

  3. Click the circled left-arrow icon to use the asset currently selected in the content browser.

You can also set attenuation settings inline directly from an ACE Audio Curve Source component. If you enable the Attenuation > Override Attenuation checkbox, the inline attenuation settings are available directly after the checkbox.

ACE Audio Curve Source inline attenuation settings

Other ACE Audio Component Settings (Optional)#

The ACE Audio Curve Source component is a type of Unreal Scene Component, so it shows all the properties of any standard scene component in the Details tab. It also exposes a few custom properties you can optionally configure:

  • ACE Config > Buffer Length in Seconds: Number of seconds of received audio to buffer before beginning playback. The default of 0.1 seconds is recommended; if you experience audio stutter, try increasing the buffer length. With an Audio2Face v1.0 service, audio is received in chunks of 0.033 s, so the default represents 3 received audio chunks.

  • Developer > Enable Attenuation Debug: For debugging purposes only, see editor tooltips for details. Similar to the same property on other Unreal playable sound objects.

  • Sound > Group: Sound group to use for audio. The default group is Voice.

  • Sound > Volume: Playback volume of sound, between 0 and 1. The default is 1.

  • Voice Management > Priority > Priority: Used to determine whether a sound can play or remain active if the channel limit is met. A higher value indicates a higher priority; see the platform’s Audio Settings Max Channels property. The value is weighted with the final volume of the sound to produce the final runtime priority value. The value must be between 0 and 100. The default is 1.

ACE Audio Curve Source component settings
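Two of these settings involve simple arithmetic worth spelling out: the buffer length determines how many received audio chunks are held before playback starts, and the priority is weighted by the sound’s final volume. The weighting below assumes a plain multiplication for illustration; the engine’s exact formula may differ.

```python
# Buffer length: how many Audio2Face v1.0 chunks (0.033 s each) fit in
# the playback buffer before audio starts.
CHUNK_SECONDS = 0.033
buffer_seconds = 0.1                       # the recommended default
chunks_buffered = round(buffer_seconds / CHUNK_SECONDS)
print(chunks_buffered)                     # 3

# Runtime priority: the docs say priority is "weighted with the final
# volume"; simple multiplication is assumed here for illustration only.
def runtime_priority(priority, final_volume):
    assert 0 <= priority <= 100 and 0.0 <= final_volume <= 1.0
    return priority * final_volume

print(runtime_priority(1, 0.5))            # 0.5
```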

Animation Start and End Events (Optional)#

Added in version 2.2.

The ACE Audio Curve Source component triggers events when an animation clip starts and ends. You can use these events in your application logic, for example to play an additional animation when a character finishes speaking.

  1. In your character blueprint, select the ACE Audio Curve Source component.

  2. In the Details pane, find Events > On Animation Started or Events > On Animation Ended.

  3. Click the + sign to add an event to the Event Graph.
    Add OnAnimationEnded event
  4. Add whatever logic you want the event to trigger.
    OnAnimationEnded event logic
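The control flow mirrors ordinary delegate binding: the component fires its started/ended callbacks around clip playback, and your handlers run whatever follow-up logic you attach. The Python sketch below only illustrates that flow; in the editor you bind Blueprint events instead, and the class and method names here are invented.

```python
# Hypothetical sketch of reacting to animation start/end events,
# mirroring the OnAnimationStarted / OnAnimationEnded delegates.
# Class and method names are illustrative, not the plugin's API.

class AceAudioCurveSource:
    def __init__(self):
        self.on_animation_started = []
        self.on_animation_ended = []

    def _play_clip(self, name):
        for cb in self.on_animation_started:
            cb(name)
        # ... audio playback and curve updates happen here ...
        for cb in self.on_animation_ended:
            cb(name)

source = AceAudioCurveSource()
source.on_animation_ended.append(
    lambda name: print(f"{name} finished; trigger idle animation"))
source._play_clip("greeting")
# greeting finished; trigger idle animation
```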

Notes for Non-MetaHuman Characters#

If you are animating a non-MetaHuman character, consider these additions to the instructions above:

  • The ACE Audio Curve Source component works as long as it is attached to the same actor that contains the Apply ACE Face Animations animation graph node. You can attach it to your character anywhere it makes sense.

  • The Apply ACE Face Animations animation graph node only produces animation curve values. You must have a pose asset for your face model with curves corresponding to ARKit blend shape names. The curves produced by the NVIDIA ACE plugin are:

    • EyeBlinkLeft

    • EyeLookDownLeft

    • EyeLookInLeft

    • EyeLookOutLeft

    • EyeLookUpLeft

    • EyeSquintLeft

    • EyeWideLeft

    • EyeBlinkRight

    • EyeLookDownRight

    • EyeLookInRight

    • EyeLookOutRight

    • EyeLookUpRight

    • EyeSquintRight

    • EyeWideRight

    • JawForward

    • JawLeft

    • JawRight

    • JawOpen

    • MouthClose

    • MouthFunnel

    • MouthPucker

    • MouthLeft

    • MouthRight

    • MouthSmileLeft

    • MouthSmileRight

    • MouthFrownLeft

    • MouthFrownRight

    • MouthDimpleLeft

    • MouthDimpleRight

    • MouthStretchLeft

    • MouthStretchRight

    • MouthRollLower

    • MouthRollUpper

    • MouthShrugLower

    • MouthShrugUpper

    • MouthPressLeft

    • MouthPressRight

    • MouthLowerDownLeft

    • MouthLowerDownRight

    • MouthUpperUpLeft

    • MouthUpperUpRight

    • BrowDownLeft

    • BrowDownRight

    • BrowInnerUp

    • BrowOuterUpLeft

    • BrowOuterUpRight

    • CheekPuff

    • CheekSquintLeft

    • CheekSquintRight

    • NoseSneerLeft

    • NoseSneerRight

    • TongueOut
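One practical check for a custom face model is whether its pose asset covers every curve in the list above; any curve missing from the asset simply has no effect. The sketch below shows that set comparison with a small subset of the 52 names to stay short; the pose-asset curve list is a made-up example, and in practice you would read it from your asset.

```python
# Sketch: checking that a face pose asset covers the ARKit curves the
# ACE plugin produces. Only a subset of the 52 curve names is shown;
# the pose-asset curve list is illustrative.

ACE_CURVES = {"EyeBlinkLeft", "JawOpen", "MouthClose", "TongueOut"}  # subset

pose_asset_curves = {"EyeBlinkLeft", "JawOpen", "MouthClose"}  # from your asset

missing = sorted(ACE_CURVES - pose_asset_curves)
print(missing)   # ['TongueOut'] -> these curves will have no effect
```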