NVIDIA ACE 2.1 Plugin

The NVIDIA ACE plugin allows your Unreal application to communicate with external NVIDIA ACE services. The ACE Plugin supports using the Audio2Face service (minimum v1.0) to animate your character by sending voice audio you provide to the service and receiving audio and animation data back.

See the Audio2Face documentation for more details about the Audio2Face service.

Install the ACE Plugin

Before you start:

  • The ACE UE plugin is tested for UE 5.3 and 5.4 on Win64 platforms. The plugin includes source and may also build for other engine versions and platforms, but is unsupported outside of UE 5.3 and 5.4 on Win64.

  • If you have a previous version of the ACE or OmniverseLiveLink plugin installed, see the Upgrading from a previous plugin version section below.

  • If you installed Unreal Engine through the Epic Games Launcher, it’s recommended that you use the “Installing to a Packaged Engine” instructions.

Installing to a Packaged Engine

For a packaged engine installed from the Epic Games Launcher, find your engine install location and copy the plugin somewhere under Engine\Plugins\Marketplace. You may have to create the Marketplace folder if one does not already exist. Installing the plugin to the incorrect engine folder can lead to "Expecting to find a type to be declared in a module rules" errors when packaging your project.

Install location for packaged engine

Installing to a Source Engine

For a source engine, such as one acquired through GitHub and built from source, copy the plugin somewhere under Engine\Plugins\Runtime. Then compile your engine.

Installing to a Source Project

To install the ACE plugin directly to a source project:

  1. Go to your project folder.

  2. Create a Plugins folder if one doesn’t exist already.

  3. Copy the ACE plugin somewhere under Plugins.

Install the ACE plugin to either the engine or a project that uses that engine. Do not install it to both at the same time.

Activate the Plugin

After you have installed the plugin to either the engine or your project:

  1. Open your project in Unreal Editor.

  2. Go to Edit → Plugins and search for NVIDIA ACE.

  3. Check the box to add the plugin to your project.

  4. When prompted, restart the editor.

NVIDIA Audio2Face

NVIDIA Audio2Face works well with MetaHuman characters in Unreal Engine. If you don’t already have a MetaHuman character in your project, import one using Quixel Bridge and place it in a level in your project. Then use the following steps to enable NVIDIA Audio2Face animation on your character.

If you are animating a non-MetaHuman character, also read the Notes for non-MetaHuman characters section below.

Initial Editor Setup

From Unreal Editor, go to Edit → Editor Preferences, then disable the "Level Editor" / "Miscellaneous" / "Create New Audio Device for Play in Editor" setting. If this setting is enabled, you may not hear audio playback from Audio2Face in the editor.

Editor Preferences window

Audio2Face Connection Setting

The ACE plugin’s project settings have an option for setting the default Audio2Face server to connect to: Edit → Project Settings… → Plugins → NVIDIA ACE → Default A2X Server Config. The configuration has multiple fields:

  • Dest URL: The server address must include a scheme (http or https), host (IP address or hostname), and port number. For example, http://203.0.113.37:52000 or https://a2x.example.com:52010. To connect to an NVIDIA Cloud Function (NVCF), set the server address to https://grpc.nvcf.nvidia.com:443.

  • API Key: If you are not connecting to an NVCF-hosted Audio2Face service, leave this blank. You can get an API key through https://build.nvidia.com/nvidia/audio2face to connect to NVCF-hosted Audio2Face services.

  • NVCF Function Id: If you are not connecting to an NVCF-hosted Audio2Face service, leave this blank. You can get an NVCF Function ID through https://build.nvidia.com/nvidia/audio2face to connect to NVCF-hosted Audio2Face services.

  • NVCF Function Version: Optional. Leave this blank unless you need to specify a specific version.

Note

We highly recommend that you use the NVCF option whenever possible.

You can change the Audio2Face connection settings at runtime by using the ACE → Audio2Face → Override Audio2Face Connection Info blueprint function, or by calling UACEBlueprintLibrary::SetA2XConnectionInfo from C++.

You can fetch the current Audio2Face connection settings at runtime by using the ACE → Audio2Face → Get Audio2Face Connection Info blueprint function, or by calling UACEBlueprintLibrary::GetA2XConnectionInfo from C++. Current settings are a combination of the project defaults and the runtime overrides.

The project settings are stored in your project’s DefaultEngine.ini file. If your API key is too sensitive to include in a project text file, consider setting it at runtime using the Override Audio2Face Connection Info blueprint function.
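
For reference, a minimal C++ sketch of overriding the connection at runtime follows. The struct name FA2XConnectionInfo, its field names, and the header path are assumptions based on the project-settings fields above; check the plugin source for the exact signature of UACEBlueprintLibrary::SetA2XConnectionInfo.

    // Hedged sketch: override the Audio2Face connection at runtime so the NVCF API key
    // never needs to be stored in DefaultEngine.ini. FA2XConnectionInfo and its field
    // names are assumptions inferred from the project settings; verify against the
    // plugin headers before use.
    #include "ACEBlueprintLibrary.h"   // assumed header for UACEBlueprintLibrary

    void ConfigureAudio2FaceConnection(const FString& ApiKeyFetchedAtRuntime)
    {
        FA2XConnectionInfo Info;                                   // assumed struct name
        Info.DestUrl        = TEXT("https://grpc.nvcf.nvidia.com:443");
        Info.ApiKey         = ApiKeyFetchedAtRuntime;              // e.g. read from a secure store, not from the project
        Info.NvcfFunctionId = TEXT("<your NVCF function id>");     // from https://build.nvidia.com/nvidia/audio2face

        UACEBlueprintLibrary::SetA2XConnectionInfo(Info);          // documented function; parameter type assumed
    }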

Add NVIDIA ACE Face Animation Support to MetaHumans

  • Find the Face_AnimBP blueprint:

    • Edit the blueprint for your character, then select Face in the Components tab.

    • Click the magnifying glass next to the Face_AnimBP_C Anim Class.

    • This opens the browser and auto-selects the Face_AnimBP blueprint. Double click to open it.

  • Open the animation graph My Blueprint → Animation Graphs → AnimGraph.

  • Add an Apply ACE Face Animations animation node to the graph, before the ARKit pose mapping, which is typically mh_arkit_mapping_pose or mh_arkit_mapping_pose_A2F.

After making the change, Face_AnimBP might look like the following example. Your specific MetaHuman blueprint may look a bit different from this, but what's important is to add the Apply ACE Face Animations node before the ARKit pose mapping.

Apply ACE Face Animations animgraph node

What it does: The Apply ACE Face Animations animation blueprint node adds ARKit-compatible curves using data received from NVIDIA ACE technologies such as Audio2Face. The ARKit pose mapping then applies those curves to the MetaHuman facial animation sequences.

Face_AnimBP is a common blueprint shared by all MetaHuman characters, so you typically do this once per project.

Bypass the Default MetaHuman MouthClose Animation

NVIDIA Audio2Face outputs MouthClose curves. The default MetaHuman Face_AnimBP blueprint has additional animations based on the MouthClose curve that interfere with Audio2Face-provided lip movement. It’s recommended to remove these additional animations or bypass them when Audio2Face is being used.

To remove additional lip movements that would interfere with Audio2Face lip animation:

  1. Select the MetaHuman blueprint, then select Face in the Components tab.

  2. Click the magnifying glass next to the Face_AnimBP_C Anim Class.

  3. This opens the browser and auto-selects the Face_AnimBP blueprint. Double click to open it.

  4. Find the Mouth Close block and bypass its Modify Curve anim node.

Bypass Mouth Close animation

Update the ARKit Pose Mapping and Animation Sequence

NVIDIA has modified the default MetaHuman ARKit pose asset and corresponding animation sequence asset and they are included in the ACE plugin’s Content folder. The curves (blend shapes) that are modified from the defaults are BrowDownLeft, BrowDownRight, BrowInnerUp, MouthClose, and MouthRollLower.

These optional assets customize facial animation around the brow and mouth area to produce results that may look better with NVIDIA Audio2Face output.

To use these optional alternate assets:

  1. Select the MetaHuman blueprint, then select Face in the Components tab.

  2. Click the magnifying glass next to the Face_AnimBP_C Anim Class.

  3. This opens the browser and auto-selects the Face_AnimBP blueprint. Double click to open it.

  4. Find the mh_arkit_mapping_pose node and change the Pose Asset to mh_arkit_mapping_pose_A2F.

    • If this asset is not visible, change the Content visibility to Show Plugin Content.

mh_arkit_mapping_pose_A2F animgraph node

Add ACE Audio Component to a Character

The ACE Audio Curve Source component receives audio and curves from NVIDIA ACE technologies such as Audio2Face. The audio is played back and the curves are kept synchronized with the audio. If an animation blueprint attached to the same character contains an Apply ACE Face Animations node, it can read and apply the curves produced by this component. This results in synchronized speech and facial animations.

  1. Edit the blueprint for your character. You can select your character in the main viewport, then click the Edit link in the Outliner pane.

  2. In the Components tab, select Face, click +Add and choose ACE Audio Curve Source.

  3. Repeat for any other characters that you want to animate using NVIDIA ACE.

ACE Audio Curve Source component
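
If you would rather attach the component from C++ instead of the editor, a sketch along these lines should work. The class name UACEAudioCurveSourceComponent and its header are assumptions derived from the component's display name; confirm them against the plugin source.

    // Hedged sketch: attach the ACE audio component to a character at runtime. The
    // class and header names are assumptions derived from the "ACE Audio Curve Source"
    // display name.
    #include "GameFramework/Character.h"
    #include "ACEAudioCurveSourceComponent.h"   // assumed header name

    void AddAceAudioCurveSource(ACharacter* Character)
    {
        if (Character == nullptr)
        {
            return;
        }

        // Any attach point on the actor works, as long as the same actor's animation
        // blueprint contains an Apply ACE Face Animations node.
        UACEAudioCurveSourceComponent* AceAudio =
            NewObject<UACEAudioCurveSourceComponent>(Character, TEXT("ACEAudioCurveSource"));
        AceAudio->SetupAttachment(Character->GetRootComponent());
        AceAudio->RegisterComponent();
    }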

Spatial Audio (Optional)

With spatialized audio, sounds seem to come from the character the component is attached to. Without spatialized audio, sounds are flat and are the same regardless of the character’s location relative to the listener. See the Unreal sound attenuation documentation for details on how sound attenuation enables spatialized audio in the engine.

To enable spatialized audio on a character, add attenuation settings to the character’s ACE Audio Curve Source component. These settings are in the Attenuation section of the component’s Details tab. Attenuation properties are similar to Unreal’s UAudioComponent properties of the same name. Settings are listed here and then described in detail below:

  • Attenuation → Override Attenuation: If checked, the component’s inline Attenuation Overrides are used. If unchecked, the Attenuation Settings asset is used if one exists.

  • Attenuation → Attenuation Settings: Reference to a Sound Attenuation asset.

  • Attenuation → Attenuation Overrides: Inline sound attenuation settings.

If you haven’t added attenuation settings in your project yet, you can add spatialized audio to an ACE Audio Curve Source component by creating a new attenuation asset within the component.

  1. From the component’s Details tab, select Attenuation → Attenuation Settings.

  2. Open the drop-down menu, select Sound Attenuation under CREATE NEW ASSET.

  3. Type a name for the new Sound Attenuation asset.

  4. Optionally, customize the new asset as desired. The newly created asset already provides spatialized audio without any additional customization.

See the Unreal sound attenuation documentation for details on further customizing the sound attenuation.

ACE Audio Curve Source new attenuation asset

If you have an existing Sound Attenuation asset, you can add the asset to any ACE Audio Curve Source component:

  1. Select the Sound Attenuation asset you wish to use in the content drawer.

  2. From the ACE Audio Curve Source component’s Details tab, select Attenuation → Attenuation Settings.

  3. Click the left-arrow-in-a-circle icon to use the asset selected in the Content Browser.

You can also set attenuation settings inline directly from an ACE Audio Curve Source component. If you enable the Attenuation → Override Attenuation checkbox, the inline attenuation settings are made immediately available.

ACE Audio Curve Source inline attenuation settings
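
From C++, assigning attenuation could look like the following sketch. The text above only says the properties are similar to UAudioComponent's, so the property names used here (bOverrideAttenuation, AttenuationSettings) are assumptions:

    // Hedged sketch: point the ACE audio component at an existing Sound Attenuation
    // asset. Property names are assumed to mirror UAudioComponent's bOverrideAttenuation
    // and AttenuationSettings; check the plugin source for the actual names.
    #include "Sound/SoundAttenuation.h"
    #include "ACEAudioCurveSourceComponent.h"   // assumed header name

    void EnableSpatializedAudio(UACEAudioCurveSourceComponent* AceAudio, USoundAttenuation* Attenuation)
    {
        if (AceAudio == nullptr || Attenuation == nullptr)
        {
            return;
        }

        AceAudio->bOverrideAttenuation = false;        // use the referenced asset, not inline overrides
        AceAudio->AttenuationSettings  = Attenuation;  // assumed property, mirrors UAudioComponent
    }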

Other ACE Audio Component Settings (Optional)

The ACE Audio Curve Source component is a type of Unreal Scene Component, so it shows all the properties of any standard scene component in the Details tab. It also has a few custom properties you can optionally configure:

  • ACE Config → Buffer Length in Seconds: Number of seconds of received audio to buffer before beginning playback. It’s recommended to use the default of 0.1 seconds, but if you experience audio stutter, try increasing the buffer length. Audio is received from Audio2Face v1.0 in chunks of 0.033 s, so the default represents 3 received audio chunks.

  • Developer → Enable Attenuation Debug: For debugging purposes only; see editor tooltips for details. Similar to the same property on other Unreal playable sound objects.

  • Sound → Group: Sound group to use for audio. The default group is Voice.

  • Sound → Volume: Playback volume of sound, between 0 and 1. The default is 1.

  • Voice Management → Priority → Priority: Used to determine whether the sound can play or remain active if the channel limit is met; see the platform’s Audio Settings Max Channels property. A higher value indicates a higher priority. The value is weighted with the final volume of the sound to produce the final runtime priority value. The value must be between 0 and 100. The default is 1.

ACE Audio Curve Source component settings
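
A hedged C++ sketch of adjusting these optional settings follows. Every property name here is an assumption derived from the Details-panel labels above; verify them against the plugin source.

    // Hedged sketch: tune the optional ACE Audio Curve Source settings from C++.
    // BufferLengthInSeconds, Volume, and Priority are assumed property names taken
    // from the Details-panel labels; they may differ in the plugin source.
    #include "ACEAudioCurveSourceComponent.h"   // assumed header name

    void TuneAceAudio(UACEAudioCurveSourceComponent* AceAudio)
    {
        if (AceAudio == nullptr)
        {
            return;
        }

        AceAudio->BufferLengthInSeconds = 0.2f;  // ~6 received audio chunks, if the 0.1 s default stutters
        AceAudio->Volume                = 0.8f;  // playback volume, 0 to 1
        AceAudio->Priority              = 2.0f;  // voice-management priority, 0 to 100
    }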

Import Speech Clips

The NVIDIA ACE plugin supports animating a character from speech stored in Sound Wave assets. Any sample rate is supported as input. The plugin converts the clip to 16000 Hz mono at runtime before sending it to the Audio2Face service.

If you don’t already have Sound Wave assets in your project you can import the speech audio clips you want to animate:

  1. Open the Content Drawer and select the folder where you want to import your clips.

  2. Right click in the content pane and select Import to [path]….

  3. Navigate to a supported file (.wav, .ogg, .flac, .aif) and open it.

  4. Verify that a new Sound Wave asset appears in the Content Drawer.

See Unreal documentation for more details about import options.

Note

In some cases, the Sound Wave asset may not be usable by the ACE plugin unless it is fully loaded. It is recommended to set the Loading Behavior Override to ForceInline in the Sound Wave asset’s properties. The plugin logs a warning in the LogACERuntime category if an asset can’t be read because it isn’t fully loaded.

Animating a Character from a Sound Wave Audio Clip

To animate a character from an audio clip stored in a Sound Wave asset, use the blueprint function Animate Character from Sound Wave on the character actor. These instructions describe the blueprint interface, but you can also call UACEBlueprintLibrary::AnimateCharacterFromSoundWave from C++.

Depending on your application, there are many ways to determine which character to animate. Some options might be:

  • have a single default character that is animated

  • automatically animate the character that the player is looking at or the closest character

  • provide some UI for selecting a character

After you’ve chosen a character Actor, animate it from a Sound Wave asset:

  1. Call the ACE → Audio2Face → Animate Character From Sound Wave function.

  2. Provide the actor corresponding to the character you wish to animate. If the actor has an ACE Audio Curve Source component attached, this sends the speech clip to NVIDIA Audio2Face.

  3. Provide the speech clip asset as Sound Wave input.

  4. Optionally, provide an Audio2Face Emotion struct as ACEEmotionParameters input.

  5. Optionally, provide an Audio2Face Parameters input.

  6. The node’s Success return value indicates whether the audio clip was successfully sent to Audio2Face.

Animate Character From Sound Wave node
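
The same call from C++ might look roughly like this. UACEBlueprintLibrary::AnimateCharacterFromSoundWave is documented above, but its exact parameter list (including the optional emotion and parameter inputs) is assumed here:

    // Hedged sketch: send a Sound Wave asset to Audio2Face for a chosen character.
    // The two-argument form used here is an assumption; the real signature may also
    // take the optional ACEEmotionParameters and Audio2Face Parameters inputs.
    #include "GameFramework/Actor.h"
    #include "Sound/SoundWave.h"
    #include "ACEBlueprintLibrary.h"   // assumed header name

    bool SpeakClip(AActor* CharacterActor, USoundWave* SpeechClip)
    {
        if (CharacterActor == nullptr || SpeechClip == nullptr)
        {
            return false;
        }

        // The actor needs an ACE Audio Curve Source component for the clip to reach
        // Audio2Face; the return value reports whether the clip was sent successfully.
        return UACEBlueprintLibrary::AnimateCharacterFromSoundWave(CharacterActor, SpeechClip);
    }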

Animating a Character from a Local WAV File (Optional)

The plugin supports animating a character from a local WAV file at runtime. For example, this could be used in an application where the user can supply their own audio files for character speech. It’s similar to animating from a Sound Wave asset, but in the case of a WAV file, the audio won’t be stored in an Unreal asset and baked into the application’s content. Use the blueprint function Animate Character from Wav File on the character actor. You can also call UACEBlueprintLibrary::AnimateCharacterFromWavFile from C++.

Animate Character From Wav File node
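
A corresponding C++ sketch, assuming the function takes the character actor and a file path (the exact signature may differ):

    // Hedged sketch: animate a character from a WAV file on disk. The actor-plus-path
    // parameter list is an assumption; consult the plugin headers for the real signature.
    #include "GameFramework/Actor.h"
    #include "ACEBlueprintLibrary.h"   // assumed header name

    bool SpeakFromWavFile(AActor* CharacterActor, const FString& WavFilePath)
    {
        if (CharacterActor == nullptr || WavFilePath.IsEmpty())
        {
            return false;
        }

        return UACEBlueprintLibrary::AnimateCharacterFromWavFile(CharacterActor, WavFilePath);
    }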

Adjusting Character Emotion (Optional)

Audio2Face detects emotions from the audio input and uses them to adjust character animation appropriately. If your application has information about character emotion, you can also provide it to Audio2Face, which blends the application-provided emotion overrides with the detected emotions. Functions to animate a character accept an ACEEmotionParameters input of type Audio2FaceEmotion, where individual emotion values can be overridden. Each emotion override value must be between 0.0 and 1.0; values outside that range are ignored. A value of 0.0 represents a neutral emotion.

The Audio2FaceEmotion struct can also change how detected emotions are processed. The available options are summarized below:

  • Overall Emotion Strength: Multiplier applied globally after the mix of emotions is done. Valid range: 0.0 to 1.0. Default: 0.6.

  • Detected Emotion Contrast: Increases the spread of detected emotion values by pushing them higher or lower. Valid range: 0.3 to 3.0. Default: 1.0.

  • Max Detected Emotions: Firm limit on the number of detected emotion values. Valid range: 1 to 6. Default: 3.

  • Detected Emotion Smoothing: Coefficient for smoothing detected emotions over time. Valid range: 0.0 to 1.0. Default: 0.7.

  • Emotion Override Strength: Blend between detected emotions (0.0) and override emotions (1.0). Valid range: 0.0 to 1.0. Default: 0.5.

  • Emotion Overrides: Individual emotion override values, each in the range 0.0 to 1.0. Valid range: disabled or 0.0 to 1.0. Default: disabled.

Audio2FaceEmotion struct
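
As an illustration, a hedged C++ sketch of building an emotion override input follows. The struct name FAudio2FaceEmotion, its member names, and the override container type are assumptions inferred from the options above:

    // Hedged sketch: build an emotion input for the animate-character calls. All names
    // below (FAudio2FaceEmotion, OverallEmotionStrength, EmotionOverrideStrength,
    // EmotionOverrides) are assumptions inferred from the option list above.
    #include "ACEBlueprintLibrary.h"   // assumed header name

    FAudio2FaceEmotion MakeJoyfulEmotionOverride()
    {
        FAudio2FaceEmotion Emotion;                        // assumed struct name
        Emotion.OverallEmotionStrength  = 0.6f;            // global multiplier after mixing (default 0.6)
        Emotion.EmotionOverrideStrength = 0.8f;            // favor the override over the detected emotions
        Emotion.EmotionOverrides.Add(TEXT("Joy"), 0.9f);   // assumed map of emotion name to 0.0 - 1.0 value
        return Emotion;
    }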

Note

Emotion and face parameter inputs won’t have any effect for audio clips shorter than 0.5 seconds.

Adjusting Audio2Face Parameters (Optional)

Certain Audio2Face service parameters can be overridden by the application. These parameters tend to be tightly coupled with the model deployed to the service. Typically, it’s not recommended to change these in the application. If you think you need to change any of these, refer to the Audio2Face service documentation for details on what they do.

Set parameters by string name. The parameters available might change depending on the version of the service you have deployed. The parameters available for the v1.0 Audio2Face service are:

  • skinStrength: Controls the range of motion of the skin. Valid range: 0.0 to 2.0. Default: 1.0.

  • upperFaceStrength: Controls the range of motion on the upper regions of the face. Valid range: 0.0 to 2.0. Default: 1.0.

  • lowerFaceStrength: Controls the range of motion on the lower regions of the face. Valid range: 0.0 to 2.0. Default: 1.0.

  • eyelidOpenOffset: Adjusts the default pose of eyelid open-close (-1.0 means fully closed, 1.0 means fully open). Valid range: -1.0 to 1.0. Default: depends on the deployed model.

  • blinkStrength: Valid range: 0.0 to 2.0. Default: 1.0.

  • lipOpenOffset: Adjusts the default pose of lip close-open (-1.0 means fully closed, 1.0 means fully open). Valid range: -0.2 to 0.2. Default: depends on the deployed model.

  • upperFaceSmoothing: Applies temporal smoothing to the upper face motion. Valid range: 0.0 to 0.1. Default: 0.001.

  • lowerFaceSmoothing: Applies temporal smoothing to the lower face motion. Valid range: 0.0 to 0.1. Default: depends on the deployed model.

  • faceMaskLevel: Determines the boundary between the upper and lower regions of the face. Valid range: 0.0 to 1.0. Default: 0.6.

  • faceMaskSoftness: Determines how smoothly the upper and lower face regions blend on the boundary. Valid range: 0.001 to 0.5. Default: 0.0085.

  • tongueStrength: Valid range: 0.0 to 3.0. Default: depends on the deployed model.

  • tongueHeightOffset: Valid range: -3.0 to 3.0. Default: depends on the deployed model.

  • tongueDepthOffset: Valid range: -3.0 to 3.0. Default: depends on the deployed model.

Audio2FaceParameters object
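
Because the parameters are addressed by string name, a simple name-to-value map is a natural way to build the Audio2Face Parameters input. The following C++ sketch assumes the input accepts such name/value pairs; the concrete container type used by the plugin may differ:

    // Hedged sketch: override a few Audio2Face parameters by string name. A plain
    // TMap<FString, float> is used here as an assumption; the plugin's Audio2Face
    // Parameters input may wrap these pairs in its own type.
    #include "Containers/Map.h"
    #include "Containers/UnrealString.h"

    TMap<FString, float> MakeFaceParameterOverrides()
    {
        TMap<FString, float> Params;
        Params.Add(TEXT("skinStrength"),      1.2f);  // slightly exaggerate skin motion (0.0 to 2.0)
        Params.Add(TEXT("lowerFaceStrength"), 1.5f);  // more pronounced mouth motion (0.0 to 2.0)
        Params.Add(TEXT("faceMaskLevel"),     0.6f);  // boundary between upper and lower face (default)
        return Params;
    }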

Note

Emotion and face parameter inputs won’t have any effect for audio clips shorter than 0.5 seconds.

Troubleshooting

From Unreal Editor, you can view log messages with Window → Output Log. Filter the log messages on LogACE to see messages specifically from the NVIDIA ACE plugin. This shows you warnings or errors for common mistakes, such as not setting Loading Behavior Override to ForceInline on a Sound Wave asset’s properties. If you receive no audio or facial animation at all, check the log to see if the usual sequence of log messages has occurred. Logs for a typical Audio2Face animation are similar to:

LogACERuntime: sending A2X asset SoundWave /Game/Audio/SampleInput.SampleInput
LogACERuntime: Connected to a2x service at http://203.0.113.44:52000
LogACERuntime: [A2X SID 0] Started A2X session at http://203.0.113.44:52000
LogACERuntime: [A2X SID 0] Sent 60371 samples to A2X
LogACERuntime: [A2X SID 0] End of samples
LogACERuntime: start playing audio on BP_Maria_C /Game/Maps/UEDPIE_0_ACETestMap.ACETestMap:PersistentLevel.BP_Maria_C_1
LogACERuntime: [A2X SID 0] begin animation on BP_Maria_C /Game/Maps/UEDPIE_0_ACETestMap.ACETestMap:PersistentLevel.BP_Maria_C_1 at 0.000000
LogACERuntime: [A2X SID 0 callback] received 114 animation samples, 60797 audio samples for clip on BP_Maria_C /Game/Maps/UEDPIE_0_ACETestMap.ACETestMap:PersistentLevel.BP_Maria_C_1
LogACERuntime: [A2X SID 0]: resetting animation on BP_Maria_C /Game/Maps/UEDPIE_0_ACETestMap.ACETestMap:PersistentLevel.BP_Maria_C_1

These messages are also written to the application’s log file. If you’re running in standalone mode or from a packaged build, search the log file for LogACE messages to help troubleshoot issues.

Failed to connect Error

  • Verify that the URL is complete and correct. The server address must include scheme (http or https), host (IP address or hostname), and port number.

  • Try pinging the host. Make sure the host is accessible and available.

  • Make sure you are running the correct version of the service. The ACE Unreal plugin requires at least v1.0 of the Audio2Face service.

Failed to connect log

Lip Corners Stretch Down Too Much in MetaHuman Facial Animations

Verify that you are using the plugin’s mh_arkit_mapping_pose_A2F pose mapping. See the Update the ARKit Pose Mapping and Animation Sequence section.

Lips Don’t Open Wide Enough in MetaHuman Facial Animations

Verify that the default MetaHuman MouthClose animation is not active. See the Bypass the Default MetaHuman MouthClose Animation section.

Project Settings Don’t Persist

If your NVIDIA ACE project settings don’t seem to persist between Unreal Editor runs, it may be because something is corrupt in your machine’s local saved config files. Try deleting the project’s Saved\Config\WindowsEditor folder and starting the editor again.

Notes for Non-MetaHuman Characters

If you are animating a non-MetaHuman character, consider these additions to the instructions above:

  • The ACE Audio Curve Source component works as long as it is attached to the same actor that contains the Apply ACE Face Animations animation graph node. You can attach it to your character anywhere it makes sense.

  • The Apply ACE Face Animations animation graph node only produces animation curve values. You must have a pose asset for your face model with curves corresponding to ARKit blend shape names. The curves produced by the NVIDIA ACE plugin are:

    • EyeBlinkLeft

    • EyeLookDownLeft

    • EyeLookInLeft

    • EyeLookOutLeft

    • EyeLookUpLeft

    • EyeSquintLeft

    • EyeWideLeft

    • EyeBlinkRight

    • EyeLookDownRight

    • EyeLookInRight

    • EyeLookOutRight

    • EyeLookUpRight

    • EyeSquintRight

    • EyeWideRight

    • JawForward

    • JawLeft

    • JawRight

    • JawOpen

    • MouthClose

    • MouthFunnel

    • MouthPucker

    • MouthLeft

    • MouthRight

    • MouthSmileLeft

    • MouthSmileRight

    • MouthFrownLeft

    • MouthFrownRight

    • MouthDimpleLeft

    • MouthDimpleRight

    • MouthStretchLeft

    • MouthStretchRight

    • MouthRollLower

    • MouthRollUpper

    • MouthShrugLower

    • MouthShrugUpper

    • MouthPressLeft

    • MouthPressRight

    • MouthLowerDownLeft

    • MouthLowerDownRight

    • MouthUpperUpLeft

    • MouthUpperUpRight

    • BrowDownLeft

    • BrowDownRight

    • BrowInnerUp

    • BrowOuterUpLeft

    • BrowOuterUpRight

    • CheekPuff

    • CheekSquintLeft

    • CheekSquintRight

    • NoseSneerLeft

    • NoseSneerRight

    • TongueOut

Upgrading from a Previous Plugin Version

If you already have a version of the NVIDIA ACE or Omniverse Live Link plugin prior to 2.0, you must complete extra steps to update your project.

  • Before installing the new plugin, you can uninstall any previous instances of the ACE or OmniverseLiveLink plugins from your engine and project by removing the ACE or OmniverseLiveLink folder. Then install the new ACE plugin using the instructions above.

  • For a C++ project, you may see a compile-time error such as “Unable to find plugin 'OmniverseLiveLink'”. If so, edit your .uproject descriptor file to replace “OmniverseLiveLink” with “NV_ACE_Reference”.

  • For a blueprint-only project, you may see a dialog box on editor startup about “This project requires the ‘OmniverseLiveLink’ plugin, which could not be found”, and it offers to disable the plugin. Click Yes to disable it.

  • If your project references any content from the pre-2.0 plugin, you may have to set up a project-specific asset redirect to allow it to find the content under the new plugin descriptor name. This particularly affects the custom mh_arkit_mapping_pose_A2F pose asset. If you get errors about missing assets after upgrading the plugin, add this to your project’s Config\DefaultEngine.ini file:

    [CoreRedirects]
    +PackageRedirects=(OldName="OmniverseLiveLink",NewName="/NV_ACE_Reference/",MatchSubstring=true)
    +PackageRedirects=(OldName="/OmniverseLiveLink/mh_arkit_mapping_pose_A2F.mh_arkit_mapping_pose_A2F",NewName="/NV_ACE_Reference/mh_arkit_mapping_pose_A2F.mh_arkit_mapping_pose_A2F",MatchSubstring=true)
    
    • The redirects allow assets with references to the old plugin version to load, but it’s still recommended to resave those assets after they’re properly loaded to force the references to be updated. For MetaHuman models, the asset containing the old plugin references is typically Face_AnimBP.

The legacy LiveLink-based interface, where input had to be provided to Audio2Face through an external application, is provided as-is and is no longer supported. It remains in the current plugin version for convenience while upgrading an application from an older version of Audio2Face, but it will be removed in a future plugin release.