Kairos Unreal Sample Project#
The Kairos Unreal Sample Project serves as a guide for game developers looking to integrate the NVIDIA ACE plugin into their game.
You can download the Unreal Engine project files from https://developer.nvidia.com/ace/get-started#section-ace-tools-and-reference-workflows.
This documentation gives a brief overview of the sample project and how it works.
Quick Start Guide#
To get this project working, you need to:
Install and activate the NVIDIA ACE plugin by following the detailed steps provided in NVIDIA ACE plugin installation.
Add a MetaHuman to your scene and configure it to work with the NVIDIA ACE plugin.
Configure the Audio2Face settings with your NVIDIA Cloud Functions API Key or by specifying a Server URL to an existing Audio2Face service.
Once configured correctly, pressing the green play button initiates audio and facial animation on the selected target actor.
Note
We highly recommend that you use the NVIDIA Cloud Functions option whenever possible.
Troubleshooting#
Note
To adhere to Epic’s guidelines, MetaHuman characters are stripped from the project when we publish it. There may be some non-critical errors when you first open the project. These can be safely ignored, and you can easily add MetaHumans back to the project.
Tip
After creating a new project, if you receive a message indicating that the destination URL is missing when returning to the editor, or if animation does not work, ensure that the hostname and API key are correctly populated in the project settings. Re-enter and save these details if necessary.
MetaHuman#
Adding a New MetaHuman#
Common MetaHuman blueprints are included with the project and are already configured to work with the plugin; however, you must add your own MetaHumans to the project to see the NVIDIA ACE plugin in action.
To add a new MetaHuman, follow these steps:
Open Quixel Bridge.
Navigate to the MetaHuman tab and browse for a preset. Download the MetaHuman, then Add it to your project.
After the download completes, browse to the new MetaHuman in the Content Browser. Drag the Blueprint Class into the scene and move the character into place.
Make a few changes to the MetaHuman’s Blueprint Class. Open this class by double-clicking it and add an ACE Audio Curve Source to the scene components. It does not matter where in the hierarchy this is placed. Actors with this component are automatically added to the Target Actor dropdown in the UI and can be used as a target to receive Audio2Face animation.
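If you prefer to set this up from C++ rather than in the Blueprint editor, a minimal sketch might look like the following. The class name UACEAudioCurveSourceComponent and its header path are assumptions inferred from the component name used later in this documentation; check the plugin source for the exact spelling.

```cpp
// MyMetaHumanActor.h -- illustrative sketch only, not part of the sample project.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
// Assumed header name; verify against the ACE plugin source.
#include "ACEAudioCurveSourceComponent.h"
#include "MyMetaHumanActor.generated.h"

UCLASS()
class AMyMetaHumanActor : public AActor
{
    GENERATED_BODY()

public:
    AMyMetaHumanActor()
    {
        // Hierarchy placement does not matter; the component just needs to
        // exist on the actor for it to show up as an Audio2Face target.
        AudioCurveSource = CreateDefaultSubobject<UACEAudioCurveSourceComponent>(TEXT("ACEAudioCurveSource"));
    }

    UPROPERTY(VisibleAnywhere, Category = "ACE")
    UACEAudioCurveSourceComponent* AudioCurveSource = nullptr;
};
```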
[Optional] Audio Components#
This project includes some test audio clips in Content/Audio/AudioClips. These clips can be attached to the actor as AudioComponents, and the in-game UI displays them in the audio examples dropdown.
Add the clips to the Blueprint as AudioComponents by dragging them from the Content Browser into the scene components. You can group them under an empty Scene component to keep things organized.
Make sure to select all of the AudioComponents and deselect Auto Activate in the Details panel. Without this deselected, all of these clips begin playing when the game starts.
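As a rough C++ equivalent of those editor steps, the sketch below attaches a clip to an actor at runtime with Auto Activate disabled. AddExampleClip is a hypothetical helper, not something shipped with the project.

```cpp
#include "Components/AudioComponent.h"
#include "GameFramework/Actor.h"
#include "Sound/SoundBase.h"

// Hypothetical helper: attach one example clip to an actor without auto-playing it.
void AddExampleClip(AActor* Actor, USoundBase* Clip, FName ComponentName)
{
    UAudioComponent* AudioComp = NewObject<UAudioComponent>(Actor, ComponentName);
    AudioComp->SetSound(Clip);
    AudioComp->SetAutoActivate(false); // without this, the clip plays as soon as the game starts
    AudioComp->RegisterComponent();
    AudioComp->AttachToComponent(Actor->GetRootComponent(),
                                 FAttachmentTransformRules::KeepRelativeTransform);
}
```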
[Optional] Idle Animation#
This project includes some very basic idle animations from Unreal Engine’s City Sample. You can retarget these animations to your MetaHuman’s skeletal mesh to add some basic idle motion. The screenshots below of the Retarget Animations dialog are from Unreal Engine 5.4, where the dialog changed from 5.3, but the steps are the same.
Find the desired animation clip within Content/MetaHumans/Animations/. Right-click it and select Retarget Animations.
You must populate the Target Skeletal Mesh field.
To find the skeletal mesh on your character, select the character within the Outliner and, within the Details panel, select the Body component. With that selected, verify that you see a Skeletal Mesh Asset with folder and magnifying glass icons, which you can use to open the Content Browser to that asset.
Select the Skeletal Mesh in the Content Browser and drag it (you can alt-tab after starting the drag) to the Target Skeletal Mesh field in the Retarget Animations window.
Select the animations that you want to export, choose a destination folder for the output, and click Export Animations.
Change your character’s Animation Mode to Use Animation Asset and drop in your newly retargeted Animation Sequence.
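For reference, the same configuration can be made at runtime. A minimal sketch, assuming BodyMesh is the character’s body SkeletalMeshComponent and IdleSequence is your retargeted clip:

```cpp
#include "Components/SkeletalMeshComponent.h"
#include "Animation/AnimSequence.h"

// Equivalent of setting Animation Mode to "Use Animation Asset" in the editor.
void SetIdleAnimation(USkeletalMeshComponent* BodyMesh, UAnimSequence* IdleSequence)
{
    BodyMesh->SetAnimationMode(EAnimationMode::AnimationSingleNode);
    BodyMesh->SetAnimation(IdleSequence);
    BodyMesh->Play(/*bLooping=*/true);
}
```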
UI Overview#
When launching the game, the settings UI appears automatically. You can toggle this UI on and off using the H hotkey. This interface allows you to configure the NVIDIA ACE plugin settings, select the audio input, and trigger playback on the selected character.
Actor Controls#
Target Actor#
This dropdown menu controls which character receives the animation from the NVIDIA ACE Plugin. It is populated automatically at game start and identifies all Actors with an ACEAudioCurveSourceComponent. Newly added or duplicated characters with the proper configuration also appear in this list.
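For illustration, a dropdown like this could be populated with a world scan similar to the sketch below. This is not necessarily how the sample’s UI blueprints do it, and UACEAudioCurveSourceComponent is assumed to be the C++ class behind the component named above.

```cpp
#include "EngineUtils.h" // TActorIterator
// Assumed header for the ACE plugin component; verify the real include path.
#include "ACEAudioCurveSourceComponent.h"

// Collect every actor in the world that carries an ACEAudioCurveSourceComponent.
TArray<AActor*> FindAnimationTargets(UWorld* World)
{
    TArray<AActor*> Targets;
    for (TActorIterator<AActor> It(World); It; ++It)
    {
        if (It->FindComponentByClass<UACEAudioCurveSourceComponent>() != nullptr)
        {
            Targets.Add(*It);
        }
    }
    return Targets;
}
```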
Zoom To Character#
The magnifying glass icon allows you to adjust the camera to focus on the current target actor.
Toggle Idle Animations#
Characters in this demo possess basic idle animations that can be toggled on or off individually. This feature halts their idle movements without impacting the animations coming from Audio2Face.
With idle animations enabled, characters display subtle movements.
Disabling idle animations freezes the character in their current pose.
Audio2Face#
Audio Source#
The audio source selection determines the audio input for the NVIDIA ACE Plugin.
Example#
This option locates all AudioComponents attached to the currently selected Target Actor. These components are unique to each character, which causes the available options in this dropdown to change when you select a different target actor.
Local File#
In this mode, you can input or browse for a .wav file on your disk. The play button becomes active only if the provided file path exists and all services are properly connected.
The folder icon opens a dialog allowing you to navigate to and select a .wav file, and automatically populates the file path.
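The file-based part of that gating amounts to a simple path check. A sketch of what such a check might look like (IsPlayableLocalFile is a hypothetical helper, and this ignores the separate service-connection requirement):

```cpp
#include "Misc/Paths.h"

// True when the path points at an existing file with a .wav extension.
bool IsPlayableLocalFile(const FString& FilePath)
{
    return FPaths::FileExists(FilePath)
        && FPaths::GetExtension(FilePath).Equals(TEXT("wav"), ESearchCase::IgnoreCase);
}
```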
Record#
The record function enables you to press and hold to capture audio using your microphone. This recording is saved within the game files and is replaced with each new recording. The play button activates if the file exists and all services are connected.
Play#
Pressing the play button sends a request to the NVIDIA ACE Plugin, which then uses the selected audio source to animate the Target Actor.
Settings#
The Audio2Face settings must be configured correctly for the plugin to work. To do this, navigate to the Settings tab within the Audio2Face section.
Here you can provide either an NVIDIA Cloud Functions API Key or a Server URL to an existing Audio2Face service.
Note
We highly recommend that you use the NVIDIA Cloud Functions API Key option whenever possible. You can get your NVIDIA Cloud Functions API Key by visiting https://build.nvidia.com/nvidia/audio2face/api and following the instructions there. The API key starts with nvapi- followed by 64 random characters.
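That documented format lends itself to a quick client-side sanity check. A minimal sketch (LooksLikeNvcfApiKey is a hypothetical helper; the service remains the real validator):

```cpp
// Rough format check for the documented key shape: "nvapi-" plus 64 characters.
bool LooksLikeNvcfApiKey(const FString& Key)
{
    const FString Prefix = TEXT("nvapi-");
    return Key.StartsWith(Prefix) && Key.Len() == Prefix.Len() + 64;
}
```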
After configuring the service, the animated icon to the left of the Audio2Face group turns green to indicate that connection settings have been entered. This does not mean that the settings are valid or that the plugin can connect to the service. Stay tuned as we continue to evolve this functionality.
If you use a Server URL that does not begin with http, the blueprints in this sample project automatically prepend http:// to whatever is provided. This is done within the Content/Core/Blueprints/UI/WB_Components/WB_A2F_Settings:GetConnection function.
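In C++ terms, the normalization that GetConnection performs is roughly the following sketch (NormalizeServerUrl is an illustrative name, not a function from the project):

```cpp
// Prepend http:// when the configured Server URL has no scheme.
// Checking for an "http" prefix also covers "https".
FString NormalizeServerUrl(const FString& ServerUrl)
{
    if (!ServerUrl.StartsWith(TEXT("http")))
    {
        return FString::Printf(TEXT("http://%s"), *ServerUrl);
    }
    return ServerUrl;
}
```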
Emotion Controls#
Audio2Face detects emotions from the audio input, and these affect character animations appropriately. If your application has information about character emotion, you can also provide this to Audio2Face, and application-provided emotion overrides are blended with the detected emotion. Each emotion override value must be between 0.0 and 1.0; values outside that range are ignored. A value of 0.0 represents a neutral emotion.
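The validity rule above (out-of-range values are ignored, not clamped) could be applied client-side with something like this sketch; the map-of-name-to-value shape and the FilterEmotionOverrides helper are assumptions for illustration:

```cpp
#include "Containers/Map.h"

// Keep only overrides inside [0.0, 1.0]; anything else is dropped, not clamped.
TMap<FName, float> FilterEmotionOverrides(const TMap<FName, float>& Overrides)
{
    TMap<FName, float> Valid;
    for (const TPair<FName, float>& Pair : Overrides)
    {
        if (Pair.Value >= 0.0f && Pair.Value <= 1.0f)
        {
            Valid.Add(Pair.Key, Pair.Value);
        }
    }
    return Valid;
}
```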
Note
Emotion and face parameter inputs won’t have any effect for audio clips shorter than 0.5 seconds.
Tune Parameters#
Certain Audio2Face service parameters can be overridden by the application. Typically, these parameters are tightly coupled with the model deployed to the service, and changing these settings in the application is not recommended. If you need to change any of these, see the Audio2Face service documentation for details on what they do.