Kairos Unreal Sample Project

The Kairos Unreal Sample Project serves as a guide for game developers looking to integrate the NVIDIA ACE plugin into their game.

You can download the Unreal Engine project files at https://developer.nvidia.com/ace#game-characters.

This documentation gives a brief overview of the sample project and how it works.

Quick Start Guide

This project only needs a few things to get it working.

  1. Install and activate the NVIDIA ACE plugin by following the detailed steps provided in NVIDIA ACE plugin installation.

  2. Configure the Audio2Face settings with your NVIDIA Cloud Functions API Key or by specifying a Server URL to an existing Audio2Face service.

Once configured correctly, pressing the green play button initiates audio and facial animations on the selected target actor.

UI Overview

Upon launching the game, the settings UI appears automatically. You can toggle it on and off using the H hotkey. This interface allows you to configure the NVIDIA ACE plugin settings, select the audio input, and trigger playback on the selected character.

../../_images/kairos_initial_view.png

Actor Controls

Target Actor

../../_images/kairos_target_actor_combo.png

This dropdown menu controls which character receives the animation from the NVIDIA ACE Plugin. It is populated automatically at game start, identifying all Actors with an ACEAudioCurveSourceComponent. Newly added or duplicated characters with the proper configuration also appear in this list.

Zoom To Character

../../_images/kairos_zoom.png

The magnifying glass icon allows you to quickly adjust the camera to focus on the current target actor.

Toggle Idle Animations

Characters in this demo possess simple idle animations that can be toggled on or off individually. This feature halts their idle movements without impacting the animations coming from Audio2Face.

../../_images/kairos_idle_anim.png

With idle animations enabled, characters display subtle movements.

../../_images/kairos_no_idle_anim.png

Disabling idle animations freezes the character in its current pose.

Audio Source

The audio source selection determines the audio input for the NVIDIA ACE Plugin.

../../_images/kairos_audio_source_combo.png

Example

This option locates all AudioComponents attached to the currently selected Target Actor. These components are unique to each character, so the available options in this dropdown change when you select a different target actor.

../../_images/kairos_audiocomponent_combo.png

Local File

In this mode, you can input or browse for a .wav file on your disk. The play button becomes active only if the provided file path exists and all services are properly connected.

../../_images/kairos_audio_input.png

The folder icon opens a dialog allowing you to navigate to and select a .wav file, automatically filling in the file path.

../../_images/kairos_open_dialog.png

Record

The record function enables you to press and hold to capture audio using your microphone. This recording is saved within the game files and is replaced with each new recording. The play button activates if the file exists and all services are connected.

../../_images/kairos_record1.png
../../_images/kairos_record2.png

Play

Pressing the play button sends a request to the NVIDIA ACE Plugin, which then uses the selected audio source to animate the Target Actor.

../../_images/kairos_play_button.png

Audio2Face

Settings

The Audio2Face settings must be configured before the plugin can function. To do this, navigate to the Settings tab within the Audio2Face section.

../../_images/kairos_a2f_settings.png

Here you can provide either an NVIDIA Cloud Functions API Key or a Server URL to an existing Audio2Face service.

You can get your NVIDIA Cloud Functions API Key by visiting https://build.nvidia.com/nvidia/audio2face/api and following the instructions there. The API key starts with nvapi- followed by 64 random characters.
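The key format described above can be sketched as a simple sanity check (the function name is illustrative; the prefix and length come from the text):

```cpp
#include <string>

// Illustrative check for the documented key format:
// "nvapi-" followed by 64 characters.
bool LooksLikeNvcfApiKey(const std::string& Key)
{
    const std::string Prefix = "nvapi-";
    return Key.rfind(Prefix, 0) == 0 && Key.size() == Prefix.size() + 64;
}
```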

Note

If you provide a Server URL that does not begin with http, the blueprints in this sample project automatically prepend http:// to it. This is done within the Content/Core/Blueprints/UI/WB_Components/WB_A2F_Settings:GetConnection function.
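The URL handling in that blueprint can be sketched as follows (the function name is illustrative; the prepend rule is the one stated above):

```cpp
#include <string>

// Sketch of the GetConnection URL normalization: if the configured
// server URL does not start with "http", prepend "http://".
std::string NormalizeServerUrl(const std::string& Url)
{
    if (Url.rfind("http", 0) == 0)
    {
        return Url;
    }
    return "http://" + Url;
}
```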

../../_images/kairos_a2f_server_connection.png
../../_images/kairos_a2f_connected.png

After configuring the service, the animated icon to the left of the Audio2Face group should turn green to indicate that connection settings have been entered. This does not mean the entered settings are valid or that the plugin can connect to the service. Stay tuned as we continue to evolve this functionality.

Emotion Controls

Audio2Face detects emotions from the audio input and applies them to the character animations. If your application has information about a character's emotion, you can also provide it to Audio2Face; application-provided emotion overrides are blended with the detected emotion. Each emotion override value must be between 0.0 and 1.0; values outside that range are ignored. A value of 0.0 represents a neutral emotion.

Note

Emotion and face parameter inputs have no effect for audio clips shorter than 0.5 seconds.

../../_images/kairos_a2f_emotion_ctrls.png
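The override validation described above can be sketched like this (the container shape and names are assumptions for illustration; only the 0.0 to 1.0 range rule comes from the text):

```cpp
#include <map>
#include <string>

// Keep only emotion override values in [0.0, 1.0]; out-of-range
// values are ignored, per the Audio2Face emotion controls.
std::map<std::string, float> FilterEmotionOverrides(
    const std::map<std::string, float>& Overrides)
{
    std::map<std::string, float> Valid;
    for (const auto& [Emotion, Value] : Overrides)
    {
        if (Value >= 0.0f && Value <= 1.0f)
        {
            Valid[Emotion] = Value;
        }
    }
    return Valid;
}
```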

Tune Parameters

Certain Audio2Face service parameters can be overridden by the application. Typically, these parameters are tightly coupled with the model deployed to the service, and changing these settings in the application is not recommended. If you need to change any of these, see the Audio2Face service documentation for details on what they do.

../../_images/kairos_a2f_tune_parameters.png