User Interface#

Attribute Editor - AceAnimationPlayer#

Network and Audio#

Connection settings for the ACE Audio2Face-3D service and the audio file used to request animation.

../../../_images/ui_network_and_audio.png
  • NVCF Api Key: A valid API key acquired from the page. Note that the authoring feature requires a different key (from NGC) to access the service. Please reach out to your NVIDIA contact or to NVIDIA support if you don’t have access to NGC.

  • Network Preset: An option menu to use predefined network settings.

  • Connect and Send Audio: A button that sends audio and receives animation (Streaming) or starts a live session (Authoring).

  • Status Indicator: A colored circle that visualizes the communication status: green (successful), red (error), and yellow (outdated).

  • Client Type: Selects which of the two types of ACE Audio2Face-3D servers to use.

    • Streaming: Downloads the blendshape weights for all frames at once, so the animation can be played back without network delay.

    • Authoring (Early Access): Interactive exploration of parameters and the output blendshape weights. Whenever the user changes a parameter or updates the timeline, a new gRPC request is sent to the server to get the latest result reflecting the new configuration. This allows the user to see the updated blendshape weights in real time, without having to wait for the entire animation to be processed.

  • Network Address: The full URL including protocol and port number, for example: https://grpc.nvcf.nvidia.com:443

  • NVCF Function Id: A valid function ID specific to the service and AI model.

    • Streaming (You can find the latest Function Id from this page)

      • Mark model: 945ed566-a023-4677-9a49-61ede107fd5a

      • Claire model: 462f7853-60e8-474a-9728-7b598e58472c

    • Authoring (Early Access)

      • Mark model: be24fd18-4c26-4a38-84ad-c7f88da10835

      • Claire model: f33c62b0-96d2-434a-9a4c-e89b7c064be5

Note

Press the “Enter” key after updating the text in a textbox to apply the change.

  • Audiofile: The path to an audio file used to request animation from the service. Previously imported audio files can also be selected from the drop-down option.
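As a rough sketch of what the connection settings above amount to on the client side: an NVCF gRPC request typically attaches the API key and function ID as per-call metadata. The header names, endpoint usage, and the `nvcf_call_metadata` helper below are illustrative assumptions, not the documented Audio2Face-3D client API.

```python
# Assumed values for illustration -- taken from the "Network Address" and
# "NVCF Function Id" fields documented above.
NETWORK_ADDRESS = "grpc.nvcf.nvidia.com:443"
FUNCTION_ID = "945ed566-a023-4677-9a49-61ede107fd5a"  # Mark model (Streaming)


def nvcf_call_metadata(api_key: str, function_id: str) -> tuple:
    """Build per-call gRPC metadata for an NVCF request.

    The header names ("authorization", "function-id") are assumptions for
    illustration; check the NVCF documentation for the exact contract.
    """
    return (
        ("authorization", f"Bearer {api_key}"),
        ("function-id", function_id),
    )


# "nvapi-XXXX" is a placeholder -- substitute your own NVCF API key.
metadata = nvcf_call_metadata("nvapi-XXXX", FUNCTION_ID)
```

These pairs would then be passed as the `metadata` argument of a gRPC call made over a secure channel to `NETWORK_ADDRESS`.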

Emotion Parameters#

Parameters to control the generated emotion and the preferred (manual) emotion. Audio2Face-3D generates animation from an emotion input that combines the generated emotion and the preferred emotion. Please watch this video to understand how it works.

../../../_images/ui_emotion_params.png
  • Emotion Strength: The strength of the overall emotion, i.e. the combination of the auto (generated) emotion and the preferred (manual) emotion.

    • emotion = emotion strength * (preferred weight * preferred emotion + (1.0 - preferred weight) * generated emotion)

  • Preferred Emotion: Enables/disables the user-driven emotion and sets its ratio in the overall emotion (1.0 = 100% preferred emotion, 0.0 = 100% generated emotion).

  • Auto Emotion: Parameters to control the generated emotion.
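The blending rule above can be sketched in Python. Scalar emotion values are used for simplicity; in practice each emotion is a set of per-emotion weights, blended component-wise.

```python
def blend_emotion(strength: float, preferred_weight: float,
                  preferred: float, generated: float) -> float:
    """Combine preferred (manual) and generated emotion per the formula above:

    emotion = strength * (w * preferred + (1 - w) * generated)
    """
    return strength * (preferred_weight * preferred
                       + (1.0 - preferred_weight) * generated)


# preferred_weight = 1.0 -> 100% preferred emotion
blend_emotion(1.0, 1.0, 0.8, 0.2)  # -> 0.8
# preferred_weight = 0.0 -> 100% generated emotion
blend_emotion(1.0, 0.0, 0.8, 0.2)  # -> 0.2
```

Emotion Strength scales the blended result as a whole, while Preferred Emotion only shifts the balance between the manual and generated inputs.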

Face Parameters#

Parameters to control the overall face animation. Please check the Audio2Face-3D Microservice documentation for up-to-date information.

../../../_images/ui_face_params.png

Blendshape Multipliers#

Override specific expressions by multiplying the Audio2Face-3D result.

../../../_images/ui_bs_multipliers.png

Blendshape Offsets#

Override specific expressions by adding constant values to the Audio2Face-3D result:

  • each output = (raw result * multiplier) + offset

../../../_images/ui_bs_offsets.png