Let’s now take a sample audio file and feed it into Audio2Face-3D to drive the facial animation from speech.

Normally, you would send audio to Audio2Face-3D through its gRPC API. For convenience, a Python script lets you do this from the command line. Clone the ACE repo and follow the steps to set up the script.

The script comes with a sample audio file that is compatible with Audio2Face-3D. From the microservices/audio_2_face_microservice/1.2/scripts/audio2face_in_animation_pipeline_validation_app directory of the ACE repo, run the following command to send the sample audio file to Audio2Face-3D:

python3 validate.py -u 127.0.0.1:50000 -i $stream_id ../../example_audio/Mark_joy.wav

Note

Audio2Face-3D requires audio to be in 16 kHz, mono-channel format.
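Before sending your own audio, you can verify it meets this requirement with Python's standard-library `wave` module. The sketch below is illustrative: the helper name `is_a2f_compatible` is not part of the ACE tooling, and the generated test tone simply stands in for a real recording such as the repo's Mark_joy.wav.

```python
import math
import struct
import wave

def is_a2f_compatible(path):
    """Return True if the WAV file is 16 kHz and mono,
    the format Audio2Face-3D expects. (Hypothetical helper,
    not part of the ACE scripts.)"""
    with wave.open(path, "rb") as wf:
        return wf.getframerate() == 16000 and wf.getnchannels() == 1

# Generate a one-second 16 kHz mono test tone so the check
# can be demonstrated without any external audio file.
with wave.open("test_tone.wav", "wb") as wf:
    wf.setnchannels(1)      # mono
    wf.setsampwidth(2)      # 16-bit PCM samples
    wf.setframerate(16000)  # 16 kHz sample rate
    samples = [
        int(32767 * 0.3 * math.sin(2 * math.pi * 440 * n / 16000))
        for n in range(16000)
    ]
    wf.writeframes(struct.pack("<%dh" % len(samples), *samples))

print(is_a2f_compatible("test_tone.wav"))  # → True
```

If your file fails the check, a common way to convert it is `ffmpeg -i input.wav -ar 16000 -ac 1 output.wav`, which resamples to 16 kHz and downmixes to mono.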