nvidia/audio2face-3d
Converts streamed audio to facial blendshapes for real-time lip-syncing and facial performances.
Legal
NVIDIA Audio2Face-3D NIM and Authoring Microservice allow you to upload audio files to drive an animation. NVIDIA will only use and store the audio files to provide you with the NVIDIA Audio2Face Authoring Microservice. For more information about our data processing practices, see our Privacy Policy. By clicking “Get API Key” you consent to the processing of your data in accordance with the NVIDIA Cloud Agreement and Service-Specific Terms for NVIDIA Audio2Face 3D Authoring Microservice and NVIDIA Audio2Face 3D Microservice NIM.
Getting Started
Audio2Face-3D uses gRPC APIs. The following instructions demonstrate how to use a model with the Python client. The currently available models are Mark, Claire, and James.
Prerequisites
You will need a system with Python 3 or later installed.
Prepare Python Client
Start by creating a Python venv and activating it:
$ python3 -m venv .venv
$ source .venv/bin/activate
Download A2F Python Client
Download the Python client code by cloning the Audio2Face-3D-Samples GitHub repository:
$ git clone https://github.com/NVIDIA/Audio2Face-3D-Samples.git
$ cd Audio2Face-3D-Samples/scripts/audio2face_3d_api_client
Install the proto files by installing the python wheel:
$ pip3 install ../../proto/sample_wheel/nvidia_ace-1.2.0-py3-none-any.whl
Then install the required dependencies:
$ pip3 install -r requirements.txt
Run Python Client
To run with Claire model:
$ python ./nim_a2f_3d_client.py ../../example_audio/Claire_neutral.wav config/config_claire.yml \
    --apikey $API_KEY_REQUIRED_IF_EXECUTING_OUTSIDE_NGC \
    --function-id a05a5522-3059-4dfd-90e4-4bc1699ae9d4
To run with Mark model:
$ python ./nim_a2f_3d_client.py ../../example_audio/Claire_neutral.wav config/config_mark.yml \
    --apikey $API_KEY_REQUIRED_IF_EXECUTING_OUTSIDE_NGC \
    --function-id b85c53f3-5d18-4edf-8b12-875a400eb798
To run with James model:
$ python ./nim_a2f_3d_client.py ../../example_audio/Claire_neutral.wav config/config_james.yml \
    --apikey $API_KEY_REQUIRED_IF_EXECUTING_OUTSIDE_NGC \
    --function-id 52f51a79-324c-4dbe-90ad-798ab665ad64
The script takes four mandatory parameters: an audio file in 16-bit PCM format, a YAML configuration file for the emotion parameters, the API key generated by the API Catalog, and the function ID used to access the API function.
- --apikey: the API key generated through the API Catalog
- --function-id: the function ID provided to access the API function for the model of interest
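Because the client expects 16-bit PCM audio, it can help to verify an input file before sending it. A minimal sketch using Python's standard-library wave module (the function name is illustrative, not part of the client):

```python
import wave

def is_pcm16_wav(path):
    """Return True if the file is a WAV with 16-bit samples (2 bytes each)."""
    with wave.open(path, "rb") as wav:
        return wav.getsampwidth() == 2

# Example: check an input file before passing it to nim_a2f_3d_client.py
# is_pcm16_wav("../../example_audio/Claire_neutral.wav")
```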
What does this example do?
- Reads the audio data from a 16-bit PCM WAV file
- Reads emotions and parameters from the YAML configuration file
- Sends the emotions, parameters, and audio to the A2F Controller
- Receives back blendshapes, audio, and emotions
- Saves the blendshapes as animation key frames in a CSV file with their names, values, and time codes
- Saves the received emotion data to a CSV file in the same way
- Saves the received audio as out.wav (it should match the input audio)
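The blendshape CSV written by the example can then be loaded for inspection or playback. A minimal sketch using the standard-library csv module; the file path and exact column names are assumptions (the example only guarantees names, values, and time codes are stored), so adjust them to match the actual header row:

```python
import csv

def load_blendshape_rows(path):
    """Read the blendshape animation CSV into a list of row dicts,
    one dict per key frame, keyed by the CSV header fields."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# Hypothetical usage (output file name is illustrative):
# rows = load_blendshape_rows("animation_frames.csv")
# Each row maps header fields (e.g. blendshape name, value,
# time code) to their string values for one key frame.
```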
Connect from any client
To connect over gRPC from any client, use the following endpoint and function ID alongside the API key. To generate a new API key, click the Get API Key button on this page.
Endpoint: grpc.nvcf.nvidia.com:443 (or https://grpc.nvcf.nvidia.com:443)
Metadata headers:
- authorization: Bearer $API_KEY_REQUIRED_IF_EXECUTING_OUTSIDE_NGC
- function-id: <function ID>
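In Python, opening that connection amounts to a TLS gRPC channel plus per-call metadata. A minimal sketch with the grpcio package; the helper names are illustrative, and the actual stub and method names come from the nvidia_ace proto wheel, so this only shows the transport setup:

```python
import grpc

NVCF_TARGET = "grpc.nvcf.nvidia.com:443"

def make_call_metadata(api_key, function_id):
    """Build the per-call metadata headers the endpoint expects."""
    return [
        ("authorization", f"Bearer {api_key}"),
        ("function-id", function_id),
    ]

def open_nvcf_channel():
    """Open a TLS gRPC channel to the endpoint (lazy; no I/O until a call)."""
    return grpc.secure_channel(NVCF_TARGET, grpc.ssl_channel_credentials())

# Hypothetical usage with a stub generated from the nvidia_ace wheel
# (stub and method names below are placeholders, not the real API):
#   channel = open_nvcf_channel()
#   stub = SomeGeneratedStub(channel)
#   stub.SomeMethod(request, metadata=make_call_metadata(
#       api_key, "a05a5522-3059-4dfd-90e4-4bc1699ae9d4"))
```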
Function IDs
Mark model: b85c53f3-5d18-4edf-8b12-875a400eb798
Claire model: a05a5522-3059-4dfd-90e4-4bc1699ae9d4
James model: 52f51a79-324c-4dbe-90ad-798ab665ad64
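For scripting against several models, the function IDs above can be kept in a small lookup table; a minimal sketch (the dict and function names are just illustrative):

```python
# Function IDs from the API Catalog page, keyed by model name.
A2F_3D_FUNCTION_IDS = {
    "mark": "b85c53f3-5d18-4edf-8b12-875a400eb798",
    "claire": "a05a5522-3059-4dfd-90e4-4bc1699ae9d4",
    "james": "52f51a79-324c-4dbe-90ad-798ab665ad64",
}

def function_id_for(model):
    """Look up the function ID for a model name (case-insensitive)."""
    return A2F_3D_FUNCTION_IDS[model.lower()]
```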