Converts streamed audio to facial blendshapes for real-time lip syncing and facial performances.
NVIDIA Audio2Face-3D NIM and Authoring Microservice allow you to upload audio files to drive an animation. NVIDIA will only use and store the audio files to provide you with the NVIDIA Audio2Face Authoring Microservice. For more information about our data processing practices, see our Privacy Policy. By clicking “Get API Key” you consent to the processing of your data in accordance with the NVIDIA Cloud Agreement and Service-Specific Terms for NVIDIA Audio2Face 3D Authoring Microservice and NVIDIA Audio2Face 3D Microservice NIM.
Audio2Face uses gRPC APIs. The following instructions demonstrate how to call a model using the Python client. The currently available models are Mark, Claire, and James.
You will need a system with Python 3+ installed.
Start by creating a Python virtual environment and activating it:
$ python3 -m venv .venv
$ source .venv/bin/activate
Download the Python client code by cloning the NVIDIA Audio2Face-3D-Samples GitHub repository:
$ git clone https://github.com/NVIDIA/Audio2Face-3D-Samples.git
$ cd Audio2Face-3D-Samples/scripts/audio2face_3d_api_client
Install the protocol buffer bindings by installing the provided Python wheel:
$ pip3 install ../../proto/sample_wheel/nvidia_ace-1.2.0-py3-none-any.whl
Then install the required dependencies:
$ pip3 install -r requirements.txt
To run with the Claire model:
$ python ./nim_a2f_3d_client.py ../../example_audio/Claire_neutral.wav config/config_claire.yml \
    --apikey $API_KEY_REQUIRED_IF_EXECUTING_OUTSIDE_NGC \
    --function-id 0961a6da-fb9e-4f2e-8491-247e5fd7bf8d
To run with the Mark model:
$ python ./nim_a2f_3d_client.py ../../example_audio/Claire_neutral.wav config/config_mark.yml \
    --apikey $API_KEY_REQUIRED_IF_EXECUTING_OUTSIDE_NGC \
    --function-id 8efc55f5-6f00-424e-afe9-26212cd2c630
To run with the James model:
$ python ./nim_a2f_3d_client.py ../../example_audio/Claire_neutral.wav config/config_james.yml \
    --apikey $API_KEY_REQUIRED_IF_EXECUTING_OUTSIDE_NGC \
    --function-id 9327c39f-a361-4e02-bd72-e11b4c9b7b5e
The script takes four mandatory parameters: an audio file in 16-bit PCM format, a YAML configuration file for the emotion parameters, the API key generated through the API Catalog, and the function ID used to access the API function. A quick way to check the audio format is sketched after the parameter list below.
--apikey: the API key generated through the API Catalog
--function-id: the function ID provided to access the API function for the model of interest
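Because the service expects 16-bit PCM audio, it can help to verify a WAV file before sending it. The following is a minimal sketch using Python's standard wave module; the file path reuses the example audio from the samples repository and is only an illustration.

import wave

# Example path from the samples repository; substitute your own audio file.
AUDIO_PATH = "../../example_audio/Claire_neutral.wav"

with wave.open(AUDIO_PATH, "rb") as wav:
    sample_width_bytes = wav.getsampwidth()  # 2 bytes == 16-bit PCM
    channels = wav.getnchannels()
    sample_rate = wav.getframerate()
    print(f"channels={channels}, sample_rate={sample_rate} Hz, "
          f"bit_depth={sample_width_bytes * 8}")
    if sample_width_bytes != 2:
        raise ValueError("Audio2Face-3D expects 16-bit PCM audio")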
For a gRPC connection from any client, use the following endpoint and function ID alongside the API key. To generate a new API key, click the Get API Key button on this page. A minimal Python sketch of setting up the channel and request metadata appears after the function-ID list below.
grpc.nvcf.nvidia.com:443 (or https://grpc.nvcf.nvidia.com:443)
authorization: Bearer $API_KEY_REQUIRED_IF_EXECUTING_OUTSIDE_NGC
function-id: <function ID>
Mark model: 8efc55f5-6f00-424e-afe9-26212cd2c630
Claire model: 0961a6da-fb9e-4f2e-8491-247e5fd7bf8d
James model: 9327c39f-a361-4e02-bd72-e11b4c9b7b5e
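As an illustration of how these values fit together, the sketch below opens a TLS-secured gRPC channel to the endpoint above and prepares the authorization and function-id metadata in Python. It stops short of creating a service stub, since the stub classes come from the generated modules in the nvidia_ace wheel installed earlier; the function ID shown is the Claire model's, and reading the API key from an environment variable is an assumption for this example.

import os
import grpc

# Placeholder values; substitute your own API key and the function ID of the model you want.
API_KEY = os.environ["API_KEY_REQUIRED_IF_EXECUTING_OUTSIDE_NGC"]
FUNCTION_ID = "0961a6da-fb9e-4f2e-8491-247e5fd7bf8d"  # Claire model

# Open a TLS-secured channel to the NVIDIA Cloud Functions gRPC endpoint.
channel = grpc.secure_channel("grpc.nvcf.nvidia.com:443",
                              grpc.ssl_channel_credentials())

# These metadata entries must accompany each RPC made on this channel,
# e.g. stub.SomeStreamingMethod(requests, metadata=metadata) once a stub
# from the nvidia_ace package has been created on the channel.
metadata = (
    ("authorization", f"Bearer {API_KEY}"),
    ("function-id", FUNCTION_ID),
)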