nvidia/eyecontact

RUN ANYWHERE

Estimate gaze angles of a person in a video and redirect to make it frontal.

By running the below commands, you accept the NVIDIA AI Enterprise Terms of Use and the NVIDIA Community Models License.

Pull and run nvidia/eyecontact using Docker (this will download the full model and run it in your local environment)

$ docker login nvcr.io
Username: $oauthtoken
Password: <PASTE_API_KEY_HERE>

NVIDIA Maxine Eye Contact NIM uses gRPC APIs for inferencing requests.

An NGC API Key is required to download the appropriate models and resources when starting the NIM. Pass the value of the API Key to the docker run command in the next section as the NGC_API_KEY environment variable, as indicated.

If you are not familiar with how to create the NGC_API_KEY environment variable, the simplest way is to export it in your terminal:

export NGC_API_KEY=<PASTE_API_KEY_HERE>

Run one of the following commands to make the key available at startup:

# If using bash
echo "export NGC_API_KEY=<value>" >> ~/.bashrc

# If using zsh
echo "export NGC_API_KEY=<value>" >> ~/.zshrc

Other, more secure options include saving the value in a file, so that you can retrieve it with cat $NGC_API_KEY_FILE, or using a password manager.
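As one illustration of the file-based approach described above, the lookup can be sketched in Python. The NGC_API_KEY_FILE variable and the fallback logic are illustrative conventions, not part of the NIM itself:

```python
import os
from pathlib import Path


def load_ngc_api_key(key_file: str = "") -> str:
    """Return the NGC API key from the environment, falling back to a key file."""
    key = os.environ.get("NGC_API_KEY", "")
    if key:
        return key.strip()
    # NGC_API_KEY_FILE mirrors the convention mentioned above; the path is up to you.
    path = key_file or os.environ.get("NGC_API_KEY_FILE", "")
    if path and Path(path).is_file():
        return Path(path).read_text().strip()
    raise RuntimeError("Set NGC_API_KEY or point NGC_API_KEY_FILE at a key file.")
```

Either way, the value ends up in NGC_API_KEY before you start the container.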

To pull the NIM container image from NGC, first authenticate with the NVIDIA Container Registry using the following command:

echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin

Use $oauthtoken as the username and NGC_API_KEY as the password. The $oauthtoken username is a special keyword that indicates that you will authenticate with an API Key and not a user name and password.

The following command launches a container with the gRPC service.

docker run -it --rm --name=maxine-eye-contact-nim \
  --net host \
  --runtime=nvidia \
  --gpus all \
  --shm-size=8GB \
  -e NGC_API_KEY=$NGC_API_KEY \
  -e MAXINE_MAX_CONCURRENCY_PER_GPU=1 \
  -e NIM_MANIFEST_PROFILE=7f0287aa-35d0-11ef-9bba-57fc54315ba3 \
  -e NIM_HTTP_API_PORT=9000 \
  -e NIM_GRPC_API_PORT=50051 \
  -p 9000:9000 \
  -p 50051:50051 \
  nvcr.io/nim/nvidia/maxine-eye-contact:latest

Note that the --gpus all flag assigns all available GPUs to the Docker container. To assign specific GPUs to the container (when multiple GPUs are available on your machine), use --gpus '"device=0,1,2..."'

If the command runs successfully, you get a response similar to the following.

+------------------------+---------+--------+
| Model                  | Version | Status |
+------------------------+---------+--------+
| GazeRedirectionKey68   | 1       | READY  |
| maxine_nvcf_eyecontact | 1       | READY  |
+------------------------+---------+--------+
I0903 10:35:41.663046 47 metrics.cc:808] Collecting metrics for GPU 0: GPU Name
I0903 10:35:41.663361 47 metrics.cc:701] Collecting CPU metrics
I0903 10:35:41.663588 47 tritonserver.cc:2385]
+----------------------------------+----------------------------------------------------------------------------+
| Option                           | Value                                                                      |
+----------------------------------+----------------------------------------------------------------------------+
| server_id                        | triton                                                                     |
| server_version                   | 2.35.0                                                                     |
| server_extensions                | classification sequence model_repository model_repository(unload_depende   |
|                                  | nts) schedule_policy model_configuration system_shared_memory cuda_shared  |
|                                  | _memory binary_tensor_data parameters statistics trace logging             |
| model_repository_path[0]         | /opt/maxine/models                                                         |
| model_control_mode               | MODE_NONE                                                                  |
| strict_model_config              | 0                                                                          |
| rate_limit                       | OFF                                                                        |
| pinned_memory_pool_byte_size     | 268435456                                                                  |
| cuda_memory_pool_byte_size{0}    | 67108864                                                                   |
| min_supported_compute_capability | 6.0                                                                        |
| strict_readiness                 | 1                                                                          |
| exit_timeout                     | 30                                                                         |
| cache_enabled                    | 0                                                                          |
+----------------------------------+----------------------------------------------------------------------------+
I0903 10:35:41.664874 47 grpc_server.cc:2445] Started GRPCInferenceService at 0.0.0.0:8001
I0903 10:35:41.665204 47 http_server.cc:3555] Started HTTPService at 0.0.0.0:8000
I0903 10:35:41.706437 47 http_server.cc:185] Started Metrics Service at 0.0.0.0:8002
Maxine GRPC Service: Listening to 0.0.0.0:8004

By default, the Maxine Eye Contact gRPC service is hosted on port 8004. Use this port for inference requests.
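Before sending inference requests, you can verify that the service port is reachable. This stdlib-only sketch (the host and port values are assumptions; adjust them to your deployment) only confirms a TCP connection, not full gRPC health:

```python
import socket


def port_open(host: str, port: int, timeout_s: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # 8004 is the default Maxine Eye Contact gRPC port from the startup log above.
    print("reachable" if port_open("127.0.0.1", 8004) else "unreachable")
```

If the port is unreachable, re-check the docker run command and that the container finished starting up.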

We have provided a sample client script in our GitHub repo. It can be used to invoke the Docker container by following the instructions below.

Download the Maxine Eye Contact Python client code by cloning the NIM Client Repository:

git clone https://github.com/NVIDIA-Maxine/nim-clients.git
cd nim-clients/eye-contact

Install the dependencies for the Maxine Eye Contact gRPC client:

sudo apt-get install python3-pip
pip install -r requirements.txt

Go to the scripts directory:

cd scripts

Run the following command to send a gRPC request:

python eye-contact.py --target <target_ip:port> --input <input file path> --output <output file path along with file name>
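To batch several files through the sample client, you can wrap the command above in Python. The function below only builds the argument list shown in that command; the folder names in the loop are illustrative:

```python
import subprocess
from pathlib import Path


def build_command(target: str, input_path, output_path) -> list[str]:
    """Build the eye-contact.py invocation shown above as an argument list."""
    return ["python", "eye-contact.py",
            "--target", target,
            "--input", str(input_path),
            "--output", str(output_path)]


if __name__ == "__main__":
    # Process every .mp4 in an input folder (paths are illustrative).
    for video in Path("videos").glob("*.mp4"):
        cmd = build_command("127.0.0.1:8004", video, Path("out") / video.name)
        subprocess.run(cmd, check=True)
```

Run it from the scripts directory so eye-contact.py resolves correctly.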

To view details of the command-line arguments, run:

python eye-contact.py -h

You will get a response similar to the following.

usage: eye-contact.py [-h] [--target TARGET] [--input INPUT] [--output OUTPUT]

Process mp4 video files using gRPC and apply Gaze-redirection.

options:
  -h, --help       show this help message and exit
  --target TARGET  The target gRPC server address.
  --input INPUT    The path to the input video file.
  --output OUTPUT  The path for the output video file.

For more details on getting started with this NIM, including configuration parameters, visit the NVIDIA Maxine Eye Contact NIM Docs.