
Follow the steps below to download and run the NVIDIA NIM inference microservice for this model on your infrastructure of choice.
Set your personal NGC API key in the NGC_API_KEY variable and create a local cache directory for the downloaded model data:

export NGC_API_KEY=<your personal NGC key>
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p $LOCAL_NIM_CACHE

Then launch the NIM container:
docker run -it \
--runtime=nvidia \
-p 8000:8000 \
-e NGC_API_KEY \
-v $LOCAL_NIM_CACHE:/opt/nim/.cache \
nvcr.io/nim/deepmind/alphafold2:latest
This command starts the NIM container and exposes port 8000 so you can interact with the NIM.

Wait until the health check returns {"status":"ready"} before proceeding. This may take a couple of minutes. You can query the health check with the following command:

curl http://localhost:8000/v1/health/ready
Once the NIM reports that it is ready, you can send an inference request with a short Python client. Save the following code as nim_client.py:

import requests
import json
url = "http://localhost:8000/protein-structure/alphafold2/predict-structure-from-sequence" # Replace with the actual URL
sequence = "MNVIDIAIAMAI" # Replace with the actual sequence value
headers = {
    "content-type": "application/json"
}
data = {
    "sequence": sequence,
    "databases": ["small_bfd"],
    "e_value": 0.000001,
    "algorithm": "mmseqs2",
    "relax_prediction": False,
}
response = requests.post(url, headers=headers, data=json.dumps(data))
# Check if the request was successful
if response.ok:
    with open("output.pdb", "w") as ofi:
        ofi.write(json.dumps(response.json()))
    print("Request succeeded:", response.json())
else:
    print("Request failed:", response.status_code, response.text)
Run the client:

python nim_client.py
The response is saved to output.pdb. You can view it with:

cat output.pdb
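
Note that nim_client.py writes the JSON-encoded response body to output.pdb. If the service returns the predicted structure as JSON-encoded PDB text (a single string or a list of strings; inspect output.pdb to confirm the shape for your NIM version), a small post-processing sketch like the following could unwrap it into a plain PDB file. The output filename structure.pdb is a hypothetical choice:

import json

with open("output.pdb") as ifi:
    payload = json.load(ifi)

# Unwrap the payload: take the first entry if it is a list, otherwise use it as-is.
pdb_text = payload[0] if isinstance(payload, list) else payload

with open("structure.pdb", "w") as ofi:
    ofi.write(pdb_text)

print("Wrote plain PDB text to structure.pdb")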
You can also query the NIM from the command line with curl. Save the following script as nim_client.sh:

#!/usr/bin/env bash
set -e
URL=http://localhost:8000/protein-structure/alphafold2/predict-structure-from-sequence
request='{
  "sequence": "MNVIDIAIAMAI"
}'
curl -H 'Content-Type: application/json' \
-d "$request" "$URL"
Make the script executable and run it:

chmod +x nim_client.sh
./nim_client.sh