
Boltz-2

Run Anywhere

Predict complex structures using Boltz-2.

Deploying your application in production? Get started with a 90-day evaluation of NVIDIA AI Enterprise

Follow the steps below to download and run the NVIDIA NIM inference microservice for this model on your infrastructure of choice.

Generate API Key

Pull and Run the NIM

$ docker login nvcr.io
Username: $oauthtoken
Password: <PASTE_API_KEY_HERE>

Start the Boltz2 NIM

  1. Export the NGC_API_KEY environment variable.
export NGC_API_KEY=<your personal NGC key>
  2. The NIM container automatically downloads any required models. To save time and bandwidth, it is recommended to provide a local cache directory so the NIM can reuse any already downloaded models. Execute the following commands to set up the cache directory:
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p $LOCAL_NIM_CACHE
chmod -R 777 $LOCAL_NIM_CACHE
  3. Run the NIM container with the following command.
docker run -it \
    --runtime=nvidia \
    -p 8000:8000 \
    -e NGC_API_KEY \
    -v "$LOCAL_NIM_CACHE":/opt/nim/.cache \
    nvcr.io/nim/mit/boltz2:1.0.0

By default, the NIM runs on all available GPUs. The following example restricts the NIM to device 0:

docker run -it \
    --runtime=nvidia \
    --gpus='"device=0"' \
    -p 8000:8000 \
    -e NGC_API_KEY \
    -v "$LOCAL_NIM_CACHE":/opt/nim/.cache \
    nvcr.io/nim/mit/boltz2:1.0.0
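The container can take a while to pull models on first start. As a convenience, the sketch below polls a health endpoint until the service answers; the `/v1/health/ready` route is an assumption based on common NIM conventions, so verify it against your NIM's documentation before relying on it.

```python
# Minimal sketch: wait for a locally running NIM to become ready.
# Assumption: the service exposes GET /v1/health/ready (common NIM convention).
import time
import urllib.request
import urllib.error


def wait_for_ready(base_url: str = "http://localhost:8000",
                   timeout_s: float = 300.0,
                   interval_s: float = 5.0) -> bool:
    """Return True once the health route answers 200, False on timeout."""
    url = f"{base_url}/v1/health/ready"
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # container still starting up; retry after a short pause
        time.sleep(interval_s)
    return False
```

A typical usage would be `wait_for_ready()` right after `docker run`, proceeding to queries only when it returns True.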
  4. Query the NIM

The following Python script can be saved to a file named boltz2.py and run with python boltz2.py. It posts a request to the locally running NIM and prints the response on success. If the request fails, the response's text field is printed instead.

import requests
import json
from typing import Dict, Any

SEQUENCE = "MTEYKLVVVGACGVGKSALTIQLIQNHFVDEYDPTIEDSYRKQVVID"


def query_boltz2_nim(
    input_data: Dict[str, Any],
    base_url: str = "http://localhost:8000"
) -> Dict[str, Any]:
    """
    Query the Boltz2 NIM with input data.

    Args:
        input_data: Dictionary containing the prediction request data
        base_url: Base URL of the NIM service (default: http://localhost:8000)

    Returns:
        Dictionary containing the prediction response
    """
    # Construct the full URL
    url = f"{base_url}/biology/mit/boltz2/predict"

    # Set headers
    headers = {
        "Content-Type": "application/json"
    }

    try:
        # Make the POST request
        response = requests.post(url, json=input_data, headers=headers)

        # Check if the request was successful
        response.raise_for_status()

        # Return the JSON response
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Error querying NIM: {e}")
        if hasattr(e.response, 'text'):
            print(f"Response text: {e.response.text}")
        raise


# Example usage
if __name__ == "__main__":
    # Example input data - modify this according to your BoltzPredictionRequest structure
    example_input = {
        "polymers": [
            {
                "id": "A",
                "molecule_type": "protein",
                "sequence": SEQUENCE
            }
        ]
    }

    try:
        # Query the NIM
        result = query_boltz2_nim(example_input)

        # Print the result
        print("Prediction result:")
        print(json.dumps(result, indent=2))
    except Exception as e:
        print(f"Failed to get prediction: {e}")
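Once a prediction returns, you will usually want the structure on disk rather than printed as JSON. The sketch below saves each predicted structure to a file; the field names it reads ("structures", "structure", "format") are illustrative assumptions about the response shape, so confirm them against the NIM's actual response schema before relying on them. The example uses a mocked response dictionary so it runs without a live NIM.

```python
# Hedged sketch: write predicted structures from a Boltz2 NIM response to disk.
# Assumption: the response contains a "structures" list whose entries hold the
# file body under "structure" and the file type under "format". Verify these
# field names against the NIM's response schema.
from pathlib import Path
from typing import Any, Dict, List


def save_structures(result: Dict[str, Any], out_dir: str = ".") -> List[Path]:
    """Write each entry in result["structures"] to its own file and return the paths."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    paths: List[Path] = []
    for i, entry in enumerate(result.get("structures", [])):
        suffix = entry.get("format", "cif")      # assumed field; mmCIF fallback
        path = out / f"prediction_{i}.{suffix}"
        path.write_text(entry["structure"])      # assumed field holding the file body
        paths.append(path)
    return paths


# Mocked response so the sketch runs without a live NIM:
mock_result = {"structures": [{"format": "cif", "structure": "data_prediction\n"}]}
written = save_structures(mock_result, out_dir="boltz2_out")
print([p.name for p in written])  # → ['prediction_0.cif']
```

With a live service, you would pass the dictionary returned by query_boltz2_nim in place of mock_result.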