
MSFT TRELLIS is a 3D AI model that generates high-quality 3D assets from text or image inputs.
Follow the steps below to download and run the NVIDIA NIM inference microservice for this model on your infrastructure of choice.
Export your NGC API key as an environment variable:
export NGC_API_KEY=<PASTE_API_KEY_HERE>
Log in to NVIDIA NGC so that you can pull the NIM container:
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin
Pull and run the NIM with the commands below.
# Create the cache directory on the host machine.
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
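# Make the cache directory writable by the user inside the container.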
chmod 777 "$LOCAL_NIM_CACHE"
docker run -it --rm --name=nim-server \
  --runtime=nvidia --gpus='"device=0"' \
  -e NGC_API_KEY="$NGC_API_KEY" \
  -p 8000:8000 \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache/" \
  nvcr.io/nim/microsoft/trellis:latest
You can select the desired TRELLIS variant by adding -e NIM_MODEL_VARIANT=<your variant> to the docker run command, as shown below. Available variants are base:text, large:text, large:image, and large:text+large:image.
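For example, the following command (identical to the one above, with one added variable) runs the large:image variant:
# Run the image-conditioned variant by setting NIM_MODEL_VARIANT.
docker run -it --rm --name=nim-server \
  --runtime=nvidia --gpus='"device=0"' \
  -e NGC_API_KEY="$NGC_API_KEY" \
  -e NIM_MODEL_VARIANT=large:image \
  -p 8000:8000 \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache/" \
  nvcr.io/nim/microsoft/trellis:latest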
When you run the preceding command, the container downloads the model, initializes the NIM inference pipeline, and performs a pipeline warm-up.
The warm-up typically takes up to five minutes; it is complete when the container logs show Pipeline warmup: start/done.
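Rather than watching the logs, you can poll the server. NIM microservices conventionally expose a readiness endpoint at /v1/health/ready; assuming this NIM follows that convention, the loop below blocks until the server responds:
# Poll the (assumed) standard NIM readiness endpoint until the server reports ready.
until curl -sf http://localhost:8000/v1/health/ready > /dev/null; do
  echo "Waiting for the NIM to become ready..."
  sleep 10
done
echo "NIM is ready."
Once the server is ready, send an inference request and decode the returned asset: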
invoke_url="http://localhost:8000/v1/infer"
output_asset_path="result.glb"
response=$(curl -s -X POST "$invoke_url" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "A simple coffee shop interior",
    "seed": 0
  }')
# Extract the base64-encoded GLB artifact from the JSON response and decode it to disk.
echo "$response" | jq -r '.artifacts[0].base64' | base64 --decode > "$output_asset_path"
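As a quick sanity check, binary glTF (.glb) files begin with the four-byte ASCII magic glTF, so you can confirm the decode produced a plausible asset:
# A valid GLB file starts with the ASCII magic "glTF".
head -c 4 "$output_asset_path"; echo   # should print: glTF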
For more details on getting started with this NIM, including its configuration parameters, see the Visual GenAI NIM documentation.