
black-forest-labs
FLUX.1-schnell
Run Anywhere
FLUX.1-schnell is a distilled image generation model that produces high-quality images at high speed.
Follow the steps below to download and run the NVIDIA NIM inference microservice for this model on your infrastructure of choice.
Get Credentials
To access the FLUX.1-schnell model, read and accept the FLUX.1-schnell and FLUX.1-schnell-onnx License Agreements and Acceptable Use Policy.
Create a new Hugging Face token with the "Read access to contents of all public gated repos you can access" permission.
Export your personal credentials as environment variables:
export NGC_API_KEY=<PASTE_API_KEY_HERE>
export HF_TOKEN=<PASTE_HUGGING_FACE_TOKEN_HERE>
Pull and Run the NIM
Log in to NVIDIA NGC so that you can pull the NIM container:
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin
Pull and run the NIM with the command below.
# Create the cache directory on the host machine.
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
chmod 777 "$LOCAL_NIM_CACHE"

docker run -it --rm --name=nim-server \
  --runtime=nvidia --gpus='"device=0"' \
  -e NGC_API_KEY=$NGC_API_KEY \
  -e HF_TOKEN=$HF_TOKEN \
  -p 8000:8000 \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache/" \
  nvcr.io/nim/black-forest-labs/flux.1-schnell:1.0.0
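Before sending inference requests, you can confirm the server is reachable. The sketch below builds the readiness-probe URL; `/v1/health/ready` is the readiness endpoint NIM containers expose, and `localhost:8000` assumes the port mapping from the command above.

```shell
# Base URL assumes the -p 8000:8000 mapping used in the docker run command.
base_url="http://localhost:8000"
# /v1/health/ready is the readiness probe exposed by NIM containers.
ready_url="$base_url/v1/health/ready"
echo "$ready_url"
# Against a running container: curl -s "$ready_url"
```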
When you run the preceding command, the container downloads the model, initializes a NIM inference pipeline, and performs a pipeline warm up.
A pipeline warm up typically requires up to three minutes. The warm up is complete when the container logs show Pipeline warmup: start/done.
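One way to wait for that message is to watch the container logs. The live command is shown as a comment; the pipeline below runs the same grep against a stand-in log line so the snippet is self-contained.

```shell
# Against the running container started above, you could tail the logs:
#   docker logs -f nim-server 2>&1 | grep "Pipeline warmup"
# Demonstrated here on a stand-in log line:
printf 'INFO  Pipeline warmup: done\n' | grep -o 'Pipeline warmup: done'
```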
Test the NIM
invoke_url="http://localhost:8000/v1/infer"
output_image_path="result.jpg"

response=$(curl -s -X POST "$invoke_url" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -d '{
        "prompt": "A simple coffee shop interior",
        "seed": 0,
        "steps": 4
      }')

echo "$response" | jq -r '.artifacts[0].base64' | base64 --decode > "$output_image_path"
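The final step of the script decodes the base64 payload from `.artifacts[0].base64` into a JPEG file. The sketch below isolates that decode step with a stand-in payload (just the JPEG magic bytes, not a real image) to show what a valid result should start with.

```shell
# Stand-in payload: the JPEG magic bytes plus a marker byte,
# base64-encoded the way the API returns image data.
sample_b64=$(printf '\xff\xd8\xff\xe0' | base64)
echo "$sample_b64" | base64 --decode > sample.jpg
# Every JPEG file begins with the bytes ff d8 ff:
head -c 3 sample.jpg | od -An -tx1 | tr -d ' \n'
```

On a real response, inspecting result.jpg the same way should likewise show the ff d8 ff prefix.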
For more details on getting started with this NIM, including configuration parameters, visit the Visual GenAI NIM docs.