black-forest-labs / FLUX.1-dev

FLUX.1 is a state-of-the-art suite of image generation models.

Run-on-RTX · Image Generation · Text-to-Image

Follow the steps below to download and run the NVIDIA NIM inference microservice for this model on your infrastructure of choice.

Requirements

  • NVIDIA GeForce RTX 4080 or above (see supported GPUs)
  • The latest NVIDIA GPU driver for Windows (version 570 or later); you can verify the installed version as shown below
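
To confirm the installed driver version, run nvidia-smi from a Windows terminal and check that the reported driver version is 570 or later:

nvidia-smi --query-gpu=driver_version,name --format=csv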

Step 1
Ensure virtualization is enabled in the system BIOS

In Windows, open the Task Manager, select the Performance tab, and click CPU. Check whether Virtualization is enabled. If it is disabled, see here for instructions on enabling it in the system BIOS.
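
You can also check from a Windows terminal instead of Task Manager. The exact wording varies by Windows version, and machines with Hyper-V already enabled report "A hypervisor has been detected." instead:

systeminfo | findstr /C:"Virtualization" /C:"hypervisor"

Look for "Virtualization Enabled In Firmware: Yes" in the output.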

Step 2
Open the Windows Subsystem for Linux 2 (WSL2) Distro

Install the WSL2 Distro.
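
To confirm the distro is registered, you can list the installed distros from the Windows terminal; NVIDIA-Workbench should appear in the output:

wsl -l -v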

Once installed, open the NVIDIA-Workbench WSL2 distro using the following command in the Windows terminal.

wsl -d NVIDIA-Workbench

Step 3
Export API Key

To access the FLUX.1-dev model, read and accept the FLUX.1-dev, FLUX.1-Canny-dev, FLUX.1-Depth-dev, and FLUX.1-dev-onnx License Agreements and Acceptable Use Policy.

Create a new Hugging Face token with the "Read access to contents of all public gated repos you can access" permission.

Export your personal credentials as environment variables:

export NGC_API_KEY=<PASTE_API_KEY_HERE>
export HF_TOKEN=<PASTE_HUGGING_FACE_TOKEN_HERE>
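
As a quick sanity check, you can verify the Hugging Face token against the whoami endpoint before proceeding (this assumes outbound network access from the distro):

curl -s https://huggingface.co/api/whoami-v2 -H "Authorization: Bearer $HF_TOKEN"

A valid token returns a JSON document containing your account name; an invalid one returns an error.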

Step 4
Log in to NVIDIA NGC

Log in to NVIDIA NGC so that you can pull the NIM container:

echo "$NGC_API_KEY" | podman login nvcr.io --username '$oauthtoken' --password-stdin

Step 5
Pull and Run NVIDIA NIM

Pull and run the NIM with the command below.

# Create the cache directory on the host machine.
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
chmod 777 "$LOCAL_NIM_CACHE"

podman run -it --rm --name=nim-server \
  --device nvidia.com/gpu=all \
  -e NGC_API_KEY=$NGC_API_KEY \
  -e HF_TOKEN=$HF_TOKEN \
  -p 8000:8000 \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache/" \
  nvcr.io/nim/black-forest-labs/flux.1-dev:latest

You can select a specific FLUX variant by adding -e NIM_MODEL_VARIANT=<your variant> to the command above. Available variants are base, canny, depth, and their combinations, such as base+depth.
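
For example, to serve the combined base and depth variant, the run command becomes:

podman run -it --rm --name=nim-server \
  --device nvidia.com/gpu=all \
  -e NGC_API_KEY=$NGC_API_KEY \
  -e HF_TOKEN=$HF_TOKEN \
  -e NIM_MODEL_VARIANT=base+depth \
  -p 8000:8000 \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache/" \
  nvcr.io/nim/black-forest-labs/flux.1-dev:latest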

When you run the preceding command, the container downloads the model, initializes a NIM inference pipeline, and performs a pipeline warm-up, which typically takes up to three minutes. The warm-up is complete when the container logs show Pipeline warmup: start/done.
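
Rather than watching the logs, you can poll the server and wait for it to come up. This is a minimal sketch, assuming the NIM follows the common NIM convention of exposing a /v1/health/ready endpoint:

# Poll the readiness endpoint until the server responds with success.
until curl -sf http://localhost:8000/v1/health/ready > /dev/null; do
  sleep 10
done
echo "NIM is ready"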

Step 6
Test the NIM

invoke_url="http://localhost:8000/v1/infer"

output_image_path="result.jpg"

response=$(curl -s -X POST "$invoke_url" \
    -H "Accept: application/json" \
    -H "Content-Type: application/json" \
    -d '{
          "prompt": "A simple coffee shop interior",
          "mode": "base",
          "seed": 0,
          "steps": 50
        }')

# Extract the base64-encoded image from the JSON response and decode it.
echo "$response" | jq -r '.artifacts[0].base64' | base64 --decode > "$output_image_path"
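
If the request succeeds, result.jpg contains the generated image. From inside WSL you can check the decoded file and open it in the default Windows viewer (explorer.exe interop is a WSL feature):

file "$output_image_path"
explorer.exe result.jpg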

For more details on getting started with this NIM, including configuration parameters, see the Visual GenAI NIM docs.