Multi-modal Inference

1 HR

Set up multi-modal inference with TensorRT

View on GitHub

Basic idea

Multi-modal inference combines different data types, such as text, images, and audio, within a single model pipeline to generate or interpret richer outputs.
Instead of processing one input type at a time, multi-modal systems build shared representations that enable tasks such as text-to-image generation, image captioning, and vision-language reasoning.

On GPUs, this enables parallel processing across modalities, delivering faster, higher-fidelity results for tasks that combine language and vision.

What you'll accomplish

You'll deploy GPU-accelerated multi-modal inference capabilities on NVIDIA Spark using TensorRT to run Flux.1 and SDXL diffusion models with optimized performance across multiple precision formats (FP16, FP8, FP4).
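
At a high level, the work in this playbook happens inside a GPU-enabled container on the Spark. A minimal sketch of entering such a container is shown below; the image name and tag are assumptions, so use whichever TensorRT-capable image the playbook's GitHub steps specify.

    # Image name and tag are assumptions; substitute the container the
    # playbook's GitHub steps call for.
    docker run --rm -it --gpus all \
        -v "$HOME/.cache/huggingface:/root/.cache/huggingface" \
        nvcr.io/nvidia/pytorch:25.01-py3

Mounting the Hugging Face cache keeps model downloads on the host, so they survive container restarts.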

What to know before starting

  • Working with Docker containers and GPU passthrough
  • Using TensorRT for model optimization
  • Hugging Face model hub authentication and downloads
  • Command-line tools for GPU workloads
  • Basic understanding of diffusion models and image generation

Prerequisites

  • NVIDIA Spark device with Blackwell GPU architecture
  • Docker installed and accessible to current user
  • NVIDIA Container Runtime configured
  • Hugging Face account with access to the Black Forest Labs FLUX.1-dev and FLUX.1-dev-onnx model repositories
  • Hugging Face token configured with access to both FLUX.1 model repositories
  • At least 48GB VRAM available for FP16 Flux.1 Schnell operations
  • Verify GPU access: nvidia-smi
  • Check Docker GPU integration: docker run --rm --gpus all nvidia/cuda:12.0-base-ubuntu20.04 nvidia-smi (a combined check is sketched after this list)
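
One way to confirm these prerequisites from a terminal is sketched below. The Hugging Face steps assume the huggingface_hub CLI is installed; the token you enter must already have access to both FLUX.1 repositories.

    # Confirm the GPU is visible to the driver
    nvidia-smi

    # Confirm Docker can reach the GPU through the NVIDIA Container Runtime
    docker run --rm --gpus all nvidia/cuda:12.0-base-ubuntu20.04 nvidia-smi

    # Store a Hugging Face token and confirm it is picked up
    # (assumes: pip install -U "huggingface_hub[cli]")
    huggingface-cli login
    huggingface-cli whoami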

Ancillary files

All necessary files can be found in the TensorRT repository on GitHub; an example invocation is sketched after the list below.

  • requirements.txt - Python dependencies for TensorRT demo environment
  • demo_txt2img_flux.py - Flux.1 model inference script
  • demo_txt2img_xl.py - SDXL model inference script
  • TensorRT repository - Contains diffusion demo code and optimization tools
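
A rough sketch of how these files are used is shown below. The directory layout matches the public TensorRT repository, but the script flags and prompts are assumptions; check each script's --help output and the playbook's GitHub steps for the exact invocation.

    # Clone the TensorRT repository and move into the diffusion demo
    git clone https://github.com/NVIDIA/TensorRT.git
    cd TensorRT/demo/Diffusion

    # Install the demo's Python dependencies
    pip install -r requirements.txt

    # Flux.1 text-to-image (prompt is illustrative; --hf-token is an assumed flag)
    python3 demo_txt2img_flux.py "a photo of a red sports car on a coastal road" \
        --hf-token "$HF_TOKEN"

    # SDXL text-to-image with the companion script
    python3 demo_txt2img_xl.py "a watercolor painting of a lighthouse" \
        --hf-token "$HF_TOKEN"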

Time & risk

  • Duration: 45-90 minutes depending on model downloads and optimization steps

  • Risks:

    • Large model downloads may time out
    • High VRAM requirements may cause OOM errors
    • Quantized models may show quality degradation
  • Rollback:

    • Remove downloaded models from the Hugging Face cache (see the cleanup sketch after this list)
    • Then exit the container environment
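
A minimal cleanup sketch, assuming the default Hugging Face cache location and the standard models--<org>--<name> directory layout; verify what is actually in the cache before deleting anything.

    # See what the Hugging Face cache is holding
    du -sh ~/.cache/huggingface/hub/* 2>/dev/null

    # Remove the FLUX.1 snapshots downloaded for this playbook
    rm -rf ~/.cache/huggingface/hub/models--black-forest-labs--FLUX.1-dev
    rm -rf ~/.cache/huggingface/hub/models--black-forest-labs--FLUX.1-dev-onnx

    # Leave the container environment
    exit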

Resources

  • DGX Spark Documentation
  • DGX Spark Forum