vLLM for Inference

30 MIN

Install and use vLLM on DGX Spark

Basic idea

vLLM is an inference engine designed to run large language models efficiently. The key idea is maximizing throughput and minimizing memory waste when serving LLMs.

  • It uses a memory-efficient attention algorithm called PagedAttention to handle long sequences without running out of GPU memory.
  • Continuous batching adds new requests to a batch that is already in progress, keeping the GPU fully utilized.
  • It has an OpenAI-compatible API, so applications built for the OpenAI API can switch to a vLLM backend with little or no modification (see the sketch below).
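
Because the API is OpenAI-compatible, a running vLLM server can be queried with any OpenAI-style client. A minimal sketch, assuming a server on localhost:8000 serving openai/gpt-oss-20b from the model matrix below:

  # Query the OpenAI-compatible chat completions endpoint exposed by vLLM.
  # Assumes a vLLM server on localhost:8000 serving openai/gpt-oss-20b.
  curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
          "model": "openai/gpt-oss-20b",
          "messages": [{"role": "user", "content": "Say hello in one sentence."}]
        }'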

What you'll accomplish

You'll set up vLLM for high-throughput LLM serving on DGX Spark's Blackwell architecture, either by using a pre-built Docker container or by building from source with custom LLVM/Triton support for ARM64.
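
As a rough sketch of the container route (the image name and tag below are assumptions; the 25.11-py3 tag is taken from the release note at the bottom of this page, and your registry may differ):

  # Pull a pre-built vLLM container and serve a model from the matrix below.
  # The image reference is an assumption; substitute the one your registry provides.
  docker pull nvcr.io/nvidia/vllm:25.11-py3
  docker run --gpus all -it --rm -p 8000:8000 \
    nvcr.io/nvidia/vllm:25.11-py3 \
    vllm serve nvidia/Qwen3-8B-FP4 --host 0.0.0.0 --port 8000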

What to know before starting

  • Experience building and configuring containers with Docker
  • Familiarity with CUDA toolkit installation and version management
  • Understanding of Python virtual environments and package management
  • Knowledge of building software from source using CMake and Ninja
  • Experience with Git version control and patch management

Prerequisites

  • DGX Spark device with ARM64 processor and Blackwell GPU architecture
  • CUDA 13.0 toolkit installed: nvcc --version reports release 13.0
  • Docker installed and configured: docker --version succeeds
  • NVIDIA Container Toolkit installed
  • Python 3.12 available: python3.12 --version succeeds
  • Git installed: git --version succeeds
  • Network access to download packages and container images
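
The version checks above can be run in one pass; each command should succeed and print a version:

  # Quick prerequisite verification on the DGX Spark host.
  nvcc --version          # expect the CUDA 13.0 toolkit
  docker --version
  python3.12 --version
  git --version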

Model Support Matrix

The following models are supported by vLLM on DGX Spark. All listed models are available and ready to use:

Model                      Quantization  Support Status  HF Handle
GPT-OSS-20B                MXFP4         Supported       openai/gpt-oss-20b
GPT-OSS-120B               MXFP4         Supported       openai/gpt-oss-120b
Llama-3.1-8B-Instruct      FP8           Supported       nvidia/Llama-3.1-8B-Instruct-FP8
Llama-3.1-8B-Instruct      NVFP4         Supported       nvidia/Llama-3.1-8B-Instruct-FP4
Llama-3.3-70B-Instruct     NVFP4         Supported       nvidia/Llama-3.3-70B-Instruct-FP4
Qwen3-8B                   FP8           Supported       nvidia/Qwen3-8B-FP8
Qwen3-8B                   NVFP4         Supported       nvidia/Qwen3-8B-FP4
Qwen3-14B                  FP8           Supported       nvidia/Qwen3-14B-FP8
Qwen3-14B                  NVFP4         Supported       nvidia/Qwen3-14B-FP4
Qwen3-32B                  NVFP4         Supported       nvidia/Qwen3-32B-FP4
Qwen2.5-VL-7B-Instruct     NVFP4         Supported       nvidia/Qwen2.5-VL-7B-Instruct-FP4
Phi-4-multimodal-instruct  FP8           Supported       nvidia/Phi-4-multimodal-instruct-FP8
Phi-4-multimodal-instruct  NVFP4         Supported       nvidia/Phi-4-multimodal-instruct-FP4
Phi-4-reasoning-plus       FP8           Supported       nvidia/Phi-4-reasoning-plus-FP8
Phi-4-reasoning-plus       NVFP4         Supported       nvidia/Phi-4-reasoning-plus-FP4

NOTE

The Phi-4-multimodal-instruct models require --trust-remote-code when launching vLLM.
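
For example, a minimal serve invocation for the FP8 multimodal checkpoint from the matrix above:

  # Phi-4-multimodal checkpoints ship custom model code, so vLLM must be told to trust it.
  vllm serve nvidia/Phi-4-multimodal-instruct-FP8 --trust-remote-code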

NOTE

You can use the NVFP4 Quantization documentation to generate your own NVFP4-quantized checkpoints for your favorite models. This enables you to take advantage of the performance and memory benefits of NVFP4 quantization even for models not already published by NVIDIA.

Reminder: not all model architectures are supported for NVFP4 quantization.
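
Once you have generated such a checkpoint, it can be served like any published model; a sketch assuming a hypothetical local checkpoint directory:

  # Serve a locally generated NVFP4 checkpoint (the path is a placeholder).
  vllm serve /models/my-model-nvfp4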

Time & risk

  • Duration: 30 minutes for the Docker approach
  • Risks: Container registry access requires internal credentials
  • Rollback: Container approach is non-destructive.
  • Last Updated: 01/02/2026
    • Add supported Model Matrix (25.11-py3)
    • Improve cluster setup instructions