Run models with llama.cpp on DGX Spark

30 MIN

Build llama.cpp with CUDA and serve models via an OpenAI-compatible API (Gemma 4 31B IT as example)

DGX Spark · Inference · LLM · llama.cpp
View llama.cpp on GitHub

Step 1
Verify prerequisites

This walkthrough uses Gemma 4 31B IT (gemma-4-31B-it-f16.gguf) as the example checkpoint. You can substitute another GGUF from ggml-org/gemma-4-31B-it-GGUF (for example Q4_K_M or Q8_0) by changing the hf download filename and --model path in later steps.

Ensure the required tools are installed:

git --version
cmake --version
nvcc --version

All commands should return version information. If any are missing, install them before continuing.

Install the Hugging Face CLI:

python3 -m venv llama-cpp-venv
source llama-cpp-venv/bin/activate
pip install -U "huggingface_hub[cli]"

Verify installation:

hf version

Step 2
Clone the llama.cpp repository

Clone the upstream llama.cpp repository (the framework you will build in the next step):

git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp

Step 3
Build llama.cpp with CUDA

Configure CMake with CUDA and GB10’s sm_121 architecture so GGML’s CUDA backend matches your GPU:

mkdir build && cd build
cmake .. -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES="121" -DLLAMA_CURL=OFF
make -j8

The build usually takes on the order of 5–10 minutes. When it finishes, binaries such as llama-server appear under build/bin/.

Step 4
Download Gemma 4 31B IT GGUF (supported model example)

llama.cpp loads models in GGUF format. gemma-4-31B-it is available in GGUF on Hugging Face; this playbook uses an F16 variant that balances quality and memory on GB10-class hardware.

hf download ggml-org/gemma-4-31B-it-GGUF \
  gemma-4-31B-it-f16.gguf \
  --local-dir ~/models/gemma-4-31B-it-GGUF

The F16 file is large (~62 GB). If the download is interrupted, rerunning the command resumes it.

Step 5
Start llama-server with Gemma 4 31B IT

From your llama.cpp/build directory, launch the OpenAI-compatible server with GPU offload:

./bin/llama-server \
  --model ~/models/gemma-4-31B-it-GGUF/gemma-4-31B-it-f16.gguf \
  --host 0.0.0.0 \
  --port 30000 \
  --n-gpu-layers 99 \
  --ctx-size 8192 \
  --threads 8

Key parameters:

  • --host / --port: bind address and port for the HTTP API
  • --n-gpu-layers 99: offload layers to the GPU (99 exceeds the model's layer count, so all layers are offloaded; adjust if you use a different model)
  • --ctx-size: context length (can be increased up to model/server limits; uses more memory)
  • --threads: CPU threads for non-GPU work
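
A larger --ctx-size grows the KV cache, which is the main memory cost of longer contexts. The sketch below estimates that cost; the model dimensions are hypothetical placeholders, not Gemma 4 31B's actual configuration, so substitute values from your model's metadata (llama-server prints them at load time).

```python
# Rough KV-cache size estimate for --ctx-size planning.
# The dimensions below are illustrative placeholders only.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx, bytes_per_elt=2):
    # K and V each store ctx * n_kv_heads * head_dim elements per layer
    return 2 * n_layers * ctx * n_kv_heads * head_dim * bytes_per_elt

gib = kv_cache_bytes(n_layers=48, n_kv_heads=8, head_dim=128, ctx=8192) / 2**30
print(f"~{gib:.1f} GiB for an F16 KV cache")  # → ~1.5 GiB
```

Doubling ctx doubles this figure, so budget accordingly before raising the limit.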

You should see log lines similar to:

llama_new_context_with_model: n_ctx = 8192
...
main: server is listening on 0.0.0.0:30000

Keep this terminal open while testing. Large GGUFs can take several minutes to load; until you see server is listening, nothing accepts connections on port 30000 (see Troubleshooting if curl reports connection refused).

Step 6
Test the API

Use a second terminal on the same machine that runs llama-server (for example another SSH session into DGX Spark). If you run curl on your laptop while the server runs only on Spark, use the Spark hostname or IP instead of localhost.

curl -X POST http://127.0.0.1:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma4",
    "messages": [{"role": "user", "content": "New York is a great city because..."}],
    "max_tokens": 100
  }'

If you see curl: (7) Failed to connect, the server is still loading, the process exited (check the server log for OOM or path errors), or you are not curling the host that runs llama-server.

Example shape of the response (fields vary by llama.cpp version; message may include extra keys):

{
  "choices": [
    {
      "finish_reason": "length",
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "New York is a great city because it's a living, breathing collage of cultures, ideas, and possibilities—all stacked into one vibrant, never‑sleeping metropolis. Here are just a few reasons that many people ("
      }
    }
  ],
  "created": 1765916539,
  "model": "gemma-4-31B-it-f16.gguf",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 100,
    "prompt_tokens": 25,
    "total_tokens": 125
  },
  "id": "chatcmpl-...",
  "timings": {
    ...
  }
}
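
To consume this response programmatically, pull the generated text out of choices[0].message.content. A minimal sketch in Python, parsing a trimmed copy of the sample above rather than a live response:

```python
import json

# Trimmed sample mirroring the response shape shown above; extra keys
# (timings, id, created) vary by llama.cpp build and are omitted here.
sample = json.loads("""
{
  "choices": [
    {
      "finish_reason": "length",
      "index": 0,
      "message": {"role": "assistant", "content": "New York is a great city because..."}
    }
  ],
  "model": "gemma-4-31B-it-f16.gguf",
  "usage": {"completion_tokens": 100, "prompt_tokens": 25, "total_tokens": 125}
}
""")

def extract_reply(resp):
    # The generated text lives at choices[0].message.content
    return resp["choices"][0]["message"]["content"]

print(extract_reply(sample))
print("tokens used:", sample["usage"]["total_tokens"])  # → tokens used: 125
```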

Step 7
Run a longer completion

Try a slightly longer prompt to confirm stable generation with Gemma 4 31B IT:

curl -X POST http://127.0.0.1:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma4",
    "messages": [{"role": "user", "content": "Solve this step by step: If a train travels 120 miles in 2 hours, what is its average speed?"}],
    "max_tokens": 500
  }'
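
For long generations you can also request streaming by adding "stream": true to the payload; llama-server then emits OpenAI-style server-sent events, one `data: {...}` line per token chunk, ending with `data: [DONE]`. A sketch of client-side reassembly, using illustrative sample lines rather than a captured real stream:

```python
import json

# Illustrative SSE lines in the OpenAI streaming format; each chunk
# carries an incremental piece of text under choices[0].delta.content.
sample_events = [
    'data: {"choices":[{"index":0,"delta":{"content":"The "}}]}',
    'data: {"choices":[{"index":0,"delta":{"content":"speed is 60 mph."}}]}',
    'data: [DONE]',
]

def collect_stream(lines):
    # Concatenate delta.content from each event until the [DONE] sentinel
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue
        body = line[len("data: "):]
        if body == "[DONE]":
            break
        delta = json.loads(body)["choices"][0]["delta"]
        text.append(delta.get("content", ""))
    return "".join(text)

print(collect_stream(sample_events))  # → The speed is 60 mph.
```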

Step 8
Cleanup

Stop the server with Ctrl+C in the terminal where it is running.

To remove this tutorial’s artifacts:

rm -rf ~/llama.cpp
rm -rf ~/models/gemma-4-31B-it-GGUF

Deactivate the Python venv if you no longer need hf:

deactivate

Step 9
Next steps

  1. Context length: Increase --ctx-size for longer chats (watch memory; 1M-token class contexts are possible only when the build, model, and hardware allow).
  2. Other models: Point --model at any compatible GGUF; the llama.cpp server API stays the same.
  3. Integrations: Point Open WebUI, Continue.dev, or custom clients at http://<spark-host>:30000/v1 using the OpenAI client pattern.

The server implements the usual OpenAI-style chat features your llama.cpp build enables (including streaming and tool-related flows where supported).
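
As a sketch of that OpenAI client pattern using only the Python standard library (no openai package required) — the helper names build_payload and chat are illustrative, not part of llama.cpp:

```python
import json
import urllib.request

def build_payload(prompt, model="gemma4", max_tokens=100):
    # Same request body used in the curl examples above
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(base_url, prompt, **kwargs):
    # POST to the OpenAI-compatible endpoint and return the reply text
    data = json.dumps(build_payload(prompt, **kwargs)).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# chat("http://<spark-host>:30000", "Hello!")  # requires a running server
```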

Resources

  • llama.cpp GitHub Repository
  • DGX Spark Documentation
  • DGX Spark Forum
  • DGX Spark User Performance Guide