Copyright © 2025 NVIDIA Corporation

Unsloth on DGX Spark

1 HR

Optimized fine-tuning with Unsloth

View on GitHub

Step 1
Verify prerequisites

Confirm your NVIDIA Spark device has the required CUDA toolkit and GPU resources available.

nvcc --version

The output should show CUDA 13.0.

nvidia-smi

The output should show a summary of GPU information.
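If you want to check the toolkit version from a script rather than by eye, the release number can be parsed out of the `nvcc --version` banner. A small sketch; the sample banner below is illustrative and your exact build string may differ:

```python
import re

# Illustrative `nvcc --version` output; the build number will vary.
sample = """nvcc: NVIDIA (R) Cuda compiler driver
Cuda compilation tools, release 13.0, V13.0.48
"""

def cuda_release(banner):
    """Extract the 'release X.Y' version from an nvcc banner, or None."""
    m = re.search(r"release\s+(\d+\.\d+)", banner)
    return m.group(1) if m else None

print(cuda_release(sample))  # → 13.0
```

In practice you would feed it the real output, e.g. `cuda_release(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)`.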

Step 2
Get the container image

docker pull nvcr.io/nvidia/pytorch:25.09-py3

Step 3
Launch the container

docker run --gpus all --ulimit memlock=-1 -it --ulimit stack=67108864 --entrypoint /usr/bin/bash --rm nvcr.io/nvidia/pytorch:25.09-py3

Step 4
Install dependencies inside Docker

pip install transformers peft datasets "trl==0.19.1"
pip install --no-deps unsloth unsloth_zoo
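Because `--no-deps` skips dependency resolution, it is worth confirming afterward that the pinned packages actually landed. A standard-library sketch (the package names come from the commands above; run it inside the container):

```python
from importlib import metadata

def check_versions(required):
    """Return {package: (wanted, found)} for any missing or mismatched package.

    A value of None for 'wanted' means any installed version is acceptable;
    a 'found' of None means the package is not installed at all.
    """
    problems = {}
    for pkg, wanted in required.items():
        try:
            found = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            problems[pkg] = (wanted, None)
            continue
        if wanted is not None and found != wanted:
            problems[pkg] = (wanted, found)
    return problems

# Inside the container, after Step 4, you would run something like:
# check_versions({"trl": "0.19.1", "transformers": None, "unsloth": None})
# An empty dict means everything is present at the expected version.
```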

Step 5
Install bitsandbytes inside Docker

pip install --no-deps bitsandbytes

Step 6
Create Python test script

Download the test script into the container with curl (note the raw URL; the GitHub blob URL would fetch an HTML page instead of the script):

curl -LO https://raw.githubusercontent.com/NVIDIA/dgx-spark-playbooks/main/nvidia/unsloth/assets/test_unsloth.py

We will use this test script to validate the installation with a simple fine-tuning task.
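For intuition about what the fine-tuning task exercises: Unsloth's speedups target LoRA-style training, where a frozen weight matrix W is augmented by a trainable low-rank product scaled by alpha/r, so only the two small factor matrices are updated. A toy pure-Python sketch of that arithmetic (shapes and values are illustrative, not taken from the test script):

```python
def matmul(A, B):
    """Plain-Python matrix multiply for small illustrative matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

d, r = 4, 2        # hidden size and LoRA rank (toy values)
alpha = 4          # LoRA scaling factor

# Frozen base weight (identity here for clarity) and the two trainable factors.
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]
A = [[0.1] * r for _ in range(d)]   # d x r down-projection
B = [[0.1] * d for _ in range(r)]   # r x d up-projection

delta = matmul(A, B)                # low-rank update, rank <= r
scale = alpha / r
W_eff = [[W[i][j] + scale * delta[i][j] for j in range(d)] for i in range(d)]
```

During training only A and B receive gradients, which is why LoRA fine-tuning fits on a single device where full fine-tuning would not.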

Step 7
Run the validation test

Execute the test script to verify Unsloth is working correctly.

python test_unsloth.py

Expected output in the terminal window:

  • "Unsloth: Will patch your computer to enable 2x faster free finetuning"
  • Training progress bars showing loss decreasing over 60 steps
  • Final training metrics showing completion

Step 8
Next steps

Test with your own model and dataset by updating the test_unsloth.py file:

# Replace line 32 with your model choice
model_name = "unsloth/Meta-Llama-3.1-8B-bnb-4bit"

# Load your custom dataset in line 8
dataset = load_dataset("your_dataset_name")

# Adjust training parameter args at line 61
per_device_train_batch_size = 4
max_steps = 1000
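When swapping in your own dataset, most supervised fine-tuning setups expect each record collapsed into a single text field before training. A hypothetical mapping function you could apply to your dataset (the `instruction`/`response` column names and the Alpaca-style template are assumptions, not the test script's actual schema; adapt them to your data):

```python
# Hypothetical Alpaca-style prompt template; adjust to your chat format.
ALPACA_TEMPLATE = "### Instruction:\n{instruction}\n\n### Response:\n{response}"

def to_text(example):
    """Map one raw record to the single 'text' field an SFT trainer consumes."""
    return {"text": ALPACA_TEMPLATE.format(
        instruction=example["instruction"],
        response=example["response"],
    )}

# With the datasets library you would typically apply this via
# dataset.map(to_text); shown here on a plain dict for illustration.
rows = [{"instruction": "Say hi.", "response": "Hi!"}]
print(to_text(rows[0])["text"])
```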

Visit https://github.com/unslothai/unsloth/wiki for advanced usage instructions, including:

  • Saving models in GGUF format (for llama.cpp) or 16-bit for vLLM
  • Continued training from checkpoints
  • Using custom chat templates
  • Running evaluation loops

Resources

  • Unsloth Documentation
  • DGX Spark Documentation
  • DGX Spark Forum