

Vibe Coding in VS Code

30 MIN

Use DGX Spark as a local or remote Vibe Coding assistant with Ollama and Continue
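Continue reaches Ollama through its HTTP API, so pointing the extension at a Spark is a single model entry in Continue's config. A minimal sketch of ~/.continue/config.yaml, assuming Continue's current YAML config schema; SPARK_IP is a hypothetical placeholder for your DGX Spark's address (use localhost when running on the Spark itself):

name: DGX Spark Assistant
version: 1.0.0
models:
  - name: gpt-oss 20b                 # display name shown in Continue
    provider: ollama                  # talk to an Ollama server
    model: gpt-oss:20b                # any model pulled on the Spark works
    apiBase: http://SPARK_IP:11434    # hypothetical placeholder; omit for localhost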

DGX Spark · Vibe Coding

Troubleshooting
Symptom: Ollama not starting
Cause: GPU drivers may not be installed correctly.
Fix: Run nvidia-smi in the terminal. If the command fails, check DGX Dashboard for updates to your DGX Spark.

Symptom: Continue can't connect over the network
Cause: Port 11434 may not be open or accessible.
Fix: Run ss -tuln | grep 11434. If the output does not show tcp LISTEN 0 4096 *:11434 *:*, go back to step 2 and rerun the ufw command.

Symptom: Continue can't detect a locally running Ollama model
Cause: Configuration not properly set or detected.
Fix: Check OLLAMA_HOST and OLLAMA_ORIGINS in the /etc/systemd/system/ollama.service.d/override.conf file. If they are set correctly there, add the same OLLAMA_HOST and OLLAMA_ORIGINS exports to your ~/.bashrc file.

Symptom: High memory usage
Cause: Model size too big.
Fix: Confirm no other large models or containers are running with nvidia-smi. Use smaller models such as gpt-oss:20b for lightweight usage.
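The ufw command itself is defined in step 2 of the Instructions tab; as a sketch, the usual rule for opening Ollama's default port on Ubuntu is:

sudo ufw allow 11434/tcp       # open the Ollama port; adjust if your step 2 differs
sudo ufw status | grep 11434   # confirm the rule is active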
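For the model-detection symptom, a minimal sketch of the systemd override, assuming the common values for exposing Ollama on the network (0.0.0.0 and * are illustrative; keep whatever your Instructions steps set):

# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"

Reload systemd and restart Ollama so the settings take effect, then mirror the same values for shell clients:

sudo systemctl daemon-reload
sudo systemctl restart ollama

# appended to ~/.bashrc (values illustrative)
export OLLAMA_HOST=0.0.0.0:11434
export OLLAMA_ORIGINS=*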

NOTE

DGX Spark uses a Unified Memory Architecture (UMA), which enables dynamic memory sharing between the GPU and CPU. Because many applications are still being updated to take full advantage of UMA, you may encounter memory pressure even when your workload fits within DGX Spark's memory capacity. If that happens, manually flush the buffer cache with:

sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
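To check whether cache pressure, rather than a model, is consuming memory, compare usage before and after the flush:

free -h       # watch the buff/cache column shrink after the flush
nvidia-smi    # confirm GPU-side allocations are within expectations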

Resources

  • DGX Spark Documentation
  • Ollama Documentation
  • VS Code
  • Continue.dev
  • DGX Spark Forum
  • DGX Spark User Performance Guide