Vibe Coding in VS Code

30 MIN

Use DGX Spark as a local or remote Vibe Coding assistant with Ollama and Continue


Basic idea

This playbook walks you through setting up DGX Spark as a Vibe Coding assistant — either locally or as a remote coding companion for VSCode with Continue.dev.
The guide uses Ollama with GPT-OSS 120B for straightforward deployment of a coding assistant into VSCode. Advanced instructions are included for making the Ollama-served assistant available over your local network. This guide was written against a fresh installation of the OS; if your OS is not freshly installed and you run into issues, see the Troubleshooting tab.

What You'll Accomplish

You'll have a fully configured DGX Spark system capable of:

  • Running local code assistance through Ollama.
  • Serving models remotely for Continue and VSCode integration.
  • Hosting large LLMs like GPT-OSS 120B using unified memory.
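The core setup covered in the Instructions tab can be sketched in a few commands. This sketch assumes Ollama is already installed on the Spark; the `gpt-oss:120b` model tag and the default Ollama port (11434) are taken from the Ollama library and may differ for other models, and binding to `0.0.0.0` is only needed for remote access:

```shell
# Pull the model (tens of gigabytes; needs a fast connection and free disk space).
ollama pull gpt-oss:120b

# Local-only use: the default Ollama service listens on localhost:11434.
# Quick smoke test that the server is up and the model is listed:
curl http://localhost:11434/api/tags

# Remote use: bind Ollama to all interfaces so other machines on your
# network can reach it (stop the default service first if it is running).
OLLAMA_HOST=0.0.0.0 ollama serve
```

These are setup commands intended to run on the Spark itself; the Instructions tab covers the full procedure, including firewall configuration for remote access.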

Prerequisites

  • DGX Spark (128GB unified memory recommended)
  • Ollama and an LLM of your choice (e.g., gpt-oss:120b)
  • VSCode
  • Continue VSCode extension
  • Internet access for model downloads
  • Basic familiarity with the Linux terminal (opening it, copying and pasting commands)
  • sudo access
  • Optional: firewall control for remote access configuration
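With the prerequisites in place, Continue needs to be pointed at the Ollama server. The following is a minimal sketch written from the terminal — the YAML field names follow Continue's config format but may vary by extension version, and the model tag and `localhost` address are assumptions (use your Spark's IP address for remote access), so check the Continue.dev documentation if it does not load:

```shell
# Write a minimal Continue config pointing at the local Ollama server.
# Replace localhost with your Spark's IP address for remote access.
mkdir -p ~/.continue
cat > ~/.continue/config.yaml <<'EOF'
name: DGX Spark Assistant
version: 0.0.1
schema: v1
models:
  - name: GPT-OSS 120B
    provider: ollama
    model: gpt-oss:120b
    apiBase: http://localhost:11434
    roles:
      - chat
      - edit
EOF
```

After restarting VSCode, the model should appear in Continue's model selector.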

Time & risk

  • Duration: About 30 minutes
  • Risks: Model downloads may be slow or fail due to network issues
  • Rollback: No permanent system changes are made during normal use
  • Last Updated: 10/21/2025
    • First publication

Resources

  • DGX Spark Documentation
  • Ollama Documentation
  • VSCode
  • Continue.dev
  • DGX Spark Forum
  • DGX Spark User Performance Guide
Copyright © 2026 NVIDIA Corporation