OpenClaw 🦞

30 MINS

Run OpenClaw locally on DGX Spark with LM Studio or Ollama

AI Agent · DGX · Local LLM · Spark

Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| OpenClaw dashboard URL not loading | Gateway not running, or wrong host/port | Restart the OpenClaw gateway: for Ollama, run `ollama launch openclaw` to restart an already-configured gateway; for LM Studio, restart the gateway via the LM Studio UI or restart the OpenClaw service/container. Verify the gateway process is running with `pgrep -f openclaw` or `ps aux \| grep openclaw`. To find the URL and token, check the original installer output (scroll up in your terminal) or the gateway logs (typically `~/.openclaw/logs/`). |
| "Connection refused" to the model (e.g. `localhost:1234` or the Ollama port) | LM Studio or Ollama not running, or wrong port | Start the model in a separate terminal (`lms load ...` or `ollama run ...`) and make sure the port in `openclaw.json` matches (1234 for LM Studio, 11434 for Ollama). |
| OpenClaw reports no model available | Model provider not configured, or model not loaded | Add the `models` section to `~/.openclaw/openclaw.json` for LM Studio, or run `ollama launch openclaw` for Ollama; make sure the model is loaded and running. |
| Out-of-memory errors or very slow inference on DGX Spark | Model too large for available GPU memory, or competing GPU workloads | Free GPU memory (close other apps), choose a smaller model, or check usage with `nvidia-smi`. |
| Install script fails or dependencies are missing | Missing system packages on Linux | Install `curl` and any required build tools; see the OpenClaw documentation for current requirements. |
| Config changes not applied | Gateway not reloaded | Restart the OpenClaw gateway so it reloads `~/.openclaw/openclaw.json`. |
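The first two rows boil down to the same pair of checks: is anything listening on the model provider's port, and is the gateway process alive? The sketch below bundles them into a small bash script. It uses bash's `/dev/tcp` redirection for the port probe; the port numbers are the LM Studio and Ollama defaults from the table, and the `ollama launch openclaw` hint mirrors the fix column above.

```shell
#!/usr/bin/env bash
# Quick local diagnostics for the troubleshooting table above.

check_port() {
  # Probe a TCP port on localhost using bash's /dev/tcp redirection.
  local port=$1
  if (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
    echo "port ${port}: open"
  else
    echo "port ${port}: closed -- start the provider listening there"
  fi
}

# Default provider ports: 1234 (LM Studio), 11434 (Ollama).
check_port 1234
check_port 11434

# Is the OpenClaw gateway process alive?
if pgrep -f openclaw >/dev/null 2>&1; then
  echo "gateway: running"
else
  echo "gateway: not found -- restart it (e.g. 'ollama launch openclaw')"
fi
```

If both ports report closed, start LM Studio or Ollama first; if a port is open but the dashboard still fails, restart the gateway and re-check `~/.openclaw/logs/` for the URL and token.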

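The "no model available" row says to add a `models` section to `~/.openclaw/openclaw.json` for LM Studio. As a rough illustration only — everything below other than the `models` key name is hypothetical, so check the OpenClaw documentation for the actual schema — such a section might look like:

```json
{
  "models": {
    "//": "hypothetical keys for illustration; see the OpenClaw docs for the real schema",
    "provider": "lmstudio",
    "baseUrl": "http://localhost:1234/v1",
    "model": "your-loaded-model-name"
  }
}
```

Whatever the exact schema, the port in this file must match the provider you started (1234 for LM Studio, 11434 for Ollama), and the gateway must be restarted for the change to take effect.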
Resources

  • OpenClaw Documentation
  • OpenClaw Gateway Security
  • Clawhub (community skills)
  • DGX Spark Documentation
  • DGX Spark Forum
Copyright © 2026 NVIDIA Corporation