Live VLM WebUI

20 MIN

Real-time Vision Language Model interaction with webcam streaming

Tags: DGX Spark · VLM · Vision AI · WebRTC
View on GitHub
Troubleshooting
| Symptom | Cause | Fix |
| --- | --- | --- |
| `pip install` shows "error: externally-managed-environment" | Python 3.12+ prevents system-wide pip installs | Use a virtual environment: `python3 -m venv live-vlm-env && source live-vlm-env/bin/activate && pip install live-vlm-webui` |
| Browser shows a "Your connection is not private" warning | The application uses a self-signed SSL certificate | Click "Advanced" → "Proceed to <IP> (unsafe)". This is safe and expected behavior |
| Camera not accessible or "Permission Denied" | Browsers require HTTPS for webcam access | Make sure you are using `https://` (not `http://`). Accept the self-signed certificate warning and grant camera permissions when prompted |
| "Failed to connect to VLM" or "Connection refused" | Ollama or the VLM backend is not running | Verify Ollama is running with `curl http://localhost:11434/v1/models`. If it is not, start it with `sudo systemctl start ollama` |
| VLM responses are very slow (>5 seconds per frame) | Model too large for available VRAM, or incorrect configuration | Try a smaller model (`gemma3:4b` instead of larger models). Increase the Frame Processing Interval to 60+ frames. Reduce Max Tokens to 100-200 |
| GPU stats show "N/A" for all metrics | NVML not available, or GPU driver issues | Verify GPU access with `nvidia-smi`. Ensure the NVIDIA drivers are properly installed |
| "No models available" in the model dropdown | Incorrect API endpoint, or models not downloaded | Verify the API endpoint is `http://localhost:11434/v1` for Ollama. Download models with `ollama pull gemma3:4b` |
| Server fails to start with "port already in use" | Port 8090 is already occupied by another service | Stop the conflicting service, or use the `--port` flag to specify a different port: `live-vlm-webui --port 8091` |
| Cannot access the UI from a remote browser on the network | Firewall blocking port 8090, or wrong IP address | Allow port 8090 through the firewall: `sudo ufw allow 8090`. Use the correct IP from `hostname -I` |
| Video stream is laggy or frozen | Network issues or browser performance | Use Chrome or Edge. Access from a separate PC on the network rather than locally. Check network bandwidth |
| Analysis results appear in an unexpected language | The model is multilingual and detected another language in the prompt | Explicitly specify the output language in the prompt: "Answer in English: describe what you see" |
| `pip install` fails with dependency errors | Conflicting Python package versions | Try installing with the `--user` flag: `pip install --user live-vlm-webui` |
| `live-vlm-webui` command not found after install | The binary's directory is not in `PATH` | Add `~/.local/bin` to `PATH`: `export PATH="$HOME/.local/bin:$PATH"`, then run `source ~/.bashrc` |
| Camera works but no VLM analysis results appear; browser shows `InvalidStateError` | Accessing via SSH port forwarding from a remote machine | WebRTC requires direct network connectivity and does not work through SSH tunnels (SSH forwards only TCP; WebRTC needs UDP). Either access the web UI from a browser on the same network as the server, use the server machine's browser directly, or use X11 forwarding (`ssh -X`) to display the browser remotely |
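Several of the fixes above come down to confirming that the OpenAI-compatible backend (Ollama by default) is reachable and has models pulled. A minimal sketch of that check in Python, using only the standard library; the endpoint URL is the default shown in the table, and the helper name `backend_available` is ours, not part of Live VLM WebUI:

```python
import json
import urllib.error
import urllib.request


def backend_available(base_url="http://localhost:11434", timeout=3):
    """Query base_url/v1/models (the OpenAI-compatible model list).

    Returns a list of model IDs if the backend answers, or None if it
    is unreachable (connection refused, timeout, bad response).
    """
    try:
        with urllib.request.urlopen(f"{base_url}/v1/models", timeout=timeout) as resp:
            data = json.load(resp)
        return [m.get("id") for m in data.get("data", [])]
    except (urllib.error.URLError, OSError, ValueError):
        return None


if __name__ == "__main__":
    models = backend_available()
    if models is None:
        print("Backend not reachable - try: sudo systemctl start ollama")
    elif not models:
        print("Backend up but no models - try: ollama pull gemma3:4b")
    else:
        print("Available models:", ", ".join(models))
```

Run this on the Spark itself before debugging the browser side: a `None` result points at the "Connection refused" row, an empty list at the "No models available" row.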

Resources

  • Live VLM WebUI GitHub Repository
  • Live VLM WebUI Documentation
  • DGX Spark Documentation
  • DGX Spark Forum
  • DGX Spark User Performance Guide

Copyright © 2026 NVIDIA Corporation