

Run NemoClaw on DGX Station

60 MINS

Run OpenClaw in an OpenShell sandbox on DGX Station with Ollama (Nemotron)

AI Agent · DGX Station · GB300 · NemoClaw · Ollama · OpenShell
NemoClaw on GitHub
Troubleshooting
Symptom: Gateway fails with cgroup / "Failed to start ContainerManager" errors
Cause: Docker is not configured for the host cgroup namespace on DGX Station
Fix: Run the cgroup fix: sudo python3 -c "import json, os; path='/etc/docker/daemon.json'; d=json.load(open(path)) if os.path.exists(path) else {}; d['default-cgroupns-mode']='host'; json.dump(d, open(path,'w'), indent=2)" then sudo systemctl restart docker

Symptom: "No GPU detected" during onboarding
Cause: The wizard may not detect the GPU in some setups
Fix: On DGX Station with GB300, nvidia-smi should show the GPU. The wizard can still proceed and use Ollama for inference.

Symptom: "unauthorized: gateway token missing"
Cause: Dashboard URL used without the token, or in the wrong format
Fix: Paste the full URL including #token=... (a hash fragment, not ?token=). Run openclaw dashboard inside the sandbox to get the URL again.

Symptom: "No API key found for provider anthropic"
Cause: API key environment variables not set when starting the gateway in the sandbox
Fix: Inside the sandbox, set both before running the gateway: export NVIDIA_API_KEY=local-ollama and export ANTHROPIC_API_KEY=local-ollama

Symptom: Agent gives no response
Cause: Model not loaded, or Nemotron 3 Super is slow
Fix: Nemotron 3 Super can take 30–90 seconds per response. Verify Ollama is up: curl http://localhost:11434. Ensure inference is set: openshell inference get

Symptom: Port forward dies or dashboard unreachable
Cause: Forward not active, or wrong port
Fix: List forwards: openshell forward list. Restart: openshell forward stop 18789 my-assistant, then openshell forward start --background 18789 my-assistant

Symptom: Docker permission denied
Cause: User not in the docker group
Fix: sudo usermod -aG docker $USER, then log out and back in.

Symptom: Ollama not reachable from sandbox (503 / timeout)
Cause: Ollama bound to localhost only, or firewall blocking port 11434
Fix: Ensure Ollama listens on all interfaces: add Environment="OLLAMA_HOST=0.0.0.0" via sudo systemctl edit ollama.service, then sudo systemctl daemon-reload and sudo systemctl restart ollama. If using UFW: sudo ufw allow 11434/tcp comment 'Ollama for NemoClaw' and sudo ufw reload
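The cgroup fix above packs a small script into a single python3 -c one-liner. Unrolled, it reads like the sketch below (the enable_host_cgroupns name and path parameter are ours for illustration; in practice it must run as root against /etc/docker/daemon.json, followed by sudo systemctl restart docker):

```python
import json
import os

def enable_host_cgroupns(path="/etc/docker/daemon.json"):
    """Set Docker's default cgroup namespace mode to 'host'.

    Loads the existing daemon config if present (otherwise starts from
    an empty dict), sets 'default-cgroupns-mode', and writes it back,
    preserving any other keys already in the file.
    """
    config = {}
    if os.path.exists(path):
        with open(path) as f:
            config = json.load(f)
    config["default-cgroupns-mode"] = "host"
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
    return config
```

Parameterizing the path also lets you dry-run the change against a copy of your daemon.json before touching the live file.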
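Several of the fixes above reduce to the same question: does Ollama answer on port 11434? A minimal probe for that check (a hypothetical helper, not part of the playbook tooling; the default URL assumes Ollama's standard port):

```python
import urllib.request
import urllib.error

def ollama_reachable(url="http://localhost:11434", timeout=5):
    """Return True if an HTTP server answers at `url` within `timeout` seconds.

    A healthy Ollama instance replies 200 ("Ollama is running") at its
    root endpoint; any connection error, timeout, or non-200 status
    counts as unreachable.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

If this returns True on the host but False from inside the sandbox, Ollama is likely bound to localhost only — apply the OLLAMA_HOST=0.0.0.0 fix above.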

NOTE

DGX Station with GB300 GPUs provides 170+ GB VRAM, suitable for large models like Nemotron 3 Super. If you hit out-of-memory errors, try a smaller model (e.g. nemotron-3-nano or gpt-oss:20b) with openshell inference set --provider ollama-local --model <model-name>.

For the latest known issues, see the DGX Station documentation.

Resources

  • openshell-openclaw-plugin (NemoClaw)
  • OpenClaw Documentation
  • OpenShell (PyPI)
  • DGX Station Documentation
  • NVIDIA GB300