| Symptom | Cause | Fix |
|---|---|---|
| Gateway fails with cgroup / "Failed to start ContainerManager" errors | Docker not configured for the host cgroup namespace on DGX Station | Run the cgroup fix: `sudo python3 -c "import json, os; path='/etc/docker/daemon.json'; d=json.load(open(path)) if os.path.exists(path) else {}; d['default-cgroupns-mode']='host'; json.dump(d, open(path,'w'), indent=2)"`, then `sudo systemctl restart docker` |
| "No GPU detected" during onboard | The wizard may not detect the GPU in some setups | On a DGX Station with GB300, `nvidia-smi` should show the GPU. The wizard can still proceed and use Ollama for inference. |
| "unauthorized: gateway token missing" | Dashboard URL used without a token, or in the wrong format | Paste the full URL including `#token=...` (a hash fragment, not `?token=`). Run `openclaw dashboard` inside the sandbox to get the URL again. |
| "No API key found for provider anthropic" | API key env vars not set when starting the gateway in the sandbox | Inside the sandbox, set both before running the gateway: `export NVIDIA_API_KEY=local-ollama` and `export ANTHROPIC_API_KEY=local-ollama` |
| Agent gives no response | Model not loaded, or Nemotron 3 Super is slow | Nemotron 3 Super can take 30–90 seconds per response. Verify Ollama is up: `curl http://localhost:11434`. Confirm the inference setting: `openshell inference get` |
| Port forward dies or dashboard unreachable | Forward not active, or wrong port | List forwards: `openshell forward list`. Restart: `openshell forward stop 18789 my-assistant`, then `openshell forward start --background 18789 my-assistant` |
| Docker permission denied | User not in the `docker` group | Run `sudo usermod -aG docker $USER`, then log out and back in. |
| Ollama not reachable from the sandbox (503 / timeout) | Ollama bound to localhost only, or firewall blocking port 11434 | Make Ollama listen on all interfaces: add `Environment="OLLAMA_HOST=0.0.0.0"` via `sudo systemctl edit ollama.service`, then `sudo systemctl daemon-reload` and `sudo systemctl restart ollama`. If using UFW: `sudo ufw allow 11434/tcp comment 'Ollama for NemoClaw'` and `sudo ufw reload` |
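For the last row, `sudo systemctl edit ollama.service` opens an editor for a systemd drop-in override; the content to add is just the environment line (a sketch, assuming the service unit is named `ollama.service` as in a standard Ollama install):

```ini
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```

Note that `0.0.0.0` makes Ollama listen on every interface, so pair it with a firewall rule (such as the UFW command in the table) rather than exposing port 11434 broadly.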
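The cgroup one-liner in the first row is hard to audit inline. Here is a readable sketch of the same logic as a small function; the `/etc/docker/daemon.json` path is the standard Docker location, and the function takes it as a parameter so you can dry-run it elsewhere first:

```python
import json
import os

def set_host_cgroupns(path="/etc/docker/daemon.json"):
    """Merge default-cgroupns-mode=host into Docker's daemon.json,
    preserving any settings already present in the file."""
    config = {}
    if os.path.exists(path):
        with open(path) as f:
            config = json.load(f)
    # Only this one key changes; everything else is carried over.
    config["default-cgroupns-mode"] = "host"
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
    return config
```

Run it with root privileges (e.g. via `sudo python3`) so it can write under `/etc/docker`, then restart Docker with `sudo systemctl restart docker` as in the table.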
> **Note:** A DGX Station with GB300 GPUs provides 170+ GB of VRAM, enough for large models like Nemotron 3 Super. If you hit out-of-memory errors, try a smaller model (e.g. `nemotron-3-nano` or `gpt-oss:20b`) with `openshell inference set --provider ollama-local --model <model-name>`.
For the latest known issues, see the DGX Station documentation.