These steps prepare a fresh DGX Spark for NemoClaw. If Docker, the NVIDIA runtime, and Ollama are already configured, skip to Phase 2.
OpenShell's gateway runs k3s inside Docker. On DGX Spark (Ubuntu 24.04, cgroup v2), Docker must be configured with the NVIDIA runtime and host cgroup namespace mode.
Configure the NVIDIA container runtime for Docker:
sudo nvidia-ctk runtime configure --runtime=docker
Set the cgroup namespace mode required by OpenShell on DGX Spark:
sudo python3 -c "
import json, os
path = '/etc/docker/daemon.json'
d = json.load(open(path)) if os.path.exists(path) else {}
d['default-cgroupns-mode'] = 'host'
json.dump(d, open(path, 'w'), indent=2)
"
Restart Docker:
sudo systemctl restart docker
Verify the NVIDIA runtime works:
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
If docker commands fail with a permission denied error, add your user to the docker group and activate the new group in your current session:
sudo usermod -aG docker $USER
newgrp docker
This applies the group change immediately. Alternatively, you can log out and back in instead of running newgrp docker.
NOTE
DGX Spark uses cgroup v2. OpenShell's gateway embeds k3s inside Docker and needs host cgroup namespace access. Without default-cgroupns-mode: host, the gateway can fail with "Failed to start ContainerManager" errors.
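To confirm the host really is on cgroup v2, check the filesystem type mounted at /sys/fs/cgroup:
# cgroup2fs means cgroup v2 (unified hierarchy); tmpfs here would indicate cgroup v1
stat -fc %T /sys/fs/cgroup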
Install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
Configure Ollama to listen on all interfaces so the sandbox container can reach it:
sudo mkdir -p /etc/systemd/system/ollama.service.d
printf '[Service]\nEnvironment="OLLAMA_HOST=0.0.0.0"\n' | sudo tee /etc/systemd/system/ollama.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart ollama
Verify it is running and reachable on all interfaces:
curl http://0.0.0.0:11434
Expected: Ollama is running. If not, start it with sudo systemctl start ollama.
IMPORTANT
Always start Ollama via systemd (sudo systemctl restart ollama) -- do not use ollama serve &. A manually started Ollama process does not pick up the OLLAMA_HOST=0.0.0.0 setting above, and the NemoClaw sandbox will not be able to reach the inference server.
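To confirm the override took effect, check that Ollama is listening on all interfaces rather than only loopback:
# Expect a listener on *:11434 or 0.0.0.0:11434, not 127.0.0.1:11434
sudo ss -ltnp | grep 11434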
Download Nemotron 3 Super 120B (~87 GB; may take 15--30 minutes depending on network speed):
ollama pull nemotron-3-super:120b
Run it briefly to pre-load weights into memory (type /bye to exit):
ollama run nemotron-3-super:120b
Verify the model is available:
ollama list
You should see nemotron-3-super:120b in the output.
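You can also check through Ollama's HTTP API, which is what the sandbox ultimately talks to:
# Lists pulled models as JSON; nemotron-3-super:120b should appear in the "models" array
curl -s http://127.0.0.1:11434/api/tags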
This single command handles everything: installs Node.js (if needed), installs OpenShell, clones the latest stable NemoClaw release, builds the CLI, and runs the onboard wizard to create a sandbox.
curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash
The onboard wizard walks you through setup:
- Sandbox name (e.g. my-assistant). Names must be lowercase alphanumeric with hyphens only.
- Messaging channels: select telegram here and paste your bot token when prompted. Create the bot first via @BotFather in Telegram (see Step 9). If you skip this, you can re-run the installer later to recreate the sandbox with Telegram enabled.

IMPORTANT
Telegram must be configured at this step. The channel plugin and bot token are wired into the sandbox container during onboarding -- they cannot be added to an existing sandbox by exporting environment variables on the host.
When complete you will see output like:
──────────────────────────────────────────────────
Dashboard http://localhost:18789/
Sandbox my-assistant (Landlock + seccomp + netns)
Model nemotron-3-super:120b (Local Ollama)
──────────────────────────────────────────────────
Run: nemoclaw my-assistant connect
Status: nemoclaw my-assistant status
Logs: nemoclaw my-assistant logs --follow
──────────────────────────────────────────────────
IMPORTANT
Save the tokenized Web UI URL printed at the end -- you will need it in Step 8. It looks like:
http://127.0.0.1:18789/#token=<long-token-here>
NOTE
If nemoclaw is not found after install, run source ~/.bashrc to reload your shell path.
Connect to the sandbox:
nemoclaw my-assistant connect
You will see sandbox@my-assistant:~$ -- you are now inside the sandboxed environment.
Verify that the inference route is working:
curl -sf https://inference.local/v1/models
Expected: JSON listing nemotron-3-super:120b.
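Because the route is OpenAI-compatible (it serves /v1/models), you can also exercise it end to end with a raw completion request. This is a minimal sketch that assumes the standard /v1/chat/completions endpoint is exposed on the same route:
# Sends a single user message and returns the model's reply as JSON
curl -sf https://inference.local/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "nemotron-3-super:120b",
        "messages": [{"role": "user", "content": "Reply with one short sentence."}]
      }'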
Still inside the sandbox, send a test message:
openclaw agent --agent main -m "hello" --session-id test
The agent will respond using Nemotron 3 Super. First responses may take 30--90 seconds for a 120B parameter model running locally.
Launch the terminal UI for an interactive chat session:
openclaw tui
Press Ctrl+C to exit the TUI.
Exit the sandbox to return to the host:
exit
If accessing the Web UI directly on the Spark (keyboard and monitor attached), open a browser and navigate to the tokenized URL from Step 4:
http://127.0.0.1:18789/#token=<long-token-here>
If accessing the Web UI from a remote machine, you need to set up an SSH tunnel. The NemoClaw onboard wizard already created the port 18789 forward on the Spark, so you only need to tunnel from your remote machine.
First, find your Spark's IP address. On the Spark, run:
hostname -I | awk '{print $1}'
This prints the primary IP address (e.g. 192.168.1.42). You can also find it in Settings > Wi-Fi or Settings > Network on the Spark's desktop, or check your router's connected-devices list.
From your remote machine, create an SSH tunnel to the Spark (replace <your-spark-ip> with the IP address from above):
ssh -L 18789:127.0.0.1:18789 <your-user>@<your-spark-ip>
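If you prefer not to hold an interactive session open, the same tunnel can run in the background with standard ssh flags:
# -f: background after authentication, -N: forward the port without running a remote command
ssh -fN -L 18789:127.0.0.1:18789 <your-user>@<your-spark-ip>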
Now open the tokenized URL in your remote machine's browser:
http://127.0.0.1:18789/#token=<long-token-here>
IMPORTANT
Use 127.0.0.1, not localhost -- the gateway origin check requires an exact match.
NOTE
If the Web UI fails to load, the port forward may have gone stale; reset it on the Spark host:
openshell forward stop 18789 my-assistant || true
openshell forward start 18789 my-assistant --background
IMPORTANT
Telegram must be enabled in the NemoClaw onboard wizard (Step 4 > Messaging channels). The channel plugin and bot token are wired into the sandbox container at sandbox creation time -- policy-add only opens network egress and is not enough on its own. If you skipped Telegram during onboard, re-run the installer to recreate the sandbox with Telegram enabled.
Do this before running the NemoClaw installer in Step 4 so you have your bot token ready when the wizard prompts for it.
Open Telegram, find @BotFather, send /newbot, and follow the prompts. Copy the bot token it gives you and paste it into the wizard when you reach the Messaging channels step.
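If you want to sanity-check the token before onboarding, you can call Telegram's Bot API directly (replace <bot-token> with the token from @BotFather):
# A valid token returns {"ok":true,...} including your bot's username
curl -s https://api.telegram.org/bot<bot-token>/getMe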
The Telegram bridge needs a public webhook URL so Telegram can deliver messages to your bot. NemoClaw uses cloudflared to create a free trycloudflare.com tunnel.
Make sure you are on the host (not inside the sandbox). If you are inside the sandbox, run exit first.
Install cloudflared (DGX Spark is arm64):
curl -L --output cloudflared.deb \
https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-arm64.deb
sudo dpkg -i cloudflared.deb
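Confirm the binary installed correctly before starting the tunnel:
# Prints the cloudflared version; "command not found" means the dpkg step failed
cloudflared --version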
Start the tunnel:
nemoclaw tunnel start
Verify the public URL is live:
nemoclaw status
You should see ✓ cloudflared with a trycloudflare.com public URL (e.g. https://assembled-peer-persian-kitty.trycloudflare.com).
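To confirm Telegram is actually pointed at that tunnel, you can query the Bot API's getWebhookInfo method (replace <bot-token> with your token); this assumes the bridge has already registered its webhook:
# The "url" field should reference your trycloudflare.com address
curl -s https://api.telegram.org/bot<bot-token>/getWebhookInfo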
Open Telegram, find your bot, and send it a message. The bot forwards it to the agent and replies.
NOTE
If nemoclaw tunnel start prints cloudflared not found -- no public URL, the cloudflared install above did not complete successfully. Re-run the install, then restart the tunnel:
nemoclaw tunnel stop && nemoclaw tunnel start
NOTE
The first response may take 30--90 seconds for a 120B parameter model running locally.
NOTE
If sending a message returns Error: Channel is unavailable: telegram, the channel was not enabled during onboard. Re-run the installer to recreate the sandbox with Telegram selected at the Messaging channels step.
NOTE
For details on restricting which Telegram chats can interact with the agent, see the NemoClaw Telegram bridge documentation.
Stop the cloudflared tunnel:
nemoclaw tunnel stop
Stop the port forward:
openshell forward list # find active forwards
openshell forward stop 18789 # stop the dashboard forward
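To confirm nothing is still listening on the dashboard port:
ss -ltn | grep 18789   # no output means the forward is fully stopped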
Run the uninstaller via curl (this matches the NemoClaw README). It removes all sandboxes, the OpenShell gateway, the Docker containers/images/volumes NemoClaw created, the CLI, and all state files. The Docker engine, Node.js, npm, and Ollama are preserved.
curl -fsSL https://raw.githubusercontent.com/NVIDIA/NemoClaw/refs/heads/main/uninstall.sh | bash
Uninstaller flags (pass via bash -s -- <flags>):
| Flag | Effect |
|---|---|
| --yes | Skip the confirmation prompt |
| --keep-openshell | Leave the openshell binary in place |
| --delete-models | Also remove the Ollama models pulled by NemoClaw |
To remove everything including the Ollama model, non-interactively:
curl -fsSL https://raw.githubusercontent.com/NVIDIA/NemoClaw/refs/heads/main/uninstall.sh | bash -s -- --yes --delete-models
The uninstaller runs 6 steps, which include removing:

- the nemoclaw npm package
- the Ollama models pulled by NemoClaw (only with --delete-models)
- all state files (~/.nemoclaw, ~/.config/openshell, ~/.config/nemoclaw) and the OpenShell binary

NOTE
If you have a local clone at ~/.nemoclaw/source you want to keep, move or back it up before running the uninstaller -- it is removed as part of state cleanup in step 6.
| Command | Description |
|---|---|
| nemoclaw my-assistant connect | Shell into the sandbox |
| nemoclaw my-assistant status | Show sandbox status and inference config |
| nemoclaw my-assistant logs --follow | Stream sandbox logs in real time |
| nemoclaw list | List all registered sandboxes |
| nemoclaw tunnel start | Start the cloudflared tunnel (public URL for Telegram webhooks) |
| nemoclaw tunnel stop | Stop the cloudflared tunnel |
| openshell term | Open the monitoring TUI on the host |
| openshell forward list | List active port forwards |
| openshell forward start 18789 my-assistant --background | Restart port forwarding for the Web UI |
| curl -fsSL https://raw.githubusercontent.com/NVIDIA/NemoClaw/refs/heads/main/uninstall.sh \| bash | Remove NemoClaw (preserves Docker, Node.js, Ollama) |
| curl -fsSL https://raw.githubusercontent.com/NVIDIA/NemoClaw/refs/heads/main/uninstall.sh \| bash -s -- --delete-models | Remove NemoClaw and Ollama models |