These steps prepare a fresh DGX Spark for NemoClaw. If Docker, the NVIDIA runtime, and Ollama are already configured, skip to Phase 2.
OpenShell's gateway runs k3s inside Docker. On DGX Spark (Ubuntu 24.04, cgroup v2), Docker must be configured with the NVIDIA runtime and host cgroup namespace mode.
Configure the NVIDIA container runtime for Docker:
sudo nvidia-ctk runtime configure --runtime=docker
Set the cgroup namespace mode required by OpenShell on DGX Spark:
sudo python3 -c "
import json, os
path = '/etc/docker/daemon.json'
d = json.load(open(path)) if os.path.exists(path) else {}
d['default-cgroupns-mode'] = 'host'
json.dump(d, open(path, 'w'), indent=2)
"
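The one-liner above merges the key into any existing daemon.json rather than overwriting the file, so a previously configured NVIDIA runtime entry survives. A safe way to see the effect is to run the same logic against a scratch file (the sample content below is assumed, not your real config):

```shell
# Run the merge against a scratch copy instead of /etc/docker/daemon.json
tmp=$(mktemp)
echo '{"runtimes": {"nvidia": {"path": "nvidia-container-runtime"}}}' > "$tmp"
python3 -c "
import json, os, sys
path = sys.argv[1]
# Load the existing config if present, else start from an empty object
d = json.load(open(path)) if os.path.exists(path) else {}
d['default-cgroupns-mode'] = 'host'   # new key; existing keys are preserved
json.dump(d, open(path, 'w'), indent=2)
" "$tmp"
cat "$tmp"   # shows both the original runtimes entry and the new key
```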
Restart Docker:
sudo systemctl restart docker
Verify the NVIDIA runtime works:
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
If you get a permission denied error on docker, add your user to the Docker group and activate the new group in your current session:
sudo usermod -aG docker $USER
newgrp docker
Running newgrp docker starts a new shell with the group membership applied, so the change takes effect only in that shell. Alternatively, log out and back in to apply it to all sessions.
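To check whether your user is already in the docker group before reaching for newgrp, inspect your current group list:

```shell
# Prints "docker group: yes" if the current user is in the docker group
if id -nG | grep -qw docker; then
  echo "docker group: yes"
else
  echo "docker group: no"
fi
```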
NOTE
DGX Spark uses cgroup v2. OpenShell's gateway embeds k3s inside Docker and needs host cgroup namespace access. Without default-cgroupns-mode: host, the gateway can fail with "Failed to start ContainerManager" errors.
Install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
Configure Ollama to listen on all interfaces so the sandbox container can reach it:
sudo mkdir -p /etc/systemd/system/ollama.service.d
printf '[Service]\nEnvironment="OLLAMA_HOST=0.0.0.0"\n' | sudo tee /etc/systemd/system/ollama.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart ollama
Verify it is running and reachable on all interfaces:
curl http://0.0.0.0:11434
Expected: Ollama is running. If not, start it with sudo systemctl start ollama.
IMPORTANT
Always start Ollama via systemd (sudo systemctl restart ollama) -- do not use ollama serve &. A manually started Ollama process does not pick up the OLLAMA_HOST=0.0.0.0 setting above, and the NemoClaw sandbox will not be able to reach the inference server.
Download Nemotron 3 Super 120B (~87 GB; may take 15--30 minutes depending on network speed):
ollama pull nemotron-3-super:120b
Run it briefly to pre-load weights into memory (type /bye to exit):
ollama run nemotron-3-super:120b
Verify the model is available:
ollama list
You should see nemotron-3-super:120b in the output.
This single command handles everything: installs Node.js (if needed), installs OpenShell, clones the latest stable NemoClaw release, builds the CLI, and runs the onboard wizard to create a sandbox.
curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash
The onboard wizard walks you through setup:
- Sandbox name (e.g. my-assistant). Names must be lowercase alphanumeric with hyphens only.

When complete, you will see output like:
──────────────────────────────────────────────────
Dashboard http://localhost:18789/
Sandbox my-assistant (Landlock + seccomp + netns)
Model nemotron-3-super:120b (Local Ollama)
──────────────────────────────────────────────────
Run: nemoclaw my-assistant connect
Status: nemoclaw my-assistant status
Logs: nemoclaw my-assistant logs --follow
──────────────────────────────────────────────────
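The sandbox-name rule can be sketched as a quick shell check. The helper below is hypothetical (not part of the nemoclaw CLI), and its regex additionally rejects leading, trailing, and doubled hyphens, which is an assumption beyond the stated rule:

```shell
# Hypothetical validator: lowercase letters, digits, and single interior hyphens
valid_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]+(-[a-z0-9]+)*$'
}

valid_name "my-assistant" && echo "ok: my-assistant"
valid_name "My_Assistant" || echo "rejected: My_Assistant"
```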
IMPORTANT
Save the tokenized Web UI URL printed at the end -- you will need it in Step 8. It looks like:
http://127.0.0.1:18789/#token=<long-token-here>
NOTE
If nemoclaw is not found after install, run source ~/.bashrc to reload your shell path.
Connect to the sandbox:
nemoclaw my-assistant connect
You will see sandbox@my-assistant:~$ -- you are now inside the sandboxed environment.
Verify that the inference route is working:
curl -sf https://inference.local/v1/models
Expected: JSON listing nemotron-3-super:120b.
Still inside the sandbox, send a test message:
openclaw agent --agent main --local -m "hello" --session-id test
The agent will respond using Nemotron 3 Super. First responses may take 30--90 seconds for a 120B parameter model running locally.
Launch the terminal UI for an interactive chat session:
openclaw tui
Press Ctrl+C to exit the TUI.
Exit the sandbox to return to the host:
exit
If accessing the Web UI directly on the Spark (keyboard and monitor attached), open a browser and navigate to the tokenized URL from Step 4:
http://127.0.0.1:18789/#token=<long-token-here>
If accessing the Web UI from a remote machine, you need to set up port forwarding.
First, find your Spark's IP address. On the Spark, run:
hostname -I | awk '{print $1}'
This prints the primary IP address (e.g. 192.168.1.42). You can also find it in Settings > Wi-Fi or Settings > Network on the Spark's desktop, or check your router's connected-devices list.
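The awk filter simply keeps the first whitespace-separated field, since hostname -I can list several addresses (the sample addresses below are illustrative):

```shell
# hostname -I may print several addresses; awk keeps only the first one
echo "192.168.1.42 172.17.0.1 10.0.0.5" | awk '{print $1}'
# → 192.168.1.42
```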
Start the port forward on the Spark host:
openshell forward start 18789 my-assistant --background
Then from your remote machine, create an SSH tunnel to the Spark (replace <your-spark-ip> with the IP address from above):
ssh -L 18789:127.0.0.1:18789 <your-user>@<your-spark-ip>
Now open the tokenized URL in your remote machine's browser:
http://127.0.0.1:18789/#token=<long-token-here>
IMPORTANT
Use 127.0.0.1, not localhost -- the gateway origin check requires an exact match.
NOTE
If you already configured Telegram during the NemoClaw onboarding wizard (step 5/8), you can skip this phase. These steps cover adding Telegram after the initial setup.
Open Telegram, find @BotFather, send /newbot, and follow the prompts. Copy the bot token it gives you.
Make sure you are on the host (not inside the sandbox). If you are inside the sandbox, run exit first.
Set the required environment variables. Replace the placeholders with your actual values. SANDBOX_NAME must match the sandbox name you chose during the onboard wizard:
export TELEGRAM_BOT_TOKEN=<your-bot-token>
export SANDBOX_NAME=my-assistant
export NVIDIA_API_KEY=<your-nvidia-api-key>
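A small preflight check catches a missing variable before you continue. The helper below is hypothetical (not part of the CLI) and assumes bash, since it uses ${!name} indirect expansion:

```shell
# Fail fast if a required variable is unset or empty (bash-only indirection)
check_env() {
  if [ -z "${!1:-}" ]; then
    echo "missing: $1" >&2
    return 1
  fi
  echo "set: $1"
}

check_env TELEGRAM_BOT_TOKEN || true
check_env SANDBOX_NAME || true
```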
Add the Telegram network policy to the sandbox:
nemoclaw my-assistant policy-add
When prompted, select telegram and hit Y to confirm.
Start the Telegram bridge:
export TELEGRAM_BOT_TOKEN=<your-bot-token>
nemoclaw start
The Telegram bridge starts only when the TELEGRAM_BOT_TOKEN environment variable is set. Verify the services are running:
nemoclaw status
Open Telegram, find your bot, and send it a message. The bot forwards it to the agent and replies.
NOTE
The first response may take 30--90 seconds for a 120B parameter model running locally.
NOTE
If the bridge does not appear in nemoclaw status, make sure TELEGRAM_BOT_TOKEN is exported in the same shell session where you run nemoclaw start. You can also try stopping and restarting:
nemoclaw stop
export TELEGRAM_BOT_TOKEN=<your-bot-token>
nemoclaw start
NOTE
For details on restricting which Telegram chats can interact with the agent, see the NemoClaw Telegram bridge documentation.
Stop any running auxiliary services (Telegram bridge, cloudflared tunnel):
nemoclaw stop
Stop the port forward:
openshell forward list # find active forwards
openshell forward stop 18789 # stop the dashboard forward
Run the uninstaller from the cloned source directory. It removes all sandboxes, the OpenShell gateway, Docker containers/images/volumes, the CLI, and all state files. Docker, Node.js, npm, and Ollama are preserved.
cd ~/.nemoclaw/source
./uninstall.sh
Uninstaller flags:
| Flag | Effect |
|---|---|
| `--yes` | Skip the confirmation prompt |
| `--keep-openshell` | Leave the openshell binary in place |
| `--delete-models` | Also remove the Ollama models pulled by NemoClaw |
To remove everything including the Ollama model:
./uninstall.sh --yes --delete-models
The uninstaller runs 6 steps:
1. Stops running NemoClaw services
2. Removes all sandboxes
3. Removes the OpenShell gateway and its Docker containers, images, and volumes
4. Uninstalls the nemoclaw npm package
5. Removes the Ollama models pulled by NemoClaw (only with --delete-models)
6. Removes state directories (~/.nemoclaw, ~/.config/openshell, ~/.config/nemoclaw) and the OpenShell binary

NOTE
The source clone at ~/.nemoclaw/source is removed as part of state cleanup in step 6. If you want to keep a local copy, move or back it up before running the uninstaller.
| Command | Description |
|---|---|
| `nemoclaw my-assistant connect` | Shell into the sandbox |
| `nemoclaw my-assistant status` | Show sandbox status and inference config |
| `nemoclaw my-assistant logs --follow` | Stream sandbox logs in real time |
| `nemoclaw list` | List all registered sandboxes |
| `nemoclaw start` | Start auxiliary services (Telegram bridge, cloudflared) |
| `nemoclaw stop` | Stop auxiliary services |
| `openshell term` | Open the monitoring TUI on the host |
| `openshell forward list` | List active port forwards |
| `openshell forward start 18789 my-assistant --background` | Restart port forwarding for the Web UI |
| `cd ~/.nemoclaw/source && ./uninstall.sh` | Remove NemoClaw (preserves Docker, Node.js, Ollama) |
| `cd ~/.nemoclaw/source && ./uninstall.sh --delete-models` | Remove NemoClaw and Ollama models |