NemoClaw with Nemotron-3-Super and Telegram on DGX Spark

30 MINS

Install NemoClaw on DGX Spark with local Ollama inference and Telegram bot integration

Phase 1: Prerequisites

These steps prepare a fresh DGX Spark for NemoClaw. If Docker, the NVIDIA runtime, and Ollama are already configured, skip to Phase 2.

Configure Docker and the NVIDIA container runtime

OpenShell's gateway runs k3s inside Docker. On DGX Spark (Ubuntu 24.04, cgroup v2), Docker must be configured with the NVIDIA runtime and host cgroup namespace mode.

Configure the NVIDIA container runtime for Docker:

sudo nvidia-ctk runtime configure --runtime=docker

Set the cgroup namespace mode required by OpenShell on DGX Spark:

sudo python3 -c "
import json, os
path = '/etc/docker/daemon.json'
d = json.load(open(path)) if os.path.exists(path) else {}
d['default-cgroupns-mode'] = 'host'
json.dump(d, open(path, 'w'), indent=2)
"
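After both commands, /etc/docker/daemon.json should contain roughly the following (the runtimes block is written by nvidia-ctk; exact paths can differ by toolkit version, so treat this as an illustrative shape, not a file to paste):

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-cgroupns-mode": "host"
}
```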

Restart Docker:

sudo systemctl restart docker

Verify the NVIDIA runtime works:

docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

If you get a "permission denied" error from Docker, add your user to the docker group and log out and back in:

sudo usermod -aG docker $USER

NOTE

DGX Spark uses cgroup v2. OpenShell's gateway embeds k3s inside Docker and needs host cgroup namespace access. Without default-cgroupns-mode: host, the gateway can fail with "Failed to start ContainerManager" errors.
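If you want to confirm the host really is on cgroup v2 before touching the Docker config, the unified hierarchy can be detected from the filesystem. A minimal sketch (the helper name and configurable root are ours, for illustration):

```python
from pathlib import Path

def is_cgroup_v2(cgroup_root: str = "/sys/fs/cgroup") -> bool:
    # A pure cgroup v2 host mounts the unified hierarchy at the root,
    # which exposes a top-level cgroup.controllers file; cgroup v1
    # hierarchies do not have one at this level.
    return (Path(cgroup_root) / "cgroup.controllers").is_file()
```

On DGX Spark (Ubuntu 24.04) this should return True.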

Install Ollama

Install Ollama:

curl -fsSL https://ollama.com/install.sh | sh

Verify it is running:

curl http://localhost:11434

Expected: Ollama is running. If not, start it: ollama serve &

Configure Ollama to listen on all interfaces so the sandbox container can reach it:

sudo mkdir -p /etc/systemd/system/ollama.service.d
printf '[Service]\nEnvironment="OLLAMA_HOST=0.0.0.0"\n' | sudo tee /etc/systemd/system/ollama.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart ollama

Pull the Nemotron 3 Super model

Download Nemotron 3 Super 120B (~87 GB; may take 15--30 minutes depending on network speed):

ollama pull nemotron-3-super:120b
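The 15--30 minute estimate follows directly from the model size. A back-of-the-envelope helper (our own, ignoring protocol overhead and server-side throttling):

```python
def download_minutes(size_gb: float, link_mbps: float) -> float:
    # GB -> megabits (x 8000), divided by link speed in Mbit/s,
    # then seconds -> minutes.
    return size_gb * 8000 / link_mbps / 60
```

At 87 GB, a 1 Gbit/s link needs roughly 12 minutes and a 400 Mbit/s link roughly 29, which brackets the quoted range.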

Run it briefly to pre-load weights into memory (type /bye to exit):

ollama run nemotron-3-super:120b

Verify the model is available:

ollama list

You should see nemotron-3-super:120b in the output.
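If you are scripting the install, this check can be automated by scanning the `ollama list` output for an exact name:tag match in the first column (a sketch assuming the usual tabular format with a header row):

```python
def model_present(listing: str, model: str) -> bool:
    # Skip the header row; the name:tag is the first
    # whitespace-separated column of each subsequent line.
    for line in listing.splitlines()[1:]:
        cols = line.split()
        if cols and cols[0] == model:
            return True
    return False
```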


Phase 2: Install and Run NemoClaw

Install NemoClaw

This single command handles everything: installs Node.js (if needed), installs OpenShell, clones NemoClaw at the pinned stable release (v0.0.4), builds the CLI, and runs the onboard wizard to create a sandbox.

curl -fsSL https://www.nvidia.com/nemoclaw.sh | NEMOCLAW_INSTALL_TAG=v0.0.4 bash

The onboard wizard walks you through setup:

  1. Sandbox name -- Pick a name (e.g. my-assistant). Names must be lowercase alphanumeric with hyphens only.
  2. Inference provider -- Select Local Ollama (option 7).
  3. Model -- Select nemotron-3-super:120b (option 1).
  4. Policy presets -- Accept the suggested presets when prompted (hit Y).
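The name rule in step 1 ("lowercase alphanumeric with hyphens only") can be read as the following regex. This is our interpretation for illustration; the wizard's actual grammar may additionally constrain length or hyphen placement:

```python
import re

# Lowercase alphanumeric runs joined by single hyphens:
# no uppercase, no leading/trailing hyphen, no other characters.
NAME_RE = re.compile(r"[a-z0-9]+(?:-[a-z0-9]+)*")

def valid_sandbox_name(name: str) -> bool:
    return NAME_RE.fullmatch(name) is not None
```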

When complete you will see output like:

──────────────────────────────────────────────────
Dashboard    http://localhost:18789/
Sandbox      my-assistant (Landlock + seccomp + netns)
Model        nemotron-3-super:120b (Local Ollama)
──────────────────────────────────────────────────
Run:         nemoclaw my-assistant connect
Status:      nemoclaw my-assistant status
Logs:        nemoclaw my-assistant logs --follow
──────────────────────────────────────────────────

IMPORTANT

Save the tokenized Web UI URL printed at the end -- you will need it later to access the Web UI. It looks like: http://127.0.0.1:18789/#token=<long-token-here>

NOTE

If nemoclaw is not found after install, run source ~/.bashrc to reload your shell path.

Connect to the sandbox and verify inference

Connect to the sandbox:

nemoclaw my-assistant connect

You will see sandbox@my-assistant:~$ -- you are now inside the sandboxed environment.

Verify that the inference route is working:

curl -sf https://inference.local/v1/models

Expected: JSON listing nemotron-3-super:120b.
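The /v1/models path suggests an OpenAI-compatible response, so the check can be done mechanically. A sketch assuming the standard `{"data": [{"id": ...}]}` list shape (an assumption -- verify against the actual response):

```python
import json

def model_ids(body: str) -> list[str]:
    # An OpenAI-style /v1/models response wraps the models in a
    # "data" array, one object per model, with the name under "id".
    return [m["id"] for m in json.loads(body).get("data", [])]
```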

Talk to the agent (CLI)

Still inside the sandbox, send a test message:

openclaw agent --agent main --local -m "hello" --session-id test

The agent will respond using Nemotron 3 Super. First responses may take 30--90 seconds for a 120B-parameter model running locally.

Interactive TUI

Launch the terminal UI for an interactive chat session:

openclaw tui

Press Ctrl+C to exit the TUI.

Exit the sandbox and access the Web UI

Exit the sandbox to return to the host:

exit

If accessing the Web UI directly on the Spark (keyboard and monitor attached), open a browser and navigate to the tokenized URL saved earlier:

http://127.0.0.1:18789/#token=<long-token-here>

If accessing the Web UI from a remote machine, you need to set up port forwarding.

Start the port forward on the Spark host:

openshell forward start 18789 my-assistant --background

Then from your remote machine, create an SSH tunnel to the Spark:

ssh -L 18789:127.0.0.1:18789 <your-user>@<your-spark-ip>

Now open the tokenized URL in your remote machine's browser:

http://127.0.0.1:18789/#token=<long-token-here>

IMPORTANT

Use 127.0.0.1, not localhost -- the gateway origin check requires an exact match.
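Why the distinction matters: the origin check compares host strings literally, so two names for the same interface count as different origins. A sketch of that comparison (illustrative, not the gateway's actual code):

```python
from urllib.parse import urlparse

def same_origin(url_a: str, url_b: str) -> bool:
    # Origins match only when scheme, host string, and port are all
    # identical; "localhost" and "127.0.0.1" resolve to the same
    # interface but are different host strings.
    a, b = urlparse(url_a), urlparse(url_b)
    return (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)
```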


Phase 3: Telegram Bot

Prepare credentials

You need two items:

Telegram bot token -- Open Telegram, find @BotFather, send /newbot, and follow the prompts. Copy the token it gives you.
NVIDIA API key -- Go to build.nvidia.com/settings/api-keys and create or copy a key (starts with nvapi-).

Configure and start the Telegram bridge

Make sure you are on the host (not inside the sandbox). If you are inside the sandbox, run exit first.

Set the required environment variables. Replace the placeholders with your actual values. SANDBOX_NAME must match the sandbox name you chose during the onboard wizard:

export TELEGRAM_BOT_TOKEN=<your-bot-token>
export SANDBOX_NAME=my-assistant

Add the Telegram network policy to the sandbox:

nemoclaw my-assistant policy-add

When prompted, type telegram and hit Y to confirm.

Start the Telegram bridge. On first run it will ask for your NVIDIA API key:

nemoclaw start

Paste your nvapi- key when prompted.

You should see:

[services] telegram-bridge started
Telegram:    bridge running

Open Telegram, find your bot, and send it a message. The bot forwards it to the agent and replies.

NOTE

The first response may include a debug log line like "gateway Running as non-root..." -- this is cosmetic and can be ignored.

NOTE

If you need to restart the bridge, nemoclaw stop may not cleanly stop the process. If that happens, find and kill the bridge process via its PID file:

kill -9 "$(cat /tmp/nemoclaw-services-${SANDBOX_NAME}/telegram-bridge.pid)"

Then run nemoclaw start again.
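The PID-file dance works because a stale file points at a process that no longer exists. Signal 0 is the standard liveness probe for this; a sketch (helper name ours):

```python
import os

def pid_running(pid: int) -> bool:
    # Signal 0 delivers nothing but still performs the existence and
    # permission checks, so it distinguishes live from stale PIDs.
    try:
        os.kill(pid, 0)
        return True
    except ProcessLookupError:
        return False          # stale PID file: process is gone
    except PermissionError:
        return True           # exists, but owned by another user
```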


Phase 4: Cleanup and Uninstall

Stop services

Stop any running auxiliary services (Telegram bridge, cloudflared):

nemoclaw stop

Stop the port forward:

openshell forward list          # find active forwards
openshell forward stop 18789    # stop the dashboard forward

Uninstall NemoClaw

Run the uninstaller from the cloned source directory. It removes all sandboxes, the OpenShell gateway, Docker containers/images/volumes, the CLI, and all state files. Docker, Node.js, npm, and Ollama are preserved.

cd ~/.nemoclaw/source
./uninstall.sh

Uninstaller flags:

Flag               Effect
--yes              Skip the confirmation prompt
--keep-openshell   Leave the openshell binary in place
--delete-models    Also remove the Ollama models pulled by NemoClaw

To remove everything including the Ollama model:

./uninstall.sh --yes --delete-models

The uninstaller runs 6 steps:

  1. Stop NemoClaw helper services and port-forward processes
  2. Delete all OpenShell sandboxes, the NemoClaw gateway, and providers
  3. Remove the global nemoclaw npm package
  4. Remove NemoClaw/OpenShell Docker containers, images, and volumes
  5. Remove Ollama models (only with --delete-models)
  6. Remove state directories (~/.nemoclaw, ~/.config/openshell, ~/.config/nemoclaw) and the OpenShell binary

NOTE

The source clone at ~/.nemoclaw/source is removed as part of state cleanup in step 6. If you want to keep a local copy, move or back it up before running the uninstaller.

Useful commands

Command                                                    Description
nemoclaw my-assistant connect                              Shell into the sandbox
nemoclaw my-assistant status                               Show sandbox status and inference config
nemoclaw my-assistant logs --follow                        Stream sandbox logs in real time
nemoclaw list                                              List all registered sandboxes
nemoclaw start                                             Start auxiliary services (Telegram bridge)
nemoclaw stop                                              Stop auxiliary services
openshell term                                             Open the monitoring TUI on the host
openshell forward list                                     List active port forwards
openshell forward start 18789 my-assistant --background    Restart port forwarding for the Web UI
cd ~/.nemoclaw/source && ./uninstall.sh                    Remove NemoClaw (preserves Docker, Node.js, Ollama)
cd ~/.nemoclaw/source && ./uninstall.sh --delete-models    Remove NemoClaw and Ollama models