

NemoClaw with Nemotron 3 Super and Telegram on DGX Spark

30 MINS

Install NemoClaw on DGX Spark with local Ollama inference and Telegram bot integration

AI Agent, DGX, NemoClaw, Nemotron 3 Super, Ollama, OpenShell, Spark, Telegram
NemoClaw on GitHub

Phase 1: Prerequisites

These steps prepare a fresh DGX Spark for NemoClaw. If Docker, the NVIDIA runtime, and Ollama are already configured, skip to Phase 2.

Step 1
Configure Docker and the NVIDIA container runtime

OpenShell's gateway runs k3s inside Docker. On DGX Spark (Ubuntu 24.04, cgroup v2), Docker must be configured with the NVIDIA runtime and host cgroup namespace mode.

Configure the NVIDIA container runtime for Docker:

sudo nvidia-ctk runtime configure --runtime=docker

Set the cgroup namespace mode required by OpenShell on DGX Spark:

sudo python3 -c "
import json, os
path = '/etc/docker/daemon.json'
d = json.load(open(path)) if os.path.exists(path) else {}
d['default-cgroupns-mode'] = 'host'
json.dump(d, open(path, 'w'), indent=2)
"
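The one-liner above does a read-modify-write on daemon.json, so any keys already present are preserved rather than overwritten. If you want to see the merge behavior before touching the real file, the same pattern can be exercised on a throwaway file (the path below is a stand-in, not the real /etc/docker/daemon.json):

```python
import json, os, tempfile

# Stand-in for /etc/docker/daemon.json -- a temp file, safe to run anywhere.
path = os.path.join(tempfile.mkdtemp(), "daemon.json")

# Simulate a pre-existing config with one key already set.
with open(path, "w") as f:
    json.dump({"default-runtime": "nvidia"}, f)

# Same merge logic as the step above: load if present, set one key, write back.
d = json.load(open(path)) if os.path.exists(path) else {}
d["default-cgroupns-mode"] = "host"
json.dump(d, open(path, "w"), indent=2)

merged = json.load(open(path))
print(merged)  # both the pre-existing key and the new one survive
```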

Restart Docker:

sudo systemctl restart docker

Verify the NVIDIA runtime works:

docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

If docker commands fail with a permission denied error, add your user to the docker group and activate the new group in your current session:

sudo usermod -aG docker $USER
newgrp docker

This applies the group change immediately. Alternatively, you can log out and back in instead of running newgrp docker.

NOTE

DGX Spark uses cgroup v2. OpenShell's gateway embeds k3s inside Docker and needs host cgroup namespace access. Without default-cgroupns-mode: host, the gateway can fail with "Failed to start ContainerManager" errors.

Step 2
Install Ollama

Install Ollama:

curl -fsSL https://ollama.com/install.sh | sh

Configure Ollama to listen on all interfaces so the sandbox container can reach it:

sudo mkdir -p /etc/systemd/system/ollama.service.d
printf '[Service]\nEnvironment="OLLAMA_HOST=0.0.0.0"\n' | sudo tee /etc/systemd/system/ollama.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart ollama

Verify it is running and reachable on all interfaces:

curl http://0.0.0.0:11434

Expected: Ollama is running. If not, start it with sudo systemctl start ollama.

IMPORTANT

Always start Ollama via systemd (sudo systemctl restart ollama) — do not use ollama serve &. A manually started Ollama process does not pick up the OLLAMA_HOST=0.0.0.0 setting above, and the NemoClaw sandbox will not be able to reach the inference server.
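From another machine or a script, the simplest way to confirm Ollama is actually listening is a plain TCP probe of its default port, 11434. A minimal sketch (the host and port are parameters; this only checks reachability, not that the responder is Ollama):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the default Ollama port on this machine.
print(port_open("127.0.0.1", 11434))
```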

Step 3
Pull the Nemotron 3 Super model

Download Nemotron 3 Super 120B (~87 GB; may take 15–30 minutes depending on network speed):

ollama pull nemotron-3-super:120b

Run it briefly to pre-load weights into memory (type /bye to exit):

ollama run nemotron-3-super:120b

Verify the model is available:

ollama list

You should see nemotron-3-super:120b in the output.
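In scripts, that check can be automated by parsing the table `ollama list` prints, where the first column is the model name (column layout assumed from current Ollama releases; verify against your installed version):

```python
def models_from_listing(listing: str) -> list[str]:
    """Extract model names (first column) from `ollama list`-style output, skipping the header row."""
    lines = listing.strip().splitlines()
    return [line.split()[0] for line in lines[1:] if line.strip()]

# Sample output in the shape `ollama list` prints (ID and timestamps are made up).
sample = """NAME                     ID            SIZE    MODIFIED
nemotron-3-super:120b    a1b2c3d4e5f6  87 GB   2 minutes ago
"""
print("nemotron-3-super:120b" in models_from_listing(sample))  # True
```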


Phase 2: Install and Run NemoClaw

Step 4
Install NemoClaw

This single command handles everything: installs Node.js (if needed), installs OpenShell, clones the latest stable NemoClaw release, builds the CLI, and runs the onboard wizard to create a sandbox.

curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash

The onboard wizard walks you through setup:

  1. Sandbox name -- Pick a name (e.g. my-assistant). Names must be lowercase alphanumeric with hyphens only.
  2. Inference provider -- Select Local Ollama.
  3. Model -- Select nemotron-3-super:120b.
  4. Messaging channels -- If you want a Telegram bot, select telegram here and paste your bot token when prompted. Create the bot first via @BotFather in Telegram (see Step 9). If you skip this, you can re-run the installer later to recreate the sandbox with Telegram enabled.
  5. Policy presets -- Accept the suggested presets when prompted (hit Y).

IMPORTANT

Telegram must be configured at this step. The channel plugin and bot token are wired into the sandbox container during onboarding — they cannot be added to an existing sandbox by exporting environment variables on the host.

When complete you will see output like:

──────────────────────────────────────────────────
Dashboard    http://localhost:18789/
Sandbox      my-assistant (Landlock + seccomp + netns)
Model        nemotron-3-super:120b (Local Ollama)
──────────────────────────────────────────────────
Run:         nemoclaw my-assistant connect
Status:      nemoclaw my-assistant status
Logs:        nemoclaw my-assistant logs --follow
──────────────────────────────────────────────────

IMPORTANT

Save the tokenized Web UI URL printed at the end -- you will need it in Step 8. It looks like: http://127.0.0.1:18789/#token=<long-token-here>

NOTE

If nemoclaw is not found after install, run source ~/.bashrc to reload your shell path.

Step 5
Connect to the sandbox and verify inference

Connect to the sandbox:

nemoclaw my-assistant connect

You will see sandbox@my-assistant:~$ -- you are now inside the sandboxed environment.

Verify that the inference route is working:

curl -sf https://inference.local/v1/models

Expected: JSON listing nemotron-3-super:120b.
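The /v1/models route follows the OpenAI-compatible schema, i.e. a JSON object with a "data" array of entries keyed by "id" (shape assumed from that convention, not confirmed against NemoClaw's docs). A small helper to check a model is being served:

```python
import json

def model_served(models_json: str, name: str) -> bool:
    """True if `name` appears among the model ids in an OpenAI-style /v1/models response."""
    payload = json.loads(models_json)
    return any(m.get("id") == name for m in payload.get("data", []))

# Sample response in the OpenAI-compatible shape.
resp = '{"object": "list", "data": [{"id": "nemotron-3-super:120b", "object": "model"}]}'
print(model_served(resp, "nemotron-3-super:120b"))  # True
```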

Step 6
Talk to the agent (CLI)

Still inside the sandbox, send a test message:

openclaw agent --agent main -m "hello" --session-id test

The agent will respond using Nemotron 3 Super. The first response may take 30–90 seconds for a 120B-parameter model running locally.

Step 7
Interactive TUI

Launch the terminal UI for an interactive chat session:

openclaw tui

Press Ctrl+C to exit the TUI.

Step 8
Exit the sandbox and access the Web UI

Exit the sandbox to return to the host:

exit

If accessing the Web UI directly on the Spark (keyboard and monitor attached), open a browser and navigate to the tokenized URL from Step 4:

http://127.0.0.1:18789/#token=<long-token-here>

If accessing the Web UI from a remote machine, you need to set up an SSH tunnel. The NemoClaw onboard wizard already created the port 18789 forward on the Spark, so you only need to tunnel from your remote machine.

First, find your Spark's IP address. On the Spark, run:

hostname -I | awk '{print $1}'

This prints the primary IP address (e.g. 192.168.1.42). You can also find it in Settings > Wi-Fi or Settings > Network on the Spark's desktop, or check your router's connected-devices list.

From your remote machine, create an SSH tunnel to the Spark (replace <your-spark-ip> with the IP address from above):

ssh -L 18789:127.0.0.1:18789 <your-user>@<your-spark-ip>

Now open the tokenized URL in your remote machine's browser:

http://127.0.0.1:18789/#token=<long-token-here>

IMPORTANT

Use 127.0.0.1, not localhost -- the gateway origin check requires an exact match.

NOTE

If the Web UI fails to load and the port forward may be stale, reset it on the Spark host:

openshell forward stop 18789 my-assistant || true
openshell forward start 18789 my-assistant --background

Phase 3: Telegram Bot

IMPORTANT

Telegram must be enabled in the NemoClaw onboard wizard (Step 4 → Messaging channels). The channel plugin and bot token are wired into the sandbox container at sandbox creation time — policy-add only opens network egress and is not enough on its own. If you skipped Telegram during onboard, re-run the installer to recreate the sandbox with Telegram enabled.

Step 9
Create a Telegram bot

Do this before running the NemoClaw installer in Step 4 so you have your bot token ready when the wizard prompts for it.

Open Telegram, find @BotFather, send /newbot, and follow the prompts. Copy the bot token it gives you and paste it into the wizard when you reach the Messaging channels step.

Step 10
Install cloudflared and start the Telegram bridge

The Telegram bridge needs a public webhook URL so Telegram can deliver messages to your bot. NemoClaw uses cloudflared to create a free trycloudflare.com tunnel.

Make sure you are on the host (not inside the sandbox). If you are inside the sandbox, run exit first.

Install cloudflared (DGX Spark is arm64):

curl -L --output cloudflared.deb \
  https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-arm64.deb
sudo dpkg -i cloudflared.deb
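The .deb above is the arm64 build, which matches DGX Spark. If you are adapting these steps to another machine, a quick architecture check before downloading (not part of the official steps; arm64 systems report "aarch64", x86 systems "x86_64"):

```python
import platform

# Machine architecture as the kernel reports it.
print(platform.machine())
```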

Start the tunnel:

nemoclaw tunnel start

Verify the public URL is live:

nemoclaw status

You should see ● cloudflared with a trycloudflare.com public URL (e.g. https://assembled-peer-persian-kitty.trycloudflare.com).

Open Telegram, find your bot, and send it a message. The bot forwards it to the agent and replies.

NOTE

If nemoclaw tunnel start prints cloudflared not found — no public URL, the cloudflared install above did not complete successfully. Re-run the install, then restart the tunnel:

nemoclaw tunnel stop && nemoclaw tunnel start

NOTE

The first response may take 30–90 seconds for a 120B-parameter model running locally.

NOTE

If sending a message returns Error: Channel is unavailable: telegram, the channel was not enabled during onboard. Re-run the installer to recreate the sandbox with Telegram selected at the Messaging channels step.

NOTE

For details on restricting which Telegram chats can interact with the agent, see the NemoClaw Telegram bridge documentation.


Phase 4: Cleanup and Uninstall

Step 11
Stop services

Stop the cloudflared tunnel:

nemoclaw tunnel stop

Stop the port forward:

openshell forward list          # find active forwards
openshell forward stop 18789    # stop the dashboard forward

Step 12
Uninstall NemoClaw

Run the uninstaller via curl (matches the NemoClaw README). It removes all sandboxes, the OpenShell gateway, Docker containers/images/volumes, the CLI, and all state files. Docker, Node.js, npm, and Ollama are preserved.

curl -fsSL https://raw.githubusercontent.com/NVIDIA/NemoClaw/refs/heads/main/uninstall.sh | bash

Uninstaller flags (pass via bash -s -- <flags>):

Flag              Effect
--yes             Skip the confirmation prompt
--keep-openshell  Leave the openshell binary in place
--delete-models   Also remove the Ollama models pulled by NemoClaw

To remove everything including the Ollama model, non-interactively:

curl -fsSL https://raw.githubusercontent.com/NVIDIA/NemoClaw/refs/heads/main/uninstall.sh | bash -s -- --yes --delete-models

The uninstaller runs 6 steps:

  1. Stop NemoClaw helper services and port-forward processes
  2. Delete all OpenShell sandboxes, the NemoClaw gateway, and providers
  3. Remove the global nemoclaw npm package
  4. Remove NemoClaw/OpenShell Docker containers, images, and volumes
  5. Remove Ollama models (only with --delete-models)
  6. Remove state directories (~/.nemoclaw, ~/.config/openshell, ~/.config/nemoclaw) and the OpenShell binary

NOTE

If you have a local clone at ~/.nemoclaw/source you want to keep, move or back it up before running the uninstaller — it is removed as part of state cleanup in step 6.

Useful commands

Command                                                    Description
nemoclaw my-assistant connect                              Shell into the sandbox
nemoclaw my-assistant status                               Show sandbox status and inference config
nemoclaw my-assistant logs --follow                        Stream sandbox logs in real time
nemoclaw list                                              List all registered sandboxes
nemoclaw tunnel start                                      Start the cloudflared tunnel (public URL for Telegram webhooks)
nemoclaw tunnel stop                                       Stop the cloudflared tunnel
openshell term                                             Open the monitoring TUI on the host
openshell forward list                                     List active port forwards
openshell forward start 18789 my-assistant --background    Restart port forwarding for the Web UI
curl -fsSL https://raw.githubusercontent.com/NVIDIA/NemoClaw/refs/heads/main/uninstall.sh | bash
                                                           Remove NemoClaw (preserves Docker, Node.js, Ollama)
curl -fsSL https://raw.githubusercontent.com/NVIDIA/NemoClaw/refs/heads/main/uninstall.sh | bash -s -- --delete-models
                                                           Remove NemoClaw and Ollama models

Resources

  • NemoClaw
  • NemoClaw Documentation
  • OpenClaw Documentation
  • DGX Spark Documentation
  • DGX Spark Forum

Copyright Ā© 2026 NVIDIA Corporation