NemoClaw with Nemotron-3-Super and Telegram on DGX Spark

30 MINS

Install NemoClaw on DGX Spark with local Ollama inference and Telegram bot integration

Overview

Basic idea

NVIDIA NemoClaw is an open-source reference stack that makes running OpenClaw always-on assistants simpler and safer. It installs the NVIDIA OpenShell runtime -- an environment designed for executing agents with additional security -- and open-source models such as NVIDIA Nemotron. A single installer command handles Node.js, OpenShell, and the NemoClaw CLI, then walks you through an onboarding wizard to create a sandboxed agent on your DGX Spark using Ollama with Nemotron 3 Super.

By the end of this playbook you will have a working AI agent inside an OpenShell sandbox, accessible via a web dashboard and a Telegram bot, with inference routed to a local Nemotron 3 Super 120B model on your Spark -- all without exposing your host filesystem or network to the agent.

What you'll accomplish

  • Configure Docker and the NVIDIA container runtime for OpenShell on DGX Spark
  • Install Ollama, pull Nemotron 3 Super 120B, and configure it for sandbox access
  • Install NemoClaw with a single command (handles Node.js, OpenShell, and the CLI)
  • Run the onboarding wizard to create a sandbox and configure local inference
  • Chat with the agent via the CLI, TUI, and web UI
  • Set up a Telegram bot that forwards messages to your sandboxed agent
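
The host-preparation steps in the list above can be sketched in shell. The `nvidia-ctk` and Ollama install commands are standard; the model tag `nemotron-3-super` is a placeholder (check the Ollama library for the actual Nemotron 3 Super 120B tag), and the NemoClaw installer itself handles Node.js, OpenShell, and the CLI.

```shell
# Sketch of host prep, assuming a stock DGX OS image.
# NOTE: "nemotron-3-super" is a placeholder model tag, not a confirmed name.

# 1. Point Docker at the NVIDIA container runtime (needed by OpenShell).
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# 2. Install Ollama and pull the model.
curl -fsSL https://ollama.com/install.sh | sh
ollama pull nemotron-3-super   # placeholder tag

# 3. Bind Ollama to all interfaces so sandboxed containers can reach it.
sudo systemctl edit ollama     # add: [Service] Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama
```

These are system-configuration commands and must run on the Spark itself; the onboarding wizard later points the sandbox at the Ollama endpoint.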

Notice and disclaimers

The following sections describe safety, risks, and your responsibilities when running this demo.

Quick start safety check

Use only a clean environment. Run this demo on a fresh device or VM with no personal data, confidential information, or sensitive credentials. Keep it isolated like a sandbox.

By installing this demo, you accept responsibility for all third-party components, including reviewing their licenses, terms, and security posture. Read and accept these terms before you install or use the demo.

What you're getting

This experience is provided "AS IS" for demonstration purposes only -- no warranties, no guarantees. This is a demo, not a production-ready solution. You will need to implement appropriate security controls for your environment and use case.

Key risks with AI agents

  • Data leakage -- Any materials the agent accesses could be exposed, leaked, or stolen.
  • Malicious code execution -- The agent or its connected tools could expose your system to malicious code or cyber-attacks.
  • Unintended actions -- The agent might modify or delete files, send messages, or access services without explicit approval.
  • Prompt injection and manipulation -- External inputs or connected content could hijack the agent's behavior in unexpected ways.

Participant acknowledgement

By participating in this demo, you acknowledge that you are solely responsible for your configuration and for any data, accounts, and tools you connect. To the maximum extent permitted by law, NVIDIA is not responsible for any loss of data, device damage, security incidents, or other harm arising from your configuration or use of NemoClaw demo materials, including OpenClaw or any connected tools or services.

Isolation layers (OpenShell)

Layer      | What it protects                                    | When it applies
-----------|-----------------------------------------------------|----------------------------
Filesystem | Prevents reads/writes outside allowed paths.        | Locked at sandbox creation.
Network    | Blocks unauthorized outbound connections.           | Hot-reloadable at runtime.
Process    | Blocks privilege escalation and dangerous syscalls. | Locked at sandbox creation.
Inference  | Reroutes model API calls to controlled backends.    | Hot-reloadable at runtime.
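
For intuition, roughly the same restrictions can be expressed with plain Docker flags. This is only an illustration of the concepts, not how OpenShell actually implements them:

```shell
# Illustration only: approximating the isolation layers with Docker flags.
#   --read-only / --tmpfs            filesystem: no writes outside allowed paths
#   --network none                   network: no outbound connections
#   --cap-drop / no-new-privileges   process: no privilege escalation
docker run --rm \
  --read-only --tmpfs /tmp \
  --network none \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  ubuntu:24.04 id
```

The inference layer has no Docker-flag equivalent; it is an API-level reroute of model calls rather than a kernel-level restriction.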

What to know before starting

  • Basic use of the Linux terminal and SSH
  • Familiarity with Docker (permissions, docker run)
  • Awareness of the security and risk sections above

Prerequisites

Hardware and access:

  • A DGX Spark (GB10) with keyboard and monitor, or SSH access
  • An NVIDIA API key from build.nvidia.com (needed for the Telegram bridge)
  • A Telegram bot token from @BotFather (create one with /newbot)
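
After /newbot, you can sanity-check the token against the Telegram Bot API's getMe method. The token below is a placeholder; substitute the one @BotFather sent you.

```shell
# Placeholder token for illustration; substitute your real bot token.
BOT_TOKEN="${BOT_TOKEN:-123456:PLACEHOLDER}"
API_URL="https://api.telegram.org/bot${BOT_TOKEN}/getMe"

# A valid token makes getMe return JSON with "ok":true and your bot's username:
# curl -s "$API_URL"
echo "getMe endpoint: ${API_URL}"
```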

Software:

  • Fresh install of DGX OS with latest updates

Verify your system before starting:

head -n 2 /etc/os-release
nvidia-smi
docker info --format '{{.ServerVersion}}'

Expected: Ubuntu 24.04, NVIDIA GB10 GPU, Docker 28.x+.
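
If you prefer a single summary, the three checks above can be wrapped in a small sketch script. Tools that are not installed are skipped rather than failed, so the script is safe to run on any machine:

```shell
#!/usr/bin/env bash
# Pre-flight sketch: prints [ok] with the first line of output for each
# available check, or [skip] when the underlying tool is not installed.
set -u

check() {
  # $1 = label, remaining args = command to run
  local label="$1"; shift
  if command -v "$1" >/dev/null 2>&1; then
    echo "[ok]   ${label}: $("$@" 2>/dev/null | head -n 1)"
  else
    echo "[skip] ${label}: '$1' not installed"
  fi
}

check "OS"     head -n 2 /etc/os-release
check "GPU"    nvidia-smi --query-gpu=name --format=csv,noheader
check "Docker" docker info --format '{{.ServerVersion}}'
```

On a correctly prepared Spark, all three lines should report [ok], matching the expected values above.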

Have ready before you begin

Item               | Where to get it
-------------------|-----------------------------------------------
NVIDIA API key     | build.nvidia.com/settings/api-keys
Telegram bot token | @BotFather on Telegram -- create with /newbot

Ancillary files

All required assets are handled by the NemoClaw installer. No manual cloning is needed.

Time and risk

  • Estimated time: 20--30 minutes (with Ollama and model already downloaded). First-time model download adds ~15--30 minutes depending on network speed.
  • Risk level: Medium -- you are running an AI agent in a sandbox; risks are reduced by isolation but not eliminated. Use a clean environment and do not connect sensitive data or production accounts.
  • Last Updated: 03/31/2026
    • First Publication