NemoClaw with Nemotron 3 Super and Telegram on DGX Spark

30 MINS

Install NemoClaw on DGX Spark with local Ollama inference and Telegram bot integration

Tags: AI Agent, DGX, NemoClaw, Nemotron 3 Super, Ollama, OpenShell, Spark, Telegram
NemoClaw on GitHub

Overview

Basic idea

NVIDIA NemoClaw is an open-source reference stack that makes running OpenClaw always-on assistants simpler and safer. It installs the NVIDIA OpenShell runtime -- an environment designed for executing agents with additional security -- along with open-source models like NVIDIA Nemotron. A single installer command handles Node.js, OpenShell, and the NemoClaw CLI, then walks you through an onboard wizard that creates a sandboxed agent on your DGX Spark, using Ollama with Nemotron 3 Super for inference.

By the end of this playbook you will have a working AI agent inside an OpenShell sandbox, accessible via a web dashboard and a Telegram bot, with inference routed to a local Nemotron 3 Super 120B model on your Spark -- all without exposing your host filesystem or network to the agent.

What you'll accomplish

  • Configure Docker and the NVIDIA container runtime for OpenShell on DGX Spark
  • Install Ollama, pull Nemotron 3 Super 120B, and configure it for sandbox access
  • Install NemoClaw with a single command (handles Node.js, OpenShell, and the CLI)
  • Run the onboard wizard to create a sandbox and configure local inference
  • Chat with the agent via the CLI, TUI, and web UI
  • Set up a Telegram bot that forwards messages to your sandboxed agent
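One of the steps above, configuring Ollama for sandbox access, usually comes down to making the Ollama server listen on a non-loopback address so that code inside a container can reach it. A minimal sketch, assuming Ollama's default port and that the sandbox reaches the host via the default Docker bridge (the bridge IP is an assumption -- confirm it on your system):

```shell
# Sketch only: OLLAMA_HOST is Ollama's standard listen-address variable;
# the bridge address a sandbox uses to reach the host is an assumption here.
export OLLAMA_HOST="0.0.0.0:11434"   # listen on all interfaces, default Ollama port

# From inside a container, the host is often reachable at the Docker bridge IP,
# e.g. 172.17.0.1 (assumption -- confirm with `ip addr show docker0`).
OLLAMA_URL="http://172.17.0.1:11434"
echo "sandbox should call: $OLLAMA_URL"
```

Restart the Ollama service after changing its listen address so the new setting takes effect.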

Notice and disclaimers

The following sections describe safety, risks, and your responsibilities when running this demo.

Quick start safety check

Use only a clean environment. Run this demo on a fresh device or VM with no personal data, confidential information, or sensitive credentials. Keep it isolated like a sandbox.

By installing this demo, you accept responsibility for all third-party components, including reviewing their licenses, terms, and security posture. Read and accept before you install or use.

What you're getting

This experience is provided "AS IS" for demonstration purposes only -- no warranties, no guarantees. This is a demo, not a production-ready solution. You will need to implement appropriate security controls for your environment and use case.

Key risks with AI agents

  • Data leakage -- Any materials the agent accesses could be exposed, leaked, or stolen.
  • Malicious code execution -- The agent or its connected tools could expose your system to malicious code or cyber-attacks.
  • Unintended actions -- The agent might modify or delete files, send messages, or access services without explicit approval.
  • Prompt injection and manipulation -- External inputs or connected content could hijack the agent's behavior in unexpected ways.

Participant acknowledgement

By participating in this demo, you acknowledge that you are solely responsible for your configuration and for any data, accounts, and tools you connect. To the maximum extent permitted by law, NVIDIA is not responsible for any loss of data, device damage, security incidents, or other harm arising from your configuration or use of NemoClaw demo materials, including OpenClaw or any connected tools or services.

Isolation layers (OpenShell)

Layer      | What it protects                                    | When it applies
-----------|-----------------------------------------------------|----------------------------
Filesystem | Prevents reads/writes outside allowed paths.        | Locked at sandbox creation.
Network    | Blocks unauthorized outbound connections.           | Hot-reloadable at runtime.
Process    | Blocks privilege escalation and dangerous syscalls. | Locked at sandbox creation.
Inference  | Reroutes model API calls to controlled backends.    | Hot-reloadable at runtime.
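This playbook does not show OpenShell's actual policy format. Purely as an illustration of how the four layers could map onto a declarative policy, a sandbox configuration might look like the following -- every key name and value here is hypothetical, not taken from OpenShell documentation:

```json
{
  "filesystem": { "allow": ["/workspace"], "deny": ["/home", "/etc"] },
  "network":    { "outbound_allow": ["api.telegram.org:443"], "hot_reload": true },
  "process":    { "no_new_privs": true, "seccomp_profile": "default" },
  "inference":  { "backend": "http://172.17.0.1:11434", "hot_reload": true }
}
```

In such a scheme the filesystem and process rules would be fixed when the sandbox is created, while the network and inference rules could be changed at runtime, matching the table above.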

What to know before starting

  • Basic use of the Linux terminal and SSH
  • Familiarity with Docker (permissions, docker run)
  • Awareness of the security and risk sections above

Prerequisites

Hardware and access:

  • A DGX Spark (GB10) with keyboard and monitor, or SSH access
  • An NVIDIA API key from build.nvidia.com (needed for the Telegram bridge)
  • A Telegram bot token from @BotFather (create one with /newbot)

Software:

  • Fresh install of DGX OS with latest updates

Verify your system before starting:

head -n 2 /etc/os-release
nvidia-smi
docker info --format '{{.ServerVersion}}'

Expected: Ubuntu 24.04, NVIDIA GB10 GPU, Docker 28.x+.
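If any of those checks fails, a quick way to see what is missing is a simple tool sweep. The tool list below mirrors this playbook's stack; `ollama` will report as missing until a later step installs it:

```shell
# Report which required CLI tools are on PATH; purely diagnostic, changes nothing.
for tool in nvidia-smi docker ollama; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "missing: $tool"
  fi
done
```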

Have ready before you begin

Item               | Where to get it
-------------------|-----------------------------------------------
NVIDIA API key     | build.nvidia.com/settings/api-keys
Telegram bot token | @BotFather on Telegram -- create with /newbot
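Both credentials are typically handed to command-line tools via environment variables. A sketch with placeholder values -- the variable names are assumptions for illustration, not NemoClaw's documented names:

```shell
# Placeholder values: substitute your real keys. Variable names are assumed,
# not taken from NemoClaw documentation.
export NVIDIA_API_KEY="nvapi-xxxxxxxx"          # from build.nvidia.com
export TELEGRAM_BOT_TOKEN="123456:ABC-example"  # from @BotFather's /newbot reply
echo "configured: NVIDIA_API_KEY, TELEGRAM_BOT_TOKEN"
```

Keep real keys out of shell history and version control; an untracked env file you `source` is one common approach.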

Ancillary files

All required assets are handled by the NemoClaw installer. No manual cloning is needed.

Time and risk

  • Estimated time: 20--30 minutes (with Ollama and model already downloaded). First-time model download adds ~15--30 minutes depending on network speed.
  • Risk level: Medium -- you are running an AI agent in a sandbox; risks are reduced by isolation but not eliminated. Use a clean environment and do not connect sensitive data or production accounts.
  • Last Updated: 03/31/2026 (First Publication)

Resources

  • NemoClaw
  • NemoClaw Documentation
  • OpenClaw Documentation
  • DGX Spark Documentation
  • DGX Spark Forum

Copyright Ā© 2026 NVIDIA Corporation