
NemoClaw with Nemotron-3-Super and vLLM on DGX Station

30 MINS

Install NemoClaw on DGX Station with local vLLM inference and Telegram bot integration

AI Agent · DGX · DGX Station · GB300 · NemoClaw · Nemotron-3-Super · OpenShell · Telegram · vLLM
NemoClaw on GitHub

Overview

Basic idea

NVIDIA NemoClaw is an open-source reference stack that makes it simpler and safer to run OpenClaw always-on assistants. It installs the NVIDIA OpenShell runtime -- an environment designed for executing agents with additional security -- along with open-source models such as NVIDIA Nemotron. A single installer command handles Node.js, OpenShell, and the NemoClaw CLI, then walks you through the onboard wizard to create a sandboxed agent on your DGX Station using vLLM with Nemotron 3 Super.

By the end of this playbook you will have a working AI agent inside an OpenShell sandbox, accessible via a web dashboard and a Telegram bot, with inference routed to a local Nemotron 3 Super 120B model served by vLLM on your DGX Station -- all without exposing your host filesystem or network to the agent.

What you'll accomplish

  • Configure Docker and the NVIDIA container runtime for OpenShell on DGX Station
  • Pull Nemotron 3 Super 120B (NVFP4) from Hugging Face and serve it with vLLM
  • Install NemoClaw with a single command (handles Node.js, OpenShell, and the CLI)
  • Run the onboard wizard to create a sandbox and configure local vLLM inference
  • Chat with the agent via the CLI, TUI, and web UI
  • Set up a Telegram bot that forwards messages to your sandboxed agent
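
The inference step above can be sketched with vLLM's OpenAI-compatible server. The Hugging Face repo id below is an assumption for illustration; the onboard wizard configures the real values for you, and the Instructions tab has the exact command for DGX Station:

vllm serve nvidia/Nemotron-3-Super-120B-NVFP4 --port 8000    # repo id illustrative

Once the server is up, any OpenAI-compatible client can point at http://localhost:8000/v1, which is how the NemoClaw sandbox reaches the model.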

Notice and disclaimers

The following sections describe safety, risks, and your responsibilities when running this demo.

Quick start safety check

Use only a clean environment. Run this demo on a fresh device or VM with no personal data, confidential information, or sensitive credentials. Keep it isolated like a sandbox.

By installing this demo, you accept responsibility for all third-party components, including reviewing their licenses, terms, and security posture. Read and accept before you install or use.

What you're getting

This experience is provided "AS IS" for demonstration purposes only -- no warranties, no guarantees. This is a demo, not a production-ready solution. You will need to implement appropriate security controls for your environment and use case.

Key risks with AI agents

  • Data leakage -- Any materials the agent accesses could be exposed, leaked, or stolen.
  • Malicious code execution -- The agent or its connected tools could expose your system to malicious code or cyber-attacks.
  • Unintended actions -- The agent might modify or delete files, send messages, or access services without explicit approval.
  • Prompt injection and manipulation -- External inputs or connected content could hijack the agent's behavior in unexpected ways.

Participant acknowledgement

By participating in this demo, you acknowledge that you are solely responsible for your configuration and for any data, accounts, and tools you connect. To the maximum extent permitted by law, NVIDIA is not responsible for any loss of data, device damage, security incidents, or other harm arising from your configuration or use of NemoClaw demo materials, including OpenClaw or any connected tools or services.

Isolation layers (OpenShell)

Layer        What it protects                                       When it applies
Filesystem   Prevents reads/writes outside allowed paths.           Locked at sandbox creation.
Network      Blocks unauthorized outbound connections.              Hot-reloadable at runtime.
Process      Blocks privilege escalation and dangerous syscalls.    Locked at sandbox creation.
Inference    Reroutes model API calls to controlled backends.       Hot-reloadable at runtime.
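
As a purely illustrative sketch of these four layers -- the field names below are hypothetical, not OpenShell's actual schema -- a sandbox policy could look like:

# Hypothetical policy sketch; not OpenShell's real configuration format.
sandbox:
  filesystem:                  # locked at sandbox creation
    allow:
      - /workspace
  network:                     # hot-reloadable at runtime
    outbound:
      - host: localhost
        port: 8000             # local vLLM endpoint only
  process:                     # locked at sandbox creation
    no_new_privileges: true
  inference:                   # hot-reloadable at runtime
    backend: http://localhost:8000/v1

The key distinction to remember is the last column of the table: filesystem and process rules are fixed when the sandbox is created, while network and inference routing can be changed while the agent is running.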

What to know before starting

  • Basic use of the Linux terminal and SSH
  • Familiarity with Docker (permissions, docker run)
  • Awareness of the security and risk sections above

Prerequisites

Hardware and access:

  • A DGX Station (GB300) with keyboard and monitor, or SSH access
  • A Telegram bot token from @BotFather (create one with /newbot) -- optional, for Phase 3

Software:

  • Fresh install of DGX OS with latest updates

Verify your system before starting:

head -n 2 /etc/os-release                        # OS name and version
nvidia-smi                                       # GPU visibility and driver
docker info --format '{{.ServerVersion}}'        # Docker server version
df -h / /var/lib/docker 2>/dev/null | head -20   # free disk space

Expected: Ubuntu 24.04, NVIDIA GB300 GPU(s), Docker 28.x+, and enough free disk for Docker layers, the NemoClaw sandbox image, and Hugging Face cache (treat ~40 GB free on the Docker data filesystem as a practical minimum; very low free space can surface as cryptic onboard errors such as “K8s namespace not ready”).
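
A quick way to apply that ~40 GB rule of thumb (assuming the default Docker data root of /var/lib/docker; adjust the path if yours differs):

```shell
# Check free space (in GB) where Docker stores its data; warn below ~40 GB.
dir=/var/lib/docker
[ -d "$dir" ] || dir=/    # fall back to the root filesystem
avail=$(df -BG --output=avail "$dir" | tail -n 1 | tr -dc '0-9')
if [ "$avail" -ge 40 ]; then
  echo "disk ok: ${avail} GB free"
else
  echo "low disk: ${avail} GB free (recommend >= 40 GB)"
fi
```

If this reports low disk, free space before running the installer rather than debugging onboard errors afterward.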

Have ready before you begin

Item                             Where to get it
Telegram bot token (optional)    @BotFather on Telegram -- create with /newbot

Ancillary files

All required assets are handled by the NemoClaw installer. No manual cloning is needed.

Time and risk

  • Estimated time: 20--30 minutes (with model already downloaded). First-time model download adds ~10--20 minutes depending on network speed.
  • Risk level: Medium -- you are running an AI agent in a sandbox; risks are reduced by isolation but not eliminated. Use a clean environment and do not connect sensitive data or production accounts.
  • Last Updated: 04/27/2026
    • First publication for DGX Station with vLLM

Resources

  • NemoClaw
  • NemoClaw Documentation
  • OpenClaw Documentation
  • vLLM Documentation
  • Nemotron-3-Super on Hugging Face
  • DGX Station Documentation
  • DGX Station Forum

Copyright © 2026 NVIDIA Corporation