Install NemoClaw on DGX Station with local vLLM inference and Telegram bot integration
NVIDIA NemoClaw is an open-source reference stack for running OpenClaw always-on assistants more safely. It installs the NVIDIA OpenShell runtime -- an environment designed for executing agents with additional security -- along with open-source models such as NVIDIA Nemotron. A single installer command handles Node.js, OpenShell, and the NemoClaw CLI, then walks you through an onboarding wizard to create a sandboxed agent on your DGX Station using vLLM with Nemotron 3 Super.
By the end of this playbook you will have a working AI agent inside an OpenShell sandbox, accessible via a web dashboard and a Telegram bot, with inference routed to a local Nemotron 3 Super 120B model served by vLLM on your DGX Station -- all without exposing your host filesystem or network to the agent.
The following sections describe safety, risks, and your responsibilities when running this demo.
Use only a clean environment. Run this demo on a fresh device or VM with no personal data, confidential information, or sensitive credentials, and keep it isolated like a sandbox.
By installing this demo, you accept responsibility for all third-party components, including reviewing their licenses, terms, and security posture. Read and accept those terms before you install or use the demo.
This experience is provided "AS IS" for demonstration purposes only -- no warranties, no guarantees. This is a demo, not a production-ready solution. You will need to implement appropriate security controls for your environment and use case.
By participating in this demo, you acknowledge that you are solely responsible for your configuration and for any data, accounts, and tools you connect. To the maximum extent permitted by law, NVIDIA is not responsible for any loss of data, device damage, security incidents, or other harm arising from your configuration or use of NemoClaw demo materials, including OpenClaw or any connected tools or services.
The OpenShell sandbox isolates the agent at four layers:

| Layer | What it protects | When it applies |
|---|---|---|
| Filesystem | Prevents reads/writes outside allowed paths. | Locked at sandbox creation. |
| Network | Blocks unauthorized outbound connections. | Hot-reloadable at runtime. |
| Process | Blocks privilege escalation and dangerous syscalls. | Locked at sandbox creation. |
| Inference | Reroutes model API calls to controlled backends. | Hot-reloadable at runtime. |
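Because the network layer is hot-reloadable, you can probe it at runtime rather than recreating the sandbox. A minimal sketch, assuming `curl` is available inside the sandbox; the endpoint is arbitrary and only serves to exercise the policy:

```shell
# Probe outbound connectivity from inside the sandbox.
# The endpoint is arbitrary; only the allow/block outcome matters.
if curl -s --max-time 5 https://example.com >/dev/null; then
  echo "egress allowed"
else
  echo "egress blocked"
fi
```

Re-running the probe after a network-policy change shows the hot reload taking effect without restarting the agent.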
Hardware and access:

- An NVIDIA DGX Station (GB300 GPU) with permission to run Docker containers (i.e., `docker run` works for your user)
- A Telegram bot token created via @BotFather's `/newbot` -- optional, for Phase 3

Software:

- Ubuntu 24.04 with Docker 28.x or later
Verify your system before starting:
head -n 2 /etc/os-release
nvidia-smi
docker info --format '{{.ServerVersion}}'
df -h / /var/lib/docker 2>/dev/null | head -20
Expected: Ubuntu 24.04, NVIDIA GB300 GPU(s), Docker 28.x or later, and enough free disk for Docker layers, the NemoClaw sandbox image, and the Hugging Face cache. Treat ~40 GB free on the Docker data filesystem as a practical minimum; very low free space can surface as cryptic onboarding errors such as "K8s namespace not ready".
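The free-space requirement can be checked directly. A minimal sketch, assuming GNU coreutils `df` and the default `/var/lib/docker` Docker data root (falling back to `/` when that path is not visible to `df`):

```shell
# Require ~40 GB free on the Docker data filesystem (practical minimum).
need_gb=40
avail_gb=$(df --output=avail -BG /var/lib/docker 2>/dev/null | tail -n 1 | tr -dc '0-9')
# Fall back to the root filesystem if /var/lib/docker is absent.
[ -n "$avail_gb" ] || avail_gb=$(df --output=avail -BG / | tail -n 1 | tr -dc '0-9')
if [ "$avail_gb" -ge "$need_gb" ]; then
  echo "disk OK: ${avail_gb}G free"
else
  echo "disk LOW: ${avail_gb}G free (onboarding may fail with cryptic errors)"
fi
```

If you moved the Docker data root (`data-root` in `/etc/docker/daemon.json`), point the check at that path instead.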
| Item | Where to get it |
|---|---|
| Telegram bot token (optional) | @BotFather on Telegram -- create with /newbot |
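Before wiring the token into NemoClaw, you can sanity-check it against the Telegram Bot API's `getMe` method. A minimal sketch; the token below is a placeholder, not a real credential:

```shell
# Replace with the token BotFather gave you via /newbot (placeholder shown).
TOKEN="123456789:AAExampleTokenFromBotFather"
# A valid token returns {"ok":true,...}; an invalid one returns {"ok":false,...}.
curl -s "https://api.telegram.org/bot${TOKEN}/getMe"
```

Keep the token out of shell history and version control; the sandbox's network policy must also allow outbound access to api.telegram.org for the bot to function.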
All required assets are handled by the NemoClaw installer. No manual cloning is needed.