4 results
DGX Station: Local Coding Agent
Run local CLI coding agents with Ollama on DGX Station (NVIDIA GB300) using glm-4.7-flash (fast) or unsloth/GLM-4.7-GGUF:Q8_0 (best quality).
Playbook · Station · 2w
DGX Station: Nanochat Training
Train a small ChatGPT-style LLM (nanochat) with tokenizer training, pretraining, midtraining, and SFT on DGX Station with GB300 Ultra.
Playbook · Station · 2w
DGX Spark: Nemotron-3-Nano with llama.cpp (30 min)
Run the Nemotron-3-Nano-30B model using llama.cpp on DGX Spark.
Playbook · Nemotron · 3mo
DGX Spark: OpenClaw 🦞
Run OpenClaw locally on DGX Spark with LM Studio or Ollama.
Playbook · DGX · 1w