7 results, sorted by best match:
CLI Coding Agent (DGX Spark playbook, 20 min)
Build local CLI coding agents with Ollama. Tags: Coding, +6 more. Updated 4 days ago.
cuTile Kernels (DGX Spark playbook, 60 min)
Run cuTile kernel benchmarks, an FMHA implementation, and LLM inference on DGX Spark and B300. Tags: FMHA, +10 more. Updated 4 days ago.
Local Coding Agent (DGX Station playbook, 30 min)
Run local CLI coding agents with Ollama on DGX Station (NVIDIA GB300) using glm-4.7-flash (fast) or unsloth/GLM-4.7-GGUF:Q8_0 (best quality). Tags: Coding, +5 more. Updated 1 month ago.
Nanochat Training (DGX Station playbook, 30 min)
Train a small ChatGPT-style LLM (nanochat) with tokenizer training, pretraining, midtraining, and SFT on DGX Station with GB300 Ultra. Tags: Training, +6 more. Updated 1 month ago.
Nemotron-3-Nano with llama.cpp (DGX Spark playbook, 30 min)
Run the Nemotron-3-Nano-30B model using llama.cpp on DGX Spark. Tags: Nemotron, +3 more. Updated 4 months ago.
OpenClaw 🦞 (DGX Spark playbook)
Run OpenClaw locally on DGX Spark with LM Studio or Ollama. Tags: DGX, +3 more. Updated 1 month ago.
Run models with llama.cpp on DGX Spark (DGX Spark playbook, 30 min)
Build llama.cpp with CUDA and serve models via an OpenAI-compatible API (Nemotron 3 Nano Omni as example). Tags: DGX Spark, +3 more. Updated 4 weeks ago.