Copyright © 2026 NVIDIA Corporation

3 results — Filters: NVIDIA (3), Inference, DGX Spark
LM Studio on DGX Spark

Deploy LM Studio and serve LLMs on a Spark device; use LM Link to access models remotely.
Playbook · Inference · 1mo
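LM Studio's local server speaks an OpenAI-compatible chat-completions API, so once a model is loaded on the Spark device you can query it over HTTP. Below is a minimal Python sketch; the port 1234 is LM Studio's default local-server port, and the model name in the demo payload is a placeholder, not something from this page.

```python
import json
from urllib import request

# LM Studio's default local server address; adjust host/port if you changed them
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completion payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def query(model: str, prompt: str) -> str:
    """POST the payload to the running LM Studio server and return the reply text."""
    data = json.dumps(build_chat_request(model, prompt)).encode()
    req = request.Request(
        LMSTUDIO_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Demo: show the payload only (calling query() requires a running server)
payload = build_chat_request("my-local-model", "Hello from DGX Spark")  # placeholder model name
print(json.dumps(payload))
```

With LM Link, the same request shape works against the remote endpoint it exposes; only the URL changes.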
Serve Qwen3-235B with vLLM (DGX Station)

Set up a vLLM server with Qwen3-235B on DGX Station.
Playbook · Station · 3w
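vLLM serves models via its `vllm serve` CLI; for a model of this size you would typically shard it across GPUs with `--tensor-parallel-size`. A small sketch that assembles the invocation (the Hugging Face model ID and the parallelism degree are illustrative assumptions, not taken from this page):

```python
import shlex

def vllm_serve_cmd(model: str, tp: int = 1, port: int = 8000) -> list:
    """Assemble a `vllm serve` command line.

    --tensor-parallel-size and --port are real vLLM CLI flags; the values
    passed in are up to the deployment.
    """
    return [
        "vllm", "serve", model,
        "--tensor-parallel-size", str(tp),
        "--port", str(port),
    ]

# Illustrative model ID; check the playbook for the exact checkpoint to use
cmd = vllm_serve_cmd("Qwen/Qwen3-235B-A22B", tp=4)
print(shlex.join(cmd))
```

Once the server is up, it exposes the same OpenAI-compatible `/v1/chat/completions` endpoint on the chosen port.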
Nemotron-3-Nano with llama.cpp (DGX Spark · 30 min)

Run the Nemotron-3-Nano-30B model using llama.cpp on DGX Spark.
Playbook · Nemotron · 3mo
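Running a model with llama.cpp comes down to pointing `llama-cli` at a GGUF file and offloading layers to the GPU with `-ngl`. A sketch that builds that command; the GGUF filename is a hypothetical placeholder, and `-ngl 99` is the common "offload everything" convention:

```python
import shlex

def llama_cli_cmd(gguf_path: str, prompt: str, n_gpu_layers: int = 99) -> list:
    """Assemble a llama.cpp `llama-cli` invocation.

    -m selects the GGUF model file, -p is the prompt, and -ngl sets how many
    layers to offload to the GPU (a large value offloads all of them).
    """
    return [
        "llama-cli",
        "-m", gguf_path,
        "-p", prompt,
        "-ngl", str(n_gpu_layers),
    ]

# Hypothetical file name; use the quantization the playbook downloads
cmd = llama_cli_cmd("nemotron-3-nano-30b.gguf", "Hello")
print(shlex.join(cmd))
```

For serving rather than one-shot generation, llama.cpp's `llama-server` binary takes the same `-m`/`-ngl` options and exposes an HTTP API.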