Copyright © 2026 NVIDIA Corporation

Models

Deploy and scale models on your GPU infrastructure of choice with NVIDIA NIM inference microservices


3 models

DeepSeek AI · Downloadable
deepseek-v4-flash
DeepSeek V4 Flash is a 284B MoE model with a 1M-token context, optimized for fast coding and agents.
coding · 361K · 4d

DeepSeek AI · Downloadable
deepseek-v4-pro
DeepSeek V4 scales to 1M-token context windows with an efficient MoE architecture for coding tasks.
coding · 781K · 4d

Qwen · Free Endpoint
qwen3-coder-480b-a35b-instruct
Excels at agentic coding and browser use, supports 256K context, and delivers top results.
agentic coding · 3.27M · 7mo