NVIDIA
Copyright © 2026 NVIDIA Corporation

5 results for

Filters (1)

  • Free Endpoint
    1
  • Partner Endpoint
    5
  • Download Available
    4
  • Deep Infra
    4
  • Bitdeer AI
    2
  • GMI Cloud
    2
  • Lightning AI
    2
  • CoreWeave
    2
  • DeepSeek AI
    2
  • NVIDIA
    2
  • Qwen
    1
  • Mistral AI
    0
  • Minimaxai
    0
  • MoE
  • DeepSeek AI
    Downloadable

    deepseek-v4-pro

    DeepSeek V4 scales to 1M-token context windows with an efficient MoE architecture for coding tasks.
    Model
    MoE
    1.23M
    6d
    Qwen
    Free Endpoint

    qwen3-coder-480b-a35b-instruct

    Excels in agentic coding and browser use, supports 256K context, and delivers top results.
    Model
    agentic coding
    3.21M
    8mo
    DeepSeek AI
    Downloadable

    deepseek-v4-flash

    DeepSeek V4 Flash is a 284B MoE model with 1M-token context optimized for fast coding and agents.
    Model
    coding
    455K
    6d
    NVIDIA
    Downloadable

    nemotron-3-nano-30b-a3b

    Open, efficient MoE model with 1M context, excelling in coding, reasoning, instruction following, tool calling, and more.
    Model
    MoE
    9.28M
    4mo
    NVIDIA
    Downloadable

    nemotron-3-super-120b-a12b

    Open, efficient hybrid Mamba-Transformer MoE with 1M context, excelling in agentic reasoning, coding, planning, tool calling, and more.
    Model
    MoE
    42.51M
    1mo