NVIDIA
Copyright © 2026 NVIDIA Corporation

11 results

Filters (1): reasoning

  • Free Endpoint (2)
  • Download Available (9)
  • Code Generation (2)
  • Image-to-Text (1)
  • Qwen (2)
  • Mistral AI (2)
  • NVIDIA (2)
  • OpenAI (2)
  • Moonshotai (1)
  • nemotron-3-nano-30b-a3b (NVIDIA · Downloadable)
    Open, efficient MoE model with 1M context, excelling in coding, reasoning, instruction following, tool calling, and more.
    Model · chat · 12.23M · 3mo
  • nemotron-3-super-120b-a12b (NVIDIA · Downloadable)
    Open, efficient hybrid Mamba-Transformer MoE with 1M context, excelling in agentic reasoning, coding, planning, tool calling, and more.
    Model · chat · 329K · 4d
  • glm-5 (Z.ai · Downloadable)
    GLM-5 744B MoE enables efficient reasoning for complex systems and long-horizon agentic tasks.
    Model · MoE · 9.8M · 1mo
  • gpt-oss-120b (OpenAI · Downloadable)
    Mixture-of-Experts (MoE) reasoning LLM (text-only) designed to fit within a single 80 GB GPU.
    Model · reasoning · 41.01M · 7mo
  • gpt-oss-20b (OpenAI · Downloadable)
    A smaller Mixture-of-Experts (MoE) text-only LLM for efficient AI reasoning and math.
    Model · reasoning · 8.46M · 7mo
  • kimi-k2.5 (Moonshotai · Downloadable)
    1T multimodal MoE for high-capacity video and image understanding with efficient inference.
    Model · Multimodal · 22.84M · 1mo
  • mixtral-8x22b-instruct-v0.1 (Mistral AI · Downloadable)
    An MoE LLM that follows instructions, completes requests, and generates creative text.
    Model · chat · 4.96M · 8mo
  • mixtral-8x7b-instruct-v0.1 (Mistral AI · Downloadable)
    An MoE LLM that follows instructions, completes requests, and generates creative text.
    Model · chat · 750K · 8mo
  • qwen3-next-80b-a3b-thinking (Qwen · Downloadable)
    80B-parameter AI model with hybrid reasoning, an MoE architecture, and support for 119 languages.
    Model · chat · 4.24M · 6mo
  • qwen3.5-122b-a10b (Qwen · Free Endpoint)
    122B MoE LLM (10B active) for coding, reasoning, and multimodal chat; agent-ready.
    Model · chat · 1.49M · 1w
  • step-3.5-flash (Stepfun-ai · Free Endpoint)
    200B open-source reasoning engine with sparse MoE powering frontier agentic AI.
    Model · chat · 7.8M · 1mo
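Models tagged "Free Endpoint" in this catalog are typically served through NVIDIA's OpenAI-compatible hosted API. The sketch below, using only the Python standard library, shows how one might assemble a chat-completions request for such a model; the base URL reflects NVIDIA's documented API-catalog endpoint, but the exact model identifier string, the helper function name, and the parameter defaults are assumptions — check the model's own page for the real identifier before sending anything.

```python
import json

# Assumption: hosted catalog models speak the OpenAI chat-completions schema
# at NVIDIA's API-catalog endpoint. Verify the model id on its catalog page.
BASE_URL = "https://integrate.api.nvidia.com/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completions payload for a catalog model."""
    return {
        "model": model,  # hypothetical id; copy the real one from the catalog
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.6,
    }

payload = build_chat_request("stepfun-ai/step-3.5-flash",
                             "Explain MoE expert routing in two sentences.")
print(json.dumps(payload, indent=2))
```

Actually sending the request additionally requires an API key passed as an `Authorization: Bearer <key>` header, e.g. via `requests.post(BASE_URL, headers=..., json=payload)`.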