18 results

Filters

  • Download Available (11)
  • API Endpoint (7)
  • Image-to-Text (4)
  • Code Generation (2)
  • Qwen (5)
  • Mistral AI (3)
  • Meta (2)
  • NVIDIA (2)
  • OpenAI (2)

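Models tagged API Endpoint are served behind NVIDIA's hosted, OpenAI-compatible API. A minimal sketch of calling one of the models listed below, assuming an NVIDIA_API_KEY environment variable and `openai/gpt-oss-120b` as the served model id (the env var name and the id are assumptions, not taken from this page):

```python
# Minimal sketch: query a catalog model through NVIDIA's OpenAI-compatible
# endpoint. The env var name and model id are assumptions, not from this page.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # NVIDIA's hosted endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # assumed env var name
)

resp = client.chat.completions.create(
    model="openai/gpt-oss-120b",  # assumed catalog id for the gpt-oss-120b card
    messages=[{"role": "user", "content": "Summarize mixture-of-experts routing."}],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```
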
Z.ai · glm5
GLM-5 744B MoE enables efficient reasoning for complex systems and long-horizon agentic tasks.
Model · MoE · 7.94M · 3w

Qwen · qwen3.5-397b-a17b
Next-gen Qwen 3.5 VLM (400B MoE) brings advanced vision, chat, RAG, and agentic capabilities.
Model · MoE · 6.55M · 3w

NVIDIA · nemotron-3-nano-30b-a3b
An open, efficient MoE model with 1M context, excelling in coding, reasoning, instruction following, tool calling, and more.
Model · MoE · 12.32M · 2mo

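Cards marked Download Available (the Nemotron entry above is one) typically have weights you can pull for local serving. A minimal sketch with huggingface_hub, where the repo id is a hypothetical placeholder since the page doesn't list download locations:

```python
# Minimal sketch: fetch model weights for local use with huggingface_hub.
# The repo id below is a hypothetical placeholder; this page does not give
# the actual download location for any of the listed checkpoints.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="nvidia/nemotron-3-nano-30b-a3b",  # placeholder, not verified
    local_dir="./nemotron-3-nano",
)
print(f"Weights downloaded to {local_path}")
```
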
Qwen · qwen3-coder-480b-a35b-instruct
Excels at agentic coding and browser use, supports 256K context, and delivers top results.
Model · agentic coding · 3.83M · 6mo

Meta · llama-4-scout-17b-16e-instruct
A multimodal, multilingual MoE model with 16 experts and 17B active parameters.
Model · language generation · 191K · 7mo

OpenAI · gpt-oss-120b
A text-only Mixture of Experts (MoE) reasoning LLM designed to fit within an 80 GB GPU.
Model · text-to-text · 36.1M · 7mo

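The 80 GB claim checks out with rough arithmetic, assuming the publicly stated figures for gpt-oss-120b (about 117B total parameters, with MoE weights quantized to roughly 4.25-bit MXFP4; both numbers are assumptions from the model's release notes, not from this page):

```python
# Back-of-envelope: do ~117B parameters at ~4.25 bits/weight fit in 80 GB?
# Both figures are assumptions about gpt-oss-120b, not stated on this page.
total_params = 117e9     # assumed total parameter count
bits_per_weight = 4.25   # assumed MXFP4 average, including block scales
weight_bytes = total_params * bits_per_weight / 8
print(f"~{weight_bytes / 1e9:.0f} GB of weights")  # ~62 GB
# That leaves ~18 GB of an 80 GB GPU for the KV cache and activations.
```
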
OpenAI · gpt-oss-20b
A smaller text-only Mixture of Experts (MoE) LLM for efficient AI reasoning and math.
Model · text-to-text · 7.97M · 7mo

AI21 Labs · jamba-1.5-mini-instruct
A cutting-edge MoE-based LLM designed to excel in a wide array of generative AI tasks.
Model · chat · 510K · 9mo

Moonshotai · kimi-k2.5
1T multimodal MoE for high-capacity video and image understanding with efficient inference.
Model · Multimodal · 21.51M · 1mo

Meta · llama-4-maverick-17b-128e-instruct
A general-purpose multimodal, multilingual MoE model with 128 experts and 17B active parameters.
Model · language generation · 3.16M · 7mo

Mistral AI · mixtral-8x22b-instruct-v0.1
An MoE LLM that follows instructions, completes requests, and generates creative text.
Model · Advanced Reasoning · 4.76M · 7mo

Mistral AI · mixtral-8x7b-instruct-v0.1
An MoE LLM that follows instructions, completes requests, and generates creative text.
Model · Advanced Reasoning · 683K · 7mo

Qwen · qwen3-next-80b-a3b-instruct
Qwen3-Next Instruct blends hybrid attention, sparse MoE, and stability boosts for ultra-long-context AI.
Model · chat · 11.15M · 5mo

Qwen · qwen3-next-80b-a3b-thinking
An 80B-parameter AI model with hybrid reasoning, an MoE architecture, and support for 119 languages.
Model · Reasoning · 3.89M · 5mo

Qwen · qwen3.5-122b-a10b
A 122B MoE LLM (10B active) for coding, reasoning, and multimodal chat. Agent-ready.
Model · tool calling · 878K · 4d

Stepfun-ai · step-3.5-flash
200B open-source reasoning engine with sparse MoE powering frontier agentic AI.
Model · Agentic · 7.29M · 1mo

Mistral AI · mistral-large-3-675b-instruct-2512
A state-of-the-art, general-purpose MoE VLM ideal for chat, agentic, and instruction-based use cases.
Model · language generation · 6.17M · 3mo

DGX Spark · CUDA-X Data Science
Install and use NVIDIA cuML and NVIDIA cuDF to accelerate UMAP, HDBSCAN, pandas, and more with zero code changes.
Playbook · pandas · 30 min · 4mo

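The zero-code-change workflow in this playbook rests on cuDF's pandas accelerator mode, which patches pandas to run supported operations on the GPU and falls back to CPU pandas for the rest. A minimal sketch of the documented pattern (the DataFrame contents are illustrative):

```python
# Enable cuDF's pandas accelerator mode before the first pandas import;
# supported operations then run on the GPU, the rest fall back to CPU pandas.
import cudf.pandas
cudf.pandas.install()

import pandas as pd

df = pd.DataFrame({"x": range(1_000_000), "y": range(1_000_000)})
print(df.groupby(df["x"] % 10)["y"].mean())  # executed by cuDF on the GPU
```

The same pattern works without touching code at all via `python -m cudf.pandas script.py`, or `%load_ext cudf.pandas` in a notebook; recent cuML releases ship an analogous `cuml.accel` mode covering estimators like UMAP and HDBSCAN.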