Search Results

Searching for: Mixture-of-Experts
Sorting by Last Updated

moonshotai/kimi-k2.5

1T multimodal MoE for high‑capacity video and image understanding with efficient inference.

Multimodal, Reasoning, chat, Mixture-of-Experts, Image-to-Text

minimaxai/minimax-m2

Open Mixture of Experts LLM (230B total parameters, 10B active) for reasoning, coding, and tool-use/agent workflows (see the routing sketch after these results).

Conversational, Reasoning, chat, Long Context, Function Calling

openai/gpt-oss-20b

Smaller Mixture of Experts (MoE) text-only LLM for efficient AI reasoning and math

text-to-text, chat, reasoning, math

openai/gpt-oss-120b

Mixture of Experts (MoE) reasoning LLM (text-only) designed to fit within a single 80 GB GPU.

text-to-text, chat, reasoning, math

moonshotai/kimi-k2-instruct

State-of-the-art open mixture-of-experts model with strong reasoning, coding, and agentic capabilities

coding, chat, advanced reasoning, agentic
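
Several of the models above are sparse mixture-of-experts (MoE) architectures: a small router network picks a handful of expert sub-networks per token, so only a fraction of the total parameters is active on any forward pass (e.g. roughly 10B of 230B for minimax-m2). The following is a minimal top-k routing sketch in NumPy; the layer shapes, gating, and renormalization details are illustrative assumptions, not the implementation of any model listed here.

import numpy as np

def moe_layer(x, expert_weights, router_weights, top_k=2):
    # Illustrative MoE layer, not any listed model's actual implementation.
    # x:              (tokens, d_model) token activations
    # expert_weights: (num_experts, d_model, d_model), one weight matrix per expert
    # router_weights: (d_model, num_experts) router projection
    logits = x @ router_weights                       # (tokens, num_experts)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)        # softmax over experts

    out = np.zeros_like(x)
    top = np.argsort(-probs, axis=-1)[:, :top_k]      # chosen expert indices per token
    for t in range(x.shape[0]):
        gate = probs[t, top[t]]
        gate = gate / gate.sum()                      # renormalize over the selected experts
        for g, e in zip(gate, top[t]):
            out[t] += g * (x[t] @ expert_weights[e])  # only top_k experts run per token
    return out

# Toy usage: 4 tokens, 8 experts, 2 experts active per token.
rng = np.random.default_rng(0)
tokens, d_model, num_experts = 4, 16, 8
x = rng.normal(size=(tokens, d_model))
experts = 0.1 * rng.normal(size=(num_experts, d_model, d_model))
router = 0.1 * rng.normal(size=(d_model, num_experts))
print(moe_layer(x, experts, router, top_k=2).shape)   # -> (4, 16)

Because only the selected experts' weights participate for each token, compute per token scales with the active parameter count rather than the total parameter count, which is what descriptions like "230B, 10B active" refer to.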
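
Models in this catalog are generally exposed through an OpenAI-compatible chat completions API keyed by the model IDs shown above (publisher/model-name). A minimal sketch, assuming the integrate.api.nvidia.com endpoint and an API key stored in an NVIDIA_API_KEY environment variable (both assumptions; check each model's documentation page for the exact endpoint and model ID):

import os
from openai import OpenAI  # pip install openai

# Assumption: the catalog's OpenAI-compatible endpoint and a key in NVIDIA_API_KEY.
client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ["NVIDIA_API_KEY"],
)

# Any model ID from the results above, e.g. "moonshotai/kimi-k2-instruct".
completion = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[{"role": "user", "content": "In one sentence, what is a mixture-of-experts LLM?"}],
    temperature=0.6,
    max_tokens=256,
)
print(completion.choices[0].message.content)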