Models
Deploy and scale models on your GPU infrastructure of choice with NVIDIA NIM inference microservices.
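NIM microservices expose an OpenAI-compatible chat-completions API, so any of the models below can be queried with a plain HTTP request once deployed. The sketch below is illustrative only: the localhost URL assumes a self-hosted NIM container (port 8000 is a common default), the model name is one entry from this catalog, and the `NIM_API_KEY` environment variable is a hypothetical stand-in for however you manage credentials.

```python
import json
import os
import urllib.request

# Assumption: a locally deployed NIM container serving on port 8000.
# Swap in a hosted endpoint and API key if you are not self-hosting.
NIM_URL = "http://localhost:8000/v1/chat/completions"


def build_request(model, prompt, max_tokens=256):
    """Build an OpenAI-style chat-completions payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def send(payload, api_key=None):
    """POST the payload to the NIM endpoint and return the parsed JSON reply."""
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    req = urllib.request.Request(
        NIM_URL, data=json.dumps(payload).encode(), headers=headers
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    payload = build_request("meta/llama-3.3-70b-instruct", "Hello!")
    print(json.dumps(payload, indent=2))
    # Only attempt a network call if a key is actually configured.
    if os.environ.get("NIM_API_KEY"):
        print(send(payload, os.environ["NIM_API_KEY"]))
```

The same payload shape works against a hosted endpoint; only the URL and the Authorization header change.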
11 models, sorted by most recent.
Qwen · qwen3.5-122b-a10b
122B MoE LLM (10B active) for coding, reasoning, and multimodal chat; agent-ready.
tool calling +4 | 252K | 2d
MiniMax AI · minimax-m2.5
MiniMax M2.5 is a 230B-parameter text-to-text AI model excelling in coding, reasoning, and office tasks.
coding +3 | 2.36M | 1w
DeepSeek AI · deepseek-v3.2
State-of-the-art 685B reasoning LLM with sparse attention, long context, and integrated agentic tools.
long context +3 | 14.95M | 2mo
Mistral AI · devstral-2-123b-instruct-2512
State-of-the-art open code model with deep reasoning, 256K context, and high efficiency.
coding +4 | 5.35M | 2mo
Qwen · qwen3-next-80b-a3b-thinking
80B-parameter LLM with hybrid reasoning, an MoE architecture, and support for 119 languages.
Reasoning +2 | 3.47M | 5mo
DeepSeek AI · deepseek-v3.1
DeepSeek V3.1 Instruct is a hybrid AI model with fast reasoning, 128K context, and strong tool use.
Reasoning +2 | 14.47M | 6mo
OpenAI · gpt-oss-20b
Smaller Mixture of Experts (MoE) text-only LLM for efficient reasoning and math.
text-to-text +3 | 7.35M | 7mo
OpenAI · gpt-oss-120b
Mixture of Experts (MoE) text-only reasoning LLM designed to fit within an 80 GB GPU.
text-to-text +3 | 34.47M | 7mo
Meta · llama-3.3-70b-instruct
Advanced LLM for reasoning, math, general knowledge, and function calling.
Reasoning +5 | 22.79M | 8mo
Mistral AI · mixtral-8x22b-instruct-v0.1
An MoE LLM that follows instructions, completes requests, and generates creative text.
Advanced Reasoning +4 | 4.32M | 7mo
Mistral AI · mixtral-8x7b-instruct-v0.1
An MoE LLM that follows instructions, completes requests, and generates creative text.
Advanced Reasoning +4 | 648K | 7mo