Models
Deploy and scale models on your GPU infrastructure of choice with NVIDIA NIM inference microservices
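Hosted NIM microservices expose an OpenAI-compatible chat-completions API. A minimal sketch of calling one of the models listed below, assuming the `integrate.api.nvidia.com` endpoint, an `NVIDIA_API_KEY` environment variable, and the `meta/llama-3.3-70b-instruct` model id (the publisher/model pair from this catalog); adapt the URL and model name for a self-hosted deployment:

```python
import json
import os
import urllib.request

# Endpoint URL and model id are assumptions based on this catalog listing;
# substitute your own NIM deployment as needed.
API_URL = "https://integrate.api.nvidia.com/v1/chat/completions"
MODEL = "meta/llama-3.3-70b-instruct"  # publisher/model-id from the catalog

def build_request(prompt: str, model: str = MODEL) -> dict:
    """Assemble an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.2,
    }

def chat(prompt: str) -> str:
    """POST the payload with a bearer token and return the reply text."""
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        API_URL,
        data=data,
        headers={
            "Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__" and "NVIDIA_API_KEY" in os.environ:
    print(chat("Summarize what an inference microservice is in one sentence."))
```

Because the schema is OpenAI-compatible, the same payload works against any of the catalog entries by swapping the `model` field.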
11 models
(most recent first)
Qwen / qwen3.5-122b-a10b
  122B MoE LLM (10B active) for coding, reasoning, and multimodal chat. Agent-ready.
  Tags: tool calling (+4) · 1.44M pulls · 6d ago

MiniMaxAI / minimax-m2.5
  MiniMax M2.5 is a 230B-parameter text-to-text model excelling in coding, reasoning, and office tasks.
  Tags: coding (+3) · 3.77M pulls · 1w ago

DeepSeek AI / deepseek-v3.2
  State-of-the-art 685B reasoning LLM with sparse attention, long context, and integrated agentic tools.
  Tags: long context (+3) · 15.99M pulls · 2mo ago

Mistral AI / devstral-2-123b-instruct-2512
  State-of-the-art open code model with deep reasoning, 256K context, and unmatched efficiency.
  Tags: coding (+4) · 5.99M pulls · 3mo ago

Qwen / qwen3-next-80b-a3b-thinking
  80B-parameter model with hybrid reasoning, an MoE architecture, and support for 119 languages.
  Tags: Reasoning (+2) · 4.15M pulls · 6mo ago

DeepSeek AI / deepseek-v3.1
  DeepSeek V3.1 Instruct is a hybrid model with fast reasoning, 128K context, and strong tool use.
  Tags: Reasoning (+2) · 14.12M pulls · 6mo ago

OpenAI / gpt-oss-20b
  Smaller Mixture of Experts (MoE) text-only LLM for efficient reasoning and math.
  Tags: text-to-text (+3) · 8.32M pulls · 7mo ago

OpenAI / gpt-oss-120b
  Mixture of Experts (MoE) reasoning LLM (text-only) designed to fit within an 80 GB GPU.
  Tags: text-to-text (+3) · 37.84M pulls · 7mo ago

Meta / llama-3.3-70b-instruct
  Advanced LLM for reasoning, math, general knowledge, and function calling.
  Tags: Reasoning (+5) · 23.74M pulls · 9mo ago

Mistral AI / mixtral-8x22b-instruct-v0.1
  An MoE LLM that follows instructions, completes requests, and generates creative text.
  Tags: Advanced Reasoning (+4) · 4.93M pulls · 7mo ago

Mistral AI / mixtral-8x7b-instruct-v0.1
  An MoE LLM that follows instructions, completes requests, and generates creative text.
  Tags: Advanced Reasoning (+4) · 737K pulls · 7mo ago