Models (40)
NVIDIA
Free Endpoint
nemotron-content-safety-reasoning-4b
A context‑aware safety model that applies reasoning to enforce domain‑specific policies.
Model
NeMo Guardrails
+3
174K
3mo
NVIDIA
Downloadable
nemotron-3-nano-omni-30b-a3b-reasoning
Nemotron 3 Nano Omni is an omni-modal reasoning model that understands images, video, speech, and text.
Model
Image-to-Text
+4
6.83M
2w
Mistral AI
Downloadable
mistral-medium-3.5-128b
A high-performing model for text generation, coding, and agentic use cases.
Model
coding
+3
1.55M
2w
Mistral AI
Deprecated
Free Endpoint
magistral-small-2506
High-performance reasoning model optimized for efficiency and edge deployment.
Model
coding
+3
1.15M
10mo
NVIDIA
Downloadable
llama-3.1-nemotron-nano-8b-v1
Model with leading reasoning and agentic AI accuracy for PC and edge deployment.
Model
math
+3
1.09M
10mo
Mistral AI
Downloadable
mistral-small-4-119b-2603
Hybrid MoE model unifying instruct, reasoning, and coding with multimodal input and 256K context.
Model
code generation
+2
19.3M
1mo
NVIDIA
Downloadable
nvidia-nemotron-nano-9b-v2
High‑efficiency LLM with hybrid Transformer‑Mamba design, excelling in reasoning and agentic tasks.
Model
thinking budget
+1
783K
8mo
Qwen
Deprecation in 8d
Downloadable
qwen3-next-80b-a3b-thinking
80B-parameter AI model with hybrid reasoning, a MoE architecture, and support for 119 languages.
Model
Reasoning
+1
2.34M
8mo
ByteDance
Free Endpoint
seed-oss-36b-instruct
ByteDance open-source LLM with long-context, reasoning, and agentic intelligence.
Model
thinking budget
+2
1.23M
8mo
Stepfun-ai
Free Endpoint
step-3.5-flash
200B open-source reasoning engine with sparse MoE powering frontier agentic AI.
Model
Agentic
+2
11.54M
3mo
NVIDIA
Downloadable
ising-calibration-1-35b-a3b
Open VLM for understanding quantum-computer calibration charts across a range of qubit modalities.
Model
Quantum
+3
307K
1mo
Sarvamai
Downloadable
sarvam-m
Multilingual, hybrid-reasoning model optimized for Indian-language tasks, programming, and mathematical reasoning.
Model
coding
+5
305K
9mo
Google
Downloadable
gemma-4-31b-it
Dense 31B model delivering frontier reasoning for coding, agentic workflows, and fine-tuning.
Model
coding
+3
6.44M
1mo
Z.ai
Deprecation in 1d
Free Endpoint
glm-4.7
GLM-4.7 is a multilingual agentic coding partner with stronger reasoning, tool use, and UI skills.
Model
Tool Calling
+3
13.57M
3w
Z.ai
Downloadable
glm-5.1
GLM-5.1 is a flagship LLM for agentic workflows, coding, and long-horizon reasoning tasks.
Model
Agentic AI
+3
18.13M
3w
OpenAI
Downloadable
gpt-oss-120b
Mixture of Experts (MoE) reasoning LLM (text-only) designed to fit on a single 80GB GPU.
Model
reasoning
+3
36.6M
9mo
OpenAI
Downloadable
gpt-oss-20b
Smaller Mixture of Experts (MoE) text-only LLM for efficient AI reasoning and math.
Model
reasoning
+3
17.38M
9mo
NVIDIA
Downloadable
llama-3.3-nemotron-super-49b-v1
High-efficiency model with leading accuracy for reasoning, tool calling, chat, and instruction following.
Model
math
+3
4.34M
9mo
NVIDIA
Downloadable
llama-3.3-nemotron-super-49b-v1.5
High-efficiency model with leading accuracy for reasoning, tool calling, chat, and instruction following.
Model
math
+3
3.37M
9mo
Qwen
Downloadable
qwen3.5-122b-a10b
122B MoE LLM (10B active) for coding, reasoning, and multimodal chat. Agent-ready.
Model
tool calling
+3
11.84M
2mo
Minimaxai
Deprecated
Downloadable
minimax-m2.5
MiniMax M2.5 is a 230B-parameter text-to-text AI model excelling in coding, reasoning, and office tasks.
Model
coding
+2
7.03M
2mo
Minimaxai
Free Endpoint
minimax-m2.7
MiniMax M2.7 is a 230B-parameter text-to-text AI model excelling in coding, reasoning, and office tasks.
Model
coding
+2
12.26M
1mo
NVIDIA
Downloadable
nemotron-3-nano-30b-a3b
Open, efficient MoE model with 1M context, excelling in coding, reasoning, instruction following, tool calling, and more.
Model
MoE
+3
11.42M
5mo
NVIDIA
Downloadable
nemotron-3-super-120b-a12b
Open, efficient hybrid Mamba-Transformer MoE with 1M context, excelling in agentic reasoning, coding, planning, tool calling, and more.
Model
MoE
+4
54.47M
2mo
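
The entries marked "Free Endpoint" above can typically be queried through the catalog's hosted, OpenAI-compatible API rather than downloaded. The snippet below is a minimal sketch of such a call, assuming the integrate.api.nvidia.com base URL, an API key exported as NVIDIA_API_KEY, and a placeholder namespaced model ID (bytedance/seed-oss-36b-instruct); the exact ID, endpoint availability, and any deprecation notice should be confirmed on the individual model card.

# Minimal sketch: calling a hosted "Free Endpoint" model from the listing above.
# Assumptions (verify on the model card): base URL, namespaced model ID,
# and an API key in the NVIDIA_API_KEY environment variable.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed hosted-endpoint base URL
    api_key=os.environ["NVIDIA_API_KEY"],
)

completion = client.chat.completions.create(
    model="bytedance/seed-oss-36b-instruct",  # placeholder ID; copy the exact one from the model card
    messages=[{"role": "user", "content": "Summarize the trade-offs of MoE reasoning models."}],
    temperature=0.6,
    max_tokens=512,
)

print(completion.choices[0].message.content)

Models marked "Downloadable" are instead fetched as containers or checkpoints and served locally; the hosted call above does not apply to them unless the card also lists an endpoint.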