Models (8 results)
sarvam-m · Sarvamai · Downloadable
Multilingual, hybrid-reasoning model optimized for Indian language tasks, programming, and mathematical reasoning.
Model · coding, +5 · 144K · 9mo
mistral-small-4-119b-2603 · Mistral AI · Downloadable
Hybrid MoE model unifying instruct, reasoning, and coding with multimodal input and 256K context.
Model · code generation, +2 · 8.35M · 1mo
nvidia-nemotron-nano-9b-v2 · NVIDIA · Downloadable
High-efficiency LLM with hybrid Transformer-Mamba design, excelling in reasoning and agentic tasks.
Model · thinking budget, +1 · 377K · 8mo
qwen3-next-80b-a3b-instruct · Qwen · Downloadable
Qwen3-Next Instruct blends hybrid attention, sparse MoE, and stability boosts for ultra-long-context AI.
Model · text-generation, +1 · 17.92M · 7mo
qwen3-next-80b-a3b-thinking · Qwen · Downloadable
80B-parameter AI model with hybrid reasoning, MoE architecture, and support for 119 languages.
Model · Reasoning, +1 · 1.73M · 7mo
deepseek-v3.1-terminus · DeepSeek AI · Free Endpoint · Deprecation in 5d
DeepSeek-V3.1: hybrid inference LLM with Think/Non-Think modes, stronger agents, 128K context, and strict function calling.
Model · tool calling, +3 · 6.4M · 6mo
nemotron-3-super-120b-a12b · NVIDIA · Downloadable
Open, efficient hybrid Mamba-Transformer MoE with 1M context, excelling in agentic reasoning, coding, planning, tool calling, and more.
Model · MoE, +4 · 42.51M · 1mo
nv-embedcode-7b-v1 · NVIDIA · Free Endpoint
The NV-EmbedCode model is a 7B Mistral-based embedding model optimized for code retrieval, supporting text, code, and hybrid queries.
Model · nemo retriever, +2 · 118K · 11mo