7 results: Models (7), Blueprints (0), Other (0). Sorted by Best Match.
DeepSeek AI / deepseek-v3.1-terminus (Model)
DeepSeek-V3.1: hybrid inference LLM with Think/Non-Think modes, stronger agents, 128K context, and strict function calling.
Tags: tool calling, +3 more · 12.1M downloads · 5 months ago
OpenAI / gpt-oss-120b (Model)
Mixture-of-Experts (MoE) reasoning LLM (text-only) designed to fit within an 80 GB GPU.
Tags: text-to-text, +3 more · 34.11M downloads · 7 months ago
OpenAI / gpt-oss-20b (Model)
Smaller Mixture-of-Experts (MoE) text-only LLM for efficient AI reasoning and math.
Tags: text-to-text, +3 more · 7.06M downloads · 7 months ago
Moonshot AI / kimi-k2-instruct (Model)
State-of-the-art open mixture-of-experts model with strong reasoning, coding, and agentic capabilities.
Tags: coding, +3 more · 19.43M downloads · 7 months ago
Mistral AI / mixtral-8x22b-instruct-v0.1 (Model)
An MoE LLM that follows instructions, completes requests, and generates creative text.
Tags: Advanced Reasoning, +4 more · 4.07M downloads · 7 months ago
Mistral AI / mixtral-8x7b-instruct-v0.1 (Model)
An MoE LLM that follows instructions, completes requests, and generates creative text.
Tags: Advanced Reasoning, +4 more · 624K downloads · 7 months ago
Qwen / qwq-32b (Model)
Powerful reasoning model whose extended thinking can significantly enhance performance on downstream tasks, especially hard problems.
Tags: coding, +3 more · 3.32M downloads · 8 months ago
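Several of the listed models are tagged for tool calling (the top result advertises strict function calling). As a minimal sketch of what that feature involves, the snippet below assembles a request body in the common OpenAI-compatible chat-completions shape; the model ID format, the `get_weather` tool, and its schema are illustrative assumptions, not details taken from this listing.

```python
import json

def build_tool_call_request(user_message: str) -> dict:
    """Assemble a chat-completions payload with one tool definition.

    The model ID and tool schema below are hypothetical examples,
    chosen only to illustrate the request shape.
    """
    return {
        "model": "deepseek-ai/deepseek-v3.1-terminus",  # assumed ID format
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical example tool
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
        # Let the model decide whether to emit a tool call.
        "tool_choice": "auto",
    }

payload = build_tool_call_request("What's the weather in Berlin?")
print(json.dumps(payload, indent=2))
```

A model supporting strict function calling is expected to answer such a request with a `tool_calls` entry whose arguments validate against the declared JSON schema, rather than free-form text.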