AI Models by DeepSeek AI | Try NVIDIA NIM APIs
DeepSeek AI: 4 results
deepseek-v4-flash (DeepSeek AI, Downloadable)
DeepSeek V4 Flash is a 284B MoE model with 1M-token context optimized for fast coding and agents.
Tags: coding (+3 more) · 1.86M downloads · updated 1w ago
deepseek-v4-pro (DeepSeek AI, Downloadable)
DeepSeek V4 scales to 1M-token context windows with an efficient MoE architecture for coding tasks.
Tags: MoE (+3 more) · 2.08M downloads · updated 1w ago
deepseek-v3.2 (DeepSeek AI, Free Endpoint; deprecation in 2 days)
State-of-the-art 685B reasoning LLM with sparse attention, long context, and integrated agentic tools.
Tags: long context (+2 more) · 7.69M downloads · updated 4mo ago
deepseek-v3.1-terminus (DeepSeek AI, Free Endpoint; deprecation in 2 days)
DeepSeek-V3.1: hybrid inference LLM with Think/Non-Think modes, stronger agents, 128K context, and strict function calling.
Tags: tool calling (+3 more) · 5.7M downloads · updated 6mo ago
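The models above are served through NVIDIA NIM's OpenAI-compatible API. As a minimal sketch, the snippet below builds a chat-completions request payload for one of the listed models; the base URL follows NVIDIA's published endpoint, while the exact model id string (`deepseek-ai/deepseek-v3.2`) and the API key are placeholder assumptions you should check against the model card.

```python
import json

# OpenAI-compatible NIM endpoint published by NVIDIA (assumed here).
NIM_BASE_URL = "https://integrate.api.nvidia.com/v1"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completions payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.2,
    }

# Placeholder model id; confirm the exact string on the model's catalog page.
payload = build_chat_request("deepseek-ai/deepseek-v3.2",
                             "Write a haiku about GPUs.")
print(json.dumps(payload, indent=2))
```

In practice this payload would be POSTed to `NIM_BASE_URL + "/chat/completions"` with an `Authorization: Bearer <api-key>` header, or passed to any OpenAI-compatible client pointed at that base URL.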