DeepSeek-V3.1: a hybrid-inference LLM with Think and Non-Think modes, stronger agent capabilities, 128K context, and strict function calling.
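A minimal sketch of using the advertised function calling, assuming the model is served behind an OpenAI-compatible chat-completions endpoint; the base URL, model identifier, and weather tool below are illustrative placeholders, not part of the catalog entry.

```python
# Sketch: function calling via an OpenAI-compatible endpoint (openai>=1.0).
# The base_url, model name, and tool schema are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_API_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="deepseek-v3.1",  # placeholder model identifier
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# The model responds with structured tool calls matching the declared schema.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```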
Qwen3-Next Instruct blends hybrid attention, sparse mixture-of-experts (MoE), and stability improvements for ultra-long-context AI.
Follow-on to Kimi-K2-Instruct with a longer context window and enhanced reasoning capabilities.
ByteDance's open-source LLM with long-context understanding, reasoning, and agentic intelligence.
Excels at agentic coding and browser use, supporting 256K context and delivering leading results.
High-efficiency LLM with a hybrid Transformer-Mamba design, excelling in reasoning and agentic tasks.
State-of-the-art open mixture-of-experts model with strong reasoning, coding, and agentic capabilities.
Built for agentic workflows, this model excels in coding, instruction following, and function calling.
Improve the safety, security, and privacy of AI systems across the build, deploy, and run stages.
Automate and optimize the configuration of radio access network (RAN) parameters using an agentic, large language model (LLM)-driven framework.
Leading model for reasoning and agentic AI accuracy on PC and edge devices.
Trace and evaluate AI agents with Weights & Biases.
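A minimal sketch of tracing an agent step with Weights & Biases Weave; the project name and the traced function are hypothetical stand-ins for an actual agent.

```python
# Sketch: recording agent calls as traces with W&B Weave.
# Project name and plan_step are illustrative assumptions.
import weave

weave.init("agent-traces-demo")  # hypothetical W&B project name

@weave.op()
def plan_step(task: str) -> str:
    # Stand-in for an agent's planning call; replace with a real LLM invocation.
    return f"Plan for: {task}"

# Each decorated call is logged as a trace viewable in the Weights & Biases UI.
plan_step("summarize the quarterly report")
```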