NVIDIA Models

7 results (2 filters applied)

Result counts by filter facet:

  • Free Endpoint: 2
  • Partner Endpoint: 7
  • Download Available: 5
  • Code Generation: 2
  • Retrieval Augmented Generation: 0
  • Drug Discovery: 0
  • Image-to-Text: 0
  • Speech-to-Text: 0
  • Deep Infra: 6
  • Together AI: 4
  • GMI Cloud: 3
  • CoreWeave: 3
  • Lightning AI: 3
  • Mistral AI: 2
  • OpenAI: 2
  • NVIDIA: 1
  • DeepSeek AI: 1
  • Moonshotai: 1
  • A100 SXM4 80GB: 0
  • B200: 0
  • GB200: 0
  • GH200 144G HBM3e: 0
  • H100 80GB HBM3: 0

Active filters: chat, reasoning

deepseek-v3.1-terminus (DeepSeek AI)
Free Endpoint · deprecation in 2 days
DeepSeek-V3.1: hybrid inference LLM with Think/Non-Think modes, stronger agents, 128K context, strict function calling.
Model · tool calling · 5.7M · 6mo
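
The "Free Endpoint" badge generally indicates the model can be tried directly through NVIDIA's hosted, OpenAI-compatible API. Below is a minimal sketch of a tool-calling request matching this card's "strict function calling" note; the base URL is NVIDIA's API catalog endpoint, while the exact model identifier ("deepseek-ai/deepseek-v3.1-terminus") and the weather tool are illustrative assumptions rather than details taken from this page.

    # Minimal sketch: chat completion with tool calling against NVIDIA's
    # OpenAI-compatible catalog endpoint. Requires an NVIDIA_API_KEY from
    # build.nvidia.com; the model ID below is assumed -- verify on the model page.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://integrate.api.nvidia.com/v1",
        api_key=os.environ["NVIDIA_API_KEY"],
    )

    # Hypothetical tool definition used to exercise strict function calling.
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="deepseek-ai/deepseek-v3.1-terminus",  # assumed catalog model ID
        messages=[{"role": "user", "content": "What's the weather in Santa Clara?"}],
        tools=tools,
    )

    # The model either answers directly or returns a structured tool call.
    message = response.choices[0].message
    print(message.tool_calls or message.content)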

gpt-oss-120b (OpenAI)
Downloadable
Text-only Mixture of Experts (MoE) reasoning LLM designed to fit within an 80 GB GPU.
Model · reasoning · 27.85M · 9mo

gpt-oss-20b (OpenAI)
Downloadable
Smaller Mixture of Experts (MoE) text-only LLM for efficient AI reasoning and math.
Model · reasoning · 11.37M · 9mo
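
The "Downloadable" badge on the two gpt-oss entries indicates the weights can be pulled for self-hosting rather than only called through a hosted endpoint. One possible route is sketched below using huggingface_hub; the repository ID "openai/gpt-oss-20b" is an assumption about where the weights are published, and the authoritative download location (Hugging Face or NGC) is listed on each model's page.

    # Minimal sketch: fetching weights for a "Downloadable" model so it can be
    # served locally. The repo ID is assumed -- verify on the model card.
    from huggingface_hub import snapshot_download

    # Downloads the full repository snapshot into the local cache and returns its path.
    local_dir = snapshot_download(repo_id="openai/gpt-oss-20b")  # assumed weight repository
    print(f"Weights downloaded to: {local_dir}")

Once downloaded, the card's sizing note is the main constraint: the 120B variant is described as fitting within an 80 GB GPU, and the 20B variant is the smaller of the two.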

kimi-k2-instruct (Moonshotai)
Free Endpoint · deprecation in 10 days
State-of-the-art open mixture-of-experts model with strong reasoning, coding, and agentic capabilities.
Model · coding · 12.72M · 9mo

mixtral-8x22b-instruct-v0.1 (Mistral AI)
Downloadable
An MoE LLM that follows instructions, completes requests, and generates creative text.
Model · Advanced Reasoning · 2.11M · 9mo

mixtral-8x7b-instruct-v0.1 (Mistral AI)
Downloadable
An MoE LLM that follows instructions, completes requests, and generates creative text.
Model · Advanced Reasoning · 467K · 9mo

nemotron-3-super-120b-a12b (NVIDIA)
Downloadable
Open, efficient hybrid Mamba-Transformer MoE with 1M context, excelling in agentic reasoning, coding, planning, tool calling, and more.
Model · MoE · 44.08M · 1mo