8 results

Filters (3): reasoning · Agentic · coding

  • Free Endpoint (4)
  • Partner Endpoint (6)
  • Download Available (4)
  • Code Generation (0)
  • Image-to-Text (0)
  • Synthetic Data Generation (0)
  • Deep Infra (6)
  • Together AI (3)
  • GMI Cloud (3)
  • Bitdeer AI (3)
  • Lightning AI (2)
  • Mistral AI (2)
  • Moonshotai (2)
  • DeepSeek AI (1)
  • Google (1)
  • Z.ai (1)

  • Mistral AI · Downloadable

    mistral-medium-3.5-128b

    A high-performing model for text generation, coding, and agentic use cases.
    Model · coding · 1d
  • Mistral AI · Free Endpoint · Deprecation in 12d

    devstral-2-123b-instruct-2512

    State-of-the-art open code model with deep reasoning, 256k context, and unmatched efficiency.
    Model · coding · 2.55M · 4mo
  • Moonshotai · Free Endpoint · Deprecation in 13d

    kimi-k2-instruct

    State-of-the-art open mixture-of-experts model with strong reasoning, coding, and agentic capabilities.
    Model · coding · 14.48M · 9mo
  • Stepfun-ai · Free Endpoint

    step-3.5-flash

    200B open-source reasoning engine with sparse MoE powering frontier agentic AI.
    Model · Agentic · 8.9M · 2mo
  • Google · Downloadable

    gemma-4-31b-it

    Dense 31B model delivering frontier reasoning for coding, agentic workflows, and fine-tuning.
    Model · coding · 3.76M · 4w
  • Z.ai · Downloadable

    glm-5.1

    GLM-5.1 is a flagship LLM for agentic workflows, coding, and long-horizon reasoning tasks.
    Model · Agentic AI · 3.8M · 1w
  • Moonshotai · Free Endpoint · Deprecation in 6d

    kimi-k2-instruct-0905

    Follow-on version of Kimi-K2-Instruct with a longer context window and enhanced reasoning capabilities.
    Model · long-context · 8.63M · 7mo
  • DeepSeek AI · Downloadable

    deepseek-v4-pro

    DeepSeek V4 scales to 1M-token context windows with an efficient MoE architecture for coding tasks.
    Model · MoE · 1.23M · 6d