Copyright © 2026 NVIDIA Corporation

Models

Deploy and scale models on your GPU infrastructure of choice with NVIDIA NIM inference microservices
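The hosted endpoints in this catalog are served through an OpenAI-compatible chat-completions API. Below is a minimal sketch of querying one of the free endpoints; the base URL, the `publisher/model` ID form, and the `NVIDIA_API_KEY` environment variable are assumptions based on NVIDIA's public API gateway conventions, not details taken from this page:

```python
import json
import os
import urllib.request

# Assumed endpoint and model ID (publisher/model form) -- illustrative only;
# verify both against the model card before use.
BASE_URL = "https://integrate.api.nvidia.com/v1/chat/completions"
MODEL_ID = "moonshotai/kimi-k2-instruct"


def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for a hosted NIM endpoint."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.7,
    }
    return urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


if __name__ == "__main__":
    key = os.environ.get("NVIDIA_API_KEY")
    if key:  # only reach out over the network when a key is configured
        with urllib.request.urlopen(build_request("Hello!", key)) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])
```

Downloadable models can instead be pulled and served on your own GPUs; the hosted route above is just the quickest way to try a model before committing infrastructure.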

Optimized by NVIDIA · Launch from Hugging Face (Beta)

Filters (1)

  • Free Endpoint (7)
  • Partner Endpoint (13)
  • Download Available (6)
  • Image-to-Text (1)
  • Retrieval Augmented Generation (0)
  • Drug Discovery (0)
  • Code Generation (0)
  • Speech-to-Text (0)
  • Deep Infra (11)
  • Bitdeer AI (7)
  • GMI Cloud (6)
  • Together AI (5)
  • Lightning AI (2)
  • Qwen (3)
  • DeepSeek AI (3)
  • Mistral AI (2)
  • Moonshotai (2)
  • Google (1)
  • A100 SXM4 80GB (0)
  • B200 (0)
  • GB200 (0)
  • GH200 144G HBM3e (0)
  • H100 80GB HBM3 (0)

Active filter: agentic (13 models)
DeepSeek AI · Downloadable

deepseek-v4-flash

DeepSeek V4 Flash is a 284B MoE model with a 1M-token context, optimized for fast coding and agents.
coding · 361K · 4d
DeepSeek AI · Downloadable

deepseek-v4-pro

DeepSeek V4 scales to 1M-token context windows with an efficient MoE architecture for coding tasks.
coding · 781K · 4d
Z.ai · Downloadable

glm-5.1

GLM-5.1 is a flagship LLM for agentic workflows, coding, and long-horizon reasoning tasks.
Agentic AI · 2.53M · 1w
Google · Downloadable

gemma-4-31b-it

A dense 31B model delivering frontier reasoning for coding, agentic workflows, and fine-tuning.
coding · 3.46M · 3w
Qwen · Downloadable

qwen3.5-397b-a17b

Next-gen Qwen 3.5 VLM (400B MoE) brings advanced vision, chat, RAG, and agentic capabilities.
MoE · 9.6M · 2mo
Stepfun-ai · Free Endpoint

step-3.5-flash

A 200B open-source reasoning engine with sparse MoE, powering frontier agentic AI.
Agentic · 9.07M · 2mo
Mistral AI · Deprecation in 14d · Free Endpoint

devstral-2-123b-instruct-2512

State-of-the-art open code model with deep reasoning, 256K context, and unmatched efficiency.
coding · 2.81M · 4mo
Mistral AI · Free Endpoint

mistral-large-3-675b-instruct-2512

A state-of-the-art general-purpose MoE VLM, ideal for chat, agentic, and instruction-based use cases.
language generation · 4.15M · 4mo
DeepSeek AI · Deprecation in 7d · Free Endpoint

deepseek-v3.1-terminus

DeepSeek-V3.1 is a hybrid-inference LLM with Think/Non-Think modes, stronger agents, 128K context, and strict function calling.
tool calling · 7.29M · 6mo
Qwen · Downloadable

qwen3-next-80b-a3b-instruct

Qwen3-Next Instruct blends hybrid attention, sparse MoE, and stability boosts for ultra-long-context AI.
text-generation · 18.75M · 7mo
Moonshotai · Free Endpoint

kimi-k2-instruct-0905

A follow-on to Kimi-K2-Instruct with a longer context window and enhanced reasoning capabilities.
long-context · 9.75M · 7mo
Qwen · Free Endpoint

qwen3-coder-480b-a35b-instruct

Excels at agentic coding and browser use, supports 256K context, and delivers top results.
agentic coding · 3.27M · 7mo
Moonshotai · Free Endpoint

kimi-k2-instruct

A state-of-the-art open mixture-of-experts model with strong reasoning, coding, and agentic capabilities.
coding · 15.32M · 9mo