Copyright © 2026 NVIDIA Corporation

Models

Deploy and scale models on your GPU infrastructure of choice with NVIDIA NIM inference microservices.
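Models in this catalog that expose an API endpoint can be called through NVIDIA's hosted, OpenAI-compatible chat-completions API. The sketch below assembles such a request with only the standard library; the base URL follows NVIDIA's published integration pattern, and the model id is one illustrative choice from the list below — substitute any catalog model id, and supply your own key via the `NVIDIA_API_KEY` environment variable (a naming assumption, not an official convention).

```python
# Minimal sketch: query a catalog model via the OpenAI-compatible NIM endpoint.
import json
import os
import urllib.request

BASE_URL = "https://integrate.api.nvidia.com/v1/chat/completions"
MODEL = "deepseek-ai/deepseek-r1-distill-qwen-7b"  # any catalog model id


def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble the chat-completion HTTP request without sending it."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )


if __name__ == "__main__":
    api_key = os.environ.get("NVIDIA_API_KEY", "")
    req = build_request("Write a haiku about GPUs.", api_key)
    if api_key:  # only send when a key is actually configured
        with urllib.request.urlopen(req) as resp:
            body = json.loads(resp.read())
            print(body["choices"][0]["message"]["content"])
```

Because the endpoint speaks the OpenAI wire format, the same request shape works whether the model is served from NVIDIA's hosted API or from a self-hosted NIM container; only the base URL changes.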


17 models (filter: coding)
  • Minimaxai / minimax-m2.5 (coding · 1.56M · 6d): MiniMax M2.5 is a 230B-parameter text-to-text AI model excelling in coding, reasoning, and office tasks.
  • Minimaxai / minimax-m2.1 (Agentic · 7.8M · 1mo): MiniMax M2.1 excels in multi-language coding, app/web development, office AI, and agent integration.
  • Stepfun-ai / step-3.5-flash (Agentic · 6.25M · 1mo): 200B open-source reasoning engine with sparse MoE powering frontier agentic AI.
  • Z.ai / glm4.7 (Tool Calling · 16.75M · 1mo): GLM-4.7 is a multilingual agentic coding partner with stronger reasoning, tool use, and UI skills.
  • Mistral AI / devstral-2-123b-instruct-2512 (coding · 4.46M · 2mo): State-of-the-art open code model with deep reasoning, 256K context, and unmatched efficiency.
  • Moonshotai / kimi-k2-instruct-0905 (long-context · 10.27M · 5mo): Follow-on to Kimi-K2-Instruct with a longer context window and enhanced reasoning capabilities.
  • Qwen / qwen3-coder-480b-a35b-instruct (agentic coding · 2.89M · 6mo): Excels in agentic coding and browser use, supports 256K context, and delivers top results.
  • Sarvamai / sarvam-m (coding · 371K · 7mo): Multilingual, hybrid-reasoning model optimized for Indian-language tasks, programming, and mathematical reasoning.
  • Moonshotai / kimi-k2-instruct (coding · 18.28M · 7mo): State-of-the-art open mixture-of-experts model with strong reasoning, coding, and agentic capabilities.
  • Mistral AI / magistral-small-2506 (coding · 2.9M · 7mo): High-performance reasoning model optimized for efficiency and edge deployment.
  • IBM / granite-3.3-8b-instruct (coding · 169K · 7mo): Small language model fine-tuned for improved reasoning, coding, and instruction following.
  • Qwen / qwq-32b (coding · 2.88M · 8mo): Powerful reasoning model that achieves significantly enhanced performance on downstream tasks, especially hard problems.
  • DeepSeek AI / deepseek-r1-distill-llama-8b (Distillation · 3.61M · 7mo): Distilled version of Llama 3.1 8B trained on reasoning data generated by DeepSeek R1 for enhanced performance.
  • DeepSeek AI / deepseek-r1-distill-qwen-32b (coding · 2.59K / 3.64M · 9mo): Distilled version of Qwen 2.5 32B trained on reasoning data generated by DeepSeek R1 for enhanced performance.
  • DeepSeek AI / deepseek-r1-distill-qwen-14b (coding · 2.17K / 3.27M · 9mo): Distilled version of Qwen 2.5 14B trained on reasoning data generated by DeepSeek R1 for enhanced performance.
  • DeepSeek AI / deepseek-r1-distill-qwen-7b (coding · 2.28K / 3.62M · 9mo): Distilled version of Qwen 2.5 7B trained on reasoning data generated by DeepSeek R1 for enhanced performance.
  • Tiiuae / falcon3-7b-instruct (Coding · 400K · 9mo): Instruction-tuned LLM achieving SoTA performance on reasoning, math, and general knowledge.