
mistralai

mistral-small-4-119b-2603

Downloadable

Hybrid MoE model unifying instruct, reasoning, and coding with multimodal input and 256k context


Tags: MoE, code generation, image-to-text, reasoning

Mistral Small 4 119B A6B

Description

Mistral Small 4 is a powerful hybrid model capable of acting as both a general instruction model and a reasoning model. It unifies the capabilities of three model families, Instruct, Reasoning (previously called Magistral), and Devstral, into a single model.

With its multimodal capabilities, efficient architecture, and flexible mode switching, it is a powerful general-purpose model for any task. In a latency-optimized setup, Mistral Small 4 achieves a 40% reduction in end-to-end completion time, and in a throughput-optimized setup, it handles 3x more requests per second compared to Mistral Small 3.

To further improve efficiency, users can take advantage of:

  • Speculative decoding via the trained EAGLE head Mistral-Small-4-119B-2603-eagle.
  • 4-bit float precision quantization via the NVFP4 checkpoint.
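As a sketch, both efficiency options could be combined in a single vLLM launch. The checkpoint names below follow the naming in this card, and the `--speculative-config` JSON schema is vLLM's documented EAGLE interface; verify flags against your vLLM version before relying on them.

```shell
# Hypothetical invocation: serves the NVFP4-quantized checkpoint with the
# EAGLE draft head doing speculative decoding. Checkpoint IDs are assumed.
vllm serve mistralai/Mistral-Small-4-119B-2603-NVFP4 \
  --tensor-parallel-size 2 \
  --speculative-config '{"method": "eagle", "model": "mistralai/Mistral-Small-4-119B-2603-eagle", "num_speculative_tokens": 3}'
```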

This model is ready for commercial/non-commercial use.

Third-Party Community Consideration:

This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the link to the Non-NVIDIA Mistral Small 4 119B Model Card.

License and Terms of Use:

GOVERNING TERMS: This trial service is governed by the NVIDIA API Trial Terms of Service. Use of this model is governed by the NVIDIA Open Model License Agreement. Additional Information: Apache 2.0.

Deployment Geography:

Global

Use Case:

Mistral Small 4 is designed for general chat assistant use cases, coding, agentic tasks, and reasoning tasks (with its reasoning mode toggled). Its multimodal capabilities also allow it to understand documents and images to extract data or analyze them.

Its capabilities can be leveraged by:

  • Developers interested in coding and agentic capabilities for SWE automation and codebase exploration.
  • Enterprises seeking general chat assistants, agents, document understanding and more.
  • Researchers interested in its math and research capabilities.

Mistral Small 4 is also ideal for customization and fine-tuning, specializing the model in more defined and specific tasks.

Examples

  • General chat assistant
  • Document parsing and extraction
  • Coding agent
  • Research assistant
  • Customization & fine-tuning
  • And more...

Release Date:

Build.NVIDIA.com: 03/16/2026 via link
Huggingface: 03/16/2026 via link

References:

  • Mistral Large 3 675B Instruct 2512

Model Architecture:

Architecture Type: Transformer
Network Architecture: Mistral (Mixture-of-Experts)
Total Parameters: 119B
Active Parameters: 6.5B

Input:

Input Types: Text, Image
Input Formats: String, Red, Green, Blue (RGB)
Input Parameters: One-Dimensional (1D), Two-Dimensional (2D)
Other Input Properties: Supports text and image inputs with text output. Switches between a fast instant reply mode and a reasoning thinking mode with configurable reasoning effort. Supports function calling and JSON output natively.
Input Context Length (ISL): 262,144
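The input types above (1D text plus a 2D RGB image) map onto a single user message in the OpenAI-compatible chat format. The sketch below only builds that message; the inline base64 data-URL convention is the standard one, but how a given server accepts images should be checked against its docs.

```python
import base64

def build_multimodal_message(prompt: str, image_bytes: bytes) -> dict:
    """One user message mixing text with an inline RGB image.

    The image travels as a base64 data URL inside an image_url content part,
    the common OpenAI-compatible encoding for image input.
    """
    b64 = base64.b64encode(image_bytes).decode()
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }

# Stub bytes stand in for a real PNG; a real call would read an image file.
msg = build_multimodal_message("Extract the totals from this scan.", b"\x89PNG\r\n")
```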

Output:

Output Types: Text
Output Format: String
Output Parameters: One-Dimensional (1D)
Other Output Properties: Generates text responses based on text and image inputs; supports reasoning mode with internal thinking content and native function calling with tool use.
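Native function calling means the model can emit structured tool calls whose arguments a client then parses and executes. A minimal sketch in the OpenAI-compatible "tools" schema (the tool name and parameters here are purely illustrative, not part of this model's API):

```python
import json

# Illustrative tool definition in the OpenAI-compatible format that
# vLLM's tool-calling support also accepts.
get_weather = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# The model returns tool-call arguments as a JSON string; the client
# decodes them before dispatching to the real function.
raw_arguments = '{"city": "Paris"}'
args = json.loads(raw_arguments)
```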

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

Runtime Engines:

  • vLLM: Primary inference engine (supports tensor parallelism, tool calling, reasoning parser)
  • Hugging Face Transformers: Model loading and lightweight serving
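A plain vLLM launch exercising the features listed above (tensor parallelism and tool calling) might look like the following; the flags are real vLLM CLI options, but the checkpoint ID and the choice of the mistral parser for this specific model are assumptions.

```shell
# Assumed checkpoint name; --tensor-parallel-size matches the TP=1/2/4
# test configurations this card lists under Inference.
vllm serve mistralai/Mistral-Small-4-119B-2603 \
  --tensor-parallel-size 4 \
  --enable-auto-tool-choice \
  --tool-call-parser mistral
```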

Supported Hardware:

  • NVIDIA Ampere: A100
  • NVIDIA Blackwell: B100, B200, GB200
  • NVIDIA Hopper: H100, H200

Preferred Operating Systems: Linux

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

Model Version(s)

Mistral Small 4 v4.0

Training, Testing, and Evaluation Datasets:

Training Dataset

Data Modality: Text, Image
Training Data Collection: Undisclosed
Training Labeling: Undisclosed
Training Properties: Undisclosed

Testing Dataset

Testing Data Collection: Undisclosed
Testing Labeling: Undisclosed
Testing Properties: Undisclosed

Evaluation Dataset

Evaluation Data Collection: Undisclosed
Evaluation Labeling: Undisclosed
Evaluation Properties: Undisclosed

Benchmarks

Comparison with internal models

Depending on your task, you can enable reasoning via the per-request reasoning_effort parameter. Set it to:

  • reasoning_effort="none": fast, lightweight responses for everyday tasks, matching the chat style of mistralai/Mistral-Small-3.2-24B-Instruct-2506.
  • reasoning_effort="high": deep, step-by-step reasoning for complex problems, with verbosity comparable to previous Magistral models such as mistralai/Magistral-Small-2509.
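The two modes above differ only in one request field, so toggling between them is a per-call decision. The sketch below stops at payload construction (sending it requires a running OpenAI-compatible server); the model ID is a placeholder and the reasoning_effort values are the two listed above.

```python
def chat_request(prompt: str, reasoning_effort: str) -> dict:
    """Build a chat payload that selects instant-reply or reasoning mode."""
    assert reasoning_effort in {"none", "high"}  # values documented above
    return {
        "model": "mistralai/Mistral-Small-4-119B-2603",  # placeholder ID
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": reasoning_effort,
    }

# Everyday query in fast mode; hard problem with deep reasoning.
fast = chat_request("What's the capital of France?", "none")
deep = chat_request("Prove that sqrt(2) is irrational.", "high")
```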
[Figure: Comparing Reasoning Models]

Comparison with other models

Mistral Small 4 with reasoning achieves competitive scores, matching or surpassing GPT-OSS 120B across all three benchmarks while generating significantly shorter outputs. On AA LCR, Mistral Small 4 scores 0.72 with just 1.6K characters, whereas Qwen models require 3.5-4x more output (5.8-6.1K) for comparable performance. On LiveCodeBench, Mistral Small 4 outperforms GPT-OSS 120B while producing 20% less output. This efficiency reduces latency and inference costs and improves user experience.

Inference

Acceleration Engine: vLLM
Test Hardware: NVIDIA H100, H200 (TP=1, 2, 4)

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please make sure you have proper rights and permissions for all input image content; if an image includes people, personal health information, or intellectual property, the generated output will not blur or maintain the proportions of the image subjects.

Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here.