A Chinese and English large language model targeting language understanding, coding, mathematics, reasoning, and more.
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings a number of improvements over Qwen2, including significantly more knowledge, stronger coding and mathematics capabilities, and better instruction following.
This model is ready for commercial/non-commercial use.
This model is not owned or developed by NVIDIA. This model has been developed and built to a third-party’s requirements for this application and use case; see link to Non-NVIDIA Qwen2.5-7B-Instruct Model Card.
Qwen/Qwen2.5-7B-Instruct is licensed under the Apache 2.0 License.
Blog, GitHub, Documentation, Technical Report
Architecture Type: Transformer
Network Architecture: Qwen2.5-7B-Instruct
Input Type(s): Text
Input Format(s): String
Input Parameters: 1D
Output Type(s): Text
Output Format(s): String
Output Parameters: 1D
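As a sketch of how this text-in/text-out interface maps onto a typical chat request, the snippet below builds an OpenAI-style chat-completion payload for the model. The endpoint schema, model identifier string, and field names are assumptions based on common OpenAI-compatible serving stacks, not details taken from this card.

```python
import json

# Hypothetical OpenAI-compatible chat request for Qwen2.5-7B-Instruct.
# The schema (model/messages/max_tokens/temperature) is an assumption
# based on common OpenAI-compatible serving stacks.
payload = {
    "model": "qwen/qwen2.5-7b-instruct",  # assumed model identifier
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a one-line Python hello world."},
    ],
    "max_tokens": 128,
    "temperature": 0.7,
}

# Input and output are both 1D strings, matching the I/O spec above.
body = json.dumps(payload)
print(body)
```

A serving endpoint would return a JSON response whose generated text is likewise a single string.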
Qwen2.5-7B-Instruct
Training Dataset:
Link: Unknown
Data Collection Method by dataset: Unknown
Labeling Method by dataset: Unknown
Properties: The size of the pre-training dataset is expanded from 7 trillion tokens used in Qwen2 to a maximum of 18 trillion tokens.
Testing Dataset:
Link: Unknown
Data Collection Method by dataset: Unknown
Labeling Method by dataset: Unknown
Properties: Unknown
Evaluation Dataset:
Link: See the evaluation section of the Hugging Face Qwen2.5-7B-Instruct Model Card
Data Collection Method by dataset: Unknown
Labeling Method by dataset: Unknown
Properties: Unknown
Inference:
Engine: TensorRT-LLM
Test Hardware: NVIDIA L40S
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloading or using this model in accordance with our terms of service, developers should work with their internal model team to ensure it meets the requirements of the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns here.