
utter-project

eurollm-9b-instruct

Run Anywhere

State-of-the-art, multilingual model tailored to all 24 official European Union languages.

Tags: chat, european, multilingual, regional language generation, sovereign ai, text-to-text

EuroLLM-9B-Instruct Overview

Description:

The EuroLLM project aims to create a suite of LLMs capable of understanding and generating text in all official European Union languages as well as several additional relevant languages. EuroLLM-9B-Instruct is a 9.154-billion-parameter multilingual transformer language model that covers 35 languages: all 24 official European Union languages plus 11 additional languages. It is instruction-tuned on the EuroBlocks dataset, which focuses on general instruction following and machine translation.

This model is ready for commercial and non-commercial use.

Third-Party Community Consideration

This model is not owned or developed by NVIDIA. This model has been developed and built to a third-party's requirements for this application and use case; see the EuroLLM-9B-Instruct Model Card.

License and Terms of Use:

GOVERNING TERMS: The trial service is governed by the NVIDIA API Trial Terms of Service; and the use of this model is governed by the NVIDIA Community Model License. ADDITIONAL INFORMATION: Apache License Version 2.0.

Deployment Geography:

Global

Use Case:

Designed for multilingual applications such as machine translation, conversational AI, and general-purpose instruction-following tasks across diverse languages.

Release Date:

  • Hugging Face: December 2024
  • Build.NVIDIA.com: 05/14/2025

Reference(s):

  • arXiv:2202.03799
  • arXiv:2402.17733
  • arXiv:2506.04079

Model Architecture:

  • Architecture Type: Transformer
  • Network Architecture: Dense Transformer with Grouped Query Attention (GQA); see the sketch after this list
  • Base Model: EuroLLM-9B
  • Model Parameters: 9.154 billion
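
To make the grouped-query attention configuration above concrete, here is a minimal, illustrative sketch (not the model's actual implementation) of 32 query heads sharing 8 key/value heads at an embedding size of 4,096:

import torch

# Dimensions from this card: d_model 4,096, 32 query heads, 8 KV heads.
batch, seq_len, d_model = 1, 16, 4096
n_heads, n_kv_heads = 32, 8
head_dim = d_model // n_heads        # 128
group = n_heads // n_kv_heads        # 4 query heads share each KV head

q = torch.randn(batch, n_heads, seq_len, head_dim)
k = torch.randn(batch, n_kv_heads, seq_len, head_dim)
v = torch.randn(batch, n_kv_heads, seq_len, head_dim)

# Broadcast each KV head across its group of query heads, then apply
# standard scaled dot-product attention.
k = k.repeat_interleave(group, dim=1)
v = v.repeat_interleave(group, dim=1)
attn = torch.softmax(q @ k.transpose(-2, -1) / head_dim**0.5, dim=-1) @ v
print(attn.shape)                    # torch.Size([1, 32, 16, 128])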

Input:

  • Input Type(s): Text
  • Input Format(s): String
  • Input Parameters: 1D
  • Other Properties Related to Input: Maximum sequence length of 4,096 tokens; tokenized using a custom tokenizer designed for multilingual support.
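
A minimal sketch of enforcing the 4,096-token limit at tokenization time, assuming the tokenizer shipped with the Hugging Face checkpoint utter-project/EuroLLM-9B-Instruct:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("utter-project/EuroLLM-9B-Instruct")
# Truncate anything longer than the model's maximum sequence length.
inputs = tokenizer(
    "Guten Tag! Bitte fassen Sie diesen Text kurz zusammen.",
    truncation=True,
    max_length=4096,
    return_tensors="pt",
)
print(inputs["input_ids"].shape)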

Output:

  • Output Type(s): Text
  • Output Format: String
  • Output Parameters: 1D
  • Other Properties Related to Output: NA

Software Integration:

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Ampere
  • NVIDIA Blackwell
  • NVIDIA Hopper
  • NVIDIA Lovelace
  • NVIDIA Pascal

Operating System(s):

  • Linux

Model Version(s):

  • EuroLLM-9B-Instruct v1.0

Training, Testing, and Evaluation Datasets:

Training Dataset:

  • Data Collection Method by dataset: Hybrid: Automated, Human, Synthetic
  • Labeling Method by dataset: Hybrid: Automated, Human
  • Properties: Trained on 4 trillion tokens across 35 languages.

Testing Dataset:

  • Data Collection Method by dataset: Hybrid: Automated, Human, Synthetic
  • Labeling Method by dataset: Hybrid: Automated, Human
  • Properties: Undisclosed

Evaluation Dataset:

  • Benchmark Score: EuroLLM-9B-Instruct demonstrates competitive performance on multilingual benchmarks, surpassing many European-developed models and matching the performance of models like Mistral-7B.
  • Data Collection Method by dataset: Undisclosed

Inference:

  • Engine: TensorRT-LLM
  • Test Hardware: NVIDIA Lovelace L40S
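
For local experimentation with the same engine, here is a hedged sketch using TensorRT-LLM's high-level LLM API; the exact API surface varies between releases, so treat this as an outline rather than the recipe behind this endpoint:

from tensorrt_llm import LLM, SamplingParams

# Load the Hugging Face checkpoint and generate a short completion.
llm = LLM(model="utter-project/EuroLLM-9B-Instruct")
params = SamplingParams(max_tokens=128, temperature=0.7)

for output in llm.generate(["What is the capital of Portugal?"], params):
    print(output.outputs[0].text)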

Additional Details:

For pre-training, we use 400 NVIDIA H100 GPUs of the MareNostrum 5 supercomputer, training the model with a constant batch size of 2,800 sequences (approximately 12 million tokens), the Adam optimizer, and BF16 precision. Here is a summary of the model hyper-parameters; a quick arithmetic check on the parameter counts follows the list:

  • Sequence Length: 4,096
  • Number of Layers: 42
  • Embedding Size: 4,096
  • FFN Hidden Size: 12,288
  • Number of Heads: 32
  • Number of KV Heads (GQA): 8
  • Activation Function: SwiGLU
  • Position Encodings: RoPE (Θ = 10,000)
  • Layer Norm: RMSNorm
  • Tied Embeddings: No
  • Embedding Parameters: 0.524B
  • LM Head Parameters: 0.524B
  • Non-embedding Parameters: 8.105B
  • Total Parameters: 9.154B
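
A quick arithmetic check on the figures above; the 128,000-entry vocabulary is an assumption taken from the EuroLLM paper, while everything else comes from the list and the pre-training description:

vocab_size, d_model = 128_000, 4_096                 # assumed vocabulary size

embedding_params = vocab_size * d_model              # 524,288,000 ~ 0.524B
lm_head_params = vocab_size * d_model                # embeddings are untied
total = embedding_params + lm_head_params + 8.105e9  # plus non-embedding params
print(f"{total / 1e9:.3f}B total parameters")        # ~9.154B

batch_tokens = 2_800 * 4_096                         # sequences x sequence length
print(f"{batch_tokens / 1e6:.1f}M tokens per batch") # ~11.5M, i.e. roughly 12M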

Run the model

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "utter-project/EuroLLM-9B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": "You are EuroLLM --- an AI assistant specialized in European languages that provides safe, educational and helpful answers.",
    },
    {
        "role": "user",
        "content": "What is the capital of Portugal? How would you describe it?",
    },
]

inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
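
The model can also be called through the hosted endpoint on build.nvidia.com via its OpenAI-compatible API. The base URL below follows the usual build.nvidia.com convention and the model identifier is assumed from this page's catalog slug; verify both against the endpoint's own documentation:

import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ["NVIDIA_API_KEY"],  # API key generated on build.nvidia.com
)

completion = client.chat.completions.create(
    model="utter-project/eurollm-9b-instruct",  # assumed catalog identifier
    messages=[
        {"role": "system", "content": "You are EuroLLM, an AI assistant specialized in European languages."},
        {"role": "user", "content": "Qual é a capital de Portugal?"},
    ],
    max_tokens=256,
)
print(completion.choices[0].message.content)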

Results

EU Languages

Table 1: Comparison of open-weight LLMs on multilingual benchmarks. The Borda count corresponds to the average ranking of the models (see Colombo et al., 2022). For ARC-Challenge, Hellaswag, and MMLU we use the Okapi datasets (Lai et al., 2023), which include 11 languages. For MMLU-Pro and MUSR we translate the English version with Tower (Alves et al., 2024) into 6 EU languages.
* As there are no public versions of the pre-trained models, we evaluated them using the post-trained versions.

The results in Table 1 highlight EuroLLM-9B's superior performance on multilingual tasks compared to other European-developed models (as shown by the Borda count of 1.0), as well as its strong competitiveness with non-European models, achieving results comparable to Gemma-2-9B and outperforming the rest on most benchmarks.

English

Table 2: Comparison of open-weight LLMs on English general benchmarks.
* As there are no public versions of the pre-trained models, we evaluated them using the post-trained versions.

The results in Table 2 demonstrate EuroLLM's strong performance on English tasks, surpassing most European-developed models and matching the performance of Mistral-7B (obtaining the same Borda count).

Bias, Risks, and Limitations

This model may generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, and it may produce socially unacceptable or undesirable text even if the prompt does not contain anything explicitly offensive. Developers should implement appropriate safety measures and conduct thorough evaluations before deploying the model in production environments.

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns here.