State-of-the-art, multilingual model tailored to all 24 official European Union languages.
The EuroLLM project has the goal of creating a suite of LLMs capable of understanding and generating text in all European Union languages as well as some additional relevant languages. EuroLLM-9B-Instruct is a 9.154 billion parameter multilingual transformer language model developed to understand and generate text across 35 languages, including all 24 official European Union languages and 11 additional languages. It is instruction-tuned on the EuroBlocks dataset, focusing on general instruction-following and machine translation tasks.
This model is ready for commercial and non-commercial use.
This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the EuroLLM-9B-Instruct Model Card.
GOVERNING TERMS: The trial service is governed by the NVIDIA API Trial Terms of Service; and the use of this model is governed by the NVIDIA Community Model License. ADDITIONAL INFORMATION: Apache License Version 2.0.
Global
Designed for multilingual applications such as machine translation, conversational AI, and general-purpose instruction-following tasks across diverse languages.
For pre-training, we use 400 NVIDIA H100 GPUs of the MareNostrum 5 supercomputer, training the model with a constant batch size of 2,800 sequences (approximately 12 million tokens per batch), the Adam optimizer, and BF16 precision. Here is a summary of the model hyper-parameters:
| Hyper-parameter | Value |
| --- | --- |
| Sequence Length | 4,096 |
| Number of Layers | 42 |
| Embedding Size | 4,096 |
| FFN Hidden Size | 12,288 |
| Number of Heads | 32 |
| Number of KV Heads (GQA) | 8 |
| Activation Function | SwiGLU |
| Position Encodings | RoPE (Θ = 10,000) |
| Layer Norm | RMSNorm |
| Tied Embeddings | No |
| Embedding Parameters | 0.524B |
| LM Head Parameters | 0.524B |
| Non-embedding Parameters | 8.105B |
| Total Parameters | 9.154B |
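As a rough sanity check on these figures, the sketch below recomputes the parameter counts from the table assuming a Llama-style decoder (grouped-query attention, a SwiGLU MLP with gate/up/down projections, RMSNorm without biases) and a vocabulary of about 128,000 tokens, which is what the reported 0.524B embedding parameters imply at an embedding size of 4,096. The vocabulary size and layer layout are assumptions for illustration, not the authors' own accounting; the totals reproduce the table only to within rounding.

```python
# Back-of-the-envelope parameter count for EuroLLM-9B, assuming a
# Llama-style decoder (GQA attention, SwiGLU MLP, RMSNorm, no biases).
# The vocabulary size is an assumption inferred from 0.524B / 4,096 ≈ 128k.

hidden = 4096          # Embedding Size
ffn = 12288            # FFN Hidden Size
layers = 42            # Number of Layers
heads = 32             # Number of Heads
kv_heads = 8           # Number of KV Heads (GQA)
vocab = 128_000        # assumed vocabulary size
seq_len = 4096         # Sequence Length

head_dim = hidden // heads          # 128
kv_dim = kv_heads * head_dim        # 1,024

attn = 2 * hidden * hidden + 2 * hidden * kv_dim   # Q/O plus smaller K/V projections
mlp = 3 * hidden * ffn                             # gate, up, and down projections
norms = 2 * hidden                                 # two RMSNorms per layer

non_embedding = layers * (attn + mlp + norms) + hidden  # + final RMSNorm
embeddings = vocab * hidden                             # input embeddings
lm_head = vocab * hidden                                # untied output head

print(f"non-embedding ≈ {non_embedding / 1e9:.3f}B")    # ≈ 8.10B (table: 8.105B)
print(f"embeddings    ≈ {embeddings / 1e9:.3f}B")       # ≈ 0.524B
print(f"total         ≈ {(non_embedding + embeddings + lm_head) / 1e9:.3f}B")  # ≈ 9.15B

# The batch-size figure is consistent too: 2,800 sequences × 4,096 tokens
print(f"tokens per batch ≈ {2800 * seq_len / 1e6:.1f}M")  # ≈ 11.5M, i.e. roughly 12M
```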
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "utter-project/EuroLLM-9B-Instruct"

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a chat-formatted prompt using the model's chat template
messages = [
    {
        "role": "system",
        "content": "You are EuroLLM --- an AI assistant specialized in European languages that provides safe, educational and helpful answers.",
    },
    {
        "role": "user",
        "content": "What is the capital of Portugal? How would you describe it?",
    },
]
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)

# Generate and decode the assistant's reply
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
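For constrained GPU memory, it may help to load the weights in bfloat16 and let `transformers` place them across available devices. The variant below is a sketch using generic `from_pretrained` arguments (`torch_dtype`, `device_map`, which requires the `accelerate` package), not settings prescribed by this model card; the prompt is likewise only an example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "utter-project/EuroLLM-9B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Optional: load in bfloat16 and spread the weights across available devices.
# These are generic transformers arguments, not requirements of EuroLLM itself.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Translate to German: The weather is lovely today."}
]
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```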
Table 1: Comparison of open-weight LLMs on multilingual benchmarks. The Borda count corresponds to the average ranking of the models (see Colombo et al., 2022). For ARC-Challenge, HellaSwag, and MMLU we use the Okapi datasets (Lai et al., 2023), which cover 11 languages. For MMLU-Pro and MUSR we translate the English versions into 6 EU languages with Tower (Alves et al., 2024).
* As there are no public versions of the pre-trained models, we evaluated them using the post-trained versions.
The results in Table 1 highlight EuroLLM-9B's superior performance on multilingual tasks compared to other European-developed models (as shown by the Borda count of 1.0), as well as its strong competitiveness with non-European models, achieving results comparable to Gemma-2-9B and outperforming the rest on most benchmarks.
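Since the Borda count used here is simply each model's average rank across benchmarks (lower is better), the short sketch below illustrates that aggregation. The model names and scores are made-up placeholders, not the actual Table 1 numbers.

```python
# Average-rank ("Borda count") aggregation across benchmarks.
# Scores are illustrative placeholders, not results from Table 1.
benchmark_scores = {
    "benchmark_a": {"model_x": 71.2, "model_y": 68.9, "model_z": 64.1},
    "benchmark_b": {"model_x": 55.0, "model_y": 57.3, "model_z": 51.8},
    "benchmark_c": {"model_x": 80.4, "model_y": 79.9, "model_z": 75.0},
}

ranks = {model: [] for model in next(iter(benchmark_scores.values()))}
for scores in benchmark_scores.values():
    # Rank 1 goes to the highest score on each benchmark.
    ordered = sorted(scores, key=scores.get, reverse=True)
    for rank, model in enumerate(ordered, start=1):
        ranks[model].append(rank)

borda = {model: sum(r) / len(r) for model, r in ranks.items()}
for model, avg_rank in sorted(borda.items(), key=lambda kv: kv[1]):
    print(f"{model}: average rank {avg_rank:.2f}")
```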
Table 2: Comparison of open-weight LLMs on English general benchmarks.
* As there are no public versions of the pre-trained models, we evaluated them using the post-trained versions.
The results in Table 2 demonstrate EuroLLM's strong performance on English tasks, surpassing most European-developed models and matching the performance of Mistral-7B (obtaining the same Borda count).
This model may generate answers that are inaccurate, omit key information, include irrelevant or redundant text, or produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Developers should implement appropriate safety measures and conduct thorough evaluations before deploying the model in production environments.
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns here.