A sovereign AI model trained on Japanese-language data that understands regional nuances.
Our Swallow models were built by continual pre-training from the Llama 3 family, primarily with the addition of Japanese-language data. The Instruct versions were built with supervised fine-tuning (SFT) and Chat Vector.
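As a rough illustration of the Chat Vector idea, the weight delta between an instruction-tuned model and its base is added to the continually pre-trained model. The sketch below is a minimal version of that technique, not the authors' exact recipe; the checkpoint names and output directory are illustrative assumptions.

```python
# Minimal sketch of the Chat Vector technique, NOT the authors' exact recipe.
# chat vector = (instruct weights) - (base weights); the delta is added to the
# continually pre-trained (CPT) model. Assumes all three checkpoints share the
# same architecture and tensor shapes.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16)
chat = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16)
cpt = AutoModelForCausalLM.from_pretrained(
    "tokyotech-llm/Llama-3-Swallow-8B-v0.1", torch_dtype=torch.bfloat16)

with torch.no_grad():
    for name, p_cpt in cpt.named_parameters():
        # add the "chat vector" delta parameter-by-parameter
        p_cpt.add_(chat.get_parameter(name) - base.get_parameter(name))

cpt.save_pretrained("llama-3-swallow-8b-chat-vector")  # hypothetical output dir
```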
We are excited to share the release of our latest models:
| Model | Llama-3-Swallow | Llama-3-Swallow-Instruct |
|---|---|---|
| 8B | Link | Link |
| 70B | Link | Link |
This repository provides large language models developed by Swallow-LLM. Read our blog post.
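For a quick start, a minimal inference sketch with Hugging Face transformers is shown below. The repository id is an assumption (use the links in the table above), and the generation settings are illustrative rather than recommended values.

```python
# Minimal inference sketch with Hugging Face transformers. The repository id
# is an assumption; generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "あなたは誠実で優秀なアシスタントです。"},  # "You are a sincere and excellent assistant."
    {"role": "user", "content": "東京の観光名所を教えてください。"},  # "Tell me about tourist attractions in Tokyo."
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.6, top_p=0.9
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```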
This model is not owned or developed by NVIDIA. This model has been developed and built to a third-party’s requirements for this application and use case; see link to Non-NVIDIA Model Card.
META LLAMA 3 COMMUNITY LICENSE
Architecture Type: Transformer
The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
Input Type(s): Text
Input Format(s): String
Input Parameters: One Dimensional (1D)
Output Type(s): Text
Output Format(s): String
Output Parameters: One Dimensional (1D)
Supported Hardware Microarchitecture Compatibility:
Preferred Operating System(s):
The following datasets were used for instruction tuning.
| Model | Size | JCom. | JEMHopQA | NIILC | JSQuAD | XL-Sum | MGSM | WMT20-en-ja | WMT20-ja-en | JMMLU | JHumanEval | Ja Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | | 4-shot | 4-shot | 4-shot | 4-shot | 1-shot | 4-shot | 4-shot | 4-shot | 5-shot | 0-shot | |
| | | EM acc | Char-F1 | Char-F1 | Char-F1 | ROUGE-2 | EM acc | BLEU | BLEU | EM acc | pass@1 | |
| karakuri-lm-70b-chat-v0.1 | 70B | 0.8847 | 0.5139 | 0.5668 | 0.9096 | 0.1369 | 0.2800 | 0.2526 | 0.2095 | 0.4648 | 0.2354 | 0.4454 |
| Meta-Llama-3-70B-Instruct | 70B | 0.9419 | 0.6114 | 0.5506 | 0.9164 | 0.1912 | 0.7200 | 0.2708 | 0.2350 | 0.6789 | 0.6610 | 0.5777 |
| Llama-3-Swallow-70B-Instruct-v0.1 | 70B | 0.9607 | 0.6188 | 0.6026 | 0.9236 | 0.1389 | 0.6560 | 0.2724 | 0.2532 | 0.6572 | 0.6000 | 0.5683 |
| Qwen2-72B-Instruct | 72B | 0.9634 | 0.6268 | 0.5418 | 0.9210 | 0.1644 | 0.7840 | 0.2592 | 0.2327 | 0.7713 | 0.6909 | 0.5955 |
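For readers unfamiliar with the Char-F1 metric used for JEMHopQA, NIILC, and JSQuAD above, here is a minimal sketch of one common character-level formulation; the exact implementation inside llm-jp-eval may differ.

```python
# Minimal sketch of one common character-level F1 formulation; the exact
# implementation inside llm-jp-eval may differ.
from collections import Counter

def char_f1(prediction: str, reference: str) -> float:
    pred, ref = Counter(prediction), Counter(reference)
    overlap = sum((pred & ref).values())  # multiset character overlap
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(char_f1("東京都", "東京"))  # 0.8
```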
| Model | Size | OpenBookQA | TriviaQA | HellaSWAG | SQuAD2.0 | XWINO | MMLU | GSM8K | BBH | HumanEval | En Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|
| | | 4-shot | 4-shot | 4-shot | 4-shot | 4-shot | 5-shot | 4-shot | 3-shot | 0-shot | |
| | | Acc | EM acc | Acc | EM acc | Acc | Acc | EM acc | CoT EM Acc | pass@1 | |
| karakuri-lm-70b-chat-v0.1 | 70B | 0.4100 | 0.6873 | 0.6315 | 0.3677 | 0.9049 | 0.5941 | 0.3882 | 0.5724 | 0.2305 | 0.5319 |
| Meta-Llama-3-70B-Instruct | 70B | 0.4400 | 0.7999 | 0.6552 | 0.4024 | 0.9127 | 0.7992 | 0.9052 | 0.8326 | 0.7555 | 0.7225 |
| Llama-3-Swallow-70B-Instruct-v0.1 | 70B | 0.4520 | 0.8174 | 0.6758 | 0.4050 | 0.9230 | 0.7883 | 0.8688 | 0.8152 | 0.6890 | 0.7150 |
| Qwen2-72B-Instruct | 72B | 0.4360 | 0.7588 | 0.6857 | 0.3913 | 0.9110 | 0.8391 | 0.8499 | 0.2436 | 0.6939 | 0.6455 |
| Model | Size | coding | extraction | humanities | math | reasoning | roleplay | stem | writing | JMT Avg |
|---|---|---|---|---|---|---|---|---|---|---|
| karakuri-lm-70b-chat-v0.1 | 70B | 0.2804 | 0.5862 | 0.6240 | 0.2934 | 0.4183 | 0.5530 | 0.4859 | 0.5964 | 0.4797 |
| Meta-Llama-3-70B-Instruct | 70B | 0.5969 | 0.8410 | 0.7120 | 0.4481 | 0.4884 | 0.7117 | 0.6510 | 0.6900 | 0.6424 |
| Llama-3-Swallow-70B-Instruct-v0.1 | 70B | 0.5269 | 0.7250 | 0.5690 | 0.4669 | 0.6121 | 0.6238 | 0.5533 | 0.5698 | 0.5809 |
| Qwen2-72B-Instruct | 72B | 0.5699 | 0.7858 | 0.8222 | 0.5096 | 0.7032 | 0.7963 | 0.7728 | 0.8223 | 0.7228 |
| GPT-3.5 (gpt-3.5-turbo-0125) | | 0.6851 | 0.7641 | 0.7414 | 0.5522 | 0.5128 | 0.7104 | 0.6266 | 0.7361 | 0.6661 |
| GPT-4o (gpt-4o-2024-05-13) | | 0.7296 | 0.8540 | 0.8646 | 0.6641 | 0.6661 | 0.8274 | 0.8184 | 0.8085 | 0.7791 |
For the Japanese evaluation benchmarks, we used llm-jp-eval (v1.3.0), the JP Language Model Evaluation Harness (commit #9b42d41), and the Code Generation LM Evaluation Harness (commit #0261c52). The details are as follows:
For the English evaluation benchmarks, we used the Language Model Evaluation Harness (v0.4.2) and the Code Generation LM Evaluation Harness (commit #0261c52). The details are as follows:
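For reference, below is a minimal sketch of scoring one English task with the Language Model Evaluation Harness v0.4.x Python API. It does not reproduce the exact tasks, prompts, or settings behind the tables above, and the repository id is an assumption.

```python
# Minimal sketch (not the authors' exact pipeline) of running one English
# task with the Language Model Evaluation Harness v0.4.x Python API.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1,dtype=bfloat16",
    tasks=["gsm8k"],   # mirrors the 4-shot GSM8K column above
    num_fewshot=4,
    batch_size=8,
)
print(results["results"]["gsm8k"])
```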
We used Japanese MT-Bench to assess the instruction-following capabilities of models, with the following settings:
Judge: gpt-4-1106-preview
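The Japanese MT-Bench scores in the table above lie in a 0-1 range; our assumption (not stated explicitly here) is that they are the usual 1-10 judge scores rescaled by dividing by 10, as sketched below.

```python
# Assumption: the JMT scores above are standard 1-10 MT-Bench judge scores
# rescaled to a 0-1 range by dividing by 10.
def normalize_mtbench(judge_score: float) -> float:
    return judge_score / 10.0

print(normalize_mtbench(6.424))  # 0.6424, the Meta-Llama-3-70B-Instruct JMT Avg
```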
Engine: TensorRT-LLM
Test Hardware:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns here.
Here are the team members:
If you find our work helpful, please feel free to cite us.
@misc{llama3swallow, title={Llama 3 Swallow}, url={https://swallow-llm.github.io/llama3-swallow.en.html}, author={Swallow LLM}, year={2024}, }
@article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} }
We thank Meta Research for releasing Llama 3 under an open license for others to build on.
Our project is supported by the Large Generative AI Development Support Program of the National Institute of Advanced Industrial Science and Technology.