Highly efficient Mixture of Experts model for RAG, summarization, entity extraction, and classification
Granite-3.0-3B-A800M-Instruct is a 3B-parameter model finetuned from Granite-3.0-3B-A800M-Base-4K using a combination of permissively licensed open-source instruction datasets and internally collected synthetic datasets. The model is developed using a diverse set of techniques with a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging.
This model is not owned or developed by NVIDIA. This model has been developed and built to a third-party's requirements for this application and use case; see the link to the Non-NVIDIA Granite-3.0-3B-A800M-Base-4K model card.
GOVERNING TERMS: The trial service is governed by the NVIDIA API Trial Terms of Service; and the use of this model is governed by the NVIDIA AI Foundation Models Community License Agreement. ADDITIONAL INFORMATION: Apache 2.0 License.
Architecture Type: Transformer
Network Architecture: Other - MoE
The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications.
Input Type(s): Text
Input Format(s): String
Input Parameters: min_tokens, max_tokens, temperature, top_p, stop, frequency_penalty, presence_penalty
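The parameters listed above correspond to typical chat-completion request fields. A minimal sketch of how they might appear in a request payload, assuming an OpenAI-compatible endpoint; the model id and example values are illustrative assumptions, not taken from this card:

```python
# Hypothetical request payload illustrating the input parameters listed above.
# The model id and the endpoint conventions are assumptions based on common
# OpenAI-compatible APIs; consult the NVIDIA API documentation for specifics.
payload = {
    "model": "ibm/granite-3.0-3b-a800m-instruct",  # assumed model id
    "messages": [
        {"role": "user", "content": "Classify the sentiment: 'Great product.'"}
    ],
    "min_tokens": 1,             # lower bound on generated tokens
    "max_tokens": 256,           # upper bound on generated tokens
    "temperature": 0.2,          # sampling temperature
    "top_p": 0.9,                # nucleus sampling threshold
    "stop": ["</s>"],            # stop sequence(s)
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
}
```

Lower temperatures suit extraction and classification tasks, where deterministic, well-formatted output matters more than diversity.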
Other Properties Related to Input: Granite instruct models are primarily finetuned using instruction-response pairs, mostly in English, but also in German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese (Simplified). As this model has been exposed to multilingual data, it can handle multilingual dialog use cases, though with limited performance on non-English tasks. In such cases, introducing a small number of examples (few-shot) can help the model generate more accurate outputs.
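A few-shot prompt for a non-English task can be built as a chat message list in which worked examples precede the real query. A minimal sketch; the German texts and labels below are illustrative assumptions, not from this card:

```python
# Hypothetical few-shot prompt for a German sentiment-classification task.
# Two worked user/assistant examples steer the model toward the desired
# label format before the real query; all texts here are made up.
few_shot = [
    {"role": "user", "content": "Stimmung: 'Das Produkt ist großartig.'"},
    {"role": "assistant", "content": "positiv"},
    {"role": "user", "content": "Stimmung: 'Die Lieferung kam zu spät.'"},
    {"role": "assistant", "content": "negativ"},
    {"role": "user", "content": "Stimmung: 'Der Akku hält sehr lange.'"},
]
# This list can then be passed to tokenizer.apply_chat_template(few_shot,
# tokenize=False, add_generation_prompt=True) just like a single-turn chat.
```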
Output Type(s): Text
Output Format: String
Output Parameters: None
Other Properties Related to Output: None
This is a simple example of how to use the Granite-3.0-3B-A800M-Instruct model.
Install the following libraries:
```shell
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
```
Then, copy the snippet from the section that is relevant for your use case.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "ibm-granite/granite-3.0-3b-a800m-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
model.eval()
# change input text as desired
chat = [
    {"role": "user", "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location."},
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# tokenize the text and move it to the model's device
input_tokens = tokenizer(chat, return_tensors="pt").to(model.device)
# generate output tokens
output = model.generate(**input_tokens, max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output)
```
Granite-3.0-3B-A800M-Instruct is based on a decoder-only sparse Mixture of Experts (MoE) transformer architecture. Core components of this architecture are: Fine-grained Experts, Dropless Token Routing, and Load Balancing Loss.
| Model | 2B Dense | 8B Dense | 1B MoE | 3B MoE |
| --- | --- | --- | --- | --- |
| Embedding size | 2048 | 4096 | 1024 | 1536 |
| Number of layers | 40 | 40 | 24 | 32 |
| Attention head size | 64 | 128 | 64 | 64 |
| Number of attention heads | 32 | 32 | 16 | 24 |
| Number of KV heads | 8 | 8 | 8 | 8 |
| MLP hidden size | 8192 | 12800 | 512 | 512 |
| MLP activation | SwiGLU | SwiGLU | SwiGLU | SwiGLU |
| Number of Experts | — | — | 32 | 40 |
| MoE TopK | — | — | 8 | 8 |
| Initialization std | 0.1 | 0.1 | 0.1 | 0.1 |
| Sequence Length | 4096 | 4096 | 4096 | 4096 |
| Position Embedding | RoPE | RoPE | RoPE | RoPE |
| # Parameters | 2.5B | 8.1B | 1.3B | 3.3B |
| # Active Parameters | 2.5B | 8.1B | 400M | 800M |
| # Training tokens | 12T | 12T | 10T | 10T |
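The relationship between total and active parameters follows from the routing scheme: each token is processed by only the TopK experts the router selects. A minimal sketch of top-k routing and a Switch-Transformer-style load-balancing loss, in plain Python; this is illustrative only, not IBM's implementation, and the function names and exact loss form are assumptions:

```python
import math

NUM_EXPERTS, TOP_K = 40, 8  # 3B MoE configuration from the table above

def top_k_route(logits, k=TOP_K):
    """Greedy top-k routing: softmax over the k highest-scoring experts.

    Returns a dict mapping the chosen expert indices to mixture weights.
    """
    chosen = sorted(range(len(logits)), key=logits.__getitem__, reverse=True)[:k]
    m = max(logits[i] for i in chosen)  # subtract max for numerical stability
    w = {i: math.exp(logits[i] - m) for i in chosen}
    z = sum(w.values())
    return {i: w[i] / z for i in chosen}

def load_balancing_loss(token_fractions, mean_probs, num_experts=NUM_EXPERTS):
    """Switch-Transformer-style auxiliary loss.

    token_fractions[i]: fraction of tokens routed to expert i;
    mean_probs[i]: mean router probability for expert i.
    Minimized (value 1.0) when expert usage is uniform.
    """
    return num_experts * sum(f * p for f, p in zip(token_fractions, mean_probs))

# With only 8 of 40 experts running per token, roughly top_k/num_experts of
# the expert parameters are active per forward pass, which is how a
# 3.3B-parameter model ends up with ~800M active parameters.
weights = top_k_route([0.1 * i for i in range(NUM_EXPERTS)])
```

Adding the load-balancing term to the training loss discourages the router from collapsing onto a few favored experts, which is what keeps dropless routing efficient.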
Granite Language Instruct models are trained on a selection of open-source instruction datasets with a non-restrictive license, as well as a collection of synthetic datasets created by IBM. Together, these instruction datasets are a solid representation of the following domains: English, multilingual, code, math, tools, and safety.
We train the Granite Language models using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.
The model inherits ethical considerations and limitations from its base model.
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns here.