Fine-tuned Llama 3.1 70B model for code generation, summarization, and multi-language tasks.
We introduce the latest in the Smaug series: the Dracarys family of fine-tunes targeting improved coding performance across a variety of base models.
This variant is a fine-tune of meta-llama/Meta-Llama-3.1-70B-Instruct that can generate code and answer questions about code.
Compared to meta-llama/Meta-Llama-3.1-70B-Instruct, Dracarys achieves better LiveCodeBench scores (see the evaluation results below).
This model is ready for commercial and non-commercial use.
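As a sketch of how the model might be queried, the snippet below assembles an OpenAI-style chat-completions request. The endpoint URL, model identifier, and `NVIDIA_API_KEY` environment variable are assumptions for illustration; check the actual catalog entry for the real values before use.

```python
import json
import os
import urllib.request

# Hypothetical endpoint and model ID for illustration only;
# verify against the catalog entry before relying on them.
INVOKE_URL = "https://integrate.api.nvidia.com/v1/chat/completions"
MODEL_ID = "abacusai/dracarys-llama-3.1-70b-instruct"


def build_payload(prompt: str) -> dict:
    """Assemble an OpenAI-style chat-completions request body."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature suits code generation
        "max_tokens": 512,
    }


def generate(prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    payload = build_payload(prompt)
    req = urllib.request.Request(
        INVOKE_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

A call such as `generate("Write a Python function that reverses a linked list.")` would then return the generated code as a string.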
This model is not owned or developed by NVIDIA. This model has been developed and built to a third-party’s requirements for this application and use case; see link to Non-NVIDIA Model Card.
META LLAMA 3 COMMUNITY LICENSE
Architecture Type: Transformer
Input Type(s): Text
Input Format(s): String
Input Parameters: One Dimensional (1D)
Output Type(s): Text
Output Format: String
Output Parameters: 1D
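Since input and output are one-dimensional strings, a chat exchange has to be serialized into a single flat prompt. A minimal sketch, assuming the standard Llama 3.1 chat template (in practice the tokenizer's `apply_chat_template()` handles this):

```python
# Sketch of the flat prompt string a Llama 3.1-family model consumes,
# using the Llama 3.1 special tokens. Illustrative only; prefer the
# tokenizer's apply_chat_template() in real code.
def format_prompt(messages: list[dict]) -> str:
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Cue the model to produce the assistant turn next.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)


prompt = format_prompt([
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Explain Python list comprehensions."},
])
```

The returned string is what gets tokenized and fed to the model as its 1D text input; the model's 1D text output is the assistant turn that completes it.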
Supported Hardware Microarchitecture Compatibility:
Preferred Operating System(s):
| Model | Code Generation | Code Execution | Test Output Prediction |
|---|---|---|---|
| Dracarys-Llama-3.1-70B-Instruct | 37.08 | 39.00 | 49.90 |
| Meta-Llama-3.1-70B-Instruct | 31.80 | 55.50 | 41.40 |
Data Collection Method by dataset:
Labeling Method by dataset:
Engine: TensorRT-LLM
Test Hardware:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns here.