Improve the safety, security, and privacy of AI systems at the build, deploy, and run stages.
As large language models (LLMs) increasingly enable agentic AI systems capable of autonomous reasoning and tool use, they also introduce critical safety risks, including goal misalignment, hallucinations, and prompt injection. Enterprises are challenged to harness the flexibility of open-weight models without compromising on trust, security, or compliance. As regulations tighten across regions and industries, demonstrating compliance becomes a persistent challenge.
With this safety recipe, enterprises can confidently adopt open models aligned to their policies. Start with model evaluation, using garak vulnerability scanning with curated risk prompts and benchmarking against enterprise thresholds. Then post-train using the recipes and safety datasets to close critical safety and security gaps. Deploy the hardened model as a trusted NVIDIA NIM, and add runtime safety protection at inference with NVIDIA NeMo Guardrails, which actively blocks unsafe model behavior. With continuous monitoring and collaboration between AI and risk teams, model safety becomes enforceable, not aspirational.
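As a rough sketch, the garak evaluation scan described above could be invoked from the command line as follows. The probe selection, model name, and report prefix here are illustrative assumptions, not a prescribed configuration:

```shell
# Scan a locally served, OpenAI-compatible NIM endpoint with garak.
# Probe list and report prefix below are illustrative assumptions.
python -m garak \
  --model_type nim \
  --model_name meta/llama-3.1-70b-instruct \
  --probes promptinject,dan \
  --report_prefix safety_eval
```

The resulting report can then be compared against the enterprise thresholds chosen for the evaluation step.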
This safety recipe is broken down into four steps, which map to a typical agentic workflow environment:

1. Evaluate: scan the model with garak using curated risk prompts and benchmark the results against enterprise thresholds.
2. Post-train: apply the recipes and safety datasets to close critical safety and security gaps.
3. Deploy: serve the hardened model as a trusted NVIDIA NIM.
4. Safeguard: add runtime protection with NVIDIA NeMo Guardrails to actively block unsafe model behavior.
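The runtime protection step can be illustrated with a minimal NeMo Guardrails configuration. This config.yml sketch assumes a NIM-served Llama model and the library's built-in self-check rails; it is not a complete production policy:

```yaml
# config.yml — minimal sketch; the model endpoint is an assumption
models:
  - type: main
    engine: nim
    model: meta/llama-3.1-70b-instruct

rails:
  input:
    flows:
      - self check input   # screen user prompts before the model sees them
  output:
    flows:
      - self check output  # screen model responses before they are returned
```

Note that the self-check rails also require accompanying prompt definitions (e.g., in a prompts.yml) describing what counts as unsafe input and output.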
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure the models meet requirements for the relevant industry and use case and address unforeseen product misuse. For more detailed information on ethical considerations for the models, please see the Model Card++, Explainability, Bias, Safety & Security, and Privacy Subcards. Please report security vulnerabilities or NVIDIA AI concerns here.
Use of this developer example notebook is governed by the Apache 2.0 License.
The software and materials are governed by the NVIDIA Software License Agreement and the Product-Specific Terms for NVIDIA AI Products, except that models are governed by the AI Foundation Models Community License Agreement and the NVIDIA RAG dataset is governed by the NVIDIA Asset License Agreement. ADDITIONAL INFORMATION: the Meta/llama-3.1-70b-instruct model is governed by the Llama 3.1 Community License Agreement, and the nvidia/llama-3.2-nv-embedqa-1b-v2 model is governed by the Llama 3.2 Community License Agreement. Built with Llama.