Search Results

Searching for: Highly regulated use case support
Sorting by Most Recent

Multilingual LLM trained on NVIDIA DGX Cloud, designed for mission-critical use cases in regulated industries, including financial services, government, and heavy industry.

Instruction-tuned LLM achieving state-of-the-art performance on reasoning, math, and general knowledge.

Multilingual LLM with an emphasis on European languages, supporting regulated use cases in financial services, government, and heavy industry.

Fragment-Based Molecular Generation by Discrete Diffusion.

Transform PDFs into AI podcasts for engaging on-the-go audio content.

Generates physics-aware video world states from text and image prompts for physical AI development.

Generates future frames of a physics-aware world state from just an image or short video prompt for physical AI development.

Multilingual and cross-lingual text question-answering retrieval with long context support and optimized data storage efficiency.

Fine-tuned reranking model for multilingual, cross-lingual text question-answering retrieval, with long context support.

State-of-the-art LLM that answers OpenUSD knowledge queries and generates USD-Python code.

Advanced LLM for reasoning, math, general knowledge, and function calling.

Converts streamed audio to facial blendshapes for real-time lip-syncing and facial performances.

Generative downscaling model for producing high-resolution, regional-scale weather fields.

FourCastNet predicts global atmospheric dynamics for a range of weather and climate variables.

Efficiently refine retrieval results over multiple sources and languages.

World-class multilingual and cross-lingual question-answering retrieval.

Advanced small language model supporting RAG, summarization, classification, code, and agentic AI.

Highly efficient Mixture of Experts model for RAG, summarization, entity extraction, and classification.

Shutterstock Generative 3D service for 360 HDRi generation. Trained on NVIDIA Edify using Shutterstock’s licensed creative libraries.

Llama-3.1-Nemotron-70B-Instruct is a large language model customized by NVIDIA to improve the helpfulness of LLM-generated responses.

State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation.

Leaderboard topping reward model supporting RLHF for better alignment with human preferences.

Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation.

Cutting-edge vision-language model excelling in high-quality reasoning from images.

Cutting-edge vision-language model excelling in high-quality reasoning from images.

Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation.

Generates consistent characters across a series of images without requiring additional training.

Ingest and extract highly accurate insights contained in text, graphs, charts, and tables within massive volumes of PDF documents.

Sovereign AI model fine-tuned on Traditional Mandarin and English data using the Llama-3 architecture.

This blueprint shows how generative AI and accelerated NIM microservices can design optimized small molecules smarter and faster.

Cutting-edge open multimodal model excelling in high-quality reasoning from images.

Advanced LLM based on a Mixture of Experts architecture, delivering compute-efficient content generation.

NV-DINOv2 is a visual foundation model that generates vector embeddings for the input image.

Grounding DINO is an open-vocabulary, zero-shot object detection model.

An enterprise-grade text-to-image model, trained on a compliant dataset, that produces high-quality images.

Natural, high-fidelity English voices for personalizing text-to-speech services and voiceovers.

Enable smooth global interactions in 32 languages.

Expressive and engaging English voices for Q&A assistants, brand ambassadors, and service robots.

Record-setting accuracy and performance for English transcription.

State-of-the-art accuracy and speed for English transcriptions.

ProteinMPNN is a deep learning model for predicting amino acid sequences for protein backbones.

Vision foundation model capable of performing diverse computer vision and vision language tasks.

Advanced small language generative AI model for edge applications.

AI-powered search for OpenUSD data, 3D models, images, and assets using text or image-based inputs.

Create facial animations using a portrait photo and synchronize mouth movement with audio.

Verify compatibility of OpenUSD assets with instant RTX render and rule-based validation.

Supports Chinese and English to handle tasks including chatbots, content generation, coding, and translation.

Supports Chinese and English chat, coding, math, instruction following, and quiz solving.

Advanced LLM for synthetic data generation, distillation, and inference for chatbots, coding, and domain-specific tasks.

Powers complex conversations with superior contextual understanding, reasoning and text generation.

Advanced state-of-the-art model with language understanding, superior reasoning, and text generation.

Most advanced language model for reasoning, code, multilingual tasks; runs on a single GPU.

English text embedding model for question-answering retrieval.

MAISI is a pre-trained volumetric (3D) CT Latent Diffusion Generative Model.

Powerful coding model offering advanced capabilities in code generation, completion, and infilling.

Cutting-edge text generation model for text understanding, transformation, and code generation.

Cutting-edge text generation model for text understanding, transformation, and code generation.

Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines.

Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines.

Creates diverse synthetic data that mimics the characteristics of real-world data.

Advanced text-to-image model for generating high-quality images.

OCDNet and OCRNet are pre-trained models designed for optical character detection and recognition respectively.

Generates high-quality numerical embeddings from text inputs.

Excels in NLP tasks, particularly in instruction-following, reasoning, and mathematics.

Embedding model for text retrieval tasks, excelling in dense, multi-vector, and sparse retrieval.

Visual ChangeNet detects pixel-level changes between two images and outputs a semantic change segmentation mask.

Software programming LLM for code generation, completion, explanation, and multi-turn conversation.

EfficientDet-based object detection network to detect 100 specific retail objects from an input video.

A generative model of protein backbones for protein binder design.

Cutting-edge lightweight open language model excelling in high-quality reasoning.

Long-context, cutting-edge lightweight open language model excelling in high-quality reasoning.

Cutting-edge open multimodal model excelling in high-quality reasoning from images.

Vision language model adept at comprehending text and visual inputs to produce informative responses.

Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills.

Optimized community model for text embedding.

An MoE LLM that follows instructions, completes requests, and generates creative text.

Powers complex conversations with superior contextual understanding, reasoning and text generation.

Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.

Cutting-edge model built on Google's Gemma-7B specialized for code generation and code completion.

GPU-accelerated generation of text embeddings used for question-answering retrieval.

GPU-accelerated model optimized for providing a probability score that a given passage contains the information to answer a question.

Generate images and stunning visuals with realistic aesthetics.

Groundbreaking multimodal model designed to understand and reason about visual elements in images.

Translate images of plots into tables with one-shot visual language understanding.

Multi-modal vision-language model that understands text and images and generates informative responses.

Multi-modal model for a wide range of tasks, including image understanding and language generation.

VISTA-3D is a specialized interactive foundation model for segmenting and annotating human anatomies.

Cutting-edge text generation model for text understanding, transformation, and code generation.

LLM capable of generating code from natural language and vice versa.

Generate BAM output given one or more pairs of FASTQ files, by running BWA-MEM & GATK best practices.

Run Google's DeepVariant optimized for GPU. Switch models for high accuracy on all major sequencers.

Stable Video Diffusion (SVD) is a generative diffusion model that leverages a single image as a conditioning frame to synthesize video sequences.

A fast generative text-to-image model that can synthesize photorealistic images from a text prompt in a single network evaluation.

MolMIM performs controlled generation, finding molecules with the right properties.

Predicts the 3D structure of a protein from its amino acid sequence.

Predicts the 3D structure of how a molecule interacts with a protein.

An MoE LLM that follows instructions, completes requests, and generates creative text.

World-record accuracy and performance for complex route optimization.