State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation.
Leaderboard-topping reward model supporting RLHF for better alignment with human preferences.
Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation.
Cutting-edge vision-language model excelling in high-quality reasoning from images.
Cutting-edge vision-language model excelling in high-quality reasoning from images.
Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation.
Unique language model that delivers an unmatched balance of accuracy and efficiency.
Chinese and English LLM targeting language, coding, mathematics, reasoning, and more.
Fine-tuned Llama 3.1 70B model for code generation, summarization, and multi-language tasks.
Multi-modal vision-language model that understands text/images and generates informative responses
Robust image classification model for detecting and managing AI-generated content.
Sovereign AI model fine-tuned on Traditional Mandarin and English data using the Llama-3 architecture.
Sovereign AI model trained on Japanese language that understands regional nuances.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation.
Lightweight multilingual LLM powering AI applications in latency-bound, memory/compute-constrained environments
NV-DINOv2 is a visual foundation model that generates vector embeddings for the input image.
Specialized language model designed for mathematical reasoning and scientific discovery.
Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.
Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.
Grounding DINO is an open-vocabulary, zero-shot object detection model.
An enterprise-grade text-to-image model trained on a compliant dataset to produce high-quality images.
ProteinMPNN is a deep learning model for predicting amino acid sequences for protein backbones.
Vision foundation model capable of performing diverse computer vision and vision language tasks.
Guardrail model to ensure that responses from LLMs are appropriate and safe
Advanced small language generative AI model for edge applications
AI-powered search for OpenUSD data, 3D models, images, and assets using text or image-based inputs.
Model for writing and interacting with code across a wide range of programming languages and tasks.
Advanced state-of-the-art model with language understanding, superior reasoning, and text generation.
Most advanced language model for reasoning, code, and multilingual tasks; runs on a single GPU.
GPU-accelerated model optimized for providing a probability score that a given passage contains the information to answer a question.
MAISI is a pre-trained volumetric (3D) CT Latent Diffusion Generative Model.
Powerful coding model offering advanced capabilities in code generation, completion, and infilling
Cutting-edge lightweight open language model excelling in high-quality reasoning.
Advanced programming model for code completion, summarization, and generation
Advanced programming model for code completion, summarization, and generation
Cutting-edge text generation model for text understanding, transformation, and code generation.
Cutting-edge text generation model for text understanding, transformation, and code generation.
Grades responses on five attributes: helpfulness, correctness, coherence, complexity, and verbosity.
Model for writing and interacting with code across a wide range of programming languages and tasks.
Powerful model trained on English and Chinese for diverse tasks including chatbot and creative writing.
NV-CLIP is a multimodal embeddings model for image and text.
Advanced text-to-image model for generating high-quality images
OCDNet and OCRNet are pre-trained models designed for optical character detection and recognition respectively.
Excels in NLP tasks, particularly in instruction-following, reasoning, and mathematics.
Embedding model for text retrieval tasks, excelling in dense, multi-vector, and sparse retrieval.
Advanced programming model for code generation, completion, reasoning, and instruction following.
Software programming LLM for code generation, completion, explanation, and multi-turn conversation.
Software programming LLM for code generation, completion, explanation, and multi-turn conversation.
A generative model of protein backbones for protein binder design.
Cutting-edge lightweight open language model excelling in high-quality reasoning.
Long-context, cutting-edge lightweight open language model excelling in high-quality reasoning.
Cutting-edge lightweight open language model excelling in high-quality reasoning.
Cutting-edge open multimodal model excelling in high-quality reasoning from images.
Vision-language model adept at comprehending text and visual inputs to produce informative responses
LLM to represent and serve the linguistic and cultural diversity of Southeast Asia
Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills.
A general-purpose LLM with state-of-the-art performance in language understanding, coding, and RAG.
Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills.
An MoE LLM that follows instructions, completes requests, and generates creative text.
Powers complex conversations with superior contextual understanding, reasoning, and text generation.
Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.
Excels in complex multilingual reasoning tasks, including text understanding and code generation.
Novel recurrent-architecture language model for faster inference when generating long sequences.
Cutting-edge model built on Google's Gemma-7B specialized for code generation and code completion.
Lightweight language model deployable on a laptop, desktop, or the cloud for summarization and reasoning.
GPU-accelerated model optimized for providing a probability score that a given passage contains the information to answer a question.
Groundbreaking multimodal model designed to understand and reason about visual elements in images.
One-shot visual language understanding model that translates images of plots into tables.
Multi-modal vision-language model that understands text/images and generates informative responses
Multi-modal model for a wide range of tasks, including image understanding and language generation.
VISTA-3D is a specialized interactive foundation model for segmenting and annotating human anatomies.
Cutting-edge text generation model for text understanding, transformation, and code generation.
LLM capable of generating code from natural language and vice versa.
Cutting-edge large language AI model capable of generating text and code in response to prompts.
Run Google's DeepVariant optimized for GPU. Switch models for high accuracy on all major sequencers.
Stable Video Diffusion (SVD) is a generative diffusion model that leverages a single image as a conditioning frame to synthesize video sequences.
A fast generative text-to-image model that can synthesize photorealistic images from a text prompt in a single network evaluation
AI-based weather prediction pipeline with global models and downscaling models.
An MoE LLM that follows instructions, completes requests, and generates creative text.