
Vision-language model that excels at understanding the physical world using structured reasoning on videos or images.

Translation model covering 12 languages, with support for few-shot example prompts.
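As a hedged illustration of the few-shot prompting the entry above refers to, the sketch below builds a prompt that places worked translation pairs before the new input; the language pair, example sentences, and `build_prompt` helper are illustrative assumptions, not part of any specific model's API.

```python
# A minimal sketch of few-shot prompting for translation, assuming a plain
# text-in/text-out interface; the example pairs and helper are hypothetical.
FEW_SHOT_EXAMPLES = [
    ("Good morning.", "Buenos días."),
    ("Thank you very much.", "Muchas gracias."),
]

def build_prompt(text: str, target_language: str = "Spanish") -> str:
    """Place worked translation examples before the new input sentence."""
    lines = [f"Translate the following English sentences into {target_language}."]
    for source, target in FEW_SHOT_EXAMPLES:
        lines.append(f"English: {source}\n{target_language}: {target}")
    lines.append(f"English: {text}\n{target_language}:")
    return "\n\n".join(lines)

print(build_prompt("Where is the train station?"))
```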

A state-of-the-art general-purpose MoE VLM ideal for chat, agentic, and instruction-based use cases.

A general-purpose VLM ideal for chat and instruction-based use cases.

Cutting-edge vision-language model excelling in retrieving text and metadata from images.

Nemotron Nano 12B v2 VL enables multi-image and video understanding, along with visual Q&A and summarization capabilities.

Multilingual, cross-lingual embedding model for long-document QA retrieval, supporting 26 languages.
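For context on the QA retrieval use case above, here is a minimal sketch of how cross-lingual embeddings are typically consumed: passages are ranked by cosine similarity to the query embedding. It assumes only that the model returns fixed-size vectors; no specific model API is called, and the helper names are hypothetical.

```python
# A minimal sketch of embedding-based retrieval: rank passages by cosine
# similarity to the query embedding. Vector contents are assumed to come from
# an embedding model; no model API is called here.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec: np.ndarray,
             passage_vecs: list[np.ndarray],
             top_k: int = 3) -> list[tuple[int, float]]:
    """Return the indices and scores of the top_k most similar passages."""
    scores = [cosine_similarity(query_vec, p) for p in passage_vecs]
    order = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return [(i, scores[i]) for i in order[:top_k]]
```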

Japanese-specialized large language model that helps enterprises read and understand complex business documents.

Qwen3-Next Instruct blends hybrid attention, sparse MoE, and stability boosts for ultra-long context AI.

State-of-the-art model for Polish language processing tasks such as text generation, Q&A, and chatbots.

80B-parameter AI model with hybrid reasoning, an MoE architecture, and support for 119 languages.

ByteDance open-source LLM with long-context, reasoning, and agentic intelligence.

Stable Diffusion 3.5 is a popular text-to-image generation model.

FLUX.1 Kontext is a multimodal model that enables in-context image generation and editing.

Reasoning vision language model (VLM) for physical AI and robotics.

Multilingual 7B LLM, instruction-tuned on all 24 EU languages for stable, culturally aligned output.

Multilingual, cross-lingual embedding model for long-document QA retrieval, supporting 26 languages.

ProteinMPNN is a deep learning model for predicting amino acid sequences for protein backbones.


Lightweight reasoning model for applications in latency-bound, memory- and compute-constrained environments.


An MoE LLM that follows instructions, completes requests, and generates creative text.

An MoE LLM that follows instructions, completes requests, and generates creative text.

A general-purpose multimodal, multilingual 128-expert MoE model with 17B parameters.

A multimodal, multilingual 16-expert MoE model with 17B parameters.

Supports Chinese and English for tasks including chat, content generation, coding, and translation.

An edge-computing AI model that accepts text, audio, and image input, ideal for resource-constrained environments.

An edge-computing AI model that accepts text, audio, and image input, ideal for resource-constrained environments.

Powerful, multimodal language model designed for enterprise applications, including software development, data analysis, and reasoning.

Small language model fine-tuned for improved reasoning, coding, and instruction following.

Advanced state-of-the-art model with language understanding, superior reasoning, and text generation.

Advanced LLM for code generation, reasoning, and fixing across popular programming languages.

Multimodal vision-language model that understands text and images and generates informative responses.

Latency-optimized language model excelling in code, math, general knowledge, and instruction-following.

Natural and expressive voices in multiple languages. For voice agents and brand ambassadors.

Enable smooth global interactions in 36 languages.

SOTA LLM pre-trained for instruction following and proficiency in the Indonesian language and its dialects.

State-of-the-art, multilingual model tailored to all 24 official European Union languages.

FLUX.1 is a state-of-the-art suite of image generation models.

FLUX.1-schnell is a distilled image generation model, producing high-quality images at high speed.

Powers complex conversations with superior contextual understanding, reasoning and text generation.

Built for agentic workflows, this model excels in coding, instruction following, and function calling.

This LLM follows instructions, completes requests, and generates creative text.

Cutting-edge vision-language model excelling in retrieving text and metadata from images.

Cutting-edge vision-language model excelling in high-quality reasoning from images.

Cutting-edge vision-language model excelling in high-quality reasoning from images.

A generative model of protein backbones for protein binder design.

Cutting-edge open multimodal model excelling in high-quality reasoning from images.

Cutting-edge open multimodal model excelling in high-quality reasoning from image and audio inputs.

Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills.

Lightweight multilingual LLM powering AI applications in latency-bound, memory- and compute-constrained environments.

Instruction-tuned LLM achieving SOTA performance on reasoning, math, and general-knowledge capabilities.

Cutting-edge text generation model for text understanding, transformation, and code generation.

LLM for improved language comprehension and chatbot-oriented capabilities in Traditional Chinese.

Cutting-edge text generation model for text understanding, transformation, and code generation.

Multilingual LLM with an emphasis on European languages, supporting regulated use cases including financial services, government, and heavy industry.

A lightweight, advanced multilingual SLM for edge computing and resource-constrained applications.

Advanced generative small language model for edge applications.

Supports Chinese and English chat, coding, math, instruction following, and quiz solving.

Fine-tuned Llama 3.1 70B model for code generation, summarization, and multi-language tasks.

Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.

Long-context, cutting-edge, lightweight open language model excelling in high-quality reasoning.

Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.

Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills.

Chinese and English LLM targeting language, coding, mathematics, reasoning, and related tasks.

Cutting-edge lightweight open language model excelling in high-quality reasoning.

Chinese and English LLM targeting language, coding, mathematics, reasoning, and related tasks.

Cutting-edge lightweight open language model excelling in high-quality reasoning.

Powerful mid-size code model with a 32K context length, excelling in coding in multiple languages.

Cutting-edge lightweight open language model excelling in high-quality reasoning.

Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation.

A bilingual Hindi-English SLM for on-device inference, tailored specifically for the Hindi language.

This LLM follows instructions, completes requests, and generates creative text.

Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.

Model for writing and interacting with code across a wide range of programming languages and tasks.

Powers complex conversations with superior contextual understanding, reasoning and text generation.

Sovereign AI model fine-tuned on Traditional Mandarin and English data using the Llama-3 architecture.

Cutting-edge MoE-based LLM designed to excel in a wide array of generative AI tasks.

Sovereign AI model trained on the Japanese language that understands regional nuances.

Sovereign AI model trained on the Japanese language that understands regional nuances.

Sovereign AI model trained on the Japanese language that understands regional nuances.

Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation.

Efficient multimodal model excelling at multilingual tasks, image understanding, and fast responses.

High accuracy and optimized performance for transcription in 25 languages.

Enable smooth global interactions in 36 languages.

Excels in NLP tasks, particularly in instruction-following, reasoning, and mathematics.


Advanced LLM for synthetic data generation, distillation, and inference for chatbots, coding, and domain-specific tasks.

Multimodal vision-language model that understands text, images, and video and generates informative responses.

Cutting-edge open multimodal model excelling in high-quality reasoning from images.

Lightweight multilingual LLM powering AI applications in latency-bound, memory- and compute-constrained environments.

LLM built to represent and serve the linguistic and cultural diversity of Southeast Asia.

Advanced programming model for code completion, summarization, and generation.

GPU-accelerated model optimized for providing a probability score that a given passage contains the information to answer a question.
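A hedged sketch of how such a passage-relevance score is typically consumed when reranking retrieval candidates: sort passages by score and keep the best few. It assumes only that the model returns one score per (question, passage) pair; `score_pairs` is a hypothetical stand-in for the actual model call, not this model's API.

```python
# A minimal reranking sketch: sort candidate passages by the model's relevance
# score, highest first. score_pairs is a hypothetical callable standing in for
# the scoring model; no specific API is assumed.
from typing import Callable

def rerank(question: str,
           passages: list[str],
           score_pairs: Callable[[str, list[str]], list[float]],
           top_k: int = 5) -> list[tuple[str, float]]:
    """Keep the top_k passages most likely to contain the answer."""
    scores = score_pairs(question, passages)
    ranked = sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]
```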

Optimized SLM for on-device inference, fine-tuned for roleplay, RAG, and function calling.

State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation.