
Build advanced AI agents for providers and patients with this developer example powered by NVIDIA NeMo Microservices, NVIDIA Nemotron, NVIDIA Riva ASR and TTS, and NVIDIA LLM NIM.

Securely extract, embed, and index multimodal data with encryption in use for fast, accurate semantic search.

Elevate shopping experiences online and in stores.

Sensor-captured radio data enables real-time awareness and AI-driven analytics for actionable, searchable insights.

Use the multi-LLM compatible NIM container to deploy a broad range of LLMs from Hugging Face.
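
Once a NIM container is running, it exposes an OpenAI-compatible API. Below is a minimal sketch of querying such a deployment with the standard OpenAI Python client; the base URL, port, and model identifier are assumptions to be replaced with the values reported by your own NIM container.

```python
# Minimal sketch: query a locally deployed LLM NIM through its
# OpenAI-compatible endpoint. The base_url, port, and model name are
# placeholders; substitute the values from your NIM deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-used",                   # local deployments typically ignore the key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # replace with the model you deployed
    messages=[{"role": "user", "content": "Summarize what a NIM microservice is."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```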

Build a data flywheel with NVIDIA NeMo microservices that continuously optimizes AI agents for latency and cost while maintaining accuracy targets.

Automate and optimize the configuration of radio access network (RAN) parameters using agentic AI and a large language model (LLM)-driven framework.

Build a custom enterprise research assistant powered by state-of-the-art models that process and synthesize multimodal data, enabling reasoning, planning, and refinement to generate comprehensive reports.

This workflow shows how generative AI can produce DNA sequences that can be translated into proteins for bioengineering.

Power fast, accurate semantic search across multimodal enterprise data with NVIDIA's RAG Blueprint, built on NeMo Retriever and Nemotron models, to connect your agents to trusted, authoritative sources of knowledge.
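
As a flavor of the retrieval side, here is a minimal sketch of semantic search with a NeMo Retriever embedding model through the langchain-nvidia-ai-endpoints integration. The embedding model name, the sample documents, and the use of FAISS as the vector store are illustrative assumptions, not part of the blueprint itself.

```python
# Minimal sketch: embed a handful of documents with a NeMo Retriever embedding
# NIM and run a semantic similarity search. Model id and FAISS are assumptions.
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings
from langchain_community.vectorstores import FAISS

embeddings = NVIDIAEmbeddings(model="nvidia/nv-embedqa-e5-v5")  # assumed model id

documents = [
    "Q3 revenue grew 12% year over year.",
    "The support policy covers enterprise customers for 24 months.",
    "On-call rotations are documented in the SRE handbook.",
]

# Index the documents, then retrieve the passages most relevant to a query.
index = FAISS.from_texts(documents, embeddings)
for hit in index.similarity_search("How long does enterprise support last?", k=2):
    print(hit.page_content)
```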

This blueprint shows how generative AI and accelerated NIM microservices can design protein binders smarter and faster.

Automate voice AI agents with NVIDIA NIM microservices and Pipecat.

Generate detailed, structured reports on any topic using LangGraph and the Llama 3.3 70B NIM.
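
For orientation, here is a minimal two-node LangGraph sketch (plan, then write) backed by a Llama 3.3 70B NIM through ChatNVIDIA. The prompts, state fields, and model identifier are illustrative assumptions; the actual blueprint's graph is considerably more elaborate.

```python
# Minimal sketch: a plan-then-write LangGraph pipeline using a Llama 3.3 70B NIM.
from typing import TypedDict
from langgraph.graph import StateGraph, END
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(model="meta/llama-3.3-70b-instruct")  # assumed model id

class ReportState(TypedDict):
    topic: str
    outline: str
    report: str

def plan(state: ReportState) -> dict:
    # Draft a short section outline for the requested topic.
    outline = llm.invoke(f"Write a short section outline for a report on: {state['topic']}")
    return {"outline": outline.content}

def write(state: ReportState) -> dict:
    # Expand the outline into a structured report.
    report = llm.invoke(f"Write a structured report following this outline:\n{state['outline']}")
    return {"report": report.content}

graph = StateGraph(ReportState)
graph.add_node("plan", plan)
graph.add_node("write", write)
graph.set_entry_point("plan")
graph.add_edge("plan", "write")
graph.add_edge("write", END)

result = graph.compile().invoke({"topic": "Trends in agentic AI", "outline": "", "report": ""})
print(result["report"])
```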

Document your GitHub repositories with AI agents using CrewAI and the Llama 3.3 70B NIM.
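
The sketch below shows a single CrewAI agent drafting README-style documentation, backed by a Llama 3.3 70B NIM. The LiteLLM-style "nvidia_nim/..." model string, the prompts, and the repository description are assumptions; adapt them to your NIM deployment and repository contents.

```python
# Minimal sketch: one CrewAI documentation agent driven by a Llama 3.3 70B NIM.
from crewai import Agent, Task, Crew, LLM

llm = LLM(model="nvidia_nim/meta/llama-3.3-70b-instruct")  # assumed model string

doc_writer = Agent(
    role="Documentation writer",
    goal="Produce clear, accurate documentation for a code repository",
    backstory="A meticulous technical writer who documents codebases for new contributors.",
    llm=llm,
)

task = Task(
    description="Write a README section describing a repository that contains "
                "a Python CLI for batch image resizing.",
    expected_output="A concise README section with an overview and usage notes.",
    agent=doc_writer,
)

result = Crew(agents=[doc_writer], tasks=[task]).kickoff()
print(result)
```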

This blueprint shows how generative AI and accelerated NIM microservices can design optimized small molecules smarter and faster.