
Managing energy consumption in wireless networks is a complex challenge that involves continuously balancing capacity, coverage, and service quality against power usage. Operators must decide when to put capacity cells to sleep and when to wake them up based on real network conditions, including load and throughput. This blueprint demonstrates how operator intent written in natural language can be translated into deterministic, explainable actions that optimize energy usage without degrading user experience.
Built in partnership with VIAVI, the blueprint uses VIAVI’s TeraVM AI RAN Scenario Generator (AI RSG) platform to model the RAN scenario and to generate synthetic RAN KPI data — such as per‑cell utilization and QoS — that represents the current network state, then simulates how those KPIs change when cells are put to sleep or reactivated. This allows an intent‑driven planner agent to reason over realistic network conditions, propose safe energy‑saving actions, and have those actions validated end‑to‑end in the simulation environment before they are considered for live networks.
The goal of this blueprint is to help developers, network engineers, and telco operators quickly build an AI-powered energy saving solution. Unlike traditional energy management approaches, this solution uses one LLM for reasoning and another for validation, keeps the underlying decision logic transparent, and verifies changes in simulation before they are applied to the network.
- **Intent-Driven Planning:** Operators express high-level energy saving goals in plain language. The system translates these into actionable queries against KPI data.
- **KPI Normalization:** Synthetic KPI data from VIAVI’s AI RSG platform is transformed into a structured table (time, site, cell, utilization, QoS) that can be easily queried and analyzed using SQL.
- **Custom Logic with an LLM:** An LLM generates SQL that implements the energy-saving policy logic, keeping the resulting decisions deterministic and auditable; no black-box optimization.
- **Validation and Guardrails:** QoS thresholds are provided as part of operator intent to ensure that energy-saving actions do not degrade service quality.
- **Closed-Loop Reporting:** After decisions are validated and applied back to the simulator, KPI outcomes and action summaries are reported to the user via the user interface.
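As a sketch of the KPI normalization step, the mapping below converts one raw record into the blueprint's (time, site, cell, RRU, QoS) row. The raw field names are hypothetical; a real AI RSG export may use different names.

```python
from datetime import datetime

# Hypothetical raw KPI record as it might arrive from an AI RSG export;
# real export field names may differ.
raw = {"timestamp": "2024-01-01T00:15:00", "site_id": "S1",
       "cell_role": "capacity", "dl_prb_util": 0.12,
       "dl_throughput_mbps": 45.0}

def normalize(record):
    """Map a raw record into the blueprint's (time, site, cell, RRU, QoS) schema."""
    return {
        "time": datetime.fromisoformat(record["timestamp"]),
        "site": record["site_id"],
        "cell": record["cell_role"],           # coverage or capacity
        "RRU": record["dl_prb_util"],          # downlink PRB utilization
        "QoS": record["dl_throughput_mbps"],   # downlink throughput / QoS score
    }

row = normalize(raw)
print(row)
```

Keeping normalization as a plain, explicit mapping like this preserves the auditability the blueprint aims for: every downstream SQL decision can be traced back to a named raw field.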
At a high level, the system includes:
- **KPI Data Source:** Synthetic or simulated KPI data at a 15-minute granularity is provided by VIAVI’s AI RSG.
- **Energy Saving Planner Agent (LLM):** An LLM (llama-3.1-70b-instruct via NVIDIA NIM) generates SQL based on operator intent and KPIs.
- **Local SQL Database:** Normalized KPI data is stored and queried using SQLite via SQLAlchemy.
- **Decision Evaluation Logic:** SQL generated by the LLM encapsulates the policy rules for energy saving actions.
- **Validation & Actuation:** Decisions are validated against QoS thresholds by a validation agent built on the same LLM (llama-3.1-70b-instruct via NVIDIA NIM) and applied back to the simulator, which recomputes KPIs to reflect the new configuration.
- **UI / Reporting Layer:** Displays actions taken, reports KPIs before and after actuation, and provides an interface to intake operator intent.
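A minimal sketch of the local SQL database, assuming an in-memory SQLite store and SQLAlchemy 2.0-style usage. The table and column names follow the blueprint's (time, site, cell, RRU, QoS) schema; the sample rows are illustrative.

```python
from sqlalchemy import create_engine, text

# In-memory SQLite store for the normalized KPI table; a file-backed URL such
# as "sqlite:///kpi.db" could be used instead.
engine = create_engine("sqlite://")

with engine.begin() as conn:  # begin() commits automatically on success
    conn.execute(text(
        "CREATE TABLE kpi (time TEXT, site TEXT, cell TEXT, RRU REAL, QoS REAL)"))
    conn.execute(
        text("INSERT INTO kpi VALUES (:time, :site, :cell, :RRU, :QoS)"),
        [
            {"time": "2024-01-01T00:00:00", "site": "S1", "cell": "capacity",
             "RRU": 0.10, "QoS": 62.0},
            {"time": "2024-01-01T00:00:00", "site": "S1", "cell": "coverage",
             "RRU": 0.48, "QoS": 75.0},
        ],
    )

with engine.connect() as conn:
    rows = conn.execute(text("SELECT site, cell, RRU FROM kpi ORDER BY cell")).all()
print(rows)
```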
This blueprint converts KPI data into a structured table and uses an LLM to understand operator intent and generate SQL that evaluates energy saving policies. For each (time, site) combination, the generated SQL decides whether a capacity cell can be put to sleep or should be woken up.
QoS is treated as a safety constraint: if throughput falls below a defined threshold, sleep actions are prevented.
The blueprint expects KPI data in a simple tabular format:
| Field | Description |
|---|---|
| time | Timestamp of KPI aggregation |
| site | Site identifier |
| cell | Cell role (e.g., coverage/capacity) |
| RRU | Downlink PRB utilization |
| QoS | Downlink throughput / QoS score |
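A minimal export in this format might look like the following (all values illustrative):

```csv
time,site,cell,RRU,QoS
2024-01-01T00:00:00,S1,capacity,0.12,45.0
2024-01-01T00:00:00,S1,coverage,0.55,80.0
2024-01-01T00:15:00,S1,capacity,0.09,47.5
```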
This blueprint leverages NVIDIA NIM microservices so that the LLM-based planner and validation components can operate efficiently and reliably in real-world or production-like environments. While the energy saving decision logic itself is deterministic and transparent, the LLM plays a critical role in interpreting intent, generating SQL logic, and supporting agentic workflows, tasks that must scale to many concurrent users issuing diverse prompts. NIM microservices optimize LLM inference by significantly reducing time to first token (TTFT) and increasing tokens-per-second throughput, which is essential when multiple planners, validators, or user sessions are active simultaneously. By providing optimized model serving, GPU-aware scheduling, and standardized APIs, NIM microservices allow this blueprint to move beyond a single-user notebook experience toward scalable, multi-tenant deployments without rewriting application logic. This makes NIM microservices a foundational component for deploying intent-driven, agentic network automation solutions where responsiveness, consistency, and efficient GPU utilization remain critical as the number of users, agents, and prompts grows.
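As a sketch, the planner could reach the model through a NIM OpenAI-compatible chat-completions endpoint. The URL below targets NVIDIA's hosted API catalog; a self-hosted NIM exposes the same API at its own address. The prompts are illustrative, and the request is only sent if an API key is configured.

```python
import json
import os
import urllib.request

# NIM serves an OpenAI-compatible chat-completions API; this URL is the
# hosted API catalog endpoint (a self-hosted NIM would use its own address).
URL = "https://integrate.api.nvidia.com/v1/chat/completions"

payload = {
    "model": "meta/llama-3.1-70b-instruct",
    "messages": [
        {"role": "system",
         "content": "Translate operator intent into SQL over the kpi table "
                    "(time, site, cell, RRU, QoS). Return SQL only."},
        {"role": "user",
         "content": "Sleep capacity cells under 15% utilization, but never "
                    "let coverage QoS drop below 30."},
    ],
    "temperature": 0.0,  # deterministic output for auditable decisions
}

api_key = os.environ.get("NVIDIA_API_KEY")
if api_key:  # only call out when a key is configured
    req = urllib.request.Request(
        URL, data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```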
Prerequisites
Getting Started
This blueprint demonstrates:
Organizations looking to extend this blueprint or develop their own agentic network automation solutions can collaborate with VIAVI by leveraging VIAVI’s AI RSG simulation platform together with the VIAVI Automation Development Kit (ADK) as an integration layer between AI-driven applications and realistic RAN environments. AI RSG enables the generation of high-fidelity, scenario-based network behavior and KPIs, while ADK provides programmatic access to performance data, configuration parameters, and simulation control. Together, they allow partners to build closed-loop, agentic workflows in which LLM-based planners, validators, and learning agents can reason over network state, apply actions, and immediately observe their impact in simulation. By integrating their own AI logic, orchestration frameworks, or domain-specific policies on top of VIAVI’s AI RSG and ADK, companies can rapidly prototype, validate, and scale new intent-driven use cases such as energy optimization, configuration planning, fault mitigation, and policy exploration. This reduces risk and accelerates innovation before moving toward production environments.
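A closed-loop step of this kind can be sketched with a stub in place of the real simulator interface. The `SimulatorClient` class and its method names below are hypothetical stand-ins, not the actual ADK API.

```python
# SimulatorClient is a stand-in for whatever interface the VIAVI ADK actually
# exposes; its method names and KPI model here are hypothetical.
class SimulatorClient:
    def __init__(self):
        self.kpis = {"S1": {"RRU": 0.08, "QoS": 85.0}}
        self.sleeping = set()

    def apply_action(self, site, action):
        if action == "SLEEP":
            self.sleeping.add(site)
            # Crude stand-in for the simulator recomputing KPIs: traffic
            # shifts to the coverage layer, raising utilization slightly.
            self.kpis[site]["RRU"] += 0.05

    def read_kpis(self, site):
        return self.kpis[site]

def closed_loop_step(sim, site, min_qos=30.0):
    """Plan -> act -> observe -> validate for a single site."""
    before = dict(sim.read_kpis(site))
    sim.apply_action(site, "SLEEP")
    after = sim.read_kpis(site)
    ok = after["QoS"] >= min_qos  # guardrail check on the recomputed state
    return {"site": site, "before": before, "after": after, "qos_ok": ok}

report = closed_loop_step(SimulatorClient(), "S1")
print(report)
```

The before/after KPI snapshots in the returned report are exactly what the blueprint's UI / reporting layer would surface to the operator.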
NVIDIA Technology
NVIDIA NIM microservices:
VIAVI AI RSG – RAN Digital Twin:
For blueprint-related questions, send e-mail to: IB_ES_blueprint@viavisolutions.com
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloading or using models in accordance with our terms of service, developers should work with their supporting model team to ensure the models meet requirements for the relevant industry and use case and address unforeseen product misuse. For more detailed information on ethical considerations for the models, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards. Please report security vulnerabilities or NVIDIA AI concerns here.
Use of the models in this AI Blueprint is governed by the NVIDIA AI Foundation Models Community License.
This blueprint is powered by a demo license with limited capabilities for VIAVI’s AI RSG platform.
For full capabilities, please visit VIAVI AI RSG.
The software and materials are governed by the NVIDIA Software License Agreement and the Product-Specific Terms for NVIDIA AI Products, except that models are governed by the AI Foundation Models Community License Agreement and the NVIDIA RAG dataset is governed by the NVIDIA Asset License Agreement.
Additional Information: the Meta/llama-3.1-70b-instruct model is governed by the Llama 3.1 Community License Agreement, and the nvidia/llama-3.2-nv-embedqa-1b-v2 model by the Llama 3.2 Community License Agreement. Built with Llama.