
Managing energy consumption in wireless networks is a complex challenge that involves continuously balancing capacity, coverage, and service quality against power usage. Operators must decide when to put capacity cells to sleep and when to wake them up based on real network conditions, including load and throughput. This blueprint demonstrates how operator intent written in natural language can be translated into deterministic, explainable actions that optimize energy usage without degrading user experience.
Built in partnership with VIAVI, the blueprint uses VIAVI’s TeraVM AI RAN Scenario Generator (AI RSG) platform to model the RAN scenario and to generate synthetic RAN KPI data — such as per‑cell utilization and QoS — that represent the current network state, then simulate how those KPIs change when cells are put to sleep or reactivated. This allows an intent‑driven planner agent to reason over realistic network conditions, propose safe energy‑saving actions, and have those actions validated end‑to‑end in the simulation environment before they are considered for live networks.
Use Case Description
The goal of this blueprint is to help developers, network engineers, and telco operators quickly build an AI-powered energy saving solution that:
- Interprets operator intent (e.g., “minimize energy while maintaining throughput above X Mbps”)
- Ingests and normalizes network KPI data (load, throughput/QoS) into a queryable schema
- Uses an LLM to generate SQL-based policy evaluations over those KPIs
- Produces sleep, wake, or no-action decisions for capacity cells
- Simulates the decision in VIAVI’s AI RSG platform
- Validates decisions against required QoS and reports the results
Unlike traditional energy management approaches, this solution uses one LLM for reasoning and another for validation, keeps the underlying decision logic transparent, and simulates every change before applying it to the network.
Key Features
Intent-Driven Planning: Operators express high-level energy-saving goals in plain language. The system translates these into actionable queries against KPI data.
KPI Normalization: Synthetic KPI data from VIAVI’s AI RSG platform is transformed into a structured table (time, site, cell, utilization, QoS) that can be easily queried and analyzed using SQL.
Custom Logic With LLM: An LLM generates SQL that implements energy-saving policy logic, while keeping the resulting decisions deterministic and auditable, with no black-box optimization.
Validation and Guardrails: QoS thresholds are provided as part of operator intent to ensure that energy-saving actions do not degrade service quality.
Closed-Loop Reporting: After decisions are validated and applied back into the simulator, KPI outcomes and action summaries are reported through the user interface.
Architecture Diagram
At a high level, the system includes:
- KPI Data Source: Synthetic or simulated KPI data at 15-minute granularity is provided by VIAVI’s AI RSG.
- Energy Saving Planner Agent (LLM): A trained LLM (llama-3.1-70b-instruct via NVIDIA NIM) generates SQL based on operator intent and KPIs.
- Local SQL Database: Normalized KPI data is stored and queried using SQLite via SQLAlchemy.
- Decision Evaluation Logic: SQL generated by the LLM encapsulates policy rules for energy-saving actions.
- Validation & Actuation: Decisions are validated against QoS thresholds by a validation agent built on a trained LLM (llama-3.1-70b-instruct via NVIDIA NIM) and applied back to the simulator, which recomputes KPIs to reflect the new configuration.
- UI / Reporting Layer: Displays actions taken, reports KPIs before and after actuation, and provides an interface for capturing operator intent.
How It Works
This blueprint converts KPI data into a structured table and uses an LLM to understand operator intent and generate SQL that evaluates energy saving policies. For each (time, site):
- Aggregate KPIs such as PRB utilization and throughput (QoS)
- Determine thresholds where the capacity cell can be safely put to sleep without violating QoS requirements.
- Generate an action recommendation: sleep, wake, or no action
- Apply and validate the action in VIAVI’s AI RSG simulator
- Report the results back to the user
QoS is treated as a safety constraint: if throughput falls below a defined threshold, sleep actions are prevented.
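The per-(time, site) evaluation above can be sketched with the standard-library SQLite driver. The table schema follows the Input Data section below, but the threshold values (RRU below 0.2, QoS floor of 50) and the exact shape of the SQL are illustrative assumptions; in the blueprint, the planner LLM generates the query from operator intent.

```python
import sqlite3

# Build a tiny in-memory KPI table matching the (time, site, cell, RRU, QoS)
# schema. Values are synthetic placeholders, not real simulator output.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE kpi (time TEXT, site TEXT, cell TEXT, RRU REAL, QoS REAL)"
)
conn.executemany(
    "INSERT INTO kpi VALUES (?, ?, ?, ?, ?)",
    [
        ("2024-01-01T00:00", "site_1", "capacity", 0.10, 80.0),
        ("2024-01-01T00:00", "site_1", "coverage", 0.35, 70.0),
        ("2024-01-01T00:15", "site_2", "capacity", 0.60, 45.0),
        ("2024-01-01T00:15", "site_2", "coverage", 0.55, 48.0),
    ],
)

# The kind of SQL the planner might emit: sleep a capacity cell only when it
# is lightly loaded AND QoS stays above the operator-provided floor; wake it
# when QoS has fallen below the floor (the safety constraint); else no action.
query = """
SELECT time, site,
       CASE
         WHEN RRU < 0.2 AND QoS >= 50 THEN 'sleep'
         WHEN QoS < 50 THEN 'wake'
         ELSE 'no action'
       END AS action
FROM kpi
WHERE cell = 'capacity'
ORDER BY time, site
"""
actions = list(conn.execute(query))
for row in actions:
    print(row)
```

Because the decision is plain SQL over a plain table, every recommendation can be traced back to the exact rows and thresholds that produced it.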
Input Data
The blueprint expects KPI data in a simple tabular format:
| Field | Description |
|---|---|
| time | Timestamp of KPI aggregation |
| site | Site identifier |
| cell | Cell role (e.g., coverage/capacity) |
| RRU | Downlink PRB utilization |
| QoS | Downlink throughput / QoS score |
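The normalization step can be sketched as follows. The blueprint uses SQLAlchemy; this sketch uses the standard-library `sqlite3` driver for brevity, and the raw CSV shape is a hypothetical simulator export, not the actual AI RSG format.

```python
import csv
import io
import sqlite3

# Hypothetical raw export from the simulator (field names assumed).
raw = """time,site,cell,RRU,QoS
2024-01-01T00:00,site_1,capacity,0.12,75.3
2024-01-01T00:00,site_1,coverage,0.41,68.9
"""

# Normalize into the queryable (time, site, cell, RRU, QoS) schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE kpi (time TEXT, site TEXT, cell TEXT, RRU REAL, QoS REAL)"
)
rows = [
    (r["time"], r["site"], r["cell"], float(r["RRU"]), float(r["QoS"]))
    for r in csv.DictReader(io.StringIO(raw))
]
conn.executemany("INSERT INTO kpi VALUES (?, ?, ?, ?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM kpi").fetchone()[0]
print(count)  # 2
```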
The Role of NVIDIA NIM:
This blueprint leverages NVIDIA NIM microservices to ensure that the LLM-based planner and validation agents operate efficiently and reliably when deployed in real-world or production-like environments. While the energy-saving logic itself is deterministic and transparent, the LLM plays a critical role in interpreting intent, generating SQL logic, and supporting agentic workflows: tasks that must scale to many concurrent users issuing diverse prompts. NIM microservices optimize LLM inference by significantly reducing time to first token (TTFT) and increasing tokens-per-second throughput, which is essential when multiple planners, validators, or user sessions are active simultaneously. By providing optimized model serving, GPU-aware scheduling, and standardized APIs, NIM microservices allow this blueprint to move beyond a single-user notebook experience toward scalable, multi-tenant deployments without rewriting application logic. This makes NIM microservices a foundational component for deploying intent-driven, agentic network automation solutions where responsiveness, consistency, and efficient GPU utilization are critical as the number of users, agents, and prompts grows.
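NIM exposes an OpenAI-compatible chat-completions API. The sketch below builds a planner request payload; the endpoint URL and model identifier follow NVIDIA's hosted-API conventions and are assumptions here, not values taken from this blueprint's code.

```python
import json

# Request payload for the planner LLM via NIM's OpenAI-compatible
# chat-completions API. Prompt wording is illustrative only.
payload = {
    "model": "meta/llama-3.1-70b-instruct",
    "messages": [
        {"role": "system",
         "content": "Translate operator intent into SQL over the kpi table."},
        {"role": "user",
         "content": "Minimize energy while keeping QoS above 50 Mbps."},
    ],
    "temperature": 0.0,  # deterministic output keeps decisions auditable
}

# The actual call (requires an NVIDIA API key; not executed in this sketch):
# import os, requests
# resp = requests.post(
#     "https://integrate.api.nvidia.com/v1/chat/completions",
#     headers={"Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}"},
#     json=payload,
# )
print(json.dumps(payload)[:40])
```

Setting `temperature` to 0 is one way to keep the generated SQL reproducible across runs, which matches the blueprint's emphasis on deterministic, auditable decisions.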
Running the Blueprint
Prerequisites
- Python 3.8+ environment
- Access to VIAVI’s AI RSG
- NVIDIA API key for LLM access (Obtain from https://build.nvidia.com/settings/api-keys)
- config.yaml specifying model names and endpoints
Getting Started
- Clone the GitHub repo linked from this page.
- Install dependencies:
- pip install -r requirements.txt
- Populate config.yaml with your API and model configurations.
- Run the Jupyter notebook end-to-end:
- KPI ingestion
- SQL database creation
- LLM initialization
- Policy evaluation
- Action reporting
- Review output for intended energy-saving decisions.
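As a rough guide to step 3 above, a config.yaml might look like the sketch below. Every key name, model identifier, and endpoint here is a hypothetical placeholder; the repository's sample config defines the actual fields.

```yaml
# Hypothetical config.yaml layout (key names are assumptions).
nvidia_api_key: "YOUR_NVIDIA_API_KEY"        # from build.nvidia.com
planner_model: "meta/llama-3.1-70b-instruct"
validator_model: "meta/llama-3.1-70b-instruct"
llm_endpoint: "https://integrate.api.nvidia.com/v1"
rsg_endpoint: "https://<ai-rsg-host>/api"    # VIAVI AI RSG server
```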
Target Users
- RAN and SON engineers seeking to prototype energy optimization
- Telecom developers exploring intent-based AI workflows
- System integrators evaluating LLM-augmented control loops
- Researchers studying AI applications in network operations
Success Criteria
This blueprint demonstrates:
- Clear mapping from operator intent to actionable energy decisions
- KPI-driven policy evaluation that enforces QoS guardrails
- A reproducible workflow that runs locally and can be adapted to simulator APIs
- Explainable logic that remains auditable and deterministic
License and Deployment
- Code is available under an open-source license on the linked GitHub repository.
- VIAVI AI RSG is hosted on a static server; while the user runs the notebook, API calls are made to that server to execute simulations.
Future Extensions
- Multi-iteration closed-loop control
- UI for conversational intent refinement
- Learning-based actuation agents
- Longitudinal analytics and trend-based decision support
Collaborating with VIAVI to Expand Agentic Use Cases
Organizations looking to extend this blueprint or develop their own agentic network automation solutions can collaborate with VIAVI by leveraging VIAVI’s AI RSG simulation platform together with the VIAVI Automation Development Kit (ADK) as an integration layer between AI-driven applications and realistic RAN environments. AI RSG enables the generation of high-fidelity, scenario-based network behavior and KPIs, while ADK provides programmatic access to performance data, configuration parameters, and simulation control. Together, they allow partners to build closed-loop, agentic workflows in which LLM-based planners, validators, and learning agents can reason over network state, apply actions, and immediately observe their impact in simulation. By integrating their own AI logic, orchestration frameworks, or domain-specific policies on top of VIAVI’s AI RSG and ADK, companies can rapidly prototype, validate, and scale new intent-driven use cases such as energy optimization, configuration planning, fault mitigation, and policy exploration. This reduces risk and accelerates innovation before moving toward production environments.
Minimum System Requirements
Hardware Requirements
- CPU: 12+ cores @ 3.8 GHz; AVX-512 support is required
- RAM: 32 GB
- Tested on AMD Ryzen 9 9950X 16-Core Processor
- GPU: 2× NVIDIA H100 (only required if running the LLM locally)
OS Requirements
- Modern Linux (e.g., Ubuntu 22.04)
Software Dependencies
- Python 3.8+ environment
- Access to VIAVI's AI RSG
Software Used in This Blueprint
NVIDIA Technology
NVIDIA NIM microservices
VIAVI AI RSG – RAN Digital Twin
For the blueprint related questions send e-mail to: IB_ES_blueprint@viavisolutions.com
Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloading or using models in accordance with our terms of service, developers should work with their supporting model team to ensure the models meet requirements for the relevant industry and use case and address unforeseen product misuse. For more detailed information on ethical considerations for the models, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards. Please report security vulnerabilities or NVIDIA AI concerns here.
License
Use of the models in this AI Blueprint is governed by the NVIDIA AI Foundation Models Community License.
This blueprint is powered by a demo license with limited capabilities for VIAVI’s AI RSG platform.
For full capabilities, please go to VIAVI AI RSG.
Terms of Use
The software and materials are governed by the NVIDIA Software License Agreement and the Product-Specific Terms for NVIDIA AI Products, except that models are governed by the AI Foundation Models Community License Agreement and the NVIDIA RAG dataset is governed by the NVIDIA Asset License Agreement.
Additional Information: the meta/llama-3.1-70b-instruct model is governed by the Llama 3.1 Community License Agreement, and the nvidia/llama-3.2-nv-embedqa-1b-v2 model by the Llama 3.2 Community License Agreement. Built with Llama.
