Virtualized infrastructures are becoming increasingly critical in CSPs' deployments as mobile networks grow from core to edge and 5G implementations begin. In contrast to generic IT workloads, NFV comes with stringent KPIs, such as deterministic performance, high throughput, and low latency, which often force the underlying infrastructure to be used in inflexible and inefficient ways. For instance, dataplane VNFs keep the servers where they run constantly in a high power state, as if they were always operating at peak demand. Similarly, operators typically deploy critical functions in isolation, reserving upfront a large portion of server resources to prevent contention from other services and, therefore, Service Level Objective (SLO) violations. As a result, a large portion of the infrastructure remains underutilized.

Intracom Telecom's NFV Resource Intelligence addresses the above challenges by employing AI to realize autonomous service assurance. It decides the ideal distribution and configuration of resources fully automatically, in closed-loop fashion, and dynamically, under any traffic or colocation condition. In this way, SLOs are always maintained and resources are used cost-efficiently – solving a highly complex challenge that goes beyond human expertise. Intracom Telecom's NFV Resource Intelligence guarantees optimal execution of virtualized Network Services, and optimal utilization of the infrastructure where they run.

Energy Consumption icon

Reduces energy consumption of DPDK-based packet-processing VNFs by throttling their power according to their actual traffic load

Slicing icon

Protects high-priority services from "noisy neighbors" by carefully slicing and isolating critical hardware resources leveraging advanced hardware technologies

Configuration icon

Automatically discovers optimal resource configurations that deliver certain performance levels for one or more VNFs, specified by the user

Infrastructure Utilization icon

Increases infrastructure utilization by enabling denser VNF placement (colocation) without introducing contention and SLO violations

Autonomous service icon

Realizes autonomous service assurance leveraging AI and closed-loop control.

Observability icon

Delivers maximum observability both for the VNFs and the platform, by exposing rich telemetry to the user via customized dashboards

Multiple logos of KVM platforms

Supports multiple types of VNFs: KVM Virtual Machines, native Linux applications, Docker containers, Kubernetes pods

Energy optimization for user-plane network functions

User-plane functions like 5G's UPF have stringent KPIs, such as low latency, zero packet loss, and high throughput. To meet them, network functions usually employ frameworks like DPDK, which rely extensively on polling to ensure carrier-grade packet processing performance. Unfortunately, polling forces the server platforms that host the functions to always run in a high power state, as if they were operating at peak demand. Even during periods of zero or very light traffic, the servers consume the maximum possible power.

NFV-RI™ provides AI-driven closed-loop mechanisms to dynamically manage the power of user-plane network functions in line with their load, while guaranteeing zero packet drops. In this way, their server platforms are operated at significantly less power during off-peak periods, contributing to overall energy saving.
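The core idea of the closed loop above can be illustrated with a minimal sketch: pick the lowest CPU frequency step whose packet-processing capacity still exceeds the observed traffic load by a safety headroom, so throughput is never compromised. All names, frequency steps, and the capacity model below are illustrative assumptions, not NFV-RI™'s actual algorithm.

```python
# Hypothetical sketch of traffic-aware power throttling. The frequency
# steps, headroom factor, and capacity model are made-up stand-ins for
# the AI-driven controller described in the text.

FREQ_STEPS_MHZ = [1200, 1600, 2000, 2400, 2800]  # assumed available P-states
HEADROOM = 1.25  # keep 25% capacity margin to absorb bursts (zero drops)

def capacity_mpps(freq_mhz: float) -> float:
    """Assume processing capacity scales roughly with CPU frequency."""
    return freq_mhz / 100.0  # e.g. 2800 MHz -> 28 Mpps (illustrative)

def select_freq(load_mpps: float) -> int:
    """Lowest frequency step whose capacity covers load plus headroom."""
    for f in FREQ_STEPS_MHZ:
        if capacity_mpps(f) >= load_mpps * HEADROOM:
            return f
    return FREQ_STEPS_MHZ[-1]  # saturate at the maximum frequency

# A 24-hour-like trace (Mpps): light overnight traffic, daytime peak.
trace = [2.0, 3.5, 8.0, 18.0, 22.0, 9.0, 4.0]
choices = [select_freq(load) for load in trace]
```

During off-peak hours the controller settles on the lowest frequency step, which is where the daily power savings reported below come from; near the peak it saturates at maximum frequency so no capacity is sacrificed.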

In an initial PoC with a Greek Tier-1 CSP, NFV-RI™ achieved a 14% reduction in total server power consumed for a vEPC node prototype over a 24-hour period. In similar PoCs that followed using 5G UPFs, NFV-RI™ achieved significant average daily power savings ranging from 17% to 35%.

Improved energy efficiency through traffic-aware power throttling Diagram

Intent-based 5G Core slicing

To deploy multiple network slices with differentiated performance characteristics, CSPs need to carefully allocate resources to the key network functions belonging to each slice. However, this is a highly challenging task, as it is not obvious how to translate specific performance-level intents, possibly including multiple metrics, to resource decisions on the server platform: which resources to consider, at which amounts to allocate them to every function, how to adjust them dynamically to respond to varying load conditions, and, ultimately, how to use them wisely in order to minimize the overall energy footprint.

NFV-RI™ offers a variety of AI-based workflows that make it easy to deliver customized performance for 5G Core slices, in a fully automated fashion, and using the least amount of resources. The user simply declares the performance-level intents of each slice (e.g. latency, packet drops, throughput), and NFV-RI™ automatically decides the resource allocations that deliver them, whether it is a simple case requiring static, one-off allocations, or a much more dynamic case requiring allocations that must be continuously adapted to the current traffic conditions.
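The translation of a performance intent into a resource decision can be sketched as a search for the smallest allocation whose predicted performance meets the declared intent. The latency model below is a toy queueing-style stand-in for the measurement-driven models the text describes; every name and constant is a hypothetical.

```python
# Hedged sketch of intent-to-resource translation: find the smallest CPU
# core allocation whose predicted latency satisfies the slice's intent.
# The per-core capacity and latency curve are illustrative assumptions.

def predicted_latency_us(cores: int, load_mpps: float) -> float:
    """Toy model: latency blows up as utilization approaches saturation."""
    capacity = cores * 5.0                  # assume 5 Mpps per core
    utilization = min(load_mpps / capacity, 0.99)
    return 10.0 / (1.0 - utilization)       # queueing-style growth

def cores_for_intent(intent_us: float, load_mpps: float, max_cores: int = 16) -> int:
    """Smallest allocation meeting the latency intent, else max_cores."""
    for cores in range(1, max_cores + 1):
        if predicted_latency_us(cores, load_mpps) <= intent_us:
            return cores
    return max_cores

# Example: a 100 us latency intent for a slice at 12 Mpps offered load.
alloc = cores_for_intent(intent_us=100.0, load_mpps=12.0)
```

Minimizing the allocation per slice is what leaves resources (and power) free for other slices; repeating the search as load varies gives the dynamic, continuously adapted case mentioned above.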

In a series of scenarios demonstrated in the context of an ETSI ENI PoC, NFV-RI™ managed to deliver the intended latency and packet drop objectives for two colocated 5G UPFs that were servicing subscriber groups of different priorities. This was achieved in a fully automated way, and with a resource efficiency that translated to energy savings between 16% and 43%, depending on the scenario.

Intent-based 5G Core slicing Diagram

Edge colocation without performance compromises

Edge environments are expected to consolidate a disparate mixture of mobile core functions, RAN components, over-the-top services and new types of applications requiring proximity to end users. This increase in software diversity and density will introduce uncertainty on how workloads on a single server will interfere with each other, and hence, on how shared compute resources should be optimally distributed among workloads to meet SLOs. Isolating critical functions by reserving a large portion of server resources upfront is not an efficient option, particularly in the resource-limited edge cloud.

NFV-RI™ increases the density of workloads on an edge server in a way that does not compromise their performance SLOs. It employs advanced mechanisms to slice shared server resources and assign each workload a private share. This eliminates contention and allows much denser workload placement. Using AI, it automatically decides the ideal amount of resources for every workload, so each can enjoy performance levels close or identical to those of its standalone execution.
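As one concrete example of slicing a shared hardware resource, last-level cache can be partitioned into private, non-overlapping shares per workload, in the spirit of Intel's Cache Allocation Technology (resctrl-style way bitmasks). The sketch below only shows the mask arithmetic; the workload names, way counts, and the choice of CAT itself are illustrative assumptions, not a statement of NFV-RI™'s internals.

```python
# Illustrative sketch of hardware slicing: pack each colocated workload's
# cache-way allocation into a contiguous, non-overlapping bitmask, the
# format used by resctrl schemata. All numbers here are hypothetical.

TOTAL_WAYS = 12  # assumed number of LLC ways on the server

def way_masks(shares: dict) -> dict:
    """Map {workload: way_count} to non-overlapping hex way bitmasks."""
    assert sum(shares.values()) <= TOTAL_WAYS, "cache over-committed"
    masks, offset = {}, 0
    for name, ways in shares.items():
        mask = ((1 << ways) - 1) << offset   # contiguous run of set bits
        masks[name] = f"{mask:x}"            # hex string, resctrl-style
        offset += ways
    return masks

# Critical UPF gets a large private slice; best-effort apps share the rest.
masks = way_masks({"upf": 8, "besteffort": 4})
```

Because the masks never overlap, a noisy best-effort neighbor cannot evict the critical function's cache lines, which is the isolation property the paragraph above relies on.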

In several scenarios involving a broad range of performance-critical workloads (e.g. 5G UPFs, vRouters, high-performance message queues, web servers), NFV-RI™ managed to increase server density by up to 2x, colocating them with additional applications, while keeping the performance of the former fully protected as if they were running on a dedicated server.

Edge colocation Diagram