
Virtualized RAN-Vol.2

Apr 22, 2021

Introduction

Traffic demand in the mobile communication market continues to grow rapidly, driven by the popularity of high-quality video streaming, augmented reality (AR), and virtual reality (VR), as well as the emergence of diverse vertical services with 5G new radio (NR). Meanwhile, with forecasts showing the growth rate of annual revenue per user lagging behind the growth rate of traffic demand, mobile network operators (MNOs) are under pressure to minimize their network costs while maintaining service quality.

The radio access network (RAN), as a critical component of a mobile network, accounts for more than half of an MNO's spending, and its performance directly impacts the service quality experienced by subscribers. Consequently, MNOs must continuously improve the cost efficiency of RAN performance and operation to reduce spending without degrading the service quality of their subscribers.

In general, traditional RANs prior to 5G have a vendor-proprietary closed system architecture composed of purpose-built hardware and software and custom interfaces that operate with dependencies on the underlying hardware. This proprietary RAN architecture, however, is insufficient when it comes to lowering MNOs' capital and operational costs, as the traditional architecture, by its nature, is not dynamic, flexible, or efficient in meeting the needs of 5G services. MNOs are, therefore, exploring innovative ways to manage their networks and services in line with 5G technology.

The representative networking technologies now being explored by MNOs are software-defined networking (SDN) and network functions virtualization (NFV). Virtualization, first applied to the core network, successfully demonstrated the merits of flexibility in network deployment and business agility. These are critical elements in reducing time-to-market for new services and improving operational efficiency. By taking advantage of the disaggregated network architecture and the separation between software and hardware, virtualized RAN (vRAN) reduces the capital expenditure (CAPEX) and operational expenses (OPEX) of 5G RAN.

Purpose of This Paper

The purpose of this paper is to show both Samsung vRAN's competitiveness against the traditional RAN and the benefits that enable MNOs to operate their networks more flexibly and efficiently. In particular, this paper focuses on the virtualized distributed unit (vDU), which performs the entire baseband processing in the disaggregated 5G vRAN architecture. After detailing the role and scope of the vDU in the disaggregated 5G vRAN architecture, this paper describes Samsung's differentiated vRAN features created to enhance network performance and maximize efficiencies in deployment, operation, and management, leading to a reduced total cost of ownership (TCO).

Overview of Virtualized RAN

Disaggregated RAN Architecture

Traditional RAN systems use an integrated network architecture based on the distributed RAN (D-RAN) model, which integrates all RAN functions into a few elements. This integrated architecture, however, faces limitations in supporting the technologies and services required in the 5G era. To overcome these drawbacks, a new RAN system architecture with a flexible function split, known as ‘disaggregated RAN’, is introduced in 5G. The disaggregated RAN breaks the integrated network system into several functional components that can then be individually relocated as needed without hindering their ability to work together to provide a holistic network service.

As shown in Figure 1 below, the split option between the centralized portion and the distributed portion varies depending on the service requirements and network scenarios.

Figure 1. Function split options in the disaggregated RAN

For example, the Option 2 function split is the most preferred option, in which non-real-time RRC/PDCP processing is separated from real-time RLC/MAC/PHY processing. The entity performing the RRC/PDCP network functions is called the centralized unit (CU), and the remaining baseband processing functions of RLC/MAC/PHY are handled by an entity called the distributed unit (DU). Figure 2 illustrates this grouping of split functions into CU and DU. Note that the PHY in the DU can be further split between the DU and the radio unit (RU): the higher-layer split in the literature refers to the CU-DU split, whereas the DU-RU split is called the lower-layer split.
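
As a minimal illustration of this grouping (using Python purely as notation; the labels are descriptive, not product terminology), the Option 2 and Option 7-2x splits described above can be captured as a simple mapping:

```python
# Minimal sketch of the function grouping described above, using Python purely
# as notation. The grouping follows the Option 2 (CU-DU) and Option 7-2x (DU-RU)
# splits; the labels are descriptive, not product terminology.
FUNCTION_SPLIT = {
    "CU": ["RRC", "PDCP"],             # non-real-time processing, centralized
    "DU": ["RLC", "MAC", "High-PHY"],  # real-time baseband processing
    "RU": ["Low-PHY", "RF"],           # below the lower-layer (fronthaul) split
}

def unit_for(function: str) -> str:
    """Return which unit hosts a given RAN function under this split."""
    for unit, functions in FUNCTION_SPLIT.items():
        if function in functions:
            return unit
    raise ValueError(f"Unknown RAN function: {function}")

if __name__ == "__main__":
    for f in ("PDCP", "MAC", "Low-PHY"):
        print(f"{f} -> {unit_for(f)}")
```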

Under the Option 2 split, a CU can connect to multiple DUs; that is, RRC/PDCP functions are centralized, so no RRC/PDCP anchor change occurs during a baseband handover across DUs. This ultimately improves the service quality associated with handovers within the same CU. In addition, the centralized CU can pool resources across several DUs, thus maximizing resource efficiency.

Figure 2. Function split between CU, DU, and RU

The prevalence of dual connectivity (DC) in 5G networks is another reason why split Option 2 is preferred over other options. To support DC without the CU-DU split, a device may connect to two different gNBs (each a combined CU and DU), but only one of the two, the PDCP anchor, is responsible for processing the split data streams via PDCP. The PDCP load is therefore concentrated on the PDCP anchor, creating a load imbalance and inefficient resource usage between the over-utilized anchor and the under-utilized non-anchor node. Under the Option 2 CU-DU split, however, this load imbalance is mitigated because PDCP aggregation is off-loaded to the CU.

From the lower-layer split perspective, the traditional separation point between DU and RU is called the fronthaul and is referred to as Option 8 in Figure 1. It is usually implemented as a common public radio interface (CPRI) fronthaul over optical fiber. However, as 4G long term evolution (LTE) networks evolve to 5G NR, higher bandwidth usage and the introduction of massive multiple-input multiple-output (MIMO) technology, coupled with carrier aggregation (CA) and the use of multiple sectors per cell site, all increase the fronthaul bandwidth. Because of this large increase, the fronthaul bandwidth must be lowered with more efficient fronthaul options than Option 8. Further, operator demand both to prevent vendor lock-in and to increase cost efficiency through multi-vendor introduction has accelerated the discussion of an open fronthaul interface. This led to active discussion of Option 7, which separates the PHY sub-blocks into high-PHY and low-PHY. To this end, the Open RAN (O-RAN) Alliance has standardized the fronthaul split Option 7-2x, one of the sub-types of the Option 7 split, as the open fronthaul interface. Option 7-2x is implemented over an Ethernet fronthaul with the enhanced CPRI (eCPRI) specification.
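
To illustrate why the Option 8 fronthaul does not scale, the following back-of-the-envelope sketch estimates the CPRI line rate; the 15-bit sample width, control-word overhead, 8B/10B coding factor, and the linear scaling of the sample rate with carrier bandwidth are common simplifying assumptions rather than figures from this paper:

```python
# Back-of-the-envelope estimate of the Option 8 (CPRI) fronthaul rate.
# The 15-bit I/Q sample width, 16/15 control-word overhead, 10/8 line coding,
# and the linear scaling of the sample rate with carrier bandwidth are common
# simplifying assumptions, not figures taken from this paper.
def cpri_rate_gbps(bandwidth_mhz: float, antenna_ports: int,
                   sample_width_bits: int = 15) -> float:
    """Approximate CPRI line rate in Gbps for one carrier."""
    sample_rate_msps = 30.72 * (bandwidth_mhz / 20.0)  # 30.72 Msps per 20 MHz
    rate_mbps = (sample_rate_msps
                 * 2                   # I and Q components per sample
                 * sample_width_bits   # bits per component
                 * antenna_ports
                 * 16 / 15             # CPRI control-word overhead
                 * 10 / 8)             # 8B/10B line coding
    return rate_mbps / 1000.0

if __name__ == "__main__":
    # LTE-era case: 20 MHz, 2 antenna ports -> roughly 2.5 Gbps.
    print(f"LTE 20 MHz, 2 ports:  {cpri_rate_gbps(20, 2):7.1f} Gbps")
    # 5G massive MIMO case: 100 MHz, 64 antenna ports -> hundreds of Gbps,
    # which is why a more efficient split such as Option 7-2x over eCPRI is needed.
    print(f"NR 100 MHz, 64 ports: {cpri_rate_gbps(100, 64):7.1f} Gbps")
```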

Virtualized DU

Virtualization shifts the CU and DU from dedicated hardware to software components, allowing for flexible scaling as well as rapid and continuous evolution. This virtualization lets networks easily meet the evolving demands of new and existing services with minimal impact on deployment and operation costs. With the vDU, all baseband functions of the real-time RLC/MAC/PHY layers are executed on a commercial off-the-shelf (COTS) server. Figure 3 illustrates the functional splits inside the vDU, the Option 2 split (between CU and DU), and the Option 7-2x split (between DU and RU).

Figure 3. Function split between CU and DU

Samsung was the first vendor in the world to launch a 5G virtualized CU (vCU) with a disaggregated RAN architecture. Compared to the vCU, it is even more important for the vDU to have a resource-efficient, scalable software architecture, because the most complex functions of the entire baseband processing reside in L1/L2. Figure 4 shows the high-level architecture of Samsung’s container-based vDU, which enhances scalability, flexibility, and resource efficiency.

Figure 4. Architecture of vDU
  • vDU functions are implemented as containerized network functions (CNFs), which are decoupled from the underlying hardware and operate on an x86-based COTS server.
  • The PHY and MAC layers in the vDU require very high computational complexity: channel estimation and detection, forward error correction (FEC), and scheduling algorithms. These functions can burden the computing power of a COTS server and lower vDU performance. Therefore, some computation-intensive tasks with repetitive structures, such as FEC, may be off-loaded to alternative hardware chips for acceleration, which can optionally be installed on a COTS server.
  • The vDU has multiple pods, each including one or more containers, providing a microservice-type architecture. The server resources, such as central processing unit (CPU) cores and memory, occupied by each pod can vary, and pods can be scaled based on capacity requirements. These features allow the vDU to be configured with the proper CPU core and memory dimensioning according to the capacity and performance requirements of the deployment area.
  • The management and orchestration of vDU containers can be supported by Kubernetes (K8s), an open-source system that automatically deploys, scales, and manages containerized applications (see the sketch after this list).
  • The data plane development kit (DPDK) accelerates packet processing on x86 servers while using a minimum number of CPU cycles. Samsung vDU uses DPDK to improve networking performance without additional dedicated packet-processing hardware. Moreover, Samsung vDU uses single-root input/output virtualization (SR-IOV), which allows a single physical network interface to be shared among multiple applications without compromising network performance. In doing so, different applications running on the vDU can access the network interface directly without using kernel bridging.
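
As a minimal sketch of the Kubernetes-based lifecycle management mentioned above, the following example scales a containerized network function with the standard Kubernetes Python client; the deployment and namespace names are hypothetical and do not reflect any actual vDU management interface.

```python
# Minimal sketch of container lifecycle management with the Kubernetes Python
# client, as one concrete way the Kubernetes-based orchestration mentioned above
# could be driven. The deployment and namespace names are hypothetical and do
# not reflect any actual vDU management interface.
from kubernetes import client, config

def scale_cnf(deployment: str, namespace: str, replicas: int) -> None:
    """Patch the replica count of a containerized network function."""
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

if __name__ == "__main__":
    # Hypothetical example: scale a baseband-processing CNF out to 3 replicas.
    scale_cnf(deployment="vdu-baseband", namespace="ran", replicas=3)
```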

Benefits of Virtualized RAN

Samsung vRAN provides several benefits as a future-proof solution that meets the evolving requirements of network technologies. Further, it minimizes the cost of network deployment and operation (i.e., CAPEX and OPEX) by virtualizing network functions. These benefits are driven by two significant, operator-attractive values of virtualization: (1) operation on a common platform instead of dedicated hardware, and (2) implementation with software-based cloud technology.

Same Common Platform in RAN, Core, and Application

Samsung vRAN operates on x86-based COTS servers and is not constrained to proprietary hardware. Because COTS servers can be mass produced, the hardware costs associated with vRAN are reduced. In addition, COTS servers offer flexibility: when capacity requirements increase due to traffic demand, additional resources can easily be pooled by adding more COTS servers to the network. This is a much more cost-effective proposition than wholesale replacement of outdated proprietary hardware with new proprietary hardware. Beyond the COTS server itself, a common platform separated from dedicated hardware leads to the following benefits.

End-to-end solution on common platform

Operators can reap the significant benefits of a single, uniform end-to-end platform across the core network, CU, DU, and edge applications such as mobile edge computing (MEC), providing further synergies and an improved return on investment in vRAN. For example, Samsung’s fully virtualized 5G end-to-end solutions, which consist of the virtualized core, vCU, and vDU, can bring cloud-based network solutions to operators and industries on a cloud platform. The fact that none of the network components require a dedicated purpose-built platform can also drastically simplify the procurement, integration, management, and orchestration of the end-to-end network, thereby reducing operations and maintenance costs.

Flexibility on network configuration

In a vRAN with a common platform, the network functions are fully decoupled from their underlying hardware. That is, the RAN functions of multiple vendors may be operated on the same hardware, improving flexibility for service providers. In some cases, multiple service providers may even be able to share the same hardware. In addition, by breaking up the proprietary hardware and solutions of traditional network vendors, new market entrants are given the opportunity to bring their solutions to the market. In such an open environment, where multi-vendor software and hardware can co-exist on a common platform, MNOs will be able to configure their networks with a best-of-breed approach, thereby reducing TCO.

Software-based Cloud Technology

In traditional hardware-oriented network solutions, deploying new standards, features, and services often requires hardware replacement, especially when lower-layer protocols change or more processing capability is needed. Decoupling software from hardware, however, allows the infrastructure to scale horizontally to keep pace with the constant evolution of radio access, instead of requiring frequent vertical upgrades and replacement of proprietary hardware. Samsung vRAN, which implements most L1/L2 functions in software, allows fast and cost-effective software upgrades that avoid costly and time-consuming hardware replacements. Even when network capacity must grow, the system can respond quickly through software upgrades, leveraging the flexibility and scalability of the vRAN pods. Fast software upgrades keep services available, enable rapid and cost-effective improvements to meet service requirements, and reduce the time-to-market for delivering new services. As a result, operators are better able to manage and maximize the lifecycle of their hardware. The ability to enhance the network in software, without high-cost and time-consuming hardware additions or replacements, and to reuse common infrastructure are the biggest factors by which vRAN maximizes lifecycle and reduces an operator’s cost of ownership compared to traditional hardware-based equipment.

An additional noteworthy benefit of software-based virtualized cloud technology is the flexibility to add new features and deploy them automatically when desired. Network resources are automatically adjusted as needed to address sudden surges in traffic more effectively. Any idle resources of a given site can be reused in a computing cloud shared by multiple sites, reducing the baseband processing cost for each radio site and improving the efficiency of the operator's entire infrastructure. In addition, automation of operations and management through virtualized software can make the deployment, upgrade, and health checks of network elements and services more efficient and cost-effective, allowing operators to reduce network operation and maintenance costs.

Evolution of Samsung Virtualized RAN

Samsung vRAN is leading network virtualization as a global leader and will continue to evolve. This evolution spans total virtualization solutions, from multi-technology RAN to web-scale and enterprise private network solutions, as well as continuous performance enhancement to catch up with the performance of traditional hardware DUs. The evolution of the underlying common hardware platforms on which the vDU software runs also contributes to this performance enhancement, as do advanced software technologies that maximize the flexibility of virtualization.

Evolution to Consolidated RAN Solution

Figure 5 shows Samsung's plan to secure a total virtualized RAN solution by adopting multiple radio access technologies (RATs). In the initial stage in 2020, the frequency division duplex (FDD) NR vDU for low frequency bands was provided for fast deployment of the virtualization solution. The NR vDU portfolio will be expanded in 2021 to provide various mid-band time division duplex (TDD) services. In addition, by securing other RAT solutions (e.g., 2G, 3G), the virtualization product lineup will be expanded to ultimately provide an integrated multi-technology single RAN solution.

Along with the expansion of virtualization products, Samsung vDU will also provide parity with enhanced RAN features such as NR-LTE dynamic spectrum sharing (DSS), carrier aggregation (CA), dual connectivity (DC), and multi-user multiple-input multiple-output (MU-MIMO).

Figure 5. Evolution plan of Samsung vRAN solutions

Performance Enhancement with Hardware Accelerators

The highly complicated L1/L2 baseband processing poses a great challenge in terms of software implementation and creates uncertainty as to whether a vDU can match the performance of traditional purpose-built hardware. The cell capacity of a vDU is extremely important in that it is directly tied to the deployment and operation cost of the vDU. In turn, such costs determine the TCO of a vDU system and dictate its overall competitiveness against a hardware-based DU system.

In the initial stages of virtualization, unfortunately, the capacity of a DU fully virtualized in software does not match that of purpose-built hardware of the same cost. As COTS server processors and platforms advance, the fully virtualized DU is expected to narrow this gap over time. However, to secure vDU competitiveness by greatly increasing performance even before catching up with the hardware-based DU, a hardware accelerator is required. A hardware accelerator is a device that enables high-speed, high-capacity processing of functions that are too complex and compute-intensive for software to handle; physical layer functions such as FEC are usually offloaded to such hardware.
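
The offload decision itself can be pictured with a short sketch; the function and parameter names below are hypothetical, and the software path is reduced to a trivial hard-decision placeholder.

```python
# Sketch of the offload decision described above: repetitive, compute-intensive
# physical layer work such as FEC decoding goes to an accelerator card when one
# is installed on the COTS server, and stays in software otherwise.
# The function and parameter names are hypothetical.
def decode_code_block(llrs, accelerator=None):
    """Decode one code block, preferring the hardware FEC accelerator."""
    if accelerator is not None:
        return accelerator.decode(llrs)   # offloaded, high-throughput path
    return software_fec_decode(llrs)      # general-purpose CPU fallback path

def software_fec_decode(llrs):
    # Placeholder for a pure-software decoder: hard decision on the LLR signs
    # (positive log-likelihood ratio -> bit 0).
    return [0 if llr >= 0 else 1 for llr in llrs]

if __name__ == "__main__":
    print(decode_code_block([2.1, -0.7, 0.3, -1.9]))  # -> [0, 1, 0, 1]
```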

Figure 6 shows the performance evolution plan of the Samsung NR vDU through the development of the hardware platform and the introduction of hardware accelerators, combined with Samsung’s optimized software. The initial vDU supports a 4T4R antenna array in narrow FDD bands. Within two years, performance is expected to improve enough to deliver a threefold capacity gain. In wide-bandwidth TDD bands as well, cell performance will continue to increase greatly thanks to hardware evolution: mid-band TDD will be supported from 2021 and is expected to reach two to three times the capacity within two years. This hardware evolution has the potential to close the performance gap with hardware-based DUs early, and the improved cell capacity makes the vDU competitive in both performance and price.

Figure 6. Evolution plan of Samsung NR vRAN performance

Adaptive Utilization of Software-based Algorithm

To guarantee proper performance in all potential operating scenarios and environments, a traditional hardware-based DU must be dimensioned to ensure good performance even in the worst case. For example, the variance of the channel between a base station and a user is small when the user is static or moving at low speed, but severe when the user moves at high speed. In a high-speed environment, the DU needs sophisticated algorithms to predict the rapidly changing channel and demodulate the signal. This means that, with a hardware-based DU, even in a network where users are mainly static and do not move at high speed, a DU implemented with high complexity must be deployed in preparation for high-speed users or environments.

Software baseband processing through virtualization, on the other hand, enables the application of adaptable modem algorithms depending on the operating scenario and channel environment. Samsung vDU detects the channel variance, time delay, or signal quality, and then adaptively applies effective algorithms according to the predicted channel environment (complex algorithms for bad channel conditions and simple algorithms for good ones) without additional performance loss. A low-complexity algorithm reduces the vDU's CPU and memory utilization, lowering the power consumption of operation. The reduced number of computation cycles also decreases the modem processing time. The saved time and freed resources can be used to serve more users, improving overall vDU performance.
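
A minimal sketch of this adaptive selection might look as follows; the thresholds and algorithm labels are hypothetical, as the actual Samsung algorithms and selection criteria are not disclosed in this paper.

```python
# Illustrative sketch of adaptive modem-algorithm selection. The thresholds and
# algorithm labels are hypothetical; the actual selection criteria used by the
# vDU are not described in this paper.
from dataclasses import dataclass

@dataclass
class ChannelEstimate:
    doppler_hz: float   # proxy for user mobility / channel variance
    snr_db: float       # measured signal quality

def select_channel_estimator(ch: ChannelEstimate) -> str:
    """Pick an estimator complexity level from the predicted channel state."""
    if ch.doppler_hz > 300.0 or ch.snr_db < 5.0:
        return "high-complexity"    # fast-varying or poor channel: advanced tracking
    if ch.doppler_hz > 50.0:
        return "medium-complexity"
    return "low-complexity"         # static users: fewer CPU cycles, lower latency

if __name__ == "__main__":
    print(select_channel_estimator(ChannelEstimate(doppler_hz=10.0, snr_db=20.0)))
    print(select_channel_estimator(ChannelEstimate(doppler_hz=500.0, snr_db=12.0)))
```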

In addition, Samsung vDU can apply differentiated algorithms adaptively according to the deployment environment, increasing the efficiency of the entire network. For example, a vDU with a high-complexity modem that guarantees high performance even under high mobility can be deployed where users travel on high-speed trains, whereas in areas with low mobility, sufficient performance can be achieved by deploying a vDU with a low-complexity modem. In a network where a low-complexity vDU is deployed, a larger number of users or cells can be supported thanks to the higher CPU availability and faster processing time of the vDU, which in turn reduces the deployment cost of the network. Figure 7 shows an example in which fewer vDUs are required to support the same number of cells and users in a low-mobility network, thanks to the adaptive utilization of vDUs for different mobility environments.

Figure 7. Adaptive utilization of vDU in different deployment environments

Even if the deployed network environment later changes in a way that the initial low-complexity vDU cannot handle, the vDU can be converted to a fully implemented vDU through a software upgrade, without replacing the equipment.

Flexible Deployment and Dynamic Scaling

Samsung vDU provides flexibility and dynamic scaling capabilities that create a high degree of freedom in network operation and reduce the overall cost of owning a network. The vDU allows optimal network deployment by dynamically allocating resources to various site configurations and traffic demands, rather than allocating fixed resources. The vDU functions operate in pod units, offered in multiple flavors with varying amounts of hardware resources, such as CPU cores and memory, for flexible dimensioning. Using this flavor set, a vDU can be deployed with pod flavor #1 for a network that requires a low amount of capacity, and with flavor #2 or #3 for networks that require higher capacity, as shown in Figure 8.

Figure 8. Flexible deployment with multiple flavors
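
Conceptually, flavor-based dimensioning can be sketched as a simple lookup; the flavor definitions below (core counts, memory, cell capacity) are hypothetical values chosen only for illustration.

```python
# Sketch of flavor-based dimensioning in the spirit of Figure 8. The flavor
# definitions (core counts, memory, cell capacity) are hypothetical values
# chosen only for illustration.
FLAVORS = {
    "flavor-1": {"cpu_cores": 4,  "memory_gb": 16, "max_cells": 3},
    "flavor-2": {"cpu_cores": 8,  "memory_gb": 32, "max_cells": 6},
    "flavor-3": {"cpu_cores": 16, "memory_gb": 64, "max_cells": 12},
}

def pick_flavor(required_cells: int) -> str:
    """Return the smallest flavor whose cell capacity covers the requirement."""
    for name, spec in sorted(FLAVORS.items(), key=lambda kv: kv[1]["max_cells"]):
        if spec["max_cells"] >= required_cells:
            return name
    raise ValueError("Requirement exceeds the largest available flavor")

if __name__ == "__main__":
    print(pick_flavor(2))    # low-capacity site -> flavor-1
    print(pick_flavor(10))   # high-capacity site -> flavor-3
```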

In addition, Samsung vDU enables dynamic capacity management of networks according to the required service and traffic through right-sized scaling. It can also be scaled horizontally as demand for network capacity changes: as traffic demand increases, a pod creates replicas and scales out to meet the required capacity, and when demand decreases, it scales in so that resource utilization is enhanced.

Dynamic scaling enables flexible management of vDU resources and also enables pooling to efficiently cope with the following network changes:

  • Traffic changes over time, e.g. daily traffic change, traffic flow throughout seasons
  • Additional cell or site deployment
  • Event-based unexpected traffic changes
  • Load imbalance between cells connected to a vDU

Figure 9 depicts an example of pod scale-out when network traffic increases and additional cell deployment is necessary, and Figure 10 shows an example of optimizing resource utilization through dynamic scaling as the required capacity changes between day and night.

Figure 9. Dynamic scaling with additional cell deployment
Figure 10. Dynamic scaling with traffic change
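
A simple scale-out/scale-in rule in the spirit of Figures 9 and 10 can be sketched as follows; the utilization thresholds and replica bounds are hypothetical.

```python
# Sketch of a simple scale-out/scale-in rule in the spirit of Figures 9 and 10.
# The utilization thresholds and replica bounds are hypothetical.
def target_replicas(current: int, utilization: float,
                    scale_out_at: float = 0.8, scale_in_at: float = 0.3,
                    min_replicas: int = 1, max_replicas: int = 8) -> int:
    """Return the desired pod replica count given the current utilization (0..1)."""
    if utilization > scale_out_at and current < max_replicas:
        return current + 1   # traffic or cell count grew: add a replica
    if utilization < scale_in_at and current > min_replicas:
        return current - 1   # traffic dropped (e.g. at night): release resources
    return current

if __name__ == "__main__":
    print(target_replicas(current=2, utilization=0.9))  # -> 3 (scale out)
    print(target_replicas(current=3, utilization=0.1))  # -> 2 (scale in)
```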

Since the vDU supports on-demand capacity management, it does not have to be allocated the maximum hardware resources for future network growth at the initial deployment stage. Instead, it can start with the minimum DU resources required for the initial deployment. Over time, as more cells are required or traffic demand changes, the vDU can increase or decrease its cell capacity by scaling its pods. This flexibility and scalability reduce the cost of the additional deployments needed as demand increases. They also reduce energy consumption and maintenance costs, because there is no need to build a cell site to its maximum capacity and set aside extra server resources.

Efficient Resource Utilization via Pooling

Samsung vDU is expected to leverage the flexibility and scalability of DU virtualization and to further enhance resource efficiency through pooling. Pooling broadly refers to technologies in which several cells share a given resource pool; more specifically, depending on whether the shared pool spans multiple vDU servers or resides within one vDU server, it can be divided into vDU pooling and resource pooling.

vDU pooling enables a single vDU to support baseband processing for multiple cell sites by sharing its baseband processing resources within a baseband cloud and allowing other cell sites and radio technologies to use those resources. Traditional hardware-based baseband units, by contrast, have a fixed capacity and a static boundary with an RU, which limits the benefit that can be gained from dynamically pooling baseband processing resources. This disparity means that idle processing capacity for a given cell can be reused by vDUs in the same baseband pool for other cell sites, reducing the need to purchase additional equipment compared to a traditional DU serving a single cell site, as shown in Figure 11.

Figure 11. Resource efficiency via vDU pooling

The sharing of baseband processing resources through vDU pooling allows operators to flexibly change cell configurations as needed. For instance, assuming a vDU can support a total capacity of 3,600 UEs and 10 Gbps at 10 MHz 4T4R, the same vDU can be configured across multiple cell sites as 18 cells of 200 UEs per cell, 6 cells of 600 UEs per cell, or 3 cells of 1,200 UEs per cell, each within the same 10 Gbps.

Resource pooling, on the other hand, means that the logical resources required by the cells served by a vDU are shared in a resource pool within that vDU. Resource pooling enables dynamic resource sharing between cells so that the resource efficiency of the vDU is maximized. While a traditional RAN can experience severe load imbalance between cells in certain situations, a vDU pools the hardware resources required for the DU functions of each cell, mitigating any potential load imbalance. For example, a vDU designed to support 3 cells of 1,200 UEs per cell and 10 Gbps of total traffic processing capacity can also be dynamically programmed as 3 cells of 800/200/200 UEs or 3 cells of 400/400/400 UEs within the same 10 Gbps, depending on the traffic demand flowing into each cell. Figure 12 shows an example of resolving such a load imbalance through resource pooling when traffic shifts between cells and 2 cells serve a total of 800 UEs.
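
The per-cell sharing described above can be sketched as a proportional allocation over a fixed pool, using the UE-context numbers quoted in the text; the allocation rule itself is an illustrative assumption.

```python
# Sketch of per-cell resource sharing within one vDU, using the UE-context
# numbers quoted in the text (a pool of 3,600 UE contexts shared by the cells).
# The proportional allocation rule is an illustrative assumption.
def allocate_ue_contexts(demand_per_cell, pool_size=3600):
    """Share a fixed UE-context pool across cells according to demand."""
    total_demand = sum(demand_per_cell)
    if total_demand <= pool_size:
        return list(demand_per_cell)  # every cell gets what it asks for
    # Over-demand: scale each cell's share down proportionally to fit the pool.
    return [demand * pool_size // total_demand for demand in demand_per_cell]

if __name__ == "__main__":
    print(allocate_ue_contexts([800, 200, 200]))      # imbalance absorbed by the pool
    print(allocate_ue_contexts([400, 400, 400]))      # even load
    print(allocate_ue_contexts([2400, 1200, 1200]))   # over-demand scaled to 3,600
```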

Figure 12. Resource pooling when load imbalance occurs

Pooling can be managed and fully automated by the Samsung cloud orchestrator to optimize the resource efficiency of the network, as shown in Figure 13. The Samsung cloud orchestrator provides automatic vDU CNF provisioning and lifecycle management. It can also automatically scale the resources for each cell in the vDU server according to actual cell requirements, through an appropriate policy based on data such as the resource usage of the vDU servers in the pool and the required capacity of the cells. Through this orchestration, vDU resource efficiency can be optimized by automatically configuring the network fabric in proportion to the vDU server resources required by the cells.

Figure 13. Evolution of pooling with Samsung Cloud Orchestrator

Open RAN and RAN Intelligent Controller

To complement the disaggregated architecture and virtualization of the 5G RAN, another powerful technology shift, so-called Open RAN (O-RAN), has been in the spotlight. The goal of O-RAN is to make the RAN more open and flexible, and it can be effectively realized with vRAN, which is fully decoupled from proprietary hardware. O-RAN aims to drive the mobile industry toward an ecosystem of innovative, multi-vendor, interoperable, and autonomous RAN with reduced cost, improved performance, and greater agility. Empowered by the principles of openness and intelligence, the O-RAN architecture is the foundation for building the vRAN on open hardware, with embedded artificial intelligence (AI)/machine learning (ML)-powered radio control. The overall O-RAN architecture is shown in Figure 14.

Figure 14. Overall architecture of O-RAN

In terms of openness, Samsung vDU already provides an open fronthaul interface (i.e., function split Option 7-2x) through which multi-vendor DU-RU interoperability can be realized. The open fronthaul interface of Samsung vDU allows operators to easily customize their networks for their particular purposes and introduce their own services that utilize products from multiple vendors. This allows a mobile network to be disaggregated so that multi-vendor software can interwork on COTS servers. RAN software from one vendor can communicate either with its own RU software via a proprietary interface or with another vendor’s RU software via the open fronthaul interface.

In addition, Samsung vRAN will interwork with the RAN intelligent controller (RIC) to optimize network resources and improve user service quality. In O-RAN, the RIC is introduced to provide intelligent radio resource management, higher-layer procedure optimization, policy optimization, and AI/ML models. The RIC decouples non-real-time (non-RT) control functionality, with control loops running over 1 second, from near-real-time (near-RT) control functionality, with control loops running under 1 second. The near-RT RIC provides near-real-time control and optimization of RAN elements and resources via fine-grained data collection and actions, which may include AI/ML workflows covering model training, inference, and updates.

The non-RT RIC and near-RT RIC enable a wide range of new use cases that provide enhanced service to the network and its users. The O-RAN Alliance identifies the high-level use cases, and Samsung vRAN realizes them by interoperating with the RIC. For instance, service level agreement (SLA) assurance per RAN slice or per user group is provided by monitoring and controlling service quality with the Samsung near-RT RIC. It supports several specified service KPIs, such as data rate and latency, based on the SLA between the operator and the customer. It also reduces CAPEX by optimizing RAN resources according to channel and traffic changes, improving network performance. AI modeling can help optimize the operating resources per slice.
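
As an illustration, a near-RT RIC application could evaluate slice KPIs against SLA targets along the following lines; the KPI structure and target values are hypothetical, with only data rate and latency taken from the KPIs named above.

```python
# Illustrative sketch of a slice SLA check of the kind a near-RT RIC application
# could run. The KPI structure and target values are hypothetical; only data
# rate and latency are taken from the KPIs named in the text.
from dataclasses import dataclass

@dataclass
class SliceKpi:
    slice_id: str
    throughput_mbps: float
    latency_ms: float

def sla_violations(kpi: SliceKpi, min_mbps: float, max_latency_ms: float) -> list:
    """Return which SLA targets the measured slice KPIs currently violate."""
    violations = []
    if kpi.throughput_mbps < min_mbps:
        violations.append("data rate below target")
    if kpi.latency_ms > max_latency_ms:
        violations.append("latency above target")
    return violations

if __name__ == "__main__":
    measured = SliceKpi("enterprise-slice-1", throughput_mbps=80.0, latency_ms=25.0)
    # A non-empty result would trigger a resource-control action toward the RAN.
    print(sla_violations(measured, min_mbps=100.0, max_latency_ms=20.0))
```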

In addition, the RIC can search for neighboring cooperative cells that can coordinate scheduling with the serving cell in a downlink coordinated multi-point (CoMP) environment, and control inter-cell interference by interoperating with the scheduler. When the beams of UEs in the multi-cell cluster change, the RIC searches for cooperative cells and provides the cell information to the scheduler, which then adjusts the transmission beams with the cooperative cells in (near) real time. This massive MIMO beam coordination by the RIC effectively controls the interference resulting from the growing share of massive MIMO deployed to meet the high capacity demands of the 5G network, and improves network performance.

Realization of Web-scale RAN

Furthermore, Samsung’s container based vDU realizes the web-scale evolution in the RAN domain through cloud-native architecture and technology. Web-scale vRAN (i.e. vDU + vCU) pertains to designing, deploying, and managing RAN at any scale, and can be packaged in a number of ways to suit diverse network requirements. It can be deployed and operated on a public cloud platform with higher agility and scalability, and can scale to any size of enterprise, any requirement of business.

Therefore, web-scale RAN can also provide suitable solutions for the private networks of enterprises with varied environments and a wide range of network performance requirements. Samsung vRAN supports flexible deployment that increases resource efficiency by using only the resources the network requires, while meeting various capacity needs. Enterprises can scale their networks as their businesses grow, with improved productivity, safety, and automation at web scale.

One of the biggest reasons it is difficult for private enterprises and organizations to operate their own private networks is that mobile communication technology generally requires a very high level of expertise in deployment and operation, something hard to come by in the average enterprise IT department. Deploying and operating various types of hardware, repairing failures, introducing new network features, and replacing outdated equipment can be high entry barriers for a private enterprise without long experience in telecommunications. Samsung’s web-scale vRAN converts telco technology into IT technology and enables deployment, maintenance, repair, and upgrade through software, making it easy for private enterprises to operate their own mobile networks.

Recently, Samsung announced an agreement to collaborate with Microsoft on an end-to-end, cloud-based private 5G network solution. This collaboration plans to advance the virtualization of 5G solutions, including the deployment of Samsung’s vRAN, virtualized core, and MEC technologies on Microsoft's cloud platform, Azure. The collaboration highlights key benefits of Samsung’s virtualized cloud networks, which can accelerate 5G expansion for enterprises and help them deploy private 5G networks faster.

Summary

The 5G era requires a network that best meets the different characteristics of new and diverse 5G services. Samsung’s disaggregated vRAN satisfies these requirements with an independent vCU and vDU function split architecture: the vCU handles non-real-time functions (RRC/PDCP) and the vDU handles real-time functions (RLC/MAC/PHY), enhancing network performance. This disaggregated architecture enables operators to manage their networks in a more flexible manner.

Samsung vDU is a hardware-agnostic solution whose components run independently of hardware specifications. With convenient hardware maintenance and maximized lifecycles, Samsung vDU enables operators to run their networks more flexibly and efficiently. Samsung vDU does not require fixed resources dedicated to its underlying hardware during deployment. Operators can freely reallocate resources or automatically scale vDU components in response to changes in network traffic patterns to make the best use of resources such as CPU cores and memory. Resource efficiency can be further maximized through resource pooling, as hardware boundary limitations are no longer an issue with virtualization.

Operators can upgrade their networks immediately and without downtime through timely software delivery and deployment. New technologies can be introduced and network capacity can be increased without additional changes to the existing hardware. In addition, Samsung's sophisticated baseband algorithms further enhance the vDU's competitiveness through advanced hardware accelerators and through increased resource utilization and pooling that adapt to the deployed network environment.

Samsung vDU is a commercially proven product supplied to a global Tier One operator. Samsung continues to explore the virtualization of multi-RAT as a single RAN solution. This relentless effort ensures that Samsung vDU will become an ever more realistic and future-proof solution, capable of serving as a consolidated multi-technology converged RAN platform.

Abbreviations

    • AI

      Artificial Intelligence

    • AR

      Augmented Reality

    • CA

      Carrier Aggregation

    • CAPEX

      Capital Expenditure

    • CNF

      Containerized Network Function

    • CoMP

      Coordinated Multi-Point

    • COTS

      Commercial Off the Shelf

    • CPRI

      Common Public Radio Interface

    • CPU

      Central Processing Unit

    • C-RAN

      Centralized RAN

    • CU

      Centralized Unit

    • D-RAN

      Distributed RAN

    • DC

      Dual Connectivity

    • DPDK

      Data Plane Development Kit

    • DU

      Distributed Unit

    • eCPRI

      Enhanced CPRI

    • FDD

      Frequency Division Duplex

    • FEC

      Forward Error Correction

    • LTE

      Long-Term Evolution

    • MAC

      Medium Access Control

    • MIMO

      Multiple Input Multiple Output

    • ML

      Machine Learning

    • MNO

      Mobile Network Operator

    • NFV

      Network Function Virtualization

    • NR

      New Radio

    • OPEX

      Operational Expenses

    • O-RAN

      Open RAN

    • PDCP

      Packet Data Convergence Protocol

    • RAN

      Radio Access Network

    • RF

      Radio Frequency

    • RIC

      RAN Intelligent Controller

    • RLC

      Radio Link Control

    • RRC

      Radio Resource Control

    • RU

      Radio Unit

    • SDN

      Software Defined Network

    • SLA

      Service Level Agreement

    • SR-IOV

      Single Root Input/Output Virtualization

    • TDD

      Time Division Duplex

    • TCO

      Total Cost of Ownership

    • vCU

      Virtualized CU

    • vDU

      Virtualized DU

    • VR

      Virtual Reality

    • vRAN

      Virtualized RAN