diff --git a/_data/navbars/reference-architectures.yml b/_data/navbars/reference-architectures.yml index 1071e6c9..4e177550 100644 --- a/_data/navbars/reference-architectures.yml +++ b/_data/navbars/reference-architectures.yml @@ -13,6 +13,11 @@ section: meta_title: Edge Cloud Deployment with 3GPP 4G LTE CUPS of EPC meta_description: OpenNESS is an open source edge computing platform that enables Service Providers and Enterprises to deploy applications and services on a network edge. + - title: 5G Non-Stand Alone (NSA) + path: /doc/reference-architectures/core-network/openness_5g_nsa + meta_title: Edge Cloud Deployment with 3GPP 5G Non Stand Alone + meta_description: OpenNESS is an open source edge computing platform that enables Service Providers and Enterprises to deploy applications and services on a network edge. + - title: Next-Gen Core (NGC) path: /doc/reference-architectures/core-network/openness_ngc meta_title: Edge Cloud Deployment with 3GPP 5G Stand Alone @@ -22,3 +27,31 @@ section: path: /doc/reference-architectures/core-network/openness_upf meta_title: User Plane Function (UPF) meta_description: User Plane Function is the evolution of Control and User Plane Separation which part of the Rel.14 in Evolved Packet core. CUPS enabled PGW to be split into PGW-C and PGW-U. + + - title: Radio Access Network + path: + section: + - title: OpenNESS Radio Access Network + path: /doc/reference-architectures/ran/openness_ran + meta_title: OpenNESS Radio Access Network is the Edge of Wireless Network + meta_description: OpenNESS Radio Access Network is the edge of the wireless network. OpenNESS uses Intel FlexRAN as a reference 4G and 5G base station for 4G and 5G end-to-end testing. + + - title: O-RAN Front Haul Sample Application in OpenNESS + path: /doc/reference-architectures/ran/openness_xran + meta_title: 5GNR FlexRAN Front Haul functional units deployment with OpenNESS based on O-RAN specifications at the Network Edge + meta_description: 5GNR FlexRAN Front Haul functional units deployment with OpenNESS based on O-RAN specifications at the Network Edge. + + - title: Converged Edge Reference Architecture Near Edge + path: /doc/reference-architectures/CERA-Near-Edge + meta_title: Converged Edge Reference Architecture Near Edge + meta_description: Reference architecture combines wireless and high performance compute for IoT, AI, video and other services. + + - title: Converged Edge Reference Architecture On Premises Edge + path: /doc/reference-architectures/CERA-5G-On-Prem + meta_title: Converged Edge Reference Architecture On Premises Edge + meta_description: Reference architecture combines wireless and high performance compute for IoT, AI, video and other services. + + - title: Converged Edge Reference Architecture for SD-WAN + path: /doc/reference-architectures/cera_sdwan + meta_title: Converged Edge Reference Architecture for SD-WAN + meta_description: OpenNESS provides a reference solution for SD-WAN consisting of building blocks for cloud-native deployments. 
diff --git a/doc/reference-architectures/CERA-5G-On-Prem.md b/doc/reference-architectures/CERA-5G-On-Prem.md new file mode 100644 index 00000000..80622c4d --- /dev/null +++ b/doc/reference-architectures/CERA-5G-On-Prem.md @@ -0,0 +1,68 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020-2021 Intel Corporation +``` + +# Converged Edge Reference Architecture 5G On Premises Edge +The Converged Edge Reference Architectures (CERA) are a set of pre-integrated HW/SW reference architectures based on OpenNESS that accelerate the development of edge platforms and architectures. This document describes the CERA 5G On Premises Edge, which combines wireless networking and high performance compute for IoT, AI, video, and other services. + +- [CERA 5G On Prem](#cera-5g-on-prem) + - [CERA 5G On Prem Experience Kit](#cera-5g-on-prem-experience-kit) + - [CERA 5G On Prem OpenNESS Configuration](#cera-5g-on-prem-openness-configuration) + - [CERA 5G On Prem Deployment Architecture](#cera-5g-on-prem-deployment-architecture) + - [CERA 5G On Prem Experience Kit Deployments](#cera-5g-on-prem-experience-kit-deployments) + +## CERA 5G On Prem +The CERA 5G On Prem deployment focuses on On Premises, Private Wireless, and Ruggedized Outdoor deployments, presenting a scalable solution across the On Premises Edge. The assumed 3GPP deployment architecture is based on the figure below from 3GPP TS 23.501 Rel-15, which shows the reference point representation for concurrent access to two (e.g., local and central) data networks (single PDU Session option). The highlighted yellow blocks, RAN, UPF, and Data Network (edge apps), are deployed on the CERA 5G On Prem platform. + +![3GPP Network](cera-on-prem-images/3gpp_on_prem.png) + +> Figure 1 - 3GPP Network + +### CERA 5G On Prem Experience Kit +The CERA 5G On Prem implementation in OpenNESS supports a single Orchestration domain, optimizing the edge node to support Network Functions (gNB, UPF) and Applications at the same time. This allows deployment in small uCPE and pole-mounted form factors. + +#### CERA 5G On Prem OpenNESS Configuration +CERA 5G On Prem is a combination of the existing OpenNESS Building Blocks required to run the 5G gNB, UPF, and Applications, together with their associated HW Accelerators. CERA 5G On Prem also adds CMK and RMD to better support workload isolation and mitigate any interference from applications affecting the performance of the network functions. The diagram below shows the logical deployment with the OpenNESS Building Blocks. + +![CERA 5G On Prem Architecture](cera-on-prem-images/cera-on-prem-arch.png) + +> Figure 2 - CERA 5G On Prem Architecture + +#### CERA 5G On Prem Deployment Architecture + +![CERA 5G On Prem Deployment](cera-on-prem-images/cera_deployment.png) + +> Figure 3 - CERA 5G On Prem Deployment + +The CERA 5G On Prem architecture supports a single platform (Intel® Xeon® SP or Intel® Xeon® D) that hosts both the Edge Node and the Kubernetes* Control Plane. The UPF is deployed using the SR-IOV device plugin and SR-IOV CNI, allowing direct access to the network interfaces used for connection to the gNB and backhaul. For high throughput workloads such as the UPF network function, it is recommended to use single root I/O virtualization (SR-IOV) pass-through of the physical function (PF) or the virtual function (VF), as required. Also, in some cases, the simple switching capability in the NIC can be used to send traffic from one application to another; because a direct path of communication is required between the UPF and the data plane, this becomes an option. 
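To make the SR-IOV wiring concrete, the sketch below shows how such a network is typically declared for the SR-IOV CNI and device plugin. This is a minimal illustration only, not part of the experience kit; the network name `sriov-n3`, the resource pool `intel.com/intel_sriov_netdevice`, and the subnet are assumptions that depend on how the device plugin is configured in a given deployment.

```yaml
# Hypothetical NetworkAttachmentDefinition giving a network function pod direct
# VF access to the N3-facing physical port. Names, resource pool, and IPAM are
# assumptions; adapt them to the device plugin configuration of the deployment.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-n3
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_netdevice
spec:
  config: '{
    "type": "sriov",
    "cniVersion": "0.3.1",
    "name": "sriov-n3",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.3.0/24"
    }
  }'
```

A pod that requests one VF from the `intel.com/intel_sriov_netdevice` pool and references this attachment receives the interface directly, bypassing the host's software data plane.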
It should be noted that the VF-to-VF option is only suitable when there is a direct connection between PODs on the same PF with no support for advanced switching. In this scenario, it is advantageous to configure the UPF with three separate interfaces for the different types of traffic flowing in the system. This eliminates the need for additional traffic switching at the host. In this case, there is a separate interface for N3 traffic to the Access Network, while N9 and N4 traffic can share an interface to the backhaul network, and local data network traffic on N6 can be switched directly to the local applications. Similarly, the gNB DU and CU interfaces N2 and N4 are separated. Depending on performance requirements, a mix of data planes can be used on the platform to meet the varying requirements of the workloads. + +The applications are deployed on the same edge node as the UPF and gNB. + +The use of Intel® Resource Director Technology (Intel® RDT) ensures that the cache allocation and memory bandwidth are optimized for the workloads running on the platform. + +Intel® Speed Select Technology (Intel® SST) can be used to further enhance the performance of the platform. + +The following Building Blocks are supported in OpenNESS: + +- High-Density Deep Learning (HDDL): Software that enables OpenVINO™-based AI apps to run on Intel® Movidius™ Vision Processing Units (VPUs). It consists of the following components: + - HDDL device plugin for K8s + - HDDL service for scheduling jobs on VPUs +- FPGA/eASIC/NIC: Software that enables AI inferencing for applications, high-performance and low-latency packet pre-processing on network cards, and offloading for network functions such as eNB/gNB Forward Error Correction (FEC). It consists of: + - FPGA device plugin for inferencing + - SR-IOV device plugin for FPGA/eASIC + - Dynamic Device Profile for Network Interface Cards (NIC) +- Resource Management Daemon (RMD): RMD uses Intel® Resource Director Technology (Intel® RDT) to implement cache allocation and memory bandwidth allocation to the application pods. This is a key technology for achieving resource isolation and determinism on a cloud-native platform. +- Node Feature Discovery (NFD): Software that enables node feature discovery for Kubernetes*. It detects hardware features available on each node in a Kubernetes* cluster and advertises those features using node labels. +- Topology Manager: This component allows users to align their CPU and peripheral device allocations by NUMA node (a kubelet configuration sketch illustrating this alignment appears at the end of this document). +- Kubevirt: Provides support for running legacy applications in VM mode and the allocation of SR-IOV Ethernet interfaces to VMs. +- Precision Time Protocol (PTP): Uses a primary-secondary architecture for time synchronization between machines connected through Ethernet. The primary clock is a reference clock for the secondary nodes that adapt their clocks to the primary node's clock. A Grand Master Clock (GMC) can be used to precisely set the primary clock. + +#### CERA 5G On Prem Experience Kit Deployments +The CERA 5G On Prem experience kit deploys both the 5G On Premises cluster and a second cluster to host the 5GC control plane functions and provide an additional Data Network POD to act as a public network for testing purposes. Note that the Access network and UE are not configured as part of the CERA 5G On Prem Experience Kit. Binary iUPF, UPF, and 5GC components are also required but not provided. Please contact your local Intel® representative for more information. + +![CERA Experience Kit](cera-on-prem-images/cera-full-setup.png) + +> Figure 4 - CERA Experience Kit
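As a reference for the CPU pinning and NUMA alignment discussed above, the fragment below is a minimal sketch of the kubelet settings that enable the static CPU Manager policy together with Topology Manager. The reserved CPU list is an assumption and must be adapted to the actual platform layout.

```yaml
# Sketch of a kubelet configuration aligning exclusive CPUs and devices by NUMA node.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static                 # Guaranteed-QoS pods receive exclusive cores
topologyManagerPolicy: single-numa-node  # CPU and device allocations come from one NUMA node
reservedSystemCPUs: "0,1"                # assumed: keep housekeeping off the cores used by NFs
```

With these settings, a pod whose CPU requests equal its limits is pinned to dedicated cores on the same NUMA node as its allocated devices, which is the isolation behavior the CMK and RMD building blocks build upon.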
+ +**More details on the Converged Edge Reference Architecture for On Premises deployments are available under [Intel® Distribution of OpenNESS](https://www.openness.org/products/intel-distribution).** diff --git a/doc/reference-architectures/CERA-Near-Edge.md b/doc/reference-architectures/CERA-Near-Edge.md new file mode 100644 index 00000000..fc8e479d --- /dev/null +++ b/doc/reference-architectures/CERA-Near-Edge.md @@ -0,0 +1,63 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020 Intel Corporation +``` + +# Converged Edge Reference Architecture Near Edge +This reference architecture combines wireless networking and high performance compute for IoT, AI, video, and other services. + +- [CERA Near Edge Experience Kit](#cera-near-edge-experience-kit) + - [CERA Near Edge OpenNESS Configuration](#cera-near-edge-openness-configuration) + - [CERA Near Edge Deployment Architecture](#cera-near-edge-deployment-architecture) + - [CERA Near Edge Experience Kit Deployments](#cera-near-edge-experience-kit-deployments) + +## CERA Near Edge Experience Kit +To provide maximum flexibility, the first CERA Near Edge implementation in OpenNESS supports a single Orchestration domain, optimizing the edge node to support Network Functions (UPF) and Applications at the same time. This is also useful for demonstration purposes, as the Near Edge deployment can be scaled down to a single server, reducing the hardware footprint and cost associated with the setup. + +### CERA Near Edge OpenNESS Configuration +CERA Near Edge is a combination of the existing OpenNESS reference architectures [CERA NGC](../flavors.md#core-control-plane-flavor), [CERA UPF](../flavors.md#core-user-plane-flavor), and [CERA Apps](../flavors.md#minimal-flavor). CERA Near Edge takes the NGC Reference Architecture as a base and adds the additional services required to run applications and their associated HW Acceleration for AI workloads. CERA Near Edge also adds CMK and RMD to better support workload isolation and mitigate any interference from applications affecting the performance of the network functions. The diagram below shows the logical deployment with the OpenNESS microservices. + +![CERA Near Edge Architecture](cera-near-edge-images/cera-near-edge-arch.png) + +> Figure 1 - CERA Near Edge Architecture + +### CERA Near Edge Deployment Architecture + +![CERA Near Edge Deployment](cera-near-edge-images/cera_deployment.png) + +> Figure 2 - CERA Near Edge Deployment + +The CERA Near Edge architecture consists of a multi-node cluster (Intel® Xeon® SP-based servers) which can also be modified to support a single platform that hosts both the Edge Node and the Kubernetes Control Plane. The UPF is deployed using the SR-IOV device plugin and SR-IOV CNI, allowing direct access to the network interfaces used for connection to the gNB and backhaul. For high throughput workloads like the UPF network function, it is recommended to use single root I/O virtualization (SR-IOV) pass-through of the physical function (PF) or the virtual function (VF), as required. Also, in some cases, the simple switching capability in the NIC can be used to send traffic from one application to another; because a direct path of communication is required between the UPF and the data plane, this becomes an option. It should be noted that the VF-to-VF option is only suitable when there is a direct connection between PODs on the same PF with no support for advanced switching. 
In this scenario it is advantageous to configure the UPF with three separate interfaces for the different types of traffic flowing in the system. This eliminates the need for additional traffic switching at the host. In this case, there is a separate interface for N3 traffic to the Access Network, while N9 and N4 traffic can share an interface to the backhaul network, and local data network traffic on N6 can be switched directly to the local applications (an illustrative pod definition appears at the end of this document). Depending on performance requirements, a mix of data planes can be used on the platform to meet the varying requirements of the workloads. + +The applications are deployed on the same edge node as the UPF. Using CMK, the applications can be deployed on the same CPU socket or on a separate CPU socket, depending on the requirements. CPU pinning provides resource partitioning by pinning the workloads to specific CPU cores, ensuring that low-priority workloads do not interfere with the high-priority NF workloads. + +The use of Intel® Resource Director Technology (Intel® RDT) ensures that the cache allocation and memory bandwidth are optimized for the workloads running on the platform. + +Intel® Speed Select Technology (Intel® SST) can be used to further enhance the performance of the platform. + +The following EPA features are supported in OpenNESS: + +- High-Density Deep Learning (HDDL): Software that enables OpenVINO™-based AI apps to run on Intel® Movidius™ Vision Processing Units (VPUs). It consists of the following components: + - HDDL device plugin for K8s + - HDDL service for scheduling jobs on VPUs +- Visual Compute Acceleration - Analytics (VCAC-A): Software that enables OpenVINO-based AI apps and media apps to run on Intel® Visual Compute Accelerator Cards (Intel® VCA Cards). It is composed of the following components: + - VPU device plugin for K8s + - HDDL service for scheduling jobs on VPU + - GPU device plugin for K8s +- FPGA/eASIC/NIC: Software that enables AI inferencing for applications, high-performance and low-latency packet pre-processing on network cards, and offloading for network functions such as eNB/gNB Forward Error Correction (FEC). It consists of: + - FPGA device plugin for inferencing + - SR-IOV device plugin for FPGA/eASIC + - Dynamic Device Profile for Network Interface Cards (NIC) +- Resource Management Daemon (RMD): RMD uses Intel® Resource Director Technology (Intel® RDT) to implement cache allocation and memory bandwidth allocation to the application pods. This is a key technology for achieving resource isolation and determinism on a cloud-native platform. +- Node Feature Discovery (NFD): Software that enables node feature discovery for Kubernetes. It detects hardware features available on each node in a Kubernetes cluster and advertises those features using node labels. +- Topology Manager: This component allows users to align their CPU and peripheral device allocations by NUMA node. +- Kubevirt: Provides support for running legacy applications in VM mode and the allocation of SR-IOV Ethernet interfaces to VMs. + +### CERA Near Edge Experience Kit Deployments
The CERA Near Edge experience kit deploys both the near edge cluster and a second cluster to host the 5GC control plane functions and provide an additional Data Network POD to act as a public network for testing purposes. Note that the access network and UE simulators are not configured as part of the CERA Near Edge Experience Kit. Binary iUPF, UPF, and 5GC components are also required but not provided. Please contact your local Intel® representative for more information. + +![CERA Experience Kit](cera-near-edge-images/cera-full-setup.png) + +> Figure 3 - CERA Experience Kit
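As an illustration of the multi-interface UPF wiring and the CPU pinning described in this document, the sketch below shows a hypothetical pod definition attaching three SR-IOV networks (N3, N9/N4, N6) and requesting exclusive CPUs through Guaranteed QoS. All names, the container image, and the resource pool are assumptions for illustration only; the referenced networks would each be declared as a NetworkAttachmentDefinition as in the On Premises document.

```yaml
# Hypothetical UPF pod: three SR-IOV attachments plus Guaranteed QoS for CPU pinning.
apiVersion: v1
kind: Pod
metadata:
  name: upf
  annotations:
    k8s.v1.cni.cncf.io/networks: "sriov-n3,sriov-n9-n4,sriov-n6"  # assumed network names
spec:
  containers:
  - name: upf
    image: upf:latest                          # placeholder image, not provided by the kit
    resources:
      requests:
        cpu: "8"                               # requests == limits -> Guaranteed QoS,
        memory: "8Gi"                          # so the CPU Manager pins exclusive cores
        intel.com/intel_sriov_netdevice: "3"   # one VF per attached network (assumed pool)
      limits:
        cpu: "8"
        memory: "8Gi"
        intel.com/intel_sriov_netdevice: "3"
```

Because the pod's CPU and memory requests equal its limits, it lands in the Guaranteed QoS class, and with the static CPU Manager policy the NF cores are isolated from lower-priority application workloads.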
+ +**More details on the Converged Edge Reference Architecture for Near Edge deployments are available under [Intel® Distribution of OpenNESS](https://www.openness.org/products/intel-distribution).** diff --git a/doc/reference-architectures/README.md b/doc/reference-architectures/README.md new file mode 100644 index 00000000..218b496d --- /dev/null +++ b/doc/reference-architectures/README.md @@ -0,0 +1,7 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020-2021 Intel Corporation +``` + +# Converged Edge Reference Architectures +This folder contains documentation of various edge reference architectures supported by OpenNESS. diff --git a/doc/reference-architectures/cera-near-edge-images/cera-full-setup.png b/doc/reference-architectures/cera-near-edge-images/cera-full-setup.png new file mode 100644 index 00000000..a49c83b4 Binary files /dev/null and b/doc/reference-architectures/cera-near-edge-images/cera-full-setup.png differ diff --git a/doc/reference-architectures/cera-near-edge-images/cera-near-edge-arch.png b/doc/reference-architectures/cera-near-edge-images/cera-near-edge-arch.png new file mode 100644 index 00000000..857a5194 Binary files /dev/null and b/doc/reference-architectures/cera-near-edge-images/cera-near-edge-arch.png differ diff --git a/doc/reference-architectures/cera-near-edge-images/cera_deployment.png b/doc/reference-architectures/cera-near-edge-images/cera_deployment.png new file mode 100644 index 00000000..33db364b Binary files /dev/null and b/doc/reference-architectures/cera-near-edge-images/cera_deployment.png differ diff --git a/doc/reference-architectures/cera-on-prem-images/3gpp_on_prem.png b/doc/reference-architectures/cera-on-prem-images/3gpp_on_prem.png new file mode 100644 index 00000000..f42ca993 Binary files /dev/null and b/doc/reference-architectures/cera-on-prem-images/3gpp_on_prem.png differ diff --git a/doc/reference-architectures/cera-on-prem-images/cera-full-setup.png b/doc/reference-architectures/cera-on-prem-images/cera-full-setup.png new file mode 100644 index 00000000..49750f55 Binary files /dev/null and b/doc/reference-architectures/cera-on-prem-images/cera-full-setup.png differ diff --git a/doc/reference-architectures/cera-on-prem-images/cera-on-prem-arch.png b/doc/reference-architectures/cera-on-prem-images/cera-on-prem-arch.png new file mode 100644 index 00000000..5d3f4385 Binary files /dev/null and b/doc/reference-architectures/cera-on-prem-images/cera-on-prem-arch.png differ diff --git a/doc/reference-architectures/cera-on-prem-images/cera_deployment.png b/doc/reference-architectures/cera-on-prem-images/cera_deployment.png new file mode 100644 index 00000000..0a544163 Binary files /dev/null and b/doc/reference-architectures/cera-on-prem-images/cera_deployment.png differ diff --git a/doc/reference-architectures/cera_sdwan.md b/doc/reference-architectures/cera_sdwan.md new file mode 100644 index 00000000..883015bd --- /dev/null +++ b/doc/reference-architectures/cera_sdwan.md @@ -0,0 +1,43 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020 Intel Corporation +``` + +# Converged Edge Reference Architecture for SD-WAN + +- [Converged Edge Reference Architectures (CERA)](#converged-edge-reference-architectures-cera) + - [SD-WAN Edge Reference Architecture](#sd-wan-edge-reference-architecture) + - [SD-WAN Hub Reference 
Architecture](#sd-wan-hub-reference-architecture) + +## Converged Edge Reference Architectures (CERA) +CERA is a business program that creates and maintains validated reference architectures of edge networks, including both hardware and software elements. The reference architectures are used by ISVs, system integrators, and others to accelerate the development of production edge computing systems. + +The OpenNESS project has created CERA reference architectures for the SD-WAN edge and the SD-WAN hub. They are used, with OpenNESS, to create a uCPE platform for an SD-WAN CNF on the edge and the hub, respectively. Even though there is only one implementation of the CNF, it can be used for two different purposes, as described below. + +### SD-WAN Edge Reference Architecture +The SD-WAN Edge CERA reference implementation is used to deploy the SD-WAN CNF on a single-node edge cluster that will also accommodate enterprise edge applications. The major goal of SD-WAN Edge is to support the creation of a Kubernetes-based platform that boosts the performance of deployed edge applications and reduces resource usage by the Kubernetes system. To accomplish this, the underlying platform must be optimized and made ready to use IA accelerators. OpenNESS provides support for the deployment of OpenVINO™ applications and workload acceleration with the Intel® Movidius™ VPU HDDL-R add-in card. SD-WAN Edge also enables the Node Feature Discovery (NFD) building block on the cluster to provide awareness of the nodes’ features to edge applications. Finally, SD-WAN Edge implements Istio Service Mesh (SM) in the default namespace to connect the edge applications. SM acts as a middleware between edge applications/services and the OpenNESS platform, and provides abstractions for traffic management, observability, and security of the building blocks in the platform. Istio is a cloud-native service mesh that provides capabilities such as Traffic Management, Security, and Observability uniformly across a network of services. OpenNESS integrates with Istio to reduce the complexity of large scale edge applications, services, and network functions. More information on SM in OpenNESS can be found on the OpenNESS [website](https://openness.org/developers/). + + +To minimize resource consumption by the cluster, SD-WAN Edge disables services such as EAA, Edge DNS, and Kafka. The Telemetry service stays active for all Kubernetes deployments. + +The following figure shows the system architecture of the SD-WAN Edge Reference Architecture. + +![OpenNESS SD-WAN Edge Architecture](sdwan-images/sdwan-edge-arch.png) + + +### SD-WAN Hub Reference Architecture +The SD-WAN Hub reference architecture prepares an OpenNESS platform for a single-node cluster that functions primarily as an SD-WAN hub. That cluster will also deploy an SD-WAN CRD Controller and a CNF, but no other corporate applications are expected to run on it. That is why the node does not enable support for an HDDL card or for Node Feature Discovery and Service Mesh. + +The Hub is another OpenNESS single-node cluster that acts as a proxy between different edge clusters. The Hub is essential to connect edges through a WAN when applications within the edge clusters have no public IP addresses, which requires additional routing rules to provide access. These rules can be configured globally on a device acting as a hub for the edge locations. 
+ +The Hub node has two expected use cases: + +- If the edge application wants to access the internet, or an external application wants to access a service running in the edge node, the Hub node can act as a gateway with a security policy in force. + +- For communication between a pair of edge nodes located at different locations (and in different clusters): if both edge nodes have public IP addresses, an IP tunnel can be configured directly between the edge clusters; otherwise, the Hub node is required to act as a proxy to enable the communication. + +The following figure shows the system architecture of the SD-WAN Hub Reference Architecture. + +![OpenNESS SD-WAN Hub Architecture](sdwan-images/sdwan-hub-arch.png) + +**More details on the Converged Edge Reference Architecture for SD-WAN Edge deployments are available under [Intel® Distribution of OpenNESS](https://www.openness.org/products/intel-distribution).** diff --git a/doc/reference-architectures/core-network/openness_5g_nsa.md b/doc/reference-architectures/core-network/openness_5g_nsa.md new file mode 100644 index 00000000..80529c83 --- /dev/null +++ b/doc/reference-architectures/core-network/openness_5g_nsa.md @@ -0,0 +1,11 @@ +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020 Intel Corporation + +# Edge Cloud Deployment with 3GPP 5G Non Stand Alone + +Edge Compute is highlighted as a key deployment mechanism for delivering services to end users by placing applications closer to the user. Network and Enterprise operators are trying to take advantage of this advancement to provide low-latency, user-centric, and secure edge services. + +OpenNESS supports edge compute deployment for LTE Control and User Plane Separation (CUPS) as described in [OpenNESS_EPC] and 5G Stand Alone as described in [OpenNESS_NGC]. 5G can be deployed in five different deployment options as described in [3GPP 23.799][3GPP_23799], where SA (Stand Alone) options consist of only one generation of radio access technology and NSA (Non Stand Alone) options consist of two generations of radio access technologies (4G LTE and 5G). The early deployments of 5G will adopt either NSA option 3 or standalone option 2, as the standardization of these two options has already been completed. The focus of this paper is the edge deployment using the **5G NSA Option-3 deployment** and how OpenNESS supports those deployment models. + +**More details on the Edge Cloud Deployment with 3GPP 5G Non Stand Alone are available under [Intel® Distribution of OpenNESS](https://www.openness.org/products/intel-distribution).** + diff --git a/doc/reference-architectures/ran/index.html new file mode 100644 index 00000000..57c7da80 --- /dev/null +++ b/doc/reference-architectures/ran/index.html @@ -0,0 +1,14 @@ + + +--- +title: OpenNESS Documentation +description: Home +layout: openness +--- +
You are being redirected to the OpenNESS Docs.
+ diff --git a/doc/reference-architectures/ran/openness_ran.md b/doc/reference-architectures/ran/openness_ran.md new file mode 100644 index 00000000..d6f588eb --- /dev/null +++ b/doc/reference-architectures/ran/openness_ran.md @@ -0,0 +1,18 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020 Intel Corporation +``` + +# OpenNESS Radio Access Network (RAN) + +The Radio Access Network (RAN) is the edge of the wireless network. 4G and 5G base stations form the key network functions for the edge deployment. In OpenNESS, FlexRAN is used as a reference for 4G and 5G base stations as well as 4G and 5G end-to-end testing. + +FlexRAN offers high-density baseband pooling that could run on a distributed Telco\* cloud to provide a smart indoor coverage solution and next-generation fronthaul architecture. This 4G and 5G platform provides the open platform ‘smarts’ for both connectivity and new applications at the edge of the network, along with the developer tools to create these new services. FlexRAN running on the Telco Cloud provides low latency compute, storage, and network offload from the edge, thus saving network bandwidth. + +FlexRAN 5GNR Reference PHY is a baseband PHY Reference Design for a 4G and 5G base station, using the Intel® Xeon® processor family with Intel® architecture. This 5GNR Reference PHY consists of a library of C-callable functions that are validated on several technologies from Intel (Intel® microarchitecture code names Broadwell, Skylake, Cascade Lake, and Ice Lake) and demonstrates the capabilities of the software running different 5GNR L1 features. The functionality of these library functions is defined by the relevant sections in [3GPP TS 38.211, 212, 213, 214, and 215]. Performance of the Intel 5GNR Reference PHY meets the requirements defined by the base station conformance tests in [3GPP TS 38.141]. This library of functions will be used by Intel partners and end customers as a foundation for their product development. The Reference PHY is integrated with third-party L2 and L3 to complete the base station pipeline. + +The diagram below shows FlexRAN DU (Real-time L1 and L2) deployed on the OpenNESS platform with the necessary microservices and Kubernetes\* enhancements required for real-time workload deployment. + +This document aims to provide the steps involved in deploying FlexRAN 5G (gNB) on the OpenNESS platform. + +**More details on the Radio Access Network (RAN) deployment with OpenNESS are available under [Intel® Distribution of OpenNESS](https://www.openness.org/products/intel-distribution).** diff --git a/doc/reference-architectures/ran/openness_xran.md b/doc/reference-architectures/ran/openness_xran.md new file mode 100644 index 00000000..d1887255 --- /dev/null +++ b/doc/reference-architectures/ran/openness_xran.md @@ -0,0 +1,14 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020-2021 Intel Corporation +``` + +# O-RAN Front Haul Sample Application in OpenNESS + +Recent and upcoming telecommunication standards for the Radio Access Network (RAN) tend to introduce open network interfaces that are expected to be adopted by a broad number of RAN vendors and operators. Networks based on common standards are thought to be more open to innovation. 
Thus, standardization committees aim to support the global industry vision and encourage the emerging multi-vendor, interoperable, and innovative virtualized RAN (vRAN) to enable the vRAN shift to the Cloud and exploit the opportunities the Cloud has to offer: scalability, efficiency, cost reduction, and more. Flexible Radio Access Network (FlexRAN), which is part of proof-of-concept work at Intel, demonstrates vRAN deployment on Intel® architecture. It follows the most recent RAN standards and deploys innovative software and hardware solutions proposed by Intel to refine the baseband L1 functionality. Recently, 5GNR FlexRAN has started supporting the open Front Haul\* interface standard introduced by the O-RAN Alliance\* [here](https://www.o-ran.org/specifications). + +The focus of this white paper is to show how OpenNESS facilitates the deployment of 5GNR FlexRAN Front Haul functional units based on O-RAN specifications at the Network Edge. It also demonstrates how OpenNESS may assist in exploiting the capabilities of the X700 family NICs to address the challenges related to 5G RAN evolution, including fast-growing user traffic and the move towards the Edge Cloud. + +This document describes the Intel® Ethernet Controller X710 new capability known as Dynamic Device Personalization (DDP). It provides the steps for utilizing this feature on the OpenNESS platforms. DDP technology has been previously implemented and tested within LTE FlexRAN L1 and proven to reduce network latency and the number of CPU cycles used for packet processing, leading to an increase in overall network throughput. Choosing DDP is a promising option for removing the network bottleneck related to packet filtering and meeting the stringent latency and throughput requirements imposed on 5G networks. Tests performed with FlexRAN using the LTE Front Haul interface based on Ferry Bridge (FB), the codename of a technology from Intel, and incorporating the DDP capability of the Intel® Ethernet Controller X710 showed up to a 34% reduction in CPU cycles used for packet processing, whereas tests performed on a Multi-access Edge Computing (MEC) solution demonstrated a nearly 60% reduction in network latency. These findings are described in an upcoming white paper “Dynamic Device Personalization: Intel Ethernet Controller 700 Series - RadioFH Profile Application Note”. Shifting towards DDP for increased performance is also a promising option for the Network Edge. Such deployment has already been tested on Kubernetes\* architecture and described [here](https://builders.intel.com/docs/networkbuilders/intel-ethernet-controller-700-series-dynamic-device-personalization-support-for-cnf-with-kubernetes-technology-guide.pdf). + +**More details on the O-RAN Front Haul Sample Application in OpenNESS are available under [Intel® Distribution of OpenNESS](https://www.openness.org/products/intel-distribution).** diff --git a/doc/reference-architectures/sdwan-images/sdwan-edge-arch.png b/doc/reference-architectures/sdwan-images/sdwan-edge-arch.png new file mode 100644 index 00000000..6f4053c2 Binary files /dev/null and b/doc/reference-architectures/sdwan-images/sdwan-edge-arch.png differ diff --git a/doc/reference-architectures/sdwan-images/sdwan-hub-arch.png b/doc/reference-architectures/sdwan-images/sdwan-hub-arch.png new file mode 100644 index 00000000..b8d0a4a4 Binary files /dev/null and b/doc/reference-architectures/sdwan-images/sdwan-hub-arch.png differ