Anuket Specifications

Introduction

Anuket Project

Overview

Initially organised early in 2019, the Cloud iNfrastructure Telco Taskforce (CNTT) was created in response to rapid changes in how networking applications are designed, built and managed, and to a growing recognition of a functional gap between the previous standard infrastructure models and the architectures needed to support Network Function Virtualisation (NFV) applications. Organisationally, the Cloud iNfrastructure Telco Taskforce was jointly hosted by GSMA and the Linux Foundation. In 2021, the CNTT and Open Platform for NFV (OPNFV) projects were merged to form the Anuket project.

Terminology and Glossary

The definitions and intent of the terminology used throughout these documents are given in the Glossary.

Problem Statement

Based on informal conversations with many operators and developers, there is a realisation that significant technical, operational and business challenges to the development and deployment of VNF/CNF applications stem from the lack of a common cloud infrastructure platform. These include, but are not limited to, the following:

  • Higher development costs due to the need to develop Virtual Network Functions (VNF) on multiple custom platforms for each operator

  • Increased complexities due to the need to maintain multiple versions of applications to support each custom environment

  • Lack of testing and validation commonalities, leading to inefficiencies and increased time to market. While operators will still do internal testing, an industry-driven verification program based on a common cloud infrastructure would provide a head start.

  • Slower adoption of cloud-native applications and architectures. A Common Telco Cloud may provide an easier path to methodologies that will drive faster cloud-native development.

  • Increased operational overhead due to the need for operators to integrate diverse and sometimes conflicting cloud platform requirements.

One of the major challenges holding back the more rapid and widespread adoption of VNFs is that the traditional telecom ecosystem vendors, while building or designing their virtualised services (whether Voice over LTE (VoLTE), Evolved Packet Core (EPC), or popular customer-facing enterprise services such as Software Defined Wide Area Network (SD-WAN)), are making their own infrastructure assumptions and requirements, often with custom design parameters. This forces operators to build complex integrations of various vendor/function-specific silos which are incompatible with each other and may have different and conflicting operating models. In addition, this makes the onboarding and conformance processes of VNFs/CNFs (coming from different vendors) hard to automate and standardise.

To put this effort into perspective, over the past few years the telecom industry has been going through a massive technology revolution, embracing software defined networking and cloud architecture principles in pursuit of greater flexibility, agility and operational efficiency. At a high level, the main objective of NFV (Network Function Virtualisation) is the ability to use general purpose, standard COTS (Commercial Off the Shelf) compute, memory and storage hardware platforms to run multiple Virtualised Network Functions. Earlier common infrastructure models, built on the assumption that networking applications are typically built on discrete hardware, do not offer the flexibility and agility needed to support newer networking technologies such as 5G, intelligent networks and Edge computing. By running network applications as software rather than on purpose-built hardware, as has been done since the early 1990s, operators aspire to realise operational efficiencies and capital expense savings. These Software Defined Network (SDN) applications are increasingly being used by telecom operators to support their internal and customer-facing network infrastructures. The need for a common model across the industry to facilitate more rapid adoption is clear.

Project Goals and Purpose

The goal of the project is to develop a robust infrastructure model and a limited, discrete set of architectures built on that model that can be validated for use across the entire member community. The community, made up of a cross-section of global operators and supporting vendors alike, was created so that NFV applications can be developed, deployed and managed faster and more easily.

All of this has led to a growing awareness of the need to develop more open models and validation mechanisms to bring the most value to telco operators as well as vendors, by agreeing on a standard set of infrastructure profiles for the underlying infrastructure supporting VNF/CNF applications across the industry and the telecom community at large. To achieve this goal, the cloud environment needs to be fully abstracted via APIs and other mechanisms to the VNFs/CNFs, so that both the developers of VNF/CNF applications and the operators managing the environments can benefit from the flexibility that the disaggregation of the underlying infrastructure offers.

The next step, after the Reference Model has been identified and developed, is to take the general model, which is purposely designed to be applicable to a number of technologies, and apply it to a discrete number of concrete and ultimately deployable Reference Architecture platforms. The intention is to choose the reference architectures carefully, so that there will only be a small set of architectures that meet the specific requirements for supporting NFV and Telecom-specific applications. Per the principles laid out later in this document, the Reference Architectures need to meet the following criteria as much as is practical:

  • Initially, architectures should be based on widely established technology and systems used in the Telecom industry. This will help ensure a faster adoption rate, because operators are already familiar with the technology and might even have systems in production. Another advantage of this approach is a faster project development cycle.

  • Subsequent architectures should be based on either additional established or promising emerging technologies that are chosen by the community members.

Common Cloud Infrastructure Benefits

By providing a pre-defined environment with common capabilities, applications can be developed and deployed more rapidly. In addition, the common infrastructure can be optimised for various workloads, such as IT (Information Technology), VNF, AI (Artificial Intelligence), and other future workload types as new technologies emerge. The benefits of this approach are:

  • Configuration automation over customisation

    • By abstracting the infrastructure capabilities as much as possible, operators are able to use common infrastructure platforms across all VNF/CNF vendors.

    • Maintaining a consistent infrastructure allows for higher levels of automation due to a reduced need for customisation of the various components.

    • Overall, the intention is to reduce the total cost of ownership for operators and development costs for vendors.

  • Onboarding and conformance

    • By defining abstracted infrastructure capabilities, and the metrics by which they are measured, the onboarding and conformance process for both cloud infrastructure and VNFs/CNFs can be standardized, reducing development time for the VNF/CNF developers and deployment and operational management costs for the operators standing up the cloud environments.

    • Supply chain, procurement and assurance teams can then use these metrics to more accurately assess the most efficient / best value vendor for a given environment and network services requirement.

  • Better utilisation

    • Properly mapping VNF/CNF flavours to the underlying infrastructure brings the potential for more efficient utilisation than creating specific infrastructure configurations for each type of application.

In conclusion, to serve the stated objective of building a common cloud infrastructure that can take advantage of true cloud models for more rapid development and deployment of SDN/NFV applications, the Anuket specifications include a reference model, a select set of architectures, a set of reference implementations, and a set of conformance suites, so that there is a more consistent model infrastructure for developers and vendors of SDN software and applications to build to.

Anuket General Principles

Any specifications created within the Anuket project must conform to the following principles:

Overall Principles
  1. A top-level objective is to build a single, overarching Reference Model with the smallest number of Reference Architectures tied to it as is practical. Two principles are introduced in support of these objectives:

    • Minimise Architecture proliferation by stipulating compatible features be contained within a single Architecture as much as possible:

      • Features which are compatible, meaning they are not mutually exclusive and can coexist in the same cloud infrastructure instance, shall be incorporated into the same Reference Architecture. For example, IPv4 and IPv6 should be captured in the same Architecture, because they do not interfere with each other.

      • Focus on the commonalities of the features over the perceived differences. Seek an approach that allows small differences to be handled at either the low-level design or implementation stage. For example, assume the use of existing common APIs over new ones.

    • Create an additional Architecture only when incompatible elements are unavoidable:

      • Creating additional Architectures is limited to when incompatible elements are desired by the Anuket Project members. For example, if one member desires KVM be used as the hypervisor, and another desires ESXi be used as the hypervisor, and no compromise or mitigation* can be negotiated, the Architecture could be forked, subject to community consensus, such that one Architecture would be KVM-based and the other would be ESXi-based.

        *Depending on the relationships and substitutability of the component(s) in question, it may be possible to mitigate component incompatibility by creating annexes to a single Architecture, rather than creating an additional Architecture. With this approach, the infrastructure architecture designers would implement the Architecture as described in the reference document, but where there is a potential incompatibility for a particular component, they would select their preferred option from one of the relevant annexes. For example, if one member wanted to use a Software-Defined Storage (SDS) solution such as Ceph, and another member wanted to use a Storage Area Network (SAN), then, assuming the components are equally compatible with the rest of the Architecture, there could be one annex for the Ceph implementation and one annex for the SAN implementation.

  2. Cloud Infrastructure provides abstract and physical resources corresponding to:

    • Compute resources

    • Storage resources

    • Memory resources

    • Networking resources (Limited to connectivity services only)

    • Acceleration resources

  3. Vendor independence of Cloud Infrastructure exposed resources.

  4. Cloud Infrastructure Application Programming Interfaces (APIs) ensure interoperability (multi-vendor, component substitution), drive simplification, and enable open source implementations that have an open governance model (e.g., coming from Open Communities or Standards Development Organisations).

    • These APIs support, for example, cloud infrastructure resource discovery, monitoring by management entities, configuration on behalf of workloads, and consumption by workloads

  5. Workloads are modular and designed to utilise the minimum resources required for the service.

  6. Workloads consume only the resources, capabilities and features provided by the Cloud infrastructure.

  7. Workload functional capabilities are independent of Cloud Infrastructure (hardware and software) accelerations.

  8. Workloads are independent of hardware-dependent software in the Cloud Infrastructure (hardware and software).

    • This is in support of workload abstraction, enabling portability across the Infra and simplification of workload design

    • Use of critical features in this category are governed by technology specific policies and exceptions in the RA specifications.

  9. Specific internal hardware details are abstracted above the Infrastructure Cloud Management layers, unless managed through a Hardware Infrastructure Manager.

    • This is in support of workload abstraction, enabling portability across the Infra and simplification of workload design

    • Use of critical features in this category are governed by technology specific policies and exceptions in the RA specifications.

Requirements Principles

These are the agreed-upon rules and recommendations to which a compliant workload or cloud infrastructure must adhere:

  • All requirements will be hosted and maintained in the RM or relevant RA

  • All requirements must be assigned a requirements ID and not be embedded in narrative text. This is to ensure that readers do not have to infer if a requirement exists and is applicable

  • Requirements must have a unique ID for tracking and reference purposes

  • The requirement ID should include a prefix to delineate the source project

  • Requirements must state the level of compliance (ex: MUST, SHOULD, MAY) per RFC 2119[2]

  • Mandatory requirements must be defined in such a way that they are unambiguously verifiable via automated testing

  • Requirements should be publishable or extractable into a machine-readable format such as JSON (see the illustrative sketch after this list)

  • Requirements should include information about the impact of non-conformance and the rationale for their existence
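
To make these principles concrete, the following is a minimal, purely illustrative sketch of a single requirement rendered in JSON. The field names and the requirement ID are hypothetical and do not represent a schema defined by Anuket:

    {
      "id": "ra1.example.001",
      "source": "RA-1",
      "level": "MUST",
      "description": "The cloud infrastructure must expose <capability> via a published API.",
      "rationale": "Enables automated discovery and consumption by workloads.",
      "impact_of_non_conformance": "Workload onboarding cannot be fully automated.",
      "verification": "automated-test"
    }

Here the "id" prefix delineates the source project, the "level" field carries the RFC 2119 keyword, and the rationale and impact fields capture the recommended supporting information.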

Architectural Principles

Following are a number of key architectural principles that apply to all Reference Architectures produced by the Anuket project:

  1. Open-source preference: for building Cloud Infrastructure solutions, components and tools, using open-source technology.

  2. Open APIs: to enable interoperability, component substitution, and minimise integration efforts.

  3. Separation of concerns: to promote lifecycle independence of different architectural layers and modules (e.g., disaggregation of software from hardware).

  4. Automated lifecycle management: to minimise the end-to-end lifecycle costs, maintenance downtime (target zero downtime), and errors resulting from manual processes.

  5. Automated scalability: of workloads to minimise costs and operational impacts.

  6. Automated closed loop assurance: for fault resolution, simplification, and cost reduction of cloud operations.

  7. Cloud nativeness: to optimise the utilisation of resources and enable operational efficiencies.

  8. Security compliance: to ensure the architecture follows industry best security practices and is, at all levels, compliant with relevant security regulations.

  9. Resilience and Availability: to withstand single points of failure.

Scope

Within the framework of the common Telecom cloud infrastructure vision, four levels of documents are needed to describe the components, realise the practical application of the systems, and qualify the resulting cloud infrastructure. As highlighted in Figure 1, they are: Reference Model, Reference Architecture, Reference Implementation, and Reference Conformance.

"Figure 1: Documentation Scope of Anuket specifications"

Figure 1: Documentation Scope of Anuket specifications

Functional Scope

To meet the goals, as described above, the Anuket project is focussed on:

  • Functional capabilities of the cloud infrastructure and the infrastructure management

  • Functional interfaces between infrastructure and infrastructure management

  • Functional interfaces between workloads and workload management

Due to the close alignment with ETSI GS NFV 002[3], those ETSI interfaces that are considered relevant (with notes where required) are included in the figure below.

"Figure 2: Functional Scope of Anuket specifications"

Figure 2: Functional Scope of Anuket specifications

Out of Scope Components

While the nature of the Anuket project might seem quite broad, the following areas are not at this time part of the scope of this effort.

  • Hardware specifications: beyond the abstracted high-level CPU, memory, network interface and storage elements. The intention is to write the documents so they are general enough that any vendor hardware can be used in a conformant implementation without making significant changes to the model.

  • Workload specifications: Other than the API interfaces when they directly need to touch the workloads themselves, the intention is to assume the workload application is a black box that the cloud infrastructure is providing resources to. The majority of interactions for lifecycle management of the workloads will be through the cloud infrastructure whenever possible.

  • Lifecycle Management of the CaaS Clusters: whilst a complete NFV-MANO solution would need to provide lifecycle management for the Kubernetes clusters it uses to deploy its CNFs, the Anuket specifications do not describe the NFVO and VNFM parts; therefore the management of the cluster(s) is out of scope, while the VIM and the lifecycle management of containers (by Kubernetes) are in scope.

  • Company specific requirements: The Anuket specifications are designed to be general enough that most operators and others in the open source communities will be able to adapt and extend them to their own non-functional requirements.

Specification Types
  • Reference Model (RM): focuses on the Infrastructure Abstraction and how services and resources are exposed to VNFs/CNFs. It needs to be written at a high enough level that as new Reference Architectures and Reference Implementations are added, the model document should require few or no changes. Additionally, the Reference Model is intended to be neutral towards VMs or Containers.

  • Reference Architecture (RA): A Reference Architecture defines all infrastructure components and properties that affect VNF/CNF run time, deployment time, and design time. It is expected that at least one, but not more than a few, Reference Architectures will be created, and that they will conform to the Reference Model. The intention is, whenever possible, to use existing elements rather than specify entirely new architectures in support of the high-level goals specified in the Reference Model.

  • Reference Implementation (RI): Builds on the requirements and specifications developed in the RM and RAs and adds details so that it can be implemented. Each Reference Architecture is expected to be implemented by at least one Reference Implementation.

  • Reference Conformance (RC): Builds on the requirements and specifications developed in the other documents and adds details on how an implementation will be verified, tested and certified. Both infrastructure verification and conformance, as well as VNF/CNF verification and conformance, will be covered.

Figure 3 below illustrates how each specification type relates to the different elements of a typical cloud platform stack.

"Figure 3: Documentation Scope of Anuket specifications"

Figure 3: Documentation Scope of Anuket specifications

Below is a diagram of the different artefacts that need to be created to support the implementation of the abstract concepts presented in the Reference Model, which are then applied to create the Reference Architectures that are deployed using the requirements spelled out in the Reference Implementations.

"Figure 4: Description of the possible different levels of Anuket specification artefacts"

Figure 4: Description of the possible different levels of Anuket specification artefacts

Relationship to other industry projects

The Anuket work is not done in a vacuum. The intention from the beginning was to utilize the work from other open source and standards bodies within the industry. Some of the projects, but by no means all, that are related in some way to the Anuket efforts include:

  • ETSI NFV ISG

  • OpenStack

  • ONAP

  • CNCF

  • MEF

  • TM Forum

  • OSM (ETSI Open Source MANO project)

  • ODIM (Open Distributed Infrastructure Management)

  • VMware (While not an open source project, VMware is a commonly used platform used for VNF deployments in the telecom industry)

Relationship to ETSI-NFV

The ETSI NFV ISG is very closely related to the Anuket project, in that it is a group that is working on supporting technologies for NFV applications (Figure 5 illustrates the scope of ETSI-NFV). To facilitate more collaboration as the project matures, the Anuket specifications’ scope (Figure 2 above) purposely references certain ETSI NFV reference points, as specified by ETSI GS NFV 002[3].

"Figure 5: Scope ETSI NFV"

Figure 5: Scope ETSI NFV

Relationship between Anuket projects and AAP

The Anuket project is also closely aligned with the Anuket Assured Program (AAP), an open source, community-led compliance and verification program that demonstrates the readiness and availability of commercial NFV and cloud native products and services, including a Vendor's Implementation (VI) of both infrastructure and workloads. The AAP combines open source based automated compliance and verification testing of multiple parts of the stack against specifications established by Anuket, ONAP, multiple SDOs such as ETSI and GSMA, and the LFN End User Advisory Group (EUAG).

An implementation is created that adheres to the Anuket Reference Implementation specifications. Products can then undergo a conformance program based on the Anuket Reference Conformance specifications, using the Anuket-specified testing frameworks and tools. Figure 6 below illustrates the relationship with the Anuket Assured Program in more detail; the figure is specific to the OpenStack-based specifications, but the set-up is similar for other implementations.

"Figure 6: Relationship between Anuket and Anuket Assured Program"

Figure 6: Relationship between Anuket and Anuket Assured Program

As can be seen from the above figure, roles and responsibilities are as follows:

  • Anuket specifies lab requirements in the Reference Implementation document which will be used to define what labs can be used within the community for the purpose of installing and testing Anuket conformant cloud infrastructure implementations.

  • Anuket includes a lab Playbook in its Reference Implementation detailing available suitable labs to run and test cloud infrastructure implementations; the playbook includes processes, access procedures and other details.

  • Anuket specifies requirements in the Reference Implementation document for installers that can be used to install a cloud infrastructure.

  • Anuket includes an installation Playbook in its Reference Implementation specifications detailing how to install an infrastructure using Anuket conformant installers.

An infrastructure that follows the Anuket Reference Implementation specifications and passes all the tests specified in the Anuket Reference Conformance document is referred to as an Anuket Reference Implementation.

  • Anuket specifies testing framework requirements in the Reference Conformance document that will be used to determine a suitable testing framework and portals to be used for the purpose of running test suites and tools, and to carry out badging processes.

  • The Anuket Reference Conformance document defines high level test cases, for requirements from both the Reference Model and Reference Architecture, that are used to determine the testing projects within the community suitable to deliver these tests.

  • Anuket includes a traceability matrix in its Reference Conformance document detailing every test case (or group of test cases) available in the community and mapping them to the high-level test case definitions and the requirements they fulfil.

  • The Anuket Reference Conformance document includes a testing Playbook detailing how to run the testing framework and test cases against commercial NFV products (infrastructure and workload) to check conformance to the Anuket specifications. The testing Playbook also details how to submit testing results for the AAP badging process.

Relationship to CNCF

A close relationship between Anuket and CNCF is maintained around the contents development for RA-2, RI-2, and RC-2.

Relationship to other communities

Anuket collaborates with relevant API workgroups of SDOs (such as MEF, TM Forum, 3GPP, TIP, etc.) where applicable, to align with their specification work and utilise their efforts.

Abbreviations

Please refer to Abbreviations for a full list.

References

Please refer to References for a full list.

Use Cases

The Anuket Project addresses a wide range of use cases, from the core to the edge of the network. The different use cases supported by the Anuket Project specifications are described in the Use Cases section of the Reference Model (Chapter 2).

Roadmap and Releases

Releases

Baldy Release Notes

Figure 1: Baldy Release Organisation

Baldy Release Contents
Overview

Reference #       Feature                                      Notes
baldy.tech.1      VNF Evolution policy and strategy
baldy.tech.2      Backward/Forward Compatibility
baldy.tech.3      Future Roadmap

Reference Model (v3.0)

WSL Note: Features below should be implemented in order.

Reference #       Feature                                                                                        Notes
baldy.rm.1        General Cleanup                                                                                All Chapters
baldy.rm.2        Limiting infrastructure profiles to Basic and Network Intensive (parking Compute Intensive)    Ch02, Ch04, and Ch05
baldy.rm.3        Finalising Compliance, Verification and Conformance Strategy                                   Ch08
baldy.rm.4*       Full Container support                                                                         Ch04
baldy.rm.5        Complete Security Chapter                                                                      Ch07: 100% alignment with ONAP
baldy.rm.6        Virtual Networking/Networking Fabric Modelling                                                 Ch03
baldy.rm.7        Generic Installer Model                                                                        Ch09
baldy.rm.8        Guidelines                                                                                     Appendix-A

*Baldy Release includes at least features up to and including baldy.rm.4.

Reference Architecture 1 (v2.0)

WSL Note: Features below should be implemented in order.

Reference #       Feature                                               Notes
baldy.ra1.1       General Cleanup                                       All Chapters
baldy.ra1.2       Clarify OpenStack version policy                      Ch01, Ch05
baldy.ra1.3       Incorporate RM Requirements in RA-1 Requirements      Ch02
baldy.ra1.4       Create a proposal for an existing Gap                 Ch08
baldy.ra1.5       Complete High Level Architecture                      Ch03
baldy.ra1.6*      Complete Interfaces & APIs                            Ch05
baldy.ra1.7       Complete Components Level Architecture                Ch04
baldy.ra1.8       Complete Security Chapter                             Ch06
baldy.ra1.9       Complete LCM Chapter                                  Ch07

*Baldy Release includes at least features up to and including baldy.ra1.6.

Reference Conformance 1 (v2.0)

Figure 2: RC-1 Baldy Release plan

WSL Note: Features below should be implemented in order.

Reference #       Feature                                                     Notes
baldy.rc1.1       General Cleanup                                             All Chapters
baldy.rc1.2       Clarify Conformance Categories (NFVI & VNFs)                Ch01
baldy.rc1.3       Complete NFVI Framework Requirements                        Ch02
baldy.rc1.4       Categorise NFVI TC Req and Write API Testing TC             Ch03
baldy.rc1.5       Create NFVI Mapping & Traceability Matrix and populate it   Ch05
baldy.rc1.6       Restructure NFVI Cookbook and Cleanup                       Ch04
baldy.rc1.7       NFVI Framework & Test Cases Development                     DEV
baldy.rc1.8       RC-1 test suites can run against RI-1                       DEV

Reference Implementation 1 (v3.0-alpha)

Figure 1: RI-1 Baldy Release plan

Reference #       Feature                                                                 Notes
baldy.ri1.1       General Cleanups                                                        All Chapters
baldy.ri1.2       Complete Overall Requirements                                           Ch02
baldy.ri1.3       Complete Lab Requirements                                               Ch04
baldy.ri1.4       Complete Target State & Metadata                                        Ch03
baldy.ri1.5       Complete Installer Requirements                                         Ch05
baldy.ri1.6       Complete Lab Cookbook (Ops)                                             Ch06
baldy.ri1.6       Restructure & Complete Integration Cookbook                             Ch07
baldy.ri1.7       Implement Profiles within OPNFV Installers and consume CNTT metadata    DEV
baldy.ri1.8       RI-1 passes the RC-1 test suite execution (For sanity and APIs)         DEV

Reference Architecture 2 (v3.0)

WSL Note: Features below should be implemented in order. For Baldy, at least the features up to and including baldy.ra2.4 are included.

Reference #       Feature                                            Notes
baldy.ra2.1       General Cleanup                                    All Chapters
baldy.ra2.2       Complete Requirements & Map to RM                  Ch02
baldy.ra2.3       Finish High Level Architecture                     Ch03
baldy.ra2.4*      Propose solution to an existing gap                Ch08
baldy.ra2.5       More Details about Component Level Architecture    Ch04
baldy.ra2.6       More Details about Interfaces & APIs               Ch05
baldy.ra2.7       More Details about Security                        Ch06
baldy.ra2.8       More Details about LCM                             Ch07
baldy.ra2.9       More Details and proposals about Gaps              Ch08
baldy.ra2.10      Guidelines                                         Appendix-A

*Baldy Release includes at least features up to and including baldy.ra2.4.

Baraque Release Notes

Figure 1: Baraque Release Structure

Baraque Release Contents
Overview

This release note highlights the top features included in the Baraque release. For a full, detailed list of features, please refer to the Baraque Release Planning page.

Reference Model (v4.0)

Reference #       Feature                                    Notes
baraque.rm.1      Networking Resources                       Support for Advanced Networking Resources & SDN
baraque.rm.2      Networking & Storage Characterisation      More Metrics for Networking and Storage
baraque.rm.3      Full Container support
baraque.rm.4      HW Acceleration support                    Support Hardware Acceleration Resources
baraque.rm.5      New Edge Profile                           Support a new profile for Edge Use cases

Reference Architecture 1 (v3.0)

Reference #       Feature                              Notes
baraque.ra1.1     New OpenStack Base release
baraque.ra1.2     Support for SmartNic                 For vSwitch Offload
baraque.ra1.3     Support for Hardware Acceleration    To support Hardware acceleration resources exposed by RM

Reference Conformance 1 (v3.0)

Reference #       Feature                Notes
baraque.rc1.1     General Cleanup
baraque.rc1.2     Traceability Matrix    Centralised Traceability Matrix

Reference Implementation 1 (v3.0)

Reference #       Feature                   Notes
baraque.ri1.1     Installer Requirements    Finalise Installer Requirements
baraque.ri1.2     Installation Cookbook     Finalise Installation Cookbook
baraque.ri1.3     Labs Cookbook             Finalise Lab Cookbook

Reference Architecture 2 (v4.0)

Reference #       Feature                    Notes
baraque.ra2.1     Requirements               Finalise Requirements
baraque.ra2.2     Traceability Matrix        Centralised Traceability Matrix
baraque.ra2.3     Architecture Specs (L3)    Full Architectural Specs (Component Level)
baraque.ra2.4     Edge Usecases support      As Defined by Edge WS

Reference Conformance 2 (v4.0-alpha)

Reference #       Feature                 Notes
baraque.rc2.1     Testing Requirements    Initial Content
baraque.rc2.2     Traceability Matrix     Centralised Traceability Matrix
baraque.rc2.3     Testing Cookbook        Initial Content

Reference Implementation 2 (v4.0-alpha)

Reference #       Feature                   Notes
baraque.ri2.1     Installer Requirements    Initial Content
baraque.ri2.2     Lab Requirements          Initial Content
baraque.ri2.3     Installation Cookbook     Initial Content

Roadmap

Overview
  • The activities of the Anuket Project community are articulated around Projects (sub-projects of the Anuket Project, sometimes also referred to as Workstreams (WS)), Milestones and Releases.

  • The Anuket Project embraces a simultaneous delivery model, meaning that all contributing projects have to follow the same cadence and intermediate milestones.

  • The Anuket Project release is the only delivery vehicle and is common to all projects.

  • The Anuket Project current release plan is available here.

"Figure 1: Milestones"

Figure 1: Milestones

Definitions

A project (aka WS) is:

  • A long-term endeavour set up to deliver features across multiple releases, as shown in Releases Home

  • Led by leads/co-leads, contributors and committers with expertise in the relevant areas

  • Scripted and documented in repositories

A Release is:

  • A short-term endeavour set up to deliver specific features/functionalities, as shown here.

  • An agreed common framework (template, criteria, best practice) for all projects

  • A unique release planning calendar with pre-defined milestones for each project

  • A vehicle to coordinate multiple projects and multiple types of projects (reference model and architecture, documentation, integration, packaging, deployment)

A Bundle is: A set of related specifications that are built to complement each other, specifically (RM -> RA -> RC -> RI).

A Version:

  • Each document within a release has a number attached to it that consists of Bundle.Version:

    • Bundle: specifies the bundle to which the document belongs.

    • Version: specifies the sequential version of each document (improvement or enhancements).

  • Any change in the RM that impacts the RAs, and consequently the RC and RI, triggers a new Bundle number (see the illustrative sketch below).
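
As a purely illustrative sketch of this numbering rule (the function name is hypothetical; Anuket does not ship such a tool):

    # Illustrative sketch of the Bundle.Version rule described above.
    def next_document_number(current: str, rm_change_impacts_ras: bool) -> str:
        bundle, version = (int(part) for part in current.split("."))
        if rm_change_impacts_ras:
            # An RM change impacting the RAs (and consequently RC and RI)
            # starts a new bundle, and the version restarts.
            return f"{bundle + 1}.0"
        # Any other improvement or enhancement is a sequential version bump.
        return f"{bundle}.{version + 1}"

    assert next_document_number("4.0", rm_change_impacts_ras=False) == "4.1"
    assert next_document_number("4.0", rm_change_impacts_ras=True) == "5.0"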

High Level Roadmap

Figure 2: The Anuket Project Technical Specification Roadmap

Detailed Roadmap

Please refer to individual release plans and features for detailed roadmap.

Detailed Milestones

Kick-Off (M0)
  Goal: Open the innovation platform for the intent to participate in the Anuket Project release. A release Kick-Off review takes place for each release.
  Activities: Name the release and create the appropriate labels in GitHub.

Planning & Scoping
  Goal: Capture the initial set of features and functionality that should be part of the release, along with their prioritisation.
  Activities: Identify the features and functionality, including items from the backlog, that will be developed and documented as part of the current release (features, functionality, errors, etc. are logged in GitHub as Issues). Identify what is in or out of scope for the release. Escalate any issues to the TSC.

Release Plan Review (M1)
  Goal: Ensure the plan is complete, sufficient, and aligned with the release milestones, and that all people resources are identified, documented and committed.
  Activities: After the review cut-off date, any major feature and functionality changes will be added to the backlog, unless approved by the TSC for addition to the current release scope. Bug fixes and minor changes identified during development are allowed; any other content changes are to be approved by the TSC.

Scope Changes/Logging
  Goal: Log the feature/functionality changes that are to be part of the current release.
  Activities: Log feature/functionality changes (in GitHub) for the current release scope. Project leads ensure that features/functionality are correctly labelled, mapped to the corresponding project and milestone, etc.

Scope Freeze (M2)
  Goal: Mark the end of adding new features/functionalities to the release.
  Activities: All project leads verify that the issues are correctly labelled, mapped to the corresponding project and milestone, etc. Feature/functionality changes (except for bug fixes) identified post-freeze are added to the backlog; exceptions need TSC approval.

Feature/Functionality/Content Development
  Goal: Ensure that changes to features and functionalities are captured, and that all content necessary for the in-scope features and functionalities is developed as part of the release scope.
  Activities: Update features/functionality as they evolve. Develop and update the content for the release's in-scope features and functionalities.

Content Freeze (M3)
  Goal: Mark the point at which all features are documented and resolutions are provided for all impacting defects. After Content Freeze, no new features/functionalities are allowed into the current release; only critical fixes are allowed.
  Activities: All project leads review the document and ensure that all planned features are documented and that fixes are available before the end of the Content Freeze. Uncompleted features/functionality are added to the backlog after discussion and approval by the TSC.

Content Review
  Goal: Carefully review and validate the contents and check the document for errors.
  Activities: Validate that the content is within the release scope and is technically correct. Check the document for grammatical errors, extraneous items, etc. Close all in-scope and reviewed projects/issues, and move all others to the backlog after discussion and approval by the TSC.

Content Review Freeze (M4)
  Goal: Perform the final proofreading of the document before it is released. This is the release content completion milestone.
  Activities: All projects are closed or marked Backlog; any exceptional approval is discussed with the TSC.

Release Packaging
  Goal: Package the precise, reviewed document versions into a new release branch.
  Activities: Create the new release branch after the content review ends.

Release Candidate (RC0)
  Goal: Ensure the documentation is properly aligned and fully reviewed in the new release branch.
  Activities: Prioritise the required fixes and address them. If any critical fixes are required, they will be provided and tagged as a minor release (e.g., Baldy 4.0.1).

Release End
  Goal: The Release Sign-Off review ensures that all projects have successfully passed all reviews, and that all committed deliverables are available and have passed the quality criteria.

Table 1: Detailed Milestones

Dependencies between various Workstreams

The various workstreams in the Anuket Project are:

  • Reference Model (RM)

  • Reference Architecture (RA)

  • Reference Implementation (RI)

  • Reference Conformance (RC)

In simple terms, the workstream dependency relationship is as follows: the Reference Conformance suites verify and test the Reference Implementation; the Reference Implementation follows the requirements and architecture defined in a Reference Architecture; and each Reference Architecture describes the high-level system components and their interactions, adhering to the requirements and expectations set by the Reference Model, which sets the standards for infrastructure abstraction, compliance and verification.

For release stabilisation, in each release all documents that are related to each other have the same main version number, as shown in Figure 3.

There are two different tracks in the Anuket Project:

  • Virtualised workloads, deployed on OpenStack

  • Cloud Native workloads, deployed on Kubernetes

Each track follows the industry-driven standards in the Reference Model, as depicted in Figure 3 below.

"Figure 3: Anuket Project WS Dependencies"

Figure 3: Anuket Project WS Dependencies

Dependencies with Industry Communities

The Anuket Project works collaboratively with other standards bodies and open source communities such as:

  • CNCF

  • ETSI ISG NFV

  • ETSI ISG MEC

  • MEF

  • ONAP

  • OpenInfra OpenStack

  • Telecom Infra Project (TIP)

  • XGVELA

Anuket Project Technical Policies and Transition Plan

Anuket Project Policies for Managing Non-Conforming Technologies

There are multiple situations where a policy, comprised of one or more compromises and/or transitions, is required to address technology that does not presently conform to the Anuket Project mandates or strategy, and hence requires explicit direction prescribing how the situation will be treated in the present, as well as in the future. This informs application designers how the RC suites will react when encountering such technologies during the qualification process, including flagging warnings and potentially errors, which could prevent the issuance of a certification badge.

Feature Availability

One such case is where the Anuket Project strategically deems a given capability mandatory, but the feature is a roadmap item, under development, or otherwise unavailable. To address this scenario, a policy can be created to recognise the current state of the technology and to identify a Time Point (TP) in the VNF evolution at which the feature will become mandatory for RC purposes.

Current Anuket Project Policies

The following sets of compromises and transition plans comprise the policy for each technology subject to this document.

Be aware that the compromises and transition plans contained herein are directly related to factors which are subject to change with the evolution of technology, with changes in industry direction, with changes in standards, etc. Hence, the policies are subject to change without notice, and the reader is advised to consult the latest online GitHub revision of this chapter. All locally stored, printed or other copies should be considered obsolete.

Note to Authors: Status should be set to “Proposed” when initial content is entered. Once alignment is attained following vetting and discussion, status should be set to “Aligned”. Immediately prior to merge, status should be set to “In Force”. When amending previously approved language, status should be changed from “In Force” to “In Force (Pending Changes)”, followed by “Aligned” and ultimately, “In Force”.

Anuket Project Technical Transition Plan

Overall Transition Plan is explained in the Governance Adoption strategy.

Other Policies

Anuket OpenStack Baseline Release Selection

This section specifies policies for the selection of the next Anuket OpenStack baseline release and for the number of releases that Anuket shall support:

  • criteria for the triggering of the next baseline selection

  • criteria to use in choosing the next OpenStack release, and

  • the number of OpenStack releases to be supported by Anuket specifications

The selection of a new OpenStack baseline release will be associated with a new Anuket release and a whole set of documents (RA1, RI1 and RC1) with new versions. Please note that while a new OpenStack release selection may only trigger updates to certain sections, all document releases will be complete and can be utilised on their own independent of previous releases.

Triggering Events for next release selection

This section specifies events that may trigger the selection of the next OpenStack release.

  • Complete change in architecture: OpenStack, an OpenStack service, or a major API change of an OpenStack service required by RA-1

  • New OpenStack features, services or projects required by workloads targeted for Anuket-compliant cloud infrastructure

  • Major security fix (not fixable through a patch, whether in OpenStack or the OS) that affects APIs

  • The current Anuket OpenStack release entered the "Extended Maintenance" phase approximately 18 months ago

OpenStack Release Selection Committee

On the occurrence of any of the triggering events, the TSC shall constitute an OpenStack Release Selection Committee composed of a maximum of 7 (seven) members representing both operators and vendors. These committee members shall be from active Anuket member organisations and meet the criteria specified for Active Community Members in the current version of the Anuket Charter. The committee decisions shall be by consensus, and no decision shall be made without at least 5 members agreeing. The committee may agree by unanimous agreement to adjust the OpenStack Release Selection Criteria.

OpenStack Release Selection Criteria

The OpenStack Release Selection Committee shall utilize the following criteria, and any other criteria that it unanimously agrees to, in selecting the next Anuket OpenStack baseline release:

  • The latest OpenStack release that was released approximately 6 months ago

    • The Committee may agree to relax or extend the 6 months period based on its knowledge of OpenStack releases

  • The OpenStack release should be supported by the OPNFV Installer (Airship)

  • Backward Compatibility: ensure API support

  • Consider OpenStack Distribution vendors and their extended support versions

Deprecation of Anuket OpenStack Releases

Anuket shall support no more than 2 (two) OpenStack releases at any given time. Thus, on selection of a new Anuket OpenStack baseline release, an existing Anuket OpenStack release may be deprecated. The selection of a new Anuket OpenStack release n triggers the deprecation of the n-2 release: on completion of the Reference Architecture for release n, release n-2 stands deprecated. Please note that references to releases in this subsection are to Anuket's OpenStack releases, where Pike is release 1.
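
As a purely illustrative sketch of this support window (the function name is hypothetical):

    # Illustrative sketch: Anuket supports at most two OpenStack releases,
    # so selecting Anuket OpenStack release n deprecates release n-2
    # (Pike is release 1).
    def supported_releases(n: int) -> set:
        return {r for r in (n - 1, n) if r >= 1}

    # Selecting release 3 leaves releases 2 and 3 supported;
    # release 1 (Pike) stands deprecated.
    assert supported_releases(3) == {2, 3}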

Relevant Technologies

There are different technologies used and specified by the Anuket Project specifications. This section describes the technologies relevant to the Anuket Project and clarifies the Anuket Project's position on them.

Anuket Project Relevant Technologies

Virtualisation

There are different ways in which IO devices (such as NICs) can be presented to workloads for consumption by those workloads. The current methods of IO virtualisation are:

  • Para-Virtualisation method (software only).

  • Direct Assignment via h/w assisted PCI-Passthrough (IOMMU).

  • Device Sharing with SR-IOV & h/w assisted PCI-Passthrough (IOMMU).

  • Para-Virtualisation method with Hardware support.

Figure 1 below shows some of the relevant IO Virtualisation techniques.

"Figure 1: Relevant IO Virtualisation Techniques"

Figure 1: Relevant IO Virtualisation Techniques

Para-virtualisation method (software only)

This is the preferred method of IO virtualisation, as it provides flexibility and full abstraction of workloads from the underlying infrastructure. It usually relies on standard IO interfaces implemented in software. For networking, two common interfaces are used: virtio-net for KVM/QEMU and VMXNET for VMware.

Using a standard interface for IO means that the workload doesn't need to run proprietary software drivers for specific hardware vendors, and the implementation of that workload is completely agnostic of the hardware used.

Figure 2 below shows the typical components of a para-virtualised interface:

  • frontEnd driver: The frontEnd driver is an off-the-shelf driver that runs on the workload.

  • backEnd driver: runs on the hypervisor and is responsible for bridging standard communications coming from applications to hardware-specific ones.

The nature of this disaggregation is what gives para-virtualised interfaces the flexibility that makes them favourable in a virtualised environment.

The downside of para-virtualised interfaces is the involvement of the hypervisor, which may introduce latency and jitter that can impact performance.

"Figure 2: Para-Virtualszed interface components (software only)"

Figure 2: Para-Virtualszed interface components (software only)
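
As one concrete illustration, under KVM/QEMU managed through libvirt (an assumption here; other toolchains exist), a para-virtualised virtio-net interface is declared in the domain XML. A minimal sketch, assuming a libvirt-managed network named "default":

    <interface type='network'>
      <!-- backEnd: provided by the hypervisor behind the 'default' network -->
      <source network='default'/>
      <!-- frontEnd: the guest uses the standard, off-the-shelf virtio driver -->
      <model type='virtio'/>
    </interface>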

Direct assignment with IOMMU

Direct Assignment is supported in x86 architectures through an IOMMU (Input/Output Memory Management Unit), which provides the ability for a PCIe device to autonomously (i.e. without hypervisor intervention) perform DMA transfers directly into guest memory, as shown in Figure 3.

Once an IO device is directly assigned to a workload, that workload then has exclusive access to the device; no other entity (including the hypervisor) can access it.

This method provides better performance than the para-virtualised one, as no hypervisor is involved, but it provides less flexibility and less portability.

Having an IO device directly assigned to a workload means that the workload needs to run vendor specific drivers and libraries to be able to access that device which makes the workload less portable and dependent on a specific hardware type from a specific vendor.

"Figure 3: Direct Assignment with Virtual Technology"

Figure 3: Direct Assignment with Virtual Technology
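
As an illustration, with KVM/QEMU managed through libvirt (an assumption here), a PCIe device can be directly assigned to a guest with a hostdev entry such as the following minimal sketch; the PCI address is hypothetical, and the host IOMMU must be enabled:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <!-- hypothetical host PCI address of the device being assigned -->
        <address domain='0x0000' bus='0x06' slot='0x02' function='0x0'/>
      </source>
    </hostdev>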

Device Sharing with SR-IOV & IOMMU

This method partitions a hardware device into multiple regions (known as VFs), and uses Direct Assignment to provide workloads exclusive access to one or more of those regions (VFs), thereby bypassing the hypervisor and simultaneously allowing multiple workloads to share the same device.

For this method to be possible, the IO device needs to support Single Root Input Output Virtualisation (SR-IOV), which allows it to present itself as multiple devices, known as Physical Functions (PFs) and Virtual Functions (VFs), as presented in Figure 4.

Each of those Virtual Functions can then be independently and exclusively assigned to a workload (with the appropriate hardware support of an IOMMU).

Similar to the previous method ("Direct Assignment"), this method provides better performance than para-virtualisation, but lacks the flexibility and portability sought.

"Figure 4: Device Sharing with SR-IOV & Direct Assignment"

Figure 4: Device Sharing with SR-IOV & Direct Assignment
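
As an illustration, on Linux hosts VFs are typically created through the PF's standard sriov_numvfs sysfs attribute, and a VF can then be assigned to a KVM/QEMU guest as a network interface via libvirt (an assumption here). A minimal sketch, with a hypothetical VF at PCI address 0000:06:10.1:

    <interface type='hostdev' managed='yes'>
      <!-- the MAC address is programmed onto the VF before assignment -->
      <mac address='52:54:00:6d:90:02'/>
      <source>
        <!-- hypothetical PCI address of one SR-IOV Virtual Function -->
        <address type='pci' domain='0x0000' bus='0x06' slot='0x10' function='0x1'/>
      </source>
    </interface>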

Para-Virtualisation method (Hardware support)

This method is essentially a mixture of the software-only para-virtualisation method and the direct assignment method (including the device sharing method): the frontEnd driver running on the workload is a standard, off-the-shelf driver, while the backEnd driver is implemented directly in hardware logic (bypassing the hypervisor, with hardware support from an IOMMU and SR-IOV), as shown in Figure 5.

Unlike the software-only para-virtualised interfaces, this method provides better performance, as it bypasses the hypervisor; and unlike the Direct Assignment methods, it doesn't require proprietary drivers to run in the workload, which keeps workloads portable.

However, this method doesn't provide the same level of flexibility as the software-only para-virtualisation method, as migrating workloads from one host to another is more challenging due to the hardware presence and the state it holds for the workloads using it.

"Figure 5: Para-Virtualisation method (with hardware support)"

Figure 5: Para-Virtualisation method (with hardware support)
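
The text above does not name a specific technology for this model; one illustrative example, assumed here, is vDPA (virtio data path acceleration), where the guest keeps the standard virtio frontEnd driver while the data path terminates in hardware. A minimal libvirt sketch (requires libvirt 6.9.0 or later; the device path is hypothetical):

    <interface type='vdpa'>
      <!-- hypothetical vhost-vdpa character device exposed by the NIC -->
      <source dev='/dev/vhost-vdpa-0'/>
    </interface>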

SmartNICs
Acceleration Cards
FPGAs
GPUs/NPUs
EPA/NFD

Other data

Abbreviations

Term

Description

3GPP

3rd Generation Partnership Project

AAA

Authentication, Authorisation, and Accounting

AAL

Acceleration Abstraction Layer

AAP

Anuket Assured Program

AArch64

64bit ARM architecture

Acc

Accelerator

AD

Active Directory

ADC

Application Delivery Controller

AES

Advanced Encryption Standard

AES-NI

AES New Instructions

AF_XDP

Address Family For XDP

AI

Artificial Intelligence

AICPA

American Institute of Certified Public Accountants

AMF

Access and Mobility management Function

API

Application Programming Interface

AR

Augmented Reality

ARM

Advanced RISC Machines

ARP

Address Resolution Protocol

AS

Application Servers

ASIC

Application-Specific Integrated Circuit

AUSF

AUthentication Server Function

AZ

Availability Zone

B2B

Business to Business

B2C

Business to Consumer

BBU

BaseBand Unit

BGCF

Border Gateway Control Function

BGP

Border Gateway Protocol

BGPaaS

BGP as a Service

BIOS

Basic Input Output System

BLOB

Binary Large Object

BM

Bare Metal

BMC

Baseband Management Controller

BMRA

Bare Metal Reference Architecture

BNG

Broadband Network Gateway

BOOTP

Bootstrap Protocol

BRAS

Broadband Remote Access Server

BSS

Business Support Systems

CaaS

Cloud Native Container as a Service

CaaS

Container as a Service

CAPEX

Capital Expenditure

C&V

Compliance & Verification

CCP

Centralised Control Plane

CCS

Converged Charging System

CDN

Content Distribution (or Delivery) Network

CG-NAT

Carrier-Grade Network Address Translation

cgroups

Control Groups

CHF

Charging Function (part of the converged charging system CCS)

CI/CD

Continuous Integration / Continuous Deployment

CIDR

Classless Inter-Domain Routing

CIFS

Common Internet File System

CIM

Cloud Infrastructure Management

CIRV

Common Infrastructure Realization & Validation

CIS

Center for Internet Security

CIT

Cloud Integrity Tool

CLI

Command Line Interface

CM

Configuration Management

CNCF

Cloud Native Computing Foundation

CNF

Cloud Native Network Function

CNI

Container Network Interface

CNTT

Cloud iNfrastructure Telco Taskforce

CP

Control Plane

CPE

Customer Premises Equipment

CPU

Central Processing Unit

CRD

Custom Resource Definition

CRI

Container Runtime Interface

CRI-O

OCI compliant CRI implementation

CRTM

Core Root of Trust for Measurements

CRUD

Create, Read, Update, and Delete

CSA

Cloud Security Alliance

CSAR

(TOSCA) Cloud Service Archive

CSCF

Call Session Control Function

CSI

Container Storage Interface

CSP

Cloud Service Provider

CU

Centralised Unit (O-RAN context)

CVC

(LFN) Compliance Verification Committee

CVE

Common Vulnerabilities and Exposures

CVSS

Common Vulnerability Scoring System

DANM

Damn, Another Network Manager

DBaaS

Data Base as a Service

DC

Data Center

DCP

Distributed Control Plane

DDoS

Distributed Denial of Service

DHCP

Dynamic Host Configuration Protocol

DMA

Direct Memory Access

DNS

Domain Name System

DPDK

Data Plane Development Kit

DPI

Deep Packet Inspection

DPU

Data Processing Unit

DRA

Diameter Routing Agent

DRAM

Dynamic Random Access Memory

DRTM

Dynamic Root of Trust for Measurements

DSP

Digital Signal Processor

DU

Distributed Unit (O-RAN context)

DVR

Distributed Virtual Routing

E2E

End to End

eBPF

Extended Berkley Packet Filter

EBS

Elastic Block Storage

EFI

(BIOS) Extensible Firmware Interface

eMBB

Enhanced Mobile BroadBand

EMS

Element Management System

EPA

Enhanced Platform Awareness

EPC

Evolved Packet Core

ePDG

Evolved Packet Data GateWay

ESXi

(VMware) ESX Integrated

eTOM

Enhanced Telecom Operations Map

ETSI

European Telecommunications Standards Institute

EUAG

Linux Foundation Networking End User Advisory Group

EUD

End User Device

EULA

End-User License Agreement

EVPN

Ethernet Virtual Private Network

EVPN

Ethernet VPN

FAT

File Allocation Table

F2F

Face-to-Face

FC

Fiber Channel

FCAPS

Fault, Configuration, Accounting, Performance, Security

FC-AL

Fibre Channel Arbitrated Loop

FCIP

Fibre Channel over IP

FFA

Fixed Function Accelerator

FPGA

Field Programmable Gate Array

FTTx

Fiber to the x

FW

Fire Wall

FWD

(Traffic) ForWarDed

GB

Giga Byte

GDPR

General Data Protection Regulation

GFS

Global (Linux) File System

GGSN

Gateway GPRS Support Node

Gi or GiB

Gibibyte (1024^3) bytes

GPRS

General Packet Radio Service

GPS

Global Positioning System

GPU

Graphics Processing Unit

GRE

Generic Routing Encapsulation

GSM

Global System for Mobile Communications, previously Groupe Speciale Mobile

GSMA

GSM Association

GUI

Graphical User Interface

GW

Gateway

HA

High Availability

HBA

Host Bus Adapter

HCP

Hyperscaler Cloud Provider

HDD

Hard Disk Drive

HDFS

Hadoop Distributed File System

HDV

Hardware Delivery Validation

HEM-clouds

Hybrid, Edge, and Multi-clouds

HEMP

Hybrid, Edge, and Multi-Cloud unified management Platform

HLR

Home Location Register

HOT

(OpenStack) Heat Orchestration Templates

HSS

Home Subscriber Server

HTML

Hyper Text Markup Language

HTTP

Hypertext Transfer Protocol

HTTPS

Hypertext Transfer Protocol Secure

HW

Hardware

IaaS

Infrastructure as a Service

IaC (IaaC)

Infrastructure as Code (or “as a”)

IAM

Identity and Access Management

ICMP

Internet Control Message Protocol

iSCSI

Internet Small Computer Systems Interface

ID

Identifier

IDF

(OPNFV) Installer Descriptor File

IdP

Identity Provider

IDRAC

(Dell) Integrated Dell Remote Access Controller

IDS

Intrusion Detection System

ILO

(HPE) Integrated Lights-Out

IMS

IP Multimedia Subsystem

IO

Input/Output

IOMMU

Input/Output Memory Management Unit

IOPS

Input/Output per Second

IoT

Internet of Things

IP

Internet Protocol

IPAM

IP Address Management

IPMI

Intelligent Platform Management Interface

IPS

Intrusion Prevention System

IPSec

Internet Protocol Security

iSCSI

Internet Small Computer Systems Interface

IT

Information Technology

ITIL

IT Infrastructure Library

JSON

JavaScript Object Notation

K8s

Kubernetes

KPI

Key Performance Indicator

KVM

Keyboard, Video and Mouse

LaaS

(Testing) Lab as a Service

LAN

Local Area Network

LB

Load Balancer

LBaaS

Load Balancer as a Service

LCM

LifeCycle Management

LDAP

Lightweight Directory Access Protocol

LF

Linux Foundation

LMS

Log Management Service

LTD

Less Trusted Domain

LFN

Linux Foundation Networking

LLDP

Link Layer Discovery Protocol

LMA

Logging, Monitoring, and Analytics

LSR

Label Switching Router

MAAS

(Canonical) Metal as a Service

MAC

Media Access Control

MANO

Management and Orchestration

MC-LAG or MLAG

Multi-chassis Link Aggregation Group

MEC

Multi-access Edge Computing

MGCF

Media Gateway Control Function

MGW

Media GateWay

Mi or MiB

Mebibyte (1024^2 bytes)

ML

Machine Learning

ML2 or ML-2

Modular Layer 2

MME

Mobility Management Entity

mMTCs

Massive Machine-Type Communications

MPLS

Multi-Protocol Label Switching

MTD

More Trusted Domain

MRF

Media Resource Function

MSAN

MultiService Access Node

MSC

Mobile Switching Center

MTAS

Mobile Telephony Application Server

MVNO

Mobile Virtual Network Operator

NAS

Network Attached Storage

NaaS

Network as a Service

NAT

Network Address Translation

NBI

North Bound Interface

NEF

Network Exposure Function

NF

Network Function

NFD

Node Feature Discovery

NFP

Network Forwarding Path

NFR

Non Functional Requirements

NFS

Network File System

NFV

Network Function Virtualisation

NFVI

Network Function Virtualisation Infrastructure

NFVO

Network Function Virtualisation Orchestrator

NIC

Network Interface Card

NIST

National Institute of Standards and Technology

NMS

Network Management System

NPL

Network Programming Language

NPN

Non-Public Network

NPU

Neural Processing Unit

NR

New Radio (5G context)

NRF

Network Repository Function

NS

Network Service

NSSF

Network Slice Selection Function

NTP

Network Time Protocol

NUMA

Non-Uniform Memory Access

NVMe

Non-Volatile Memory Express

NW

Network

OAM

Operations, Administration and Maintenance

OCI

Open Container Initiative

OCS

Online Charging system

ODIM

Open Distributed Infrastructure Management

OFCS

Offline Charging System

OLT

Optical Line Termination

ONAP

Open Network Automation Platform

ONF

Open Networking Foundation

OOB

Out of Band

OPEX

Operational Expenditure

OPG

(GSMA) Operator Platform Group

OPNFV

Open Platform for NFV

ORAN

Open Radio Access Network

O-RAN

Open RAN

OS

Operating System

OSD

(Ceph) Object Storage Daemon

OSS

Operational Support Systems

OSSA

OpenStack Security Advisories

OSTK

OpenStack

OVP

OPNFV Verified Program

OVS

Open vSwitch

OWASP

Open Web Application Security Project

PaaS

Platform as a Service

PCF

Policy Control Function

PCIe

Peripheral Component Interconnect Express

PCI-PT

PCIe PassThrough

PCR

Platform Configuration Register

PCRF

Policy and Charging Rules Function

PDF

(OPNFV) Pod Descriptor File

PF

Physical Function

PGW

Packet data network GateWay

PGW-C

PGW Control plane

PGW-U

PGW User plane

PIM

Privileged Identity Management

PLMN

Public Land Mobile Network

PM

Performance Measurements

POD

Point of Delivery

PRD

Permanent Reference Document

PTP

Precision Time Protocol

PV

Persistent Volumes

PVC

Persistent Volume Claims

PXE

Preboot Execution Environment

QCOW

QEMU copy-on-write

QEMU

Quick EMUlator

QoS

Quality of Service

R/W

Read/Write

RA

Reference Architecture

RADOS

Reliable Autonomic Distributed Object Store

RAID

Redundant Array of Independent Disks

RAM

Random Access Memory

RAN

Radio Access Network

RAW

Raw disk format

RBAC

Role-based Access Control

RC

Reference Conformance

Repo

Repository

RFC

Request for Change

RFC

Request for Comments

RGW

Residential GateWay

RI

Reference Implementation

RISC

Reduced Instruction Set Computing

RM

Reference Model

ROI

Return on Investment

RR

Route Reflector

RTM

Requirements Traceability Matrix

RTM

Root of Trust for Measurements

RTT

Round Trip Time

RU

Radio Unit (O-RAN context)

S3

(Amazon) Simple Storage Service

SA

Service Assurance

SaaS

Software as a Service

SAML

Security Assertion Markup Language

SAN

Storage Area Network

SAS

Serial Attached SCSI

SATA

Serial Advanced Technology Attachment

SBA

Service Based Architecture

SBC

Session Border Controller

SBI

South Bound Interface

SCAP

Security Content Automation Protocol

SDF

(OPNFV) Scenario Descriptor File

SDK

Software Development Kit

SDN

Software-Defined Networking

SDNC

SDN Controller

SDNo

SDN Overlay

SDNu

SDN Underlay

SDO

Standard Development Organisation

SDS

Software-Defined Storage

SD-WAN

Software Defined Wide Area Network

Sec

Security

Sec-GW

Security GateWay

SF

Service Function

SFC

Service Function Chaining

SFF

Service Function Forwarder

SFP

Service Function Paths

SGSN

Serving GPRS Support Node

SGW

Serving GateWay

SGW-C

SGW Control plane

SGW-U

SGW User plane

SIEM

Security Information and Event Management

SIG

Special Interest Group

SIP

Session Initiation Protocol

SLA

Service Level Agreement

SME

Subject Matter Expert

SMF

Session Management Function

SMS

Short Message Service

SMSC

SMS Center

SMT

Simultaneous Multi-Threading

SNAT

Source Network Address Translation

SNMP

Simple Network Management Protocol

SOC

System and Organization Controls

SONiC

Software for Open Networking in the Cloud

SR-IOV

Single Root Input/Output Virtualisation

SRTM

Static Root of Trust for Measurements

SRV

(Traffic) client-SeRVer traffic

SSD

Solid State Drive

SSDF

Secure Software Development Framework

SSH

Secure SHell protocol

SSL

Secure Sockets Layer

SUT

System Under Test

SW

Software

TCDI

Trusted Cross-Domain Interface

TBC

To Be Confirmed

TC

Test Case

TCP

Transmission Control Protocol

TEC

(GSMA) Telco Edge Cloud

TF

Tungsten Fabric

TFTP

Trivial File Transfer Protocol

TIP

Telecom Infra Project

TLB

Translation Lookaside Buffers

TLS

Transport Layer Security

TOR

Top of Rack

TOSCA

Topology and Orchestration Specification for Cloud Applications

TPM

Trusted Platform Module

TTL

Time To Live

TUG

(CNCF) Telco User Group

UDM

Unified Data Management

UDP

User Datagram Protocol

UDR

Unified Data Repository

UEFI

Unified Extensible Firmware Interface

UHD

Ultra High Definition

UI

User Interface

UPF

User Plane Function

uRLLC

Ultra-Reliable Low-Latency Communications

V2I

Vehicle to Infrastructure

V2N

Vehicle to Network

V2P

Vehicle to Pedestrian

V2V

Vehicle to Vehicle

V2X

Vehicle-to-everything

VA

Virtual Application

VAS

Value Added Service

V&V

Verification And Validation

vCPU

Virtual CPU

VF

Virtual Function

VI

Vendor Implementation

vIDS

Virtualised IDS

VIM

Virtualised Infrastructure Manager

vIPS

Virtualised IPS

VLAN

Virtual LAN

VM

Virtual Machine

VMDK

VMware Virtual Machine Disk File

vNAS

virtual Network Attached Storage

VMM

Virtual Machine Monitor (or Manager)

VNF

Virtualised Network Function

VNFC

Virtualised Network Function Component

VNFM

Virtualised Network Function Manager

VNI

VXLAN Network Identifier

vNIC

Virtual Network Interface Card

VoLTE

Voice over LTE

VPN

Virtual Private Network

VPP

Vector Packet Processing

VR

Virtual Reality

vRAN

Virtualised Radio Access Network

VRF

Virtual Routing and Forwarding

VRRP

Virtual Router Redundancy Protocol

VTEP

Virtual Termination End Point

VTP

(ONAP) VNF Test Platform

VxLAN

Virtual Extensible LAN

vXYZ

virtual XYZ, e.g., as in vNIC

WG

Working Group

Wi-Fi

Wireless Fidelity

WLAN

Wireless Local Area Network

WLC

Wireless LAN Controller

WS

WorkStream

XDP

eXpress Data Path

XML

eXtensible Markup Language

ZAP

Zed Attack Proxy

ZTA

Zero Trust Architecture

Glossary

Terminology

To help guide the reader, this glossary provides an introduction to the terminology used within this document. These definitions are, with a few exceptions, based on the ETSI GR NFV 003 V1.5.1 [1] definitions. In a few cases, they have been modified to remove deployment technology dependencies, but only where necessary to avoid confusion.

Software Layer Terminology
  • Cloud Infrastructure: A generic term covering NFVI, IaaS and CaaS capabilities - essentially the infrastructure on which a Workload can be executed.

Note: NFVI, IaaS and CaaS layers can be built on top of each other. In the case of CaaS, some cloud infrastructure features (e.g., HW management or multi-tenancy) are implemented by using an underlying IaaS layer.

  • Cloud Infrastructure Profile: The combination of the Cloud Infrastructure Software Profile and the Cloud Infrastructure Hardware Profile that defines the capabilities and configuration of the Cloud Infrastructure resources available for the workloads.

  • Cloud Infrastructure Software Configuration: a set of settings (Key:Value) that are applied/mapped to cloud infrastructure SW deployment.

  • Cloud Infrastructure Software Profile: defines the behaviour, capabilities and metrics provided by a Cloud Infrastructure Software Layer on resources available for the workloads.

  • Cloud Native Network Function (CNF): A cloud native network function (CNF) is a cloud native application that implements network functionality. A CNF consists of one or more microservices. All layers of a CNF are developed using Cloud Native Principles, including immutable infrastructure, declarative APIs, and a “repeatable deployment process”.

    Note: This definition is derived from the Cloud Native Thinking for Telecommunications Whitepaper (https://github.com/cncf/telecom-user-group/blob/master/whitepaper/cloud_native_thinking_for_telecommunications.md#1.4), which also includes further detail and examples.

  • Compute flavour: defines the sizing of the virtualised resources (compute, memory, and storage) required to run a workload.

    Note: used to define the configuration/capacity limit of a virtualised container; a minimal sketch follows below.
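
A minimal sketch, assuming the openstacksdk Python library and a hypothetical cloud configuration; the flavour name and sizing values are illustrative, not part of this specification:

```python
# Minimal sketch (illustrative, not normative): defining a compute flavour
# with openstacksdk. All names and sizing values are hypothetical examples.
import openstack

# Assumes a cloud named "example-cloud" is configured in clouds.yaml and that
# the credentials carry the admin rights usually required to create flavours.
conn = openstack.connect(cloud="example-cloud")

flavour = conn.compute.create_flavor(
    name="b.small",  # hypothetical flavour name
    vcpus=2,         # number of virtual CPUs
    ram=4096,        # memory in MiB
    disk=40,         # root disk in GB
)
print(flavour.id)
```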

  • Hypervisor: software that abstracts and isolates workloads, with their own operating systems, from the underlying physical resources. Also known as a virtual machine monitor (VMM).

  • Instance: a virtual compute resource, in a known state such as running or suspended, that can be used like a physical server. Used interchangeably with Compute Node and Server.

    Note: Can be used to specify VM Instance or Container Instance.

  • Network Function (NF): functional block or application that has well-defined external interfaces and well-defined functional behaviour.

    Note: Within NFV, a Network Function is implemented in the form of a Virtualised NF (VNF) or a Cloud Native NF (CNF).

  • Network Function Virtualisation (NFV): The concept of separating network functions from the hardware they run on by using a virtual hardware abstraction layer.

  • Network Function Virtualisation Infrastructure (NFVI): The totality of all hardware and software components used to build the environment in which a set of virtual applications (VAs) are deployed; also referred to as cloud infrastructure.

    Note: The NFVI can span across many locations, e.g. places where data centres or edge nodes are operated. The network providing connectivity between these locations is regarded to be part of the cloud infrastructure. NFVI and VNF are the top-level conceptual entities in the scope of Network Function Virtualisation. All other components are sub-entities of these two main entities.

  • Network Service (NS): composition of Network Function(s) and/or Network Service(s), defined by its functional and behavioural specification, including the service lifecycle.

  • Software Defined Storage (SDS): An architecture which consists of the storage software that is independent from the underlying storage hardware. The storage access software provides data request interfaces (APIs) and the SDS controller software provides storage access services and networking.

  • Virtual Application (VA): A general term for software which can be loaded into a Virtual Machine.

    Note: a VNF is one type of VA.

  • Virtual CPU (vCPU): Represents a portion of the host’s computing resources allocated to a virtualised resource, for example, to a virtual machine or a container. One or more vCPUs can be assigned to a virtualised resource.

  • Virtual Machine (VM): virtualised computation environment that behaves like a physical computer/server.

    Note: A VM consists of all of the components (processor (CPU), memory, storage, interfaces/ports, etc.) of a physical computer/server. It is created using sizing information or a Compute Flavour; a minimal sketch follows below.
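
A minimal sketch, again assuming openstacksdk; the image, flavour, and network names are hypothetical:

```python
# Minimal sketch (illustrative, not normative): booting a VM whose sizing is
# taken from a compute flavour. All resource names are hypothetical examples.
import openstack

conn = openstack.connect(cloud="example-cloud")

server = conn.create_server(
    name="example-vm",
    image="ubuntu-22.04",  # hypothetical image name
    flavor="b.small",      # the flavour supplies the vCPU/RAM/disk sizing
    network="tenant-net",  # hypothetical tenant network
    wait=True,             # block until the server reaches a steady state
)
print(server.status)  # e.g. "ACTIVE"
```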

  • Virtual Network Function (VNF): a software implementation of a Network Function, capable of running on the Cloud Infrastructure.

    • VNFs are built from one or more VNF Components (VNFC) and, in most cases, the VNFC is hosted on a single VM or Container.

  • Virtual resources:

    • Virtual Compute resource (a.k.a. virtualisation container): partition of a compute node that provides an isolated virtualised computation environment.

    • Virtual Storage resource: virtualised non-volatile storage allocated to a virtualised computation environment hosting a VNFC.

    • Virtual Networking resource: routes information among the network interfaces of a virtual compute resource and physical network interfaces, providing the necessary connectivity.

  • Workload: an application (for example VNF, or CNF) that performs certain task(s) for the users. In the Cloud Infrastructure, these applications run on top of compute resources such as VMs or Containers. Most relevant workload categories in the context of the Cloud Infrastructure are:

    • Data Plane Workloads: that perform tasks related to packet handling of the end-to-end communication between applications. These tasks are expected to be very intensive in I/O and memory read/write operations.

    • Control Plane Workloads: that perform tasks related to any other communication between NFs that is not directly related to the end-to-end data communication between applications. For example, this category includes session management, routing or authentication.

    • Storage Workloads: that perform tasks related to disk storage (either SSD or HDD or other). Examples range from non-intensive router logging to more intensive database read/write operations.

Hardware Layer Terminology
  • Cloud Infrastructure Hardware Configuration: a set of settings (Key:Value) that are applied/mapped to Cloud Infrastructure HW deployment.

  • Cloud Infrastructure Hardware Profile: defines the behaviour, capabilities, configuration, and metrics provided by the cloud infrastructure hardware layer resources available for the workloads.

    • Host Profile: is another term for a Cloud Infrastructure Hardware Profile.

  • CPU Type: A classification of CPUs by features needed for the execution of computer programs; for example, instruction sets, cache size, number of cores.

  • Hardware resources: Compute/Storage/Network hardware resources on which the cloud infrastructure platform software, virtual machines and containers run.

  • Physical Network Function (PNF): Implementation of a network function via tightly coupled dedicated hardware and software system.

    Note: This is a physical cloud infrastructure resource with the NF software.

  • Simultaneous Multithreading: Simultaneous multithreading (SMT) is a technique for improving the overall efficiency of superscalar CPUs with hardware multithreading. SMT permits multiple independent threads of execution on a single core to better utilise the resources provided by modern processor architectures.

Operational and Administrative Terminology
  • Cloud service user: Natural person, or entity acting on their behalf, associated with a cloud service customer that uses cloud services.

    Note: Examples of such entities include devices and applications.

  • Compute Node: An abstract definition of a server. Used interchangeably with Instance and Server.

    Note: A compute node can refer to a set of hardware and software that support the VMs or Containers running on it.

  • External Network: External networks provide network connectivity for a cloud infrastructure tenant to resources outside of the tenant space.

  • Fluentd (https://www.fluentd.org/): An open source data collector for unified logging layer, which allows data collection and consumption for better use and understanding of data. Fluentd is a CNCF graduated project.

  • Kibana: An open source data visualisation system.

  • Multi-tenancy: feature where physical, virtual or service resources are allocated in such a way that multiple tenants and their computations and data are isolated from and inaccessible by each other.

  • Prometheus: An open-source monitoring and alerting system.

  • Quota: An imposed upper limit on specific types of resources, usually used to prevent excessive resource consumption by a given consumer (tenant, VM, container); a minimal sketch follows below.
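
A minimal sketch, assuming openstacksdk; the project name and limit values are hypothetical:

```python
# Minimal sketch (illustrative, not normative): capping a tenant's compute
# resource consumption. Project name and limits are hypothetical examples.
import openstack

conn = openstack.connect(cloud="example-cloud")

# Cap the tenant at 20 vCPUs, 64 GiB of RAM (65536 MiB), and 10 instances.
conn.set_compute_quotas("example-project", cores=20, ram=65536, instances=10)
print(conn.get_compute_quotas("example-project"))
```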

  • Resource pool: A logical grouping of cloud infrastructure hardware and software resources. A resource pool can be based on a certain resource type (for example, compute, storage and network) or a combination of resource types. A Cloud Infrastructure resource can be part of none, one or more resource pools.

  • Service Assurance (SA): collects alarm and monitoring data. Applications within SA or interfacing with SA can then use this data for fault correlation, root cause analysis, service impact analysis, SLA management, security, monitoring and analytics, etc.

  • Tenant: cloud service users sharing access to a set of physical and virtual resources. Adapted from ITU (Y.3500: Information technology - Cloud computing - Overview and vocabulary).

    Note: Tenants represent an independently manageable logical pool of compute, storage and network resources abstracted from physical hardware.

  • Tenant Instance: refers to a single Tenant.

  • Tenant (Internal) Networks: Virtual networks that are internal to Tenant Instances.

Other Referenced Terminology
  • Anuket Assured Program (AAP): An open source, community-led program to verify compliance of the telecom applications and the cloud infrastructures with the Anuket specifications.

  • Carrier Grade: Carrier grade refers to network functions and infrastructure that are characterised by all or some of the following attributes: High reliability allowing near 100% uptime, typically measured as better than “five nines”; Quality of Service (QoS) allowing prioritisation of traffic; High performance optimised for low latency, low packet loss, and high bandwidth; Scalability to handle demand growth by adding virtual and/or physical resources; Security to be able to withstand natural and man-made attacks.

  • Monitoring (Capability): Monitoring capabilities are used for the passive observation of workload-specific traffic traversing the Cloud Infrastructure. Note, as with all capabilities, Monitoring may be unavailable or intentionally disabled for security reasons in a given cloud infrastructure instance.

  • NFV Orchestrator (NFVO): Manages the VNF lifecycle and Cloud Infrastructure resources (supported by the VIM) to ensure an optimised allocation of the necessary resources and connectivity.

  • Platform: A cloud capabilities type in which the cloud service user can deploy, manage and run customer-created or customer-acquired applications using one or more programming languages and one or more execution environments supported by the cloud service provider. Adapted from ITU (Y.3500: Information technology - Cloud computing - Overview and vocabulary).

    Note: This includes the physical infrastructure, Operating Systems, virtualisation/containerisation software and other orchestration, security, monitoring/logging and life-cycle management software.

  • Vendor Implementation: A commercial implementation of a cloud platform.

  • Virtualised Infrastructure Manager (VIM): Responsible for controlling and managing the Network Function Virtualisation Infrastructure compute, storage and network resources.

References

Common References

Ref

Doc Number

Title

[1]

ETSI GR NFV 003 V1.5.1

“Network Functions Virtualisation (NFV); Terminology for Main Concepts in NFV”, January 2020. Available at https://www.etsi.org/deliver/etsi_gr/NFV/001_099/003/01.05.01_60/gr_NFV003v010501p.pdf

[2]

RFC 2119

“Key words for use in RFCs to Indicate Requirement Levels”, S. Bradner, March 1997. Available at https://www.rfc-editor.org/info/rfc2119

[3]

ETSI GS NFV 002 V1.2.1

“Network Functions Virtualisation (NFV); Architectural Framework”. Available at https://www.etsi.org/deliver/etsi_gs/NFV/001_099/002/01.02.01_60/gs_NFV002v010201p.pdf

[4]

ETSI GR NFV-IFA 029 V3.3.1

“Network Functions Virtualisation (NFV) Release 3; Architecture; Report on the Enhancements of the NFV architecture towards “Cloud-native” and “PaaS” ”. Available at https://www.etsi.org/deliver/etsi_gr/NFV-IFA/001_099/029/03.03.01_60/gr_NFV-IFA029v030301p.pdf

[5]

ETSI GS NFV-TST 008 V3.2.1

“Network Functions Virtualisation (NFV) Release 3; Testing; NFVI Compute and Network Metrics Specification”. Available at https://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/008/03.02.01_60/gs_NFV-TST008v030201p.pdf

[6]

ETSI GS NFV-IFA 027 V2.4.1

“Network Functions Virtualisation (NFV) Release 2; Management and Orchestration; Performance Measurements Specification”. Available at https://www.etsi.org/deliver/etsi_gs/NFV-IFA/001_099/027/02.04.01_60/gs_nfv-ifa027v020401p.pdf

[7]

ETSI GS NFV-IFA 002 V2.1.1

“Network Functions Virtualisation (NFV);Acceleration Technologies; VNF Interfaces Specification”. Available at https://www.etsi.org/deliver/etsi_gs/NFV-IFA/001_099/002/02.01.01_60/gs_NFV-IFA002v020101p.pdf

[8]

ETSI NFV-IFA 019 V3.1.1

“Network Functions Virtualisation (NFV); Acceleration Technologies; Acceleration Resource Management Interface Specification; Release 3”. Available at https://www.etsi.org/deliver/etsi_gs/nfv-ifa/001_099/019/03.01.01_60/gs_nfv-ifa019v030101p.pdf

[9]

ETSI GS NFV-INF 004 V1.1.1

“Network Functions Virtualisation (NFV); Infrastructure; Hypervisor Domain”. Available at https://www.etsi.org/deliver/etsi_gs/NFV-INF/001_099/004/01.01.01_60/gs_NFV-INF004v010101p.pdf

[10]

ETSI GS NFV-IFA 005 V3.1.1

“Network Functions Virtualisation (NFV) Release 3; Management and Orchestration; Or-Vi reference point - Interface and Information Model Specification”. Available at https://www.etsi.org/deliver/etsi_gs/nfv-ifa/001_099/005/03.01.01_60/gs_nfv-ifa005v030101p.pdf

[11]

DMTF RedFish

“DMTF RedFish Specification”. Available at https://www.dmtf.org/sites/default/files/standards/documents/DSP0268_2022.2.pdf

[12]

NGMN Overview on 5GRAN Functional Decomposition ver 1.0

“NGMN Overview on 5GRAN Functional Decomposition”. Available at https://www.ngmn.org/wp-content/uploads/Publications/2018/180226_NGMN_RANFSX_D1_V20_Final.pdf

[13]

ORAN-WG4.IOT.0-v01.00

“Front haul Interoperability Test Specification (IOT)”. Available at https://static1.squarespace.com/static/5ad774cce74940d7115044b0/t/5db36ffa820b8d29022b6d08/1572040705841/ORAN-WG4.IOT.0-v01.00.pdf

[14]

ETSI GS NFV-TST 009 V3.1.1

“Network Functions Virtualisation (NFV) Release 3; Testing; Specification of Networking Benchmarks and Measurement Methods for NFVI”. Available at https://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/009/03.01.01_60/gs_NFV-TST009v030101p.pdf

[15]

ETSI GR NFV IFA-012

“Network Functions Virtualisation (NFV) Release 3; Management and Orchestration; Report on Os-Ma-Nfvo reference point - application and service management use cases and recommendations”. Available at https://www.etsi.org/deliver/etsi_gr/NFV-IFA/001_099/012/03.01.01_60/gr_NFV-IFA012v030101p.pdf

[16]

ETSI GS NFV-SEC 001 V1.1.1

“Network Functions Virtualisation (NFV); NFV Security; Problem Statement”. Available at https://www.etsi.org/deliver/etsi_gs/NFV-SEC/001_099/001/01.01.01_60/gs_nfv-sec001v010101p.pdf

[17]

ETSI GS NFV-SEC 003 V1.1.1

“Network Functions Virtualisation (NFV); NFV Security; Security and Trust Guidance”. Available at https://www.etsi.org/deliver/etsi_gs/NFV-SEC/001_099/003/01.01.01_60/gs_NFV-SEC003v010101p.pdf

[18]

ETSI GS NFV-SEC 014 V3.1.1

“Network Functions Virtualisation (NFV) Release 3; NFV Security; Security Specification for MANO Components and Reference points”. Available at https://www.etsi.org/deliver/etsi_gs/NFV-SEC/001_099/014/03.01.01_60/gs_NFV-SEC014v030101p.pdf

[19]

ETSI GS NFV-SEC 021 V2.6.1

“Network Functions Virtualisation (NFV) Release 2; Security; VNF Package Security Specification”. Available at https://www.etsi.org/deliver/etsi_gs/NFV-SEC/001_099/021/02.06.01_60/gs_nfv-sec021v020601p.pdf

[20]

GSMA FS.31 V2.0 February 2020

“Baseline Security Controls”. Available at https://www.gsma.com/security/resources/fs-31-gsma-baseline-security-controls

[21]

GSMA whitepaper January 2021

“Open Networking & the Security of Open Source Software Deployment”. Available at https://www.gsma.com/futurenetworks/resources/open-networking-the-security-of-open-source-software-deployment

[22]

Cloud Security Alliance (CSA) and SAFECode

“The Six Pillars of DevSecOps: Automation (2020)”. Available at https://safecode.org/resource-secure-development-practices/the-six-pillars-of-devsecops-automation/

[23]

ISO/IEC 27000:2018

Information technology — Security techniques — Information security management systems — Overview and vocabulary. Available at https://www.iso.org/standard/73906.html.

[24]

Cloud Security Alliance (CSA)

“Information Security Management through Reflexive Security”. Available at https://cloudsecurityalliance.org/artifacts/information-security-management-through-reflexive-security/

[25]

NIST SP 800-207

“Zero Trust Architecture (ZTA)”. Available at https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-207.pdf

[26]

Open Infrastructure

“Edge Computing: Next Steps in Architecture, Design and Testing”. Available at https://www.openstack.org/use-cases/edge-computing/edge-computing-next-steps-in-architecture-design-and-testing/

[27]

RFC5905

“Network Time Protocol Version 4: Protocol and Algorithms Specification”, IETF RFC, Available at https://www.rfc-editor.org/info/rfc5905

[28]

RFC5906

“Network Time Protocol Version 4: Autokey Specification”, IETF RFC, Available at https://www.rfc-editor.org/info/rfc5906

[29]

RFC5907

“Definitions of Managed Objects for Network Time Protocol Version 4 (NTPv4)”, IETF RFC, Available at https://www.rfc-editor.org/info/rfc5907

[30]

RFC5908

“Network Time Protocol (NTP) Server Option for DHCPv6”, IETF RFC, Available at https://www.rfc-editor.org/info/rfc5908

[31]

IEEE 1588-2019

“Precision Clock Synchronization Protocol for Networked Measurement and Control Systems”, Available at https://standards.ieee.org/standard/1588-2019.html

[32]

ITU-T G.8262

“Timing characteristics of a synchronous equipment slave clock”, Available at https://www.itu.int/rec/T-REC-G.8262

[33]

ITU-T G.8275.2

“Precision time protocol telecom profile for time/phase synchronization with partial timing support from the network”, Available at https://www.itu.int/rec/T-REC-G.8275.2

[34]

GSMA OPG.02

“Operator Platform: Requirements and Architecture”, Available at https://www.gsma.com/futurenetworks/operator-platform-hp/

[35]

O-RAN.WG6.AAL-GAnP-v01.00

“O-RAN Acceleration Abstraction Layer General Aspects and Principles 1.0”, November 2020. Available at https://www.o-ran.org

[36]

GSMA FS.40-v02.00

“5G Security Guide”, version 2.0, 20 October 2021. Available at https://www.gsma.com/security/publications/

[37]

ETSI TS 103 457

“CYBER; Trusted Cross-Domain Interface: Interface to offload sensitive functions to a trusted domain”, TS 103 457 - V1.1.1, Available at https://www.etsi.org/deliver/etsi_ts/103400_103499/103457/01.01.01_60/ts_103457v010101p.pdf

[38]

RFC 2544

“Benchmarking Methodology for Network Interconnect Devices”, Available at https://www.ietf.org/rfc/rfc2544.txt

[39]

O-RAN.WG6

“WG6: Cloudification and Orchestration Workgroup specifications”, Available at https://www.o-ran.org

[40]

ITU-T L.1330

“Energy Efficiency measurement methodology and KPI/metrics for NFV”, Available at https://www.itu.int/rec/T-REC-L.1330

[41]

ETSI EN 303 471

“Energy Efficiency measurement methodology and KPI/metrics for NFV”, Available at https://portal.etsi.org/webapp/workprogram/Report_WorkItem.asp?WKI_ID=50095

[42]

ETSI ES 203 539

“Measurement method for energy efficiency of Network Functions Virtualisation (NFV) in laboratory environment”, Available at https://portal.etsi.org/webapp/workprogram/Report_WorkItem.asp?WKI_ID=47210

[43]

ITU-T L.1361

“Measurement method for energy efficiency of network functions virtualization”, Available at https://www.itu.int/rec/T-REC-L.1361

[44]

“Open RAN Technical Priority - Focus on Energy Efficiency”, Available at https://www.o-ran.org/ecosystem-resources

[45]

“QuEST Forum - NFV Workload Efficiency Whitepaper”, Available at https://tl9000.org/resources/documents/NFV%20Workload%20Efficiency%20Whitepaper.pdf

[46]

GSMA FS.16

“Network Equipment Security Assurance Scheme – Development and Lifecycle Security Requirements, version 2.2”, Available at https://www.gsma.com/security/resources/fs-16-network-equipment-security-assurance-scheme-development-and-lifecycle-security-requirements/

Cloud Native and Kubernetes References

Ref

Doc Number

Title

[C1]

“Extended Cloud Native Principles”. Available at https://networking.cloud-native-principles.org/cloud-native-principles.

[C2]

“DANM”. Available at https://github.com/nokia/danm.

[C3]

“Kubernetes Container Runtime Interface (CRI)”. Available at https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/.

[C4]

“Multus”. Available at https://github.com/k8snetworkplumbingwg/multus-cni.

[C5]

“Node Feature Discovery (NFD)”. Available at https://kubernetes-sigs.github.io/node-feature-discovery/stable/get-started/index.html.

[C6]

“Open Container Initiative (OCI)”. Available at https://github.com/opencontainers/runtime-spec.

O-RAN, 5G and Miscellaneous References

Ref

Doc Number

Title

[M1]

ITU-T IMT-2020

“International Mobile Telecommunications-2020 (IMT-2020) Standard for 5G networks”. Available at https://www.itu.int/pub/T-TUT-IMT.

[M2]

O-RAN.WG6.AAL-GAnP-v01.00

“O-RAN Acceleration Abstraction Layer General Aspects and Principles 1.0”, November 2020; O-RAN.WG6.AAL-GAnP-v01.00. Available at https://www.o-ran.org/specifications.

[M3]

ETSI TS 123 501 V16.6.0

“System architecture for the 5G System (5GS)”. ETSI TS 123 501 V16.6.0 (2020-10) (3GPP TS 23.501 version 16.6.0 Release 16). Available at https://www.etsi.org/deliver/etsi_ts/123500_123599/123501/16.06.00_60/ts_123501v160600p.pdf.

Use Cases

Edge
Executive Summary

Edge computing is a disruptive technology that should be considered for adoption by Digital Service Providers (DSPs), as it is tightly linked with a number of use cases that monetize 5G. Thus, operators, cloud service providers, content providers, and application developers are increasingly focused on Edge computing.

Edge is considered one of the main enablers of 5G use case adoption. There are a number of 5G challenges that Edge will help solve in order to meet the 5G requirements defined by IMT-2020. Edge is required to support Enhanced Mobile BroadBand (eMBB), Massive Machine-Type Communications (mMTC), and Ultra-Reliable Low-Latency Communications (uRLLC).

With respect to monetization, Edge will help DSPs with their Return on Investment (ROI) on 5G and cloud, as a number of new services require the Edge: for example, Cloud Gaming, Augmented Reality/Virtual Reality (AR/VR), 5G private networks, and vRAN.

Objective

CNTT’s goal and purpose is to develop a robust cloud infrastructure model and to define a limited set of discrete architectures built on that model that can be tested and validated for use across the entire member community.

The objective is to extend CNTT’s scope beyond the Regional and National Data Center cloud infrastructures to the Edge. The Edge is a disruptive use case where CNTT can add value, especially as there are a number of scattered initiatives under various Standard Development Organisations (SDOs) and open source communities.

Edge Stream related activities:

  • To harmonise the work under Standards Development Organisations and open source communities

  • To build a common cloud infrastructure, based on CNTT principles, that can be consumed by any operator

  • To build cloud infrastructure that can scale over hundreds of thousands of nodes and cover the Edge Telco use cases, helping operators to monetize the NFV/SDN journey

  • To modify the existing RM and RAs so that they are aligned with the edge requirements

Approach & Scope

All Edge requirement gaps under the Reference Model, Reference Architecture 1 (OpenStack), and Reference Architecture 2 (Kubernetes) will be identified and fulfilled.

Edge scope under CNTT will cover:

  • Define Edge locations based on use case

  • Define guidelines around factors that can affect the edge, for example, WAN latency based on telco use cases.

  • Define Edge use case specific hardware and software profiles, if needed.

  • Define resource requirements in terms of compute, storage and networking; new concepts, such as hyper-converged infrastructure, can be introduced.

  • Define different architecture models, as “no one size fits all”.

Out of Scope

  • APIs exposed to 3rd party applications.

  • VNF/CNF Architecture.

  • Edge deployment locations as they will vary by operator.

Principles

This section introduces some principles that should be followed during the definition and development of Edge scope to be covered in CNTT Reference Model, Reference Architectures, Reference Implementations and Reference Conformance test suites.

A main principle is that CNTT Edge will not define a new branch of CNTT, and it aims to avoid re-inventing what other organisations already have. CNTT Edge follows the same principles as defined in the existing Reference Model Principles, the Reference Architecture Principles, and the Network Principles.

CNTT believes that Edge computing is unique in terms of infrastructure requirements, implementation, and deployment, which is why some additional principles specific to the edge need to be defined:

  • Distribution into many small sites

  • Deployment automation

  • Cloud Infrastructure API accessibility

  • Automated lifecycle management

  • Automated scalability

  • Automated closed loop assurance

  • On-Site staff trust and competence availability

  • Security concerns

  • On-Site access restrictions (distance, accessibility, cost)

  • Remote analysis, isolation and serviceability

  • Resource restrictions

  • Cloud Infrastructure overhead minimization

    • Separation of concerns.

    • Cloud nativeness.

  • Geographical presence and data origin.

  • Data locality, protection and regulatory fulfilments.

    • Resilience and Availability.

  • WAN connectivity availability, capabilities and quality.

  • Autonomous local infrastructure functionality and operations.

  • Heterogeneous Infrastructure.

    • Workload diversity.

  • Support of Telco and non-Telco workloads.

  • Specific priority, control and orchestration concerns.

Terminologies
Standards Development Organisations (SDOs) and Open Source Communities Interlock
  • OpenStack Edge Computing Group

    • Working with the OpenStack Edge Computing Group (ECG) on defining various architectures that will fit in RA01 & RA02

  • Linux Foundation - Edge (LF-Edge)

  • GSMA - Operator Platform Group (OPG) & Telco Edge Cloud (TEC)

  • ETSI MEC

  • ETSI NFV

  • Telecom Infra Project (TIP)

    • Working with TIP on requirements gathering, to be adopted from the Telco Edge cloud infrastructure perspective

Anuket Project - Community Guidelines

Introduction

History of Anuket reference specifications

The Cloud iNfrastructure Telco Task Force (CNTT) was founded by AT&T, Vodafone, Verizon, Deutsche Telekom, Orange, and China Mobile. Soon thereafter, additional telco operator and vendor partner participants began to join the Task Force. CNTT reached its first major milestone when it gained sponsorship and support of the GSMA and Linux Foundation Networking in Summer 2019. As of June 2020, there were over thirty operators and partners (VNF suppliers, third-party integrators, hw/sw suppliers) in its member community, and these numbers have continued to grow. CNTT collaborated very closely with OPNFV, and there were dependencies and overlap between the work of the two communities. At the beginning of 2021, CNTT and OPNFV merged under the name Anuket to leverage the synergies between the two projects.

The Anuket project community is leading the industry in creating a common infrastructure reference platform in the form of reference model and reference architecture definitions to better support virtualised and containerised Network Functions for the Telecom industry as a whole. The Anuket community includes many open source development projects to create and support reference implementations, and develop tests for reference certification platforms.

The Anuket project operates under a Technical Steering Committee (TSC) and governing rules documented in the Anuket Charter and in the Anuket Project Operations and Guidelines document.

How to participate

Participating in Anuket does not require any formal membership, and there is nothing to sign except the CLA. Participation is open to anyone, whether you are an employee of an LFN member company or an individual contributor. By participating, you automatically accept the individual anti-trust policies of LFN and GSMA, the joint Terms of Reference of LFN and GSMA, the LFN Code of Conduct, as well as the LFN Trademark policy.

Recommended checklist for participating in the Anuket community:

Adoption

Introduction

It is vitally important for the success of the Anuket reference specifications mission to have as many working Anuket compliant solutions, including infrastructure and VNF/CNF designs from the vendor community, as possible. Obviously, there will be solutions that will not be able to be fully aligned with Anuket reference specification requirements; however, the intention is to make the Anuket reference architectures and implementations map to the real world, so as to make compliance attractive to the broader community. Therefore, a transition plan, an adoption strategy, and an adoption roadmap need to be agreed on within the Anuket community. The intention of this document is to detail the strategy for broader adoption in the larger telecom ecosystem.

Background

Anuket is developing a set of cloud infrastructure specifications to be implemented within telcos to improve the cost effectiveness and speed of deployment of cloud network functions. As part of the specifications development, the organization has built a Reference Model (RM) on which Reference Implementation (RI) and Reference Conformance (RC) standards have been defined. For Anuket to ensure value add to Telco industry operators, suppliers, and end user customers, it is running field tests to validate the feasibility, utility, and effectiveness of its methods (RI/RC standards).

Field Trial Purpose

In the truest form, adoption of a specification is an indication of its success within an industry. Specifications developed must be iteratively tested in multiple environments, or “trialed”, to ensure they are practicable, functional, and operative. Without running trials to validate the Anuket reference specifications approach, specifications may not provide the intended value across a sufficient spectrum of participating entities to be widely adopted. The intentions of these field trials are as follows:

  1. Demonstrate the partnership approach, to validate that the Anuket community is adopting a consistent approach

  2. Validate the RI1 specifications and RC1 test suite, not VNFs or NFVIs

  3. Validate the RI2 specifications and RC2 test suite, not CNFs or CaaSs

Purpose of this Document Section

The purpose of this document is to define the goals/outcomes, expectations, and roles necessary to support the Anuket release trials. The document will define/discuss the following:

  • Purpose of field trials

  • Goals/desired outcomes of the field trials

  • Success indicators

  • Intentions and expectations

  • Action plan

  • Resource requirements

  • Metrics definition

Adoption Strategy
Expectations from Operators
Expectations from Vendors
Expectations from Industry
Adoption Roadmap
Transition Plan

A transition plan is needed to address technology components that do not presently conform to Anuket reference specification principles, and hence require explicit direction on how the situation will be treated in the present, as well as plans for the future. The plans might be that the component will be added to the Anuket reference specification corpus in a future release, or remain outside of the main body, depending on the nature of the given technology or system. For example, a technology might be proprietary to a specific vendor yet have become a de facto standard; it would not be part of the reference, but might be referred to due to its widespread adoption by the industry.

The transition plan described here informs application designers on how the Reference Conformance, and ultimately the industry certification programs will manage and document exceptions encountered during the badging process. The actions taken might include flagging warnings and potential errors caused by the variance from the Anuket conformance levels, which could prevent issuance of a certification badge.

Conformance Levels
  • Fully Conformant: VNFs/CNFs or Cloud Infrastructure designed and developed to be fully conformant to Anuket reference specifications with no use of any of the allowed Exceptions.

  • Conformant with Exceptions: VNFs/CNFs or Cloud Infrastructure written and designed to be conformant to Anuket reference specifications with one or more of the allowed Exceptions used.

Exception Types
  • Technology Exceptions: The use of specific technologies that are considered non-conformant to Anuket reference specification principles (such as PCIe Direct Assignment, or the exposure of hardware features to VNFs/CNFs).

  • Version Exceptions: Using versions of Software components, APIs, or Hardware that are different from the specifications.

Transition Framework
Transition Plan Framework

Exceptions will be clearly recorded in a given Reference Architecture’s Appendix. That document provides guidance to NFVI vendors on what Exceptions will be allowed in each Anuket release. Figure 1 below demonstrates the concept.

  • It is expected that over time, as technology matures, there will be a decreasing number of Exceptions allowed in Anuket releases.

  • For each Anuket Release, the Cloud Infrastructure can be either Fully Conformant or Conformant with Exceptions.
    • Fully Conformant: Supports the Target Reference Architecture without any exceptions. (There may be a technology choice in the RA to support RM Exceptions; however, none of the Exceptions allowed in the RA has been used.)

    • Conformant with Exceptions: One or more of the allowed exceptions in RA are used.

Transition Plan for cloud infrastructure solutions within Anuket reference specifications

VNF/CNF Transition Plan Framework

Exceptions will be clearly recorded in the appropriate specification Appendix, which will serve as guidance to VNF/CNF application vendors on what Exceptions will be allowed in each Anuket release. Figure 2 below demonstrates the concept.

  • It is expected that over time, as technology matures, there will be a decreasing number of Exceptions allowed in Anuket releases.

  • For each Anuket Release, VNF/CNF can be either:
    • Fully Conformant: No Exception used.

    • Conformant with Exception: One or More of the allowed Exceptions in the Reference Model have been used.

Transition Plan for VNFs/CNFs within Anuket reference specifications

Anuket Field Trial Approach

This portion of Chapter 9 is segmented into two subsections. Section 9.5.1 provides a summary and overview of the trials activities specifically targeted to potential trials participants. Section 9.5.2 addresses the overall CNTT approach to field trials as a method of ensuring consistency between releases.

Summary/Field Trials Participants Overview

Reference Implementation (RI1) and Reference Conformance (RC1) requirements are defined by the Reference Architecture (RA1). To ensure that Telecom industry operators, suppliers, and end user customers will derive benefit from the effort, Anuket is running field tests to validate the feasibility, utility, and effectiveness of its requirements and methods (RI1/RC1).

Field Trials Intentions

The field trials are viewed as a partnership of Anuket with participants to validate that the community is adopting a consistent approach. This is not a VI badging exercise. The trials will validate the RI1 and the RC1 test suite requirements and methods themselves, not VNFs or VI systems under test.

Expectations and Assumptions of Field Trials

Anuket expects to exit the trials with either validation of RI1 and RC1 or a set of actions to review and possibly modify the RI1 or RC1 to address any gaps identified. By taking advantage of the community continuous improvement process, the lessons learned in the field trials will be applied to the badging processes to refine/define the criteria with the intention of making the badges meaningful and mutually beneficial to operators and suppliers. Performance testing is not included in the field trials.

Pre-trials activities

Prior to the commencement of any field trials, the Anuket community will define an operational plan, secure resources, and provide all designated contact information required to support trial participants. As the results of the trials may produce data and information that could be considered sensitive by participants, Anuket will establish standard data set requirements and secure collection methods to ensure participant privacy protection.

Expectations of Trials Participants

Trials participants will be expected to commit to establishing an RA1 compliant NFVI or an RA2 compliant CaaS, in whatever manner best suits the participant. The first step is for the participant to secure appropriate environment space from pre-existing space, newly built space, or by securing LaaS. The environment can exist in any mix of participant owned, private, or community hardware infrastructure.

Second, the participant will build/setup/configure the environment space using their preferred method. This can include the use of a cookbook, automated install, and/or a build from the RA1/RI1 or RA2/RI2 requirements and specifications. The CNTT RI1 Chapter 3 and RI2 Chapter 3 documentation provides the matching RI requirements for the build.

Expectation 2: Execute the RC1 or RC2 Test suites

Anuket will provide the participants with the community conformance test suites (RA1, Chapter 8) or the Anuket Specifications test suites. The participants will execute the test cases per instructions and record the quantitative results.

The test case suite should be executed successfully at least three (3) times; this is the recommended number of test suite runs to eliminate false positives in the results. A triage process will be used to determine and manage root cause analysis of any failures encountered. If the failures are determined to be issues with the participant’s VI, Anuket will convey the issues to the RI work stream and make available SMEs to assist the participant in resolving the issues. When failures are deemed to be caused by an issue or gap in the RA/RI/RC, the community will work to determine the resolution, and modify the RA/RI/RC accordingly.

Once the test case suite execution is successful for 3 consecutive iterations, the participant will provide the data of all iterations (both successful and unsuccessful) to Anuket, based on participant privacy expectations (see Expectation 4). A minimal sketch of such an iteration loop is shown below.
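
The sketch assumes a hypothetical run_conformance.sh wrapper around the RC1/RC2 test suite; real trials should invoke the suite per its own instructions.

```python
# Minimal sketch (illustrative, not normative): running the conformance suite
# three times and recording every outcome. "run_conformance.sh" is a
# hypothetical wrapper; substitute the actual RC1/RC2 invocation.
import json
import subprocess
from datetime import datetime, timezone

results = []
for iteration in range(1, 4):  # three runs, to weed out false positives
    proc = subprocess.run(["./run_conformance.sh"], capture_output=True, text=True)
    results.append({
        "iteration": iteration,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "returncode": proc.returncode,  # 0 == suite passed
    })

# Record all iterations, successful and unsuccessful, for submission.
with open("trial_results.json", "w") as fh:
    json.dump(results, fh, indent=2)
```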

Expectation 3: The Qualitative Survey

At the conclusion of the test case iterations, the participant will be asked to complete a qualitative survey of their experience. This survey will be used to measure the feasibility, utility, and effectiveness of the RI1 specifications, the installation/configuration methods, and the RC1 test case efficacy. The survey will be in an Agile User Story format. The table below provides an example of the survey questions:

Table 1: Survey/Questionnaire example

Expectation 4: Providing Trials Results

As a community, Anuket is concerned with the privacy of participant data. Anuket abides by the LFN anti-trust policies and the LFN Privacy Policy. As discussed in the Pre-trials activity section of the document, data generated by the trials will be secured to protect participant privacy. Additionally, should participants have concerns regarding the data they generate from the trials, Anuket will either work with the participant to eliminate their concerns, honor instructions from the participant on limitations to the data use, or agree to exclude that participant’s data from the analysis.

Conclusion: Final Deliverable - End-of-Trial Report

Upon completion of the field trials, Anuket will write an End-of-Trial Report which summarizes the overall conclusions based on the evaluation. The report will include:

  1. Successes: What activities went well both generally and specifically? How did it compare to past or alternative results?

  2. Challenges: What did not go well overall? What impact could these challenges have on future community adoption?

  3. Discoveries: What are key discoveries/strategic learnings about any of the Anuket approaches or methods? Other?

  4. Decisions and Recommendations: Identification of the key decisions made and a list of what corrective actions shall be taken. What should be changed, enhanced, maintained, or discontinued?

  5. Next Steps: Indication of proposed steps and activities to be undertaken by the community to further the objectives of the Anuket work group.

Anuket Field Trials Approach
Key Expectations and Assumptions
  1. Expectation: Through healthy feedback from suppliers, Anuket will exit the trial with either validation of RI1, RI2, RC1 and RC2 or a set of actions to close gaps.

  2. Expectation: Post trial and gap closure, the community will define a badging process that is mutually beneficial to operators and suppliers.

  3. Assumption: Performance testing is not in field trial.

Overview: Stages of Field Trial

The following diagram shows the key components and flow of activities, actions, and deliverables to be undertaken during the trial. Details of each component are provided in this document.

Field Trial Approach

Success Indicators
  1. Agreement secured on the use of trials results data, including:

    1. Level of data detail required to validate the results

    2. Acceptable data values indicating valid results

    3. Level of data detail that will be published

  2. Vendor Implementation (VI) Labs are successfully deployed in all target environments

    • Vendor (NFVI, VNF, VIM, 3rd Party)

    • Community (Anuket)

    • LaaS (e.g. UNH)

  3. Engaged vendors successfully configure their Cloud Infrastructure and run the RC1 or RC2 test suite and are able to provide expert feedback

  4. Engaged vendors are able to validate that they can instantiate and run rudimentary validation of VNF functionality on more than one conformant cloud infrastructure (NFVI)

Initiation
Objectives of RI1/RC1 Trials

The objective is to quantitatively and qualitatively assess and evaluate the following Anuket requirements, methods, and support processes:

  • RA1 or RA2 Specifications

  • Cloud Infrastructure implementation support methods (i.e., cookbooks, installation manuals, how-to guides, etc.)

  • RC1 or RC2 Test Suite

  • TC Traceability

  • Test Pass Criteria

  • Benchmark Data

  • Other criteria to be determined at commencement or during the execution of the trial

Overall, the feedback from the trials, and the issues and gaps found, shall be used to enhance and improve the CNTT approach. Enhancements to future releases shall be identified accordingly.

Trial Participant Interaction with the Community

The focus of the field trials is on the test suites and Anuket methods, not on the systems under test. A process is being developed to identify issues and gaps and to manage how they are reported.

Anuket will work very closely with field trial partners (NFVI vendors, VNF vendors, or system integrators) and agree on labs that will be used for the trial. Anuket will take all necessary measures to protect the intellectual property rights (IP rights) for all partners involved in those trials. All Reports and findings will be vetted carefully and only published after being approved by all parties concerned. No test results or records will be kept in any public records without the consent of the participants.

The targeted repositories for this information are:

Anuket GitHub

  • GitHub Code

  • GitHub Projects

  • GitHub Issues

Test Case Identification

Specific test cases for the field trials will be documented and provided to the participants based upon the CNTT RI1 and RC1 work streams requirements. The technical testing methods, procedures and documentation shall be provided by these work streams.

Vendor Solicitation/Commitment

Vendor members will be solicited for participation in the trials. The vendors will be required to commit fully to the assessment and evaluation processes. As previously mentioned, additional discussion is needed to define what results data and at what level of detail is acceptable to be shared.

RI1/RC1 Trial Deliverable

The Initiate Field Trial Stage will deliver execution and assessment plans including:

  • A high-level checklist of the tasks each participant will need to complete shall be provided.

  • The plan will contain all the key milestones and activities the participants will be expected to perform.

Execution Stage
Objectives of the Execute Stage

The objective of the Execute Stage is for participants to implement the field trial tasks and to record and assess the outcomes. Anuket will assemble the Trials team to fully develop the action plan, including resource assignments, materials requirements, and timelines.

Activities include the deployment and configuration of VI and execution of the RC1 test cases. Vendor community members that commit to the trials will build/setup/prep labs for the trials per the instructions:

  1. Secure appropriate environment space (pre-existing, new build, LaaS)

  2. Deploy the VI per the published RI1 specifications

  3. RC1 or RC2 Test suite will be provided to the participants

  4. Trial Participants ensure a complete understanding of the test suite actions and expected outcomes.

Running the Field Trial

The field trial will run the Test Suite for 3 Iterations. For each iteration:

  • Vendor RC1 or RC2 test results are documented. Vendors provide feedback to Anuket.

  • Anuket RC1 or RC2 test results are documented. Feedback is recorded.

The Community shall review Issues/Gaps during the evaluate stage and do one of the following:

  • Accept the Issue/Gap, and accordingly modify the RI/RC

  • Not-Accept the Issue/Gap and document the condition of non-conformance while maintaining the privacy of participants

Resources and Roles

Anuket will staff the plan by soliciting volunteers from the participants. The list below is a suggested list of roles to be staffed:

  • Overall Field Trial Lead

  • Technical Field Trial Steering Lead

  • Vendor lead from each supplier

  • SME(s) for RC1 or RC2 supporting suppliers

  • SME(s) for RI1 or RI2 supporting suppliers

  • SME(s) for RI1/RC1 or RI2/RC2

  • Other support roles such as Governance, technical writers, etc.

The participants that volunteer for the roles will be expected to provide the appropriate amount of time to support the trials initiative.

Execution Stage Deliverables

The deliverables of the execute stage will be:

  • Implemented Participant RA1 or RA2 Labs which have been tested.

  • RC1 or RC2 Test cases are run.

Assessment

The Assess stage shall utilise the data collected during the execute stage. Participants will assess their experience with the methods used by Anuket, to quantitatively and/or qualitatively measure the following:

Required Assessments
  • Cloud Infrastructure Implementation methods and procedures (cookbook, etc)

  • RA1 or RA2 Specifications

  • RC1 or RC2 Test Suite

  • Test Case (TC) Traceability

  • Test Pass Criteria

  • Benchmark Data

  • Other?

Optional Assessments (Pre-Launch Trials only)

Instantiation

  • Smoke test: level of verification and validation

  • Non-functional testing

  • Stand-up with only key operations working

Anuket will also assess its experience with the methods used by the reference specifications across the following operational areas:

  1. Mechanism for Reporting Issues / Receiving Status

  2. Results Collation and Presentation

  3. Support Availability

    • SME (Human)

    • Materials

  4. Release Notes

  5. Other?

Measuring Outcomes
Qualitative Outcomes

Participants and project teams will be provided a questionnaire based upon a set of user stories related to the field trial. Questionnaire responses will be used in the Evaluate phase.

Quantitative Outcomes

Technical outcomes, i.e. technical test results, will be collected and managed by the RI1/RC1 work streams based upon participants' privacy preferences.
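
As an illustration of honouring those privacy preferences during collation, the sketch below assumes a simple per-participant consent flag and result shape (hypothetical conventions, not defined by Anuket) and withholds or redacts results accordingly.

    # Minimal sketch of privacy-aware collation of quantitative results.
    # The consent values and result fields are assumptions for illustration.
    def collate_for_publication(results: list[dict]) -> list[dict]:
        """Keep consented results; strip identity from anonymised entries."""
        published = []
        for r in results:
            if r.get("consent") == "public":
                published.append(r)
            elif r.get("consent") == "anonymous":
                # Redact the participant identity before publication.
                published.append({k: v for k, v in r.items() if k != "participant"})
            # Entries without consent are withheld entirely.
        return published

    results = [
        {"participant": "vendor-a", "consent": "public", "pass_rate": 0.97},
        {"participant": "vendor-b", "consent": "anonymous", "pass_rate": 0.91},
        {"participant": "vendor-c", "consent": None, "pass_rate": 0.88},
    ]
    print(collate_for_publication(results))  # vendor-c's results are withheld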

Deliverables:

  • Feedback from the participants on their outcomes, provided to Anuket.

  • Completed questionnaire and test case results (participant).

Evaluation Stage

Proving the ‘right’ value to the operator and vendor community is ultimately what will ensure adoption of Anuket requirements. These field trials are intended to verify and validate the requirements and methods developed by Anuket so that adjustments can be made to ensure the intended value is being delivered.

Anuket shall evaluate all feedback and test results to understand whether Anuket methods and measures are meeting the intended objectives. If a method or measure is not meeting its intended purpose, it shall be identified as a gap or an issue for resolution. Determinations of if and when adjustments or adaptations are needed shall be made by the Anuket community.

All identified gaps and issues shall be captured in the Anuket reference specifications GitHub repository. Decisions and determinations will be captured and logged accordingly.
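
As one possible mechanism for this capture, gaps could be filed programmatically through the GitHub REST API (POST /repos/{owner}/{repo}/issues), as sketched below. The repository name and labels are illustrative assumptions, not the actual Anuket repository layout.

    # Hedged sketch: filing an identified gap as a GitHub issue.
    # Requires the third-party "requests" package and a token in GITHUB_TOKEN.
    import os
    import requests

    def file_gap(title: str, body: str) -> int:
        """Open a GitHub issue for a gap and return the new issue number."""
        resp = requests.post(
            # Hypothetical repository path, for illustration only.
            "https://api.github.com/repos/anuket-project/example-specs/issues",
            headers={
                "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
                "Accept": "application/vnd.github+json",
            },
            json={"title": title, "body": body, "labels": ["field-trial", "gap"]},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["number"]

    # file_gap("RC1 gap: test X fails on RI1 labs", "Observed during iteration 2 ...")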

Closeout Stage

To close out the Field Trial, Anuket shall summarize its evaluation of the Field Trial and the actions to be taken to address any adaptation needed.

Final Deliverable - End-of-Trial Report

Upon completion of field trials, Anuket shall develop an End of Trial Report which summarizes the overall conclusions based on the evaluation, to include:

  • Successes - What activities went well, both generally and specifically? How did the results compare to past or alternative results?

  • Challenges - What didn't go well overall? What impact could these challenges have on adoption?

  • Discoveries - What are the key discoveries and strategic learnings about any Anuket approaches or methods? Other?

  • Decisions and Recommendations - Identification of the key decisions made and a list of the corrective actions to be taken. What to enhance, maintain, or discontinue?

  • Next Steps - Indication of the proposed steps and activities to be undertaken by the community

Contributor Covenant Code of Conduct

Our Pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

Our Standards

Examples of behavior that contributes to creating a positive environment include:

  • Using welcoming and inclusive language

  • Being respectful of differing viewpoints and experiences

  • Gracefully accepting constructive criticism

  • Focusing on what is best for the community

  • Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

  • The use of sexualized language or imagery and unwelcome sexual attention or advances

  • Trolling, insulting/derogatory comments, and personal or political attacks

  • Public or private harassment

  • Publishing others’ private information, such as a physical or electronic address, without explicit permission

  • Other conduct which could reasonably be considered inappropriate in a professional setting

Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

Scope

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at anuket-tsc@lists.anuket.io. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project’s leadership.

Attribution

This Code of Conduct is adapted from the Contributor Covenant, version 1.4.

For answers to common questions about this code of conduct, see the FAQ.

Version information

Version: Orinoco

Release Date: 25th July 2023

Version history

Release    Date

Snezka     10th January 2020
Baldy      15th May 2020
Baraque    25th Sep 2020
Elbrus     29th Jan 2021
Kali       1st Jul 2021
Lakelse    4th Jan 2022
Moselle    21st Jun 2022
Nile       20th Dec 2022
Orinoco    25th July 2023