VMware Telco Cloud Service Assurance Deployment Guide


VMware Telco Cloud Service Assurance 2.0.0

You can find the most up-to-date technical documentation on the VMware website at:

https://docs.vmware.com/

VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com

Copyright © 2022 VMware, Inc. All rights reserved. Copyright and trademark information.


Contents

1 Introduction
    Audience

2 Deployment Architecture

3 System Requirements for VMware Telco Cloud Service Assurance
    Non HA-Based System Requirements
    HA-Based System Requirements

4 Ports and Protocols

5 Software Versions and Interoperability

6 Prerequisites for VMware Telco Cloud Service Assurance

7 Deploy VMware Telco Cloud Service Assurance
    Deployment Overview
    Download the Package
    Extract the Downloaded Package and Installer
    Configure Deploy.Settings File
        Configure Deploy.Settings File for TKG
        Configure Deploy.Settings File for AKS
    Download and Launch the Deployment Container
        Download the Deployment Container
        Launch the Deployment Container
    DarkSite Deployment
    Trigger Deployment Using the Installation Script

8 Configuring VMware Telco Cloud Service Assurance and Domain Manager
    Accessing VMware Telco Cloud Service Assurance UI
    Installing Domain Manager

9 Incremental Scaling of VMware Telco Cloud Service Assurance on TKG and AKS
    Incremental Scaling on TKG
    Incremental Scaling on AKS

10 Configuring TKG with VMware Telco Cloud Automation
    Steps to Deploy VMware Telco Cloud Service Assurance on TKG with VMware Telco Cloud Automation
    Deploying TKG Management Cluster
    Deploying TKG Workload Cluster
    Configuring Harbor Registry in VMware Telco Cloud Automation
    Configure TKG Cluster for Secure Harbor Registry
    Obtaining KUBECONFIG File from TKG Workload Cluster

11 Uninstall VMware Telco Cloud Service Assurance Deployment

12 Troubleshooting Deployment


1 Introduction

The VMware Telco Cloud Service Assurance Deployment Guide provides information on how you can deploy VMware Telco Cloud Service Assurance on VMware Tanzu Kubernetes Grid (TKG) and Azure Kubernetes Service (AKS).

Characteristics of VMware Telco Cloud Service Assurance:

n Deploys on AKS cluster

n Deploys on TKG cluster

n Offers different footprint options for deployment based on the number of managed devices (see the following table). See the System Requirements for VMware Telco Cloud Service Assurance section for more details about footprint options. Some footprint sizes allow you to choose either a High Availability (HA) or non-HA version. The HA version requires more nodes, as listed in the footprint specification.

Footprint   Number of Devices
2.5 K       Up to 2,500 managed devices
25 K        Up to 25,000 managed devices
50 K        Up to 50,000 managed devices
100 K       Up to 100,000 managed devices

This chapter includes the following topics:

n Audience

Audience

This guide assumes that the reader has experience with the following:

n Linux system administration

n Basic understanding of Docker commands

n Familiarity with configuring TKG and AKS clusters


2 Deployment Architecture

VMware Telco Cloud Service Assurance implements an architecture that is outlined and defined at a high level through logical building blocks and core components.

n Domain Manager (IP, SAM, and ESM) provides services that control the access and discoverability of both the physical and virtual infrastructure. In addition, the Domain Manager components collect and forward information related to the topology, status, and health of the system, in the form of events and metrics that are processed through an ingestion pipeline.

n The data then moves to the VMware Telco Cloud Service Assurance cloud-native components, shown in the top right-hand corner of the diagram, which run on a Kubernetes workload cluster.


n The Smarts collector collects and processes the data in real time according to enrichment, alarming, and analytics policies, which are used to discover and monitor the overall network.

n The VMware vROps, NFV-SOL, and Kafka collectors provide interfaces to other external sources of information. These sources supplement the information available from Domain Manager and allow stitching of topology and event information from sources including CNF vendors and VMware products compatible with the VMware Telco Cloud Service Assurance solution, such as VMware Telco Cloud Automation and VMware vRealize Operations.


3 System Requirements for VMware Telco Cloud Service Assurance

This section describes the VMware Telco Cloud Service Assurance on TKG deployment and the static IP address requirement, in addition to the High Availability (HA) and non-HA requirements for deploying VMware Telco Cloud Service Assurance.

VMware Telco Cloud Service Assurance on TKG Deployment

The following diagram illustrates the VMware Telco Cloud Service Assurance on TKG deployment.

VMware Telco Cloud Service Assurance can run on TKG by deploying the product on a TKG workload cluster. In addition to the TKG workload cluster, you must create a management cluster under which you can manage multiple workload clusters. Any TKG deployment must have at a minimum one management cluster and one workload cluster. VMware Telco Cloud Service Assurance must be installed in the workload cluster that meets the resource requirements of the desired footprint of the product.

Static IP Address Requirement for Kubernetes Control Plane


A set of static virtual IP addresses must be available for all the clusters that you create, including both management and Tanzu Kubernetes Grid clusters.

n Every cluster that you deploy to vSphere requires one static IP address for Kube-Vip to use for the API server endpoint. You specify this static IP address when you deploy a management cluster. Make sure that these IP addresses are not in the DHCP range but are in the same subnet as the DHCP range. Before you deploy management clusters to vSphere, make a DHCP reservation for Kube-Vip on your DHCP server. Use an auto-generated MAC Address when you make the DHCP reservation for Kube-Vip so that the DHCP server does not assign this IP to other machines.

n Each control plane node of every cluster that you deploy requires a static IP address. This includes both management clusters and Tanzu Kubernetes Grid clusters. These static IP addresses are required in addition to the static IP address that you assign to Kube-Vip when you deploy a management cluster. To make the IP addresses that your DHCP server assigned to the control plane nodes static, you can configure a DHCP reservation for each control plane node in the cluster, after you deploy it. For instructions on how to configure DHCP reservations, see your DHCP server documentation.

For more information, see the VMware Tanzu Kubernetes Grid documentation.
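The DHCP reservations described above can be expressed in whatever DHCP server you run. As an illustrative sketch only (not part of the product documentation), if a lab DHCP service runs on dnsmasq, a reservation for a control plane node outside the dynamic pool but in the same subnet might look like the following; the MAC address, hostname, IP addresses, and range are all hypothetical:

```
# /etc/dnsmasq.conf -- dynamic pool for the subnet
dhcp-range=192.168.10.50,192.168.10.150,12h

# Reserve a fixed address for a TKG control plane node, outside the
# dynamic pool but in the same subnet, so DHCP never hands this IP
# to another machine.
dhcp-host=00:50:56:ab:cd:ef,tkg-cp-0,192.168.10.21,infinite
```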

This chapter includes the following topics:

n Non HA-Based System Requirements

n HA-Based System Requirements

Non HA-Based System Requirements

If you are deploying VMware Telco Cloud Service Assurance in the Non-High Availability (HA) mode, ensure that your system meets the following requirements for the 2.5 K footprint.

The 2.5 K footprint is for non-production deployment without HA capabilities and cannot be incrementally scaled.

Table 3-1. 2.5 K Footprint for TKG Management Cluster

Number of VMs   vCPU Per VM   RAM Per VM (GBs)   Role
1               2             8                  Control Plane Node
1               2             8                  Worker Node

Note The table shows the TKG management cluster sizing for deployments when a dedicated TKG management cluster is used for the TKG workload cluster in VMware Telco Cloud Service Assurance. To size deployments when multiple workload clusters are managed by a single management cluster, see VMware TKG Documentation.


Table 3-2. 2.5 K Footprint for TKG Workload Cluster

Number of VMs   vCPU Per VM   RAM Per VM (GBs)   Local Disk Per VM (GBs)   Total Persistent Volume Storage (TBs)   Role
1               2             8                  50                        NA                                      Control Plane Node
5               16            32                 200                       3                                       Worker Node

Table 3-3. 2.5 K Footprint for AKS Workload Cluster

Number of VMs   vCPU Per VM   RAM Per VM (GBs)   Local Disk Per VM (GBs)   Total Persistent Volume Storage (TBs)
5               16            32                 200                       3

Note By default in AKS, the first three worker nodes can also act as control plane nodes.

For the 2.5 K footprint, the recommended AKS VM size template is Standard_F16s_v2.

HA-Based System Requirements

If you are deploying VMware Telco Cloud Service Assurance in the High Availability (HA) mode, ensure that your system meets the following requirements and the deployment scaling requirements.

Footprint Specification

The following tables specify the requirements for deploying different footprints, including the number of VMs required for each type of cluster. This is particularly important when HA is required. The tables provide the number of virtual CPUs, main memory (RAM), and the total disk size for each virtual machine.

Table 3-4. Footprint for TKG Management Cluster

Footprint Size   Number of VMs   vCPU Per VM   RAM Per VM (GBs)   Role
25 K             3               2             8                  Control Plane Node
                 2               2             8                  Worker Node
50 K             3               4             16                 Control Plane Node
                 2               4             16                 Worker Node
100 K            3               4             16                 Control Plane Node
                 2               4             16                 Worker Node

Note The table shows the TKG management cluster sizing for deployments when a dedicated TKG management cluster is used for the TKG workload cluster in VMware Telco Cloud Service Assurance. To size deployments when multiple workload clusters are managed by a single management cluster, see VMware TKG Documentation.

Table 3-5. Footprint for TKG Workload Cluster

Footprint Size   Number of VMs   vCPU Per VM   RAM Per VM (GBs)   Local Disk Per VM (GBs)   Total Persistent Volume Storage (TBs)   Role
25 K             3               2             8                  50                        NA                                      Control Plane Node
                 10              16            64                 200                       26                                      Worker Node
50 K             3               4             16                 50                        NA                                      Control Plane Node
                 14              16            64                 200                       41                                      Worker Node
100 K            3               4             16                 50                        NA                                      Control Plane Node
                 20              16            64                 200                       78                                      Worker Node

Table 3-6. Footprint for AKS Workload Cluster

Footprint Size   Number of VMs   vCPU Per VM   RAM Per VM (GBs)   Local Disk Per VM (GBs)   Total Persistent Volume Storage (TBs)
25 K             10              16            64                 200                       26
50 K             14              16            64                 200                       41
100 K            20              16            64                 200                       78

Note By default in AKS, the first three worker nodes can also act as control plane nodes.

For the 25 K, 50 K, and 100 K footprints, the recommended AKS VM size template is Standard_D16s_v3.


Performance and Scalability for Different Deployments

The following table provides sample managed capacity for each footprint. The 50 K and 100 K footprints have been tested to handle all the noted total capacities on a single instance of VMware Telco Cloud Service Assurance. For the 25 K footprint, the numbers are carried over from the 1.4 release; however, the following table also shows random sample validation for the 25 K footprint.

Footprint                                                            Small (HA) 25 K   Small-Medium (HA) 50 K   Medium (HA) 100 K
Number of Devices                                                    25 K              50 K                     100 K
Number of Unique Events or Notifications per Day                     25 K              50 K                     100 K
Number of Metrics per Five Minutes                                   10 million        20 million               40 million
Number of Routers or Switches                                        15 K              30 K                     60 K
Managed P and I                                                      300 K             600 K                    1.2 million
Number of Hosts                                                      2 K               4 K                      8 K
Number of VMs                                                        5 K               10 K                     20 K
Number of CNFs                                                       10 K              10 K                     10 K
Number of Pods                                                       10 K              10 K                     10 K
Total Number of Events (= Number of Devices * 4 + External Events)   105 K             205 K                    500 K
Number of Raw Metrics from Domain Manager Metric Collector
per Five-Minute Polling Interval                                     9 million         14 million               29 million
Kafka to Kafka Collector Metrics per Five-Minute Polling Interval    1 million         6 million                11 million
Total Number of Concurrent APIs                                      100               100                      100
Number of Concurrent Users                                           10                10                       10
Total Number of Users                                                200               200                      200
Maximum Number of Events from VMware vROps to VMware Telco
Cloud Service Assurance per Five-Minute Polling Interval             6 K               6 K                      6 K
Number of Notifications Processed per Second                         350               450                      450
Data Synchronization of Topology in VMware Telco Cloud
Service Assurance UI                                                 6 minutes         8 minutes                10 minutes
Number of Metrics That Can Be Exported to External Kafka
per Five-Minute Polling Interval                                     10 million        20 million               40 million
Bandwidth Utilization for Storage Traffic                            33 Mbps           65 Mbps                  135 Mbps
Total Disk IOPS (Read + Write)                                       1000              2000                     4000
Native Traffic Flow Metrics                                          1.5 K             2.5 K                    5 K

Note For native traffic flow, the scale support mentioned in the table assumes that no other source of metric data is flowing into VMware Telco Cloud Service Assurance. If other sources of metric data are flowing into VMware Telco Cloud Service Assurance, reduce the ratio of traffic flow data accordingly.
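The total-events row in the table follows the formula Total Events = Number of Devices * 4 + External Events. The following sketch back-computes the implied external events for each footprint from the device and total values in the table; it is a sanity check, not part of the product tooling:

```shell
# Back-compute implied external events per footprint:
#   external = total - devices * 4
# Device/total pairs are taken from the table above.
for pair in "25000 105000" "50000 205000" "100000 500000"; do
  set -- $pair
  devices=$1
  total=$2
  external=$((total - devices * 4))
  echo "devices=$devices total=$total implied_external_events=$external"
done
```

For the 25 K and 50 K footprints this implies 5 K external events; for the 100 K footprint, 100 K external events.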


4 Ports and Protocols

If you manage network components from outside a firewall, you might be required to reconfigure the firewall to allow access on the appropriate ports.

For the list of all supported ports and protocols in VMware Telco Cloud Service Assurance, see the VMware Ports and Protocols Tool™ at https://ports.vmware.com/home/ and select the product from the list in the left pane.


5 Software Versions and Interoperability

This section provides a list of software and interoperability versions for different products. The interoperability information defines the qualified products and software versions you can use with VMware Telco Cloud Service Assurance.

Table 5-1. Supported Versions on Tanzu Kubernetes Grid

Product or Suite                                   Versions
VMware Telco Cloud Platform 5G                     2.2
VMware Telco Cloud Platform vRAN                   1.5
VMware Tanzu Kubernetes Grid Kubernetes Cluster    1.20.14, 1.21.8
VMware Telco Cloud Automation                      2.0.1
VMware Tanzu Kubernetes Grid                       1.4.2

Table 5-2. Supported Versions for Azure Kubernetes Service

Product Versions

AKS 1.21.9, 1.22.6

Table 5-3. Supported Platform Versions for Domain Manager (IP, SAM, and ESM)

Product        Versions
IP, SAM, ESM   RHEL 7.7, 7.8, 7.9, 8.2, 8.3, 8.4
SAM Console    n RHEL 7.8 or 7.9
               n Windows 2012 or 2016

Note The SAM Console requires a Windows operating system running on an Intel-based processor.

Note Any RHEL-based Linux platform is supported for IP, SAM, and ESM.


Table 5-4. Interoperability Support for Discovery and Monitoring

Product Versions

VMware vSphere and ESXi 7.0, 7.0U1, 7.0U2, 7.0U3c

VMware NSX-T 3.2.0.1

VMware vCloud Director 10.3.2

VMware vRealize Operations 8.6.2, 8.4

VMware Integrated OpenStack 7.0

VMware Telco Cloud Automation 2.0.1

Table 5-5. Supported Versions for Browser

Browser Versions

Google Chrome 99 or later.

Mozilla Firefox 91 or later.

Table 5-6. Supported Version for Container Registry

Container Registry Version

Harbor 2.2.0 or later.


6 Prerequisites for VMware Telco Cloud Service Assurance

This section lists the prerequisites for deploying VMware Telco Cloud Service Assurance.

Prerequisites for Deploying VMware Telco Cloud Service Assurance

The following requirements must be met before deploying VMware Telco Cloud Service Assurance:

n For deployment host:

n A Linux x86 64-bit host with Docker installed.

n This machine must have connectivity to:

n The public internet to download the VMware Telco Cloud Service Assurance package from VMware Customer Connect.

n The Kubernetes cluster.

n The Container registry.

n Verify that you meet the Deployment Container prerequisites for the deployment host.

n The deployment user home partition must be a minimum of 40 GB.

n If you are deploying VMware Telco Cloud Service Assurance on TKG, verify that you meet the TKG specific prerequisites.

n If you are deploying VMware Telco Cloud Service Assurance on AKS, verify that you meet the AKS specific prerequisites.

Prerequisites for Setting up Deployment Container

n Familiarity with Linux and Docker commands is required.

n Any Linux-based platform with Docker installed, preferably RHEL or CentOS.

n Allocate 40 GB of hard disk space for storing VMware Telco Cloud Service Assurance files.

n To download the VMware Telco Cloud Service Assurance tar.gz file and the Deployment Container, verify that the host has internet access.


n Verify that the host time zone, date, and time settings correspond to the zone where VMware Telco Cloud Service Assurance is installed. For example, in AKS, it must match the East US or West US time zone.

n Verify that NTP service is configured on the deployment host.

n Install Docker on the deployment host. Make sure you use Docker version 20.10.14 or later.

n Ensure that your deployment host is authenticated with your Container Registry. Run the following command on your deployment host and type in the credentials to your registry.

docker login <registry-fqdn>

After you run the command, Docker prompts you for the registry username and password. Enter the Harbor or ACR registry username and password so that you do not have to update the registry username and password in the deploy.settings file.

Note Verify that the config.json file is present on your host. Typically, config.json is under ~/.docker. If not, use the following commands to create the file:

# Create an empty config file
$ mkdir -p ~/.docker
$ echo {} > ~/.docker/config.json

n Verify that your Kubernetes cluster's KUBECONFIG file is copied to the $HOME/.kube/ directory on your deployment host.
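The checks above can be scripted as a minimal pre-flight sketch for the deployment host. This is not part of the product tooling; it only reports on the prerequisites named in this section (Docker installed, registry credentials file present, KUBECONFIG copied) and you should adapt it to your environment:

```shell
#!/bin/sh
# Minimal pre-flight check for the deployment host prerequisites.
fail=0

# Docker 20.10.14 or later must be installed.
if command -v docker >/dev/null 2>&1; then
  echo "docker found: $(docker --version 2>/dev/null)"
else
  echo "docker: NOT FOUND"
  fail=$((fail+1))
fi

# ~/.docker/config.json must exist (created by 'docker login <registry-fqdn>').
if [ -f "$HOME/.docker/config.json" ]; then
  echo "registry credentials file present"
else
  echo "~/.docker/config.json missing - run 'docker login <registry-fqdn>' first"
  fail=$((fail+1))
fi

# The workload cluster KUBECONFIG must be copied to $HOME/.kube/.
if [ -n "$(ls -A "$HOME/.kube" 2>/dev/null)" ]; then
  echo "KUBECONFIG directory populated"
else
  echo "\$HOME/.kube/ is empty - copy the cluster KUBECONFIG there"
  fail=$((fail+1))
fi

echo "pre-flight check complete (failures: $fail)"
```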

Prerequisites for Deploying VMware Telco Cloud Service Assurance on TKG with VMware Telco Cloud Automation

n Access to VMware Customer Connect, from where you can download the deployer package and the Deployment Container.

n Harbor must be deployed and a project must be created in Harbor with public access.

n Ensure that you have the KUBECONFIG file for deploying VMware Telco Cloud Service Assurance in the TKG workload cluster. For more information, see Obtaining KUBECONFIG File from TKG Workload Cluster.

n Verify that you have a TKG workload cluster with an available vsphere-sc storage class, for example, vSAN.

n Note the Kubernetes workload cluster virtual IP after deploying the workload cluster. Use this virtual IP when updating the deploy.settings file.

n The recommended registry for TKG is Harbor.


Prerequisites for Deploying VMware Telco Cloud Service Assurance on AKS

n To deploy VMware Telco Cloud Service Assurance, configure an AKS cluster.

n If deploying AKS cluster in a private corporate network:

n Create an Azure Virtual Network (VNet) on AKS with the required IP address range.

Note For AKS, the recommended network plugin is Kubenet. While creating the AKS cluster, use the --pod-cidr option to ensure that pods get private IP addresses. Provide the --pod-cidr option with a private IP address range specifying a /16 subnet.

n Configure the firewall for communicating between IP addresses in a Subnet.

n Verify that all configured networking resources in Azure, such as VNet, Subnet, and Route Table are available in the same resource group as the Kubernetes cluster.

n Verify that there is network connectivity to provide access to external clients or devices that are not part of the same network.

n Verify that the Kubernetes cluster can send outbound traffic through the required open ports. For more information on the list of open ports, see AKS Global Network Rules.

Note
n The VMware Telco Cloud Service Assurance and Domain Manager (IP, SAM, and ESM) deployment is tested when the VNet, Subnet, and Route Table are provided and configured to ensure connectivity to the on-premises test infrastructure.

n You can also deploy VMware Telco Cloud Service Assurance and Domain Manager without the VNet, Subnet, and Route Table; AKS provides defaults for these. In that case, verify that there is connectivity between VMware Telco Cloud Service Assurance and Domain Manager and the infrastructure that they monitor.

n Create an Azure Container Registry (ACR) instance in the same region and resource group as the AKS cluster.


7 Deploy VMware Telco Cloud Service Assurance

This section describes the procedure to deploy VMware Telco Cloud Service Assurance on TKG or AKS.

Note If you are using secure Harbor registry for deploying VMware Telco Cloud Service Assurance, ensure that you follow the steps described in Configure TKG Cluster for Secure Harbor Registry.

Deployment Overview

At a high level, the deployment of VMware Telco Cloud Service Assurance involves the following steps:

Procedure

1 Download the VMware Telco Cloud Service Assurance package from VMware Customer Connect onto the deployment host.

2 Extract the bundle on the deployment host.

3 Edit the deploy.settings configuration file.

4 Download and launch the Deployment Container.

5 Trigger the installation script.

Download the Package

This section provides instructions on how to download the VMware Telco Cloud Service Assurance package.

Procedure

1 Log in to the deployment host.

2 Download the VMware Telco Cloud Service Assurance package from the VMware Customer Connect site. A typical VMware Telco Cloud Service Assurance deployment package is named as VMware-TCSA-Deployer-<VERSION>-<BUILD_ID>.tar.gz. For example, VMware-TCSA-Deployer-2.0.0-11.tar.gz.


3 Place the VMware Telco Cloud Service Assurance deployer package under your deployment host.

Note To verify the downloaded package, run the following command on your deployment host.

$ sha256sum VMware-TCSA-Deployer-<VERSION>-<BUILD_ID>.tar.gz

This command displays the SHA256 fingerprint of the file. Compare this string with the SHA256 fingerprint provided next to the file in the VMware Customer Connect download site and ensure that they match.
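The comparison can also be scripted. The following sketch is illustrative only: it builds a stand-in file on the fly so it is runnable anywhere, and the expected value is computed locally; in practice, paste the fingerprint from the Customer Connect download page into the expected variable and point pkg at the real deployer package:

```shell
#!/bin/sh
# Sketch: verify a downloaded package against its published SHA-256 fingerprint.
pkg="pkg.tar.gz"
printf 'demo contents' > "$pkg"    # stand-in for the real deployer package

# In practice, paste the fingerprint from the download site here:
expected=$(sha256sum "$pkg" | awk '{print $1}')

actual=$(sha256sum "$pkg" | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH" >&2
fi
rm -f "$pkg"
```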

4 Extract the VMware Telco Cloud Service Assurance deployer package and set the environment variable as shown in the following commands:

# Setting up your workspace directory on your host.
$ export TCSA_WORK_SPACE=<work_space_dir>  # directory for downloading or extracting the TCSA Deployer package
$ cd $TCSA_WORK_SPACE
$ tar -xzf VMware-TCSA-Deployer-<VERSION>-<BUILD_ID>.tar.gz

Extract the Downloaded Package and Installer

This section provides instructions to extract the downloaded package and installer.

Procedure

1 Extract the contents of the package. Use the following commands to access the contents of your directory:

# cd tcx-deployer/
# ls -ltr
total 16
drwxr-xr-x. 2 root root    67 Apr 20 12:23 images
drwxr-xr-x. 4 root root    40 Apr 20 12:23 imgpkg
-rw-r--r--. 1 root root 10578 Apr 20 12:26 VMware-TCSA-Deployer-<VERSION>-<BUILD_ID>_metadata.yaml
-rw-r--r--. 1 root root   220 Apr 20 12:27 release
drwxr-xr-x. 3 root root    18 Apr 20 12:27 product-helm-charts
drwxr-xr-x. 5 root root   192 Apr 26 03:30 scripts

2 Unpack the installer within the deployer package.

# cd scripts
# tar -xzf tcx_install.tar.gz
# ls -ltr
total 860
-r-xr-xr-x. 1 root root  24707 Dec 31  1999 tcx_install.zip
drwxr-xr-x. 5 root root    163 Dec 31  1999 ansible
-rw-r--r--. 1 root root     22 Apr 20 12:26 tag
-rw-r--r--. 1 root root    281 Apr 20 12:26 imgpkg_tags.yaml
-rw-r--r--. 1 root root    130 Apr 20 12:26 footprints.yaml
-r-xr-xr-x. 1 root root 707181 Apr 20 12:27 tcx_install.tar.gz
drwxr-xr-x. 9 root root    122 Apr 20 15:26 manifests
drwxr-xr-x. 3 root root    246 Apr 25 03:14 deployment
-rw-r--r--. 1 root root 127972 Apr 25 03:17 tcx_installer_log.log
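After unpacking, a quick presence check of the key installer artifacts listed above can catch a truncated extraction early. The sketch below is illustrative, not part of the installer: it creates a stand-in layout so it is runnable anywhere; on a real deployment host, skip the mkdir/touch/rm lines and run only the loop inside the scripts directory:

```shell
#!/bin/sh
# Stand-in layout for an extracted installer (illustrative only).
mkdir -p scripts/deployment scripts/manifests scripts/ansible
touch scripts/footprints.yaml scripts/imgpkg_tags.yaml scripts/tag

present=0
missing=0
# File names follow the directory listing shown above.
for f in scripts/footprints.yaml scripts/imgpkg_tags.yaml scripts/tag \
         scripts/deployment scripts/manifests scripts/ansible; do
  if [ -e "$f" ]; then
    present=$((present+1))
  else
    echo "missing: $f"
    missing=$((missing+1))
  fi
done
echo "present: $present, missing: $missing"

rm -rf scripts   # clean up the stand-in layout
```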

Configure Deploy.Settings File

Use the deploy.settings file to update parameters specific to TKG or AKS configuration.

Configure Deploy.Settings File for TKG

For TKG configuration, use the deploy.settings file available in the <TCSA_WORK_SPACE>/tcx-deployer/scripts/deployment/ directory.

Update the parameters in the deploy.settings file.

n To obtain the KUBECONFIG file for the following snippet, see Obtaining KUBECONFIG File from TKG Workload Cluster.

# ========== General configuration ========== #
# Mandatory: Path to the KUBECONFIG file of the Kubernetes cluster inside the
# deployment container (for example: /tmp/.kube/<YOUR-CLUSTER-KUBECONFIG-file>)
KUBECONFIG=

n Based on the footprint, select the PRODUCT_DEPLOYMENT_TIMEOUT.

# ========== Product details ========== #
PRODUCT=tcsa # Do not modify

# Product helm config
# Mandatory: The footprint to deploy. Possible values are: 2.5k, 25k, 50k, 100k (case sensitive).
FOOTPRINT=
# Mandatory: Time to wait for the deployment to complete. Must be in minutes
# (examples: 30m for 2.5k, 45m for 25k, 60m for 50k, and 75m for 100k)
PRODUCT_DEPLOYMENT_TIMEOUT=30m

# ========== Deployment Location ========== #
# Mandatory: The cloud provider location for the deployment (azure or tkg)
LOCATION=tkg
# == TKG default namespace of the workload cluster. Unless otherwise configured, set to "default" ==
TKG_NAMESPACE=default

# ========== Deployment modes and actions ========== #
# Mandatory:
# Options are:
# "init": Initialize cluster, push artifacts, deploy core controllers
# "deploy-apps": Install the product by deploying its applications
# "deploy-all": init + deploy-apps
# "cleanup": Uninstall the product and cleanup the cluster
DEPLOYMENT_ACTION="deploy-all"

# Optional: Set this to '--force' if you want to cleanup by force without waiting for user confirmation
DELETE_ARGS=

ASYNCHRONOUS_MODE=true # Do not modify
# Optional: Controls the speed of deployment of TCSA apps.
ASYNCHRONOUS_MODE_DEPLOYMENT_INTERVAL=2s

# Optional: To access TCSA edge services with a static IP address,
# set this to "--set ingressHostname.edgeServices=<IP-address>"
PRODUCT_SPECIFIC_HELM_OVERRIDES=""

n Update the Kubernetes workload cluster virtual IP that is assigned to the workload cluster. For more information about virtual IP address, see Deploy a Workload Cluster.

# The IP address/FQDN of ingress, i.e. the name that will be used in the URL to
# access the product landing page.
INGRESS_HOSTNAME=<Kubernetes workload cluster Virtual IP>

n If you are using a secure Harbor registry with certificate-based authentication (HTTPS), update the following fields with the registry details. For the registry URL, use the format harbor_fqdn/project_name/tcx. The URL must be given without the https or http prefix and must end with /tcx.

If you are using the docker login <registry-fqdn> command, you do not have to update <REGISTRY_USERNAME> and <REGISTRY_PASSWORD> in the deploy.settings file. For more information, see Prerequisites for Setting up Deployment Container.

# ========== Registry details ========== #
# Note: The "/tcx" suffix is mandatory
# Mandatory
REGISTRY_URL=<your container registry URL>/tcx
# Mandatory
REGISTRY_USERNAME=<your registry username>
# Mandatory
REGISTRY_PASSWORD=<your registry password>
# Optional: If the registry uses certificates, path to the certificates file (.crt)
REGISTRY_CERTS_PATH=<path to your registry certificate file>

For information about using secure Harbor registry with certificate based authentication (https) in TKG workload cluster, see Configure TKG Cluster for Secure Harbor Registry.
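For orientation, a filled-in TKG example of the key deploy.settings parameters might look like the following. Every value here (kubeconfig path, footprint, virtual IP, Harbor FQDN, project name, credentials, certificate path) is an illustrative placeholder; substitute the values for your own environment:

```
KUBECONFIG=/tmp/.kube/tcsa-workload.kubeconfig
FOOTPRINT=25k
PRODUCT_DEPLOYMENT_TIMEOUT=45m
LOCATION=tkg
TKG_NAMESPACE=default
DEPLOYMENT_ACTION="deploy-all"
INGRESS_HOSTNAME=10.10.10.100
REGISTRY_URL=harbor.example.com/tcsa/tcx
REGISTRY_USERNAME=admin
REGISTRY_PASSWORD=<your Harbor password>
REGISTRY_CERTS_PATH=/tmp/harbor-ca.crt
```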

n If you are using Harbor without certificate-based authentication (HTTP), update the following fields with the registry details. For the registry URL, use the format harbor_fqdn/project_name/tcx. The URL must be given without the https or http prefix and must end with /tcx.

# ========== Registry details ========== #
# Note: The "/tcx" suffix is mandatory
# Mandatory
REGISTRY_URL=<your container registry URL>/tcx
# Mandatory
REGISTRY_USERNAME=<your registry username>
# Mandatory
REGISTRY_PASSWORD=<your registry password>

Configure Deploy.Settings File for AKS

For AKS configuration, use the deploy.settings file available in the <TCSA_WORK_SPACE>/tcx-deployer/scripts/deployment/ directory.

Verify that you meet the following prerequisites for configuring the deploy.settings file in AKS:

n Obtain KUBECONFIG file for AKS workload cluster from Azure.

n To obtain the registry details, go to the ACR registry section on the Azure website and obtain the registry URL, username, and password.

Update the parameters in the deploy.settings file. For the ACR registry URL, use the format registry_fqdn/project_name/tcx. The URL must be given without the https or http prefix and must end with /tcx.

If you are using the docker login <registry-fqdn> command, you do not have to update <REGISTRY_USERNAME> and <REGISTRY_PASSWORD> in the deploy.settings file. For more information, see Prerequisites for Setting up Deployment Container.

# ========== General configuration ========== #
# Mandatory: Path to the KUBECONFIG file of the Kubernetes cluster inside the
# deployment container (for example: /tmp/.kube/<YOUR-CLUSTER-KUBECONFIG-file>)
KUBECONFIG=

# ========== Product details ========== #
PRODUCT=tcsa # Do not modify

# Product helm config
# Mandatory: The footprint to deploy. Possible values are: 2.5k, 25k, 50k, 100k (case sensitive).
FOOTPRINT=
# Mandatory: Time to wait for the deployment to complete. Must be in minutes
# (examples: 30m for 2.5k, 45m for 25k, 60m for 50k, and 75m for 100k)
PRODUCT_DEPLOYMENT_TIMEOUT=30m

# ========== Deployment Location ========== #
# Mandatory: The cloud provider location for the deployment (azure or tkg)
LOCATION=azure

# == Azure configuration ==
# Mandatory: The resource group of your AKS cluster.
AKS_RESOURCE_GROUP=<resource group of AKS cluster>
# Mandatory: If the AKS cluster is in a private network, set this to TRUE
PRIVATE_NETWORK=<TRUE|FALSE>

# Mandatory: The IP address/FQDN of ingress, i.e. the name that will be used in the URL to
# access the product landing page.
# This can be:
# 1. A public IP address automatically assigned by Azure or
# 2. A static public IP address created manually.
# 3. A private IP address from your VNet.
# Contact your Azure admin to get the right IP address.
INGRESS_HOSTNAME=

# ========== Registry details ========== #
# These are mandatory parameters
# Note: The "/tcx" suffix is mandatory
REGISTRY_URL=<your ACR instance URL>/tcx
REGISTRY_USERNAME=<ACR registry username>
REGISTRY_PASSWORD=<ACR registry password>

# Optional: To access TCSA edge services with a static IP address,
# set this to "--set ingressHostname.edgeServices=<IP-address>"
# The IP address can be a public IP assigned by Azure or a private IP from your VNet.
# Contact your Azure admin to get the right parameter.
PRODUCT_SPECIFIC_HELM_OVERRIDES=""

Note If the Kubenet network plugin is used for AKS cluster creation, set INGRESS_HOSTNAME=<IPAddress1> and PRODUCT_SPECIFIC_HELM_OVERRIDES="--set ingressHostname.edgeServices=<IPAddress2>". For example, PRODUCT_SPECIFIC_HELM_OVERRIDES="--set ingressHostname.edgeServices=10.183.142.44". <IPAddress1> and <IPAddress2> must be obtained from the CIDR specified at the time of AKS cluster creation. These IP addresses must be free and must not be used for any other purpose.

If the Azure CNI network plugin is used, you must set INGRESS_HOSTNAME=<IPAddress1> and leave PRODUCT_SPECIFIC_HELM_OVERRIDES="" empty. IPAddress1 is a public IP and must be created in the same region and resource group as the AKS cluster. IPAddress1 is used for the Istio load balancer.
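For illustration, the two cases might look like this in deploy.settings (the IP addresses below are placeholders for your environment, not values to copy):

```shell
# Kubenet plugin: two free IP addresses from the node CIDR (placeholders)
INGRESS_HOSTNAME=10.183.142.43
PRODUCT_SPECIFIC_HELM_OVERRIDES="--set ingressHostname.edgeServices=10.183.142.44"

# Azure CNI plugin: a public IP created in the AKS region/resource group,
# with the override left empty (placeholder address, shown commented out)
# INGRESS_HOSTNAME=20.51.7.10
# PRODUCT_SPECIFIC_HELM_OVERRIDES=""
```

Only one of the two cases applies to a given cluster; the commented lines show the Azure CNI alternative.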

Download and Launch the Deployment Container

A Docker image is an executable package that includes everything needed to run an application. The Deployment Container is used for triggering the installation of VMware Telco Cloud Service Assurance. In this section, you can find instructions on how to download and launch the Deployment Container.

Note If you do not have internet connectivity or have a completely isolated deployment host, use the DarkSite deployment procedure to host a local registry or to save the deployment image to a tar archive on the jump host. For more information, see DarkSite Deployment.


Prerequisites

Before you download and launch the Deployment Container, verify that you meet the prerequisites for setting up the Deployment Container. For more information, see Prerequisites for Setting up Deployment Container.

Procedure

1 Download the Deployment Container onto the deployment host. For more information, see Download the Deployment Container.

2 Launch the Deployment Container on the deployment host. For more information, see Launch the Deployment Container.

Note The execution takes place inside the Deployment Container.

Download the Deployment Container

This section provides instructions to download the Deployment Container.

Procedure

- Pull the Deployment Container from the VMware Distribution Harbor using the following command. Use the specific tag or the recommended SHA256 digest to pull the container, as shown:

$ docker pull projects.registry.vmware.com/tcx/deployment:1.0.0-3

OR

$ docker pull projects.registry.vmware.com/tcx/deployment@sha256:99ddc638fb7b7714a08bae31c2be79f57a639486a1a95d5df8447112bd727a6d

The registry projects.registry.vmware.com/tcx/ is publicly accessible and is hosted on the VMware registry.

Note To verify the downloaded image, run the following command on your deployment host. Compare the SHA256 fingerprint in the DIGEST column with the digest in the previous pull command and ensure that they match.

$ docker images --digests

$ export DEPLOYMENT_IMAGE=projects.registry.vmware.com/tcx/deployment:1.0.0-3
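The digest comparison described in the note can also be scripted; a minimal sketch, where the helper name check_digest is ours and not part of the deployer:

```shell
# check_digest: compare the pulled image's digest with the expected one.
# Obtain the actual digest with, for example:
#   docker images --digests --format '{{.Digest}}' \
#       projects.registry.vmware.com/tcx/deployment:1.0.0-3
check_digest() {
    expected="$1"
    actual="$2"
    if [ "$actual" = "$expected" ]; then
        echo "digest OK"
    else
        echo "digest MISMATCH: $actual" >&2
        return 1
    fi
}

# Example with the digest published in this guide, compared against itself:
check_digest "sha256:99ddc638fb7b7714a08bae31c2be79f57a639486a1a95d5df8447112bd727a6d" \
             "sha256:99ddc638fb7b7714a08bae31c2be79f57a639486a1a95d5df8447112bd727a6d"
```

A non-zero exit status from the helper signals a mismatch, which makes it easy to use in automation.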

What to do next

After you download the Deployment Container, you must launch it. For more information, see Launch the Deployment Container.

Launch the Deployment Container

This section provides instructions to launch the Deployment Container.


Procedure

1 To launch and log in to the Deployment Container, run the following commands and set the environment variables as shown in the following snippet:

$ docker run \
    --rm \
    -v ${TCSA_WORK_SPACE}/tcx-deployer:/root/tcx-deployer \
    -v $HOME/.ssh:/root/.ssh \
    -v $HOME/.kube:/root/.kube \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v $(which docker):/usr/local/bin/docker:ro \
    -v $HOME/.docker:/root/.docker:ro \
    -v /etc/docker:/etc/docker:rw \
    --network host \
    -it $DEPLOYMENT_IMAGE \
    bash

2 Verify that you can access the Kubernetes cluster from the container.

$ kubectl get nodes

3 After you have logged in to the Deployment Container, go to the deployer path <TCSA_WORK_SPACE>/tcx-deployer. You can view the contents of tcx-deployer as shown in the following snippet:

$ ls -ltr tcx-deployer/
total 16
drwxr-xr-x 2 root root    67 Apr 20 05:40 images
drwxr-xr-x 4 root root    40 Apr 20 05:40 imgpkg
-rw-r--r-- 1 root root 10588 Apr 20 05:46 VMware-TCSA-Deployer-<VERSION>-<BUILD_ID>_metadata.yaml
-rw-r--r-- 1 root root   222 Apr 20 05:47 bundle
drwxr-xr-x 3 root root    19 Apr 20 05:47 product-helm-charts
drwxr-xr-x 7 root root   231 Apr 20 10:10 scripts

If you cannot view the contents of tcx-deployer, then exit the Deployment Container and rerun the docker run command from Step 1.

What to do next

After you launch the Deployment Container, you must trigger the deployment. For more information, see Trigger Deployment Using the Installation Script.

DarkSite Deployment

If you do not have internet connectivity or have a completely isolated deployment host, use the DarkSite deployment procedure for the VMware Telco Cloud Service Assurance installation.

Prerequisites

- Ensure that you have a jump host that has internet connectivity.


- Verify that Docker is installed and running on the jump host.

Procedure

1 Download VMware Telco Cloud Service Assurance files and pull the Deployment Container to your jump host.

2 Push the Deployment Container to a local registry, or pack the Deployment Container into a .tar archive and then transfer it to the deployment host.

# Option A: If you are hosting a local registry
$ docker pull projects.registry.vmware.com/tcx/deployment:<VERSION>-<BUILD_ID>
$ docker tag projects.registry.vmware.com/tcx/deployment:<VERSION>-<BUILD_ID> <local-registry>/deployment:<VERSION>-<BUILD_ID>
$ docker push <local-registry>/deployment:<VERSION>-<BUILD_ID>

# Option B: Save the deployment image to a tar archive on the jump host
$ docker save -o <dir/on/jumphost>/deployment.tar projects.registry.vmware.com/tcx/deployment:<VERSION>-<BUILD_ID>
# Transfer the archive to the deployment host and reload it
$ docker load -i <dir/on/darksite>/deployment.tar

Trigger Deployment Using the Installation Script

You can trigger the deployment for both TKG and AKS using the same installation script.

The following installation script triggers the deployment from inside the Deployment Container.

root [ ~ ]# cd tcx-deployer/scripts/deployment/
root [ ~/tcx-deployer/scripts/deployment ]# ./tcx_app_deployment.sh

Note After the tcx_app_deployment.sh script exits, wait for ten minutes for all the services to come up, and then launch the VMware Telco Cloud Service Assurance UI in a browser.
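Instead of a fixed wait, you can poll the pod status from the Deployment Container; a sketch under the assumption that all services report Running or Completed when ready (the helper pods_pending is ours, not part of the deployer):

```shell
# pods_pending: count pods that are not yet Running or Completed, reading
# `kubectl get pods --no-headers` output from stdin.
pods_pending() {
    grep -cvE 'Running|Completed' || true
}

# Against the live cluster (run inside the Deployment Container):
#   while [ "$(kubectl get pods -A --no-headers | pods_pending)" -gt 0 ]; do
#       sleep 10
#   done

# Demonstration with canned output:
printf 'web-0   1/1   Running\njob-1   0/1   Pending\n' | pods_pending
```

When the count reaches zero, the UI should be reachable.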


Configuring VMware Telco Cloud Service Assurance and Domain Manager

This section describes the configuration required to ensure connectivity between VMware Telco Cloud Service Assurance and the Domain Manager (SAM, IP, and ESM).

This chapter includes the following topics:

- Accessing VMware Telco Cloud Service Assurance UI

- Installing Domain Manager

Accessing VMware Telco Cloud Service Assurance UI

This section provides information on how you can access the VMware Telco Cloud Service Assurance UI in TKG and AKS.

To access the VMware Telco Cloud Service Assurance UI in TKG, point your browser to https://<INGRESS_HOSTNAME>:30002. The value of INGRESS_HOSTNAME is the same parameter used in the deploy.settings file.

To access the VMware Telco Cloud Service Assurance UI in AKS, point your browser to https://<INGRESS_HOSTNAME>. The value of INGRESS_HOSTNAME is the same parameter used in the deploy.settings file.

To access the VMware Telco Cloud Service Assurance UI, you can use the default credentials admin and changeme.

Note To change the default password for Keycloak administrator user and VMware Telco Cloud Service Assurance user, see KB Article.

Installing Domain Manager

This section provides a list of documents required for installing Domain Manager in VMware Telco Cloud Service Assurance.


To install Domain Manager, refer to the following Domain Manager installation guide and other supporting guides.

Note Discovery operations from the VMware Telco Cloud Service Assurance UI happen through EDAA calls, and hence the Domain Manager must be started in EDAA mode. For more information about installation, uninstallation, and starting servers, see the Domain Manager installation guide.

Domain Manager Documentation Links

- Domain Manager Installation Guide: Installation Guide for IP, SAM, and ESM Managers
- Domain Manager Security Guide: Security Guide for Domain Manager
- Domain Manager Security Update for Multiple Vulnerabilities
- Domain Manager Support Matrix
- IP Manager Concepts Guide
- IP Manager Troubleshooting Guide
- IP Manager User Guide
- IP Manager Reference Guide
- IP Manager Deployment Guide
- SAM Notification Module User Guide
- SAM Adapter Platform User Guide
- SAM BIM Manager User Guide
- SAM Notification Adapter User Guide
- SAM Troubleshooting Guide
- SAM Introduction
- SAM Configuration Guide
- SAM Deployment Guide
- ESM User Guide


Incremental Scaling of VMware Telco Cloud Service Assurance on TKG and AKS

Using TKG or AKS, you can scale your VMware Telco Cloud Service Assurance deployment from one footprint to another.

This chapter includes the following topics:

- Incremental Scaling on TKG

- Incremental Scaling on AKS

Incremental Scaling on TKG

You can scale your VMware Telco Cloud Service Assurance deployment on TKG from 25 K to 50 K, 25 K to 100 K, and 50 K to 100 K footprints.

VM Sizing for VMware Telco Cloud Service Assurance

Footprint    VMs    vCPU Per VM    RAM Per VM (GBs)
25 K         10     16             64
50 K         14     16             64
100 K        20     16             64

Scaling from 25 K to 50 K

Scale the number of nodes in your cluster using the Tanzu CLI or the VMware Telco Cloud Automation UI, according to the destination footprint.

Scaling on TKG using Tanzu CLI


To scale the number of nodes in your cluster, use the following command.

Note You can run the following command only from the CPN IP of your VMware Telco Cloud Automation vSphere.

tanzu cluster scale <your TKG workload cluster> --controlplane-machine-count 3 --worker-machine-count <number_of_vms> --namespace=<namespace of cluster>

Example:
tanzu cluster scale tcsa-cluster --controlplane-machine-count 3 --worker-machine-count 14 --namespace=tcsacluster

Note The <number_of_vms> parameter refers to the number of VMs in the destination footprint of the workload cluster. In the example, the number of VMs is 14 because the destination footprint is 50 K.
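After scaling, you can confirm the worker count before redeploying; a sketch (the helper ready_nodes is ours, and it parses `kubectl get nodes --no-headers` output):

```shell
# ready_nodes: count nodes whose STATUS column is Ready, reading
# `kubectl get nodes --no-headers` output from stdin.
ready_nodes() {
    awk '$2 == "Ready" { n++ } END { print n + 0 }'
}

# Against the live cluster:
#   kubectl get nodes --no-headers | ready_nodes   # expect 14 for the 50 K footprint

# Demonstration with canned output:
printf 'node-1 Ready <none> 5d v1.21.8\nnode-2 Ready <none> 5d v1.21.8\n' | ready_nodes
```

Compare the printed count against the VM count in the sizing table above.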

Scaling on TKG using VMware Telco Cloud Automation UI

Depending on the footprint you want to scale up to, increase the replicas of the worker node in your cluster. To increase the number of worker nodes in your cluster, see Edit a Kubernetes Cluster Node Pool in VMware Telco Cloud Automation documentation.

To scale the VMware Telco Cloud Service Assurance in your cluster from the Deployment Container, perform the following steps:

- Navigate to the deploy.settings file inside the VMware Telco Cloud Service Assurance deployer bundle.

cd tcx-deployer/scripts/deployment
ls -lrt
total 88
-r-xr-xr-x 1 root root  6696 May  1 utils.sh
-r-xr-xr-x 1 root root  6844 May  1 uninstall
-r-xr-xr-x 1 root root  6148 May  1 tcx_app_deployment.sh
-r-xr-xr-x 1 root root   215 May  1 resize_lv.sh
drwxr-xr-x 3 root root   197 May  1 m3a_demo
-r-xr-xr-x 1 root root  6437 May  1 install-kubernetes.sh
-r-xr-xr-x 1 root root   454 May  1 install-govc
-r-xr-xr-x 1 root root  4511 May  1 footprint.json
-r-xr-xr-x 1 root root 30363 May  1 deploy-vms.sh
-r-xr-xr-x 1 root root  3796 May  1 deploy-vms.settings
-r-xr-xr-x 1 root root  2185 May  1 deploy.settings

vi deploy.settings

- Set the FOOTPRINT under Product Helm Config to the footprint you want to upgrade to.

# Product helm config
FOOTPRINT=50k


- Set the DEPLOYMENT_ACTION to deploy-apps under Deployment modes and actions, and then save and exit the deploy.settings file.

# ========= Deployment modes and actions ========== #
# Options are "init", "deploy-apps", "deploy-all" or "cleanup"
DEPLOYMENT_ACTION="deploy-apps"

- Run the tcx_app_deployment script.

./tcx_app_deployment.sh
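The deploy.settings edits above can also be made non-interactively; a sketch against a sample copy of the file, so the snippet is self-contained (on a real host, run the sed lines in tcx-deployer/scripts/deployment against deploy.settings itself):

```shell
# Create a sample copy of the two relevant settings.
cat > /tmp/deploy.settings <<'EOF'
# Product helm config
FOOTPRINT=25k
# Options are "init", "deploy-apps", "deploy-all" or "cleanup"
DEPLOYMENT_ACTION="init"
EOF

# Switch the footprint and the deployment action in place.
sed -i 's/^FOOTPRINT=.*/FOOTPRINT=50k/' /tmp/deploy.settings
sed -i 's/^DEPLOYMENT_ACTION=.*/DEPLOYMENT_ACTION="deploy-apps"/' /tmp/deploy.settings

grep -E '^(FOOTPRINT|DEPLOYMENT_ACTION)=' /tmp/deploy.settings
# Then trigger the deployment:
#   ./tcx_app_deployment.sh
```

The same two sed lines, with the target footprint adjusted, apply to the other scaling paths in this chapter.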

Scaling from 25 K to 100 K

Scale the number of nodes in your cluster using the Tanzu CLI or the VMware Telco Cloud Automation UI, according to the destination footprint.

Scaling on TKG using Tanzu CLI

To scale the number of nodes in your cluster, use the following command.

Note You can run the following command only from the CPN IP of your VMware Telco Cloud Automation vSphere.

tanzu cluster scale <your TKG workload cluster> --controlplane-machine-count 3 --worker-machine-count <number_of_vms> --namespace=<namespace of cluster>

Example:
tanzu cluster scale tcsa-cluster --controlplane-machine-count 3 --worker-machine-count 20 --namespace=tcsacluster

For scaling between footprints using VMware Telco Cloud Automation UI, see Scaling on TKG using VMware Telco Cloud Automation UI.

To scale the VMware Telco Cloud Service Assurance in your cluster from the Deployment Container, perform the following steps:

- Go to the deploy.settings file inside the VMware Telco Cloud Service Assurance deployer bundle.

cd tcx-deployer/scripts/deployment
ls -lrt
total 88
-r-xr-xr-x 1 root root  6696 May  1 utils.sh
-r-xr-xr-x 1 root root  6844 May  1 uninstall
-r-xr-xr-x 1 root root  6148 May  1 tcx_app_deployment.sh
-r-xr-xr-x 1 root root   215 May  1 resize_lv.sh
drwxr-xr-x 3 root root   197 May  1 m3a_demo
-r-xr-xr-x 1 root root  6437 May  1 install-kubernetes.sh
-r-xr-xr-x 1 root root   454 May  1 install-govc
-r-xr-xr-x 1 root root  4511 May  1 footprint.json
-r-xr-xr-x 1 root root 30363 May  1 deploy-vms.sh
-r-xr-xr-x 1 root root  3796 May  1 deploy-vms.settings
-r-xr-xr-x 1 root root  2185 May  1 deploy.settings

vi deploy.settings

- Set the FOOTPRINT under Product Helm Config to the footprint you want to upgrade to.

# Product helm config
FOOTPRINT=100k

- Set the DEPLOYMENT_ACTION to deploy-apps under Deployment modes and actions, and then save and exit the deploy.settings file.

# ========= Deployment modes and actions ========== #
# Options are "init", "deploy-apps", "deploy-all" or "cleanup"
DEPLOYMENT_ACTION="deploy-apps"

- Run the tcx_app_deployment script.

./tcx_app_deployment.sh

Scaling from 50 K to 100 K

Scale the number of nodes in your cluster using the Tanzu CLI or the VMware Telco Cloud Automation UI, according to the destination footprint.

Scaling on TKG using Tanzu CLI

To scale the number of nodes in your cluster, use the following command.

Note You can run the following command only from the CPN IP of your VMware Telco Cloud Automation vSphere.

tanzu cluster scale <your TKG workload cluster> --controlplane-machine-count 3 --worker-machine-count <number_of_vms> --namespace=<namespace of cluster>

Example:
tanzu cluster scale tcsa-cluster --controlplane-machine-count 3 --worker-machine-count 20 --namespace=tcsacluster

For scaling between footprints using VMware Telco Cloud Automation UI, see Scaling on TKG using VMware Telco Cloud Automation UI.


To scale the VMware Telco Cloud Service Assurance in your cluster from the Deployment Container, perform the following steps:

- Go to the deploy.settings file inside the VMware Telco Cloud Service Assurance deployer bundle.

cd tcx-deployer/scripts/deployment
ls -lrt
total 88
-r-xr-xr-x 1 root root  6696 May  1 utils.sh
-r-xr-xr-x 1 root root  6844 May  1 uninstall
-r-xr-xr-x 1 root root  6148 May  1 tcx_app_deployment.sh
-r-xr-xr-x 1 root root   215 May  1 resize_lv.sh
drwxr-xr-x 3 root root   197 May  1 m3a_demo
-r-xr-xr-x 1 root root  6437 May  1 install-kubernetes.sh
-r-xr-xr-x 1 root root   454 May  1 install-govc
-r-xr-xr-x 1 root root  4511 May  1 footprint.json
-r-xr-xr-x 1 root root 30363 May  1 deploy-vms.sh
-r-xr-xr-x 1 root root  3796 May  1 deploy-vms.settings
-r-xr-xr-x 1 root root  2185 May  1 deploy.settings

vi deploy.settings

- Set the FOOTPRINT under Product Helm Config to the footprint you want to upgrade to.

# Product helm config
FOOTPRINT=100k

- Set the DEPLOYMENT_ACTION to deploy-apps under Deployment modes and actions, and then save and exit the deploy.settings file.

# ========= Deployment modes and actions ========== #
# Options are "init", "deploy-apps", "deploy-all" or "cleanup"
DEPLOYMENT_ACTION="deploy-apps"

- Run the tcx_app_deployment script.

./tcx_app_deployment.sh

Incremental Scaling on AKS

You can scale your VMware Telco Cloud Service Assurance deployment on AKS from 25 K to 50 K, 25 K to 100 K, and 50 K to 100 K footprints.


VM Sizing for VMware Telco Cloud Service Assurance

Footprint    VMs    vCPU Per VM    RAM Per VM (GBs)
25 K         10     16             64
50 K         14     16             64
100 K        20     16             64

Scaling from 25 K to 50 K

- Scale the number of nodes in your cluster according to the footprint. The following snippet shows an example. For more information on how to scale the nodes of your AKS cluster, see the AKS documentation.

az aks scale --resource-group <your Resource-Group> --name <name of cluster> --node-count <number_of_vms> --nodepool-name <name of nodepool>

Example:
az aks scale --resource-group rg-vmw-us-west --name tcsa-cluster --node-count 14 --nodepool-name nodepool1

Note The <number_of_vms> parameter refers to the number of VMs in the destination footprint of the workload cluster. In the example, the number of VMs is 14 because the destination footprint is 50 K.
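You can verify the resulting node count before redeploying; a sketch (the helper expect_nodes is ours, and the Azure CLI query shown in the comments assumes standard `az aks show` output):

```shell
# expect_nodes: fail unless the observed node count matches the target
# footprint.
expect_nodes() {
    observed="$1"
    target="$2"
    if [ "$observed" -eq "$target" ]; then
        echo "node count OK ($observed)"
    else
        echo "expected $target nodes, found $observed" >&2
        return 1
    fi
}

# Query the live count with the Azure CLI, then check it:
#   COUNT=$(az aks show --resource-group <your Resource-Group> --name tcsa-cluster \
#       --query 'agentPoolProfiles[0].count' --output tsv)
#   expect_nodes "$COUNT" 14
expect_nodes 14 14
```

Use the VM count from the sizing table above as the target.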

- To scale the VMware Telco Cloud Service Assurance in your cluster from the Deployment Container, perform the following steps:

  - Navigate to the deploy.settings file inside the VMware Telco Cloud Service Assurance deployer bundle.

cd tcx-deployer/scripts/deployment
ls -lrt
total 88
-r-xr-xr-x 1 root root  6696 May  1 utils.sh
-r-xr-xr-x 1 root root  6844 May  1 uninstall
-r-xr-xr-x 1 root root  6148 May  1 tcx_app_deployment.sh
-r-xr-xr-x 1 root root   215 May  1 resize_lv.sh
drwxr-xr-x 3 root root   197 May  1 m3a_demo
-r-xr-xr-x 1 root root  6437 May  1 install-kubernetes.sh
-r-xr-xr-x 1 root root   454 May  1 install-govc
-r-xr-xr-x 1 root root  4511 May  1 footprint.json
-r-xr-xr-x 1 root root 30363 May  1 deploy-vms.sh
-r-xr-xr-x 1 root root  3796 May  1 deploy-vms.settings
-r-xr-xr-x 1 root root  2185 May  1 deploy.settings

vi deploy.settings


  - Set the FOOTPRINT under Product Helm Config to the footprint you want to upgrade to.

# Product helm config
FOOTPRINT=50k

  - Set the DEPLOYMENT_ACTION to deploy-apps under Deployment modes and actions, and then save and exit the deploy.settings file.

# ========= Deployment modes and actions ========== #
# Options are "init", "deploy-apps", "deploy-all" or "cleanup"
DEPLOYMENT_ACTION="deploy-apps"

  - Run the tcx_app_deployment script.

./tcx_app_deployment.sh

Scaling from 25 K to 100 K

- Scale the number of nodes in your cluster according to the footprint. The following snippet shows an example. For more information on how to scale the nodes of your AKS cluster, see the AKS documentation.

az aks scale --resource-group <your Resource-Group> --name <name of cluster> --node-count <number_of_vms> --nodepool-name <name of nodepool>

Example:
az aks scale --resource-group rg-vmw-us-west --name tcsa-cluster --node-count 20 --nodepool-name nodepool1

- To scale the VMware Telco Cloud Service Assurance footprint in your cluster from the Deployment Container, perform the following steps:

  - Go to the deploy.settings file inside the VMware Telco Cloud Service Assurance deployer bundle.

cd tcx-deployer/scripts/deployment
ls -lrt
total 88
-r-xr-xr-x 1 root root  6696 May  1 utils.sh
-r-xr-xr-x 1 root root  6844 May  1 uninstall
-r-xr-xr-x 1 root root  6148 May  1 tcx_app_deployment.sh
-r-xr-xr-x 1 root root   215 May  1 resize_lv.sh
drwxr-xr-x 3 root root   197 May  1 m3a_demo
-r-xr-xr-x 1 root root  6437 May  1 install-kubernetes.sh
-r-xr-xr-x 1 root root   454 May  1 install-govc
-r-xr-xr-x 1 root root  4511 May  1 footprint.json
-r-xr-xr-x 1 root root 30363 May  1 deploy-vms.sh
-r-xr-xr-x 1 root root  3796 May  1 deploy-vms.settings
-r-xr-xr-x 1 root root  2185 May  1 deploy.settings

vi deploy.settings


  - Set the FOOTPRINT under Product Helm Config to the footprint you want to upgrade to.

# Product helm config
FOOTPRINT=100k

  - Set the DEPLOYMENT_ACTION to deploy-apps under Deployment modes and actions, and then save and exit the deploy.settings file.

# ========= Deployment modes and actions ========== #
# Options are "init", "deploy-apps", "deploy-all" or "cleanup"
DEPLOYMENT_ACTION="deploy-apps"

  - Run the tcx_app_deployment script.

./tcx_app_deployment.sh

Scaling from 50 K to 100 K

- Scale the number of nodes in your cluster according to the footprint. The following snippet shows an example. For more information on how to scale the nodes of your AKS cluster, see the AKS documentation.

az aks scale --resource-group <your Resource-Group> --name <name of cluster> --node-count <number_of_vms> --nodepool-name <name of nodepool>

Example:
az aks scale --resource-group rg-vmw-us-west --name tcsa-cluster --node-count 20 --nodepool-name nodepool1

- To scale the VMware Telco Cloud Service Assurance footprint in your cluster from the Deployment Container, perform the following steps:

  - Go to the deploy.settings file inside the VMware Telco Cloud Service Assurance deployer bundle.

cd tcx-deployer/scripts/deployment
ls -lrt
total 88
-r-xr-xr-x 1 root root  6696 May  1 utils.sh
-r-xr-xr-x 1 root root  6844 May  1 uninstall
-r-xr-xr-x 1 root root  6148 May  1 tcx_app_deployment.sh
-r-xr-xr-x 1 root root   215 May  1 resize_lv.sh
drwxr-xr-x 3 root root   197 May  1 m3a_demo
-r-xr-xr-x 1 root root  6437 May  1 install-kubernetes.sh
-r-xr-xr-x 1 root root   454 May  1 install-govc
-r-xr-xr-x 1 root root  4511 May  1 footprint.json
-r-xr-xr-x 1 root root 30363 May  1 deploy-vms.sh
-r-xr-xr-x 1 root root  3796 May  1 deploy-vms.settings
-r-xr-xr-x 1 root root  2185 May  1 deploy.settings

vi deploy.settings


  - Set the FOOTPRINT under Product Helm Config to the footprint you want to upgrade to.

# Product helm config
FOOTPRINT=100k

  - Set the DEPLOYMENT_ACTION to deploy-apps under Deployment modes and actions, and under Product details verify that PRODUCT_VALUES_FILES includes the 100 K values file. Then save and exit the deploy.settings file.

# ========= Deployment modes and actions ========== #
# Options are "init", "deploy-apps", "deploy-all" or "cleanup"
DEPLOYMENT_ACTION="deploy-apps"

# ========== Product details ========== #
# Space separated list of values.yaml files to be used for deploying the product
PRODUCT_VALUES_FILES="values-azure values-azure-100k values-imgpkg-overrides"

  - Run the tcx_app_deployment script.

./tcx_app_deployment.sh


Configuring TKG with VMware Telco Cloud Automation

This section provides information on how to deploy the management cluster, deploy the workload cluster, configure the Harbor registry, obtain the workload cluster KUBECONFIG file, and update the deployment settings file for TKG.

TKG supports multiple storage options. However, VMware Telco Cloud Service Assurance deployment on TKG is qualified using vSAN storage.

This chapter includes the following topics:

- Steps to Deploy VMware Telco Cloud Service Assurance on TKG with VMware Telco Cloud Automation

- Deploying TKG Management Cluster

- Deploying TKG Workload Cluster

- Configuring Harbor Registry in VMware Telco Cloud Automation

- Obtaining KUBECONFIG File from TKG Workload Cluster

Steps to Deploy VMware Telco Cloud Service Assurance on TKG with VMware Telco Cloud Automation

This section provides high-level information on how to deploy VMware Telco Cloud Service Assurance on TKG with VMware Telco Cloud Automation.

The process for deploying the VMware Telco Cloud Service Assurance on a TKG cluster using VMware Telco Cloud Automation consists of the following steps:

Procedure

1 Using VMware Telco Cloud Automation Manager UI, create a management cluster template. For more information on how to create a management cluster template, see Create a Management Cluster Template in VMware Telco Cloud Automation documentation.

2 Using VMware Telco Cloud Automation Manager UI, create a workload cluster template. For more information on how to create a workload cluster template, see Create a Workload Cluster Template in VMware Telco Cloud Automation documentation.


3 Using VMware Telco Cloud Automation Manager UI, deploy a management cluster using the management cluster template from previous step. For more information on how to deploy a management cluster using management cluster template, see Deploy a Management Cluster in VMware Telco Cloud Automation documentation.

4 Using VMware Telco Cloud Automation Manager UI, deploy a workload cluster using the workload cluster template from previous step. For more information on how to deploy a workload cluster using workload cluster template, see Deploy a Workload Cluster in VMware Telco Cloud Automation documentation.

5 From the local deployment host, download or copy the VMware Telco Cloud Service Assurance deployment files and the KUBECONFIG file for the TKG workload cluster from the previous step, and verify that the Deployment Container is up and running.

6 Unpack the deployer bundle on the deployment host.

7 Update the deploy.settings file to match the deployment target for the TKG cluster and the footprint of the deployment, such as 25 K, 50 K, or 100 K.

8 Run the tcx_app_deployment.sh script to install VMware Telco Cloud Service Assurance.

Deploying TKG Management Cluster

You must have a TKG management cluster to manage TKG workload clusters. With VMware Telco Cloud Automation, you can manage multiple workload clusters with a single TKG management cluster.

For deploying VMware Telco Cloud Service Assurance, you must create a TKG management cluster in VMware Telco Cloud Automation for managing the workload cluster for the VMware Telco Cloud Service Assurance deployment. A VMware Telco Cloud Automation management template is used to define the resources and configuration of the management cluster, such as the number of Control Plane Nodes (CPNs), the number of worker nodes, the CPU or memory configuration for the nodes, the CNI provider, the CSI provider, and more. Use the sizing chart information to determine the size and resource requirements of the management cluster based on the VMware Telco Cloud Service Assurance footprint. For more information on the different footprint sizes and specifications, see Non HA-Based System Requirements and HA-Based System Requirements.

The wizard in the VMware Telco Cloud Automation Manager UI for creating the management cluster guides you through the steps. If multiple versions of Kubernetes are supported by VMware Telco Cloud Automation, verify that you select one that the VMware Telco Cloud Service Assurance supports. For more information on how to deploy TKG management cluster, see Deploy a Management Cluster in VMware Telco Cloud Automation documentation.

Deploying TKG Workload Cluster

The TKG workload cluster is the Kubernetes cluster where the actual VMware Telco Cloud Service Assurance product is deployed. TKG workload clusters are associated with a single management cluster, which provides certain services for cluster lifecycle and management.


You must create the TKG workload cluster using the VMware Telco Cloud Automation Manager UI and the workload template. The VMware Telco Cloud Automation workload template is used to define the resources and configuration of the workload cluster, such as the number of Control Plane Nodes (CPNs), the number of worker nodes, the CPU or memory configuration for the nodes, the CNI provider, the CSI provider, the vSphere resource location, and more. Use the sizing chart information to determine the size and resource requirements of the workload cluster based on the VMware Telco Cloud Service Assurance footprint. For more information on the different footprint sizes and specifications, see Non HA-Based System Requirements and HA-Based System Requirements.

Note The following versions are supported for the TKG workload cluster:

- TKG Kubernetes Cluster: 1.20.14 or 1.21.8

- CNI Provider: Preferably Calico

- Photon OS: photon-3-kube-v1.20.14-vmware.1-tkg.2 or photon-3-kube-v1.21.8-vmware.1-tkg.2

Photon OS must be selected based on the TKG Kubernetes version.

The wizard in the VMware Telco Cloud Automation Manager UI for creating the workload cluster guides you through the steps. If multiple versions of Kubernetes are supported by VMware Telco Cloud Automation, verify that you select one that the VMware Telco Cloud Service Assurance supports. For more information on how to deploy TKG workload cluster, see Deploy a Workload Cluster in VMware Telco Cloud Automation documentation.

Configuring Harbor Registry in VMware Telco Cloud Automation

VMware Telco Cloud Automation provides a mechanism to link container registry systems into its configuration of TKG clusters.

Using the VMware Telco Cloud Automation Manager UI, you can define an entry for Partner Systems for the Harbor registry that is used to deploy VMware Telco Cloud Service Assurance specific deployment files. You can associate the Harbor registry to the workload cluster, especially when using a secure registry. Follow the Partner Systems wizard to register to the Harbor registry in VMware Telco Cloud Automation and to create an association to the workload cluster. For more information, see Registering Partner Systems in VMware Telco Cloud Automation documentation.

Configure TKG Cluster for Secure Harbor Registry

This section describes the procedure to configure a TKG cluster to use a secure Harbor registry.


Procedure

1 From VMware Telco Cloud Automation, add the secure Harbor registry in the Partner Systems page by registering the available Harbor instance. Use the FQDN with https in the URL field and select the Trust Certificate checkbox. In the VIM Associations tab, select the workload cluster that you use for the VMware Telco Cloud Service Assurance deployment. To finish the registration, click Finish. For more information on how to add a Harbor registry in Partner Systems, see Add a Harbor Repository in VMware Telco Cloud Automation documentation.

2 SSH to one of the CPN nodes of the management cluster. To find the IPs, open the CaaS Infrastructure page from the left navigation, select the management cluster from the list, and then select the Control Plane Nodes tab; the Nodes table lists the available CPN nodes.

3 After you have logged in, use the following kubectl command to find the kapp-controller instance for the workload cluster that is used for deployment.

capv@small-mgmt-cluster-master-control-plane-nsvtp [ ~ ]$ kubectl get apps -A
NAMESPACE             NAME                                  DESCRIPTION           SINCE-DEPLOY   AGE
tcsa-test             tcsa-test-kapp-controller             Reconcile succeeded   27s            2d6h
tcsa-xlarge-cluster   tcsa-xlarge-cluster-kapp-controller   Canceled/paused       23h            26h
tkg-system            antrea                                Reconcile succeeded   4m43s          34d
tkg-system            metrics-server                        Reconcile succeeded   22s            34d
tkg-system            tanzu-addons-manager                  Reconcile succeeded   5m24s          34d
tkg-system            vsphere-cpi                           Reconcile succeeded   77s            34d
tkg-system            vsphere-csi                           Reconcile succeeded   2m21s          34d

The kapp-controller instance to be updated is listed under the namespace with the same name as the workload cluster, and the name of the app instance is <workload_cluster_name>-kapp-controller.

After you identify the kapp-controller instance for the workload cluster, edit the configuration by using the following command.

kubectl edit app -n <workload_cluster_name> <workload_cluster_name>-kapp-controller

For example:

kubectl edit app -n tcsa-xlarge-cluster tcsa-xlarge-cluster-kapp-controller


Edit the following two properties to the values shown:

paused: true
syncPeriod: 100000h0s

You can find the properties defined in the following section of the application definition.

spec:
  cluster:
    kubeconfigSecretRef:
      key: value
      name: tcops-xlarge-cluster-kubeconfig
  deploy:
  - kapp:
      rawOptions:
      - --wait-timeout=30s
  fetch:
  - imgpkgBundle:
      image: projects.registry.vmware.com/tkg/packages/core/kapp-controller:v0.23.0_vmware.1-tkg.1
  noopDelete: true
  paused: true
  syncPeriod: 5m0s
  template:

If the paused property is not already defined, then add it to the spec as shown. Save the changes and exit.
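If you prefer a non-interactive alternative to kubectl edit, the same change can be applied with kubectl patch. This is a sketch, not the documented procedure; verify the field paths (spec.paused, spec.syncPeriod) against your App definition before using it, and note that the command is only echoed here:

```shell
# Pause reconciliation of the workload cluster's kapp-controller app
# with a merge patch (echoed; remove 'echo' to apply on a live cluster).
CLUSTER="tcsa-xlarge-cluster"   # example cluster name
PATCH='{"spec":{"paused":true,"syncPeriod":"100000h0s"}}'
echo kubectl patch app -n "${CLUSTER}" "${CLUSTER}-kapp-controller" \
  --type merge -p "${PATCH}"
```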

4 SSH to one of the CPN nodes of the workload cluster. To find the node IPs, navigate to the CaaS Infrastructure page in the left navigation pane, select the workload cluster from the list, and then select the Control Plane Nodes tab. The Nodes table lists the available CPN nodes. Alternatively, you can use the KUBECONFIG file for the workload cluster to execute kubectl commands against the cluster.

5 After you have logged in, use the following kubectl command to find the configuration map instance used by kapp-controller running on the workload cluster.

[root@tcsa ~]$ k get cm -n tkg-system kapp-controller-config -o yaml | head -n 8
apiVersion: v1
data:
  caCerts: ""
  dangerousSkipTLSVerify: ""
  httpProxy: ""
  httpsProxy: ""
  noProxy: ""
kind: ConfigMap

6 Update the configuration map definition to add the certificate information to the caCerts property by using the following command:

kubectl edit cm -n tkg-system kapp-controller-config


After the update, the caCerts property looks like the following:

[root@tcsa ~]$ k get cm -n tkg-system kapp-controller-config -o yaml | head -n 8
apiVersion: v1
data:
  caCerts: |
    -----BEGIN CERTIFICATE-----
    MIIGNDCCBBygAwIBAgIUeB0MR1bIB3wUlnTGoAs3JYUGcXMwDQYJKoZIhvcNAQEN
    BQAwgYcxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJDQTESMBAGA1UEBwwJUGFsbyBB
    bHRvMQ4wDAYDVQQKDAVUZWxjbzEdMBsGA1UECwwUU29sdXRpb24gRW5naW5lZXJp
    bmcxKDAmBgNVBAMMH2hhYXMtd2d0MS05Ny0xMjAuZW5nLnZtd2FyZS5jb20wHhcN
    <removed some entries>
    ZQK7iLY80tbbSLuxnyrX1Oaq5U9pYsxjiCEt2XVzgOgfaZKUL6kD9U5LhI8Zj1qY
    nE3TsevcNE4LH3OXZqjUvpNhfBbMh2u+Ui3wFiwV0prjBQKeg8MCxBQJCVSmb/en
    q+UD0IwbIlg=
    -----END CERTIFICATE-----

Note The caCerts value is a YAML multi-line string, and proper indentation and spacing are required to keep the file valid. The "|" character, the first character after the caCerts property name, denotes a multi-line string, and every line of the certificate string is indented by four spaces.

If you want to add multiple CA certificates to the kapp-controller configuration, you must use the following format:

# A cert chain of trusted ca certs. These will be added to the system-wide
# cert pool of trusted ca's (optional)
caCerts: |
  -----BEGIN CERTIFICATE-----
  Certificate 1
  -----END CERTIFICATE-----
  -----BEGIN CERTIFICATE-----
  Certificate 2
  -----END CERTIFICATE-----
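Because the four-space indentation is easy to get wrong when pasting by hand, you can pre-indent the PEM file before editing the configuration map. The following is a minimal sketch; harbor.crt is an assumed file name, and the certificate body created below is a stand-in for a real CA certificate:

```shell
# Create a stand-in PEM file (replace with your real CA certificate).
cat > harbor.crt <<'EOF'
-----BEGIN CERTIFICATE-----
MIIBexampleBodyOnly
-----END CERTIFICATE-----
EOF

# Indent every line by four spaces so the result can be pasted
# under the 'caCerts: |' property in kapp-controller-config.
{
  echo '  caCerts: |'
  sed 's/^/    /' harbor.crt
} > cacerts-snippet.yaml
cat cacerts-snippet.yaml
```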

7 After the configuration map is updated, a restart of the kapp-controller pod is required. Use the following command to restart the pod:

kubectl rollout restart deployment -n tkg-system kapp-controller

After the restart is complete, you can proceed with the VMware Telco Cloud Service Assurance deployment using the registry information, including the CA certificate property.
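To confirm that the restart has finished before continuing, you can wait on the rollout status. A sketch follows; the commands are echoed so the snippet is safe to read, and you remove the echo to run them against a live cluster:

```shell
NS="tkg-system"
DEPLOY="kapp-controller"
# Restart the pod, then block until the new replica set is ready.
echo "kubectl rollout restart deployment -n ${NS} ${DEPLOY}"
echo "kubectl rollout status deployment -n ${NS} ${DEPLOY} --timeout=120s"
```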


For example:

REGISTRY_URL=10.198.97.120/tcxdeployer/tcx
REGISTRY_USERNAME=<username>
REGISTRY_PASSWORD=<password>
REGISTRY_CERTS_PATH=/home/root/harbor.crt

Note Revert any customization to the deployment of the TKG management cluster before you update the workload cluster or perform any type of maintenance from the VMware Telco Cloud Automation Manager UI, including upgrades and cluster lifecycle management.

Obtaining KUBECONFIG File from TKG Workload Cluster

After deploying the TKG workload cluster, you can obtain the cluster's KUBECONFIG file by logging in through SSH to one of the CPN nodes, using the IP of the node with the username and password assigned during cluster creation, and invoking the following command.

ssh capv@<WORKLOAD_CPN_IP>
kubectl config view --minify --raw >> /tmp/<cluster_name>

You can then copy the output file /tmp/<cluster_name> to the deployment host under the home directory /home/<user>/.kube and use this location for the KUBECONFIG property in the deploy.settings file used for deployment.
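The copy step can be sketched as follows. The kubeconfig content written here is a local stand-in for the file fetched from the CPN node, and my-cluster is an assumed cluster name:

```shell
CLUSTER="my-cluster"
# Stand-in for the kubeconfig fetched from the workload cluster CPN node.
printf 'apiVersion: v1\nkind: Config\n' > "/tmp/${CLUSTER}"

# Copy it under the deployment user's home directory and point
# KUBECONFIG at the copy, as expected by deploy.settings.
mkdir -p "${HOME}/.kube"
cp "/tmp/${CLUSTER}" "${HOME}/.kube/${CLUSTER}"
export KUBECONFIG="${HOME}/.kube/${CLUSTER}"
grep -q 'kind: Config' "${KUBECONFIG}" && echo "kubeconfig in place"
```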


11 Uninstall VMware Telco Cloud Service Assurance Deployment

To uninstall the VMware Telco Cloud Service Assurance deployment, update the deploy.settings file as described in this section.

Note Revert any customization to the deployment of the TKG management cluster before you update the workload cluster or perform any type of maintenance from the VMware Telco Cloud Automation Manager UI, including upgrades and cluster lifecycle management.

Procedure

1 Set the DEPLOYMENT_ACTION to cleanup in the deploy.settings file.

# ========= Deployment modes and actions ==========
## Options are "init", "deploy-apps", "deploy-all" or "cleanup"
DEPLOYMENT_ACTION="cleanup"
## Set this to '--force' if you want to cleanup by force without waiting for user confirmation
DELETE_ARGS=

For example, DELETE_ARGS='--force'
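A quick sanity check of the settings before triggering the cleanup can be sketched as follows; the file written here is a minimal stand-in for your real deploy.settings:

```shell
# Minimal stand-in for deploy.settings (your real file has more entries).
cat > deploy.settings <<'EOF'
DEPLOYMENT_ACTION="cleanup"
DELETE_ARGS='--force'
EOF

# Source the settings and confirm the cleanup action is selected.
. ./deploy.settings
if [ "${DEPLOYMENT_ACTION}" = "cleanup" ]; then
  echo "cleanup selected (DELETE_ARGS=${DELETE_ARGS})"
fi
```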

2 Launch the Deployment Container on the deployment host. For more information, see Launch the Deployment Container.

3 Trigger the uninstallation by running the following script.

root [ ~ ]# cd tcx-deployer/scripts/deployment/
root [ ~/tcx-deployer/scripts/deployment ]# ./tcx_app_deployment.sh

Note If you do not set DELETE_ARGS to '--force', the uninstallation step deletes the resources from the cluster only after the following confirmation message:

Are you sure you want to proceed? (y/n):y


12 Troubleshooting Deployment

After deployment, if VMware Telco Cloud Service Assurance does not appear to function, the first step is to check the logs for any Deployment Container or product installation issues.

For more information, see VMware Telco Cloud Service Assurance Troubleshooting Guide.
