Two-tier deployment with Citrix ADC VPX, Citrix Ingress Controller, Citrix ADC CPX, and Application Delivery Management (ADM) on Google Cloud
Section | Description |
---|---|
Section A | Citrix product overview for the GCP Kubernetes architecture and components |
Section B | GCP Infrastructure Setup |
Section C | Deploy a sample application using the sample YAML file library |
Section D | Integration with CNCF tools for Monitoring (Prometheus/Grafana) |
Section E | ADM as Microservices on GCP for Monitoring and Service Graph |
Section F | Delete deployment |
-
Citrix ADC VPX as tier 1 ADC for ingress-based internet client traffic.
A VPX instance in GCP enables you to take advantage of GCP computing capabilities and use Citrix load balancing and traffic management features for your business needs. You can deploy VPX in GCP as a standalone instance. Both single and multiple network interface card (NIC) configurations are supported.
-
The Kubernetes cluster uses Google Kubernetes Engine (GKE) to form the container platform.
Kubernetes Engine is a managed, production-ready environment for deploying containerized applications. It enables rapid deployment and management of your applications and services.
-
Deploy a sample Citrix web application using the YAML file library.
Citrix has provided a sample microservice web application to test the two-tier application topology on GCP. We have also included the following components in the sample files for proof of concept:
- Sample Hotdrink Web Service in Kubernetes YAML file
- Sample Colddrink Web Service in Kubernetes YAML file
- Sample Guestbook Web Service in Kubernetes YAML file
- Sample Grafana Charting Service in Kubernetes YAML file
- Sample Prometheus Logging Service in Kubernetes YAML file
-
Deploy the Citrix ingress controller for tier 1 Citrix ADC automation into the GKE cluster.
The Citrix ingress controller is built around Kubernetes and automatically configures one or more Citrix ADCs based on the ingress resource configuration. An ingress controller watches the Kubernetes API server for updates to the ingress resource and reconfigures the ingress load balancer accordingly. The Citrix ingress controller can be deployed either directly using YAML files or by Helm charts.
Citrix has provided sample YAML files for the Citrix ingress controller automation of the tier 1 VPX instance. The files automate several configurations on the tier 1 VPX including:
- Rewrite Policies and Actions
- Responder Policies and Actions
- Content Switching URL rules
- Adding/Removing CPX Load Balancing Services
The Citrix ingress controller YAML file for GCP is located here: https://github.com/citrix/example-cpx-vpx-for-kubernetes-2-tier-microservices/tree/master/gcp
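For orientation, the following is a minimal sketch of what such an ingress resource can look like. The hostname, frontend IP, ingress class, and service name are illustrative placeholders, not the exact contents of the lab files; the annotation names follow the Citrix ingress controller documentation:

```yaml
# Sketch of a tier-1 VPX ingress; values below are illustrative placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vpx-ingress-sketch
  annotations:
    kubernetes.io/ingress.class: "tier-1-vpx"            # dedicated class so VPX config does not overlap other ingresses
    ingress.citrix.com/frontend-ip: "10.0.0.100"         # VIP on the tier-1 VPX (placeholder)
    ingress.citrix.com/insecure-termination: "redirect"  # smart annotation for HTTP-to-HTTPS redirect
spec:
  rules:
  - host: hotdrink.beverages.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-hotdrinks   # tier-2 CPX service (placeholder name)
            port:
              number: 80
```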
-
Deploy the Citrix Application Delivery Management (ADM) container into the GKE cluster.
In a dual-tiered ingress deployment, deploy Citrix ADC VPX/MPX outside the Kubernetes cluster (Tier 1) and Citrix ADC CPXs inside the Kubernetes cluster (Tier 2).
The tier 1 VPX/MPX would load balance the tier 2 CPX inside the Kubernetes cluster. This is a generic deployment model followed widely irrespective of the platform, whether it's Google Cloud, Amazon Web Services, Azure, or an on-premises deployment.
The tier 1 VPX/MPX automatically load balances the tier 2 CPXs. Citrix ingress controller completes the automation configurations by running as a pod inside the Kubernetes cluster. It configures a separate ingress class for the tier 1 VPX/MPX so that the configuration does not overlap with other ingress resources.
Prerequisites (mandatory):
-
Create a GCP account by following the steps at https://cloud.google.com/free/docs/gcp-free-tier. Use your credit card to validate and activate a paid account. Google charges only after the free-tier resources are exhausted.
-
In the GCP console, click My First Project.
Create a project named "cnn-selab-atl".
Then go to Compute Engine > VM instances and wait until Compute Engine is ready.
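If you prefer the gcloud CLI over the console, the equivalent steps are sketched below; note that project IDs must be globally unique, so "cnn-selab-atl" may need a suffix in your account:

```shell
# Create and select the lab project, then enable the APIs the lab uses.
gcloud projects create cnn-selab-atl
gcloud config set project cnn-selab-atl
gcloud services enable compute.googleapis.com container.googleapis.com
```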
-
Increase the quota for the following resources to 8:
- VPC networks
- In-use IP addresses
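To verify the current quota limits and usage before and after the increase, you can inspect them from Cloud Shell; the region below is an example:

```shell
# Per-region quotas (IN_USE_ADDRESSES appears in the YAML output):
gcloud compute regions describe us-east1 | grep -B1 -A1 IN_USE_ADDRESSES
# Project-wide quotas (NETWORKS covers VPC networks):
gcloud compute project-info describe --project cnn-selab-atl | grep -w -B1 -A1 NETWORKS
```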
-
Select "cnn-selab-atl" project and click on Activate Cloud Shell icon on right of search, than you will see cloud shell opened at the bottom of page for this project
-
Next, run the automated template script to bring up the GCP infrastructure components required for the hands-on lab. The script runs in your Cloud Shell, which needs internet access, so make sure your system (laptop) stays active.
The script takes around 15 minutes to run; wait until Cloud Shell prints the message:
End of Automated deployment for the training lab
Download/Clone the file repository:
git clone https://github.com/citrix/example-cpx-vpx-for-kubernetes-2-tier-microservices.git
cd example-cpx-vpx-for-kubernetes-2-tier-microservices/gcp/scripts/
Very important: Replace REGION and ZONE with your chosen region and zone.
perl automated_deployment.pl REGION ZONE
For example:
perl automated_deployment.pl us-east1 us-east1-b
perl automated_deployment.pl europe-west1 europe-west1-b
perl automated_deployment.pl asia-northeast1 asia-northeast1-b
The automated Perl script creates the GCP infrastructure components required for the hands-on lab, listed below.
After a successful deployment, with no errors in the script execution, you get a message on Cloud Shell as shown; then proceed to the next step. If the automation script fails, do not delete and re-create the project with the same name. Instead, go to the first step of "Section F - Delete deployment" at the end of this page, and retry this step after a successful deletion.
-
Once the GCP infrastructure is up from the automated script, access the Kubernetes cluster from Cloud Shell.
Go to Kubernetes Engine > Clusters and click the Connect icon.
Copy and paste the Kubernetes CLI access command into your Cloud Shell.
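The Connect dialog produces a gcloud command of the following shape; the cluster name, zone, and project below are examples from this lab, so copy the exact command from the console instead of retyping it:

```shell
# Fetch kubeconfig credentials for the GKE cluster created by the script.
gcloud container clusters get-credentials k8s-cluster-with-cpx --zone us-east1-b --project cnn-selab-atl
kubectl config current-context   # verify kubectl now points at the GKE cluster
```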
Citrix ADC offers the two-tier architecture deployment solution to load balance the enterprise grade applications deployed in microservices and accessed through the Internet. Tier 1 has heavy load balancers such as VPX/SDX/MPX to load balance North-South traffic. Tier 2 has CPX deployment for managing microservices and load balances East-West traffic.
Run all of the following commands, through the end of this page, in Cloud Shell only.
-
Check whether the Kubernetes nodes are in Ready status:
kubectl get nodes
-
Create a cluster role binding to configure a cluster-admin.
Replace <email-id of your GCP account> with the email address of your hands-on GCP account.
kubectl create clusterrolebinding citrix-cluster-admin --clusterrole=cluster-admin --user=<email-id of your GCP account>
Optional: If you pasted an incorrect email address, remove the role binding with the following command, then repeat the previous step to correctly bind your Google account as the cluster admin.
kubectl delete clusterrolebinding citrix-cluster-admin
-
Change to the config-files directory, downloaded as part of the automation script, to run the applications required for the two-tier deployment:
cd example-cpx-vpx-for-kubernetes-2-tier-microservices/gcp/config-files/
-
Create the tier-2-adc, team-hotdrink, team-colddrink, team-guestbook, and monitoring namespaces, where the microservices and applications are deployed:
kubectl create -f namespace.yaml
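For reference, namespace.yaml amounts to one Namespace object per name listed above; a sketch of its shape (not the literal file contents):

```yaml
# One Namespace object per team/function, separated by '---'.
apiVersion: v1
kind: Namespace
metadata:
  name: tier-2-adc
---
apiVersion: v1
kind: Namespace
metadata:
  name: team-hotdrink
# ...and likewise for team-colddrink, team-guestbook, and monitoring.
```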
-
Deploy rbac.yaml in the default namespace to grant role-based access control (RBAC):
kubectl create -f rbac.yaml
-
Deploy a unique CPX for each of the hotdrink, colddrink, and guestbook microservices.
Note: Upload your TLS certificate and TLS key into hotdrink-secret.yaml. We have updated our security policies and removed the SSL certificate from the guides.
kubectl create -f cpx.yaml -n tier-2-adc
kubectl create -f hotdrink-secret.yaml -n tier-2-adc
Check that the CPX pods are in Running status; if they are, go to the next step. Otherwise, delete the pods (replace create with delete in the above commands) and redeploy them.
kubectl get pods -n tier-2-adc
-
Deploy the hotdrink beverage application microservices (SSL-type microservice with hairpin architecture).
Note: Upload your TLS certificate and TLS key into hotdrink-secret.yaml. We have updated our security policies and removed the SSL certificate from the guides.
kubectl create -f team_hotdrink.yaml -n team-hotdrink
kubectl create -f hotdrink-secret.yaml -n team-hotdrink
Check that the hotdrink application pods are in Running status; if they are, go to the next step. Otherwise, delete the pods (replace create with delete in the above commands) and redeploy them.
kubectl get pods -n team-hotdrink
-
Deploy the colddrink beverage application microservice (SSL_TCP-type microservice).
Note: Upload your TLS certificate and TLS key into colddrink-secret.yaml. We have updated our security policies and removed the SSL certificate from the guides.
kubectl create -f team_colddrink.yaml -n team-colddrink
kubectl create -f colddrink-secret.yaml -n team-colddrink
Check that the colddrink application pods are in Running status; if they are, go to the next step. Otherwise, delete the pods (replace create with delete in the above commands) and redeploy them.
kubectl get pods -n team-colddrink
-
Deploy the guestbook application microservices (NoSQL-type microservice):
kubectl create -f team_guestbook.yaml -n team-guestbook
Check that the guestbook application pods are in Running status; if they are, go to the next step. Otherwise, delete the pods (replace create with delete in the above commands) and redeploy them.
kubectl get pods -n team-guestbook
-
Validate the CPXs deployed for the above three applications. First, obtain the CPX pods deployed in tier-2-adc, and then get CLI access to the CPX.
List the CPX pods in the tier-2-adc namespace:
kubectl get pods -n tier-2-adc
Get CLI (bash) access to the hotdrink CPX pod.
Replace the text in double quotes with the hotdrink CPX pod name from the previous step, then execute:
kubectl exec -it "copy and paste hotdrink CPX pod name here from the above step" bash -n tier-2-adc
To check whether the CS vserver is in the UP state in the hotdrink CPX, enter the following command after getting root access to the CPX, and type exit after validation.
cli_script.sh "show cs vserver"
-
Deploy the VPX ingress and ingress controller in the tier-2-adc namespace, which configures tier-1-adc (VPX) automatically.
The Citrix ingress controller (CIC) pushes the configuration to tier-1-adc (VPX) in an automated fashion by using smart annotations and custom resource definitions (CRDs).
kubectl create -f ingress_vpx.yaml -n tier-2-adc
kubectl create -f cic_vpx.yaml -n tier-2-adc
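To verify that the automation is in place, you can check that the CIC pod is running and that the ingress was created:

```shell
kubectl get pods -n tier-2-adc      # the CIC pod should be in Running status
kubectl get ingress -n tier-2-adc   # the VPX ingress should list the beverage hostnames
```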
-
Add DNS entries to your local machine's hosts file to access the microservices from the internet.
For Windows clients, edit C:\Windows\System32\drivers\etc\hosts in Notepad++ with administrator access.
For macOS clients, in the Terminal, enter: sudo nano /etc/hosts
Add the following entries to the hosts file and save the file.
xxx.xxx.xxx.xxx hotdrink.beverages.com
xxx.xxx.xxx.xxx colddrink.beverages.com
xxx.xxx.xxx.xxx guestbook.beverages.com
xxx.xxx.xxx.xxx grafana.beverages.com
xxx.xxx.xxx.xxx prometheus.beverages.com
Replace "xxx.xxx.xxx.xxx" above with the VIP (client traffic) public IP of tier-1-adc (VPX). To get the IP, go to Compute Engine > VM instances, double-click "citrix-adc-tier1-vpx", and scroll down to the NICs as shown below.
Copy the Client/VIP traffic external IP and replace every "xxx.xxx.xxx.xxx" in your hosts file with it.
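Alternatively, you can list the instance's external IPs from Cloud Shell; the zone is an example, and which NIC carries client/VIP traffic depends on the template, so match the output against the console view:

```shell
# Print the external (NAT) IP of every NIC on the tier-1 VPX instance.
gcloud compute instances describe citrix-adc-tier1-vpx --zone us-east1-b \
  --format="value(networkInterfaces[].accessConfigs[].natIP)"
```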
-
Now you can access each application over the internet, for example https://hotdrink.beverages.com or http://hotdrink.beverages.com.
HTTP-to-HTTPS redirect is enabled using smart annotations, so you can access the URL over either HTTPS (443) or HTTP (80).
Next, push the Rewrite and Responder policies into the VPX through the Citrix ingress controller (CIC) using custom resource definitions (CRDs).
-
Deploy the CRD, which enables pushing the Rewrite and Responder policies into tier-1-adc, in the default namespace:
kubectl create -f crd_rewrite_responder.yaml
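You can confirm that the Citrix custom resource definition registered successfully:

```shell
kubectl get crd | grep citrix   # the rewrite/responder CRD should be listed
```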
-
Blacklist URLs: Configure the Responder policy on hotdrink.beverages.com to block access to the coffee page.
kubectl create -f responderpolicy_hotdrink.yaml -n tier-2-adc
After you deploy the Responder policy, click the coffee image on hotdrink.beverages.com to see the following message.
-
Header insertion: Configure the Rewrite policy on colddrink.beverages.com to insert the session ID in the header.
kubectl create -f rewritepolicy_colddrink.yaml -n tier-2-adc
After you deploy the Rewrite policy, access https://colddrink.beverages.com with developer mode enabled in the browser. In Chrome, press F12 and preserve the log in the Network category to see the session ID, which is inserted by the Rewrite policy on tier-1-adc (VPX).
-
Deploy Cloud Native Computing Foundation (CNCF) monitoring tools, such as Prometheus and Grafana, to collect ADC proxy stats.
kubectl create -f monitoring.yaml -n monitoring
kubectl create -f ingress_vpx_monitoring.yaml -n monitoring
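Verify that the monitoring stack came up before moving on:

```shell
kubectl get pods -n monitoring      # Prometheus and Grafana pods should be Running
kubectl get ingress -n monitoring   # grafana/prometheus hostnames should be listed
```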
-
Prometheus log aggregator:
Log in to http://grafana.beverages.com:8080 and complete the following one-time setup.
-
Log in to the portal using admin/admin credentials and click Skip on the next page.
-
Click Add data source and select the Prometheus data source.
-
Configure the following settings and click the Save and test button; you will get a prompt that the data source is working. Make sure "prometheus" is in all lowercase letters.
-
-
Grafana visual dashboard:
To monitor the traffic stats of Citrix ADC:
-
From the left panel, select the Import option. Open https://raw.githubusercontent.com/citrix/example-cpx-vpx-for-kubernetes-2-tier-microservices/master/gcp/config-files/grafana_config.json, copy the entire content, and paste it into the JSON field.
-
Click 'Load', and then 'Import' on the next page.
-
-
Next, run the automated template script to bring up the GCP infrastructure components required for the ADM-in-Kubernetes-cluster hands-on. The script runs in your Cloud Shell, which needs internet access, so make sure your system (laptop) stays active.
The script takes around 15 minutes to run; wait until Cloud Shell prints the message:
End of Automated deployment for the training lab
cd ~
cd example-cpx-vpx-for-kubernetes-2-tier-microservices/gcp/scripts/
Very important: Replace REGION and ZONE with your chosen region and zone.
perl adm_automated_deployment.pl REGION ZONE
For example:
perl adm_automated_deployment.pl us-west1 us-west1-b
perl adm_automated_deployment.pl europe-west2 europe-west2-b
perl adm_automated_deployment.pl asia-northeast2 asia-northeast2-b
The automated Perl script creates the GCP infrastructure components required for the ADM-in-Kubernetes-cluster hands-on, listed below.
After a successful deployment, with no errors in the script execution, you get a message on Cloud Shell as shown; then proceed to the next step. If the automation script fails, do not delete and re-create the project with the same name. Instead, go to the second step of "Section F - Delete deployment" at the end of this page, and retry this step after a successful deletion.
-
Once the GCP infrastructure is up from the automated script, initialize the NFS storage for ADM.
Select the nfs-adm instance in Compute Engine and click View gcloud command, as shown.
Copy and paste the gcloud command to SSH into nfs-adm.
Run the commands below to make the instance an NFS server:
sudo apt-get update
sudo apt install nfs-kernel-server
Open the exports file:
sudo nano /etc/exports
Add the entries below to the exports file, then save and close with Ctrl+X and Y.
/var/citrixadm_nfs/config *(rw,sync,no_root_squash)
/var/citrixadm_nfs/datastore *(rw,sync,no_root_squash)
Run the commands below to bring the NFS service up, and type logout to exit the NFS server:
sudo systemctl start nfs-kernel-server.service
sudo service nfs-kernel-server restart
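If the service does not start cleanly, make sure the exported directories exist (the automation script may already have created them; creating them here is a precaution), then re-export and verify:

```shell
# Create the export directories if they are missing (assumption: the script may have created them already).
sudo mkdir -p /var/citrixadm_nfs/config /var/citrixadm_nfs/datastore
sudo exportfs -ra    # re-read /etc/exports
sudo exportfs -v     # both /var/citrixadm_nfs paths should be listed
```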
-
Access the k8s-cluster-with-adm Kubernetes cluster from Cloud Shell to install ADM as microservices in the cluster.
Go to Kubernetes Engine > Clusters and click the Connect icon.
Copy and paste the command-line access command into your Cloud Shell.
-
Install helm in the k8s-cluster-with-adm cluster. The Helm package installation is required for the ADM Kubernetes installation.
Download the Helm install script from the Helm project (https://helm.sh/docs/intro/install/):
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
Note: You can refer to the helm.sh official website for downloading the Helm package if the above script does not work for you.
The Tiller steps below apply only to Helm v2. Helm v3, which the get-helm-3 script above installs, has no Tiller and no helm init command, so skip them on v3:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
Validate the Helm client (and, on v2, server) version to confirm the installation. If the version does not appear instantly, wait a moment and retry:
helm version
-
Now it's time to install ADM in the Kubernetes cluster.
Create the adm namespace to deploy the ADM microservices:
kubectl create namespace adm
cd ..
Deploy the ADM microservices using the Helm package. (The -n flag here is the Helm v2 release-name flag; on Helm v3, the equivalent is helm install citrixadm ./citrixadm --namespace adm.)
helm install -n citrixadm --namespace adm ./citrixadm
Check the status of the pods in the adm namespace:
watch kubectl get pods -n adm
To check that the ADM microservices installation was successful, go to citrix-adc-tier1-vpx-adm and wait until the licensing load balancing vservers are up.
Check whether the licensing load balancing vservers are UP. If they are, access ADM; otherwise, wait until they come up.
Double-click citrix-adc-tier1-vpx-adm to get the ADM IP. Scroll down and copy the ADM IP, as shown here, into your browser to access ADM.
For example: http://35.199.148.128
Access ADM using the default credentials, username/password: nsroot/nsroot.
Configure the AppFlow collector on the CPX and add the application Kubernetes cluster on ADM for the service graph and monitoring on ADM.
Run the AppFlow configuration commands in the k8s-cluster-with-cpx cluster only.
-
Access k8s-cluster-with-cpx to configure the hotdrink CPX.
-
Go to Kubernetes Engine > Clusters and click the Connect icon.
Copy and paste the Kubernetes CLI access command into your Cloud Shell.
-
List the CPX pods in the tier-2-adc namespace:
kubectl get pods -n tier-2-adc
-
Get CLI (bash) access to the hotdrink CPX pod.
Replace the text in double quotes with the hotdrink CPX pod name from the previous step, then execute:
kubectl exec -it "copy and paste hotdrink CPX pod name here from the above step" bash -n tier-2-adc
-
-
Enable AppFlow on the hotdrink CPX to collect logs on ADM.
Replace <ADM External IP> with your ADM IP.
cli_script.sh "add appflow collector af_mas_collector_logstream -IPAddress <ADM External IP> -port 5557 -Transport logstream"
cli_script.sh "add appflow action af_mas_action_logstream -collectors af_mas_collector_logstream"
cli_script.sh "add appflow policy af_mas_policy_logstream true af_mas_action_logstream"
cli_script.sh "bind appflow global af_mas_policy_logstream 20 END -type REQ_DEFAULT"
cli_script.sh "enable feature appflow"
cli_script.sh "enable ns mode ULFD"
Check whether the AppFlow collector status is UP:
cli_script.sh "show appflow collector"
-
Access the ADM IP using the default credentials nsroot/nsroot and add citrix-adc-tier1-vpx.
-
On ADM, go to Orchestration > Kubernetes > Clusters and follow the steps shown in the image to see the service graph.
Access the application Kubernetes cluster in Cloud Shell to get the details required to add the cluster.
Get back to Cloud Shell to access k8s-cluster-with-cpx, the application cluster.
-
Name: Give the application cluster a name, for example: k8s-cluster-with-cpx
-
API Server URL: The master node URL is the API server URL of the application Kubernetes cluster. Copy and paste the URL where the Kubernetes master is running, adding port "443", as shown in the image above.
kubectl cluster-info
-
Authentication Token: To get this token, install the cluster role and service account on the application Kubernetes cluster:
cd ~
cd example-cpx-vpx-for-kubernetes-2-tier-microservices/gcp/citrixadm-config-files/orchestartor-yamls
kubectl create -f cluster-role.yaml
kubectl create -f service-account.yaml
kubectl get secret -n kube-system
Describe the service account's secret to get the authentication token:
kubectl describe secret <admin-service-name> -n kube-system
Copy the token, paste it into Notepad or Notepad++, and make sure the entire token is on a single line.
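To avoid the trailing-newline problem described in the troubleshooting section below, you can also extract the decoded token directly on one line:

```shell
# Decode the service-account token from the secret; replace <admin-service-name> as above.
kubectl get secret <admin-service-name> -n kube-system -o jsonpath='{.data.token}' | base64 --decode
```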
Click Create; you will see the screen below after the cluster is added.
-
-
Now access the hotdrink application URL over the internet to capture traffic on ADM for the service graph. Wait a couple of minutes for the service graph to appear on ADM. For example: https://hotdrink.beverages.com or http://hotdrink.beverages.com
-
On ADM, go to Applications > ServiceGraph to see the service graph of the microservices, and use the Summary panel to check latency, errors, and more. Click View as for different service graph views to get better visibility.
Skip the "Troubleshooting ADM Service Graph" steps below if you can already see the service graph on ADM.
-
If you can't see the service graph, access the hotdrink CPX as described in Step 10 of Section C, and validate whether you can see any hits on the AppFlow configuration in the hotdrink CPX.
cli_script.sh "show appflow collector"
cli_script.sh "show appflow policy"
-
If everything is working as expected but you still can't see the service graph, follow the steps below to make the service graph work.
Access the ADM Kubernetes cluster by following Step 3 of the ADM prerequisites from Section F.
Get the ADM pods:
kubectl get pods -n adm
Replace <k8sadapterpod> with the k8sadapter pod name shown in the screenshot above, and run the commands below to get bash/CLI access to the k8sadapter pod:
kubectl exec -it <k8sadapterpod> bash -n adm
cd /var/log
tail -f k8_logger.log
If you see "Invalid token" as an error message in the logs (caused by a \n added to the token), delete the k8sadapter pod; a new k8sadapter pod is created instantly:
kubectl delete pod <k8sadapterpod> -n adm
Repeat the cluster addition step, and once the issue above is fixed, access the hotdrink.beverages.com URL and wait a couple of minutes to see the service graph.
To delete the entire deployment, go to your Cloud Shell and run the commands below to start the delete process.
-
To delete the sample application GCP infrastructure, with tier-1-adc (VPX), tier-2-adc (CPX), and CIC:
cd ~
git clone https://github.com/citrix/example-cpx-vpx-for-kubernetes-2-tier-microservices.git
cd example-cpx-vpx-for-kubernetes-2-tier-microservices/gcp/scripts/
Very important: Make sure the REGION and ZONE are the same as the ones used for the GCP infrastructure creation.
perl automated_deployment.pl REGION ZONE delete
For example:
perl automated_deployment.pl us-east1 us-east1-b delete
perl automated_deployment.pl europe-west1 europe-west1-b delete
perl automated_deployment.pl asia-northeast1 asia-northeast1-b delete
The delete process takes around 10 minutes.
The automated Perl script deletes the GCP infrastructure components created for the sample application hands-on.
-
To delete the ADM GCP infrastructure:
cd ~
git clone https://github.com/citrix/example-cpx-vpx-for-kubernetes-2-tier-microservices.git
cd example-cpx-vpx-for-kubernetes-2-tier-microservices/gcp/scripts/
Very important: Make sure the REGION and ZONE are the same as the ones used for the GCP infrastructure creation.
perl adm_automated_deployment.pl REGION ZONE delete
For example:
perl adm_automated_deployment.pl us-west1 us-west1-b delete
perl adm_automated_deployment.pl europe-west2 europe-west2-b delete
perl adm_automated_deployment.pl asia-northeast2 asia-northeast2-b delete
The delete process takes around 10 minutes.
The automated Perl script deletes the GCP infrastructure components created for the ADM-in-Kubernetes hands-on.