- 1. Introduction
- 2. Anthos on Bare Metal Ansible Module
- 3. Prerequisites for Anthos on Bare Metal
- 4. Google Cloud Configuration
- 5. Anthos Cluster Role
- 6. Connect Gateway Configuration
Anthos on bare metal allows you to run Kubernetes clusters on your own hardware infrastructure and to monitor those clusters from the Google Cloud Console. Read more about Anthos on bare metal here.
This guide explains the steps required to create Anthos clusters on bare metal on Ubuntu hosts, and also how to try it on Google Compute Engine (GCE) VMs.
This module can be used to install Anthos on Bare Metal nodes for all kinds of clusters (admin, user, hybrid, and standalone).
The ansible.cfg configuration file is placed in the <REPOSITORY_ROOT>/ansible directory. It configures the inventory location and disables host key checking.
[defaults]
inventory = ./inventory/hosts.yml
host_key_checking = False
Install Ansible on the workstation and run the below commands to create the Anthos cluster.
sudo apt install ansible
cd <REPOSITORY_ROOT>/ansible
ansible-playbook anthos.yml
The complete list of prerequisites is available here.
Other than the cluster nodes, you need a workstation machine for running the Anthos installation commands. It can be a GCE VM, an on-premises VM, or an on-premises physical server. The main prerequisites for the workstation are:
- The operating system is the same supported Linux distribution that runs on the cluster node machines.
- More than 50 GB of free disk space.
- L3 connectivity to all cluster node machines.
- Access to all cluster node machines through SSH via private keys with passwordless root access. Access can be either direct or through sudo.
- Access to the control plane VIP.
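The list above can be spot-checked from a shell on the workstation. This is a minimal sketch; the node IPs and key path are placeholders for your own environment.

```shell
# Free disk space on / in whole GB (the guide asks for more than 50 GB).
free_gb=$(df -P -k / | awk 'NR==2 {print int($4 / 1024 / 1024)}')
echo "free disk: ${free_gb} GB"

# Passwordless SSH reachability of each cluster node (placeholder IPs).
for ip in 10.200.0.2 10.200.0.3; do
  ssh -i ~/.ssh/id_rsa -o BatchMode=yes -o ConnectTimeout=5 "root@${ip}" true \
    && echo "${ip}: ssh ok" || echo "${ip}: ssh unreachable"
done
```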
You need a user account or a service account that has the Project Owner/Editor role for creating the required Google Cloud resources. Alternatively, you can add the following IAM roles to the user account (or service account):
- Service Account Admin
- Service Account Key Admin
- Project IAM Admin
- Compute Viewer
- Service Usage Admin
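If you prefer granting these roles from the CLI, the bindings can be added with gcloud. This is a sketch with placeholder values, using the standard role IDs that correspond to the role names listed above.

```shell
# Grant the listed roles to a user account (sketch; replace the placeholders).
# For a service account, use --member="serviceAccount:[SA_EMAIL]" instead.
for role in roles/iam.serviceAccountAdmin roles/iam.serviceAccountKeyAdmin \
            roles/resourcemanager.projectIamAdmin roles/compute.viewer \
            roles/serviceusage.serviceUsageAdmin; do
  gcloud projects add-iam-policy-binding [PROJECT_ID] \
    --member="user:[USER_EMAIL]" --role="${role}"
done
```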
gcloud auth login
gcloud auth application-default login
The OS prerequisites for all the cluster nodes, including the workstation, can be accomplished through the ubuntu-prereq/rhel-prereq Ansible roles. The execution of these roles is controlled by the below variable in the vars/anthos_vars.yml file.
vars/anthos_vars.yml
# Possible values: ubuntu | rhel
# Use rhel for CentOS as well
os_type: "ubuntu"
The role execution is done from the playbook (anthos.yml).
anthos.yml
- hosts: all
  remote_user: "{{ login_user }}"
  gather_facts: "no"
  vars_files:
    - vars/anthos_vars.yml
  roles:
    - role: ubuntu-prereq
      become: yes
      become_method: sudo
      when: os_type == "ubuntu"
    - role: rhel-prereq
      become: yes
      become_method: sudo
      when: os_type == "rhel"
The workstation should have Docker installed, and the non-root user used for the Anthos installation should have access to Docker. This can be achieved through the ws-docker role.
anthos.yml
- hosts: workstation
  remote_user: "{{ login_user }}"
  gather_facts: "no"
  vars_files:
    - vars/anthos_vars.yml
  roles:
    - role: ws-docker
      become: yes
      become_method: sudo
      when: ws_docker == "yes"
The role execution can be controlled by the below variable.
vars/anthos_vars.yml
# Leave it blank if you would like to skip the docker installation and configuration
ws_docker: "yes"
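After the role runs, you can confirm that the login user can reach the Docker daemon without sudo. A small check, assuming nothing beyond the docker CLI itself:

```shell
# Verify that the current user can talk to the Docker daemon without sudo.
if docker info >/dev/null 2>&1; then
  docker_status=ok
else
  docker_status=missing   # not installed yet, or user not in the "docker" group
fi
echo "docker access: ${docker_status}"
```

If the result is `missing` on a machine where Docker is installed, re-login so the `docker` group membership takes effect.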
The workstation should have the gcloud SDK installed. You can do this using the gcloud-sdk Ansible role.
anthos.yml
- role: gcloud-sdk
  become: yes
  become_method: sudo
  when: gcloud_sdk == "yes"
The role execution can be controlled by the below variable.
vars/anthos_vars.yml
# Leave it blank if you would like to skip the gcloud SDK installation
gcloud_sdk: "yes"
The workstation should have the kubectl tool installed. This can be done through the kubectl-tool Ansible role.
anthos.yml
- role: kubectl-tool
  become: yes
  become_method: sudo
  when: kubectl_tool == "yes"
The role execution can be controlled by the below variable.
vars/anthos_vars.yml
# Leave it blank if you would like to skip the kubectl installation
kubectl_tool: "yes"
The workstation should have the bmctl tool installed. This can be done through the bmctl-tool Ansible role.
anthos.yml
- role: bmctl-tool
  become: yes
  become_method: sudo
  when: bmctl_tool == "yes"
The role execution can be controlled by the below variable.
vars/anthos_vars.yml
# Leave it blank if you would like to skip the bmctl installation
bmctl_tool: "yes"
The bmctl download location and version are configured through the below variable.
vars/anthos_vars.yml
bmctl_download_url: gs://anthos-baremetal-release/bmctl/1.6.2/linux-amd64/bmctl
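The bmctl binaries live in a public Cloud Storage bucket, so you can list the available versions before pinning bmctl_download_url. This assumes gsutil is installed and authenticated on your machine.

```shell
# Browse the published bmctl versions.
gsutil ls gs://anthos-baremetal-release/bmctl/

# Fetch a specific build the same way the role would.
gsutil cp gs://anthos-baremetal-release/bmctl/1.6.2/linux-amd64/bmctl .
chmod +x bmctl
```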
The Google Cloud configuration can be done from Cloud Shell or a GCE VM instance. If the configuration is done from the workstation, you should log in with a Google account. You can find the required details here.
You can configure Google Cloud and create the required Service Accounts using the service-accounts Ansible role.
anthos.yml
- role: service-accounts
  when: service_accounts == "yes"
If you have already created the Service Accounts and would like to skip this step, you can do so through the below variable.
vars/anthos_vars.yml
# Leave it blank if you would like to skip the Service Account creation
service_accounts: "yes"
The location of the downloaded Service Account key files and the names of the Service Accounts are configured through the below variables.
vars/anthos_vars.yml
# The Service Account Key files are stored in this location. The role that creates cluster searches this location for the key files.
gcp_sa_key_dir: /home/anthos/gcp_keys
# Below are the names of the Service Accounts created for the Anthos cluster
local_gcr_sa_name: anthos-gcr-svc-account
local_connect_agent_sa_name: connect-agent-svc-account
local_connect_register_sa_name: connect-register-svc-account
local_cloud_operations_sa_name: cloud-ops-svc-account
Note: The below bmctl command can also enable Google Cloud APIs and create the required service accounts. However, the Ansible role provides better control, especially when you would like to use the same Service Accounts for different clusters.
bmctl create config -c [CLUSTER_NAME] \
  --enable-apis \
  --create-service-accounts
The Anthos cluster can be created using the anthos Ansible role.
anthos.yml
- role: anthos
The anthos Ansible role uses a number of variables in addition to the Service Accounts variables. It also uses the inventory host file.
The IP addresses or DNS names of the cluster nodes, including the workstation, should be set in the Ansible inventory. The cp_nodes group contains the list of Control Plane nodes and the worker_nodes group contains the list of Worker nodes.
inventory/hosts.yml
all:
  hosts:
  children:
    workstation:
      hosts:
        10.200.0.7:
    cp_nodes:
      hosts:
        10.200.0.2:
        10.200.0.3:
        10.200.0.4:
    worker_nodes:
      hosts:
        10.200.0.5:
        10.200.0.6:
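Before running the playbook, the inventory can be sanity-checked with Ansible's own tooling. A sketch, run from the ansible directory of the repository:

```shell
cd <REPOSITORY_ROOT>/ansible

# Show the group structure (workstation, cp_nodes, worker_nodes) as parsed.
ansible-inventory --graph

# Confirm SSH connectivity to every node defined in the inventory.
ansible all -m ping -u anthos
```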
The variable file is placed in the vars directory under the root of the code repository. The purpose of each variable is explained below.
vars/anthos_vars.yml
# Login user, group and home for Cluster nodes
login_user: anthos
login_user_group: anthos
login_user_home: /home/anthos
# Possible values: ubuntu | rhel
# Use rhel for CentOS as well
os_type: "ubuntu"
# Set value to "yes" to apply the respective role
ws_docker: "yes"
gcloud_sdk: "yes"
kubectl_tool: "yes"
bmctl_tool: "yes"
service_accounts: "yes"
# Link to download bmctl from. It also contains the version
bmctl_download_url: gs://anthos-baremetal-release/bmctl/1.7.0/linux-amd64/bmctl
# Directory used by bmctl tool for creating the cluster
bmctl_workspace_dir: bmctl-workspace
# Directory where Service Account keys are placed
gcp_sa_key_dir: /home/anthos/gcp_keys
# Names of the Service Accounts
local_gcr_sa_name: anthos-gcr-svc-account
local_connect_agent_sa_name: connect-agent-svc-account
local_connect_register_sa_name: connect-register-svc-account
local_cloud_operations_sa_name: cloud-ops-svc-account
# The SSH key file for logging into Anthos nodes
ssh_private_key_path: /home/anthos/.ssh/id_rsa
# Project ID for the Google Cloud Project
project_id: [PROJECT_ID]
# Google Cloud region
location: [REGION]
# Name of the Cluster
cluster_name: [CLUSTER_NAME]
# Type of cluster deployment. Possible values are: standalone | hybrid | admin | user
cluster_type: hybrid
# Number of maximum Pods that can run on a node
max_pod_per_node: 250
# Container runtime for the cluster. Possible values are: docker and containerd
container_runtime: docker
# enable/disable application logging for cluster workloads. use 'true' to enable
app_logs: false
# Kubernetes Pod CIDR. Change it if the default one overlaps with the Cluster Nodes CIDR.
pod_cidr: 192.168.0.0/16
# Kubernetes Services CIDR. Change it if the default one overlaps with the Cluster Nodes CIDR.
service_cidr: 10.96.0.0/12
# Anthos Cluster Control Plane VIP. This should be in the Cluster Node subnet and should not be part of lb_address_pool.
cp_vip: 10.200.0.47
# Anthos Ingress VIP. This should be in the Cluster Node subnet and should be part of lb_address_pool.
ingress_vip: 10.200.0.48
# Address pool for Cluster Load Balancer
lb_address_pool:
- 10.200.0.48/28
# For a 'user' cluster, place the admin cluster kubeconfig file on the workstation machine and set admin_kubeconfig_path to the absolute path of this file
admin_kubeconfig_path: [ADMIN_CLUSTER_KUBECONFIG]
cgw_members:
- [email id of IAM user or service account]
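The VIP variables above have a subnet relationship worth double-checking: the lb_address_pool 10.200.0.48/28 spans 16 addresses (.48 through .63), so the sample ingress_vip (.48) falls inside the pool and the sample cp_vip (.47) falls just outside it, as the comments require. The arithmetic for the example values:

```shell
# A /28 pool contains 2^(32-28) = 16 addresses, so 10.200.0.48/28
# covers 10.200.0.48 through 10.200.0.63.
prefix=28
pool_size=$(( 1 << (32 - prefix) ))
pool_start=48
pool_end=$(( pool_start + pool_size - 1 ))
echo "pool size: ${pool_size}, range: .${pool_start}-.${pool_end}"
```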
You can read about the admin cluster installation here.
The admin cluster consists of Control Plane nodes only. Therefore, the inventory file for an admin cluster would look like this.
inventory/hosts.yml
all:
  hosts:
  children:
    workstation:
      hosts:
        10.200.0.7:
    cp_nodes:
      hosts:
        10.200.0.2:
The cp_nodes list should contain an odd number of hosts (such as 1, 3, or 5). The installation ignores the worker_nodes list if it is present in the inventory file.
You need to set the cluster_type and cluster_name variables in the variable file. Below is a sample variable file for the admin cluster using Ubuntu nodes. Replace the values enclosed in square brackets ([]) with actual values relevant to your GCP project and the cluster.
vars/anthos_vars.yml
login_user: anthos
login_user_group: anthos
login_user_home: /home/anthos
os_type: "ubuntu"
ws_docker: "yes"
gcloud_sdk: "yes"
kubectl_tool: "yes"
bmctl_tool: "yes"
service_accounts: "yes"
bmctl_download_url: gs://anthos-baremetal-release/bmctl/1.7.0/linux-amd64/bmctl
bmctl_workspace_dir: bmctl-workspace
gcp_sa_key_dir: /home/anthos/gcp_keys
local_gcr_sa_name: anthos-gcr-svc-account
local_connect_agent_sa_name: connect-agent-svc-account
local_connect_register_sa_name: connect-register-svc-account
local_cloud_operations_sa_name: cloud-ops-svc-account
ssh_private_key_path: /home/anthos/.ssh/id_rsa
project_id: [PROJECT_ID]
location: [REGION]
cluster_name: [CLUSTER_NAME]
cluster_type: admin
max_pod_per_node: 250
container_runtime: docker
app_logs: false
pod_cidr: 192.168.0.0/16
service_cidr: 10.96.0.0/12
cp_vip: 10.200.0.47
cgw_members:
- [email id of IAM user or service account]
The below variables are not used for the admin cluster. You can either ignore them or remove them from the variable file.
ingress_vip: 10.200.0.48
lb_address_pool:
- 10.200.0.48/28
admin_kubeconfig_path:
Run the below commands from the workstation node (Ansible should be installed on the workstation node) to create the cluster.
cd <REPOSITORY_ROOT>/ansible
ansible-playbook anthos.yml
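After the playbook completes, you can verify the new admin cluster with kubectl. This sketch assumes the default bmctl workspace layout, in which the kubeconfig is written under the workspace directory (as in the admin_kubeconfig_path example shown for the user cluster).

```shell
# Point kubectl at the kubeconfig generated by bmctl (placeholder cluster name).
export KUBECONFIG=~/bmctl-workspace/[CLUSTER_NAME]/[CLUSTER_NAME]-kubeconfig

# Control plane nodes should be Ready, and system pods should be Running.
kubectl get nodes
kubectl get pods -A
```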
You can read about the user cluster installation here.
The user cluster consists of Control Plane nodes as well as worker nodes. Therefore, the inventory file for a user cluster would look like this.
inventory/hosts.yml
all:
  hosts:
  children:
    workstation:
      hosts:
        10.200.0.7:
    cp_nodes:
      hosts:
        10.200.0.3:
    worker_nodes:
      hosts:
        10.200.0.5:
        10.200.0.6:
You need to set the cluster_type and cluster_name variables in the variable file. Below is a sample variable file for the user cluster using Ubuntu nodes. Replace the values enclosed in square brackets ([]) with the actual values relevant to your GCP project and the cluster.
Set admin_kubeconfig_path variable to the full path of the admin cluster kubeconfig file.
vars/anthos_vars.yml
login_user: anthos
login_user_group: anthos
login_user_home: /home/anthos
os_type: "ubuntu"
ws_docker: "yes"
gcloud_sdk: "yes"
kubectl_tool: "yes"
bmctl_tool: "yes"
service_accounts: "no"
bmctl_download_url: gs://anthos-baremetal-release/bmctl/1.7.0/linux-amd64/bmctl
bmctl_workspace_dir: bmctl-workspace
gcp_sa_key_dir: /home/anthos/gcp_keys
local_gcr_sa_name: anthos-gcr-svc-account
local_connect_agent_sa_name: connect-agent-svc-account
local_connect_register_sa_name: connect-register-svc-account
local_cloud_operations_sa_name: cloud-ops-svc-account
ssh_private_key_path: /home/anthos/.ssh/id_rsa
project_id: [PROJECT_ID]
location: [REGION]
cluster_name: [CLUSTER_NAME]
cluster_type: user
max_pod_per_node: 250
container_runtime: docker
app_logs: false
pod_cidr: 192.168.0.0/16
service_cidr: 10.96.0.0/12
cp_vip: 10.200.0.46
ingress_vip: 10.200.0.48
lb_address_pool:
- 10.200.0.48/28
admin_kubeconfig_path: /home/anthos/bmctl-workspace/admin-abm/admin-abm-kubeconfig
cgw_members:
- [email id of IAM user or service account]
Run the below commands from the workstation node (Ansible should be installed on the workstation node) to create the cluster.
cd <REPOSITORY_ROOT>/ansible
ansible-playbook anthos.yml
You can read about the hybrid cluster installation here.
The hybrid cluster consists of Control Plane nodes as well as worker nodes. Therefore, the inventory file for a hybrid cluster would look like this.
inventory/hosts.yml
all:
  hosts:
  children:
    workstation:
      hosts:
        10.200.0.7:
    cp_nodes:
      hosts:
        10.200.0.2:
        10.200.0.3:
        10.200.0.4:
    worker_nodes:
      hosts:
        10.200.0.5:
        10.200.0.6:
You need to set the cluster_type and cluster_name variables in the variable file. Below is a sample variable file for the hybrid cluster using Ubuntu nodes. Replace the values enclosed in square brackets ([]) with actual values relevant to your GCP project and the cluster.
vars/anthos_vars.yml
login_user: anthos
login_user_group: anthos
login_user_home: /home/anthos
os_type: "ubuntu"
ws_docker: "yes"
gcloud_sdk: "yes"
kubectl_tool: "yes"
bmctl_tool: "yes"
service_accounts: "yes"
bmctl_download_url: gs://anthos-baremetal-release/bmctl/1.7.0/linux-amd64/bmctl
bmctl_workspace_dir: bmctl-workspace
gcp_sa_key_dir: /home/anthos/gcp_keys
local_gcr_sa_name: anthos-gcr-svc-account
local_connect_agent_sa_name: connect-agent-svc-account
local_connect_register_sa_name: connect-register-svc-account
local_cloud_operations_sa_name: cloud-ops-svc-account
ssh_private_key_path: /home/anthos/.ssh/id_rsa
project_id: [PROJECT_ID]
location: [REGION]
cluster_name: [CLUSTER_NAME]
cluster_type: hybrid
max_pod_per_node: 250
container_runtime: docker
app_logs: false
pod_cidr: 192.168.0.0/16
service_cidr: 10.96.0.0/12
cp_vip: 10.200.0.47
ingress_vip: 10.200.0.48
lb_address_pool:
- 10.200.0.48/28
admin_kubeconfig_path:
cgw_members:
- [email id of IAM user or service account]
The variable admin_kubeconfig_path is not used by the hybrid cluster.
Run the below commands from the workstation node (Ansible should be installed on the workstation node) to create the cluster.
cd <REPOSITORY_ROOT>/ansible
ansible-playbook anthos.yml
You can read about the standalone cluster installation here.
The standalone cluster consists of Control Plane nodes as well as worker nodes. Therefore, the inventory file for a standalone cluster would look like this.
inventory/hosts.yml
all:
  hosts:
  children:
    workstation:
      hosts:
        10.200.0.7:
    cp_nodes:
      hosts:
        10.200.0.2:
        10.200.0.3:
        10.200.0.4:
    worker_nodes:
      hosts:
        10.200.0.5:
        10.200.0.6:
You need to set the cluster_type and cluster_name variables in the variable file. Below is a sample variable file for the standalone cluster using Ubuntu nodes. Replace the values enclosed in square brackets ([]) with actual values relevant to your GCP project and the cluster.
vars/anthos_vars.yml
login_user: anthos
login_user_group: anthos
login_user_home: /home/anthos
os_type: "ubuntu"
ws_docker: "yes"
gcloud_sdk: "yes"
kubectl_tool: "yes"
bmctl_tool: "yes"
service_accounts: "yes"
bmctl_download_url: gs://anthos-baremetal-release/bmctl/1.7.0/linux-amd64/bmctl
bmctl_workspace_dir: bmctl-workspace
gcp_sa_key_dir: /home/anthos/gcp_keys
local_gcr_sa_name: anthos-gcr-svc-account
local_connect_agent_sa_name: connect-agent-svc-account
local_connect_register_sa_name: connect-register-svc-account
local_cloud_operations_sa_name: cloud-ops-svc-account
ssh_private_key_path: /home/anthos/.ssh/id_rsa
project_id: [PROJECT_ID]
location: [REGION]
cluster_name: [CLUSTER_NAME]
cluster_type: standalone
max_pod_per_node: 250
container_runtime: docker
app_logs: false
pod_cidr: 192.168.0.0/16
service_cidr: 10.96.0.0/12
cp_vip: 10.200.0.47
ingress_vip: 10.200.0.48
lb_address_pool:
- 10.200.0.48/28
admin_kubeconfig_path:
cgw_members:
- [email id of IAM user or service account]
The variable admin_kubeconfig_path is not used by the standalone cluster.
Run the below commands from the workstation node (Ansible should be installed on the workstation node) to create the cluster.
cd <REPOSITORY_ROOT>/ansible
ansible-playbook anthos.yml
You can use the Connect Gateway to connect to registered clusters and run commands to monitor the workloads. You can read more about the Connect Gateway here.
You can configure the Connect Gateway with the connect-gateway Ansible role.
anthos.yml
- role: connect-gateway
This role requires the below variable that contains a list of users and/or service accounts that can connect to the cluster through Connect Gateway.
vars/anthos_vars.yml
cgw_members:
- user:[USER_EMAIL_ID]
- serviceAccount:[SERVICE_ACCOUNT_EMAIL_ID]
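For reference, gateway access is granted through IAM. Below is a hedged sketch of adding such a binding manually with the roles/gkehub.gatewayAdmin role, in case you manage IAM outside this role; the connect-gateway role is expected to handle this for the members listed in cgw_members.

```shell
# Manually grant Connect Gateway access to a member (sketch; placeholders).
gcloud projects add-iam-policy-binding [PROJECT_ID] \
  --member="user:[USER_EMAIL_ID]" \
  --role="roles/gkehub.gatewayAdmin"
```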
Open Cloud Shell from the GCP console after logging in using your Google account.
gcloud alpha container hub memberships get-credentials [CLUSTER_NAME] --project [PROJECT_ID]
where [CLUSTER_NAME] is the name of the Anthos cluster and [PROJECT_ID] is the Google Cloud Project ID.
Run the below command to verify that you can connect successfully to the cluster API.
NOTE: Your email ID should be added to the CGW configuration.
kubectl get pods -A