From 7ab0b39879624ea1bfd7779eec041abcdd4f258f Mon Sep 17 00:00:00 2001 From: phillip-toohill Date: Wed, 6 Mar 2024 15:07:22 -0600 Subject: [PATCH] Monitoring: enabling memcached monitoring Signed-off-by: Kevin Carter --- doc-requirements.txt | 4 + docs/Create-Persistent-Storage.md | 212 ------ docs/Deploy-Openstack.md | 713 ------------------ docs/build-local-images.md | 4 +- docs/components.md | 3 +- docs/deploy-required-infrastructure.md | 321 -------- docs/extra-osie.md | 10 + ...tarted.md => genestack-getting-started.md} | 6 +- docs/index.md | 38 - docs/infrastructure-ingress.md | 17 + docs/infrastructure-libvirt.md | 23 + docs/infrastructure-mariadb-connect.md | 11 + docs/infrastructure-mariadb.md | 39 + docs/infrastructure-memcached.md | 23 + docs/infrastructure-metallb.md | 53 ++ docs/infrastructure-namespace.md | 7 + docs/infrastructure-overview.md | 15 + ...kup.md => infrastructure-ovn-db-backup.md} | 0 docs/infrastructure-ovn-setup.md | 109 +++ docs/infrastructure-ovn.md | 6 + docs/infrastructure-rabbitmq.md | 28 + docs/{kube-config.md => k8s-config.md} | 0 ...8s-upgrade.md => k8s-kubespray-upgrade.md} | 0 docs/{build-k8s.md => k8s-kubespray.md} | 105 +-- docs/k8s-overview.md | 13 + docs/k8s-postdeploy.md | 65 ++ docs/openstack-cinder.md | 198 +++++ docs/openstack-clouds.md | 32 + docs/openstack-compute-kit.md | 172 +++++ docs/openstack-flavors.md | 12 + docs/openstack-glance-images.md | 191 +++++ docs/openstack-glance.md | 58 ++ docs/openstack-heat.md | 59 ++ docs/openstack-helm-make.md | 15 + docs/openstack-horizon.md | 38 + docs/openstack-keystone-federation.md | 85 +++ docs/openstack-keystone.md | 63 ++ docs/openstack-neutron-networks.md | 72 ++ docs/openstack-octavia.md | 58 ++ docs/openstack-overview.md | 23 + docs/openstack-skyline.md | 38 + docs/overrides/stylesheets/adr.css | 98 +++ docs/post-deploy-ops.md | 418 ---------- docs/storage-ceph-rook-external.md | 76 ++ docs/storage-ceph-rook-internal.md | 41 + docs/storage-nfs-external.md | 49 ++ docs/storage-overview.md | 17 + docs/storage-topolvm.md | 25 + mkdocs.yml | 99 ++- 49 files changed, 1944 insertions(+), 1818 deletions(-) delete mode 100644 docs/Create-Persistent-Storage.md delete mode 100644 docs/Deploy-Openstack.md delete mode 100644 docs/deploy-required-infrastructure.md create mode 100644 docs/extra-osie.md rename docs/{getting-started.md => genestack-getting-started.md} (84%) create mode 100644 docs/infrastructure-ingress.md create mode 100644 docs/infrastructure-libvirt.md create mode 100644 docs/infrastructure-mariadb-connect.md create mode 100644 docs/infrastructure-mariadb.md create mode 100644 docs/infrastructure-memcached.md create mode 100644 docs/infrastructure-metallb.md create mode 100644 docs/infrastructure-namespace.md create mode 100644 docs/infrastructure-overview.md rename docs/{ovn-db-backup.md => infrastructure-ovn-db-backup.md} (100%) create mode 100644 docs/infrastructure-ovn-setup.md create mode 100644 docs/infrastructure-ovn.md create mode 100644 docs/infrastructure-rabbitmq.md rename docs/{kube-config.md => k8s-config.md} (100%) rename docs/{k8s-upgrade.md => k8s-kubespray-upgrade.md} (100%) rename docs/{build-k8s.md => k8s-kubespray.md} (61%) create mode 100644 docs/k8s-overview.md create mode 100644 docs/k8s-postdeploy.md create mode 100644 docs/openstack-cinder.md create mode 100644 docs/openstack-clouds.md create mode 100644 docs/openstack-compute-kit.md create mode 100644 docs/openstack-flavors.md create mode 100644 docs/openstack-glance-images.md create mode 100644 
docs/openstack-glance.md create mode 100644 docs/openstack-heat.md create mode 100644 docs/openstack-helm-make.md create mode 100644 docs/openstack-horizon.md create mode 100644 docs/openstack-keystone-federation.md create mode 100644 docs/openstack-keystone.md create mode 100644 docs/openstack-neutron-networks.md create mode 100644 docs/openstack-octavia.md create mode 100644 docs/openstack-overview.md create mode 100644 docs/openstack-skyline.md create mode 100644 docs/overrides/stylesheets/adr.css delete mode 100644 docs/post-deploy-ops.md create mode 100644 docs/storage-ceph-rook-external.md create mode 100644 docs/storage-ceph-rook-internal.md create mode 100644 docs/storage-nfs-external.md create mode 100644 docs/storage-overview.md create mode 100644 docs/storage-topolvm.md diff --git a/doc-requirements.txt b/doc-requirements.txt index 9a8a4ca4..4998cfdf 100644 --- a/doc-requirements.txt +++ b/doc-requirements.txt @@ -1,2 +1,6 @@ mkdocs mkdocs-material +mkdocs-material-adr +mkdocs-swagger-ui-tag +mkdocs-glightbox +markdown-exec diff --git a/docs/Create-Persistent-Storage.md b/docs/Create-Persistent-Storage.md deleted file mode 100644 index dc109e96..00000000 --- a/docs/Create-Persistent-Storage.md +++ /dev/null @@ -1,212 +0,0 @@ -# Persistent Storage Demo - -[![asciicast](https://asciinema.org/a/629785.svg)](https://asciinema.org/a/629785) - -# Deploying Your Persistent Storage - -For the basic needs of our Kubernetes environment, we need some basic persistent storage. Storage, like anything good in life, -is a choose your own adventure ecosystem, so feel free to ignore this section if you have something else that satisfies the need. - -The basis needs of Genestack are the following storage classes - -* general - a general storage cluster which is set as the deault. -* general-multi-attach - a multi-read/write storage backend - -These `StorageClass` types are needed by various systems; however, how you get to these storage classes is totally up to you. -The following sections provide a means to manage storage and provide our needed `StorageClass` types. - -> The following sections are not all needed; they're just references. - -## Rook (Ceph) - In Cluster - -### Deploy the Rook operator - -``` shell -kubectl apply -k /opt/genestack/kustomize/rook-operator/ -``` - -### Deploy the Rook cluster - -> [!IMPORTANT] -> Rook will deploy against nodes labeled `role=storage-node`. Make sure to have a look at the `/opt/genestack/kustomize/rook-cluster/rook-cluster.yaml` file to ensure it's setup to your liking, pay special attention to your `deviceFilter` -settings, especially if different devices have different device layouts. - -``` shell -kubectl apply -k /opt/genestack/kustomize/rook-cluster/ -``` - -### Validate the cluster is operational - -``` shell -kubectl --namespace rook-ceph get cephclusters.ceph.rook.io -``` - -> You can track the deployment with the following command `kubectl --namespace rook-ceph get pods -w`. - -### Create Storage Classes - -Once the rook cluster is online with a HEALTH status of `HEALTH_OK`, deploy the filesystem, storage-class, and pool defaults. 
- -``` shell -kubectl apply -k /opt/genestack/kustomize/rook-defaults -``` -> [!IMPORTANT] -> If installing prometheus after rook-ceph is installed, you may patch a running rook-ceph cluster with the following command: -``` shell -kubectl -n rook-ceph patch CephCluster rook-ceph --type=merge -p "{\"spec\": {\"monitoring\": {\"enabled\": true}}}" -``` -Ensure you have 'servicemonitors' defined in the rook-ceph namespace. - - -## Cephadm/ceph-ansible/Rook (Ceph) - External - -We can use an external ceph cluster and present it via rook-ceph to your cluster. - -### Prepare pools on external cluster - -``` shell -ceph osd pool create general 32 -ceph osd pool create general-multi-attach-data 32 -ceph osd pool create general-multi-attach-metadata 32 -rbd pool init general -ceph fs new general-multi-attach general-multi-attach-metadata general-multi-attach-data -``` - -### You must have a MDS service running, in this example I am tagging my 3 ceph nodes with MDS labels and creating a MDS service for the general-multi-attach Cephfs Pool - -``` shell -ceph orch host label add genestack-ceph1 mds -ceph orch host label add genestack-ceph2 mds -ceph orch host label add genestack-ceph3 mds -ceph orch apply mds myfs label:mds -``` - -### We will now download create-external-cluster-resources.py and create exports to run on your controller node. Using cephadm in this example: - -``` shell -./cephadm shell -yum install wget -y ; wget https://raw.githubusercontent.com/rook/rook/release-1.12/deploy/examples/create-external-cluster-resources.py -python3 create-external-cluster-resources.py --rbd-data-pool-name general --cephfs-filesystem-name general-multi-attach --namespace rook-ceph-external --format bash -``` -### Copy and paste the output, here is an example: -``` shell -root@genestack-ceph1:/# python3 create-external-cluster-resources.py --rbd-data-pool-name general --cephfs-filesystem-name general-multi-attach --namespace rook-ceph-external --format bash -export NAMESPACE=rook-ceph-external -export ROOK_EXTERNAL_FSID=d45869e0-ccdf-11ee-8177-1d25f5ec2433 -export ROOK_EXTERNAL_USERNAME=client.healthchecker -export ROOK_EXTERNAL_CEPH_MON_DATA=genestack-ceph1=10.1.1.209:6789 -export ROOK_EXTERNAL_USER_SECRET=AQATh89lf5KiBBAATgaOGAMELzPOIpiCg6ANfA== -export ROOK_EXTERNAL_DASHBOARD_LINK=https://10.1.1.209:8443/ -export CSI_RBD_NODE_SECRET=AQATh89l3AJjBRAAYD+/cuf3XPdMBmdmz4iWIA== -export CSI_RBD_NODE_SECRET_NAME=csi-rbd-node -export CSI_RBD_PROVISIONER_SECRET=AQATh89l9dH4BRAApBKzqwtaUqw9bNcBI/iGGw== -export CSI_RBD_PROVISIONER_SECRET_NAME=csi-rbd-provisioner -export CEPHFS_POOL_NAME=general-multi-attach-data -export CEPHFS_METADATA_POOL_NAME=general-multi-attach-metadata -export CEPHFS_FS_NAME=general-multi-attach -export CSI_CEPHFS_NODE_SECRET=AQATh89lFeqMBhAAJpHAE5vtukXYuRj2+WTh2g== -export CSI_CEPHFS_PROVISIONER_SECRET=AQATh89lHB0dBxAA7CHM/9rTSs79SLJSKVBYeg== -export CSI_CEPHFS_NODE_SECRET_NAME=csi-cephfs-node -export CSI_CEPHFS_PROVISIONER_SECRET_NAME=csi-cephfs-provisioner -export MONITORING_ENDPOINT=10.1.1.209 -export MONITORING_ENDPOINT_PORT=9283 -export RBD_POOL_NAME=general -export RGW_POOL_PREFIX=default -``` - -### Run the following commands to import the cluster after pasting in exports from external cluster -``` shell -kubectl apply -k /opt/genestack/kustomize/rook-operator/ -/opt/genestack/scripts/import-external-cluster.sh -helm repo add rook-release https://charts.rook.io/release -helm install --create-namespace --namespace rook-ceph-external rook-ceph-cluster --set operatorNamespace=rook-ceph 
rook-release/rook-ceph-cluster -f /opt/genestack/submodules/rook/deploy/charts/rook-ceph-cluster/values-external.yaml -kubectl patch storageclass general -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' -``` - -### Monitor progress: -``` shell -kubectl --namespace rook-ceph-external get cephcluster -w -``` - -### Should return when finished: -``` shell -NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH EXTERNAL FSID -rook-ceph-external /var/lib/rook 3 3m24s Connected Cluster connected successfully HEALTH_OK true d45869e0-ccdf-11ee-8177-1d25f5ec2433 -``` - - - -## NFS - External - -While NFS in K8S works great, it's not suitable for use in all situations. - -> Example: NFS is officially not supported by MariaDB and will fail to initialize the database backend when running on NFS. - -In Genestack, the `general` storage class is used by default for systems like RabbitMQ and MariaDB. If you intend to use NFS, you will need to ensure your use cases match the workloads and may need to make some changes within the manifests. - -### Install Base Packages - -NFS requires utilities to be installed on the host. Before you create workloads that require NFS make sure you have `nfs-common` installed on your target storage hosts (e.g. the controllers). - -### Add the NFS Provisioner Helm repo - -``` shell -helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/ -``` - -### Install External NFS Provisioner - -This command will connect to the external storage provider and generate a storage class that services the `general` storage class. - -``` shell -helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \ - --namespace nfs-provisioner \ - --create-namespace \ - --set nfs.server=172.16.27.67 \ - --set nfs.path=/mnt/storage/k8s \ - --set nfs.mountOptions={"nolock"} \ - --set storageClass.defaultClass=true \ - --set replicaCount=1 \ - --set storageClass.name=general \ - --set storageClass.provisionerName=nfs-provisioner-01 -``` - -This command will connect to the external storage provider and generate a storage class that services the `general-multi-attach` storage class. - -``` shell -helm install nfs-subdir-external-provisioner-multi nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \ - --namespace nfs-provisioner \ - --create-namespace \ - --set nfs.server=172.16.27.67 \ - --set nfs.path=/mnt/storage/k8s \ - --set nfs.mountOptions={"nolock"} \ - --set replicaCount=1 \ - --set storageClass.name=general-multi-attach \ - --set storageClass.provisionerName=nfs-provisioner-02 \ - --set storageClass.accessModes=ReadWriteMany -``` - -## TopoLVM - In Cluster - -[TopoLVM](https://github.com/topolvm/topolvm) is a capacity aware storage provisioner which can make use of physical volumes.\ -The following steps are one way to set it up, however, consult the [documentation](https://github.com/topolvm/topolvm/blob/main/docs/getting-started.md) for a full breakdown of everything possible with TopoLVM. - -### Create the target volume group on your hosts - -TopoLVM requires access to a volume group on the physical host to work, which means we need to set up a volume group on our hosts. By default, TopoLVM will use the controllers as storage hosts. The genestack Kustomize solution sets the general storage volume group to `vg-general`. This value can be changed within Kustomize found at `kustomize/topolvm/general/kustomization.yaml`. 
-
-> Simple example showing how to create the needed volume group.
-
-``` shell
-# NOTE sdX is a placeholder for a physical drive or partition.
-pvcreate /dev/sdX
-vgcreate vg-general /dev/sdX
-```
-
-Once the volume group is on your storage nodes, the node is ready for use.
-
-### Deploy the TopoLVM Provisioner
-
-``` shell
-kubectl kustomize --enable-helm /opt/genestack/kustomize/topolvm/general | kubectl apply -f -
-```
diff --git a/docs/Deploy-Openstack.md b/docs/Deploy-Openstack.md
deleted file mode 100644
index 6c5c77c9..00000000
--- a/docs/Deploy-Openstack.md
+++ /dev/null
@@ -1,713 +0,0 @@
-# Building the cloud
-
-From this point forward we're building our OpenStack cloud. The following commands will leverage `helm` as the package manager and `kustomize` as our configuration management backend.
-
-## Deployment choices
-
-When you're building the cloud, you have a couple of deployment choices, the most fundamental of which is `base` or `aio`.
-
-* `base` creates a production-ready environment that ensures an HA system is deployed across the hardware available in your cloud.
-* `aio` creates a minimal cloud environment which is suitable for testing and may have low resources.
-
-The following examples all assume the use of a production environment; however, if you change `base` to `aio`, the deployment footprint will be changed for a given service.
-
-## The DNA of our services
-
-The DNA of the OpenStack services has been built to scale and be managed in a pseudo lights-out environment. We're aiming to empower operators to do more, simply and easily. Here are the high-level talking points about the way we've structured our applications.
-
-* All services make use of our core infrastructure, which is all managed by operators.
-* Backups, rollbacks, and package management are all built into our application delivery.
-* Databases, users, and grants are all run against a MariaDB Galera cluster, which is set up for OpenStack to write to a single node and read from many.
-  * The primary node is part of application service discovery and will be automatically promoted / demoted within the cluster as needed.
-* Queues, permissions, vhosts, and users are all backed by a RabbitMQ cluster with automatic failover. All of the queues deployed in the environment are Quorum queues, giving us a best-of-breed queuing platform which gracefully recovers from faults while maintaining performance.
-* Horizontal scaling groups have been applied to all of our services. This means we'll be able to auto-scale API applications up and down based on the needs of the environment.
-
-## Deploy Keystone
-
-[![asciicast](https://asciinema.org/a/629802.svg)](https://asciinema.org/a/629802)
-
-### Create secrets.
- -``` shell -kubectl --namespace openstack \ - create secret generic keystone-rabbitmq-password \ - --type Opaque \ - --from-literal=username="keystone" \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" -kubectl --namespace openstack \ - create secret generic keystone-db-password \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -kubectl --namespace openstack \ - create secret generic keystone-admin \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -kubectl --namespace openstack \ - create secret generic keystone-credential-keys \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -``` - -### Run the package deployment - -``` shell -cd /opt/genestack/submodules/openstack-helm - -helm upgrade --install keystone ./keystone \ - --namespace=openstack \ - --wait \ - --timeout 120m \ - -f /opt/genestack/helm-configs/keystone/keystone-helm-overrides.yaml \ - --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ - --set endpoints.oslo_db.auth.keystone.password="$(kubectl --namespace openstack get secret keystone-db-password -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_messaging.auth.keystone.password="$(kubectl --namespace openstack get secret keystone-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \ - --post-renderer /opt/genestack/kustomize/kustomize.sh \ - --post-renderer-args keystone/base -``` - -> In a production like environment you may need to include production specific files like the example variable file found in - `helm-configs/prod-example-openstack-overrides.yaml`. - -> NOTE: The image used here allows the system to run with RXT global authentication federation. - The federated plugin can be seen here, https://github.com/cloudnull/keystone-rxt - -Deploy the openstack admin client pod (optional) - -``` shell -kubectl --namespace openstack apply -f /opt/genestack/manifests/utils/utils-openstack-client-admin.yaml -``` - -### Validate functionality - -``` shell -kubectl --namespace openstack exec -ti openstack-admin-client -- openstack user list -``` - -## Deploy Glance - -[![asciicast](https://asciinema.org/a/629806.svg)](https://asciinema.org/a/629806) - -### Create secrets. - -``` shell -kubectl --namespace openstack \ - create secret generic glance-rabbitmq-password \ - --type Opaque \ - --from-literal=username="glance" \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" -kubectl --namespace openstack \ - create secret generic glance-db-password \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -kubectl --namespace openstack \ - create secret generic glance-admin \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -``` - -> Before running the Glance deployment you should configure the backend which is defined in the - `helm-configs/glance/glance-helm-overrides.yaml` file. 
-  The default makes the assumption we're running with Ceph deployed by Rook, so the backend is configured to be cephfs with
-  multi-attach functionality. While this works great, you should consider all of the available storage backends and make the
-  right decision for your environment.
-
-### Run the package deployment
-
-``` shell
-cd /opt/genestack/submodules/openstack-helm
-
-helm upgrade --install glance ./glance \
-  --namespace=openstack \
-  --wait \
-  --timeout 120m \
-  -f /opt/genestack/helm-configs/glance/glance-helm-overrides.yaml \
-  --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \
-  --set endpoints.identity.auth.glance.password="$(kubectl --namespace openstack get secret glance-admin -o jsonpath='{.data.password}' | base64 -d)" \
-  --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \
-  --set endpoints.oslo_db.auth.glance.password="$(kubectl --namespace openstack get secret glance-db-password -o jsonpath='{.data.password}' | base64 -d)" \
-  --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \
-  --set endpoints.oslo_messaging.auth.glance.password="$(kubectl --namespace openstack get secret glance-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \
-  --post-renderer /opt/genestack/kustomize/kustomize.sh \
-  --post-renderer-args glance/base
-```
-
-> In a production like environment you may need to include production specific files like the example variable file found in
-  `helm-configs/prod-example-openstack-overrides.yaml`.
-
-> Note that the defaults disable `storage_init` because we're using **pvc** as the image backend
-  type. In production this should be changed to swift.
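If you keep the default **pvc** backend for now, a quick optional sanity check is to look at the claims Glance created for its image store. This is only an illustrative check, assuming the `openstack` namespace used throughout this guide:

``` shell
# Optional: confirm the Glance image store landed on a PersistentVolumeClaim.
kubectl --namespace openstack get pvc | grep -i glance
```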
- -### Validate functionality - -``` shell -kubectl --namespace openstack exec -ti openstack-admin-client -- openstack image list -``` - -## Deploy Heat - -[![asciicast](https://asciinema.org/a/629807.svg)](https://asciinema.org/a/629807) - -### Create secrets - -``` shell -kubectl --namespace openstack \ - create secret generic heat-rabbitmq-password \ - --type Opaque \ - --from-literal=username="heat" \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" -kubectl --namespace openstack \ - create secret generic heat-db-password \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -kubectl --namespace openstack \ - create secret generic heat-admin \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -kubectl --namespace openstack \ - create secret generic heat-trustee \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -kubectl --namespace openstack \ - create secret generic heat-stack-user \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -``` - -### Run the package deployment - -``` shell -cd /opt/genestack/submodules/openstack-helm - -helm upgrade --install heat ./heat \ - --namespace=openstack \ - --timeout 120m \ - -f /opt/genestack/helm-configs/heat/heat-helm-overrides.yaml \ - --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.identity.auth.heat.password="$(kubectl --namespace openstack get secret heat-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.identity.auth.heat_trustee.password="$(kubectl --namespace openstack get secret heat-trustee -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.identity.auth.heat_stack_user.password="$(kubectl --namespace openstack get secret heat-stack-user -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ - --set endpoints.oslo_db.auth.heat.password="$(kubectl --namespace openstack get secret heat-db-password -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_messaging.auth.heat.password="$(kubectl --namespace openstack get secret heat-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \ - --post-renderer /opt/genestack/kustomize/kustomize.sh \ - --post-renderer-args heat/base -``` - -> In a production like environment you may need to include production specific files like the example variable file found in - `helm-configs/prod-example-openstack-overrides.yaml`. 
- -### Validate functionality - -``` shell -kubectl --namespace openstack exec -ti openstack-admin-client -- openstack --os-interface internal orchestration service list -``` - -## Deploy Cinder - -[![asciicast](https://asciinema.org/a/629808.svg)](https://asciinema.org/a/629808) - -### Create secrets - -``` shell -kubectl --namespace openstack \ - create secret generic cinder-rabbitmq-password \ - --type Opaque \ - --from-literal=username="cinder" \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" -kubectl --namespace openstack \ - create secret generic cinder-db-password \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -kubectl --namespace openstack \ - create secret generic cinder-admin \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -``` - -### Run the package deployment - -``` shell -cd /opt/genestack/submodules/openstack-helm - -helm upgrade --install cinder ./cinder \ - --namespace=openstack \ - --wait \ - --timeout 120m \ - -f /opt/genestack/helm-configs/cinder/cinder-helm-overrides.yaml \ - --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.identity.auth.cinder.password="$(kubectl --namespace openstack get secret cinder-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ - --set endpoints.oslo_db.auth.cinder.password="$(kubectl --namespace openstack get secret cinder-db-password -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_messaging.auth.cinder.password="$(kubectl --namespace openstack get secret cinder-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \ - --post-renderer /opt/genestack/kustomize/kustomize.sh \ - --post-renderer-args cinder/base -``` - -> In a production like environment you may need to include production specific files like the example variable file found in - `helm-configs/prod-example-openstack-overrides.yaml`. - -Once the helm deployment is complete cinder and all of it's API services will be online. However, using this setup there will be -no volume node at this point. The reason volume deployments have been disabled is because we didn't expose ceph to the openstack -environment and OSH makes a lot of ceph related assumptions. For testing purposes we're wanting to run with the logical volume -driver (reference) and manage the deployment of that driver in a hybrid way. As such there's a deployment outside of our normal -K8S workflow will be needed on our volume host. - -> The LVM volume makes the assumption that the storage node has the required volume group setup `lvmdriver-1` on the node - This is not something that K8S is handling at this time. - -While cinder can run with a great many different storage backends, for the simple case we want to run with the Cinder reference -driver, which makes use of Logical Volumes. Because this driver is incompatible with a containerized work environment, we need -to run the services on our baremetal targets. 
-Genestack has a playbook which will facilitate the installation of our services
-and ensure that we've deployed everything in working order. The playbook can be found at `playbooks/deploy-cinder-volumes-reference.yaml`.
-Included in the playbooks directory is an example inventory for our cinder hosts; however, any inventory should work fine.
-
-#### Host Setup
-
-The cinder target hosts need to have some basic setup run on them to make them compatible with our Logical Volume Driver.
-
-1. Ensure DNS is working normally.
-
-Assuming your storage node was also deployed as a K8S node when we did our initial Kubernetes deployment, the DNS should already be
-operational for you; however, in the event you need to do some manual tweaking or if the node was not deployed as a K8S worker, then
-make sure you set up the DNS resolvers correctly so that your volume service node can communicate with our cluster.
-
-> This is expected to be our CoreDNS IP, in my case this is `169.254.25.10`.
-
-This is an example of my **systemd-resolved** conf found in `/etc/systemd/resolved.conf`
-``` conf
-[Resolve]
-DNS=169.254.25.10
-#FallbackDNS=
-Domains=openstack.svc.cluster.local svc.cluster.local cluster.local
-#LLMNR=no
-#MulticastDNS=no
-DNSSEC=no
-Cache=no-negative
-#DNSStubListener=yes
-```
-
-Restart your DNS service after changes are made.
-
-``` shell
-systemctl restart systemd-resolved.service
-```
-
-2. Volume Group `cinder-volumes-1` needs to be created, which can be done in two simple commands.
-
-Create the physical volume
-
-``` shell
-pvcreate /dev/vdf
-```
-
-Create the volume group
-
-``` shell
-vgcreate cinder-volumes-1 /dev/vdf
-```
-
-It should be noted that this setup can be tweaked and tuned to your heart's desire; additionally, you can further extend a
-volume group with multiple disks. The example above is just that, an example. Check out more from the upstream docs on how
-to best operate your volume groups for your specific needs.
-
-#### Hybrid Cinder Volume deployment
-
-With the volume groups and DNS setup on your target hosts, it is now time to deploy the volume services. The playbook `playbooks/deploy-cinder-volumes-reference.yaml` will be used to create a release target for our python code-base and deploy systemd service
-units to run the cinder-volume process.
-
-> [!IMPORTANT]
-> Consider the **storage** network on your Cinder hosts that will be accessible to Nova compute hosts. By default, the playbook uses `ansible_default_ipv4.address` to configure the target address, which may or may not work for your environment. Append a variable, e.g., `-e cinder_storage_network_interface=ansible_br_mgmt`, to use the specified interface address in `cinder.conf` for `my_ip` and `target_ip_address` in `cinder/backends.conf`. **Interface names with a `-` must be entered with a `_` and be prefixed with `ansible`**
-
-##### Example without storage network interface override
-
-``` shell
-ansible-playbook -i inventory-example.yaml deploy-cinder-volumes-reference.yaml
-```
-
-Once the playbook has finished executing, check the cinder api to verify functionality.
- -``` shell -root@openstack-flex-node-0:~# kubectl --namespace openstack exec -ti openstack-admin-client -- openstack volume service list -+------------------+-------------------------------------------------+------+---------+-------+----------------------------+ -| Binary | Host | Zone | Status | State | Updated At | -+------------------+-------------------------------------------------+------+---------+-------+----------------------------+ -| cinder-scheduler | cinder-volume-worker | nova | enabled | up | 2023-12-26T17:43:07.000000 | -| cinder-volume | openstack-flex-node-4.cluster.local@lvmdriver-1 | nova | enabled | up | 2023-12-26T17:43:04.000000 | -+------------------+-------------------------------------------------+------+---------+-------+----------------------------+ -``` - -> Notice the volume service is up and running with our `lvmdriver-1` target. - -At this point it would be a good time to define your types within cinder. For our example purposes we need to define the `lvmdriver-1` -type so that we can schedule volumes to our environment. - -``` shell -root@openstack-flex-node-0:~# kubectl --namespace openstack exec -ti openstack-admin-client -- openstack volume type create lvmdriver-1 -+-------------+--------------------------------------+ -| Field | Value | -+-------------+--------------------------------------+ -| description | None | -| id | 6af6ade2-53ca-4260-8b79-1ba2f208c91d | -| is_public | True | -| name | lvmdriver-1 | -+-------------+--------------------------------------+ -``` - -### Validate functionality - -If wanted, create a test volume to tinker with - -``` shell -root@openstack-flex-node-0:~# kubectl --namespace openstack exec -ti openstack-admin-client -- openstack volume create --size 1 test -+---------------------+--------------------------------------+ -| Field | Value | -+---------------------+--------------------------------------+ -| attachments | [] | -| availability_zone | nova | -| bootable | false | -| consistencygroup_id | None | -| created_at | 2023-12-26T17:46:15.639697 | -| description | None | -| encrypted | False | -| id | c744af27-fb40-4ffa-8a84-b9f44cb19b2b | -| migration_status | None | -| multiattach | False | -| name | test | -| properties | | -| replication_status | None | -| size | 1 | -| snapshot_id | None | -| source_volid | None | -| status | creating | -| type | lvmdriver-1 | -| updated_at | None | -| user_id | 2ddf90575e1846368253474789964074 | -+---------------------+--------------------------------------+ - -root@openstack-flex-node-0:~# kubectl --namespace openstack exec -ti openstack-admin-client -- openstack volume list -+--------------------------------------+------+-----------+------+-------------+ -| ID | Name | Status | Size | Attached to | -+--------------------------------------+------+-----------+------+-------------+ -| c744af27-fb40-4ffa-8a84-b9f44cb19b2b | test | available | 1 | | -+--------------------------------------+------+-----------+------+-------------+ -``` - -You can validate the environment is operational by logging into the storage nodes to validate the LVM targets are being created. - -``` shell -root@openstack-flex-node-4:~# lvs - LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert - c744af27-fb40-4ffa-8a84-b9f44cb19b2b cinder-volumes-1 -wi-a----- 1.00g -``` - -## Create Compute Kit Secrets - -[![asciicast](https://asciinema.org/a/629813.svg)](https://asciinema.org/a/629813) - -### Creating the Compute Kit Secrets - -Part of running Nova is also running placement. 
Setup all credentials now so we can use them across the nova and placement services. - -``` shell -# Shared -kubectl --namespace openstack \ - create secret generic metadata-shared-secret \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -``` - -``` shell -# Placement -kubectl --namespace openstack \ - create secret generic placement-db-password \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -kubectl --namespace openstack \ - create secret generic placement-admin \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -``` - -``` shell -# Nova -kubectl --namespace openstack \ - create secret generic nova-db-password \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -kubectl --namespace openstack \ - create secret generic nova-admin \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -kubectl --namespace openstack \ - create secret generic nova-rabbitmq-password \ - --type Opaque \ - --from-literal=username="nova" \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" -``` - -``` shell -# Ironic (NOT IMPLEMENTED YET) -kubectl --namespace openstack \ - create secret generic ironic-admin \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -``` - -``` shell -# Designate (NOT IMPLEMENTED YET) -kubectl --namespace openstack \ - create secret generic designate-admin \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -``` - -``` shell -# Neutron -kubectl --namespace openstack \ - create secret generic neutron-rabbitmq-password \ - --type Opaque \ - --from-literal=username="neutron" \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" -kubectl --namespace openstack \ - create secret generic neutron-db-password \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -kubectl --namespace openstack \ - create secret generic neutron-admin \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -``` - -### Deploy Placement - -``` shell -cd /opt/genestack/submodules/openstack-helm - -helm upgrade --install placement ./placement --namespace=openstack \ - --namespace=openstack \ - --timeout 120m \ - -f /opt/genestack/helm-configs/placement/placement-helm-overrides.yaml \ - --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.identity.auth.placement.password="$(kubectl --namespace openstack get secret placement-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ - --set endpoints.oslo_db.auth.placement.password="$(kubectl --namespace openstack get secret placement-db-password -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_db.auth.nova_api.password="$(kubectl --namespace openstack get secret nova-db-password -o jsonpath='{.data.password}' | base64 -d)" \ - --post-renderer /opt/genestack/kustomize/kustomize.sh \ - --post-renderer-args placement/base -``` - -### 
Deploy Nova - -``` shell -cd /opt/genestack/submodules/openstack-helm - -helm upgrade --install nova ./nova \ - --namespace=openstack \ - --timeout 120m \ - -f /opt/genestack/helm-configs/nova/nova-helm-overrides.yaml \ - --set conf.nova.neutron.metadata_proxy_shared_secret="$(kubectl --namespace openstack get secret metadata-shared-secret -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.identity.auth.nova.password="$(kubectl --namespace openstack get secret nova-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.identity.auth.neutron.password="$(kubectl --namespace openstack get secret neutron-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.identity.auth.ironic.password="$(kubectl --namespace openstack get secret ironic-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.identity.auth.placement.password="$(kubectl --namespace openstack get secret placement-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.identity.auth.cinder.password="$(kubectl --namespace openstack get secret cinder-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ - --set endpoints.oslo_db.auth.nova.password="$(kubectl --namespace openstack get secret nova-db-password -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_db_api.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ - --set endpoints.oslo_db_api.auth.nova.password="$(kubectl --namespace openstack get secret nova-db-password -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_db_cell0.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ - --set endpoints.oslo_db_cell0.auth.nova.password="$(kubectl --namespace openstack get secret nova-db-password -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_messaging.auth.nova.password="$(kubectl --namespace openstack get secret nova-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \ - --post-renderer /opt/genestack/kustomize/kustomize.sh \ - --post-renderer-args nova/base -``` - -> In a production like environment you may need to include production specific files like the example variable file found in - `helm-configs/prod-example-openstack-overrides.yaml`. - -> NOTE: The above command is setting the ceph as disabled. While the K8S infrastructure has Ceph, - we're not exposing ceph to our openstack environment. - -If running in an environment that doesn't have hardware virtualization extensions add the following two `set` switches to the install command. - -``` shell ---set conf.nova.libvirt.virt_type=qemu --set conf.nova.libvirt.cpu_mode=none -``` - -> In a production like environment you may need to include production specific files like the example variable file found in - `helm-configs/prod-example-openstack-overrides.yaml`. 
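If you're unsure whether your compute hosts actually expose hardware virtualization extensions, and therefore whether the `virt_type=qemu` switches above apply to you, a quick host-side check such as the following can help. This is a generic Linux check, not anything Genestack-specific:

``` shell
# Count CPU flags indicating hardware virtualization support (Intel VT-x / AMD-V).
# A result of 0 suggests you need the qemu/cpu_mode overrides shown above.
grep -cE '(vmx|svm)' /proc/cpuinfo
```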
- -### Deploy Neutron - -``` shell -cd /opt/genestack/submodules/openstack-helm - -helm upgrade --install neutron ./neutron \ - --namespace=openstack \ - --timeout 120m \ - -f /opt/genestack/helm-configs/neutron/neutron-helm-overrides.yaml \ - --set conf.metadata_agent.DEFAULT.metadata_proxy_shared_secret="$(kubectl --namespace openstack get secret metadata-shared-secret -o jsonpath='{.data.password}' | base64 -d)" \ - --set conf.ovn_metadata_agent.DEFAULT.metadata_proxy_shared_secret="$(kubectl --namespace openstack get secret metadata-shared-secret -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.identity.auth.neutron.password="$(kubectl --namespace openstack get secret neutron-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.identity.auth.nova.password="$(kubectl --namespace openstack get secret nova-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.identity.auth.placement.password="$(kubectl --namespace openstack get secret placement-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.identity.auth.designate.password="$(kubectl --namespace openstack get secret designate-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.identity.auth.ironic.password="$(kubectl --namespace openstack get secret ironic-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ - --set endpoints.oslo_db.auth.neutron.password="$(kubectl --namespace openstack get secret neutron-db-password -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_messaging.auth.neutron.password="$(kubectl --namespace openstack get secret neutron-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \ - --set conf.neutron.ovn.ovn_nb_connection="tcp:$(kubectl --namespace kube-system get service ovn-nb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')" \ - --set conf.neutron.ovn.ovn_sb_connection="tcp:$(kubectl --namespace kube-system get service ovn-sb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')" \ - --set conf.plugins.ml2_conf.ovn.ovn_nb_connection="tcp:$(kubectl --namespace kube-system get service ovn-nb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')" \ - --set conf.plugins.ml2_conf.ovn.ovn_sb_connection="tcp:$(kubectl --namespace kube-system get service ovn-sb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')" \ - --post-renderer /opt/genestack/kustomize/kustomize.sh \ - --post-renderer-args neutron/base -``` - -> In a production like environment you may need to include production specific files like the example variable file found in - `helm-configs/prod-example-openstack-overrides.yaml`. - -> The above command derives the OVN north/south bound database from our K8S environment. The insert `set` is making the assumption we're using **tcp** to connect. 
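If you want to see what those OVN connection strings resolve to before committing to the deployment, the embedded lookups can be run on their own; this simply repeats the `kubectl` queries used in the command above:

``` shell
# Print the OVN northbound/southbound endpoints the helm command will template in.
echo "tcp:$(kubectl --namespace kube-system get service ovn-nb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')"
echo "tcp:$(kubectl --namespace kube-system get service ovn-sb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')"
```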
- -## Deploy Octavia - -[![asciicast](https://asciinema.org/a/629814.svg)](https://asciinema.org/a/629814) - -### Create secrets - -``` shell -kubectl --namespace openstack \ - create secret generic octavia-rabbitmq-password \ - --type Opaque \ - --from-literal=username="octavia" \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" -kubectl --namespace openstack \ - create secret generic octavia-db-password \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -kubectl --namespace openstack \ - create secret generic octavia-admin \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -kubectl --namespace openstack \ - create secret generic octavia-certificates \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -``` - -### Run the package deployment - -``` shell -cd /opt/genestack/submodules/openstack-helm - -helm upgrade --install octavia ./octavia \ - --namespace=openstack \ - --wait \ - --timeout 120m \ - -f /opt/genestack/helm-configs/octavia/octavia-helm-overrides.yaml \ - --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.identity.auth.octavia.password="$(kubectl --namespace openstack get secret octavia-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ - --set endpoints.oslo_db.auth.octavia.password="$(kubectl --namespace openstack get secret octavia-db-password -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \ - --set endpoints.oslo_messaging.auth.octavia.password="$(kubectl --namespace openstack get secret octavia-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \ - --set conf.octavia.certificates.ca_private_key_passphrase="$(kubectl --namespace openstack get secret octavia-certificates -o jsonpath='{.data.password}' | base64 -d)" \ - --set conf.octavia.ovn.ovn_nb_connection="tcp:$(kubectl --namespace kube-system get service ovn-nb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')" \ - --set conf.octavia.ovn.ovn_sb_connection="tcp:$(kubectl --namespace kube-system get service ovn-sb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')" \ - --post-renderer /opt/genestack/kustomize/kustomize.sh \ - --post-renderer-args octavia/base -``` - -> In a production like environment you may need to include production specific files like the example variable file found in - `helm-configs/prod-example-openstack-overrides.yaml`. 
- -Now validate functionality - -``` shell - -``` - -## Deploy Horizon - -[![asciicast](https://asciinema.org/a/629815.svg)](https://asciinema.org/a/629815) - -### Create secrets - -``` shell -kubectl --namespace openstack \ - create secret generic horizon-secrete-key \ - --type Opaque \ - --from-literal=username="horizon" \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" -kubectl --namespace openstack \ - create secret generic horizon-db-password \ - --type Opaque \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -``` - -### Run the package deployment - -``` shell -cd /opt/genestack/submodules/openstack-helm - -helm upgrade --install horizon ./horizon \ - --namespace=openstack \ - --wait \ - --timeout 120m \ - -f /opt/genestack/helm-configs/horizon/horizon-helm-overrides.yaml \ - --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ - --set conf.horizon.local_settings.config.horizon_secret_key="$(kubectl --namespace openstack get secret horizon-secrete-key -o jsonpath='{.data.root-password}' | base64 -d)" \ - --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ - --set endpoints.oslo_db.auth.horizon.password="$(kubectl --namespace openstack get secret horizon-db-password -o jsonpath='{.data.password}' | base64 -d)" \ - --post-renderer /opt/genestack/kustomize/kustomize.sh \ - --post-renderer-args horizon/base -``` - -> In a production like environment you may need to include production specific files like the example variable file found in - `helm-configs/prod-example-openstack-overrides.yaml`. - -## Deploy Skyline - -[![asciicast](https://asciinema.org/a/629816.svg)](https://asciinema.org/a/629816) - -Skyline is an alternative Web UI for OpenStack. If you deploy horizon there's no need for Skyline. - -### Create secrets - -Skyline is a little different because there's no helm integration. Given this difference the deployment is far simpler, and all secrets can be managed in one object. - -``` shell -kubectl --namespace openstack \ - create secret generic skyline-apiserver-secrets \ - --type Opaque \ - --from-literal=service-username="skyline" \ - --from-literal=service-password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" \ - --from-literal=service-domain="service" \ - --from-literal=service-project="service" \ - --from-literal=service-project-domain="service" \ - --from-literal=db-endpoint="mariadb-galera-primary.openstack.svc.cluster.local" \ - --from-literal=db-name="skyline" \ - --from-literal=db-username="skyline" \ - --from-literal=db-password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" \ - --from-literal=secret-key="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" \ - --from-literal=keystone-endpoint="http://keystone-api.openstack.svc.cluster.local:5000" \ - --from-literal=default-region="RegionOne" -``` - -> Note all the configuration is in this one secret, so be sure to set your entries accordingly. - -### Run the deployment - -> [!TIP] -> Pause for a moment to consider if you will be wanting to access Skyline via your ingress controller over a specific FQDN. If so, modify `/opt/genestack/kustomize/skyline/fqdn/kustomization.yaml` to suit your needs then use `fqdn` below in lieu of `base`... 
- -``` shell -kubectl --namespace openstack apply -k /opt/genestack/kustomize/skyline/base -``` diff --git a/docs/build-local-images.md b/docs/build-local-images.md index 33fac891..6fb21bae 100644 --- a/docs/build-local-images.md +++ b/docs/build-local-images.md @@ -1,4 +1,6 @@ -## Optional - Building OVN with customer providers +# Building Custom Images + +## Octavia OVN with customer providers By default Octavia will run with Amphora, however, because we've OVN available to our environment we can also configure the OVN provider for use within the cluster. While the genestack defaults will include a container image that meets our needs, the following snippet will walk you through the manual build process making use of the internal kubernetes registry. diff --git a/docs/components.md b/docs/components.md index 7382540c..483f6a9e 100644 --- a/docs/components.md +++ b/docs/components.md @@ -1,5 +1,4 @@ - -## Included/Required Components +# Product Component Matrix The following components are part of the initial product release and largely deployed with Helm+Kustomize against the K8s API (v1.28 and up). diff --git a/docs/deploy-required-infrastructure.md b/docs/deploy-required-infrastructure.md deleted file mode 100644 index 09cb6db9..00000000 --- a/docs/deploy-required-infrastructure.md +++ /dev/null @@ -1,321 +0,0 @@ -# Infrastructure Deployment Demo - -[![asciicast](https://asciinema.org/a/629790.svg)](https://asciinema.org/a/629790) - -# Running the infrastructure deployment - -The infrastructure deployment can almost all be run in parallel. The above demo does everything serially to keep things consistent and easy to understand but if you just need to get things done, feel free to do it all at once. - -## Create our basic OpenStack namespace - -The following command will generate our OpenStack namespace and ensure we have everything needed to proceed with the deployment. - -``` shell -kubectl apply -k /opt/genestack/kustomize/openstack -``` - -## Deploy the MariaDB Operator and a Galera Cluster - -### Create secret - -``` shell -kubectl --namespace openstack \ - create secret generic mariadb \ - --type Opaque \ - --from-literal=root-password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" \ - --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" -``` - -### Deploy the mariadb operator - -If you've changed your k8s cluster name from the default cluster.local, edit `clusterName` in `/opt/genestack/kustomize/mariadb-operator/kustomization.yaml` prior to deploying the mariadb operator. - -``` shell -kubectl kustomize --enable-helm /opt/genestack/kustomize/mariadb-operator | kubectl --namespace mariadb-system apply --server-side --force-conflicts -f - -``` - -> The operator may take a minute to get ready, before deploying the Galera cluster, wait until the webhook is online. - -``` shell -kubectl --namespace mariadb-system get pods -w -``` - -### Deploy the MariaDB Cluster - -``` shell -kubectl --namespace openstack apply -k /opt/genestack/kustomize/mariadb-cluster/base -``` - -> NOTE MariaDB has a base configuration which is HA and production ready. If you're deploying on a small cluster the `aio` configuration may better suit the needs of the environment. - -### Verify readiness with the following command - -``` shell -kubectl --namespace openstack get mariadbs -w -``` - -## Deploy the RabbitMQ Operator and a RabbitMQ Cluster - -### Deploy the RabbitMQ operator. 
- -``` shell -kubectl apply -k /opt/genestack/kustomize/rabbitmq-operator -``` -> The operator may take a minute to get ready, before deploying the RabbitMQ cluster, wait until the operator pod is online. - -### Deploy the RabbitMQ topology operator. - -``` shell -kubectl apply -k /opt/genestack/kustomize/rabbitmq-topology-operator -``` - -### Deploy the RabbitMQ cluster. - -``` shell -kubectl apply -k /opt/genestack/kustomize/rabbitmq-cluster/base -``` - -> NOTE RabbitMQ has a base configuration which is HA and production ready. If you're deploying on a small cluster the `aio` configuration may better suit the needs of the environment. - -### Validate the status with the following - -``` shell -kubectl --namespace openstack get rabbitmqclusters.rabbitmq.com -w -``` - -## Deploy a Memcached - -### Deploy the Memcached Cluster - -``` shell -kubectl kustomize --enable-helm /opt/genestack/kustomize/memcached/base | kubectl apply --namespace openstack -f - -``` - -> NOTE Memcached has a base configuration which is HA and production ready. If you're deploying on a small cluster the `aio` configuration may better suit the needs of the environment. - -### Alternative - Deploy the Memcached Cluster With Monitoring Enabled - -``` shell -kubectl kustomize --enable-helm /opt/genestack/kustomize/memcached/base-monitoring | kubectl apply --namespace openstack -f - -``` - -> NOTE Memcached has a base-monitoring configuration which is HA and production ready that also includes a metrics exporter for prometheus metrics collection. If you'd like to have monitoring enabled for your memcached cluster ensure the prometheus operator is installed first ([Deploy Prometheus](prometheus.md)). - - -### Verify readiness with the following command. - -``` shell -kubectl --namespace openstack get horizontalpodautoscaler.autoscaling memcached -w -``` - -# Deploy the ingress controllers - -We need two different Ingress controllers, one in the `openstack` namespace, the other in the `ingress-nginx` namespace. The `openstack` controller is for east-west connectivity, the `ingress-nginx` controller is for north-south. - -### Deploy our ingress controller within the ingress-nginx Namespace - -``` shell -kubectl kustomize --enable-helm /opt/genestack/kustomize/ingress/external | kubectl apply --namespace ingress-nginx -f - -``` - -### Deploy our ingress controller within the OpenStack Namespace - -``` shell -kubectl kustomize --enable-helm /opt/genestack/kustomize/ingress/internal | kubectl apply --namespace openstack -f - -``` - -The openstack ingress controller uses the class name `nginx-openstack`. - -## Setup the MetalLB Loadbalancer - -The MetalLb loadbalancer can be setup by editing the following file `metallb-openstack-service-lb.yml`, You will need to add -your "external" VIP(s) to the loadbalancer so that they can be used within services. These IP addresses are unique and will -need to be customized to meet the needs of your environment. 
- -### Example LB manifest - -```yaml -metadata: - name: openstack-external - namespace: metallb-system -spec: - addresses: - - 10.74.8.99/32 # This is assumed to be the public LB vip address - autoAssign: false ---- -apiVersion: metallb.io/v1beta1 -kind: L2Advertisement -metadata: - name: openstack-external-advertisement - namespace: metallb-system -spec: - ipAddressPools: - - openstack-external - nodeSelectors: # Optional block to limit nodes for a given advertisement - - matchLabels: - kubernetes.io/hostname: controller01.sjc.ohthree.com - - matchLabels: - kubernetes.io/hostname: controller02.sjc.ohthree.com - - matchLabels: - kubernetes.io/hostname: controller03.sjc.ohthree.com - interfaces: # Optional block to limit ifaces used to advertise VIPs - - br-mgmt -``` - -``` shell -kubectl apply -f /opt/genestack/manifests/metallb/metallb-openstack-service-lb.yml -``` - -Assuming your ingress controller is all setup and your metallb loadbalancer is operational you can patch the ingress controller to expose your external VIP address. - -``` shell -kubectl --namespace openstack patch service ingress -p '{"metadata":{"annotations":{"metallb.universe.tf/allow-shared-ip": "openstack-external-svc", "metallb.universe.tf/address-pool": "openstack-external"}}}' -kubectl --namespace openstack patch service ingress -p '{"spec": {"type": "LoadBalancer"}}' -``` - -Once patched you can see that the controller is operational with your configured VIP address. - -``` shell -kubectl --namespace openstack get services ingress -``` - -## Deploy Libvirt - -The first part of the compute kit is Libvirt. - -``` shell -kubectl kustomize --enable-helm /opt/genestack/kustomize/libvirt | kubectl apply --namespace openstack -f - -``` - -Once deployed you can validate functionality on your compute hosts with `virsh` - -``` shell -root@openstack-flex-node-3:~# virsh -Welcome to virsh, the virtualization interactive terminal. - -Type: 'help' for help with commands - 'quit' to quit - -virsh # list - Id Name State --------------------- - -virsh # -``` - -## Deploy Open vSwitch OVN - -Note that we're not deploying Openvswitch, however, we are using it. The implementation on Genestack is assumed to be -done with Kubespray which deploys OVN as its networking solution. Because those components are handled by our infrastructure -there's nothing for us to manage / deploy in this environment. OpenStack will leverage OVN within Kubernetes following the -scaling/maintenance/management practices of kube-ovn. - -### Configure OVN for OpenStack - -Post deployment we need to setup neutron to work with our integrated OVN environment. To make that work we have to annotate or nodes. Within the following commands we'll use a lookup to label all of our nodes the same way, however, the power of this system is the ability to customize how our machines are labeled and therefore what type of hardware layout our machines will have. This gives us the ability to use different hardware in different machines, in different availability zones. While this example is simple your cloud deployment doesn't have to be. - -``` shell -export ALL_NODES=$(kubectl get nodes -l 'openstack-network-node=enabled' -o 'jsonpath={.items[*].metadata.name}') -``` - -> Set the annotations you need within your environment to meet the needs of your workloads on the hardware you have. - -#### Set `ovn.openstack.org/int_bridge` - -Set the name of the OVS integration bridge we'll use. 
In general, this should be **br-int**, and while this setting is implicitly configured we're explicitly defining what the bridge will be on these nodes. - -``` shell -kubectl annotate \ - nodes \ - ${ALL_NODES} \ - ovn.openstack.org/int_bridge='br-int' -``` - -#### Set `ovn.openstack.org/bridges` - -Set the name of the OVS bridges we'll use. These are the bridges you will use on your hosts within OVS. The option is a string and comma separated. You can define as many OVS type bridges you need or want for your environment. - -> NOTE The functional example here annotates all nodes; however, not all nodes have to have the same setup. - -``` shell -kubectl annotate \ - nodes \ - ${ALL_NODES} \ - ovn.openstack.org/bridges='br-ex' -``` - -#### Set `ovn.openstack.org/ports` - -Set the port mapping for OVS interfaces to a local physical interface on a given machine. This option uses a colon between the OVS bridge and the and the physical interface, `OVS_BRIDGE:PHYSICAL_INTERFACE_NAME`. Multiple bridge mappings can be defined by separating values with a comma. - -``` shell -kubectl annotate \ - nodes \ - ${ALL_NODES} \ - ovn.openstack.org/ports='br-ex:bond1' -``` - -#### Set `ovn.openstack.org/mappings` - -Set the Neutron bridge mapping. This maps the Neutron interfaces to the ovs bridge names. These are colon delimitated between `NEUTRON_INTERFACE:OVS_BRIDGE`. Multiple bridge mappings can be defined here and are separated by commas. - -> Neutron interfaces are string value and can be anything you want. The `NEUTRON_INTERFACE` value defined will be used when you create provider type networks after the cloud is online. - -``` shell -kubectl annotate \ - nodes \ - ${ALL_NODES} \ - ovn.openstack.org/mappings='physnet1:br-ex' -``` - -#### Set `ovn.openstack.org/availability_zones` - -Set the OVN availability zones which inturn creates neutron availability zones. Multiple network availability zones can be defined and are colon separated which allows us to define all of the availability zones a node will be able to provide for, `nova:az1:az2:az3`. - -``` shell -kubectl annotate \ - nodes \ - ${ALL_NODES} \ - ovn.openstack.org/availability_zones='nova' -``` - -> Any availability zone defined here should also be defined within your **neutron.conf**. The "nova" availability zone is an assumed defined, however, because we're running in a mixed OVN environment, we should define where we're allowed to execute OpenStack workloads. - -#### Set `ovn.openstack.org/gateway` - -Define where the gateways nodes will reside. There are many ways to run this, some like every compute node to be a gateway, some like dedicated gateway hardware. Either way you will need at least one gateway node within your environment. - -``` shell -kubectl annotate \ - nodes \ - ${ALL_NODES} \ - ovn.openstack.org/gateway='enabled' -``` - -### Run the OVN integration - -With all of the annotations defined, we can now apply the network policy with the following command. - -``` shell -kubectl apply -k /opt/genestack/kustomize/ovn -``` - -After running the setup, nodes will have the label `ovn.openstack.org/configured` with a date stamp when it was configured. -If there's ever a need to reconfigure a node, simply remove the label and the DaemonSet will take care of it automatically. - -## Validation our infrastructure is operational - -Before going any further make sure you validate that the backends are operational. 
- -``` shell -# MariaDB -kubectl --namespace openstack get mariadbs - -#RabbitMQ -kubectl --namespace openstack get rabbitmqclusters.rabbitmq.com - -# Memcached -kubectl --namespace openstack get horizontalpodautoscaler.autoscaling memcached -``` - -Once everything is Ready and online. Continue with the installation. diff --git a/docs/extra-osie.md b/docs/extra-osie.md new file mode 100644 index 00000000..31eb2a17 --- /dev/null +++ b/docs/extra-osie.md @@ -0,0 +1,10 @@ +# OSIE Deployment + +``` shell +helm upgrade --install osie osie/osie \ + --namespace=osie \ + --create-namespace \ + --wait \ + --timeout 120m \ + -f /opt/genestack/helm-configs/osie/osie-helm-overrides.yaml +``` diff --git a/docs/getting-started.md b/docs/genestack-getting-started.md similarity index 84% rename from docs/getting-started.md rename to docs/genestack-getting-started.md index a702439d..ef85fed8 100644 --- a/docs/getting-started.md +++ b/docs/genestack-getting-started.md @@ -1,8 +1,4 @@ -# Welcome to the Genestack Wiki - -Welcome to the Genestack wiki! The following documents will breakdown a full end-to-end deployment and highlight how we can run a hybrid cloud environment, simply. - -## Getting Started +# Getting Started Before you can do anything we need to get the code. Because we've sold our soul to the submodule devil, you're going to need to recursively clone the repo into your location. diff --git a/docs/index.md b/docs/index.md index 9643bd3b..60241d01 100644 --- a/docs/index.md +++ b/docs/index.md @@ -14,41 +14,3 @@ to manage cloud infrastructure in the way you need it. They say a picture is worth 1000 words, so here's a picture. ![Genestack Architecture Diagram](assets/images/diagram-genestack.png) - ---- - -Building our cloud future has never been this simple. - -## 0.Getting Started - * [Getting Started](getting-started.md) - * [Building Virtual Environments for Testing](build-test-envs.md) - -## 1.Kubernetes - * [Building Your Kubernetes Environment](build-k8s.md) - * [Retrieve kube config](kube-config.md) - -## 2.Storage - * [Create Persistent Storage](Create-Persistent-Storage.md) - -## 3.Infrastructure - * [Deploy Required Infrastructure](deploy-required-infrastructure.md) - * [Deploy Prometheus](prometheus.md) - * [Deploy Vault](vault.md) - -## 4.Openstack Infrastructure - * [Deploy Openstack on k8s](Deploy-Openstack.md) - -## Post Deployment - * [Post Deploy Operations](post-deploy-ops.md) - * [Building Local Images](build-local-images.md) - * [OVN Database Backup](ovn-db-backup.md) - -## Upgrades - * [Running Genestack Upgrade](genestack-upgrade.md) - * [Running Kubernetes Upgrade](k8s-upgrade.md) - -## Monitoring - * [Deploy Prometheus](prometheus.md) - * [MySQL Exporter](prometheus-mysql-exporter.md) - * [RabbitMQ Exporter](prometheus-rabbitmq-exporter.md) - * [Openstack Exporter](prometheus-openstack-metrics-exporter.md) diff --git a/docs/infrastructure-ingress.md b/docs/infrastructure-ingress.md new file mode 100644 index 00000000..81a7e01c --- /dev/null +++ b/docs/infrastructure-ingress.md @@ -0,0 +1,17 @@ +# Deploy the ingress controllers + +We need two different Ingress controllers, one in the `openstack` namespace, the other in the `ingress-nginx` namespace. The `openstack` controller is for east-west connectivity, the `ingress-nginx` controller is for north-south. 
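+
+Once both controllers are deployed with the commands below, a quick way to confirm they registered correctly is to list the ingress classes. This is a simple sketch; the external controller is assumed to use the ingress-nginx default class name (`nginx`), while the internal one uses `nginx-openstack` as noted at the end of this page.
+
+``` shell
+# List the ingress classes registered by the two controllers
+kubectl get ingressclasses.networking.k8s.io
+```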
+ +### Deploy our ingress controller within the ingress-nginx Namespace + +``` shell +kubectl kustomize --enable-helm /opt/genestack/kustomize/ingress/external | kubectl apply --namespace ingress-nginx -f - +``` + +### Deploy our ingress controller within the OpenStack Namespace + +``` shell +kubectl kustomize --enable-helm /opt/genestack/kustomize/ingress/internal | kubectl apply --namespace openstack -f - +``` + +The openstack ingress controller uses the class name `nginx-openstack`. diff --git a/docs/infrastructure-libvirt.md b/docs/infrastructure-libvirt.md new file mode 100644 index 00000000..d3485f67 --- /dev/null +++ b/docs/infrastructure-libvirt.md @@ -0,0 +1,23 @@ +# Deploy Libvirt + +The first part of the compute kit is Libvirt. + +``` shell +kubectl kustomize --enable-helm /opt/genestack/kustomize/libvirt | kubectl apply --namespace openstack -f - +``` + +Once deployed you can validate functionality on your compute hosts with `virsh` + +``` shell +root@openstack-flex-node-3:~# virsh +Welcome to virsh, the virtualization interactive terminal. + +Type: 'help' for help with commands + 'quit' to quit + +virsh # list + Id Name State +-------------------- + +virsh # +``` diff --git a/docs/infrastructure-mariadb-connect.md b/docs/infrastructure-mariadb-connect.md new file mode 100644 index 00000000..a7af0f16 --- /dev/null +++ b/docs/infrastructure-mariadb-connect.md @@ -0,0 +1,11 @@ +# Connect to the database + +Sometimes an operator may need to connect to the database to troubleshoot things or otherwise make modifications to the databases in place. The following command can be used to connect to the database from a node within the cluster. + +``` shell +mysql -h $(kubectl -n openstack get service mariadb-galera-primary -o jsonpath='{.spec.clusterIP}') \ + -p$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d) \ + -u root +``` + +> The following command will leverage your kube configuration and dynamically source the needed information to connect to the MySQL cluster. You will need to ensure you have installed the mysql client tools on the system you're attempting to connect from. diff --git a/docs/infrastructure-mariadb.md b/docs/infrastructure-mariadb.md new file mode 100644 index 00000000..9af335ea --- /dev/null +++ b/docs/infrastructure-mariadb.md @@ -0,0 +1,39 @@ +# Deploy the MariaDB Operator and a Galera Cluster + +## Create secret + +``` shell +kubectl --namespace openstack \ + create secret generic mariadb \ + --type Opaque \ + --from-literal=root-password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +``` + +## Deploy the mariadb operator + +If you've changed your k8s cluster name from the default cluster.local, edit `clusterName` in `/opt/genestack/kustomize/mariadb-operator/kustomization.yaml` prior to deploying the mariadb operator. + +``` shell +kubectl kustomize --enable-helm /opt/genestack/kustomize/mariadb-operator | kubectl --namespace mariadb-system apply --server-side --force-conflicts -f - +``` + +> The operator may take a minute to get ready, before deploying the Galera cluster, wait until the webhook is online. + +``` shell +kubectl --namespace mariadb-system get pods -w +``` + +## Deploy the MariaDB Cluster + +``` shell +kubectl --namespace openstack apply -k /opt/genestack/kustomize/mariadb-cluster/base +``` + +> NOTE MariaDB has a base configuration which is HA and production ready. 
If you're deploying on a small cluster the `aio` configuration may better suit the needs of the environment. + +## Verify readiness with the following command + +``` shell +kubectl --namespace openstack get mariadbs -w +``` diff --git a/docs/infrastructure-memcached.md b/docs/infrastructure-memcached.md new file mode 100644 index 00000000..cc2fede4 --- /dev/null +++ b/docs/infrastructure-memcached.md @@ -0,0 +1,23 @@ +# Deploy a Memcached + +## Deploy the Memcached Cluster + +``` shell +kubectl kustomize --enable-helm /opt/genestack/kustomize/memcached/base | kubectl apply --namespace openstack -f - +``` + +> NOTE Memcached has a base configuration which is HA and production ready. If you're deploying on a small cluster the `aio` configuration may better suit the needs of the environment. + +### Alternative - Deploy the Memcached Cluster With Monitoring Enabled + +``` shell +kubectl kustomize --enable-helm /opt/genestack/kustomize/memcached/base-monitoring | kubectl apply --namespace openstack -f - +``` + +> NOTE Memcached has a base-monitoring configuration which is HA and production ready that also includes a metrics exporter for prometheus metrics collection. If you'd like to have monitoring enabled for your memcached cluster ensure the prometheus operator is installed first ([Deploy Prometheus](prometheus.md)). + +## Verify readiness with the following command. + +``` shell +kubectl --namespace openstack get horizontalpodautoscaler.autoscaling memcached -w +``` diff --git a/docs/infrastructure-metallb.md b/docs/infrastructure-metallb.md new file mode 100644 index 00000000..19dc5e30 --- /dev/null +++ b/docs/infrastructure-metallb.md @@ -0,0 +1,53 @@ + +# Setup the MetalLB Loadbalancer + +The MetalLb loadbalancer can be setup by editing the following file `metallb-openstack-service-lb.yml`, You will need to add +your "external" VIP(s) to the loadbalancer so that they can be used within services. These IP addresses are unique and will +need to be customized to meet the needs of your environment. + +## Example LB manifest + +```yaml +metadata: + name: openstack-external + namespace: metallb-system +spec: + addresses: + - 10.74.8.99/32 # This is assumed to be the public LB vip address + autoAssign: false +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: openstack-external-advertisement + namespace: metallb-system +spec: + ipAddressPools: + - openstack-external + nodeSelectors: # Optional block to limit nodes for a given advertisement + - matchLabels: + kubernetes.io/hostname: controller01.sjc.ohthree.com + - matchLabels: + kubernetes.io/hostname: controller02.sjc.ohthree.com + - matchLabels: + kubernetes.io/hostname: controller03.sjc.ohthree.com + interfaces: # Optional block to limit ifaces used to advertise VIPs + - br-mgmt +``` + +``` shell +kubectl apply -f /opt/genestack/manifests/metallb/metallb-openstack-service-lb.yml +``` + +Assuming your ingress controller is all setup and your metallb loadbalancer is operational you can patch the ingress controller to expose your external VIP address. + +``` shell +kubectl --namespace openstack patch service ingress -p '{"metadata":{"annotations":{"metallb.universe.tf/allow-shared-ip": "openstack-external-svc", "metallb.universe.tf/address-pool": "openstack-external"}}}' +kubectl --namespace openstack patch service ingress -p '{"spec": {"type": "LoadBalancer"}}' +``` + +Once patched you can see that the controller is operational with your configured VIP address. 
+ +``` shell +kubectl --namespace openstack get services ingress +``` diff --git a/docs/infrastructure-namespace.md b/docs/infrastructure-namespace.md new file mode 100644 index 00000000..21b5eea4 --- /dev/null +++ b/docs/infrastructure-namespace.md @@ -0,0 +1,7 @@ +# Create our basic OpenStack namespace + +The following command will generate our OpenStack namespace and ensure we have everything needed to proceed with the deployment. + +``` shell +kubectl apply -k /opt/genestack/kustomize/openstack +``` diff --git a/docs/infrastructure-overview.md b/docs/infrastructure-overview.md new file mode 100644 index 00000000..29f6e833 --- /dev/null +++ b/docs/infrastructure-overview.md @@ -0,0 +1,15 @@ +# Infrastructure Deployment Demo + +[![asciicast](https://asciinema.org/a/629790.svg)](https://asciinema.org/a/629790) + +# Running the infrastructure deployment + +The infrastructure deployment can almost all be run in parallel. The above demo does everything serially to keep things consistent and easy to understand but if you just need to get things done, feel free to do it all at once. + + + + + + + + diff --git a/docs/ovn-db-backup.md b/docs/infrastructure-ovn-db-backup.md similarity index 100% rename from docs/ovn-db-backup.md rename to docs/infrastructure-ovn-db-backup.md diff --git a/docs/infrastructure-ovn-setup.md b/docs/infrastructure-ovn-setup.md new file mode 100644 index 00000000..b4ce0c29 --- /dev/null +++ b/docs/infrastructure-ovn-setup.md @@ -0,0 +1,109 @@ +# Configure OVN for OpenStack + +Post deployment we need to setup neutron to work with our integrated OVN environment. To make that work we have to annotate or nodes. Within the following commands we'll use a lookup to label all of our nodes the same way, however, the power of this system is the ability to customize how our machines are labeled and therefore what type of hardware layout our machines will have. This gives us the ability to use different hardware in different machines, in different availability zones. While this example is simple your cloud deployment doesn't have to be. + +``` shell +export ALL_NODES=$(kubectl get nodes -l 'openstack-network-node=enabled' -o 'jsonpath={.items[*].metadata.name}') +``` + +> Set the annotations you need within your environment to meet the needs of your workloads on the hardware you have. + +### Set `ovn.openstack.org/int_bridge` + +Set the name of the OVS integration bridge we'll use. In general, this should be **br-int**, and while this setting is implicitly configured we're explicitly defining what the bridge will be on these nodes. + +``` shell +kubectl annotate \ + nodes \ + ${ALL_NODES} \ + ovn.openstack.org/int_bridge='br-int' +``` + +### Set `ovn.openstack.org/bridges` + +Set the name of the OVS bridges we'll use. These are the bridges you will use on your hosts within OVS. The option is a string and comma separated. You can define as many OVS type bridges you need or want for your environment. + +> NOTE The functional example here annotates all nodes; however, not all nodes have to have the same setup. + +``` shell +kubectl annotate \ + nodes \ + ${ALL_NODES} \ + ovn.openstack.org/bridges='br-ex' +``` + +### Set `ovn.openstack.org/ports` + +Set the port mapping for OVS interfaces to a local physical interface on a given machine. This option uses a colon between the OVS bridge and the and the physical interface, `OVS_BRIDGE:PHYSICAL_INTERFACE_NAME`. Multiple bridge mappings can be defined by separating values with a comma. 
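+
+As a sketch of the multi-mapping form, a node with a second, hypothetical `br-vlan` bridge attached to `bond2` could carry both mappings in one annotation (add `--overwrite` if the key has already been set). The command that follows shows the single-mapping case used throughout this guide.
+
+``` shell
+# Hypothetical example: two OVS bridges mapped to two physical interfaces
+kubectl annotate \
+    --overwrite \
+    nodes \
+    ${ALL_NODES} \
+    ovn.openstack.org/ports='br-ex:bond1,br-vlan:bond2'
+```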
+ +``` shell +kubectl annotate \ + nodes \ + ${ALL_NODES} \ + ovn.openstack.org/ports='br-ex:bond1' +``` + +### Set `ovn.openstack.org/mappings` + +Set the Neutron bridge mapping. This maps the Neutron interfaces to the ovs bridge names. These are colon delimitated between `NEUTRON_INTERFACE:OVS_BRIDGE`. Multiple bridge mappings can be defined here and are separated by commas. + +> Neutron interfaces are string value and can be anything you want. The `NEUTRON_INTERFACE` value defined will be used when you create provider type networks after the cloud is online. + +``` shell +kubectl annotate \ + nodes \ + ${ALL_NODES} \ + ovn.openstack.org/mappings='physnet1:br-ex' +``` + +### Set `ovn.openstack.org/availability_zones` + +Set the OVN availability zones which inturn creates neutron availability zones. Multiple network availability zones can be defined and are colon separated which allows us to define all of the availability zones a node will be able to provide for, `nova:az1:az2:az3`. + +``` shell +kubectl annotate \ + nodes \ + ${ALL_NODES} \ + ovn.openstack.org/availability_zones='nova' +``` + +> Any availability zone defined here should also be defined within your **neutron.conf**. The "nova" availability zone is an assumed defined, however, because we're running in a mixed OVN environment, we should define where we're allowed to execute OpenStack workloads. + +### Set `ovn.openstack.org/gateway` + +Define where the gateways nodes will reside. There are many ways to run this, some like every compute node to be a gateway, some like dedicated gateway hardware. Either way you will need at least one gateway node within your environment. + +``` shell +kubectl annotate \ + nodes \ + ${ALL_NODES} \ + ovn.openstack.org/gateway='enabled' +``` + +## Run the OVN integration + +With all of the annotations defined, we can now apply the network policy with the following command. + +``` shell +kubectl apply -k /opt/genestack/kustomize/ovn +``` + +After running the setup, nodes will have the label `ovn.openstack.org/configured` with a date stamp when it was configured. +If there's ever a need to reconfigure a node, simply remove the label and the DaemonSet will take care of it automatically. + +## Validation our infrastructure is operational + +Before going any further make sure you validate that the backends are operational. + +``` shell +# MariaDB +kubectl --namespace openstack get mariadbs + +#RabbitMQ +kubectl --namespace openstack get rabbitmqclusters.rabbitmq.com + +# Memcached +kubectl --namespace openstack get horizontalpodautoscaler.autoscaling memcached +``` + +Once everything is Ready and online. Continue with the installation. diff --git a/docs/infrastructure-ovn.md b/docs/infrastructure-ovn.md new file mode 100644 index 00000000..1ddb0d91 --- /dev/null +++ b/docs/infrastructure-ovn.md @@ -0,0 +1,6 @@ +# Deploy Open vSwitch OVN + +Note that we're not deploying Openvswitch, however, we are using it. The implementation on Genestack is assumed to be +done with Kubespray which deploys OVN as its networking solution. Because those components are handled by our infrastructure +there's nothing for us to manage / deploy in this environment. OpenStack will leverage OVN within Kubernetes following the +scaling/maintenance/management practices of kube-ovn. 
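+
+Before moving on, it can be useful to confirm the Kubespray-managed kube-ovn components are actually running. This is only a rough sanity check; pod names and labels vary between kube-ovn releases, so a simple name match is used here.
+
+``` shell
+# Confirm the kube-ovn / OVS pods deployed by Kubespray are up
+kubectl --namespace kube-system get pods | grep -i -E 'ovn|ovs'
+```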
\ No newline at end of file diff --git a/docs/infrastructure-rabbitmq.md b/docs/infrastructure-rabbitmq.md new file mode 100644 index 00000000..5ef40d42 --- /dev/null +++ b/docs/infrastructure-rabbitmq.md @@ -0,0 +1,28 @@ +# Deploy the RabbitMQ Operator and a RabbitMQ Cluster + +## Deploy the RabbitMQ operator. + +``` shell +kubectl apply -k /opt/genestack/kustomize/rabbitmq-operator +``` +> The operator may take a minute to get ready, before deploying the RabbitMQ cluster, wait until the operator pod is online. + +## Deploy the RabbitMQ topology operator. + +``` shell +kubectl apply -k /opt/genestack/kustomize/rabbitmq-topology-operator +``` + +## Deploy the RabbitMQ cluster. + +``` shell +kubectl apply -k /opt/genestack/kustomize/rabbitmq-cluster/base +``` + +> NOTE RabbitMQ has a base configuration which is HA and production ready. If you're deploying on a small cluster the `aio` configuration may better suit the needs of the environment. + +## Validate the status with the following + +``` shell +kubectl --namespace openstack get rabbitmqclusters.rabbitmq.com -w +``` diff --git a/docs/kube-config.md b/docs/k8s-config.md similarity index 100% rename from docs/kube-config.md rename to docs/k8s-config.md diff --git a/docs/k8s-upgrade.md b/docs/k8s-kubespray-upgrade.md similarity index 100% rename from docs/k8s-upgrade.md rename to docs/k8s-kubespray-upgrade.md diff --git a/docs/build-k8s.md b/docs/k8s-kubespray.md similarity index 61% rename from docs/build-k8s.md rename to docs/k8s-kubespray.md index 313f1f56..495894f0 100644 --- a/docs/build-k8s.md +++ b/docs/k8s-kubespray.md @@ -1,20 +1,4 @@ -# Kubernetes Deployment Demo - -[![asciicast](https://asciinema.org/a/629780.svg)](https://asciinema.org/a/629780) - -# Run The Genestack Kubernetes Deployment - -Genestack assumes Kubernetes is present and available to run workloads on. We don't really care how your Kubernetes was deployed or what flavor of Kubernetes you're running. -For our purposes we're using Kubespray, but you do you. We just need the following systems in your environment. - -* Kube-OVN -* Persistent Storage -* MetalLB -* Ingress Controller - -If you have those three things in your environment, you should be fully compatible with Genestack. - -## Deployment Kubespray +# Deployment Kubespray Currently only the k8s provider kubespray is supported and included as submodule into the code base. @@ -27,10 +11,6 @@ Kubespray will be using OVN for all of the network functions, as such, you will While the Kubespray tooling will do a lot of prep and setup work to ensure success, you will need to prepare your networking infrastructure and basic storage layout before running the playbooks. -### SSH Config - -The deploy has created a openstack-flex-keypair.config copy this into the config file in .ssh, if one is not there create it. - #### Minimum system requirements * 2 Network Interfaces @@ -147,86 +127,3 @@ ansible-playbook --inventory /etc/genestack/inventory/openstack-flex-inventory.y > Given the use of a venv, when running with `sudo` be sure to use the full path and pass through your environment variables; `sudo -E /home/ubuntu/.venvs/genestack/bin/ansible-playbook`. Once the cluster is online, you can run `kubectl` to interact with the environment. 
- -### Retrieve Kube Config - -The instructions can be found here [Kube Config](https://rackerlabs.github.io/genestack/kube-config/) - - -### Remove taint from our Controllers - -In an environment with a limited set of control plane nodes removing the NoSchedule will allow you to converge the -openstack controllers with the k8s controllers. - -``` shell -# Remote taint from control-plane nodes -kubectl taint nodes $(kubectl get nodes -o 'jsonpath={.items[*].metadata.name}') node-role.kubernetes.io/control-plane:NoSchedule- -``` - -### Optional - Deploy K8S Dashboard RBAC - -While the dashboard is installed you will have no ability to access it until we setup some basic RBAC. - -``` shell -kubectl apply -k /opt/genestack/kustomize/k8s-dashboard -``` - -You can now retrieve a permanent token. - -``` shell -kubectl get secret admin-user -n kube-system -o jsonpath={".data.token"} | base64 -d -``` - - -## Label all of the nodes in the environment - -> The following example assumes the node names can be used to identify their purpose within our environment. That - may not be the case in reality. Adapt the following commands to meet your needs. - -``` shell -# Label the storage nodes - optional and only used when deploying ceph for K8S infrastructure shared storage -kubectl label node $(kubectl get nodes | awk '/ceph/ {print $1}') role=storage-node - -# Label the openstack controllers -kubectl label node $(kubectl get nodes | awk '/controller/ {print $1}') openstack-control-plane=enabled - -# Label the openstack compute nodes -kubectl label node $(kubectl get nodes | awk '/compute/ {print $1}') openstack-compute-node=enabled - -# Label the openstack network nodes -kubectl label node $(kubectl get nodes | awk '/network/ {print $1}') openstack-network-node=enabled - -# Label the openstack storage nodes -kubectl label node $(kubectl get nodes | awk '/storage/ {print $1}') openstack-storage-node=enabled - -# With OVN we need the compute nodes to be "network" nodes as well. While they will be configured for networking, they wont be gateways. -kubectl label node $(kubectl get nodes | awk '/compute/ {print $1}') openstack-network-node=enabled - -# Label all workers - Recommended and used when deploying Kubernetes specific services -kubectl label node $(kubectl get nodes | awk '/worker/ {print $1}') node-role.kubernetes.io/worker=worker -``` - -Check the node labels - -``` shell -# Verify the nodes are operational and labled. -kubectl get nodes -o wide --show-labels=true -``` -``` shell -# Here is a way to make it look a little nicer: -kubectl get nodes -o json | jq '[.items[] | {"NAME": .metadata.name, "LABELS": .metadata.labels}]' -``` - -## Install Helm - -While `helm` should already be installed with the **host-setup** playbook, you will need to install helm manually on nodes. There are lots of ways to install helm, check the upstream [docs](https://helm.sh/docs/intro/install/) to learn more about installing helm. - -### Run `make` for our helm components - -``` shell -cd /opt/genestack/submodules/openstack-helm && -make all - -cd /opt/genestack/submodules/openstack-helm-infra && -make all -``` diff --git a/docs/k8s-overview.md b/docs/k8s-overview.md new file mode 100644 index 00000000..6d1fc6f3 --- /dev/null +++ b/docs/k8s-overview.md @@ -0,0 +1,13 @@ +# Run The Genestack Kubernetes Deployment + +[![asciicast](https://asciinema.org/a/629780.svg)](https://asciinema.org/a/629780) + +Genestack assumes Kubernetes is present and available to run workloads on. 
We don't really care how your Kubernetes was deployed or what flavor of Kubernetes you're running. +For our purposes we're using Kubespray, but you do you. We just need the following systems in your environment. + +* Kube-OVN +* Persistent Storage +* MetalLB +* Ingress Controller + +If you have those three things in your environment, you should be fully compatible with Genestack. diff --git a/docs/k8s-postdeploy.md b/docs/k8s-postdeploy.md new file mode 100644 index 00000000..0cadaecf --- /dev/null +++ b/docs/k8s-postdeploy.md @@ -0,0 +1,65 @@ +# Post deployment Operations + +## Remove taint from our Controllers + +In an environment with a limited set of control plane nodes removing the NoSchedule will allow you to converge the +openstack controllers with the k8s controllers. + +``` shell +# Remote taint from control-plane nodes +kubectl taint nodes $(kubectl get nodes -o 'jsonpath={.items[*].metadata.name}') node-role.kubernetes.io/control-plane:NoSchedule- +``` + +## Optional - Deploy K8S Dashboard RBAC + +While the dashboard is installed you will have no ability to access it until we setup some basic RBAC. + +``` shell +kubectl apply -k /opt/genestack/kustomize/k8s-dashboard +``` + +You can now retrieve a permanent token. + +``` shell +kubectl get secret admin-user -n kube-system -o jsonpath={".data.token"} | base64 -d +``` + +## Label all of the nodes in the environment + +> The following example assumes the node names can be used to identify their purpose within our environment. That + may not be the case in reality. Adapt the following commands to meet your needs. + +``` shell +# Label the storage nodes - optional and only used when deploying ceph for K8S infrastructure shared storage +kubectl label node $(kubectl get nodes | awk '/ceph/ {print $1}') role=storage-node + +# Label the openstack controllers +kubectl label node $(kubectl get nodes | awk '/controller/ {print $1}') openstack-control-plane=enabled + +# Label the openstack compute nodes +kubectl label node $(kubectl get nodes | awk '/compute/ {print $1}') openstack-compute-node=enabled + +# Label the openstack network nodes +kubectl label node $(kubectl get nodes | awk '/network/ {print $1}') openstack-network-node=enabled + +# Label the openstack storage nodes +kubectl label node $(kubectl get nodes | awk '/storage/ {print $1}') openstack-storage-node=enabled + +# With OVN we need the compute nodes to be "network" nodes as well. While they will be configured for networking, they wont be gateways. +kubectl label node $(kubectl get nodes | awk '/compute/ {print $1}') openstack-network-node=enabled + +# Label all workers - Recommended and used when deploying Kubernetes specific services +kubectl label node $(kubectl get nodes | awk '/worker/ {print $1}') node-role.kubernetes.io/worker=worker +``` + +Check the node labels + +``` shell +# Verify the nodes are operational and labled. 
+kubectl get nodes -o wide --show-labels=true +``` + +``` shell +# Here is a way to make it look a little nicer: +kubectl get nodes -o json | jq '[.items[] | {"NAME": .metadata.name, "LABELS": .metadata.labels}]' +``` diff --git a/docs/openstack-cinder.md b/docs/openstack-cinder.md new file mode 100644 index 00000000..360c8ee1 --- /dev/null +++ b/docs/openstack-cinder.md @@ -0,0 +1,198 @@ +# Deploy Cinder + +[![asciicast](https://asciinema.org/a/629808.svg)](https://asciinema.org/a/629808) + +## Create secrets + +``` shell +kubectl --namespace openstack \ + create secret generic cinder-rabbitmq-password \ + --type Opaque \ + --from-literal=username="cinder" \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" +kubectl --namespace openstack \ + create secret generic cinder-db-password \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic cinder-admin \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +``` + +## Run the package deployment + +``` shell +cd /opt/genestack/submodules/openstack-helm + +helm upgrade --install cinder ./cinder \ + --namespace=openstack \ + --wait \ + --timeout 120m \ + -f /opt/genestack/helm-configs/cinder/cinder-helm-overrides.yaml \ + --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.cinder.password="$(kubectl --namespace openstack get secret cinder-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.cinder.password="$(kubectl --namespace openstack get secret cinder-db-password -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_messaging.auth.cinder.password="$(kubectl --namespace openstack get secret cinder-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \ + --post-renderer /opt/genestack/kustomize/kustomize.sh \ + --post-renderer-args cinder/base +``` + +> In a production like environment you may need to include production specific files like the example variable file found in + `helm-configs/prod-example-openstack-overrides.yaml`. + +Once the helm deployment is complete cinder and all of it's API services will be online. However, using this setup there will be +no volume node at this point. The reason volume deployments have been disabled is because we didn't expose ceph to the openstack +environment and OSH makes a lot of ceph related assumptions. For testing purposes we're wanting to run with the logical volume +driver (reference) and manage the deployment of that driver in a hybrid way. As such there's a deployment outside of our normal +K8S workflow will be needed on our volume host. + +> The LVM volume makes the assumption that the storage node has the required volume group setup `lvmdriver-1` on the node + This is not something that K8S is handling at this time. + +While cinder can run with a great many different storage backends, for the simple case we want to run with the Cinder reference +driver, which makes use of Logical Volumes. 
Because this driver is incompatible with a containerized work environment, we need +to run the services on our baremetal targets. Genestack has a playbook which will facilitate the installation of our services +and ensure that we've deployed everything in a working order. The playbook can be found at `playbooks/deploy-cinder-volumes-reference.yaml`. +Included in the playbooks directory is an example inventory for our cinder hosts; however, any inventory should work fine. + +### Host Setup + +The cinder target hosts need to have some basic setup run on them to make them compatible with our Logical Volume Driver. + +1. Ensure DNS is working normally. + +Assuming your storage node was also deployed as a K8S node when we did our initial Kubernetes deployment, the DNS should already be +operational for you; however, in the event you need to do some manual tweaking or if the node was note deployed as a K8S worker, then +make sure you setup the DNS resolvers correctly so that your volume service node can communicate with our cluster. + +> This is expected to be our CoreDNS IP, in my case this is `169.254.25.10`. + +This is an example of my **systemd-resolved** conf found in `/etc/systemd/resolved.conf` +``` conf +[Resolve] +DNS=169.254.25.10 +#FallbackDNS= +Domains=openstack.svc.cluster.local svc.cluster.local cluster.local +#LLMNR=no +#MulticastDNS=no +DNSSEC=no +Cache=no-negative +#DNSStubListener=yes +``` + +Restart your DNS service after changes are made. + +``` shell +systemctl restart systemd-resolved.service +``` + +2. Volume Group `cinder-volumes-1` needs to be created, which can be done in two simple commands. + +Create the physical volume + +``` shell +pvcreate /dev/vdf +``` + +Create the volume group + +``` shell +vgcreate cinder-volumes-1 /dev/vdf +``` + +It should be noted that this setup can be tweaked and tuned to your heart's desire; additionally, you can further extend a +volume group with multiple disks. The example above is just that, an example. Check out more from the upstream docs on how +to best operate your volume groups for your specific needs. + +### Hybrid Cinder Volume deployment + +With the volume groups and DNS setup on your target hosts, it is now time to deploy the volume services. The playbook `playbooks/deploy-cinder-volumes-reference.yaml` will be used to create a release target for our python code-base and deploy systemd services +units to run the cinder-volume process. + +> [!IMPORTANT] +> Consider the **storage** network on your Cinder hosts that will be accessible to Nova compute hosts. By default, the playbook uses `ansible_default_ipv4.address` to configure the target address, which may or may not work for your environment. Append var, i.e., `-e cinder_storage_network_interface=ansible_br_mgmt` to use the specified iface address in `cinder.conf` for `my_ip` and `target_ip_address` in `cinder/backends.conf`. **Interface names with a `-` must be entered with a `_` and be prefixed with `ansible`** + +#### Example without storage network interface override + +``` shell +ansible-playbook -i inventory-example.yaml deploy-cinder-volumes-reference.yaml +``` + +Once the playbook has finished executing, check the cinder api to verify functionality. 
+ +``` shell +root@openstack-flex-node-0:~# kubectl --namespace openstack exec -ti openstack-admin-client -- openstack volume service list ++------------------+-------------------------------------------------+------+---------+-------+----------------------------+ +| Binary | Host | Zone | Status | State | Updated At | ++------------------+-------------------------------------------------+------+---------+-------+----------------------------+ +| cinder-scheduler | cinder-volume-worker | nova | enabled | up | 2023-12-26T17:43:07.000000 | +| cinder-volume | openstack-flex-node-4.cluster.local@lvmdriver-1 | nova | enabled | up | 2023-12-26T17:43:04.000000 | ++------------------+-------------------------------------------------+------+---------+-------+----------------------------+ +``` + +> Notice the volume service is up and running with our `lvmdriver-1` target. + +At this point it would be a good time to define your types within cinder. For our example purposes we need to define the `lvmdriver-1` +type so that we can schedule volumes to our environment. + +``` shell +root@openstack-flex-node-0:~# kubectl --namespace openstack exec -ti openstack-admin-client -- openstack volume type create lvmdriver-1 ++-------------+--------------------------------------+ +| Field | Value | ++-------------+--------------------------------------+ +| description | None | +| id | 6af6ade2-53ca-4260-8b79-1ba2f208c91d | +| is_public | True | +| name | lvmdriver-1 | ++-------------+--------------------------------------+ +``` + +### Validate functionality + +If wanted, create a test volume to tinker with + +``` shell +root@openstack-flex-node-0:~# kubectl --namespace openstack exec -ti openstack-admin-client -- openstack volume create --size 1 test ++---------------------+--------------------------------------+ +| Field | Value | ++---------------------+--------------------------------------+ +| attachments | [] | +| availability_zone | nova | +| bootable | false | +| consistencygroup_id | None | +| created_at | 2023-12-26T17:46:15.639697 | +| description | None | +| encrypted | False | +| id | c744af27-fb40-4ffa-8a84-b9f44cb19b2b | +| migration_status | None | +| multiattach | False | +| name | test | +| properties | | +| replication_status | None | +| size | 1 | +| snapshot_id | None | +| source_volid | None | +| status | creating | +| type | lvmdriver-1 | +| updated_at | None | +| user_id | 2ddf90575e1846368253474789964074 | ++---------------------+--------------------------------------+ + +root@openstack-flex-node-0:~# kubectl --namespace openstack exec -ti openstack-admin-client -- openstack volume list ++--------------------------------------+------+-----------+------+-------------+ +| ID | Name | Status | Size | Attached to | ++--------------------------------------+------+-----------+------+-------------+ +| c744af27-fb40-4ffa-8a84-b9f44cb19b2b | test | available | 1 | | ++--------------------------------------+------+-----------+------+-------------+ +``` + +You can validate the environment is operational by logging into the storage nodes to validate the LVM targets are being created. 
+ +``` shell +root@openstack-flex-node-4:~# lvs + LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert + c744af27-fb40-4ffa-8a84-b9f44cb19b2b cinder-volumes-1 -wi-a----- 1.00g +``` diff --git a/docs/openstack-clouds.md b/docs/openstack-clouds.md new file mode 100644 index 00000000..afc23481 --- /dev/null +++ b/docs/openstack-clouds.md @@ -0,0 +1,32 @@ +# Create an OpenStack Cloud Config + +There are a lot of ways you can go to connect to your cluster. This example will use your cluster internals to generate a cloud config compatible with your environment using the Admin user. + +## Create the needed directories + +``` shell +mkdir -p ~/.config/openstack +``` + +## Generate the cloud config file + +``` shell +cat > ~/.config/openstack/clouds.yaml < In a production like environment you may need to include production specific files like the example variable file found in + `helm-configs/prod-example-openstack-overrides.yaml`. + +> NOTE: The above command is setting the ceph as disabled. While the K8S infrastructure has Ceph, + we're not exposing ceph to our openstack environment. + +If running in an environment that doesn't have hardware virtualization extensions add the following two `set` switches to the install command. + +``` shell +--set conf.nova.libvirt.virt_type=qemu --set conf.nova.libvirt.cpu_mode=none +``` + +> In a production like environment you may need to include production specific files like the example variable file found in + `helm-configs/prod-example-openstack-overrides.yaml`. + +## Deploy Neutron + +``` shell +cd /opt/genestack/submodules/openstack-helm + +helm upgrade --install neutron ./neutron \ + --namespace=openstack \ + --timeout 120m \ + -f /opt/genestack/helm-configs/neutron/neutron-helm-overrides.yaml \ + --set conf.metadata_agent.DEFAULT.metadata_proxy_shared_secret="$(kubectl --namespace openstack get secret metadata-shared-secret -o jsonpath='{.data.password}' | base64 -d)" \ + --set conf.ovn_metadata_agent.DEFAULT.metadata_proxy_shared_secret="$(kubectl --namespace openstack get secret metadata-shared-secret -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.neutron.password="$(kubectl --namespace openstack get secret neutron-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.nova.password="$(kubectl --namespace openstack get secret nova-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.placement.password="$(kubectl --namespace openstack get secret placement-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.designate.password="$(kubectl --namespace openstack get secret designate-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.ironic.password="$(kubectl --namespace openstack get secret ironic-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.neutron.password="$(kubectl --namespace openstack get secret neutron-db-password -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \ + --set 
endpoints.oslo_messaging.auth.neutron.password="$(kubectl --namespace openstack get secret neutron-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \ + --set conf.neutron.ovn.ovn_nb_connection="tcp:$(kubectl --namespace kube-system get service ovn-nb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')" \ + --set conf.neutron.ovn.ovn_sb_connection="tcp:$(kubectl --namespace kube-system get service ovn-sb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')" \ + --set conf.plugins.ml2_conf.ovn.ovn_nb_connection="tcp:$(kubectl --namespace kube-system get service ovn-nb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')" \ + --set conf.plugins.ml2_conf.ovn.ovn_sb_connection="tcp:$(kubectl --namespace kube-system get service ovn-sb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')" \ + --post-renderer /opt/genestack/kustomize/kustomize.sh \ + --post-renderer-args neutron/base +``` + +> In a production like environment you may need to include production specific files like the example variable file found in + `helm-configs/prod-example-openstack-overrides.yaml`. + +> The above command derives the OVN north/south bound database from our K8S environment. The insert `set` is making the assumption we're using **tcp** to connect. diff --git a/docs/openstack-flavors.md b/docs/openstack-flavors.md new file mode 100644 index 00000000..b149a462 --- /dev/null +++ b/docs/openstack-flavors.md @@ -0,0 +1,12 @@ +# Create Flavors + +These are the default flavors expected in an OpenStack cloud. Customize these flavors based on your needs. See the upstream admin [docs](https://docs.openstack.org/nova/latest/admin/flavors.html) for more information on managing flavors. + +``` shell +openstack --os-cloud default flavor create --public m1.extra_tiny --ram 512 --disk 0 --vcpus 1 --ephemeral 0 --swap 0 +openstack --os-cloud default flavor create --public m1.tiny --ram 1024 --disk 10 --vcpus 1 --ephemeral 0 --swap 0 +openstack --os-cloud default flavor create --public m1.small --ram 2048 --disk 20 --vcpus 2 --ephemeral 0 --swap 0 +openstack --os-cloud default flavor create --public m1.medium --ram 4096 --disk 40 --vcpus 4 --ephemeral 8 --swap 2048 +openstack --os-cloud default flavor create --public m1.large --ram 8192 --disk 80 --vcpus 6 --ephemeral 16 --swap 4096 +openstack --os-cloud default flavor create --public m1.extra_large --ram 16384 --disk 160 --vcpus 8 --ephemeral 32 --swap 8192 +``` diff --git a/docs/openstack-glance-images.md b/docs/openstack-glance-images.md new file mode 100644 index 00000000..fcd198a9 --- /dev/null +++ b/docs/openstack-glance-images.md @@ -0,0 +1,191 @@ + +# Download Images + +## Get Ubuntu + +### Ubuntu 22.04 (Jammy) + +``` shell +wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img +openstack --os-cloud default image create \ + --progress \ + --disk-format qcow2 \ + --container-format bare \ + --public \ + --file jammy-server-cloudimg-amd64.img \ + --property hw_scsi_model=virtio-scsi \ + --property hw_disk_bus=scsi \ + --property hw_vif_multiqueue_enabled=true \ + --property hw_qemu_guest_agent=yes \ + --property hypervisor_type=kvm \ + --property img_config_drive=optional \ + --property hw_machine_type=q35 \ + --property hw_firmware_type=uefi \ + --property os_require_quiesce=yes \ + --property os_type=linux \ + --property os_admin_user=ubuntu \ + --property os_distro=ubuntu \ + --property os_version=22.04 \ + Ubuntu-22.04 +``` + +### Ubuntu 20.04 (Focal) + +``` shell +wget 
https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img +openstack --os-cloud default image create \ + --progress \ + --disk-format qcow2 \ + --container-format bare \ + --public \ + --file focal-server-cloudimg-amd64.img \ + --property hw_scsi_model=virtio-scsi \ + --property hw_disk_bus=scsi \ + --property hw_vif_multiqueue_enabled=true \ + --property hw_qemu_guest_agent=yes \ + --property hypervisor_type=kvm \ + --property img_config_drive=optional \ + --property hw_machine_type=q35 \ + --property hw_firmware_type=uefi \ + --property os_require_quiesce=yes \ + --property os_type=linux \ + --property os_admin_user=ubuntu \ + --property os_distro=ubuntu \ + --property os_version=20.04 \ + Ubuntu-20.04 +``` + +## Get Debian + +### Debian 12 + +``` shell +wget https://cloud.debian.org/cdimage/cloud/bookworm/latest/debian-12-genericcloud-amd64.qcow2 +openstack --os-cloud default image create \ + --progress \ + --disk-format qcow2 \ + --container-format bare \ + --public \ + --file debian-12-genericcloud-amd64.qcow2 \ + --property hw_scsi_model=virtio-scsi \ + --property hw_disk_bus=scsi \ + --property hw_vif_multiqueue_enabled=true \ + --property hw_qemu_guest_agent=yes \ + --property hypervisor_type=kvm \ + --property img_config_drive=optional \ + --property hw_machine_type=q35 \ + --property hw_firmware_type=uefi \ + --property os_require_quiesce=yes \ + --property os_type=linux \ + --property os_admin_user=debian \ + --property os_distro=debian \ + --property os_version=12 \ + Debian-12 +``` + +### Debian 11 + +``` shell +wget https://cloud.debian.org/cdimage/cloud/bullseye/latest/debian-11-genericcloud-amd64.qcow2 +openstack --os-cloud default image create \ + --progress \ + --disk-format qcow2 \ + --container-format bare \ + --public \ + --file debian-11-genericcloud-amd64.qcow2 \ + --property hw_scsi_model=virtio-scsi \ + --property hw_disk_bus=scsi \ + --property hw_vif_multiqueue_enabled=true \ + --property hw_qemu_guest_agent=yes \ + --property hypervisor_type=kvm \ + --property img_config_drive=optional \ + --property hw_machine_type=q35 \ + --property hw_firmware_type=uefi \ + --property os_require_quiesce=yes \ + --property os_type=linux \ + --property os_admin_user=debian \ + --property os_distro=debian \ + --property os_version=11 \ + Debian-11 +``` + +## Get CentOS + +### Centos Stream 9 + +``` shell +wget http://cloud.centos.org/centos/9-stream/x86_64/images/CentOS-Stream-GenericCloud-9-latest.x86_64.qcow2 +openstack --os-cloud default image create \ + --progress \ + --disk-format qcow2 \ + --container-format bare \ + --public \ + --file CentOS-Stream-GenericCloud-9-latest.x86_64.qcow2 \ + --property hw_scsi_model=virtio-scsi \ + --property hw_disk_bus=scsi \ + --property hw_vif_multiqueue_enabled=true \ + --property hw_qemu_guest_agent=yes \ + --property hypervisor_type=kvm \ + --property img_config_drive=optional \ + --property hw_machine_type=q35 \ + --property os_require_quiesce=yes \ + --property os_type=linux \ + --property os_admin_user=centos \ + --property os_distro=centos \ + --property os_version=9 \ + CentOS-Stream-9 +``` + +### Centos Stream 8 + +``` shell +wget http://cloud.centos.org/centos/8-stream/x86_64/images/CentOS-Stream-GenericCloud-8-latest.x86_64.qcow2 +openstack --os-cloud default image create \ + --progress \ + --disk-format qcow2 \ + --container-format bare \ + --public \ + --file CentOS-Stream-GenericCloud-8-latest.x86_64.qcow2 \ + --property hw_scsi_model=virtio-scsi \ + --property hw_disk_bus=scsi \ + --property 
hw_vif_multiqueue_enabled=true \ + --property hw_qemu_guest_agent=yes \ + --property hypervisor_type=kvm \ + --property img_config_drive=optional \ + --property hw_machine_type=q35 \ + --property hw_firmware_type=uefi \ + --property os_require_quiesce=yes \ + --property os_type=linux \ + --property os_admin_user=centos \ + --property os_distro=centos \ + --property os_version=8 \ + CentOS-Stream-8 +``` + +## Get openSUSE Leap + +### Leap 15 + +``` shell +wget https://download.opensuse.org/distribution/leap/15.5/appliances/openSUSE-Leap-15.5-Minimal-VM.x86_64-kvm-and-xen.qcow2 +openstack --os-cloud default image create \ + --progress \ + --disk-format qcow2 \ + --container-format bare \ + --public \ + --file openSUSE-Leap-15.5-Minimal-VM.x86_64-kvm-and-xen.qcow2 \ + --property hw_scsi_model=virtio-scsi \ + --property hw_disk_bus=scsi \ + --property hw_vif_multiqueue_enabled=true \ + --property hw_qemu_guest_agent=yes \ + --property hypervisor_type=kvm \ + --property img_config_drive=optional \ + --property hw_machine_type=q35 \ + --property os_require_quiesce=yes \ + --property os_type=linux \ + --property os_admin_user=opensuse \ + --property os_distro=suse \ + --property os_version=15 \ + openSUSE-Leap-15 +``` +  \ No newline at end of file diff --git a/docs/openstack-glance.md b/docs/openstack-glance.md new file mode 100644 index 00000000..e3c6a3c9 --- /dev/null +++ b/docs/openstack-glance.md @@ -0,0 +1,58 @@ +# Deploy Glance + +[![asciicast](https://asciinema.org/a/629806.svg)](https://asciinema.org/a/629806) + +## Create secrets. + +``` shell +kubectl --namespace openstack \ + create secret generic glance-rabbitmq-password \ + --type Opaque \ + --from-literal=username="glance" \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" +kubectl --namespace openstack \ + create secret generic glance-db-password \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic glance-admin \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +``` + +> Before running the Glance deployment you should configure the backend which is defined in the + `helm-configs/glance/glance-helm-overrides.yaml` file. The default is a making the assumption we're running with Ceph deployed by + Rook so the backend is configured to be cephfs with multi-attach functionality. While this works great, you should consider all of + the available storage backends and make the right decision for your environment. 
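+
+If you want to see what the current overrides will hand to the chart before settling on a backend, a plain text search of the file is enough. The key names grepped for below are a loose guess and depend on the openstack-helm glance chart version, so treat this as a rough sketch.
+
+``` shell
+# Review the storage/backend related settings in the Glance overrides
+grep -n -i -E 'storage|ceph|pvc|swift' /opt/genestack/helm-configs/glance/glance-helm-overrides.yaml
+```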
+ +## Run the package deployment + +``` shell +cd /opt/genestack/submodules/openstack-helm + +helm upgrade --install glance ./glance \ + --namespace=openstack \ + --wait \ + --timeout 120m \ + -f /opt/genestack/helm-configs/glance/glance-helm-overrides.yaml \ + --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.glance.password="$(kubectl --namespace openstack get secret glance-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.glance.password="$(kubectl --namespace openstack get secret glance-db-password -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_messaging.auth.glance.password="$(kubectl --namespace openstack get secret glance-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \ + --post-renderer /opt/genestack/kustomize/kustomize.sh \ + --post-renderer-args glance/base +``` + +> In a production like environment you may need to include production specific files like the example variable file found in + `helm-configs/prod-example-openstack-overrides.yaml`. + +> Note that the defaults disable `storage_init` because we're using **pvc** as the image backend + type. In production this should be changed to swift. + +## Validate functionality + +``` shell +kubectl --namespace openstack exec -ti openstack-admin-client -- openstack image list +``` diff --git a/docs/openstack-heat.md b/docs/openstack-heat.md new file mode 100644 index 00000000..b14ac339 --- /dev/null +++ b/docs/openstack-heat.md @@ -0,0 +1,59 @@ +# Deploy Heat + +[![asciicast](https://asciinema.org/a/629807.svg)](https://asciinema.org/a/629807) + +## Create secrets + +``` shell +kubectl --namespace openstack \ + create secret generic heat-rabbitmq-password \ + --type Opaque \ + --from-literal=username="heat" \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" +kubectl --namespace openstack \ + create secret generic heat-db-password \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic heat-admin \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic heat-trustee \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic heat-stack-user \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +``` + +## Run the package deployment + +``` shell +cd /opt/genestack/submodules/openstack-helm + +helm upgrade --install heat ./heat \ + --namespace=openstack \ + --timeout 120m \ + -f /opt/genestack/helm-configs/heat/heat-helm-overrides.yaml \ + --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.heat.password="$(kubectl --namespace openstack get secret heat-admin -o jsonpath='{.data.password}' | 
base64 -d)" \
+    --set endpoints.identity.auth.heat_trustee.password="$(kubectl --namespace openstack get secret heat-trustee -o jsonpath='{.data.password}' | base64 -d)" \
+    --set endpoints.identity.auth.heat_stack_user.password="$(kubectl --namespace openstack get secret heat-stack-user -o jsonpath='{.data.password}' | base64 -d)" \
+    --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \
+    --set endpoints.oslo_db.auth.heat.password="$(kubectl --namespace openstack get secret heat-db-password -o jsonpath='{.data.password}' | base64 -d)" \
+    --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \
+    --set endpoints.oslo_messaging.auth.heat.password="$(kubectl --namespace openstack get secret heat-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \
+    --post-renderer /opt/genestack/kustomize/kustomize.sh \
+    --post-renderer-args heat/base
+```
+
+> In a production-like environment you may need to include production-specific files like the example variable file found in
+  `helm-configs/prod-example-openstack-overrides.yaml`.
+
+## Validate functionality
+
+``` shell
+kubectl --namespace openstack exec -ti openstack-admin-client -- openstack --os-interface internal orchestration service list
+```
diff --git a/docs/openstack-helm-make.md b/docs/openstack-helm-make.md
new file mode 100644
index 00000000..9a38a8e0
--- /dev/null
+++ b/docs/openstack-helm-make.md
@@ -0,0 +1,15 @@
+# OpenStack Helm
+
+## Install Helm
+
+While `helm` should already be installed with the **host-setup** playbook, you will need to install it manually on any node where it is not already present. There are many ways to install helm; check the upstream [docs](https://helm.sh/docs/intro/install/) to learn more.
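+
+As one example, the upstream installer script can be used to install a recent helm release. This is a minimal sketch, not the only supported method, and it assumes the node has `curl` available and internet access:
+
+``` shell
+# Fetch and run the official helm installer script, which installs the latest helm release.
+curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
+chmod 700 get_helm.sh
+./get_helm.sh
+```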
+
+## Run `make` for our helm components
+
+``` shell
+cd /opt/genestack/submodules/openstack-helm &&
+make all
+
+cd /opt/genestack/submodules/openstack-helm-infra &&
+make all
+```
diff --git a/docs/openstack-horizon.md b/docs/openstack-horizon.md
new file mode 100644
index 00000000..eec63ffe
--- /dev/null
+++ b/docs/openstack-horizon.md
@@ -0,0 +1,38 @@
+# Deploy Horizon
+
+[![asciicast](https://asciinema.org/a/629815.svg)](https://asciinema.org/a/629815)
+
+## Create secrets
+
+``` shell
+kubectl --namespace openstack \
+        create secret generic horizon-secrete-key \
+        --type Opaque \
+        --from-literal=username="horizon" \
+        --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)"
+kubectl --namespace openstack \
+        create secret generic horizon-db-password \
+        --type Opaque \
+        --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)"
+```
+
+## Run the package deployment
+
+``` shell
+cd /opt/genestack/submodules/openstack-helm
+
+helm upgrade --install horizon ./horizon \
+    --namespace=openstack \
+    --wait \
+    --timeout 120m \
+    -f /opt/genestack/helm-configs/horizon/horizon-helm-overrides.yaml \
+    --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \
+    --set conf.horizon.local_settings.config.horizon_secret_key="$(kubectl --namespace openstack get secret horizon-secrete-key -o jsonpath='{.data.password}' | base64 -d)" \
+    --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \
+    --set endpoints.oslo_db.auth.horizon.password="$(kubectl --namespace openstack get secret horizon-db-password -o jsonpath='{.data.password}' | base64 -d)" \
+    --post-renderer /opt/genestack/kustomize/kustomize.sh \
+    --post-renderer-args horizon/base
+```
+
+> In a production-like environment you may need to include production-specific files like the example variable file found in
+  `helm-configs/prod-example-openstack-overrides.yaml`.
diff --git a/docs/openstack-keystone-federation.md b/docs/openstack-keystone-federation.md
new file mode 100644
index 00000000..5c12da6a
--- /dev/null
+++ b/docs/openstack-keystone-federation.md
@@ -0,0 +1,85 @@
+
+# Setup the Keystone Federation Plugin
+
+## Create the domain
+
+``` shell
+openstack --os-cloud default domain create rackspace_cloud_domain
+```
+
+## Create the identity provider
+
+``` shell
+openstack --os-cloud default identity provider create --remote-id rackspace --domain rackspace_cloud_domain rackspace
+```
+
+### Create the mapping for our identity provider
+
+You're welcome to generate your own mapping to suit your needs; however, if you want to use the example mapping (which is suitable for production) you can.
+
+``` json
+[
+    {
+        "local": [
+            {
+                "user": {
+                    "name": "{0}",
+                    "email": "{1}"
+                }
+            },
+            {
+                "projects": [
+                    {
+                        "name": "{2}_Flex",
+                        "roles": [
+                            {
+                                "name": "member"
+                            },
+                            {
+                                "name": "load-balancer_member"
+                            },
+                            {
+                                "name": "heat_stack_user"
+                            }
+                        ]
+                    }
+                ]
+            }
+        ],
+        "remote": [
+            {
+                "type": "RXT_UserName"
+            },
+            {
+                "type": "RXT_Email"
+            },
+            {
+                "type": "RXT_TenantName"
+            },
+            {
+                "type": "RXT_orgPersonType",
+                "any_one_of": [
+                    "admin",
+                    "default",
+                    "user-admin",
+                    "tenant-access"
+                ]
+            }
+        ]
+    }
+]
+```
+
+> Save the mapping to a local file before uploading it to keystone. In the examples, the mapping is stored at `/tmp/mapping.json`.
+
+Now register the mapping within Keystone.
+ +``` shell +openstack --os-cloud default mapping create --rules /tmp/mapping.json rackspace_mapping +``` + +## Create the federation protocol + +``` shell +openstack --os-cloud default federation protocol create rackspace --mapping rackspace_mapping --identity-provider rackspace +``` diff --git a/docs/openstack-keystone.md b/docs/openstack-keystone.md new file mode 100644 index 00000000..a52a6bc3 --- /dev/null +++ b/docs/openstack-keystone.md @@ -0,0 +1,63 @@ + +# Deploy Keystone + +[![asciicast](https://asciinema.org/a/629802.svg)](https://asciinema.org/a/629802) + +## Create secrets. + +``` shell +kubectl --namespace openstack \ + create secret generic keystone-rabbitmq-password \ + --type Opaque \ + --from-literal=username="keystone" \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" +kubectl --namespace openstack \ + create secret generic keystone-db-password \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic keystone-admin \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic keystone-credential-keys \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +``` + +## Run the package deployment + +``` shell +cd /opt/genestack/submodules/openstack-helm + +helm upgrade --install keystone ./keystone \ + --namespace=openstack \ + --wait \ + --timeout 120m \ + -f /opt/genestack/helm-configs/keystone/keystone-helm-overrides.yaml \ + --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.keystone.password="$(kubectl --namespace openstack get secret keystone-db-password -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_messaging.auth.keystone.password="$(kubectl --namespace openstack get secret keystone-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \ + --post-renderer /opt/genestack/kustomize/kustomize.sh \ + --post-renderer-args keystone/base +``` + +> In a production like environment you may need to include production specific files like the example variable file found in + `helm-configs/prod-example-openstack-overrides.yaml`. + +> NOTE: The image used here allows the system to run with RXT global authentication federation. 
+ The federated plugin can be seen here, https://github.com/cloudnull/keystone-rxt + +Deploy the openstack admin client pod (optional) + +``` shell +kubectl --namespace openstack apply -f /opt/genestack/manifests/utils/utils-openstack-client-admin.yaml +``` + +## Validate functionality + +``` shell +kubectl --namespace openstack exec -ti openstack-admin-client -- openstack user list +``` diff --git a/docs/openstack-neutron-networks.md b/docs/openstack-neutron-networks.md new file mode 100644 index 00000000..0f5a419b --- /dev/null +++ b/docs/openstack-neutron-networks.md @@ -0,0 +1,72 @@ +# Creating Different Neutron Network Types + +The following commands are examples of creating several different network types. + +## Create Shared Provider Networks + +### Flat Network + +``` shell +openstack --os-cloud default network create --share \ + --availability-zone-hint nova \ + --external \ + --provider-network-type flat \ + --provider-physical-network physnet1 \ + flat +``` + +#### Flat Subnet + +``` shell +openstack --os-cloud default subnet create --subnet-range 172.16.24.0/22 \ + --gateway 172.16.24.2 \ + --dns-nameserver 172.16.24.2 \ + --allocation-pool start=172.16.25.150,end=172.16.25.200 \ + --dhcp \ + --network flat \ + flat_subnet +``` + +### VLAN Network + +``` shell +openstack --os-cloud default network create --share \ + --availability-zone-hint nova \ + --external \ + --provider-segment 404 \ + --provider-network-type vlan \ + --provider-physical-network physnet1 \ + vlan404 +``` + +#### VLAN Subnet + +``` shell +openstack --os-cloud default subnet create --subnet-range 10.10.10.0/23 \ + --gateway 10.10.10.1 \ + --dns-nameserver 10.10.10.1 \ + --allocation-pool start=10.10.11.10,end=10.10.11.254 \ + --dhcp \ + --network vlan404 \ + vlan404_subnet +``` + +## Creating Tenant type networks + +### L3 (Tenant) Network + +``` shell +openstack --os-cloud default network create l3 +``` + +#### L3 (Tenant) Subnet + +``` shell +openstack --os-cloud default subnet create --subnet-range 10.0.10.0/24 \ + --gateway 10.0.10.1 \ + --dns-nameserver 1.1.1.1 \ + --allocation-pool start=10.0.10.2,end=10.0.10.254 \ + --dhcp \ + --network l3 \ + l3_subnet +``` diff --git a/docs/openstack-octavia.md b/docs/openstack-octavia.md new file mode 100644 index 00000000..ce4799c7 --- /dev/null +++ b/docs/openstack-octavia.md @@ -0,0 +1,58 @@ + +# Deploy Octavia + +[![asciicast](https://asciinema.org/a/629814.svg)](https://asciinema.org/a/629814) + +### Create secrets + +``` shell +kubectl --namespace openstack \ + create secret generic octavia-rabbitmq-password \ + --type Opaque \ + --from-literal=username="octavia" \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" +kubectl --namespace openstack \ + create secret generic octavia-db-password \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic octavia-admin \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic octavia-certificates \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +``` + +## Run the package deployment + +``` shell +cd /opt/genestack/submodules/openstack-helm + +helm upgrade --install octavia ./octavia \ + --namespace=openstack \ + --wait \ + --timeout 120m \ + -f /opt/genestack/helm-configs/octavia/octavia-helm-overrides.yaml \ + 
--set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \
+    --set endpoints.identity.auth.octavia.password="$(kubectl --namespace openstack get secret octavia-admin -o jsonpath='{.data.password}' | base64 -d)" \
+    --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \
+    --set endpoints.oslo_db.auth.octavia.password="$(kubectl --namespace openstack get secret octavia-db-password -o jsonpath='{.data.password}' | base64 -d)" \
+    --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \
+    --set endpoints.oslo_messaging.auth.octavia.password="$(kubectl --namespace openstack get secret octavia-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \
+    --set conf.octavia.certificates.ca_private_key_passphrase="$(kubectl --namespace openstack get secret octavia-certificates -o jsonpath='{.data.password}' | base64 -d)" \
+    --set conf.octavia.ovn.ovn_nb_connection="tcp:$(kubectl --namespace kube-system get service ovn-nb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')" \
+    --set conf.octavia.ovn.ovn_sb_connection="tcp:$(kubectl --namespace kube-system get service ovn-sb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')" \
+    --post-renderer /opt/genestack/kustomize/kustomize.sh \
+    --post-renderer-args octavia/base
+```
+
+> In a production-like environment you may need to include production-specific files like the example variable file found in
+  `helm-configs/prod-example-openstack-overrides.yaml`.
+
+## Validate functionality
+
+``` shell
+kubectl --namespace openstack exec -ti openstack-admin-client -- openstack --os-interface internal loadbalancer list
+```
diff --git a/docs/openstack-overview.md b/docs/openstack-overview.md
new file mode 100644
index 00000000..cf8d9be2
--- /dev/null
+++ b/docs/openstack-overview.md
@@ -0,0 +1,23 @@
+# Building the cloud
+
+From this point forward we're building our OpenStack cloud. The following commands will leverage `helm` as the package manager and `kustomize` as our configuration management backend.
+
+## Deployment choices
+
+When you're building the cloud, you have a couple of deployment choices, the most fundamental of which is `base` or `aio`.
+
+* `base` creates a production-ready environment that ensures an HA system is deployed across the hardware available in your cloud.
+* `aio` creates a minimal cloud environment which is suitable for testing and can run on limited resources.
+
+The following examples all assume the use of a production environment; however, if you change `base` to `aio`, the deployment footprint will be reduced for a given service.
+
+## The DNA of our services
+
+The DNA of the OpenStack services has been built to scale, and be managed in a pseudo lights-out environment. We're aiming to empower operators to do more, simply and easily. Here are the high-level talking points about the way we've structured our applications.
+
+* All services make use of our core infrastructure, which is all managed by operators.
+* Backups, rollbacks, and package management are all built into our application delivery.
+* Databases, users, and grants are all run against a MariaDB Galera cluster which is set up for OpenStack to write to a single node and read from many.
+  * The primary node is part of application service discovery and will be automatically promoted / demoted within the cluster as needed.
+* Queues, permissions, vhosts, and users are all backed by a RabbitMQ cluster with automatic failover. All of the queues deployed in the environment are Quorum queues, giving us a best-of-breed queuing platform which gracefully recovers from faults while maintaining performance.
+* Horizontal scaling groups have been applied to all of our services. This means we'll be able to auto-scale API applications up and down based on the needs of the environment.
diff --git a/docs/openstack-skyline.md b/docs/openstack-skyline.md
new file mode 100644
index 00000000..8a931b1c
--- /dev/null
+++ b/docs/openstack-skyline.md
@@ -0,0 +1,38 @@
+# Deploy Skyline
+
+[![asciicast](https://asciinema.org/a/629816.svg)](https://asciinema.org/a/629816)
+
+Skyline is an alternative web UI for OpenStack. If you deploy Horizon, there's no need for Skyline.
+
+## Create secrets
+
+Skyline is a little different because it has no helm integration. This makes the deployment far simpler, and all secrets can be managed in one object.
+
+``` shell
+kubectl --namespace openstack \
+        create secret generic skyline-apiserver-secrets \
+        --type Opaque \
+        --from-literal=service-username="skyline" \
+        --from-literal=service-password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" \
+        --from-literal=service-domain="service" \
+        --from-literal=service-project="service" \
+        --from-literal=service-project-domain="service" \
+        --from-literal=db-endpoint="mariadb-galera-primary.openstack.svc.cluster.local" \
+        --from-literal=db-name="skyline" \
+        --from-literal=db-username="skyline" \
+        --from-literal=db-password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" \
+        --from-literal=secret-key="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" \
+        --from-literal=keystone-endpoint="http://keystone-api.openstack.svc.cluster.local:5000" \
+        --from-literal=default-region="RegionOne"
+```
+
+> Note that all of the configuration lives in this one secret, so be sure to set your entries accordingly.
+
+## Run the deployment
+
+> [!TIP]
+> Pause for a moment to consider whether you will want to access Skyline via your ingress controller over a specific FQDN. If so, modify `/opt/genestack/kustomize/skyline/fqdn/kustomization.yaml` to suit your needs, then use `fqdn` below in lieu of `base`.
+ +``` shell +kubectl --namespace openstack apply -k /opt/genestack/kustomize/skyline/base +``` diff --git a/docs/overrides/stylesheets/adr.css b/docs/overrides/stylesheets/adr.css new file mode 100644 index 00000000..5bbb36fc --- /dev/null +++ b/docs/overrides/stylesheets/adr.css @@ -0,0 +1,98 @@ +.adr_header { + display: grid; + grid-template-columns: fit-content(30%) auto; + width: 100%; + font-size: 0.7rem; +} + +.adr_header>dd { + margin: 0 !important; + padding: 0.1rem 0.3rem 0.1rem; +} + +.adr_header>dt { + font-weight: bold; +} + +.c-pill { + align-items: center; + font-family: "Open Sans", Arial, Verdana, sans-serif; + font-weight: bold; + font-size: 14px; + height: 100%; + white-space: nowrap; + width: auto; + position: relative; + border-radius: 100px; + line-height: 1 !important; + overflow: hidden; + padding: 0px 12px 0px 20px; + text-overflow: ellipsis; + line-height: 1.25rem; + color: #595959; + word-break: break-word; + &:before { + border-radius: 50%; + content: ''; + height: 10px; + left: 6px; + margin-top: -5px; + position: absolute; + top: 50%; + width: 10px; + } +} + +.c-pill-draft { + background: #a3a3a3; +} + +.c-pill-draft:before { + background: #505050; +} + +.c-pill-proposed { + background: #b6d8ff; +} + +.c-pill-proposed:before { + background: #0077ff; +} + +.c-pill-accepted { + background: #b4eda0; +} + +.c-pill-accepted:before { + background: #6BC167; +} + +.c-pill-rejected { + background: #ffd5d1; +} + +.c-pill-rejected:before { + background: #ff4436; +} + +.c-pill-superseded { + background: #ffebb6; +} + +.c-pill-superseded:before { + background: #ffc400; +} + +.adr_header .md-tag { + display: inline !important; + +} + +.adr_header .md-tags { + margin-bottom: 0% !important; + margin-top: unset !important; +} + +.md-grid { + max-width: 95%; +} diff --git a/docs/post-deploy-ops.md b/docs/post-deploy-ops.md deleted file mode 100644 index c5056a14..00000000 --- a/docs/post-deploy-ops.md +++ /dev/null @@ -1,418 +0,0 @@ -After deploying the cloud operating environment, you're cloud will be ready to do work. While so what's next? Within this page we've a series of steps you can take to further build your cloud environment. - -## Create an OpenStack Cloud Config - -There are a lot of ways you can go to connect to your cluster. This example will use your cluster internals to generate a cloud config compatible with your environment using the Admin user. - -### Create the needed directories - -``` shell -mkdir -p ~/.config/openstack -``` - -### Generate the cloud config file - -``` shell -cat > ~/.config/openstack/clouds.yaml < Save the mapping to a local file before uploading it to keystone. In the examples, the mapping is stored at `/tmp/mapping.json`. - -Now register the mapping within Keystone. - -``` shell -openstack --os-cloud default mapping create --rules /tmp/mapping.json rackspace_mapping -``` - -### Create the federation protocol - -``` shell -openstack --os-cloud default federation protocol create rackspace --mapping rackspace_mapping --identity-provider rackspace -``` - -## Create Flavors - -These are the default flavors expected in an OpenStack cloud. Customize these flavors based on your needs. See the upstream admin [docs](https://docs.openstack.org/nova/latest/admin/flavors.html) for more information on managing flavors. 
- -``` shell -openstack --os-cloud default flavor create --public m1.extra_tiny --ram 512 --disk 0 --vcpus 1 --ephemeral 0 --swap 0 -openstack --os-cloud default flavor create --public m1.tiny --ram 1024 --disk 10 --vcpus 1 --ephemeral 0 --swap 0 -openstack --os-cloud default flavor create --public m1.small --ram 2048 --disk 20 --vcpus 2 --ephemeral 0 --swap 0 -openstack --os-cloud default flavor create --public m1.medium --ram 4096 --disk 40 --vcpus 4 --ephemeral 8 --swap 2048 -openstack --os-cloud default flavor create --public m1.large --ram 8192 --disk 80 --vcpus 6 --ephemeral 16 --swap 4096 -openstack --os-cloud default flavor create --public m1.extra_large --ram 16384 --disk 160 --vcpus 8 --ephemeral 32 --swap 8192 -``` - -## Download Images - -### Get Ubuntu - -#### Ubuntu 22.04 (Jammy) - -``` shell -wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img -openstack --os-cloud default image create \ - --progress \ - --disk-format qcow2 \ - --container-format bare \ - --public \ - --file jammy-server-cloudimg-amd64.img \ - --property hw_scsi_model=virtio-scsi \ - --property hw_disk_bus=scsi \ - --property hw_vif_multiqueue_enabled=true \ - --property hw_qemu_guest_agent=yes \ - --property hypervisor_type=kvm \ - --property img_config_drive=optional \ - --property hw_machine_type=q35 \ - --property hw_firmware_type=uefi \ - --property os_require_quiesce=yes \ - --property os_type=linux \ - --property os_admin_user=ubuntu \ - --property os_distro=ubuntu \ - --property os_version=22.04 \ - Ubuntu-22.04 -``` - -#### Ubuntu 20.04 (Focal) - -``` shell -wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img -openstack --os-cloud default image create \ - --progress \ - --disk-format qcow2 \ - --container-format bare \ - --public \ - --file focal-server-cloudimg-amd64.img \ - --property hw_scsi_model=virtio-scsi \ - --property hw_disk_bus=scsi \ - --property hw_vif_multiqueue_enabled=true \ - --property hw_qemu_guest_agent=yes \ - --property hypervisor_type=kvm \ - --property img_config_drive=optional \ - --property hw_machine_type=q35 \ - --property hw_firmware_type=uefi \ - --property os_require_quiesce=yes \ - --property os_type=linux \ - --property os_admin_user=ubuntu \ - --property os_distro=ubuntu \ - --property os_version=20.04 \ - Ubuntu-20.04 -``` - -### Get Debian - -#### Debian 12 - -``` shell -wget https://cloud.debian.org/cdimage/cloud/bookworm/latest/debian-12-genericcloud-amd64.qcow2 -openstack --os-cloud default image create \ - --progress \ - --disk-format qcow2 \ - --container-format bare \ - --public \ - --file debian-12-genericcloud-amd64.qcow2 \ - --property hw_scsi_model=virtio-scsi \ - --property hw_disk_bus=scsi \ - --property hw_vif_multiqueue_enabled=true \ - --property hw_qemu_guest_agent=yes \ - --property hypervisor_type=kvm \ - --property img_config_drive=optional \ - --property hw_machine_type=q35 \ - --property hw_firmware_type=uefi \ - --property os_require_quiesce=yes \ - --property os_type=linux \ - --property os_admin_user=debian \ - --property os_distro=debian \ - --property os_version=12 \ - Debian-12 -``` - -#### Debian 11 - -``` shell -wget https://cloud.debian.org/cdimage/cloud/bullseye/latest/debian-11-genericcloud-amd64.qcow2 -openstack --os-cloud default image create \ - --progress \ - --disk-format qcow2 \ - --container-format bare \ - --public \ - --file debian-11-genericcloud-amd64.qcow2 \ - --property hw_scsi_model=virtio-scsi \ - --property hw_disk_bus=scsi \ - --property 
hw_vif_multiqueue_enabled=true \ - --property hw_qemu_guest_agent=yes \ - --property hypervisor_type=kvm \ - --property img_config_drive=optional \ - --property hw_machine_type=q35 \ - --property hw_firmware_type=uefi \ - --property os_require_quiesce=yes \ - --property os_type=linux \ - --property os_admin_user=debian \ - --property os_distro=debian \ - --property os_version=11 \ - Debian-11 -``` - -### Get CentOS - -#### Centos Stream 9 - -``` shell -wget http://cloud.centos.org/centos/9-stream/x86_64/images/CentOS-Stream-GenericCloud-9-latest.x86_64.qcow2 -openstack --os-cloud default image create \ - --progress \ - --disk-format qcow2 \ - --container-format bare \ - --public \ - --file CentOS-Stream-GenericCloud-9-latest.x86_64.qcow2 \ - --property hw_scsi_model=virtio-scsi \ - --property hw_disk_bus=scsi \ - --property hw_vif_multiqueue_enabled=true \ - --property hw_qemu_guest_agent=yes \ - --property hypervisor_type=kvm \ - --property img_config_drive=optional \ - --property hw_machine_type=q35 \ - --property os_require_quiesce=yes \ - --property os_type=linux \ - --property os_admin_user=centos \ - --property os_distro=centos \ - --property os_version=9 \ - CentOS-Stream-9 -``` - -#### Centos Stream 8 - -``` shell -wget http://cloud.centos.org/centos/8-stream/x86_64/images/CentOS-Stream-GenericCloud-8-latest.x86_64.qcow2 -openstack --os-cloud default image create \ - --progress \ - --disk-format qcow2 \ - --container-format bare \ - --public \ - --file CentOS-Stream-GenericCloud-8-latest.x86_64.qcow2 \ - --property hw_scsi_model=virtio-scsi \ - --property hw_disk_bus=scsi \ - --property hw_vif_multiqueue_enabled=true \ - --property hw_qemu_guest_agent=yes \ - --property hypervisor_type=kvm \ - --property img_config_drive=optional \ - --property hw_machine_type=q35 \ - --property hw_firmware_type=uefi \ - --property os_require_quiesce=yes \ - --property os_type=linux \ - --property os_admin_user=centos \ - --property os_distro=centos \ - --property os_version=8 \ - CentOS-Stream-8 -``` - -### Get openSUSE Leap - -#### Leap 15 - -``` shell -wget https://download.opensuse.org/distribution/leap/15.5/appliances/openSUSE-Leap-15.5-Minimal-VM.x86_64-kvm-and-xen.qcow2 -openstack --os-cloud default image create \ - --progress \ - --disk-format qcow2 \ - --container-format bare \ - --public \ - --file openSUSE-Leap-15.5-Minimal-VM.x86_64-kvm-and-xen.qcow2 \ - --property hw_scsi_model=virtio-scsi \ - --property hw_disk_bus=scsi \ - --property hw_vif_multiqueue_enabled=true \ - --property hw_qemu_guest_agent=yes \ - --property hypervisor_type=kvm \ - --property img_config_drive=optional \ - --property hw_machine_type=q35 \ - --property os_require_quiesce=yes \ - --property os_type=linux \ - --property os_admin_user=opensuse \ - --property os_distro=suse \ - --property os_version=15 \ - openSUSE-Leap-15 -``` -  -## Create Shared Provider Networks - -The following commands are examples of creating several different network types. 
- -### Flat Network - -``` shell -openstack --os-cloud default network create --share \ - --availability-zone-hint nova \ - --external \ - --provider-network-type flat \ - --provider-physical-network physnet1 \ - flat -``` - -### Flat Subnet - -``` shell -openstack --os-cloud default subnet create --subnet-range 172.16.24.0/22 \ - --gateway 172.16.24.2 \ - --dns-nameserver 172.16.24.2 \ - --allocation-pool start=172.16.25.150,end=172.16.25.200 \ - --dhcp \ - --network flat \ - flat_subnet -``` - -### VLAN Network - -``` shell -openstack --os-cloud default network create --share \ - --availability-zone-hint nova \ - --external \ - --provider-segment 404 \ - --provider-network-type vlan \ - --provider-physical-network physnet1 \ - vlan404 -``` - -### VLAN Subnet - -``` shell -openstack --os-cloud default subnet create --subnet-range 10.10.10.0/23 \ - --gateway 10.10.10.1 \ - --dns-nameserver 10.10.10.1 \ - --allocation-pool start=10.10.11.10,end=10.10.11.254 \ - --dhcp \ - --network vlan404 \ - vlan404_subnet -``` - -### L3 (Tenant) Network - -``` shell -openstack --os-cloud default network create l3 -``` - -### L3 (Tenant) Subnet - -``` shell -openstack --os-cloud default subnet create --subnet-range 10.0.10.0/24 \ - --gateway 10.0.10.1 \ - --dns-nameserver 1.1.1.1 \ - --allocation-pool start=10.0.10.2,end=10.0.10.254 \ - --dhcp \ - --network l3 \ - l3_subnet -``` - -> You can validate that the role has been assigned to the group and domain using the `openstack role assignment list` - -# Third Party Integration - -## OSIE Deployment - -``` shell -helm upgrade --install osie osie/osie \ - --namespace=osie \ - --create-namespace \ - --wait \ - --timeout 120m \ - -f /opt/genestack/helm-configs/osie/osie-helm-overrides.yaml -``` - -# Connect to the database - -Sometimes an operator may need to connect to the database to troubleshoot things or otherwise make modifications to the databases in place. The following command can be used to connect to the database from a node within the cluster. - -``` shell -mysql -h $(kubectl -n openstack get service mariadb-galera-primary -o jsonpath='{.spec.clusterIP}') \ - -p$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d) \ - -u root -``` - -> The following command will leverage your kube configuration and dynamically source the needed information to connect to the MySQL cluster. You will need to ensure you have installed the mysql client tools on the system you're attempting to connect from. diff --git a/docs/storage-ceph-rook-external.md b/docs/storage-ceph-rook-external.md new file mode 100644 index 00000000..a4123fe2 --- /dev/null +++ b/docs/storage-ceph-rook-external.md @@ -0,0 +1,76 @@ +# Cephadm/ceph-ansible/Rook (Ceph) - External + +We can use an external ceph cluster and present it via rook-ceph to your cluster. 
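+
+Before preparing pools, it can help to confirm the external cluster is reachable and healthy from a node that has the ceph CLI available. This is only a sanity check, assuming admin access to the external cluster:
+
+``` shell
+# The external cluster should report HEALTH_OK before it is wired into Kubernetes.
+ceph status
+```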
+ +## Prepare pools on external cluster + +``` shell +ceph osd pool create general 32 +ceph osd pool create general-multi-attach-data 32 +ceph osd pool create general-multi-attach-metadata 32 +rbd pool init general +ceph fs new general-multi-attach general-multi-attach-metadata general-multi-attach-data +``` + +## You must have a MDS service running, in this example I am tagging my 3 ceph nodes with MDS labels and creating a MDS service for the general-multi-attach Cephfs Pool + +``` shell +ceph orch host label add genestack-ceph1 mds +ceph orch host label add genestack-ceph2 mds +ceph orch host label add genestack-ceph3 mds +ceph orch apply mds myfs label:mds +``` + +## We will now download create-external-cluster-resources.py and create exports to run on your controller node. Using cephadm in this example: + +``` shell +./cephadm shell +yum install wget -y ; wget https://raw.githubusercontent.com/rook/rook/release-1.12/deploy/examples/create-external-cluster-resources.py +python3 create-external-cluster-resources.py --rbd-data-pool-name general --cephfs-filesystem-name general-multi-attach --namespace rook-ceph-external --format bash +``` + +## Copy and paste the output, here is an example: +``` shell +root@genestack-ceph1:/# python3 create-external-cluster-resources.py --rbd-data-pool-name general --cephfs-filesystem-name general-multi-attach --namespace rook-ceph-external --format bash +export NAMESPACE=rook-ceph-external +export ROOK_EXTERNAL_FSID=d45869e0-ccdf-11ee-8177-1d25f5ec2433 +export ROOK_EXTERNAL_USERNAME=client.healthchecker +export ROOK_EXTERNAL_CEPH_MON_DATA=genestack-ceph1=10.1.1.209:6789 +export ROOK_EXTERNAL_USER_SECRET=AQATh89lf5KiBBAATgaOGAMELzPOIpiCg6ANfA== +export ROOK_EXTERNAL_DASHBOARD_LINK=https://10.1.1.209:8443/ +export CSI_RBD_NODE_SECRET=AQATh89l3AJjBRAAYD+/cuf3XPdMBmdmz4iWIA== +export CSI_RBD_NODE_SECRET_NAME=csi-rbd-node +export CSI_RBD_PROVISIONER_SECRET=AQATh89l9dH4BRAApBKzqwtaUqw9bNcBI/iGGw== +export CSI_RBD_PROVISIONER_SECRET_NAME=csi-rbd-provisioner +export CEPHFS_POOL_NAME=general-multi-attach-data +export CEPHFS_METADATA_POOL_NAME=general-multi-attach-metadata +export CEPHFS_FS_NAME=general-multi-attach +export CSI_CEPHFS_NODE_SECRET=AQATh89lFeqMBhAAJpHAE5vtukXYuRj2+WTh2g== +export CSI_CEPHFS_PROVISIONER_SECRET=AQATh89lHB0dBxAA7CHM/9rTSs79SLJSKVBYeg== +export CSI_CEPHFS_NODE_SECRET_NAME=csi-cephfs-node +export CSI_CEPHFS_PROVISIONER_SECRET_NAME=csi-cephfs-provisioner +export MONITORING_ENDPOINT=10.1.1.209 +export MONITORING_ENDPOINT_PORT=9283 +export RBD_POOL_NAME=general +export RGW_POOL_PREFIX=default +``` + +## Run the following commands to import the cluster after pasting in exports from external cluster +``` shell +kubectl apply -k /opt/genestack/kustomize/rook-operator/ +/opt/genestack/scripts/import-external-cluster.sh +helm repo add rook-release https://charts.rook.io/release +helm install --create-namespace --namespace rook-ceph-external rook-ceph-cluster --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster -f /opt/genestack/submodules/rook/deploy/charts/rook-ceph-cluster/values-external.yaml +kubectl patch storageclass general -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' +``` + +## Monitor progress: +``` shell +kubectl --namespace rook-ceph-external get cephcluster -w +``` + +## Should return when finished: +``` shell +NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH EXTERNAL FSID +rook-ceph-external /var/lib/rook 3 3m24s Connected Cluster connected successfully HEALTH_OK true 
d45869e0-ccdf-11ee-8177-1d25f5ec2433 +``` diff --git a/docs/storage-ceph-rook-internal.md b/docs/storage-ceph-rook-internal.md new file mode 100644 index 00000000..dc14c680 --- /dev/null +++ b/docs/storage-ceph-rook-internal.md @@ -0,0 +1,41 @@ +# Rook (Ceph) - In Cluster + +## Deploy the Rook operator + +``` shell +kubectl apply -k /opt/genestack/kustomize/rook-operator/ +``` + +## Deploy the Rook cluster + +> [!IMPORTANT] +> Rook will deploy against nodes labeled `role=storage-node`. Make sure to have a look at the `/opt/genestack/kustomize/rook-cluster/rook-cluster.yaml` file to ensure it's setup to your liking, pay special attention to your `deviceFilter` +settings, especially if different devices have different device layouts. + +``` shell +kubectl apply -k /opt/genestack/kustomize/rook-cluster/ +``` + +## Validate the cluster is operational + +``` shell +kubectl --namespace rook-ceph get cephclusters.ceph.rook.io +``` + +> You can track the deployment with the following command `kubectl --namespace rook-ceph get pods -w`. + +## Create Storage Classes + +Once the rook cluster is online with a HEALTH status of `HEALTH_OK`, deploy the filesystem, storage-class, and pool defaults. + +``` shell +kubectl apply -k /opt/genestack/kustomize/rook-defaults +``` +> [!IMPORTANT] +> If installing prometheus after rook-ceph is installed, you may patch a running rook-ceph cluster with the following command: +``` shell +kubectl -n rook-ceph patch CephCluster rook-ceph --type=merge -p "{\"spec\": {\"monitoring\": {\"enabled\": true}}}" +``` + +Ensure you have 'servicemonitors' defined in the rook-ceph namespace. + diff --git a/docs/storage-nfs-external.md b/docs/storage-nfs-external.md new file mode 100644 index 00000000..d3addcd7 --- /dev/null +++ b/docs/storage-nfs-external.md @@ -0,0 +1,49 @@ +# NFS - External + +While NFS in K8S works great, it's not suitable for use in all situations. + +> Example: NFS is officially not supported by MariaDB and will fail to initialize the database backend when running on NFS. + +In Genestack, the `general` storage class is used by default for systems like RabbitMQ and MariaDB. If you intend to use NFS, you will need to ensure your use cases match the workloads and may need to make some changes within the manifests. + +## Install Base Packages + +NFS requires utilities to be installed on the host. Before you create workloads that require NFS make sure you have `nfs-common` installed on your target storage hosts (e.g. the controllers). + +## Add the NFS Provisioner Helm repo + +``` shell +helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/ +``` + +## Install External NFS Provisioner + +This command will connect to the external storage provider and generate a storage class that services the `general` storage class. + +``` shell +helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \ + --namespace nfs-provisioner \ + --create-namespace \ + --set nfs.server=172.16.27.67 \ + --set nfs.path=/mnt/storage/k8s \ + --set nfs.mountOptions={"nolock"} \ + --set storageClass.defaultClass=true \ + --set replicaCount=1 \ + --set storageClass.name=general \ + --set storageClass.provisionerName=nfs-provisioner-01 +``` + +This command will connect to the external storage provider and generate a storage class that services the `general-multi-attach` storage class. 
+
+``` shell
+helm install nfs-subdir-external-provisioner-multi nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
+    --namespace nfs-provisioner \
+    --create-namespace \
+    --set nfs.server=172.16.27.67 \
+    --set nfs.path=/mnt/storage/k8s \
+    --set nfs.mountOptions={"nolock"} \
+    --set replicaCount=1 \
+    --set storageClass.name=general-multi-attach \
+    --set storageClass.provisionerName=nfs-provisioner-02 \
+    --set storageClass.accessModes=ReadWriteMany
+```
diff --git a/docs/storage-overview.md b/docs/storage-overview.md
new file mode 100644
index 00000000..26d611bc
--- /dev/null
+++ b/docs/storage-overview.md
@@ -0,0 +1,17 @@
+# Persistent Storage Demo
+
+[![asciicast](https://asciinema.org/a/629785.svg)](https://asciinema.org/a/629785)
+
+## Deploying Your Persistent Storage
+
+For the basic needs of our Kubernetes environment, we need some basic persistent storage. Storage, like anything good in life,
+is a choose your own adventure ecosystem, so feel free to ignore this section if you have something else that satisfies the need.
+
+The basic needs of Genestack are the following storage classes:
+
+* general - a general storage cluster which is set as the default.
+* general-multi-attach - a multi-read/write storage backend
+
+These `StorageClass` types are needed by various systems; however, how you get to these storage classes is totally up to you.
+The following sections provide a means to manage storage and provide our needed `StorageClass` types. While there may be many
+persistent storage options, not all of them are needed.
diff --git a/docs/storage-topolvm.md b/docs/storage-topolvm.md
new file mode 100644
index 00000000..65773391
--- /dev/null
+++ b/docs/storage-topolvm.md
@@ -0,0 +1,25 @@
+# TopoLVM - In Cluster
+
+[TopoLVM](https://github.com/topolvm/topolvm) is a capacity-aware storage provisioner which can make use of physical volumes.
+
+The following steps are one way to set it up; however, consult the [documentation](https://github.com/topolvm/topolvm/blob/main/docs/getting-started.md) for a full breakdown of everything possible with TopoLVM.
+
+## Create the target volume group on your hosts
+
+TopoLVM requires access to a volume group on the physical host, which means we need to set up a volume group on our hosts. By default, TopoLVM will use the controllers as storage hosts. The genestack Kustomize solution sets the general storage volume group to `vg-general`. This value can be changed within the Kustomize overlay found at `kustomize/topolvm/general/kustomization.yaml`.
+
+> Simple example showing how to create the needed volume group.
+
+``` shell
+# NOTE sdX is a placeholder for a physical drive or partition.
+pvcreate /dev/sdX
+vgcreate vg-general /dev/sdX
+```
+
+Once the volume group exists on your storage nodes, they are ready for use.
+
+### Deploy the TopoLVM Provisioner
+
+``` shell
+kubectl kustomize --enable-helm /opt/genestack/kustomize/topolvm/general | kubectl apply -f -
+```
diff --git a/mkdocs.yml b/mkdocs.yml
index ef070e3f..59fbf604 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -1,11 +1,11 @@
 ---
 site_name: Genestack
 site_description: >-
-  Genestack — where Kubernetes and OpenStack tango in the cloud. Imagine a waltz between systems that deploy what you need.
+  Genestack — by Rackspace, where Kubernetes and OpenStack tango in the cloud. Imagine a waltz between systems that deploy what you need.
theme: name: material - + logo: assets/images/genestack-cropped-small.png palette: - media: "(prefers-color-scheme: light)" scheme: default @@ -41,13 +41,38 @@ theme: - search.suggest - toc.follow +extra_css: + - stylesheets/adr.css + plugins: + # - blog - search + - swagger-ui-tag + - mkdocs-material-adr/adr + - glightbox markdown_extensions: - admonition - attr_list - def_list + - pymdownx.tasklist: + custom_checkbox: true + - pymdownx.superfences: + custom_fences: + - name: python + class: python + validator: !!python/name:markdown_exec.validator + format: !!python/name:markdown_exec.formatter + - name: mermaid + class: mermaid + format: !!python/name:pymdownx.superfences.fence_code_format + - pymdownx.emoji: + emoji_index: !!python/name:material.extensions.emoji.twemoji + emoji_generator: !!python/name:material.extensions.emoji.to_svg + - pymdownx.highlight: + anchor_linenums: true + line_spans: __span + pygments_lang_class: true repo_name: rackerlabs/genestack repo_url: https://github.com/rackerlabs/genestack @@ -55,6 +80,72 @@ dev_addr: "127.0.0.1:8001" edit_uri: "edit/main/docs" nav: - - Documentation: 'index.md' + - Welcome: index.md - Components: components.md - - Quickstart: quickstart.md + - Quickstart: + - Building Virtual Environments: build-test-envs.md + - Simple Setup: quickstart.md + - Deployment Guide: + - Getting Started: genestack-getting-started.md + - Building the Cloud: + - Kubernetes: + - K8s Overview: k8s-overview.md + - Kubespray: k8s-kubespray.md + - Post Deployment: k8s-postdeploy.md + - Upgrade: k8s-kubespray-upgrade.md + - Retrieve kube config: k8s-config.md + - Storage: + - Storage Overview: storage-overview.md + - Ceph Internal: storage-ceph-rook-internal.md + - Ceph External: storage-ceph-rook-external.md + - NFS External: storage-nfs-external.md + - TopoLVM: storage-topolvm.md + - Monitoring: + - Monitoring Overview: prometheus.md + - Secrets: + - Vault Overview: vault.md + - Vault Operator: vault-secrets-operator.md + - Infrastructure: + - Infrastructure Overview: infrastructure-overview.md + - Namespace: infrastructure-namespace.md + - Ingress: infrastructure-ingress.md + - MariaDB: + - MariaDB Overview: infrastructure-mariadb.md + - MySQL Exporter: prometheus-mysql-exporter.md + - Connecting to the Database: infrastructure-mariadb-connect.md + - RabbitMQ: infrastructure-rabbitmq.md + - Memcached: infrastructure-memcached.md + - Libvirt: infrastructure-libvirt.md + - OVN: + - OVN Overview: infrastructure-ovn.md + - Setup: infrastructure-ovn-setup.md + - Database Backup: infrastructure-ovn-db-backup.md + - MetalLB: infrastructure-metallb.md + - RabbitMQ Exporter: prometheus-rabbitmq-exporter.md + - OpenStack: + - OpenStack Overview: openstack-overview.md + - Prepare OpenStack: openstack-helm-make.md + - OpenStack Services: + - Keystone: + - Keystone Overview: openstack-keystone.md + - Federation: openstack-keystone-federation.md + - Glance: + - Glance Overview: openstack-glance.md + - Images: openstack-glance-images.md + - Heat: openstack-heat.md + - Cinder: openstack-cinder.md + - Compute Kit: + - Compute Kit Overview: openstack-compute-kit.md + - Flavors: openstack-flavors.md + - Creating Networks: openstack-neutron-networks.md + - Dashboard: + - Horizon: openstack-horizon.md + - skyline: openstack-skyline.md + - Octavia: openstack-octavia.md + - Openstack Exporter: prometheus-openstack-metrics-exporter.md + - OpenStack Clouds YAML: openstack-clouds.md + - Operational Guide: + - Building Local Images: build-local-images.md + - Running Genestack Upgrade: 
genestack-upgrade.md + - Third Party Tools: + - OSIE: extra-osie.md