diff --git a/README.md b/README.md index 50b08d61..93cc369f 100644 --- a/README.md +++ b/README.md @@ -5,6 +5,8 @@ Genestack — where Kubernetes and OpenStack tango in the cloud. Imagine a waltz between systems that deploy what you need. +## Documentation +[Genestack Documentation](https://rackerlabs.github.io/genestack/) ## Included/Required Components * Kubernetes: diff --git a/docs/Create-Persistent-Storage.md b/docs/Create-Persistent-Storage.md new file mode 100644 index 00000000..dd1d21dd --- /dev/null +++ b/docs/Create-Persistent-Storage.md @@ -0,0 +1,207 @@ +# Persistent Storage Demo + +[![asciicast](https://asciinema.org/a/629785.svg)](https://asciinema.org/a/629785) + +# Deploying Your Persistent Storage + +For the basic needs of our Kubernetes environment, we need some basic persistent storage. Storage, like anything good in life, +is a choose your own adventure ecosystem, so feel free to ignore this section if you have something else that satisfies the need. + +The basis needs of Genestack are the following storage classes + +* general - a general storage cluster which is set as the deault. +* general-multi-attach - a multi-read/write storage backend + +These `StorageClass` types are needed by various systems; however, how you get to these storage classes is totally up to you. +The following sections provide a means to manage storage and provide our needed `StorageClass` types. + +> The following sections are not all needed; they're just references. + +## Rook (Ceph) - In Cluster + +### Deploy the Rook operator + +``` shell +kubectl apply -k /opt/genestack/kustomize/rook-operator/ +``` + +### Deploy the Rook cluster + +> [!IMPORTANT] +> Rook will deploy against nodes labeled `role=storage-node`. Make sure to have a look at the `/opt/genestack/kustomize/rook-cluster/rook-cluster.yaml` file to ensure it's setup to your liking, pay special attention to your `deviceFilter` +settings, especially if different devices have different device layouts. + +``` shell +kubectl apply -k /opt/genestack/kustomize/rook-cluster/ +``` + +### Validate the cluster is operational + +``` shell +kubectl --namespace rook-ceph get cephclusters.ceph.rook.io +``` + +> You can track the deployment with the following command `kubectl --namespace rook-ceph get pods -w`. + +### Create Storage Classes + +Once the rook cluster is online with a HEALTH status of `HEALTH_OK`, deploy the filesystem, storage-class, and pool defaults. + +``` shell +kubectl apply -k /opt/genestack/kustomize/rook-defaults +``` + + + +## Cephadm/ceph-ansible/Rook (Ceph) - External + +We can use an external ceph cluster and present it via rook-ceph to your cluster. + +### Prepare pools on external cluster + +``` shell +ceph osd pool create general 32 +ceph osd pool create general-multi-attach-data 32 +ceph osd pool create general-multi-attach-metadata 32 +rbd pool init general +ceph fs new general-multi-attach general-multi-attach-metadata general-multi-attach-data +``` + +### You must have a MDS service running, in this example I am tagging my 3 ceph nodes with MDS labels and creating a MDS service for the general-multi-attach Cephfs Pool + +``` shell +ceph orch host label add genestack-ceph1 mds +ceph orch host label add genestack-ceph2 mds +ceph orch host label add genestack-ceph3 mds +ceph orch apply mds myfs label:mds +``` + +### We will now download create-external-cluster-resources.py and create exports to run on your controller node. 
Using cephadm in this example: + +``` shell +./cephadm shell +yum install wget -y ; wget https://raw.githubusercontent.com/rook/rook/release-1.12/deploy/examples/create-external-cluster-resources.py +python3 create-external-cluster-resources.py --rbd-data-pool-name general --cephfs-filesystem-name general-multi-attach --namespace rook-ceph-external --format bash +``` +### Copy and paste the output, here is an example: +``` shell +root@genestack-ceph1:/# python3 create-external-cluster-resources.py --rbd-data-pool-name general --cephfs-filesystem-name general-multi-attach --namespace rook-ceph-external --format bash +export NAMESPACE=rook-ceph-external +export ROOK_EXTERNAL_FSID=d45869e0-ccdf-11ee-8177-1d25f5ec2433 +export ROOK_EXTERNAL_USERNAME=client.healthchecker +export ROOK_EXTERNAL_CEPH_MON_DATA=genestack-ceph1=10.1.1.209:6789 +export ROOK_EXTERNAL_USER_SECRET=AQATh89lf5KiBBAATgaOGAMELzPOIpiCg6ANfA== +export ROOK_EXTERNAL_DASHBOARD_LINK=https://10.1.1.209:8443/ +export CSI_RBD_NODE_SECRET=AQATh89l3AJjBRAAYD+/cuf3XPdMBmdmz4iWIA== +export CSI_RBD_NODE_SECRET_NAME=csi-rbd-node +export CSI_RBD_PROVISIONER_SECRET=AQATh89l9dH4BRAApBKzqwtaUqw9bNcBI/iGGw== +export CSI_RBD_PROVISIONER_SECRET_NAME=csi-rbd-provisioner +export CEPHFS_POOL_NAME=general-multi-attach-data +export CEPHFS_METADATA_POOL_NAME=general-multi-attach-metadata +export CEPHFS_FS_NAME=general-multi-attach +export CSI_CEPHFS_NODE_SECRET=AQATh89lFeqMBhAAJpHAE5vtukXYuRj2+WTh2g== +export CSI_CEPHFS_PROVISIONER_SECRET=AQATh89lHB0dBxAA7CHM/9rTSs79SLJSKVBYeg== +export CSI_CEPHFS_NODE_SECRET_NAME=csi-cephfs-node +export CSI_CEPHFS_PROVISIONER_SECRET_NAME=csi-cephfs-provisioner +export MONITORING_ENDPOINT=10.1.1.209 +export MONITORING_ENDPOINT_PORT=9283 +export RBD_POOL_NAME=general +export RGW_POOL_PREFIX=default +``` + +### Run the following commands to import the cluster after pasting in exports from external cluster +``` shell +kubectl apply -k /opt/genestack/kustomize/rook-operator/ +/opt/genestack/scripts/import-external-cluster.sh +helm repo add rook-release https://charts.rook.io/release +helm install --create-namespace --namespace rook-ceph-external rook-ceph-cluster --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster -f /opt/genestack/submodules/rook/deploy/charts/rook-ceph-cluster/values-external.yaml +kubectl patch storageclass general -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' +``` + +### Monitor progress: +``` shell +kubectl --namespace rook-ceph-external get cephcluster -w +``` + +### Should return when finished: +``` shell +NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH EXTERNAL FSID +rook-ceph-external /var/lib/rook 3 3m24s Connected Cluster connected successfully HEALTH_OK true d45869e0-ccdf-11ee-8177-1d25f5ec2433 +``` + + + +## NFS - External + +While NFS in K8S works great, it's not suitable for use in all situations. + +> Example: NFS is officially not supported by MariaDB and will fail to initialize the database backend when running on NFS. + +In Genestack, the `general` storage class is used by default for systems like RabbitMQ and MariaDB. If you intend to use NFS, you will need to ensure your use cases match the workloads and may need to make some changes within the manifests. + +### Install Base Packages + +NFS requires utilities to be installed on the host. Before you create workloads that require NFS make sure you have `nfs-common` installed on your target storage hosts (e.g. the controllers). 
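For example, on Ubuntu hosts the client utilities can be installed with apt (a sketch only; the package name may differ on other distributions):

``` shell
# Install the NFS client utilities on each target storage host (Ubuntu/Debian).
apt update && apt install -y nfs-common
```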
+ +### Add the NFS Provisioner Helm repo + +``` shell +helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/ +``` + +### Install External NFS Provisioner + +This command will connect to the external storage provider and generate a storage class that services the `general` storage class. + +``` shell +helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \ + --namespace nfs-provisioner \ + --create-namespace \ + --set nfs.server=172.16.27.67 \ + --set nfs.path=/mnt/storage/k8s \ + --set nfs.mountOptions={"nolock"} \ + --set storageClass.defaultClass=true \ + --set replicaCount=1 \ + --set storageClass.name=general \ + --set storageClass.provisionerName=nfs-provisioner-01 +``` + +This command will connect to the external storage provider and generate a storage class that services the `general-multi-attach` storage class. + +``` shell +helm install nfs-subdir-external-provisioner-multi nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \ + --namespace nfs-provisioner \ + --create-namespace \ + --set nfs.server=172.16.27.67 \ + --set nfs.path=/mnt/storage/k8s \ + --set nfs.mountOptions={"nolock"} \ + --set replicaCount=1 \ + --set storageClass.name=general-multi-attach \ + --set storageClass.provisionerName=nfs-provisioner-02 \ + --set storageClass.accessModes=ReadWriteMany +``` + +## TopoLVM - In Cluster + +[TopoLVM](https://github.com/topolvm/topolvm) is a capacity aware storage provisioner which can make use of physical volumes.\ +The following steps are one way to set it up, however, consult the [documentation](https://github.com/topolvm/topolvm/blob/main/docs/getting-started.md) for a full breakdown of everything possible with TopoLVM. + +### Create the target volume group on your hosts + +TopoLVM requires access to a volume group on the physical host to work, which means we need to set up a volume group on our hosts. By default, TopoLVM will use the controllers as storage hosts. The genestack Kustomize solution sets the general storage volume group to `vg-general`. This value can be changed within Kustomize found at `kustomize/topolvm/general/kustomization.yaml`. + +> Simple example showing how to create the needed volume group. + +``` shell +# NOTE sdX is a placeholder for a physical drive or partition. +pvcreate /dev/sdX +vgcreate vg-general /dev/sdX +``` + +Once the volume group is on your storage nodes, the node is ready for use. + +### Deploy the TopoLVM Provisioner + +``` shell +kubectl kustomize --enable-helm /opt/genestack/kustomize/topolvm/general | kubectl apply -f - +``` diff --git a/docs/Deploy-Openstack.md b/docs/Deploy-Openstack.md new file mode 100644 index 00000000..dfeebbf8 --- /dev/null +++ b/docs/Deploy-Openstack.md @@ -0,0 +1,702 @@ +# Building the cloud + +From this point forward we're building our OpenStack cloud. The following commands will leverage `helm` as the package manager and `kustomize` as our configuration management backend. + +## Deployment choices + +When you're building the cloud, you have a couple of deployment choices, the most fundamental of which is `base` or `aio`. + +* `base` creates a production-ready environment that ensures an HA system is deployed across the hardware available in your cloud. +* `aio` creates a minimal cloud environment which is suitable for test, which may have low resources. 
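In practice this choice shows up as the `--post-renderer-args` value passed to each Helm release later in this guide. A hypothetical Keystone fragment illustrating the two footprints (not a complete command):

``` shell
# Production-style HA footprint, used throughout the examples in this guide.
--post-renderer-args keystone/base

# Minimal footprint for low-resource test environments.
--post-renderer-args keystone/aio
```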
+ +The following examples all assume the use of a production environment, however, if you change `base` to `aio`, the deployment footprint will be changed for a given service. + +## The DNA of our services + +The DNA of the OpenStack services has been built to scale, and be managed in a pseudo light-outs environment. We're aiming to empower operators to do more, simply and easily. Here are the high-level talking points about the way we've structured our applications. + +* All services make use of our core infrastructure which is all managed by operators. +* Backups, rollbacks, and package management all built into our applications delivery. +* Databases, users, and grants are all run against a MariaDB Galera cluster which is setup for OpenStack to use a single right, and read from many. + * The primary node is part of application service discovery and will be automatically promoted / demoted within the cluster as needed. +* Queues, permissions, vhosts, and users are all backed by a RabbitMQ cluster with automatic failover. All of the queues deployed in the environment are done with Quorum queues, giving us a best of bread queing platform which gracefully recovers from faults while maintaining performance. +* Horizontal scaling groups have been applied to all of our services. This means we'll be able to auto scale API applications up and down based on the needs of the environment. + +## Deploy Keystone + +[![asciicast](https://asciinema.org/a/629802.svg)](https://asciinema.org/a/629802) + +### Create secrets. + +``` shell +kubectl --namespace openstack \ + create secret generic keystone-rabbitmq-password \ + --type Opaque \ + --from-literal=username="keystone" \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" +kubectl --namespace openstack \ + create secret generic keystone-db-password \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic keystone-admin \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic keystone-credential-keys \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +``` + +### Run the package deployment + +``` shell +cd /opt/genestack/submodules/openstack-helm + +helm upgrade --install keystone ./keystone \ + --namespace=openstack \ + --wait \ + --timeout 120m \ + -f /opt/genestack/helm-configs/keystone/keystone-helm-overrides.yaml \ + --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.keystone.password="$(kubectl --namespace openstack get secret keystone-db-password -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_messaging.auth.keystone.password="$(kubectl --namespace openstack get secret keystone-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \ + --post-renderer /opt/genestack/kustomize/kustomize.sh \ + --post-renderer-args keystone/base +``` + +> In a production like environment you 
may need to include production specific files like the example variable file found in + `helm-configs/prod-example-openstack-overrides.yaml`. + +> NOTE: The image used here allows the system to run with RXT global authentication federation. + The federated plugin can be seen here, https://github.com/cloudnull/keystone-rxt + +Deploy the openstack admin client pod (optional) + +``` shell +kubectl --namespace openstack apply -f /opt/genestack/manifests/utils/utils-openstack-client-admin.yaml +``` + +### Validate functionality + +``` shell +kubectl --namespace openstack exec -ti openstack-admin-client -- openstack user list +``` + +## Deploy Glance + +[![asciicast](https://asciinema.org/a/629806.svg)](https://asciinema.org/a/629806) + +### Create secrets. + +``` shell +kubectl --namespace openstack \ + create secret generic glance-rabbitmq-password \ + --type Opaque \ + --from-literal=username="glance" \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" +kubectl --namespace openstack \ + create secret generic glance-db-password \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic glance-admin \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +``` + +> Before running the Glance deployment you should configure the backend which is defined in the + `helm-configs/glance/glance-helm-overrides.yaml` file. The default is a making the assumption we're running with Ceph deployed by + Rook so the backend is configured to be cephfs with multi-attach functionality. While this works great, you should consider all of + the available storage backends and make the right decision for your environment. + +### Run the package deployment + +``` shell +cd /opt/genestack/submodules/openstack-helm + +helm upgrade --install glance ./glance \ + --namespace=openstack \ + --wait \ + --timeout 120m \ + -f /opt/genestack/helm-configs/glance/glance-helm-overrides.yaml \ + --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.glance.password="$(kubectl --namespace openstack get secret glance-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.glance.password="$(kubectl --namespace openstack get secret glance-db-password -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_messaging.auth.glance.password="$(kubectl --namespace openstack get secret glance-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \ + --post-renderer /opt/genestack/kustomize/kustomize.sh \ + --post-renderer-args glance/base +``` + +> In a production like environment you may need to include production specific files like the example variable file found in + `helm-configs/prod-example-openstack-overrides.yaml`. + +> Note that the defaults disable `storage_init` because we're using **pvc** as the image backend + type. In production this should be changed to swift. 
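If you want to exercise the image service end to end before moving on, a hedged example using a small CirrOS image follows; the image version and URL are illustrative, and it assumes outbound internet access plus the admin client pod deployed in the Keystone step (with `tar` available for `kubectl cp`):

``` shell
# Download a small test image, copy it into the client pod, then upload it to Glance.
wget https://download.cirros-cloud.net/0.6.2/cirros-0.6.2-x86_64-disk.img
kubectl --namespace openstack cp cirros-0.6.2-x86_64-disk.img openstack-admin-client:/tmp/cirros.img
kubectl --namespace openstack exec -ti openstack-admin-client -- \
  openstack image create --disk-format qcow2 --container-format bare \
    --file /tmp/cirros.img --public cirros-test
```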
+ +### Validate functionality + +``` shell +kubectl --namespace openstack exec -ti openstack-admin-client -- openstack image list +``` + +## Deploy Heat + +[![asciicast](https://asciinema.org/a/629807.svg)](https://asciinema.org/a/629807) + +### Create secrets + +``` shell +kubectl --namespace openstack \ + create secret generic heat-rabbitmq-password \ + --type Opaque \ + --from-literal=username="heat" \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" +kubectl --namespace openstack \ + create secret generic heat-db-password \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic heat-admin \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic heat-trustee \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic heat-stack-user \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +``` + +### Run the package deployment + +``` shell +cd /opt/genestack/submodules/openstack-helm + +helm upgrade --install heat ./heat \ + --namespace=openstack \ + --timeout 120m \ + -f /opt/genestack/helm-configs/heat/heat-helm-overrides.yaml \ + --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.heat.password="$(kubectl --namespace openstack get secret heat-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.heat_trustee.password="$(kubectl --namespace openstack get secret heat-trustee -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.heat_stack_user.password="$(kubectl --namespace openstack get secret heat-stack-user -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.heat.password="$(kubectl --namespace openstack get secret heat-db-password -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_messaging.auth.heat.password="$(kubectl --namespace openstack get secret heat-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \ + --post-renderer /opt/genestack/kustomize/kustomize.sh \ + --post-renderer-args heat/base +``` + +> In a production like environment you may need to include production specific files like the example variable file found in + `helm-configs/prod-example-openstack-overrides.yaml`. 
+ +### Validate functionality + +``` shell +kubectl --namespace openstack exec -ti openstack-admin-client -- openstack --os-interface internal orchestration service list +``` + +## Deploy Cinder + +[![asciicast](https://asciinema.org/a/629808.svg)](https://asciinema.org/a/629808) + +### Create secrets + +``` shell +kubectl --namespace openstack \ + create secret generic cinder-rabbitmq-password \ + --type Opaque \ + --from-literal=username="cinder" \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" +kubectl --namespace openstack \ + create secret generic cinder-db-password \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic cinder-admin \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +``` + +### Run the package deployment + +``` shell +cd /opt/genestack/submodules/openstack-helm + +helm upgrade --install cinder ./cinder \ + --namespace=openstack \ + --wait \ + --timeout 120m \ + -f /opt/genestack/helm-configs/cinder/cinder-helm-overrides.yaml \ + --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.cinder.password="$(kubectl --namespace openstack get secret cinder-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.cinder.password="$(kubectl --namespace openstack get secret cinder-db-password -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_messaging.auth.cinder.password="$(kubectl --namespace openstack get secret cinder-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \ + --post-renderer /opt/genestack/kustomize/kustomize.sh \ + --post-renderer-args cinder/base +``` + +> In a production like environment you may need to include production specific files like the example variable file found in + `helm-configs/prod-example-openstack-overrides.yaml`. + +Once the helm deployment is complete cinder and all of it's API services will be online. However, using this setup there will be +no volume node at this point. The reason volume deployments have been disabled is because we didn't expose ceph to the openstack +environment and OSH makes a lot of ceph related assumptions. For testing purposes we're wanting to run with the logical volume +driver (reference) and manage the deployment of that driver in a hybrid way. As such there's a deployment outside of our normal +K8S workflow will be needed on our volume host. + +> The LVM volume makes the assumption that the storage node has the required volume group setup `lvmdriver-1` on the node + This is not something that K8S is handling at this time. + +While cinder can run with a great many different storage backends, for the simple case we want to run with the Cinder reference +driver, which makes use of Logical Volumes. Because this driver is incompatible with a containerized work environment, we need +to run the services on our baremetal targets. 
Genestack has a playbook which will facilitate the installation of our services
and ensure that we've deployed everything in working order. The playbook can be found at `playbooks/deploy-cinder-volumes-reference.yaml`.
Included in the playbooks directory is an example inventory for our cinder hosts; however, any inventory should work fine.

#### Host Setup

The cinder target hosts need some basic setup run on them to make them compatible with our Logical Volume Driver.

1. Ensure DNS is working normally.

Assuming your storage node was also deployed as a K8S node when we did our initial Kubernetes deployment, the DNS should already be
operational for you; however, in the event you need to do some manual tweaking or if the node was not deployed as a K8S worker, then
make sure you set up the DNS resolvers correctly so that your volume service node can communicate with our cluster.

> This is expected to be our CoreDNS IP; in my case this is `169.254.25.10`.

This is an example of my **systemd-resolved** configuration found in `/etc/systemd/resolved.conf`:

``` conf
[Resolve]
DNS=169.254.25.10
#FallbackDNS=
Domains=openstack.svc.cluster.local svc.cluster.local cluster.local
#LLMNR=no
#MulticastDNS=no
DNSSEC=no
Cache=no-negative
#DNSStubListener=yes
```

Restart your DNS service after changes are made.

``` shell
systemctl restart systemd-resolved.service
```

2. Volume Group `cinder-volumes-1` needs to be created, which can be done in two simple commands.

Create the physical volume

``` shell
pvcreate /dev/vdf
```

Create the volume group

``` shell
vgcreate cinder-volumes-1 /dev/vdf
```

It should be noted that this setup can be tweaked and tuned to your heart's desire; additionally, you can further extend a
volume group with multiple disks. The example above is just that, an example. Check out the upstream docs for how
to best operate your volume groups for your specific needs.

#### Hybrid Cinder Volume Deployment

With the volume groups and DNS set up on your target hosts, it is now time to deploy the volume services. The playbook `playbooks/deploy-cinder-volumes-reference.yaml` will be used to create a release target for our Python code base and deploy systemd service
units to run the cinder-volume process.

> [!IMPORTANT]
> Consider the **storage** network on your Cinder hosts that will be accessible to Nova compute hosts. By default, the playbook uses `ansible_default_ipv4.address` to configure the target address, which may or may not work for your environment. Append the variable, e.g., `-e cinder_storage_network_interface=ansible_br_mgmt`, to use the specified interface address in `cinder.conf` for `my_ip` and `target_ip_address` in `cinder/backends.conf`. **Interface names with a `-` must be entered with a `_` and be prefixed with `ansible`**.

##### Example without storage network interface override

``` shell
ansible-playbook -i inventory-example.yaml deploy-cinder-volumes-reference.yaml
```

Once the playbook has finished executing, check the Cinder API to verify functionality.
+ +``` shell +root@openstack-flex-node-0:~# kubectl --namespace openstack exec -ti openstack-admin-client -- openstack volume service list ++------------------+-------------------------------------------------+------+---------+-------+----------------------------+ +| Binary | Host | Zone | Status | State | Updated At | ++------------------+-------------------------------------------------+------+---------+-------+----------------------------+ +| cinder-scheduler | cinder-volume-worker | nova | enabled | up | 2023-12-26T17:43:07.000000 | +| cinder-volume | openstack-flex-node-4.cluster.local@lvmdriver-1 | nova | enabled | up | 2023-12-26T17:43:04.000000 | ++------------------+-------------------------------------------------+------+---------+-------+----------------------------+ +``` + +> Notice the volume service is up and running with our `lvmdriver-1` target. + +At this point it would be a good time to define your types within cinder. For our example purposes we need to define the `lvmdriver-1` +type so that we can schedule volumes to our environment. + +``` shell +root@openstack-flex-node-0:~# kubectl --namespace openstack exec -ti openstack-admin-client -- openstack volume type create lvmdriver-1 ++-------------+--------------------------------------+ +| Field | Value | ++-------------+--------------------------------------+ +| description | None | +| id | 6af6ade2-53ca-4260-8b79-1ba2f208c91d | +| is_public | True | +| name | lvmdriver-1 | ++-------------+--------------------------------------+ +``` + +### Validate functionality + +If wanted, create a test volume to tinker with + +``` shell +root@openstack-flex-node-0:~# kubectl --namespace openstack exec -ti openstack-admin-client -- openstack volume create --size 1 test ++---------------------+--------------------------------------+ +| Field | Value | ++---------------------+--------------------------------------+ +| attachments | [] | +| availability_zone | nova | +| bootable | false | +| consistencygroup_id | None | +| created_at | 2023-12-26T17:46:15.639697 | +| description | None | +| encrypted | False | +| id | c744af27-fb40-4ffa-8a84-b9f44cb19b2b | +| migration_status | None | +| multiattach | False | +| name | test | +| properties | | +| replication_status | None | +| size | 1 | +| snapshot_id | None | +| source_volid | None | +| status | creating | +| type | lvmdriver-1 | +| updated_at | None | +| user_id | 2ddf90575e1846368253474789964074 | ++---------------------+--------------------------------------+ + +root@openstack-flex-node-0:~# kubectl --namespace openstack exec -ti openstack-admin-client -- openstack volume list ++--------------------------------------+------+-----------+------+-------------+ +| ID | Name | Status | Size | Attached to | ++--------------------------------------+------+-----------+------+-------------+ +| c744af27-fb40-4ffa-8a84-b9f44cb19b2b | test | available | 1 | | ++--------------------------------------+------+-----------+------+-------------+ +``` + +You can validate the environment is operational by logging into the storage nodes to validate the LVM targets are being created. + +``` shell +root@openstack-flex-node-4:~# lvs + LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert + c744af27-fb40-4ffa-8a84-b9f44cb19b2b cinder-volumes-1 -wi-a----- 1.00g +``` + +## Create Compute Kit Secrets + +[![asciicast](https://asciinema.org/a/629813.svg)](https://asciinema.org/a/629813) + +### Creating the Compute Kit Secrets + +Part of running Nova is also running placement. 
Setup all credentials now so we can use them across the nova and placement services. + +``` shell +# Placement +kubectl --namespace openstack \ + create secret generic placement-db-password \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic placement-admin \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +``` + +``` shell +# Nova +kubectl --namespace openstack \ + create secret generic nova-db-password \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic nova-admin \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic nova-rabbitmq-password \ + --type Opaque \ + --from-literal=username="nova" \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" +``` + +``` shell +# Ironic (NOT IMPLEMENTED YET) +kubectl --namespace openstack \ + create secret generic ironic-admin \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +``` + +``` shell +# Designate (NOT IMPLEMENTED YET) +kubectl --namespace openstack \ + create secret generic designate-admin \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +``` + +``` shell +# Neutron +kubectl --namespace openstack \ + create secret generic neutron-rabbitmq-password \ + --type Opaque \ + --from-literal=username="neutron" \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" +kubectl --namespace openstack \ + create secret generic neutron-db-password \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic neutron-admin \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +``` + +### Deploy Placement + +``` shell +cd /opt/genestack/submodules/openstack-helm + +helm upgrade --install placement ./placement --namespace=openstack \ + --namespace=openstack \ + --timeout 120m \ + -f /opt/genestack/helm-configs/placement/placement-helm-overrides.yaml \ + --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.placement.password="$(kubectl --namespace openstack get secret placement-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.placement.password="$(kubectl --namespace openstack get secret placement-db-password -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.nova_api.password="$(kubectl --namespace openstack get secret nova-db-password -o jsonpath='{.data.password}' | base64 -d)" \ + --post-renderer /opt/genestack/kustomize/kustomize.sh \ + --post-renderer-args placement/base +``` + +### Deploy Nova + +``` shell +cd /opt/genestack/submodules/openstack-helm + +helm upgrade --install nova ./nova \ + --namespace=openstack \ + --timeout 120m \ + -f 
/opt/genestack/helm-configs/nova/nova-helm-overrides.yaml \
  --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \
  --set endpoints.identity.auth.nova.password="$(kubectl --namespace openstack get secret nova-admin -o jsonpath='{.data.password}' | base64 -d)" \
  --set endpoints.identity.auth.neutron.password="$(kubectl --namespace openstack get secret neutron-admin -o jsonpath='{.data.password}' | base64 -d)" \
  --set endpoints.identity.auth.ironic.password="$(kubectl --namespace openstack get secret ironic-admin -o jsonpath='{.data.password}' | base64 -d)" \
  --set endpoints.identity.auth.placement.password="$(kubectl --namespace openstack get secret placement-admin -o jsonpath='{.data.password}' | base64 -d)" \
  --set endpoints.identity.auth.cinder.password="$(kubectl --namespace openstack get secret cinder-admin -o jsonpath='{.data.password}' | base64 -d)" \
  --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \
  --set endpoints.oslo_db.auth.nova.password="$(kubectl --namespace openstack get secret nova-db-password -o jsonpath='{.data.password}' | base64 -d)" \
  --set endpoints.oslo_db_api.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \
  --set endpoints.oslo_db_api.auth.nova.password="$(kubectl --namespace openstack get secret nova-db-password -o jsonpath='{.data.password}' | base64 -d)" \
  --set endpoints.oslo_db_cell0.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \
  --set endpoints.oslo_db_cell0.auth.nova.password="$(kubectl --namespace openstack get secret nova-db-password -o jsonpath='{.data.password}' | base64 -d)" \
  --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \
  --set endpoints.oslo_messaging.auth.nova.password="$(kubectl --namespace openstack get secret nova-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \
  --post-renderer /opt/genestack/kustomize/kustomize.sh \
  --post-renderer-args nova/base
```

> In a production-like environment you may need to include production-specific files like the example variable file found in
  `helm-configs/prod-example-openstack-overrides.yaml`.

> NOTE: The above deployment runs with Ceph disabled. While the K8S infrastructure has Ceph,
  we're not exposing Ceph to our OpenStack environment.

If running in an environment that doesn't have hardware virtualization extensions, add the following two `--set` switches to the install command.

``` shell
--set conf.nova.libvirt.virt_type=qemu --set conf.nova.libvirt.cpu_mode=none
```
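After the chart settles, a quick functional check mirrors the validation pattern used for the other services (this assumes the admin client pod created during the Keystone step):

``` shell
# Compute services register as compute nodes come online; hypervisors appear once nova-compute is running.
kubectl --namespace openstack exec -ti openstack-admin-client -- openstack compute service list
kubectl --namespace openstack exec -ti openstack-admin-client -- openstack hypervisor list
```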
+ +### Deploy Neutron + +``` shell +cd /opt/genestack/submodules/openstack-helm + +helm upgrade --install neutron ./neutron \ + --namespace=openstack \ + --timeout 120m \ + -f /opt/genestack/helm-configs/neutron/neutron-helm-overrides.yaml \ + --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.neutron.password="$(kubectl --namespace openstack get secret neutron-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.nova.password="$(kubectl --namespace openstack get secret nova-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.placement.password="$(kubectl --namespace openstack get secret placement-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.designate.password="$(kubectl --namespace openstack get secret designate-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.ironic.password="$(kubectl --namespace openstack get secret ironic-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.neutron.password="$(kubectl --namespace openstack get secret neutron-db-password -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_messaging.auth.neutron.password="$(kubectl --namespace openstack get secret neutron-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \ + --set conf.neutron.ovn.ovn_nb_connection="tcp:$(kubectl --namespace kube-system get service ovn-nb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')" \ + --set conf.neutron.ovn.ovn_sb_connection="tcp:$(kubectl --namespace kube-system get service ovn-sb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')" \ + --set conf.plugins.ml2_conf.ovn.ovn_nb_connection="tcp:$(kubectl --namespace kube-system get service ovn-nb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')" \ + --set conf.plugins.ml2_conf.ovn.ovn_sb_connection="tcp:$(kubectl --namespace kube-system get service ovn-sb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')" \ + --post-renderer /opt/genestack/kustomize/kustomize.sh \ + --post-renderer-args neutron/base +``` + +> In a production like environment you may need to include production specific files like the example variable file found in + `helm-configs/prod-example-openstack-overrides.yaml`. + +> The above command derives the OVN north/south bound database from our K8S environment. The insert `set` is making the assumption we're using **tcp** to connect. 
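As with the other services, a simple check confirms the API is up and the agents are registered; with the OVN driver you should see OVN controller and metadata agents rather than the classic L3/DHCP agents (assumes the admin client pod):

``` shell
kubectl --namespace openstack exec -ti openstack-admin-client -- openstack network agent list
```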
+ +## Deploy Octavia + +[![asciicast](https://asciinema.org/a/629814.svg)](https://asciinema.org/a/629814) + +### Create secrets + +``` shell +kubectl --namespace openstack \ + create secret generic octavia-rabbitmq-password \ + --type Opaque \ + --from-literal=username="octavia" \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" +kubectl --namespace openstack \ + create secret generic octavia-db-password \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic octavia-admin \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +kubectl --namespace openstack \ + create secret generic octavia-certificates \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +``` + +### Run the package deployment + +``` shell +cd /opt/genestack/submodules/openstack-helm + +helm upgrade --install octavia ./octavia \ + --namespace=openstack \ + --wait \ + --timeout 120m \ + -f /opt/genestack/helm-configs/octavia/octavia-helm-overrides.yaml \ + --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.identity.auth.octavia.password="$(kubectl --namespace openstack get secret octavia-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.octavia.password="$(kubectl --namespace openstack get secret octavia-db-password -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_messaging.auth.admin.password="$(kubectl --namespace openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d)" \ + --set endpoints.oslo_messaging.auth.octavia.password="$(kubectl --namespace openstack get secret octavia-rabbitmq-password -o jsonpath='{.data.password}' | base64 -d)" \ + --set conf.octavia.certificates.ca_private_key_passphrase="$(kubectl --namespace openstack get secret octavia-certificates -o jsonpath='{.data.password}' | base64 -d)" \ + --set conf.octavia.ovn.ovn_nb_connection="tcp:$(kubectl --namespace kube-system get service ovn-nb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')" \ + --set conf.octavia.ovn.ovn_sb_connection="tcp:$(kubectl --namespace kube-system get service ovn-sb -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')" \ + --post-renderer /opt/genestack/kustomize/kustomize.sh \ + --post-renderer-args octavia/base +``` + +> In a production like environment you may need to include production specific files like the example variable file found in + `helm-configs/prod-example-openstack-overrides.yaml`. 
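For the validation step that follows, a typical smoke test is to query the load balancer API once the services report healthy; this assumes the admin client pod image includes the Octavia CLI plugin:

``` shell
# An empty list is expected on a fresh deployment; an error indicates the API or auth is not wired up yet.
kubectl --namespace openstack exec -ti openstack-admin-client -- openstack loadbalancer list
```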
+ +Now validate functionality + +``` shell + +``` + +## Deploy Horizon + +[![asciicast](https://asciinema.org/a/629815.svg)](https://asciinema.org/a/629815) + +### Create secrets + +``` shell +kubectl --namespace openstack \ + create secret generic horizon-secrete-key \ + --type Opaque \ + --from-literal=username="horizon" \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-64};echo;)" +kubectl --namespace openstack \ + create secret generic horizon-db-password \ + --type Opaque \ + --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" +``` + +### Run the package deployment + +``` shell +cd /opt/genestack/submodules/openstack-helm + +helm upgrade --install horizon ./horizon \ + --namespace=openstack \ + --wait \ + --timeout 120m \ + -f /opt/genestack/helm-configs/horizon/horizon-helm-overrides.yaml \ + --set endpoints.identity.auth.admin.password="$(kubectl --namespace openstack get secret keystone-admin -o jsonpath='{.data.password}' | base64 -d)" \ + --set conf.horizon.local_settings.config.horizon_secret_key="$(kubectl --namespace openstack get secret horizon-secrete-key -o jsonpath='{.data.root-password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.admin.password="$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d)" \ + --set endpoints.oslo_db.auth.horizon.password="$(kubectl --namespace openstack get secret horizon-db-password -o jsonpath='{.data.password}' | base64 -d)" \ + --post-renderer /opt/genestack/kustomize/kustomize.sh \ + --post-renderer-args horizon/base +``` + +> In a production like environment you may need to include production specific files like the example variable file found in + `helm-configs/prod-example-openstack-overrides.yaml`. + +## Deploy Skyline + +[![asciicast](https://asciinema.org/a/629816.svg)](https://asciinema.org/a/629816) + +Skyline is an alternative Web UI for OpenStack. If you deploy horizon there's no need for Skyline. + +### Create secrets + +Skyline is a little different because there's no helm integration. Given this difference the deployment is far simpler, and all secrets can be managed in one object. + +``` shell +kubectl --namespace openstack \ + create secret generic skyline-apiserver-secrets \ + --type Opaque \ + --from-literal=service-username="skyline" \ + --from-literal=service-password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" \ + --from-literal=service-domain="service" \ + --from-literal=service-project="service" \ + --from-literal=service-project-domain="service" \ + --from-literal=db-endpoint="mariadb-galera-primary.openstack.svc.cluster.local" \ + --from-literal=db-name="skyline" \ + --from-literal=db-username="skyline" \ + --from-literal=db-password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" \ + --from-literal=secret-key="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" \ + --from-literal=keystone-endpoint="http://keystone-api.openstack.svc.cluster.local:5000" \ + --from-literal=default-region="RegionOne" +``` + +> Note all the configuration is in this one secret, so be sure to set your entries accordingly. + +### Run the deployment + +> [!TIP] +> Pause for a moment to consider if you will be wanting to access Skyline via your ingress controller over a specific FQDN. If so, modify `/opt/genestack/kustomize/skyline/fqdn/kustomization.yaml` to suit your needs then use `fqdn` below in lieu of `base`... 
+ +``` shell +kubectl --namespace openstack apply -k /opt/genestack/kustomize/skyline/base +``` diff --git a/docs/build-k8s.md b/docs/build-k8s.md new file mode 100644 index 00000000..ebb2080d --- /dev/null +++ b/docs/build-k8s.md @@ -0,0 +1,209 @@ +# Kubernetes Deployment Demo + +[![asciicast](https://asciinema.org/a/629780.svg)](https://asciinema.org/a/629780) + +# Run The Genestack Kubernetes Deployment + +Genestack assumes Kubernetes is present and available to run workloads on. We don't really care how your Kubernetes was deployed or what flavor of Kubernetes you're running. For our purposes we're using Kubespray, but you do you. We just need the following systems in your environment. + +* Kube-OVN +* Persistent Storage +* MetalLB +* Ingress Controller + +If you have those three things in your environment, you should be fully compatible with Genestack. + +## Deployment Kubespray + +Currently only the k8s provider kubespray is supported and included as submodule into the code base. + +> Existing OpenStack Ansible inventory can be converted using the `/opt/genestack/scripts/convert_osa_inventory.py` + script which provides a `hosts.yml` + +### Before you Deploy + +Kubespray will be using OVN for all of the network functions, as such, you will need to ensure your hosts are ready to receive the deployment at a low level. While the Kubespray tooling will do a lot of prep and setup work to ensure success, you will need to prepare +your networking infrastructure and basic storage layout before running the playbooks. + +#### Minimum system requirements + +* 2 Network Interfaces + +> While we would expect the environment to be running with multiple bonds in a production cloud, two network interfaces is all that's required. This can be achieved with vlan tagged devices, physical ethernet devices, macvlan, or anything else. Have a look at the netplan example file found [here](https://github.com/rackerlabs/genestack/blob/main/etc/netplan/default-DHCP.yaml) for an example of how you could setup the network. + +* Ensure we're running kernel 5.17+ + +> While the default kernel on most modern operating systems will work, we recommend running with Kernel 6.2+. + +* Kernel modules + +> The Kubespray tool chain will attempt to deploy a lot of things, one thing is a set of `sysctl` options which will include bridge tunings. Given the tooling will assume bridging is functional, you will need to ensure the `br_netfilter` module is loaded or you're using a kernel that includes that functionality as a built-in. + +* Executable `/tmp` + +> The `/tmp` directory is used as a download and staging location within the environment. You will need to make sure that the `/tmp` is executable. By default, some kick-systems set the mount option **noexec**, if that is defined you should remove it before running the deployment. + +### Create your Inventory + +A default inventory file for kubespray is provided at `/etc/genestack/inventory` and must be modified. + +Checkout the [openstack-flex/prod-inventory-example.yaml](https://github.com/rackerlabs/genestack/blob/main/ansible/inventory/openstack-flex/inventory.yaml.example) file for an example of a target environment. + +> NOTE before you deploy the kubernetes cluster you should define the `kube_override_hostname` option in your inventory. + This variable will set the node name which we will want to be an FQDN. When you define the option, it should have the + same suffix defined in our `cluster_name` variable. 
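A hypothetical inventory fragment illustrating the idea; the host name, address, and domain are placeholders and the suffix should match your `cluster_name`:

``` yaml
all:
  hosts:
    openstack-flex-node-1:
      ansible_host: 10.0.0.11
      kube_override_hostname: openstack-flex-node-1.cluster.local
```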
+ +However, any Kubespray compatible inventory will work with this deployment tooling. The official [Kubespray documentation](https://kubespray.io) can be used to better understand the inventory options and requirements. Within the `ansible/playbooks/inventory` directory there is a directory named `openstack-flex` and `openstack-enterprise`. These directories provide everything we need to run a successful Kubernetes environment for genestack at scale. The difference between **enterprise** and **flex** are just target environment types. + +### Ensure systems have a proper FQDN Hostname + +Before running the Kubernetes deployment, make sure that all hosts have a properly configured FQDN. + +``` shell +source /opt/genestack/scripts/genestack.rc +ansible -m shell -a 'hostnamectl set-hostname {{ inventory_hostname }}' --become all +``` + +> NOTE in the above command I'm assuming the use of `cluster.local` this is the default **cluster_name** as defined in the + group_vars k8s_cluster file. If you change that option, make sure to reset your domain name on your hosts accordingly. + + +The ansible inventory is expected at `/etc/genestack/inventory` + +### Prepare hosts for installation + +``` shell +source /opt/genestack/scripts/genestack.rc +cd /opt/genestack/ansible/playbooks +``` + +> The RC file sets a number of environment variables that help ansible to run in a more easily to understand way. + +While the `ansible-playbook` command should work as is with the sourced environment variables, sometimes it's necessary to set some overrides on the command line. The following example highlights a couple of overrides that are generally useful. + +#### Example host setup playbook + +``` shell +ansible-playbook host-setup.yml +``` + +#### Example host setup playbook with overrides + +``` shell +# Example overriding things on the CLI +ansible-playbook host-setup.yml --inventory /etc/genestack/inventory/openstack-flex-inventory.yaml \ + --private-key ${HOME}/.ssh/openstack-flex-keypair.key +``` + +### Run the cluster deployment + +This is used to deploy kubespray against infra on an OpenStack cloud. If you're deploying on baremetal you will need to setup an inventory that meets your environmental needs. + +Change the directory to the kubespray submodule. + +``` shell +cd /opt/genestack/submodules/kubespray +``` + +Source your environment variables + +``` shell +source /opt/genestack/scripts/genestack.rc +``` + +> The RC file sets a number of environment variables that help ansible to run in a more easy to understand way. + +Once the inventory is updated and configuration altered (networking etc), the Kubernetes cluster can be initialized with + +``` shell +ansible-playbook cluster.yml +``` + +The cluster deployment playbook can also have overrides defined to augment how the playbook is executed. + +``` shell +ansible-playbook --inventory /etc/genestack/inventory/openstack-flex-inventory.yaml \ + --private-key /home/ubuntu/.ssh/openstack-flex-keypair.key \ + --user ubuntu \ + --become \ + cluster.yml +``` + +> Given the use of a venv, when running with `sudo` be sure to use the full path and pass through your environment variables; `sudo -E /home/ubuntu/.venvs/genestack/bin/ansible-playbook`. + +Once the cluster is online, you can run `kubectl` to interact with the environment. + + +### Optional - Remove taint from our Controllers + +In an environment with a limited set of control plane nodes removing the NoSchedule will allow you to converge the +openstack controllers with the k8s controllers. 
+ +``` shell +# Remote taint from control-plane nodes +kubectl taint nodes $(kubectl get nodes -o 'jsonpath={.items[*].metadata.name}') node-role.kubernetes.io/control-plane:NoSchedule- +``` + +### Optional - Deploy K8S Dashboard RBAC + +While the dashboard is installed you will have no ability to access it until we setup some basic RBAC. + +``` shell +kubectl apply -k /opt/genestack/kustomize/k8s-dashboard +``` + +You can now retrieve a permanent token. + +``` shell +kubectl get secret admin-user -n kube-system -o jsonpath={".data.token"} | base64 -d +``` + + +## Label all of the nodes in the environment + +> The following example assumes the node names can be used to identify their purpose within our environment. That + may not be the case in reality. Adapt the following commands to meet your needs. + +``` shell +# Label the storage nodes - optional and only used when deploying ceph for K8S infrastructure shared storage +kubectl label node $(kubectl get nodes | awk '/ceph/ {print $1}') role=storage-node + +# Label the openstack controllers +kubectl label node $(kubectl get nodes | awk '/controller/ {print $1}') openstack-control-plane=enabled + +# Label the openstack compute nodes +kubectl label node $(kubectl get nodes | awk '/compute/ {print $1}') openstack-compute-node=enabled + +# Label the openstack network nodes +kubectl label node $(kubectl get nodes | awk '/network/ {print $1}') openstack-network-node=enabled + +# Label the openstack storage nodes +kubectl label node $(kubectl get nodes | awk '/storage/ {print $1}') openstack-storage-node=enabled + +# With OVN we need the compute nodes to be "network" nodes as well. While they will be configured for networking, they wont be gateways. +kubectl label node $(kubectl get nodes | awk '/compute/ {print $1}') openstack-network-node=enabled + +# Label all workers - Recommended and used when deploying Kubernetes specific services +kubectl label node $(kubectl get nodes | awk '/worker/ {print $1}') node-role.kubernetes.io/worker=worker +``` + +Check the node labels + +``` shell +# Verify the nodes are operational and labled. +kubectl get nodes -o wide --show-labels=true +``` + +## Install Helm + +While `helm` should already be installed with the **host-setup** playbook, it is possible that you may need to install helm manually on nodes. There are lots of ways to install helm, check the upstream [docs](https://helm.sh/docs/intro/install/) to learn more about installing helm. + +### Run `make` for our helm components + +``` shell +cd /opt/genestack/submodules/openstack-helm && +make all + +cd /opt/genestack/submodules/openstack-helm-infra && +make all +``` diff --git a/docs/build-local-images.md b/docs/build-local-images.md new file mode 100644 index 00000000..33fac891 --- /dev/null +++ b/docs/build-local-images.md @@ -0,0 +1,48 @@ +## Optional - Building OVN with customer providers + +By default Octavia will run with Amphora, however, because we've OVN available to our environment we can also configure the OVN provider for use within the cluster. While the genestack defaults will include a container image that meets our needs, the following snippet will walk you through the manual build process making use of the internal kubernetes registry. + +``` shell +# Pre-made container files for build purposes can be found within the repo. +cd /opt/genestack/Containerfiles + +# Install buildah. +apt update +apt -y install buildah + +# Build the ovn integration into the ovn release image. Note the version argument. 
+# this option is variable and should be adjusted for your specific needs.
+buildah build -f OctaviaOVN-Containerfile --build-arg VERSION=master-ubuntu_jammy
+# List the local images to get the ID of the new image.
+buildah images
+
+REPOSITORY               TAG                  IMAGE ID         CREATED         SIZE
+                                              THISISTHENEWIMG  11 minutes ago  388 MB
+docker.io/loci/octavia   master-ubuntu_jammy  THISISTHEBASE    3 weeks ago     323 MB
+
+# Push the new image to our internal registry.
+buildah push --tls-verify=false THISISTHENEWIMG docker://registry.kube-system/octavia:ubuntu_jammy-ovn
+
+# You can validate that the image is present.
+curl -k https://registry.kube-system/v2/_catalog
+
+# Create an override file.
+cat > /opt/octavia-ovn-helm-overrides.yaml < /opt/registry.ca
+```
+
+> NOTE: The above commands make the assumption that you're running a docker registry within the kube-system namespace and are running the provided genestack ingress definition to support that environment. If you have a different registry, you will need to adjust the commands to fit your environment.
+
+Once the above commands have been executed, the file `/opt/octavia-ovn-helm-overrides.yaml` will be present and can be included in our helm command when we deploy Octavia.
+
+> If you're using the local registry with a self-signed certificate, you will need to include the CA `/opt/registry.ca` on all of your potential worker nodes so that the container image is able to be pulled.
diff --git a/docs/build-test-envs.md b/docs/build-test-envs.md
new file mode 100644
index 00000000..462279ec
--- /dev/null
+++ b/docs/build-test-envs.md
@@ -0,0 +1,112 @@
+# Lab Build Demo
+
+[![asciicast](https://asciinema.org/a/629776.svg)](https://asciinema.org/a/629776)
+
+The information on this page is only needed when building an environment in Virtual Machines.
+
+## Prerequisites
+
+Take a moment to orient yourself; there are a few items to consider before moving forward that will help you get underway.
+
+### Clone Genestack
+
+> Your local genestack repository will be transferred to the eventual launcher instance for convenience (_perfect for development_).
+See [[Getting Started|https://github.com/rackerlabs/genestack/wiki#getting-started]] for an example of how to recursively clone the repository and its submodules.
+
+### Create a VirtualEnv
+
+This is optional but always recommended. There are multiple tools for this, pick your poison.
+
+### Install Ansible Dependencies
+
+> Activate your venv if you're using one.
+
+```
+pip install ansible openstacksdk
+```
+
+### Configure openstack client
+
+The openstacksdk used by the ansible playbook needs a valid configuration for your environment to stand up the test resources.
+
+An example `clouds.yaml` that could be placed in [ansible/playbooks/](../../tree/main/ansible/playbooks):
+
+```
+cache:
+  auth: true
+  expiration_time: 3600
+clouds:
+  dfw:
+    auth:
+      auth_url: https://$YOUR_KEYSTONE_HOST/v3
+      project_name: $YOUR_PROJECT_ID
+      project_domain_name: $YOUR_PROJECT_DOMAIN
+      username: $YOUR_USER
+      password: $YOUR_PASS
+      user_domain_name: $YOUR_USER_DOMAIN
+    region_name:
+      - DFW3
+    interface: public
+    identity_api_version: "3"
+```
+
+See the configuration guide [[here|https://docs.openstack.org/openstacksdk/latest/user/config/configuration.html]] for more examples.
+
+## Create a Test Environment
+
+> This is used to deploy new infra on an existing OpenStack cloud. If you're deploying on baremetal, this document can be skipped.
+
+If deploying in a lab environment on an OpenStack cloud, you can run the `infra-deploy.yaml` playbook, which will create all of the resources needed to operate the test environment.
+
+Before running the `infra-deploy.yaml` playbook, be sure you have the required ansible collections installed.
+
+``` shell
+ansible-galaxy collection install -r ansible-collection-requirements.yml
+```
+
+Move to the ansible playbooks directory within Genestack.
+
+``` shell
+cd ansible/playbooks
+```
+
+Run the test infrastructure deployment.
+
+> Ensure `os_cloud_name` and the other values within your `infra-deploy.yaml` match a valid cloud name in your openstack configuration, along with valid resource names within that cloud.
+
+> [!IMPORTANT]
+> Pay close attention to the values for both `kube_ovn_iface` and `kube_ovn_default_interface_name`, as they will need to match the desired interface name(s) within your test hosts!
+
+``` shell
+ansible-playbook -i localhost, infra-deploy.yaml
+```
+
+Here's an example where all of the cloud defaults have been overridden to use known options within my OpenStack Cloud environment.
+
+``` shell
+ansible-playbook -i localhost, infra-deploy.yaml -e os_image_id=Ubuntu-22.04 \
+                                                 -e os_cloud_name=dfw \
+                                                 -e os_launcher_flavor=m1.small \
+                                                 -e os_node_flavor=m1.large
+```
+
+The test infrastructure will create the following OpenStack resources.
+
+* Neutron Network/Subnet
+  * Assign a floating IP
+* Cinder Volumes
+* Nova Servers
+
+The result of the playbook will look something like this.
+
+![image](https://github.com/rackerlabs/genestack/assets/2066292/7c7f4230-256c-4392-9928-767edb2ad0f0)
+
+* The first three nodes within the build playbook will be assumed to be controllers
+* The last three nodes will be assumed to be storage nodes, with three volumes attached to each
+* All other nodes will be assumed to be compute nodes.
+
+### Running the deployment
+
+The lab deployment playbook will build an environment suitable for running Genestack; however, it does not by itself run the full deployment. Once your resources are online, you can log in to the "launcher" node and begin running the deployment. To make things fairly simple, the working development directory will be synced to the launcher node, along with keys and your generated inventory.
+
+> If you want to inspect the generated inventory, you can find it in your home directory.
diff --git a/docs/deploy-required-infrastructure.md b/docs/deploy-required-infrastructure.md
new file mode 100644
index 00000000..4d6a4a8f
--- /dev/null
+++ b/docs/deploy-required-infrastructure.md
@@ -0,0 +1,312 @@
+# Infrastructure Deployment Demo
+
+[![asciicast](https://asciinema.org/a/629790.svg)](https://asciinema.org/a/629790)
+
+# Running the infrastructure deployment
+
+The infrastructure deployment can almost all be run in parallel. The above demo does everything serially to keep things consistent and easy to understand, but if you just need to get things done, feel free to do it all at once.
+
+## Create our basic OpenStack namespace
+
+The following command will generate our OpenStack namespace and ensure we have everything needed to proceed with the deployment.
+
+``` shell
+kubectl apply -k /opt/genestack/kustomize/openstack
+```
+
+## Deploy the MariaDB Operator and a Galera Cluster
+
+### Create secret
+
+``` shell
+kubectl --namespace openstack \
+        create secret generic mariadb \
+        --type Opaque \
+        --from-literal=root-password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" \
+        --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)"
+```
+
+### Deploy the mariadb operator
+
+If you've changed your k8s cluster name from the default cluster.local, edit `clusterName` in `/opt/genestack/kustomize/mariadb-operator/kustomization.yaml` prior to deploying the mariadb operator.
+
+``` shell
+kubectl kustomize --enable-helm /opt/genestack/kustomize/mariadb-operator | kubectl --namespace mariadb-system apply --server-side --force-conflicts -f -
+```
+
+> The operator may take a minute to get ready. Before deploying the Galera cluster, wait until the webhook is online.
+
+``` shell
+kubectl --namespace mariadb-system get pods -w
+```
+
+### Deploy the MariaDB Cluster
+
+``` shell
+kubectl --namespace openstack apply -k /opt/genestack/kustomize/mariadb-cluster/base
+```
+
+> NOTE: MariaDB has a base configuration which is HA and production ready. If you're deploying on a small cluster, the `aio` configuration may better suit the needs of the environment.
+
+### Verify readiness with the following command
+
+``` shell
+kubectl --namespace openstack get mariadbs -w
+```
+
+## Deploy the RabbitMQ Operator and a RabbitMQ Cluster
+
+### Deploy the RabbitMQ operator.
+
+``` shell
+kubectl apply -k /opt/genestack/kustomize/rabbitmq-operator
+```
+
+> The operator may take a minute to get ready. Before deploying the RabbitMQ cluster, wait until the operator pod is online.
+
+### Deploy the RabbitMQ topology operator.
+
+``` shell
+kubectl apply -k /opt/genestack/kustomize/rabbitmq-topology-operator
+```
+
+### Deploy the RabbitMQ cluster.
+
+``` shell
+kubectl apply -k /opt/genestack/kustomize/rabbitmq-cluster/base
+```
+
+> NOTE: RabbitMQ has a base configuration which is HA and production ready. If you're deploying on a small cluster, the `aio` configuration may better suit the needs of the environment.
+
+### Validate the status with the following
+
+``` shell
+kubectl --namespace openstack get rabbitmqclusters.rabbitmq.com -w
+```
+
+## Deploy Memcached
+
+### Deploy the Memcached Cluster
+
+``` shell
+kubectl kustomize --enable-helm /opt/genestack/kustomize/memcached/base | kubectl apply --namespace openstack -f -
+```
+
+> NOTE: Memcached has a base configuration which is HA and production ready. If you're deploying on a small cluster, the `aio` configuration may better suit the needs of the environment.
+
+### Verify readiness with the following command.
+
+``` shell
+kubectl --namespace openstack get horizontalpodautoscaler.autoscaling memcached -w
+```
+
+# Deploy the ingress controllers
+
+We need two different Ingress controllers: one in the `openstack` namespace and one in the `ingress-nginx` namespace. The `openstack` controller handles east-west connectivity, while the `ingress-nginx` controller handles north-south.
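+
+Once both controllers are deployed (see the following sections), a quick sanity check is to list the registered ingress classes and confirm each controller's pods are running in its own namespace. This is a hedged verification sketch: the internal controller is expected to register the `nginx-openstack` class noted below, while the external controller's class name (commonly `nginx`) depends on the overlay in use.
+
+``` shell
+# List the ingress classes registered in the cluster; the openstack (internal)
+# controller should expose `nginx-openstack`, the external controller its own class.
+kubectl get ingressclass
+
+# Confirm the controller pods are online in each namespace.
+kubectl --namespace ingress-nginx get pods
+kubectl --namespace openstack get pods | grep ingress
+```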
+
+### Deploy our ingress controller within the ingress-nginx Namespace
+
+``` shell
+kubectl kustomize --enable-helm /opt/genestack/kustomize/ingress/external | kubectl apply --namespace ingress-nginx -f -
+```
+
+### Deploy our ingress controller within the OpenStack Namespace
+
+``` shell
+kubectl kustomize --enable-helm /opt/genestack/kustomize/ingress/internal | kubectl apply --namespace openstack -f -
+```
+
+The openstack ingress controller uses the class name `nginx-openstack`.
+
+## Setup the MetalLB Loadbalancer
+
+The MetalLB loadbalancer can be set up by editing the file `metallb-openstack-service-lb.yml`. You will need to add
+your "external" VIP(s) to the loadbalancer so that they can be used within services. These IP addresses are unique and will
+need to be customized to meet the needs of your environment.
+
+### Example LB manifest
+
+```yaml
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+  name: openstack-external
+  namespace: metallb-system
+spec:
+  addresses:
+  - 10.74.8.99/32  # This is assumed to be the public LB vip address
+  autoAssign: false
+---
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+  name: openstack-external-advertisement
+  namespace: metallb-system
+spec:
+  ipAddressPools:
+  - openstack-external
+  nodeSelectors:  # Optional block to limit nodes for a given advertisement
+  - matchLabels:
+      kubernetes.io/hostname: controller01.sjc.ohthree.com
+  - matchLabels:
+      kubernetes.io/hostname: controller02.sjc.ohthree.com
+  - matchLabels:
+      kubernetes.io/hostname: controller03.sjc.ohthree.com
+  interfaces:  # Optional block to limit ifaces used to advertise VIPs
+  - br-mgmt
+```
+
+``` shell
+kubectl apply -f /opt/genestack/manifests/metallb/metallb-openstack-service-lb.yml
+```
+
+Assuming your ingress controller is set up and your metallb loadbalancer is operational, you can patch the ingress controller to expose your external VIP address.
+
+``` shell
+kubectl --namespace openstack patch service ingress -p '{"metadata":{"annotations":{"metallb.universe.tf/allow-shared-ip": "openstack-external-svc", "metallb.universe.tf/address-pool": "openstack-external"}}}'
+kubectl --namespace openstack patch service ingress -p '{"spec": {"type": "LoadBalancer"}}'
+```
+
+Once patched, you can see that the controller is operational with your configured VIP address.
+
+``` shell
+kubectl --namespace openstack get services ingress
+```
+
+## Deploy Libvirt
+
+The first part of the compute kit is Libvirt.
+
+``` shell
+kubectl kustomize --enable-helm /opt/genestack/kustomize/libvirt | kubectl apply --namespace openstack -f -
+```
+
+Once deployed, you can validate functionality on your compute hosts with `virsh`.
+
+``` shell
+root@openstack-flex-node-3:~# virsh
+Welcome to virsh, the virtualization interactive terminal.
+
+Type:  'help' for help with commands
+       'quit' to quit
+
+virsh # list
+ Id   Name   State
+--------------------
+
+virsh #
+```
+
+## Deploy Open vSwitch OVN
+
+Note that we're not deploying Openvswitch; however, we are using it. The implementation on Genestack is assumed to be
+done with Kubespray, which deploys OVN as its networking solution. Because those components are handled by our infrastructure,
+there's nothing for us to manage / deploy in this environment. OpenStack will leverage OVN within Kubernetes following the
+scaling/maintenance/management practices of kube-ovn.
+
+### Configure OVN for OpenStack
+
+Post deployment, we need to set up neutron to work with our integrated OVN environment. To make that work, we have to annotate our nodes.
+Within the following commands, we'll use a lookup to annotate all of our nodes the same way; however, the power of this system is the ability to customize how our machines are annotated and therefore what type of hardware layout our machines will have. This gives us the ability to use different hardware in different machines, in different availability zones. While this example is simple, your cloud deployment doesn't have to be.
+
+``` shell
+export ALL_NODES=$(kubectl get nodes -l 'openstack-network-node=enabled' -o 'jsonpath={.items[*].metadata.name}')
+```
+
+> Set the annotations you need within your environment to meet the needs of your workloads on the hardware you have.
+
+#### Set `ovn.openstack.org/int_bridge`
+
+Set the name of the OVS integration bridge we'll use. In general, this should be **br-int**, and while this setting is implicitly configured, we're explicitly defining what the bridge will be on these nodes.
+
+``` shell
+kubectl annotate \
+        nodes \
+        ${ALL_NODES} \
+        ovn.openstack.org/int_bridge='br-int'
+```
+
+#### Set `ovn.openstack.org/bridges`
+
+Set the name of the OVS bridges we'll use. These are the bridges you will use on your hosts within OVS. The option is a comma-separated string. You can define as many OVS-type bridges as you need or want for your environment.
+
+> NOTE: The functional example here annotates all nodes; however, not all nodes have to have the same setup.
+
+``` shell
+kubectl annotate \
+        nodes \
+        ${ALL_NODES} \
+        ovn.openstack.org/bridges='br-ex'
+```
+
+#### Set `ovn.openstack.org/ports`
+
+Set the port mapping for OVS interfaces to a local physical interface on a given machine. This option uses a colon between the OVS bridge and the physical interface, `OVS_BRIDGE:PHYSICAL_INTERFACE_NAME`. Multiple bridge mappings can be defined by separating values with a comma.
+
+``` shell
+kubectl annotate \
+        nodes \
+        ${ALL_NODES} \
+        ovn.openstack.org/ports='br-ex:bond1'
+```
+
+#### Set `ovn.openstack.org/mappings`
+
+Set the Neutron bridge mapping. This maps the Neutron interfaces to the ovs bridge names. These are colon delimited as `NEUTRON_INTERFACE:OVS_BRIDGE`. Multiple bridge mappings can be defined here and are separated by commas.
+
+> Neutron interfaces are string values and can be anything you want. The `NEUTRON_INTERFACE` value defined will be used when you create provider type networks after the cloud is online.
+
+``` shell
+kubectl annotate \
+        nodes \
+        ${ALL_NODES} \
+        ovn.openstack.org/mappings='physnet1:br-ex'
+```
+
+#### Set `ovn.openstack.org/availability_zones`
+
+Set the OVN availability zones, which in turn create neutron availability zones. Multiple network availability zones can be defined and are colon separated, which allows us to define all of the availability zones a node will be able to provide for, `nova:az1:az2:az3`.
+
+``` shell
+kubectl annotate \
+        nodes \
+        ${ALL_NODES} \
+        ovn.openstack.org/availability_zones='nova'
+```
+
+> Any availability zone defined here should also be defined within your **neutron.conf**. The "nova" availability zone is an assumed default; however, because we're running in a mixed OVN environment, we should define where we're allowed to execute OpenStack workloads.
+
+#### Set `ovn.openstack.org/gateway`
+
+Define where the gateway nodes will reside. There are many ways to run this: some prefer every compute node to be a gateway, while others prefer dedicated gateway hardware. Either way, you will need at least one gateway node within your environment.
+
+``` shell
+kubectl annotate \
+        nodes \
+        ${ALL_NODES} \
+        ovn.openstack.org/gateway='enabled'
+```
+
+### Run the OVN integration
+
+With all of the annotations defined, we can now apply the network policy with the following command.
+
+``` shell
+kubectl apply -k /opt/genestack/kustomize/ovn
+```
+
+After running the setup, nodes will have the label `ovn.openstack.org/configured` with a date stamp of when they were configured.
+If there's ever a need to reconfigure a node, simply remove the label and the DaemonSet will take care of it automatically.
+
+## Validate our infrastructure is operational
+
+Before going any further, make sure you validate that the backends are operational.
+
+``` shell
+# MariaDB
+kubectl --namespace openstack get mariadbs
+
+# RabbitMQ
+kubectl --namespace openstack get rabbitmqclusters.rabbitmq.com
+
+# Memcached
+kubectl --namespace openstack get horizontalpodautoscaler.autoscaling memcached
+```
+
+Once everything is Ready and online, continue with the installation.
diff --git a/docs/genestack-upgrade.md b/docs/genestack-upgrade.md
new file mode 100644
index 00000000..737fffa1
--- /dev/null
+++ b/docs/genestack-upgrade.md
@@ -0,0 +1,31 @@
+Running a genestack upgrade is fairly simple and consists mainly of updating the `git` checkout and then running through the needed `helm` charts to deploy updated applications.
+
+## Updating the Genestack
+
+Change to the genestack directory.
+
+``` shell
+cd /opt/genestack
+```
+
+Fetch the latest checkout from your remote.
+
+``` shell
+git fetch origin
+git rebase origin/main
+```
+
+> You may want to check out a specific SHA or tag when running a stable environment.
+
+Update the submodules.
+
+``` shell
+git pull --recurse-submodules
+```
+
+## Updating the genestack applications
+
+An update is generally the same as an install. Many of the Genestack applications are governed by operators which include lifecycle management.
+
+* When needing to run an upgrade for the infrastructure operators, consult the operator documentation to validate the steps required.
+* When needing to run an upgrade for the OpenStack components, simply re-run the `helm` charts as documented in the Genestack installation process.
diff --git a/docs/getting-started.md b/docs/getting-started.md
new file mode 100644
index 00000000..a702439d
--- /dev/null
+++ b/docs/getting-started.md
@@ -0,0 +1,31 @@
+# Welcome to the Genestack Wiki
+
+Welcome to the Genestack wiki! The following documents will break down a full end-to-end deployment and highlight how we can run a hybrid cloud environment, simply.
+
+## Getting Started
+
+Before you can do anything, we need to get the code. Because we've sold our soul to the submodule devil, you're going to need to recursively clone the repo into your location.
+
+> Throughout all of our documentation and examples, the genestack code base is assumed to be in `/opt`.
+
+``` shell
+git clone --recurse-submodules -j4 https://github.com/rackerlabs/genestack /opt/genestack
+```
+
+## Basic Setup
+
+The basic setup requires ansible, the ansible collections, and helm to be installed in order to deploy Kubernetes and OpenStack Helm:
+
+The environment variable `GENESTACK_PRODUCT` is used to bootstrap specific configurations and alters playbook handling.
+It is persisted at `/etc/genestack/product` for subsequent executions, so it only has to be set once.
+
+``` shell
+export GENESTACK_PRODUCT=openstack-enterprise
+#GENESTACK_PRODUCT=openstack-flex
+
+/opt/genestack/bootstrap.sh
+```
+
+> If running this command with `sudo`, be sure to run with `-E`, e.g. `sudo -E /opt/genestack/bootstrap.sh`. This will ensure your active environment is passed into the bootstrap command.
+
+Once the bootstrap is completed, the default Kubernetes provider will be configured inside `/etc/genestack/provider`.
diff --git a/docs/index.md b/docs/index.md
index e69de29b..51cac85e 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -0,0 +1,20 @@
+#### 1. Getting Started
+  * [Getting Started](getting-started.md)
+#### 2. Kubernetes
+  * [Building Your Kubernetes Environment](build-k8s.md)
+  * [Retrieve kube config](kube-config.md)
+#### 3. Storage
+  * [Create Persistent Storage](create-persistent-storage.md)
+#### 4. Openstack Infrastructure
+  * [Deploy Openstack on k8s](deploy-openstack.md)
+#### Build Images
+  * [Building Local Images](build-local-images.md)
+#### Build Test Environments
+  * [Building Virtual Environments for Testing](build-test-envs.md)
+#### Networking
+  * [OVN Database Backup](ovn-db-backup.md)
+#### Post Deployment
+  * [Post Deploy Operations](post-deploy-ops.md)
+#### Upgrades
+  * [Running Genestack Upgrade](genestack-upgrade.md)
+  * [Running Kubernetes Upgrade](k8s-upgrade.md)
diff --git a/docs/k8s-upgrade.md b/docs/k8s-upgrade.md
new file mode 100644
index 00000000..07ab52d0
--- /dev/null
+++ b/docs/k8s-upgrade.md
@@ -0,0 +1,63 @@
+Upgrades within the Kubernetes ecosystem are plentiful and happen often. While upgrades are not something that we want to process all the time, it is something that we want to be able to confidently process. With our Kubernetes providers, upgrades are handled in a way that maximizes uptime and should mostly not force resources into data-plane downtime.
+
+# Kubespray
+
+Running upgrades with Kubespray is handled by the `upgrade-cluster.yml` playbook. While this playbook works, it does have a couple of caveats.
+
+1. An upgrade can only handle one version jump at a time. If you're running 1.26 and want to go to 1.28, you'll need to upgrade to 1.27 first and repeat the process until you land on the desired version.
+
+2. The upgrade playbook will drain and move workloads around to ensure the environment maximizes uptime. While maximizing uptime makes for incredible user experiences, it does mean the process of executing an upgrade can be very long (2+ hours is normal); plan accordingly.
+
+## Preparing the upgrade
+
+When running Kubespray using the Genestack submodule, review the [Genestack Update Process](https://github.com/rackerlabs/genestack/wiki/Running-a-Genestack-upgrade) before continuing with the kubespray upgrade and deployment.
+
+Genestack stores inventory in the `/etc/genestack/inventory` directory. Before running the upgrade, you will need to set the **kube_version** variable to your new target version. This variable is generally found within the `/etc/genestack/inventory/group_vars/k8s_cluster/k8s-cluster.yml` file.
+
+> Review all of the group variables within an environment before running a major upgrade. Things change, and you need to be aware of your environment details before running the upgrade.
+
+Once the group variables are set, you can proceed with the upgrade execution.
+
+## Running the upgrade
+
+Running an upgrade with Kubespray is fairly simple and executed via `ansible-playbook`.
+
+Before running the playbook, be sure to source your environment variables.
+
+``` shell
+source /opt/genestack/scripts/genestack.rc
+```
+
+Change to the `kubespray` directory.
+
+``` shell
+cd /opt/genestack/submodules/kubespray
+```
+
+Now run the upgrade.
+
+``` shell
+ansible-playbook upgrade-cluster.yml
+```
+
+> While the basic command could work, be sure to include any and all flags needed for your environment before running the upgrade.
+
+### Running an unsafe upgrade
+
+When running an upgrade, it is possible to force the upgrade by running the cluster playbook with the `upgrade_cluster_setup` flag set to **true**. This option is a lot faster, though it does introduce the possibility of service disruption during the upgrade operation.
+
+``` shell
+ansible-playbook cluster.yml -e upgrade_cluster_setup=true
+```
+
+### Post upgrade operations
+
+After running the upgrade, it's a good idea to do some spot checks of your nodes and ensure everything is online and operating normally.
+
+#### Dealing with failure
+
+If an upgrade failed on the first attempt but succeeded on a subsequent run, you may have a node in a `Ready`, but `SchedulingDisabled` state. If you find yourself in this scenario, you may need to `uncordon` the node to get things back to operating normally.
+
+``` shell
+kubectl uncordon $NODE
+```
diff --git a/docs/kube-config.md b/docs/kube-config.md
new file mode 100644
index 00000000..d6cdecd0
--- /dev/null
+++ b/docs/kube-config.md
@@ -0,0 +1,32 @@
+Once the environment is online, proceed to log in to the environment and begin the deployment normally. You'll find the launch node has everything needed, in the places it belongs, to get the environment online.
+
+## Install `kubectl`
+
+Install the `kubectl` tool.
+
+``` shell
+curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
+sudo mv kubectl /usr/local/bin/
+sudo chmod +x /usr/local/bin/kubectl
+```
+
+## Retrieve the kube config
+
+Retrieve the kube config from our first controller.
+
+> In the following example, X.X.X.X is expected to be the first controller.
+
+> In the following example, ubuntu is the assumed user.
+
+``` shell
+mkdir -p ~/.kube
+rsync -e "ssh -F ${HOME}/.ssh/openstack-flex-keypair.config" \
+    --rsync-path="sudo rsync" \
+    -avz ubuntu@X.X.X.X:/root/.kube/config "${HOME}/.kube/config"
+```
+
+Edit the kube config to point at the first controller.
+ +``` shell +sed -i 's@server.*@server: https://X.X.X.X:6443@g' "${HOME}/.kube/config" +``` diff --git a/docs/ovn-db-backup.md b/docs/ovn-db-backup.md new file mode 100644 index 00000000..f0e7c8c6 --- /dev/null +++ b/docs/ovn-db-backup.md @@ -0,0 +1,132 @@ +- [Background](#background) +- [Backup](#backup) +- [Restoration and recovery](#restoration-and-recovery) + - [Recovering when a majority of OVN DB nodes work fine](#recovering-when-a-majority-of-ovn-db-nodes-work-fine) + - [Recovering from a majority of OVN DB node failures or a total cluster failure](#recovering-from-a-majority-of-ovn-db-node-failures-or-a-total-cluster-failure) + - [Trying to use _OVN_ DB files in `/etc/origin/ovn` on the _k8s_ nodes](#trying-to-use-ovn-db-files-in-etcoriginovn-on-the-k8s-nodes) + - [Finding the first node](#finding-the-first-node) + - [Trying to create a pod for `ovsdb-tool`](#trying-to-create-a-pod-for-ovsdb-tool) + - [`ovsdb-tool` from your Linux distribution's packaging system](#ovsdb-tool-from-your-linux-distributions-packaging-system) + - [Conclusion of using the OVN DB files on your _k8s_ nodes](#conclusion-of-using-the-ovn-db-files-on-your-k8s-nodes) + - [Full recovery](#full-recovery) + +# Background + +By default, _Genestack_ creates a pod that runs _OVN_ snapshots daily in the `kube-system` namespace where you find other centralized _OVN_ things. These get stored on a persistent storage volume associated with the `ovndb-backup` _PersistentVolumeClaim_. Snapshots older than 30 days get deleted. + +You should primarily follow the [Kube-OVN documentation on backup and recovery](https://kubeovn.github.io/docs/stable/en/ops/recover-db/) and consider the information here supplementary. + +# Backup + +A default _Genestack_ installation creates a _k8s_ _CronJob_ in the `kube-system` namespace along side the other central OVN components that will store snapshots of the OVN NB and SB in the _PersistentVolume_ for the _PersistentVolumeClaim_ named `ovndb-backup`. Storing these on the persistent volume like this matches the conventions for _MariaDB_ in _Genestack_. + +You may wish to implement shipping these off of the cluster to a permanent location, as you might have cluster problems that could interfere with your ability to get these off of the _PersistentVolume_ when you need these backups. + +# Restoration and recovery + +## Recovering when a majority of OVN DB nodes work fine + +If you have a majority of _k8s_ nodes running `ovn-central` working fine, you can just follow the directions in the _Kube-OVN_ documentation for kicking a node out. Things mostly work normally when you have a majority because OVSDB HA uses a raft algorithm which only requires a majority of the nodes for full functionality, so you don't have to do anything too strange or extreme to recover. You essentially kick the bad node out and let it recover. + +## Recovering from a majority of OVN DB node failures or a total cluster failure + +**You probably shouldn't use this section if you don't have a majority OVN DB node failure. Just kick out the minority of bad nodes as indicated above instead**. Use this section to recover from a failure of the **majority** of nodes. + +As a first step, you will need to get database files to run the recovery. You can try to use files on your nodes as described below, or use one of the backup snapshots. 
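+
+Before choosing a path, it can help to confirm that the scheduled snapshots described in the Backup section are actually present. The following is a hedged sketch; the CronJob name may vary by deployment, but the `ovndb-backup` _PersistentVolumeClaim_ name matches the default described above.
+
+``` shell
+# Confirm the snapshot CronJob and its PersistentVolumeClaim exist in kube-system.
+kubectl --namespace kube-system get cronjobs
+kubectl --namespace kube-system get pvc ovndb-backup
+```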
+ +### Trying to use _OVN_ DB files in `/etc/origin/ovn` on the _k8s_ nodes + +You can use the information in this section to try to get the files to use for your recovery from your running _k8s_ nodes. + +The _Kube-OVN_ shows trying to use _OVN_ DB files from `/etc/origin/ovn` on the _k8s_ nodes. You can try this, or skip this section and use a backup snapshot as shown below if you have one. However, you can probably try to use the files on the nodes as described here first, and then switch to the latest snapshot backup from the `CronJob` later if trying to use the files on the _k8s_ nodes doesn't seem to work, since restoring from the snapshot backup fully rebuilds the database. + +The directions in the _Kube-OVN_ documentation use `docker run` to get a working `ovsdb-tool` to try to work with the OVN DB files on the nodes, but _k8s_ installations mostly use `CRI-O`, `containerd`, or other container runtimes, so you probably can't pull the image and run it with `docker` as shown. I will cover this and some alternatives below. + +#### Finding the first node + +The _Kube-OVN_ documentation directs you to pick the node running the `ovn-central` pod associated with the first IP of the `NODE_IPS` environment variable. You should find the `NODE_IPS` environment variable defined on an `ovn-central` pod or the `ovn-central` _Deployment_. Assuming you can run the `kubectl` commands, the following example gets the node IPs off of one of the the deployment: + +``` +$ kubectl get deployment -n kube-system ovn-central -o yaml | grep -A1 'name: NODE_IPS' + - name: NODE_IPS + value: 10.130.140.246,10.130.140.250,10.130.140.252 +``` + +Then find the _k8s_ node with the first IP. You can see your _k8s_ nodes and their IPs with the command `kubectl get node -o wide`: + +``` +$ kubectl get node -o wide | grep 10.130.140.246 +k8s-controller01 Ready control-plane 3d17h v1.28.6 10.130.140.246 Ubuntu 22.04.3 LTS 6.5.0-17-generic containerd://1.7.11 +root@k8s-controller01:~# +``` + + +#### Trying to create a pod for `ovsdb-tool` + +As an alternative to `docker run` since your _k8s_ cluster probably doesn't use _Docker_ itself, you can **possibly** try to create a pod instead of running a container directly, but you should **try it before scaling your _OVN_ replicas down to 0**, as not having `ovn-central` available should interfere with pod creation. The broken `ovn-central` might still prevent _k8s_ from creating the pod even if you haven't scaled your replicas down, however. + +**Read below the pod manifest for edits you may need to make** + +``` +apiVersion: v1 +kind: Pod +metadata: + name: ovn-central-kubectl + namespace: kube-system +spec: + serviceAccount: "ovn" + serviceAccountName: "ovn" + nodeName: + tolerations: + - key: node-role.kubernetes.io/control-plane + operator: "Exists" + effect: "NoSchedule" + volumes: + - name: host-config-ovn + hostPath: + path: /etc/origin/ovn + type: "" + - name: backup + persistentVolumeClaim: + claimName: ovndb-backup + containers: + - name: ovn-central-kubectl + command: + - "/usr/bin/sleep" + args: + - "infinity" + image: docker.io/kubeovn/kube-ovn:v1.11.5 + volumeMounts: + - mountPath: /etc/ovn + name: host-config-ovn + - mountPath: /backup + name: backup +``` + +You also have to make sure to get the pod on the _k8s_ node with the first IP of `NODE_IPS` from your `ovn-central` installation, as the _Kube-OVN_ documentation indicates, so see the section on "finding the first node" above to fill in `` in the example pod manifest above. 
+
+You can save this to a YAML file, and `kubectl apply -f `.
+
+You may need to delete the `backup` entries under `.spec.volumes` and `.spec.containers[].volumeMounts` if you don't have that volume (although a default _Genestack_ installation does the scheduled snapshots there) or if trying to use it causes problems, but if it works, you can possibly `kubectl cp` a previous backup off it to restore.
+
+Additionally, you may need to delete the tolerations in the manifest if you untainted your controllers.
+
+To reiterate, if you reached this step, this pod creation may not work because of your `ovn-central` problems, but a default _Genestack_ can't `docker run` the container directly as shown in the _Kube-OVN_ documentation because it probably uses _containerd_ instead of _Docker_. I tried creating a pod like this with `ovn-central` scaled to 0 pods, and the pod stays in `ContainerCreating` status.
+
+If creating this pod worked, **scale your replicas to 0**, use `ovsdb-tool` to make the files you will use for restore (both north and south DB), then jump to _Full recovery_ as described below here and in the _Kube-OVN_ documentation.
+
+#### `ovsdb-tool` from your Linux distribution's packaging system
+
+The `docker run` approach may not work on your cluster, and the pod creation may not work because of your broken OVN. If you still want to try to use the OVN DB files on your _k8s_ nodes instead of going to one of your snapshot backups, you can try to install your distribution's package with the `ovsdb-tool`, `openvswitch-common` on Ubuntu, although you risk (and will probably have) a slight version mismatch with the OVS version within your normal `ovn-central` pods. OVSDB has a stable format, and this likely will not cause any problems. You should probably restore a previously saved snapshot in preference to using an `ovsdb-tool` with a slightly mismatched version, but you may consider the mismatched version if you don't have other options.
+
+#### Conclusion of using the OVN DB files on your _k8s_ nodes
+
+The entire section on using the OVN DB files from your nodes just gives you an alternative to a planned snapshot backup as a way to get something to restore the database from. From here forward, the directions converge with full recovery as described below and in the full _Kube-OVN_ documentation.
+
+### Full recovery
+
+You start here when you have north database and south database files you want to use to run your recovery, whether you retrieved them from one of your _k8s_ nodes as described above, or got them from one of your snapshots. Technically, the south database should get rebuilt with only the north database, but if you have the two that go together, you can save the time it would take for a full rebuild by also restoring the south DB. It also avoids relying on the ability to rebuild the south DB in case something goes wrong.
+
+If you just have your _PersistentVolume_ with the snapshots, you can try to create a pod as shown in the example manifest above with the _PersistentVolume_ mounted and `kubectl cp` the files off.
+
+However you got the files, full recovery from here forward works exactly as described in the _Kube-OVN_ documentation, which, at a high level, starts with you scaling your replicas down to 0.
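+
+As a hedged illustration of that first step only, scaling the central OVN components down typically looks like the following; the `ovn-central` deployment name matches the component referenced throughout this document, but verify the name and your original replica count before running anything.
+
+``` shell
+# Scale the central OVN DB components down before the restore (assumes a 3 node ovn-central).
+kubectl --namespace kube-system scale deployment ovn-central --replicas=0
+
+# ...perform the restore as described in the Kube-OVN documentation...
+
+# Scale back up once the databases have been restored.
+kubectl --namespace kube-system scale deployment ovn-central --replicas=3
+```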
diff --git a/docs/post-deploy-ops.md b/docs/post-deploy-ops.md new file mode 100644 index 00000000..c5056a14 --- /dev/null +++ b/docs/post-deploy-ops.md @@ -0,0 +1,418 @@ +After deploying the cloud operating environment, you're cloud will be ready to do work. While so what's next? Within this page we've a series of steps you can take to further build your cloud environment. + +## Create an OpenStack Cloud Config + +There are a lot of ways you can go to connect to your cluster. This example will use your cluster internals to generate a cloud config compatible with your environment using the Admin user. + +### Create the needed directories + +``` shell +mkdir -p ~/.config/openstack +``` + +### Generate the cloud config file + +``` shell +cat > ~/.config/openstack/clouds.yaml < Save the mapping to a local file before uploading it to keystone. In the examples, the mapping is stored at `/tmp/mapping.json`. + +Now register the mapping within Keystone. + +``` shell +openstack --os-cloud default mapping create --rules /tmp/mapping.json rackspace_mapping +``` + +### Create the federation protocol + +``` shell +openstack --os-cloud default federation protocol create rackspace --mapping rackspace_mapping --identity-provider rackspace +``` + +## Create Flavors + +These are the default flavors expected in an OpenStack cloud. Customize these flavors based on your needs. See the upstream admin [docs](https://docs.openstack.org/nova/latest/admin/flavors.html) for more information on managing flavors. + +``` shell +openstack --os-cloud default flavor create --public m1.extra_tiny --ram 512 --disk 0 --vcpus 1 --ephemeral 0 --swap 0 +openstack --os-cloud default flavor create --public m1.tiny --ram 1024 --disk 10 --vcpus 1 --ephemeral 0 --swap 0 +openstack --os-cloud default flavor create --public m1.small --ram 2048 --disk 20 --vcpus 2 --ephemeral 0 --swap 0 +openstack --os-cloud default flavor create --public m1.medium --ram 4096 --disk 40 --vcpus 4 --ephemeral 8 --swap 2048 +openstack --os-cloud default flavor create --public m1.large --ram 8192 --disk 80 --vcpus 6 --ephemeral 16 --swap 4096 +openstack --os-cloud default flavor create --public m1.extra_large --ram 16384 --disk 160 --vcpus 8 --ephemeral 32 --swap 8192 +``` + +## Download Images + +### Get Ubuntu + +#### Ubuntu 22.04 (Jammy) + +``` shell +wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img +openstack --os-cloud default image create \ + --progress \ + --disk-format qcow2 \ + --container-format bare \ + --public \ + --file jammy-server-cloudimg-amd64.img \ + --property hw_scsi_model=virtio-scsi \ + --property hw_disk_bus=scsi \ + --property hw_vif_multiqueue_enabled=true \ + --property hw_qemu_guest_agent=yes \ + --property hypervisor_type=kvm \ + --property img_config_drive=optional \ + --property hw_machine_type=q35 \ + --property hw_firmware_type=uefi \ + --property os_require_quiesce=yes \ + --property os_type=linux \ + --property os_admin_user=ubuntu \ + --property os_distro=ubuntu \ + --property os_version=22.04 \ + Ubuntu-22.04 +``` + +#### Ubuntu 20.04 (Focal) + +``` shell +wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img +openstack --os-cloud default image create \ + --progress \ + --disk-format qcow2 \ + --container-format bare \ + --public \ + --file focal-server-cloudimg-amd64.img \ + --property hw_scsi_model=virtio-scsi \ + --property hw_disk_bus=scsi \ + --property hw_vif_multiqueue_enabled=true \ + --property hw_qemu_guest_agent=yes \ + --property 
hypervisor_type=kvm \ + --property img_config_drive=optional \ + --property hw_machine_type=q35 \ + --property hw_firmware_type=uefi \ + --property os_require_quiesce=yes \ + --property os_type=linux \ + --property os_admin_user=ubuntu \ + --property os_distro=ubuntu \ + --property os_version=20.04 \ + Ubuntu-20.04 +``` + +### Get Debian + +#### Debian 12 + +``` shell +wget https://cloud.debian.org/cdimage/cloud/bookworm/latest/debian-12-genericcloud-amd64.qcow2 +openstack --os-cloud default image create \ + --progress \ + --disk-format qcow2 \ + --container-format bare \ + --public \ + --file debian-12-genericcloud-amd64.qcow2 \ + --property hw_scsi_model=virtio-scsi \ + --property hw_disk_bus=scsi \ + --property hw_vif_multiqueue_enabled=true \ + --property hw_qemu_guest_agent=yes \ + --property hypervisor_type=kvm \ + --property img_config_drive=optional \ + --property hw_machine_type=q35 \ + --property hw_firmware_type=uefi \ + --property os_require_quiesce=yes \ + --property os_type=linux \ + --property os_admin_user=debian \ + --property os_distro=debian \ + --property os_version=12 \ + Debian-12 +``` + +#### Debian 11 + +``` shell +wget https://cloud.debian.org/cdimage/cloud/bullseye/latest/debian-11-genericcloud-amd64.qcow2 +openstack --os-cloud default image create \ + --progress \ + --disk-format qcow2 \ + --container-format bare \ + --public \ + --file debian-11-genericcloud-amd64.qcow2 \ + --property hw_scsi_model=virtio-scsi \ + --property hw_disk_bus=scsi \ + --property hw_vif_multiqueue_enabled=true \ + --property hw_qemu_guest_agent=yes \ + --property hypervisor_type=kvm \ + --property img_config_drive=optional \ + --property hw_machine_type=q35 \ + --property hw_firmware_type=uefi \ + --property os_require_quiesce=yes \ + --property os_type=linux \ + --property os_admin_user=debian \ + --property os_distro=debian \ + --property os_version=11 \ + Debian-11 +``` + +### Get CentOS + +#### Centos Stream 9 + +``` shell +wget http://cloud.centos.org/centos/9-stream/x86_64/images/CentOS-Stream-GenericCloud-9-latest.x86_64.qcow2 +openstack --os-cloud default image create \ + --progress \ + --disk-format qcow2 \ + --container-format bare \ + --public \ + --file CentOS-Stream-GenericCloud-9-latest.x86_64.qcow2 \ + --property hw_scsi_model=virtio-scsi \ + --property hw_disk_bus=scsi \ + --property hw_vif_multiqueue_enabled=true \ + --property hw_qemu_guest_agent=yes \ + --property hypervisor_type=kvm \ + --property img_config_drive=optional \ + --property hw_machine_type=q35 \ + --property os_require_quiesce=yes \ + --property os_type=linux \ + --property os_admin_user=centos \ + --property os_distro=centos \ + --property os_version=9 \ + CentOS-Stream-9 +``` + +#### Centos Stream 8 + +``` shell +wget http://cloud.centos.org/centos/8-stream/x86_64/images/CentOS-Stream-GenericCloud-8-latest.x86_64.qcow2 +openstack --os-cloud default image create \ + --progress \ + --disk-format qcow2 \ + --container-format bare \ + --public \ + --file CentOS-Stream-GenericCloud-8-latest.x86_64.qcow2 \ + --property hw_scsi_model=virtio-scsi \ + --property hw_disk_bus=scsi \ + --property hw_vif_multiqueue_enabled=true \ + --property hw_qemu_guest_agent=yes \ + --property hypervisor_type=kvm \ + --property img_config_drive=optional \ + --property hw_machine_type=q35 \ + --property hw_firmware_type=uefi \ + --property os_require_quiesce=yes \ + --property os_type=linux \ + --property os_admin_user=centos \ + --property os_distro=centos \ + --property os_version=8 \ + CentOS-Stream-8 +``` + +### Get 
openSUSE Leap + +#### Leap 15 + +``` shell +wget https://download.opensuse.org/distribution/leap/15.5/appliances/openSUSE-Leap-15.5-Minimal-VM.x86_64-kvm-and-xen.qcow2 +openstack --os-cloud default image create \ + --progress \ + --disk-format qcow2 \ + --container-format bare \ + --public \ + --file openSUSE-Leap-15.5-Minimal-VM.x86_64-kvm-and-xen.qcow2 \ + --property hw_scsi_model=virtio-scsi \ + --property hw_disk_bus=scsi \ + --property hw_vif_multiqueue_enabled=true \ + --property hw_qemu_guest_agent=yes \ + --property hypervisor_type=kvm \ + --property img_config_drive=optional \ + --property hw_machine_type=q35 \ + --property os_require_quiesce=yes \ + --property os_type=linux \ + --property os_admin_user=opensuse \ + --property os_distro=suse \ + --property os_version=15 \ + openSUSE-Leap-15 +``` +  +## Create Shared Provider Networks + +The following commands are examples of creating several different network types. + +### Flat Network + +``` shell +openstack --os-cloud default network create --share \ + --availability-zone-hint nova \ + --external \ + --provider-network-type flat \ + --provider-physical-network physnet1 \ + flat +``` + +### Flat Subnet + +``` shell +openstack --os-cloud default subnet create --subnet-range 172.16.24.0/22 \ + --gateway 172.16.24.2 \ + --dns-nameserver 172.16.24.2 \ + --allocation-pool start=172.16.25.150,end=172.16.25.200 \ + --dhcp \ + --network flat \ + flat_subnet +``` + +### VLAN Network + +``` shell +openstack --os-cloud default network create --share \ + --availability-zone-hint nova \ + --external \ + --provider-segment 404 \ + --provider-network-type vlan \ + --provider-physical-network physnet1 \ + vlan404 +``` + +### VLAN Subnet + +``` shell +openstack --os-cloud default subnet create --subnet-range 10.10.10.0/23 \ + --gateway 10.10.10.1 \ + --dns-nameserver 10.10.10.1 \ + --allocation-pool start=10.10.11.10,end=10.10.11.254 \ + --dhcp \ + --network vlan404 \ + vlan404_subnet +``` + +### L3 (Tenant) Network + +``` shell +openstack --os-cloud default network create l3 +``` + +### L3 (Tenant) Subnet + +``` shell +openstack --os-cloud default subnet create --subnet-range 10.0.10.0/24 \ + --gateway 10.0.10.1 \ + --dns-nameserver 1.1.1.1 \ + --allocation-pool start=10.0.10.2,end=10.0.10.254 \ + --dhcp \ + --network l3 \ + l3_subnet +``` + +> You can validate that the role has been assigned to the group and domain using the `openstack role assignment list` + +# Third Party Integration + +## OSIE Deployment + +``` shell +helm upgrade --install osie osie/osie \ + --namespace=osie \ + --create-namespace \ + --wait \ + --timeout 120m \ + -f /opt/genestack/helm-configs/osie/osie-helm-overrides.yaml +``` + +# Connect to the database + +Sometimes an operator may need to connect to the database to troubleshoot things or otherwise make modifications to the databases in place. The following command can be used to connect to the database from a node within the cluster. + +``` shell +mysql -h $(kubectl -n openstack get service mariadb-galera-primary -o jsonpath='{.spec.clusterIP}') \ + -p$(kubectl --namespace openstack get secret mariadb -o jsonpath='{.data.root-password}' | base64 -d) \ + -u root +``` + +> The following command will leverage your kube configuration and dynamically source the needed information to connect to the MySQL cluster. You will need to ensure you have installed the mysql client tools on the system you're attempting to connect from. 
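+
+If the client tools are missing on the host you're connecting from, installing them is typically a single package. The package name below assumes an Ubuntu or Debian host and may differ on other distributions.
+
+``` shell
+# Install the MariaDB/MySQL client tools (Ubuntu/Debian package name assumed).
+apt update && apt install -y mariadb-client
+```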
diff --git a/mkdocs.yml b/mkdocs.yml index 38e0d707..185d3e73 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -55,6 +55,5 @@ dev_addr: "127.0.0.1:8001" edit_uri: "edit/main/docs" nav: - - Home: index.md - - Quick-Start: quickstart.md + - Documentation: 'index.md' - Components: components.md