From 02d1b8c70892e991d72fb74aab1eb4e3050af3c8 Mon Sep 17 00:00:00 2001
From: Kevin Carter
Date: Wed, 6 Mar 2024 20:06:00 -0600
Subject: [PATCH] fix: Update blockquotes to use Admonitions

Signed-off-by: Kevin Carter
---
 docs/build-local-images.md             |  8 +++-
 docs/build-test-envs.md                | 21 ++++++---
 docs/genestack-architecture.md         | 13 ++++++
 docs/genestack-components.md           | 62 ++++++++++++++++++++++++++
 docs/genestack-getting-started.md      |  8 +++-
 docs/genestack-upgrade.md              |  4 +-
 docs/infrastructure-mariadb-connect.md |  4 +-
 docs/infrastructure-mariadb.md         |  8 +++-
 docs/infrastructure-memcached.md       |  8 +++-
 docs/infrastructure-ovn-setup.md       | 16 +++++--
 docs/infrastructure-postgresql.md      | 11 ++---
 docs/infrastructure-rabbitmq.md        |  9 +++-
 docs/k8s-config.md                     |  8 +++-
 docs/k8s-kubespray-upgrade.md          |  8 +++-
 docs/k8s-kubespray.md                  | 48 ++++++++++++--------
 docs/k8s-labels.md                     |  5 ++-
 docs/openstack-cinder.md               | 18 +++++---
 docs/openstack-compute-kit.md          | 24 ++++++----
 docs/openstack-glance.md               | 17 +++----
 docs/openstack-gnocchi.md              |  5 ++-
 docs/openstack-heat.md                 |  5 ++-
 docs/openstack-horizon.md              |  5 ++-
 docs/openstack-keystone-federation.md  |  4 +-
 docs/openstack-keystone.md             | 10 +++--
 docs/openstack-octavia.md              |  5 ++-
 docs/openstack-skyline.md              |  9 ++--
 docs/quickstart.md                     |  4 +-
 docs/storage-ceph-rook-internal.md     |  4 +-
 docs/storage-nfs-external.md           |  4 +-
 docs/storage-topolvm.md                |  4 +-
 30 files changed, 265 insertions(+), 94 deletions(-)
 create mode 100644 docs/genestack-architecture.md
 create mode 100644 docs/genestack-components.md

diff --git a/docs/build-local-images.md b/docs/build-local-images.md
index 6fb21bae..2f73ab33 100644
--- a/docs/build-local-images.md
+++ b/docs/build-local-images.md
@@ -43,8 +43,12 @@ EOF
 kubectl --namespace kube-system get secret registry-kube-system-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > /opt/registry.ca
 ```
 
-> NOTE the above commands make the assumption that you're running a docker registry within the kube-system namespace and are running the provided genestack ingress definition to support that environment. If you have a different registry you will need to adjust the commands to fit your environment.
+!!! note
+
+    The above commands assume that you're running a docker registry within the kube-system namespace and are running the provided genestack ingress definition to support that environment. If you have a different registry, you will need to adjust the commands to fit your environment.
 
 Once the above commands have been executed, the file `/opt/octavia-ovn-helm-overrides.yaml` will be present and can be included in our helm command when we deploy Octavia.
 
-> If you're using the local registry with a self-signed certificate, you will need to include the CA `/opt/registry.ca` in all of your potential worker nodes so that the container image is able to be pulled.
+!!! tip
+
+    If you're using the local registry with a self-signed certificate, you will need to install the CA `/opt/registry.ca` on all of your potential worker nodes so that the container images can be pulled.
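+
+    A minimal sketch of distributing that CA on an Ubuntu worker node; adjust the path and tooling for your distribution and container runtime.
+
+    ``` shell
+    # Copy the registry CA into the system trust store and refresh it
+    sudo cp /opt/registry.ca /usr/local/share/ca-certificates/local-registry.crt
+    sudo update-ca-certificates
+    ```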
diff --git a/docs/build-test-envs.md b/docs/build-test-envs.md
index a24d96b5..79814e65 100644
--- a/docs/build-test-envs.md
+++ b/docs/build-test-envs.md
@@ -10,8 +10,9 @@ Take a moment to orient yourself, there are a few items to consider before movin
 
 ### Clone Genestack
 
-> Your local genestack repository will be transferred to the eventual launcher instance for convenience **perfect for development**.
-See [Getting Started](quickstart.md] for an example on how to recursively clone the repository and its submodules.
+!!! note
+
+    Your local genestack repository will be transferred to the eventual launcher instance for convenience, **perfect for development**. See [Getting Started](quickstart.md) for an example of how to recursively clone the repository and its submodules.
 
 ### Create a VirtualEnv
@@ -19,7 +20,9 @@
 This is optional but always recommended. There are multiple tools for this, pick your poison.
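+
+!!! example
+
+    One common approach, assuming Python 3 is available; any virtualenv tool will do.
+
+    ``` shell
+    python3 -m venv ~/.venvs/genestack
+    source ~/.venvs/genestack/bin/activate
+    ```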
 
 ### Install Ansible Dependencies
 
-> Activate your venv if you're using one.
+!!! info
+
+    Activate your venv if you're using one.
 
 ```
 pip install ansible openstacksdk
 ```
@@ -54,7 +57,9 @@ See the configuration guide [here](https://docs.openstack.org/openstacksdk/lates
 
 ## Create a Test Environment
 
-> This is used to deploy new infra on an existing OpenStack cloud. If you're deploying on baremetal this document can be skipped.
+!!! abstract
+
+    This is used to deploy new infra on an existing OpenStack cloud. If you're deploying on baremetal, this document can be skipped.
 
 If deploying in a lab environment on an OpenStack cloud, you can run the `infra-deploy.yaml` playbook which will create all of the resources needed to operate the test environment.
@@ -72,7 +77,9 @@ cd ansible/playbooks
 
 Run the test infrastructure deployment.
 
-> Ensure `os_cloud_name` as well as other values within your `infra-deploy.yaml` match a valid cloud name in your openstack configuration as well as resource names within it.
+!!! tip
+
+    Ensure `os_cloud_name`, as well as the other values within your `infra-deploy.yaml`, matches a valid cloud name in your OpenStack configuration and the resource names within it.
 
 !!! note
@@ -110,7 +117,9 @@ The result of the playbook will look something like this.
 
 The lab deployment playbook will build an environment suitable for running Genestack, however, it does not by itself run the full deployment. Once your resources are online, you can login to the "launcher" node and begin running the deployment. To make things fairly simple, the working development directory will be sync'd to the launcher node, along with keys and your generated inventory.
 
-> If you're wanting to inspect the generated inventory, you can find it in your home directory.
+!!! tip
+
+    If you want to inspect the generated inventory, you can find it in your home directory.
 
 ### SSH to lab
diff --git a/docs/genestack-architecture.md b/docs/genestack-architecture.md
new file mode 100644
index 00000000..7495311d
--- /dev/null
+++ b/docs/genestack-architecture.md
@@ -0,0 +1,13 @@
+# Environment Architecture
+
+Genestack makes use of homegrown solutions, community operators, and OpenStack-Helm. Everything
+in Genestack comes together to form a cloud in a new and exciting way; all built with open source
+solutions to manage cloud infrastructure in the way you need it.
+
+They say a picture is worth 1000 words, so here's a picture.
+
+![Genestack Architecture Diagram](assets/images/diagram-genestack.png)
+
+The idea behind Genestack is simple: build an Open Infrastructure system that unites Public and Private
+clouds with a platform that is simple enough for the hobbyist yet capable of exceeding the needs of the
+enterprise.
diff --git a/docs/genestack-components.md b/docs/genestack-components.md
new file mode 100644
index 00000000..483f6a9e
--- /dev/null
+++ b/docs/genestack-components.md
@@ -0,0 +1,62 @@
+# Product Component Matrix
+
+The following components are part of the initial product release
+and largely deployed with Helm+Kustomize against the K8s API (v1.28 and up).
+Some components are initially only installed with the public-cloud-based
+OpenStack Flex service, while OpenStack Enterprise naturally provides a larger
+variety of services:
+
+| Group      | Component            | OpenStack Flex | OpenStack Enterprise |
+|------------|----------------------|----------------|----------------------|
+| Kubernetes | Kubernetes           | Required       | Required             |
+| Kubernetes | Kubernetes Dashboard | Required       | Required             |
+| Kubernetes | Cert-Manager         | Required       | Required             |
+| Kubernetes | MetalLB (L2/L3)      | Required       | Required             |
+| Kubernetes | CoreDNS              | Required       | Required             |
+| Kubernetes | Ingress Controller (Nginx) | Required | Required             |
+| Kubernetes | Kube-Proxy (IPVS)    | Required       | Required             |
+| Kubernetes | Calico               | Optional       | Required             |
+| Kubernetes | Kube-OVN             | Required       | Optional             |
+| Kubernetes | Helm                 | Required       | Required             |
+| Kubernetes | Kustomize            | Required       | Required             |
+| OpenStack  | Open vSwitch (Helm)  | Optional       | Required             |
+| OpenStack  | Galera (Operator)    | Required       | Required             |
+| OpenStack  | RabbitMQ (Operator)  | Required       | Required             |
+| OpenStack  | Memcached (Operator) | Required       | Required             |
+| OpenStack  | Ceph Rook            | Optional       | Required             |
+| OpenStack  | iscsi/tgtd           | Required       | Optional             |
+| OpenStack  | Keystone (Helm)      | Required       | Required             |
+| OpenStack  | Glance (Helm)        | Required       | Required             |
+| OpenStack  | Cinder (Helm)        | Required       | Required             |
+| OpenStack  | Nova (Helm)          | Required       | Required             |
+| OpenStack  | Neutron (Helm)       | Required       | Required             |
+| OpenStack  | Placement (Helm)     | Required       | Required             |
+| OpenStack  | Horizon (Helm)       | Required       | Required             |
+| OpenStack  | Skyline (Helm)       | Optional       | Optional             |
+| OpenStack  | Heat (Helm)          | Required       | Required             |
+| OpenStack  | Designate (Helm)     | Optional       | Required             |
+| OpenStack  | Barbican (Helm)      | Required       | Required             |
+| OpenStack  | Octavia (Helm)       | Required       | Required             |
+| OpenStack  | Ironic (Helm)        | Optional       | Required             |
+| OpenStack  | metal3.io            | Optional       | Required             |
+
+Initial monitoring components consist of the following projects:
+
+| Group      | Component            | OpenStack Flex | OpenStack Enterprise |
+|------------|----------------------|----------------|----------------------|
+| Kubernetes | Prometheus           | Required       | Required             |
+| Kubernetes | Thanos               | Required       | Required             |
+| Kubernetes | Alertmanager         | Required       | Required             |
+| Kubernetes | Grafana              | Required       | Required             |
+| Kubernetes | Node Exporter        | Required       | Required             |
+| Kubernetes | Redfish Exporter     | Required       | Required             |
+| OpenStack  | OpenStack Exporter   | Required       | Required             |
+
+At a later stage these components will be added:
+
+| Group     | Component            | OpenStack Flex | OpenStack Enterprise |
+|-----------|----------------------|----------------|----------------------|
+| OpenStack | MongoDB              | Optional       | Required             |
+| OpenStack | Aodh (Helm)          | Optional       | Required             |
+| OpenStack | Ceilometer (Helm)    | Optional       | Required             |
+| OpenStack | Masakari (Helm)      | Optional       | Required             |
diff --git a/docs/genestack-getting-started.md b/docs/genestack-getting-started.md
index ef85fed8..78a8707c 100644
--- a/docs/genestack-getting-started.md
+++ b/docs/genestack-getting-started.md
@@ -2,7 +2,9 @@
 
 Before you can do anything we need to get the code. Because we've sold our soul to the submodule devil, you're going to need to recursively clone the repo into your location.
 
-> Throughout the all our documentation and examples the genestack code base will be assumed to be in `/opt`.
+!!! info
+
+    Throughout all of our documentation and examples, the genestack code base is assumed to be in `/opt`.
 
 ``` shell
 git clone --recurse-submodules -j4 https://github.com/rackerlabs/genestack /opt/genestack
 ```
@@ -22,6 +24,8 @@ export GENESTACK_PRODUCT=openstack-enterprise
 /opt/genestack/bootstrap.sh
 ```
 
-> If running this command with `sudo`, be sure to run with `-E`. `sudo -E /opt/genestack/bootstrap.sh`. This will ensure your active environment is passed into the bootstrap command.
+!!! tip
+
+    If running this command with `sudo`, be sure to run with `-E`, e.g. `sudo -E /opt/genestack/bootstrap.sh`. This will ensure your active environment is passed into the bootstrap command.
 
 Once the bootstrap is completed the default Kubernetes provider will be configured inside `/etc/genestack/provider`
diff --git a/docs/genestack-upgrade.md b/docs/genestack-upgrade.md
index 737fffa1..a0a560ba 100644
--- a/docs/genestack-upgrade.md
+++ b/docs/genestack-upgrade.md
@@ -15,7 +15,9 @@ git fetch origin
 git rebase origin/main
 ```
 
-> You may want to checkout a specific SHA or tag when running a stable environment.
+!!! tip
+
+    You may want to check out a specific SHA or tag when running a stable environment.
 
 Update the submodules.
diff --git a/docs/infrastructure-mariadb-connect.md b/docs/infrastructure-mariadb-connect.md
index a7af0f16..76f2c9e8 100644
--- a/docs/infrastructure-mariadb-connect.md
+++ b/docs/infrastructure-mariadb-connect.md
@@ -8,4 +8,6 @@ mysql -h $(kubectl -n openstack get service mariadb-galera-primary -o jsonpath='
      -u root
 ```
 
-> The following command will leverage your kube configuration and dynamically source the needed information to connect to the MySQL cluster. You will need to ensure you have installed the mysql client tools on the system you're attempting to connect from.
+!!! info
+
+    The preceding command will leverage your kube configuration and dynamically source the needed information to connect to the MySQL cluster. You will need to ensure you have installed the mysql client tools on the system you're attempting to connect from.
diff --git a/docs/infrastructure-mariadb.md b/docs/infrastructure-mariadb.md
index 9af335ea..f45bf166 100644
--- a/docs/infrastructure-mariadb.md
+++ b/docs/infrastructure-mariadb.md
@@ -18,7 +18,9 @@ If you've changed your k8s cluster name from the default cluster.local, edit `cl
 kubectl kustomize --enable-helm /opt/genestack/kustomize/mariadb-operator | kubectl --namespace mariadb-system apply --server-side --force-conflicts -f -
 ```
 
-> The operator may take a minute to get ready, before deploying the Galera cluster, wait until the webhook is online.
+!!! info
+
+    The operator may take a minute to get ready. Before deploying the Galera cluster, wait until the webhook is online.
 
 ``` shell
 kubectl --namespace mariadb-system get pods -w
 ```
@@ -30,7 +32,9 @@
 kubectl --namespace openstack apply -k /opt/genestack/kustomize/mariadb-cluster/base
 ```
 
-> NOTE MariaDB has a base configuration which is HA and production ready. If you're deploying on a small cluster the `aio` configuration may better suit the needs of the environment.
+!!! note
+
+    MariaDB has a base configuration which is HA and production ready. If you're deploying on a small cluster, the `aio` configuration may better suit the needs of the environment.
 
 ## Verify readiness with the following command
diff --git a/docs/infrastructure-memcached.md b/docs/infrastructure-memcached.md
index cc2fede4..a963897d 100644
--- a/docs/infrastructure-memcached.md
+++ b/docs/infrastructure-memcached.md
@@ -6,7 +6,9 @@
 kubectl kustomize --enable-helm /opt/genestack/kustomize/memcached/base | kubectl apply --namespace openstack -f -
 ```
 
-> NOTE Memcached has a base configuration which is HA and production ready. If you're deploying on a small cluster the `aio` configuration may better suit the needs of the environment.
+!!! note
+
+    Memcached has a base configuration which is HA and production ready. If you're deploying on a small cluster, the `aio` configuration may better suit the needs of the environment.
 
 ### Alternative - Deploy the Memcached Cluster With Monitoring Enabled
 
@@ -14,7 +16,9 @@
 kubectl kustomize --enable-helm /opt/genestack/kustomize/memcached/base-monitoring | kubectl apply --namespace openstack -f -
 ```
 
-> NOTE Memcached has a base-monitoring configuration which is HA and production ready that also includes a metrics exporter for prometheus metrics collection. If you'd like to have monitoring enabled for your memcached cluster ensure the prometheus operator is installed first ([Deploy Prometheus](prometheus.md)).
+!!! note
+
+    Memcached has a base-monitoring configuration which is HA and production ready, and which also includes a metrics exporter for Prometheus metrics collection. If you'd like to have monitoring enabled for your memcached cluster, ensure the Prometheus operator is installed first ([Deploy Prometheus](prometheus.md)).
 
 ## Verify readiness with the following command.
diff --git a/docs/infrastructure-ovn-setup.md b/docs/infrastructure-ovn-setup.md
index 3d50aaeb..282adfc6 100644
--- a/docs/infrastructure-ovn-setup.md
+++ b/docs/infrastructure-ovn-setup.md
@@ -6,7 +6,9 @@ Post deployment we need to setup neutron to work with our integrated OVN environ
 export ALL_NODES=$(kubectl get nodes -l 'openstack-network-node=enabled' -o 'jsonpath={.items[*].metadata.name}')
 ```
 
-> Set the annotations you need within your environment to meet the needs of your workloads on the hardware you have.
+!!! note
+
+    Set the annotations you need within your environment to meet the needs of your workloads on the hardware you have.
 
 ### Set `ovn.openstack.org/int_bridge`
@@ -23,7 +25,9 @@ kubectl annotate \
 
 Set the name of the OVS bridges we'll use. These are the bridges you will use on your hosts within OVS. The option is a string and comma separated. You can define as many OVS type bridges you need or want for your environment.
 
-> NOTE The functional example here annotates all nodes; however, not all nodes have to have the same setup.
+!!! note
+
+    The functional example here annotates all nodes; however, not all nodes have to have the same setup.
 
 ``` shell
 kubectl annotate \
@@ -47,7 +51,9 @@ kubectl annotate \
 
 Set the Neutron bridge mapping. This maps the Neutron interfaces to the ovs bridge names. These are colon delimitated between `NEUTRON_INTERFACE:OVS_BRIDGE`. Multiple bridge mappings can be defined here and are separated by commas.
 
-> Neutron interfaces are string value and can be anything you want. The `NEUTRON_INTERFACE` value defined will be used when you create provider type networks after the cloud is online.
+!!! note
+
+    Neutron interface names are string values and can be anything you want. The `NEUTRON_INTERFACE` value defined here will be used when you create provider type networks after the cloud is online.
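+
+    For example, with a bridge mapping of `physnet1:br-ex`, a provider network could later be created like this (a sketch; the mapping name and network type are illustrative):
+
+    ``` shell
+    openstack network create --external \
+      --provider-network-type flat \
+      --provider-physical-network physnet1 \
+      flat-external
+    ```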
 
 ``` shell
 kubectl annotate \
@@ -67,7 +73,9 @@ kubectl annotate \
   ovn.openstack.org/availability_zones='nova'
 ```
 
-> Any availability zone defined here should also be defined within your **neutron.conf**. The "nova" availability zone is an assumed defined, however, because we're running in a mixed OVN environment, we should define where we're allowed to execute OpenStack workloads.
+!!! note
+
+    Any availability zone defined here should also be defined within your **neutron.conf**. The "nova" availability zone is assumed to be defined; however, because we're running in a mixed OVN environment, we should define where we're allowed to execute OpenStack workloads.
 
 ### Set `ovn.openstack.org/gateway`
diff --git a/docs/infrastructure-postgresql.md b/docs/infrastructure-postgresql.md
index 11bf7b50..1ec86d22 100644
--- a/docs/infrastructure-postgresql.md
+++ b/docs/infrastructure-postgresql.md
@@ -19,9 +19,9 @@ kubectl --namespace openstack create secret generic postgresql-db-audit \
 
 ## Run the package deployment
 
-> Consider the PVC size you will need for the environment you're deploying in.
-  Make adjustments as needed near `storage.[pvc|archive_pvc].size` and
-  `volume.backup.size` to your helm overrides.
+!!! tip
+
+    Consider the PVC size you will need for the environment you're deploying in. Make adjustments as needed near `storage.[pvc|archive_pvc].size` and `volume.backup.size` in your helm overrides.
 
 ```shell
 cd /opt/genestack/submodules/openstack-helm-infra
@@ -37,5 +37,6 @@ helm upgrade --install postgresql ./postgresql \
     --set endpoints.postgresql.auth.audit.password="$(kubectl --namespace openstack get secret postgresql-db-audit -o jsonpath='{.data.password}' | base64 -d)"
 ```
 
-> In a production like environment you may need to include production specific files like the example variable file found in
-  `helm-configs/prod-example-openstack-overrides.yaml`.
+!!! tip
+
+    In a production-like environment you may need to include production-specific files like the example variable file found in `helm-configs/prod-example-openstack-overrides.yaml`.
diff --git a/docs/infrastructure-rabbitmq.md b/docs/infrastructure-rabbitmq.md
index 5ef40d42..b59a485f 100644
--- a/docs/infrastructure-rabbitmq.md
+++ b/docs/infrastructure-rabbitmq.md
@@ -5,7 +5,10 @@
 ``` shell
 kubectl apply -k /opt/genestack/kustomize/rabbitmq-operator
 ```
-> The operator may take a minute to get ready, before deploying the RabbitMQ cluster, wait until the operator pod is online.
+
+!!! note
+
+    The operator may take a minute to get ready. Before deploying the RabbitMQ cluster, wait until the operator pod is online.
 
 ## Deploy the RabbitMQ topology operator.
 
@@ -19,7 +22,9 @@ kubectl apply -k /opt/genestack/kustomize/rabbitmq-topology-operator
 kubectl apply -k /opt/genestack/kustomize/rabbitmq-cluster/base
 ```
 
-> NOTE RabbitMQ has a base configuration which is HA and production ready. If you're deploying on a small cluster the `aio` configuration may better suit the needs of the environment.
+!!! note
+
+    RabbitMQ has a base configuration which is HA and production ready. If you're deploying on a small cluster, the `aio` configuration may better suit the needs of the environment.
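+
+    A sketch of deploying the `aio` overlay instead, assuming it sits alongside `base` in the kustomize tree:
+
+    ``` shell
+    kubectl apply -k /opt/genestack/kustomize/rabbitmq-cluster/aio
+    ```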
 
 ## Validate the status with the following
 
diff --git a/docs/k8s-config.md b/docs/k8s-config.md
index d6cdecd0..6ee09954 100644
--- a/docs/k8s-config.md
+++ b/docs/k8s-config.md
@@ -14,9 +14,13 @@ sudo chmod +x /usr/local/bin/kubectl
 
 Retrieve the kube config from our first controller.
 
-> In the following example, X.X.X.X is expected to be the first controller.
+!!! tip
 
-> In the following example, ubuntu is the assumed user.
+    In the following example, X.X.X.X is expected to be the first controller.
+
+!!! note
+
+    In the following example, ubuntu is the assumed user.
 
 ``` shell
 mkdir -p ~/.kube
diff --git a/docs/k8s-kubespray-upgrade.md b/docs/k8s-kubespray-upgrade.md
index 07ab52d0..69802663 100644
--- a/docs/k8s-kubespray-upgrade.md
+++ b/docs/k8s-kubespray-upgrade.md
@@ -14,7 +14,9 @@ When running Kubespray using the Genestack submodule, review the [Genestack Upda
 
 Genestack stores inventory in the `/etc/genestack/inventory` directory. Before running the upgrade, you will need to set the **kube_version** variable to your new target version. This variable is generally found within the `/etc/genestack/inventory/group_vars/k8s_cluster/k8s-cluster.yml` file.
 
-> Review all of the group variables within an environment before running a major upgrade. Things change, and you need to be aware of your environment details before running the upgrade.
+!!! note
+
+    Review all of the group variables within an environment before running a major upgrade. Things change, and you need to be aware of your environment details before running the upgrade.
 
 Once the group variables are set, you can proceed with the upgrade execution.
 
@@ -40,7 +42,9 @@ Now run the upgrade.
 
 ``` shell
 ansible-playbook upgrade-cluster.yml
 ```
 
-> While the basic command could work, be sure to include any and all flags needed for your environment before running the upgrade.
+!!! note
+
+    While the basic command could work, be sure to include any and all flags needed for your environment before running the upgrade.
 
 ### Running an unsafe upgrade
diff --git a/docs/k8s-kubespray.md b/docs/k8s-kubespray.md
index 495894f0..21ada648 100644
--- a/docs/k8s-kubespray.md
+++ b/docs/k8s-kubespray.md
@@ -2,8 +2,9 @@
 
 Currently only the k8s provider kubespray is supported and included as submodule into the code base.
 
-> Existing OpenStack Ansible inventory can be converted using the `/opt/genestack/scripts/convert_osa_inventory.py`
-  script which provides a `hosts.yml`
+!!! info
+
+    Existing OpenStack Ansible inventory can be converted using the `/opt/genestack/scripts/convert_osa_inventory.py` script, which provides a `hosts.yml`.
 
 ### Before you Deploy
@@ -15,23 +16,27 @@ you will need to prepare your networking infrastructure and basic storage layout
 
 * 2 Network Interfaces
 
-> While we would expect the environment to be running with multiple bonds in a production cloud, two network interfaces is all that's required.
-> This can be achieved with vlan tagged devices, physical ethernet devices, macvlan, or anything else.
-> Have a look at the netplan example file found [here](https://github.com/rackerlabs/genestack/blob/main/etc/netplan/default-DHCP.yaml) for an example of how you could setup the network.
+!!! note
+
+    While we would expect the environment to be running with multiple bonds in a production cloud, two network interfaces are all that's required. This can be achieved with vlan tagged devices, physical ethernet devices, macvlan, or anything else. Have a look at the netplan example file found [here](https://github.com/rackerlabs/genestack/blob/main/etc/netplan/default-DHCP.yaml) for an example of how you could set up the network.
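+
+    A minimal netplan sketch with one untagged interface and one VLAN-tagged interface; the device names, VLAN ID, and addressing are illustrative.
+
+    ``` yaml
+    network:
+      version: 2
+      ethernets:
+        eno1:
+          dhcp4: true
+      vlans:
+        vlan404:
+          id: 404
+          link: eno1
+          addresses:
+            - 172.16.24.10/24
+    ```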
 
 * Ensure we're running kernel 5.17+
 
-> While the default kernel on most modern operating systems will work, we recommend running with Kernel 6.2+.
+!!! tip
+
+    While the default kernel on most modern operating systems will work, we recommend running with kernel 6.2+.
 
 * Kernel modules
 
-> The Kubespray tool chain will attempt to deploy a lot of things, one thing is a set of `sysctl` options which will include bridge tunings.
-> Given the tooling will assume bridging is functional, you will need to ensure the `br_netfilter` module is loaded or you're using a kernel that includes that functionality as a built-in.
+!!! warning
+
+    The Kubespray tool chain will attempt to deploy a lot of things, one of which is a set of `sysctl` options that includes bridge tunings. Given the tooling will assume bridging is functional, you will need to ensure the `br_netfilter` module is loaded or you're using a kernel that includes that functionality as a built-in.
 
 * Executable `/tmp`
 
-> The `/tmp` directory is used as a download and staging location within the environment. You will need to make sure that the `/tmp` is executable.
-> By default, some kick-systems set the mount option **noexec**, if that is defined you should remove it before running the deployment.
+!!! warning
+
+    The `/tmp` directory is used as a download and staging location within the environment. You will need to make sure that `/tmp` is executable. By default, some kickstart systems set the mount option **noexec**; if that is defined, you should remove it before running the deployment.
 
 ### Create your Inventory
 
 A default inventory file for kubespray is provided at `/etc/genestack/inventory`
 
 Checkout the [openstack-flex/prod-inventory-example.yaml](https://github.com/rackerlabs/genestack/blob/main/ansible/inventory/openstack-flex/inventory.yaml.example) file for an example of a target environment.
 
-> NOTE before you deploy the kubernetes cluster you should define the `kube_override_hostname` option in your inventory.
-  This variable will set the node name which we will want to be an FQDN. When you define the option, it should have the
-  same suffix defined in our `cluster_name` variable.
+!!! note
+
+    Before you deploy the kubernetes cluster, you should define the `kube_override_hostname` option in your inventory. This variable will set the node name, which we will want to be an FQDN. When you define the option, it should have the same suffix defined in our `cluster_name` variable.
 
 However, any Kubespray compatible inventory will work with this deployment tooling. The official [Kubespray documentation](https://kubespray.io) can be used to better understand the inventory options and requirements. Within the `ansible/playbooks/inventory` directory there is a directory named `openstack-flex` and `openstack-enterprise`. These directories provide everything we need to run a successful Kubernetes environment for genestack at scale. The difference between **enterprise** and **flex** are just target environment types.
@@ -54,8 +59,9 @@ source /opt/genestack/scripts/genestack.rc
 ansible -m shell -a 'hostnamectl set-hostname {{ inventory_hostname }}' --become all
 ```
 
-> NOTE in the above command I'm assuming the use of `cluster.local` this is the default **cluster_name** as defined in the
-  group_vars k8s_cluster file. If you change that option, make sure to reset your domain name on your hosts accordingly.
+!!! note
+
+    In the above command I'm assuming the use of `cluster.local`, which is the default **cluster_name** as defined in the group_vars k8s_cluster file. If you change that option, make sure to reset your domain name on your hosts accordingly.
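+
+    A quick way to confirm a node's hostname lines up with the cluster domain (the output shown is illustrative):
+
+    ``` shell
+    hostname -f  # e.g. openstack-flex-node-0.cluster.local
+    ```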
 
 The ansible inventory is expected at `/etc/genestack/inventory`
 
@@ -67,7 +73,9 @@ source /opt/genestack/scripts/genestack.rc
 cd /opt/genestack/ansible/playbooks
 ```
 
-> The RC file sets a number of environment variables that help ansible to run in a more easily to understand way.
+!!! note
+
+    The RC file sets a number of environment variables that help ansible run in a way that's easier to understand.
 
 While the `ansible-playbook` command should work as is with the sourced environment variables, sometimes it's necessary to set some overrides on the command line. The following example highlights a couple of overrides that are generally useful.
 
@@ -104,7 +112,9 @@ Source your environment variables
 
 ``` shell
 source /opt/genestack/scripts/genestack.rc
 ```
 
-> The RC file sets a number of environment variables that help ansible to run in a more easy to understand way.
+!!! note
+
+    The RC file sets a number of environment variables that help ansible run in a way that's easier to understand.
 
 Once the inventory is updated and configuration altered (networking etc), the Kubernetes cluster can be initialized with
 
@@ -124,6 +134,8 @@ ansible-playbook --inventory /etc/genestack/inventory/openstack-flex-inventory.y
   cluster.yml
 ```
 
-> Given the use of a venv, when running with `sudo` be sure to use the full path and pass through your environment variables; `sudo -E /home/ubuntu/.venvs/genestack/bin/ansible-playbook`.
+!!! tip
+
+    Given the use of a venv, when running with `sudo` be sure to use the full path and pass through your environment variables; `sudo -E /home/ubuntu/.venvs/genestack/bin/ansible-playbook`.
 
 Once the cluster is online, you can run `kubectl` to interact with the environment.
diff --git a/docs/k8s-labels.md b/docs/k8s-labels.md
index 75b64120..1d080091 100644
--- a/docs/k8s-labels.md
+++ b/docs/k8s-labels.md
@@ -3,8 +3,9 @@
 
 To use the K8S environment for OpenStack all of the nodes MUST be labeled. The following Labels will be used within your environment. Make sure you label things accordingly.
 
-> The following example assumes the node names can be used to identify their purpose within our environment.
-  That may not be the case in reality. Adapt the following commands to meet your needs.
+!!! note
+
+    The following example assumes the node names can be used to identify their purpose within our environment. That may not be the case in reality. Adapt the following commands to meet your needs.
 
 ``` shell
 # Label the storage nodes - optional and only used when deploying ceph for K8S infrastructure shared storage
diff --git a/docs/openstack-cinder.md b/docs/openstack-cinder.md
index 30586e99..72fc3f26 100644
--- a/docs/openstack-cinder.md
+++ b/docs/openstack-cinder.md
@@ -40,8 +40,9 @@ helm upgrade --install cinder ./cinder \
     --post-renderer-args cinder/base
 ```
 
-> In a production like environment you may need to include production specific files like the example variable file found in
-  `helm-configs/prod-example-openstack-overrides.yaml`.
+!!! tip
+
+    In a production-like environment you may need to include production-specific files like the example variable file found in `helm-configs/prod-example-openstack-overrides.yaml`.
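+
+    A sketch of how that file might be layered onto the command above with an extra values flag; the absolute path assumes the repository lives at `/opt/genestack`.
+
+    ``` shell
+    helm upgrade --install cinder ./cinder \
+        --namespace=openstack \
+        -f /opt/genestack/helm-configs/prod-example-openstack-overrides.yaml
+    ```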
 
 Once the helm deployment is complete cinder and all of it's API services will be online. However, using this setup there will be no volume node at this point. The reason volume deployments have been disabled is because we didn't expose ceph to the openstack
 environment and OSH makes a lot of ceph related assumptions. For testing purposes we're wanting to run with the logical volume driver (reference) and manage the deployment of that driver in a hybrid way. As such there's a deployment outside of our normal K8S workflow will be needed on our volume host.
 
-> The LVM volume makes the assumption that the storage node has the required volume group setup `lvmdriver-1` on the node
-  This is not something that K8S is handling at this time.
+!!! note
+
+    The LVM volume driver assumes that the storage node has the required volume group, `lvmdriver-1`, set up on the node. This is not something that K8S is handling at this time.
 
 While cinder can run with a great many different storage backends, for the simple case we want to run with the Cinder reference driver, which makes use of Logical Volumes. Because this driver is incompatible with a containerized work environment, we need
@@ -68,7 +70,9 @@
 Assuming your storage node was also deployed as a K8S node when we did our initial deployment, the DNS should already be
 operational for you; however, in the event you need to do some manual tweaking or if the node was note deployed as a K8S
 worker, then make sure you setup the DNS resolvers correctly so that your volume service node can communicate with our cluster.
 
-> This is expected to be our CoreDNS IP, in my case this is `169.254.25.10`.
+!!! note
+
+    This is expected to be our CoreDNS IP; in my case this is `169.254.25.10`.
 
 This is an example of my **systemd-resolved** conf found in `/etc/systemd/resolved.conf`
 ``` conf
@@ -134,7 +138,9 @@ root@openstack-flex-node-0:~# kubectl --namespace openstack exec -ti openstack-a
 +------------------+-------------------------------------------------+------+---------+-------+----------------------------+
 ```
 
-> Notice the volume service is up and running with our `lvmdriver-1` target.
+!!! note
+
+    The volume service is up and running with our `lvmdriver-1` target.
 
 At this point it would be a good time to define your types within cinder. For our example purposes we need to define the `lvmdriver-1` type so that we can schedule volumes to our environment.
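+
+!!! example
+
+    A hypothetical type definition run from the admin client pod; adjust the name to your environment.
+
+    ``` shell
+    kubectl --namespace openstack exec -ti openstack-admin-client -- \
+      openstack volume type create lvmdriver-1
+    ```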
@@ -134,8 +136,9 @@ If running in an environment that doesn't have hardware virtualization extension --set conf.nova.libvirt.virt_type=qemu --set conf.nova.libvirt.cpu_mode=none ``` -> In a production like environment you may need to include production specific files like the example variable file found in - `helm-configs/prod-example-openstack-overrides.yaml`. +!!! tip + + In a production like environment you may need to include production specific files like the example variable file found in `helm-configs/prod-example-openstack-overrides.yaml`. ## Deploy Neutron @@ -166,7 +169,10 @@ helm upgrade --install neutron ./neutron \ --post-renderer-args neutron/base ``` -> In a production like environment you may need to include production specific files like the example variable file found in - `helm-configs/prod-example-openstack-overrides.yaml`. +!!! tip + + In a production like environment you may need to include production specific files like the example variable file found in `helm-configs/prod-example-openstack-overrides.yaml`. + +!!! info -> The above command derives the OVN north/south bound database from our K8S environment. The insert `set` is making the assumption we're using **tcp** to connect. + The above command derives the OVN north/south bound database from our K8S environment. The insert `set` is making the assumption we're using **tcp** to connect. diff --git a/docs/openstack-glance.md b/docs/openstack-glance.md index e3c6a3c9..63e2aaac 100644 --- a/docs/openstack-glance.md +++ b/docs/openstack-glance.md @@ -20,10 +20,9 @@ kubectl --namespace openstack \ --from-literal=password="$(< /dev/urandom tr -dc _A-Za-z0-9 | head -c${1:-32};echo;)" ``` -> Before running the Glance deployment you should configure the backend which is defined in the - `helm-configs/glance/glance-helm-overrides.yaml` file. The default is a making the assumption we're running with Ceph deployed by - Rook so the backend is configured to be cephfs with multi-attach functionality. While this works great, you should consider all of - the available storage backends and make the right decision for your environment. +!!! info + + Before running the Glance deployment you should configure the backend which is defined in the `helm-configs/glance/glance-helm-overrides.yaml` file. The default is a making the assumption we're running with Ceph deployed by Rook so the backend is configured to be cephfs with multi-attach functionality. While this works great, you should consider all of the available storage backends and make the right decision for your environment. ## Run the package deployment @@ -45,11 +44,13 @@ helm upgrade --install glance ./glance \ --post-renderer-args glance/base ``` -> In a production like environment you may need to include production specific files like the example variable file found in - `helm-configs/prod-example-openstack-overrides.yaml`. +!!! tip + + In a production like environment you may need to include production specific files like the example variable file found in `helm-configs/prod-example-openstack-overrides.yaml`. + +!!! note -> Note that the defaults disable `storage_init` because we're using **pvc** as the image backend - type. In production this should be changed to swift. + The defaults disable `storage_init` because we're using **pvc** as the image backend type. In production this should be changed to swift. 
 
 ## Validate functionality
 
diff --git a/docs/openstack-gnocchi.md b/docs/openstack-gnocchi.md
index 8b4bcd05..7f28dbb8 100644
--- a/docs/openstack-gnocchi.md
+++ b/docs/openstack-gnocchi.md
@@ -70,8 +70,9 @@ helm upgrade --install gnocchi ./gnocchi \
     --post-renderer-args gnocchi/base
 ```
 
-> In a production like environment you may need to include production specific files like the example variable file found in
-  `helm-configs/prod-example-openstack-overrides.yaml`.
+!!! tip
+
+    In a production-like environment you may need to include production-specific files like the example variable file found in `helm-configs/prod-example-openstack-overrides.yaml`.
 
 ## Validate the metric endpoint
diff --git a/docs/openstack-heat.md b/docs/openstack-heat.md
index b14ac339..9578cddc 100644
--- a/docs/openstack-heat.md
+++ b/docs/openstack-heat.md
@@ -49,8 +49,9 @@ helm upgrade --install heat ./heat \
     --post-renderer-args heat/base
 ```
 
-> In a production like environment you may need to include production specific files like the example variable file found in
-  `helm-configs/prod-example-openstack-overrides.yaml`.
+!!! tip
+
+    In a production-like environment you may need to include production-specific files like the example variable file found in `helm-configs/prod-example-openstack-overrides.yaml`.
 
 ## Validate functionality
diff --git a/docs/openstack-horizon.md b/docs/openstack-horizon.md
index eec63ffe..6672744c 100644
--- a/docs/openstack-horizon.md
+++ b/docs/openstack-horizon.md
@@ -34,5 +34,6 @@ helm upgrade --install horizon ./horizon \
     --post-renderer-args horizon/base
 ```
 
-> In a production like environment you may need to include production specific files like the example variable file found in
-  `helm-configs/prod-example-openstack-overrides.yaml`.
+!!! tip
+
+    In a production-like environment you may need to include production-specific files like the example variable file found in `helm-configs/prod-example-openstack-overrides.yaml`.
diff --git a/docs/openstack-keystone-federation.md b/docs/openstack-keystone-federation.md
index 5c12da6a..031a7ed3 100644
--- a/docs/openstack-keystone-federation.md
+++ b/docs/openstack-keystone-federation.md
@@ -70,7 +70,9 @@ You're also welcome to generate your own mapping to suit your needs; however, if
     ]
 ```
 
-> Save the mapping to a local file before uploading it to keystone. In the examples, the mapping is stored at `/tmp/mapping.json`.
+!!! tip
+
+    Save the mapping to a local file before uploading it to keystone. In the examples, the mapping is stored at `/tmp/mapping.json`.
 
 Now register the mapping within Keystone.
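+
+!!! example
+
+    A hypothetical registration using the file from above; the mapping name is illustrative.
+
+    ``` shell
+    openstack mapping create --rules /tmp/mapping.json example-mapping
+    ```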
diff --git a/docs/openstack-keystone.md b/docs/openstack-keystone.md
index a52a6bc3..36dcb0b5 100644
--- a/docs/openstack-keystone.md
+++ b/docs/openstack-keystone.md
@@ -44,11 +44,13 @@ helm upgrade --install keystone ./keystone \
     --post-renderer-args keystone/base
 ```
 
-> In a production like environment you may need to include production specific files like the example variable file found in
-  `helm-configs/prod-example-openstack-overrides.yaml`.
-
-> NOTE: The image used here allows the system to run with RXT global authentication federation.
-  The federated plugin can be seen here, https://github.com/cloudnull/keystone-rxt
+!!! tip
+
+    In a production-like environment you may need to include production-specific files like the example variable file found in `helm-configs/prod-example-openstack-overrides.yaml`.
+
+!!! note
+
+    The image used here allows the system to run with RXT global authentication federation. The federated plugin can be seen here: https://github.com/cloudnull/keystone-rxt
 
 Deploy the openstack admin client pod (optional)
diff --git a/docs/openstack-octavia.md b/docs/openstack-octavia.md
index ce4799c7..ca4e7234 100644
--- a/docs/openstack-octavia.md
+++ b/docs/openstack-octavia.md
@@ -48,8 +48,9 @@ helm upgrade --install octavia ./octavia \
     --post-renderer-args octavia/base
 ```
 
-> In a production like environment you may need to include production specific files like the example variable file found in
-  `helm-configs/prod-example-openstack-overrides.yaml`.
+!!! tip
+
+    In a production-like environment you may need to include production-specific files like the example variable file found in `helm-configs/prod-example-openstack-overrides.yaml`.
 
 Now validate functionality
diff --git a/docs/openstack-skyline.md b/docs/openstack-skyline.md
index 8a931b1c..761dcd6e 100644
--- a/docs/openstack-skyline.md
+++ b/docs/openstack-skyline.md
@@ -26,12 +26,15 @@ kubectl --namespace openstack \
     --from-literal=default-region="RegionOne"
 ```
 
-> Note all the configuration is in this one secret, so be sure to set your entries accordingly.
+!!! note
+
+    All the configuration is in this one secret, so be sure to set your entries accordingly.
 
 ## Run the deployment
 
-> [!TIP]
-> Pause for a moment to consider if you will be wanting to access Skyline via your ingress controller over a specific FQDN. If so, modify `/opt/genestack/kustomize/skyline/fqdn/kustomization.yaml` to suit your needs then use `fqdn` below in lieu of `base`...
+!!! tip
+
+    Pause for a moment to consider whether you want to access Skyline via your ingress controller over a specific FQDN. If so, modify `/opt/genestack/kustomize/skyline/fqdn/kustomization.yaml` to suit your needs, then use `fqdn` below in lieu of `base`.
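+
+    ``` shell
+    # For example, applying the fqdn overlay in lieu of base
+    kubectl --namespace openstack apply -k /opt/genestack/kustomize/skyline/fqdn
+    ```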
 
 ``` shell
 kubectl --namespace openstack apply -k /opt/genestack/kustomize/skyline/base
diff --git a/docs/quickstart.md b/docs/quickstart.md
index f7796d4f..b9b7f802 100644
--- a/docs/quickstart.md
+++ b/docs/quickstart.md
@@ -2,7 +2,9 @@
 
 Before you can do anything we need to get the code. Because we've sold our soul to the submodule devil, you're going to need to recursively clone the repo into your location.
 
-> Throughout the all our documentation and examples the genestack code base will be assumed to be in `/opt`.
+!!! note
+
+    Throughout all of our documentation and examples, the genestack code base is assumed to be in `/opt`.
 
 ``` shell
 git clone --recurse-submodules -j4 https://github.com/rackerlabs/genestack /opt/genestack
 ```
diff --git a/docs/storage-ceph-rook-internal.md b/docs/storage-ceph-rook-internal.md
index 5150b542..1d6b9102 100644
--- a/docs/storage-ceph-rook-internal.md
+++ b/docs/storage-ceph-rook-internal.md
@@ -22,7 +22,9 @@ kubectl apply -k /opt/genestack/kustomize/rook-cluster/
 kubectl --namespace rook-ceph get cephclusters.ceph.rook.io
 ```
 
-> You can track the deployment with the following command `kubectl --namespace rook-ceph get pods -w`.
+!!! note
+
+    You can track the deployment with the following command: `kubectl --namespace rook-ceph get pods -w`.
 
 ## Create Storage Classes
diff --git a/docs/storage-nfs-external.md b/docs/storage-nfs-external.md
index d3addcd7..848e2b4d 100644
--- a/docs/storage-nfs-external.md
+++ b/docs/storage-nfs-external.md
@@ -2,7 +2,9 @@
 
 While NFS in K8S works great, it's not suitable for use in all situations.
 
-> Example: NFS is officially not supported by MariaDB and will fail to initialize the database backend when running on NFS.
+!!! warning
+
+    NFS is officially not supported by MariaDB and will fail to initialize the database backend when running on NFS.
 
 In Genestack, the `general` storage class is used by default for systems like RabbitMQ and MariaDB. If you intend to use NFS, you will need to ensure your use cases match the workloads and may need to make some changes within the manifests.
diff --git a/docs/storage-topolvm.md b/docs/storage-topolvm.md
index 65773391..f1a3d5db 100644
--- a/docs/storage-topolvm.md
+++ b/docs/storage-topolvm.md
@@ -8,7 +8,9 @@ The following steps are one way to set it up, however, consult the [documentatio
 
 TopoLVM requires access to a volume group on the physical host to work, which means we need to set up a volume group on our hosts. By default, TopoLVM will use the controllers as storage hosts. The genestack Kustomize solution sets the general storage volume group to `vg-general`. This value can be changed within Kustomize found at `kustomize/topolvm/general/kustomization.yaml`.
 
-> Simple example showing how to create the needed volume group.
+!!! info
+
+    A simple example showing how to create the needed volume group.
 
 ``` shell
 # NOTE sdX is a placeholder for a physical drive or partition.
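 # A minimal sketch: create the vg-general volume group the Kustomize solution expects.
 pvcreate /dev/sdX
 vgcreate vg-general /dev/sdX
 ```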