fix: Update blockquotes to use Admonitions
Signed-off-by: Kevin Carter <[email protected]>
cloudnull committed Mar 7, 2024
1 parent af0b36e commit 02d1b8c
Showing 30 changed files with 265 additions and 94 deletions.
8 changes: 6 additions & 2 deletions docs/build-local-images.md
@@ -43,8 +43,12 @@ EOF
kubectl --namespace kube-system get secret registry-kube-system-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > /opt/registry.ca
```

> NOTE the above commands make the assumption that you're running a docker registry within the kube-system namespace and are running the provided genestack ingress definition to support that environment. If you have a different registry you will need to adjust the commands to fit your environment.
!!! note

The above commands assume that you're running a Docker registry within the `kube-system` namespace and are running the provided genestack ingress definition to support that environment. If you have a different registry, you will need to adjust the commands to fit your environment.

Once the above commands have been executed, the file `/opt/octavia-ovn-helm-overrides.yaml` will be present and can be included in our helm command when we deploy Octavia.
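As a sketch, including the overrides in that helm command might look like the following. The chart path and release name here are illustrative assumptions, not the exact invocation from the deployment guide.

``` shell
# Hypothetical helm invocation; chart path and release name are assumptions.
helm upgrade --install octavia ./octavia \
  --namespace openstack \
  -f /opt/octavia-ovn-helm-overrides.yaml
```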

> If you're using the local registry with a self-signed certificate, you will need to include the CA `/opt/registry.ca` in all of your potential worker nodes so that the container image is able to be pulled.
!!! tip

If you're using the local registry with a self-signed certificate, you will need to distribute the CA `/opt/registry.ca` to all of your potential worker nodes so that the container image can be pulled.
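One way to distribute the CA on Ubuntu/Debian-based worker nodes is sketched below; the target path and tooling are distribution-specific assumptions, so adjust for your operating system.

``` shell
# Debian/Ubuntu-style CA handling; paths and tools vary by distribution.
sudo cp /opt/registry.ca /usr/local/share/ca-certificates/registry-ca.crt
sudo update-ca-certificates
# The container runtime may need a restart to pick up the new CA.
```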
21 changes: 15 additions & 6 deletions docs/build-test-envs.md
@@ -10,16 +10,19 @@ Take a moment to orient yourself, there are a few items to consider before movin

### Clone Genestack

> Your local genestack repository will be transferred to the eventual launcher instance for convenience **perfect for development**.
See [Getting Started](quickstart.md) for an example of how to recursively clone the repository and its submodules.
!!! note

Your local genestack repository will be transferred to the eventual launcher instance for convenience, **perfect for development**. See [Getting Started](quickstart.md) for an example of how to recursively clone the repository and its submodules.

### Create a VirtualEnv

This is optional but always recommended. There are multiple tools for this, pick your poison.
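For example, using the standard library `venv` module (the path is just a suggestion):

``` shell
# Create and activate a virtual environment for the genestack tooling.
python3 -m venv ~/.venvs/genestack
source ~/.venvs/genestack/bin/activate
```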

### Install Ansible Dependencies

> Activate your venv if you're using one.
!!! info

Activate your venv if you're using one.

```
pip install ansible openstacksdk
@@ -54,7 +57,9 @@ See the configuration guide [here](https://docs.openstack.org/openstacksdk/lates
## Create a Test Environment
> This is used to deploy new infra on an existing OpenStack cloud. If you're deploying on baremetal this document can be skipped.
!!! abstract
This is used to deploy new infra on an existing OpenStack cloud. If you're deploying on baremetal this document can be skipped.
If deploying in a lab environment on an OpenStack cloud, you can run the `infra-deploy.yaml` playbook which will create all of the resources needed to operate the test environment.

@@ -72,7 +77,9 @@ cd ansible/playbooks

Run the test infrastructure deployment.

> Ensure `os_cloud_name` as well as other values within your `infra-deploy.yaml` match a valid cloud name in your openstack configuration as well as resource names within it.
!!! tip

Ensure `os_cloud_name` and the other values within your `infra-deploy.yaml` match a valid cloud name in your OpenStack configuration, along with the resource names within it.

!!! note

@@ -110,7 +117,9 @@ The result of the playbook will look something like this.

The lab deployment playbook will build an environment suitable for running Genestack, however, it does not by itself run the full deployment. Once your resources are online, you can login to the "launcher" node and begin running the deployment. To make things fairly simple, the working development directory will be sync'd to the launcher node, along with keys and your generated inventory.

> If you're wanting to inspect the generated inventory, you can find it in your home directory.
!!! tip

If you want to inspect the generated inventory, you can find it in your home directory.
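For instance, you could point `ansible-inventory` at it to review the parsed result. The filename below is an assumption for illustration; use whatever the playbook actually dropped in your home directory.

``` shell
# Hypothetical filename; substitute the inventory file found in $HOME.
ansible-inventory -i ~/inventory.yaml --list
```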

### SSH to lab

13 changes: 13 additions & 0 deletions docs/genestack-architecture.md
@@ -0,0 +1,13 @@
# Environment Architecture

Genestack makes use of some homegrown solutions, community operators, and OpenStack-Helm. Everything
in Genestack comes together to form a cloud in a new and exciting way, all built with open source solutions
to manage cloud infrastructure in the way you need it.

They say a picture is worth 1000 words, so here's a picture.

![Genestack Architecture Diagram](assets/images/diagram-genestack.png)

The idea behind Genestack is simple, build an Open Infrastructure system that unites Public and Private
clouds with a platform that is simple enough for the hobbyist yet capable of exceeding the needs of the
enterprise.
62 changes: 62 additions & 0 deletions docs/genestack-components.md
@@ -0,0 +1,62 @@
# Product Component Matrix

The following components are part of the initial product release
and are largely deployed with Helm+Kustomize against the K8s API (v1.28 and up).
Some components are initially only installed with the public cloud-based
OpenStack Flex service, while OpenStack Enterprise naturally provides a larger
variety of services:

| Group | Component | OpenStack Flex | OpenStack Enterprise |
|------------|----------------------|----------------|----------------------|
| Kubernetes | Kubernetes | Required | Required |
| Kubernetes | Kubernetes Dashboard | Required | Required |
| Kubernetes | Cert-Manager | Required | Required |
| Kubernetes | MetalLB (L2/L3) | Required | Required |
| Kubernetes | Core DNS | Required | Required |
| Kubernetes | Ingress Controller (Nginx) | Required | Required |
| Kubernetes | Kube-Proxy (IPVS) | Required | Required |
| Kubernetes | Calico | Optional | Required |
| Kubernetes | Kube-OVN | Required | Optional |
| Kubernetes | Helm | Required | Required |
| Kubernetes | Kustomize | Required | Required |
| OpenStack | openVswitch (Helm) | Optional | Required |
| OpenStack | Galera (Operator) | Required | Required |
| OpenStack | rabbitMQ (Operator) | Required | Required |
| OpenStack | memcacheD (Operator) | Required | Required |
| OpenStack | Ceph Rook | Optional | Required |
| OpenStack | iscsi/tgtd | Required | Optional |
| OpenStack | Keystone (Helm) | Required | Required |
| OpenStack | Glance (Helm) | Required | Required |
| OpenStack | Cinder (Helm) | Required | Required |
| OpenStack | Nova (Helm) | Required | Required |
| OpenStack | Neutron (Helm) | Required | Required |
| OpenStack | Placement (Helm) | Required | Required |
| OpenStack | Horizon (Helm) | Required | Required |
| OpenStack | Skyline (Helm) | Optional | Optional |
| OpenStack | Heat (Helm) | Required | Required |
| OpenStack | Designate (Helm) | Optional | Required |
| OpenStack | Barbican (Helm) | Required | Required |
| OpenStack | Octavia (Helm) | Required | Required |
| OpenStack | Ironic (Helm) | Optional | Required |
| OpenStack | metal3.io | Optional | Required |

Initial monitoring components consist of the following projects:

| Group | Component | OpenStack Flex | OpenStack Enterprise |
|------------|----------------------|----------------|----------------------|
| Kubernetes | Prometheus | Required | Required |
| Kubernetes | Thanos | Required | Required |
| Kubernetes | Alertmanager | Required | Required |
| Kubernetes | Grafana | Required | Required |
| Kubernetes | Node Exporter | Required | Required |
| Kubernetes | redfish Exporter | Required | Required |
| OpenStack | OpenStack Exporter | Required | Required |

At a later stage, these components will be added:

| Group | Component | OpenStack Flex | OpenStack Enterprise |
|-----------|----------------------|----------------|----------------------|
| OpenStack | MongoDB | Optional | Required |
| OpenStack | Aodh (Helm) | Optional | Required |
| OpenStack | Ceilometer (Helm) | Optional | Required |
| OpenStack | Masakari (Helm) | Optional | Required |
8 changes: 6 additions & 2 deletions docs/genestack-getting-started.md
@@ -2,7 +2,9 @@

Before you can do anything, we need to get the code. Because we've sold our soul to the submodule devil, you're going to need to recursively clone the repo into your location.

> Throughout the all our documentation and examples the genestack code base will be assumed to be in `/opt`.
!!! info

Throughout all our documentation and examples, the genestack code base is assumed to be in `/opt`.

``` shell
git clone --recurse-submodules -j4 https://github.com/rackerlabs/genestack /opt/genestack
@@ -22,6 +24,8 @@ export GENESTACK_PRODUCT=openstack-enterprise
/opt/genestack/bootstrap.sh
```

> If running this command with `sudo`, be sure to run with `-E`. `sudo -E /opt/genestack/bootstrap.sh`. This will ensure your active environment is passed into the bootstrap command.
!!! tip

If running this command with `sudo`, be sure to run with `-E`: `sudo -E /opt/genestack/bootstrap.sh`. This ensures your active environment is passed into the bootstrap command.

Once the bootstrap is completed, the default Kubernetes provider will be configured inside `/etc/genestack/provider`
4 changes: 3 additions & 1 deletion docs/genestack-upgrade.md
@@ -15,7 +15,9 @@ git fetch origin
git rebase origin/main
```

> You may want to checkout a specific SHA or tag when running a stable environment.
!!! tip

You may want to check out a specific SHA or tag when running a stable environment.
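For example, pinning the checkout to a tag would look like this; the tag name shown is illustrative, so pick a real tag or SHA from the repository.

``` shell
# Fetch tags and pin to one; v1.0.0 is an example tag name only.
git fetch --tags origin
git checkout v1.0.0
```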

Update the submodules.

4 changes: 3 additions & 1 deletion docs/infrastructure-mariadb-connect.md
@@ -8,4 +8,6 @@ mysql -h $(kubectl -n openstack get service mariadb-galera-primary -o jsonpath='
-u root
```

> The following command will leverage your kube configuration and dynamically source the needed information to connect to the MySQL cluster. You will need to ensure you have installed the mysql client tools on the system you're attempting to connect from.
!!! info

The above command leverages your kube configuration and dynamically sources the information needed to connect to the MySQL cluster. Ensure you have installed the MySQL client tools on the system you're connecting from.
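If the client tools are missing, installing them on a Debian/Ubuntu host might look like the following; the package name varies by distribution (for example, RHEL-family hosts use `dnf` and an equivalent package).

``` shell
# Debian/Ubuntu example; package names differ on other distributions.
sudo apt-get update
sudo apt-get install -y mariadb-client
```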
8 changes: 6 additions & 2 deletions docs/infrastructure-mariadb.md
@@ -18,7 +18,9 @@ If you've changed your k8s cluster name from the default cluster.local, edit `cl
kubectl kustomize --enable-helm /opt/genestack/kustomize/mariadb-operator | kubectl --namespace mariadb-system apply --server-side --force-conflicts -f -
```

> The operator may take a minute to get ready, before deploying the Galera cluster, wait until the webhook is online.
!!! info

The operator may take a minute to get ready. Before deploying the Galera cluster, wait until the webhook is online.

``` shell
kubectl --namespace mariadb-system get pods -w
@@ -30,7 +32,9 @@
kubectl --namespace openstack apply -k /opt/genestack/kustomize/mariadb-cluster/base
```

> NOTE MariaDB has a base configuration which is HA and production ready. If you're deploying on a small cluster the `aio` configuration may better suit the needs of the environment.
!!! note

MariaDB has a base configuration which is HA and production-ready. If you're deploying on a small cluster, the `aio` configuration may better suit the needs of the environment.
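Assuming the `aio` overlay lives alongside the base in the kustomize tree (the path below mirrors the base path and is an assumption), the small-cluster deployment would look like:

``` shell
# Path assumed to mirror the base kustomize layout; verify before applying.
kubectl --namespace openstack apply -k /opt/genestack/kustomize/mariadb-cluster/aio
```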

## Verify readiness with the following command

8 changes: 6 additions & 2 deletions docs/infrastructure-memcached.md
@@ -6,15 +6,19 @@
kubectl kustomize --enable-helm /opt/genestack/kustomize/memcached/base | kubectl apply --namespace openstack -f -
```

> NOTE Memcached has a base configuration which is HA and production ready. If you're deploying on a small cluster the `aio` configuration may better suit the needs of the environment.
!!! note

Memcached has a base configuration which is HA and production-ready. If you're deploying on a small cluster, the `aio` configuration may better suit the needs of the environment.

### Alternative - Deploy the Memcached Cluster With Monitoring Enabled

``` shell
kubectl kustomize --enable-helm /opt/genestack/kustomize/memcached/base-monitoring | kubectl apply --namespace openstack -f -
```

> NOTE Memcached has a base-monitoring configuration which is HA and production ready that also includes a metrics exporter for prometheus metrics collection. If you'd like to have monitoring enabled for your memcached cluster ensure the prometheus operator is installed first ([Deploy Prometheus](prometheus.md)).
!!! note

Memcached has a base-monitoring configuration which is HA and production-ready, and which also includes a metrics exporter for Prometheus metrics collection. If you'd like monitoring enabled for your Memcached cluster, ensure the Prometheus operator is installed first ([Deploy Prometheus](prometheus.md)).

## Verify readiness with the following command.

16 changes: 12 additions & 4 deletions docs/infrastructure-ovn-setup.md
@@ -6,7 +6,9 @@ Post deployment we need to setup neutron to work with our integrated OVN environ
export ALL_NODES=$(kubectl get nodes -l 'openstack-network-node=enabled' -o 'jsonpath={.items[*].metadata.name}')
```

> Set the annotations you need within your environment to meet the needs of your workloads on the hardware you have.
!!! note

Set the annotations you need within your environment to meet the needs of your workloads on the hardware you have.

### Set `ovn.openstack.org/int_bridge`

@@ -23,7 +25,9 @@

Set the name of the OVS bridges we'll use. These are the bridges you will use on your hosts within OVS. The option is a string and comma separated. You can define as many OVS type bridges you need or want for your environment.

> NOTE The functional example here annotates all nodes; however, not all nodes have to have the same setup.
!!! note

The functional example here annotates all nodes; however, not all nodes have to have the same setup.

``` shell
kubectl annotate \
@@ -47,7 +51,9 @@

Set the Neutron bridge mapping. This maps the Neutron interfaces to the OVS bridge names. These are colon-delimited as `NEUTRON_INTERFACE:OVS_BRIDGE`. Multiple bridge mappings can be defined here, separated by commas.

> Neutron interfaces are string value and can be anything you want. The `NEUTRON_INTERFACE` value defined will be used when you create provider type networks after the cloud is online.
!!! note

Neutron interface names are string values and can be anything you want. The `NEUTRON_INTERFACE` value defined will be used when you create provider-type networks after the cloud is online.
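As a later-stage sketch of how that value gets used, assuming a `NEUTRON_INTERFACE` of `physnet1` (both the physnet and network names below are illustrative assumptions):

``` shell
# physnet1 and the network name are assumptions for illustration only.
openstack network create --external \
  --provider-network-type flat \
  --provider-physical-network physnet1 \
  flat-external
```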

``` shell
kubectl annotate \
@@ -67,7 +73,9 @@
ovn.openstack.org/availability_zones='nova'
```

> Any availability zone defined here should also be defined within your **neutron.conf**. The "nova" availability zone is an assumed defined, however, because we're running in a mixed OVN environment, we should define where we're allowed to execute OpenStack workloads.
!!! note

Any availability zone defined here should also be defined within your **neutron.conf**. The "nova" availability zone is assumed to be defined; however, because we're running in a mixed OVN environment, we should define where we're allowed to execute OpenStack workloads.

### Set `ovn.openstack.org/gateway`

11 changes: 6 additions & 5 deletions docs/infrastructure-postgresql.md
@@ -19,9 +19,9 @@ kubectl --namespace openstack create secret generic postgresql-db-audit \

## Run the package deployment

> Consider the PVC size you will need for the environment you're deploying in.
Make adjustments as needed near `storage.[pvc|archive_pvc].size` and
`volume.backup.size` to your helm overrides.
!!! tip

Consider the PVC size you will need for the environment you're deploying in. Make adjustments as needed near `storage.[pvc|archive_pvc].size` and `volume.backup.size` in your helm overrides.

```shell
cd /opt/genestack/submodules/openstack-helm-infra
@@ -37,5 +37,6 @@ helm upgrade --install postgresql ./postgresql \
--set endpoints.postgresql.auth.audit.password="$(kubectl --namespace openstack get secret postgresql-db-audit -o jsonpath='{.data.password}' | base64 -d)"
```

> In a production like environment you may need to include production specific files like the example variable file found in
`helm-configs/prod-example-openstack-overrides.yaml`.
!!! tip

In a production-like environment, you may need to include production-specific files, like the example variable file found in `helm-configs/prod-example-openstack-overrides.yaml`.
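Appending that overrides file to the helm command is sketched below; the `/opt/genestack/` prefix is an assumption based on where this guide clones the repository, and the `--set` flags from the full command above are elided for brevity.

``` shell
# Sketch only; combine with the full set of --set flags from the command above.
# The /opt/genestack prefix is assumed from the clone location used in this guide.
helm upgrade --install postgresql ./postgresql \
  --namespace=openstack \
  -f /opt/genestack/helm-configs/prod-example-openstack-overrides.yaml
```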
9 changes: 7 additions & 2 deletions docs/infrastructure-rabbitmq.md
@@ -5,7 +5,10 @@
``` shell
kubectl apply -k /opt/genestack/kustomize/rabbitmq-operator
```
> The operator may take a minute to get ready, before deploying the RabbitMQ cluster, wait until the operator pod is online.

!!! note

The operator may take a minute to get ready. Before deploying the RabbitMQ cluster, wait until the operator pod is online.
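You can watch for the operator pod to come online with something like the following; `rabbitmq-system` is the upstream operator's default namespace and is an assumption here, so adjust if your deployment places it elsewhere.

``` shell
# rabbitmq-system is the upstream default namespace; adjust if yours differs.
kubectl --namespace rabbitmq-system get pods -w
```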

## Deploy the RabbitMQ topology operator.

@@ -19,7 +22,9 @@ kubectl apply -k /opt/genestack/kustomize/rabbitmq-topology-operator
kubectl apply -k /opt/genestack/kustomize/rabbitmq-cluster/base
```

> NOTE RabbitMQ has a base configuration which is HA and production ready. If you're deploying on a small cluster the `aio` configuration may better suit the needs of the environment.
!!! note

RabbitMQ has a base configuration which is HA and production-ready. If you're deploying on a small cluster, the `aio` configuration may better suit the needs of the environment.

## Validate the status with the following

Expand Down
8 changes: 6 additions & 2 deletions docs/k8s-config.md
@@ -14,9 +14,13 @@ sudo chmod +x /usr/local/bin/kubectl

Retrieve the kube config from our first controller.

> In the following example, X.X.X.X is expected to be the first controller.
!!! tip

> In the following example, ubuntu is the assumed user.
In the following example, X.X.X.X is expected to be the first controller.

!!! note

In the following example, ubuntu is the assumed user.

``` shell
mkdir -p ~/.kube
8 changes: 6 additions & 2 deletions docs/k8s-kubespray-upgrade.md
@@ -14,7 +14,9 @@ When running Kubespray using the Genestack submodule, review the [Genestack Upda

Genestack stores inventory in the `/etc/genestack/inventory` directory. Before running the upgrade, you will need to set the **kube_version** variable to your new target version. This variable is generally found within the `/etc/genestack/inventory/group_vars/k8s_cluster/k8s-cluster.yml` file.

> Review all of the group variables within an environment before running a major upgrade. Things change, and you need to be aware of your environment details before running the upgrade.
!!! note

Review all of the group variables within an environment before running a major upgrade. Things change, and you need to be aware of your environment details before running the upgrade.
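A quick way to confirm the target version is actually set before proceeding, using the inventory path described above:

``` shell
# Show where kube_version is defined across the cluster group variables.
grep -R "kube_version" /etc/genestack/inventory/group_vars/
```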

Once the group variables are set, you can proceed with the upgrade execution.

@@ -40,7 +42,9 @@
ansible-playbook upgrade-cluster.yml
```

> While the basic command could work, be sure to include any and all flags needed for your environment before running the upgrade.
!!! note

While the basic command could work, be sure to include any and all flags needed for your environment before running the upgrade.
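For instance, a run that pins the inventory and escalates privileges might look like this; the inventory filename and flags shown are assumptions, so include whatever your environment actually requires.

``` shell
# Flags are illustrative; the inventory filename is an assumption.
ansible-playbook -i /etc/genestack/inventory/inventory.yaml --become upgrade-cluster.yml
```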

### Running an unsafe upgrade
