Warning: The Debian and Yocto Ansible branches were recently merged. This main branch has been force-pushed to contain the merged code. The content of the old main branch is now archived on the yocto_legacy branch.
If you pulled the main branch before December 19, 2024, you have to do a fresh pull. We strongly encourage you to switch to the new main branch, as the following branches will no longer be maintained:
- debiancentos
- yocto_legacy
- debian-main
If you have ongoing work on the old main branch, you can proceed as follows (this assumes your HEAD points to this work):
$ git switch -c my_work_branch
$ git fetch
$ git rebase yocto_legacy
$ git branch -f main origin/main
Your work is now rebased on the yocto_legacy branch.
The SEAPATH distribution images are generated and preconfigured in other repositories. However, some elements cannot be statically configured during image creation, or are specific settings that cannot be included in a generic image.
To perform these tasks, we use Ansible, a tool designed to configure Linux machines.
The Ansible documentation is accessible at https://docs.ansible.com/.
The Yocto images are generated in the https://github.com/seapath/yocto-bsp repository.
Most of the configuration is done inside the Yocto layers. This Ansible repository is used to configure:
- The network on each machine
- The high-availability cluster
The Debian distribution ISO generated by [build_debian_iso](https://github.com/seapath/build_debian_iso/) only contains the preinstalled software.
All the configuration (network, clustering, …) is done with the Ansible playbooks found in the current repository.
Machines that need to be configured by Ansible only need to provide SSH access and have a Unix shell and a Python interpreter. Both the Yocto and Debian SEAPATH images already meet these requirements.
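As a quick sanity check, you can verify these requirements over SSH. The user and address below are placeholders for one of your machines:
$ ssh admin@192.168.1.10 'python3 --version'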
cqfd is a quick and convenient way to run commands in the current directory, but within a pre-defined Docker container. Using cqfd avoids installing anything other than Docker on your development machine.
Note: We recommend using this method as it greatly simplifies the build configuration management process.
- Install docker if it is not already done. On Ubuntu, please run:
$ sudo apt-get install docker.io
- Install cqfd:
$ git clone https://github.com/savoirfairelinux/cqfd.git
$ cd cqfd
$ sudo make install
The project page on GitHub contains detailed information on usage and installation.
- Make sure that docker does not require sudo. Use the following commands to add your user account to the docker group:
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
Log out and log back in, so that your group membership can be re-evaluated.
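To verify that Docker now works without sudo, you can for instance run:
$ docker run --rm hello-world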
The first step with cqfd is to create the build container. For this, use the cqfd init command in the Ansible directory:
$ cqfd init
Note: The step above is only required once: once the container image has been created on your machine, it is persistent. Further calls to cqfd init will do nothing, unless the container definition (.cqfd/docker/Dockerfile) has changed in the source tree.
You can now run commands through cqfd by using cqfd run followed by the command to run. For instance:
$ cqfd run ansible-playbook -i inventory.yaml myplaybook.yaml
Note: From now on, you must prefix all Ansible commands with cqfd run.
Without cqfd, you need to install the dependencies manually.
The client machine that is going to run Ansible must have Ansible 2.10 installed, an inventory file, and playbook files to play. To install Ansible 2.10 on this machine, please refer to the Ansible documentation at https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html.
Warning: Currently only Ansible version 2.10 is supported. Other versions will not work.
You must also install the netaddr and six Python 3 modules, as well as the rsync package.
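As an illustration only (the exact commands depend on your distribution and Python setup), the dependencies could be installed on Ubuntu as follows:
$ sudo apt-get install rsync python3-pip
$ python3 -m pip install "ansible==2.10.*" netaddr six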
Ansible runs playbooks on hosts described in an Ansible inventory. This inventory describes the hosts, the way to access them, and their configuration. Hosts can be grouped into groups. The Ansible inventory documentation is available at https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html.
In the inventories/examples directory you can find various examples for a SEAPATH cluster, a standalone machine and a virtual machine. See the associated README at https://github.com/seapath/ansible/tree/main/inventories#readme for more information.
Other formats are valid for inventory files, but in this document we will only cover the YAML format. These example files also contain some commented examples of common variables that can be used with Ansible, but do not contain the variables used by the SEAPATH playbooks.
Note: If you are not familiar with the YAML format, you will find a description here: https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html
You need to pass your inventory file to all Ansible commands with the -i argument. To validate your Ansible inventory file, you can use the ansible-inventory command with the --list argument.
For instance, if your inventory file is cluster.yaml:
$ ansible-inventory -i cluster.yaml --list
An Ansible inventory file follows a hierarchy, and Ansible actions can later be applied to all hosts included in a given level. Every level can have hosts and vars (variables). The top level is all: hosts defined directly under it are ungrouped, and its vars are global. By adding a children entry under all you can define groups. For instance:
all:
  hosts:
    host1:
  vars:
    my_global_var: variable_content
  children:
    group1:
      hosts:
        host2:
        host3:
      vars:
        my_group1_scope_variable: variable_content
    group2:
      hosts:
        host4:
          my_host_variable: variable_content
Once you have an Ansible inventory, you can test the host connection with the ping module:
$ ansible -i cluster.yaml all -m ping
As with all Ansible commands, you need to specify your inventory file with the -i argument, as well as the host or group on which to apply the action. Here, the ping module is selected with the -m ping argument.
To check all hosts in group1:
$ ansible -i cluster.yaml group1 -m ping
To check only host3:
$ ansible -i cluster.yaml host3 -m ping
In the inventories folder there is also another inventory example: seapath_cluster_definition_example.yaml. This example adds the variables used by the SEAPATH playbooks, along with their descriptions. This inventory file should be used as a starting point for writing your own inventory file.
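For example, you could start from a copy of this example file (the target name my_cluster_inventory.yaml is arbitrary):
$ cp inventories/seapath_cluster_definition_example.yaml inventories/my_cluster_inventory.yaml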
Playbooks are files that contain the actions to be performed by Ansible. For more information about playbooks, see the Ansible documentation: https://docs.ansible.com/ansible/2.9/user_guide/playbooks.html. Ready-to-use playbooks are provided in this repository. Playbooks performing specific actions, such as importing a disk, will have to be written by you, referring if necessary to the playbook examples in the examples/playbooks folder.
To make writing playbooks easier and simpler, Ansible provides roles, which group tasks so that they can be reused in other playbooks.
The roles useful for this project can be found in the roles folder. Each role contains a README file describing its use.
Calling a role in a playbook is done as in the example below:
- hosts: hypervisors
  vars:
    - disk_name: disk
    - action: check
  roles:
    - seapath_manage_disks
For more information about roles, see: https://docs.ansible.com/ansible/2.9/user_guide/playbooks_reuse_roles.html
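A playbook using this role is then run like any other. For instance, assuming it was saved as my_disk_playbook.yaml (a hypothetical name):
$ cqfd run ansible-playbook -i inventories/cluster_inventory.yaml my_disk_playbook.yaml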
First, make sure you are using the Git branch corresponding to your version of SEAPATH.
On SEAPATH Debian:
$ git checkout debian-main
On SEAPATH Yocto:
$ git checkout main
Before you can start using playbooks to configure and manage your SEAPATH cluster or standalone machine, you need to write the inventory file describing your setup. To do this you can rely on the example files in the inventories folder (see the inventories README.md for more details).
You can place your own inventory file in the inventories folder provided for this purpose.
In the rest of this document we will assume that the cluster inventory file is called cluster_inventory.yaml, that the network topology inventory is called networktopology_inventory.yaml, and that both are placed in the inventories folder.
To set up a SEAPATH machine, you can use the playbook seapath_setup_main.yaml, which regroups the other playbooks. This playbook also configures the cluster on the machines described in the cluster_machines Ansible group.
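For illustration, the cluster_machines group could look like this in your inventory (the hostnames are placeholders):
all:
  children:
    cluster_machines:
      hosts:
        node1:
        node2:
        node3: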
To launch the playbook seapath_setup_main.yaml, use the following command:
$ ansible-playbook -i inventories/cluster_inventory.yaml -i inventories/networktopology_inventory.yaml --skip-tags "package-install" playbooks/seapath_setup_main.yaml
Or if you use cqfd:
$ cqfd run ansible-playbook -i inventories/cluster_inventory.yaml -i inventories/networktopology_inventory.yaml --skip-tags "package-install" playbooks/seapath_setup_main.yaml
The --skip-tags "package-install" option prevents ceph-ansible from trying to install packages (they are already installed, and if your host has no Internet connection, the installation attempt would make the playbook fail).
If your inventory contains different types of machines, you can add --limit cluster_machines or --limit standalone_machine to apply this playbook only to one group. This is useful, for example, to avoid targeting the VMs when applying a change to the cluster machines.
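For example, to apply the setup playbook only to the cluster machines:
$ cqfd run ansible-playbook -i inventories/cluster_inventory.yaml -i inventories/networktopology_inventory.yaml --skip-tags "package-install" --limit cluster_machines playbooks/seapath_setup_main.yaml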
If you are using SEAPATH Debian, the security features are applied with the hardening playbook playbooks/seapath_setup_hardened_debian.yaml. If you are using SEAPATH Yocto, the security features are already applied in meta-seapath.
To launch the playbook seapath_setup_hardened_debian.yaml, use the following command:
$ ansible-playbook -i inventories/cluster_inventory.yaml -i inventories/networktopology_inventory.yaml playbooks/seapath_setup_hardened_debian.yaml
Or if you use cqfd:
$ cqfd run ansible-playbook -i inventories/cluster_inventory.yaml -i inventories/networktopology_inventory.yaml playbooks/seapath_setup_hardened_debian.yaml
The SEAPATH Ansible modules documentation is published on Ansible Galaxy.
A basic virtual machine for SEAPATH based on Debian can be created using the build_debian_iso repository.
You can also create a Yocto VM using the cqfd flavour guest_efi, as described in the yocto-bsp repository, in the following way:
$ cqfd -b guest_efi
To deploy this machine on the cluster, follow these steps:
- Create a folder vm_images at the base of this repository.
- Place the generated qcow2 file in the vm_images directory with the name guest.qcow2.
- Create an inventory describing your virtual machines, following the example inventories/examples/seapath-vm-deployement.yaml.
- For a cluster, call the playbook playbooks/deploy_vms_cluster.yaml:
$ ansible-playbook -i inventories/cluster_inventory.yaml -i inventories/vm_inventory.yaml playbooks/deploy_vms_cluster.yaml
Or if you use cqfd:
$ cqfd run ansible-playbook -i inventories/cluster_inventory.yaml -i inventories/vm_inventory.yaml playbooks/deploy_vms_cluster.yaml
Otherwise, for a standalone machine, call the playbook playbooks/deploy_vms_standalone.yaml:
$ ansible-playbook -i inventories/standalone_inventory.yaml -i inventories/vm_inventory.yaml playbooks/deploy_vms_standalone.yaml
Or if you use cqfd:
$ cqfd run ansible-playbook -i inventories/standalone_inventory.yaml -i inventories/vm_inventory.yaml playbooks/deploy_vms_standalone.yaml
On Debian, hypervisors are updated using apt commands. More information is available on the wiki: https://lf-energy.atlassian.net/wiki/spaces/SEAP/pages/31820194/Update+and+Rollback.
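For reference, a typical manual sequence on a Debian hypervisor would be the usual apt commands (to be run on the hypervisor itself):
$ sudo apt update
$ sudo apt upgrade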
On Yocto, machines are updated using SWUpdate.
First, create a swu file using the yocto-bsp repository.
Then, the update is deployed by Ansible. You need to pass two variables on the command line:
- machine_to_update is the name of the machine that Ansible will update.
- swu_image is the name of the swu file that was created in yocto-bsp.
Note: The swu image must be placed in the swu_images directory.
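For example (the source path is a placeholder):
$ mkdir -p swu_images
$ cp /path/to/my-update.swu swu_images/update.swu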
To update a machine in the cluster, call the playbook playbooks/update_machine_cluster.yaml:
$ ansible-playbook -i inventories/cluster_inventory.yaml -e "machine_to_update=node1" -e "swu_image=update.swu" playbooks/update_machine_cluster.yaml
Or if you use cqfd:
$ cqfd run ansible-playbook -i inventories/cluster_inventory.yaml -e "machine_to_update=node1" -e "swu_image=update.swu" playbooks/update_machine_cluster.yaml
Otherwise, for a standalone machine, call the playbook playbooks/update_machine_standalone.yaml:
$ ansible-playbook -i inventories/standalone_inventory.yaml -e "machine_to_update=node1" -e "swu_image=update.swu" playbooks/update_machine_standalone.yaml
Or if you use cqfd:
$ cqfd run ansible-playbook -i inventories/standalone_inventory.yaml -e "machine_to_update=node1" -e "swu_image=update.swu" playbooks/update_machine_standalone.yaml
A CI currently runs on the Ansible repository. If you want to contribute to the project, this CI will use your code to configure a cluster and run all non-regression tests.
After opening your pull request, the CI is visible as a GitHub Action on the conversation page. A link to a test report is given in the step "Upload test report". All tests must pass for the pull request to be merged.
For more information, please see:
- The Wiki, for a user-oriented guide.
- The CI repository, for the technical implementation.
Ansible-lint is run on every pull request targeting the debian-main branch. Some rules are ignored; they can be found in the configuration file in the CI repository.
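To catch lint issues before opening a pull request, you can also run ansible-lint locally on a playbook, for instance (assuming ansible-lint is available, e.g. inside the cqfd container):
$ cqfd run ansible-lint playbooks/seapath_setup_main.yaml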