---
layout: post
author: mmazur
title: KubeVirt with Ansible, part 2
description: A deeper dive into Ansible 2.8's KubeVirt features
navbar_active: Blogs
pub-date: 8 Jul
pub-year: 2019
category: news
comments: true
tags:
---
[Part 1][blog part 1] contained a short introduction to basic VM management with Ansible's `kubevirt_vm` module.
This time we'll paint a more complete picture of all the features on offer.
As before, examples found herein are also available as full working playbooks in our [playbooks example repository][blog examples]. Additionally, each section of this post links to the corresponding module's Ansible documentation page. Those pages always contain an Examples section, which the reader is encouraged to look through, as they demonstrate many more ways of using the modules than can reasonably fit here.
[blog part 1]: {% post_url 2019-05-21-kubevirt-with-ansible-part-1 %}
[blog examples]: https://github.com/kubevirt/ansible-kubevirt-modules/tree/master/examples/blog
Virtual machines managed by KubeVirt are highly customizable. Among the features accessible from Ansible are:

- various libvirt-level virtualized hardware tweaks (e.g. `machine_type` or `cpu_model`),
- network interface configuration (`interfaces`), including multi-NIC setups utilizing the Multus CNI,
- non-persistent VMs (`ephemeral: yes`),
- direct [DataVolumes][datavols] support (`datavolumes`),
- and OpenShift Templates support (`template`).
[datavols]: {% post_url 2018-10-10-CDI-DataVolumes %}
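To give a feel for how these options fit together, here is a minimal sketch that extends the CirrOS VM from part 1 with some of the hardware tweaks listed above. The specific values (`q35`, `Conroe`) are illustrative assumptions rather than recommendations; check the `kubevirt_vm` module documentation for the accepted ones.

```yaml
kubevirt_vm:
  state: present
  name: vm-custom
  namespace: default
  memory: 128Mi
  cpu_cores: 2
  machine_type: q35      # illustrative value, not a recommendation
  cpu_model: Conroe      # illustrative value, not a recommendation
  disks:
    - name: containerdisk
      volume:
        containerDisk:
          image: kubevirt/cirros-container-disk-demo:latest
      disk:
        bus: virtio
```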
Useful links:

- Ansible module documentation
- DataVolumes
  - [Introductory blog post]({% post_url 2018-10-10-CDI-DataVolumes %})
  - Upstream documentation
- Multus
  - [Introductory blog post]({% post_url 2018-09-12-attaching-to-multiple-networks %})
  - GitHub repo
The main functionality of the `kubevirt_pvc` module is to manage Persistent Volume Claims. The following snippet should seem familiar to anyone who has dealt with PVCs before:
```yaml
kubevirt_pvc:
  name: pvc1
  namespace: default
  size: 100Mi
  access_modes:
    - ReadWriteOnce
```
Running it inside a playbook will result in a new PVC named `pvc1` with the access mode `ReadWriteOnce` and at least `100Mi` of storage assigned.
The option dedicated to working with VM images is named `cdi_source` and lets one fill a PVC with data immediately upon creation. But before we get to the examples, the Containerized Data Importer needs to be properly deployed, which is as simple as running the following commands:
```bash
export CDI_VER=$(curl -s https://github.com/kubevirt/containerized-data-importer/releases/latest | grep -o "v[0-9]\.[0-9]*\.[0-9]*")
kubectl apply -f https://github.com/kubevirt/containerized-data-importer/releases/download/$CDI_VER/cdi-operator.yaml
kubectl apply -f https://github.com/kubevirt/containerized-data-importer/releases/download/$CDI_VER/cdi-cr.yaml
```
Once `kubectl get pods -n cdi` confirms all pods are ready, CDI is good to go.
The module can instruct CDI to fill the PVC with data from:

- a remote HTTP(S) server (`http:`),
- a container registry (`registry:`),
- a local file (`upload: yes`), though this requires using `kubevirt_cdi_upload` for the actual upload step,
- or nowhere (the `blank: yes` option).
Here's a simple example:
```yaml
kubevirt_pvc:
  name: pvc2
  namespace: default
  size: 100Mi
  access_modes:
    - ReadWriteOnce
  wait: yes
  cdi_source:
    http:
      url: https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```
Please notice the `wait: yes` parameter. With it set, the module will only exit after CDI has completed transferring its data.
Let's see this in action:
```console
[mmazur@klapek part2]$ ansible-playbook pvc_cdi.yaml
(…)
TASK [Create pvc and fetch data] **********************************************************************************
changed: [localhost]

PLAY RECAP ********************************************************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

[mmazur@klapek part2]$ kubectl get pvc
NAME   STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc2   Bound    local-pv-6b6380e2   37Gi       RWO            local          71s

[mmazur@klapek part2]$ kubectl get pvc/pvc2 -o yaml|grep cdi
    cdi.kubevirt.io/storage.import.endpoint: https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    cdi.kubevirt.io/storage.import.importPodName: importer-pvc2-gvn5c
    cdi.kubevirt.io/storage.import.source: http
    cdi.kubevirt.io/storage.pod.phase: Succeeded
```
Everything worked as expected.
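For the `upload: yes` variant mentioned earlier, the flow takes two steps: create a PVC that waits for data, then push a local image into it with `kubevirt_cdi_upload`. The sketch below is illustrative only; the parameter names of the upload task and the upload proxy URL are assumptions to verify against the `kubevirt_cdi_upload` module documentation.

```yaml
- name: Create a PVC that expects an upload (sketch)
  kubevirt_pvc:
    name: pvc3
    namespace: default
    size: 100Mi
    access_modes:
      - ReadWriteOnce
    cdi_source:
      upload: yes

- name: Upload a local image into the PVC (sketch)
  kubevirt_cdi_upload:
    pvc_name: pvc3
    pvc_namespace: default
    # The upload proxy URL and the parameter names here are assumptions;
    # check the kubevirt_cdi_upload module documentation before use.
    upload_host: https://cdi-uploadproxy.example.com
    path: /tmp/cirros-0.4.0-x86_64-disk.img
```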
Useful links:

- Ansible module documentation (`kubevirt_pvc`)
- Ansible module documentation (`kubevirt_cdi_upload`)
- CDI GitHub repo
The default way of using Ansible is to iterate over a list of hosts and perform operations on each one. Listing KubeVirt VMs can be done using the KubeVirt inventory plugin. It needs a bit of setting up before it can be used.
First, enable the plugin in `ansible.cfg`:

```ini
[inventory]
enable_plugins = kubevirt
```
Then configure the plugin using a file named `kubevirt.yml` or `kubevirt.yaml`:

```yaml
plugin: kubevirt
connections:
  - namespaces:
      - default
    network_name: default
```
And now let's see if it worked and there's a VM running in the default namespace (as represented by the `namespace_default` inventory group):
```console
[mmazur@klapek part2]$ ansible -i kubevirt.yaml namespace_default --list-hosts
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
  hosts (0):
```
Right, we don't have any VMs running. Let's go back to [part 1][blog part 1], create `vm1`, make sure it's running and then try again:
```console
[mmazur@klapek part2]$ ansible-playbook ../part1/02_vm1.yaml
(…)
PLAY RECAP ********************************************************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

[mmazur@klapek part2]$ ansible-playbook ../part1/01_vm1_running.yaml
(…)
PLAY RECAP ********************************************************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

[mmazur@klapek part2]$ ansible -i kubevirt.yaml namespace_default --list-hosts
  hosts (1):
    default-vm1-2c680040-9e75-11e9-8839-525500d15501
```
Works!
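Once VMs show up in the inventory, a playbook can target the group directly. Below is a minimal, hypothetical sketch that only prints the host variables gathered by the plugin for each discovered VMI, so no connection to the guest is needed:

```yaml
- hosts: namespace_default
  gather_facts: no
  tasks:
    - name: Show what the KubeVirt inventory plugin discovered about each VMI
      debug:
        var: hostvars[inventory_hostname]
```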
Lastly, for the sake of brevity, a quick mention of the remaining modules:
- `kubevirt_preset` allows setting up VM presets to be used by deployed VMs,
- `kubevirt_template` brings in a generic templating mechanism when running on top of OpenShift or OKD,
- and `kubevirt_rs` lets one configure KubeVirt's own ReplicaSets for running multiple instances of a specified virtual machine (a rough sketch follows below).
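As a closing illustration, here is a rough sketch of a `kubevirt_rs` task that keeps three replicas of a CirrOS VM running. The label and selector structure shown is an assumption to verify against the `kubevirt_rs` module documentation:

```yaml
kubevirt_rs:
  state: present
  name: cirros-rs
  namespace: default
  replicas: 3
  memory: 64Mi
  # The selector must match the labels applied to the created VMIs;
  # the label key/value used here is purely illustrative.
  labels:
    app: cirros-rs
  selector:
    matchLabels:
      app: cirros-rs
  disks:
    - name: containerdisk
      volume:
        containerDisk:
          image: kubevirt/cirros-container-disk-demo:latest
      disk:
        bus: virtio
```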