Distribution of master data (the master's IP address) to slaves via a single point. The master data is applied before the usual contextualization is done. Only CernVM 4 is supported.
- Create or get a context file for your machines.
- Go to the Cluster Pairing Service and generate a new cluster pairing PIN.
- Enter this PIN as `cvm_cluster_pin` in the `ucernvm` section of your context file.
- Create a context file for your master machine by adding `cvm_cluster_master=yes` to the `ucernvm` section of the original context file.
- Create a context file for your slave machine(s). You must place the `###MASTER_IP_PLACEHOLDER###` string in this file at the location where the master's IP address should be inserted.
- Launch a master VM.
- Launch slave VMs.
A cluster PIN is for one-time use only. If you want to create a new cluster or recreate an old one, you must generate a new PIN.

An example master context file:
[amiconfig]
plugins=cernvm
[cernvm]
organisations=ATLAS
shell=/bin/bash
config_url=http://cernvm.cern.ch/config
users=atlas:atlas:$6$4ZctFHqh$9kZHLCHVUvWBs76SjWxN2QfN.vATu7/AWh1dsVV8PRqm6UdBCvcPG8YVE4epYV2dVMg.nJxG1yqbi/8q8VPhO1
edition=Desktop
screenRes=1280x700
keyboard=us
startXDM=on
[ucernvm-begin]
cvm_cluster_master=yes
cvm_cluster_pin=enico1qq0oey
cvmfs_branch=cernvm-sl7.cern.ch
cvmfs_server=hepvm.cern.ch
cvmfs_path=cvm4
[ucernvm-end]
An example slave context file, with the placeholder in an arbitrary section:

[example_section]
my_master_ip=###MASTER_IP_PLACEHOLDER###
[amiconfig]
plugins=cernvm
[cernvm]
organisations=ATLAS
shell=/bin/bash
config_url=http://cernvm.cern.ch/config
users=atlas:atlas:$6$4ZctFHqh$9kZHLCHVUvWBs76SjWxN2QfN.vATu7/AWh1dsVV8PRqm6UdBCvcPG8YVE4epYV2dVMg.nJxG1yqbi/8q8VPhO1
edition=Desktop
screenRes=1280x700
keyboard=us
startXDM=on
[ucernvm-begin]
cvm_cluster_pin=enico1qq0oey
cvmfs_branch=cernvm-sl7.cern.ch
cvmfs_server=hepvm.cern.ch
cvmfs_path=cvm4
[ucernvm-end]
As you can see, the master and slave context files differ only in the `ucernvm` section: the master context contains the additional field `cvm_cluster_master=yes`.

The following keys have to be placed in the `ucernvm` section of the context file.
- `cvm_cluster_pin`: cluster contextualization PIN, acquired from the cluster service, which is needed for correct master/slave identification. This key automatically expires after 24 hours, so you need to launch your machines before then.
- `cvm_cluster_master`: whether the context file is for the master or not.
- `cvm_service_url`: URL of the service you want to use for the synchronization. Defaults to `https://cernvm-online.cern.ch`. You may specify the port as well, e.g. `my-service.example.com:8000`.
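For instance, a context file that points the machines at a self-hosted service could use a `ucernvm` section like the sketch below. The PIN value and host name are placeholders, not real values; the `cvmfs_*` lines mirror the examples above:

```ini
[ucernvm-begin]
cvm_cluster_pin=<pin-from-the-pairing-service>
cvm_service_url=my-service.example.com:8000
cvmfs_branch=cernvm-sl7.cern.ch
cvmfs_server=hepvm.cern.ch
cvmfs_path=cvm4
[ucernvm-end]
```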
Worker nodes do not have to be created before the master node; you can create them at the same time. Worker nodes poll the Cluster PIN Service every 30 seconds for approximately 25 minutes (50 requests in total). If the master node has not completed its boot by then, no cluster contextualization is done and the worker nodes resume booting.
The same mechanism applies to the master node when POSTing the master data to the Cluster PIN Service. If the data cannot be submitted within this period (e.g. there is no internet connection or the Cluster PIN Service is not responding), the master resumes its boot without any cluster contextualization.
If the master cannot POST the data (e.g. when you accidentally create two machines with a master context using the same or a reused PIN), it keeps trying for the period described above, then gives up and resumes the boot. No cluster contextualization is performed in that case.
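The retry behaviour described above can be sketched as a simple bounded polling loop. This is an illustrative sketch, not the actual agent code: `fetch_master_ip` stands in for whatever performs the HTTP request against the Cluster PIN Service, and the timing constants are the ones quoted in the text (30-second interval, roughly 25 minutes overall).

```python
import time

POLL_INTERVAL = 30   # seconds between requests
MAX_ATTEMPTS = 50    # 50 * 30 s = ~25 minutes in total

def wait_for_master(fetch_master_ip, interval=POLL_INTERVAL, attempts=MAX_ATTEMPTS):
    """Poll the Cluster PIN Service until the master's IP appears.

    `fetch_master_ip` is a callable returning the IP string, or None
    while the master has not published its data yet.  Returns the IP,
    or None if the master never showed up, in which case the boot
    resumes without cluster contextualization.
    """
    for _ in range(attempts):
        ip = fetch_master_ip()
        if ip is not None:
            return ip
        time.sleep(interval)
    return None
```

The master's POST retry follows the same shape, with the fetch swapped for a submission attempt.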
There are two components:
- The cluster contextualization service, which stores and provides the data.
- The contextualization agents (`amiconfig` and `cloud-init`), which process the data.
This is a lightweight Django application, currently residing at cernvm-online.cern.ch. If you want to implement your own service, it must respect the REST API (see below).
The amiconfig contextualization agent performs the data fetching and replacement in the bootloader phase. It consists of a single file (`scripts.d/08clustercontext`) in the cernvm-micro repository.
The cloud-init contextualization agent performs the data fetching and replacement before the `cloud-config` phase (for the cloud-init service). It also submits the `master_ready` status when the master bootup is completed (and slaves can start fetching the data).
It consists of two systemd services: `cernvm-cluster-contextualization`, which performs the data fetching and replacement in the cloud-config context file, and `cernvm-master-ready`, which sends a `master_ready` status to the service (when run on a master).
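As a rough illustration, the fetch-and-replace service could be a oneshot unit ordered before cloud-init's config phase. This sketch is hypothetical: the unit name comes from the text above, but the `ExecStart` path and the exact ordering directives are assumptions, not the shipped unit file.

```ini
# Hypothetical sketch of cernvm-cluster-contextualization.service;
# the real unit ships with CernVM and may differ.
[Unit]
Description=CernVM cluster contextualization (fetch master data, replace placeholder)
Before=cloud-config.service

[Service]
Type=oneshot
# Assumed script location, for illustration only.
ExecStart=/usr/libexec/cernvm-cluster-contextualization
```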
- The master VM is created with the master context.
- It pushes its data (IP address) to the REST service under the `cvm_cluster_pin` provided in the context file.
- Slave VMs are created with the slave context.
- Slaves fetch the master data (IP address) from the REST service, stored under the `cvm_cluster_pin` provided in the context file.
- Slaves finish their boot process.
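The sequence above boils down to one POST from the master and one GET from each slave. The sketch below illustrates it with Python's standard library; the item path `/api/v1/clusters/<pin>/items` is a made-up example route (the real routes belong to the Cluster PIN Service), while the default service URL and the `master_ip` key are taken from this document.

```python
import json
import urllib.request

DEFAULT_SERVICE = "https://cernvm-online.cern.ch"
# NOTE: hypothetical route for illustration, not necessarily the
# real path exposed by the Cluster PIN Service.
ITEMS_PATH = "/api/v1/clusters/{pin}/items"

def items_url(pin, service=DEFAULT_SERVICE):
    """Build the URL of the item collection for a cluster PIN."""
    return service + ITEMS_PATH.format(pin=pin)

def push_master_ip(pin, ip, opener=urllib.request.urlopen):
    """Master side: POST the {"key": "master_ip", ...} item."""
    body = json.dumps({"key": "master_ip", "value": ip}).encode()
    req = urllib.request.Request(
        items_url(pin), data=body,
        headers={"Content-Type": "application/json"}, method="POST")
    return opener(req)

def fetch_master_ip(pin, opener=urllib.request.urlopen):
    """Slave side: GET the item list and pick out master_ip."""
    with opener(items_url(pin)) as resp:
        items = json.loads(resp.read())
    return next((i["value"] for i in items if i["key"] == "master_ip"), None)
```

Injecting the `opener` keeps the sketch testable without a running service; the agents would simply use the default `urllib.request.urlopen`.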
POST (accepts no data)
{
}
Returns the newly created cluster PIN:
{
"pin": "string",
"creation_time": "datetime"
}
GET
Returns info about the cluster pin:
{
"pin": "string",
"creation_time": "datetime"
}
DELETE
Delete the given cluster pin.
GET
Returns the list of keys (items) saved for the given cluster PIN:
[
{
"key": "master_ip",
"value": "10.10.25.5"
}
]
POST
{
"key": "key_of_the_item",
"value": "item_value"
}
Returns the newly created item:
{
"key": "key_of_the_item",
"value": "item_value"
}
GET
Returns info about a key (item) of the given cluster. By default the response uses the `text/plain` content type:
master_ip: 10.10.25.5
You can request JSON by setting the `Accept` header to `application/json`:
{
"key": "master_ip",
"value": "10.10.25.5"
}
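A client consuming this endpoint therefore has to handle both representations. A minimal parsing sketch, where the `key: value` line format and the JSON shape are taken from the examples above:

```python
import json

def parse_item(body, content_type):
    """Parse a single-item response in either representation.

    `body` is the raw response text, `content_type` the Content-Type
    of the response.  Returns a (key, value) tuple.
    """
    if content_type == "application/json":
        item = json.loads(body)
        return item["key"], item["value"]
    # text/plain default: a single "key: value" line
    key, _, value = body.strip().partition(": ")
    return key, value
```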
PUT, PATCH
Modify the key (item).
{
"key": "key_of_the_item",
"value": "item_value"
}
DELETE
Delete the given key (item).