Swiftacular
This repository creates a virtualized OpenStack Swift cluster using Vagrant, VirtualBox, and Ansible.
- Too long; didn't read
- Features
- Requirements
- Networking setup
- Starting over
- Development environment
- Modules
- Future work
- Issues
- Notes
Note that this will start seven virtual machines on your computer.
$ git clone git@github.com:curtisgithub/swiftacular.git
$ cd swiftacular
# Check out some modules to help with managing OpenStack
$ git clone https://github.com/openstack-ansible/openstack-ansible-modules library/openstack
$ vagrant up
$ cp group_vars/all.example group_vars/all # and edit if desired
$ ansible-playbook deploy_swift_cluster.yml
Swiftacular can deploy Swift onto the following operating system and OpenStack release combinations:
- CentOS 6.5 with OpenStack Havana packages
- Ubuntu 12.04 with OpenStack Havana packages
- Ubuntu 14.04 with OpenStack Icehouse packages
Ubuntu 14.04 is probably the most tested version right now, then Ubuntu 12.04, followed by Red Hat/CentOS 6.5+.
The Vagrantfile has the above boxes in place, with Ubuntu 12.04 being the default uncommented box. To use one of the other operating systems as the basis for Swiftacular, simply uncomment the OS you would like to use in the Vagrantfile and make sure the other boxes are commented out.
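A quick way to see which box is currently active is to grep the Vagrantfile. This assumes the box definitions use the standard vm.box setting, so treat it as a sketch rather than the project's documented workflow.
# Lines starting with "#" are commented out; the uncommented vm.box line is the one Vagrant will use
$ grep -n "vm.box" Vagrantfile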
Features of this setup include:
- Run OpenStack Swift in VMs on your local computer, but with multiple servers
- A separate replication network is used, which means this could be a basis for a geo-replication system
- SSL - Keystone is configured to use SSL and the Swift Proxy is fronted by an SSL termination server
- Sparse files to back the Swift disks (see the sketch after this list)
- Tests for uploading files into Swift
- Use of gauntlt attacks to verify the installation
- Supports Ubuntu Precise 12.04, Trusty 14.04 and CentOS 6.5
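For reference, backing Swift storage with sparse files generally looks like the commands below. This is only a sketch of the technique; the file path, size, and filesystem options here are made up and the playbook's actual tasks may differ.
# Create a sparse file, format it as XFS, and loopback-mount it where a Swift device would live
$ sudo truncate -s 5G /srv/swift-disk.img
$ sudo mkfs.xfs -f /srv/swift-disk.img
$ sudo mkdir -p /srv/node/sdb1
$ sudo mount -o loop,noatime /srv/swift-disk.img /srv/node/sdb1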
Requirements:
- Vagrant and VirtualBox
  - For Ubuntu I am using the official Vagrant Precise64 images
  - For CentOS 6 I am using the Vagrant box provided by Puppet Labs
- Enough resources on your computer to run seven VMs
Seven Vagrant-based virtual machines are used for this playbook:
- package_cache - One apt-cacher-ng server so that packages only have to be downloaded from the Internet once
- authentication - One Keystone server for authentication
- lbssl - One SSL termination server that will be used to proxy connections to the Swift Proxy server
- swift-proxy - One Swift proxy server
- swift-storage - Three Swift storage nodes
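Once vagrant up has finished, you can confirm that all seven machines are running before launching the playbook; the machine names come from the Vagrantfile (swift-package-cache-01, used later in this README, is one of them).
$ vagrant status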
Each VM has four networks (technically five, including the Vagrant network). In a real production system not every server would need to be attached to every network, and in fact you would want to avoid that; here, for simplicity, every VM is attached to every network.
- eth0 - Used by Vagrant
- eth1 - 192.168.100.0/24 - The "public" network that users would connect to
- eth2 - 10.0.10.0/24 - This is the network between the SSL terminator and the Swift Proxy
- eth3 - 10.0.20.0/24 - The local Swift internal network
- eth4 - 10.0.30.0/24 - The replication network which is a feature of OpenStack Swift starting with the Havana release
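To make the split between the internal and replication networks concrete, a storage device is normally added to a ring with both its cluster IP and its replication IP. The builder file, region/zone, device name, weight, and addresses below are hypothetical and only illustrate the syntax; the playbook builds the rings for you.
# Hypothetical example: the device lives on the internal network (10.0.20.x)
# and its replication traffic is pinned to the replication network (10.0.30.x)
$ swift-ring-builder object.builder add r1z1-10.0.20.21:6000R10.0.30.21:6000/vdb1 100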
Because this playbook configures self-signed SSL certificates, the swift client will complain about that fact by default; either the --insecure option needs to be used, or alternatively the SWIFTCLIENT_INSECURE environment variable can be set to true.
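Either approach looks roughly like this, assuming your auth credentials (for example from the testrc file used below) are already exported:
# Option 1: pass --insecure on each invocation
$ swift --insecure list

# Option 2: set the environment variable once for the shell session
$ export SWIFTCLIENT_INSECURE=true
$ swift list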
You can install the swift client anywhere that you have access to the SSL termination point and Keystone. So you could put it on your local laptop as well, probably with:
$ pip install python-swiftclient
However, I usually log in to the package_cache server and use swift from there.
$ vagrant ssh swift-package-cache-01
vagrant@swift-package-cache-01:~$ . testrc
vagrant@swift-package-cache-01:~$ swift list
vagrant@swift-package-cache-01:~$ echo "swift is cool" > swift.txt
vagrant@swift-package-cache-01:~$ swift upload swifty swift.txt
swift.txt
vagrant@swift-package-cache-01:~$ swift list
swifty
vagrant@swift-package-cache-01:~$ swift list swifty
swift.txt
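To round out the example, the object can be inspected and pulled back down with the standard swift client commands; nothing here is specific to this playbook.
vagrant@swift-package-cache-01:~$ swift stat swifty swift.txt
vagrant@swift-package-cache-01:~$ swift download swifty swift.txt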
If you want to redo the installation, there are a few ways.
To restart completely:
$ vagrant destroy -f
$ vagrant up
# wait...
$ ansible-playbook deploy_swift_cluster.yml
There is a script to destroy and rebuild everything but the package cache:
$ ./bin/redo
$ ansible -m ping all # just to check if networking is up
$ ansible-playbook deploy_swift_cluster.yml
To remove and redo only the rings and fake/sparse disks without destroying any virtual machines:
$ ansible-playbook playbooks/remove_rings.yml
$ ansible-playbook deploy_swift_cluster.yml
To remove the keystone database and redo the endpoints, users, regions, etc:
$ ansible-playbook playbooks/remove_keystone.yml
$ ansible-playbook deploy_swift_cluster.yml
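After the keystone redo, one way to sanity check that the service catalog was rebuilt is to list the endpoints and users with the era-appropriate keystone command-line client. This assumes the keystone client is installed wherever you run it and that admin credentials are exported; neither is something this playbook sets up for you.
$ keystone --insecure endpoint-list
$ keystone --insecure user-list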
This playbook was developed in the following environment:
- OS X 10.8.2
- Ansible 1.4
- VirtualBox 4.2.6
- Vagrant 1.3.5
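To compare your local tool versions against the ones above:
$ vagrant --version
$ VBoxManage --version
$ ansible --version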
There is a swift-ansible-modules directory inside the library directory that contains a couple of modules taken from the official Ansible modules and from openstack-ansible-modules. For now, both have been modified to allow the "insecure" option, which means self-signed certificates are accepted. I hope to get those changes into their respective repositories soon.
For future work, see the issues with the enhancement label in the GitHub issue tracker for Swiftacular.
For known problems, see the other issues in the GitHub issue tracker for Swiftacular.
- I know that Vagrant can automatically run Ansible playbooks when a VM is created, but I prefer to run the playbook manually
- LXC is likely a better fit than VirtualBox, given that all the VMs run the same OS and we don't need to boot VMs within VMs, inception style
- Starting the VMs is a bit slow, I believe because of the extra networks