diff --git a/doc/source/deployment.md b/doc/source/deployment.md
deleted file mode 100644
index 680a2d9..0000000
--- a/doc/source/deployment.md
+++ /dev/null
@@ -1,209 +0,0 @@
-# Deployment
-
-Deployment can be done via two methods: stand-alone Docker or OpenStack cloud.
-
-Additionally, you can kick off the deployment with the `./scripts/deploy.sh` which bootstraps a simple deployment
-using the stand-alone Docker method.
-
-## Base Deployment
-
-Start by creating `hosts/containers` (or similar) and add your baremetal machine
-with the following template:
-
- jenkins_master
- logstash
- elasticsearch
- kibana
-
-These names (e.g. jenkins_master, logstash, etc) should match the names as defined in `./docker-compose.yml`.
-
-### Adding baremetal slaves to a Docker deployment
-
-If you need to add jenkins slaves (baremetal), add slave information in `./hosts/containers`
-as the following (be sure to add `ansible_connection=ssh` as well).
-
- [jenkins_slave]
- slave01 ansible_connection=ssh ansible_host=10.10.1.1 ansible_user=ansible
-
- [jenkins_slave:vars]
- slave_description=TOAD Testing Node
- slave_remoteFS=/home/stack
- slave_port=22
- slave_credentialsId=stack-credential
- slave_label=toad
-
-### Running containers and start provisioning
-
-Then, you can run the following commands to setup containers and to setup the TOAD environment.
-
- $ docker-compose up -d
- $ ansible-playbook site.yml -vvvv -i hosts/containers \
- -e use_openstack_deploy=false -e deploy_type='docker' -c docker
-
-After you finish, you can stop these containers and restart them.
-
- $ docker-compose stop
-
-Or, to restart the containers:
-
- $ docker-compose restart
-
-The following command deletes the containers:
-
- $ docker-compose down
-
-## Base Deployment (OpenStack)
-
-> **NOTE**: Deploying directly to OpenStack virtual machines is deprecated. It is
-> recommended that you perform a deployment using the Docker method (even if that is
-> hosted in a cloud instance on OpenStack). In a future version this method may be
-> removed.
-
-You may need to modify the `host_vars/localhost` file to adjust the
-`security_group` names, as the playbook does not currently create security
-groups and rules for you. It is assumed you've created the following sets of
-security groups, and opened the corresponding ports:
-
-* default
- * `TCP: 22`
-* elasticsearch
- * `TCP: 9200`
-* filebeat-input
- * `TCP: 5044`
-* web_ports
- * `TCP: 80, 443`
-
-> **NOTE**: The security groups are only relevant for OpenStack cloud
-> deployments.
-
-The base set of four VMs created for the CI components in OpenStack are listed
-as follows (as defined in `host_vars/localhost`):
-
- instance_list:
- - { name: elasticsearch, security_groups: "default,elasticsearch" }
- - { name: logstash, security_groups: "default,filebeat-input" }
- - { name: kibana, security_groups: "default,web_ports" }
- - { name: jenkins_master, security_groups: "default,web_ports" }
-
-After configuration, you can run the following command which will connect to
-localhost to run the `shade` applications, authenticate to the OpenStack API
-you've supplied in `clouds.yml` and then deploy the stack.
-
- ansible-playbook site.yml
-
-## Configure Jenkins plugins
-
-In order to configure `scp` plugin, you'll need to use the `jenkins_scp_sites`
-var. It expects a list of sites where Jenkins will copy the artifacts from
-the jobs. The hostname / IP address should be relative to the Jenkins master
-server, as that is where the SCP module will be executed.
-
-Format is the following (see _Example Variable Override File_ for an example):
-
- jenkins_scp_sites:
- - hostname: test_hostname
- user: jenkins1
- password: abc
- path: /test/path
- - hostname: test_hostname
- port: 23
- user: jenkins1
- keyfile: abc
- path: /test/path
-
-### Jenkins Slave Installation
-
-If you wish to automate the deployment of your Jenkins baremetal slave
-machine, you can use Kickstart (or other similar methods). A base minimal
-installation of a CentOS node, as booted from a cdrom (we're using CentOS as
-booted from the vFlash partition on a DRAC) can be configured during boot by
-pressing tab at the "Install CentOS" screen.
-
-Add the following after the word `quiet` to statically configure a network and
-boot from the `ks.cfg` file (as supplied in the `samples/` directory). You'll
-need to host the `ks.cfg` file from a web server accessible from your Jenkins
-baremetal slave node.
-
- ...quiet inst.ks=http://10.10.0.10/ks.cfg ksdevice=em1 ip=10.10.0.100::10.10.0.1:255.255.255.0:nfv-slave-01:em1:none nameserver=10.10.10.1
-
-* `inst.ks`: Network path to the Kickstart file
-* `ksdevice`: Device name to apply the network configuration to
-* `ip`: Format is: `[my_ip_address]::[gateway]:[netmask]:[hostname]:[device_name]:[boot_options]`
-* `nameserver`: IP address of DNS nameserver
-
-After booting, your machine should automatically deploy to a base minimum.
-
-### Jenkins Slave Deployment
-
-To deploy a Jenkins slave, you need to have a baremetal machine to connect to.
-You can tell Ansible about this machine by creating a new inventory file in the
-`hosts/` directory. You won't pollute the repository since all inventory files
-except the `hosts/localhost` file as ignored.
-
-Start by creating `hosts/slaves` (or similar) and add your baremetal machine
-with the following template:
-
- [jenkins_slave]
- slave01 ansible_host=10.10.1.1 ansible_user=ansible
-
- [jenkins_slave:vars]
- slave_description=TOAD Testing Node
- slave_remoteFS=/home/stack
- slave_port=22
- slave_credentialsId=stack-credential
- slave_label=toad
-
-Add additional fields if necessary. It is assumed that the `ansible` user has
-been previously created, and that you can login either via SSH keys, or provide
-the `--ask-pass` flag to your Ansible run. The `ansible` user is also assumed
-to have been setup with passwordless sudo (unless you add `--ask-become-pass`
-during your Ansible run).
-
-For OSP deployments, the build slaves need to be registered under RHN, and
-repositories and guest images need to be synced locally. In order to enable
-repository sync, you need to set the ``slave_mirror_sync`` var to ``true``.
-
-> **NOTE**: By default, the system relies on the slave hostname and public IP
-> to generate a valid repository address. Please ensure that slave hostname is
-> set properly, and that is resolving to a public ip, reachable by all the VMs
-> or baremetal servers involved in the deployments.
-
-## Baremetal deployment
-
-In order to perform baremetal deployments, an additional repository to host the
-hardware environment configuration is needed. A sample repository is provided:
-`https://github.com/redhat-nfvpe/toad_envs`
-
-You can customize the repositories using the following settings:
-- `jenkins_job_baremetal_env_git_src`: path to the repository where to host the
- environments
-- `jenkins_job_baremetal_env_path`: if the environment is in a subfolder of the
- repo, please specify the relative path here.
-
-The environment repo needs to have a folder for each environment that wants to
-be tested. Each environment needs to have the following content:
-- `deploy_config.yml`: it contains `extra_args` var, containing the
- parameters needed to deploy the overcloud. It specifies flavors, nodes to
- scale and templates to be used.
-- `env_settings.yml`: TripleO quickstart environment settings for the baremetal
- deployment. It defines the network settings, undercloud configuration
- parameters and any additional settings needed.
-- `instackenv.json`: Data file where all the baremetal nodes are specified. For
- each node, the IPMI address/user/password is required, as well as the
- provisioning MAC addresses.
-- `net_environment.yml`: TripleO environment file that will be used. You can
- specify here all the typical TripleO settings that need to be customized.
-
-## RHN subscription
-
-On a Red Hat system, subscription of slaves can be managed automatically
-if you pass the right credentials:
-* `rhn_subscription_username`
-* `rhn_subscription_password`
-* `rhn_subscription_pool_id`
-
-Subscription can be managed automatically either on master or slaves, with the
-flags:
-* `master_subscribe_rhn`
-* `slave_subscribe_rhn`
-
diff --git a/doc/source/deployment.rst b/doc/source/deployment.rst
new file mode 100644
index 0000000..677d7e2
--- /dev/null
+++ b/doc/source/deployment.rst
@@ -0,0 +1,249 @@
+Deployment
+==========
+
+Deployment can be done via two methods: stand-alone Docker or OpenStack
+cloud.
+
+Additionally, you can kick off the deployment with the
+``./scripts/deploy.sh`` which bootstraps a simple deployment using the
+stand-alone Docker method.
+
+Base Deployment
+---------------
+
+Start by creating ``hosts/containers`` (or similar) and add your
+baremetal machine with the following template:
+
+::
+
+ jenkins_master
+ logstash
+ elasticsearch
+ kibana
+
+These names (e.g. ``jenkins_master``, ``logstash``, etc.) should match the
+names as defined in ``./docker-compose.yml``.
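+
+As a rough sketch only (the image names below are assumptions, not the actual
+contents of the file), the matching service entries in
+``./docker-compose.yml`` would look something like:
+
+::
+
+    services:
+      jenkins_master:
+        image: jenkins/jenkins:lts
+      logstash:
+        image: logstash
+      elasticsearch:
+        image: elasticsearch
+      kibana:
+        image: kibana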
+
+Adding baremetal slaves to a Docker deployment
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you need to add Jenkins slaves (baremetal), add the slave information to
+``./hosts/containers`` as follows (be sure to add ``ansible_connection=ssh``
+as well):
+
+::
+
+ [jenkins_slave]
+ slave01 ansible_connection=ssh ansible_host=10.10.1.1 ansible_user=ansible
+
+ [jenkins_slave:vars]
+ slave_description=TOAD Testing Node
+ slave_remoteFS=/home/stack
+ slave_port=22
+ slave_credentialsId=stack-credential
+ slave_label=toad
+
+Running containers and start provisioning
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Then, you can run the following commands to set up the containers and the
+TOAD environment.
+
+::
+
+ $ docker-compose up -d
+ $ ansible-playbook site.yml -vvvv -i hosts/containers \
+ -e use_openstack_deploy=false -e deploy_type='docker' -c docker
+
+When you are finished, you can stop the containers:
+
+::
+
+ $ docker-compose stop
+
+Or, to restart the containers:
+
+::
+
+ $ docker-compose restart
+
+The following command deletes the containers:
+
+::
+
+ $ docker-compose down
+
+Base Deployment (OpenStack)
+---------------------------
+
+.. note:: Deploying directly to OpenStack virtual machines is deprecated. It
+   is recommended that you perform a deployment using the Docker method (even
+   if that is hosted in a cloud instance on OpenStack). In a future version
+   this method may be removed.
+
+You may need to modify the ``host_vars/localhost`` file to adjust the
+``security_group`` names, as the playbook does not currently create
+security groups and rules for you. It is assumed you've created the
+following sets of security groups, and opened the corresponding ports:
+
+- default
+
+  - ``TCP: 22``
+
+- elasticsearch
+
+  - ``TCP: 9200``
+
+- filebeat-input
+
+  - ``TCP: 5044``
+
+- web_ports
+
+  - ``TCP: 80, 443``
+
+.. note:: The security groups are only relevant for OpenStack cloud
+   deployments.
+
+The base set of four VMs created for the CI components in OpenStack is
+listed as follows (as defined in ``host_vars/localhost``):
+
+::
+
+ instance_list:
+ - { name: elasticsearch, security_groups: "default,elasticsearch" }
+ - { name: logstash, security_groups: "default,filebeat-input" }
+ - { name: kibana, security_groups: "default,web_ports" }
+ - { name: jenkins_master, security_groups: "default,web_ports" }
+
+After configuration, you can run the following command, which connects to
+localhost to run the ``shade`` applications, authenticates to the OpenStack
+API you've supplied in ``clouds.yml``, and then deploys the stack.
+
+::
+
+ ansible-playbook site.yml
+
+Configure Jenkins plugins
+-------------------------
+
+In order to configure the ``scp`` plugin, you'll need to use the
+``jenkins_scp_sites`` var. It expects a list of sites where Jenkins will
+copy the artifacts from the jobs. The hostname / IP address should be
+relative to the Jenkins master server, as that is where the SCP module
+will be executed.
+
+The format is the following (see *Example Variable Override File* for an
+example):
+
+::
+
+ jenkins_scp_sites:
+ - hostname: test_hostname
+ user: jenkins1
+ password: abc
+ path: /test/path
+ - hostname: test_hostname
+ port: 23
+ user: jenkins1
+ keyfile: abc
+ path: /test/path
+
+Jenkins Slave Installation
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you wish to automate the deployment of your Jenkins baremetal slave
+machine, you can use Kickstart (or other similar methods). A minimal base
+installation of a CentOS node, booted from a CD-ROM (we're booting CentOS
+from the vFlash partition on a DRAC), can be configured during boot by
+pressing Tab at the "Install CentOS" screen.
+
+Add the following after the word ``quiet`` to statically configure a
+network and boot from the ``ks.cfg`` file (as supplied in the
+``samples/`` directory). You'll need to host the ``ks.cfg`` file from a
+web server accessible from your Jenkins baremetal slave node.
+
+::
+
+ ...quiet inst.ks=http://10.10.0.10/ks.cfg ksdevice=em1 ip=10.10.0.100::10.10.0.1:255.255.255.0:nfv-slave-01:em1:none nameserver=10.10.10.1
+
+- ``inst.ks``: Network path to the Kickstart file
+- ``ksdevice``: Device name to apply the network configuration to
+- ``ip``: Format is:
+ ``[my_ip_address]::[gateway]:[netmask]:[hostname]:[device_name]:[boot_options]``
+- ``nameserver``: IP address of DNS nameserver
+
+After booting, your machine should automatically deploy to a minimal base
+installation.
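+
+The ``ks.cfg`` supplied in ``samples/`` is the authoritative reference; purely
+as an illustrative sketch (every value below is an assumption), a minimal
+Kickstart file for such a node might look like:
+
+::
+
+    url --url=http://10.10.0.10/centos/7/os/x86_64/
+    lang en_US.UTF-8
+    keyboard us
+    rootpw --plaintext changeme
+    timezone UTC
+    bootloader --location=mbr
+    clearpart --all --initlabel
+    autopart
+    reboot
+
+    %packages
+    @core
+    %end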
+
+Jenkins Slave Deployment
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+To deploy a Jenkins slave, you need to have a baremetal machine to
+connect to. You can tell Ansible about this machine by creating a new
+inventory file in the ``hosts/`` directory. You won't pollute the
+repository, since all inventory files except the ``hosts/localhost`` file
+are ignored.
+
+Start by creating ``hosts/slaves`` (or similar) and add your baremetal
+machine with the following template:
+
+::
+
+ [jenkins_slave]
+ slave01 ansible_host=10.10.1.1 ansible_user=ansible
+
+ [jenkins_slave:vars]
+ slave_description=TOAD Testing Node
+ slave_remoteFS=/home/stack
+ slave_port=22
+ slave_credentialsId=stack-credential
+ slave_label=toad
+
+Add additional fields if necessary. It is assumed that the ``ansible``
+user has been previously created, and that you can either log in via SSH
+keys or provide the ``--ask-pass`` flag to your Ansible run. The
+``ansible`` user is also assumed to have been set up with passwordless
+sudo (unless you add ``--ask-become-pass`` during your Ansible run).
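+
+For example, a run that prompts for both the SSH and sudo passwords (using the
+``hosts/slaves`` inventory created above) might look like:
+
+::
+
+    $ ansible-playbook site.yml -i hosts/slaves --ask-pass --ask-become-pass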
+
+For OSP deployments, the build slaves need to be registered under RHN,
+and repositories and guest images need to be synced locally. In order to
+enable repository sync, you need to set the ``slave_mirror_sync`` var to
+``true``.
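+
+One place to set it, for example, is in the slave group vars of your
+inventory (a sketch based on the template above):
+
+::
+
+    [jenkins_slave:vars]
+    slave_mirror_sync=true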
+
+.. note:: By default, the system relies on the slave hostname and public IP
+   to generate a valid repository address. Please ensure that the slave
+   hostname is set properly and that it resolves to a public IP reachable by
+   all the VMs or baremetal servers involved in the deployments.
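+
+One quick way to sanity-check this from the slave itself, for example:
+
+::
+
+    $ hostname -f
+    $ getent hosts $(hostname -f)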
+
+Baremetal deployment
+--------------------
+
+In order to perform baremetal deployments, an additional repository to
+host the hardware environment configuration is needed. A sample
+repository is provided: ``https://github.com/redhat-nfvpe/toad_envs``
+
+You can customize the repositories using the following settings:
+
+- ``jenkins_job_baremetal_env_git_src``: path to the repository hosting the
+  environments
+- ``jenkins_job_baremetal_env_path``: if the environment is in a subfolder of
+  the repo, specify the relative path here.
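+
+For example (the values shown are placeholders):
+
+::
+
+    jenkins_job_baremetal_env_git_src: https://github.com/redhat-nfvpe/toad_envs
+    jenkins_job_baremetal_env_path: environments/my_lab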
+
+The environment repo needs to have a folder for each environment to be
+tested. Each environment needs to have the following content:
+
+- ``deploy_config.yml``: contains the ``extra_args`` var with the parameters
+  needed to deploy the overcloud. It specifies flavors, nodes to scale and
+  templates to be used.
+- ``env_settings.yml``: TripleO quickstart environment settings for the
+  baremetal deployment. It defines the network settings, undercloud
+  configuration parameters and any additional settings needed.
+- ``instackenv.json``: data file where all the baremetal nodes are specified.
+  For each node, the IPMI address/user/password is required, as well as the
+  provisioning MAC addresses.
+- ``net_environment.yml``: TripleO environment file that will be used. You
+  can specify here all the typical TripleO settings that need to be
+  customized.
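+
+Putting it together, a hypothetical environment named ``my_lab`` would be laid
+out as:
+
+::
+
+    my_lab/
+    ├── deploy_config.yml
+    ├── env_settings.yml
+    ├── instackenv.json
+    └── net_environment.yml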
+
+RHN subscription
+----------------
+
+On a Red Hat system, subscription of slaves can be managed automatically
+if you pass the right credentials:
+
+- ``rhn_subscription_username``
+- ``rhn_subscription_password``
+- ``rhn_subscription_pool_id``
+
+Subscription can be managed automatically either on master or slaves,
+with the flags:
+
+- ``master_subscribe_rhn``
+- ``slave_subscribe_rhn``
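+
+For example, in a variable override file (all values are placeholders):
+
+::
+
+    slave_subscribe_rhn: true
+    rhn_subscription_username: myuser
+    rhn_subscription_password: mypassword
+    rhn_subscription_pool_id: 0123456789abcdef0123456789abcdef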
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 755fe56..6f460cd 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -1,7 +1,7 @@
Welcome to the documentation for TOAD!
======================================
-.. image:: toad_logo.png
+.. image:: images/toad_logo.png
:align: center
:width: 225px
diff --git a/doc/source/quickstart.md b/doc/source/quickstart.rst
similarity index 66%
rename from doc/source/quickstart.md
rename to doc/source/quickstart.rst
index 8cedea7..69279c7 100644
--- a/doc/source/quickstart.md
+++ b/doc/source/quickstart.rst
@@ -1,34 +1,47 @@
-# Quickstart
+.. _quickstart:
+
+Quickstart
+==========
If you're on a Fedora 25 (or later) or CentOS 7.3 system, and you're ok with
running a bash script as root, you can bootstrap your system with the following
command:
+::
+
curl -sSL http://bit.ly/toad-bootstrap | sh
+.. note:: You may have issues with Fedora 25, as not all roles in TripleO
+   Quickstart are set up to handle Fedora. It is recommended that you use
+   CentOS.
+
After bootstrapping your machine, you can perform an "all in one" Jenkins
Master/Slave deployment with the following command:
+::
+
su - toad
curl -sSL http://bit.ly/toad-deploy | sh
-## All In One Deployment
+All In One Deployment
+---------------------
-With the _All In One_ (AIO) deployment, a Jenkins Master will be instantiated
+With the *All In One* (AIO) deployment, a Jenkins Master will be instantiated
via Docker Compose and configured via Ansible. A Jenkins Slave will then be
added to the Jenkins Master by logging into the virtual host that hosts the
Docker containers (including the Jenkins Master).
-![TOAD All In One][toad_aio_overview]
+.. image:: images/toad_aio_overview.png
For an AIO deployment, you first bootstrap the node and then deploy the
-contents of TOAD onto the virtual host (see [Quickstart](#quickstart)). After
+contents of TOAD onto the virtual host (see :ref:`quickstart`). After
instantiating your Jenkins Master via Docker Compose, it is configured via
Ansible.
-During the Ansible run in the `deploy.sh` script, it will also deploy a Jenkins
-Slave from the Master. The host for the Slave is the virtual host itself, and
-this is done via the `toad_default` Docker network.
+During the Ansible run in the ``deploy.sh`` script, it will also deploy a
+Jenkins Slave from the Master. The host for the Slave is the virtual host
+itself, and this is done via the ``toad_default`` Docker network.
+
+::
[toad@virthost toad]$ docker network ls
NETWORK ID NAME DRIVER SCOPE
@@ -40,6 +53,8 @@ this is done via the `toad_default` Docker network.
If we inspect this Docker network, we can see our Jenkins Master network
address and the address of the gateway (our virtual host).
+::
+
[toad@virthost toad]$ docker network inspect toad_default
[
{
@@ -72,23 +87,26 @@ address and the address of the gateway (our virtual host).
}
]
-The Jenkins Master will then SSH from `172.18.0.3` into the virtual host via
-the `toad_default` bridge through the gateway, and configure it as a Jenkins
+The Jenkins Master will then SSH from ``172.18.0.3`` into the virtual host via
+the ``toad_default`` bridge through the gateway, and configure it as a Jenkins
Slave. The Jenkins Slave will be used to execute the Jenkins jobs that we've
configured via JJB (Jenkins Job Builder), which will run the TripleO
-`quickstart.sh` script.
+``quickstart.sh`` script.
-> **NOTE**: The bridges created by Docker are dynamically named. You can link
-> the bridge name in Linux to the Docker bridge by looking at the `ID` field in
-> the `docker network inspect` output, by taking the first 12 characters, and
-> comparing that to the bridge names output by running `brctl show` or `ip a s`
+.. note:: The bridges created by Docker are dynamically named. You can link
+   the bridge name in Linux to the Docker bridge by looking at the ``ID``
+   field in the ``docker network inspect`` output, by taking the first 12
+   characters, and comparing that to the bridge names output by running
+   ``brctl show`` or ``ip a s``.
The TripleO quickstart script will then setup an undercloud, controller, and
-compute node (by default, the `minimal.yml` configuration) via libvirt on the
-virtual host. The connection to the undercloud is made via the `brext` bridge,
-which is configured under libvirt as the `external` bridge. More information
-about the `external` bridge can be seen by running `virsh net-dumpxml
-external`:
+compute node (by default, the ``minimal.yml`` configuration) via libvirt on
+the virtual host. The connection to the undercloud is made via the ``brext``
+bridge, which is configured under libvirt as the ``external`` bridge. More
+information about the ``external`` bridge can be seen by running
+``virsh net-dumpxml external``:
+
+::
[toad@virthost toad]$ sudo virsh net-dumpxml external
@@ -108,9 +126,11 @@ external`:
-Triple quickstart will also create another bridge called `brovc` for the
+TripleO quickstart will also create another bridge called ``brovc`` for the
communication between the undercloud and the overcloud. In libvirt it is
-configured as the `overcloud` network:
+configured as the ``overcloud`` network:
+
+::
[toad@virthost toad]$ sudo virsh net-dumpxml overcloud
@@ -125,30 +145,31 @@ single network node. You'll need a fairly robust machine for this type of setup
though. It is recommended that you use a machine with at least 32GB of RAM,
ideally 64GB of RAM.
-## Logging Into Web Services
+Logging Into Web Services
+-------------------------
-Web services are deployed behind the [traefik reverse
-proxy](https://docs.traefik.io/) and can be accessed via hostnames. These
-hostnames are configured in `docker-compose.yml` via a traefik frontend rule.
-The hostname is one of `jenkins.${PROXY_DOMAIN}` or `kibana.${PROXY_DOMAIN}`
-where `${PROXY_DOMAIN}` is an environment variable defined in `toad/.env`. By
-default the domain is defined as `toad.tld`.
+Web services are deployed behind the `traefik reverse proxy
+<https://docs.traefik.io/>`_ and can be accessed via hostnames. These hostnames
+are configured in ``docker-compose.yml`` via a traefik frontend rule. The
+hostname is one of ``jenkins.${PROXY_DOMAIN}`` or ``kibana.${PROXY_DOMAIN}``
+where ``${PROXY_DOMAIN}`` is an environment variable defined in ``toad/.env``.
+By default the domain is defined as ``toad.tld``.
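+
+For example, ``toad/.env`` defines the domain roughly as follows (shown with
+the default value; any other contents of that file are not assumed here):
+
+::
+
+    PROXY_DOMAIN=toad.tld
+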
In order for the traefik proxy to know where to forward your requests, you'll
-need to either
+need to either
1. setup DNS services to point the subdomains to the primary address of your
virtual host where Docker and Traefik are running
-1. modify your `/etc/hosts` file to statically assigned the hostname to the
+2. modify your ``/etc/hosts`` file to statically assign the hostname to the
primary interface of your virtual host.
For example, in my network my virtual host primary network interface has the IP
-address of `192.168.5.151`. I can then define the default hostname to that
-address in my `/etc/hosts` file.
+address of ``192.168.5.151``. I can then define the default hostname to that
+address in my ``/etc/hosts`` file.
+
+::
192.168.5.151 kibana.toad.tld jenkins.toad.tld
Once you've completed the provisioning steps, you can login to your Jenkins
master by browsing to http://jenkins.toad.tld.
-
-[toad_aio_overview]: images/toad_aio_overview.png
diff --git a/doc/source/requirements.md b/doc/source/requirements.md
deleted file mode 100644
index 614e7a6..0000000
--- a/doc/source/requirements.md
+++ /dev/null
@@ -1,41 +0,0 @@
-# Requirements
-
-TOAD is generally deployed in Docker containers. You can choose to deploy using
-Docker, or, together with an existing OpenStack deployment. Below you will find
-the list of requirements for each of the deployment scenarios.
-
-For Ansible, several roles are required, and you can install them as follows:
-
- ansible-galaxy install -r requirements.yml
-
-## Docker
-
-TOAD primarily utilizes Docker containers. In order to use Docker, you need to
-install [docker-compose](https://docs.docker.com/compose/).
-
-At present, our `docker-compose` YAML file uses the version 2 specification,
-and should work with docker-compose version 1.6.0 or greater, and Docker engine
-1.10.0 or later.
-
-## OpenStack
-
-You'll need to install the `shade` dependency so that you can interact with
-OpenStack (assuming you are deploying to an OpenStack cloud).
-
- pip install --user shade
-
-### Setup OpenStack Connection
-
-If you're going to install to an OpenStack cloud, you'll need to configure a
-cloud to connect to. You can do this by creating the `~/.config/openstack/`
-directory and placing the following contents into the `clouds.yml` file within
-that directory (adjust to your own cloud connection):
-
- clouds:
- mycloud:
- auth:
- auth_url: http://theclowd.com:5000/v2.0
- username: cloud_user
- password: cloud_pass
- project_name: "My Cloud Project"
-
diff --git a/doc/source/requirements.rst b/doc/source/requirements.rst
new file mode 100644
index 0000000..3abc821
--- /dev/null
+++ b/doc/source/requirements.rst
@@ -0,0 +1,53 @@
+Requirements
+============
+
+TOAD is generally deployed in Docker containers. You can choose to
+deploy using Docker alone, or together with an existing OpenStack deployment.
+Below you will find the list of requirements for each of the deployment
+scenarios.
+
+For Ansible, several roles are required, and you can install them as
+follows:
+
+::
+
+ ansible-galaxy install -r requirements.yml
+
+Docker
+------
+
+TOAD primarily utilizes Docker containers. In order to use Docker, you
+need to install `docker-compose <https://docs.docker.com/compose/>`__.
+
+At present, our ``docker-compose`` YAML file uses the version 2
+specification, and should work with docker-compose version 1.6.0 or
+greater, and Docker engine 1.10.0 or later.
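+
+You can check the installed versions, for example, with:
+
+::
+
+    $ docker-compose --version
+    $ docker --version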
+
+OpenStack
+---------
+
+You'll need to install the ``shade`` dependency so that you can interact
+with OpenStack (assuming you are deploying to an OpenStack cloud).
+
+::
+
+ pip install --user shade
+
+Setup OpenStack Connection
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you're going to install to an OpenStack cloud, you'll need to
+configure a cloud to connect to. You can do this by creating the
+``~/.config/openstack/`` directory and placing the following contents
+into the ``clouds.yml`` file within that directory (adjust to your own
+cloud connection):
+
+::
+
+ clouds:
+ mycloud:
+ auth:
+ auth_url: http://theclowd.com:5000/v2.0
+ username: cloud_user
+ password: cloud_pass
+ project_name: "My Cloud Project"
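+
+If you also have the OpenStack command-line client installed, you can verify
+the cloud definition, for example, with:
+
+::
+
+    $ openstack --os-cloud mycloud server list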
diff --git a/doc/source/tracking_development.md b/doc/source/tracking_development.md
deleted file mode 100644
index 7d185b2..0000000
--- a/doc/source/tracking_development.md
+++ /dev/null
@@ -1,4 +0,0 @@
-# Tracking Development
-
-Development is tracked via [Waffle.IO](https://waffle.io) on the [TOAD Waffle
-Board](https://waffle.io/redhat-nfvpe/toad/join).
diff --git a/doc/source/tracking_development.rst b/doc/source/tracking_development.rst
new file mode 100644
index 0000000..ac162e5
--- /dev/null
+++ b/doc/source/tracking_development.rst
@@ -0,0 +1,5 @@
+Tracking Development
+====================
+
+Development is tracked via `Waffle.IO <https://waffle.io>`__ on the
+`TOAD Waffle Board <https://waffle.io/redhat-nfvpe/toad/join>`__.