Quickstack #1

Open · wants to merge 46 commits into base: quickstack
Conversation

@trozet commented Jan 21, 2015

trystack/quickstack won't work on hosts with CentOS 7 because they do not have the RDO repo. Added a small puppet module to stage that first, before running quickstack. Tested it on my setup.

- repo makes sure the RDO repo is added, along with correctly replacing /etc/hosts
- both are staged first, before running quickstack
- Addition of controller_networker.pp allows a user in foreman to launch a consolidated controller_networker node.
- Modifications to all files add support for using OpenDaylight as an ML2 driver.
- OpenDaylight is installed on the network or controller_networker node.
- ML2 is configured to point to OpenDaylight on the control or controller_networker node.

  New global parameters for foreman:
    odl_flag        = 'true'     (optional; set to 'true' to use OpenDaylight)
    odl_rest_port   = '8081'     (optional; defaults to 8081 if not provided. Must not be 8080 if using controller_networker.pp)
    odl_control_ip  = '10.4.9.2' (optional for controller_networker.pp; must be provided otherwise. Private IP of the ODL interface)
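As a rough illustration of the ML2 wiring described above, here is a shell sketch of the configuration fragment neutron would need in order to point ML2 at OpenDaylight. The section layout and the placeholder credentials are assumptions, and the snippet is written to a local file for inspection; a real deployment would target /etc/neutron/plugins/ml2/ml2_conf.ini.

```shell
# Hedged sketch: render the [ml2_odl] fragment that points neutron's ML2
# plugin at the OpenDaylight controller. Written to a local file here so it
# is easy to inspect; a deployment would edit
# /etc/neutron/plugins/ml2/ml2_conf.ini. Credentials are placeholder values.
odl_control_ip="10.4.9.2"   # example value from the parameter list above
odl_rest_port="8081"

cat > ml2_odl_snippet.ini <<EOF
[ml2_odl]
url = http://${odl_control_ip}:${odl_rest_port}/controller/nb/v2/neutron
username = admin
password = admin
EOF
```

The odl_rest_port caveat above (must not be 8080 on a consolidated node) exists because other services on a controller_networker node already listen on 8080.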
@trozet (Author) commented Feb 11, 2015

Added more changes to allow opendaylight integration into openstack.

Patch fixes:
  - Modifies selinux to be permissive (for opendaylight/openstack operation)
  - Modifies prestaging for puppet to use the "presetup" stage instead of "first".

Quickstack uses "first" to install other services, and we want the repo to be installed even before that, so an earlier staging area is used.
…onfiguring yum proxy

Added puppet code to configure /etc/yum.conf with the proxy address from the global parameter proxy_address. Example:
proxy_address="http://mycache.mydomain.com:3128"
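The yum proxy change above can be sketched in a few shell lines. This operates on a local copy of yum.conf so it can be inspected safely; the actual module edits /etc/yum.conf, and the idempotency check is an assumption about how the puppet code behaves.

```shell
# Sketch of the yum proxy configuration described above, on a local copy.
proxy_address="http://mycache.mydomain.com:3128"

printf '[main]\nkeepcache=0\n' > yum.conf   # minimal stand-in for /etc/yum.conf
# append the proxy line only if one is not already present (idempotent)
grep -q '^proxy=' yum.conf || echo "proxy=${proxy_address}" >> yum.conf
```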
 - Initially just provides a required package for tempest to run

The parameters were duplicated and were causing the class to fail when applied.

This is needed for ceph to install correctly; needed for the cinder backend.
Changes include:
 - Remove ODL install; we will use a separate class for this now
 - Fixes amqp_password and amqp_username to be variables defaulted to single_username and single_password
 - Adds a default value for rbd_secret_uuid, as this var should not be mandatory
 - Defaults odl_control_ip to be the first controller in the array
The python-ceph package is legacy; it is now renamed to the python-rados package.

ntp can be installed with: puppet module install puppetlabs-ntp

ovs_tunnel_if is no longer needed for HA. Now use private_network and storage_network in x.x.x.x network format (e.g. 10.0.0.0); the interface will be found during the puppet run.
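One way the interface lookup described above could work is sketched below. This is a simplification, not the module's actual code: it prefix-matches the network's leading octets against `ip -o -4 addr show` output, which only approximates real CIDR matching, and it takes the address listing as an argument so it can be exercised without a live network.

```shell
# Hedged sketch of resolving an interface from a network given in x.x.x.x
# form (e.g. 10.0.0.0). Takes the output of "ip -o -4 addr show" as the
# second argument so it can be tested offline.
# NOTE: matches leading octets only; real code should compare CIDR prefixes.
find_iface() {
    network="$1"; addr_output="$2"
    prefix="${network%.0}."                  # "10.0.0.0" -> "10.0.0."
    printf '%s\n' "$addr_output" | awk -v p="$prefix" '
        index($4, p) == 1 { print $2; exit }'
}
```

Usage on a live host would be `find_iface 10.0.0.0 "$(ip -o -4 addr show)"`.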
trozet added 17 commits April 29, 2015 14:54
This was masked by the fact that hiera was not disabled, so puppet was just grabbing a random value from a yaml file for this variable.
Introduces 6 new required global params:
 - heat_admin_vip
 - heat_private_vip
 - heat_public_vip
 - heat_cfn_admin_vip
 - heat_cfn_private_vip
 - heat_cfn_public_vip
Changes include:
 - openvswitch resource is now defined in init.pp instead of in quickstack::neutron::all, to avoid a dependency cycle
 - external_net_setup.pp configures br-ex and neutron, and creates the provider network and subnet
 - controller_networker.pp calls external_net_setup.pp if "external_network_flag" is true

New global parameters required (only if external_network_flag is true):
 - public_gateway
 - public_dns
 - public_network
 - public_subnet
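A hedged sketch of what external_net_setup.pp would do with these parameters, expressed as the neutron CLI calls it roughly corresponds to. The resource names ("external_net", "external_subnet") and flag choices are illustrative assumptions, so the function only assembles the command strings rather than executing them.

```shell
# Assemble (but do not run) neutron commands corresponding to the provider
# network/subnet creation described above. Names are illustrative.
build_ext_net_cmds() {
    cidr="$1"; gateway="$2"; dns="$3"
    echo "neutron net-create external_net --router:external=True"
    echo "neutron subnet-create external_net ${cidr} --name external_subnet --gateway ${gateway} --dns-nameserver ${dns} --disable-dhcp"
}
```

Example: `build_ext_net_cmds 192.168.1.0/24 192.168.1.1 8.8.8.8` prints the two commands with those values substituted.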
Patch changes behavior to do the following:
 - openvswitch is now installed at the beginning of the puppet run
 - the public interface config is changed to be an OVS port on br-ex
 - br-ex is created with the IP address formerly on the public interface
 - neutron is configured to use br-ex
 - after neutron is running, an external provider_network and provider_subnet are created
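The interface rewiring steps above roughly correspond to an ifcfg file pair like the following. The files are written to the current directory here; on a real host they live in /etc/sysconfig/network-scripts/, and the device name and address are assumed example values.

```shell
# Sketch of the ifcfg pair after the patch: the public NIC becomes an OVS
# port on br-ex, and br-ex takes over the NIC's former IP address.
# eth0 / 192.168.1.10 are example values, not taken from the PR.
pub_if="eth0"; pub_ip="192.168.1.10"; pub_mask="255.255.255.0"

cat > "ifcfg-${pub_if}" <<EOF
DEVICE=${pub_if}
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes
EOF

cat > ifcfg-br-ex <<EOF
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
IPADDR=${pub_ip}
NETMASK=${pub_mask}
ONBOOT=yes
EOF
```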
Fixes a bug where the external network was being applied to compute nodes.
NetworkManager is stopped and then dhcp doesn't renew. This patch adds a network restart after NetworkManager is killed, to try to resolve the issue.
Now for a non-HA deployment you only need:
private_network
public_network

Which are determined by deploy.sh for you.
Now the only required parameters are:
private_network
private_subnet