From 7d42b063561a5bdb2b574d756b0375aabc0a096e Mon Sep 17 00:00:00 2001
From: "Eden G. Adogla"
Date: Sun, 9 Jan 2022 08:45:42 -0800
Subject: [PATCH] readme - edited README for typos and clarity

Signed-off-by: Eden G. Adogla
---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index ed052a6..9680760 100644
--- a/README.md
+++ b/README.md
@@ -25,9 +25,9 @@ The Vagrant boxes used in the simulation include:
 
 ## Spinning up The Simulations
 
-If you're new to Vagrant, install [vagrant](https://releases.hashicorp.com/vagrant/) 2.2.7 (2.2.10 has problems with some of the VMs), and then run ```vagrant up``` in the 2t-clos-single-attach or 2t-clos-dual-attach directories. After the simulation has spun up, you can run ansible-playbook deploy-evpn.yml. It uses ospf-ibgp with ingress replication as the default setup. Junos is not supported in the dual-attach mode as it doesn't support MLAG, and I'm yet to add EVPN multi-homing support. To spin up the other NOSes, you'll need to get them imagess from the vendors and convert them to support varant libvirt if you want to use them with Vagrant. Brad Searle has written up good instructions for doing this on his [blog](https://codingpackets.com/blog/tag/libvirt/). Marc Weisel has some more automated support in his [github repos](https://github.com/mweisel?tab=repositories). NXOS has some teething troubles on startup (takes a very long time), but is fine once its up.
+If you're new to Vagrant, install [vagrant](https://releases.hashicorp.com/vagrant/) 2.2.7 (2.2.10 has problems with some of the VMs), and then run ```vagrant up``` in the 2t-clos-single-attach or 2t-clos-dual-attach directories. After the simulation has spun up, you can run ```ansible-playbook deploy-evpn.yml```. It uses ospf-ibgp with ingress replication as the default setup. Junos is not supported in the dual-attach mode as it doesn't support MLAG, and I'm yet to add EVPN multi-homing support. To spin up the other NOSes, you'll need to get their images from the vendors and convert them to support vagrant-libvirt if you want to use them with Vagrant. Brad Searle has written up good instructions for doing this on his [blog](https://codingpackets.com/blog/tag/libvirt/). Marc Weisel has some more automated support in his [github repos](https://github.com/mweisel?tab=repositories). NXOS has some teething troubles on startup (takes a very long time), but is fine once it's up.
 
-The vagrant-libvirt link contains instructions on installing libvirt, QEMU and KVM for various Linux distributions. I use libvirt because it spins up VMs in parallel, making the entire setup a breeze on most modern You can use other simulation environments such as EVE-NG and GNS3 if you wish to spin up the simulations and use the configuration files here. Please send PRs if you'd like to contribute GNS3/EVE-NG or Virtualbox-based Vagrant configs. The basic model I follow for connecting the nodes is:
+The vagrant-libvirt link contains instructions on installing libvirt, QEMU and KVM for various Linux distributions. I use libvirt because it spins up VMs in parallel, making the entire setup a breeze on most modern workstations or laptops. You can use other simulation environments such as EVE-NG and GNS3 if you wish to spin up the simulations and use the configuration files here. Please send PRs if you'd like to contribute GNS3/EVE-NG or Virtualbox-based Vagrant configs. The basic model I follow for connecting the nodes is:
 * Spines connect to all leaves first, leaf01 on port 1, leaf02 on port 2 etc, and then connect to the exit leaves
 * Leaves connect to the spines first (spine01 on port 1, ..) and then connect to the servers (server101...), before connecting to each other for MLAG/VPC peer-link (dual attach case only)
 * The Exit leaves connect to the spines first before connecting to the firewall and connect to the DC edge router last (port 4 in default config with 2 spines and 1 firewall)
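
As a quick reference, the spin-up workflow described in the first patched paragraph boils down to roughly the following commands. This is a sketch that assumes a Linux host with libvirt/QEMU/KVM and the vagrant-libvirt plugin already installed (a sketch of that host setup follows below), and that deploy-evpn.yml is invoked from the same topology directory:

```bash
# Install Vagrant 2.2.7 from https://releases.hashicorp.com/vagrant/
# (the README notes that 2.2.10 has problems with some of the VMs).

# Pick one of the two topologies shipped in this repo.
cd 2t-clos-single-attach      # or: cd 2t-clos-dual-attach

# Bring up the simulation; add --provider=libvirt if libvirt is not
# already your default Vagrant provider.
vagrant up

# Deploy the default EVPN setup (ospf-ibgp with ingress replication).
ansible-playbook deploy-evpn.yml

# Tear the simulation down again when you're done.
vagrant destroy -f
```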
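The second paragraph defers host setup to the vagrant-libvirt documentation, which covers several Linux distributions. As one hedged example, on a recent Debian/Ubuntu host the prerequisites look roughly like this (package and group names vary by distribution and release, so treat this as an illustration rather than the canonical steps):

```bash
# Illustrative Debian/Ubuntu example; follow the vagrant-libvirt docs for
# your own distribution.
sudo apt-get update
sudo apt-get install -y qemu-kvm libvirt-daemon-system libvirt-clients \
    libvirt-dev ruby-dev build-essential

# Let your user talk to libvirt without sudo (log out and back in afterwards).
sudo usermod -aG libvirt "$USER"

# Install the libvirt provider plugin for Vagrant.
vagrant plugin install vagrant-libvirt
```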