Deploy production AtlanticWave-SDX controllers #153
Ansible playbook: https://github.com/RENCI-NRIG/exogeni/tree/master/infrastructure/exogeni/exogeni-deployment/ansible
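For reference, a typical invocation of a playbook from that repository would look roughly like the following (the inventory and playbook file names here are placeholders, not the actual files in the repo):

```
# Run the deployment playbook against the controller VMs
# (inventory and playbook names are placeholders)
ansible-playbook -i inventory/production.ini site.yml --limit sdx_controllers
```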
VMs are dual-homed with one public and one private interface on the BEN Management network. The VMs accept SSH connections through both interfaces.
I also added port forwarding rules to the BEN gateway; however, routing this traffic will require some additional configuration. For now, it may be easier to create the Jenkins nodes with the individual public IP addresses above.
Firewall implemented via script (Ansible role partially completed); a rough sketch of the rules is shown below.
nrig-service account created on the VMs, sdx-ci public key injected.
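A minimal sketch of what the gateway port forwarding and the host firewall script do, assuming iptables; the addresses, ports, and policies here are placeholders rather than the actual rules:

```
# On the BEN gateway: forward an external SSH port to a VM's private address
# (addresses and ports are placeholders)
iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 192.168.100.10:22
iptables -A FORWARD -p tcp -d 192.168.100.10 --dport 22 -j ACCEPT

# On each VM: allow SSH on both interfaces, allow established traffic, drop the rest
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -P INPUT DROP
```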
Production environment: the topology and equipment are shown in the AtlanticWave-SDX demonstration setup.
ATL and MIA switches need cabling for the rate-limiting VFCs. Physical connections were requested as below. Ports that do not have any connectors attached were selected. The intention is to keep as much of the previous setup in place as possible until things look stable.
MIAMI-Corsa connections are completed for ports 13, 14, 15, 16, 23, 24, 25, 26.
Existing switch configurations were saved (configuration files are attached as well).
Connection details at Miami
CREATE VFCs with CTAG TUNNELS
VFC configurations with CTAG TUNNELS
Test with L2Tunnel connections
Actual connections
These connections all worked well. Note that the CTAG tunnels are created over the VLAN range 1-1499, which covers the in-band management VLAN tag (1411) and the intermediate VLAN tags assigned by the SDX controller (1, 2, 3, ...). A rough sketch of how such a tunnel is created is shown below.
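For reference, the general shape of creating such a VFC and CTAG tunnel over the Corsa REST API is sketched here; the switch hostname, token handling, port numbers, and exact endpoint/field names are assumptions and should be checked against the Corsa documentation for the deployed firmware:

```
# Create an OpenFlow VFC (bridge) on the Miami Corsa -- sketch only
curl -k -X POST -H "Authorization: $TOKEN" \
  -d '{"bridge": "br25", "subtype": "openflow", "resources": 2}' \
  https://miami-corsa/api/v1/bridges

# Attach a CTAG tunnel covering VLANs 1-1499 on a physical port
# (field names and port numbers are assumptions)
curl -k -X POST -H "Authorization: $TOKEN" \
  -d '{"ofport": 1, "port": 23, "vlan-range": "1-1499"}' \
  https://miami-corsa/api/v1/bridges/br25/tunnels
```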
There are multiple issues worth clarifying before going back to the production setup. We have tried everything we can based on the documents from last year's demo at GATech, but we don't think it will work with the current SDX implementation. In conclusion, we need the following from you to make the production testbed work.
Details about the issues are as follows:
1.2 - Alternatively, on the production setup with a 3-site topology (assuming SDX will be running at SoX/Atlanta), Internet2-AL2S will do the VLAN tag translation, and a multipoint AL2S circuit can be used. In-band management traffic towards Miami and Santiago will be carried over VLAN 3621 at SoX/Atlanta.
For dataplane connections, flows as below are pushed (see the flow sketch after this list). In this case, VLAN 1422 is requested for the edge site, but intermediate VLAN 1 is used for the inter-site connection. This intermediate VLAN tag needs to be the same at all sites.
For the reasons in item 2, the same intermediate VLAN tag will be used for ingress and egress traffic, so physical separation of the traffic is needed. Tunnels with the same VLAN range can be attached if separate physical ports are used.
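To illustrate the kind of VLAN translation involved, here is a sketch in ovs-ofctl syntax; in practice these flows are pushed by the SDX/local controllers, and the port numbers below are hypothetical:

```
# Edge VLAN 1422 arriving on the site-facing port is rewritten to intermediate
# VLAN 1 towards the inter-site port (ports 1 and 2 are hypothetical)
ovs-ofctl add-flow br25 "in_port=1,dl_vlan=1422,actions=mod_vlan_vid:1,output:2"
# Reverse direction: intermediate VLAN 1 back to edge VLAN 1422
ovs-ofctl add-flow br25 "in_port=2,dl_vlan=1,actions=mod_vlan_vid:1422,output:1"
```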
Santiago switch: loop cables are missing, and it will take time to add the cables for the rate-limiting VFCs (br19, br20).
Checklist to validate connection
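A minimal sketch of the kinds of checks this covers, based on the items discussed in this issue; host names, interfaces, and addresses are placeholders:

```
# OpenFlow session: can the switch side reach the local controller's listener?
nc -vz <controller-ip> 6683

# In-band management: is the remote site reachable over the management VLAN (1411)?
ping -c 3 -I <nic>.1411 <remote-mgmt-ip>

# Dataplane: can the DTNs reach each other end to end?
ping -c 3 <remote-dtn-dataplane-ip>
```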
We created the environment in Miami and Chile, organizing the VLANs and ports. Bridges br25 were created with the following tunnels:
We discovered an issue with the Dell switches in the path: they are resetting the VLAN ID fields and removing the inner tags. We are troubleshooting this issue now. If we don't find a solution, we will move the configuration from S-VLAN to a VLAN-range environment, with 5-10 VLANs between each pair of Corsa switches.
@mcevik0, I still don't have access to any device in Atlanta. Please create bridge br25 and the tunnels following the same idea and let me know. I will update the diagram.
br25 on SoX-Corsa
Connectivity from the AMPATH Corsa and DTN to the SOX Corsa switch is working.
The SDX testbed was fixed, tested, and documented. Details were shared via Slack.
I think it will be better to post technical info to this GitHub issue, so I'm copying from Slack.
More copied from Slack
Current status of the VFCs
MIAMI
SOX
CHILE
@jab1982 - I am having difficulty finding the IP addresses below on the servers (MIAMI and CHILE).
I am looking at the servers below.
Can you let me know which servers are used?
I can log in to the Miami-DTN, Chile-DTN, and Atlanta-DTN shown on the topology drawing.
MIAMI
CHILE
ATLANTA
Miami DTN hosting SDX and LC-Miami Switch config
Some changes will be applied to the switch tunnels to accommodate port separation for the SDX Controller / LC / DTN with OpenFlow ports.
This is the exception raised when the manifest here was used.
Note the ports for sdx, lc, and dtn.
Build and run ATL Local-controller Switch:
Run ATL Local-controller:
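A sketch of the general shape of these steps, assuming the local controller is run from a checkout of the AtlanticWave-SDX prototype code; the requirements file, entry-point path, and command-line flags are assumptions and should be checked against the repository README:

```
# Install the local controller's Python dependencies (file name is an assumption)
cd atlanticwave-proto
pip install -r requirements.txt

# Run the ATL local controller against the shared manifest, listening for the
# Corsa VFC's OpenFlow connection on tcp/6683 (flags and paths are assumptions)
python localctlr/LocalController.py -n atl -m manifests/production.manifest
```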
Regarding the problem on the ATL side, is there a firewall inside the campus that blocks traffic on port tcp/6683?
I tested the connection from the other VM (awsdx-ctrl) to the current controller, and it works.
Traffic from the Corsa switch to the OpenFlow controller (128.61.149.224) is interrupted somewhere; see the checks below.
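Some quick checks that can narrow down where tcp/6683 is being dropped (standard Linux tooling; run them from the hosts noted in the comments):

```
# On the controller VM (128.61.149.224): confirm the controller is listening on 6683
ss -tlnp | grep 6683

# From a host on the switch side of the path: test the TCP path to the controller
nc -vz 128.61.149.224 6683

# On the controller VM: watch for incoming OpenFlow connection attempts from the switch
tcpdump -ni any tcp port 6683
```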
@russclarkgt , @jab1982 - I am looking at the drawing located at https://renci.slack.com/archives/CRS2KPHFV/p1599079286029300
@russclarkgt @jab1982 -
Can you please let me know which server is the right one to use over which network and interface?
We will run the controllers for the production switches (FIU, SoX, Chile) at RENCI and provision VMs there. This will require some work to bring the in-band management VLAN all the way to RENCI, but even if that does not work well, this setup will still be useful for monitoring and for testing the automation.
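As a sketch of what extending the in-band management VLAN (1411, per the notes above) onto a RENCI VM might look like; the interface name and addressing are placeholders:

```
# Create a VLAN 1411 subinterface on the VM's trunk-facing NIC and address it
# (interface name and IP address are placeholders)
ip link add link ens224 name ens224.1411 type vlan id 1411
ip addr add 10.14.11.50/24 dev ens224.1411
ip link set ens224.1411 up
```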
VM specs:
VMs are created (as templates) on the VMware Cluster with the IP addresses below.