
Allow configuration of shim-tcp-udp #9

Open
edugrasa opened this issue Jul 7, 2016 · 7 comments
edugrasa commented Jul 7, 2016

It would be nice if some of the machines could have an interface with a routable IP address configured, and the demonstrator allowed for the configuration of shim DIFs over TCP/UDP. This way RINA experiments could be extended outside of the machine where the demonstrator is installed.

(e.g. useful if you want to set up a datacentre in the demonstrator machine, and then have a number of laptops connecting to the different tenant DIFs and play Quake, for instance, as we did in the TNC demo)

@vmaffione

Following up on #8, I think that if we want to support the manager, the second option you propose makes sense. All the virtual machines have a management interface, and all the corresponding TAPs are bridged together into a mgmt bridge. But then you have different options to allow connectivity to an external manager:

  1. Have a separate QEMU VM (run manually) which runs the manager, and attach that VM to the same mgmt bridge. In this case you don't need support for the shim-tcp-udp; the shim-ethernet is enough, since the manager VM would be in the same (virtualized) mgmt LAN as the demonstrator VMs.
  2. Have the manager VM run on a separate physical machine A. If machine A is on the same LAN as the physical machine B running the demonstrator VMs, then you can bridge the physical interface of B into the mgmt bridge, and again use the shim-ethernet to let the manager VM and the demonstrator VMs talk to each other. But this solution requires careful configuration, otherwise you can lock yourself out of SSH. If the physical machine is not on the same LAN, then you will need the shim-tcp-udp.
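A rough sketch of the bridging step in option 2. All names and addresses here are assumptions (adjust them to your setup); this needs root, and since it touches the SSH-facing interface it is safest to run from the console:

```shell
# Assumed names: "mgmtbr" is the demonstrator's management bridge,
# "eth0" is the physical interface of machine B.

# Move the IP configuration off eth0 before enslaving it, otherwise
# SSH sessions through eth0 will break.
ip addr flush dev eth0

# Attach the physical interface to the management bridge.
brctl addif mgmtbr eth0

# Give machine B's address (assumed 10.0.0.2/24) to the bridge itself,
# so the machine remains reachable.
ip addr add 10.0.0.2/24 dev mgmtbr
ip link set mgmtbr up
```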

What about solution 1? It seems the best one to me...

@edugrasa

Agreed, 1) seems the best one.

How could we automate creation of this separate QEMU VM as much as possible? Could we have a script that starts it given a path to the image? What is the best way to create the image without using buildroot?

vmaffione commented Jul 21, 2016

Ok, agreed on 1, I will do it.
If we manage to run Java inside buildroot VMs, that's better, because we can have a small image containing the manager. Otherwise the image must be created and updated manually; there is little we can do about that. I can always extend the demonstrator to automatically run a custom image as a manager and connect it to the others as described in (1), that is not a problem.

If you are not familiar with qemu, you can first create the empty image

$ qemu-img create -f qcow2 manager.qcow2 6G

then download the ISO of the distribution you want to use, and use this script, available at https://github.com/vmaffione/qrun

$ ./qrun.py -i /path/to/manager.qcow2 --install-from-iso /path/to/install.iso

install the OS and shut down the VM.

Then you can run the VM with

$ ./qrun.py -i /path/to/manager.qcow2 -m 10  # 10 is an id you give to the VM instance

vmaffione reopened this Jul 21, 2016
@vmaffione

I've added support for the --manager option. When you specify it, the system automatically creates a normal DIF, NMS.DIF, layered over a shim DIF, where all the nodes are involved. Then it adds the MAD configuration to the ipcmanager.conf. I tried to run a scenario (without the manager), and it seems to do what is expected. Could you give it a try?
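For reference, a hypothetical invocation. Only the --manager flag is confirmed in this thread; the configuration-file flag and the file name are assumptions, so check gen.py --help for the real interface:

```shell
# Hypothetical: regenerate the scenario with manager support enabled.
# The -c flag and demo.conf are assumptions; --manager is the new option.
./gen.py -c demo.conf --manager
```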

@edugrasa

Just tried it; there's one thing that should be fixed. In the NMS DIF, each IPCP only needs to enroll with the IPCP in the system running the Manager (it's a different enrollment strategy from the other ones; we could call it "star"). The problem is that the machine with the Manager is not there yet, so I would do the following:

If the --manager option is specified, gen.py will also create an extra machine with the custom manager image (when we have it; the same image as the others for now) and the NMS DIF; all the other IPCPs in the NMS DIF will only enroll with this one. What do you think?

@vmaffione

Yes, it seems reasonable, I will do it!

vmaffione commented Jul 22, 2016

I updated the code so that:

  • when --manager is specified, an additional manager node is run (just the same image for now), and all the other nodes enroll against this one.
  • the graphviz output colors in red the edges where enrollment will happen, and black otherwise. This way you can see graphically whether you are ok with the enrollments.
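To eyeball the result, the generated graph can be rendered with graphviz. A minimal sketch of what such an output might look like (the node names, topology, and the demo.gv file name are made up for illustration; the actual file produced by the demonstrator may differ):

```shell
# Write a small graph in the style described above: red edges mark
# enrollments, black edges do not (names and topology are made up).
cat > demo.gv <<'EOF'
graph demo {
    a -- manager [color=red];   /* a enrolls against the manager */
    b -- manager [color=red];   /* b enrolls against the manager */
    a -- b [color=black];       /* no enrollment on this edge */
}
EOF

# Render it to a PNG (requires the graphviz "dot" tool to be installed).
if command -v dot >/dev/null; then
    dot -Tpng demo.gv -o demo.png
fi
```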

Could you try these new features?
