>> Can't wait? Jump to Quick-Start!
See also:
- README-dash-workflows.md for build workflows and Make targets.
- README-dash-ci.md for CI pipelines.
- README-dash-docker.md for Docker usage.
- README-saithrift.md for saithrift client/server and test workflows.
- README-ptftests.md for saithrift PTF test-case development and usage.
- README-pytests.md for saithrift Pytest test-case development and usage.
This is a P4 model of the DASH overlay pipeline, which uses the behavioral model (bmv2) from p4lang. It includes the P4 program that models the DASH overlay data plane, Dockerfiles, build and test infrastructure, and CI (Continuous Integration) spec files.
IMPORTANT: Developers, read "Typical Workflow: Committing new code - ignoring SAI submodule" before committing code.
Known issues:
- P4 code doesn't loop packets back to the same port.
- P4 code doesn't mark packets to drop (mark_to_drop) when meta.drop is set.
- Permission and ownership issues in Docker images; a permanent fix is needed.
- Link to an article describing gaps in BMv2 and P4-DPDK, i.e. features these targets are missing in order to be a good DASH P4 reference model: Draft Gap Analysis
Small items to complete given the existing features and state, i.e. excluding major roadmap items:
- Update SAI submodule to upstream when PRs are merged (currently using dev branches for URLs)
- Produce "dev" and "distro" versions of docker images. Dev images mount to host FS and use artifacts built on the host. Distro images are entirely self-contained including all artifacts.
- Build a Docker image automatically when its Dockerfile changes, publish and pull from permanent repo
- Use Azure Container Registry (ACR) for Docker images instead of temporary Dockerhub registry
- Use dedicated higher-performance runners instead of free Azure 2-core GitHub runner instances
These are significant feature or functionality work items.
- Use a modified bmv2 which adds stateful processing; the current version is vanilla bmv2. This will require building bmv2 instead of using a prebuilt bmv2 Docker image, see Build Docker dev container. [WIP]
- Add DASH service test cases including SAI-thrift pipeline configuration and traffic tests
See Installing Prerequisites for details; a quick way to check the installed versions is sketched after the list below.
- Ubuntu 20.04, bare-metal or VM
- 2 CPU cores minimum, 7 GB RAM, 14 GB HD; same as the free Azure 2-core GitHub runner instances, we'll try to live within these limits
- git - tested with version 2.25.1
- docker
- docker-compose (1.29.2 or later)
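To confirm the prerequisites are installed, a quick check is to print each tool's version (the commands below are standard; the output will vary from machine to machine):
git --version
docker --version
docker-compose --version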
git clone <repository URL>
cd DASH
Optional - if you require a particular dev branch:
git checkout <branch>
Init (clone) the SAI submodule:
git submodule update --init # NOTE --recursive not needed (yet)
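To verify the submodule was initialized, a quick sanity check is git submodule status (the exact commit hash and path shown will differ):
git submodule status   # expect a line showing the checked-out commit of the SAI submodule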
Eager to see it work? First clone this repo, then do the following:
In the first terminal (console will print bmv2 logs):
cd dash-pipeline
make clean && make all run-switch
The above procedure takes a while since it has to pull docker images (once) and build some code.
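Once the switch is up, you can optionally confirm its container is running from another terminal (a sketch; the exact container names depend on the docker images in use):
docker ps   # the bmv2 switch container should appear in the list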
In a second terminal (console will print saithrift server logs):
make run-saithrift-server
In a third terminal (console will print test results):
make run-all-tests
When you're done, do:
make kill-all # just to stop the daemons
# you can redo "run" commands w/o rebuilding
OR
make clean # stop daemons and clean everything up
The final make clean above will kill the switch, delete artifacts, and remove the veth pairs.
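If you want to double-check the cleanup, the following is an optional sanity check (exact output depends on whatever else is running on your host):
docker ps                # no DASH-related containers should remain
ip link show type veth   # the test veth pairs should be gone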
The tests may use a combination of SW packet generators:
- Scapy - well-known packet-at-a-time SW traffic generator/capture
- ixia-c - performant flow-based packet generator/capture
More information about the setup for ixia-c-based traffic tests is available here.
This is a summary of the most-often used commands; see README-dash-workflows.md for more details.
- CTRL-c - kill the switch container from within the interactive terminal
- make kill-all - kill all the running containers
- make clean - clean up everything, kill containers
See README-dash-workflows.md for build workflows and Make targets. There are many fine-grained Make targets to control your development workflow.
sudo apt install -y git
Needed for basically everything to build/test dash-pipeline.
NOTE: Use docker-compose 1.29.2 or later! The .yml file format changed. Using an older version might result in an error such as:
ERROR: Invalid interpolation format for "controller" option in service "services": "ixiacom/ixia-c-controller:${CONTROLLER_VERSION:-latest}"
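If you hit this error, a quick way to check your installation is to print the docker-compose version and, assuming you run it from the directory containing the compose file, ask docker-compose to validate it:
docker-compose --version   # should report 1.29.2 or later
docker-compose config      # parses and validates the compose file in the current directory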
It is assumed you already have Docker on your system.
The docker-compose command is used to orchestrate the ixia-c containers. You need to install it to run the ixia-c test scripts (ixia-c itself doesn't require docker-compose; it's merely convenient for instantiating it using a declarative .yml file).
Installation of docker-compose has to be done just once. You can use another technique based on your platform and preferences. The following will download and install a Linux executable under /usr/local/bin, which should be in your PATH. You can edit the commands below to install it somewhere else as desired; just change the path as needed.
sudo mkdir -p /usr/local/bin
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
To test the installation, execute the following. The output on the second line is an example; yours may differ.
docker-compose --version
docker-compose version 1.29.2, build 5becea4c
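The ixia-c test scripts invoke docker-compose for you, but it can also be run by hand if you want to experiment (a sketch; the compose file path below is a placeholder, substitute the ixia-c .yml file used by the tests):
docker-compose -f <path-to-ixia-c-compose.yml> up -d   # start the ixia-c containers in the background
docker-compose -f <path-to-ixia-c-compose.yml> down    # stop and remove them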