Commit

initial commit

klemense1 committed Jul 9, 2021
1 parent 0dbed45 commit bf574bf
Showing 47 changed files with 4,017 additions and 0 deletions.
6 changes: 6 additions & 0 deletions .bazelrc
@@ -0,0 +1,6 @@
test --test_output=errors --action_env="GTEST_COLOR=1"

# Force bazel output to use colors (good for jenkins) and print useful errors.
common --color=yes

build --cxxopt='-std=c++17' --define planner_rules_mcts=true --define ltl_rules=true
21 changes: 21 additions & 0 deletions .github/workflows/CI.yml
@@ -0,0 +1,21 @@
name: CI

on:
  push:
  schedule:
    - cron: "0 2 * * *"

jobs:
  build:

    runs-on: ubuntu-latest
    container:
      image: docker://barksim/bark:latest
    steps:
      - uses: actions/checkout@v1
      - name: Setting up virtual environment
        run: virtualenv -p python3 ./tools/python/venv --system-site-packages
      - name: Getting into venv
        run: . ./tools/python/venv/bin/activate
      - name: Running merging_test_specific
        run: bazel test //src/run:merging_test_specific
61 changes: 61 additions & 0 deletions README.md
@@ -0,0 +1,61 @@
# Example Benchmark

Repository showing how to use BARK for research, e.g., for conducting reproducible experiments. The core of this example repository is a benchmark that compares a single-agent Monte Carlo Tree Search (MCTS) with MCTS variants that incorporate penalties for traffic-rule violations into their reward function. The study focuses on merging: an arbitrary number of scenarios can be generated, with the initial positions and velocities of the vehicles sampled from a normal distribution.

## Getting started:
If you are not familiar with BARK yet, check out the BARK [paper](https://arxiv.org/abs/2003.02604) and the code [documentation](https://bark-simulator.readthedocs.io/en/latest/).
This repository uses a virtual environment, just as BARK does: install it via `bash tools/python/setup_venv.sh` and then source it via `source tools/python/into_venv.sh`.

## What is in there:
There are four targets ready for execution:
* `merging_test_random`: runs N scenarios and samples the initial states randomly
* `merging_test_specific`: runs a specific scenario (you can tune its parameters)
* `run_benchmark`: runs a full benchmark
* `scenario_tuning`: runs a lightweight benchmark with visualization to find suitable scenario generation parameters

## Benchmark
First, execute `bazel run //src/run:run_benchmark`. The results of the benchmark are saved at
`example_benchmark/bazel-bin/src/run/run_benchmark.runfiles/example_benchmark`. Copy the result file to `results/benchmark`. It contains a pandas DataFrame that records the outcome of each benchmark simulation run. To visualize the results, we have prepared an IPython notebook: start the notebook server via `bazel run //src/create_figures:run_notebooks` and select the `plot_benchmark_results` notebook. The code generates the following figure:

![Benchmark Results](benchmark_results.png)
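
To inspect the underlying DataFrame outside the notebook, here is a minimal sketch (assuming the `BenchmarkResult` API from `bark.benchmark.benchmark_runner`; the file name and the aggregation are illustrative):

```python
from bark.benchmark.benchmark_runner import BenchmarkResult

# Load a dumped benchmark result and extract the pandas DataFrame,
# one row per (scenario, behavior) simulation run.
result = BenchmarkResult.load("results/benchmark/benchmark_results.zip")
df = result.get_data_frame()

# e.g., aggregate the evaluator outcomes per behavior model
print(df.groupby("behavior").mean(numeric_only=True))
```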

### Changing Parameters:
Parametrization in BARK is done via the `ParameterServer`, which reads JSON files. You can tweak the existing ones:
* for the viewer in `viewer_config/params/`
* for the behavior models in `mcts_config/params/`

Of course, these JSON files are quite nested, so we provide scripts that generate them with default values:
* for the viewer: `src/viewer_config/viewer_config_test.py`
* for the behavior models: `src/mcts_config/mcts_config_test.py`

The concept for creating parameter files can be transferred to any other object, as sketched below.
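
A minimal sketch of the round trip (assuming the `ParameterServer` from `bark.runtime.commons.parameters`; the file and parameter names are illustrative):

```python
from bark.runtime.commons.parameters import ParameterServer

# Load an existing parameter file ...
params = ParameterServer(filename="src/mcts_config/params/default.json")

# ... override a nested entry (accessing a missing key creates it with a
# default value, which is how the generation scripts work) ...
params["BehaviorUctSingleAgent"]["MaxNumIterations"] = 2000

# ... and write the complete, nested parameter set back to disk.
params.Save("my_params.json")
```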

### Evaluators:
Evaluators can be used both as metrics for analyzing the benchmark and as termination criteria for a simulation run. You can choose any evaluator from [bark/world/evaluation](https://github.com/bark-simulator/bark/tree/master/bark/world/evaluation). Using `EvaluatorLTL`, for example, gives you access to a wide range of formalized traffic rules.
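
A minimal sketch of how evaluators are typically wired into BARK's `BenchmarkRunner` (the evaluator names, the thresholds, and the `db`/`behaviors_tested` objects are illustrative assumptions):

```python
from bark.benchmark.benchmark_runner import BenchmarkRunner

# Map result columns to evaluator class names ...
evaluators = {"success": "EvaluatorGoalReached",
              "collision": "EvaluatorCollisionEgoAgent",
              "max_steps": "EvaluatorStepCount"}

# ... and define when a single simulation run terminates.
terminal_when = {"collision": lambda x: x,        # stop on any collision
                 "max_steps": lambda x: x > 100,  # stop after 100 steps
                 "success": lambda x: x}          # stop once the goal is reached

runner = BenchmarkRunner(benchmark_database=db,   # a benchmark database instance
                         evaluators=evaluators,
                         terminal_when=terminal_when,
                         behaviors=behaviors_tested)  # dict: name -> behavior model
result = runner.run()
```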

### Scenarios:
The scenarios are generated based on the config files in `src/database/scenario_sets`.

## Dependency Management using Bazel
As you can see in the [bark-simulator GitHub group](https://github.com/bark-simulator/), the BARK ecosystem is split over multiple GitHub repositories. One reason for this is to keep the core functionalities lightweight and reasonably fast to build. Specifically, many planning modules are placed in separate repositories. Using Bazel as our build environment enables the reproducibility of our experiments, as the dependency versions of the repositories can be tracked easily.

For example, have a look at `tools/deps.bzl`, where the specific dependencies and either their commit hashes or a specific branch can be selected. To try two different versions of your planner (located in another repository), you do not build or install them manually; you only change the commit hash.
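
A sketch of what such a pinned dependency looks like inside `tools/deps.bzl` (the repository name mirrors the targets used above; the placeholder hash is illustrative):

```python
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

def example_benchmark_dependencies():
    # Pin the planner to an exact commit: switching planner versions
    # for an experiment means changing only this hash.
    git_repository(
        name = "planner_rules_mcts",
        commit = "<commit-hash-of-the-planner-version-under-test>",
        remote = "https://github.com/bark-simulator/planner-rules-mcts",
    )
```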

## Cite us

This repository contains work from multiple publications:
* Traffic Rules as Evaluators: [Formalizing Traffic Rules for Machine Interpretability](https://arxiv.org/abs/2007.00330)
* MCTS with Traffic Rules: [Modeling and Testing Multi-Agent Traffic Rules within Interactive Behavior Planning](https://arxiv.org/abs/2009.14186)

If you build on this work, please cite the respective publication.

For everything else, please cite us using the following [paper](https://arxiv.org/abs/2003.02604):

```
@inproceedings{Bernhard2020,
  title = {BARK: Open Behavior Benchmarking in Multi-Agent Environments},
  author = {Bernhard, Julian and Esterle, Klemens and Hart, Patrick and Kessler, Tobias},
  booktitle = {2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  url = {https://arxiv.org/pdf/2003.02604.pdf},
  year = {2020}
}
```
35 changes: 35 additions & 0 deletions WORKSPACE
@@ -0,0 +1,35 @@
workspace(name = "example_benchmark")

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive", "http_file")
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
load("//tools:deps.bzl", "example_benchmark_dependencies")

example_benchmark_dependencies()

load("@bark_project//tools:deps.bzl", "bark_dependencies")
bark_dependencies()

load("@com_github_nelhage_rules_boost//:boost/boost.bzl", "boost_deps")
boost_deps()

# -------- Benchmark Database -----------------------
git_repository(
    name = "benchmark_database",
    commit = "422b0ddd316ab46ac79dcd72a45645e197cf7da1",
    remote = "https://github.com/bark-simulator/benchmark-database",
)

load("@benchmark_database//util:deps.bzl", "benchmark_database_dependencies")
benchmark_database_dependencies()

load("@benchmark_database//load:load.bzl", "benchmark_database_release")
benchmark_database_release()

load("@rule_monitor_project//util:deps.bzl", "rule_monitor_dependencies")
rule_monitor_dependencies()

load("@planner_rules_mcts//util:deps.bzl", "planner_rules_mcts_dependencies")
planner_rules_mcts_dependencies()

load("@pybind11_bazel//:python_configure.bzl", "python_configure")
python_configure(name = "local_config_python")
Binary file added benchmark_results.png (binary content not shown)
Empty file added results/BUILD
Empty file.
5 changes: 5 additions & 0 deletions results/benchmark/BUILD
@@ -0,0 +1,5 @@
filegroup(
    name = "benchmark_results",
    srcs = glob(["*.zip"]),
    visibility = ["//visibility:public"],
)
3 changes: 3 additions & 0 deletions results/benchmark/benchmark_results.zip
Git LFS file not shown
Empty file added src/BUILD
Empty file.
6 changes: 6 additions & 0 deletions src/common/BUILD
@@ -0,0 +1,6 @@
py_library(
    name = "custom_lane_corridor_config",
    srcs = ["custom_lane_corridor_config.py"],
    data = ["@bark_project//bark/python_wrapper:core.so"],
    visibility = ["//visibility:public"],
)
81 changes: 81 additions & 0 deletions src/common/custom_lane_corridor_config.py
@@ -0,0 +1,81 @@
# Copyright (c) 2021 fortiss GmbH
#
# Authors: Julian Bernhard, Klemens Esterle, Patrick Hart and
# Tobias Kessler
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.

import logging

from bark.core.world.opendrive import XodrDrivingDirection
from bark.core.world.goal_definition import GoalDefinitionPolygon
from bark.runtime.scenario.scenario_generation.config_with_ease import LaneCorridorConfig
from bark.core.geometry import *


class CustomLaneCorridorConfig(LaneCorridorConfig):
  def __init__(self,
               params=None,
               **kwargs):
    super(CustomLaneCorridorConfig, self).__init__(params, **kwargs)

  def goal(self, world):
    # these settings are valid for the merging map
    road_corr = world.map.GetRoadCorridor(
      self._road_ids, XodrDrivingDirection.forward)
    lane_corr = self._road_corridor.lane_corridors[0]
    # 20m x 20m goal region, centered at 45% of the lane's center line
    goal_polygon = Polygon2d([0, 0, 0], [
      Point2d(-10, -10), Point2d(-10, 10), Point2d(10, 10), Point2d(10, -10)])
    goal_point = GetPointAtS(
      lane_corr.center_line, lane_corr.center_line.Length()*0.45)
    goal_polygon = goal_polygon.Translate(goal_point)
    return GoalDefinitionPolygon(goal_polygon)


class DeterministicLaneCorridorConfig(CustomLaneCorridorConfig):
  def __init__(self,
               params=None,
               **kwargs):
    super(DeterministicLaneCorridorConfig, self).__init__(params, **kwargs)
    # longitudinal start positions (along the center line) and start
    # velocities of the agents to spawn, consumed one agent at a time
    self._s_start = kwargs.pop("s_start", [30])
    self._vel_start = kwargs.pop("vel_start", [10])
    if not isinstance(self._s_start, list):
      raise ValueError("s_start must be of type list.")

  def position(self, world):
    """Returns the start pose (x, y, theta) of the next agent to spawn."""
    if self._road_corridor is None:
      world.map.GenerateRoadCorridor(
        self._road_ids, XodrDrivingDirection.forward)
      self._road_corridor = world.map.GetRoadCorridor(
        self._road_ids, XodrDrivingDirection.forward)
    if self._road_corridor is None:
      return None
    if self._lane_corridor is not None:
      lane_corr = self._lane_corridor
    else:
      lane_corr = self._road_corridor.lane_corridors[self._lane_corridor_id]
    if lane_corr is None:
      return None
    centerline = lane_corr.center_line

    if len(self._s_start) == 0:
      logging.info(
        "No more agents to spawn. If this message appears more than once, "
        "the scenario creation was attempted more than once -> ERROR")
      return None

    # consume the next start position and derive the pose from the center line
    self._current_s = self._s_start.pop(0)
    xy_point = GetPointAtS(centerline, self._current_s)
    angle = GetTangentAngleAtS(centerline, self._current_s)

    logging.info("Creating agent at x={}, y={}, theta={}".format(
      xy_point.x(), xy_point.y(), angle))
    return (xy_point.x(), xy_point.y(), angle)

  def velocity(self):
    return self._vel_start
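
A usage sketch for these configs (following the pattern of BARK's `ConfigWithEase` scenario generation; the map path, road/lane IDs, and numbers are illustrative assumptions):

```python
from bark.runtime.commons.parameters import ParameterServer
from bark.runtime.scenario.scenario_generation.config_with_ease import ConfigWithEase

params = ParameterServer()

# other traffic spawns deterministically on the left lane ...
left_lane = DeterministicLaneCorridorConfig(params=params,
                                            road_ids=[0, 1],
                                            lane_corridor_id=0,
                                            s_start=[10., 40.],
                                            vel_start=[10., 10.])
# ... while the controlled (ego) agent merges from the right lane
right_lane = CustomLaneCorridorConfig(params=params,
                                      road_ids=[0, 1],
                                      lane_corridor_id=1,
                                      controlled_ids=True)

scenario_generation = ConfigWithEase(
  num_scenarios=3,
  map_file_name="path/to/DR_DEU_Merging_MT_v01_shifted.xodr",
  random_seed=0,
  params=params,
  lane_corridor_configs=[left_lane, right_lane])
```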
17 changes: 17 additions & 0 deletions src/create_figures/BUILD
@@ -0,0 +1,17 @@

filegroup(
    name = "notebooks_folder",
    srcs = glob(["*.ipynb"], exclude=["run.py", "run", "__init__.py"]),
)

# add bark deps here
py_test(
    name = "run_notebooks",
    srcs = ["run_notebooks.py"],
    data = [
        ":notebooks_folder",
        "@bark_project//bark/python_wrapper:core.so",
        "//results/benchmark:benchmark_results",
    ],
    deps = ["@bark_project//bark/benchmark:benchmark_runner"],
)