devkit-v0.2
mh0797 committed Mar 11, 2024
1 parent 526e9ac commit 3e46f63
Showing 25 changed files with 283 additions and 84 deletions.
5 changes: 4 additions & 1 deletion README.md
Original file line number Diff line number Diff line change
@@ -47,7 +47,10 @@


## Changelog <a name="changelog"></a>

- **`[2024/03/11]`** NAVSIM v0.2 release
- Easier installation and download
- mini and test split integration
- Privileged `Human` agent
- **`[2024/02/20]`** NAVSIM v0.1 release (initial demo)
- OpenScene-mini sensor blobs and annotation logs
- Naive `ConstantVelocity` agent
2 changes: 0 additions & 2 deletions docs/agents.md
@@ -17,6 +17,4 @@ Let’s dig deeper into this class.
Given this input, you will need to override the `compute_trajectory()` method and output a `Trajectory`. This is an array of BEV poses (with x, y and heading in local coordinates), as well as a `TrajectorySampling` config object that indicates the duration and frequency of the trajectory. The PDM score is evaluated for a horizon of 4 seconds at a frequency of 10Hz. The `TrajectorySampling` config facilitates interpolation when the output frequency is different from the one used during evaluation.

We provide a naive constant velocity agent as part of our demo, for reference:

https://github.com/autonomousvision/navsim/blob/51cecd51aa70b0e6bcfb3541b91ae88f2a78a25e/navsim/agents/constant_velocity_agent.py#L9
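The core of such a constant-velocity agent is plain dead-reckoning from the current ego state. As a minimal sketch (the function name and signature are illustrative, not the devkit API), the pose extrapolation behind it could look like:

```python
import math
from typing import List, Tuple

def constant_velocity_poses(
    x: float, y: float, heading: float, speed: float,
    num_poses: int = 8, interval: float = 0.5,
) -> List[Tuple[float, float, float]]:
    """Extrapolate BEV poses (x, y, heading) in local coordinates,
    assuming the ego keeps its current speed and heading.
    Defaults match a 4 s horizon sampled every 0.5 s."""
    poses = []
    for i in range(1, num_poses + 1):
        t = i * interval  # time offset of the i-th future pose
        poses.append((
            x + speed * t * math.cos(heading),
            y + speed * t * math.sin(heading),
            heading,  # heading is held constant
        ))
    return poses
```

A `TrajectorySampling` config with a different frequency would simply change `num_poses` and `interval`; the evaluation interpolates to 10 Hz over 4 seconds.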

3 changes: 2 additions & 1 deletion docs/cache.md
@@ -9,4 +9,5 @@ OpenScene is a compact redistribution of the large-scale [nuPlan dataset](https:
cd $NAVSIM_DEVKIT_ROOT/scripts/
./run_metric_caching.sh
```
Note that you have to set `LOG_PATH` and `METRIC_CACHE_PATH` first. `LOG_PATH` has to point to the [OpenScene annotations](https://github.com/autonomousvision/navsim/blob/main/docs/install.md#1-download-the-demo-data). The cache will be saved under `METRIC_CACHE_PATH`, which you can choose freely.

This will create the metric cache under `$NUPLAN_EXP_ROOT/metric_cache`, where `$NUPLAN_EXP_ROOT` is the environment variable set during installation.
65 changes: 44 additions & 21 deletions docs/install.md
@@ -2,36 +2,59 @@

To get started with NAVSIM:

### 1. Download the demo data
First, you need to download the OpenScene mini logs and sensor blobs, as well as the nuPlan maps.
### 1. Clone the navsim-devkit
Clone the repository
```
git clone https://github.com/autonomousvision/navsim.git
cd navsim
```
### 2. Download the demo data
You need to download the OpenScene logs and sensor blobs, as well as the nuPlan maps.
We provide scripts to download the nuPlan maps, the mini split, and the test split.
Navigate to the download directory and download the maps:

**NOTE: Please check the [LICENSE file](https://motional-nuplan.s3-ap-northeast-1.amazonaws.com/LICENSE) before downloading the data.**

```
wget https://motional-nuplan.s3-ap-northeast-1.amazonaws.com/public/nuplan-v1.1/nuplan-maps-v1.1.zip && unzip nuplan-maps-v1.1.zip
wget https://s3.eu-central-1.amazonaws.com/avg-projects-2/navsim/navsim_logs.zip && unzip navsim_logs.zip
wget https://s3.eu-central-1.amazonaws.com/avg-projects-2/navsim/sensor_blobs.zip && unzip sensor_blobs.zip
cd download && ./download_maps.sh
```
The `sensor_blobs` file is fairly large (90 GB). If you only want to understand the metrics and test the naive baselines in the demo, downloading it is not strictly necessary.

### 2. Install the navsim-devkit
Next, setup the environment and install navsim.
Clone the repository
Next, download the mini split and the test split:
```
git clone https://github.com/kashyap7x/navsim.git
cd navsim
```
Then create a new environment and install the required dependencies:
```
conda env create --name navsim -f environment.yml
conda activate navsim
pip install -e .
./download_mini.sh
./download_test.sh
```

**The mini split and the test split require around 160 GB and 220 GB of disk space, respectively.**

This will download the splits into the download directory. From there, move the data to create the following structure.
```
~/navsim_workspace
├── navsim (containing the devkit)
├── exp
└── dataset
    ├── maps
    ├── navsim_logs
    │   ├── test
    │   └── mini
    └── sensor_blobs
        ├── test
        └── mini
```
Set the required environment variables by adding the following to your `~/.bashrc` file. Based on the structure above, the environment variables need to be defined as:
```
export NAVSIM_DEVKIT_ROOT=/path/to/navsim/devkit
export NUPLAN_EXP_ROOT=/path/to/navsim/exp
export NUPLAN_MAPS_ROOT=/path/to/nuplan/maps
export OPENSCENE_DATA_ROOT=/path/to/openscene
export NUPLAN_MAPS_ROOT="$HOME/navsim_workspace/dataset/maps"
export NUPLAN_EXP_ROOT="$HOME/navsim_workspace/exp"
export NAVSIM_DEVKIT_ROOT="$HOME/navsim_workspace/navsim"
export OPENSCENE_DATA_ROOT="$HOME/navsim_workspace/dataset"
```
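A missing or empty export is a common source of confusing errors later on. As a quick sanity check (the helper name is illustrative, not part of the devkit), the four variables can be verified from Python:

```python
import os

# The four variables the devkit expects, per the exports above.
REQUIRED_VARS = [
    "NUPLAN_MAPS_ROOT",
    "NUPLAN_EXP_ROOT",
    "NAVSIM_DEVKIT_ROOT",
    "OPENSCENE_DATA_ROOT",
]

def missing_navsim_env_vars(env=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_navsim_env_vars()
    print("All set!" if not missing else f"Missing: {missing}")
```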

### 3. Install the navsim-devkit
Finally, install navsim.
To this end, create a new environment and install the required dependencies:
```
conda env create --name navsim -f environment.yml
conda activate navsim
pip install -e .
```
5 changes: 3 additions & 2 deletions docs/metrics.md
@@ -16,10 +16,11 @@ i.e., `PDM Score = NC * DAC * DDC * (5*TTC + 2*C + 5*EP) / 12`
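The weighted combination above is simple enough to write down directly. As a sketch (the function name is illustrative, not the devkit API), assuming all subscores lie in [0, 1]:

```python
def combine_pdm_subscores(nc: float, dac: float, ddc: float,
                          ttc: float, c: float, ep: float) -> float:
    """PDM Score = NC * DAC * DDC * (5*TTC + 2*C + 5*EP) / 12.

    NC, DAC and DDC act as multiplicative penalties (a zero in any of
    them zeroes the score); TTC, C (comfort) and EP (ego progress) form
    a weighted average with weights 5, 2 and 5.
    """
    return nc * dac * ddc * (5 * ttc + 2 * c + 5 * ep) / 12
```

A run with all subscores at 1.0 yields a PDM score of 1.0, while any hard violation (e.g. `nc=0`) zeroes the score regardless of the other terms.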
To evaluate the PDM score for an agent you can run:
```
cd $NAVSIM_DEVKIT_ROOT/scripts/
./run_pdm_score_evaluation.sh
./run_cv_pdm_score_evaluation.sh
```
**Note: You have to adapt the variables: `LOG_PATH` so that it points to the [logs (annotations)](https://github.com/autonomousvision/navsim/blob/main/docs/install.md#1-download-the-demo-data), `METRIC_CACHE_PATH` so that it points to the [metric cache](https://github.com/autonomousvision/navsim/blob/main/docs/cache.md#understanding-the-data-format-and-classes), and `OUTPUT_DIR` so that it points to a directory where the evaluation CSV will be stored.**

By default, this will generate an evaluation csv for a simple constant velocity [planning baseline](https://github.com/autonomousvision/navsim/blob/main/docs/agents.md#output). You can modify the script to evaluate your own planning agent.

For instance, you can add a new config for your agent under `$NAVSIM_DEVKIT_ROOT/navsim/navsim/planning/script/config/pdm_scoring/agent/my_new_agent.yaml`.
Then, running your own agent is as simple as adding an override `agent=my_new_agent` to the script.
You can find an example in `run_human_agent_pdm_score_evaluation.sh`.
4 changes: 4 additions & 0 deletions download/download_maps.sh
@@ -0,0 +1,4 @@
wget https://motional-nuplan.s3-ap-northeast-1.amazonaws.com/public/nuplan-v1.1/nuplan-maps-v1.1.zip
unzip nuplan-maps-v1.1.zip
rm nuplan-maps-v1.1.zip
mv nuplan-maps-v1.0 maps
23 changes: 23 additions & 0 deletions download/download_mini.sh
@@ -0,0 +1,23 @@
wget https://huggingface.co/datasets/OpenDriveLab/OpenScene/resolve/main/openscene-v1.1/openscene_metadata_mini.tgz
tar -xzf openscene_metadata_mini.tgz
rm openscene_metadata_mini.tgz

for split in {0..31}; do
wget https://huggingface.co/datasets/OpenDriveLab/OpenScene/resolve/main/openscene-v1.1/openscene_sensor_mini_camera/openscene_sensor_mini_camera_${split}.tgz
echo "Extracting file openscene_sensor_mini_camera_${split}.tgz"
tar -xzf openscene_sensor_mini_camera_${split}.tgz
rm openscene_sensor_mini_camera_${split}.tgz
done

for split in {0..31}; do
wget https://huggingface.co/datasets/OpenDriveLab/OpenScene/resolve/main/openscene-v1.1/openscene_sensor_mini_lidar/openscene_sensor_mini_lidar_${split}.tgz
echo "Extracting file openscene_sensor_mini_lidar_${split}.tgz"
tar -xzf openscene_sensor_mini_lidar_${split}.tgz
rm openscene_sensor_mini_lidar_${split}.tgz
done

mv openscene_v1.1/meta_datas mini_navsim_logs
rm -r openscene_v1.1

mv openscene-v1.1/sensor_blobs mini_sensor_blobs
rm -r openscene-v1.1
23 changes: 23 additions & 0 deletions download/download_test.sh
@@ -0,0 +1,23 @@
wget https://huggingface.co/datasets/OpenDriveLab/OpenScene/resolve/main/openscene-v1.1/openscene_metadata_test.tgz
tar -xzf openscene_metadata_test.tgz
rm openscene_metadata_test.tgz

for split in {0..31}; do
wget https://huggingface.co/datasets/OpenDriveLab/OpenScene/resolve/main/openscene-v1.1/openscene_sensor_test_camera/openscene_sensor_test_camera_${split}.tgz
echo "Extracting file openscene_sensor_test_camera_${split}.tgz"
tar -xzf openscene_sensor_test_camera_${split}.tgz
rm openscene_sensor_test_camera_${split}.tgz
done

for split in {0..31}; do
wget https://huggingface.co/datasets/OpenDriveLab/OpenScene/resolve/main/openscene-v1.1/openscene_sensor_test_lidar/openscene_sensor_test_lidar_${split}.tgz
echo "Extracting file openscene_sensor_test_lidar_${split}.tgz"
tar -xzf openscene_sensor_test_lidar_${split}.tgz
rm openscene_sensor_test_lidar_${split}.tgz
done

mv openscene_v1.1/meta_datas test_navsim_logs
rm -r openscene_v1.1
mv openscene-v1.1/sensor_blobs test_sensor_blobs
rm -r openscene-v1.1
1 change: 1 addition & 0 deletions navsim/agents/abstract_agent.py
@@ -12,6 +12,7 @@ class AbstractAgent(abc.ABC):
"""
Interface for a generic end-to-end agent.
"""
requires_scene = False

def __new__(cls, *args: Any, **kwargs: Any) -> AbstractAgent:
"""
2 changes: 2 additions & 0 deletions navsim/agents/constant_velocity_agent.py
@@ -8,6 +8,8 @@

class ConstantVelocityAgent(AbstractAgent):

requires_scene = False

def __init__(
self,
trajectory_sampling: TrajectorySampling = TrajectorySampling(
37 changes: 37 additions & 0 deletions navsim/agents/human_agent.py
@@ -0,0 +1,37 @@
from typing import List
from nuplan.planning.simulation.trajectory.trajectory_sampling import TrajectorySampling
from navsim.agents.abstract_agent import AbstractAgent
from navsim.common.dataclasses import AgentInput, Trajectory, Scene

class HumanAgent(AbstractAgent):

requires_scene = True

def __init__(
self,
trajectory_sampling: TrajectorySampling = TrajectorySampling(
time_horizon=4, interval_length=0.5
),
):
self._trajectory_sampling = trajectory_sampling

def name(self) -> str:
"""Inherited, see superclass."""

return self.__class__.__name__

def initialize(self) -> None:
"""Inherited, see superclass."""
pass

def get_sensor_modalities(self) -> List[str]:
"""Inherited, see superclass."""
return []

def compute_trajectory(self, agent_input: AgentInput, scene: Scene) -> Trajectory:
"""
Computes the ego vehicle trajectory.
:param agent_input: Dataclass with agent inputs.
:param scene: Scene object, used here to read the ground-truth future trajectory.
:return: Trajectory representing the predicted ego's position in future
"""
return scene.get_future_trajectory(self._trajectory_sampling.num_poses)
11 changes: 6 additions & 5 deletions navsim/common/dataclasses.py
@@ -45,14 +45,13 @@ class Cameras:
b0: Camera

@classmethod
def from_camera_dict(cls, camera_dict: Dict[str, Any]) -> Cameras:
def from_camera_dict(cls, sensor_blobs_path: Path, camera_dict: Dict[str, Any]) -> Cameras:

data_dict: Dict[str, Camera] = {}
for camera_name in camera_dict.keys():
# TODO: adapt for complete OpenScenes data
image_path = (
Path(OPENSCENE_DATA_ROOT)
/ "sensor_blobs/mini"
sensor_blobs_path
/ camera_dict[camera_name]["data_path"]
)
data_dict[camera_name] = Camera(
@@ -108,6 +107,7 @@ def __post_init__(self):
def from_scene_dict_list(
cls,
scene_dict_list: List[Dict],
sensor_blobs_path: Path,
num_history_frames: int,
sensor_modalities: List[str] = ["lidar", "camera"],
) -> AgentInput:
@@ -139,7 +139,7 @@ def from_scene_dict_list(
ego_statuses.append(ego_status)

if include_cameras:
cameras.append(Cameras.from_camera_dict(scene_dict_list[frame_idx]["cams"]))
cameras.append(Cameras.from_camera_dict(sensor_blobs_path, scene_dict_list[frame_idx]["cams"]))

if include_lidar:
# TODO: Add lidar data
@@ -302,6 +302,7 @@ def get_agent_input(
def from_scene_dict_list(
cls,
scene_dict_list: List[Dict],
sensor_blobs_path: Path,
num_history_frames: int,
num_future_frames: int,
sensor_modalities: List[str] = ["lidar", "camera"],
@@ -345,7 +346,7 @@ def from_scene_dict_list(
)

if "camera" in sensor_modalities:
cameras = Cameras.from_camera_dict(scene_dict_list[frame_idx]["cams"])
cameras = Cameras.from_camera_dict(sensor_blobs_path, scene_dict_list[frame_idx]["cams"])
else:
cameras = None

9 changes: 6 additions & 3 deletions navsim/common/dataloader.py
@@ -11,7 +11,7 @@


def filter_scenes(
data_path: Path, scene_filter: SceneFilter, sensor_modalities: List[str] = ["lidar", "camera"]
data_path: Path, sensor_blobs_path: Path, scene_filter: SceneFilter, sensor_modalities: List[str] = ["lidar", "camera"]
) -> Dict[str, Scene]:

def split_list(input_list: List[Any], n: int) -> List[List[Any]]:
@@ -44,6 +44,7 @@ def split_list(input_list: List[Any], n: int) -> List[List[Any]]:
token = frame_list[scene_filter.num_history_frames - 1]["token"]
filtered_scenes[token] = Scene.from_scene_dict_list(
frame_list,
sensor_blobs_path,
num_history_frames=scene_filter.num_history_frames,
num_future_frames=scene_filter.num_future_frames,
sensor_modalities=sensor_modalities,
@@ -69,11 +70,12 @@ class SceneLoader:
def __init__(
self,
data_path: Path,
sensor_blobs_path: Path,
scene_filter: SceneFilter = SceneFilter(),
sensor_modalities: List[str] = ["lidar", "camera"],
):

self._filtered_scenes = filter_scenes(data_path, scene_filter, sensor_modalities)
self._filtered_scenes = filter_scenes(data_path, sensor_blobs_path, scene_filter, sensor_modalities)
self._scene_filter = scene_filter
self._sensor_modalities = sensor_modalities

@@ -98,11 +100,12 @@ class AgentInputLoader:
def __init__(
self,
data_path: Path,
sensor_blobs_path: Path,
scene_filter: SceneFilter = SceneFilter(),
sensor_modalities: List[str] = ["lidar", "camera"],
):

self._filtered_scenes = filter_scenes(data_path, scene_filter, sensor_modalities)
self._filtered_scenes = filter_scenes(data_path, sensor_blobs_path, scene_filter, sensor_modalities)
self._scene_filter = scene_filter
self._sensor_modalities = sensor_modalities

14 changes: 7 additions & 7 deletions navsim/evaluate/pdm_score.py
@@ -13,7 +13,6 @@
)
from navsim.planning.simulation.planner.pdm_planner.scoring.pdm_scorer import (
PDMScorer,
PDMScorerConfig,
)
from navsim.planning.simulation.planner.pdm_planner.utils.pdm_array_representation import (
ego_states_to_state_array,
@@ -96,19 +95,20 @@ def get_trajectory_as_array(
return ego_states_to_state_array(trajectory_ego_states)


def pdm_score(metric_cache: MetricCache, model_trajectory: Trajectory) -> PDMResults:
def pdm_score(
metric_cache: MetricCache,
model_trajectory: Trajectory,
future_sampling: TrajectorySampling,
simulator: PDMSimulator,
scorer: PDMScorer
) -> PDMResults:
"""
Runs PDM-Score and saves results in dataclass.
:param metric_cache: Metric cache dataclass
:param model_trajectory: Predicted trajectory in ego frame.
:return: Dataclass of PDM-Subscores.
"""

# TODO: add to some config
future_sampling = TrajectorySampling(num_poses=40, interval_length=0.1)
simulator = PDMSimulator(future_sampling)
scorer = PDMScorer(future_sampling, config=PDMScorerConfig(progress_distance_threshold=5.0))

initial_ego_state = metric_cache.ego_state

pdm_trajectory = metric_cache.trajectory