scad_tot

CMU CyLab group's repository for "TakeOverTime" neural network & verification. Affiliated with the Safe-SCAD project.

Project Structure

Installation

Option 1 installs the project natively and is recommended for most developers. Option 2 uses the Docker image, which is mainly intended for running the project in the cloud, but can also be used for local development. Because containers are intended to be ephemeral, the native install is usually the more comfortable development experience for developers who aren't familiar with Docker. Choose whichever option you prefer.

Option 1: Native Install

This option installs the project natively on your system. Using a venv is recommended, but not required. Detailed instructions for this option can be found here: Native Installation Instructions.
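The venv-based setup can be sketched as below. This is a minimal sketch, not the authoritative procedure (see the Native Installation Instructions); the repository URL and the `requirements.txt` filename are assumptions.

```shell
# Minimal native-install sketch. The clone URL and requirements.txt
# filename are assumptions -- follow the Native Installation
# Instructions for the exact steps.
git clone https://github.com/example/scad_tot.git   # hypothetical URL
cd scad_tot
python3 -m venv venv           # create an isolated environment (recommended)
source venv/bin/activate       # activate it (bash/zsh)
pip install -r requirements.txt
```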

Option 2: Docker Image

This option runs the project inside a Docker container. It is mainly intended for running the project in the cloud, but you can also use it for local development if you prefer working in containers. Instructions for this method can be found here: Docker Installation Instructions.
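A typical container workflow looks like the sketch below. The image name/tag and the mount path are assumptions; use the ones given in the Docker Installation Instructions.

```shell
# Docker workflow sketch. "example/scad_tot:latest" is a hypothetical
# image name -- substitute the real one from the Docker Installation
# Instructions.
docker pull example/scad_tot:latest
# Start an interactive shell in the container, mounting a local data
# directory so results survive the (ephemeral) container:
docker run -it --rm -v "$(pwd)/data:/scad_tot/data" example/scad_tot:latest bash
```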

Download Dataset

There are a few different versions of the dataset, depending on your needs. The current DNNs we have were built using All_Features_ReactionTime.csv.

Primary Dataset Files

  • Raw Data - Raw data from individual simulator runs (separate text file per run)
  • Cleaned Data - Cleaned data from individual simulator runs (separate csv file per run)
  • AllData.csv - Cleaned data combined into a single file (7GB)
  • cleaned_AllData.csv - Cleaned up version of AllData.csv (3GB)
  • All_Features_ReactionTime.csv - cleaned_AllData.csv cleaned further with "ReactionTime" and engineered features (2GB)
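Since these files run to multiple gigabytes, it can help to sanity-check a download before feeding it to anything. A quick look at the header and row count, using the All_Features_ReactionTime.csv file named above, might look like:

```shell
# Inspect a large CSV without loading it fully into memory.
# List the first few column names from the header row:
head -n 1 All_Features_ReactionTime.csv | tr ',' '\n' | head
# Count rows (includes the header line):
wc -l All_Features_ReactionTime.csv
```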

Other dataset files

These are the pre-processed datasets that were used to build, train, and verify the latest version of the TOT model. The training and testing data were used to train and test the model; the verification data consists of the combined training and testing data with incorrectly predicted rows filtered out.

Dataset Info

Running Verification

Robustness

  • To run in a Jupyter notebook, see ./verification/robustness.ipynb
  • To run from command line, see ./scripts/robustness.sh and ./scripts/robustness_asym.sh
    • View CLI options with ./verification/robustness.py --help
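From the repository root, a command-line run follows the pattern below. Only `--help` and the wrapper scripts are confirmed by this README; any further flags are script-specific, so they are not shown.

```shell
# Inspect the available robustness options, then run the wrapper script.
python3 ./verification/robustness.py --help
./scripts/robustness.sh
```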

Sensitivity

  • To run in a Jupyter notebook, see ./verification/sensitivity.ipynb
  • To run from command line, see ./scripts/sensitivity.sh and ./scripts/sensitivity_asym.sh
    • View CLI options with ./verification/sensitivity.py --help

Generating & Verifying Regions

  • To run in a Jupyter notebook, see ./verification/clustering.ipynb