TabularBench: Adversarial robustness benchmark for tabular data
Leaderboard: https://serval-uni-lu.github.io/tabularbench/
Documentation: https://serval-uni-lu.github.io/tabularbench/doc
Research papers:
- Benchmark: TabularBench: Benchmarking Adversarial Robustness for Tabular Deep Learning in Real-world Use-cases
- CAA attack: Constrained Adaptive Attack: Effective Adversarial Attack Against Deep Neural Networks for Tabular Data
- CAPGD attack: Towards Adaptive Attacks on Constrained Tabular Machine Learning
- MOEVA attack: A Unified Framework for Adversarial Attack and Defense in Constrained Feature Space
How to cite:
Would you like to reference the CAA attack?
Then consider citing our paper, to appear in NeurIPS 2024 (spotlight):
@misc{simonetto2024caa,
  title={Constrained Adaptive Attack: Effective Adversarial Attack Against Deep Neural Networks for Tabular Data},
  author={Thibault Simonetto and Salah Ghamizi and Maxime Cordy},
  booktitle={To appear in Advances in Neural Information Processing Systems},
  year={2024},
  url={https://arxiv.org/abs/2406.00775},
}
Would you like to reference the benchmark, the leaderboard or the model zoo?
Then consider citing our paper, to appear in NeurIPS 2024 Datasets and Benchmarks:
@misc{simonetto2024tabularbench,
  title={TabularBench: Benchmarking Adversarial Robustness for Tabular Deep Learning in Real-world Use-cases},
  author={Thibault Simonetto and Salah Ghamizi and Maxime Cordy},
  booktitle={To appear in Advances in Neural Information Processing Systems},
  year={2024},
  url={https://arxiv.org/abs/2408.07579},
}
To install with Docker:
- Clone the repository
- Build the Docker image:
  ./tasks/docker_build.sh
- Run the Docker container:
  ./tasks/run_benchmark.sh

Note: The ./tasks/run_benchmark.sh script mounts the current directory to the /workspace directory in the Docker container. This allows you to edit the code on your host machine and run it in the Docker container without rebuilding.
We recommend using Python 3.8.10.
Alternatively, install the package from PyPI:
  pip install tabularbench

Or install from source with Pyenv and Poetry:
- Clone the repository
- Create a virtual environment using Pyenv with Python 3.8.10.
- Install the dependencies using Poetry:
  poetry install

Or install from source with Conda and Pip:
- Clone the repository
- Create a virtual environment using Conda with Python 3.8.10:
  conda create -n tabularbench python=3.8.10
- Activate the conda environment:
  conda activate tabularbench
- Install the dependencies using Pip:
  pip install -r requirements.txt
You can run the benchmark with the following command:
  python -m tasks.run_benchmark
or with Docker:
  docker_run_benchmark

You can also use the API to run the benchmark. See tasks/run_benchmark.py for an example.
# Import path assumed from the package layout; see tasks/run_benchmark.py for the exact import.
from tabularbench.benchmark.benchmark import benchmark

clean_acc, robust_acc = benchmark(
    dataset="URL",
    model="STG_Default",
    distance="L2",
    constraints=True,
)
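To evaluate several settings in one go, a minimal sketch along these lines should work; the extra model names below are illustrative placeholders, and the identifiers actually available are listed on the leaderboard and in the model zoo:

# Minimal sketch: sweep a few pretrained models on one dataset.
# Model names other than STG_Default are placeholders; check the
# leaderboard / model zoo for the identifiers that really exist.
from tabularbench.benchmark.benchmark import benchmark  # import path assumed, as above

for model_name in ["STG_Default", "TabTr_Default", "VIME_Default"]:
    clean_acc, robust_acc = benchmark(
        dataset="URL",
        model=model_name,
        distance="L2",
        constraints=True,
    )
    print(f"{model_name}: clean={clean_acc:.3f}, robust={robust_acc:.3f}")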
We provide the models and parameters used in the paper. You can retrain the models with the following command:
  python -m tasks.train_model
Edit the tasks/train_model.py file to change the model, dataset, and training method.
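As a rough illustration of the kind of edit meant here (the variable names below are assumptions, not the actual contents of tasks/train_model.py), selecting a different configuration might look like:

# Hypothetical configuration inside tasks/train_model.py; the real
# variable names and accepted values may differ -- check the file itself.
dataset = "URL"               # which dataset to train on
model = "STG_Default"         # which architecture/configuration to train
training_method = "standard"  # e.g. standard vs. adversarial training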
Datasets, pretrained models, and synthetic data are publicly available here. To ensure the code runs correctly, replicate the folder structure of the shared folder locally.
Note: We are transitioning to Hugging Face for data storage. The model data is now available on Hugging Face here.
- Datasets: Datasets are downloaded automatically to data/datasets when used.
- Models (Hugging Face): Models are now downloaded automatically as needed when running the benchmark. Only the model required for a specific setting will be downloaded. Pretrained models remain available in the data/models folder on OneDrive.
- Model parameters: Optimal parameters (from the hyperparameter search) are required to train models and are in data/model_parameters.
- Synthetic data: The synthetic data generated by GANs is available in the data/synthetic folder.
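If you mirror the shared folder locally, the layout the code expects looks roughly like this (the folder names come from the list above; the tree itself is only a sketch):

data/
├── datasets/          (downloaded automatically when used)
├── models/            (pretrained models; OneDrive / Hugging Face)
├── model_parameters/  (optimal hyperparameters for training)
└── synthetic/         (GAN-generated synthetic data)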
For technical reasons, the names of the datasets, models, and training methods differ from those used in the paper. The mapping can be found in docs/naming.md.