Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
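As a quick illustration of ART's evasion workflow, here is a minimal sketch that wraps a scikit-learn model and attacks it with the Fast Gradient Method. The dataset, model choice, and `eps` value are illustrative assumptions, not library defaults:

```python
# Minimal evasion-attack sketch with ART; dataset, model, and eps are illustrative.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

# Scale features into [0, 1] so the classifier's clip_values are meaningful
X, y = load_iris(return_X_y=True)
X = MinMaxScaler().fit_transform(X)
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Wrap a pre-fitted scikit-learn model so ART can compute loss gradients
model = LogisticRegression(max_iter=1000).fit(x_train, y_train)
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

# Craft adversarial examples with the Fast Gradient Method
attack = FastGradientMethod(estimator=classifier, eps=0.2)
x_test_adv = attack.generate(x=x_test)

# Compare accuracy on clean vs. adversarial inputs
clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
adv_acc = np.mean(np.argmax(classifier.predict(x_test_adv), axis=1) == y_test)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```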
AI Fairness 360 (AIF360) - A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
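A minimal sketch of computing dataset-level fairness metrics and applying a preprocessing mitigation with AIF360 follows. The toy data, the `sex` column, and the privileged/unprivileged group definitions are illustrative assumptions:

```python
# Minimal AIF360 sketch; the toy data and group definitions are illustrative.
import pandas as pd
from aif360.algorithms.preprocessing import Reweighing
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: 'sex' is the protected attribute (1 = privileged), 'label' the favorable outcome
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.4, 0.7, 0.6, 0.3, 0.2, 0.5],
    "label": [1, 1, 0, 1, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)
# Statistical parity difference: P(favorable | unprivileged) - P(favorable | privileged)
print("statistical parity difference:", metric.statistical_parity_difference())
print("disparate impact:", metric.disparate_impact())

# Reweighing assigns instance weights that equalize favorable outcomes across groups
rw = Reweighing(unprivileged_groups=[{"sex": 0}], privileged_groups=[{"sex": 1}])
dataset_rw = rw.fit_transform(dataset)
print("reweighed instance weights:", dataset_rw.instance_weights)
```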
AI Explainability 360 (AIX360) - Interpretability and explainability of data and machine learning models.
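Since this entry names only the goal, here is a generic interpretability sketch using scikit-learn's permutation importance rather than AIX360's own explainers (which include techniques such as contrastive and prototype-based methods). The dataset and model are illustrative:

```python
# Generic interpretability sketch via permutation importance (not AIX360's API).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
x_train, x_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(x_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy:
# larger drops indicate features the model relies on more heavily
result = permutation_importance(model, x_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```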
Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty in machine learning model predictions.
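The entry describes the goal rather than the interface, so here is a generic uncertainty-quantification sketch using a bootstrap ensemble, a standard baseline; it is illustrative only and not UQ360's own API:

```python
# Generic uncertainty sketch via a bootstrap ensemble (not UQ360's API).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
preds = []
for _ in range(50):
    # Resample the training set with replacement and fit one ensemble member
    idx = rng.integers(0, len(x_train), size=len(x_train))
    model = DecisionTreeRegressor().fit(x_train[idx], y_train[idx])
    preds.append(model.predict(x_test))
preds = np.stack(preds)

# Mean prediction plus a spread that can be read as per-point uncertainty
mean, std = preds.mean(axis=0), preds.std(axis=0)
print("first 3 predictions:", [f"{m:.1f} ± {s:.1f}" for m, s in zip(mean[:3], std[:3])])
```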
PaddleDTX - Paddle with Decentralized Trust, based on Xuperchain.
Athena: A Framework for Defending Machine Learning Systems Against Adversarial Attacks
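The description does not spell out Athena's mechanism; as a generic illustration of an ensemble-style defense, here is a majority-vote sketch over models trained on differently transformed inputs. The transformations and models are illustrative assumptions, not Athena's actual implementation:

```python
# Generic ensemble-defense sketch: majority vote over classifiers trained on
# differently transformed inputs (illustrative; not Athena's actual code).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each ensemble member sees the input under a different transformation
transforms = [
    lambda x: x,                        # identity
    lambda x: np.clip(x + 1.0, 0, 16),  # brightness shift
    lambda x: np.round(x / 4) * 4,      # coarse quantization
]
members = [LogisticRegression(max_iter=2000).fit(t(x_train), y_train) for t in transforms]

# At inference time, each member votes on its own transformed view of the input
votes = np.stack([m.predict(t(x_test)) for m, t in zip(members, transforms)])
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("ensemble accuracy:", np.mean(majority == y_test))
```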
Hands-on workshop material for evaluating the performance, fairness, and robustness of models.