
Below you will find an annotated bibliography of important papers and background reading to help you get started in bias mitigation, as well as datasets you can use to test out our tools. Please send suggestions or questions to [email protected].
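Several of the papers below (e.g. Barocas & Selbst; Feldman et al.) center on *disparate impact* as a bias measure. As a minimal, illustrative sketch (not any particular tool's API), the disparate-impact ratio compares favorable-outcome rates between an unprivileged and a privileged group; the function and toy data here are hypothetical:

```python
import numpy as np

def disparate_impact_ratio(y_pred, group):
    """Ratio of favorable-outcome rates (unprivileged / privileged).
    Values below 0.8 are often flagged under the "four-fifths rule"."""
    y_pred = np.asarray(y_pred, dtype=float)
    group = np.asarray(group)
    rate_unpriv = y_pred[group == 0].mean()  # favorable rate, unprivileged group
    rate_priv = y_pred[group == 1].mean()    # favorable rate, privileged group
    return rate_unpriv / rate_priv

# Toy predictions: 1 = favorable outcome; group label 1 is privileged.
preds  = [1, 0, 1, 1, 0, 1, 1, 1]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(disparate_impact_ratio(preds, groups))  # both rates are 0.75, so 1.0
```

The toy data is constructed so both groups receive favorable outcomes at the same rate; a ratio well below 1.0 would indicate the classifier favors the privileged group.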

Algorithmic Bias in AI: Advanced Background

Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna Wallach. A reductions approach to fair classification. arXiv preprint arXiv:1803.02453, 2018.

classic introduction to classification problems

Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. Machine bias. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, May 2016.

less-technical introduction to implications in predictive policing

Sivaraman Balakrishnan, Srivatsan Narayanan, Alessandro Rinaldo, Aarti Singh, and Larry Wasserman. Cluster trees on manifolds. In Advances in Neural Information Processing Systems, pages 2679–2687, 2013.

Secondary techniques in survey-based categorical data classification

Solon Barocas and Andrew D Selbst. Big data’s disparate impact. Cal. L. Rev., 104:671, 2016.

Alex Beutel, Jilin Chen, Zhe Zhao, and Ed H Chi. Data decisions and theoretical implications when adversarially learning fair representations. arXiv preprint arXiv:1707.00075, 2017.

Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems, pages 4349–4357, 2016.
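The Bolukbasi et al. paper above removes a learned bias direction from word vectors. As a hedged sketch of just the "neutralize" projection step (the vectors and bias axis here are toy values, not real embeddings):

```python
import numpy as np

def neutralize(word_vec, bias_direction):
    """Remove the component of a word vector along a bias direction,
    i.e. project onto the direction and subtract that component."""
    b = bias_direction / np.linalg.norm(bias_direction)
    return word_vec - np.dot(word_vec, b) * b

# Toy 3-d embedding; the gender axis here is a hypothetical unit vector.
gender_axis = np.array([1.0, 0.0, 0.0])
programmer = np.array([0.4, 0.2, 0.9])
debiased = neutralize(programmer, gender_axis)
print(np.dot(debiased, gender_axis))  # 0.0 — no component left along the axis
```

The full method also equalizes pairs of explicitly gendered words (e.g. he/she) around the neutralized subspace; this sketch covers only the projection.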

Zdravko I Botev and Dirk P Kroese. The generalized cross entropy method, with applications to probability density estimation. Methodology and Computing in Applied Probability, 13(1):1–27, 2011.

Toon Calders, Faisal Kamiran, and Mykola Pechenizkiy. Building classifiers with independency constraints. In Data mining workshops, 2009. ICDMW’09. IEEE international conference on, pages 13–18. IEEE, 2009.

Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big data, 5(2):153–163, 2017.

Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 797–806. ACM, 2017.

Andrew Cotter, Maya Gupta, Heinrich Jiang, Nathan Srebro, Karthik Sridharan, Serena Wang, Blake Woodworth, and Seungil You. Training well-generalizing classifiers for fairness metrics and other data-dependent constraints. arXiv preprint arXiv:1807.00028, 2018a.

Andrew Cotter, Heinrich Jiang, and Karthik Sridharan. Two-player games for efficient non-convex constrained optimization. arXiv preprint arXiv:1804.06500, 2018b.

Andrew Cotter, Heinrich Jiang, Serena Wang, Taman Narayan, Maya Gupta, Seungil You, and Karthik Sridharan. Optimization with non-differentiable constraints with applications to fairness, recall, churn, and other goals. arXiv preprint arXiv:1809.04198, 2018c.

Neil A Doherty, Anastasia V Kartasheva, and Richard D Phillips. Information effect of entry into credit ratings market: The case of insurers’ ratings. Journal of Financial Economics, 106(2):308–330, 2012.

Michele Donini, Luca Oneto, Shai Ben-David, John Shawe-Taylor, and Massimiliano Pontil. Empirical risk minimization under fairness constraints. arXiv preprint arXiv:1802.08626, 2018.

Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference, pages 214–226. ACM, 2012.

Elad Eban, Mariano Schain, Alan Mackey, Ariel Gordon, Ryan Rifkin, and Gal Elidan. Scalable learning of nondecomposable objectives. In Artificial Intelligence and Statistics, pages 832–840, 2017.

Harrison Edwards and Amos Storkey. Censoring representations with an adversary. arXiv preprint arXiv:1511.05897, 2015.

Michael Feldman. Computational fairness: Preventing machine-learned discrimination. 2015.

Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 259–268. ACM, 2015.

Benjamin Fish, Jeremy Kun, and Ádám D Lelkes. Fair boosting: a case study. In Workshop on Fairness, Accountability, and Transparency in Machine Learning. Citeseer, 2015.

Michael P Friedlander and Maya R Gupta. On minimizing distortion and relative entropy. IEEE Transactions on Information Theory, 52(1):238–245, 2006.

Data Bias in Statistics and Policy

Andreas, J.: Measuring compositionality in representation learning. In: ICLR (2019)

Beery, S., Van Horn, G., Perona, P.: Recognition in terra incognita. In: ECCV (2018)

Borghi, G., Pini, S., Grazioli, F., Vezzani, R., Cucchiara, R.: Face verification from depth using privileged information. In: BMVC (2018)

Buciluă, C., Caruana, R., Niculescu-Mizil, A.: Model compression. In: ACM SIGKDD (2006)

Cai, S., Zuo, W., Zhang, L.: Higher-order integration of hierarchical convolutional activations for fine-grained visual categorization. In: ICCV (2017)

Chen, Y., Jin, X., Feng, J., Yan, S.: Training group orthogonal neural networks with privileged information. In: IJCAI. pp. 1532–1538. AAAI Press (2017)

Cui, Y., Zhou, F., Wang, J., Liu, X., Lin, Y., Belongie, S.: Kernel pooling for convolutional neural networks. In: CVPR (2017)

Ding, Y., Zhou, Y., Zhu, Y., Ye, Q., Jiao, J.: Selective sparse sampling for fine-grained image recognition. In: ICCV. pp. 6599–6608 (2019)

Du, Y., Czarnecki, W.M., Jayakumar, S.M., Pascanu, R., Lakshminarayanan, B.: Adapting auxiliary losses using gradient similarity. arXiv preprint arXiv:1812.02224 (2018)

Gao, Y., Beijbom, O., Zhang, N., Darrell, T.: Compact bilinear pooling. In: CVPR (2016)

He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)

Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 (2015)

Hoffman, J., Gupta, S., Darrell, T.: Learning with side information through modality hallucination. In: CVPR (2016)

Hu, T., Qi, H.: See better before looking closer: Weakly supervised data augmentation network for fine-grained visual classification. arXiv preprint arXiv:1901.09891 (2019)

Jang, Y., Lee, H., Hwang, S.J., Shin, J.: Learning what and where to transfer. arXiv preprint arXiv:1905.05901 (2019)

Kendall, A., Gal, Y., Cipolla, R.: Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In: CVPR (2018)

Kim, J.H., On, K.W., Lim, W., Kim, J., Ha, J.W., Zhang, B.T.: Hadamard product for low-rank bilinear pooling. arXiv preprint arXiv:1610.04325 (2016)

Lake, B.M., Salakhutdinov, R., Tenenbaum, J.B.: Human-level concept learning through probabilistic program induction. Science 350(6266), 1332–1338 (2015)

Lambert, J., Sener, O., Savarese, S.: Deep learning under privileged information using heteroscedastic dropout. In: CVPR (2018)

Lee, K.H., Ros, G., Li, J., Gaidon, A.: SPIGAN: Privileged adversarial learning from simulation. In: ICLR (2019)

Li, K., Wu, Z., Peng, K., Ernst, J., Fu, Y.: Tell me where to look: Guided attention inference network. In: CVPR (2018)

Li, P., Xie, J., Wang, Q., Gao, Z.: Towards faster training of global covariance pooling networks by iterative matrix square root normalization. In: CVPR (2018)

Li, P., Xie, J., Wang, Q., Zuo, W.: Is second-order information helpful for large-scale visual recognition? In: ICCV (2017)

nlp-datasets

Public domain datasets with text data for use in Natural Language Processing (NLP).

Datasets (English, multilang)

Apache Software Foundation Public Mail Archives: all publicly available Apache Software Foundation mail archives as of July 11, 2011 (200 GB)

Blog Authorship Corpus: consists of the collected posts of 19,320 bloggers gathered from blogger.com in August 2004. 681,288 posts and over 140 million words. (298 MB)

Amazon Fine Food Reviews [Kaggle]: consists of 568,454 food reviews Amazon users left up to October 2012. Paper. (240 MB)

Amazon Reviews: Stanford collection of 35 million amazon reviews. (11 GB)

ArXiv: all papers on arXiv as full text (270 GB) + source files (190 GB).

ASAP Automated Essay Scoring [Kaggle]: For this competition, there are eight essay sets. Each of the sets of essays was generated from a single prompt. Selected essays range from an average length of 150 to 550 words per response. Some of the essays are dependent upon source information and others are not. All responses were written by students ranging in grade levels from Grade 7 to Grade 10. All essays were hand graded and were double-scored. (100 MB)

ASAP Short Answer Scoring [Kaggle]: Each of the data sets was generated from a single prompt. Selected responses have an average length of 50 words per response. Some of the essays are dependent upon source information and others are not. All responses were written by students primarily in Grade 10. All responses were hand graded and were double-scored. (35 MB)

Classification of political social media: Social media messages from politicians classified by content. (4 MB)

CLiPS Stylometry Investigation (CSI) Corpus: a yearly expanded corpus of student texts in two genres: essays and reviews. The purpose of this corpus lies primarily in stylometric research, but other applications are possible. (on request)

ClueWeb09 FACC: ClueWeb09 with Freebase annotations (72 GB)

ClueWeb11 FACC: ClueWeb11 with Freebase annotations (92 GB)