Hello,
I'm from the MAMMOth project, which works on multi-attribute multi-modal bias mitigation. In the course of that project we are developing FairBench: a library that provides building blocks which can be combined to construct and run a wide range of fairness metrics (while also tracking the underlying computations), especially for multiple multi-value sensitive attributes and intersectionality considerations. We published a paper describing an organized process of building bias/fairness metrics here: https://arxiv.org/pdf/2405.19022
Last year, we had a call with @hoffmansc where we briefly mentioned the possibility of integrating parts of FairBench into AIF360 to supplement the latter's bias/fairness assessment capabilities. I am opening this issue as a follow-up, now that our work has reached an acceptable level of maturity.
As far as I can tell, AIF360 already has the concept of ratio and difference comparisons to serve as building blocks for many of its computations, but I believe it would benefit from our principled fairness exploration capabilities, which consider more building blocks and, importantly, are extensible to future building blocks instead of needing to hard-code their usage.
Rough proposal
I am proposing to create customizable AIF360 metrics (perhaps starting with one that extends BinaryLabelDatasetMetric, though FairBench also covers ranking and regression, with more tasks planned for the future). The new metrics would depend on the FairBench library. (If you want to prevent installation bloat, FairBench would not necessarily need to be a main dependency of AIF360, but only installed by users if they actually want to call these new metrics.)
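To illustrate the optional-dependency point, here is a minimal sketch of the kind of lazy import guard that could keep FairBench out of AIF360's required dependencies. This is an assumed pattern, not existing AIF360 code, and the extras name is hypothetical:

```python
# Sketch: keep FairBench as an optional dependency and only fail when the
# FairBench-backed metrics are actually used (hypothetical integration code).
try:
    import fairbench  # installed separately, e.g. via `pip install fairbench`
except ImportError:
    fairbench = None


def _require_fairbench():
    """Return the fairbench module or raise a helpful error if it is missing."""
    if fairbench is None:
        raise ImportError(
            "The FairBench-backed metrics require the optional 'fairbench' "
            "package; install it with `pip install fairbench`."
        )
    return fairbench
```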
I am thinking that the new metrics could be similar to AIF360's ClassificationMetric, with the following differences:
a) __getattr__ would be overloaded so that, when users call a method whose name follows a standardized naming convention, an appropriate fairness measure is generated. There are hundreds of valid combinations. For example, calling metric.intersectional_accuracy_min_ratio(...) could perform an appropriate bias assessment.
b) Outcome values would be fairbench.Explainable objects. These can be used as floats in computations as normal, but also allow backtracking of previous computations through an .explain field. (A rough sketch of both ideas follows below.)
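Here is a minimal, self-contained sketch of idea a). The class name, the registries, and the four-part naming scheme are illustrative assumptions, not FairBench or AIF360 API; the real implementation would dispatch to FairBench building blocks instead of echoing the parsed parts:

```python
# Hypothetical sketch of __getattr__-based metric generation.
class FairBenchMetric:
    """Illustrative only; a real integration would extend an AIF360 metric class."""

    # Placeholder registries; in practice these would map name fragments to
    # FairBench building blocks (analysis mode, base measure, reduction, comparison).
    _MODES = {"intersectional", "groupwise"}
    _MEASURES = {"accuracy", "tpr", "fpr"}
    _REDUCTIONS = {"min", "max", "wmean"}
    _COMPARISONS = {"ratio", "diff"}

    def __getattr__(self, name):
        parts = name.split("_")
        if len(parts) != 4:
            raise AttributeError(name)
        mode, measure, reduction, comparison = parts
        if (mode not in self._MODES or measure not in self._MEASURES
                or reduction not in self._REDUCTIONS
                or comparison not in self._COMPARISONS):
            raise AttributeError(name)

        def computed_metric(*args, **kwargs):
            # The real implementation would assemble and run the corresponding
            # FairBench building blocks and return an explainable value.
            return {"mode": mode, "measure": measure,
                    "reduction": reduction, "comparison": comparison}

        return computed_metric


# Example: the method name is resolved dynamically rather than hard-coded.
metric = FairBenchMetric()
print(metric.intersectional_accuracy_min_ratio())
```

And a minimal sketch of the "explainable float" behavior in idea b). This is not FairBench's actual Explainable class, just an illustration of a value that behaves like a float while retaining the computations that produced it:

```python
# Hypothetical stand-in for an explainable numeric outcome.
class ExplainableValue(float):
    def __new__(cls, value, explain=None):
        obj = super().__new__(cls, value)
        obj.explain = explain or {}  # record of the underlying computations
        return obj


ratio = ExplainableValue(0.8, explain={"min accuracy": 0.72, "max accuracy": 0.90})
print(ratio * 100)    # usable as a plain float in computations
print(ratio.explain)  # while the provenance remains available
```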
Links
Docs: https://fairbench.readthedocs.io/
Github: https://github.com/mever-team/FairBench
I hope this is interesting for your team. I am at your disposal for further discussion.