FairnessEval: a Framework for Evaluating Fairness of Machine Learning Models / Baraldi, A.; Brucato, M.; Dudik, M.; Guerra, F.; Interlandi, M. - 28:3 (2025), pp. 1154-1157. (28th International Conference on Extending Database Technology, EDBT 2025, Spain, 2025) [10.48786/edbt.2025.112].

FairnessEval: a Framework for Evaluating Fairness of Machine Learning Models

Guerra F.; Interlandi M.
2025

Abstract

Automated decision-making systems can potentially introduce biases, raising ethical concerns. This has led to the development of numerous bias mitigation techniques. Choosing a fairness-aware model often requires trial and error, as it is difficult to predict whether a mitigation measure will meet user requirements or how it will affect metrics like accuracy and runtime. Existing fairness toolkits lack a comprehensive benchmarking framework. To bridge this gap, we present FairnessEval, a framework specifically designed to evaluate fairness in Machine Learning models. FairnessEval streamlines dataset preparation, fairness evaluation, and result presentation, while also offering customization options. In this demonstration, we highlight the functionality of FairnessEval in the selection and validation of fairness-aware models. We compare various approaches and simulate deployment scenarios to showcase FairnessEval's effectiveness.
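To make the kind of evaluation described above concrete, the following is a minimal sketch of measuring both accuracy and a group-fairness metric for a trained classifier. It deliberately uses the open-source scikit-learn and Fairlearn libraries on synthetic data, since FairnessEval's own API is not described here; the sensitive attribute "sex" and all variable names are illustrative assumptions, not part of the paper.

    # Illustrative sketch only; this is NOT FairnessEval's API.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from fairlearn.metrics import (MetricFrame, selection_rate,
                                   demographic_parity_difference)

    # Synthetic data with an assumed binary sensitive attribute "sex".
    rng = np.random.default_rng(0)
    n = 2000
    X = rng.normal(size=(n, 5))
    sex = rng.integers(0, 2, size=n)
    y = (X[:, 0] + 0.5 * sex + rng.normal(scale=0.5, size=n) > 0).astype(int)

    X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
        X, y, sex, test_size=0.3, random_state=0)

    model = LogisticRegression().fit(X_tr, y_tr)
    y_pred = model.predict(X_te)

    # Accuracy and selection rate broken down per group, plus an
    # aggregate fairness metric (demographic parity difference).
    frame = MetricFrame(
        metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
        y_true=y_te, y_pred=y_pred, sensitive_features=s_te)
    print(frame.by_group)
    print("demographic parity difference:",
          demographic_parity_difference(y_te, y_pred, sensitive_features=s_te))

A benchmarking framework such as the one described in the abstract would wrap measurements of this kind with dataset preparation, repeated runs across mitigation techniques, and presentation of the resulting accuracy/fairness/runtime trade-offs.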
Year: 2025
Conference: 28th International Conference on Extending Database Technology (EDBT 2025), Spain, 2025
Volume: 28
Pages: 1154-1157
Authors: Baraldi, A.; Brucato, M.; Dudik, M.; Guerra, F.; Interlandi, M.
Files associated with this record:
There are no files associated with this record.


Use this identifier to cite or link to this record: https://hdl.handle.net/11380/1381463
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: not available