A statistical approach to detect disparity prone features in a group fairness setting
Guilherme Dean Pelegrina, Miguel Couceiro, Leonardo Tomazeli Duarte
Article
English
Acknowledgements: Funding was provided by Fundação de Amparo à Pesquisa do Estado de São Paulo (Grant nos. 2020/09838-0, 2020/10572-5, 2021/11086-0) and TAILOR (Grant no. 952215).
Abstract: The use of machine learning models in decision support systems with high societal impact has raised concerns about unfair (disparate) results for different groups of people. When evaluating such unfair decisions, one generally relies on predefined groups determined by a set of features considered sensitive. However, this approach is subjective and guarantees neither that these features are the only ones that should be considered sensitive nor that they entail unfair (disparate) outcomes. In this paper, we propose a preprocessing step that automatically recognizes disparity-prone features and does not require a trained model to verify unfair results. Our proposal is based on the Hilbert-Schmidt independence criterion, which measures the statistical dependence between variable distributions. We hypothesize that if the dependence between the label vector and a candidate feature is high, then the information provided by this feature will entail disparate performance measures between groups. Our empirical results attest to our hypothesis and show that several features considered sensitive in the literature do not necessarily entail disparate (unfair) results.
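The dependence check described in the abstract can be illustrated with a minimal sketch of the biased empirical HSIC estimator. This is not the authors' implementation: the Gaussian kernel, the bandwidth `sigma`, and the function names `gaussian_kernel` and `hsic` are assumptions made for illustration only.

```python
import numpy as np

def gaussian_kernel(x, sigma=1.0):
    # Gaussian (RBF) kernel matrix over the rows of x (shape n x d).
    sq = np.sum(x**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * x @ x.T
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic(x, y, sigma=1.0):
    # Biased empirical HSIC estimate: (1/n^2) * trace(K H L H),
    # where H = I - (1/n) 1 1^T is the centering matrix.
    n = len(x)
    K = gaussian_kernel(np.asarray(x, dtype=float).reshape(n, -1), sigma)
    L = gaussian_kernel(np.asarray(y, dtype=float).reshape(n, -1), sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n**2

# Toy usage: a feature strongly dependent on the labels scores a much
# higher HSIC value than an independent one.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200).astype(float)
dependent_feature = labels + 0.05 * rng.normal(size=200)
independent_feature = rng.normal(size=200)
print(hsic(dependent_feature, labels), hsic(independent_feature, labels))
```

In the paper's setting, a candidate feature whose HSIC with the label vector is high would be flagged as disparity-prone before any model is trained.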
Open access
Source: AI and Ethics