Exploring Bias in Sclera Segmentation Models: A Group Evaluation Approach

Authors

Matej Vitek, Univerza v Ljubljani
Abhijit Das, Birla Institute of Technology and Science, Pilani
Diego Rafael Lucio, Universidade Federal do Parana
Luiz Antonio Zanlorensi, Universidade Federal do Parana
David Menotti, Universidade Federal do Parana
Jalil Nourmohammadi Khiarak, Politechnika Warszawska
Mohsen Akbari Shahpar, University of Tabriz
Meysam Asgari-Chenaghlu, University of Tabriz
Farhang Jaryani, Arak University
Juan E. Tapia, Hochschule Darmstadt
Andres Valenzuela, Toc Biometrics (TOC)
Caiyong Wang, Beijing University of Civil Engineering and Architecture
Yunlong Wang, Institute of Automation Chinese Academy of Sciences
Zhaofeng He, Beijing University of Posts and Telecommunications
Zhenan Sun, Institute of Automation Chinese Academy of Sciences
Fadi Boutros, Fraunhofer Institute for Computer Graphics Research IGD
Naser Damer, Fraunhofer Institute for Computer Graphics Research IGD
Jonas Henry Grebe, Beijing University of Posts and Telecommunications
Arjan Kuijper, Fraunhofer Institute for Computer Graphics Research IGD
Kiran Raja, Norges Teknisk-Naturvitenskapelige Universitet
Gourav Gupta, Norges Teknisk-Naturvitenskapelige Universitet
Georgios Zampoukis, Democritus University of Thrace
Lazaros Tsochatzidis, Democritus University of Thrace
Ioannis Pratikakis, Democritus University of Thrace
S. V. Aruna Kumar, Malnad College of Engineering
B. S. Harish, JSS Science and Technology University
Umapada Pal, Indian Statistical Institute, Kolkata
Peter Peer, Univerza v Ljubljani
Vitomir Struc, Univerza v Ljubljani

Article Type

Research Article

Publication Title

IEEE Transactions on Information Forensics and Security

Abstract

Bias and fairness of biometric algorithms have been key topics of research in recent years, mainly due to the societal, legal and ethical implications of potentially unfair decisions made by automated decision-making models. A considerable amount of work has been done on this topic across different biometric modalities, aimed at better understanding the main sources of algorithmic bias or at devising mitigation measures. In this work, we contribute to these efforts and present the first study investigating bias and fairness of sclera segmentation models. Although sclera segmentation techniques represent a key component of sclera-based biometric systems with a considerable impact on the overall recognition performance, the presence of different types of biases in sclera segmentation methods is still underexplored. To address this limitation, we describe the results of a group evaluation effort (involving seven research groups), organized to explore the performance of recent sclera segmentation models within a common experimental framework and to study performance differences (and bias) originating from various demographic and environmental factors. Using five diverse datasets, we analyze seven independently developed sclera segmentation models in different experimental configurations. The results of our experiments suggest that there are significant differences in the overall segmentation performance across the seven models and that, among the considered factors, ethnicity appears to be the biggest cause of bias. Additionally, we observe that training with representative and balanced data does not necessarily lead to less biased results. Finally, we find that in general there appears to be a negative correlation between the amount of bias observed (due to eye color, ethnicity and acquisition device) and the overall segmentation performance, suggesting that advances in the field of semantic segmentation may also help with mitigating bias.
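The abstract outlines a group-wise evaluation protocol: segmentation quality is scored per image, scores are aggregated by demographic or environmental group (e.g., ethnicity, eye color, acquisition device), and bias is read off as the performance spread between groups. The exact metrics used in the paper are not stated in the abstract, so the Python sketch below is only illustrative; iou and group_bias are hypothetical helper names, and the max-minus-min spread is just one possible bias measure.

    import numpy as np

    def iou(pred, gt):
        # Intersection-over-Union between two binary sclera masks.
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        return float(inter) / union if union > 0 else 1.0

    def group_bias(scores, groups):
        # Mean score per group, plus the spread between the best- and
        # worst-performing group as a simple (illustrative) bias measure.
        means = {g: float(np.mean([s for s, lbl in zip(scores, groups) if lbl == g]))
                 for g in set(groups)}
        return means, max(means.values()) - min(means.values())

    # Usage (hypothetical data): one predicted/ground-truth mask pair per
    # image, and a per-image group label such as the subject's ethnicity.
    # scores = [iou(p, g) for p, g in zip(pred_masks, gt_masks)]
    # means, bias = group_bias(scores, ethnicity_labels)

Under this sketch, a model with a high mean score but a large spread across ethnicity groups would count as accurate yet biased, which is the kind of trade-off the study examines.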

First Page

190

Last Page

205

DOI

https://doi.org/10.1109/TIFS.2022.3216468

Publication Date

1-1-2023

Comments

Open Access, Hybrid Gold
