DOST—Domain Obedient Self-supervision for Trustworthy Multi Label Classification with Noisy Labels

Document Type

Conference Article

Publication Title

Studies in Computational Intelligence

Abstract

Incorporating the vast expertise of clinical practitioners into deep learning systems can massively improve the trustworthiness and performance of these systems. Deep learning systems rely on enormous amounts of data, often accompanied by annotation errors, and do not natively abide by well-known medical principles. In diagnostic scenarios, lack of adherence to domain constraints makes systems unreliable, and this problem is only exacerbated by annotation errors. This area has been relatively unexplored in the context of “multi-label classification” (MLC) tasks, which feature more complex noise. This paper studies the effect of label noise on domain rule violation incidents and incorporates clinical rules into the learning algorithm to improve trustworthiness. We propose the DOST paradigm and experimentally show that our approach not only makes deep learning models more aligned with domain rules, but also improves learning performance on key metrics and minimizes the effect of annotation noise. This novel approach uses domain guidance to detect offending annotations and deter rule-violating predictions in a self-supervised manner, making it more “data efficient” and domain compliant.
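The abstract does not spell out how domain rules enter the loss, so the following is only a minimal, hypothetical sketch of the general idea: given implication-style clinical rules of the form “label a implies label b” (a common constraint shape in medical MLC), one can add a differentiable penalty that fires whenever a prediction asserts the antecedent label more strongly than the consequent. The function name, rule format, and penalty form are illustrative assumptions, not the paper's actual method.

```python
def rule_violation_penalty(probs, implications):
    """Average penalty for violating 'label a implies label b' domain rules.

    probs: list of per-example predicted probability lists (one float per label).
    implications: list of (a, b) label-index pairs meaning label a implies label b.

    NOTE: this is an illustrative sketch, not the DOST algorithm itself.
    """
    total = 0.0
    for a, b in implications:
        # A violation occurs when p(a) > p(b): the model asserts 'a'
        # more strongly than the label 'a' is supposed to entail.
        total += sum(max(0.0, p[a] - p[b]) for p in probs) / len(probs)
    return total

# Hypothetical 4-label example with two implication rules.
probs = [[0.9, 0.2, 0.1, 0.8],
         [0.3, 0.7, 0.6, 0.4]]
rules = [(0, 1), (2, 3)]
extra_loss = rule_violation_penalty(probs, rules)
```

Such a penalty would typically be added, with a weight, to the standard multi-label (e.g. binary cross-entropy) loss; a self-supervised variant could also use large per-example violations to flag suspect annotations.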

First Page

117

Last Page

127

DOI

10.1007/978-3-031-63592-2_10

Publication Date

1-1-2024
