Deep Classification of Mammographic Breast Density: DCBARNet

Document Type

Conference Article

Publication Title

International Conference Image and Vision Computing New Zealand

Abstract

Breast density plays a key part in the early prediction of breast cancer development. Experienced radiologists evaluate breast density from mammographic images for a particular individual based on the Breast Imaging Reporting and Data System (BI-RADS) categories. While supervised learning algorithms are increasingly finding application for this task, achieving satisfactory accuracy and eliminating the need for human intervention remain two of the most challenging roadblocks. In this study, we introduce a novel attention-driven residual learning based method which adaptively attends to selective features in both the spatial and channel dimensions. The other feature of our approach is the use of generalized Global Average Pooling layers (utilizing the 2D Discrete Cosine Transform) to further enhance classifier performance. The proposed method was validated on two publicly available datasets, namely CBIS-DDSM and INbreast. Classification accuracies of 87.05% and 79.00% were obtained for the two datasets, respectively, with five-fold cross-validation. Our approach is fully automatic, and the classification performance obtained is an improvement over state-of-the-art algorithms. We also designed a dual-view network architecture explicitly for the purpose of leveraging the complementary information present in the two different views (CC and MLO) of a single breast. It was observed that the dual-view approach produces slightly better results than the single-view approach.
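The abstract's generalized Global Average Pooling via the 2D Discrete Cosine Transform can be illustrated with a minimal NumPy sketch. This is an assumption about the formulation (the paper itself is not reproduced here): each channel of a feature map is projected onto selected 2D DCT-II basis functions, and the lowest-frequency pair (0, 0) recovers standard GAP as a special case. The function names `dct2d_basis` and `freq_pool` are hypothetical, chosen for illustration.

```python
import numpy as np

def dct2d_basis(u, v, height, width):
    """2D DCT-II basis function for the frequency pair (u, v)."""
    i = np.arange(height).reshape(-1, 1)
    j = np.arange(width).reshape(1, -1)
    return (np.cos(np.pi * (i + 0.5) * u / height)
            * np.cos(np.pi * (j + 0.5) * v / width))

def freq_pool(feature_map, freq_pairs):
    """Generalized global pooling: project each channel onto the
    selected DCT bases and average the responses.

    feature_map: array of shape (C, H, W)
    freq_pairs:  list of (u, v) frequency indices
    returns:     per-channel descriptor of shape (C,)
    """
    c, h, w = feature_map.shape
    out = np.zeros(c)
    for u, v in freq_pairs:
        basis = dct2d_basis(u, v, h, w)          # (H, W) projection weights
        out += (feature_map * basis).sum(axis=(1, 2))
    return out / (len(freq_pairs) * h * w)

# With only the (0, 0) frequency, the basis is all ones, so the
# pooled descriptor coincides with ordinary global average pooling.
x = np.random.default_rng(0).normal(size=(4, 8, 8))
assert np.allclose(freq_pool(x, [(0, 0)]), x.mean(axis=(1, 2)))
```

Including additional frequency pairs lets the pooled descriptor capture spatial structure that a plain channel mean discards, which is the motivation suggested by the abstract's "generalized" qualifier.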

DOI

10.1109/IVCNZ61134.2023.10344251

Publication Date

1-1-2023
