Efficient Global-Context driven Volumetric Segmentation of Abdominal Images

Document Type

Conference Article

Publication Title

Proceedings of the 2023 IEEE International Conference on Bioinformatics and Biomedicine (BIBM 2023)

Abstract

Volumetric medical image segmentation is indispensable for accurate diagnosis, treatment planning, and image-guided interventions. It entails delineating structures within 3D medical images, a task that poses several challenges, including uncertain or intersecting boundaries, discrepancies in volume shapes and dimensions, variations among patients, and the need for considerable computational resources. We present the Volumetric global-Context integrated Attention Network (VoCANet) for segmenting anatomical structures from multi-dimensional medical images. Global contextual information from different levels of the low-projection path is used to efficiently capture features corresponding to anatomical structures of interest, which often exhibit diverse shapes and sizes. An attention module integrated into the network widens the range of activation responses to prioritize pertinent features, thereby conserving computational resources. The high-projection path increases the dimensionality of the feature volumes obtained from the low-projection path to produce the final segmentation output, and incorporates multistage supervision and densely connected convolution kernels to enhance segmentation performance. The proposed deep network is applied to multi-organ segmentation and adrenocortical carcinoma segmentation of abdominal images from the Synapse and Adrenal-ACC-Ki67-Seg datasets, respectively. Experimental results demonstrate the superiority of our model over other state-of-the-art frameworks in segmenting multi-dimensional medical images.
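The abstract does not give implementation details, so the following is a minimal, illustrative PyTorch sketch of the ideas it describes: a 3D encoder-decoder with a global-context block on encoder features, an attention gate on the skip connection, and a deep-supervision head. Every module name and the overall wiring here are assumptions for illustration, not the actual VoCANet architecture.

```python
# Illustrative sketch only; not the authors' VoCANet implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalContext3D(nn.Module):
    """Squeeze-and-excitation-style global context over a 3D feature volume
    (an assumed stand-in for the abstract's global-context mechanism)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
    def forward(self, x):
        # Pool over D, H, W to a per-channel descriptor, then rescale channels.
        w = self.fc(x.mean(dim=(2, 3, 4)))
        return x * w[:, :, None, None, None]

class AttentionGate3D(nn.Module):
    """Gates a skip connection with the coarser decoder feature, suppressing
    activations outside the structures of interest."""
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv3d(skip_ch, inter_ch, 1)
        self.phi = nn.Conv3d(gate_ch, inter_ch, 1)
        self.psi = nn.Conv3d(inter_ch, 1, 1)
    def forward(self, skip, gate):
        gate = F.interpolate(gate, size=skip.shape[2:],
                             mode="trilinear", align_corners=False)
        a = torch.sigmoid(self.psi(F.relu(self.theta(skip) + self.phi(gate))))
        return skip * a

class TinySegNet3D(nn.Module):
    """Two-level encoder-decoder with one deep-supervision (auxiliary) head."""
    def __init__(self, in_ch=1, n_classes=9, base=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv3d(in_ch, base, 3, padding=1),
                                  nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(nn.Conv3d(base, base * 2, 3, stride=2, padding=1),
                                  nn.ReLU(inplace=True))
        self.gc = GlobalContext3D(base * 2)
        self.att = AttentionGate3D(base, base * 2, base)
        self.up = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.head = nn.Conv3d(base * 2, n_classes, 1)      # final prediction
        self.aux_head = nn.Conv3d(base * 2, n_classes, 1)  # deep supervision
    def forward(self, x):
        s1 = self.enc1(x)                 # "low-projection" path, level 1
        s2 = self.gc(self.enc2(s1))       # level 2, enriched with global context
        d1 = torch.cat([self.up(s2), self.att(s1, s2)], dim=1)  # "high-projection" path
        logits = self.head(d1)
        aux = F.interpolate(self.aux_head(s2), size=x.shape[2:],
                            mode="trilinear", align_corners=False)
        return logits, aux

# Example forward pass on a toy CT patch.
model = TinySegNet3D()
logits, aux = model(torch.randn(1, 1, 16, 32, 32))
```

Under multistage (deep) supervision of this kind, the auxiliary logits would be trained against the same labels as the final output, e.g. `loss = ce(logits, y) + 0.5 * ce(aux, y)`, so that intermediate decoder stages receive a direct gradient signal; the weighting and the number of supervised stages are assumptions here.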

First Page

1880

Last Page

1885

DOI

10.1109/BIBM58861.2023.10385802

Publication Date

1-1-2023
