Revisiting Modality Imbalance In Multimodal Pedestrian Detection
Document Type
Conference Article
Publication Title
Proceedings - International Conference on Image Processing, ICIP
Abstract
Multimodal learning, particularly for pedestrian detection, has recently received attention due to its ability to function well in several critical autonomous driving scenarios, such as low-light, night-time, and adverse weather conditions. However, in most cases the training distribution largely emphasizes the contribution of one specific input, which biases the network towards that modality. Generalization then becomes a significant problem, since the modality that was non-dominant during training may contribute more during inference. Here, we introduce a novel training setup with a regularizer in the multimodal architecture to resolve this disparity between the modalities. Specifically, our regularizer term makes the feature fusion more robust by treating both feature extractors as equally important during training when learning the multimodal distribution, thereby removing the imbalance problem. Furthermore, our decoupled output streams aid the detection task by mutually sharing spatially sensitive information. Extensive experiments with the proposed method on the KAIST and UTokyo datasets show improvements over the respective state-of-the-art performance.
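The abstract does not specify the exact form of the regularizer, so the following is only a minimal illustrative sketch of the general idea of a modality-balance penalty: discourage one branch's features from dominating the other's. The function name, the choice of mean absolute activation as the "strength" measure, and the symmetric-gap form are all assumptions for illustration, not the paper's method.

```python
import numpy as np

def balance_regularizer(feat_rgb, feat_thermal, eps=1e-8):
    """Hypothetical modality-balance penalty: the symmetric relative gap
    between the mean feature magnitudes of the two modality branches.
    Returns a value in [0, 1); 0 means both branches are equally strong."""
    m_rgb = np.mean(np.abs(feat_rgb))
    m_thermal = np.mean(np.abs(feat_thermal))
    return abs(m_rgb - m_thermal) / (m_rgb + m_thermal + eps)

# Toy example: an RGB branch with much stronger activations than the
# thermal branch incurs a large penalty (~0.5 here).
rgb = np.ones((4, 8)) * 3.0
thermal = np.ones((4, 8)) * 1.0
print(balance_regularizer(rgb, thermal))
```

In a training loop, such a term would typically be added to the detection loss with a weighting coefficient, nudging the optimizer to keep both feature extractors informative rather than letting the dominant modality absorb all the gradient signal.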
First Page
1755
Last Page
1759
DOI
10.1109/ICIP49359.2023.10222711
Publication Date
1-1-2023
Recommended Citation
Das, Arindam; Das, Sudip; Sistu, Ganesh; Horgan, Jonathan; Bhattacharya, Ujjwal; Jones, Edward; Glavin, Martin; and Eising, Ciarán, "Revisiting Modality Imbalance In Multimodal Pedestrian Detection" (2023). Conference Articles. 539.
https://digitalcommons.isical.ac.in/conf-articles/539
Comments
Open Access, Green