Learning Cross-Modal Deep Embeddings for Multi-Object Image Retrieval using Text and Sketch
Document Type
Conference Article
Publication Title
Proceedings - International Conference on Pattern Recognition
Abstract
In this work we introduce a cross-modal image retrieval system that accepts both text and sketch as query modalities. A cross-modal deep network architecture is formulated to jointly model the sketch and text input modalities as well as the image output modality, learning a common embedding between text and images and between sketches and images. In addition, an attention model is used to selectively focus on the different objects in the image, allowing retrieval with multiple objects in the query. Experiments show that the proposed method performs best in both single- and multiple-object image retrieval on standard datasets.
First Page
916
Last Page
921
DOI
10.1109/ICPR.2018.8545452
Publication Date
11-26-2018
Recommended Citation
Dey, Sounak; Dutta, Anjan; Ghosh, Suman K.; Valveny, Ernest; Llados, Josep; and Pal, Umapada, "Learning Cross-Modal Deep Embeddings for Multi-Object Image Retrieval using Text and Sketch" (2018). Conference Articles. 40.
https://digitalcommons.isical.ac.in/conf-articles/40
Comments
Open Access, Green