Visual-semantic embeddings have been extensively used as a powerful model for cross-modal retrieval of images and sentences. In this setting, data coming from different modalities can be projected into a common embedding space, in which distances can be used to infer the similarity between pairs of images and sentences. While this approach has shown impressive performance in fully supervised settings, its application to semi-supervised scenarios has rarely been investigated. In this paper, we propose a domain adaptation model for cross-modal retrieval, in which the knowledge learned from a supervised dataset can be transferred to a target dataset in which the pairing between images and sentences is either unknown or not useful for training due to the limited size of the set. Experiments are performed on two unsupervised target scenarios, related to the fashion and cultural heritage domains, respectively. Results show that our model is able to effectively transfer the knowledge learned on ordinary visual-semantic datasets, achieving promising results. As an additional contribution, we collect and release the dataset used for the cultural heritage domain.
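To make the retrieval setting concrete (this is a generic illustration, not the model proposed in the paper), the following minimal sketch assumes image and sentence features have already been projected into a joint embedding space, and ranks sentences for each image by cosine similarity; all array names and dimensions are illustrative:

```python
import numpy as np

def cosine_similarity(a, b):
    """Pairwise cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# Toy embeddings standing in for projected image and sentence features.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=(4, 8))     # 4 images, 8-d joint space
sentence_emb = rng.normal(size=(5, 8))  # 5 sentences, same space

sims = cosine_similarity(image_emb, sentence_emb)  # shape (4, 5)
# For each image, retrieve the index of the most similar sentence.
best_sentence = sims.argmax(axis=1)
```

In a fully supervised setting, such a joint space is typically learned with a ranking objective over known image-sentence pairs; the semi-supervised scenario described in the abstract removes the assumption that such pairings are available on the target dataset.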
Carraggi, Angelo; Cornia, Marcella; Baraldi, Lorenzo; Cucchiara, Rita. "Visual-Semantic Alignment Across Domains Using a Semi-Supervised Approach." Vol. 11134 (2019), pp. 625-640. (Paper presented at the 15th European Conference on Computer Vision, ECCV 2018, held in Munich, Germany, 8-14 September 2018) [10.1007/978-3-030-11024-6_47].