Authors: Ivanović, Miloš; Obrenović, Mihailo; Lampert, Thomas; Gançarski, Pierre
Date: 2023-11-14
Year: 2023
DOI: 10.1007/s10994-023-06374-1
URI: https://dspace.unic.kg.ac.rs/handle/123456789/17141
Abstract: Supervised deep learning requires a huge amount of reference data, which is often difficult and expensive to obtain. Domain adaptation helps with this problem: labelled data from one dataset should help in learning on another unlabelled or scarcely labelled dataset. In remote sensing, where a variety of sensors produce images of different modalities and with different numbers of channels, it would be very beneficial to develop heterogeneous domain adaptation methods that are able to work between domains that come from different input spaces. However, this challenging problem is rarely addressed: the majority of existing heterogeneous domain adaptation work either does not use raw image data or relies on translation from one domain to the other, thereby ignoring domain-invariant feature extraction approaches. This article proposes novel approaches for heterogeneous image domain adaptation in both the semi-supervised and unsupervised settings. These are based on extracting domain-invariant features using deep adversarial learning. For the unsupervised domain adaptation case, the impact of pseudo-labelling is also investigated. We evaluate on two heterogeneous remote sensing datasets, one RGB and the other multispectral, for the task of land-cover patch classification, and also on a standard computer vision benchmark of RGB-depth map object classification. The results show that the proposed domain-invariant approach consistently outperforms the competing methods based on image-to-image/feature translation, both in remote sensing and in a standard computer vision problem.
Language: en-US
Title: Learning domain invariant representations of heterogeneous image data
Type: Article