Geoinformatics Unit

Paper accepted for publication in IEEE Transactions on Geoscience and Remote Sensing

February 2, 2022

Our paper entitled "DML: Differ-Modality Learning for Building Semantic Segmentation," led by Junshi Xia, has been accepted for publication in IEEE Transactions on Geoscience and Remote Sensing.

This work critically analyzes the problems arising from differ-modality building semantic segmentation in the remote sensing domain. With the growth of multi-modality datasets, such as optical, synthetic aperture radar (SAR), and light detection and ranging (LiDAR), and the scarcity of semantic annotations, learning from multi-modality information has become increasingly relevant over the last few years. However, the different modalities often cannot be acquired simultaneously for the same location. Suppose we have SAR images with reference labels in one place and optical images without reference labels in another: how can we learn relevant features of the optical images from the SAR images? We refer to this setting as differ-modality learning (DML). To solve the DML problem, we propose novel deep neural network architectures, which include image adaptation, feature adaptation, knowledge distillation, and self-training modules for different scenarios. We test the proposed methods on differ-modality remote sensing datasets (very high-resolution SAR and RGB imagery from SpaceNet 6) for building semantic segmentation and achieve superior efficiency, outperforming state-of-the-art methods.
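The paper describes the image adaptation, feature adaptation, knowledge distillation, and self-training modules in full; as a rough, self-contained illustration of the last two ingredients only, the PyTorch sketch below distills a SAR-trained teacher into a student for optical imagery and adds confidence-filtered pseudo-labels. This is not the authors' DML architecture: the tiny `SegNet`, the `distillation_step` helper, the temperature `T`, and the threshold `tau` are all illustrative assumptions.

```python
# Minimal sketch of the distillation + self-training idea, NOT the authors'
# implementation: a segmentation model trained on labeled SAR acts as a
# teacher, and a student for unlabeled optical imagery learns from it.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SegNet(nn.Module):
    """Tiny fully convolutional segmentation head (placeholder backbone)."""
    def __init__(self, in_ch: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1),
        )

    def forward(self, x):
        return self.net(x)  # per-pixel class logits


def distillation_step(teacher, student, sar, optical, T=2.0, tau=0.9):
    """One unsupervised step: distill the teacher (SAR) into the student (optical).

    Assumes `sar` and `optical` are co-registered views of the same scene,
    which is the setting where modality transfer is meaningful.
    """
    with torch.no_grad():
        t_soft = F.softmax(teacher(sar) / T, dim=1)   # soft targets
        conf, pseudo = t_soft.max(dim=1)              # self-training pseudo-labels
    s_logits = student(optical)
    # Knowledge distillation: match the teacher's softened class distribution.
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1), t_soft,
                  reduction="batchmean") * T * T
    # Self-training: hard pseudo-labels, kept only where the teacher is confident.
    ce = F.cross_entropy(s_logits, pseudo, reduction="none")
    st = (ce * (conf > tau).float()).mean()
    return kd + st


if __name__ == "__main__":
    teacher = SegNet(in_ch=4, n_classes=2)   # e.g., 4-channel SAR input
    student = SegNet(in_ch=3, n_classes=2)   # RGB input
    sar = torch.randn(2, 4, 64, 64)
    optical = torch.randn(2, 3, 64, 64)      # same scenes, other modality
    loss = distillation_step(teacher, student, sar, optical)
    loss.backward()
    print(f"combined loss: {loss.item():.4f}")
```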

Multi-modality learning (left): the same multi-modality datasets are used for both training and testing. Cross-modality learning (middle): the model is trained on multiple modalities (e.g., optical and SAR), and only one modality is used for testing. Differ-modality learning (right): the model is trained on one modality (e.g., SAR) with the training labels, and the other modality (e.g., optical) is used for testing.
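For quick reference, the three settings in the caption differ only in which modalities appear at training time (with labels) and at test time. The snippet below is merely a restatement of the figure with illustrative modality names, not anything from the paper itself.

```python
# Plain restatement of the figure's three settings (illustrative only):
# which modalities are seen during labeled training and which at test time.
SETTINGS = {
    "multi-modality":  {"train": ("optical", "SAR"), "test": ("optical", "SAR")},
    "cross-modality":  {"train": ("optical", "SAR"), "test": ("SAR",)},  # any single modality
    "differ-modality": {"train": ("SAR",),           "test": ("optical",)},  # DML
}

for name, cfg in SETTINGS.items():
    print(f"{name:16s} train on {cfg['train']} -> test on {cfg['test']}")
```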