Geoinformatics Unit

Paper accepted at ECCV 2020

July 3, 2020

We are pleased to announce that the following paper has been accepted to ECCV 2020 as a spotlight!

Guided Deep Decoder: Unsupervised Image Pair Fusion
Tatsumi Uezato, Danfeng Hong, Naoto Yokoya, Wei He

Abstract

The fusion of input and guidance images that have a tradeoff in their information (e.g., hyperspectral and RGB image fusion or pansharpening) can be interpreted as one general problem. However, previous studies applied a task-specific handcrafted prior and did not address the problems with a unified approach. To address this limitation, in this study, we propose a guided deep decoder network as a general prior. The proposed network is composed of an encoder-decoder network that exploits multi-scale features of a guidance image and a deep decoder network that generates an output image. The two networks are connected by feature refinement units to embed the multi-scale features of the guidance image into the deep decoder network. The proposed network allows the network parameters to be optimized in an unsupervised way without training data. Our results show that the proposed network can achieve state-of-the-art performance in various image fusion problems.
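To make the architecture described above concrete, here is a minimal NumPy sketch (not the authors' implementation) of the core idea: guidance-image features are extracted at multiple scales, and a deep decoder upsamples a low-resolution code while feature refinement units modulate its activations with those guidance features. All layer sizes, the average-pooling encoder, and the mean-based refinement are illustrative assumptions; the paper's actual network and its unsupervised optimization of the weights are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample2x(x):
    # nearest-neighbor upsampling: (H, W, C) -> (2H, 2W, C)
    return x.repeat(2, axis=0).repeat(2, axis=1)

def avgpool2x(x):
    # 2x2 average pooling: (H, W, C) -> (H/2, W/2, C)
    return 0.25 * (x[::2, ::2] + x[1::2, ::2] + x[::2, 1::2] + x[1::2, 1::2])

def conv1x1_relu(x, w):
    # 1x1 convolution (per-pixel linear map over channels) + ReLU
    return np.maximum(x @ w, 0.0)

def refine(feat, guide):
    # feature refinement unit (illustrative): modulate decoder features
    # with a per-pixel statistic of the guidance features at the same scale
    scale = 1.0 + guide.mean(axis=2, keepdims=True)
    return feat * scale

# hypothetical high-resolution guidance image (e.g., 32x32 RGB)
guidance = rng.random((32, 32, 3))

# encoder side: multi-scale guidance features (here, simple average pooling)
g_16 = avgpool2x(guidance)   # 16x16 scale
g_8 = avgpool2x(g_16)        # 8x8 scale

# deep decoder side: start from a small random code and upsample
code = rng.random((8, 8, 16))
w1 = 0.1 * rng.standard_normal((16, 16))
w2 = 0.1 * rng.standard_normal((16, 16))
w_out = 0.1 * rng.standard_normal((16, 8))  # 8 output bands, e.g., hyperspectral

x = refine(conv1x1_relu(code, w1), g_8)   # refine at 8x8 scale
x = upsample2x(x)                         # -> 16x16
x = refine(conv1x1_relu(x, w2), g_16)     # refine at 16x16 scale
x = upsample2x(x)                         # -> 32x32
output = x @ w_out                        # fused high-resolution output

print(output.shape)  # (32, 32, 8)
```

In the paper, the network weights themselves are optimized per image pair (no training data), so the decoder output is fit to the low-resolution input while the guidance features steer the spatial detail.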

Paper    Supmat    Code



Illustration of image pair fusion