Geoinformatics Unit

Journal Papers

  1. D. Hong, N. Yokoya, J. Chanussot, and X. X. Zhu, "An augmented linear mixing model to address spectral variability for hyperspectral unmixing," IEEE Transactions on Image Processing (accepted for publication), 2018.

    Abstract: Hyperspectral imagery collected from airborne or satellite sources inevitably suffers from spectral variability, making it difficult for spectral unmixing to accurately estimate abundance maps. The classical unmixing model, the linear mixing model (LMM), generally fails to handle this issue effectively. To this end, we propose a novel spectral mixture model, called the augmented linear mixing model (ALMM), which addresses spectral variability by applying a data-driven learning strategy to the inverse problem of hyperspectral unmixing. The proposed approach models the main spectral variability (i.e., scaling factors) generated by variations in illumination or topography separately by means of the endmember dictionary. It then models other spectral variabilities caused by environmental conditions (e.g., local temperature and humidity, atmospheric effects) and instrumental configurations (e.g., sensor noise), as well as material nonlinear mixing effects, by introducing a spectral variability dictionary. To make the data-driven learning strategy effective, we also propose a reasonable prior for the spectral variability dictionary, whose atoms are assumed to have low coherence with the spectral signatures of the endmembers, which leads to a well-known low-coherence dictionary learning problem. A dictionary learning technique is thus embedded in the spectral unmixing framework so that the algorithm can learn the spectral variability dictionary and estimate the abundance maps simultaneously. Extensive experiments on synthetic and real datasets demonstrate the superiority and effectiveness of the proposed method in comparison with previous state-of-the-art methods.
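
    As a rough guide to the model, here is a minimal sketch in our own notation, reconstructed from the abstract rather than taken from the paper:

      % Classical LMM: a pixel spectrum y is a nonnegative, sum-to-one
      % combination of endmember signatures (the columns of A) plus noise n.
      \mathbf{y} = \mathbf{A}\boldsymbol{\alpha} + \mathbf{n},
      \qquad \boldsymbol{\alpha} \geq \mathbf{0}, \quad \mathbf{1}^{\top}\boldsymbol{\alpha} = 1

      % ALMM as we read the abstract: a scaling factor s absorbs
      % illumination/topography effects, and a learned dictionary D with
      % codes b absorbs the remaining spectral variability; the atoms of D
      % are constrained to have low coherence with the columns of A.
      \mathbf{y} = s\,\mathbf{A}\boldsymbol{\alpha} + \mathbf{D}\mathbf{b} + \mathbf{n}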

  2. D. Hong, N. Yokoya, N. Ge, J. Chanussot, and X. X. Zhu, "Learnable manifold alignment (LeMA): A semi-supervised cross-modality learning framework for land cover and land use classification," ISPRS Journal of Photogrammetry and Remote Sensing (accepted for publication), 2018.

    Abstract: In this paper, we tackle a general but interesting cross-modality feature learning question in the remote sensing community: can a limited amount of highly discriminative (e.g., hyperspectral) training data improve the performance of a classification task that uses a large amount of poorly discriminative (e.g., multispectral) data? Traditional semi-supervised manifold alignment methods do not perform sufficiently well for such problems, since hyperspectral data are far more expensive to collect at scale than multispectral data. To this end, we propose a novel semi-supervised cross-modality learning framework, called learnable manifold alignment (LeMA). LeMA learns a joint graph structure directly from the data instead of using a fixed graph defined by a Gaussian kernel function. With the learned graph, we can further capture the data distribution by graph-based label propagation, which enables finding a more accurate decision boundary. Additionally, an optimization strategy based on the alternating direction method of multipliers (ADMM) is designed to solve the proposed model. Extensive experiments on two hyperspectral-multispectral datasets demonstrate the superiority and effectiveness of the proposed method in comparison with several state-of-the-art methods.
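
    Since the framework builds on graph-based label propagation, the following is a minimal sketch of the fixed Gaussian-kernel-graph baseline that LeMA is designed to improve upon (LeMA learns the graph jointly with the model; all names and parameter values below are ours):

      import numpy as np

      def label_propagation(X, y, sigma=1.0, alpha=0.9, n_iter=50):
          """Semi-supervised label propagation on a fixed Gaussian-kernel graph.

          X: (n, d) feature matrix; y: (n,) integer labels, -1 for unlabeled.
          This is the fixed-graph baseline; LeMA instead learns the graph
          structure directly from the data.
          """
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dist.
          W = np.exp(-d2 / (2 * sigma ** 2))                   # Gaussian affinities
          np.fill_diagonal(W, 0.0)
          dinv = 1.0 / np.sqrt(W.sum(1) + 1e-12)
          S = dinv[:, None] * W * dinv[None, :]                # normalized graph
          classes = np.unique(y[y >= 0])
          F0 = np.zeros((X.shape[0], classes.size))            # one-hot label seeds
          for k, c in enumerate(classes):
              F0[y == c, k] = 1.0
          F = F0.copy()
          for _ in range(n_iter):                              # diffuse labels
              F = alpha * S @ F + (1 - alpha) * F0
          return classes[F.argmax(1)]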

  3. P. Du, E. Li, J. Xia, A. Samat, and X. Bai, "Feature and model level fusion of pre-trained CNN for remote sensing scene classification," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. (accepted for publication), 2018.

    Abstract: Convolutional neural networks (CNNs) have attracted tremendous attention in the remote sensing community due to their excellent performance in different domains. For remote sensing scene classification in particular, CNN-based methods have brought a great breakthrough. However, it is often not feasible to fully design and train a new CNN model for remote sensing scene classification, as this usually requires a large number of training samples and high computational costs. To alleviate these limitations, some works use pre-trained CNN models as feature extractors to build feature representations of scene images for classification and have achieved impressive results. In this scheme, how to construct the feature representation of a scene image via the pre-trained CNN model becomes the key step. Existing studies have paid little attention to building more discriminative feature representations by exploring the potential benefits of multi-layer features from a single CNN model and of different feature representations from multiple CNN models. To this end, this paper presents a fusion strategy that builds the feature representation of scene images by integrating multi-layer features of a single pre-trained CNN model, and extends it to a framework of multiple CNN models. For these purposes, a multiscale improved Fisher kernel (MIFK) coding method is used to build feature representations of the scene images on convolutional layers, and a feature fusion approach based on two feature subspace learning methods (PCA/SRKDA and PCA/SRKLPP) is proposed to construct the final fused features for scene classification. For validation and comparison, the proposed approaches are evaluated on two challenging high-resolution remote sensing datasets and show competitive performance compared with state-of-the-art baselines such as fully trained CNN models, fine-tuned CNN models, and other related works.
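
    To illustrate the general scheme only (not the paper's pipeline): the sketch below uses a pre-trained torchvision ResNet-18 as a frozen multi-layer feature extractor, with global average pooling standing in for MIFK coding and PCA standing in for the PCA/SRKDA and PCA/SRKLPP subspace fusion; the layer choices are our assumptions.

      import torch
      import torchvision

      # Pre-trained backbone used as a fixed feature extractor (no fine-tuning).
      backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()

      feats = {}
      def save_pooled(name):
          def hook(module, inputs, output):
              # Global-average-pool each convolutional feature map to a vector.
              feats[name] = output.mean(dim=(2, 3)).detach()
          return hook

      backbone.layer3.register_forward_hook(save_pooled("mid"))
      backbone.layer4.register_forward_hook(save_pooled("deep"))

      def multilayer_features(images):  # images: (N, 3, 224, 224) tensor
          with torch.no_grad():
              backbone(images)
          # Integrate multi-layer features by simple concatenation.
          return torch.cat([feats["mid"], feats["deep"]], dim=1).numpy()

      # Downstream (hypothetical): project the stacked features with PCA and
      # train a linear classifier on the fused representation, e.g.:
      #   Z = PCA(n_components=128).fit_transform(multilayer_features(X_train))
      #   clf = LinearSVC().fit(Z, y_train)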

  4. W. He and N. Yokoya, "Multi-temporal Sentinel-1 and -2 data fusion for optical image simulation," ISPRS International Journal of Geo-Information, vol. 7, no. 10, p. 389, 2018.
    PDF

    Abstract: In this paper, we present optical image simulation from synthetic aperture radar (SAR) data using deep learning based methods. Two models, i.e., optical image simulation directly from SAR data and from multi-temporal SAR-optical data, are proposed to test the possibilities. The deep learning based methods we chose to realize the models are a convolutional neural network (CNN) with a residual architecture and a conditional generative adversarial network (cGAN). We validate our models using Sentinel-1 and -2 datasets. The experiments demonstrate that the model with multi-temporal SAR-optical data can successfully simulate the optical image, whereas the model with SAR data alone as input fails. The optical image simulation results indicate the possibility of SAR-optical information blending for subsequent applications such as large-scale cloud removal and optical data temporal super-resolution. We also investigate the sensitivity of the proposed models to the training samples and point out possible future directions.
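
    A minimal sketch of a residual CNN for SAR-to-optical translation (channel counts, width, and depth are our assumptions, not the paper's architecture; the cGAN variant is not shown):

      import torch
      import torch.nn as nn

      class ResBlock(nn.Module):
          def __init__(self, ch):
              super().__init__()
              self.body = nn.Sequential(
                  nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                  nn.Conv2d(ch, ch, 3, padding=1))
          def forward(self, x):
              return x + self.body(x)  # residual connection

      class SAR2Optical(nn.Module):
          """Maps SAR input (e.g., 2 channels: VV, VH) to optical bands
          (e.g., 3 for RGB). Multi-temporal SAR-optical input would simply
          widen in_ch."""
          def __init__(self, in_ch=2, out_ch=3, width=64, depth=8):
              super().__init__()
              layers = [nn.Conv2d(in_ch, width, 3, padding=1)]
              layers += [ResBlock(width) for _ in range(depth)]
              layers += [nn.Conv2d(width, out_ch, 3, padding=1)]
              self.net = nn.Sequential(*layers)
          def forward(self, x):
              return self.net(x)

      # One 256x256 dual-pol SAR patch -> one simulated 3-band optical patch.
      sim = SAR2Optical()(torch.randn(1, 2, 256, 256))  # (1, 3, 256, 256)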

  5. P. Du, L. Gan, J. Xia, and D. Wang, "Multikernel adaptive collaborative representation for hyperspectral image classification," IEEE Trans. Geosci. Remote Sens., vol. 56, no. 8, pp. 4664-4677, 2018.

    Abstract: To adequately represent the nonlinearities in the high-dimensional feature space of hyperspectral images (HSIs), we propose a multiple kernel collaborative representation-based classifier (CRC) in this paper. Extended morphological profiles are first extracted from the original HSIs, because they efficiently capture spatial and spectral information. In the proposed method, a novel multiple kernel learning (MKL) model is embedded into the CRC. Multiple kernel patterns, e.g., naive, multimetric, and multiscale, are adopted to form the optimal set of base kernels, which help capture useful information from different pixel distributions, kernel metric spaces, and kernel scales. To learn an optimal linear combination of the predefined base kernels, we add an extra training stage to the typical CRC in which the kernel weights are learned jointly with the representation coefficients from the training samples by minimizing the representation error. Moreover, by considering the different contributions of dictionary atoms, an adaptive representation strategy is applied to the MKL framework via a dissimilarity-weighted regularizer to obtain a more robust representation of test pixels in the fused kernel space. Experimental results on three real HSIs confirm that the proposed classifiers outperform other state-of-the-art representation-based classifiers.
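
    A minimal sketch of a kernel CRC with a preset linear combination of RBF kernels (the paper learns the kernel weights jointly with the representation coefficients and adds a dissimilarity-weighted regularizer; the multiscale gammas and the regularization value below are our assumptions):

      import numpy as np

      def rbf(A, B, gamma):
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def mk_crc_predict(Xtr, ytr, Xte, gammas=(0.1, 1.0, 10.0), lam=1e-2):
          """Multiscale-kernel CRC with fixed, uniform kernel weights."""
          w = np.ones(len(gammas)) / len(gammas)
          K  = sum(wi * rbf(Xtr, Xtr, g) for wi, g in zip(w, gammas))
          Kt = sum(wi * rbf(Xtr, Xte, g) for wi, g in zip(w, gammas))
          # Collaborative codes in the fused kernel space (ridge solution).
          codes = np.linalg.solve(K + lam * np.eye(len(Xtr)), Kt)
          classes = np.unique(ytr)
          preds = []
          for j in range(Xte.shape[0]):
              a = codes[:, j]
              residuals = []
              for c in classes:
                  ac = a * (ytr == c)  # keep only class-c atoms
                  # Kernel-space residual ||phi(y) - Phi ac||^2, up to a constant.
                  residuals.append(ac @ K @ ac - 2 * ac @ Kt[:, j])
              preds.append(classes[np.argmin(residuals)])
          return np.array(preds)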

  6. S. Karimzadeh, M. Matsuoka, M. Miyajima, B. Adriano, A. Fallahi, and J. Karashi, "Sequential SAR coherence method for the monitoring of buildings in Sarpole-Zahab, Iran," Remote Sensing, vol. 10, no. 8, p. 1255, 2018.
    PDF

    Abstract: In this study, we used fifty-six synthetic aperture radar (SAR) images acquired by the Sentinel-1 C-band satellite at a regular 12-day interval (with one exception) to produce sequential phase correlation (sequential coherence) maps for the town of Sarpole-Zahab in western Iran, which experienced a magnitude 7.3 earthquake on 12 November 2017. The preseismic condition of the buildings in the town was assessed with a long sequential SAR coherence (LSSC) method, in which we used 55 of the 56 images to produce a coherence decay model with climatic and temporal parameters. The coseismic condition of the buildings was assessed with three later images and normalized RGB visualization using the short sequential SAR coherence (SSSC) method. Discriminant analysis between completely collapsed and uncollapsed buildings was also performed for approximately 700 randomly selected buildings per category by considering the heights of the buildings and the SSSC results. Finally, the area and volume of debris were calculated based on a fusion of a discriminant map and a 3D vector map of the town.
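
    The building block of both LSSC and SSSC is a windowed coherence estimate between image pairs; below is a minimal sketch of the standard estimator (the window size is arbitrary, and the coherence decay model and RGB visualization are not shown):

      import numpy as np
      from scipy.ndimage import uniform_filter

      def coherence(s1, s2, win=5):
          """Coherence between two co-registered complex (single-look complex)
          SAR images, estimated over a win x win window. Values near 1 mean
          stable scatterers; building collapse shows up as coherence loss."""
          cross = s1 * np.conj(s2)
          num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
          den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win) *
                        uniform_filter(np.abs(s2) ** 2, win))
          return np.abs(num) / np.maximum(den, 1e-12)

      # Sequential coherence: evaluate consecutive 12-day Sentinel-1 pairs and
      # compare preseismic coherence levels against the coseismic pair.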

  7. L. Guanter, M. Brell, J. C.-W. Chan, C. Giardino, J. Gomez-Dans, C. Mielke, F. Morsdorf, K. Segl, and N. Yokoya, "Synergies of spaceborne imaging spectroscopy with other remote sensing approaches," Surveys in Geophysics, pp. 1-31, 2018.

    Abstract: Imaging spectroscopy (IS), also commonly known as hyperspectral remote sensing, is a powerful remote sensing technique for the monitoring of the Earth’s surface and atmosphere. Pixels in optical hyperspectral images consist of continuous reflectance spectra formed by hundreds of narrow spectral channels, allowing an accurate representation of the surface composition through spectroscopic techniques. However, technical constraints in the design of imaging spectrometers mean that spectral coverage and resolution are usually traded against spatial resolution and swath width, as opposed to optical multispectral (MS) systems, which are typically designed to maximize spatial and/or temporal resolution. This complementarity suggests that a synergistic exploitation of spaceborne IS and MS data would be an optimal way to serve remote sensing applications requiring not only high spatial and temporal resolution but also rich spectral information. On the other hand, IS has been shown to have strong synergistic potential with non-optical remote sensing methods, such as thermal infrared (TIR) and light detection and ranging (LiDAR) measurements. In this contribution, we review theoretical and methodological aspects of potential synergies between optical IS and other remote sensing techniques. The focus is on synergies between spaceborne optical IS and MS systems because both types of data are expected to become available in the next few years. Short reviews of potential synergies of IS with TIR and LiDAR measurements are also provided.

  8. J. Chen, P. Du, C. Wu, J. Xia, and J. Chanussot, "Mapping urban land cover of a large area using multiple sensors multiple features," Remote Sensing, vol. 10, no. 6, p. 872, 2018.
    PDF

    Abstract: Given the respective strengths and limitations of multispectral and airborne LiDAR data, fusing the two datasets can compensate for the weaknesses of each. This work investigates the integration of multispectral and airborne LiDAR data for land cover mapping of a large urban area. Different LiDAR-derived features are involved, including height, intensity, and multiple-return features. However, there is limited knowledge about integrating multispectral and LiDAR data with all three feature types for the classification task, and little attention has been devoted to the relative importance of the input features and to their impact on classification uncertainty. The key goal of this study is to explore the potential improvement from using both multispectral and LiDAR data and to evaluate the importance and uncertainty of the input features. Experimental results revealed that using only the LiDAR-derived height features produced the lowest classification accuracy (83.17%). Adding intensity information increased the map accuracy by 3.92 percentage points, and the accuracy was further improved to 87.69% with the addition of multiple-return features. A SPOT-5 image alone produced an overall classification accuracy of 86.51%, and combining spectral and spatial features increased the map accuracy by 6.03 percentage points. The best result (94.59%) was obtained by combining SPOT-5 and LiDAR data using all available input variables. Analysis of feature relevance demonstrated that the normalized digital surface model (nDSM) was the most beneficial feature for land cover classification, and LiDAR-derived height features were more conducive to classifying the urban area than LiDAR-derived intensity and multiple-return features. Selecting only the 10 most important features yielded a higher overall classification accuracy than every input-variable scenario except the one using all available input features. Variable importance also differed widely when examined per land cover class. Results on classification uncertainty suggested that feature combination tends to decrease classification uncertainty for the different land cover classes, but there is no “one-feature-combination-fits-all” solution. The classification uncertainty values exhibited significant differences between the land cover classes, with extremely low uncertainty for the water class; moreover, using all input variables resulted in relatively low classification uncertainty values for most classes compared to the other input-feature scenarios.
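
    A minimal sketch of a feature-importance analysis of the kind reported (placeholder random data and hypothetical feature names; not the study's pipeline, data, or results):

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      # Hypothetical per-pixel feature table: spectral bands plus
      # LiDAR-derived height (nDSM), intensity, and multiple-return features.
      names = ["band1", "band2", "band3", "band4", "nDSM", "intensity", "returns"]
      X = np.random.rand(1000, len(names))   # placeholder feature values
      y = np.random.randint(0, 5, 1000)      # placeholder class labels

      rf = RandomForestClassifier(n_estimators=200, oob_score=True,
                                  random_state=0).fit(X, y)
      print(f"OOB accuracy: {rf.oob_score_:.3f}")
      # Rank features; the most important subset can then be reclassified alone.
      for imp, name in sorted(zip(rf.feature_importances_, names), reverse=True):
          print(f"{name}: {imp:.3f}")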

  9. J. Xia, N. Yokoya, and A. Iwasaki, "Fusion of hyperspectral and LiDAR data with a novel ensemble classifier," IEEE Geosci. Remote Sens. Lett., vol. 15, no. 6, pp. 957-961, 2018.

    Abstract: Owing to developments in sensors and data acquisition technology, the fusion of features from multiple sensors is a very active research topic. In this letter, the use of morphological features to fuse a hyperspectral (HS) image and a light detection and ranging (LiDAR)-derived digital surface model (DSM) is exploited via an ensemble classifier. In each iteration, we first apply morphological openings and closings with partial reconstruction on the first few principal components (PCs) of the HS and LiDAR datasets to produce morphological features that model the spatial and elevation information of the two datasets. Second, three groups of features (i.e., spectral features, and morphological features of the HS and LiDAR data) are split into several disjoint subsets. Third, a data transformation is applied to each subset, and the features extracted from each subset are stacked as the input of a random forest (RF) classifier. Three data transformation methods, namely principal component analysis (PCA), linearity preserving projection (LPP), and unsupervised graph fusion (UGF), are introduced into the ensemble classification process. Finally, we integrate the classification results of all iterations by majority vote. Experimental results on co-registered HS and LiDAR-derived DSM data demonstrate the effectiveness and potential of the proposed ensemble classifier.
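
    A minimal sketch of the random-subspace ensemble idea (PCA stands in for all three transformations, the morphological feature extraction step is omitted, and the subset sizes and member count are our assumptions):

      import numpy as np
      from scipy.stats import mode
      from sklearn.decomposition import PCA
      from sklearn.ensemble import RandomForestClassifier

      def subspace_ensemble(Xtr, ytr, Xte, n_members=10, subset=0.5, seed=0):
          """Per member: draw a random feature subset, transform it (PCA here;
          the letter also uses LPP and UGF), train a random forest, and
          finally integrate all predictions by majority vote."""
          rng = np.random.default_rng(seed)
          votes = []
          for _ in range(n_members):
              idx = rng.choice(Xtr.shape[1], int(subset * Xtr.shape[1]),
                               replace=False)
              pca = PCA(n_components=min(10, idx.size)).fit(Xtr[:, idx])
              rf = RandomForestClassifier(n_estimators=100).fit(
                  pca.transform(Xtr[:, idx]), ytr)
              votes.append(rf.predict(pca.transform(Xte[:, idx])))
          return mode(np.stack(votes), axis=0).mode.ravel()  # majority vote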

  10. P. Ghamisi and N. Yokoya, "IMG2DSM: Height simulation from single imagery using conditional generative adversarial nets," IEEE Geosci. Remote Sens. Lett., vol. 15, no. 5, pp. 794-798, 2018.

    Abstract: This paper proposes a groundbreaking approach in the remote sensing community to simulating a digital surface model (DSM) from a single optical image. This novel technique uses conditional generative adversarial nets whose architecture is based on an encoder-decoder network with skip connections (generator) and penalizing structures at the scale of image patches (discriminator). The network is trained on scenes where both DSM and optical data are available to establish an image-to-DSM translation rule. The trained network is then used to simulate elevation information on target scenes where no corresponding elevation information exists. The capability of the approach is evaluated both visually (in terms of photo interpretation) and quantitatively (in terms of reconstruction errors and classification accuracies) on sub-decimeter spatial resolution datasets captured over Vaihingen, Potsdam, and Stockholm. The results confirm the promising performance of the proposed framework.
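
    A minimal sketch of a conditional patch-scale discriminator of the kind described (pix2pix-style; channel sizes and depth are our assumptions, and the encoder-decoder generator is omitted):

      import torch
      import torch.nn as nn

      class PatchDiscriminator(nn.Module):
          """Scores overlapping patches of the (optical, DSM) pair rather than
          the whole image, thereby penalizing structure at the patch scale."""
          def __init__(self, in_ch=3 + 1, width=64):  # optical (3) + DSM (1)
              super().__init__()
              def block(ci, co, norm=True):
                  layers = [nn.Conv2d(ci, co, 4, stride=2, padding=1)]
                  if norm:
                      layers.append(nn.BatchNorm2d(co))
                  layers.append(nn.LeakyReLU(0.2, inplace=True))
                  return layers
              self.net = nn.Sequential(
                  *block(in_ch, width, norm=False),
                  *block(width, width * 2),
                  *block(width * 2, width * 4),
                  nn.Conv2d(width * 4, 1, 4, padding=1))  # one logit per patch
          def forward(self, optical, dsm):
              return self.net(torch.cat([optical, dsm], dim=1))

      # Each cell of the output grid judges one receptive-field patch.
      logits = PatchDiscriminator()(torch.randn(1, 3, 256, 256),
                                    torch.randn(1, 1, 256, 256))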

  11. N. Yokoya, P. Ghamisi, J. Xia, S. Sukhanov, R. Heremans, I. Tankoyeu, B. Bechtel, B. Le Saux, G. Moser, and D. Tuia, "Open data for global multimodal land use classification: Outcome of the 2017 IEEE GRSS Data Fusion Contest," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 11, no. 5, pp. 1363-1377, 2018.
    PDF

    Abstract: In this paper, we present the scientific outcomes of the 2017 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society. The 2017 Contest addressed the problem of local climate zone classification based on a multitemporal and multimodal dataset, including image (Landsat 8 and Sentinel-2) and vector data (from OpenStreetMap). The competition, based on separate geographical locations for the training and testing of the proposed solutions, aimed at models that were accurate (assessed by accuracy metrics on an undisclosed reference for the test cities), general (assessed by spreading the test cities across the globe), and computationally feasible (assessed by a test phase of limited duration). The techniques proposed by the participants in the Contest spanned a rather broad range of topics and mixed ideas and methodologies derived from computer vision and machine learning but also deeply rooted in the specificities of remote sensing. In particular, rigorous atmospheric correction, the use of multidate images, and the use of ensemble methods fusing results obtained from different data sources/time instants made the difference.

  12. B. Le Saux, N. Yokoya, R. Hansch, and S. Prasad, "2018 IEEE GRSS Data Fusion Contest: Multimodal land use classification," IEEE Geoscience and Remote Sensing Magazine, vol. 6, no. 1, pp. 52-54, 2018.
    PDF   

Conference Papers

  1. V. Ferraris, N. Yokoya, N. Dobigeon, and M. Chabert, "A comparative study of fusion-based change detection methods for multi-band images with different spectral and spatial resolutions," IEEE International Geoscience and Remote Sensing Symposium, 2018.
  2. J. Xia, N. Yokoya, and A. Iwasaki, "Boosting for domain adaptation extreme learning machines for hyperspectral image classification," IEEE International Geoscience and Remote Sensing Symposium, 2018.
  3. H. Xu, H. Zhang, W. He, and L. Zhang, "Superpixel based dimension reduction for hyperspectral imagery," IEEE International Geoscience and Remote Sensing Symposium, 2018.
  4. D. Hong, N. Yokoya, J. Xu, and X. X. Zhu, "Joint & progressive learning from high-dimensional data for multi-label classification," European Conference on Computer Vision, 2018.