  • Publication
    Efficient Learning for Earth Observation Applications with Self-Supervised Learning
    Earth observation methods are utilized across a wide range of research disciplines and therefore play an increasingly important role. This importance is fueled in part by the ever-increasing amount of (freely) available Earth observation data. The systematic analysis of these vast amounts of data requires scalable methods that are tailored to the complexity of the data; Deep Learning meets these requirements and has therefore proven to be a useful tool. However, the supervised learning process that is typically used to train these models to perform specific tasks requires the annotation of large amounts of training data, a process that is both expensive and laborious. Recently, self-supervised learning paradigms that do not rely on annotated data have emerged and gained traction in the field of computer vision. Such methods enable the pre-training of Deep Learning models on large amounts of annotation-free data and the subsequent supervised fine-tuning of the pre-trained models on specific tasks. As a result of the pre-training, this fine-tuning process requires much smaller amounts of annotated data and therefore enables a much more efficient learning process. Thus, self-supervised learning represents a worthwhile path for Earth observation applications, as it is able to leverage the vast amounts of freely available Earth observation data to enable the efficient learning of related tasks. In this work, we employ self-supervised learning methods on multi-modal Earth observation data to quantify their effect on learning efficiency. We pre-train different Deep Learning model backbones on the task of identifying matching scenes from Sentinel-1 SAR and Sentinel-2 multi-band data in a contrastive learning setup. Our results support the notion that self-supervised learning is useful for the efficient learning of Earth observation tasks. We find that the pre-training process results in the successful learning of rich latent representations: the supervised fine-tuning of pre-trained models results in (i) better performance than models that were trained in a fully supervised way while at the same time (ii) requiring only 10-20% of the annotated data. Furthermore, (iii) this effect can be observed across different downstream tasks such as patch-based classification or image segmentation, indicating that the learned latent representations are task-agnostic. Based on these findings, we expect self-supervised pre-training to enable more efficient Deep Learning for Earth observation applications and therefore to boost scientific output across a range of Earth observation-related research disciplines.
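    The contrastive pre-training objective described in this abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the publication's actual code: the Encoder architecture, channel counts, patch sizes, and the info_nce formulation (a standard InfoNCE loss in which co-located Sentinel-1/Sentinel-2 scenes form the positive pairs) are all illustrative assumptions.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class Encoder(nn.Module):
            """Toy CNN backbone standing in for the paper's model backbones."""
            def __init__(self, in_channels: int, embed_dim: int = 128):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(64, embed_dim),
                )

            def forward(self, x):
                # L2-normalized embeddings, so a dot product is a cosine similarity.
                return F.normalize(self.net(x), dim=-1)

        def info_nce(z1, z2, temperature=0.07):
            # Matching scenes sit on the diagonal of the similarity matrix and act
            # as positives; every other pair in the batch acts as a negative.
            logits = z1 @ z2.t() / temperature
            targets = torch.arange(z1.size(0))
            return 0.5 * (F.cross_entropy(logits, targets)
                          + F.cross_entropy(logits.t(), targets))

        # Assumed channel counts: 2 for Sentinel-1 (VV/VH), 13 for Sentinel-2 bands.
        sar_encoder, msi_encoder = Encoder(in_channels=2), Encoder(in_channels=13)
        s1 = torch.randn(8, 2, 64, 64)    # batch of SAR patches
        s2 = torch.randn(8, 13, 64, 64)   # co-located multi-band patches
        loss = info_nce(sar_encoder(s1), msi_encoder(s2))
        loss.backward()                   # gradients for both modality encoders

    Each modality gets its own encoder because SAR and multi-band optical data have different channel counts and statistics; the shared embedding space is what the contrastive objective induces.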
  • Publication
    Multi-modal Self-supervised Learning for Earth Observation
    An ever-increasing number of Earth Observation satellites continually captures massive amounts of remote sensing data. This wealth of data makes manual analysis of all images by human experts impossible. At the same time, the data lacks readily available labels, which are necessary for training supervised machine learning models, including state-of-the-art deep learning approaches. Techniques from self-supervised machine learning make it possible to leverage unlabeled data for the training of deep neural network models, and thus to improve our Earth Observation capabilities across different tasks.
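    As a rough illustration of the two-stage pipeline this abstract alludes to, here is a minimal, hypothetical PyTorch sketch: a backbone is first pre-trained on unlabeled patches with a SimCLR-style crop-matching pretext task, then fine-tuned on a small labeled set. The architecture, pretext task, and class count are illustrative placeholders, not the publication's method.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        encoder = nn.Sequential(           # placeholder backbone
            nn.Conv2d(13, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
        )

        # Stage 1: self-supervised pre-training on unlabeled patches. The pretext
        # task matches two overlapping crops of the same scene (SimCLR-style).
        unlabeled = torch.randn(32, 13, 64, 64)
        crop_a, crop_b = unlabeled[:, :, :48, :48], unlabeled[:, :, 16:, 16:]
        za = F.normalize(encoder(crop_a), dim=-1)
        zb = F.normalize(encoder(crop_b), dim=-1)
        pretext_loss = F.cross_entropy(za @ zb.t() / 0.07, torch.arange(32))
        pretext_loss.backward()            # gradients flow into the encoder alone

        # Stage 2: supervised fine-tuning on a much smaller labeled set.
        head = nn.Linear(64, 10)           # 10 classes is a placeholder count
        labeled_x = torch.randn(4, 13, 64, 64)   # few annotated samples
        labeled_y = torch.randint(0, 10, (4,))
        task_loss = F.cross_entropy(head(encoder(labeled_x)), labeled_y)
        task_loss.backward()               # gradients for head and encoder

    The point of the split is that Stage 1 consumes only unlabeled imagery, so the expensive annotation effort is confined to the much smaller dataset used in Stage 2.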