  • Publication
    Estimation of Power Generation and CO2 Emissions Using Satellite Imagery
    Burning fossil fuels produces large amounts of carbon dioxide (CO2), a major Greenhouse Gas (GHG) and a main driver of Climate Change. Quantification of GHG emissions related to power plants is crucial for accurate predictions of climate effects and for achieving a successful energy transition (from fossil-fuel to carbon-free energy). The reporting of such emissions is only required in some countries, resulting in insufficient global coverage. In this work, we propose an end-to-end method to predict power generation rates for fossil fuel power plants from satellite images, based on which we estimate GHG emission rates. We present a multitask deep learning approach able to simultaneously predict: (i) the pixel area covered by plumes from a single satellite image of a power plant, (ii) the type of fired fuel, and (iii) the power generation rate. To ensure physically realistic predictions from our model, we account for environmental conditions. We then convert the predicted power generation rate into estimates of the rate at which CO2 is being emitted, using fuel-dependent conversion factors.
  • Publication
    A Multimodal Approach for Event Detection: Study of UK Lockdowns in the Year 2020.
    (IEEE Geoscience and Remote Sensing Society, 2022-07-19)
    Satellites allow spatially precise monitoring of the Earth but provide only limited information on events of societal impact. Subjective societal impact, however, may be quantified at high frequency by monitoring social media data. In this work, we propose a multi-modal data fusion framework to accurately identify periods of COVID-19-related lockdown in the United Kingdom using satellite observations (NO2 measurements from Sentinel-5P) and social media data (the textual content of tweets from Twitter). We show that fusing the two modalities improves event detection accuracy both at the national level and for large cities such as London.
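The fusion approach described above can be illustrated with a minimal early-fusion sketch: standardized features from both modalities are concatenated and fed to a single classifier. The synthetic weekly features, the plain logistic-regression classifier, and all numbers below are illustrative assumptions, not the paper's actual data or model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic weekly features for illustration (not the paper's data):
# lockdown weeks show lower NO2 columns and more lockdown-related tweets.
n_weeks = 200
lockdown = rng.integers(0, 2, n_weeks)                   # 1 = lockdown week
no2 = 50 - 15 * lockdown + rng.normal(0, 5, n_weeks)     # Sentinel-5P proxy
tweets = 10 + 30 * lockdown + rng.normal(0, 5, n_weeks)  # social-media proxy

# Early fusion: concatenate standardized features from both modalities.
X = np.column_stack([no2, tweets])
X = (X - X.mean(axis=0)) / X.std(axis=0)
X = np.column_stack([np.ones(n_weeks), X])               # bias term

# Simple logistic-regression classifier trained by gradient descent.
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - lockdown) / n_weeks

pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
accuracy = (pred == lockdown).mean()
```

With two informative modalities, the fused classifier separates lockdown from non-lockdown weeks; dropping either feature column would simulate the single-modality baselines the paper compares against.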
  • Publication
    Multitask Learning for Estimating Power Plant Greenhouse Gas Emissions from Satellite Imagery
    (Tackling Climate Change with Machine Learning workshop at NeurIPS, 2021-12-14)
    The burning of fossil fuels produces large amounts of carbon dioxide (CO2), a major Greenhouse Gas (GHG) and a main driver of Climate Change. Quantifying GHG emissions is crucial for accurate predictions of climate effects and to enforce emission trading schemes. The reporting of such emissions is only required in some countries, resulting in insufficient global coverage. In this work, we propose an end-to-end method to predict power generation rates for fossil fuel power plants from satellite images, based on which we estimate GHG emission rates. We present a multitask deep learning approach able to simultaneously predict: (i) the pixel area covered by plumes from a single satellite image of a power plant, (ii) the type of fired fuel, and (iii) the power generation rate. We then convert the predicted power generation rate into estimates of the rate at which CO2 is being emitted. Experimental results show that our approach allows us to estimate the power generation rate of a power plant to within 139 MW (MAE, for a mean sample power plant capacity of 1177 MW) from a single satellite image, and CO2 emission rates to within 311 t/h. This multitask learning approach improves the power generation estimation MAE by 39% compared to a baseline single-task network trained on the same dataset.
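The shared-trunk, three-head structure of such a multitask model, plus the fuel-dependent conversion to CO2, can be sketched as a forward pass in plain NumPy. All dimensions, weights, and conversion factors below are placeholder assumptions for illustration; the paper's actual backbone, heads, and calibrated factors are not given in this listing.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical dimensions: a flattened image embedding feeds a shared
# trunk, which branches into the three tasks named in the abstract.
d_in, d_shared, n_fuels = 128, 64, 3

W_shared = rng.normal(0, 0.1, (d_in, d_shared))
W_area = rng.normal(0, 0.1, (d_shared, 1))        # head 1: plume pixel area
W_fuel = rng.normal(0, 0.1, (d_shared, n_fuels))  # head 2: fuel-type logits
W_power = rng.normal(0, 0.1, (d_shared, 1))       # head 3: generation rate

def forward(x):
    h = relu(x @ W_shared)   # shared representation used by all heads
    return h @ W_area, h @ W_fuel, h @ W_power

# Fuel-dependent conversion from power (MW) to CO2 emission rate (t/h);
# these factors are placeholders, not the paper's calibrated values.
CO2_T_PER_MWH = np.array([1.0, 0.7, 0.5])  # e.g. coal, oil, gas

def co2_rate(power_mw, fuel_idx):
    return power_mw * CO2_T_PER_MWH[fuel_idx]

x = rng.normal(0, 1, (4, d_in))            # batch of 4 image embeddings
area, logits, power = forward(x)
fuel = logits.argmax(axis=1)               # predicted fuel type per plant
emissions = co2_rate(power[:, 0], fuel)    # CO2 rate estimate per plant
```

Sharing the trunk across the three tasks is what lets plume-area and fuel-type supervision improve the power generation estimate, the 39% MAE reduction the abstract reports over a single-task baseline.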
  • Publication
    Power Plant Classification from Remote Imaging with Deep Learning
    Satellite remote imaging enables the detailed study of land use patterns on a global scale. We investigate the possibility of improving the information content of traditional land use classification by identifying the nature of industrial sites from medium-resolution remote sensing images. In this work, we focus on classifying different types of power plants from Sentinel-2 imaging data. Using a ResNet-50 deep learning model, we achieve a mean accuracy of 90.0% in distinguishing 10 different power plant types and a background class. Furthermore, we identify the cooling mechanisms utilized in thermal power plants with a mean accuracy of 87.5%. Our results enable us to qualitatively investigate the energy mix from Sentinel-2 imaging data and demonstrate the feasibility of classifying industrial sites on a global scale from freely available satellite imagery.
    Scopus© Citations 2
  • Publication
    Efficient Learning for Earth Observation Applications with Self-Supervised Learning
    Earth observation methods are utilized across a wide range of research disciplines and therefore play an increasingly important role. This importance is fueled in part by the ever-increasing amount of (freely) available Earth observation data. The systematic analysis of these vast amounts of data requires scalable methods that are tailored to the complexity of the data; Deep Learning meets these requirements and has therefore proven to be a useful tool. However, the supervised learning process that is typically used to train these models to perform specific tasks requires the annotation of large amounts of training data, a process that is both expensive and laborious. Recently, self-supervised learning paradigms that do not rely on annotated data have emerged and gained traction in the field of computer vision. Such methods enable the pre-training of Deep Learning models on large amounts of annotation-free data and the subsequent supervised fine-tuning of the pre-trained models on specific tasks. As a result of the pre-training, this fine-tuning process requires much smaller amounts of annotated data and therefore enables a much more efficient learning process. Thus, self-supervised learning represents a worthwhile path for Earth observation applications, as it is able to leverage the vast amounts of freely available Earth observation data to enable the efficient learning of related tasks. In this work, we employ self-supervised learning methods on multi-modal Earth observation data to quantify their effect on learning efficiency. We pre-train different Deep Learning model backbones on the task of identifying matching scenes from Sentinel-1 SAR and Sentinel-2 multi-band data in a contrastive learning setup. Our results support the notion that self-supervised learning is useful for efficient learning of Earth observation tasks.
We find that the pre-training process results in the successful learning of rich latent representations: the supervised fine-tuning of pre-trained models (i) yields better performance than models trained in a fully supervised way while (ii) requiring only 10-20% of the annotated data. Furthermore, (iii) this effect can be observed across different downstream tasks, such as patch-based classification or image segmentation, indicating that the learned latent representations are task-agnostic. Based on these findings, we expect self-supervised pre-training to enable more efficient Deep Learning for Earth observation applications and therefore to boost scientific output across a range of Earth observation-related research disciplines.
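The contrastive pre-training objective for matching Sentinel-1 and Sentinel-2 scenes can be written as an InfoNCE-style loss: in a batch of paired embeddings, row i of each modality is the positive match and all other rows act as negatives. The sketch below is a NumPy illustration of that loss under assumed batch and embedding sizes, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE) loss over a batch of paired embeddings:
    row i of z1 should match row i of z2; all other pairings in the
    batch serve as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)  # unit-normalize
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                     # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))  # -log p(correct cross-modal match)

batch, dim = 8, 32
z_sar = rng.normal(size=(batch, dim))   # stand-in Sentinel-1 backbone outputs
# Well-aligned Sentinel-2 embeddings (small perturbation of the SAR ones):
z_msi = z_sar + 0.05 * rng.normal(size=(batch, dim))

loss_aligned = info_nce(z_sar, z_msi)                       # low loss
loss_random = info_nce(z_sar, rng.normal(size=(batch, dim)))  # high loss
```

Minimizing this loss pulls the two modality-specific backbones toward a shared latent space, which is what makes the pre-trained representations reusable across the downstream tasks listed above.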
  • Publication
    Multi-modal Self-supervised Learning for Earth Observation
    An ever-increasing number of Earth Observation satellites continually captures massive amounts of remote sensing data. This wealth of data makes manual analysis of all images by human experts impossible. At the same time, the data lacks the readily available labels that are necessary for training supervised machine learning models, including state-of-the-art deep learning approaches. Techniques from self-supervised machine learning make it possible to leverage unlabeled data for the training of deep neural network models, and thus improve our Earth Observation capabilities across different tasks.