Publication

Toward Global Estimation of Ground-Level NO2 Pollution With Deep Learning and Remote Sensing

2022-03-21 , Scheibenreif, Linus Mathias , Mommert, Michael , Borth, Damian

Air pollution is a central environmental problem in countries around the world. It contributes to climate change through the emission of greenhouse gases, and adversely impacts the health of billions of people. Despite its importance, detailed information about the spatial and temporal distribution of pollutants is difficult to obtain. Ground-level monitoring stations are sparse, and approaches for modeling air pollution rely on extensive datasets which are unavailable for many locations. We introduce three techniques for the estimation of air pollution to overcome these limitations: 1) a baseline localized approach that mimics conventional land-use regression through gradient boosting; 2) an OpenStreetMap (OSM) approach with gradient boosting that is applicable beyond regions covered by detailed geographic datasets; and 3) a remote sensing-based deep learning method utilizing multiband imagery and trace-gas column density measurements from satellites. We focus on the estimation of nitrogen dioxide (NO2), a common anthropogenic air pollutant with adverse effects on the environment and human health. Our local baseline model achieves strong results with a mean absolute error (MAE) of 5.18 ± 0.16 μg/m3 NO2. Substituting localized inputs with OSM leads to a degraded performance (MAE 7.22 ± 0.14 μg/m3) but enables NO2 estimation at a global scale. The proposed deep learning model on remote sensing data combines high accuracy (MAE 5.5 ± 0.14 μg/m3) with global coverage and heteroscedastic uncertainty quantification. Our results enable the estimation of surface-level NO2 pollution with high spatial resolution for any location on Earth. We illustrate this capability with an out-of-distribution test set on the US West Coast. Code and data are publicly available.
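
The following is a minimal sketch, not the authors' code, of the core idea behind the remote sensing-based method: a two-stream network consumes a multiband image patch and a trace-gas column-density patch and predicts both a mean NO2 value and a per-sample variance (heteroscedastic uncertainty). Channel counts, layer sizes, and names are illustrative assumptions.

```python
# Hypothetical sketch: two-stream NO2 regressor with heteroscedastic uncertainty.
import torch
import torch.nn as nn

class NO2Regressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.s2_encoder = nn.Sequential(            # multiband imagery stream (12 bands assumed)
            nn.Conv2d(12, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.s5p_encoder = nn.Sequential(           # trace-gas column-density stream
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64 + 16, 2)           # predicts mean and log-variance

    def forward(self, s2, s5p):
        z = torch.cat([self.s2_encoder(s2), self.s5p_encoder(s5p)], dim=1)
        mean, log_var = self.head(z).unbind(dim=1)
        return mean, log_var

def heteroscedastic_nll(mean, log_var, target):
    # Gaussian negative log-likelihood; the predicted variance models
    # per-sample (aleatoric) uncertainty of the NO2 estimate.
    return (0.5 * torch.exp(-log_var) * (target - mean) ** 2 + 0.5 * log_var).mean()
```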

Publication

Self-supervised Vision Transformers for Land-cover Segmentation and Classification

2022-06-19 , Scheibenreif, Linus Mathias , Hanna, Joëlle , Mommert, Michael , Borth, Damian

Publication

Power Plant Classification from Remote Imaging with Deep Learning

2021-07 , Mommert, Michael , Scheibenreif, Linus Mathias , Hanna, Joëlle , Borth, Damian

Satellite remote imaging enables the detailed study of land use patterns on a global scale. We investigate the possibility of improving the information content of traditional land use classification by identifying the nature of industrial sites from medium-resolution remote sensing images. In this work, we focus on classifying different types of power plants from Sentinel-2 imaging data. Using a ResNet-50 deep learning model, we are able to achieve a mean accuracy of 90.0% in distinguishing 10 different power plant types and a background class. Furthermore, we are able to identify the cooling mechanisms utilized in thermal power plants with a mean accuracy of 87.5%. Our results enable us to qualitatively investigate the energy mix from Sentinel-2 imaging data, and demonstrate the feasibility of classifying industrial sites on a global scale from freely available satellite imagery.
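
As an illustration only (not the paper's implementation), a ResNet-50 can be adapted to multi-band Sentinel-2 patches and an 11-way output (10 power plant types plus a background class) as sketched below; the band count and patch size are assumptions.

```python
# Hedged sketch: ResNet-50 adapted for multi-band power plant classification.
import torch
import torch.nn as nn
from torchvision.models import resnet50

NUM_BANDS = 12       # assumed Sentinel-2 band subset
NUM_CLASSES = 11     # 10 power plant types + background

model = resnet50(weights=None)
# Replace the RGB stem with a multi-band stem and resize the classifier head.
model.conv1 = nn.Conv2d(NUM_BANDS, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

x = torch.randn(4, NUM_BANDS, 120, 120)   # batch of image patches (synthetic)
logits = model(x)                          # shape: (4, 11)
```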

Publication

Characterization of Industrial Smoke Plumes from Remote Sensing Data

2020-12-11 , Mommert, Michael , Sigel, Mario , Neuhausler, Marcel , Scheibenreif, Linus Mathias , Borth, Damian

The anthropogenic release of greenhouse gas (GHG) emissions from industrial activities has been identified as the major driver of global warming. The quantitative monitoring of these emissions is mandatory to fully understand their effect on the Earth's climate and to enforce emission regulations on a large scale. In this work, we investigate the possibility of detecting and quantifying industrial smoke plumes from globally and freely available multi-band image data from ESA's Sentinel-2 satellites. Using a modified ResNet-50, we can detect smoke plumes of different sizes with an accuracy of 94.3%. The model correctly ignores natural clouds and focuses on those imaging channels that are related to the spectral absorption from aerosols and water vapor, enabling the localization of smoke. We exploit this localization ability and train a U-Net segmentation model on a labeled sub-sample of our data, resulting in an Intersection-over-Union (IoU) metric of 0.608 and an overall accuracy for the detection of any smoke plume of 94.0%; on average, our model can reproduce the area covered by smoke in an image to within 5.6%. The performance of our model is mostly limited by occasional confusion with surface objects, the inability to identify semi-transparent smoke, and human limitations in properly identifying smoke based on RGB-only images. Nevertheless, our results enable us to reliably detect and qualitatively estimate the level of smoke activity in order to monitor activity in industrial plants across the globe. Our data set and code base are publicly available.
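
For reference, a minimal sketch of the Intersection-over-Union metric used to evaluate binary smoke-plume segmentation masks is shown below; the threshold and tensor shapes are assumptions, not values from the paper.

```python
# Hedged sketch: IoU between a predicted probability map and a binary mask.
import torch

def binary_iou(pred_logits: torch.Tensor, target: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Compute IoU of a thresholded prediction against a binary ground-truth mask."""
    pred = (torch.sigmoid(pred_logits) > threshold).bool()
    target = target.bool()
    intersection = (pred & target).sum().float()
    union = (pred | target).sum().float()
    return intersection / union.clamp(min=1.0)   # guard against empty union

# Example on synthetic data for a 120x120 patch.
iou = binary_iou(torch.randn(1, 120, 120), torch.randint(0, 2, (1, 120, 120)))
```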

Publication

Masked Vision Transformers for Hyperspectral Image Classification

2023-06-18 , Scheibenreif, Linus , Mommert, Michael , Borth, Damian

Transformer architectures have become state-of-the-art models in computer vision and natural language processing. To a significant degree, their success can be attributed to self-supervised pre-training on large-scale unlabeled datasets. This work investigates the use of self-supervised masked image reconstruction to advance transformer models for hyperspectral remote sensing imagery. To facilitate self-supervised pre-training, we build a large dataset of unlabeled hyperspectral observations from the EnMAP satellite and systematically investigate modifications of the vision transformer architecture to optimally leverage the characteristics of hyperspectral data. We find significant improvements in accuracy on different land cover classification tasks over both standard vision and sequence transformers using (i) blockwise patch embeddings, (ii) spatial-spectral self-attention, (iii) spectral positional embeddings, and (iv) masked self-supervised pre-training. The resulting model outperforms standard transformer architectures by +5% accuracy on a labeled subset of our EnMAP data and by +15% on the Houston2018 hyperspectral dataset, making it competitive with a strong 3D convolutional neural network baseline. In an ablation study on label efficiency based on the Houston2018 dataset, self-supervised pre-training significantly improves transformer accuracy when little labeled training data is available. The self-supervised model outperforms randomly initialized transformers and the 3D convolutional neural network by +7-8% when only 0.1-10% of the training labels are available.
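
A conceptual sketch of masked pre-training on hyperspectral patch tokens follows; it is not the paper's implementation, and the mask ratio, token dimension, and transformer depth are assumptions. A random subset of tokens is replaced by a learned mask token and the model is trained to reconstruct the missing values.

```python
# Hypothetical sketch: masked reconstruction of patch tokens for pre-training.
import torch
import torch.nn as nn

class TinyMaskedAutoencoder(nn.Module):
    def __init__(self, token_dim=64, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(token_dim))
        encoder_layer = nn.TransformerEncoderLayer(d_model=token_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.decoder = nn.Linear(token_dim, token_dim)

    def forward(self, tokens):                      # tokens: (batch, n_tokens, dim)
        # Randomly mask a fraction of the tokens and reconstruct them.
        mask = torch.rand(tokens.shape[:2], device=tokens.device) < self.mask_ratio
        corrupted = torch.where(mask.unsqueeze(-1), self.mask_token, tokens)
        reconstruction = self.decoder(self.encoder(corrupted))
        # Reconstruction loss is computed on the masked positions only.
        return ((reconstruction - tokens) ** 2)[mask].mean()

loss = TinyMaskedAutoencoder()(torch.randn(2, 16, 64))
```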

Publication

Contrastive Self-Supervised Data Fusion for Satellite Imagery

2022 , Scheibenreif, Linus Mathias , Mommert, Michael , Borth, Damian

Self-supervised learning has great potential for the remote sensing domain, where unlabelled observations are abundant, but labels are hard to obtain. This work leverages unlabelled multi-modal remote sensing data for augmentation-free contrastive self-supervised learning. Deep neural network models are trained to maximize the similarity of latent representations obtained with different sensing techniques from the same location, while distinguishing them from other locations. We showcase this idea with two self-supervised data fusion methods and compare against standard supervised and self-supervised learning approaches on a land-cover classification task. Our results show that contrastive data fusion is a powerful self-supervised technique to train image encoders that are capable of producing meaningful representations: Simple linear probing performs on par with fully supervised approaches and fine-tuning with as little as 10% of the labelled data results in higher accuracy than supervised training on the entire dataset.
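
The core objective can be sketched as a cross-modal, augmentation-free contrastive (InfoNCE-style) loss: embeddings of two sensing modalities from the same location are pulled together while other locations in the batch serve as negatives. The encoders, modality names, and temperature below are assumptions, not the paper's exact setup.

```python
# Hedged sketch: cross-modal contrastive objective for data fusion.
import torch
import torch.nn.functional as F

def cross_modal_info_nce(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    z_a = F.normalize(z_a, dim=1)                  # embeddings from modality A (e.g. radar)
    z_b = F.normalize(z_b, dim=1)                  # embeddings from modality B (e.g. optical)
    logits = z_a @ z_b.t() / temperature           # pairwise similarities within the batch
    targets = torch.arange(z_a.size(0), device=z_a.device)  # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

loss = cross_modal_info_nce(torch.randn(8, 128), torch.randn(8, 128))
```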

Publication

A Novel Dataset and Benchmark for Surface NO2 Prediction from Remote Sensing Data Including COVID Lockdown Measures

2021-07-16 , Scheibenreif, Linus Mathias , Mommert, Michael , Borth, Damian

NO2 is an atmospheric trace gas that contributes to global warming as a precursor of greenhouse gases and has adverse effects on human health. Surface NO2 concentrations are commonly measured through strictly localized networks of air quality stations on the ground. This work presents a novel dataset of surface NO2 measurements aligned with atmospheric column densities from Sentinel-5P, as well as geographic and meteorological variables and lockdown information. The dataset provides access to data from a variety of sources through a common format and will foster data-driven research into the causes and effects of NO2 pollution. We showcase the value of the new dataset on the task of surface NO2 estimation with gradient boosting. The resulting models enable daily estimates and confident identification of EU NO2 exposure limit breaches. Additionally, we investigate the influence of COVID-19 lockdowns on air quality in Europe and find a significant decrease in NO2 levels.
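
A hypothetical sketch of a gradient-boosting baseline on tabular features (Sentinel-5P column density plus geographic and meteorological variables) is given below; the synthetic data, feature set, and hyperparameters are assumptions rather than the paper's configuration.

```python
# Hedged sketch: gradient boosting for surface NO2 estimation on tabular features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                       # e.g. NO2 column, wind, temperature, elevation, ...
y = rng.normal(loc=20.0, scale=5.0, size=1000)       # surface NO2 in ug/m3 (synthetic targets)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```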

Publication

A Multimodal Approach for Event Detection: Study of UK Lockdowns in the Year 2020.

2022-07-19 , Hanna, Joëlle , Scheibenreif, Linus Mathias , Mommert, Michael , Borth, Damian

Satellites allow spatially precise monitoring of the Earth, but provide only limited information on events of societal impact. Subjective societal impact, however, may be quantified at a high frequency by monitoring social media data. In this work, we propose a multi-modal data fusion framework to accurately identify periods of COVID-19-related lockdown in the United Kingdom using satellite observations (NO2 measurements from Sentinel-5P) and social media (textual content of tweets from Twitter) data. We show that the data fusion of the two modalities improves the event detection accuracy on a national level and for large cities such as London.
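
A toy sketch of one possible fusion scheme follows: a window of Sentinel-5P NO2 values and a precomputed tweet-text embedding are encoded separately and concatenated for a binary lockdown classification. The branch dimensions and late-fusion design are illustrative assumptions, not necessarily the paper's architecture.

```python
# Hypothetical sketch: late fusion of satellite and social media features.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, no2_dim=30, text_dim=256):
        super().__init__()
        self.no2_branch = nn.Sequential(nn.Linear(no2_dim, 32), nn.ReLU())
        self.text_branch = nn.Sequential(nn.Linear(text_dim, 32), nn.ReLU())
        self.head = nn.Linear(64, 2)     # lockdown vs. no lockdown

    def forward(self, no2_window, text_embedding):
        z = torch.cat([self.no2_branch(no2_window), self.text_branch(text_embedding)], dim=1)
        return self.head(z)

logits = FusionClassifier()(torch.randn(4, 30), torch.randn(4, 256))
```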

Publication

Multitask Learning for Estimating Power Plant Greenhouse Gas Emissions from Satellite Imagery

2021-12-14 , Hanna, Joëlle , Mommert, Michael , Scheibenreif, Linus Mathias , Borth, Damian

The burning of fossil fuels produces large amounts of carbon dioxide (CO2), a major Greenhouse Gas (GHG) and a main driver of climate change. Quantifying GHG emissions is crucial for accurate predictions of climate effects and to enforce emission trading schemes. The reporting of such emissions is only required in some countries, resulting in insufficient global coverage. In this work, we propose an end-to-end method to predict power generation rates for fossil fuel power plants from satellite images, based on which we estimate GHG emission rates. We present a multitask deep learning approach able to simultaneously predict: (i) the pixel-area covered by plumes from a single satellite image of a power plant, (ii) the type of fired fuel, and (iii) the power generation rate. We then convert the predicted power generation rate into estimates for the rate at which CO2 is being emitted. Experimental results show that our approach allows us to estimate the power generation rate of a power plant to within 139 MW (MAE, for a mean sample power plant capacity of 1177 MW) from a single satellite image and CO2 emission rates to within 311 t/h. This multitask learning approach improves the power generation estimation MAE by 39% compared to a baseline single-task network trained on the same dataset.
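
Below is an illustrative sketch of the multitask idea: a shared image encoder feeds three task heads (plume-area regression, fuel-type classification, power-generation regression) trained with a combined loss. The backbone, input channels, dimensions, and loss weighting are assumptions.

```python
# Hedged sketch: shared encoder with three task heads for multitask learning.
import torch
import torch.nn as nn

class MultitaskModel(nn.Module):
    def __init__(self, feat_dim=128, num_fuel_types=3):
        super().__init__()
        self.encoder = nn.Sequential(               # tiny stand-in for a real backbone
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.area_head = nn.Linear(feat_dim, 1)            # plume pixel area
        self.fuel_head = nn.Linear(feat_dim, num_fuel_types)  # fired fuel type
        self.power_head = nn.Linear(feat_dim, 1)            # power generation rate (MW)

    def forward(self, image):
        z = self.encoder(image)
        return self.area_head(z), self.fuel_head(z), self.power_head(z)

model = MultitaskModel()
area, fuel_logits, power = model(torch.randn(2, 3, 120, 120))
loss = (nn.functional.mse_loss(area, torch.rand(2, 1))
        + nn.functional.cross_entropy(fuel_logits, torch.randint(0, 3, (2,)))
        + nn.functional.mse_loss(power, torch.rand(2, 1)))
```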

Publication

Estimation of Air Pollution with Remote Sensing Data: Revealing Greenhouse Gas Emissions from Space

2021-07-23 , Scheibenreif, Linus Mathias , Mommert, Michael , Borth, Damian

Air pollution is a major driver of climate change. Anthropogenic emissions from the burning of fossil fuels for transportation and power generation emit large amounts of problematic air pollutants, including Greenhouse Gases (GHGs). Despite the importance of limiting GHG emissions to mitigate climate change, detailed information about the spatial and temporal distribution of GHG and other air pollutants is difficult to obtain. Existing models for surface-level air pollution rely on extensive land-use datasets which are often locally restricted and temporally static. This work proposes a deep learning approach for the prediction of ambient air pollution that relies only on globally available and frequently updated remote sensing data. Combining optical satellite imagery with satellite-based atmospheric column density air pollution measurements enables the scaling of air pollution estimates (in this case NO2) to high spatial resolution (up to ∼10 m) at arbitrary locations and adds a temporal component to these estimates. The proposed model performs with high accuracy when evaluated against air quality measurements from ground stations (mean absolute error <6 μg/m3). Our results enable the identification and temporal monitoring of major sources of air pollution and GHGs.
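
To illustrate how a patch-level estimator could yield spatially resolved maps at arbitrary locations, the sketch below sweeps a sliding window over a larger tile; the patch size, stride, and the `predict_patch` interface are hypothetical and not taken from the paper.

```python
# Hedged sketch: sliding-window inference to build a surface-NO2 map from a tile.
import numpy as np

def no2_map(tile: np.ndarray, predict_patch, patch: int = 120, stride: int = 60) -> np.ndarray:
    """tile: (bands, H, W) reflectance array; returns a grid of patch-level NO2 estimates."""
    _, height, width = tile.shape
    rows = (height - patch) // stride + 1
    cols = (width - patch) // stride + 1
    out = np.zeros((rows, cols), dtype=np.float32)
    for i in range(rows):
        for j in range(cols):
            window = tile[:, i * stride:i * stride + patch, j * stride:j * stride + patch]
            out[i, j] = predict_patch(window)       # scalar NO2 estimate in ug/m3
    return out

# Example with a dummy predictor on a synthetic 12-band tile.
estimates = no2_map(np.random.rand(12, 600, 600), predict_patch=lambda w: float(w.mean()))
```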