Super-Resolved Canopy Height Mapping from Sentinel-2 Time Series Using LiDAR HD Reference Data across Metropolitan France


Fine-scale forest monitoring is essential for understanding canopy structure and its dynamics, which are key indicators of carbon stocks, biodiversity, and forest health. Deep learning is particularly effective for this task, as it integrates spectral, temporal, and spatial signals that jointly reflect the canopy structure. To address this need, we introduce THREASURE-Net, a novel end-to-end framework for Tree Height Regression And Super-Resolution. The model is trained on Sentinel-2 time series using reference height metrics derived from LiDAR HD data at multiple spatial resolutions over Metropolitan France to produce annual height maps. We evaluate three model variants, producing tree-height predictions at 2.5 m, 5 m, and 10 m resolution. THREASURE-Net does not rely on any pretrained model nor on reference very high resolution optical imagery to train its super-resolution module; instead, it learns solely from LiDAR-derived height information. Our approach outperforms existing state-of-the-art methods based on Sentinel data and is competitive with methods based on very high resolution imagery. It can be deployed to generate high-precision annual canopy-height maps, achieving mean absolute errors of 2.62 m, 2.72 m, and 2.88 m at 2.5 m, 5 m, and 10 m resolution, respectively. These results highlight the potential of THREASURE-Net for scalable and cost-effective structural monitoring of temperate forests using only freely available satellite data. The source code for THREASURE-Net is available at: https://github.com/Global-Earth-Observation/threasure-net.


💡 Research Summary

This paper introduces THREASURE‑Net, an end‑to‑end deep learning framework that simultaneously performs super‑resolution (SR) and tree‑height regression using only Sentinel‑2 time‑series data, with airborne LiDAR‑HD products serving as the sole reference. The study covers the entirety of metropolitan France, leveraging 80 Sentinel‑2 tiles (MGRS) and more than 179 000 1 km² LiDAR‑HD patches for training, plus separate validation and test sets. Each LiDAR patch is converted to a raster of the 95th‑percentile canopy height at three target spatial resolutions: 2.5 m, 5 m, and 10 m. A crop mask derived from the French Land Parcel Identification System removes non‑forest vegetation that could bias the model.
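The rasterisation step described above can be sketched in NumPy. The helper below is hypothetical (the paper's actual LiDAR-HD processing chain is not detailed here): it assumes point heights already normalised to height above ground, bins them into a regular grid at the chosen cell size, and takes the 95th percentile per cell.

```python
import numpy as np

def p95_height_raster(x, y, z, extent, cell_size):
    """Bin LiDAR heights (x, y, z in metres) into a regular grid and
    compute the 95th-percentile height per cell; empty cells are NaN."""
    x0, y0, x1, y1 = extent
    nx = int(round((x1 - x0) / cell_size))
    ny = int(round((y1 - y0) / cell_size))
    col = np.clip(((x - x0) / cell_size).astype(int), 0, nx - 1)
    row = np.clip(((y - y0) / cell_size).astype(int), 0, ny - 1)
    raster = np.full((ny, nx), np.nan)
    # Group points by flattened cell index, then reduce each group.
    flat = row * nx + col
    order = np.argsort(flat)
    flat_sorted, z_sorted = flat[order], z[order]
    starts = np.searchsorted(flat_sorted, np.arange(ny * nx))
    ends = np.searchsorted(flat_sorted, np.arange(ny * nx), side="right")
    for cell in range(ny * nx):
        s, e = starts[cell], ends[cell]
        if e > s:
            raster[cell // nx, cell % nx] = np.percentile(z_sorted[s:e], 95)
    return raster
```

Running the same point cloud through this helper with cell sizes of 2.5 m, 5 m, and 10 m would yield the three reference rasters described above.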

The architecture consists of three main components. First, a spatio‑spectral feature extractor applies Residual Dense Blocks (RDBs) – borrowed from the Residual Dense Network (RDN) – to each Sentinel‑2 observation independently, producing a high‑dimensional feature map while preserving fine spatial details. Second, a lightweight temporal self‑attention module (as described by Garnot & Landrieu, 2020) aggregates the multi‑temporal features, learning to weight each acquisition date according to its relevance; temporal positions are encoded alongside the features, allowing the network to explicitly account for seasonal variability. Third, the temporally fused representation is up‑sampled by a learned up‑sampling block with scale factor f, the ratio of the 10 m input resolution to the target resolution (f = 4 for 2.5 m, f = 2 for 5 m, and f = 1 for 10 m), after which a regression head predicts the 95th‑percentile canopy height for the reference date. Crucially, the SR module is trained jointly with the regression task, eliminating the need for any pre‑trained SR network or very‑high‑resolution (VHR) optical imagery.
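The temporal aggregation step can be illustrated per pixel with a minimal NumPy sketch. The sinusoidal date encoding and single-query attention follow the general scheme of Garnot & Landrieu (2020), but the function names, shapes, and random weights below are illustrative placeholders, not the paper's implementation.

```python
import numpy as np

def positional_encoding(days, d):
    """Sinusoidal encoding of acquisition day-of-year (d must be even)."""
    i = np.arange(d // 2)
    freq = 1.0 / (10000 ** (2 * i / d))
    ang = days[:, None] * freq[None, :]
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=1)  # (T, d)

def temporal_attention(feats, days, w_q, w_k):
    """Collapse a (T, C) per-pixel feature sequence to (C,): a single
    learned query scores each date; softmax weights fuse the sequence."""
    x = feats + positional_encoding(days, feats.shape[1])
    keys = x @ w_k                              # (T, d_k)
    scores = keys @ w_q / np.sqrt(w_q.size)     # one relevance score per date
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # attention weights, sum to 1
    return weights @ feats, weights
```

The returned weights make the date selection inspectable: leaf-on acquisitions with informative spectra should receive higher weight than, say, partially cloudy ones.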

Training data comprise Sentinel‑2 Level‑2A products (10 selected bands plus cloud mask and sun/satellite viewing angles) resampled to 10 m. Angles are transformed with sine and cosine to respect periodicity. Only observations from the leaf‑on season (May–October) of the same year as the LiDAR reference are used; images with >50 % cloud cover or >10 % missing data are discarded. The model thus learns to handle irregular, partially cloudy time series without requiring cloud‑free composites.
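The angle encoding and scene-filtering rules above can be sketched as follows; `keep_observation` is a hypothetical helper that encodes the stated thresholds (leaf-on months May to October, at most 50% cloud cover, at most 10% missing data).

```python
import numpy as np

def encode_angles(deg):
    """Encode periodic sun/view angles as (sin, cos) pairs so that
    e.g. 359 deg and 1 deg map to nearby feature values."""
    rad = np.deg2rad(deg)
    return np.stack([np.sin(rad), np.cos(rad)], axis=-1)

def keep_observation(cloud_fraction, missing_fraction, month):
    """Apply the stated filters: leaf-on season (May-October),
    <=50% cloud cover, <=10% missing data."""
    return 5 <= month <= 10 and cloud_fraction <= 0.5 and missing_fraction <= 0.1
```

Because filtering is per observation rather than per composite, the surviving time series is irregular, which is exactly what the temporal attention module is designed to absorb.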

Performance is evaluated against several baselines: (i) traditional Sentinel‑2‑only SR and regression pipelines, (ii) state‑of‑the‑art methods that combine Sentinel‑2 with VHR optical data (e.g., Google Earth), and (iii) recent single‑image SR networks adapted to time series. THREASURE‑Net achieves mean absolute errors (MAE) of 2.62 m (2.5 m resolution), 2.72 m (5 m), and 2.88 m (10 m), outperforming all Sentinel‑2‑only baselines and matching or surpassing VHR‑based approaches. The model also demonstrates robustness to cloud‑contaminated inputs and can generate predictions for any day of the year, thanks to its temporal conditioning.
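The reported MAE amounts to a mean of absolute height errors over valid pixels; a minimal sketch follows (the exact masking used in the paper's evaluation may differ, e.g. forest-only pixels).

```python
import numpy as np

def mae(pred, ref, mask=None):
    """Mean absolute error between predicted and reference canopy
    heights (metres), optionally restricted to valid pixels."""
    err = np.abs(pred - ref)
    if mask is not None:
        err = err[mask]
    return float(err.mean())
```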

Key contributions include: (1) a fully end‑to‑end SR‑regression model that does not rely on any pretrained SR network or VHR reference imagery; (2) the ability to learn fine‑scale spatial details directly from LiDAR‑derived heights, achieving VHR‑level accuracy with only free Sentinel‑2 data; (3) explicit conditioning on acquisition dates, enabling the network to capture phenological changes; (4) resilience to irregular, cloudy time series, making the approach operationally viable. Limitations are acknowledged: LiDAR‑HD provides only a single acquisition per year, preventing direct temporal validation of the SR component, and residual non‑forest vegetation may still introduce minor bias. Future work is suggested to incorporate multi‑year LiDAR time series, test generalization across different climatic zones, and refine crop‑masking strategies.

In summary, THREASURE‑Net proves that high‑precision, multi‑scale canopy‑height maps can be produced at national scale using exclusively freely available Sentinel‑2 observations, offering a cost‑effective and scalable tool for forest carbon accounting, biodiversity monitoring, and disturbance detection.

