Title: First On-Orbit Demonstration of a Geospatial Foundation Model
ArXiv ID: 2512.01181
Date: 2025-12-01
Authors: Andrew Du¹* (AI for Space Group, The University of Adelaide), Roberto Del Prete³ (Φ-lab, European Space Agency), Alejandro Mousist⁴ (Thales Alenia Space), Nick Manser² (SmartSat CRC), Fabrice Marre² (SmartSat CRC), Andrew Barton² (SmartSat CRC), Carl Seubert² (SmartSat CRC), Gabriele Meoni³ (European Space Agency), Tat-Jun Chin¹ (AI for Space Group, The University of Adelaide). ¹AI for Space Group, The University of Adelaide, Australia; ²SmartSat CRC, Adelaide, Australia; ³Φ-lab, European Space Agency, Frascati, Italy; ⁴Thales Alenia Space, Cannes, France. *Corresponding author: andrew.du@adelaide.edu.au
📝 Abstract
Geospatial foundation models (GeoFMs) promise broad generalisation capacity for Earth observation (EO) tasks, particularly under data-limited conditions. However, their large size poses a barrier to deployment on resource-constrained space hardware. To address this, we present compact variants of a Vision Transformer (ViT)-based GeoFM that preserve downstream task performance while enabling onboard execution. Evaluation across five downstream tasks and validation in two representative flight environments show that model compression and domain adaptation are critical to reducing size and resource demands while maintaining high performance under operational conditions. We further demonstrate reliable on-orbit inference with the IMAGIN-e payload aboard the International Space Station. These results establish a pathway from large GeoFMs to flight-ready, resource-efficient deployments, expanding the feasibility of onboard AI for EO missions.
📄 Full Content
First On-Orbit Demonstration of a Geospatial Foundation Model

Andrew Du¹*, Roberto Del Prete³, Alejandro Mousist⁴, Nick Manser², Fabrice Marre², Andrew Barton², Carl Seubert², Gabriele Meoni³, Tat-Jun Chin¹

¹AI for Space Group, The University of Adelaide, Adelaide, 5000, South Australia, Australia.
²SmartSat CRC, Adelaide, 5000, South Australia, Australia.
³Φ-lab, European Space Agency, Frascati, 00044, Italy.
⁴Thales Alenia Space, Cannes, 06150, France.

*Corresponding author: andrew.du@adelaide.edu.au
Contributing authors: Roberto.DelPrete@esa.int; Alejandro.Mousist@thalesaleniaspace.com; nick.manser@smartsatcrc.com; fabrice.marre@smartsatcrc.com; andrew.barton@smartsatcrc.com; carl.seubert@smartsatcrc.com; Gabriele.Meoni@esa.int; tat-jun.chin@adelaide.edu.au
Abstract

Geospatial foundation models (GeoFMs) promise broad generalisation capacity for Earth observation (EO) tasks, particularly under data-limited conditions. However, their large size poses a barrier to deployment on resource-constrained space hardware. To address this, we present compact variants of a Vision Transformer (ViT)-based GeoFM that preserve downstream task performance while enabling onboard execution. Evaluation across five downstream tasks and validation in two representative flight environments show that model compression and domain adaptation are critical to reducing size and resource demands while maintaining high performance under operational conditions. We further demonstrate reliable on-orbit inference with the IMAGIN-e payload aboard the International Space Station. These results establish a pathway from large GeoFMs to flight-ready, resource-efficient deployments, expanding the feasibility of onboard AI for EO missions.
Keywords: Earth observation, satellite, geospatial foundation model, machine learning, model compression, domain adaptation
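To make the compression idea in the abstract concrete, the following is a minimal sketch of feature-based knowledge distillation, one plausible route from a large ViT-based GeoFM to a compact onboard variant. The TinyViT student, its dimensions, and the plain MSE feature loss are illustrative assumptions, not the authors' actual recipe.

```python
# A minimal sketch of feature-based knowledge distillation from a large
# ViT-based GeoFM teacher into a compact student. Architecture, sizes, and
# loss are illustrative assumptions, not the paper's compression method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchEmbed(nn.Module):
    """Split an image into 16x16 patches and linearly embed them."""

    def __init__(self, in_ch=3, dim=384, patch=16):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)

    def forward(self, x):
        # (B, C, H, W) -> (B, N, dim), with N = (H / patch) * (W / patch)
        return self.proj(x).flatten(2).transpose(1, 2)


class TinyViT(nn.Module):
    """Compact ViT encoder standing in for the distilled student."""

    def __init__(self, dim=384, depth=6, heads=6):
        super().__init__()
        self.embed = PatchEmbed(dim=dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        return self.encoder(self.embed(x))  # (B, N, dim)


def distill_step(teacher, student, proj, images, opt):
    """One step matching student patch features to frozen teacher features."""
    with torch.no_grad():
        t_feat = teacher(images)       # (B, N, teacher_dim); teacher frozen
    s_feat = proj(student(images))     # project student_dim -> teacher_dim
    loss = F.mse_loss(s_feat, t_feat)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Here `teacher` would be the pretrained GeoFM encoder with a matching patch grid and `proj` an `nn.Linear(384, teacher_dim)` adapter; after distillation, only the student and a lightweight task head need to fit within the flight hardware's memory budget.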
1 Introduction
Understanding Earth's dynamic systems through satellite imagery is critical for addressing a wide range of environmental and societal challenges, such as monitoring climate change [1, 2], managing natural resources [3, 4], supporting sustainable infrastructure [5–7], and enabling timely responses to natural disasters [8, 9]. To meet these needs, an increasing number of Earth observation (EO) satellites have been launched or are planned for launch, equipped with ever-advancing imagers capable of capturing high-resolution multispectral and hyperspectral imagery across a broad swath of the electromagnetic spectrum. However, the primary challenge in EO has shifted from acquiring data to efficiently analysing and extracting actionable insights from the vast volumes collected, particularly in bandwidth-limited or real-time scenarios. As a result, there is growing interest in deploying machine learning (ML) techniques, particularly deep neural networks (DNNs), directly onboard satellites to enable more advanced processing and analysis in orbit. Recent advances in space-qualified hardware accelerators, such as Intel's Myriad Vision Processing Unit (VPU) series [10–13], Ubotica's XE platforms [14–16], and NVIDIA's Jetson modules [17–19], have begun to make this feasible, opening the door to more capable and intelligent EO systems.
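To illustrate how a model typically reaches such accelerators, the sketch below shows the common interchange step of exporting a compact PyTorch backbone to ONNX, a format consumed by edge toolchains such as OpenVINO (for Myriad-class VPUs) and TensorRT (for Jetson modules). The placeholder model, input shape, and file name are assumptions for illustration, not this paper's deployment pipeline.

```python
# A minimal sketch of the interchange step for onboard deployment: export a
# compact backbone to ONNX. The stand-in model, static input shape, and
# output file name are assumptions, not the paper's actual pipeline.
import torch
import torch.nn as nn

# Stand-in for a distilled GeoFM backbone; a real mission would load
# trained weights here before export.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 8),
).eval()

# Edge runtimes usually expect a fixed input resolution, so we trace with a
# static dummy tensor rather than dynamic shapes.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "compact_backbone.onnx",
                  input_names=["image"], output_names=["embedding"],
                  opset_version=17)
```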
To date, several EO missions have demonstrated or plan to demonstrate the feasibility of deploying ML onboard satellites. Φ-sat-1 [10, 20] marked the first use of a DNN on a space-qualified artificial intelligence (AI) chip (i.e., the Intel Movidius Myriad 2), executing a convolutional neural network (CNN) in orbit to discard cloudy images. Building on this, Φ-sat-2 [21, 22] plans to support multiple onboard applications, including cloud detection, marine vessel tracking, image compression, anomaly detection, and wildfire detection, though these have yet to be demonstrated. ION-SCV 003 [12] showed that models can be iteratively updated by capturing images onboard, downlinking them for labelling, retraining on the ground, and uplinking the revised weights. Its successor, ION-SCV 004 [13], and SONATE-2 [19] extended this capability by supporting onboard training on preloaded datasets acquired from previous missions. More recently, CogniSAT-6 [16, 23] became the first EO satellite to integrate onboard AI processing, autonomous task scheduling, and near real-time insight delivery via inter-satellite links (ISL). It has also executed over 20 onboard applications, including flood detection, land cover classification, and volcano monitoring [24, 25]. Separately, the ISS Mounted Accessible Global Imaging Nod-e (IMAGIN-e) [26, 27] has enabled AI processing capabilities aboard the International Space Station (ISS), serving as a testbed for edge computing in space without the need for a dedicated satellite.
While these missions demonstrate meaningful progress, they largely rely on lightweight, task-specific models (typically CNNs) due to three practical constraints: limited onboard compute, memory