MarsQE: Semantic-Informed Quality Enhancement for Compressed Martian Image
Lossy image compression is essential for Mars exploration missions due to the limited bandwidth between Earth and Mars. However, compression may introduce visual artifacts that complicate geological analysis of the Martian surface. Existing quality enhancement approaches, primarily designed for Earth images, fall short for Martian images because they do not account for the unique Martian semantics. In response to this challenge, we conduct an in-depth analysis of Martian images, yielding two key semantic insights: the presence of texture similarities and the compact nature of texture representations in Martian images. Inspired by these findings, we introduce MarsQE, an innovative, semantic-informed, two-phase quality enhancement approach specifically designed for Martian images. The first phase performs semantic-based matching of texture-similar reference images, and the second phase enhances image quality by transferring texture patterns from these reference images to the compressed image. We also develop a post-enhancement network to further reduce compression artifacts and achieve superior compression quality. Our extensive experiments demonstrate that MarsQE significantly outperforms existing approaches designed for Earth images, establishing a new benchmark for quality enhancement of Martian images.
💡 Research Summary
The paper addresses a critical bottleneck in Mars exploration: the visual degradation of images that must be heavily compressed for transmission over the limited bandwidth between Mars and Earth. While many deep‑learning based quality‑enhancement methods exist for Earth imagery, they rely on the rich semantic diversity and varied texture patterns typical of terrestrial scenes. Martian images, by contrast, exhibit two distinctive properties: (1) markedly higher inter‑image and intra‑image similarity, meaning that patches from different images or from different regions of the same image are often nearly identical in pixel statistics; and (2) a compact texture representation, with only a few semantic classes (e.g., sand, soil, rock) dominating the visual content. The authors substantiate these findings through extensive statistical analysis on the Martian Image Compression (MIC) dataset, comparing MAE, RMSE, and Normalized Correlation Coefficient (NCC) against the DIV2K Earth‑image benchmark and the Aerial Image Dataset (AID). Across multiple patch sizes (256×256, 128×128, 64×64) the Martian data consistently shows lower error metrics and higher NCC, confirming the hypothesized similarity and compactness.
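The three similarity metrics used in this analysis (MAE, RMSE, NCC) are standard and easy to reproduce. The sketch below, written as an illustration rather than the authors' exact evaluation code, compares two equally sized grayscale patches; lower MAE/RMSE and higher NCC indicate the kind of inter-patch similarity reported for the Martian data.

```python
import numpy as np

def patch_similarity(a: np.ndarray, b: np.ndarray) -> dict:
    """Compare two equally sized grayscale patches (any numeric dtype)."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    mae = np.mean(np.abs(a - b))                     # mean absolute error
    rmse = np.sqrt(np.mean((a - b) ** 2))            # root mean squared error
    # Normalized Correlation Coefficient: correlation of zero-mean patches,
    # robust to a constant brightness offset between the two patches.
    a0, b0 = a - a.mean(), b - b.mean()
    ncc = np.sum(a0 * b0) / (np.linalg.norm(a0) * np.linalg.norm(b0) + 1e-12)
    return {"mae": mae, "rmse": rmse, "ncc": ncc}
```

Because NCC is computed on zero-mean patches, two patches that differ only by a uniform brightness shift still score NCC ≈ 1, while MAE reports the shift directly.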
Motivated by these observations, the authors propose MarsQE, a two‑phase, semantic‑informed quality‑enhancement framework specifically designed for Martian imagery. The first phase introduces a Semantic‑based Matching Module (SMM) that projects each compressed patch into a learned semantic space using a CNN encoder and then retrieves the most similar reference patches from a pre‑compiled database of high‑quality Martian images. Unlike naïve pixel‑wise nearest‑neighbor search, SMM leverages cosine similarity in the semantic domain, making it robust to illumination changes and minor geometric variations. The second phase employs a Texture Transfer Network (TTN) that uses attention mechanisms to fuse the retrieved reference texture into the compressed patch, effectively restoring fine‑grained details while suppressing blockiness and ringing artifacts. A final Post‑Enhancement Network (PE‑Net) refines the output with a Residual‑in‑Residual architecture and multi‑scale deconvolution, targeting residual noise and subtle compression residues.
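The retrieval step of SMM reduces to a top-k cosine-similarity search in the learned semantic space. The following minimal sketch assumes the encoder has already produced one feature vector per patch (the function name and array layout are illustrative, not from the paper):

```python
import numpy as np

def retrieve_references(query_feat: np.ndarray,
                        ref_feats: np.ndarray,
                        k: int = 3) -> np.ndarray:
    """Return indices of the k reference patches whose semantic features
    are most cosine-similar to the query feature vector.

    query_feat: shape (d,) feature of the compressed patch
    ref_feats:  shape (n, d) features of the reference database
    """
    q = query_feat / (np.linalg.norm(query_feat) + 1e-12)
    r = ref_feats / (np.linalg.norm(ref_feats, axis=1, keepdims=True) + 1e-12)
    sims = r @ q                       # cosine similarity per reference patch
    return np.argsort(-sims)[:k]       # indices of the top-k matches
```

Because the similarity is computed on normalized feature vectors rather than raw pixels, the match is insensitive to overall intensity scaling, which is consistent with the robustness to illumination changes described above.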
Training proceeds on the MIC dataset, jointly optimizing SMM and TTN with a combination of L1 reconstruction loss and perceptual loss, while PE‑Net is trained separately with L1+SSIM losses. The authors evaluate MarsQE on the MIC dataset as well as on images from other Mars missions (MSL, Perseverance) without any fine‑tuning, demonstrating average gains of +2.3 dB in PSNR and +0.018 in SSIM over state‑of‑the‑art Earth‑centric methods such as RBQE, DnCNN, AR‑CNN, and recent blind denoising networks. Ablation studies reveal that both the semantic matching and the texture transfer components contribute substantially: removing SMM drops performance by ~1.1 dB, and varying the number of reference patches trades quality improvement against computational cost.
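For reference, the PSNR figures quoted above follow the standard definition; a +2.3 dB gain corresponds to roughly a 40% reduction in mean squared error. A minimal implementation for 8-bit images:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between reference and test images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Since PSNR is 10·log10(MAX²/MSE), a fixed dB gain maps to a fixed MSE ratio: 2.3 dB ≈ 10^0.23 ≈ 1.7×, i.e. the enhanced output has about 1.7 times lower MSE than the baseline on average.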
The paper also discusses limitations. The effectiveness of SMM depends on the richness of the reference database; novel terrain types (e.g., volcanic cones, icy deposits) may lack suitable matches, potentially limiting generalization. The semantic encoder is initially pre‑trained on Earth data, which may not capture Martian‑specific feature distributions, suggesting a need for dedicated Martian semantic pre‑training. Moreover, the full pipeline—SMM, TTN, and PE‑Net—requires GPU‑accelerated inference, which is feasible at Earth ground stations but not on resource‑constrained rovers or orbiters, indicating a need for model compression or lightweight variants for on‑board processing.
Future directions proposed include (1) an online, self‑expanding reference database that continuously incorporates newly received high‑quality images; (2) a Mars‑specific semantic encoder trained on a larger, possibly semi‑supervised Martian dataset; and (3) a distilled, quantized version of MarsQE suitable for real‑time deployment on spacecraft. The authors argue that the underlying principle—exploiting high similarity and limited semantic diversity—could be transferred to other planetary bodies with similar visual characteristics, such as the Moon or Phobos.
In summary, MarsQE represents a novel, data‑driven approach that tailors deep‑learning based image restoration to the unique statistical properties of Martian imagery, achieving substantial quality gains over generic Earth‑oriented methods and opening new avenues for high‑fidelity visual analysis in planetary science.