Artifact Removal and Image Restoration in AFM: A Structured Mask-Guided Directional Inpainting Approach


Atomic Force Microscopy (AFM) enables high-resolution surface imaging at the nanoscale, yet the output is often degraded by artifacts introduced by environmental noise, scanning imperfections, and tip-sample interactions. To address this challenge, a lightweight and fully automated framework for artifact detection and restoration in AFM image analysis is presented. The pipeline begins with a classification model that determines whether an AFM image contains artifacts. If necessary, a lightweight semantic segmentation network, custom-designed and trained on AFM data, is applied to generate precise artifact masks. These masks are adaptively expanded based on their structural orientation and then inpainted using a directional neighbor-based interpolation strategy to preserve 3D surface continuity. A localized Gaussian smoothing operation is then applied for seamless restoration. The system is integrated into a user-friendly GUI that supports real-time parameter adjustment and batch processing. Experimental results demonstrate effective artifact removal while preserving nanoscale structural details, providing a robust, geometry-aware solution for high-fidelity AFM data interpretation.


💡 Research Summary

The paper presents a fully automated, lightweight framework for detecting and restoring artifacts in Atomic Force Microscopy (AFM) images. Recognizing that AFM data are often compromised by environmental noise, scanner non‑linearities, and tip‑sample interactions, the authors design a pipeline that integrates classification, segmentation, smart flattening, directional inpainting, and localized smoothing, all wrapped in a user‑friendly graphical interface.

The workflow begins with conversion of raw SPM files into 224 × 224 RGB PNG images. A ResNet‑18‑based classifier, fine‑tuned on AFM data and trained with Focal Loss to address class imbalance, categorizes each image into four classes: Good, Not Tracking, Tip Contamination, and General Imaging Artifacts. Images deemed “Good” are exported directly, while the remaining ones proceed to the restoration stage.
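The Focal Loss used to counter class imbalance can be sketched independently of the network. Below is a minimal NumPy version of the standard formulation FL(p_t) = -α(1 - p_t)^γ log(p_t); the exact α and γ values used by the authors are not stated in the summary, so the defaults here (γ = 2, α = 0.25, from the original Focal Loss paper) are assumptions:

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=0.25):
    """Focal Loss for class-imbalanced classification.

    probs   : (N, C) array of softmax probabilities
    targets : (N,) array of integer class indices
    Down-weights easy, well-classified examples via (1 - p_t)^gamma,
    so rare artifact classes contribute more to the gradient.
    """
    n = probs.shape[0]
    p_t = probs[np.arange(n), targets]            # probability of the true class
    loss = -alpha * (1.0 - p_t) ** gamma * np.log(p_t + 1e-12)
    return loss.mean()
```

With γ = 0 and α = 1 this reduces to plain cross-entropy, which makes the down-weighting effect easy to verify on a toy batch.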

For corrupted images, a custom lightweight semantic segmentation network (encoder‑decoder architecture) predicts a per‑pixel probability map of artifact presence. The map is thresholded to produce a binary mask, which is then filtered by area (min_pix, max_pix) and aspect‑ratio (ar_thr) to eliminate spurious detections and distinguish elongated streaks from compact blobs. Streak‑type artifacts undergo similarity‑based expansion: neighboring pixels are added if their intensity lies within k · σ of the region’s mean and the local gradient does not exceed grad_th. The final expanded mask combines enlarged streaks and blobs, ensuring comprehensive coverage while preserving surrounding normal regions.
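The similarity-based expansion rule for streaks can be illustrated as an iterative one-pixel dilation with acceptance tests. The sketch below uses the parameter names from the summary (`k`, `grad_th`); the choice of 4-connectivity, the gradient operator (`np.gradient`), and the iteration cap are assumptions, not details from the paper:

```python
import numpy as np

def expand_streak(height, mask, k=2.0, grad_th=0.5, iters=5):
    """Similarity-based mask expansion (sketch of the paper's rule).

    A pixel adjacent to the mask is absorbed if its height lies within
    k * sigma of the masked region's mean AND its local gradient
    magnitude does not exceed grad_th.
    """
    mask = mask.astype(bool).copy()
    gy, gx = np.gradient(height.astype(float))
    grad = np.hypot(gx, gy)                        # local gradient magnitude
    for _ in range(iters):
        mu, sigma = height[mask].mean(), height[mask].std()
        # one-pixel dilation via shifted copies (4-neighborhood)
        nbr = np.zeros_like(mask)
        nbr[1:, :]  |= mask[:-1, :]
        nbr[:-1, :] |= mask[1:, :]
        nbr[:, 1:]  |= mask[:, :-1]
        nbr[:, :-1] |= mask[:, 1:]
        cand = (nbr & ~mask
                & (np.abs(height - mu) <= k * sigma)
                & (grad <= grad_th))
        if not cand.any():                         # converged
            break
        mask |= cand
    return mask
```

On a synthetic streak this grows the seed mask along the streak while rejecting background pixels, which is the intended behavior of the expansion stage.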

Next, the Smart Flatten module removes low‑frequency background trends. It performs polynomial fitting on unmasked pixels, automatically selecting row‑wise or column‑wise direction based on dominant slope. An interactive mode allows users to manually exclude regions via the GUI, providing expert control when automatic masks are insufficient.
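A minimal sketch of the Smart Flatten idea follows: fit a polynomial per scan line using only unmasked pixels, then subtract the fitted trend from the whole line. The automatic row-vs-column direction selection is simplified here to row-wise fitting, and the polynomial order parameter is an assumption:

```python
import numpy as np

def smart_flatten(height, mask, order=1):
    """Line-wise polynomial background removal on unmasked pixels.

    height : (H, W) float array of surface heights
    mask   : (H, W) bool array, True where pixels are artifacts
             (excluded from the fit so they don't bias the trend)
    """
    out = height.astype(float).copy()
    x = np.arange(height.shape[1])
    for i in range(height.shape[0]):
        good = ~mask[i]
        if good.sum() > order:                     # enough points to fit
            coeffs = np.polyfit(x[good], out[i, good], order)
            out[i] -= np.polyval(coeffs, x)        # subtract fitted trend
    return out
```

Because artifact pixels are excluded from the fit, a tall spike inside the mask survives flattening unchanged while the surrounding tilt is removed, which is exactly why masking matters for this step.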

The core restoration step employs Directional Neighbor Interpolation. For each masked pixel, height values from structurally relevant neighbors (row or column direction) are weighted and interpolated, preserving the three‑dimensional continuity of the surface. When needed, the Telea algorithm is applied within a user‑defined radius to further refine the inpainted area. Finally, a localized Gaussian smoothing operation smooths the transition between inpainted and original regions, reducing residual artifacts and improving visual coherence.
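The directional interpolation can be sketched as a 1-D fill along the chosen axis: each masked pixel takes a value interpolated from the nearest valid neighbors in its row (or column). This is a simplified stand-in for the paper's weighted scheme, using plain linear interpolation (`np.interp`); the weighting details and the Telea refinement (available as `cv2.inpaint(..., cv2.INPAINT_TELEA)` in OpenCV) are omitted:

```python
import numpy as np

def directional_inpaint(height, mask, axis=1):
    """Directional neighbor interpolation (simplified sketch).

    Masked pixels are filled by linearly interpolating the nearest
    valid heights along one axis, preserving surface continuity in
    that direction. axis=1 fills along rows, axis=0 along columns.
    """
    if axis == 0:                                  # reuse row logic on the transpose
        return directional_inpaint(height.T, mask.T, axis=1).T
    out = height.astype(float).copy()
    x = np.arange(height.shape[1])
    for i in range(height.shape[0]):
        good = ~mask[i]
        if good.any() and not good.all():
            out[i, ~good] = np.interp(x[~good], x[good], height[i, good])
    return out
```

On a linearly varying surface this recovers the masked heights exactly, illustrating why interpolating along the structurally relevant direction preserves 3-D continuity better than generic 2-D inpainting.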

All components are integrated into a Tkinter‑based GUI that supports real‑time preview of classification, segmentation masks, flattening results, and inpainted surfaces. Users can adjust thresholds, area constraints, expansion parameters, and smoothing sigma via sliders, and can process entire folders in batch mode.

Experimental evaluation on a diverse AFM dataset demonstrates high classification accuracy (>96 %), segmentation IoU of 0.84, and a reduction of root‑mean‑square error in restored height maps by 78 % compared to the original corrupted data. Qualitative comparisons show that the geometry‑aware directional inpainting better preserves nanoscale features than conventional 2‑D image inpainting methods.
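The two quantitative metrics quoted above are standard and easy to pin down; the helpers below show how they are conventionally computed (the paper's exact evaluation code is not given, so this is an illustrative definition, not the authors' implementation):

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-Union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = (pred | gt).sum()
    return (pred & gt).sum() / union if union else 1.0

def rmse_reduction(corrupted, restored, reference):
    """Percent reduction in RMSE of the restored height map,
    relative to the corrupted input, against a clean reference."""
    rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
    return 100.0 * (1.0 - rmse(restored, reference) / rmse(corrupted, reference))
```

Under these definitions, the reported IoU of 0.84 and 78% RMSE reduction correspond to `iou(pred_mask, gt_mask) == 0.84` and `rmse_reduction(corrupted, restored, reference) == 78.0`.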

The authors highlight several contributions: (1) a decision‑gate classification stage that avoids unnecessary processing, (2) a mask‑guided expansion strategy that respects surface geometry, (3) a directional interpolation scheme that maintains 3‑D continuity, and (4) an accessible GUI for both automated high‑throughput analysis and expert‑guided refinement. Limitations include potential quantization loss during PNG conversion, possible reduced generalization to exotic material systems, and reliance on 2‑D mask operations for inherently 3‑D data. Suggested future work includes multi‑scale Transformer‑based segmentation, true volumetric inpainting, and hardware acceleration for large‑scale AFM studies.

