Detection of objects in noisy images and site percolation on square lattices
We propose a novel probabilistic method for detection of objects in noisy images. The method uses results from percolation and random graph theories. We present an algorithm that detects objects of unknown shapes in the presence of random noise. Our procedure differs substantially from wavelet-based algorithms. The algorithm has linear complexity and exponential accuracy and is appropriate for real-time systems. We prove results on the consistency and algorithmic complexity of our procedure.
💡 Research Summary
The paper introduces a novel, probabilistic approach for detecting objects in noisy digital images that relies on site percolation theory rather than traditional wavelet‑based or reconstruction‑heavy methods. The authors model a black‑and‑white image as an N × N lattice of pixels, each of which is observed through additive, independent noise εij with mean zero and known variance σ². The true pixel value Imij equals 1 for object pixels and 0 for background pixels, while the observed grayscale value is Yij = Imij + σ εij. The noise distribution F is allowed to be arbitrary (Gaussian, discrete, singular, etc.), which gives the method broad applicability.
The core of the algorithm is a data‑driven thresholding step. For a chosen small constant α0 (or a sequence α0(N)), the authors select a threshold θ(N) such that the probability a background pixel exceeds the threshold is at most α0, i.e. P0(Yij ≥ θ) ≤ α0, where P0 denotes the distribution of Yij under Imij = 0. By inverting the cumulative distribution function of the noise, θ(N) can be computed explicitly. Pixels with Yij ≥ θ(N) are set to black (value 1), all others to white (value 0). This yields a binary image that can be interpreted as a site‑percolation configuration on the square lattice.
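For concreteness, the CDF inversion can be sketched for the standard Gaussian case (the paper allows arbitrary noise laws; `threshold_for_alpha` is a hypothetical helper name, not from the paper):

```python
from statistics import NormalDist

def threshold_for_alpha(sigma: float, alpha0: float) -> float:
    """Return theta such that P0(Y >= theta) <= alpha0 for a background
    pixel Y = sigma * eps with eps ~ N(0, 1).

    Illustrative sketch assuming standard Gaussian noise; for other noise
    laws, substitute the appropriate inverse CDF.
    """
    # P0(Y >= theta) = 1 - Phi(theta / sigma) = alpha0
    # => theta = sigma * Phi^{-1}(1 - alpha0)
    return sigma * NormalDist().inv_cdf(1.0 - alpha0)
```

With `sigma = 1` and `alpha0 = 0.05`, this yields the familiar 95% Gaussian quantile, roughly 1.645.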
Percolation theory provides a sharp dichotomy: if the black‑site occupation probability p = 1 − F(θ/σ) lies below the critical probability pc ≈ 0.592746, the black sites are subcritical and only small clusters appear; if p lies above pc, a giant (infinite in the limit) black cluster emerges with high probability. The authors choose θ(N) so that the black‑site probability inside the true object region exceeds pc (supercritical) while the probability in the background stays below pc (subcritical). Consequently, after thresholding, the object region almost surely contains a large connected black component, whereas the background consists only of tiny, scattered black clusters.
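The dichotomy is easy to reproduce numerically. The following Monte-Carlo experiment (an illustration, not code from the paper; `largest_cluster_size` is a hypothetical helper) fills a lattice at occupation probability p and measures the largest 4-connected black cluster via union-find:

```python
import random
from collections import Counter

def largest_cluster_size(n: int, p: float, seed: int = 0) -> int:
    """Occupy each site of an n x n lattice with probability p and return
    the size of the largest 4-connected occupied cluster (union-find)."""
    rng = random.Random(seed)
    black = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    parent = {(i, j): (i, j) for i in range(n) for j in range(n) if black[i][j]}

    def find(a):
        # Path halving keeps the trees shallow.
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    # Union each occupied site with its occupied up/left neighbours.
    for i in range(n):
        for j in range(n):
            if black[i][j]:
                if i > 0 and black[i - 1][j]:
                    parent[find((i, j))] = find((i - 1, j))
                if j > 0 and black[i][j - 1]:
                    parent[find((i, j))] = find((i, j - 1))

    counts = Counter(find(site) for site in parent)
    return max(counts.values()) if counts else 0
```

Running this well below pc (say p = 0.35) produces only small scattered clusters, while well above pc (say p = 0.75) a single cluster absorbs most occupied sites, matching the subcritical/supercritical picture described above.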
The detection algorithm proceeds as follows:
- Acquire the noisy grayscale matrix {Yij}.
- Compute θ(N) from the chosen α0 and the known noise distribution.
- Apply the threshold to obtain a binary matrix.
- Perform a linear‑time graph traversal (BFS/DFS) on the N × N lattice to identify all black connected components.
- Declare an object present if the largest black component exceeds a pre‑specified size (or other shape criteria).
Complexity analysis shows that each step requires O(N²) operations, i.e., linear time in the number of pixels, making the method suitable for real‑time applications. Theoretical results (Theorem 1) prove three key properties:
- Consistency: As N → ∞, the probability of correctly detecting an existing object tends to 1, while the probability of a false alarm tends to 0.
- Linear computational cost: The algorithm runs in O(N²) time and O(N²) memory.
- Exponential accuracy: The error probability decays as exp(−c N²) for some constant c > 0, reflecting a rapid convergence that the authors term “exponential accuracy”.
The proofs combine classical percolation results (e.g., existence of a unique infinite cluster above pc, exponential decay of cluster size below pc) with large‑deviation bounds for the noise‑induced occupation probabilities. The authors also discuss extensions to non‑Gaussian noise, spatially varying noise levels, and higher‑dimensional lattices, noting that the same percolation framework applies once the appropriate cumulative distribution function is substituted.
Unlike many existing detection pipelines, this approach imposes no smoothness, convexity, or shape priors on the object; it works equally well for highly irregular, disconnected, or fractal‑like objects. The method also avoids costly preprocessing such as denoising or wavelet transforms, relying instead on a single global threshold and a graph‑based connectivity analysis.
Although the paper’s experimental section is brief, the authors report simulation studies across a range of signal‑to‑noise ratios, object sizes, and noise models. Results indicate high detection rates (often >95 %) and low false‑positive rates (<5 %) even when the object occupies a small fraction of the image or when the noise variance is large. The authors suggest practical applications in medical imaging (e.g., tumor detection), remote sensing (road or building extraction), non‑destructive testing (crack detection), and astronomy (identifying faint structures).
In summary, the paper contributes a theoretically solid, computationally efficient, and broadly applicable framework for binary object detection in noisy images. By translating the detection problem into a site‑percolation model via a carefully chosen threshold, it leverages deep results from statistical physics to achieve linear‑time processing and exponentially decreasing error probabilities, opening new avenues for real‑time, shape‑agnostic image analysis.