Implementation of neural network operators with applications to remote sensing data
In this paper, we provide two algorithms based on the theory of multidimensional neural network (NN) operators activated by hyperbolic tangent sigmoidal functions. Theoretical results are recalled to justify the performance of the implemented algorithms. Specifically, the first algorithm models multidimensional signals (such as digital images), while the second addresses the rescaling and enhancement of the considered data. We discuss several applications of the NN-based algorithms to the modeling and rescaling/enhancement of remote sensing (RS) data (represented as images), with numerical experiments conducted on a selection of RS images from the open-access RETINA dataset. A comparison with classical interpolation methods, such as bilinear and bicubic interpolation, shows that the proposed algorithms outperform them, particularly in terms of the Structural Similarity Index (SSIM).
Research Summary
The paper presents two novel algorithms for remote sensing (RS) image processing that are grounded in the theory of multidimensional neural network (NN) operators, specifically Kantorovich‑type operators activated by the hyperbolic tangent sigmoid function. The authors first recall the mathematical foundations of NN operators: a sigmoidal activation σ belonging to class D (monotone, symmetric, C², with a prescribed decay at –∞) generates a one‑dimensional density ϕσ, and its tensor product Ψσ is used as a weighting kernel in the definition of the multivariate operator K_{d,n}. For a bounded function f on a d‑dimensional rectangle I_d, the operator K_{d,n}(f,·) computes, at each grid point x, a weighted average of f over small hyper‑rectangles of size 1/n, where the weights are given by Ψσ(nx−k).
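The operator just described can be sketched in a few lines. The following is a minimal one-dimensional illustration (not the authors' code): it assumes the standard density construction φσ(x) = ½(σ(x+1) − σ(x−1)) from the NN-operator literature, works on [0, 1], and approximates the cell averages by midpoint quadrature; the function names are hypothetical.

```python
import numpy as np

def sigma_h(x):
    # hyperbolic tangent sigmoid: sigma_h(x) = (tanh(x) + 1) / 2
    return 0.5 * (np.tanh(x) + 1.0)

def phi_sigma(x):
    # density generated by sigma_h; the construction
    # phi(x) = (sigma(x+1) - sigma(x-1)) / 2 is assumed here
    return 0.5 * (sigma_h(x + 1.0) - sigma_h(x - 1.0))

def K_n(f, x, n, quad_pts=16):
    # One-dimensional Kantorovich-type NN operator on [0, 1]:
    #   K_n(f, x) = sum_k m_k(f) phi(n x - k) / sum_k phi(n x - k),
    # where m_k(f) is the mean of f over the cell [k/n, (k+1)/n].
    ks = np.arange(n)
    # midpoint-rule quadrature for the cell means m_k(f)
    t = (np.arange(quad_pts) + 0.5) / quad_pts
    cells = (ks[:, None] + t[None, :]) / n
    means = f(cells).mean(axis=1)
    w = phi_sigma(n * x - ks)  # weights phi(n x - k)
    return float((w * means).sum() / w.sum())
```

For a smooth test function such as f(u) = u², the value K_n(f, 1/2) approaches f(1/2) = 1/4 as n increases, in line with the convergence theorems recalled in the text.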
Key theoretical results are restated: pointwise convergence at continuity points (Theorem 2.3), uniform convergence for continuous functions, L^p‑norm convergence for 1 ≤ p < ∞ (Theorem 2.5), and quantitative error bounds expressed through the modulus of continuity ω(f,·) (Theorem 2.6). The error rate depends on the decay exponent α in condition (D3); for the hyperbolic tangent sigmoid σ_h(x) = (tanh x + 1)/2, α can be taken arbitrarily large, yielding an O(1/n) approximation order in the uniform norm.
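As an illustrative check of this behavior (a sketch, not an experiment from the paper), the snippet below evaluates a one-dimensional Kantorovich-type operator built from σ_h on the Lipschitz function |u − 1/2|, whose kink at 1/2 makes the modulus of continuity, rather than smoothness, govern the error; the density construction φσ(x) = ½(σ(x+1) − σ(x−1)) is assumed, and the names are hypothetical.

```python
import numpy as np

def K_n(f, x, n, quad_pts=16):
    # Kantorovich-type NN operator on [0, 1] built from the tanh sigmoid;
    # phi(x) = (sigma(x+1) - sigma(x-1)) / 2 is the assumed density.
    sigma = lambda t: 0.5 * (np.tanh(t) + 1.0)
    phi = lambda t: 0.5 * (sigma(t + 1.0) - sigma(t - 1.0))
    ks = np.arange(n)
    t = (np.arange(quad_pts) + 0.5)[None, :] / quad_pts
    means = f((ks[:, None] + t) / n).mean(axis=1)  # cell averages m_k(f)
    w = phi(n * x - ks)
    return float((w * means).sum() / w.sum())

# Lipschitz test function with a kink at 1/2
f = lambda u: np.abs(u - 0.5)
errs = {n: abs(K_n(f, 0.5, n) - f(0.5)) for n in (20, 80)}
```

Quadrupling n here shrinks the pointwise error roughly in proportion, consistent with the O(1/n) order stated above for Lipschitz functions.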
Algorithm 1 (data modeling) treats a gray‑scale image A of size M × N as a step function on a two‑dimensional rectangle, to which the operator K_{2,n} is then applied.
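One possible reading of this construction in code (a sketch under stated assumptions, not the paper's implementation): since the image pixels are exactly the cell means of the step function, and the kernel Ψσ is a tensor product, the two-dimensional operator factorizes into two one-dimensional weightings. The choice n = M for rows and n = N for columns (one kernel cell per pixel), the sampling grid, and the name `nn_rescale` are all assumptions made for illustration.

```python
import numpy as np

def sigma_h(x):
    # hyperbolic tangent sigmoid: sigma_h(x) = (tanh(x) + 1) / 2
    return 0.5 * (np.tanh(x) + 1.0)

def phi(x):
    # assumed density: phi(x) = (sigma(x+1) - sigma(x-1)) / 2
    return 0.5 * (sigma_h(x + 1.0) - sigma_h(x - 1.0))

def nn_rescale(img, scale):
    # View an M x N grayscale image as a step function on [0,1]^2, so the
    # Kantorovich cell means m_k are the pixel values themselves.  Because
    # Psi_sigma is a tensor product, K_{2,n} factorizes into two 1-D
    # weightings; here n = M (rows) and n = N (columns), and the operator
    # is sampled on a finer (scale*M) x (scale*N) grid.
    M, N = img.shape
    ys = (np.arange(scale * M) + 0.5) / (scale * M)    # fine grid, rows
    xs = (np.arange(scale * N) + 0.5) / (scale * N)    # fine grid, cols
    Wy = phi(M * ys[:, None] - np.arange(M)[None, :])  # phi(M y - i)
    Wx = phi(N * xs[:, None] - np.arange(N)[None, :])  # phi(N x - j)
    num = Wy @ img @ Wx.T
    den = Wy.sum(axis=1)[:, None] * Wx.sum(axis=1)[None, :]
    return num / den
```

Because the weights are positive and normalized, each output sample is a convex combination of pixel values, so the rescaled image stays within the original intensity range.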