On the $L^p$-Convergence and Denoising Performance of Durrmeyer-Type Max-Min Neural Network Operators

In this paper, we investigate Durrmeyer-type generalizations of maximum-minimum neural network operators. The primary objective of this study is to establish the convergence of these operators in the $L^{p}$ norm for functions $f\in L^{p}([a,b],[0,1])$ with $1\leq p<\infty$. To this end, we analyze the properties of sigmoidal functions and maximum-minimum operations, subsequently establishing pointwise convergence of the proposed operator as well as convergence in the supremum and $L^{p}$ norms. Furthermore, we derive quantitative estimates for the rates of convergence. In the applications section, numerical and graphical examples demonstrate that the proposed Durrmeyer-type operators provide smoother approximations than Kantorovich-type and standard max-min operators. Finally, we highlight the superior filtering performance of these operators in signal analysis, validating their effectiveness in both approximation and data processing tasks.


💡 Research Summary

The paper introduces a novel Durrmeyer-type neural network operator that blends sigmoidal activation functions with max-min aggregation. Starting from the classical Durrmeyer operators, which are integral-based linear approximators, the authors replace the linear combination of sampled values by a maximum, taken over the nodes, of the pointwise minimum of two quantities: (i) a normalized local average of the target function over a small interval determined by a kernel $\chi$, and (ii) the centered bell-shaped kernel $\phi_\sigma$ derived from a sigmoidal function $\sigma$. The resulting operator, denoted $D_n^{(m)}(f;x)$, is defined for bounded functions $f$ as a normalized max-min aggregation of these two quantities.
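
A plausible form of this definition, following the standard construction of max-min neural network operators (the node set $\{k/n\}$, the index range, and the outer normalization below are assumptions rather than the paper's exact statement), is

$$
D_n^{(m)}(f;x) \;=\; \frac{\displaystyle\bigvee_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\left[\phi_\sigma(nx-k)\,\wedge\,\frac{\int_a^b \chi(nt-k)\,f(t)\,dt}{\int_a^b \chi(nt-k)\,dt}\right]}{\displaystyle\bigvee_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\phi_\sigma(nx-k)}, \qquad x\in[a,b],
$$

where $\bigvee$ denotes the maximum over the nodes and $\wedge$ the pointwise minimum; the integral ratio is the normalized local average from (i), and $\phi_\sigma$ is the bell-shaped kernel from (ii).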


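To make the construction concrete, here is a minimal Python sketch, not the authors' code: the logistic-bell kernel $\phi_\sigma$, the uniform averaging window standing in for $\chi$, and all parameter values are assumptions. It evaluates an operator of the form above on a noisy $[0,1]$-valued signal and compares the error before and after smoothing, in the spirit of the denoising experiments the abstract describes.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def phi_sigma(x):
    # Bell shape from a sigmoid, rescaled to peak value 1 at x = 0
    # (one common construction; the paper's exact kernel may differ).
    bell = sigmoid(x + 1.0) - sigmoid(x - 1.0)
    return bell / (sigmoid(1.0) - sigmoid(-1.0))

def durrmeyer_max_min(f_samples, t_grid, x, n):
    """Durrmeyer-type max-min operator on [0, 1], evaluated at x.

    Per node k/n: (i) the local average of f over the window
    [(k - 1/2)/n, (k + 1/2)/n] (a uniform chi kernel is assumed,
    truncated at the boundary), and (ii) the bell value
    phi_sigma(n*x - k); take the min of the two per node, the max
    over nodes, and normalize by the max of the bell values.
    """
    num = den = 0.0
    for k in range(n + 1):
        w = phi_sigma(n * x - k)
        mask = (t_grid >= (k - 0.5) / n) & (t_grid <= (k + 0.5) / n)
        avg = f_samples[mask].mean() if mask.any() else 0.0
        num = max(num, min(w, avg))
        den = max(den, w)
    return num / den if den > 0.0 else 0.0

# Usage: smooth a noisy [0, 1]-valued signal and compare errors.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
clean = 0.5 + 0.35 * np.sin(2.0 * np.pi * t)
noisy = np.clip(clean + 0.05 * rng.standard_normal(t.size), 0.0, 1.0)
smoothed = np.array([durrmeyer_max_min(noisy, t, x, n=40) for x in t])
print("RMSE noisy   :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE smoothed:", np.sqrt(np.mean((smoothed - clean) ** 2)))
```

The per-node minimum against the bell caps isolated spikes, while the maximum over nodes keeps the best-supported local average, which is one intuition for the smoothing behaviour the paper reports.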