Graph Signal Denoising Using Regularization by Denoising and Its Parameter Estimation
In this paper, we propose an interpretable denoising method for graph signals using regularization by denoising (RED). RED is a technique developed for image restoration that uses an efficient (and sometimes black-box) denoiser in the regularization term of the optimization problem. With RED, optimization problems can be designed that use the denoiser explicitly, and the gradient of the regularization term can be computed easily under mild conditions. We adapt RED to graph signal denoising beyond image processing. We show that many graph signal denoisers, including graph neural networks, satisfy the conditions for RED either theoretically or in practice. We also study the effectiveness of RED from a graph filter perspective. Furthermore, we propose supervised and unsupervised parameter estimation methods based on deep algorithm unrolling; these methods aim to enhance the algorithm's applicability, particularly in the unsupervised setting. Denoising experiments on synthetic and real-world datasets show that our proposed method improves denoising accuracy in terms of mean squared error compared with existing graph signal denoising methods.
💡 Research Summary
This paper introduces a novel framework for graph‑signal denoising by extending the Regularization by Denoising (RED) methodology, originally devised for image restoration, to the graph domain. RED incorporates an arbitrary denoiser directly into the regularization term of an optimization problem, yielding an explicit objective function that remains interpretable and amenable to standard gradient‑based solvers. The key theoretical requirement for RED is that the embedded denoiser satisfies two mild conditions: (i) local homogeneity (D(c·x) ≈ c·D(x) for scaling factors c close to 1) and (ii) strong passivity (the spectral radius of the Jacobian ∇D(x) does not exceed 1). Under these conditions the gradient of the regularizer simplifies to x − D(x), guaranteeing convergence to a global optimum when the overall problem is convex.
The authors first examine whether popular graph‑signal denoisers meet these criteria. They prove analytically that graph Laplacian regularization (LR) – whether the Laplacian is fixed, constructed from distance‑based edge weights, or estimated via a Gaussian Markov Random Field – satisfies both homogeneity and passivity. For modern graph neural network (GNN) based denoisers, specifically Graph Attention Networks (GAT) and the plug‑and‑play ADMM (PnP‑ADMM) scheme, the paper provides extensive empirical evidence: scaling factors close to 1 produce outputs that are scaled almost identically, and the ℓ₂‑norm ratio ‖D(y)‖₂/‖y‖₂ never exceeds 1 across synthetic and real‑world datasets, confirming strong passivity. Consequently, a wide class of graph denoisers can be safely embedded within RED.
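As an illustration of the LR case, the sketch below builds the closed-form Laplacian-regularization denoiser D(y) = (I + γL)⁻¹y on a small path graph (the graph and the value of γ are illustrative choices, not the paper's setup) and verifies that the ℓ₂-norm ratio stays below 1:

```python
import numpy as np

# Combinatorial Laplacian of a 10-node path graph (illustrative example).
n = 10
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1  # degree-1 endpoints

gamma = 0.5  # illustrative regularization weight

def D_lr(y):
    # argmin_x ||x - y||^2 + gamma * x^T L x  =>  x = (I + gamma L)^{-1} y
    return np.linalg.solve(np.eye(n) + gamma * L, y)

rng = np.random.default_rng(1)
y = rng.standard_normal(n)
ratio = np.linalg.norm(D_lr(y)) / np.linalg.norm(y)
print(ratio <= 1.0)  # True: (I + gamma L)^{-1} has eigenvalues in (0, 1]
```

Because L is symmetric positive semidefinite, (I + γL)⁻¹ is a contraction, so passivity holds for any γ > 0 — consistent with the paper's analytical result for LR.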
With the RED objective minₓ ½‖x − y‖₂² + α·½ xᵀ(x − D_graph(x)), the gradient becomes (x − y) + α(x − D_graph(x)). This leads to a simple gradient‑descent update x^{k+1} = x^{k} − ε[(x^{k} − y) + α(x^{k} − D_graph(x^{k}))], where ε is the step size.
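A minimal sketch of this gradient-descent loop is shown below, using a Laplacian-regularization denoiser as D_graph; the graph, noise level, step size ε, and weight α are all illustrative choices rather than the paper's tuned values:

```python
import numpy as np

# Path-graph Laplacian as a stand-in graph (illustrative, not from the paper).
n = 10
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1

def D_graph(x, gamma=0.5):
    # Laplacian-regularization denoiser used as the RED prior.
    return np.linalg.solve(np.eye(n) + gamma * L, x)

rng = np.random.default_rng(2)
x_true = np.sin(np.linspace(0, np.pi, n))   # smooth graph signal
y = x_true + 0.3 * rng.standard_normal(n)   # noisy observation

alpha, eps = 1.0, 0.4
x = y.copy()
for _ in range(100):
    grad = (x - y) + alpha * (x - D_graph(x))  # gradient of the RED objective
    x = x - eps * grad

# The estimate is smoother than y, and typically closer to x_true.
print(np.mean((y - x_true) ** 2), np.mean((x - x_true) ** 2))
```

Since the objective is strongly convex here (D_graph is a contraction), the iteration converges to the unique minimizer; in the paper, the denoiser and step parameters are instead learned via deep algorithm unrolling.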