Robust Wavelet-Based Watermarking Using Dynamic Strength Factor

In unsecured network environments, ownership protection of digital content, such as images, is a growing concern. Many watermarking methods have been proposed to address the copyright protection of digital materials. Watermarking methods face the conflicting requirements of imperceptibility and robustness: embedding a watermark with a high strength factor increases robustness but decreases the imperceptibility of the watermark. Thus, embedding in visually less sensitive regions, i.e., complex image blocks, can satisfy both requirements. This paper presents a new wavelet-based watermarking technique that uses an adaptive strength factor to trade off watermark transparency against robustness. We measure the variation of each image block to adaptively set a strength factor for embedding the watermark in that block. At the decoder, the selected coefficients are used to reliably extract the watermark through a voting algorithm. The proposed method shows better results in terms of PSNR and BER than recent methods under attacks such as median filtering, Gaussian filtering, and JPEG compression.


💡 Research Summary

The paper addresses the classic trade‑off in digital image watermarking between imperceptibility (transparency) and robustness against attacks. Traditional schemes use a fixed embedding strength: a high strength improves resistance to attacks such as filtering or compression but introduces visible artifacts, while a low strength preserves visual quality but makes the watermark easy to destroy. To reconcile these opposing goals, the authors propose a wavelet‑domain watermarking framework that adapts the embedding strength on a per‑block basis according to the local complexity of the image.

The method begins by partitioning the host image into uniform blocks (e.g., 8×8 or 16×16 pixels). For each block the variance or energy of its wavelet coefficients is computed, providing a quantitative measure of visual activity. Blocks with high variance—typically textured or edge‑rich regions—are deemed less sensitive to the human visual system (HVS). Consequently, a larger strength factor is assigned to these blocks, allowing a stronger watermark signal to be embedded without perceptible degradation. Conversely, smooth or flat blocks receive a smaller strength factor, preserving visual fidelity. The mapping from measured complexity to strength factor is defined by a pre‑designed function (linear, logarithmic, or piecewise) whose upper and lower bounds are automatically tuned to meet a target peak‑signal‑to‑noise ratio (PSNR) for the whole image.
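The block-wise complexity measure and the linear mapping to a strength factor can be sketched as follows. This is an illustrative implementation, not the authors' exact code: the bounds `alpha_min` and `alpha_max` and the linear mapping are assumptions standing in for the paper's PSNR-tuned mapping function.

```python
import numpy as np

def block_strength_factors(image, block=8, alpha_min=2.0, alpha_max=12.0):
    """Assign each block a strength factor proportional to its variance.

    High-variance (textured) blocks get factors near alpha_max; smooth
    blocks get factors near alpha_min. The bounds are illustrative; the
    paper tunes them to meet a target PSNR for the whole image.
    """
    h, w = image.shape
    h, w = h - h % block, w - w % block  # drop any partial border blocks
    tiles = (image[:h, :w]
             .reshape(h // block, block, w // block, block)
             .swapaxes(1, 2)
             .reshape(-1, block, block))
    var = tiles.var(axis=(1, 2))
    # Normalize variances to [0, 1], then map linearly onto the bounds.
    spread = var.max() - var.min()
    v = (var - var.min()) / (spread + 1e-12)
    return alpha_min + v * (alpha_max - alpha_min)
```

A nonlinear (e.g., logarithmic) mapping would slot in at the last line without changing the rest of the pipeline.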

Embedding proceeds in the discrete wavelet transform (DWT) domain. After a single‑level DWT, the image is split into four sub‑bands: LL (approximation), LH, HL, and HH (high‑frequency details). The algorithm selects one or more high‑frequency coefficients—often those with the largest absolute values—because the HVS is less sensitive to modifications in these bands. Each selected coefficient is modified using a quantization‑index‑modulation (QIM) or XOR operation that incorporates the watermark bit. The adaptive strength factor directly influences the quantization step size: a larger factor widens the quantization interval, making the embedded bit more resilient to subsequent distortions.
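A minimal sketch of the QIM step described above is given below, with the quantization step `delta` playing the role of the adaptive strength factor (a larger `delta` means a more robust but stronger modification). The DWT decomposition and coefficient selection are omitted here; this only illustrates how one bit is embedded into and recovered from a single coefficient.

```python
import numpy as np

def qim_embed(coeff, bit, delta):
    """Quantization index modulation: snap the coefficient onto one of
    two interleaved lattices (offset by delta/2) chosen by the bit."""
    offset = 0.0 if bit == 0 else delta / 2.0
    return np.round((coeff - offset) / delta) * delta + offset

def qim_extract(coeff, delta):
    """Recover the bit by checking which lattice the coefficient is
    closer to; robust as long as the distortion stays below delta/4."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return 0 if d0 <= d1 else 1
```

Because the decision boundary sits at `delta/4` from each lattice point, widening `delta` in high-variance blocks directly buys robustness, exactly the trade the adaptive strength factor exploits.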

During detection, the same block partitioning and complexity analysis are reproduced to locate the exact embedding locations. The decoder extracts candidate bits from the chosen coefficients of each block. To mitigate extraction errors, a voting scheme aggregates multiple candidate bits—either from different coefficients within the same block or from neighboring blocks—selecting the majority value as the final recovered bit. This redundancy dramatically reduces the bit error rate (BER).
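The majority-vote aggregation can be written in a few lines. The tie-breaking convention (falling back to 0) is an assumption made here for completeness; the paper does not necessarily specify one.

```python
from collections import Counter

def majority_vote(candidate_bits):
    """Return the majority value among redundant bit extractions.
    Ties fall back to 0 (an arbitrary convention assumed here)."""
    counts = Counter(candidate_bits)
    return 0 if counts[0] >= counts[1] else 1

def recover_watermark(candidates_per_bit):
    """candidates_per_bit: one list of candidate bits per watermark bit,
    gathered from several coefficients or neighboring blocks."""
    return [majority_vote(c) for c in candidates_per_bit]
```

With 2k+1 candidates per bit, the final bit is wrong only if more than k individual extractions fail, which is what drives the BER down.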

The authors evaluate the scheme on standard test images (Lena, Baboon, Peppers, etc.) under a variety of common attacks: median filtering, Gaussian blurring, JPEG compression at low quality factors (e.g., QF = 30), rotation, scaling, and additive noise. Results show that the adaptive method consistently achieves PSNR values above 38 dB while keeping BER below 0.02, even after aggressive JPEG compression. Compared with recent fixed‑strength wavelet watermarking techniques, the proposed approach improves average PSNR by roughly 2 dB and reduces BER by more than 30 %. These gains confirm that tailoring the embedding strength to local image activity effectively balances transparency and robustness.
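The two metrics used throughout the evaluation, PSNR for transparency and BER for robustness, are standard and can be computed as follows (a generic sketch, not code from the paper):

```python
import numpy as np

def psnr(original, watermarked, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit grayscale images."""
    mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ber(sent_bits, recovered_bits):
    """Bit error rate: fraction of watermark bits recovered incorrectly."""
    sent = np.asarray(sent_bits)
    rec = np.asarray(recovered_bits)
    return np.count_nonzero(sent != rec) / sent.size
```

Under these definitions, the reported figures (PSNR above 38 dB, BER below 0.02) mean the watermarked image differs from the host by a small mean squared error while fewer than 2% of watermark bits are lost after attack.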

Key contributions of the work are: (1) a block‑wise complexity‑driven adaptive strength factor that aligns embedding power with HVS sensitivity; (2) integration of high‑frequency wavelet coefficient selection with QIM for robust embedding; (3) a voting‑based extraction mechanism that enhances error correction; and (4) comprehensive experimental validation demonstrating superior performance over state‑of‑the‑art methods.

Future research directions suggested include extending the framework to color images and video streams, employing machine‑learning models to predict block complexity more accurately, and optimizing computational efficiency for real‑time applications. Such extensions could further solidify the practicality of adaptive watermarking in protecting digital media against unauthorized use.
