RemedyGS: Defend 3D Gaussian Splatting against Computation Cost Attacks
📝 Abstract
As a mainstream technique for 3D reconstruction, 3D Gaussian splatting (3DGS) has been applied in a wide range of applications and services. Recent studies have revealed critical vulnerabilities in this pipeline and introduced computation cost attacks that lead to malicious resource occupancy and even denial-of-service (DoS) conditions, thereby hindering the reliable deployment of 3DGS. In this paper, we propose the first effective and comprehensive black-box defense framework, named RemedyGS, against such computation cost attacks, safeguarding 3DGS reconstruction systems and services. Our pipeline comprises two key components: a detector to identify the attacked input images with poisoned textures and a purifier to recover the benign images from their attacked counterparts, mitigating the adverse effects of these attacks. Moreover, we incorporate adversarial training into the purifier to enforce distributional alignment between the recovered and original natural images, thereby enhancing the defense efficacy. Experimental results demonstrate that our framework effectively defends against white-box, black-box, and adaptive attacks in 3DGS systems, achieving state-of-the-art performance in both safety and utility.
📄 Content
3D reconstruction, which aims to synthesize photorealistic novel views from multi-view input images, plays a pivotal role in various applications, including augmented reality (AR), virtual reality (VR) [1], and holographic communication [18,43]. Recently, 3D Gaussian splatting (3DGS) [21] has emerged as a leading approach for 3D reconstruction. By representing scenes as a set of 3D Gaussian primitives, 3DGS enables explicit modeling that significantly accelerates rendering while delivering high-quality novel view synthesis. This combination of efficiency and visual fidelity has made it attractive for commercial applications, with companies such as Spline [39], KIRI [23], and Polycam [38] providing large-scale paid services that reconstruct 3D scenes and synthesize novel views from user-uploaded images.
The superior reconstruction capability of 3DGS stems from its adaptive density control mechanism, which introduces new Gaussians to under-reconstructed areas while pruning low-contribution ones until convergence. This adaptive densification allows 3DGS to effectively capture fine geometric details and complex textures in the scene. Nevertheless, it also raises significant security concerns. Attackers can exploit this process by manipulating input images to trigger an excessive increase in the number of Gaussians, significantly increasing computational costs. A recent study, Poison-splat [31], has revealed this critical vulnerability, demonstrating how adversaries induce dramatic escalations in GPU memory usage, training duration, and rendering latency through this new type of computation cost attack. Instead of directly maximizing the number of Gaussians, attackers sharpen 3D objects by increasing the total variation score, which indirectly causes 3DGS to allocate more Gaussians. These attacks can be launched by malicious users posing as legitimate ones or by tampering with images uploaded by others, effectively monopolizing computational resources and causing denial-of-service (DoS) conditions. Thus, the stability, reliability, and availability of real-world 3DGS systems are severely threatened.
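To make the attack surrogate concrete, the sharpness proxy described above can be illustrated with an image's total variation: the sum of absolute differences between neighboring pixels. The following is a minimal NumPy sketch (not code from the paper; `total_variation` is a hypothetical helper) of the anisotropic L1 variant, showing why injecting high-frequency texture raises the score that drives densification.

```python
import numpy as np

def total_variation(img: np.ndarray) -> float:
    """Anisotropic (L1) total variation of a 2D grayscale image:
    the sum of absolute differences between vertically and
    horizontally adjacent pixels. Sharp, noisy textures score high."""
    dh = np.abs(np.diff(img, axis=0)).sum()  # vertical neighbor diffs
    dw = np.abs(np.diff(img, axis=1)).sum()  # horizontal neighbor diffs
    return float(dh + dw)

# A constant image has zero TV; injected high-frequency texture raises it.
flat = np.full((8, 8), 0.5)
noisy = flat + 0.1 * np.random.default_rng(0).standard_normal((8, 8))
print(total_variation(flat))   # 0.0
print(total_variation(noisy) > total_variation(flat))  # True
```

An attacker who perturbs input images to increase this kind of score leaves the image content largely intact while still driving 3DGS to spawn many more Gaussians.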
Several basic defense mechanisms have been proposed, such as image smoothing and limiting the number of Gaussians; however, these are largely ineffective. Specifically, image smoothing, which employs filters such as Gaussian or bilateral filtering [41], aims to preprocess input images to mitigate the effects of noise introduced by attackers. Nevertheless, since the attack process often involves complex non-linear transformations, simple linear filters are ineffective at removing poisoned textures. Moreover, limiting the number of Gaussians during 3DGS training may compromise the system's adaptability and representation quality, especially in complex scenes. These straightforward strategies result in an unsatisfactory trade-off between security and utility, often leading to a degradation of reconstruction quality by up to 10 dB [31]. This degradation arises from two primary reasons. First, these methods cannot differentiate between clean and poisoned images, which results in a uniform degradation in the performance of all users. Second, they fail to distinguish original textures from injected noise, thereby obscuring fine details essential for high-quality reconstructions. These limitations motivate us to explore a meticulously designed defense method that ensures reliable and effective applications of 3DGS.

Figure 1. The overview of our proposed defense framework RemedyGS against 3DGS computation cost attacks, where we visualize the input RGB image and 3DGS point cloud positions. The computational cost increases with the density of the 3DGS point cloud. Our method effectively safeguards 3DGS systems.
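The linear-smoothing baseline criticized above can be sketched in a few lines of NumPy. This is an illustrative separable Gaussian blur (an assumption about how such a preprocessing defense would be implemented, not the paper's method or exact baseline; `gaussian_blur` is a hypothetical name), which shows the core problem: the filter suppresses high-frequency noise and genuine fine texture alike.

```python
import numpy as np

def gaussian_blur(img: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Separable Gaussian smoothing as an input preprocessing step.
    Reflection padding keeps the output the same size as the input."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-(x ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()  # normalize so flat regions are preserved

    def smooth_axis(a: np.ndarray, axis: int) -> np.ndarray:
        pad = [(radius, radius) if i == axis else (0, 0)
               for i in range(a.ndim)]
        padded = np.pad(a, pad, mode="reflect")
        return np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode="valid"), axis, padded)

    # Blur rows, then columns (separability of the 2D Gaussian).
    return smooth_axis(smooth_axis(img, 0), 1)

# The filter damps high-frequency content indiscriminately: noise and
# legitimate fine detail are attenuated together.
noisy = np.random.default_rng(0).standard_normal((32, 32))
smoothed = gaussian_blur(noisy, sigma=1.5)
print(smoothed.shape)                # (32, 32)
print(smoothed.std() < noisy.std())  # True
```

Because this operation is linear and applied uniformly, it cannot invert the non-linear attack transformation, and it penalizes clean inputs just as much as poisoned ones, which is exactly the trade-off the learnable, detector-gated purifier is designed to avoid.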
In this paper, we propose RemedyGS, a comprehensive black-box defense framework to protect 3DGS systems against white-box computation cost attacks while preserving high reconstruction utility. The pipeline of RemedyGS is illustrated in Figure 1. It consists of two key components: 1) a detector that differentiates between attacked and safe input images, and 2) a learnable purifier that recovers normal images from attacked ones. Given that universal smoothing can significantly compromise the quality of reconstructions for legitimate users, we develop a detector network to identify poisoned images, ensuring that only those images flagged as compromised undergo further processing. This targeted approach preserves the utility of services for normal users while addressing the negative impacts of computation cost attacks on compromised inputs. Unlike traditional image smoothing methods, which struggle to reverse the complex transformations associated with attacked images, our learnable purifier is designed to learn the intricate non-linear inverse transformations necessary for effective recovery. This enables the purifier to achieve high-quality restoration of benign images, thereby enhancing safety under more
This content is AI-processed based on ArXiv data.