SharpTimeGS: Sharp and Stable Dynamic Gaussian Splatting via Lifespan Modulation

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Novel view synthesis of dynamic scenes is fundamental to achieving photorealistic 4D reconstruction and immersive visual experiences. Recent progress in Gaussian-based representations has significantly improved real-time rendering quality, yet existing methods still struggle to maintain a balance between long-term static and short-term dynamic regions in both representation and optimization. To address this, we present SharpTimeGS, a lifespan-aware 4D Gaussian framework that achieves temporally adaptive modeling of both static and dynamic regions under a unified representation. Specifically, we introduce a learnable lifespan parameter that reformulates temporal visibility from a Gaussian-shaped decay into a flat-top profile, allowing primitives to remain consistently active over their intended duration and avoiding redundant densification. In addition, the learned lifespan modulates each primitive’s motion, reducing drift in long-lived static points while retaining unrestricted motion for short-lived dynamic ones. This effectively decouples motion magnitude from temporal duration, improving long-term stability without compromising dynamic fidelity. Moreover, we design a lifespan-velocity-aware densification strategy that mitigates optimization imbalance between static and dynamic regions by allocating more capacity to regions with pronounced motion while keeping static areas compact and stable. Extensive experiments on multiple benchmarks demonstrate that our method achieves state-of-the-art performance while supporting real-time rendering up to 4K resolution at 100 FPS on one RTX 4090.


💡 Research Summary

SharpTimeGS introduces a lifespan‑aware 4D Gaussian representation that simultaneously addresses the long‑standing issues of temporal visibility decay and motion drift in dynamic Gaussian splatting. Each Gaussian primitive is equipped with two learnable parameters: a lifespan variance σₜ and a temporal radius r. These parameters reshape the traditional Gaussian‑shaped opacity curve into a flat‑top profile: the primitive remains fully opaque for |t‑T| ≤ r and drops sharply to zero outside this interval. This eliminates the need for multiple overlapping Gaussians to approximate a step‑like lifespan, thereby reducing redundant densification and preserving a clean temporal boundary.
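The flat‑top profile described above can be sketched numerically. The plateau (|t − T| ≤ r fully opaque) follows the summary directly; the exact shape of the "sharp drop" outside the plateau is not specified, so a Gaussian decay with variance σₜ² is used here as an illustrative assumption:

```python
import numpy as np

def flat_top_opacity(t, center, radius, sigma_t):
    """Temporal visibility with a flat top: fully opaque for
    |t - center| <= radius, then decaying beyond the plateau.
    The Gaussian fall-off outside the plateau is an assumption;
    the summary only says opacity 'drops sharply to zero'."""
    # Distance past the edge of the plateau (zero inside it).
    d = np.maximum(np.abs(t - center) - radius, 0.0)
    return np.exp(-0.5 * (d / sigma_t) ** 2)
```

With a small σₜ the profile approximates a step function over the primitive's lifespan, which is why a single primitive suffices where a plain Gaussian decay would require several overlapping ones.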

Motion is modulated by a scaling function f(σₜ, r) = max(1.0, ((σₜ + r)/2)²). For long‑lived, static primitives (large σₜ and r), f becomes large, effectively nullifying the velocity term (v/f ≈ 0) and freezing the point in space. Conversely, short‑lived, dynamic primitives obtain a small f, allowing them to move freely. By coupling lifespan directly into both opacity and motion equations, SharpTimeGS achieves a unified formulation that balances static stability with dynamic expressiveness.
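A minimal sketch of this modulation, using the scaling function exactly as stated in the summary; the linear motion model x(t) = x₀ + (v/f)·t is an illustrative assumption for how the damped velocity enters the position:

```python
import numpy as np

def motion_scale(sigma_t, radius):
    # f(sigma_t, r) = max(1.0, ((sigma_t + r) / 2)^2), as given in the summary.
    return np.maximum(1.0, ((sigma_t + radius) / 2.0) ** 2)

def position_at(x0, v, t, sigma_t, radius):
    """Lifespan-modulated linear motion (the linear model is an assumption).
    Long-lived primitives get a large f, so v / f ~ 0 and the point is
    effectively frozen; short-lived ones keep f = 1 and move freely."""
    return x0 + (v / motion_scale(sigma_t, radius)) * t
```

For example, a primitive with σₜ = r = 10 gets f = 100 and barely moves, while one with σₜ = r = 0.1 gets f = 1 and retains its full velocity.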

A velocity‑aware initialization separates static and dynamic points at the start of training: static points receive long lifespans and near‑zero velocities, while dynamic points are assigned short lifespans and initial velocities. This prior dramatically stabilizes early optimization.
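The initialization above can be sketched as a simple speed threshold. The threshold value, the short-lifespan constant, and the helper name are illustrative assumptions; the paper's actual criteria are not given in the summary:

```python
import numpy as np

def init_lifespans(velocities, speed_thresh=0.05, t_range=1.0, short_life=0.1):
    """Velocity-aware split at initialization (threshold and constants are
    hypothetical). Static points get a lifespan spanning the whole sequence
    and zeroed velocity; dynamic points keep their velocity and get a short
    lifespan."""
    speed = np.linalg.norm(velocities, axis=-1)
    dynamic = speed > speed_thresh
    lifespan = np.where(dynamic, short_life, t_range)
    v_init = np.where(dynamic[..., None], velocities, 0.0)
    return lifespan, v_init, dynamic
```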

To further balance representation capacity, the authors propose a two‑stage densification scheme. In the first stage (≈ 1/3 of total iterations) they follow the AbsGS approach, expanding the primitive set based on image gradients. After fixing the total number of primitives, the second stage introduces a lifespan‑velocity score:
s = λₑ E + λₒ O + λₗ (1 − exp(−(‖v‖ + 1)/f)),
where E is reconstruction error, O is opacity, and the last term prioritizes short‑lived, fast‑moving Gaussians. High‑scoring primitives are cloned, while low‑scoring, low‑opacity ones are removed. This adaptive allocation concentrates resources on regions with pronounced motion while keeping static areas compact.
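A sketch of the score computation. The grouping of the exponential term and the λ weights are assumptions based on the stated behavior (short‑lived, fast‑moving primitives should score highest); the paper's exact constants are not given in the summary:

```python
import numpy as np

def lifespan_velocity_score(err, opacity, v, f, lam_e=1.0, lam_o=1.0, lam_l=1.0):
    """Score s = lam_e * E + lam_o * O + lam_l * (1 - exp(-(||v|| + 1) / f)).
    A small motion scale f (short-lived primitive) and a large speed both
    push the exponential toward zero, so the last term approaches 1 and
    the primitive becomes a cloning candidate."""
    speed = np.linalg.norm(v, axis=-1)
    return lam_e * err + lam_o * opacity + lam_l * (1.0 - np.exp(-(speed + 1.0) / f))
```

With equal error and opacity, a fast primitive with f = 1 scores well above a static one with f = 100, matching the intended allocation of capacity toward dynamic regions.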

Extensive experiments on the Neural3DV, ENeRF‑Outdoor, and SelfCap benchmarks demonstrate that SharpTimeGS outperforms prior state‑of‑the‑art methods (e.g., FreeTimeGS, 4DRotorGS) in PSNR, SSIM, and LPIPS. Moreover, the system renders 4K resolution at 100 FPS on a single RTX 4090, confirming its real‑time capability.

The paper’s contributions are: (1) a flat‑top lifespan‑controlled opacity function, (2) a lifespan‑modulated motion formulation, (3) a velocity‑aware initialization that separates static and dynamic primitives, and (4) a lifespan‑velocity‑aware densification strategy. Limitations include potential artifacts when abrupt topology changes occur (the flat‑top may produce harsh temporal cut‑offs) and the need for sufficient temporal sampling to learn reliable lifespan parameters. Future work may explore non‑linear motion models or more sophisticated lifespan priors to handle complex deformations.

