Stronger Approximation Guarantees for Non-Monotone γ-Weakly DR-Submodular Maximization

Maximizing submodular objectives under constraints is a fundamental problem in machine learning and optimization. We study the maximization of a nonnegative, non-monotone γ-weakly DR-submodular function over a down-closed convex body. Our main result is an approximation algorithm whose guarantee depends smoothly on γ: when γ = 1 (the DR-submodular case) the bound recovers the 0.401 approximation factor, while for γ < 1 the guarantee degrades gracefully and improves upon previously reported bounds for γ-weakly DR-submodular maximization under the same constraints. Our approach combines a Frank-Wolfe-guided continuous-greedy framework with a γ-aware double-greedy step, yielding a simple yet effective procedure for handling non-monotonicity. The result is a state-of-the-art guarantee for non-monotone γ-weakly DR-submodular maximization over down-closed convex bodies.
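Since only the abstract is available here, the following is a minimal sketch of the generic Frank-Wolfe-style continuous-greedy component the abstract alludes to, not the paper's actual algorithm (in particular, it omits the γ-aware double-greedy step). The names grad_f, lmo, and T are assumed placeholders: a (possibly stochastic) gradient oracle for the continuous extension, a linear maximization oracle over the down-closed feasible body P, and the iteration count.

```python
import numpy as np

def continuous_greedy_fw(grad_f, lmo, n, T=100):
    """Frank-Wolfe-style continuous greedy for (weakly) DR-submodular
    maximization over a down-closed convex body P (sketch, assumed interface).

    grad_f : callable, returns (an estimate of) the gradient of F at x
    lmo    : linear maximization oracle; lmo(g) = argmax_{v in P} <g, v>
    n      : ambient dimension
    T      : number of Frank-Wolfe iterations
    """
    x = np.zeros(n)  # the origin is feasible because P is down-closed
    for _ in range(T):
        g = grad_f(x)            # linearize the objective at the current point
        v = lmo(g)               # best feasible direction for the linearization
        x = x + (1.0 / T) * v    # small step; x stays in P since it is a scaled
                                 # convex combination of points in the down-closed P
    return x

# Toy usage (assumed, for illustration): P = [0, 1]^n with a separable
# quadratic F(x) = a.x - ||x||^2 / 2, which is DR-submodular and non-monotone.
n = 5
a = np.array([0.9, 0.5, 0.3, 0.7, 0.1])
grad_f = lambda x: a - x                # gradient of the toy objective
lmo = lambda g: (g > 0).astype(float)   # box LMO: take coordinates with positive gain
x_hat = continuous_greedy_fw(grad_f, lmo, n)
```

The step size 1/T is the standard continuous-greedy choice: the final iterate is an average of T points of P, hence feasible, and each step trades off ascent against the curvature of the extension.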

