Stronger Approximation Guarantees for Non-Monotone γ-Weakly DR-Submodular Maximization

Reading time: 10 minutes
...

📝 Original Paper Info

- Title: Stronger Approximation Guarantees for Non-Monotone γ-Weakly DR-Submodular Maximization
- ArXiv ID: 2601.00611
- Date: 2026-01-02
- Authors: Hareshkumar Jadav, Ranveer Singh, Vaneet Aggarwal

📝 Abstract

Maximizing submodular objectives under constraints is a fundamental problem in machine learning and optimization. We study the maximization of a nonnegative, non-monotone $γ$-weakly DR-submodular function over a down-closed convex body. Our main result is an approximation algorithm whose guarantee depends smoothly on $γ$; in particular, when $γ=1$ (the DR-submodular case) our bound recovers the $0.401$ approximation factor, while for $γ<1$ the guarantee degrades gracefully and improves upon previously reported bounds for $γ$-weakly DR-submodular maximization under the same constraints. Our approach combines a Frank-Wolfe-guided continuous-greedy framework with a $γ$-aware double-greedy step, yielding a simple yet effective procedure for handling non-monotonicity. This results in state-of-the-art guarantees for non-monotone $γ$-weakly DR-submodular maximization over down-closed convex bodies.

💡 Summary & Analysis

1. **Beginner**: This paper develops algorithms to solve complex optimization problems. Imagine optimizing furniture placement in a room for maximum efficiency – this paper explores various methods to achieve that.

2. **Intermediate**: The paper presents a novel approach to optimizing non-monotone $`\gamma`$-weakly DR-submodular functions, combining Frank-Wolfe and double-greedy methods to surpass existing approximation guarantees. This is crucial for finding the best value within complex function structures.

3. **Advanced**: This research introduces new algorithms for optimizing non-monotone $`\gamma`$-weakly DR-submodular objectives over down-closed convex bodies, combining Frank-Wolfe and double-greedy methods to provide improved approximation guarantees. The approach recovers the classical DR constant when $`\gamma=1`$ and strictly surpasses the baseline for all other $`\gamma\in(0,1)`$ values.

📄 Full Paper Content (ArXiv Source)

Keywords: Combinatorial Optimization, Weakly DR Submodular Function, Approximation Algorithm

Introduction

Submodular maximization under various constraints is a central problem in optimization and theoretical computer science. Foundational work established much of the modern toolkit and sparked a rich line of research. Informally, a set function is submodular if marginal gains decrease as the chosen set grows (a discrete “diminishing returns’’ property). One key reason the problem remains important is its wide scope: many classic combinatorial tasks can be cast as maximizing a submodular objective, including Max-Cut, the assignment problem, facility location, and max bisection.

On the continuous side, Diminishing-Returns (DR) submodular models support Maximum A Posteriori (MAP) inference for determinantal point processes. Informally, a DR-submodular function is a continuous analogue of submodularity in which marginal gains along each coordinate decrease as the current point increases (a “diminishing returns’’ property in $`\mathbb{R}^n`$). We give the formal definition in the preliminaries. The problem also arises in online allocation and learning.

Classical results for continuous DR-submodular maximization are based on projection-free, first-order methods with provable guarantees. Foundational work developed the geometry and algorithms for continuous DR-submodular maximization, as well as optimal algorithms for continuous non-monotone DR-submodular objectives. This paradigm extends to online and stochastic information models, and recent progress sharpens bounds and constraint handling via DR-based analyses. Within this line, the weakly DR framework broadens modeling reach by relaxing diminishing returns to a factor $`\gamma\in(0,1]`$: roughly, marginal gains are still decreasing, but only up to a controlled multiplicative slack $`\gamma`$. This provides unified algorithms and guarantees. These developments motivate our focus on continuous DR and weakly DR objectives over down-closed convex bodies.

A range of first-order methods are known for continuous DR and weakly DR maximization over down-closed convex bodies. Projected gradient methods are the most basic scheme: they take a step in the direction of the gradient and then project back to the feasible region $`P`$. Frank–Wolfe (projection-free) methods, also known as conditional gradient methods, avoid projections by instead solving a linear subproblem $`\max_{y\in P} \langle y,\nabla F(x)\rangle`$ and moving toward its solution; they leverage DR/weakly-DR restricted concavity to certify progress. Double–greedy style methods adapt the discrete bracketing idea to the continuous setting. They keep two solutions, a “lower’’ and an “upper’’ one, and repeatedly adjust their coordinates in opposite directions so that the two solutions move closer together. These techniques have also been extended to online, bandit, and stochastic models.

At a high level, our algorithm has two interacting phases. We combine a $`\gamma`$-aware Frank–Wolfe–guided measured continuous greedy step with a $`\gamma`$-aware double–greedy step. Intuitively, the Frank–Wolfe component drives global progress by following promising directions inside the convex body, while the double–greedy component resolves local conflicts between including or excluding mass on each coordinate. Together, these phases produce a single solution whose quality we certify via a combined performance guarantee. In our method, we design a $`\gamma`$-aware Frank-Wolfe guided measured continuous greedy ($`\gamma`$-FWG) algorithm with $`\gamma`$-dependent thresholds and progress certificates, pair it with a $`\gamma`$-aware double–greedy, and then optimize a convex mixture of their certificates to obtain a performance curve $`\Phi_\gamma`$ that strictly improves the baseline for all $`\gamma\in(0,1)`$ while matching the DR boundary at $`\gamma=1`$, yielding state-of-the-art $`\gamma`$-dependent guarantees.

Beyond establishing new approximation guarantees and improving the best-known bounds, our approach introduces the following technical novelties:

  • We design a novel, $`\gamma`$-aware Frank–Wolfe–guided measured continuous greedy algorithm for the non-monotone $`\gamma`$-weakly DR setting. Our method introduces $`\gamma`$-dependent threshold schedules and progress certificates that balance ascent along Frank–Wolfe directions with measured updates, while preserving feasibility and ensuring a monotone decay of the residual gap.

  • Weakly-DR functions behave asymmetrically. In the DR case, one has the ‘naive’ inequality

    ``` math
    \begin{equation}
    F(\mathbf{x})\ \ge\ \frac{F(\mathbf{x} \vee \mathbf{y}) + F(\mathbf{x} \wedge \mathbf{y})}{2}.
    \end{equation}
    ```

    In contrast, in the $`\gamma`$-weakly DR setting, Lemma 3.1 yields

    ``` math
    \begin{equation}
    F(\mathbf{x})\ \ge\ \frac{\gamma^{2}\,F(\mathbf{x} \vee \mathbf{y}) + F(\mathbf{x} \wedge \mathbf{y})}{1+\gamma^{2}}.
    \end{equation}
    ```

    Here only the $`F(\mathbf{x}\vee\mathbf{y})`$ term is scaled by $`\gamma^{2}`$, so the two sides of the inequality are no longer treated symmetrically. Similar one-sided $`\gamma`$-dependence appears in Lemmas 2.1, 2.2, and several auxiliary results in the appendix. This loss of symmetry breaks the standard potential-based arguments used in classical DR analyses. Our approach therefore uses a case-based progress analysis that explicitly tracks one-sided marginal decay and introduces $`\gamma`$-aware thresholds that couple Frank–Wolfe steps with measured updates, allowing us to certify progress despite this asymmetry.

  • In parallel, we adapt the classical double–greedy potential to a $`\gamma`$-weighted variant that explicitly balances asymmetric gains and losses, yielding tight progress guarantees across the weakly-DR regime.

Our Contribution

In this paper, we consider non-monotone $`\gamma`$-weakly DR-submodular objectives over a down-closed convex body $`P \subseteq [0,1]^n`$ with $`0 < \gamma \le 1`$. Informally, $`\gamma`$-weak DR means that marginal gains decrease as coordinates increase, but only up to a factor $`\gamma \in (0,1]`$, capturing objectives that exhibit partial, rather than full, diminishing returns. In this regime, the canonical approximation envelope is $`\kappa(\gamma) = \gamma e^{-\gamma}`$ (which recovers $`e^{-1}`$ at $`\gamma = 1`$). In prior work, this approximation guarantee is achieved with time complexity $`\mathcal{O}(1/\varepsilon)`$, where $`\varepsilon > 0`$ is an accuracy parameter, and also by an algorithm with running time $`\mathcal{O}(1/\varepsilon^3)`$. In contrast, our algorithm achieves the $`\Phi_\gamma`$-approximation in time $`\mathrm{Poly}(n, \delta^{-1})`$. Recently, Buchbinder and Feldman introduced a novel technique that yields a $`0.401`$-approximation for the (fully) DR-submodular case; their algorithm also runs in time $`\mathrm{Poly}(n, \delta^{-1})`$.

This paper aims to close the gap between the weakly DR and full DR regimes. We develop $`\gamma`$-aware algorithms and analyses that (i) recover the classical DR constant at $`\gamma=1`$ and (ii) strictly improve upon the baseline $`\kappa(\gamma)=\gamma e^{-\gamma}`$ throughout the weakly-DR regime. Our approach combines a $`\gamma`$-aware Frank–Wolfe–guided measured continuous greedy subroutine with a $`\gamma`$-aware double–greedy, and then optimizes a convex mixture of their certificates. The resulting guarantee $`\Phi_\gamma`$ is determined by three tunable parameters $`(\alpha,r,t_s)`$, where $`\alpha`$ is the mixing weight between certificates and $`(r,t_s)`$ govern the schedules of the FW-guided and double–greedy components; we choose these to maximize $`\Phi_\gamma`$ at each given $`\gamma`$. A formal statement of our main result appears as Theorem 12. To contextualize our guarantees, Figure 1 plots the optimized curve $`\Phi_\gamma`$, and Table 1 reports representative values and the associated parameter choices $`(\alpha,r,t_s)`$.

In this direction, our key contributions are summarized below—each item highlights a distinct component of our algorithmic framework and its guarantee.

  • We present a $`\gamma`$-aware Frank–Wolfe–guided measured continuous greedy and a $`\gamma`$-aware double–greedy, each delivering explicit constant-factor guarantees for $`\gamma`$-weakly DR objectives over down-closed convex bodies.

  • We derive a parameter-optimized convex mixture of the two certificates producing a performance curve $`\Phi_\gamma`$ that strictly improves the prior baseline $`\kappa(\gamma)=\gamma e^{-\gamma}`$ for all $`\gamma\in(0,1)`$ and matches the classical DR constant at $`\gamma=1`$.

  • Our proofs are modular and avoid curvature assumptions; they recover the DR boundary as a special case and extend smoothly across the weakly-DR spectrum.

  • The methods use only first-order information and linear optimization over $`P`$ (Frank–Wolfe oracles), making them projection-free and suitable for large-scale instances.

Non-monotone $`\gamma`$-weakly DR-submodular objectives naturally arise when full DR-submodular models, such as continuous budget allocation, DPP-based diversity objectives, and mean-field inference for probabilistic submodular models, are augmented with practical penalties including over-exposure costs, risk or variance regularization, and intensity constraints. These augmentations retain the weak DR structure while inducing non-monotonicity, which is exactly the regime targeted by our algorithm.

Figure 1: Approximation guarantee versus weakly-DR parameter. The horizontal axis is the weakly-DR parameter $`\gamma \in (0, 1]`$ and the vertical axis is the approximation factor. We plot our optimized guarantee $`\Phi_\gamma`$ (blue curve) alongside the non-monotone weakly-DR baseline $`\kappa(\gamma) = \gamma e^{-\gamma}`$ (orange curve). Across the entire regime $`\gamma \in (0, 1)`$, $`\Phi_\gamma`$ strictly exceeds $`\kappa(\gamma)`$, and at $`\gamma = 1`$ (full DR) our curve reaches $`0.401`$, matching the current best bound. Selected parameter choices $`(\alpha, r, t_s)`$ used to construct $`\Phi_\gamma`$ are reported in Table 1.
| $`\gamma`$ | $`\Phi_\gamma`$ | $`\gamma e^{-\gamma}`$ | $`\alpha`$ | $`r`$ | $`t_s`$ |
|------|------|------|------|------|------|
| 0.1 | 0.095 | 0.090 | 0.001 | 3.750 | 0.000 |
| 0.2 | 0.178 | 0.164 | 0.000 | 3.083 | 0.000 |
| 0.3 | 0.247 | 0.222 | 0.000 | 2.833 | 0.000 |
| 0.4 | 0.303 | 0.268 | 0.000 | 2.667 | 0.000 |
| 0.5 | 0.345 | 0.303 | 0.000 | 2.583 | 0.000 |
| 0.6 | 0.372 | 0.329 | 0.000 | 2.417 | 0.000 |
| 0.7 | 0.387 | 0.348 | 0.000 | 2.333 | 0.000 |
| 0.8 | 0.391 | 0.359 | 0.054 | 2.250 | 0.075 |
| 0.9 | 0.396 | 0.366 | 0.160 | 2.083 | 0.267 |
| 1.0 | 0.401 | 0.368 | 0.197 | 2.220 | 0.368 |

Table 1: Numerical comparison of our optimized guarantee $`\Phi_\gamma`$ against the non-monotone weakly-DR baseline $`\kappa(\gamma)=\gamma e^{-\gamma}`$. Each row corresponds to a choice of the weakly-DR parameter $`\gamma\in(0,1]`$; the second and third columns report the achieved approximation factor $`\Phi_\gamma`$ and the baseline, respectively. The final three columns list representative internal parameters $`(\alpha,r,t_s)`$ used to construct $`\Phi_\gamma`$: $`\alpha`$ is the convex mixing weight between the two certificates, while $`r`$ and $`t_s`$ are schedule/tuning parameters of the $`\gamma`$-aware FW-guided measured continuous greedy and $`\gamma`$-aware double–greedy components (see the algorithm description). All values are rounded to three decimals.
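As a quick numeric cross-check of the table (ours, not from the paper), the script below recomputes the baseline column $`\kappa(\gamma)=\gamma e^{-\gamma}`$ and reports the gain $`\Phi_\gamma-\kappa(\gamma)`$ row by row.

``` python
import math

# (gamma, Phi_gamma, reported baseline) rows copied from the table above
rows = [
    (0.1, 0.095, 0.090), (0.2, 0.178, 0.164), (0.3, 0.247, 0.222),
    (0.4, 0.303, 0.268), (0.5, 0.345, 0.303), (0.6, 0.372, 0.329),
    (0.7, 0.387, 0.348), (0.8, 0.391, 0.359), (0.9, 0.396, 0.366),
    (1.0, 0.401, 0.368),
]
for g, phi, kappa_reported in rows:
    kappa = g * math.exp(-g)                     # baseline kappa(gamma)
    assert abs(kappa - kappa_reported) < 5e-4    # matches the table to 3 decimals
    print(f"gamma={g:.1f}  kappa={kappa:.3f}  Phi={phi:.3f}  gain={phi - kappa:+.3f}")
```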

For non-monotone, $`\gamma`$-weakly DR objectives over down-closed convex bodies, unified analyses establish the baseline envelope $`\kappa(\gamma)=\gamma e^{-\gamma}`$. An earlier line derives the same envelope via continuous/“measured” greedy combined with projection-free (Frank–Wolfe) arguments adapted to one-sided weakly-DR gradients. A complementary stationary-point baseline shows that any first-order stationary point obtained by projected or mirror ascent achieves $`\gamma^2/(1+\gamma^2)`$ of $`\mathrm{OPT}`$ (recovering $`1/2`$ at $`\gamma=1`$), with stochastic/online refinements using non-oblivious surrogates and unbiased gradients.

When $`\gamma=1`$, the problem reduces to continuous non-monotone DR-submodular maximization over down-closed convex bodies. This line begins with the multilinear/continuous-greedy framework and the Measured Continuous Greedy guarantee of $`1/e\approx0.367`$, with subsequent improvements culminating in the current best constant $`0.401`$. Hardness results still leave a notable gap for non-monotone objectives under common constraints (no factor better than $`\approx 0.478`$ is achievable), underscoring the importance of tighter algorithms at $`\gamma=1`$. Our guarantees match this DR boundary while delivering strict improvements over $`\kappa(\gamma)`$ for every $`\gamma\in(0,1)`$.

Preliminaries and Notation

In this section, we introduce the basic notation, definitions, and assumptions used throughout the paper. We use boldface letters (e.g., $`\mathbf{x}, \mathbf{y}`$) to denote vectors in $`\mathbb{R}^n`$, and write a vector as $`\mathbf{x} = (x_1, \cdots, x_n)`$. The all-ones and all-zeros vectors are denoted by $`\mathbf{1}`$ and $`\mathbf{0}`$, respectively. We use $`\mathbf{e}_i`$ to denote the $`i`$-th standard basis vector in $`\mathbb{R}^n`$. Let $`N`$ be the ground set with $`|N| = n`$ elements. The discrete and continuous hypercubes are defined as

``` math
\{0,1\}^n \;=\; \bigl\{\mathbf{x}\in\mathbb{R}^n : x_i\in\{0,1\}\ \forall i\bigr\},
```

and

``` math
[0,1]^n \;=\; \bigl\{\mathbf{x}\in\mathbb{R}^n : x_i\in[0,1]\ \forall i\bigr\},
```

respectively. For a positive integer $`n`$, we write $`[n] := \{1,2,\hdots,n\}`$.

For $`\mathbf{x},\mathbf{y}\in\mathbb{R}^n`$, we use the componentwise order: $`\mathbf{x}\le \mathbf{y}`$ if and only if $`x_i\le y_i`$ for all $`i`$ (and $`\mathbf{x}<\mathbf{y}`$ if and only if $`x_i<y_i`$ for all $`i`$). The join and meet are defined as

``` math
\mathbf{x}\vee \mathbf{y} := (\max\{x_i,y_i\})_{i=1}^n,
\quad
\mathbf{x}\wedge \mathbf{y} := (\min\{x_i,y_i\})_{i=1}^n.
```

We also use the elementwise (Hadamard) product $`\mathbf{x}\odot \mathbf{y}\in\mathbb{R}^n`$, defined by $`(\mathbf{x}\odot \mathbf{y})_i := x_i\,y_i`$ for each $`i\in[n]`$, and the standard inner product $`\langle \mathbf{x},\mathbf{y}\rangle := \sum_{i=1}^n x_i y_i`$. For vectors with entries in $`[0,1]`$, the coordinatewise probabilistic sum is

``` math
\begin{equation}
    \mathbf{x} \oplus \mathbf{y} \;:=\; \mathbf{1}-\bigl(\mathbf{1}-\mathbf{x}\bigr)\odot\bigl(\mathbf{1}-\mathbf{y}\bigr).
\end{equation}
```

For vectors $`\mathbf{x}^{(1)},\hdots,\mathbf{x}^{(m)}\in[0,1]^n`$, we write

``` math
\begin{align}
\bigoplus_{j=1}^m \mathbf{x}^{(j)}
&= \mathbf{1}-\bigodot_{j=1}^m\bigl(\mathbf{1}-\mathbf{x}^{(j)}\bigr) = \mathbf{1} - \big((\mathbf{1}-\mathbf{x}^{(1)})\odot\cdots\odot (\mathbf{1}-\mathbf{x}^{(m)})\big).
\label{eq:oplus-odot}
\end{align}
```

The operators $`\odot`$ and $`\oplus`$ bind more tightly than vector addition or subtraction, so $`\mathbf{x} + \mathbf{y}\odot \mathbf{z}`$ means $`\mathbf{x} + (\mathbf{y}\odot \mathbf{z})`$. When a scalar function is applied to a vector, it is interpreted elementwise; for instance, for $`\mathbf{x}\in[0,1]^n`$, the vector $`e^{\mathbf{x}}`$ has entries $`\bigl(e^{\mathbf{x}}\bigr)_i = e^{x_i}`$.
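For readers who prefer code, here is a minimal NumPy rendition (our illustration, not from the paper) of the coordinatewise operations just defined; the final assertion checks the identity $`\mathbf{x}\oplus\mathbf{y} = \mathbf{x}+\mathbf{y}-\mathbf{x}\odot\mathbf{y}`$.

``` python
import numpy as np

def join(x, y):      return np.maximum(x, y)              # x ∨ y
def meet(x, y):      return np.minimum(x, y)              # x ∧ y
def hadamard(x, y):  return x * y                         # x ⊙ y
def prob_sum(x, y):  return 1.0 - (1.0 - x) * (1.0 - y)   # x ⊕ y

x = np.array([0.2, 0.7, 0.5])
y = np.array([0.6, 0.1, 0.5])
assert np.allclose(join(x, y) + meet(x, y), x + y)          # lattice identity
assert np.allclose(prob_sum(x, y), x + y - hadamard(x, y))  # x ⊕ y = x + y − x ⊙ y
```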

A set $`P\subseteq\mathbb{R}^n`$ is convex if $`\lambda \mathbf{x}+(1-\lambda)\mathbf{y}\in P`$ for all $`\mathbf{x},\mathbf{y}\in P`$ and $`\lambda\in[0,1]`$. A convex body is a compact, convex set with nonempty interior. A polytope $`P\subseteq[0,1]^n`$ is down-closed if $`\mathbf{y}\in P`$ implies $`\mathbf{x}\in P`$ for every $`\mathbf{x}\in\mathbb{R}^n`$ with $`\mathbf{0}\le \mathbf{x}\le \mathbf{y}`$. We say that $`P`$ is solvable if linear optimization over $`P`$ can be performed in polynomial time. The (Euclidean) diameter of a set $`P\subseteq\mathbb{R}^n`$ is

``` math
D := \sup\{\ \|\mathbf{x}-\mathbf{y}\|_2 : \mathbf{x},\mathbf{y}\in P\ \}.
```

A nonnegative set function $`f:\{0,1\}^n\to\mathbb{R}_{\ge 0}`$ is submodular if, for all $`\mathbf{x},\mathbf{y}\in\{0,1\}^n`$ with $`\mathbf{x}\le \mathbf{y}`$ and for all $`\mathbf{a}\in\{0,1\}^n`$,

``` math
f(\mathbf{x}\vee \mathbf{a})-f(\mathbf{x})\;\ge\;f(\mathbf{y}\vee \mathbf{a})-f(\mathbf{y}).
```

In the continuous case, a nonnegative function $`F:[0,1]^n\to\mathbb{R}_{\ge 0}`$ is diminishing-returns (DR) submodular if, for all $`\mathbf{x},\mathbf{y}\in[0,1]^n`$ with $`\mathbf{x}\le \mathbf{y}`$, any coordinate $`i\in[n]`$, and any $`c>0`$ such that $`\mathbf{x}+c\,\mathbf{e}_i,\ \mathbf{y}+c\,\mathbf{e}_i\in[0,1]^n`$,

``` math
F(\mathbf{x}+c\,\mathbf{e}_i)-F(\mathbf{x})\;\ge\;F(\mathbf{y}+c\,\mathbf{e}_i)-F(\mathbf{y}).
```

If $`F`$ is differentiable, this is equivalent to $`\nabla F(\mathbf{x}) \ge \nabla F(\mathbf{y})`$ for all $`\mathbf{x}\le \mathbf{y}`$.

A nonnegative function $`F:[0,1]^n\to\mathbb{R}_{\ge 0}`$ is $`\gamma`$-weakly DR-submodular if, for all $`\mathbf{x},\mathbf{y}\in[0,1]^n`$ with $`\mathbf{x}\le \mathbf{y}`$, any $`i\in[n]`$, and any $`c>0`$ with $`\mathbf{x}+c\,\mathbf{e}_i,\ \mathbf{y}+c\,\mathbf{e}_i\in[0,1]^n`$,

``` math
\begin{equation}
F(\mathbf{x}+c\,\mathbf{e}_i)-F(\mathbf{x})\;\ge\;\gamma\big(F(\mathbf{y}+c\,\mathbf{e}_i)-F(\mathbf{y})\big).
\end{equation}
```

When $`F`$ is differentiable, this is equivalent to $`\nabla F(\mathbf{x}) \ge \gamma\,\nabla F(\mathbf{y})`$ for all $`\mathbf{x}\le \mathbf{y}`$. This condition holds for some $`\gamma>1`$ if and only if $`F`$ is constant (and a constant $`F`$ satisfies it for any $`\gamma`$); it holds for some $`\gamma\le 0`$ exactly when $`F`$ is coordinate-wise monotone. Hence, we focus on the nontrivial range $`0<\gamma\le 1`$.
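The gradient characterization above suggests a simple numeric probe (our sketch, not from the paper): sample pairs $`\mathbf{x}\le\mathbf{y}`$ and track the smallest coordinatewise gradient ratio $`\nabla_i F(\mathbf{x})/\nabla_i F(\mathbf{y})`$, which upper-bounds the true $`\gamma`$. The toy objective $`F(\mathbf{x})=\log(1+\sum_i e^{x_i})`$ used here is monotone and not DR-submodular, but it is $`\gamma`$-weakly DR-submodular on $`[0,1]^n`$ with $`\gamma\ge e^{-1}`$.

``` python
import numpy as np

rng = np.random.default_rng(0)

def grad_F(x):
    # gradient of the toy objective F(x) = log(1 + sum_i e^{x_i})
    e = np.exp(x)
    return e / (1.0 + e.sum())

def estimate_gamma(grad, n=5, samples=20000):
    """Sampled upper estimate of gamma = inf over x <= y, i of grad_i(x)/grad_i(y)."""
    best = np.inf
    for _ in range(samples):
        y = rng.uniform(0.0, 1.0, n)
        x = y * rng.uniform(0.0, 1.0, n)   # componentwise x <= y
        best = min(best, float((grad(x) / grad(y)).min()))
    return best

print(estimate_gamma(grad_F))  # upper estimate; gamma >= 1/e holds analytically
```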

A differentiable function $`F:P \to \mathbb{R}`$ is $`L`$-smooth if, for all $`\mathbf{x},\mathbf{y} \in P`$, it satisfies

``` math
\begin{equation}
    \|\nabla F(\mathbf{x})-\nabla F(\mathbf{y})\|_2 \le L\,\|\mathbf{x}-\mathbf{y}\|_2.
\end{equation}
```

Now we discuss some properties of $`\gamma`$-weakly DR-submodular functions. In the following three lemmas, let $`F:[0,1]^n\to\mathbb{R}_{\ge 0}`$ be differentiable and $`\gamma`$-weakly DR-submodular.

Lemma 1. For all $`\mathbf{x},\mathbf{y}\in[0,1]^n`$ and $`\lambda\in[0,1]`$, the following hold:

  1. If $`\mathbf{x}\le \mathbf{y}`$, then

    ``` math
    \begin{equation}
    \label{eq:lemma1-convex-combo}
    F\bigl(\lambda \mathbf{x}+(1-\lambda)\mathbf{y}\bigr)
    \ \ge \
    \frac{\lambda\,F(\mathbf{x})+\gamma^{2}(1-\lambda)\,F(\mathbf{y})}{\lambda+\gamma^{2}(1-\lambda)}.
    \end{equation}
    ```

    Equivalently,

    ``` math
    \begin{equation}
    \label{eq:lemma1-convex-combo-alt}
    F\bigl((1-\lambda)\mathbf{x}+\lambda \mathbf{y}\bigr)
    \ \ge\
    \frac{(1-\lambda)\,F(\mathbf{x})+\gamma^{2}\lambda\,F(\mathbf{y})}{(1-\lambda)+\gamma^{2}\lambda}.
    \end{equation}
    ```
  2. If $`\mathbf{x}+\mathbf{y}\in[0,1]^n`$, then

    ``` math
    \begin{equation}
    \label{eq:lemma1-increment}
    F(\mathbf{x}+\lambda \mathbf{y})-F(\mathbf{x})
    \ \ge\
    \frac{\gamma^{2}\lambda}{\,1-\lambda+\gamma^{2}\lambda\,}\,\bigl(F(\mathbf{x}+\mathbf{y})-F(\mathbf{x})\bigr).
    \end{equation}
    ```

Lemma 2. For every $`\mathbf{x},\mathbf{y}\in [0,1]^n`$, the following inequalities hold:

  1. If $`\mathbf{x}+\mathbf{y}\le \mathbf{1}`$, then

    ``` math
    \begin{equation}
    \label{eq:grad-ineq-plus}
            \langle \nabla F(\mathbf{x}),\,\mathbf{y}\rangle \;\ge\; \gamma\big(F(\mathbf{x}+\mathbf{y}) - F(\mathbf{x})\big).
    \end{equation}
    ```
  2. If $`\mathbf{x}-\mathbf{y}\ge \mathbf{0}`$, then

    ``` math
    \begin{equation}
    \label{eq:grad-ineq-minus}
            \langle \nabla F(\mathbf{x}),\,\mathbf{y}\rangle \;\le\; \frac{1}{\gamma}\big(F(\mathbf{x}) - F(\mathbf{x}-\mathbf{y})\big).
    \end{equation}
    ```

Lemma 3. For any fixed $`\mathbf{y}\in[0,1]^n`$, define

``` math
\begin{equation}
\label{eq:closure-defs}
G_{\oplus}(\mathbf{x})\ :=\ F(\mathbf{x}\oplus \mathbf{y})
\quad\text{and}\quad
G_{\odot}(\mathbf{x})\ :=\ F(\mathbf{x}\odot \mathbf{y}).
\end{equation}
```

Then both $`G_{\oplus}`$ and $`G_{\odot}`$ are nonnegative and $`\gamma`$–weakly DR-submodular.

Proofs of these lemmas are provided in Appendix 6. These statements generalize prior results to the $`\gamma`$-weakly DR setting and hold for all $`\gamma\in(0,1]`$; in particular, they coincide exactly with the classical DR statements when $`\gamma=1`$.

Supporting Results

In this section, we generalize two standard algorithms to the $`\gamma`$–weakly DR setting and present their analyses. First, we prove a $`\gamma`$-weighted Frank–Wolfe certificate over solvable convex bodies (Theorem 5). Second, we develop a $`\gamma`$-aware Double–Greedy algorithm and establish an unbalanced lower bound that interpolates smoothly in $`\gamma`$ (Theorem 6).

We generalize the classical DR framework to the $`\gamma`$–weakly DR setting and obtain guarantees that continuously interpolate between weakly and full DR. Our first ingredient (Lemma 4) shows that the local-optimality certificate $`\langle \mathbf{y}-\mathbf{x},\nabla F(\mathbf{x})\rangle\le 0`$ can be translated, under $`\gamma`$–weakly DR-submodularity, into a $`\gamma`$–weighted value comparison between $`F(\mathbf{x})`$ and the join/meet values $`F(\mathbf{x}\vee\mathbf{y})`$ and $`F(\mathbf{x}\wedge\mathbf{y})`$, recovering the classical $`\tfrac12\!\big(F(\mathbf{x}\vee\mathbf{y})+F(\mathbf{x}\wedge\mathbf{y})\big)`$ bound at $`\gamma=1`$. Our second ingredient (Theorem 5) “globalizes’’ the comparison over a solvable convex body $`P`$: a Frank–Wolfe–type routine yields a uniform first-order certificate against every $`\mathbf{y}\in P`$ without requiring curvature information, a device standard in recent continuous submodular solvers and aligned with unified weakly-DR analyses. Combining this certificate with the weakly-DR property gives a value bound that degrades smoothly with $`\gamma`$ and exactly matches the DR case at $`\gamma=1`$; this is formalized in Lemma 4 and Theorem 5, and proofs of these results are given in Appendix 7.

Lemma 4. Let $`F:[0,1]^n\to\mathbb{R}_{\ge0}`$ be $`\gamma`$–weakly DR–submodular. If $`\mathbf{x}`$ is a local optimum with respect to a vector $`\mathbf{y}`$, i.e.,

``` math
\begin{equation}
\label{eq:local-opt}
\langle \mathbf{y}-\mathbf{x},\nabla F(\mathbf{x})\rangle\le 0,
\end{equation}
```

then

``` math
\begin{equation}
\label{eq:lemma2-bound}
F(\mathbf{x})\ \ge\ \frac{\gamma^{2}\,F(\mathbf{x}\vee \mathbf{y}) + F(\mathbf{x}\wedge \mathbf{y})}{\,1+\gamma^{2}\,}.
\end{equation}
```

**Theorem 5**. Let $`F:[0,1]^n\!\to\mathbb{R}_{\ge0}`$ be a nonnegative, $`L`$-smooth function that is $`\gamma`$–weakly DR–submodular. Let $`P\subseteq[0,1]^n`$ be a solvable convex body of diameter $`D`$, and let $`\delta\in(0,1)`$. There is a polynomial-time algorithm that outputs $`\mathbf{x}\in P`$ such that, for every $`\mathbf{y}\in P`$,

``` math
\begin{equation}
\label{eq:thm-smooth-bound}
F(\mathbf{x}) \ \ge\ \frac{\gamma^{2}F(\mathbf{x}\vee \mathbf{y})+F(\mathbf{x}\wedge \mathbf{y})}{1+\gamma^{2}}
\;-\;\frac{\delta\,\gamma}{1+\gamma^{2}} \left[\max_{\mathbf{z}\in P}F(\mathbf{z}) + \frac{L D^{2}}{2}\right].
\end{equation}
```

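As a structural illustration of the routine behind Theorem 5 (a hedged sketch of ours, not the paper's exact procedure), the loop below ascends along Frank–Wolfe directions until the first-order certificate $`\langle \mathbf{y}-\mathbf{x},\nabla F(\mathbf{x})\rangle\le\mathrm{tol}`$ holds against every $`\mathbf{y}\in P`$, using only linear optimization over $`P`$; the oracle shown is for the toy down-closed body $`P=\{\mathbf{x}\in[0,1]^2 : x_1+x_2\le 1\}`$.

``` python
import numpy as np

def fw_local_maximum(grad_F, lin_oracle, x0, tol=1e-2, iters=10000):
    """lin_oracle(c) returns argmax_{y in P} <c, y>; P is assumed solvable."""
    x = x0.astype(float).copy()
    for t in range(iters):
        g = grad_F(x)
        y = lin_oracle(g)                    # best vertex of P for direction g
        if np.dot(y - x, g) <= tol:          # local-optimality certificate holds,
            break                            # so Lemma 4 / Theorem 5 apply to x
        x = x + (2.0 / (t + 2.0)) * (y - x)  # stays feasible: convex combination
    return x

def simplex_oracle(c):
    """Linear maximization over the toy body P = {x in [0,1]^2 : x1 + x2 <= 1}."""
    y = np.zeros_like(c)
    i = int(np.argmax(c))
    if c[i] > 0:
        y[i] = 1.0
    return y

# Concave toy objective F(x) = <a, x> - ||x||^2 / 2, with gradient a - x.
a = np.array([1.0, 0.4])
print(fw_local_maximum(lambda x: a - x, simplex_oracle, np.zeros(2)))  # ~ (0.8, 0.2)
```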

In the continuous DR-submodular domain, Double–Greedy–type procedures were extended to DR–submodular objectives on $`[0,1]^n`$ (the $`\gamma{=}1`$ case) by several works, but these analyses still stated a uniform $`1/2`$ bound and predated the unbalanced refinement. We generalize this line to the $`\gamma`$–weakly DR–submodular regime ($`0<\gamma\le1`$), leveraging the weakly-DR structure introduced for continuous submodular maximization and the recent unified weakly-DR perspective.

Our algorithm retains the Double–Greedy structure but augments it with $`\gamma`$-aware smoothing/thresholding, ensuring that it handles any given $`\gamma\in(0,1]`$ robustly; the resulting guarantee interpolates continuously in $`\gamma`$ and collapses to the classical DR bound at $`\gamma=1`$. The corresponding lower-bound guarantee is stated in Theorem 6, and a detailed proof is provided in Appendix 8.

**Theorem 6**. Let $`F:[0,1]^n\to\mathbb{R}_{\ge 0}`$ be nonnegative and $`\gamma`$–weakly DR-submodular for some $`\gamma\in(0,1]`$, and fix a parameter $`\varepsilon\in(0,1)`$. There exists a polynomial-time algorithm that outputs $`\mathbf{x}\in[0,1]^n`$ such that

``` math
\begin{equation}
\label{eq:unbalanced-bound}
F(\mathbf{x})\ \ge\
\max_{r \ge 0}\ 
\frac{\bigl(2\gamma^{3/2}-4\varepsilon\gamma^{9/2}\bigr)\,r\,F(\mathbf{o})\;+\;F(\mathbf{0})\;+\;r^2 F(\mathbf{1})}
{\,r^2\;+\;2\gamma^{3/2}r\;+\;1\,}.
\end{equation}
```

When $`\gamma=1`$ and $`r=1`$ this recovers the canonical $`1/2`$ approximation, while for many instances one can choose $`r\neq1`$ to obtain a strictly better guarantee, in direct analogy to the unbalanced bounds for set functions.

The unbalanced Double–Greedy guarantee extends directly to axis-aligned boxes. Fix an upper bound $`\mathbf{x}\in[0,1]^n`$ and consider maximizing $`F`$ over the box $`[\mathbf{0},\mathbf{x}]`$. Define $`G:[0,1]^n\to\mathbb{R}_{\ge0}`$ by $`G(\mathbf{a}):=F(\mathbf{x}\odot \mathbf{a})`$. By Lemma 3, $`G`$ remains nonnegative and $`\gamma`$–weakly DR-submodular, so Theorem 6 applies to $`G`$. Translating the output back via $`\mathbf{y}:=\mathbf{x}\odot\mathbf{a}'\le \mathbf{x}`$ yields the following corollary, which we use as Box-Maximization in Algorithm [alg:main].

**Corollary 7** (Box maximization). Let $`F:[0,1]^n\to\mathbb{R}_{\ge 0}`$ be nonnegative and $`\gamma`$–weakly DR-submodular for some $`\gamma\in(0,1]`$, let $`\mathbf{x}\in[0,1]^n`$, and fix $`\varepsilon\in(0,1)`$. There exists a polynomial-time algorithm that outputs a vector $`\mathbf{y}\in[0,1]^n`$ with $`\mathbf{y}\le \mathbf{x}`$ such that, for every fixed $`\mathbf{o}\in[0,1]^n`$,

``` math
\begin{equation}
\label{eq_123}
    F(\mathbf{y}) \ge
\max_{r \ge 0}
\frac{\bigl(2\gamma^{3/2}-4\varepsilon\,\gamma^{9/2}\bigr)\,r\,F(\mathbf{x}\odot \mathbf{o})+F(\mathbf{0})+r^2F(\mathbf{x})}
{\,r^2\;+\;2\gamma^{3/2}r\;+\;1\,}.
\end{equation}
```

*Proof.* Fix $`\mathbf{x}\in[0,1]^n`$ and define the restricted objective
``` math
\begin{equation}
\label{eq:box-defG}
G(\mathbf{a})\ :=\ F(\mathbf{x}\odot \mathbf{a})\qquad\text{for all }\mathbf{a}\in[0,1]^n.
\end{equation}
```

By Lemma 3, $`G`$ is nonnegative and $`\gamma`$–weakly DR-submodular. Applying Theorem 6 to $`G`$ (with the same $`\varepsilon\in(0,1)`$) yields some $`\mathbf{a}'\in[0,1]^n`$ such that

``` math
\begin{equation}
\label{eq:box-thmG}
G(\mathbf{a}') \ge
\max_{r\ge 0}
\frac{\bigl(2\gamma^{3/2}-4\varepsilon\,\gamma^{9/2}\bigr)\,r\,G(\mathbf{o})+G(\mathbf{0})+r^2\,G(\mathbf{1})}
{\,r^2\;+\;2\gamma^{3/2}r\;+\;1\,}.
\end{equation}
```

From [eq:box-defG], we have

``` math
\begin{equation*}
G(\mathbf{o})=F(\mathbf{x}\odot \mathbf{o}),\qquad
G(\mathbf{0})=F(\mathbf{0}),\qquad
G(\mathbf{1})=F(\mathbf{x}).
\end{equation*}
```

Substituting these identities into [eq:box-thmG] gives

``` math
\begin{equation}
\label{eq:box-subbed}
G(\mathbf{a}') \ge
\max_{r\ge 0}
\frac{\bigl(2\gamma^{3/2}-4\varepsilon\ \gamma^{9/2}\bigr)\,r\,F(\mathbf{x}\odot \mathbf{o})+F(\mathbf{0})+r^2\,F(\mathbf{x})}
{\,r^2\;+\;2\gamma^{3/2}r\;+\;1\,}.
\end{equation}
```

Define $`\mathbf{y}:=\mathbf{x}\odot \mathbf{a}'`$. Then $`\mathbf{y}\le \mathbf{x}`$ coordinate-wise and, by [eq:box-defG],

``` math
\begin{equation}
\label{eq:box-FeqG}
F(\mathbf{y})\ =\ F(\mathbf{x}\odot \mathbf{a}')\ =\ G(\mathbf{a}').
\end{equation}
```

Combining [eq:box-subbed] and [eq:box-FeqG] yields the claimed bound [eq_123]. ◻
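In code, the reduction in this proof is essentially a one-liner; the sketch below (ours) assumes NumPy arrays and a generic `solver` for $`\gamma`$-weakly DR maximization over $`[0,1]^n`$ (e.g., the double-greedy routine of Theorem 6).

``` python
def box_maximization(F, x, solver):
    """Maximize F over the box [0, x] by restriction; returns y <= x.

    solver(G) is any routine returning a' in [0,1]^n that approximately
    maximizes a nonnegative gamma-weakly DR-submodular G (cf. Theorem 6).
    """
    G = lambda a: F(x * a)   # G(a) = F(x ⊙ a) stays gamma-weakly DR (Lemma 3)
    a_prime = solver(G)
    return x * a_prime       # y = x ⊙ a' <= x coordinatewise
```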

Main Algorithm and Results

In this section, we present our main result together with the algorithm that achieves it. Our approach is recursive and hinges on a core subroutine invoked at every level of recursion: the $`\gamma`$–Frank–Wolfe Guided Measured Continuous Greedy ($`\gamma`$-FWG). We describe $`\gamma`$-FWG and establish its guarantees in Section 4.1. Building on this component, Section 4.2 introduces the full recursive algorithm, and Section 4.3 proves our main theorem.

$`\gamma`$-FWG Algorithm

We develop a measured continuous greedy method, steered by Frank–Wolfe directions and explicitly tuned by $`\gamma`$, to operate in the $`\gamma`$–weakly DR setting. The algorithm is explicitly $`\gamma`$-parameterized, so it works for any $`\gamma\in(0,1]`$ and reduces to the classical DR case when $`\gamma=1`$. For clarity, in the description of Algorithm [alg:fw-guided-mcg] we assume that $`\delta^{-1}`$ is an integer and that $`\delta\le \varepsilon`$ (which lets us set $`m=\delta^{-1}`$). If these conditions do not hold, we reduce $`\delta`$ to $`1/\big\lceil 1/\min\{\delta,\varepsilon\}\big\rceil`$ without affecting the analysis. We also define $`\beta := \frac{\gamma^{2}\,\delta}{\,1-\delta+\gamma^{2}\delta\,}`$. Since the algorithm does not know the values $`F(\mathbf{o})`$, $`F(\mathbf{z}\odot \mathbf{o})`$, and $`F(\mathbf{z}\oplus \mathbf{o})`$, we rely on the following guessing lemma, whose proof uses standard guessing arguments and is given in Appendix 9.

Lemma 8. Let $`F:[0,1]^n\to\mathbb{R}_{\ge 0}`$ be nonnegative and $`\gamma`$-weakly DR-submodular for some $`0<\gamma\le 1`$, and let $`P\subseteq[0,1]^n`$ be down-closed. There exists a constant-size (depending only on $`\varepsilon`$ and $`\gamma`$) set of triples $`\mathcal{G} \subseteq \mathbb{R}_{\ge 0}^3`$ such that $`\mathcal{G}`$ contains a triple $`(g,g_\odot,g_\oplus)`$ with

``` math
\label{eq:triple-bounds1}
\begin{align}
(1-\varepsilon)\,F(\mathbf{o}) &\le g \le F(\mathbf{o}), 
\label{eq:triple-bounds-g1}\\
F(\mathbf{z}\odot \mathbf{o})-\varepsilon\,g &\le g_\odot \le F(\mathbf{z}\odot \mathbf{o}), 
\label{eq:triple-bounds-godot1}\\
F(\mathbf{z}\oplus \mathbf{o})-\varepsilon\,g &\le g_\oplus \le F(\mathbf{z}\oplus \mathbf{o}).
\label{eq:triple-bounds-goplus1}
\end{align}
```

Therefore, by trying all triples in $`\mathcal{G}`$, we can act as if Algorithm [alg:fw-guided-mcg] is given valid surrogates $`g`$, $`g_{\odot}`$, and $`g_{\oplus}`$ that meet these bounds. For convenience, we first define the threshold functions. For $`i\in\{0,1,\hdots,\delta^{-1}-1\}`$, define
``` math
\begin{align}
v_1(i)
&:= \Bigl[(1-\beta)^{\,i}+\tfrac{1-(1-\beta)^{\,i}-2\varepsilon}{\gamma}\Bigr]\,g
   -\tfrac{1}{\gamma}\,g_{\odot} -\tfrac{1-(1-\beta)^{\,i}}{\gamma}\,g_{\oplus}
\label{eq:v1-def}\\[4pt]
v_2(i)
&:= (1-\beta)^{\,i}\Big[
   \Bigl(\tfrac{(1-\beta)^{-i_s}}{\gamma}-\Bigl(1+\tfrac{3}{\gamma}\Bigr)\varepsilon+1-\tfrac{1}{\gamma}\Bigr)\,g
%    \notag\\
% &\hspace{2.6cm}
-\Bigl(\tfrac{(1-\beta)^{-i_s}}{\gamma}-\tfrac{1}{\gamma}-\beta\,(i-i_s)\Bigr)\,g_{\oplus}
   \Big]
\label{eq:v2-def-uniq}
\end{align}
```

Algorithm [alg:fw-guided-mcg] ($`\gamma`$-FWG):

Input: nonnegative $`L`$-smooth $`\gamma`$-weakly DR-submodular $`F:[0,1]^n\!\to\mathbb{R}_{\ge0}`$; meta-solvable down-closed $`P\subseteq[0,1]^n`$; $`\mathbf{z}\in P`$; $`\gamma \in (0,1]`$; parameters $`t_s\in(0,1)`$, $`\varepsilon\in(0,1/2)`$, $`\delta\in(0,1)`$.

1. $`i_s \gets \lceil t_s/\delta \rceil`$.
2. $`v(i) := v_1(i)`$ if $`i \le i_s`$, and $`v(i) := v_2(i)`$ if $`i \ge i_s`$.
3. $`\mathbf{z}(i) \gets \mathbf{z}`$ if $`i < i_s`$, and $`\mathbf{z}(i) \gets \mathbf{0}`$ if $`i \ge i_s`$.
4. $`\mathbf{y}(0) \gets \mathbf{0}`$.
5. For $`i = 1, \hdots, \delta^{-1}`$:
   - $`\mathbf{w}(i) \gets \bigl(\mathbf{1} - \mathbf{y}(i-1) - \mathbf{z}(i-1)\bigr) \odot \nabla F\bigl(\mathbf{y}(i-1)\bigr)`$.
   - $`Q(i) \gets \bigl\{\, \mathbf{x} \in P \ \big|\ \langle \mathbf{w}(i), \mathbf{x} \rangle \ge \gamma \bigl(v(i-1) - F(\mathbf{y}(i-1)) \bigr) \,\bigr\}`$.
   - Use Theorem 5 to compute an approximate local maximum $`\mathbf{x}(i)`$ of $`F`$ over $`Q(i)`$ (if $`Q(i)=\varnothing`$, set $`\mathbf{x}(i)`$ to an arbitrary vector in $`P`$).
   - $`\mathbf{y}(i) \gets \mathbf{y}(i-1) + \delta \,\bigl(\mathbf{1} - \mathbf{y}(i-1) - \mathbf{z}(i-1)\bigr) \odot \mathbf{x}(i)`$.
6. Return $`\mathbf{y}(\delta^{-1})`$ and the sequence $`\mathbf{x}(1), \mathbf{x}(2), \hdots, \mathbf{x}(\delta^{-1})`$.
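The Python skeleton below (our hedged sketch) mirrors the control flow of Algorithm [alg:fw-guided-mcg]; the Theorem 5 subroutine and the threshold schedule $`v_1/v_2`$ are abstracted away as callables `local_max_over` and `v`, so it exposes the structure only and does not reproduce the analysis.

``` python
import numpy as np

def gamma_fwg(F, grad_F, local_max_over, v, z, gamma, t_s, delta):
    """Skeleton of gamma-FWG; local_max_over(w, thr) plays the role of Theorem 5
    applied to Q(i) = {x in P : <w, x> >= thr} (arbitrary feasible x if empty)."""
    m = int(round(1.0 / delta))
    i_s = int(np.ceil(t_s / delta))
    y = np.zeros_like(z)
    xs = []
    for i in range(1, m + 1):
        z_prev = z if (i - 1) < i_s else np.zeros_like(z)   # z(i-1)
        w = (1.0 - y - z_prev) * grad_F(y)                  # w(i)
        thr = gamma * (v(i - 1) - F(y))                     # threshold defining Q(i)
        x_i = local_max_over(w, thr)                        # approx. local max on Q(i)
        xs.append(x_i)
        y = y + delta * (1.0 - y - z_prev) * x_i            # measured update
    return y, xs
```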

For a fixed $`\gamma`$, the parameters $`t_s`$ and $`\varepsilon`$ are constants, and hence the running time of Algorithm [alg:fw-guided-mcg] is $`\mathrm{Poly}(n, \delta^{-1})`$. Algorithm [alg:fw-guided-mcg] therefore guarantees the following performance on its output. The proof of this theorem is provided in Appendix 9.

Theorem 9. $`\gamma`$-FWG takes as input a nonnegative, $`L`$-smooth, $`\gamma`$–weakly DR-submodular function $`F:[0,1]^n \to \mathbb{R}_{\ge0}`$, a meta-solvable down-closed convex body $`P \subseteq [0,1]^n`$ of diameter $`D`$, a vector $`\mathbf{z}\in P`$, and parameters $`t_s \in (0,1)`$, $`\varepsilon \in (0,1/2)`$, and $`\delta \in (0,1)`$. Given this input, $`\gamma`$-FWG outputs a vector $`\mathbf{y}\in P`$ and vectors $`\mathbf{x}{(1)},\hdots,\mathbf{x}{(m)} \in P`$ for some $`m = O(\delta^{-1}+\varepsilon^{-1})`$, such that at least one of the following holds.

  1. ``` math
    \begin{equation}
    \label{eq:item1-ABC}
    F(\mathbf{y}) \ \ge\ A_\gamma(t_s)\,F(\mathbf{o})\;+\;B_\gamma(t_s)\,F(\mathbf{z}\odot\mathbf{o})\;+\;C_\gamma(t_s)\,F(\mathbf{z}\oplus\mathbf{o})\;-\;\delta\,L D^{2}.
    \end{equation}
    ```
  2. There exists $`i\in[m]`$ such that

    ``` math
    \begin{equation}
    \label{eq:item2-gap}
    F\bigl(\mathbf{x}{(i)} \oplus \mathbf{o}\bigr)\ \le\ F(\mathbf{z}\oplus \mathbf{o})\;-\;\varepsilon\,F(\mathbf{o}),
    \end{equation}
    ```

    and the point $`\mathbf{x}{(i)}`$ satisfies the $`\gamma`$–weakly DR local–value bound

    ``` math
    \begin{align}
    \label{eq:item2-local}
    F\bigl(\mathbf{x}{(i)}\bigr)
    &\ge\;
    \frac{\gamma^{2}\,F\bigl(\mathbf{x}{(i)}\vee \mathbf{o}\bigr)+F\bigl(\mathbf{x}{(i)}\wedge \mathbf{o}\bigr)}{1+\gamma^{2}}
    % \notag\\ &\qquad \qquad 
    -\frac{\delta\,\gamma}{1+\gamma^{2}}
    \left(\max_{\mathbf{y}'\in Q(i)}F(\mathbf{y}')+\tfrac{1}{2}L D^{2}\right).
    \end{align}
    ```

    i.e., $`\mathbf{x}{(i)}`$ is an approximate local maximum with respect to $`\mathbf{o}`$ under the $`\gamma`$–weakly DR guarantee and the Frank–Wolfe certificate over $`Q(i)`$.

Here the $`\gamma`$–dependent coefficients are

``` math
\label{eq:ABC-coeffs}
\begin{align}
A_\gamma(t_s)
&:= -\frac{e^{\gamma t_s-\gamma}}{\,1-\gamma\,}
+\frac{e^{-\gamma^2}}{\gamma(1-\gamma)}\Big(e^{\gamma^2 t_s}-(1-\gamma)\Big)\;-\; O(\varepsilon)
\label{eq:A-gamma}\\[2pt]
B_\gamma(t_s)
&:= \frac{e^{-\gamma}-e^{\gamma t_s-\gamma}}{\gamma}
\label{eq:B-gamma}\\[2pt]
C_\gamma(t_s)
&:= \frac{e^{\gamma^2 t_s}-1}{\gamma(1-\gamma)}\Big(e^{-\gamma(1-t_s)-\gamma^2 t_s}-e^{-\gamma^2}\Big)
\label{eq:C-gamma}\\
&\quad+\;\frac{e^{-\gamma(1-t_s)}}{\gamma}\left[
\big(e^{-\gamma t_s}-1\big)
+\frac{e^{-\gamma^2 t_s}-e^{-\gamma t_s}}{1-\gamma}
\right]\nonumber\\
&\quad+\,e^{-\gamma(1-t_s)-\gamma^2 t_s}\Bigg[
\frac{\gamma^2}{1-\gamma}(1-t_s)\,e^{\gamma(1-\gamma)(1-t_s)}\nonumber\\
&\hspace{2cm}+\frac{\gamma}{(1-\gamma)^2}\Big(1-e^{\gamma(1-\gamma)(1-t_s)}\Big)
\Bigg].\nonumber
\end{align}
```

Main Algorithm

In this section we describe our main algorithm and analyze the recursive framework that establishes our main result (Theorem 12). The core building block is our new procedure *$`\gamma`$-weakly Frank–Wolfe Guided Measured Continuous Greedy* ($`\gamma`$-FWG), introduced in Section 4.1. We classify an execution of $`\gamma`$-FWG as *successful* if the first outcome holds (i.e., when $`F(\mathbf{y})`$ attains the “large value” case). Otherwise, the execution is deemed *unsuccessful*. With this terminology in place, we now describe the main recursive driver, Algorithm [alg:main], which we use to prove Theorem 12. In addition to the parameters appearing in Theorem 12, Algorithm [alg:main] takes two auxiliary inputs: $`\varepsilon\in(0,1/2)`$ and $`t_s\in(0,1)`$. These are forwarded unchanged to every call to $`\gamma`$-FWG.

Algorithm [alg:main] runs for $`L=1+\left\lceil\tfrac{1+\gamma}{\varepsilon\gamma}\right\rceil`$ recursion levels, indexed by $`i`$. At level $`1`$, it finds an approximate local maximizer $`\mathbf{z}(0)\in P`$ and, from this seed, runs Box-Maximization and $`\gamma`$-FWG, producing $`\mathbf{z}'`$, $`\mathbf{y}`$, and a batch $`\{\mathbf{x}(1),\ldots,\mathbf{x}(m)\}`$. For each subsequent level $`i=2,\ldots,L`$, every candidate $`\mathbf{x}(\cdot)`$ emitted at level $`i-1`$ becomes a new seed $`\mathbf{z}`$; the same two subroutines are applied to each seed, yielding fresh outputs $`\mathbf{z}'`$, $`\mathbf{y}`$, and $`\{\mathbf{x}(1),\ldots,\mathbf{x}(m)\}`$. After all levels complete, the algorithm returns the vector with the largest objective value among all $`\mathbf{z}'`$, $`\mathbf{y}`$, and $`\mathbf{x}(\cdot)`$ produced at any level.

Algorithm [alg:main] (main recursive driver):

Initialization: let $`\mathbf{z}(0)`$ be a local maximum in $`P`$ obtained via Theorem 6, and execute MAIN-RECURSIVE$`(F, P, \gamma, t_s, \mathbf{z}(0), \varepsilon, \delta, 1)`$.

MAIN-RECURSIVE$`(F, P, \gamma, t_s, \mathbf{z}, \varepsilon, \delta, i)`$:

1. $`\mathbf{z}' \gets`$ Box-Maximization$`(\mathbf{z})`$.
2. $`(\mathbf{y}, \mathbf{x}(1), \hdots, \mathbf{x}(m)) \gets \gamma`$-FWG$`(F, P, \mathbf{z}, \gamma, t_s, \varepsilon)`$ (Algorithm [alg:fw-guided-mcg]).
3. If $`i < L`$, then for every $`j \in [m]`$: $`\mathbf{y}(j) \gets`$ MAIN-RECURSIVE$`(F, P, \gamma, t_s, \mathbf{x}(j), \varepsilon, \delta, i + 1)`$.
4. Return the vector maximizing $`F`$ among $`\mathbf{z}'`$, $`\mathbf{y}`$, and the vectors in $`\{\mathbf{y}(j) \mid j \in [m]\}`$.
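Structurally, the driver is a bounded-depth tree recursion; the sketch below (ours) abstracts the two subroutines as callables and returns the best vector encountered anywhere in the recursion tree.

``` python
def main_recursive(F, box_max, gamma_fwg, z, level, max_level):
    """Hedged sketch of MAIN-RECURSIVE: box_max(z) plays Box-Maximization and
    gamma_fwg(z) returns (y, [x(1), ..., x(m)]) as in Algorithm [alg:fw-guided-mcg]."""
    z_prime = box_max(z)
    y, xs = gamma_fwg(z)
    candidates = [z_prime, y]
    if level < max_level:                     # recursion depth L from the text
        for x_j in xs:
            candidates.append(main_recursive(F, box_max, gamma_fwg,
                                             x_j, level + 1, max_level))
    return max(candidates, key=F)             # best-of-all vectors seen
```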

Observe that, for fixed $`\gamma`$, the number of recursive calls executed by Algorithm 1 is $`O\bigl(m^{L}\bigr) = (\delta^{-1} + \varepsilon^{-1})^{O(1/\varepsilon)}`$. For any constant $`\varepsilon`$, this quantity is polynomial in $`\delta^{-1}`$. Moreover, each individual recursive call runs in time polynomial in $`\delta^{-1}`$ and $`n`$. Therefore, the overall running time of Algorithm 1 is polynomial in $`\delta^{-1}`$ and $`n`$.

We say that a recursive call of Algorithm [alg:main] is *successful* if its internal run of $`\gamma`$-FWG is successful. Section 4.3 shows that Algorithm [alg:main] performs sufficiently many recursive invocations to ensure that at least one call is successful and, moreover, that it obtains a vector $`\mathbf{z}`$ which is an approximate local maximizer with respect to $`\mathbf{o}`$. From such a call, the analysis further proves that either the accompanying vector $`\mathbf{z}'`$ or the vector $`\mathbf{y}`$ satisfies the performance guarantee stated in Theorem 12.

The Main Result

In the recursion tree of Algorithm [alg:main], we focus on one designated path of *heir* calls. Along this path, the “fallback” guarantee of $`\gamma`$-FWG is passed forward at each level. A recursive call $`\mathcal{C}`$ is an *heir* if either

1. $`\mathcal{C}`$ is the unique level-$`1`$ call (i.e., the first invocation in the recursion), or

2. $`\mathcal{C}`$ was invoked by another heir call $`\mathcal{C}_p`$ that is *unsuccessful*, and its input seed $`\mathbf{z}`$ equals $`\mathbf{x}(j^\star)`$ for some index $`j^\star`$ that satisfies the second outcome of Theorem 9 for the $`\gamma`$-FWG run inside $`\mathcal{C}_p`$.

Intuitively, if an heir call $`\mathcal{C}_p`$ is *unsuccessful* (its run of $`\gamma`$-FWG does not return a large-value $`\mathbf{y}`$), then Theorem 9 guarantees an index $`j^\star\in[m]`$ for which $`\mathbf{x}(j^\star)`$ satisfies a first-order $`\gamma`$–weakly DR certificate. We then set the seed of the next heir on the designated path to $`\mathbf{z}\gets \mathbf{x}(j^\star)`$. The following observation records the certificate’s invariant, which we subsequently combine to obtain the final guarantee.

Observation 1. Fix $`\gamma\in(0,1]`$. Every heir recursive call in Algorithm [alg:main] receives a seed $`\mathbf{z}\in P`$ satisfying
``` math
\begin{equation}
\label{eq:heir-invariant}
F(\mathbf{z})\ \ge\ \frac{\gamma^{2}\,F(\mathbf{z}\vee \mathbf{o})\;+\;F(\mathbf{z}\wedge \mathbf{o})}{1+\gamma^{2}}
\;-\;O(\varepsilon)\,F(\mathbf{o})\;-\;O(\delta\,L D^{2}).
\end{equation}
```

Proof. We argue by cases on the recursion level that produces $`\mathbf{z}`$.

Case 1: $`\mathbf{z}`$ comes from a later level. Here $`\mathbf{z}=\mathbf{x}{(i)}`$ for some index $`i`$ returned by the previous $`\gamma`$-FWG call, where the “successful” bound [eq:item1-ABC] did not apply. Hence [eq:item2-local] holds:

``` math
\begin{align}
    F(\mathbf{x}{(i)})\ &\ge\ 
\frac{\gamma^{2}F(\mathbf{x}{(i)}\vee \mathbf{o})+F(\mathbf{x}{(i)}\wedge \mathbf{o})}{1+\gamma^{2}}
% \notag\\ & \qquad\qquad\qquad
-\frac{\delta\,\gamma}{1+\gamma^{2}}\!\left(\max_{\mathbf{y}'\in Q(i)}F(\mathbf{y}')+\tfrac{1}{2}L D^{2}\right).
\label{eq:local-certificate-short}
\end{align}
```

Since $`Q(i)\subseteq P`$, we have $`\max_{\mathbf{y}'\in Q(i)}F(\mathbf{y}')\le \max_{\mathbf{y}'\in P}F(\mathbf{y}')\le F(\mathbf{o})`$, and substituting this into [eq:local-certificate-short] with $`\mathbf{z}=\mathbf{x}{(i)}`$ yields

``` math
\begin{equation}
\label{eq:heir-case1-mid}
F(\mathbf{z})\ \ge\ 
\frac{\gamma^{2}F(\mathbf{z}\vee \mathbf{o})+F(\mathbf{z}\wedge \mathbf{o})}{1+\gamma^{2}}
-\frac{\delta\,\gamma}{1+\gamma^{2}}\!\left(F(\mathbf{o})+\tfrac{1}{2}L D^{2}\right),
\end{equation}
```

which matches the invariant [eq:heir-invariant] up to the stated $`O(\delta)\,F(\mathbf{o})`$ and $`O(\delta\,L D^{2})`$ terms.

Case 2: $`\mathbf{z}`$ is produced at the first recursion level. Here $`\mathbf{z}`$ is the output of the weakly-DR local-maximization routine from Theorem 5 with accuracy parameter $`\eta\ :=\ \min\{\varepsilon,\delta\}.`$ Applying Theorem 5 with $`\mathbf{y}=\mathbf{o}`$ and using $`\max_{\mathbf{y}'\in P}F(\mathbf{y}')\le F(\mathbf{o})`$ gives

``` math
\begin{equation}
\label{eq:first-level}
F(\mathbf{z})\ \ge\ 
\frac{\gamma^{2}F(\mathbf{z}\vee \mathbf{o})+F(\mathbf{z}\wedge \mathbf{o})}{1+\gamma^{2}}
\;-\;\frac{\eta\,\gamma}{1+\gamma^{2}}\Bigl(F(\mathbf{o})+\tfrac{1}{2}L D^{2}\Bigr).
\end{equation}
```

Since $`\eta\le\varepsilon`$ and $`\eta\le\delta`$ by definition of $`\eta`$, the error term in [eq:first-level] is again of the form $`O(\varepsilon)\,F(\mathbf{o})+O(\delta\,L D^{2})`$, yielding [eq:heir-invariant].

Both cases establish [eq:heir-invariant], which completes the proof. ◻

Before proving that a successful heir exists (Corollary 10), we note a simple measure that drops at each level of recursion.

Observation 2. Assume every heir recursive call of Algorithm [alg:main] is unsuccessful (in the sense of Theorem 9). Then, for every recursion level $`i\ge 1`$, there exists an heir call at level $`i`$ that receives a seed $`\mathbf{z}`$ with

``` math
\begin{equation}
\label{eq:descend-claim}
F(\mathbf{z}\oplus \mathbf{o})\ \le\ F\bigl(\mathbf{z}(0)\oplus \mathbf{o}\bigr)\;-\;\varepsilon\,(i-1)\,F(\mathbf{o}).
\end{equation}
```

Proof. We argue by induction on the level $`i`$.

Base case ($`i=1`$). The unique level-1 heir call receives $`\mathbf{z}=\mathbf{z}(0)`$. Hence

``` math
\begin{equation}
\label{eq:base-case}
F(\mathbf{z}\oplus \mathbf{o})\;=\;F\bigl(\mathbf{z}(0)\oplus \mathbf{o}\bigr)\;\le\;F\bigl(\mathbf{z}(0)\oplus \mathbf{o}\bigr)-\varepsilon\cdot 0\cdot F(\mathbf{o}),
\end{equation}
```

which is exactly [eq:descend-claim] with $`i=1`$.

Inductive step. Assume the statement holds for level $`i-1\ge 1`$; i.e., there is an heir call on level $`i-1`$ with seed $`\mathbf{z}`$ such that

``` math
\begin{equation}
\label{eq:IH}
F(\mathbf{z}\oplus \mathbf{o})\ \le\ F\bigl(\mathbf{z}(0)\oplus \mathbf{o}\bigr)\;-\;\varepsilon\,(i-2)\,F(\mathbf{o}).
\end{equation}
```

By assumption, this heir call is unsuccessful. Therefore, the “gap” outcome [eq:item2-gap] of Theorem 9 applies to its internal run of $`\gamma`$-FWG, and hence there exists $`j^\star\in[m]`$ such that

``` math
\begin{equation}
\label{eq:gap-step}
F\bigl(\mathbf{x}(j^\star)\oplus \mathbf{o}\bigr)\ \le\ F(\mathbf{z}\oplus \mathbf{o})\;-\;\varepsilon\,F(\mathbf{o}).
\end{equation}
```

Combining [eq:IH] and [eq:gap-step] gives

``` math
\begin{equation}
\label{eq:level-i}
F\bigl(\mathbf{x}(j^\star)\oplus \mathbf{o}\bigr)\ \le\ F\bigl(\mathbf{z}(0)\oplus \mathbf{o}\bigr)\;-\;\varepsilon\,(i-1)\,F(\mathbf{o}).
\end{equation}
```

By the definition of heirs, the child call at level $`i`$ seeded with $`\mathbf{z}\gets \mathbf{x}(j^\star)`$ is itself an heir call and satisfies [eq:level-i], which is precisely [eq:descend-claim] for level $`i`$. ◻

The weakly-DR specifics (the $`\gamma`$-aware local-value bound and Frank–Wolfe certificate) only affect the quality guarantee for the seed $`\mathbf{x}(j^\star)`$, not the descent amount on $`F(\cdot\oplus \mathbf{o})`$. Thus the $`\varepsilon`$-per-level decrease remains identical to the DR case, while $`\gamma`$ enters later in the value lower bounds used to conclude the analysis.

Corollary 10. Some recursive call of Algorithm [alg:main] is a successful heir.

Proof. Assume, toward a contradiction, that no recursive call is a successful heir. By Observation 2, at level

``` math
\begin{equation}
\label{eq:level-choice}
i\ :=\ 1+\Bigl\lceil \tfrac{\gamma+1}{\gamma\,\varepsilon}\Bigr\rceil
\end{equation}
```

there exists an heir call with seed $`\mathbf{z}`$ such that

``` math
\begin{align}
F(\mathbf{z}\oplus \mathbf{o})
&\overset{\text{(a)}}{\le}\;
F(\mathbf{z}(0)\oplus \mathbf{o})\;-\;\varepsilon\,(i-1)\,F(\mathbf{o})\notag\\
&\overset{\text{(b)}}{\le}\;
F(\mathbf{z}(0)\oplus \mathbf{o})\;-\;\Bigl(1+\tfrac{1}{\gamma}\Bigr)\,F(\mathbf{o}),
\label{eq:descend-instantiated}
\end{align}
```

where $`\text{(a)}`$ follows from Observation 2 applied at level $`i`$, and $`\text{(b)}`$ uses $`i-1 \ge (\gamma+1)/(\gamma\varepsilon)`$ from [eq:level-choice].

Since this call is (by assumption) also unsuccessful, the gap alternative [eq:item2-gap] of Theorem 9 applies, yielding some $`j`$ with

``` math
\begin{equation}
\label{eq:unsuccessful-gap}
F\bigl(\mathbf{x}(j)\oplus \mathbf{o}\bigr)
\;\le\;
F(\mathbf{z}\oplus \mathbf{o}) \;-\; \varepsilon\,F(\mathbf{o}).
\end{equation}
```

Combining [eq:descend-instantiated] and [eq:unsuccessful-gap] gives

``` math
\begin{equation}
\label{eq:combined-gap}
F\bigl(\mathbf{x}(j)\oplus \mathbf{o}\bigr)
\;\le\;
F(\mathbf{z}(0)\oplus \mathbf{o}) \;-\; \Bigl(1+\tfrac{1}{\gamma}+\varepsilon\Bigr)\,F(\mathbf{o}).
\end{equation}
```

By nonnegativity of $`F`$, we have $`F\bigl(\mathbf{x}(j)\oplus \mathbf{o}\bigr)\ge 0`$, so [eq:combined-gap] implies

``` math
\begin{equation}
\label{eq:rearrange-positivity}
F(\mathbf{z}(0)\oplus \mathbf{o}) - F(\mathbf{o})
\;\ge\;
\Bigl(\tfrac{1}{\gamma}+\varepsilon\Bigr)\,F(\mathbf{o})
\;>\;
\tfrac{1}{\gamma}\,F(\mathbf{o}).
\end{equation}
```

On the other hand, by the $`\gamma`$–weakly DR property we have

``` math
\begin{align}
F(\mathbf{z}(0)\oplus \mathbf{o}) - F(\mathbf{o})
&\overset{\text{(c)}}{\le}\;
\frac{1}{\gamma}\Bigl(F\bigl(\mathbf{z}(0)\odot(\mathbf{1}-\mathbf{o})\bigr)-F(\mathbf{0})\Bigr)\notag\\
&\overset{\text{(d)}}{\le}\;
\frac{1}{\gamma}\,F\bigl(\mathbf{z}(0)\odot(\mathbf{1}-\mathbf{o})\bigr)
\label{eq:weakdr-split}
\end{align}
```

where $`\text{(c)}`$ follows from the $`\gamma`$–weakly DR definition, and $`\text{(d)}`$ uses $`F(\mathbf{0})\ge 0`$ (nonnegativity). Combining [eq:rearrange-positivity] and [eq:weakdr-split] yields

``` math
\begin{equation}
\label{eq:strict-better}
F(\mathbf{o})\ <\ F\bigl(\mathbf{z}(0)\odot(\mathbf{1}-\mathbf{o})\bigr).
\end{equation}
```

Since $`P`$ is down-closed and $`\mathbf{z}(0)\in P`$, we have $`\mathbf{z}(0)\odot(\mathbf{1}-\mathbf{o})\le \mathbf{z}(0)`$ coordinate-wise, hence $`\mathbf{z}(0)\odot(\mathbf{1}-\mathbf{o})\in P`$. The strict improvement in [eq:strict-better] contradicts the optimality of $`\mathbf{o}`$ over $`P`$. Therefore, our assumption was false, and some recursive call must be a successful heir. ◻

Consider any successful heir call within Algorithm [alg:main]. Denote its input seed by $`\mathbf{z}^{\star}`$, and let $`\mathbf{y}^{\star}`$ be the high-value solution returned by $`\gamma`$-FWG, while $`\mathbf{z}'^{\star}`$ is the child seed produced during the same call. Invoking Theorem 9 (1) yields

``` math
\begin{equation}
\label{eq:successful-heir-ABC}
F(\mathbf{y}^{\star})
\ge
A_\gamma(t_s)\,F(\mathbf{o})
+
B_\gamma(t_s)\,F(\mathbf{z}^{\star}\odot \mathbf{o})
+
C_\gamma(t_s)\,F(\mathbf{z}^{\star}\oplus \mathbf{o})
-
\delta\,L D^{2}.
\end{equation}
```

For the ensuing analysis of a successful heir, we also need a companion lower bound for $`F(\mathbf{z}'^{\star})`$; this is provided by the next lemma.

Lemma 11. Let $`F:[0,1]^n\to\mathbb{R}_{\ge 0}`$ be nonnegative, $`L`$-smooth, and $`\gamma`$-weakly DR-submodular for some $`\gamma\in(0,1]`$. Let $`\mathbf{z}^{\ast}\in[0,1]^n`$ be the incumbent vector provided to the heir recursive call, and let $`\mathbf{z}'{}^{\ast}\le \mathbf{z}^{\ast}`$ be the output of Corollary 7 (run on the box $`[0,\mathbf{z}^{\ast}]`$ with error $`\varepsilon`$). Then

``` math
\begin{align}
  F(\mathbf{z}'{}^{\ast})
& \ge 
\max_{r\ge 0}
\frac{\Big(2\gamma^{3/2}r+\frac{\gamma}{1+\gamma^{2}}r^{2}\Big)F(\mathbf{z}^{\ast}\odot \mathbf{o})
+\frac{\gamma^{2}}{1+\gamma^{2}}r^{2}F(\mathbf{z}^{\ast}\oplus \mathbf{o})}
{\,r^{2}+2\gamma^{3/2}r+1\,}
% \notag\\
% &\qquad\qquad\qquad\qquad\qquad
-O(\varepsilon)F(\mathbf{o})-O(\delta L D^{2}).
\end{align}
```

*Proof.* Applying Corollary 7 to the box $`[0,\mathbf{z}^{\ast}]`$ (with error $`\varepsilon`$) gives

``` math
\begin{align}
F(\mathbf{z}'{}^{\ast})
&\ge
\max_{r\ge 0}
\frac{\bigl(2\gamma^{3/2}-4\varepsilon\,\gamma^{9/2}\bigr) r F(\mathbf{z}^{\ast}\odot \mathbf{o})+F(\mathbf{0})+r^{2}\,F(\mathbf{z}^{\ast})}
{\,r^{2}+2\gamma^{3/2}r+1\,}.
\label{eq:box-bound}
\end{align}
```

Since $`F(\mathbf{0})\ge 0`$, dropping the nonnegative $`F(\mathbf{0})`$ term can only decrease the right-hand side, which gives (a):

``` math
\begin{align}
F(\mathbf{z}'{}^{\ast})
&\overset{\text{(a)}}{\ge}
\max_{r\ge 0}
\frac{\bigl(2\gamma^{3/2}-4\varepsilon\,\gamma^{9/2}\bigr)\,r\,F(\mathbf{z}^{\ast}\odot \mathbf{o})\;+\;r^{2}\,F(\mathbf{z}^{\ast})}
{\,r^{2}+2\gamma^{3/2}r+1\,}\notag\\
&\overset{\text{(b)}}{\ge}
\max_{r\ge 0}
\frac{2\gamma^{3/2}\,r\,F(\mathbf{z}^{\ast}\odot \mathbf{o})\;+\;r^{2}\,F(\mathbf{z}^{\ast})}
{\,r^{2}+2\gamma^{3/2}r+1\,}
\;-\;O(\varepsilon)\,F(\mathbf{o}),
\label{eq:eps-drop}
\end{align}
```

where (b) uses $`F(\mathbf{z}^{\ast}\odot \mathbf{o})\le F(\mathbf{o})`$ and the bound $`\displaystyle \frac{r}{r^{2}+2\gamma^{3/2}r+1}\le 1`$, so the negative perturbation term $`-\,4\varepsilon\,\gamma^{9/2}\,\frac{r}{r^{2}+2\gamma^{3/2}r+1}\,F(\mathbf{z}^{\ast}\odot\mathbf{o})`$ is at worst $`-O(\varepsilon)\,F(\mathbf{o})`$ after maximizing over $`r\ge 0`$.

Next, since $`\mathbf{z}^{\ast}`$ is the seed of an heir recursive call, Observation 1 yields

``` math
\begin{align}
F(\mathbf{z}^{\ast})
&\ge
\frac{\gamma^{2}F(\mathbf{z}^{\ast}\vee \mathbf{o})+F(\mathbf{z}^{\ast}\wedge \mathbf{o})}{1+\gamma^{2}}
-O(\varepsilon)F(\mathbf{o})-O(\delta L D^{2}).
\label{eq:heir-inv}
\end{align}
```

Because $`F\ge 0`$ and $`\gamma\le 1`$, we also have $`F(\mathbf{z}^{\ast}\wedge \mathbf{o})\ge \gamma\,F(\mathbf{z}^{\ast}\wedge \mathbf{o})`$; combining this with the weakly-DR “swap” inequality

``` math
\begin{equation}
\label{eq:weakDR-swap}
\gamma\,F(\mathbf{x}\vee\mathbf{y})+F(\mathbf{x}\wedge\mathbf{y})\ \ge\ \gamma\,F(\mathbf{x}\oplus\mathbf{y})+F(\mathbf{x}\odot\mathbf{y})
\quad(\forall\,\mathbf{x},\mathbf{y}\in[0,1]^n),
\end{equation}

we get the refined lower bound

MATH
\begin{align}
F(\mathbf{z}^{\ast})
&\ge
\frac{\gamma^{2}F(\mathbf{z}^{\ast}\oplus \mathbf{o})+\gamma\,F(\mathbf{z}^{\ast}\odot \mathbf{o})}{1+\gamma^{2}}
-O(\varepsilon)F(\mathbf{o})-O(\delta L D^{2}).
\label{eq:heir-inv-swapped}
\end{align}

Substituting [eq:heir-inv-swapped] into [eq:eps-drop] and noting that $`\frac{r^{2}}{r^{2}+2\gamma^{3/2}r+1}\le 1`$ (so the additive error $`-O(\varepsilon)\,F(\mathbf{o})-O(\delta L D^{2})`$ is preserved), we obtain the coefficients

MATH
\begin{align*}
    \frac{2\gamma^{3/2}r}{r^{2}+2\gamma^{3/2}r+1}
\;+\;
\frac{\gamma}{1+\gamma^{2}}\cdot\frac{r^{2}}{r^{2}+2\gamma^{3/2}r+1}
&\quad\text{for }F(\mathbf{z}^{\ast}\odot \mathbf{o}),\\
\frac{\gamma^{2}}{1+\gamma^{2}}\cdot\frac{r^{2}}{r^{2}+2\gamma^{3/2}r+1}
&\quad\text{for }F(\mathbf{z}^{\ast}\oplus \mathbf{o}),
\end{align*}

which is exactly the claimed form. ◻

We have established two certified lower bounds for any successful heir call: one for the Frank–Wolfe–guided output $`\mathbf{y}^{\star}`$ (Theorem 9(1)) and one for the box-restricted child $`\mathbf{z}'^{\star}`$ (Lemma 11). Since the algorithm returns the better of these two values, any convex combination of the two bounds remains a valid lower bound on the algorithm’s output. Let $`\alpha\in[0,1]`$ be the mixing parameter. Putting the two bounds into a common form and combining them gives, for any $`r\ge 0`$ and $`t_s\in(0,1)`$,

MATH
\begin{align}
F(\textnormal{ALG})
\ \ge\ 
&(1-\alpha)\,A_\gamma(t_s)\, F(\mathbf{o})+\;\Big[(1-\alpha)\,B_\gamma(t_s) \;+\;\alpha\,D_\gamma(r)\Big]\;F(\mathbf{z}^{\star}\odot\mathbf{o})\notag\\
&+\;\Big[(1-\alpha)\,C_\gamma(t_s) \;+\;\alpha\,E_\gamma(r)\Big] \;F(\mathbf{z}^{\star}\oplus\mathbf{o})-\;O(\varepsilon)\,F(\mathbf{o}) \;-\;O(\delta\,L D^{2}),
\label{eq:convex-combo-master}
\end{align}

where

MATH
\begin{equation}
\label{eq:DE-defs}
D_\gamma(r)\ :=\ \frac{2\gamma^{3/2}\,r+\frac{\gamma}{1+\gamma^2}\,r^2}{\,r^2+2\gamma^{3/2}r+1\,},
\qquad
E_\gamma(r)\ :=\ \frac{\frac{\gamma^2}{1+\gamma^2}\,r^2}{\,r^2+2\gamma^{3/2}r+1\,}.
\end{equation}

To extract a pure multiplicative factor in front of $`F(\mathbf{o})`$, we choose parameters so that the coefficients multiplying $`F(\mathbf{z}^{\star}\odot\mathbf{o})`$ and $`F(\mathbf{z}^{\star}\oplus\mathbf{o})`$ are nonnegative, allowing these terms to be dropped by nonnegativity of $`F`$. Define the feasible set

MATH
\begin{align}
\mathcal{F}_\gamma
:= \Bigl\{(\alpha,r,t_s)&\in[0,1]\times[0,\infty)\times(0,1)\ :\notag\\
&\hspace{0.25cm}(1-\alpha)B_\gamma(t_s)+\alpha D_\gamma(r) \ge 0, 
(1-\alpha)C_\gamma(t_s)+\alpha E_\gamma(r) \ge 0\Bigr\}.
\label{eq:feas-region}
\end{align}

For any $`(\alpha,r,t_s)\in\mathcal{F}_\gamma`$, inequality [eq:convex-combo-master] yields

MATH
\begin{equation}
\label{eq:drop-terms}
F(\textnormal{ALG})
\ \ge\ 
\Bigl[(1-\alpha)\,A_\gamma(t_s)\;-\;O(\varepsilon)\Bigr]\,F(\mathbf{o})\;-\;O(\delta\,L D^{2}).
\end{equation}

This motivates optimizing the leading factor:

MATH
\begin{equation}
\label{eq:phi-gamma-prob}
\Phi_\gamma\ :=\ \max_{(\alpha,r,t_s)\in\mathcal{F}_\gamma}\ (1-\alpha)\,A_\gamma(t_s).
\end{equation}

Our approximation guarantee $`\Phi_\gamma`$ is defined as the optimal value of the maximization problem in [eq:phi-gamma-prob]. This is an optimization over only three scalar parameters $`(\alpha,r,t_s)`$ with simple linear feasibility constraints in [eq:feas-region], so for any fixed $`\gamma`$ the problem is easy to solve numerically. In particular, even though $`\Phi_\gamma`$ does not appear to admit a closed-form expression as a function of $`\gamma`$, it can be computed to any desired accuracy in polynomial time (in the inverse of the discretization step) by a direct grid search.

For each fixed $`\gamma`$, we solve [eq:phi-gamma-prob] by performing an explicit grid search over $`(r,t_s)`$ on a bounded domain $`r\in[0,r_{\max}]`$, $`t_s\in(0,1)`$, and then optimizing over $`\alpha`$ in closed form using the linear constraints in [eq:feas-region]. For every grid point $`(r,t_s)`$, we determine the interval of feasible $`\alpha\in[0,1]`$ for which both inequalities in [eq:feas-region] hold, and then choose the endpoint of this interval that maximizes $`(1-\alpha)A_\gamma(t_s)`$. This yields a candidate triple $`(\alpha,r,t_s)`$ and a corresponding candidate value of $`\Phi_\gamma`$, and we keep the best one over all grid points. The running time is polynomial in the inverse grid step. This is precisely the procedure implemented in our Python code to generate Fig. 1 and the parameter table (we used $`r_{\max}=10`$, since larger values of $`r_{\max}`$ did not change the outcome).
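The following is a minimal Python sketch of this search (illustrative only). It assumes the coefficient functions $`A_\gamma, B_\gamma, C_\gamma`$ from Theorem 9 are supplied as callables `A`, `B`, `C` (hypothetical placeholders here; their formulas are not reproduced in this section), and implements $`D_\gamma, E_\gamma`$ directly from [eq:DE-defs].

```python
import numpy as np

def D_coef(gamma, r):
    # D_gamma(r) from [eq:DE-defs]
    den = r**2 + 2.0 * gamma**1.5 * r + 1.0
    return (2.0 * gamma**1.5 * r + gamma / (1.0 + gamma**2) * r**2) / den

def E_coef(gamma, r):
    # E_gamma(r) from [eq:DE-defs]
    den = r**2 + 2.0 * gamma**1.5 * r + 1.0
    return (gamma**2 / (1.0 + gamma**2) * r**2) / den

def phi_gamma(gamma, A, B, C, r_max=10.0, n_grid=400):
    """Grid search for Phi_gamma in [eq:phi-gamma-prob]; A, B, C are
    assumed callables t_s -> A_gamma(t_s) etc. (from Theorem 9)."""
    best = 0.0
    for r in np.linspace(0.0, r_max, n_grid):
        d, e = D_coef(gamma, r), E_coef(gamma, r)
        for t in np.linspace(1e-3, 1.0 - 1e-3, n_grid):
            a, b, c = A(t), B(t), C(t)
            # Feasibility [eq:feas-region]: for (p, q) in {(B, D), (C, E)},
            # (1 - alpha) p + alpha q >= 0, i.e. p + alpha (q - p) >= 0,
            # which is linear in alpha and cuts [0, 1] down to an interval.
            lo, hi = 0.0, 1.0
            for p, q in ((b, d), (c, e)):
                if q > p:
                    lo = max(lo, -p / (q - p))
                elif q < p:
                    hi = min(hi, -p / (q - p))
                elif p < 0:
                    lo, hi = 1.0, 0.0  # constant constraint, violated for all alpha
            if lo <= hi:
                # (1 - alpha) A(t) is linear in alpha, so an endpoint of the
                # feasible interval is optimal.
                best = max(best, (1.0 - lo) * a, (1.0 - hi) * a)
    return best
```

For a fixed `gamma`, `phi_gamma(gamma, A, B, C)` returns a grid-resolution approximation of $`\Phi_\gamma`$; refining `n_grid` (and, if needed, `r_max`) trades running time for accuracy, matching the polynomial dependence on the inverse grid step noted above.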

Boundary case $`\gamma=1`$. Several coefficients (e.g., $`A_\gamma,B_\gamma,C_\gamma`$) contain factors of $`(1-\gamma)`$ in the denominator; consequently, the expressions inside [eq:convex-combo-master] may exhibit an apparent $`0/0`$ form as $`\gamma\to 1`$. We interpret all such terms by taking their continuous limits, and evaluate via L’Hôpital’s rule where needed. Substituting these limits into [eq:convex-combo-master] yields the DR (i.e., $`\gamma=1`$) specialization of our mixture bound, and optimizing [eq:phi-gamma-prob] at $`\gamma=1`$ reproduces the current best DR guarantee ($`0.401`$) of Buchbinder and Feldman .

What the optimization achieves.

The optimized guarantee $`\Phi_\gamma`$ (a) exactly matches the current best DR constant at $`\gamma=1`$, and (b) strictly improves on the non-monotone weakly-DR baseline $`\kappa(\gamma)=\gamma e^{-\gamma}`$ for all $`\gamma\in(0,1)`$. Intuitively, the mixture balances Frank–Wolfe–guided progress with box-restricted improvements through $`(\alpha,r,t_s)`$, certifying the best factor per $`\gamma`$.

Theorem 12. *Fix $`\gamma\in(0,1]`$ and $`\delta\in(0,1)`$. Let $`F:[0,1]^n\to\mathbb{R}_{\ge 0}`$ be a nonnegative, $`L`$-smooth, $`\gamma`$-weakly DR-submodular function, and let $`P\subseteq[0,1]^n`$ be a down-closed, meta-solvable convex body of diameter $`D`$. There exists a polynomial-time algorithm that returns a point $`\mathbf{x}\in P`$ such that

MATH
\begin{equation}
\label{eq:main-guarantee}
F(\mathbf{x})\ \ge\ \Phi_\gamma\cdot \max_{\mathbf{y}\in P} F(\mathbf{y})\;-\;O\!\left(\delta\,D^{2}L\right),
\end{equation}

where $`\Phi_\gamma`$ is the optimal value of [eq:phi-gamma-prob].*

Proof. Let $`\mathbf{o}\in\arg\max_{\mathbf{y}\in P}F(\mathbf{y})`$. By Corollary 10, Algorithm [alg:main] has at least one successful heir call. For such a call with seed $`\mathbf{z}^\star`$, Theorem 9 (1) and Lemma 11 provide two certified lower bounds on the returned candidates $`\mathbf{y}^\star`$ and $`\mathbf{z}'{}^\star`$. Forming any convex combination of these two bounds yields [eq:convex-combo-master], with $`D_\gamma, E_\gamma`$ given in [eq:DE-defs].

Choose $`(\alpha,r,t_s)\in\mathcal{F}_\gamma`$ (cf. [eq:feas-region]) so that the coefficients of $`F(\mathbf{z}^\star\odot \mathbf{o})`$ and $`F(\mathbf{z}^\star\oplus \mathbf{o})`$ in [eq:convex-combo-master] are nonnegative. Dropping these nonnegative contributions gives [eq:drop-terms]:

MATH
\begin{equation}
    F(\textnormal{ALG})\ \ge\ \bigl[(1-\alpha)A_\gamma(t_s)-O(\varepsilon)\bigr]\,F(\mathbf{o})\;-\;O(\delta L D^2).
\end{equation}

Maximizing over feasible $`(\alpha,r,t_s)`$ yields $`\Phi_\gamma`$ from [eq:phi-gamma-prob], so, for an optimal choice,

MATH
\begin{equation}
    F(\textnormal{ALG})\ \ge\ \bigl[\Phi_\gamma-O(\varepsilon)\bigr]\,F(\mathbf{o})\;-\;O(\delta L D^2).
\end{equation}

Finally, set $`\varepsilon=\Theta(\delta)`$ to absorb the $`-O(\varepsilon)F(\mathbf{o})`$ term into the $`-O(\delta L D^2)`$ smoothing error. ◻

Conclusion

This paper develops a unified, projection-free framework for maximizing continuous, non-monotone $`\gamma`$-weakly DR-submodular functions over down-closed convex bodies. Our method couples a $`\gamma`$-aware Frank–Wolfe–guided measured continuous greedy with a $`\gamma`$-aware double–greedy, and optimizes a convex mixture of their certificates through three tunable parameters $`(\alpha,r,t_s)`$. Across the entire weakly-DR spectrum, the resulting guarantee $`\Phi_\gamma`$ strictly improves the canonical non-monotone baseline $`\kappa(\gamma)=\gamma e^{-\gamma}`$, and at the DR boundary ($`\gamma=1`$) it matches the current best constant $`0.401`$. We show improvements over prior work in Figure 1 and Table 1.

Proofs of Section 2 Lemmas

In this appendix, we provide detailed proofs of the fundamental properties of $`\gamma`$-weakly DR-submodular functions and extend classical results for DR-submodular functions to the generalized $`\gamma`$-weakly setting. For clarity, we restate lemmas before presenting their proofs.

Lemma 1. Let $`F:[0,1]^n\to\mathbb{R}_{\ge 0}`$ be differentiable and $`\gamma`$-weakly DR-submodular. Then for all $`\mathbf{x},\mathbf{y}\in[0,1]^n`$ and $`\lambda\in[0,1]`$ the following hold:

  1. If $`\mathbf{x}\le \mathbf{y}`$, then

    MATH
    \begin{equation}
    \label{eq:exe-convex-form-1}
    F\bigl(\lambda \mathbf{x}+(1-\lambda)\mathbf{y}\bigr)
    \ \ge \
    \frac{\lambda\,F(\mathbf{x})+\gamma^{2}(1-\lambda)\,F(\mathbf{y})}{\lambda+\gamma^{2}(1-\lambda)}.
    \end{equation}

    Equivalently,

    MATH
    \begin{equation}
    \label{eq:exe-convex-form-2}
    F\bigl((1-\lambda)\mathbf{x}+\lambda \mathbf{y}\bigr)
    \ \ge\
    \frac{(1-\lambda)\,F(\mathbf{x})+\gamma^{2}\lambda\,F(\mathbf{y})}{(1-\lambda)+\gamma^{2}\lambda}.
    \end{equation}
  2. If $`\mathbf{y}\ge \mathbf{0}`$ and $`\mathbf{x}+\mathbf{y}\in[0,1]^n`$, then

    MATH
    \begin{equation}
    \label{eq:exe-increment-form}
    F(\mathbf{x}+\lambda \mathbf{y})-F(\mathbf{x})
    \ \ge\
    \frac{\gamma^{2}\lambda}{\,1-\lambda+\gamma^{2}\lambda\,}\,\bigl(F(\mathbf{x}+\mathbf{y})-F(\mathbf{x})\bigr).
    \end{equation}

Proof. Fix $`\mathbf{x}\in[0,1]^n`$ and a direction $`\mathbf{v}\ge 0`$ such that $`\mathbf{x}+t \mathbf{v}\in[0,1]^n`$ for all $`t\in[0,1]`$. Define the univariate function

MATH
\begin{equation}
\label{eq:phi-def}
    \phi(t)\ :=\ F(\mathbf{x}+t \mathbf{v}), \qquad t\in[0,1].
\end{equation}

By the chain rule, $`\phi`$ is differentiable and

MATH
\begin{equation}
\label{eq:phi-deriv}
    \phi'(t)
    \ =\
    \bigl\langle \nabla F(\mathbf{x}+t \mathbf{v}),\,\mathbf{v}\bigr\rangle.
\end{equation}

Now fix $`0\le s\le t\le 1`$. Since $`\mathbf{v}\ge 0`$, we have $`\mathbf{x}+s \mathbf{v}\le \mathbf{x}+t \mathbf{v}`$, and by $`\gamma`$-weak DR-submodularity this implies

MATH
\begin{equation}
\label{eq:grad-gamma}
    \nabla F(\mathbf{x}+s \mathbf{v})
    \ \ge\
    \gamma\,\nabla F(\mathbf{x}+t \mathbf{v}).
\end{equation}

Taking inner products of both sides of [eq:grad-gamma] with $`\mathbf{v}\ge 0`$ and using [eq:phi-deriv] gives

MATH
\begin{equation}
\label{eq:gamma-monotone}
    \phi'(s)
    \ =\
    \bigl\langle \nabla F(\mathbf{x}+s\mathbf{v}),\mathbf{v}\bigr\rangle
    \ \ge\
    \gamma\,\bigl\langle \nabla F(\mathbf{x}+t\mathbf{v}),\mathbf{v}\bigr\rangle
    \ =\
    \gamma\,\phi'(t),
    \qquad 0\le s\le t\le 1.
\end{equation}

Next fix $`\lambda\in(0,1)`$. For $`t\in[\lambda,1]`$, applying [eq:gamma-monotone] with $`s=\lambda`$ gives $`\phi'(\lambda)\ge \gamma\,\phi'(t)`$, so

MATH
\phi'(t)\ \le\ \frac{1}{\gamma}\,\phi'(\lambda).

Integrating this upper bound over $`t\in[\lambda,1]`$ and using the fundamental theorem of calculus yields

MATH
\begin{equation}
\label{eq:segment-upper}
    \phi(1)-\phi(\lambda)
    \ =\
    \int_\lambda^1 \phi'(t)\,dt
    \ \le\
    \int_\lambda^1 \frac{1}{\gamma}\,\phi'(\lambda)\,dt
    \ =\
    \frac{1-\lambda}{\gamma}\,\phi'(\lambda).
\end{equation}

Similarly, for $`s\in[0,\lambda]`$, applying [eq:gamma-monotone] with $`t=\lambda`$ gives $`\phi'(s)\ge \gamma\,\phi'(\lambda)`$. Integrating this lower bound over $`s\in[0,\lambda]`$ we obtain

MATH
\begin{equation}
\label{eq:segment-lower}
    \phi(\lambda)-\phi(0)
    \ =\
    \int_0^\lambda \phi'(s)\,ds
    \ \ge\
    \int_0^\lambda \gamma\,\phi'(\lambda)\,ds
    \ =\
    \gamma\,\lambda\,\phi'(\lambda).
\end{equation}

From [eq:segment-upper] we get

MATH
\phi'(\lambda)
    \ \ge\
    \frac{\gamma}{1-\lambda}\,\bigl(\phi(1)-\phi(\lambda)\bigr),

and substituting this lower bound into [eq:segment-lower] gives

MATH
\begin{equation}
\label{eq:ratio-phi}
    \phi(\lambda)-\phi(0)
    \ \ge\
    \gamma\,\lambda\cdot
    \frac{\gamma}{1-\lambda}\,\bigl(\phi(1)-\phi(\lambda)\bigr)
    \ =\
    \frac{\gamma^{2}\lambda}{1-\lambda}\,\bigl(\phi(1)-\phi(\lambda)\bigr).
\end{equation}

Multiplying both sides of [eq:ratio-phi] by $`1-\lambda`$ and expanding, we obtain

MATH
(1-\lambda)\,\phi(\lambda)- (1-\lambda)\,\phi(0)
    \ \ge\
    \gamma^{2}\lambda\,\phi(1)-\gamma^{2}\lambda\,\phi(\lambda).

Rearranging the terms involving $`\phi(\lambda)`$ to the left and the remaining terms to the right yields

MATH
\bigl(1-\lambda+\gamma^{2}\lambda\bigr)\,\phi(\lambda)
    \ \ge\
    (1-\lambda)\,\phi(0)+\gamma^{2}\lambda\,\phi(1).

Dividing by $`1-\lambda+\gamma^{2}\lambda>0`$ we get

MATH
\begin{equation}
\label{eq:phi-convex-combo}
    \phi(\lambda)
    \ \ge\
    \frac{(1-\lambda)\,\phi(0)+\gamma^{2}\lambda\,\phi(1)}
         {(1-\lambda)+\gamma^{2}\lambda}.
\end{equation}

To prove part (2), take $`\mathbf{v}=\mathbf{y}\ge 0`$ and $`\phi(t)=F(\mathbf{x}+t\mathbf{y})`$ as in [eq:phi-def]. Since $`\mathbf{x}+\mathbf{y}\in[0,1]^n`$, the entire segment $`\{\mathbf{x}+t\mathbf{y}: t\in[0,1]\}`$ lies in $`[0,1]^n`$, so the above argument applies. In this case,

MATH
\phi(0)=F(\mathbf{x}),\qquad
    \phi(1)=F(\mathbf{x}+\mathbf{y}),\qquad
    \phi(\lambda)=F(\mathbf{x}+\lambda\mathbf{y}).

Substituting these expressions into [eq:phi-convex-combo] gives

MATH
\begin{equation}
\label{eq:part2-main}
    F(\mathbf{x}+\lambda \mathbf{y})
    \ \ge\
    \frac{(1-\lambda)F(\mathbf{x})+\gamma^{2}\lambda F(\mathbf{x}+\mathbf{y})}
         {(1-\lambda)+\gamma^{2}\lambda}.
\end{equation}

Subtracting $`F(\mathbf{x})`$ from both sides of [eq:part2-main], we obtain

MATH
F(\mathbf{x}+\lambda\mathbf{y})-F(\mathbf{x})
    \ \ge\
    \frac{(1-\lambda)F(\mathbf{x})+\gamma^{2}\lambda F(\mathbf{x}+\mathbf{y})}
         {(1-\lambda)+\gamma^{2}\lambda}
    \;-\; F(\mathbf{x}).

Writing $`F(\mathbf{x})`$ as $`\frac{(1-\lambda)+\gamma^{2}\lambda}{(1-\lambda)+\gamma^{2}\lambda}F(\mathbf{x})`$ and simplifying the numerator, we get

MATH
F(\mathbf{x}+\lambda\mathbf{y})-F(\mathbf{x})
    \ \ge\
    \frac{\gamma^{2}\lambda}{1-\lambda+\gamma^{2}\lambda}\,\bigl(F(\mathbf{x}+\mathbf{y})-F(\mathbf{x})\bigr),

which is exactly [eq:exe-increment-form]. This proves part (2).

For part (1), assume $`\mathbf{x}\le \mathbf{y}`$ and define $`\mathbf{v}:= \mathbf{y}-\mathbf{x}\ge 0`$. For $`t\in[0,1]`$ set

MATH
\begin{equation}
\label{eq:phi-convex-path}
    \phi(t)
    \ :=\
    F\bigl(\mathbf{x}+t(\mathbf{y}-\mathbf{x})\bigr)
    \ =\
    F\bigl((1-t)\mathbf{x}+t\mathbf{y}\bigr).
\end{equation}

Again, the segment between $`\mathbf{x}`$ and $`\mathbf{y}`$ lies in $`[0,1]^n`$, so [eq:phi-convex-combo] applies. Here

MATH
\phi(0)=F(\mathbf{x}),\qquad
    \phi(1)=F(\mathbf{y}),\qquad
    \phi(\lambda)=F\bigl((1-\lambda)\mathbf{x}+\lambda\mathbf{y}\bigr).

Substituting into [eq:phi-convex-combo], we obtain

MATH
\begin{equation}
\label{eq:part1-main}
    F\bigl((1-\lambda)\mathbf{x}+\lambda\mathbf{y}\bigr)
    \ \ge\
    \frac{(1-\lambda)\,F(\mathbf{x})+\gamma^{2}\lambda\,F(\mathbf{y})}
         {(1-\lambda)+\gamma^{2}\lambda},
\end{equation}

which is exactly [eq:exe-convex-form-2]. Finally, replacing $`\lambda`$ by $`1-\lambda`$ in [eq:part1-main] gives

MATH
F\bigl(\lambda\mathbf{x}+(1-\lambda)\mathbf{y}\bigr)
    \ \ge\
    \frac{\lambda\,F(\mathbf{x})+\gamma^{2}(1-\lambda)\,F(\mathbf{y})}
         {\lambda+\gamma^{2}(1-\lambda)},

which is [eq:exe-convex-form-1]. This proves part (1). ◻
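As a quick numerical illustration of part (2) at $`\gamma=1`$ (where [eq:exe-increment-form] reduces to concavity along nonnegative directions), one can test a quadratic $`F(\mathbf{x})=\mathbf{c}^{\top}\mathbf{x}-\tfrac12\mathbf{x}^{\top}H\mathbf{x}`$ with entrywise-nonnegative $`H`$, whose gradient $`\mathbf{c}-H\mathbf{x}`$ is entrywise nonincreasing and hence DR-submodular. This is a sanity check under those assumptions, not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(2)
n, lam = 5, 0.7
H = rng.random((n, n)); H = (H + H.T) / 2  # symmetric, entrywise nonnegative
c = H @ np.ones(n) + 1.0                   # keeps the gradient positive on [0,1]^n
F = lambda x: float(c @ x - 0.5 * x @ H @ x)

x = 0.3 * rng.random(n)
y = 0.5 * rng.random(n)                    # x + y stays inside [0,1]^n

# Lemma 1(2) at gamma = 1: F(x + lam*y) - F(x) >= lam * (F(x+y) - F(x))
assert F(x + lam * y) - F(x) >= lam * (F(x + y) - F(x)) - 1e-12
```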

Lemma 2. Let $`F : [0,1]^n \to \mathbb{R}_{\ge 0}`$ be a differentiable $`\gamma`$-weakly DR-submodular function. Then, for every $`\mathbf{x},\mathbf{y}\in [0,1]^n`$ with $`\mathbf{y}\ge \mathbf{0}`$, the following inequalities hold:

  1. If $`\mathbf{x}+\mathbf{y}\le \mathbf{1}`$, then

    MATH
    \begin{equation}
    \label{eq:grad-lower-main}
            \big\langle \nabla F(\mathbf{x}),\,\mathbf{y}\big\rangle
            \;\ge\;
            \gamma\big(F(\mathbf{x}+\mathbf{y}) - F(\mathbf{x})\big).
    \end{equation}
  2. If $`\mathbf{x}-\mathbf{y}\ge \mathbf{0}`$, then

    MATH
    \begin{equation}
    \label{eq:grad-upper-main}
            \big\langle \nabla F(\mathbf{x}),\,\mathbf{y}\big\rangle
            \;\le\;
            \frac{1}{\gamma}\big(F(\mathbf{x}) - F(\mathbf{x}-\mathbf{y})\big).
    \end{equation}

Proof. We first prove part (1). Assume $`\mathbf{x}+\mathbf{y}\le\mathbf{1}`$ and define

MATH
\begin{equation}
\label{eq:g-def}
    g(t) \;:=\; F(\mathbf{x}+t \mathbf{y}),
    \qquad t\in[0,1].
\end{equation}

The condition $`\mathbf{x}+\mathbf{y}\le\mathbf{1}`$ and $`\mathbf{y}\ge\mathbf{0}`$ implies $`\mathbf{x}+t\mathbf{y}\in[0,1]^n`$ for all $`t\in[0,1]`$. By the chain rule,

MATH
\begin{equation}
\label{eq:g-deriv}
    g'(t)
    \;=\;
    \big\langle \nabla F(\mathbf{x}+t \mathbf{y}),\,\mathbf{y}\big\rangle.
\end{equation}

For each $`t\in[0,1]`$ we have $`\mathbf{x}\le \mathbf{x}+t\mathbf{y}`$, so by $`\gamma`$-weak DR-submodularity,

MATH
\begin{equation}
\label{eq:grad-pointwise-1}
    \nabla F(\mathbf{x})
    \;\ge\;
    \gamma\,\nabla F(\mathbf{x}+t \mathbf{y}),
    \qquad 0\le t\le 1.
\end{equation}

Taking inner products of both sides of [eq:grad-pointwise-1] with $`\mathbf{y}\ge\mathbf{0}`$ and using [eq:g-deriv] gives

MATH
\begin{equation}
\label{eq:g-prime-bound}
    \big\langle \nabla F(\mathbf{x}),\,\mathbf{y}\big\rangle
    \;\ge\;
    \gamma\,\big\langle \nabla F(\mathbf{x}+t\mathbf{y}),\,\mathbf{y}\big\rangle
    \;=\;
    \gamma\,g'(t),
    \qquad 0\le t\le 1.
\end{equation}

Equivalently,

MATH
\begin{equation}
\label{eq:g-prime-upper}
    g'(t)
    \;\le\;
    \frac{1}{\gamma}\,\big\langle \nabla F(\mathbf{x}),\,\mathbf{y}\big\rangle,
    \qquad 0\le t\le 1.
\end{equation}

Integrating the bound [eq:g-prime-upper] over $`t\in[0,1]`$ and using the fundamental theorem of calculus yields

MATH
\begin{equation}
\label{eq:g-integral}
    F(\mathbf{x}+\mathbf{y})-F(\mathbf{x})
    \;=\;
    \int_0^1 g'(t)\,dt
    \;\le\;
    \int_0^1 \frac{1}{\gamma}\,\big\langle \nabla F(\mathbf{x}),\,\mathbf{y}\big\rangle\,dt
    \;=\;
    \frac{1}{\gamma}\,\big\langle \nabla F(\mathbf{x}),\,\mathbf{y}\big\rangle.
\end{equation}

Rearranging [eq:g-integral] gives

MATH
\begin{equation}
\label{eq:grad-lower-main-again}
    \big\langle \nabla F(\mathbf{x}),\,\mathbf{y}\big\rangle
    \;\ge\;
    \gamma\big(F(\mathbf{x}+\mathbf{y})-F(\mathbf{x})\big),
\end{equation}

which is exactly [eq:grad-lower-main].

We now prove part (2). Assume $`\mathbf{x}-\mathbf{y}\ge\mathbf{0}`$ and define

MATH
\begin{equation}
\label{eq:h-def}
    h(t) \;:=\; F(\mathbf{x}-t \mathbf{y}),
    \qquad t\in[0,1].
\end{equation}

The condition $`\mathbf{x}-\mathbf{y}\ge\mathbf{0}`$ and $`\mathbf{y}\ge\mathbf{0}`$ implies $`\mathbf{x}-t\mathbf{y}\in[0,1]^n`$ for all $`t\in[0,1]`$. Again by the chain rule,

MATH
\begin{equation}
\label{eq:h-deriv}
    h'(t)
    \;=\;
    \big\langle \nabla F(\mathbf{x}-t \mathbf{y}),\, -\mathbf{y}\big\rangle
    \;=\;
    -\,\big\langle \nabla F(\mathbf{x}-t \mathbf{y}),\,\mathbf{y}\big\rangle.
\end{equation}

For each $`t\in[0,1]`$ we have $`\mathbf{x}-t\mathbf{y}\le \mathbf{x}`$, so $`\gamma`$-weak DR-submodularity gives

MATH
\begin{equation}
\label{eq:grad-pointwise-2}
    \nabla F(\mathbf{x}-t \mathbf{y})
    \;\ge\;
    \gamma\,\nabla F(\mathbf{x}),
    \qquad 0\le t\le 1.
\end{equation}

Taking inner products with $`\mathbf{y}\ge\mathbf{0}`$ and using [eq:h-deriv] we obtain

MATH
\begin{equation}
\label{eq:inner-ineq-2}
    \big\langle \nabla F(\mathbf{x}-t \mathbf{y}),\,\mathbf{y}\big\rangle
    \;\ge\;
    \gamma\,\big\langle \nabla F(\mathbf{x}),\,\mathbf{y}\big\rangle,
    \qquad 0\le t\le 1,
\end{equation}

and hence

MATH
\begin{equation}
\label{eq:hprime-bound}
    h'(t)
    \;=\;
    -\,\big\langle \nabla F(\mathbf{x}-t \mathbf{y}),\,\mathbf{y}\big\rangle
    \;\le\;
    -\,\gamma\,\big\langle \nabla F(\mathbf{x}),\,\mathbf{y}\big\rangle,
    \qquad 0\le t\le 1.
\end{equation}

Integrating [eq:hprime-bound] over $`t\in[0,1]`$ and applying the fundamental theorem of calculus gives

MATH
\begin{equation}
\label{eq:h-integral}
    F(\mathbf{x}-\mathbf{y})-F(\mathbf{x})
    \;=\;
    \int_0^1 h'(t)\,dt
    \;\le\;
    \int_0^1 -\,\gamma\,\big\langle \nabla F(\mathbf{x}),\,\mathbf{y}\big\rangle\,dt
    \;=\;
    -\,\gamma\,\big\langle \nabla F(\mathbf{x}),\,\mathbf{y}\big\rangle.
\end{equation}

Multiplying [eq:h-integral] by $`-1`$ yields

MATH
\begin{equation}
\label{eq:grad-upper-ineq}
    F(\mathbf{x})-F(\mathbf{x}-\mathbf{y})
    \;\ge\;
    \gamma\,\big\langle \nabla F(\mathbf{x}),\,\mathbf{y}\big\rangle.
\end{equation}

Rearranging [eq:grad-upper-ineq], we obtain

MATH
\begin{equation}
\label{eq:grad-upper-main-again}
    \big\langle \nabla F(\mathbf{x}),\,\mathbf{y}\big\rangle
    \;\le\;
    \frac{1}{\gamma}\big(F(\mathbf{x})-F(\mathbf{x}-\mathbf{y})\big),
\end{equation}

which is exactly [eq:grad-upper-main]. This completes the proof. ◻
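The two gradient bounds admit the same kind of numerical sanity check at $`\gamma=1`$, again with a quadratic DR-submodular $`F`$ (an illustration under these assumptions, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
H = rng.random((n, n)); H = (H + H.T) / 2  # symmetric, entrywise nonnegative
c = H @ np.ones(n) + 1.0                   # gradient stays positive on [0,1]^n
F = lambda x: float(c @ x - 0.5 * x @ H @ x)
grad_F = lambda x: c - H @ x               # entrywise nonincreasing in x

x = 0.4 + 0.2 * rng.random(n)              # x in [0.4, 0.6]^n
y = 0.4 * rng.random(n)                    # y >= 0 with x + y <= 1 and x - y >= 0

g = float(grad_F(x) @ y)
assert g >= F(x + y) - F(x) - 1e-12        # [eq:grad-lower-main] at gamma = 1
assert g <= F(x) - F(x - y) + 1e-12        # [eq:grad-upper-main] at gamma = 1
```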

Lemma 3. Let $`F:[0,1]^n\to\mathbb{R}_{\ge0}`$ be nonnegative and $`\gamma`$–weakly DR-submodular for some $`\gamma\in(0,1]`$. For any fixed $`\mathbf{y}\in[0,1]^n`$, define

MATH
\begin{equation}
\label{eq:Gplus-def}
G_{\oplus}(\mathbf{x})\ :=\ F(\mathbf{x}\oplus \mathbf{y})
\end{equation}

and

MATH
\begin{equation}
\label{eq:Gdot-def}
G_{\odot}(\mathbf{x})\ :=\ F(\mathbf{x}\odot \mathbf{y}).
\end{equation}

Then both $`G_{\oplus}`$ and $`G_{\odot}`$ are nonnegative and $`\gamma`$–weakly DR-submodular, that is, for all $`\mathbf{x}^{(1)},\mathbf{x}^{(2)}\in[0,1]^n`$ with $`\mathbf{x}^{(1)}\le \mathbf{x}^{(2)}`$, any coordinate $`u\in[n]`$, and any step $`p\in[0,1-\mathbf{x}^{(2)}_u]`$ such that the updates stay in $`[0,1]^n`$, we have

MATH
\begin{equation}
\label{eq:Gplus-DR}
G_{\oplus}\bigl(\mathbf{x}^{(1)}+p\,\mathbf{e}_u\bigr)-G_{\oplus}(\mathbf{x}^{(1)})
\;\ge\;
\gamma\Bigl(G_{\oplus}\bigl(\mathbf{x}^{(2)}+p\,\mathbf{e}_u\bigr)-G_{\oplus}(\mathbf{x}^{(2)})\Bigr)
\end{equation}

and

MATH
\begin{equation}
\label{eq:Gdot-DR}
G_{\odot}\bigl(\mathbf{x}^{(1)}+p\,\mathbf{e}_u\bigr)-G_{\odot}(\mathbf{x}^{(1)})
\;\ge\;
\gamma\Bigl(G_{\odot}\bigl(\mathbf{x}^{(2)}+p\,\mathbf{e}_u\bigr)-G_{\odot}(\mathbf{x}^{(2)})\Bigr).
\end{equation}

Proof. Nonnegativity of $`G_{\oplus}`$ and $`G_{\odot}`$ follows directly from [eq:Gplus-def], [eq:Gdot-def] and the nonnegativity of $`F`$.

Fix $`\mathbf{x}^{(1)}, \mathbf{x}^{(2)}\in[0,1]^n`$ with $`\mathbf{x}^{(1)}\le \mathbf{x}^{(2)}`$, a coordinate $`u\in[n]`$, and a step $`p\in[0,\,1-\mathbf{x}^{(2)}_u]`$, so that $`\mathbf{x}^{(j)}+p\,\mathbf{e}_u\in[0,1]^n`$ for $`j=1,2`$.

We first treat the $`\oplus`$ case. Set

MATH
\begin{equation}
\label{eq:z-def}
    \mathbf{z}^{(j)} \;:=\; \mathbf{x}^{(j)}\oplus \mathbf{y},
    \qquad j\in\{1,2\}.
\end{equation}

Since $`\mathbf{x}^{(1)}\le\mathbf{x}^{(2)}`$ and the map $`x\mapsto x\oplus y`$ is nondecreasing in $`x`$, we have

MATH
\begin{equation}
\label{eq:z-order}
    \mathbf{z}^{(1)} \;\le\; \mathbf{z}^{(2)}.
\end{equation}

Using the coordinate-wise identity

MATH
\begin{equation}
\label{eq:oplus-coord}
    (\mathbf{x}\oplus\mathbf{y})_i
    \;=\;
    x_i + y_i - x_i y_i,
\end{equation}

we can express the update $`(\mathbf{x}^{(j)}+p\,\mathbf{e}_u)\oplus\mathbf{y}`$ in terms of $`\mathbf{z}^{(j)}`$. Indeed, only the $`u`$-th coordinate of $`\mathbf{x}^{(j)}`$ changes, so for each $`j\in\{1,2\}`$ we have

MATH
\begin{equation}
\label{eq:Gplus-update}
    (\mathbf{x}^{(j)}+p\,\mathbf{e}_u)\oplus \mathbf{y}
    \;=\;
    \mathbf{z}^{(j)} + \alpha\,\mathbf{e}_u,
\end{equation}

where

MATH
\begin{equation}
\label{eq:alpha-def}
    \alpha \;:=\; p\,(1-\mathbf{y}_u).
\end{equation}

This follows by plugging $`x_u^{(j)}+p`$ into the expression [eq:oplus-coord] and simplifying.

Next we check that the updated point stays in $`[0,1]^n`$ at coordinate $`u`$. Using [eq:oplus-coord] with $`x_u^{(2)}`$ we obtain

MATH
\begin{equation}
\label{eq:one-minus-z2}
    1-\mathbf{z}^{(2)}_u
    \;=\;
    1-\bigl(x^{(2)}_u + y_u - x^{(2)}_u y_u\bigr)
    \;=\;
    (1-\mathbf{x}^{(2)}_u)(1-\mathbf{y}_u).
\end{equation}

Since $`p\le 1-\mathbf{x}^{(2)}_u`$ by assumption and $`1-\mathbf{y}_u\ge 0`$, we get

MATH
\begin{equation}
\label{eq:alpha-bound-plus}
    \alpha
    \;=\;
    p(1-\mathbf{y}_u)
    \;\le\;
    (1-\mathbf{x}^{(2)}_u)(1-\mathbf{y}_u)
    \;=\;
    1-\mathbf{z}^{(2)}_u.
\end{equation}

Thus $`\mathbf{z}^{(2)}+\alpha\,\mathbf{e}_u`$ remains in $`[0,1]^n`$.

Now we use the $`\gamma`$–weak DR-submodularity of $`F`$. From [eq:z-order] and [eq:alpha-bound-plus] we can apply the definition of $`\gamma`$–weak DR-submodularity to the pair $`(\mathbf{z}^{(1)},\mathbf{z}^{(2)})`$ with step $`\alpha`$ in coordinate $`u`$ and obtain

MATH
\begin{equation}
\label{eq:F-DR-plus}
    F\!\bigl(\mathbf{z}^{(1)}+\alpha\,\mathbf{e}_u\bigr) - F(\mathbf{z}^{(1)})
    \;\ge\;
    \gamma\Bigl(
        F\!\bigl(\mathbf{z}^{(2)}+\alpha\,\mathbf{e}_u\bigr) - F(\mathbf{z}^{(2)})
    \Bigr).
\end{equation}

Using [eq:Gplus-def] and [eq:Gplus-update], we can rewrite the left-hand side and the right-hand side of [eq:F-DR-plus] in terms of $`G_{\oplus}`$ as

MATH
\begin{equation}
\label{eq:Gplus-rewrite}
    F\!\bigl(\mathbf{z}^{(j)}+\alpha\,\mathbf{e}_u\bigr)
    \;=\;
    G_{\oplus}\!\bigl(\mathbf{x}^{(j)}+p\,\mathbf{e}_u\bigr),
    \qquad
    F(\mathbf{z}^{(j)}) = G_{\oplus}(\mathbf{x}^{(j)}),
    \quad j=1,2.
\end{equation}

Substituting [eq:Gplus-rewrite] into [eq:F-DR-plus] gives

MATH
\begin{equation}
\label{eq:Gplus-DR-proof}
    G_{\oplus}\!\bigl(\mathbf{x}^{(1)}+p\,\mathbf{e}_u\bigr)-G_{\oplus}(\mathbf{x}^{(1)})
    \;\ge\;
    \gamma\Bigl(
        G_{\oplus}\!\bigl(\mathbf{x}^{(2)}+p\,\mathbf{e}_u\bigr)-G_{\oplus}(\mathbf{x}^{(2)})
    \Bigr),
\end{equation}

which is exactly [eq:Gplus-DR]. This shows that $`G_{\oplus}`$ is $`\gamma`$–weakly DR-submodular.

We now treat the $`\odot`$ case. Set

MATH
\begin{equation}
\label{eq:w-def}
    \mathbf{w}^{(j)} \;:=\; \mathbf{x}^{(j)}\odot \mathbf{y},
    \qquad j\in\{1,2\}.
\end{equation}

Since the map $`x\mapsto x\odot y`$ is also nondecreasing in $`x`$, we again have

MATH
\begin{equation}
\label{eq:w-order}
    \mathbf{w}^{(1)} \;\le\; \mathbf{w}^{(2)}.
\end{equation}

By definition of $`\odot`$,

MATH
\begin{equation}
\label{eq:odot-coord}
    (\mathbf{x}\odot\mathbf{y})_i
    \;=\;
    x_i y_i,
\end{equation}

so updating $`\mathbf{x}^{(j)}`$ in coordinate $`u`$ by $`p`$ gives

MATH
\begin{equation}
\label{eq:Gdot-update}
    (\mathbf{x}^{(j)}+p\,\mathbf{e}_u)\odot \mathbf{y}
    \;=\;
    \mathbf{w}^{(j)} + \beta\,\mathbf{e}_u,
\end{equation}

where

MATH
\begin{equation}
\label{eq:beta-def}
    \beta \;:=\; p\,\mathbf{y}_u.
\end{equation}

Again we check that the updated point stays in $`[0,1]^n`$ at coordinate $`u`$. From [eq:odot-coord] we have

MATH
\begin{equation}
\label{eq:w2-coord}
    \mathbf{w}^{(2)}_u \;=\; \mathbf{x}^{(2)}_u \mathbf{y}_u,
\end{equation}

so

MATH
\begin{equation}
\label{eq:one-minus-w2}
    1-\mathbf{w}^{(2)}_u
    \;=\;
    1-\mathbf{x}^{(2)}_u \mathbf{y}_u.
\end{equation}

Using $`p\le 1-\mathbf{x}^{(2)}_u`$ and $`\mathbf{y}_u\le 1`$, we obtain

MATH
\begin{equation}
\label{eq:beta-bound}
    \beta
    \;=\;
    p\,\mathbf{y}_u
    \;\le\;
    (1-\mathbf{x}^{(2)}_u)\mathbf{y}_u
    \;\le\;
    1-\mathbf{x}^{(2)}_u \mathbf{y}_u
    \;=\;
    1-\mathbf{w}^{(2)}_u.
\end{equation}

Hence $`\mathbf{w}^{(2)}+\beta\,\mathbf{e}_u\in[0,1]^n`$.

Now we again use the $`\gamma`$–weak DR-submodularity of $`F`$. By [eq:w-order] and [eq:beta-bound], the definition applied to $`(\mathbf{w}^{(1)},\mathbf{w}^{(2)})`$ with step $`\beta`$ in coordinate $`u`$ yields

MATH
\begin{equation}
\label{eq:F-DR-dot}
    F\!\bigl(\mathbf{w}^{(1)}+\beta\,\mathbf{e}_u\bigr)-F(\mathbf{w}^{(1)})
    \;\ge\;
    \gamma\Bigl(
        F\!\bigl(\mathbf{w}^{(2)}+\beta\,\mathbf{e}_u\bigr)-F(\mathbf{w}^{(2)})
    \Bigr).
\end{equation}

Using [eq:Gdot-def] and [eq:Gdot-update], we can rewrite [eq:F-DR-dot] as

MATH
\begin{equation}
\label{eq:Gdot-DR-proof}
    G_{\odot}\!\bigl(\mathbf{x}^{(1)}+p\,\mathbf{e}_u\bigr)-G_{\odot}(\mathbf{x}^{(1)})
    \;\ge\;
    \gamma\Bigl(
        G_{\odot}\!\bigl(\mathbf{x}^{(2)}+p\,\mathbf{e}_u\bigr)-G_{\odot}(\mathbf{x}^{(2)})
    \Bigr),
\end{equation}

which is exactly [eq:Gdot-DR]. Thus $`G_{\odot}`$ is also $`\gamma`$–weakly DR-submodular, completing the proof. ◻
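The coordinate update identities [eq:Gplus-update] and [eq:Gdot-update] used above are plain algebra, and are easy to confirm numerically; a short illustrative check:

```python
import numpy as np

rng = np.random.default_rng(1)
n, u, p = 4, 2, 0.3
x = 0.6 * rng.random(n)             # leaves room for the step p in coordinate u
y = rng.random(n)
e_u = np.eye(n)[u]

oplus = lambda a, b: a + b - a * b  # coordinate-wise, as in [eq:oplus-coord]
odot = lambda a, b: a * b           # coordinate-wise, as in [eq:odot-coord]

# [eq:Gplus-update]: (x + p e_u) oplus y = (x oplus y) + p (1 - y_u) e_u
assert np.allclose(oplus(x + p * e_u, y), oplus(x, y) + p * (1 - y[u]) * e_u)
# [eq:Gdot-update]: (x + p e_u) odot y = (x odot y) + p y_u e_u
assert np.allclose(odot(x + p * e_u, y), odot(x, y) + p * y[u] * e_u)
```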

Frank–Wolfe Algorithm and Proof of Theorem 5

This section develops a first–order certificate tailored to the $`\gamma`$–weakly DR setting and uses it to prove our main result, Theorem 5. The argument follows a local-to-global template: (i) a local optimality condition at $`\mathbf{x}`$ yields a $`\gamma`$–weighted comparison between $`F(\mathbf{x})`$ and the join/meet values with any comparator $`\mathbf{y}`$ (Lemma 1 below); (ii) a Frank–Wolfe variant produces a point $`\mathbf{x}\in P`$ that satisfies a uniform first–order certificate against every $`\mathbf{y}\in P`$ (Lemma 13); and (iii) combining the two delivers a global value bound that degrades smoothly with $`\gamma`$ and matches the classical DR guarantee at $`\gamma=1`$.

Algorithmic setup.

We will invoke the following Frank–Wolfe–type routine from . For clarity of presentation, we assume $`\delta^{-1}\in\mathbb{N}`$; if not, we replace $`\delta`$ by $`1/\lceil\delta^{-1}\rceil`$ without affecting the asymptotics.

1. Let $`\mathbf{x}{(0)}`$ be an arbitrary vector in $`P`$.
2. For $`i = 1, \dots, \delta^{-2}`$: let $`\mathbf{z}{(i)} \in \arg\max_{\mathbf{y}\in P} \langle \mathbf{y}, \nabla F(\mathbf{x}{(i-1)}) \rangle`$ and set $`\mathbf{x}{(i)} \gets (1-\delta)\,\mathbf{x}{(i-1)} + \delta \,\mathbf{z}{(i)}`$.
3. Let $`i^{*} \in \arg\min_{1 \leq i \leq \delta^{-2}} \{ \langle \mathbf{z}{(i)} - \mathbf{x}{(i-1)}, \nabla F(\mathbf{x}{(i-1)}) \rangle \}`$.
4. Return $`\mathbf{x}{(i^{*}-1)}`$.

As observed in , the update rule

MATH
\mathbf{x}{(i)} \;=\; (1-\delta)\,\mathbf{x}{(i-1)} + \delta\,\mathbf{z}{(i)}, \qquad \mathbf{z}{(i)}\in P,

keeps all iterates in $`P`$; in particular, $`\mathbf{x}{(i)}\in P`$ for every $`0\le i\le \delta^{-2}`$.
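A minimal Python sketch of this routine follows. It assumes a gradient oracle `grad_F` and a linear maximization oracle `lmo` for $`P`$ (we take solvability of $`P`$ to mean such an oracle is available); both names are placeholders, not fixed by the paper.

```python
import numpy as np

def frank_wolfe_guided(grad_F, lmo, x0, delta):
    """Sketch of Algorithm [algo:frank]; lmo(g) should return a point of
    argmax_{y in P} <y, g>."""
    T = int(round(delta ** -2))
    x = x0.copy()
    best_gap, best_x = np.inf, x0.copy()
    for _ in range(T):
        g = grad_F(x)
        z = lmo(g)                         # z(i) in argmax_{y in P} <y, g>
        gap = float((z - x) @ g)           # <z(i) - x(i-1), grad F(x(i-1))>
        if gap < best_gap:                 # track i*; keep the iterate x(i*-1)
            best_gap, best_x = gap, x.copy()
        x = (1.0 - delta) * x + delta * z  # convex update keeps x in P
    return best_x
```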

The next lemma converts the local optimality condition into a lattice-based comparison that interpolates in $`\gamma`$; it coincides with the classical $`\tfrac12\!\big(F(\mathbf{x}\vee\mathbf{y}) +F(\mathbf{x}\wedge\mathbf{y})\big)`$ bound at $`\gamma=1`$.

Lemma 1. Let $`F:[0,1]^n\to\mathbb{R}_{\ge 0}`$ be differentiable and $`\gamma`$–weakly DR-submodular. If $`\mathbf{x}`$ is a local optimum with respect to $`\mathbf{y}`$, in the sense that

MATH
\begin{equation}
\label{eq:local-opt-cond}
    \big\langle \mathbf{y}-\mathbf{x},\nabla F(\mathbf{x})\big\rangle\;\le\;0,
\end{equation}

then

MATH
\begin{equation}
\label{eq:local-to-lattice}
    F(\mathbf{x})\ \ge\ \frac{\gamma^{2}\,F(\mathbf{x}\vee \mathbf{y}) + F(\mathbf{x}\wedge \mathbf{y})}{\,1+\gamma^{2}\,}.
\end{equation}

Proof. Starting from the local optimality condition [eq:local-opt-cond], we decompose $`\mathbf{y}-\mathbf{x}`$ as

MATH
\begin{equation}
\label{eq:decomp-y-x}
    \mathbf{y}-\mathbf{x}
    \;=\;
    (\mathbf{y}\vee\mathbf{x}-\mathbf{x})\;-\;(\mathbf{x}-\mathbf{y}\wedge\mathbf{x}).
\end{equation}

Substituting [eq:decomp-y-x] into [eq:local-opt-cond] gives

MATH
\begin{equation}
\label{eq:local-opt-expanded}
    0
    \;\ge\;
    \big\langle \mathbf{y}-\mathbf{x},\nabla F(\mathbf{x})\big\rangle
    \;=\;
    \big\langle \mathbf{y}\vee\mathbf{x}-\mathbf{x},\nabla F(\mathbf{x})\big\rangle
    \;-\;
    \big\langle \mathbf{x}-\mathbf{y}\wedge\mathbf{x},\nabla F(\mathbf{x})\big\rangle.
\end{equation}

We now bound each inner product using Lemma 2. First, note that $`\mathbf{y}\vee\mathbf{x}-\mathbf{x}\ge\mathbf{0}`$ and

MATH
\mathbf{x}+ (\mathbf{y}\vee\mathbf{x}-\mathbf{x})
    \;=\;
    \mathbf{y}\vee\mathbf{x}
    \;\in\;
    [0,1]^n,

so Lemma 2(1) applies with the direction $`\mathbf{y}\vee\mathbf{x}-\mathbf{x}`$. We obtain

MATH
\begin{equation}
\label{eq:term1-lower}
    \big\langle \nabla F(\mathbf{x}),\,\mathbf{y}\vee\mathbf{x}-\mathbf{x}\big\rangle
    \;\ge\;
    \gamma\big(F(\mathbf{y}\vee\mathbf{x})-F(\mathbf{x})\big).
\end{equation}

Similarly, $`\mathbf{x}-\mathbf{y}\wedge\mathbf{x}\ge\mathbf{0}`$ and

MATH
\mathbf{x}-(\mathbf{x}-\mathbf{y}\wedge\mathbf{x})
    \;=\;
    \mathbf{y}\wedge\mathbf{x}
    \;\in\;
    [0,1]^n,

so Lemma 2(2) applies with the direction $`\mathbf{x}-\mathbf{y}\wedge\mathbf{x}`$. This gives

MATH
\begin{equation}
\label{eq:term2-upper}
    \big\langle \nabla F(\mathbf{x}),\,\mathbf{x}-\mathbf{y}\wedge\mathbf{x}\big\rangle
    \;\le\;
    \frac{1}{\gamma}\big(F(\mathbf{x})-F(\mathbf{y}\wedge\mathbf{x})\big).
\end{equation}

Substituting the bounds [eq:term1-lower] and [eq:term2-upper] into [eq:local-opt-expanded] yields

MATH
\begin{equation}
\label{eq:ineq-before-rearrange}
    0
    \;\ge\;
    \gamma\big(F(\mathbf{y}\vee\mathbf{x})-F(\mathbf{x})\big)
    \;-\;
    \frac{1}{\gamma}\big(F(\mathbf{x})-F(\mathbf{y}\wedge\mathbf{x})\big).
\end{equation}

Expanding [eq:ineq-before-rearrange], we obtain

MATH
\begin{equation}
\label{eq:expanded-ineq}
    0
    \;\ge\;
    \gamma F(\mathbf{y}\vee\mathbf{x})
    \;-\;
    \gamma F(\mathbf{x})
    \;-\;
    \frac{1}{\gamma}F(\mathbf{x})
    \;+\;
    \frac{1}{\gamma}F(\mathbf{y}\wedge\mathbf{x}).
\end{equation}

Rearranging [eq:expanded-ineq] by bringing the terms involving $`F(\mathbf{x})`$ to the right-hand side gives

MATH
\begin{equation}
\label{eq:move-Fx}
    \gamma F(\mathbf{y}\vee\mathbf{x})
    \;+\;
    \frac{1}{\gamma}F(\mathbf{y}\wedge\mathbf{x})
    \;\le\;
    \left(\gamma+\frac{1}{\gamma}\right)F(\mathbf{x}).
\end{equation}

Dividing both sides of [eq:move-Fx] by $`\gamma+1/\gamma=(1+\gamma^{2})/\gamma`$ yields

MATH
\begin{equation}
\label{eq:Fx-lattice-form}
    F(\mathbf{x})
    \;\ge\;
    \frac{\gamma F(\mathbf{y}\vee\mathbf{x}) + \tfrac{1}{\gamma}F(\mathbf{y}\wedge\mathbf{x})}
         {\gamma+\tfrac{1}{\gamma}}
    \;=\;
    \frac{\gamma^{2}F(\mathbf{y}\vee\mathbf{x}) + F(\mathbf{y}\wedge\mathbf{x})}{1+\gamma^{2}}.
\end{equation}

This is exactly [eq:local-to-lattice], after noting that $`\mathbf{y}\vee\mathbf{x}=\mathbf{x}\vee\mathbf{y}`$ and $`\mathbf{y}\wedge\mathbf{x}=\mathbf{x}\wedge\mathbf{y}`$. ◻

The next lemma is a standard smoothness-based guarantee for the point produced by Algorithm [algo:frank] (see Theorem 2.4 in ).

Lemma 13 (Theorem 2.4 of ). *Let $`F:[0,1]^n\to\mathbb{R}_{\ge 0}`$ be nonnegative and $`L`$-smooth, let $`P\subseteq[0,1]^n`$ be a solvable convex body of diameter $`D`$, and let $`\delta\in(0,1)`$. There is a polynomial-time algorithm that returns $`\mathbf{x}\in P`$ such that

MATH
\begin{equation}
\label{eq:first-order-cert}
\big\langle \mathbf{y}-\mathbf{x},\nabla F(\mathbf{x})\big\rangle \;\le\;
\delta\!\left[\max_{\mathbf{z}\in P} F(\mathbf{z}) \,+\, \frac{L D^{2}}{2}\right]
\qquad\text{for all } \mathbf{y}\in P.
\end{equation}
*

Combining the uniform certificate [eq:first-order-cert] with the weakly–DR inequalities (Lemma 2) and the local-to-lattice comparison (Lemma 1) yields our main bound.

Theorem 1. Let $`F:[0,1]^n\to\mathbb{R}_{\ge 0}`$ be nonnegative and $`L`$-smooth, and suppose $`F`$ is $`\gamma`$–weakly DR-submodular for some $`\gamma\in(0,1]`$. Let $`P\subseteq[0,1]^n`$ be a solvable convex body of diameter $`D`$, and let $`\delta\in(0,1)`$. Then there is a polynomial-time algorithm that outputs $`\mathbf{x}\in P`$ such that, for every $`\mathbf{y}\in P`$,

MATH
\begin{equation}
\label{eq:weak-dr-guarantee}
F(\mathbf{x})\;\ge\;
\frac{\gamma^{2} F(\mathbf{x}\vee \mathbf{y})+F(\mathbf{x}\wedge \mathbf{y})}{1+\gamma^{2}}
\;-\;
\frac{\delta\,\gamma}{1+\gamma^{2}}\!\left[\max_{\mathbf{z}\in P} F(\mathbf{z}) \,+\, \frac{L D^{2}}{2}\right].
\end{equation}

Proof. Let $`\mathbf{x}\in P`$ be returned by Lemma 13; then [eq:first-order-cert] holds for all $`\mathbf{y}\in P`$. As in the proof of Lemma 1, Lemma 2 implies that for every $`\mathbf{y}\in[0,1]^n`$,

MATH
\begin{equation}
\label{eq:grad-ineq-global}
\big\langle \mathbf{y}-\mathbf{x},\nabla F(\mathbf{x})\big\rangle \;\ge\; 
\gamma\,F(\mathbf{x}\vee \mathbf{y}) \;+\; \frac{1}{\gamma}\,F(\mathbf{x}\wedge \mathbf{y})
\;-\; \frac{1+\gamma^{2}}{\gamma}\,F(\mathbf{x}).
\end{equation}

Combining [eq:first-order-cert] and [eq:grad-ineq-global] for any $`\mathbf{y}\in P`$ gives

MATH
\begin{equation}
\label{eq:combine-cert}
\delta\!\left[\max_{\mathbf{z}\in P} F(\mathbf{z}) \,+\, \frac{L D^{2}}{2}\right]
\;\ge\;
\gamma\,F(\mathbf{x}\vee \mathbf{y})
\;+\; \frac{1}{\gamma}\,F(\mathbf{x}\wedge \mathbf{y})
\;-\; \frac{1+\gamma^{2}}{\gamma}\,F(\mathbf{x}).
\end{equation}

Rearranging [eq:combine-cert] by moving the $`F(\mathbf{x})`$ term to the left-hand side, we obtain

MATH
\begin{equation}
\label{eq:move-Fx-theorem}
\frac{1+\gamma^{2}}{\gamma}\,F(\mathbf{x})
\;\ge\;
\gamma\,F(\mathbf{x}\vee \mathbf{y})
\;+\; \frac{1}{\gamma}\,F(\mathbf{x}\wedge \mathbf{y})
\;-\;
\delta\!\left[\max_{\mathbf{z}\in P} F(\mathbf{z}) \,+\, \frac{L D^{2}}{2}\right].
\end{equation}

Multiplying both sides of [eq:move-Fx-theorem] by $`\frac{\gamma}{1+\gamma^{2}}`$ yields

MATH
\begin{align}
\label{eq:Fx-final}
F(\mathbf{x})
&\;\ge\;
\frac{\gamma^{2}}{1+\gamma^{2}}\,F(\mathbf{x}\vee\mathbf{y})
\;+\;
\frac{1}{1+\gamma^{2}}\,F(\mathbf{x}\wedge\mathbf{y})
\;-\;
\frac{\delta\,\gamma}{1+\gamma^{2}}\!\left[\max_{\mathbf{z}\in P} F(\mathbf{z}) \,+\, \frac{L D^{2}}{2}\right].
\end{align}

This is exactly [eq:weak-dr-guarantee], completing the proof. ◻

Double–Greedy Algorithm and Proof of Theorem 6

This appendix develops and analyzes a $`\gamma`$-aware Double–Greedy routine whose guarantee is stated in Theorem 2. Our analysis uses the grid-discretized variant in Algorithm [algo:double]. For concreteness, we assume that $`\varepsilon^{-1}`$ is an even integer; if not, we replace $`\varepsilon`$ by $`\varepsilon' \;=\; {1}/{\,(2\left\lceil \varepsilon^{-1}\right\rceil)\,}\;\in (0,1],`$ which leaves the bounds unchanged up to the stated $`\varepsilon`$-dependence. Throughout, we write $`\mathbf{o}\in[0,1]^n`$ for an arbitrary comparator; when $`\mathbf{o}`$ is chosen to be a maximizer, we have $`F(\mathbf{o})=\max_{\mathbf{u}\in[0,1]^n}F(\mathbf{u})`$.

Theorem 2. There exists a polynomial-time algorithm that, given a nonnegative $`\gamma`$-weakly DR-submodular function $`F : [0,1]^n \to \mathbb{R}_{\ge 0}`$ and a parameter $`\varepsilon \in (0,1)`$, outputs $`\mathbf{x}\in [0,1]^n`$ such that, for every fixed $`\mathbf{o}\in [0,1]^n`$,

MATH
\begin{equation}
    F(\mathbf{x})\;\;\ge\;\; 
\max_{r \ge 0}\;
\frac{\bigl(2\gamma^{3/2}-4\varepsilon\,\gamma^{9/2}\bigr)\,r\,F(\mathbf{o})\;+\;F(\mathbf{0})\;+\;r^2\,F(\mathbf{1})}
{\,r^2\;+\;2\gamma^{3/2}r\;+\;1\,}.
\end{equation}

Algorithm [algo:double] maintains two vectors $`\mathbf{x},\mathbf{y}\in[0,1]^n`$ that start at $`\mathbf{0}`$ and $`\mathbf{1}`$, respectively, and monotonically converge to a single vector by making one coordinate agree per iteration. The new value assigned to the chosen coordinate (in both $`\mathbf{x}`$ and $`\mathbf{y}`$) is taken from a uniform grid; we denote the grid by

MATH
V \;:=\; \Bigl\{\, j\,\frac{\varepsilon}{n}\ :\ j\in\mathbb{Z},\ 0\le j \le n\,\varepsilon^{-1}\Bigr\}\ \subseteq\ [0,1].

As shown in the lemmas that follow, the discretization loss due to using $`V`$ is explicitly controlled, and the resulting lower bound interpolates continuously with $`\gamma`$ and specializes to the classical DR guarantee at $`\gamma=1`$.

Input: oracle access to $`F:[0,1]^n \to \mathbb{R}_{\ge 0}`$, ground set $`N`$, grid parameter $`\varepsilon\in(0,1)`$, weakly-DR parameter $`\gamma\in(0,1]`$.

1. Let $`V \gets \left\{ \frac{j\varepsilon}{n} \;:\; j\in\mathbb{Z},\ 0 \le j \le n\varepsilon^{-1} \right\} \subseteq [0,1]`$.
2. Denote the elements of $`N`$ by $`u_1,\dots,u_n`$ in an arbitrary order. Let $`\mathbf{x}\gets \mathbf{0}`$ and $`\mathbf{y}\gets \mathbf{1}`$.
3. For $`i = 1, \dots, n`$:
   - $`a_i \in \arg\max_{v\in V} F\left(\mathbf{x}+ v\,\mathbf{1}_{u_i}\right)`$; $`\Delta_{a,i} \gets F\left(\mathbf{x}+ a_i\,\mathbf{1}_{u_i}\right) - F(\mathbf{x})`$.
   - $`b_i \in \arg\max_{v\in V} F\left(\mathbf{y}- v\,\mathbf{1}_{u_i}\right)`$; $`\Delta_{b,i} \gets F\left(\mathbf{y}- b_i\,\mathbf{1}_{u_i}\right) - F(\mathbf{y})`$.
   - If $`\Delta_{a,i} + \gamma\,\Delta_{b,i} > 0`$, set $`w_i \gets \dfrac{\Delta_{a,i}\,a_i + \gamma\,\Delta_{b,i}\,(1-b_i)}{\Delta_{a,i} + \gamma\,\Delta_{b,i}}`$; otherwise set $`w_i \gets 1 - b_i`$.
   - Set $`x_{u_i} \gets w_i`$ and $`y_{u_i} \gets w_i`$.
4. Return $`\mathbf{x}`$.
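A compact Python sketch of this loop (illustrative; `F` is a value oracle, and the guarded division encodes our reading of the two $`w_i`$ assignments, falling back to $`w_i = 1 - b_i`$ when $`\Delta_{a,i}+\gamma\,\Delta_{b,i}`$ vanishes):

```python
import numpy as np

def double_greedy(F, n, eps, gamma):
    """Sketch of Algorithm [algo:double]; F is a value oracle on [0,1]^n."""
    m = int(round(n / eps))                    # assumes n * eps^{-1} is integral
    V = np.linspace(0.0, 1.0, m + 1)           # the grid V, with step eps / n
    x, y = np.zeros(n), np.ones(n)
    for u in range(n):
        def fx(v, u=u):                        # F(x + v e_u); note x[u] == 0 here
            z = x.copy(); z[u] = v; return F(z)
        def fy(v, u=u):                        # F(y - v e_u); note y[u] == 1 here
            z = y.copy(); z[u] = 1.0 - v; return F(z)
        a = max(V, key=fx); da = fx(a) - F(x)  # a_i and Delta_{a,i} >= 0
        b = max(V, key=fy); db = fy(b) - F(y)  # b_i and Delta_{b,i} >= 0
        den = da + gamma * db
        w = (da * a + gamma * db * (1.0 - b)) / den if den > 0 else 1.0 - b
        x[u] = y[u] = w                        # coordinate u now agrees
    return x
```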

We now quantify the value attained by the vector returned by Algorithm [algo:double]. Let $`\mathbf{x}{(i)}`$ and $`\mathbf{y}{(i)}`$ denote the values of $`\mathbf{x}`$ and $`\mathbf{y}`$ after $`i`$ iterations of the main loop, respectively. For a fixed coordinate $`u_i`$, let

MATH
\begin{equation}
\label{eq:vi-star-def}
v^\ast \;\in\; \arg\max_{v\in[0,1]} \; F\bigl(\mathbf{x}{(i-1)} + v\,\mathbf{e}_{u_i}\bigr)
\end{equation}

be a continuous (unconstrained-by-grid) maximizer along the $`u_i`$-th coordinate direction at iteration $`i`$. The next lemma bounds the discretization loss of the grid choice $`a_i`$ used by Algorithm [algo:double]. It holds for every $`\gamma\in(0,1]`$ and, when $`\gamma=1`$, it matches the exact bound of .

Lemma 14. *For any integer $`i\in\{1,\dots,n\}`$, the following holds:

MATH
\begin{equation}
\label{eq:double-case1}
\text{If } v^\ast \ge \tfrac{1}{2},\quad
F\!\bigl(\mathbf{x}{(i-1)} + a_i\,\mathbf{e}_{u_i}\bigr)
\;\ge\;
\max_{v\in[0,1]} F\!\bigl(\mathbf{x}{(i-1)} + v\,\mathbf{e}_{u_i}\bigr)
\;-\; \frac{2\varepsilon}{\gamma^{2} n}\,F(\mathbf{o}).
\end{equation}
MATH
\begin{equation}
\label{eq:double-case2}
\text{If } v^\ast < \tfrac{1}{2},\quad
F\!\bigl(\mathbf{x}{(i-1)} + a_i\,\mathbf{e}_{u_i}\bigr)
\;\ge\;
\max_{v\in[0,1]} F\!\bigl(\mathbf{x}{(i-1)} + v\,\mathbf{e}_{u_i}\bigr)
\;-\; \frac{2\varepsilon}{n}\,\gamma^{2}\,F(\mathbf{o}).
\end{equation}
*

Proof. Let $`v^\ast\in[0,1]`$ maximize the function

MATH
\begin{equation}
\label{eq:phi-direction-def}
v\;\longmapsto\; F\bigl(\mathbf{x}{(i-1)}+v\,\mathbf{e}_{u_i}\bigr),
\end{equation}

so that [eq:vi-star-def] holds. We treat two cases depending on the size of $`v^\ast`$.

Case 1: $`v^\ast\ge \tfrac12`$. Let $`v\in V`$ be the largest grid point with $`v\le v^\ast`$. By the definition of $`a_i`$ as a maximizer over the grid, we have

MATH
\begin{equation}
\label{eq:case1-grid-choice}
F\bigl(\mathbf{x}{(i-1)} + v^\ast \mathbf{e}_{u_i}\bigr) - F\bigl(\mathbf{x}{(i-1)} + a_i \mathbf{e}_{u_i}\bigr)
\ \le\
F\bigl(\mathbf{x}{(i-1)} + v^\ast \mathbf{e}_{u_i}\bigr) - F\bigl(\mathbf{x}{(i-1)} + v \mathbf{e}_{u_i}\bigr),
\end{equation}

since $`F(\mathbf{x}{(i-1)}+a_i\mathbf{e}_{u_i})`$ is at least the value at any other grid point, and in particular at $`v`$.

Define the univariate function

MATH
\begin{equation}
\label{eq:phi-def-double}
\phi(t)\;:=\;F\bigl(\mathbf{x}{(i-1)} + t\,\mathbf{e}_{u_i}\bigr), \qquad t\in[0,1].
\end{equation}

By differentiability and the chain rule,

MATH
\begin{equation}
\label{eq:phi-deriv-double}
\phi'(t)\;=\;\big\langle \nabla F\bigl(\mathbf{x}{(i-1)} + t\,\mathbf{e}_{u_i}\bigr),\,\mathbf{e}_{u_i}\big\rangle.
\end{equation}

Along this coordinate direction, $`\gamma`$–weak DR-submodularity implies that for any $`0\le s\le t\le 1`$,

MATH
\begin{equation}
\label{eq:gamma-monotone-double}
\phi'(s)\;\ge\;\gamma\,\phi'(t).
\end{equation}

Applying [eq:gamma-monotone-double] with $`s=v`$ and $`t\in[v,v^\ast]`$ yields

MATH
\begin{equation}
\label{eq:phi-prime-upper-case1}
\phi'(t)\;\le\;\frac{1}{\gamma}\,\phi'(v), \qquad t\in[v,v^\ast].
\end{equation}

Integrating [eq:phi-prime-upper-case1] over $`t\in[v,v^\ast]`$ gives

MATH
\begin{equation}
\label{eq:phi-diff-case1-1}
\phi(v^\ast)-\phi(v)
\;=\;\int_{v}^{v^\ast}\phi'(t)\,dt
\;\le\;
\int_{v}^{v^\ast}\frac{1}{\gamma}\,\phi'(v)\,dt
\;=\;
\frac{v^\ast-v}{\gamma}\,\phi'(v).
\end{equation}

Similarly, applying [eq:gamma-monotone-double] with $`0\le s\le v`$ and $`t=v`$ gives

MATH
\begin{equation}
\label{eq:phi-prime-lower-case1}
\phi'(s)\;\ge\;\gamma\,\phi'(v), \qquad s\in[0,v].
\end{equation}

Integrating [eq:phi-prime-lower-case1] over $`s\in[0,v]`$ yields

MATH
\begin{equation}
\label{eq:phi-diff-case1-2}
\phi(v)-\phi(0)
\;=\;\int_{0}^{v}\phi'(s)\,ds
\;\ge\;
\int_{0}^{v}\gamma\,\phi'(v)\,ds
\;=\;
v\,\gamma\,\phi'(v).
\end{equation}

Rearranging [eq:phi-diff-case1-2] gives

MATH
\begin{equation}
\label{eq:phi-prime-bound-v}
\phi'(v)\;\le\;\frac{\phi(v)-\phi(0)}{\gamma\,v}.
\end{equation}

Combining [eq:phi-diff-case1-1] and [eq:phi-prime-bound-v], we obtain

MATH
\begin{equation}
\label{eq:phi-diff-case1-combined}
\phi(v^\ast)-\phi(v)
\;\le\;
\frac{v^\ast-v}{\gamma}\cdot\frac{\phi(v)-\phi(0)}{\gamma\,v}
\;=\;
\frac{v^\ast-v}{\gamma^{2}v}\,\bigl[\phi(v)-\phi(0)\bigr].
\end{equation}

We now bound the factor $`\phi(v)-\phi(0)`$. Since $`\phi(v)=F(\mathbf{x}{(i-1)}+v\,\mathbf{e}_{u_i})`$ and $`\phi(0)=F(\mathbf{x}{(i-1)})`$, nonnegativity of $`F`$ and optimality of $`\mathbf{o}`$ imply

MATH
\begin{equation}
\label{eq:phi-diff-upper-F0}
\phi(v)-\phi(0)
\;\le\;
F(\mathbf{o}),
\end{equation}

because $`\phi(v)\le F(\mathbf{o})`$ and $`\phi(0)\ge 0`$. Substituting [eq:phi-diff-upper-F0] into [eq:phi-diff-case1-combined] gives

MATH
\begin{equation}
\label{eq:phi-diff-case1-final}
\phi(v^\ast)-\phi(v)
\;\le\;
\frac{v^\ast-v}{\gamma^{2}v}\,F(\mathbf{o}).
\end{equation}

By construction of the grid $`V`$, we have

MATH
\begin{equation}
\label{eq:grid-gap-case1}
v^\ast-v\;\le\;\frac{\varepsilon}{n}.
\end{equation}

Moreover, since $`v^\ast\ge\tfrac{1}{2}`$ and $`\tfrac{1}{2}\in V`$, the choice of $`v`$ as the largest grid point not exceeding $`v^\ast`$ implies

MATH
\begin{equation}
\label{eq:v-lower-bound-case1}
v\;\ge\;\tfrac12.
\end{equation}

Combining [eq:phi-diff-case1-final], [eq:grid-gap-case1], and [eq:v-lower-bound-case1], we obtain

MATH
\begin{equation}
\label{eq:phi-diff-case1-bound}
\phi(v^\ast)-\phi(v)
\;\le\;
\frac{\varepsilon/n}{\gamma^{2}\cdot (1/2)}\,F(\mathbf{o})
\;=\;
\frac{2\varepsilon}{\gamma^{2}n}\,F(\mathbf{o}).
\end{equation}

Using [eq:case1-grid-choice] and [eq:phi-def-double], [eq:phi-diff-case1-bound] gives

MATH
\begin{equation}
\label{eq:case1-final}
F\bigl(\mathbf{x}{(i-1)} + v^\ast \mathbf{e}_{u_i}\bigr)
-
F\bigl(\mathbf{x}{(i-1)} + a_i \mathbf{e}_{u_i}\bigr)
\;\le\;
\frac{2\varepsilon}{\gamma^{2}n}\,F(\mathbf{o}).
\end{equation}

Since $`v^\ast`$ maximizes $`v\mapsto F(\mathbf{x}{(i-1)}+v\,\mathbf{e}_{u_i})`$ over $`[0,1]`$, [eq:case1-final] is equivalent to [eq:double-case1], proving the first claim.

Case 2: $`v^\ast< \tfrac12`$. Let $`v\in V`$ be the smallest grid point with $`v\ge v^\ast`$. Then

MATH
\begin{equation}
\label{eq:grid-gap-case2}
v-v^\ast\;\le\;\frac{\varepsilon}{n},
\end{equation}

and, since $`\varepsilon^{-1}`$ is even and $`\tfrac12\in V`$, we have

MATH
\begin{equation}
\label{eq:v-upper-bound-case2}
v\;\le\;\tfrac12.
\end{equation}

By the choice of $`a_i`$ as a maximizer over the grid,

MATH
\begin{equation}
\label{eq:case2-grid-choice}
F\bigl(\mathbf{x}{(i-1)}+a_i\,\mathbf{e}_{u_i}\bigr)-F\bigl(\mathbf{x}{(i-1)}+v^\ast \mathbf{e}_{u_i}\bigr)
\;\ge\;
F\bigl(\mathbf{x}{(i-1)}+v\,\mathbf{e}_{u_i}\bigr)-F\bigl(\mathbf{x}{(i-1)}+v^\ast \mathbf{e}_{u_i}\bigr).
\end{equation}

Using the same function $`\phi`$ as in [eq:phi-def-double], for $`t\in[v^\ast,v]`$ we have $`t\le v`$, so by [eq:gamma-monotone-double],

MATH
\begin{equation}
\label{eq:phi-prime-lower-case2}
\phi'(t)\;\ge\;\gamma\,\phi'(v), \qquad t\in[v^\ast,v].
\end{equation}

Integrating [eq:phi-prime-lower-case2] over $`t\in[v^\ast,v]`$ yields

MATH
\begin{equation}
\label{eq:phi-diff-case2-1}
\phi(v)-\phi(v^\ast)
\;=\;\int_{v^\ast}^{v}\phi'(t)\,dt
\;\ge\;
\int_{v^\ast}^{v}\gamma\,\phi'(v)\,dt
\;=\;
(v-v^\ast)\,\gamma\,\phi'(v).
\end{equation}

For $`s\in[v,1]`$, we have $`v\le s`$, so [eq:gamma-monotone-double] implies

MATH
\begin{equation}
\label{eq:phi-prime-upper-case2}
\phi'(v)\;\ge\;\gamma\,\phi'(s), \qquad s\in[v,1].
\end{equation}

Integrating [eq:phi-prime-upper-case2] over $`s\in[v,1]`$ gives

MATH
\begin{equation}
\label{eq:phi-diff-case2-2}
\phi(1)-\phi(v)
\;=\;\int_{v}^{1}\phi'(s)\,ds
\;\le\;
\int_{v}^{1}\frac{1}{\gamma}\,\phi'(v)\,ds
\;=\;
\frac{1-v}{\gamma}\,\phi'(v).
\end{equation}

Rearranging [eq:phi-diff-case2-2] yields

MATH
\begin{equation}
\label{eq:phi-prime-lower-from1}
\phi'(v)
\;\ge\;
\frac{\gamma}{1-v}\,\bigl[\phi(1)-\phi(v)\bigr].
\end{equation}

Combining [eq:phi-diff-case2-1] and [eq:phi-prime-lower-from1], we obtain

MATH
\begin{equation}
\label{eq:phi-diff-case2-combined}
\phi(v)-\phi(v^\ast)
\;\ge\;
(v-v^\ast)\,\gamma\cdot
\frac{\gamma}{1-v}\,\bigl[\phi(1)-\phi(v)\bigr]
\;=\;
\frac{(v-v^\ast)\,\gamma^{2}}{1-v}\,\bigl[\phi(1)-\phi(v)\bigr].
\end{equation}

Using [eq:phi-def-double], note that

MATH
\begin{equation}
\label{eq:phi-1-v-bounded}
\phi(1)-\phi(v)
\;=\;
F\bigl(\mathbf{x}{(i-1)}+\mathbf{e}_{u_i}\bigr)
-
F\bigl(\mathbf{x}{(i-1)}+v\,\mathbf{e}_{u_i}\bigr)
\;\le\;
F(\mathbf{o}),
\end{equation}

since $`F`$ is nonnegative and maximized at $`\mathbf{o}`$. Substituting [eq:phi-1-v-bounded] and the bounds [eq:grid-gap-case2] and [eq:v-upper-bound-case2] into [eq:phi-diff-case2-combined] gives

MATH
\begin{align}
\phi(v)-\phi(v^\ast)
&\;\ge\;
\frac{(v-v^\ast)\,\gamma^{2}}{1-v}\,\bigl[\phi(1)-\phi(v)\bigr]
\notag\\
&\;\ge\;
\frac{(v-v^\ast)\,\gamma^{2}}{1-v}\,(-F(\mathbf{o}))
\qquad\text{(since \(\phi(1)-\phi(v)\ge -F(\mathbf{o})\))}\notag\\
&\;\ge\;
-\,\frac{\varepsilon/n}{\,1/2\,}\,\gamma^{2}F(\mathbf{o})
\;=\;
-\,\frac{2\varepsilon}{n}\,\gamma^{2}F(\mathbf{o}),
\label{eq:phi-diff-case2-bound}
\end{align}

where we used $`v-v^\ast\le \varepsilon/n`$ and $`v\le\tfrac12`$ (hence $`1-v\ge\tfrac12`$).

Using [eq:case2-grid-choice], [eq:phi-def-double], and [eq:phi-diff-case2-bound], we obtain

MATH
\begin{equation}
\label{eq:case2-final}
F\bigl(\mathbf{x}{(i-1)}+a_i\,\mathbf{e}_{u_i}\bigr)-F\bigl(\mathbf{x}{(i-1)}+v^\ast \mathbf{e}_{u_i}\bigr)
\;\ge\;
-\,\frac{2\varepsilon}{n}\,\gamma^{2}F(\mathbf{o}).
\end{equation}

Since $`v^\ast`$ maximizes $`v\mapsto F(\mathbf{x}{(i-1)}+v\,\mathbf{e}_{u_i})`$ over $`[0,1]`$, [eq:case2-final] is equivalent to [eq:double-case2], proving the second claim.

The two cases [eq:double-case1] and [eq:double-case2] together establish the lemma. ◻

Similarly, we obtain an analogous result for $`\mathbf{y}`$. We omit the proof, as it mirrors the argument of Lemma 14.

Lemma 15. *For any integer $`i\in\{1,\dots,n\}`$, let

MATH
\begin{equation}
\label{eq:vi-star-y-def}
    v^\ast \;\in\; \arg\max_{v\in[0,1]} F\bigl(\mathbf{y}{(i-1)}-v\,\mathbf{e}_{u_i}\bigr).
\end{equation}

If $`v^\ast\ge \tfrac12`$, then

MATH
\begin{equation}
\label{eq:double2-case1}
F\bigl(\mathbf{y}{(i-1)}-b_i\,\mathbf{e}_{u_i}\bigr)\ \ge\
\max_{v\in[0,1]} F\bigl(\mathbf{y}{(i-1)}-v\,\mathbf{e}_{u_i}\bigr)
\;-\; \frac{2\varepsilon}{n}\,\gamma^{2}\,F(\mathbf{o}),
\end{equation}

and if $`v^\ast<\tfrac12`$, then

MATH
\begin{equation}
\label{eq:double2-case2}
F\bigl(\mathbf{y}{(i-1)}-b_i\,\mathbf{e}_{u_i}\bigr)\ \ge\
\max_{v\in[0,1]} F\bigl(\mathbf{y}{(i-1)}-v\,\mathbf{e}_{u_i}\bigr)
\;-\; \frac{2\varepsilon}{\gamma^{2}n}\,F(\mathbf{o}).
\end{equation}
*

At each iteration, the $`\gamma`$-aware mixing step of Algorithm [algo:double] (the assignment to $`w_i`$) guarantees a quantifiable increase in the objective. The next lemma lower-bounds this per-coordinate progress for both trajectories, showing that the gain is a convex-combination–type quadratic term that scales with $`\gamma`$.

Lemma 16. *For every integer $`1\le i\le n`$,

MATH
\begin{equation}
\label{eq:lemma-s2c3-x}
F\bigl(\mathbf{x}{(i)}\bigr)-F\bigl(\mathbf{x}{(i-1)}\bigr)\ \ge\ \frac{\Delta_{a,i}^{2}}{\Delta_{a,i}+\gamma^{3}\Delta_{b,i}}
\end{equation}
```

and

``` math
\begin{equation}
\label{eq:lemma-s2c3-y}
F\bigl(\mathbf{y}{(i)}\bigr)-F\bigl(\mathbf{y}{(i-1)}\bigr)\ \ge\ \frac{\gamma^{3}\Delta_{b,i}^{2}}{\Delta_{a,i}+\gamma^{3}\Delta_{b,i}}.
\end{equation}
```*



*Proof.* **Increase of $`F`$ along $`\mathbf{x}`$.** By the definition
of $`\mathbf{x}{(i)}`$ and
``` math
\begin{equation}
\label{eq:w-i-def}
    w_i\;=\;\frac{\Delta_{a,i}\,a_i+\gamma\,\Delta_{b,i}\,(1-b_i)}{\Delta_{a,i}+\gamma\,\Delta_{b,i}},
\end{equation}
```

we can write the new point $`\mathbf{x}{(i)}`$ as a convex combination of two one-dimensional updates:

``` math
\begin{equation}
\label{eq:x-i-update}
\mathbf{x}{(i)} \;=\; \mathbf{x}{(i-1)} 
\;+\; \frac{\Delta_{a,i}}{\Delta_{a,i}+\gamma \Delta_{b,i}}\,a_i\,\mathbf{e}_{u_i}
\;+\; \frac{\gamma\Delta_{b,i}}{\Delta_{a,i}+\gamma \Delta_{b,i}}\,(1-b_i)\,\mathbf{e}_{u_i}.
\end{equation}
```

Equivalently, $`\mathbf{x}{(i)}`$ is the convex combination

``` math
\begin{equation}
\label{eq:x-i-as-convex-comb}
\mathbf{x}{(i)}
\;=\;
\frac{\Delta_{a,i}}{\Delta_{a,i}+\gamma \Delta_{b,i}}
    \bigl(\mathbf{x}{(i-1)}+a_i\,\mathbf{e}_{u_i}\bigr)
\;+\;
\frac{\gamma\Delta_{b,i}}{\Delta_{a,i}+\gamma \Delta_{b,i}}
    \bigl(\mathbf{x}{(i-1)}+(1-b_i)\,\mathbf{e}_{u_i}\bigr).
\end{equation}
```

Therefore

``` math
\begin{equation}
\label{eq:x-diff-start}
\begin{aligned}
&F\bigl(\mathbf{x}{(i)}\bigr)-F\bigl(\mathbf{x}{(i-1)}\bigr)= F\!\left(
     \frac{\Delta_{a,i}}{\Delta_{a,i}+\gamma \Delta_{b,i}} \bigl(\mathbf{x}{(i-1)}+a_i\,\mathbf{e}_{u_i}\bigr)+ \frac{\gamma\Delta_{b,i}}{\Delta_{a,i}+\gamma \Delta_{b,i}} \bigl(\mathbf{x}{(i-1)}+(1-b_i)\,\mathbf{e}_{u_i}\bigr)
   \right)\\ &\hspace{12cm}-F\bigl(\mathbf{x}{(i-1)}\bigr).
\end{aligned}
\end{equation}
```

Now we apply Lemma 1 (the one-dimensional $`\gamma`$–weak DR convexity-type bound) to the pair

``` math
\mathbf{z}^{(1)}=\mathbf{x}{(i-1)}+a_i\,\mathbf{e}_{u_i},
\qquad
\mathbf{z}^{(2)}=\mathbf{x}{(i-1)}+(1-b_i)\,\mathbf{e}_{u_i},
```

with mixing weights

``` math
\begin{equation}
\label{eq:lambda-mu-def}
\lambda
\;=\;
\frac{\gamma\Delta_{b,i}}{\Delta_{a,i}+\gamma\Delta_{b,i}},
\qquad
1-\lambda
\;=\;
\frac{\Delta_{a,i}}{\Delta_{a,i}+\gamma\Delta_{b,i}}.
\end{equation}
```

Lemma 1 states that for such a convex combination we have

``` math
\begin{equation}
\label{eq:lemma-simple2-use}
F\bigl((1-\lambda)\mathbf{z}^{(1)}+\lambda \mathbf{z}^{(2)}\bigr)
\;\ge\;
\frac{(1-\lambda)\,F(\mathbf{z}^{(1)})+\gamma^{2}\lambda\,F(\mathbf{z}^{(2)})}{(1-\lambda)+\gamma^{2}\lambda}.
\end{equation}
```

Substituting [eq:lambda-mu-def] into [eq:lemma-simple2-use], and noting that

``` math
(1-\lambda)+\gamma^{2}\lambda
\;=\;
\frac{\Delta_{a,i}+\gamma^{3}\Delta_{b,i}}{\Delta_{a,i}+\gamma\Delta_{b,i}},
```

we obtain

``` math
\begin{equation}
\label{eq:x-middle-ineq}
\begin{aligned}
F\bigl(\mathbf{x}{(i)}\bigr)
&\ge
\frac{\Delta_{a,i}\,F\bigl(\mathbf{x}{(i-1)}+a_i\,\mathbf{e}_{u_i}\bigr)
      + \gamma^{3}\Delta_{b,i}\,F\bigl(\mathbf{x}{(i-1)}+(1-b_i)\,\mathbf{e}_{u_i}\bigr)}
     {\Delta_{a,i}+\gamma^{3}\Delta_{b,i}}.
\end{aligned}
\end{equation}
```

Subtracting $`F\bigl(\mathbf{x}{(i-1)}\bigr)`$ from both sides of [eq:x-middle-ineq], and grouping terms, yields

``` math
\begin{equation}
\label{eq:x-diff-expanded}
\begin{aligned}
& F\bigl(\mathbf{x}{(i)}\bigr)-F\bigl(\mathbf{x}{(i-1)}\bigr)\\
&\ge
\frac{\Delta_{a,i}\bigl[F\bigl(\mathbf{x}{(i-1)}+a_i\,\mathbf{e}_{u_i}\bigr)-F\bigl(\mathbf{x}{(i-1)}\bigr)\bigr]
     + \gamma^{3}\Delta_{b,i}\bigl[F\bigl(\mathbf{x}{(i-1)}+(1-b_i)\,\mathbf{e}_{u_i}\bigr)-F\bigl(\mathbf{x}{(i-1)}\bigr)\bigr]}
     {\Delta_{a,i}+\gamma^{3}\Delta_{b,i}}.
\end{aligned}
\end{equation}
```

Here we used that the two coefficients in the numerator of [eq:x-middle-ineq] sum to the denominator, so subtracting $`F(\mathbf{x}{(i-1)})`$ distributes across both terms and isolates the directional gains.

By definition of $`\Delta_{a,i}`$,

``` math
\begin{equation}
\label{eq:Delta-a-def}
\Delta_{a,i}
\;=\;
F\bigl(\mathbf{x}{(i-1)}+a_i\,\mathbf{e}_{u_i}\bigr)-F\bigl(\mathbf{x}{(i-1)}\bigr),
\end{equation}
```

so the first term in the numerator of [eq:x-diff-expanded] is exactly $`\Delta_{a,i}^{2}`$. For the second term we use the $`\gamma`$–weakly DR property to compare the gain at $`\mathbf{x}{(i-1)}`$ with the corresponding gain at $`\mathbf{y}{(i-1)}`$. Along the $`u_i`$-th coordinate, the gain at the lower point $`\mathbf{x}{(i-1)}`$ when raising the coordinate from $`0`$ to $`1-b_i`$ is at least a $`\gamma`$-fraction of the corresponding gain at the higher point, from $`\mathbf{y}{(i-1)}-\mathbf{e}_{u_i}`$ up to $`\mathbf{y}{(i-1)}-b_i\,\mathbf{e}_{u_i}`$. This yields

``` math
\begin{equation}
\label{eq:x-second-term-lower}
F\bigl(\mathbf{x}{(i-1)}+(1-b_i)\,\mathbf{e}_{u_i}\bigr)-F\bigl(\mathbf{x}{(i-1)}\bigr)
\;\ge\;
\gamma\,\Bigl[F\bigl(\mathbf{y}{(i-1)}-b_i\,\mathbf{e}_{u_i}\bigr)-F\bigl(\mathbf{y}{(i-1)}-\mathbf{e}_{u_i}\bigr)\Bigr].
\end{equation}
```

Multiplying [eq:x-second-term-lower] by $`\gamma^{3}\Delta_{b,i}`$ gives a $`\gamma^{4}`$ factor inside the numerator. Substituting [eq:Delta-a-def] and [eq:x-second-term-lower] into [eq:x-diff-expanded], we obtain

``` math
\begin{equation}
\label{eq:x-diff-with-y}
\begin{aligned}
F\bigl(\mathbf{x}{(i)}\bigr)-F\bigl(\mathbf{x}{(i-1)}\bigr)
&\ge
\frac{\Delta_{a,i}^{2}
      + \gamma^{4}\Delta_{b,i}\Bigl[F\bigl(\mathbf{y}{(i-1)}-b_i\,\mathbf{e}_{u_i}\bigr)
                                   -F\bigl(\mathbf{y}{(i-1)}-\mathbf{e}_{u_i}\bigr)\Bigr]}
     {\Delta_{a,i}+\gamma^{3}\Delta_{b,i}}.
\end{aligned}
\end{equation}
```

By the definition of $`b_i`$ in Algorithm [algo:double], the expression

``` math
F\bigl(\mathbf{y}{(i-1)}-b_i\,\mathbf{e}_{u_i}\bigr)
-
F\bigl(\mathbf{y}{(i-1)}-\mathbf{e}_{u_i}\bigr)
```

is nonnegative: $`b_i`$ is chosen over a grid containing $`v=1`$ to (approximately) maximize $`v\mapsto F\bigl(\mathbf{y}{(i-1)}-v\,\mathbf{e}_{u_i}\bigr)`$, so $`F\bigl(\mathbf{y}{(i-1)}-b_i\,\mathbf{e}_{u_i}\bigr)\ge F\bigl(\mathbf{y}{(i-1)}-\mathbf{e}_{u_i}\bigr)`$. Hence the term multiplied by $`\gamma^{4}\Delta_{b,i}`$ in [eq:x-diff-with-y] is nonnegative, and we may drop it to obtain the simpler bound

``` math
\begin{equation}
\label{eq:x-final-lower}
F\bigl(\mathbf{x}{(i)}\bigr)-F\bigl(\mathbf{x}{(i-1)}\bigr)
\ \ge\ 
\frac{\Delta_{a,i}^{2}}{\Delta_{a,i}+\gamma^{3}\Delta_{b,i}},
\end{equation}
```

which is precisely [eq:lemma-s2c3-x].

**Increase of $`F`$ along $`\mathbf{y}`$.** The argument for $`\mathbf{y}`$ is symmetric. From the update rule for $`\mathbf{y}{(i)}`$ and the same weight $`w_i`$, we can write

``` math
\begin{equation}
\label{eq:y-i-as-convex-comb}
\mathbf{y}{(i)}
\;=\;
\frac{\Delta_{a,i}}{\Delta_{a,i}+\gamma \Delta_{b,i}}
    \bigl(\mathbf{y}{(i-1)}+(a_i-1)\,\mathbf{e}_{u_i}\bigr)
\;+\;
\frac{\gamma\Delta_{b,i}}{\Delta_{a,i}+\gamma \Delta_{b,i}}
    \bigl(\mathbf{y}{(i-1)}-b_i\,\mathbf{e}_{u_i}\bigr).
\end{equation}
```

Hence

``` math
\begin{equation}
\label{eq:y-diff-start}
\begin{aligned}
F\bigl(\mathbf{y}{(i)}\bigr)-F\bigl(\mathbf{y}{(i-1)}\bigr)
&= F\!\left(
     \frac{\Delta_{a,i}}{\Delta_{a,i}+\gamma \Delta_{b,i}} \bigl(\mathbf{y}{(i-1)}+(a_i-1)\,\mathbf{e}_{u_i}\bigr)
     + \frac{\gamma\Delta_{b,i}}{\Delta_{a,i}+\gamma \Delta_{b,i}} \bigl(\mathbf{y}{(i-1)}-b_i\,\mathbf{e}_{u_i}\bigr)
   \right)\\
   & \hspace{8cm}-F\bigl(\mathbf{y}{(i-1)}\bigr).
\end{aligned}
\end{equation}
```

Applying Lemma 1 to the pair

``` math
\tilde{\mathbf{z}}^{(1)}=\mathbf{y}{(i-1)}+(a_i-1)\,\mathbf{e}_{u_i},
\qquad
\tilde{\mathbf{z}}^{(2)}=\mathbf{y}{(i-1)}-b_i\,\mathbf{e}_{u_i},
```

with the same weights as in [eq:lambda-mu-def], we obtain

``` math
\begin{equation}
\label{eq:y-middle-ineq}
F\bigl(\mathbf{y}{(i)}\bigr)
\;\ge\;
\frac{\Delta_{a,i}\,F\bigl(\mathbf{y}{(i-1)}+(a_i-1)\,\mathbf{e}_{u_i}\bigr)
      + \gamma^{3}\Delta_{b,i}\,F\bigl(\mathbf{y}{(i-1)}-b_i\,\mathbf{e}_{u_i}\bigr)}
     {\Delta_{a,i}+\gamma^{3}\Delta_{b,i}}.
\end{equation}
```

Subtracting $`F(\mathbf{y}{(i-1)})`$ and regrouping gives

``` math
\begin{equation}
\label{eq:y-diff-expanded}
\begin{aligned}
&F\bigl(\mathbf{y}{(i)}\bigr)-F\bigl(\mathbf{y}{(i-1)}\bigr)\\
&\ge
\frac{\Delta_{a,i}\bigl[F\bigl(\mathbf{y}{(i-1)}+(a_i-1)\,\mathbf{e}_{u_i}\bigr)-F\bigl(\mathbf{y}{(i-1)}\bigr)\bigr]
     + \gamma^{3}\Delta_{b,i}\bigl[F\bigl(\mathbf{y}{(i-1)}-b_i\,\mathbf{e}_{u_i}\bigr)-F\bigl(\mathbf{y}{(i-1)}\bigr)\bigr]}
     {\Delta_{a,i}+\gamma^{3}\Delta_{b,i}}.
\end{aligned}
\end{equation}
```

Using the $`\gamma`$–weakly DR property to compare the section at $`\mathbf{y}{(i-1)}`$ with that at $`\mathbf{x}{(i-1)}`$ along the $`u_i`$-th coordinate, we obtain

``` math
\begin{equation}
\label{eq:y-first-term-lower}
\begin{aligned}
    F\bigl(\mathbf{y}{(i-1)}+(a_i-1)\,\mathbf{e}_{u_i}\bigr)-F\bigl(\mathbf{y}{(i-1)}\bigr)
\;\ge\;
\frac{1}{\gamma}\Bigl[F\bigl(\mathbf{x}{(i-1)}+a_i\,\mathbf{e}_{u_i}\bigr)
                      -F\bigl(\mathbf{x}{(i-1)}+\mathbf{e}_{u_i}\bigr)\Bigr].
\end{aligned}
\end{equation}
```

By definition of $`\Delta_{b,i}`$,

``` math
\begin{equation}
\label{eq:Delta-b-def}
\Delta_{b,i}
\;=\;
F\bigl(\mathbf{y}{(i-1)}-b_i\,\mathbf{e}_{u_i}\bigr)-F\bigl(\mathbf{y}{(i-1)}\bigr),
\end{equation}
```

so the second term in the numerator of [eq:y-diff-expanded] is $`\gamma^{3}\Delta_{b,i}^{2}`$. Substituting [eq:y-first-term-lower] and [eq:Delta-b-def] into [eq:y-diff-expanded] yields

``` math
\begin{equation}
\label{eq:y-diff-with-x}
\begin{aligned}
F\bigl(\mathbf{y}{(i)}\bigr)-F\bigl(\mathbf{y}{(i-1)}\bigr)
&\ge
\frac{ \tfrac{1}{\gamma}\Delta_{a,i}\Bigl[F\bigl(\mathbf{x}{(i-1)}+a_i\,\mathbf{e}_{u_i}\bigr)
    -F\bigl(\mathbf{x}{(i-1)}+\mathbf{e}_{u_i}\bigr)\Bigr]
      + \gamma^{3}\Delta_{b,i}^{2}}
     {\Delta_{a,i}+\gamma^{3}\Delta_{b,i}}.
\end{aligned}
\end{equation}
```

By the choice of $`a_i`$ in Algorithm [algo:double], the term

``` math
F\bigl(\mathbf{x}{(i-1)}+a_i\,\mathbf{e}_{u_i}\bigr)
-
F\bigl(\mathbf{x}{(i-1)}+\mathbf{e}_{u_i}\bigr)
```

is nonpositive (since $`a_i`$ optimally trades off against the unit step), so the first term in the numerator of [eq:y-diff-with-x] is nonpositive. Dropping this nonpositive term yields the simpler lower bound

``` math
\begin{equation}
\label{eq:y-final-lower}
F\bigl(\mathbf{y}{(i)}\bigr)-F\bigl(\mathbf{y}{(i-1)}\bigr)
\ \ge\ 
\frac{\gamma^{3}\Delta_{b,i}^{2}}{\Delta_{a,i}+\gamma^{3}\Delta_{b,i}},
\end{equation}
```

which is exactly [eq:lemma-s2c3-y]. Combining [eq:x-final-lower] and [eq:y-final-lower] completes the proof. ◻

**Reference path and its contraction.**

To relate the two trajectories $`\mathbf{x}{(i)}`$ and $`\mathbf{y}{(i)}`$ to an arbitrary initial vector $`\mathbf{o}`$, we introduce the standard lattice-coupled reference sequence

``` math
\begin{equation}
\label{eq:o-ref-path-def}
\mathbf{o}^{(i)} \;:=\; \bigl(\,\mathbf{o}\vee \mathbf{x}{(i)}\,\bigr)\;\wedge\;\mathbf{y}{(i)}
\qquad \text{for } i=0,1,\dots,n.
\end{equation}
```

Note that $`\mathbf{o}^{(0)}=\mathbf{o}`$ and $`\mathbf{o}^{(n)}=\mathbf{x}{(n)}=\mathbf{y}{(n)}`$, since Algorithm [algo:double] equalizes the two trajectories by the end. The next lemma bounds the one-step decrease of $`F(\mathbf{o}^{(i)})`$; telescoping this bound over $`i`$ yields the global comparison used in the final guarantee.
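
As a concrete illustration (with placeholder vectors, not from the paper), the reference point of [eq:o-ref-path-def] is computed coordinatewise with the lattice join and meet:

```python
import numpy as np

# Reference point of [eq:o-ref-path-def]: o^(i) = (o join x(i)) meet y(i),
# where join is the coordinatewise max and meet the coordinatewise min.
o = np.array([0.2, 0.9, 0.5])
x_i = np.array([0.4, 0.1, 0.5])   # illustrative stand-in for x(i)
y_i = np.array([0.8, 0.9, 0.5])   # illustrative stand-in for y(i); x(i) <= y(i)
o_i = np.minimum(np.maximum(o, x_i), y_i)
print(o_i)  # [0.4 0.9 0.5]
```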

**Lemma 17**. *For every integer $`1 \le i \le n`$,

``` math
\begin{equation}
\label{eq:lemma-s2C4-claim}
F\bigl(\mathbf{o}^{(i-1)}\bigr) - F\bigl(\mathbf{o}^{(i)}\bigr)
\;\le\;
\frac{ \Delta_{a,i}\,\Delta_{b,i}}{\Delta_{a,i}+\gamma^{2}\Delta_{b,i}}
\;+\; \frac{2\varepsilon}{n}\,\gamma^{3}\,F(\mathbf{o}).
\end{equation}
```*



*Proof.* We first treat the case
``` math
\begin{equation}
\label{eq:case-o-inc}
\mathbf{o}^{(i-1)}_{u_i} \;\le\; \mathbf{o}^{(i)}_{u_i}.
\end{equation}
```

Define the single-coordinate setter

``` math
\begin{equation}
\label{eq:set-operator-def}
\operatorname{set}_{u}(\mathbf{z},t)
\;:=\;
\mathbf{z}- (\mathbf{z}_{u}-t)\,\mathbf{e}_{u},
\end{equation}
```

which replaces the $`u`$-th coordinate of $`\mathbf{z}`$ with $`t`$ and leaves all other coordinates unchanged.

By $`\gamma`$–weakly DR-submodularity, for any $`\mathbf{p}\le \mathbf{q}`$ and any scalars $`\alpha \le \beta`$, we have the one-dimensional comparison

``` math
\begin{equation}
\label{eq:dr-single}
F\bigl(\operatorname{set}_{u}(\mathbf{p},\beta)\bigr)-F\bigl(\operatorname{set}_{u}(\mathbf{p},\alpha)\bigr)
\;\ge\; \gamma \Bigl[
F\bigl(\operatorname{set}_{u}(\mathbf{q},\beta)\bigr)-F\bigl(\operatorname{set}_{u}(\mathbf{q},\alpha)\bigr)\Bigr].
\end{equation}
```

Here the left-hand side is the gain when changing coordinate $`u`$ from $`\alpha`$ to $`\beta`$ at the lower point $`\mathbf{p}`$, and the right-hand side compares it to the corresponding gain at the higher point $`\mathbf{q}`$, scaled by $`\gamma`$.

By construction of $`\mathbf{o}^{(i-1)}`$ and $`\mathbf{y}{(i-1)}`$ in [eq:o-ref-path-def], we have

``` math
\begin{equation}
\label{eq:o-less-y}
\mathbf{o}^{(i-1)} \;\le\; \mathbf{y}{(i-1)}.
\end{equation}
```

In the present case we also have [eq:case-o-inc]. Set

``` math
\begin{equation}
\label{eq:dr-single-subst}
\mathbf{p}=\mathbf{o}^{(i-1)},\quad \mathbf{q}=\mathbf{y}{(i-1)},\quad u=u_i,\quad
\alpha=\mathbf{o}^{(i-1)}_{u_i},\quad \beta=\mathbf{o}^{(i)}_{u_i},
\end{equation}
```

which satisfy the requirements of [eq:dr-single]. Then [eq:dr-single] gives

``` math
\begin{equation}
\label{eq:dr-single-applied-raw}
\begin{aligned}
&F\bigl(\operatorname{set}_{u_i}(\mathbf{o}^{(i-1)},\,\mathbf{o}^{(i)}_{u_i})\bigr)
 - F\bigl(\operatorname{set}_{u_i}(\mathbf{o}^{(i-1)},\,\mathbf{o}^{(i-1)}_{u_i})\bigr)\\
&\qquad \hspace{3cm} \ge\;
\gamma \Bigl[
F\bigl(\operatorname{set}_{u_i}(\mathbf{y}^{(i-1)},\,\mathbf{o}^{(i)}_{u_i})\bigr)
 - F\bigl(\operatorname{set}_{u_i}(\mathbf{y}^{(i-1)},\,\mathbf{o}^{(i-1)}_{u_i})\bigr) \Bigr].
\end{aligned}
\end{equation}
```

By definition of the reference path [eq:o-ref-path-def] and of the setter [eq:set-operator-def],

``` math
\begin{equation}
\label{eq:set-equals-o}
\operatorname{set}_{u_i}(\mathbf{o}^{(i-1)},\,\mathbf{o}^{(i)}_{u_i})=\mathbf{o}^{(i)},
\qquad
\operatorname{set}_{u_i}(\mathbf{o}^{(i-1)},\,\mathbf{o}^{(i-1)}_{u_i})=\mathbf{o}^{(i-1)}.
\end{equation}
```

Using [eq:set-equals-o] in [eq:dr-single-applied-raw], the left-hand side becomes $`F(\mathbf{o}^{(i)})-F(\mathbf{o}^{(i-1)})`$, so we obtain

``` math
\begin{equation}
\label{eq:dr-single-applied-o}
F\bigl(\mathbf{o}^{(i)}\bigr)-F\bigl(\mathbf{o}^{(i-1)}\bigr)
\;\ge\; \gamma \Bigl[
F\bigl(\operatorname{set}_{u_i}(\mathbf{y}^{(i-1)},\,\mathbf{o}^{(i)}_{u_i})\bigr)
 - F\bigl(\operatorname{set}_{u_i}(\mathbf{y}^{(i-1)},\,\mathbf{o}^{(i-1)}_{u_i})\bigr) \Bigr].
\end{equation}
```

Using the explicit form [eq:set-operator-def], for any $`\mathbf{z},t`$ we have $`\operatorname{set}_{u_i}(\mathbf{z},t)=\mathbf{z}- (\mathbf{z}_{u_i}-t)\,\mathbf{e}_{u_i}`$. Hence

``` math
\begin{equation}
\label{eq:set-as-sub}
\operatorname{set}_{u_i}(\mathbf{y}^{(i-1)},\,\mathbf{o}^{(i)}_{u_i})
=
\mathbf{y}^{(i-1)}-(\mathbf{y}^{(i-1)}_{u_i}-\mathbf{o}^{(i)}_{u_i})\,\mathbf{e}_{u_i}
\end{equation}
```

and

``` math
\begin{equation}
\label{eq:set-as-sub-prev}
\operatorname{set}_{u_i}(\mathbf{y}^{(i-1)},\,\mathbf{o}^{(i-1)}_{u_i})
=
\mathbf{y}^{(i-1)}-(\mathbf{y}^{(i-1)}_{u_i}-\mathbf{o}^{(i-1)}_{u_i})\,\mathbf{e}_{u_i}.
\end{equation}
```

Since $`\mathbf{o}^{(i)}`$ and $`\mathbf{o}^{(i-1)}`$ share all coordinates except possibly $`u_i`$, and $`\mathbf{y}^{(i-1)}_{u_i}\in[0,1]`$, the quantities $`\mathbf{y}^{(i-1)}_{u_i}-\mathbf{o}^{(i)}_{u_i}`$ and $`\mathbf{y}^{(i-1)}_{u_i}-\mathbf{o}^{(i-1)}_{u_i}`$ lie in $`[0,1]`$. Moreover, coordinate $`u_i`$ of $`\mathbf{y}{(i-1)}`$ still has its initial value $`1`$ (it is modified only in iteration $`i`$), so

``` math
\begin{equation}
\label{eq:o-coord-sub-values}
\begin{aligned}
1-\mathbf{o}^{(i)}_{u_i} &= \mathbf{y}^{(i-1)}_{u_i}-\mathbf{o}^{(i)}_{u_i},\\
1-\mathbf{o}^{(i-1)}_{u_i} &= \mathbf{y}^{(i-1)}_{u_i}-\mathbf{o}^{(i-1)}_{u_i},
\end{aligned}
\end{equation}
```

which allows us to rewrite [eq:set-as-sub][eq:set-as-sub-prev] as

``` math
\begin{equation}
\label{eq:set-y-o-form}
\operatorname{set}_{u_i}(\mathbf{y}^{(i-1)},\,\mathbf{o}^{(i)}_{u_i})
=
\mathbf{y}^{(i-1)}-(1-\mathbf{o}^{(i)}_{u_i})\,\mathbf{e}_{u_i},
\qquad
\operatorname{set}_{u_i}(\mathbf{y}^{(i-1)},\,\mathbf{o}^{(i-1)}_{u_i})
=
\mathbf{y}^{(i-1)}-(1-\mathbf{o}^{(i-1)}_{u_i})\,\mathbf{e}_{u_i}.
\end{equation}
```

Substituting [eq:set-y-o-form] into [eq:dr-single-applied-o], we get

``` math
\begin{equation}
\label{eq:F-o-diff-y-section}
F\bigl(\mathbf{o}^{(i)}\bigr)-F\bigl(\mathbf{o}^{(i-1)}\bigr)
\;\ge\; \gamma \Bigl[
F\bigl(\mathbf{y}{(i-1)}-(1-\mathbf{o}^{(i)}_{u_i})\,\mathbf{e}_{u_i}\bigr)
-
F\bigl(\mathbf{y}{(i-1)}-(1-\mathbf{o}^{(i-1)}_{u_i})\,\mathbf{e}_{u_i}\bigr) \Bigr].
\end{equation}
```

We now bound each term on the right-hand side.

First, by Lemma 15, applied along the $`u_i`$-th coordinate,

``` math
\begin{equation}
\label{eq:double2-max-bound}
\max_{v\in[0,1]} F\bigl(\mathbf{y}{(i-1)}-v\,\mathbf{e}_{u_i}\bigr)
\;\le\;
F\bigl(\mathbf{y}{(i-1)}-b_i\,\mathbf{e}_{u_i}\bigr)
\;+\;\frac{2\varepsilon}{n}\,\gamma^{2}\,F(\mathbf{o}).
\end{equation}
```

Since $`1-\mathbf{o}^{(i-1)}_{u_i}\in[0,1]`$, we have

``` math
\begin{equation}
\label{eq:bound-at-o-prev-coord}
\begin{aligned} 
F\bigl(\mathbf{y}{(i-1)}-(1-\mathbf{o}^{(i-1)}_{u_i})\,\mathbf{e}_{u_i}\bigr)
&\;\le\;
\max_{v\in[0,1]} F\bigl(\mathbf{y}{(i-1)}-v\,\mathbf{e}_{u_i}\bigr)\\
&\;\le\;
F\bigl(\mathbf{y}{(i-1)}-b_i\,\mathbf{e}_{u_i}\bigr)
\;+\;\frac{2\varepsilon}{n}\,\gamma^{2}\,F(\mathbf{o}).
\end{aligned}
\end{equation}
```

Next, recall that the $`u_i`$-th coordinate of the reference point satisfies

``` math
\begin{equation}
\label{eq:o-i-coord-def}
\mathbf{o}^{(i)}_{u_i}
=
\frac{\Delta_{a,i}}{\Delta_{a,i}+\Delta_{b,i}}\,a_i
+ \frac{\Delta_{b,i}}{\Delta_{a,i}+\Delta_{b,i}}\,(1-b_i),
\end{equation}
```

by the definition of the coordinate update in Algorithm [algo:double]. Applying Lemma 1(1) along the $`u_i`$-th coordinate at $`\mathbf{y}{(i-1)}`$ with points

``` math
(1-a_i)\quad\text{and}\quad b_i,
```

and mixing weights proportional to $`\Delta_{a,i}`$ and $`\Delta_{b,i}`$, yields

``` math
\begin{equation}
\label{eq:y-section-convex}
\begin{aligned}
&F\bigl(\mathbf{y}{(i-1)} - (1-\mathbf{o}^{(i)}_{u_i})\,\mathbf{e}_{u_i}\bigr)\\
&\quad\ge
\frac{\Delta_{a,i}}{\Delta_{a,i}+\gamma^{2}\Delta_{b,i}}\;
F\bigl(\mathbf{y}{(i-1)}-(1-a_i)\,\mathbf{e}_{u_i}\bigr)
\;+\;
\frac{\gamma^{2}\Delta_{b,i}}{\Delta_{a,i}+\gamma^{2}\Delta_{b,i}}\;
F\bigl(\mathbf{y}{(i-1)}-b_i\,\mathbf{e}_{u_i}\bigr).
\end{aligned}
\end{equation}
```

Combining [eq:F-o-diff-y-section], [eq:bound-at-o-prev-coord], and [eq:y-section-convex], we obtain

``` math
\begin{equation}
\label{eq:o-diff-before-algebra}
\begin{aligned}
&F\bigl(\mathbf{o}^{(i)}\bigr)-F\bigl(\mathbf{o}^{(i-1)}\bigr)\\
& \quad\ge
\gamma \Biggl[
\frac{\Delta_{a,i}}{\Delta_{a,i}+\gamma^{2}\Delta_{b,i}}
\Bigl(F\bigl(\mathbf{y}{(i-1)}-(1-a_i)\,\mathbf{e}_{u_i}\bigr)
- F\bigl(\mathbf{y}{(i-1)}-b_i\,\mathbf{e}_{u_i}\bigr)\Bigr)
- \frac{2\varepsilon}{n}\,\gamma^{2}\,F(\mathbf{o})
\Biggr].
\end{aligned}
\end{equation}
```

The first term inside the brackets is the “true” directional gain between the points with coordinates $`1-a_i`$ and $`b_i`$; the second term comes from the discretization loss in Lemma 15.

Rewrite the first bracketed term in [eq:o-diff-before-algebra] by adding and subtracting $`F(\mathbf{y}{(i-1)})`$, and using the definition

``` math
\begin{equation}
\label{eq:Delta-b-def-recall}
\Delta_{b,i}
=
F\bigl(\mathbf{y}{(i-1)}-b_i\,\mathbf{e}_{u_i}\bigr)-F\bigl(\mathbf{y}{(i-1)}\bigr),
\end{equation}
```

to obtain

``` math
\begin{equation}
\label{eq:o-diff-mid-algebra}
\begin{aligned}
&F\bigl(\mathbf{y}{(i-1)}-(1-a_i)\,\mathbf{e}_{u_i}\bigr)
- F\bigl(\mathbf{y}{(i-1)}-b_i\,\mathbf{e}_{u_i}\bigr)\\
&\hspace{3cm}\qquad=
\Bigl(F\bigl(\mathbf{y}{(i-1)}-(1-a_i)\,\mathbf{e}_{u_i}\bigr)-F\bigl(\mathbf{y}{(i-1)}\bigr)\Bigr)
-\Delta_{b,i}.
\end{aligned}
\end{equation}
```

Substituting [eq:o-diff-mid-algebra] into [eq:o-diff-before-algebra], we get

``` math
\begin{equation}
\label{eq:o-diff-mid-full}
\begin{aligned}
&F\bigl(\mathbf{o}^{(i)}\bigr)-F\bigl(\mathbf{o}^{(i-1)}\bigr)\\
&\qquad\ge
\frac{\gamma\,\Delta_{a,i}}{\Delta_{a,i}+\gamma^{2}\Delta_{b,i}}
\Bigl(F\bigl(\mathbf{y}{(i-1)}-(1-a_i)\,\mathbf{e}_{u_i}\bigr)
- F\bigl(\mathbf{y}{(i-1)}\bigr) - \Delta_{b,i}\Bigr)
- \frac{2\varepsilon}{n}\,\gamma^{3}\,F(\mathbf{o}).
\end{aligned}
\end{equation}
```

We now transfer this expression from $`\mathbf{y}{(i-1)}`$ to $`\mathbf{x}{(i-1)}`$ using the $`\gamma`$–weakly DR property. Since

``` math
\begin{equation}
\label{eq:x-less-y}
\mathbf{x}{(i-1)}+a_i\,\mathbf{e}_{u_i} \;\le\; \mathbf{y}{(i-1)}-(1-a_i)\,\mathbf{e}_{u_i},
\end{equation}
```

the weak DR property implies that the loss at $`\mathbf{y}{(i-1)}`$ when moving coordinate $`u_i`$ from $`1`$ down to $`a_i`$ is at most a $`1/\gamma`$-scaled version of the corresponding loss at $`\mathbf{x}{(i-1)}`$ when moving the same coordinate from $`1`$ down to $`a_i`$. Formally,

``` math
\begin{equation}
\label{eq:y-to-x-gamma}
F\bigl(\mathbf{y}{(i-1)}-(1-a_i)\,\mathbf{e}_{u_i}\bigr)-F\bigl(\mathbf{y}{(i-1)}\bigr)
\;\ge\;
\frac{1}{\gamma}\Bigl[
F\bigl(\mathbf{x}{(i-1)}+a_i\,\mathbf{e}_{u_i}\bigr)-F\bigl(\mathbf{x}{(i-1)}+\mathbf{e}_{u_i}\bigr)
\Bigr].
\end{equation}
```

Substituting [eq:y-to-x-gamma] into [eq:o-diff-mid-full], we obtain

``` math
\begin{equation}
\label{eq:o-diff-x-section}
\begin{aligned}
&F\bigl(\mathbf{o}^{(i)}\bigr)-F\bigl(\mathbf{o}^{(i-1)}\bigr)\\
&\qquad \ge
\frac{\Delta_{a,i}}{\Delta_{a,i}+\gamma^{2}\Delta_{b,i}}
\Bigl(F\bigl(\mathbf{x}{(i-1)}+a_i\,\mathbf{e}_{u_i}\bigr)
- F\bigl(\mathbf{x}{(i-1)}+\mathbf{e}_{u_i}\bigr) - \Delta_{b,i}\Bigr)
- \frac{2\varepsilon}{n}\,\gamma^{3}\,F(\mathbf{o}).
\end{aligned}
\end{equation}
```

By the definition of $`a_i`$ in Algorithm [algo:double] (and the fact that the grid contains the point $`1`$), the choice of $`a_i`$ ensures that

``` math
\begin{equation}
\label{eq:a-i-choice-neg}
F\bigl(\mathbf{x}{(i-1)}+a_i\,\mathbf{e}_{u_i}\bigr)
- F\bigl(\mathbf{x}{(i-1)}+\mathbf{e}_{u_i}\bigr)
\;\le\;
0.
\end{equation}
```

Therefore

``` math
\begin{equation}
\label{eq:x-section-term-upper}
F\bigl(\mathbf{x}{(i-1)}+a_i\,\mathbf{e}_{u_i}\bigr)
- F\bigl(\mathbf{x}{(i-1)}+\mathbf{e}_{u_i}\bigr) - \Delta_{b,i}
\;\le\;
-\Delta_{b,i}.
\end{equation}
```

Substituting [eq:x-section-term-upper] into [eq:o-diff-x-section] yields

``` math
\begin{equation}
\label{eq:o-diff-final-case1}
F\bigl(\mathbf{o}^{(i)}\bigr)-F\bigl(\mathbf{o}^{(i-1)}\bigr)
\;\ge\;
-\,\frac{\Delta_{a,i}\,\Delta_{b,i}}{\Delta_{a,i}+\gamma^{2}\Delta_{b,i}}
- \frac{2\varepsilon}{n}\,\gamma^{3}\,F(\mathbf{o}).
\end{equation}
```

Rearranging [eq:o-diff-final-case1] gives

``` math
\begin{equation}
\label{eq:o-decrease-case1}
F\bigl(\mathbf{o}^{(i-1)}\bigr)-F\bigl(\mathbf{o}^{(i)}\bigr)
\;\le\;
\frac{\Delta_{a,i}\,\Delta_{b,i}}{\Delta_{a,i}+\gamma^{2}\Delta_{b,i}}
+ \frac{2\varepsilon}{n}\,\gamma^{3}\,F(\mathbf{o}),
\end{equation}
```

which is exactly [eq:lemma-s2C4-claim] in the case [eq:case-o-inc].

The remaining case $`\mathbf{o}^{(i-1)}_{u_i} > \mathbf{o}^{(i)}_{u_i}`$ is analogous: we reverse the roles of the “left” and “right” endpoints on the $`u_i`$-th coordinate and carry out the same argument, obtaining the same bound [eq:lemma-s2C4-claim]. This completes the proof. ◻

Combining the progress of the two trajectories $`\mathbf{x}{(i)}`$ and $`\mathbf{y}{(i)}`$ with the contraction of the reference path $`\mathbf{o}^{(i)}`$ yields the following inequality for any tradeoff parameter $`r\ge 0`$. It will telescope over $`i`$ to produce the final guarantee.

**Corollary 18**. *For every $`r \ge 0`$ and integer $`1 \le i \le n`$,

``` math
\begin{equation}
\label{eq:cor-main}
\begin{aligned}
&\frac{1}{r}\,\bigl[F(\mathbf{x}{(i)})-F(\mathbf{x}{(i-1)})\bigr]
\;+\; r\,\bigl[F(\mathbf{y}{(i)})-F(\mathbf{y}{(i-1)})\bigr]\\
& \hspace{5cm}\;\ge\;
2 \gamma^{3/2}\,\left( F\bigl(\mathbf{o}^{(i-1)}\bigr) - F\bigl(\mathbf{o}^{(i)}\bigr) - \frac{2\varepsilon}{n}\,\gamma^{3}\,F(\mathbf{o})
\right).
\end{aligned}
\end{equation}
```*



*Proof.* By Lemma 16, for each $`i`$ we have
``` math
\begin{equation}
\label{eq:lem-s2c3-recall}
F\bigl(\mathbf{x}{(i)}\bigr)-F\bigl(\mathbf{x}{(i-1)}\bigr)\ \ge\ \frac{\Delta_{a,i}^{2}}{\Delta_{a,i}+\gamma^{3}\Delta_{b,i}},
\qquad
F\bigl(\mathbf{y}{(i)}\bigr)-F\bigl(\mathbf{y}{(i-1)}\bigr)\ \ge\ \frac{\gamma^{3}\Delta_{b,i}^{2}}{\Delta_{a,i}+\gamma^{3}\Delta_{b,i}}.
\end{equation}
```

Multiplying the first inequality in [eq:lem-s2c3-recall] by $`1/r`$ and the second by $`r`$, and adding them, gives

``` math
\begin{equation}
\label{eq:cor-1}
\begin{aligned}
\frac{1}{r}\bigl[F(\mathbf{x}{(i)})-F(\mathbf{x}{(i-1)})\bigr]
&+ r\bigl[F(\mathbf{y}{(i)})-F(\mathbf{y}{(i-1)})\bigr] \ge \frac{(1/r)\,\Delta_{a,i}^{2}}{\Delta_{a,i}+\gamma^{3}\Delta_{b,i}}
   + \frac{r \,\gamma^{3}\,\Delta_{b,i}^{2}}{\Delta_{a,i}+\gamma^{3}\Delta_{b,i}}.
\end{aligned}
\end{equation}
```

The numerator in [eq:cor-1] can be rewritten as a completed square plus a mixed term:

``` math
\begin{equation}
\label{eq:cor-complete-square}
\frac{(1/r)\,\Delta_{a,i}^{2} + r\gamma^{3}\Delta_{b,i}^{2}}{\Delta_{a,i}+\gamma^{3}\Delta_{b,i}}
=
\frac{\left(\frac{\Delta_{a,i}}{\sqrt r}-\Delta_{b,i}\,\gamma^{3/2}\sqrt r\right)^{2}
       + 2 \gamma^{3/2}\,\Delta_{a,i}\Delta_{b,i}}{\Delta_{a,i}+\gamma^{3}\Delta_{b,i}}.
\end{equation}
```
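
For completeness, the identity behind [eq:cor-complete-square] is a direct expansion of the square:

``` math
\Bigl(\tfrac{\Delta_{a,i}}{\sqrt r}-\gamma^{3/2}\sqrt{r}\,\Delta_{b,i}\Bigr)^{2}
= \tfrac{1}{r}\,\Delta_{a,i}^{2} - 2\gamma^{3/2}\,\Delta_{a,i}\Delta_{b,i} + r\,\gamma^{3}\Delta_{b,i}^{2},
```

so adding back the cross term $`2\gamma^{3/2}\Delta_{a,i}\Delta_{b,i}`$ recovers the numerator $`(1/r)\,\Delta_{a,i}^{2}+r\gamma^{3}\Delta_{b,i}^{2}`$.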

Substituting [eq:cor-complete-square] into [eq:cor-1], and using that the square term is nonnegative, we obtain

``` math
\begin{equation}
\label{eq:cor-2}
\begin{aligned}
\frac{1}{r}\bigl[F(\mathbf{x}{(i)})-F(\mathbf{x}{(i-1)})\bigr]
&+ r\bigl[F(\mathbf{y}{(i)})-F(\mathbf{y}{(i-1)})\bigr] \ge \frac{2 \gamma^{3/2}\,\Delta_{a,i}\Delta_{b,i}}{\Delta_{a,i}+\gamma^{3}\Delta_{b,i}}.
\end{aligned}
\end{equation}
```

Thus the per-step progress is at least $`2\gamma^{3/2}`$ times $`\Delta_{a,i}\Delta_{b,i}/(\Delta_{a,i}+\gamma^{3}\Delta_{b,i})`$, a harmonic-mean-type quantity in $`\Delta_{a,i}`$ and $`\gamma^{3}\Delta_{b,i}`$.

Next we relate $`\Delta_{a,i}\Delta_{b,i}`$ to the contraction of the reference path. Lemma 17 states that

``` math
\begin{equation}
\label{eq:lem-s2C4-recall}
F\bigl(\mathbf{o}^{(i-1)}\bigr) - F\bigl(\mathbf{o}^{(i)}\bigr)
\;\le\;
\frac{ \Delta_{a,i}\,\Delta_{b,i}}{\Delta_{a,i}+\gamma^{2}\Delta_{b,i}}
\;+\; \frac{2\varepsilon}{n}\,\gamma^{3}\,F(\mathbf{o}).
\end{equation}
```

Rearranging [eq:lem-s2C4-recall], we get

``` math
\begin{equation}
\label{eq:Delta-prod-lower}
\Delta_{a,i}\Delta_{b,i}
\;\ge\;
\biggl(F\bigl(\mathbf{o}^{(i-1)}\bigr) - F\bigl(\mathbf{o}^{(i)}\bigr) - \frac{2\varepsilon}{n}\,\gamma^{3}\,F(\mathbf{o})\biggr)
\bigl(\Delta_{a,i}+\gamma^{2}\Delta_{b,i}\bigr).
\end{equation}
```

Substituting [eq:Delta-prod-lower] into [eq:cor-2] yields

``` math
\begin{equation}
\label{eq:cor-3}
\begin{aligned}
\frac{1}{r}\bigl[F(\mathbf{x}{(i)})-F(\mathbf{x}{(i-1)})\bigr]
&+ r\bigl[F(\mathbf{y}{(i)})-F(\mathbf{y}{(i-1)})\bigr] \\
&\ge
\frac{2 \gamma^{3/2}
\bigl(F(\mathbf{o}^{(i-1)})-F(\mathbf{o}^{(i)})-\frac{2\varepsilon}{n}\gamma^{3}F(\mathbf{o})\bigr)
\bigl(\Delta_{a,i}+\gamma^{2}\Delta_{b,i}\bigr)}
{\Delta_{a,i}+\gamma^{3}\Delta_{b,i}}.
\end{aligned}
\end{equation}
```

Since $`\gamma\in(0,1]`$, we have $`\gamma^{2}\ge\gamma^{3}`$, so

``` math
\begin{equation}
\label{eq:gamma-ratio}
\Delta_{a,i}+\gamma^{2}\Delta_{b,i}
\;\ge\;
\Delta_{a,i}+\gamma^{3}\Delta_{b,i},
\end{equation}
```

and hence

``` math
\begin{equation}
\label{eq:ratio-lower-bound}
\frac{\Delta_{a,i}+\gamma^{2}\Delta_{b,i}}{\Delta_{a,i}+\gamma^{3}\Delta_{b,i}}
\;\ge\; 1.
\end{equation}
```

Applying [eq:ratio-lower-bound] to [eq:cor-3] gives

``` math
\begin{equation}
\label{eq:cor-4}
\begin{aligned}
\frac{1}{r}\bigl[F(\mathbf{x}{(i)})-F(\mathbf{x}{(i-1)})\bigr]
&+ r\bigl[F(\mathbf{y}{(i)})-F(\mathbf{y}{(i-1)})\bigr] \\
&\ge
2 \gamma^{3/2}
\left(F\bigl(\mathbf{o}^{(i-1)}\bigr) - F\bigl(\mathbf{o}^{(i)}\bigr)
- \frac{2\varepsilon}{n}\,\gamma^{3}\,F(\mathbf{o})\right).
\end{aligned}
\end{equation}
```

This is exactly [eq:cor-main], completing the proof. ◻

We now conclude the analysis of the $`\gamma`$–aware Double–Greedy routine. The theorem below is obtained by telescoping the per-iteration coupling bound together with the contraction of the lattice-coupled reference path. The guarantee interpolates continuously in $`\gamma`$ and reduces to the classical DR bound when $`\gamma=1`$.

**Theorem 19**. *There exists a polynomial-time algorithm that, given a nonnegative $`\gamma`$-weakly DR-submodular function $`F : [0,1]^n \to \mathbb{R}_{\ge 0}`$ and a parameter $`\varepsilon \in (0,1)`$, outputs $`\mathbf{x}\in [0,1]^n`$ such that for every fixed $`\mathbf{o}\in [0,1]^n`$,

``` math
\begin{equation}
\label{eq:double-final-guarantee}
F(\mathbf{x})\ \ge\ 
\max_{r \ge 0}\ 
\frac{\bigl(2\gamma^{3/2}-4\varepsilon\,\gamma^{9/2}\bigr)\,r\,F(\mathbf{o})\;+\;F(\mathbf{0})\;+\;r^{2}\,F(\mathbf{1})}
{\,r^{2}\;+\;2\gamma^{3/2}r\;+\;1\,}.
\end{equation}
```*



*Proof.* Fix any $`r>0`$. Summing the per-iteration inequality from Corollary 18 over $`i=1,\dots,n`$ gives
``` math
\begin{equation}
\label{eq:sum-progress}
\begin{aligned}
\frac{1}{r}\bigl[F(\mathbf{x}{(n)})-F(\mathbf{x}{(0)})\bigr]
&\;+\; r\bigl[F(\mathbf{y}{(n)})-F(\mathbf{y}{(0)})\bigr]\\
&= \sum_{i=1}^n \left( \frac{1}{r}\bigl[F(\mathbf{x}{(i)})-F(\mathbf{x}{(i-1)})\bigr]
\;+\; r\bigl[F(\mathbf{y}{(i)})-F(\mathbf{y}{(i-1)})\bigr]\right)\\
&\ge \sum_{i=1}^n \left( 2 \gamma^{3/2}\,\bigl[ F(\mathbf{o}^{(i-1)}) - F(\mathbf{o}^{(i)}) \bigr]
\;-\; \frac{2\varepsilon}{n}\,\cdot 2\gamma^{9/2}\,F(\mathbf{o})\right)\\
&= 2 \gamma^{3/2}\,\bigl[ F(\mathbf{o}^{(0)}) - F(\mathbf{o}^{(n)}) \bigr]
\;-\; 4\varepsilon\,\gamma^{9/2}\,F(\mathbf{o}).
\end{aligned}
\end{equation}
```

Here the first equality is just telescoping the increments of $`F(\mathbf{x}^{(i)})`$ and $`F(\mathbf{y}^{(i)})`$, and the inequality uses Corollary 18 at each iteration $`i`$.

By construction of the reference path, we have $`\mathbf{o}^{(0)}=\mathbf{o}`$ and

``` math
\begin{equation}
\label{eq:o-endpoints}
\mathbf{x}{(n)}=\mathbf{y}{(n)}=\mathbf{o}^{(n)}.
\end{equation}
```

Moreover, Algorithm [algo:double] starts from

``` math
\begin{equation}
\label{eq:x0-y0}
\mathbf{x}{(0)}=\mathbf{0},
\qquad
\mathbf{y}{(0)}=\mathbf{1}.
\end{equation}
```

Using [eq:o-endpoints][eq:x0-y0] in [eq:sum-progress], we obtain

``` math
\begin{equation}
\label{eq:plug-endpoints}
\frac{1}{r}\bigl[F(\mathbf{x}{(n)})-F(\mathbf{0})\bigr]
\;+\; r\bigl[F(\mathbf{x}{(n)})-F(\mathbf{1})\bigr]
\ \ge\ 
2 \gamma^{3/2}\,\bigl[ F(\mathbf{o}) - F(\mathbf{x}{(n)}) \bigr]
\;-\; 4\varepsilon\,\gamma^{9/2}\,F(\mathbf{o}).
\end{equation}
```

We now collect all terms involving $`F(\mathbf{x}{(n)})`$ on the left-hand side of [eq:plug-endpoints]. Rearranging gives

``` math
\begin{equation}
\label{eq:collect-Fx}
F(\mathbf{x}{(n)})\left(\frac{1}{r}+r+2\gamma^{3/2}\right)
\ \ge\ 
\bigl(2\gamma^{3/2}-4\varepsilon\,\gamma^{9/2}\bigr)\,F(\mathbf{o}) \;+\; \frac{1}{r}\,F(\mathbf{0}) \;+\; r\,F(\mathbf{1}),
\end{equation}
```

where the right-hand side collects the contributions of $`F(\mathbf{o})`$, $`F(\mathbf{0})`$ and $`F(\mathbf{1})`$.

Dividing both sides of [eq:collect-Fx] by $`\frac{1}{r}+r+2\gamma^{3/2}`$ yields

``` math
\begin{equation}
\label{eq:F-xn-fraction}
F(\mathbf{x}{(n)}) \ \ge\ 
\frac{\bigl(2\gamma^{3/2}-4\varepsilon\,\gamma^{9/2}\bigr)\,F(\mathbf{o}) \;+\; \frac{1}{r}\,F(\mathbf{0}) \;+\; r\,F(\mathbf{1})}
{\frac{1}{r}+r+2\gamma^{3/2}}.
\end{equation}
```

Multiplying numerator and denominator of the right-hand side of [eq:F-xn-fraction] by $`r`$ gives

``` math
\begin{equation}
\label{eq:F-xn-final-r}
F(\mathbf{x}{(n)}) \ \ge\ 
\frac{\bigl(2\gamma^{3/2}-4\varepsilon\,\gamma^{9/2}\bigr)\,r\,F(\mathbf{o}) \;+\; F(\mathbf{0}) \;+\; r^{2}\,F(\mathbf{1})}
{r^{2}+2\gamma^{3/2}r+1}.
\end{equation}
```

Since $`\mathbf{x}=\mathbf{x}^{(n)}`$ is the output of the algorithm, inequality [eq:F-xn-final-r] holds for every choice of $`r>0`$. Extending to $`r=0`$ by continuity of the right-hand side in $`r`$ and taking the maximum over $`r\ge 0`$ yields [eq:double-final-guarantee], which completes the proof. ◻
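
As a quick numerical sanity check of [eq:double-final-guarantee] (a sketch, not part of the paper; the values of $`F(\mathbf{o})`$, $`F(\mathbf{0})`$, $`F(\mathbf{1})`$ below are illustrative placeholders), one can maximize the right-hand side over $`r`$ on a grid:

```python
import numpy as np

def double_greedy_bound(gamma, eps, F_o=1.0, F_zero=0.0, F_one=0.0):
    """Maximize the right-hand side of [eq:double-final-guarantee] over r >= 0."""
    r = np.linspace(0.0, 10.0, 100001)
    num = (2 * gamma**1.5 - 4 * eps * gamma**4.5) * r * F_o + F_zero + r**2 * F_one
    den = r**2 + 2 * gamma**1.5 * r + 1.0
    return (num / den).max()

# With gamma = 1, eps -> 0 and F(0) = F(1) = 0, the maximum is attained at
# r = 1 and equals F(o)/2, the classical unconstrained double-greedy factor.
print(double_greedy_bound(1.0, 0.0))    # ~0.5
print(double_greedy_bound(0.8, 0.01))   # degrades smoothly as gamma decreases
```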

Proofs of Lemma 8 and Theorem 9

In this section we first prove Lemma 8 and then prove Theorem 9.

**Lemma 8** (restated). *Let $`F:[0,1]^n\to\mathbb{R}_{\ge 0}`$ be nonnegative and $`\gamma`$-weakly DR-submodular for some $`0<\gamma\le 1`$, and let $`P\subseteq[0,1]^n`$ be down-closed. There exists a constant-size (depending only on $`\varepsilon`$ and $`\gamma`$) set of triples $`\mathcal{G} \subseteq \mathbb{R}_{\ge 0}^3`$ such that $`\mathcal{G}`$ contains a triple $`(g,g_\odot,g_\oplus)`$ with*

``` math
\label{eq:triple-bounds}
\begin{align}
(1-\varepsilon)\,F(\mathbf{o}) &\le g \le F(\mathbf{o}), 
\label{eq:triple-bounds-g}\\
F(\mathbf{z}\odot \mathbf{o})-\varepsilon\,g &\le g_\odot \le F(\mathbf{z}\odot \mathbf{o}), 
\label{eq:triple-bounds-godot}\\
F(\mathbf{z}\oplus \mathbf{o})-\varepsilon\,g &\le g_\oplus \le F(\mathbf{z}\oplus \mathbf{o}).
\label{eq:triple-bounds-goplus}
\end{align}
```

*Proof.* Assume we have a constant-factor estimate $`v`$ such that

``` math
\begin{equation}
\label{eq:v-constant-factor}
c\,F(\mathbf{o})\ \le\ v\ \le\ F(\mathbf{o})
\end{equation}
```

for some absolute constant $`c\in(0,1]`$. We will construct a constant-size guess set using $`v`$.

Define the one-dimensional guess set

``` math
\begin{equation}
\label{eq:G0-def}
G_o\ :=\ \Bigl\{(1-\varepsilon)^i\cdot \frac{v}{c}\ :\ i=0,1,\dots,\bigl\lceil \log_{1-\varepsilon} c\bigr\rceil\Bigr\}.
\end{equation}
```

The size of $`G_o`$ is $`|G_o| = \mathcal{O}_\varepsilon(1)`$, since the exponent range in [eq:G0-def] depends only on $`\varepsilon`$ and $`c`$. By construction, the values in $`G_o`$ form a geometric grid that $`\varepsilon`$-covers the interval $`[F(\mathbf{o}),F(\mathbf{o})/c]`$, and therefore there exists

``` math
\begin{equation}
\label{eq:g-good}
g\in G_o \quad\text{with}\quad (1-\varepsilon)\,F(\mathbf{o})\ \le\ g\ \le\ F(\mathbf{o}),
\end{equation}
```

which proves [eq:triple-bounds-g]; this is the standard geometric-grid guessing argument.
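
A minimal sketch of this grid construction (hypothetical helper, illustrative values):

```python
import math

def guess_grid(v, c, eps):
    """Geometric guess grid G_o from [eq:G0-def]: (1-eps)^i * v/c for
    i = 0, ..., ceil(log_{1-eps} c); its size depends only on eps and c."""
    top = math.ceil(math.log(c) / math.log(1.0 - eps))
    return [(1.0 - eps) ** i * v / c for i in range(top + 1)]

# Covering property [eq:g-good]: whenever c*F_o <= v <= F_o, some grid
# point g satisfies (1-eps)*F_o <= g <= F_o.
eps, c, F_o = 0.1, 0.3, 5.0
for v in (c * F_o, 0.5 * F_o, F_o):
    assert any((1 - eps) * F_o <= g <= F_o for g in guess_grid(v, c, eps))
```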

Next we bound the ranges in which $`F(\mathbf{z}\odot\mathbf{o})`$ and $`F(\mathbf{z}\oplus\mathbf{o})`$ can lie, in terms of $`F(\mathbf{o})`$. Since $`F`$ is nonnegative, and since $`\mathbf{z}\odot\mathbf{o}\le \mathbf{o}`$ lies in $`P`$ by down-closedness while $`\mathbf{o}`$ maximizes $`F`$, we always have

``` math
\begin{equation}
\label{eq:F-z-odot-upper}
0\ \le\ F(\mathbf{z}\odot\mathbf{o})\ \le\ F(\mathbf{o}).
\end{equation}
```

Both bounds in [eq:F-z-odot-upper] are thus immediate: the lower one from nonnegativity, the upper one from the optimality of $`\mathbf{o}`$.

For the $`\oplus`$ operation we can bound, using $`\gamma`$–weakly DR-submodularity and nonnegativity,

``` math
\begin{equation}
\label{eq:F-z-oplus-upper}
\begin{aligned}
F(\mathbf{z}\oplus \mathbf{o}) 
&= F\bigl(\mathbf{o}+ (\mathbf{1}-\mathbf{o})\odot \mathbf{z}\bigr)\\
&\le F(\mathbf{o})\;+\;\frac{1}{\gamma}\Bigl[F\bigl((\mathbf{1}-\mathbf{o})\odot \mathbf{z}\bigr)-F(\mathbf{0})\Bigr]
    &&\text{(by $\gamma$–weakly DR along $(\mathbf{1}-\mathbf{o})\odot \mathbf{z}$)}\\
&\le F(\mathbf{o})\;+\;\frac{1}{\gamma}\,F(\mathbf{o})
    &&\text{(since $F((\mathbf{1}-\mathbf{o})\odot \mathbf{z})\le F(\mathbf{o})$ and $F(\mathbf{0})\ge 0$)}\\
&= \Bigl(1+\tfrac{1}{\gamma}\Bigr)\,F(\mathbf{o}).
\end{aligned}
\end{equation}
```

Thus both $`F(\mathbf{z}\odot\mathbf{o})`$ and $`F(\mathbf{z}\oplus\mathbf{o})`$ lie in ranges that are linearly bounded in $`F(\mathbf{o})`$.

For any chosen $`g\in G_o`$ satisfying [eq:g-good], we now construct $`\varepsilon g`$-nets for these ranges. For the $`\odot`$-case, define

``` math
\begin{equation}
\label{eq:G-odot-def}
G_\odot(g)\ :=\ \Bigl\{\varepsilon\,i\cdot g\ :\ i=0,1,\dots,\ \Bigl\lceil\tfrac{1}{\varepsilon(1-\varepsilon)}\Bigr\rceil\Bigr\}.
\end{equation}
```

Since [eq:F-z-odot-upper] and [eq:g-good] imply

``` math
\begin{equation}
\label{eq:range-odot}
0\ \le\ F(\mathbf{z}\odot \mathbf{o})\ \le\ F(\mathbf{o})\ \le\ \frac{g}{1-\varepsilon},
\end{equation}
```

the grid in [eq:G-odot-def] $`\varepsilon g`$-covers the interval $`[0,F(\mathbf{z}\odot\mathbf{o})]`$: for any value $`x\in[0,F(\mathbf{z}\odot\mathbf{o})]`$, there exists some $`g_\odot\in G_\odot(g)`$ with

``` math
\begin{equation}
\label{eq:G-odot-cover}
x-\varepsilon g\ \le\ g_\odot\ \le\ x.
\end{equation}
```

In particular, we can choose $`g_\odot`$ so that [eq:G-odot-cover] holds with $`x=F(\mathbf{z}\odot\mathbf{o})`$, which is exactly [eq:triple-bounds-godot].

Similarly, for the $`\oplus`$-case define

``` math
\begin{equation}
\label{eq:G-oplus-def}
G_\oplus(g)\ :=\ \Bigl\{\varepsilon\,i\cdot g\ :\ i=0,1,\dots,\ \Bigl\lceil\tfrac{1+1/\gamma}{\varepsilon(1-\varepsilon)}\Bigr\rceil\Bigr\}.
\end{equation}
```

Using [eq:F-z-oplus-upper] and [eq:g-good], we have

``` math
\begin{equation}
\label{eq:range-oplus}
0\ \le\ F(\mathbf{z}\oplus \mathbf{o})\ \le\ \Bigl(1+\frac{1}{\gamma}\Bigr)F(\mathbf{o})
\ \le\ \frac{1+1/\gamma}{1-\varepsilon}\,g.
\end{equation}
```

Thus the grid in [eq:G-oplus-def] $`\varepsilon g`$-covers the interval $`[0,F(\mathbf{z}\oplus\mathbf{o})]`$, and there exists $`g_\oplus\in G_\oplus(g)`$ such that

``` math
\begin{equation}
\label{eq:G-oplus-cover}
F(\mathbf{z}\oplus \mathbf{o})-\varepsilon g\ \le\ g_\oplus\ \le\ F(\mathbf{z}\oplus \mathbf{o}),
\end{equation}
```

which is [eq:triple-bounds-goplus].

Finally, set

``` math
\begin{equation}
\label{eq:triple-family-def}
\mathcal{G}\ :=\ \bigcup_{g\in G_o}\ \{g\}\times G_\odot(g)\times G_\oplus(g).
\end{equation}
```

By [eq:g-good], [eq:G-odot-cover], and [eq:G-oplus-cover], there exists a triple $`(g,g_\odot,g_\oplus)\in\mathcal{G}`$ that satisfies all three bounds in [eq:triple-bounds]. Moreover, $`|\mathcal{G}|`$ depends only on $`\varepsilon`$ and $`\gamma`$ via the cardinalities of $`G_o`$, $`G_\odot(g)`$, and $`G_\oplus(g)`$, so $`\mathcal{G}`$ has constant size. ◻

To prove Theorem 9, we now recall the closed form of the iterates $`\{\mathbf{y}(i)\}_{i=0}^{\delta^{-1}}`$, together with the feasibility of the terminal iterate. These formulas will be used to relate $`F(\mathbf{y}(\delta^{-1}))`$ to the benchmark value $`F(\mathbf{o})`$.

**Lemma 20** (Closed form of $`\mathbf{y}(i)`$). *For every integer $`0 \le i \le \delta^{-1}`$,

``` math
\begin{equation}
\label{eq:y-closed-form}
\mathbf{y}(i)=
\begin{cases}
(\mathbf{1}-\mathbf{z})\ \odot\ \displaystyle\bigoplus_{j=1}^{i}\bigl(\delta\,\mathbf{x}(j)\bigr), & i\le i_s,\\[6pt]
(\mathbf{1}-\mathbf{z})\ \odot\ \displaystyle\bigoplus_{j=1}^{i}\bigl(\delta\,\mathbf{x}(j)\bigr)
\;+\;
\mathbf{z}\ \odot\ \displaystyle\bigoplus_{j=i_s+1}^{i}\bigl(\delta\,\mathbf{x}(j)\bigr), & i\ge i_s.
\end{cases}
\end{equation}
```

By convention, for any index $`a`$,

``` math
\begin{equation}
\label{eq:empty-oplus}
\bigoplus_{j=a}^{a-1}\bigl(\delta\,\mathbf{x}(j)\bigr)\;:=\;\mathbf{0},
\end{equation}
```

so that both expressions in [eq:y-closed-form] remain valid on their boundary indices.*
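
The folded $`\oplus`$ in [eq:y-closed-form] can be computed directly; a minimal sketch, assuming $`\oplus`$ denotes the coordinatewise probabilistic-or (so $`\mathbf{u}\oplus\mathbf{v}=\mathbf{u}+(\mathbf{1}-\mathbf{u})\odot\mathbf{v}`$, the form used in [eq:F-z-oplus-upper]):

```python
import numpy as np
from functools import reduce

def oplus(u, v):
    # Probabilistic or: u (+) v = u + (1 - u) * v = 1 - (1 - u) * (1 - v).
    return 1.0 - (1.0 - u) * (1.0 - v)

# Fold of delta * x(j) over j, as in [eq:y-closed-form]; the empty fold
# is the zero vector, matching the convention [eq:empty-oplus].
delta = 0.5
xs = [np.array([0.1, 0.2]), np.array([0.3, 0.0]), np.array([0.5, 0.4])]
print(reduce(oplus, (delta * x for x in xs), np.zeros(2)))
```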

**Observation 3** (Feasibility). The terminal iterate satisfies

``` math
\begin{equation}
\label{eq:y-terminal-feasible}
\mathbf{y}(\delta^{-1})\in P.
\end{equation}
```

We now prove Theorem 9, first for the case

``` math
\begin{equation}
\label{eq:Qi-nonempty}
Q(i)\neq\varnothing\qquad\text{for all }i\in\{1,\dots,\delta^{-1}\},
\end{equation}
```

where two lemmas (Lemmas 21 and 22) yield the bound stated in Theorem 9 (1), summarized as Corollary 23. The complementary case in which some $`Q(i)=\varnothing`$ is handled separately afterwards.

**Observation 4**. If $`Q(i)\neq\varnothing`$ for some $`i\in[\delta^{-1}]`$, then

``` math
\begin{equation}
\label{eq:obs515-claim}
F\!\bigl(\mathbf{y}(i)\bigr)-F\!\bigl(\mathbf{y}(i-1)\bigr)
\;\ge\;
\delta\,\gamma\Big[V{(i-1)}-F\!\bigl(\mathbf{y}(i-1)\bigr)\Big]
\;-\;\frac{\delta^{2}L D^{2}}{2}\,.
\end{equation}
```

*Proof.* Consider the line segment

``` math
\begin{equation}
\label{eq:obs515-seg}
\mathbf{u}(s)\;:=\;\mathbf{y}(i-1)
\;+\;s\,\Big(\big(\mathbf{1}-\mathbf{y}(i-1)-\mathbf{z}(i-1)\big)\odot \mathbf{x}(i)\Big),
\qquad s\in[0,\delta].
\end{equation}
```

By construction, $`\mathbf{u}(0)=\mathbf{y}(i-1)`$ and $`\mathbf{u}(\delta)=\mathbf{y}(i)`$.

Using the fundamental theorem of calculus along [eq:obs515-seg],

``` math
\begin{align}
&\hspace{0cm}F\!\bigl(\mathbf{y}(i)\bigr)-F\!\bigl(\mathbf{y}(i-1)\bigr)\\
&\hspace{1cm}= \int_{0}^{\delta}
\Big\langle \big(\mathbf{1}-\mathbf{y}(i-1)-\mathbf{z}(i-1)\big)\odot \mathbf{x}(i),\;
\nabla F\!\big(\mathbf{u}(s)\big)\Big\rangle\,ds \label{eq:obs515-int}\\[2pt]
&\hspace{1cm}= \int_{0}^{\delta}
\Big\langle \big(\mathbf{1}-\mathbf{y}(i-1)-\mathbf{z}(i-1)\big)\odot \mathbf{x}(i),\;
\nabla F\!\big(\mathbf{y}(i-1)\big)\Big\rangle\,ds \notag\\
&\hspace{1cm}\qquad+\int_{0}^{\delta}
\Big\langle \big(\mathbf{1}-\mathbf{y}(i-1)-\mathbf{z}(i-1)\big)\odot \mathbf{x}(i),\;
\nabla F\!\big(\mathbf{u}(s)\big)-\nabla F\!\big(\mathbf{y}(i-1)\big)\Big\rangle\,ds \notag\\[2pt]
&\hspace{1cm}= \delta\,\big\langle \mathbf{w}(i),\,\mathbf{x}(i)\big\rangle
+\int_{0}^{\delta}\!\!
\Big\langle \big(\mathbf{1}-\mathbf{y}(i-1)-\mathbf{z}(i-1)\big)\odot \mathbf{x}(i),\;
\nabla F\!\big(\mathbf{u}(s)\big)-\nabla F\!\big(\mathbf{y}(i-1)\big)\Big\rangle ds. \notag
\end{align}
```

Here we used that

``` math
\mathbf{w}(i)
:= \big(\mathbf{1}-\mathbf{y}(i-1)-\mathbf{z}(i-1)\big)\odot \nabla F\!\big(\mathbf{y}(i-1)\big),
```

so $`\langle \mathbf{w}(i),\mathbf{x}(i)\rangle = \big\langle (\mathbf{1}-\mathbf{y}(i-1)-\mathbf{z}(i-1))\odot \mathbf{x}(i),\,\nabla F(\mathbf{y}(i-1))\big\rangle`$.

Applying Cauchy–Schwarz to the second integral gives

``` math
\begin{align}
F\!\bigl(\mathbf{y}(i)\bigr)-F\!\bigl(\mathbf{y}(i-1)\bigr)
&\ge
\delta\,\big\langle \mathbf{w}(i),\,\mathbf{x}(i)\big\rangle \notag\\
&\quad-\int_{0}^{\delta}\!\!
\Big\|\big(\mathbf{1}-\mathbf{y}(i-1)-\mathbf{z}(i-1)\big)\odot \mathbf{x}(i)\Big\|_2\,
\Big\|\nabla F\!\big(\mathbf{u}(s)\big)-\nabla F\!\big(\mathbf{y}(i-1)\big)\Big\|_2\,ds.
\label{eq:obs515-cs}
\end{align}
```

This step uses $`\langle a,b\rangle \ge -\|a\|_2\|b\|_2`$.

Since $`F`$ is $`L`$-smooth, its gradient is $`L`$-Lipschitz, so

``` math
\begin{equation}
\label{eq:obs515-smooth-grad}
\big\|\nabla F(\mathbf{u}(s))-\nabla F(\mathbf{y}(i-1))\big\|_2
\;\le\; L\,\|\mathbf{u}(s)-\mathbf{y}(i-1)\|_2
\;=\; L\,s\,\Big\|\big(\mathbf{1}-\mathbf{y}(i-1)-\mathbf{z}(i-1)\big)\odot \mathbf{x}(i)\Big\|_2.
\end{equation}
```

Substituting [eq:obs515-smooth-grad] into [eq:obs515-cs] yields

``` math
\begin{align}
F\!\bigl(\mathbf{y}(i)\bigr)-F\!\bigl(\mathbf{y}(i-1)\bigr)
&\ge
\delta\,\big\langle \mathbf{w}(i),\,\mathbf{x}(i)\big\rangle
-\int_{0}^{\delta}\! s\,L\,\Big\|\big(\mathbf{1}-\mathbf{y}(i-1)-\mathbf{z}(i-1)\big)\odot \mathbf{x}(i)\Big\|_2^{2}\,ds \notag\\
&=
\delta\,\big\langle \mathbf{w}(i),\,\mathbf{x}(i)\big\rangle
-\frac{L}{2}\,\delta^{2}\,\Big\|\big(\mathbf{1}-\mathbf{y}(i-1)-\mathbf{z}(i-1)\big)\odot \mathbf{x}(i)\Big\|_2^{2},
\label{eq:obs515-smooth}
\end{align}
```

where we used $`\int_{0}^{\delta} s\,ds = \delta^{2}/2`$.

Let

``` math
\mathbf{d}(i)
:= \big(\mathbf{1}-\mathbf{y}(i-1)-\mathbf{z}(i-1)\big)\odot \mathbf{x}(i).
```

By feasibility and the coordinatewise bounds $`0\le \mathbf{y}(i-1),\mathbf{z}(i-1),\mathbf{x}(i)\le\mathbf{1}`$, each entry of $`\mathbf{d}(i)`$ lies in $`[0,1]`$, so

``` math
\|\mathbf{d}(i)\|_2 \;\le\; D,
```

where $`D`$ is the diameter of $`P`$. Using this in [eq:obs515-smooth] gives

``` math
\begin{equation}
\label{eq:obs515-diam}
F\!\bigl(\mathbf{y}(i)\bigr)-F\!\bigl(\mathbf{y}(i-1)\bigr)
\;\ge\;
\delta\,\big\langle \mathbf{w}(i),\,\mathbf{x}(i)\big\rangle\;-\;\frac{\delta^{2}L D^{2}}{2}\,.
\end{equation}
```

Finally, since $`Q(i)\neq\varnothing`$, the choice of $`\mathbf{x}(i)\in Q(i)`$ guarantees

``` math
\begin{equation}
\label{eq:obs515-Qi-choice}
\big\langle \mathbf{w}(i),\,\mathbf{x}(i)\big\rangle \;\ge\; \gamma\,\Big[V{(i-1)}-F\!\bigl(\mathbf{y}(i-1)\bigr)\Big],
\end{equation}
```

by the definition of $`Q(i)`$ and the weakly DR structure. Substituting [eq:obs515-Qi-choice] into [eq:obs515-diam] yields

``` math
F\!\bigl(\mathbf{y}(i)\bigr)-F\!\bigl(\mathbf{y}(i-1)\bigr)
\;\ge\;
\delta\,\gamma\Big[V{(i-1)}-F\!\bigl(\mathbf{y}(i-1)\bigr)\Big]
\;-\;\frac{\delta^{2}L D^{2}}{2},
```

which is exactly [eq:obs515-claim]. ◻

The bound [eq:obs515-claim] gives a recursive lower bound on $`F(\mathbf{y}(i))`$. In what follows (up to Corollary 23), we unroll this recursion and derive a closed form.

**Lemma 21**. *Assume $`Q(i)\neq\varnothing`$ for every $`i\in[i_s]`$. Fix $`0<\delta\le 1`$ and $`0<\gamma\le 1`$, and set

``` math
\alpha := \delta\gamma,
\qquad
\beta := \frac{\gamma^2\delta}{\,1-\delta+\gamma^2\delta\,}.
```

For $`i\ge 0`$, define the geometric shorthands

``` math
\Delta_i\ :=\ 1-(1-\alpha)^i,
\qquad
\Theta_i\ :=\ (1-\beta)^i-(1-\alpha)^i.
```

Then, for every integer $`0\le i\le i_s`$,

``` math
\begin{equation}
\label{eq:lemma516-main}
\begin{aligned}
F\big(\mathbf{y}(i)\big)
\ \ge\ 
\;\Bigg[\ \frac{(1-2\varepsilon)\,\Delta_i}{\gamma}
\;+\;\frac{\delta(1-\gamma)}{\beta-\alpha}\,\Theta_i\ \Bigg]\;g
\;-\;\frac{\Delta_i}{\gamma}\;g_{\odot}
\;-\;\Bigg[\ \frac{\Delta_i}{\gamma}
\;+\;\frac{\delta}{\beta-\alpha}\,\Theta_i\ \Bigg]\;g_{\oplus}
\;-\;i\,\delta^2 D^2L.
\end{aligned}
\end{equation}
```*



*Proof.* For $`i=0`$, we have $`\Delta_0=0`$ and $`\Theta_0=0`$, so the right-hand side of [eq:lemma516-main] equals $`0`$. Since $`F(\mathbf{y}(0))\ge 0`$ by nonnegativity of $`F`$, the claim holds for $`i=0`$. We therefore fix an integer $`1\le i\le i_s`$ for the rest of the proof.

By Observation 4 (and the assumption $`Q(i)\neq\varnothing`$ for every $`i\in[i_s]`$),
``` math
\begin{equation}
\label{eq:recurrence-start}
F\big(\mathbf{y}(i)\big) - F\big(\mathbf{y}(i-1)\big)
\ \ge\
\delta\,\gamma\Big(V(i-1)-F\big(\mathbf{y}(i-1)\big)\Big)
\;-\;\frac{\delta^2 D^2 L}{2}.
\end{equation}
```

Rearranging [eq:recurrence-start] gives

``` math
\begin{equation}
\label{eq:recurrence}
F\big(\mathbf{y}(i)\big)
\ \ge\
\bigl(1-\alpha\bigr)\,F\big(\mathbf{y}(i-1)\big)
\;+\;\delta\gamma\,V(i-1)
\;-\;\frac{\delta^2 D^2 L}{2},
\end{equation}
```

where we used $`\alpha=\delta\gamma`$.

The quantity $`V(i-1)`$ has the explicit form given in [eq:v1-def]:

``` math
\begin{equation}
\label{eq:V-form}
V(i-1)
=\Bigl[(1-\beta)^{i-1}+\frac{1-(1-\beta)^{i-1}-2\varepsilon}{\gamma}\Bigr]\,g
\;-\;\frac{1}{\gamma}\,g_{\odot}
\;-\;\frac{1-(1-\beta)^{i-1}}{\gamma}\,g_{\oplus}.
\end{equation}
```

We now multiply [eq:V-form] by $`\delta\gamma`$ and substitute into [eq:recurrence]. First, for the $`g`$-term,

``` math
\begin{align}
\delta\gamma\,\Bigl[(1-\beta)^{i-1}
+ \frac{1-(1-\beta)^{i-1}-2\varepsilon}{\gamma}\Bigr]
&= \delta\gamma(1-\beta)^{i-1}
+ \delta\Bigl(1-(1-\beta)^{i-1}-2\varepsilon\Bigr) \notag\\[2pt]
&= \delta\Bigl[\gamma(1-\beta)^{i-1}
+ 1-(1-\beta)^{i-1}-2\varepsilon\Bigr] \notag\\[2pt]
&= \delta\Bigl[1 - (1-\gamma)(1-\beta)^{i-1} - 2\varepsilon\Bigr]. \label{eq:coef-g-expanded}
\end{align}
```

For the $`g_{\odot}`$-term, we obtain

``` math
\begin{equation}
\label{eq:coef-g-odot-expanded}
\delta\gamma\cdot\Bigl(-\frac{1}{\gamma}\,g_{\odot}\Bigr)
= -\delta\,g_{\odot}.
\end{equation}
```

For the $`g_{\oplus}`$-term,

``` math
\begin{equation}
\label{eq:coef-g-oplus-expanded}
\delta\gamma\cdot\Bigl(-\frac{1-(1-\beta)^{i-1}}{\gamma}\,g_{\oplus}\Bigr)
= -\delta\Bigl(1-(1-\beta)^{i-1}\Bigr)g_{\oplus}.
\end{equation}
```

Substituting [eq:coef-g-expanded][eq:coef-g-oplus-expanded] into [eq:recurrence] yields

``` math
\begin{equation}
\label{eq:recurrence-expanded}
\begin{aligned}
F\big(\mathbf{y}(i)\big)
\ \ge\ 
&(1-\alpha)\,F\big(\mathbf{y}(i-1)\big) +\;\delta\Bigl(1-(1-\gamma)(1-\beta)^{i-1}-2\varepsilon\Bigr)\,g \\
&\quad-\;\delta\,g_{\odot}
\;-\;\delta\Bigl(1-(1-\beta)^{i-1}\Bigr)\,g_{\oplus}
\;-\;\frac{\delta^2 D^2L}{2}.
\end{aligned}
\end{equation}
```

We now unroll the recursion [eq:recurrence-expanded] from $`k=1`$ up to $`k=i`$. Writing $`F_k := F(\mathbf{y}(k))`$ for brevity, [eq:recurrence-expanded] becomes

``` math
\begin{equation}
\label{eq:recurrence-Fk}
F_k
\ \ge\
(1-\alpha)\,F_{k-1}
+ \delta\,A_{k-1}\,g
- \delta\,g_{\odot}
- \delta\,B_{k-1}\,g_{\oplus}
- \frac{\delta^2 D^2L}{2},
\end{equation}
```

where

``` math
\begin{equation}
\label{eq:A-B-def}
A_{k-1}
:= 1-(1-\gamma)(1-\beta)^{k-1}-2\varepsilon,
\qquad
B_{k-1}
:= 1-(1-\beta)^{k-1}.
\end{equation}
```

Iterating [eq:recurrence-Fk] from $`k=1`$ to $`k=i`$ gives

``` math
\begin{equation}
\label{eq:unrolled-general}
\begin{aligned}
F_i
&\ge (1-\alpha)^i F_0
+ \delta\sum_{k=1}^{i}(1-\alpha)^{i-k} A_{k-1}\,g \\
&\qquad - \delta\sum_{k=1}^{i}(1-\alpha)^{i-k} g_{\odot}
- \delta\sum_{k=1}^{i}(1-\alpha)^{i-k} B_{k-1}\,g_{\oplus}
- \frac{\delta^2 D^2L}{2}\sum_{k=1}^{i}(1-\alpha)^{i-k}.
\end{aligned}
\end{equation}
```

Since $`F_0 = F(\mathbf{y}(0))\ge 0`$ by nonnegativity of $`F`$, the first term $`(1-\alpha)^i F_0`$ is nonnegative and can be dropped for a lower bound. We next compute the geometric sums appearing in [eq:unrolled-general].

First,

``` math
\begin{equation}
\label{eq:geom-1}
\sum_{k=1}^{i}(1-\alpha)^{i-k}
= \sum_{t=0}^{i-1}(1-\alpha)^t
= \frac{1-(1-\alpha)^i}{\alpha}
= \frac{\Delta_i}{\alpha}.
\end{equation}
```

Next, for the mixed sum, using the change of variable $`m=k-1`$,

``` math
\begin{equation}
\label{eq:geom-2-setup}
\sum_{k=1}^{i}(1-\alpha)^{i-k}(1-\beta)^{k-1}
= \sum_{m=0}^{i-1}(1-\alpha)^{i-1-m}(1-\beta)^m.
\end{equation}
```

This is a geometric series in $`m`$. Factoring out $`(1-\alpha)^{i-1}`$, we obtain

``` math
\begin{align}
\sum_{m=0}^{i-1}(1-\alpha)^{i-1-m}(1-\beta)^m
&= (1-\alpha)^{i-1}
\sum_{m=0}^{i-1}\Bigl(\frac{1-\beta}{1-\alpha}\Bigr)^m \notag\\[2pt]
&= (1-\alpha)^{i-1}\cdot
\frac{1-\Bigl(\frac{1-\beta}{1-\alpha}\Bigr)^i}{1-\frac{1-\beta}{1-\alpha}} \notag\\[2pt]
&= (1-\alpha)^{i-1}\cdot
\frac{1-\frac{(1-\beta)^i}{(1-\alpha)^i}}{\frac{\beta-\alpha}{1-\alpha}} \notag\\[2pt]
&= \frac{(1-\alpha)^i-(1-\beta)^i}{\beta-\alpha}. \label{eq:geom-2}
\end{align}
```

Recalling $`\Theta_i = (1-\beta)^i-(1-\alpha)^i`$, we can also write

``` math
\begin{equation}
\label{eq:geom-2-theta}
\sum_{k=1}^{i}(1-\alpha)^{i-k}(1-\beta)^{k-1}
= \frac{(1-\alpha)^i-(1-\beta)^i}{\beta-\alpha}
= -\,\frac{\Theta_i}{\beta-\alpha}.
\end{equation}
```
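
The identity [eq:geom-2] can be spot-checked symbolically (a sketch; any fixed integer $`i`$ works):

```python
import sympy as sp

# sum_{m=0}^{i-1} (1-alpha)^(i-1-m) (1-beta)^m
#     = ((1-alpha)^i - (1-beta)^i) / (beta - alpha)
alpha, beta = sp.symbols('alpha beta', positive=True)
i = 6
lhs = sum((1 - alpha) ** (i - 1 - m) * (1 - beta) ** m for m in range(i))
rhs = ((1 - alpha) ** i - (1 - beta) ** i) / (beta - alpha)
assert sp.simplify(lhs - rhs) == 0
```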

We now substitute into each coefficient in [eq:unrolled-general].

**Coefficient of $`g`$.**

Using [eq:A-B-def], we split

``` math
\begin{align}
\sum_{k=1}^{i}(1-\alpha)^{i-k} A_{k-1}
&= \sum_{k=1}^{i}(1-\alpha)^{i-k}\Bigl(1 - (1-\gamma)(1-\beta)^{k-1} - 2\varepsilon\Bigr) \notag\\[2pt]
&= (1-2\varepsilon)\sum_{k=1}^{i}(1-\alpha)^{i-k}
 - (1-\gamma)\sum_{k=1}^{i}(1-\alpha)^{i-k}(1-\beta)^{k-1}. \label{eq:coef-g-split}
\end{align}
```

Using [eq:geom-1] and [eq:geom-2-theta], we obtain

``` math
\begin{align}
\sum_{k=1}^{i}(1-\alpha)^{i-k} A_{k-1}
&= (1-2\varepsilon)\cdot\frac{\Delta_i}{\alpha}
 - (1-\gamma)\cdot\Bigl(-\frac{\Theta_i}{\beta-\alpha}\Bigr) \notag\\[2pt]
&= \frac{(1-2\varepsilon)\Delta_i}{\alpha}
 + \frac{(1-\gamma)\Theta_i}{\beta-\alpha}. \label{eq:coef-g-simplified}
\end{align}
```

Multiplying by $`\delta`$ and using $`\alpha=\delta\gamma`$ (so $`\delta/\alpha = 1/\gamma`$), we get

``` math
\begin{equation}
\label{eq:coef-g-final}
\delta\sum_{k=1}^{i}(1-\alpha)^{i-k} A_{k-1}
= \frac{(1-2\varepsilon)\Delta_i}{\gamma}
 + \frac{\delta(1-\gamma)\Theta_i}{\beta-\alpha}.
\end{equation}
```

**Coefficient of $`g_{\odot}`$.**

By [eq:unrolled-general] and [eq:geom-1],

``` math
\begin{equation}
\label{eq:coef-g-odot-sum}
-\delta\sum_{k=1}^{i}(1-\alpha)^{i-k} g_{\odot}
= -\delta\cdot\frac{\Delta_i}{\alpha}\,g_{\odot}
= -\frac{\Delta_i}{\gamma}\,g_{\odot},
\end{equation}
```

again using $`\alpha=\delta\gamma`$.

**Coefficient of $`g_{\oplus}`$.**

Using [eq:A-B-def],

``` math
\begin{align}
\sum_{k=1}^{i}(1-\alpha)^{i-k} B_{k-1}
&= \sum_{k=1}^{i}(1-\alpha)^{i-k}\Bigl(1-(1-\beta)^{k-1}\Bigr) \notag\\[2pt]
&= \sum_{k=1}^{i}(1-\alpha)^{i-k}
 - \sum_{k=1}^{i}(1-\alpha)^{i-k}(1-\beta)^{k-1}. \label{eq:coef-g-oplus-split}
\end{align}
```

Substituting [eq:geom-1] and [eq:geom-2-theta] into [eq:coef-g-oplus-split],

``` math
\begin{align}
\sum_{k=1}^{i}(1-\alpha)^{i-k} B_{k-1}
&= \frac{\Delta_i}{\alpha}
 - \Bigl(-\frac{\Theta_i}{\beta-\alpha}\Bigr) \notag\\[2pt]
&= \frac{\Delta_i}{\alpha}
 + \frac{\Theta_i}{\beta-\alpha}. \label{eq:coef-g-oplus-simplified}
\end{align}
```

Thus

``` math
\begin{equation}
\label{eq:coef-g-oplus-final}
-\delta\sum_{k=1}^{i}(1-\alpha)^{i-k} B_{k-1}\,g_{\oplus}
= -\delta\left(\frac{\Delta_i}{\alpha}
 + \frac{\Theta_i}{\beta-\alpha}\right)g_{\oplus}
= -\frac{\Delta_i}{\gamma}\,g_{\oplus}
 - \frac{\delta}{\beta-\alpha}\,\Theta_i\,g_{\oplus},
\end{equation}
```

where we again used $`\delta/\alpha = 1/\gamma`$.

**Smoothness penalty.**

The last term in [eq:unrolled-general] is

``` math
\begin{equation}
\label{eq:smooth-sum-exact}
-\frac{\delta^2 D^2L}{2}\sum_{k=1}^{i}(1-\alpha)^{i-k}.
\end{equation}
```

Using that $`(1-\alpha)^{i-k}\le 1`$ for all $`k`$, we have

``` math
\begin{equation}
\label{eq:smooth-sum-bound}
\sum_{k=1}^{i}(1-\alpha)^{i-k} \;\le\; i,
\end{equation}
```

and hence

``` math
\begin{equation}
\label{eq:smooth-final}
-\frac{\delta^2 D^2L}{2}\sum_{k=1}^{i}(1-\alpha)^{i-k}
\;\ge\;
-\,\frac{\delta^2 D^2L}{2}\,i
\;\ge\;
-\,i\,\delta^2 D^2L,
\end{equation}
```

where the last inequality simply relaxes the factor $`1/2`$ to obtain a slightly weaker but simpler bound.

Putting together [eq:unrolled-general] with the bounds [eq:coef-g-final], [eq:coef-g-odot-sum], [eq:coef-g-oplus-final], and [eq:smooth-final], and recalling that $`F_i = F\big(\mathbf{y}(i)\big)`$, we obtain

``` math
F\big(\mathbf{y}(i)\big)
\ \ge\ 
\;\Bigg[\ \frac{(1-2\varepsilon)\,\Delta_i}{\gamma}
\;+\;\frac{\delta(1-\gamma)}{\beta-\alpha}\,\Theta_i\ \Bigg]\;g
\;-\;\frac{\Delta_i}{\gamma}\;g_{\odot}
\;-\;\Bigg[\ \frac{\Delta_i}{\gamma}
\;+\;\frac{\delta}{\beta-\alpha}\,\Theta_i\ \Bigg]\;g_{\oplus}
\;-\;i\,\delta^2 D^2L,
```

which is exactly [eq:lemma516-main]. ◻

Lemma 22. Assume $`0<\delta\le 1`$ and $`0<\gamma\le 1`$, and set

```math
\alpha := \delta\gamma,
\qquad
\beta := \frac{\gamma^2\delta}{\,1-\delta+\gamma^2\delta\,}.
```

Let $`i_s<i\le \delta^{-1}`$ and suppose $`Q(i')\neq\varnothing`$ for every integer $`i_s<i'\le i`$. Define the constants

```math
\begin{equation}
\label{eq:517-consts}
A\ :=\ \frac{(1-\beta)^{-i_s}}{\gamma}-\Bigl(1+\frac{3}{\gamma}\Bigr)\varepsilon+1-\frac{1}{\gamma},
\qquad
C_\gamma\ :=\ \frac{(1-\beta)^{-i_s}-1}{\gamma}.
\end{equation}
```

For every integer $`i_s\le i\le \delta^{-1}`$, with the shorthands

```math
\begin{equation}
\label{eq:517-sums-def}
S_1(i)\ :=\ \sum_{k=i_s+1}^{i} (1-\alpha)^{\,i-k}(1-\beta)^{\,k},
\qquad
S_2(i)\ :=\ \sum_{k=i_s+1}^{i} (1-\alpha)^{\,i-k}(1-\beta)^{\,k}\Bigl(C_\gamma-\beta\,(k-i_s)\Bigr),
\end{equation}
```

the following bound holds:

```math
\begin{equation}
\label{eq:517-main}
F\big(\mathbf{y}(i)\big)
\ \ge\
(1-\alpha)^{\,i-i_s}\,F\big(\mathbf{y}(i_s)\big)
\;+\;\alpha\,A\,S_1(i)\,g
\;-\;\alpha\,S_2(i)\,g_{\oplus}
\;-\;(i-i_s)\,\delta^2 D^2L.
\end{equation}
```

Moreover, letting $`n:=i-i_s`$ and $`q:=\tfrac{1-\beta}{\,1-\alpha\,}`$, we have the closed forms

```math
\begin{equation}
\label{eq:517-closed}
\begin{aligned}
S_1(i)
&=\ (1-\alpha)^{\,n}(1-\beta)^{\,i_s}\cdot \frac{q(1-q^{\,n})}{1-q}
\;=\ \frac{(1-\alpha)^{\,n}(1-\beta)^{\,i_s+1}- (1-\beta)^{\,i+1}}{\beta-\alpha},\\[4pt]
S_2(i)
&=\ C_\gamma\,S_1(i)\;-\;\beta\,(1-\alpha)^{\,n}(1-\beta)^{\,i_s}\cdot
\frac{q\left(1-(n+1)q^{\,n}+n q^{\,n+1}\right)}{(1-q)^2}.
\end{aligned}
\end{equation}
```

*Proof.* Set $`\alpha:=\delta\gamma`$ and fix an index $`i`$ with $`i_s<i\le \delta^{-1}`$. From Observation 4, for every $`k\in\{i_s+1,\dots,i\}`$ we have the one-step recurrence

```math
\begin{equation}
\label{eq:517-recurr-full}
F\big(\mathbf{y}(k)\big)
\ \ge\
(1-\alpha)\,F\big(\mathbf{y}(k-1)\big)\;+\;\alpha\,V(k-1)\;-\;\frac{\delta^{2}D^{2}L}{2}.
\end{equation}
```

Here we used $`\alpha=\delta\gamma`$ to rewrite the term $`\delta\gamma\,V(k-1)`$ as $`\alpha V(k-1)`$.

In the post–switch phase ($`k>i_s`$), the surrogate takes the explicit form given in [eq:v2-def-uniq]

```math
\begin{equation}
\label{eq:517-Vform-full}
V(k-1)\ =\ (1-\beta)^{\,k}\!\left[
A\,g\;-\;\Bigl(C_\gamma-\beta\,(k-i_s)\Bigr)\,g_{\oplus}
\right],
\end{equation}
```

where $`A`$ and $`C_\gamma`$ are as in [eq:517-consts].

Applying [eq:517-recurr-full] iteratively from $`k=i_s+1`$ up to $`k=i`$ gives (by a standard induction on $`k`$)

```math
\begin{equation}
\label{eq:517-unroll}
\begin{aligned}
F\big(\mathbf{y}(i)\big)
&\ge\ (1-\alpha)^{\,i-i_s}\,F\big(\mathbf{y}(i_s)\big)\\
&\quad
\;+\;\alpha\sum_{k=i_s+1}^{i} (1-\alpha)^{\,i-k}\,V(k-1)
\;-\;\frac{\delta^{2}D^{2}L}{2}\sum_{k=i_s+1}^{i} (1-\alpha)^{\,i-k}.
\end{aligned}
\end{equation}
```

The factor $`(1-\alpha)^{\,i-i_s}`$ comes from repeatedly multiplying by $`(1-\alpha)`$ in the homogeneous part of the recurrence.
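As a quick check of this unrolling, the Python sketch below iterates the one-step recurrence, taken with equality and with arbitrary placeholder values standing in for $`V(\cdot)`$, $`D`$, and $`L`$, and confirms that it reproduces the closed form in [eq:517-unroll]; all concrete numbers are illustrative only.

```python
# Verify that iterating [eq:517-recurr-full] (as an equality) matches [eq:517-unroll].
import math

delta, gamma = 0.02, 0.6
alpha = delta * gamma
i_s, i = 10, 30
D, L = 1.0, 1.0                       # placeholder diameter and smoothness constant
c = delta**2 * D**2 * L / 2

def V(k):                             # arbitrary placeholder surrogate values
    return 0.1 + 0.01 * k

F0 = 0.5                              # stands in for F(y(i_s))
F = F0
for k in range(i_s + 1, i + 1):
    F = (1 - alpha) * F + alpha * V(k - 1) - c

unrolled = ((1 - alpha)**(i - i_s) * F0
            + alpha * sum((1 - alpha)**(i - k) * V(k - 1)
                          for k in range(i_s + 1, i + 1))
            - c * sum((1 - alpha)**(i - k) for k in range(i_s + 1, i + 1)))
assert math.isclose(F, unrolled)
print("unrolled recurrence matches the iteration")
```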

Substituting [eq:517-Vform-full] into [eq:517-unroll] yields

```math
\begin{equation}
\label{eq:517-unroll-sub}
\begin{aligned}
F\big(\mathbf{y}(i)\big)
\ge\ &(1-\alpha)^{\,i-i_s}\,F\big(\mathbf{y}(i_s)\big) +\;\alpha\sum_{k=i_s+1}^{i} (1-\alpha)^{\,i-k}(1-\beta)^{\,k}\,A\,g \\
&\quad-\;\alpha\sum_{k=i_s+1}^{i} (1-\alpha)^{\,i-k}(1-\beta)^{\,k}\Bigl(C_\gamma-\beta\,(k-i_s)\Bigr)\,g_{\oplus} \\
&\quad-\;\frac{\delta^{2}D^{2}L}{2}\sum_{k=i_s+1}^{i} (1-\alpha)^{\,i-k}.
\end{aligned}
\end{equation}
```

The first sum collects all $`g`$-terms, the second all $`g_{\oplus}`$-terms, and the last sum is the accumulated smoothness penalty.

We now define the sums

```math
\begin{equation}
\label{eq:517-sums-def-again}
S_1(i)\ := \sum_{k=i_s+1}^{i} (1-\alpha)^{\,i-k}(1-\beta)^{\,k},
\quad
S_2(i)\ := \sum_{k=i_s+1}^{i} (1-\alpha)^{\,i-k}(1-\beta)^{\,k}\Bigl(C_\gamma-\beta\,(k-i_s)\Bigr).
\end{equation}
```

With [eq:517-sums-def-again], inequality [eq:517-unroll-sub] becomes

```math
\begin{equation}
\label{eq:517-main-before-smooth}
F\big(\mathbf{y}(i)\big)
\ \ge\
(1-\alpha)^{\,i-i_s}\,F\big(\mathbf{y}(i_s)\big)
\;+\;\alpha\,A\,S_1(i)\,g
\;-\;\alpha\,S_2(i)\,g_{\oplus}
\;-\;\frac{\delta^{2}D^{2}L}{2}\sum_{k=i_s+1}^{i} (1-\alpha)^{\,i-k}.
\end{equation}
```

Thus the only remaining tasks are to bound the smoothness term and compute closed forms for $`S_1(i)`$ and $`S_2(i)`$.

To obtain closed forms, let $`n:=i-i_s`$ and $`q:=\frac{1-\beta}{\,1-\alpha\,}`$, and set $`t:=k-i_s`$ so that $`t=1,\dots,n`$. Then

```math
\begin{align}
S_1(i)
&= \sum_{k=i_s+1}^{i} (1-\alpha)^{\,i-k}(1-\beta)^{\,k} = (1-\alpha)^{\,n}(1-\beta)^{\,i_s}\sum_{t=1}^{n} q^{\,t},
\label{eq:517-S1-geometric}
\end{align}
```

because $`(1-\alpha)^{i-k} = (1-\alpha)^{n-t}`$ and $`(1-\beta)^k = (1-\beta)^{i_s+t}`$, so each term is

```math
(1-\alpha)^{n-t}(1-\beta)^{i_s+t}
= (1-\alpha)^{n}(1-\beta)^{i_s}\Bigl(\frac{1-\beta}{1-\alpha}\Bigr)^{t}
= (1-\alpha)^{n}(1-\beta)^{i_s} q^{t}.
```

Using the geometric sum identity

```math
\sum_{t=1}^{n} q^{t} = \frac{q(1-q^{n})}{1-q},
```

we obtain

```math
\begin{equation}
\label{eq:517-S1-closed-1}
S_1(i)\;=\;(1-\alpha)^{\,n}(1-\beta)^{\,i_s}\cdot \frac{q(1-q^{\,n})}{1-q}.
\end{equation}
```

Writing $`1-q=\frac{\beta-\alpha}{1-\alpha}`$ and $`q^n = \bigl(\frac{1-\beta}{1-\alpha}\bigr)^n`$, a simple algebraic rearrangement yields the equivalent form

```math
\begin{equation}
\label{eq:517-S1-closed-2}
S_1(i)\;=\;\frac{(1-\alpha)^{\,n}(1-\beta)^{\,i_s+1}- (1-\beta)^{\,i+1}}{\beta-\alpha},
\end{equation}
```

which is the first line of [eq:517-closed].

For $`S_2(i)`$, define

```math
T(i)\;:=\;\sum_{k=i_s+1}^{i} (1-\alpha)^{\,i-k}(1-\beta)^{\,k}\,(k-i_s)
\;=\;(1-\alpha)^{\,n}(1-\beta)^{\,i_s}\sum_{t=1}^{n} t\,q^{\,t}.
```

The standard identity

```math
\sum_{t=1}^{n} t\,q^{\,t}
\;=\; \frac{q\left(1-(n+1)q^{\,n}+n q^{\,n+1}\right)}{(1-q)^2}
```

then gives

```math
\begin{equation}
\label{eq:517-T-closed}
T(i)\;=\;(1-\alpha)^{\,n}(1-\beta)^{\,i_s}\cdot
\frac{q\left(1-(n+1)q^{\,n}+n q^{\,n+1}\right)}{(1-q)^2}.
\end{equation}
```

By definition of $`S_2(i)`$,

```math
S_2(i) = C_\gamma\,S_1(i) - \beta\,T(i),
```

which together with [eq:517-S1-closed-1] and [eq:517-T-closed] yields the second line of [eq:517-closed].
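Both closed forms can be checked numerically against the defining sums. A small Python sketch, with arbitrary illustrative parameters and $`C_\gamma`$ taken from [eq:517-consts]:

```python
# Numeric check of the closed forms [eq:517-closed] for S1(i) and S2(i).
import math

delta, gamma = 0.01, 0.8
alpha = delta * gamma
beta = gamma**2 * delta / (1 - delta + gamma**2 * delta)
i_s, i = 15, 60
n = i - i_s
q = (1 - beta) / (1 - alpha)
C = ((1 - beta)**(-i_s) - 1) / gamma   # C_gamma from [eq:517-consts]

S1 = sum((1 - alpha)**(i - k) * (1 - beta)**k for k in range(i_s + 1, i + 1))
S2 = sum((1 - alpha)**(i - k) * (1 - beta)**k * (C - beta * (k - i_s))
         for k in range(i_s + 1, i + 1))

S1_closed = ((1 - alpha)**n * (1 - beta)**(i_s + 1)
             - (1 - beta)**(i + 1)) / (beta - alpha)
T_closed = ((1 - alpha)**n * (1 - beta)**i_s
            * q * (1 - (n + 1) * q**n + n * q**(n + 1)) / (1 - q)**2)
S2_closed = C * S1_closed - beta * T_closed

assert math.isclose(S1, S1_closed)
assert math.isclose(S2, S2_closed)
print("closed forms for S1 and S2 verified")
```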

Finally, we bound the smoothness penalty in [eq:517-main-before-smooth]. Since $`0\le (1-\alpha)^{i-k}\le 1`$,

```math
\begin{equation}
\label{eq:517-smooth-tail}
\sum_{k=i_s+1}^{i} (1-\alpha)^{\,i-k}
= \sum_{t=0}^{n-1} (1-\alpha)^t
\;\le\; n.
\end{equation}
```

Therefore

```math
\begin{equation}
\label{eq:517-smooth-final}
-\frac{\delta^{2}D^{2}L}{2}\sum_{k=i_s+1}^{i} (1-\alpha)^{\,i-k}
\ \ge\ -\,\frac{\delta^{2}D^{2}L}{2}\,n
\ \ge\ -(i-i_s)\,\delta^{2}D^{2}L,
\end{equation}
```

where we relaxed the factor $`\tfrac12`$ to get a slightly simpler bound.

Substituting [eq:517-smooth-final] into [eq:517-main-before-smooth] gives exactly [eq:517-main], which completes the proof. ◻

Combining Lemmas 21 and 22, we obtain the following corollary, which finishes the proof of Theorem 9 in the regime where $`Q(i)`$ is non-empty for all $`i\in[\delta^{-1}]`$.

Corollary 23. Assume $`Q(i)\neq\varnothing`$ for every $`i\in[\delta^{-1}]`$. Fix $`0<\delta\le 1`$, $`0<\gamma\le 1`$, and set

```math
\begin{equation}
\label{eq:518-params}
\alpha:=\delta\gamma,
\qquad
\beta:=\frac{\gamma^2\delta}{\,1-\delta+\gamma^2\delta\,}.
\end{equation}
```

Then, with $`i_s\in\{0,1,\dots,\delta^{-1}\}`$ and $`\mathbf{y}(\cdot)`$ as in Lemma 20, we have

```math
\begin{equation}
\label{eq:518-discrete}
\begin{aligned}
F\big(\mathbf{y}(\delta^{-1})\big)
\;&\ge\;
(1-\alpha)^{\,\delta^{-1}-i_s}\,F\big(\mathbf{y}(i_s)\big)
\;+\;\alpha\,A\,S_1(\delta^{-1})\,g
\;-\;\alpha\,S_2(\delta^{-1})\,g_{\oplus}
\;-\;(\delta^{-1}-i_s)\,\delta^2 D^2L\\[4pt]
&\ge\;
(1-\alpha)^{\,\delta^{-1}-i_s}\Bigg\{
\Big[\tfrac{1-(1-\alpha)^{i_s}}{\gamma}\,(1-2\varepsilon)
+\tfrac{\delta(1-\gamma)}{\beta-\alpha}\big((1-\beta)^{i_s}-(1-\alpha)^{i_s}\big)\Big]g\\
&\hspace{1.1cm}
-\;\tfrac{1-(1-\alpha)^{i_s}}{\gamma}\,g_{\odot}
-\Big[\tfrac{1-(1-\alpha)^{i_s}}{\gamma}
+\tfrac{\delta}{\beta-\alpha}\big((1-\beta)^{i_s}-(1-\alpha)^{i_s}\big)\Big]g_{\oplus}
-\,i_s\,\delta^2 D^2L
\Bigg\}\\[2pt]
&\quad+\;\alpha\,A\,S_1(\delta^{-1})\,g
\;-\;\alpha\,S_2(\delta^{-1})\,g_{\oplus}
\;-\;(\delta^{-1}-i_s)\,\delta^2 D^2L,
\end{aligned}
\end{equation}
```

where

```math
\begin{equation}
\label{eq:518-consts}
A\;:=\;\frac{(1-\beta)^{-i_s}}{\gamma}-\Bigl(1+\frac{3}{\gamma}\Bigr)\varepsilon+1-\frac{1}{\gamma},
\qquad
C_\gamma\;:=\;\frac{(1-\beta)^{-i_s}-1}{\gamma},
\end{equation}
```

and the sums from Lemma 22 admit the closed forms

```math
\begin{equation}
\label{eq:518-sums}
\begin{aligned}
S_1(\delta^{-1})
&=\ \frac{(1-\alpha)^{\,\delta^{-1}-i_s}(1-\beta)^{\,i_s+1}- (1-\beta)^{\,\delta^{-1}+1}}{\ \beta-\alpha\ },\\[4pt]
S_2(\delta^{-1})
&=\ C_\gamma\,S_1(\delta^{-1})
\;-\;\beta\,(1-\alpha)^{\,\delta^{-1}-i_s}(1-\beta)^{\,i_s}\,\\
&\qquad \frac{\displaystyle \frac{1-\beta}{1-\alpha}\left(1-(\delta^{-1}-i_s+1)\Bigl(\tfrac{1-\beta}{1-\alpha}\Bigr)^{\delta^{-1}-i_s}
+(\delta^{-1}-i_s)\Bigl(\tfrac{1-\beta}{1-\alpha}\Bigr)^{\delta^{-1}-i_s+1}\right)}
{\displaystyle \left(1-\tfrac{1-\beta}{1-\alpha}\right)^{2}}\,.
\end{aligned}
\end{equation}
```

*Proof.* We use the shorthand parameters $`\alpha`$ and $`\beta`$ from [eq:518-params]. By Lemma 21, evaluated at $`i=i_s`$, we have

```math
\begin{equation}
\label{eq:518-lemma516-at-is}
\begin{aligned}
F\!\big(\mathbf{y}(i_s)\big)
\;\ge\;&\;
\Bigg[\frac{1-(1-\alpha)^{i_s}}{\gamma}\,(1-2\varepsilon)
+\frac{\delta(1-\gamma)}{\beta-\alpha}\,\Big((1-\beta)^{i_s}-(1-\alpha)^{i_s}\Big)\Bigg]\,g\\[2pt]
&-\;\frac{1-(1-\alpha)^{i_s}}{\gamma}\,g_{\odot}
-\Bigg[\frac{1-(1-\alpha)^{i_s}}{\gamma}
+\frac{\delta}{\beta-\alpha}\,\Big((1-\beta)^{i_s}-(1-\alpha)^{i_s}\Big)\Bigg]\,g_{\oplus}
-\,i_s\,\delta^2 D^2L.
\end{aligned}
\end{equation}
```

This is just Lemma 21 with $`i=i_s`$ and the definitions $`\Delta_{i_s}=1-(1-\alpha)^{i_s}`$ and $`\Theta_{i_s}=(1-\beta)^{i_s}-(1-\alpha)^{i_s}`$.

Next, by Lemma 22, evaluated at $`i=\delta^{-1}`$, we obtain

```math
\begin{equation}
\label{eq:518-lemma517-at-end}
\begin{aligned}
F\big(\mathbf{y}(\delta^{-1})\big)
\;\ge\;&\;
(1-\alpha)^{\,\delta^{-1}-i_s}\,F\big(\mathbf{y}(i_s)\big)
+\alpha\,A\,S_1(\delta^{-1})\,g
-\alpha\,S_2(\delta^{-1})\,g_{\oplus}
-(\delta^{-1}-i_s)\,\delta^2 D^2L,
\end{aligned}
\end{equation}
```

where $`A`$ and $`C_\gamma`$ are as in [eq:518-consts] and $`S_1(\delta^{-1})`$, $`S_2(\delta^{-1})`$ are given in [eq:518-sums].

Substituting the lower bound [eq:518-lemma516-at-is] for $`F(\mathbf{y}(i_s))`$ into [eq:518-lemma517-at-end], then collecting the coefficients of $`g`$, $`g_{\odot}`$, and $`g_{\oplus}`$, and combining the smoothness penalties $`i_s\delta^2 D^2L`$ and $`(\delta^{-1}-i_s)\delta^2 D^2L`$, yields exactly the discrete inequality [eq:518-discrete]. This is a straightforward algebraic rearrangement.

To pass from the discrete-time bound [eq:518-discrete] to the continuous-time guarantee, define the switch time

```math
t_s:=\delta\,i_s\in[0,1],
```

and let $`\delta\to 0^{+}`$ while keeping $`t_s`$ fixed. Using

```math
(1-\alpha)^{\,\delta^{-1}-i_s}
= \big(1-\delta\gamma\big)^{(1-t_s)/\delta}
\;\longrightarrow\; e^{-\gamma(1-t_s)},
\qquad
(1-\alpha)^{\,i_s}
= \big(1-\delta\gamma\big)^{t_s/\delta}
\;\longrightarrow\; e^{-\gamma t_s},
```

and the closed forms [eq:518-sums], each discrete sum converges to the corresponding time integral in the continuous analysis. Concretely, $`S_1(\delta^{-1})`$ and $`S_2(\delta^{-1})`$ can be viewed as Riemann sums in the step size $`\delta`$; applying standard first-order expansions of $`(1-\alpha)`$ and $`(1-\beta)`$ (or equivalently, l’Hôpital’s rule to the associated limits) yields the continuous coefficients $`A_\gamma(t_s)`$, $`B_\gamma(t_s)`$, and $`C_\gamma(t_s)`$:

```math
\begin{equation}
\label{eq:518-cts-final}
F\big(\mathbf{y}^{\ast}\big)
\ \ge\ 
A_\gamma(t_s)\,g\;+\;B_\gamma(t_s)\,g_{\odot}\;+\;C_\gamma(t_s)\,g_{\oplus}
\;-\;O(\varepsilon)\,\bigl(g+g_{\odot}+g_{\oplus}\bigr)\;-\;\delta\,L D^{2},
\end{equation}
```

with

```math
\begin{align}
A_\gamma(t_s)
&:= -\frac{e^{\gamma t_s-\gamma}}{\,1-\gamma\,}
+\frac{e^{-\gamma^2}}{\gamma(1-\gamma)}\Big(e^{\gamma^2 t_s}-(1-\gamma)\Big), \label{eq:518-cts-A}\\[2pt]
B_\gamma(t_s)
&:= \frac{e^{-\gamma}-e^{\gamma t_s-\gamma}}{\gamma}, \label{eq:518-cts-B}\\[2pt]
C_\gamma(t_s)
&:= \frac{e^{\gamma^2 t_s}-1}{\gamma(1-\gamma)}\Big(e^{-\gamma(1-t_s)-\gamma^2 t_s}-e^{-\gamma^2}\Big)
+\frac{e^{-\gamma(1-t_s)}}{\gamma}\!\left[
-\big(1-e^{-\gamma t_s}\big)
+\frac{e^{-\gamma^2 t_s}-e^{-\gamma t_s}}{1-\gamma}
\right]\notag\\
&\quad
+\,e^{-\gamma(1-t_s)-\gamma^2 t_s}\!\left[
\frac{\gamma^2}{1-\gamma}(1-t_s)\,e^{\gamma(1-\gamma)(1-t_s)}
+\frac{\gamma}{(1-\gamma)^2}\Big(1-e^{\gamma(1-\gamma)(1-t_s)}\Big)
\right]. \label{eq:518-cts-C}
\end{align}
```
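As one concrete instance of this limit, the Python snippet below tracks the coefficient of $`g_{\odot}`$ in [eq:518-discrete], namely $`-(1-\alpha)^{\delta^{-1}-i_s}\bigl(1-(1-\alpha)^{i_s}\bigr)/\gamma`$, and shows that it approaches $`B_\gamma(t_s)`$ from [eq:518-cts-B] as $`\delta\to 0`$ with $`t_s=\delta i_s`$ held fixed; the values of $`\gamma`$ and $`t_s`$ are arbitrary.

```python
# Convergence of the discrete g_odot coefficient to B_gamma(t_s) as delta -> 0.
import math

gamma, t_s = 0.7, 0.4
B_cts = (math.exp(-gamma) - math.exp(gamma * t_s - gamma)) / gamma

for delta in [1e-2, 1e-3, 1e-4, 1e-5]:
    inv = round(1 / delta)             # delta^{-1}
    i_s = round(t_s / delta)           # switch index with t_s = delta * i_s
    alpha = delta * gamma
    B_disc = -(1 - alpha)**(inv - i_s) * (1 - (1 - alpha)**i_s) / gamma
    print(f"delta={delta:.0e}  discrete={B_disc:.8f}  "
          f"continuous={B_cts:.8f}  gap={abs(B_disc - B_cts):.2e}")
```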

Finally, identifying $`(g,g_{\odot},g_{\oplus})`$ with $`\big(F(\mathbf{o}),F(\mathbf{z}\!\odot\!\mathbf{o}),F(\mathbf{z}\!\oplus\!\mathbf{o})\big)`$ shows that [eq:518-cts-final] is exactly the continuous–time guarantee used in Theorem 9 (1). ◻

At this point, we handle the case where $`Q(i)=\varnothing`$ for some $`i\in[\delta^{-1}]`$. Note that $`Q(i)=\varnothing`$ implies, in particular, $`\mathbf{o}\notin Q(i)`$. Accordingly, define

```math
i_o\;:=\;\min\big\{\,i\in[\delta^{-1}] \;:\; \mathbf{o}\notin Q(i)\,\big\}.
```

Observation 5. For every $`i\in[i_o-1]`$,

```math
\begin{equation}
\label{eq:obs519-claim}
F\big(\mathbf{x}(i)\big)\;\ge\;
\frac{\gamma^{2}\,F\big(\mathbf{x}(i)\vee \mathbf{o}\big)+F\big(\mathbf{x}(i)\wedge \mathbf{o}\big)}{1+\gamma^{2}}
\;-\;\varepsilon\,F(\mathbf{o})\;-\;\delta\,L D^{2}\,.
\end{equation}
```

*Proof.* Fix $`i\in[i_o-1]`$. Since $`i<i_o`$, the minimality of $`i_o`$ guarantees $`\mathbf{o}\in Q(i)`$. We apply Theorem 5 to the down-closed body $`P:=Q(i)`$ with comparison point $`\mathbf{y}:=\mathbf{o}\in Q(i)`$, and run the local routine with accuracy parameter

```math
\eta\;:=\;\min\{\varepsilon,\,2\delta\}.
```

By Theorem 5, the output $`\mathbf{x}(i)\in Q(i)`$ satisfies

```math
\begin{equation}
\label{eq:obs519-thm}
\begin{aligned}
F\!\big(\mathbf{x}(i)\big)
&\ge
\frac{\gamma^{2}\,F\!\big(\mathbf{x}(i)\!\vee \mathbf{o}\big)+F\!\big(\mathbf{x}(i)\!\wedge \mathbf{o}\big)}{1+\gamma^{2}}
-
\frac{\eta\,\gamma}{1+\gamma^{2}}
\left(\max_{\mathbf{y}'\in Q(i)}F(\mathbf{y}')+\tfrac{1}{2}L D^{2}\right).
\end{aligned}
\end{equation}
```

Here the first term is exactly the lattice-based comparison from Theorem 5, and the second term is the uniform first-order error with parameter $`\eta`$.

Because $`\mathbf{o}\in Q(i)`$, we have

```math
\max_{\mathbf{y}'\in Q(i)}F(\mathbf{y}') \;\ge\; F(\mathbf{o}).
```

The coefficient in front of $`\max_{\mathbf{y}'\in Q(i)}F(\mathbf{y}')`$ in [eq:obs519-thm] is negative, namely $`-\tfrac{\eta\gamma}{1+\gamma^{2}}`$. Thus replacing $`\max_{\mathbf{y}'\in Q(i)}F(\mathbf{y}')`$ by the smaller value $`F(\mathbf{o})`$ yields a stronger lower bound on $`F(\mathbf{x}(i))`$:

```math
\begin{equation}
\label{eq:obs519-with-eta}
F\!\big(\mathbf{x}(i)\big)\;\ge\;
\frac{\gamma^{2}\,F\big(\mathbf{x}(i)\vee \mathbf{o}\big)+F\big(\mathbf{x}(i)\wedge \mathbf{o}\big)}{1+\gamma^{2}}
-
\frac{\eta\,\gamma}{1+\gamma^{2}}\,F(\mathbf{o})
-
\frac{\eta\,\gamma}{2(1+\gamma^{2})}\,L D^{2}.
\end{equation}
```

We now simplify the two error terms. Since $`\gamma\in(0,1]`$,

```math
\frac{\gamma}{1+\gamma^{2}}\;\le\;1,
\qquad
\frac{\gamma}{2(1+\gamma^{2})}\;\le\;\frac{1}{2}.
```

Using $`\eta\le\varepsilon`$ and $`\eta\le 2\delta`$ (by the definition $`\eta=\min\{\varepsilon,2\delta\}`$), we obtain

```math
\begin{equation}
\label{eq:obs519-simplify}
\frac{\eta\,\gamma}{1+\gamma^{2}}\,F(\mathbf{o})
\;\le\;\varepsilon\,F(\mathbf{o}),
\qquad
\frac{\eta\,\gamma}{2(1+\gamma^{2})}\,L D^{2}
\;\le\;\delta\,L D^{2}.
\end{equation}
```

Substituting the bounds in [eq:obs519-simplify] into [eq:obs519-with-eta] gives

```math
F\!\big(\mathbf{x}(i)\big)\;\ge\;
\frac{\gamma^{2}\,F\big(\mathbf{x}(i)\vee \mathbf{o}\big)+F\big(\mathbf{x}(i)\wedge \mathbf{o}\big)}{1+\gamma^{2}}
\;-\;\varepsilon\,F(\mathbf{o})\;-\;\delta\,L D^{2},
```

which is exactly [eq:obs519-claim]. ◻

Observation 5 yields Theorem 9 as soon as we can find some $`i\in[i_o-1]`$ with

```math
\begin{equation}
\label{eq:gap-cond}
F\big(\mathbf{x}(i)\oplus \mathbf{o}\big)\ \le\ F\big(\mathbf{z}\oplus \mathbf{o}\big)\;-\;\varepsilon\,F(\mathbf{o}).
\end{equation}
```

Therefore, our task reduces to showing that the gap condition [eq:gap-cond] holds for at least one index $`i\in[i_o-1]`$. This is exactly what Lemma 25 and Lemma 26 prove, which in turn completes the proof of Theorem 9. Before presenting those lemmas, we record one auxiliary lemma that we will use in their proofs.

Lemma 24. It must hold that

```math
F\bigl(\mathbf{y}(i_o-1)\oplus \mathbf{o}\;-\;\mathbf{z}(i_o-1)\odot \mathbf{o}\bigr)\ \le\ V_{\gamma}(i_o-1).
```

*Proof.* By the definition of $`i_o`$, we have $`\mathbf{o}\notin Q(i_o)`$. The weakly-DR membership-failure condition at iteration $`i_o`$ states that

```math
\begin{equation}
\label{eq:membership-failure}
\big\langle \mathbf{w}(i_o),\,\mathbf{o}\big\rangle
\;\le\;
\gamma\bigl(V_{\gamma}(i_o-1)-F(\mathbf{y}(i_o-1))\bigr).
\end{equation}
```

On the other hand, the weakly-DR gradient bound for any $`i\in[\delta^{-1}]`$ gives

```math
\begin{equation}
\label{eq:grad-bound}
\big\langle \mathbf{w}(i),\,\mathbf{o}\big\rangle
\;\ge\;
\gamma\Big(
F\bigl(\mathbf{y}(i-1)\oplus \mathbf{o}\;-\;\mathbf{z}(i-1)\odot \mathbf{o}\bigr)
-
F(\mathbf{y}(i-1))
\Big).
\end{equation}
```

This applies in particular for $`i=i_o`$, so

```math
\begin{equation}
\label{eq:grad-bound-io}
\big\langle \mathbf{w}(i_o),\,\mathbf{o}\big\rangle
\;\ge\;
\gamma\Big(
F\bigl(\mathbf{y}(i_o-1)\oplus \mathbf{o}\;-\;\mathbf{z}(i_o-1)\odot \mathbf{o}\bigr)
-
F(\mathbf{y}(i_o-1))
\Big).
\end{equation}
```

Combining [eq:membership-failure] and [eq:grad-bound-io] yields

```math
\begin{equation}
\label{eq:combine-bounds}
\gamma\Big(
F\bigl(\mathbf{y}(i_o-1)\oplus \mathbf{o}\;-\;\mathbf{z}(i_o-1)\odot \mathbf{o}\bigr)
-
F(\mathbf{y}(i_o-1))
\Big)
\;\le\;
\gamma\bigl(V_{\gamma}(i_o-1)-F(\mathbf{y}(i_o-1))\bigr).
\end{equation}
```

Since $`\gamma>0`$, we can divide both sides of [eq:combine-bounds] by $`\gamma`$ and add $`F(\mathbf{y}(i_o-1))`$ to both sides, obtaining

```math
F\bigl(\mathbf{y}(i_o-1)\oplus \mathbf{o}\;-\;\mathbf{z}(i_o-1)\odot \mathbf{o}\bigr)
\;\le\;
V_{\gamma}(i_o-1),
```

which is the desired inequality. ◻

Lemma 25. Let $`\beta:=\beta_\gamma(\delta)=\dfrac{\gamma^2\delta}{\,1-\delta+\gamma^2\delta\,}`$. Then it must hold that $`i_o>i_s`$.

*Proof.* Assume for contradiction that $`i_o\le i_s`$. Since $`i_o-1<i_s`$, we are still in the pre-switch phase, so Lemma 24 at time $`i_o-1`$ (cf. $`v_1(\cdot)`$ with the triple $`(g,g_{\odot},g_{\oplus})`$) implies

```math
\begin{align}
&F\!\big(\mathbf{y}(i_o-1)\oplus \mathbf{o}\;-\;\mathbf{z}\odot \mathbf{o}\big)\notag\\
&\hspace{1cm}\le\ \Bigl[(1-\beta)^{\,i_o-1}+\frac{1-(1-\beta)^{\,i_o-1}-2\varepsilon}{\gamma}\Bigr]\;g
\;-\;\frac{1}{\gamma}\,g_{\odot}
\;-\;\frac{1-(1-\beta)^{\,i_o-1}}{\gamma}\,g_{\oplus}
\label{eq:weak521-contr1}\\
&\hspace{1cm}\le\ \Bigl[(1-\beta)^{\,i_o-1}+\frac{1-(1-\beta)^{\,i_o-1}}{\gamma}\Bigr]\;F(\mathbf{o})
\;-\;\frac{1}{\gamma}\,F(\mathbf{z}\odot \mathbf{o})
\;-\;\frac{1-(1-\beta)^{\,i_o-1}}{\gamma}\,F(\mathbf{z}\oplus \mathbf{o}).
\label{eq:weak521-contr2}
\end{align}
```

Here [eq:weak521-contr1] is exactly the benchmark inequality evaluated at $`i=i_o-1`$, and [eq:weak521-contr2] uses Lemma 8:

```math
(1-\varepsilon)F(\mathbf{o})\le g\le F(\mathbf{o}),\quad
F(\mathbf{z}\odot\mathbf{o})-\varepsilon g \le g_{\odot}\le F(\mathbf{z}\odot\mathbf{o}),\quad
F(\mathbf{z}\oplus\mathbf{o})-\varepsilon g \le g_{\oplus}\le F(\mathbf{z}\oplus\mathbf{o}),
```

and the fact that replacing $`g`$ by $`F(\mathbf{o})`$ and $`g_{\odot},g_{\oplus}`$ by $`F(\mathbf{z}\odot\mathbf{o}),F(\mathbf{z}\oplus\mathbf{o})`$ can only increase the right-hand side of [eq:weak521-contr1] (since $`g`$ carries a positive coefficient there, while $`g_{\odot}`$ and $`g_{\oplus}`$ carry negative coefficients).

Next, by the closed form of $`\mathbf{y}(\cdot)`$ (Lemma 20 with $`i=i_o-1`$),

```math
\begin{equation}
\label{eq:weak521-yform}
\mathbf{y}(i_o-1)\;=\;(\mathbf{1}-\mathbf{z})\ \odot\ \bigoplus_{j=1}^{i_o-1}\big(\delta\,\mathbf{x}(j)\big).
\end{equation}
```

Applying Corollary 28 with $`h=1`$, $`r=i_o-1`$, $`p_j=\delta`$ for all $`j`$, and outer mask $`(\mathbf{1}-\mathbf{z})`$ yields the mixture lower bound

```math
\begin{align}
F\big(\mathbf{y}(i_o-1)\oplus \mathbf{o}\;-\;\mathbf{z}\odot \mathbf{o}\big)
&=F\Big((\mathbf{1}-\mathbf{z})\odot \bigoplus_{j=1}^{i_o-1}(\delta\,\mathbf{x}(j))\ \oplus\ \mathbf{o}\;-\;\mathbf{z}\odot \mathbf{o}\Big)\notag\\
&\ge \sum_{S\subseteq[i_o-1]}\beta^{|S|}(1-\beta)^{\,i_o-1-|S|}
\;F\Big((\mathbf{1}-\mathbf{z})\odot \bigoplus_{j\in S}\mathbf{x}(j)\ \oplus\ \mathbf{o}\Big).
\label{eq:weak521-mixture}
\end{align}
```

We now bound separately the contributions from $`S=\varnothing`$ and $`S\neq\varnothing`$.

The $`S=\varnothing`$ term.

When $`S=\varnothing`$, the inner vector reduces to $`(\mathbf{1}-\mathbf{z})\odot\mathbf{o}`$. Using the weakly-DR inequality in the form of Lemma 2 together with nonnegativity, we have

```math
\begin{equation}
\label{eq:weak521-Sempty}
F\big((\mathbf{1}-\mathbf{z})\odot \mathbf{o}\big)
\;\ge\; F(\mathbf{o})\;-\;\frac{1}{\gamma}\,F(\mathbf{z}\odot \mathbf{o}),
\end{equation}
```

which says that reducing $`\mathbf{o}`$ along the direction $`\mathbf{z}\odot\mathbf{o}`$ cannot decrease $`F`$ too much, up to a $`1/\gamma`$ factor. Thus the $`S=\varnothing`$ contribution in [eq:weak521-mixture] satisfies

```math
\begin{equation}
\label{eq:weak521-contrSem}
(1-\beta)^{\,i_o-1}\,F\big((\mathbf{1}-\mathbf{z})\odot \mathbf{o}\big)
\;\ge\; (1-\beta)^{\,i_o-1}\Bigl[F(\mathbf{o})-\frac{1}{\gamma}F(\mathbf{z}\odot \mathbf{o})\Bigr].
\end{equation}
```

The $`S\neq\varnothing`$ terms.

Fix any nonempty $`S\subseteq[i_o-1]`$. We first write

```math
(\mathbf{1}-\mathbf{z})\odot \bigoplus_{j\in S}\mathbf{x}(j)\ \oplus\ \mathbf{o}
\;=\;
\Big((\mathbf{1}-\mathbf{z})\odot \bigoplus_{j\in S}\mathbf{x}(j)\ \oplus\ \mathbf{o}\;-\;\mathbf{z}\odot \mathbf{o}\Big)
\;+\; \mathbf{z}\odot \mathbf{o},
```

and then apply the weakly-DR inequality and nonnegativity in the same way as for [eq:weak521-Sempty]. This gives

```math
\begin{align}
F\Big((\mathbf{1}-\mathbf{z})\odot \bigoplus_{j\in S}\mathbf{x}(j)\ \oplus\ \mathbf{o}\Big)
&\ge
F\Big((\mathbf{1}-\mathbf{z})\odot \bigoplus_{j\in S}\mathbf{x}(j)\ \oplus\ \mathbf{o}\;-\;\mathbf{z}\odot \mathbf{o}\Big)
\;-\;\frac{1}{\gamma}\,F(\mathbf{z}\odot \mathbf{o}) \notag\\
&\ge \frac{1}{\gamma}\Bigl[F(\mathbf{o})-F(\mathbf{z}\oplus \mathbf{o})-F(\mathbf{z}\odot \mathbf{o})\Bigr],
\label{eq:weak521-Snonempty}
\end{align}
```

where in the second inequality we use the weakly-DR difference bounds to compare the value at $`\mathbf{o}`$ with those at $`\mathbf{z}\oplus\mathbf{o}`$ and $`\mathbf{z}\odot\mathbf{o}`$. Multiplying [eq:weak521-Snonempty] by $`\beta^{|S|}(1-\beta)^{\,i_o-1-|S|}`$ and summing over all nonempty $`S`$ yields

```math
\begin{align}
&\sum_{\varnothing\neq S\subseteq[i_o-1]}\beta^{|S|}(1-\beta)^{\,i_o-1-|S|}
F\Big((\mathbf{1}-\mathbf{z})\odot \bigoplus_{j\in S}\mathbf{x}(j)\ \oplus\ \mathbf{o}\Big)\notag\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\ge \Bigl[1-(1-\beta)^{\,i_o-1}\Bigr]\,
\frac{1}{\gamma}\Bigl[F(\mathbf{o})-F(\mathbf{z}\oplus \mathbf{o})-F(\mathbf{z}\odot \mathbf{o})\Bigr],
\label{eq:weak521-Ssum}
\end{align}
```

since $`\sum_{\varnothing\neq S\subseteq[i_o-1]}\beta^{|S|}(1-\beta)^{\,i_o-1-|S|} =1-(1-\beta)^{i_o-1}`$.
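This last identity is the binomial theorem in disguise; a brute-force Python check over all subsets, with illustrative values for $`\beta`$ and for $`m:=i_o-1`$:

```python
# Check: sum over nonempty S of beta^{|S|}(1-beta)^{m-|S|} = 1-(1-beta)^m.
from itertools import combinations

beta, m = 0.03, 8                      # m stands in for i_o - 1
total = sum(beta**len(S) * (1 - beta)**(m - len(S))
            for r in range(1, m + 1)
            for S in combinations(range(m), r))
assert abs(total - (1 - (1 - beta)**m)) < 1e-12
print("subset weights over nonempty S sum to 1-(1-beta)^m")
```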

Putting everything together.

Combining the $`S=\varnothing`$ contribution [eq:weak521-contrSem] and the $`S\neq\varnothing`$ contribution [eq:weak521-Ssum] with the mixture representation [eq:weak521-mixture], we obtain

```math
\begin{align}
F\big(\mathbf{y}(i_o-1)\oplus \mathbf{o}\;-\;\mathbf{z}\odot \mathbf{o}\big)
&\ge \Bigl[(1-\beta)^{\,i_o-1}+\frac{1-(1-\beta)^{\,i_o-1}}{\gamma}\Bigr]\,F(\mathbf{o})\notag\\
&\hspace{2cm}\;-\;\frac{1-(1-\beta)^{\,i_o-1}}{\gamma}\,F(\mathbf{z}\oplus \mathbf{o})
\;-\;\frac{1}{\gamma}\,F(\mathbf{z}\odot \mathbf{o}).
\label{eq:weak521-lower}
\end{align}
```

The right-hand side of [eq:weak521-lower] is strictly larger than the upper bound in [eq:weak521-contr2]: the latter descends from [eq:weak521-contr1], which carries a $`-2\varepsilon`$ slack absent from [eq:weak521-lower], and the triple $`(g,g_{\odot},g_{\oplus})`$ approximates $`\big(F(\mathbf{o}),F(\mathbf{z}\odot\mathbf{o}),F(\mathbf{z}\oplus\mathbf{o})\big)`$ only up to additive $`O(\varepsilon)`$ terms. This contradicts [eq:weak521-contr2]; hence the assumption $`i_o\le i_s`$ was false, and we conclude that $`i_o>i_s`$. ◻

Lemma 26. If $`i_o>i_s`$, then there exists some $`i\in[i_o-1]`$ such that

```math
F\big(\mathbf{x}(i)\ \oplus\ \mathbf{o}\big)\ \le\ F\big(\mathbf{z}\ \oplus\ \mathbf{o}\big)\;-\;\varepsilon\,F(\mathbf{o})\,.
```

*Proof.* Set
$`\beta:=\beta_\gamma(\delta)=\dfrac{\gamma^2\delta}{\,1-\delta+\gamma^2\delta\,}`$
and note that $`0<\beta\le \varepsilon\le \tfrac12`$. Since $`i_o>i_s`$,
we have $`i_o-1\ge i_s`$.

Upper bound.

By the post-switch surrogate (the $`v_2`$-bound) evaluated at $`i=i_o-1`$ and the assumptions on the successfully guessed triple $`(g,g_{\odot},g_{\oplus})`$ (cf. [eq:v2-def-uniq] and [eq:triple-bounds]), we obtain

```math
\begin{align}
F\big(\mathbf{y}(i_o-1)\oplus \mathbf{o}\big)
&\overset{\text{(a)}}{\le}\ (1-\beta)^{\,i_o-1}\left[
\left(\frac{(1-\beta)^{-i_s}}{\gamma}-\Bigl(1+\frac{3}{\gamma}\Bigr)\varepsilon+1-\frac{1}{\gamma}\right) g
\right.\notag\\[-2pt]
&\hspace{3.3cm}\left.
-\ \left(\frac{(1-\beta)^{-i_s}}{\gamma}-\frac{1}{\gamma}-\beta\,(i_o-1-i_s)\right) g_{\oplus}
\right]
\label{eq:522-UB-a}\\
&\overset{\text{(b)}}{\le}\ (1-\beta)^{\,i_o-1}\!\left[
\left(\frac{(1-\beta)^{-i_s}}{\gamma}-\varepsilon+1-\frac{1}{\gamma}\right) F(\mathbf{o})\right.\notag\\
&\hspace{3.3cm}\left. -\ \left(\frac{(1-\beta)^{-i_s}}{\gamma}-\frac{1}{\gamma}-\beta\,(i_o-1-i_s)\right) F(\mathbf{z}\oplus \mathbf{o})
\right].\label{eq:522-UB-b}
\end{align}
```

Here: (a) is just the explicit formula for $`v_2(i_o-1)`$ with the constants $`A`$ and $`C_\gamma`$ expanded. For (b) we use the triple guarantees [eq:triple-bounds]:

```math
g\le F(\mathbf{o}),\qquad
  g_{\oplus}\ \ge\ F(\mathbf{z}\oplus \mathbf{o})-\varepsilon\,g.
```

Writing $`B:=\frac{(1-\beta)^{-i_s}}{\gamma}-\frac{1}{\gamma}-\beta\,(i_o-1-i_s)\ge 0`$, we have

```math
A g - B g_{\oplus}
  \;\le\; A g - B\big(F(\mathbf{z}\oplus\mathbf{o})-\varepsilon g\big)
  \;=\; (A+B\varepsilon)\,g\;-\;B\,F(\mathbf{z}\oplus\mathbf{o}),
```

and then $`g\le F(\mathbf{o})`$ gives $`(A+B\varepsilon)\,g\le (A+B\varepsilon)F(\mathbf{o})`$. A crude bound $`(1-\beta)^{-i_s}\le (1-\beta)^{-1/\beta}\le 4`$ (using $`\beta\le 1/2`$) implies

```math
B
  =\frac{(1-\beta)^{-i_s}-1}{\gamma}-\beta(i_o-1-i_s)
  \ \le\ \frac{(1-\beta)^{-i_s}-1}{\gamma}
  \ \le\ \frac{3}{\gamma}.
```

Thus

```math
A+B\varepsilon
  \;\le\;\left(\frac{(1-\beta)^{-i_s}}{\gamma}-\Bigl(1+\frac{3}{\gamma}\Bigr)\varepsilon+1-\frac{1}{\gamma}\right)
  +\frac{3}{\gamma}\varepsilon
  =\frac{(1-\beta)^{-i_s}}{\gamma}-\varepsilon+1-\frac{1}{\gamma},
```

which is exactly the coefficient of $`F(\mathbf{o})`$ in [eq:522-UB-b].
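The crude bound itself is easy to confirm: $`\beta\mapsto(1-\beta)^{-1/\beta}`$ increases from $`e`$ to $`4`$ on $`(0,\tfrac12]`$, and $`(1-\beta)^{-i_s}\le(1-\beta)^{-1/\beta}`$ whenever $`i_s\le 1/\beta`$ (which holds here, since $`\beta\le\delta`$ for $`\gamma\le 1`$ and $`i_s\le\delta^{-1}`$). A quick Python check of the envelope:

```python
# Check that (1-beta)^(-1/beta) <= 4 for all 0 < beta <= 1/2.
vals = [k / 1000 for k in range(1, 501)]
assert all((1 - b)**(-1 / b) <= 4 + 1e-12 for b in vals)
print("max of (1-beta)^(-1/beta) on (0, 1/2]:",
      max((1 - b)**(-1 / b) for b in vals))   # attained at beta = 1/2
```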

Lower bound.

From the closed form of $`\mathbf{y}(\cdot)`$ (Lemma 20 with $`i=i_o-1`$),

```math
\begin{equation}
\label{eq:522-y-form}
\mathbf{y}(i_o-1)\;=\;(\mathbf{1}-\mathbf{z})\odot\bigoplus_{j=1}^{i_o-1}\big(\delta\,\mathbf{x}(j)\big)
\;+\;\mathbf{z}\odot\bigoplus_{j=i_s+1}^{i_o-1}\big(\delta\,\mathbf{x}(j)\big).
\end{equation}
```

Apply the weakly-DR mixture inequality with masking (Corollary 28) to [eq:522-y-form]. We keep only three nonnegative groups of terms:

- the empty-set term $`S=\varnothing`$;
- all nonempty subsets $`S\subseteq[i_s]`$;
- all singletons $`S=\{j\}`$ with $`j\in\{i_s+1,\dots,i_o-1\}`$.

Using Lemma 2 (weakly-DR gradient bounds) and nonnegativity of $`F`$, these three groups yield

```math
\begin{align}
F\big(\mathbf{y}(i_o-1)\oplus \mathbf{o}\big)
&\overset{\text{(c)}}{\ge}\ (1-\beta)^{\,i_o-1}\,F(\mathbf{o})
\ +\ (1-\beta)^{\,i_o-1-i_s}\bigl(1-(1-\beta)^{\,i_s}\bigr)\,
\frac{1}{\gamma}\bigl[F(\mathbf{o})-F(\mathbf{z}\oplus \mathbf{o})\bigr]\notag\\
&\hspace{3cm}
+\ \beta(1-\beta)^{\,i_o-2}\sum_{j=i_s+1}^{i_o-1} F\big(\mathbf{x}(j)\oplus \mathbf{o}\big),
\label{eq:522-LB}
\end{align}
```

where (c) comes from:

- $`S=\varnothing`$: gives the term $`(1-\beta)^{i_o-1}F(\mathbf{o})`$;
- $`S\subseteq[i_s]`$, $`S\neq\varnothing`$: combined and bounded below by $`\frac{1}{\gamma}\bigl[F(\mathbf{o})-F(\mathbf{z}\oplus \mathbf{o})\bigr]`$, with total weight $`(1-\beta)^{i_o-1-i_s}\bigl(1-(1-\beta)^{i_s}\bigr)`$;
- singletons $`S=\{j\}`$ with $`j>i_s`$: each has weight $`\beta(1-\beta)^{i_o-2}`$ and contributes $`F\big(\mathbf{x}(j)\oplus \mathbf{o}\big)`$.

Comparing the bounds.

Combining [eq:522-UB-b] and [eq:522-LB] and dividing both sides by the common factor $`(1-\beta)^{\,i_o-1}>0`$ yields

```math
\begin{aligned}
&F(\mathbf{o})
+ (1-\beta)^{-i_s}\bigl(1-(1-\beta)^{\,i_s}\bigr)\,
\frac{1}{\gamma}\bigl[F(\mathbf{o})-F(\mathbf{z}\oplus \mathbf{o})\bigr]
+ \frac{\beta}{1-\beta}\sum_{j=i_s+1}^{i_o-1} F\big(\mathbf{x}(j)\oplus \mathbf{o}\big)\\
&\qquad\le\
\left(\frac{(1-\beta)^{-i_s}}{\gamma}-\varepsilon+1-\frac{1}{\gamma}\right) F(\mathbf{o})
\;-\;\left(\frac{(1-\beta)^{-i_s}}{\gamma}-\frac{1}{\gamma}-\beta\,(i_o-1-i_s)\right) F(\mathbf{z}\oplus \mathbf{o}).
\end{aligned}
```

Rearranging and simplifying the coefficient of $`F(\mathbf{o})`$ and $`F(\mathbf{z}\oplus \mathbf{o})`$ gives

```math
\begin{equation}
\label{eq:522-sum-ineq}
\beta\sum_{j=i_s+1}^{i_o-1} F\big(\mathbf{x}(j)\oplus \mathbf{o}\big)
\ \le\ 
\bigl(1-(1-\beta)^{\,i_o-1-i_s}\bigr)\,\Bigl[F(\mathbf{z}\oplus \mathbf{o})-\varepsilon\,F(\mathbf{o})\Bigr].
\end{equation}
```

Moreover,

```math
1-(1-\beta)^{\,i_o-1-i_s}\ \le\ \beta\,(i_o-1-i_s),
```

so from [eq:522-sum-ineq] we obtain

```math
\frac{1}{\,i_o-1-i_s\,}\sum_{j=i_s+1}^{i_o-1} F\big(\mathbf{x}(j)\oplus \mathbf{o}\big)
\ \le\ F(\mathbf{z}\oplus \mathbf{o})-\varepsilon\,F(\mathbf{o}).
```

By averaging, there must exist some $`j\in\{i_s+1,\dots,i_o-1\}`$ such that

```math
F\big(\mathbf{x}(j)\oplus \mathbf{o}\big)\ \le\ F(\mathbf{z}\oplus \mathbf{o})-\varepsilon\,F(\mathbf{o}),
```

which proves the lemma. ◻

Supporting results

In this section we prove two auxiliary results that are used in the proofs of Lemma 25 and Lemma 26. Throughout, we use the convention

```math
\bigoplus_{i\in \varnothing} \mathbf{x}(i)\ :=\ \mathbf{0}.
```

Lemma 27. Let $`F:[0,1]^n\to\mathbb{R}_{\ge 0}`$ be differentiable and $`\gamma`$-weakly DR-submodular with $`0<\gamma\le 1`$. Fix an integer $`r\ge 1`$, vectors $`\mathbf{x}(1),\dots,\mathbf{x}(r)\in[0,1]^n`$, and scalars $`p_1,\dots,p_r\in[0,1]`$. Define

```math
\beta_\gamma(p)\ :=\ \frac{\gamma^2\,p}{\,1-p+\gamma^2p\,}\qquad(p\in[0,1]).
```

Then

```math
F\!\left(\bigoplus_{i=1}^r p_i\,\mathbf{x}(i)\right)
 \;\ge\;
\sum_{S\subseteq[r]}
\ \Biggl(\ \prod_{i\in S}\beta_\gamma(p_i)\prod_{i\notin S}\bigl(1-\beta_\gamma(p_i)\bigr)\ \Biggr)
F\!\left(\bigoplus_{i\in S} \mathbf{x}(i)\right).
```

*Proof.* We prove the statement by induction on $`r`$.

*Base case $`r=1`$.* Apply Lemma 1(2) with $`\mathbf{x}=\mathbf{0}`$, $`\mathbf{y}=\mathbf{x}(1)`$ and $`\lambda=p_1`$:

```math
F\bigl(p_1\,\mathbf{x}(1)\bigr)-F(\mathbf{0})
\ \ge\ \frac{\gamma^2p_1}{1-p_1+\gamma^2p_1}\,\bigl(F(\mathbf{x}(1))-F(\mathbf{0})\bigr).
```

Rearranging gives

```math
F\bigl(p_1\,\mathbf{x}(1)\bigr)
\ \ge\ \beta_\gamma(p_1)\,F(\mathbf{x}(1))+\bigl(1-\beta_\gamma(p_1)\bigr)\,F(\mathbf{0}),
```

which is exactly the claimed formula when $`r=1`$ and $`S\in\{\varnothing,\{1\}\}`$.
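To make the base case concrete: in one dimension with $`\gamma=1`$, any nonnegative concave function is DR-submodular, $`\beta_1(p)=p`$, and the base-case inequality reduces to concavity along the segment from $`0`$ to $`x`$. A tiny Python illustration (the particular concave function is an arbitrary choice):

```python
# Base case with gamma = 1: F(p x) >= p F(x) + (1-p) F(0) is just concavity.
import math

def F(t):                              # nonnegative and concave on [0, 1]
    return math.sqrt(t + 0.1)

for p in [0.1, 0.3, 0.7, 0.9]:
    for x in [0.2, 0.5, 1.0]:
        assert F(p * x) >= p * F(x) + (1 - p) * F(0) - 1e-12
print("base-case inequality holds on the concave example")
```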

Inductive step. Assume the statement holds for some $`r-1\ge 1`$, and consider $`r`$. Define

```math
G_1(\mathbf{x})\ :=\ F\!\left(\mathbf{x}\ \oplus\ \bigoplus_{i=1}^{r-1} p_i\,\mathbf{x}(i)\right),
\qquad
G_2(\mathbf{x})\ :=\ F\!\bigl(\mathbf{x}\oplus \mathbf{x}(r)\bigr).
```

By Lemma 3, both $`G_1`$ and $`G_2`$ are nonnegative and $`\gamma`$-weakly DR-submodular.

Apply Lemma 1(2) to $`G_1`$ along the ray $`\mathbf{x}(r)`$, from base $`\mathbf{0}`$ with step $`\lambda=p_r`$:

```math
\begin{align}
F\!\left(\bigoplus_{i=1}^{r} p_i\,\mathbf{x}(i)\right)
&= G_1\!\bigl(p_r\,\mathbf{x}(r)\bigr)\notag\\
&\ge
\beta_\gamma(p_r)\,G_1\!\bigl(\mathbf{x}(r)\bigr)
\ +\ \bigl(1-\beta_\gamma(p_r)\bigr)\,G_1(\mathbf{0}). \label{eq:mix-step}
\end{align}
```

By definition of $`G_1`$ and $`G_2`$,

```math
G_1(\mathbf{x}(r)) = G_2\!\left(\bigoplus_{i=1}^{r-1} p_i\,\mathbf{x}(i)\right),
\qquad
G_1(\mathbf{0})=F\!\left(\bigoplus_{i=1}^{r-1} p_i\,\mathbf{x}(i)\right).
```

Substituting into [eq:mix-step] gives

```math
F\!\left(\bigoplus_{i=1}^{r} p_i\,\mathbf{x}(i)\right)
\ \ge\
\beta_\gamma(p_r)\,G_2\!\left(\bigoplus_{i=1}^{r-1} p_i\,\mathbf{x}(i)\right)
\ +\
\bigl(1-\beta_\gamma(p_r)\bigr)\,F\!\left(\bigoplus_{i=1}^{r-1} p_i\,\mathbf{x}(i)\right).
```

Now apply the induction hypothesis to $`G_2`$ (with the $`r-1`$ vectors $`\mathbf{x}(1),\dots,\mathbf{x}(r-1)`$ and weights $`p_1,\dots,p_{r-1}`$) and to $`F`$:

```math
\begin{align*}
G_2\!\left(\bigoplus_{i=1}^{r-1} p_i\,\mathbf{x}(i)\right)
&\ge
\sum_{S\subseteq[r-1]}
\Biggl(\prod_{i\in S}\beta_\gamma(p_i) \prod_{i\notin S}\bigl(1-\beta_\gamma(p_i)\bigr)\Biggr)
\,G_2\!\left(\bigoplus_{i\in S}\mathbf{x}(i)\right),\\
F\!\left(\bigoplus_{i=1}^{r-1} p_i\,\mathbf{x}(i)\right)
&\ge
\sum_{S\subseteq[r-1]}
\Biggl(\prod_{i\in S}\beta_\gamma(p_i)\prod_{i\notin S}\bigl(1-\beta_\gamma(p_i)\bigr)\Biggr)
\,F\!\left(\bigoplus_{i\in S}\mathbf{x}(i)\right).
\end{align*}
```

Note that

```math
G_2\!\left(\bigoplus_{i\in S}\mathbf{x}(i)\right)
=F\!\left(\bigoplus_{i\in S\cup\{r\}}\mathbf{x}(i)\right).
```

Plugging these expansions into the right-hand side of [eq:mix-step], we get a sum over all $`S\subseteq[r-1]`$ of:

- terms with factor $`\beta_\gamma(p_r)`$ and value $`F\bigl(\bigoplus_{i\in S\cup\{r\}}\mathbf{x}(i)\bigr)`$;
- terms with factor $`(1-\beta_\gamma(p_r))`$ and value $`F\bigl(\bigoplus_{i\in S}\mathbf{x}(i)\bigr)`$.

Reindex the first group by $`S' = S\cup\{r\}`$ (so $`S'\subseteq[r]`$ with $`r\in S'`$) and the second group by $`S'=S`$ (so $`S'\subseteq[r]`$ with $`r\notin S'`$). The coefficient in front of each $`F\bigl(\bigoplus_{i\in S'}\mathbf{x}(i)\bigr)`$ is then

```math
\prod_{i\in S'}\beta_\gamma(p_i)\prod_{i\notin S'}\bigl(1-\beta_\gamma(p_i)\bigr),
```

which yields exactly the desired mixture inequality over all $`S'\subseteq[r]`$. This completes the induction. ◻
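For a concrete end-to-end illustration of the mixture inequality, the Python sketch below takes $`\gamma=1`$ (so $`\beta_1(p)=p`$) and the nonnegative DR-submodular quadratic $`F(\mathbf{x})=\sum_{i,j}A_{ij}x_i(1-x_j)`$ with $`A`$ entrywise nonnegative. It also assumes that $`\oplus`$ denotes the coordinatewise probabilistic sum $`\mathbf{a}\oplus\mathbf{b}=\mathbf{a}+\mathbf{b}-\mathbf{a}\odot\mathbf{b}`$, a standard convention in this line of work; the paper's own definition of $`\oplus`$ is in its preliminaries and is not restated here.

```python
# Illustration of Lemma 27 for gamma = 1 on a DR-submodular quadratic.
from itertools import combinations
import random

random.seed(0)
n, r = 3, 3
A = [[random.random() for _ in range(n)] for _ in range(n)]  # entrywise >= 0

def F(x):  # nonnegative on [0,1]^n, Hessian entrywise nonpositive
    return sum(A[i][j] * x[i] * (1 - x[j]) for i in range(n) for j in range(n))

def oplus(a, b):  # assumed coordinatewise probabilistic sum
    return [ai + bi - ai * bi for ai, bi in zip(a, b)]

def oplus_all(vecs):
    out = [0.0] * n
    for v in vecs:
        out = oplus(out, v)
    return out

xs = [[random.random() for _ in range(n)] for _ in range(r)]
ps = [random.random() for _ in range(r)]

lhs = F(oplus_all([[p * xi for xi in x] for p, x in zip(ps, xs)]))
rhs = 0.0
for size in range(r + 1):
    for S in combinations(range(r), size):
        w = 1.0
        for j in range(r):
            w *= ps[j] if j in S else 1 - ps[j]
        rhs += w * F(oplus_all([xs[j] for j in S]))
assert lhs >= rhs - 1e-12
print(f"mixture inequality holds: {lhs:.4f} >= {rhs:.4f}")
```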

Corollary 28. Let $`F:[0,1]^N\to\mathbb{R}_{\ge 0}`$ be nonnegative and $`\gamma`$-weakly DR-submodular with $`0<\gamma\le 1`$. Fix integers $`r,h\ge 1`$. For each $`i\in[h]`$, let $`\mathbf{x}^{(i)}(1),\dots,\mathbf{x}^{(i)}(r)\in[0,1]^N`$ and $`\mathbf{b}(i)\in[0,1]^N`$ satisfy $`\sum_{i=1}^h \mathbf{b}(i)=\mathbf{1}`$ (coordinatewise), and let $`p_1,\dots,p_r\in[0,1]`$. Define

```math
\beta_\gamma(p)\ :=\ \frac{\gamma^2 p}{1-p+\gamma^2 p}\qquad(p\in[0,1]).
```

Then

```math
\begin{align*}
F\!\left(\sum_{i=1}^{h} \mathbf{b}(i)\ \odot\  \bigoplus_{j=1}^{r} \bigl(p_j\,\mathbf{x}^{(i)}(j)\bigr)\right)
\ \ge\
\sum_{S\subseteq[r]}
\left(\ \prod_{j\in S}\beta_\gamma(p_j)\prod_{j\notin S}\bigl(1-\beta_\gamma(p_j) \bigr)\right)
F\!\left( \sum_{i=1}^{h} \mathbf{b}(i)\ \odot\ \bigoplus_{j\in S} \mathbf{x}^{(i)}(j)\right).
\end{align*}
```

*Proof.* Define $`G:[0,1]^{Nh}\to\mathbb{R}_{\ge 0}`$ on $`h`$ blocks by

```math
G\big(\mathbf{c}(1),\ldots,\mathbf{c}(h)\big)\ :=\ F\!\left(\sum_{i=1}^{h} \mathbf{b}(i)\odot \mathbf{c}(i)\right).
```

Since $`F\ge 0`$ and the map $`(\mathbf{c}(1),\dots,\mathbf{c}(h))\mapsto \sum_{i=1}^h \mathbf{b}(i)\odot \mathbf{c}(i)`$ is coordinatewise nonnegative and linear with $`\sum_i \mathbf{b}(i)=\mathbf{1}`$, it follows from Lemma 3 (applied blockwise) that $`G`$ is also nonnegative and $`\gamma`$-weakly DR-submodular on $`[0,1]^{Nh}`$.

For each $`j\in[r]`$, define the block vector

```math
\mathbf{x}(j)\ :=\ \big(\mathbf{x}^{(1)}(j),\,\mathbf{x}^{(2)}(j),\,\ldots,\,\mathbf{x}^{(h)}(j)\big)\ \in\ [0,1]^{Nh}.
```

Then

```math
G\!\left(\bigoplus_{j=1}^{r} p_j\,\mathbf{x}(j)\right)
\ =\
F\!\left(\sum_{i=1}^{h} \mathbf{b}(i)\odot \bigoplus_{j=1}^{r} p_j\,\mathbf{x}^{(i)}(j)\right),
```

which is exactly the left-hand side of the desired inequality.

Apply Lemma 27 to $`G`$ with inputs $`\mathbf{x}(1),\dots,\mathbf{x}(r)`$ and coefficients $`p_1,\dots,p_r`$:

```math
G\!\left(\bigoplus_{j=1}^{r} p_j\,\mathbf{x}(j)\right)
 \;\ge\;
\sum_{S\subseteq[r]}
\ \Biggl(\ \prod_{j\in S}\beta_\gamma(p_j)\prod_{j\notin S}\bigl(1-\beta_\gamma(p_j)\bigr)\ \Biggr)
G\!\left(\bigoplus_{j\in S} \mathbf{x}(j)\right).
```

Finally, note that for every $`S\subseteq[r]`$,

```math
G\!\left(\bigoplus_{j\in S} \mathbf{x}(j)\right)
\;=\;
F\!\left(\sum_{i=1}^{h} \mathbf{b}(i)\odot \bigoplus_{j\in S} \mathbf{x}^{(i)}(j)\right),
```

so substituting this identity into the right-hand side above gives exactly the claimed inequality. ◻
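The only structural fact the lifting uses is that $`\oplus`$ acts coordinatewise, so it commutes with stacking the $`h`$ blocks into a single vector in $`[0,1]^{Nh}`$. A minimal Python sketch of this step, again under the probabilistic-sum reading of $`\oplus`$ (an assumption, as in the previous sketch); any coordinatewise choice behaves identically:

```python
# oplus on the lifted space [0,1]^{N h} agrees with oplus applied block by block.
import random

random.seed(1)
N, h, r = 2, 2, 3
x = [[[random.random() for _ in range(N)] for _ in range(r)] for _ in range(h)]
p = [random.random() for _ in range(r)]

def oplus_all(vecs, dim):  # fold the assumed coordinatewise probabilistic sum
    out = [0.0] * dim
    for v in vecs:
        out = [a + c - a * c for a, c in zip(out, v)]
    return out

# Stack the h per-block vectors of x(j), scaled by p_j, into [0,1]^{N h} ...
concat = [[p[j] * x[i][j][t] for i in range(h) for t in range(N)]
          for j in range(r)]
combined = oplus_all(concat, N * h)

# ... and compare with computing oplus separately inside each block.
per_block = [oplus_all([[p[j] * x[i][j][t] for t in range(N)] for j in range(r)], N)
             for i in range(h)]
assert all(abs(combined[i * N + t] - per_block[i][t]) < 1e-12
           for i in range(h) for t in range(N))
print("oplus commutes with block stacking")
```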

A Note of Gratitude

The copyright of this content belongs to the respective researchers. We deeply appreciate their hard work and contribution to the advancement of human civilization.
