Separable convex optimization problems with linear ascending constraints
📝 Abstract
Separable convex optimization problems with linear ascending inequality and equality constraints are addressed in this paper. Under an ordering condition on the slopes of the functions at the origin, an algorithm that determines the optimum point in a finite number of steps is described. The optimum value is shown to be monotone with respect to a partial order on the constraint parameters. Moreover, the optimum value is convex with respect to these parameters. Examples motivated by optimizations for communication systems are used to illustrate the algorithm.
📄 Content
- Problem description. Let g_m, m = 1, 2, ···, L, be functions that satisfy the following:
• g_m : (a_m, b_m) → R, where a_m ∈ [-∞, 0) and b_m ∈ (0, +∞], and therefore a_m < 0 < b_m;
• g_m is strictly convex on its domain (a_m, b_m);
• g_m is continuously differentiable on its domain (a_m, b_m);
• the slopes of the functions at 0, i.e., the values of the strictly increasing functions h_m := g_m′ at 0, are in increasing order with respect to the index m:

h_1(0) ≤ h_2(0) ≤ ··· ≤ h_L(0); (1.1)

• there is a point in the domain (a_m, b_m) where the slope of g_m equals h_1(0), the slope of the first function at 0. (Given (1.1) and that h_m is continuous and strictly increasing, this may be equivalently stated as h_1(0) > h_m(a_m+).)

In this paper, we minimize the separable objective function G : R^L → R given by
G(y) = Σ_{m=1}^{L} g_m(y_m), (1.2)

where y = (y_1, ···, y_L), subject to the following linear inequality and equality constraints:

0 ≤ y_m ≤ β_m, m = 1, 2, ···, L, (1.3)

Σ_{m=1}^{l} y_m ≥ Σ_{m=1}^{l} α_m, l = 1, 2, ···, L − 1, (1.4)

Σ_{m=1}^{L} y_m = Σ_{m=1}^{K} α_m, (1.5)

where K ≥ L and the parameters satisfy α_m ≥ 0 and β_m ∈ (0, b_m], with

Σ_{m=1}^{l} α_m ≤ Σ_{m=1}^{l} β_m, l = 1, ···, L − 1, and Σ_{m=1}^{K} α_m ≤ Σ_{m=1}^{L} β_m. (1.6)

We also assume

Σ_{m=L}^{K} α_m > 0. (1.7)
The inequalities in (1.3) impose nonnegativity and upper bound constraints. Note that if β_m = b_m, the upper bound constraint is irrelevant because the domain of g_m is (a_m, b_m). The inequalities in (1.4) impose a sequence of ascending constraints with increasing heights Σ_{m=1}^{l} α_m, indexed by l. Assumption (1.6) is necessary for the constraint set to be nonempty. Without (1.7), the constraint (1.4) at l = L − 1 combined with (1.5) forces y_L ≤ 0, so that y_L = 0 and the problem reduces to a similar one with fewer variables.
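To make the setup concrete, the sketch below instantiates a hypothetical problem of the form (1.2)–(1.5) with quadratic g_m and checks feasibility of a candidate vector. All data and names (c, alpha, beta, feasible) are illustrative assumptions, not taken from the paper.

```python
# Hypothetical instance: quadratics g_m(y) = c_m*y + y**2/2 on (a_m, b_m) = (-inf, inf),
# so h_m(y) = g_m'(y) = c_m + y, and the slope-ordering condition (1.1) reads
# c_1 <= c_2 <= ... <= c_L.

c     = [0.5, 1.0, 1.5]       # h_m(0), chosen increasing to satisfy (1.1)
beta  = [2.0, 2.0, 2.0]       # upper bounds in (1.3)
alpha = [1.0, 1.0, 0.5, 0.5]  # K = 4 >= L = 3 nonnegative constraint parameters

assert all(s <= t for s, t in zip(c, c[1:]))           # condition (1.1)

def feasible(y, alpha, beta, tol=1e-9):
    """Check constraints (1.3)-(1.5) for a candidate y (0-indexed lists)."""
    L = len(y)
    if any(not (-tol <= y[m] <= beta[m] + tol) for m in range(L)):
        return False                                   # (1.3) violated
    if any(sum(y[:l]) < sum(alpha[:l]) - tol for l in range(1, L)):
        return False                                   # (1.4) violated
    return abs(sum(y) - sum(alpha)) <= tol             # (1.5): height sum_{m=1}^K alpha_m

print(feasible([1.0, 1.0, 1.0], alpha, beta))  # True: all ascending sums meet their bounds
```

Any candidate whose early partial sums fall short of the ascending heights, e.g. y = (0.5, 1.0, 1.5), is rejected by the (1.4) check even though it meets the equality constraint.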
What we have described is a separable convex optimization problem with linear inequality and equality constraints. A rich duality theory exists for such problems. See Bertsekas [1,Sec. 5.1.6]. Here, we provide an algorithm that puts out a vector that minimizes (1.2) and terminates in at most L steps. Section 2 contains a description of the algorithm and Section 4 the proof of its optimality. While we may take K = L without loss of generality, allowing K ≥ L simplifies the exposition of our algorithm.
Problems of the above kind arise in the optimization of multi-terminal communication systems where the power utilized, measured in joules per second, is minimized, or the throughput achieved, measured in bits per second, is maximized, subject to meeting certain quality-of-service and feasibility constraints. See Viswanath & Anantharam [2] for details and Section 3 for specific examples. Viswanath & Anantharam [2] provide two algorithms for their power minimization and throughput maximization problems. Our work unifies their solutions and goes further to minimize any G whose components g_m satisfy the conditions stated above. Under a further condition on the functions, which will be stated in Section 2, we argue that our algorithm also solves the above optimization problem with an additional ordering constraint on the components of the optimal vector.
- The Main Results. We begin with some remarks on notation.
• For integers i, j satisfying i ≤ j, we let i, j denote the set {i, i + 1, ···, j}.
• Let E_m := h_m((a_m, b_m)), the range of h_m. Thus the condition h_1(0) > h_m(a_m+), m ∈ 1, L, in Section 1 may be written as

h_1(0) ∈ ∩_{m=1}^{L} E_m. (2.1)
• Let h_m^{-1} : E_m → (a_m, b_m) denote the inverse of the continuous and strictly increasing function h_m. The inverse is also continuous and strictly increasing on its domain.
• For convenience, define the functions H_m : E_m → (a_m, β_m] by

H_m(θ) := min{ h_m^{-1}(θ), β_m }. (2.2)

H_m is clearly increasing. Assignments to the variable y_m will be via evaluation of H_m, so that the upper bound constraint in (1.3) is automatically satisfied.
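As a minimal sketch, assuming the illustrative quadratic g_m(y) = c_m y + y²/2 (so h_m(y) = c_m + y and h_m^{-1}(θ) = θ − c_m; this instance is an assumption for illustration), the truncated inverse H_m is a one-liner:

```python
# Sketch of the truncated inverse H_m for an illustrative quadratic
# g_m(y) = c_m*y + y**2/2, for which h_m(y) = c_m + y and h_m^{-1}(theta) = theta - c_m.

def H(theta, c_m, beta_m):
    """H_m(theta) = min(h_m^{-1}(theta), beta_m): increasing, capped at beta_m."""
    return min(theta - c_m, beta_m)

# Below the cap, H tracks the inverse; above it, H saturates at beta_m,
# so an assignment y_m = H(theta, ...) can never violate the upper bound in (1.3).
print(H(0.7, 0.5, 1.0))   # inverse branch: 0.7 - 0.5, approximately 0.2
print(H(2.0, 0.5, 1.0))   # saturated branch: beta_m = 1.0
```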
• For 1 ≤ i ≤ l < L, let θ_i^l denote the least θ ≥ h_1(0) that satisfies the equation

Σ_{m=i}^{l} H_m(θ) = Σ_{m=i}^{l} α_m, (2.3)

provided the set of such θ is nonempty. Otherwise we say θ_i^l does not exist. The domain of Σ_{m=i}^{l} H_m is ∩_{m=i}^{l} E_m. The function Σ_{m=i}^{l} H_m is increasing, and moreover strictly increasing until all functions in the sum saturate, i.e., until H_m(θ) = β_m for every m ∈ i, l. So there is no solution to (2.3) when, for example,

Σ_{m=i}^{l} β_m < Σ_{m=i}^{l} α_m.
In general, if we can demonstrate the existence of θ̲ and θ̄, both in the set ∩_{m=i}^{l} E_m, that satisfy

Σ_{m=i}^{l} H_m(θ̲) ≤ Σ_{m=i}^{l} α_m ≤ Σ_{m=i}^{l} H_m(θ̄), (2.4)

then the existence of θ_i^l ∈ ∩_{m=i}^{l} E_m is assured, thanks to the continuity of Σ_{m=i}^{l} H_m. Indeed, we may always take θ̲ = h_1(0). This is because our assumptions (2.1) and (1.1), together with the increasing property of each h_m, imply H_m(h_1(0)) ≤ 0 ≤ α_m for each m, and therefore

Σ_{m=i}^{l} H_m(h_1(0)) ≤ Σ_{m=i}^{l} α_m. (2.5)

Thus, in order to show existence of θ_i^l, it is sufficient to identify a θ̄ that satisfies the right-side inequality of (2.4). We will have occasion to use this remark a few times in the proof of correctness of the algorithm.
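The bracketing remark above also suggests a numerical procedure: since the sum of the H_m is increasing and continuous, θ_i^l can be located by bisection between θ̲ = h_1(0) and a θ̄ at which every term saturates. A sketch under the same illustrative quadratic assumption g_m(y) = c_m y + y²/2, i.e. H_m(θ) = min(θ − c_m, β_m); the function name and data are hypothetical.

```python
def solve_theta(c, beta, alpha, i, l, iters=200):
    """Bisection sketch for theta_i^l of (2.3): the least theta >= h_1(0) with
    sum_{m=i}^{l} H_m(theta) = sum_{m=i}^{l} alpha_m, where
    H_m(theta) = min(theta - c[m], beta[m]) (illustrative quadratic g_m).
    Indices are 0-based and inclusive; returns None when no solution exists."""
    target = sum(alpha[i:l + 1])
    if sum(beta[i:l + 1]) < target:
        return None                       # even the fully saturated sum falls short
    lo = c[0]                             # theta_lo = h_1(0); the sum is <= target here
    hi = max(c[m] + beta[m] for m in range(i, l + 1))   # every term saturated at hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if sum(min(mid - c[m], beta[m]) for m in range(i, l + 1)) < target:
            lo = mid
        else:
            hi = mid
    return hi

# With c = [0.5, 1.0, 1.5], beta = [2, 2, 2], alpha = [1, 1, 1] and (i, l) = (0, 2),
# no term saturates and (2.3) reduces to 3*theta - 3 = 3, i.e. theta = 2.
```

Because the left side of (2.3) is increasing in θ, the bisection maintains the bracket of (2.4) and converges to the least solution.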
• Similarly, for 1 ≤ i ≤ j ≤ L, we let Θ_i^j denote the least θ ≥ h_1(0) that satisfies the equation

Σ_{m=i}^{j} H_m(θ) = Σ_{m=i}^{K} α_m, (2.6)

provided the set of such θ is nonempty. Otherwise we say Θ_i^j does not exist. The difference between (2.3) and (2.6) is the summation up to K on the right side of (2.6), and the consequent difference in the upper limits on the left and right sides of (2.6); hence the upper case Θ_i^j. The remarks made above on the existence of θ_i^l apply to Θ_i^j as well.
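For illustration, when no H_m saturates, the equation defining Θ_i^j becomes linear in θ under the hypothetical quadratic instance g_m(y) = c_m y + y²/2 (so H_m(θ) = min(θ − c_m, β_m)), and Θ_i^j admits a closed form; everything below is an assumed example, not the paper's algorithm.

```python
def Theta(c, beta, alpha, i, j, K):
    """Theta_i^j solving sum_{m=i}^{j} min(theta - c[m], beta[m]) = sum_{m=i+1}^{K} alpha_m,
    with 0-based inclusive i, j and K the (1-based) count of alpha parameters used.
    Valid only in the unsaturated regime, which the assert verifies a posteriori."""
    target = sum(alpha[i:K])                            # right side of the Theta equation
    theta = (target + sum(c[i:j + 1])) / (j - i + 1)    # linear solve, assuming no saturation
    assert all(theta - c[m] <= beta[m] for m in range(i, j + 1)), "saturated: use a 1-D root search"
    return theta
```

If the assertion fails, some term is capped at β_m and a one-dimensional root search over the increasing left side (e.g. bisection) is needed instead.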
• We now provide a description of the variables used in the algorithm for ease of reference.
- n: the iteration number.
- i_n and j_n: pointer locations of the first and the last variables, y_{i_n} and y_{j_n}, that are yet to be set.
- N: the last iteration number.