On the Complexity of Local Search for Weighted Standard Set Problems
In this paper, we study the complexity of computing locally optimal solutions for weighted versions of standard set problems such as SetCover, SetPacking, and many more. For our investigation, we use the framework of PLS, as defined in Johnson et al.…
Authors: Dominic Dumrauf, Tim Süß
Set Problems and Their Approximation In this paper, we study the complexity of computing locally optimal solutions for weighted standard set problems in the framework of PLS, as defined in Johnson et al., [13]. In weighted set problems such as SETPACKING or SETCOVER, the input consists of a set system along with a weight function on the set system. The task is to compute a solution maximizing or minimizing some objective function on the set system while obeying certain constraints. Weighted set problems are fundamental combinatorial optimization problems with a wide range of applications spanning from crew scheduling in transportation networks and machine scheduling to facility location problems. Since these problems are of fundamental importance on the one hand but computationally intractable on the other, [8], the approximation of weighted standard set problems has been extensively studied in the literature. Numerous heuristics have been applied to or developed for these problems, spanning from greedy algorithms and linear programming to local search.
Local search is a standard approach to approximate solutions of hard combinatorial optimization problems. Starting from an arbitrary feasible solution, a sequence of feasible solutions is iteratively generated, such that each solution is contained in the predefined neighborhood of its predecessor solution and strictly improves a given cost function.
If no improvement within the neighborhood of a solution is possible, a local optimum (or locally optimal solution) is found. In practice, local search algorithms often require only a few steps to compute a solution. However, the running time is often pseudo-polynomial and even exponential in the worst case.
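The improvement scheme described above can be sketched generically; `neighbors` and `cost` are placeholder callables for illustration, not part of the paper:

```python
def local_search(initial, neighbors, cost, maximize=True):
    """Repeatedly move to a strictly improving neighbor until none exists.

    `neighbors(s)` yields the candidate solutions in the neighborhood of s,
    and `cost(s)` returns its non-negative objective value.
    """
    sign = 1 if maximize else -1
    current = initial
    while True:
        better = [n for n in neighbors(current)
                  if sign * cost(n) > sign * cost(current)]
        if not better:
            return current  # local optimum: no strictly improving neighbor
        current = better[0]  # take any strictly improving neighbor
```

On instances with exponentially large weights, this loop may take exponentially many improvement steps, which is exactly the worst-case phenomenon discussed above.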
Polynomial Time Local Search Johnson, Papadimitriou, and Yannakakis, [13], introduced the class PLS (polynomial-time local search) in 1988 to investigate the complexity of local search algorithms. Essentially, a problem in PLS is given by some minimization or maximization problem over instances with finite sets of feasible solutions together with a non-negative cost function. A neighborhood structure is superimposed over the set of feasible solutions, with the property that a local improvement in the neighborhood can be found in polynomial time. The objective is to find a locally optimal solution. The notion of a PLS-reduction was defined in Johnson et al., [13], to establish relationships between PLS-problems and to further classify them. Not many problems are known to be PLS-complete, since reductions are mostly technically involved, which seems to be in large parts due to the transformation of the neighborhood under the reduction. In the recent past, game theoretic approaches re-raised the focus on the class PLS since in many games the computation of a Nash Equilibrium can be modeled as a local search problem, [7]. The knowledge about PLS is still very limited and not at all comparable with our rich knowledge about N P.
In this paper, we show that for most weighted standard set problems, computing locally optimal solutions is PLS-complete, even for very small natural neighborhoods. This implies that computing local optima for these problems via successive improvements may not yield a sufficient performance improvement over computing globally optimal solutions. Furthermore, we believe that most problems investigated in this paper have the potential to serve as candidates for the base of future reductions.
In this section, we describe the notation, complexity classes, and problems considered throughout this paper. The fundamental definitions of a PLS-problem and the class PLS were introduced by Johnson, Papadimitriou, and Yannakakis, [13]. For all k ∈ N, denote [k] := {1, . . . , k} and [k]_0 := [k] ∪ {0}. Given a k-tuple T, let P_i(T) denote the projection to the i-th coordinate for some i ∈ [k]. For some set S, denote by 2^S the power set of S. PLS, Reductions, and Completeness, [13] A PLS-problem L = (D_L, F_L, c_L, N_L, INIT_L, COST_L, IMPROVE_L) is characterized by seven parameters. The set of instances is given by D_L ⊆ {0, 1}*. Every instance I ∈ D_L has a set of feasible solutions F_L(I), where feasible solutions s ∈ F_L(I) have length bounded by a polynomial in the length of I. Every feasible solution s ∈ F_L(I) has a non-negative integer cost c_L(s, I) and a neighborhood N_L(s, I) ⊆ F_L(I). INIT_L(I), COST_L(s, I), and IMPROVE_L(s, I) are polynomial time algorithms. Algorithm INIT_L(I), given an instance I ∈ D_L, computes an initial feasible solution s ∈ F_L(I). Algorithm COST_L(s, I), given a solution s ∈ F_L(I) and an instance I ∈ D_L, computes the cost of the solution. Algorithm IMPROVE_L(s, I), given a solution s ∈ F_L(I) and an instance I ∈ D_L, finds a better solution in N_L(s, I) or returns that there is no better one.
A solution s ∈ F_L(I) is locally optimal if for every neighboring solution s′ ∈ N_L(s, I) it holds that c_L(s′, I) ≤ c_L(s, I) in case L is a maximization PLS-problem and c_L(s′, I) ≥ c_L(s, I) in case L is a minimization PLS-problem. A search problem R is given by a relation over {0, 1}* × {0, 1}*. An algorithm solves R when, given I ∈ {0, 1}*, it computes an s ∈ {0, 1}* such that (I, s) ∈ R, or it correctly outputs that such an s does not exist. Given a PLS-problem L, let the according search problem be R_L := {(I, s) | I ∈ D_L and s ∈ F_L(I) is locally optimal}.
We write limitations to a problem as a prefix and the size of the neighborhood as a suffix. For all PLS-problems L studied in this paper, the algorithms INIT L , COST L , and IMPROVE L are straightforward and polynomial-time computable.
We next describe the PLS-problems we study in this paper. All problems we present are local search versions of their respective decision problems. In the following, let B denote some finite set and let C := {C_1, . . . , C_m}. Unless otherwise mentioned, we use the k-differ-neighborhood, where two solutions are mutual neighbors if they differ in at most k elements which describe a solution. Except for SETCOVER, all problems are maximization problems.
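One possible formalization of the k-differ-neighborhood, assuming a solution is encoded as a fixed-length tuple of describing elements (the encoding is our assumption, not the paper's):

```python
def differ_at_most_k(s, t, k):
    """Return True if the two solutions s and t, encoded as equal-length
    tuples of describing elements, differ in at most k coordinates,
    i.e. are mutual neighbors in the k-differ-neighborhood."""
    assert len(s) == len(t), "solutions must share one encoding length"
    return sum(a != b for a, b in zip(s, t)) <= k
```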
Definition 1 (W3DM-(p,q), [6]). An instance I ∈ D_W3DM of WEIGHTED-3-DIMENSIONALMATCHING (in short W3DM) is a pair (n, w), where n ∈ N and w is a function w : [n]^3 → R_≥0. The components of triples are identified with boys, girls, and homes. F_W3DM(I) are all matchings of boys, girls, and homes, i.e. all S ⊆ [n]^3 in which every boy, girl, and home appears in at most one triple. The neighborhood N_W3DM-(p,q)(S, I) contains all feasible solutions where at most p triples are replaced and up to q boys or girls move to new homes.
Definition 2 (X3C-(k)). An instance of EXACTCOVERBY3SETS (in short X3C) is a pair (C, w), where C is a collection of 3-element sets over a finite set B, with |B| = 3q for some q ∈ N, and w : C → N maps sets to positive integer weights.
The hardness results we present in this paper rely on known hardness results for the problems given below. For all these problems, we use the neighborhood where the value of one variable is changed; the task is to compute an assignment maximizing the sum of the weights.
Definition 11 ((p,q,r)-MCA, [5]). An instance I ∈ D_MCA of MAXCONSTRAINTASSIGNMENT (in short MCA) is a set C of weighted constraints of length at most p over a set X of variables. Every variable appears in at most q constraints and takes values from [r] with r ∈ N.
Definition 12 (POSNAE, [17]). An instance I ∈ D_POSNAE of POSITIVENOTALLEQUAL (in short POSNAE) is an instance of (2, ∗, 2)-MCA. Constraints have length two and return the weight w_{C_i} of constraint C_i ∈ C if the two literals in the clause do not have identical assignments; otherwise they return 0.
Definition 13 ((h)-CNFSAT, [14]). An instance I ∈ D CNFSAT of CNFSAT is an instance of (h, * , 2)-MCA. Constraints are limited to disjunctions of literals over binary variables x ∈ X . We drop the prefix if we refer to instances where clauses can have arbitrary length.
In this subsection, we mainly present related work about PLS and PLS-completeness. The approximation of set problems has been intensively studied in the literature, [10,11,12,16]. Survey articles about local search algorithms can be found in several books, [1,2]. Local search for set problems has been applied in numerous papers, [4,9]. For a survey on the quality of solutions obtained via local search, not only for set problems, confer [3]. PLS was defined in Johnson et al., [13], and the fundamental definitions and results are presented in [13,17]. Krentel, [14], shows that (h)-CNFSAT is PLS-complete for some constant h ∈ N. Schäffer and Yannakakis, [17],
show that POSNAE, among numerous other local search problems, is PLS-complete. The problem (p, q, r)-MCA is known to be PLS-complete for triples (3,2,3), (2,3,8), and (6,2,2), [5,14]. Orlin, Punnen, and Schulz present an FPTAS for computing approximate local optima for every linear combinatorial optimization problem in PLS, [15]. The book of Aarts et al., [1], contains a list of PLS-complete problems known so far.
In this paper, we show that for most of the weighted standard set problems given in Subsection 2.1, computing a locally optimal solution is PLS-complete for the 1-differ-neighborhood. This means that the problems are already hard when one element describing the solution is allowed to be added, deleted, or exchanged for another element which is not part of the solution. As our main result, we prove the following two theorems:

Neighborhoods, Weights, and Hardness. The hardness of a PLS-problem crucially depends on both the structure of the neighborhood and the involved weights. On the one hand, if the neighborhood structure limits the options for improvements in every step such that this can be exploited by polynomial time algorithms, then the problems become easy, regardless of the weights. This is the case in SP-(1) and SC-(1), where the neighborhood structure can be exploited by a greedy algorithm. Interestingly, for all other problems we investigate, the neighborhood structure does not interfere with weights in terms of hardness. For most of the problems, this is the case even for the smallest possible neighborhood of size 1. On the other hand, if all weights are polynomially bounded, then locally optimal solutions can be computed via successive improvements in polynomial time. All PLS-complete generalized satisfiability problems we reduce from were proven to be PLS-complete via tight reductions, and the involved weights are of exponential size. We incorporate these weights in our reductions, preserving their overall structure. Usually, we introduce additional weights which are not part of the input problem. They belong to auxiliary gadgets that are specific to the reduction. The weights involved are either of size one or such that a single weight exceeds the sum of all weights in the original problem.
The General Technique of Our Reductions. As with most reductions in PLS, [5,17], our reductions for hardness results consist of two parts: In one part, we encode the input problem I in the reduced instance Φ(I) in a rather direct manner, while preserving the structure of the original weights. In the other part, which is specific to the reduction and represents a large part of our contribution, we introduce auxiliary gadgets that enforce a particular structure in local optima. Eventually, these gadgets ensure that locally optimal solutions in Φ(I) indeed correspond to locally optimal solutions in I. Our proofs also consist of two parts:
1. First, we show that all feasible solutions which are locally optimal for Φ(I) use the gadgets as intended, thereby uncovering the structure of locally optimal solutions. Depending on the reduction, we call these solutions standard solutions or say they are consistent with respect to some property. 2. Second, we show that all local optima for Φ(I) correspond to local optima for I.
Step 1 then allows us to concentrate on the set of all consistent or standard solutions.
We want to stress that reducing from (3, 2, r)-MCA is crucial for us to show tight bounds for SETPACKING and SETCOVER. Furthermore, we believe that reducing from very restricted but PLS-complete versions of the MAXCONSTRAINTASSIGNMENT-problem might prove useful for establishing that further PLS-problems with a small neighborhood are PLS-complete.
To the best of our knowledge, these are among the very few PLS results for local search on weighted standard set problems, which have been intensively studied in the literature. Our analysis also unveils that the hardness of these problems stems from the combination of a numerical problem with an underlying combinatorial problem.
In this section, we investigate the complexity of computing locally optimal solutions for the weighted standard set problems presented in Section 2.1.
Preliminaries Denote by (p, q, r)-MINCA the minimization version of (p, q, r)-MCA. Here, results about (p, q, r)-MCA carry over to (p, q, r)-MINCA. In the following, let integer W ∈ N be larger than the sum of all weights in an instance I of problem POSNAE, CNFSAT, (p, q, r)-MCA, or (p, q, r)-MINCA. In detail, for a given instance of POSNAE or CNFSAT, let integer W > Σ_{C_i ∈ C} w_{C_i}. For a given instance of (p, q, r)-MCA or (p, q, r)-MINCA, let integer W > Σ_{C_i ∈ C} max_a w_{C_i}(a), where the maximum is taken over all assignments a to the variables of C_i.
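The smallest such W can be computed directly; a minimal sketch, assuming each constraint contributes a single weight (for MCA-style instances, pass the per-constraint maxima over all assignments):

```python
def threshold_weight(constraint_weights):
    """Return the smallest integer W strictly larger than the sum of
    the given per-constraint weights."""
    return sum(constraint_weights) + 1
```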
In this subsection, we show that W3DM-(p, q) is PLS-complete for all p ≥ 6 and q ≥ 12. Since instances of X3C-(k) are instances of W3DM-(k), obtained by defining triples as 3-element sets, our reduction also applies to X3C-(k) with the same argument. This eventually shows that X3C-(k) is PLS-complete for all k ≥ 6. We present the reduction function Φ and the solution mapping Ψ, which are both slight modifications of a reduction proving that W3DM-(9, 15) is PLS-complete, presented in [6]. We also use the notation presented therein.
The Reduction In a nutshell, the main idea is to mimic assignments of variables in a constraint with triples possessing the weight of the constraint for the given assignment. An additional gadget ensures the consistency of all variable assignments. In more detail, given an instance I ∈ D_(3,2,r)-MCA, we construct a reduced instance Φ(I) ∈ D_W3DM-(6,12), consisting of a positive integer N ∈ N and a weight function w : [N]^3 → N that maps triples to positive integer weights. From [5] it follows that the subclass of instances of (3, 2, r)-MCA in which every clause has length three and the set of variables is tri-colored, such that no clause contains two variables with the same color and all sets of variables of a certain color have the same cardinality, is PLS-complete. Thus, without loss of generality, we assume that in I every constraint has length three and every variable appears twice and is colored blue, red, or white. The coloring of the variables is such that no clause contains two variables with the same color and each subset of variables of a certain color has cardinality |X|/3. Let σ be an ordering of C.
Fig. 1: Gadgets assign(i, x) for a blue, a red, and a white variable with two large triples (solid triangles) and two medium triples (dashed triangles).
Forcing a consistent assignment We define three sets of boys, girls, and homes, each of cardinality N. For every blue variable x ∈ X and i ∈ [r], we define gadgets assign(i, x) consisting of two large triples (b^x_1(0), g^x_1(i), h^x_1(i)) and (b^x_2(0), g^x_2(i), h^x_2(i)) of weight 7W and two medium triples (b^x_1(i), g^x_1(i), h^x_2(i)) and (b^x_2(i), g^x_2(i), h^x_1(i)) of weight 2W. We depicted a gadget assign(i, x) in Figure 1a for some blue variable x ∈ X and i ∈ [r]. For every red variable y ∈ X and j ∈ [r], we define gadgets assign(j, y) consisting of two large triples (b^y_1(j), g^y_1(0), h^y_1(j)) and (b^y_2(j), g^y_2(0), h^y_2(j)) of weight 7W and two medium triples (b^y_1(j), g^y_1(j), h^y_2(j)) and (b^y_2(j), g^y_2(j), h^y_1(j)) of weight 2W. We again depicted a gadget assign(j, y) in Figure 1b for some red variable y ∈ X and j ∈ [r]. For every white variable z ∈ X and ℓ ∈ [r], we define gadgets assign(ℓ, z) consisting of two large triples (b^z_1(ℓ), g^z_1(ℓ), h^z_1(0)) and (b^z_2(ℓ), g^z_2(ℓ), h^z_2(0)) of weight 7W and two medium triples (b^z_1(ℓ), g^z_2(ℓ), h^z_1(ℓ)) and (b^z_2(ℓ), g^z_1(ℓ), h^z_2(ℓ)) of weight 2W. We again depicted a gadget assign(ℓ, z) in Figure 1c for some white variable z ∈ X and ℓ ∈ [r].
Evaluating the assignment Without loss of generality, let x ∈ X be a blue variable, y ∈ X be a red variable, and z ∈ X be a white variable. For every constraint C_i(x, y, z) ∈ C, where, with respect to σ, variable x appears for the s-th time, variable y appears for the t-th time, and z appears for the u-th time, with s, t, u ∈ [2], we define small triples (b^x_s(i), g^y_t(j), h^z_u(ℓ)) of weight w_{C_i}(i, j, ℓ) for every i, j, ℓ ∈ [r]. All other triples have weight zero. This completes the description of the reduction function Φ(I).
Standard assignment Extending the definition from [6], we define a standard assignment as a feasible solution S ∈ F_W3DM-(6,12)(Φ(I)), consisting of an assignment part and an evaluation part, of the following form: Considering the assignment part, for every blue variable x ∈ X there is some i ∈ [r], such that for all s ∈ [2], the large triples (b^x_s(0), g^x_s(i), h^x_s(i)) from gadget assign(i, x) are in S and, for every j ∈ [r] with j ≠ i, both medium triples from gadget assign(j, x) are in S. Analogously, large and medium triples are present for red and white variables. Considering the evaluation part, let x, y, z ∈ X and i, j, ℓ ∈ [r], such that the large triples for x, y, and z in S are from gadgets assign(i, x), assign(j, y), and assign(ℓ, z). For every constraint C_p(x, y, z) ∈ C, where x occurs for the s-th, y occurs for the t-th, and z occurs for the u-th time, with respect to σ and s, t, u ∈ [2], if x is a blue variable, y is a red variable, and z is a white variable, then the triple (b^x_s(i), g^y_t(j), h^z_u(ℓ)) ∈ S; analogously for all other colorings of the involved variables.

Lemma 1. Every locally optimal solution S ∈ F_W3DM-(6,12)(Φ(I)) is a standard assignment.

Proof. We present the proof for the sake of completeness, as it is similar to the proof of Lemma 1 presented in [6]. Let S ∈ F_W3DM-(6,12)(Φ(I)) be a locally optimal solution. Without loss of generality, let x ∈ X be a blue variable.
Roadmap With variable x fixed, the proof splits into three parts:
1. We first show that there are two large triples (b^x_1(0), g^x_1(i), h^x_1(i)) and (b^x_2(0), g^x_2(j), h^x_2(j)) in S for some i, j ∈ [r]. For every gadget without a large triple, there are two medium triples in S.
2. Second, we prove that the two large triples are on the same gadget. 3. Finally, we show that the small triples in S are chosen consistently with the placement of the large triples.
(1): Two Large Triples and Two Medium Triples. Assume that w.l.o.g. no large triple (b^x_1(0), g^x_1(i), h^x_1(i)) is in S for any i ∈ [r]. On some gadget assign(i, x), the large triple (b^x_1(0), g^x_1(i), h^x_1(i)) of weight 7W can be built. The necessary elements b^x_1(0), g^x_1(i), and h^x_1(i) are in at most three triples, each of weight at most 2W. Thus, we substitute a total of three triples to obtain a strictly better solution. Considering the medium triples, assume that there exists some j ∈ [r] such that no large triple and not both medium triples from gadget assign(j, x) are in S. Without loss of generality, let (b^x_1(j), g^x_1(j), h^x_2(j)) ∉ S. On gadget assign(j, x), the medium triple (b^x_1(j), g^x_1(j), h^x_2(j)) of weight 2W is built. The necessary elements are in at most three triples of total weight at most W. Thus, we again substitute a total of three triples to obtain a strictly better solution.
Fig. 2: Illustration of the construction of a better solution in (2) from the proof of Lemma 1.
(2): Two Large Triples On A Single Gadget. Assume that the large triples are placed on two different gadgets assign(i, x) and assign(j, x) for some i, j ∈ [r] with i ≠ j. In detail, let large triples (b^x_1(0), g^x_1(j), h^x_1(j)) and (b^x_2(0), g^x_2(i), h^x_2(i)) be in S.
We have depicted this situation in the upper part of Figure 2. Note that by construction, there are no medium triples from gadgets assign(i, x) or assign(j, x) in S. We construct a better solution by removing the large triple (b^x_1(0), g^x_1(j), h^x_1(j)) from S. Additionally, on gadget assign(i, x), the large triple (b^x_1(0), g^x_1(i), h^x_1(i)) with weight 7W is built. On gadget assign(j, x), the two new medium triples (b^x_1(j), g^x_1(j), h^x_2(j)) and (b^x_2(j), g^x_2(j), h^x_1(j)), each of weight 2W, are built. We have depicted the better neighboring solution in the lower part of Figure 2. Elements b^x_1(0), g^x_1(j), and h^x_1(j) are in the given triples. The remaining elements g^x_1(i), h^x_1(i), b^x_1(j), h^x_2(j), b^x_2(j), and g^x_2(j) are in at most six triples. Our construction yields an additional two medium triples, each of weight 2W, while all decomposed triples which are not shifted to a different gadget have weight at most W. Thus, we replace a total of at most six triples to obtain a solution of strictly higher cost.
(3): Small Weights. The above two cases show that the assignment part of S is that of a standard assignment. In detail, for every variable x ∈ X, there exists some i ∈ [r] such that the two large triples are from the same gadget assign(i, x). For every blue variable x, this implies that elements b^x_1(i) and b^x_2(i) are not in any large or medium triple; analogously for every red and white variable. By construction, for every C_i ∈ C, only one small triple with strictly positive weight can be uniquely chosen. Without loss of generality, let x ∈ X be a blue variable with s ∈ [2] and i ∈ [r] such that boy b^x_s(i) is not in any large or medium triple. Without loss of generality, let y ∈ X be a red variable with t ∈ [2] and j ∈ [r] such that girl g^y_t(j) is not in any large or medium triple. Without loss of generality, let z ∈ X be a white variable with u ∈ [2] and ℓ ∈ [r] such that home h^z_u(ℓ) is not in any large or medium triple. Let C_p(x, y, z) ∈ C be such that x appears for the s-th time, y appears for the t-th time, and z appears for the u-th time with respect to the given ordering σ. Assume that S deviates in the evaluation part. Then elements b^x_s(i), g^y_t(j), and h^z_u(ℓ) are in at most three triples, each of weight zero. By building the small triple (b^x_s(i), g^y_t(j), h^z_u(ℓ)) of weight w_{C_p}(i, j, ℓ), we replace at most three triples to obtain a neighboring solution with strictly improved cost.

Lemma 2. (3, 2, r)-MCA ≤_pls W3DM-(p, q) for all p ≥ 6 and q ≥ 12.
Proof. Assume there exists a feasible solution S ∈ F_W3DM-(6,12)(Φ(I)) which is locally optimal for Φ(I), but is not locally optimal for I. By Lemma 1, S is a standard assignment. This implies that Ψ(I, S) is a legal assignment to all variables x ∈ X. Since Ψ(I, S) is not locally optimal for I, there exists a (w.l.o.g.) white variable z ∈ X from instance I ∈ D_(3,2,r)-MCA which can be set from value i ∈ [r] to a value j ∈ [r] such that the objective function strictly increases by some δ > 0. Let variable z appear in constraints C_p, C_q ∈ C. The neighboring solution of S, where the two large triples are on gadget assign(j, z), all medium triples are on gadgets assign(ℓ, z) for all ℓ ∈ [r] with ℓ ≠ j, and all small triples are chosen according to the new assignment of value j to z, improves the cost of S by δ, by construction. This exchange involves the six triples (∗, ∗, h^z_1(i)), (∗, ∗, h^z_2(i)), (∗, ∗, h^z_1(0)), (∗, ∗, h^z_2(0)), (∗, ∗, h^z_1(j)), and (∗, ∗, h^z_2(j)). The involved homes are h^z_1(i) and h^z_2(i) from gadget assign(i, z), homes h^z_1(j) and h^z_2(j) from gadget assign(j, z), and homes h^z_1(0) and h^z_2(0), which are in every gadget assign(∗, z). On gadget assign(i, z), girl g^z_2(i) and boy b^z_1(i) move to home h^z_1(i), and girl g^z_1(i) and boy b^z_2(i) move to home h^z_2(i). On gadget assign(j, z), girl g^z_1(j) and boy b^z_1(j) move to home h^z_1(0), and girl g^z_2(j) and boy b^z_2(j) move to home h^z_2(0). All boys and girls in small triples move from homes h^z_2(i) and h^z_1(i) to the respective homes h^z_2(j) and h^z_1(j). Thus, 12 boys or girls move to different homes. For all other colors of variables which switch assignment, at most 10 boys or girls move to new homes.
In this subsection, we prove that SP-(k) is PLS-complete for all k ≥ 2 and polynomial-time solvable for k = 1. Given an instance I ∈ D_(3,2,r)-MCA, we construct a reduced instance Φ(I) = (M, w, m) ∈ D_SP-(2), consisting of a collection M of sets over a finite set B, a weight function w : M → N that maps sets in collection M to positive integer weights, and a positive integer m ≤ |M|. W.l.o.g., we assume that in instance I ∈ D_(3,2,r)-MCA, every constraint C_i ∈ C has length 3 and the weight of every non-zero assignment is strictly larger than 1. Furthermore, we assume that every variable x ∈ X appears in 2 constraints and takes values from [r]. Additionally, we may assume that the variables are ordered by appearance.
The Reduction In a nutshell, the main idea is to define sets representing assignments of variables to values in constraints such that inconsistent assignments intersect. The weight of a set corresponds to the weight of the constraint for the variable assignment the set represents. Additional intersection-free sets of weight 1 offer a relatively small incentive in situations where sets intersect.
In more detail, we create a reduced instance of SP-(2) with m := |C|. Sets in collection M are defined on elements from the finite set B.
Collection M consists of the following sets: For all i ∈ [m], we introduce sets
for the first time, u_1, . . . , u_{a-1}, u_{a+1}, . . . , u_r otherwise; analogously for v_b and w_c. We call an element x_j, for some variable x ∈ X and assignment j ∈ [r], enclosed in a set from M due to the first appearance of x a direct representative of x. We say that a family of sets C^{∗,∗,∗}_∗

Proof. Assume there exists a feasible solution S ∈ F_SP-(2)(Φ(I)) which is locally optimal for Φ(I), but is not locally optimal for I. By Lemma 3, S is set-consistent. This implies that Ψ(I, S) is a legal assignment of values to variables x ∈ X. Since Ψ(I, S) is not locally optimal for I, there exists a variable x ∈ X from instance I ∈ D_(3,2,r)-MCA which can be set from value i ∈ [r] to some value j ∈ [r] such that the objective function strictly increases by some z > 0. Let variable x appear in constraints C_p, C_q ∈ C. Exchanging the sets C^{i,∗,∗}_p and C^{i,∗,∗}_q for the sets C^{j,∗,∗}_p and C^{j,∗,∗}_q in S yields a feasible and set-consistent solution, and by construction this strictly increases the cost of S by z. A contradiction.
Despite the negative result for SP-(k) for all k ≥ 2, it is possible to compute a locally optimal solution for all instances I ∈ D_SP-(1) in polynomial time.
Lemma 5. SP-(1) is polynomial-time solvable.
Proof. Given an instance I ∈ D_SP-(1), we use the following algorithm GREEDYPACKING: Starting from the feasible solution S := ∅, process all sets in M by weight in descending order and add the heaviest yet unprocessed set to S if it is disjoint from all sets S_i ∈ S. In order to prove that a solution S ∈ F_SP-(1)(I) computed by GREEDYPACKING is locally optimal, assume that GREEDYPACKING terminated and S is not locally optimal. This implies that there either exists a set S_i ∈ M that can be added, a set S_j ∈ S that can be deleted, or a set S_j ∈ S that can be exchanged for another set S′ ∈ M with S′ ∉ S. Assume there exists a set S_i ∈ M with S_i ∉ S which can be added to S such that the cost strictly improves by some z ∈ N. This implies that S_i is disjoint from all sets in S, and thus GREEDYPACKING would have included set S_i. A contradiction. Assume there exists a set S_j ∈ S which can be deleted from S such that the cost strictly improves by some z ∈ N. This implies that S_j intersects with some set from S, and GREEDYPACKING would not have included S_j. A contradiction. Assume there exists a set S_j ∈ S which can be exchanged for some set S′ ∈ M with S′ ∉ S such that the cost strictly improves by some z ∈ N. This implies that S′ is disjoint from all sets in S \ {S_j} and has a larger weight than S_j. Thus, GREEDYPACKING would have included S′ instead of S_j. A contradiction.
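The GREEDYPACKING procedure from the proof can be sketched as follows; representing the collection as a list of (weight, set) pairs is our assumption for illustration:

```python
def greedy_packing(collection):
    """GREEDYPACKING sketch: process sets by weight in descending order
    and add a set whenever it is disjoint from everything chosen so far.
    `collection` is a list of (weight, set_of_elements) pairs."""
    solution = []
    covered = set()  # union of all elements already packed
    for weight, members in sorted(collection, key=lambda p: p[0], reverse=True):
        if covered.isdisjoint(members):
            solution.append((weight, members))
            covered |= members
    return solution
```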
In this subsection, we prove that SSP-(k) is PLS-complete for all k ≥ 1. Given an instance I ∈ D_POSNAE, we construct a reduced instance Φ(I) = (M, w) ∈ D_SSP-(1) consisting of a collection M of sets over a finite set B and a weight function w : M → N that maps sets from collection M to positive integer weights.
The Reduction Since SETSPLITTING is similar to HYPERGRAPH-2-COLORABILITY, we use a direct reduction: From instance I, we define the reduced instance of SSP-(1).

Proof. Assume there exists a feasible solution S ∈ F_SSP-(1)(Φ(I)) which is locally optimal for Φ(I), but is not locally optimal for I. This implies that there exists a variable x ∈ X in I which can be flipped such that clauses C_i, . . . , C_j ∈ C now have literals with non-identical assignments, clauses C_p, . . . , C_q ∈ C now have literals with identical assignments, and the cost of Ψ(I, S) strictly increases by z > 0. By construction, this implies that in Φ(I), element x ∈ B can switch partition such that sets C^SSp_i, . . . , C^SSp_j ∈ M are no longer entirely contained in either S_1 or S_2 and sets C^SSp_p, . . . , C^SSp_q ∈ M are entirely contained in either S_1 or S_2. By definition of w, this strictly increases the cost of S by z. Thus, S is not locally optimal. A contradiction.
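The SETSPLITTING objective used in this argument (a set contributes its weight exactly when it is split between the two partition classes) can be sketched as follows; the representation of a partition by one of its two classes is our assumption:

```python
def splitting_cost(partition_one, collection):
    """Sum the weights of all sets that are split, i.e. not entirely
    contained in `partition_one` or in its complement.
    `collection` is a list of (weight, set_of_elements) pairs."""
    total = 0
    for weight, members in collection:
        inside = members & partition_one
        if inside and inside != members:  # split across both sides
            total += weight
    return total
```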
In this subsection, we prove that SC-(k) is PLS-complete for all k ≥ 2 and polynomial-time solvable for k = 1.
The Reduction In a nutshell, the main idea is to reuse the encoding of variable assignments and constraints presented in Subsection 3.2 such that for every consistent assignment of variables to values, there exists a covering where no element is covered by two sets of the solution. Shifting the weights by a large constant incentivizes dropping sets which double cover elements.
In more detail, given an instance I ∈ D_(3,2,r)-MINCA, we construct the reduced instance Φ(I).

Proof. Assume there exists a feasible solution S ∈ F_SC-(2)(Φ(I)) which is locally optimal for Φ(I), but is not locally optimal for I. By Lemma 7, S is a set-consistent assignment. This implies that Ψ(I, S) is a legal assignment of values to variables x ∈ X. Since Ψ(I, S) is not locally optimal for I, there exists a variable x ∈ X from instance I ∈ D_(3,2,r)-MINCA which can be set from value i ∈ [r] to a value j ∈ [r] such that the objective function strictly decreases by some z > 0. Let variable x appear in constraints C_p, C_q ∈ C. Exchanging sets C^{i,∗,∗}_p and C^{i,∗,∗}_q for sets C^{j,∗,∗}_p and C^{j,∗,∗}_q in S yields a feasible and set-consistent solution, and by construction this strictly decreases the cost of S by z. A contradiction.
Despite the negative result for SC-(k) for all k ≥ 2, it is again possible to compute a locally optimal solution for all instances I ∈ D_SC-(1) in polynomial time.
Lemma 9. SC-(1) is polynomial-time solvable.
Proof. Given an instance I ∈ D_SC-(1), we use the following algorithm GREEDYCOVER: Starting from the initial feasible solution S := M, process all sets in S by weight in descending order and remove the heaviest yet unprocessed set if S is still a legal cover of B after the removal. In order to prove that a solution S ∈ F_SC-(1)(I) computed by GREEDYCOVER is locally optimal, assume that GREEDYCOVER terminated and S is not locally optimal. This implies that there exists a set S_i ∈ S that can be deleted or exchanged for another set S_j ∈ M with S_j ∉ S such that the cost strictly improves by some z > 0. Assume there exists a set S_i ∈ S which can be removed. This implies that S is still a legal cover of B after the removal of S_i, and thus GREEDYCOVER would have removed set S_i as well. A contradiction. Assume there exists a set S_i ∈ S which can be exchanged for a set S_j ∈ M with S_j ∉ S. This implies that the set S_j of smaller weight covers all elements in B \ ⋃_{S′ ∈ S\{S_i}} S′, i.e. all elements which would be uncovered if S_i were removed from S. Since S_i has the larger weight and S would still be a legal cover after its removal, GREEDYCOVER would have deleted S_i from S. A contradiction.
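The GREEDYCOVER procedure from the proof can be sketched analogously; again, the (weight, set) pair representation is an assumption for illustration:

```python
def greedy_cover(universe, collection):
    """GREEDYCOVER sketch: start from the full collection and drop sets
    by weight in descending order whenever the remaining sets still
    cover the universe. `collection` is a list of (weight, set) pairs."""
    solution = list(collection)
    for entry in sorted(collection, key=lambda p: p[0], reverse=True):
        remainder = [e for e in solution if e is not entry]
        covered = set().union(*(m for _, m in remainder)) if remainder else set()
        if universe <= covered:  # still a legal cover without `entry`
            solution = remainder
    return solution
```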
In this subsection, we prove that TS-(k) is PLS-complete for all k ≥ 1. Given an instance I ∈ D_POSNAE, we construct an instance Φ(I) = (M, w, m) ∈ D_TS-(1). Here, Φ(I) consists of a collection M of sets over a finite set B, a weight function w : B × B → N that maps pairs of elements of B to positive integer weights, and a positive integer m ≤ |M|.
The Reduction The main idea is on the one hand to encode the assignment of variables to values in the choice of singleton sets representing literals in some solution and on the other hand to simulate the evaluation of clauses in the weight function w. Additional small incentives reward the inclusion of singleton sets whereas medium incentives reward the inclusion of distinct literals for variables.
In more detail, given instance I, we construct the reduced instance of TS-(1) over the finite set B.

Assume there exists a locally optimal solution S ∈ F TS-(1)(Φ(I)) with |S| = m such that S contains two sets {x^0} ∈ S and {x^1} ∈ S. By the pigeonhole principle, there exist sets {y^0} ∈ M and {y^1} ∈ M with {y^0} ∉ S and {y^1} ∉ S. Thus, exchanging {x^0} for {y^0} increases the cost of S, since no weight W is lost and the additional weight w(x^1, y^0) ≥ W dominates the sum of the weights lost due to the removal of {x^0}. A contradiction.
Lemma 11. POSNAE ≤_pls TS-(k) for all k ≥ 1.
Proof. Assume there exists a feasible solution S ∈ F TS-(1)(Φ(I)) which is locally optimal for Φ(I), but not locally optimal for I. By Lemma 10, S is positive-element-consistent. This implies that Ψ(I, S) is a legal assignment for the variables x ∈ X. Since Ψ(I, S) is not locally optimal for I, there exists a variable x ∈ X from instance I which can be flipped such that clauses C_i, . . . , C_j ∈ C now have literals with non-identical assignments, clauses C_p, . . . , C_q ∈ C now have literals with identical assignments, and the cost strictly increases by z > 0. This implies that in Φ(I), set {x^i} ∈ S can be replaced by set {x^ī} ∈ M. On the one hand, for every variable y ∈ X with y ≠ x which appears in a clause C_l from {C_i, . . . , C_j}, we have that element y^i ∈ S and w(x^ī, y^i) := w_{C_l} + 1 + W. On the other hand, for every variable z ∈ X with z ≠ x which appears in a clause C_t from {C_p, . . . , C_q}, we have that element z^ī ∈ S and w(x^ī, z^ī) := W + 1 by construction. All other pairs of elements of B remain unchanged in S. By definition of w, this strictly increases the cost of S by z. Thus, S is not locally optimal. A contradiction.
In this subsection, we prove that SB-(k) is PLS-complete for all k ≥ 1. Given an instance I ∈ (h)-CNFSAT, we construct an instance Φ(I) = (M, w, m) ∈ D SB-(1) consisting of a collection M of sets over a finite set B, a weight function w : M → N that maps sets in collection M to positive integer weights, and a positive integer m ≤ |M|.
The Reduction. In a nutshell, the main idea is to encode every satisfying assignment of a clause via sets containing the respective literals and possessing the weight of the clause. This is polynomial in the size of the input, since the length of a clause in I is at most h. For feasible solutions to be a collection of singleton sets, we add large incentives to include singleton sets and medium incentives to include literals for distinct variables.
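Because each clause has length at most h, enumerating all of its satisfying assignments takes at most 2^h steps, a constant amount of work per clause. A hypothetical sketch of this enumeration, with literals encoded as (variable, polarity) pairs (our own convention):

```python
from itertools import product

def satisfying_assignments(clause):
    """Enumerate every assignment to the clause's variables that
    satisfies it. A clause over k distinct variables has exactly one
    falsifying assignment, so 2^k - 1 assignments are returned."""
    variables = sorted({v for v, _ in clause})
    sat = []
    for bits in product((0, 1), repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        # A literal (v, True) is satisfied by v = 1, (v, False) by v = 0.
        if any(assignment[v] == int(pol) for v, pol in clause):
            sat.append(assignment)
    return sat
```

In the reduction, each such satisfying assignment gives rise to one set carrying the clause's weight.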
From instance I, we construct a reduced instance of SB-(1),
where ϕ′(x_i) := x_i if P_i(ϕ(x_{i_1}, . . . , x_{i_k})) = 1 and ϕ′(x_i) := x̄_i otherwise. We call a feasible solution S single-set-consistent if |S| = m, |S_i| = 1 for all S_i ∈ S, and for every set {x} ∈ S it holds that {x̄} ∉ S. Here, function Ψ(I, S) returns, for a feasible and single-set-consistent solution S ∈ F SB-(1)(Φ(I)), assignment 1 for variable x ∈ X for every set {x} ∈ S and assignment 0 for variable x ∈ X for every set {x̄} ∈ S. If S is not single-set-consistent, the assignment computed by INIT (h)-CNFSAT(I) is returned.
Lemma 12. Every locally optimal solution S ∈ F SB-(1) (Φ(I)) is single-set-consistent.
Proof. Assume there exists a locally optimal solution S ∈ F SB-(1)(Φ(I)) which contains a set S_i ∈ S with |S_i| > 1. Since set S_i has cardinality at least two, S_i can only be used in unions of sets which, by construction, have total weight strictly smaller than 2W. By the pigeonhole principle, there exists a set {x} ∈ 2^B which is not in S. Exchanging set S_i for {x} strictly increases the cost function: now, set C^SB_x can be constructed, and w(C^SB_x) is larger than the sum of the weights of the sets which can no longer be generated due to the removal of S_i. Thus, S is not locally optimal. A contradiction.
Assume there exists a locally optimal solution S ∈ F SB-(1)(Φ(I)) which contains two sets {x}, {x̄} ∈ S. By the pigeonhole principle, there exists an element y ∈ B with {y} ∉ S and {ȳ} ∉ S. Thus, by exchanging {x} for {y}, the additional set C^SB_xy can now be constructed. No weight 2W is lost due to the exchange operation. Set C^SB_xy has weight W, which dominates the sum of the weights of the sets that can no longer be constructed due to the removal of {x}. Thus, the cost of S increases and S is not locally optimal. A contradiction.
Proof. Assume there exists a feasible solution S ∈ F SB-(1)(Φ(I)) which is locally optimal for Φ(I), but not locally optimal for I. By Lemma 12, S is single-set-consistent. This implies that Ψ(I, S) is a legal assignment for all variables x ∈ X. Since Ψ(I, S) is not locally optimal for I, there exists a variable x ∈ X in (h)-CNFSAT which can be flipped such that clauses C_i, . . . , C_j ∈ C become satisfied, clauses C_p, . . . , C_q ∈ C become unsatisfied, and the cost strictly increases by z > 0. This implies that in Φ(I), set {x} ∈ 2^B can be replaced by set {x̄} ∈ 2^B. Now, the sets in M corresponding to C_p, . . . , C_q can no longer be formed by the union of subcollections of sets of S, whereas the sets corresponding to C_i, . . . , C_j can. By definition of w, this strictly increases the cost of S by z. Thus, S is not locally optimal. A contradiction.
In this subsection, we prove that HS-(k) is PLS-complete for all k ≥ 1. Given an instance I ∈ CNFSAT, we construct an instance Φ(I) = (M, w, m) ∈ D HS-(1) consisting of a collection M of sets over a finite set B, a weight function w : M → N mapping sets in collection M to positive integer weights, and a positive integer m ≤ |B|.
The Reduction. In a nutshell, the main idea is to encode every clause as a set containing the respective literals and possessing the weight of the clause. To ensure consistency, we add large incentives to include at least one literal from every variable, but not both.
In more detail, from instance I, we create a reduced instance of HS-(1) over the finite set B := {x, x̄ | x ∈ X}, where we define m := |X|. For every variable x ∈ X, we introduce a set C^HS_x := {x, x̄} in M with w(C^HS_x) := W. For every clause, we introduce a single set possessing the weight of the respective clause, whose elements correspond to the literals in the clause. In detail, for every clause C_i(x_{i_1}, . . . , x_{i_l}) ∈ C, we introduce the set C^HS_i := {x_{i_1}, . . . , x_{i_l}} in M and define w(C^HS_i) := w_{C_i}. We call a feasible solution S element-consistent if |S| = m and for every element x ∈ S it holds that x̄ ∉ S. Here, function Ψ(I, S) returns, for a feasible and element-consistent solution S ∈ F HS-(1)(Φ(I)), assignment 1 for variable x ∈ X for every element x ∈ S and assignment 0 for variable x ∈ X for every element x̄ ∈ S. If S is not element-consistent, the assignment computed by INIT CNFSAT(I) is returned.

Proof. Assume there exists a feasible solution S ∈ F HS-(1)(Φ(I)) which is locally optimal for Φ(I), but not locally optimal for I. By Lemma 14, S is element-consistent. This implies that Ψ(I, S) is a legal assignment to all variables x ∈ X. Since Ψ(I, S) is not locally optimal for I, there exists a variable x ∈ X in CNFSAT which can be flipped such that clauses C_i, . . . , C_j ∈ C become satisfied, clauses C_p, . . . , C_q ∈ C become unsatisfied, and the cost strictly increases by z > 0. This implies that in Φ(I), element x ∈ S can be replaced by element x̄ ∈ B, so that sets C^HS_i, . . . , C^HS_j ∈ M are now hit and sets C^HS_p, . . . , C^HS_q ∈ M are no longer hit. By definition of w, this strictly increases the cost of S by z. Thus, S is not locally optimal. A contradiction.
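The construction of Φ(I) for HS-(1) is mechanical; a hypothetical sketch, with literals encoded as (variable, polarity) pairs and the large incentive weight W passed in as a parameter (both representation choices are ours, not the paper's):

```python
def reduce_cnfsat_to_hs(variables, clauses, clause_weights, W):
    """Build (M, w, m) from a CNFSAT instance as described in the text:
    one set {x, x-bar} of large weight W per variable, and one set of
    literals per clause carrying that clause's weight."""
    M, w = [], {}
    for x in variables:
        c_x = frozenset({(x, True), (x, False)})  # C^HS_x = {x, x-bar}
        M.append(c_x)
        w[c_x] = W
    for clause, weight in zip(clauses, clause_weights):
        c_i = frozenset(clause)  # C^HS_i = the literals of clause C_i
        M.append(c_i)
        w[c_i] = weight
    return M, w, len(variables)  # m := |X|
```

A hitting set of size m that picks exactly one literal per variable then corresponds to a truth assignment, as the element-consistency argument requires.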
In this subsection, we prove that IP-(k) is PLS-complete for all k ≥ 1. Given an instance I ∈ POSNAE, we construct an instance Φ(I) ∈ D IP-(1), which consists of two v × v matrices A and B (defined below) and a collection M of sets over a finite set B. W.l.o.g., we assume that in I, every pair of variables x, y ∈ X occurs in some clause C_i(x, y) ∈ C. Furthermore, let σ be an ordering of X and let γ_{x_i} denote the number of clauses C_j ∈ C in which variable x_i ∈ X appears.
The Reduction. In a nutshell, the main idea is for every variable x ∈ X to introduce sets of identical cardinality for both assignments, but which have distinct cardinality from all other sets. These sets contain elements encoding satisfying assignments for all clauses variable x appears in. If a clause is satisfied by a given assignment, then the intersection of the two corresponding sets has cardinality two. In this case, the weight of the clause is added to the solution. Large incentives ensure that, identified by cardinality, the sets for variables are placed in the right position in the solution.
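For reference, the satisfaction condition the reduction rewards is the not-all-equal one: a positive NAE clause is satisfied exactly when its variables do not all receive the same value. A minimal sketch of this test (representation ours):

```python
def nae_satisfied(clause_vars, assignment):
    """A POSNAE clause (all literals positive) is satisfied iff its
    variables do not all receive the same truth value."""
    return len({assignment[v] for v in clause_vars}) > 1
```

The reduction simulates exactly this check through the cardinality of the intersection of the two sets chosen for a clause's variables.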
In more detail, let v := |X | and m := 2|C|. We create a reduced instance of IP-(1) over the finite set B := {x
In the v × v matrix A for Φ(I), we define a_ii := m + i for all i ∈ [v], and a_ij := 2 for all i, j ∈ [v] with i ≠ j. In the v × v matrix B for Φ(I), we define b_ii := W for all i ∈ [v] and for all i, j ∈ …

Proof. Assume there exists a feasible solution S ∈ F IP-(1)(Φ(I)) which is locally optimal for Φ(I), but not locally optimal for I. By Lemma 16, S is position-consistent. This implies that Ψ(I, S) is a legal assignment of values to the variables x ∈ X. Since, by assumption, Ψ(I, S) is not locally optimal for I, there exists a variable x ∈ X in instance I ∈ POSNAE which can be flipped such that clauses C_s(x, y), . . . , C_t(x, z) ∈ C now have literals with non-identical assignments, clauses C_p(x, u), . . . , C_q(x, v) ∈ C now have literals with identical assignments, and the cost strictly increases by z > 0. This implies that in Φ(I), set C^IP … By definition of B, this strictly increases the cost of S by z. Thus, S is not locally optimal. A contradiction.
In this subsection, we prove that CC-(k) is PLS-complete for all k ≥ 1. Given an instance I ∈ (h)-CNFSAT, we construct an instance Φ(I) = (M, N, w) ∈ D CC-(1). Here, Φ(I) consists of two collections M and N of sets over a finite set B and a weight function w : M ∪ N → N that maps sets from M ∪ N to positive integer weights.
The Reduction. In a nutshell, the main idea is to encode every satisfying assignment of a clause via sets containing the respective literals and possessing the weight of the clause. This is polynomial in the size of the input, since the length of a clause in I is at most h. Additionally, we add large incentives to exclude both literals of a variable and medium incentives to include literals.
In more detail, we create an instance of CC-(1) over the finite set B := {x, x̄ | x ∈ X}. For every x ∈ X, we introduce sets X^CC …

Proof. Assume that there exists a feasible solution S ∈ F CC-(1)(Φ(I)) which is locally optimal for Φ(I), but not locally optimal for I. By Lemma 18, S is element-consistent. This implies that Ψ(I, S) is a legal assignment for all variables y ∈ X. Since Ψ(I, S) is not locally optimal for I, there exists a variable x ∈ X in (h)-CNFSAT which can be flipped such that clauses C_i, . . . , C_j ∈ C become satisfied, clauses C_p, . . . , C_q ∈ C become unsatisfied, and the cost strictly increases by z > 0. This implies that in Φ(I), element x ∈ S can be replaced by element x̄ ∈ B. By definition of w, this strictly increases the cost of S by z. Thus, S is not locally optimal. A contradiction.