Settling the complexity of local max-cut (almost) completely

We consider the problem of finding a local optimum for Max-Cut with the FLIP-neighborhood, in which exactly one node changes the partition. Schäffer and Yannakakis (SICOMP, 1991) showed PLS-completeness of this problem on graphs with unbounded degree. O…

Authors: Robert Elsaesser, Tobias Tscheuschner

For an undirected graph G = (V, E) with edge weights w : E → N, a cut is a partition of V into two sets V_1, V_2. The weight of the cut is the sum of the weights of the edges connecting V_1 and V_2. The Max-Cut problem asks for a cut of maximum weight. Computing a maximum cut is one of the most famous problems in computer science and is known to be NP-complete even on graphs with maximum degree three [8]. For a survey of Max-Cut including applications see [17]. A frequently used approach to dealing with hard combinatorial optimization problems is local search. In local search, every solution is assigned a set of neighbor solutions, i.e. a neighborhood. The search begins with an initial solution and iteratively moves to better neighbors until no better neighbor can be found. For a survey of local search, we refer to [14]. To encapsulate many local search problems, Johnson et al. [9] introduced the complexity class PLS (polynomial local search) and initially showed PLS-completeness for the Circuit-Flip problem. Schäffer and Yannakakis [22] showed PLS-completeness for many popular local search problems, including the local Max-Cut problem with FLIP-neighborhood, albeit their reduction builds graphs with linear degree in the worst case. Moreover, they introduced the notion of so-called tight PLS-reductions, which preserve not only the existence of instances and initial solutions that are exponentially many improving steps away from any local optimum, but also the PSPACE-completeness of computing a local optimum reachable by improving steps from a given solution. In a recent paper, Monien and Tscheuschner [15] showed the two properties that are preserved by tight PLS-completeness proofs for the local Max-Cut problem on graphs with maximum degree four. However, their proof did not use a PLS-reduction; they left open whether the local Max-Cut problem is PLS-complete on graphs with maximum degree four.
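The FLIP local search described above can be sketched as follows (a minimal illustration; the graph encoding and function names are our own, not from the paper):

```python
def cut_weight(edges, side):
    # edges: dict mapping frozenset({u, v}) -> weight; side: dict node -> 0/1.
    # The cut weight is the total weight of edges whose endpoints differ.
    return sum(w for e, w in edges.items() if len({side[u] for u in e}) == 2)

def flip_gain(edges, side, v):
    # Change in cut weight if node v switches sides: edges to v's own side
    # enter the cut, edges to the other side leave it.
    gain = 0
    for e, w in edges.items():
        if v in e:
            (u,) = e - {v}
            gain += w if side[u] == side[v] else -w
    return gain

def flip_local_search(edges, side):
    # Repeatedly flip an unhappy node (positive gain) until every node is
    # happy, i.e. the partition is a FLIP-local optimum.
    improved = True
    while improved:
        improved = False
        for v in side:
            if flip_gain(edges, side, v) > 0:
                side[v] ^= 1
                improved = True
    return side
```

On a weighted triangle, for example, the search separates the node incident to the two heaviest edges and then stops, since no single flip improves the cut.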
For cubic graphs, Poljak [16] showed that any FLIP local search takes O(n^2) improving steps, and Loebl [13] showed earlier that a local optimum can be found in polynomial time using an approach different from local search. Thus, it is unlikely that computing a local optimum is PLS-complete on graphs with maximum degree three. (This work was partially supported by the German Research Foundation (DFG) Priority Programme 1307 "Algorithm Engineering".) Due to the huge gap between degree three and unbounded degree, Ackermann et al. [2] asked for the smallest d such that on graphs with maximum degree d the computation of a local optimum is PLS-complete. In this paper, we show that d is either four or five (unless PLS ⊆ P), and thus solve the above problem almost completely. A related problem has been considered by Krentel [10]. He showed PLS-completeness for a satisfiability problem with trivalent variables, a clause length of at most four, and a maximum occurrence of the variables of three. Our result has an impact on many other problems, since local Max-Cut has been the basis for many PLS-reductions in the literature. Some of these reductions directly carry over the property of maximum degree five in some sense and yield PLS-completeness of the corresponding problem even for very restricted sets of feasible inputs. In particular, PLS-completeness follows for the Max-2Sat problem [22] with FLIP-neighborhood, in which exactly one variable changes its value, even if every variable occurs at most ten times. PLS-completeness also follows for the problem of computing a Nash equilibrium in congestion games (cf. [6], [2]) in which each strategy contains at most five resources. The problem Partition [22], in which a graph of maximum degree six is partitioned into two equally sized sets of nodes by minimizing or maximizing the weight of the cut, and in which the neighborhood consists of all solutions in which two nodes of different partitions are exchanged, is also PLS-complete.
Moreover, our PLS-completeness proof has already been helpful in showing a complexity result in hedonic games [7]. In this paper, we also consider the smoothed complexity of any FLIP local search on graphs in which the degrees are bounded by O(log n). This performance measure was introduced by Spielman and Teng in their seminal paper on the smoothed analysis of the Simplex algorithm [20]. Since then, a large number of papers have dealt with the smoothed complexity of different algorithms. In most cases, smoothed analysis is used to explain the speed in practice of certain algorithms whose worst-case running time is unsatisfactory. The smoothed measure of an algorithm on some input instance is its expected performance over random perturbations of that instance, and the smoothed complexity of an algorithm is the maximum smoothed measure over all input instances. In the case of an LP, the goal is to maximize z^T x subject to Ax ≤ b, for given vectors z, b, and matrix A, where the entries of A are perturbed by Gaussian random variables with mean 0 and variance σ^2. That is, we add to each entry a_{i,j} some value max_{i,j} a_{i,j} · y_{i,j}, where y_{i,j} is a Gaussian random variable with mean 0 and standard deviation σ. Spielman and Teng showed that an LP perturbed by such random noise has expected running time polynomial in n, m, and 1/σ. This result was further improved by Vershynin [23]. The smoothed complexity of other linear programming algorithms has been considered in e.g. [3], and quasi-concave minimization was studied in [11]. Several other algorithms from different areas have been analyzed w.r.t. their smoothed complexity (see [21] for a comprehensive description). Two prominent examples of local search algorithms with polynomial smoothed complexity are 2-opt TSP [5] and k-means [1].
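The perturbation scheme described above can be sketched as follows (a minimal illustration with names of our own; we scale by the largest absolute entry, which is an assumption on how the maximum is taken):

```python
import random

def perturb(matrix, sigma):
    # Add to each entry a_ij the value scale * y_ij, where y_ij is Gaussian
    # with mean 0 and standard deviation sigma, and scale is the largest
    # absolute entry of the matrix (so sigma is a relative noise level).
    scale = max(abs(a) for row in matrix for a in row)
    return [[a + scale * random.gauss(0.0, sigma) for a in row]
            for row in matrix]
```

The smoothed measure of an algorithm would then be its expected performance over such randomly perturbed copies of a worst-case instance.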
We also mention here the papers of Beier, Röglin, and Vöcking [4,19] on the smoothed analysis of integer linear programming. They showed that if Π is a certain class of integer linear programs, then Π has an algorithm of probably polynomial smoothed complexity iff Π_u ∈ ZPP, where Π_u is the unary representation of Π, and ZPP denotes the class of decision problems solvable by a randomized algorithm with polynomial expected running time that always returns the correct answer. The results of [4,19] imply that e.g. 0/1-knapsack, constrained shortest path, and constrained minimum weight matching have probably polynomial smoothed complexity. Unfortunately, the results of these papers cannot be used to settle the smoothed complexity of local Max-Cut.

Overview. In section 3, we introduce a technique by which we substitute graphs whose nodes of degree greater than five have a certain type (we will call these nodes comparing) by graphs of maximum degree five. In particular, we show that certain local optima in the former graphs induce unique local optima in the latter ones. In section 4 we give an overview of the proof of the PLS-completeness of computing a local optimum of Max-Cut on graphs with maximum degree five, by reducing from the PLS-complete problem CircuitFlip. In a nutshell, we map instances of CircuitFlip to graphs whose nodes of degree greater than five are comparing. Some parts of the graphs are adjustments of subgraphs of the PLS-completeness proof of [22]. Then, using our technique, we show that local optima of these graphs induce local optima of the corresponding instances of CircuitFlip. In section 5 we show that on graphs with degree O(log n) local Max-Cut has probably polynomial smoothed complexity. To obtain this result, we essentially prove that every improving step w.r.t. the FLIP-neighborhood increases the cut by at least a value polynomial in n and/or σ, with high probability.

A graph G together with a 2-partition P of V is denoted by G_P.
We let c_{G_P} : V → {0, 1} with c_{G_P}(u) = 1 if and only if u ∈ V_1 in G_P. We call c_{G_P}(u) the color of u in G_P, where u is white if c_{G_P}(u) = 0 and black otherwise. If the considered graph is clear from the context then we also just write c_P(v), and if even the partition is clear then we omit the whole subscript. For convenience we treat the colors of the nodes also as truth values, i.e. black corresponds to true and white to false. For a vector v of nodes we let c(v) be the vector of colors induced by c. We say that an edge {u, v} is in the cut in P if c_P(u) ≠ c_P(v). For a node u we say that u flips if it changes the partition. A node u is happy in G_P if a flip of u does not increase the weight of the cut, and unhappy otherwise. Since we consider weighted graphs, we also say that a flip increases the cut if it increases the weight of the cut. A partition P is a local optimum if all nodes in G_P are happy. A local search problem Π consists of a set of instances I, a set of feasible solutions F(I) for every instance I ∈ I, and an objective function f : F(I) → Z. In addition, every solution s ∈ F(I) has a neighborhood N(s, I) ⊆ F(I). For an instance I ∈ I, the problem is to find a solution s ∈ F(I) such that no solution s' ∈ N(s, I) has a greater value than s with respect to f in case of maximization, and no lower value in case of minimization. A local search problem Π is in the class PLS [9] if the following three polynomial time algorithms exist: algorithm A computes for every instance I ∈ I a feasible solution s ∈ F(I), algorithm B computes for every I ∈ I and s ∈ F(I) the value f(s), and algorithm C returns for every I ∈ I and s ∈ F(I) a better neighbor solution s' ∈ N(s, I) if there is one, and "locally optimal" otherwise. A problem Π ∈ PLS is PLS-reducible to a problem Π' ∈ PLS if there are the following polynomial time computable functions Φ and Ψ.
The function Φ maps instances I of Π to instances Φ(I) of Π', and Ψ maps pairs (s, I), where s is a solution of Φ(I), to solutions of I, such that for all instances I of Π and all local optima s* of Φ(I) the solution Ψ(s*, I) is a local optimum of I. Finally, a problem Π ∈ PLS is PLS-complete if every problem in PLS is PLS-reducible to Π. In our technique, as well as in the PLS-completeness proof, we make use of a result of Monien and Tscheuschner [15]. They showed a property for a set of graphs containing two certain types of nodes of degree four. Since we do not need their types in this paper, we omit the restrictions on the nodes and use the following weaker proposition.

Lemma 1 ([15]). Let C_f be a boolean circuit with N gates which computes a function f : {0,1}^n → {0,1}^m. Then, using O(log N) space, one can compute a graph G_f = (V_f, E_f) with maximum degree four containing nodes s_1, ..., s_n, t_1, ..., t_m ∈ V_f of degree one such that for the vectors s := (s_1, ..., s_n), t := (t_1, ..., t_m) we have f(c_P(s)) = c_P(t) in every local optimum P of G_f.

Definition 1. For a polynomial time computable function f we say that G_f = (V_f, E_f) as constructed in Lemma 1 is the graph that looks at the input nodes s_i ∈ V_f and biases the output nodes t_i ∈ V_f to take the colors induced by f.

Usage of Lemma 1. Notice first that G_f can be constructed in logarithmic space, and thus polynomial time, for any polynomial time computable function f. In the rest of the paper we use the graph G_f for several functions f, and we will scale the weights of its edges. The edges of G_f then give incentives of appropriate weight to certain nodes of the graphs to which we add G_f. The incentives bias these nodes to take the colors induced by f. We already point out that for any node v we will introduce at most one subgraph that biases v.
Moreover, the unique edge e = {u, v} incident to a biased node v that belongs to the subgraph biasing v will in many cases have the lowest weight among the edges incident to v. In particular, the weight of e will then be chosen small enough such that the color of v, in local optima, depends on the color of u if and only if v is indifferent with respect to the colors of the other nodes adjacent to v. Note that in local optima the node u has the opposite color of the color to which v is biased according to f.

3 Substituting certain nodes of unbounded degree

A node v ∈ V is comparing if (i) v is adjacent to nodes u_1^1, u_1^2, ..., u_m^1, u_m^2, u ∈ V \ {v} with edge weights a_1, ..., a_m, δ, as shown in Figure 1, (ii) u is a node of a subgraph G' = (V', E') of G that looks at a subset of V \ {u, v} and biases v, and (iii) a_i ≥ 2a_{i+1} for all 1 ≤ i < m and a_m ≥ 2δ. The subgraph G' is called the biaser of v. For u_i^j with 1 ≤ i ≤ m, 1 ≤ j ≤ 2 we call the node u_i^k with 1 ≤ k ≤ 2 and k ≠ j that is adjacent to v via the unique edge with the same weight as {u_i^j, v} the counterpart of u_i^j with respect to v. The name of the comparing node stems from its behaviour in local optima. If we treat the colors of the neighbors u_1^1, ..., u_m^1 of v as a binary number a, with u_1^1 being the most significant bit, and the colors of u_1^2, ..., u_m^2 as the bitwise complement of a binary number b, then, in a local optimum, the comparing node v is white if a > b, it is black if a < b, and if a = b then v has the color to which it is biased by its biaser. In this way, the color of v "compares" a and b in local optima. In the following, we let G = (V, E) be a graph and v ∈ V be a comparing node with adjacent nodes and incident edges as in Figure 1. We say that we degrade v if we remove v and its incident edges and add the following nodes and edges.
We introduce nodes v_{i,j}^k and v_{m,2}^1 with edges and weights as depicted in Figure 2; the nodes u_i^j in Figure 2 have gray circumcircles to indicate that they, in contrast to the other nodes, also occur in G. Furthermore, we add a subgraph G'' that looks at u and biases all nodes v_{i,1}^k to the opposite of the color of u (illustrated by short gray edges in Figure 2) and the nodes v_{i,2}^k to the color of u (short gray dashed edges). The weights of the edges of G'' are scaled such that each of them is strictly smaller than δ. Note that due to the scaling the color of the unique node of G'' adjacent to u does not affect the happiness of u; node u is therefore not depicted in Figure 2 anymore. We let G(G, v) be the graph obtained from G by degrading v, and we call v weakly indifferent in a partition P if c_P(u_i^1) ≠ c_P(u_i^2) for all 1 ≤ i ≤ m. If v is not weakly indifferent then we call the two nodes u_i^1, u_i^2 adjacent to v via the edges of highest weight for which c_P(u_i^1) = c_P(u_i^2) the decisive neighbors of v in P. We let V_com ⊆ V be the set of comparing nodes of V, and for a partition P' of the nodes of G(G, v) we let col_{P'} : V_com → {0, 1} be the partial function defined by

col_{P'}(v) = 0, if for all i, j: c_{P'}(v_{i,1}^j) = 0 and c_{P'}(v_{i,2}^j) = 1,
col_{P'}(v) = 1, if for all i, j: c_{P'}(v_{i,1}^j) = 1 and c_{P'}(v_{i,2}^j) = 0.

We say that a comparing node v has the color κ ∈ {0, 1} in a partition P' if col_{P'}(v) = κ.

Theorem 1. Let G = (V, E) be a graph, v ∈ V a comparing node with its adjacent nodes and incident edges as in Figure 1, and let P be a local optimum of G such that in P the biaser of v biases v to c_P(v), i.e. c_P(u) ≠ c_P(v). Let P' be a partition of the nodes of G(G, v) such that c_{P'}(w) = c_P(w) for all w ∈ V \ {v}. Then, P' is a local optimum if and only if c_P(v) = col_{P'}(v).

Note the restriction that in the local optimum P the biaser of v biases v to the color that v in fact has in P, and not to the opposite.
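The behaviour of a comparing node in a local optimum, as described in section 3, can be summarized by a small decision rule (a sketch with our own function and parameter names; colors are encoded as 0 = white, 1 = black):

```python
def comparing_color(u1, u2, bias):
    # u1: colors of u_1^1, ..., u_m^1, read as the bits of a (MSB first).
    # u2: colors of u_1^2, ..., u_m^2, read as the bitwise complement of b.
    # Returns the color of the comparing node v in a local optimum:
    # white (0) if a > b, black (1) if a < b, and the biased color if a = b.
    a = int("".join(str(c) for c in u1), 2)
    b = int("".join(str(1 - c) for c in u2), 2)
    if a > b:
        return 0
    if a < b:
        return 1
    return bias
```

For instance, with m = 2 neighbor pairs, colors u1 = [1, 0] and u2 = [1, 0] give a = 2 and b = 1, so v is white regardless of its biaser.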
In the PLS-completeness proof in section 4, the biaser of any comparing node v is designed to bias v to the color that v has in a local optimum P due to the colors of its neighbors. Then, we can use Theorem 1 to argue about col_{P'}(v) in G(G, v).

Proof. Let κ ∈ {0, 1} be the color to which v is biased by its biaser in P, i.e. κ := c_P(v). For all i, j we call the color of v_{i,1}^j correct if c_{P'}(v_{i,1}^j) = κ, and we call the color of v_{i,2}^j correct if c_{P'}(v_{i,2}^j) ≠ κ. Moreover, we call v_{i,j}^k correct for any i, j, k if it has its correct color.

"⇒": Let P' be a local optimum. Note that each node v_{i,j}^k is biased by an edge with weight lower than δ to its correct color. Therefore, to show that it is correct in the local optimum P', it suffices to show that it gains at least half of the sum of the weights of its incident edges with weight greater than δ if it is correct. We prove the theorem by means of the following lemmas, each of which is proven via a straightforward inductive argument.

Lemma 2. Let q ≤ m and c_{P'}(u_i^1) = κ for all i ≤ q. Then, v_{i,1}^1 and v_{i,2}^1 are correct for all i ≤ q.

Proof. We prove the claim by induction on i. Due to c_{P'}(u_1^1) = κ we get the correctness of v_{1,1}^1. For each i ≤ q the correctness of v_{i,1}^1 implies the correctness of v_{i,2}^1. Moreover, for each i < q the correctness of v_{i,2}^1 together with c_{P'}(u_{i+1}^1) = κ implies the correctness of v_{i+1,1}^1.

Lemma 3. Let q ≤ m, let v_{i,1}^1 and v_{i,2}^1 be correct for all i ≤ q, and let v_{q,1}^2 be correct. Then, v_{i,1}^2 and v_{i,2}^2 are correct for all i < q.

Proof. We prove the claim by induction on i. Nodes v_{q,1}^2 and v_{q,1}^1 are correct by assumption.

Lemma 4. If v_{q,1}^1 and v_{q,1}^2 are correct then v_{i,j}^k is correct for any j, k, and q ≤ i ≤ m.

Proof. If q = m then the correctness of v_{m,1}^1 implies the correctness of v_{m,2}^1. The case q < m is done by induction on i. Nodes v_{q,1}^1 and v_{q,1}^2 are correct by assumption. Assume that v_{i,1}^1 and v_{i,1}^2 are correct for an arbitrary q ≤ i < m.
Then, the nodes v_{i,2}^1 and v_{i,2}^2 are correct, whereafter the correctness of v_{i+1,1}^1 and v_{i+1,1}^2 follows. Finally, the correctness of v_{m,1}^1 implies the correctness of v_{m,2}^1.

We first consider the case that v is weakly indifferent. Then, for each i at least one of the nodes u_i^1 and u_i^2 has the color κ. Due to the symmetry between the nodes v_{i,j}^1 and v_{i,j}^2 we may assume w.l.o.g. that c_{P'}(u_i^1) = κ for all i. Then, Lemma 2 implies that v_{i,1}^1 and v_{i,2}^1 are correct for all i. Then, the correctness of v_{m,2}^1 and v_{m-1,2}^1 together imply the correctness of v_{m,1}^2. Then, Lemma 3 implies the correctness of v_{i,1}^2 and v_{i,2}^2 for all i < m.

Now assume that v is not weakly indifferent and let u_q^1 and u_q^2 be the decisive neighbors of v. As in the previous case we assume w.l.o.g. that c_{P'}(u_i^1) = κ for all i ≤ q. Then, due to Lemma 2 the nodes v_{i,1}^1 and v_{i,2}^1 are correct for all i ≤ q. If q = 1 then c(u_1^2) = κ implies the correctness of v_{1,1}^2; recall that by assumption v is biased to the opposite color of the color of the decisive nodes. On the other hand, if q > 1 then the correctness of v_{q-1,2}^1 and c(u_q^2) = κ together imply the correctness of v_{q,1}^2. Then, Lemma 3 implies the correctness of v_{i,1}^2 and v_{i,2}^2 for all i < q. Finally, Lemma 4 implies the correctness of v_{i,j}^k for all j, k, and q ≤ i ≤ m.

"⇐": Assume that every node v_{i,j}^k is correct. As we have seen in "⇒", v_{i,j}^k is then happy. Moreover, each u_i^j is also happy since its neighbors have the same colors as in the local optimum P; recall that if v_{i,1}^j is correct it has the same color in P' as v has in P. The colors of the remaining nodes are unchanged. Therefore, P' is a local optimum. This finishes the proof of Theorem 1.

Our reduction is based on the following PLS-complete problem CircuitFlip (in [9] it is called Flip, a name which we avoid in this paper since the neighborhood of Max-Cut has the same name).
An instance of CircuitFlip is a boolean circuit C with n input bits and m output bits. A feasible solution of CircuitFlip is a vector v ∈ {0, 1}^n of input bits for C, and the value of a solution is the output of C treated as a binary number. Two solutions are neighbors if they differ in exactly one bit. The objective is to maximize the output of C.

Theorem 2. The problem of computing a local optimum of the Max-Cut problem on graphs with maximum degree five is PLS-complete.

Proof. We reduce from the PLS-complete problem CircuitFlip. Let C be an instance of CircuitFlip with input variables X_1, ..., X_n, outputs C_1, ..., C_m, and gates G_N, ..., G_1. W.l.o.g. we make the following assumptions. Each input variable occurs exactly once in exactly one gate. All gates are NOR-gates with a fan-in of 2 and are topologically sorted such that i > j if G_i is an input of G_j. For the sake of simplicity, we also denote by G_i the output of gate G_i. The two inputs of a gate G_i are denoted by I_1(G_i) and I_2(G_i), i.e. a gate G_i computes correctly if and only if G_i = ¬(I_1(G_i) ∨ I_2(G_i)). The gates G_1, ..., G_m are the output of C, where G_m is the most significant bit, and G_{m+1}, ..., G_{2m} compute the corresponding negations of the output bits. The gates G_{2m+1}, ..., G_{2m+n} and G_{2m+n+1}, ..., G_{2m+2n} return the same better neighbor solution if there is one and return X_1, ..., X_n otherwise. Finally, let C(x) be the output of C on input x ∈ {0, 1}^n and w(x) be the better neighbor of x computed by C on input x, and assume w.l.o.g. N > 20 and m ≥ n > 4.

The proof in a nutshell: the graph G_C contains, for κ ∈ {0, 1}, a subgraph G_C^κ representing a copy of C; the overall structure of our proof is inspired by [10]. For each gate G_i in C there is a subgraph S_i^κ for κ ∈ {0, 1} in G_C. The subgraphs S_i^κ are taken from [22] and adjusted such that they have maximum degree five without changing local optima. In particular, each S_i^κ contains a comparing node g_i^κ whose color represents the output of G_i.
To maintain a maximum degree of five we assume that g_i^κ is degraded in G_C and argue via Theorem 1 about its color in local optima. Then, the colors of the nodes of S_i^κ, in local optima, either behave as a NOR-gate or have a reset state, i.e. a state in which each input node of S_i^κ is indifferent w.r.t. its neighbors in S_i^κ. For each κ ∈ {0, 1} we have a subgraph T^κ that looks at g_i^κ for 2m + 1 ≤ i ≤ 2m + n, i.e. at the improving solution, and biases each input node of G_C^κ to the color of its corresponding g_i^κ. Finally, we have a subgraph that looks at the input nodes of G_C^0, G_C^1 and decides whose input results in a greater output w.r.t. C (this copy is called the winner, the other one the loser), and biases the subgraphs S_i^κ of the winner to behave like NOR-gates and the subgraphs of the loser to take the reset state. Then, we show that the colors of the subgraphs S_i^κ of the winner in fact reflect the correct outputs w.r.t. their inputs, and that the input nodes of the loser in fact are indifferent w.r.t. their neighbors in the subgraphs S_i^κ. Then, due to the bias of T^κ, the input nodes of the loser take the colors of the improving neighbor computed by the winner, whereafter the loser becomes the new winner. Hence, the improving solutions switch back and forth between the two copies until the colors of the input nodes of both copies are local optima and the copies return their input as improving solution. Then, the colors of the input nodes induce a local optimum of C.

Before turning to the details we introduce some notation w.r.t. G_C. We let x_i^κ be the input nodes of G_C^κ, w_{i,1}^κ := g_{2m+i}^κ, w_{i,2}^κ := g_{2m+n+i}^κ for 1 ≤ i ≤ n, and ĝ_i^κ := g_{m+i}^κ for 1 ≤ i ≤ m. Each subgraph G_C^κ also contains nodes y_i^κ, z_i^κ for 0 ≤ i ≤ 2N + 1 and λ_i^κ for 1 ≤ i ≤ n, which induce vectors y^κ, z^κ, and λ^κ. Moreover, we let x^κ be the vector of nodes induced by x_i^κ for 1 ≤ i ≤ n.
We will introduce the nodes and edges of G_C by components of fourteen types: type 1 up to type 14, where we say that the nodes, edges, and weights of the edges of the components have the same types as their corresponding components. We will explicitly state weights for the edges of types 2 up to 7. However, the weights of these components are only stated to indicate the relations between edge weights of the same type. The only edge weights that interleave between two different types are those of types 3 and 4; the edges of types 3 and 4 are scaled by the same number. For all other types we assume that their weights are scaled such that the weight of an edge of a given type is greater than four times the sum of the weights of the edges of all higher types combined. Note that for these types a lower type implies a higher edge weight. To distinguish between the explicitly stated edge weights and the final edge weights, i.e. the weights resulting from the scaling, we will speak of the explicitly stated weights as relative edge weights. The components of some types are introduced via drawings. In the drawings, the thick black edges and the nodes with black circumcircles are counted among the components of the introduced type. Gray edges and nodes with gray circumcircles are of a different type than the component introduced in the corresponding drawing and are only (re-)drawn to simplify the verification of the proofs for the reader, in particular of the condition that each node has maximum degree five. If for a gray edge no explicit relative weight is given then the edge is among the types 8 to 14. If a gray edge is dotted then it is of a higher type than the non-dotted gray edges of the same drawing. If a node has a black or a white filling then it is of type 1. These nodes are also (re-)drawn in components of type higher than 1. Type 1 is to provide the constants 0 and 1 for the components of higher type.
It contains nodes s, t which are connected by an edge with a weight that is greater than the sum of the weights of all other edges in E_C. Assume w.l.o.g. c(s) = 0 and let S and T be the sets of nodes representing the constants 0 and 1. Type 1 looks at s and biases the nodes of S to the color of s and the nodes of T to the opposite. In the following we assume that for each constant introduced in components of higher types there is a separate node in the sets S, T.

Type 2 contains the nodes d^0, d^1, u^0, u^1 (we will see later that d^0 and d^1 are comparing nodes) with edges and relative weights as depicted in Figure 3. The purpose of these edges is, together with the edges of types 9 and 10, to guarantee that d^0 and d^1 are not both black in local optima. The nodes d^0 and d^1 are adjacent to many nodes of higher type and have a degree greater than five.

The components of types 3 to 7 represent the two subgraphs G_C^0 and G_C^1. The components are very similar to certain clauses of [22]. There are three differences between our components and their clauses. First, we omit some nodes and edges to obtain a maximum degree of five for all nodes different from g_i^κ, I_1(g_i^κ), and I_2(g_i^κ). Second, we use different edge weights. However, the weights are manipulated in a way such that the happiness of each node for given colors of the corresponding adjacent nodes is the same as in [22]. Third, we add nodes that we bias and at which we look. Their purpose is to derive the color that a comparing node g_i^κ would have if it were a single node. This color is used to bias g_i^κ such that Theorem 1 implies either col(g_i^κ) = 0 or col(g_i^κ) = 1.

Type 3 consists of subgraphs S_i^κ which represent the gates G_i of C. For gates whose inputs are not inputs of G_C^κ they are depicted in Figure 4.
Together with d^0 and d^1, the nodes g_i^κ (and I_k(g_j^κ), respectively) are the only nodes which have a degree greater than five; we will see later that they are also comparing. For each gate G_i^κ whose inputs are inputs of G_C^κ we take the same components as for those gates whose inputs are not inputs of G_C^κ, but make the following adjustment. We omit the edges {I_1(g_i^κ), 0} and {I_2(g_i^κ), 1} and subtract their relative weights from the edges {I_1(g_i^κ), 1} and {I_2(g_i^κ), 0} respectively, i.e. their relative weights are 2^{10i+7} - 2^{10i-5} and 2^{10i-5} - 2^{10i-1}. Note that the adjustment does not change the happiness of the nodes I_1(g_i^κ) and I_2(g_i^κ) for any given colors of themselves and their neighbors. We call the edges {g_i^κ, u_{i,j}^κ} for j ∈ {2, 3, 6, 7, 10, 11} corresponding to g_i^κ.

Fig. 4. The components of type 3.

Type 4 (Figure 5) checks whether the outputs of the gates represented by the components of type 3 are correct and gives incentives to nodes of other components depending on the result. As in [22] we say that the natural value of the nodes y_i^κ is 1 and the natural value of the nodes z_i^κ is 0. The nodes y_{N+1}^κ, z_{N+1}^κ, ..., y_2^κ, z_2^κ check the correct computation of the corresponding gates and give incentives to their corresponding gates depending on whether the previous gates are correct. The nodes y_1^κ, z_1^κ, y_0^κ, z_0^κ give incentives to d^0, d^1 depending on whether all gates are correct. Recall that the weights of the edges of type 4 are the only weights that interleave with weights of edges of a higher type, namely with those of type 3.

Type 6 contains nodes d̂_i^κ for all 1 ≤ i ≤ n with incident edges {d̂_i^κ, d^κ} of relative weight 2^{2i}. These edges are to ensure that col(d^κ) = c(d̂_i^κ) for all i.
The component also contains n edges {1, d^κ} with relative weights 2^{2i} for all 1 ≤ i ≤ n (recall that each constant is represented by a separate node of type 1). These edges are needed for d^κ to be a comparing node.

Type 7 (Figure 7) is to incite the input nodes of G_C^κ to take the color corresponding to the better neighbor computed by G_C^κ if col(d^κ) = 0. As we will see in Lemma 6, the node λ_i^κ has the same color as w_{i,1}^κ if c(w_{i,1}^κ) = c(w_{i,2}^κ) and col(d^κ) = 0. Moreover, we will see in the same Lemma that λ_i^κ has the opposite color of μ_i^κ in any local optimum. Therefore the nodes λ_i^κ and μ_i^κ together with their incident edges, in the case that w_{i,1}^κ = w_{i,2}^κ and col(d^κ) = 0, have the functionality of a subgraph T^κ that looks at the nodes w_{i,1}^κ and biases the input nodes of G_C^κ to take the color of their corresponding w_{i,1}^κ. Concerning the maximum degree of five, recall that the number of edges of type 3 incident to x_i^κ is three due to the adjustment. One edge of type 7 is incident to x_i^κ and one edge of higher type, depicted as a gray edge in Figure 7, is incident to x_i^κ. Thus, x_i^κ has a degree of five.

The components of types 8 to 14 are subgraphs that look at certain nodes and bias other nodes. No node at which any component looks is a comparing node. Therefore, all of them must be of degree at most five in our construction. But at some of these nodes more than one component looks. To maintain a maximum degree of five for these nodes, we assume that the component of the lowest type which looks at such a node v not only biases the nodes which we state that it biases, but also biases extra nodes v_1, ..., v_k, for k ∈ N great enough, to have the same color as v, and the components of higher types look at v_1, ..., v_k instead of the original nodes.
Type 8 looks at the vectors x^0, x^1 of nodes representing the inputs of G_C^0 and G_C^1 and at the vectors λ^0, λ^1 of nodes of type 7, and biases the vectors y^0, z^0, y^1, and z^1 in the following way. The nodes y_i^0, z_i^0 for all 0 ≤ i ≤ 2N + 1 are biased to their unnatural value, as defined in type 4, if C(x^0) < C(x^1), w(x^1) = c(x^0), and w(x^1) = c(λ^0), and to their natural value otherwise. Similarly, y_i^1, z_i^1 are biased to their unnatural value if C(x^0) ≥ C(x^1), w(x^0) = c(x^1), and w(x^0) = c(λ^1), and to their natural value otherwise. The comparison between C(x^0) and C(x^1) is used to decide which circuit is the winner and which one is the loser, and the consideration of the other colors is to avoid certain troublemaking local optima.

The idea behind the next two components is as follows. In any local optimum, we want at most one of the nodes d^0 and d^1 to be black. The immediate idea to achieve this would be to use a simple edge between them in the component of type 2 (see Figure 3) without the intermediate nodes u^0 and u^1. To show, later in the proof, that a comparing node d^κ has a certain color, we want to apply Theorem 1. For this, we need to know the colors of the neighbors adjacent to d^κ via the edges of highest weight, which includes the color of d^{1-κ}. But arguing about the color of d^{1-κ} via Theorem 1 analogously needs the information about the color of d^κ. To solve this problem, we introduce the intermediate nodes u^0 and u^1, bias them appropriately, and use their colors to bias d^0 and d^1.

Type 9 looks at y_1^0, y_1^1, and at the vectors x^0 and x^1, and biases u^0 and u^1 as follows. If C(x^0) ≥ C(x^1) then it biases u^0 to the color of y_1^0 and u^1 to the opposite. Otherwise it biases u^1 to the color of y_1^1 and u^0 to the opposite.

Type 10 looks at u^0, u^1, y_1^0, y_1^1, and at the vectors x^0 and x^1, and biases d^0 and d^1 as follows.
If c(y 0 1 ) = c(y 1 1 ) = 0 then d 0 is biased to the color of u 1 and d 1 to the color of u 0 . If c(y 0 1 ) ≠ c(y 1 1 ) then d 0 is biased to the color of y 1 1 and d 1 to the opposite. If c(y 0 1 ) = c(y 1 1 ) = 1 then we distinguish two cases. If C(x 0 ) ≥ C(x 1 ) then d 0 is biased to 0 and d 1 to 1, otherwise d 0 to 1 and d 1 to 0.

Type 11 biases the nodes of type 3 to certain preferred colors depending on whether y κ 2i+1 has its natural value. If it has its natural value then it biases the subgraph S κ i to colors which reflect the behavior of a NOR-gate for S κ i , and otherwise it biases them such that the input nodes I 1 (g κ i ) and I 2 (g κ i ) are indifferent with respect to their neighbors in S κ i , i. e. the nodes of S κ i are biased to their reset state. In particular, the component looks at y κ 2i+1 for 1 ≤ i ≤ N and biases α κ i,1 , α κ i,2 , γ κ i,1 , γ κ i,2 , β κ i,3 , τ κ i,1 , and τ κ i,2 to the color of y κ 2i+1 and β κ i,1 , β κ i,2 , γ κ i,3 , σ κ i,1 , σ κ i,2 , δ κ i,1 , and δ κ i,2 to the opposite.

The aim of the next two components is as follows. We want to bias the comparing nodes g κ i such that we can apply Theorem 1 to obtain either col(g κ i ) = 1 or col(g κ i ) = 0. To achieve this, we need to know the colors of the nodes adjacent to g κ i . For this purpose we introduce -similarly as in the component of type 2 -extra nodes u κ i,j , bias them appropriately, and use their colors instead.

Type 12 looks at y κ 2i+1 , y κ 2i-1 , α κ i,1 , and α κ i,2 and biases u κ i,1 , u κ i,3 , u κ i,5 , u κ i,7 , u κ i,10 , u κ i,12 to white and u κ i,2 , u κ i,4 , u κ i,6 , u κ i,8 , u κ i,9 , u κ i,11 to black if c(y κ 2i+1 ) = c(y κ 2i-1 ). Otherwise, u κ i,3 , u κ i,4 , u κ i,7 , u κ i,8 , u κ i,11 , u κ i,12 are biased to their respective opposite and the biases of the remaining nodes split into the following cases. Node u κ i,1 is biased to c(α κ i,1 ) and u κ i,2 to the opposite.
Similarly, u κ i,5 is biased to c(α κ i,2 ) and u κ i,6 to the opposite. Finally, u κ i,9 is biased to c(α κ i,1 ) ∧ c(α κ i,2 ) and u κ i,10 to the opposite.

Type 13 looks for all 1 ≤ i ≤ m at y κ 2i-1 , α κ i,1 , and α κ i,2 and biases u κ i,14 to c(y κ 2i-1 ) ∧ c(α κ i,1 ) ∧ c(α κ i,2 ) and u κ i,13 to the opposite. Similarly, it looks for all m + 1 ≤ i ≤ 2m at y κ 2i-1 , α κ i,1 , and α κ i,2 and biases u κ i,15 to c(y κ 2i-1 ) ∧ (¬c(α κ i,1 ) ∨ ¬c(α κ i,2 )) and u κ i,16 to the opposite.

Type 14 looks at all nodes of type lower than 14 that are adjacent to g κ i , with a single exception. This finishes the description of G C .

Now we consider the colors of the nodes of G C in an arbitrary local optimum. All of the remaining Lemmas have an inherent statement "for any local optimum P ". We call a gate g κ i correct if col(g κ i ) corresponds to the output of the gate given the colors of its input nodes. In the following we will, among other things, argue about the colors of the comparing nodes v ∈ V C in P . We do this by naming the decisive neighbors of v, their colors, and the color to which v is biased. Then, we can deduce the color of v via Theorem 1 -recall that a necessary condition of Theorem 1 is that v is biased to the opposite color as the color of its decisive neighbors if v is not weakly indifferent. The following Lemmas characterize properties of some components.

Lemma 5. d 0 , d 1 and g κ i for any 1 ≤ i ≤ N -n, κ ∈ {0, 1} are comparing nodes. Either col(g κ i ) = 1 or col(g κ i ) = 0 for all 1 ≤ i ≤ N . Moreover, c(u 0 ) ≠ c(u 1 ).

Proof. In Table 1 we name all nodes adjacent to d 0 , d 1 , and g κ i for all 1 ≤ i ≤ N -n, κ ∈ {0, 1} and the weights of the corresponding edges. By means of the table it can easily be verified that the aforementioned nodes are comparing. Now consider the nodes g κ i . Recall first that Theorem 1 only applies to local optima in which the comparing node is biased to the color that it would have if it were a single node.
The only nodes different from the constants that are incident to any g κ j and at which the components of type 14 do not look are, for each k ∈ {1, 2}, the nodes excepted in the description of type 14. From Lemma 6 we know that c(η κ i ) ≠ c(µ κ i ). Thus, the component of type 14 correctly decides whether g κ i is weakly indifferent as outlined in the description of type 14 and therefore it biases g κ i such that Theorem 1 implies that either col(g κ i ) = 1 or col(g κ i ) = 0 for all 1 ≤ i ≤ N . Due to the weights of the edges incident to u 0 and u 1 , and since they are biased to different colors by type 9, in each local optimum at least one of them is unhappy if both have the same color. Thus, the claim follows.

Lemma 6 (similar to Claims 5.9.B and 5.10.B in [22]). If col(d κ ) = 1 then neither flipping w κ i,1 nor flipping w κ i,2 changes the cut by a weight of type 7. If col(d κ ) = 0 and col(w κ i,1 ) = col(w κ i,2 ) then c(η κ i ) = c(λ κ i ) ≠ c(µ κ i ).

Proof. The proof uses the following claim.

Claim 3. c(d̄ κ i ) ≠ col(d κ ) for all i.

Proof. There are three edges incident to each node d̄ κ i as introduced in type 6, namely one edge of type 6 and two edges of type 7. Since the weight of the edge of type 6 is greater than the sum of all edges of higher type, in particular the two edges of type 7, the claim follows.

Assume col(d κ ) = 1. Then, by Claim 3 we have c(d̄ κ i ) = 0 for all i. Since col(d κ ) = 1, the weights of the five edges incident to θ κ i,1 as depicted in Figure 7 imply c(θ κ i,1 ) = c(η κ i ). Similarly, we can argue that c(θ κ i,2 ) = c(η κ i ). But then, neither a flip of w κ i,1 nor a flip of w κ i,2 can change the cut by a weight of type 7.

Now assume col(d κ ) = 0 and col(w κ i,1 ) = col(w κ i,2 ). Due to Claim 3 we have c(d̄ κ i ) = 1 for all i. The weights of the edges incident to θ κ i,1 and θ κ i,2 imply c(θ κ i,1 ) = 1 and c(θ κ i,2 ) = 0. Since col(w κ i,1 ) = col(w κ i,2 ) and c(θ κ i,1 ) ≠ c(θ κ i,2 ), node η κ i is happy if and only if its color is different from the color of w κ i,1 and w κ i,2 .
Finally, the claim c(η κ i ) = c(λ κ i ) ≠ c(µ κ i ) follows directly from the weights of the edges incident to λ κ i and µ κ i .

Lemma 7 (similar to Lemma 4.1H in [22]). If c(z κ j ) = 1 then c(y κ j-1 ) = 0. If c(y κ j ) = 0 then c(y κ p ) = 0 and c(z κ p ) = 1 for all p ≤ j.

Proof. The sum of the weights of the edges {z κ j , y κ j-1 } and {y κ j-1 , 1} is greater than the sum of all other edges incident to y κ j-1 . Thus, if c(z κ j ) = 1 then c(y κ j-1 ) = 0. Similarly, we can argue that z κ p has its unnatural value if y κ p has its unnatural value. Therefore, the claim follows by induction.

Lemma 8 (similar to Lemma 4.1 in [22]). If g κ i is not correct then the nodes y κ j , z κ j for j ≤ 2i -1 have their unnatural values.

Proof. The proof uses the following claims. If y κ 2i-1 is biased to black by the component of type 8 then c(y κ 2i-1 ) = 1, since c(z κ 2i ) = 0, which is a contradiction. Thus, y κ 2i-1 is biased to 0. Since z κ 2i and y κ 2i-1 are biased to opposite colors by type 8, node z κ 2i is biased to 1. Due to the weight of its incident edges it cannot be white then. But this is a contradiction.

[Table 1. Neighborhood of the nodes d 0 , d 1 , and g κ i ; columns: Node, Neighbor, Type, Relative Weight, Condition.]

Lemma 9. If c(y κ 2i+1 ) = 0 then c(α κ i,1 ) = c(α κ i,2 ) = 0, c(β κ i,3 ) = 0, c(γ κ i,3 ) = 1, c(δ κ i,1 ) = c(δ κ i,2 ) = 1, c(τ κ i,1 ) = c(τ κ i,2 ) = 0, and c(σ κ i,1 ) = c(σ κ i,2 ) = 1.

Proof. Assume first that c(δ κ i,1 ) = c(δ κ i,2 ) = 0. Type 11 biases δ κ i,1 and δ κ i,2 to black. Therefore, both nodes δ κ i,1 and δ κ i,2 are unhappy. Therefore, we may assume that at least one of them is black. If c(β κ i,3 ) = c(γ κ i,3 ) = 1 then β κ i,3 is unhappy because β κ i,3 is biased to 0 by type 11. Now assume c(β κ i,3 ) = c(γ κ i,3 ) = 0. Then, node γ κ i,3 is unhappy since c(y κ 2i ) = 0 has its unnatural value due to Lemma 7 and since γ κ i,3 is biased to 1 by type 11. Now assume c(β κ i,3 ) = 1 and c(γ κ i,3 ) = 0. If col(g κ i ) = 0 then the bias of type 12 implies c(u κ i,11 ) = 1 and c(u κ i,12 ) = 0, which is a contradiction since γ κ i,3 is unhappy then due to the bias of type 11.
But if col(g κ i ) = 1 then the bias of type 12 implies c(u κ i,10 ) = 0 and c(u κ i,9 ) = 1, which is also a contradiction since β κ i,3 is unhappy then due to the bias of type 11. Thus, c(β κ i,3 ) = 0 and c(γ κ i,3 ) = 1. Since c(β κ i,3 ) = 0 we get c(δ κ i,1 ) = c(δ κ i,2 ) = 1 due to the biases of type 11. Then, c(τ κ i,1 ) = c(τ κ i,2 ) = 0 and therefore c(σ κ i,1 ) = c(σ κ i,2 ) = 1, also due to the biases of type 11.

Lemma 10 (partially similar to Lemma 4.3 in [22]). Assume c(y κ 2i+1 ) = 1 and c(y κ 2i-1 ) = 0. If g κ i is correct then z κ 2i , z κ 2i+1 , and y κ 2i have the colors to which they are biased by type 8. If g κ i is not correct then flipping g κ i does not decrease the cut by a weight of an edge of type 3 corresponding to g κ i and increases it by a weight of type 14 if g κ i is indifferent with respect to edges of type 5 and 7.

Proof. The proof uses the following three claims.

Claim 12. Assume c(y κ 2i+1 ) = 1 and c(y κ 2i-1 ) = 0. Then, c(α κ i,j ) = ¬col(I j (g κ i )) and c(β κ i,j ) = col(I j (g κ i )) for 1 ≤ j ≤ 2.

Proof. Node α κ i,1 is biased to 1 by type 11. Now assume c(α κ i,1 ) = 1. Since β κ i,1 is biased to 0 by type 11, it can only be black if γ κ i,1 and u κ i,1 are both white. But if γ κ i,1 is white then u κ i,4 must be black since γ κ i,1 is biased to black by type 11. If col(g κ i ) = 1 then c(u κ i,2 ) = 0 and c(u κ i,1 ) = 1 due to the bias of type 12, which is a contradiction. On the other hand, if col(g κ i ) = 0 then c(u κ i,3 ) = 1 and c(u κ i,4 ) = 0 due to the bias of type 12, which is also a contradiction. Thus, c(β κ i,1 ) = 0. The argumentation for α κ i,2 and β κ i,2 is analogous.

Claim 13. Assume c(y κ 2i+1 ) = 1 and c(y κ 2i-1 ) = 0. Then, c(β κ i,3 ) = col(I 1 (g κ i )) ∨ col(I 2 (g κ i )).

Proof. If an input is white then the corresponding δ κ i,j is black due to Claim 7. Thus, if both inputs are white then β κ i,3 is white. Now assume that at least one input is black. Let I 1 (g κ i ) = 1. Since σ κ i,1 is biased to white, we have c(σ κ i,1 ) = 0. Analogously, we get c(τ κ i,1 ) = 1. Node δ κ i,1 is biased to white by type 11.
If both nodes δ κ i,1 and δ κ i,2 are black then δ κ i,1 is unhappy. Thus, we may assume that at least one of them is white. Since β κ i,3 is biased to 1 by type 11, it can only be white if γ κ i,3 and u κ i,9 are both black. But if γ κ i,3 is black then u κ i,12 must be white since γ κ i,3 is biased to white by type 11. Then, the bias of type 12 implies that if g κ i is white then u κ i,10 is black and u κ i,9 is white, and if g κ i is black then u κ i,11 is white and u κ i,12 is black, each resulting in a contradiction. Thus, c(β κ i,3 ) = 1.

Claim 14. Assume c(y κ 2i+1 ) = 1 and c(y κ 2i-1 ) = 0. If g κ i is correct then c(γ κ i,1 ) = c(γ κ i,2 ) = 1 and c(γ κ i,3 ) = 0. If g κ i is not correct then at least one of the nodes u κ i,2 , u κ i,3 has the same color as g κ i , at least one of the nodes u κ i,6 , u κ i,7 has the same color as g κ i , and at least one of the nodes u κ i,10 , u κ i,11 has the same color as g κ i .

Proof. Assume first that g κ i is correct. From Claim 12 we know that c(β κ i,1 ) = col(I 1 (g κ i )). Since g κ i is correct, at least one of the two nodes β κ i,1 and g κ i is white. Assume first that c(β κ i,1 ) = 1. Then, due to Claim 12, we have c(α κ i,1 ) = 0. If g κ i is white then c(u κ i,3 ) = 1 and c(u κ i,4 ) = 0 since they are biased to 1 and 0, respectively, by type 12. Since at least one of the nodes u κ i,4 and β κ i,1 is white and γ κ i,1 is biased to black by type 11, it is actually black. Analogously, we can argue that γ κ i,2 is also black. Moreover, by Claim 13 we know that c(β κ i,3 ) = col(I 1 (g κ i )) ∨ col(I 2 (g κ i )). Since g κ i is correct, it has the opposite color as β κ i,3 . If col(g κ i ) = 1 then c(α κ i,1 ) = c(α κ i,2 ) = 1 and therefore c(u κ i,11 ) = 0 and c(u κ i,12 ) = 1 since they are biased to 0 and 1, respectively, by type 12. Therefore, at least one of the nodes u κ i,12 and β κ i,3 is black. Thus, γ κ i,3 has the color to which it is biased by type 11, i. e. 0.

Now assume that g κ i is not correct. If col(I 1 (g κ i )) = 1 then c(α κ i,1 ) = 0 and c(β κ i,1 ) = 1 due to Claim 12. Moreover, since g κ i is not correct, we have col(g κ i ) = 1.
Then c(α κ i,1 ) = 0 and the biases of type 12 imply c(u κ i,1 ) = 0 and c(u κ i,2 ) = 1. If col(I 1 (g κ i )) = 0 then c(α κ i,1 ) = 1 and c(β κ i,1 ) = 0 due to Claim 12. Since γ κ i,1 is biased to 1 by type 11 we get c(γ κ i,1 ) = 1. Moreover, since c(α κ i,1 ) = 1 the biases of type 12 imply c(u κ i,1 ) = 1, c(u κ i,2 ) = 0, c(u κ i,4 ) = 0 and c(u κ i,3 ) = 1. The proof for c(u κ i,6 ) and c(u κ i,7 ) is analogous. By Claim 13 we know that c(β κ i,3 ) = col(I 1 (g κ i )) ∨ col(I 2 (g κ i )). Since g κ i is not correct, we have col(g κ i ) = c(β κ i,3 ). If c(β κ i,3 ) = 0 then, due to Claim 12, we have c(α κ i,1 ) = c(α κ i,2 ) = 1. Then, the biases of the component of type 12 imply c(u κ i,9 ) = 1 and c(u κ i,10 ) = 0. Thus, u κ i,10 has the same color as g κ i . If c(β κ i,3 ) = 1 then c(γ κ i,3 ) = 0 since it is biased to white by type 11. Moreover, c(α κ i,1 ) = 0 or c(α κ i,2 ) = 0 due to Claim 12. Then, the biases of the component of type 12 imply c(u κ i,9 ) = 0 and c(u κ i,10 ) = 1 as well as c(u κ i,12 ) = 1 and c(u κ i,11 ) = 0. Then, we have c(u κ i,10 ) = col(g κ i ), which proves the claim.

Assume c(y κ 2i+1 ) = 1 and c(y κ 2i-1 ) = 0. Assume furthermore that g κ i is correct. Then, due to Claim 14 we have c(γ κ i,1 ) = c(γ κ i,2 ) = 1 and c(γ κ i,3 ) = 0. Then, if the nodes y κ j , z κ j for all j are biased to their natural values then due to c(y κ 2i+1 ) = 1 we get c(z κ 2i+1 ) = 0, c(y κ 2i ) = 1, and c(z κ 2i ) = 0. If, on the other hand, the nodes y κ j , z κ j for all j are biased to their unnatural values then due to c(y κ 2i-1 ) = 0 we get c(z κ 2i ) = 1, c(y κ 2i ) = 0, and c(z κ 2i+1 ) = 1.

Now assume that g κ i is not correct. Due to c(y κ 2i-1 ) = 0, Lemma 7 implies c(y κ 2j+1 ) = 0 for all j < i. Then, Lemma 9 implies c(α κ j,1 ) = c(α κ j,2 ) = 0 and c(σ κ j,1 ) = c(σ κ j,2 ) = 1 for all j < i. Then, Claim 14 implies that flipping g κ i does not decrease the cut by a weight of type 3.
Finally, Claim 12 implies c(α κ i,j ) = ¬col(I j (g κ i )) for 1 ≤ j ≤ 2. Thus, flipping g κ i to its correct color gains a weight of type 14 if g κ i is indifferent with respect to edges of type 5 and 7.

Lemma 11. If col(d κ ) = 1, col(d κ̄ ) = 0, and all nodes y κ i , z κ i for 0 ≤ i ≤ 2N + 1 are biased to their natural values then c(y κ 1 ) = 1.

Proof. Assume col(d κ ) = 1, col(d κ̄ ) = 0, and that all nodes y κ i , z κ i for 0 ≤ i ≤ 2N + 1 are biased to their natural values. We show that all gates of G κ C are correct. For the sake of contradiction we assume that G κ C contains an incorrect gate and let g κ i be the incorrect gate with the highest index. We first show by induction that the nodes y κ j , z κ j for j > 2i + 1 and y κ 2i+1 have their natural values. Since y κ 2N +1 is biased to its natural value, we have c(y κ 2N +1 ) = 1. Assume c(y κ 2j+1 ) = 1 for any j > i. If any one of the nodes z κ 2j+1 , y κ 2j , z κ 2j has its unnatural value then Lemma 7 implies c(y κ 2j-1 ) = 0. Then, Lemma 10 implies that all nodes z κ 2j+1 , y κ 2j , z κ 2j have their natural values, whereafter Claim 4 implies c(y κ 2j-1 ) = 1, which is a contradiction. Thus, c(y κ 2j+1 ) = 1 implies c(y κ 2j-1 ) = 1 for any j > i and therefore it follows by induction that all nodes y κ j , z κ j for j > 2i + 1 and y κ 2i+1 have their natural values. Since g κ i is incorrect, all nodes y κ j , z κ j for j ≤ 2i -1 have their unnatural values due to Lemmas 8 and 7. According to Lemmas 9 and 10, correcting g κ i does not decrease the cut by a weight of type 3 and gains a weight of type 14. In the following, we distinguish between three cases for the index i and show that g κ i is unhappy in each of the cases. First, if i > 2n + 2m then there are no edges of type 5 or 7 incident to g κ i . Thus, g κ i is unhappy then. Second, if 2m + 1 ≤ i ≤ 2n + 2m then there are no edges of type 5 incident to g κ i . Due to Lemma 6, correcting g κ i does not decrease the cut by a weight of type 7.
Third, if i ≤ 2m then there are no edges of type 7 incident to g κ i . Correcting g κ i does not decrease the cut by a weight of type 5 since, due to the biases of type 13, we have c(u κ i,14 ) = 0, c(u κ i,13 ) = 1 for i ≤ m and c(u κ i,16 ) = 1, c(u κ i,15 ) = 0 for m < i ≤ 2m. Altogether, g κ i is unhappy in each of the three cases, which is a contradiction. Thus, g κ i is correct for all i. Thus, all nodes y κ i , z κ i for 1 ≤ i ≤ 2N + 1 have their natural values.

Lemma 15. If c(y κ 1 ) = 0, c(u κ ) = 0, and c(u κ̄ ) = 1 then col(d κ ) = 1 and col(d κ̄ ) = 0.

Proof. Assume c(y κ 1 ) = c(u κ ) = 0 and c(u κ̄ ) = 1. Then, independently of the color of y κ 1 , node d κ is biased to 1 and d κ̄ to 0 by type 10 -recall that Theorem 1 only applies to local optima in which the comparing node is biased to the color that it would have if it were a single node. Lemma 7 implies c(y κ 0 ) = 0. Since c(u κ ) = 0 and c(y κ 0 ) = 0, node y κ 0 and its counterpart, namely the constant 0, are decisive for d κ . Thus, Theorem 1 implies col(d κ ) = 1. Since c(u κ̄ ) = 1, node u κ̄ and its counterpart, namely the constant 1, are decisive for d κ̄ . Thus, Theorem 1 implies col(d κ̄ ) = 0.

We consider the smoothed complexity of local Max-Cut for graphs with degree O(log n). Smoothed analysis, as introduced by Spielman and Teng [20], is motivated by the observation that practical data is often subject to some small random noise. Formally, let Ω n,m be the set of all weighted graphs with n vertices and m edges, in which each graph has maximum degree O(log n). In this paper, if A is an algorithm on graphs with maximum degree O(log n), then the smoothed complexity of A with σ-Gaussian perturbation is

  max G∈Ω n,m E x m [ T A (G w max •x m ) ],

where x m = (x 1 , . . . , x m ) is a vector of length m, in which each entry is an independent Gaussian random variable of standard deviation σ and mean 0.
E x m indicates that the expectation is taken over vectors x m according to the distribution described before, T A (G) is the running time of A on G, and G w max •x m is the graph obtained from G by adding w max • x i to the weight of the i-th edge in G, where w max is the largest weight in the graph. We assume that the edges are considered according to some arbitrary but fixed ordering. According to Spielman and Teng, an algorithm A has polynomial smoothed complexity if there exist positive constants c , n 0 , σ 0 , k 1 , and k 2 such that for all n > n 0 and 0 ≤ σ < σ 0 we have

  max G∈Ω n,m E x m [ T A (G w max •x m ) ] ≤ c • n^{k 1} • σ^{-k 2} .

In this paper, we use a relaxation of polynomial smoothed complexity [21], which builds on Blum and Dunagan [3] (see also Beier and Vöcking [4]). According to this relaxation, an algorithm A has probably polynomial smoothed complexity if there exist positive constants c , n 0 , σ 0 , and α such that for all n > n 0 , 0 ≤ σ < σ 0 , and δ ∈ (0, 1) we have

  Pr x m [ T A (G w max •x m ) ≤ c • (n/(δσ))^α ] ≥ 1 -δ.

Theorem 16. Let A be some FLIP local search algorithm for local Max-Cut. Then, A has probably polynomial smoothed complexity on any graph with maximum degree O(log n).

Proof. Let V = {v 1 , . . . , v n }, and denote by d i the degree of v i . Furthermore, let w i,j be the weight of edge (v i , v j ). Let m = |E|, and x m = (x 1 , . . . , x m ) a vector of Gaussian random variables of standard deviation σ and mean 0. Alternatively, we denote by x i,j the Gaussian random variable which perturbs edge (v i , v j ), i. e., w̃ i,j = w i,j + w max • x i,j represents the weight of (v i , v j ) in the perturbed graph G w max •x m . In the following, G is an arbitrary graph in Ω n,m where m = O(n log n). We show that for any δ ∈ (0, 1) there are constants c , n 0 , σ 0 , k 1 , and k 2 such that for all n > n 0 and 0 ≤ σ < σ 0 we obtain

  Pr x m [ T A (G w max •x m ) ≤ c • δ^{-k 1} • n^{k 2} /σ ] ≥ 1 -δ.   (1)

Then, (1) implies the statement of the theorem (cf. [4]).
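The model just defined, perturb every edge weight by w max times an independent Gaussian and then run a FLIP local search (flip any node whose move strictly increases the cut), can be sketched as follows. The example graph, the value of σ, the random seeds, and the order in which nodes are scanned are illustrative assumptions, not taken from the paper.

```python
import random

def flip_local_search(n, edges, seed=0):
    """FLIP local search for Max-Cut: repeatedly flip a node whose move
    strictly increases the cut weight; return the partition and step count."""
    rng = random.Random(seed)
    side = [rng.randrange(2) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    steps = 0
    improved = True
    while improved:
        improved = False
        for v in range(n):
            # gain of flipping v = (weight to v's own side) - (weight across the cut)
            gain = sum(w if side[v] == side[u] else -w for u, w in adj[v])
            if gain > 0:
                side[v] ^= 1
                steps += 1
                improved = True
    return side, steps

def perturb(edges, sigma, seed=1):
    """G_{w_max * x_m}: add w_max * x_i to the i-th weight, x_i ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    w_max = max(w for _, _, w in edges)
    return [(u, v, w + w_max * rng.gauss(0.0, sigma)) for u, v, w in edges]

# Illustrative 4-node graph; the perturbation breaks ties between equal weights.
edges = [(0, 1, 3.0), (1, 2, 2.0), (2, 3, 3.0), (3, 0, 2.0), (0, 2, 1.0)]
side, steps = flip_local_search(4, perturb(edges, sigma=0.05))
print(steps >= 0 and len(side) == 4)  # → True
```

The loop terminates because every step strictly increases the cut weight and there are only finitely many partitions.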
In order to show the inequality above, we make use of the fact that the sum of k Gaussian random variables with variance σ 2 and mean 0 is a Gaussian random variable with variance kσ 2 and mean 0. Let X 1 , . . . , X k be k Gaussian random variables with variance σ 2 and mean 0. Furthermore, let a be some real number, and S ⊂ {1, . . . , k}. Then, we can state the following claim.

Claim 17. For some large constant c and any δ ∈ (0, 1),

  Pr [ | Σ j∈S X j -Σ j∉S X j -a | ≤ δσ/(c • 2^k ) ] ≤ δ • 2^{-k} .   (2)

Proof. Let X = Σ j∈S X j and Y = Σ j∉S X j . Then, X is a Gaussian random variable with variance |S|σ 2 , and Y is a Gaussian random variable with variance (k -|S|)σ 2 . Since X is distributed according to the density function

  f (x) = 1/(σ √(2π|S|)) • e^{-x 2 /(2|S|σ 2 )},

which is bounded from above by 1/(σ √(2π)), the probability that X falls into any fixed interval of length 2δσ/(c • 2^k ) is at most δ • 2^{-k} for c large enough. Setting b = Y + a we obtain the claim.

In order to show the theorem, we normalize the weights by setting the largest weight to 1, and dividing all other weights by w max . That is, we obtain some graph G' with weights w' i,j = w i,j /w max . The edge weights of G' are perturbed accordingly by Gaussian random variables with variance σ 2 and mean 0. Clearly, an improving flip in G' corresponds to an improving flip in G, since all weights are scaled by the same factor. Therefore, we consider G' instead of G in the rest of the proof.

In the next step, we show that for an arbitrary but fixed partition P of G' and node v i , flipping v i increases (or decreases) the cut by Ω(δσ/(n • 2^{d i })), with probability 1 -δ/(2n • 2^{d i }). This is easily obtained from Claim 17 in the following way. Define S to be the set of the neighbors of v i which are in the same partition as v i according to P . Let e 1 , . . . , e d i be the edges incident to v i , and denote by w 1 , . . . , w d i the weights of these edges in G' . We assume w. l. o. g. that e 1 , . . . , e |S| have both ends in the same partition as v i , and S = {1, . . . , |S|}. Furthermore, let a = Σ j∉S w j -Σ j∈S w j , k = d i , and δ' = δ/(2n). Applying now Equation (2) we obtain the desired result.
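The fact the proof starts from, that a sum of k independent Gaussians with mean 0 and variance σ 2 is again Gaussian with mean 0 and variance kσ 2, can be checked numerically. The choices of k, σ, and the sample size below are arbitrary.

```python
import random

def empirical_variance_of_sum(k: int, sigma: float, trials: int = 100_000, seed: int = 0) -> float:
    """Estimate Var(X_1 + ... + X_k) for independent X_j ~ N(0, sigma^2)
    by sampling the sum many times and computing the sample variance."""
    rng = random.Random(seed)
    sums = [sum(rng.gauss(0.0, sigma) for _ in range(k)) for _ in range(trials)]
    mean = sum(sums) / trials
    return sum((s - mean) ** 2 for s in sums) / trials

k, sigma = 5, 0.3
est = empirical_variance_of_sum(k, sigma)
# The sample variance concentrates near k * sigma^2 = 0.45.
print(abs(est - k * sigma ** 2) < 0.02)  # → True
```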
For a node v i , there are at most Σ j=0..d i (d i choose j) = 2^{d i } possibilities to partition the edges into two parts, one subset in the same partition as v i and the other subset in the other partition. Therefore, by applying the union bound we conclude that any flip of an unhappy v i increases the cut by Ω(δσ/(n • 2^{d i })), with probability at least 1 -δ/(2n). Since there are n nodes in total, we may apply the union bound again, and obtain that every flip (carried out by some unhappy node) increases the cut by Ω(δσ/(n • 2^{d i })), with probability at least 1 -δ/2. Since d i = O(log n) and the largest weight in G' is 1, we conclude that the largest cut in G' may have weight O(n log n). Furthermore, for each i we have |x i | ≤ l √(ln n) with probability 1 -O(n^{-l} ) whenever l is large enough (remember that σ < 1). Let A 1 be the event that there is some x i with |x i | = ω(log n), and A 2 the event that there is a node v i and a partition P such that flipping v i increases the cut by at most τ δσ/(n • 2^{d i }), where τ is a very small constant. We know that Pr[A 1 ] = n^{-ω(1)} and Pr[A 2 ] < δ/2. Thus, as long as δ = n^{-O(1)} , the total number of steps needed by A is at most n^{O(1)} /(δσ) with probability 1 -(Pr[A 1 ] + Pr[A 2 ]) > 1 -δ.

Now we consider the case when δ = n^{-ω(1)} . Again, let A 1 be the event that there is some x i with |x i | = ω(log δ^{-1} ). Since x i is a Gaussian random variable, Pr[A 1 ] = δ^{ω(1)} . On the other hand, let A 2 be the event that there is a node v i and a partition P such that flipping v i increases the cut by at most τ δσ/(n • 2^{d i }), where τ is a very small constant. Again, Pr[A 2 ] < δ/2. Then, the total number of steps needed by A is at most n^{O(1)} • log(δ^{-1} )/(δσ) with probability 1 -(Pr[A 1 ] + Pr[A 2 ]) > 1 -δ.

In this paper, we introduced a technique by which we can substitute graphs with certain nodes of unbounded degree, namely so called comparing nodes, by graphs with nodes of maximum degree five such that local optima of the former graphs induce unique local optima of the latter ones.
Using this technique, we show that the problem of computing a local optimum of the Max-Cut problem is PLS-complete even on graphs with maximum degree five. We do not show that our PLS-reduction is tight, but the tightness of our reduction would not result in the typical knowledge gain anyway, since the properties that come along with the tightness of PLS-reductions, namely the PSPACE-completeness of the standard algorithm problem and the existence of instances that are exponentially many improving steps away from any local optimum, are already known for maximum degree four [15]. The obvious remaining question is to ask for the complexity of local Max-Cut on graphs with maximum degree four. Is it in P? Is it PLS-complete? Another important question is whether local Max-Cut has in general probably polynomial smoothed complexity. Unfortunately, the methods used so far do not seem applicable to show that the local Max-Cut problem has probably polynomial smoothed complexity in graphs with super-logarithmic degree (cf. also [18]).

Consider the colors of the nodes as induced by their types. First, c P (v i ) ≠ c P (v i+1 ) for all N + 1 ≤ i ≤ 3N . Thus, c P (v N +2i ) = 1 and c P (v N +2i-1 ) = 0 for all 1 ≤ i ≤ N and c P (v 3N +1 ) = 0. Then, for each N -n + 1 ≤ i ≤ N we have c P (v i ) = 1 if value(g i ) = 1 and c P (v i ) = 0 otherwise, i. e. the colors of the nodes v i for N -n + 1 ≤ i ≤ N correspond to the assignment for the inputs of C. Now consider the nodes v i for 1 ≤ i ≤ N -n. If g i is a NOT-gate with I(g i ) = g j for m + 1 ≤ j ≤ N then c P (v i ) ≠ c P (v j ), i. e. the color of v i corresponds to the output of a NOT-gate w. r. t. the color of v j . Finally, if g i is a NOR-gate with I 1 (g i ) = g k and I 2 (g i ) = g j for m + 1 ≤ j < k ≤ N then c P (v i ) = 1 if and only if c P (v j ) = c P (v k ) = 0, since v i is of type III and its neighbor v N +2i is already known to be black. Thus, the color of v i corresponds to the output of a NOR-gate with respect to the colors of v j and v k .
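The gate-by-gate correspondence just described amounts to evaluating a circuit of NOT- and NOR-gates on the node colors. A minimal sketch; the gate list, wire names, and encoding are invented for illustration and are not the paper's construction:

```python
def eval_nor_circuit(gates, inputs):
    """Evaluate a circuit of NOT/NOR gates.

    gates: ordered list of (name, gate) pairs, where gate is ('NOT', j) or
           ('NOR', j, k); each gate may only reference already-known wires,
           mirroring the index ordering of the gates g_i in the construction.
    inputs: dict mapping input wire names to 0/1 (white/black colors).
    """
    value = dict(inputs)
    for name, gate in gates:
        if gate[0] == 'NOT':
            value[name] = 1 - value[gate[1]]
        else:  # NOR: output is 1 iff both inputs are 0
            value[name] = 1 if (value[gate[1]] == 0 and value[gate[2]] == 0) else 0
    return value

# Hypothetical two-gate circuit computing g1 = a OR b via NOR and NOT.
gates = [('g2', ('NOR', 'a', 'b')), ('g1', ('NOT', 'g2'))]
out = eval_nor_circuit(gates, {'a': 0, 'b': 1})
print(out['g1'])  # → 1
```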
Therefore, the color of each node v i for 1 ≤ i ≤ N -n corresponds to the output of g i in C. In particular, the colors of v 1 , . . . , v m correspond to the output of C.

In the following we show that our reduction is in logspace. The number of edges in T is linear in N since each node has maximum degree four. The weights of the edges are powers of two. Thus, we only need to store the exponents of the weights. If we write an edge weight to the output tape then we first write a "1" for the most significant bit and then we write as many "0"s as determined by the exponent.

Now we show (b). We let G C = (V C , E C ) be the graph obtained from T by omitting the edges described in (iii). Furthermore, we let s i = v N -n+i for 1 ≤ i ≤ n and t j = v j for 1 ≤ j ≤ m. Then, the nodes s i and t j are of degree one. As in the proof for (a) we get f (c P (s)) = c P (t).

Footnotes: If g κ i is weakly indifferent then the component of type 14 biases g κ i to c(α κ i,1 ) ∧ c(α κ i,2 ). For this work, Spielman and Teng were awarded the Gödel Prize in 2008. For the definition of probably polynomial smoothed complexity see Section 5.
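The exponent encoding of the power-of-two edge weights described above, a "1" followed by as many "0"s as the exponent, is exactly the binary representation of 2^e, so a logspace writer only needs a counter for the exponent. A minimal sketch (the function name is ours):

```python
def write_weight(exponent: int) -> str:
    """Binary encoding of the edge weight 2**exponent: a '1' followed by
    'exponent' zeros. Only the exponent needs to be stored, so writing the
    weight requires logarithmic space in the weight's magnitude."""
    return "1" + "0" * exponent

print(write_weight(0))  # → 1
print(write_weight(5))  # → 100000
assert int(write_weight(5), 2) == 2 ** 5  # the string really denotes 2^5
```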