Constraint optimization and landscapes

Reading time: 5 minutes

📝 Original Info

  • Title: Constraint optimization and landscapes
  • ArXiv ID: 0709.1023
  • Date: 2008-09-25
  • Authors: not stated in the paper body; could not be determined.

📝 Abstract

We describe an effective landscape introduced in [1] for the analysis of Constraint Satisfaction problems, such as Sphere Packing, K-SAT and Graph Coloring. This geometric construction re-expresses these problems in the more familiar terms of optimization in rugged energy landscapes. In particular, it allows one to understand the puzzling fact that unsophisticated programs are successful well beyond what was considered to be the 'hard' transition, and suggests an algorithm defining a new, higher, easy-hard frontier.

📄 Full Content

Amongst glassy systems, the particular class of 'Constraint Optimisation' has received constant attention [2,3]. These are problems in which we are given a set of constraints that must be satisfied, and our task is to tighten the conditions as far as possible without violating them. The typical example is packing: we are asked to put as many objects (spheres, say) as possible in a given volume, without violating the constraint that they should not overlap. Another example that has been widely studied by computer scientists is K-SAT, where we have N Boolean variables and αN logic clauses: our task is to add more and more clauses while still finding some assignment of the variables that satisfies them. One last example is the q-colouring problem: we have a graph with N nodes and αN links, and our task is to colour each vertex with one of q colours, with the condition that linked vertices have different colours. If we consider a graph as a set of nodes and a predefined list of links, then adding links one by one makes the problem harder and harder.

What motivated our interest in these problems was what we perceived as a confusing situation in the literature. Consider first sphere packing. In Fig. 1 we show different volume fractions that are often quoted in the literature. In particular, 'Random Close Packing' (as defined empirically), the 'optimal random packing' (zero-temperature glass state) and the so-called 'J-point' are sometimes used as synonyms and sometimes not. The 'J-point' deserves some explanation. It can be defined as follows [4]: one starts from small spheres in random positions, and 'inflates' them gradually (in the computer, of course), displacing them only the minimum necessary to avoid overlaps [6]. At some point the system blocks and the procedure stops: this is the J-point. It was studied extensively by the Chicago group [4,5], who proposed that it be identified with Random Close Packing.

On the other hand, Random Close Packing is often associated with the zero-temperature ideal glass state. The two identifications seem hardly compatible, as they would imply that the fast algorithm described above rapidly finds the ideal glass state, contrary to all our prejudices.
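The inflation protocol described above is easy to sketch in code. The following is a deliberately minimal 2D toy version (equal disks on a unit torus, greedy pairwise overlap removal), not the Chicago group's actual algorithm; all function names and the simple relaxation scheme are our own illustrative choices.

```python
import math
import random

def j_point_packing(n=12, steps=400, dr=1e-3, seed=0):
    """Toy 2D J-point protocol: inflate equal disks in the unit square
    (periodic boundaries), displacing them only as much as needed to
    remove overlaps after each inflation step. Stops ('jams') when the
    overlaps can no longer be resolved."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    r = 0.0
    for _ in range(steps):
        r_try = r + dr
        ok, pts_try = relax(pts, r_try)
        if not ok:  # blocked: this is the (toy) J-point
            break
        pts, r = pts_try, r_try
    return pts, r

def relax(pts, r, iters=200, tol=1e-9):
    """Greedy overlap removal: push each overlapping pair apart just
    enough that the two disks touch. Returns (success, positions)."""
    pts = [list(p) for p in pts]
    n = len(pts)
    for _ in range(iters):
        worst = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                dx = pts[j][0] - pts[i][0]
                dy = pts[j][1] - pts[i][1]
                dx -= round(dx)  # nearest periodic image
                dy -= round(dy)
                d = math.hypot(dx, dy) or tol
                overlap = 2 * r - d
                if overlap > tol:
                    worst = max(worst, overlap)
                    ux, uy = dx / d, dy / d
                    for k, s in ((i, -0.5), (j, 0.5)):
                        pts[k][0] = (pts[k][0] + s * overlap * ux) % 1.0
                        pts[k][1] = (pts[k][1] + s * overlap * uy) % 1.0
        if worst <= tol:
            return True, [tuple(p) for p in pts]
    return False, pts
```

Because the radius only advances when the relaxation succeeds, the returned configuration is always overlap-free at the returned radius; the final, blocked radius is the toy analogue of the J-point volume fraction.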

Let us now turn to the SAT and Colouring problems. Carrying over the knowledge from mean-field glasses, it was concluded that the set of solutions evolves, as the difficulty is increased, in the following manner: for low α the set of solutions is connected. As α is increased there is a well-defined 'dynamic' or 'clustering' point α_d at which the set of solutions breaks into many comparable disconnected pieces [11]. At a larger value α_K the volume becomes dominated by a few regions, and finally, at some α_c, there are no more solutions [7].

Beyond the clustering transition α_d the problem was thought to become hard. And yet, as it turned out, even very simple programs [15] manage to find solutions well beyond this 'hard' transition! The situation is shown in Fig. 2. This is another puzzle we set out to clarify.

A first observation one can make is that the 'J-point' procedure can be generalised to all of these problems: in all cases one simply increases the difficulty gradually, keeping the system satisfied by minimal changes at each step. For example, for the Colouring problem, one adds one link at a time and corrects any miscolouring generated by the addition.

The number of colour flips needed each time to correct the miscolouring grows, and it diverges with a well-defined, reproducible power law (see Fig. 3) at a value α_*, which is by definition the limit reached by the program.
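The add-a-link-and-repair step can be sketched as follows. This is a minimal illustrative version, assuming a simple min-conflict flip as the repair move; the function name and tie-breaking rule are our own, not the paper's exact implementation.

```python
import random

def add_link_and_repair(colors, adj, u, v, q, rng, max_flips=10000):
    """Incremental-difficulty protocol for q-colouring: add the link
    (u, v), then repair conflicts by single-colour flips. Returns the
    number of flips used, or None if the repair did not finish within
    max_flips (the analogue of the program reaching its limit alpha_*)."""
    adj[u].add(v)
    adj[v].add(u)
    flips = 0
    while flips < max_flips:
        # nodes that currently share a colour with a neighbour
        conflicts = [x for x in adj
                     if any(colors[x] == colors[y] for y in adj[x])]
        if not conflicts:
            return flips
        x = rng.choice(conflicts)
        # flip x to a colour minimising its conflicts (random tie-break)
        costs = {c: sum(colors[y] == c for y in adj[x]) for c in range(q)}
        best = min(costs.values())
        colors[x] = rng.choice([c for c in costs if costs[c] == best])
        flips += 1
    return None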

Second, and most important, we introduce a (pseudo) energy landscape as follows (see Fig. 4). As the difficulty of the problem is increased, by increasing the radius, adding clauses, or adding links, the set of satisfied configurations becomes a subset of the previous one. This nesting allows one to construct a single-valued envelope function (Fig. 4): the pseudo-energy.

It is easy to see that the J-point procedure is just a zero-temperature descent on this landscape.
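One concrete (toy) reading of this envelope, for K-SAT with clauses added in a fixed order: assign each configuration minus the length of the prefix of clauses it satisfies before the first violated one. This is our own minimal sketch of the construction in Fig. 4, not the paper's exact definition; the clause encoding below (tuples of `(variable, sign)` literals) is an assumption for illustration.

```python
def satisfies(assignment, clause):
    """A K-SAT clause is a tuple of literals (var, sign); it is
    satisfied if any literal evaluates to True."""
    return any(assignment[v] == s for v, s in clause)

def pseudo_energy(assignment, clauses):
    """Envelope over increasing difficulty: with clauses added in a
    fixed order, E(x) is minus the number of clauses in the prefix
    that x satisfies before the first violated one. A configuration
    satisfying every clause attains the minimum, -len(clauses)."""
    for k, clause in enumerate(clauses):
        if not satisfies(assignment, clause):
            return -k
    return -len(clauses)
```

In this picture, adding a clause can only raise E(x) or leave it unchanged, and the minimal-repair moves of the incremental procedure are exactly downhill moves on E: a zero-temperature descent.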

Fig. 2 (caption): Why is it so easy to go beyond α_d, the putative 'hard' limit? Values of the parameter: i) α_d, the 'clustering' transition; ii) α_ASAT for ASAT and α_* for our algorithm; iii) α_SP, the performance of a Survey Propagation implementation; iv) α_c, the optimum [8].

We can now carry over everything we know from energy landscapes. For the J-point in the context of sphere packings, we conclude that:

• The J-point, being the result of a gradient descent from a random configuration, cannot be the optimal amorphous packing. It is just the analogue of the infinite-temperature inherent structures.

• It is in general more compact than the clustering (Mode Coupling) point, since it gains from ‘falling to the bottom of one cluster’.

• It may be more or less compact than the Kauzmann (α K ) point itself, depending on the dimension, polydispersity, shape, etc.

For problems such as SAT and Coloring, we now have a recurs


This content is AI-processed based on open access ArXiv data.
