These are the lecture notes for the DIMACS Tutorial "Limits of Approximation Algorithms: PCPs and Unique Games" held at the DIMACS Center, CoRE Building, Rutgers University on 20-21 July, 2009. This tutorial was jointly sponsored by the DIMACS Special Focus on Hardness of Approximation, the DIMACS Special Focus on Algorithmic Foundations of the Internet, and the Center for Computational Intractability with support from the National Security Agency and the National Science Foundation. The speakers at the tutorial were Matthew Andrews, Sanjeev Arora, Moses Charikar, Prahladh Harsha, Subhash Khot, Dana Moshkovitz and Lisa Zhang. The scribes were Ashkan Aazami, Dev Desai, Igor Gorodezky, Geetha Jagannathan, Alexander S. Kulikov, Darakhshan J. Mir, Alantha Newman, Aleksandar Nikolov, David Pritchard and Gwen Spencer.
Limits of Approximation Algorithms: PCPs and Unique Games (DIMACS Tutorial Lecture Notes)
In this lecture, we will introduce the notion of approximation algorithms and see examples of approximation algorithms for a variety of NP-hard optimization problems.
Let Q be an optimization problem. An optimal solution for an instance of this optimization problem is a feasible solution that achieves the best value of the objective function. Let OPT(I) denote the value of the objective function for an optimal solution to an instance I.
Definition 1.1.1 (Approximation ratio). An algorithm for Q has approximation ratio α if, for every instance I, the algorithm produces a solution of cost ≤ α · OPT(I) (with α ≥ 1) when Q is a minimization problem, and of cost ≥ α · OPT(I) (with α ≤ 1) when Q is a maximization problem.
We are interested in polynomial-time approximation algorithms for NP-hard problems. How does a polynomial-time approximation algorithm know the cost of the optimal solution, which is NP-hard to compute? How does one guarantee that the output of the algorithm is within a factor α of the optimal solution when computing the optimal solution is NP-hard? In the various examples below, we see techniques for handling this dilemma.
- 2-approximation for the metric Travelling Salesman Problem (metric-TSP): Consider the complete graph G formed by n points in a metric space, and let d_ij be the distance between points i and j. The metric-TSP problem is to find a minimum-cost cycle that visits every point exactly once.
The following observation relating the cost of the minimum spanning tree (MST) to the optimal TSP will be crucial in bounding the approximation ratio.
Observation 1.1.2. The cost of the minimum spanning tree (MST) is at most the optimal cost of TSP. (Deleting any edge from an optimal tour yields a spanning tree.)
Algorithm A:
(a) Find the MST.
(b) Double each edge of the MST; every vertex now has even degree, so the resulting multigraph is Eulerian.
(c) Find an Eulerian traversal, shortcut repeated vertices to obtain a tour, and output it. (By the triangle inequality, shortcutting does not increase the cost.)
Observe that TSP ≤ cost(A) ≤ 2 · MST ≤ 2 · TSP, where the last inequality uses Observation 1.1.2.
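As a concrete illustration, here is a minimal Python sketch of steps (a)-(c) for Euclidean points. It uses the standard shortcut: a DFS preorder of the MST visits vertices in the same order as shortcutting an Eulerian traversal of the doubled MST. All names are illustrative, not from the notes.

```python
import math

def two_approx_tsp(points):
    """2-approximation for metric TSP on Euclidean points:
    build an MST, then output the shortcut Eulerian traversal
    (equivalently, a DFS preorder of the MST)."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])

    # (a) Prim's algorithm for the MST.
    adj = {i: [] for i in range(n)}
    in_tree = {0}
    best = {i: (dist(0, i), 0) for i in range(1, n)}
    while len(in_tree) < n:
        v = min(best, key=lambda i: best[i][0])
        _, parent = best.pop(v)
        in_tree.add(v)
        adj[parent].append(v)
        adj[v].append(parent)
        for u in best:
            if dist(v, u) < best[u][0]:
                best[u] = (dist(v, u), v)

    # (b)+(c) DFS preorder = Eulerian traversal of the doubled
    # MST with repeated vertices shortcut away.
    tour, stack, seen = [], [0], set()
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            tour.append(v)
            stack.extend(reversed(adj[v]))
    cost = sum(dist(tour[k], tour[(k + 1) % n]) for k in range(n))
    return tour, cost
```

On the four corners of a unit square the MST has cost 3, and the returned tour costs at most 2 · MST = 6 (in fact it finds the optimal tour of cost 4 here).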
- A 1.5-approximation to metric-TSP: The approximation ratio can be improved to 1.5 by modifying the above algorithm using an idea due to Christofides [Chr76]. Instead of doubling each edge of the MST, a minimum-cost matching is added among the odd-degree nodes of the MST. Observe that the cost of this matching is at most (1/2) · TSP. So, cost(A) ≤ MST + (1/2) · TSP ≤ (3/2) · TSP.
Note that since 1976, there has been no further improvement on this approximation ratio.
The above are examples of approximation algorithms that attain a constant approximation ratio. In the next section, we will see how to get arbitrarily close to the optimal solution when designing an approximation algorithm, i.e., approximation ratios arbitrarily close to 1.
A PTAS is a family of polynomial-time algorithms such that for every ε > 0, there is an algorithm in this family that is a (1 + ε)-approximation to the NP-hard problem Q if it is a minimization problem, and a (1 − ε)-approximation if Q is a maximization problem.
The above definition allows the running time to depend arbitrarily on ε, but for each ε it should be polynomial in the input size, e.g., n^{1/ε} or n^{2^{1/ε}}.
Various types of number problems typically have a type-1 PTAS. The usual strategy is to round down the numbers involved so that the number of distinct values is small, and then use dynamic programming.
The classic example of such an approach is the Knapsack problem.
Knapsack problem. Given a set of n items of sizes s_1, s_2, ..., s_n with s_i ≤ 1 for all i, profits c_1, c_2, ..., c_n associated with these items, and a knapsack of capacity 1, find a subset I of items whose total size is at most 1 such that the total profit is maximized.
The knapsack problem is NP-hard in general; however, if the profits come from a small set of values, there is an efficient polynomial-time algorithm.
Observation 1.2.1. If the values c_1, c_2, ..., c_n are integers in {1, ..., w}, then the problem can be solved in poly(n, w) time using dynamic programming.
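A minimal sketch of this dynamic program, with states indexed by achievable profit: dp[p] is the minimum total size of a subset attaining profit exactly p, and the answer is the largest p with dp[p] within capacity. The function name and interface are illustrative.

```python
def knapsack_by_profit(sizes, profits, capacity=1.0):
    """Exact knapsack when profits are integers in {1, ..., w}.
    dp[p] = minimum total size achieving profit exactly p.
    Runs in O(n * sum(profits)) = poly(n, w) time."""
    total = sum(profits)
    INF = float("inf")
    dp = [0.0] + [INF] * total
    for s, c in zip(sizes, profits):
        # Iterate profits downward so each item is used at most once.
        for p in range(total, c - 1, -1):
            if dp[p - c] + s < dp[p]:
                dp[p] = dp[p - c] + s
    # Largest profit achievable within the knapsack capacity.
    return max(p for p in range(total + 1) if dp[p] <= capacity)
```

For example, with sizes (0.5, 0.4, 0.3) and profits (3, 2, 2), the best feasible subset has total profit 5.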
This naturally leads to the following approximation algorithm for knapsack.
(1 − ε)-Approximation Algorithm:
1. Let c = max_i c_i.
2. Round down each c_i to the nearest multiple of εc/n. Let this quantity be r_i · (εc/n), i.e., r_i = ⌊c_i / (εc/n)⌋.
3. With these new quantities r_i as profits of the items, use the standard dynamic programming algorithm to find the most profitable set I'.
Each r_i is at most n/ε. Thus, by Observation 1.2.1, the running time of this algorithm is at most poly(n, n/ε) = poly(n, 1/ε). We now show that the above algorithm obtains a (1 − ε) approximation ratio.
Claim 1.2.2. Σ_{i∈I'} c_i is a (1 − ε)-approximation to OPT.
Proof. Let O be the optimal set. For each item, rounding down c_i causes a loss in profit of at most εc/n. Hence the total loss due to rounding down is at most n · (εc/n) = εc. In other words,

Σ_{i∈I'} c_i ≥ (εc/n) · Σ_{i∈I'} r_i ≥ (εc/n) · Σ_{i∈O} r_i ≥ Σ_{i∈O} c_i − εc = OPT − εc ≥ (1 − ε) · OPT.
The first inequality follows from the definition of r_i; the second from the fact that I' is an optimal solution with profits r_i; the third from the above observation; and the last from the fact that OPT ≥ c.
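Putting the rounding step and the dynamic program together gives the full (1 − ε)-approximation. The sketch below is illustrative (names and interface are not from the notes); it assumes strictly positive profits and rounds each c_i down to r_i = ⌊c_i / (εc/n)⌋ before running the profit-indexed DP.

```python
def knapsack_fptas(sizes, profits, eps, capacity=1.0):
    """(1 - eps)-approximation for knapsack: round each profit down
    to a multiple of eps*c/n (c = max profit), then run the
    profit-indexed DP on the integers r_i = floor(c_i / (eps*c/n)).
    Each r_i <= n/eps, so the DP runs in poly(n, 1/eps) time."""
    n = len(profits)
    c = max(profits)
    scale = eps * c / n
    r = [int(p / scale) for p in profits]  # rounded-down profits

    # dp[p] = (min size achieving rounded profit p, chosen item set)
    total = sum(r)
    INF = float("inf")
    dp = [(0.0, [])] + [(INF, None)] * total
    for i, (s, ri) in enumerate(zip(sizes, r)):
        for p in range(total, ri - 1, -1):
            cand = dp[p - ri][0] + s
            if cand < dp[p][0]:
                dp[p] = (cand, dp[p - ri][1] + [i])

    best_p = max(p for p in range(total + 1) if dp[p][0] <= capacity)
    chosen = dp[best_p][1]
    # Report the original (unrounded) profit of the chosen set.
    return chosen, sum(profits[i] for i in chosen)
```

With sizes (0.5, 0.4, 0.3), profits (30, 21, 22), and ε = 0.5, the rounded profits are r = (6, 4, 4), and the algorithm returns items {0, 2} with profit 52, which here happens to be optimal (and is guaranteed to be ≥ (1 − ε) · OPT).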
In these kinds of problems we define a set of “simple” solutions and find the minimum cost simple solution in polynomial time. Next, we show that an
…(Full text truncated)…