Improving the Performance of maxRPC

Max Restricted Path Consistency (maxRPC) is a local consistency for binary constraints that can achieve considerably stronger pruning than arc consistency. However, existing maxRPC algorithms suffer from overheads and redundancies as they can repeatedly perform many constraint checks without triggering any value deletions. In this paper we propose techniques that can boost the performance of maxRPC algorithms. These include the combined use of two data structures to avoid many redundant constraint checks, and heuristics for the efficient ordering and execution of certain operations. Based on these, we propose two closely related algorithms. The first one, a maxRPC algorithm with optimal O(end^3) time complexity, displays good performance when used stand-alone, but is expensive to apply during search. The second one approximates maxRPC and has O(en^2d^4) time complexity, but a restricted version with O(end^4) complexity can be very efficient when used during search. Both algorithms have O(ed) space complexity. Experimental results demonstrate that the resulting methods consistently outperform previous algorithms for maxRPC, often by large margins, and constitute a more than viable alternative to arc consistency on many problems.


💡 Research Summary

The paper addresses the practical inefficiencies of Max Restricted Path Consistency (maxRPC), a binary‑constraint local consistency that is theoretically stronger than Arc Consistency (AC) but suffers from heavy redundant work in existing implementations. The authors identify two main sources of overhead: repeated constraint checks that do not lead to any domain pruning, and the lack of a systematic way to reuse information about supports once it has been computed. To overcome these issues they introduce (1) a pair of auxiliary data structures—a support table that records, for each variable‑value pair, the specific values in neighboring variables that support it, and an inverse‑support counter that tracks how many supports a given value provides to others—and (2) a set of heuristics that order the execution of support‑checking and value‑deletion operations so that the most promising candidates are processed first.

The support table and inverse‑support counter together enable constant‑time verification of whether a value still has a supporting partner, eliminating the need to scan an entire domain repeatedly. Updating these structures after a deletion is cheap: the counter of each affected neighbor is decremented, and a value is marked as unsupported only when its counter reaches zero. This approach requires only O(e·d) additional memory, where e is the number of binary constraints and d the maximum domain size, and it does not change the asymptotic space complexity of the algorithm.
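The bookkeeping described above can be sketched as follows. This is a minimal illustrative structure, not the paper's actual implementation: the class and method names are ours, and it simply records known supports per neighboring variable with a counter for O(1) "does a support remain?" queries and cheap decrement-on-delete updates.

```python
from collections import defaultdict

class SupportStore:
    """Illustrative sketch of a support table plus per-pair support counters.
    Keys are (variable, value) pairs; supports are grouped by neighbor."""

    def __init__(self):
        # supports[(x, a)][y] = set of values b in D(y) known to support (x, a)
        self.supports = defaultdict(lambda: defaultdict(set))
        # count[(x, a)][y] = size of that set, kept for O(1) queries
        self.count = defaultdict(lambda: defaultdict(int))

    def add_support(self, x, a, y, b):
        if b not in self.supports[(x, a)][y]:
            self.supports[(x, a)][y].add(b)
            self.count[(x, a)][y] += 1

    def has_support(self, x, a, y):
        """O(1) check: does (x, a) still have a recorded support in y?"""
        return self.count[(x, a)][y] > 0

    def delete_value(self, y, b, supported_pairs):
        """When (y, b) is removed, decrement the counter of every (x, a) pair
        it supported; return the pairs whose last recorded support is gone."""
        newly_unsupported = []
        for (x, a) in supported_pairs:
            if b in self.supports[(x, a)][y]:
                self.supports[(x, a)][y].discard(b)
                self.count[(x, a)][y] -= 1
                if self.count[(x, a)][y] == 0:
                    newly_unsupported.append((x, a))
        return newly_unsupported
```

A deletion thus costs time proportional only to the number of pairs the deleted value actually supported, and a value is flagged for further processing only when its counter reaches zero.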

The heuristic component combines two criteria: (i) the expected domain‑reduction rate of a variable (how much its domain shrinks after a propagation step) and (ii) the density of constraints incident on the variable. By prioritising variables and values with high reduction potential, the algorithm often discovers deletions early, which in turn reduces the number of subsequent support checks. Moreover, a “no‑recheck” policy is adopted: once a support for a value has been confirmed, it is cached and reused until the supporting value itself is removed. This dramatically cuts the number of redundant constraint evaluations, especially in dense CSPs with large domains.
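The "no-recheck" policy amounts to a residual-support lookup, which can be sketched as below. The function signature and names are illustrative assumptions, not the paper's code: before scanning D(y) for a support of value `a`, the cached last support is re-tested, and a full scan happens only when that cached support has been deleted.

```python
def find_support(a, dom_y, check, residual):
    """Sketch of a cached ("residual") support lookup for value a of some
    variable x against neighbor y. `check(a, b)` is the compatibility
    predicate; `residual` maps a value to its last confirmed support."""
    last = residual.get(a)
    if last is not None and last in dom_y and check(a, last):
        return last            # cached support still valid: no domain scan
    for b in dom_y:            # cache miss: scan D(y) and refresh the cache
        if check(a, b):
            residual[a] = b
            return b
    return None                # a has no support left in D(y)
```

On a cache hit the cost is a single constraint check, which is where most of the redundant work of repeated full scans is saved.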

Based on these two ideas the authors propose two closely related algorithms. The first is an exact maxRPC algorithm that achieves the optimal time complexity O(e·n·d³), where n is the number of variables; its gains over earlier exact procedures come from eliminating redundant constraint checks rather than from an improved worst‑case bound. The second algorithm approximates maxRPC and runs in O(e·n²·d⁴) time, but a restricted variant designed for use during search runs in O(e·n·d⁴). The approximation sacrifices completeness only when a full support check would be too costly; otherwise it behaves identically to the exact algorithm. Both algorithms retain O(e·d) space usage.
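The d-factors in these bounds come from the shape of the maxRPC condition itself, which the naive check below sketches directly from the definition (predicate and parameter names are ours, and none of the paper's optimizations are applied): a value a needs an AC-support b in D(y) that also has a path-consistency witness in every third variable constraining both x and y.

```python
def is_maxrpc_supported(a, dom_y, cxy, witnesses):
    """Naive maxRPC support check for value a of variable x against
    neighbor y. `cxy(a, b)` tests compatibility between x and y;
    `witnesses` is a list of (dom_z, cxz, cyz) triples, one per third
    variable z constraining both x and y. Illustrative interface only."""
    for b in dom_y:
        if not cxy(a, b):
            continue  # b is not even an AC-support of a
        # b must also be path consistent with a: every third variable z
        # must contain a witness c compatible with both a and b
        if all(any(cxz(a, c) and cyz(b, c) for c in dom_z)
               for dom_z, cxz, cyz in witnesses):
            return True
    return False
```

Scanning D(y) and, for each candidate b, scanning each third-variable domain is what makes repeated naive checks expensive, and it is exactly this repetition that the data structures and caching above avoid.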

The experimental evaluation covers a broad spectrum of benchmark CSPs, including random binary instances, quasigroup completion problems, graph coloring, and real‑world scheduling and placement tasks. The authors compare their methods against state‑of‑the‑art maxRPC implementations (e.g., maxRPC1, maxRPC2) and against standard AC algorithms (AC‑3, AC‑2001). Results show that the exact algorithm consistently outperforms previous maxRPC implementations, achieving average speed‑ups of 30%–70% and up to a factor of two on the hardest instances. The approximation algorithm, when used inside a backtracking search, reduces the number of explored nodes dramatically; on dense problems the reduced search often outweighs the extra propagation cost relative to AC‑based search, confirming that the stronger pruning of maxRPC can be exploited without prohibitive overhead. Memory consumption remains modest, with the auxiliary structures adding only a small constant factor to the baseline memory footprint.

In summary, the paper makes three substantive contributions: (1) a novel combination of support‑tracking data structures that eliminates redundant constraint checks, (2) a set of effective heuristics for ordering propagation operations, and (3) two concrete algorithms—one exact, one approximate—that achieve optimal or near‑optimal theoretical bounds while delivering substantial empirical gains. The work demonstrates that maxRPC can be made competitive with, and in many cases superior to, Arc Consistency, thereby expanding the toolbox of practical consistency techniques available to CSP practitioners.

