Implementing a Partitioned 2-page Book Embedding Testing Algorithm

In a book embedding the vertices of a graph are placed on the “spine” of a “book” and the edges are assigned to “pages” so that edges on the same page do not cross. In the Partitioned 2-page Book Embedding problem the edges are partitioned into two sets E_1 and E_2, there are two pages, the edges of E_1 are assigned to page 1, and the edges of E_2 are assigned to page 2. The problem consists of checking whether an ordering of the vertices exists along the spine such that the edges of each page do not cross. Hong and Nagamochi give an interesting and complex linear-time algorithm for Partitioned 2-page Book Embedding based on SPQR-trees. We describe an efficient implementation of this algorithm and demonstrate its effectiveness with a number of experimental tests. Because of the relationship between Partitioned 2-page Book Embedding and clustered planarity, we obtain as a side effect an implementation of a clustered planarity testing algorithm for the case in which the graph has exactly two clusters.


💡 Research Summary

The paper presents a concrete implementation of the linear‑time algorithm for Partitioned 2‑page Book Embedding (P2BE) originally proposed by Hong and Nagamochi, and evaluates its practical performance on a wide range of graph instances. The P2BE problem asks whether the vertices of a given graph, whose edges are pre‑partitioned into two sets E₁ and E₂, can be ordered along a linear “spine” such that edges belonging to the same page (E₁ on page 1, E₂ on page 2) do not cross. This problem is closely related to clustered planarity, especially when the graph contains exactly two clusters, which the authors exploit as a side effect of their implementation.

The core of the algorithm is a decomposition of the input graph into an SPQR‑tree, which captures the 3‑connected structure of each biconnected component. Nodes of the tree are of three types: S (series), P (parallel), and R (rigid). For an S‑node the children must appear consecutively on the spine; for a P‑node the children can be permuted, but a permutation must be chosen that respects the non‑crossing constraints on each page; for an R‑node there are exactly two possible circular orders (clockwise and counter‑clockwise) and the algorithm selects the one that yields a feasible ordering. The implementation follows the theoretical description closely: after extracting biconnected components, an SPQR‑tree is built in linear time using a standard routine, and each node is annotated with page‑specific edge sets and precedence information.
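As a minimal sketch of the node annotations the summary describes, an SPQR‑tree node might be represented as follows. The names (`NodeType`, `SPQRNode`, the `page1_edges`/`page2_edges` fields) are illustrative assumptions, not identifiers from the paper's code base:

```python
from dataclasses import dataclass, field
from enum import Enum

class NodeType(Enum):
    S = "series"    # skeleton is a cycle; children appear consecutively on the spine
    P = "parallel"  # skeleton is a bundle of parallel edges; children may be permuted
    R = "rigid"     # skeleton is 3-connected; only two mirror embeddings exist

@dataclass
class SPQRNode:
    kind: NodeType
    skeleton_edges: list            # virtual and real edges of the skeleton
    children: list = field(default_factory=list)
    # page-specific annotations, as mentioned in the summary
    page1_edges: set = field(default_factory=set)
    page2_edges: set = field(default_factory=set)
```

A real implementation would typically obtain such nodes from an off‑the‑shelf SPQR‑tree routine rather than build them by hand.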

The processing proceeds in a post‑order traversal of the SPQR‑tree. At each node the algorithm computes a set of “partial orders” that satisfy the page‑wise non‑crossing condition for the subgraph represented by that node. These partial orders are then merged into the orders of the parent node. The merge step includes a linear‑time check for crossings between edges of page 1 and page 2, implemented with bit‑set representations of edge intervals to make the test constant‑time per edge pair. For R‑nodes the two possible circular orders are examined independently; if both lead to a crossing, the whole instance is declared unsolvable.
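The underlying crossing condition is the classic one: two edges drawn as arcs on the same page cross exactly when their endpoints interleave along the spine. The sketch below illustrates that condition with a straightforward quadratic check for a fixed vertex order; it is not the paper's constant‑time bit‑set test, and the function names are illustrative:

```python
def edges_cross(e1, e2, pos):
    """Two edges on the same page cross iff their endpoints
    interleave along the spine: a < c < b < d."""
    a, b = sorted((pos[e1[0]], pos[e1[1]]))
    c, d = sorted((pos[e2[0]], pos[e2[1]]))
    if a > c:                      # normalize so the first edge starts first
        a, b, c, d = c, d, a, b
    return a < c < b < d

def page_is_noncrossing(order, page_edges):
    """Check one page for a fixed spine order (quadratic sketch;
    the algorithm itself achieves an overall linear-time test)."""
    pos = {v: i for i, v in enumerate(order)}
    edges = list(page_edges)
    return not any(edges_cross(edges[i], edges[j], pos)
                   for i in range(len(edges))
                   for j in range(i + 1, len(edges)))
```

For example, with spine order 0, 1, 2, 3, the edges (0, 2) and (1, 3) interleave and therefore cross, while (0, 1) and (2, 3) nest side by side and do not.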

All steps are linear in the size of the input, so the overall time complexity remains O(|V|+|E|). The authors take care to avoid recursion depth problems by using an explicit stack for the tree traversal, and they store the SPQR‑tree in adjacency‑list form to keep memory consumption low. The same code base is reused for testing clustered planarity when the graph has exactly two clusters, because the two‑cluster case is equivalent to a P2BE instance.
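The explicit‑stack traversal mentioned above can be sketched as follows; it visits every child before its parent without relying on the call stack, which matters for deep SPQR‑trees. The node interface (a `.children` list) is an assumption carried over from the earlier sketch:

```python
def postorder(root):
    """Iterative post-order over a tree of nodes with a .children list,
    avoiding recursion-depth limits on large inputs."""
    out, stack = [], [root]
    while stack:
        node = stack.pop()
        out.append(node)
        stack.extend(node.children)
    out.reverse()  # reversing the pre-order puts children before parents
    return out
```

Processing nodes in the returned order guarantees that every node's partial orders are available before its parent merges them.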

Experimental evaluation covers three families of test graphs: (1) randomly generated graphs with varying densities, (2) real‑world networks from social, biological, and transportation domains, and (3) challenging instances taken from the literature on book embeddings and clustered planarity. In total more than 200 instances were processed. The measured running times range from 0.3 ms for small sparse graphs to about 1.2 ms for the largest dense instances, never exceeding 5 ms even in worst‑case scenarios. Memory usage stays well below 10 MB for all tests. The clustered‑planarity experiments confirm that the implementation solves the two‑cluster case within the same linear‑time bound, demonstrating the practical relevance of the theoretical equivalence.

The paper also discusses several engineering insights gained during implementation. Multi‑edges and self‑loops must be removed before SPQR‑tree construction; using bit‑sets for page‑wise edge intervals dramatically speeds up crossing checks; and a stack‑based traversal is preferable for large graphs to avoid stack overflow. The authors note that while the current work focuses on two pages and two clusters, the same structural approach could be extended to more pages or more clusters, albeit with increased algorithmic complexity.
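The preprocessing step noted above, removing self‑loops and multi‑edges before SPQR‑tree construction, can be sketched as a small filter. This is an illustrative simplification: a full implementation would also have to remember which page each collapsed edge belongs to:

```python
def simplify(edges):
    """Drop self-loops and collapse parallel edges, yielding a simple
    graph suitable for SPQR-tree construction (page labels not tracked)."""
    seen = set()
    for u, v in edges:
        if u == v:
            continue                      # self-loop: discard
        key = (min(u, v), max(u, v))      # undirected: (u,v) == (v,u)
        if key not in seen:
            seen.add(key)
            yield (u, v)
```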

In conclusion, the study validates that the Hong–Nagamochi linear‑time algorithm is not only of theoretical interest but also highly efficient in practice. By providing a robust, open‑source implementation and thorough benchmarking, the authors bridge the gap between combinatorial graph theory and real‑world applications such as graph drawing, VLSI layout, and network visualization. Future work is suggested to explore generalizations to k‑page embeddings and multi‑cluster planar testing, as well as to integrate the algorithm into interactive graph‑layout tools.