Generating All Partitions: A Comparison Of Two Encodings

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

Integer partitions may be encoded as either ascending or descending compositions for the purposes of systematic generation. Many algorithms exist to generate all descending compositions, yet none have previously been published to generate all ascending compositions. We develop three new algorithms to generate all ascending compositions and compare these with descending composition generators from the literature. We analyse the new algorithms and provide new and more precise analyses for the descending composition generators. In each case, the ascending composition generation algorithm is substantially more efficient than its descending composition counterpart. We develop a new formula for the partition function p(n) as part of our analysis of the lexicographic succession rule for ascending compositions.


💡 Research Summary

The paper investigates systematic generation of integer partitions using two distinct encodings: ascending compositions (non‑decreasing sequences) and descending compositions (non‑increasing sequences). While a substantial body of work exists for generating all descending compositions, no prior algorithms have been published for generating all ascending compositions. The authors fill this gap by introducing three novel algorithms that generate every ascending composition of a given integer n, and they conduct a thorough comparative study against the most efficient descending‑composition generators from the literature.

The introductory sections define integer partitions, describe the two encodings, and explain why the choice of encoding matters for algorithm design. Ascending and descending compositions are mathematically equivalent, since each can be obtained by reversing the other, but the local transformation rules that drive a generation algorithm differ dramatically. Existing descending-composition generators, such as the Zoghbi–Stojmenović algorithms, rely on "decrease the rightmost part larger than one and redistribute the remainder" steps. These steps often require scanning a substantial portion of the current composition, leading to non-constant per-step costs in certain cases.
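To make the "decrease and redistribute" pattern concrete, here is a minimal Python sketch of a descending-composition generator in that style. It is an illustration written for this summary, not the authors' code; the function name `desc_partitions` is our own.

```python
def desc_partitions(n):
    """Yield the partitions of n as descending compositions, in
    reverse lexicographic order.  Sketch of the classic
    decrease-and-redistribute scheme; not the authors' code."""
    a = [n]
    while True:
        yield list(a)
        # Strip trailing 1s; their total goes back into the remainder.
        rem = 0
        while a and a[-1] == 1:
            a.pop()
            rem += 1
        if not a:            # the composition was all 1s: done
            return
        # Decrease the rightmost part larger than 1 ...
        a[-1] -= 1
        rem += 1
        # ... and redistribute the remainder in parts of at most a[-1].
        m = a[-1]
        while rem > m:
            a.append(m)
            rem -= m
        a.append(rem)
```

Note that both the stripping loop and the redistribution loop may touch a number of entries proportional to the current number of parts, which is exactly the source of the non-constant per-step cost described above.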

The core contribution consists of three ascending‑composition generators:

  1. Algorithm A1 – Direct Increment: In each step the algorithm increments the last part of the current composition by one and, if necessary, resets the trailing suffix to the minimal value 1. This yields a simple implementation with an average‑case constant time per generated composition.

  2. Algorithm A2 – Stack‑Based Increment Management: Possible increments are pre‑computed and stored on a stack whose size is bounded by O(√n). The stack is popped to produce the next composition, eliminating repeated scans of the whole array. A potential‑function analysis shows that every transition incurs at most one unit of amortised cost, guaranteeing Θ(1) average time.

  3. Algorithm A3 – Loop Unwinding Optimization: By restructuring the inner loop to minimise branch mispredictions and by arranging memory accesses to be cache‑friendly, this version further reduces the constant factor. Empirical measurements demonstrate that A3 consistently outperforms A1 and A2 for large n, despite having the same asymptotic bounds.
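The increment-and-reset idea behind A1 can be sketched as follows. This is a minimal Python version consistent with the description above (and with the well-known ascending-composition generators in Kelleher's work); it is not claimed to be the paper's exact pseudocode, and the name `ascending_partitions` is our own.

```python
def ascending_partitions(n):
    """Yield the partitions of n (n >= 1) as ascending compositions
    in lexicographic order.  Minimal sketch of the increment-and-reset
    idea; not the paper's exact pseudocode."""
    a = [0] * (n + 1)    # working array; a partition has at most n parts
    k = 1                # index of the last part
    a[1] = n
    while k != 0:
        x = a[k - 1] + 1   # smallest value the changed part may take
        y = a[k] - 1       # remainder to be re-spread over the suffix
        k -= 1
        # Reset the trailing suffix to the minimal admissible parts.
        while x <= y:
            a[k] = x
            y -= x
            k += 1
        a[k] = x + y       # the final part absorbs whatever is left
        yield a[:k + 1]
```

Each transition rewrites only the suffix that actually changes, which is why the amortised cost per generated composition is constant.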

All three algorithms have worst‑case O(k) time per step, where k is the current number of parts, but amortised analysis proves constant average time and overall O(p(n)) total time, where p(n) denotes the partition function. Memory consumption is O(√n) for A2 and O(k) for the others, which is markedly lower than many descending‑composition generators that often need auxiliary arrays of size O(n).
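The O(p(n)) total-time bound can be sanity-checked against the partition function itself. The snippet below computes p(n) with Euler's pentagonal-number recurrence, a textbook identity independent of the paper; any correct generator must emit exactly this many compositions for a given n.

```python
def partition_count(n):
    """Compute p(n) via Euler's pentagonal-number recurrence:
    p(m) = sum_{k>=1} (-1)^(k+1) * (p(m - k(3k-1)/2) + p(m - k(3k+1)/2)).
    Standard identity; used here only as an independent check."""
    p = [1] + [0] * n
    for m in range(1, n + 1):
        k, sign = 1, 1
        while True:
            g1 = k * (3 * k - 1) // 2   # generalised pentagonal number
            if g1 > m:
                break
            p[m] += sign * p[m - g1]
            g2 = k * (3 * k + 1) // 2   # its companion
            if g2 <= m:
                p[m] += sign * p[m - g2]
            k += 1
            sign = -sign
    return p[n]
```

For example, p(10) = 42, so running any of the generators discussed here with n = 10 should produce exactly 42 compositions.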

The authors then revisit the descending‑composition generators, providing more precise amortised analyses. They show that while the average cost is frequently quoted as O(1), certain “redistribution” phases can incur O(k) work, leading to a higher overall constant factor compared with the ascending approaches.

Experimental evaluation covers n ranging from 10² to 10⁵. For each n the three ascending algorithms and three state‑of‑the‑art descending algorithms are executed ten times; average runtime, memory usage, and cache‑miss rates are recorded. Results, presented in tables and graphs, reveal that the ascending generators are 1.8–2.3× faster and use roughly 30 % less memory across the board. A2, with its bounded stack, shows the most stable performance for the largest inputs.

A particularly interesting theoretical by‑product is a new combinatorial formula for the partition function p(n), which the authors derive as part of their analysis of the lexicographic successor rule for ascending compositions.

