A Reduced Offset Based Method for Fast Computation of the Prime Implicants Covering a Given Cube
To generate the prime implicants for a given cube (minterm), most minimization methods increase the dimension of the cube by removing one literal from it at a time. This approach faces two problems of exponential complexity. The first is selecting the order in which literals are to be removed from the implicant at hand; the second is the mechanism that checks whether a tentative literal removal is acceptable. The reduced-offset concept has been developed to avoid these problems. It is based on the positional-cube representation, in which each cube is represented by two n-bit strings. We show that each reduced off-cube can be represented by a single n-bit string and propose a set of bitwise operations to be performed on such strings. Experiments on single-output benchmarks show that this approach can significantly speed up the minimization process, improve the quality of its results, and reduce the amount of memory required.
💡 Research Summary
The paper addresses a fundamental bottleneck in sum‑of‑products (SOP) minimization: generating all prime implicants (PIs) that cover a given on‑cube (minterm). Traditional direct‑cover heuristics repeatedly expand an on‑cube by removing literals and intersect the expanded cube with the off‑set to verify that no off‑set minterm is covered. This “expand‑and‑check” loop suffers from two exponential sources: (1) the order in which on‑cubes are selected for expansion, and (2) the order in which literals are removed from a cube. Both choices lead to a combinatorial explosion, especially when the off‑set size grows toward 2ⁿ.
To overcome these issues, the authors adopt the reduced‑offset concept originally proposed by Malik, Brayton, Newton, and Sagiv. For a specific on‑cube P, each off‑cube Z is reduced by keeping only those literals that are complements of literals in P; all other literals become don’t‑care (x). In the classic positional‑cube representation this reduction yields two n‑bit strings (left and right bits). The key insight of the paper is that a reduced off‑cube can be represented by a single n‑bit string called a Difference Indicator (DI). The DI is simply the bitwise XOR of P and the right‑bit string of Z:
D = P ⊕ Z_R
A ‘1’ in D marks a don’t‑care position, while a ‘0’ marks a fixed literal. Using DIs eliminates the need for the left‑bit string and allows absorption (redundancy) checks to be performed with a single bitwise AND: if D_i & D_j == D_i then D_i is absorbed by D_j. Consequently, the whole reduction of the off‑set can be carried out in O(|S_OFF|·n) time, a dramatic improvement over the original O(|S_OFF|·2ⁿ) behavior.
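The two operations above can be sketched directly with Python ints treated as n-bit strings. This is a minimal illustration of the summary's description, not the paper's implementation; the helper names `di` and `absorbs` are illustrative.

```python
def di(p: int, z_r: int) -> int:
    """Difference Indicator: bitwise XOR of the on-cube P with the
    right-bit string Z_R of a reduced off-cube (D = P xor Z_R)."""
    return p ^ z_r

def absorbs(d_i: int, d_j: int) -> bool:
    """D_i is absorbed by D_j when D_i & D_j == D_i, i.e. every
    don't-care bit of D_i is also a don't-care bit of D_j."""
    return (d_i & d_j) == d_i

# n = 4: P = 1011, Z_R = 1110  ->  D = 0101
assert di(0b1011, 0b1110) == 0b0101
assert absorbs(0b0100, 0b0101)       # 0100 is absorbed by 0101
assert not absorbs(0b0101, 0b0100)   # but not the other way round
```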
After generating the set of DIs for all off‑cubes, the algorithm incrementally builds a minimal DI set S_DM(P) by comparing each new DI against the current set and discarding absorbed entries (procedure Reform_S_DM). This step remains polynomial because each comparison is a constant‑time bitwise operation.
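A hedged sketch of that incremental construction, again modeling DIs as ints; the paper's Reform_S_DM procedure may differ in detail, but the absorption test per comparison is the single bitwise AND described above.

```python
def reform_s_dm(dis):
    """Build a minimal DI set: discard each new DI that is absorbed by
    an existing one, and drop existing DIs absorbed by the new one."""
    minimal = []
    for d in dis:
        # d absorbed by some kept DI -> redundant, skip it.
        if any((d & m) == d for m in minimal):
            continue
        # Remove kept DIs that d absorbs, then keep d.
        minimal = [m for m in minimal if (m & d) != m]
        minimal.append(d)
    return minimal

# 0100 is absorbed by 0101; 1000 is independent of both.
assert reform_s_dm([0b0101, 0b0100, 0b1000]) == [0b0101, 0b1000]
```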
The next phase converts the minimal DI set into the clauses required for De Morgan’s transformation. An m‑DI (containing m ones) is decomposed into m one‑bit DIs using a linear‑time routine (Generate_M_j). Each one‑bit DI corresponds to a literal position in a clause, but the polarity (complemented or uncomplemented) is still unspecified. The collection of all such clauses is accumulated into an N‑vector via repeated bitwise OR operations (formula (6)). Although the theoretical worst‑case complexity of this accumulation can be exponential (≈2.5ⁿ), empirical measurements on standard benchmarks show that the number of clauses remains modest, and the overall cost stays polynomial in practice.
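One possible reading of this phase in Python: split each m-DI into its m one-bit DIs, then distribute the product of clauses into N-vectors by OR-ing one choice per clause. The function names and the `itertools.product` expansion are illustrative assumptions; only the decomposition and the bitwise-OR accumulation come from the text.

```python
from itertools import product

def one_bit_dis(d: int) -> list[int]:
    """Decompose an m-DI (m ones) into m one-bit DIs, one per set bit,
    in linear time (sketch of the Generate_M_j routine)."""
    bits = []
    while d:
        low = d & -d   # isolate the lowest set bit
        bits.append(low)
        d ^= low       # clear it and continue
    return bits

def n_vectors(minimal_dis):
    """Accumulate clause choices into N-vectors: pick one one-bit DI
    per clause and OR the picks together (hedged reading of (6))."""
    vecs = set()
    for choice in product(*(one_bit_dis(d) for d in minimal_dis)):
        v = 0
        for b in choice:
            v |= b
        vecs.add(v)
    return vecs

assert one_bit_dis(0b0101) == [0b0001, 0b0100]
assert n_vectors([0b0101, 0b0010]) == {0b0011, 0b0110}
```

The `product` expansion makes the worst-case exponential cost visible: with k clauses of up to m one-bit DIs each, up to m^k N-vectors can arise, matching the theoretical bound mentioned above.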
Finally, the N‑vector is combined with the complement of the original on‑cube (¬P) using bitwise AND to resolve literal polarities, yielding the full set of prime implicants covering P.
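The final step is a single masked AND per N-vector. The sketch below assumes an n-bit universe and a helper name of my own choosing; it only illustrates the bit manipulation the summary describes, not the paper's exact procedure.

```python
def resolve_polarity(n_vec: int, p: int, n: int) -> int:
    """AND the N-vector with the complement of the on-cube P,
    restricted to n bits, to fix literal polarities."""
    mask = (1 << n) - 1
    return n_vec & (~p & mask)

# n = 4, P = 1011: only positions where P is 0 survive.
assert resolve_polarity(0b0110, 0b1011, 4) == 0b0100
```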
Experimental evaluation on 45 single‑output MCNC benchmarks demonstrates that the DI‑based method reduces memory consumption by roughly 30 % and accelerates PI generation by up to a factor of two compared with established tools such as MINI, PRESTO, and Espresso. The speedup is especially pronounced for functions with large off‑sets, confirming that the reduction of the off‑set to a compact DI representation effectively mitigates the exponential blow‑up inherent in traditional approaches.
In summary, the paper makes four principal contributions: (1) a novel single‑bit‑vector representation (DI) for reduced off‑cubes, (2) an O(|S_OFF|·n) algorithm for constructing the minimal reduced‑offset set, (3) a clear pipeline (DI → clause → N‑vector → PI) with provable polynomial steps, and (4) empirical evidence of superior performance in both runtime and memory usage. The approach is compatible with existing SOP minimization frameworks and opens avenues for extending reduced‑offset techniques to multi‑output and multi‑level logic synthesis.