About Algorithm for Transformation of Logic Functions (ATLF)
This article considers an algorithm for the transformation of logic functions given by truth tables. The suggested algorithm can transform many-valued logic functions with any required number of variables and can in this sense be regarded as universal.
💡 Research Summary
The paper introduces a universal algorithm for transforming logic functions that are originally expressed as truth tables. Unlike traditional methods that are confined to binary (two‑valued) logic, the proposed technique works for any k‑valued logic and any number of input variables, making it applicable to a wide range of digital design and analysis tasks.
The algorithm begins by modeling the given function as an N‑dimensional tensor, where N is the number of variables and each dimension has k possible values. A normalization step removes duplicate rows and eliminates unnecessary entries, thereby reducing the size of the truth table. The core of the method is an indexing scheme that maps each k‑ary input combination to a unique integer index (essentially a base‑k numeral conversion). This compression enables the entire truth table to be stored in a one‑dimensional array, improving cache locality and memory efficiency.
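The base-k indexing scheme described above can be sketched in a few lines. This is a minimal illustration of the idea (function names and the sample function are illustrative, not taken from the paper): each k-ary input tuple is treated as a base-k numeral, giving a unique offset into a flat one-dimensional array.

```python
def index_of(assignment, k):
    """Map a k-ary input tuple (most significant digit first)
    to its unique integer index via base-k conversion."""
    idx = 0
    for digit in assignment:
        idx = idx * k + digit
    return idx

def assignment_of(idx, k, n):
    """Invert index_of: recover the n-digit k-ary input tuple."""
    digits = []
    for _ in range(n):
        idx, d = divmod(idx, k)
        digits.append(d)
    return tuple(reversed(digits))

# The truth table of an n-variable k-valued function then fits
# in one flat array of length k**n:
k, n = 3, 2
table = [0] * k**n
table[index_of((2, 1), k)] = 1   # e.g. set f(2, 1) = 1
```

Storing the table this way is what gives the cache-locality benefit the summary mentions: neighboring input combinations map to neighboring array slots.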
After compression, the algorithm proceeds to the actual transformation phase, which can be configured for several target representations:
- Minimal Sum/Product Forms – The algorithm identifies all input combinations that produce a particular output value and formulates a covering problem. A bit‑mask‑based greedy heuristic, optionally refined by branch‑and‑bound, yields a near‑optimal set of implicants.
- Standard (Canonical) Form – By enumerating every possible input vector, the algorithm directly constructs the full canonical expression and then rewrites it as a k‑valued polynomial (e.g., a Reed‑Muller expansion with coefficients in the same k‑ary domain).
- Hardware‑Friendly LUT Mapping – The compressed indices can be used directly as addresses for lookup tables in FPGA or ASIC designs, with the corresponding output values stored as table data.
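The bit-mask greedy covering step behind the minimal sum/product target can be illustrated with a generic set-cover sketch. The candidate implicants and minterm masks below are invented for the example; generating the candidates themselves is a separate step that this sketch omits.

```python
def greedy_cover(universe_mask, candidates):
    """Greedy set cover over bit-masks: each candidate is an int whose
    set bits are the minterms it covers; repeatedly pick the candidate
    covering the most still-uncovered minterms."""
    chosen, covered = [], 0
    while covered & universe_mask != universe_mask:
        best = max(candidates,
                   key=lambda m: bin(m & ~covered & universe_mask).count("1"))
        gain = best & ~covered & universe_mask
        if gain == 0:          # remaining minterms cannot be covered
            break
        chosen.append(best)
        covered |= gain
    return chosen

# Cover minterms {0, 1, 2, 3} with overlapping candidate implicants:
universe = 0b1111
candidates = [0b0011, 0b0110, 0b1100, 0b1000]
picked = greedy_cover(universe, candidates)   # two implicants suffice
```

The branch-and-bound refinement the summary mentions would then search over subsets of `candidates` using the greedy result as an upper bound.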
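For the canonical-form target, the binary special case (k = 2) of the Reed–Muller expansion can be computed from the flat truth table with the standard butterfly (Möbius) transform. This is a sketch of that well-known transform only, not the paper's k-valued generalization; variables are assumed indexed from the least-significant bit of the table index.

```python
def anf_coefficients(table):
    """Möbius transform: turn a binary truth table (length 2**n,
    entries 0/1) into the coefficients of its Reed-Muller (ANF)
    polynomial, indexed by monomial bit-mask."""
    coeffs = list(table)
    n = len(coeffs).bit_length() - 1
    for i in range(n):
        step = 1 << i
        for j in range(len(coeffs)):
            if j & step:
                coeffs[j] ^= coeffs[j ^ step]  # fold in the lower half
    return coeffs

# XOR of two variables, table indexed by (x1 << 1) | x0:
xor_table = [0, 1, 1, 0]
# Its ANF is x0 + x1: coefficients 1 at monomials {x0} and {x1}.
```

For general k the same divide-and-conquer structure applies with arithmetic modulo k in place of XOR, but the details depend on the paper's chosen polynomial basis.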
Complexity analysis shows that while the raw truth table size is O(k^N), the normalization and indexing reduce practical memory consumption to roughly O(k^N / log k). The transformation steps themselves are linear or near‑linear in the size of the compressed table, which is a substantial improvement over classic Quine‑McCluskey (exponential) or Karnaugh‑map approaches (limited to small N).
The authors argue for “universality” by demonstrating that the same pipeline handles binary, ternary, quaternary, and higher‑radix functions without modification. They also compare their method to existing binary‑only techniques, highlighting the scalability advantage when N > 6 or when k > 2. However, the paper lacks a formal worst‑case time‑complexity bound and provides limited empirical data on very large functions, which are noted as areas for future work.
Potential applications discussed include:
- FPGA/ASIC Design Automation – Direct generation of multi‑valued LUTs and subsequent optimization, bypassing binary‑only synthesis tools.
- Multi‑Valued Logic Circuit Optimization – Minimal‑implicant extraction for circuits that exploit more than two voltage levels, improving power and area efficiency.
- Explainable AI – Approximating neural‑network activation patterns with k‑valued logic functions and converting them into human‑readable forms.
- Error‑Correcting Code Design – Efficient synthesis of encoding/decoding logic for non‑binary codes.
In conclusion, the paper presents a structured, scalable approach to truth‑table‑based logic function transformation that extends beyond binary logic. It offers a promising foundation for both theoretical exploration and practical tool development, with suggested future directions including rigorous worst‑case analysis, large‑scale benchmarking, and hardware‑accelerated implementations.