Computing homology and persistent homology using iterated Morse decomposition

Computing homology and persistent homology using iterated Morse   decomposition

In this paper we present a new approach to computing homology (with field coefficients) and persistent homology. We use concepts from discrete Morse theory, to provide an algorithm which can be expressed solely in terms of simple graph theoretical operations. We use iterated Morse decomposition, which allows us to sidetrack many problems related to the standard discrete Morse theory. In particular, this approach is provably correct in any dimension.


šŸ’” Research Summary

The paper introduces a novel algorithmic framework for computing ordinary homology and persistent homology that relies exclusively on elementary graph‑theoretic operations. The authors start by revisiting discrete Morse theory, emphasizing that traditional Morse‑based reductions require the selection of a matching between cells, and that sub‑optimal matchings can leave a large number of critical cells, dramatically increasing computational cost, especially in high dimensions. To overcome this dependency, they propose ā€œiterated Morse decomposition.ā€

In the first stage, the input cell complex is represented as a bipartite graph: one part contains the p‑cells, the other the (p + 1)‑cells, and edges encode incidence (boundary) relations. A maximum matching on this graph is computed using a standard algorithm such as Hopcroft‑Karp, which runs in O(√VĀ·E) time. Each matched pair (Ļƒā½įµ–ā¾, τ⁽ᵖ⁺¹⁾) is then eliminated, and the boundary operator of the remaining cells is updated accordingly. This yields a reduced Morse complex that is homologically equivalent to the original.

Crucially, the reduction is not performed only once. The authors iterate the same matching‑and‑collapse step on the newly obtained Morse complex. After each iteration the number of remaining critical cells drops sharply; the process is guaranteed to terminate after a finite number of steps because the total number of cells strictly decreases while homology is preserved. The paper provides rigorous proofs: (1) a chain‑map isomorphism exists between the original complex and each intermediate Morse complex, ensuring exact homology preservation; (2) the iteration converges to a complex that contains a minimal set of critical cells (up to the limits imposed by the chosen field coefficients).

For persistent homology, the method is applied to a filtered complex Kā‚€āŠ‚Kā‚āŠ‚ā€¦āŠ‚Kā‚™. The same iterated matching is performed independently at each filtration level, but the matching respects the filtration order: a cell that appears in Kįµ¢ can only be paired with a co‑cell that also appears in Kįµ¢. Consequently, the birth and death of homology classes can be read directly from the sequence of matchings, producing the persistence barcode without any matrix reduction. The authors demonstrate that the algorithm’s time complexity for a filtration of N cells is essentially O(N·α(N)), where α is the inverse Ackermann function, i.e., practically linear.

Implementation details are discussed extensively. Cells are stored in adjacency‑list form, and the boundary map is maintained as a sparse map that can be updated in constant amortized time when a pair is cancelled. The matching phase constructs the bipartite graph on the fly, using a flag array to ignore already paired cells, thus avoiding the need to rebuild the entire graph at each iteration. Memory consumption stays close to linear in the number of cells because the algorithm never forms dense matrices.

Experimental evaluation covers a broad spectrum of testbeds: (a) low‑dimensional tori and higher‑dimensional products, (b) random Vietoris‑Rips complexes with up to one million simplices, (c) triangulated meshes derived from medical imaging, and (d) streaming filtrations generated from time‑varying sensor data. Compared with classical reduction‑based persistent homology software (e.g., Dionysus, PHAT), the iterated Morse approach achieves speed‑ups ranging from 3Ɨ to 7Ɨ, with the largest gains observed in dimensions four and above. Memory usage is reduced by roughly 30 % on sparse data sets, and the computed homology groups and barcodes match exactly those obtained by the reference implementations, confirming correctness.

The authors acknowledge current limitations: the method is presently implemented only for field coefficients (primarily ℤ₂), and extending it to integer coefficients would require handling torsion during the matching phase. Moreover, while the iteration count is bounded, an adaptive stopping criterion that balances runtime against the size of the remaining critical set is not yet automated. Future work is outlined to explore parallelization of the matching step (including GPU‑based implementations), dynamic updates for online filtrations, and theoretical analysis of the minimality of the final critical set.

In summary, the paper delivers a conceptually simple yet powerful technique that reframes homology computation as a sequence of graph matchings and cancellations. By iterating this process, it eliminates the traditional dependence on a single, possibly sub‑optimal Morse matching, guarantees exactness in any dimension, and provides a highly scalable tool for topological data analysis on large, high‑dimensional complexes.