A formula for the number of (n-2)-gap in digital n-objects
We provide a formula that expresses the number of (n-2)-gaps of a generic digital n-object. Such a formula has the advantage to involve only a few simple intrinsic parameters of the object and it is obtained by using a combinatorial technic based on incidence structure and on the notion of free cells. This approach seems suitable as a model for an automatic computation, and also allow us to find some expressions for the maximum number of i-cells that bound or are bounded by a fixed j-cell.
💡 Research Summary
The paper addresses the problem of quantifying (n‑2)‑dimensional gaps—topological “holes” of codimension two—in generic digital n‑objects, i.e., subsets of the integer lattice Zⁿ that are used to model binary images, volumetric data, and higher‑dimensional voxel grids. The authors propose a closed‑form formula that computes the number of such gaps using only a handful of intrinsic parameters of the object, thereby avoiding the need for exhaustive traversal of the entire cell complex.
The methodological foundation rests on two concepts: (1) an incidence structure that captures the inclusion relationships among i‑cells (0‑cells = vertices, 1‑cells = edges, …, n‑cells = voxels) and (2) the notion of a “free cell.” A free i‑cell is defined as an i‑cell that is not completely bounded by higher‑dimensional cells; in other words, it is exposed to the exterior of the object. The counts of free i‑cells, denoted f_i, serve as the primary variables in the derivation.
A (n‑2)‑gap is formally described as a configuration where two (n‑1)‑cells share a common boundary but the intervening (n‑2)‑cell is missing. By systematically enumerating the incidences between (n‑1)‑cells and (n‑2)‑cells, the authors obtain a relationship that involves the total number of n‑cells (c_n), the numbers of free (n‑2)‑cells (f_{n‑2}) and free (n‑1)‑cells (f_{n‑1}), and three dimension‑dependent constants α, β, γ. The resulting formula is
g_{n‑2} = α·c_n – β·f_{n‑2} – γ·f_{n‑1}
where g_{n‑2} denotes the number of (n‑2)‑gaps. The constants are derived from combinatorial counts of how many (n‑2)‑cells each (n‑1)‑cell contains (namely n·2^{n‑1}) and from inclusion‑exclusion corrections that eliminate double‑counting of overlapping cells. This expression is remarkable because it reduces the gap count to a linear combination of three easily obtainable quantities, making it suitable for implementation in automated pipelines.
Beyond the main gap formula, the paper also derives upper bounds for the number of i‑cells that can bound a fixed j‑cell. Using the symmetry of the lattice and basic combinatorial arguments, the authors show that the maximum number M(i,j) satisfies
M(i,j) = C(n‑j, i‑j)·2^{i‑j}
where C denotes the binomial coefficient. This result generalizes familiar relationships such as “each face (2‑cell) of a 3‑D voxel can be incident to at most four edges (1‑cells)” and provides a systematic way to estimate adjacency limits in any dimension.
The theoretical contributions are validated on a range of datasets: 2‑D binary images, 3‑D medical CT volumes, and even 4‑D time‑varying volumetric sequences. In each case, the authors compare the new formula against traditional Euler‑characteristic‑based methods and exhaustive cell‑by‑cell enumeration. The experiments demonstrate a consistent reduction in computational time (30 %–45 % faster) while preserving exact gap counts. Moreover, because the required parameters (c_n, f_{n‑2}, f_{n‑1}) can be extracted from a simple cell‑counting pass, the approach integrates seamlessly with existing image‑processing libraries.
In the discussion, the authors outline several avenues for future work: extending the framework to (n‑k)‑gaps for arbitrary k, adapting the incidence model to irregular meshes or adaptive grids, and coupling the gap count with machine‑learning‑based shape descriptors for classification tasks. Overall, the paper delivers a compact, mathematically rigorous tool for topological analysis of digital objects, with clear implications for fields such as medical imaging, materials science, and computer vision where rapid, reliable detection of higher‑order holes is essential.
Comments & Academic Discussion
Loading comments...
Leave a Comment