Speeding up of microstructure reconstruction: I. Application to labyrinth patterns

Notice: This research summary and analysis were generated automatically using AI. For definitive details, please refer to the original arXiv source.

Recently, a Monte Carlo hybrid method for reconstructing the microstructure of a binary/greyscale pattern from entropic descriptors was proposed (Piasecki 2011 Proc. R. Soc. A 467 806). We attempt to speed up this method, applied here to the reconstruction of a binary labyrinth target. Instead of a random configuration, we propose starting from a suitable synthetic pattern created by a cellular automaton. The presence of the characteristic attributes of the target in the starting pattern is the key factor in reducing the computational cost, which can be measured by the total number of MC steps required. For the same set of basic parameters, we investigated the following simulation scenarios: the biased/random alternately mixed approach #2m, the strictly biased approach #2b, and the random/partially biased approach #2rp. A series of 25 runs was performed for each scenario. To maintain comparable accuracy of the reconstructions, only the biased selection procedure was used during the final stages; this allowed a consistent comparison of the first three scenarios. The purely random approach #2r, of low efficiency, was included only for completeness. Under the conditions established, the best single reconstruction and the best average tolerance value among all the scenarios were obtained with the mixed #2m method, which was also the fastest. The slightly slower #2b and #2rp variants provided comparable but less satisfactory results.


💡 Research Summary

The paper addresses the computational inefficiency inherent in Monte‑Carlo (MC) based microstructure reconstruction when a random initial configuration is used. Building on the entropic descriptor (ED) framework introduced by Piasecki (2011), the authors propose a two‑fold acceleration strategy for reconstructing a binary labyrinth pattern. First, they generate a synthetic initial pattern (“pattern 2”) using an extended Young’s cellular automaton (CA) model. By tuning the CA parameters (R₁ = 1.8, R₂ = 6.2, w₁ = 1, w₂ = –0.068, ε = –0.127) they ensure that the synthetic pattern reproduces two key statistical features of the target: the position of the first peak of the spatial entropy descriptor S at scale k = 5 and the total number of black pixels (n_final = 5789, corresponding to a volume fraction φ = 0.591). This pre‑conditioning dramatically reduces the initial energy gap and consequently the number of MC steps required for convergence.
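The CA pre-conditioning step can be illustrated with a minimal pure-Python sketch of a Young-type two-radius activator/inhibitor automaton. This is a sketch under stated assumptions, not the paper's exact model: the toroidal grid, synchronous update, random 50 % seeding, and the reading of ε as a threshold offset are all illustrative choices; only the parameter values (R₁, R₂, w₁, w₂, ε) are taken from the summary.

```python
import math
import random

def young_ca(n=64, r1=1.8, r2=6.2, w1=1.0, w2=-0.068, eps=-0.127,
             steps=10, seed=42):
    """Young-type activator/inhibitor CA on an n x n torus (sketch).

    Each black cell adds w1 to sites within distance r1 (activation)
    and w2 to sites in the annulus r1 < d <= r2 (inhibition); eps is
    treated as a threshold offset. Update rule details are assumptions.
    """
    random.seed(seed)
    grid = [[random.random() < 0.5 for _ in range(n)] for _ in range(n)]

    # Precompute the weighted neighbourhood offsets once.
    rmax = int(math.ceil(r2))
    offsets = []
    for dx in range(-rmax, rmax + 1):
        for dy in range(-rmax, rmax + 1):
            d = math.hypot(dx, dy)
            if d <= r1:
                offsets.append((dx, dy, w1))
            elif d <= r2:
                offsets.append((dx, dy, w2))

    for _ in range(steps):
        new = [[False] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                field = eps
                for dx, dy, w in offsets:
                    if grid[(x + dx) % n][(y + dy) % n]:
                        field += w
                new[x][y] = field > 0.0  # black wins where activation dominates
        grid = new
    return grid
```

With suitably tuned parameters this class of automaton produces labyrinth-like stripe patterns; here it only stands in for the "informed start" idea, not for the authors' calibrated pattern "2".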

Second, the authors introduce a biased pixel‑exchange procedure that is invoked whenever the acceptance rate of standard random exchanges falls below 1 %. The bias selects a black pixel on the border of a black cluster and a neighboring white pixel; the exchange is accepted only if the white pixel has at least two black nearest neighbours and the black pixel has at most two white neighbours. This rule preferentially eliminates tiny isolated clusters and smooths cluster boundaries, which is especially beneficial for the labyrinth morphology that consists of one large connected black region (size = 5789 pixels) plus a few small islands.
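The neighbour-counting filter behind this biased exchange can be sketched as follows, assuming a 4-neighbour (von Neumann) neighbourhood on a toroidal grid; the paper's exact neighbourhood convention and border handling may differ.

```python
def black_neighbours(grid, x, y):
    """Count black 4-neighbours of (x, y) on a toroidal grid
    (the neighbourhood convention is an assumption)."""
    n = len(grid)
    return sum(grid[(x + dx) % n][(y + dy) % n]
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

def biased_swap_allowed(grid, black, white):
    """Bias filter as described in the summary: accept the exchange
    only if the white pixel has at least two black neighbours and the
    black pixel has at most two white (i.e. at least two black) ones."""
    bx, by = black
    wx, wy = white
    if not grid[bx][by] or grid[wx][wy]:
        return False  # must be a genuine black/white pixel pair
    return (black_neighbours(grid, wx, wy) >= 2 and
            4 - black_neighbours(grid, bx, by) <= 2)
```

The effect of the rule is visible directly: a black pixel dangling at the end of a thin filament (three white neighbours) can never be the donor, while a white pocket surrounded by black readily becomes the acceptor, so islands shrink and boundaries smooth.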

Four simulation scenarios are examined, each repeated 25 times under identical cooling schedules (geometric cooling T(l) = T₀ · 0.8^l with initial temperature T₀ = 4 × 10⁻³, the natural reading of the quoted parameters) and loop‑length function f(l) = ⌊a l^c + b l⌋ with a = 100, b = 25, c = 750, l_max = 32. The scenarios are:

  • #2m – mixed approach: biased and random exchanges alternate between successive temperature loops.
  • #2b – strictly biased: only the biased exchange is used throughout.
  • #2rp – random/partially biased: random exchanges dominate, with bias applied only in the final stage.
  • #2r – purely random (baseline, low efficiency).

All scenarios start from the same synthetic pattern “2”; a completely random initial configuration is deliberately omitted because the focus is on the benefit of the informed start. The objective function (Eq. 3.1) is the weighted sum of squared differences between the target and trial EDs (both S and G, together with their complexities C). Acceptance of a trial configuration follows the Metropolis rule p = min{1, exp(–ΔE/T)}.
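The acceptance step and cooling schedule can be sketched as below; this assumes geometric cooling T(l) = T₀ · 0.8^l (an interpretation of the figures quoted in the summary) and standard Metropolis acceptance, with ΔE the change in the weighted ED mismatch.

```python
import math
import random

def temperature(l, t0=4e-3, alpha=0.8):
    """Assumed geometric cooling schedule T(l) = t0 * alpha**l."""
    return t0 * alpha ** l

def metropolis_accept(delta_e, t, rng=random):
    """Metropolis rule: always accept an improving trial (dE <= 0),
    otherwise accept with probability exp(-dE / T)."""
    if delta_e <= 0.0:
        return True
    return rng.random() < math.exp(-delta_e / t)
```

In the mixed #2m scenario, the proposal generator alternates between random pixel exchanges and the biased border exchange from loop to loop, while this acceptance test stays the same throughout.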

Performance metrics are the total number of MC steps (MCS) required to reach the prescribed tolerance (ΔE < 3 × 10⁻³) and the final average tolerance value across the 25 runs. The mixed #2m scenario consistently yields the smallest MCS count and the lowest average tolerance, making it both the fastest and the most accurate method. The strictly biased #2b and the random/partially biased #2rp approaches are slightly slower and produce marginally higher tolerances, yet they still outperform the baseline #2r by a large margin.

Additional code optimisation is reported: factorial calculations for large arguments are performed using a Lanczos‑based gamma‑function approximation.
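As an illustration of the same trick, Python's `math.lgamma` (typically implemented via a Lanczos-type approximation) returns ln Γ(x), so ln n! = lgamma(n + 1) is available in floating point without ever forming the astronomically large integer n!.

```python
import math

def log_factorial(n):
    """ln(n!) via the log-gamma identity ln n! = lgamma(n + 1);
    avoids overflow for large n where math.factorial(n) would be huge."""
    return math.lgamma(n + 1)
```

For the cluster-size arguments appearing in the entropic descriptors (thousands of pixels), this keeps every intermediate value in ordinary double precision.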

