Billion-atom Synchronous Parallel Kinetic Monte Carlo Simulations of Critical 3D Ising Systems
An extension of the synchronous parallel kinetic Monte Carlo (pkMC) algorithm developed by Martinez {\it et al.} [{\it J.\ Comp.\ Phys.} {\bf 227} (2008) 3804] to discrete lattices is presented. The method solves the master equation synchronously by recourse to null events that keep all processors' time clocks current in a global sense. Boundary conflicts are rigorously solved by adopting a chessboard decomposition into non-interacting sublattices. We find that the bias introduced by the spatial correlations attendant to the sublattice decomposition is within the standard deviation of the serial method, confirming the statistical validity of the approach. We have assessed the parallel efficiency of the method and find that the algorithm scales consistently with problem size and sublattice partition. We apply the method to the calculation of scale-dependent critical exponents in billion-atom 3D Ising systems, obtaining very good agreement with state-of-the-art multispin simulations.
💡 Research Summary
The paper presents an extension of the synchronous parallel kinetic Monte Carlo (pkMC) algorithm originally introduced by Martinez et al. (2008) to discrete lattice systems, with a particular focus on three‑dimensional Ising models containing up to one billion spins. The authors address two fundamental challenges that arise when moving from continuous space to lattice‑based simulations: (1) maintaining a globally synchronized simulation clock across many processors, and (2) resolving boundary conflicts that occur when neighboring subdomains attempt to update interacting spins simultaneously.
To keep all processors’ clocks aligned, the method introduces “null events.” In each processor’s event list, a null event is a fictitious transition that does not change the physical state but advances the local clock. By selecting events according to the maximum possible rate λmax and inserting null events to compensate for the difference between λmax and the actual local rate, every processor advances by the same physical time step. This guarantees an exact solution of the master equation in a synchronous fashion and eliminates the time‑bias that plagues asynchronous parallel KMC schemes.
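The null-event idea above can be sketched in a few lines. This is a minimal single-processor illustration, not the paper's implementation: the per-site rates and the bound λ_max are invented values, and the point is only that accepting a candidate event with probability rate/λ_max (and otherwise recording a null event) lets the clock advance with the same statistics everywhere.

```python
import math
import random

def null_event_step(rates, lam_max, rng):
    """Advance one KMC step using null events.

    A site is picked uniformly and its event accepted with probability
    rate / lam_max; otherwise a null event occurs (no state change).
    Either way the clock advances by an exponential deviate drawn at
    the total rate n * lam_max, so time bookkeeping is identical on
    every processor regardless of local rates.
    """
    n = len(rates)
    dt = -math.log(rng.random()) / (n * lam_max)
    site = rng.randrange(n)
    accepted = rng.random() < rates[site] / lam_max
    return site, accepted, dt

rng = random.Random(42)
rates = [0.2, 0.9, 0.5, 0.7]   # assumed per-site transition rates
lam_max = 1.0                  # assumed upper bound on any single rate

t = 0.0
real_events = 0
for _ in range(10_000):
    _, accepted, dt = null_event_step(rates, lam_max, rng)
    t += dt
    real_events += accepted

# The accepted fraction converges to mean(rates) / lam_max (0.575 here),
# the price paid in wasted steps for exact global synchronization.
print(real_events / 10_000)
```

The cost of the scheme is visible in the accepted fraction: the further the local rates sit below λ_max, the more null events are drawn, which is exactly the trade-off the synchronous algorithm makes to stay exact.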
Boundary conflicts are eliminated through a chessboard (or checkerboard) decomposition of the lattice into non‑interacting sub‑lattices. The lattice is colored so that no two sites of the same color are nearest neighbors. During a simulation cycle, all sites of one color are updated in parallel while the other colors remain idle; the next color is then processed, and the cycle repeats. Because nearest‑neighbor interactions only occur between different colors, updates within a given color are guaranteed to be conflict‑free, allowing the algorithm to scale without costly communication or lock‑based synchronization. The authors demonstrate that extending the decomposition to eight colors further improves load balancing on large processor counts.
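The defining property of the coloring can be checked directly. The sketch below, on an arbitrary small periodic 3D lattice, uses the standard two-color parity rule and a 2×2×2 eight-color rule (a plausible reading of the eight-color decomposition mentioned above, not the paper's exact construction) and verifies that no site ever shares a color with a nearest neighbor.

```python
import itertools

L = 4  # illustrative periodic lattice size (must be even for the rules below)

def color2(x, y, z):
    # Two-color checkerboard: parity of the coordinate sum.
    return (x + y + z) % 2

def color8(x, y, z):
    # Eight colors from the parity of each coordinate (2x2x2 sublattices).
    return (x % 2) + 2 * (y % 2) + 4 * (z % 2)

def neighbors(x, y, z):
    # Six nearest neighbors with periodic boundary conditions.
    for d in (-1, 1):
        yield ((x + d) % L, y, z)
        yield (x, (y + d) % L, z)
        yield (x, y, (z + d) % L)

# No site shares a color with any nearest neighbor, so all sites of one
# color can be updated in parallel without conflicts.
for site in itertools.product(range(L), repeat=3):
    for nb in neighbors(*site):
        assert color2(*site) != color2(*nb)
        assert color8(*site) != color8(*nb)

print("both colorings are conflict-free for nearest neighbors")
```

Because each color class is updated while the others stay idle, no two simultaneously active sites ever interact, which is why the scheme needs neither locks nor conflict-resolution messages.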
Statistical validity is verified by comparing the parallel pkMC results with a conventional serial KMC simulation performed on identical initial configurations and temperature settings. Key observables—magnetization, energy, spin‑spin correlation functions, and finite‑size scaling quantities—show differences that lie within one standard deviation of the serial results. This indicates that the spatial correlations introduced by the sub‑lattice decomposition do not produce a measurable bias.
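A consistency check in this spirit is straightforward to write down. The sketch below compares the means of a serial and a parallel sample of some observable and asks whether they differ by less than one serial standard deviation; the sample values are invented for illustration and are not the paper's data.

```python
import statistics

def within_one_sigma(serial_samples, parallel_samples):
    """True if the parallel mean lies within one standard deviation
    of the serial mean, the acceptance criterion described above."""
    mu_serial = statistics.fmean(serial_samples)
    mu_parallel = statistics.fmean(parallel_samples)
    sigma = statistics.stdev(serial_samples)
    return abs(mu_serial - mu_parallel) <= sigma

# Made-up magnetization samples from hypothetical serial / parallel runs.
serial = [0.512, 0.507, 0.519, 0.503, 0.515]
parallel = [0.510, 0.514, 0.505, 0.511, 0.508]

print(within_one_sigma(serial, parallel))
```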
Parallel efficiency is quantified as η = T₁/(P · T_P), where T₁ is the runtime on a single processor and T_P is the runtime on P processors. For problem sizes ranging from 10⁶ to 10⁹ spins, η remains between 0.65 and 0.78, with the billion‑spin case retaining η ≈ 0.73. The efficiency is largely independent of the number of sub‑lattices, confirming that the combination of null events and chessboard decomposition keeps communication overhead low and workload evenly distributed.
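The efficiency metric is simple enough to compute by hand; the timings below are illustrative placeholders chosen to land near the reported η ≈ 0.73, not measurements from the paper.

```python
def parallel_efficiency(t1, p, tp):
    """Parallel efficiency eta = T1 / (P * T_P)."""
    return t1 / (p * tp)

t1 = 6400.0   # assumed single-processor runtime (s)
p = 128       # assumed processor count
tp = 68.5     # assumed runtime on p processors (s)

eta = parallel_efficiency(t1, p, tp)
print(f"{eta:.3f}")  # 0.730
```

An efficiency below 1 here bundles both communication overhead and the null events "wasted" on processors whose local rates sit below the global maximum.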
The algorithm is applied to the calculation of critical exponents in the three‑dimensional Ising model. By performing simulations at temperatures near the critical temperature T_c for lattice sizes L = 64, 128, 256, and 512, the authors extract the magnetization exponent β, the correlation‑length exponent ν, and the susceptibility exponent γ through finite‑size scaling analysis. The obtained values (β ≈ 0.326, ν ≈ 0.630, γ ≈ 1.237) agree with the best available estimates from multispin coding and Monte‑Carlo renormalization‑group studies, with statistical uncertainties that are comparable to or smaller than those of earlier work. Importantly, the large system size dramatically reduces finite‑size effects, leading to more reliable exponent estimates.
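The finite-size-scaling step can be sketched as a log-log fit. At T_c the magnetization scales as M(L) ∼ L^(−β/ν), so the slope of log M versus log L estimates −β/ν. The M values below are synthetic, generated from the quoted exponents (β/ν = 0.326/0.630) purely so the fit has something to recover; they are not the paper's measurements.

```python
import math

sizes = [64, 128, 256, 512]          # lattice sizes used in the study
ratio = 0.326 / 0.630                # beta/nu from the quoted exponents
mags = [L ** (-ratio) for L in sizes]  # synthetic M(L) obeying the scaling law

# Ordinary least-squares slope in log-log coordinates.
xs = [math.log(L) for L in sizes]
ys = [math.log(m) for m in mags]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)

# The fitted -slope recovers beta/nu (~0.517) for this exact power law.
print(round(-slope, 3))
```

On real data the same fit would carry scatter and corrections to scaling, which is precisely why the billion-spin lattices matter: larger L suppresses the finite-size corrections that bias the slope.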
In summary, the paper delivers a robust, scalable, and statistically sound synchronous parallel KMC framework for lattice‑based models. By integrating null events for global time synchronization and a chessboard sub‑lattice scheme for conflict‑free updates, the method achieves high parallel efficiency even on billion‑atom systems. The successful computation of critical exponents validates the approach and opens the door to its application in other large‑scale statistical‑physics problems, such as kinetic Ising models with external fields, lattice‑gas adsorption, or reaction‑diffusion systems with longer‑range interactions.