Amplitude Surrogates for Multi-Jet Processes
Accurate and efficient amplitude predictions are essential for precision studies of multi-jet processes at the LHC. We introduce a novel neural network architecture that predicts multi-jet amplitudes by leveraging the Catani-Seymour factorization scheme and related lower-jet amplitudes, requiring the network to learn only a correction factor. This hybrid approach combines theoretical factorization with a data-driven ansatz, enabling fast and scalable amplitude predictions. Our networks also estimate the accuracy of each prediction, allowing us to selectively use results that meet a predefined accuracy threshold. In the context of leading-order event generation, this approach achieves speed-up factors of up to 20 while maintaining percent-level accuracy across all observables.
💡 Research Summary
The pursuit of precision in high-energy physics, particularly within the context of Large Hadron Collider (LHC) experiments, necessitates highly accurate predictions for multi-jet production processes. However, the computational complexity of calculating multi-leg amplitudes grows exponentially with the number of jets, creating a significant bottleneck in the event generation pipeline. This paper addresses this challenge by introducing a novel neural network-based surrogate model that integrates fundamental theoretical physics with data-driven machine learning.
The core innovation of this work is a hybrid architecture that leverages the Catani-Seymour (CS) factorization scheme. Rather than employing a “black-box” approach in which a neural network attempts to learn the entire amplitude from scratch, the authors exploit the mathematical structure of QCD factorization. By decomposing an $n$-jet amplitude into $(n-1)$-jet amplitudes combined with singular splitting kernels, the network needs to predict only the remaining “correction factor.” This strategic reduction in the learning task’s complexity allows the network to focus on the residual difference between the factorized approximation and the full amplitude, significantly easing training and improving scalability.
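The idea of predicting only a correction factor on top of a factorization-based ansatz can be sketched as follows. This is a toy illustration, not the paper’s implementation: `cs_ansatz` stands in for the actual Catani-Seymour dipole sum over $(n-1)$-jet amplitudes, and `toy_net` stands in for the trained correction-factor network; all names here are hypothetical.

```python
import numpy as np

def cs_ansatz(momenta):
    """Toy stand-in for the factorization ansatz:
    in the paper this would be a sum of CS dipole kernels
    multiplied by the corresponding (n-1)-jet amplitudes."""
    return np.sum(momenta**2, axis=-1)

def surrogate_amplitude(momenta, correction_net):
    """Hybrid prediction: factorization ansatz times a learned
    correction factor. A perfectly trained network would return
    |M_n|^2 / ansatz, so its output should stay close to 1."""
    base = cs_ansatz(momenta)
    delta = correction_net(momenta)
    return base * delta

# Constant stand-in for the correction network (hypothetical).
toy_net = lambda p: np.full(p.shape[0], 1.05)

rng = np.random.default_rng(0)
phase_space = rng.random((4, 3))       # 4 toy phase-space points
pred = surrogate_amplitude(phase_space, toy_net)
```

Because the ansatz already encodes the singular limits of QCD, the residual the network must learn is smooth and of order one, which is an easier regression target than the full amplitude.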
A standout feature of this proposed architecture is its integrated uncertainty estimation. The network does not merely provide a point estimate for the amplitude; it also quantifies the expected accuracy of each prediction. This capability enables a “selective inference” strategy, where researchers can filter out predictions that do not meet a predefined accuracy threshold, thereby ensuring that the high-precision requirements of LHC physics are strictly maintained.
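The selective-inference strategy amounts to a simple acceptance cut on the network’s own error estimate. The sketch below assumes (hypothetically) that the network outputs a per-point prediction together with an absolute error estimate; events failing the cut would be routed to the exact amplitude evaluation instead.

```python
import numpy as np

def selective_inference(preds, err_estimates, threshold=0.01):
    """Return a boolean mask of predictions whose estimated
    relative error is below the threshold (e.g. 1%). Rejected
    points fall back to the exact amplitude calculation."""
    rel_err = np.abs(err_estimates) / np.abs(preds)
    return rel_err < threshold

preds = np.array([10.0, 5.0, 2.0])   # surrogate amplitude values
errs  = np.array([0.05, 0.2, 0.01])  # network's own error estimates
mask = selective_inference(preds, errs, threshold=0.01)
# relative errors 0.005, 0.04, 0.005 -> accept, reject, accept
```

The threshold directly trades speed-up against guaranteed accuracy: a tighter cut sends more events to the exact (slow) calculation but keeps every retained prediction within the stated tolerance.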
The empirical results demonstrate the effectiveness of this approach. In the context of leading-order (LO) event generation, the surrogate model achieves speed-up factors of up to 20 compared to traditional methods. Crucially, this computational gain comes without compromising physical fidelity: all key observables remain within percent-level accuracy of the exact calculations. The work thus exemplifies a broader shift toward physics-informed machine learning, providing a scalable and efficient framework for high-precision particle physics simulations.