Physics-informed data-driven inference of an interpretable equivariant LES model of incompressible fluid turbulence

Reading time: 5 minutes

📝 Original Info

  • Title: Physics-informed data-driven inference of an interpretable equivariant LES model of incompressible fluid turbulence
  • ArXiv ID: 2602.15743
  • Date: 2026-02-17
  • Authors: Roman Grigoriev (Georgia Institute of Technology), corresponding author; the other co-authors contributed equally as joint first authors

📝 Abstract

Restrictive phenomenological assumptions represent a major roadblock for the development of accurate subgrid-scale models of fluid turbulence. Specifically, these assumptions limit a model's ability to describe key quantities of interest, such as local fluxes of energy and enstrophy, in the presence of diverse coherent structures. This paper introduces a symbolic data-driven subgrid-scale model that requires no phenomenological assumptions and has no adjustable parameters, yet it outperforms leading LES models. A combination of a priori and a posteriori benchmarks shows that the model produces accurate predictions of various quantities including local fluxes across a broad range of two-dimensional turbulent flows. While the model is inferred using LES-style spatial coarse-graining, its structure is more similar to RANS models, as it employs an additional field to describe subgrid scales. We find that this field must have a rank-two tensor structure in order to correctly represent both the components of the subgrid-scale stress tensor and the various fluxes.

📄 Full Content

The objective of large eddy simulations (LES) is to provide an accurate and general description of the evolution of turbulent fluid flows at high Reynolds numbers, where the cost of direct numerical simulation becomes prohibitive. Rather than explicitly resolving the small (unresolved) scales, LES represents their effect on the large (resolved) scales through a closure term in the momentum equation, expressed in terms of the subgrid-scale (SGS) stress tensor τ_ij. In the most common LES formulations, this tensor is assumed to depend only on the resolved variables, e.g., the filtered velocity ū, thereby strongly constraining the admissible structure of phenomenological closures. An example of this is the restrictive Boussinesq assumption, which posits a linear relation between these tensors, τ_(ij) = 2ν_e ∇_(i ū_j), where ν_e is the scalar eddy viscosity and parentheses denote the trace-free symmetric component of a rank-2 tensor.
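The Boussinesq relation above can be made concrete with a minimal numpy sketch for a 2D filtered velocity field. The function name, grid setup, and the treatment of ν_e as a given scalar field are illustrative assumptions, not the paper's model:

```python
import numpy as np

def boussinesq_sgs_stress(u_bar, v_bar, nu_e, dx, dy):
    """Trace-free Boussinesq closure tau_(ij) = 2 nu_e S_ij for a 2D
    filtered velocity (u_bar, v_bar) on a uniform grid.
    nu_e may be a constant or a field of the same shape (assumption)."""
    dudx = np.gradient(u_bar, dx, axis=0)
    dudy = np.gradient(u_bar, dy, axis=1)
    dvdx = np.gradient(v_bar, dx, axis=0)
    dvdy = np.gradient(v_bar, dy, axis=1)
    # Symmetric part of the resolved velocity gradient (strain rate)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    # Subtract the trace so tau_(ij) is the deviatoric (trace-free) part
    trace = 0.5 * (s11 + s22)
    tau11 = 2.0 * nu_e * (s11 - trace)
    tau22 = 2.0 * nu_e * (s22 - trace)
    tau12 = 2.0 * nu_e * s12
    return tau11, tau12, tau22
```

By construction tau11 + tau22 vanishes identically, which is exactly what "trace-free" means for a rank-2 tensor in 2D.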

Within this framework, phenomenological SGS models can be broadly classified into functional, structural, and hybrid approaches. Functional models mainly aim to introduce a statistically appropriate amount of energy dissipation. Indeed, this is the key role of the SGS stress tensor, as turbulent transport typically induces a net energy transfer towards smaller scales. Prominent examples of functional models include the Smagorinsky model (Smagorinsky 1963) and its dynamic version (Lilly 1992; Germano et al. 1991), both of which rely on the Boussinesq assumption. In contrast, structural models seek to reproduce the SGS stress tensor itself by using a formal series expansion or scaling arguments. Representative examples include the nonlinear/gradient model (NGM) (Leonard 1975; Clark et al. 1979a) and the similarity model (Bardina et al. 1980). Hybrid approaches combine elements of both classes and include models such as the dynamic nonlinear mixed model (Vreman et al. 1996) and the dynamic mixed model (Bardina et al. 1980).

† These authors contributed equally and share first authorship.
‡ Email address for correspondence: roman.grigoriev@physics.gatech.edu
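The classic Smagorinsky closure mentioned above sets the eddy viscosity from the local strain-rate magnitude, ν_e = (C_s Δ)² |S̄| with |S̄| = √(2 S̄_ij S̄_ij). A minimal sketch follows; the function name, the choice of Δ as the grid scale, and the value of C_s are illustrative assumptions:

```python
import numpy as np

def smagorinsky_nu_e(u_bar, v_bar, dx, dy, c_s=0.17):
    """Smagorinsky eddy viscosity nu_e = (C_s * Delta)^2 * |S| for a
    2D filtered velocity field; C_s = 0.17 is a commonly quoted but
    illustrative value."""
    dudx = np.gradient(u_bar, dx, axis=0)
    dudy = np.gradient(u_bar, dy, axis=1)
    dvdx = np.gradient(v_bar, dx, axis=0)
    dvdy = np.gradient(v_bar, dy, axis=1)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    # |S| = sqrt(2 S_ij S_ij); the off-diagonal term appears twice
    s_mag = np.sqrt(2.0 * (s11**2 + 2.0 * s12**2 + s22**2))
    delta = np.sqrt(dx * dy)  # filter width taken as the grid scale (assumption)
    return (c_s * delta) ** 2 * s_mag
```

Because ν_e is non-negative by construction, this closure can only dissipate resolved energy, which is precisely why such functional models cannot represent backscatter.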

Current phenomenological SGS models mostly rely on restrictive physical assumptions such as homogeneity, isotropy, and scale invariance. These are frequently violated, for instance in flows dominated by pronounced coherent structures, where such models become inaccurate even in describing basic properties such as energy dissipation (Moser et al. 2021). Even when these assumptions appear to be approximately satisfied, phenomenological models often fail to capture the correct interscale energy transfer, in particular energy fluxes from small to large scales, commonly referred to as backscatter (Vreman et al. 1997), which can be dynamically significant for some types of turbulent flows.

Recent progress in machine learning (ML) has enabled discovery of explicit parameterizations of the closure term using data generated by direct numerical simulations (DNS). The most common approach is based on the application of the Cayley-Hamilton theorem to express the tensor τ_ij as a finite sum of linearly independent tensor basis functions, with coefficients given by scalar functions of a finite number of invariants of the resolved velocity gradient tensor ∇_i ū_j (Pope 1975). While this representation is formally complete, inferring the functional form of each coefficient over all of the invariants is extremely challenging. Consequently, practical implementations typically rely on truncating the expansion or restricting the number of invariants retained. Examples include sparse regression approaches such as random forest regression (Ling et al. 2016) and sequential thresholding ridge regression (Schmelzer et al. 2018), and symbolic regression methods based on genetic expression programming (GEP) (Weatheritt & Sandberg 2017; Reissmann et al. 2021).

Sparse regression has also been used to learn the explicit functional form of the closure term, component by component, in terms of the derivatives of the resolved fields but without relying on the Cayley-Hamilton theorem. For instance, relevance vector machines (RVM) have been used to infer a quadratic parameterization resembling NGM for oceanic flows (Zanna & Bolton 2020a) and two-dimensional turbulence (Jakhar et al. 2024). A hybrid sparse/symbolic regression algorithm was employed to identify a (scalar) closure in a two-layer model of quasi-geostrophic turbulence (Ross et al. 2023), which involves higher-order derivatives of the large-scale variables (here, velocity and potential vorticity). However, none of these approaches ensure that the inferred closure transforms correctly under rotations or yields stable evolution (Jakhar et al. 2024).
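The sequential thresholding ridge regression mentioned above can be sketched in a few lines: repeatedly fit a ridge regression over a library of candidate terms, zero out coefficients below a threshold, and refit on the survivors. This is a generic sketch of the technique, not the cited papers' implementations; the parameter values are illustrative:

```python
import numpy as np

def stridge(theta, y, lam=1e-3, tol=0.1, n_iter=10):
    """Sequentially thresholded ridge regression (STRidge sketch).
    theta: (n_samples, n_features) library of candidate closure terms.
    y:     (n_samples,) target, e.g. a component of the SGS stress."""
    n_feat = theta.shape[1]
    keep = np.ones(n_feat, dtype=bool)
    coef = np.zeros(n_feat)
    for _ in range(n_iter):
        if not keep.any():
            break
        A = theta[:, keep]
        # Ridge solution restricted to the surviving library terms
        w = np.linalg.solve(A.T @ A + lam * np.eye(keep.sum()), A.T @ y)
        coef[:] = 0.0
        coef[keep] = w
        keep = np.abs(coef) > tol  # threshold small coefficients
    coef[~keep] = 0.0
    return coef
```

On synthetic data generated by a sparse linear combination of library terms, the iteration typically recovers the true support exactly, which is the property that makes the resulting closures interpretable symbolic expressions rather than black boxes.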

The bulk of ML studies use deep learning (DL) to yield an implicit, neural network-based parameterization. Representative examples include closures for the momentum equation in two-dimensional (Maulik et al. 2019; Kochkov et al. 2021) and three-dimensional turbulence.

This content is AI-processed based on open access ArXiv data.
