Why summation by parts is not enough
We investigate the construction and performance of summation-by-parts (SBP) operators, which offer a powerful framework for the systematic development of structure-preserving numerical discretizations of partial differential equations. Previous approaches for the construction of SBP operators have usually relied on either local methods or sparse differentiation matrices, as commonly used in finite difference schemes. However, these methods often impose implicit requirements that are not part of the formal SBP definition. We demonstrate that adherence to the SBP definition alone does not guarantee the desired accuracy, and we identify conditions for SBP operators to achieve both accuracy and stability. Specifically, we analyze error minimization for an augmented basis, discuss the role of sparsity, and examine the importance of nullspace consistency in the construction of SBP operators. Furthermore, we show how these design criteria can be integrated into a recently proposed optimization-based construction procedure for function space SBP (FSBP) operators on arbitrary grids. Our findings are supported by numerical experiments that illustrate the improved accuracy of numerical solutions obtained with the proposed SBP operators.
💡 Research Summary
The paper investigates the construction and performance of summation‑by‑parts (SBP) operators, focusing on why the formal SBP definition alone is insufficient for achieving both high accuracy and stability in the numerical solution of hyperbolic partial differential equations. SBP operators are defined by three conditions: (i) exactness on a chosen finite‑dimensional function space F, (ii) a symmetric positive‑definite norm matrix P, and (iii) the SBP property Q+Qᵀ = B, where B encodes the boundary contributions. While these conditions guarantee discrete integration‑by‑parts and energy stability, the authors demonstrate that they do not automatically enforce other crucial properties required in practice.
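The three defining conditions can be checked concretely on the classical second-order finite-difference SBP operator. The following is a minimal sketch (grid size and names are illustrative, not taken from the paper's code):

```python
import numpy as np

# Classical second-order FD-SBP first-derivative operator on a uniform
# grid, used to verify the three defining SBP conditions stated above.
N, h = 10, 1.0 / 9
x = np.linspace(0.0, 1.0, N)

# (ii) Norm matrix P: symmetric positive definite (here diagonal).
P = h * np.diag([0.5] + [1.0] * (N - 2) + [0.5])

# Q: central interior stencil with first-order boundary closures.
Q = 0.5 * (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5

# B encodes the boundary contributions.
B = np.zeros((N, N))
B[0, 0], B[-1, -1] = -1.0, 1.0

D = np.linalg.solve(P, Q)  # differentiation matrix D = P⁻¹Q

# (iii) SBP property Q + Qᵀ = B (discrete integration by parts).
assert np.allclose(Q + Q.T, B)
# (i) exactness on F = span{1, x}: D·1 = 0 and D·x = 1.
assert np.allclose(D @ np.ones(N), 0.0)
assert np.allclose(D @ x, np.ones(N))
```

Here exactness holds only for the small space F = span{1, x}; the paper's point is that conditions (i)–(iii) alone, even when satisfied exactly, do not pin down a useful operator.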
The authors first review existing SBP constructions, distinguishing dense local operators (typically used in finite‑element‑type methods) from sparse global operators (used in finite‑difference schemes). Classical finite‑difference SBP operators are limited to polynomial bases on uniform or mildly non‑uniform grids, whereas recent optimization‑based approaches allow arbitrary node distributions and general function spaces (FSBP). The optimization framework treats the unknown matrices S (antisymmetric part of Q) and P as variables, parameterized by vectors σ and ρ, and solves a non‑convex least‑squares problem to enforce the exactness condition XW + BV/2 = 0.
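The optimization framework can be sketched as follows, assuming the exactness condition is posed in the equivalent form (S + B/2)F = PF′ with F a matrix of basis-function values; the function space, grid size, and residual form here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import least_squares

# Hedged sketch of the optimization-based FSBP construction: the
# antisymmetric part S of Q and a diagonal norm matrix P are
# parameterized by vectors sigma and rho, and the exactness condition
# (S + B/2) F = P F' is enforced in a least-squares sense.
N = 6
x = np.linspace(0.0, 1.0, N)
B = np.zeros((N, N))
B[0, 0], B[-1, -1] = -1.0, 1.0

# Example function space F = span{1, x, exp(x)} and its derivatives.
F = np.column_stack([np.ones(N), x, np.exp(x)])
Fx = np.column_stack([np.zeros(N), np.ones(N), np.exp(x)])

iu = np.triu_indices(N, k=1)  # free (strictly upper) entries of S

def unpack(theta):
    sigma, rho = theta[:iu[0].size], theta[iu[0].size:]
    S = np.zeros((N, N))
    S[iu] = sigma
    S = S - S.T                 # antisymmetry of S by construction
    P = np.diag(np.exp(rho))    # positivity of P by construction
    return S, P

def residual(theta):
    S, P = unpack(theta)
    return ((S + 0.5 * B) @ F - P @ Fx).ravel()

theta0 = np.concatenate([np.zeros(iu[0].size), np.full(N, np.log(1.0 / N))])
sol = least_squares(residual, theta0)
S, P = unpack(sol.x)
D = np.linalg.solve(P, S + 0.5 * B)
print("max exactness defect:", np.max(np.abs(D @ F - Fx)))
```

Parameterizing S antisymmetrically and P through exponentials builds the SBP property and positive-definiteness directly into the search space, so only exactness remains as a residual.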
To illustrate the shortcomings of relying solely on the SBP definition, the paper presents counterexamples. An extreme case with a rank‑one differentiation matrix and an almost singular norm matrix satisfies the SBP constraints but clearly fails to approximate derivatives in a non‑periodic domain. More realistic experiments construct FSBP operators of various polynomial degrees d on a uniform grid with N=50 points and solve the linear advection equation ∂ₜu + a∂ₓu = 0 using a third‑order SSP‑RK scheme. Despite being exact for polynomials up to degree d, operators with d ≤ 3 produce large errors, especially for higher‑frequency initial data (k=2). The errors decrease only when the basis includes high‑degree polynomials (d=9 or 11), indicating that unresolved modes are not sufficiently damped. Energy stability is preserved, confirming that stability alone does not guarantee accuracy.
The authors identify the missing ingredient as null‑space consistency. For a derivative operator D, the null space should contain only constant vectors; otherwise, non‑constant null modes lead to spurious, undamped solution components. Lemma 4.1 shows that null‑space consistency is equivalent to rank(D) = N−1, while Lemma 4.2 provides an alternative test using the modified operator D̃ = D + νP⁻¹e_Le_Lᵀ (ν > 0). The paper also discusses the eigenvalue property, requiring all eigenvalues of D̃ to have positive real parts for any ν > ½, a stronger condition often needed for convergence.
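The rank criterion of Lemma 4.1 translates directly into a numerical test; the operators below are illustrative (the inconsistent one is the classic periodic central difference on an even number of points, not an example taken from the paper):

```python
import numpy as np

# Nullspace-consistency test: D is nullspace consistent iff
# rank(D) = N - 1, i.e. its null space is exactly span{(1, ..., 1)}.
def is_nullspace_consistent(D, tol=1e-10):
    N = D.shape[0]
    # Constants must be differentiated to zero ...
    annihilates_constants = np.allclose(D @ np.ones(N), 0.0, atol=tol)
    # ... and nothing else may lie in the null space.
    return annihilates_constants and np.linalg.matrix_rank(D, tol=tol) == N - 1

# Consistent example: the second-order FD-SBP operator.
N, h = 10, 1.0 / 9
P = h * np.diag([0.5] + [1.0] * (N - 2) + [0.5])
Q = 0.5 * (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5
D_good = np.linalg.solve(P, Q)
print(is_nullspace_consistent(D_good))   # True

# Inconsistent example: the periodic central difference on an even
# number of points also annihilates the alternating vector
# (+1, -1, +1, ...), so its null space is two-dimensional.
D_bad = 0.5 * (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1))
D_bad[0, -1], D_bad[-1, 0] = -0.5, 0.5
print(is_nullspace_consistent(D_bad))    # False for even N
```

The alternating null mode of `D_bad` is precisely the kind of spurious, undamped component described above: it is invisible to the operator and therefore never decays.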
To enforce these properties, two strategies are proposed. First, the optimization problem is augmented with constraints that enforce rank(D)=N−1 or directly penalize the presence of extra null‑vectors, ensuring that only constants lie in the null space. Second, sparsity is addressed by adding regularization terms (e.g., L₁ penalties) on the entries of S and P, preserving a banded structure desirable for finite‑difference implementations while still allowing high‑order accuracy. For dense local operators, regularization helps avoid over‑fitting when large bases (exponential, trigonometric, RBF) are used.
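The sparsity-promoting regularization can be sketched by adding an L₁ penalty to the exactness objective; the fixed trapezoid P, the space F = span{1, x}, and the weight `lam` are illustrative assumptions, not the paper's formulation:

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch of the sparsity-promoting variant: the exactness
# residual is combined with an L1 penalty on the free entries of S,
# nudging the optimizer toward banded (sparse) operators.
N = 6
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
B = np.zeros((N, N))
B[0, 0], B[-1, -1] = -1.0, 1.0
P = h * np.diag([0.5] + [1.0] * (N - 2) + [0.5])  # fixed trapezoid weights

F = np.column_stack([np.ones(N), x])              # F = span{1, x}
Fx = np.column_stack([np.zeros(N), np.ones(N)])

iu = np.triu_indices(N, k=1)  # free (strictly upper) entries of S

def objective(sigma, lam=1e-3):
    S = np.zeros((N, N))
    S[iu] = sigma
    S = S - S.T                                   # antisymmetric S
    exactness = np.sum(((S + 0.5 * B) @ F - P @ Fx) ** 2)
    return exactness + lam * np.sum(np.abs(sigma))  # L1 sparsity penalty

sigma0 = np.zeros(iu[0].size)
sol = minimize(objective, sigma0, method="Powell")
print("penalized objective:", sol.fun)
```

For this small polynomial space the sparse minimizer resembles the classical banded central-difference operator, which is the structure the regularization is meant to preserve in finite-difference settings.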
Numerical experiments confirm that the enhanced FSBP operators outperform classical FD‑SBP operators of comparable order. With polynomial bases up to degree d=11, both the L₂ and L∞ errors drop to the order of 10⁻⁴, whereas fourth‑order FD‑SBP operators exhibit errors an order of magnitude larger on the same grid. The optimization converges reliably even for larger bases, with computation time depending more on the number of nodes than on the basis size.
The paper concludes by emphasizing that SBP operator design must incorporate null‑space consistency, sparsity, and error‑minimization criteria beyond the basic definition. It outlines future work on extending the framework to multi‑dimensional tensor‑product grids, nonlinear conservation laws, and global optimization techniques to guarantee global minima. Overall, the study provides a systematic, optimization‑based methodology for constructing function‑space SBP operators that are both accurate and stable on arbitrary grids, filling a critical gap in the current SBP literature.