Multi-material Multi-physics Topology Optimization with Physics-informed Gaussian Process Priors
Machine learning (ML) has been increasingly used for topology optimization (TO). However, most existing ML-based approaches focus on simplified benchmark problems due to their high computational cost, spectral bias, and difficulty in handling complex physics. These limitations become more pronounced in multi-material, multi-physics problems whose objective or constraint functions are not self-adjoint. To address these challenges, we propose a framework based on physics-informed Gaussian processes (PIGPs). In our approach, the primary, adjoint, and design variables are represented by independent GP priors whose mean functions are parametrized via neural networks with architectures particularly well suited to surrogate modeling of PDE solutions. We estimate all parameters of our model simultaneously by minimizing a loss that is based on the objective function, multi-physics potential energy functionals, and design constraints. We demonstrate the capability of the proposed framework on benchmark TO problems such as compliance minimization, heat conduction optimization, and compliant mechanism design under single- and multi-material settings. Additionally, we consider thermo-mechanical TO with single- and multi-material options as a representative multi-physics problem. We also introduce differentiation and integration schemes that dramatically accelerate the training process. Our results demonstrate that the proposed PIGP framework can effectively solve coupled multi-physics and design problems simultaneously, generating super-resolution topologies with sharp interfaces and physically interpretable material distributions. We validate these results using open-source codes and the commercial software package COMSOL.
💡 Research Summary
The paper introduces a novel framework for tackling multi‑material, multi‑physics topology optimization (TO) problems by leveraging physics‑informed Gaussian processes (PIGPs). Traditional machine‑learning‑based TO methods have largely been confined to simple benchmark cases because of high computational cost, spectral bias of neural networks, and difficulty handling complex, non‑self‑adjoint physics. Moreover, the nested analysis‑and‑design (NAND) paradigm, which solves state equations separately from design updates, becomes prohibitively expensive when many materials and coupled physics are involved.
To overcome these limitations, the authors model three distinct sets of variables—design densities (ρ), primary state fields (e.g., displacement u and temperature T), and adjoint fields (λ)—as independent Gaussian processes. Each GP is equipped with a kernel that automatically satisfies Dirichlet‑type boundary conditions, ensuring that the solution space is physically admissible. The mean functions of the GPs are parameterized by a specialized neural architecture called the Parametric Grid Convolution Attention Network (PGCAN). PGCAN mitigates the spectral bias of multilayer perceptrons, captures sharp gradients, and represents localized features that are essential for crisp material interfaces.
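The boundary-condition-by-construction idea can be illustrated with a minimal NumPy sketch (not the authors' kernel or PGCAN code): an unconstrained network output is multiplied by a function that vanishes on the Dirichlet boundary and shifted by the prescribed boundary values, so the composed field satisfies the condition exactly for any network weights. The tiny random MLP, the choice of boundary edge, and the boundary data `g` below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the PGCAN mean function: a tiny random MLP.
W1, b1 = rng.normal(size=(16, 2)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def mlp(x):
    """Unconstrained network output at points x of shape (n, 2)."""
    h = np.tanh(x @ W1.T + b1)
    return (h @ W2.T + b2).ravel()

def g(x):
    """Assumed Dirichlet data on the edge x0 = 0 of the unit square."""
    return np.sin(np.pi * x[:, 1])

def u(x):
    # d(x) = x0 vanishes on the Dirichlet boundary x0 = 0, so u = g there
    # exactly, regardless of the network weights.
    d = x[:, 0]
    return g(x) + d * mlp(x)
```

Because the constraint is satisfied by construction, no boundary penalty term is needed in the loss, which is the practical payoff of building admissibility into the prior.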
The hyper‑parameters of the kernels are fixed a priori, which eliminates the need for repeated matrix inversions and reduces the usual O(N³) scaling of GP inference. Only the parameters of the PGCAN mean functions are learned. The loss function combines three components: (1) the original TO objective (e.g., compliance, thermal dissipation) augmented with adjoint contributions to embed sensitivity information; (2) variational potential‑energy terms for both the primary and adjoint fields, enforcing the governing PDEs in a weak form; and (3) design constraints such as volume or cost fractions expressed as spatial integrals. This loss is fully differentiable; gradients are obtained via reverse‑mode automatic differentiation.
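To make the three-part loss concrete, here is a purely illustrative NumPy sketch (not the authors' implementation) on a 1D bar: a compliance-style objective, a potential-energy term for the primary field with an assumed SIMP-style `rho**3` stiffness penalization, and a quadratic penalty on the volume-fraction constraint. The weights, the unit distributed load, and the discretization are all assumptions for illustration.

```python
import numpy as np

# Toy 1D design domain [0, 1] discretized into n cells.
n = 100
x = (np.arange(n) + 0.5) / n
rho = 0.5 + 0.3 * np.sin(2 * np.pi * x)   # candidate density field
u = x * (1.0 - x)                          # candidate primary field (u = 0 at both ends)

def total_loss(rho, u, w_energy=1.0, w_con=10.0, vol_frac=0.5):
    dx = 1.0 / len(rho)
    du = np.gradient(u, dx)
    # (1) objective: compliance-like measure, integral of f*u with unit load f = 1
    objective = np.sum(u) * dx
    # (2) potential energy of the primary field: strain energy minus load work,
    #     with an assumed rho**3 stiffness interpolation
    energy = 0.5 * np.sum(rho**3 * du**2) * dx - np.sum(u) * dx
    # (3) quadratic penalty on the volume-fraction constraint
    vol = np.sum(rho) * dx
    constraint = (vol - vol_frac) ** 2
    return objective + w_energy * energy + w_con * constraint
```

In the actual framework the gradient of such a loss with respect to all GP mean parameters would come from reverse-mode automatic differentiation; the sketch only shows how the three terms are assembled into one scalar.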
A key computational innovation is the use of dynamic collocation points that correspond to groups of finite‑element (FE) cells. Shape functions and reduced‑order Gauss integration are employed to evaluate the spatial integrals efficiently, avoiding mesh‑dependent errors while keeping the number of evaluation points modest. Curriculum training is applied: the optimization starts with a coarse resolution and low material contrast, then progressively refines the collocation set and deepens the network, leading to rapid convergence and elimination of gray regions.
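The cell-wise quadrature idea can be sketched as follows: a minimal NumPy example of 2x2 Gauss-Legendre integration over a structured grid of cells covering the unit square. This is not the authors' code (their scheme also evaluates the fields through FE shape functions and groups cells dynamically); it only shows how per-cell Gauss points keep the number of evaluation points modest while integrating accurately.

```python
import numpy as np

# 2-point Gauss-Legendre rule on the reference interval [-1, 1].
gp = np.array([-1.0, 1.0]) / np.sqrt(3.0)
pts = np.array([(a, b) for a in gp for b in gp])   # 4 tensor-product points
wts = np.ones(4)                                   # each tensor-product weight is 1*1

def integrate_on_grid(f, nx, ny):
    """Integrate f(x, y) over [0, 1]^2 with 2x2 Gauss quadrature per cell."""
    hx, hy = 1.0 / nx, 1.0 / ny
    total = 0.0
    for i in range(nx):
        for j in range(ny):
            # map reference points to the physical cell centered at (xc, yc)
            xc, yc = (i + 0.5) * hx, (j + 0.5) * hy
            xq = xc + 0.5 * hx * pts[:, 0]
            yq = yc + 0.5 * hy * pts[:, 1]
            jac = (hx / 2.0) * (hy / 2.0)          # Jacobian of the affine map
            total += np.sum(wts * f(xq, yq)) * jac
    return total
```

With only four points per cell, this rule is exact for bilinear integrands and converges rapidly for smooth fields, which is why a reduced-order rule suffices for the spatial integrals in the loss.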
The framework is validated on a suite of 2‑D and 3‑D benchmark problems: compliance minimization, heat‑conduction optimization, compliant‑mechanism design, and coupled thermo‑mechanical design. Each case is examined under both single‑material and multi‑material (up to four phases) settings. Comparisons are made against the classic SIMP method, the polygonal‑FE‑based PolyMat approach, and the commercial multiphysics solver COMSOL. Results show that the PIGP method consistently produces higher‑resolution topologies with sharp material interfaces, respects all physics constraints, and yields physically interpretable material distributions. In the thermo‑mechanical examples, the simultaneous solution of temperature and displacement fields captures interaction effects that sequential methods miss.
Overall, the paper contributes three major advances: (i) a unified probabilistic representation that blends kernel‑based physics enforcement with deep‑learning expressiveness; (ii) a fully variational, physics‑aware loss that integrates the objective, energy functionals, and constraints; and (iii) an efficient integration scheme based on FE shape functions and adaptive collocation. These innovations enable scalable, accurate, and mesh‑free TO for problems that were previously out of reach for ML‑based methods. Suggested future work includes extending the approach to large‑scale 3‑D problems, non‑linear material models, and automated hyper‑parameter tuning for real‑time design assistance.