A Big-Data Approach to Handle Many Process Variations: Tensor Recovery and Applications
Fabrication process variations are a major source of yield degradation in the nano-scale design of integrated circuits (ICs), microelectromechanical systems (MEMS), and photonic circuits. Stochastic spectral methods are a promising technique for quantifying the uncertainties caused by process variations. Despite their superior efficiency over Monte Carlo for many design cases, these algorithms suffer from the curse of dimensionality; i.e., their computational cost grows very fast as the number of random parameters increases. To address this challenging problem, this paper presents a high-dimensional uncertainty quantification algorithm from a big-data perspective. Specifically, we show that the huge number of simulation samples in standard stochastic collocation (e.g., $1.5 \times 10^{27}$) can be reduced to a very small number (e.g., $500$) by exploiting some hidden structures of a high-dimensional data array. This idea is formulated as a tensor recovery problem with sparse and low-rank constraints, and it is solved with an alternating minimization approach. Numerical results show that our approach can efficiently simulate some IC, MEMS, and photonic problems with over 50 independent random parameters, whereas the traditional algorithm can handle only a few random parameters.
💡 Research Summary
The paper addresses the severe “curse of dimensionality” that hampers stochastic collocation when applied to integrated circuits (ICs), micro‑electromechanical systems (MEMS), and photonic circuits with dozens of independent process parameters. Traditional stochastic spectral methods require a tensor‑product quadrature grid whose size grows as n^d (n points per dimension, d random variables), quickly reaching astronomically large numbers (e.g., 1.5 × 10^27 samples for d≈50). The authors propose a big‑data‑inspired solution: treat the full set of simulation results as a high‑order tensor Y and recover it from a tiny subset of entries Ω (on the order of a few hundred) by exploiting two structural properties. First, the solution tensor is assumed to have a low canonical polyadic (CP) rank, meaning it can be expressed as a sum of a small number of rank‑1 tensors. Second, the generalized polynomial chaos (gPC) expansion of the stochastic response is sparse; most coefficients are near zero. Combining a low‑rank constraint with an ℓ₁‑norm sparsity penalty yields a regularized optimization problem:
min over U^{(1)},…,U^{(d)} of ½‖P_Ω(TCP(U^{(1)},…,U^{(d)}) − Y)‖_F² + λ‖c‖₁,
where TCP denotes the CP reconstruction, P_Ω projects onto the observed entries, and c_α = ⟨X, W_α⟩ are the gPC coefficients obtained by inner products with pre‑computed rank‑1 weight tensors W_α. The problem is solved by alternating minimization: each factor matrix U^{(k)} is updated in turn via a linear least‑squares step, while λ balances low‑rank fidelity against sparsity.
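To make the alternating‑minimization step concrete, here is a minimal sketch for a 3‑way tensor: each factor matrix is updated in turn by solving small linear least‑squares problems restricted to the observed entries Ω. This is an illustration under simplifying assumptions (3 modes, no ℓ₁ term on the gPC coefficients; the function names are hypothetical), not the paper's full algorithm:

```python
import numpy as np

def cp_reconstruct(U):
    """Dense tensor from 3-way CP factors: sum_r U1[:,r] ∘ U2[:,r] ∘ U3[:,r]."""
    return np.einsum('ir,jr,kr->ijk', U[0], U[1], U[2])

def masked_als(Y, mask, rank, sweeps=100, seed=0):
    """Recover CP factors of Y from its observed entries (mask == True).

    Each sweep updates one factor matrix at a time; with the other
    factors fixed, every row of the current factor is the solution of a
    small least-squares problem over the observed entries in its slice.
    (Simplified: the paper's l1 sparsity penalty on the gPC
    coefficients is omitted here.)
    """
    rng = np.random.default_rng(seed)
    shape = Y.shape
    U = [rng.standard_normal((n, rank)) for n in shape]
    idx = np.argwhere(mask)                    # (num_obs, 3) multi-indices
    for _ in range(sweeps):
        for k in range(3):
            o1, o2 = [m for m in range(3) if m != k]
            for i in range(shape[k]):
                rows = idx[idx[:, k] == i]     # observed entries in slice i
                if rows.size == 0:
                    continue
                # Design matrix: row-wise (Khatri-Rao style) product of
                # the other two factors at the observed indices.
                A = U[o1][rows[:, o1]] * U[o2][rows[:, o2]]
                b = Y[tuple(rows.T)]
                U[k][i], *_ = np.linalg.lstsq(A, b, rcond=None)
    return U
```

On an exactly low-rank tensor with a majority of entries observed, this masked alternating least-squares loop typically drives the residual on Ω to near zero within a few dozen sweeps.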
The methodology proceeds as follows: (1) generate 1‑D Gauss quadrature points and weights for each random variable; (2) select a small sampling set Ω and run the expensive circuit/device simulator only on those points; (3) solve the tensor recovery problem to obtain a low‑rank approximation X of the full solution tensor; (4) compute the gPC coefficients from X; (5) evaluate statistical quantities (mean, variance, PDFs) of the performance metric.
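Step (4) is cheap precisely because each weight tensor W_α is rank‑1: the inner product of a CP‑format tensor with a rank‑1 tensor factorizes into d one‑dimensional dot products, costing O(d·n·r) instead of the O(n^d) a dense inner product would require. A minimal sketch of this factorized inner product (hypothetical helper name, NumPy):

```python
import numpy as np

def cp_rank1_inner(U, w):
    """<X, W> for X in CP format (factors U) and rank-1 W = w[0] ∘ w[1] ∘ ...

    The d-dimensional sum collapses into one 1-D dot product per mode,
    so the full tensor X never has to be formed.
    """
    acc = np.ones(U[0].shape[1])   # one accumulator per CP rank component
    for Uk, wk in zip(U, w):
        acc *= wk @ Uk             # 1-D dot product along this mode
    return acc.sum()
```

For a tiny 3‑way case this agrees with the brute‑force dense inner product `(X * W).sum()`, while scaling to dimensions where the dense tensor could never be stored.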
Three realistic case studies are presented: (i) an analog IC power‑consumption model with 55 random parameters (polynomial order 3); (ii) a MEMS resonator frequency model with 52 parameters (order 4); and (iii) a photonic wavelength‑shift model with 58 parameters (order 3). In each case, the full tensor‑product collocation grid contains an astronomically large number of points (on the order of 1.5 × 10^27, per the abstract), whereas the proposed approach uses only 400–700 actual simulations. The recovered gPC expansions achieve mean absolute errors below 1 % and standard‑deviation errors below 2 %, matching or surpassing Monte‑Carlo results while delivering speed‑ups of two orders of magnitude. Moreover, the recovered CP rank is modest (r ≈ 7–9), leading to memory footprints of only a few megabytes.
In summary, the authors demonstrate that high‑dimensional uncertainty quantification can be reframed as a tensor recovery problem with low‑rank and sparsity priors. This reframing dramatically reduces the number of required expensive simulations, enabling practical stochastic analysis of complex IC, MEMS, and photonic designs with over 50 process variations. Future work is suggested on adaptive sampling, non‑linear low‑rank models, and integration into real‑time design optimization loops.