Transfer Learning-Based Surrogate Modeling for Nonlinear Time-History Response Analysis of High-Fidelity Structural Models
📝 Abstract
In a performance-based earthquake engineering (PBEE) framework, nonlinear time-history response analysis (NLTHA) for numerous ground motions is required to assess the seismic risk of buildings and civil engineering structures. However, such numerical simulations are computationally expensive, limiting the real-world practical application of the framework. To address this issue, previous studies have used machine learning to predict the structural responses to ground motions at low computational cost. These studies typically conduct NLTHAs for a few hundred ground motions and use the results to train and validate surrogate models. However, most previous studies have focused on computationally inexpensive response analysis models, such as single-degree-of-freedom models. Surrogate models of high-fidelity response analyses are required to enrich the quantity and diversity of information used for damage assessment in PBEE. Notably, the computational cost of creating training and validation datasets increases as the fidelity of the response analysis model increases. Therefore, methods that enable surrogate modeling of high-fidelity response analyses without a large number of training samples are needed. This study proposes a framework that uses transfer learning to construct the surrogate model of a high-fidelity response analysis model. The framework uses a surrogate model of a low-fidelity response analysis as the pretrained model and transfers its knowledge to construct surrogate models for high-fidelity response analysis at substantially reduced computational cost. As a case study, surrogate models that predict the responses of a 20-story steel moment frame were constructed with only 20 training samples. The responses to ground motions predicted by the constructed surrogate models were consistent with a site-specific time-based hazard.
📄 Content
Future earthquakes are inherently uncertain phenomena: the damage they cause and its frequency cannot be predicted deterministically. To support rational design decision-making, a framework called performance-based earthquake engineering (PBEE) [1] [2] [3] was proposed to evaluate such seismic risk probabilistically. The framework consists of the following four steps. First, the seismic hazard at the site is assessed using probabilistic seismic hazard analysis (PSHA) [4], quantified by the exceedance probability of ground motion intensity measures (IMs).
Second, engineering demand parameters (EDPs) of structures, such as peak inter-story drift [5], peak floor acceleration, and permanent residual displacement [6], are evaluated by conducting response analyses or by using empirical formulas [7] [8] [9]. Third, component damage states are evaluated based on fragility curves, which are functions of the EDPs. Finally, the consequences, such as fatalities, economic loss, and downtime, and their associated likelihoods of occurrence are assessed by integrating the information obtained in the previous three steps, resulting in decision variables.
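The chaining of these steps can be illustrated numerically. The sketch below collapses the chain to the hazard and demand stages: a hazard curve over the IM is combined with a lognormal demand model to give the mean annual frequency of a drift demand exceeding a threshold. All curves and parameter values here are hypothetical placeholders, not calibrated data from the paper.

```python
import numpy as np
from math import erf

# Hypothetical hazard and demand models (illustrative values only).
im = np.linspace(0.05, 2.0, 400)       # IM grid, e.g., Sa(T1) in g
haz = 1e-2 * im ** -2.5                # hazard curve: annual rate of IM > im
edp_median = 0.01 * im                 # median drift demand given IM
beta = 0.4                             # lognormal dispersion of demand

def frag(edp, median, beta):
    """P(EDP > edp | IM) under a lognormal demand model."""
    z = (np.log(edp) - np.log(median)) / beta
    return 1.0 - 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))

# Mean annual frequency of drift exceeding 1%: integrate the conditional
# exceedance probability against the (positive) hazard-curve slope.
dlam = -np.gradient(haz, im)
integrand = frag(0.01, edp_median, beta) * dlam
mafe = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(im)))
```

Extending the same pattern with fragility and consequence functions conditioned on the EDPs yields the decision variables of the full framework.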
Although PBEE has been extensively studied and has made significant contributions to seismic risk assessment, there is still room for improvement. In particular, the common use of scalar IMs or EDPs limits the framework, as these quantities fail to capture the essential temporal characteristics of ground motions and responses. In the first step of current PBEE, i.e., PSHA, a scalar value such as the spectral acceleration at the fundamental period, Sa(T1), is used. The EDPs are then evaluated with Sa(T1) as input. This simplification results in the loss of physically important information inherent in ground motions. To generate input ground motion time-histories for dynamic response analysis from limited information such as Sa(T1), several ground motion selection methods have been proposed [10]. However, ground motions selected using such methods may still exhibit bias in certain characteristics, such as duration. Advances in PSHA methodologies have the potential to address these issues by enabling the direct synthesis of ground-motion time-histories [11] [12] [13] [14]. Matsumoto et al. [15] [16] [17] proposed a deep-learning-based ground motion generative model (GMGM) capable of predicting the probability distribution of acceleration time-histories of ground motions, which can be used for Monte Carlo-based time-history analyses of structures. Conducting time-history response analyses increases the amount of information available for the evaluation of structural damage. Furthermore, enhancing the fidelity of response analysis models offers another avenue for improving performance evaluation: instead of low-fidelity models such as lumped-mass models, high-fidelity models increase the information available for damage assessment in PBEE.
However, conducting numerous nonlinear time-history response analyses (NLTHAs), especially with high-fidelity response analysis models and numerous input ground motions, incurs high computational costs. Therefore, alternative methods are essential for calculating or predicting structural responses efficiently.
Recent studies have attempted to apply machine learning, particularly deep learning, to predict the time-history responses of structures at low computational cost [18] [19]. For instance, surrogate models that predict the time-history responses of a single-degree-of-freedom (SDOF) model, a simple 2-story steel moment frame, and multi-component bridge structures have been constructed using WaveNet, long short-term memory (LSTM) networks, and convolutional neural networks (CNNs) [20]. Using an attention mechanism, an explainable deep learning model for time-history response prediction has also been constructed [21]. Moreover, physics-guided convolutional neural networks (Phy-CNNs), which embed the physical relationships among acceleration, velocity, and displacement, have achieved accurate predictions with relatively small training datasets [22]. Although these studies demonstrate the feasibility of deep learning for structural response prediction, most focus on low-fidelity or simple response analysis models, for which dataset generation through numerous simulations is computationally inexpensive. For high-fidelity models, however, the cost of conducting hundreds or thousands of NLTHAs to create a dataset is prohibitive. Thus, methods are needed that can construct surrogate models of high-fidelity response analysis models, for which Monte Carlo-scale analysis is impractical, using only a limited number of training samples.
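The physical relationship that Phy-CNN-type models embed is simply that velocity and displacement are successive time integrals of acceleration, so a predicted (acceleration, velocity, displacement) triplet can be penalized when it violates this relation. A minimal numerical sketch of such a consistency residual, on a toy signal (all names and values are illustrative, not the cited architecture):

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 2.0, dt)
acc = np.sin(2 * np.pi * t)        # toy acceleration time-history

# Velocity and displacement as successive time integrals of acceleration.
vel = np.cumsum(acc) * dt
disp = np.cumsum(vel) * dt

# A physics-consistency residual usable as an auxiliary loss term:
# the time derivative of the velocity series should recover the acceleration.
residual = float(np.mean((np.gradient(vel, dt) - acc) ** 2))
```

In a physics-guided network, a term of this form is added to the data-fitting loss, which is one reason such models can train accurately on relatively few samples.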
In this paper, we propose a framework that uses transfer learning to construct surrogate models of high-fidelity response analyses, which are typically computationally expensive to run repeatedly, with a reduced number of training samples.
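The transfer-learning idea can be sketched in miniature: pretrain a small model on abundant low-fidelity response data, then reuse its learned representation and refit only the output layer on a handful of high-fidelity samples. The sketch below uses synthetic stand-in data, a random-feature hidden layer in place of a pretrained network, and simple least-squares head refitting; none of these choices are the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden(x, W, b):
    """Shared feature extractor (stand-in for a pretrained hidden layer)."""
    return np.tanh(x @ W + b)

# "Low-fidelity" task: abundant samples of a cheap response proxy.
X_low = rng.uniform(-1, 1, (500, 3))
y_low = np.sin(X_low.sum(axis=1))

W = rng.normal(0.0, 1.0, (3, 8))   # fixed hidden weights (random features)
b = rng.normal(0.0, 1.0, 8)
H_low = hidden(X_low, W, b)
w_low, *_ = np.linalg.lstsq(H_low, y_low, rcond=None)

# "High-fidelity" task: only 20 samples of a shifted, costlier response.
X_hi = rng.uniform(-1, 1, (20, 3))
y_hi = 1.2 * np.sin(X_hi.sum(axis=1)) + 0.1
H_hi = hidden(X_hi, W, b)
w_fine, *_ = np.linalg.lstsq(H_hi, y_hi, rcond=None)   # refit head only

# Evaluate the fine-tuned surrogate on held-out high-fidelity inputs.
X_test = rng.uniform(-1, 1, (200, 3))
y_test = 1.2 * np.sin(X_test.sum(axis=1)) + 0.1
err = float(np.mean((hidden(X_test, W, b) @ w_fine - y_test) ** 2))
```

Freezing the shared layers and refitting only the head is what lets the high-fidelity dataset stay as small as the 20 samples used in the paper's case study.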