Koopman Operators for Generalized Persistence of Excitation Conditions for Nonlinear Systems


Authors: Nibodh Boddupalli, Aqib Hasnain, Sai Pushpak Nandanoori, and Enoch Yeung

Abstract — It is hard to identify nonlinear biological models strictly from data, with results that are often sensitive to experimental conditions. Automated experimental workflows and liquid handling enable unprecedented throughput, as well as the capacity to generate extremely large datasets. We seek to develop generalized identifiability conditions for informing the design of automated experiments to discover predictive nonlinear biological models. For linear systems, identifiability is characterized by persistence of excitation conditions. For nonlinear systems, no such persistence of excitation conditions exist. We use the input-Koopman operator method to model nonlinear systems and derive identifiability conditions for open-loop systems initialized from a single initial condition. We show that nonlinear identifiability is intrinsically tied to the rank of a given dataset's power spectral density, transformed through the lifted Koopman observable space. We illustrate these identifiability conditions with a simulated synthetic gene circuit model, the repressilator. We illustrate how rank degeneracy in datasets results in overfitted nonlinear models of the repressilator, resulting in poor predictive accuracy. Our findings provide novel experimental design criteria for discovery of globally predictive nonlinear models of biological phenomena.

I. INTRODUCTION

Many physical systems exhibit phenomena with unknown governing dynamics. These systems can be high dimensional and partially modeled or completely unstudied. Self-assembling complex systems, biological networks, Internet-of-Things infrastructure models, smart cities, and social networks are all examples of dynamically evolving systems that frequently are represented by data.
Identifying globally predictive models from data requires data collection strategies that capture all the modes of a system. For example, in biological network modeling and discovery, designing an informative set of experimental conditions can produce global biological models that capture multiple modes of dynamics, including invariant subspaces and multiple equilibria. Unfortunately, there are few identifiability metrics for quantifying the richness, or the informativity, of datasets of nonlinear systems. Further, a dataset may only elicit linear modes of a nonlinear system, even though nonlinear modes may lay dormant. In these scenarios, the accuracy of a model discovery algorithm is often confounded with the informativity or richness of the dataset used to train the model.

Please address correspondence to Nibodh Boddupalli at nibodh@ucsb.edu. Nibodh Boddupalli, Aqib Hasnain, and Enoch Yeung are with the Department of Mechanical Engineering and the Center for Control, Dynamical Systems, and Computation at the University of California Santa Barbara, Santa Barbara, CA 93106, USA. Sai Pushpak Nandanoori is with the Energy and Environment Directorate at the Pacific Northwest National Laboratory.

Nonlinear systems lack generalized criteria for quantifying the information content of a given dataset. Even if a nonlinear model is deemed globally identifiable, this may be under the assumption of continuous noise-free sampling (perfect data) of all states. In linear systems theory, the informativity of a dataset is characterized by its ability to persistently excite all the modes in the transfer function. These conditions, referred to as persistence of excitation (PE) conditions, prescribe rank requirements on either time-series or spectral signals. First introduced in [1], PE is "not (yet) consistently defined" [2]. There are two classical approaches to modeling persistence of excitation [1], [2].
First, one may define criteria on the spectral properties of a control input such that it can elicit a response from the system, thereby uniquely specifying the frequency response model of the system ([3], [4]). Second, one may require a control input that is non-zero in all channels, at least once throughout the time course ([5], [6]). Persistence of excitation conditions can also inform the appropriate construction of a universally exciting input signal, e.g. the construction of the Pseudo-Random Binary Sequence [7], which persistently excites all linear systems. It is thus desirable to establish a modeling framework that extends classical results in linear system identification [3], [4], [6] to the treatment of nonlinear systems.

Recently, an emerging set of operator-theoretic tools has gained traction, centered on discovering linear representations of nonlinear dynamical systems in a lifted space of coordinates [8], [9], [10]. Originally derived for Hamiltonian systems [11], numerical [12] and theoretical [13] techniques for Koopman operator theory enable input [14], [15], [16], [17], [18], [19] and spectral modeling of nonlinear systems [9], [20], deep-learning based models of nonlinear phenomena [21], [22], [23], and study of chaotic and uncountable spectra arising in complex nonlinear phenomena [18], [24], [16], [8], [9].

In this work, we use the Koopman operator method (Sections II and III) to lift nonlinear systems into a linear space and thereby derive generalized persistence of excitation conditions (Section V), following a similar approach taken in linear systems theory. We illustrate how generalized PE conditions inform the design of initial conditions for simulated experiments on the three node repressilator (Section VI).
II. KOOPMAN OPERATOR THEORY

In 1931, Koopman showed the existence of a coordinate transformation ψ ∈ R^{n_L} and a corresponding unitary operator K : R^{n_L} → R^{n_L} for any Hamiltonian (non-dissipative) dynamical system. He showed that the operator K and observable ψ could be used to represent the time evolution of the underlying Hamiltonian system as a linear time-invariant system. This idea has been developed and generalized to other classes of nonlinear systems in recent years ([8], [9], [13], etc.), in the effort to find global representations instead of local approximations.

A discrete-time nonlinear dynamical system with state x_t ∈ R^n at time t ∈ N under f : M → M can be represented as:

    x_{t+1} = f(x_t)    (1)

Then functions ψ, {ψ_i}_{i=1}^∞ ∈ F, i ∈ Z_+ are called "observables," and represent a mapping from the state-space into a lifted set of coordinates. For example, ψ may be a scalar observable comprised of nonlinearities or weighted combinations of the state, such as ψ_1(x_t) = x_{1,t}, ψ_2(x_t) = x_{3,t}^{0.5}, ψ_3(x_t) = x_{2,t} x_{n,t}^2. The true Koopman observables are often approximated in numerical Koopman learning algorithms like extended DMD (eDMD), deepDMD, and Hankel dynamic mode decomposition using functions which define a generic basis on a Hilbert function space, e.g. radial basis functions (RBFs), Hermite polynomials, or a combination of such basis functions in deepDMD [21].

For an analytic function f(x_t), we know from [14] that there exists a countably infinite or finite dimensional Koopman operator K that acts linearly on the observable ψ : M → R under function composition, satisfying the Koopman equation

    Kψ(x_t) = (ψ ∘ f)(x_t) = ψ(f(x_t)) = ψ(x_{t+1}).    (2)

This equation states that the action of the Koopman operator on an observable function is equivalent to the action of the observable function on the vector field of the state.
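As a minimal sketch of the Koopman equation (2), consider a hypothetical two-state polynomial system (our illustrative example, not from the paper) whose lifting with the state-inclusive observable ψ(x) = (x_1, x_2, x_1^2) is exactly finite-dimensional:

```python
import numpy as np

# Illustrative two-state system with an exactly finite-dimensional lifting
# (the system and constants a, b, c are our assumptions, not the paper's):
#   x1+ = a*x1
#   x2+ = b*x2 + c*x1^2
a, b, c = 0.9, 0.5, 0.2

def f(x):
    """One step of the nonlinear dynamics (1)."""
    return np.array([a * x[0], b * x[1] + c * x[0] ** 2])

def psi(x):
    """State-inclusive observable: psi(x) = (x1, x2, x1^2)."""
    return np.array([x[0], x[1], x[0] ** 2])

# Finite-dimensional Koopman operator acting on psi: x1 evolves linearly,
# x2 picks up c*x1^2, and the added observable x1^2 evolves as a^2 * x1^2.
K = np.array([[a,   0.0, 0.0],
              [0.0, b,   c],
              [0.0, 0.0, a ** 2]])

# Koopman equation (2): K psi(x_t) = psi(f(x_t)) holds for every state.
x = np.array([1.3, -0.7])
assert np.allclose(K @ psi(x), psi(f(x)))
```

Here the lifted dimension is n_L = 3 > n = 2; for generic analytic f, no finite set of observables closes exactly and numerical methods such as eDMD approximate K on a chosen dictionary.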
From the above transient behaviour, one can see that this itself is a dynamical system in the space of observables whose time-evolution is governed by an infinite-dimensional operator that preserves the essence of (1). The transformation from (1) to (2) is frequently referred to as "lifting" since the observable ψ(x) ∈ R^{n_L} usually has dimension n_L ≥ n. This is not always the case, but [23] shows that the observables define an expansion of the nonlinearities in the governing equations.

A. Finite Dimensional Approximations

III. KOOPMAN INPUT-OUTPUT THEORY

Our study of the dynamical system (1) ultimately will require treating the initial condition as an input signal (via a Kronecker delta function) to the system. For clarity, we introduce the notion of an input-Koopman operator here, as well as the notion of the Koopman transfer function of a nonlinear system. Given a discrete-time nonlinear system with analytic vector field f(x, u) and control input u_t ∈ R^m, we write the dynamics of the system as:

    x_{t+1} = f(x_t, u_t)
    y_t = h(x_t)

[16] showed that an observable ψ : M × R^m → R^{n_L} can lift the above system to F such that:

    Kψ(x_t, u_t) = ψ(f(x_t, u_t), u_{t+1}) = ψ(x_{t+1}, u_{t+1})

For an exogenous memoryless input, [15] demonstrated that the above can be modified by splitting ψ(x, u) into components ψ_x(x) and ψ_u(x, u), where ψ_x(x) is a stacked vector-valued observable of all scalar-valued observable functions from ψ(x, u) that do not depend on u, and ψ_u(x, u) is a stacked vector-valued observable of all remaining terms. This results in the decomposed representation

    ψ_x(x_{t+1}) = K_x ψ_x(x_t) + K_u ψ̃_u(z_t)
    y = W_h ψ(x_t, u_t)    (3)

where ψ̃_u(z(u_t)) ≡ ψ_u(x, u) and z_t is a stacked vector of all multivariate terms of u_t and x_t.
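As a concrete instance of the decomposition (3), consider a hypothetical scalar bilinear system x⁺ = a·x + b·x·u (an illustrative choice of ours, not from the paper), where ψ_x(x) = x and the only mixed term is z_t = x_t·u_t:

```python
import numpy as np

# Sketch of the split psi_x / psi_u_tilde decomposition (3) for the
# hypothetical bilinear system x+ = a*x + b*x*u; a, b are illustrative.
a, b = 0.8, 0.3

def f(x, u):
    """One step of the controlled nonlinear dynamics."""
    return a * x + b * x * u

# K_x acts on psi_x(x_t) = [x_t] (observables independent of u);
# K_u acts on psi_u_tilde(z_t) = [x_t * u_t] (remaining mixed terms).
K_x = np.array([[a]])
K_u = np.array([[b]])

x_t, u_t = 1.5, -0.4
lhs = f(x_t, u_t)                                           # psi_x(x_{t+1})
rhs = K_x @ np.array([x_t]) + K_u @ np.array([x_t * u_t])   # (3)
assert np.isclose(lhs, rhs[0])
```

The representation is linear in the lifted coordinates even though the mixed observable z_t itself depends nonlinearly on state and input.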
This is a representation of the nonlinear system dynamics that is linear in the lifted state observable ψ_x(x) and the lifted input-state mixture observable ψ_u(x, u) = ψ̃_u(z_t). We thus can define the Koopman discrete-time transfer function as

    G_K(z) = W_h (zI − K_x)^{-1} K_u    (4)

where we have assumed that y_t = ψ_x(x_t). The zeros and the poles of the transfer function are defined in the usual manner, but with respect to the transformations on the state ψ_x(x_t) and input ψ̃_u(z(u_t)). A continuous-time analogue of the discrete-time Koopman operator is readily derived using the Koopman generator, rather than the Koopman operator. While the discrete-time formulation is chosen in this work, a similar treatment of the sequel can be used to develop persistence of excitation conditions for continuous-time nonlinear dynamical systems.

IV. PROBLEM STATEMENT

In classical system identification theory, the design of an input or initial condition to guarantee identifiability is framed in terms of PE. One of the merits of a Koopman operator representation is that it provides a representation of a nonlinear system in linear coordinates. This allows us to rigorously formulate the question of how to design an initial condition that guarantees identifiability of the model. Specifically, we consider two problems:

Problem 1 (Identifiability of a Nonlinear System with Fixed Initial Conditions). Given a discrete time-invariant autonomous nonlinear system of the form

    x_{t+1} = f(x_t),  x_{t_0} ∈ X_0
    y_t = h(x_t) ≡ x_t    (5)

determine if the model f(x) can be identified from the continuously sampled data stream x(t) and the set of initial conditions X_0.

This problem is difficult to solve, as it couples the problem of nonlinear function regression of f(x) and h(x) with the requirement of characterizing the richness of a set of fixed initial conditions.
This problem is equivalent to the nonlinear state-space realization problem, given a single initial condition or set of initial conditions X_0. The other variant of this problem is where the initial conditions are design parameters in an experiment and can be set by the user. This scenario is especially common in synthetic biology, where experiments are conducted with varying initial conditions of concentrations, pH, etc., and hence is of interest from a design of experiments (DoE) standpoint.

Problem 2 (Identifiability of a Nonlinear System with Designed Initial Conditions). Given a discrete time-invariant autonomous nonlinear system of the form

    x_{t+1} = f(x_t),  x_{t_0} ∈ X_0
    y_t = h(x_t) ≡ x_t    (6)

and the design choice of initial conditions X_0, find X_0 that guarantees the model f(x) can be identified from the continuously sampled data stream x(t).

This variant of the problem presumably has more degrees of freedom, namely dim(x_{t_0}) · |X_0| = n|X_0| to be precise. However, the challenge is to relate identifiability of the model to the initial condition, which for an unknown f(x) is inherently difficult. We now use the input-Koopman framework derived in the previous sections to reformulate the problem with a linear Koopman representation. The recasting of the problem will permit extension of classical identifiability notions such as the design of PE or sufficiently rich (SR) input signals or initial conditions.

V. PERSISTENCE OF EXCITATION CONDITIONS FOR NONLINEAR SYSTEMS WITH KOOPMAN OPERATORS

Our contribution in this paper is to formulate the problem of identifiability with fixed and designed initial conditions using the method of Koopman. Once we have formulated the problem in the Koopman operator theoretic framework, we derive computational certificates for PE for fixed or designed initial conditions.
This leads to an algorithm for selection of the initial conditions of a dynamical system given x_{t_0}. In both scenarios, we treat the system's initial condition as an input. Given a nonlinear discrete-time dynamic system of the form

    x_{t+1} = f(x_t)
    y_t = h(x_t) ≡ x_t    (7)

where x ∈ R^n and t ∈ Z, the Koopman equation for the corresponding system defines the action of an operator K on a vector-valued observable ψ ∈ R^{n_L} acting on the state x_t ∈ R^n, namely

    ψ(f(x_t)) = Kψ(x_t)

Then, (3) can be represented with initial state ψ(x_{t_0}) as an input:

    ψ(x_{t+1}) = Kψ(x_t) + (ψ(x_{t_0}) − Kψ(x_t)) δ_{t, t_0−1}

We thus model the initial condition as a Kronecker delta input to the dynamical system. Accordingly, we will abuse notation slightly and define the input of the system as (ψ(x_{t_0}) − Kψ(x_t)) δ_{t, t_0−1} ≡ φ(u_t), which yields the classic input-state Koopman representation for a nonlinear system

    ψ(x_{t+1}) = Kψ(x_t) + φ(u_t).    (8)

Notice that the state vector ψ_x(x) ∈ R^{n_L} is a vector observable function on which the Koopman operator acts as a linear operator to update the state. In essence, by defining an appropriate 'lifting' or 'observable' function, we can treat the problem as a classical linear identifiability problem, with additional caveats imposed by the presence of the nonlinearities in ψ_x(x) and ψ_u(x, u). As there are multiple definitions in the literature for PE [3], [6], we choose a definition that enables relating identifiability to the initial condition of a transfer function. We pose an extension of the definition from [3], which is that of SR from [6], but in the Koopman framework.

Definition 1 (Persistence of Excitation). A quasi-stationary discrete-time observable φ(u_t) ∈ R^{n_L} is said to be persistently exciting of order N ∈ Z_+ if the covariance matrix R_φ(N) is positive definite:

    R_φ(N) := [ R_φ(0)        ···  R_φ(N−1)
                   ⋮           ⋱      ⋮
                R_φ(−(N−1))   ···  R_φ(0)   ]    (9)

where R_φ(k), k ∈ Z is the auto-covariance of φ(u_t), formulated as:

    R_φ(k) := E[ φ(u_t) φ(u_{t+k})^T ],    (10)

and E denotes the expectation operator.

The above definition has been referred to as sufficient richness in [6] as well. Positive semi-definiteness of R_φ(N) can easily be proved. By substituting the R.H.S. of (10) in (9) we obtain:

    R_φ(N) = E[ [φ(u_{t+1})^T, ···, φ(u_{t+N})^T]^T [φ(u_{t+1})^T, ···, φ(u_{t+N})^T] ]

Defining v := [φ(u_{t+1})^T, ···, φ(u_{t+N})^T]^T, the above becomes:

    R_φ(N) = E[ v v^T ].    (11)

Letting q = [q_1^T, …, q_k^T, …, q_N^T]^T, q_k ∈ R^{n_L} \ {0}, we have

    q^T R_φ(N) q = E[ ‖q^T v‖_2^2 ] = Σ_{l,m=1}^{N} q_l^T R_φ(m−l) q_m ≥ 0    (12)

Hence, R_φ(N) is positive semi-definite. From the Herglotz Theorem, any function R_g(k) defined on the integers is positive semi-definite if and only if it has a Bochner representation (can be represented as the inverse Fourier transform) of a unique spectral measure S_g(ν) on a circle:

    R_g(k) = ∫_{−π}^{π} e^{ikν} S_g(dν) = ∫_{−π}^{π} e^{ikν} S_g(ν) dν

If R_g(k) is a scalar, S_g(ν) is its power spectrum, as pointed out in [6] and [5]. Alternate definitions have been mentioned in the literature, but the following definition and result elucidate how PE of an initial condition for a dynamical system can be related to the Koopman transfer function.

Definition 2. Given system (1), we say an initial condition x_0, treated as a Kronecker delta signal δ(x_0), is persistently exciting of Koopman-order n_L if it is persistently exciting for a state-inclusive Koopman operator K ∈ R^{n_L} of order n_L satisfying

    ψ(x_{t+1}) = [ x_{t+1} ; φ(x_{t+1}) ] = [ K_{xx}  K_{xφ} ; K_{φx}  K_{φφ} ] [ x_t ; φ(x_t) ] + δ_{t,−1} (ψ(x_0) − Kψ(x_t)).    (13)

Theorem 1.
The initial condition x_0 is persistently exciting for the nonlinear dynamical system (1) if and only if the Fourier transform of the auto-covariance matrix R_φ(k),

    S_φ(ω) = Σ_{k=−∞}^{∞} R_φ(k) e^{−ikω},

has n_L distinct frequencies ω_1, …, ω_{n_L} where S_φ(ω) does not vanish, i.e. n_L positive spectral lines.

Proof. Let the dimension of the lifted space and corresponding Koopman vector-valued observable ψ(x) be denoted as n_L. We write the initial condition as an input to the nonlinear dynamical system (1), of the form δ_t(x_0). We suppose that ψ(x) is state-inclusive [23]. This implies that:

    x_{t+1} = f(x_t) = K_{xx} x_t + K_{xφ} φ(x_t) + δ_{t,−1} (x_0 − K_{xx} x_t − K_{xφ} φ(x_t))

i.e. the PE of an input signal ψ(x_0) for the system (13) implies the PE of the system (1), since the flow and vector field of system (1) are a projection of the flow and vector field of the system (13), respectively. It suffices to demonstrate the equivalence of the PE of ψ(x_0), up to order n_L, to the linear independence of spectral lines for the power spectral density S_φ(ω). We know that the spectral measure S_φ(ω) of R_φ(k) would be a positive semi-definite, symmetric matrix of bounded measures, symmetric over all ω, since the observable elements φ(x) are all real-valued. We have by the definition of the spectral power measure that

    R_φ(k) = ∫_{−π}^{π} e^{ikω} S_φ(ω) dω.    (14)

Since the spectral measure is bounded almost everywhere, the monotone convergence theorem allows us to write from (12)

    Σ_{l,m=1}^{N} q_l^T ( ∫_{−π}^{π} e^{i(m−l)ω} S_φ(ω) dω ) q_m ≥ 0.

After swapping the finite sum with limiting sums,

    ∫_{−π}^{π} ( Σ_{l,m=1}^{N} q_l^T e^{−ilω} S_φ(ω) e^{imω} q_m ) dω ≥ 0,

and distributing sums across the integrand, we obtain

    ∫_{−π}^{π} ( Σ_{l=1}^{N} e^{−ilω} q_l^T ) S_φ(ω) ( Σ_{m=1}^{N} e^{imω} q_m ) dω ≥ 0.
Defining filters Q(e^{iω}) := Σ_{k=1}^{N} e^{ikω} q_k, the above becomes:

    ∫_{−π}^{π} Q*(e^{iω}) S_φ(ω) Q(e^{iω}) dω ≥ 0.    (15)

Thus, positive definiteness of S_φ(ω) holds if and only if R_φ(k) is positive definite. But positive definiteness of S_φ(ω) holds if and only if there are at least n_L frequencies at which the integrand does not vanish, namely n_L frequencies where

    Σ_{k=1}^{N} e^{ikω} q_k

is in the orthogonal complement of the null space of S_φ(ω). This proves the result.

From our simulation studies, we found that most initial conditions are PE up to order n_L, for their respective Koopman operator and dynamical system. If we consider the design problem of selecting an initial condition x_0 such that ψ(x_0) is PE, or a set Ψ(x_0) that is PE of Koopman-order n_L, we can express the problem in terms of the positive definiteness of R_φ(N), adjusting the signal ψ(x_0), or more generally the timing of the input, to ensure the positive definiteness of the auto-covariance matrix. Alternatively, when working with a collection of initial condition signals ψ(x_0) ∈ Ψ(x_0), it is straightforward to visualize the power spectrum using the transformed signal δ_t(ψ(x_0) ∈ Ψ(x_0)). Initial conditions can be selected or drawn randomly from the phase space until a suitable collection of initial conditions and n_L spectral lines are identified.

VI. PERSISTENCE OF EXCITATION OF DIFFERENT INITIAL CONDITIONS FOR A REPRESSILATOR GENETIC CIRCUIT MODEL

A. The Repressilator

The repressilator is a classical genetic circuit used in synthetic biology to implement circadian rhythms or synthetic oscillations. The architecture is that of a 3-node Goodwin oscillator, with three genes that produce proteins or mRNA that serve to repress the downstream or target gene's function. Each gene represses its downstream target, with the final gene repressing the original gene to form a cycle of negative feedback. When the gains of the individual genes are balanced with respect to each other [25], the genetic circuit admits a limit cycle in the phase portrait and a single basin of attraction surrounding the origin. There are many models of the repressilator, with varying degrees of complexity and intricacy to capture the underlying biophysical dynamics.

[Fig. 1: The repressilator trained from inside a unit ball centered at 0 and predicted from another basin of attraction using the Koopman operator. (a) States oscillating with time as simulated (solid lines); Koopman operator trained up to t = 25, with prediction (dotted lines) onward. (b) Simulated (solid) trajectories growing into limit cycles and prediction (dotted) of the limit cycles. (c) Simulated (solid) and predicted (dotted) trajectory from another basin of attraction. (d) Periodogram of trajectories. (e) Rank of power spectrum of trajectories up to t = 25 used to train the Koopman operator.]

We consider a simplified three-dimensional model from the first experimental implementation of the repressilator [26], which captures the limit cycle and basin of attraction, to study the role of the initial condition in PE of the nonlinear system.
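Before specializing to the repressilator model below, the PE certificate of Definition 1 can be sketched numerically: build the block-Toeplitz matrix (9) from empirical auto-covariances of a lifted signal and check its minimum eigenvalue. The estimator, the illustrative two-component signal, and all names here are our choices, not the paper's:

```python
import numpy as np

def autocov(Phi, k):
    """Empirical auto-covariance R_phi(k) of a lifted signal.
    Phi has shape (n_L, T); columns are samples phi(u_t)."""
    n_L, T = Phi.shape
    if k < 0:
        return autocov(Phi, -k).T   # R_phi(-k) = R_phi(k)^T
    Phi0 = Phi - Phi.mean(axis=1, keepdims=True)
    return Phi0[:, :T - k] @ Phi0[:, k:].T / (T - k)

def pe_certificate(Phi, N):
    """Assemble R_phi(N) from Definition 1, with block (i, j) equal to
    R_phi(j - i), and return its minimum eigenvalue; the signal is
    persistently exciting of order N iff this value is positive."""
    n_L = Phi.shape[0]
    R = np.zeros((N * n_L, N * n_L))
    for i in range(N):
        for j in range(N):
            R[i*n_L:(i+1)*n_L, j*n_L:(j+1)*n_L] = autocov(Phi, j - i)
    return np.linalg.eigvalsh(R).min()

# Illustrative lifted signal with two distinct spectral lines:
t = np.arange(400)
Phi = np.vstack([np.cos(0.3 * t), np.sin(1.1 * t)])
assert pe_certificate(Phi, 1) > 0   # covariance R_phi(1) is positive definite
```

Replacing the eigenvalue check with a periodogram of each row of Phi gives the spectral-line view of Theorem 1: counting frequencies where the power spectral density has full rank.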
Consider the model:

    ṁ_i = −m_i + α / (1 + p_j^n) + α_0
    ṗ_i = −β (p_i − m_i)
    (i, j) = {([lacI], [cI]), ([tetR], [lacI]), ([cI], [tetR])}
    n = 2,  α_0 = 0,  α = 100,  β = 1    (16)

An example set of commonly used initial concentrations is 1, 0, 0, 0, 0, 0 nM for LacI, λ-cI, TetR, mLacI, mλ-cI, and mTetR. We model the degradation and dilution rate of all proteins as a lumped term with average kinetic rate δ = 0.5.

[Fig. 2: The repressilator trained from outside a unit ball centered at 0 and predicted from another basin of attraction using the Koopman operator. (a) States oscillating with time as simulated (solid lines); Koopman operator trained up to t = 25, with prediction (dotted lines) onward. (b) Simulated (solid) trajectories growing into limit cycles and prediction (dotted) of the limit cycles. (c) Simulated (solid) and predicted (dotted) trajectory from another basin of attraction. (d) Periodogram of trajectories. (e) Rank of power spectrum of trajectories up to t = 25 used to train the Koopman operator.]

Fig. 1a (and 2a) shows simulations (solid lines) of the repressilator from different initial conditions. The repressilator exhibits a strongly attracting limit cycle and a single unstable equilibrium point at the origin. Several initial conditions in the phase space mapped through the observable function have low gain, specifically those within B_1(0) (the unit ball in R^6). These observable functions have low-gain power spectra with spectral lines on the order of numerical noise.
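The model (16) can be simulated with any standard ODE integrator; the sketch below uses a fixed-step RK4 stepper and a state ordering of our choosing (both assumptions, not from the paper):

```python
import numpy as np

# Minimal simulation of the repressilator model (16). State ordering is
# (m_lacI, m_tetR, m_cI, p_lacI, p_tetR, p_cI); the RK4 stepper and the
# step size dt are our choices, not the paper's.
alpha, alpha0, beta, n = 100.0, 0.0, 1.0, 2

def rhs(s):
    m, p = s[:3], s[3:]
    # each mRNA is repressed by the protein one step back in the cycle:
    # lacI <- cI, tetR <- lacI, cI <- tetR, as in the pairs (i, j) of (16)
    rep = np.array([p[2], p[0], p[1]])
    dm = -m + alpha / (1.0 + rep ** n) + alpha0
    dp = -beta * (p - m)
    return np.concatenate([dm, dp])

def simulate(s0, dt=0.01, T=100.0):
    traj = [np.asarray(s0, dtype=float)]
    for _ in range(int(T / dt)):
        s = traj[-1]
        k1 = rhs(s); k2 = rhs(s + dt/2*k1)
        k3 = rhs(s + dt/2*k2); k4 = rhs(s + dt*k3)
        traj.append(s + dt/6*(k1 + 2*k2 + 2*k3 + k4))
    return np.array(traj)

# Commonly used initial concentrations: 1 nM LacI protein, all else 0.
traj = simulate([0, 0, 0, 1, 0, 0])
```

Sampling such trajectories on a time grid, lifting them through a chosen dictionary, and inspecting the rank of the resulting power spectrum is how the PE conditions above are exercised in the experiments of Figs. 1 and 2.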
We noted that these initial conditions, when mapped through higher-order polynomials, lead to overfitting due to the vanishing of the signal in higher-order terms. To illustrate, K has been obtained using eDMD with third-order Hermite polynomials. We considered training the repressilator model with initial conditions drawn from two different regions of the phase space: first, initial conditions within the unit disc centered at the unstable equilibrium point (solid lines in Fig. 1b), and second, initial conditions outside the unit disc centered at the same point (solid lines in Fig. 2b). We simulated using 6 initial conditions from one basin of attraction and evaluated test predictions using initial conditions from another basin. For example, we would train within the unit disc (Fig. 1b) or outside it (Fig. 2b) and evaluate prediction accuracy of the Koopman operator for trajectories initiated outside that basin of attraction (Fig. 1c and 2c, respectively) using a norm-based error between the predicted and simulated trajectories.

Notice that the rank is greater for the power spectrum in Figure 1 than in Figure 2, which correlates with the failure to predict long-term global behavior in Figure 2. Interestingly, the rank of the power spectrum was not as high as the dimension of the list of dictionaries, indicating that the true Koopman observable space is of a lower dimension than the dimension of dictionary functions. Future work will investigate iterative processes for identifying the minimal set of Koopman observables, their relationship to Hankel-DMD [27], as well as the formal design of automated biological experiments to ensure global predictive accuracy of discovered Koopman models.

VII. ACKNOWLEDGMENTS

The authors would also like to thank Igor Mezic, Robert Egbert, Bassam Bamieh, Sai Pushpak, Sean Warnick, and Umesh Vaidya for stimulating conversations.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA), the Department of Defense, or the United States Government. This work was supported partially by a Defense Advanced Research Projects Agency (DARPA) Grant No. DEAC0576RL01830 and an Institute of Collaborative Biotechnologies Grant.

REFERENCES

[1] K.-J. Åström and T. Bohlin, "Numerical identification of linear dynamic systems from normal operating records," IFAC Proceedings Volumes, vol. 2, no. 2, pp. 96–111, 1965.
[2] A. Padoan, G. Scarciotti, and A. Astolfi, "A geometric characterization of the persistence of excitation condition for the solutions of autonomous systems," IEEE Transactions on Automatic Control, vol. 62, no. 11, pp. 5666–5677, 2017.
[3] L. Ljung, System Identification: Theory for the User. Prentice Hall, 1987.
[4] A. K. Tangirala, Principles of System Identification: Theory and Practice. CRC Press, 2014.
[5] S. Boyd and S. S. Sastry, "Necessary and sufficient conditions for parameter convergence in adaptive control," Automatica, vol. 22, no. 6, pp. 629–639, 1986.
[6] E.-W. Bai and S. S. Sastry, "Persistency of excitation, sufficient richness and parameter convergence in discrete time adaptive control," Systems & Control Letters, vol. 6, no. 3, pp. 153–163, 1985.
[7] O. Nelles, Nonlinear System Identification: From Classical Approaches to Neural Networks and Fuzzy Models. Springer Science & Business Media, 2013.
[8] I. Mezić and A. Banaszuk, "Comparison of systems with complex behavior," Physica D: Nonlinear Phenomena, vol. 197, no. 1-2, pp. 101–133, 2004.
[9] I. Mezić, "Spectral properties of dynamical systems, model reduction and decompositions," Nonlinear Dynamics, vol. 41, no. 1-3, pp. 309–325, 2005.
[10] I. Mezić, "Analysis of fluid flows via spectral properties of the Koopman operator," Annual Review of Fluid Mechanics, vol. 45, pp. 357–378, 2013.
[11] B. O. Koopman, "Hamiltonian systems and transformation in Hilbert space," Proceedings of the National Academy of Sciences of the United States of America, vol. 17, no. 5, pp. 315–318, 1931.
[12] P. Schmid and J. Sesterhenn, "Dynamic mode decomposition of numerical and experimental data," in APS Division of Fluid Dynamics Meeting Abstracts, p. MR.007, Nov. 2008.
[13] I. Mezic, "Koopman operator spectrum and data analysis," arXiv preprint arXiv:1702.07597, 2017.
[14] E. Yeung, Z. Liu, and N. O. Hodas, "A Koopman operator approach for computing and balancing Gramians for discrete time nonlinear systems," in 2018 Annual American Control Conference (ACC), pp. 337–344, IEEE, 2018.
[15] Z. Liu, S. Kundu, L. Chen, and E. Yeung, "Decomposition of nonlinear dynamical systems using Koopman Gramians," in 2018 Annual American Control Conference (ACC), pp. 4811–4818, IEEE, 2018.
[16] J. L. Proctor, S. L. Brunton, and J. N. Kutz, "Generalizing Koopman theory to allow for inputs and control," SIAM Journal on Applied Dynamical Systems, vol. 17, no. 1, pp. 909–930, 2018.
[17] P. You, J. Pang, and E. Yeung, "Deep Koopman controller synthesis for cyber-resilient market-based frequency regulation," IFAC-PapersOnLine, vol. 51, no. 28, pp. 720–725, 2018.
[18] M. Korda, M. Putinar, and I. Mezić, "Data-driven spectral analysis of the Koopman operator," Applied and Computational Harmonic Analysis, 2018.
[19] M. Korda and I. Mezić, "Linear predictors for nonlinear dynamical systems: Koopman operator meets model predictive control," Automatica, vol. 93, pp. 149–160, 2018.
[20] E. Kaiser, J. N. Kutz, and S. L. Brunton, "Data-driven discovery of Koopman eigenfunctions for control," arXiv preprint arXiv:1707.01146, 2017.
[21] E. Yeung, S. Kundu, and N. Hodas, "Learning deep neural network representations for Koopman operators of nonlinear dynamical systems," in 2019 American Control Conference (ACC), pp. 4832–4839, IEEE, 2019.
[22] B. Lusch, J. N. Kutz, and S. L. Brunton, "Deep learning for universal linear embeddings of nonlinear dynamics," Nature Communications, vol. 9, no. 1, p. 4950, 2018.
[23] C. A. Johnson and E. Yeung, "A class of logistic functions for approximating state-inclusive Koopman operators," in 2018 Annual American Control Conference (ACC), pp. 4803–4810, IEEE, 2018.
[24] M. O. Williams, C. W. Rowley, I. Mezić, and I. G. Kevrekidis, "Data fusion via intrinsic dynamic variables: An application of data-driven Koopman spectral analysis," EPL (Europhysics Letters), vol. 109, no. 4, p. 40007, 2015.
[25] D. Angeli, J. E. Ferrell, and E. D. Sontag, "Detection of multistability, bifurcations, and hysteresis in a large class of biological positive-feedback systems," Proceedings of the National Academy of Sciences, vol. 101, no. 7, pp. 1822–1827, 2004.
[26] M. B. Elowitz and S. Leibler, "A synthetic oscillatory network of transcriptional regulators," Nature, vol. 403, no. 6767, p. 335, 2000.
[27] H. Arbabi and I. Mezic, "Ergodic theory, dynamic mode decomposition, and computation of spectral properties of the Koopman operator," SIAM Journal on Applied Dynamical Systems, vol. 16, no. 4, pp. 2096–2126, 2017.
