Volterra Kernel Identification using Regularized Orthonormal Basis Functions
Authors: Jeremy G. Stoddard, James S. Welsh
School of Electrical Engineering and Computing, The University of Newcastle, Australia

Abstract

The Volterra series is a powerful tool in modelling a broad range of nonlinear dynamic systems. However, due to its nonparametric nature, the number of parameters in the series increases rapidly with memory length and series order, with the uncertainty in the resulting model estimates increasing accordingly. In this paper, we propose an identification method where the Volterra kernels are estimated indirectly through orthonormal basis function expansions, with regularization applied directly to the expansion coefficients to reduce variance in the final model estimate and provide access to useful models at previously unfeasible series orders. The higher-dimensional kernel expansions are regularized using a method that allows smoothness and decay to be imposed on the entire hyper-surface. Numerical examples demonstrate improved Volterra series estimation up to the 4th order using the regularized basis function method.

Keywords: System identification; Nonlinear systems; Volterra series; Basis functions; Regularization.

1 Introduction

In the field of system identification, data-driven modelling of nonlinear systems poses unique challenges which cannot be completely addressed by the already well-established linear identification theory. One of the more significant issues in nonlinear identification is choosing a model structure from the vast array of possible model classes, which can require significant prior knowledge of the system.

The Volterra series provides a nonparametric representation for a broad range of nonlinear systems, and unlike parametric models, requires only limited prior knowledge to perform the series estimation.
While the theoretical advantages are clear, practical estimation of Volterra series models is a challenging task [5]. The (truncated) series can be seen as a high-dimensional generalization of the linear Finite Impulse Response (FIR) model, and much like FIR models, longer memory lengths are required for more accurate modelling. To compound this issue, the Volterra series must also be extended to kernels of higher dimension to capture the nonlinear behaviour of the underlying system. This results in a large number of parameters to estimate, with a correspondingly high variance in the model estimates obtained in the presence of measurement noise and finite data records.

While large numbers of parameters are a distinct disadvantage for nonparametric models, several techniques have been developed to address this for FIR models in the linear case, and some of these methods have been extended to the nonlinear setting for low-variance Volterra series estimation. The so-called 'Bayesian regularization' approach, introduced in [8], is one such method that shows promise in a Volterra series context. In the linear case, the variance of parameter estimates is reduced by imposing some degree of smoothness and exponential decay on the estimated impulse response, at the price of a small bias. Recently it was shown that the same properties can also be imposed on impulse responses of higher dimension [1]. Orthonormal basis function modelling [7] is another applicable technique, since the Volterra series can be expressed in terms of basis function expansions [10]. However, placement of the basis function poles is more difficult to optimize in multi-dimensional kernels than in the linear case [3],[6].

(Corresponding author: Jeremy G. Stoddard. Email addresses: jeremy.stoddard@uon.edu.au (Jeremy G. Stoddard), james.welsh@newcastle.edu.au (James S. Welsh).)
The contribution of this paper is to propose the direct regularization of basis function expansion coefficients, in order to reduce the variance of model estimates and also provide access to higher memory lengths and series orders. While the concept of regularized expansion coefficients has been explored for linear systems [4], its potential in the nonlinear setting is yet to be shown. This paper motivates the use of regularization on multi-dimensional basis function expansions, and provides a novel framework for the separable optimization of hyperparameters in the case of Laguerre and Kautz basis functions. Numerical results show the proposed method performing better than existing methods at low orders, while also identifying models at previously unfeasible series orders.

The paper is organized as follows. Section 2 provides an overview of the Volterra identification problem, while Section 3 introduces the required background on orthonormal basis function modelling. Section 4 gives details on the regularization approach taken in this paper, and a separable optimization method is developed in Section 5 for pole selection in Laguerre and Kautz basis functions. The new identification methods are assessed in Section 6 through Monte Carlo simulations on several nonlinear systems. Finally, some conclusions are presented in Section 7.

2 Volterra Kernel Identification

Any causal, time-invariant and fading-memory nonlinear system can be well approximated using a truncated discrete-time Volterra series representation [2]. The series consists of a sum of Volterra kernels, where each kernel acts on products of lagged input values. The resulting model is linear-in-the-parameters, and can be estimated in a least squares framework, but the large number of parameters in the model can make the estimation quite computationally intensive.
2.1 Volterra Model Description

For an input series u and noise-free output y_0, we consider the Volterra series model

y_0(k) = \sum_{m=1}^{M} \left[ \sum_{\tau_1=0}^{n_m-1} \cdots \sum_{\tau_m=0}^{n_m-1} h_m(\tau_1, \ldots, \tau_m) \prod_{j=1}^{m} u(k - \tau_j) \right],   (1)

where m is the dimension of the kernels, M is the maximum degree, h_m(\tau_1, \ldots, \tau_m) is the m'th Volterra kernel, n_m is the memory length of h_m, and \tau_j is the j'th lag variable for the kernel. For m > 1, the kernels can be viewed as (hyper)surfaces, such as the second-order example of a resonant Wiener system shown in Figure 1.

[Figure 1: Second-order Volterra kernel of a resonant Wiener system, and the two perpendicular regularizing directions, (1,1) (solid) and (-1,1) (dotted).]

2.2 Least Squares Identification

In this paper we assume that the input to the Volterra system is deterministic, and that white Gaussian measurement noise, e, is added directly at the output, such that the measured output, y, is given by

y(k) = y_0(k) + e(k),

where e(k) \sim N(0, \sigma^2) and y_0 is the noise-free Volterra series output from (1).

The kernel coefficients can be expressed as parameters in a least squares problem formulation,

Y_N = \phi_N^T \theta + E,   (2)

where N is the number of measurements, Y_N and E are the vectors of output measurements and measurement noise respectively, with \phi_N the regressor matrix corresponding to the vector of kernel coefficients, \theta = [h_1^T, \bar{h}_2^T, \ldots, \bar{h}_M^T]^T.

For higher dimensions, symmetry is enforced in the kernels to ensure a unique representation [11]. The unique kernel coefficients from h_m(\tau_1, \ldots, \tau_m) are taken and vectorized as \bar{h}_m before being placed in the parameter vector, where the order of vectorization determines the form of the regressor matrix.
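As a minimal numerical sketch (an illustration of the formulation above, not the authors' code), the regressor columns for the unique coefficients \bar{h}_m of a single m'th-order kernel can be built by enumerating lag tuples \tau_1 \le \cdots \le \tau_m; the multiplicity of each off-diagonal coefficient is assumed here to be absorbed into the estimated parameter, which is one possible vectorization convention.

```python
import numpy as np
from itertools import combinations_with_replacement

def volterra_regressor(u, memory, order):
    """Regressor for the unique (symmetric) coefficients of one
    m'th-order Volterra kernel: rows are time instants k, columns
    correspond to lag tuples tau_1 <= ... <= tau_m.

    Off-diagonal multiplicities are assumed absorbed into the
    estimated coefficients (a vectorization convention choice).
    """
    N = len(u)
    lags = list(combinations_with_replacement(range(memory), order))
    rows = [[np.prod([u[k - t] for t in tau]) for tau in lags]
            for k in range(memory - 1, N)]
    return np.array(rows), lags

# a second-order kernel with memory 2 has 3 unique coefficients:
# lag tuples (0,0), (0,1), (1,1)
phi, lags = volterra_regressor(np.array([1.0, 2.0, 3.0, 4.0]), 2, 2)
```

The number of unique coefficients per kernel is the binomial coefficient C(n_m + m - 1, m), which grows rapidly with memory length and order; this growth is the motivation for the compact basis function expansions introduced in Section 3.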
Under the assumed noise conditions, the Maximum Likelihood (ML) estimate of \theta is given by the least squares analytic solution,

\hat{\theta}_{LS} = (\phi_N \phi_N^T)^{-1} \phi_N Y_N.   (3)

3 Orthonormal Basis Function Representations

We consider two particular basis function sets, which form a subset of the 'Generalized Orthonormal Basis Functions' (GOBFs) [7]: the Laguerre Basis Functions (LBFs) and the 2-parameter Kautz Basis Functions (KBFs).

3.1 Laguerre Basis Functions

LBFs are formed as a first-order realization of the GOBFs, and are parameterized by a single real pole. In the z domain, the functions take the form [7]

F_i(z) = \frac{\sqrt{1 - |a|^2}}{z - a} \left( \frac{1 - az}{z - a} \right)^{i-1}, \quad a \in (-1, 1).   (4)

The Laguerre functions have a simple structure, but the absence of complex poles in the basis yields non-compact models for oscillatory systems [12].

3.2 Kautz Basis Functions

KBFs are generated from second-order filters, such that complex pole pairs can be included to better model an oscillatory response. A practical parameterization [13] of the 2-parameter Kautz functions is given by

F_{2i-1}(z) = \frac{\sqrt{1 - c^2}\,(z - b)}{z^2 + b(c-1)z - c} \left( \frac{-cz^2 + b(c-1)z + 1}{z^2 + b(c-1)z - c} \right)^{i-1},

F_{2i}(z) = \frac{\sqrt{(1 - c^2)(1 - b^2)}}{z^2 + b(c-1)z - c} \left( \frac{-cz^2 + b(c-1)z + 1}{z^2 + b(c-1)z - c} \right)^{i-1},   (5)

where b, c \in (-1, 1).

3.3 Basis Function Expansions of Volterra Models

Applying orthonormal basis functions to a Volterra series model is a concept referred to as the Volterra/Wiener approach [10]. For a kernel, h_m, as defined in (1), the basis function expansion can be expressed as

h_m(\tau_1, \ldots, \tau_m) = \sum_{i_1=1}^{B_m} \cdots \sum_{i_m=1}^{B_m} \alpha_m(i_1, \ldots, i_m) \prod_{j=1}^{m} f_{m,i_j}(\tau_j),   (6)

where f_{m,l} is the impulse response corresponding to the l'th basis function of the m'th kernel's basis, B_m is the number of basis functions in the basis, and \alpha_m(\cdot) is the set of expansion coefficients.
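For concreteness, the Laguerre impulse responses f_i(\tau) appearing in (4) and (6) can be generated by cascading first-order filters. The sketch below (an illustration under the parameterization of (4), not the authors' code) exploits the fact that each successive Laguerre function is obtained by passing the previous one through the all-pass factor (1 - az)/(z - a).

```python
import numpy as np

def _filt1(b0, b1, a1, x):
    """First-order IIR filter: y[k] = b0*x[k] + b1*x[k-1] + a1*y[k-1]."""
    y = np.zeros(len(x))
    xm1 = ym1 = 0.0
    for k, xk in enumerate(x):
        y[k] = b0 * xk + b1 * xm1 + a1 * ym1
        xm1, ym1 = xk, y[k]
    return y

def laguerre_basis(a, n_funcs, n_taps):
    """Truncated impulse responses f_i(tau), i = 1..n_funcs, of the
    Laguerre functions (4) with real pole a in (-1, 1)."""
    impulse = np.zeros(n_taps)
    impulse[0] = 1.0
    # F_1(z) = sqrt(1 - a^2)/(z - a) = sqrt(1 - a^2) z^-1 / (1 - a z^-1)
    f = _filt1(0.0, np.sqrt(1.0 - a**2), a, impulse)
    basis = [f]
    for _ in range(n_funcs - 1):
        # multiply by the all-pass factor (1 - az)/(z - a)
        f = _filt1(-a, 1.0, a, f)
        basis.append(f)
    return np.array(basis)
```

Orthonormality can be checked numerically: for a truncation length long enough relative to the pole's decay, the Gram matrix of the rows is close to the identity. These truncated responses play the role of f_{m,i_j}(\tau_j) in (6).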
Equations (1) and (6) can be combined to restructure the Volterra model, i.e.

y(k) = \sum_{m=1}^{M} \left[ \sum_{i_1=1}^{B_m} \cdots \sum_{i_m=1}^{B_m} \alpha_m(i_1, \ldots, i_m) \prod_{j=1}^{m} u_{f_{m,i_j}}(k) \right],   (7)

where u_{f_{m,l}} is the input, u, filtered by the l'th basis function of the m'th kernel's basis. Note the similarity in structure between the models in (1) and (7), which motivates the treatment of the expansion coefficient sets \alpha_m as 'basis function kernels' in the domains of their corresponding bases. These new kernels can be much more compact than their time-domain counterparts, provided that the bases are carefully designed [3],[6].

Least squares identification of the \alpha_m kernels is possible using the framework described in Section 2.2, with the regressor, \phi_{f,N}, now containing filtered input products. For the vectorized set \alpha = [\alpha_1^T, \bar{\alpha}_2^T, \ldots, \bar{\alpha}_M^T]^T, we have

\hat{\alpha}_{LS} = (\phi_{f,N} \phi_{f,N}^T)^{-1} \phi_{f,N} Y_N.   (8)

4 Regularization of Kernel Estimates

This section first presents an overview of the Bayesian regularization method from [8] and its extension to higher-dimensional kernels as developed in [1]. We then introduce a novel application to basis function kernels.

4.1 Regularized Least Squares

Considering the least squares problem outlined in Section 2.2, the regularized least squares problem is defined through the addition of a quadratic penalty on the parameter vector, \theta, such that the optimization problem is given by

\hat{\theta}_{ReLS} = \arg\min_{\theta} \| Y_N - \phi_N^T \theta \|_2^2 + \sigma^2 \theta^T P^{-1} \theta,   (9)

where Y_N and \phi_N are defined as in (2), and P is a regularization penalty matrix. Taking a Bayesian perspective, P can be interpreted as the prior covariance matrix of a Gaussian parameter vector (i.e. \theta \sim N(0, P)). In the FIR model case, P is designed to impose the prior knowledge of smoothness and exponential decay on impulse responses.
There exist several tunable covariance structures to encode this prior information [9]. Here we will consider the Tuned/Correlated (TC) structure, where the (x, y)'th element of P is given by

P(x, y) = \beta \lambda^{\max(x, y)}, \quad \beta \geq 0, \; 0 \leq \lambda < 1, \quad \eta = [\beta, \lambda].   (10)

The hyperparameters, \eta, are typically tuned via marginal likelihood maximization [8], given by

\hat{\eta} = \arg\min_{\eta} \; Y_N^T \Sigma_Y^{-1} Y_N + \log \det \Sigma_Y,   (11)

where \Sigma_Y is the covariance matrix of Y_N obtained from the joint distribution of [\theta \; Y_N]^T [8]. The solution to (9) can then be computed as

\hat{\theta}_{ReLS} = (P(\hat{\eta}) \phi_N \phi_N^T + \sigma^2 I)^{-1} P(\hat{\eta}) \phi_N Y_N.   (12)

The noise variance, \sigma^2, is not typically known a priori, and must also be estimated. In this paper, we place this variance in the hyperparameter vector, \eta, for tuning.

4.2 Regularization for Higher-Dimensional Kernels

An extension of the FIR regularization method to the Volterra series was developed in [1], which relies on the construction of separate penalty matrices for each kernel in the model. If the least squares problem is formulated as in (2), then the total penalty, P, in (9) is chosen to be a block diagonal matrix,

P = \mathrm{blockdiag}(P_1, \ldots, P_M),   (13)

where P_m is the prior covariance of the m'th kernel [1]. While P_1, being one-dimensional, can still be constructed using (10), the covariance structures for multi-dimensional kernels must now impose smoothness and decay along the entire (hyper)surface. The approach suggested in [1] is to consider m perpendicular regularizing directions for the kernel h_m, where one direction is the vector (1, \ldots, 1). The regularizing directions for a second-order resonant Wiener kernel are depicted in Figure 1 as an example.

The regularizing directions for h_m form a rotated coordinate system which we will denote (v_m^1, v_m^2, \ldots, v_m^m).
Using this coordinate system, standard covariance structures can be applied to generate a partial covariance for each regularizing direction. For the TC structure applied along direction v_m^j, the corresponding partial covariance is given by

P_m^j(x, y) = (\lambda_m^j)^{\max(x', y')},   (14)

where i' is the coordinate of \bar{h}_m(i) on the v_m^j axis. If \bar{h}_m(i) is associated with lag values \bar{\tau} = (\tau_1, \ldots, \tau_m), then i' = \langle \bar{\tau}, v_m^j \rangle. The total covariance matrix for the kernel is produced through element-by-element multiplication of the individual matrices [1], i.e.

P_m(x, y) = \beta_m \cdot P_m^1(x, y) \cdot \ldots \cdot P_m^m(x, y),   (15)

where \beta_m is a normalization hyperparameter.

Using the TC structure, there are now m + 1 hyperparameters per kernel which contribute to the vector \eta tuned in (11). The large dimension of the search space and the non-convexity of the problem necessitate a global optimization method.

4.3 Regularization for Basis Function Expansions

The parallels between time-domain and basis function Volterra models have motivated the treatment of expansion coefficients as Volterra kernels in the domains of their bases. A natural progression then would be to apply regularization to these new kernels in the same way it can be applied to standard Volterra kernels, i.e. impose smoothness and decay along the hyper-surfaces generated by the basis function expansions, using

\hat{\alpha}_{ReLS} = \arg\min_{\alpha} \| Y_N - \phi_{f,N}^T \alpha \|_2^2 + \sigma^2 \alpha^T P^{-1} \alpha,   (16)

where \alpha = [\alpha_1^T, \bar{\alpha}_2^T, \ldots, \bar{\alpha}_M^T]^T and P is constructed and tuned as in the previous section. Indeed, the concept has already been explored for linear FIR modelling [4], where the TC covariance structure was applied in regularized estimation of Laguerre coefficients.
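As a minimal numerical sketch of (14)-(15) for a second-order kernel (an illustration of the construction, not the authors' code), the block below assembles a TC penalty from partial covariances along the perpendicular directions (1,1) and (-1,1), then evaluates a regularized solution of the form (12)/(16). Shifting each direction's coordinates so they start at zero is an assumption made here so that the exponents in (14) remain non-negative; the paper does not specify the offset.

```python
import numpy as np
from itertools import combinations_with_replacement

def tc_penalty_2d(lags, lam, beta):
    """TC prior covariance (14)-(15) for a 2nd-order kernel with unique
    lag pairs `lags` and per-direction decay rates lam = (lam1, lam2)."""
    directions = [np.array([1.0, 1.0]), np.array([-1.0, 1.0])]
    P = np.ones((len(lags), len(lags)))
    for lam_j, v in zip(lam, directions):
        v = v / np.linalg.norm(v)
        c = np.array([np.dot(t, v) for t in lags])
        c -= c.min()  # shift coordinates to start at zero (an assumption)
        P *= lam_j ** np.maximum.outer(c, c)  # partial covariance (14)
    return beta * P  # element-wise product with normalization (15)

def regularized_ls(Phi, Y, P, sigma2):
    """Regularized LS of the form (12); here Phi has one row per time
    instant, i.e. Phi corresponds to phi_N^T in the paper's notation."""
    A = P @ Phi.T @ Phi + sigma2 * np.eye(P.shape[0])
    return np.linalg.solve(A, P @ Phi.T @ Y)

lags = list(combinations_with_replacement(range(5), 2))
P = tc_penalty_2d(lags, (0.7, 0.5), beta=1.0)
```

The resulting P is symmetric with entries that decay away from the origin of the lag surface, encoding the smoothness and decay prior along both regularizing directions simultaneously.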
5 Separable Parameter Selection for Laguerre and Kautz Bases

When regularization is applied to basis function expansions directly, the basis-generating parameters must be tuned as well. One approach suggested for the linear case [4] is to place the basis-generating parameters with the existing hyperparameters in \eta, and tune them in the optimization of (11). For nonlinear identification, however, we require a new basis for each kernel, and therefore several sets of generating parameters. To limit the search space of the non-convex optimization, we will develop a separable method for estimating the optimal Laguerre or Kautz basis function parameters prior to regularization.

5.1 Optimal Selection of Laguerre and Kautz Poles

When expanding Volterra kernels, the optimal generating parameters for Laguerre and Kautz bases were discussed in [3] and [6] respectively, where optimality is defined by the minimization of the error introduced by truncating the basis function expansions to a finite length. In the Laguerre case, the optimal pole, a, for the basis can be computed from an analytic function of the time-domain kernel, h_m(\tau_1, \ldots, \tau_m) [3]. For the 2-parameter Kautz case, no analytic solution exists, but sub-optimal analytic estimates can be obtained by fixing b and optimizing c. In this paper, by searching through an appropriately discretized space of b \in (-1, 1), we analytically compute the optimal c = c^* for each b, and the corresponding cost, J_m(b, c^*), from [6]. The parameter set which produces the global minimum of J_m is the optimal choice for expansion of h_m.

5.2 Algorithm for Separable Optimization

It is clear that optimal pole selection, as summarized in Section 5.1, requires exact knowledge of the time-domain kernels, h_m.
Since the kernels are the quantities we are required to estimate, we will instead employ a recursive approach to the problem of optimizing basis function parameters, which is summarized in Algorithm 1.

Algorithm 1: Optimization of LBF/KBF Parameters
Require: Data structures Y_N and \phi_N from (2)
1: Obtain LS estimates of all kernels {h_m}, m = 1, ..., M, using (3).
2: Compute the optimal basis function parameter(s) for each h_m (outlined in Section 5.1).
3: Using the parameter(s) from Step 2, obtain LS estimates of the basis function kernels \alpha_m via (8).
4: Transform the \alpha_m kernels to time-domain kernels, h_m, using (6).
5: Repeat Steps 2-4 until the basis function parameter(s) converge within the desired tolerance.

Remark 1: If the data length, N, is less than the number of parameters to be estimated in Step 1, then the user can instead initialize the algorithm by entering at Step 3 with an initial guess of the basis function parameters. If N is also lower than the number of required basis function coefficients, then the algorithm cannot be used, and poles should be included as hyperparameters in (11).

6 Numerical Simulation

A number of Monte Carlo simulation studies were conducted to evaluate the performance of the proposed regularized basis function method. The studies are performed on systems with disparate dynamics, to emphasise the different advantages of Kautz and Laguerre bases. The basis function methods also permit estimation of higher-order kernels than their time-domain counterpart, due to the compact nature of basis function representations, and this is explored via third- and fourth-order systems.

6.1 Simulation Settings

All studies were performed on Wiener systems of the form

y(k) = \sum_{m=1}^{M} B_m \left( \frac{q^{-1}}{A(q)} u(k) \right)^m + e(k),   (17)

where q is the forward shift operator, and e(k) \sim N(0, \sigma^2) is Gaussian output noise added in each realization.
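A data-generating system of the form (17) is straightforward to simulate: filter the input through q^{-1}/A(q), then apply the static polynomial nonlinearity. The sketch below is a hypothetical re-implementation for illustration (signal lengths and noise level are made up, not taken from the paper's settings).

```python
import numpy as np

def simulate_wiener(u, a_coeffs, B, sigma, rng):
    """Simulate (17): x = (q^-1 / A(q)) u, then y = sum_m B[m-1]*x^m + e.

    a_coeffs = [1, a1, a2, ...] are the coefficients of A(q) in powers
    of q^-1, so A(q) x(k) = u(k-1) gives the recursion below.
    """
    N = len(u)
    x = np.zeros(N)
    for k in range(N):
        past = sum(a_coeffs[j] * x[k - j]
                   for j in range(1, len(a_coeffs)) if k - j >= 0)
        x[k] = (u[k - 1] if k >= 1 else 0.0) - past
    y0 = sum(Bm * x ** m for m, Bm in enumerate(B, start=1))
    return y0 + sigma * rng.standard_normal(N)

# Sys2a-style example: A(q) = 1 - 1.8036 q^-1 + 0.8338 q^-2, M = 2
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = simulate_wiener(u, [1.0, -1.8036, 0.8338], [1.0, 1.0], sigma=0.1, rng=rng)
```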
Each term in the sum defines an m'th-order Volterra kernel, and each kernel possesses the same denominator dynamics, with B_m scaling the kernel's contribution to the output.

The simulation studies differ both in the resonance of the underlying linear filter and in the maximum kernel order, M. Two second-order (M = 2) systems were tested, with their dynamics given by

Sys2a: A(q) = 1 - 1.8036 q^{-1} + 0.8338 q^{-2}
Sys2b: A(q) = 1 - 1.5 q^{-1} + 0.8125 q^{-2}

'Sys2a' is almost critically damped, while 'Sys2b' has a lower damping ratio. The impulse response of each underlying linear filter can be seen in Figure 2.

[Figure 2: Normalized impulse response of the linear block for Sys2a/Sys3/Sys4 (left) and Sys2b (right).]

Studies were also performed on two higher-order systems, 'Sys3' (M = 3) and 'Sys4' (M = 4), which use the same underlying filter as Sys2a, and approximately equal contributions to the output from each kernel.

For each system and method, Monte Carlo studies were performed at Signal to Noise Ratios (SNR) of 20dB and 5dB, with 100 system realizations per setting. The input, u, is constructed as a Gaussian distributed signal with unit variance, and the data length chosen to be N = 3412, only 30% longer than the minimum least squares requirement for Sys2a and Sys2b.

The methods proposed in this paper were evaluated alongside their unregularized counterparts, as well as a direct time-domain approach. The details of each method are:

(1) ReLS: The regularized least squares method (from [1]) in the time domain, using (13), (14) and (15).
(2) LBF/KBF: Least squares estimation of LBF/KBF coefficients using (8). Generating parameters are pre-optimized using Algorithm 1.
(3) ReLBF/ReKBF: Regularized estimation of LBF/KBF coefficients using (16) and Algorithm 1.
The time-domain method was required to estimate each kernel up to a maximum lag n_m = 70, while the basis function methods used a memory length of B_m = 15. For Sys3 and Sys4, ReLS becomes too computationally intensive to be feasible, hence only the LBF and ReLBF methods were tested for these cases. For all regularized estimates, optimization of (11) was performed via the GlobalSearch algorithm in MATLAB, using the fmincon function to find local minima.

6.2 Results

For each Monte Carlo study, the estimation errors are quantified with the validation error metric used in [1], calculated by applying an input of length 50,000 to both the true system and the estimated system and defining the 'normalised RMS error' as

E_{NRMS} = \frac{\mathrm{rms}(y_{val} - y_{mod})}{\mathrm{rms}(y_{val})},   (18)

where y_{val} and y_{mod} are the noise-less outputs of the true and estimated systems respectively.

The second-order results, presented as boxplots in Figures 3 and 4, show significant improvements in output prediction using ReLBF and ReKBF. The advantage of Kautz functions for resonant systems is clear in the Sys2b (low damping) results, with ReKBF outperforming all other methods. For comparison, mean computation times are also provided in Table 1.

Table 1: Mean computation times for 2nd-order systems
Method:   ReLS   LBF   KBF   ReLBF   ReKBF
Time (s): 375.8  9.7   37.2  14.9    43.5

[Figure 3: Validation errors for Sys2a with SNR = 20dB (top) and 5dB (bottom).]
[Figure 4: Validation errors for Sys2b with SNR = 20dB (top) and 5dB (bottom).]

The validation errors for Sys3 and Sys4 are given in Figure 5, which highlights the benefit of the regularized methods in lowering model error to tolerable levels. At these orders, the newly proposed method could obtain estimates with reasonable accuracy and computation time using a standard computer architecture, which was not possible using existing methods.
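For reference, the validation metric (18) is a one-line ratio of RMS values; the block below is a minimal sketch of its computation.

```python
import numpy as np

def enrms(y_val, y_mod):
    """Normalised RMS validation error (18)."""
    rms = lambda v: np.sqrt(np.mean(np.square(np.asarray(v))))
    return rms(np.asarray(y_val) - np.asarray(y_mod)) / rms(y_val)
```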
[Figure 5: Validation errors for Sys3 (left) and Sys4 (right).]

7 Conclusion

This paper makes a novel proposal to combine the techniques of basis function modelling and regularization for the purpose of Volterra series estimation, and provides the theoretical motivation for imposing standard covariance structures on basis function kernels. To reduce the complexity of the regularization cost function, a separable optimization method was designed for pre-selecting Laguerre/Kautz parameters for each kernel. The performance of the proposed estimation method has been compared against unregularized estimates and estimates made directly in the time domain, with improved output prediction in all test cases. Furthermore, the proposed methods allowed access to computationally feasible and accurate estimates at higher series orders than previously possible, even in low data length and high noise settings.

References

[1] G. Birpoutsoukis, A. Marconato, J. Lataire, and J. Schoukens. Regularized nonparametric Volterra kernel estimation. Automatica, 82:324-327, 2017.
[2] S. Boyd and L. O. Chua. Fading memory and the problem of approximating nonlinear operators with Volterra series. IEEE Transactions on Circuits and Systems, 32(11):1150-1161, 1985.
[3] R. Campello, G. Favier, and W. do Amaral. Optimal expansions of discrete-time Volterra models using Laguerre functions. Automatica, 40:815-822, 2004.
[4] T. Chen and L. Ljung. Regularized system identification using orthonormal basis functions. In Proc. of CDC-ECC, pages 1291-1296, 2015.
[5] C. M. Cheng, Z. K. Peng, W. M. Zhang, and G. Meng. Volterra-series-based nonlinear system modeling and its engineering applications: A state-of-the-art review. Mechanical Systems and Signal Processing, 87(A):340-364, 2017.
[6] A. da Rosa, R. Campello, and W. do Amaral. Choice of free parameters in expansions of discrete-time Volterra models using Kautz functions.
Automatica, 43:1084-1091, 2007.
[7] P. Heuberger, P. Van den Hof, and B. Wahlberg. Modelling and Identification with Rational Orthogonal Basis Functions. Springer, 2005.
[8] G. Pillonetto and G. De Nicolao. A new kernel-based approach for linear system identification. Automatica, 46(1):81-93, 2010.
[9] G. Pillonetto, F. Dinuzzo, T. Chen, G. De Nicolao, and L. Ljung. Kernel methods in system identification, machine learning and function estimation: A survey. Automatica, 50(3):657-682, 2014.
[10] W. J. Rugh. Nonlinear System Theory: The Volterra-Wiener Approach. Johns Hopkins University Press, 1980.
[11] M. Schetzen. The Volterra and Wiener Theories of Nonlinear Systems. Wiley and Sons, 1980.
[12] B. Wahlberg. System identification using Laguerre models. IEEE Transactions on Automatic Control, AC-36(5):551-562, 1991.
[13] B. Wahlberg. System identification using Kautz models. IEEE Transactions on Automatic Control, AC-39(6):1276-1282, 1994.