Coefficient Matrices Computation of Structural Vector Autoregressive Model
In this paper we present the Large Inverse Cholesky (LIC) method, an efficient method for computing the coefficient matrices of a Structural Vector Autoregressive (SVAR) model.
Authors: Aravindh Krishnamoorthy
I. INTRODUCTION

The Structural Vector Autoregressive (SVAR) model [1] is widely used for multi-branch signal modelling in the fields of Wireless Communication, Econometrics, Physics, and Multi-dimensional Audio Signal Processing. Given x = {x(n)}, the input matrix with M signal branches and length N, we would like to express x in K-th order vector autoregressive form in terms of time-invariant M x M matrices L, R_i, and an intercept vector t, collectively referred to as the coefficient matrices, such that the residual w = {w(n)} has identity covariance matrix I_M. The SVAR model is given as follows:

    L x(n) = t + \sum_{i=1}^{K} R_i x(n-i) + w(n)                    (1)

In this paper we present an efficient method for computing the model coefficient matrices. In Section II we review a widely used method based on least squares. In Section III we propose the Large Inverse Cholesky (LIC) method. In Section IV we compare the computational complexity of these methods, followed by concluding remarks in Section V.

II. LEAST-SQUARES BASED METHOD

The least-squares based method uses the reduced-form VAR (RVAR) as an intermediate step in computing the coefficient matrices of the SVAR. First, the input matrix x is modelled as an RVAR, and the coefficient matrices of the RVAR are computed using least squares. The RVAR model is as follows:

    x(n) = c + \sum_{i=1}^{K} A_i x(n-i) + v(n)                      (2)

In the above equation, the covariance matrix of v = {v(n)} is a positive-definite symmetric matrix.
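As a concrete illustration of the reduced form (2), the following minimal Python/NumPy sketch generates a realisation for M = 2, K = 1. The coefficients c, A1 and the noise covariance Sigma are hypothetical values chosen for illustration only, not values from the paper; the point is that the covariance of v(n) is symmetric positive-definite but generally not the identity, which is what distinguishes (2) from the structural form (1).

```python
import numpy as np

# Sketch of the reduced-form model (2) for M = 2, K = 1.
# c, A1 and Sigma below are hypothetical, for illustration only.
rng = np.random.default_rng(1)
M, N = 2, 10000
c = np.array([0.5, -0.2])
A1 = np.array([[0.5, 0.1],
               [0.0, 0.4]])      # eigenvalues inside the unit circle: stable
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])   # positive-definite covariance of v(n)

v = rng.multivariate_normal(np.zeros(M), Sigma, size=N).T
x = np.zeros((M, N))
for n in range(1, N):
    x[:, n] = c + A1 @ x[:, n - 1] + v[:, n]

# Sample covariance of v: symmetric positive-definite, but not I_M.
# Whitening it is what converts (2) into the SVAR form (1).
S_hat = np.cov(v)
print(np.allclose(S_hat, S_hat.T))            # → True
print(np.all(np.linalg.eigvalsh(S_hat) > 0))  # → True
```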
Equation (2) may be rewritten in matrix form as follows:

    X = AS + V                                                       (3)

[Footnote: Aravindh Krishnamoorthy is currently with Ericsson Modem Nuremberg GmbH, working in the area of Wireless Communication. E-mail: aravindh.krishnamoorthy@ericsson.com, aravindh.k@ieee.org.]

where the matrices X \in C^{M x (N-K)}, A \in C^{M x (MK+1)}, S \in C^{(MK+1) x (N-K)}, and V \in C^{M x (N-K)} are given as:

    X = [x(K+1), x(K+2), ..., x(N)]                                  (4)

    A = [c, A_1, A_2, ..., A_K]                                      (5)

    S = [1_{1 x (N-K)};
         x(K),   x(K+1), ..., x(N-1);
         x(K-1), x(K),   ..., x(N-2);
         ...;
         x(1),   x(2),   ..., x(N-K)]                                (6)

    V = [v(K+1), v(K+2), ..., v(N)]                                  (7)

In the definition of S, a comma indicates that the next term is concatenated horizontally, thereby increasing the number of columns, while a semicolon indicates that the next term is concatenated vertically, thereby increasing the number of rows. The parameter matrix A of the RVAR model and the matrix V may be found using the least-squares method as:

    A = X S^H (S S^H)^{-1}                                           (8)

    V = X - AS                                                       (9)

Next, the RVAR model is converted to its equivalent SVAR model by multiplying the RVAR model equation (2) with the inverse-Cholesky based whitening filter of V. From a comparison of equations (1) and (2) we note that the inverse-Cholesky filter is L, i.e. V V^H = L^{-1} (L^{-1})^H. Therefore, the coefficient matrices of the SVAR may be computed from the coefficient matrices of the RVAR by multiplying them with L.

III. LARGE INVERSE CHOLESKY METHOD

The Large Inverse Cholesky method computes the coefficient matrices of the SVAR directly using an inverse Cholesky factor of T T^H, where T \in C^{(M(K+1)+1) x (N-K)} is defined as follows:

    T = [1_{1 x (N-K)};
         x(1),   x(2),   ..., x(N-K);
         x(2),   x(3),   ..., x(N-K+1);
         ...;
         x(K),   x(K+1), ..., x(N-1);
         x(K+1), x(K+2), ..., x(N)]                                  (10)

Let U be the lower-triangular inverse Cholesky factor of T T^H, so that:

    U^{-1} (U^{-1})^H = T T^H                                        (11)

Let the index vectors be:

    \alpha  = (MK+2, ..., M(K+1)+1)                                  (12)
    \beta_i = ((i-1)M+2, ..., iM+1)                                  (13)

Then the coefficient matrices of the SVAR model are given by:

    L   = U(\alpha, \alpha)                                          (14)
    R_i = -U(\alpha, \beta_{K-i+1})                                  (15)
    t   = -U(\alpha, 1)                                              (16)

with i = 1, ..., K.

We can easily verify that the coefficient matrices computed by the least-squares method and the LIC method are equivalent. Let R = [t, R_1, R_2, ..., R_K], and let the last M rows of U be the matrix W; then W = [-R, L] and T = [S; X]. From equation (11), U (T T^H) U^H = I, and taking the last M rows of U gives:

    [-R, L] [S S^H, S X^H; X S^H, X X^H] [-R, L]^H = I_M             (17)

Upon simplification:

    L (X - L^{-1} R S)(X - L^{-1} R S)^H L^H = I_M                   (18)

which is readily identified as the equivalent of the least-squares based method.

A MATLAB-compatible algorithm without optimisation is given below; the full program may be downloaded from the MATLAB Central File Exchange [4].

    T = zeros(M*(K+1)+1, N-K);
    T(1,:) = ones(1, N-K);
    T(M*K+2:end,:) = X(:, K+1:end);
    for i = 1:K
        T(M*(i-1)+2:M*i+1,:) = X(:, i:i-1+N-K);
    end

    U = inv(chol(T*T', 'lower'));
    alpha = M*K+2:M*(K+1)+1;

    L = U(alpha, alpha);
    R = zeros(M, M, K);
    for i = 1:K
        R(:,:,i) = -U(alpha, (K-i)*M+2:(K-i+1)*M+1);
    end
    t = -U(alpha, 1);

IV. COMPUTATIONAL COMPLEXITY

For computing the complexity, we consider the number of multiplies for each operation. It is assumed that, on modern DSP processors, the multiply-accumulate (MAC) instruction hides the additions performed in each step. Further, where the output of a matrix multiply is a symmetric matrix, the number of multiplies is taken as half of the conventional value.
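Before turning to the cost figures, the MATLAB listing above can be cross-checked numerically. The following is a rough Python/NumPy translation, a sketch using synthetic test data rather than the reference implementation from [4]: it builds T as in (10), extracts L, R_i, and t via (11)-(16) (indices shifted to 0-based), and verifies that the resulting SVAR residual is whitened, as equations (17)-(18) require.

```python
import numpy as np

# Sketch of the LIC algorithm in 0-based NumPy indexing; M, K, N and X
# mirror the paper's symbols, but X here is synthetic test data.
rng = np.random.default_rng(0)
M, K, N = 2, 2, 5000
X = rng.standard_normal((M, N))
for n in range(1, N):
    X[:, n] += 0.4 * X[:, n - 1]     # mildly correlated test signal

# T as in equation (10): ones row, K lagged blocks, current samples.
T = np.ones((M * (K + 1) + 1, N - K))
for i in range(1, K + 1):
    T[M * (i - 1) + 1 : M * i + 1, :] = X[:, i - 1 : i - 1 + N - K]
T[M * K + 1 :, :] = X[:, K:]

# Lower-triangular inverse Cholesky factor of T T^H, equation (11).
U = np.linalg.inv(np.linalg.cholesky(T @ T.T))

# Coefficient matrices, equations (12)-(16), alpha shifted to 0-based.
alpha = slice(M * K + 1, M * (K + 1) + 1)
L = U[alpha, alpha]
t = -U[alpha, 0]
R = [-U[alpha, (K - i) * M + 1 : (K - i + 1) * M + 1]
     for i in range(1, K + 1)]

# The residual w(n) = L x(n) - t - sum_i R_i x(n-i) satisfies
# w w^H = I_M exactly, as in equations (17)-(18).
w = L @ X[:, K:] - t[:, None] - sum(
    R[i - 1] @ X[:, K - i : N - i] for i in range(1, K + 1))
print(np.allclose(w @ w.T, np.eye(M), atol=1e-6))  # → True
```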
The computational complexity of Cholesky decomposition and inversion is taken, for an N x N matrix, at the most efficient value of N^3/2.

A. Least-Squares Method

    Operation               Number of multiplies
    ------------------------------------------------
    S S^H                   (MK+1)^2 (N-K)/2
    (S S^H)^{-1}            (MK+1)^3 / 2
    X S^H                   M (MK+1) (N-K)
    X S^H (S S^H)^{-1}      M (MK+1)^2
    V̂                       M (MK+1) (N-K)
    V̂ V̂^H                   M^2 (N-K)
    L                       M^3 / 2
    R_i                     M^3 K
    t                       M^2

B. Large Inverse Cholesky Method

    Operation               Number of multiplies
    ------------------------------------------------
    T T^H                   (M(K+1)+1)^2 (N-K)/2
    Û                       (M(K+1)+1)^3 / 2

LIC is efficient for M, K << N. For practical cases, LIC can be up to 30% more efficient than the least-squares method for finding the coefficient matrices of an SVAR.

V. CONCLUSION

In this paper we presented the Large Inverse Cholesky method for computing the coefficient matrices of a Structural Vector Autoregressive model, which is up to 30% more efficient than the conventional least-squares based method.

REFERENCES

[1] Helmut Lütkepohl, New Introduction to Multiple Time Series Analysis, Springer, 2007.
[2] David S. Watkins, Fundamentals of Matrix Computations, Second Edition, Wiley, 2002.
[3] Gene H. Golub, Charles F. Van Loan, Matrix Computations, Third Edition, The Johns Hopkins University Press, 1996.
[4] Aravindh Krishnamoorthy, Large Inverse Cholesky (http://www.mathworks.de/matlabcentral/fileexchange/44106), MATLAB Central File Exchange, 2013.