Cost Trade-offs in Matrix Inversion Updates for Streaming Outlier Detection


Authors: Florian Grivet, Louise Travé-Massuyès

Florian Grivet (a, b, *), Louise Travé-Massuyès (b, 1)

a: CNES, 18 Av. Edouard Belin, 31400 Toulouse, France
b: LAAS-CNRS, University of Toulouse, CNRS, 7 Av. du Colonel Roche, 31400 Toulouse, France
*: Corresponding author: florian.grivet@cnes.fr (F. Grivet); louise@laas.fr (L. Travé-Massuyès)
ORCID(s): 0009-0007-7096-3258 (F. Grivet); 0000-0002-5322-8418 (L. Travé-Massuyès)
1: Head of the ANITI chair ADDX

Published in Array. DOI: 10.1016/j.array.2026.100737

Keywords: Outlier detection; Christoffel function; Rank update; Matrix inversion; Sherman-Morrison; Woodbury Matrix Identity; Computational cost

Abstract: Outlier detection identifies data points that deviate significantly from expected patterns, revealing anomalies that may require special attention. Incorporating online learning further improves accuracy by continuously updating the model to reflect the most recent data. When employing the Christoffel function as an outlier score, online learning requires updating the inverse of a matrix following a rank-$N$ update, given the initial inverse. Surprisingly, there is no consensus on the optimal method for this task. This technical note compares three different updating methods, Direct Inversion (DI), Iterative Sherman-Morrison (ISM), and the Woodbury Matrix Identity (WMI), to identify the most suitable approach for different scenarios. We first derive the theoretical computational cost of each method and then validate these findings through comprehensive Python simulations run on a CPU. These results allow us to propose a simple, quantitative, and easy-to-remember rule that can be stated qualitatively as follows: ISM is optimal for rank-1 updates, WMI excels for small updates relative to the matrix size, and DI is preferable otherwise. This technical note produces a general result for any problem involving a matrix inversion update. In particular, it contributes to the ongoing development of efficient online outlier detection techniques.
1. Introduction

The detection of outliers in data streams has become increasingly important in a wide range of applications, from fraud detection to quality control in manufacturing. In such settings, data arrive sequentially and often at high rates, making online learning approaches particularly attractive. These methods continuously update models as new observations become available, allowing anomaly detection systems to adapt to evolving data distributions while maintaining strong performance over time.

Among recent approaches to anomaly detection in data streams [10, 17], Ducharlet et al. [2] introduce an outlier scoring mechanism based on the Christoffel function (CF) [6]. This score is defined in terms of the inverse of a symmetric positive definite moment matrix associated with the data. In streaming settings, this matrix is updated sequentially via rank-$N$ corrections as new observations arrive. While the resulting CF scores are invariant to the specific inverse update strategy employed (up to numerical precision), the choice of update method has a substantial impact on computational cost, numerical stability, and scalability. These considerations are critical in streaming settings, where efficiency directly constrains real-time applicability.

Several strategies are available for updating matrix inverses after rank-$N$ corrections, including Direct Inversion (DI), Iterative Sherman-Morrison (ISM) [11], and the Woodbury Matrix Identity (WMI) [15]. Despite their widespread use, there is currently no clear quantitative guidance on which method is preferable under different conditions, such as varying matrix size $s$ or update rank $N$.
This lack of guidance can lead to inefficient implementations that unnecessarily limit the practicality of CF-based anomaly detection in streaming environments.

This article is presented as a technical note aimed at addressing this gap by comparing inverse update strategies in the context of Christoffel-function-based outlier detection. It does not propose a new scoring model, but instead analyzes how different matrix update methods may affect computational efficiency. The contributions of this technical note are summarized as follows:

- This note introduces the Christoffel function, explores its key properties, and presents DyCF, a frugal streaming outlier detection method inspired by these foundations, which motivates the present work.
- This work derives the computational costs of three matrix inverse rank-$N$ update methods, namely Direct Inversion, Iterative Sherman-Morrison, and the Woodbury Matrix Identity.
- Summarizing and comparing the three theoretical computational costs yields a unified reference, and the findings are validated through comprehensive Python simulations run on a CPU.
- As a key takeaway, this note offers a simple, quantitative, and easy-to-remember rule, expressed in terms of the matrix dimension $s$ and the update rank $N$, for selecting among the three rank-$N$ matrix inverse update strategies implemented in Python on CPU.

The technical note is organized as follows. Section 2 briefly reviews the Christoffel function, highlighting the properties relevant to anomaly detection. Section 3 then discusses its use in streaming outlier detection, emphasizing the need for efficient inverse updates under rank-$N$ corrections.
Section 4 presents the Direct Inversion, Iterative Sherman-Morrison, and Woodbury Matrix Identity approaches, detailing their algorithms and theoretical computational costs. Section 5 compares the theoretical computational costs of the three methods. Section 6 confronts the theoretical predictions with empirical results to derive practical recommendations. Finally, Section 7 summarizes the main findings, restates the practical guidance, discusses limitations, and points at interesting topics for future work.

2. The Christoffel function

The CF originates from the theory of approximation and orthogonal polynomials. Lasserre and Pauwels [6] demonstrated that the CF is related to a sum-of-squares (SOS) polynomial whose sublevel set effectively captures the shape of a dataset. Building on this discovery, Lasserre and Pauwels [7], and Lasserre et al. [8], developed a comprehensive theoretical framework for data analysis, and in particular, anomaly detection.

This section explores key properties of both the theoretical CF (referred to as the population Christoffel function) and its empirical counterpart.

2.1. The population Christoffel function

Let $\mathbf{x} = (x_1, x_2, \dots, x_p) \in \mathbb{R}^p$. To define polynomials, we adopt the multi-index notation $\alpha = (\alpha_i)_{i=1 \dots p} \in \mathbb{N}^p$, such that the monomial $\mathbf{x}^\alpha$ of total degree $\deg(\mathbf{x}^\alpha) = |\alpha| = \sum_{i=1}^{p} \alpha_i$ is given by $\mathbf{x}^\alpha = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_p^{\alpha_p}$. In short form, we denote the set of $p$-variate polynomials by $\mathbb{R}[\mathbf{x}]$. The dimension of $\mathbb{R}_n[\mathbf{x}]$, the space of $p$-variate polynomials of degree at most $n$, is given by $s(n) = \binom{n+p}{p}$. Let $\{b_i : 1 \le i \le s(n)\}$ be a basis of $\mathbb{R}_n[\mathbf{x}]$. We denote

$v_n : \mathbb{R}^p \to \mathbb{R}^{s(n)}, \quad \mathbf{x} \mapsto \left( b_1(\mathbf{x}), b_2(\mathbf{x}), \dots, b_{s(n)}(\mathbf{x}) \right)^\top.$

The monomials in $v_n(\mathbf{x})$ are graded in the lexicographic order (monomials are first ordered according to ascending total degree $|\alpha|$, and then using lexicographic order on the variables, considering $\mathbf{x}_1 = x$, $\mathbf{x}_2 = y$, etc.).

Let $\Omega \subseteq \mathbb{R}^p$ be a compact set with non-empty interior.
Let  be a Borel measure suppor ted on Ω and define the associated moment matrix. Definition 2.1 (The moment matrix). The moment matr ix of degr ee  ∈ ℕ , associated with measur e  , denoted by   (  ) ∈ ℝ   (  )×   (  ) , is defined as   (  ) =  ℝ    ( 𝐱 )   ( 𝐱 )    ( 𝐱 ) . (1) 2 lexicographic order: monomials are first ordered according to as- cending total deg ree    , and then using lexicographic order on variables considering 𝐱 1 = , 𝐱 2 =  , etc. Note that this matrix is syme tr ic positive definite, thus non-singular for all  (see [ 6 , Section 2.2] or [ 13 , Remark 2.3] for the proof). The population Christoffel function is defined as f ollow s. Definition 2.2 (The population Christoffel function). The population Christoffel function of degr ee  ∈ ℕ , associated with the measur e  , denoted by Λ   ( 𝐱 ) , is defined as Λ   ( 𝐱 ) =   ∈ ℝ  [ 𝐱 ]   Ω  2 ( 𝐳 )   ( 𝐳 ) ,  ( 𝐱 ) = 1  . (2) No w , for any polynomial  ∈ ℝ  [ 𝐱 ] , there exists some 𝐩 ∈ ℝ   (  ) such that  ( 𝐱 ) = 𝐩    ( 𝐱 ) for any 𝐱 ∈ ℝ  . Thus, the objective function becomes  Ω 𝐩    ( 𝐳 )   ( 𝐳 )  𝐩   ( 𝐳 ) = 𝐩    (  ) 𝐩 , so that Λ   ( 𝐱 ) =  𝐩 ∈ ℝ   (  )  𝐩    (  ) 𝐩 , 𝐩    ( 𝐱 ) = 1  . (3) The Christoffel-Darboux Kernel, which is defined below , is related to t he Chr istoffel function. Definition 2.3 (The Christoffel-Darboux Kernel). The Christoffel-Darboux K ernel (CD-Kernel) associat ed with the measure  , denoted by    ( 𝐱 , 𝐲 ) , is defined as ( 𝐱 , 𝐲 ) ↦    ( 𝐱 , 𝐲 ) =   ( 𝐱 )    (  )   ( 𝐲 ) , (4) while t he polynomial  , reads 𝐱 ↦  , ( 𝐱 ) =    ( 𝐱 , 𝐱 ) =   ( 𝐱 )    (  ) −1   ( 𝐱 ) . (5)  , is a sum-of-squares (SOS) polynomial of degree 2  . An interesting property of this SOS polynomial is its behavior inside and outside its suppor t Ω . Lasser re et al. 
Lasserre et al. [8, Lemma 4.3.1] establish at least exponential growth with $n$ for data points outside the support, while inside, the growth is at most polynomial [8, Lemma 4.3.2].

The CF of degree $n \in \mathbb{N}$ defined in equation (2) can be rewritten as

$\Lambda_n^\mu(\mathbf{x}) = \frac{1}{Q_{\mu,n}(\mathbf{x})} = \frac{1}{v_n(\mathbf{x})^\top M_n(\mu)^{-1} v_n(\mathbf{x})}$ (6)

and we have

$\Lambda_n^\mu(\mathbf{x})^{-1} = Q_{\mu,n}(\mathbf{x}),$ (7)

so that $\Lambda_n^\mu(\mathbf{x})^{-1}$ inherits the properties of $Q_{\mu,n}(\mathbf{x})$.

2.2. The empirical Christoffel function

In practical applications, the measure $\mu$ is unknown. Let $X$ be a cloud of $m$ data points $\mathbf{x} \in \mathbb{R}^p$ sampled from the theoretical measure $\mu$ supported on $\Omega$. We define the discrete measure $\mu_X$ supported on $X$ such that $\mu_X = \frac{1}{m} \sum_{\mathbf{x} \in X} \delta_{\mathbf{x}}$, where $\delta_{\mathbf{x}}$ corresponds to the Dirac measure at $\mathbf{x}$. The empirical version of the moment matrix can be written as

$M_n(\mu_X) = \frac{1}{m} \sum_{\mathbf{x} \in X} v_n(\mathbf{x}) \, v_n(\mathbf{x})^\top.$ (8)

Lasserre et al. [8, Corollary 6.3.5] guarantee that the matrix $M_n(\mu_X)$ is invertible if the size of $X$, $|X| = m$, is greater than $s(n)$, i.e. the number of samples is greater than the size of the moment matrix.

Definition 2.4 (The empirical Christoffel function). Under the condition $|X| = m > s(n)$, the empirical CF is defined as

$\Lambda_n^{\mu_X}(\mathbf{x}) = \frac{1}{v_n(\mathbf{x})^\top M_n(\mu_X)^{-1} v_n(\mathbf{x})}.$ (9)

According to Lasserre and Pauwels [7, Theorem 3.13], the empirical CF converges to the population CF as $m$ increases:

$\left\| \Lambda_n^{\mu_X} - \Lambda_n^\mu \right\|_\infty = \sup_{\mathbf{x} \in \mathbb{R}^p} \left| \Lambda_n^{\mu_X}(\mathbf{x}) - \Lambda_n^\mu(\mathbf{x}) \right| \xrightarrow[m \to \infty]{} 0.$

3. The Christoffel function for outlier detection in data streams

Ducharlet et al. [2] introduce DyCF, a novel outlier detection algorithm for data streams leveraging the Christoffel function. This algorithm supports online learning using a rank-$N$ update.
This section details the anomaly detection method based on the CF as well as the online learning principle of the method.

3.1. The Christoffel function for outlier detection

As mentioned in Section 2, the polynomial $Q_{\mu,n}(\mathbf{x})$, and hence $\Lambda_n^\mu(\mathbf{x})^{-1}$, effectively captures the shape of the underlying dataset. Furthermore, as explained above, there exists a dichotomy in the growth behavior of $\Lambda_n^\mu(\mathbf{x})^{-1}$: it exhibits at most polynomial growth when $\mathbf{x}$ is within the support $\Omega$ and at least exponential growth when $\mathbf{x}$ is outside $\Omega$. Since $\Lambda_n^{\mu_X}(\mathbf{x})^{-1}$ converges to $\Lambda_n^\mu(\mathbf{x})^{-1}$, these properties are preserved for finite datasets. Consequently, $\Lambda_n^{\mu_X}(\mathbf{x})^{-1}$ is well suited as a scoring function for outlier detection. One can define a level set, or threshold, $\tau$, such that all points $\mathbf{x} \in \mathbb{R}^p$ with a value of $\Lambda_n^{\mu_X}(\mathbf{x})^{-1}$ higher than $\tau$ are considered outliers. This defines the scoring function

$\mathrm{score}_\tau(\mathbf{x}) = \frac{\Lambda_n^{\mu_X}(\mathbf{x})^{-1}}{\tau},$ (10)

where a point is detected as an outlier if $\mathrm{score}_\tau(\mathbf{x}) > 1$.

3.2. Online learning in DyCF

Considering $X$ as a dataset, $M_n(\mu_X)$ can be seen as a summary or an encoding of this dataset. In a data stream, if a new nominal instance $\mathbf{x}$ arrives, we can improve the performance of the algorithm by integrating this point into the database $X$ and updating the moment matrix $M_n(\mu_X)$. This is called online learning.

The first update method would be to recalculate the moment matrix $M_n(\mu_X)$ using equation (8) and to invert it. However, this last operation is very costly. Nevertheless, we can notice that:

$M_n(\mu_{X+1}) = \frac{1}{m+1} \sum_{\mathbf{z} \in X \cup \{\mathbf{x}\}} v_n(\mathbf{z}) \, v_n(\mathbf{z})^\top = \frac{1}{m+1} \left( m \, M_n(\mu_X) + v_n(\mathbf{x}) \, v_n(\mathbf{x})^\top \right).$

Thus, we can use the Sherman-Morrison formula or the Woodbury Matrix Identity to avoid recalculating the inverse of $M_n(\mu_{X+1})$, which is an $s(n) \times s(n)$ matrix.
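As a concrete illustration of the empirical moment matrix (8) and of the inverse-CF quantity thresholded in (10), the score can be computed in a few lines of NumPy. This is only a minimal sketch, not the DyCF implementation; the helper names (`monomial_basis`, `v_n`, `cf_inverse_score`) are ours, and the direct `solve` is used instead of any incremental update.

```python
import itertools
import numpy as np

def monomial_basis(p, n):
    """All exponent tuples of total degree <= n, in graded order."""
    alphas = [a for a in itertools.product(range(n + 1), repeat=p) if sum(a) <= n]
    return sorted(alphas, key=lambda a: (sum(a), a))

def v_n(x, alphas):
    """Evaluate the monomial vector v_n(x) in R^{s(n)}."""
    return np.array([np.prod(x ** np.array(a)) for a in alphas])

def cf_inverse_score(X, x, n):
    """Inverse empirical CF, v_n(x)^T M_n(mu_X)^{-1} v_n(x), cf. equation (9)."""
    p = X.shape[1]
    alphas = monomial_basis(p, n)
    V = np.array([v_n(row, alphas) for row in X])   # m x s(n) design matrix
    M = V.T @ V / len(X)                            # empirical moment matrix (8)
    vx = v_n(x, alphas)
    return vx @ np.linalg.solve(M, vx)
```

On a standard-normal cloud, the score stays small near the data and grows sharply far from its support, which is exactly the dichotomy exploited for outlier detection.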
When the update involves $N \in \mathbb{N}^*$ new data points $\mathbf{x}^{(i)} \in \mathbb{R}^p$, the online learning phase proceeds in three steps:

(i) We denormalize $M_n(\mu_X)$ to obtain either $A = M_n(\mu_X) \times m$, or $A^{-1} = M_n(\mu_X)^{-1} / m$.

(ii) The DI method calculates $A_{new}^{-1}$ from $A$ using

$A_{new}^{-1} = \left( A + \sum_{i=1}^{N} v_n(\mathbf{x}^{(i)}) \, v_n(\mathbf{x}^{(i)})^\top \right)^{-1}$ (11)

and the ISM and WMI methods calculate $A_{new}^{-1}$ from $A^{-1}$ using the Sherman-Morrison (SM) and WMI formulas, i.e. equations (13) and (14) given below, respectively.

(iii) We renormalize $A_{new}^{-1}$ to obtain the updated inverse moment matrix $M_n(\mu_{X+N})^{-1} = A_{new}^{-1} \times (m + N)$.

The Sherman-Morrison formula and the Woodbury Matrix Identity. Suppose $A \in \mathbb{R}^{s \times s}$ is an invertible square matrix and $u, v \in \mathbb{R}^s$ are column vectors. Then $A + u v^\top$ is invertible if and only if $1 + v^\top A^{-1} u \neq 0$. In this case, the SM formula [1] states the following:

$(A + u v^\top)^{-1} = A^{-1} - \frac{A^{-1} u \, v^\top A^{-1}}{1 + v^\top A^{-1} u},$ (12)

which, in our case, becomes:

$\left( A + v_n(\mathbf{x}) v_n(\mathbf{x})^\top \right)^{-1} = A^{-1} - \frac{A^{-1} v_n(\mathbf{x}) \, v_n(\mathbf{x})^\top A^{-1}}{1 + v_n(\mathbf{x})^\top A^{-1} v_n(\mathbf{x})}.$ (13)

Now, let us recall the Woodbury Matrix Identity (WMI):

$(A + U V^\top)^{-1} = A^{-1} - A^{-1} U \left( I + V^\top A^{-1} U \right)^{-1} V^\top A^{-1},$ (14)

which, in our case, becomes:

$\left( A + V V^\top \right)^{-1} = A^{-1} - A^{-1} V \left( I_N + V^\top A^{-1} V \right)^{-1} V^\top A^{-1},$ (15)

with $I_N$ the identity matrix of size $N$ and $V$ the design matrix:

$V = \left[ v_n(\mathbf{x}^{(1)}) \;\; v_n(\mathbf{x}^{(2)}) \;\; \cdots \;\; v_n(\mathbf{x}^{(N)}) \right] \in \mathbb{R}^{s \times N}.$

Note that the normalization costs of steps (i) and (iii) are the same regardless of the method used. The most efficient method will therefore be the one with the lowest cost in step (ii). Moreover, the DI method updates both $M_n(\mu_X)$ and $M_n(\mu_X)^{-1}$. However, to detect outliers, since $\Lambda_n^{\mu_X}$ only uses $M_n(\mu_X)^{-1}$, we do not need to compute $M_n(\mu_{X+N})$, so we can use the ISM or WMI methods.
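Steps (i) to (iii), with the SM formula (13) applied once per new point (the iterative strategy analyzed in Section 4.2), can be sketched as follows. The function names are ours; $A$ is assumed spd, so $v^\top A^{-1} = (A^{-1} v)^\top$ and only one matrix-vector product is needed per step.

```python
import numpy as np

def sm_rank1(A_inv, v):
    """Sherman-Morrison (13): inverse of A + v v^T, given A^{-1} (A spd)."""
    B = A_inv @ v                               # A spd => v^T A^{-1} = B^T
    return A_inv - np.outer(B / (1.0 + v @ B), B)

def online_update(M_inv, m, V_new):
    """Steps (i)-(iii): denormalize, rank-N update, renormalize.

    M_inv : inverse of the current moment matrix M_n(mu_X), size s x s
    m     : number of samples summarized by M_inv
    V_new : s x N design matrix holding the N new vectors v_n(x)."""
    A_inv = M_inv / m                           # (i)   A^{-1} = M^{-1} / m
    for i in range(V_new.shape[1]):             # (ii)  N successive SM steps
        A_inv = sm_rank1(A_inv, V_new[:, i])
    return A_inv * (m + V_new.shape[1])         # (iii) M_n(mu_{X+N})^{-1}
```

The result agrees, up to floating-point precision, with recomputing and inverting the moment matrix from scratch.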
4. Computational costs

For the sake of simplifying calculations, the size of the moment matrix $M_n(\mu_X)$ will be referred to as $s$ in this section. Appendix A reports the costs of the intermediate steps used in this section in terms of floating-point operations (flops).

4.1. Computational cost of the DI method for a rank-$N$ update

In this subsection, we calculate the computational cost of the DI method and provide an algorithm for it.

4.1.1. Computational cost of a matrix inversion

The computational cost of inverting the moment matrix $M_n(\mu_X)$ of size $s \times s$ depends on the algorithm employed. Using an LU factorization, the computational cost of $M_n(\mu_X)^{-1}$ is expressed as [9, Theorem 2.31]:

$2 s^3$ flops. (16)

Since our moment matrix $M_n(\mu_X)$ is symmetric positive definite (spd), employing a Cholesky decomposition reduces the cost to [5, Section III.A]:

$\frac{5}{6} s^3$ flops. (17)

4.1.2. Rank-$N$ update computational cost

To perform a rank-$N$ update and apply equation (11), we first need to compute $\sum_{i=1}^{N} v_n(\mathbf{x}^{(i)}) \, v_n(\mathbf{x}^{(i)})^\top$. This involves performing $N$ column-vector by row-vector products as described in equation (29), with a computational cost of $N \times s^2$ flops. Then, we need to sum the $N$ resulting matrices of size $s \times s$, requiring $N - 1$ term-by-term matrix additions with a total cost of $(N - 1) s^2$ flops. Next, we perform a term-by-term matrix addition with $A$, which incurs a cost of $s^2$ flops. Thus, the computational cost of updating $A$ to form $A_{new} = A + \sum_{i=1}^{N} v_n(\mathbf{x}^{(i)}) \, v_n(\mathbf{x}^{(i)})^\top$ amounts to:

$N s^2 + (N - 1) s^2 + s^2 = 2 N s^2$ flops.
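A minimal NumPy sketch of this update followed by the spd inversion of equation (17): the function name is ours, and `np.linalg.inv` applied to the triangular Cholesky factor stands in for a dedicated triangular inverse routine.

```python
import numpy as np

def di_update(A, V):
    """Direct Inversion: invert A + V V^T from scratch (A spd, V is s x N)."""
    A_new = A + V @ V.T              # rank-N update, 2 N s^2 flops
    L = np.linalg.cholesky(A_new)    # A_new = L L^T (spd by construction)
    L_inv = np.linalg.inv(L)
    return L_inv.T @ L_inv           # A_new^{-1} = L^{-T} L^{-1}
```

Note that, unlike the inverse-only methods below, this approach also keeps the updated matrix $A_{new}$ itself available.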
Note that if we compute $A_{new}$ using the design matrix $V = [ v_n(\mathbf{x}^{(1)}) \; v_n(\mathbf{x}^{(2)}) \; \cdots \; v_n(\mathbf{x}^{(N)}) ] \in \mathbb{R}^{s \times N}$ and the relation $A_{new} = A + V V^\top$, the computational cost remains equivalent: 1 matrix-by-matrix product according to equation (32) and 1 term-by-term addition yield a cost of:

$2 N s^2 - s^2 + s^2 = 2 N s^2$ flops.

However, considering the superior optimization of matrix products in Python, we will employ this computational approach for efficiency during our tests.

Finally, we need to compute the inverse of $A_{new}$, which is an spd matrix, with the cost described in equation (17). Thus, the computational cost of the DI method (Algorithm 1) is

$\frac{5}{6} s^3 + 2 s^2 N$ flops. (18)

Algorithm 1: Direct Inversion, with cost $\frac{5}{6} s^3 + 2 s^2 N$ flops (18)
Require: Matrix $A$ of size $s \times s$, and vectors $v_n(\mathbf{x}^{(i)})$, $i = 1 \dots N$, to add to $A$;
1: Construct the design matrix $V = [ v_n(\mathbf{x}^{(1)}) \; v_n(\mathbf{x}^{(2)}) \; \cdots \; v_n(\mathbf{x}^{(N)}) ] \in \mathbb{R}^{s \times N}$;
2: $A_{new} \leftarrow A + V V^\top$;  ($2 N s^2$ flops)
3: Compute $A_{new}^{-1}$ using a Cholesky decomposition;  ($\frac{5}{6} s^3$ flops)
4: Output $A_{new}^{-1}$;

4.2. Computational cost of the ISM method for a rank-$N$ update

In this subsection, we calculate the computational cost of the ISM method and provide an algorithm for it.

4.2.1. Computational cost of the numerator

There are three ways to compute the numerator of the SM formula given by equation (13) and recalled below:

$\left( A + v_n(\mathbf{x}) v_n(\mathbf{x})^\top \right)^{-1} = A^{-1} - \frac{A^{-1} v_n(\mathbf{x}) \, v_n(\mathbf{x})^\top A^{-1}}{1 + v_n(\mathbf{x})^\top A^{-1} v_n(\mathbf{x})}.$

- Compute the outer product first, i.e., $A^{-1} \left( v_n(\mathbf{x}) \, v_n(\mathbf{x})^\top \right) A^{-1}$. This involves 1 column-vector by row-vector product (equation (29)) and 2 matrix products (equation (32)), totaling: $s^2 + 2 (2 s^3 - s^2) = 4 s^3 - s^2$ flops.

- Compute left to right.
This involves 1 matrix by column-vector product (equation (31)), 1 column-vector by row-vector product (equation (29)), and 1 matrix product (equation (32)), totaling: $2 s^2 - s + s^2 + 2 s^3 - s^2 = 2 s^3 + 2 s^2 - s$ flops.

- Compute the matrix-vector products first, i.e., $\left( A^{-1} v_n(\mathbf{x}) \right) \left( v_n(\mathbf{x})^\top A^{-1} \right)$. This involves 2 matrix-vector products (equations (31) and (30)) and 1 column-vector by row-vector product (equation (29)), totaling: $2 (2 s^2 - s) + s^2 = 5 s^2 - 2 s$ flops. Moreover, since $A$ is symmetric, $A^{-1} v_n(\mathbf{x}) = \left( v_n(\mathbf{x})^\top A^{-1} \right)^\top$, so we only have 1 matrix-vector product to compute. Thus, the computational cost is $3 s^2 - s$ flops.

The most effective way to compute the numerator of equation (13) is therefore to first perform the matrix-vector products, resulting in a computational cost of $5 s^2 - 2 s$ flops in general, and, in our spd case, a cost of

$3 s^2 - s$ flops. (19)

4.2.2. Computational cost of the denominator

Since we have already computed $A^{-1} v_n(\mathbf{x})$ during the numerator computation, we only need to perform 1 row-vector by column-vector product (equation (28)) and 1 addition, which amounts to $2 s - 1 + 1$ flops, i.e.,

$2 s$ flops. (20)

4.2.3. Sherman-Morrison computational cost

The numerator of equation (13) gives us an $s \times s$ matrix, and the denominator is a real number. Thus, there is a term-by-term division that costs $s^2$ flops. However, if we perform the division after computing $A^{-1} v_n(\mathbf{x})$ and before the column-vector by row-vector product of the numerator, we only divide a vector of size $s$, reducing the division cost from $s^2$ to $s$. Finally, we have a term-by-term matrix subtraction that costs $s^2$ flops.
Thus, the computational cost of the SM formula for an spd matrix is $(19) + (20) + s + s^2 = 3 s^2 - s + 2 s + s + s^2$ flops, i.e.,

$4 s^2 + 2 s$ flops. (21)

4.2.4. Iterative Sherman-Morrison computational cost

The SM formula provides a method for computing the inverse of a matrix that has been modified by a rank-1 update. Specifically, if the matrix is updated with $N$ outer products of the form $v_n(\mathbf{x}) v_n(\mathbf{x})^\top$, the inverse can be obtained by applying the SM formula iteratively $N$ times. As a result, the computational cost of a rank-$N$ update using the ISM method for an spd matrix (Algorithm 2) is

$\left( 4 s^2 + 2 s \right) N$ flops. (22)

Algorithm 2: Iterative Sherman-Morrison, with cost $(4 s^2 + 2 s) N$ flops (22)
Require: The inverse $A^{-1}$ of the matrix $A$, of size $s \times s$, and vectors $\left( v_n(\mathbf{x}^{(i)}) \right)_{i=1 \dots N} \in \mathbb{R}^s$ to add to $A$;
1: Initialize $A_{new}^{-1} \leftarrow A^{-1}$;
2: for $i = 1$ to $N$ do
3:   $B \leftarrow A_{new}^{-1} \times v_n(\mathbf{x}^{(i)})$;  ($2 s^2 - s$ flops)
4:   $D \leftarrow 1 + v_n(\mathbf{x}^{(i)})^\top \times B$;  ($2 s$ flops)
5:   $B' \leftarrow B / D$;  ($s$ flops)
6:   $A_{new}^{-1} \leftarrow A_{new}^{-1} - B' \times B^\top$;  ($2 s^2$ flops)
7: end for
8: Output $A_{new}^{-1}$;

4.3. Computational cost of the WMI method for a rank-$N$ update

In this subsection, we calculate the computational cost of the WMI method and provide an algorithm for it. The WMI, given by equation (15), is recalled below:

$\left( A + V V^\top \right)^{-1} = A^{-1} - A^{-1} V \left( I_N + V^\top A^{-1} V \right)^{-1} V^\top A^{-1}.$

The first step is to compute the matrix products $V^\top A^{-1} V$, with a computational cost given by equation (32):

$\underbrace{2 s^2 N - s N}_{B \,=\, V^\top \times A^{-1}} + \underbrace{2 s N^2 - N^2}_{B \times V} = 2 s N^2 + (2 s^2 - s) N - N^2$ flops.

Next, we perform a term-by-term addition with $I_N$, which has a computational cost of $N^2$ flops. Then, we need to invert the resulting summed matrix, which by construction is an spd $N \times N$ matrix.
The computational cost of this inversion step is $\frac{5}{6} N^3$ flops (17). Since we have already computed $B = V^\top A^{-1}$, and given that $A$ is spd, we have $A^{-1} V = \left( V^\top A^{-1} \right)^\top = B^\top$. Thus, we only have 2 matrix products left to compute, that is

$\underbrace{2 N^2 s - N s}_{C \,=\, \left( I_N + B V \right)^{-1} \times B} + \underbrace{2 N s^2 - s^2}_{B^\top \times C} = (2 N - 1) s^2 + (2 N^2 - N) s$ flops.

Finally, a term-by-term subtraction is performed, which costs $s^2$ flops. Combining these steps, the overall computational cost of the WMI method (Algorithm 3) is:

$2 s N^2 + (2 s^2 - s) N - N^2 + N^2 + \frac{5}{6} N^3 + (2 N - 1) s^2 + (2 N^2 - N) s + s^2 = 4 s^2 N + (4 N^2 - 2 N) s + \frac{5}{6} N^3$ flops. (23)

Algorithm 3: Woodbury Matrix Identity, with cost $4 s^2 N + (4 N^2 - 2 N) s + \frac{5}{6} N^3$ flops (23)
Require: The inverse $A^{-1}$ of the matrix $A$, of size $s \times s$, and vectors $\left( v_n(\mathbf{x}^{(i)}) \right)_{i=1 \dots N} \in \mathbb{R}^s$ to add to $A$;
1: Construct the design matrix $V = [ v_n(\mathbf{x}^{(1)}) \; v_n(\mathbf{x}^{(2)}) \; \cdots \; v_n(\mathbf{x}^{(N)}) ] \in \mathbb{R}^{s \times N}$;
2: $B \leftarrow V^\top A^{-1}$;  ($2 s^2 N - s N$ flops)
3: $C \leftarrow I_N + B V$;  ($2 s N^2$ flops)
4: Compute $C^{-1}$ using a Cholesky decomposition;  ($\frac{5}{6} N^3$ flops)
5: $D \leftarrow C^{-1} B$;  ($2 N^2 s - N s$ flops)
6: $A_{new}^{-1} \leftarrow A^{-1} - B^\top D$;  ($2 N s^2$ flops)
7: Output $A_{new}^{-1}$;

5. Comparison of the computational costs of the DI, ISM, and WMI methods

We recall that the size of the moment matrix $M_n(\mu_X)$ is $s \times s$, and that we want to add $N$ new data points into $M_n(\mu_X)$. The computational cost of each method is summarized in Table 1. The goal of this section is to find the number of new data points $N$ such that the DI cost given by equation (18) is lower than that of the ISM method given by equation (22) or that of the WMI method given by equation (23).
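Algorithm 3 translates almost line by line into NumPy. In the sketch below (our naming), the small $N \times N$ system is solved directly instead of explicitly forming $C^{-1}$, which is a standard variant of the same identity.

```python
import numpy as np

def wmi_update(A_inv, V):
    """Woodbury (15): inverse of A + V V^T from A^{-1} (A spd, V is s x N)."""
    B = V.T @ A_inv                        # lines 1-2: B = V^T A^{-1}
    C = np.eye(V.shape[1]) + B @ V         # line 3: N x N capacitance matrix
    D = np.linalg.solve(C, B)              # lines 4-5: D = C^{-1} B
    return A_inv - B.T @ D                 # line 6: A spd => A^{-1} V = B^T
```

Only one $N \times N$ factorization is needed, which is why the method shines when $N$ is small relative to $s$.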
5.1. Direct inversion method vs. Iterative Sherman-Morrison method

The condition on $N$ under which the DI method is preferable to the ISM method is obtained by comparing their computational costs given by equations (18) and (22), respectively. The condition is obtained by isolating $N$ in the following inequality:

$\frac{5}{6} s^3 + 2 s^2 N < (4 s^2 + 2 s) N \iff \frac{5}{6} s^2 < (2 s + 2) N \iff N > \frac{5 s^2}{12 (s + 1)}$ (24)

Table 1: Computational cost of the three compared methods (in flops)

  DI (Algorithm 1):   $\frac{5}{6} s^3 + 2 s^2 N$  (18)
  ISM (Algorithm 2):  $(4 s^2 + 2 s) N$  (22)
  WMI (Algorithm 3):  $4 s^2 N + (4 N^2 - 2 N) s + \frac{5}{6} N^3$  (23)

Figure 1: Theoretical thresholds above which the DI method should be chosen: (a) DI (subsection 4.1) vs. ISM (subsection 4.2); (b) DI (subsection 4.1) vs. WMI (subsection 4.3).

5.2. Direct inversion method vs. Woodbury matrix identity method

We next compare the DI method with the WMI method:

$\frac{5}{6} s^3 + 2 s^2 N < 4 s^2 N + (4 N^2 - 2 N) s + \frac{5}{6} N^3 \iff \frac{5}{6} s^3 < 2 s^2 N + (4 N^2 - 2 N) s + \frac{5}{6} N^3 \iff \frac{5}{6} N^3 + 4 s N^2 + 2 (s^2 - s) N - \frac{5}{6} s^3 > 0$ (25)

Here we have a cubic inequality in $N$ that we need to solve with $s$ fixed. An empirical approximation is given in subsection 6.1.

5.3. Woodbury matrix identity method vs. Iterative Sherman-Morrison method

For $s \geq 1$ and $N \geq 1$, the computational cost of the ISM method (equation (22)) is generally lower than that of the WMI method (equation (23)). However, these figures do not account for memory efficiency (read/write operations), nor for the optimization of matrix computations compared to vector calculations and successive iterations, which introduce significant overheads.
The measurements carried out in subsection 6.2 show that, in practice, the WMI method quickly becomes more efficient than the ISM method.

6. Optimal method selection based on the number $N$ of new data points

6.1. Theoretical guide for method selection based on the new data volume $N$

The results shown in Figure 1 are obtained by computing equations (24) and (25) for all $s \in [\![1, 4200]\!]$ and $N \in [\![1, 10000]\!]$. Figure 1a shows that one must choose the DI method over the ISM method when $N > s / 2.4002 \approx 5 s / 12$. Figure 1b shows that one must choose the DI method over the WMI method when $N > s / 3.7506$. We name this relation the empirical DI over WMI threshold:

$N > \frac{s}{3.7506}$ (26)

Table 2 summarizes the different theoretical thresholds identified for choosing the DI method over the ISM or WMI methods.

6.2. Experimental results

The following results were obtained on a laptop equipped with an Intel Core Ultra 5 135U CPU and 32 GB of RAM, running Windows 11. All results can be reproduced using the Python code and data available on GitHub (see footnote 3).

We generate a random dataset of $m = 2000$ samples in $\mathbb{R}^{1287}$. This simulates 2000 vectors $v_n(\mathbf{x})$ with $\mathbf{x} \in \mathbb{R}^{p=8}$ and polynomials of degree at most $n = 5$: $s = s(n) = \binom{n+p}{p} = \binom{8+5}{5} = 1287$. For every $N$ in $\{1, 2, 3, 4, 5, 10, 20, 30, 40, 50, 100, 200, 300, 400, 500, 750, 1000\}$, we compute the moment matrix $M_n(\mu_X)$ and its inverse $M_n(\mu_X)^{-1}$ with the first $m' = m - N$ samples. Then we update the inverse of the moment matrix with the last $N$ samples using the three different methods, and we report the average execution times over the $T = 200$ simulations in Table 3. Note that for the ISM method, we reduced $T$ to 50 when $N > 50$.
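The theoretical thresholds above are easy to recompute numerically. The sketch below (our naming) evaluates the closed-form bound (24) and extracts the unique positive root of the cubic (25) for a fixed matrix size $s$:

```python
import numpy as np

def di_over_ism_threshold(s):
    """Closed-form threshold (24): DI beats ISM when N > 5 s^2 / (12 (s + 1))."""
    return 5 * s**2 / (12 * (s + 1))

def di_over_wmi_threshold(s):
    """Unique positive root in N of the cubic (25), for a fixed matrix size s:
    5/6 N^3 + 4 s N^2 + 2 (s^2 - s) N - 5/6 s^3 = 0."""
    roots = np.roots([5 / 6, 4 * s, 2 * (s**2 - s), -5 / 6 * s**3])
    real = roots[np.abs(roots.imag) < 1e-8].real
    return float(real[real > 0].min())
```

For $s = 1287$ this reproduces the values discussed in subsection 6.3: roughly 535.8 for DI over ISM and roughly 343.2 for DI over WMI, the latter being well approximated by $s / 3.7506$.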
We also computed, for each method, the error of the inverse moment matrix:

$E_K = \left\| I - A_{new} \times A_{new,K}^{-1} \right\|_F,$

where $A_{new,K}^{-1}$ corresponds to the updated inverse moment matrix $A_{new}^{-1}$ found by the method $K \in \{\text{DI}, \text{ISM}, \text{WMI}\}$ for a rank-$N$ update. $\| \cdot \|_F$ is the Frobenius norm [3], given by:

$\| B \|_F = \sqrt{ \sum_{i,j} b_{i,j}^2 }$ (27)

(3) Link to the GitHub repository: https://github.com/fgrivet/cost-trade-offs-matrix-inversion-update

Table 2: Theoretical thresholds for choosing the DI method over the ISM or WMI methods

  DI over ISM:            $N > \frac{5 s^2}{12 (s + 1)}$  (24)
  DI over WMI:            $\frac{5}{6} N^3 + 4 s N^2 + 2 (s^2 - s) N - \frac{5}{6} s^3 > 0$  (25, to solve with $s$ fixed)
  Empirical DI over WMI:  $N > \frac{s}{3.7506}$  (26)

Table 3: Average execution time (in seconds) for a rank-$N$ update of the moment matrix $M_n(\mu_X)$ of size $1287 \times 1287$ (in the original, the smallest and second-best times per row are highlighted)

  N     DI          ISM          WMI
  1     0.08594638  0.007743145  0.0077865
  2     0.08421694  0.01141714   0.007551792
  3     0.08086838  0.01424055   0.005989846
  4     0.07941904  0.02148158   0.005903258
  5     0.07437832  0.01883538   0.004278867
  10    0.08314858  0.06215603   0.008458068
  20    0.08020028  0.09959804   0.008605508
  30    0.08923532  0.1900465    0.01317246
  40    0.08028041  0.1966603    0.01141213
  50    0.08450316  0.2969778    0.01532174
  100   0.08219219  0.4851311    0.02341882
  200   0.08573109  0.9612609    0.04289328
  300   0.09368211  1.453023     0.06708244
  400   0.09056586  1.795042     0.09021928
  500   0.09777304  2.207409     0.1229474
  750   0.1082857   3.867214     0.2134477
  1000  0.118939    3.829742     0.3350951

Table 4: Errors for a rank-$N$ update of the moment matrix $M_n(\mu_X)$ of size $1287 \times 1287$, with condition number $\kappa(M_n(\mu_X))$ of the matrix built from the first $m' = m - N$ samples (in the original, the smallest and second-best errors per row are highlighted)

  N     DI            ISM           WMI           $\kappa(M_n(\mu_X))$
  1     1.580182e-13  1.582999e-13  1.583327e-13  83.9304
  2     1.593879e-13  2.902351      1.605027e-13  84.0449
  3     1.594071e-13  4.358941      1.595547e-13  84.2276
  4     1.593055e-13  5.333518      1.609318e-13  85.1681
  5     1.59972e-13   6.150092      1.604706e-13  85.412
  10    1.598138e-13  9.544279      1.610103e-13  86.4186
  20    1.606954e-13  14.10532      1.628296e-13  88.7121
  30    1.607787e-13  17.76659      1.641526e-13  91.1043
  40    1.607618e-13  20.82128      1.662739e-13  93.5837
  50    1.61661e-13   23.75783      1.677276e-13  96.0594
  100   1.618503e-13  38.34813      1.773381e-13  104.694
  200   1.625134e-13  71.4727       2.144143e-13  136.435
  300   1.656789e-13  120.5727      2.788295e-13  219.31
  400   1.654148e-13  210.0006      4.830752e-13  325.742
  500   1.655513e-13  417.0248      1.316689e-12  604.421
  750   1.681712e-13  5.943877e+10  21941.93      3.44460e+18
  1000  1.694056e-13  2.559507e+11  503624.6      4.4279e+18

The errors for each method and each value of $N$ are reported in Table 4. We observe that the errors associated with the WMI and ISM methods increase rapidly when $N > 500$, indicating numerical instability. This behavior is largely explained by the poor conditioning ($\kappa \approx 10^{18}$) of the empirical moment matrix constructed from limited data (1250 and 1000 samples, respectively).

According to Vershynin [12] and Wainwright [14], increasing the number of samples improves conditioning and numerical stability. So we repeated the experiment with a larger sample size: $m = 15000$. The errors for all methods stabilize around $10^{-13}$ across all tested values of $N$ (up to $N = 1500$) and matrix sizes up to $s = 2000$, confirming that the large errors observed in the small-sample regime are primarily due to ill conditioning of the moment matrix.

For the ISM method, the error still increases with $N$ for both $m = 2000$ and $m = 15000$, reflecting the accumulation of floating-point rounding errors inherent to successive rank-one updates [4]. However, with sufficiently many samples, the error growth remains bounded and no longer exhibits the rapid instability observed for small sample sizes.
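The error metric (27) is straightforward to compute. The sketch below (our naming) evaluates it for a Woodbury-style update of a well-conditioned spd matrix, where the residual stays near machine precision:

```python
import numpy as np

def update_error(A_new, A_new_inv):
    """Residual E = || I - A_new @ A_new_inv ||_F of equation (27)."""
    s = A_new.shape[0]
    return np.linalg.norm(np.eye(s) - A_new @ A_new_inv, ord="fro")

rng = np.random.default_rng(4)
s, N = 50, 5
X = rng.normal(size=(400, s))
A = X.T @ X                                   # well-conditioned spd matrix
V = rng.normal(size=(s, N))
B = V.T @ np.linalg.inv(A)                    # Woodbury update of A^{-1}
A_new_inv = np.linalg.inv(A) - B.T @ np.linalg.solve(np.eye(N) + B @ V, B)
err = update_error(A + V @ V.T, A_new_inv)    # should be near machine epsilon
```

With an ill-conditioned $A$, the same residual blows up, which is exactly the effect documented in Table 4.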
The numerical stability of the Sherman-Morrison and Woodbury updates has been analyzed previously in [16].

6.3. Comparison with theoretical results

The goal of this subsection is to validate the theoretical thresholds identified in subsection 6.1, Table 2, and the values measured in subsection 6.2, Table 3. In Table 5, we present the theoretical and experimental thresholds for selecting the DI method over the ISM or WMI methods, with the matrix size s fixed at 1287.

First, we observe that the empirical DI-over-WMI approximation, given by equation (26) as 343.145, is very close to the theoretical threshold of equation (25), which is 343.250. Second, as predicted in subsection 5.3, the WMI method proves to be more efficient than the ISM method even for small values of k; specifically, starting from k = 2.

When comparing the thresholds for choosing the DI method over the ISM or WMI methods, we observe that the threshold (24) for the ISM method is significantly overestimated: the theoretical prediction is 535.834, whereas experimentally it falls between 10 and 20. Conversely, the threshold (25) for the WMI method is substantially underestimated: the theoretical prediction is 343.250, but experimental results place it between 400 and 500, which is approximately s/3. These discrepancies arise because our analysis considered only the number of operations (flops) and did not account for the efficiency of memory access (read/write operations) or the intrinsic Python optimizations for matrix calculations compared to vector calculations or iterative loops.

Table 5
Thresholds for choosing the DI method over the ISM or WMI methods with s = 1287 (dimension 8, degree 5)

                 DI over ISM                               DI over WMI          Empirical DI over WMI
  Theoretical    (24) = 5×1287² / (12(1287+1)) ≈ 535.834   (25) ≈ 343.250       (26) = 1287/3.7506 ≈ 343.145
  Empirical      Between 10 and 20                         Between 400 and 500

[Figure 2: Fastest method for each (s, k) pair based on execution time.]

6.4. Experimental guide for method selection based on new data volume k

We repeated the experiment conducted in subsection 6.2. This time, we used n = 2000 samples, matrix sizes s ranging in {10, 20, 50, 100, 250, 500, 750, 1000}, and 200 simulations per configuration. However, for the ISM method, when s > 500 and k > 50, we reduced the number of simulations to 50. Figure 2 illustrates the fastest method for performing the rank-k update for each pair (s, k). A green point signifies that ISM (Algorithm 2) is the fastest method, a blue point indicates that WMI (Algorithm 3) is the fastest, and a red point denotes that DI (Algorithm 1) is the fastest.

First, we notice that for a small matrix size (s = 10), the DI method outperforms the other two. Second, as expected, ISM is faster for a rank-1 update. The WMI method is then faster for a rank-k update when k < s/3, and finally, for a large update, the DI method is superior.

Our recommendation, for a Python CPU implementation, is hence as follows:
- Use the ISM method (Algorithm 2) for a rank-1 update.
- Use the WMI method (Algorithm 3) for a rank-k update when k ≤ s/3, where s is the matrix size.
- Use the DI method (Algorithm 1) if k > s/3.

7. Conclusion

This paper compares three methods for updating the inverse of a matrix after a rank-k update, assuming the original inverse is known. Although motivated by Christoffel-function-based outlier detection, the analysis applies more broadly to problems involving rank-k updates of SPD invertible matrices. Theoretical thresholds for method selection are derived and summarized in Table 2.
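The three update strategies compared in this note can be sketched compactly in NumPy. This is a minimal sketch under our own naming, specialized to the SPD update A + UUᵀ (the paper's Algorithms 1-3 operate on the moment matrix and may differ in detail); the selection function encodes the recommendation above:

```python
import numpy as np

def di_update(A, U):
    """Direct Inversion (cf. Algorithm 1): invert the updated matrix from scratch."""
    return np.linalg.inv(A + U @ U.T)

def ism_update(A_inv, U):
    """Iterative Sherman-Morrison (cf. Algorithm 2): k successive rank-1 updates."""
    B_inv = A_inv.copy()
    for i in range(U.shape[1]):
        u = U[:, i:i + 1]                 # i-th update vector, shape (s, 1)
        Bu = B_inv @ u
        B_inv -= (Bu @ Bu.T) / (1.0 + float(u.T @ Bu))
    return B_inv

def wmi_update(A_inv, U):
    """Woodbury Matrix Identity (cf. Algorithm 3): one k x k inversion overall."""
    k = U.shape[1]
    AU = A_inv @ U                        # shape (s, k)
    core = np.linalg.inv(np.eye(k) + U.T @ AU)
    return A_inv - AU @ core @ AU.T

def choose_method(s, k):
    """Empirical selection rule for a Python/CPU implementation."""
    if k == 1:
        return "ISM"
    return "WMI" if k <= s / 3 else "DI"

# Sanity check on a well-conditioned SPD matrix.
rng = np.random.default_rng(0)
s, k = 60, 5
B = rng.standard_normal((s, s))
A = B @ B.T + s * np.eye(s)
A_inv = np.linalg.inv(A)
U = rng.standard_normal((s, k))
reference = di_update(A, U)
```

The sketch makes the cost asymmetry visible: ISM loops k times over O(s²) work, WMI pays one k × k inversion plus a few s × k products, and DI always pays the full O(s³) inversion regardless of k.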
They are validated empirically with comprehensive experiments that account for implementation effects. This note includes practical guidance for Python CPU implementations. With s denoting the matrix dimension and k the update rank, it recommends the following rule:
- For k = 1, use ISM (Algorithm 2).
- For k ≤ s/3, use WMI (Algorithm 3).
- For k > s/3, use DI (Algorithm 1).

Although the theoretical computational costs derived in this work are expressed in terms of the matrix dimension s and the update rank k, and are therefore independent of any specific programming language or hardware platform, the practical recommendations and proposed selection rule are empirically validated only for Python implementations executed on CPU. In particular, the quantitative thresholds identified in this study depend on memory-access patterns characteristic of standard Python numerical linear algebra libraries, and should not be assumed to transfer directly to other languages, libraries, or hardware architectures.

Several directions for future work emerge from this study. First, it would be of interest to extend the empirical comparison to other programming languages and computational platforms, including compiled implementations such as C++ and GPU-accelerated environments, where differences in parallelism and memory hierarchies may significantly affect performance. Second, beyond inverse update strategies, a fundamental limitation of Christoffel-function-based anomaly detection lies in the size s of the moment matrix, which grows rapidly with the ambient dimension.
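To make this growth concrete: assuming the moment matrix is indexed by the full monomial basis of degree at most d in dimension p, its size is s = C(p + d, d), which matches the size 1287 used in the experiments for dimension 8 and degree 5. A minimal sketch (the helper name is ours):

```python
from math import comb

def moment_matrix_size(p, d):
    """Number of monomials of degree <= d in p variables: s = C(p + d, d).
    Assumption: the moment matrix is indexed by the full monomial basis."""
    return comb(p + d, d)

# s = C(13, 5) = 1287 for p = 8, d = 5, the size used in Tables 3-5.
sizes = {p: moment_matrix_size(p, 5) for p in (2, 8, 20)}
```

At degree 5, going from p = 8 (s = 1287) to p = 20 already yields s = 53130, and every update method pays at least O(s²) per update, which motivates reducing the effective dimension.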
Future work could therefore focus on reducing the effective dimension of this matrix in order to lower computational cost and enable CF-based scoring to scale to high-dimensional data streams.

CRediT authorship contribution statement

Florian Grivet: Writing - original draft, Writing - review and editing, Conceptualization, Formal analysis, Methodology, Software, Data curation, Investigation, Validation. Louise Travé-Massuyès: Writing - review and editing, Supervision, Funding acquisition, Resources, Conceptualization, Validation.

Data availability

All code and datasets are available on GitHub: github.com/fgrivet/cost-trade-offs-matrix-inversion-update. All the tests conducted in this paper can be reproduced; the README file explains how to use the code.

Declaration of generative AI and AI-assisted technologies in the manuscript preparation process

During the preparation of this work, the authors used Mistral to check the text grammar. After using this tool/service, the authors reviewed and edited the content as needed and hence take full responsibility for the content of the submitted article.

Declaration of competing interest

The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Louise Travé-Massuyès reports financial support was provided by AI Interdisciplinary Institute ANITI.

Acknowledgments

This work has benefited from the AI Interdisciplinary Institute ANITI, funded by the France 2030 program under Grant agreements n° ANR-19-P3IA-0004 and n° ANR-23-IACL-0002. The authors would like to thank Jean-Bernard Lasserre (LAAS-CNRS) and Didier Henrion (LAAS-CNRS) for their insightful discussions and contributions to the Christoffel function method used in this paper.

A.
Computational costs of generic elementary products

In this appendix, we detail the computational costs of the intermediate steps employed in section 4, expressed in terms of flops.

Notations – We adopt the following generic notation: scalars are denoted by lowercase letters (e.g., c), vectors by boldface lowercase letters (e.g., a), and matrices by uppercase letters (e.g., A). The dimensions of vectors and matrices are represented by n, m and p. The computational costs are denoted by C, where the superscript indicates the dimension of the left element in the multiplication and the subscript indicates the dimension of the right element.

A.1. Computational cost of multiplying a row-vector by a column-vector

[Figure 3: Representation of the product of a row-vector by a column-vector.]

Let a ∈ ℝ^{1×n}, b ∈ ℝ^{n×1} and c ∈ ℝ such that c = a × b (Figure 3). Then c = Σ_{i=1}^{n} a_i × b_i, i.e. n multiplications and n − 1 additions, which provides the following cost:

C^{1×n}_{n×1} = 2n − 1.   (28)

A.2. Computational cost of multiplying a column-vector by a row-vector

[Figure 4: Representation of the product of a column-vector by a row-vector.]

Let a ∈ ℝ^{n×1}, b ∈ ℝ^{1×m} and W ∈ ℝ^{n×m} such that W = a × b (Figure 4). For every element of W, we have w_{i,j} = a_i × b_j, i.e. 1 multiplication for each of the n × m elements, which provides the following cost:

C^{n×1}_{1×m} = n × m.   (29)

When considering two vectors of equal length s, the computational cost of their product is C^{s×1}_{1×s} = s².

A.3.
Computational cost of multiplying a row-vector by a matrix

[Figure 5: Representation of the product of a row-vector by a matrix.]

Let a ∈ ℝ^{1×n}, B ∈ ℝ^{n×m} and w ∈ ℝ^{1×m} such that w = a × B (Figure 5). For every element of w, we have w_j = Σ_{i=1}^{n} a_i × b_{i,j}, i.e. n multiplications and n − 1 additions for each of the m elements, which equals m × (2n − 1) and provides the following cost:

C^{1×n}_{n×m} = 2nm − m.   (30)

When considering a square matrix of size s × s, the computational cost of this product is C^{1×s}_{s×s} = 2s² − s.

A.4. Computational cost of multiplying a matrix by a column-vector

[Figure 6: Representation of the product of a matrix by a column-vector.]

Let A ∈ ℝ^{n×m}, b ∈ ℝ^{m×1} and w ∈ ℝ^{n×1} such that w = A × b (Figure 6). For every element of w, we have w_i = Σ_{j=1}^{m} a_{i,j} × b_j, i.e. m multiplications and m − 1 additions for each of the n elements, which equals n × (2m − 1) and provides the following cost:

C^{n×m}_{m×1} = 2nm − n.   (31)

When considering a square matrix of size s × s, the computational cost of this product is C^{s×s}_{s×1} = 2s² − s.

A.5. Computational cost of multiplying a matrix by a matrix

[Figure 7: Representation of the product of a matrix by a matrix.]

Let A ∈ ℝ^{n×m}, B ∈ ℝ^{m×p} and W ∈ ℝ^{n×p} such that W = A × B (Figure 7). For every element of W, we have w_{i,j} = Σ_{l=1}^{m} a_{i,l} × b_{l,j}, i.e. m multiplications and m − 1 additions for each of the n × p elements, which equals n × p × (2m − 1) and provides the following cost:

C^{n×m}_{m×p} = 2nmp − np.   (32)

When considering square matrices of size s × s, the computational cost of this product is C^{s×s}_{s×s} = 2s³ − s².

References

[1] Bartlett, M.S., 1951. An inverse matrix adjustment arising in discriminant analysis. The Annals of Mathematical Statistics 22, 107–111. doi:10.1214/aoms/1177729698.
[2] Ducharlet, K., Travé-Massuyès, L., Lasserre, J.B., Le Lann, M.V., Miloudi, Y., 2025. Leveraging the Christoffel function for outlier detection in data streams. International Journal of Data Science and Analytics 20, 2021–2037. doi:10.1007/s41060-024-00581-2.
[3] Golub, G.H., Van Loan, C.F., 2013. Matrix Computations - 4th Edition. Johns Hopkins University Press, Philadelphia, PA. doi:10.1137/1.9781421407944.
[4] Higham, N.J., 2002. Accuracy and Stability of Numerical Algorithms. SIAM. doi:10.1137/1.9780898718027.
[5] Krishnamoorthy, A., Menon, D., 2013. Matrix inversion using Cholesky decomposition, in: 2013 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), pp. 70–72. URL: https://ieeexplore.ieee.org/document/6710599.
[6] Lasserre, J.B., Pauwels, E., 2016. Sorting out typicality with the inverse moment matrix SOS polynomial, in: Proceedings of the 30th International Conference on Neural Information Processing Systems, Curran Associates Inc., Red Hook, NY, USA, pp. 190–198. URL: https://dl.acm.org/doi/10.5555/3157096.3157118.
[7] Lasserre, J.B., Pauwels, E., 2019. The empirical Christoffel function with applications in data analysis. Advances in Computational Mathematics 45, 1439–1468. doi:10.1007/s10444-019-09673-1.
[8] Lasserre, J.B., Pauwels, E., Putinar, M., 2022. The Christoffel–Darboux Kernel for Data Analysis. 1st ed., Cambridge University Press. doi:10.1017/9781108937078.
[9] Lu, J., 2024. Numerical matrix decomposition.
[10] Lu, T., Wang, L., Zhao, X., 2023. Review of anomaly detection algorithms for data streams. Applied Sciences 13. doi:10.3390/app13106353.
[11] Sherman, J., Morrison, W.J., 1950. Adjustment of an inverse matrix corresponding to a change in one element of a given matrix. The Annals of Mathematical Statistics 21, 124–127. doi:10.1214/aoms/1177729893.
[12] Vershynin, R., 2018. High-Dimensional Probability: An Introduction with Applications in Data Science. volume 47. Cambridge University Press.
[13] Vu, M.T., Bachoc, F., Pauwels, E., 2022. Rate of convergence for geometric inference based on the empirical Christoffel function. ESAIM: PS 26, 171–207. doi:10.1051/ps/2022003.
[14] Wainwright, M.J., 2019. High-Dimensional Statistics: A Non-Asymptotic Viewpoint. volume 48. Cambridge University Press. doi:10.1017/9781108627771.
[15] Woodbury, M., 1950. Inverting Modified Matrices. Memorandum Report / Statistical Research Group, Princeton, Department of Statistics, Princeton University.
[16] Yip, E.L., 1986. A note on the stability of solving a rank-p modification of a linear system by the Sherman–Morrison–Woodbury formula. SIAM Journal on Scientific and Statistical Computing 7, 507–513. doi:10.1137/0907034.
[17] Zhou, P., 2025. A survey of streaming data anomaly detection in network security. PeerJ Comput. Sci. 11, e3066. doi:10.7717/peerj-cs.3066.
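The elementary flop counts derived in Appendix A can be double-checked programmatically by counting the operations a naive inner-product implementation actually performs. The helpers below are our illustrative sketch, not code from the paper's repository:

```python
def flops_row_times_col(n):
    """Cost of (1 x n) @ (n x 1): n multiplications + (n - 1) additions, eq. (28)."""
    return 2 * n - 1

def flops_matrix_times_matrix(n, m, p):
    """Cost of (n x m) @ (m x p): n * p inner products of length m, eq. (32)."""
    return n * p * flops_row_times_col(m)   # = 2*n*m*p - n*p

def counted_dot(a, b):
    """Naive inner product that counts the flops it performs."""
    total, flops = a[0] * b[0], 1           # first product; no addition yet
    for x, y in zip(a[1:], b[1:]):
        total += x * y                      # one multiplication and one addition
        flops += 2
    return total, flops
```

For n = 4 the counted implementation performs 7 flops, matching 2n − 1; and for square matrices of size s × s the matrix-matrix formula reduces to 2s³ − s², as stated in A.5.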
