Two Differentially Private Rating Collection Mechanisms for Recommender Systems

We design two mechanisms for a recommender system to collect user ratings. One is a modified Laplace mechanism, and the other is a randomized response mechanism. We prove that both are differentially private and preserve the utility of the data.

Authors: Wenjie Zheng

November 8, 2021

1 Introduction

Recommender systems (RS) [1] are systems that seek to recommend to users what they are likely to be interested in. Unlike search engines, the users do not need to type any keyword: the RS learns their interests automatically. For instance, if a user has just bought a digital camera, the RS may recommend some SD memory cards; if a user watches a lot of action movies, the RS may suggest some other action movies. This is the typical behavior we observe universally today on Netflix (movies), YouTube (videos), Google Play (apps), Facebook (friends), Amazon (goods), and other platforms.

One may wonder how this works. Let us take Netflix as an example. Netflix has a mechanism that allows every user to rate the movies they have watched. Based on these ratings, Netflix builds a profile for each user, and there are quite a few methods to predict the user's preference for the movies they have not yet seen. Readers can learn more in Section 3; this is not the main topic of this article. The issue addressed here is whether user privacy is compromised by the rating collection mechanism, and what we may do to prevent it.

In 2006, Netflix published on the Internet 100,480,507 ratings that 480,189 users gave to 17,770 movies, in order to hold the Netflix Prize competition [2]. The data were anonymized. However, in 2007, two researchers from the University of Texas de-anonymized some of the Netflix data by matching the data set against movie ratings on the Internet Movie Database [3].
This aroused great privacy concern. In 2009, four Netflix users filed a lawsuit against Netflix. We see that the concern about privacy leaks is real, and that anonymization alone is not sufficient to prevent them.

Let us take a closer look. There were actually two privacy leaks in this story. First, the users leaked their ratings to the service provider, Netflix. Then, Netflix leaked the ratings to the public. Users made a big fuss about the second leak, but they overlooked the fact that it was they themselves who leaked the ratings to Netflix in the first place. Usually, legitimate companies ask users to sign a user agreement, which authorizes the company to collect user data and to use them for certain purposes. However, almost no user ever reads it; in any case, users who do not agree cannot use the service.

Hence, the goal of this article is to minimize the privacy leak while still guaranteeing the functionality of the service provider. We achieve this goal by building differential privacy (DP) into the rating collection mechanism. The concept of DP is explained in detail in Section 2. The main idea is that the user ratings are transformed by the rating collection mechanism so that, from the output (transformed ratings), one cannot know for sure what the input (original ratings) was. Of course, this kind of transform should satisfy certain properties. After this transform, the service provider can do whatever they want with the ratings without worrying about privacy leaks: they can analyze the data themselves, or subcontract the work to a third party by granting them data access. It would even be possible for Netflix to hold a second competition.

One trivial transform maps every rating to zero or to a pure random number. This absolutely prevents any privacy leak, but it erases all information contained in the data as well.
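To see numerically why this trivial transform is useless, here is a throwaway sketch in plain Python (the synthetic ratings and all names are ours, invented for illustration, not data from this paper): replacing every rating by a pure random number yields an output essentially uncorrelated with the truth.

```python
import random

random.seed(0)

# Synthetic "true" ratings, normalized into [-1, 1] as in Section 2.
true_ratings = [random.uniform(-1, 1) for _ in range(10_000)]

# The trivial transform: ignore the input entirely, output pure noise.
noised = [random.uniform(-1, 1) for _ in true_ratings]

def correlation(u, v):
    """Pearson correlation of two equal-length lists."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n
    su = (sum((a - mu) ** 2 for a in u) / n) ** 0.5
    sv = (sum((b - mv) ** 2 for b in v) / n) ** 0.5
    return cov / (su * sv)

# Perfect privacy, but the released data carry no information about the input.
print(f"correlation with the truth: {correlation(true_ratings, noised):.3f}")  # near zero
```

The mechanisms of Section 2 sit between the two extremes: they inject just enough randomness for differential privacy while keeping the output statistically tied to the input.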
Therefore, when we build DP into the mechanism, we should be careful to preserve as much information in the data as possible. We design two mechanisms: one is the *modified Laplace mechanism*, and the other is the *randomized response mechanism* (Section 2). We will show that they preserve the utility of the ratings (Section 3).

Related work [4] also tries to bring DP to RS, but their method differs from ours. Let $X_i$ be the original rating set of the $i$-th user, $S$ some aggregation statistic of the ratings, $A$ some data analysis algorithm, and $f$ some transform that guarantees DP. Their method can be summarized as
$$A\left(f\left(S\left(\bigotimes_{i=1}^{n} X_i\right)\right)\right),$$
while our method, with a slight abuse of notation, can be summarized as
$$A\left(S\left(\bigotimes_{i=1}^{n} f(X_i)\right)\right).$$
Note that we changed the position of the transform $f$. This modification brings a significant advantage. In their method, $f$ must be adapted to each statistic $S$, and they can only use algorithms $A$ that rely on $S$. In our method, we can generalize to $A\left(\bigotimes_{i=1}^{n} f(X_i)\right)$, which means that we can use more types of algorithms. Furthermore, as long as $f$ is "conjugate" (i.e. $f$ does not change the space in which $X_i$ lives), all previously successful algorithms can be seamlessly "transplanted". We will illustrate in Section 3 that this transplantation is also seamless in terms of theoretical guarantees.

On second thought, their method is neither privacy preserving nor meaningful. According to their method, at the moment the users transfer their ratings to the service provider so that it can compute the statistic $S$, user privacy has already leaked to the service provider. The service provider then sends a differentially private version of the recommendation back to the user. But why would the user bother to protect his privacy against himself?

2 Mechanisms

In this section, we will first introduce the concept of *differential privacy*.
Then we will define the modified Laplace mechanism and the randomized response mechanism. Throughout this section, we consider the rating vector of a single user, $x = (x_1, x_2, \ldots, x_n)$, where $n$ is the number of items. Note that the components may have missing values.

2.1 Differentially private mechanism

DP is not some entity lurking in the data; rather, it describes a certain type of data releasing mechanism. The original idea was introduced in [5]; since then, dozens of formulations have come out. We use the simplest formulation here.

Definition 1. Let $\epsilon$ be a positive value. A random mapping $M : \mathcal{R} \to \mathcal{S}$ is called an $\epsilon$-differentially private mechanism if
$$\Pr(M(y) \in S) \le \exp(\epsilon) \Pr(M(z) \in S),$$
for any $y, z \in \mathcal{R}$ and any $S \subset \mathcal{S}$.

The idea is that the distributions produced by $y$ and $z$ are absolutely continuous with respect to each other, with multiplier $\exp(\epsilon)$. When $\epsilon$ is close to $0$, the two distributions look similar, and it is quite difficult to infer whether the input was $y$ or $z$. For rating vectors, the condition reads
$$\Pr\left(M\left(x^{(1)}\right) \in S\right) \le \exp(\epsilon) \Pr\left(M\left(x^{(2)}\right) \in S\right),$$
where $x^{(1)}$ and $x^{(2)}$ are two different rating vectors, which may represent two different users. Hence, the outputs of all users are mixed up and thus indistinguishable.

2.2 Modified Laplace mechanism

In this subsection, we introduce the modified Laplace mechanism. The name comes from the Laplace mechanism [5], which works only on continuous metrizable spaces. In order to handle missing values, we modify it a bit. For convenience, we suppose that the data are normalized into the interval $[-1, 1]$, and we let the question mark $?$ denote a missing value.

Definition 2. For any $\epsilon \ge 0$, let $\xi \sim \mathrm{Laplace}(0, 2/\epsilon)$ be a random variable, and let $\zeta \sim \mathrm{Bernoulli}\left(\frac{\exp(\epsilon/2)}{\exp(\epsilon/2)+1}\right)$ be a random variable independent of $\xi$. A modified Laplace mechanism $M(x) = (M(x_1), M(x_2), \ldots, M(x_n))$ is defined by
$$M(x_i) = \begin{cases} \zeta \cdot (x_i + \xi) + (1 - \zeta) \cdot {?} & : x_i \in [-1, 1] \\ \zeta \cdot {?} + (1 - \zeta) \cdot \xi & : x_i = {?} \end{cases} \tag{1}$$
where, by convention, $1 \cdot {?} = {?}$, $0 \cdot {?} = 0$ and $? + 0 = {?}$.

The idea is that, besides adding Laplace noise, we randomly remove and create some ratings as well. This mechanism can be proven to be differentially private.

Theorem 1. *The modified Laplace mechanism is $n\epsilon$-differentially private.*

2.3 Randomized response mechanism

In this subsection, we present the randomized response mechanism, which works on discrete data. Note that [6, 7] also use this term, with definitions that even differ between them. Our use of the term is closer to [7], and we adapt it to rating data.

Recommender systems rarely allow users to give continuous ratings. Instead, they often ask the user to rate an item by one to five stars. These ratings are certainly ordinal, but we simply ignore the order of the set and, along with the missing rating, treat them as cardinal numbers. In the following definition, the number $0$ can be seen as the missing rating, and the numbers $1, 2, \ldots, d$ can be seen as the numbers of stars.

Definition 3. Let $W = \{0, 1, 2, \ldots, d\}$ be a set of finite cardinality. For any $\epsilon \ge 0$ and any $i \in W$, let $\xi_i$ be an (independent) random variable with support on $W$ whose probability mass function is defined by
$$p_i(j) = \frac{\exp(\epsilon)^{I(j=i)}}{\exp(\epsilon) + d},$$
for any $j \in W$. A randomized response mechanism $M(x) = (M(x_1), M(x_2), \ldots, M(x_n))$ is defined by $M(x_k) = \xi_{x_k}$.

The idea is that each transformed rating (including a missing rating) will most likely remain the same as the original rating, but there is still some probability that it is transformed into any other rating (with equal probability). One can prove that this mechanism is differentially private.

Theorem 2. *The randomized response mechanism is $n\epsilon$-differentially private.*
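To make the two definitions concrete, here is a minimal sketch of both mechanisms in plain Python (an illustration of Definitions 2 and 3, not the author's reference code; the function names and the `MISSING` sentinel playing the role of the question mark are ours). The $\mathrm{Laplace}(0, 2/\epsilon)$ sample is drawn as the difference of two exponential variables.

```python
import math
import random

MISSING = None  # stands for the question mark "?" in Definitions 2 and 3

def modified_laplace(x, eps):
    """Modified Laplace mechanism (Definition 2) on one rating vector.

    Each component is either a rating in [-1, 1] or MISSING.
    """
    keep_prob = math.exp(eps / 2) / (math.exp(eps / 2) + 1)  # Bernoulli parameter
    out = []
    for xi in x:
        zeta = random.random() < keep_prob
        # Difference of two Exp(eps/2) variables is Laplace(0, 2/eps).
        noise = random.expovariate(eps / 2) - random.expovariate(eps / 2)
        if xi is not MISSING:
            # Present rating: keep it (plus noise) w.p. keep_prob, else drop it.
            out.append(xi + noise if zeta else MISSING)
        else:
            # Missing rating: stay missing w.p. keep_prob, else emit pure noise.
            out.append(MISSING if zeta else noise)
    return out

def randomized_response(x, eps, d=5):
    """Randomized response mechanism (Definition 3) on ratings in {0, ..., d}.

    0 encodes a missing rating. The true value is kept with probability
    exp(eps) / (exp(eps) + d); otherwise one of the d other values is
    output uniformly at random, matching the pmf p_i(j).
    """
    p_keep = math.exp(eps) / (math.exp(eps) + d)
    out = []
    for xi in x:
        if random.random() < p_keep:
            out.append(xi)
        else:
            out.append(random.choice([j for j in range(d + 1) if j != xi]))
    return out
```

As $\epsilon \to \infty$, the Bernoulli parameter tends to $1$ and the Laplace scale $2/\epsilon$ tends to $0$, so the output converges to the input, which is consistent with the utility discussion in Section 3.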
3 Utility

A natural question is whether the transformed ratings are still useful. If the transformed ratings produce nonsense, then the transform is pointless even though user privacy is protected. This question decomposes into two subquestions: what usefulness means, and what could be a possible way to achieve it. For the first question, we use the statistical estimation framework; for the second, we use the low-rank matrix completion method.

We start with the framework. There are $m$ users and $n$ items in the universe. $\Theta_{m \times n}$ is the unknown matrix of the true ratings that each user would give to each item. This is a dense matrix without any missing values. However, since there are so many items, users cannot test every item, and their ratings are corrupted by noise. What we actually observe is a sparse matrix $X_{m \times n}$, which can be regarded as an approximation of $\Theta$. We then apply either of our mechanisms to $X$ to generate the transformed rating matrix $Z_{m \times n}$. Since our mechanisms operate elementwise, this can be done locally on each user's computer. After that, each user sends their transformed rating vector to the service provider, who observes the matrix $Z$.

$$\Theta \longrightarrow X \longrightarrow Z$$
Figure 1: Rating generating process

The service provider's goal is to recover $\Theta$ from $Z$. We now show how it is possible to recover $\Theta$ from $Z$ instead of from $X$. As mentioned in Section 1, there are quite a few methods; interested readers can refer to [1, 8]. Here we present only one method, but the analysis generalizes to all of them. The method is low-rank matrix completion. We suppose $\Theta$ is a low-rank matrix, i.e. $\mathrm{rank}(\Theta) = r \ll \min(m, n)$. If the true rating matrix is low-rank, then we are able to approximately recover it from a few corrupted ratings under certain conditions, such as the *restricted isometry property* (RIP) [9].

Definition 4.
Let $\Omega_Z$ denote the support of the non-missing ratings of $Z$. A projection operator $\mathcal{P}_{\Omega_Z}$ satisfies the restricted isometry property if it obeys
$$(1 - \alpha) \|A\|_F^2 \le \frac{1}{p} \|\mathcal{P}_{\Omega_Z}(A)\|_F^2 \le (1 + \alpha) \|A\|_F^2, \tag{2}$$
for any matrix $A$ with sufficiently small rank and $\alpha \in (0, 1)$ sufficiently small, where $p$ is the proportion of non-missing values of $Z$ and $\|\cdot\|_F$ is the Frobenius norm.

The recovery process is described as follows. Suppose that
$$\rho := \|\mathcal{P}_{\Omega_Z}(\Theta - Z)\|_F < \infty. \tag{3}$$
Our estimator $\hat{\Theta}$ is obtained from the following optimization problem:
$$\arg\min_M \|M\|_* \quad \text{s.t.} \quad \|\mathcal{P}_{\Omega_Z}(M - Z)\|_F \le \rho, \tag{4}$$
where $\|\cdot\|_*$ is the nuclear norm (a.k.a. trace norm). Under the low-rank hypothesis and the RIP, [9] proved
$$\left\| \hat{\Theta} - \Theta \right\|_F \le C_0\, p^{-1/2} \rho \tag{5}$$
for some numerical constant $C_0$. This means that the estimation error on the whole matrix is proportional to the error on the support of the observed matrix, i.e. the recovery method enjoys a kind of stability against the noise quantified by $\rho$. Of course, this noise includes not only the noise intrinsic to the problem (i.e. between $\Theta$ and $X$) but also the noise artificially introduced by the mechanism (i.e. between $X$ and $Z$). We see how easily the traditional analysis techniques can be seamlessly transplanted to the new setting. What remains is to give an upper bound on $\rho$.

Let $\Omega_X$ denote the support of the non-missing ratings of $X$, and let $s := |\Omega_X|$ be the number of non-missing ratings. Suppose that $\|\mathcal{P}_{\Omega_X}(\Theta - X)\|_F \le \rho_0 \sqrt{s} < \infty$ for some small constant $\rho_0$. This hypothesis is quite realistic: it is exactly what we would need if we wanted to recover $\Theta$ from the untransformed ratings $X$. Then we have the following theorems.

Theorem 3. *Let $\gamma \in (0, 1)$ be the level of tolerance.*
*With probability at least $1 - \gamma$, the $Z$ generated by the modified Laplace mechanism satisfies*
$$\rho \le \rho_0 \sqrt{s} + \frac{4}{\epsilon} \sqrt{\frac{s}{\gamma}} + \sqrt{\frac{2mn}{(e^{\epsilon/2} + 1)\gamma} \left(1 + \frac{8}{\epsilon^2}\right)}; \tag{6}$$
*with probability at least $1 - \gamma$, the $Z$ generated by the randomized response mechanism satisfies*
$$\rho \le \rho_0 \sqrt{s} + 2(d - 1) \sqrt{\frac{2mnd}{(e^{\epsilon} + d)\gamma}}. \tag{7}$$

When the privacy parameter $\epsilon$ increases toward infinity, the above upper bounds decrease toward $\rho_0 \sqrt{s}$. In other words, the weaker the differential privacy guarantee, the more accurate the data and hence the more precise the estimation. This is intuitive, since the larger $\epsilon$ is, the less extra noise we introduce into the data. In practice, it is desirable to choose an $\epsilon$ which makes the entire upper bound match the order of $\rho_0 \sqrt{s}$. Combining these bounds with (5), we establish the utility of our transformed ratings.

4 Proof

4.1 Proof of Theorem 1

Proof. According to the values of $(x, y)$ and $S$, we distinguish nine cases; $\xi_1, \zeta_1$ denote the noise variables applied to $x$, and $\xi_2, \zeta_2$ those applied to $y$. We only consider nonempty sets $S$, since the empty set case is trivial.

i) $(x, y) \in [-1, 1]^2$ and $S$ is a Borel subset of $\mathbb{R}$:
$$\frac{\Pr(M(x) \in S)}{\Pr(M(y) \in S)} = \frac{\Pr(\zeta_1 = 1,\ x + \xi_1 \in S)}{\Pr(\zeta_2 = 1,\ y + \xi_2 \in S)} = \frac{\Pr(\zeta_1 = 1) \Pr(x + \xi_1 \in S)}{\Pr(\zeta_2 = 1) \Pr(y + \xi_2 \in S)} = \frac{\Pr(x + \xi_1 \in S)}{\Pr(y + \xi_2 \in S)} \le e^{\epsilon}.$$

ii) $(x, y) \in [-1, 1]^2$ and $S = \{?\}$:
$$\frac{\Pr(M(x) \in S)}{\Pr(M(y) \in S)} = \frac{\Pr(\zeta_1 = 0)}{\Pr(\zeta_2 = 0)} = 1 \le e^{\epsilon}.$$

iii) $(x, y) \in [-1, 1]^2$ and $S$ contains $?$ and more:
$$\frac{\Pr(M(x) \in S)}{\Pr(M(y) \in S)} = \frac{\Pr(\zeta_1 = 1,\ x + \xi_1 \in S) + \Pr(\zeta_1 = 0)}{\Pr(\zeta_2 = 1,\ y + \xi_2 \in S) + \Pr(\zeta_2 = 0)} \le e^{\epsilon}.$$

iv) $x \in [-1, 1]$, $y = {?}$, and $S$ is a Borel subset of $\mathbb{R}$:
$$\frac{\Pr(M(x) \in S)}{\Pr(M(y) \in S)} = \frac{\Pr(\zeta_1 = 1,\ x + \xi_1 \in S)}{\Pr(\zeta_2 = 0,\ \xi_2 \in S)} = \frac{\Pr(\zeta_1 = 1) \Pr(x + \xi_1 \in S)}{\Pr(\zeta_2 = 0) \Pr(\xi_2 \in S)} \le e^{\epsilon/2} \cdot e^{\epsilon/2} = e^{\epsilon}.$$

v) $x \in [-1, 1]$, $y = {?}$, and $S = \{?\}$:
$$\frac{\Pr(M(x) \in S)}{\Pr(M(y) \in S)} = \frac{\Pr(\zeta_1 = 0)}{\Pr(\zeta_2 = 1)} = e^{-\epsilon/2} \le e^{\epsilon}.$$

vi) $x \in [-1, 1]$, $y = {?}$, and $S$ contains $?$ and more:
$$\frac{\Pr(M(x) \in S)}{\Pr(M(y) \in S)} = \frac{\Pr(\zeta_1 = 1,\ x + \xi_1 \in S) + \Pr(\zeta_1 = 0)}{\Pr(\zeta_2 = 0,\ \xi_2 \in S) + \Pr(\zeta_2 = 1)} \le e^{\epsilon}.$$

vii) $y \in [-1, 1]$, $x = {?}$, and $S$ is a Borel subset of $\mathbb{R}$:
$$\frac{\Pr(M(x) \in S)}{\Pr(M(y) \in S)} = \frac{\Pr(\zeta_1 = 0,\ \xi_1 \in S)}{\Pr(\zeta_2 = 1,\ y + \xi_2 \in S)} = \frac{\Pr(\zeta_1 = 0) \Pr(\xi_1 \in S)}{\Pr(\zeta_2 = 1) \Pr(y + \xi_2 \in S)} \le e^{-\epsilon/2} \cdot e^{\epsilon/2} \le e^{\epsilon}.$$

viii) $y \in [-1, 1]$, $x = {?}$, and $S = \{?\}$:
$$\frac{\Pr(M(x) \in S)}{\Pr(M(y) \in S)} = \frac{\Pr(\zeta_1 = 1)}{\Pr(\zeta_2 = 0)} = e^{\epsilon/2} \le e^{\epsilon}.$$

ix) $y \in [-1, 1]$, $x = {?}$, and $S$ contains $?$ and more:
$$\frac{\Pr(M(x) \in S)}{\Pr(M(y) \in S)} = \frac{\Pr(\zeta_1 = 0,\ \xi_1 \in S) + \Pr(\zeta_1 = 1)}{\Pr(\zeta_2 = 1,\ y + \xi_2 \in S) + \Pr(\zeta_2 = 0)} \le e^{\epsilon}.$$

4.2 Proof of Theorem 2

Proof. For any $(x, y) \in W^2$, we have
$$\frac{\Pr(M(x) \in S)}{\Pr(M(y) \in S)} = \frac{\sum_{s \in S} \Pr(M(x) = s)}{\sum_{s \in S} \Pr(M(y) = s)}. \tag{8}$$
Here $s$ can take three kinds of values, $x$, $y$, and the rest, so we distinguish three cases.

i) $s = x$:
$$\frac{\Pr(M(x) = x)}{\Pr(M(y) = x)} = \frac{\frac{e^{\epsilon}}{e^{\epsilon} + d}}{\frac{1}{e^{\epsilon} + d}} = e^{\epsilon}.$$

ii) $s = y$:
$$\frac{\Pr(M(x) = y)}{\Pr(M(y) = y)} = \frac{\frac{1}{e^{\epsilon} + d}}{\frac{e^{\epsilon}}{e^{\epsilon} + d}} = e^{-\epsilon} \le e^{\epsilon}.$$

iii) $s \ne x$ and $s \ne y$:
$$\frac{\Pr(M(x) = s)}{\Pr(M(y) = s)} = \frac{\frac{1}{e^{\epsilon} + d}}{\frac{1}{e^{\epsilon} + d}} = 1 \le e^{\epsilon}.$$

In every case, the ratio is at most $e^{\epsilon}$. Plugging these inequalities into (8), we get
$$\frac{\Pr(M(x) \in S)}{\Pr(M(y) \in S)} \le \frac{e^{\epsilon} \sum_{s \in S} \Pr(M(y) = s)}{\sum_{s \in S} \Pr(M(y) = s)} = e^{\epsilon}.$$

4.3 Proof of Theorem 3

Proof. First, we decompose (3) into three terms:
$$\begin{aligned} \|\mathcal{P}_{\Omega_Z}(\Theta - Z)\|_F &= \left\| \mathcal{P}_{\Omega_Z \cap \Omega_X}(\Theta - X) + \mathcal{P}_{\Omega_Z \cap \Omega_X}(X - Z) + \mathcal{P}_{\Omega_Z \setminus \Omega_X}(\Theta - Z) \right\|_F \\ &\le \|\mathcal{P}_{\Omega_Z \cap \Omega_X}(\Theta - X)\|_F + \|\mathcal{P}_{\Omega_Z \cap \Omega_X}(X - Z)\|_F + \|\mathcal{P}_{\Omega_Z \setminus \Omega_X}(\Theta - Z)\|_F. \end{aligned}$$
The first term satisfies
$$\|\mathcal{P}_{\Omega_Z \cap \Omega_X}(\Theta - X)\|_F \le \|\mathcal{P}_{\Omega_X}(\Theta - X)\|_F \le \rho_0 \sqrt{s}.$$
Next, we compute the expectation of the square of the second term.
$$\begin{aligned} \mathbb{E}\, \|\mathcal{P}_{\Omega_Z \cap \Omega_X}(X - Z)\|_F^2 &= \mathbb{E} \Bigg[ \sum_{(i,j) \in \Omega_Z \cap \Omega_X} (X_{ij} - Z_{ij})^2 \Bigg] = \mathbb{E} \Bigg[ \mathbb{E} \Bigg[ \sum_{(i,j) \in \Omega_Z \cap \Omega_X} (X_{ij} - Z_{ij})^2 \,\Big|\, \Omega_X, \Omega_Z \Bigg] \Bigg] \\ &= \mathbb{E} \Bigg[ \sum_{(i,j) \in \Omega_Z \cap \Omega_X} \mathbb{E} \big[ (X_{ij} - Z_{ij})^2 \,\big|\, \Omega_X, \Omega_Z \big] \Bigg]. \end{aligned} \tag{9}$$
In the same way, for the third term,
$$\begin{aligned} \mathbb{E}\, \|\mathcal{P}_{\Omega_Z \setminus \Omega_X}(\Theta - Z)\|_F^2 &= \mathbb{E} \Bigg[ \sum_{(i,j) \in \Omega_Z \setminus \Omega_X} (\Theta_{ij} - Z_{ij})^2 \Bigg] = \mathbb{E} \Bigg[ \mathbb{E} \Bigg[ \sum_{(i,j) \in \Omega_Z \setminus \Omega_X} (\Theta_{ij} - Z_{ij})^2 \,\Big|\, \Omega_X, \Omega_Z \Bigg] \Bigg] \\ &= \mathbb{E} \Bigg[ \sum_{(i,j) \in \Omega_Z \setminus \Omega_X} \mathbb{E} \big[ (\Theta_{ij} - Z_{ij})^2 \,\big|\, \Omega_X, \Omega_Z \big] \Bigg]. \end{aligned} \tag{10}$$

For the modified Laplace mechanism,
$$\mathbb{E}\big[(X_{ij} - Z_{ij})^2 \,\big|\, \Omega_X, \Omega_Z\big] = 2 \left(\frac{2}{\epsilon}\right)^2 = \frac{8}{\epsilon^2}, \quad \forall (i,j) \in \Omega_X \cap \Omega_Z;$$
$$\mathbb{E}\big[(\Theta_{ij} - Z_{ij})^2 \,\big|\, \Omega_X, \Omega_Z\big] = \Theta_{ij}^2 + 2 \left(\frac{2}{\epsilon}\right)^2 \le 1 + \frac{8}{\epsilon^2}, \quad \forall (i,j) \in \Omega_Z \setminus \Omega_X.$$
Plugging these into (9) and (10), we get
$$\mathbb{E}\, \|\mathcal{P}_{\Omega_Z \cap \Omega_X}(X - Z)\|_F^2 \le \frac{8s}{\epsilon^2}, \qquad \mathbb{E}\, \|\mathcal{P}_{\Omega_Z \setminus \Omega_X}(\Theta - Z)\|_F^2 \le \frac{mn}{e^{\epsilon/2} + 1} \left(1 + \frac{8}{\epsilon^2}\right),$$
where $s := |\Omega_X|$. Then, by Markov's inequality,
$$\Pr\left( \|\mathcal{P}_{\Omega_Z \cap \Omega_X}(X - Z)\|_F > \sqrt{\frac{16s}{\epsilon^2 \gamma}} \right) \le \frac{\mathbb{E}\, \|\mathcal{P}_{\Omega_Z \cap \Omega_X}(X - Z)\|_F^2}{16s / (\epsilon^2 \gamma)} \le \frac{\gamma}{2},$$
$$\Pr\left( \|\mathcal{P}_{\Omega_Z \setminus \Omega_X}(\Theta - Z)\|_F > \sqrt{\frac{2mn}{(e^{\epsilon/2} + 1)\gamma} \left(1 + \frac{8}{\epsilon^2}\right)} \right) \le \frac{\mathbb{E}\, \|\mathcal{P}_{\Omega_Z \setminus \Omega_X}(\Theta - Z)\|_F^2}{\frac{2mn}{(e^{\epsilon/2} + 1)\gamma} \left(1 + \frac{8}{\epsilon^2}\right)} \le \frac{\gamma}{2}.$$
Finally, by a union bound, we have
$$\Pr\left( \|\mathcal{P}_{\Omega_Z}(\Theta - Z)\|_F \le \rho_0 \sqrt{s} + \frac{4}{\epsilon} \sqrt{\frac{s}{\gamma}} + \sqrt{\frac{2mn}{(e^{\epsilon/2} + 1)\gamma} \left(1 + \frac{8}{\epsilon^2}\right)} \right) \ge 1 - \gamma.$$

For the randomized response mechanism,
$$\mathbb{E}\big[(X_{ij} - Z_{ij})^2 \,\big|\, \Omega_X, \Omega_Z\big] \le \frac{(d-1)^2 d}{e^{\epsilon} + d}, \quad \forall (i,j) \in \Omega_X \cap \Omega_Z;$$
$$\mathbb{E}\big[(\Theta_{ij} - Z_{ij})^2 \,\big|\, \Omega_X, \Omega_Z\big] \le (d-1)^2, \quad \forall (i,j) \in \Omega_Z \setminus \Omega_X.$$
Plugging these into (9) and (10), we get
$$\mathbb{E}\, \|\mathcal{P}_{\Omega_Z \cap \Omega_X}(X - Z)\|_F^2 \le \frac{(d-1)^2 s d}{e^{\epsilon} + d}, \qquad \mathbb{E}\, \|\mathcal{P}_{\Omega_Z \setminus \Omega_X}(\Theta - Z)\|_F^2 \le \frac{mnd}{e^{\epsilon} + d} (d-1)^2,$$
where $s := |\Omega_X|$. Then, by Markov's inequality,
$$\Pr\left( \|\mathcal{P}_{\Omega_Z \cap \Omega_X}(X - Z)\|_F > \sqrt{\frac{2sd(d-1)^2}{(e^{\epsilon} + d)\gamma}} \right) \le \frac{\mathbb{E}\, \|\mathcal{P}_{\Omega_Z \cap \Omega_X}(X - Z)\|_F^2}{\frac{2sd(d-1)^2}{(e^{\epsilon} + d)\gamma}} \le \frac{\gamma}{2},$$
$$\Pr\left( \|\mathcal{P}_{\Omega_Z \setminus \Omega_X}(\Theta - Z)\|_F > \sqrt{\frac{2mnd(d-1)^2}{(e^{\epsilon} + d)\gamma}} \right) \le \frac{\mathbb{E}\, \|\mathcal{P}_{\Omega_Z \setminus \Omega_X}(\Theta - Z)\|_F^2}{\frac{2mnd(d-1)^2}{(e^{\epsilon} + d)\gamma}} \le \frac{\gamma}{2}.$$
Finally, since $s \le mn$, a union bound gives
$$\Pr\left( \|\mathcal{P}_{\Omega_Z}(\Theta - Z)\|_F \le \rho_0 \sqrt{s} + 2(d-1) \sqrt{\frac{2mnd}{(e^{\epsilon} + d)\gamma}} \right) \ge 1 - \gamma.$$

References

[1] Francesco Ricci, Lior Rokach, and Bracha Shapira. Introduction to Recommender Systems Handbook. Springer, 2011.

[2] James Bennett and Stan Lanning. The Netflix Prize. In Proceedings of KDD Cup and Workshop, volume 2007, page 35, 2007.

[3] Arvind Narayanan and Vitaly Shmatikov. Robust de-anonymization of large sparse datasets. In Security and Privacy, 2008. SP 2008. IEEE Symposium on, pages 111–125. IEEE, 2008.

[4] Frank McSherry and Ilya Mironov. Differentially private recommender systems: building privacy into the Netflix Prize contenders. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 627–636. ACM, 2009.

[5] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography, pages 265–284. Springer, 2006.

[6] John C. Duchi, Michael I. Jordan, and Martin J. Wainwright. Local privacy, data processing inequalities, and statistical minimax rates. arXiv preprint arXiv:1302.3203, 2013.

[7] Peter Kairouz, Sewoong Oh, and Pramod Viswanath. Extremal mechanisms for local differential privacy. In Advances in Neural Information Processing Systems, pages 2879–2887, 2014.

[8] Charu C. Aggarwal. Recommender Systems: The Textbook. Springer, 2016.

[9] M. Fazel, E. Candès, B. Recht, and P. Parrilo. Compressed sensing and robust recovery of low rank matrices. In Signals, Systems and Computers, 2008 42nd Asilomar Conference on, pages 1043–1047. IEEE, 2008.
