Polar $n$-Complex and $n$-Bicomplex Singular Value Decomposition and Principal Component Pursuit


Authors: Tak-Shing T. Chan, Yi-Hsuan Yang

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. XX, NO. XX, MONTH 2016

Abstract—Informed by recent work on tensor singular value decomposition and circulant algebra matrices, this paper presents a new theoretical bridge that unifies the hypercomplex and tensor-based approaches to singular value decomposition and robust principal component analysis. We begin our work by extending the principal component pursuit to Olariu's polar $n$-complex numbers as well as their bicomplex counterparts. In so doing, we have derived the polar $n$-complex and $n$-bicomplex proximity operators for both the $\ell_1$- and trace-norm regularizers, which can be used by proximal optimization methods such as the alternating direction method of multipliers. Experimental results on two sets of audio data show that our algebraically-informed formulation outperforms tensor robust principal component analysis. We conclude with the message that an informed definition of the trace norm can bridge the gap between the hypercomplex and tensor-based approaches. Our approach can be seen as a general methodology for generating other principal component pursuit algorithms with proper algebraic structures.

Index Terms—Hypercomplex, tensors, singular value decomposition, principal component, pursuit algorithms.

I. INTRODUCTION

Robust principal component analysis (RPCA) [1] has received a lot of attention lately in many application areas of signal processing [2]–[5]. The ideal form of RPCA decomposes the input $\mathbf{X} \in \mathbb{R}^{l \times m}$ into a low-rank matrix $\mathbf{L}$ and a sparse matrix $\mathbf{S}$:

$$\min_{\mathbf{L},\mathbf{S}} \operatorname{rank}(\mathbf{L}) + \lambda \|\mathbf{S}\|_0 \quad \text{s.t.} \quad \mathbf{X} = \mathbf{L} + \mathbf{S}, \tag{1}$$

where $\|\cdot\|_0$ returns the number of nonzero matrix elements.
Owing to the NP-hardness of the above formulation, principal component pursuit (PCP) [1] has been proposed to solve this relaxed problem instead [6]:

$$\min_{\mathbf{L},\mathbf{S}} \|\mathbf{L}\|_* + \lambda \|\mathbf{S}\|_1 \quad \text{s.t.} \quad \mathbf{X} = \mathbf{L} + \mathbf{S}, \tag{2}$$

where $\|\cdot\|_*$ is the trace norm (sum of the singular values), $\|\cdot\|_1$ is the entrywise $\ell_1$-norm, and $\lambda$ can be set to $c/\sqrt{\max(l,m)}$ where $c$ is a positive parameter [1], [2]. The trace norm and the $\ell_1$-norm are the tightest convex relaxations of the rank and the $\ell_0$-norm, respectively. Under somewhat general conditions [1], PCP with $c = 1$ has a high probability of exact recovery, though $c$ can be tuned if the conditions are not met.

Manuscript received August 26, 2015; revised May 26, 2016 and July 16, 2016; accepted September 3, 2016. Date of publication Month xx, 2016; date of current version September 4, 2016. This work was supported by a grant from the Ministry of Science and Technology under the contract MOST102-2221-E-001-004-MY3 and the Academia Sinica Career Development Program. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Masahiro Yukawa. The authors are with the Research Center for Information Technology Innovation, Academia Sinica, Taipei 11564, Taiwan (e-mail: takshingchan@citi.sinica.edu.tw; yang@citi.sinica.edu.tw). Digital Object Identifier 10.1109/TSP.2016.2612171

Despite its success, one glaring omission from the original PCP is the lack of complex (and hypercomplex) formulations. In numerous signal processing domains, the input phase has a significant meaning. For example, in parametric spatial audio, spectrograms have not only spectral phases but inter-channel phases as well. For that reason alone, we have recently extended the PCP to the complex and the quaternionic cases [7].
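To make the objective in (2) concrete, the following NumPy sketch evaluates the trace norm, the entrywise $\ell_1$-norm, and the weight $\lambda = c/\sqrt{\max(l,m)}$ on a small real-valued example. The function names `pcp_lambda`, `trace_norm`, and `l1_norm` are ours, not from the paper:

```python
import numpy as np

def pcp_lambda(l, m, c=1.0):
    """The weight lambda = c / sqrt(max(l, m)) suggested in [1], [2]."""
    return c / np.sqrt(max(l, m))

def trace_norm(X):
    """Trace (nuclear) norm in (2): the sum of the singular values."""
    return np.linalg.svd(X, compute_uv=False).sum()

def l1_norm(X):
    """Entrywise l1-norm in (2)."""
    return np.abs(X).sum()
```

For a $4 \times 4$ input with $c = 1$, `pcp_lambda(4, 4)` gives the default weight $\lambda = 0.5$.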
However, there exist inputs with dimensionality greater than four, such as microphone array data, surveillance video from multiple cameras, or electroencephalogram (EEG) signals, which exceed the capability of quaternions. These signals may instead be represented by $n$-dimensional hypercomplex numbers, defined as [8]

$$a = a_0 + a_1 e_1 + \cdots + a_{n-1} e_{n-1}, \tag{3}$$

where $a_0, \ldots, a_{n-1} \in \mathbb{R}$ and $e_1, \ldots, e_{n-1}$ are the imaginary units. Products of imaginary units are defined by an arbitrary $(n-1) \times (n-1)$ multiplication table, and multiplication follows the distributive rule [8]. If we impose the multiplication rules

$$e_i e_j = \begin{cases} -e_j e_i, & i \neq j, \\ -1, 0, \text{ or } 1, & i = j, \end{cases} \tag{4}$$

and extend the algebra to include all $2^{n-1}$ combinations of imaginary units (formally known as multivectors):

$$a = a_0 + a_1 e_1 + a_2 e_2 + \cdots + a_{1,2}\, e_1 e_2 + a_{1,3}\, e_1 e_3 + \cdots + a_{1,2,\ldots,n-1}\, e_1 e_2 \cdots e_{n-1}, \tag{5}$$

then we have a Clifford algebra [9]. For example, the real, complex, and quaternion algebras are all Clifford algebras. Previously, however, Alfsmann [10] suggested two families of $2^N$-dimensional hypercomplex numbers suitable for signal processing and argued for their superiority over Clifford algebras. One family starts from the two-dimensional hyperbolic numbers and the other from the four-dimensional tessarines,¹ with dimensionality doubling up from there. Although initially attractive, the $2^N$-dimensional restriction (which also affects Clifford algebras) seems a bit limiting.

¹Hyperbolic numbers are represented by $a_0 + a_1 j$ where $j^2 = 1$ and $a_0, a_1 \in \mathbb{R}$ [10]. Tessarines are almost identical except that $a_0, a_1 \in \mathbb{C}$ [10].
For instance, if we have 100 channels to process, we are forced to use 128 dimensions (wasting 28). On the other hand, tensors can have arbitrary dimensions, but traditionally they do not possess rich algebraic structures. Fortunately, recent work on the tensor singular value decomposition (SVD) [11], which the authors call the t-SVD, has begun to impose more structure on tensors [12]–[14]. Furthermore, a tensor PCP formulation based on the t-SVD has also been proposed lately [15]. Most relevantly, Braman [12] has suggested investigating the relationship between the t-SVD and Olariu's [16] polar $n$-complex numbers (for arbitrary $n$). This is exactly what we need, yet the actual work is not forthcoming, so we have decided to begin our investigation with Olariu's polar $n$-complex numbers. Of special note is Gleich's work on the circulant algebra [17], which is isomorphic to Olariu's polar $n$-complex numbers; this observation simplifies our current work significantly. Nevertheless, the existing tensor PCP [15] employs an ad hoc tensor nuclear norm which lacks algebraic validity. In this paper, we remedy this gap by formulating the first proper $n$-dimensional PCP algorithm using the polar $n$-complex algebra.

Our contributions in this paper are twofold. First, we have extended PCP to the polar $n$-complex algebra and the polar $n$-bicomplex algebra (defined in Section III), via: 1) properly exploiting the circulant isomorphism for the polar $n$-complex numbers; 2) extending the polar $n$-complex algebra to a new polar $n$-bicomplex algebra; and 3) deriving the proximal operators for both the polar $n$-complex and $n$-bicomplex matrices by leveraging the aforementioned isomorphism. Second, we have provided a novel hypercomplex framework for PCP where algebraic structures play a central role.

This paper is organized as follows. In Section II, we review polar $n$-complex matrices and their properties.
We extend this to the polar $n$-bicomplex case in Section III. This leads to the polar $n$-complex and $n$-bicomplex PCP in Section IV. Experiments are conducted in Sections V and VI to justify our approach. We conclude by describing how our work provides a new direction for future work in Section VII.

II. THE POLAR n-COMPLEX NUMBERS

In this section we introduce polar $n$-complex matrices and their isomorphisms. These will be required in Section IV for the formulation of polar $n$-complex PCP. Please note that the value of $n$ here does not have to be a power of two.

A. Background

Olariu's [16] polar $n$-complex numbers, which we denote by $\mathbb{K}_n$, are $n$-dimensional ($n \geq 2$) extensions of the complex algebra, defined as

$$p = a_0 e_0 + a_1 e_1 + \cdots + a_{n-1} e_{n-1} \in \mathbb{K}_n, \tag{6}$$

where $a_0, a_1, \ldots, a_{n-1} \in \mathbb{R}$. The first imaginary unit is defined to be $e_0 = 1$, whereas $e_1, \ldots, e_{n-1}$ are defined by the multiplication table [16]

$$e_i e_k = e_{(i+k) \bmod n}. \tag{7}$$

We call $\operatorname{Re} p = a_0$ the real part of $p$ and $\operatorname{Im}_i p = a_i$ the imaginary parts of $p$ for $i = 0, 1, \ldots, n-1$. We remark that our imaginary index starts with 0, which includes the real part, to facilitate a shorter definition of equations such as (34) and (41). Multiplication follows the usual associative and commutative rules [16]. The inverse of $p$ is the number $p^{-1}$ such that $pp^{-1} = 1$ [16]. Olariu named it the polar $n$-complex algebra because it is motivated by the polar representation of a complex number [16], where $a + jb \in \mathbb{C}$ is represented geometrically by its modulus $\sqrt{a^2 + b^2}$ and polar angle $\arctan(b/a)$. Likewise, the polar $n$-complex number in (6) can be represented by its modulus

$$|p| = \sqrt{a_0^2 + a_1^2 + \cdots + a_{n-1}^2} \tag{8}$$

together with $\lceil n/2 \rceil - 1$ azimuthal angles, $\lceil n/2 \rceil - 2$ planar angles, and one polar angle (two if $n$ is even), totaling $n - 1$ angles [16]. To calculate these angles, let $[A_0, A_1, \ldots, A_{n-1}]^T$ be the discrete Fourier transform (DFT) of $[a_0, a_1, \ldots, a_{n-1}]^T$, defined by

$$\begin{bmatrix} A_0 \\ A_1 \\ \vdots \\ A_{n-1} \end{bmatrix} = \mathbf{F}_n \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_{n-1} \end{bmatrix}, \tag{9}$$

where $\omega_n = e^{-j 2\pi/n}$ is a principal $n$th root of unity and

$$\mathbf{F}_n = \frac{1}{\sqrt{n}} \begin{bmatrix} 1 & 1 & \cdots & 1 \\ 1 & \omega_n & \cdots & \omega_n^{n-1} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & \omega_n^{n-1} & \cdots & \omega_n^{(n-1)(n-1)} \end{bmatrix}, \tag{10}$$

which is unitary, i.e., $\mathbf{F}_n^* = \mathbf{F}_n^{-1}$. For $k = 1, \ldots, \lceil n/2 \rceil - 1$, the azimuthal angles $\phi_k$ can be calculated from [16]

$$A_k = |A_k| e^{-j\phi_k}, \tag{11}$$

where $0 \leq \phi_k < 2\pi$. Note that we have reversed the sign of the angles, as Olariu was a physicist, so his DFT is our inverse DFT. Furthermore, for $k = 2, \ldots, \lceil n/2 \rceil - 1$, the planar angles $\psi_{k-1}$ are defined by [16]

$$\tan \psi_{k-1} = \frac{|A_1|}{|A_k|}, \tag{12}$$

where $0 \leq \psi_{k-1} \leq \pi/2$. The polar angle $\theta_+$ is defined as [16]

$$\tan \theta_+ = \frac{\sqrt{2}\,|A_1|}{A_0}, \tag{13}$$

where $0 \leq \theta_+ \leq \pi$. Finally, for even $n$, there is an additional polar angle [16],

$$\tan \theta_- = \frac{\sqrt{2}\,|A_1|}{A_{n/2}}, \tag{14}$$

where $0 \leq \theta_- \leq \pi$. We can uniquely recover the polar $n$-complex number given its modulus and the $n-1$ angles defined above.² More importantly, the polar $n$-complex numbers are ring-isomorphic³ to the following matrix representation [16],

$$\chi : \mathbb{K}_n \to \mathbb{R}^{n \times n} : \quad \chi(p) = \begin{bmatrix} a_0 & a_{n-1} & a_{n-2} & \cdots & a_1 \\ a_1 & a_0 & a_{n-1} & \cdots & a_2 \\ a_2 & a_1 & a_0 & \cdots & a_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n-1} & a_{n-2} & a_{n-3} & \cdots & a_0 \end{bmatrix}, \tag{15}$$

which is a circulant matrix.⁴ This means that polar $n$-complex multiplication is equivalent to circular convolution.

²Exact formulas can be found in [16, pp. 212–216], especially (6.80), (6.81), (6.103), and (6.104). We remark that Olariu's choice of $A_1$ as a reference for the planar and polar angles is convenient but somewhat arbitrary.
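The circulant representation (15) is easy to check numerically. The sketch below builds $\chi(p)$ and verifies that the product defined by the table (7) maps to an ordinary matrix product, i.e., $\chi(pq) = \chi(p)\chi(q)$; the helper names `chi` and `kn_mul` are ours, and coefficients are assumed stored as real vectors:

```python
import numpy as np

def chi(a):
    """Circulant representation (15): chi(p)[i, k] = a[(i - k) mod n]."""
    n = len(a)
    return np.array([[a[(i - k) % n] for k in range(n)] for i in range(n)])

def kn_mul(a, b):
    """Polar n-complex product from the table (7): e_i e_k = e_{(i+k) mod n},
    which is exactly circular convolution of the coefficient vectors."""
    n = len(a)
    c = np.zeros(n)
    for i in range(n):
        for k in range(n):
            c[(i + k) % n] += a[i] * b[k]
    return c
```

For example, with $a = (1, 2, 3)$ and $b = (4, 5, 6)$, `kn_mul` returns the circular convolution $(31, 31, 28)$, and `chi(kn_mul(a, b))` equals `chi(a) @ chi(b)`.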
Due to the circular convolution theorem, it can be implemented efficiently in the Fourier domain [17]:

$$\mathbf{F}_n(\mathbf{a} \circledast \mathbf{b}) = \sqrt{n}\,(\mathbf{F}_n \mathbf{a}) \circ (\mathbf{F}_n \mathbf{b}), \tag{16}$$

where $\mathbf{a}, \mathbf{b} \in \mathbb{R}^n$, $\circledast$ denotes circular convolution, and $\circ$ is the Hadamard product. The isomorphism in (15) implies [17]:

$$\chi(1) = \mathbf{I}_n, \tag{17}$$
$$\chi(pq) = \chi(p)\chi(q), \tag{18}$$
$$\chi(p+q) = \chi(p) + \chi(q), \tag{19}$$
$$\chi(p^{-1}) = \chi(p)^{-1}, \tag{20}$$

for $1, p, q \in \mathbb{K}_n$. From these properties it becomes natural to define the polar $n$-complex conjugation $\bar{p}$ by [17]

$$\chi(\bar{p}) = \chi(p)^*, \tag{21}$$

where $\chi(p)^*$ denotes the conjugate transpose of $\chi(p)$. This allows us to propose a new scalar product inspired by its quaternionic counterpart [19],

$$\langle p, q \rangle = \operatorname{Re} p\bar{q}, \tag{22}$$

which we will use later for the Frobenius norm of the polar $n$-complex numbers. Note that this differs from the usual definition $\langle p, q \rangle = p\bar{q}$ [17] because we need the real restriction for the desirable property $\langle p, p \rangle = |p|^2$. To wit, observe that $\operatorname{Re} p = a_0 = [\chi(p)]_{ii}$ for arbitrary $i$; thus $\operatorname{Re} p\bar{q} = [\chi(p)\chi(q)^*]_{ii} = \sum_{k=1}^{n} [\chi(p)]_{ik}[\chi(q)]_{ik}$, which is the standard inner product between the underlying elements. The same result can also be obtained from $\operatorname{Re} \bar{p}q$. In other words, if $p = \sum_{i=0}^{n-1} a_i e_i$ and $q = \sum_{i=0}^{n-1} b_i e_i$, we get

$$\operatorname{Re} p\bar{q} = \operatorname{Re} \bar{p}q = \sum_{i=0}^{n-1} a_i b_i. \tag{23}$$

An alternative way of looking at the isomorphism in (15) is to consider the circulant matrix as a sum [20],

$$\chi(p) = a_0 \mathbf{E}_n^0 + a_1 \mathbf{E}_n^1 + \cdots + a_{n-1} \mathbf{E}_n^{n-1}, \tag{24}$$

where

$$\mathbf{E}_n = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix} \in \mathbb{R}^{n \times n}. \tag{25}$$

³A ring isomorphism is a bijective map $\chi : R \to S$ such that $\chi(1_R) = 1_S$, $\chi(ab) = \chi(a)\chi(b)$, and $\chi(a+b) = \chi(a) + \chi(b)$ for all $a, b \in R$.
⁴A circulant matrix is a matrix $\mathbf{C}$ where each column is a cyclic shift of the previous column, such that $\mathbf{C}$ is diagonalizable by the DFT [18].
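The conjugation (21) and the scalar product (22)–(23) can likewise be verified through the circulant representation. In the sketch below (helper names `chi`, `kn_conj`, and `kn_inner` are ours), the conjugate simply reverses the cyclic part of the coefficient vector, since transposing a circulant matrix reverses its shifts:

```python
import numpy as np

def chi(a):
    """Circulant representation (15)."""
    n = len(a)
    return np.array([[a[(i - k) % n] for k in range(n)] for i in range(n)])

def kn_conj(a):
    """Conjugation (21): chi(conj(p)) = chi(p)^T, i.e. the coefficient
    vector becomes [a_0, a_{n-1}, ..., a_1]."""
    a = np.asarray(a)
    return np.concatenate(([a[0]], a[1:][::-1]))

def kn_inner(a, b):
    """Scalar product (22): <p, q> = Re(p conj(q)), read off as a diagonal
    entry of chi(p) chi(conj(q)); equals sum_i a_i b_i as in (23)."""
    return (chi(a) @ chi(kn_conj(b)))[0, 0]
```

In particular, `kn_inner(a, a)` recovers the squared modulus (8), confirming $\langle p, p \rangle = |p|^2$.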
More concisely, the entries of the circulant matrix in (15) can be written as $c_{ik} = a_{(i-k) \bmod n}$. The sum in (24) follows the convention that $\mathbf{E}_n^0 = \mathbf{I}_n$. It is trivial to show that $\mathbf{E}_n^i \mathbf{E}_n^k = \mathbf{E}_n^{(i+k) \bmod n}$ [20]; hence the isomorphism is immediately obvious. Recall that the group of imaginary units $\{\mathbf{E}_n^i\}_{i=0}^{n-1}$ is called cyclic if we can use a single basis element $\mathbf{E}_n$ to generate the entire algebra, so the algebra in (24) is also known as a cyclic algebra [21].

The circulant isomorphism helps us to utilize the recent literature on circulant algebra matrices [17], which simplifies our work in the next subsection. The circulant algebra in [17] breaks the modulus into $n$ pieces such that the original number can be uniquely recovered without the planar and polar angles. However, for the $\ell_1$-norm at least, we need a single number for minimization purposes. Moreover, although our goal is phase preservation, we do not need to calculate the angles explicitly for the PCP problem. Consequently, we will stick with the original definition in (8).

B. Polar n-Complex Matrices and Their Isomorphisms

We denote the set of $l \times m$ matrices with polar $n$-complex entries by $\mathbb{K}_n^{l \times m}$. For a polar $n$-complex matrix $\mathbf{A} \in \mathbb{K}_n^{l \times m}$, we define its adjoint matrix via $\chi_{lm} : \mathbb{K}_n^{l \times m} \to \mathbb{R}^{ln \times mn}$ [17]:

$$\chi_{lm}(\mathbf{A}) = \begin{bmatrix} \chi(A_{11}) & \chi(A_{12}) & \cdots & \chi(A_{1m}) \\ \chi(A_{21}) & \chi(A_{22}) & \cdots & \chi(A_{2m}) \\ \vdots & \vdots & \ddots & \vdots \\ \chi(A_{l1}) & \chi(A_{l2}) & \cdots & \chi(A_{lm}) \end{bmatrix}. \tag{26}$$

We will now show that the $\mathbb{R}$-linear map $\chi_{lm}(\mathbf{A}) : \mathbb{R}^{mn} \to \mathbb{R}^{ln}$ operates in an identical manner to the $\mathbb{K}_n$-linear map $\mathbf{A} : \mathbb{K}_n^m \to \mathbb{K}_n^l$.

Theorem 1. Let $\mathbf{A} \in \mathbb{K}_n^{l \times m}$. Then the following holds:
1) $\chi_{mm}(\mathbf{I}_m) = \mathbf{I}_{mn}$ if $\mathbf{I}_m \in \mathbb{K}_n^{m \times m}$;
2) $\chi_{lr}(\mathbf{AB}) = \chi_{lm}(\mathbf{A})\chi_{mr}(\mathbf{B})$ if $\mathbf{B} \in \mathbb{K}_n^{m \times r}$;
3) $\chi_{lm}(\mathbf{A}+\mathbf{B}) = \chi_{lm}(\mathbf{A}) + \chi_{lm}(\mathbf{B})$ if $\mathbf{B} \in \mathbb{K}_n^{l \times m}$;
4) $\chi_{lm}(\mathbf{A}^*) = \chi_{lm}(\mathbf{A})^*$;
5) $\chi_{lm}(\mathbf{A}^{-1}) = \chi_{lm}(\mathbf{A})^{-1}$ if it exists.

Proof. Properties 1, 3, and 4 can be verified by direct substitution.
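The adjoint matrix (26) is straightforward to materialize when a polar $n$-complex matrix is stored as an $l \times m \times n$ array of coefficients. The following sketch (our names `chi` and `chi_lm`) also checks item 1 of Theorem 1 on a small identity matrix:

```python
import numpy as np

def chi(a):
    """Circulant representation (15) of one polar n-complex number."""
    n = len(a)
    return np.array([[a[(i - k) % n] for k in range(n)] for i in range(n)])

def chi_lm(A):
    """Adjoint matrix (26): A is an l x m x n real array holding the n
    coefficients of each entry; each entry becomes an n x n circulant
    block of the resulting ln x mn real matrix."""
    l, m, _ = A.shape
    return np.block([[chi(A[i, k]) for k in range(m)] for i in range(l)])
```

With the $2 \times 2$ identity over $\mathbb{K}_3$ (each diagonal entry equal to $e_0$), `chi_lm` returns the $6 \times 6$ real identity, as Theorem 1 predicts.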
Property 5 can be derived from properties 1 and 2 via the equality $\mathbf{A}\mathbf{A}^{-1} = \mathbf{I}$. Property 2 can be proven using (15) and (18):

$$\chi_{lr}(\mathbf{AB}) = \chi\!\left( \begin{bmatrix} \sum_{k=1}^{m} A_{1k}B_{k1} & \cdots & \sum_{k=1}^{m} A_{1k}B_{kr} \\ \vdots & \ddots & \vdots \\ \sum_{k=1}^{m} A_{lk}B_{k1} & \cdots & \sum_{k=1}^{m} A_{lk}B_{kr} \end{bmatrix} \right) = \begin{bmatrix} \sum_{k=1}^{m} \chi(A_{1k})\chi(B_{k1}) & \cdots & \sum_{k=1}^{m} \chi(A_{1k})\chi(B_{kr}) \\ \vdots & \ddots & \vdots \\ \sum_{k=1}^{m} \chi(A_{lk})\chi(B_{k1}) & \cdots & \sum_{k=1}^{m} \chi(A_{lk})\chi(B_{kr}) \end{bmatrix} = \chi_{lm}(\mathbf{A})\,\chi_{mr}(\mathbf{B}).$$

TABLE I
STEP-BY-STEP ILLUSTRATION OF THE CFT FOR $\mathbf{A} \in \mathbb{K}_2^{2 \times 2}$; SEE (10), (26) AND (27) FOR DEFINITIONS. IN GENERAL, DUE TO THE PROPERTIES OF THE CIRCULANT BLOCKS, THE CFT CAN BLOCK DIAGONALIZE THE ADJOINT MATRIX OF ANY POLAR n-COMPLEX MATRIX. HERE $\mathbf{F}_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$.

$$\mathbf{A} = \begin{bmatrix} a_0 + a_1 e_1 & c_0 + c_1 e_1 \\ b_0 + b_1 e_1 & d_0 + d_1 e_1 \end{bmatrix}, \qquad \chi_{2,2}(\mathbf{A}) = \begin{bmatrix} a_0 & a_1 & c_0 & c_1 \\ a_1 & a_0 & c_1 & c_0 \\ b_0 & b_1 & d_0 & d_1 \\ b_1 & b_0 & d_1 & d_0 \end{bmatrix},$$

$$(\mathbf{I}_2 \otimes \mathbf{F}_2)\,\chi_{2,2}(\mathbf{A})\,(\mathbf{I}_2 \otimes \mathbf{F}_2^*) = \begin{bmatrix} a_0 + a_1 & 0 & c_0 + c_1 & 0 \\ 0 & a_0 - a_1 & 0 & c_0 - c_1 \\ b_0 + b_1 & 0 & d_0 + d_1 & 0 \\ 0 & b_0 - b_1 & 0 & d_0 - d_1 \end{bmatrix},$$

$$\mathbf{P}_{4,2}\,(\mathbf{I}_2 \otimes \mathbf{F}_2)\,\chi_{2,2}(\mathbf{A})\,(\mathbf{I}_2 \otimes \mathbf{F}_2^*)\,\mathbf{P}_{4,2}^{-1} = \begin{bmatrix} a_0 + a_1 & c_0 + c_1 & 0 & 0 \\ b_0 + b_1 & d_0 + d_1 & 0 & 0 \\ 0 & 0 & a_0 - a_1 & c_0 - c_1 \\ 0 & 0 & b_0 - b_1 & d_0 - d_1 \end{bmatrix}.$$

In other words, the adjoint matrix $\chi_{lm}(\mathbf{A})$ is an isomorphic representation of the polar $n$-complex matrix $\mathbf{A}$. The above isomorphism was originally established for circulant matrix-vector multiplication [17], which we have just extended to the case of matrix-matrix multiplication.
This isomorphism simplifies our work both theoretically and experimentally by allowing us to switch to the adjoint matrix representation whenever it is more convenient.

C. Singular Value Decomposition

For the SVD of $\mathbf{A} \in \mathbb{K}_n^{l \times m}$, we first define the stride-by-$s$ [22] permutation matrix of order $m$ by

$$[\mathbf{P}_{m,s}]_{ik} = [\mathbf{I}_m]_{is - (m-1)\lfloor is/m \rfloor,\, k} \tag{27}$$

for $i, k = 0, 1, \ldots, m-1$. This is equivalent to, but more succinct than, the standard definition in the literature [22]. The stride-by-$s$ permutation greatly simplifies the definition of the two-dimensional shuffle in the following. We define the circulant Fourier transform (CFT) and its inverse (ICFT) in the same way as [17]:

$$\operatorname{cft}(\mathbf{A}) = \mathbf{P}_{ln,l}\,(\mathbf{I}_l \otimes \mathbf{F}_n)\,\chi_{lm}(\mathbf{A})\,(\mathbf{I}_m \otimes \mathbf{F}_n^*)\,\mathbf{P}_{mn,m}^{-1}, \tag{28}$$

$$\chi_{lm}(\operatorname{icft}(\hat{\mathbf{A}})) = (\mathbf{I}_l \otimes \mathbf{F}_n^*)\,\mathbf{P}_{ln,l}^{-1}\,\hat{\mathbf{A}}\,\mathbf{P}_{mn,m}\,(\mathbf{I}_m \otimes \mathbf{F}_n), \tag{29}$$

where $\mathbf{P}_{ln,l}\,(\cdot)\,\mathbf{P}_{mn,m}^{-1}$ shuffles an $ln \times mn$ matrix containing $n \times n$ diagonal blocks into a block diagonal matrix containing $l \times m$ blocks. Please refer to Table I to see this shuffle in action. The purpose of $\operatorname{cft}(\mathbf{A})$ is to block diagonalize the adjoint matrix of $\mathbf{A}$ into the following form [17]:

$$\hat{\mathbf{A}} = \operatorname{cft}(\mathbf{A}) = \begin{bmatrix} \hat{\mathbf{A}}_1 & & \\ & \ddots & \\ & & \hat{\mathbf{A}}_n \end{bmatrix}, \tag{30}$$

while $\operatorname{icft}(\hat{\mathbf{A}})$ inverts this operation. Here, $\hat{\mathbf{A}}_i$ can be understood as the eigenvalues of the input, produced in the Fourier transform order, as noted by [17]. The SVD of $\mathbf{A}$ can be performed blockwise through the SVD of $\operatorname{cft}(\mathbf{A})$ [11]:

$$\begin{bmatrix} \hat{\mathbf{U}}_1 & & \\ & \ddots & \\ & & \hat{\mathbf{U}}_n \end{bmatrix} \begin{bmatrix} \hat{\mathbf{\Sigma}}_1 & & \\ & \ddots & \\ & & \hat{\mathbf{\Sigma}}_n \end{bmatrix} \begin{bmatrix} \hat{\mathbf{V}}_1 & & \\ & \ddots & \\ & & \hat{\mathbf{V}}_n \end{bmatrix}^*, \tag{31}$$

then we can use $\operatorname{icft}(\hat{\mathbf{U}})$, $\operatorname{icft}(\hat{\mathbf{\Sigma}})$, and $\operatorname{icft}(\hat{\mathbf{V}})$ to get $\mathbf{U} \in \mathbb{K}_n^{l \times l}$, $\mathbf{S} \in \mathbb{K}_n^{l \times m}$, and $\mathbf{V} \in \mathbb{K}_n^{m \times m}$, where $\mathbf{U}$ and $\mathbf{V}$ are unitary [11], [17]. This is equivalent to the t-SVD in tensor signal processing (see Algorithm 1) [11]: provided that we store the $l \times m$ polar $n$-complex matrix as an $l \times m \times n$ real tensor,⁵ the $n$-point DFT along all tubes is equivalent to the CFT.
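The reason the CFT block diagonalizes the adjoint matrix, as in (30), is that the unitary DFT matrix (10) diagonalizes every circulant block. A minimal numerical check of this fact (the helper name `chi` is ours):

```python
import numpy as np

def chi(a):
    """Circulant representation (15)."""
    n = len(a)
    return np.array([[a[(i - k) % n] for k in range(n)] for i in range(n)])

n = 4
rng = np.random.default_rng(0)
a = rng.standard_normal(n)

# Unitary DFT matrix F_n of (10), built column by column from the identity.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)

# F_n chi(p) F_n^* is diagonal, with the DFT of the coefficients on the
# diagonal -- the "eigenvalues in Fourier transform order" of (30).
D = F @ chi(a) @ F.conj().T
```

This is the scalar ($l = m = 1$) case of (28); stacking such blocks and shuffling with the stride permutation (27) yields the block structure of (30).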
Matrix multiplication can also be done blockwise in the CFT domain, with the $\sqrt{n}$ scaling as before.

Algorithm 1 t-SVD [11]
Input: $\mathcal{X} \in \mathbb{C}^{l \times m \times n}$ // See footnote 5 for tensor notation.
Output: $\mathcal{U}$, $\mathcal{S}$, $\mathcal{V}$
1: $\hat{\mathcal{X}} \leftarrow \operatorname{fft}(\mathcal{X}, n, 3)$ // Applies the $n$-point DFT to each tube.
2: for $i = 1 : n$ do
3:   $[\hat{\mathcal{U}}_{::i}, \hat{\mathcal{S}}_{::i}, \hat{\mathcal{V}}_{::i}] \leftarrow \operatorname{svd}(\hat{\mathcal{X}}_{::i})$ // SVD of each frontal slice.
4: end for
5: $\mathcal{U} \leftarrow \operatorname{ifft}(\hat{\mathcal{U}}, n, 3)$; $\mathcal{S} \leftarrow \operatorname{ifft}(\hat{\mathcal{S}}, n, 3)$; $\mathcal{V} \leftarrow \operatorname{ifft}(\hat{\mathcal{V}}, n, 3)$

D. Proposed Extensions

In order to study the phase angle between matrices, we define a new polar $n$-complex inner product as

$$\langle \mathbf{A}, \mathbf{B} \rangle = \operatorname{Re} \operatorname{tr}(\mathbf{A}\mathbf{B}^*), \quad \mathbf{A}, \mathbf{B} \in \mathbb{K}_n^{l \times m}, \tag{32}$$

and use it to induce the polar $n$-complex Frobenius norm:

$$\|\mathbf{A}\|_F = \sqrt{\langle \mathbf{A}, \mathbf{A} \rangle}. \tag{33}$$

We propose two further isomorphisms for polar $n$-complex matrices via $\xi : \mathbb{K}_n^{l \times m} \to \mathbb{R}^{l \times mn}$ and $\nu : \mathbb{K}_n^{l \times m} \to \mathbb{R}^{lmn}$:

$$\xi(\mathbf{A}) = [\operatorname{Im}_0 \mathbf{A}, \operatorname{Im}_1 \mathbf{A}, \ldots, \operatorname{Im}_{n-1} \mathbf{A}], \tag{34}$$
$$\nu(\mathbf{A}) = \operatorname{vec} \xi(\mathbf{A}). \tag{35}$$

These are the polar $n$-complex matrix counterparts of the tensor unfold and vec operators, respectively.⁶ We end this subsection by enumerating two elementary algebraic properties of $\mathbb{K}_n^{l \times m}$, which will come in handy when we investigate the trace norm later in Theorem 9.

⁵By convention, we denote tensors with calligraphic letters. For a three-dimensional tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, a fiber is a one-dimensional subarray defined by fixing two of the indices, whereas a slice is a two-dimensional subarray defined by fixing one of the indices [23]. The $(i, k, l)$-th element of $\mathcal{A}$ is denoted by $\mathcal{A}_{ikl}$. If we indicate all elements of a one-dimensional subarray using the MATLAB colon notation, then $\mathcal{A}_{:kl}$, $\mathcal{A}_{i:l}$, and $\mathcal{A}_{ik:}$ are called the column, row, and tube fibers, respectively [23]. Similarly, $\mathcal{A}_{i::}$, $\mathcal{A}_{:k:}$, and $\mathcal{A}_{::l}$ are called the horizontal, lateral, and frontal slices, respectively [23]. Notably, Kilmer, Martin, and Perrone [11] reinterpret an $n_1 \times n_2 \times n_3$ tensor as an $n_1 \times n_2$ matrix of tubes (of length $n_3$).
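Algorithm 1 can be transcribed almost line by line into NumPy. The sketch below (our function `t_svd`, using a full SVD per frontal slice) is an illustration rather than a reference implementation:

```python
import numpy as np

def t_svd(X):
    """A sketch of Algorithm 1 (t-SVD [11]): n-point DFT along the tubes
    (axis 2), one SVD per frontal slice, then the inverse DFT."""
    l, m, n = X.shape
    Xh = np.fft.fft(X, axis=2)
    Uh = np.empty((l, l, n), dtype=complex)
    Sh = np.zeros((l, m, n), dtype=complex)
    Vh = np.empty((m, m, n), dtype=complex)
    for i in range(n):  # SVD of each frontal slice, as in steps 2-4.
        u, s, vh = np.linalg.svd(Xh[:, :, i])
        Uh[:, :, i] = u
        Sh[:len(s), :len(s), i] = np.diag(s)
        Vh[:, :, i] = vh.conj().T
    ifft3 = lambda T: np.fft.ifft(T, axis=2)
    return ifft3(Uh), ifft3(Sh), ifft3(Vh)
```

Transforming the three factors back to the Fourier domain and multiplying the frontal slices $\hat{\mathcal{U}}_{::i}\hat{\mathcal{S}}_{::i}\hat{\mathcal{V}}_{::i}^*$ reconstructs $\hat{\mathcal{X}}$ exactly, mirroring (31).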
This is most relevant to our present work when polar $n_3$-complex numbers are seen as tubes.

⁶Column unfolding reshapes the tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ into a matrix $\mathbf{M} \in \mathbb{R}^{n_1 \times n_2 n_3}$ by mapping each tensor element $\mathcal{A}_{ikl}$ into the corresponding matrix element $M_{i, k+(l-1)n_2}$ [23].

The proofs are given below for completeness.

Proposition 2. If $\mathbf{A}, \mathbf{B} \in \mathbb{K}_n^{l \times m}$, then the following holds:
1) $\langle \mathbf{A}, \mathbf{B} \rangle = \operatorname{Re} \operatorname{tr}(\mathbf{A}^* \mathbf{B}) = \nu(\mathbf{A})^T \nu(\mathbf{B})$;
2) $\|\mathbf{A}\|_F^2 = \sum_i |\sigma_i(\mathbf{A})|^2$, where $\sigma_i(\mathbf{A})$ are the singular values of $\mathbf{A}$ obtained from $\operatorname{icft}(\hat{\mathbf{\Sigma}})$ after steps (30) and (31).

Proof.
1) This is a direct consequence of (23) after observing that $\operatorname{Re} \operatorname{tr}(\mathbf{A}\mathbf{B}^*) = \operatorname{Re} \sum_{i,k} A_{ik}\bar{B}_{ik}$. From this we can say that our polar $n$-complex inner product is Euclidean. As a corollary, we have $\|\mathbf{A}\|_F^2 = \sum_{i,k} |A_{ik}|^2$.
2) As the Frobenius norm is invariant under any unitary transformation [24], we can write $\|\mathbf{A}\|_F^2 = \|\mathbf{\Sigma}\|_F^2 = \sum_i |\sigma_i(\mathbf{A})|^2$.

III. EXTENSION TO POLAR n-BICOMPLEX NUMBERS

One problem with the real numbers is that $\sqrt{-1} \notin \mathbb{R}$; that is, they are not algebraically closed. This affects the polar $n$-complex numbers too, since their real and imaginary parts consist of real coefficients only. To impose algebraic closure for certain applications, we can go one step further and use complex coefficients instead. More specifically, we extend the polar $n$-complex algebra by allowing complex coefficients in (6), such that

$$p = a_0 e_0 + a_1 e_1 + \cdots + a_{n-1} e_{n-1} \in \mathbb{CK}_n, \tag{36}$$

where $a_0, a_1, \ldots, a_{n-1} \in \mathbb{C}$. In other words, both the real and imaginary parts of $p$ now contain complex numbers (effectively doubling its dimensions). This constitutes our definition of the polar $n$-bicomplex numbers $\mathbb{CK}_n$. The first imaginary unit is still $e_0 = 1$, and $e_1, \ldots, e_{n-1}$ satisfy the same multiplication table in (7). We can now write $\operatorname{Re} p = \operatorname{Re} a_0$ for the real part of $p$ (note the additional $\operatorname{Re}$) and $\operatorname{Im}_i p = a_i$ for the imaginary parts for $i = 0, 1, \ldots, n-1$ (as before, the imaginary parts include the real part for notational convenience). The modulus then becomes

$$|p| = \sqrt{|a_0|^2 + |a_1|^2 + \cdots + |a_{n-1}|^2}, \tag{37}$$

along with the same $n-1$ angles in (11)–(14). For example, if $g = (1+2j) + (3+4j)e_1 + (5+6j)e_2$, we have $\operatorname{Re} g = 1$, $\operatorname{Im}_0 g = 1+2j$, $\operatorname{Im}_1 g = 3+4j$, $\operatorname{Im}_2 g = 5+6j$, and $|g| = \sqrt{91}$.

The polar $n$-bicomplex numbers are ring-isomorphic to the same matrix representation in (15) and have the same properties (17)–(20). The multiplication can still be done in the Fourier domain if desired. The polar $n$-bicomplex conjugation can be defined in the same manner as (21). Given our new definition of $\operatorname{Re}$, the scalar product is

$$\langle p, q \rangle = \operatorname{Re} p\bar{q}. \tag{38}$$

Note that we still have $\langle p, p \rangle = |p|^2$, because $\operatorname{Re} p\bar{q} = \operatorname{Re} [\chi(p)\chi(q)^*]_{ii} = \operatorname{Re} \sum_{k=1}^{n} [\chi(p)]_{ik} \overline{[\chi(q)]_{ik}}$ for arbitrary $i$, which gives the Euclidean inner product (likewise for $\operatorname{Re} \bar{p}q$). So given $p = \sum_{i=0}^{n-1} a_i e_i$ and $q = \sum_{i=0}^{n-1} b_i e_i$, we now have

$$\operatorname{Re} p\bar{q} = \operatorname{Re} \bar{p}q = \operatorname{Re} \sum_{i=0}^{n-1} a_i \bar{b}_i. \tag{39}$$

A. Polar n-Bicomplex Matrices and Their Isomorphisms

Analogously, we denote the set of $l \times m$ matrices with polar $n$-bicomplex entries by $\mathbb{CK}_n^{l \times m}$. The adjoint matrix of $\mathbf{A} \in \mathbb{CK}_n^{l \times m}$ can be defined similarly via $\chi_{lm} : \mathbb{CK}_n^{l \times m} \to \mathbb{C}^{ln \times mn}$:

$$\chi_{lm}(\mathbf{A}) = \begin{bmatrix} \chi(A_{11}) & \chi(A_{12}) & \cdots & \chi(A_{1m}) \\ \chi(A_{21}) & \chi(A_{22}) & \cdots & \chi(A_{2m}) \\ \vdots & \vdots & \ddots & \vdots \\ \chi(A_{l1}) & \chi(A_{l2}) & \cdots & \chi(A_{lm}) \end{bmatrix}. \tag{40}$$

Next we are going to show that the $\mathbb{C}$-linear map $\chi_{lm}(\mathbf{A}) : \mathbb{C}^{mn} \to \mathbb{C}^{ln}$ operates in the same manner as the $\mathbb{CK}_n$-linear map $\mathbf{A} : \mathbb{CK}_n^m \to \mathbb{CK}_n^l$.

Theorem 3. Let $\mathbf{A} \in \mathbb{CK}_n^{l \times m}$.
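The worked example above is easy to verify numerically. The snippet below stores $g$ by its complex coefficient vector and checks $\operatorname{Re} g$ and the modulus (37):

```python
import numpy as np

# The example g = (1+2j) + (3+4j)e1 + (5+6j)e2 from the text, stored as
# its complex coefficient vector [a_0, a_1, a_2].
g = np.array([1 + 2j, 3 + 4j, 5 + 6j])

re_g = g[0].real                           # Re g = Re a_0 = 1
modulus = np.sqrt(np.sum(np.abs(g) ** 2))  # |g| per (37): sqrt(5+25+61)
```

As in the text, the modulus evaluates to $\sqrt{91}$.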
Then we have:
1) $\chi_{mm}(\mathbf{I}_m) = \mathbf{I}_{mn}$ if $\mathbf{I}_m \in \mathbb{CK}_n^{m \times m}$;
2) $\chi_{lr}(\mathbf{AB}) = \chi_{lm}(\mathbf{A})\chi_{mr}(\mathbf{B})$ if $\mathbf{B} \in \mathbb{CK}_n^{m \times r}$;
3) $\chi_{lm}(\mathbf{A}+\mathbf{B}) = \chi_{lm}(\mathbf{A}) + \chi_{lm}(\mathbf{B})$ if $\mathbf{B} \in \mathbb{CK}_n^{l \times m}$;
4) $\chi_{lm}(\mathbf{A}^*) = \chi_{lm}(\mathbf{A})^*$;
5) $\chi_{lm}(\mathbf{A}^{-1}) = \chi_{lm}(\mathbf{A})^{-1}$ if it exists.

Proof. See Theorem 1.

The polar $n$-bicomplex SVD, inner product, and Frobenius norm can be defined following (31), (32), and (33). The illustration in Table I still applies. The additional isomorphisms are defined via $\xi : \mathbb{CK}_n^{l \times m} \to \mathbb{R}^{l \times 2mn}$ and $\nu : \mathbb{CK}_n^{l \times m} \to \mathbb{R}^{2lmn}$:

$$\xi(\mathbf{A}) = [\operatorname{Re} \operatorname{Im}_0 \mathbf{A}, \operatorname{Im} \operatorname{Im}_0 \mathbf{A}, \ldots, \operatorname{Im} \operatorname{Im}_{n-1} \mathbf{A}], \tag{41}$$
$$\nu(\mathbf{A}) = \operatorname{vec} \xi(\mathbf{A}). \tag{42}$$

Proposition 4. If $\mathbf{A}, \mathbf{B} \in \mathbb{CK}_n^{l \times m}$, then the following holds:
1) $\langle \mathbf{A}, \mathbf{B} \rangle = \operatorname{Re} \operatorname{tr}(\mathbf{A}^* \mathbf{B}) = \nu(\mathbf{A})^T \nu(\mathbf{B})$;
2) $\|\mathbf{A}\|_F^2 = \sum_i |\sigma_i(\mathbf{A})|^2$, where $\sigma_i(\mathbf{A})$ are the singular values of $\mathbf{A}$.

Proof. See Proposition 2.

IV. POLAR n-COMPLEX AND n-BICOMPLEX PCP

PCP algorithms [1], [2] are traditionally implemented by proximal optimization [25], which extends gradient projection to the nonsmooth case. Often, closed-form solutions for the proximity operators are available, like soft-thresholding [26] and singular value thresholding [27] in the real-valued case.

A. Equivalence to Real-Valued Proximal Methods

To fix our notation, recall that the proximity operator of a function $f : \mathbb{R}^m \to \mathbb{R}$ is traditionally defined as [25]:

$$\operatorname{prox}_f \mathbf{z} = \arg\min_{\mathbf{x}} \tfrac{1}{2}\|\mathbf{z} - \mathbf{x}\|_2^2 + f(\mathbf{x}), \quad \mathbf{x} \in \mathbb{R}^m. \tag{43}$$

For $\mathbf{x} \in \mathbb{K}_n^m$ or $\mathbb{CK}_n^m$, we can use $\nu(\mathbf{x})$ instead of $\mathbf{x}$ and adjust $f(\mathbf{x})$ accordingly. As $\|\mathbf{z} - \mathbf{x}\|_2^2$ is invariant under this transformation, we can equivalently extend the domain of $f$ to $\mathbb{K}_n^m$ or $\mathbb{CK}_n^m$ without adjusting $f(\mathbf{x})$ in the following. This equivalence establishes the validity of proximal minimization using polar $n$-complex and $n$-bicomplex matrices directly, without needing to convert to the real domain temporarily.

B.
The Proximity Operator for the $\ell_1$ Norm

We deal with the $\ell_1$- and trace-norm regularizers in order.

Lemma 5 (Yuan and Lin [28]). Let $\{\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(m)}\}$ be a partition of $\mathbf{x}$ such that $\mathbf{x} = \bigcup_{i=1}^{m} \mathbf{x}^{(i)}$. The proximity operator for the group lasso regularizer $\lambda \sum_{i=1}^{m} \|\mathbf{x}^{(i)}\|_2$ is

$$\operatorname{prox}_{\lambda \sum \|\cdot\|_2} \mathbf{z} = \left[ \left( 1 - \frac{\lambda}{\|\mathbf{z}^{(i)}\|_2} \right)_{\!+} \mathbf{z}^{(i)} \right]_{i=1}^{m}, \quad \mathbf{z} \in \mathbb{R}^r, \tag{44}$$

where $(y)_+$ denotes $\max(0, y)$, $[\mathbf{y}_i]_{i=1}^{m} = [\mathbf{y}_1^T, \ldots, \mathbf{y}_m^T]^T$ is a real column vector, and $r$ is the sum of the sizes of the $\mathbf{x}^{(\cdot)}$.

Proof. This result is standard in sparse coding [28], [29].

The group lasso is a variant of sparse coding that promotes group sparsity, i.e., zeroing entire groups of variables at once or not at all. When we put the real and imaginary parts of a polar $n$-complex or $n$-bicomplex number in the same group, group sparsity makes sense, since a number cannot be zero unless all its constituent parts are zero, as in the next theorem.

Theorem 6. The polar $n$-complex or $n$-bicomplex lasso

$$\min_{\mathbf{x}} \tfrac{1}{2}\|\mathbf{z} - \mathbf{x}\|_2^2 + \lambda\|\mathbf{x}\|_1, \quad \mathbf{z}, \mathbf{x} \in \mathbb{F}^m, \tag{45}$$

where $\mathbb{F}$ is $\mathbb{K}_n$ or $\mathbb{CK}_n$, is equivalent to the group lasso

$$\min_{\xi(\mathbf{x})} \tfrac{1}{2}\|\xi(\mathbf{z} - \mathbf{x})\|_F^2 + \lambda\|\xi(\mathbf{x})\|_{1,2}, \tag{46}$$

where $\|\mathbf{A}\|_{1,2}$ is defined as $\sum_i \sqrt{\sum_k |A_{ik}|^2}$.

Proof. The proof is straightforward:

$$\tfrac{1}{2}\|\mathbf{z} - \mathbf{x}\|_2^2 + \lambda\|\mathbf{x}\|_1 = \sum_i \tfrac{1}{2}|z_i - x_i|^2 + \lambda|x_i| = \sum_i \tfrac{1}{2}\|\xi(z_i - x_i)\|_2^2 + \lambda\|\xi(x_i)\|_2 = \tfrac{1}{2}\|\xi(\mathbf{z} - \mathbf{x})\|_F^2 + \lambda\|\xi(\mathbf{x})\|_{1,2}.$$

The first equality invokes the definitions of $|\cdot|$ in (8) and (37), while the second is due to the proposed isomorphisms in (34) and (41).

In other words, we have discovered a method to solve the novel polar $n$-complex or $n$-bicomplex lasso problem using real-valued group lasso solvers. By combining Lemma 5 and Theorem 6, we arrive at the main result of this subsection.

Corollary 7.
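The group soft-thresholding in (44) has a compact vectorized form when each group is stored as one row. The sketch below (our function `prox_group_l1`) zeroes any group whose Euclidean norm falls below $\lambda$ and shrinks the remaining groups radially:

```python
import numpy as np

def prox_group_l1(Z, lam):
    """Groupwise soft-thresholding, a sketch of (44): each row of Z holds
    the real coefficients xi(x_i) of one hypercomplex entry (one group);
    rows with norm <= lam are zeroed, the rest shrink radially."""
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        raw = np.maximum(1.0 - lam / norms, 0.0)
    scale = np.where(norms > 0, raw, 0.0)
    return scale * Z
```

For instance, a group $(3, 4)$ with norm $5$ shrinks by the factor $1 - 2/5 = 0.6$ under $\lambda = 2$, while any group with norm below $2$ is zeroed entirely, which is exactly the all-or-nothing behavior described above.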
For the entrywise $\ell_1$-regularizer $\lambda\|\mathbf{X}\|_1$, where $\mathbf{X}, \mathbf{Z} \in \mathbb{K}_n^{l \times m}$ or $\mathbb{CK}_n^{l \times m}$, we may treat $\mathbf{X}$ as a long hypercomplex vector of length $lm$ without loss of generality. Simply assign each hypercomplex number to its own group $g_i$, for all $1 \leq i \leq lm$ numbers, and we obtain the proximity operator for $\lambda\|\mathbf{X}\|_1$ using (44):

$$\operatorname{prox}^{\mathbb{F}}_{\lambda\|\cdot\|_1} \mathbf{z} = \left( 1 - \frac{\lambda}{|\mathbf{z}|} \right)_{\!+} \mathbf{z}, \quad \mathbf{z} \in \mathbb{F}^{lm}, \tag{47}$$

where $\mathbb{F}$ is $\mathbb{K}_n$ or $\mathbb{CK}_n$ and $\mathbf{z} = \operatorname{vec} \mathbf{Z}$. Here $|\mathbf{z}|$ corresponds to the Euclidean norm in (44), and the grouping should follow the definition of $\xi(\cdot)$ for the respective algebra. Note how each entry corresponds to its real-isomorphic group $\xi(\cdot)$ here.

C. The Proximity Operator for the Trace Norm

Next we treat the trace-norm regularizer. We begin by quoting a classic textbook inequality. In what follows, $\sigma_i(\mathbf{A})$ denotes the singular values of $\mathbf{A}$.

Lemma 8 (von Neumann [24]). For any $\mathbf{A}, \mathbf{B} \in \mathbb{C}^{l \times m}$, the von Neumann trace inequality holds:

$$\operatorname{Re} \operatorname{tr}(\mathbf{A}\mathbf{B}^*) \leq \sum_i \sigma_i(\mathbf{A})\,\sigma_i(\mathbf{B}). \tag{48}$$

Proof. This is a standard textbook result [24].

Theorem 9. For any $\mathbf{A}, \mathbf{B} \in \mathbb{K}_n^{l \times m}$ or $\mathbb{CK}_n^{l \times m}$, the following extension of the von Neumann inequality holds:

$$\operatorname{Re} \operatorname{tr}(\mathbf{A}\mathbf{B}^*) \leq \sum_i |\sigma_i(\mathbf{A})|\,|\sigma_i(\mathbf{B})|. \tag{49}$$

Proof. This theorem embodies the key insight of this paper. Our novel discovery is that we can switch to the block-diagonalized CFT space to separate the sums and switch back:

$$\operatorname{Re} \operatorname{tr}(\hat{\mathbf{A}}\hat{\mathbf{B}}^*) = \sum_{k=0}^{n-1} \operatorname{Re} \operatorname{tr}(\hat{\mathbf{A}}_k \hat{\mathbf{B}}_k^*) \leq \sum_i \sum_{k=0}^{n-1} \sigma_i(\hat{\mathbf{A}}_k)\,\sigma_i(\hat{\mathbf{B}}_k) \leq \sum_i \sqrt{\sum_{k=0}^{n-1} \sigma_i^2(\hat{\mathbf{A}}_k) \sum_{k=0}^{n-1} \sigma_i^2(\hat{\mathbf{B}}_k)} = \sum_i |\sigma_i(\hat{\mathbf{A}})|\,|\sigma_i(\hat{\mathbf{B}})|,$$

where $\hat{\mathbf{A}} = \operatorname{cft}(\mathbf{A})$ and $\hat{\mathbf{B}} = \operatorname{cft}(\mathbf{B})$, respectively. The first inequality is by Lemma 8, and the second is due to the Cauchy–Schwarz inequality. Using Parseval's theorem, the theorem is proved.

Theorem 10.
The pr oximity operator for the polar n -complex or n -bicomplex trace norm λ P i | σ i ( X ) | , assuming X, Z ∈ K l × m n or CK l × m n , is: pro x λ k·k ∗ z = v ec U "  1 − λ | Σ |  + ◦ Σ # V ∗ , z ∈ F lm , (50) wher e z = v ec Z , UΣV ∗ is the SVD of Z with singular values Σ ii = σ i ( Z ) , the absolute value of Σ is computed entrywise, and F is K n or CK n . Pr oof. The proof follows [29] closely except that Theorem 9 allows us to extend the proof to the polar n -complex and n - bicomplex cases. Starting from the Euclidean inner product identity h z − x, z − x i = h z , z i − 2 h z , x i + h x, x i , which is applicable because of Propositions 2 and 4, we ha ve the following inequality: IEEE TRANSA CTIONS ON SIGNAL PROCESSING, V OL. XX, NO. XX, MONTH 2016 7 k Z − X k 2 F = X i | σ i ( Z ) | 2 − 2 h Z , X i + X i | σ i ( X ) | 2 ≥ X i | σ i ( Z ) | 2 − 2 | σ i ( Z ) || σ i ( X ) | + | σ i ( X ) | 2 = X i ( | σ i ( Z ) | − | σ i ( X ) | ) 2 , where Theorem 9 is inv oked on the penultimate line. Thus: 1 2 k Z − X k 2 F + λ X i | σ i ( X ) | ≥ X i 1 2 ( | σ i ( Z ) | − | σ i ( X ) | ) 2 + λ | σ i ( X ) | = 1 2 k| σ ( Z ) | − | σ ( X ) |k 2 2 + λ k σ ( X ) k 1 , which is equiv alent to a lasso problem on the (elementwise) modulus of the singular values of a polar n -complex or n - bicomplex matrix. By applying Corollary 7 to the modulus of the singular values entrywise, the theorem is proved. Unlike the entrywise ` 1 -regularizer , the proximity operator in Theorem 10 first operates on the entire matrix all at once. Once the SVD is computed, the absolute value of its singu- lar values are then calculated entrywise (or real-isomorphic groupwise) to respect the properties of the underlying algebra. D. The Extended F ormulations of PCP W ith the new proximal operators in (47) and (50), we can finally define the polar n -complex and n -bicomplex PCP: min L , S k L k ∗ + λ k S k 1 s.t. 
$X = L + S$, (51) where $X \in K_n^{l\times m}$ for the polar $n$-complex PCP and $X \in \mathbb{C}K_n^{l\times m}$ for the polar $n$-bicomplex PCP. We can solve this by the same algorithms as in [6], except that we should replace the soft-thresholding function

$$S_\lambda[x] = \begin{cases} x - \lambda, & \text{if } x > \lambda, \\ x + \lambda, & \text{if } x < -\lambda, \\ 0, & \text{otherwise}, \end{cases} \qquad (52)$$

with $\operatorname{prox}^{K_n}_{\lambda\|\cdot\|_1}(z)$ and $\operatorname{prox}^{\mathbb{C}K_n}_{\lambda\|\cdot\|_1}(z)$ for the polar $n$-complex and $n$-bicomplex PCP, respectively. The inexact augmented Lagrange multiplier (IALM) method, also known as the alternating direction method of multipliers, is well established in the literature and its convergence has long been proven [30]–[32]. Our adaptation is shown in Algorithm 2. As the constraint $X = L + S$ only uses simple additions, which are elementwise by definition, IALM will continue to work without change (via Propositions 2 and 4). In the original IALM formulation [6], the choice of $Y_1$ is informed by the dual problem, whereas the $\mu_k$'s are incremented geometrically to infinity. We simply follow them here. In theory, any initial value would work, but good guesses converge faster [6]. As for $\mu_k$, any increasing sequence can be used, so long as it satisfies the convergence assumptions $\sum_{k=1}^{\infty} \mu_{k+1}/\mu_k^2 < \infty$ and $\lim_{k\to\infty} \mu_k(S_{k+1} - S_k) = 0$ [6].
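As a concrete reference, the scalar soft-thresholding in (52) and its modulus-based shrinkage in the spirit of (47) can be sketched in a few lines of NumPy. This is only a sketch with our own function names, not the paper's released MATLAB code:

```python
import numpy as np

def soft_threshold(x, lam):
    """Real soft-thresholding S_lambda[x] from (52), applied elementwise."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def modulus_shrink(z, lam):
    """Modulus shrinkage in the spirit of (47): shrink each entry's modulus
    by lam while keeping its direction; reduces to (52) for real inputs."""
    mod = np.abs(z)
    scale = np.maximum(1.0 - lam / np.maximum(mod, np.finfo(float).tiny), 0.0)
    return scale * z
```

For real inputs, modulus_shrink coincides with soft_threshold; for complex (and, groupwise, hypercomplex) entries it preserves the phase while shrinking the modulus, which is exactly the behavior the group lasso prescribes.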
As both $K_n$ and $\mathbb{C}K_n$ are isomorphic to the circulant algebra, the easiest option is to use Gleich's circulant algebra matrix (CAMAT) toolbox [17] to implement the algorithms. However, CAMAT is slightly slow due to unnecessary conversions to and from the frequency domain at each iteration, so we reimplement this algebra from scratch and entirely in the Fourier domain, via (16). See Algorithm 3 for our optimized frequency-domain implementation. The extra $\sqrt{n}$ scaling for the proximity operators is due to the fact that MATLAB's fft is unnormalized.⁷

Algorithm 2 Polar n-(Bi)complex PCP
Input: X ∈ F^{l×m}, F ∈ {K_n, CK_n}, λ ∈ R, µ ∈ R^∞
Output: L_k, S_k
1: Let S_1 = 0, Y_1 = X / max(‖X‖_2, λ⁻¹‖X‖_∞), k = 1
2: while not converged do
3:   L_{k+1} ← prox_{1/µ_k ‖·‖_*}(X − S_k + µ_k⁻¹ Y_k)
4:   S_{k+1} ← prox^F_{λ/µ_k ‖·‖_1}(X − L_{k+1} + µ_k⁻¹ Y_k)
5:   Y_{k+1} ← Y_k + µ_k(X − L_{k+1} − S_{k+1})
6:   k ← k + 1
7: end while

Algorithm 3 Optimized Polar n-(Bi)complex PCP
Input: X ∈ F^{l×m}, F ∈ {K_n, CK_n}, λ ∈ R, µ ∈ R^∞
Output: L, S
1: Let Ŝ = 0, Y = X / max(‖X‖_2, λ⁻¹‖X‖_∞), k = 1
2: X̂ ← fft(X, n, 3) // Applies n-point DFT to each tube.
3: Ŷ ← fft(Y, n, 3)
4: while not converged do
5:   Ẑ ← X̂ − Ŝ + µ_k⁻¹ Ŷ
6:   for i = 1 : n do
7:     [Û_{::i}, Σ̂_{::i}, V̂_{::i}] ← svd(Ẑ_{::i})
8:   end for
9:   Σ̂ ← prox^F_{√n/µ_k ‖·‖_1}(Σ̂)
10:  for i = 1 : n do
11:    L̂_{::i} = Û_{::i} Σ̂_{::i} V̂*_{::i}
12:  end for
13:  Ŝ ← prox^F_{λ√n/µ_k ‖·‖_1}(X̂ − L̂ + µ_k⁻¹ Ŷ)
14:  Ŷ ← Ŷ + µ_k(X̂ − L̂ − Ŝ)
15:  k ← k + 1
16: end while
17: L ← ifft(L̂, n, 3)
18: S ← ifft(Ŝ, n, 3)

V. NUMERICAL SIMULATIONS

To demonstrate the benefit of algebraic closure in polar n-bicomplex numbers (introduced in Section III), we will numerically recover hypercomplex matrices of various ranks from additive noises with different levels of sparsity using hypercomplex PCP.
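The groupwise shrinkage in steps 9 and 13 of Algorithm 3 can be sketched as follows. This is a sketch with our own naming (tube_shrink_freq is not from the paper's code): each tube forms one group, and by Parseval the tube modulus can be read directly in the unnormalized FFT domain, which is where the $\sqrt{n}$ factor comes from:

```python
import numpy as np

def tube_shrink_freq(Zhat, t):
    """Shrink each tube Zhat[i, j, :] as one group, working directly in the
    (unnormalized) FFT domain. By Parseval, the tube's modulus equals
    ||Zhat[i, j, :]||_2 / sqrt(n), hence the sqrt(n) rescaling noted in
    the text."""
    n = Zhat.shape[-1]
    mod = np.sqrt((np.abs(Zhat) ** 2).sum(axis=-1, keepdims=True) / n)
    scale = np.maximum(1.0 - t / np.maximum(mod, np.finfo(float).tiny), 0.0)
    return scale * Zhat
```

Because the whole tube is scaled by a common factor, the shrinkage commutes with the DFT along the tube, so it can be applied entirely in the Fourier domain as Algorithm 3 does.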
Low-rank plus sparse matrices can be generated using Candès et al.'s $XY^* + S$ model [1], where $X$ and $Y$ are $m \times r$ matrices with independent and identically distributed (i.i.d.) Gaussian entries from $\mathcal{N}(0, 1/m)$, $S$ is an $m \times m$ matrix with i.i.d. 0–1 entries from Bernoulli($\rho$) multiplied by uniformly random signs, and $r$ and $\rho$ are the desired rank and sparsity, respectively. To accommodate complex coefficients, we instead use the complex normal distribution $\mathcal{CN}(0, I/m)$ for $X$ and $Y$, and replace the random signs for $S$ with unit-modulus complex numbers whose phases are uniformly distributed. Following [1], we consider square matrices of size $m = 400$. For each $(r, \rho)$ pair, we conduct 10 trials of the following simulation.

⁷ All the code for this paper (including Algorithms 3 and 4) is available at http://mac.citi.sinica.edu.tw/ikala/code.html to support reproducibility.

Fig. 1. Recovery success rates for (a) polar 4-complex embedding, (b) polar 2-bicomplex embedding, and (c) quaternionic embedding. Matrix generation and success criteria are detailed in Section V. From top to bottom: results for ε = 0.1, 0.05, and 0.01, respectively. Grayscale color indicates the fraction of success (white denoting complete success, black denoting total failure). [Figure: 3×3 grid of success-rate plots over r/m ∈ [0, 0.25] and ρ ∈ [0, 0.25].]
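The complexified generation model above can be sketched as follows. This is a sketch only; low_rank_plus_sparse is a hypothetical helper name, and the $\mathcal{CN}(0, 1/m)$ entries are drawn coordinatewise with total variance $1/m$:

```python
import numpy as np

def low_rank_plus_sparse(m, r, rho, rng):
    """Generate M = X Y* + S following the complexified Candes et al. model:
    X, Y have i.i.d. CN(0, 1/m) entries; S has Bernoulli(rho) support with
    unit-modulus entries of uniformly random phase."""
    def cn(shape):  # CN(0, 1/m): real and imaginary parts each N(0, 1/(2m))
        return (rng.standard_normal(shape)
                + 1j * rng.standard_normal(shape)) / np.sqrt(2 * m)
    X, Y = cn((m, r)), cn((m, r))
    L = X @ Y.conj().T                                  # rank-r part
    support = rng.random((m, m)) < rho                  # Bernoulli(rho) mask
    S = support * np.exp(2j * np.pi * rng.random((m, m)))
    return L + S, L, S
```

A trial then feeds the combined matrix M to PCP and compares the recovered low-rank part against L under the relative Frobenius error criterion described below.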
In each trial, we generate two complex matrices, $M_1 = X_1 Y_1^* + S_1$ and $M_2 = X_2 Y_2^* + S_2$, using the complexified model described above. Then we embed the two complex matrices into one hypercomplex matrix by:

1) Polar 4-complex embedding: the matrices are combined into $(\operatorname{Re} M_1) + (\operatorname{Im} M_1) e_1 + (\operatorname{Re} M_2) e_2 + (\operatorname{Im} M_2) e_3$.
2) Polar 2-bicomplex embedding: the matrices are combined into $M_1 + M_2 e_1$.
3) Quaternionic embedding [7]: the matrices are combined into $M_1 + M_2 j$.

For each embedding, we perform PCP with a relative error tolerance of $10^{-7}$, as in [6]. We call the $M_1$ part of the trial a success if the recovered low-rank solution $L_1$ satisfies $\|L_1 - X_1 Y_1^*\|_F / \|X_1 Y_1^*\|_F < \varepsilon$. Likewise, the $M_2$ part of the trial is deemed successful if the recovered $L_2$ satisfies $\|L_2 - X_2 Y_2^*\|_F / \|X_2 Y_2^*\|_F < \varepsilon$. The results are shown in Fig. 1 for $\varepsilon = 0.1$, $0.05$, and $0.01$. The color of each cell indicates the proportion of successful recoveries for each $(r, \rho)$ pair across all 10 trials.

The results suggest that quaternions and polar 2-bicomplex numbers have comparable performance up to a sparsity of about 0.16. Both markedly outperform polar 4-complex numbers for all $\varepsilon$. As we decrease $\varepsilon$ to 0.01, the polar 4-complex numbers fail completely while the other two still work well. It may be argued that the quaternions are better than polar 2-bicomplex numbers for sparsities above 0.16, but their main weakness is that their dimensionality is fixed at 4, so they are less flexible than polar $n$-bicomplex numbers in general. In summary, our simulations have provided clear evidence for the importance of algebraic closure in hypercomplex systems. Next, we will use real data to test the practicality of our proposed algorithms.

VI. EXPERIMENTS

In this section, we use the singing voice separation (SVS) task to evaluate the effectiveness of the polar $n$-bicomplex PCP.
SVS is an instance of blind source separation in the field of music signal processing; its goal is to separate the singing voice component from an audio mixture containing both the singing voice and the instrumental accompaniment (see Fig. 2). For applications such as singer modeling or lyric alignment [33], SVS has been shown to be an important preprocessing step for better performance. We consider SVS in this evaluation because PCP has been found promising for this particular task, showing that to a certain degree the magnitude spectrogram of pop music can be decomposed into a low-rank instrumental component and a sparse vocal component [2].

Fig. 2. Block diagram of a multichannel PCP-SVS system: the mixture is analyzed by an STFT into X, PCP splits X into S (voice) and L (music), and each part is resynthesized by an ISTFT. For our experiments, PCP is either polar n-bicomplex PCP, polar 2n-complex PCP, quaternionic PCP [7], or tensor RPCA [15].

A. Algorithms

The following versions of PCP-SVS are compared:

1) Polar $n$-bicomplex PCP: the $n$-channel audio is represented using $X_1 e_0 + \ldots + X_n e_{n-1}$, where $X_i$ contains the complex spectrogram of the $i$-th channel.
2) Polar $2n$-complex PCP: the $n$-channel audio is represented using $(\operatorname{Re} X_1) e_0 + (\operatorname{Im} X_1) e_1 + \ldots + (\operatorname{Re} X_n) e_{2n-2} + (\operatorname{Im} X_n) e_{2n-1}$, where $X_i$ contains the complex spectrogram of the $i$-th channel.
3) Quaternionic PCP (if applicable) [7]: the two-channel audio is represented using $X_1 + X_2 j$, where $X_i$ contains the complex spectrogram of the $i$-th channel.
4) Tensor RPCA [15]: the same spectrograms are represented by complex matrices of tubes. The tensor RPCA is used, which is defined by:

$$\min_{\mathcal{L},\mathcal{S}} \|\mathcal{L}\|_{TNN} + \lambda\|\mathcal{S}\|_{1,1,2} \quad \text{s.t.} \quad \mathcal{X} = \mathcal{L} + \mathcal{S}, \qquad (53)$$

where $\|\mathcal{L}\|_{TNN}$ is defined as the sum of the singular values of all frontal slices of $\hat{\mathcal{L}}$ (obtained by a Fourier transform along each tube) and $\|\mathcal{S}\|_{1,1,2}$ is defined by $\sum_{i,k} \|\mathcal{S}_{ik:}\|_F$ [15].
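The two norms in (53) can be sketched directly from their definitions (a sketch; the function names are ours):

```python
import numpy as np

def tubal_nuclear_norm(L):
    """||L||_TNN: sum of the singular values of all frontal slices of the
    FFT of L taken along each tube (third axis)."""
    Lhat = np.fft.fft(L, axis=2)
    return sum(np.linalg.svd(Lhat[:, :, k], compute_uv=False).sum()
               for k in range(L.shape[2]))

def l112_norm(S):
    """||S||_{1,1,2}: sum over (i, k) of the Frobenius norms of the tubes
    S[i, k, :]."""
    return np.sqrt((np.abs(S) ** 2).sum(axis=2)).sum()
```

For tubes of length n = 1 these reduce to the ordinary nuclear norm and the entrywise $\ell_1$ norm, so (53) degenerates to the classical PCP objective.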
To facilitate comparison with polar $n$-bicomplex PCP, we retrofit (53) into our framework:

$$\min_{L,S} \|\operatorname{cft}(L)\|_* + \lambda\|S\|_1 \quad \text{s.t.} \quad X = L + S, \qquad (54)$$

where $X \in K_n^{l\times m}$ is the input. Our optimized implementation is shown in Algorithm 4, where all calculations are done in the frequency domain.

B. Datasets

The following datasets will be used:

1) The MSD100 dataset from the 2015 Signal Separation Evaluation Campaign (SiSEC).⁸ The dataset is composed of 100 full stereo songs of different styles and includes the synthesized mixtures and the original sources of voice and instrumental accompaniment. To reduce computations, we use only 30-second fragments (1'45" to 2'15") clipped from each song, which is the only period where all 100 songs contain vocals. The MSD100 songs are divided into 50 development songs and 50 test songs, but SiSEC requires testing to be done on both sets. We will follow their convention here.
2) The Single- and Multichannel Audio Recordings Database (SMARD).⁹ This dataset contains 48 measurement configurations with 20 audio recordings each [34].

⁸ http://corpus-search.nii.ac.jp/sisec/2015/MUS/MSD100_2.zip
⁹ http://www.smard.es.aau.dk/

Algorithm 4 Optimized Tensor RPCA (cf. [15])
Input: X ∈ F^{l×m}, F ∈ {K_n, CK_n}, λ ∈ R, µ ∈ R^∞
Output: L, S
1: Let Ŝ = 0, Y = X / max(‖X‖_2, λ⁻¹‖X‖_∞), k = 1
2: X̂ ← fft(X, n, 3) // Applies n-point DFT to each tube.
3: Ŷ ← fft(Y, n, 3)
4: while not converged do
5:   Ẑ ← X̂ − Ŝ + µ_k⁻¹ Ŷ
6:   for i = 1 : n do
7:     [Û_{::i}, Σ̂, V̂_{::i}] ← svd(Ẑ_{::i})
8:     Σ̂ ← S_{1/µ_k}[Σ̂]
9:     L̂_{::i} = Û_{::i} Σ̂ V̂*_{::i}
10:  end for
11:  Ŝ ← prox^F_{λ√n/µ_k ‖·‖_1}(X̂ − L̂ + µ_k⁻¹ Ŷ)
12:  Ŷ ← Ŷ + µ_k(X̂ − L̂ − Ŝ)
13:  k ← k + 1
14: end while
15: L ← ifft(L̂, n, 3)
16: S ← ifft(Ŝ, n, 3)

SMARD configurations consist of four digits (ABCD): A denotes the loudspeaker equipment used, B denotes the loudspeaker location, C denotes the microphone type, and D denotes the microphone array locations. To simulate real-life recordings, we require that voice and music come from different point sources, that is, B = 0 for voice and 1 for music, or vice versa. Secondly, we require C = 2 for circular microphone arrays, because they are better for spatial surround audio recording. Further, we choose the first circular array, which is closest to the sources, giving us six audio channels. Finally, we require voice and music to have the same A and D so that it makes sense to mix the signals. For each chosen configuration, we mix the first 30 seconds of soprano with the first 30 seconds of each of the music signals (clarinet, trumpet, xylophone, ABBA, bass flute, guitar, violin) at 0 dB signal-to-noise ratio. For soprano, we pad zeros until it reaches 30 seconds; for music, we loop it until it reaches 30 seconds. This creates a repeating music accompaniment mixed with sparser vocals. We single out two configurations as the training set (music from 2020 with soprano from 2120, music from 2021 with soprano from 2121), while using the remaining 10 configurations for testing.

For both datasets, we downsample the songs to 22 050 Hz to reduce memory usage; then we use a short-time Fourier transform (STFT) with a 1411-point Hann window with 75% overlap, as in [35].

C.
Parameters and Evaluation

Following [6], the convergence criterion is $\|X - L_k - S_k\|_F / \|X\|_F < 10^{-7}$, and $\mu$ is defined by $\mu_0 = 1.25/\|X\|_2$ and $\mu_{k+1} = 1.5\mu_k$. The value of $c$ is determined by a grid search on the training set and is found to be 3 for SiSEC and 2 for SMARD (1 for SMARD with tensor RPCA).

The quality of separation is assessed by the BSS Eval toolbox version 3.0¹⁰ in terms of the signal-to-distortion ratio (SDR), source-image-to-spatial-distortion ratio (ISR), source-to-interference ratio (SIR), and sources-to-artifacts ratio (SAR), for the vocal and the instrumental parts, respectively [36]. BSS Eval decomposes each estimated source $h$ into four components (assuming that the admissible distortion is a time-invariant filter [37]):

$$\hat{s}_h = s^{\text{true}}_h + e^{\text{spat}}_h + e^{\text{interf}}_h + e^{\text{artif}}_h, \qquad (55)$$

where $\hat{s}$ is the estimated source, $s^{\text{true}}$ is the true source, $e^{\text{spat}}$ is the spatial distortion for multichannel signals, $e^{\text{interf}}$ is the interference from other sources, and $e^{\text{artif}}$ is the artifacts of the source separation algorithm, such as musical noise. The metrics are then computed as follows [36]:

$$\text{SDR}_h = 20 \log_{10} \frac{\|s^{\text{true}}_h\|}{\|\hat{s}_h - s^{\text{true}}_h\|}, \qquad (56)$$

$$\text{ISR}_h = 20 \log_{10} \frac{\|s^{\text{true}}_h\|}{\|e^{\text{spat}}_h\|}, \qquad (57)$$

$$\text{SIR}_h = 20 \log_{10} \frac{\|s^{\text{true}}_h + e^{\text{spat}}_h\|}{\|e^{\text{interf}}_h\|}, \qquad (58)$$

$$\text{SAR}_h = 20 \log_{10} \frac{\|\hat{s}_h - e^{\text{artif}}_h\|}{\|e^{\text{artif}}_h\|}. \qquad (59)$$

All these measures are energy ratios expressed in decibels; higher values indicate better separation quality. During parameter tuning, $h$ is dropped and the measures are averaged over all sources. From the SDR we also calculate the normalized SDR (NSDR) by computing the improvement in SDR using the mixture itself as the baseline [38]. We compute these measures for each song and then report the average result (denoted by the G prefix) for both the instrumental (L) and vocal (S) parts.
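Given the decomposition in (55), these metrics are straightforward energy ratios; for instance, (56) and the NSDR of [38] can be sketched as follows (a sketch with our own function names, not the BSS Eval toolbox API):

```python
import numpy as np

def sdr(s_true, s_est):
    """Signal-to-distortion ratio in dB, per (56)."""
    return 20 * np.log10(np.linalg.norm(s_true) /
                         np.linalg.norm(s_est - s_true))

def nsdr(s_true, s_est, mixture):
    """Normalized SDR: the improvement in SDR over using the raw mixture
    itself as the estimate [38]."""
    return sdr(s_true, s_est) - sdr(s_true, mixture)
```

Every 20 dB of SDR corresponds to one order of magnitude reduction in the residual's norm relative to the true source.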
The most important metric is GNSDR, which measures the overall improvement in source separation performance.

D. Results

The results for the MSD100 dataset are shown in Table II. The best results are highlighted in bold. Broadly speaking, polar 2-bicomplex PCP has the highest GNSDR in both L and S, followed by polar 4-complex PCP. Both are also slightly better than tensor RPCA on all other performance measures except GISR and GSAR in L. Overall, the result for L is better than that for S because the instruments in this dataset are usually louder than the vocals (as reflected by the GSDR). It can be observed that the GNSDRs for polar n-(bi)complex PCP are not inferior to that of quaternionic PCP, suggesting that they are good candidates for PCP with four-dimensional signals.

For the SMARD dataset, the results are presented in Table III. Both of our proposed algorithms are equally competitive, and both clearly outperform tensor RPCA in terms of GNSDR, GSDR, and GSIR. When we break down the results by configuration, we find that polar n-(bi)complex PCP is better than tensor RPCA in 8 out of 10 configurations.

¹⁰ http://bass-db.gforge.inria.fr/

TABLE II
RESULTS FOR MSD100, INSTRUMENTAL (L) AND VOCAL (S), IN DB

                           GNSDR   GSDR    GISR    GSIR    GSAR
Polar 2-bicomplex PCP  L    5.01   10.36   19.22   10.68   23.57
                       S    3.20   -1.33    2.63    9.02    0.44
Polar 4-complex PCP    L    5.00   10.35   19.19   10.67   23.59
                       S    3.18   -1.35    2.62    9.00    0.43
Quaternionic PCP       L    5.00   10.35   18.91   10.71   23.25
                       S    3.15   -1.38    2.75    8.32    0.57
Tensor RPCA            L    4.78   10.12   22.80   10.13   26.03
                       S    2.91   -1.62    1.32    8.53   -0.64

TABLE III
RESULTS FOR SMARD, INSTRUMENTAL (L) AND VOCAL (S), IN DB

                           GNSDR   GSDR    GISR    GSIR    GSAR
Polar 6-bicomplex PCP  L    2.20    5.35   11.53    7.63   15.47
                       S    2.37    2.83    5.82    7.31   12.49
Polar 12-complex PCP   L    2.21    5.36   11.51    7.63   15.46
                       S    2.37    2.84    5.82    7.32   12.50
Tensor RPCA            L    1.42    4.57    9.55    6.83   16.65
                       S    1.58    2.05    5.85    3.06   14.20

VII.
DISCUSSION AND CONCLUSION

We believe that we have demonstrated the superiority of our proposed hypercomplex algorithms. Theoretically, tensor RPCA [15] computes the nuclear norm in the CFT space (54), which is probably due to an erroneous belief that the CFT is unitary and thus does not change anything [39]. However, as t-SVD is based on the circulant algebra, where the singular values are also circulants, the two trace norms are not equivalent. As a result, one should not omit the ICFT, as tensor RPCA does. This omission is difficult to detect because tensors themselves do not have enough algebraic structure to guide us. In contrast, our formulation includes both the CFT and ICFT steps while computing the SVD of a polar n-bicomplex matrix, as described in the paragraph after (31), which does not violate the underlying circulant algebra. This observation hints at a new role for hypercomplex algebras: to provide additional algebraic structures that serve as a new foundation for tensor factorization.

By way of example, let us consider Olariu's other work, the planar n-complex numbers, which have a skew-circulant representation [16]. As skew circulants are diagonalizable by the skew DFT,¹¹ a new kind of t-SVD can be derived easily (see Algorithm 5). Here sft and isft stand for the skew DFT and inverse skew DFT, respectively.

Algorithm 5 t-SVD with a Skew-Circulant Representation
Input: X ∈ C^{l×m×n}
Output: U, S, V
1: X̂ ← sft(X, n, 3) // We use the skew DFT instead.
2: for i = 1 : n do
3:   [Û_{::i}, Ŝ_{::i}, V̂_{::i}] ← svd(X̂_{::i})
4: end for
5: U ← isft(Û, n, 3); S ← isft(Ŝ, n, 3); V ← isft(V̂, n, 3)

¹¹ The skew DFT of $[a_0, a_1, \ldots, a_{n-1}]^T$ is $[A_0, A_1, \ldots, A_{n-1}]^T$, where $A_k = \sum_{i=0}^{n-1} a_i e^{-\pi i j (2k+1)/n}$ (with $j$ the imaginary unit) for $k = 0, 1, \ldots, n-1$ [40].
What is more, the above procedure can be trivially extended to any commutative group algebra,¹² since the matrix representation of a commutative group algebra is diagonalizable by the DFT matrix for the algebra [41], viz. $F_{n_1} \otimes \cdots \otimes F_{n_m}$, where $n_1$ to $n_m$ can be uniquely determined [42]. In other words, we get the commutative group algebraic t-SVD simply by reinterpreting fft and ifft in Algorithm 1 according to the algebra's DFT matrix, for which fast algorithms are available [42]. Going even further, we conjecture that the most fruitful results for hypercomplex SVD may originate from regular semigroup algebras (i.e., by relaxing the group axioms of identity and invertibility to that of pseudoinvertibility [43]). By doing so, we gain a much larger modeling space (see Table IV), which may be desirable for data-fitting applications. At present, harmonic analysis on semigroups [44] is still relatively unexplored in tensor signal processing.

Regarding the hyperbolic numbers and tessarines that Alfsmann has recommended, we find that both of them share the same circulant representation [10]:

$$\begin{pmatrix} a_0 & a_1 \\ a_1 & a_0 \end{pmatrix}, \qquad (60)$$

where $a_0, a_1 \in \mathbb{R}$ for the hyperbolic numbers and $a_0, a_1 \in \mathbb{C}$ for the tessarines. Thus, the hyperbolic numbers are isomorphic to $K_2$ whereas the tessarines are isomorphic to $\mathbb{C}K_2$. Interestingly, the seminal paper on tessarine SVD [45] advocated the $e_1$–$e_2$ form to simplify computations, where the inputs are transformed with $(a_0, a_1) \mapsto (a_0 + a_1, a_0 - a_1)$, the SVDs are performed, and the outputs are transformed back with $(A_0, A_1) \mapsto ((A_0 + A_1)/2, (A_0 - A_1)/2)$. If we look closely, these are actually Fourier transform pairs (as used in Algorithm 1), hence the tessarine SVD can be considered a special case of t-SVD with $n = 2$. It can also be observed that, when $n = 1$, the polar $n$-complex and polar $n$-bicomplex PCP degenerate into the real and complex PCP, respectively.
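The skew DFT of footnote 11 can be sketched by pre-modulating the input and reusing an ordinary FFT. This is a sketch only; sft/isft follow the naming of Algorithm 5 but operate on a single tube:

```python
import numpy as np

def sft(a):
    """Skew DFT: A_k = sum_i a_i * exp(-pi*1j*i*(2k+1)/n), computed by
    modulating a_i with exp(-pi*1j*i/n) and applying an ordinary FFT."""
    n = len(a)
    i = np.arange(n)
    return np.fft.fft(a * np.exp(-1j * np.pi * i / n))

def isft(A):
    """Inverse skew DFT: undo the FFT, then undo the modulation."""
    n = len(A)
    i = np.arange(n)
    return np.fft.ifft(A) * np.exp(1j * np.pi * i / n)
```

A tube-wise version of these along the third axis is what Algorithm 5 denotes by sft(·, n, 3) and isft(·, n, 3).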
It should be emphasized that the complex numbers are not in $K_n$; therefore we have introduced $\mathbb{C}K_n$ for algebraic closure, and its importance has been confirmed by our numerical simulations. We further note that the two families of $2^N$-dimensional hypercomplex numbers introduced by Alfsmann [10] are also commutative group algebras, diagonalizable by the Walsh–Hadamard transform matrices $F_2 \otimes \cdots \otimes F_2$ [10], [41].

To conclude, we have extended the PCP to the polar $n$-complex and $n$-bicomplex algebras, with good results. Both algebras are representationally compact (they do not require $2^N$ dimensions) and are computationally efficient in Fourier space. We have found it beneficial to incorporate hypercomplex algebraic structures while defining the trace norm. More concretely, we have proven an extended von Neumann theorem, together with an adaptation of the group lasso, which in concert enable us to formulate and solve the hypercomplex PCP problem. In doing so, we are able to incorporate the correct algebraic structures into the objective function itself. We have demonstrated that the hypercomplex approach is useful because it can: 1) inform t-SVD-related algorithms by imposing relevant algebraic structures; and 2) generate new families of t-SVDs beyond the circulant algebra.

¹² Hypercomplex algebras where the real and imaginary units obey the commutative group axioms, including associativity, commutativity, identity, and invertibility.

TABLE IV
NUMBER OF DISTINCT (SEMI)GROUPS OF ORDERS UP TO 9. FROM THE ON-LINE ENCYCLOPEDIA OF INTEGER SEQUENCES, HTTP://OEIS.ORG/A000688 AND HTTP://OEIS.ORG/A001427

Order n   Commutative groups   Regular semigroups
   1               1                       1
   2               1                       3
   3               1                       9
   4               2                      42
   5               1                     206
   6               1                   1 352
   7               1                  10 168
   8               3                  91 073
   9               2                 925 044
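For Alfsmann's $2^N$-dimensional families, the fft/ifft in Algorithm 1 would be reinterpreted as the Walsh–Hadamard transform $F_2 \otimes \cdots \otimes F_2$. A sketch of such a transform with a standard butterfly follows (wht is our own name; since the transform matrix $H$ satisfies $H^2 = nI$, its inverse is simply wht divided by n):

```python
import numpy as np

def wht(x, axis=-1):
    """Unnormalized Walsh-Hadamard transform (matrix F_2 x ... x F_2)
    along one axis whose length must be a power of two."""
    x = np.moveaxis(np.asarray(x), axis, -1) * 1.0  # promote dtype and copy
    n = x.shape[-1]
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[..., i:i + h].copy()
            b = x[..., i + h:i + 2 * h].copy()
            x[..., i:i + h] = a + b
            x[..., i + h:i + 2 * h] = a - b
        h *= 2
    return np.moveaxis(x, -1, axis)
```

Replacing fft with wht (and ifft with wht/n) in the slice-wise SVD recipe of Algorithm 1 yields the commutative group algebraic t-SVD for this $2^N$-dimensional family.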
We have also established that tessarine SVD is a special case of t-SVD, and that the $2^N$-hypercomplex family of Alfsmann is amenable to a straightforward extension of t-SVD, which we call the commutative group algebraic t-SVD. Having formulated the first proper PCP algorithm on cyclic algebras, we would recommend more crossover attempts between the hypercomplex and tensor-based approaches for future work.

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers for their numerous helpful suggestions.

REFERENCES

[1] E. J. Candès, X. Li, Y. Ma, and J. Wright, "Robust principal component analysis?" J. ACM, vol. 58, no. 3, pp. 1–37, 2011.
[2] P.-S. Huang, S. D. Chen, P. Smaragdis, and M. Hasegawa-Johnson, "Singing-voice separation from monaural recordings using robust principal component analysis," in Proc. IEEE Int. Conf. Acoust., Speech and Signal Process., 2012, pp. 57–60.
[3] Y. Ikemiya, K. Yoshii, and K. Itoyama, "Singing voice analysis and editing based on mutually dependent F0 estimation and source separation," in Proc. IEEE Int. Conf. Acoust., Speech and Signal Process., 2015, pp. 574–578.
[4] Y. Peng, A. Ganesh, J. Wright, W. Xu, and Y. Ma, "RASL: Robust alignment by sparse and low-rank decomposition for linearly correlated images," in Proc. IEEE Comput. Soc. Conf. Comput. Vision and Pattern Recognition, 2010, pp. 763–770.
[5] T. Bouwmans and E. H. Zahzah, "Robust PCA via principal component pursuit: A review for a comparative evaluation in video surveillance," Computer Vision and Image Understanding, vol. 122, pp. 22–34, 2014.
[6] Z. Lin, M. Chen, L. Wu, and Y. Ma, "The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices," Tech. Rep. UILU-ENG-09-2215, 2009.
[7] T.-S. T. Chan and Y.-H. Yang, "Complex and quaternionic principal component pursuit and its application to audio separation," IEEE Signal Process. Lett., vol. 23, no. 2, pp.
287–291, 2016.
[8] I. L. Kantor and A. S. Solodovnikov, Hypercomplex Numbers. New York: Springer, 1989.
[9] P. Lounesto, Clifford Algebras and Spinors. Cambridge: Cambridge University Press, 2001.
[10] D. Alfsmann, "On families of 2^N-dimensional hypercomplex algebras suitable for digital signal processing," in Proc. European Signal Process. Conf., 2006.
[11] M. E. Kilmer, C. D. Martin, and L. Perrone, "A third-order generalization of the matrix SVD as a product of third-order tensors," Tech. Rep. TR-2008-4, 2008.
[12] K. Braman, "Third-order tensors as linear operators on a space of matrices," Linear Algebra and its Applicat., vol. 433, pp. 1241–1253, 2010.
[13] M. E. Kilmer and C. D. Martin, "Factorization strategies for third-order tensors," Linear Algebra and its Applicat., vol. 435, pp. 641–658, 2011.
[14] M. E. Kilmer, K. Braman, N. Hao, and R. C. Hoover, "Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging," SIAM J. Matrix Anal. Applicat., vol. 34, no. 1, pp. 148–172, 2013.
[15] Z. Zhang, G. Ely, S. Aeron, N. Hao, and M. E. Kilmer, "Novel methods for multilinear data completion and de-noising based on tensor-SVD," in Proc. IEEE Comput. Soc. Conf. Comput. Vision and Pattern Recognition, 2014, pp. 3842–3849.
[16] S. Olariu, Complex Numbers in N Dimensions. Amsterdam: Elsevier, 2002.
[17] D. F. Gleich, C. Greif, and J. M. Varah, "The power and Arnoldi methods in an algebra of circulants," Numerical Linear Algebra Applicat., vol. 20, pp. 809–831, 2013.
[18] P. J. Davis, Circulant Matrices. New York: Wiley, 1979.
[19] D. P. Mandic, C. Jahanchahi, and C. C. Took, "A quaternion gradient operator and its applications," IEEE Signal Process. Lett., vol. 18, no. 1, pp. 47–50, 2011.
[20] I. Kra and S. R. Simanca, "On circulant matrices," Notices Amer. Math. Soc., vol. 59, no.
3, p. 368, 2012.
[21] P. M. Cohn, Further Algebra and Applications. London: Springer, 2003.
[22] J. Granata, M. Conner, and R. Tolimieri, "The tensor product: A mathematical programming language for FFTs and other fast DSP operations," IEEE Signal Process. Mag., vol. 9, no. 1, pp. 40–48, 1992.
[23] T. G. Kolda and B. W. Bader, "Tensor decompositions and applications," SIAM Review, vol. 51, no. 3, pp. 455–500, 2009.
[24] R. A. Horn and C. R. Johnson, Matrix Analysis, 2nd ed. Cambridge: Cambridge University Press, 2013.
[25] P. L. Combettes and J.-C. Pesquet, "Proximal splitting methods in signal processing," in Fixed-Point Algorithms for Inverse Problems in Science and Engineering, H. H. Bauschke, R. S. Burachik, P. L. Combettes, V. Elser, D. R. Luke, and H. Wolkowicz, Eds. New York: Springer, 2011, vol. 49, pp. 185–212.
[26] D. L. Donoho, "De-noising by soft-thresholding," IEEE Trans. Inf. Theory, vol. 41, no. 3, pp. 613–627, 1995.
[27] J.-F. Cai, E. J. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM J. Optimization, vol. 20, no. 4, pp. 1956–1982, 2010.
[28] M. Yuan and Y. Lin, "Model selection and estimation in regression with grouped variables," J. Roy. Stat. Soc. B, vol. 68, no. 1, pp. 49–67, 2006.
[29] R. Tomioka, T. Suzuki, and M. Sugiyama, "Augmented Lagrangian methods for learning, selecting, and combining features," in Optimization for Machine Learning, S. Sra, S. Nowozin, and S. J. Wright, Eds. Cambridge, MA: MIT Press, 2012, pp. 255–285.
[30] P. L. Lions and B. Mercier, "Splitting algorithms for the sum of two nonlinear operators," SIAM J. Numerical Anal., vol. 16, no. 6, pp. 964–979, 1979.
[31] J. Eckstein and D. P. Bertsekas, "On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators," Math. Programming, vol. 55, pp. 293–318, 1992.
[32] S. Kontogiorgis and R. R.
Meyer, "A variable-penalty alternating directions method for convex optimization," Math. Programming, vol. 83, pp. 29–53, 1998.
[33] B. Zhu, W. Li, R. Li, and X. Xue, "Multi-stage non-negative matrix factorization for monaural singing voice separation," IEEE Trans. Audio, Speech, Language Process., vol. 21, no. 10, pp. 2096–2107, 2013.
[34] J. K. Nielsen, J. R. Jensen, S. H. Jensen, and M. G. Christensen, "The single- and multichannel audio recordings database (SMARD)," in Proc. Int. Workshop Acoust. Signal Enhancement, 2014, pp. 40–44.
[35] T.-S. Chan, T.-C. Yeh, Z.-C. Fan, H.-W. Chen, L. Su, Y.-H. Yang, and R. Jang, "Vocal activity informed singing voice separation with the iKala dataset," in Proc. IEEE Int. Conf. Acoust., Speech and Signal Process., 2015, pp. 718–722.
[36] E. Vincent, S. Araki, F. Theis, G. Nolte, P. Bofill, H. Sawada, A. Ozerov, V. Gowreesunker, D. Lutter, and N. Q. K. Duong, "The signal separation evaluation campaign (2007–2010): Achievements and remaining challenges," Signal Process., vol. 92, pp. 1928–1936, 2012.
[37] E. Vincent, R. Gribonval, and C. Fevotte, "Performance measurement in blind audio source separation," IEEE Trans. Audio, Speech, Language Process., vol. 14, no. 4, pp. 1462–1469, 2006.
[38] C.-L. Hsu and J.-S. Jang, "On the improvement of singing voice separation for monaural recordings using the MIR-1K dataset," IEEE Trans. Audio, Speech, Language Process., vol. 18, no. 2, pp. 310–319, 2010.
[39] O. Semerci, N. Hao, M. E. Kilmer, and E. L. Miller, "Tensor-based formulation and nuclear norm regularization for multienergy computed tomography," IEEE Trans. Image Process., vol. 23, no. 4, pp. 1678–1693, 2014.
[40] I. J. Good, "Skew circulants and the theory of numbers," The Fibonacci Quarterly, vol. 24, no. 2, pp. 47–60, 1986.
[41] M. Clausen and U. Baum, Fast Fourier Transforms. Mannheim: BI-Wissenschaftsverlag, 1993.
[42] G. Apple and P.
Wintz, "Calculation of Fourier transforms on finite Abelian groups (Corresp.)," IEEE Trans. Inf. Theory, vol. 16, no. 2, pp. 233–234, 1970.
[43] M. Kilp, U. Knauer, and A. V. Mikhalev, Monoids, Acts and Categories with Applications to Wreath Products and Graphs: A Handbook for Students and Researchers. Berlin: Walter de Gruyter, 2000.
[44] C. Berg, J. P. R. Christensen, and P. Ressel, Harmonic Analysis on Semigroups: Theory of Positive Definite and Related Functions. New York: Springer, 1984.
[45] S.-C. Pei, J.-H. Chang, J.-J. Ding, and M.-Y. Chen, "Eigenvalues and singular value decompositions of reduced biquaternion matrices," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 55, no. 9, pp. 2673–2685, 2008.

Tak-Shing T. Chan (M'15) received the Ph.D. degree from the University of London in 2008. From 2006 to 2008, he was a Scientific Programmer at the University of Sheffield. In 2011, he worked as a Research Associate at the Hong Kong Polytechnic University. He is currently a Postdoctoral Fellow at Academia Sinica, Taiwan. His research interests include signal processing, cognitive informatics, distributed computing, pattern recognition, and hypercomplex analysis.

Yi-Hsuan Yang (M'11) received the Ph.D. degree in communication engineering from National Taiwan University in 2010. Since 2011, he has been affiliated with Academia Sinica as an Assistant Research Fellow. He is also an Adjunct Assistant Professor with National Cheng Kung University. His research interests include music information retrieval, machine learning, and affective computing. Dr. Yang was a recipient of the 2011 IEEE Signal Processing Society (SPS) Young Author Best Paper Award, the 2012 ACM Multimedia Grand Challenge First Prize, the 2014 Ta-You Wu Memorial Research Award of the Ministry of Science and Technology, Taiwan, and the 2014 IEEE ICME Best Paper Award.
He is an author of the book Music Emotion Recognition (CRC Press, 2011) and was a tutorial speaker on music affect recognition at the International Society for Music Information Retrieval Conference (ISMIR 2012). In 2014, he served as a Technical Program Co-Chair of ISMIR, and as a Guest Editor of the IEEE TRANSACTIONS ON AFFECTIVE COMPUTING and the ACM Transactions on Intelligent Systems and Technology.
