Consensus and Sectioning-based ADMM with Norm-1 Regularization for Imaging with a Compressive Reflector Antenna


Authors: Juan Heredia-Juesas, Ali Molaei, Luis Tirado, and José Á. Martínez-Lorenzo

Consensus and Sectioning-based ADMM with Norm-1 Regularization for Imaging with a Compressive Reflector Antenna

Juan Heredia-Juesas$^{1,2}$, Ali Molaei$^1$, Luis Tirado$^1$, and José Á. Martínez-Lorenzo$^{1,2}$

Abstract—This paper presents three distributed techniques to find a sparse solution of the underdetermined linear problem $\mathbf{g} = \mathbf{Hu}$ with a norm-1 regularization, based on the Alternating Direction Method of Multipliers (ADMM). These techniques divide the matrix $\mathbf{H}$ into submatrices by rows, by columns, or by both rows and columns, leading to the so-called consensus-based ADMM, sectioning-based ADMM, and consensus and sectioning-based ADMM, respectively. These techniques are applied in particular to millimeter-wave imaging through the use of a Compressive Reflector Antenna (CRA). The CRA is a hardware device designed to increase the sensing capacity of an imaging system and reduce the mutual information among measurements, allowing effective imaging of sparse targets with the use of Compressive Sensing (CS) techniques. Consensus-based ADMM has been proved to accelerate the imaging process, and sectioning-based ADMM has been shown to greatly reduce the amount of information exchanged among the computational nodes. In this paper, the mathematical formulation and graphical interpretation of these two techniques, together with the consensus and sectioning-based ADMM approach, are presented. The imaging quality, the imaging time, the convergence, and the communication efficiency among the nodes are analyzed and compared. The distributed capabilities of the ADMM-based approaches, together with the high sensing capacity of the CRA, allow the imaging of metallic targets in a 3D domain in quasi-real time with a reduced amount of information exchanged among the nodes.

Index Terms—Compressive Antenna, distributed ADMM, node communications, norm-1 regularization, real-time imaging.
I. INTRODUCTION

SEVERAL numerical techniques have been developed in the past decades for solving problems defined by a linear matrix equation [1]

$$\mathbf{g} = \mathbf{Hu}, \qquad (1)$$

where $\mathbf{g} \in \mathbb{C}^m$ and $\mathbf{H} \in \mathbb{C}^{m \times n}$ are the known data, and $\mathbf{u} \in \mathbb{C}^n$ is the unknown vector to be determined. These techniques can be classified into direct and iterative methods. Direct methods are capable of finding an exact solution of the equation (if one exists) in a finite number of operations, but they may require an impractical amount of time. Iterative methods theoretically converge asymptotically to a solution after an infinite number of iterations; however, an approximate solution, depending on the tolerance defined, can be achieved in a reduced amount of time. In both cases, the inversion of the matrix $\mathbf{H}$ or the matrix $\mathbf{H}^*\mathbf{H}$ is a problem that needs to be addressed as well, and both direct and iterative methods have been proposed to this end [1]–[3].

$^1$Departments of Electrical & Computer Engineering, Northeastern University, Boston, MA, USA. jmartinez@coe.neu.edu
$^2$Departments of Mechanical & Industrial Engineering, Northeastern University, Boston, MA, USA.

Despite the power enhancement of computational units, which reduces operation times, the growth of data in recent years leads to a preference for iterative methods. Additionally, the presence of uncertainties or noise in the data is better addressed by iterative methods, since they find an approximate solution, that is, a solution within bounded limits. These uncertainties can be modeled by adding a noise vector $\mathbf{w} \in \mathbb{C}^m$ to Eqn. (1) as follows:

$$\mathbf{g} = \mathbf{Hu} + \mathbf{w}. \qquad (2)$$

Distributed techniques [1], [4]–[10] allow assigning small pieces of information to several computational nodes, which solve smaller problems in a parallel and fast fashion and exchange their results to obtain a final solution.
These distributed techniques may relieve the computational load and speed up the convergence, but they introduce the problem of communication among the computational nodes, which also has to be addressed [11]–[17].

Regarding the properties of the unknown vector, of particular interest in recent years are those underdetermined problems ($m \ll n$) in which the solution sought is sparse; that is, $\|\mathbf{u}\|_0 \ll n$, where $\|\cdot\|_0$ represents the number of non-zero elements of the vector. These types of problems are generally solved via Compressive Sensing (CS) techniques by adding a norm-1 regularization, such as Bayesian Compressive Sampling (BCS) [18], the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) [19], Nesterov's Algorithm (NESTA) [20], or the Alternating Direction Method of Multipliers (ADMM) [4], [5], [14], [21]–[23].

This paper presents three iterative and distributed optimization techniques based on the ADMM to find a sparse solution of Eqn. (2) when the norm-1 regularization is applied. These techniques exploit the distributed capabilities of the ADMM by dividing the matrix $\mathbf{H}$ into submatrices and solving the problem on several computational nodes. Dividing the matrix into submatrices by rows has been shown in [5] to reduce the time for finding a solution. In [24], [25] it has been proved that if the matrix is divided by columns, the amount of information to be shared among the computational units is greatly reduced. This paper presents the mathematical formulation and graphical interpretation of the combination of the two previous techniques, with the aim of introducing more degrees of freedom for designing an appropriate optimization architecture.

Although these formulations are valid for any problem that can be represented in terms of Eqn. (2), this paper shows their performance for a millimeter-wave imaging application through the use of a Compressive Reflector Antenna (CRA).
A CRA is a hardware device for increasing the sensing capacity of the imaging system, allowing a reduced number of measurements to be collected when performing imaging with norm-1 regularized CS techniques [26]–[28]. In this case, $\mathbf{H} \in \mathbb{C}^{N_m \times N_p}$ is called the sensing matrix, $\mathbf{g} \in \mathbb{C}^{N_m}$ is the vector of measurements, and $\mathbf{u} \in \mathbb{C}^{N_p}$ is the unknown vector of reflectivity, where $N_m$ represents the number of measurements collected and $N_p$ the number of pixels of the imaging domain.

This paper is organized as follows: Section II introduces the algorithm, properties, and conditions of the ADMM. Section III develops the mathematical formulation, the graphical interpretation, and the convergence process of the three presented methods for solving Eqn. (2):

• Consensus-based ADMM. Dividing the sensing matrix into submatrices by rows.
• Sectioning-based ADMM. Dividing the sensing matrix into submatrices by columns.
• Consensus and sectioning-based ADMM. Dividing the sensing matrix into submatrices by rows and columns.

Section IV studies the communications among the computational nodes, comparing the amount of information exchanged by one single node at one iteration for the three different techniques. Section V briefly introduces the description and operation of the CRA. The particular configuration and numerical results are shown in Section VI, where the imaging quality, imaging time, convergence, and communication efficiency among the computational nodes are compared and discussed for the three proposed techniques. The paper concludes in Section VII.

II. GENERAL FORMULATION OF THE ADMM

The ADMM is an optimization algorithm for convex functions that takes advantage of both the decomposability of dual ascent, splitting the objective function into simpler objectives, and the convergence properties of the method of multipliers, which relaxes the conditions on the objective function.
The general representation of the ADMM takes the following optimization form [4], [21]:

$$\text{minimize } f_1(\mathbf{u}) + f_2(\mathbf{v}) \quad \text{s.t. } \mathbf{Pu} + \mathbf{Qv} = \mathbf{c}, \qquad (3)$$

where the known matrices $\mathbf{P} \in \mathbb{C}^{p \times n}$ and $\mathbf{Q} \in \mathbb{C}^{p \times q}$, and the vector $\mathbf{c} \in \mathbb{C}^p$, determine the constraint over the unknown variable vectors $\mathbf{u} \in \mathbb{C}^n$ and $\mathbf{v} \in \mathbb{C}^q$. The convex functions $f_1$ and $f_2$ have to be extended real-valued functions, that is,

$$f_1: \mathbb{C}^n \to \mathbb{R} \cup \{+\infty\}, \qquad (4a)$$
$$f_2: \mathbb{C}^q \to \mathbb{R} \cup \{+\infty\}, \qquad (4b)$$

and they have to be closed and proper; that is, their effective domain (where they take non-infinite values) has to be non-empty, and they never reach $-\infty$. Mathematically:

$$\exists\, \mathbf{u} \in \mathrm{Dom}\{f\} \;|\; f(\mathbf{u}) < +\infty, \text{ and} \qquad (5a)$$
$$f(\mathbf{u}) > -\infty, \ \forall \mathbf{u} \in \mathrm{Dom}\{f\}. \qquad (5b)$$

The optimal value of (3) may be denoted by $t^\star$ as

$$t^\star = \inf\big\{ f_1(\mathbf{u}) + f_2(\mathbf{v}) \;|\; \mathbf{Pu} + \mathbf{Qv} = \mathbf{c} \big\}. \qquad (6)$$

Taking advantage of the method of multipliers [29], the augmented Lagrangian form of this problem is defined as follows:

$$L_\rho(\mathbf{u}, \mathbf{v}, \mathbf{d}) = f_1(\mathbf{u}) + f_2(\mathbf{v}) + \mathbf{d}^T(\mathbf{Pu} + \mathbf{Qv} - \mathbf{c}) + \frac{\rho}{2}\|\mathbf{Pu} + \mathbf{Qv} - \mathbf{c}\|_2^2, \qquad (7)$$

where $\mathbf{d} \in \mathbb{C}^p$ is the Lagrange multiplier or dual variable, and $\rho > 0$ is the augmented parameter. A more convenient expression of the augmented Lagrangian can be achieved by the following simple algebraic transformation:

$$\mathbf{d}^T\mathbf{r} + \frac{\rho}{2}\|\mathbf{r}\|_2^2 = \frac{\rho}{2}\|\mathbf{r} + \mathbf{s}\|_2^2 - \frac{\rho}{2}\|\mathbf{s}\|_2^2, \qquad (8)$$

for $\mathbf{r} = \mathbf{Pu} + \mathbf{Qv} - \mathbf{c}$, and $\mathbf{s} = \frac{1}{\rho}\mathbf{d}$ being the scaled dual variable. Based on this, the general iterative algorithm of the ADMM is described as

$$\mathbf{u}^{(k+1)} := \underset{\mathbf{u}}{\operatorname{argmin}}\; L_\rho\big(\mathbf{u}, \mathbf{v}^{(k)}, \mathbf{s}^{(k)}\big), \qquad (9a)$$
$$\mathbf{v}^{(k+1)} := \underset{\mathbf{v}}{\operatorname{argmin}}\; L_\rho\big(\mathbf{u}^{(k+1)}, \mathbf{v}, \mathbf{s}^{(k)}\big), \qquad (9b)$$
$$\mathbf{s}^{(k+1)} := \mathbf{s}^{(k)} + \big(\mathbf{Pu}^{(k+1)} + \mathbf{Qv}^{(k+1)} - \mathbf{c}\big). \qquad (9c)$$

The fact that $f_1$ and $f_2$ are defined over different variables allows the optimization of $\mathbf{u}$ and $\mathbf{v}$ in an alternating-direction fashion.

Two metrics are defined for evaluating the convergence of the ADMM algorithm.
The primal residual, which measures the residual of the constraint, and the dual residual, which measures the change of the dual-variable optimization between two consecutive iterations, are defined at iteration $k$, respectively, as follows [4]:

$$\mathbf{r}_p^{(k)} = \mathbf{Pu}^{(k)} + \mathbf{Qv}^{(k)} - \mathbf{c}, \qquad (10a)$$
$$\mathbf{r}_d^{(k)} = \rho\,\mathbf{P}^T\mathbf{Q}\big(\mathbf{v}^{(k)} - \mathbf{v}^{(k-1)}\big). \qquad (10b)$$

III. ADMM DISTRIBUTED SOLVING METHODS

ADMM is a convenient method when applying CS to solve Eqn. (2). Under the assumption that the sensing matrix $\mathbf{H}$ satisfies the Restricted Isometry Property (RIP) [30], [31], and that the unknown vector $\mathbf{u}$ is sparse—that is, the number of non-zero elements $N_{nz}$ is much smaller than the total number of elements, $N_{nz} \ll n$—Eqn. (2) can be solved by minimizing the sum of the convex function $f_1(\mathbf{u}) = \frac{1}{2}\|\mathbf{Hu} - \mathbf{g}\|_2^2$ and the norm-1 regularization $f_2(\mathbf{v}) = \lambda\|\mathbf{v}\|_1$. The particular ADMM formulation for solving Eqn. (2) takes the lasso form:

$$\text{minimize } \tfrac{1}{2}\|\mathbf{Hu} - \mathbf{g}\|_2^2 + \lambda\|\mathbf{v}\|_1 \quad \text{s.t. } \mathbf{u} - \mathbf{v} = \mathbf{0}. \qquad (11)$$

The constraint—defined with $\mathbf{P} = \mathbf{I}$, $\mathbf{Q} = -\mathbf{I}$, and $\mathbf{c} = \mathbf{0}$—enforces that the variables $\mathbf{u}$ and $\mathbf{v}$ are equal.

Since the dimensions of the sensing matrix $\mathbf{H}$ can be very large—having many pixels in the imaging domain and/or many collected measurements—a direct resolution of problem (11) is not usually efficient. Some techniques have been proposed for solving this problem in a distributed fashion using the ADMM, such as [5] or [24], for solving fast imaging problems; or [7], [14], for solving a communications problem in the dual space. In this paper, three different methods, focused on solving imaging problems in the primal space, are presented. The aim is to find a sparse solution of Eqn. (2) while reducing the amount of information exchanged among the nodes, as well as the computational complexity and time.
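Specialized to the lasso form (11), the iterations (9a)–(9c) reduce to a ridge solve, an element-wise soft threshold, and a dual update. The following is a minimal sketch, assuming NumPy and a small real-valued random instance (the paper's sensing matrices are complex, and all sizes and parameter values here are arbitrary choices):

```python
import numpy as np

def soft_threshold(a, kappa):
    """Element-wise soft-thresholding operator S_kappa(a)."""
    return np.sign(a) * np.maximum(np.abs(a) - kappa, 0.0)

def admm_lasso(H, g, lam=0.05, rho=1.0, iters=200):
    """Solve min 0.5||Hu-g||^2 + lam||v||_1 s.t. u = v, via scaled ADMM."""
    m, n = H.shape
    v = np.zeros(n)
    s = np.zeros(n)                                  # scaled dual variable
    A = np.linalg.inv(H.T @ H + rho * np.eye(n))     # cached ridge factor
    for _ in range(iters):
        u = A @ (H.T @ g + rho * (v - s))            # u-update: ridge solve
        v = soft_threshold(u + s, lam / rho)         # v-update: sparsifying step
        s = s + u - v                                # dual ascent
    return u, v

# Small sparse-recovery demo: 3 non-zeros out of 60 unknowns, 30 measurements.
rng = np.random.default_rng(1)
H = rng.standard_normal((30, 60))
u_true = np.zeros(60); u_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
g = H @ u_true
u, v = admm_lasso(H, g)
print(np.linalg.norm(u - v))  # primal residual, small at convergence
```

The `u`-update caches a single $n \times n$ inverse; the distributed variants below exist precisely because this step becomes impractical for large $N_p$.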
A. Consensus-based ADMM: Row-wise division

As presented in [5], problem (11) can be solved in a distributed fashion by splitting the original matrix $\mathbf{H}$ into $M$ submatrices $\mathbf{H}_i \in \mathbb{C}^{\frac{N_m}{M} \times N_p}$ in a row division, and the vector of measurements $\mathbf{g}$ into $M$ subvectors $\mathbf{g}_i \in \mathbb{C}^{\frac{N_m}{M}}$, as shown in Fig. 1a. Then, $M$ different underdetermined problems $\mathbf{H}_i\mathbf{u} = \mathbf{g}_i$, for $i = 1, \ldots, M$, need to be solved. In particular, the summation of all of them may be optimized together with the norm-1 regularization as follows:

$$\text{minimize } \frac{1}{2}\sum_{i=1}^{M}\|\mathbf{H}_i\mathbf{u} - \mathbf{g}_i\|_2^2 + \lambda\|\mathbf{v}\|_1 \quad \text{s.t. } \mathbf{u} = \mathbf{v}. \qquad (12)$$

In order to make the optimizations independent, $M$ replicas of the unknown variable $\mathbf{u}$ may be defined as $\mathbf{u}^i$ for $i = 1, \ldots, M$, turning expression (12) into

$$\text{minimize } \frac{1}{2}\sum_{i=1}^{M}\|\mathbf{H}_i\mathbf{u}^i - \mathbf{g}_i\|_2^2 + \lambda\|\mathbf{v}\|_1 \quad \text{s.t. } \mathbf{u}^i = \mathbf{v}, \ \forall i \in \{1, \ldots, M\}. \qquad (13)$$

The augmented Lagrangian function for this problem is as follows:

$$L_\rho\big(\mathbf{u}^1, \ldots, \mathbf{u}^M, \mathbf{v}, \mathbf{s}^1, \ldots, \mathbf{s}^M\big) = \frac{1}{2}\sum_{i=1}^{M}\|\mathbf{H}_i\mathbf{u}^i - \mathbf{g}_i\|_2^2 + \lambda\|\mathbf{v}\|_1 + \frac{\rho}{2}\sum_{i=1}^{M}\|\mathbf{u}^i - \mathbf{v} + \mathbf{s}^i\|_2^2 - \frac{\rho}{2}\sum_{i=1}^{M}\|\mathbf{s}^i\|_2^2, \qquad (14)$$

where a dual variable $\mathbf{s}^i$ is introduced for each of the $M$ constraints. The augmented parameter $\rho$ enforces the convexity of the Lagrangian function. By iterating the following scheme, an optimal solution may be found:

$$\mathbf{u}^{i,(k+1)} = \big(\mathbf{H}_i^*\mathbf{H}_i + \rho\mathbf{I}_{N_p}\big)^{-1}\big(\mathbf{H}_i^*\mathbf{g}_i + \rho(\mathbf{v}^{(k)} - \mathbf{s}^{i,(k)})\big), \qquad (15a)$$
$$\mathbf{v}^{(k+1)} = S_{\frac{\lambda}{M\rho}}\big(\bar{\mathbf{u}}^{(k+1)} + \bar{\mathbf{s}}^{(k)}\big), \qquad (15b)$$
$$\mathbf{s}^{i,(k+1)} = \mathbf{s}^{i,(k)} + \mathbf{u}^{i,(k+1)} - \mathbf{v}^{(k+1)}, \qquad (15c)$$

where $\bar{\mathbf{u}}$ and $\bar{\mathbf{s}}$ represent the mean of $\mathbf{u}^i$ and $\mathbf{s}^i$, respectively, over all values of $i$; $\mathbf{I}_{N_p}$ indicates the identity matrix of size $N_p$; and $S_\kappa(a)$ is the element-wise soft-thresholding operator [32]:

$$S_\kappa(a) = \begin{cases} a - \kappa\,\mathrm{sign}(a), & |a| > \kappa \\ 0, & |a| \le \kappa. \end{cases} \qquad (16)$$

Fig. 1. (a) Division of the matrix equation system by rows.
(b) Architecture of the consensus-based ADMM: a central node collects the updates of $M$ sub-nodes, computes the soft-thresholding operator of their mean, and then distributes the solution back to the sub-nodes. (c) Graphical interpretation of the row-wise division: $M$ independent images are optimized with few data allocated to each node. The final image is an average-like combination of all of them.

The matrix inversion lemma [33] may be applied for the computation of the term $(\mathbf{H}_i^*\mathbf{H}_i + \rho\mathbf{I}_{N_p})^{-1}$, as shown in Eqn. (17). Therefore, only $M$ matrices of reduced size $\frac{N_m}{M} \times \frac{N_m}{M}$, instead of $M$ large matrices of size $N_p \times N_p$, need to be inverted, greatly accelerating the algorithm:

$$\big(\mathbf{H}_i^*\mathbf{H}_i + \rho\mathbf{I}_{N_p}\big)^{-1} = \frac{\mathbf{I}_{N_p}}{\rho} - \frac{\mathbf{H}_i^*}{\rho^2}\Big(\mathbf{I}_{\frac{N_m}{M}} + \frac{\mathbf{H}_i\mathbf{H}_i^*}{\rho}\Big)^{-1}\mathbf{H}_i. \qquad (17)$$

In terms of convergence, the primal and dual residuals are computed, respectively, as follows:

$$\mathbf{r}_p^{(k)} = \big(\mathbf{u}^{1,(k)} - \mathbf{v}^{(k)}, \ldots, \mathbf{u}^{M,(k)} - \mathbf{v}^{(k)}\big), \qquad (18a)$$
$$\mathbf{r}_d^{(k)} = -\rho\big(\mathbf{v}^{(k)} - \mathbf{v}^{(k-1)}, \ldots, \mathbf{v}^{(k)} - \mathbf{v}^{(k-1)}\big), \qquad (18b)$$

and their squared norms are

$$\|\mathbf{r}_p^{(k)}\|_2^2 = \sum_{i=1}^{M}\|\mathbf{u}^{i,(k)} - \mathbf{v}^{(k)}\|_2^2, \qquad (19a)$$
$$\|\mathbf{r}_d^{(k)}\|_2^2 = \rho^2 M\,\|\mathbf{v}^{(k)} - \mathbf{v}^{(k-1)}\|_2^2, \qquad (19b)$$

noticing that Eqn. (19a) can be interpreted as a measure of the lack of consensus. It can be noticed in expressions (13) and (15b) that the variable $\mathbf{v}$ acts as a consensus, forcing all variables $\mathbf{u}^i$ to converge to the same solution. The architecture of this algorithm can be interpreted as a hierarchical structure, having a central node that collects the individual solutions from each sub-node, performs the soft-thresholding averaging, and then broadcasts the global solution to each sub-node, as represented in Fig. 1b. The purpose of this technique is to compute $M$ independent images with a small amount of data allocated to each node, and then create the final image as an average-like combination of the intermediate results, in the manner that Fig. 1c shows.
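Updates (15a)–(15c) can be sketched as follows, with the $\mathbf{u}$-step implemented through the inversion lemma of Eqn. (17) so that only $\frac{N_m}{M} \times \frac{N_m}{M}$ inverses are ever formed. This is a minimal real-valued NumPy illustration; the number of replicas $M$, all sizes, and the parameter values are arbitrary choices:

```python
import numpy as np

def soft_threshold(a, kappa):
    return np.sign(a) * np.maximum(np.abs(a) - kappa, 0.0)

def consensus_admm(H, g, M=3, lam=0.05, rho=1.0, iters=300):
    """Row-wise (consensus-based) ADMM for 0.5||Hu-g||^2 + lam||v||_1."""
    m, n = H.shape
    Hs = np.split(H, M)                  # H_i: row blocks
    gs = np.split(g, M)                  # g_i: matching measurement blocks
    # Woodbury form of (H_i* H_i + rho I)^{-1}, Eqn. (17): only the small
    # (m/M x m/M) matrices are inverted.
    small_inv = [np.linalg.inv(np.eye(m // M) + Hi @ Hi.T / rho) for Hi in Hs]
    U = np.zeros((M, n)); S = np.zeros((M, n)); v = np.zeros(n)
    for _ in range(iters):
        for i in range(M):               # Eqn. (15a), one replica per node
            b = Hs[i].T @ gs[i] + rho * (v - S[i])
            U[i] = b / rho - Hs[i].T @ (small_inv[i] @ (Hs[i] @ b)) / rho**2
        # Eqn. (15b): soft threshold of the averaged replicas and duals.
        v = soft_threshold(U.mean(axis=0) + S.mean(axis=0), lam / (M * rho))
        S += U - v                       # Eqn. (15c): one dual per replica
    return U, v

rng = np.random.default_rng(2)
H = rng.standard_normal((30, 60))
u_true = np.zeros(60); u_true[[5, 20, 40]] = [1.5, -1.0, 2.0]
g = H @ u_true
U, v = consensus_admm(H, g)
print(max(np.linalg.norm(U[i] - v) for i in range(3)))  # consensus gap
```

In a real deployment each row of `U` would live on its own node, and only `U[i]` and `v` would cross the network, which is exactly the $2N_p$ per-node cost analyzed in Sect. IV.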
As shown in [5], this technique greatly reduces the computational cost, producing real-time imaging; however, it has the drawback of sharing the global solution $\mathbf{v}^{(k)}$ from the central node to each sub-node, and the whole individual solution $\mathbf{u}^{i,(k+1)}$ from each sub-node to the central node, at each iteration. These vectors are of the size of the total number of pixels in the imaging domain and may be very large, producing slow communication among the computational nodes.

B. Sectioning-based ADMM: Column-wise division

A different approach for finding a solution of problem (11) is to divide the original matrix $\mathbf{H}$ into $N$ submatrices $\mathbf{H}_j \in \mathbb{C}^{N_m \times \frac{N_p}{N}}$ on a column basis and, accordingly, the vector of unknowns $\mathbf{u}$ into $N$ subvectors $\mathbf{u}_j \in \mathbb{C}^{\frac{N_p}{N}}$, as done in [24]. This segmentation makes the problem take the form $\sum_{j=1}^{N}\mathbf{H}_j\mathbf{u}_j = \sum_{j=1}^{N}\hat{\mathbf{g}}_j = \mathbf{g}$, which requires the introduction of the so-called estimated data vectors $\hat{\mathbf{g}}_j$, as represented in Fig. 2a. The problem is optimized, together with the norm-1 regularization, as follows:

$$\text{minimize } \frac{1}{2}\Big\|\sum_{j=1}^{N}\mathbf{H}_j\mathbf{u}_j - \mathbf{g}\Big\|_2^2 + \lambda\sum_{j=1}^{N}\|\mathbf{v}_j\|_1 \quad \text{s.t. } \mathbf{u}_j = \mathbf{v}_j, \ \forall j \in \{1, \ldots, N\}. \qquad (20)$$

The augmented Lagrangian for this problem is defined over $3N$ variables as in the following expression:

$$L_\rho\big(\mathbf{u}_1, \ldots, \mathbf{u}_N, \mathbf{v}_1, \ldots, \mathbf{v}_N, \mathbf{s}_1, \ldots, \mathbf{s}_N\big) = \frac{1}{2}\Big\|\sum_{j=1}^{N}\mathbf{H}_j\mathbf{u}_j - \mathbf{g}\Big\|_2^2 + \lambda\sum_{j=1}^{N}\|\mathbf{v}_j\|_1 + \frac{\rho}{2}\sum_{j=1}^{N}\|\mathbf{u}_j - \mathbf{v}_j + \mathbf{s}_j\|_2^2 - \frac{\rho}{2}\sum_{j=1}^{N}\|\mathbf{s}_j\|_2^2, \qquad (21)$$

where, again, $\mathbf{s}_j$ is the dual variable introduced for each constraint $j$, and $\rho$ is the augmented parameter. This problem can be solved by the following iterative scheme:

$$\mathbf{u}_j^{(k+1)} = \big(\mathbf{H}_j^*\mathbf{H}_j + \rho\mathbf{I}_{\frac{N_p}{N}}\big)^{-1}\big(\mathbf{H}_j^*\mathbf{g}_j^{(k)} + \rho(\mathbf{v}_j^{(k)} - \mathbf{s}_j^{(k)})\big), \qquad (22a)$$
$$\mathbf{v}_j^{(k+1)} = S_{\frac{\lambda}{\rho}}\big(\mathbf{u}_j^{(k+1)} + \mathbf{s}_j^{(k)}\big), \qquad (22b)$$
$$\mathbf{s}_j^{(k+1)} = \mathbf{s}_j^{(k)} + \mathbf{u}_j^{(k+1)} - \mathbf{v}_j^{(k+1)}, \qquad (22c)$$

where $\mathbf{g}_j^{(k)}$, required for computing Eqn.
(22a), is obtained as

$$\mathbf{g}_j^{(k)} = \mathbf{g} - \sum_{\substack{q=1 \\ q \ne j}}^{N}\mathbf{H}_q\mathbf{u}_q^{(k)} = \mathbf{g} - \sum_{\substack{q=1 \\ q \ne j}}^{N}\hat{\mathbf{g}}_q^{(k)}, \qquad (23)$$

and it corresponds to the fraction of the data assigned to the update of segment $j$ of the vector $\mathbf{u}$, taking into account the estimated data computed by the remaining nodes. $S_\kappa(a)$ is the soft-thresholding operator as defined in Eqn. (16). In case $N_m < \frac{N_p}{N}$, the matrix inversion lemma can be applied to the term $(\mathbf{H}_j^*\mathbf{H}_j + \rho\mathbf{I}_{\frac{N_p}{N}})^{-1}$ as follows:

$$\big(\mathbf{H}_j^*\mathbf{H}_j + \rho\mathbf{I}_{\frac{N_p}{N}}\big)^{-1} = \frac{\mathbf{I}_{\frac{N_p}{N}}}{\rho} - \frac{\mathbf{H}_j^*}{\rho^2}\Big(\mathbf{I}_{N_m} + \frac{\mathbf{H}_j\mathbf{H}_j^*}{\rho}\Big)^{-1}\mathbf{H}_j. \qquad (24)$$

In this case, only $N$ matrices of size $N_m \times N_m$ need to be inverted. However, if $N_m > \frac{N_p}{N}$, the original inversion is computationally more efficient.

Fig. 2. (a) Division of the matrix equation system by columns. The measurements vector is decomposed into $N$ estimated vectors. (b) Architecture of the sectioning-based ADMM: the problem is split into $N$ nodes that each optimize a part of the image. At each iteration, they share the small estimated data vector with the remaining nodes. (c) Graphical interpretation of the ADMM column-wise division: the image is sectioned into $N$ regions. The final image is the concatenation of all of them.

In terms of convergence, the primal and dual residual vectors for this technique are computed, respectively, as follows:

$$\mathbf{r}_p^{(k)} = \big(\mathbf{u}_1^{(k)} - \mathbf{v}_1^{(k)}, \ldots, \mathbf{u}_N^{(k)} - \mathbf{v}_N^{(k)}\big), \qquad (25a)$$
$$\mathbf{r}_d^{(k)} = -\rho\big(\mathbf{v}_1^{(k)} - \mathbf{v}_1^{(k-1)}, \ldots, \mathbf{v}_N^{(k)} - \mathbf{v}_N^{(k-1)}\big); \qquad (25b)$$

and their squared norms are

$$\|\mathbf{r}_p^{(k)}\|_2^2 = \sum_{j=1}^{N}\|\mathbf{u}_j^{(k)} - \mathbf{v}_j^{(k)}\|_2^2, \qquad (26a)$$
$$\|\mathbf{r}_d^{(k)}\|_2^2 = \rho^2\sum_{j=1}^{N}\|\mathbf{v}_j^{(k)} - \mathbf{v}_j^{(k-1)}\|_2^2. \qquad (26b)$$

It can be deduced from the analysis of Eqns. (22a) and (23) that, to perform the $\mathbf{u}_j^{(k+1)}$ optimizations, each computational node $j$ needs the submatrix $\mathbf{H}_j$, the whole vector $\mathbf{g}$, and the estimated data coming from the remaining nodes, $\hat{\mathbf{g}}_q^{(k)}$ for $q \ne j$.
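Scheme (22a)–(22c), with the estimated data of Eqn. (23), can be sketched as follows. This is a minimal real-valued NumPy illustration with arbitrary sizes; for simplicity the segments are updated in a sequential sweep, whereas the paper's nodes would all use the previous iterate in parallel, and since $N_m > \frac{N_p}{N}$ in this example the direct inverse is used rather than the lemma (24):

```python
import numpy as np

def soft_threshold(a, kappa):
    return np.sign(a) * np.maximum(np.abs(a) - kappa, 0.0)

def sectioning_admm(H, g, N=3, lam=0.05, rho=1.0, iters=300):
    """Column-wise (sectioning-based) ADMM for 0.5||Hu-g||^2 + lam||v||_1."""
    m, n = H.shape
    Hs = np.split(H, N, axis=1)                 # H_j: column blocks
    nb = n // N                                 # segment length N_p / N
    us = [np.zeros(nb) for _ in range(N)]
    vs = [np.zeros(nb) for _ in range(N)]
    ss = [np.zeros(nb) for _ in range(N)]
    # Here m > nb, so the direct inverse of Eqn. (22a) is the cheaper option.
    invs = [np.linalg.inv(Hj.T @ Hj + rho * np.eye(nb)) for Hj in Hs]
    for _ in range(iters):
        for j in range(N):
            # Eqn. (23): subtract the other segments' estimated data.
            gj = g - sum(Hs[q] @ us[q] for q in range(N) if q != j)
            us[j] = invs[j] @ (Hs[j].T @ gj + rho * (vs[j] - ss[j]))  # (22a)
            vs[j] = soft_threshold(us[j] + ss[j], lam / rho)          # (22b)
            ss[j] = ss[j] + us[j] - vs[j]                             # (22c)
    return np.concatenate(us), np.concatenate(vs)

rng = np.random.default_rng(3)
H = rng.standard_normal((30, 60))
u_true = np.zeros(60); u_true[[2, 25, 50]] = [2.0, -1.5, 1.0]
g = H @ u_true
u, v = sectioning_admm(H, g)
print(np.linalg.norm(u - v))
```

Note that the only quantities a node would broadcast are the length-$N_m$ products `Hs[j] @ us[j]`, which is the source of this method's communication savings.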
Therefore, this problem can be interpreted as a fully-connected network of $N$ nodes that individually optimize each fragment $\mathbf{u}_j^{(k+1)}$ and thereupon introduce their update into the network in the form of the estimated data $\hat{\mathbf{g}}_j^{(k+1)} = \mathbf{H}_j\mathbf{u}_j^{(k+1)} \in \mathbb{C}^{N_m}$, creating a non-hierarchical architecture, as the one represented in Fig. 2b. This approach can be illustrated as a sectioning of the imaging domain: the unknown vector $\mathbf{u}$ is split into $N$ subvectors, each $\mathbf{u}_j$ corresponding to a specific region of the image, as schematized in Fig. 2c. These regions may be predetermined by the user through an appropriate division of the unknown vector $\mathbf{u}$; the matrix $\mathbf{H}$ is then divided accordingly. The final imaging solution is accomplished by connecting the $N$ optimizations, $\mathbf{u} = [\mathbf{u}_1; \ldots; \mathbf{u}_N]$.

This technique takes advantage of the image sectioning, since the communication among the nodes requires sharing only the small vectors $\hat{\mathbf{g}}_q^{(k)} \in \mathbb{C}^{N_m}$ at each iteration $k$. However, it lacks the acceleration achieved by the row-wise division, for two main reasons: (i) for small values of $N$, the inversion of the matrices in Eqn. (24) might be expensive; and (ii) for large values of $N$, the known vector of measurements $\mathbf{g}$ is highly scattered into the $N$ estimations $\hat{\mathbf{g}}_j$, causing slow computation at each iteration because of the matrix-vector products.

C. Consensus and sectioning-based ADMM: Row and column-wise division

A combination of the two previous approaches may be performed by dividing the matrix $\mathbf{H}$ into $M \cdot N$ submatrices $\mathbf{H}_{ij} \in \mathbb{C}^{\frac{N_m}{M} \times \frac{N_p}{N}}$, the vector of measurements $\mathbf{g}$ into $M$ subvectors $\mathbf{g}_i \in \mathbb{C}^{\frac{N_m}{M}}$, and the unknown vector $\mathbf{u}$ into $N$ subvectors $\mathbf{u}_j \in \mathbb{C}^{\frac{N_p}{N}}$, as shown in Fig. 3. Now, $M$ underdetermined problems $\sum_{j=1}^{N}\mathbf{H}_{ij}\mathbf{u}_j = \sum_{j=1}^{N}\hat{\mathbf{g}}_{ij} = \mathbf{g}_i$, for $i = 1, \ldots, M$, need to be solved.
Applying the same technique as in the division by rows, that is, minimizing the summation of all of them and creating $M$ replicas of each segment $j$ of the unknown vector $\mathbf{u}$, namely $\mathbf{u}_j^i$, the problem may be optimized, together with the norm-1 regularization, as follows:

$$\text{minimize } \frac{1}{2}\sum_{i=1}^{M}\Big\|\sum_{j=1}^{N}\mathbf{H}_{ij}\mathbf{u}_j^i - \mathbf{g}_i\Big\|_2^2 + \lambda\sum_{j=1}^{N}\|\mathbf{v}_j\|_1 \quad \text{s.t. } \mathbf{u}_j^i = \mathbf{v}_j, \ \forall i \in \{1, \ldots, M\}, \ \forall j \in \{1, \ldots, N\}. \qquad (27)$$

Notice that this problem has $M \cdot N$ equality constraints.

Fig. 3. Division of the matrix equation system by rows and columns. The measurements vector is divided into $M$ subvectors, and each of them is decomposed into $N$ estimated vectors.

The augmented Lagrangian function for this problem, with $(2M + 1)N$ variables, is expressed in the next equation:

$$L_\rho\big(\mathbf{u}_1^1, \ldots, \mathbf{u}_N^M, \mathbf{v}_1, \ldots, \mathbf{v}_N, \mathbf{s}_1^1, \ldots, \mathbf{s}_N^M\big) = \frac{1}{2}\sum_{i=1}^{M}\Big\|\sum_{j=1}^{N}\mathbf{H}_{ij}\mathbf{u}_j^i - \mathbf{g}_i\Big\|_2^2 + \lambda\sum_{j=1}^{N}\|\mathbf{v}_j\|_1 + \frac{\rho}{2}\sum_{i=1}^{M}\sum_{j=1}^{N}\|\mathbf{u}_j^i - \mathbf{v}_j + \mathbf{s}_j^i\|_2^2 - \frac{\rho}{2}\sum_{i=1}^{M}\sum_{j=1}^{N}\|\mathbf{s}_j^i\|_2^2, \qquad (28)$$

where $\mathbf{s}_j^i$ is the dual variable for the constraint with indices $i$ and $j$, and $\rho$ is, as in the previous cases, the augmented parameter. This problem can be solved by the following iterative scheme:

$$\mathbf{u}_j^{i,(k+1)} = \big(\mathbf{H}_{ij}^*\mathbf{H}_{ij} + \rho\mathbf{I}_{\frac{N_p}{N}}\big)^{-1}\big(\mathbf{H}_{ij}^*\mathbf{g}_{ij}^{(k)} + \rho(\mathbf{v}_j^{(k)} - \mathbf{s}_j^{i,(k)})\big), \qquad (29a)$$
$$\mathbf{v}_j^{(k+1)} = S_{\frac{\lambda}{M\rho}}\big(\bar{\mathbf{u}}_j^{(k+1)} + \bar{\mathbf{s}}_j^{(k)}\big), \qquad (29b)$$
$$\mathbf{s}_j^{i,(k+1)} = \mathbf{s}_j^{i,(k)} + \mathbf{u}_j^{i,(k+1)} - \mathbf{v}_j^{(k+1)}, \qquad (29c)$$

where

$$\mathbf{g}_{ij}^{(k)} = \mathbf{g}_i - \sum_{\substack{q=1 \\ q \ne j}}^{N}\mathbf{H}_{iq}\mathbf{u}_q^{i,(k)} = \mathbf{g}_i - \sum_{\substack{q=1 \\ q \ne j}}^{N}\hat{\mathbf{g}}_{iq}^{(k)} \qquad (30)$$

corresponds to the fraction of the data assigned to the update of the $i$-th replica of segment $j$ of the vector $\mathbf{u}$, which takes into account the estimated data computed by the remaining nodes of the same replica $i$. $S_\kappa(a)$ is the soft-thresholding operator as defined in Eqn.
(16), and $\bar{\mathbf{u}}_j$ and $\bar{\mathbf{s}}_j$ are the means of $\mathbf{u}_j^i$ and $\mathbf{s}_j^i$, respectively, over all replicas $i$ of a given segment $j$. If $\frac{N_m}{M} < \frac{N_p}{N}$, the matrix inversion lemma should be applied for the inversion of the term $(\mathbf{H}_{ij}^*\mathbf{H}_{ij} + \rho\mathbf{I}_{\frac{N_p}{N}})^{-1}$, as indicated in Eqn. (31):

$$\big(\mathbf{H}_{ij}^*\mathbf{H}_{ij} + \rho\mathbf{I}_{\frac{N_p}{N}}\big)^{-1} = \frac{\mathbf{I}_{\frac{N_p}{N}}}{\rho} - \frac{\mathbf{H}_{ij}^*}{\rho^2}\Big(\mathbf{I}_{\frac{N_m}{M}} + \frac{\mathbf{H}_{ij}\mathbf{H}_{ij}^*}{\rho}\Big)^{-1}\mathbf{H}_{ij}. \qquad (31)$$

The primal and dual residuals, which are vectors of $M \cdot N$ components that measure the convergence of the algorithm, are computed, respectively, as follows:

$$\mathbf{r}_p^{(k)} = \big(\mathbf{u}_1^{1,(k)} - \mathbf{v}_1^{(k)}, \ldots, \mathbf{u}_1^{M,(k)} - \mathbf{v}_1^{(k)}, \ldots, \mathbf{u}_N^{1,(k)} - \mathbf{v}_N^{(k)}, \ldots, \mathbf{u}_N^{M,(k)} - \mathbf{v}_N^{(k)}\big), \qquad (32a)$$
$$\mathbf{r}_d^{(k)} = -\rho\big(\mathbf{v}_1^{(k)} - \mathbf{v}_1^{(k-1)}, \ldots, \mathbf{v}_1^{(k)} - \mathbf{v}_1^{(k-1)}, \ldots, \mathbf{v}_N^{(k)} - \mathbf{v}_N^{(k-1)}, \ldots, \mathbf{v}_N^{(k)} - \mathbf{v}_N^{(k-1)}\big), \qquad (32b)$$

and their squared norms are

$$\|\mathbf{r}_p^{(k)}\|_2^2 = \sum_{i=1}^{M}\sum_{j=1}^{N}\|\mathbf{u}_j^{i,(k)} - \mathbf{v}_j^{(k)}\|_2^2, \qquad (33a)$$
$$\|\mathbf{r}_d^{(k)}\|_2^2 = \rho^2 M \sum_{j=1}^{N}\|\mathbf{v}_j^{(k)} - \mathbf{v}_j^{(k-1)}\|_2^2. \qquad (33b)$$

Equations (27)–(31) combine the particularities of both previous approaches for solving the original problem introduced in Eqn. (2). The matrix $\mathbf{H}$ is divided into submatrices by rows (index $i$) and by columns (index $j$). For this reason, the unknown vector $\mathbf{u}$ is divided into $N$ segments $[\mathbf{u}_1; \ldots; \mathbf{u}_N]$, and each of them is replicated $M$ times ($\mathbf{u}_j^1, \ldots, \mathbf{u}_j^M$, for $j = 1, \ldots, N$). For solving this problem, there are two steps in which some information needs to be shared. On one hand, Eqns. (27) and (29b) show that, for a given segment $j$, $\mathbf{v}_j^{(k+1)}$ acts as a consensus variable, imposing agreement among all $\mathbf{u}_j^{i,(k+1)}$ for $i = 1, \ldots, M$, namely, among all replicas of the same segment. On the other hand, Eqns. (29a) and (30) show that, for a given replica $i$, the optimization of the variables
$\mathbf{u}_j^{i,(k+1)}$ for $j = 1, \ldots, N$, that is, the optimization of all segments of the same replica, requires the knowledge of the subvector $\mathbf{g}_i$, as well as the updates of the estimated data $\hat{\mathbf{g}}_{iq}^{(k)} = \mathbf{H}_{iq}\mathbf{u}_q^{i,(k)} \in \mathbb{C}^{\frac{N_m}{M}}$ for $q = 1, \ldots, N$ with $q \ne j$, from the previous iteration. This explanation is depicted in Fig. 4.

Therefore, as Fig. 5a shows, this technique can be seen as a network formed by $N$ small nodes, each of them acting as the central node for optimizing one section of the image. These nodes collect the individual solutions of $M$ sub-nodes, perform the soft-thresholding averaging, and distribute the global result back to each sub-node. There are a total of $N \cdot M$ sub-nodes, each one containing a small portion $\mathbf{H}_{ij}$ of the general matrix $\mathbf{H}$. For a given replica $i$, all sub-nodes have to be in communication to exchange their particular estimated data $\hat{\mathbf{g}}_{ij}^{(k)} = \mathbf{H}_{ij}\mathbf{u}_j^{i,(k)}$. The final imaging solution is performed by connecting the $N$ different solutions from each central node, $\mathbf{v} = [\mathbf{v}_1; \ldots; \mathbf{v}_N]$. As graphically shown in Fig. 5b, this technique sections the imaging domain into $N$ small regions, and for each of them, $M$ independent images are computed, each with less data.

Fig. 4. Schematic of the row and column-wise division resolution. The unknown vector $\mathbf{u}$ is divided into $N$ segments and replicated $M$ times. For a fixed replica $i$, the optimization of each sub-variable $\mathbf{u}_j^{i,(k)}$ for $j = 1, \ldots, N$ requires the knowledge of the subvector $\mathbf{g}_i$ and the estimated data $\hat{\mathbf{g}}_{iq}^{(k)}$ obtained from the previous optimizations of the remaining sub-variables $\mathbf{u}_q^{i,(k)}$ for $q = 1, \ldots, N$ with $q \ne j$. For a given segment $j$, the sub-variable $\mathbf{v}_j^{(k)}$ acts as the consensus of all the replicas $\mathbf{u}_j^{i,(k)}$, for $i = 1, \ldots, M$, of that segment.

The final image for each region is computed as an average-like combination of these independent images. Finally, the global imaging solution is the re-connection of all those regions.
In this sense, this technique combines the advantages of both previous techniques: (i) by dividing by rows, the convergence process is faster, since small optimizations are performed in a parallel fashion; (ii) by dividing by columns, only small vectors have to be shared among the nodes of the same replica; and (iii) when combining the division by rows and by columns, the vectors to be exchanged among the computational nodes of the network are of a much smaller size, reducing the communication overhead. A detailed analysis of the communication among the nodes is given in Sect. IV. These two degrees of freedom enable performing the optimization in a fast and distributed fashion, making the imaging of large domains feasible.

Fig. 5. (a) Architecture of the consensus and sectioning-based ADMM: the problem is split into $N$ nodes, each of them acting as a central node that collects the updates of $M$ sub-nodes, computes the soft-thresholding operator of their mean, and then broadcasts the solution back to the sub-nodes. Each sub-node shares, at each iteration, the small estimated data vector with the remaining sub-nodes that correspond to the same replica. (b) Graphical interpretation of the row and column-wise division: the image is sectioned into $N$ regions, and each of them is replicated $M$ times to perform the imaging with few data allocated to each node. The solution for each region is an average-like combination of all the replicas. The final imaging solution is the concatenation of all the regions.

IV. COMMUNICATION AMONG THE NODES FOR THE ADMM SOLUTION TECHNIQUES

A. Exchange of information for one single node

The three techniques studied in Sect. III present three distributed ways of finding an optimal solution of expression (11), in which several computational nodes optimize independent sub-problems with a small amount of information allocated to each one. However, in all three methodologies, there are concrete steps in which some information needs to be exchanged. In this section, the amount of data transmitted from and received by one single node at iteration $k$ is analyzed for the three techniques:

• In the case of dividing the sensing matrix by rows (Fig. 6a), each node $i$ in the lower level has to receive the last version of $\mathbf{v}^{(k)} \in \mathbb{C}^{N_p}$ and then, after the optimization, it has to send its whole updated version $\mathbf{u}^{i,(k+1)} \in \mathbb{C}^{N_p}$ to the main node (see Eqns. (15a)–(15b) and Fig. 1b). The exchange of information is performed in terms of the imaging domain and, therefore, a total of $2N_p$ elements need to be exchanged at each iteration.

• In the case of dividing the sensing matrix by columns (Fig. 6b), each sub-node $j$ of the lower level receives the estimated data of the remaining $N - 1$ nodes, $\hat{\mathbf{g}}_q^{(k)} \in \mathbb{C}^{N_m}$ for $q = 1, \ldots, N$ with $q \ne j$, and it also broadcasts its own estimated data $\hat{\mathbf{g}}_j^{(k+1)} \in \mathbb{C}^{N_m}$ to the remaining nodes (see Eqns. (22a) and (23), and Fig. 2b). Since the exchange of information is carried out in terms of the estimated data, a total of $N \cdot N_m$ elements are shared by one node at each iteration.

• In the case of dividing the sensing matrix by both rows and columns (Fig. 6c), as recalled, the unknown vector $\mathbf{u}$ is divided into $N$ segments, and each of them is replicated $M$ times. The sub-node $ij$, which optimizes replica $i$ of segment $j$ in the lower level, receives the latest version of $\mathbf{v}_j^{(k)} \in \mathbb{C}^{\frac{N_p}{N}}$ and $N - 1$ estimated data subvectors $\hat{\mathbf{g}}_{iq}^{(k)} \in \mathbb{C}^{\frac{N_m}{M}}$ for $q = 1, \ldots, N$ with $q \ne j$. Once the variable $\mathbf{u}_j^{i,(k+1)} \in \mathbb{C}^{\frac{N_p}{N}}$ is updated, it sends it to the central node of segment $j$, and it also broadcasts its own estimated data subvector $\hat{\mathbf{g}}_{ij}^{(k+1)} \in \mathbb{C}^{\frac{N_m}{M}}$ to the remaining nodes (see Eqns. (29a), (29b), and (30), and Fig. 5a).
Summarizing, at each iteration, a total of $N\frac{N_m}{M} + 2\frac{N_p}{N}$ elements are exchanged by one single node. In this case, the exchange of information is done as a combination of the imaging domain and the estimated data.

Fig. 6. Schematic representation of the vectors, and their lengths, that are received from and transmitted by one single node at iteration $k$ when the sensing matrix of the problem is divided into submatrices (a) by rows, (b) by columns, and (c) by both rows and columns.

Table I shows the number of elements to be received by and transmitted from one single node at iteration $k$ for the three analyzed cases.

TABLE I
NUMBER OF ELEMENTS EXCHANGED BY ONE SINGLE NODE AT ONE ITERATION FOR THE THREE ADMM DISTRIBUTED TECHNIQUES

| ADMM method | # of elements shared by one node at iteration $k$ |
|---|---|
| Consensus-based (row-wise division) | $2N_p$ |
| Sectioning-based (column-wise division) | $N \cdot N_m$ |
| Consensus and sectioning-based (row and column-wise division) | $N\frac{N_m}{M} + 2\frac{N_p}{N}$ |

B. Communication efficiency of the three distributed ADMM techniques

In order to assess the efficiency of the communications among the nodes for the three different techniques, the amount of information received by and transmitted from one single node at iteration $k$ is compared. Since the number of pixels $N_p$ and the number of measurements $N_m$ are always known, the ratio $R = \frac{N_p}{N_m}$ is taken as the reference for the analysis of the three cases.

1) Column-wise vs. row-wise division: The column-wise division (sectioning-based ADMM) is more efficient than the row-wise division (consensus-based ADMM), in terms of communications among the nodes, if the following inequality is satisfied:

$$N \cdot N_m < 2N_p. \qquad (34)$$

This implies that the number of divisions by columns of the sensing matrix has to satisfy

$$1 < N < 2R. \qquad (35)$$

Figure 7 graphically represents this inequality.
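The per-node counts of Table I, and the efficiency condition (34)–(35), can be checked numerically. A small sketch follows; the example values of $N_m$, $N_p$, $M$, and $N$ are arbitrary:

```python
# Per-node elements exchanged at one iteration, from Table I.
def row_wise(Np):
    return 2 * Np                       # consensus-based

def column_wise(N, Nm):
    return N * Nm                       # sectioning-based

def row_column_wise(M, N, Nm, Np):
    return N * Nm / M + 2 * Np / N      # consensus and sectioning-based

Nm, Np = 600, 24_000                    # example sizes, so R = Np/Nm = 40
R = Np / Nm
M, N = 4, 10

print(row_wise(Np))                     # 48000
print(column_wise(N, Nm))               # 6000
print(row_column_wise(M, N, Nm, Np))    # 6300.0

# Eqn. (35): sectioning beats consensus exactly when 1 < N < 2R.
assert (column_wise(N, Nm) < row_wise(Np)) == (1 < N < 2 * R)
```

For these example sizes, the combined division exchanges roughly an order of magnitude fewer elements per node than the pure row-wise division.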
Boundary line comparing the efficiency of the column-wise division versus the row-wise division. Dividing the sensing matrix into submatrices by columns is more efficient than dividing it by rows, in terms of communications among the nodes, for the integer and positive values of $N$ that fall in the area indicated by the arrows, given $R = N_p/N_m$.

2) Row- and column-wise vs. row-wise division: The row- and column-wise division (consensus and sectioning-based ADMM) is more efficient than the row-wise division (consensus-based ADMM) in terms of communications among the nodes if the following inequality is satisfied:

  $N\frac{N_m}{M} + 2\frac{N_p}{N} < 2N_p$,    (36)

which implies that

  $\frac{N^2}{2M(N-1)} < R$.    (37)

For a given ratio $R$, the number of column divisions $N$, in terms of the number of row divisions $M$, must satisfy the following inequality:

  $1 < N < \sqrt{R^2M^2 - 2RM} + RM \approx 2RM$.    (38)

Figure 8 represents this inequality for some particular ratios $R$. The division of the sensing matrix in rows and columns is more efficient than the division in rows only for those integer and positive values of $M$ and $N$ that fall in the region indicated by the arrows.

3) Row- and column-wise vs. column-wise division: The row- and column-wise division (consensus and sectioning-based ADMM) is more efficient than the column-wise division (sectioning-based ADMM) in terms of communications among the nodes if the following inequality is satisfied:

  $N\frac{N_m}{M} + 2\frac{N_p}{N} < N \cdot N_m$.    (39)

This implies that

  $R < \frac{N^2(M-1)}{2M}$.    (40)

Fig. 8. Boundary curves comparing the efficiency of the row- and column-wise division versus the row-wise division. Dividing the sensing matrix into submatrices by rows and columns is more efficient, in terms of communications among the nodes, than dividing it by rows only, for the integer and positive values of $M$ and $N$ that fall in the area indicated by the arrows, for a given ratio $R = N_p/N_m$.
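As a quick numerical sanity check (an illustrative sketch, not part of the paper), the efficiency boundaries (35), (37), and (38) can be evaluated for a given ratio $R$; the function names below are hypothetical:

```python
import math

# Illustrative sketch of the communication-efficiency boundaries derived
# above. Function names are hypothetical, not from the paper.
def col_beats_row(N, R):
    """Inequality (35): column-wise beats row-wise when 1 < N < 2R."""
    return 1 < N < 2 * R

def rowcol_beats_row(M, N, R):
    """Inequality (37): row+column-wise beats row-wise when N^2/(2M(N-1)) < R."""
    return N > 1 and N**2 / (2 * M * (N - 1)) < R

def max_cols_rowcol(M, R):
    """Upper bound of (38): N < sqrt(R^2 M^2 - 2RM) + RM, approximately 2RM."""
    return math.sqrt(R**2 * M**2 - 2 * R * M) + R * M
```

For example, with $R = 22500/2160 \approx 10.42$ and $M = 4$ (the values of the numerical example presented later), the bound of (38) evaluates to about 82.3, close to the $2RM \approx 83.3$ approximation.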
Therefore, given a ratio $R$, the number of divisions by rows $M$, in terms of the number of divisions by columns $N$, must satisfy

  $M > \frac{N^2}{N^2 - 2R}$.    (41)

Figure 9 represents this inequality for some specific ratios $R$. The division of the sensing matrix in rows and columns is more efficient than the division in columns only for those integer values of $N$ and $M$ that fall in the region indicated by the arrows.

Fig. 9. Boundary curves comparing the efficiency of the row- and column-wise division versus the column-wise division. Dividing the sensing matrix into submatrices by rows and columns is more efficient, in terms of communications among the nodes, than dividing it by columns only, for those integer values of $N$ and $M$ that fall in the area indicated by the arrows, for a given ratio $R = N_p/N_m$.

V. COMPRESSIVE REFLECTOR ANTENNA

The Compressive Reflector Antenna (CRA) has recently been presented as hardware capable of improving the sensing capacity of imaging systems in passive [34], [35] and active [36]–[40] mm-wave radar applications. The CRA is built by distorting the surface of a Traditional Reflector Antenna (TRA) with scatterers $\Omega_i$, characterized by their dimensions $\{D_{x_i}, D_{y_i}, D_{z_i}\}$ and electromagnetic properties: permittivity $\epsilon_i$, permeability $\mu_i$, and conductivity $\sigma_i$, as shown in Fig. 10. Other parameters, such as the aperture size $D$, the focal distance $f$, and the offset height $h_o$, are in common with the TRA. This distortion modifies the well-known planar phase front pattern of the TRA, creating pseudo-random patterns that can be considered as spatial and spectral codes in the near and far field of the antenna [41]. This phenomenon reduces the mutual information among the measurements, increases the sensing capacity of the system, and allows the use of CS techniques for performing the imaging of 3D objects [26]–[28], [42].

Fig. 10. 2D cross-section of a CRA in offset mode.
The scatterers $\Omega_i$ distort the phase front, creating a pseudo-random pattern.

Based on the configuration depicted in Fig. 11, $N_{Tx}$ transmitting antennas and $N_{Rx}$ receiving antennas are facing CRA$_1$ and CRA$_2$, respectively. The signal sent from each transmitter is collected by each receiver after being scattered by the targets. The total number of measurements collected is $N_m = N_{Tx} \cdot N_{Rx} \cdot N_f$, where $N_f$ is the total number of equally-spaced frequencies used within a bandwidth $BW$ around the central frequency $f_c$. The imaging domain is discretized into $N_p$ pixels. A linear relationship can be established between the vector of measurements $\mathbf{g} \in \mathbb{C}^{N_m}$ and the unknown vector of reflectivity $\mathbf{u} \in \mathbb{C}^{N_p}$ as follows:

  $\mathbf{g} = \mathbf{Hu} + \mathbf{w}$,    (42)

where $\mathbf{H} \in \mathbb{C}^{N_m \times N_p}$ is the sensing matrix, computed as described in [43], and $\mathbf{w} \in \mathbb{C}^{N_m}$ is the noise collected for each measurement.

VI. NUMERICAL RESULTS

The effectiveness of the three ADMM techniques is assessed by the use of CRAs for mm-wave imaging applications. Figure 11(a) shows a schematic of the configuration for the imaging problem. Two $h_o$-offset CRAs are tilted $\theta_t$ and $-\theta_t$ degrees, as shown in Fig. 11(b). The transmitting array, arranged along

Fig. 11. (a) Geometry of the sensing system. A linear array of transmitters feeds CRA$_1$, which illuminates the imaging domain. The field scattered by the targets is reflected by CRA$_2$ and measured by another linear array of receivers, orthogonal to the transmitting one. (b) Top view of the sensing system. The faded CRAs and Tx and Rx arrays indicate their positions before tilting. The green CRA (CRA$_1$) is tilted $\theta_t$ degrees in the $+\hat{y}$ direction (counterclockwise), and the orange CRA (CRA$_2$) is tilted $\theta_t$ degrees in the $-\hat{y}$ direction (clockwise).
the $\hat{x}$-axis and centered at the focal point of CRA$_1$, is facing CRA$_1$; meanwhile, the receiving array, linearly arranged in the YZ-plane and centered at the focal point of CRA$_2$, is facing CRA$_2$. The surfaces of the two CRAs are discretized into triangular patches, as described in [43]. A scatterer is constructed over each patch, with averaged sizes of $h_{D_{x_i}}$ and $h_{D_{y_i}}$ in the $\hat{x}$ and $\hat{y}$ dimensions, respectively. The size in the $\hat{z}$ dimension, $D_{z_i}$, is defined as the product $h_{D_{x_i}} \cdot \tan(\alpha_{t_i})$, with $\alpha_{t_i}$ being the tilt angle of each scatterer, selected from a uniform random variable in the interval $[0, \alpha_{t_{max}}]$, allowing a maximum tilt angle of $\alpha_{t_{max}}$. The material of each scatterer is considered a perfect electric conductor (PEC); therefore, $\sigma_i = \infty$. The imaging domain, where the targets are contained, is located $z_{T_0}$ meters away from the focal plane of the CRAs before tilting. It covers a parallelepiped-shaped volume defined by the $\Delta x_{T_0}$, $\Delta y_{T_0}$, and $\Delta z_{T_0}$ dimensions, and it is discretized into $N_p$ pixels of dimensions $l_x$, $l_y$, and $l_z$. The values of all these parameters are shown in Table II.

TABLE II
PARAMETERS OF THE NUMERICAL SIMULATION

  Param.               Value                  Param.              Value
  $f_c$                73.5 GHz               $\Delta x_{T_0}$    30 cm
  $BW$                 7 GHz                  $\Delta y_{T_0}$    30 cm
  $\lambda_c$          $4.1\cdot10^{-3}$ m    $\Delta z_{T_0}$    6 cm
  $D$                  50 cm                  $l_x$               $\lambda_c$
  $f$                  50 cm                  $l_y$               $\lambda_c$
  $h_o$                35 cm                  $l_z$               $5\lambda_c$
  $\theta_t$           30°                    $N_{Tx}$            12
  $h_{D_{x_i}}$        $5\lambda_c$           $N_{Rx}$            12
  $h_{D_{y_i}}$        $5\lambda_c$           $N_f$               15
  $\alpha_{t_{max}}$   3°                     $N_m$               2160
  $z_{T_0}$            86 cm                  $N_p$               22500

Figure 12 depicts the imaging results when applying the three ADMM techniques for the following parameters: $\rho = 10^5$, $\lambda = 10^{-2}$, scaling factor $scl = 10^{-4}$ (see Ref. [24]), and 50 iterations. The targets correspond to metallic gun and dagger structures located in different planes.
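As a consistency check of Table II (an illustrative sketch, not part of the paper), the measurement count and the ratio $R$ used in the communication analysis follow directly from the listed parameters:

```python
# Illustrative sketch: measurement count and problem ratio from Table II.
N_Tx, N_Rx, N_f = 12, 12, 15   # transmitters, receivers, frequency samples
N_m = N_Tx * N_Rx * N_f        # total measurements, N_m = N_Tx * N_Rx * N_f
N_p = 22500                    # pixels in the imaging domain
R = N_p / N_m                  # ratio R = N_p / N_m of the communication analysis
print(N_m)           # 2160, matching Table II
print(round(R, 2))   # 10.42
```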
For the consensus-based ADMM, the sensing matrix $\mathbf{H} \in \mathbb{C}^{N_m \times N_p}$ is divided into $M = 4$ submatrices by rows; for the sectioning-based ADMM, $\mathbf{H}$ is divided into $N = 3$ submatrices by columns; and for the consensus and sectioning-based ADMM, $\mathbf{H}$ is divided into $M \cdot N = 4 \cdot 3 = 12$ submatrices by both rows and columns. Table III shows the sizes of the submatrices for each case, the inversion time when applying the matrix inversion lemma to those submatrices, the iterative convergence lapse time, and the total imaging time. The primal and dual convergences for each case are shown in Fig. 13. The times are computed by running M-code with the MATLAB 2017b Parallel Computing Toolbox (PCT) on a Titan V GPU (5120 CUDA cores at 1335 MHz, NVIDIA driver v390.25), under Ubuntu Linux 16.04.4 (kernel 4.13.0-36). It can be considered that the three techniques perform the imaging in real time, especially the consensus-based ADMM, since it finds a solution in less than 1 s.

In terms of communication among the computational nodes, Table IV shows the total amount of information that one single node has to exchange at one iteration, for the parameters

Fig. 12. Imaging reconstruction (top, front, and side views) using (a) consensus-based ADMM, (b) sectioning-based ADMM, and (c) consensus and sectioning-based ADMM. The targets are represented with transparent black triangles, and the reconstructed reflectivity is presented in the colored map.

TABLE III
SUBMATRIX SIZES AND TIMES FOR THE THREE ADMM TECHNIQUES

  Technique                             M   N   Submatrix sizes   Sizes for inversion (matrix inversion lemma)   Inversion time   Convergence time (50 iter.)   Imaging time
  Consensus-based ADMM                  4   1   540 × 22500       540 × 540                                      196 ms           616 ms                        0.812 s
  Sectioning-based ADMM                 1   3   2160 × 7500       2160 × 2160                                    381 ms           639 ms                        1.020 s
  Consensus and sectioning-based ADMM   4   3   540 × 7500        540 × 540                                      248 ms           1171 ms                       1.419 s

Fig. 13.
(a) Primal residual and (b) dual residual of the three ADMM techniques for the imaging example of Fig. 12. The primal residual for the sectioning-based ADMM is almost zero, since there is no consensus step in this technique.

of this example. It also shows the percentage of shared-information reduction for the three techniques, taking the consensus-based ADMM as a reference. It is clear that the column-wise division (sectioning-based ADMM) is the most efficient technique in terms of communication, and the row-wise division (consensus-based ADMM) is the least efficient.

TABLE IV
AMOUNT OF INFORMATION EXCHANGED PER NODE AT ONE ITERATION

  ADMM method                      # of elements to be shared   % of element reduction w.r.t. the consensus-based ADMM
  Consensus-based                  45,000                       0%
  Sectioning-based                 6,480                        85.6%
  Consensus and sectioning-based   12,870                       71.4%

A. Discussion

Comparing the results in terms of imaging quality, imaging time, convergence, and the amount of information shared among the computational nodes for the exposed example, none of the three ADMM techniques can be considered the best for all of these features. The selection of one or another depends on the feature of interest or on the physical restrictions of the problem. In terms of imaging quality, even though all three techniques perform good imaging, the best option is either the consensus-based or the sectioning and consensus-based ADMM, since they have slightly better performance. In terms of time, the consensus-based ADMM has the fastest imaging time, but it is the worst when considering the amount of information exchanged among the nodes. Finally, in terms of convergence and communication efficiency, the sectioning-based ADMM is the winner; however, this method gets slower as the number of divisions gets larger, and the amount of information exchanged increases linearly.
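The reduction percentages of Table IV can be reproduced from the element counts (an illustrative sketch, not part of the paper):

```python
# Illustrative sketch: shared-information reduction of Table IV, taking the
# consensus-based element count as the reference.
shared = {"Consensus-based": 45000,
          "Sectioning-based": 6480,
          "Consensus and Sectioning-based": 12870}
reference = shared["Consensus-based"]
for name, n in shared.items():
    reduction = 100 * (1 - n / reference)
    print(f"{name}: {n} elements, {reduction:.1f}% reduction")
# Consensus-based: 45000 elements, 0.0% reduction
# Sectioning-based: 6480 elements, 85.6% reduction
# Consensus and Sectioning-based: 12870 elements, 71.4% reduction
```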
Therefore, depending on the particular needs of the problem (accuracy of the imaging, speed, computational node architecture, etc.), one method or another may be selected. As a general consideration, the sectioning and consensus-based ADMM technique is always a good option, since it has more degrees of freedom, which allows it to get close to the best performance for most of the features.

VII. CONCLUSION

Three ADMM-based techniques have been introduced to find a sparse solution of a linear matrix equation in a distributed fashion. These techniques are particularly adapted to a mm-wave imaging application. In the consensus-based ADMM, the sensing matrix is divided into submatrices by rows, creating several replicas of the unknown imaging vector and solving them in parallel, reaching a consensus among the different solutions and highly accelerating the imaging process. In the sectioning-based ADMM, the sensing matrix is divided into submatrices by columns, sectioning the image into small regions and optimizing them separately, highly reducing the amount of information to be shared by one node at each iteration. Finally, in the consensus and sectioning-based ADMM, the sensing matrix is divided by both rows and columns, segmenting the image and creating replicas of each region, combining the advantages of imaging quality and reduced information exchange among the computational nodes.

A mm-wave imaging example through the use of two CRAs has been presented. The imaging quality, the imaging time, the convergence, and the communication among the computational nodes have been analyzed and compared. The distributed capabilities of the three proposed techniques have demonstrated their ability to perform real-time imaging of metallic targets with a reduced number of measurements. Imaging structures that reduce the mutual information among measurements even more could further accelerate the imaging process.
Also, more decentralized computational architectures can further reduce the amount of information exchanged among the nodes. Future analysis will also allow non-regular divisions of the sensing matrix in both rows and columns, in which those divisions may be specified by the user depending on the particular conditions, requirements, and constraints of the problem to be solved.

ACKNOWLEDGEMENT

This work has been funded by the NSF CAREER program (Award #1653671) and the Department of Energy (Award #DE-SC0017614).

REFERENCES

[1] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods. Prentice-Hall, Inc., 1989.
[2] D. Greenspan, "Methods of matrix inversion," The American Mathematical Monthly, vol. 62, no. 5, pp. 303–318, 1955.
[3] H. Akaike, "Block Toeplitz matrix inversion," SIAM Journal on Applied Mathematics, vol. 24, no. 2, pp. 234–241, 1973.
[4] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, July 2011.
[5] J. Heredia-Juesas, A. Molaei, L. Tirado, W. Blackwell, and J. Á. Martínez-Lorenzo, "Norm-1 regularized consensus-based ADMM for imaging with a compressive antenna," IEEE Antennas and Wireless Propagation Letters, vol. 16, pp. 2362–2365, 2017.
[6] P. A. Forero, A. Cano, and G. B. Giannakis, "Consensus-based distributed support vector machines," The Journal of Machine Learning Research, vol. 11, pp. 1663–1707, 2010.
[7] J. F. Mota, J. Xavier, P. M. Aguiar, and M. Puschel, "Distributed basis pursuit," IEEE Transactions on Signal Processing, vol. 60, no. 4, pp. 1942–1956, April 2012.
[8] J. Tsitsiklis, D. Bertsekas, and M.
Athans, "Distributed asynchronous deterministic and stochastic gradient optimization algorithms," IEEE Transactions on Automatic Control, vol. 31, no. 9, pp. 803–812, 1986.
[9] A. Olshevsky and J. N. Tsitsiklis, "Convergence speed in distributed consensus and averaging," SIAM Journal on Control and Optimization, vol. 48, no. 1, pp. 33–55, 2009.
[10] M. H. DeGroot, "Reaching a consensus," Journal of the American Statistical Association, vol. 69, no. 345, pp. 118–121, 1974.
[11] L. Fang and P. J. Antsaklis, "On communication requirements for multi-agent consensus seeking," in Networked Embedded Sensing and Control. Springer, 2006, pp. 53–67.
[12] D. Jakovetic, J. Xavier, and J. M. Moura, "Cooperative convex optimization in networked systems: Augmented Lagrangian algorithms with directed gossip communication," IEEE Transactions on Signal Processing, vol. 59, no. 8, pp. 3889–3902, August 2011.
[13] M. Mehyar, D. Spanos, J. Pongsajapan, S. H. Low, and R. M. Murray, "Distributed averaging on asynchronous communication networks," in Decision and Control, 2005 and 2005 European Control Conference (CDC-ECC'05), 44th IEEE Conference on. IEEE, 2005, pp. 7446–7451.
[14] J. F. Mota, J. M. Xavier, P. M. Aguiar, and M. Puschel, "D-ADMM: A communication-efficient distributed algorithm for separable optimization," IEEE Transactions on Signal Processing, vol. 61, no. 10, pp. 2718–2723, May 2013.
[15] R. Olfati-Saber, J. A. Fax, and R. M. Murray, "Consensus and cooperation in networked multi-agent systems," Proceedings of the IEEE, vol. 95, no. 1, pp. 215–233, 2007.
[16] I. D. Schizas, A. Ribeiro, and G. B. Giannakis, "Consensus in ad hoc WSNs with noisy links, Part I: Distributed estimation of deterministic signals," IEEE Transactions on Signal Processing, vol. 56, no. 1, pp. 350–364, January 2008.
[17] I. D. Schizas, G. B. Giannakis, S. I. Roumeliotis, and A.
Ribeiro, "Consensus in ad hoc WSNs with noisy links, Part II: Distributed estimation and smoothing of random signals," IEEE Transactions on Signal Processing, vol. 56, no. 4, pp. 1650–1666, April 2008.
[18] G. Oliveri, P. Rocca, and A. Massa, "A Bayesian-compressive-sampling-based inversion for imaging sparse scatterers," IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 10, pp. 3993–4006, 2011.
[19] A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.
[20] S. Becker, J. Bobin, and E. J. Candès, "NESTA: A fast and accurate first-order method for sparse recovery," SIAM Journal on Imaging Sciences, vol. 4, no. 1, pp. 1–39, 2011.
[21] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2009.
[22] J. Heredia-Juesas, G. Allan, A. Molaei, L. Tirado, W. Blackwell, and J. A. Martinez-Lorenzo, "Consensus-based imaging using ADMM for a compressive reflector antenna," in Antennas and Propagation Symposium, July 2015.
[23] T. Erseghe, D. Zennaro, E. Dall'Anese, and L. Vangelista, "Fast consensus by the alternating direction multipliers method," IEEE Transactions on Signal Processing, vol. 59, no. 11, pp. 5523–5537, November 2011.
[24] J. Heredia-Juesas, A. Molaei, L. Tirado, and J. Á. Martínez-Lorenzo, "Sectioning-based ADMM imaging for fast node communication with a compressive antenna," IEEE Antennas and Wireless Propagation Letters, under review.
[25] ——, "Fast node communication ADMM-based imaging algorithm with a compressive reflector antenna," in 2018 IEEE International Symposium on Antennas and Propagation & USNC/URSI National Radio Science Meeting. IEEE, July 2018.
[26] A. Molaei, J. H. Juesas, and J. A. M. Lorenzo, "Compressive reflector antenna phased array," in Antenna Arrays and Beam-formation. InTech, 2017.
[27] J. Martinez-Lorenzo, J.
Heredia-Juesas, and W. Blackwell, "A single-transceiver compressive reflector antenna for high-sensing-capacity imaging," IEEE Antennas and Wireless Propagation Letters, vol. 15, pp. 968–971, March 2016.
[28] ——, "Single-transceiver compressive antenna for high-capacity sensing and imaging applications," in EuCAP 2015, Lisbon, in press.
[29] D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods. Academic Press, 2014.
[30] E. J. Candes, "The restricted isometry property and its implications for compressed sensing," Comptes Rendus Mathematique, vol. 346, no. 9-10, pp. 589–592, 2008.
[31] R. Obermeier and J. A. Martinez-Lorenzo, "Model-based optimization of compressive antennas for high-sensing-capacity applications," IEEE Antennas and Wireless Propagation Letters, 2016.
[32] K. Bredies and D. A. Lorenz, "Linear convergence of iterative soft-thresholding," Journal of Fourier Analysis and Applications, vol. 14, no. 5-6, pp. 813–837, October 2008.
[33] M. A. Woodbury, "Inverting modified matrices," Memorandum Report, vol. 42, p. 106, 1950.
[34] A. Molaei, J. H. Juesas, W. Blackwell, and J. A. M. Lorenzo, "Interferometric sounding using a metamaterial-based compressive reflector antenna," IEEE Transactions on Antennas and Propagation, 2018.
[35] A. Molaei, G. Allan, J. Heredia, W. Blackwell, and J. Martinez-Lorenzo, "Interferometric sounding using a compressive reflector antenna," in Antennas and Propagation (EuCAP 2016), 2016.
[36] H. Gomez-Sousa, O. Rubinos-Lopez, and J. A. Martinez-Lorenzo, "Hematologic characterization and 3D imaging of red blood cells using a compressive nano-antenna and ML-FMA modeling," in Antennas and Propagation (EuCAP 2016), 2016.
[37] A. Molaei, J. Heredia-Juesas, and J.
Martinez-Lorenzo, "A 2-bit and 3-bit metamaterial absorber-based compressive reflector antenna for high sensing capacity imaging," in 2017 IEEE International Symposium on Technologies for Homeland Security (HST). IEEE, 2017, pp. 1–6.
[38] A. Molaei, J. H. Juesas, G. Allan, and J. Martinez-Lorenzo, "Active imaging using a metamaterial-based compressive reflector antenna," in 2016 IEEE International Symposium on Antennas and Propagation (APSURSI). IEEE, 2016, pp. 1933–1934.
[39] A. Molaei, G. Ghazi, J. Heredia-Juesas, H. Gomez-Sousa, and J. Martinez-Lorenzo, "High capacity imaging using an array of compressive reflector antennas," in 2017 11th European Conference on Antennas and Propagation (EuCAP). IEEE, 2017, pp. 1731–1734.
[40] A. Molaei, J. H. Juesas, and J. Martinez-Lorenzo, "Single-pixel mm-wave imaging using 8-bit metamaterial-based compressive reflector antenna," in 2017 IEEE International Symposium on Antennas and Propagation & USNC/URSI National Radio Science Meeting. IEEE, 2017, pp. 847–848.
[41] A. Molaei, K. Graham, L. Tirado, A. Ghanbarzadeh, A. Bisulco, J. Heredia-Juesas, C. Liu, J. Von Hotenz, and J. Martinez-Lorenzo, "Experimental results of a compressive reflector antenna producing spatial coding," in 2018 IEEE International Symposium on Antennas and Propagation & USNC/URSI National Radio Science Meeting. IEEE, July 2018.
[42] R. Obermeier and J. A. Martinez-Lorenzo, "Model-based optimization of compressive antennas for high-sensing-capacity applications," IEEE Antennas and Wireless Propagation Letters, accepted for publication, 2016.
[43] J. Meana, J. Martinez-Lorenzo, F. Las-Heras, and C. Rappaport, "Wave scattering by dielectric and lossy materials using the modified equivalent current approximation (MECA)," IEEE Transactions on Antennas and Propagation, vol. 58, no. 11, pp. 3757–3761, November 2010.
