Minimum-gain Pole Placement with Sparse Static Feedback


Authors: Vaibhav Katewa, Fabio Pasqualetti

Abstract: The minimum-gain eigenvalue assignment/pole placement problem (MGEAP) is a classical problem in LTI systems with static state feedback. In this paper, we study the MGEAP when the state feedback has arbitrary sparsity constraints. We formulate the sparse MGEAP as an equality-constrained optimization problem and present an analytical characterization of its locally optimal solutions in terms of the eigenvector matrices of the closed-loop system. This result is used to provide a geometric interpretation of the solution of the non-sparse MGEAP, thereby providing additional insight into this classical problem. Further, we develop an iterative projected gradient descent algorithm to obtain local solutions of the sparse MGEAP using a parametrization based on the Sylvester equation. We present a heuristic algorithm to compute the projections, which also provides a novel method to solve the sparse EAP. A relaxed version of the sparse MGEAP is also presented, and an algorithm is developed to obtain approximately sparse local solutions of the MGEAP. Finally, numerical studies are presented to compare the properties of the algorithms, which suggest that the proposed projection algorithm converges in most cases.

Index Terms: Eigenvalue assignment, minimum-gain pole placement, optimization, sparse feedback, sparse linear systems.

(This work was supported in part by awards ARO-71603NSYIP and AFOSR-FA9550-19-1-0235. V. Katewa was with the Department of Mechanical Engineering, University of California at Riverside, Riverside 92521 USA. He is now with the Department of Electrical Communication Engineering and the Robert Bosch Center for Cyber-Physical Systems, Indian Institute of Science, Bengaluru 560012, India (e-mail: vkatewa@iisc.ac.in). F. Pasqualetti is with the Department of Mechanical Engineering, University of California at Riverside, Riverside 92521 USA (e-mail: fabiopas@engr.ucr.edu).)

I. INTRODUCTION

The Eigenvalue/Pole Assignment Problem (EAP) using static state feedback is one of the central problems in the design of Linear Time-Invariant (LTI) control systems (e.g., see [1], [2]). It plays a key role in stabilizing a system and shaping its transient behavior. Consider the LTI system

$$D x(k) = A x(k) + B u(k), \qquad (1a)$$
$$u(k) = F x(k), \qquad (1b)$$

where $x \in \mathbb{R}^n$ is the state of the LTI system, $u \in \mathbb{R}^m$ is the control input, $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, and $D$ denotes either the continuous-time differential operator or the discrete-time shift operator. The EAP involves finding a real feedback matrix $F \in \mathbb{R}^{m \times n}$ such that the eigenvalues of the closed-loop matrix $A_c(F) \triangleq A + BF$ coincide with a given set $\mathcal{S} = \{\lambda_1, \lambda_2, \cdots, \lambda_n\}$ that is closed under complex conjugation. It is well known that the existence of such an $F$ depends on the controllability properties of the pair $(A, B)$. Further, for single-input systems ($m = 1$), the feedback vector that assigns the eigenvalues is unique and can be obtained using Ackermann's formula [3]. On the other hand, for multi-input systems ($m > 1$), the feedback matrix is not unique and there is flexibility in choosing the eigenvectors of the closed-loop system. This flexibility can be utilized to choose a feedback matrix that satisfies auxiliary control criteria in addition to assigning the eigenvalues.
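For concreteness, the single-input case (where the assigning feedback is unique, as noted above) can be illustrated with SciPy's pole-placement routine. This is a minimal sketch with an illustrative plant, not the method developed in this paper; note that `place_poles` returns a gain $K$ for the convention $A - BK$, so $F = -K$ in the notation of (1b).

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [2.0, -1.0]])            # illustrative unstable plant
B = np.array([[0.0],
              [1.0]])

K = place_poles(A, B, [-1.0, -2.0]).gain_matrix
F = -K                                 # u = F x, cf. (1b)
print(np.linalg.eigvals(A + B @ F))    # approximately [-1., -2.]
```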
For instance, the feedback matrix can be chosen to minimize the sensitivity of the closed-loop eigenvalues to perturbations in the system parameters, thereby making the system robust. This is known as the Robust Eigenvalue Assignment Problem (REAP) [4]. Alternatively, one can choose the feedback matrix with minimum gain, thereby reducing the overall control effort. This is known as the Minimum Gain Eigenvalue Assignment Problem (MGEAP) [5], [6].

Recently, considerable attention has been given to the study and design of sparse feedback control systems, where certain entries of the matrix $F$ are required to be zero. Feedback sparsity typically arises in decentralized control problems for large-scale and interconnected systems with multiple controllers [7], where each controller has access to only some partial states of the system. Such constraints in decentralized control problems are typically specified by information patterns that govern which controllers have access to which states of the system [8], [9]. Sparsity may also result from the special structure of a centralized control system that prohibits feedback from some states to the controllers. The feedback design problem with sparsity constraints is considerably more difficult than the unconstrained case. There have been numerous studies to determine the optimal feedback control law for H2/LQR/LQG control problems with sparsity, particularly when the controllers have access to only local information (see [7]-[10] and the references therein). While the optimal H2/LQR/LQG design problems with sparsity have a rich history, studies on the REAP/MGEAP in the presence of arbitrary sparsity constraints are lacking. Even the problem of finding a particular (not necessarily optimal) sparse feedback matrix that solves the EAP is not well studied. In this paper, we study the EAP and MGEAP with arbitrary sparsity constraints on the feedback matrix $F$. We provide an analytical characterization of the solutions of the sparse MGEAP and develop iterative algorithms to solve the sparse EAP and MGEAP. We also briefly discuss the feasibility of the sparse EAP.

Related work: There have been numerous studies on the optimal pole placement problem without sparsity constraints. For the REAP, authors have considered optimizing different metrics that capture the sensitivity of the eigenvalues, such as the condition number of the eigenvector matrix [4], [11]-[14], departure from normality [15], and others [16], [17]. Most of these methods use gradient-based iterative procedures to obtain the solutions. For surveys and comparisons of these REAP methods, see [11], [18], [19] and the references therein. Early works on the MGEAP, including [20], [21], presented approximate solutions using low-rank feedback and successive pole placement techniques. Simultaneous robust and minimum-gain pole placement was studied in [14], [22]-[24]. For a survey and performance comparison of these MGEAP studies, see [5] and the references therein. The regional pole placement problem, where the eigenvalues are assigned inside a specified region, was studied in [25], [26]. While these studies have provided useful insights on the REAP/MGEAP, they do not consider sparsity constraints on the feedback matrix. In contrast, we study the sparse EAP/MGEAP by explicitly including the sparsity constraints in the problem formulation and solutions. There have also been numerous studies on the EAP with sparse dynamic LTI feedback.
The concept of decentralized fixed modes (DFMs) was introduced in [27] and later refined in [28]. Decentralized fixed modes are those eigenvalues of the system that cannot be shifted using a static/dynamic feedback with a fully decentralized sparsity pattern (i.e., the case where controllers have access only to local states). The remaining eigenvalues of the system can be assigned arbitrarily. However, this cannot be achieved in general using a static decentralized controller and requires the use of a dynamic decentralized controller [27]. Other algebraic characterizations of the DFMs were presented in [29], [30]. The notion of DFMs was generalized to an arbitrary sparsity pattern, and the concept of structurally fixed modes (SFMs) was introduced in [31]. Graph-theoretical characterizations of structurally fixed modes were provided in [32], [33]. As in the case of DFMs, assigning the non-SFMs also requires dynamic controllers. These studies on DFMs and SFMs present feasibility conditions and analysis methods for the EAP problem with sparse dynamic feedback. In contrast, we study both the EAP and the MGEAP with sparse static controllers, assuming the sparse EAP is feasible. We remark that the EAP with sparsity and a static feedback controller is in fact important for several network design and control problems, and easier to implement than its dynamic counterpart.

Recently, there has been renewed interest in studying linear systems with sparsity constraints. Using a different approach than [32], the original results regarding DFMs in [27] were generalized to an arbitrary sparsity pattern by the authors in [34], [35], where they also present a sparse dynamic controller synthesis algorithm. Further, there have been many recent studies on minimum-cost input/output and feedback sparsity pattern selection such that the system is controllable [36] and has no structurally fixed modes (see [37], [38] and the references therein). In contrast, we consider the problem of finding a static minimum-gain feedback with a given sparsity pattern that solves the EAP.

Contribution: The contribution of this paper is three-fold. First, we study the MGEAP with static feedback and arbitrary sparsity constraints (assuming feasibility of the sparse EAP). We formulate the sparse MGEAP as an equality-constrained optimization problem and present an analytical characterization of a locally optimal sparse solution. As a minor contribution, we use this result to provide a geometric insight into the non-sparse MGEAP solutions. Second, we show that determining the feasibility of the sparse EAP is NP-hard and present necessary and sufficient conditions for feasibility. We develop two heuristic iterative algorithms to obtain a local solution of the sparse EAP. The first algorithm is based on repeated projections on linear subspaces. The second algorithm is developed using the Sylvester-equation-based parametrization, and it obtains a solution via projection of a non-sparse feedback matrix on the space of sparse feedback matrices that solve the EAP. Third, using the latter EAP projection algorithm, we develop a projected gradient descent method to obtain a local solution of the sparse MGEAP. We also formulate a relaxed version of the sparse MGEAP using penalty-based optimization and develop an algorithm to obtain approximately-sparse local solutions.

Paper organization: The remainder of the paper is organized as follows. In Section II we formulate the sparse MGEAP optimization problem.
In Section III, we obtain the solution of the optimization problem using the Lagrangian theory of optimization. We also provide a geometric interpretation of the optimal solutions of the non-sparse MGEAP. In Section IV, we present two heuristic algorithms for solving the sparse EAP. Further, we present a projected gradient descent algorithm to solve the sparse MGEAP, as well as an algorithm that computes approximately-sparse solutions of a relaxed version of the sparse MGEAP. Section V contains numerical studies and comparisons of the proposed algorithms. In Section VI, we discuss the feasibility of the sparse EAP. Finally, Section VII concludes the paper.

II. SPARSE MGEAP FORMULATION

A. Mathematical notation and preliminary properties

We use the following properties to derive our results [39], [40]:

P.1 $\mathrm{tr}(A) = \mathrm{tr}(A^T)$ and $\mathrm{tr}(ABC) = \mathrm{tr}(CAB)$,
P.2 $\|A\|_F^2 = \mathrm{tr}(A^T A) = \mathrm{vec}^T(A)\,\mathrm{vec}(A)$,
P.3 $\mathrm{vec}(AB) = (I \otimes A)\,\mathrm{vec}(B) = (B^T \otimes I)\,\mathrm{vec}(A)$,
P.4 $\mathrm{vec}(ABC) = (C^T \otimes A)\,\mathrm{vec}(B)$,
P.5 $(A \otimes B)^T = A^T \otimes B^T$ and $(A \otimes B)^H = A^H \otimes B^H$,
P.6 $\mathbf{1}_n^T (A \circ B)\,\mathbf{1}_n = \mathrm{tr}(A^T B)$,
P.7 $A \circ B = B \circ A$ and $A \circ (B \circ C) = (A \circ B) \circ C$,
P.8 $\mathrm{vec}(A \circ B) = \mathrm{vec}(A) \circ \mathrm{vec}(B)$ and $(A \circ B)^T = A^T \circ B^T$,
P.9 $\frac{d}{dX}\,\mathrm{tr}(AX) = A^T$, $\frac{d}{dX}\,\mathrm{tr}(X^T X) = 2X$, $\frac{d}{dx}(Ax) = A$,
P.10 $d(X^{-1}) = -X^{-1}\,dX\,X^{-1}$,
P.11 Let $D_x f$ and $D_x^2 f$ be the gradient and Hessian of $f(x): \mathbb{R}^n \to \mathbb{R}$. Then, $df = (D_x f)^T dx$ and $d^2 f = (dx)^T (D_x^2 f)\,dx$,
P.12 The projection of a vector $y \in \mathbb{R}^n$ on the null space of $A \in \mathbb{R}^{m \times n}$ is given by $y_p = [I_n - A^+ A]\,y$.

The Kronecker sum of two square matrices $A$ and $B$ with dimensions $n$ and $m$, respectively, is denoted by $A \oplus B = (I_m \otimes A) + (B \otimes I_n)$. Further, we use the following notation throughout the paper:

$\|\cdot\|_2$ : spectral norm
$\|\cdot\|_F$ : Frobenius norm
$\langle\cdot,\cdot\rangle_F$ : inner (Frobenius) product
$|\cdot|$ : cardinality of a set
$\Gamma(\cdot)$ : spectrum of a matrix
$\sigma_{\min}(\cdot)$ : minimum singular value of a matrix
$\mathrm{tr}(\cdot)$ : trace of a matrix
$(\cdot)^+$ : Moore-Penrose pseudoinverse
$(\cdot)^T$ : transpose of a matrix
$\mathcal{R}(\cdot)$ : range of a matrix
$A > 0$ : positive definite matrix $A$
$\circ$ : Hadamard (element-wise) product
$\otimes$ : Kronecker product
$(\cdot)^*$ : complex conjugate
$(\cdot)^H$ : conjugate transpose
$\mathrm{supp}(\cdot)$ : support of a vector
$\mathrm{vec}(\cdot)$ : vectorization of a matrix
$\mathrm{diag}(a)$ : $n \times n$ diagonal matrix with diagonal elements given by the $n$-dimensional vector $a$
$\mathrm{Re}(\cdot)$, $\mathrm{Im}(\cdot)$ : real and imaginary parts of a complex variable
$\mathbf{1}_n$ ($\mathbf{0}_n$) : $n$-dimensional vector of ones (zeros)
$\mathbf{1}_{n \times m}$ ($\mathbf{0}_{n \times m}$) : $n \times m$ matrix of ones (zeros)
$I_n$ : $n$-dimensional identity matrix
$e_i$ : $i$-th canonical vector
$T_{m,n}$ : permutation matrix satisfying $\mathrm{vec}(A^T) = T_{m,n}\,\mathrm{vec}(A)$ for $A \in \mathbb{R}^{m \times n}$
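Several of these identities (in particular P.3, P.4 and the Kronecker sum) are used repeatedly in the derivations below. The following minimal sketch (assuming NumPy, with the column-stacking convention for vec) verifies them numerically on random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
vec = lambda M: M.flatten(order='F')        # vec(.) stacks columns

# P.4: vec(ABC) = (C^T kron A) vec(B)
assert np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B))

# Kronecker sum: vec(AX - X Lam) = [A ⊕ (-Lam^T)] vec(X), via P.3
Lam = np.diag(rng.standard_normal(3))
X = rng.standard_normal((3, 3))
A_sum = np.kron(np.eye(3), A) - np.kron(Lam.T, np.eye(3))
assert np.allclose(vec(A @ X - X @ Lam), A_sum @ vec(X))
```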
B. Sparse MGEAP

The sparse MGEAP involves finding a real feedback matrix $F \in \mathbb{R}^{m \times n}$ with minimum norm that assigns the closed-loop eigenvalues of (1a)-(1b) at some desired locations given by the set $\mathcal{S} = \{\lambda_1, \lambda_2, \cdots, \lambda_n\}$, and that satisfies given sparsity constraints. Let $\bar{F} \in \{0,1\}^{m \times n}$ denote a binary matrix that specifies the sparsity structure of the feedback matrix $F$. If $\bar{F}_{ij} = 0$ (respectively, $\bar{F}_{ij} = 1$), then the $j$-th state is unavailable (respectively, available) for calculating the $i$-th input. Thus,

$$F_{ij} = \begin{cases} 0 & \text{if } \bar{F}_{ij} = 0, \text{ and} \\ \star & \text{if } \bar{F}_{ij} = 1, \end{cases}$$

where $\star$ denotes any real number. Let $\bar{F}^c \triangleq \mathbf{1}_{m \times n} - \bar{F}$ denote the complementary sparsity structure matrix. Further, with a slight abuse of notation (cf. (1a)), let $X \triangleq [x_1, x_2, \cdots, x_n] \in \mathbb{C}^{n \times n}$, $x_i \neq \mathbf{0}_n$, denote the non-singular eigenvector matrix of the closed-loop matrix $A_c(F) = A + BF$. The MGEAP can be mathematically stated as follows:

$$\min_{F, X} \;\; \frac{1}{2}\,\|F\|_F^2 \qquad (2)$$
$$\text{s.t.} \quad (A + BF)X = X\Lambda, \qquad (2a)$$
$$\bar{F}^c \circ F = \mathbf{0}_{m \times n}, \qquad (2b)$$

where $\Lambda = \mathrm{diag}([\lambda_1, \lambda_2, \cdots, \lambda_n]^T)$ is the diagonal matrix of the desired eigenvalues. Equations (2a) and (2b) represent the eigenvalue assignment and sparsity constraints, respectively. The constraint (2a) is not convex in $(F, X)$ and, therefore, the optimization problem (2) is non-convex. Consequently, multiple local minima may exist. This is a common feature of various minimum-distance and eigenvalue assignment problems [41], including the non-sparse MGEAP.

Remark 1. (Choice of norm) The Frobenius norm measures the element-wise gains of a matrix, which is informative in sparsity-constrained problems arising, for instance, in network control problems. It is also convenient for the analysis, particularly for computing the derivatives of the cost function.

Definition 1. (Fixed modes [27], [34]) The fixed modes of $(A, B)$ with respect to the sparsity constraints $\bar{F}$ are those eigenvalues of $A$ which cannot be changed using LTI static (and also dynamic) state feedback, and are denoted by

$$\Gamma_f(A, B, \bar{F}) \triangleq \bigcap_{F :\, F \circ \bar{F}^c = 0} \Gamma(A + BF).$$

We make the following assumptions regarding the fixed modes and the feasibility of the optimization problem (2).

Assumption 1. The fixed modes of the triplet $(A, B, \bar{F})$ are included in the desired eigenvalue set $\mathcal{S}$, i.e., $\Gamma_f(A, B, \bar{F}) \subseteq \mathcal{S}$.

Assumption 2. There exists at least one feedback matrix $F$ that satisfies constraints (2a)-(2b) for the given $\mathcal{S}$.

Assumption 1 is clearly necessary for the feasibility of the optimization problem (2). Assumption 2 is restrictive because, in general, it is possible that a static feedback matrix with a given sparsity pattern cannot assign the closed-loop eigenvalues to arbitrary locations (i.e., to an arbitrary set $\mathcal{S}$ satisfying Assumption 1; note that a sparse dynamic feedback law can assign the eigenvalues to arbitrary locations under Assumption 1 [35]). In such cases, only a few ($< n$) eigenvalues can be assigned independently, and the remaining eigenvalues are a function of them. To the best of our knowledge, there are no studies characterizing conditions for the existence of a static feedback matrix for an arbitrary sparsity pattern $\bar{F}$ and eigenvalue set $\mathcal{S}$ [42] (although such characterizations are available for dynamic feedback laws with an arbitrary sparsity pattern [31], [34], [35], and for static output feedback with a decentralized sparsity pattern [43]). Thus, for the purpose of this paper, we focus on finding the optimal feedback matrix assuming that at least one such feedback matrix exists. We provide some preliminary results on the feasibility of the optimization problem (2) in Section VI.

III. SOLUTION TO THE SPARSE MGEAP

In this section we present the solution of the optimization problem (2). To this aim, we use the theory of Lagrange multipliers for equality-constrained minimization problems.

Remark 2. (Conjugate eigenvectors) We use the convention that the right (respectively, left) eigenvectors $(x_i, x_j)$ corresponding to two conjugate eigenvalues $(\lambda_i, \lambda_j)$ are also conjugate. Thus, if $\lambda_i = \lambda_j^*$, then $x_i = x_j^*$.
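Before developing the analysis, note that candidate pairs $(F, X)$ can be checked directly against the objective and constraints of (2). The following is a minimal sketch (assuming NumPy; the function name is illustrative):

```python
import numpy as np

def mgeap_residuals(A, B, F, X, Lam, F_bar):
    """Objective of (2) and residuals of constraints (2a)-(2b)."""
    cost = 0.5 * np.linalg.norm(F, 'fro') ** 2
    eig_res = (A + B @ F) @ X - X @ Lam    # (2a): zero iff X is the
                                           # closed-loop eigenvector matrix
    sparsity_res = (1.0 - F_bar) * F       # (2b): entrywise F_bar^c ∘ F
    return cost, eig_res, sparsity_res
```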
We use the real counterpart of (2a) for the analysis. For two complex conjugate eigenvalues $(\lambda_i, \lambda_i^*)$ and corresponding eigenvectors $(x_i, x_i^*)$, the complex equation

$$(A + BF)\begin{bmatrix} x_i & x_i^* \end{bmatrix} = \begin{bmatrix} x_i & x_i^* \end{bmatrix}\begin{bmatrix} \lambda_i & 0 \\ 0 & \lambda_i^* \end{bmatrix}$$

is equivalent to the real equation

$$(A + BF)\begin{bmatrix} \mathrm{Re}(x_i) & \mathrm{Im}(x_i) \end{bmatrix} = \begin{bmatrix} \mathrm{Re}(x_i) & \mathrm{Im}(x_i) \end{bmatrix}\begin{bmatrix} \mathrm{Re}(\lambda_i) & \mathrm{Im}(\lambda_i) \\ -\mathrm{Im}(\lambda_i) & \mathrm{Re}(\lambda_i) \end{bmatrix}.$$

For each complex eigenvalue, the columns $[x_i \;\; x_i^*]$ of $X$ are replaced by $[\mathrm{Re}(x_i) \;\; \mathrm{Im}(x_i)]$ to obtain a real $X_R$, and the sub-matrix $\mathrm{diag}(\lambda_i, \lambda_i^*)$ of $\Lambda$ is replaced by $\left[\begin{smallmatrix} \mathrm{Re}(\lambda_i) & \mathrm{Im}(\lambda_i) \\ -\mathrm{Im}(\lambda_i) & \mathrm{Re}(\lambda_i) \end{smallmatrix}\right]$ to obtain a real $\Lambda_R$. The real eigenvectors in $X$ and $X_R$, and the real eigenvalues in $\Lambda$ and $\Lambda_R$, coincide. Clearly, $X_R$ is not the eigenvector matrix of $A + BF$ (cf. Remark 7), but $X$ can be recovered from the columns of $X_R$. Thus, (2a) becomes

$$(A + BF)X_R = X_R\Lambda_R, \qquad (3)$$

and $X_R$ replaces the optimization variable $X$ in (2).

In the theory of equality-constrained optimization, the first-order optimality conditions are meaningful only when the optimal point satisfies the following regularity condition: the Jacobian of the constraints, denoted by $J_b$, is full rank. This regularity condition is mild and is usually satisfied for most classes of problems [44]. Before presenting the main result, we derive the Jacobian and state the regularity condition for the problem (2). Computation of $J_b$ requires vectorization of the matrix constraints (3) and (2b). For this purpose, let $x_R \triangleq \mathrm{vec}(X_R) \in \mathbb{R}^{n^2}$, $f \triangleq \mathrm{vec}(F) \in \mathbb{R}^{mn}$, and let $z \triangleq [x_R^T, f^T]^T$ be the vector containing all the independent variables of the optimization problem. Further, let $n_s$ denote the total number of feedback sparsity constraints (i.e., the number of 1's in $\bar{F}^c$): $n_s = |\{(i,j) : \bar{F}^c = [\bar{f}^c_{ij}],\ \bar{f}^c_{ij} = 1\}|$. Note that the constraint (2b) consists of $n_s$ non-trivial sparsity constraints and can be equivalently written as

$$Qf = \mathbf{0}_{n_s}, \qquad (4)$$

where $Q = [e_{q_1}\; e_{q_2}\; \cdots\; e_{q_{n_s}}]^T \in \{0,1\}^{n_s \times mn}$, with $\{q_1, \ldots, q_{n_s}\} = \mathrm{supp}(\mathrm{vec}(\bar{F}^c))$ being the set of indices indicating the ones in $\mathrm{vec}(\bar{F}^c)$.

Lemma 3. (Jacobian of the constraints) The Jacobian of the equality constraints (2a)-(2b) is given by

$$J_b(z) = \begin{bmatrix} A_c(F) \oplus (-\Lambda_R^T) & X_R^T \otimes B \\ \mathbf{0}_{n_s \times n^2} & Q \end{bmatrix}. \qquad (5)$$

Proof. We construct the Jacobian $J_b$ by rewriting the constraints (3) and (2b) in vectorized form and taking their derivatives with respect to $z$. Constraint (3) can be vectorized in the following two different ways (using P.3 and P.4):

$$[(A + BF) \oplus (-\Lambda_R^T)]\,x_R = \mathbf{0}_{n^2}, \qquad (6a)$$
$$[A \oplus (-\Lambda_R^T)]\,x_R + (X_R^T \otimes B)\,f = \mathbf{0}_{n^2}. \qquad (6b)$$

Differentiating (6a) w.r.t. $x_R$ and (6b) w.r.t. $f$ yields the first (block) row of $J_b$. Differentiating (4) w.r.t. $z$ yields the second (block) row of $J_b$, thus completing the proof.

We now state the optimality conditions for the problem (2).

Theorem 4. (Optimality conditions) Let $(\hat{X}, \hat{F})$ (equivalently, $\hat{z} = [\hat{x}_R^T, \hat{f}^T]^T$) satisfy the constraints (2a)-(2b). Let $\hat{L} = [\hat{l}_i]$, $i = 1, \cdots, n$, be the left eigenvector matrix of $A_c(\hat{F})$, and let $\hat{L}_R$ be its real counterpart, constructed by replacing $[\hat{l}_i, \hat{l}_i^*]$ with $[\mathrm{Re}(\hat{l}_i), -\mathrm{Im}(\hat{l}_i)]$. Let $J_b(z)$ be defined as in Lemma 3 and $P(z) = I_{n^2+mn} - J_b^+(z)J_b(z)$.
Further, define $\bar{L} \triangleq 4\,T_{n,m}(B^T\hat{L}_R \otimes I_n)$ and let

$$\hat{D} \triangleq \begin{bmatrix} \mathbf{0}_{n^2 \times n^2} & \bar{L}^T \\ \bar{L} & 2I_{mn} \end{bmatrix}. \qquad (7)$$

Then, $(\hat{X}, \hat{F})$ is a local minimum of the optimization problem (2) if and only if

$$\hat{F} = -\bar{F} \circ (B^T\hat{L}\hat{X}^T), \qquad (8a)$$
$$(A + B\hat{F})\hat{X} = \hat{X}\Lambda, \qquad (8b)$$
$$(A + B\hat{F})^T\hat{L} = \hat{L}\Lambda, \qquad (8c)$$
$$J_b(\hat{z}) \text{ is full rank}, \qquad (8d)$$
$$P(\hat{z})\,\hat{D}\,P(\hat{z}) > 0. \qquad (8e)$$

Proof. We prove the result using the Lagrange theorem for equality-constrained minimization. Let $L_R \in \mathbb{R}^{n \times n}$ and $M \in \mathbb{R}^{m \times n}$ be the Lagrange multipliers associated with constraints (3) and (2b), respectively. The Lagrange function for the optimization problem (2) is given by

$$\mathcal{L} \overset{P.2}{=} \tfrac{1}{2}\mathrm{tr}(F^TF) + 2\cdot\mathbf{1}_n^T[L_R \circ (A_c(F)X_R - X_R\Lambda_R)]\mathbf{1}_n + \mathbf{1}_m^T[M \circ (\bar{F}^c \circ F)]\mathbf{1}_n$$
$$\overset{P.6,\,P.7}{=} \tfrac{1}{2}\mathrm{tr}(F^TF) + 2\,\mathrm{tr}[L_R^T(A_c(F)X_R - X_R\Lambda_R)] + \mathrm{tr}[(M \circ \bar{F}^c)^TF].$$

Necessity: We first derive the first-order necessary conditions for a stationary point. Differentiating $\mathcal{L}$ w.r.t. $X_R$ and setting the derivative to zero, we get

$$\frac{d\mathcal{L}}{dX_R} \overset{P.9}{=} 2[A_c^T(F)L_R - L_R\Lambda_R^T] = \mathbf{0}_{n \times n}. \qquad (9)$$

The real equation (9) is equivalent to the complex equation (8c). Equation (8b) is a restatement of (2a) for the optimal $(\hat{F}, \hat{X})$. Differentiating $\mathcal{L}$ w.r.t. $F$, we get

$$\frac{d\mathcal{L}}{dF} \overset{P.9}{=} F + 2B^TL_RX_R^T + M \circ \bar{F}^c = \mathbf{0}_{m \times n}. \qquad (10)$$

Taking the Hadamard product of (10) with $\bar{F}^c$ and using (2b), we get (since $\bar{F}^c \circ \bar{F}^c = \bar{F}^c$)

$$\bar{F}^c \circ (2B^TL_RX_R^T) + M \circ \bar{F}^c = \mathbf{0}_{m \times n}. \qquad (11)$$

Substituting $M \circ \bar{F}^c$ from (11) into (10), we get

$$F = -\bar{F} \circ (2B^TL_RX_R^T) \overset{(a)}{=} -\bar{F} \circ (B^TLX^T),$$

where (a) follows from the definitions of $L_R$ and $X_R$ and Remark 2. Equation (8d) is the necessary regularity condition and follows from Lemma 3.

Sufficiency: Next, we derive the second-order sufficient condition for a local minimum by calculating the Hessian of $\mathcal{L}$. Taking the differential of $\mathcal{L}$ twice, we get

$$d^2\mathcal{L} = \mathrm{tr}((dF)^TdF) + 4\,\mathrm{tr}(L_R^TB\,dF\,dX_R) \overset{P.2}{=} df^Tdf + 4\,\mathrm{vec}^T(dF^TB^TL_R)\,dx_R$$
$$\overset{P.3,\,P.5}{=} df^Tdf + df^T\bar{L}\,dx = \tfrac{1}{2}\begin{bmatrix} dx^T & df^T \end{bmatrix}\hat{D}\begin{bmatrix} dx \\ df \end{bmatrix},$$

where $\hat{D}$ is the Hessian (cf. P.11) defined in (7). The second-order sufficient optimality condition requires the Hessian to be positive definite on the kernel of the Jacobian at the optimal point [44, Chapter 11]. That is, $y^T\hat{D}y > 0$ for all $y$ such that $J_b(z)y = 0$. This condition is equivalent to $P(z)\hat{D}P(z) > 0$, since $J_b(z)y = 0$ if and only if $y = P(z)s$ for some $s \in \mathbb{R}^{n^2+mn}$ [44]. Since the projection matrix $P(z)$ is symmetric, (8e) follows, and this concludes the proof.

Observe that the Hadamard product in (8a) guarantees that the feedback matrix satisfies the sparsity constraints given in (2b). However, the optimal sparse feedback $\hat{F}$ cannot be obtained by sparsification of the optimal non-sparse feedback. The optimality condition (8a) is an implicit condition in terms of the closed-loop right and left eigenvector matrices. Next, we provide an explicit optimality condition in terms of $\{\hat{L}, \hat{X}\}$.

Corollary 5. (Stationary point of (2)) $\hat{Z} \triangleq [\hat{X}^T, \hat{L}^T]^T$ is a stationary point of the optimization problem (2) if and only if

$$\bar{A}\hat{Z} - \hat{Z}\Lambda = \bar{B}_1\,[\mathcal{F} \circ (\bar{B}_1^T\bar{I}\hat{Z}\hat{Z}^T\bar{B}_2)]\,\bar{B}_2^T\hat{Z}, \qquad (12)$$

where

$$\bar{A} \triangleq \begin{bmatrix} A & \mathbf{0}_{n \times n} \\ \mathbf{0}_{n \times n} & A^T \end{bmatrix}, \;\; \bar{B}_1 \triangleq \begin{bmatrix} B & \mathbf{0}_{n \times n} \\ \mathbf{0}_{n \times m} & I_n \end{bmatrix}, \;\; \bar{B}_2 \triangleq \begin{bmatrix} I_n & \mathbf{0}_{n \times m} \\ \mathbf{0}_{n \times n} & B \end{bmatrix},$$
$$\mathcal{F} \triangleq \begin{bmatrix} \bar{F} & \mathbf{0}_{m \times m} \\ \mathbf{0}_{n \times n} & \bar{F}^T \end{bmatrix}, \;\; \text{and} \;\; \bar{I} \triangleq \begin{bmatrix} \mathbf{0}_{n \times n} & I_n \\ I_n & \mathbf{0}_{n \times n} \end{bmatrix}.$$

Proof.
Combining (8b) and (8c) and using $\Lambda^T = \Lambda$, we get

$$\begin{bmatrix} A\hat{X} - \hat{X}\Lambda \\ A^T\hat{L} - \hat{L}\Lambda \end{bmatrix} = -\begin{bmatrix} B\hat{F}\hat{X} \\ \hat{F}^TB^T\hat{L} \end{bmatrix}$$
$$\Rightarrow\; \bar{A}\hat{Z} - \hat{Z}\Lambda = -\bar{B}_1\begin{bmatrix} \hat{F} & 0 \\ 0 & \hat{F}^T \end{bmatrix}\bar{B}_2^T\hat{Z} = \bar{B}_1\begin{bmatrix} \bar{F} \circ (B^T\hat{L}\hat{X}^T) & 0 \\ 0 & \bar{F}^T \circ (\hat{X}\hat{L}^TB) \end{bmatrix}\bar{B}_2^T\hat{Z}$$
$$= \bar{B}_1\left[\mathcal{F} \circ \left(\bar{B}_1^T\begin{bmatrix} \hat{L}\hat{X}^T & 0 \\ 0 & \hat{X}\hat{L}^T \end{bmatrix}\bar{B}_2\right)\right]\bar{B}_2^T\hat{Z} = \bar{B}_1\left[\mathcal{F} \circ (\bar{B}_1^T\bar{I}\hat{Z}\hat{Z}^T\bar{B}_2)\right]\bar{B}_2^T\hat{Z},$$

where the last equality holds because the Hadamard product with $\mathcal{F}$ annihilates the off-diagonal blocks of $\bar{I}\hat{Z}\hat{Z}^T$, leaving only the diagonal blocks $\hat{L}\hat{X}^T$ and $\hat{X}\hat{L}^T$.

Remark 6. (Partial spectrum assignment) The results of Theorem 4 and Corollary 5 are also valid when only $p < n$ eigenvalues are specified (the remaining eigenvalues being functionally related to them; see also the discussion below Assumption 2). In this case, $\Lambda \in \mathbb{C}^{p \times p}$, $\hat{X} \in \mathbb{C}^{n \times p}$ and $\hat{L} \in \mathbb{C}^{n \times p}$. While partial assignment may be useful in some applications, in this paper we focus on assigning all the eigenvalues.

Remark 7. (General eigenstructure assignment) Although the optimization problem (2) is formulated by considering $\Lambda$ to be diagonal, the result in Theorem 4 is valid for any general $\Lambda$ satisfying $\Gamma(\Lambda) = \mathcal{S}$. For instance, we can choose $\Lambda$ in Jordan canonical form. However, note that for a general $\Lambda$, $X$ ceases to be an eigenvector matrix.

A solution of the optimization problem (2) can be obtained by numerically/iteratively solving the matrix equation (12), which resembles a Sylvester-type equation with a non-linear right-hand side, and then using (8a) to compute the feedback matrix. The regularity and local optimality of the solution can be verified using (8d) and (8e), respectively. Since the optimization problem is not convex, only local minima can be obtained via this procedure. To improve upon the local solutions, the procedure can be repeated with different initial conditions for solving (12). However, convergence to a global minimum is not guaranteed. In fact, the convergence of the iterative techniques for solving (12) depends substantially on the initial conditions, and if these are not chosen properly, convergence may not be achieved at all. Further, a solution of (12) can also represent a local maximum. Therefore, instead of solving (12) directly, we use a different approach based on gradient descent to obtain a locally minimum solution. Details of this approach and the corresponding algorithms are presented in Section IV.

A. Results for the non-sparse MGEAP

In this subsection, we present some results specific to the case when the optimization problem (2) does not have any sparsity constraints (i.e., $\bar{F} = \mathbf{1}_{m \times n}$). Although the non-sparse MGEAP has been studied previously, these results are novel and further illustrate the properties of an optimal solution. We begin by presenting a geometric interpretation of the optimality conditions in Theorem 4 with $B = I_n$, i.e., when all the entries of $A$ can be perturbed independently. In this case, the optimization problem (2) can be written as

$$\min_X \;\; \frac{1}{2}\,\|A - X\Lambda X^{-1}\|_F^2. \qquad (13)$$

Since $A$ and $R(X) \triangleq X\Lambda X^{-1}$ are elements (or vectors) of the matrix inner-product space with the Frobenius norm, a solution of the optimization problem (13) is given by the projection of $A$ on the manifold $\mathcal{M} \triangleq \{R(X) : X \text{ is non-singular}\}$. This projection can be obtained by solving the normal equation, which states that the optimal error vector $\hat{F} = A - \hat{X}\Lambda\hat{X}^{-1}$ should be orthogonal to the tangent plane of the manifold $\mathcal{M}$ at the optimal point $\hat{X}$ [45].
The next result shows that the optimality conditions derived in Theorem 4 are in fact the normal equations for the optimization problem (13).

Lemma 8. (Geometric interpretation) Let $\bar{F} = \mathbf{1}_{m \times n}$ and $B = I_n$. Then, Equations (8a)-(8c) are equivalent to the following normal equation:

$$\langle \hat{F}, T_{\mathcal{M}}(\hat{X}) \rangle_F = 0, \qquad (14)$$

where $T_{\mathcal{M}}(X)$ denotes the tangent space of $\mathcal{M}$ at $X$.

Proof. We begin by characterizing the tangent space $T_{\mathcal{M}}(X)$, which is given by the first-order approximation of $R(X)$:

$$R(X + dX) = (X + dX)\Lambda(X + dX)^{-1} \overset{P.10}{=} R(X) + dX\,\Lambda X^{-1} - X\Lambda X^{-1}\,dX\,X^{-1} + \text{higher-order terms}.$$

Thus, the tangent space is given by

$$T_{\mathcal{M}}(X) = \{Y\Lambda X^{-1} - X\Lambda X^{-1}YX^{-1} : Y \in \mathbb{C}^{n \times n}\}.$$

Necessity: Using $\hat{F}$ given by (8a), we get

$$\langle \hat{F}, T_{\mathcal{M}}(\hat{X}) \rangle_F = \mathrm{tr}(\hat{F}^T(Y\Lambda\hat{X}^{-1} - \hat{X}\Lambda\hat{X}^{-1}Y\hat{X}^{-1})) = -\mathrm{tr}(\hat{X}\hat{L}^TY\Lambda\hat{X}^{-1}) + \mathrm{tr}(\hat{X}\hat{L}^T\hat{X}\Lambda\hat{X}^{-1}Y\hat{X}^{-1})$$
$$\overset{P.1}{=} -\mathrm{tr}(\hat{L}^TY\Lambda) + \mathrm{tr}(\hat{L}^T\hat{X}\Lambda\hat{X}^{-1}Y) \overset{(a)}{=} -\mathrm{tr}(\hat{L}^TY\Lambda) + \mathrm{tr}(\Lambda\hat{L}^T\hat{X}\hat{X}^{-1}Y) \overset{P.1}{=} 0,$$

where (a) follows from the fact that $\Lambda$ and $\hat{L}^T\hat{X}$ commute.

Sufficiency: From (14), we get

$$\mathrm{tr}(\hat{F}^T(Y\Lambda\hat{X}^{-1} - \hat{X}\Lambda\hat{X}^{-1}Y\hat{X}^{-1})) = 0 \;\overset{P.1}{\Rightarrow}\; \mathrm{tr}[(\Lambda\hat{X}^{-1}\hat{F}^T - \hat{X}^{-1}\hat{F}^T\hat{X}\Lambda\hat{X}^{-1})Y] = 0.$$

Since the above equation holds for all $Y \in \mathbb{C}^{n \times n}$, we get

$$\Lambda\hat{X}^{-1}\hat{F}^T - \hat{X}^{-1}\hat{F}^T\hat{X}\Lambda\hat{X}^{-1} = \mathbf{0}_{n \times n} \;\Rightarrow\; \hat{X}\Lambda\hat{X}^{-1}\hat{F}^T = \hat{F}^T\hat{X}\Lambda\hat{X}^{-1} \;\Rightarrow\; A_c(\hat{F})\hat{F}^T = \hat{F}^TA_c(\hat{F}).$$

Thus, $A_c(\hat{F})$ and $\hat{F}^T$ commute and have common left and right eigenspaces [46], i.e., $\hat{F}^T = -\hat{X}G\hat{X}^{-1} = -\hat{X}\hat{L}^T$, where $G$ is a diagonal matrix. This completes the proof.

Next, we show the equivalence of the non-sparse MGEAP for two orthogonally similar systems.

Lemma 9. (Invariance under orthogonal transformation) Let $\bar{F} = \mathbf{1}_{m \times n}$ and let $(A_1, B_1)$, $(A_2, B_2)$ be two orthogonally similar systems such that $A_2 = PA_1P^{-1}$ and $B_2 = PB_1$, with $P$ an orthogonal matrix. Let the optimal solutions of (2) for the two systems be denoted by $(\hat{X}_1, \hat{L}_1, \hat{F}_1)$ and $(\hat{X}_2, \hat{L}_2, \hat{F}_2)$, respectively. Then,

$$\hat{X}_2 = P\hat{X}_1, \quad \hat{L}_2 = P\hat{L}_1, \quad \hat{F}_2 = \hat{F}_1P^T, \quad \text{and} \quad \|\hat{F}_1\|_F = \|\hat{F}_2\|_F. \qquad (15)$$

Proof. From (8b), we have

$$(A_2 + B_2\hat{F}_2)\hat{X}_2 = \hat{X}_2\Lambda \;\Rightarrow\; (PA_1P^{-1} + PB_1\hat{F}_1P^T)P\hat{X}_1 = P\hat{X}_1\Lambda \;\Rightarrow\; (A_1 + B_1\hat{F}_1)\hat{X}_1 = \hat{X}_1\Lambda.$$

A similar relation can be shown between $\hat{L}_1$ and $\hat{L}_2$ using (8c). Next, from (8a), we have $\hat{F}_2 = -B_2^T\hat{L}_2\hat{X}_2^T = -B_1^T\hat{L}_1\hat{X}_1^TP^T = \hat{F}_1P^T$. Finally, $\|\hat{F}_1\|_F^2 = \mathrm{tr}(\hat{F}_1^T\hat{F}_1) \overset{P.1}{=} \mathrm{tr}(\hat{F}_2^T\hat{F}_2) = \|\hat{F}_2\|_F^2$.

Recall from Remark 6 that Theorem 4 is also valid for the MGEAP with partial spectrum assignment. Next, we consider the case when only one real eigenvalue needs to be assigned while the remaining eigenvalues are unspecified. In this special case, we can explicitly characterize the global minimum of (2), as shown in the next result.

Corollary 10. (One real eigenvalue assignment) Let $\bar{F} = \mathbf{1}_{m \times n}$, $\Lambda \in \mathbb{R}$, and $B = I_n$. Then, the global minimum of the optimization problem (2) is given by

$$\hat{F}_{gl} = -\sigma_{\min}(A - \Lambda I_n)\,uv^T,$$

where $u$ and $v$ are unit-norm left and right singular vectors, respectively, corresponding to $\sigma_{\min}(A - \Lambda I_n)$. Further, $\|\hat{F}_{gl}\|_F = \sigma_{\min}(A - \Lambda I_n)$.

Proof. Since $\Lambda \in \mathbb{R}$, $\hat{X}$ reduces to a vector $\hat{x} \in \mathbb{R}^n$ with $\|\hat{x}\|_2 = 1$, and $\hat{L}$ to a vector $\hat{l} \in \mathbb{R}^n$. Let $\hat{l} = \beta\tilde{l}$, where $\beta \triangleq \|\hat{l}\|_2 > 0$.
Substituting $\hat{F} = -\hat{l}\hat{x}^T$ from (8a) into (8b)-(8c), we get

$$(A - \hat{l}\hat{x}^T)\hat{x} = \hat{x}\Lambda \;\Rightarrow\; (A - \Lambda I_n)\hat{x} = \beta\tilde{l}, \quad \text{and}$$
$$(A^T - \hat{x}\hat{l}^T)\hat{l} = \hat{l}\Lambda \;\Rightarrow\; (A - \Lambda I_n)^T\tilde{l} = \beta\hat{x}.$$

The above two equations imply that the unit-norm vectors $\hat{x}$ and $\tilde{l}$ are right and left singular vectors, respectively, of $A - \Lambda I_n$ associated with the singular value $\beta$. Since $\|\hat{F}\|_F^2 = \mathrm{tr}(\hat{F}^T\hat{F}) = \mathrm{tr}(\hat{x}\hat{l}^T\hat{l}\hat{x}^T) = \beta^2$, we pick $\beta$ as the minimum singular value of $A - \Lambda I_n$, and the proof is complete.

We conclude this subsection by presenting a brief comparison of the non-sparse MGEAP solution with deflation techniques for eigenvalue assignment. For $B = I_n$, an alternative method to solve the non-sparse EAP is the Wielandt deflation technique [47]. Wielandt deflation achieves pole assignment by modifying the matrix $A$ in $n$ steps, $A \to A_1 \to A_2 \to \cdots \to A_n$. Step $i$ shifts one eigenvalue of $A_{i-1}$ to a desired location $\lambda_i$ while keeping the remaining eigenvalues of $A_{i-1}$ fixed. This is achieved using the feedback $F^i_{df} = -(\mu_i - \lambda_i)v_iz_i^T$, where $(\mu_i, v_i)$ is any eigenvalue/right-eigenvector pair of $A_{i-1}$ and $z_i$ is any vector such that $z_i^Tv_i = 1$. Thus, the overall feedback that solves the EAP is given by $F_{df} = \sum_{i=1}^n F^i_{df}$. It is interesting to compare the optimal feedback expression in (8a), $\hat{F} = -\sum_{i=1}^n \hat{l}_i\hat{x}_i^T$, with the deflation feedback. Both feedbacks are sums of $n$ rank-one matrices. However, the Wielandt deflation has an inherent special structure and the restrictive property that, in each step, all except one eigenvalue remain unchanged. Furthermore, each rank-one term in $\hat{F}$ and $F_{df}$ involves the right/left eigenvectors of the closed-loop and open-loop matrix, respectively. Clearly, since $\hat{F}$ is the minimum-gain solution of (2), $\|\hat{F}\|_F \leq \|F_{df}\|_F$.

IV. SOLUTION ALGORITHMS

In this section, we present an iterative algorithm to obtain a solution of the sparse MGEAP in (2). To develop this algorithm, we first present two algorithms for computing non-sparse and approximately-sparse solutions of the MGEAP, respectively. Next, we present two heuristic algorithms to obtain a sparse solution of the EAP (i.e., any sparse solution, not necessarily of minimum gain). Finally, we use these algorithms to develop the sparse MGEAP algorithm. Although our focus is the sparse MGEAP algorithm, the other algorithms presented in this section are, to the best of our knowledge, novel in themselves. We make the following assumptions:

Assumption 3. The triplet $(A, B, \bar{F})$ has no fixed modes, i.e., $\Gamma_f(A, B, \bar{F}) = \emptyset$.

Assumption 4. The open-loop and closed-loop eigenvalue sets are disjoint, i.e., $\Gamma(A) \cap \Gamma(\Lambda) = \emptyset$, and $B$ has full column rank.

Assumption 4 is not restrictive: if $A$ and $\Lambda$ have common eigenvalues, we can use a preliminary sparse feedback $F_p$ to shift the eigenvalues of $A$ to other locations such that $\Gamma(A + BF_p) \cap \Gamma(\Lambda) = \emptyset$. Due to Assumption 3, such an $F_p$ always exists. Then, we can solve the modified MGEAP with parameters $(A + BF_p, B, \Lambda, \bar{F})$ (although the minimization cost of the modified MGEAP is $\frac{1}{2}\|F_p + F\|_F^2$, it can be solved using techniques similar to those for the MGEAP in (2)). If $F$ is the sparse solution of this modified problem, then the solution of the original problem is $F_p + F$.

To avoid complex-domain calculations in the algorithms, we use the real eigenvalue assignment constraint (3). For convenience, and with a slight abuse of notation, we denote $X_R$ and $\Lambda_R$ by $X$ and $\Lambda$, respectively, in this section. Note that the invertibility of $X$ is equivalent to the invertibility of $X_R$.
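As an aside, Corollary 10 is straightforward to verify numerically. The following sketch (assuming NumPy; the matrix $A$ and the location $\lambda$ are illustrative) builds $\hat{F}_{gl}$ from the smallest singular triple of $A - \lambda I_n$ and confirms that $\lambda$ is assigned with gain $\sigma_{\min}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 5, -1.0
A = rng.standard_normal((n, n))

U, s, Vt = np.linalg.svd(A - lam * np.eye(n))   # singular values sorted descending
u, sigma_min, v = U[:, -1], s[-1], Vt[-1, :]    # smallest singular triple

F = -sigma_min * np.outer(u, v)                 # F_gl = -sigma_min * u v^T (B = I_n)
assert np.isclose(np.linalg.norm(F, 'fro'), sigma_min)
print(np.linalg.eigvals(A + F))                 # one eigenvalue lands at lam
```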
A. Algorithms for the non-sparse MGEAP

We now present two iterative algorithms to obtain non-sparse and approximately-sparse solutions of the MGEAP, respectively. To develop the algorithms, we use the Sylvester-equation-based parametrization [14], [48]. In this parametrization, instead of treating $(F, X)$ as free variables, we define a parameter $G \triangleq FX \in \mathbb{R}^{m \times n}$ as the free variable. With this parametrization, the non-sparse MGEAP is stated as:

$$\min_G \;\; J = \frac{1}{2}\,\|F\|_F^2 \qquad (16)$$
$$\text{s.t.} \quad AX - X\Lambda + BG = 0, \qquad (16a)$$
$$F = GX^{-1}. \qquad (16b)$$

Note that, for any given $G$, we can solve the Sylvester equation (16a) to obtain $X$. Assumption 4 guarantees that (16a) has a unique solution [49]. Further, we can use (16b) to obtain a non-sparse feedback matrix $F$. Thus, (16) is an unconstrained optimization problem in the free parameter $G$. The Sylvester-based parametrization requires the unique solution $X$ of (16a) to be non-singular, which holds generically if (i) $(A, -BG)$ is controllable and (ii) $(\Lambda, -BG)$ is observable [50]. Since the system has no fixed modes, $(A, B)$ is controllable [27], which implies that condition (i) holds generically for almost all $G$. Further, since $B$ has full column rank, condition (ii) is guaranteed if $(\Lambda, -G)$ is observable. These conditions are mild and are satisfied in almost all instances, as confirmed by our simulations (see Section V).

The next result provides the gradient and Hessian of the cost $J$ with respect to the parameter $g \triangleq \mathrm{vec}(G)$.

Lemma 11. (Gradient and Hessian of $J$) The gradient and Hessian of the cost $J$ in (16) with respect to $g$ are given by

$$\frac{dJ}{dg} = \underbrace{\left[(X^{-1} \otimes I_m) + (I_n \otimes B^T)\tilde{A}^{-T}(X^{-1} \otimes F^T)\right]}_{\triangleq\, Z(F,X)}\,f, \qquad (17)$$

$$\frac{d^2J}{dg^2} \triangleq H(F,X) = Z(F,X)Z^T(F,X) + Z_1(F,X)Z^T(F,X) + Z(F,X)Z_1^T(F,X), \qquad (18)$$

where $Z_1(F,X) \triangleq (I_n \otimes B^T)\tilde{A}^{-T}(X^{-1}F^T \otimes I_n)T_{m,n}$ and $\tilde{A} \triangleq A \oplus (-\Lambda^T)$.

Proof. Vectorizing (16a) using P.3 and taking the differential, we get

$$\tilde{A}x + (I_n \otimes B)g = 0 \;\Rightarrow\; dx = -\tilde{A}^{-1}(I_n \otimes B)\,dg. \qquad (19)$$

Note that, due to Assumption 4, $\tilde{A}$ is invertible. Taking the differential of (16b) and vectorizing, we get

$$dF \overset{P.10}{=} dG\,X^{-1} - \underbrace{GX^{-1}}_{F}\,dX\,X^{-1} \qquad (20)$$

$$\overset{P.3,\,P.4}{\Rightarrow}\; df = (X^{-T} \otimes I_m)\,dg - (X^{-T} \otimes F)\,dx \overset{(19)}{=} \underbrace{[(X^{-T} \otimes I_m) + (X^{-T} \otimes F)\tilde{A}^{-1}(I_n \otimes B)]}_{\overset{P.5}{=}\, Z^T(F,X)}\,dg. \qquad (21)$$

The differential of the cost $J \overset{P.2}{=} \frac{1}{2}f^Tf$ is given by $dJ = f^Tdf$. Using (21) and P.11, we get (17). To derive the Hessian, we compute the second-order differentials of the involved variables. Note that, since $g$ (equivalently, $G$) is an independent variable, $d^2g = 0$ ($d^2G = 0$) [40]. Further, the second-order differential of (16a) yields $d^2X = 0$. Rewriting (20) as $dF\,X = dG - F\,dX$, and taking its differential and vectorizing, we get

$$(d^2F)X + (dF)(dX) = -(dF)(dX) \;\Rightarrow\; d^2F = -2(dF)(dX)X^{-1} \;\overset{P.4}{\Rightarrow}\; d^2f = -2(X^{-T} \otimes dF)\,dx. \qquad (22)$$

Taking the second-order differential of $J$, we get

$$d^2J = (df)^Tdf + f^T(d^2f) = (df)^Tdf + (d^2f)^Tf \overset{(21),(22),P.5}{=} dg^TZZ^Tdg - 2\,dx^T\underbrace{(X^{-1} \otimes (dF)^T)\,f}_{\overset{P.4}{=}\;\mathrm{vec}((dF)^TFX^{-T})}$$
$$\overset{P.4}{=} dg^TZZ^Tdg - 2\,dx^T(X^{-1}F^T \otimes I_n)T_{m,n}\,df \overset{(19),(21)}{=} dg^TZZ^Tdg + \underbrace{2\,dg^T(I_n \otimes B^T)\tilde{A}^{-T}(X^{-1}F^T \otimes I_n)T_{m,n}Z^Tdg}_{dg^T(Z_1Z^T + ZZ_1^T)dg}. \qquad (23)$$

The Hessian in (18) follows from (23) and P.11.

The first-order optimality condition of the unconstrained problem (16) is $\frac{dJ}{dg} = Z(F,X)f = 0$. The next result shows that this condition is equivalent to the first-order optimality conditions of Theorem 4 without sparsity constraints.

Corollary 12. (Equivalence of first-order optimality conditions) Let $\bar{F} = \mathbf{1}_{m \times n}$. Then, the first-order optimality condition $Z(\hat{F}, \hat{X})\hat{f} = 0$ of (16) is equivalent to (8a)-(8c), where

$$\hat{l} \triangleq \mathrm{vec}(\hat{L}) = \tilde{A}^{-T}(\hat{X}^{-1} \otimes \hat{F}^T)\hat{f}. \qquad (24)$$

Proof. The optimality condition (8b) follows from (16a)-(16b). Equation (8a) can be rewritten as $\hat{F}\hat{X}^{-T} + B^T\hat{L} = 0$, and its vectorization using P.3 yields $Z(\hat{F}, \hat{X})\hat{f} = 0$. Finally, vectorization of the left side of (8c) yields

$$\mathrm{vec}[(A + B\hat{F})^T\hat{L} - \hat{L}\Lambda] = \mathrm{vec}[A^T\hat{L} - \hat{L}\Lambda + (B\hat{F})^T\hat{L}] \overset{P.3,\,P.5}{=} \tilde{A}^T\hat{l} + (I_n \otimes (B\hat{F})^T)\hat{l}$$
$$\overset{(24)}{=} (\hat{X}^{-1} \otimes \hat{F}^T)\hat{f} + (I_n \otimes (B\hat{F})^T)\tilde{A}^{-T}(\hat{X}^{-1} \otimes \hat{F}^T)\hat{f} \overset{P.5}{=} (I_n \otimes \hat{F}^T)Z(\hat{F}, \hat{X})\hat{f} = 0.$$

To conclude, note that $\hat{L}$ defined via (24) is indeed the left eigenvector matrix of $A_c(\hat{F})$, as required in (8c).

Using Lemma 11, we next present a steepest/Newton descent algorithm to solve the non-sparse MGEAP (16) [44]. In the algorithms presented in this section, we use the matrices $(G, F, X)$ and their respective vectorizations $(g, f, x)$ interchangeably. The conversion of a matrix to its vectorization (and vice versa) is not explicitly stated in the steps of the algorithms and is assumed wherever necessary.

Algorithm 1: Non-sparse solution to the MGEAP
  Input: $A, B, \Lambda, G_0$.
  Output: Local minimum $(\hat{F}, \hat{X})$ of (16).
  Initialize: $G \leftarrow G_0$, $X_0 \leftarrow$ solution of (16a), $F_0 \leftarrow G_0X_0^{-1}$.
  repeat
    1: $\alpha \leftarrow$ compute step size (see below);
    2: $g \leftarrow g - \alpha Z(F,X)f$ (steepest descent), or
    3: $g \leftarrow g - \alpha[H(F,X) + V(F,X)]^{-1}Z(F,X)f$ (Newton descent);
       $X \leftarrow$ solution of the Sylvester equation (16a);
       $F \leftarrow GX^{-1}$;
  until convergence;
  return $(F, X)$

Steps 2 and 3 of Algorithm 1 represent the steepest descent and (damped) Newton descent steps, respectively. Since, in general, the Hessian $H(F,X)$ is not positive definite, the Newton descent step may not result in a decrease of the cost. Therefore, we add a Hermitian matrix $V(F,X)$ to the Hessian to make it positive definite [44]. We comment on the choice of $V(F,X)$ in Section V. In step 1, the step size $\alpha$ can be determined by backtracking line search or Armijo's rule [44]. For a detailed discussion of steepest/Newton descent methods, the reader is referred to [44]. The computationally intensive steps in Algorithm 1 are solving the Sylvester equation (16a) and evaluating the inverses of $X$ and $H + V$. Note that the expression of the gradient in (17) is similar to the expression provided in [14]. However, the expression of the Hessian in (18) is new, and it allows us to implement Newton descent, whose convergence is considerably faster than that of steepest descent.
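The following is a compact NumPy/SciPy sketch of one steepest-descent iteration of Algorithm 1 (all names are illustrative; the Newton variant would instead solve a linear system with the modified Hessian $H + V$):

```python
import numpy as np
from scipy.linalg import solve_sylvester

def algorithm1_step(A, B, Lam, G, alpha):
    """One steepest-descent step on the parameter G, cf. Lemma 11.

    Lam is the real form Lambda_R. Returns the updated G and the cost J.
    """
    n, m = A.shape[0], B.shape[1]
    X = solve_sylvester(A, -Lam, -B @ G)     # (16a): A X - X Lam + B G = 0
    Xinv = np.linalg.inv(X)
    F = G @ Xinv                             # (16b)
    A_tilde = np.kron(np.eye(n), A) - np.kron(Lam.T, np.eye(n))  # A ⊕ (-Lam^T)
    Z = np.kron(Xinv, np.eye(m)) + \
        np.kron(np.eye(n), B.T) @ np.linalg.inv(A_tilde).T @ np.kron(Xinv, F.T)
    g = G.flatten(order='F') - alpha * (Z @ F.flatten(order='F'))  # gradient (17)
    return g.reshape((m, n), order='F'), 0.5 * np.linalg.norm(F, 'fro') ** 2
```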
Next, we present a relaxation of the optimization problem (2) and a corresponding algorithm that provides approximately-sparse solutions of the MGEAP. We remove the explicit feedback sparsity constraints (2b) and modify the cost function to penalize violations of these sparsity constraints. Using the Sylvester-equation-based parametrization, the relaxed optimization problem is stated as:

$$\min_G \;\; J_W = \frac{1}{2}\,\|W \circ F\|_F^2 \qquad (25)$$
$$\text{s.t.} \quad \text{(16a) and (16b) hold,}$$

where $W \in \mathbb{R}^{m \times n}$ is a weighting matrix that penalizes violations of the sparsity constraints, given by

$$W_{ij} = \begin{cases} 1 & \text{if } \bar{F}_{ij} = 1, \text{ and} \\ w \gg 1 & \text{if } \bar{F}_{ij} = 0. \end{cases}$$

As the penalty weights of $W$ corresponding to the sparse entries of $F$ increase, an optimal solution of (25) becomes more sparse and approaches the optimal solution of (2). Note that the relaxed problem (25) corresponds closely to the non-sparse MGEAP (16). Thus, we use a similar gradient-based approach to obtain its solution.

Lemma 13. (Gradient and Hessian of $J_W$) The gradient and Hessian of the cost $J_W$ in (25) with respect to $g$ are given by

$$\frac{dJ_W}{dg} = Z(F,X)\bar{W}f, \qquad (26)$$
$$\frac{d^2J_W}{dg^2} \triangleq H_W(F,X) = Z(F,X)\bar{W}Z^T(F,X) + Z_{1,W}(F,X)Z^T(F,X) + Z(F,X)Z_{1,W}^T(F,X), \qquad (27)$$

where $\bar{W} \triangleq \mathrm{diag}(\mathrm{vec}(W \circ W))$ and $Z_{1,W}(F,X) \triangleq (I_n \otimes B^T)\tilde{A}^{-T}(X^{-1}(W \circ W \circ F)^T \otimes I_n)T_{m,n}$.

Proof. Since the constraints of problems (16) and (25) coincide, Equations (19)-(22) from Lemma 11 also hold for problem (25). Now,

$$J_W \overset{P.2,\,P.8}{=} \frac{1}{2}(\mathrm{vec}(W) \circ f)^T(\mathrm{vec}(W) \circ f) = \frac{1}{2}f^T\bar{W}f.$$

Thus, $dJ_W = f^T\bar{W}df$ and $d^2J_W = (df)^T\bar{W}df + f^T\bar{W}d^2f$. Using the relation $\mathrm{vec}(W \circ W \circ F) = \bar{W}f$, the remainder of the proof is similar to that of Lemma 11.

Using Lemma 13, we next present an algorithm to obtain an approximately-sparse solution of the MGEAP.

Algorithm 2: Approximately-sparse solution to the MGEAP
  Input: $A, B, \Lambda, W, G_0$.
  Output: Local minimum $(\hat{F}, \hat{X})$ of (25).
  Initialize: $G \leftarrow G_0$, $X_0 \leftarrow$ solution of (16a), $F_0 \leftarrow G_0X_0^{-1}$.
  repeat
    1: $\alpha \leftarrow$ update step size;
    2: $g \leftarrow g - \alpha Z(F,X)\bar{W}f$ (steepest descent), or
    3: $g \leftarrow g - \alpha[H_W(F,X) + V_W(F,X)]^{-1}Z(F,X)\bar{W}f$ (Newton descent);
       $X \leftarrow$ solution of the Sylvester equation (16a);
       $F \leftarrow GX^{-1}$;
  until convergence;
  return $(F, X)$

The step-size rule and the modification of the Hessian in Algorithm 2 are similar to those in Algorithm 1.

B. Algorithms for the sparse EAP

In this subsection, we present two heuristic algorithms to obtain a sparse solution of the EAP (not necessarily of minimum gain). This involves finding a pair $(F, X)$ that satisfies the eigenvalue assignment and sparsity constraints (2a) and (2b). We begin with a result that combines these two constraints.

Lemma 14. (Feasibility of $(F, X)$) An invertible matrix $X \in \mathbb{R}^{n \times n}$ satisfies (2a) and (2b) if and only if

$$\tilde{a}(x) \in \mathcal{R}(\tilde{B}(X)), \qquad (28)$$

where $\tilde{a}(x) \triangleq \tilde{A}x$, $\tilde{B}(X) \triangleq -(X^T \otimes B)P_{\bar{F}}$, and $P_{\bar{F}} \triangleq \mathrm{diag}(\mathrm{vec}(\bar{F}))$. Further, if (28) holds, then the set of sparse feedback matrices that satisfy (2a) and (2b) is given by

$$\mathcal{F}_X = \{P_{\bar{F}}f_{ns} : \tilde{B}(X)f_{ns} = \tilde{a}(x),\ f_{ns} \in \mathbb{R}^{mn}\}. \qquad (29)$$

Proof. Any feedback $f$ that satisfies the sparsity constraint (2b) can be written as $f = P_{\bar{F}}f_{ns}$, where $f_{ns} \in \mathbb{R}^{mn}$ is a non-sparse vector (since $f$ satisfies (4), it can also be characterized as $f = (I_{mn} - Q^+Q)f_{ns}$, and thus $P_{\bar{F}} = I_{mn} - Q^+Q$). Vectorizing (2a) using P.3 and P.4, and substituting $f = P_{\bar{F}}f_{ns}$, we get

$$\tilde{A}x = -(X^T \otimes B)P_{\bar{F}}f_{ns}, \qquad (30)$$

from which (28) and (29) follow.

Based on Lemma 14, we develop a heuristic algorithm to find a sparse solution of the EAP. The algorithm starts with a non-sparse EAP solution $(F_0, X_0)$ that does not satisfy (2b) and (28). Then, it takes repeated projections of $\tilde{a}(x)$ on $\mathcal{R}(\tilde{B}(X))$ to update $X$ and $F$, until a sparse solution is obtained.

Algorithm 3: Sparse solution to the EAP
  Input: $A, B, \Lambda, \bar{F}, G_0, \mathrm{iter}_{\max}$.
  Output: $(F, X)$ satisfying (2a) and (2b).
  Initialize: $G_0$, $X_0 \leftarrow$ solution of (16a), $F_{ns,0} \leftarrow G_0X_0^{-1}$, $i \leftarrow 0$.
  repeat
    1: $\tilde{a}(x) \leftarrow \tilde{B}(X)[\tilde{B}(X)]^+\tilde{a}(x)$;
    2: $x \leftarrow \tilde{A}^{-1}\tilde{a}(x)$;
    3: $X \leftarrow$ normalize $X$;
       $i \leftarrow i + 1$;
  until convergence or $i > \mathrm{iter}_{\max}$;
  return $(f \in \mathcal{F}_X$ in (29)$,\; X)$

In step 1 of Algorithm 3, we update $\tilde{a}(x)$ by projecting it on $\mathcal{R}(\tilde{B}(X))$. Step 2 computes $x$ from $\tilde{a}(x)$ using the fact that $\tilde{A}$ is invertible (cf. Assumption 4). Finally, the normalization in step 3 is performed to ensure the invertibility of $X$ (since $X$ is not an eigenvector matrix, we compute the eigenvectors from $X$, normalize them, and then recompute the real $X$).
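A minimal NumPy sketch of Algorithm 3 follows (names are illustrative, and $\Lambda$ denotes the real form; step 3 is simplified to column scaling, whereas the paper renormalizes via the eigenvectors computed from $X$):

```python
import numpy as np

def sparse_eap_projection(A, B, Lam, F_bar, X0, iters=1000, tol=1e-6):
    """Heuristic Algorithm 3: repeated projections of a(x) onto R(B(X))."""
    n, m = A.shape[0], B.shape[1]
    A_tilde = np.kron(np.eye(n), A) - np.kron(Lam.T, np.eye(n))
    P_F = np.diag(F_bar.flatten(order='F'))           # P_F = diag(vec(F_bar))
    X = X0.copy()
    for _ in range(iters):
        B_tilde = -np.kron(X.T, B) @ P_F              # B(X), cf. (28)
        a = A_tilde @ X.flatten(order='F')            # a(x) = A_tilde x
        a_proj = B_tilde @ np.linalg.lstsq(B_tilde, a, rcond=None)[0]  # step 1
        if np.linalg.norm(a - a_proj) < tol:          # projection error e_X
            break
        X = np.linalg.solve(A_tilde, a_proj).reshape((n, n), order='F')  # step 2
        X = X / np.linalg.norm(X, axis=0)             # step 3 (simplified)
    B_tilde = -np.kron(X.T, B) @ P_F
    f_ns = np.linalg.lstsq(B_tilde, A_tilde @ X.flatten(order='F'), rcond=None)[0]
    F = (P_F @ f_ns).reshape((m, n), order='F')       # f = P_F f_ns, cf. (29)
    return F, X
```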
Next, we develop a second heuristic algorithm for solving the sparse EAP using the non-sparse MGEAP solution of Algorithm 1. The algorithm starts with a non-sparse EAP solution $(F_0, X_0)$. In each iteration, it sparsifies the feedback to obtain $f = P_{\bar{F}}f_{ns}$ (or $F = \bar{F} \circ F_{ns}$), and then solves the following non-sparse MGEAP

$$\min_{F_{ns}, X} \;\; \frac{1}{2}\,\|F_{ns} - F\|_F^2 \qquad (31)$$
$$\text{s.t.} \quad (A + BF_{ns})X = X\Lambda, \qquad (31a)$$

to update $F_{ns}$ so that it is close to the sparse $F$. This algorithm resembles the alternating projection method [51] for finding a point in the intersection of two sets. The operation $F = \bar{F} \circ F_{ns}$ computes the projection of $F_{ns}$ on the convex set of sparse feedback matrices. The optimization problem (31) computes the projection of $F$ on the non-convex set of feedback matrices that assign the eigenvalues. Thus, using the heuristic of repeatedly sparsifying the solution of the non-sparse MGEAP in (31), the algorithm obtains a sparse solution. The alternating projection method is not guaranteed to converge in general when the sets are not convex. However, if the starting point is close to the two sets, convergence is guaranteed [52]. Note that a solution $\hat{F}_{ns}$ of the problem (31) with parameters $(A, B, \Lambda, F)$ satisfies $\hat{F}_{ns} = F + \hat{K}_{ns}$, where $\hat{K}_{ns}$ is a solution of the optimization problem (16) with parameters $(A + BF, B, \Lambda)$. Thus, we can use Algorithm 1 to solve (31).

Algorithm 4: Projection-based sparse solution to the EAP
  Input: $A, B, \Lambda, \bar{F}, G_0, \mathrm{iter}_{\max}$.
  Output: $(F, X)$ satisfying (2a) and (2b).
  Initialize: $G_0$, $X_0 \leftarrow$ solution of (16a), $F_{ns,0} \leftarrow G_0X_0^{-1}$, $i \leftarrow 0$.
  repeat
       $F \leftarrow \bar{F} \circ F_{ns}$;
    1: $(K_{ns}, X) \leftarrow$ Algorithm 1$(A + BF, B, \Lambda)$;
       $F_{ns} \leftarrow F + K_{ns}$;
       $i \leftarrow i + 1$;
  until convergence or $i > \mathrm{iter}_{\max}$;
  return $(F, X)$

Remark 15. (Comparison of EAP Algorithms 3 and 4)

1. Projection property: In general, Algorithm 3 results in a sparse EAP solution $F$ that is considerably different from the initial non-sparse $F_{ns,0}$. In contrast, Algorithm 4 provides a sparse solution $F$ that is close to $F_{ns,0}$. This is due to the fact that Algorithm 4 updates the feedback by solving the optimization problem (31), which minimizes the deviation between successive feedback matrices. Thus, Algorithm 4 provides a good (although not necessarily orthogonal) projection of a given non-sparse EAP solution $F_{ns,0}$ on the space of sparse EAP solutions.

2. Complexity: The computational complexity of Algorithm 4 is considerably larger than that of Algorithm 3, because Algorithm 4 requires the solution of a non-sparse MGEAP problem in each iteration, whereas Algorithm 3 only requires projections on the range space of a matrix in each iteration. Thus, Algorithm 3 is considerably faster than Algorithm 4.
3. Convergence: Although we do not formally prove the convergence of the heuristic Algorithms 3 and 4 in this paper, a comprehensive simulation study in Subsection V-B suggests that Algorithm 4 converges in almost all instances. In contrast, Algorithm 3 converges in far fewer instances, and its convergence deteriorates considerably as the number of sparsity constraints increases (see Subsection V-B).

If the starting point $F_{ns,0}$ of Algorithm 4 is sufficiently close to a local minimum $\hat{F}$ of (2), then its iterates converge (heuristically) to $\hat{F}$. In this case, Algorithm 4 can be used to solve the sparse MGEAP. However, convergence to $\hat{F}$ is not guaranteed for an arbitrary starting point.

C. Algorithm for the sparse MGEAP

In this subsection, we present an iterative projected gradient descent algorithm to compute sparse solutions of the MGEAP in (2). The algorithm consists of two loops. The outer loop is the same as the non-sparse MGEAP Algorithm 1 (using steepest descent) with an additional projection step, which constitutes the inner loop. Figure 1 illustrates one iteration of the algorithm. First, the gradient $Z(F_k, X_k)f_k$ is computed at the current point $G_k$ (equivalently, $(F_k, X_k)$, where $F_k$ is sparse). Next, the gradient is projected on the tangent plane of the sparsity constraints (4), which is given by

$$T_F = \left\{y \in \mathbb{R}^{mn} : \left[\frac{d(Qf)}{dg}\right]^Ty = 0\right\} = \left\{y \in \mathbb{R}^{mn} : QZ^T(F,X)\,y = 0\right\}. \qquad (32)$$

From P.12, the projection of the gradient on $T_F$ is given by $P_{F_k}Z(F_k, X_k)f_k$, where $P_{F_k} = I_{mn} - [QZ^T(F_k, X_k)]^+[QZ^T(F_k, X_k)]$. Next, a move is made in the direction of the projected gradient to obtain $G_{ns,k}$ (equivalently, $(F_{ns,k}, X_{ns,k})$). Finally, the orthogonal projection of $G_{ns,k}$ on the space of sparsity constraints is taken to obtain $G_{k+1}$ (equivalently, $(F_{k+1}, X_{k+1})$). This orthogonal projection is equivalent to solving (31) with the sparsity constraints (2b), which in turn is equivalent to the original sparse MGEAP (2). Thus, the orthogonal projection step is as difficult as the original optimization problem. To address this issue, we use the heuristic Algorithm 4 to compute the projections. Although the projections obtained using Algorithm 4 are not necessarily orthogonal, they are typically good (cf. Remark 15).

[Fig. 1. A single iteration of Algorithm 5.]

Algorithm 5: Sparse solution to the MGEAP
  Input: $A, B, \Lambda, \bar{F}, G_0, \mathrm{iter}_{\max}$.
  Output: Local minimum $(\hat{F}, \hat{X})$ of (2).
  Initialize: $(F_0, X_0) \leftarrow$ Algorithm 4$(A, B, \Lambda, \bar{F}, G_0, \mathrm{iter}_{\max})$, $G_0 \leftarrow F_0X_0$, $i \leftarrow 0$.
  repeat
       $\alpha \leftarrow$ update step size;
       $g_{ns} \leftarrow g - \alpha P_FZ(F,X)f$;
       $X_{ns} \leftarrow$ solution of the Sylvester equation (16a);
       $F_{ns} \leftarrow G_{ns}X_{ns}^{-1}$;
    1: $(F, X) \leftarrow$ Algorithm 4$(A, B, \Lambda, \bar{F}, G_{ns}, \mathrm{iter}_{\max})$;
       $G \leftarrow FX$;
       $i \leftarrow i + 1$;
  until convergence or $i > \mathrm{iter}_{\max}$;
  return $(F, X)$

Algorithm 5 is computationally intensive due to the use of Algorithm 4 in step 1 to compute the projection on the space of sparse matrices. In fact, the computational complexity of Algorithm 5 is one order higher than that of the non-sparse MGEAP Algorithm 1. However, the number of iterations of Algorithm 5 can be reduced considerably by initializing it with the approximately-sparse solution obtained by Algorithm 2. In this case, Algorithm 5 starts near the local minimum and, thus, its convergence time reduces considerably.
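In code, the tangent-space projection used in the outer loop is a direct application of P.12; a minimal sketch (assuming NumPy, with $Z$ and $Q$ as defined in (17) and (4)):

```python
import numpy as np

def project_gradient(Z, f, Q):
    """Project the gradient Z f onto the tangent plane T_F in (32)."""
    QZt = Q @ Z.T
    P_F = np.eye(Z.shape[0]) - np.linalg.pinv(QZt) @ QZt   # P.12 applied to Q Z^T
    return P_F @ (Z @ f)
```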
V. SIMULATION STUDIES

In this section, we present the implementation details of the algorithms developed in Section IV and provide numerical simulations to illustrate their properties.

A. Implementation aspects of the algorithms

In the Newton descent step (step 3) of Algorithm 1, we need to choose $V$ (omitting the parameter-dependence notation) such that $H + V$ is positive definite. We choose $V = \delta I_{mn} - Z_1Z^T - ZZ_1^T$, where $0 < \delta \ll 1$. Thus, from (18), we have $H + V = ZZ^T + \delta I_{mn}$. Clearly, $ZZ^T$ is positive semi-definite, and the small additive term $\delta I_{mn}$ ensures that $H + V$ is positive definite. Note that other choices of $V$ are also possible. In step 1 of Algorithm 1, we use the Armijo rule to compute the step size $\alpha$. Finally, we use $\left\|\frac{dJ}{dg}\right\|_2 < \epsilon$, $0 < \epsilon \ll 1$, as the convergence criterion of Algorithm 1. For Algorithm 2, we analogously choose $V_W = \delta I_{mn} - Z_{1,W}Z^T - ZZ_{1,W}^T$, and the same convergence criterion and step-size rule as in Algorithm 1. In both algorithms, if we encounter a scenario in which the solution $X$ of (16a) is singular (cf. the paragraph below (16b)), we perturb $G$ slightly such that the new solution is non-singular, and then continue the iterations. We remark that such instances occur extremely rarely in our simulations.

For the sparse EAP Algorithm 3, we use the convergence criterion $e_X = \|[I_{n^2} - \tilde{B}(X)(\tilde{B}(X))^+]\tilde{a}(x)\|_2 < \epsilon$, $0 < \epsilon \ll 1$. For Algorithm 4, we use the convergence criterion $e_F = \|F - \bar{F} \circ F\|_F < \epsilon$, $0 < \epsilon \ll 1$. Thus, the iterations of these algorithms stop when $x$ lies in a certain subspace and when the sparsity error becomes sufficiently small (within the specified tolerance), respectively. Further, note that Algorithm 4 uses Algorithm 1 in step 1 without specifying an initial condition $G_0$ for the latter. This is because, in step 1, we effectively run Algorithm 1 for multiple initial conditions in order to capture its global minimum. We remark that the capture of the global minimum by Algorithm 1 is crucial for the convergence of Algorithm 4. As the iterations of Algorithm 4 progress, the sparse matrix $F$ achieves eigenvalue assignment with increasing accuracy. As a result, near the convergence of Algorithm 4, the eigenvalues of $A + BF$ and $\Lambda$ are very close to each other. This creates numerical difficulties when Algorithm 1 is used with parameters $(A + BF, B, \Lambda)$ in step 1 (see Assumption 4). To avoid this issue, we run Algorithm 1 using a preliminary feedback $F_p$, as explained below Assumption 4. Finally, for Algorithm 5, we use the convergence criterion $\left\|P_F\frac{dJ}{dg}\right\|_2 < \epsilon$, $0 < \epsilon \ll 1$. We choose the stopping tolerance $\epsilon$ between $10^{-6}$ and $10^{-5}$ for all the algorithms.

B. Numerical study

We begin this subsection with the following example:

$$A = \begin{bmatrix} -3.7653 & -2.1501 & 0.3120 & -0.2484 \\ 1.6789 & 1.0374 & -0.5306 & 1.3987 \\ -2.1829 & -2.5142 & -1.2275 & 0.2833 \\ -13.6811 & -9.6804 & -0.5242 & 2.9554 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 1 & 2 & 5 \\ 1 & 3 & 4 & 2 \end{bmatrix}^T,$$

$$\bar{F} = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 \end{bmatrix}, \quad \mathcal{S} = \{-2, -1, -0.5 \pm j\}.$$

The eigenvalues of $A$ are $\Gamma(A) = \{-2, -1, 1 \pm 2j\}$. Thus, the feedback $F$ is required to move the two unstable eigenvalues into the stable region while keeping the other two stable eigenvalues fixed.
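This example is easy to reproduce. The following sketch (assuming NumPy) checks the open-loop spectrum and, using the sparse solution $\hat{F}_1$ reported in Table I below, the assigned closed-loop spectrum:

```python
import numpy as np

A = np.array([[ -3.7653, -2.1501,  0.3120, -0.2484],
              [  1.6789,  1.0374, -0.5306,  1.3987],
              [ -2.1829, -2.5142, -1.2275,  0.2833],
              [-13.6811, -9.6804, -0.5242,  2.9554]])
B = np.array([[1., 1.], [1., 3.], [2., 4.], [5., 2.]])

print(np.round(np.linalg.eigvals(A), 3))           # approx {-2, -1, 1 ± 2j}

F1 = np.array([[0.9627, -1.3744,  0.0,     0.0],   # sparse F_1 from Table I
               [0.4409,  0.0,    -0.6774, -0.1599]])
print(np.round(np.linalg.eigvals(A + B @ F1), 3))  # approx {-2, -1, -0.5 ± j}
```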
Table I shows the non-sparse, approximately-sparse and sparse solutions obtained by Algorithms 1, 2 and 5, respectively, and Figure 2 shows a sample iteration run of these algorithms. Since the number of iterations taken by the algorithms to converge depends on their starting points, we report the average number of iterations taken over 1000 random starting points. Further, to obtain the approximately-sparse solutions, we use the weighting matrix with $W_{ij} = w$ for the entries with $\bar{F}_{ij} = 0$ (cf. (25)). All the algorithms obtain three local minima, among which the first is the global minimum. The second column of $\hat{X}$ and $\hat{L}$ is the conjugate of the first column (cf. Remark 2). It can be verified that the non-sparse and sparse solutions satisfy the optimality conditions of Theorem 4.

[Fig. 2. Optimization costs for a sample run of Algorithms 1, 2 and 5 (the algorithms converge to their global minima). For Algorithm 5, the number of outer iterations is reported.]

For the non-sparse solution, the number of iterations taken by Algorithm 1 with steepest descent is considerably larger than with Newton descent. This is because steepest descent converges very slowly near a local minimum. Therefore, we use Newton descent steps in Algorithms 1 and 2. Next, observe that the entries at the sparsity locations of the locally minimum feedbacks obtained by Algorithm 2 have small magnitudes. Further, the average number of Newton descent iterations for convergence and the norm of the feedback obtained by Algorithm 2 are larger than for Algorithm 1. This is because the approximately-sparse optimization problem (25) is effectively more restricted than its non-sparse counterpart (16). Finally, observe that the solutions of Algorithm 5 are sparse. Note that Algorithm 5 involves the use of the projection Algorithm 4, which in turn involves running Algorithm 1 multiple times. Thus, for a balanced comparison, we present the total number of Newton descent iterations of Algorithm 1 involved in the execution of Algorithm 5 (the number of outer iterations of Algorithm 5 is considerably smaller, for instance, 20 in Figure 2). From Table I, we can observe that Algorithm 5 involves considerably more Newton descent iterations than Algorithms 1 and 2, since it involves computationally intensive projection calculations by Algorithm 4. One way to reduce its computation time is to initialize it near the local minimum using the approximately-sparse solution of Algorithm 2.

TABLE I: COMPARISON OF MGEAP SOLUTIONS BY ALGORITHMS 1, 2 AND 5

Non-sparse solutions by Algorithm 1:

$$\hat{X}_1 = \begin{bmatrix} -0.0831 + 0.3288j & \hat{x}_1^* & -0.5053 & 0.2031 \\ 0.1919 - 0.4635j & & 0.5612 & -0.2505 \\ 0.1603 + 0.5697j & & 0.5617 & 0.8441 \\ 0.3546 + 0.3965j & & -0.3379 & 0.4283 \end{bmatrix}, \;\; \hat{L}_1 = \begin{bmatrix} -0.5569 + 1.4099j & \hat{l}_1^* & -0.9154 & 2.5568 \\ -0.2967 + 1.0244j & & -0.5643 & 1.7468 \\ -0.0687 + 0.0928j & & 0.0087 & 0.2722 \\ 0.2259 - 0.2523j & & 0.0869 & -0.4692 \end{bmatrix}$$

$$\hat{F}_1 = \begin{bmatrix} -0.1111 & -0.1089 & -0.0312 & -0.4399 \\ -0.1774 & -0.2072 & 0.0029 & 0.1348 \end{bmatrix}, \quad \|\hat{F}_1\|_F = 0.5580$$

$$\hat{F}_2 = \begin{bmatrix} 0.3817 & -0.3349 & 0.7280 & -0.2109 \\ -0.0873 & -0.4488 & -0.4798 & -0.0476 \end{bmatrix}, \quad \|\hat{F}_2\|_F = 1.1286$$

$$\hat{F}_3 = \begin{bmatrix} 0.3130 & 2.0160 & 1.2547 & -0.6608 \\ 0.0683 & -0.7352 & -0.0748 & -1.0491 \end{bmatrix}, \quad \|\hat{F}_3\|_F = 2.7972$$

Average # of steepest/Newton descent iterations = 5402.1 / 15.5

Approximately-sparse solutions by Algorithm 2 with $w = 30$:

$$\hat{F}_1 = \begin{bmatrix} 0.9652 & -1.3681 & 0.0014 & -0.0021 \\ 0.4350 & -0.0023 & -0.6746 & -0.1594 \end{bmatrix}, \quad \|\hat{F}_1\|_F = 1.8636$$

$$\hat{F}_2 = \begin{bmatrix} -1.0599 & -1.7036 & -0.0013 & -0.0071 \\ -0.1702 & -0.0057 & 0.0263 & -0.0582 \end{bmatrix}, \quad \|\hat{F}_2\|_F = 2.0146$$

$$\hat{F}_3 = \begin{bmatrix} 3.3768 & 2.2570 & 0.0410 & -0.0098 \\ -1.4694 & -0.0061 & 0.1242 & -3.8379 \end{bmatrix}, \quad \|\hat{F}_3\|_F = 5.7795$$

Average # of Newton descent iterations = 19.1
$$\hat{F}_3 = \begin{bmatrix} 3.3768 & 2.2570 & 0.0410 & -0.0098 \\ -1.4694 & -0.0061 & 0.1242 & -3.8379 \end{bmatrix}, \quad \|\hat{F}_3\|_F = 5.7795.$$
Average # of Newton descent iterations = 19.1.

Sparse solutions by Algorithm 5:
$$\hat{X}_1 = \begin{bmatrix} -0.3370 + 0.3296j & \hat{x}_1^* & -0.4525 & 0.5168 \\ 0.2884 - 0.1698j & & 0.1764 & -0.3007 \\ -0.4705 + 0.5066j & & -0.7478 & 0.7831 \\ 0.2754 + 0.3347j & & -0.4527 & 0.1711 \end{bmatrix},$$
$$\hat{L}_1 = \begin{bmatrix} 25.2072 + 16.5227j & \hat{l}_1^* & 44.6547 & 82.3616 \\ 11.0347 + 11.8982j & & 12.5245 & 35.8266 \\ -12.2600 - 5.8549j & & -24.5680 & -39.1736 \\ -0.6417 - 2.2060j & & -0.4383 & -3.6464 \end{bmatrix},$$
$$\hat{F}_1 = \begin{bmatrix} 0.9627 & -1.3744 & 0.0000 & 0.0000 \\ 0.4409 & 0.0000 & -0.6774 & -0.1599 \end{bmatrix}, \quad \|\hat{F}_1\|_F = 1.8694,$$
$$\hat{F}_2 = \begin{bmatrix} -1.0797 & -1.7362 & 0.0000 & 0.0000 \\ -0.1677 & 0.0000 & 0.0264 & -0.0610 \end{bmatrix}, \quad \|\hat{F}_2\|_F = 2.0525,$$
$$\hat{F}_3 = \begin{bmatrix} 3.4465 & 2.2568 & 0.0000 & 0.0000 \\ -1.8506 & -0.0000 & 0.3207 & -4.0679 \end{bmatrix}, \quad \|\hat{F}_3\|_F = 6.0866.$$
Average # of Newton descent iterations: 1. using random initialization = 8231; 2. using initialization by Algorithm 2 = 715.

Figure 3 shows a sample run of EAP Algorithms 3 and 4 for
$$G_0 = \begin{bmatrix} -1.0138 & 0.6851 & -0.1163 & 0.8929 \\ -1.8230 & -2.2041 & -0.1600 & 0.7293 \end{bmatrix}.$$
The sparse feedbacks obtained by Algorithms 3 and 4 are
$$F = \begin{bmatrix} 0.1528 & -2.6710 & 0.0000 & 0.0000 \\ -0.8382 & 0.0000 & 0.1775 & -0.1768 \end{bmatrix} \quad \text{and} \quad F = \begin{bmatrix} 4.2595 & 4.2938 & 0.0000 & 0.0000 \\ -0.2519 & 0.0000 & -2.1258 & -1.3991 \end{bmatrix},$$
respectively. The projection error $e_X$ and the sparsity error $e_F$ capture the convergence of Algorithms 3 and 4, respectively. Figure 3 shows that these errors decrease, indicating convergence of the algorithms.

Next, we provide an empirical verification of the convergence of the heuristic Algorithms 3 and 4. Let the sparsity ratio (SR) be defined as the ratio of the number of sparse entries to the total number of entries in $F$, i.e., $\mathrm{SR} = \frac{\text{number of 0's in } \bar{F}}{mn}$. We perform 1000 random executions of both algorithms. In each execution, $n$ is randomly selected between 4 and 20, and $m$ is randomly selected between 2 and $n$. Then, matrices $(A, B)$ are randomly generated with appropriate dimensions.

[Fig. 2. Optimization costs for a sample run of Algorithms 1, 2 and 5 (the algorithms converge to their global minima); for Algorithm 5, the number of outer iterations is reported.]

[Fig. 3. Projection and sparsity errors for a sample run of (a) Algorithm 3 and (b) Algorithm 4, respectively.]

Next, a binary sparsity pattern matrix $\bar{F}$ is randomly generated with the number of sparsity entries given by $\lfloor \mathrm{SR} \times mn \rfloor$, where $\lfloor \cdot \rfloor$ denotes rounding down to the nearest integer. To ensure feasibility (cf. Assumption 2 and the discussion below it), we pick the desired eigenvalue set $\mathcal{S}$ randomly as follows: we select a random $F_r$ that satisfies the selected sparsity pattern $\bar{F}$, and set $\mathcal{S} = \Gamma(A + B F_r)$. Finally, we set $\mathrm{iter}_{\max} = 1000$, select a random starting point $G_0$ (equivalently, $(F_0, X_0)$), and run both algorithms from the same starting point. Let $F_{\mathrm{sol}}$ denote the feedback solution obtained by Algorithm 3 or 4, and let $d_{F_{\mathrm{sol}}, F_0} \triangleq \|F_{\mathrm{sol}} - F_0\|_F$ denote the distance between the starting point $F_0$ and the final solution. Since Algorithm 4 is a projection algorithm, the metric $d_{F_{\mathrm{sol}}, F_0}$ captures its projection performance.
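For concreteness, the instance-generation procedure just described can be sketched as follows (a minimal illustration; the function name and structure are ours):

```python
# Sketch of the randomized test-instance generation described above:
# a random pattern Fbar with floor(SR*m*n) forced zeros, and a feasible
# spectrum S = Gamma(A + B F_r) for a random F_r obeying Fbar.
import numpy as np

def random_sparse_eap_instance(SR, rng):
    n = int(rng.integers(4, 21))             # n in {4, ..., 20}
    m = int(rng.integers(2, n + 1))          # m in {2, ..., n}
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, m))
    Fbar = np.ones(m * n)                    # 1 = free entry, 0 = sparsity constraint
    zeros = rng.choice(m * n, size=int(np.floor(SR * m * n)), replace=False)
    Fbar[zeros] = 0.0
    Fbar = Fbar.reshape(m, n)
    Fr = Fbar * rng.standard_normal((m, n))  # F_r satisfies the pattern
    S = np.linalg.eigvals(A + B @ Fr)        # feasible by construction
    return A, B, Fbar, S

A, B, Fbar, S = random_sparse_eap_instance(SR=0.5, rng=np.random.default_rng(1))
```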
TABLE II
CONVERGENCE PROPERTIES OF EAP ALGORITHMS 3 AND 4

SR   | Algorithm 3                                   | Algorithm 4
1/4  | Convergence instances = 427                   | Convergence instances = 997
     | Average $d_{F_{\mathrm{sol}}, F_0}$ = 8.47    | Average $d_{F_{\mathrm{sol}}, F_0}$ = 1.28
1/2  | Convergence instances = 220                   | Convergence instances = 988
     | Average $d_{F_{\mathrm{sol}}, F_0}$ = 4.91    | Average $d_{F_{\mathrm{sol}}, F_0}$ = 1.61
2/3  | Convergence instances = 53                    | Convergence instances = 983
     | Average $d_{F_{\mathrm{sol}}, F_0}$ = 6.12    | Average $d_{F_{\mathrm{sol}}, F_0}$ = 1.92

Table II shows the convergence results of Algorithms 3 and 4 for three different sparsity ratios. While the convergence of Algorithm 3 deteriorates as $F$ becomes more sparse, Algorithm 4 converges in almost all instances. This implies that, in the context of Algorithm 5, Algorithm 4 provides a valid projection in step 1 in almost all instances. We remark that, in the rare case that Algorithm 4 fails to converge, we can reduce the step size $\alpha$ to obtain a new $G_{\mathrm{ns}}$ and compute its projection. Further, we compute the average of the distance $d_{F_{\mathrm{sol}}, F_0}$ over all executions of Algorithms 3 and 4 that converge. Observe that the average distance for Algorithm 4 is smaller than for Algorithm 3. This shows that Algorithm 4 provides a considerably better projection of $F_0$ onto the space of sparse matrices than Algorithm 3 (cf. Remark 15). Note that the above simulations focus on the convergence properties as SR changes; they do not capture the individual effects of the number of sparsity constraints, $m$, and $n$ (for instance, small systems versus large systems).

VI. FEASIBILITY OF THE SPARSE EAP

In this section, we discuss the feasibility of the sparse EAP, i.e., eigenvalue assignment with sparse, static state feedback. We address certain aspects of this problem and leave a detailed characterization for future research.

Lemma 16 (NP-hardness). Determining the feasibility of the sparse EAP is NP-hard.

Proof. We prove the result using the NP-hardness of the static output feedback pole placement (SOFPP) problem [53], which is stated as follows: for given $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, and $C \in \mathbb{R}^{p \times n}$, determine whether there exists a non-sparse $K \in \mathbb{R}^{m \times p}$ such that the eigenvalues of $A + BKC$ are at some desired locations. Without loss of generality, we assume that $C$ has full row rank. Thus, there exists an invertible $T = [C^+ \; V] \in \mathbb{R}^{n \times n}$ with $CV = 0$. Applying the similarity transformation $T$ (which preserves the eigenvalues), we get
$$T^{-1}(A + BKC)T = \underbrace{T^{-1} A T}_{\bar{A}} + \underbrace{T^{-1} B}_{\bar{B}} \begin{bmatrix} K & 0 \end{bmatrix}.$$
Clearly, the above SOFPP problem is equivalent to a sparse EAP with matrices $\bar{A}, \bar{B}$ and sparsity pattern $\bar{F} = [\mathbb{1}_{m \times p} \; 0_{m \times (n-p)}]$. Thus, the NP-hardness of the sparse EAP follows from the NP-hardness of the SOFPP problem. $\blacksquare$
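For illustration, the reduction used in the proof can be sketched numerically as follows (the helper name is ours; a pseudoinverse and a null-space basis realize $T = [C^+ \; V]$):

```python
# Sketch of the reduction in the proof of Lemma 16: an SOFPP instance
# (A, B, C) with full-row-rank C maps to the sparse-EAP instance (Abar, Bbar)
# with pattern Fbar = [1_{m x p}  0_{m x (n-p)}].
import numpy as np
from scipy.linalg import null_space

def sofpp_to_sparse_eap(A, B, C):
    n, m, p = A.shape[0], B.shape[1], C.shape[0]
    T = np.hstack([np.linalg.pinv(C), null_space(C)])  # T = [C^+  V], with C V = 0
    Ti = np.linalg.inv(T)                              # T invertible: C full row rank
    Abar, Bbar = Ti @ A @ T, Ti @ B
    Fbar = np.hstack([np.ones((m, p)), np.zeros((m, n - p))])
    # Since C T = [I_p  0], we have K C T = [K  0], which obeys the pattern Fbar.
    return Abar, Bbar, Fbar
```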
Next, we present graph-theoretic necessary and sufficient conditions for arbitrary eigenvalue assignment by sparse static feedback. Due to space constraints, we briefly introduce the required graph-theoretic notions and refer the reader to [54] for more details. Given a square matrix $A = [a_{ij}] \in \mathbb{R}^{n \times n}$, let $\mathcal{G}_A$ denote its associated graph with $n$ vertices, where $a_{ij}$ is the weight of the edge from vertex $j$ to vertex $i$. A closed directed path (a sequence of consecutive vertices) is called a cycle if the start and end vertices coincide and no vertex appears more than once along the path (except for the first vertex). A set of vertex-disjoint cycles is called a cycle family. The width of a cycle family is the number of state vertices contained in its cycles (for $\mathcal{G}_A$ this equals the number of edges; in the closed-loop graph $\mathcal{G}_H$ below, edges through the control vertices do not add to the width, cf. Fig. 4).

Lemma 17 (Necessary conditions). Let $H = \begin{bmatrix} A & B \\ F & 0 \end{bmatrix}$, and let $\mathcal{G}_H$ be its associated graph. Let $n_s$ be the number of sparsity constraints (zero entries) in the feedback matrix $F \in \mathbb{R}^{m \times n}$. Further, let $\mathcal{S}_k$ denote the set of cycle families of $\mathcal{G}_H$ of width $k$, with $k = 1, \ldots, n$. Necessary conditions for arbitrary eigenvalue assignment with sparse, static state feedback are:
(i) $n_s \leq (m-1)n$, that is, $F$ has at least $n$ nonzero entries; and
(ii) for each $k = 1, \ldots, n$, there exists a nonzero entry $f_{i_k j_k}$ of $F$ such that its corresponding edge appears in $\mathcal{S}_k$.

Proof. (i) Arbitrary eigenvalue assignment requires that the feedback $F$ assign all $n$ coefficients of the characteristic polynomial $\det(sI - A - BF)$ to arbitrary values. This requires the mapping $h : \mathbb{R}^{mn - n_s} \to \mathbb{R}^n$ from the nonzero entries of $F$ to the coefficients of the characteristic polynomial to be surjective, which imposes that the dimension of the domain of $h$ not be less than the dimension of its codomain.
(ii) Let $c_k$, $k = 1, \ldots, n$, denote the coefficients of $\det(sI - A - BF)$. Then $c_k$ is a multiaffine function of the nonzero entries of $F$ that appear in the cycle families in $\mathcal{S}_k$ [54]. If there exists no feedback edge in $\mathcal{S}_k$, then $c_k$ is fixed and does not depend on $F$. Thus, arbitrary eigenvalue assignment is not possible in this case. $\blacksquare$

Lemma 18 (Sufficient conditions). Let $H = \begin{bmatrix} A & B \\ F & 0 \end{bmatrix}$, and let $\mathcal{G}_H$ be its associated graph. Let $\mathcal{S}_k$ denote the set of cycle families of $\mathcal{G}_H$ of width $k$, and let $\mathcal{F}_k$ denote the set of feedback edges^6 contained in $\mathcal{S}_k$, with $k = 1, \ldots, n$. Then, each of the following conditions is sufficient for arbitrary eigenvalue assignment with sparse, static state feedback:
(i) for each $k = 1, \ldots, n$, there exists a feedback edge that appears in $\mathcal{S}_k$ and not in $\mathcal{S}_j$ for all $j \neq k$;
(ii) there exists a permutation $\{i_1, i_2, \ldots, i_n\}$ of $\{1, 2, \ldots, n\}$ such that $\emptyset \neq \mathcal{F}_{i_1} \subset \mathcal{F}_{i_2} \subset \cdots \subset \mathcal{F}_{i_n}$.

Proof. (i) Similar to the proof of Lemma 17, if there exists a nonzero entry $f_{i_k j_k}$ that is exclusive to $\mathcal{S}_k$, then this variable can be used to assign $c_k$ arbitrarily. If this holds for all $k = 1, \ldots, n$, then all coefficients of the characteristic polynomial can be assigned arbitrarily, resulting in arbitrary eigenvalue assignment.
(ii) Condition (ii) guarantees that, for $j = 2, \ldots, n$, the coefficient $c_{i_j}$ depends on the feedback variables of $c_{i_{j-1}}$ and on some additional feedback variables. These additional variables can be used to assign the coefficient $c_{i_j}$ arbitrarily. $\blacksquare$

^6 Feedback edges are those associated with the nonzero entries of $F$.

To illustrate the results, consider the following example:
$$A = \begin{bmatrix} a_{11} & a_{12} & 0 \\ 0 & 0 & 0 \\ a_{31} & 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} b_{11} & 0 \\ 0 & b_{22} \\ 0 & 0 \end{bmatrix}, \quad F = \begin{bmatrix} f_{11} & 0 & 0 \\ f_{21} & 0 & f_{23} \end{bmatrix}.$$
The corresponding graph and cycle families are shown in Fig. 4. Note that the edges $f_{11}$, $f_{21}$ and $f_{23}$ are exclusive to cycle families of widths 1, 2 and 3, respectively. Thus, condition (i) of Lemma 18 is satisfied and arbitrary eigenvalue assignment is possible. Next, consider the feedback $F = \begin{bmatrix} 0 & 0 & f_{13} \\ f_{21} & f_{22} & 0 \end{bmatrix}$. In this case, the sets of feedback edges in the cycle families of different widths are $\mathcal{F}_1 = \{f_{22}\}$, $\mathcal{F}_2 = \{f_{22}, f_{13}, f_{21}\}$ and $\mathcal{F}_3 = \{f_{22}, f_{13}\}$. We observe that $\mathcal{F}_1 \subset \mathcal{F}_3 \subset \mathcal{F}_2$ (condition (ii) of Lemma 18), and arbitrary eigenvalue assignment is possible. Further, if $F = \begin{bmatrix} 0 & f_{12} & f_{13} \\ f_{21} & 0 & 0 \end{bmatrix}$, then $\mathcal{F}_1 = \mathcal{F}_3 = \emptyset$, and the coefficients $c_1, c_3$ of $\det(sI - A - BF)$ are fixed. This violates condition (ii) of Lemma 17 and prevents arbitrary eigenvalue assignment with the given $F$.
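For small patterns, the sets $\mathcal{F}_k$ can be computed by brute-force enumeration of cycle families. The sketch below (ours; it assumes the networkx package and is exponential in the number of cycles, so it is for illustration only) counts width as the number of state vertices in a family, consistently with Fig. 4; applied to the first pattern above, it recovers that $f_{11}$, $f_{21}$, and $f_{23}$ appear in widths 1, 2, and 3, respectively:

```python
# Sketch computing the feedback-edge sets F_k of Lemmas 17-18 for a
# structural pattern (A, B, Fbar).
import itertools
import numpy as np
import networkx as nx

def feedback_edge_sets(A, B, Fbar):
    n, m = A.shape[0], B.shape[1]
    H = np.block([[A != 0, B != 0],
                  [Fbar != 0, np.zeros((m, m), dtype=bool)]])
    # edge j -> i whenever H[i, j] != 0; vertices 0..n-1 states, n..n+m-1 inputs
    G = nx.DiGraph([(j, i) for i in range(n + m) for j in range(n + m) if H[i, j]])
    cycles = [tuple(c) for c in nx.simple_cycles(G)]
    Fk = {k: set() for k in range(1, n + 1)}
    for r in range(1, len(cycles) + 1):
        for fam in itertools.combinations(cycles, r):
            verts = list(itertools.chain.from_iterable(fam))
            if len(verts) != len(set(verts)):
                continue                           # cycles must be vertex-disjoint
            width = sum(v < n for v in verts)      # state vertices in the family
            if not 1 <= width <= n:
                continue
            for cyc in fam:                        # feedback edges: state -> input
                for a, b in zip(cyc, cyc[1:] + (cyc[0],)):
                    if a < n <= b:
                        Fk[width].add((b - n, a))  # (row, column) entry of F
    return Fk

# Paper's example: f11, f21, f23 are exclusive to widths 1, 2, 3, respectively.
A = np.array([[1, 1, 0], [0, 0, 0], [1, 0, 0]])    # pattern of a11, a12, a31
B = np.array([[1, 0], [0, 1], [0, 0]])             # pattern of b11, b22
Fbar = np.array([[1, 0, 0], [1, 0, 1]])            # pattern of f11, f21, f23
print(feedback_edge_sets(A, B, Fbar))              # {1: {(0,0)}, 2: {(1,0)}, 3: {(1,2)}}
```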
[Fig. 4. Graph $\mathcal{G}_H$ and its cycle families; $u_1, u_2$ denote the control vertices. (a) Graph $\mathcal{G}_H$, where blue and orange nodes correspond to state and control vertices, respectively. (b) Cycle families of $\mathcal{G}_H$; all cycle families contain a single cycle, with two cycle families of width 1 and one cycle family each of widths 2 and 3.]

Note that the conditions in Lemmas 17 and 18 are constructive, and can also be used to determine a sparsity pattern that guarantees feasibility of the sparse EAP. We leave the design of such an algorithm as a topic of future investigation.

Remark 19 (Comparison with existing results). We emphasize that the conditions presented in Lemmas 17 and 18 for arbitrary eigenvalue assignment using static state feedback are not equivalent to the conditions for the non-existence of structurally fixed modes (SFMs) studied in [27], [31]-[35], [38]. The reason is that arbitrary assignment of non-SFMs necessarily requires a dynamic controller and cannot, in general, be achieved by a static controller. Further, our graph-theoretic results are based on the feedback edges being suitably covered by cycle families, whereas the results in [32], [33] are based on the state nodes/subgraphs being suitably covered by strong components and cycles/cacti. $\square$

VII. CONCLUSION

In this paper, we studied the MGEAP for LTI systems with arbitrary sparsity constraints on the static feedback matrix. We presented an analytical characterization of its locally optimal solutions, thereby providing explicit relations between an optimal solution and the eigenvector matrices of the associated closed-loop system. We also provided a geometric interpretation of an optimal solution of the non-sparse MGEAP. Using a Sylvester-equation-based parametrization, we developed a heuristic projected gradient descent algorithm to obtain local solutions to the MGEAP. We also presented two novel algorithms for solving the sparse EAP and an algorithm to obtain approximately-sparse local solutions to the MGEAP. Numerical studies suggest that our heuristic algorithm converges in most cases. Further, we discussed the feasibility of the sparse EAP and provided necessary and sufficient conditions for the same.

The analysis in the paper is developed, for the most part, under the assumption that the sparse EAP problem with static feedback is feasible. Future directions of research include a more detailed characterization of the feasibility of the EAP, a constructive algorithm to determine feasible sparsity patterns, a convex relaxation of the sparse MGEAP with guaranteed distance from optimality, and a more rigorous analysis of the convergence of Algorithms 4 and 5.

REFERENCES

[1] R. Schmid, L. Ntogramatzidis, T. Nguyen, and A. Pandey. A unified method for optimal arbitrary pole placement. Automatica, 50(8):2150–2154, 2014.
[2] Y. Peretz. On parametrization of all the exact pole-assignment state-feedbacks for LTI systems. IEEE Transactions on Automatic Control, 62(7):3436–3441, 2016.
[3] P. J. Antsaklis and A. N. Michel. Linear Systems. Birkhäuser, 2005.
[4] J. Kautsky, N. K. Nichols, and P. Van Dooren. Robust pole assignment in linear state feedback. International Journal of Control, 41(5):1129–1155, 1985.
[5] A. Pandey, R. Schmid, and T. Nguyen. Performance survey of minimum gain exact pole placement methods. In European Control Conference, pages 1808–1812, Linz, Austria, 2015.
[6] Y. J. Peretz. A randomized approximation algorithm for the minimal-norm static-output-feedback problem. Automatica, 63:221–234, 2016.
[7] B. Bamieh, F. Paganini, and M. A. Dahleh. Distributed control of spatially invariant systems. IEEE Transactions on Automatic Control, 47(7):1091–1107, 2002.
[8] M. Rotkowitz and S. Lall. A characterization of convex problems in decentralized control. IEEE Transactions on Automatic Control, 51(2):274–286, 2006.
[9] A. Mahajan, N. C. Martins, M. C. Rotkowitz, and S. Yuksel. Information structures in optimal decentralized control. In IEEE Conf. on Decision and Control, pages 1291–1306, Maui, HI, USA, December 2012.
[10] F. Lin, M. Fardad, and M. R. Jovanović. Augmented Lagrangian approach to design of structured optimal state feedback gains. IEEE Transactions on Automatic Control, 56(12):2923–2929, 2011.
[11] R. Schmid, A. Pandey, and T. Nguyen. Robust pole placement with Moore's algorithm. IEEE Transactions on Automatic Control, 59(2):500–505, 2014.
[12] M. Ait Rami, S. E. Faiz, A. Benzaouia, and F. Tadeo. Robust exact pole placement via an LMI-based algorithm. IEEE Transactions on Automatic Control, 54(2):394–398, 2009.
[13] R. Byers and S. G. Nash. Approaches to robust pole assignment. International Journal of Control, 49(1):97–117, 1989.
[14] A. Varga. Robust pole assignment via Sylvester equation based state feedback parametrization. In IEEE International Symposium on Computer-Aided Control System Design, Anchorage, AK, USA, 2000.
[15] E. K. Chu. Pole assignment via the Schur form. Systems & Control Letters, 56(4):303–314, 2007.
[16] A. L. Tits and Y. Yang. Globally convergent algorithms for robust pole assignment by state feedback. IEEE Transactions on Automatic Control, 41(10):1432–1452, 1996.
[17] E. K. Chu. Optimization and pole assignment in control system design. International Journal of Applied Mathematics and Computer Science, 11(5):1035–1053, 2001.
[18] A. Pandey, R. Schmid, T. Nguyen, Y. Yang, V. Sima, and A. L. Tits. Performance survey of robust pole placement methods. In IEEE Conf. on Decision and Control, pages 3186–3191, Los Angeles, CA, USA, 2014.
[19] B. A. White. Eigenstructure assignment: a survey. Proceedings of the Institution of Mechanical Engineers, 209(1):1–11, 1995.
[20] L. F. Godbout and D. Jordan. Modal control with feedback-gain constraints. Proceedings of the Institution of Electrical Engineers, 122(4):433–436, 1975.
[21] B. Kouvaritakis and R. Cameron. Pole placement with minimised norm controllers. IEE Proceedings D - Control Theory and Applications, 127(1):32–36, 1980.
[22] H. K. Tam and J. Lam. Newton's approach to gain-controlled robust pole placement. IEE Proceedings - Control Theory and Applications, 144(5):439–446, 1997.
[23] A. Varga. A numerically reliable approach to robust pole assignment for descriptor systems. Future Generation Computer Systems, 19(7):1221–1230, 2003.
[24] A. Varga. Robust and minimum norm pole assignment with periodic state feedback. IEEE Transactions on Automatic Control, 45(5):1017–1022, 2000.
[25] S. Datta, B. Chaudhuri, and D. Chakraborty. Partial pole placement with minimum norm controller. In IEEE Conf. on Decision and Control, pages 5001–5006, Atlanta, GA, USA, 2010.
[26] S. Datta and D. Chakraborty. Feedback norm minimization with regional pole placement. International Journal of Control, 87(11):2239–2251, 2014.
[27] S. H. Wang and E. J. Davison. On the stabilization of decentralized control systems. IEEE Transactions on Automatic Control, 18(5):473–478, 1973.
[28] J. P. Corfmat and A. S. Morse. Decentralized control of linear multivariable systems. Automatica, 12(5):479–495, 1976.
[29] B. D. O. Anderson and D. J. Clements. Algebraic characterization of fixed modes in decentralized control. Automatica, 17(5):703–712, 1981.
[30] E. J. Davison and U. Ozguner. Characterizations of decentralized fixed modes for interconnected systems. Automatica, 19(2):169–182, 1983.
[31] M. E. Sezer and D. D. Siljak. Structurally fixed modes. Systems & Control Letters, 1(1):60–64, 1981.
[32] V. Pichai, M. E. Sezer, and D. D. Siljak. A graph-theoretic characterization of structurally fixed modes. Automatica, 20(2):247–250, 1984.
[33] V. Pichai, M. E. Sezer, and D. D. Siljak. A graphical test for structurally fixed modes. Mathematical Modelling, 4(4):339–348, 1983.
[34] A. Alavian and M. Rotkowitz. Fixed modes of decentralized systems with arbitrary information structure. In International Symposium on Mathematical Theory of Networks and Systems, pages 913–919, Groningen, The Netherlands, 2014.
[35] A. Alavian and M. Rotkowitz. Stabilizing decentralized systems with arbitrary information structure. In IEEE Conf. on Decision and Control, pages 4032–4038, Los Angeles, CA, USA, 2014.
[36] S. Pequito, G. Ramos, S. Kar, A. P. Aguiar, and J. Ramos. The robust minimal controllability problem. Automatica, 82:261–268, 2017.
[37] S. Pequito, S. Kar, and A. P. Aguiar. A framework for structural input/output and control configuration selection in large-scale systems. IEEE Transactions on Automatic Control, 61(2):303–318, 2016.
[38] S. Moothedath, P. Chaporkar, and M. N. Belur. Minimum cost feedback selection for arbitrary pole placement in structured systems. IEEE Transactions on Automatic Control, 63(11):3881–3888, 2018.
[39] K. B. Petersen and M. S. Pedersen. The Matrix Cookbook. Technical University of Denmark, 2012.
[40] J. R. Magnus and H. Neudecker. Matrix Differential Calculus with Applications in Statistics and Econometrics. John Wiley and Sons, 1999.
[41] D. Kressner and M. Voigt. Distance problems for linear dynamical systems. In Numerical Algebra, Matrix Theory, Differential-Algebraic Equations and Control Theory, pages 559–583. Springer, 2015.
[42] J. Rosenthal and J. C. Willems. Open problems in the area of pole placement. In Open Problems in Mathematical Systems and Control Theory, pages 181–191. Springer, 1999.
[43] J. Leventides and N. Karcanias. Sufficient conditions for arbitrary pole assignment by constant decentralized output feedback. Mathematics of Control, Signals and Systems, 8(3):222–240, 1995.
[44] D. G. Luenberger and Y. Ye. Linear and Nonlinear Programming. Springer, 2008.
[45] P. A. Absil, R. Mahony, and R. Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, 2008.
[46] P. Lancaster. The Theory of Matrices. Academic Press, 1985.
[47] J. H. Wilkinson. The Algebraic Eigenvalue Problem. Clarendon Press, 1988.
[48] S. P. Bhattacharyya and E. De Souza. Pole assignment via Sylvester's equation. Systems & Control Letters, 1(4):261–263, 1982.
[49] N. J. Higham. Accuracy and Stability of Numerical Algorithms. SIAM, 2002.
[50] S. P. Bhattacharyya and E. De Souza. Controllability, observability and the solution of AX − XB = C. Linear Algebra and its Applications, 39(1):167–188, 1981.
[51] R. Escalante and M. Raydan. Alternating Projection Methods. SIAM, 2011.
[52] A. S. Lewis and J. Malick. Alternating projections on manifolds. Mathematics of Operations Research, 33(1):216–234, 2008.
[53] M. Fu. Pole placement via static output feedback is NP-hard. IEEE Transactions on Automatic Control, 49(5):855–857, 2004.
[54] K. J. Reinschke. Multivariable Control: A Graph-Theoretic Approach. Springer, 1988.
