Solving Fractional Polynomial Problems by Polynomial Optimization Theory

Andrea Pizzo, Student Member, IEEE, Alessio Zappone, Senior Member, IEEE, Luca Sanguinetti, Senior Member, IEEE

Abstract—This work aims to introduce the framework of polynomial optimization theory to solve fractional polynomial problems (FPPs). Unlike other widely used optimization frameworks, the proposed one applies to a larger class of FPPs, not necessarily defined by concave and convex functions. An iterative algorithm that is provably convergent and enjoys asymptotic optimality properties is proposed. Numerical results are used to validate its accuracy in the non-asymptotic regime when applied to the energy efficiency maximization in multiuser multiple-input multiple-output communication systems.

I. INTRODUCTION

Consider the fractional polynomial problem (FPP)

  r⋆ = max_{x ∈ X} f(x)/g(x)   (1)

with x = [x_1, ..., x_n]^T and

  X = {x ∈ R^n | h_i(x) ≥ 0, i = 1, ..., m}   (2)

where f(x), g(x), h_i(x): R^n → R are multivariate polynomial functions and X ⊆ R^n is a compact semialgebraic set, not necessarily convex. Problems of the form in (1) arise in different areas of signal processing, e.g., energy efficiency maximization [1], filter design [2], [3], remote sensing [4], and control theory [5]. More generally, [6] showed that any nonlinear function can be approximated using rational functions, achieving better accuracy than with a truncated Taylor series.

The standard approach to tackle fractional problems is fractional programming theory [7]. However, this theory provides algorithms with limited complexity only if f(x) and g(x) are concave and convex, respectively, and the constraint functions {h_i(x)} are convex. If any of these assumptions is not fulfilled, suboptimal methods are needed.
One of them is the alternating optimization method [8], which decomposes the original problem into subproblems whose solutions can be computed with affordable complexity. However, this is not always the case for (1), since f(x), g(x), and {h_i(x)} may not be convex or concave functions even with respect to the individual variables {x_i}. Another possible approach is given by semidefinite relaxation [9]. This method, however, applies only to the quadratic case, and is optimal only when at most two constraints are enforced. Finally, another approach to tackle non-concave fractional problems is the framework of sequential fractional programming [10], [11], even though it is in general suboptimal and its application to the case of general multivariate polynomial functions is not straightforward.

A. Pizzo and L. Sanguinetti are with the University of Pisa, Dipartimento di Ingegneria dell'Informazione, Italy (andrea.pizzo@ing.unipi.it, luca.sanguinetti@unipi.it). A. Zappone is with the Large Systems and Networks Group, CentraleSupélec, France (alessio.zappone@l2s.centralesupelec.fr).

Motivated by this background, this work aims at introducing to the field of signal processing an alternative approach based on polynomial optimization theory [13], and more specifically on the so-called sum-of-squares (SOS) reformulation [12]. Combined with the classical fractional programming theory [7], we show how polynomial optimization theory can be used to globally solve (1) as the order of the SOS reformulation grows to infinity. This is achieved without requiring any a priori assumption on the convexity or concavity of the involved functions. A few preliminaries on multivariate polynomial theory in connection with the SOS method are provided in the Appendix. Due to space limitations, we limit ourselves to an introductory discussion only.
For a more comprehensive overview of polynomial programming by the SOS method, the reader is referred to [12], [13].

The developed framework is then applied to the maximization of energy efficiency (EE) in multiuser multiple-input multiple-output (MU-MIMO) communication systems, where the EE is defined as the benefit-cost ratio between the amount of information that can be reliably transferred per unit of time and the total consumed power. Particularly, the optimization is carried out with respect to the number of users and antennas [14], [15]. Numerical results are used to show that the developed framework closely approximates the optimal configuration (obtained by exhaustive search) with affordable complexity.

II. PROPOSED FRAMEWORK

The state-of-the-art approach to solve (1) is Dinkelbach's algorithm [7], which operates as follows (see Footnote 1).

Algorithm 1: Dinkelbach's algorithm
  Set k = 0; λ_k = 0; λ_{k−1} = −1; 0 < ε < 1;
  while |λ_k − λ_{k−1}| ≥ ε do
    x_k = arg max_{x ∈ X} { p_k(x) = f(x) − λ_k g(x) };   (3)
    F(λ_k) = f(x_k) − λ_k g(x_k);  λ_{k+1} = f(x_k)/g(x_k);
    k = k + 1;
  end while

Algorithm 1 converges to the global optimum x⋆ of (1) with a super-linear convergence rate, but each iteration k requires solving the non-fractional auxiliary problem in (3). Unfortunately, (3) is in general non-convex in the setting of (1), which makes the direct implementation of Algorithm 1 computationally unfeasible. The aim of this work is to show how (3) can be globally solved when p_k(x) is a generic (non-convex) polynomial function.

Footnote 1: Without loss of generality, we assume that g(x) > 0. If g(x) < 0, one can always replace f(x)/g(x) by f(x)g(x)/g²(x), which has a positive denominator.

We begin by reformulating (3) into its epigraph form (see Footnote 2):

  r⋆_λ = max_{x ∈ R^n, t ∈ R} t
  subject to h_i(x) ≥ 0, i = 1, ..., m
             h_0(x, t) = p(x) − t ≥ 0.   (4)

In order to solve (4), we resort to the SOS reformulation [13], [27], whose basic idea is to approximate non-negative polynomials as sums of squares. Specifically, following [13], [16], the first step of the method is to embed all constraint functions in (4) into the single constraint

  σ(x) + σ_0(x) h_0(x, t) + Σ_{i=1}^{m} σ_i(x) h_i(x) ≥ 0   (5)

wherein σ(x) ∈ SOS_ℓ, σ_0(x) ∈ SOS_{ℓ−v}, and σ_i(x) ∈ SOS_{ℓ−deg(h_i)}, with SOS_q denoting the set of all polynomials of degree q that can be written as a sum of squares, namely

  SOS_q = { p(x), deg(p) ≤ q : p(x) = Σ_{j=1}^{J} θ_j²(x), deg(θ_j) ≤ ⌈q/2⌉ },

whereas v = max{deg(f), deg(g), max_i deg(h_i)} and ℓ > v is the order of the SOS reformulation. It is interesting to observe that the representation in (5) can be viewed as a generalized Lagrangian function [13], [17] associated with the constrained optimization problem in (4), with the SOS polynomials σ(·) and {σ_i(·)}_{i=0}^{m} playing the role of non-negative Lagrange multipliers, as in traditional duality theory [18, Ch. 5].

Next, based on (5), the following SOS reformulation of (4) is obtained:

  r⋆_{sos,ℓ} = max_{x ∈ R^n, t ∈ R} t
  subject to σ(x) + σ_0(x) h_0(x, t) + Σ_{i=1}^{m} σ_i(x) h_i(x) ≥ 0
             σ_i(x) ∈ SOS_{ℓ−deg(h_i)}, i = 1, ..., m
             σ(x) ∈ SOS_ℓ, σ_0(x) ∈ SOS_{ℓ−v}.   (6)

For general polynomials, Problem (6) is still difficult to solve since its constraint functions might not be concave. Nevertheless, it can be shown that Problem (6) and its dual have zero duality gap [13] and that the dual of Problem (6) can be cast as a semidefinite program (SDP), which can therefore be solved in polynomial time by using standard semidefinite programming tools [18].
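To make the iteration in Algorithm 1 concrete, the sketch below runs Dinkelbach's update on a toy one-dimensional FPP, solving the inner auxiliary problem (3) by brute-force grid search; the functions f, g and the feasible interval are made up for illustration and are not from the paper, which instead solves (3) via the SDP reformulation developed next.

```python
import math

# Toy one-dimensional FPP (illustrative, not from the paper):
# maximize f(x)/g(x) = (x + 1)/(x^2 + 1) over X = [0, 2].
f = lambda x: x + 1.0
g = lambda x: x * x + 1.0
grid = [i * 1e-4 for i in range(20001)]  # dense grid over [0, 2]

def dinkelbach(f, g, feasible, eps=1e-9):
    """Algorithm 1: x_k = argmax f - lam*g, then lam <- f(x_k)/g(x_k)."""
    lam, lam_prev = 0.0, -1.0
    x_k = feasible[0]
    while abs(lam - lam_prev) >= eps:
        # Auxiliary problem (3), here solved by brute force; in the paper
        # this non-convex step is what the SOS machinery solves globally.
        x_k = max(feasible, key=lambda x: f(x) - lam * g(x))
        lam_prev, lam = lam, f(x_k) / g(x_k)
    return x_k, lam

x_star, r_star = dinkelbach(f, g, grid)
# Analytic optimum of the toy problem: x* = sqrt(2) - 1, r* = (sqrt(2) + 1)/2
```

The super-linear convergence is visible in practice: λ_k reaches the optimal ratio in a handful of iterations even from λ_0 = 0.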
Moreover, for a sufficiently large ℓ, the solution of Problem (6) is also the global solution of Problem (4) (and hence of Problem (3), too). Formally, denote by M_d and {p_α} the moment matrix (see Footnote 3) and the vector of coordinates in the monomial base of the polynomial p(·), and by M_{d−⌈deg(h_i)/2⌉} and {h_{i,α}} the moment matrix and the vector of coordinates in the monomial base of the polynomial h_i, for i = 1, ..., m. Then, the following theorem holds.

Footnote 2: The subscript k is omitted hereafter for notational simplicity.
Footnote 3: The definitions of the moment matrix of a polynomial and of the polynomial expansion over the monomial base are formally introduced in the Appendix.

Theorem 1 ([13]). The dual of Problem (6) is the SDP

  r⋆_{mom,d} = min_{y ∈ R^{s_{n,2d}}} Σ_{α ∈ N^n_d} p_α y_α
  subject to M_d(y) ⪰ 0
             M_{d−⌈deg(h_i)/2⌉}(h_{i,α} y) ⪰ 0, i = 0, ..., m
             y_{(0,...,0)} = 1,   (7)

and for any SOS reformulation order ℓ, strong duality holds, i.e., r⋆_{sos,ℓ} = r⋆_{mom,d} [13, Theorem 4.2]. Moreover, r⋆_{sos,ℓ} → r⋆_λ when ℓ → ∞.

The final step of the procedure is to recover the optimal x⋆ ∈ R^n from the global solution of (7), say y⋆ ∈ R^{s_{n,2d}}. Following [13], [19], if (7) is feasible, then the moment matrix is guaranteed to have rank one and therefore there exists a vector v such that M_d(y⋆) = vv^T. Finally, the optimal solution of (3) is found to be x⋆ = z⋆, with z⋆ = v(2 : n+1).

Theorem 1 ensures that we can approach the global solution of (3) within any desired tolerance, provided that the SOS reformulation order ℓ is chosen large enough (see Footnote 4). As a consequence, the proposed implementation of Algorithm 1, in which (3) is solved in each iteration by solving (7), converges to the global solution of Problem (1). The computational complexity of Algorithm 1 depends on the number of iterations required to converge and on the computational complexity of solving (3).
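The rank-one recovery step can be illustrated numerically: if the moment matrix factors as vv^T, the minimizer is read off the entries of v indexed by the degree-one monomials. The sketch below builds the rank-one moment matrix of the point x = (−1, 2) (d = 1, basis [1, x_1, x_2]) and recovers x from the leading eigenvector; it illustrates only the recovery step, not the solution of the SDP (7) itself.

```python
import numpy as np

# Moment matrix of the point x = (-1, 2) in the monomial basis [1, x1, x2]:
# M_1(y) = m_1(x) m_1(x)^T is rank one by construction.
x = np.array([-1.0, 2.0])
m1 = np.concatenate(([1.0], x))  # m_1(x) = [1, x1, x2]
M = np.outer(m1, m1)             # rank-one moment matrix

# Recovery: take the eigenvector of the single nonzero eigenvalue,
# rescale so that the constant-monomial entry y_(0,...,0) equals 1,
# then read x* = v(2 : n+1) as in the text.
w, V = np.linalg.eigh(M)
v = V[:, np.argmax(w)] * np.sqrt(w.max())
v = v / v[0]                     # fix scale and sign via the first entry
x_rec = v[1:]
```

The normalization by v[0] mirrors the constraint y_{(0,...,0)} = 1 in (7), which pins down the scale of the moment vector.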
The latter is formulated as an SDP in (7), which accounts for a total of m + 1 LMIs. Each LMI comprises a system of s_{n,d} single LMIs, each of dimension s_{n,d} × s_{n,d}. Thus, the computational complexity of solving (7) through, e.g., an interior-point method, is in the order of O(n² m s³_{n,d} + n m s⁴_{n,d}) arithmetic operations [20, Ch. 11]. Also, since s_{n,d} ≈ n^d, the overall complexity grows polynomially (see Footnote 5) with both the number of primal variables n and d (which depends on the order ℓ of the SOS reformulation), and linearly with the number of polynomial constraints m.

Footnote 4: In practice, we do not need to solve (7) for increasing values of ℓ until convergence; it is enough to solve it just once, for a large enough ℓ. The numerical analysis in Section III shows that ℓ = 12 (i.e., d = 6) leads to global optimality for the considered problem.
Footnote 5: Although solving (7) seems unpractical for problems of large size, practical problems exhibit an affordable computational complexity for modest values of n and d. In addition, most polynomials have only a few nonzero monomial coefficients, and thus sparsity can be leveraged; see [21] and [22].

III. APPLICATION: ENERGY EFFICIENCY MAXIMIZATION

The framework developed above is applied next to solve an EE maximization problem in cellular networks.

A. Problem statement

Inspired by [15], we look for the optimal deployment of a cellular network for maximal EE while imposing that the average signal-to-interference-plus-noise ratio (SINR) be larger than a given constraint γ. The optimization variables are the pilot reuse factor β, the number K of users per cell, and the number M of antennas at each base station. The optimal β is proved to be such that the SINR constraint is satisfied with equality.
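As a rough sanity check of the complexity estimate, the snippet below evaluates s_{n,d} = binom(n+d, d) and the interior-point operation count for the values used later in Section III-B (n = 2 variables, and s = binom(6, 4) = 15 as reported there); taking m = 2 as in (11) is our assumption here. The result lands in the 10^5 range, consistent with the text.

```python
from math import comb

def s(n, d):
    """Dimension of the monomial basis of degree <= d in n variables, cf. (12)."""
    return comb(n + d, d)

def flops(n, m, s_nd):
    """Interior-point operation count O(n^2 m s^3 + n m s^4) from [20, Ch. 11]."""
    return n**2 * m * s_nd**3 + n * m * s_nd**4

n, m = 2, 2           # two variables (K, M); m = 2 constraints as in (11)
s_nd = s(2, 4)        # = binom(6, 4) = 15, the matrix size reported in Sec. III-B
ops = flops(n, m, s_nd)
# ops is on the order of 10^5 arithmetic operations
```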
  EE(K, M) = [ (1 − (K/τ) B_1(K,M)γ/(M − B_2(K)γ)) B log_2(1+γ) K ] /
             [ C_0 + (C_1 + U/τ)K + D_0 M + D_1 K M + (1 − (K/τ) B_1(K,M)γ/(M − B_2(K)γ)) K (U + A B log_2(1+γ)) ]   (8)

  EE(x_1, x_2) = [ f_{(1,0)} x_1 + f_{(2,0)} x_1² + f_{(1,1)} x_1 x_2 + f_{(2,1)} x_1² x_2 + f_{(3,0)} x_1³ ] /
                 [ g_{(0,0)} + g_{(1,0)} x_1 + g_{(0,1)} x_2 + g_{(2,0)} x_1² + g_{(1,1)} x_1 x_2 + g_{(0,2)} x_2² + g_{(2,1)} x_1² x_2 + g_{(1,2)} x_1 x_2² + g_{(3,0)} x_1³ ]   (9)

TABLE I: Polynomial coefficients associated with the constraints in (11).

  h_1(0,0) = −γτ(1 + 2/(α−2))
  h_1(1,0) = −(γ/SNR)(2/(α−2)) − γτ(1 + 2/(α−2))(1 + 1/SNR)
  h_1(0,1) = τ
  h_1(1,1) = −γ/(α−1)
  h_1(2,0) = −γ(4/(α−2)² + 1/(α−1) + 2/(α−2))
  h_2(0,0) = (γ/SNR)(2/(α−2)) + 1 + 1/SNR
  h_2(1,0) = γ(4/(α−2)² + 1/(α−1) + 2/(α−2)) + γ(1 + 2/(α−2))(1 + 1/SNR)
  h_2(0,1) = γ/(α−1) − 1

This yields

  β⋆ = B_1(K,M)γ / (M − B_2(K)γ)

where B_1(K,M) and B_2(K) are defined in [15, Eqs. (19)-(20)] (see Footnote 6). The optimization over (K, M) relies on the following problem [15, Eq. (22)]:

  max_{(K,M) ∈ R²} EE(K, M)
  subject to 1 ≤ B_1(K,M)γ/(M − B_2(K)γ) ≤ τ/K   (10)

where the objective function EE(K, M) is given in (8) with known parameters (see Footnote 7) U, A, B, {C_i}, {D_i}. In [15] the problem is tackled by an alternating maximization approach, which is in general suboptimal. Here, Problem (10) is tackled by the considered polynomial framework.
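The objective in (9) is simply a ratio of two bivariate polynomials stored by their monomial coefficients, as in the expansion (13) of the Appendix. A minimal sketch of such a representation follows; the coefficient values below are placeholders for illustration, not the hardware-derived values of [15].

```python
def poly_eval(coeffs, x1, x2):
    """Evaluate sum of c * x1^a * x2^b for a dict {(a, b): c}, as in (13)."""
    return sum(c * x1**a * x2**b for (a, b), c in coeffs.items())

# Placeholder coefficient dictionaries mirroring the monomials of (9);
# the true values of f_(a,b), g_(a,b) follow from the parameters in (8).
f = {(1, 0): 1.0, (2, 0): -0.1, (1, 1): 0.5, (2, 1): -0.01, (3, 0): -0.02}
g = {(0, 0): 1.0, (1, 0): 0.2, (0, 1): 0.1, (2, 0): 0.05, (1, 1): 0.02,
     (0, 2): 0.01, (2, 1): 0.001, (1, 2): 0.001, (3, 0): 0.001}

def EE(x1, x2):
    """Fractional objective EE(x) = f(x)/g(x), with x = (x1, x2) = (K, M)."""
    return poly_eval(f, x1, x2) / poly_eval(g, x1, x2)
```

This dictionary-of-monomials representation is exactly the coordinate vector {p_α} of (13), which is also what the moment SDP (7) consumes.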
To elaborate, we define x = [K, M]^T ∈ R² and rewrite the optimization problem as in (1), which yields

  max_x EE(x) = f(x)/g(x)
  subject to h_i(x) ≥ 0, i = 1, 2   (11)

where EE(x) is given in (9), h_1(x) = h_1(0,0) + h_1(1,0)x_1 + h_1(0,1)x_2 + h_1(1,1)x_1x_2 + h_1(2,0)x_1², and h_2(x) = h_2(0,0) + h_2(1,0)x_1 + h_2(0,1)x_2; the coordinates {h_1α}, {h_2α} for α ∈ N²₂ are shown in Table I and can be easily derived as done in Example 1 in the Appendix. Next, we solve (11) by using the framework discussed in Section II.

B. Numerical validation

Numerical results are now used to validate the accuracy of Algorithm 1 when applied with a finite order ℓ of the SOS reformulation. The hardware coefficients are taken from [15], while we fix the SINR constraint to γ = 3, the base station density to λ = 5 BS/km², and the average signal-to-noise ratio (SNR) to 0 dB. At each iteration of Algorithm 1, (3) is solved by using YALMIP [23] with the solver SDPT3 [24]. The code is available online at https://github.com/lucasanguinetti/EE-Polynomial-Theory for testing different network configurations.

Footnote 6: We neglect the hardware impairments for the sake of simplicity.
Footnote 7: Those depend on a variety of fixed hardware coefficients, whose typical values strongly depend on the actual hardware equipment and the state-of-the-art in circuit implementation; details can be found in [15].

Fig. 1: EE (in Mbit/Joule) as a function of M and K. The optimum is computed either by means of the iterative Algorithm 1 with ℓ = 12 (d = 6) (red square) or by exhaustive search (black triangle).

Fig. 2: Relative error ε of Algorithm 1 as a function of the number of iterations k and for different ℓ-order SOS reformulations in (6)-(7), with d ∈ {2, 4, 6, 8}.

Fig. 1 shows the EE (measured in Mbit/Joule) as a function of M and K, obtained with an exhaustive search. Algorithm 1 converges in less than ten iterations to the point x⋆_mom = (⌊K⋆_mom⌉, ⌊M⋆_mom⌉) = (8, 133), which yields an error of ε = 1 − r⋆_mom/r⋆_λ ≈ 10⁻⁴ with respect to the global optimum x⋆ = (8, 135). This is achieved by using d = 6 (ℓ = 12). Since n = 2, we have s_{n,d} = binom(6, 4) = 15. Thus, the overall complexity of solving (11) is roughly O(10⁵) arithmetic operations, which with a processing unit operating at 10 Gflops/s [14] takes 10 µs per iteration. Fig. 2 plots the relative error as a function of the number of iterations for d ∈ {2, 4, 6, 8}, which means ℓ ∈ {4, 8, 12, 16}. As can be seen, Algorithm 1 converges even for small d and within few iterations.

IV. CONCLUSIONS

We proposed a framework for fractional polynomial optimization by employing SOS reformulation methods within Dinkelbach's iterative algorithm. The proposed approach applies to a wider set of problems than competing alternatives and enjoys optimality properties as the order ℓ of the SOS reformulation grows to infinity. The framework was applied to the EE maximization of cellular networks, and with d = 6 (ℓ = 12) it was shown to converge in five iterations and to exhibit near-optimal performance when compared to an exhaustive search algorithm. It should also be observed that the considered framework could be applied to power control problems for EE maximization, upon expanding all non-polynomial functions by Taylor series or leveraging the approximation method from [6].
APPENDIX

Consider n, v ∈ N, the polynomial p(x): R^n → R, and define the sets

  N^n_v = {α ∈ N^n : Σ_{i=1}^n α_i ≤ v},  |N^n_v| = binom(n+v, v) = s_{n,v}.   (12)

Every p(x) with degree v, i.e., deg(p) = v, may be uniquely written as a finite linear combination of monomials with maximum degree less than or equal to v [25]:

  p(x) = Σ_{α ∈ N^n_v} p_α x^α,  with x^α = x_1^{α_1} x_2^{α_2} ··· x_n^{α_n} ∈ R   (13)

with coefficients p_α ∈ R for α = [α_1, ..., α_n]^T ∈ N^n_v. The expression in (13) represents the expansion of the polynomial p(x) over the canonical monomial base, and the vector p_α collects the coordinates of the expansion. While the representation in (13) applies to any multivariate polynomial, if p(x) can be written as an SOS, then it also admits the representation

  p(x) = m_d(x)^T W m_d(x),  W ⪰ 0   (14)

where m_d(x) = [1, x_1, ..., x_n, x_1², x_1x_2, ..., x_n^d]^T ∈ R^{s_{n,d}} is the full monomial basis including all the monomials up to degree d = ⌈v/2⌉, and W is a positive semidefinite matrix. By rearranging (14) as tr(m_d(x) m_d(x)^T W) and using the fact that W ⪰ 0, we infer that it must hold

  m_d(x) m_d(x)^T = [ 1      x_1      ...  x_n^d    ]
                    [ x_1    x_1²     ...  x_1 x_n^d ]
                    [ ...    ...      ...  ...      ]
                    [ x_n^d  x_1 x_n^d ... x_n^{2d} ]  ⪰ 0   (15)

in order for p(x) to be positive. Elaborating further on (15), following [13], [17], we introduce the so-called moment matrix representation of (15). Specifically, by a linearization approach, each entry of the matrix in (15), i.e., x^α x^β ∈ R, is replaced by y_{α+β} ∈ R, for α, β ∈ N^n_d. Denoting by y = [y_{00...0}, y_{10...0}, ..., y_{00...d}, ..., y_{00...2d}]^T ∈ R^{s_{n,2d}} the collection of the linearized variables, (15) is reformulated as

  M_d(y) = [ y_{00...0}  y_{10...0}  ...  y_{00...d}  ]
           [ y_{10...0}  y_{20...0}  ...  y_{10...d}  ]
           [ ...         ...         ...  ...         ]
           [ y_{00...d}  y_{10...d}  ...  y_{00...2d} ]  ⪰ 0,   (16)

which is by definition the moment matrix of the polynomial p(x).

The SOS reformulation turns out to be very useful when one needs to check the non-negativity of a multivariate function w(x) [12], which is in general an NP-hard problem [26]. In this context, it is normally easier to check whether w(x) can be reformulated as an SOS polynomial, which clearly implies non-negativity. In the special case of generic v-degree polynomial functions, for which w(x) = p(x) as in (13), the SOS set is convex and the feasibility test reduces to solving a semidefinite program (SDP) [27, Lemma 3.1]. Clearly, although being an SOS implies non-negativity, the converse does not usually hold (see Footnote 8) and thus, in general, solving a problem invoking SOS rather than non-negativity leads to a suboptimal solution. However, by increasing the degree of the SOS polynomials that are used to represent p(x), it is possible to approach the optimal solution within any predefined tolerance (see Theorem 1).

Example 1. To grasp the potential of the above framework, we provide here an easy (unconstrained) problem as an example. To this end, consider the following:

  min_{x ∈ R²} p(x) = (x_2 − 2)² + 2x_1² + x_1x_2 + 5.   (17)

The solution to (17) is x⋆ = (x_1, x_2) = (−1, 2). To use the framework developed above, we first write down the set

  N²₂ = [(0,0), (1,0), (0,1), (2,0), (1,1), (0,2)]^T,   (18)

and then retrieve {p_α} = [9, 0, −4, 2, 1, 2]^T from p(x) in (17). Next, the moment matrix of p(x) is obtained as

  M_1(y) = [ y_{0,0}  y_{1,0}  y_{0,1} ]
           [ y_{1,0}  y_{2,0}  y_{1,1} ]
           [ y_{0,1}  y_{1,1}  y_{0,2} ].   (19)
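The monomial index set (12) and the coordinate vector {p_α} of the expansion (13) can be computed numerically. As a sketch, the snippet below builds N²₂ in the ordering of (18) and recovers the coefficient vector of an illustrative polynomial q(x) = (x_1 + x_2)² + 1 (chosen here for brevity; it is not the polynomial of Example 1) by least squares over random evaluation points.

```python
import itertools
from math import comb

import numpy as np

n, v = 2, 2
# Index set N^n_v from (12): exponent tuples alpha with |alpha| <= v,
# sorted by total degree and then by descending x1-degree, as in (18).
alphas = [a for a in itertools.product(range(v + 1), repeat=n) if sum(a) <= v]
alphas.sort(key=lambda a: (sum(a), -a[0]))

# Illustrative polynomial: q(x) = (x1 + x2)^2 + 1 = 1 + x1^2 + 2 x1 x2 + x2^2.
q = lambda x1, x2: (x1 + x2) ** 2 + 1.0

# Recover the coordinate vector {q_alpha} of (13) by least squares
# over random evaluation points (exact here, since q lies in the span).
rng = np.random.default_rng(0)
pts = rng.standard_normal((50, n))
A = np.array([[x1 ** a1 * x2 ** a2 for (a1, a2) in alphas] for x1, x2 in pts])
b = np.array([q(x1, x2) for x1, x2 in pts])
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
# coeffs ~ [1, 0, 0, 1, 2, 1] in the base ordering of (18)
```

The size check len(alphas) = binom(n+v, v) = 6 matches s_{2,2} from (12).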
Then, exploiting the result in Theorem 1, (17) can be reformulated as the dual problem

  min_y 9y_{0,0} − 4y_{0,1} + 2y_{2,0} + y_{1,1} + 2y_{0,2}
  subject to M_1(y) ⪰ 0,  y_{0,0} = 1,   (20)

whose solution is found to be y⋆ = (1, −1, 2, ∗, ∗, ∗), from which we obtain x⋆ = (−1, 2). Thus, zero duality gap is shown.

Footnote 8: If n = 1, d = 2, or (n, v) = (2, 4), then the two definitions coincide [28].

REFERENCES

[1] A. Zappone and E. Jorswieck, "Energy efficiency in wireless networks via fractional programming theory," Foundations and Trends in Communications and Information Theory, vol. 11, no. 3-4, pp. 185-396, 2015.
[2] H. Leung and S. Haykin, "Detection and estimation using an adaptive rational function filter," IEEE Transactions on Signal Processing, vol. 42, no. 12, pp. 3366-3376, Dec. 1994.
[3] S. C. Chan and K. L. Ho, "A new algorithm for arbitrary transformation of polynomial and rational functions," IEEE Transactions on Signal Processing, vol. 40, no. 2, pp. 456-460, Feb. 1992.
[4] Z. Xiong and Y. Zhang, "Bundle adjustment with rational polynomial camera models based on generic method," IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 1, pp. 190-202, Jan. 2011.
[5] "Sparse estimation of polynomial and rational dynamical models," IEEE Transactions on Automatic Control, vol. 59, no. 11, pp. 2962-2977, Nov. 2014.
[6] D. Braess, Nonlinear Approximation Theory. Berlin, Heidelberg: Springer-Verlag, 1986.
[7] W. Dinkelbach, "On nonlinear fractional programming," Management Science, vol. 13, no. 7, pp. 492-498, 1967.
[8] L. Grippo and M. Sciandrone, Operations Research Letters, vol. 26, no. 3, pp. 127-136, 2000.
[9] Z. Q. Luo, W. K. Ma, A. M. C. So, Y. Ye, and S. Zhang, "Semidefinite relaxation of quadratic optimization problems," IEEE Signal Processing Magazine, vol. 27, no. 3, pp. 20-34, May 2010.
[10] A. Zappone, E. Björnson, L. Sanguinetti, and E. A. Jorswieck, "Globally optimal energy-efficient power control and receiver design in wireless networks," IEEE Transactions on Signal Processing, vol. 65, no. 11, pp. 2844-2859, June 2017.
[11] Y. Yang and M. Pesavento, "A unified successive pseudo-convex approximation framework," IEEE Transactions on Signal Processing, vol. 65, no. 13, pp. 3313-3328, July 2017.
[12] P. A. Parrilo, "Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization," Ph.D. dissertation, California Institute of Technology, May 2000.
[13] J. B. Lasserre, "Global optimization with polynomials and the problem of moments," SIAM Journal on Optimization, vol. 11, no. 3, pp. 796-817, 2001.
[14] E. Björnson, J. Hoydis, and L. Sanguinetti, "Massive MIMO networks: Spectral, energy, and hardware efficiency," Foundations and Trends in Signal Processing, vol. 11, no. 3-4, pp. 154-655, 2017.
[15] E. Björnson, L. Sanguinetti, and M. Kountouris, "Deploying dense networks for maximal energy efficiency: Small cells meet massive MIMO," IEEE Journal on Selected Areas in Communications, vol. 34, no. 4, pp. 832-847, Apr. 2016.
[16] M. Putinar, "Positive polynomials on compact semi-algebraic sets," Indiana University Mathematics Journal, vol. 42, no. 3, pp. 969-984, 1993.
[17] M. Kojima and M. Muramatsu, "An extension of sums of squares relaxations to polynomial optimization problems over symmetric cones," Research Reports on Mathematical and Computing Sciences Series B: Operations Research, 2004.
[18] S. Boyd and L. Vandenberghe, Convex Optimization. New York, NY, USA: Cambridge University Press, 2004.
[19] J. Nie, "Sum of squares method for sensor network localization," Computational Optimization and Applications, vol. 43, no. 2, pp. 151-179, June 2009.
[20] Y. Nesterov and A. Nemirovskii, Interior-Point Polynomial Algorithms in Convex Programming. Society for Industrial and Applied Mathematics, 1994.
[21] J. B. Lasserre, "Moments and sums of squares for polynomial optimization and related problems," Journal of Global Optimization, vol. 45, no. 1, pp. 39-61, Sep. 2009.
[22] J. Löfberg, "Pre- and post-processing sum-of-squares programs in practice," IEEE Transactions on Automatic Control, vol. 54, no. 5, pp. 1007-1011, 2009.
[23] J. Löfberg, "YALMIP: A toolbox for modeling and optimization in MATLAB," in 2004 IEEE International Conference on Robotics and Automation (IEEE Cat. No.04CH37508), Sept. 2004, pp. 284-289.
[24] K. Toh, M. Todd, and R. Tutuncu, "SDPT3 - a MATLAB software package for semidefinite programming," Optimization Methods and Software, vol. 11, no. 3, pp. 545-581, 1999.
[25] M. Laurent, Sums of Squares, Moment Matrices and Optimization Over Polynomials. New York, NY: Springer New York, 2009, pp. 157-270.
[26] Y. Nesterov, Squared Functional Systems and Optimization Problems. Boston, MA: Springer US, 2000, pp. 405-440.
[27] P. A. Parrilo and B. Sturmfels, "Minimizing polynomial functions," ArXiv Mathematics e-prints, Mar. 2001.
[28] B. Reznick, "Some concrete aspects of Hilbert's 17th problem," in Contemporary Mathematics. American Mathematical Society, 1996, pp. 251-272.