Algorithm for Optimal Mode Scheduling in Switched Systems
Authors: Yorai Wardi, Magnus Egerstedt
Abstract — This paper considers the problem of computing the schedule of modes in a switched dynamical system that minimizes a cost functional defined on the trajectory of the system's continuous state variable. A recent approach to such optimal control problems consists of algorithms that alternate between computing the optimal switching times between modes in a given sequence, and updating the mode-sequence by inserting into it a finite number of new modes. These algorithms have an inherent inefficiency due to their sparse update of the mode-sequences, while spending most of the computing time on optimizing with respect to the switching times for a given mode-sequence. This paper proposes an algorithm that operates directly in the schedule space without resorting to the timing optimization problem. It is based on the Armijo step size along certain Gâteaux derivatives of the performance functional, thereby avoiding some of the computational difficulties associated with discrete scheduling parameters. Its convergence to local minima as well as its rate of convergence are proved, and a simulation example on a nonlinear system exhibits quite a fast convergence.

I. INTRODUCTION

Switched-mode hybrid dynamical systems often are characterized by the following equation,

$$\dot{x} = f(x, v), \qquad (1)$$

where $x \in \mathbb{R}^n$ is the state variable, $v \in V$ with $V$ being a given finite set, and $f : \mathbb{R}^n \times V \to \mathbb{R}^n$ is a suitable function. Suppose that the system evolves on a horizon interval $[0, T]$ for some $T > 0$, and that the initial state $x(0) = x_0$ is given for some $x_0 \in \mathbb{R}^n$. The input control of this system, $v(t)$, is discrete since $V$ is a finite set, and we assume that the function $v(t)$ changes its values a finite number of times during the horizon interval $[0, T]$.
Such systems have been investigated in the past several years due to their relevance in control applications such as mobile robotics [7], vehicle control [19], switching circuits [1] and references therein, telecommunications [14], [9], and situations where a controller has to switch its attention among multiple subsystems [10] or data sources [5]. Of particular interest in these applications is an optimal control problem where it is desirable to minimize a cost functional (criterion) of the form

$$J := \int_0^T L(x)\, dt \qquad (2)$$

for a given $T > 0$, where $L : \mathbb{R}^n \to \mathbb{R}$ is a cost function defined on the state trajectory.

This general nonlinear optimal-control problem was formulated in [4], where the particular values of $v \in V$ are associated with the various modes of the system.¹ Several variants of the maximum principle were derived for this problem in [18], [11], [15], and subsequently provably-convergent optimization algorithms were developed in [20], [15], [17], [8], [2]. We point out that two kinds of problems were considered: those where the sequence of modes is fixed and the controlled variable consists of the switching times between them, and those where the controlled variable is comprised of the sequence of modes as well as the switching times between them. We call the former the timing optimization problem, and the latter the scheduling optimization problem. The timing optimization problem generally is simpler than the scheduling optimization problem since essentially it is a nonlinear-programming problem (albeit with a special structure) having only continuous variables, while the scheduling problem has a discrete sequencing variable as well. Furthermore, scheduling problems generally are NP-hard, and computational techniques have to search for solutions that are suboptimal in a suitable sense.
Thus, while the algorithms that were proposed early focused on the timing optimization problem, several different (and apparently complementary) approaches to the scheduling-optimization problem have emerged as well. Zoning algorithms that compute (iteratively) the mode sequences based on geometric properties of the problem have been developed in [16], needle-variations techniques were presented in [3], and relaxation methods were proposed in [6]. In contrast, the algorithm considered in this paper computes its iterations directly in the schedule space without resorting to relaxations, and as argued later in the sequel, may compute optimal (or suboptimal) schedules quite effectively.

Our starting point is the algorithm we developed in [3], which alternates between the following two steps: (1) Given a sequence of modes, compute the switching times among them that minimize the functional $J$. (2) Update the mode-sequence by inserting into it a single mode at a (computed) time that would lead to the greatest possible reduction rate in $J$. Then repeat Step 1, etc.

The second step deserves some explanation. Fix a time $t \in [0, T]$, and let us denote the system's mode at that time by $M_\alpha$. Now suppose that we replace this mode by another mode, denoted by $M_\beta$, over the time interval $[t, t + \lambda]$ for some given $\lambda > 0$, and denote by $\tilde{J}(\lambda)$ the cost functional $J$ defined by (2) as a function of $\lambda$.

¹ The setting in [4] is more general since it involves a continuous-time control $u \in \mathbb{R}^k$ as well as a discrete control $v$. In this paper we focus only on the discrete control since it captures the salient points of switched-mode systems, and we defer discussion of the general case to a forthcoming publication.
We call the one-sided derivative $\frac{d\tilde{J}}{d\lambda^+}(0)$ the insertion gradient, and we note that if $\frac{d\tilde{J}}{d\lambda^+}(0) < 0$ then inserting $M_\beta$ for a brief amount of time at time $t$ would result in a decrease in $J$, while if $\frac{d\tilde{J}}{d\lambda^+}(0) > 0$ then such an insertion would result in an increase in $J$. Now the second step of the algorithm computes the time $t \in [0, T]$ and mode $M_\beta$ that minimize the insertion gradient, and it performs the insertion accordingly. We mention that if the insertion gradient is non-negative for every mode $M_\beta$ and time $t \in [0, T]$ then the schedule in question satisfies a necessary optimality condition and no insertion is performed.

The aforementioned algorithm has a peculiar feature in that it solves a timing optimization problem between consecutive mode-insertions. This feature appears awkward and suggests that the algorithm can be quite inefficient, but it is required for the convergence proof derived in [3]. In fact, that proof breaks down if the insertions are made for schedules that do not necessarily comprise solution points of the timing optimization problem for their given mode-sequences. The reason seems to lie in the fact that the insertion gradient is not continuous in the time-points at which the insertion of a given mode $M_\beta$ is made. However, this lack of continuity can be overcome by other properties of the problem at hand, and this leads to the development of the algorithm that is proposed in this paper, which appears to be more efficient than the one in [3]. The algorithm we describe here computes its iterations directly in the space of mode-schedules without having to solve any timing optimization problems. Furthermore, at each iteration it switches the mode not at a finite set of times, but at sets comprised of unions of positive-length intervals in the time horizon $[0, T]$.
The algorithm is based on the idea of the Armijo step size used in gradient-descent techniques [13], and it uses the Lebesgue measure of the sets where the modes are to be changed as the step-size parameter. To the best of our knowledge this idea has not been used in extant algorithms for optimal control problems, and while it appears natural in the setting of switched-mode systems, it may have extensions to other optimal-control settings as well. We prove the algorithm's convergence and its convergence rate, which we show to be independent of the number of intervals where the modes are changed at a given iteration.

The rest of the paper is organized as follows. Section II sets the mathematical formulation of the problem and recounts some established results. Section III carries out the analysis, while Section IV presents a simulation example. Finally, Section V concludes the paper.

II. PROBLEM FORMULATION AND SURVEY OF RELEVANT RESULTS

Consider the state equation (1) and recall that the initial state $x_0$ and the final time $T > 0$ are given. We make the following assumption regarding the vector field $f(x, v)$ and the state trajectory $\{x(t)\}$.

Assumption 1: (i) For every $v \in V$, the function $f(x, v)$ is twice continuously differentiable ($C^2$) throughout $\mathbb{R}^n$. (ii) The state trajectory $x(t)$ is continuous at all $t \in [0, T]$.

Every mode-schedule is associated with an input control function $v : [0, T] \to V$, and we define an admissible mode schedule to be a schedule whose associated control function $v(\cdot)$ changes its values a finite number of times throughout the interval $[0, T]$. We denote the space of admissible schedules by $\Sigma$, and a typical admissible schedule by $\sigma \in \Sigma$. Given $\sigma \in \Sigma$, we define the length of $\sigma$ as the number of consecutive different values of $v$ on the horizon interval $[0, T]$, and denote it by $\ell(\sigma)$.
Furthermore, we denote the $i$th successive value of $v$ in $\sigma$ by $v_i$, $i = 1, \ldots, \ell(\sigma)$, and the switching time between $v_i$ and $v_{i+1}$ will be denoted by $\tau_i$. Further defining $\tau_0 := 0$ and $\tau_{\ell(\sigma)} := T$, we observe that the input control function is defined by $v(t) = v_i$ for all $t \in [\tau_{i-1}, \tau_i)$, $i = 1, \ldots, \ell(\sigma)$. We require that $\ell(\sigma) < \infty$ but impose no upper bound on $\ell(\sigma)$.

Given $\sigma \in \Sigma$, define the costate $p \in \mathbb{R}^n$ by the following differential equation,

$$\dot{p} = -\frac{\partial f}{\partial x}(x, v)^T p - \frac{dL}{dx}(x)^T \qquad (3)$$

with the boundary condition $p(T) = 0$. Fix a time $s \in [0, T)$, $w \in V$, and $\lambda > 0$, and consider replacing the value of $v(t)$ by $w$ for every $t \in [s, s + \lambda)$. This amounts to changing the mode-sequence $\sigma$ by inserting the mode associated with $w$ throughout the interval $[s, s + \lambda)$. Denoting by $\tilde{J}(\lambda)$ the value of the cost functional resulting from this insertion, the insertion gradient is defined by $\frac{d\tilde{J}}{d\lambda^+}(0)$. Of course this insertion gradient depends on the mode-schedule $\sigma$, the inserted mode associated with $w \in V$, and the insertion time $s$, and hence we denote it by $D_{\sigma,s,w}$. We have the following result (e.g., [8]):

$$D_{\sigma,s,w} = p(s)^T \big( f(x(s), w) - f(x(s), v(s)) \big). \qquad (4)$$

As mentioned earlier, if $D_{\sigma,s,w} < 0$ then inserting into $\sigma$ the mode associated with $w$ on a small interval starting at time $s$ would reduce the cost functional. On the other hand, if $D_{\sigma,s,w} \geq 0$ for all $w \in V$ and $s \in [0, T]$ then we can think of $\sigma$ as satisfying a local optimality condition. Formally, define $D_{\sigma,s} := \min\{D_{\sigma,s,w} : w \in V\}$, and define $D_\sigma := \inf\{D_{\sigma,s} : s \in [0, T]\}$. Observe that $D_{\sigma,s,v(s)} = 0$ since $v(s)$ is associated with the same mode at time $s$ and hence $\sigma$ is not modified; consequently, by definition, $D_{\sigma,s} \leq 0$ and $D_\sigma \leq 0$ as well.
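To make formula (4) concrete: once $x(s)$ and $p(s)$ are available, the insertion gradient is a single inner product. The sketch below evaluates it for a hypothetical bimodal linear vector field $f(x, v) = A_v x$; the matrices, sampled state/costate values, and function names are invented for illustration and are not from the paper.

```python
import numpy as np

# Hypothetical bimodal linear vector field f(x, v) = A_v x, v in {0, 1}
# (matrices invented for illustration; any f(x, v) would serve).
A = {0: np.array([[-1.0, 0.0], [0.0, -2.0]]),
     1: np.array([[0.0, 1.0], [-1.0, 0.0]])}

def f(x, v):
    return A[v] @ x

def insertion_gradient(p_s, x_s, w, v_s):
    # Equation (4): D_{sigma,s,w} = p(s)^T ( f(x(s), w) - f(x(s), v(s)) )
    return float(p_s @ (f(x_s, w) - f(x_s, v_s)))

# Sampled state and costate values at some time s (also invented):
x_s = np.array([1.0, 1.0])
p_s = np.array([0.5, -0.5])
D = insertion_gradient(p_s, x_s, w=1, v_s=0)
```

Note that evaluating the gradient with $w = v(s)$ returns zero, mirroring the observation $D_{\sigma,s,v(s)} = 0$ in the text.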
The condition $D_\sigma = 0$ is a natural first-order necessary optimality condition, and the purpose of the algorithm described below is to compute a mode-schedule $\sigma$ that satisfies it.

Our algorithm is a descent method based on the principle of the Armijo step size. Given a schedule $\sigma \in \Sigma$, it computes the next schedule, $\sigma_{\text{next}}$, by changing the modes associated with points $s \in [0, T]$ where $D_{\sigma,s} < 0$. The main point of departure from existing algorithms (and especially those in [3]) is that the set of such points $s$ is not finite or discrete, but has a positive Lebesgue measure. Moreover, the Lebesgue measure of this set acts as the parameter for the Armijo procedure.

Now one of the basic requirements of algorithms in the general setting of nonlinear programming is that every accumulation point of a computed sequence of iteration points satisfies a certain optimality condition, like stationarity or the Kuhn-Tucker condition. However, in our case such a convergence property is meaningless since the schedule space $\Sigma$ is neither finite dimensional nor complete, the latter due to the requirement that $\ell(\sigma) < \infty$ for all $\sigma \in \Sigma$. Consequently, convergence of our algorithm has to be characterized by other means, and to this end we use Polak's concept of minimizing sequences [12]. Accordingly, the quantity $D_\sigma$ acts as an optimality function [13]: the optimality condition in question is $D_\sigma = 0$, while $|D_\sigma|$ indicates the extent to which $\sigma$ fails to satisfy that optimality condition. Convergence of an algorithm means that, if it computes a sequence of schedules $\{\sigma_k\}_{k=1}^\infty$, then

$$\limsup_{k \to \infty} D_{\sigma_k} = 0; \qquad (5)$$

in some cases the stronger condition $\lim_{k \to \infty} D_{\sigma_k} = 0$ applies. In either case, for every $\epsilon > 0$ the algorithm yields an admissible mode-schedule $\sigma \in \Sigma$ satisfying the inequality $D_\sigma > -\epsilon$.
Our analysis will yield Equation (5) by proving a uniformly-linear convergence rate of the algorithm.²

Since the Armijo step-size technique will play a key role in our algorithm, we conclude this section with a recount of its main features. Consider the general setting of nonlinear programming where it is desirable to minimize a $C^2$ function $f : \mathbb{R}^n \to \mathbb{R}$, and suppose that the Hessian $\frac{d^2 f}{dx^2}(x)$ is bounded on $\mathbb{R}^n$. Given $x \in \mathbb{R}^n$, a steepest descent from $x$ is any vector in the direction $-\nabla f(x)$; we normalize the gradient by defining $h(x) := \frac{\nabla f(x)}{\|\nabla f(x)\|}$, and call $-h(x)$ the steepest-descent direction. Let $\lambda(x) \geq 0$ denote the step size, so that the next point computed by the algorithm, denoted by $x_{\text{next}}$, is defined as

$$x_{\text{next}} = x - \lambda(x) h(x). \qquad (6)$$

The Armijo step-size procedure defines $\lambda(x)$ by an approximate line minimization in the following way (see [13]): Given constants $\alpha \in (0, 1)$ and $\beta \in (0, 1)$, define the integer $j(x)$ by

$$j(x) := \min\big\{ j = 0, 1, \ldots : f(x - \beta^j \nabla f(x)) - f(x) \leq -\alpha \beta^j \|\nabla f(x)\|^2 \big\}, \qquad (7)$$

and define

$$\lambda(x) = \beta^{j(x)} \|\nabla f(x)\|. \qquad (8)$$

Now the steepest-descent algorithm with Armijo step size computes a sequence of iteration points $x_k$, $k = 1, 2, \ldots$, by the formula $x_{k+1} = x_k - \lambda(x_k) h(x_k)$; $\lambda(x_k)$ is called the Armijo step size at $x_k$. The main convergence property of this algorithm [13] is that every accumulation point $\hat{x}$ of a computed sequence $\{x_k\}_{k=1}^\infty$ satisfies the stationarity condition $\nabla f(\hat{x}) = 0$. Several results concerning convergence rate have been derived as well, and the one of interest to us is given by Proposition 1 below. Its proof is contained in the arguments of the proof of Theorem 1.3.7 and especially Equation (8b) in [13], but since we have not seen the result stated in the same way as in Proposition 1, we provide a brief proof in the appendix.

Proposition 1: Suppose that $f(x)$ is $C^2$, and that there exists a constant $L > 0$ such that, for every $x \in \mathbb{R}^n$, $\|H(x)\| \leq L$, where $H(x) := \frac{d^2 f}{dx^2}(x)$. Then the following two statements are true: (1) For every $x \in \mathbb{R}^n$ and for every $\lambda \geq 0$ such that $\lambda \leq \frac{2}{L}(1 - \alpha)\|\nabla f(x)\|$,

$$f(x - \lambda h(x)) - f(x) \leq -\alpha \lambda \|\nabla f(x)\|. \qquad (9)$$

(2) For every $x \in \mathbb{R}^n$,

$$\lambda(x) \geq \frac{2\beta}{L}(1 - \alpha)\|\nabla f(x)\|. \qquad (10)$$

This implies the following convergence result:

Corollary 1: (1) There exists $c > 0$ such that for all $x \in \mathbb{R}^n$,

$$f(x_{\text{next}}) - f(x) \leq -c \|\nabla f(x)\|^2. \qquad (11)$$

(2) If the algorithm computes a bounded sequence $\{x_k\}_{k=1}^\infty$ then

$$\lim_{k \to \infty} \nabla f(x_k) = 0. \qquad (12)$$

Proof: (1) Define $c := \frac{2}{L}\alpha(1 - \alpha)\beta$. Then (11) follows directly from Equations (9) and (10). (2) Follows immediately from part (1) and the fact that the sequence $\{f(x_k)\}_{k=1}^\infty$ is monotone non-increasing.

² The reason for the "limsup" in (5) instead of the stronger form of convergence (with "lim" instead of "limsup") is due to technical peculiarities of the optimality function $D_\sigma$ that will be discussed later. We will argue that the stronger form of convergence applies except for pathological situations. Furthermore, we will define an alternative optimality function and prove the stronger form of convergence for it. The choice of the most suitable optimality function is largely theoretical and will not be addressed in this paper.

III. ALGORITHM FOR MODE-SCHEDULING MINIMIZATION

To simplify the notation and analysis we assume first that the set $V$ consists only of two elements, namely the system is bi-modal.
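For reference, the classical procedure of Equations (6)–(8) can be sketched in a few lines. This is a generic implementation for a smooth $f : \mathbb{R}^n \to \mathbb{R}$ with bounded Hessian; the quadratic test function at the bottom is our own check, not from the paper.

```python
import numpy as np

def armijo_step(f, grad_f, x, alpha=0.5, beta=0.5):
    # Equations (7)-(8): find the smallest j achieving sufficient descent,
    # then return lambda(x) = beta^j * ||grad f(x)||.  Terminates for
    # C^2 functions with bounded Hessian (Proposition 1, part 2).
    g = grad_f(x)
    gn = np.linalg.norm(g)
    j = 0
    while f(x - beta**j * g) - f(x) > -alpha * beta**j * gn**2:
        j += 1
    return beta**j * gn

def steepest_descent(f, grad_f, x0, iters=50):
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        g = grad_f(x)
        gn = np.linalg.norm(g)
        if gn < 1e-12:
            break
        h = g / gn                               # normalized direction h(x)
        x = x - armijo_step(f, grad_f, x) * h    # Equation (6)
    return x

# Sanity check on f(x) = ||x||^2, whose unique minimizer is the origin.
x_star = steepest_descent(lambda x: float(x @ x), lambda x: 2 * x, [3.0, -4.0])
```

With $\alpha = \beta = 0.5$ the quadratic is minimized essentially in one Armijo step, which is the sufficient-descent behavior that Corollary 1 quantifies.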
This assumption incurs no significant loss of generality, and at the end of this section we will point out an extension to the general case where $V$ consists of an arbitrary finite number of points. Let us denote the two elements of $V$ by $v_1$ and $v_2$. A mode-schedule $\sigma$ alternates between these two points, and we denote by $\{v_1, \ldots, v_{\ell(\sigma)}\}$ the sequence of values of $v$ associated with the mode-sequence comprising $\sigma$. Denoting by $v^c$ the complement of $v$, we have that $v_{i+1} = (v_i)^c$ for all $i = 1, \ldots, \ell(\sigma) - 1$.

Consider a mode-schedule $\sigma \in \Sigma$ that does not satisfy the necessary optimality condition, namely $D_\sigma < 0$. Define the set $S_{\sigma,0}$ as $S_{\sigma,0} := \{s \in [0, T] : D_{\sigma,s} < 0\}$, and note that $S_{\sigma,0} \neq \emptyset$. Recall that $v(s)$ denotes the value of $v$ at the time $s$. Then for every $s \in S_{\sigma,0}$ which is not a switching time, an insertion of the complementary mode $v(s)^c$ at $s$ for a small enough period would result in a decrease of $J$. Our goal is to flip the modes (namely, to switch them to their complementary ones) in a large subset of $S_{\sigma,0}$ that would result in a substantial decrease in $J$, where by the term "substantial decrease" we mean a decrease by at least $a D_\sigma^2$ for some constant $a > 0$. This "sufficient descent" in $J$ is akin to the descent property of the Armijo step size as reflected in Equation (11). The sufficient-descent property cannot be guaranteed by flipping the mode at every time $s \in S_{\sigma,0}$. Instead, we search for a subset of $S_{\sigma,0}$ where flipping the mode at every $s$ in that subset would guarantee a sufficient descent. This subset will consist of points $s$ where $D_{\sigma,s}$ is "more negative" than at typical points $s \in S_{\sigma,0}$. Fix $\eta \in (0, 1)$ and define the set $S_{\sigma,\eta}$ by

$$S_{\sigma,\eta} = \{s \in [0, T] : D_{\sigma,s} \leq \eta D_\sigma\}. \qquad (13)$$

Obviously $S_{\sigma,\eta} \neq \emptyset$ since $D_\sigma < 0$.
Let $\mu(S_{\sigma,\eta})$ denote the Lebesgue measure of $S_{\sigma,\eta}$, and more generally, let $\mu(\cdot)$ denote the Lebesgue measure on $\mathbb{R}$. For every subset $S \subset S_{\sigma,\eta}$, consider flipping the mode at every point $s \in S$, and denote by $\sigma(S)$ the resulting mode-schedule. In the forthcoming we will search for a set $S \subset S_{\sigma,\eta}$ that will give us the desired sufficient descent.

Fix $\eta \in (0, 1)$. Let $S : [0, \mu(S_{\sigma,\eta})] \to 2^{S_{\sigma,\eta}}$ (the latter object is the set of subsets of $S_{\sigma,\eta}$) be a mapping having the following two properties: (i) for all $\lambda \in [0, \mu(S_{\sigma,\eta})]$, $S(\lambda)$ is the finite union of closed intervals; and (ii) for all $\lambda \in [0, \mu(S_{\sigma,\eta})]$, $\mu(S(\lambda)) = \lambda$. We define $\sigma(\lambda)$ to be the mode-schedule obtained from $\sigma$ by flipping the mode at every time-point $s \in S(\lambda)$. For example, for all $\lambda \in [0, \mu(S_{\sigma,\eta})]$ define $s(\lambda) := \inf\{s \in S_{\sigma,\eta} : \mu([0, s] \cap S_{\sigma,\eta}) = \lambda\}$, and define $S(\lambda) := [0, s(\lambda)] \cap S_{\sigma,\eta}$. Then $\sigma(\lambda)$ is the schedule obtained from $\sigma$ by flipping the modes lying in the leftmost subset of $S_{\sigma,\eta}$ having Lebesgue measure $\lambda$, and it is the finite union of closed intervals if so is $S_{\sigma,\eta}$.

We next use such a mapping $S(\lambda)$ to define an Armijo step-size procedure for computing a schedule $\sigma_{\text{next}}$ from $\sigma$. Given are constants $\alpha \in (0, 1)$ and $\beta \in (0, 1)$, in addition to $\eta \in (0, 1)$. Consider a given $\sigma \in \Sigma$ such that $D_\sigma < 0$. For every $j = 0, 1, \ldots$, define $\lambda_j := \beta^j \mu(S_{\sigma,\eta})$, and define $j(\sigma)$ by

$$j(\sigma) := \min\big\{ j = 0, 1, \ldots : J(\sigma(\lambda_j)) - J(\sigma) \leq \alpha \lambda_j D_\sigma \big\}. \qquad (14)$$

Finally, define $\lambda(\sigma) := \lambda_{j(\sigma)}$, and set $\sigma_{\text{next}} := \sigma(\lambda(\sigma))$. Observe that the Armijo step-size procedure is applied here not to the steepest descent (which is not defined in our problem setting) but to a descent direction defined by a Gâteaux derivative of $J$ with respect to a subset of the interval $[0, T]$ where the modes are to be flipped.
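When $S_{\sigma,\eta}$ is a finite union of intervals, the example mapping above (the leftmost subset of measure $\lambda$) is straightforward to compute. A minimal sketch, with the interval endpoints chosen arbitrarily for illustration:

```python
def leftmost_subset(intervals, lam):
    """Leftmost subset of total measure lam taken from a union of disjoint,
    sorted intervals [(a1, b1), (a2, b2), ...] -- the example mapping
    S(lambda) described in the text (property (ii): mu(S(lambda)) = lam)."""
    out, remaining = [], lam
    for a, b in intervals:
        if remaining <= 0:
            break
        take = min(b - a, remaining)   # consume this interval left to right
        out.append((a, a + take))
        remaining -= take
    return out

# Example: S_{sigma,eta} = [0,1] U [2,2.5] U [4,6] (invented numbers);
# the leftmost subset of measure 1.2 uses all of [0,1] and part of [2,2.5].
S = leftmost_subset([(0.0, 1.0), (2.0, 2.5), (4.0, 6.0)], 1.2)
```

Any mapping with properties (i) and (ii) would do; the analysis in the paper is independent of this particular choice.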
Generally this Gâteaux derivative is not necessarily continuous in $\lambda$ and hence the standard arguments for sufficient descent do not apply. However, the problem has a special structure guaranteeing sufficient descent and the algorithm's convergence in the sense of minimizing sequences. Furthermore, the sufficient-descent property depends on $\mu(S_{\sigma,\eta})$ but is independent of both the string size $\ell(\sigma)$ and the particular choice of the mapping $S : [0, \mu(S_{\sigma,\eta})] \to 2^{S_{\sigma,\eta}}$. This guarantees that the convergence rate of the algorithm is not reduced when the string lengths of the schedules computed in successive iterations grow unboundedly.

We next present the algorithm formally. Given are constants $\alpha \in (0, 1)$, $\beta \in (0, 1)$, and $\eta \in (0, 1)$. Suppose that for every $\sigma \in \Sigma$ such that $D_\sigma < 0$ there exists a mapping $S : [0, \mu(S_{\sigma,\eta})] \to 2^{S_{\sigma,\eta}}$ with the aforementioned properties.

Algorithm 1:
Step 0: Start with an arbitrary schedule $\sigma_0 \in \Sigma$. Set $k = 0$.
Step 1: Compute $D_{\sigma_k}$. If $D_{\sigma_k} = 0$, stop and exit; otherwise, continue.
Step 2: Compute $S_{\sigma_k,\eta}$ as defined in (13), namely $S_{\sigma_k,\eta} = \{s \in [0, T] : D_{\sigma_k,s} \leq \eta D_{\sigma_k}\}$.
Step 3: Compute $j(\sigma_k)$ as defined by (14), namely

$$j(\sigma_k) = \min\big\{ j = 0, 1, \ldots : J(\sigma_k(\lambda_j)) - J(\sigma_k) \leq \alpha \lambda_j D_{\sigma_k} \big\} \qquad (15)$$

with $\lambda_j := \beta^j \mu(S_{\sigma_k,\eta})$, and set $\lambda(\sigma_k) := \lambda_{j(\sigma_k)}$.
Step 4: Define $\sigma_{k+1} := \sigma_k(\lambda(\sigma_k))$, namely the schedule obtained from $\sigma_k$ by flipping the mode at every time-point $s \in S(\lambda(\sigma_k))$. Set $k = k + 1$, and go to Step 1.

It must be mentioned that the computation of the set $S_{\sigma_k,\eta}$ at Step 2 typically requires an adequate approximation. This paper analyzes the algorithm under the assumption of an exact computation of $S_{\sigma_k,\eta}$, while the case involving adaptive precision will be treated in a later, more comprehensive publication.
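The steps above can be sketched on a discrete time grid, with the continuous objects ($J$, $D_{\sigma,s}$, and the mode-flipping map) abstracted behind user-supplied callables. This is a schematic discretization only, not the paper's implementation; the toy bimodal problem at the bottom is invented purely to exercise the loop.

```python
def algorithm1(sigma0, cost, D, grid, flip, alpha=0.5, beta=0.5, eta=0.5,
               max_iter=100, tol=1e-6):
    """Schematic of Algorithm 1 on a discrete time grid.

    Hypothetical interfaces (our choices, not from the paper):
      cost(sigma)    -> J(sigma)
      D(sigma, s)    -> insertion gradient D_{sigma,s} at grid point s
      flip(sigma, S) -> schedule with modes flipped at every grid point in S
    """
    sigma = sigma0
    for _ in range(max_iter):
        Ds = {s: D(sigma, s) for s in grid}
        D_sigma = min(Ds.values())
        if D_sigma > -tol:                       # Step 1: D_sigma ~ 0, stop
            return sigma
        S_eta = [s for s in grid if Ds[s] <= eta * D_sigma]   # Step 2, (13)
        for j in range(60):                      # Step 3: Armijo test, (15)
            lam_j = beta**j * len(S_eta)         # "measure" = grid-point count
            S_j = set(S_eta[:max(1, round(lam_j))])  # leftmost subset
            if cost(flip(sigma, S_j)) - cost(sigma) <= alpha * lam_j * D_sigma:
                break
        sigma = flip(sigma, S_j)                 # Step 4
    return sigma

# Toy bimodal problem: a schedule is a tuple of modes over the grid; the
# "cost" counts mismatches against a target schedule, so each mismatched
# point has insertion gradient -1 and the optimum is the target itself.
grid = list(range(8))
target = (0,) * 8
cost = lambda sig: float(sum(m != t for m, t in zip(sig, target)))
D = lambda sig, s: -1.0 if sig[s] != target[s] else 0.0
flip = lambda sig, S: tuple(1 - m if s in S else m for s, m in enumerate(sig))
sigma_star = algorithm1((1, 0, 1, 1, 0, 1, 0, 0), cost, D, grid, flip)
```

In the toy problem the very first Armijo test ($j = 0$) already satisfies the sufficient-descent inequality, so the algorithm flips every point of $S_{\sigma,\eta}$ at once, illustrating that the step acts on a set rather than at a single time.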
The forthcoming analysis is carried out under Assumption 1, above. It requires the following two preliminary results, whose proofs follow as corollaries from established results on sensitivity analysis of solutions to differential equations [13], and hence are relegated to the appendix.

Given $\sigma \in \Sigma$, consider an interval $I := [s_1, s_2] \subset [0, T]$ of a positive length, such that the modes associated with all $s \in I$ are the same, i.e., $v(s) = v(s_1)$ for all $s \in I$. Denote by $\sigma_{s_1}(\gamma)$ the mode-sequence obtained from $\sigma$ by flipping the modes at every time $s \in [s_1, s_1 + \gamma]$, and consider the resulting cost function $J(\sigma_{s_1}(\gamma))$ as a function of $\gamma \in [0, s_2 - s_1]$.

Lemma 1: There exists a constant $K > 0$ such that, for every $\sigma \in \Sigma$, and for every interval $I = [s_1, s_2]$ as above, the function $J(\sigma_{s_1}(\cdot))$ is twice continuously differentiable ($C^2$) on the interval $\gamma \in [0, s_2 - s_1]$; and for every $\gamma \in [0, s_2 - s_1]$, $|J(\sigma_{s_1}(\gamma))''| \leq K$ ("prime" indicates derivative with respect to $\gamma$).

Proof: Please see the appendix.

We remark that the $C^2$ property of $J(\sigma_{s_1}(\cdot))$ is in force only as long as $v(s) = v(s_1)$ for all $s \in [s_1, s_2]$. The second assertion of the above lemma does not quite follow from the first one; the bound $K$ is independent of the specific interval $[s_1, s_2]$.

Lemma 1 in conjunction with Corollary 1 (above) can yield sufficient descent only in a local sense, as long as the same mode is scheduled according to $\sigma$. At mode-switching times $D_{\sigma,s}$ is no longer continuous in $s$, and hence Lemma 1 cannot be extended to intervals where $v(\cdot)$ does not have a constant value. Nonetheless we can prove the sufficient-descent property in a more global sense with the aid of the following result, whose validity is due to the special structure of the problem.
Lemma 2: There exists a constant $K > 0$ such that for every $\sigma \in \Sigma$, for every interval $I = [s_1, s_2]$ as above (i.e., such that $\sigma$ has the same mode throughout $I$), for every $\gamma \in [0, s_2 - s_1)$, and for every $s \geq s_2$,

$$|D_{\sigma_{s_1}(\gamma),s} - D_{\sigma,s}| \leq K\gamma. \qquad (16)$$

Proof: Please see the appendix.

To explain this result, recall that $\sigma_{s_1}(\gamma)$ is the mode-schedule obtained from $\sigma$ by flipping all the modes on the interval $[s_1, s_1 + \gamma]$. Thus, Equation (16) provides an upper bound on the magnitude of the difference between the insertion gradients of the sequences $\sigma$ and $\sigma_{s_1}(\gamma)$ at the same point $s$. Furthermore, Lemma 2 implies a uniform Lipschitz continuity of the insertion gradient at every point $s > s_2$ with respect to the length of the insertion interval $\gamma$. This is not the same as continuity of $D_{\sigma,s}$ with respect to $s$, which we know is not true.

Recall the following terminology: given $\sigma \in \Sigma$ and $S \subset [0, T]$, $\sigma(S)$ denotes the schedule obtained by flipping the mode of $\sigma$ at every $\tau \in S$.

Corollary 2: There exists $K > 0$ such that, for every $\sigma \in \Sigma$, for every subset $S \subset [0, T]$ comprised of a finite number of intervals, and for every $s \geq \sup\{\tilde{s} \in S\}$,

$$|D_{\sigma(S),s} - D_{\sigma,s}| \leq K\mu(S). \qquad (17)$$

Proof: Let $K > 0$ be the constant given by Lemma 2. Fix $\sigma \in \Sigma$, a subset $S \subset [0, T]$ comprised of a finite number of intervals, and $s \geq \sup\{\tilde{s} \in S\}$. We can assume without loss of generality that each one of the intervals comprising $S$ contains its lower-boundary point but not its upper-boundary point. Denote these intervals by $I_j := [s_{1,j}, s_{2,j})$, $j = 1, \ldots, m$ for some $m \geq 1$, so that $S = \cup_{j=1}^m [s_{1,j}, s_{2,j})$. Furthermore, by subdividing these intervals if necessary, we can assume that $v(\tau)$ has a constant value throughout each interval $I_j$, namely all the modes in $I_j$ are the same according to $\sigma$. Note that these intervals need not be contiguous, i.e., it is possible to have $s_{1,j+1} > s_{2,j}$ for some $j = 1, \ldots, m - 1$. Define $S_j := \cup_{i=1}^j I_i$, $j = 1, \ldots, m$, and note that $S = S_m$. Furthermore, $\mu(S) = \sum_{j=1}^m (s_{2,j} - s_{1,j})$. Next, we have that

$$D_{\sigma(S),s} - D_{\sigma,s} = D_{\sigma(S_1),s} - D_{\sigma,s} + \sum_{j=2}^m \big( D_{\sigma(S_j),s} - D_{\sigma(S_{j-1}),s} \big). \qquad (18)$$

By Lemma 2, $|D_{\sigma(S_1),s} - D_{\sigma,s}| \leq K(s_{2,1} - s_{1,1})$, and for every $j = 2, \ldots, m$, $|D_{\sigma(S_j),s} - D_{\sigma(S_{j-1}),s}| \leq K(s_{2,j} - s_{1,j})$. Combining these bounds with (18), $|D_{\sigma(S),s} - D_{\sigma,s}| \leq K \sum_{j=1}^m (s_{2,j} - s_{1,j})$, and since $\mu(S) = \sum_{j=1}^m (s_{2,j} - s_{1,j})$, (17) follows.

We now can state the algorithm's property of sufficient descent.

Proposition 2: Fix $\eta \in (0, 1)$, $\beta \in (0, 1)$, and $\alpha \in (0, \eta)$. There exists a constant $c > 0$ such that, for every $\sigma \in \Sigma$ satisfying $D_\sigma < 0$, and for every $\lambda \in [0, \mu(S_{\sigma,\eta})]$ such that $\lambda \leq c|D_\sigma|$,

$$J(\sigma(\lambda)) - J(\sigma) \leq \alpha \lambda D_\sigma. \qquad (19)$$

Proof: Consider $\sigma \in \Sigma$ and an interval $I := [s_1, s_2)$ such that $\sigma$ has the same mode throughout $I$. By Lemma 1, $J(\sigma_{s_1}(\gamma))$ is $C^2$ in $\gamma \in [0, s_2 - s_1)$, and by (4), $J(\sigma_{s_1}(0))' = D_{\sigma,s_1}$. Fix $a \in (\alpha/\eta, 1)$. Suppose that $D_{\sigma,s_1} < 0$. By Proposition 1 (Equation (9)) there exists $\xi > 0$ such that, for every $\gamma \geq 0$ satisfying $\gamma \leq \min\{-\xi D_{\sigma,s_1}, s_2 - s_1\}$,

$$J(\sigma_{s_1}(\gamma)) - J(\sigma) \leq a\gamma D_{\sigma,s_1}. \qquad (20)$$

Furthermore, $\xi$ does not depend on the mode-schedule $\sigma$ or on the interval $I$. Next, by Corollary 2 there exists a constant $K > 0$ such that, for every $\sigma \in \Sigma$, for every set $S \subset [0, T]$ consisting of the finite union of intervals, and for every point $s \geq \sup\{\tilde{s} \in S\}$,

$$|D_{\sigma(S),s} - D_{\sigma,s}| \leq K\mu(S). \qquad (21)$$

Fix $c > 0$ such that

$$c < \min\Big\{ \frac{2}{aK}(a\eta - \alpha), \; \frac{\eta}{K} \Big\}; \qquad (22)$$

we next prove the assertion of the proposition for this $c$. Fix $\sigma \in \Sigma$ such that $D_\sigma < 0$, and consider a set $S \subset S_{\sigma,\eta}$ consisting of the finite union of disjoint intervals.
By subdividing these intervals if necessary we can ensure that the length of each one of them is less than $-\xi\eta D_\sigma$. Denote these intervals by $I_j := [s_{1,j}, s_{2,j})$, $j = 1, \ldots, m$ (for some $m > 0$), define $\gamma_j := s_{2,j} - s_{1,j}$, and define $\lambda := \sum_{j=1}^m \gamma_j$. Since $s_{1,j} \in S_{\sigma,\eta}$ we have that $D_{\sigma,s_{1,j}} \leq \eta D_\sigma$, and we recall that $\gamma_j \leq -\xi\eta D_\sigma$ for all $j = 1, \ldots, m$.

Next, we define the mode-schedules $\sigma_j$, $j = 1, \ldots, m$, in the following recursive manner. For $j = 1$, $\sigma_1 = \sigma_{s_{1,1}}(\gamma_1)$; and for every $j = 2, \ldots, m$, $\sigma_j := (\sigma_{j-1})_{s_{1,j}}(\gamma_j)$. In words, $\sigma_1$ is obtained from $\sigma$ by flipping the mode at every time $s \in I_1$; and for every $j = 2, \ldots, m$, $\sigma_j$ is obtained from $\sigma_{j-1}$ by flipping the mode at every time $s \in I_j$. Observe that $\sigma_j$ is also obtained from $\sigma$ by flipping the mode at every time $s \in \cup_{i=1}^j I_i$. In particular, $\sigma_m$ is obtained from $\sigma$ by flipping the modes at every time $s \in S$. Since by assumption $\mu(S) = \sum_{j=1}^m \gamma_j = \lambda$, we will use the notation $\sigma(S) := \sigma(\lambda)$.

Suppose that $\lambda \leq -cD_\sigma$; we next establish Equation (19), and this will complete the proof. Consider the difference term $J(\sigma_j) - J(\sigma)$ for $j = 1, \ldots, m$. For $j = 1$, $J(\sigma_1) - J(\sigma) \leq a\gamma_1 D_{\sigma,s_{1,1}}$ (by (20)); and since $s_{1,1} \in S_{\sigma,\eta}$, $D_{\sigma,s_{1,1}} \leq \eta D_\sigma$, and hence

$$J(\sigma_1) - J(\sigma) \leq a\gamma_1 \eta D_\sigma. \qquad (23)$$

Next, consider $j = 2, \ldots, m$. An inequality like (23) does not necessarily hold since $\sigma_j$ is obtained from $\sigma$ by flipping the mode at every $s \in \cup_{i=1}^j I_i$ and $\mu(\cup_{i=1}^j I_i)$ may be larger than $-\xi D_{\sigma,s_{1,j}}$, and therefore an inequality like (20) cannot be applied. A different argument is needed. Consider the term $J(\sigma_j) - J(\sigma)$. Subtracting and adding $J(\sigma_{j-1})$ we obtain,

$$J(\sigma_j) - J(\sigma) = J(\sigma_j) - J(\sigma_{j-1}) + J(\sigma_{j-1}) - J(\sigma). \qquad (24)$$

Now $\sigma_j$ is obtained from $\sigma_{j-1}$ by flipping the mode at every time $s \in I_j$ and hence $\sigma_j = (\sigma_{j-1})_{s_{1,j}}(\gamma_j)$, while $\sigma_{j-1} = (\sigma_{j-1})_{s_{1,j}}(0)$ since in the latter term no mode is being flipped. Therefore,

$$J(\sigma_j) - J(\sigma_{j-1}) = J((\sigma_{j-1})_{s_{1,j}}(\gamma_j)) - J((\sigma_{j-1})_{s_{1,j}}(0)). \qquad (25)$$

We next show that $D_{\sigma_{j-1},s_{1,j}} < 0$ in order to be able to use Equation (20). By (17),

$$|D_{\sigma_{j-1},s_{1,j}} - D_{\sigma,s_{1,j}}| \leq K \sum_{i=1}^{j-1} \gamma_i. \qquad (26)$$

By definition $\sum_{i=1}^{j-1} \gamma_i \leq \sum_{i=1}^m \gamma_i = \lambda$; by assumption $\lambda \leq c|D_\sigma|$; and by (22) $Kc \leq \eta$; consequently, and by (26), $|D_{\sigma_{j-1},s_{1,j}} - D_{\sigma,s_{1,j}}| \leq \eta|D_\sigma|$. But $s_{1,j} \in S_{\sigma,\eta}$ and hence $D_{\sigma,s_{1,j}} \leq \eta D_\sigma$, and this implies that $D_{\sigma_{j-1},s_{1,j}} \leq 0$. An application of (20) to (25) now yields that

$$J(\sigma_j) - J(\sigma_{j-1}) \leq a\gamma_j D_{\sigma_{j-1},s_{1,j}}. \qquad (27)$$

We do not know whether or not $D_{\sigma_{j-1},s_{1,j}} \leq \eta D_\sigma$, but we know that $D_{\sigma,s_{1,j}} \leq \eta D_\sigma$ (since $s_{1,j} \in S_{\sigma,\eta}$). Applying (26) to (27) we obtain that

$$J(\sigma_j) - J(\sigma_{j-1}) \leq a\gamma_j D_{\sigma_{j-1},s_{1,j}} = a\gamma_j D_{\sigma,s_{1,j}} + a\gamma_j \big( D_{\sigma_{j-1},s_{1,j}} - D_{\sigma,s_{1,j}} \big) \leq a\gamma_j D_{\sigma,s_{1,j}} + a\gamma_j K \sum_{i=1}^{j-1} \gamma_i. \qquad (28)$$

But $D_{\sigma,s_{1,j}} \leq \eta D_\sigma$ (since $s_{1,j} \in S_{\sigma,\eta}$), and hence $J(\sigma_j) - J(\sigma_{j-1}) \leq a\gamma_j \eta D_\sigma + aK\gamma_j \sum_{i=1}^{j-1} \gamma_i$. Using this inequality in (24) yields the following one,

$$J(\sigma_j) - J(\sigma) \leq a\gamma_j \eta D_\sigma + aK\gamma_j \sum_{i=1}^{j-1} \gamma_i + J(\sigma_{j-1}) - J(\sigma). \qquad (29)$$

Apply (29) repeatedly and recursively with $j = 1, \ldots, m$ to obtain, after some algebra, the following inequality:

$$J(\sigma_m) - J(\sigma) \leq a\Big(\sum_{i=1}^m \gamma_i\Big) \eta D_\sigma + aK \sum_{1 \leq i < \ell \leq m} \gamma_i \gamma_\ell. \qquad (30)$$

But $\sum_{i=1}^m \gamma_i = \lambda$, $\sum_{1 \leq i < \ell \leq m} \gamma_i \gamma_\ell \leq \frac{1}{2}\big(\sum_{i=1}^m \gamma_i\big)^2 = \frac{1}{2}\lambda^2$, and $\sigma_m := \sigma(\lambda)$, and hence,

$$J(\sigma(\lambda)) - J(\sigma) \leq a\lambda\eta D_\sigma + \frac{1}{2} aK\lambda^2. \qquad (31)$$

By assumption $\lambda \leq -cD_\sigma$, and by (22) $aKc < 2(a\eta - \alpha)$, and this, together with (31), implies (19). The proof is now complete.
General results concerning sufficient descent, analogous to Proposition 2, provide key arguments in proving asymptotic convergence of nonlinear-programming algorithms (see, e.g., [13]). In our case, the optimality function has the peculiar property that it is discontinuous in the Lebesgue measure of the set where a mode is flipped. To see this, recall that $D_{\sigma,s,v(s)} = p(s)^T\big(f(x(s), v(s)^c) - f(x(s), v(s))\big)$ (see Equation (4)), and hence a change of the mode at time $s$ would flip the sign of $D_{\sigma,s,v(s)}$. This can result in situations where $|D_\sigma|$ is "large" while $S_{\sigma,\eta}$ is "small", and for this reason convergence of Algorithm 1 is characterized by Equation (5) with the $\limsup$ rather than with the stronger assertion with $\lim$. This is the subject of the following result.

Corollary 3: Suppose that Algorithm 1 computes a sequence of schedules, $\{\sigma_k\}_{k=1}^{\infty}$. Then Equation (5) is in force, namely $\limsup_{k\to\infty} D_{\sigma_k} = 0$.

Proof: Suppose, for the sake of contradiction, that Equation (5) does not hold. Then the sufficient-descent property proved in Proposition 2 implies that $\lim_{k\to\infty}\lambda(\sigma_k) = 0$, for otherwise Equation (19) would yield $\lim_{k\to\infty} J(\sigma_k) = -\infty$, which is impossible. Next, by the definition of $\lambda(\sigma_k)$ (Step 3 of the algorithm), there exists $k_0$ such that for all $k \ge k_0$, $\lambda(\sigma_k) = \mu(S_{\sigma_k,\eta})$, and hence $\lim_{k\to\infty}\mu(S_{\sigma_k,\eta}) = 0$. In this case, $\sigma_{k+1}$ is obtained from $\sigma_k$ by flipping the modes at every $s \in S_{\sigma_k,\eta}$. By the perturbation theory of differential equations (e.g., Proposition 5.6.7 in [13]), $x$ and $p$ are Lipschitz continuous in their $L^\infty$ norms with respect to the Lebesgue measure of the sets where the modes are flipped, i.e., $\mu(S_{\sigma_k,\eta})$. Therefore, and by (4) and the definition of $S_{\sigma_k,\eta}$, there exist $k_1 \ge k_0$ and $\zeta \in (0,1)$ such that for all $k \ge k_1$, $D_{\sigma_{k+1},\eta} \ge \zeta D_{\sigma_k,\eta}$, implying that $\lim_{k\to\infty} D_{\sigma_k,\eta} = 0$.
However, this is a contradiction to the assumption that (5) does not hold, thus completing the proof.

Alternative optimality functions can be considered as well, like the term $D_\sigma\,\mu(S_{\sigma,\eta})$, for which it is apparent from Equation (19) that $\lim_{k\to\infty} D_{\sigma_k}\,\mu(S_{\sigma_k,\eta}) = 0$. The choice of the "most appropriate" optimality function is an interesting theoretical question that will be addressed elsewhere; here we consider the simplest and (in our opinion) most intuitive optimality function, $D_\sigma$, despite its technical peculiarities.

Finally, a word must be said about the general case where the set $V$ consists of more than two points. The algorithm and much of its analysis remain unchanged, except that for a given $\sigma \in \Sigma$, at a time $s$, the mode associated with $v(s)$ should be switched to the mode associated with the point $w \in V$ that minimizes the term $D_{\sigma,s,w}$.

IV. NUMERICAL EXAMPLE

We tested the algorithm on the double-tank system shown in Figure 1. The input to the system, $v$, is the inflow rate to the upper tank, controlled by the valve and having two possible values, $v_1 = 1$ and $v_2 = 2$. $x_1$ and $x_2$ are the fluid levels in the upper tank and lower tank, respectively, as shown in the figure. According to Torricelli's law, the state equation is

$$\begin{pmatrix}\dot{x}_1 \\ \dot{x}_2\end{pmatrix} = \begin{pmatrix} v - \sqrt{x_1} \\ \sqrt{x_1} - \sqrt{x_2}\end{pmatrix}, \tag{32}$$

with the (chosen) initial condition $x_1(0) = x_2(0) = 2.0$. Notice that both $x_1$ and $x_2$ must satisfy the inequalities $1 \le x_i \le 4$; if $v = 1$ indefinitely then $\lim_{t\to\infty} x_i(t) = 1$, while if $v = 2$ indefinitely then $\lim_{t\to\infty} x_i(t) = 4$, $i = 1, 2$.

Fig. 1. Two-tank system.

The objective of the optimization problem is to have the fluid level in the lower tank track the given value of 3.0, and hence we chose the performance criterion to be

$$J = \int_0^T \big(x_2 - 3\big)^2\, dt, \tag{33}$$

for the final time $T = 20$.
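The setup of (32)-(33) is easy to reproduce numerically. The following is a minimal sketch in our own code (not the authors' implementation): it integrates the two-tank dynamics by forward Euler and accumulates the tracking cost by the rectangle rule, using the step size $\Delta t = 0.01$ and the bang-bang initial schedule reported in the text.

```python
import numpy as np

def f(x, v):
    """Two-tank dynamics (32): x[0] is the upper level, x[1] the lower."""
    return np.array([v - np.sqrt(x[0]), np.sqrt(x[0]) - np.sqrt(x[1])])

def simulate(v_of_t, x0=(2.0, 2.0), T=20.0, dt=0.01):
    """Forward-Euler integration of (32) and rectangle-rule cost (33)."""
    n = int(round(T / dt))
    x = np.array(x0, dtype=float)
    J = 0.0
    traj = [x.copy()]
    for i in range(n):
        t = i * dt
        x = x + dt * f(x, v_of_t(t))      # forward-Euler step
        J += dt * (x[1] - 3.0) ** 2       # accumulate (x2 - 3)^2 dt
        traj.append(x.copy())
    return J, np.array(traj)

# Initial mode-schedule sigma_1 from the text: v = 1 on [0,10], v = 2 afterwards.
J1, traj = simulate(lambda t: 1.0 if t <= 10.0 else 2.0)
```

Running this recovers the qualitative behavior described below: under $v = 1$ both levels drain toward 1, so the cost of the initial schedule is large, and switching to $v = 2$ at $t = 10$ drives $x_2$ back up toward 3.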
The various integrations were computed by the forward-Euler method with $\Delta t = 0.01$. For the algorithm we chose the parameter values $\alpha = \beta = 0.5$ and $\eta = 0.6$, and we ran it from the initial mode-schedule associated with the control input $v(t) = 1$ for all $t \in [0, 10]$ and $v(t) = 2$ for all $t \in (10, 20]$. Results of a typical run, consisting of 100 iterations of the algorithm, are shown in Figures 2-5.

Figure 2 shows the control computed after 100 iterations, namely the input control $v$ associated with $\sigma_{100}$. The graph is not surprising, since we expect the optimal control initially to consist of $v = 2$ so that $x_2$ can rise to a value close to 3, and then to enter a sliding mode in order for $x_2$ to maintain its proximity to 3. This is evident from Figure 2, where the sliding mode has begun to be constructed. Figure 3 shows the resulting state trajectories $x_1(t)$ and $x_2(t)$, $t \in [0, T]$, associated with the last-computed schedule $\sigma_{100}$. The jagged curve is of $x_1$ while the smoother curve is of $x_2$. It is evident that $x_2$ climbs towards 3 initially and tends to stay there thereafter.

Figure 4 shows the graph of the cost criterion $J(\sigma_k)$ as a function of the iteration count $k = 1, \ldots, 100$. The initial schedule, $\sigma_1$, is far away from the minimum and its associated cost is $J(\sigma_1) = 70.90$, while the cost of the last-computed schedule is $J(\sigma_{100}) = 4.87$. Note that $J(\sigma_k)$ goes down to under 8 after 3 iterations. Figure 5 shows the optimality function $D_{\sigma_k}$ as a function of the iteration count $k$. Initially $D_{\sigma_1} = -14.92$, while at the last-computed schedule $D_{\sigma_{100}} = -0.23$, and it is seen that $D_{\sigma_k}$ makes significant climbs towards 0 in few iterations.

We also ran the algorithm for 200 iterations from the same initial schedule $\sigma_1$, in order to verify that $J(\sigma_k)$ and $D_{\sigma_k}$ stabilize. Indeed they do: $J$ declined from $J(\sigma_{100}) = 4.87$ to $J(\sigma_{200}) = 4.78$, while the optimality function continued to rise towards 0, from $D_{\sigma_{100}} = -0.23$ to $D_{\sigma_{200}} = -0.062$.

Fig. 2. Control (schedule) obtained after 100 iterations.

Fig. 3. $x_1$ and $x_2$ vs. $t$.

V. CONCLUSIONS

This paper proposes a new algorithm for the optimal mode-scheduling problem, where it is desirable to minimize an integral-cost criterion defined on the system's state trajectory as a function of the modes' schedule. The algorithm is based on the principle of gradient descent with Armijo step sizes, comprised of the Lebesgue measures of sets where the modes are being changed. Asymptotic convergence is proved in the sense of minimizing sequences, and simulation results support the theoretical developments. Future research will refine the proposed algorithmic framework and apply it to large-scale problems.

VI. APPENDIX

The purpose of this appendix is to provide proofs of Proposition 1 and Lemmas 1 and 2.

Proof of Proposition 1. (1). The main argument is based on the following form of the second-order Taylor series expansion: for every $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^n$,

$$f(x+y) - f(x) = \langle\nabla f(x), y\rangle + \int_0^1 (1-\xi)\,\langle H(x + \xi y)\,y, y\rangle\, d\xi, \tag{34}$$

where $\langle\cdot,\cdot\rangle$ denotes the inner product in $\mathbb{R}^n$.

Fig. 4. Cost criterion vs. iteration count.

Fig. 5. Optimality function vs. iteration count.

Apply this with $y = -\lambda h(x)$ to obtain

$$f(x - \lambda h(x)) - f(x) = -\lambda\langle\nabla f(x), h(x)\rangle + \lambda^2 \int_0^1 (1-\xi)\,\langle H(x - \xi\lambda h(x))\,h(x), h(x)\rangle\, d\xi.$$
(35)

Add $\alpha\lambda\|\nabla f(x)\|$ to both sides of this equation, and use the fact that $\|H(\cdot)\| \le L$, to obtain (after some algebra) that

$$f(x - \lambda h(x)) - f(x) + \alpha\lambda\|\nabla f(x)\| \le -\lambda(1-\alpha)\|\nabla f(x)\| + \tfrac{L}{2}\lambda^2. \tag{36}$$

Now if $0 \le \lambda \le \frac{2}{L}(1-\alpha)\|\nabla f(x)\|$ then the right-hand side of (36) is non-positive, hence Equation (9) is satisfied.

(2). Follows directly from Part (1), Equation (7), and the definition of $\lambda(x)$ in (8).

The proofs of Lemma 1 and Lemma 2 follow as corollaries from established results on sensitivity analysis of solutions to differential equations, presented in Section 5.6 of [13]. In fact, the results of interest here involve mode-insertions via needle variations, which is a special case of the setting in [13], where general variations in the control are considered. Furthermore, the perturbations here are parameterized by a one-dimensional variable, and hence the results are in terms of derivatives in the usual sense, while those in [13] are in terms of Gâteaux or Fréchet derivatives.

Proof of Lemma 1. By Proposition 5.6.5 in [13] and the Bellman-Gronwall Lemma, the terms $\|x(t)\|_{L^\infty}$ are uniformly bounded over the space of controls $v$ associated with every $\sigma \in \Sigma$. The costate equation (3) yields a similar result for $\|p(t)\|_{L^\infty}$. Next, recall that $v(\cdot)$ has a constant value throughout the interval $[s_1, s_2]$, and hence the differentiability assumptions of Theorem 5.6.10 in [13] are valid. This theorem implies that $J(\sigma_{s_1}(\gamma))''$ exists and is expressed in terms of the Hamiltonian and its first two derivatives, hence it is uniformly bounded.

Proof of Lemma 2. Since $v(\cdot)$ has a constant value throughout the interval $[s_1, s_2]$, the assumptions made in the statement of Lemma 5.6.7 in [13] are in force. This implies a uniform Lipschitz continuity of $x$ and $p$ with respect to variations in $\gamma$.
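The step-size rule analyzed in Proposition 1 is the classical Armijo backtracking scheme. The following is a generic finite-dimensional sketch of that scheme (our code, with the common squared-norm form of the sufficient-descent test, not the paper's schedule-space variant): the step $\lambda = \beta^k$ is shrunk until the trial point achieves the prescribed fraction $\alpha$ of the first-order decrease.

```python
import numpy as np

def armijo_step(f, grad_f, x, alpha=0.5, beta=0.5, max_backtracks=50):
    """Return the largest step lambda = beta^k passing the Armijo test
    f(x - lambda*g) - f(x) <= -alpha * lambda * ||g||^2, with g = grad f(x)."""
    g = grad_f(x)
    lam = 1.0
    for _ in range(max_backtracks):
        if f(x - lam * g) - f(x) <= -alpha * lam * np.dot(g, g):
            return lam
        lam *= beta                       # shrink the step and retest
    return lam

# Example: one descent step on the quadratic f(x) = ||x||^2 / 2, grad f(x) = x.
f = lambda x: 0.5 * np.dot(x, x)
grad = lambda x: x
x = np.array([3.0, -4.0])
lam = armijo_step(f, grad, x)
x_new = x - lam * grad(x)
```

As in part (1) of the proposition, any step below a threshold proportional to $\|\nabla f(x)\|$ (with the constant depending on $\alpha$ and the Hessian bound $L$) passes the test, which is what guarantees that the backtracking loop terminates with a sufficiently large step.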
In the setting of Lemma 2, the needle variation is made at the same point $s \ge s_2$ for both mode-schedules $\sigma$ and $\sigma_{s_1}(\gamma)$, and hence, and by Equation (4), Equation (16) follows.

REFERENCES

[1] S. Almér, S. Mariéthoz, and M. Morari. Optimal Sampled Data Control of PWM Systems Using Piecewise Affine Approximations. Proc. 49th CDC, Atlanta, Georgia, December 15-17, 2010.
[2] S.A. Attia, M. Alamir, and C. Canudas de Wit. Sub-Optimal Control of Switched Nonlinear Systems Under Location and Switching Constraints. Proc. 16th IFAC World Congress, Prague, Czech Republic, July 3-8, 2005.
[3] H. Axelsson, Y. Wardi, M. Egerstedt, and E. Verriest. A Gradient Descent Approach to Optimal Mode Scheduling in Hybrid Dynamical Systems. Journal of Optimization Theory and Applications, Vol. 136, pp. 167-186, 2008.
[4] M.S. Branicky, V.S. Borkar, and S.K. Mitter. A Unified Framework for Hybrid Control: Model and Optimal Control Theory. IEEE Transactions on Automatic Control, Vol. 43, pp. 31-45, 1998.
[5] R. Brockett. Stabilization of Motor Networks. IEEE Conference on Decision and Control, pp. 1484-1488, 1995.
[6] T. Caldwell and T. Murphey. An Adjoint Method for Second-Order Switching Time Optimization. Proc. 49th CDC, Atlanta, Georgia, December 15-17, 2010.
[7] M. Egerstedt. Behavior Based Robotics Using Hybrid Automata. Lecture Notes in Computer Science: Hybrid Systems III: Computation and Control, Springer-Verlag, pp. 103-116, Pittsburgh, PA, March 2000.
[8] M. Egerstedt, Y. Wardi, and H. Axelsson. Transition-Time Optimization for Switched Systems. IEEE Transactions on Automatic Control, Vol. AC-51, No. 1, pp. 110-115, 2006.
[9] D. Hristu-Varsakelis. Feedback Control Systems as Users of a Shared Network: Communication Sequences that Guarantee Stability. IEEE Conference on Decision and Control, pp. 3631-3631, Orlando, FL, 2001.
[10] B. Lincoln and A. Rantzer. Optimizing Linear Systems Switching. IEEE Conference on Decision and Control, pp. 2063-2068, Orlando, FL, 2001.
[11] B. Piccoli. Hybrid Systems and Optimal Control. Proc. IEEE Conference on Decision and Control, Tampa, Florida, pp. 13-18, 1998.
[12] E. Polak and Y. Wardi. A Study of Minimizing Sequences. SIAM Journal on Control and Optimization, Vol. 22, No. 4, pp. 599-609, 1984.
[13] E. Polak. Optimization Algorithms and Consistent Approximations. Springer-Verlag, New York, New York, 1997.
[14] H. Rehbinder and M. Sanfridson. Scheduling of a Limited Communication Channel for Optimal Control. IEEE Conference on Decision and Control, Sydney, Australia, Dec. 2000.
[15] M.S. Shaikh and P. Caines. On Trajectory Optimization for Hybrid Systems: Theory and Algorithms for Fixed Schedules. IEEE Conference on Decision and Control, Las Vegas, NV, Dec. 2002.
[16] M.S. Shaikh and P.E. Caines. Optimality Zone Algorithms for Hybrid Systems Computation and Control: From Exponential to Linear Complexity. Proc. IEEE Conference on Decision and Control/European Control Conference, pp. 1403-1408, Seville, Spain, December 2005.
[17] M.S. Shaikh and P.E. Caines. On the Hybrid Optimal Control Problem: Theory and Algorithms. IEEE Trans. Automatic Control, Vol. 52, pp. 1587-1603, 2007.
[18] H.J. Sussmann. A Maximum Principle for Hybrid Optimal Control Problems. Proceedings of the 38th IEEE Conference on Decision and Control, pp. 425-430, Phoenix, AZ, Dec. 1999.
[19] L.Y. Wang, A. Beydoun, J. Cook, J. Sun, and I. Kolmanovsky. Optimal Hybrid Control with Applications to Automotive Powertrain Systems. In Control Using Logic-Based Switching, Vol. 222 of LNCIS, pp. 190-200, Springer-Verlag, 1997.
[20] X. Xu and P. Antsaklis. Optimal Control of Switched Autonomous Systems. IEEE Conference on Decision and Control, Las Vegas, NV, Dec. 2002.