CM Modeling of Trajectory

Reza Rezaie and X. Rong Li
Department of Electrical Engineering, University of New Orleans, New Orleans, LA 70148
rrezaie@uno.edu, xli@uno.edu

Abstract

Information about the waypoints of a moving object, e.g., an airliner in an air traffic control (ATC) problem, should be considered in trajectory modeling and prediction. Due to the ATC regulations, trajectory design criteria, and restricted motion capability of airliners, there are long-range dependencies in trajectories of airliners. Waypoint information can be used for modeling such dependencies in trajectories. This paper proposes a conditionally Markov (CM) sequence for modeling trajectories passing by waypoints. A dynamic model governing the proposed sequence is obtained. Filtering and trajectory prediction formulations are presented. The use of the proposed sequence for modeling trajectories with waypoints is justified.

Keywords: Trajectory modeling and prediction, conditionally Markov (CM) sequence, dynamic model, Gaussian sequence, air traffic control (ATC).

1 Introduction

Markov processes have been widely used for modeling random phenomena. A Markov process has two main components: an initial distribution and an evolution law. However, for some problems Markov processes are not adequate. Then, sometimes a higher-order (e.g., second-order) Markov process is used. But such a model does not fit some phenomena well, for example, a time-varying phenomenon with some information available about its future. An example is the problem of predicting the trajectory of an airliner in the presence of waypoint information. The Markov process does not fit such a problem because the future distribution of a Markov process is completely determined by its initial distribution and evolution law.

Trajectory modeling and prediction in the presence of an intent or a destination has been studied in the literature. [1]-[6] presented intent-based trajectory prediction approaches for air traffic control (ATC). Some trajectory prediction approaches were presented in [1]-[3] based on hybrid estimation aided by intent information. In [4], the interacting multiple model (IMM) approach was used for trajectory prediction, where a higher weight was assigned to the model with the closest heading towards the waypoint. [5] presented an approach for trajectory prediction using an inferred intent based on a database. In [6], the use of waypoint information for trajectory prediction in ATC was discussed. Ship trajectories were modeled by a Gauss-Markov model in [7], where predictive information was incorporated. After quantizing the state space, [8]-[10] used finite-state reciprocal sequences for intent inference and a generalization of reciprocal sequences for trajectory modeling. A problem with a quantized state space is the complexity of the corresponding estimation algorithms, so the complexity of the algorithms used in [8]-[10] was also addressed. The Gaussian counterpart of the generalized reciprocal sequence defined in [10] was studied in [11]. [12]-[13] used bridging distributions for intent inference, for example, in selecting an icon on an in-vehicle interactive display. A CM sequence was used in [14] for trajectory modeling with destination information. A systematic framework for modeling trajectories with waypoints is desired. Inspired by [15], a class of CM sequences, called $\mathrm{CM}_L$, was defined, modeled, and characterized in [16].
A second-order nearest-neighbor dynamic model driven by locally correlated dynamic noise was presented in [17] for the nonsingular Gaussian (NG) reciprocal sequence. As special CM sequences, NG reciprocal sequences were studied from the CM viewpoint in [18]-[20], where some dynamic models with white dynamic noise were also presented for the NG reciprocal sequence.

Consider stochastic sequences defined over $[0, N] = \{0, 1, \ldots, N\}$. For convenience, let the index be time. A sequence is Markov if and only if (iff), conditioned on the state at any time $k$, the subsequences before and after $k$ are independent. A sequence is reciprocal iff, conditioned on the states at any two times $k_1$ and $k_2$, the subsequences inside and outside the interval $[k_1, k_2]$ are independent. In other words, "inside" and "outside" are independent given the boundaries. A sequence is $\mathrm{CM}_L$ iff, conditioned on the state at time $N$, the sequence is Markov over $[0, N-1]$.

The main components for modeling trajectories without any future information (no information about future waypoints or destination) are an origin and an evolution law. Because a Markov process is determined by its initial distribution and evolution law, Markov processes can model such trajectories. The main components for modeling trajectories with destination information (called destination-directed trajectories) are an origin, an evolution law, and a destination. The main elements of the $\mathrm{CM}_L$ sequence are a joint endpoint distribution (i.e., an initial distribution and, conditioned on it, a final distribution) and a Markov-like evolution law. The $\mathrm{CM}_L$ sequence can model the main components of destination-directed trajectories [14], but not trajectories passing by waypoints.

Due to the ATC regulations, trajectory design criteria, restricted motion capability of airliners, and the ATC trajectory repeatability and predictability requirements [21], there are long-range dependencies in trajectories of airliners. Waypoint information can be used for modeling such dependencies in trajectories. Assume an airliner broadcasts its next waypoint by the time it passes its current waypoint. Trajectories start from an origin, pass the waypoints, and end at a destination (which can be seen as the last waypoint). This paper proposes a CM sequence for modeling such trajectories. Properties of the proposed CM sequence for modeling trajectories with waypoints are discussed. The corresponding dynamic model, filter, and trajectory predictor are obtained.

The paper is organized as follows. In Section 2, modeling of destination-directed trajectories by the $\mathrm{CM}_L$ sequence is discussed. Then, a CM sequence is presented for modeling trajectories passing by waypoints, and the corresponding dynamic model is obtained. In Section 3, the filter and the trajectory predictor are presented. In Section 4, the presented model is simulated for trajectory prediction with waypoints. Section 5 concludes the paper.

2 Trajectory Modeling Using CM Sequences

The following notation is used for time intervals and stochastic sequences:

$[i, j] \triangleq \{i, i+1, \ldots, j-1, j\}$
$[x_k]_i^j \triangleq \{x_k, k \in [i, j]\}$
$[x_k] \triangleq [x_k]_0^N$

where $k$ in $[x_k]_i^j$ is a dummy variable. Also, ZMNG and NG stand for "zero-mean nonsingular Gaussian" and "nonsingular Gaussian", respectively. We consider sequences defined over $[0, N]$. $F(\cdot|\cdot)$ denotes a conditional cumulative distribution function (CDF).
2.1 $\mathrm{CM}_L$ Sequences for Destination-Directed Trajectory Modeling

We review the definition and a dynamic model of the $\mathrm{CM}_L$ sequence for destination-directed trajectory modeling [14], [16], [18]-[19].

Definition 2.1. $[x_k]$ is Markov if $\forall j, k \in [0, N]$, $j < k$,

$F(x_k \mid [x_i]_0^j) = F(x_k \mid x_j)$   (1)

Sample paths of some Markov sequences can be used for modeling trajectories without waypoint or destination information. For example, a nearly constant velocity, acceleration, or turn motion model (with white noise) is a Markov model.

Lemma 2.2. A ZMNG $[x_k]$ with covariance function $C_{l_1,l_2}$ is Markov iff its evolution is governed by

$x_k = M_{k,k-1} x_{k-1} + e^M_k, \quad k \in [1, N], \qquad x_0 = e^M_0$   (2)

where $[e^M_k]$ is a zero-mean white NG sequence with covariances $M_k$.

The Markov sequence is not powerful enough for modeling an origin, an evolution law, and a destination. Since the future distribution of a Markov sequence is determined by its initial distribution and evolution law, it is not powerful enough to model future information. A more general class of stochastic sequences ($\mathrm{CM}_L$ sequences) was used in [14] for modeling trajectories with destination information (destination-directed trajectories). It can be justified as follows. Let destination-directed trajectories be modeled as the sample paths of a sequence $[x_k]$. Since the destination of the trajectories (i.e., the density of $x_N$) is known, the evolution law can be modeled as a conditional density given the destination $x_N$. This conditional density is chosen to be a Markov density, i.e., $[x_k]_0^{N-1}$ is Markov conditioned on $x_N$. This evolution law, which is simple and desirable for modeling destination-directed trajectories, corresponds to the $\mathrm{CM}_L$ sequence defined as follows [16].

Definition 2.3. $[x_k]$ is $\mathrm{CM}_L$ if $\forall j, k \in [0, N-1]$, $j < k$,

$F(x_k \mid [x_i]_0^j, x_N) = F(x_k \mid x_j, x_N)$   (3)

In other words, $[x_k]$ is $\mathrm{CM}_L$ iff, conditioned on $x_j$ and $x_N$ ($\forall j \in [1, N-2]$), the subsequences $[x_k]_{j+1}^{N-1}$ and $[x_k]_0^{j-1}$ are independent. A dynamic model for the evolution of the $\mathrm{CM}_L$ sequence, called a $\mathrm{CM}_L$ model, is as follows [16].

Theorem 2.4. A ZMNG $[x_k]$ with covariance function $C_{l_1,l_2}$ is $\mathrm{CM}_L$ iff its evolution is governed by

$x_k = G_{k,k-1} x_{k-1} + G_{k,N} x_N + e_k, \quad k \in [1, N-1]$   (4)

where $[e_k]$ is a zero-mean white NG sequence with covariances $G_k$, and either of the boundary conditions

$x_0 = e_0, \quad x_N = G_{N,0} x_0 + e_N$   (5)
$x_N = e_N, \quad x_0 = G_{0,N} x_N + e_0$   (6)

A non-zero-mean Gaussian sequence is $\mathrm{CM}_L$ (or Markov) iff its zero-mean part follows the dynamic model of Theorem 2.4 (or Lemma 2.2). The same is true for the sequence defined later. Therefore, for simplicity and brevity, we consider zero-mean sequences, but in the simulations non-zero-mean sequences are used.

An approach to the $\mathrm{CM}_L$ model parameter design for modeling destination-directed trajectories is as follows [14]. Such trajectories can be modeled by combining (superimposing) two key assumptions: (i) the moving object follows a Markov model (2) (e.g., a nearly constant velocity model) without considering the destination information, and (ii) the destination density is known (which can differ from the destination density of the Markov model in (i)). Let $[s_k]$ be a Markov sequence governed by (2) (e.g., a nearly constant velocity model).
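A minimal sketch of such a Markov building block, a one-axis nearly constant velocity sequence of the form (2), is given below. The sampling period and noise intensity shown here (T = 15 s, q = 0.01) are the values that reappear in Section 4; the function names and horizon are illustrative, not part of the paper.

```python
import numpy as np

def ncv_markov_params(T=15.0, q=0.01):
    """One-axis nearly-constant-velocity parameters: M_{k,k-1} = F, Cov(e^M_k) = Q."""
    F = np.array([[1.0, T],
                  [0.0, 1.0]])
    Q = q * np.array([[T**3 / 3, T**2 / 2],
                      [T**2 / 2, T]])
    return F, Q

def simulate_markov(N, x0, T=15.0, q=0.01, rng=None):
    """Simulate x_k = F x_{k-1} + e_k, k = 1..N, per the Markov model (2), one axis."""
    rng = np.random.default_rng(rng)
    F, Q = ncv_markov_params(T, q)
    L = np.linalg.cholesky(Q)
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(N):
        xs.append(F @ xs[-1] + L @ rng.standard_normal(2))
    return np.array(xs)          # shape (N+1, 2): [position, velocity]

# Example: 50 steps starting from position 0 m and velocity 80 m/s.
traj = simulate_markov(N=50, x0=[0.0, 80.0], rng=1)
print(traj[:3])
```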
Since every Markov sequence is $\mathrm{CM}_L$, $[s_k]$ can also obey a $\mathrm{CM}_L$ model as

$s_k = G_{k,k-1} s_{k-1} + G_{k,N} s_N + e^s_k, \quad k \in [1, N-1]$   (7)
$s_N = e^s_N, \quad s_0 = G^s_{0,N} s_N + e^s_0$   (8)

where $[e^s_k]$ is a zero-mean white NG sequence with covariances $G_k$, $k \in [1, N-1]$, $G^s_0$, and $G^s_N$. The parameters of (7) can be obtained as follows. By (2), we have $p(s_k \mid s_{k-1}) = \mathcal{N}(s_k; M_{k,k-1} s_{k-1}, M_k)$. Since $[s_k]$ is Markov, we have ($k \in [1, N-1]$)

$p(s_k \mid s_{k-1}, s_N) = \dfrac{p(s_k \mid s_{k-1})\, p(s_N \mid s_k)}{p(s_N \mid s_{k-1})} = \mathcal{N}(s_k; G_{k,k-1} s_{k-1} + G_{k,N} s_N, G_k)$   (9)

where $G_{k,k-1}$, $G_{k,N}$, and $G_k$ are obtained as

$G_{k,k-1} = M_{k,k-1} - G_{k,N} M_{N|k-1}$   (10)
$G_{k,N} = G_k M_{N|k}' C_{N|k}^{-1}$   (11)
$G_k = (M_k^{-1} + M_{N|k}' C_{N|k}^{-1} M_{N|k})^{-1}$   (12)

$M_{N|k} = M_{N,N-1} \cdots M_{k+1,k}, \quad k \in [1, N-1], \qquad M_{N|N} = I$
$C_{N|k} = \sum_{n=k}^{N-1} M_{N|n+1} M_{n+1} M_{N|n+1}', \quad k \in [1, N-1]$
$p(s_N \mid s_i) = \mathcal{N}(s_N; M_{N|i} s_i, C_{N|i}), \quad i \in [0, N-1]$

and $M_{k,k-1}$, $M_k$, $k \in [1, N]$, are parameters of (2).

Now, we construct a sequence $[x_k]$ governed by

$x_k = G_{k,k-1} x_{k-1} + G_{k,N} x_N + e_k, \quad k \in [1, N-1]$   (13)
$x_N = e_N, \quad x_0 = G_{0,N} x_N + e_0$   (14)

where $[e_k]$ is a zero-mean white NG sequence with covariances $G_k$, $k \in [1, N-1]$, $G_0$, and $G_N$. Note that (13) and (7) have the same parameters ($G_{k,k-1}$, $G_{k,N}$, $G_k$, $k \in [1, N-1]$), but the parameters of (14) ($G_{0,N}$, $G_0$, $G_N$) and the parameters of (8) ($G^s_{0,N}$, $G^s_0$, $G^s_N$) are different. The parameters of (14) can be chosen arbitrarily (i.e., $G_{0,N}$ can be any matrix with suitable dimensions, and $G_0$ and $G_N$ any positive definite matrices with suitable dimensions). Thus, $[x_k]$ can have any joint density of $x_0$ and $x_N$. So, $[s_k]$ and $[x_k]$ have the same $\mathrm{CM}_L$ model ((7) and (13)), in other words the same transition density (9), but $[x_k]$ can have any joint endpoint density. It means any origin and destination of $[x_k]$ can be so modeled. Therefore, combining assumptions (i) and (ii) above naturally leads to the $\mathrm{CM}_L$ sequence $[x_k]$, whose $\mathrm{CM}_L$ model is the same as that of $[s_k]$ while the former can model any origin and destination.

Reciprocal sequences are special $\mathrm{CM}_L$ sequences.

Definition 2.5. $[x_k]$ is reciprocal if $\forall k_1, k, k_2 \in [0, N]$, $k_1 < k < k_2$,

$F(x_k \mid [x_i]_0^{k_1}, [x_i]_{k_2}^N) = F(x_k \mid x_{k_1}, x_{k_2})$   (15)

The $\mathrm{CM}_L$ model (13) with (10)-(12) is called a $\mathrm{CM}_L$ model induced by a Markov model. By Theorem 2.6 below, such a $\mathrm{CM}_L$ model governs a reciprocal sequence (so it is called a reciprocal $\mathrm{CM}_L$ model [18]). Also, Theorem 2.6 shows that every reciprocal $\mathrm{CM}_L$ model can be induced by a Markov model following the above approach [19].

Theorem 2.6. A ZMNG $[x_k]$ is reciprocal iff it obeys (4) and (6), where ($G_{k,k-1}$, $G_{k,N}$, $G_k$), $k \in [1, N-1]$, are given by (10)-(12), $M_{k,k-1}$, $k \in [1, N]$, are square matrices, and $M_k$, $k \in [1, N]$, are positive definite matrices with the dimension of $x_k$.

A non-zero-mean Gaussian $\mathrm{CM}_L$ sequence for modeling destination-directed trajectories is as follows. Let $\mu_0$ ($\mu_N$) and $C_0$ ($C_N$) be the mean and covariance of the origin (destination) distribution. Also, let $C_{0,N}$ be the cross-covariance of the states at the origin and the destination. So, $x_N \sim \mathcal{N}(\mu_N, C_N)$. Then, $x_0 = \mu_0 + G_{0,N}(x_N - \mu_N) + e_0$, where $G_{0,N} = C_{0,N} C_N^{-1}$ and $\mathrm{Cov}(e_0) = C_0 - C_{0,N} C_N^{-1} C_{0,N}'$. In addition, the state evolution for $k \in [1, N-1]$ is governed by (4).
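To make the construction concrete, the following sketch computes the induced $\mathrm{CM}_L$ parameters (10)-(12) for a time-invariant Markov model (2) with transition matrix F and noise covariance Q. It is only an illustration of the formulas (the function name and the example values are not from the paper).

```python
import numpy as np

def induced_cml_params(F, Q, N):
    """Parameters (G_{k,k-1}, G_{k,N}, G_k), k = 1..N-1, of the CM_L model (13)
    induced by the time-invariant Markov model x_k = F x_{k-1} + e_k, Cov(e_k) = Q,
    following (10)-(12)."""
    d = F.shape[0]
    M = {N: np.eye(d)}                 # M_{N|k} = F^(N-k)
    C = {N: np.zeros((d, d))}          # C_{N|k}
    for k in range(N - 1, -1, -1):
        M[k] = M[k + 1] @ F
        C[k] = C[k + 1] + M[k + 1] @ Q @ M[k + 1].T
    Qinv = np.linalg.inv(Q)
    params = {}
    for k in range(1, N):
        Cinv = np.linalg.inv(C[k])
        G_k = np.linalg.inv(Qinv + M[k].T @ Cinv @ M[k])    # (12)
        G_kN = G_k @ M[k].T @ Cinv                          # (11)
        G_kkm1 = F - G_kN @ M[k - 1]                        # (10)
        params[k] = (G_kkm1, G_kN, G_k)
    return params

# Illustrative use with the nearly constant velocity matrices from the previous sketch.
T, q, N = 15.0, 0.01, 50
F = np.array([[1.0, T], [0.0, 1.0]])
Q = q * np.array([[T**3 / 3, T**2 / 2], [T**2 / 2, T]])
params_k1 = induced_cml_params(F, Q, N)[1]
print([g.shape for g in params_k1])
```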
2.2 A CM Sequence for Trajectory Modeling with Waypoint Information

Consider trajectories of airliners in ATC. An airliner passes several waypoints before reaching the destination. The waypoint information (the location of the waypoint and the time at which the airliner should pass it) is broadcast ahead of time by the airliner. Let $N_n$ denote the time for the $n$th waypoint (the time at which the airliner should pass the waypoint). Assume the destination is not known. By time $N_n$, the airliner broadcasts its next waypoint (the $(n+1)$th waypoint) information. The main elements for trajectory modeling in this problem are consecutive waypoints and the motion between them. A simple model capable of describing these main elements is desirable.

The states $x_j$ ($j \in [N_n, N_{n+1}-1]$) and $x_{N_{n+1}}$ together can provide reasonable information about the past (the time before $j$) and the intent of an airliner in order to model the trajectory between $j$ and $N_{n+1}$. Therefore, given $x_j$ and $x_{N_{n+1}}$, it is assumed that the trajectories over $[j, N_{n+1}]$ and before $j$ are independent. On the other hand, the waypoint sequence is a rough (grand-scale) description of the trajectory. It is assumed that, given the state at the $n$th waypoint (i.e., $x_{N_n}$), the states at later waypoints (i.e., $x_{N_q}$, $q > n$) are independent of the states at earlier waypoints (i.e., $x_{N_q}$, $q < n$). In addition, airliners usually follow their flight plan, i.e., satisfy the waypoint requirements. It means the state at a waypoint ($x_{N_n}$) can represent the state of the trajectory before (and especially close to) the waypoint. Thus, it is assumed that, given $x_{N_n}$, the states $x_{N_q}$, $q > n$, are independent of all states before $x_{N_n}$. The above assumptions seem reasonable for trajectories passing waypoints. In the following, a CM sequence for trajectory modeling in the above problem is defined.

Definition 2.7. $[x_k]$ is a stochastic sequence, where
(i) $\forall n \in [1, m]$ and $\forall j, k$, $0 = N_0 < N_1 < \cdots < N_{n-1} \le j < k < N_n < N_{n+1} < \cdots < N_m = N$,

$F(x_k \mid [x_i]_0^j, x_{N_n}) = F(x_k \mid x_j, x_{N_n})$   (16)

(ii) $\forall n \in [1, m]$ and $\forall h < n$,

$F(x_{N_n} \mid [x_i]_0^{N_h}) = F(x_{N_n} \mid x_{N_h})$   (17)

By Definition 2.7, $[x_k]$ is $\mathrm{CM}_L$ over $[N_{n-1}, N_n]$. Conditioned on $x_{N_n}$ and $x_j$, $x_k$, $k \in [j+1, N_n-1]$, is independent of $[x_k]_0^{j-1}$. Also, given $x_{N_h}$, $x_{N_n}$ is independent of $[x_k]_0^{N_h-1}$.

A stochastic sequence $[x_k]$ can be generated in many different ways. Let the density function of $[x_k]$ exist and be denoted by $p([x_k])$. Then, sample paths of $[x_k]$ can be generated in time order (i.e., $x_0, x_1, \ldots, x_N$) according to the following decomposition

$p([x_k]) = p(x_N \mid [x_k]_0^{N-1})\, p(x_{N-1} \mid [x_k]_0^{N-2}) \cdots p(x_1 \mid x_0)\, p(x_0)$   (18)

Based on (18), first $x_0$ is generated from $p(x_0)$. Then, given $x_0$, $x_1$ is generated from $p(x_1 \mid x_0)$, and so on. Based on its properties, a simple way for sample-path generation of the sequence $[x_k]$ of Definition 2.7 is as follows. First, $x_0$ is generated from $p(x_0)$ and $x_{N_1}$ is generated from $p(x_{N_1} \mid x_0)$. Then, given $x_0$ and $x_{N_1}$, $[x_k]_1^{N_1-1}$ are generated in time order, i.e., $x_1 \sim p(x_1 \mid x_0, x_{N_1})$, then $x_2 \sim p(x_2 \mid x_1, x_{N_1})$, and so on. Then, given $[x_i]_0^{N_1}$, $x_{N_2}$ is generated from $p(x_{N_2} \mid x_{N_1})$. Generation of $[x_k]_{N_1+1}^{N_2-1}$ is in time order, similar to that of $[x_k]_1^{N_1-1}$. This approach is used until the end of the sequence. The dynamic model of Theorem 2.9 below clarifies this approach.
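The generation order just described can be summarized in code. The sketch below only fixes the order of the draws; the sampler arguments stand in for the densities $p(x_0)$, $p(x_{N_n} \mid x_{N_{n-1}})$, and $p(x_k \mid x_{k-1}, x_{N_n})$, whose Gaussian forms follow from the model of Theorem 2.9. The 1-D stand-in samplers and waypoint times in the example are placeholders, not the paper's model.

```python
import numpy as np

def generate_waypoint_cm_path(waypoint_times, sample_x0, sample_waypoint, sample_inner, rng):
    """Generate one sample path of a sequence satisfying Definition 2.7.

    waypoint_times : [N_1, N_2, ..., N_m] with N_0 = 0 implicit
    sample_x0(rng)                          -> x_0       (draw from p(x_0))
    sample_waypoint(x_prev_wp, rng)         -> x_{N_n}   (draw from p(x_{N_n} | x_{N_{n-1}}))
    sample_inner(k, x_prev, x_next_wp, rng) -> x_k       (draw from p(x_k | x_{k-1}, x_{N_n}))
    """
    path = {0: sample_x0(rng)}
    prev_time = 0
    for Nn in waypoint_times:
        path[Nn] = sample_waypoint(path[prev_time], rng)     # waypoint state first
        for k in range(prev_time + 1, Nn):                   # then fill the segment in time order
            path[k] = sample_inner(k, path[k - 1], path[Nn], rng)
        prev_time = Nn
    return [path[k] for k in range(max(waypoint_times) + 1)]

# Placeholder 1-D Gaussian samplers, purely to make the sketch runnable.
rng = np.random.default_rng(0)
path = generate_waypoint_cm_path(
    waypoint_times=[10, 25, 40],
    sample_x0=lambda r: r.normal(0.0, 1.0),
    sample_waypoint=lambda xw, r: xw + r.normal(5.0, 1.0),
    sample_inner=lambda k, xp, xw, r: 0.5 * (xp + xw) + r.normal(0.0, 0.3),
    rng=rng,
)
print(len(path), path[:3])
```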
Before presenting the dynamic model, we have a lemma.

Lemma 2.8. A Gaussian sequence $[x_k]$ follows Definition 2.7 iff
(i) $\forall n \in [1, m]$ and $\forall j, k$, $0 = N_0 < N_1 < \cdots < N_{n-1} \le j < k < N_n < N_{n+1} < \cdots < N_m = N$,

$E[x_k \mid [x_i]_0^j, x_{N_n}] = E[x_k \mid x_j, x_{N_n}]$   (19)

(ii) $\forall n \in [1, m]$ and $\forall h < n$,

$E[x_{N_n} \mid [x_i]_0^{N_h}] = E[x_{N_n} \mid x_{N_h}]$   (20)

A dynamic model governing a Gaussian sequence with Definition 2.7 is obtained using Lemma 2.8.

Theorem 2.9. A ZMNG sequence $[x_k]$ with covariance function $C_{l_1,l_2}$, $l_1, l_2 \in [0, N]$, follows Definition 2.7 iff $\forall k \in [N_{n-1}+1, N_n-1]$ and $\forall n \in [1, m]$,

$x_k = G_{k,k-1} x_{k-1} + G_{k,N_n} x_{N_n} + e_k$   (21)
$x_{N_n} = G_{N_n,N_{n-1}} x_{N_{n-1}} + e_{N_n}$   (22)

where $[e_k]_1^N$ is a zero-mean white Gaussian sequence with nonsingular covariances $G_k$, uncorrelated with $x_0$, which has nonsingular covariance $G_0$.

A non-zero-mean Gaussian sequence satisfies Definition 2.7 iff its zero-mean part is governed by the model in Theorem 2.9.

The use of Definition 2.7 for trajectory modeling with waypoints is discussed next. Let the trajectories be modeled by $[x_k]$ following Definition 2.7. Similar to destination-directed trajectories, the subsequence $[x_k]_{N_{n-1}}^{N_n}$ is governed by a $\mathrm{CM}_L$ model induced by a Markov model (Theorem 2.6). So, the parameters of (21) are given by (10)-(12) (see Section 4). By time $N_{n-1}$, the next waypoint information is available. It means the position mean of $x_{N_n}$ (and potentially other information, e.g., turn rate, navigation accuracy) is given. The remaining information corresponding to the next waypoint (i.e., the velocity mean at the waypoint, the covariance of the state at the waypoint $C_{N_n}$, and the cross-covariance of the states at two consecutive waypoints $C_{N_n,N_{n-1}}$) can be learned in advance based on a set of trajectories or can be designed. The impact of any mismatch in these parameters ($\mu_{N_n}$, $C_{N_n}$, and $C_{N_n,N_{n-1}}$) is studied in Section 4. The parameters of (22) are obtained using the covariance of the jointly Gaussian density of $x_{N_{n-1}}$ and $x_{N_n}$ as follows ($n > 0$):

$G_{N_n,N_{n-1}} = C_{N_n,N_{n-1}} C_{N_{n-1}}^{-1}$
$G_{N_n} = C_{N_n} - C_{N_n,N_{n-1}} C_{N_{n-1}}^{-1} C_{N_n,N_{n-1}}'$

where, in (22), $G_0 = C_0$. In Section 4, a non-zero-mean Gaussian CM sequence with Definition 2.7 is used for trajectory modeling and prediction. The corresponding CM sequence is governed by the dynamic model in Theorem 2.9, where instead of (22) we have

$x_{N_n} = \mu_{N_n} + G_{N_n,N_{n-1}} (x_{N_{n-1}} - \mu_{N_{n-1}}) + e_{N_n}$   (23)
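A minimal sketch of the waypoint-to-waypoint step: given designed (or learned) waypoint statistics, it computes $G_{N_n,N_{n-1}}$ and $\mathrm{Cov}(e_{N_n})$ by Gaussian conditioning and draws one $x_{N_n}$ from (23). The 2-D numbers in the example (one position and one velocity component) are illustrative placeholders.

```python
import numpy as np

def waypoint_step_params(C_prev, C_curr, C_cross):
    """G_{N_n,N_{n-1}} and Cov(e_{N_n}) for the waypoint equation (22)/(23),
    from the joint Gaussian density of x_{N_{n-1}} and x_{N_n}.
    C_cross = C_{N_n,N_{n-1}} = Cov(x_{N_n}, x_{N_{n-1}})."""
    G = C_cross @ np.linalg.inv(C_prev)
    G_cov = C_curr - G @ C_cross.T
    return G, G_cov

def draw_next_waypoint(x_prev, mu_prev, mu_curr, G, G_cov, rng):
    """One draw of x_{N_n} from the non-zero-mean waypoint equation (23)."""
    e = rng.multivariate_normal(np.zeros(len(mu_curr)), G_cov)
    return mu_curr + G @ (x_prev - mu_prev) + e

# Illustrative 2-D example (position, velocity along one axis); numbers are placeholders.
rng = np.random.default_rng(0)
C_prev = np.array([[10000.0, 400.0], [400.0, 100.0]])
C_curr = C_prev.copy()
C_cross = np.array([[8000.0, 200.0], [200.0, 70.0]])
G, G_cov = waypoint_step_params(C_prev, C_curr, C_cross)
x_next = draw_next_waypoint(
    x_prev=np.array([10050.0, 78.0]),
    mu_prev=np.array([10000.0, 80.0]),
    mu_curr=np.array([90000.0, 70.0]),
    G=G, G_cov=G_cov, rng=rng,
)
print(G.shape, x_next)
```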
3 Filtering and Prediction

3.1 Filtering

Consider the model in Theorem 2.9, where instead of (22) we have (23), and the measurement model

$z_k = H_k x_k + v_k, \quad k \in [1, N]$   (24)

where $[v_k]_1^N$ is zero-mean white Gaussian noise with $\mathrm{Cov}(v_k) = R_k$, uncorrelated with $x_0$ and $[e_k]_1^N$. We want to obtain $\hat{x}_k = E[x_k \mid z^k]$ and its mean square error (MSE) matrix given all measurements from the start to time $k$, denoted by $z^k = \{z_1, z_2, \ldots, z_k\}$.

For $k \in [0, N_1-1]$, let $y_k = [x_k', x_{N_1}']'$. Given the jointly Gaussian density $\mathcal{N}(y_0; \mu^y_0, C^y_0)$, the minimum MSE (MMSE) estimate of $y_0$ and its MSE matrix are $\hat{y}_0 = \mu^y_0$ and $\Sigma_0 = C^y_0$. For $n = 1$, (21) can be written as

$y_k = G^y_{k,k-1} y_{k-1} + e^y_{k-1}, \quad k \in [1, N_1-1]$   (25)

where

$G^y_{k,k-1} = \begin{bmatrix} G_{k,k-1} & G_{k,N_1} \\ 0 & I \end{bmatrix}$   (26)
$e^y_k = \begin{bmatrix} e_{k+1} \\ 0 \end{bmatrix}, \quad G^y_k = \mathrm{Cov}(e^y_k) = \begin{bmatrix} G_{k+1} & 0 \\ 0 & 0 \end{bmatrix}$   (27)

In addition, (24) is written as

$z_k = H^y_k y_k + v_k, \quad k \in [1, N]$   (28)

where $H^y_k = [H_k, 0]$. Based on (25) and (28), the MMSE estimator and its MSE matrix are

$\hat{y}_k = E[y_k \mid z^k] = \hat{y}_{k|k-1} + C_{y_k,z_k} C_{z_k}^{-1} (z_k - H^y_k \hat{y}_{k|k-1})$   (29)
$\Sigma_k = E[(y_k - \hat{y}_k)(y_k - \hat{y}_k)'] = \Sigma_{k|k-1} - C_{y_k,z_k} C_{z_k}^{-1} C_{y_k,z_k}'$   (30)

where

$\hat{y}_{k|k-1} = G^y_{k,k-1} \hat{y}_{k-1}$
$\Sigma_{k|k-1} = G^y_{k,k-1} \Sigma_{k-1} (G^y_{k,k-1})' + G^y_{k-1}$
$C_{y_k,z_k} = \Sigma_{k|k-1} (H^y_k)'$
$C_{z_k} = H^y_k \Sigma_{k|k-1} (H^y_k)' + R_k$

and the estimate of $x_k$ and its MSE matrix are ($k \in [1, N_1-1]$)

$\hat{x}_k = [I, 0] \hat{y}_k, \quad P_k = [I, 0] \Sigma_k [I, 0]'$

Given $\hat{y}_{N_1-1}$ and $\Sigma_{N_1-1}$,

$\hat{x}_{N_1|N_1-1} = [0, I] \hat{y}_{N_1-1}, \quad P_{N_1|N_1-1} = [0, I] \Sigma_{N_1-1} [0, I]'$

where $\hat{x}_{N_1|N_1-1}$ is the estimate of $x_{N_1}$ given all measurements up to time $N_1-1$, and $P_{N_1|N_1-1}$ is its MSE matrix. Then, given $z_{N_1}$, $\hat{x}_{N_1|N_1-1}$ and $P_{N_1|N_1-1}$ are updated as

$\hat{x}_{N_1} = \hat{x}_{N_1|N_1-1} + C_{x_{N_1},z_{N_1}} C_{z_{N_1}}^{-1} (z_{N_1} - H_{N_1} \hat{x}_{N_1|N_1-1})$   (31)
$P_{N_1} = P_{N_1|N_1-1} - C_{x_{N_1},z_{N_1}} C_{z_{N_1}}^{-1} C_{x_{N_1},z_{N_1}}'$   (32)

where $C_{x_{N_1},z_{N_1}} = P_{N_1|N_1-1} H_{N_1}'$ and $C_{z_{N_1}} = H_{N_1} P_{N_1|N_1-1} H_{N_1}' + R_{N_1}$.
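The in-segment recursion (25)-(30), together with the extraction of $\hat{x}_k$ and $P_k$, can be written compactly as below. The sketch assumes the augmented-state matrices $G^y_{k,k-1}$, $G^y_k$ and the measurement parameters $H_k$, $R_k$ are supplied (e.g., from the induced $\mathrm{CM}_L$ parameters); the tiny 1-D matrices in the example are placeholders, not the paper's parameters.

```python
import numpy as np

def cm_segment_filter_step(y_hat, Sigma, Gy, Gy_cov, H, R, z):
    """One step of the augmented-state filter (25)-(30): time update with
    y_k = Gy y_{k-1} + e^y_{k-1}, then measurement update with z_k = [H, 0] y_k + v_k."""
    d = H.shape[1]                                                 # dimension of x_k
    Hy = np.hstack([H, np.zeros((H.shape[0], y_hat.size - d))])    # H^y_k = [H_k, 0]
    # Time update
    y_pred = Gy @ y_hat
    S_pred = Gy @ Sigma @ Gy.T + Gy_cov
    # Measurement update (29)-(30)
    C_yz = S_pred @ Hy.T
    C_z = Hy @ S_pred @ Hy.T + R
    K = C_yz @ np.linalg.inv(C_z)
    y_upd = y_pred + K @ (z - Hy @ y_pred)
    S_upd = S_pred - K @ C_yz.T
    x_hat, P = y_upd[:d], S_upd[:d, :d]          # estimate of x_k and its MSE matrix
    return y_upd, S_upd, x_hat, P

def make_Gy(G_kkm1, G_kN, G_k):
    """Augmented matrices (26)-(27) for y_k = [x_k', x_{N_n}']'; G_k is the
    process-noise covariance used for this step."""
    d = G_kkm1.shape[0]
    Gy = np.block([[G_kkm1, G_kN], [np.zeros((d, d)), np.eye(d)]])
    Gy_cov = np.block([[G_k, np.zeros((d, d))], [np.zeros((d, d)), np.zeros((d, d))]])
    return Gy, Gy_cov

# Tiny illustrative run with placeholder 1-D matrices (d = 1).
rng = np.random.default_rng(0)
y_hat, Sigma = np.array([0.0, 5.0]), np.eye(2)
Gy, Gy_cov = make_Gy(np.array([[0.9]]), np.array([[0.1]]), np.array([[0.2]]))
H, R = np.array([[1.0]]), np.array([[0.5]])
for z in rng.normal(1.0, 0.7, size=5):
    y_hat, Sigma, x_hat, P = cm_segment_filter_step(y_hat, Sigma, Gy, Gy_cov, H, R, np.array([z]))
print(x_hat, P)
```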
$p(x_{N_1}, x_{N_2} \mid z^{N_1})$ is the posterior jointly Gaussian density of $x_{N_1}$ and $x_{N_2}$. To estimate $x_{N_1+1}$ (based on (21)), we need to calculate $p(x_{N_1}, x_{N_2} \mid z^{N_1})$ with the following conditional mean and conditional covariance

$\begin{bmatrix} E[x_{N_1} \mid z^{N_1}] \\ E[x_{N_2} \mid z^{N_1}] \end{bmatrix}, \quad \begin{bmatrix} C_{N_1|N_1} & C_{N_1,N_2|N_1} \\ C_{N_1,N_2|N_1}' & C_{N_2|N_1} \end{bmatrix}$   (33)

where $C_{k_1|k} = \mathrm{Cov}(x_{k_1} \mid z^k)$ and $C_{k_1,k_2|k} = \mathrm{Cov}(x_{k_1}, x_{k_2} \mid z^k)$. We already have $E[x_{N_1} \mid z^{N_1}] = \hat{x}_{N_1}$ and $C_{N_1|N_1} = P_{N_1}$. $E[x_{N_2} \mid z^{N_1}]$ and $C_{N_2|N_1}$ are calculated as follows. By (23) and the whiteness and uncorrelatedness of $[v_k]_1^N$, $[e_k]_1^N$, and $x_0$, we have

$p(x_{N_2} \mid x_{N_1}) = p(x_{N_2} \mid x_{N_1}, z^{N_1})$   (34)

Thus,

$p(x_{N_2} \mid z^{N_1}) = \int p(x_{N_2} \mid x_{N_1})\, p(x_{N_1} \mid z^{N_1})\, dx_{N_1}$   (35)

Given $\mu_{N_2}$, $C_{N_2}$, and $C_{N_2,N_1}$, based on (35), we have

$E[x_{N_2} \mid z^{N_1}] = \mu_{N_2} + G_{N_2,N_1} (\hat{x}_{N_1} - \mu_{N_1})$
$C_{N_2|N_1} = G_{N_2} + G_{N_2,N_1} P_{N_1} G_{N_2,N_1}'$

where $G_{N_2,N_1} = C_{N_2,N_1} C_{N_1}^{-1}$ and $G_{N_2} = C_{N_2} - C_{N_2,N_1} C_{N_1}^{-1} C_{N_2,N_1}'$.

$C_{N_1,N_2|N_1}$ is calculated as follows. Consider the mean and covariance of $p(x_{N_1}, x_{N_2} \mid z^{N_1})$ given by (33). We have

$E[x_{N_2} \mid x_{N_1}, z^{N_1}] = E[x_{N_2} \mid z^{N_1}] + C_{N_2,N_1|N_1} C_{N_1|N_1}^{-1} (x_{N_1} - E[x_{N_1} \mid z^{N_1}])$
$\qquad = \mu_{N_2} + C_{N_2,N_1} C_{N_1}^{-1} (\hat{x}_{N_1} - \mu_{N_1}) + C_{N_2,N_1|N_1} C_{N_1|N_1}^{-1} (x_{N_1} - \hat{x}_{N_1})$   (36)

Also,

$E[x_{N_2} \mid x_{N_1}] = \mu_{N_2} + C_{N_2,N_1} C_{N_1}^{-1} (x_{N_1} - \mu_{N_1})$   (37)

By (34), we have

$E[x_{N_2} \mid x_{N_1}] = E[x_{N_2} \mid x_{N_1}, z^{N_1}]$   (38)

Substituting (36)-(37) into (38), after some manipulation, yields

$(C_{N_2,N_1} C_{N_1}^{-1} - C_{N_2,N_1|N_1} C_{N_1|N_1}^{-1})(x_{N_1} - \hat{x}_{N_1}) = 0$   (39)

(39) holds for every $x_{N_1} - \hat{x}_{N_1} \in \mathbb{R}^d$ (where $d$ is the dimension of the state vector $x_k$), i.e., $\mathbb{R}^d$ is the null space, and thus $G_{N_2,N_1} = C_{N_2,N_1|N_1} C_{N_1|N_1}^{-1}$, which results in

$C_{N_2,N_1|N_1} = G_{N_2,N_1} C_{N_1|N_1} = C_{N_1,N_2|N_1}'$   (40)

where $C_{N_1|N_1} = P_{N_1}$. [Footnote: Actually, (38) holds almost surely, but we consider a regular version of the conditional expectations [22], for which (39) holds for every $x_{N_1} - \hat{x}_{N_1} \in \mathbb{R}^d$.]

Given $p(x_{N_1}, x_{N_2} \mid z^{N_1})$, filtering for $k \in [N_1+1, N_2-1]$ (which is similar to the filtering for $k \in [1, N_1-1]$) is based on the following model for $y_k = [x_k', x_{N_2}']'$:

$y_k = \begin{bmatrix} G_{k,k-1} & G_{k,N_2} \\ 0 & I \end{bmatrix} y_{k-1} + e^y_{k-1}$

Similarly, given $p(x_{N_{n-1}} \mid z^{N_{n-1}})$, $n > 1$, we have

$E[x_{N_n} \mid z^{N_{n-1}}] = \mu_{N_n} + G_{N_n,N_{n-1}} (\hat{x}_{N_{n-1}} - \mu_{N_{n-1}})$
$C_{N_n|N_{n-1}} = G_{N_n} + G_{N_n,N_{n-1}} P_{N_{n-1}} G_{N_n,N_{n-1}}'$
$C_{N_n,N_{n-1}|N_{n-1}} = G_{N_n,N_{n-1}} C_{N_{n-1}|N_{n-1}} = C_{N_{n-1},N_n|N_{n-1}}'$

where $G_{N_n,N_{n-1}} = C_{N_n,N_{n-1}} C_{N_{n-1}}^{-1}$ and $G_{N_n} = C_{N_n} - C_{N_n,N_{n-1}} C_{N_{n-1}}^{-1} C_{N_n,N_{n-1}}'$. Therefore, $p(x_{N_{n-1}}, x_{N_n} \mid z^{N_{n-1}})$ is available. Then, for $k \in [N_{n-1}+1, N_n-1]$, filtering is performed using the following model for $y_k = [x_k', x_{N_n}']'$:

$y_k = \begin{bmatrix} G_{k,k-1} & G_{k,N_n} \\ 0 & I \end{bmatrix} y_{k-1} + e^y_{k-1}$   (41)

3.2 Prediction

Given measurements up to time $k \in [N_{n-1}, N_n-1]$, trajectory prediction for different $k+r$ is discussed as follows. For $k+r \in [k+1, N_n-1]$, trajectory prediction is based on the following posterior density ($y_k = [x_k', x_{N_n}']'$)

$p(y_{k+r} \mid z^k) = \int p(y_{k+r} \mid y_k)\, p(y_k \mid z^k)\, dy_k$   (42)

where the second term of the integrand is the output of the filter (see (29)-(30)), and the first term of the integrand is known by (41). So, for $k+r \in [k+1, N_n-1]$, the predicted state and its MSE matrix are obtained as

$\hat{y}_{k+r|k} = G^y_{k+r|k} \hat{y}_k$   (43)
$\Sigma_{k+r|k} = B_{k+r|k} + G^y_{k+r|k} \Sigma_k (G^y_{k+r|k})'$   (44)

where $G^y_{k,k-1} = \begin{bmatrix} G_{k,k-1} & G_{k,N_n} \\ 0 & I \end{bmatrix}$ and

$G^y_{k+r|k} = G^y_{k+r,k+r-1} G^y_{k+r-1,k+r-2} \cdots G^y_{k+1,k}, \quad G^y_{k|k} = I, \ \forall k$
$B_{k+r|k} = \sum_{i=k}^{k+r-1} G^y_{k+r|i+1} G^y_i (G^y_{k+r|i+1})'$

Then, the predicted estimate of $x_{k+r}$ and its MSE matrix are

$\hat{x}_{k+r|k} = [I, 0] \hat{y}_{k+r|k}$   (45)
$P_{k+r|k} = [I, 0] \Sigma_{k+r|k} [I, 0]'$   (46)

For $k+r = N_n$, trajectory prediction is based on $p(x_{N_n} \mid z^k)$, which is available from the filter because it is a marginal of $p(x_k, x_{N_n} \mid z^k)$. We have

$\hat{x}_{N_n|k} = [0, I] \hat{y}_k$   (47)
$P_{N_n|k} = [0, I] \Sigma_k [0, I]'$   (48)

The $(n+1)$th waypoint is broadcast by $N_n$. So, the $(n+1)$th waypoint might be available at time $k$.
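For time-invariant in-segment matrices, the prediction equations (43)-(46) reduce to a short loop: the sketch below propagates $\hat{y}_k$, $\Sigma_k$ forward $r$ steps and reads off the $x$ and waypoint parts. The matrices in the example are the same 1-D placeholders used in the filtering sketch above, not the paper's parameters.

```python
import numpy as np

def cm_segment_predict(y_hat, Sigma, Gy, Gy_cov, r):
    """r-step prediction (43)-(46) within a segment: propagate the augmented state
    y = [x', x_{N_n}']' with y_{k+1} = Gy y_k + e^y_k (no measurements)."""
    y_pred, S_pred = y_hat.copy(), Sigma.copy()
    for _ in range(r):
        y_pred = Gy @ y_pred                       # builds G^y_{k+r|k} y_hat step by step
        S_pred = Gy @ S_pred @ Gy.T + Gy_cov       # accumulates B_{k+r|k} along the way
    d = y_hat.size // 2
    x_pred, P_pred = y_pred[:d], S_pred[:d, :d]    # (45)-(46)
    wp_pred, wp_P = y_pred[d:], S_pred[d:, d:]     # waypoint part, as in (47)-(48) for r = 0
    return x_pred, P_pred, wp_pred, wp_P

# Placeholder 1-D example (same structure as the filtering sketch above).
Gy = np.array([[0.9, 0.1], [0.0, 1.0]])
Gy_cov = np.array([[0.2, 0.0], [0.0, 0.0]])
x5, P5, wp, wpP = cm_segment_predict(np.array([1.0, 5.0]), np.eye(2), Gy, Gy_cov, r=5)
print(x5, P5, wp, wpP)
```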
Generally (even if later waypoints are known), for $k+r = N_q$, $n < q$, we have

$p(x_{N_q} \mid z^k) = \int p(x_{N_q} \mid x_{N_n})\, p(x_{N_n} \mid z^k)\, dx_{N_n}$

where the first term of the integrand is given by (23) and the second term of the integrand is given by the filter (see (31)-(32)). Then,

$\hat{x}_{N_q|k} = E[x_{N_q} \mid z^k] = \mu_{N_q} + G_{N_q,N_n} (\hat{x}_{N_n|k} - \mu_{N_n})$   (49)
$P_{N_q|k} = C_{N_q|k} = G_{N_q} + G_{N_q,N_n} P_{N_n|k} G_{N_q,N_n}'$   (50)

where $G_{N_q,N_n} = C_{N_q,N_n} C_{N_n}^{-1}$ and $G_{N_q} = C_{N_q} - C_{N_q,N_n} C_{N_n}^{-1} C_{N_q,N_n}'$.

If later waypoints (up to the $(q+1)$th) are known, for $k+r \in [N_q+1, N_{q+1}-1]$, $n \le q$, we have ($y_k = [x_k', x_{N_{q+1}}']'$)

$p(y_{k+r} \mid z^k) = \int p(y_{k+r} \mid y_{N_q})\, p(y_{N_q} \mid z^k)\, dy_{N_q}$   (51)

where the first term of the integrand is known by ($y_k = [x_k', x_{N_{q+1}}']'$)

$y_k = \begin{bmatrix} G_{k,k-1} & G_{k,N_{q+1}} \\ 0 & I \end{bmatrix} y_{k-1} + e^y_{k-1}$   (52)

The second term of the integrand of (51), $p(x_{N_q}, x_{N_{q+1}} \mid z^k)$, should be calculated. The terms $E[x_{N_q} \mid z^k]$, $C_{N_q|k}$, $E[x_{N_{q+1}} \mid z^k]$, and $C_{N_{q+1}|k}$ are obtained by (49)-(50). Also, similar to (40), we have $C_{N_{q+1},N_q|k} = G_{N_{q+1},N_q} C_{N_q|k} = C_{N_q,N_{q+1}|k}'$, where $G_{N_{q+1},N_q} = C_{N_{q+1},N_q} C_{N_q}^{-1}$. Given $p(x_{N_q}, x_{N_{q+1}} \mid z^k)$ and (52), based on (51), we have

$\hat{y}_{k+r|k} = G^y_{k+r|N_q} \hat{y}_{N_q|k}$   (53)
$\Sigma_{k+r|k} = B_{k+r|N_q} + G^y_{k+r|N_q} \Sigma_{N_q|k} (G^y_{k+r|N_q})'$   (54)

where $G^y_{k+r|N_q} = G^y_{k+r,k+r-1} G^y_{k+r-1,k+r-2} \cdots G^y_{N_q+1,N_q}$, $B_{k+r|N_q} = \sum_{i=N_q}^{k+r-1} G^y_{k+r|i+1} G^y_i (G^y_{k+r|i+1})'$, and $G^y_{k|k} = I$, $\forall k$. Then, the predicted estimate at $k+r \in [N_q+1, N_{q+1}-1]$ is given by (45)-(46), where $\hat{y}_{k+r|k}$ and $\Sigma_{k+r|k}$ are given by (53)-(54).

4 Simulations

The CM sequence of Definition 2.7 is simulated for modeling and prediction of trajectories with waypoints. Consider a two-dimensional scenario, where the state of an airliner at time $k$ is $x_k = [x, v^x, y, v^y]_k'$, where $[x_k, y_k]'$ is the position and $[v^x_k, v^y_k]'$ is the velocity. Trajectories between four consecutive waypoints are simulated. Means and covariances of the states at waypoints and cross-covariances between the states at two consecutive waypoints are

$\mu_{N_1} = [10000, 80, 5000, 30]'$   (55)
$\mu_{N_2} = [90000, 70, 30000, 50]'$   (56)
$\mu_{N_3} = [170000, 60, 170000, 60]'$   (57)
$\mu_{N_4} = [250000, 90, 200000, 30]'$   (58)

$C_{N_i} = \begin{bmatrix} 10000 & 400 & 0 & 0 \\ 400 & 100 & 0 & 0 \\ 0 & 0 & 10000 & 400 \\ 0 & 0 & 400 & 100 \end{bmatrix}, \quad i = 1, 2, 3, 4$   (59)

$C_{N_i,N_{i-1}} = \begin{bmatrix} 8000 & 200 & 0 & 0 \\ 200 & 70 & 0 & 0 \\ 0 & 0 & 8000 & 200 \\ 0 & 0 & 200 & 70 \end{bmatrix}, \quad i = 2, 3, 4$   (60)

The state evolution between waypoints ((21)) is governed by a $\mathrm{CM}_L$ model induced by a Markov model (Theorem 2.6). The corresponding Markov model is as follows. Consider the Markov model (2) with

$M_{k+1,k} = \mathrm{diag}(F, F), \quad F = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}, \quad \forall k$   (61)
$M_k = \mathrm{diag}(Q, Q), \quad Q = q \begin{bmatrix} T^3/3 & T^2/2 \\ T^2/2 & T \end{bmatrix}$   (62)

where $T = 15$ seconds and $q = 0.01$. The parameters of (21) are given by (10)-(12). The waypoint times are $N_1 = 0$, $N_2 = 50$, $N_3 = 110$, and $N_4 = 150$. Also, the measurement equation is $z_k = H x_k + v_k$, $k \in [1, N]$, with $H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$, where $[v_k]_1^N$ ($\mathrm{Cov}(v_k) = \mathrm{diag}(100, 100)$) is a zero-mean white NG sequence uncorrelated with $[x_k]$. Fig. 1 shows several trajectories of the CM sequence governed by model (21) and (23) from the first to the fourth waypoint.

[Figure 1: Trajectories and waypoints.]
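The setup (55)-(62) can be assembled directly. The sketch below builds the waypoint statistics, the nearly constant velocity matrices, and the measurement model from the stated numbers; it is a sketch of the parameters, not the authors' simulation code, and the variable names are illustrative.

```python
import numpy as np

T, q = 15.0, 0.01
F1 = np.array([[1.0, T], [0.0, 1.0]])                       # one-axis CV block
Q1 = q * np.array([[T**3 / 3, T**2 / 2], [T**2 / 2, T]])
F = np.kron(np.eye(2), F1)                                   # (61): M_{k+1,k} = diag(F, F)
Q = np.kron(np.eye(2), Q1)                                   # (62): M_k = diag(Q, Q)

# Waypoint statistics (55)-(60); the state is [x, vx, y, vy]'.
mu_wp = {
    1: np.array([10000.0, 80.0, 5000.0, 30.0]),
    2: np.array([90000.0, 70.0, 30000.0, 50.0]),
    3: np.array([170000.0, 60.0, 170000.0, 60.0]),
    4: np.array([250000.0, 90.0, 200000.0, 30.0]),
}
C_axis = np.array([[10000.0, 400.0], [400.0, 100.0]])
X_axis = np.array([[8000.0, 200.0], [200.0, 70.0]])
C_wp = {i: np.kron(np.eye(2), C_axis) for i in (1, 2, 3, 4)}     # C_{N_i}, (59)
C_cross = {i: np.kron(np.eye(2), X_axis) for i in (2, 3, 4)}     # C_{N_i,N_{i-1}}, (60)

wp_times = {1: 0, 2: 50, 3: 110, 4: 150}                         # N_1, ..., N_4

# Measurement model (24): position-only measurements.
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
R = np.diag([100.0, 100.0])

print(F.shape, Q.shape, C_wp[1].shape, H.shape)
```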
Assume measurements are available up to $k = 4$ (the output of the filter is available at $k = 4$). The goal is to predict the trajectory. Also, it is assumed that, in addition to the second waypoint, the third waypoint has already been broadcast and is available, but the fourth waypoint is not known at $k = 4$. As mentioned above, the evolution of the state between two consecutive waypoints is governed by a $\mathrm{CM}_L$ model induced by the above Markov model. The joint endpoint distribution is an important part of a $\mathrm{CM}_L$ model. Since there is no information about the fourth waypoint at $k = 4$, it is natural to assume that the evolution of the state after the third waypoint is governed by the Markov model (2) with parameters (61)-(62) and the initial distribution equal to the distribution at the third waypoint. This modeling assumption is well justified based on the definition of a $\mathrm{CM}_L$ model induced by a Markov model (Section 2), as follows. Consider a Markov sequence governed by a Markov model (2). It is possible to obtain a $\mathrm{CM}_L$ model governing this Markov sequence (i.e., the $\mathrm{CM}_L$ model induced by the Markov model (Theorem 2.6)). Assigning the right endpoint distribution to this induced $\mathrm{CM}_L$ model, the corresponding $\mathrm{CM}_L$ model (with its boundary conditions) governs the original Markov sequence, which is also governed by the original Markov model.

To study the impact of a mismatch in the parameters, several mismatched cases are considered. The matched case, i.e., (55)-(60), is considered as case (i). The mismatched cases are:

• Case (ii): $\mu_{N_1} = [10000, 60, 5000, 50]'$, $\mu_{N_2} = [90000, 50, 30000, 70]'$, $\mu_{N_3} = [170000, 40, 170000, 80]'$, $C_{N_i} = \mathrm{diag}(10^4, 10^4, 10^4, 10^4)$, $i = 1, 2, 3$, $C_{N_i,N_{i-1}} = \mathrm{diag}(7000, 6000, 7000, 6000)$, $i = 2, 3$.
• Case (iii): Same as case (ii) except that $C_{N_i,N_{i-1}} = 0$, $i = 2, 3$.

[Figure 2: Logarithm of AEE of position predictions ($\log_{10}(\mathrm{AEE}_{4+r|4})$).]

Fig. 2 shows the logarithm of the average Euclidean error (AEE) [23] of the predictions of the position vector $[x_k, y_k]'$ for cases (i)-(iii). Given measurements up to time $k$, the AEE of position prediction at time $k+r$ is

$\mathrm{AEE}_{k+r|k} = \dfrac{1}{M} \sum_{i=1}^{M} \sqrt{(x_{k+r} - \hat{x}_{k+r|k})^2 + (y_{k+r} - \hat{y}_{k+r|k})^2}$

where $[x_{k+r}, y_{k+r}]'$ is the true position at $k+r$ ($k+r = 5, \ldots, 150$), $[\hat{x}_{k+r|k}, \hat{y}_{k+r|k}]'$ is its prediction using measurements up to time $k = 4$, and $M = 1000$ is the number of Monte Carlo runs. In cases (ii) and (iii), the means and covariances of the velocity at waypoints are highly mismatched. Case (iii) assumes that there is no correlation between the states at different waypoints. In case (ii), the correlation coefficients between position components (and velocity components) at two consecutive waypoints are smaller than the true ones. An underestimate of the correlation coefficient in case (ii) improves the prediction performance compared with case (iii), which has a zero correlation coefficient. Note that an overestimate of the correlation coefficient can degrade the performance, especially due to the mean and covariance mismatches at waypoints.
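The performance measure behind Figs. 2-3 is straightforward to compute; a minimal sketch, assuming the true and predicted positions are stored as arrays of shape (M, number of prediction steps, 2), is given below. The array names and toy data are placeholders.

```python
import numpy as np

def position_aee(true_pos, pred_pos):
    """AEE_{k+r|k} over M Monte Carlo runs: mean Euclidean distance between the true
    and predicted position at each prediction step (axis 0 = run, axis 1 = step)."""
    err = np.linalg.norm(true_pos - pred_pos, axis=-1)   # per-run, per-step Euclidean error
    return err.mean(axis=0)                              # average over the M runs

# Toy check with synthetic arrays (M = 1000 runs, 146 prediction steps, 2-D position).
rng = np.random.default_rng(0)
truth = rng.normal(size=(1000, 146, 2))
pred = truth + rng.normal(scale=50.0, size=truth.shape)
aee = position_aee(truth, pred)
print(aee.shape, np.log10(aee[:3]))
```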
Although it is reasonable to assume that the means of the position components at waypoints are available, the impact of a mismatch in the position-component means on trajectory prediction is also studied. The following mismatched cases are considered:

• Case (iv): The differences with case (ii) are: $\mu_{N_1} = [10500, 60, 5500, 50]'$, $\mu_{N_2} = [90500, 50, 30500, 70]'$, $\mu_{N_3} = [170500, 40, 170500, 80]'$.
• Case (v): Same as case (iv) except that $C_{N_i,N_{i-1}} = 0$, $i = 2, 3$.
• Case (vi): The differences with case (iv) are: $C_{N_i} = \mathrm{diag}(10^5, 10^4, 10^5, 10^4)$, $i = 1, 2, 3$, $C_{N_i,N_{i-1}} = \mathrm{diag}(70000, 6000, 70000, 6000)$, $i = 2, 3$.
• Case (vii): Same as case (vi) except that $C_{N_i,N_{i-1}} = 0$, $i = 2, 3$.

[Figure 3: Logarithm of AEE of position predictions ($\log_{10}(\mathrm{AEE}_{4+r|4})$).]

Fig. 3 shows the logarithm of the AEE of position predictions for case (i) and cases (iv)-(vii). The prediction performance degradation around the waypoints is due to the mismatch of the corresponding position means. A large covariance can compensate for the bias due to the mismatched mean.

5 Summary and Conclusions

Due to the air traffic control (ATC) regulations, there are long-range dependencies in trajectories of airliners. Such dependencies can be modeled by taking the waypoint information into account. In this paper, a conditionally Markov (CM) sequence has been proposed for modeling trajectories with waypoints. A dynamic model governing the proposed sequence has been presented. Filtering and trajectory prediction formulations have been obtained. First, the proposed CM sequence provides a simple and systematic approach for modeling trajectories with waypoints. Second, it is flexible enough to incorporate any kind of information available about the waypoints. Third, there is no restriction on the parameters of the presented dynamic model, which is good for analysis of the model and design of its parameters. Fourth, the presented dynamic model provides a systematic approach for reducing the uncertainty about the intent of an airliner as more measurements are received; it is based on calculation of the posterior state density at the next waypoint given the measurements up to the current time. Fifth, the presented dynamic model provides a systematic approach for handling inaccurate information about the waypoints (e.g., the state mean at a waypoint), based on appropriate covariance matrices.

Suitable CM sequences can be systematically defined for trajectory modeling in different scenarios with waypoint and/or destination information available. The corresponding dynamic models are simple and easy to apply. This is not necessarily the case for other stochastic sequences. For example, a generalization of the reciprocal sequence using the dynamic model of [17] is not necessarily easy due to the structure of the model and its correlated dynamic noise [11]. More results about CM and reciprocal sequences can be found in [24]-[27].

Acknowledgments

Research was supported by NASA Phase03-06 through grant NNX13AD29A.

References

[1] I. Hwang and C. E. Seah. Intent-Based Probabilistic Conflict Detection for the Next Generation Air Transportation System. Proceedings of the IEEE, Vol. 96, No. 12, pp. 2040-2059, 2008.
[2] J. Yepes, I. Hwang, and M. Rotea. An Intent Based Trajectory Prediction Algorithm for Air Traffic Control. AIAA Guidance, Navigation, and Control Conference, San Francisco, CA, Aug. 2005.
[3] J. Yepes, I. Hwang, and M. Rotea. Algorithms for Aircraft Intent Inference and Trajectory Prediction. AIAA Journal of Guidance, Control, and Dynamics, Vol. 30, No. 2, pp. 370-382, 2007.
[4] Y. Liu and X. R. Li. Intent Based Trajectory Prediction by Multiple Model Prediction and Smoothing. AIAA Guidance, Navigation, and Control Conference, Kissimmee, FL, Jan. 2015.
[5] J. Krozel and D. Andrisani. Intent Inference and Strategic Path Prediction. AIAA Guidance, Navigation, and Control Conference, San Francisco, CA, Aug. 2005.
[6] K. Mueller and J. Krozel. Aircraft ADS-B Intent Verification Based on Kalman Tracking Filter. AIAA Guidance, Navigation, and Control Conference, Denver, CO, Aug. 2000.
[7] D. A. Castanon, B. C. Levy, and A. S. Willsky. Algorithms for Incorporation of Predictive Information in Surveillance Theory. International Journal of Systems Science, Vol. 16, No. 3, pp. 367-382, 1985.
[8] M. Fanaswala, V. Krishnamurthy, and L. B. White. Destination-Aware Target Tracking via Syntactic Signal Processing. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, May 2011.
[9] M. Fanaswala and V. Krishnamurthy. Detection of Anomalous Trajectory Patterns in Target Tracking via Stochastic Context-Free Grammar and Reciprocal Process Models. IEEE Journal of Selected Topics in Signal Processing, Vol. 7, No. 1, pp. 76-90, 2013.
[10] L. B. White and F. Carravetta. Normalized Optimal Smoothers for a Class of Hidden Generalized Reciprocal Processes. IEEE Trans. on Automatic Control, Vol. 62, No. 12, pp. 6489-6496, 2017.
[11] L. B. White and F. Carravetta. Stochastic Realisation and Optimal Smoothing for Gaussian Generalised Reciprocal Processes. IEEE 56th Annual Conference on Decision and Control (CDC), Melbourne, Australia, Dec. 2017.
[12] B. I. Ahmad, J. Murphy, P. M. Langdon, R. Hardy, and S. J. Godsill. Destination Inference using Bridging Distributions. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, Australia, Apr. 2015.
[13] B. I. Ahmad, J. K. Murphy, S. J. Godsill, P. M. Langdon, and R. Hardy. Intelligent Interactive Displays in Vehicles with Intent Prediction: A Bayesian Framework. IEEE Signal Processing Magazine, Vol. 34, No. 2, 2017.
[14] R. Rezaie and X. R. Li. Destination-Directed Trajectory Modeling and Prediction Using Conditionally Markov Sequences. IEEE Western New York Image and Signal Processing Workshop, Rochester, NY, USA, Oct. 2018, pp. 1-5.
[15] C. B. Mehr and J. A. McFadden. Certain Properties of Gaussian Processes and their First-Passage Times. Journal of the Royal Statistical Society, Vol. 27, pp. 505-522, 1965.
[16] R. Rezaie and X. R. Li. Nonsingular Gaussian Conditionally Markov Sequences. IEEE Western New York Image and Signal Processing Workshop, Rochester, NY, USA, Oct. 2018, pp. 1-5.
[17] B. C. Levy, R. Frezza, and A. Krener. Modeling and Estimation of Discrete-Time Gaussian Reciprocal Processes. IEEE Trans. on Automatic Control, Vol. 35, No. 9, pp. 1013-1023, 1990.
[18] R. Rezaie and X. R. Li. Gaussian Reciprocal Sequences from the Viewpoint of Conditionally Markov Sequences. International Conference on Vision, Image and Signal Processing, Las Vegas, NV, USA, Aug. 2018, pp. 33:1-33:6.
[19] R. Rezaie and X. R. Li. Models and Representations of Gaussian Reciprocal and Conditionally Markov Sequences. International Conference on Vision, Image and Signal Processing, Las Vegas, NV, USA, Aug. 2018, pp. 65:1-65:6.
[20] R. Rezaie and X. R. Li. Explicitly Sample-Equivalent Dynamic Models for Gaussian Conditionally Markov, Reciprocal, and Markov Sequences. International Conference on Control, Automation, Robotics, and Vision Engineering, New Orleans, LA, USA, Nov. 2018, pp. 1-6.
[21] RTCA Inc. Minimum Aviation System Performance Standards: Required Navigation Performance for Area Navigation. 2014.
[22] M. Loeve. Probability Theory II. 4th Edition, Springer-Verlag, 1977.
[23] X. R. Li and Z. Zhao. Evaluation of Estimation Algorithms Part I: Incomprehensive Measures of Performance. IEEE Trans. on Aerospace and Electronic Systems, Vol. 42, No. 4, pp. 1340-1358, 2006.
[24] R. Rezaie and X. R. Li. Gaussian Conditionally Markov Sequences: Algebraically Equivalent Dynamic Models. IEEE Trans. on Aerospace and Electronic Systems, 2019, DOI: 10.1109/TAES.2019.2951188.
[25] R. Rezaie and X. R. Li. Gaussian Conditionally Markov Sequences: Singular/Nonsingular. IEEE Trans. on Automatic Control, 2019, DOI: 10.1109/TAC.2019.2944363.
[26] R. Rezaie. Gaussian Conditionally Markov Sequences: Theory with Application. Ph.D. Dissertation, Dept. of Electrical Engineering, University of New Orleans, July 2019.
[27] R. Rezaie and X. R. Li. Gaussian Conditionally Markov Sequences: Dynamic Models and Representations of Reciprocal and Other Classes. IEEE Trans. on Signal Processing, May 2019, DOI: 10.1109/TSP.2019.2919410.
