Time and Space Varying Copulas


Authors: Glenis Crane

Glenis Crane [1]
School of Mathematical Sciences
University of Adelaide
South Australia

1 Abstract

In this article we review the existing literature on dynamic copulas and then propose an $n$-copula which varies in time and space. Our approach makes use of stochastic differential equations, and gives rise to a dynamic copula which is able to capture the dependence between multiple Markov diffusion processes. This model is suitable for pricing basket derivatives in finance and may also be applicable to other areas such as bioinformatics and environmental science.

2 Introduction and motivation

Mapping joint probability distribution functions to copula functions is straightforward when they are static, thanks to Sklar's Theorem. Mapping time-dependent probability functions to copula functions, on the other hand, is more problematic. In this article we (a) review the techniques for creating time-dependent copulas and (b) extend the method described in [4], [5], since it incorporates both time and space. The resulting equations are the first of their kind in higher dimensions, since only 2-dimensional examples have previously been described. There are at least two areas in which the time-dependent copulas of this chapter are applicable:

• Credit derivatives. In this application we would assume a portfolio of $n$ firms, where $X_i(t)$ is the value of the $i$-th firm's assets at time $t$. Each marginal distribution associated with $X_i(t)$ would represent the probability of the firm's value falling below some threshold, given certain information at time zero. The time-varying copula would represent the evolution of the joint distribution, or state, of the entire portfolio.

• Genetic drift. For example, each $X_i(t)$ may represent the frequency of a particular gene at time $t$. Each marginal distribution would represent the probability that the frequency of a particular gene had fallen below some threshold. The copula would relate to the evolution of a group of genes of interest.

[1] Author to whom correspondence should be addressed: Dr G.J. Crane, School of Mathematical Sciences, University of Adelaide, South Australia 5005, Australia, ph: 618 83036184, Email: glenis.crane@adelaide.edu.au

2.1 Notation and Definitions

In order to understand some of the issues surrounding the mapping of copulae to distributions, it is necessary to go back to some of the basic definitions and notation relating to the probability distributions of interest. The notation used here for a univariate probability transition function will be
$$\Pr\{X_i(t_i) \le x_i \mid X_j(t_j) = x_j\} = F(t_i, x_i \mid t_j, x_j). \qquad (1)$$
If $t_j = 0$ then it is quite common to suppress the zero, so the notation for the distribution in this case would be $F(t_i, x_i \mid x_{i0})$.

2.2 Method of Darsow et al.

The authors in [1] were the first to attempt to map a transition probability function to a copula. First let us recall the definition of a bivariate copula.

Definition 1. 2-copula. A function $C : [0,1]^2 \to [0,1]$ is a copula if it satisfies the following properties:

1. $C(u_1, 0) = 0$ and $C(0, u_2) = 0$, for all $u_1, u_2 \in [0,1]$;
2. $C(u_1, 1) = u_1$ and $C(1, u_2) = u_2$, for all $u_1, u_2 \in [0,1]$; and
3. for every $u_a, u_b, v_a, v_b \in [0,1]$ such that $u_a \le u_b$ and $v_a \le v_b$, the $C$-volume $V_C([u_a, u_b] \times [v_a, v_b]) \ge 0$, that is,
$$C(u_b, v_b) - C(u_b, v_a) - C(u_a, v_b) + C(u_a, v_a) \ge 0.$$

Sklar's Theorem. Suppose $H$ is a bivariate joint distribution with marginal distributions $F_1$ and $F_2$. Then there exists a 2-copula $C$ such that for all $x_1, x_2 \in \bar{\mathbb{R}}$,
$$H(x_1, x_2) = C(F_1(x_1), F_2(x_2)).$$
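As a concrete illustration of Sklar's Theorem (a minimal Python sketch, not part of the original development; the comonotone pair and all names are our own choices), take $X \sim N(0,1)$ and $Y = X^3$. Since $Y$ is an increasing function of $X$, the joint distribution factors through the Fréchet upper-bound copula $M(u,v) = \min(u,v)$:

```python
from statistics import NormalDist

# Illustrative sketch: Sklar's theorem for a comonotone pair.
# With Y = X**3 and X ~ N(0,1), the joint CDF satisfies
#   H(x, y) = P(X <= x, X**3 <= y) = M(F1(x), F2(y)),  M(u, v) = min(u, v).

phi = NormalDist()   # standard normal

def cbrt(y):
    """Real cube root (Python's ** promotes negative bases to complex)."""
    return y ** (1 / 3) if y >= 0 else -((-y) ** (1 / 3))

F1 = phi.cdf                       # marginal CDF of X
F2 = lambda y: phi.cdf(cbrt(y))    # marginal CDF of Y = X**3

def H(x, y):
    """Joint CDF computed directly from the event {X <= x, X**3 <= y}."""
    return F1(min(x, cbrt(y)))

def M(u, v):                       # Frechet upper-bound 2-copula
    return min(u, v)

for x, y in [(0.3, 1.0), (-1.0, 2.0), (1.5, -0.5)]:
    assert abs(H(x, y) - M(F1(x), F2(y))) < 1e-12
print("H(x, y) == M(F1(x), F2(y)) on all test points")
```

The identity is exact here because $\Phi$ is increasing, so $\Phi(\min(a,b)) = \min(\Phi(a), \Phi(b))$; continuous margins also make $C = M$ the unique copula of this pair.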
(2)

If $F_1$ and $F_2$ are continuous distributions then $C$ is unique; otherwise $C$ is uniquely determined on $\mathrm{Ran}F_1 \times \mathrm{Ran}F_2$; see Nelsen [6].

Definition 5.1. Markov property. A stochastic process $X_i(t)$, $x_i \in \mathbb{R}$, $a \le t \le b$, is said to satisfy the Markov property if for any $a \le t_1 \le t_2 \le \ldots \le t_n \le t$, the equality
$$\Pr\{X_i(t) \le x_i \mid X_i(t_1), X_i(t_2), \ldots, X_i(t_n)\} = \Pr\{X_i(t) \le x_i \mid X_i(t_n)\}$$
holds for any $x_i \in \mathbb{R}$. A stochastic process is called a Markov process if it satisfies the Markov property described in Definition 5.1.

The following notation will be used for an unconditional probability function at time $t_i \ge 0$:
$$\Pr\{X_i(t_i) \le x_i\} = F_{t_i}(x_i) \qquad (3)$$
for a stochastic process $X_i(t_i)$ and $x_i \in \mathbb{R}$. Let $\nabla_{x_i} F = \partial F / \partial x_i$; the corresponding density function $f$ is then such that $f(x_i) = \nabla_{x_i} F(x_i)$.

A univariate transition probability function $F$ (Markov process) can be mapped to a bivariate copula $C$ by setting
$$F(t_i, x_i \mid t_j, x_j) = \nabla_{u_2} C\left( F_{t_i}(x_i), F_{t_j}(x_j) \right), \qquad (4)$$
where $\nabla_{u_2}$ is the partial derivative with respect to the second argument of $C$. This mapping enables us to build in time; see [1]. The first marginal distribution is associated with time $t_i$ and the second with time $t_j$. We take the partial derivative of the copula, since the probability to which it is mapped is conditional. This method is particularly useful for building Markov chains.

One of the most important innovations which enabled the authors in [1] to link copulas to Markov processes was the idea of a copula product.

Definition 5.2. Copula product. Let $C_a$ and $C_b$ be bivariate copulas. The product of $C_a$ and $C_b$ is the function $C_a * C_b : [0,1]^2 \to [0,1]$ such that
$$(C_a * C_b)(x, y) = \int_0^1 \nabla_z C_a(x, z)\, \nabla_z C_b(z, y)\, dz.$$
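The copula product of Definition 5.2 can be checked numerically. The Python sketch below (our own construction; the Clayton copula and the quadrature parameters are illustrative assumptions, not from the paper) verifies two standard identities of the $*$-product: $M(u,v) = \min(u,v)$ acts as the identity, and the independence copula $\Pi(u,v) = uv$ is absorbing, $\Pi * C = \Pi$.

```python
# Sketch: the Darsow et al. copula product
#   (Ca * Cb)(x, y) = \int_0^1 dCa/dz(x, z) * dCb/dz(z, y) dz
# by midpoint quadrature, with the partial derivatives taken by
# central finite differences. Test copula: Clayton with theta = 2.

def clayton(u, v, theta=2.0):
    if u <= 0.0 or v <= 0.0:
        return 0.0
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def M(u, v):        # Frechet upper bound: identity for the product
    return min(u, v)

def indep(u, v):    # independence copula: absorbing for the product
    return u * v

def product(Ca, Cb, x, y, n=4000, h=1e-6):
    """(Ca * Cb)(x, y) via midpoint rule; dC/dz by central differences."""
    total = 0.0
    for k in range(n):
        z = (k + 0.5) / n
        da = (Ca(x, z + h) - Ca(x, z - h)) / (2 * h)   # d/dz Ca(x, z)
        db = (Cb(z + h, y) - Cb(z - h, y)) / (2 * h)   # d/dz Cb(z, y)
        total += da * db
    return total / n

x, y = 0.3, 0.7
assert abs(product(M, clayton, x, y) - clayton(x, y)) < 1e-2   # M * C == C
assert abs(product(indep, clayton, x, y) - x * y) < 1e-3       # Pi * C == Pi
print("copula product checks passed")
```

The identity $M * C = C$ follows because $\partial_z M(x,z)$ is the indicator of $\{z < x\}$, so the product integral telescopes to $C(x, y)$.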
(5)

This product is essentially the copula equivalent of the Chapman-Kolmogorov equation, as stated in Theorem 3.2 of [1]. We restate that theorem here (with modified notation).

Theorem 3.2. Let $X_i(t)$, $t \in T$, be a real stochastic process, and for each $s, t \in T$ let $C_{st}$ denote the copula of the random variables $X_i(s)$ and $X_i(t)$. The following are equivalent:

1. The transition probabilities $F(t, A \mid s, x_s) = \Pr\{X_i(t) \in A \mid X_i(s) = x_s\}$ of the process satisfy the Chapman-Kolmogorov equations
$$F(t, A \mid s, x_s) = \int_{\mathbb{R}} F(t, A \mid u, \xi)\, F(u, d\xi \mid s, x_s) \qquad (6)$$
for all Borel sets $A$, for all $s < t \in T$, for all $u \in (s, t) \cap T$, and for almost all $x_s \in \mathbb{R}$.

2. For all $s, u, t \in T$ satisfying $s < u < t$,
$$C_{st} = C_{su} * C_{ut}. \qquad (7)$$

The work in [1] has advanced both the theory of copulas and techniques for building Markov processes. This method is also used in [9] to formulate a Markov chain model of the dependence in credit risk. There, the discrete stochastic variable $X_i(t)$ is interpreted as the rating grade of a firm at a particular point in time. A variety of copulas were fitted to the data and gave mixed results; no single copula was best for all data sets. This type of mapping of the transition distribution to the copula is very simple; one consequence, however, is that an $n$-dimensional transition function requires a $2n$-dimensional copula. In other words, as the dimension of the copula increases, the calculation of the transition function becomes more and more computationally cumbersome. The method in [1] has also been extended in [8], so that an $n$-dimensional Markov process can be represented by a combination of bivariate copulas and margins. Hence,
$$\begin{aligned} \Pr\{X_i(t_1) \le x_1, \ldots, X_i(t_n) \le x_n\} &= \prod_{i=2}^{n} \Pr\{X_i(t_i) \le x_i \mid X_i(t_1) = x_1, \ldots, X_i(t_{i-1}) = x_{i-1}\}\, \Pr\{X_i(t_1) \le x_1\} \\ &= \prod_{i=2}^{n} \Pr\{X_i(t_i) \le x_i \mid X_i(t_{i-1}) = x_{i-1}\}\, \Pr\{X_i(t_1) \le x_1\} \\ &= \frac{\prod_{i=2}^{n} C_{t_{i-1}, t_i}\left( F_{t_{i-1}}(x_{i-1}), F_{t_i}(x_i) \right)}{\prod_{i=2}^{n-1} F_{t_i}(x_i)}. \end{aligned} \qquad (8)$$

2.3 Conditional Copula of Patton

Another approach to building time into a copula was formulated in [7]. In order to explain this approach, we need to recall more definitions and set up notation. Firstly, let $\mathcal{F}$ be a filtration; then
$$\Pr\{X_i \le x_i \mid \mathcal{F}\} = F_i(x_i \mid \mathcal{F}). \qquad (9)$$
The multivariate analogue of equation (9) is
$$\Pr\{X \le x \mid \mathcal{F}\} = H(x \mid \mathcal{F}), \qquad (10)$$
for $x = (x_1, x_2, \ldots, x_n)^T$, such that the volume of $H$ satisfies $V_H(R) \ge 0$ for all rectangles $R \in \mathbb{R}^n$ with their vertices in the domain of $H$ (see [8]),
$$H(+\infty, \ldots, +\infty, x_i, +\infty, \ldots, +\infty \mid \mathcal{F}) = F_i(x_i \mid \mathcal{F}),$$
and $H(x_1, \ldots, x_n \mid \mathcal{F}) = 0$ whenever any coordinate equals $-\infty$, for all $x_1, \ldots, x_n \in \mathbb{R}$. Here $F_i$ is the $i$-th univariate marginal distribution of $H$; see [7] for a bivariate version of $H$. As expected, the density of the conditional $H$ is
$$h(x \mid \mathcal{F}) = \nabla_{x_1, \ldots, x_n} H(x \mid \mathcal{F}). \qquad (11)$$
In equation (9), the distribution is atypical, since it may be conditional on a vector of variables rather than just one, as opposed to a typical univariate transition distribution.

The author in [7] mapped the conditional distribution $H(x \mid \mathcal{F})$, defined above, to a copula of the same order. That is, for all $x_i \in \mathbb{R}$ and $i = 1, 2, \ldots, n$,
$$H(x_1, \ldots, x_n \mid \mathcal{F}) = C(F_1(x_1 \mid \mathcal{F}), F_2(x_2 \mid \mathcal{F}), \ldots, F_n(x_n \mid \mathcal{F}) \mid \mathcal{F}). \qquad (12)$$
Here $\mathcal{F}$ is a sub-$\sigma$-algebra, in other words a conditioning set. Such conditioning is necessary for $C$ to satisfy all the conditions of a conventional copula. The relationship between the conditional density $h$ and the copula density $c$ is
$$h(x_1, x_2, \ldots, x_n \mid \mathcal{F}) = c(u_1, u_2, \ldots, u_n \mid \mathcal{F}) \prod_{i=1}^{n} f_i(x_i \mid \mathcal{F}), \qquad (13)$$
where $u_i \equiv F_i(x_i \mid \mathcal{F})$, $i = 1, 2, \ldots, n$, and $f_i$, $i = 1, 2, \ldots, n$, are univariate conditional densities.

In terms of time-varying distributions, we can think of the conditioning set as the history of all the variables in the distribution. In the case of Markov processes, only the last time point is of importance. The implication of this type of conditioning is that the marginal distributions in the copula can no longer be typical transition probabilities, but are atypical conditional probabilities. Hence, if each $X_i$ represented the value of an asset at time $t$, the associated distribution $F_i$ would represent the distribution of $X_i$, given that we knew the values of all the assets in the model, $X_1, X_2, \ldots, X_n$, at some previous time, for example $t-1$. In other words, we can rewrite the time-varying version of the distribution and copula above as
$$H_t(x_{1t}, x_{2t}, \ldots, x_{nt} \mid \mathcal{F}_{t-1}) = C_t(F_{1t}(x_{1t} \mid \mathcal{F}_{t-1}), F_{2t}(x_{2t} \mid \mathcal{F}_{t-1}), \ldots, F_{nt}(x_{nt} \mid \mathcal{F}_{t-1}) \mid \mathcal{F}_{t-1}), \qquad (14)$$
where
$$\mathcal{F}_{t-1} = \sigma(x_{1,t-1}, x_{2,t-1}, \ldots, x_{n,t-1}, x_{1,t-2}, x_{2,t-2}, \ldots, x_{n,t-2}, \ldots, x_{11}, x_{21}, \ldots, x_{n1}).$$
In [7], the marginal distributions are characterized by autoregressive (AR) and generalized autoregressive conditional heteroskedasticity (GARCH) processes. Ultimately, they are handled in the same way as other time series processes.

2.4 Pseudo-copulas of Fermanian and Wegkamp

As we have seen above, Markov processes are only defined with respect to their own history, not the history of other processes. Therefore, the method in [7] is good for some applications but not practical for others. If we want marginal distributions of processes conditional on their own history, for example Markov processes, and want to use a mapping similar to that shown in [7], then it is possible via a conditional pseudo-copula.
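Before moving on, the density factorization in equation (13) can be verified in closed form for a bivariate Gaussian distribution. The Python sketch below (our own; the correlation $\rho = 0.6$ and the test points are arbitrary choices) checks that the joint normal density equals the Gaussian copula density times the two standard normal margins:

```python
import math
from statistics import NormalDist

# Sketch of equation (13) for a bivariate Gaussian:
#   h(x1, x2) = c(F1(x1), F2(x2)) * f1(x1) * f2(x2),
# with standard normal margins and the Gaussian copula density c.

RHO = 0.6
phi = NormalDist()

def f(x):                              # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def h(x1, x2, r=RHO):                  # bivariate normal density
    q = (x1 * x1 - 2 * r * x1 * x2 + x2 * x2) / (1 - r * r)
    return math.exp(-q / 2) / (2 * math.pi * math.sqrt(1 - r * r))

def c(u1, u2, r=RHO):                  # Gaussian copula density
    a, b = phi.inv_cdf(u1), phi.inv_cdf(u2)
    q = (r * r * (a * a + b * b) - 2 * r * a * b) / (1 - r * r)
    return math.exp(-q / 2) / math.sqrt(1 - r * r)

for x1, x2 in [(0.0, 0.0), (1.0, -0.5), (-2.0, 1.5)]:
    u1, u2 = phi.cdf(x1), phi.cdf(x2)
    assert abs(h(x1, x2) - c(u1, u2) * f(x1) * f(x2)) < 1e-9
print("h == c * f1 * f2 on all test points")
```

The exponents combine as $\rho^2(a^2+b^2) - 2\rho ab + (1-\rho^2)(a^2+b^2) = a^2 - 2\rho ab + b^2$, which is exactly the quadratic form in $h$, so the identity holds analytically and the numerical check only exercises the $\Phi^{-1}(\Phi(x)) = x$ round trip.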
The authors in [3] introduced the notion of a conditional pseudo-copula in order to cover a wider range of applications than the conditional copula in [7]. The definition of a pseudo-copula is:

Definition 5.3. Pseudo-copula. A function $C : [0,1]^n \to [0,1]$ is called an $n$-dimensional pseudo-copula if

1. for every $u \in [0,1]^n$, $C(u) = 0$ when at least one coordinate of $u$ is zero;
2. $C(1, 1, \ldots, 1) = 1$; and
3. for every $u, v \in [0,1]^n$ such that $u \le v$, the volume of $C$, $V_C \ge 0$.

A pseudo-copula satisfies most of the conditions of a conventional copula except for $C(1, \ldots, 1, u_k, 1, \ldots, 1) = u_k$, so the marginal distributions of a pseudo-copula may not be uniform. The definition of a conditional pseudo-copula is:

Definition 5.4. Conditional pseudo-copula. Given a joint distribution $H$ associated with $X_1, X_2, \ldots, X_n$, an $n$-dimensional conditional pseudo-copula with respect to sub-$\sigma$-algebras $\mathcal{F} = (\mathcal{F}_1, \mathcal{F}_2, \ldots, \mathcal{F}_n)$ and $\mathcal{G}$ is a random function $C(\cdot \mid \mathcal{F}, \mathcal{G}) : [0,1]^n \to [0,1]$ such that
$$H(x_1, x_2, \ldots, x_n) = C(F_1(x_1 \mid \mathcal{F}_1), F_2(x_2 \mid \mathcal{F}_2), \ldots, F_n(x_n \mid \mathcal{F}_n) \mid \mathcal{F}, \mathcal{G}) \qquad (15)$$
almost everywhere, for every $(x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$; see [2].

2.5 Galichon model

More recently, a dynamic bivariate copula was used to correlate Markov diffusion processes; see [4], [5]. Unlike the previous models of time-dependent copulas, this model addresses the issue of spatial as well as time dependence. The model uses a partial differential equation approach to obtain a representation of the time-dependent copula. An outline of the main result follows. Consider two Markov diffusion processes $X_1(t)$ and $X_2(t)$, $t \in [0, T]$, which represent two risky financial assets, for example options with a maturity date $T$. The diffusions are such that
$$\begin{aligned} dX_1(t) &= \mu_1(X(t))\, dt + \tilde{\sigma}_1(X(t))\, dB_1(t) \\ dX_2(t) &= \mu_2(X(t))\, dt + \tilde{\sigma}_2(X(t))\, dB_2(t) \\ dB_1(t)\, dB_2(t) &= \rho_{12}(X_1(t), X_2(t))\, dt, \end{aligned} \qquad (16)$$
where $X(t) = (X_1(t), X_2(t))^T$ and $\mu_i$, $\tilde{\sigma}_i$, for $i = 1, 2$, are the drift and diffusion coefficients, respectively. The Brownian motion terms are correlated with coefficient $\rho_{12} \in [-1, 1]$. One would like an expression for the evolution of a copula between the distributions $F_1, F_2$ of $X_1(t)$ and $X_2(t)$, conditional on information at time $t = 0$, $\mathcal{F}_{t_0}$. Firstly, a joint bivariate distribution $H$ is mapped to a copula $C$ by
$$H(t, x_1, x_2 \mid \mathcal{F}_{t_0}) = C(t, F_1(t, x_1 \mid \mathcal{F}_{t_0}), F_2(t, x_2 \mid \mathcal{F}_{t_0}) \mid \mathcal{F}_{t_0}), \qquad (17)$$
and then the Kolmogorov forward equation is used to obtain an expression for $\nabla_t C$. Letting $u_1 = F_1(t, x_1 \mid \mathcal{F}_{t_0})$, $u_2 = F_2(t, x_2 \mid \mathcal{F}_{t_0})$, $u_1, u_2 \in [0,1]$, $x = (x_1, x_2)^T$, and shortening the notation for the copula to $C(t, u_1, u_2)$, the time-dependent copula in [4] is
$$\begin{aligned} \nabla_t C(t, u_1, u_2) ={}& \tfrac{1}{2} \tilde{\sigma}_1^2(x) f_1^2(t, x_1 \mid \mathcal{F}_{t_0})\, \nabla^2_{u_1} C(t, u_1, u_2) + \tfrac{1}{2} \tilde{\sigma}_2^2(x) f_2^2(t, x_2 \mid \mathcal{F}_{t_0})\, \nabla^2_{u_2} C(t, u_1, u_2) \\ &- \nabla_{u_1} C(t, u_1, u_2)\, B_1 F_1(t, x_1 \mid \mathcal{F}_{t_0}) + \int_{(-\infty, x_2]} \nabla_{u_1, u_2} C(t, u_1, u_2)\, f_2(t, z_2 \mid \mathcal{F}_{t_0})\, B_1 F_1(t, z_1 \mid \mathcal{F}_{t_0})\, dz_2 \\ &- \nabla_{u_2} C(t, u_1, u_2)\, B_2 F_2(t, x_2 \mid \mathcal{F}_{t_0}) + \int_{(-\infty, x_1]} \nabla_{u_1, u_2} C(t, u_1, u_2)\, f_1(t, z_1 \mid \mathcal{F}_{t_0})\, B_2 F_2(t, z_2 \mid \mathcal{F}_{t_0})\, dz_1 \\ &+ \tilde{\sigma}_1(x) \tilde{\sigma}_2(x) \rho_{12}(x_1, x_2) f_1(t, x_1 \mid \mathcal{F}_{t_0}) f_2(t, x_2 \mid \mathcal{F}_{t_0})\, \nabla_{u_1, u_2} C(t, u_1, u_2), \end{aligned} \qquad (18)$$
where $B_1$ and $B_2$ are the following operators: given any function $g \in C^2(\mathbb{R})$,
$$B_1 g = \left( \nabla_{x_1}\left\{ \tfrac{1}{2} \tilde{\sigma}_1^2(x) \right\} - \mu_1(x) \right) \nabla_{x_1} g + \tfrac{1}{2} \tilde{\sigma}_1^2(x)\, \nabla^2_{x_1} g, \qquad B_2 g = \left( \nabla_{x_2}\left\{ \tfrac{1}{2} \tilde{\sigma}_2^2(x) \right\} - \mu_2(x) \right) \nabla_{x_2} g + \tfrac{1}{2} \tilde{\sigma}_2^2(x)\, \nabla^2_{x_2} g,$$
and $\nabla_{x_i} g = \partial g / \partial x_i$, $\nabla^2_{x_i} g = \partial^2 g / \partial x_i^2$. For the greatest flexibility we choose
$$\inf\{x_i : F_i(t, x_i \mid \mathcal{F}_{t_0}) \ge u_i\} = F_i^{-1}(t, u_i \mid \mathcal{F}_{t_0}), \quad u_i \in [0,1];$$
that is, $F_i^{-1}$ is the pseudo-inverse. If $X_1(t)$ and $X_2(t)$ are individually Markov, that is, $\tilde{\sigma}_i$ and $\mu_i$ depend only on $x_i$ for $i = 1, 2$, then the formula for the time-dependent copula simplifies to
$$\begin{aligned} \nabla_t C(t, u_1, u_2) ={}& \tilde{\sigma}_1(x) \tilde{\sigma}_2(x) \rho_{12}(x_1, x_2) f_1(t, x_1 \mid \mathcal{F}_{t_0}) f_2(t, x_2 \mid \mathcal{F}_{t_0})\, \nabla_{u_1, u_2} C(t, u_1, u_2) \\ &+ \tfrac{1}{2} \tilde{\sigma}_1^2(x) f_1^2(t, x_1 \mid \mathcal{F}_{t_0})\, \nabla^2_{u_1} C(t, u_1, u_2) + \tfrac{1}{2} \tilde{\sigma}_2^2(x) f_2^2(t, x_2 \mid \mathcal{F}_{t_0})\, \nabla^2_{u_2} C(t, u_1, u_2). \end{aligned} \qquad (19)$$

The main aim of this chapter is to extend some of these current models of time-dependent copulas. We derive an $n$-dimensional version of the model in [4]. A reformulation is also given, in which linear combinations of independent Brownian motion terms are used.

3 $n$-dimensional Galichon Model for CDOs

Suppose we have an $n$-dimensional system of stochastic differential equations, such that $X(t) \in \mathbb{R}^n$ and $B(t)$ is an $n$-dimensional Brownian motion. The vector $X(t)$ could represent a portfolio of risky assets, as in a Collateralized Debt Obligation. We want to find a partial differential equation with respect to a time-dependent $n$-copula, which gives us information on the riskiness of the package of assets. As in the 2-dimensional model, $t$ is a scalar such that $t \in (0, T]$. Let $\mathcal{F}_t$ be the $\sigma$-algebra generated by $\{B(s); s \le t\}$ and assume $X(t)$ is measurable with respect to $\mathcal{F}_t$. In this case the diffusions are such that
$$dX(t) = \mu(X(t))\, dt + \tilde{A}\, dB(t) \qquad (20)$$
$$dB_i(t)\, dB_j(t) = \rho_{ij}(X_i(t), X_j(t))\, dt, \qquad (21)$$
where
$$dX(t) = \begin{pmatrix} dX_1(t) \\ dX_2(t) \\ \vdots \\ dX_n(t) \end{pmatrix}, \quad dB(t) = \begin{pmatrix} dB_1(t) \\ dB_2(t) \\ \vdots \\ dB_n(t) \end{pmatrix}, \quad \mu(X(t)) = \begin{pmatrix} \mu_1(X(t)) \\ \mu_2(X(t)) \\ \vdots \\ \mu_n(X(t)) \end{pmatrix}, \quad \tilde{\sigma}(X(t)) = \begin{pmatrix} \tilde{\sigma}_1(X(t)) \\ \tilde{\sigma}_2(X(t)) \\ \vdots \\ \tilde{\sigma}_n(X(t)) \end{pmatrix}.$$
Note that in this case $\mu$ and $\tilde{\sigma}$ are $n$-vector functions which represent the drift and diffusion coefficients of the process, respectively. Let
$$\tilde{A} = \mathrm{diag}(\tilde{\sigma}(X(t))) = \begin{pmatrix} \tilde{\sigma}_1(X(t)) & 0 & \cdots & 0 \\ 0 & \tilde{\sigma}_2(X(t)) & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & \tilde{\sigma}_n(X(t)) \end{pmatrix}.$$
The correlation coefficients satisfy $\rho_{ij} \in [-1, 1]$, and let
$$\rho = \begin{pmatrix} 1 & \rho_{12}(X_1(t), X_2(t)) & \cdots & \rho_{1n}(X_1(t), X_n(t)) \\ \rho_{21}(X_2(t), X_1(t)) & 1 & \cdots & \rho_{2n}(X_2(t), X_n(t)) \\ \vdots & & \ddots & \vdots \\ \rho_{n1}(X_n(t), X_1(t)) & \cdots & \cdots & 1 \end{pmatrix}.$$

Three conditions are required for the existence and uniqueness of a solution to equation (20):

1. The coefficients $\mu(x)$ and $\tilde{\sigma}(x)$ must be defined for $x \in \mathbb{R}^n$ and measurable with respect to $x$.
2. For $x, y \in \mathbb{R}^n$, there exists a constant $K$ such that
$$\|\mu(x) - \mu(y)\| \le K \|x - y\|, \quad \|\tilde{\sigma}(x) - \tilde{\sigma}(y)\| \le K \|x - y\|, \quad \|\mu(x)\|^2 + \|\tilde{\sigma}(x)\|^2 \le K^2 (1 + \|x\|^2).$$
3. $X(0)$ does not depend on $B(t)$ and $E[\|X(0)\|^2] < \infty$.

Theorem 5.1. The time-dependent $n$-copula $\nabla_t C(t, u)$ between a vector of distributions $u_i = F_i(t, x_i \mid x_0)$, $i = 1, \ldots, n$, associated with the Markov diffusions $X(t) = [X_1(t), \ldots, X_n(t)]^T$, conditional on information at time $t = 0$, $\mathcal{F}_{t_0} = x_0$, is
$$\begin{aligned} \nabla_t C(t, u) ={}& \frac{1}{2} \sum_{i=1}^{n} \int_{(-\infty, \bar{x}]} \tilde{\sigma}_i(z)^2 f_i^2(t, x_i \mid x_0)\, \nabla_{z_1, .., \hat{z}_i, .., z_n} \nabla^2_{u_i} C(t, u)\, d\bar{z} \\ &+ \sum_{i=1}^{n} \left( -\nabla_{u_i} C(t, u)\, B^i_t F_i(t, x_i \mid x_0) + \int_{(-\infty, \bar{x}]} \nabla_{z_1, .., \hat{z}_i, .., z_n} \nabla_{u_i} C(t, u)\, B^i_t F_i(t, z_i \mid x_0)\, d\bar{z} \right) \\ &+ \frac{1}{2} \sum_{\substack{i,j=1 \\ i \ne j}}^{n} \int_{(-\infty, \check{x}]} \rho_{ij}(x_i, x_j)\, \tilde{\sigma}_i(z) \tilde{\sigma}_j(z) f_i(t, x_i \mid x_0) f_j(t, x_j \mid x_0)\, \nabla_{z_1, .., \hat{z}_i, \hat{z}_j, .., z_n} \nabla_{u_i, u_j} C(t, u)\, d\check{z}, \end{aligned} \qquad (22)$$
where $x_i = F_i^{-1}(t, u_i \mid \mathcal{F}_{t_0})$, $i = 1, \ldots, n$. The intervals of integration are
$$(-\infty, \bar{x}] = (-\infty, x_1] \times \ldots \times (-\infty, x_{i-1}] \times (-\infty, x_{i+1}] \times \ldots \times (-\infty, x_n]$$
and
$$(-\infty, \check{x}] = (-\infty, x_1] \times \ldots \times (-\infty, x_{i-1}] \times (-\infty, x_{i+1}] \times \ldots \times (-\infty, x_{j-1}] \times (-\infty, x_{j+1}] \times \ldots \times (-\infty, x_n].$$
Also note that $d\bar{z} = dz_1\, dz_2 \cdots dz_{i-1}\, dz_{i+1} \cdots dz_n$ and $d\check{z} = dz_1\, dz_2 \cdots dz_{i-1}\, dz_{i+1} \cdots dz_{j-1}\, dz_{j+1} \cdots dz_n$. Thus, the $i$-th term is excluded in the first two integrals on the right-hand side of Theorem 5.1; similarly, in the last integral the $i$-th and $j$-th terms are excluded. Furthermore, for any smooth function $g$,
$$\nabla_{z_1, .., \hat{z}_i, .., z_n}\, g = \frac{\partial^{n-1} g}{\partial z_1 \cdots \partial z_{i-1}\, \partial z_{i+1} \cdots \partial z_n} \quad \text{and} \quad \nabla_{z_1, .., \hat{z}_i, \hat{z}_j, .., z_n}\, g = \frac{\partial^{n-2} g}{\partial z_1 \cdots \partial z_{i-1}\, \partial z_{i+1} \cdots \partial z_{j-1}\, \partial z_{j+1} \cdots \partial z_n}.$$
The operators $B^i_t$, $i = 1, \ldots, n$, are the same as in the two-dimensional model. If the diffusions are individually Markov, that is, each $\tilde{\sigma}_k$ and $\mu_k$ depends only on $x_k$, then the expression for $\nabla_t C$ simplifies to
$$\nabla_t C(t, u) = \frac{1}{2} \sum_{i=1}^{n} \tilde{\sigma}_i(x_i)^2 f_i^2(t, x_i \mid x_0)\, \nabla^2_{u_i} C(t, u) + \frac{1}{2} \mathrm{Tr}\left[ \left( H^C_u(t, u) - \mathrm{diag}\{\nabla^2_{u_1} C(t, u), \nabla^2_{u_2} C(t, u), \ldots, \nabla^2_{u_n} C(t, u)\} \right) D \tilde{A} \rho \tilde{A}^T D^T \right],$$
where $D = \mathrm{diag}(f_1, f_2, \ldots, f_n)$.

Proof. In this case the 1-dimensional Ito formula for each component of $X(t)$ is
$$dg(X_i(t)) = \left( \nabla_{x_i} g(X_i(t))\, \mu_i(X(t)) + \tfrac{1}{2} \nabla^2_{x_i} g(X_i(t))\, \tilde{\sigma}_i^2(X(t)) \right) dt + \nabla_{x_i} g(X_i(t))\, \tilde{\sigma}_i(X(t))\, dB_i(t).$$
Define the vector $\nabla_x$ of partial derivatives with respect to the components of $x$ as
$$\nabla_x g(X(t)) = \left( \nabla_{x_1} g(X(t)), \nabla_{x_2} g(X(t)), \ldots, \nabla_{x_n} g(X(t)) \right)^T$$
and the Hessian matrix of $g(X(t))$
$$H^g_x(X(t)) \equiv \left( \nabla_{x_i x_j} g(X(t)) \right)_{1 \le i,j \le n}. \qquad (23)$$
Assume $g \in C^2(\mathbb{R}^n)$; then the $n$-dimensional Ito formula for $g(X(t))$ is
$$dg(X(t)) = \left( \langle \nabla_x g(X(t)), \mu(X(t)) \rangle + \tfrac{1}{2} \mathrm{Tr}\left[ H^g_x(X(t))\, \tilde{A} \rho \tilde{A}^T \right] \right) dt + \nabla_x g(X(t))^T \tilde{A}\, dB(t),$$
where $\langle a, b \rangle = a^T b$ for any vectors $a$ and $b$. Let the operators $A$ on distributions (Kolmogorov backward equations), analogous to those in [4], [5], be called $A^i_t$ and $A^n_t$ for the 1- and $n$-dimensional cases, respectively. With respect to typical distributions $F_i(t, x_i \mid \tau, \xi_i)$ and $H(t, x \mid \tau, \xi)$, the operators are
$$A^i_t F_i(t, x_i \mid \tau, \xi_i) = \mu_i(x)\, \nabla_{\xi_i} F_i(t, x_i \mid \tau, \xi_i) + \tfrac{1}{2} \tilde{\sigma}_i^2\, \nabla^2_{\xi_i} F_i(t, x_i \mid \tau, \xi_i) \qquad (24)$$
and
$$A^n_t H(t, x \mid \tau, \xi) = \langle \nabla_{\xi} H(t, x \mid \tau, \xi), \mu(x) \rangle + \tfrac{1}{2} \mathrm{Tr}\left[ H^H_{\xi}(t, x \mid \tau, \xi)\, \tilde{A} \rho \tilde{A}^T \right]. \qquad (25)$$
The operators $A^i_t$, $i = 1, \ldots, n$, and $A^n_t$ are not used in the rest of the formulation, but are mentioned briefly because the Kolmogorov forward equations, which are required, are given by the associated adjoint operators. Assuming the density functions of $H$ and $F$ are $h$ and $f$, respectively, the adjoint operators $A^{i*}_t$, $i = 1, \ldots, n$, and $A^{n*}_t$ have the form
$$A^{i*}_t f_i(t, x_i \mid \tau, \xi_i) = -\nabla_{x_i}\left( \mu_i(x) f_i(t, x_i \mid \tau, \xi_i) \right) + \nabla^2_{x_i}\left( \tfrac{1}{2} \tilde{\sigma}_i^2 f_i(t, x_i \mid \tau, \xi_i) \right) \qquad (26)$$
and
$$A^{n*}_t h(t, x \mid \tau, \xi) = -\sum_{i=1}^{n} \nabla_{x_i}\left( \mu_i(x) h(t, x \mid \tau, \xi) \right) + \frac{1}{2} \sum_{i,j=1}^{n} \nabla_{x_i, x_j}\left( \rho_{ij}(x_i, x_j)\, \tilde{\sigma}_i(x) \tilde{\sigma}_j(x)\, h(t, x \mid \tau, \xi) \right). \qquad (27)$$
The marginal density functions $f_i$ and joint density $h$ are such that $f_i(t, x_i \mid \mathcal{F}_{t_0}) = f_i(t, x_i \mid x_0)$ and $H(t, x \mid \mathcal{F}_{t_0}) = H(t, x \mid x_0)$, where $x_0 = (x_1 = X_1(0), x_2 = X_2(0), \ldots, x_n = X_n(0))$; see Appendix 5.A. In other words, the assumption made here is that all the distributions are conditional on the entire vector of realizations of $x$ at time zero. As in the 2-dimensional case, it is possible to express the operator $A^{n*}_t$ in terms of the operators $A^{i*}_t$ associated with the univariate distributions:
$$A^{n*}_t g = \sum_{i=1}^{n} A^{i*}_t g + \frac{1}{2} \sum_{\substack{i,j=1 \\ i \ne j}}^{n} \nabla_{x_i, x_j}\left( \rho_{ij}(x_i, x_j)\, \tilde{\sigma}_i(x) \tilde{\sigma}_j(x)\, g \right). \qquad (28)$$
Given that
$$\nabla_t f_i(t, x_i \mid x_0) = A^{i*}_t f_i(t, x_i \mid x_0), \qquad (29)$$
we can integrate the left-hand side of (29) with respect to $x_i$; calling the resulting operator $B^i_t$, we obtain
$$B^i_t F_i(t, x_i \mid x_0) = \int_{(-\infty, x_i]} \nabla_t f_i(t, z_i \mid x_0)\, dz_i = \int_{(-\infty, x_i]} \nabla_t \nabla_{z_i} F_i(t, z_i \mid x_0)\, dz_i = \nabla_t F_i(t, x_i \mid x_0). \qquad (30)$$
Integrating the right-hand side of (29) with respect to $x_i$ gives us
$$\int_{(-\infty, x_i]} A^{i*}_t f_i(t, z_i \mid x_0)\, dz_i = -\mu_i(x) f_i(t, x_i \mid x_0) + \nabla_{x_i}\left( \tfrac{1}{2} \tilde{\sigma}_i^2(x) f_i(t, x_i \mid x_0) \right) = \left( \nabla_{x_i}\{\tfrac{1}{2} \tilde{\sigma}_i^2(x)\} - \mu_i(x) \right) \nabla_{x_i} F_i(t, x_i \mid x_0) + \tfrac{1}{2} \tilde{\sigma}_i^2(x)\, \nabla^2_{x_i} F_i(t, x_i \mid x_0),$$
so
$$B^i_t F_i(t, x_i \mid x_0) = \left( \nabla_{x_i}\{\tfrac{1}{2} \tilde{\sigma}_i^2(x)\} - \mu_i(x) \right) \nabla_{x_i} F_i(t, x_i \mid x_0) + \tfrac{1}{2} \tilde{\sigma}_i^2(x)\, \nabla^2_{x_i} F_i(t, x_i \mid x_0). \qquad (31)$$
Similarly, integrating over $A^{n*}_t$ will give us the analogous operator $B^n_t$ for the multivariate distribution $H$.
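The relation $\nabla_t F_i = B^i_t F_i$ in (30)–(31) can be sanity-checked on the simplest diffusion. The Python sketch below (our own toy coefficients $\mu$, $\sigma$, not from the paper) verifies it by finite differences for drifted Brownian motion, whose transition distribution $F(t, x \mid x_0) = \Phi\big((x - x_0 - \mu t)/(\sigma \sqrt{t})\big)$ is known in closed form; with constant $\sigma$, (31) reduces to $B_t F = -\mu\, \nabla_x F + \tfrac{1}{2}\sigma^2\, \nabla^2_x F$.

```python
from statistics import NormalDist

# Check d/dt F == B_t F for dX = mu dt + sigma dB (toy constants below),
# where B_t F = -mu * dF/dx + (sigma**2 / 2) * d2F/dx2, per equation (31)
# with constant sigma. All derivatives taken by central differences.

MU, SIGMA, X0 = 0.3, 0.8, 0.0
phi = NormalDist()

def F(t, x):
    """Transition CDF of drifted Brownian motion started at X0."""
    return phi.cdf((x - X0 - MU * t) / (SIGMA * t ** 0.5))

def d_dt(t, x, h=1e-5):
    return (F(t + h, x) - F(t - h, x)) / (2 * h)

def d_dx(t, x, h=1e-5):
    return (F(t, x + h) - F(t, x - h)) / (2 * h)

def d2_dx2(t, x, h=1e-4):
    return (F(t, x + h) - 2 * F(t, x) + F(t, x - h)) / (h * h)

for t, x in [(0.5, 0.2), (1.0, -0.4), (2.0, 1.3)]:
    lhs = d_dt(t, x)
    rhs = -MU * d_dx(t, x) + 0.5 * SIGMA ** 2 * d2_dx2(t, x)
    assert abs(lhs - rhs) < 1e-5
print("forward-equation check passed: d/dt F == B_t F")
```

This is the integrated Kolmogorov forward equation: the density solves $\nabla_t f = A^*_t f$, and integrating in $x$ (boundary terms vanishing at $-\infty$) yields exactly the operator $B_t$ above.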
Now, since
$$\int_{(-\infty, x]} A^{n*}_t h(t, z \mid x_0)\, dz = \sum_{i=1}^{n} \int_{(-\infty, x]} A^{i*}_t h(t, z \mid x_0)\, dz + \frac{1}{2} \sum_{\substack{i,j=1 \\ i \ne j}}^{n} \int_{(-\infty, x]} \nabla_{z_i, z_j}\left( \rho_{ij}(z_i, z_j)\, \tilde{\sigma}_i(z) \tilde{\sigma}_j(z)\, h(t, z \mid x_0) \right) dz, \qquad (32)$$
where $(-\infty, x] = (-\infty, x_1] \times \ldots \times (-\infty, x_n]$, it is possible to get an expression for $B^n_t$ in terms of the $B^i_t$. That is, let $B^n_t H(t, x \mid x_0) = \nabla_t H(t, x \mid x_0)$; given that $h(t, x \mid x_0) = \nabla_{x_1, .., x_n} H(t, x \mid x_0)$, we have
$$B^n_t H(t, x \mid x_0) = \frac{1}{2} \sum_{\substack{i,j=1 \\ i \ne j}}^{n} \int_{(-\infty, x]} \nabla_{z_i, z_j}\left( \rho_{ij}(z_i, z_j)\, \tilde{\sigma}_i(z) \tilde{\sigma}_j(z)\, \nabla_{z_1, \ldots, z_n} H(t, z \mid x_0) \right) dz + \sum_{i=1}^{n} \int_{(-\infty, x]} A^{i*}_t \nabla_{z_1, \ldots, z_n} H(t, z \mid x_0)\, dz. \qquad (33)$$
The right-hand side of equation (33) can be expressed in terms of the univariate operators $B^i_t$, $i = 1, 2, \ldots, n$:
$$B^n_t H(t, x \mid x_0) = \frac{1}{2} \sum_{\substack{i,j=1 \\ i \ne j}}^{n} \int_{(-\infty, \check{x}]} \rho_{ij}(x_i, x_j)\, \tilde{\sigma}_i(z) \tilde{\sigma}_j(z)\, \nabla_{z_1, .., \hat{z}_i, \hat{z}_j, .., z_n} \nabla_{z_i, z_j} H(t, z \mid x_0)\, d\check{z} + \sum_{i=1}^{n} \int_{(-\infty, \bar{x}]} B^i_t\, \nabla_{z_1, .., \hat{z}_i, .., z_n} H(t, z \mid x_0)\, d\bar{z}. \qquad (34)$$
Let
$$H(t, x \mid x_0) = C(t, F_1(t, x_1 \mid x_0), F_2(t, x_2 \mid x_0), \ldots, F_n(t, x_n \mid x_0) \mid x_0), \qquad (35)$$
where $C$ is an $n$-copula defined on $[0, T] \times [0,1]^n$. At this point we shorten the notation, so that $C(t, F(t, x \mid x_0))$ is the same copula as above. We now seek an expression for $B^n_t C(t, F(t, x \mid x_0))$ by substituting $C$ for $H$ in equation (34). Letting $F_i(t, x_i \mid x_0) = u_i$, $i = 1, 2, \ldots, n$, and $u = (u_1, \ldots, u_n)^T$, from the first term in equation (34) we obtain
$$\begin{aligned} \sum_{i=1}^{n} \int_{(-\infty, \bar{x}]} B^i_t\, \nabla_{z_1, .., \hat{z}_i, .., z_n} H(t, z \mid x_0)\, d\bar{z} ={}& \sum_{i=1}^{n} \int_{(-\infty, \bar{x}]} B^i_t\, \nabla_{z_1, .., \hat{z}_i, .., z_n} C(t, F(t, z \mid x_0))\, d\bar{z} \\ ={}& \sum_{i=1}^{n} \Bigg( \int_{(-\infty, \bar{x}]} \left( \nabla_{z_i}\left\{\tfrac{\tilde{\sigma}_i^2(z)}{2}\right\} - \mu_i(z) \right) \nabla_{z_i} \nabla_{z_1, .., \hat{z}_i, .., z_n} C(t, F(t, z \mid x_0))\, d\bar{z} \\ &\qquad + \int_{(-\infty, \bar{x}]} \tfrac{\tilde{\sigma}_i^2(z)}{2}\, \nabla^2_{z_i} \nabla_{z_1, .., \hat{z}_i, .., z_n} C(t, F(t, z \mid x_0))\, d\bar{z} \Bigg) \\ ={}& \sum_{i=1}^{n} \Bigg( \int_{(-\infty, \bar{x}]} \left( \nabla_{z_i}\left\{\tfrac{\tilde{\sigma}_i^2(z)}{2}\right\} - \mu_i(z) \right) f_i(t, z_i \mid x_0)\, \nabla_{u_i} \nabla_{z_1, .., \hat{z}_i, .., z_n} C(t, u)\, d\bar{z} \\ &\qquad + \int_{(-\infty, \bar{x}]} \tfrac{\tilde{\sigma}_i^2(z)}{2}\, f_i^2(t, z_i \mid x_0)\, \nabla^2_{u_i} \nabla_{z_1, .., \hat{z}_i, .., z_n} C(t, u)\, d\bar{z} \\ &\qquad + \int_{(-\infty, \bar{x}]} \tfrac{\tilde{\sigma}_i^2(z)}{2}\, \nabla_{z_i} f_i(t, z_i \mid x_0)\, \nabla_{u_i} \nabla_{z_1, .., \hat{z}_i, .., z_n} C(t, u)\, d\bar{z} \Bigg) \\ ={}& \sum_{i=1}^{n} \Bigg( \int_{(-\infty, \bar{x}]} \left( \nabla_{z_i}\left\{\tfrac{\tilde{\sigma}_i^2(z)}{2}\right\} - \mu_i(z) \right) \nabla_{z_i} F_i(t, z_i \mid x_0)\, \nabla_{u_i} \nabla_{z_1, .., \hat{z}_i, .., z_n} C(t, u)\, d\bar{z} \\ &\qquad + \int_{(-\infty, \bar{x}]} \tfrac{\tilde{\sigma}_i^2(z)}{2}\, \nabla^2_{z_i} F_i(t, z_i \mid x_0)\, \nabla_{u_i} \nabla_{z_1, .., \hat{z}_i, .., z_n} C(t, u)\, d\bar{z} \\ &\qquad + \int_{(-\infty, \bar{x}]} \tfrac{\tilde{\sigma}_i^2(z)}{2}\, f_i^2(t, z_i \mid x_0)\, \nabla^2_{u_i} \nabla_{z_1, .., \hat{z}_i, .., z_n} C(t, u)\, d\bar{z} \Bigg) \\ ={}& \sum_{i=1}^{n} \int_{(-\infty, \bar{x}]} \nabla_{z_1, .., \hat{z}_i, .., z_n} \nabla_{u_i} C(t, u)\, B^i_t F_i(t, z_i \mid x_0)\, d\bar{z} \\ &+ \frac{1}{2} \sum_{i=1}^{n} \int_{(-\infty, \bar{x}]} \tilde{\sigma}_i^2(z) f_i^2(t, z_i \mid x_0)\, \nabla_{z_1, .., \hat{z}_i, .., z_n} \nabla^2_{u_i} C(t, u)\, d\bar{z}. \end{aligned}$$
Since $z$ is a dummy variable and the multiple integrals exclude that over $(-\infty, x_i]$, we can write
$$\sum_{i=1}^{n} \int_{(-\infty, \bar{x}]} B^i_t\, \nabla_{z_1, .., \hat{z}_i, .., z_n} H(t, z \mid x_0)\, d\bar{z} = \sum_{i=1}^{n} \int_{(-\infty, \bar{x}]} \nabla_{z_1, .., \hat{z}_i, .., z_n} \nabla_{u_i} C(t, u)\, B^i_t F_i(t, z_i \mid x_0)\, d\bar{z} + \frac{1}{2} \sum_{i=1}^{n} \int_{(-\infty, \bar{x}]} \tilde{\sigma}_i^2(z) f_i^2(t, x_i \mid x_0)\, \nabla_{z_1, .., \hat{z}_i, .., z_n} \nabla^2_{u_i} C(t, u)\, d\bar{z}. \qquad (36)$$
From the second term in equation (34) we have
$$\begin{aligned} \frac{1}{2} \sum_{\substack{i,j=1 \\ i \ne j}}^{n} \int_{(-\infty, \check{x}]} \rho_{ij}(x_i, x_j)\, \tilde{\sigma}_i(z) \tilde{\sigma}_j(z)\, \nabla_{z_1, .., \hat{z}_i, \hat{z}_j, .., z_n} \nabla_{z_i, z_j} H(t, z \mid x_0)\, d\check{z} &= \frac{1}{2} \sum_{\substack{i,j=1 \\ i \ne j}}^{n} \int_{(-\infty, \check{x}]} \rho_{ij}(x_i, x_j)\, \tilde{\sigma}_i(z) \tilde{\sigma}_j(z)\, \nabla_{z_1, .., \hat{z}_i, \hat{z}_j, .., z_n} \nabla_{z_i, z_j} C(t, F(t, z \mid x_0))\, d\check{z} \\ &= \frac{1}{2} \sum_{\substack{i,j=1 \\ i \ne j}}^{n} \int_{(-\infty, \check{x}]} \rho_{ij}(x_i, x_j)\, \tilde{\sigma}_i(z) \tilde{\sigma}_j(z) f_i(t, x_i \mid x_0) f_j(t, x_j \mid x_0)\, \nabla_{z_1, .., \hat{z}_i, \hat{z}_j, .., z_n} \nabla_{u_i, u_j} C(t, u)\, d\check{z}, \end{aligned} \qquad (37)$$
so
$$\begin{aligned} B^n_t C(t, u) ={}& \sum_{i=1}^{n} \int_{(-\infty, \bar{x}]} \nabla_{z_1, .., \hat{z}_i, .., z_n} \nabla_{u_i} C(t, u)\, B^i_t F_i(t, z_i \mid x_0)\, d\bar{z} + \frac{1}{2} \sum_{i=1}^{n} \int_{(-\infty, \bar{x}]} \tilde{\sigma}_i^2(z) f_i^2(t, x_i \mid x_0)\, \nabla_{z_1, .., \hat{z}_i, .., z_n} \nabla^2_{u_i} C(t, u)\, d\bar{z} \\ &+ \frac{1}{2} \sum_{\substack{i,j=1 \\ i \ne j}}^{n} \int_{(-\infty, \check{x}]} \rho_{ij}(x_i, x_j)\, \tilde{\sigma}_i(z) \tilde{\sigma}_j(z) f_i(t, x_i \mid x_0) f_j(t, x_j \mid x_0)\, \nabla_{z_1, .., \hat{z}_i, \hat{z}_j, .., z_n} \nabla_{u_i, u_j} C(t, u)\, d\check{z}. \end{aligned} \qquad (38)$$
Now, we also have
$$\nabla_t H(t, x \mid x_0) = B^n_t H(t, x \mid x_0) = \nabla_t C(t, F(t, x \mid x_0)) + \sum_{i=1}^{n} \nabla_{u_i} C(t, F(t, x \mid x_0))\, \nabla_t F_i(t, x_i \mid x_0) = \nabla_t C(t, u) + \sum_{i=1}^{n} \nabla_{u_i} C(t, u)\, B^i_t F_i(t, x_i \mid x_0). \qquad (39)$$
Matching equations (38) and (39) and rearranging, we obtain
$$\begin{aligned} \nabla_t C(t, u) ={}& \sum_{i=1}^{n} \left( -\nabla_{u_i} C(t, u)\, B^i_t F_i(t, x_i \mid x_0) + \int_{(-\infty, \bar{x}]} \nabla_{z_1, .., \hat{z}_i, .., z_n} \nabla_{u_i} C(t, u)\, B^i_t F_i(t, z_i \mid x_0)\, d\bar{z} \right) \\ &+ \frac{1}{2} \sum_{i=1}^{n} \int_{(-\infty, \bar{x}]} \tilde{\sigma}_i^2(z) f_i^2(t, x_i \mid x_0)\, \nabla_{z_1, .., \hat{z}_i, .., z_n} \nabla^2_{u_i} C(t, u)\, d\bar{z} \\ &+ \frac{1}{2} \sum_{\substack{i,j=1 \\ i \ne j}}^{n} \int_{(-\infty, \check{x}]} \rho_{ij}(x_i, x_j)\, \tilde{\sigma}_i(z) \tilde{\sigma}_j(z) f_i(t, x_i \mid x_0) f_j(t, x_j \mid x_0)\, \nabla_{z_1, .., \hat{z}_i, \hat{z}_j, .., z_n} \nabla_{u_i, u_j} C(t, u)\, d\check{z}. \end{aligned}$$
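As an aside, the diffusion system (20)–(21) underlying this result can be simulated directly. The following Euler–Maruyama sketch (our own, with constant illustrative coefficients, whereas the model allows state-dependent $\mu_i$, $\tilde{\sigma}_i$, $\rho_{ij}$) builds the correlated Brownian increments from a Cholesky factor of $\rho$ and checks their sample correlation:

```python
import math, random

# Euler-Maruyama for dX = mu dt + A~ dB with dB_i dB_j = rho_ij dt.
# Correlated increments: dB = sqrt(dt) * L z, where L L^T = rho and
# z is a vector of independent standard normals. Toy constants throughout.

def cholesky(a):
    """Lower-triangular Cholesky factor of a positive-definite matrix."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(a[i][i] - s) if i == j else (a[i][j] - s) / L[j][j]
    return L

rho = [[1.0, 0.5, 0.2],
       [0.5, 1.0, 0.3],
       [0.2, 0.3, 1.0]]
mu = [0.05, 0.02, 0.0]      # drift coefficients (illustrative)
sig = [0.2, 0.3, 0.25]      # diffusion coefficients (illustrative)
L = cholesky(rho)

random.seed(0)
dt, steps = 1e-3, 100_000
x = [0.0, 0.0, 0.0]
incs = []                    # Brownian increments, kept to check rho
for _ in range(steps):
    z = [random.gauss(0.0, 1.0) for _ in range(3)]
    dB = [math.sqrt(dt) * sum(L[i][k] * z[k] for k in range(3)) for i in range(3)]
    x = [x[i] + mu[i] * dt + sig[i] * dB[i] for i in range(3)]
    incs.append(dB)

# E[dB_1 dB_2] = rho_12 dt, so this ratio estimates rho[0][1] = 0.5
c12 = sum(d[0] * d[1] for d in incs) / (len(incs) * dt)
print("sample corr(dB1, dB2) ~", round(c12, 3))
```

Simulations of this kind are one route to the numerical experiments suggested in the conclusion, since the empirical copula of the simulated paths can be compared against the PDE representation above.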
If the equations are individually Markov, so that each $\tilde{\sigma}_k$ and $\mu_k$ depends only on $x_k$, then
$$\sum_{i=1}^{n} \left( -\nabla_{u_i} C(t, u)\, B^i_t F_i(t, x_i \mid x_0) + \int_{(-\infty, \bar{x}]} \nabla_{z_1, .., \hat{z}_i, .., z_n} \nabla_{u_i} C(t, u)\, B^i_t F_i(t, z_i \mid x_0)\, d\bar{z} \right) = 0,$$
so the expression for $\nabla_t C$ simplifies to
$$\nabla_t C(t, u) = \frac{1}{2} \sum_{i=1}^{n} \tilde{\sigma}_i(x_i)^2 f_i^2(t, x_i \mid x_0)\, \nabla^2_{u_i} C(t, u) + \frac{1}{2} \mathrm{Tr}\left[ \left( H^C_u(t, u) - \mathrm{diag}\{\nabla^2_{u_1} C(t, u), \nabla^2_{u_2} C(t, u), \ldots, \nabla^2_{u_n} C(t, u)\} \right) D \tilde{A} \rho \tilde{A}^T D^T \right]. \qquad \square$$

4 Conclusion

We have described a dynamic $n$-copula which varies in time and space. This copula is the first of its kind in greater than 2 dimensions; the dynamic 2-copula was previously described in [5]. In that case, the copula could be applied to the pricing of pairs of options and other credit derivatives. In the $n$-dimensional case, it is possible to use the dynamic copula for the pricing of any basket derivative or a number of commodities. Future work in this area may involve numerical experiments, sensitivity testing and simulations in order to determine how robust the model is. Other possible applications include the health industry and environmental science.

I would like to acknowledge Alfred Galichon for his valuable discussions in relation to this work.

References

[1] W.F. Darsow, B. Nguyen, and E.T. Olsen, Copulas and Markov processes, Illinois Journal of Mathematics 36 (1992), no. 4, 600–642.

[2] J-D. Fermanian and O. Scaillet, Some statistical pitfalls in copula modeling for financial applications, 2004, www.crest.fr/pageperso/fermanian/pitfalls_copula.pdf.

[3] J-D. Fermanian and M. Wegkamp, Time dependent copulas, 2004, www.crest.fr/pageperso/fermanian/cond_copula10.pdf.

[4] A. Galichon, Coupling two Markov Diffusions, 2006, working paper, Department of Economics, Harvard University.

[5] A. Galichon, Modelling Correlation between two Markov Diffusion Processes: A copula approach with application to stochastic correlation modelling, 2006, working paper, Department of Economics, Harvard University.

[6] R. Nelsen, An Introduction to Copulas, Springer, New York, 1999.

[7] A.J. Patton, Modelling time-varying exchange rate dependence using the conditional copula, 2001, UCSD Discussion Paper 2001-09, Department of Economics, University of California, San Diego. Available at http://www.econ.ucsd.edu/~apatton/research.html.

[8] V. Schmitz, Copulas and Stochastic Processes, 2003, Diplom thesis, Institute of Statistics, Aachen University, Germany.

[9] V. Zitzmann, Modeling of portfolio dependence in terms of copulas: A rating-based approach, Tech. report, European Financial Management Association, 2005.
