We study (backward) stochastic differential equations with noise coming from a finite state Markov chain. We show that, for the solutions of these equations to be `Markovian', in the sense that they are deterministic functions of the state of the underlying chain, the integrand must be of a specific form. This allows us to connect these equations to coupled systems of ODEs, and hence to give fast numerical methods for the evaluation of Markov-Chain BSDEs.
Over the past 20 years, the role of stochastic methods in control has been increasing. In particular, the theory of Backward Stochastic Differential Equations, initiated by Pardoux and Peng [9], has shown itself to be a useful tool for the analysis of a variety of stochastic control problems (see, for example, El Karoui, Peng and Quenez [6] for a review of applications in finance, or Yong and Zhou [11] for a more general control perspective). Recent work [4,5] has considered these equations where the noise is generated by a continuous-time finite-state Markov chain, rather than by a Brownian motion.
Applications of BSDEs frequently depend on the ability to compute solutions to these equations numerically. While part of the power of the theory of BSDEs is its ability to deal with non-Markovian control problems, the numerical methods that have been developed are typically still restricted to the Markovian case (see, for example, [2,1]). In this paper, we ask the question: when does a (B)SDE with underlying noise from a Markov chain admit a ‘Markovian’ solution, that is, one which can be written as a deterministic function of the current state of the chain?
As we shall see, such a property implies strong restrictions on the parameters of the (B)SDE. However, these restrictions form a type of nonlinear Feynman-Kac result, connecting solutions of these SDEs to solutions of coupled systems of ODEs. This connection yields simple methods of obtaining numerical solutions to a wide class of BSDEs in this context.
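The coupled systems of ODEs mentioned above are what make fast numerical evaluation possible: once the Markovian solution is known to be a vector-valued function $u(t)$ of time (one component per state of the chain), the BSDE reduces to integrating that system backward from its terminal value. The sketch below is purely schematic and not the paper's algorithm: it assumes a generic system of the form $du/dt = -g(t,u)$ with $u(T)$ given, where the driver `g` is a hypothetical stand-in for the coupled system derived later in the paper.

```python
import numpy as np

def solve_backward(g, xi, T, steps=1000):
    """Integrate u'(t) = -g(t, u(t)) backward from u(T) = xi (explicit Euler)."""
    dt = T / steps
    u = np.asarray(xi, dtype=float).copy()
    for k in range(steps, 0, -1):
        # step from time k*dt back to (k-1)*dt
        u = u + dt * g(k * dt, u)
    return u  # approximation of u(0)

# Hypothetical linear driver g(t, u) = A^T u for a 2-state chain, purely to
# exercise the solver; A's columns sum to zero, as for a rate matrix.
A = np.array([[-1.0, 2.0],
              [1.0, -2.0]])
u0 = solve_backward(lambda t, u: A.T @ u, xi=[1.0, 0.0], T=1.0)
```

For this linear test driver the exact answer is $u(0) = e^{A^* T} u(T)$, so the quality of the Euler sweep is easy to check; any standard ODE integrator could be substituted.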
Consider a continuous-time finite-state Markov chain $X$ on a probability space $(\Omega, P)$. (The case where $X$ is a countable-state process can also be treated in this manner; we exclude it only for technical simplicity.) Without loss of generality, we shall represent $X$ as taking values from the standard basis vectors $e_i$ of $\mathbb{R}^N$, where $N$ is the number of states. An element $\omega \in \Omega$ can be thought of as describing a path of the chain $X$.
Let $\{\mathcal{F}_t\}$ be the completion of the filtration generated by $X$, that is,
$$\mathcal{F}_t = \sigma(\{X_s : s \le t\}) \vee \mathcal{N},$$
where $\mathcal{N}$ denotes the collection of $P$-null sets.
As $X$ is a right-continuous pure-jump process which does not jump at time $0$, this filtration is right-continuous. We assume that $X_0$ is deterministic, so $\mathcal{F}_0$ is the completion of the trivial $\sigma$-algebra.
Let $A$ denote the rate matrix of the chain $X$. As we do not assume time-homogeneity, $A$ is permitted to vary (deterministically) through time. We shall assume for simplicity that the rate of jumping from any state is bounded, that is, all components of $A$ are uniformly bounded in time. Note that $(A_t)_{ij} \ge 0$ for $i \ne j$ and $\sum_i (A_t)_{ij} = 0$ for all $j$ (the columns of $A$ all sum to zero).
It will also be convenient to assume that $P(X_t = e_i) > 0$ for every $t > 0$ and every basis vector $e_i \in \mathbb{R}^N$, that is, there is instant access from our starting state to any other state of the chain. None of our results depend on this assumption in any significant way; without it, however, we would constantly be forced to specify peculiar null sets corresponding to states which cannot be accessed before time $t$. If we were to assume time-homogeneity (that is, $A$ constant in $t$), this assumption would simply be that our chain is irreducible. Note, however, that this assumption does not mean that $A_{ij} > 0$ for all $i \ne j$.
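These conventions on $A$ can be made concrete with a short simulation. The sketch below is our illustration, not from the paper: it samples a path of a time-homogeneous chain using the standard exponential-holding-time construction, where under the column-sum-zero convention $-A_{ii}$ is the total rate of leaving state $i$ and $A_{ji}/(-A_{ii})$ the probability that the next state is $j$; it also checks the sign and column-sum conditions stated above.

```python
import numpy as np

def simulate_path(A, i0, T, rng):
    """Sample the jump times and states of the chain on [0, T], starting in i0."""
    N = A.shape[0]
    assert np.allclose(A.sum(axis=0), 0.0)         # columns sum to zero
    assert np.all(A[~np.eye(N, dtype=bool)] >= 0)  # off-diagonal rates >= 0
    t, i = 0.0, i0
    path = [(0.0, i0)]
    while True:
        rate = -A[i, i]                   # total rate of leaving state i
        if rate <= 0:                     # absorbing state
            break
        t += rng.exponential(1.0 / rate)  # exponential holding time
        if t >= T:
            break
        probs = A[:, i].copy()            # A[j, i] is the rate of jumping i -> j
        probs[i] = 0.0
        i = int(rng.choice(N, p=probs / rate))
        path.append((t, i))
    return path

rng = np.random.default_rng(0)
A = np.array([[-0.5, 0.3],
              [0.5, -0.3]])
path = simulate_path(A, 0, 10.0, rng)
```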
From a notational perspective, as $e_i$ denotes the $i$th standard basis vector in $\mathbb{R}^N$, the $i$th component of a vector $v$ is written $e_i^* v$, where $(\cdot)^*$ denotes vector transposition. This implies that useful quantities can be written simply in terms of vector products; for example, $I_{\{X_t = e_i\}} = e_i^* X_t$.
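As a small illustrative rendering of this convention (our example, with arbitrary values):

```python
import numpy as np

# States encoded as standard basis vectors: the rows of the identity are
# e_1, ..., e_N, so e_i^* v extracts the i-th component of v, and
# e_i^* X_t is the indicator that the chain sits in state i.
N = 4
e = np.eye(N)    # e[i] is the basis vector e_{i+1}
X_t = e[2]       # chain currently in the third state

assert e[2] @ X_t == 1.0   # indicator I_{X_t = e_3}
assert e[0] @ X_t == 0.0
v = np.array([10.0, 20.0, 30.0, 40.0])
assert e[2] @ v == 30.0    # e_i^* v is the i-th component of v
```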
We now relate our Markov chain to an $N$-dimensional martingale process, with which we can study SDEs. To do this, we write our chain in the following way:
$$X_t = X_0 + \int_0^t A_u X_u \, du + M_t,$$
where $M$ is a locally-finite-variation pure-jump martingale in $\mathbb{R}^N$. Our attention is then on the properties of stochastic integrals with respect to $M$. We shall make some use of the following seminorm, which arises from the Itō isometry.

Definition 1. Let $Z$ be a vector in $\mathbb{R}^N$. Define the stochastic seminorm
$$\|Z\|_{M_t}^2 := \mathrm{Tr}\big(Z Z^* \Psi(A_t, X_{t-})\big),$$
where $\mathrm{Tr}$ denotes the trace and
$$\Psi(A_t, X_{t-}) := \frac{d\langle M, M\rangle_t}{dt} = \mathrm{diag}(A_t X_{t-}) - A_t\,\mathrm{diag}(X_{t-}) - \mathrm{diag}(X_{t-})\,A_t^*$$
the matrix of derivatives of the quadratic covariation matrix of $M$. This seminorm has the property that
$$E\Big[\Big(\int_{]0,t]} Z_u^* \, dM_u\Big)^2\Big] = E\Big[\int_{]0,t]} \|Z_u\|_{M_u}^2 \, du\Big]$$
for any predictable process $Z$ of appropriate dimension. We define the equivalence relation $\sim_M$ on the space of predictable processes by $Z \sim_M Z'$ if and only if $\|Z_t - Z'_t\|_{M_t} = 0$, $dt \times dP$-a.s.
Remark 1. A consequence of this choice of seminorm is that $Z + c\mathbf{1} \sim_M Z$ for any $Z$ and any predictable scalar process $c$, where $\mathbf{1}$ is the vector with all components equal to one. This is simply because $\sum_i e_i^* A e_j = \sum_i A_{ij} = 0$ for all $j$, and so all row and column sums of $\Psi(A_t, X_t)$ are zero.

Theorem 1. Every scalar square-integrable martingale $L$ can be written in the form
$$L_t = L_0 + \int_{]0,t]} Z_u^* \, dM_u$$
for some predictable process $Z$ taking values in $\mathbb{R}^N$. The process $Z$ is unique up to the equivalence $\sim_M$.
Proof. See [4].
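The vanishing row and column sums invoked in Remark 1 are easy to verify numerically. The sketch below (our illustration, with an arbitrary rate matrix) assumes the standard formula $\Psi(A, x) = \mathrm{diag}(Ax) - A\,\mathrm{diag}(x) - \mathrm{diag}(x)A^*$ for the derivative of the quadratic covariation of $M$, and checks directly that the seminorm $\|Z\|^2 = Z^* \Psi Z$ is unchanged by shifts $Z \mapsto Z + c\mathbf{1}$:

```python
import numpy as np

def psi(A, x):
    """Psi(A, x) = diag(Ax) - A diag(x) - diag(x) A^*."""
    return np.diag(A @ x) - A @ np.diag(x) - np.diag(x) @ A.T

# Arbitrary rate matrix (columns sum to zero) with the chain in state e_1.
A = np.array([
    [-0.5,  0.3,  0.2],
    [ 0.1, -0.4,  0.3],
    [ 0.4,  0.1, -0.5],
])
x = np.array([1.0, 0.0, 0.0])
P = psi(A, x)

# All row and column sums of Psi vanish (Remark 1) ...
assert np.allclose(P.sum(axis=0), 0.0) and np.allclose(P.sum(axis=1), 0.0)

# ... so adding a multiple of the ones vector to Z leaves the seminorm fixed.
seminorm_sq = lambda z: z @ P @ z
Z = np.array([1.0, -2.0, 0.5])
assert np.isclose(seminorm_sq(Z), seminorm_sq(Z + 3.0 * np.ones(3)))
```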
The key SDEs which we shall study are equations of the form
$$dY_t = f(t, Y_{t-}, Z_t)\, dt + Z_t^* \, dM_t,$$
where the driver $f$ is predictable and respects the equivalence $\sim_M$, in the sense that whenever $Z \sim_M Z'$,
$$f(t, Y_{t-}, Z_t) = f(t, Y_{t-}, Z'_t)$$
up to indistinguishability. This assumption is important, as it ensures that $f$ depends on $Z$ only in the way that $Z$ affects the integral $\int Z^* \, dM$.
In terms of