Optimal double stopping time

Reading time: 6 minutes
...

📝 Original Info

  • Title: Optimal double stopping time
  • ArXiv ID: 0909.3363
  • Date: 2009-09-21
  • Authors: see the original ArXiv paper

📝 Abstract

We consider the optimal double stopping time problem defined for each stopping time $S$ by $v(S)=\operatorname{ess\,sup}\{E[\psi(\tau_1, \tau_2) \mid \mathcal{F}_S],\ \tau_1, \tau_2 \geq S\}$. Following the optimal one stopping time problem, we study the existence of optimal stopping times and give a method to compute them. The key point is the construction of a *new reward* $\phi$ such that the value function $v(S)$ satisfies $v(S)=\operatorname{ess\,sup}\{E[\phi(\tau) \mid \mathcal{F}_S],\ \tau \geq S\}$. Finally, we give an example of an American option with double exercise time.


📄 Full Content

Our present work on the optimal double stopping time problem consists, following the optimal one stopping time problem, in proving the existence of the maximal reward, finding necessary or sufficient conditions for the existence of optimal stopping times, and giving a method to compute these optimal stopping times.

The results are well known in the case of the optimal one stopping time problem. Consider a reward given by an RCLL positive adapted process $(\phi_t,\ 0 \leq t \leq T)$ on $\mathbb{F} = (\Omega, \mathcal{F}, (\mathcal{F}_t)_{0 \leq t \leq T}, P)$, with $\mathbb{F}$ satisfying the usual conditions, and look for the maximal reward

$$v(0) = \sup\{\,E[\phi_\tau],\ \tau \in \mathcal{T}_0\,\},$$

where $\mathcal{T}_0$ is the set of stopping times with values in $[0, T]$. In order to compute $v(0)$ we introduce for each $S \in \mathcal{T}_0$ the value function $v(S) = \operatorname{ess\,sup}\{\,E[\phi_\tau \mid \mathcal{F}_S],\ \tau \in \mathcal{T}_S\,\}$, where $\mathcal{T}_S$ is the set of stopping times in $\mathcal{T}_0$ greater than $S$. The family $\{v(S),\ S \in \mathcal{T}_0\}$ can be aggregated into an RCLL adapted process $(v_t,\ 0 \leq t \leq T)$ such that $v_S = v(S)$. The process $(v_t,\ 0 \leq t \leq T)$ is the Snell envelope of $(\phi_t,\ 0 \leq t \leq T)$, that is, the smallest supermartingale that dominates $\phi$.
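In discrete time, the Snell envelope is computed by backward induction: $v_N = \phi_N$ and $v_n = \max(\phi_n,\ E[v_{n+1} \mid \mathcal{F}_n])$. A minimal numerical sketch on a recombining binomial tree with an American-put style reward (all parameters here are illustrative, not from the paper):

```python
import numpy as np

# Minimal sketch: Snell envelope of a reward process by backward induction
# on a recombining binomial tree. All parameters are illustrative.
N = 50
u, d, p = 1.02, 0.98, 0.5   # up/down factors and up-probability
S0, K = 100.0, 100.0        # initial price and put strike

def payoff(s):
    # the reward phi_t: an American-put style payoff
    return np.maximum(K - s, 0.0)

# terminal layer: v_N = phi_N
j = np.arange(N + 1)
v = payoff(S0 * u**j * d**(N - j))

# backward induction: v_n = max(phi_n, E[v_{n+1} | F_n])
for n in range(N - 1, -1, -1):
    j = np.arange(n + 1)
    cont = p * v[1:n + 2] + (1 - p) * v[0:n + 1]   # continuation value
    v = np.maximum(payoff(S0 * u**j * d**(n - j)), cont)

print(v[0])   # v(0) = sup over stopping times of E[phi_tau]
```

At each node the envelope is the larger of the immediate reward and the continuation value, so by construction it is the smallest supermartingale dominating the reward.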

Moreover, when the reward $(\phi_t,\ 0 \leq t \leq T)$ is continuous, an optimal stopping time is given by

$$\theta^*(S) = \inf\{\,t \geq S,\ v_t = \phi_t\,\}. \tag{0.1}$$

We show in the present work that computing the value function for the optimal double stopping times problem

$$v(S) = \operatorname{ess\,sup}\{\,E[\psi(\tau_1, \tau_2) \mid \mathcal{F}_S],\ \tau_1, \tau_2 \in \mathcal{T}_S\,\}$$

reduces to computing the value function for an optimal one stopping time problem

$$v(S) = \operatorname{ess\,sup}\{\,E[\phi(\tau) \mid \mathcal{F}_S],\ \tau \in \mathcal{T}_S\,\},$$

where the new reward $\phi$ is no longer an RCLL process but a family $\{\phi(\theta),\ \theta \in \mathcal{T}_0\}$ of positive random variables which satisfy some compatibility properties.
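A minimal discrete sketch of this reduction, assuming the separable reward $\psi(\tau_1, \tau_2) = g(X_{\tau_1}) + g(X_{\tau_2})$ (a simplification of the paper's general $\psi$; all parameters are illustrative): by symmetry the new reward is $\phi(\theta) = g(X_\theta) + u(\theta)$, where $u(\theta)$ is the single-stopping value from $\theta$, and the double-stopping value is the Snell envelope of $\phi$. In this separable case the two choices decouple, so the value at time $0$ must equal twice the single-stopping value, which gives a sanity check on the reduction:

```python
import numpy as np

# Illustrative reduction for a separable reward psi(t1, t2) = g(X_t1) + g(X_t2):
# new reward phi(theta) = g(X_theta) + u(theta), with u the single-stopping
# value, then take the Snell envelope of phi. All parameters are illustrative.
N = 40
u_f, d_f, p = 1.05, 0.95, 0.5   # binomial up/down factors and up-probability
S0, K = 100.0, 100.0

def g(s):
    # one-exercise reward: put payoff
    return np.maximum(K - s, 0.0)

def nodes(n):
    j = np.arange(n + 1)
    return S0 * u_f**j * d_f**(n - j)

def snell(reward_at):
    # backward induction: smallest supermartingale dominating the reward
    vals = [None] * (N + 1)
    vals[N] = reward_at(N)
    for n in range(N - 1, -1, -1):
        cont = p * vals[n + 1][1:n + 2] + (1 - p) * vals[n + 1][0:n + 1]
        vals[n] = np.maximum(reward_at(n), cont)
    return vals

w = snell(lambda n: g(nodes(n)))            # single-stopping values u(theta)
v2 = snell(lambda n: g(nodes(n)) + w[n])    # double-stopping value via new reward phi

print(w[0][0], v2[0][0])
```

Running this, `v2[0][0]` agrees with `2 * w[0][0]` up to floating-point error, exactly as the decoupling argument predicts for a separable reward.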

In section 1, we revisit the optimal one stopping time problem for admissible families. In section 2, we solve the optimal two stopping times problem. In section 3, we give an example of an American exchange option with double exercise time.

Let $\mathbb{F} = (\Omega, \mathcal{F}, (\mathcal{F}_t)_{0 \leq t \leq T}, P)$ be a probability space equipped with a filtration $(\mathcal{F}_t)_{0 \leq t \leq T}$ satisfying the usual conditions of right continuity and augmentation by the null sets of $\mathcal{F} = \mathcal{F}_T$. We suppose that $\mathcal{F}_0$ contains only sets of probability 0 or 1. The time horizon is a fixed constant $T \in [0, \infty[$. We denote by $\mathcal{T}_0$ the collection of stopping times of $\mathbb{F}$ with values in $[0, T]$. More generally, for any stopping time $S$, we denote by $\mathcal{T}_S$ (resp. $\mathcal{T}^S$) the class of stopping times $\theta \in \mathcal{T}_0$ with $S \leq \theta$ a.s. (resp. $\theta \leq S$ a.s.). Also, we will use the following notation: we write $t_n \uparrow t$ if $\lim_{n \to \infty} t_n = t$ with $t_n \leq t$ for each $n$.

1 The optimal one stopping time problem revisited

Definition 1.1. We say that a family $\{\phi(\theta),\ \theta \in \mathcal{T}_0\}$ is admissible if it satisfies the following conditions: 1) for all $\theta \in \mathcal{T}_0$, $\phi(\theta)$ is an $\mathcal{F}_\theta$-measurable positive random variable (r.v.); 2) for all $\theta, \theta' \in \mathcal{T}_0$, $\phi(\theta) = \phi(\theta')$ a.s. on $\{\theta = \theta'\}$.

If $(\phi_t,\ 0 \leq t \leq T)$ is a progressive process, then the family defined by $\{\phi(\theta) = \phi_\theta,\ \theta \in \mathcal{T}_0\}$ is admissible.

The value function at time $S$, where $S \in \mathcal{T}_0$, is given by

$$v(S) = \operatorname{ess\,sup}\{\,E[\phi(\theta) \mid \mathcal{F}_S],\ \theta \in \mathcal{T}_S\,\}. \tag{1.2}$$

Proposition 1.1. Let $\{\phi(\theta),\ \theta \in \mathcal{T}_0\}$ be an admissible family; then the family of r.v. $\{v(S),\ S \in \mathcal{T}_0\}$ is admissible and is a supermartingale system, that is, for any stopping times $\theta, \theta' \in \mathcal{T}_0$ such that $\theta \geq \theta'$ a.s., $E[v(\theta) \mid \mathcal{F}_{\theta'}] \leq v(\theta')$ a.s.

Definition 1.2. An admissible family $\{\phi(\theta),\ \theta \in \mathcal{T}_0\}$ is said to be right (resp. left) continuous along stopping times in expectation if for any $\theta \in \mathcal{T}_0$ and for any sequence $\theta_n \downarrow \theta$ (resp. $\theta_n \uparrow \theta$), one has $\lim_{n \to \infty} E[\phi(\theta_n)] = E[\phi(\theta)]$.

Recall the following classical lemma (see El Karoui (1979)).

Lemma 1.1. Let $\{\phi(\theta),\ \theta \in \mathcal{T}_0\}$ be an admissible family, right continuous along stopping times in expectation, such that $E[\operatorname{ess\,sup}_{\theta \in \mathcal{T}_0} \phi(\theta)] < \infty$. Then the family $\{v(S),\ S \in \mathcal{T}_0\}$ is right continuous along stopping times in expectation.

We will now state the existence of an optimal stopping time under quite weak assumptions. For each $S \in \mathcal{T}_0$, let us introduce the following $\mathcal{F}_S$-measurable random variable $\theta^*(S)$ defined by

$$\theta^*(S) = \operatorname{ess\,inf}\{\,\theta \in \mathcal{T}_S,\ v(\theta) = \phi(\theta)\ \text{a.s.}\,\}. \tag{1.3}$$

Note that $\theta^*(S)$ is a stopping time. Indeed, for each $S \in \mathcal{T}_0$, one can easily show that the set $\mathcal{T}_S^* = \{\theta \in \mathcal{T}_S,\ v(\theta) = \phi(\theta)\ \text{a.s.}\}$ is closed under pairwise minimization. By a classical result, there exists a sequence $(\theta_n)_{n \in \mathbb{N}}$ of stopping times in $\mathcal{T}_S^*$ such that $\theta_n \downarrow \theta^*(S)$ a.s. Furthermore, we state the following theorem, which generalizes the classical existence result of optimal stopping times to the case of a reward given by an admissible family of r.v. (instead of an RCLL adapted process).

Theorem 1.1. Suppose that $E[\operatorname{ess\,sup}_{\theta \in \mathcal{T}_0} \phi(\theta)] < \infty$. Let $\{\phi(\theta),\ \theta \in \mathcal{T}_0\}$ be an admissible family, right and left continuous along stopping times in expectation. Let $\{v(S),\ S \in \mathcal{T}_0\}$ be the family of value functions defined by (1.2). Then, for each $S \in \mathcal{T}_0$, the stopping time $\theta^*(S)$ defined by (1.3) is optimal for $v(S)$.

Remark 1.2. When $\phi$ is given by a right continuous left limited (RCLL) adapted process, since the value function can be aggregated by an RCLL process $(v_t)_{t \in [0,T]}$, $\theta^*(S)$ satisfies equality (0.1) a.s.
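In the discrete setting, this optimal time is simply the first step at which the Snell envelope meets the immediate reward along a path. A hypothetical illustration on a binomial tree (all names and parameters are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical discrete illustration: theta* is the first time the Snell
# envelope v equals the immediate reward phi along a simulated path.
# All parameters are illustrative.
rng = np.random.default_rng(0)
N = 50
u, d, p = 1.01, 0.99, 0.5   # binomial up/down factors and up-probability
S0, K = 100.0, 105.0        # initial price and put strike

def phi(s):
    # immediate reward: put payoff
    return np.maximum(K - s, 0.0)

# backward induction for the Snell envelope on the tree
v = [None] * (N + 1)
j = np.arange(N + 1)
v[N] = phi(S0 * u**j * d**(N - j))
for n in range(N - 1, -1, -1):
    j = np.arange(n + 1)
    cont = p * v[n + 1][1:n + 2] + (1 - p) * v[n + 1][0:n + 1]
    v[n] = np.maximum(phi(S0 * u**j * d**(n - j)), cont)

# walk one random path; stop the first time v_n = phi_n (theta*)
ups = 0
theta_star = N
for n in range(N + 1):
    if np.isclose(v[n][ups], phi(S0 * u**ups * d**(n - ups))):
        theta_star = n
        break
    if n < N:
        ups += int(rng.integers(0, 2))   # 0 = down move, 1 = up move
print(theta_star)
```

Here the stopping rule never fires at time 0: with no discounting the put payoff is a submartingale under the martingale price, so the envelope strictly exceeds the intrinsic value at the root and it is optimal to wait.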

Sketch of the proof: Fix $S \in \mathcal{T}_0$. As in the case of a reward process, we begin by constructing a family of stopping times that are approximately optimal. For $\lambda \in\ ]0, 1[$, define the $\mathcal{F}_S$-measurable random variable $\theta^\lambda(S)$ by

$$\theta^\lambda(S) = \operatorname{ess\,inf}\{\,\theta \in \mathcal{T}_S,\ \lambda v(\theta) \leq \phi(\theta)\ \text{a.s.}\,\}.$$

For all $S \in \mathcal{T}_0$, the function $\lambda \mapsto \theta^\lambda(S)$ is increasing on $]0, 1[$ …

…(Full text truncated)…

📸 Image Gallery

cover.png

Reference

This content is AI-processed based on ArXiv data.
