A Truncation Approach for Fast Computation of Distribution Functions

Reading time: 7 minutes

📝 Original Info

  • Title: A Truncation Approach for Fast Computation of Distribution Functions
  • ArXiv ID: 0802.3455
  • Date: 2008-08-07
  • Authors: As listed in the original ArXiv paper

📝 Abstract

In this paper, we propose a general approach for improving the efficiency of computing distribution functions. The idea is to truncate the domain of summation or integration.

💡 Deep Analysis

This research explores the key findings and methodology presented in the paper: A Truncation Approach for Fast Computation of Distribution Functions.


📄 Full Content

Theorem 1 Let u_i, v_i, α_i, β_i be real numbers such that

Then,

Proof. Obviously, P′ ≤ P is true since D′ is a subset of D. Thus, it suffices to show P ≤ P′ + Σ_{i=1}^{m} (α_i + β_i). Note that

where

By the definitions of P and P′,

Hence,

This completes the proof of the theorem.

To ensure that P′ ≤ P ≤ P′ + η for a prescribed η > 0, it suffices to choose
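The displayed choice did not survive extraction; a natural split is α_i = β_i = η/(2m), so that the total truncation error Σ_{i=1}^{m} (α_i + β_i) is at most η. As a minimal sketch of the truncation idea (not the paper's own code), the snippet below bounds P = Pr{X_1 + X_2 ≤ c} for two independent Poisson variables by summing the joint probability mass over a truncated rectangle; the function name and the use of scipy's quantile functions to obtain u_i and v_i are illustrative assumptions.

```python
import numpy as np
from scipy.stats import poisson

def truncated_prob(c, mus, eta=1e-9):
    """Bounds on P = Pr{X_1 + X_2 <= c} for independent Poisson X_i, obtained by
    summing the joint pmf over the truncated domain D' = D ∩ [u_1, v_1] x [u_2, v_2],
    so that P' <= P <= P' + sum_i (alpha_i + beta_i) as in Theorem 1."""
    m = len(mus)
    alpha = beta = eta / (2 * m)                   # split the error budget evenly
    # u_i, v_i chosen so that Pr{X_i < u_i} <= alpha and Pr{X_i > v_i} <= beta
    u = [int(poisson.ppf(alpha, mu)) for mu in mus]
    v = [int(poisson.isf(beta, mu)) for mu in mus]
    k1 = np.arange(u[0], v[0] + 1)
    k2 = np.arange(u[1], v[1] + 1)
    joint = np.outer(poisson.pmf(k1, mus[0]), poisson.pmf(k2, mus[1]))
    inside = (k1[:, None] + k2[None, :]) <= c      # the part of D inside the rectangle
    p_lower = joint[inside].sum()
    return p_lower, p_lower + eta                  # P' <= P <= P' + eta

print(truncated_prob(c=25, mus=(10.0, 12.0)))
```

Here a modest finite rectangle of pmf terms replaces the unbounded double sum over D.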

As can be seen from Theorem 1, a critical step is to determine u and v for a random variable X such that Pr{X ≤ u} ≤ α and Pr{X ≥ v} ≤ β for prescribed α, β ∈ (0, 1). For this purpose, we have the following theorem.

Theorem 2 Let X be a random variable with mean

Then the following statements hold true: (I) For any z > µ, Pr{X ≥ z} ≤ C(z).

(II) For any z < µ, Pr{X ≤ z} ≤ C(z).

(III) Both C(µ + ∆) and C(µ − ∆) are monotonically decreasing with respect to ∆ > 0.

(IV) For any α ∈ (0, 1), there exists a unique ∆ > 0 such that C(µ − ∆) = α. (V) For any β ∈ (0, 1), there exists a unique ∆ > 0 such that C(µ + ∆) = β.

Proof. By Jensen’s inequality, E[e^{t(X−z)}] ≥ e^{t·E[X−z]}.

Hence, if z < µ, we have E[e^{t(X−z)}] ≥ e^{t·E[X−z]} ≥ 1 for t ≥ 0. Similarly, if z > µ, we have E[e^{t(X−z)}] ≥ e^{t·E[X−z]} ≥ 1 for t ≤ 0. Combining these observations and the fact that

we have

By the Chernoff bounds [1],

for z > µ. This completes the proof of statements (I) and (II).
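The displayed definition of C(z) is missing from this extraction; the proof's use of Jensen's inequality and of the Chernoff bounds suggests the usual reading C(z) = inf_t E[e^{t(X−z)}]. Under that assumption (and it is only an assumption here), the following sketch evaluates C(z) numerically for a Poisson variable with mean λ = 10 and checks statements (I)-(III): the value dominates each exact tail probability and shrinks as z moves away from the mean.

```python
import math
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

LAM = 10.0   # X ~ Poisson(LAM), so the mean is mu = LAM

def C(z, lam=LAM):
    """Assumed Chernoff-type bound C(z) = inf_t E[exp(t(X - z))], obtained by
    minimizing log E[exp(t(X - z))] = lam*(e^t - 1) - t*z over t."""
    res = minimize_scalar(lambda t: lam * math.expm1(t) - t * z,
                          bounds=(-50.0, 50.0), method="bounded")
    return math.exp(res.fun)

for z in (12, 14, 16):   # right tail: Pr{X >= z} <= C(z), and C(z) decreases in z
    print(z, poisson.sf(z - 1, LAM), C(z))
for z in (8, 6, 4):      # left tail: Pr{X <= z} <= C(z), and C(z) decreases as z drops
    print(z, poisson.cdf(z, LAM), C(z))
```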

To show that C(µ + ∆) is monotonically decreasing with respect to ∆ > 0, let t_∆ be the number such that inf

Then, t_∆ is positive and

It follows that

Similarly, to show that C(µ − ∆) is monotonically decreasing with respect to ∆ > 0, let t_∆ be the number such that inf

Then, t_∆ is negative and

Consequently,

This concludes the proof of statement (III).

To show statement (IV), note that

and that lim

Hence, (IV) follows from (1), (2) and the fact that C(µ − ∆) is monotonically decreasing with respect to ∆ > 0.

To show statement (V), note that

and that lim

Hence, (V) follows from (3), (4) and the fact that C(µ + ∆) is monotonically decreasing with respect to ∆ > 0.

As can be seen from Theorem 2, since C(µ − ∆) is monotonically decreasing with respect to ∆ > 0, we can determine ∆ > 0 such that C(µ − ∆) = α by a bisection search. Then, setting u = µ − ∆ yields Pr{X ≤ u} ≤ α as desired. Similarly, we can determine ∆ > 0 such that C(µ + ∆) = β by a bisection search and set v = µ + ∆ to ensure Pr{X ≥ v} ≤ β.
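A minimal sketch of this bisection step (not the paper's code): a generic routine that, for any bound C(µ − ∆) decreasing continuously in ∆, locates the ∆ with C(µ − ∆) = α. The demo uses the closed-form Chernoff bound exp(−n∆²/(2σ²)) for the average of n i.i.d. N(µ, σ²) observations; the function names and the search interval are illustrative choices.

```python
import math

def delta_for_level(C, level, hi, tol=1e-12):
    """Bisection for Delta in (0, hi) with C(Delta) = level, assuming C is
    continuous, monotonically decreasing, and C(0) >= level >= C(hi)."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if C(mid) > level:
            lo = mid          # bound still above the target level: enlarge Delta
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Demo: X is the average of n i.i.d. N(mu, sigma^2) samples, for which the
# Chernoff bound has the closed form C(mu -/+ Delta) = exp(-n*Delta^2/(2*sigma^2)).
mu, sigma, n, alpha, beta = 5.0, 2.0, 100, 1e-9, 1e-9
chernoff = lambda d: math.exp(-n * d * d / (2.0 * sigma * sigma))
u = mu - delta_for_level(chernoff, alpha, hi=10.0 * sigma)   # Pr{X <= u} <= alpha
v = mu + delta_for_level(chernoff, beta, hi=10.0 * sigma)    # Pr{X >= v} <= beta
print(u, v)
```

For this Gaussian form the inverse is of course explicit; the bisection earns its keep for bounds such as the Kullback–Leibler form sketched after the next paragraph, which has no closed-form inverse.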

The approach of reducing the domain D to its subset D′ is referred to as the truncation technique in this paper. By Chebyshev’s inequality, it can be seen that if the variances of the X_i are small, then the size of the truncated domain D′ can be much smaller than that of the domain D, even when η is extremely small. For the truncation technique to be of practical use, it is desirable that the functions C(z) associated with the X_i have closed forms. This is indeed the case for many important distributions. For example, when X is the average of i.i.d. Bernoulli random variables Y_1, …, Y_n such that Pr{Y_i = 1} = p for 1 ≤ i ≤ n, Hoeffding’s inequality [2] asserts that
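The displayed inequality did not survive extraction; the bound being invoked is presumably the Chernoff–Hoeffding form Pr{X ≥ z} ≤ exp(−n·D(z ‖ p)) for p < z < 1 and Pr{X ≤ z} ≤ exp(−n·D(z ‖ p)) for 0 < z < p, where D(z ‖ p) = z ln(z/p) + (1 − z) ln((1 − z)/(1 − p)) is the Bernoulli Kullback–Leibler divergence. Treating that as an assumption, the sketch below checks this closed-form C(z) against exact binomial tail probabilities; it can also be fed to the bisection routine above.

```python
import math
from scipy.stats import binom

def bern_kl(a, b):
    """Bernoulli Kullback-Leibler divergence D(a || b), with 0*log 0 = 0."""
    term = lambda x, y: 0.0 if x == 0.0 else x * math.log(x / y)
    return term(a, b) + term(1.0 - a, 1.0 - b)

def hoeffding_C(z, p, n):
    """Assumed closed-form tail bound exp(-n*D(z || p)) for the average X of
    n i.i.d. Bernoulli(p) variables."""
    return math.exp(-n * bern_kl(z, p))

n, p = 100, 0.3
for z in (0.4, 0.5):   # right tail: Pr{X >= z} = Pr{K >= n*z} with K ~ Binomial(n, p)
    print(z, binom.sf(math.ceil(n * z) - 1, n, p), hoeffding_C(z, p, n))
for z in (0.2, 0.1):   # left tail: Pr{X <= z} = Pr{K <= n*z}
    print(z, binom.cdf(math.floor(n * z), n, p), hoeffding_C(z, p, n))
```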

For another example, when X is the average of i.i.d.

where

.

Similar truncation techniques can be developed for hypergeometric distribution, negative binomial distribution, hypergeometric waiting-time distribution, etc.

In the case that simple and tight bounds of C(z) are available, it is convenient to use these bounds in the truncation of D. In this regard, we have established the following result.

Theorem 3 Let K be a binomial random variable with parameters n and p, where p ∈ (0, 1) and n is a positive integer. Then, for arbitrary real numbers a, b and any η ∈ (0, 1),

with ⌊·⌋ and ⌈·⌉ denoting the floor and ceiling functions, respectively.

We would like to remark that T^+ − T^− can be much smaller than b − a even though η is chosen as an extremely small positive number.

To prove Theorem 3, we need some preliminary results.

Then, for any fixed µ ∈ (0, 1), M(z, µ) is monotonically increasing from −∞ to 0 as z increases from −2µ to µ, and is monotonically decreasing from 0 to −∞ as z increases from µ to 3 − 2µ. This establishes our claim that ln(µ) < M(1, µ). Since Pr{X_n ≥ 1} = µ^n = exp(n ln(µ)), it follows that Pr{X_n ≥ z} < exp(n M(z, µ)) holds for z = 1.

In Case (iii), since 0 ≤ X_n ≤ 1, we have Pr{X_n ≥ z} = 0 < exp(n M(z, µ)) for z ∈ (1, 3 − 2µ).

To show Pr{X_n ≤ z} < exp(n M(z, µ)) for any z ∈ (−2µ, µ), we shall consider three cases as follows.

In the case of z ∈ (0, µ), we define y = 1 − z and Y_n = 1 − X_n. Applying the bound already established for the right tail to Y_n, whose mean is 1 − µ, we have Pr{Y_n ≥ y} < exp(n M(y, 1 − µ)) = exp(n M(z, µ)) for 1 − µ < y < 1, i.e., 0 < z < µ. This shows that Pr{X_n ≤ z} < exp(n M(z, µ)) holds for z ∈ (0, µ).

In the case of z = 0, we have Pr{X_n ≤ 0} = Pr{X_n = 0} = (1 − µ)^n = exp(n ln(1 − µ)).

We claim that ln(1 − µ) < M(0, µ). To prove this claim, it suffices to show ln(1 − µ) < 9µ/(4(2µ − 3)) for any µ ∈ (0, 1), since M(0, µ) = 9µ/(4(2µ − 3)). For simplicity of notation, define h(µ) = ln(1 − µ) − 9µ/(4(2µ − 3)). Then, the first derivative of h(µ) with respect to µ is

h′(µ) = −1/(1 − µ) + 27/(4(2µ − 3)²) = [27(1 − µ) − 4(2µ − 3)²] / [4(1 − µ)(2µ − 3)²] < 0

for any µ ∈ (0, 1), since the numerator equals −16µ² + 21µ − 9, which is negative for all real µ. This implies that h(µ) is monotonically decreasing with respect to µ ∈ (0, 1). By virtue of this monotonicity and the fact that h(0) = 0, we can conclude that h(µ) < 0 for any µ ∈ (0, 1). This establishes our claim that ln(1 − µ) < M(0, µ). It follows that Pr{X_n ≤ z} < exp(n M(z, µ)) holds for z = 0.

In the case of z ∈ (−2µ, 0), since 0 ≤ X_n ≤ 1, we have Pr{X_n ≤ z} = 0 < exp(n M(z, µ)) for z ∈ (−2µ, 0). This completes the proof of the lemma.

Now we are in a position to prove Theorem 3. By Lemma 1, we have that, for any η ∈ (0, 1), there exist two real numbers z_1 ∈ (−2p, p) and z_2 ∈ (p, 3 − 2p) such that exp(n M(z_1, p)) = exp(n M(z_2, p)) = η/2. Observing that exp(n M(z, p)) = η/2 can be transformed into a quadratic equation with respect to z, we can obtain explicit expressions for z_1 and z_2. On the other hand, Pr{T^− ≤ K ≤ T^+} ≤ Pr{a ≤ K ≤ b} is trivially true. This completes the proof of Theorem 3.
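The explicit expressions for z_1 and z_2 are missing from this extraction, so the sketch below recovers them numerically instead, assuming M(z, µ) = −9(z − µ)² / (2(z + 2µ)(3 − 2µ − z)); this form is an assumption, but it agrees with the value M(0, µ) = 9µ/(4(2µ − 3)) used in the proof of Lemma 1 and with the interval (−2µ, 3 − 2µ) on which M is defined. The binomial pmf is then summed only over the truncated range, in the spirit of Theorem 3; the function names are illustrative.

```python
import math
import numpy as np
from scipy.optimize import brentq
from scipy.stats import binom

def M(z, mu):
    # Assumed form of the exponent in Lemma 1; it matches the quoted value
    # M(0, mu) = 9*mu/(4*(2*mu - 3)) and tends to -inf at z = -2*mu and z = 3 - 2*mu.
    return -9.0 * (z - mu) ** 2 / (2.0 * (z + 2.0 * mu) * (3.0 - 2.0 * mu - z))

def truncated_binomial(n, p, a, b, eta=1e-10):
    """Bounds on Pr{a <= K <= b} for K ~ Binomial(n, p), summing the pmf only over
    a truncated range whose two discarded tails each contribute at most eta/2."""
    target = math.log(eta / 2.0) / n           # solve exp(n*M(z, p)) = eta/2 for z
    eps = 1e-9
    z1 = brentq(lambda z: M(z, p) - target, -2.0 * p + eps, p - eps)
    z2 = brentq(lambda z: M(z, p) - target, p + eps, 3.0 - 2.0 * p - eps)
    t_minus = max(int(math.ceil(a)), int(math.ceil(n * z1)), 0)   # T^-
    t_plus = min(int(math.floor(b)), int(math.floor(n * z2)), n)  # T^+
    k = np.arange(t_minus, t_plus + 1)
    lower = binom.pmf(k, n, p).sum()           # Pr{T^- <= K <= T^+}
    return lower, lower + eta                  # lower <= Pr{a <= K <= b} <= lower + eta

print(truncated_binomial(n=10**6, p=0.3, a=250_000, b=350_000))
```

With these inputs only a few thousand pmf terms are summed even though b − a = 100,000, illustrating the remark above that T^+ − T^− can be far smaller than b − a.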


Reference

This content is AI-processed based on open access ArXiv data.
