New upper and lower bounds are presented on the capacity of the free-space optical intensity channel. This channel is characterized by inputs that are nonnegative (representing the transmitted optical intensity) and by outputs that are corrupted by additive white Gaussian noise (because in free space the disturbances arise from many independent sources). Due to battery and safety reasons the inputs are simultaneously constrained in both their average and peak power. For a fixed ratio of the average power to the peak power the difference between the upper and the lower bounds tends to zero as the average power tends to infinity, and the ratio of the upper and lower bounds tends to one as the average power tends to zero. The case where only an average-power constraint is imposed on the input is treated separately. In this case, the difference of the upper and lower bound tends to 0 as the average power tends to infinity, and their ratio tends to a constant as the power tends to zero.
On the Capacity of Free-Space Optical Intensity Channels
We consider a channel model for short-range optical communication in free space, such as the infrared communication between electronic handheld devices. We assume a channel model based on intensity modulation, where the input signal modulates the optical intensity of the emitted light. Thus, the input signal is proportional to the light intensity and is therefore nonnegative. We further assume that at the receiver a front-end photodetector measures the incident optical intensity of the incoming light and produces an output signal that is proportional to the detected intensity. We model the ambient light conditions by a Gaussian disturbance. Moreover, we assume that the line-of-sight component is dominant and ignore any effects of multipath propagation such as fading or inter-symbol interference. Optical communication is restricted not only by battery power but also, for safety reasons, by the maximum allowed peak power. We therefore consider two constraints simultaneously: an average-power constraint E and a peak-power constraint A. The situation where only a peak-power constraint is imposed corresponds to E = A. The case of only an average-power constraint is treated separately.
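As a minimal sketch (not from the paper), the channel model just described — nonnegative intensity inputs under a peak-power constraint and an average-power constraint, corrupted by additive white Gaussian noise — can be simulated as follows; the constraint values A = 2 and E = 1 and the uniform test input are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def intensity_channel(x, sigma=1.0):
    """Free-space optical intensity channel: Y = x + Z with Z ~ N(0, sigma^2).

    The input x is the transmitted optical intensity (hence nonnegative);
    the Gaussian noise models the ambient light at the photodetector.
    """
    x = np.asarray(x, dtype=float)
    assert np.all(x >= 0), "intensity inputs must be nonnegative"
    return x + sigma * rng.normal(size=x.shape)

# Illustrative constraints (hypothetical values): peak power A, average power E.
A, E = 2.0, 1.0
x = rng.uniform(0.0, A, size=10_000)
x *= min(1.0, E / x.mean())  # shrink, if needed, so the empirical mean is <= E
y = intensity_channel(x)
```

Scaling the input down never violates the peak constraint, so both constraints hold after the rescaling step.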
The described system is called the free-space optical intensity channel and has previously been studied in [1], [2], [3], [4], [5]. In [3] it has been proved that the capacity-achieving probability measure for this channel is discrete, and in [4], [5] upper and lower bounds on this channel's capacity have been derived. Related channel models used to describe optical communication are the Poisson channel, see [6], [7], [8], [2] for the discrete-time channel and [9], [10], [11], [12], [13], [14], [15] for the continuous-time channel, and a variation of the free-space optical intensity channel where the noise depends on the input [2, Chapter 4], [16].
In this work we present new upper and lower bounds on the capacity of the free-space optical intensity channel and study the capacity's asymptotic behavior at high and low powers. The gap between the upper and lower bounds never exceeds 1 nat when the ratio of the average-power constraint to the peak-power constraint is larger than 0.03 or when only an average-power constraint is imposed. For the case of simultaneous average-power and peak-power constraints, the upper and lower bounds coincide asymptotically as the available average and peak power tend to infinity with their ratio held fixed, i.e., their difference tends to 0. When the available average and peak power tend to 0 with their ratio held fixed, the ratio of the upper and lower bounds tends to 1. For the case of only an average-power constraint, the proposed upper and lower bounds coincide asymptotically at high power, i.e., their difference tends to 0 as the power tends to infinity. At low power their ratio tends to 2√2. The derivation of the upper bounds is based on a general technique introduced in [17] using a dual expression of mutual information. We will not state it in its full generality but only in the form needed in this paper. For more details and for a proof see [17, Sec. V], [2, Ch. 2].
Proposition 1. Assume a memoryless channel with input alphabet X = ℝ₀⁺ (the nonnegative reals) and output alphabet Y = ℝ, where, conditional on the input x ∈ X, the distribution of the output Y is given by the probability measure W(·|x). Then, for an arbitrary distribution R(·) over Y, the channel capacity under a peak-power constraint A and an average-power constraint E is upper-bounded by

C(A, E) ≤ sup_Q E_Q[ D( W(·|X) ‖ R(·) ) ],    (1)

where the supremum is taken over all probability laws Q on the input X satisfying Q(X > A) = 0 and E_Q[X] ≤ E. Here, D(·‖·) stands for the relative entropy [18, Ch. 2].
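The sketch below illustrates how a duality bound of this form can be evaluated numerically for one particular, illustrative choice of the output law R — here a Gaussian, which is an assumption for demonstration and not the refined output law chosen in the paper. For W(·|x) = N(x, σ²) and Gaussian R the relative entropy has a closed form, and convexity of x ↦ D(W(·|x)‖R) reduces the supremum over feasible input laws Q to two-point laws on {0, A}:

```python
import numpy as np

def kl_gauss(mu0, var0, mu1, var1):
    """D( N(mu0, var0) || N(mu1, var1) ) in nats."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

def duality_upper_bound(A, E, sigma=1.0):
    """Evaluate sup_Q E_Q[ D( W(.|X) || R ) ] for a *Gaussian* trial law R.

    W(.|x) = N(x, sigma^2).  The trial law R = N(E, sigma^2 + A^2/4) is a
    placeholder choice for illustration only.  For Gaussian R the map
    x -> D(W(.|x)||R) is convex in x, so the supremum over input laws Q
    supported on [0, A] with E_Q[X] <= E is attained by a two-point law
    on {0, A}: any Q can be replaced by the two-point law with the same
    mean without decreasing E_Q[D], and the result is linear in the mass
    on A, so only the endpoints need checking.
    """
    mu_R, var_R = E, sigma**2 + A**2 / 4.0
    D0 = kl_gauss(0.0, sigma**2, mu_R, var_R)
    DA = kl_gauss(A, sigma**2, mu_R, var_R)
    p = min(1.0, E / A)  # largest mass on A allowed by the mean constraint
    return max(D0, (1 - p) * D0 + p * DA)

print(duality_upper_bound(A=10.0, E=5.0, sigma=1.0))
```

Any choice of R yields a valid upper bound; a poor choice merely yields a loose one, which is why finding a good R is the first challenge discussed next.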
Proof. See [17, Sec. V].
There are two challenges in using (1). The first is finding a clever choice of the law R that leads to a good upper bound. The second is upper-bounding the supremum on the right-hand side of (1). To handle this second challenge we shall resort to some further bounding, e.g., Jensen's inequality [18, Ch. 2.6].
To derive the lower bounds we apply two different techniques: one for the high-power regime and one for the low-power regime. For high powers we use the entropy power inequality (see Lemma 16) and the theory of entropy-maximizing distributions [18, Ch. 11]. Asymptotically, the differences between these lower bounds and some of the upper bounds derived using duality tend to 0 as the power tends to infinity, and thus the bounds are tight at high power. At low powers we lower-bound capacity by considering binary input distributions, a choice inspired by [19] and [3]. In the cases involving a peak-power constraint, the asymptotic behavior of the corresponding mutual information is studied using [20]. When only an average-power constraint is imposed, a lower bound on the asymptotic behavior of the mutual information is derived. In the cases involving a peak-power constraint the asymptotic expression of the mutual information for binary inputs and some
…(Full text truncated)…
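The binary-input lower-bounding idea at low power can be illustrated numerically (a sketch under assumed parameter values, not the paper's asymptotic analysis): place probability mass on 0 and on the peak A so that both power constraints hold, and evaluate I(X;Y) = h(Y) − h(Y|X) by numerical integration. Since any valid input law yields I(X;Y) ≤ C, this is a capacity lower bound:

```python
import numpy as np

def binary_input_mi(A, p, sigma=1.0, grid=20001, span=12.0):
    """I(X;Y) in nats for X in {0, A} with P[X = A] = p and Y = X + N(0, sigma^2).

    A two-point input with mass p on the peak A meets the peak-power
    constraint by construction and the average-power constraint whenever
    p * A <= E.  h(Y) is computed by numerically integrating -f log f
    over a grid wide enough that the Gaussian tails are negligible.
    """
    y = np.linspace(-span * sigma, A + span * sigma, grid)
    dy = y[1] - y[0]
    norm = np.sqrt(2.0 * np.pi) * sigma
    f = ((1 - p) * np.exp(-y**2 / (2 * sigma**2))
         + p * np.exp(-(y - A)**2 / (2 * sigma**2))) / norm
    integrand = np.zeros_like(f)
    mask = f > 0
    integrand[mask] = f[mask] * np.log(f[mask])
    h_Y = -np.sum(integrand) * dy               # differential entropy of Y
    h_Y_given_X = 0.5 * np.log(2.0 * np.pi * np.e * sigma**2)
    return h_Y - h_Y_given_X

# Low-power example (illustrative values): peak A = 0.5, mass p = E / A = 0.2.
print(binary_input_mi(A=0.5, p=0.2))
```

In the low-power regime the resulting mutual information is small (on the order of p(1−p)A²/(2σ²) nats), consistent with the vanishing capacity as the power tends to 0.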