Beyond Nyquist: Efficient Sampling of Sparse Bandlimited Signals
📝 Abstract
Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its bandlimit in Hz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system’s performance that supports the empirical observations.
📄 Content
Beyond Nyquist: Efficient Sampling of Sparse Bandlimited Signals

Joel A. Tropp, Member, IEEE, Jason N. Laska, Student Member, IEEE, Marco F. Duarte, Member, IEEE, Justin K. Romberg, Member, IEEE, and Richard G. Baraniuk, Fellow, IEEE

Index Terms—analog-to-digital conversion, compressive sampling, sampling theory, signal recovery, sparse approximation

Dedicated to the memory of Dennis M. Healy.

Submitted: 30 January 2009. Revised: 12 September 2009. A preliminary report on this work was presented by the first author at SampTA 2007 in Thessaloniki. JAT was supported by ONR N00014-08-1-0883, DARPA/ONR N66001-06-1-2011 and N66001-08-1-2065, and NSF DMS-0503299. JNL, MFD, and RGB were supported by DARPA/ONR N66001-06-1-2011 and N66001-08-1-2065, ONR N00014-07-1-0936, AFOSR FA9550-04-1-0148, NSF CCF-0431150, and the Texas Instruments Leadership University Program. JR was supported by NSF CCF-515632.

I. INTRODUCTION

The Shannon sampling theorem is one of the foundations of modern signal processing. For a continuous-time signal f whose highest frequency is less than W/2 Hz, the theorem suggests that we sample the signal uniformly at a rate of W Hz. The values of the signal at intermediate points in time are determined completely by the cardinal series

f(t) = Σ_{n∈ℤ} f(n/W) sinc(Wt − n).   (1)

In practice, one typically samples the signal at a somewhat higher rate and reconstructs with a kernel that decays faster than the sinc function [1, Ch. 4].

This well-known approach becomes impractical when the bandlimit W is large because it is challenging to build sampling hardware that operates at a sufficient rate. The demands of many modern applications already exceed the capabilities of current technology. Even though recent developments in analog-to-digital converter (ADC) technologies have increased sampling speeds, state-of-the-art architectures are not yet adequate for emerging applications, such as ultrawideband and radar systems, because of the additional requirements on power consumption [2]. The time has come to explore alternative techniques [3].

[Fig. 1. Block diagram for the random demodulator. The components include a random number generator, a mixer, an accumulator, and a sampler.]

A. The Random Demodulator

In the absence of extra information, Nyquist-rate sampling is essentially optimal for bandlimited signals [4]. Therefore, we must identify other properties that can provide additional leverage. Fortunately, in many applications, signals are also sparse. That is, the number of significant frequency components is often much smaller than the bandlimit allows. We can exploit this fact to design new kinds of sampling hardware.
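As a point of reference for what the demodulator architecture replaces, the cardinal series in Eq. (1) is easy to verify numerically. The sketch below is a toy setup of my own choosing (an arbitrary single tone, and a truncated finite sum standing in for the infinite series), not an example from the paper:

```python
import numpy as np

W = 8.0                                     # bandlimit in Hz; Nyquist-rate sampling at W Hz
n = np.arange(-200, 201)                    # truncate the (infinite) cardinal series
f = lambda t: np.cos(2 * np.pi * 3.0 * t)   # toy signal; highest frequency 3 Hz < W/2
samples = f(n / W)                          # Nyquist-rate samples f(n/W)

def cardinal(t):
    """Evaluate Eq. (1): f(t) = sum_n f(n/W) sinc(W t - n)."""
    # np.sinc(x) = sin(pi x) / (pi x), the normalized convention used in Eq. (1)
    return np.sum(samples * np.sinc(W * t - n))

print(abs(cardinal(0.3) - f(0.3)))          # small truncation error
```

The slow 1/x decay of the sinc kernel is why the truncated sum converges sluggishly, which is exactly the motivation given above for reconstructing with a faster-decaying kernel in practice.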
This paper studies the performance of a new type of sampling system, called a random demodulator, that can be used to acquire sparse, bandlimited signals. Figure 1 displays a block diagram for the system, and Figure 2 describes the intuition behind the design. In summary, we demodulate the signal by multiplying it with a high-rate pseudonoise sequence, which smears the tones across the entire spectrum. Then we apply a lowpass anti-aliasing filter, and we capture the signal by sampling it at a relatively low rate. As illustrated in Figure 3, the demodulation process ensures that each tone has a distinct signature within the passband of the filter. Since there are few tones present, it is possible to identify the tones and their amplitudes from the low-rate samples.

The major advantage of the random demodulator is that it bypasses the need for a high-rate ADC. Demodulation is typically much easier to implement than sampling, yet it allows us to use a low-rate ADC. As a result, the system can be constructed from robust, low-power, readily available components.

arXiv:0902.0026v2 [cs.IT]
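The pipeline just described (mix with a pseudonoise sequence, lowpass/accumulate, sample at a low rate) can be imitated in a small discrete simulation. Everything below is an illustrative sketch under my own assumptions, not the paper's exact formulation: the problem sizes are arbitrary, the filter-and-sample stage is modeled as a simple integrate-and-dump accumulator, and a greedy orthogonal matching pursuit stands in for the convex programming the paper analyzes:

```python
import numpy as np

rng = np.random.default_rng(7)
W, R, K = 128, 32, 3            # Nyquist rate, low sampling rate (R divides W), sparsity

# K-sparse spectrum s and the corresponding time-domain signal x = F^{-1} s
support = rng.choice(W, size=K, replace=False)
s = np.zeros(W, dtype=complex)
s[support] = rng.standard_normal(K) + 1j * rng.standard_normal(K)
x = np.fft.ifft(s)

# Random demodulator: mix with a +/-1 chipping sequence, then integrate and dump
eps = rng.choice([-1.0, 1.0], size=W)
y = (eps * x).reshape(R, W // R).sum(axis=1)       # R low-rate samples

# Equivalent measurement matrix Phi acting directly on the sparse spectrum s
Finv = np.fft.ifft(np.eye(W), axis=0)              # inverse-DFT synthesis matrix
Phi = (eps[:, None] * Finv).reshape(R, W // R, W).sum(axis=1)
assert np.allclose(Phi @ s, y)                     # forward model is consistent

# Greedy recovery (orthogonal matching pursuit): pick the column most
# correlated with the residual, then re-fit by least squares
residual, idx = y.copy(), []
for _ in range(K):
    idx.append(int(np.argmax(np.abs(Phi.conj().T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
    residual = y - Phi[:, idx] @ coef

print(sorted(idx), sorted(support.tolist()))       # recovered support typically matches
```

Note how each tone's "signature" in the low-rate samples is a distinct column of Phi, which is what makes the few active tones identifiable even though R is much smaller than W.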
This content is AI-processed based on ArXiv data.