A Rigorous Analysis of Least Squares Sine Fitting Using Quantized Data: the Random Phase Case
P. Carbone, Senior Member, IEEE, and J. Schoukens, Fellow, IEEE

© 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. DOI: 10.1109/TIM.2013.2282220

Abstract—This paper considers least-squares-based estimation of the amplitude and square amplitude of a quantized sine wave under a random initial record phase. Using amplitude- and frequency-domain modeling techniques, it is shown that the estimator is inconsistent, biased, and has a variance that may be underestimated if the simple noise model of quantization is applied. The effects of both sine wave offset values and additive Gaussian noise are taken into account. General estimator properties are derived, without making simplifying assumptions about the role of the quantization process, to allow assessment of measurement uncertainty when this least-squares procedure is used.

Index Terms—Quantization, least squares, signal processing, amplitude estimation, estimation theory.

I. INTRODUCTION

When assessing the properties of systems subject to quantized input data, the simple noise model of quantization [1] may be used to obtain results quickly. However, it may also lead to severe approximations when the hypotheses necessary for its application do not hold true. This model is based on the assumption that the effects of quantization may be represented by a source of uncorrelated, uniformly distributed random variables. Practitioners can discriminate among applicability situations on the basis of their technical intuition.
Nevertheless, only sound mathematical modeling can compensate for lack of insight into the properties of complex algorithms based on quantized data.

A. Least Squares Estimation and Quantization

Parametric estimation based on Least Squares (LS) is widely used as an all-purpose estimation technique, with applications in many engineering domains. This is the case, for instance, of the 3- or 4-parameter sine fit method described in [2], used to estimate the meaningful parameters of a sampled sine wave when testing Analog-to-Digital Converters (ADCs). In practice, measurement data are almost always obtained by converting analog signals into the digital domain by means of an ADC. As a consequence, they rarely satisfy the conditions under which the LS approach can be considered optimal [3][4]. Conversely, the quantization process results in estimator properties that cannot easily be analyzed if the simple noise model of quantization is adopted. This simplification can ease the evaluation of orders of magnitude when assessing the estimator bias and variance. However, it can be misleading if precise measurements and accurate assessments of the corresponding uncertainties are necessary. This occurs in metrology and in many areas of electrical engineering where quantization plays a role. In fact, it is well known that this model breaks down even when very simple input signals enter the processing chain, such as a direct-current value. Nevertheless, engineers often rely on this approximation, trading accuracy in the analysis of the considered systems for speed and ease in the solution of the associated calculations.

P. Carbone is with the University of Perugia, Department DIEI, via G. Duranti 93, 06125 Perugia, Italy, on leave at the Vrije Universiteit Brussel, Department ELEC, Pleinlaan 2, B-1050 Brussels, Belgium. J. Schoukens is with the Vrije Universiteit Brussel, Department ELEC, Pleinlaan 2, B-1050 Brussels, Belgium.
Also, estimations based on a metrologically sound approach cannot rely on simplified procedures, but must include all available knowledge needed to provide bounds on estimation errors. This applies, for instance, when ADCs are tested using sine waves as source signals. The estimation of the sine wave amplitude, needed to estimate other meaningful parameters such as the effective number of bits, is done using the quantized data at the ADC output [2]. This work shows that the simple model fails to predict both the bias and the variance of the LS-based estimator of the amplitude and square amplitude of a quantized sine wave. By adopting several modeling techniques, it also shows how to include the effect of quantization in the calculations without making simplifying assumptions. The analytical tools used may serve as a fully developed example of how to solve similar problems when analogous measuring conditions apply.

B. The State of the Art

Previous work on the characterization of an LS-based estimator applied to the parameters of a sine wave subject to Gaussian noise has been published recently in [5][9]. Reference [5] contains a detailed analysis of the LS-based estimator properties under the assumption of an input sine wave corrupted by additive Gaussian noise. It is shown that noise contributes to estimation bias even in the absence of quantization. In [9], the author takes [5] as a starting point and extends its results by using alternative derivations and by adding simple closed-form asymptotics, valid for small errors. This work broadens the analysis to the case in which the sine wave is quantized before amplitude estimation is performed. The same topic is addressed in [10] where, through simulations, a description of the LS-based estimator bias is obtained under the assumption of quantized data, and a modified technique for bias reduction is illustrated.
Results relevant to this work are presented in [6], where the definition of the effective number of bits (ENOB) of an ADC is revised to account for deviations in the testing-signal offset and amplitude. Previous work on the same subject was carried out in [7], where again the definition of ENOB was discussed in relation to the quantization error characteristics of a noiseless quantizer, and a new analytical method for its assessment was proposed.

C. The Main Results in this Paper

In this paper, a practical example is first given to motivate the necessity for this analysis. Then, the properties of the estimator of the square amplitude of a quantized sine wave are analyzed under the assumption of a randomly distributed initial record phase. This is a choice of practical interest, because phase synchronization is not always feasible in engineering practice when repeated or reproduced measurement results are needed. In these cases, algorithms are applied to quantized data without knowledge of the initial record phase. Thus, their properties can be analyzed by assuming this phase to be uniformly distributed in the interval [0, 2π). This assumption is relevant to the analysis of the estimator properties prior to executing measurements. When an experiment is performed, a realization of the random variable modeling the initial record phase occurs; from that moment on, the sequence has a deterministic behavior. Thus, the results presented here serve to understand what to expect before taking measurements, and how to interpret results when comparing estimates obtained when sampling is not synchronized.
As it is still customary to use the noise model of quantization to produce estimates of error mean values and variances, the results presented in this paper serve two major purposes: to warn against the unconditional application of this model, as it can lead to underestimates of measurement uncertainties, and to show how to use various mathematical tools to avoid simplistic assumptions and obtain estimators with predictable properties. The described analysis proves that the properties of the LS-based estimator of the square amplitude of a quantized sine wave are those listed in Tab. I. The rest of this paper aims at proving these properties: while the following sections describe and comment on the results in Tab. I, all mathematical derivations and proofs are put in the Appendices, so as to improve readability and usability of the results.

II. SYSTEM MODEL AND A MOTIVATING EXAMPLE

Fig. 1. Signal chain of the analyzed system: the input sequence s = Hθ passes through the quantizer Q, and the LS block produces the estimate θ̂.

In this paper, the signal chain depicted in Fig. 1 is considered. In this figure, s represents the input sequence, θ the vector of parameters to be identified, and H the observation matrix, that is, the matrix with known entries that linearly relates θ to the input sequence. This analysis framework is customary in the identification of models that are linear in the parameters [4]. The use of an observation matrix is a more convenient way to express dependencies between parameters and observable quantities than using lists of time-varying equations. The elements of this matrix are specified on the basis of the characteristics of the identification problem to be analyzed, following standard signal processing approaches [3]. The input sequence is subjected to quantization, so that the cascaded block performs an LS analysis on a quantized version of the input signal, to provide an estimate θ̂ of θ.
To highlight the limits of the simple approach, consider the following example, based on the signal chain depicted in Fig. 1. Let H be a vector containing known samples of a cosine function driven by a uniformly distributed random phase with independent outcomes,

H := [h_0 ··· h_{N−1}]^T,  h_i := cos(φ_i),  s_i := θ h_i,  φ_i ∈ U(0, 2π],  i = 0, ..., N−1,

and consider the estimation of the constant amplitude θ. Assuming a 3-bit mid-tread quantizer with input range [−1, 1], the quantizer output sequence y_i becomes a nonlinear deterministic function of the input s_i. The LS-based estimator of θ, which also provides the Best Linear Approximation (BLA) of the overall nonlinear system, is [3]

θ̂ = (H^T H)^{−1} H^T y = [ (1/N) Σ_{i=0}^{N−1} y_i h_i ] / [ (1/N) Σ_{i=0}^{N−1} h_i² ]   (1)

In calculating the bias and variance of θ̂, two approaches may be taken. The first approach (simple calculation) considers the quantization error as due to a noise source of independent random variables, uniformly distributed in [−Δ/2, Δ/2], with Δ = 2/2^b the quantization step and b the number of quantizer bits. The second approach (exact calculation) takes quantization effects into consideration and allows a precise assessment of the estimator bias and variance. It will be shown next how the simple approach may lead to inaccurate results, underestimating the effects of quantization. Under the simple calculation approach, y_i becomes

y_i = θ h_i + e_i,  i = 0, ..., N−1,

where e_i represents the quantization error, considered independent of θ h_i and uniformly distributed in [−Δ/2, Δ/2]. Under this assumption, it is shown in App. A that

E(θ̂) ≈ θ,  Var(θ̂) ≈ Δ²/(6N)   (2)

To verify the validity of the simplified approach, simulations have been carried out. Both the variance of θ̂ obtained by the simplified calculation and that estimated using the samples at the quantizer output are plotted in Fig. 2.
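The motivating example above can be sketched in a few lines of simulation code. This is a hedged illustration, not the paper's original code: the amplitude θ = 0.6 and the record count are arbitrary choices, and the quantizer is the mid-tread characteristic described in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
b, N, R = 3, 200, 5000           # bits, samples per record, Monte Carlo records
delta = 2 / 2**b                 # quantization step for the input range [-1, 1]
theta = 0.6                      # true amplitude, kept in the no-overload range

est = np.empty(R)
for r in range(R):
    phi = rng.uniform(0.0, 2*np.pi, N)        # independent random phases
    h = np.cos(phi)                           # known regressor samples
    y = delta * np.floor(theta*h/delta + 0.5) # mid-tread quantizer output
    est[r] = (y @ h) / (h @ h)                # LS estimate, eq. (1)

var_mc = est.var()                 # variance seen at the quantizer output
var_simple = delta**2 / (6*N)      # prediction of the simple model, eq. (2)
print(var_mc / var_simple)         # can differ markedly from 1, cf. Fig. 2
```

Sweeping `theta` over a grid of values reproduces the qualitative behavior of Fig. 2: the ratio oscillates with θ/Δ instead of staying near one.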
In this figure, the data have been normalized to Δ²/(6N) and plotted as a function of the input amplitude normalized to Δ. As is clear from Fig. 2, the simplified calculation provides a variance that underestimates the true variance by more than 100% for several values of the parameter θ. This is not surprising, because of the strong nonlinear distortion introduced by the quantizer, but it needs to be addressed. The standard deviation of θ̂ is related to the standard uncertainty in the estimation of θ [17]. This motivates the analysis of the effects of quantization when LS-based estimators are used to infer parameter values and the corresponding uncertainties. It is reasonable to assume that, by increasing the quantizer resolution, this effect will vanish for large values of θ. At the same time, it is expected that even in the case of high-resolution quantizers this phenomenon will be relevant, especially for small values of the ratio θ/Δ.

TABLE I
PROPERTIES OF THE LS-BASED ESTIMATOR OF THE SQUARE AMPLITUDE OF A SINE WAVE BASED ON QUANTIZED SAMPLES

General properties
· asymptotically biased and inconsistent estimator
· sensitive to errors in the ratio between sampling rate and input sine wave frequency, beyond what can be predicted by the simple approach (noise model of quantization)

Bias
· finite for every quantizer resolution (Δ) and any finite value of the number of samples (N)
· not predicted well when using the simple approach (noise model of quantization)
· does not vanish when N → ∞, for any finite value of Δ
· reaches quickly, from a practical engineering viewpoint, its asymptotic behavior when N → ∞
· spans wider intervals of values when the signal amplitude increases
· can be bounded by two expressions showing that the maximum in the magnitude of the bias is approximately dependent on Δ^{4/3} when Δ → 0
· its order of magnitude is insensitive to small offsets in the input signal
· decreases if input additive noise spans at least a quantization step

Variance
· may be both lower and larger than that predicted by the simple approach (noise model of quantization)
· vanishes quickly, from a practical engineering viewpoint, when N → ∞

Mean Square Error
· dominated by the square bias when Δ is large
· dominated by the variance when Δ is small

Fig. 2. Variance of θ̂ normalized to the theoretical variance Δ²/(6N) obtained by the simple approach (3-bit quantizer): Monte Carlo-based estimations based on the quantizer output sequence (solid line) and on the simplified assumption about the error sequence (dotted line). Estimates obtained using a 3-bit quantizer, 15·10³ records, each one containing N = 200 samples.

III. LEAST SQUARES BASED ESTIMATOR OF THE SQUARE AMPLITUDE OF A QUANTIZED SINE WAVE

A. Signal and System

The main estimation problem analyzed in this paper considers an instance of the signal chain depicted in Fig. 1, based on the following assumptions:

• (Assumption 1) The quantizer input signal is a coherently sampled sine wave, defined by

s_i := −A cos(k_i + φ),  k_i := (2πλ/N) i,  i = 0, ..., N−1   (3)

where λ and N are co-prime integers, A > 0, A² and A are the parameters to be estimated, and φ is the initial record phase. The coherency condition requires both synchronous sine wave sampling and the observation of an integer number of sine wave periods. Since s_i may also be affected by a deterministic offset or by random noise, subsections III-H and III-I show the estimator performance under these additional assumptions, respectively.

• (Assumption 2) The variable φ is a random variable, uniformly distributed in [0, 2π).

When s_i is quantized using a mid-tread quantizer, the observed sequence becomes

y_i := Δ ⌊s_i/Δ + 1/2⌋ = s_i + e_i(s_i),  i = 0, ..., N−1   (4)

where e_i(s_i) := y_i − s_i represents the quantization error. As an example, this model applies when testing ADCs [2], or in any other situation in which the amplitude or square amplitude of a sine wave is estimated on the basis of quantized data (e.g., in the area of power-quality measurements). Since this is the information usually available when using modern instrumentation, (3) applies to several situations of interest in engineering practice. Assumption 2 is a relevant one because, usually, the value of the initial record phase may not be controllable, or may be controllable only up to a given maximum error. Therefore, when comparing estimates obtained under reproducibility conditions in different laboratories or set-ups, the value of φ introduces a variability in the estimates. The consequences of this variability are the main subject of this paper.

We will first consider the LS-based estimation of A². This choice eases the initial analysis of the estimator properties, since its mathematical expression becomes a summation of weighted products of observable quantities, as shown in (7). The alternative would be to estimate A directly. However, the additional nonlinear effect of the square-root operation implied by extracting A from A² further complicates the analysis; it is treated in subsection III-G. According to (3), the model to be fitted, using the observed sequence y_i, is

s_i = −A cos(φ) cos(k_i) + A sin(φ) sin(k_i),  i = 0, ..., N−1,

which can be rewritten in matrix form as [4]

S = Hθ

where S := [s_0 s_1 ··· s_{N−1}]^T,

θ := [θ_1 θ_2]^T := [A cos(φ)  A sin(φ)]^T

is the parameter vector, and

H := [ −cos(k_0)   sin(k_0)
       −cos(k_1)   sin(k_1)
        ...        ...
       −cos(k_{N−1}) sin(k_{N−1}) ]

is the N×2 observation matrix. Define θ̂ := [θ̂_1 θ̂_2]^T, with θ̂_1 and θ̂_2 the estimates of A cos(φ) and A sin(φ), as the estimator of θ.
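The coherent-sampling model (3) and the mid-tread quantizer (4) can be checked numerically. The sketch below uses illustrative values of A and φ (not from the paper) and verifies that, in the granular region, the quantization error never exceeds half a step.

```python
import numpy as np

# Coherently sampled sine wave, eq. (3): lambda and N must be co-prime.
N, lam, b = 2000, 201, 8
delta = 2 / 2**b
A, phi = 0.45, 1.234              # illustrative amplitude/phase (assumptions)
i = np.arange(N)
s = -A * np.cos(2*np.pi*lam*i/N + phi)

# Mid-tread quantizer, eq. (4): y_i = Delta * floor(s_i/Delta + 1/2)
y = delta * np.floor(s/delta + 0.5)
e = y - s                         # quantization error e_i(s_i)

assert np.gcd(lam, N) == 1                     # coherency: co-prime ratio
assert np.all(np.abs(e) <= delta/2 + 1e-12)    # granular error within half a step
```

Choosing A close to the range limit 1 would require handling quantizer overload, which the paper deliberately excludes by restricting A below 1 − Δ/2.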
Since A² = θ_1² + θ_2², a natural estimator Â² of A² is [2]

Â² = θ̂_1² + θ̂_2²   (5)

The LS-based estimator of θ is [4]

θ̂ = (H^T H)^{−1} H^T Y   (6)

where Y := [y_0 ··· y_{N−1}]^T. By the coherence hypothesis and by the orthogonality of the columns in H, when N ≥ 3,

(H^T H)^{−1} = (1/N) [ 2 0 ; 0 2 ]

and (6) provides [5]

θ̂_1 = −(2/N) Σ_{i=0}^{N−1} y_i cos(k_i),  θ̂_2 = (2/N) Σ_{i=0}^{N−1} y_i sin(k_i)

which, apart from the signs, are equal to the coefficients of order λ in a discrete Fourier transform of the observed sequence. Thus, from (5),

Â² = (4/N²) Σ_{i=0}^{N−1} Σ_{u=0}^{N−1} y_i y_u cos(k_i − k_u)   (7)

Observe that, being a weighted combination of discrete random variables, Â² is a discrete random variable, whose properties are analyzed in the following assuming both the simple calculation and the exact calculation approaches.

B. Mathematical Modeling

While the subject of quantization has extensively been addressed in the scientific literature, it has always proved to be a hard problem to tackle, because of the nonlinear behavior of the input-output quantizer characteristic. The mathematical modeling used in this paper is based on several approaches, used to cross-validate the obtained results. Modeling is performed both in the amplitude and in the frequency domain. When the Amplitude Domain Approach (ADA) is considered, the quantizer output is modeled as a sum of indicator functions weighted by a suitable number of quantization steps Δ, extending the approach presented in [18] (App. B). When the Frequency Domain Approach (FDA) is considered, the quantization error sequence e_i(·), given by

e_i(s_i) = Δ/2 − Δ ⟨s_i/Δ + 1/2⟩,  i = 0, ..., N−1   (8)

with ⟨·⟩ the fractional-part operator, is expanded in a Fourier series, as done, for example, in [1] (App. C and App. D). Both approaches give insight into the problem and provide closed-form expressions for the analyzed estimator parameters.
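The equivalence between the matrix LS solution (6) and the closed-form coefficients leading to (7) is easy to verify numerically. The sketch below uses assumed values for A and φ; under coherent sampling the columns of H are orthogonal, so a general least-squares solver and the two explicit sums agree.

```python
import numpy as np

N, lam, b = 2000, 201, 8
delta = 2 / 2**b
A, phi = 0.45, 0.789                     # illustrative parameters (assumptions)
i = np.arange(N)
k = 2*np.pi*lam*i/N
y = delta * np.floor(-A*np.cos(k + phi)/delta + 0.5)   # quantized samples, eq. (4)

# Matrix form, eq. (6): theta_hat = (H^T H)^{-1} H^T Y
H = np.column_stack([-np.cos(k), np.sin(k)])
theta_hat = np.linalg.lstsq(H, y, rcond=None)[0]

# Closed-form coefficients (orthogonal columns under coherent sampling)
t1 = -(2/N) * np.sum(y*np.cos(k))
t2 =  (2/N) * np.sum(y*np.sin(k))
assert np.allclose(theta_hat, [t1, t2])

A2_hat = t1**2 + t2**2                   # square-amplitude estimator, eq. (5)/(7)
print(A2_hat, A**2)                      # close, but not identical: quantization bias
```

Expanding θ̂_1² + θ̂_2² as a double sum over i and u reproduces exactly the form (7).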
Moreover, both techniques are easy to code on a computer. They can thus be used to obtain quick values of the error bounds when the mathematical expressions cease to offer easy interpretations of asymptotic behaviors and orders of magnitude. A Monte Carlo analysis has also been used to confirm the mathematical results; it is indicated in the following as MA(R), with R the number of assumed records. In this case, we used both C-language-coded LS-based estimators exploiting the LAPACK numerical library, to achieve state-of-the-art efficiency in numerical processing, and MATLAB, itself a commercial LAPACK wrapper [19].

C. Statistical Assumptions

(Assumption 3) The main system analyzed in this paper is based on the assumption that quantization is applied to noise-free sine wave samples generated by (3). Were it not for the random initial phase, the estimator output would be perfectly predictable, being a deterministic function of the system and signal parameters. As is known, wideband noise added before quantization acts as a dithering signal that smooths and linearizes the nonlinear quantizer characteristic, at the price of an increased estimator variance [18]. Therefore, the main analysis described in this paper provides knowledge about the estimator performance in the limiting situation in which the effects of the finite quantizer resolution dominate over the effects of additive noise. This is of practical interest, given that applications exist dealing with the estimation of sine wave amplitudes based on very few samples, for low-complexity industrial purposes [20][21]. Consequently, the assessment of the conditions under which quantization does or does not influence the estimator performance is necessary to compensate for the lack of the variance-suppression benefits usually associated with averaging a large number of samples.
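The dithering effect mentioned above can be sketched with a Monte Carlo comparison of the square-amplitude bias with and without Gaussian noise added before quantization. The parameters b = 10 and A = 10.93Δ follow the settings used in the paper's figures; the noise level σ = Δ and the record count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
b, N, lam, R = 10, 2000, 201, 2000
delta = 2 / 2**b
A = 10.93 * delta                  # amplitude used in the paper's Fig. 6 setting
i = np.arange(N)
k = 2*np.pi*lam*i/N

def mean_A2(sigma):
    """Monte Carlo mean of A^2_hat over R records with random phase."""
    phi = rng.uniform(0, 2*np.pi, (R, 1))
    s = -A*np.cos(k + phi) + sigma*rng.standard_normal((R, N))
    y = delta*np.floor(s/delta + 0.5)          # mid-tread quantizer, eq. (4)
    t1 = -(2/N)*(y*np.cos(k)).sum(axis=1)
    t2 =  (2/N)*(y*np.sin(k)).sum(axis=1)
    return (t1**2 + t2**2).mean()

bias_q = mean_A2(0.0) - A**2       # quantization-only bias
bias_d = mean_A2(delta) - A**2     # dither spanning one quantization step
print(bias_q/delta**2, bias_d/delta**2)
```

Consistent with Tab. I, the dithered bias is far smaller in magnitude than the noise-free quantization bias, at the cost of a larger estimator variance.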
Because of Assumption 2, the randomness in the signal is due to φ, which models the lack of information regarding the synchronization between the sample rate and the sine wave frequency. Therefore, all expected values in the appendices are taken with respect to realizations of φ. When Assumption 3 does not apply, an additional source of randomness contributes to modifying the estimator bias and variance properties. This corresponds to the practical situation in which, e.g. when testing ADCs, the input sine wave is corrupted by Gaussian noise. The analysis of this case is done in subsection III-I.

Fig. 3. MA(5·10³): standard deviation of Â² normalized to A when N = 2·10³, b = 13: simple approach (dashed line) and direct approach based on (7) (solid line): (a) λ = 201, (b) λ = 200.

D. Synchronization Issues: Sampling and Sine Fit LS-Based Estimation

As pointed out in [2][5], in practice the synchronous condition implied by Assumption 1 can be met only up to the frequency errors associated with the equipment used to provide the experimental data. This is the case, for instance, when testing ADCs: an accurate sine wave generator provides the test signal to the device under test, and the quantizer output data are processed to estimate the input signal parameters needed for additional processing [2]. In this case, errors in the ratio between the ADC sampling rate and the sine wave frequency may result in major errors when estimating other related parameters [22]. When Assumption 1 is not met, the properties of (7) also change significantly. As an example, consider the two co-prime integers N = 2000 and λ = 201. The normalized standard deviation of Â² obtained by the LS-based estimator is plotted in Fig. 3(a) as a function of A (solid line), when b = 13.
In the same figure, the normalized estimator standard deviation is also plotted when considering the simple approach, which models quantization as the effect of additive uniform noise (dashed line). The standard deviations in Fig. 3 are estimated using MA(5·10³). Fig. 3(a) shows that the simple assumption almost uniformly underestimates the standard deviation. Now assume that, due to inaccuracies in the experimental setup, the ratio λ/N becomes a ratio of two integers with common divisors, such as 200/2000 = 1/10. The same results as those shown in Fig. 3(a) are plotted in Fig. 3(b). Two outcomes are evident: the additive noise model still provides the same results as those in Fig. 3(a), while the standard deviation based on (7) is much larger, increased by 20 times. This can be explained by observing that, because of quantization, only 10 samples of (3) having different instantaneous phases are in the dataset. This does not modify the standard deviation in the case of the simple approach, because the superimposed additive noise still provides useful information, also for those samples associated with the same instantaneous phase. On the contrary, when noise-free data are quantized, samples associated with the same instantaneous phase provide the same quantized output value for a given initial record phase φ, and the estimator uses exactly the same data points, affected by strongly correlated quantization error, over different sine wave periods. In this latter case, the amount of available information is greatly reduced compared with the case in which no quantization is applied, and we can conclude that this phenomenon is not modeled properly by the simple approach, which provides overly optimistic results. When Assumption 1 does not hold true, and the finite synchronization capability between the sample rate and the sine wave test frequency must also be taken into account, an approach based on the Farey series can be adopted [22].
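The phase-counting argument above can be verified directly: the number of distinct instantaneous phases k_i = 2πλi/N equals N/gcd(λ, N), so the dataset collapses from 2000 distinct phases to 10 when λ changes from 201 to 200.

```python
import numpy as np

# Count distinct instantaneous phase indices (lam * i) mod N, i = 0..N-1.
N = 2000
for lam in (201, 200):
    distinct = len(np.unique((lam * np.arange(N)) % N))
    print(lam, np.gcd(lam, N), distinct)   # 201 -> 2000 phases, 200 -> only 10
```

With only 10 distinct noise-free quantized values repeated over the record, the effective information content of the dataset is reduced accordingly, which the additive noise model cannot capture.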
E. Square Amplitude LS-Based Estimation: Bias

As pointed out in [10], the LS-based estimator (7) is biased. The bias can be much larger than that predicted by the simple approach. In App. C it is shown that the bias in the estimation of the square amplitude is given by

bias(A, Δ, N) := E(Â²) − A² = 4A g(A, Δ) + 8 h(A, Δ, N)

which, when N → ∞, converges to

bias(A, Δ) := 4 g(A, Δ) [A + g(A, Δ)]   (9)

where

g(A, Δ) := (2Δ/π) [ −x/2 + (√π / (Γ(3/2) x)) Σ_{k=1}^{p} ( x² − (k − 1/2)² π² )^{1/2} ]   (10)

with x := πA/Δ, Γ(·) the gamma function, and

p := ⌊x/π + 1/2⌋ = ⌊A/Δ + 1/2⌋   (11)

and (App. C)

h(A, Δ, N) := (Δ²/(π²N²)) Σ_{i=0}^{N−1} Σ_{u=0}^{N−1} cos(k_i − k_u) Σ_{k=1}^{∞} Σ_{h=1}^{∞} [ (−1)^{h+k}/(hk) ] Σ_{n=0}^{∞} J_{2n+1}(z_h) J_{2n+1}(z_k) cos((2n+1)(k_i − k_u))   (12)

The expressions g(A, Δ) and bias(A, Δ, N) do not uniformly vanish with respect to A for any finite value of Δ, even when N → ∞. Thus, (7) is a biased and inconsistent estimator of A². Simulations show that a few hundred samples are sufficient for the bias to achieve convergence to bias(A, Δ) when b < 20. Two bounds on |bias(A, Δ)| are derived in App. E. The first one, B₁(A, Δ), is based on a bound on Bessel functions and is given by

B₁(A, Δ) := 4A B(A, Δ) + 4 B(A, Δ)²   (13)

Fig. 4. FDA and ADA: normalized bias in the estimation of A², evaluated by assuming N = 2·10³, λ = 201, b ∈ {4, 6, 8, 12} bit (FDA: solid line, ADA: squares), and B₂(Δ) (stars and dashed line).
where B ( A, ∆) : = ∆ 4 3 ζ 4 3 c π (2 π A ) 1 3 , (14) where ζ ( s ) = P ∞ k =1 1 k s is the Riemann zeta function and c = 0 . 7857 . . . . The second one, B 2 (∆) , is based on a finite sum expression of g ( A, ∆) and is given by (see E.4): B 2 (∆) = 4 Ag p − 1 2 ∆ , ∆ p = 0 , 1 , . . . (15) While B 1 ( A, ∆) is a (somewhat loose) upper bound on | bias ( A, ∆) | for given v alues of A and ∆ , | B 2 (∆) | is a tighter bound, obtained with some approximations, that provides the discrete en velope of the minima in bias ( A, ∆) (App. E). The expression for B 1 ( A, ∆) shows that the absolute value of the bias in the estimation of A 2 is on the order of O ∆ 4 3 , when ∆ → 0 . T o illustrate the beha vior of deri ved expressions, bias ( A, ∆) is plotted in Fig. 4, when assuming b = 4 , 6 , 8 , 12 bit and N = 2 · 10 3 . Plots, calculated using FDA and AD A (solid lines and square symbols, respectively) have been normalized to ∆ 4 / 3 to validate the assumption on the approximate rate of con ver gence of the bias with respect to ∆ . The approximate equality of the range of values in Fig. 4, irrespecti ve of the number of bits, supports this statement. In Fig. 4, B 2 (∆) is plotted using stars joined by a dashed line. Direct inspection of Fig. 4 sho ws that the absolute values of the minima are larger than those of the maxima. Therefore it is conjectured that the curve obtained through | B 2 (∆) | is an approximate and tight upper bound on | bias ( A, ∆) | . This hypothesis is confirmed by simulation results based on a MA (5 · 10 3 ) and shown in Fig. 5. Data are obtained by assuming N = 2 · 10 3 and λ = 539 . The maximum magnitude of the bias has been estimated over 0 ≤ A < 1 − ∆ 2 , when b = 1 , . . . , 19 . The dashed line represents | B 2 (∆) | and circles are obtained by estimating the maximum of | bias ( A, ∆) | as a function of the number of bits. 
The maximum of A has been limited to 1 − Δ/2 so as to limit the analysis to the granular quantization error, when considering quantizers with a limited no-overload range, as happens in the practical usage of ADCs. The corresponding points obtained by assuming the simple approach are plotted using stars, while the continuous line represents the minimum value of the upper bound B₁(1/2, Δ) (App. E). The plot shows the very good agreement between |B₂(Δ)|, derived through the FDA, and the maxima obtained using the MA(5·10³). The difference in slope between B₁(1/2, Δ) and |B₂(Δ)| explains the reduction in the range of values in Fig. 3 as the number of bits increases. Consequently, it is conjectured that the absolute value of the bias vanishes faster than Δ^{4/3} when Δ → 0, which is consistent with the approximation of the Bessel functions for large values of their argument [23]. Fig. 5 also clearly shows that the simple approach strongly underestimates the maxima. The asymptotic behavior of bias(A, Δ, N) for large values of N is shown in Fig. 6, where results based on both the MA(10⁶ ÷ 5·10³) and the ADA are shown for a 10-bit quantizer, when A = 10.93Δ. Clearly, the convergence rate is quick: when N > 100, the asymptotic value is already achieved. A variable number of records has been used because the convergence of Monte Carlo-based algorithms depends on N: when N is small, a much larger number of records is needed than when N is large.

F. Square Amplitude LS-Based Estimation: Variance and Mean Square Error

The variance of (7) can again be calculated using the modeling techniques listed in Sect. III-B, applied as in App. B and App. D.

Fig. 5. MA(2·10³) and FDA; normalized maximum of the bias of the square amplitude estimator over all possible values of the input amplitude 0 ≤ A ≤ 1 − Δ/2 (circles) and based on the simple approach (stars). Also shown are the upper bound B₁(1/2, Δ) (solid line) and the approximate upper bound |B₂(Δ)| (dashed line).

Fig. 6. MA(10⁶ ÷ 5·10³) and ADA; main figure: normalized bias in the estimation of A² as a function of N = 3, ..., 300, assuming b = 10 bit and A = 10.93Δ. For large values of N, the graph converges to 0.9398..., as predicted by (E.1). In the inset, the difference between the Monte Carlo-based estimator and the theoretical expression bias(A, Δ, N) derived from (C.19), normalized to Δ².

Fig. 7. MA(5·10³), N = 2·10³. Normalized maximum of the variance of the square amplitude estimator over all possible values of the input amplitude 0 ≤ A ≤ 1 − Δ/2 (dashed line) and based on the simple approach (dots). Also shown is the theoretical variance derived in [5][9] under the assumption of zero-mean additive Gaussian noise with variance Δ²/12 (solid line).

Fig. 8. MA(5·10³), N = 2·10³. Normalized maximum of the mean square error over all possible values of the input amplitude 0 ≤ A ≤ 1 − Δ/2 (solid line). Also shown are the normalized maximum square bias (solid-dotted line) and variance (dashed line).

To highlight the risks associated with the use of the simple approach to calculate uncertainties in the estimation of the square amplitude, consider the graphs shown in Fig. 7.
Data in this figure have been obtained using the MA(5·10³) under the same conditions used to generate the data in Fig. 5 and represent the normalized maximum of the variance over 0 < A ≤ 1 − ∆/2: the dashed line has been obtained using quantized data. Dots represent the behavior of the maximum of the estimator variance when the simple model is assumed. The solid line represents the value published in [5][9], obtained by neglecting quantization and by assuming zero-mean additive Gaussian noise with variance ∆²/12. Clearly, the simple model underestimates the variance, even when the number of bits is large. The increasing behavior of the dashed line is due to the large estimator bias associated with low values of b. When b increases, the bias uniformly vanishes and the variance increases accordingly. This trend is similar to that described in [9] to explain the super-efficiency in the behavior of the LS-based estimator under the hypothesis of zero-mean additive Gaussian noise. This behavior is better seen in Fig. 8, where the normalized Mean Square Error (MSE) is plotted together with the normalized square bias and variance, maximized over all possible values of 0 < A ≤ 1 − ∆/2, as a function of the number of bits. Data in Fig. 8 are obtained by the MA(5·10³), considering N = 2·10³ sine wave samples with λ = 539. The crossing between the curves explains more clearly the super-efficiency type of effect shown in Fig. 7. In App. D, it is proved that the variance vanishes when N → ∞. From a practical viewpoint, the speed of convergence is quick, as shown in Fig. 9, where the MA(5·10³ ÷ 10⁶) has been applied to evaluate the estimator variance, assuming b = 10 bit and A = 10.93∆. A variable number of records has been used in the Monte Carlo method, to keep the simulation time approximately constant for each value of N, as N increases.
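The Monte Carlo approach (MA) used throughout can be reproduced with a short simulation. The following is a minimal sketch, not the authors' code: it assumes the square-amplitude estimator in the correlation form used in App. C, a mid-tread quantizer over a [−1, 1) input range, and illustrative parameter values (A = 0.4, b = 10, N = 200, λ = 137, 400 records) that are choices made here, not taken from the paper.

```python
import math
import random

def quantize(x, delta):
    # mid-tread uniform quantizer: delta * round(x / delta), round-half-up
    return delta * math.floor(x / delta + 0.5)

def ls_square_amplitude(y, lam, N):
    # LS estimate of A^2 for a sine at known frequency lambda/N:
    # (4/N^2) |sum_i y_i exp(-j k_i)|^2, with k_i = 2*pi*lam*i/N,
    # which equals the double sum (4/N^2) sum_i sum_u y_i y_u cos(k_i - k_u)
    re = sum(y[i] * math.cos(2 * math.pi * lam * i / N) for i in range(N))
    im = sum(y[i] * math.sin(2 * math.pi * lam * i / N) for i in range(N))
    return 4.0 * (re * re + im * im) / N ** 2

def mc_bias_var(A, bits, N, lam, records, rng):
    # MA(records): average the estimator over records with random initial phase
    delta = 2.0 / (2 ** bits)   # quantization step, assuming a [-1, 1) range
    est = []
    for _ in range(records):
        phi = rng.uniform(0.0, 2.0 * math.pi)   # random initial record phase
        y = [quantize(-A * math.cos(2 * math.pi * lam * i / N + phi), delta)
             for i in range(N)]
        est.append(ls_square_amplitude(y, lam, N))
    mean = sum(est) / records
    var = sum((e - mean) ** 2 for e in est) / (records - 1)
    return mean - A * A, var    # sample bias and variance of the A^2 estimator

rng = random.Random(1)
bias, var = mc_bias_var(A=0.4, bits=10, N=200, lam=137, records=400, rng=rng)
```

On unquantized data the same correlation form recovers A² exactly (for λ and 2λ not multiples of N), which is a convenient sanity check before adding the quantizer.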
In fact, when N is small, a larger number of records is needed to reduce the estimators' variance than for larger values of N.

G. Amplitude LS–Based Estimation: Bias

Results described in sections III-B–III-F allow calculation of the mean value of the LS-based estimator Â := √(Â²) of the amplitude A in (7). While it is well known that nonlinear transformations, such as the extraction of the square root, do not commute with the expectation when calculating the moments of a random variable, by using a Taylor series expansion about the mean value of Â², we have [4]:

E(Â) ≃ √(E(Â²)) − Var(Â²) / ( 8 [E(Â²)]^{3/2} ),   Var(Â²) → 0   (16)

When N → ∞ the variance of the square-amplitude estimator vanishes, as proved in Sect. III-F. Therefore, we have E(Â) ≃ √(E(Â²)) as N → ∞. To illustrate the validity of this approximation, results are shown in Fig. 10, in which the normalized bias in the LS-based estimation of the sine wave amplitude is plotted. The main figure shows the behavior of the bias, obtained by using the MA(5·10³), when N = 500, λ = 137 and b = 8. In the same figure, the inset shows the difference in bias estimation when using (16) and the MA(5·10³). Since the Monte Carlo approach does not rely on the approximations induced by the Taylor series expansion, this proves the validity of the estimation method.

Fig. 9. MA(5·10³ ÷ 10⁶) and ADA; main figure: normalized estimator variance as a function of N, when b = 10 and A = 10.93∆. Inset: difference between the expressions derived using the MA(5·10³) and the ADA, normalized to ∆⁴.

Fig. 10. MA(5·10³) and ADA; main figure: normalized estimator bias as a function of A/∆, when b = 10. Inset: difference between the expressions derived using the MA(5·10³) and the ADA, normalized to ∆.
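The second-order approximation (16) can be checked numerically. The sketch below uses a synthetic positive random variable with small relative spread standing in for realizations of Â² (the mean 1.0 and spread 0.05 are arbitrary choices for illustration, not quantities from the paper), and compares the simulated E(√X) with the Taylor expression.

```python
import math
import random

rng = random.Random(0)
# samples standing in for realizations of the square-amplitude estimator:
# a positive random variable with small relative spread, as required by (16)
x = [1.0 + rng.gauss(0.0, 0.05) for _ in range(200000)]

ex = sum(x) / len(x)
vx = sum((v - ex) ** 2 for v in x) / (len(x) - 1)

# E(sqrt(X)) by direct simulation vs. the second-order expansion (16):
# sqrt(E(X)) - Var(X) / (8 * E(X)^{3/2})
direct = sum(math.sqrt(v) for v in x) / len(x)
taylor = math.sqrt(ex) - vx / (8.0 * ex ** 1.5)
```

The residual difference is of the order of the neglected higher-order terms plus the Monte Carlo error, both far smaller than the correction term itself.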
Consequently, all properties of the square amplitude estimator, such as the inconsistency and the behavior of the bias and of its maxima, can easily be adapted by taking the square root of the estimators analyzed in Sects. III-B–III-F, in the limit N → ∞.

H. Squared Amplitude LS–Based Estimation Bias: Effect of Input Offset

The models described in this paper allow the analysis of the estimator performance also when the input signal has a non-zero mean value, that is, when (3) is modified to include an additive constant d as follows:

s_i := −A cos(k_i + ϕ) + d,   k_i = (2πλ/N) i,   i = 0, ..., N − 1.

As an example, this case is relevant when the sine wave is used as a test signal for assessing ADC performance. Precisely controlling the mean value up to the accuracy required by the quantization step width may not be an easy task, especially for small ADC resolutions. At the same time, temperature and voltage drifts may induce variations in the input-related offset voltages of the considered ADC. Therefore, bounds on the estimation error need to include also the effect of d. Including an offset in the input signal may be equivalent to modeling a quantizer other than the mid-tread one considered in (4), acting on an offset-free sine wave. As an example, if d = −∆/2, (4) becomes the model of a truncation quantizer applied to (3). To verify the implications of an additive constant in the input signal, the bias in estimating A² has been evaluated using the ADA, under the same conditions used to generate the data shown in Fig. 4. Fig. 11(a)–(b), corresponding to Fig. 4(a)–(b), shows the normalized bias when assuming b = 4, 6. In this figure, d is a parameter taking values in −∆/2, ..., ∆/2 in steps of ∆/10. The overall behavior shows that the added offset does not worsen the bias with respect to the data shown in Fig. 4. The bold line in Fig. 11 is the arithmetic mean of all curves.
Thus, it approximates the behavior of the bias when d is taken to be a random variable uniform in (−∆/2, ∆/2).

I. Squared Amplitude LS–Based Estimation Bias: Effect of Input Noise

The quantizer input sine wave may be affected by additive wide-band noise n_G, as follows:

s_i := −A cos(k_i + ϕ) + n_G,   k_i = (2πλ/N) i,   i = 0, ..., N − 1.

As is well known, noise acts as a dither signal (unsubtractive dither in this case) and, under suitable conditions, it renders the quantization error a uniformly distributed random variable, regardless of the input signal distribution [1], [16]. The overall effect of properly behaving input-referred additive noise is that of linearizing the mean value of the quantizer input-output characteristic. Consequently, a reduction in the estimation bias shown in Fig. 4 and Fig. 11 is expected. This comes at the price of an increased estimation variance. In fact, an additional source of uncertainty characterizes the estimator input data, beyond that accounting for the initial record phase. Additive zero-mean Gaussian noise has been considered in the following, having standard deviation σ_n ∈ {0, ∆/5, 2∆/5, 3∆/5}. To verify the implications of additive Gaussian noise, the bias in estimating A² has been evaluated using the MA(2·10³), under the same conditions used to generate the data shown in Fig. 5 and Fig. 7. Results are plotted in Fig. 12 and Fig. 13, respectively. The plots in Fig. 12 show that, by increasing the noise standard deviation, the bias decreases and tends toward the value associated with the use of the noise model of quantization (graphed using stars). Even though

Fig. 11.
ADA: normalized bias in the estimation of A², evaluated by assuming N = 2·10³, λ = 201 and {4, 6} bit, with an input sine wave offset taking values in {−∆/2, −∆/2 + ∆/10, −∆/2 + 2∆/10, ..., ∆/2}. Also shown are the arithmetic mean of the bias curves and B₂(∆) (stars and dashed line).

Fig. 12. MA(2·10³): normalized maximum of the bias of the square amplitude estimator over all possible values of the input amplitude 0 ≤ A ≤ 1 − ∆/2 (circles), assuming zero-mean additive Gaussian noise having the indicated standard deviation. Also shown are the normalized maximum obtained by the simple approach (stars), the upper bound B₁(1/2, ∆) (solid line) and the approximate upper bound |B₂(∆)| (dashed line).

Fig. 13. MA(5·10³), N = 2·10³. Normalized maximum of the variance of the square amplitude estimator over all possible values of the input amplitude 0 ≤ A ≤ 1 − ∆/2 (circles and solid line), assuming zero-mean additive Gaussian noise having the indicated standard deviation. Also shown are the normalized maximum obtained by the simple approach (stars) and the theoretical variance derived in [5][9] under the assumption of zero-mean additive Gaussian noise with variance ∆²/12 (dashed lines).

Gaussian noise does not have the properties necessary to make the quantization error become a uniform random variable, it approximates such behavior as its variance increases. Results are also consistent with the data in [8], where it is shown that for values of σ/∆ > 0.3 the overall quantization error tends to Gaussianity regardless of the ADC resolution.
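The linearizing effect of Gaussian dither on the mean input-output characteristic can be illustrated directly. The sketch below is an assumption-laden toy experiment, not the paper's ADA or MA computation: it sweeps inputs across one quantization step (∆ = 1 and σ = 0.6∆ are arbitrary choices here) and measures the worst-case gap between the noise-averaged output E[Q(x + n)] and the ideal linear response x.

```python
import math
import random

def quantize(x, delta):
    # mid-tread uniform quantizer
    return delta * math.floor(x / delta + 0.5)

def mean_characteristic_error(sigma, delta, rng, trials=20000):
    # worst-case |E[Q(x + n)] - x| over a grid of inputs spanning one step,
    # with n zero-mean Gaussian of standard deviation sigma
    worst = 0.0
    for j in range(21):
        x = -delta / 2 + j * delta / 20
        m = sum(quantize(x + rng.gauss(0.0, sigma), delta)
                for _ in range(trials)) / trials
        worst = max(worst, abs(m - x))
    return worst

rng = random.Random(3)
delta = 1.0
err_no_noise = mean_characteristic_error(0.0, delta, rng)        # staircase error
err_dither = mean_characteristic_error(0.6 * delta, delta, rng)  # near-linear mean
```

Without noise the mean characteristic is the staircase itself, with errors up to ∆/2; with σ of the order of the step the averaged characteristic is nearly linear, which is the mechanism behind the bias reduction seen in Fig. 12.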
At the same time, Fig. 13 shows that the normalized maximum value of the estimation variance increases with the noise variance, as expected, irrespective of the quantizer resolution.

IV. RESULTS AND DISCUSSION

It is a high priority of the instrumentation and measurement community to understand error bounds when using procedures and algorithms to analyze data. Thus, the results in this paper serve a double role: they warn against the usage of the noise model of quantization to provide bounds on estimation errors when applying the LS algorithm to quantized data, and they show how to include the effect of quantization in the calculations needed to derive such bounds.

A. Some Application Examples

The results have practical relevance. As an example, consider the case in which two different laboratories want to compare results obtained when measuring electrical parameters of the same ADC. Research on these devices is ongoing and produces designs and realizations optimizing various criteria, most generally including resolution, conversion speed and energy consumption. Low-resolution ADCs are used in ultra-wide bandwidth receivers (5 bits) [11], serial links (4.5 bits) [12], hard-disk drive read channels (6 bits) [13] and waveform capture instruments and oscilloscopes (8 bits) [14]. Conversely, low-conversion-rate, high-resolution ADCs are employed, for instance, in industrial instrumentation or in digital audio applications. Regardless of the device resolution, all ADCs undergo testing procedures that must be accurate, fast and sound from a metrological point of view. The majority of standardized tests require sine waves as stimulus signals [2], whose parameters (e.g., amplitude) are obtained by using LS-based estimators applied to the quantized data sequence provided by the device under test [7].
Synchronizing the initial record phase of the sinusoidal signal can be done only at the expense of added instrumental complexity and up to a certain uncertainty. Therefore, allowing it to vary freely among collected data records, as is frequently done in practice, implies an added source of variability in the results that can be modeled by assuming a random initial record phase. The sine wave amplitude is the input parameter for the estimation of many relevant ADC parameters, such as the number of effective bits. Thus, the performed analysis shows what to expect of the amplitude estimator properties when the initial record phase cannot be controlled over different realizations of the same ADC testing procedure, under repeatability conditions. The same also applies when two laboratories verify the level of compatibility of the estimates of the sine wave amplitude under reproducibility conditions: the uncertainty associated with the phase variability must be taken into account. While simulations may provide directions for further investigations and hints on the existence of unexpected phenomena, they must be accompanied by analyses made to reduce the role of subjective interpretations. The mathematically derived bounds presented in this paper serve this scope. As an example, consider the case in which the normalized difference in the sine wave amplitude estimated by the two laboratories, or over different realizations of the same ADC testing procedure, is larger than what can be predicted by the data shown in Fig. 10, applicable as an example to a 10 bit resolution ADC. Then, variability in the initial record phase cannot be the unique cause, and sources of uncertainty other than those associated with the effects of quantization must be looked for. As an additional usage example, consider a medium-resolution (e.g., 10 bit) ADC embedded in a microcontroller.
The acquisition and estimation of the amplitude of a sine wave is a typical problem in many engineering areas. This happens, for instance, when measuring power quality, when performing built-in self-test procedures to assess the functional status of system-level devices, or when taking impedance measurements using sine-fitting techniques [15]. The simple sinusoidal sources frequently used in this latter case may not allow synchronized acquisition, while synchronization with the phase of the power network in the former case may not be feasible at reasonable cost or uncertainty levels. In both cases, LS-based estimation of the amplitude of the sinusoid is done at the signal processing level and requires assessment of the associated uncertainties. Results in Fig. 5 and 6 can be used to this aim, in accordance with the procedure described in [17], while the corresponding expressions in the appendix can be used to cope with different quantizer resolutions and record lengths.

B. Influence of Input Signal Properties

Situations occur in which tests are performed on a reduced number of output codes, that is, only a few among all possible ADC codes are excited. Similarly, systems such as those in Fig. 1 may be sourced by sine waves with arbitrarily large or small amplitudes within the admitted input range. In this case, even high-resolution ADCs may induce relative errors in amplitude estimates as large as those associated with low-resolution ADCs. As an example, consider a 16 bit ADC used in Fig. 1 to quantize a sine wave whose amplitude only fully excites 256 output codes. Fig. 4 shows that the expected behavior of the LS-based estimator is approximately that associated with an 8 bit ADC used at full scale, once errors have been normalized with respect to the width of the quantization step. As shown in subsection III-I, the addition of Gaussian noise randomizes the error due to the non-linear behavior of the quantizer and reduces the estimation bias.
Zero-mean uniformly distributed noise would have a similar or better linearizing behavior. With a properly set variance, uniform noise nulls the mean of the quantization error, while making the quantization error variance input-signal dependent (noise modulation). As the data in Fig. 11 show, an input offset may also reduce the bias, if it randomly varies in a small-amplitude range. In this figure, the arithmetic mean value provides almost everywhere a much smaller estimator bias than any of the possible values of the offset taken as a fixed deterministic value. The arithmetic mean is generally a reasonable estimator of the mean value of a random variable. Thus, its behavior in Fig. 11 approximates the behavior of the bias when the input offset is a random and not a deterministic value. In this case, the offset would behave as a dither signal itself. Finally, in some applications, sequential least squares are used to provide amplitude estimates over time, when sampling an ongoing continuous-time sinusoidal waveform [4]. Clearly, the results presented here are applicable also in this case.

V. CONCLUSIONS

In this paper, we considered the LS-based estimation of the square amplitude and amplitude of a quantized sine wave. The main contribution of the paper is summarized by the results in Tab. I. Using several analytical techniques, we proved that the simple noise model of quantization provides erroneous results under several conditions and may fail when assessing the span of estimation errors. This is especially relevant when measurement results are used for conformance testing, to assess whether manufactured products meet specified standards. As an example, ADC testing requires measurement of several parameters (e.g., ENOB) based on the estimation of the properties of testing signals such as sine waves.
As shown in this paper, estimates may be affected by relevant biases that may induce wrong decisions about whether the device characteristics are under or over given thresholds. It has also been proved that the estimator is inconsistent and biased, and that its variance is not predicted well by the noise model of quantization. Exact expressions have been provided that allow a rigorous evaluation of the estimator properties. Both the obtained results and the methods used in this paper are applicable also when other sine wave parameters are estimated on the basis of quantized data and when solving similar estimation problems.

APPENDIX A
DERIVATION OF (2)

Define

R := (1/N) Σ_{i=0}^{N−1} y_i h_i,   S := (1/N) Σ_{i=0}^{N−1} h_i².

By using the hypothesis of zero-mean and uncorrelated random variables,

E(R) = E( (1/N) Σ_{i=0}^{N−1} (θ h_i² + e_i h_i) ) = (θ/N) Σ_{i=0}^{N−1} E(h_i²) = θ/2   (A.1)

E(S) = E( (1/N) Σ_{i=0}^{N−1} h_i² ) = 1/2.   (A.2)

Also,

E(R²) = (1/N²) Σ_{i=0}^{N−1} E(y_i² h_i²) + (N(N−1)/N²) Corr(y_l h_l, y_k h_k) = θ²/(8N) + ∆²/(24N) + θ²/4,   l ≠ k   (A.3)

and

E(S²) = (1/N²) Σ_{i=0}^{N−1} E(h_i⁴) + (N(N−1)/N²) Corr(h_l², h_k²) = 1/(8N) + 1/4,   l ≠ k.   (A.4)

Therefore

Var(R) = θ²/(8N) + ∆²/(24N),   Var(S) = 1/(8N)   (A.5)

and the covariance between R and S is given by

Cov(R, S) = (1/N²) E( Σ_{i=0}^{N−1} (θ h_i² + e_i h_i) · Σ_{u=0}^{N−1} h_u² ) − θ/4 = θ E(S²) − θ/4 = θ/(8N).   (A.6)

The expected value E(·) and variance Var(·) of the ratio R/S of two random variables can be approximated by using a Taylor series expansion as follows [24]:

E(R/S) ≃ ( E(R)/E(S) ) [ 1 − Cov(R, S)/(E(R) E(S)) + Var(S)/E²(S) ],

Var(R/S) ≃ ( E²(R)/E²(S) ) [ Var(R)/E²(R) − 2 Cov(R, S)/(E(R) E(S)) + Var(S)/E²(S) ].   (A.7)

Thus, by substituting (A.2)–(A.6) in (A.7), (2) follows.
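The second-order ratio approximations in (A.7) can be verified numerically. The following sketch uses synthetic correlated variables standing in for R and S (the means of 0.5 and the 0.02 spreads are arbitrary illustrative choices, not the quantities of App. A), and compares the simulated moments of R/S with the Taylor expressions.

```python
import random

rng = random.Random(7)
M = 200000
r, s = [], []
for _ in range(M):
    # correlated positive variables standing in for R and S of App. A
    base = rng.gauss(0.0, 0.02)
    r.append(0.50 + base + rng.gauss(0.0, 0.02))
    s.append(0.50 + base)

def mean(v):
    return sum(v) / len(v)

er, es = mean(r), mean(s)
vr = mean([(a - er) ** 2 for a in r])
vs = mean([(b - es) ** 2 for b in s])
cov = mean([(a - er) * (b - es) for a, b in zip(r, s)])

ratio = [a / b for a, b in zip(r, s)]
e_direct = mean(ratio)
v_direct = mean([(q - e_direct) ** 2 for q in ratio])
# second-order approximations (A.7)
e_taylor = (er / es) * (1.0 - cov / (er * es) + vs / es ** 2)
v_taylor = (er ** 2 / es ** 2) * (vr / er ** 2
                                  - 2.0 * cov / (er * es) + vs / es ** 2)
```

With small relative spreads, as assumed by the expansion, the direct and approximated moments agree to within the neglected higher-order terms.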
APPENDIX B
AMPLITUDE DOMAIN APPROACH: MOMENTS OF THE SQUARE AMPLITUDE ESTIMATOR

Lemma B.1. Assume 0 ≤ c < 1, 0 ≤ ϕ ≤ 1 and 0 ≤ L₁ ≤ L₂ ≤ 1. Then, the solutions for ϕ of the inequality L₁ ≤ ⟨c + ϕ⟩ < L₂ are

{L₁ − c ≤ ϕ < L₂ − c},   c ≤ L₁
{0 ≤ ϕ < L₂ − c} ∪ {1 − c + L₁ ≤ ϕ ≤ 1},   L₁ < c ≤ L₂
{1 − c + L₁ ≤ ϕ < 1 − c + L₂},   c > L₂

The proof is straightforward once it is observed that the expression ⟨c + ϕ⟩, as a function of ϕ, is piecewise linear:

⟨c + ϕ⟩ = ϕ + c,   0 ≤ ϕ < 1 − c
⟨c + ϕ⟩ = ϕ − (1 − c),   1 − c ≤ ϕ < 1

A. A Model for the Quantizer Output

The model described in this Appendix is based on a similar approach taken in [18]. However, it extends it in several ways: it presents a stricter mathematical formalization of the quantizer output, it includes a mathematical description of the quantizer higher-order statistical moments, and it proves its applicability to the analysis of the mean value and correlation of quantized stochastic processes. Assume

x_u(ϕ) := −(A/∆) cos(k_u + ϕ) + 0.5 + c,   u = 0, ..., N − 1

with A, ∆ and c real numbers, k_u a sequence of real numbers, N a positive integer and 0 ≤ ϕ ≤ 2π. The constant c models both the contribution of an offset in the sinusoidal signal and of a possible offset in the quantizer input-output characteristic, as they are indistinguishable. In fact, while mid-tread quantization is considered in this paper, by properly setting the value of c other types of quantization are covered by this analysis. For instance, when c = −0.5, a truncation quantizer is modeled. Conversely, by setting c = d/∆, the following analysis also covers the case when the input sine wave has an offset d. Define the quantizer output as y_u := ∆⌊x_u⌋, u = 0, ..., N − 1.
Then, y_u = n∆, with n any integer, if x_u(ϕ) belongs to the interval S_n := [D_n, U_n), where D_n := max(min_ϕ{x_u(ϕ)}, n) and U_n := min(max_ϕ{x_u(ϕ)}, n + 1). Consequently,

y_u = Σ_{n=−∞}^{∞} n∆ · i( x_u(ϕ) ∈ S_n ),   S_n := [D_n, U_n)

where i(·) is the indicator function of the event at its argument. For a given value of k_u, x_u(ϕ) will or will not belong to S_n depending on the value of ϕ. In the following, it will be shown how to find the set of values of ϕ that make y_u = n∆. This occurs when

D_n ≤ −(A/∆) cos(k_u + ϕ) + 0.5 + c < U_n   (B.1)

that is,

(∆/A)(D_n − 0.5 − c) ≤ −cos(k_u + ϕ) < (∆/A)(U_n − 0.5 − c).

Since the cosine function is periodic with period 2π, we may write

(∆/A)(D_n − 0.5 − c) ≤ −cos( ⟨(k_u + ϕ)/(2π)⟩ 2π ) < (∆/A)(U_n − 0.5 − c).

Thus,

−(∆/A)(U_n − 0.5 − c) < cos( ⟨(k_u + ϕ)/(2π)⟩ 2π ) ≤ −(∆/A)(D_n − 0.5 − c).   (B.2)

Define

L_n := −(∆/A)(D_n − 0.5 − c),   R_n := −(∆/A)(U_n − 0.5 − c),

and observe that arccos(·) is a decreasing function of its argument that returns values in [0, π], so that by applying it to all members of (B.2) we obtain

{ (1/(2π)) arccos(L_n) ≤ ⟨(k_u + ϕ)/(2π)⟩ < (1/(2π)) arccos(R_n) } ∪ { 1 − (1/(2π)) arccos(R_n) < ⟨(k_u + ϕ)/(2π)⟩ ≤ 1 − (1/(2π)) arccos(L_n) }.

Moreover, since ⟨a + b⟩ = ⟨⟨a⟩ + ⟨b⟩⟩ and 0 ≤ ϕ < 2π, for u = 0, ..., N − 1 we have

{ (1/(2π)) arccos(L_n) ≤ ⟨ k_u/(2π) + ϕ/(2π) ⟩ < (1/(2π)) arccos(R_n) } ∪ { 1 − (1/(2π)) arccos(R_n) < ⟨ k_u/(2π) + ϕ/(2π) ⟩ ≤ 1 − (1/(2π)) arccos(L_n) }.   (B.3)

Let us define Φ_n(k_u), for a given value of k_u, as the set of values of ϕ such that (B.2) is satisfied. This set is found by applying twice the result in Lemma B.1 to the two intervals I₁, I₂ implicitly defined in (B.3):

I₁ := [ (1/(2π)) arccos(L_n), (1/(2π)) arccos(R_n) )
I₂ := ( 1 − (1/(2π)) arccos(R_n), 1 − (1/(2π)) arccos(L_n) ]

with c = k_u/(2π), u = 0, ..., N − 1.
Thus,

y_u = Σ_{n=−∞}^{∞} n∆ · i( ϕ ∈ Φ_n(k_u) ).

The sets Φ_n(k_u), n = −∞, ..., ∞, for any given k_u, form a partition of the interval [0, 2π). Consider the product y_{u₁} y_{u₂} ··· y_{u_M}, with the integers u₁, ..., u_M all taking values in {0, ..., N − 1}. Then

y_{u₁} y_{u₂} ··· y_{u_M} = Σ_{n=−∞}^{∞} n∆ i( x_{u₁}(ϕ) ∈ S_n ) · Σ_{m=−∞}^{∞} m∆ i( x_{u₂}(ϕ) ∈ S_m ) ··· Σ_{h=−∞}^{∞} h∆ i( x_{u_M}(ϕ) ∈ S_h ).

Since the intervals S_n and S_m have void intersection if m ≠ n, we can write

y_{u₁} y_{u₂} ··· y_{u_M} = ∆^M Σ_{n=−∞}^{∞} n^M i( x_{u₁}(ϕ) ∈ S_n ) i( x_{u₂}(ϕ) ∈ S_n ) ··· i( x_{u_M}(ϕ) ∈ S_n )

that is,

y_{u₁} y_{u₂} ··· y_{u_M} = ∆^M Σ_{n=−∞}^{∞} n^M i( ϕ ∈ Φ_n(k_{u₁}) ) i( ϕ ∈ Φ_n(k_{u₂}) ) ··· i( ϕ ∈ Φ_n(k_{u_M}) ).

Fig. 14. Usage example of the amplitude-domain model to calculate the correlation between two quantizer outputs (b = 3), when ϕ/(2π) is uniform in [0, 1), A = 1 and c = 0: the normalized products of the two outputs (center line) must be multiplied by the width of the corresponding segment on the center line, which represents the probability of occurrence, and all contributions summed.

Given that the indicator function can only output 0 or 1, by the same reasoning we have

y_{u₁}^{m₁} y_{u₂}^{m₂} ··· y_{u_M}^{m_M} = ∆^{m₁+m₂+···+m_M} Σ_{n=−∞}^{∞} n^{m₁+m₂+···+m_M} i( ϕ ∈ Φ_n(k_{u₁}) ) i( ϕ ∈ Φ_n(k_{u₂}) ) ··· i( ϕ ∈ Φ_n(k_{u_M}) )   (B.4)

where m₁, m₂, ..., m_M are integer values. Thus, the product y_{u₁}^{m₁} y_{u₂}^{m₂} ··· y_{u_M}^{m_M} is a deterministic function of ϕ.
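The correlation construction illustrated in Fig. 14 can be sketched numerically: averaging the product of two quantizer outputs over a fine uniform grid of ϕ approximates each Φ-segment's contribution by its normalized length. The parameters below (b = 3 over an assumed [−1, 1) range, A = 1, c = 0, and the two phase offsets) follow the Fig. 14 example, except for the second sampling phase π/3, which is an arbitrary choice made here.

```python
import math

DELTA = 2.0 / 2 ** 3          # 3-bit quantizer, assuming a [-1, 1) range
A = 1.0

def y(k, phi):
    # quantizer output Delta*floor(x_u) with x_u = -(A/Delta)cos(k+phi)+0.5
    x = -(A / DELTA) * math.cos(k + phi) + 0.5
    return DELTA * math.floor(x)

def correlation(k1, k2, grid=100000):
    # average y(k1, phi)*y(k2, phi) over phi uniform in [0, 2*pi); each
    # phi-interval on which the product is constant contributes its
    # normalized length, as in the segment construction of Fig. 14
    tot = 0.0
    for m in range(grid):
        phi = 2.0 * math.pi * (m + 0.5) / grid
        tot += y(k1, phi) * y(k2, phi)
    return tot / grid

c0 = correlation(0.0, 0.0)           # same-sample case: E(y_u^2)
c1 = correlation(0.0, math.pi / 3)   # two outputs a third of a period apart
```

For coarse quantization E(y_u²) stays close to the ideal A²/2 + ∆²/12, and the cross-correlation stays close to (A²/2)cos(k₁ − k₂), with deviations caused by the input-correlated quantization error that the amplitude-domain model captures exactly.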
If ϕ becomes a random variable distributed in [0, 2π), then (B.4) allows the calculation of the joint moments of the quantizer output as follows:

E( y_{u₁}^{m₁} y_{u₂}^{m₂} ··· y_{u_M}^{m_M} ) = ∆^{m₁+m₂+···+m_M} Σ_{n=−∞}^{∞} n^{m₁+m₂+···+m_M} E( i( ϕ ∈ Φ_n(k_{u₁}) ) i( ϕ ∈ Φ_n(k_{u₂}) ) ··· i( ϕ ∈ Φ_n(k_{u_M}) ) ).

The argument of the expectation in the summation is a Bernoulli random variable. Therefore,

E( y_{u₁}^{m₁} y_{u₂}^{m₂} ··· y_{u_M}^{m_M} ) = ∆^{m₁+m₂+···+m_M} Σ_{n=−∞}^{∞} n^{m₁+m₂+···+m_M} Pr( i( ϕ ∈ Φ_n(k_{u₁}) ) i( ϕ ∈ Φ_n(k_{u₂}) ) ··· i( ϕ ∈ Φ_n(k_{u_M}) ) = 1 ).   (B.5)

Define Φ := Φ_n(k_{u₁}) ∩ Φ_n(k_{u₂}) ∩ ··· ∩ Φ_n(k_{u_M}), which may be the union of disjoint intervals. Then, if ϕ is uniformly distributed, the probability in (B.5) can be calculated as the L₁ norm of Φ normalized to 2π, that is, ||Φ||₁/(2π). To illustrate how the model can be used to calculate the correlation between two quantizer outputs, consider the sine waves depicted in Fig. 14 as a function of 0 ≤ ϕ/(2π) < 1, assuming two different values of k_{u₁} and k_{u₂}. The two sine waves are quantized using a 3 bit quantizer: below the upper sine wave and above the lower one, respectively, is printed the corresponding quantized value, normalized to ∆. The product of these two sequences is written on the center line in this figure. Since ϕ is uniformly distributed in [0, 2π), the correlation can be obtained by multiplying the length of each segment on the center line by the corresponding integer value printed above it, and summing all contributions.

APPENDIX C
FREQUENCY DOMAIN APPROACH: BIAS OF THE SQUARE AMPLITUDE ESTIMATOR

The model described in this Appendix is based on a Fourier-series expansion of the quantization error sequence, a technique also used elsewhere to find mathematical expressions and properties of quantized sequences [1].
It is here applied, together with original research results, to provide a fully developed example of its applicability in solving LS-based estimation problems based on quantized data.

Theorem C.1. Consider the product

C(I, U, H, L, ϕ) := cos(I(k_i + ϕ)) cos(U(k_u + ϕ)) cos(H(k_h + ϕ)) cos(L(k_l + ϕ))   (C.1)

where k_n := (2π/N)λn, n = 0, ..., N − 1, {I, U, H, L} is a set of positive odd integer numbers and ϕ is a random variable uniform in (0, 2π]. By using trigonometric identities, (C.1) becomes

C(I, U, H, L, ϕ) = (1/8) { cos( I k_i + U k_u + H k_h + L k_l + (I + U + H + L)ϕ )
+ cos( I k_i + U k_u − H k_h − L k_l + (I + U − H − L)ϕ )
+ cos( I k_i + U k_u + H k_h − L k_l + (I + U + H − L)ϕ )
+ cos( I k_i + U k_u − H k_h + L k_l + (I + U − H + L)ϕ )
+ cos( I k_i − U k_u + H k_h + L k_l + (I − U + H + L)ϕ )
+ cos( I k_i − U k_u − H k_h − L k_l + (I − U − H − L)ϕ )
+ cos( I k_i − U k_u + H k_h − L k_l + (I − U + H − L)ϕ )
+ cos( I k_i − U k_u − H k_h + L k_l + (I − U − H + L)ϕ ) }   (C.2)

Since C(·, ·, ·, ·, ϕ) is periodic in ϕ with period 2π, its expected value differs from 0 only when at least one of the coefficients of ϕ is 0. This produces a set of Diophantine equations of the form

I ± U = ± H ± L.   (C.3)

Given that (C.3) is satisfied, and assuming I, U, H and L to be positive integers, the sums

Σ_{i=0}^{N−1} Σ_{u=0}^{N−1} Σ_{h=0}^{N−1} Σ_{l=0}^{N−1} E( C(I, U, H, L, ϕ) ) cos(k_i − k_u) cos(k_h − k_l)   (C.4)

do not vanish only if I = U = H = L = 1.

Proof. Using (C.2) and (C.3), (C.4) becomes the summation of terms of the type

c(k_i, k_u, k_h, k_l) := coefficient · cos( A k_i + B k_u + C k_h + D k_l ) cos(k_i − k_u) cos(k_h − k_l),   (C.5)

where A, B, C, D are positive or negative odd integers. Any expression of the type in (C.5) is summed over the four indices i, u, h, l. Consider, for instance, the summation over i.
By using trigonometric identities, we have

Σ_{i=0}^{N−1} c(k_i, k_u, k_h, k_l) = coefficient · Σ_{i=0}^{N−1} { cos(A k_i) cos(B k_u + C k_h + D k_l) cos(k_i) cos(k_u) cos(k_h − k_l)
+ cos(A k_i) cos(B k_u + C k_h + D k_l) sin(k_i) sin(k_u) cos(k_h − k_l)
− sin(A k_i) sin(B k_u + C k_h + D k_l) cos(k_i) cos(k_u) cos(k_h − k_l)
− sin(A k_i) sin(B k_u + C k_h + D k_l) sin(k_i) sin(k_u) cos(k_h − k_l) }.   (C.6)

Consider the first term in the summation. By Euler's formula we have

Σ_{i=0}^{N−1} cos(A k_i) cos(B k_u + C k_h + D k_l) cos(k_i) cos(k_u) cos(k_h − k_l)
= c₁ Σ_{i=0}^{N−1} ( (e^{jA(2π/N)λi} + e^{−jA(2π/N)λi}) / 2 ) ( (e^{j(2π/N)λi} + e^{−j(2π/N)λi}) / 2 )
= (c₁/4) [ (1 − e^{j(A+1)2πλ}) / (1 − e^{j(A+1)(2π/N)λ}) + (1 − e^{j(A−1)2πλ}) / (1 − e^{j(A−1)(2π/N)λ}) + (1 − e^{−j(A+1)2πλ}) / (1 − e^{−j(A+1)(2π/N)λ}) + (1 − e^{−j(A−1)2πλ}) / (1 − e^{−j(A−1)(2π/N)λ}) ],   (C.7)

where c₁ is constant with respect to i and the latter equality follows from the geometric sum formula. Since λ and A are integers, all terms in (C.7) vanish unless A = ±1. In both cases, (C.7) results in c₁N/2. The same reasoning applies to the fourth term in (C.6), that is, the product of all sine functions, while the cross-product terms of the type constant · sin(A k_i) cos(k_i) and constant · cos(A k_i) sin(k_i), when summed over i, vanish regardless of A. Since this argument applies to any of the indices in (C.4), the lemma is proved.

Then, under the assumption N > 2,

Σ_{i=0}^{N−1} Σ_{u=0}^{N−1} Σ_{h=0}^{N−1} Σ_{l=0}^{N−1} E( C(1, 1, 1, 1, ϕ) ) cos(k_i − k_u) cos(k_h − k_l) = N⁴/16,   (C.8)

where the equality follows by expanding each term using Euler's formula and by summing the corresponding geometric sums.
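The identity (C.8) can be checked numerically for a small coherent record. The sketch below assumes N = 8 and λ = 3, an arbitrary pair chosen here so that N > 2 and 2λ is not a multiple of N; the expectation over ϕ is taken as a 64-point uniform average, which is exact for the harmonics (up to 4ϕ) appearing in the four-cosine product.

```python
import math

N, LAM = 8, 3                 # N > 2 with 2*LAM not a multiple of N
k = [2.0 * math.pi * LAM * n / N for n in range(N)]

def expected_C(ki, ku, kh, kl, grid=64):
    # E over phi uniform in (0, 2*pi] of the product (C.1) with I=U=H=L=1;
    # a 64-point uniform grid averages the phi-harmonics (0, +/-2, +/-4) exactly
    tot = 0.0
    for m in range(grid):
        phi = 2.0 * math.pi * m / grid
        tot += (math.cos(ki + phi) * math.cos(ku + phi)
                * math.cos(kh + phi) * math.cos(kl + phi))
    return tot / grid

# quadruple sum (C.4) for I = U = H = L = 1; by (C.8) it equals N^4 / 16
total = 0.0
for i in range(N):
    for u in range(N):
        for h in range(N):
            for l in range(N):
                total += (expected_C(k[i], k[u], k[h], k[l])
                          * math.cos(k[i] - k[u]) * math.cos(k[h] - k[l]))
```

Only the ϕ-free terms of (C.2) survive the averaging, and their sum against the two cosine kernels reproduces the N⁴/16 value used in the bias derivation.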
Thus, from (7) we can write

Â² = (4/N²) Σ_{i=0}^{N−1} Σ_{u=0}^{N−1} ( s_i + e_i(s_i) )( s_u + e_u(s_u) ) cos(k_i − k_u) = (4/N²) Σ_{i=0}^{N−1} Σ_{u=0}^{N−1} z_{iu} cos(k_i − k_u)

where

z_{iu} := s_i s_u + e_i(s_i) s_u + e_u(s_u) s_i + e_i(s_i) e_u(s_u),   i = 0, ..., N − 1,   u = 0, ..., N − 1.   (C.9)

Each term in (C.9) is a deterministic function of the random variable ϕ and will be analyzed separately in the following. With ϕ uniform in [0, 2π), we have

E(s_i s_u) = (A²/2) cos(k_i − k_u),   i = 0, ..., N − 1,   u = 0, ..., N − 1.

Since, when N > 2,

(4/N²) Σ_{i=0}^{N−1} Σ_{u=0}^{N−1} cos²(k_i − k_u) = 2,

then

(4/N²) Σ_{i=0}^{N−1} Σ_{u=0}^{N−1} E(s_i s_u) cos(k_i − k_u) = A².

Consider now the term e_i(s_i) s_u. We have [1]

e_i(s_i) = Σ_{k=1}^{∞} (−1)^k (∆/(πk)) sin( (2πk/∆) s_i )
= Σ_{k=1}^{∞} (−1)^k (∆/(πk)) sin( −(2πkA/∆) cos(k_i + ϕ) )
= Σ_{k=1}^{∞} (−1)^k (2∆/(πk)) Σ_{h=0}^{∞} (−1)^h J_{2h+1}( −2πkA/∆ ) cos( (2h + 1)(k_i + ϕ) )   (C.10)

where the last equality follows by observing that [23]

sin( z cos(β) ) = 2 Σ_{h=0}^{∞} (−1)^h J_{2h+1}(z) cos( (2h + 1)β ).   (C.11)

Thus, by defining z_h := 2πhA/∆, h = 0, 1, ..., we have

(4/N²) Σ_{i=0}^{N−1} Σ_{u=0}^{N−1} E( e_i(s_i) s_u ) cos(k_i − k_u)
= −(4A/N²) Σ_{i=0}^{N−1} Σ_{u=0}^{N−1} E( Σ_{k=1}^{∞} (−1)^k (2∆/(πk)) Σ_{h=0}^{∞} (−1)^h J_{2h+1}(−z_k) cos( (2h + 1)(k_i + ϕ) ) cos(k_u + ϕ) ) cos(k_i − k_u)
= −(4A/N²) Σ_{i=0}^{N−1} Σ_{u=0}^{N−1} Σ_{k=1}^{∞} (−1)^k (2∆/(πk)) Σ_{h=0}^{∞} (−1)^h J_{2h+1}(−z_k) E( C(2h + 1, 1, 0, 0, ϕ) ) cos(k_i − k_u)   (C.12)

where the last equality holds by virtue of the dominated convergence theorem [25].
The expected value in (C.12) does not vanish only if $h=0$, when we have:

\[
E\bigl(C(1,1,0,0,\varphi)\bigr) = \frac12\cos(k_i-k_u).
\]

Thus, from (C.10), we have:

\[
E(e_i(s_i)s_u) = \frac{A\Delta}{\pi}\cos(k_i-k_u)\sum_{k=1}^{\infty}\frac{(-1)^k}{k}J_1(z_k), \qquad (C.13)
\]

which is equal to $E(e_u(s_u)s_i)$ because the cosine is an even function and $J_1(\cdot)$ is an odd function of its argument. By the same reasoning, we have:

\[
\sum_{i=0}^{N-1}\sum_{u=0}^{N-1} E\bigl(C(1,1,0,0,\varphi)\bigr)\cos(k_i-k_u) = \frac{N^2}{4}.
\]

Consequently, (C.12) becomes:

\[
\frac{4}{N^2}\sum_{i=0}^{N-1}\sum_{u=0}^{N-1} E(e_i(s_i)s_u)\cos(k_i-k_u) = \frac{2A\Delta}{\pi}\sum_{k=1}^{\infty}\frac{(-1)^k}{k}J_1(z_k). \qquad (C.14)
\]

Now consider the term $e_i(s_i)e_u(s_u)$ in (C.9). We have:

\[
e_i(s_i)e_u(s_u) = \Bigl(\frac{\Delta}{\pi}\Bigr)^2\sum_{k=1}^{\infty}\sum_{h=1}^{\infty}\frac{(-1)^{h+k}}{hk}\sin\Bigl(\frac{2\pi k}{\Delta}s_i\Bigr)\sin\Bigl(\frac{2\pi h}{\Delta}s_u\Bigr). \qquad (C.15)
\]

By using (C.11), the rightmost product in (C.15) has the following expected value:

\[
E\Bigl[\sin\Bigl(\frac{2\pi k}{\Delta}s_i\Bigr)\sin\Bigl(\frac{2\pi h}{\Delta}s_u\Bigr)\Bigr]
= 4\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}(-1)^{m+n}J_{2m+1}(z_k)J_{2n+1}(z_h)\,E\bigl(C(2m+1,2n+1,0,0,\varphi)\bigr). \qquad (C.16)
\]

Neglecting negative values of the indices, because of (C.3) the expectation in (C.16) is different from zero only if $m=n$, when we have

\[
E\Bigl[\sin\Bigl(\frac{2\pi k}{\Delta}s_i\Bigr)\sin\Bigl(\frac{2\pi h}{\Delta}s_u\Bigr)\Bigr]
= 2\sum_{n=0}^{\infty}J_{2n+1}(z_k)J_{2n+1}(z_h)\cos\bigl((2n+1)(k_i-k_u)\bigr), \qquad (C.17)
\]

which corresponds to the analysis done in [26], [27] and to the results published in App. G of [1]. By using (C.15) and (C.17) we obtain:

\[
\frac{4}{N^2}\sum_{i=0}^{N-1}\sum_{u=0}^{N-1} E(e_i(s_i)e_u(s_u))\cos(k_i-k_u)
= \frac{8\Delta^2}{\pi^2N^2}\sum_{i=0}^{N-1}\sum_{u=0}^{N-1}\cos(k_i-k_u)\sum_{k=1}^{\infty}\sum_{h=1}^{\infty}\frac{(-1)^{h+k}}{hk}\sum_{n=0}^{\infty}J_{2n+1}(z_h)J_{2n+1}(z_k)\cos\bigl((2n+1)(k_i-k_u)\bigr). \qquad (C.18)
\]
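Identity (C.17) can be checked numerically by averaging the product of the two quantization-error harmonics over a dense grid of phases and comparing against the truncated Bessel series. A sketch under illustrative parameter choices (`A`, `DELTA`, the sample angles `ki`, `ku`, and the pure-Python `bessel_j` helper are ours):

```python
import math

def bessel_j(n, x, terms=40):
    """J_n(x) via its ascending power series (adequate for moderate |x|)."""
    return sum((-1) ** m * (x / 2) ** (2 * m + n)
               / (math.factorial(m) * math.factorial(m + n)) for m in range(terms))

A, DELTA = 0.6, 1.0
z1, z2 = 2 * math.pi * A / DELTA, 4 * math.pi * A / DELTA   # z_k for k = 1, 2
ki, ku = 0.7, 2.1                                           # two arbitrary sample angles

# left side of (C.17): average over the uniformly distributed phase; note
# sin(2*pi*k*s_i/DELTA) = sin(-z_k cos(k_i + phi)) since s_i = -A cos(k_i + phi)
M = 20000
lhs = sum(math.sin(-z1 * math.cos(ki + p)) * math.sin(-z2 * math.cos(ku + p))
          for p in (2 * math.pi * m / M for m in range(M))) / M

# right side: the Bessel product series, truncated
rhs = 2 * sum(bessel_j(2 * n + 1, z1) * bessel_j(2 * n + 1, z2)
              * math.cos((2 * n + 1) * (ki - ku)) for n in range(15))
```

The truncations (15 Bessel terms, 20000 phase samples) are far more than enough at these arguments, so `lhs` and `rhs` agree to many digits.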
By using (C.18) and twice (C.14), one can write:

\[
E(\hat A^2) = A^2 + \frac{4A\Delta}{\pi}\sum_{k=1}^{\infty}\frac{(-1)^k}{k}J_1(z_k)
+ \frac{8\Delta^2}{\pi^2N^2}\sum_{i=0}^{N-1}\sum_{u=0}^{N-1}\cos(k_i-k_u)\sum_{k=1}^{\infty}\sum_{h=1}^{\infty}\frac{(-1)^{h+k}}{hk}\sum_{n=0}^{\infty}J_{2n+1}(z_h)J_{2n+1}(z_k)\cos\bigl((2n+1)(k_i-k_u)\bigr). \qquad (C.19)
\]

Expression (C.19) shows that the bias of the estimator of the square amplitude comprises two terms, as follows:

\[
\operatorname{bias}(A,\Delta,N) := E(\hat A^2) - A^2 = 4Ag(A,\Delta) + 8h(A,\Delta,N),
\]

where

\[
g(A,\Delta) := \frac{\Delta}{\pi}\sum_{k=1}^{\infty}\frac{(-1)^k}{k}J_1(z_k) \qquad (C.20)
\]

and

\[
h(A,\Delta,N) := \frac{\Delta^2}{\pi^2N^2}\sum_{i=0}^{N-1}\sum_{u=0}^{N-1}\cos(k_i-k_u)\sum_{k=1}^{\infty}\sum_{h=1}^{\infty}\frac{(-1)^{h+k}}{hk}\sum_{n=0}^{\infty}J_{2n+1}(z_h)J_{2n+1}(z_k)\cos\bigl((2n+1)(k_i-k_u)\bigr). \qquad (C.21)
\]

Observe that, while $g(\cdot,\cdot)$ does not depend on the number of samples, $h(\cdot,\cdot,\cdot)$ is also a function of $N$. Thus, the estimator of the square amplitude based on least squares of quantized data is not consistent, since its bias does not vanish when $N\to\infty$ for a finite quantizer resolution.
Observe also that (8.531 in [28])

\[
J_0(mR) = J_0(m\rho)J_0(mr) + 2\sum_{k=1}^{\infty}J_k(m\rho)J_k(mr)\cos(k\phi), \qquad R = \sqrt{r^2+\rho^2-2r\rho\cos\phi}.
\]

As a consequence, the rightmost summation in (C.21) can be written as follows:

\[
\sum_{n=0}^{\infty}J_{2n+1}(z_k)J_{2n+1}(z_h)\cos\bigl((2n+1)(k_i-k_u)\bigr) = \frac14\bigl(J_0(R)-J_0(\bar R)\bigr), \qquad (C.22)
\]

with

\[
R = \sqrt{z_h^2+z_k^2-2z_hz_k\cos(k_i-k_u)}, \qquad \bar R = \sqrt{z_h^2+z_k^2+2z_hz_k\cos(k_i-k_u)},
\]

since replacing $\phi$ by $\pi-\phi$ in the product series flips the sign of the even-order terms, so that the difference retains only the odd-order products. To further characterize the bias, the asymptotic behavior of $h(\cdot,\cdot,\cdot)$ is shown next:

\[
\lim_{N\to\infty}h(A,\Delta,N) = \lim_{N\to\infty}\frac{\Delta^2}{\pi^2N^2}\sum_{i=0}^{N-1}\sum_{u=0}^{N-1}\cos(k_i-k_u)\sum_{k=1}^{\infty}\sum_{h=1}^{\infty}\frac{(-1)^{h+k}}{hk}\sum_{n=0}^{\infty}J_{2n+1}(z_h)J_{2n+1}(z_k)\cos\bigl((2n+1)(k_i-k_u)\bigr)
\]
\[
= \lim_{N\to\infty}\frac{\Delta^2}{\pi^2}\sum_{k=1}^{\infty}\sum_{h=1}^{\infty}\frac{(-1)^{h+k}}{hk}\sum_{n=0}^{\infty}J_{2n+1}(z_h)J_{2n+1}(z_k)\left[\frac{1}{N^2}\sum_{i=0}^{N-1}\sum_{u=0}^{N-1}\cos\bigl((2n+1)(k_i-k_u)\bigr)\cos(k_i-k_u)\right]. \qquad (C.23)
\]

By using Euler's formula it can be proven that, when $N\ge3$, the rightmost double summation in square brackets in (C.23) is $0$ unless $n$ equals $0$, when it becomes equal to $1/2$. Thus (C.23) becomes

\[
\lim_{N\to\infty}h(A,\Delta,N) = \frac12\frac{\Delta^2}{\pi^2}\sum_{k=1}^{\infty}\sum_{h=1}^{\infty}\frac{(-1)^{h+k}}{hk}J_1(z_h)J_1(z_k)
= \frac12\left(\frac{\Delta}{\pi}\sum_{h=1}^{\infty}\frac{(-1)^h}{h}J_1(z_h)\right)^2 = \frac{g^2(A,\Delta)}{2}. \qquad (C.24)
\]

Consequently, when $N\to\infty$,

\[
\operatorname{bias}(A,\Delta,N) \to \operatorname{bias}(A,\Delta) := 4g(A,\Delta)\,\bigl[A+g(A,\Delta)\bigr]. \qquad (C.25)
\]

APPENDIX D
FREQUENCY DOMAIN APPROACH: VARIANCE OF THE SQUARE AMPLITUDE ESTIMATOR

In this appendix, we provide expressions for the variance of the square amplitude estimator and analyze its asymptotic behavior when $N\to\infty$. From (7), we have:

\[
\bigl(\hat A^2\bigr)^2 = \frac{16}{N^4}\sum_{i=0}^{N-1}\sum_{u=0}^{N-1}\sum_{h=0}^{N-1}\sum_{l=0}^{N-1} z_{iuhl}\cos(k_i-k_u)\cos(k_h-k_l), \qquad (D.1)
\]

where $z_{iuhl} := y_iy_uy_hy_l$.
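The reduction (C.22) of the odd-order Bessel product series to two $J_0$ evaluations can be verified numerically; a sketch with illustrative arguments (the pure-Python `bessel_j` helper and the chosen values of `zk`, `zh`, `theta` are ours):

```python
import math

def bessel_j(n, x, terms=40):
    """J_n(x) via its ascending power series (adequate for moderate |x|)."""
    return sum((-1) ** m * (x / 2) ** (2 * m + n)
               / (math.factorial(m) * math.factorial(m + n)) for m in range(terms))

zk, zh, theta = 2.0, 3.0, 0.7      # illustrative arguments

# left side of (C.22): truncated odd-order product series
lhs = sum(bessel_j(2 * n + 1, zk) * bessel_j(2 * n + 1, zh)
          * math.cos((2 * n + 1) * theta) for n in range(12))

# right side: two J_0 evaluations at the two "triangle" radii
R    = math.sqrt(zh ** 2 + zk ** 2 - 2 * zh * zk * math.cos(theta))
Rbar = math.sqrt(zh ** 2 + zk ** 2 + 2 * zh * zk * math.cos(theta))
rhs = (bessel_j(0, R) - bessel_j(0, Rbar)) / 4
```

At these argument sizes the series terms beyond $n\approx10$ are negligible, so `lhs` and `rhs` agree to near machine precision; in the paper this reduction makes (C.21) much cheaper to sum numerically.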
By expanding each output sequence as the sum of the input and the quantization error we obtain

\[
z_{iuhl} = (s_i+e_i(s_i))(s_u+e_u(s_u))(s_h+e_h(s_h))(s_l+e_l(s_l)) = z_4+z_3+z_2+z_1+z_0,
\]

where

\[
\begin{aligned}
z_4 &:= s_is_us_hs_l\\
z_3 &:= s_is_us_he_l(s_l)+s_is_ue_h(s_h)s_l+s_ie_u(s_u)s_hs_l+e_i(s_i)s_us_hs_l\\
z_2 &:= s_is_ue_h(s_h)e_l(s_l)+s_ie_u(s_u)s_he_l(s_l)+s_ie_u(s_u)e_h(s_h)s_l\\
&\qquad+e_i(s_i)s_us_he_l(s_l)+e_i(s_i)s_ue_h(s_h)s_l+e_i(s_i)e_u(s_u)s_hs_l\\
z_1 &:= s_ie_u(s_u)e_h(s_h)e_l(s_l)+e_i(s_i)s_ue_h(s_h)e_l(s_l)+e_i(s_i)e_u(s_u)s_he_l(s_l)+e_i(s_i)e_u(s_u)e_h(s_h)s_l\\
z_0 &:= e_i(s_i)e_u(s_u)e_h(s_h)e_l(s_l).
\end{aligned} \qquad (D.2)
\]

All of the terms in (D.2) depend on $N$, $\varphi$ and $\lambda$. The expected value of (D.1) is the sum of the expected values of the terms $z_i$, $i=0,1,2,3,4$, each of which is in turn the sum of the expected values of the products in its definition above. The term $z_4$, when multiplied by $\frac{16}{N^4}\cos(k_i-k_u)\cos(k_h-k_l)$ and summed over the four indices $i,u,h,l$, provides the value $A^4$. Each term in (D.2) is the product of four factors, each of which is either the input signal or the quantization error. Their analysis can therefore be carried out in a similar way, independently of the number of times the signal or the quantization error appears in each term. As an example, it is shown how to find an expression for the term $s_ie_u(s_u)s_he_l(s_l)$ in $z_2$, in (D.2).
Because of (C.10) and (C.11) we have

\[
\begin{aligned}
s_i &= -A\cos(k_i+\varphi), & k_i &:= \lambda\frac{2\pi i}{N},\\
e_u(s_u) &= -\frac{2\Delta}{\pi}\sum_{k=1}^{\infty}\sum_{r=0}^{\infty}\frac{(-1)^{k+r}}{k}J_{2r+1}(z_k)\cos\bigl((2r+1)(k_u+\varphi)\bigr), & k_u &:= \lambda\frac{2\pi u}{N},\quad z_k := \frac{2\pi kA}{\Delta},\\
s_h &= -A\cos(k_h+\varphi), & k_h &:= \lambda\frac{2\pi h}{N},\\
e_l(s_l) &= -\frac{2\Delta}{\pi}\sum_{k=1}^{\infty}\sum_{t=0}^{\infty}\frac{(-1)^{k+t}}{k}J_{2t+1}(z_k)\cos\bigl((2t+1)(k_l+\varphi)\bigr), & k_l &:= \lambda\frac{2\pi l}{N},
\end{aligned} \qquad (D.3)
\]

where the fact that $J_n(\cdot)$ is an odd function of its argument when $n$ is odd has been exploited. By multiplying all terms together and by $\frac{16}{N^4}\cos(k_i-k_u)\cos(k_h-k_l)$, and by summing over $i,u,h,l$, we obtain:

\[
\frac{16}{N^4}\sum_{i,u,h,l=0}^{N-1} s_ie_u(s_u)s_he_l(s_l)\cos(k_i-k_u)\cos(k_h-k_l)
= \frac{64A^2\Delta^2}{\pi^2N^4}\sum_{i,u,h,l=0}^{N-1}\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\frac{(-1)^{n+m}}{nm}\sum_{r=0}^{\infty}\sum_{t=0}^{\infty}(-1)^{r+t}J_{2r+1}(z_n)J_{2t+1}(z_m)
\]
\[
\cdot\bigl\{\cos(k_i+\varphi)\cos\bigl((2r+1)(k_u+\varphi)\bigr)\cos(k_h+\varphi)\cos\bigl((2t+1)(k_l+\varphi)\bigr)\bigr\}\cos(k_i-k_u)\cos(k_h-k_l). \qquad (D.4)
\]

The term between curly brackets depends on $\varphi$, so that, by taking the expectation, we have:

\[
E\Bigl\{\frac{16}{N^4}\sum_{i,u,h,l=0}^{N-1} s_ie_u(s_u)s_he_l(s_l)\cos(k_i-k_u)\cos(k_h-k_l)\Bigr\}
= \frac{64A^2\Delta^2}{\pi^2N^4}\sum_{i,u,h,l=0}^{N-1}\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\frac{(-1)^{n+m}}{nm}\sum_{r=0}^{\infty}\sum_{t=0}^{\infty}(-1)^{r+t}J_{2r+1}(z_n)J_{2t+1}(z_m)\,E\bigl(C(1,2r+1,1,2t+1)\bigr)\cos(k_i-k_u)\cos(k_h-k_l), \qquad (D.5)
\]

where the expectation operation has been interchanged with the limit in the infinite series because of the applicability of the dominated convergence theorem. The expected value within (D.5) is different from zero only for certain combinations of the indices $r$ and $t$, so that it behaves like a sieve that filters some of the terms in the double summation over those indices.
The remaining combinations of indices satisfy the Diophantine equations in (C.3) and may lead to simplified versions of expressions such as (D.5) that are faster to sum numerically. The method taken here is applicable to any of the terms in $z_{iuhl}$; however, this last simplification approach becomes more cumbersome for the terms in $z_0$ and $z_1$. Expression (D.5) provides meaningful information when $N\to\infty$. In this case, the summations over the indices $i,u,h,l$ can be interchanged to provide:

\[
\lim_{N\to\infty} E\Bigl\{\frac{16}{N^4}\sum_{i,u,h,l=0}^{N-1} s_ie_u(s_u)s_he_l(s_l)\cos(k_i-k_u)\cos(k_h-k_l)\Bigr\}
= \lim_{N\to\infty}\frac{64A^2\Delta^2}{\pi^2N^4}\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\frac{(-1)^{n+m}}{nm}\sum_{r=0}^{\infty}\sum_{t=0}^{\infty}(-1)^{r+t}J_{2r+1}(z_n)J_{2t+1}(z_m)\sum_{i,u,h,l=0}^{N-1} E\bigl(C(1,2r+1,1,2t+1)\bigr)\cos(k_i-k_u)\cos(k_h-k_l). \qquad (D.6)
\]

Because of (C.3) and (C.8), the rightmost summations in (D.6) vanish unless $r=0$ and $t=0$, in which case we have:

\[
\lim_{N\to\infty} E\Bigl\{\frac{16}{N^4}\sum_{i,u,h,l=0}^{N-1} s_ie_u(s_u)s_he_l(s_l)\cos(k_i-k_u)\cos(k_h-k_l)\Bigr\}
= 4A^2\left(\frac{\Delta}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^n}{n}J_1(z_n)\right)^2 = 4A^2g^2(A,\Delta). \qquad (D.7)
\]

This approach applies to any combination of signal and errors in the terms of $z_{iuhl}$, so that when $N\to\infty$ we obtain:

\[
z_4\to A^4,\quad z_3\to 8A^3g(A,\Delta),\quad z_2\to 24A^2g^2(A,\Delta),\quad z_1\to 32Ag^3(A,\Delta),\quad z_0\to 16g^4(A,\Delta), \qquad (D.8)
\]

and

\[
E\bigl((\hat A^2)^2\bigr)\Big|_{N\to\infty} = A^4+8A^3g(A,\Delta)+24A^2g^2(A,\Delta)+32Ag^3(A,\Delta)+16g^4(A,\Delta), \qquad (D.9)
\]

which is equal to the square of

\[
E(\hat A^2)\Big|_{N\to\infty} = A^2+4Ag(A,\Delta)+4g^2(A,\Delta), \qquad (D.10)
\]

obtained from (C.25). Thus, when $N\to\infty$, the variance of $\hat A^2$ vanishes.

APPENDIX E
SUM OF g(A,Δ)

Because of its role, it is shown next how to sum the series in (C.20). Two approaches will be taken.
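The asymptotic mean (D.10), $E(\hat A^2)\to(A+2g)^2$, lends itself to a direct simulation check: quantize the sine wave, form the least-squares square-amplitude estimate, and average over a dense grid of record phases. A minimal sketch, assuming the uniform rounding (mid-tread) quantizer whose error admits the sawtooth series of (C.10), with illustrative values of `A`, `DELTA`, `N`, and `LAM`; here $g$ is obtained from its defining average $E(e(s)s)=Ag$ rather than from the Bessel series:

```python
import math

A, DELTA, N, LAM = 1.0, 0.5, 128, 1
k = [2 * math.pi * LAM * i / N for i in range(N)]

def quantize(x):
    """Uniform rounding (mid-tread) quantizer with step DELTA."""
    return DELTA * math.floor(x / DELTA + 0.5)

def a2_hat(phi):
    """Least-squares square-amplitude estimate for one record phase."""
    y = [quantize(-A * math.cos(ki + phi)) for ki in k]
    c = sum(yi * math.cos(ki) for yi, ki in zip(y, k))
    s = sum(yi * math.sin(ki) for yi, ki in zip(y, k))
    return (2 / N) ** 2 * (c * c + s * s)

M = 4000                                     # phase grid approximating E(.)
mean_a2 = sum(a2_hat(2 * math.pi * m / M) for m in range(M)) / M

# g(A, DELTA) from its defining average E(e(s) s) = A g, cf. (C.13)
g = sum((quantize(-A * math.cos(p)) + A * math.cos(p)) * (-A * math.cos(p))
        for p in (2 * math.pi * m / M for m in range(M))) / (M * A)

# (D.10): for large N, E(A^2_hat) ~ A^2 + 4Ag + 4g^2 = (A + 2g)^2
```

With these settings the empirical mean of $\hat A^2$ matches $(A+2g)^2$ to within a small fraction of the bias itself, illustrating both that the bias is nonzero at finite resolution and that the phase-to-phase spread of $\hat A^2$ shrinks as $N$ grows.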
It can be directly observed that $g(A,\Delta)$ is a Schlömilch series, and its sum is provided by the Nielsen formula as follows [29]:

\[
g(A,\Delta) = \frac{\Delta}{\pi}\sum_{k=1}^{\infty}\frac{(-1)^k}{k}J_1(z_k)
= \frac{\Delta}{\pi}\left[-\frac{x}{2}+\frac{\sqrt{\pi}}{x\,\Gamma\!\left(\frac32\right)}\sum_{k=1}^{p}\left(x^2-\Bigl(k-\frac12\Bigr)^2\pi^2\right)^{\frac12}\right], \qquad (E.1)
\]

where $x$ and $p$ are defined in (11). By observing that $\Gamma(\frac32)=\frac{\sqrt\pi}{2}$, when $p$ is given, the derivative of (E.1) with respect to $A$ is

\[
g'(A,\Delta,p) := \frac{\partial g(A,\Delta)}{\partial A}
= -\frac12+\frac{2\Delta}{\pi}\sum_{k=1}^{p}\frac{\bigl(k-\frac12\bigr)^2\Delta^2}{A^3\sqrt{1-\frac{\bigl(k-\frac12\bigr)^2\Delta^2}{A^2}}}, \qquad \Bigl(p-\frac12\Bigr)\Delta\le A<\Bigl(p+\frac12\Bigr)\Delta. \qquad (E.2)
\]

From (E.2) we have:

\[
\lim_{A\to\left(\left(p-\frac12\right)\Delta\right)^+} g'(A,\Delta,p) = +\infty, \qquad p=1,2,\ldots
\]

The numerical evaluation of (E.2), done by assuming $1\le p\le 2^{20}$, shows that $g'\bigl(\bigl(p-\frac12\bigr)\Delta,\Delta,p-1\bigr)$ converges to $0$ for increasing values of $p$, remaining always negative. Thus, when $A=\bigl(p-\frac12\bigr)\Delta$, (E.1) is locally minimized. By substituting these values in (E.1), the sequence of the minima of $g(A,\Delta)$ is:

\[
g\Bigl(\Bigl(p-\frac12\Bigr)\Delta,\Delta\Bigr) = \Bigl(\frac12-p\Bigr)\frac{\Delta}{2}+\frac{2\Delta}{\pi}\sum_{k=1}^{p-1}\sqrt{1-\frac{\bigl(k-\frac12\bigr)^2}{\bigl(p-\frac12\bigr)^2}}, \qquad p=0,1,\ldots, \qquad (E.3)
\]

which represents the discrete envelope of the negative peaks in $g(A,\Delta)$. Since, when $\Delta\to0$, $g^2(A,\Delta)\ll A\,|g(A,\Delta)|$, from (C.25) we can write an expression for the sequence of its minima, that is, the lower discrete envelope of the graphs in Fig. 4:

\[
\operatorname{env}(p,\Delta)\Big|_{\substack{N\to\infty\\ \Delta\to0}} \simeq 4Ag\Bigl(\Bigl(p-\frac12\Bigr)\Delta,\Delta\Bigr) =: B_2(\Delta), \qquad p=0,1,\ldots \qquad (E.4)
\]

Alternatively, the sum in (E.1) can be calculated by using the same approach taken in [30]. Assume $n$ odd. Then, from [28],

\[
J_n(x) = \frac{2}{\pi}\int_0^{\pi/2}\sin(n\theta)\sin(x\sin\theta)\,d\theta,
\]

and, with $\gamma$ a positive real number,

\[
\sum_{k=1}^{\infty}\frac{(-1)^k}{k}J_n(2\pi\gamma k)
= \frac{2}{\pi}\int_0^{\pi/2}\sin(n\theta)\sum_{k=1}^{\infty}\frac{(-1)^k}{k}\sin(2\pi k\gamma\sin\theta)\,d\theta
= \frac{2}{\pi}\int_0^{\pi/2}\sin(n\theta)\left[\frac{\pi}{2}-\pi\Bigl\langle\gamma\sin\theta+\frac12\Bigr\rangle\right]d\theta, \qquad (E.5)
\]

where $\langle\cdot\rangle$ denotes the fractional part. Since

\[
\Bigl\langle\gamma\sin\theta+\frac12\Bigr\rangle = \gamma\sin\theta+\frac12-\Bigl\lfloor\gamma\sin\theta+\frac12\Bigr\rfloor,
\]

we have

\[
\sum_{k=1}^{\infty}\frac{(-1)^k}{k}J_n(2\pi\gamma k)
= -2\gamma\int_0^{\pi/2}\sin(n\theta)\sin\theta\,d\theta + 2\int_0^{\pi/2}\sin(n\theta)\Bigl\lfloor\gamma\sin\theta+\frac12\Bigr\rfloor d\theta.
\]

The leftmost integral is $0$ unless $n=1$.
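The closed form (E.1) can be cross-checked numerically against the defining phase average $E(e(s)s)=Ag$ of (C.13). This sketch assumes a particular reading of the quantities in (11), which is not reproduced in this excerpt: we take $x=\pi A/\Delta$ and $p=\lfloor A/\Delta+\frac12\rfloor$, and it further assumes the uniform rounding quantizer whose error is the sawtooth of (C.10); both function names are ours:

```python
import math

def g_closed(A, delta):
    """(E.1) under the assumed reading x = pi*A/delta, p = floor(A/delta + 1/2)."""
    p = math.floor(A / delta + 0.5)
    return -A / 2 + (2 * delta / math.pi) * sum(
        math.sqrt(1 - ((k - 0.5) * delta / A) ** 2) for k in range(1, p + 1))

def g_direct(A, delta, M=200_000):
    """g from its defining phase average: E(e(s) s) = A g, cf. (C.13)."""
    acc = 0.0
    for m in range(M):
        s = -A * math.cos(2 * math.pi * m / M)
        e = delta * math.floor(s / delta + 0.5) - s   # quantization error e(s)
        acc += e * s
    return acc / (M * A)
```

For parameter pairs away from the cell boundaries $A=(p\pm\frac12)\Delta$, the two evaluations agree closely, which supports the assumed reading of (E.1); the finite sum is, of course, far cheaper than either the Bessel series or the phase average.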
In this case, it becomes equal to $\pi/4$. Hence,

\[
\sum_{k=1}^{\infty}\frac{(-1)^k}{k}J_n(2\pi\gamma k) = -\gamma\frac{\pi}{2}\delta_{n-1} + 2\int_0^{\pi/2}\sin(n\theta)\Bigl\lfloor\gamma\sin\theta+\frac12\Bigr\rfloor d\theta,
\]

where $\delta_n$ is equal to $1$ when $n=0$ and $0$ otherwise. With $k$ integer,

\[
\Bigl\lfloor\gamma\sin\theta+\frac12\Bigr\rfloor = k \;\Longrightarrow\; k\le\gamma\sin\theta+\frac12<k+1,
\]

and, assuming $0\le\theta\le\pi/2$, one obtains

\[
\arcsin\frac{k-\frac12}{\gamma}\le\theta<\arcsin\frac{k+\frac12}{\gamma}, \qquad 0\le k\le K,
\]

where $K$ is the largest integer $k$ such that $\frac{k+\frac12}{\gamma}\le1$, that is, $K=\bigl\lfloor\gamma-\frac12\bigr\rfloor$. When $\gamma-\frac12$ is not an integer, define

\[
b_k := \begin{cases} \arcsin\dfrac{k-\frac12}{\gamma}, & 0<k\le K+1,\\[4pt] 0, & k=0,\\[4pt] \dfrac{\pi}{2}, & k=K+2, \end{cases}
\]

and $\bar K:=K+1$. Conversely, if $\gamma-\frac12$ is an integer, define

\[
b_k := \begin{cases} \arcsin\dfrac{k-\frac12}{\gamma}, & 0<k\le K+1,\\[4pt] 0, & k=0, \end{cases}
\]

so that $b_{K+1}=\pi/2$, and $\bar K:=K$. Then, for odd $n$,

\[
\sum_{k=1}^{\infty}\frac{(-1)^k}{k}J_n(2\pi\gamma k)
= -\gamma\frac{\pi}{2}\delta_{n-1} + 2\int_0^{\pi/2}\sin(n\theta)\Bigl\lfloor\gamma\sin\theta+\frac12\Bigr\rfloor d\theta
= -\gamma\frac{\pi}{2}\delta_{n-1} - \frac{2}{n}\sum_{k=0}^{\bar K}k\,\bigl[\cos(nb_{k+1})-\cos(nb_k)\bigr]. \qquad (E.6)
\]

By taking $n=1$ in (E.6), an alternative expression for $g(A,\Delta)$ can easily be obtained. Neither (E.1) nor (E.6) provides direct information on the rate of convergence of $g(A,\Delta)$ to $0$ when $\Delta\to0$, as expected. A somewhat loose upper bound can be obtained by considering that [31]

\[
|J_1(x)|\le\frac{c}{|x|^{1/3}}, \qquad c = 0.7857\ldots
\]

Consequently, when $A>0$,

\[
|g(A,\Delta)|\le\frac{\Delta}{\pi}\sum_{k=1}^{\infty}\frac1k\,\frac{c}{\bigl(\frac{2\pi kA}{\Delta}\bigr)^{1/3}} = B(A,\Delta),
\]

where $B(A,\Delta)$ is defined in (14). Therefore, under the assumption that $N\to\infty$,

\[
|\operatorname{bias}(A,\Delta)|\le 4AB(A,\Delta)+4B(A,\Delta)^2 =: B_1(A,\Delta), \qquad (E.7)
\]

and $\operatorname{bias}(A,\Delta)\sim O\bigl(\Delta^{\frac43}\bigr)$ when $\Delta\to0$, for a given value of $A$. Observe also that $B_1(A,\Delta)$ is minimum when $A=\frac12$.

REFERENCES

[1] B. Widrow and I. Kollár, Quantization Noise, Cambridge University Press, 2008.
[2] IEEE, Standard for Terminology and Test Methods for Analog-to-Digital Converters, IEEE Std. 1241, Aug. 2009.
[3] R. Pintelon and J. Schoukens, System Identification: A Frequency Domain Approach, 2nd ed., IEEE Press, John Wiley and Sons, Inc.
Publication, New Jersey, USA, 2012.
[4] S. M. Kay, Fundamentals of Statistical Signal Processing, Prentice-Hall, 1998.
[5] F. Correa Alegria, "Bias of amplitude estimation using three-parameter sine fitting in the presence of additive noise," IMEKO Measurement, 2 (2009), pp. 748–756.
[6] J. J. Blair and T. E. Linnenbrink, "Corrected RMS Error and Effective Number of Bits for Sine Wave ADC Tests," Computer Standards & Interfaces, Elsevier, vol. 26, pp. 43–49, 2003.
[7] K. Hejn and A. Pacut, "Effective Resolution of Analog to Digital Converters – Evolution of Accuracy," IEEE Instr. Meas. Magazine, pp. 48–55, Sept. 2003.
[8] M. Bertocco, C. Narduzzi, P. Paglierani, and D. Petri, "A Noise Model for Digitized Data," IEEE Trans. Instr. Meas., vol. 49, no. 1, pp. 83–86, Feb. 2000.
[9] P. Handel, "Amplitude estimation using IEEE-STD-1057 three-parameter sine wave fit: Statistical distribution, bias and variance," IMEKO Measurement, 43 (2010), pp. 766–770.
[10] I. Kollár and J. Blair, "Improved Determination of the Best Fitting Sine Wave in ADC Testing," IEEE Trans. Instr. Meas., vol. 54, no. 5, pp. 1978–1983, Oct. 2005.
[11] B. P. Ginsburg and A. P. Chandrakasan, "500-MS/s 5-bit ADC in 65-nm CMOS With Split Capacitor Array DAC," IEEE Journ. of Solid-State Circuits, vol. 42, no. 4, pp. 739–747, April 2007.
[12] M. Harwood, N. Warke, R. Simpson, T. Leslie et al., "A 12.5Gb/s SerDes in 65nm CMOS Using a Baud-Rate ADC with Digital Receiver Equalization and Clock Recovery," Digest of Technical Papers, Solid-State Circuits Conference, 11–15 Feb. 2007, pp. 436–437 and 613.
[13] Z. Cao, S. Yan, and Y. Li, "A 4 GSample/s 8b ADC in 0.35 µm CMOS," Digest of Technical Papers, Solid-State Circuits Conference, 3–7 Feb. 2008, pp. 541–542 and 634.
[14] K. Poulton, R. Neff, A. Muto, W. Liu, A. Burstein, M.
Heshami, "A 4 GSample/s 8b ADC in 0.35 µm CMOS," Digest of Technical Papers, Solid-State Circuits Conference, 5–7 Feb. 2002, pp. 166–167.
[15] P. M. Ramos, M. Fonseca da Silva, and A. Cruz Serra, "Low Frequency Impedance Measurements using Sine-Fitting," IMEKO Measurement, 35 (2004), pp. 89–96.
[16] R. A. Wannamaker, S. P. Lipshitz, J. Vanderkooy, and J. N. Wright, "A Theory of Nonsubtractive Dither," IEEE Trans. on Signal Processing, vol. 48, no. 2, pp. 499–516, Feb. 2000.
[17] JCGM 100:2008, Evaluation of measurement data – Guide to the Expression of Uncertainty in Measurement. [Online]. Available: www.bipm.org/utils/common/documents/jcgm/JCGM_100_2008_E.pdf
[18] P. Carbone and D. Petri, "Effect of Additive Dither on the Resolution of Ideal Quantizers," IEEE Trans. Instr. Meas., vol. 43, no. 3, pp. 389–396, June 1994.
[19] E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen, LAPACK Users' Guide, SIAM, 1999.
[20] A. Flammini, D. Marioli, E. Sisinni, and A. Taroni, "A Multichannel DSP-Based Instrument for Displacement Measurement Using Differential Variable Reluctance Transducer," IEEE Trans. Instr. Meas., vol. 54, no. 1, pp. 178–182, Feb. 2005.
[21] S.-T. Wu and J.-L. Hong, "Five-Point Amplitude Estimation of Sinusoidal Signals: With Application to LVDT Signal Conditioning," IEEE Trans. Instr. Meas., vol. 59, no. 3, pp. 623–630, March 2010.
[22] P. Carbone and G. Chiorboli, "ADC Sine Wave Histogram Testing with Quasi-Coherent Sampling," IEEE Trans. Instr. Meas., vol. 50, no. 4, pp. 949–953, Aug. 2001.
[23] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions, New York, 1970.
[24] M. Kendall and A. Stuart, Advanced Theory of Statistics, London: Charles Griffin & Company, 1977.
[25] A. N. Kolmogorov and S. V.
Fomin, Elements of the Theory of Functions and Functional Analysis, Dover Publications, 1999.
[26] A. B. Kokkeler and A. W. Gunst, "Modeling Correlation of Quantized Noise and Periodic Signals," IEEE Signal Processing Letters, vol. 11, no. 10, pp. 802–805, 2004.
[27] W. Hurd, "Correlation Function of Quantized Sine Wave Plus Gaussian Noise," IEEE Trans. on Information Theory, vol. IT-13, no. 1, Jan. 1967.
[28] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, 7th ed., edited by A. Jeffrey and D. Zwillinger, Academic Press, 2007.
[29] G. N. Watson, A Treatise on the Theory of Bessel Functions, Cambridge University Press, 1962.
[30] R. M. Gray, "Quantization Noise Spectra," IEEE Trans. Inf. Theory, vol. 36, no. 6, pp. 1220–1224, Nov. 1990.
[31] L. J. Landau, "Bessel Functions: Monotonicity and Bounds," J. London Math. Soc., (2), 61, pp. 197–215, 2000.