Robust Fitting of Ellipses and Spheroids


Authors: Jieqi Yu, Sanjeev R. Kulkarni, H. Vincent Poor

Jieqi Yu, Sanjeev R. Kulkarni, and H. Vincent Poor
Department of Electrical Engineering, Princeton University, Princeton, NJ 08544
Email: jieqiyu@princeton.edu, kulkarni@princeton.edu, poor@princeton.edu. Phone: 609-258-6868.

Abstract—Ellipse and ellipsoid fitting has been extensively researched and widely applied. Although traditional fitting methods provide accurate estimation of ellipse parameters in the low-noise case, their performance is compromised when the noise level or the ellipse eccentricity is high. A series of robust fitting algorithms are proposed that perform well in high-noise, high-eccentricity ellipse/spheroid (a special class of ellipsoid) cases. The new algorithms are based on the geometric definition of an ellipse/spheroid and are improved using global statistical properties of the data. The efficacy of the new algorithms is demonstrated through simulations.

I. INTRODUCTION

Ellipse and ellipsoid fitting has been extensively studied and has broad applications. For example, an ellipse can serve as a geometric primitive model in computer vision and pattern recognition, which allows reduction and simplification of data for higher-level processing. Ellipse fitting is also essential in applied sciences from observational astronomy (orbit estimation) to structural geology (strain in natural rocks). Moreover, ellipsoid fitting (e.g., the minimum volume ellipsoid estimator [10]) is a useful tool in outlier detection.

In this paper, we consider the following issues. In Section II, the ellipse fitting problem is formulated and several important algorithms are reviewed.
In Section III, a new objective function based on the geometric definition of an ellipse is introduced, and it is further extended to three ellipse fitting algorithms in Section IV. Then a spheroid fitting algorithm is proposed in Section V, which is followed by the experimental results in Section VI.

II. FORMULATION AND BACKGROUND

As a classical problem, ellipse fitting has a rich literature. Various algorithms have been proposed from very different perspectives. We start our discussion by reviewing several classes of the most important ellipse fitting algorithms.

The problem of ellipse fitting can be modeled as follows. We have data points {z_i = (x_i, y_i)}, i = 1, ..., n, which are points on an ellipse corrupted by noise. The objective is to fit an ellipse with unknown parameters β to the data points, so that the total error is minimized. The measure of error differs for different classes of algorithms.

(This research was supported in part by the U.S. Office of Naval Research under grant numbers N00014-07-1-0555 and N00014-09-1-0342, and in part by the U.S. Army Research Office under grant number W911NF-07-1-0185.)

The most intuitive class of ellipse fitting algorithms is algebraic fitting, which uses the algebraic error as a measure of error. An ellipse can be described by an implicit function P(β) = Ax² + Bxy + Cy² + Dx + Ey + F = 0, where β = (A, B, C, D, E, F) denotes the ellipse parameters. The algebraic error from a data point z_i = (x_i, y_i) to the ellipse is thus Ax_i² + Bx_iy_i + Cy_i² + Dx_i + Ey_i + F.

The most efficient and widely used algorithm in this category was proposed by Fitzgibbon et al. [7]. It is a least squares optimization problem with respect to the algebraic criterion:

  min_β Σ_{i=1}^n (Ax_i² + Bx_iy_i + Cy_i² + Dx_i + Ey_i + F)²   (1)
  s.t. 4AC − B² = 1.
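The reduction of (1) to a generalized eigenvalue problem can be illustrated with a short NumPy sketch. This is an illustration in the spirit of Fitzgibbon's method, not the implementation used in the paper; the helper name `fit_ellipse_direct` is ours.

```python
import numpy as np

def fit_ellipse_direct(x, y):
    """Direct least-squares ellipse fit in the spirit of Fitzgibbon et al.:
    minimize the algebraic error subject to 4AC - B^2 = 1."""
    # Design matrix: one row [x^2, xy, y^2, x, y, 1] per data point.
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D  # scatter matrix
    C = np.zeros((6, 6))  # constraint matrix encoding 4AC - B^2 = 1
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    # S a = lambda C a  <=>  (S^-1 C) a = (1/lambda) a; the ellipse solution
    # corresponds to the single positive eigenvalue.
    w, v = np.linalg.eig(np.linalg.solve(S, C))
    a = v[:, np.argmax(w.real)].real
    return a / np.linalg.norm(a)  # (A, B, C, D, E, F) up to scale and sign
```

For well-spread, mildly noisy data this recovers the conic up to scale; the numerically stable variant of Halir and Flusser [8] avoids the nearly singular solve that occurs as the noise vanishes.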
The constraint ensures that the optimization result is an ellipse instead of a general conic section, and it also prevents the free scaling of the parameters. This optimization problem is further reduced to a generalized eigenvalue problem, which can be efficiently solved. In addition, Halir et al. improved the algorithm into a numerically stable version [8]. Algebraic fitting has the advantage of low computational complexity. However, the algebraic criterion lacks a geometric interpretation, and the algorithm is difficult to generalize to three dimensions due to its nonlinear constraint.

To overcome the shortcomings of algebraic fitting, Ahn et al. proposed orthogonal least squares fitting (OLSF) [2]. OLSF minimizes the sum of squared Euclidean distances, where the distance is the orthogonal distance from the data point to the ellipse:

  min_β Σ_{i=1}^n ||z_i − z'_i(β)||²,   (2)

where z'_i(β) is the orthogonal contacting point, i.e., the point on the ellipse that has the shortest distance to the corresponding data point. OLSF has a clear geometric interpretation and features high accuracy. Moreover, it can be generalized to the three-dimensional case [1]. Unfortunately, OLSF is computationally intensive. It employs the iterative Gauss-Newton algorithm, and in each iteration the orthogonal contacting point has to be found for each data point by a procedure that is itself iterative.

Various extensions to OLSF have also been proposed. Angular information is incorporated into the OLSF algorithm in Watson's 2002 paper [12], in which the orthogonal geometric distance is replaced by the distance along the known measurement angle. Moreover, instead of the l2 norm in OLSF, the l1, l∞ and lp norms have been considered as well [11] [3] [4].

The third class of algorithms consists of maximum likelihood (ML) algorithms, which were proposed in [5] and [9].
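The orthogonal distance in (2) has no closed form, which is why OLSF must locate the contacting point iteratively. For intuition only, a brute-force numerical stand-in (dense sampling of the ellipse parameterization; the function name is ours) shows what is being minimized:

```python
import numpy as np

def orthogonal_distance(z, center, a, b, phi, num=100000):
    """Approximate the orthogonal distance from point z to an ellipse
    (center, semi-axes a and b, rotation phi) by densely sampling the
    parameterization. A brute-force stand-in for the iterative
    contact-point search used inside OLSF."""
    t = np.linspace(0.0, 2.0 * np.pi, num, endpoint=False)
    # Points on the ellipse: rotate the axis-aligned parameterization.
    ex = center[0] + a * np.cos(t) * np.cos(phi) - b * np.sin(t) * np.sin(phi)
    ey = center[1] + a * np.cos(t) * np.sin(phi) + b * np.sin(t) * np.cos(phi)
    d = np.hypot(ex - z[0], ey - z[1])
    k = np.argmin(d)
    return d[k], (ex[k], ey[k])  # distance and approximate contacting point
```

Replacing this grid search with a per-point Gauss-Newton iteration, nested inside the outer parameter iteration, is exactly what makes OLSF computationally intensive.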
The key steps of the two ML algorithms, the fundamental numerical scheme [5] and the heteroscedastic errors-in-variables scheme [9], have been proven to be equivalent in [6]. In [5], Chojnacki et al. assume that the data points are independent and have a multivariate normal distribution: z_i ~ N(z'_i, Λ_{z_i}). The ML solution is then reduced to an optimization problem based on the Mahalanobis distance:

  min_β Σ_{i=1}^n (z_i − z'_i(β))ᵀ Λ_{z_i}⁻¹ (z_i − z'_i(β)).   (3)

The fundamental numerical scheme solves a variational equation iteratively, solving an eigenvalue problem at each iteration until convergence. The ML algorithms are accurate with moderate computational cost. However, when the noise or the eccentricity of the ellipse is large, the algorithm breaks down, because when any data point is close to the center of the estimated ellipse, one of the matrices in the algorithm has elements that tend to infinity.

All three classes of algorithms described above have their advantages and perform well when the noise level is low. However, they share a common disadvantage of not being robust enough for highly noisy data. In this paper, we propose a series of algorithms that are resistant to large noise and can be generalized to three dimensions easily, with competitive accuracy and moderate computational cost.

III. A NEW GEOMETRIC OBJECTIVE FUNCTION

Note that both the OLSF and ML algorithms estimate the nuisance parameters z'_i (the point on the ellipse that generates the data point) in addition to the parameters of the ellipse. It is desirable to bypass these nuisance parameters and have a more intrinsic fitting method with a clear geometric interpretation.

Recall the geometric definition of an ellipse: an ellipse is the locus of points such that the sum of the distances from each point to two fixed points (called the foci of the ellipse) is constant.
That is, z is a point on the ellipse if and only if

  ||z − c1|| + ||z − c2|| = 2a,   (4)

where ||·|| denotes the l2 norm, c1 and c2 are the two foci, and a is the length of the semi-major axis. Based on the geometric definition, the ellipse fitting problem can be naturally formulated as an optimization problem with a new geometric objective function:

  min_{c1, c2, a} (1/n) Σ_{i=1}^n (||z_i − c1|| + ||z_i − c2|| − 2a)²,   (5)

where n denotes the number of data points.

The objective function in (5) has several advantages. First, it has a clear geometric interpretation: it is the expected squared difference between the sum of the distances from the data points to the foci and the length of the major axis. Second, the parameters in the objective function, c1, c2 and a, are intrinsic parameters of ellipses, which are translation and rotation invariant. Third, the objective function is ellipse specific, so no extra constraints are needed; as a result, the minimization problem can be readily solved by gradient descent algorithms. Lastly, and most importantly, the objective function possesses one more property that contributes to the robustness of our algorithm, which we demonstrate by comparison with the objective function of OLSF.

Fig. 1. Derivation of the expected contribution of a noisy data point to our objective function.

Assume that the data points are independent and have a multivariate normal distribution: z_i ~ N(z'_i, Λ_{z_i}). The expected contribution of a noisy data point to the objective function of OLSF is approximately

  E(||z_i − z'_i||²) ≈ σ².   (6)

Note that this is homogeneous for all the data points. However, for our objective function, the expected contribution can be approximated as (see Fig. 1)

  E[f_{z_i}] = E[(Δy (sin θ1^i + sin θ2^i) + Δx (cos θ1^i − cos θ2^i))²]
             = 2σ² + 2σ² (sin θ1^i sin θ2^i − cos θ1^i cos θ2^i)
             = 2σ² (1 − cos(θ1^i + θ2^i))
             = 2σ² (1 + cos ζ_i),   (7)

where ζ_i is the angle ∠c1 z_i c2. This quantity is heterogeneous around the periphery of the ellipse: the objective function puts a large weight on the data points located at the two ends of the major axis. As a result, our algorithm provides a highly robust and accurate estimate of the major axis orientation.

However, the objective function in (5) has a global minimum at infinity. When the foci move away from each other toward infinity and the semi-major axis length tends to infinity, the value of the objective function approaches zero. So when the noise level is high, the local minimum (the one we desire in this case) is smeared out by the noise, and the estimated foci slip away along the major axis.

In order to take advantage of the robustness of the axis orientation estimate while overcoming the problem of the global minimum at infinity, we propose three modified algorithms in the next section that increase the robustness and accuracy of fitting with our new objective function.

IV. THREE MODIFIED ALGORITHMS FOR ELLIPSE FITTING

In this section, we introduce three modified algorithms based on the objective function proposed in Section III. Each of the three algorithms overcomes the problem of a global minimum at infinity and achieves a robust and accurate result.

A. Penalized Objective Function

The most intuitive way to eliminate the global minimum at infinity is penalization. The penalty term should favor smaller semi-major lengths, so that the foci do not diverge. As a result, we propose the following penalized objective function:

  (1/n) Σ_{i=1}^n (||z_i − c1|| + ||z_i − c2|| − 2a)² + λ â_max σ̂ exp((a/â_max)⁴),   (8)

where the second term is the penalty term, λ is a tuning parameter, â_max is an estimated upper bound for the semi-major length a, and σ̂ is an estimate of the noise standard deviation; σ̂ and â_max are estimated during the initialization procedure. The term exp((a/â_max)⁴) tends to infinity rapidly when a exceeds â_max, thus eliminating the global minimum at infinity. The penalty term is also proportional to the estimated noise level: when the noise level is high, the penalty term is large, which increases the robustness of the algorithm; when the noise level is low, the penalty term is small, ensuring that only a small bias is added to the objective function. The coefficient â_max is a scaling factor that accommodates the size of the ellipse being fitted.

In the initialization, â_max is estimated as the largest distance from a data point to the mean z_mean of all the data points, and a is initialized as the mean distance from the data points to z_mean. To estimate σ̂, we run the gradient descent algorithm for (5) and terminate when a exceeds â_max; σ̂ is then estimated as the square root of the resulting objective function value. This simple noise level estimation method suffices for our purposes here.

Once initialized, the penalized optimization problem (8) can be readily solved by gradient descent algorithms. The global minimum at infinity is eliminated, and the resulting algorithm has good accuracy and high robustness, as we will show in Section VI via simulation results.

B. Axial Guided Ellipse Fitting

The second algorithm we propose to overcome the problem described in Section III is axial guided fitting. Recall that the major axis orientation estimate is accurate and robust, so all that remains to be determined is the size of the ellipse, i.e., the semi-major length a and semi-minor length b.
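The gradient-descent machinery behind the geometric objective (5), which the modified algorithms build on, can be sketched in NumPy as follows. This is our own minimal illustration with hypothetical function names; the paper's initialization heuristics and the penalty term of (8) are omitted.

```python
import numpy as np

def objective(z, c1, c2, a):
    """Geometric objective (5): mean squared focal-distance residual."""
    r = np.linalg.norm(z - c1, axis=1) + np.linalg.norm(z - c2, axis=1) - 2.0 * a
    return np.mean(r ** 2)

def gradient_step(z, c1, c2, a, lr=0.02):
    """One gradient-descent step on (5) with analytic gradients.
    Assumes no data point coincides with a focus."""
    d1, d2 = z - c1, z - c2
    n1 = np.linalg.norm(d1, axis=1)
    n2 = np.linalg.norm(d2, axis=1)
    r = n1 + n2 - 2.0 * a
    # d/dc1 ||z - c1|| = (c1 - z) / ||z - c1||, and likewise for c2.
    g1 = np.mean((2.0 * r / n1)[:, None] * (-d1), axis=0)
    g2 = np.mean((2.0 * r / n2)[:, None] * (-d2), axis=0)
    ga = np.mean(-4.0 * r)
    return c1 - lr * g1, c2 - lr * g2, a - lr * ga
```

Adding the penalty of (8) only changes the gradient with respect to a; the focal gradients are unaffected.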
To estimate a and b, we first solve (5) to estimate the major-axis orientation φ and the ellipse center (x_c, y_c). Then we translate and rotate the data points {(x_i, y_i)}, i = 1, ..., n, so that the estimated ellipse is in the standard position (centered at the origin with its major axis coinciding with the x-axis). The resulting data points are

  x'_i = cos φ (x_i − x_c) + sin φ (y_i − y_c)   (9)

and

  y'_i = −sin φ (x_i − x_c) + cos φ (y_i − y_c).   (10)

Then the semi-major length a is determined such that a certain percentage, P_a, of the transformed data points satisfy x'_i ∈ [−a, a]. The semi-minor length b is determined in a similar manner with percentage P_b.

Naturally, P_a and P_b are related to the noise level. If we make the additional assumptions that the data points are independent of each other with a multivariate normal distribution, z_i ~ N(z'_i, Λ_{z_i}), and that they are distributed uniformly in angle around the periphery, then the relationship between P_a, P_b and the noise level σ can be approximated as

  P_a(γ) ≈ (1/π) ∫₀^{π/2} [1 + erf(γ(1 − cos θ)/√2)] dθ   (11)

and

  P_b(γ') ≈ (1/π) ∫₀^{π/2} [1 + erf(γ'(1 − sin θ)/√2)] dθ,   (12)

where γ = a/σ and γ' = b/σ.

Recall that we have the approximation (7). Assuming that the estimated ellipse is close to the true model, the noise level can be readily estimated as

  σ² = (1/n) Σ_{i=1}^n f_{z_i} / (2(1 + cos ζ_i)).   (13)

With this noise estimate, we can perform the axial guided fitting as described, which results in the following procedure:

1) Solve (5) by gradient descent to obtain φ, (x_c, y_c), f_{z_i} and ζ_i for all i;
2) Translate and rotate the data set so that the estimated ellipse is in the standard position;
3) Estimate the noise level by (13);
4) Calculate P_a and P_b from (11) and (12);
5) Find a and b so that a fraction P_a of the data points satisfy x'_i ∈ [−a, a] and a fraction P_b of the data points satisfy y'_i ∈ [−b, b].

Axial guided ellipse fitting divides the ellipse fitting problem into two stages: orientation estimation and size estimation. In applications where the noise level is known a priori, axial guided fitting can be simplified, becoming more efficient and accurate.

C. Weighted Objective Function

In order to take advantage of the robust ellipse orientation estimation and obtain an accurate size estimate as well, we propose the following weighted objective function:

  min_{c1, c2, a} (1/n) Σ_{i=1}^n [1/(1 + β cos ζ_i)] (||z_i − c1|| + ||z_i − c2|| − 2a)²,   (14)

where β is a tuning parameter that varies from 0 to 1. When β = 0, (14) is the same as the original objective function, so that we can obtain an accurate ellipse orientation estimate. On the other hand, when β = 1, the angle-dependent weight 1/(1 + cos ζ_i) is applied to mimic the objective function of OLSF and obtain an accurate size estimate. By varying β from 0 to 1 gradually, we avoid stray local minima at first and aim for accuracy in the end. The weighted minimization problem is solved by gradient descent, with the weights 1 + β cos ζ_i held constant within each iteration and updated afterwards; the parameter β is updated linearly. We will show the efficacy of this algorithm in Section VI.
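A direct transcription of the weight in (14), with ζ_i computed from the current foci, can be written as follows. This is illustrative only; the descent loop and the β schedule described above are omitted, and the function name is ours.

```python
import numpy as np

def weighted_objective(z, c1, c2, a, beta):
    """Weighted objective (14): residuals of (5) reweighted by the angle
    zeta_i subtended at z_i by the two foci c1 and c2."""
    d1, d2 = z - c1, z - c2
    n1 = np.linalg.norm(d1, axis=1)
    n2 = np.linalg.norm(d2, axis=1)
    # cos of the angle at z_i between the directions toward the two foci
    cos_zeta = np.sum((c1 - z) * (c2 - z), axis=1) / (n1 * n2)
    r = n1 + n2 - 2.0 * a
    return np.mean(r ** 2 / (1.0 + beta * cos_zeta))
```

With beta = 0 this reduces exactly to (5); with beta = 1 the residuals at the major-axis ends (where cos ζ_i ≈ 1) are down-weighted, mimicking the homogeneous OLSF criterion.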
V. SPHEROID FITTING

So far, we have proposed three modified algorithms based on the geometric definition of an ellipse. In this section, we generalize our method to the three-dimensional case.

Unfortunately, general ellipsoids do not have a natural geometric definition similar to that of ellipses. Nonetheless, we can still generalize our algorithm to three dimensions in the case of a spheroid. A spheroid is defined as a quadric surface obtained by rotating an ellipse about one of its principal axes; in other words, a spheroid is an ellipsoid with two equal semi-diameters. Although a spheroid is a special case of an ellipsoid, this special case may have broad applications.

According to the definition, a spheroid has the same basic geometric property as an ellipse (the sum of the distances from a point on the surface to the two foci is constant). This means that all algorithms proposed in Sections III and IV can be easily generalized to three dimensions in the natural way. For example, in the case of the weighted objective function, we still have the optimization problem

  min_{c1, c2, a} (1/n) Σ_{i=1}^n [1/(1 + β cos ζ_i)] (||z_i − c1|| + ||z_i − c2|| − 2a)²,   (15)

with z_i, c1, c2 ∈ R³. We now have a group of algorithms almost identical to those for ellipse fitting, and we will demonstrate the fitting result of (15) as an example at the end of the next section.

VI. EXPERIMENTAL RESULTS

To demonstrate the efficacy of the algorithms described above, we describe a series of experiments in different settings. Synthetic data were used for the simulations: a set of points on the perimeter of the ellipse was drawn according to a uniform distribution in angle, and the data points were obtained by corrupting the true points with independent and identically distributed (i.i.d.) additive Gaussian noise with mean 0 and covariance σ²I.
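Since (15) above is formally identical to the two-dimensional objective, the same code applies verbatim with 3-vectors. As a quick numerical check of the two-focus property that the spheroid generalization relies on, consider a prolate spheroid with semi-axes (5, 1, 1) (our own construction, not the paper's experiment):

```python
import numpy as np

def spheroid_objective(z, c1, c2, a):
    """Objective (15) with beta = 0: same formula as the 2-D case,
    with z, c1, c2 in R^3."""
    r = np.linalg.norm(z - c1, axis=1) + np.linalg.norm(z - c2, axis=1) - 2.0 * a
    return np.mean(r ** 2)

# Points on a prolate spheroid with semi-axes (5, 1, 1): every cross-section
# through the symmetry axis is the same ellipse, so the two-focus property
# of the generating ellipse carries over to the whole surface.
u = np.linspace(0.1, np.pi - 0.1, 20)
v = np.linspace(0.0, 2.0 * np.pi, 20, endpoint=False)
uu, vv = np.meshgrid(u, v)
pts = np.column_stack([
    5.0 * np.cos(uu).ravel(),
    np.sin(uu).ravel() * np.cos(vv).ravel(),
    np.sin(uu).ravel() * np.sin(vv).ravel(),
])
c = np.sqrt(5.0 ** 2 - 1.0 ** 2)  # focal distance sqrt(a^2 - b^2)
f1 = np.array([-c, 0.0, 0.0])
f2 = np.array([c, 0.0, 0.0])
print(spheroid_objective(pts, f1, f2, 5.0))  # ~0 for noiseless surface points
```

The objective vanishes on the noiseless surface, confirming that the ellipse machinery transfers to spheroids unchanged.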
The error rate is defined as the normalized area difference between the true ellipse E_t and the fitted ellipse E_f:

  error rate = (S_{E_t ∪ E_f} − S_{E_t ∩ E_f}) / (2 S_{E_t}),   (16)

where S_{E_t ∪ E_f} − S_{E_t ∩ E_f} is the area difference and S_{E_t} denotes the area of the true ellipse.

Fig. 2. Accuracy of the algorithms: average error rate under a wide range of noise levels for the four proposed algorithms. The lower and upper bounds of the error bars are the 20% and 80% quantiles of 50 trials.

Fig. 3. Convergence rate of weighted objective fitting (log scale).

A. Comparison of the proposed algorithms

The accuracy of the method based on the original objective function, penalized fitting, axial guided fitting and weighted objective fitting was tested and compared under a wide range of noise levels. We consider an ellipse in standard position (centered at the origin and without rotation) with semi-major and semi-minor lengths 5 and 3, respectively. Fifty data points were drawn for each trial, for a total run of 50 trials per noise level.

Fig. 2 shows the mean error rate for each algorithm under a range of noise levels (σ² from 0 to 0.5). The lower bounds of the error bars are the 20 percent quantiles and the upper bounds are the 80 percent quantiles of the 50 trials. As shown by the simulation results, the three revised algorithms exhibit substantial improvement compared with the method using the original objective function.
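The error rate (16) can be estimated by simple Monte Carlo membership tests. This is a sketch with our own helper names; the paper does not specify how the areas were computed.

```python
import numpy as np

def in_ellipse(pts, xc, yc, a, b, phi):
    """Membership test for an ellipse with center (xc, yc), semi-axes a, b,
    and rotation phi."""
    x, y = pts[:, 0] - xc, pts[:, 1] - yc
    xr = np.cos(phi) * x + np.sin(phi) * y
    yr = -np.sin(phi) * x + np.cos(phi) * y
    return (xr / a) ** 2 + (yr / b) ** 2 <= 1.0

def error_rate(true_e, fit_e, num=200000, seed=0):
    """Monte Carlo estimate of (16): (S_union - S_intersect) / (2 S_true).
    Each ellipse is a (xc, yc, a, b, phi) tuple."""
    rng = np.random.default_rng(seed)
    # Bounding box covering both ellipses (margin = largest semi-axis).
    r = max(true_e[2], true_e[3], fit_e[2], fit_e[3])
    lo = np.array([min(true_e[0], fit_e[0]) - r, min(true_e[1], fit_e[1]) - r])
    hi = np.array([max(true_e[0], fit_e[0]) + r, max(true_e[1], fit_e[1]) + r])
    pts = lo + (hi - lo) * rng.random((num, 2))
    box = np.prod(hi - lo)
    in_t = in_ellipse(pts, *true_e)
    in_f = in_ellipse(pts, *fit_e)
    s_union = np.mean(in_t | in_f) * box
    s_inter = np.mean(in_t & in_f) * box
    s_true = np.pi * true_e[2] * true_e[3]  # exact area of the true ellipse
    return (s_union - s_inter) / (2.0 * s_true)
```

A perfect fit gives an error rate of exactly zero (the union and intersection coincide pointwise), and the estimate grows with any mismatch in center, size, or orientation.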
Penalized fitting and weighted objective fitting have comparable performance, slightly better than that of axial guided fitting. Fig. 3 shows the convergence rate of weighted objective fitting. Note that the plot is on a log scale, so the algorithm converges faster than exponentially in the first few iterations. Penalized fitting and axial guided fitting converge at similar speeds, except that penalized fitting has an initialization procedure and axial guided fitting needs noise estimation.

B. Comparison with Algebraic Fitting, OLSF, and ML

In this subsection, we describe simulations comparing our method with the previous methods (algebraic fitting, OLSF and ML) in terms of accuracy, in a high-eccentricity situation over a wide range of noise levels. Penalized fitting is used as a representative of the proposed algorithms. For the algebraic fitting, OLSF and ML algorithms, we implemented the numerically stable version of algebraic fitting from [8], the OLSF algorithm proposed by Ahn et al. [2], and the FNS ellipse fitting algorithm from [5], respectively.

Fig. 4. Average error rate under a wide range of noise levels for algebraic fitting, OLSF, FNS and penalized fitting.

An ellipse in standard position with semi-major and semi-minor lengths 8 and 2, respectively, was used in the simulations. Fifty data points were drawn for each trial, for a total run of 50 trials per noise level for penalized fitting, algebraic fitting and OLSF, and a total run of 1000 trials for FNS. Fig. 4 shows the mean error rate under a range of noise levels (σ² from 0 to 0.8) for the four algorithms.
Although penalized fitting performs slightly worse than the other three algorithms when the noise level is low, it outperforms them as the noise level increases, which shows the robustness of our algorithms. The curve with triangle markers represents the FNS algorithm. It has high accuracy when the noise level is low, yet it breaks down when there are data points near the center of the estimated ellipse, which happens often at moderate or high noise levels. The FNS curve represents the average error over those trials (out of 1000) for which the algorithm produced an estimate; the dotted segment of the FNS plot indicates that the algorithm failed to produce an estimate for more than 90% of the trials.

Fig. 5 shows the comparison with error bars for algebraic fitting, OLSF and penalized fitting. As in the previous case, the lower and upper bounds of the error bars are the 20% and 80% quantiles of 50 trials. This demonstrates the robustness of our algorithms. As for the computational cost, our algorithms perform almost the same as the ML algorithms and are much more efficient than the OLSF algorithms in a typical setting.

Fig. 5. Average error rate for algebraic fitting, OLSF, and penalized fitting, with 20% and 80% quantile error bars.

C. Spheroid Fitting

Here we present an example of a typical spheroid fitting result to demonstrate the efficacy of our spheroid fitting algorithm based on the weighted objective function. Fifty true points are generated from the surface of a 10 × 2 × 2 spheroid centered at the origin with rotation. The data points are generated from the true points with additive Gaussian noise with mean 0 and covariance 0.2I. Fig. 6 shows the fitting result of our algorithm.

Fig. 6. Spheroid fitting result.

REFERENCES

[1] Sung Joon Ahn, W. Rauh, Hyung Suck Cho, and H. J. Warnecke. Orthogonal distance fitting of implicit curves and surfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5):620-638, 2002.
[2] Sung Joon Ahn, W. Rauh, and H. J. Warnecke. Least-squares orthogonal distances fitting of circle, sphere, ellipse, hyperbola, and parabola. Pattern Recognition, 34(12):2283-2303, 2001.
[3] I. Al-Subaihi and G. A. Watson. Fitting parametric curves and surfaces by l∞ distance regression. BIT Numerical Mathematics, 45(3):443-461, 2005.
[4] A. Atieg and G. A. Watson. Use of lp norms in fitting curves and surfaces to data. Australian and New Zealand Industrial and Applied Mathematics Journal, 45:C187-C200, 2003.
[5] W. Chojnacki, M. J. Brooks, A. van den Hengel, and D. Gawley. On the fitting of surfaces to data with covariances. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1294-1303, 2000.
[6] W. Chojnacki, M. J. Brooks, A. van den Hengel, and D. Gawley. From FNS to HEIV: A link between two vision parameter estimation methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(2):264-268, Feb. 2004.
[7] A. Fitzgibbon, M. Pilu, and R. B. Fisher. Direct least square fitting of ellipses. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(5):476-480, 1999.
[8] R. Halir and J. Flusser. Numerically stable direct least squares fitting of ellipses. In Proc. of Sixth Int'l Conf. on Computer Graphics and Visualization, volume 1, pages 125-132, Czech Republic, 1998.
[9] Yoram Leedan and Peter Meer. Heteroscedastic regression in computer vision: Problems with bilinear constraint. International Journal of Computer Vision, 37(2):127-150, Jun. 2000.
[10] Peter J. Rousseeuw and Annick M. Leroy. Robust Regression and Outlier Detection. Wiley Series in Probability and Statistics. John Wiley & Sons, Inc., New York, NY, 1987.
[11] G. A. Watson. On the Gauss-Newton method for l1 orthogonal distance regression. IMA Journal of Numerical Analysis, 22(3):345-357, 2002.
[12] G. A. Watson. Incorporating angular information into parametric models. BIT Numerical Mathematics, 42(4):867-878, Dec. 2002.
