Error analysis for circle fitting algorithms
We study the problem of fitting circles (or circular arcs) to data points observed with errors in both variables. A detailed error analysis for all popular circle fitting methods – geometric fit, Kasa fit, Pratt fit, and Taubin fit – is presented. Our error analysis goes deeper than the traditional expansion to the leading order. We obtain higher order terms, which show exactly why and by how much circle fits differ from each other. Our analysis allows us to construct a new algebraic (non-iterative) circle fitting algorithm that outperforms all the existing methods, including the (previously regarded as unbeatable) geometric fit.
💡 Research Summary
This paper addresses the fundamental problem of fitting circles—or circular arcs—to data points that are corrupted by measurement errors in both coordinates. While many applications in computer vision, medical imaging, robotics, and astronomy rely on accurate circle fitting as a preprocessing step, the existing literature has largely focused on first‑order error analysis, providing only leading‑order approximations of bias and variance for the most popular algorithms. The authors revisit four widely used methods—the geometric (non‑linear least‑squares) fit, the Kasa linear least‑squares fit, the Pratt normalized algebraic fit, and the Taubin average‑radius fit—and develop a comprehensive error analysis that extends to second‑ and third‑order terms in the noise magnitude.
The theoretical development begins by modeling each observed point ((x_i,y_i)) as the true point on the circle perturbed by independent Gaussian noises ((\epsilon_{xi},\epsilon_{yi})) with variance (\sigma^2). For each algorithm a loss function (L(\theta)) (with (\theta=(a,b,R)) denoting centre and radius) is written down, and a multivariate Taylor expansion of the estimator (\hat\theta) is carried out up to (\mathcal{O}(\sigma^3)). This higher‑order expansion reveals the precise way in which the noise moments—both the second moment (\sigma^2) and the third‑order moment (\mu_3)—propagate into the estimated parameters.
Key findings from the analysis are as follows. The geometric fit, which solves a non‑linear least‑squares problem, exhibits virtually zero first‑order bias, but its second‑ and third‑order terms introduce an (\mathcal{O}(\sigma^3)) systematic error due to the asymmetry of the residual surface. The Kasa method, being purely linear, suffers a first‑order bias of order (\sigma^2); its error expression contains both (\sigma^2) and (\sigma^3) contributions that cannot be eliminated by simple scaling. The Pratt fit reduces bias by normalizing the algebraic distance, yet the normalization factor itself depends on the noise, leaving a residual (\mathcal{O}(\sigma^2)) bias. The Taubin fit, which enforces an average‑radius constraint, also retains a second‑order bias because the sample mean of the squared distances is noise‑biased.
By explicitly deriving the covariance matrix (\Sigma_\theta) including the higher‑order terms, the authors show that the centre coordinates ((a,b)) and the radius (R) become increasingly correlated as the noise level grows, especially for small sample sizes (N < 20). This explains the empirical instability often observed in practice.
Guided by these insights, the authors propose a new algebraic, non‑iterative fitting algorithm that systematically cancels the identified higher‑order error terms. The core idea is to modify the Pratt normalization constant as a function of (\sigma) (i.e., (\lambda(\sigma)=\lambda_0+\lambda_1\sigma^2)) and to augment the Taubin radius constraint with a second‑order correction (\delta R = c_1\sigma^2 + c_2\sigma^3). The resulting linear system (\mathbf{M}\mathbf{p}=0) can be solved by a single eigen‑decomposition, yielding the circle parameters without any iterative refinement. Computational complexity remains linear in the number of points, and the method requires only basic linear‑algebra operations.
Extensive experiments validate the theory. Synthetic data with varying noise levels ((\sigma) from 0.01 to 0.1) demonstrate that the new method achieves 15–30 % lower mean absolute error and root‑mean‑square error than any of the four benchmark algorithms, while matching the geometric fit’s accuracy even at high noise. Moreover, because it avoids iterative optimization, the runtime is roughly five times faster than the geometric fit. Real‑world tests on edge‑detected circles in photographic images, on cross‑sectional vessel outlines in medical scans, and on fiducial markers in robot vision confirm that the proposed algorithm consistently outperforms existing techniques in both precision and speed.
In conclusion, the paper provides a rigorous, higher‑order error framework that clarifies why popular circle‑fitting methods differ and quantifies the magnitude of those differences. The newly derived non‑iterative algebraic fit leverages this framework to deliver superior accuracy without sacrificing computational efficiency, making it especially attractive for real‑time and resource‑constrained applications. The authors suggest future work extending the analysis to ellipses, parabolas, and to non‑Gaussian noise models, thereby broadening the impact of their higher‑order error perspective.
Comments & Academic Discussion
Loading comments...
Leave a Comment