Improved Concentration for Mean Estimators via Shrinkage
We study a class of robust mean estimators $\widehat{\mu}$ obtained by adaptively shrinking the weights of sample points far from a base estimator $\widehat{\kappa}$. Given a data-dependent scaling factor $\widehat{\alpha}$ and a weighting function $w:[0, \infty) \to [0,1]$, we let $\widehat{\mu} = \widehat{\kappa} + \frac{1}{n}\sum_{i=1}^n (X_i - \widehat{\kappa})\, w(\widehat{\alpha}|X_i-\widehat{\kappa}|)$. We prove that, under mild assumptions on $w$, these estimators achieve stronger concentration bounds than the base estimator $\widehat{\kappa}$, including sub-Gaussian guarantees. This framework unifies and extends several existing approaches to robust mean estimation in $\mathbb{R}$. Through numerical experiments, we show that our shrinkage approach translates to faster concentration, even for small sample sizes.
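The estimator above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's method: the choices of base estimator $\widehat{\kappa}$ (here the sample median), scaling factor $\widehat{\alpha}$ (here the reciprocal of the median absolute deviation), and weighting function $w$ (here $w(t) = e^{-t}$) are all assumptions made for the sake of a concrete example; the paper allows other choices satisfying its conditions.

```python
import numpy as np

def shrinkage_mean(x, w=lambda t: np.exp(-t), alpha=None):
    """Shrinkage mean estimator: kappa + mean of shrunken deviations.

    Illustrative choices (not prescribed by the abstract):
    - kappa_hat: sample median, a common robust base estimator
    - alpha_hat: 1 / MAD, a simple data-dependent scale
    - w(t) = exp(-t): non-increasing, maps [0, inf) into [0, 1]
    """
    x = np.asarray(x, dtype=float)
    kappa = np.median(x)           # base estimator kappa_hat
    dev = x - kappa                # deviations X_i - kappa_hat
    if alpha is None:
        mad = np.median(np.abs(dev))
        alpha = 1.0 / mad if mad > 0 else 1.0  # scaling factor alpha_hat
    # Points far from kappa_hat receive weights near 0, shrinking
    # their influence on the correction term.
    return kappa + np.mean(dev * w(alpha * np.abs(dev)))

# Standard normal sample contaminated with two gross outliers: the
# shrinkage estimate stays near the true mean 0.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 1000), [50.0, -60.0]])
print(shrinkage_mean(x))
```

Because $w$ is bounded by 1, the correction term never moves the estimate farther than the plain average of deviations would, and with a rapidly decaying $w$ the outliers at $\pm 50$-$60$ contribute essentially nothing.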