Best Arm Identification with LLM Judges and Limited Human
We study fixed-confidence best-arm identification (BAI) in which a cheap but potentially biased proxy (e.g., an LLM judge) is available for every sample, while an expensive ground-truth label can be acquired only selectively through human auditing. Unlike in classical multi-fidelity BAI, the proxy is biased (in an arm- and context-dependent way) and the ground truth is observed only selectively. Consequently, standard multi-fidelity methods can mis-select the best arm, while uniform auditing, though accurate, wastes the scarce audit budget. We prove that without bias correction and propensity adjustment, the mis-selection probability may fail to vanish even with unlimited proxy data. We then develop an estimator of each arm's mean that combines proxy scores with inverse-propensity-weighted audit residuals, and we form anytime-valid confidence sequences for this estimator. Building on the estimator and confidence sequences, we propose an algorithm that adaptively selects arms and allocates audits, concentrating them on unreliable contexts and closely competing arms, and we prove that a plug-in Neyman rule achieves near-oracle audit efficiency. Numerical experiments confirm the theoretical guarantees and demonstrate the strong empirical performance of the proposed algorithm.
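The core estimator can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: it assumes a known, constant audit propensity `pi` and a constant proxy bias, and the function name `dr_arm_mean` is ours. The key identity is that adding the inverse-propensity-weighted residual on audited samples cancels the proxy's bias in expectation.

```python
import numpy as np

def dr_arm_mean(proxy, audited, truth, pi):
    # Proxy mean plus inverse-propensity-weighted residuals on audited samples.
    # E[proxy + 1{audited} * (truth - proxy) / pi] = E[truth] whenever pi is the
    # true audit propensity, regardless of the proxy's (unknown) bias.
    resid = np.where(audited, (truth - proxy) / pi, 0.0)
    return float(np.mean(proxy + resid))

rng = np.random.default_rng(0)
n, pi = 200_000, 0.1                              # pi: audit probability
truth = rng.binomial(1, 0.5, n).astype(float)     # ground truth, mean 0.5
proxy = truth + 0.3                               # LLM-judge score with +0.3 bias
audited = rng.random(n) < pi                      # Bernoulli(pi) audit decisions
# (In the actual setting, `truth` is observed only where `audited` is True;
# the estimator uses it only on those samples.)

naive_hat = float(np.mean(proxy))                 # stays biased, near 0.8
dr_hat = dr_arm_mean(proxy, audited, truth, pi)   # bias-corrected, near 0.5
```

The proxy mean alone converges to the wrong value no matter how many samples are drawn, while the corrected estimate converges to the true arm mean at the cost of a larger variance driven by 1/pi.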
💡 Research Summary
This paper addresses a novel fixed‑confidence best‑arm identification (BAI) problem in which each pull yields two sources of feedback: a cheap, automatically generated score from a large language model (LLM) judge and an expensive, accurate human audit label that can be obtained selectively. Unlike classical multi‑fidelity bandits, the low‑fidelity proxy is not unbiased; its expectation differs from the true outcome by an unknown, arm‑ and context‑dependent bias function. Consequently, standard multi‑fidelity methods that assume an unbiased surrogate can fail even with unlimited proxy observations, and naïve uniform auditing wastes scarce human budget.
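A toy two-arm example (hypothetical numbers, not from the paper) shows why an arm-dependent bias defeats methods that trust the proxy: the proxy ranking can be reversed even with infinitely many proxy observations.

```python
# Hypothetical example: arm-dependent proxy bias flips the ranking of two arms.
true_mean = {"arm_A": 0.6, "arm_B": 0.5}     # arm_A is truly best
bias = {"arm_A": -0.2, "arm_B": +0.2}        # unknown, arm-dependent bias
proxy_mean = {k: true_mean[k] + bias[k] for k in true_mean}  # {A: 0.4, B: 0.7}

best_by_proxy = max(proxy_mean, key=proxy_mean.get)  # mis-selects arm_B
best_truly = max(true_mean, key=true_mean.get)       # arm_A
```

Since the proxy means are population quantities here, no amount of additional proxy sampling fixes the mis-selection; only audited ground-truth labels can.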
The authors formalize the setting with K arms, a context distribution D, and for each arm‑context pair (k, x) a true outcome Y(k, x) ∈