$\epsilon$-shotgun: $\epsilon$-greedy Batch Bayesian Optimisation


Bayesian optimisation is a popular, surrogate model-based approach for optimising expensive black-box functions. Given a surrogate model, the next location to expensively evaluate is chosen via maximisation of a cheap-to-query acquisition function. We present an $\epsilon$-greedy procedure for Bayesian optimisation in batch settings in which the black-box function can be evaluated multiple times in parallel. Our $\epsilon$-shotgun algorithm leverages the model’s prediction, uncertainty, and the approximated rate of change of the landscape to determine the spread of batch solutions to be distributed around a putative location. The initial target location is selected either in an exploitative fashion on the mean prediction, or – with probability $\epsilon$ – from elsewhere in the design space. This results in locations that are more densely sampled in regions where the function is changing rapidly and in locations predicted to be good (i.e., close to predicted optima), with more scattered samples in regions where the function is flatter and/or of poorer quality. We empirically evaluate the $\epsilon$-shotgun methods on a range of synthetic functions and two real-world problems, finding that they perform at least as well as state-of-the-art batch methods and in many cases exceed their performance.


💡 Research Summary

The paper introduces ε‑shotgun, a novel batch Bayesian optimisation (BBO) algorithm that extends the ε‑greedy strategy—originally successful in sequential BO—to parallel evaluation settings. In standard Bayesian optimisation, a surrogate model (typically a Gaussian Process) predicts the objective function and quantifies uncertainty; an acquisition function then selects the next point to evaluate. When multiple evaluations can be performed simultaneously, the challenge is to choose a set (batch) of points that together balance exploration and exploitation while keeping computational overhead manageable.

Core Idea
ε‑shotgun selects the first point of a batch, $x'_1$, using an ε‑greedy rule: with probability $1-\epsilon$ it picks the point that minimises the GP posterior mean $\mu(x)$ (pure exploitation), and with probability $\epsilon$ it chooses a point either uniformly at random (ε‑S‑RS) or from the Pareto front of $(\mu(x), \sigma^2(x))$ (ε‑S‑PF). This mirrors the sequential ε‑greedy approach, but the chosen point now serves as the anchor for the whole batch.
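The anchor-selection rule above can be sketched in Python. This is a minimal illustration, not the authors' implementation: `candidates`, `mu`, and `sigma2` are assumed to come from a fitted GP surrogate evaluated on a discrete candidate set, and the Pareto-front computation uses a naive quadratic-time scan:

```python
import numpy as np

def select_anchor(candidates, mu, sigma2, eps, rng):
    """Epsilon-greedy anchor selection (sketch of the paper's rule).

    candidates : (n, d) array of candidate locations
    mu, sigma2 : GP posterior mean / variance at each candidate
    eps        : probability of exploring instead of exploiting
    """
    if rng.random() >= eps:
        # Exploit: pick the candidate minimising the posterior mean.
        return candidates[np.argmin(mu)]
    # Explore (eps-S-PF variant): choose uniformly from the Pareto front
    # trading off low mean (exploitation) against high variance (exploration).
    n = len(mu)
    front = [i for i in range(n)
             if not any(mu[j] <= mu[i] and sigma2[j] >= sigma2[i]
                        and (mu[j] < mu[i] or sigma2[j] > sigma2[i])
                        for j in range(n))]
    return candidates[rng.choice(front)]
```

The ε‑S‑RS variant would replace the Pareto-front step with a uniform draw over the whole candidate set.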

The remaining $q-1$ points are drawn from a multivariate normal distribution centred at $x'_1$: $x'_j \sim \mathcal{N}\big(x'_1, \sigma_{x'_1}^2 I\big)$ for $j = 2, \dots, q$, where the scale $\sigma_{x'_1}$ is set from the model's predicted uncertainty at $x'_1$ and the estimated local rate of change of the landscape. Samples therefore cluster tightly where the function changes rapidly and spread out where it is flatter.
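The batch-construction step can be sketched as follows. The specific spread rule here (predicted standard deviation divided by a local gradient-magnitude estimate) is an assumption based on the abstract's description of the method, and all names are hypothetical:

```python
import numpy as np

def shotgun_batch(x1, sigma_pred, grad_norm, q, rng, lo=0.0, hi=1.0):
    """Sample the remaining q-1 batch points around the anchor x1 (sketch).

    sigma_pred : GP predictive standard deviation at the anchor
    grad_norm  : estimate of the local rate of change of the landscape
    """
    d = len(x1)
    # Assumed scale rule: wider spread where the model is uncertain,
    # tighter spread where the function is changing rapidly.
    scale = sigma_pred / max(grad_norm, 1e-9)
    pts = rng.normal(loc=x1, scale=scale, size=(q - 1, d))
    pts = np.clip(pts, lo, hi)      # keep samples inside the unit box
    return np.vstack([x1, pts])     # full batch of q locations
```

A steep landscape (large `grad_norm`) thus yields a tightly packed batch around the anchor, while a flat, uncertain region yields widely scattered samples, matching the behaviour described in the abstract.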

