Active Bayesian Optimization: Minimizing Minimizer Entropy
The ultimate goal of optimization is to find the minimizer of a target function. However, typical criteria for active optimization often ignore the uncertainty about the minimizer. We propose a novel criterion for global optimization and an associated sequential active learning strategy using Gaussian processes. Our criterion is the reduction of uncertainty in the posterior distribution of the function minimizer, and it can flexibly incorporate multiple global minimizers. We implement a tractable approximation of the criterion and demonstrate that it locates the global minimizer more accurately than conventional Bayesian optimization criteria.
💡 Research Summary
The paper “Active Bayesian Optimization: Minimizing Minimizer Entropy” introduces a novel acquisition function for global optimization that directly targets the uncertainty about the location of the function’s minimizer(s). Traditional Bayesian optimization methods—Expected Improvement (EI), Probability of Improvement (PI), Upper Confidence Bound (UCB), and even more recent information‑theoretic approaches such as Predictive Entropy Search (PES)—focus on reducing the expected value of the objective or on mutual information between the function values and observations. While these criteria improve the surrogate model’s predictive performance, they do not explicitly minimize the posterior entropy of the minimizer, which is the quantity that truly reflects how well we have identified the optimum.
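To make the contrast concrete, here is a minimal sketch of the classical Expected Improvement acquisition the summary mentions, written for minimization under a Gaussian predictive distribution N(mu, sigma²). This is an illustration of the baseline criterion, not code from the paper; the closed-form EI formula is standard.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def norm_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def norm_pdf(z: float) -> float:
    """Standard normal density."""
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def expected_improvement(mu: float, sigma: float, f_best: float) -> float:
    """EI for minimization: E[max(f_best - f(x), 0)] under f(x) ~ N(mu, sigma^2).

    Note EI depends only on the predictive distribution of the *value* f(x),
    not on where the posterior thinks the minimizer x* lies.
    """
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm_cdf(z) + sigma * norm_pdf(z)

# At a point whose mean equals the incumbent, EI is driven purely by sigma:
ei = expected_improvement(mu=0.0, sigma=1.0, f_best=0.0)
```

This value-centric structure is exactly what the paper's criterion departs from: EI rewards points likely to improve the best observed value, with no explicit term for the posterior entropy of the minimizer's location.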
The authors adopt a Gaussian process (GP) prior for the unknown objective f(x) and, given a data set D = {(x_i, y_i)}, compute the posterior distribution p(f|D). From this posterior they derive a distribution over the minimizer x*, defined as p(x*|D) = ∫ δ(x* − argmin_x f(x)) p(f|D) df. The entropy H[p(x*|D)] of this distribution quantifies the remaining uncertainty about the minimizer's location, and the proposed strategy selects each new evaluation point to reduce it.
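The core quantity p(x*|D) has no closed form, but it can be approximated by Monte Carlo: draw functions from the GP posterior on a grid, record where each draw attains its minimum, and compute the entropy of the resulting histogram. The sketch below is one such tractable approximation under illustrative assumptions (an RBF kernel, a 1D grid, and hand-picked hyperparameters); it is not the paper's specific implementation.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.3, variance=1.0):
    """Squared-exponential kernel on 1D inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """Posterior mean and covariance of f at grid Xs given data (X, y)."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = rbf_kernel(Xs, Xs) - v.T @ v
    return mu, cov

def minimizer_entropy(mu, cov, n_samples=2000, seed=0):
    """Monte Carlo estimate of H[p(x*|D)] over the grid.

    Samples posterior functions, histograms their argmin locations,
    and returns the entropy of that discrete distribution plus the
    estimated p(x*|D) itself.
    """
    rng = np.random.default_rng(seed)
    jitter = 1e-6 * np.eye(len(mu))
    f = rng.multivariate_normal(mu, cov + jitter, size=n_samples,
                                check_valid="ignore")
    idx = f.argmin(axis=1)                       # argmin of each sampled function
    p = np.bincount(idx, minlength=len(mu)) / n_samples
    nz = p[p > 0]
    return -(nz * np.log(nz)).sum(), p

# Toy demo: three observations of a smooth 1D objective.
X = np.array([0.1, 0.4, 0.9])
y = np.sin(6.0 * X)
Xs = np.linspace(0.0, 1.0, 100)
mu, cov = gp_posterior(X, y, Xs)
H, p = minimizer_entropy(mu, cov)
```

An active-learning step would then score each candidate evaluation point by the expected drop in this entropy after observing it, and query the highest-scoring point; that one-step lookahead is omitted here for brevity.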