diff --git a/README.md b/README.md
index 270291d..1e7fcd2 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
 # SCORE
-$SCORE$ is a 1D reparameterization technique that breaks Bayesian Optimization (BO)’s curse of dimensionality by decomposing the full $D$-dimensional space into $D$ 1D spaces along each input variable. A working paper describing this approach can be found here:
+$SCORE$ is a 1D reparameterization technique that breaks Bayesian Optimization (BO)’s curse of dimensionality and drastically reduces its computing time by decomposing the full $D$-dimensional space into $D$ 1D spaces along each input variable. A working paper describing this approach can be found here:
 
 If you follow `example.py` (and take a closer look at `SCORE.py`) you'll notice that just as with standard BO, the objective function (the Ackley function in this case: https://www.sfu.ca/~ssurjano/ackley.html) is first evaluated at random `n_init` initial points (or `init_combs` pre-defined initial combinations). Then, much like deriving the marginal probability distribution of a single variable from the joint probability distribution describing the relationship between multiple variables, each parameter is considered alone while marginalizing out the others. But instead of integrating over all the possible values of the other parameters, only the minimum (or maximum) value achieved so far for the objective function is recorded. This enables fitting the surrogate model to individual (discrete or continuous) variables and significantly reduces the computational load, which becomes dependent on the number of input `parameters` and their mesh resolution (`bounds`) – rather than the number of iterations `nb_it`.
 
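
The paragraph above describes a per-variable "min-marginalization" followed by independent 1D surrogate fits. The sketch below illustrates that idea only; it is not the repository's `SCORE.py` implementation. The function names (`propose_next`, `ackley`), the uniform mesh handling, the pure-exploitation acquisition, and the use of scikit-learn's `GaussianProcessRegressor` as the 1D surrogate are all assumptions made for illustration.

```python
# Illustrative sketch of the technique described in the README paragraph
# (not the repository's SCORE.py API): for each input variable, keep only
# the best objective value observed at each mesh point of that variable,
# fit a 1D surrogate to the resulting curve, and propose the next point
# dimension by dimension.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor


def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    """Ackley test function (https://www.sfu.ca/~ssurjano/ackley.html)."""
    d = x.shape[-1]
    s1 = np.sum(x ** 2, axis=-1)
    s2 = np.sum(np.cos(c * x), axis=-1)
    return -a * np.exp(-b * np.sqrt(s1 / d)) - np.exp(s2 / d) + a + np.e


def propose_next(X, y, grids):
    """Min-marginalize each variable and pick its best mesh value.

    X: (n, D) evaluated points, y: (n,) objective values,
    grids: list of D 1D meshes (one per input variable).
    """
    next_point = np.empty(len(grids))
    for d, grid in enumerate(grids):
        # Snap the d-th coordinate of every evaluated point to the mesh.
        idx = np.argmin(np.abs(grid[None, :] - X[:, d][:, None]), axis=1)
        xs, ys = [], []
        for j in np.unique(idx):
            xs.append(grid[j])
            ys.append(y[idx == j].min())   # record only the best value so far
        # 1D surrogate over this variable alone (others marginalized out).
        gp = GaussianProcessRegressor(normalize_y=True, alpha=1e-6)
        gp.fit(np.array(xs)[:, None], np.array(ys))
        mu = gp.predict(grid[:, None])     # predicted mean over the 1D mesh
        next_point[d] = grid[np.argmin(mu)]
    return next_point


# Toy loop: 5-dimensional Ackley, random n_init points, then nb_it
# SCORE-style iterations (a sketch, not the package's tuned procedure).
rng = np.random.default_rng(0)
D, n_init, nb_it = 5, 10, 20
grids = [np.linspace(-5.0, 5.0, 41) for _ in range(D)]   # mesh per variable
X = rng.uniform(-5.0, 5.0, size=(n_init, D))
y = ackley(X)
for _ in range(nb_it):
    x_new = propose_next(X, y, grids)
    X = np.vstack([X, x_new])
    y = np.append(y, ackley(x_new))
print("best value found:", y.min())
```

Note how the cost of each iteration in this sketch scales with the number of variables and the mesh resolution of each 1D grid, rather than with a joint $D$-dimensional surrogate, which is the computational saving the README paragraph points to.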