Merge: Consolidate recommender / strat user guides and examples (#148)
This rewrites and moves parts of the user guides and further refactors some
examples. It is NOT intended as a complete rewrite or an addition of much
content, but rather a consolidation following the changes that the
recommender/strategy refactoring brought with it.
Scienfitz authored Feb 28, 2024
2 parents 2277118 + 38f6a19 commit 505cd17
Showing 17 changed files with 167 additions and 103 deletions.
2 changes: 1 addition & 1 deletion baybe/recommenders/pure/__init__.py
Original file line number Diff line number Diff line change
@@ -1,6 +1,6 @@
"""Pure recommenders.
Pure recommenders implement optimization strategies and can be queried for
Pure recommenders implement selection algorithms and can be queried for providing
recommendations. They can be part of meta recommenders.
"""

Expand Down
2 changes: 1 addition & 1 deletion baybe/recommenders/pure/nonpredictive/clustering.py
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
"""Recommendation strategies based on clustering."""
"""Recommenders based on clustering."""

from abc import ABC
from typing import ClassVar, List, Type, Union
Expand Down
2 changes: 1 addition & 1 deletion baybe/recommenders/pure/nonpredictive/sampling.py
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
"""Recommendation strategies based on sampling."""
"""Recommenders based on sampling."""

from typing import ClassVar

Expand Down
2 changes: 1 addition & 1 deletion docs/userguide/campaigns.md
Original file line number Diff line number Diff line change
Expand Up @@ -28,7 +28,7 @@ describe the underlying optimization problem at hand:
Apart from this basic configuration, it is possible to further define the specific
optimization
`Recommender` ([class](baybe.recommenders.pure.base.PureRecommender)
/ [user guide](./recommender)) to be used.
/ [user guide](./recommenders)) to be used.

~~~python
from baybe import Campaign
Expand Down
31 changes: 0 additions & 31 deletions docs/userguide/recommender.md

This file was deleted.

118 changes: 118 additions & 0 deletions docs/userguide/recommenders.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,118 @@
# Recommenders

## General Information

Recommenders are an essential part of BayBE that effectively explore the search space
and provide recommendations for the next experiment or batch of experiments.
Available recommenders can be partitioned into the following subclasses.

## Pure Recommenders

Pure recommenders simply take on the task of recommending measurements. Each contains
the inner logic to do so via a different algorithm or approach.
While some pure recommenders are versatile and work across different types of search
spaces, others are specifically designed for discrete or continuous spaces. The
compatibility is indicated via the corresponding `compatibility` class variable.

```{admonition} Additional Options for Discrete Search Spaces
:class: note
For discrete search spaces, BayBE provides additional control over pure recommenders.
You can explicitly define whether a recommender is allowed to recommend previous
recommendations again via `allow_repeated_recommendations` and whether it can output
recommendations that have already been measured via
`allow_recommending_already_measured`.
```
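The effect of these two flags can be sketched in plain Python. The helper and variable names below are illustrative only, not BayBE code:

```python
def filter_candidates(candidates, recommended, measured,
                      allow_repeated_recommendations=False,
                      allow_recommending_already_measured=False):
    """Illustrative sketch: restrict a discrete candidate pool via the two flags."""
    pool = list(candidates)
    if not allow_repeated_recommendations:
        # Drop candidates that were already recommended before.
        pool = [c for c in pool if c not in recommended]
    if not allow_recommending_already_measured:
        # Drop candidates for which measurements already exist.
        pool = [c for c in pool if c not in measured]
    return pool

pool = filter_candidates(
    candidates=["A", "B", "C", "D"],
    recommended={"A"},
    measured={"B"},
)
```

With both flags left at their restrictive defaults, only the untouched candidates `"C"` and `"D"` remain in the pool.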

### Bayesian Recommenders

The Bayesian recommenders in BayBE are built on the foundation of the
[`BayesianRecommender`](baybe.recommenders.pure.bayesian.base.BayesianRecommender)
class, offering an array of possibilities with internal surrogate models and support
for various acquisition functions.

* The **[`SequentialGreedyRecommender`](baybe.recommenders.pure.bayesian.sequential_greedy.SequentialGreedyRecommender)**
is a powerful recommender that performs sequential greedy optimization. It can be
applied to discrete, continuous and hybrid search spaces and is an implementation of
the BoTorch optimization functions for discrete, continuous and mixed spaces.
Note that this recommender performs a brute-force search when applied to hybrid
search spaces: it optimizes the continuous part of the space while exhaustively
searching choices in the discrete subspace. You can customize this behavior to only
sample a certain percentage of the discrete subspace via the
`sampling_percentage` attribute and choose different sampling algorithms via the
`hybrid_sampler` attribute. An example of using this recommender in a hybrid space
can be found [here](./../../examples/Backtesting/hybrid).

* The **[`NaiveHybridSpaceRecommender`](baybe.recommenders.naive.NaiveHybridSpaceRecommender)**
can be applied to all search spaces, but is intended to be used in hybrid spaces.
This recommender combines individual recommenders for the continuous and the discrete
subspaces. It independently optimizes each subspace and consolidates the best results
to generate a candidate for the original hybrid space. An example of using this
recommender in a hybrid space can be found [here](./../../examples/Backtesting/hybrid).
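The consolidation step can be illustrated with a plain-Python toy sketch. The function and argument names are hypothetical, not BayBE internals:

```python
def naive_hybrid_recommend(discrete_candidates, disc_score, cont_optimize):
    """Toy sketch of the naive hybrid idea: optimize each subspace on its own."""
    # Optimize the discrete subspace independently ...
    best_discrete = max(discrete_candidates, key=disc_score)
    # ... optimize the continuous subspace independently ...
    best_continuous = cont_optimize()
    # ... then consolidate both winners into one hybrid candidate.
    return {**best_discrete, **best_continuous}

candidate = naive_hybrid_recommend(
    discrete_candidates=[{"solvent": "water"}, {"solvent": "ethanol"}],
    disc_score=lambda d: 1.0 if d["solvent"] == "ethanol" else 0.0,
    cont_optimize=lambda: {"temperature": 25.0},
)
```

The resulting candidate combines the best discrete choice with the best continuous settings, even though the two were never optimized jointly.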

### Clustering Recommenders

BayBE offers a set of recommenders that facilitate point selection via clustering
techniques:
* **[`PAMClusteringRecommender`](baybe.recommenders.pure.nonpredictive.clustering.PAMClusteringRecommender):**
This recommender utilizes partitioning around medoids.
* **[`KMeansClusteringRecommender`](baybe.recommenders.pure.nonpredictive.clustering.KMeansClusteringRecommender):**
This recommender implements k-means clustering.
* **[`GaussianMixtureClusteringRecommender`](baybe.recommenders.pure.nonpredictive.clustering.GaussianMixtureClusteringRecommender):**
This recommender leverages Gaussian Mixture Models for clustering.
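The common idea behind these recommenders, picking one representative point per cluster, can be sketched in plain Python. This is illustrative code only, not BayBE's implementation, and it assumes cluster labels have already been computed:

```python
def cluster_representatives(points, labels):
    """For each cluster, select the member closest to the cluster centroid."""
    reps = {}
    for lab in sorted(set(labels)):
        members = [p for p, l in zip(points, labels) if l == lab]
        # Compute the centroid of this cluster (2D points for simplicity).
        cx = sum(p[0] for p in members) / len(members)
        cy = sum(p[1] for p in members) / len(members)
        # The representative is the member nearest to the centroid.
        reps[lab] = min(members, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
    return [reps[lab] for lab in sorted(reps)]

reps = cluster_representatives(
    points=[(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)],
    labels=[0, 0, 0, 1, 1, 1],
)
```

The three recommenders above differ mainly in how the clusters themselves are formed (medoids, k-means centroids, or Gaussian mixture components).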

### Sampling Recommenders

BayBE provides two recommenders that recommend by sampling from the search space:
* **[`RandomRecommender`](baybe.recommenders.pure.nonpredictive.sampling.RandomRecommender):**
This recommender offers random recommendations for all types of search spaces.
It is extensively used in backtesting examples, providing a valuable comparison.
For detailed usage examples, refer to the list
[here](./../../examples/Backtesting/Backtesting).
* **[`FPSRecommender`](baybe.recommenders.pure.nonpredictive.sampling.FPSRecommender):**
This recommender is only applicable to discrete search spaces and recommends points
based on farthest point sampling. A practical application showcasing the usage of
this recommender can be found
[here](./../../examples/Custom_Surrogates/surrogate_params).
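Farthest point sampling itself is easy to sketch in plain Python. The code below is an illustrative greedy variant, not BayBE's implementation:

```python
import math

def farthest_point_sampling(points, k):
    """Greedy FPS: start from the first point, then repeatedly add the
    candidate whose minimum distance to the selected set is largest."""
    selected = [points[0]]
    while len(selected) < k:
        remaining = [p for p in points if p not in selected]
        selected.append(
            max(remaining, key=lambda p: min(math.dist(p, s) for s in selected))
        )
    return selected

selected = farthest_point_sampling([(0, 0), (1, 0), (10, 0)], 2)
```

Because each new point maximizes its distance to everything chosen so far, the selection spreads out across the space, which is why FPS is a popular space-filling initialization.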

## Meta Recommenders

In analogy to meta studies, meta recommenders are wrappers that operate on a sequence
of pure recommenders and determine when to switch between them according to different
criteria. BayBE offers three distinct kinds of meta recommenders.

* The
[`TwoPhaseMetaRecommender`](baybe.recommenders.meta.sequential.TwoPhaseMetaRecommender)
employs two distinct recommenders and switches between them at a specified point,
controlled by the `switch_after` attribute. This is useful, for example, if you want a
different recommender for the initial recommendation, when no data is available yet.
The following simple example recommends randomly for the first batch and switches
to a Bayesian recommender as soon as measurements have been ingested:
```python
from baybe.recommenders import (
    TwoPhaseMetaRecommender,
    RandomRecommender,
    SequentialGreedyRecommender,
)

recommender = TwoPhaseMetaRecommender(
    initial_recommender=RandomRecommender(), recommender=SequentialGreedyRecommender()
)
```
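Conceptually, the switching logic amounts to a simple threshold check. A plain-Python sketch with hypothetical names, assuming `switch_after` is compared against the amount of data already ingested:

```python
def select_phase(n_measurements, switch_after=1):
    """Illustrative sketch: pick the active phase of a two-phase recommender."""
    # Use the initial recommender until enough data has been ingested,
    # then switch to the main recommender.
    return "initial" if n_measurements < switch_after else "main"
```

With the default `switch_after=1`, the very first (data-free) recommendation uses the initial recommender and every subsequent one uses the main recommender.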

* The **[`SequentialMetaRecommender`](baybe.recommenders.meta.sequential.SequentialMetaRecommender)**
introduces a simple yet versatile approach by utilizing a predefined list of
recommenders. By specifying the desired behavior using the `mode` attribute, it is
possible to flexibly determine the meta recommender's response when it exhausts the
available recommenders. The possible choices are to either raise an error, re-use the
last recommender or re-start at the beginning of the sequence.
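The exhaustion behavior can be sketched in plain Python. The mode labels below are illustrative placeholders, not necessarily BayBE's exact string values:

```python
def next_recommender(recommenders, step, mode="raise"):
    """Illustrative sketch: pick the recommender for a given step from a
    predefined sequence, handling exhaustion according to `mode`."""
    if step < len(recommenders):
        return recommenders[step]
    # The sequence is exhausted; behavior now depends on the chosen mode.
    if mode == "raise":
        raise RuntimeError("Recommender sequence exhausted.")
    if mode == "reuse_last":
        return recommenders[-1]
    if mode == "cyclic":
        return recommenders[step % len(recommenders)]
```

For a two-element sequence, step 5 yields the last recommender under `"reuse_last"` but wraps back to the second one under `"cyclic"`.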

* Similar to the `SequentialMetaRecommender`, the
**[`StreamingSequentialMetaRecommender`](baybe.recommenders.meta.sequential.StreamingSequentialMetaRecommender)**
enables the use of *arbitrary* iterables to select recommenders.

```{warning}
Due to the arbitrary nature of iterables that can be used, de-/serializability cannot
be guaranteed. As a consequence, using a `StreamingSequentialMetaRecommender` results
in an error if you attempt to serialize the corresponding object or higher-level
objects containing it.
```
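A plain-Python illustration of such a stream: a generator-backed iterable can be infinite and, as noted in the warning, generally cannot be serialized. The string labels are placeholders for actual recommender objects:

```python
from itertools import cycle

# An arbitrary, potentially infinite stream of recommender choices.
stream = cycle(["random", "greedy"])

# Consuming the stream yields one choice per recommendation step.
first_three = [next(stream) for _ in range(3)]
```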
2 changes: 1 addition & 1 deletion docs/userguide/searchspace.md
Original file line number Diff line number Diff line change
Expand Up @@ -23,7 +23,7 @@ A discrete/continuous search space is a searchspace that was constructed by on
In addition to the ones noted above, a discrete subspace has the following attributes:
* **The experimental representation:** A ``DataFrame`` representing the experimental representation of the subspace.
* **The metadata:** A ``DataFrame`` keeping track of different metadata that is relevant for running a campaign.
* **An "empty" encoding flag:** A flag denoting whether an "empty" encoding should be used. This is useful, for instance, in combination with random search strategies that do not read the actual parameter values.
* **An "empty" encoding flag:** A flag denoting whether an "empty" encoding should be used. This is useful, for instance, in combination with random recommenders that do not read the actual parameter values.
* **The computational representation:** The computational representation of the space. If not provided explicitly, it will be derived from the experimental representation.

Although it is possible to directly create a discrete subspace via the ``__init__`` function, it is intended to create them via the [`from_dataframe`](baybe.searchspace.discrete.SubspaceDiscrete.from_dataframe) or [`from_product`](baybe.searchspace.discrete.SubspaceDiscrete.from_product) methods. These methods either require a ``DataFrame`` containing the experimental representation of the parameters and an optional explicit list of parameters (``from_dataframe``) or a list of parameters and optional constraints (``from_product``).
Expand Down
34 changes: 0 additions & 34 deletions docs/userguide/strategies.md

This file was deleted.

3 changes: 1 addition & 2 deletions docs/userguide/userguide.md
Original file line number Diff line number Diff line change
Expand Up @@ -5,10 +5,9 @@ Campaigns <campaigns>
Constraints <constraints>
Objective <objective>
Parameters <parameters>
PureRecommender <recommender>
Recommenders <recommenders>
Search Spaces <searchspace>
Simulation <simulation>
Strategies <strategies>
Surrogates <surrogates>
Targets <targets>
Transfer Learning <transfer_learning>
Expand Down
12 changes: 6 additions & 6 deletions examples/Backtesting/custom_analytical.py
Original file line number Diff line number Diff line change
Expand Up @@ -77,24 +77,24 @@ def sum_of_squares(*x: float) -> float:

### Constructing campaigns for the simulation loop

# To simplify adjusting the example for other strategies, we construct some recommender objects.
# For details on recommender objects, we refer to [`strategies`](./../Basics/strategies.md).
# To simplify adjusting the example for other recommenders, we construct some recommender objects.
# For details on recommender objects, we refer to [`recommenders`](./../Basics/recommenders.md).

seq_greedy_EI_strategy = TwoPhaseMetaRecommender(
seq_greedy_EI_recommender = TwoPhaseMetaRecommender(
recommender=SequentialGreedyRecommender(acquisition_function_cls="qEI"),
)
random_strategy = TwoPhaseMetaRecommender(recommender=RandomRecommender())
random_recommender = TwoPhaseMetaRecommender(recommender=RandomRecommender())

# We now create one campaign per recommender.

seq_greedy_EI_campaign = Campaign(
searchspace=searchspace,
recommender=seq_greedy_EI_strategy,
recommender=seq_greedy_EI_recommender,
objective=objective,
)
random_campaign = Campaign(
searchspace=searchspace,
recommender=random_strategy,
recommender=random_recommender,
objective=objective,
)

Expand Down
12 changes: 6 additions & 6 deletions examples/Backtesting/hybrid.py
Original file line number Diff line number Diff line change
Expand Up @@ -124,31 +124,31 @@ def sum_of_squares(*x: float) -> float:
# Note that the recommender performs one optimization of the continuous subspace per sampled point.
# We thus recommend keeping this parameter rather low.

seq_greedy_strategy = TwoPhaseMetaRecommender(
seq_greedy_recommender = TwoPhaseMetaRecommender(
recommender=SequentialGreedyRecommender(
hybrid_sampler="Farthest", sampling_percentage=0.3
),
)
naive_hybrid_strategy = TwoPhaseMetaRecommender(
naive_hybrid_recommender = TwoPhaseMetaRecommender(
recommender=NaiveHybridSpaceRecommender()
)
random_strategy = TwoPhaseMetaRecommender(recommender=RandomRecommender())
random_recommender = TwoPhaseMetaRecommender(recommender=RandomRecommender())

# We now create one campaign per recommender.

seq_greedy_campaign = Campaign(
searchspace=searchspace,
recommender=seq_greedy_strategy,
recommender=seq_greedy_recommender,
objective=objective,
)
naive_hybrid_campaign = Campaign(
searchspace=searchspace,
recommender=naive_hybrid_strategy,
recommender=naive_hybrid_recommender,
objective=objective,
)
random_campaign = Campaign(
searchspace=searchspace,
recommender=random_strategy,
recommender=random_recommender,
objective=objective,
)

Expand Down
2 changes: 1 addition & 1 deletion examples/Basics/Basics_Header.md
Original file line number Diff line number Diff line change
Expand Up @@ -2,4 +2,4 @@

These examples demonstrate the most basic aspects of BayBE: How to set up a
{doc}`Campaign </userguide/campaigns>` and how to configure an optimization
{doc}`Strategy </userguide/strategies>`.
{doc}`Recommender </userguide/recommenders>`.
Original file line number Diff line number Diff line change
Expand Up @@ -34,7 +34,7 @@
from baybe.targets import NumericalTarget
from baybe.utils.dataframe import add_fake_results

### Available initial strategies
### Available recommenders suitable for initial recommendation

# For the first recommendation, the user can specify which recommender to use.
# The following initial recommenders are available.
Expand Down Expand Up @@ -110,7 +110,7 @@
# Note that they all have default values.
# Therefore one does not need to specify all of them to create a recommender object.

strategy = TwoPhaseMetaRecommender(
recommender = TwoPhaseMetaRecommender(
initial_recommender=INITIAL_RECOMMENDER,
recommender=SequentialGreedyRecommender(
surrogate_model=SURROGATE_MODEL,
Expand All @@ -120,7 +120,7 @@
),
)

print(strategy)
print(recommender)

# Note that there are the additional keywords `hybrid_sampler` and `sampling_percentage`.
# Their meaning and how to use them are explained in the hybrid backtesting example.
Expand Down Expand Up @@ -177,7 +177,7 @@

campaign = Campaign(
searchspace=searchspace,
recommender=strategy,
recommender=recommender,
objective=objective,
)

Expand Down
5 changes: 2 additions & 3 deletions examples/Searchspaces/hybrid_space.py
Original file line number Diff line number Diff line change
Expand Up @@ -111,15 +111,14 @@
# recommenders for the corresponding subspaces.
# We use the default choice, which is the `SequentialGreedyRecommender`.

hybrid_recommender = NaiveHybridSpaceRecommender()
hybrid_strategy = TwoPhaseMetaRecommender(recommender=hybrid_recommender)
hybrid_recommender = TwoPhaseMetaRecommender(recommender=NaiveHybridSpaceRecommender())

### Constructing the campaign and performing a recommendation

campaign = Campaign(
searchspace=searchspace,
objective=objective,
recommender=hybrid_strategy,
recommender=hybrid_recommender,
)

# Get a recommendation for a fixed batch size.
Expand Down
