From d482e57228f0d421ee43eae8d09f16e6fbcba8a0 Mon Sep 17 00:00:00 2001 From: Martin Fitzner Date: Tue, 27 Feb 2024 17:08:08 +0100 Subject: [PATCH 1/8] Consolidate user guide --- docs/userguide/recommender.md | 31 -------- docs/userguide/recommenders.md | 130 +++++++++++++++++++++++++++++++++ docs/userguide/strategies.md | 34 --------- docs/userguide/userguide.md | 3 +- 4 files changed, 131 insertions(+), 67 deletions(-) delete mode 100644 docs/userguide/recommender.md create mode 100644 docs/userguide/recommenders.md delete mode 100644 docs/userguide/strategies.md diff --git a/docs/userguide/recommender.md b/docs/userguide/recommender.md deleted file mode 100644 index 0a3b32c3c..000000000 --- a/docs/userguide/recommender.md +++ /dev/null @@ -1,31 +0,0 @@ -# Recommenders - -## General information - -Recommenders are an essential part of BayBE that effectively explore the search space and provide recommendations for the next experiment or batch of experiments. While some recommenders are versatile and work across different types of search spaces, other are specifically designed for discrete or continuous spaces. The compatibility is indicated via the corresponding ``compatibility`` class variable. - -The set of available recommenders can be partitioned into the following subclasses. - -## Bayesian recommenders - -The Bayesian recommenders in BayBE are built on the foundation of the [`BayesianRecommender`](baybe.recommenders.pure.bayesian.base.BayesianRecommender) class, offering an array of possibilities with internal surrogate models and support for various acquisition functions. - -The [`SequentialGreedyRecommender`](baybe.recommenders.pure.bayesian.sequential_greedy.SequentialGreedyRecommender) is a powerful recommender that leverages BoTorch optimization functions to perform sequential Greedy optimization. It can be applied for discrete, continuous and hybrid sarch spaces. It is an implementation of the BoTorch optimization functions for discrete, continuous and mixed spaces. - -It is important to note that this recommender performs a brute-force search when applied in hybrid search spaces, as it optimizes the continuous part of the space while exhaustively searching choices in the discrete subspace. You can customize this behavior to only sample a certain percentage of the discrete subspace via the ``sample_percentage`` attribute and to choose different sampling strategies via the ``hybrid_sampler`` attribute. An example on using this recommender in a hybrid space can be found [here](./../../examples/Backtesting/hybrid). - -The [`NaiveHybridSpaceRecommender`](baybe.recommenders.naive.NaiveHybridSpaceRecommender) can be applied to all search spaces, but is intended to be used in hybrid spaces. This recommender combines individual recommenders for the continuous and the discrete subspaces. It independently optimizes each subspace and consolidates the best results to generate a candidate for the original hybrid space. An example on using this recommender in a hybrid space can be found [here](./../../examples/Backtesting/hybrid). - -## Clustering recommenders - -BayBE offers a set of recommenders leveraging clustering techniques to facilitate initial point selection: -* **[`PAMClusteringRecommender`](baybe.recommenders.pure.nonpredictive.clustering.PAMClusteringRecommender):** This recommender utilizes partitioning around medoids for effective clustering. 
-* **[`KMeansClusteringRecommender`](baybe.recommenders.pure.nonpredictive.clustering.KMeansClusteringRecommender):** This recommender implements the k-means clustering strategy. -* **[`GaussianMixtureClusteringRecommender`](baybe.recommenders.pure.nonpredictive.clustering.GaussianMixtureClusteringRecommender):** This recommender leverages Gaussian Mixture Models for clustering. - -## Sampling recommenders - -BayBE provides two sampling-based recommenders: - -* **[`RandomRecommender`](baybe.recommenders.pure.nonpredictive.sampling.RandomRecommender):** This recommender offers random recommendations for all types of search spaces. This recommender is extensively used in backtesting examples, providing a valuable comparison. For detailed usage examples, refer to the examples listed [here](./../../examples/Backtesting/Backtesting). -* **[`FPSRecommender`](baybe.recommenders.pure.nonpredictive.sampling.FPSRecommender):** This recommender is only applicable for discrete search spaces, and recommends points based on farthest point sampling. A practical application showcasing the usage of this recommender can be found [here](./../../examples/Custom_Surrogates/surrogate_params). \ No newline at end of file diff --git a/docs/userguide/recommenders.md b/docs/userguide/recommenders.md new file mode 100644 index 000000000..42b91d4a7 --- /dev/null +++ b/docs/userguide/recommenders.md @@ -0,0 +1,130 @@ +# Recommenders + +## General Information + +Recommenders are an essential part of BayBE that effectively explore the search space +and provide recommendations for the next experiment or batch of experiments. + +Available recommenders can be partitioned into the following subclasses. + +## Pure Recommenders + +Pure recommenders simply take on the task to recommend measurements. They each contain +the inner logic to do so via different algorithms and approaches. + +While some pure recommenders are versatile and work across different types of search +spaces, other are specifically designed for discrete or continuous spaces. The +compatibility is indicated via the corresponding ``compatibility`` class variable. + +### Bayesian Recommenders + +The Bayesian recommenders in BayBE are built on the foundation of the +[`BayesianRecommender`](baybe.recommenders.pure.bayesian.base.BayesianRecommender) +class, offering an array of possibilities with internal surrogate models and support +for various acquisition functions. + +The [`SequentialGreedyRecommender`](baybe.recommenders.pure.bayesian.sequential_greedy.SequentialGreedyRecommender) +is a powerful recommender that leverages BoTorch optimization functions to perform +sequential Greedy optimization. It can be applied for discrete, continuous and hybrid +search spaces. It is an implementation of the BoTorch optimization functions for +discrete, continuous and mixed spaces. + +It is important to note that this recommender performs a brute-force search when +applied in hybrid search spaces, as it optimizes the continuous part of the space +while exhaustively searching choices in the discrete subspace. You can customize +this behavior to only sample a certain percentage of the discrete subspace via the +``sample_percentage`` attribute and to choose different sampling algorithms via the +``hybrid_sampler`` attribute. An example on using this recommender in a hybrid space +can be found [here](./../../examples/Backtesting/hybrid). 
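As a minimal sketch of the customization just described (keyword names taken from the hybrid and analytical backtesting examples further down in this patch, where the constructor argument appears as `sampling_percentage` rather than `sample_percentage`):

```python
from baybe.recommenders import SequentialGreedyRecommender

# Sketch: sample 30% of the discrete subspace via farthest point sampling instead
# of enumerating it exhaustively; keyword names as used in the examples of this patch.
recommender = SequentialGreedyRecommender(
    acquisition_function_cls="qEI",
    hybrid_sampler="Farthest",
    sampling_percentage=0.3,
)
```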
+ +The [`NaiveHybridSpaceRecommender`](baybe.recommenders.naive.NaiveHybridSpaceRecommender) +can be applied to all search spaces, but is intended to be used in hybrid spaces. +This recommender combines individual recommenders for the continuous and the discrete +subspaces. It independently optimizes each subspace and consolidates the best results +to generate a candidate for the original hybrid space. An example on using this +recommender in a hybrid space can be found [here](./../../examples/Backtesting/hybrid). + +### Clustering Recommenders + +BayBE offers a set of recommenders leveraging techniques to facilitate point selection +via clustering: +* **[`PAMClusteringRecommender`](baybe.recommenders.pure.nonpredictive.clustering.PAMClusteringRecommender):** + This recommender utilizes partitioning around medoids. +* **[`KMeansClusteringRecommender`](baybe.recommenders.pure.nonpredictive.clustering.KMeansClusteringRecommender):** + This recommender implements k-means clustering. +* **[`GaussianMixtureClusteringRecommender`](baybe.recommenders.pure.nonpredictive.clustering.GaussianMixtureClusteringRecommender):** + This recommender leverages Gaussian Mixture Models for clustering. + +### Sampling Recommenders + +BayBE provides two recommenders that recommend by sampling form the search space: +* **[`RandomRecommender`](baybe.recommenders.pure.nonpredictive.sampling.RandomRecommender):** + This recommender offers random recommendations for all types of search spaces. + It is extensively used in backtesting examples, providing a valuable comparison. + For detailed usage examples, refer to the list + [here](./../../examples/Backtesting/Backtesting). +* **[`FPSRecommender`](baybe.recommenders.pure.nonpredictive.sampling.FPSRecommender):** + This recommender is only applicable for discrete search spaces, and recommends points + based on farthest point sampling. A practical application showcasing the usage of + this recommender can be found + [here](./../../examples/Custom_Surrogates/surrogate_params). + +```{admonition} Additional Options for Discrete Search Spaces +:class: note +For discrete search spaces, BayBE provides additional control over pure recommenders. +You can explicitly define whether a recommender is allowed to recommend previous +recommendations again via `allow_repeated_recommendations` and whether it can output +recommendations that have already been measured via +`allow_recommending_already_measured`. +``` + +## Meta Recommenders + +On analogy to meta studies, meta recommenders are wrappers that operate on a sequence +of pure recommenders and determine when to switch between them according to different +logics. + +BayBE offers three distinct kinds of meta recommenders. + +### TwoPhase + +The +[`TwoPhaseMetaRecommender`](baybe.recommenders.meta.sequential.TwoPhaseMetaRecommender) +employs two distinct recommenders and switches between them at a certain specified +point, controlled by the `switch_after` attribute. This is useful e.g. if you want a +different recommender for the initial recommendation when there is no data yet +available. 
This simple example would recommend randomly for the first batch and switch +to a Bayesian recommender as soon as measurements have been ingested: +```python +from baybe.recommenders import ( + TwoPhaseMetaRecommender, + RandomRecommender, + SequentialGreedyRecommender, +) + +recommender = TwoPhaseMetaRecommender( + initial_recommender=RandomRecommender(), recommender=SequentialGreedyRecommender() +) +``` + +### Sequential + +The [`SequentialMetaRecommender`](baybe.recommenders.meta.sequential.SequentialMetaRecommender) introduces a simple yet versatile approach by utilizing a +predefined list of recommenders. By specifying the desired behavior using the `mode` +attribute, it is possible to flexibly determine the strategy's response when it +exhausts the available recommenders. The possible choices are to either raise an +error, re-use the last recommender or re-start at the beginning of the sequence. + +### Streaming Sequential + +Similar to the [`SequentialStrategy`](baybe.recommenders.meta.sequential.SequentialStrategy), +the +[`StreamingSequentialMetaRecommender`](baybe.recommenders.meta.sequential.StreamingSequentialMetaRecommender) +enables the utilization of *arbitrary* iterables to select recommender. + +```{warning} +Due to the arbitrary nature of iterables that can be used, de-/serializability cannot +be guaranteed. As a consequence, using a `StreamingSequentialMetaRecommender` results +in an error if you attempt to serialize the corresponding object or higher-level +objects containing it. +``` \ No newline at end of file diff --git a/docs/userguide/strategies.md b/docs/userguide/strategies.md deleted file mode 100644 index 5a04720e9..000000000 --- a/docs/userguide/strategies.md +++ /dev/null @@ -1,34 +0,0 @@ -# Strategies - -Strategies play a crucial role in orchestrating the usage of recommenders within a campaign. -A strategy operates on a sequence of recommenders and determines when to switch between them. -All strategies are built upon the `Strategy` class. - -BayBE offers three distinct kinds of strategies. - -## The `SequentialStrategy` - -The `SequentialStrategy` introduces a simple yet versatile approach by utilizing a -predefined list of recommenders. -By specifying the desired behavior using the `mode` attribute, it is possible to -flexibly determine the strategy's response when it exhausts the available recommenders. -The possible choices are to either raise an error, re-us the last recommender or -re-start at the beginning of the sequence. - -## The `StreamingSequentialStrategy` - -Similar to the `SequentialStrategy`, the `StreamingSequentialStrategy` enables the -utilization of *arbitrary* iterables to select recommender. Note that this strategy is -however not serializable. - -## The `TwoPhaseStrategy` - -The `TwoPhaseStrategy` employs two distinct recommenders and switches between them at a -certain specified point, controlled by the `switch_after` attribute. - -## Additional options for discrete search spaces - -For discrete search spaces, BayBE provides additional control over strategies. -You can explicitly define whether a strategy is allowed to recommend previously used -recommendations and whether it can output recommendations that have already been -measured. 
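For illustration, a hedged sketch of the two flags named in the admonition above — assuming they are ordinary constructor arguments of pure recommenders, which this patch does not spell out:

```python
from baybe.recommenders import RandomRecommender

# Assumption: the flags are constructor arguments; both only take effect in
# discrete search spaces, as stated in the admonition.
recommender = RandomRecommender(
    allow_repeated_recommendations=False,
    allow_recommending_already_measured=False,
)
```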
\ No newline at end of file diff --git a/docs/userguide/userguide.md b/docs/userguide/userguide.md index 846e4a9aa..6a5a9470e 100644 --- a/docs/userguide/userguide.md +++ b/docs/userguide/userguide.md @@ -5,10 +5,9 @@ Campaigns Constraints Objective Parameters -PureRecommender +Recommenders Search Spaces Simulation -Strategies Surrogates Targets Transfer Learning From 4682d49977fd2e46a8a5ff7e40d60e87bc019667 Mon Sep 17 00:00:00 2001 From: Martin Fitzner Date: Tue, 27 Feb 2024 17:21:06 +0100 Subject: [PATCH 2/8] Consolidate examples --- docs/userguide/campaigns.md | 2 +- examples/Backtesting/custom_analytical.py | 12 ++++++------ examples/Backtesting/hybrid.py | 12 ++++++------ examples/Basics/Basics_Header.md | 2 +- examples/Basics/{strategies.py => recommenders.py} | 8 ++++---- examples/Searchspaces/hybrid_space.py | 5 ++--- streamlit/initial_strategy.py | 12 +++++++----- 7 files changed, 27 insertions(+), 26 deletions(-) rename examples/Basics/{strategies.py => recommenders.py} (97%) diff --git a/docs/userguide/campaigns.md b/docs/userguide/campaigns.md index a5a1ca493..d0b250b39 100644 --- a/docs/userguide/campaigns.md +++ b/docs/userguide/campaigns.md @@ -28,7 +28,7 @@ describe the underlying optimization problem at hand: Apart from this basic configuration, it is possible to further define the specific optimization `Recommender` ([class](baybe.recommenders.pure.base.PureRecommender) -/ [user guide](./recommender)) to be used. +/ [user guide](./recommenders)) to be used. ~~~python from baybe import Campaign diff --git a/examples/Backtesting/custom_analytical.py b/examples/Backtesting/custom_analytical.py index c8af09537..fbf3d36d8 100644 --- a/examples/Backtesting/custom_analytical.py +++ b/examples/Backtesting/custom_analytical.py @@ -77,24 +77,24 @@ def sum_of_squares(*x: float) -> float: ### Constructing campaigns for the simulation loop -# To simplify adjusting the example for other strategies, we construct some recommender objects. -# For details on recommender objects, we refer to [`strategies`](./../Basics/strategies.md). +# To simplify adjusting the example for other recommenders, we construct some recommender objects. +# For details on recommender objects, we refer to [`recommenders`](./../Basics/recommenders.md). -seq_greedy_EI_strategy = TwoPhaseMetaRecommender( +seq_greedy_EI_recommender = TwoPhaseMetaRecommender( recommender=SequentialGreedyRecommender(acquisition_function_cls="qEI"), ) -random_strategy = TwoPhaseMetaRecommender(recommender=RandomRecommender()) +random_recommender = TwoPhaseMetaRecommender(recommender=RandomRecommender()) # We now create one campaign per recommender. seq_greedy_EI_campaign = Campaign( searchspace=searchspace, - recommender=seq_greedy_EI_strategy, + recommender=seq_greedy_EI_recommender, objective=objective, ) random_campaign = Campaign( searchspace=searchspace, - recommender=random_strategy, + recommender=random_recommender, objective=objective, ) diff --git a/examples/Backtesting/hybrid.py b/examples/Backtesting/hybrid.py index 06670467a..325b7bd68 100644 --- a/examples/Backtesting/hybrid.py +++ b/examples/Backtesting/hybrid.py @@ -124,31 +124,31 @@ def sum_of_squares(*x: float) -> float: # Note that the recommender performs one optimization of the continuous subspace per sampled point. # We thus recommend to keep this parameter rather low. 
-seq_greedy_strategy = TwoPhaseMetaRecommender( +seq_greedy_recommender = TwoPhaseMetaRecommender( recommender=SequentialGreedyRecommender( hybrid_sampler="Farthest", sampling_percentage=0.3 ), ) -naive_hybrid_strategy = TwoPhaseMetaRecommender( +naive_hybrid_recommender = TwoPhaseMetaRecommender( recommender=NaiveHybridSpaceRecommender() ) -random_strategy = TwoPhaseMetaRecommender(recommender=RandomRecommender()) +random_recommender = TwoPhaseMetaRecommender(recommender=RandomRecommender()) # We now create one campaign per recommender. seq_greedy_campaign = Campaign( searchspace=searchspace, - recommender=seq_greedy_strategy, + recommender=seq_greedy_recommender, objective=objective, ) naive_hybrid_campaign = Campaign( searchspace=searchspace, - recommender=naive_hybrid_strategy, + recommender=naive_hybrid_recommender, objective=objective, ) random_campaign = Campaign( searchspace=searchspace, - recommender=random_strategy, + recommender=random_recommender, objective=objective, ) diff --git a/examples/Basics/Basics_Header.md b/examples/Basics/Basics_Header.md index 0d3f05cb0..fd908df61 100644 --- a/examples/Basics/Basics_Header.md +++ b/examples/Basics/Basics_Header.md @@ -2,4 +2,4 @@ These examples demonstrate the most basic aspects of BayBE: How to set up a {doc}`Campaign ` and how to configure an optimization -{doc}`Strategy `. \ No newline at end of file +{doc}`Recommenders `. \ No newline at end of file diff --git a/examples/Basics/strategies.py b/examples/Basics/recommenders.py similarity index 97% rename from examples/Basics/strategies.py rename to examples/Basics/recommenders.py index 3a394dd95..5154c8217 100644 --- a/examples/Basics/strategies.py +++ b/examples/Basics/recommenders.py @@ -34,7 +34,7 @@ from baybe.targets import NumericalTarget from baybe.utils.dataframe import add_fake_results -### Available initial strategies +### Available recommenders suitable for initial recommendation # For the first recommendation, the user can specify which recommender to use. # The following initial recommenders are available. @@ -110,7 +110,7 @@ # Note that they all have default values. # Therefore one does not need to specify all of them to create a recommender object. -strategy = TwoPhaseMetaRecommender( +recommender = TwoPhaseMetaRecommender( initial_recommender=INITIAL_RECOMMENDER, recommender=SequentialGreedyRecommender( surrogate_model=SURROGATE_MODEL, @@ -120,7 +120,7 @@ ), ) -print(strategy) +print(recommender) # Note that there are the additional keywords `hybrid_sampler` and `sampling_percentag`. # Their meaning and how to use and define it are explained in the hybrid backtesting example. @@ -177,7 +177,7 @@ campaign = Campaign( searchspace=searchspace, - recommender=strategy, + recommender=recommender, objective=objective, ) diff --git a/examples/Searchspaces/hybrid_space.py b/examples/Searchspaces/hybrid_space.py index bb2a92833..74ced677c 100644 --- a/examples/Searchspaces/hybrid_space.py +++ b/examples/Searchspaces/hybrid_space.py @@ -111,15 +111,14 @@ # recommenders for the corresponding subspaces. # We use the default choices, which is the `SequentialGreedyRecommender`. 
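The comment above notes that other recommenders could be supplied for the two subspaces; a hedged sketch of such a non-default combination — the keyword names `disc_recommender` and `cont_recommender` are assumptions, as they do not appear in this patch:

```python
from baybe.recommenders import NaiveHybridSpaceRecommender, SequentialGreedyRecommender
from baybe.recommenders.pure.nonpredictive.clustering import KMeansClusteringRecommender

# Assumed keyword names: cluster-based selection for the discrete subspace,
# sequential greedy optimization for the continuous subspace.
hybrid_recommender = NaiveHybridSpaceRecommender(
    disc_recommender=KMeansClusteringRecommender(),
    cont_recommender=SequentialGreedyRecommender(),
)
```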
-hybrid_recommender = NaiveHybridSpaceRecommender() -hybrid_strategy = TwoPhaseMetaRecommender(recommender=hybrid_recommender) +hybrid_recommender = TwoPhaseMetaRecommender(recommender=NaiveHybridSpaceRecommender()) ### Constructing the campaign and performing a recommendation campaign = Campaign( searchspace=searchspace, objective=objective, - recommender=hybrid_strategy, + recommender=hybrid_recommender, ) # Get a recommendation for a fixed batch size. diff --git a/streamlit/initial_strategy.py b/streamlit/initial_strategy.py index e198f8b9f..03a1815fe 100644 --- a/streamlit/initial_strategy.py +++ b/streamlit/initial_strategy.py @@ -1,4 +1,4 @@ -"""This script allows comparing initial selection strategies on different data sets.""" +"""This script allows comparing selection recommenders on different data sets.""" import numpy as np import pandas as pd @@ -60,8 +60,8 @@ def plot_point_selection(points, selection, title): "Gaussian Mixture Model": gaussian_mixture_model, } -# collect all available strategies -selection_strategies = { +# collect all available recommenders +selection_recommenders = { cls.__name__: cls for cls in get_subclasses(NonPredictiveRecommender) } @@ -73,7 +73,9 @@ def main(): # simulation parameters random_seed = int(st.sidebar.number_input("Random seed", value=42)) - strategy_name = st.sidebar.selectbox("Strategy", list(selection_strategies.keys())) + strategy_name = st.sidebar.selectbox( + "Strategy", list(selection_recommenders.keys()) + ) n_points = st.sidebar.slider("Number of points to be generated", 10, 100, value=50) n_selected = st.sidebar.slider( "Number of points to be selected", @@ -108,7 +110,7 @@ def main(): # create the recommender and generate the recommendations # TODO: The acquisition function should become optional for model-free methods - strategy = selection_strategies[strategy_name]() + strategy = selection_recommenders[strategy_name]() selection = strategy.recommend(searchspace=searchspace, batch_size=n_selected) # show the result From 676ceb16271d94c0495db62cbf451e35f99e58ab Mon Sep 17 00:00:00 2001 From: Martin Fitzner Date: Tue, 27 Feb 2024 17:21:33 +0100 Subject: [PATCH 3/8] Refactor names and links --- baybe/recommenders/pure/__init__.py | 2 +- baybe/recommenders/pure/nonpredictive/clustering.py | 2 +- baybe/recommenders/pure/nonpredictive/sampling.py | 2 +- docs/userguide/recommenders.md | 2 +- docs/userguide/searchspace.md | 2 +- tests/test_iterations.py | 8 ++++---- 6 files changed, 9 insertions(+), 9 deletions(-) diff --git a/baybe/recommenders/pure/__init__.py b/baybe/recommenders/pure/__init__.py index d46700682..a2eb0afb0 100644 --- a/baybe/recommenders/pure/__init__.py +++ b/baybe/recommenders/pure/__init__.py @@ -1,6 +1,6 @@ """Pure recommenders. -Pure recommenders implement optimization strategies and can be queried for +Pure recommenders implement selection algorithms and can be queried for providing recommendations. They can be part of meta recommenders. 
""" diff --git a/baybe/recommenders/pure/nonpredictive/clustering.py b/baybe/recommenders/pure/nonpredictive/clustering.py index dfeb51e40..6a2ad0577 100644 --- a/baybe/recommenders/pure/nonpredictive/clustering.py +++ b/baybe/recommenders/pure/nonpredictive/clustering.py @@ -1,4 +1,4 @@ -"""Recommendation strategies based on clustering.""" +"""Recommenders based on clustering.""" from abc import ABC from typing import ClassVar, List, Type, Union diff --git a/baybe/recommenders/pure/nonpredictive/sampling.py b/baybe/recommenders/pure/nonpredictive/sampling.py index 8deb6a8e2..f1685a22c 100644 --- a/baybe/recommenders/pure/nonpredictive/sampling.py +++ b/baybe/recommenders/pure/nonpredictive/sampling.py @@ -1,4 +1,4 @@ -"""Recommendation strategies based on sampling.""" +"""Recommenders based on sampling.""" from typing import ClassVar diff --git a/docs/userguide/recommenders.md b/docs/userguide/recommenders.md index 42b91d4a7..a090d3293 100644 --- a/docs/userguide/recommenders.md +++ b/docs/userguide/recommenders.md @@ -117,7 +117,7 @@ error, re-use the last recommender or re-start at the beginning of the sequence. ### Streaming Sequential -Similar to the [`SequentialStrategy`](baybe.recommenders.meta.sequential.SequentialStrategy), +Similar to the [`SequentialMetaRecommender`](baybe.recommenders.meta.sequential.SequentialMetaRecommender), the [`StreamingSequentialMetaRecommender`](baybe.recommenders.meta.sequential.StreamingSequentialMetaRecommender) enables the utilization of *arbitrary* iterables to select recommender. diff --git a/docs/userguide/searchspace.md b/docs/userguide/searchspace.md index 22b049b41..bcc1a7d92 100644 --- a/docs/userguide/searchspace.md +++ b/docs/userguide/searchspace.md @@ -23,7 +23,7 @@ A discrete/continuous search space is a s searchspace that was constructed by on In addition to the ones noted above, a discrete subspace has the following attributes: * **The experimental representation:** A ``DataFrame`` representing the experimental representation of the subspace. * **The metadata:** A ``DataFrame`` keeping track of different metadata that is relevant for running a campaign. -* **An "empty" encoding flag:** A flag denoting whether an "empty" encoding should be used. This is useful, for instance, in combination with random search strategies that do not read the actual parameter values. +* **An "empty" encoding flag:** A flag denoting whether an "empty" encoding should be used. This is useful, for instance, in combination with random recommenders that do not read the actual parameter values. * **The computational representation:** The computational representation of the space. If not provided explicitly, it will be derived from the experimental representation. Although it is possible to directly create a discrete subspace via the ``__init__`` function, it is intended to create themvia the [`from_dataframe`](baybe.searchspace.discrete.SubspaceDiscrete.from_dataframe) or [`from_product`](baybe.searchspace.discrete.SubspaceDiscrete.from_product) methods. These methods either require a ``DataFrame`` containing the experimental representation of the parameters and the optional explicit list of parameters (``from_dataframe``) or a list of parameters and optional constraints (``from_product``). 
diff --git a/tests/test_iterations.py b/tests/test_iterations.py index e511c00f5..2c80bf491 100644 --- a/tests/test_iterations.py +++ b/tests/test_iterations.py @@ -53,7 +53,7 @@ for cls in get_subclasses(PureRecommender) if cls.compatibility == SearchSpaceType.HYBRID ] -# List of SequentialGreedy PureRecommender with different sampling strategies. +# List of SequentialGreedy recommenders with different sampling strategies. sampling_strategies = [ # Valid combinations ("None", 0.0), @@ -103,7 +103,7 @@ valid_hybrid_recommenders.extend(valid_naive_hybrid_recommenders) valid_hybrid_recommenders.extend(valid_hybrid_sequential_greedy_recommenders) -valid_strategies = get_subclasses(MetaRecommender) +valid_meta_recommenders = get_subclasses(MetaRecommender) test_targets = [ ["Target_max"], @@ -164,6 +164,6 @@ def test_iter_recommender_hybrid(campaign, n_iterations, batch_size): run_iterations(campaign, n_iterations, batch_size) -@pytest.mark.parametrize("recommender", valid_strategies, indirect=True) -def test_strategies(campaign, n_iterations, batch_size): +@pytest.mark.parametrize("recommender", valid_meta_recommenders, indirect=True) +def test_meta_recommenders(campaign, n_iterations, batch_size): run_iterations(campaign, n_iterations, batch_size) From 74fbac8ee4d30db0152a9d0b533a1d21a9376ebd Mon Sep 17 00:00:00 2001 From: Martin Fitzner Date: Tue, 27 Feb 2024 17:26:39 +0100 Subject: [PATCH 4/8] Move admonition --- docs/userguide/recommenders.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/docs/userguide/recommenders.md b/docs/userguide/recommenders.md index a090d3293..04751d09a 100644 --- a/docs/userguide/recommenders.md +++ b/docs/userguide/recommenders.md @@ -16,6 +16,15 @@ While some pure recommenders are versatile and work across different types of se spaces, other are specifically designed for discrete or continuous spaces. The compatibility is indicated via the corresponding ``compatibility`` class variable. +```{admonition} Additional Options for Discrete Search Spaces +:class: note +For discrete search spaces, BayBE provides additional control over pure recommenders. +You can explicitly define whether a recommender is allowed to recommend previous +recommendations again via `allow_repeated_recommendations` and whether it can output +recommendations that have already been measured via +`allow_recommending_already_measured`. +``` + ### Bayesian Recommenders The Bayesian recommenders in BayBE are built on the foundation of the @@ -69,15 +78,6 @@ BayBE provides two recommenders that recommend by sampling form the search space this recommender can be found [here](./../../examples/Custom_Surrogates/surrogate_params). -```{admonition} Additional Options for Discrete Search Spaces -:class: note -For discrete search spaces, BayBE provides additional control over pure recommenders. -You can explicitly define whether a recommender is allowed to recommend previous -recommendations again via `allow_repeated_recommendations` and whether it can output -recommendations that have already been measured via -`allow_recommending_already_measured`. 
-``` - ## Meta Recommenders On analogy to meta studies, meta recommenders are wrappers that operate on a sequence From 9ae9da6d46b31c7724a59445e884df26f396aa56 Mon Sep 17 00:00:00 2001 From: Martin Fitzner Date: Wed, 28 Feb 2024 09:05:14 +0100 Subject: [PATCH 5/8] Activate SMOKE_TEST in example tests --- tests/docs/test_examples.py | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/tests/docs/test_examples.py b/tests/docs/test_examples.py index ea0ac8dd7..b512c3e58 100644 --- a/tests/docs/test_examples.py +++ b/tests/docs/test_examples.py @@ -1,5 +1,6 @@ """Test if all examples can be run without error.""" +import os import runpy from pathlib import Path @@ -9,6 +10,10 @@ from ..conftest import _CHEM_INSTALLED +# Run these tests in reduced settings +_SMOKE_TEST_CACHE = os.environ.get("SMOKE_TEST", None) +os.environ["SMOKE_TEST"] = "true" + @pytest.mark.slow @pytest.mark.skipif( @@ -18,3 +23,9 @@ def test_example(example: Path): """Test an individual example by running it.""" runpy.run_path(str(example)) + + +if _SMOKE_TEST_CACHE is not None: + os.environ["SMOKE_TEST"] = _SMOKE_TEST_CACHE +else: + os.environ.pop("SMOKE_TEST") From 3806dae46471e3b41acefd63eb8fc7051d244a4c Mon Sep 17 00:00:00 2001 From: Martin Fitzner Date: Wed, 28 Feb 2024 09:06:42 +0100 Subject: [PATCH 6/8] Update user guide --- docs/userguide/recommenders.md | 106 +++++++++++++++------------------ 1 file changed, 47 insertions(+), 59 deletions(-) diff --git a/docs/userguide/recommenders.md b/docs/userguide/recommenders.md index 04751d09a..af4b09b6b 100644 --- a/docs/userguide/recommenders.md +++ b/docs/userguide/recommenders.md @@ -3,18 +3,16 @@ ## General Information Recommenders are an essential part of BayBE that effectively explore the search space -and provide recommendations for the next experiment or batch of experiments. - +and provide recommendations for the next experiment or batch of experiments. Available recommenders can be partitioned into the following subclasses. ## Pure Recommenders Pure recommenders simply take on the task to recommend measurements. They each contain the inner logic to do so via different algorithms and approaches. - While some pure recommenders are versatile and work across different types of search spaces, other are specifically designed for discrete or continuous spaces. The -compatibility is indicated via the corresponding ``compatibility`` class variable. +compatibility is indicated via the corresponding `compatibility` class variable. ```{admonition} Additional Options for Discrete Search Spaces :class: note @@ -32,26 +30,24 @@ The Bayesian recommenders in BayBE are built on the foundation of the class, offering an array of possibilities with internal surrogate models and support for various acquisition functions. -The [`SequentialGreedyRecommender`](baybe.recommenders.pure.bayesian.sequential_greedy.SequentialGreedyRecommender) -is a powerful recommender that leverages BoTorch optimization functions to perform -sequential Greedy optimization. It can be applied for discrete, continuous and hybrid -search spaces. It is an implementation of the BoTorch optimization functions for -discrete, continuous and mixed spaces. - -It is important to note that this recommender performs a brute-force search when -applied in hybrid search spaces, as it optimizes the continuous part of the space -while exhaustively searching choices in the discrete subspace. 
You can customize -this behavior to only sample a certain percentage of the discrete subspace via the -``sample_percentage`` attribute and to choose different sampling algorithms via the -``hybrid_sampler`` attribute. An example on using this recommender in a hybrid space -can be found [here](./../../examples/Backtesting/hybrid). - -The [`NaiveHybridSpaceRecommender`](baybe.recommenders.naive.NaiveHybridSpaceRecommender) -can be applied to all search spaces, but is intended to be used in hybrid spaces. -This recommender combines individual recommenders for the continuous and the discrete -subspaces. It independently optimizes each subspace and consolidates the best results -to generate a candidate for the original hybrid space. An example on using this -recommender in a hybrid space can be found [here](./../../examples/Backtesting/hybrid). +* The **[`SequentialGreedyRecommender`](baybe.recommenders.pure.bayesian.sequential_greedy.SequentialGreedyRecommender)** + is a powerful recommender that performs sequential Greedy optimization. It can be + applied for discrete, continuous and hybrid search spaces. It is an implementation of + the BoTorch optimization functions for discrete, continuous and mixed spaces. + It is important to note that this recommender performs a brute-force search when + applied in hybrid search spaces, as it optimizes the continuous part of the space + while exhaustively searching choices in the discrete subspace. You can customize + this behavior to only sample a certain percentage of the discrete subspace via the + `sample_percentage` attribute and to choose different sampling algorithms via the + `hybrid_sampler` attribute. An example on using this recommender in a hybrid space + can be found [here](./../../examples/Backtesting/hybrid). + +* The **[`NaiveHybridSpaceRecommender`](baybe.recommenders.naive.NaiveHybridSpaceRecommender)** + can be applied to all search spaces, but is intended to be used in hybrid spaces. + This recommender combines individual recommenders for the continuous and the discrete + subspaces. It independently optimizes each subspace and consolidates the best results + to generate a candidate for the original hybrid space. An example on using this + recommender in a hybrid space can be found [here](./../../examples/Backtesting/hybrid). ### Clustering Recommenders @@ -80,21 +76,17 @@ BayBE provides two recommenders that recommend by sampling form the search space ## Meta Recommenders -On analogy to meta studies, meta recommenders are wrappers that operate on a sequence +In analogy to meta studies, meta recommenders are wrappers that operate on a sequence of pure recommenders and determine when to switch between them according to different -logics. - -BayBE offers three distinct kinds of meta recommenders. - -### TwoPhase - -The -[`TwoPhaseMetaRecommender`](baybe.recommenders.meta.sequential.TwoPhaseMetaRecommender) -employs two distinct recommenders and switches between them at a certain specified -point, controlled by the `switch_after` attribute. This is useful e.g. if you want a -different recommender for the initial recommendation when there is no data yet -available. This simple example would recommend randomly for the first batch and switch -to a Bayesian recommender as soon as measurements have been ingested: +logics. BayBE offers three distinct kinds of meta recommenders. 
+ +* The + [`TwoPhaseMetaRecommender`](baybe.recommenders.meta.sequential.TwoPhaseMetaRecommender) + employs two distinct recommenders and switches between them at a certain specified + point, controlled by the `switch_after` attribute. This is useful e.g. if you want a + different recommender for the initial recommendation when there is no data yet + available. This simple example would recommend randomly for the first batch and switch + to a Bayesian recommender as soon as measurements have been ingested: ```python from baybe.recommenders import ( TwoPhaseMetaRecommender, @@ -107,24 +99,20 @@ recommender = TwoPhaseMetaRecommender( ) ``` -### Sequential - -The [`SequentialMetaRecommender`](baybe.recommenders.meta.sequential.SequentialMetaRecommender) introduces a simple yet versatile approach by utilizing a -predefined list of recommenders. By specifying the desired behavior using the `mode` -attribute, it is possible to flexibly determine the strategy's response when it -exhausts the available recommenders. The possible choices are to either raise an -error, re-use the last recommender or re-start at the beginning of the sequence. - -### Streaming Sequential - -Similar to the [`SequentialMetaRecommender`](baybe.recommenders.meta.sequential.SequentialMetaRecommender), -the -[`StreamingSequentialMetaRecommender`](baybe.recommenders.meta.sequential.StreamingSequentialMetaRecommender) -enables the utilization of *arbitrary* iterables to select recommender. - -```{warning} -Due to the arbitrary nature of iterables that can be used, de-/serializability cannot -be guaranteed. As a consequence, using a `StreamingSequentialMetaRecommender` results -in an error if you attempt to serialize the corresponding object or higher-level -objects containing it. -``` \ No newline at end of file +* The **[`SequentialMetaRecommender`](baybe.recommenders.meta.sequential.SequentialMetaRecommender)** + introduces a simple yet versatile approach by utilizing a predefined list of + recommenders. By specifying the desired behavior using the `mode` attribute, it is + possible to flexibly determine the strategy's response when it exhausts the available + recommenders. The possible choices are to either raise an error, re-use the last + recommender or re-start at the beginning of the sequence. + +* Similar to the `SequentialMetaRecommender`, the + **[`StreamingSequentialMetaRecommender`](baybe.recommenders.meta.sequential.StreamingSequentialMetaRecommender)** + enables the utilization of *arbitrary* iterables to select recommender. + + ```{warning} + Due to the arbitrary nature of iterables that can be used, de-/serializability cannot + be guaranteed. As a consequence, using a `StreamingSequentialMetaRecommender` results + in an error if you attempt to serialize the corresponding object or higher-level + objects containing it. 
+ ``` \ No newline at end of file From 35c5fb3d60b69f775dcc00fcd69fdd0b4e8baed1 Mon Sep 17 00:00:00 2001 From: Martin Fitzner Date: Wed, 28 Feb 2024 09:48:07 +0100 Subject: [PATCH 7/8] Refactor streamlit --- .../{initial_strategy.py => initial_recommender.py} | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) rename streamlit/{initial_strategy.py => initial_recommender.py} (92%) diff --git a/streamlit/initial_strategy.py b/streamlit/initial_recommender.py similarity index 92% rename from streamlit/initial_strategy.py rename to streamlit/initial_recommender.py index 03a1815fe..44362c358 100644 --- a/streamlit/initial_strategy.py +++ b/streamlit/initial_recommender.py @@ -73,8 +73,8 @@ def main(): # simulation parameters random_seed = int(st.sidebar.number_input("Random seed", value=42)) - strategy_name = st.sidebar.selectbox( - "Strategy", list(selection_recommenders.keys()) + recommender_name = st.sidebar.selectbox( + "Recommender", list(selection_recommenders.keys()) ) n_points = st.sidebar.slider("Number of points to be generated", 10, 100, value=50) n_selected = st.sidebar.slider( @@ -110,11 +110,11 @@ def main(): # create the recommender and generate the recommendations # TODO: The acquisition function should become optional for model-free methods - strategy = selection_recommenders[strategy_name]() - selection = strategy.recommend(searchspace=searchspace, batch_size=n_selected) + recommender = selection_recommenders[recommender_name]() + selection = recommender.recommend(searchspace=searchspace, batch_size=n_selected) # show the result - fig = plot_point_selection(points.values, selection.index.values, strategy_name) + fig = plot_point_selection(points.values, selection.index.values, recommender_name) st.plotly_chart(fig) From 38f6a19e716dd3e4cdee36ee31eb04713a3e5768 Mon Sep 17 00:00:00 2001 From: Martin Fitzner Date: Wed, 28 Feb 2024 09:51:32 +0100 Subject: [PATCH 8/8] Fix text --- docs/userguide/recommenders.md | 6 +++--- examples/Basics/Basics_Header.md | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/userguide/recommenders.md b/docs/userguide/recommenders.md index af4b09b6b..4634657bb 100644 --- a/docs/userguide/recommenders.md +++ b/docs/userguide/recommenders.md @@ -102,9 +102,9 @@ recommender = TwoPhaseMetaRecommender( * The **[`SequentialMetaRecommender`](baybe.recommenders.meta.sequential.SequentialMetaRecommender)** introduces a simple yet versatile approach by utilizing a predefined list of recommenders. By specifying the desired behavior using the `mode` attribute, it is - possible to flexibly determine the strategy's response when it exhausts the available - recommenders. The possible choices are to either raise an error, re-use the last - recommender or re-start at the beginning of the sequence. + possible to flexibly determine the meta recommender's response when it exhausts the + available recommenders. The possible choices are to either raise an error, re-use the + last recommender or re-start at the beginning of the sequence. 
* Similar to the `SequentialMetaRecommender`, the **[`StreamingSequentialMetaRecommender`](baybe.recommenders.meta.sequential.StreamingSequentialMetaRecommender)** diff --git a/examples/Basics/Basics_Header.md b/examples/Basics/Basics_Header.md index fd908df61..69b137c7f 100644 --- a/examples/Basics/Basics_Header.md +++ b/examples/Basics/Basics_Header.md @@ -2,4 +2,4 @@ These examples demonstrate the most basic aspects of BayBE: How to set up a {doc}`Campaign ` and how to configure an optimization -{doc}`Recommenders `. \ No newline at end of file +{doc}`Recommender `. \ No newline at end of file
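To complement the meta recommender hunks above, a minimal sketch of the `SequentialMetaRecommender` they describe — the `recommenders` field name and the `"reuse_last"` literal are assumptions; only the `mode` attribute itself is named in the user guide text, and import locations follow the other examples in this patch:

```python
from baybe.recommenders import (
    RandomRecommender,
    SequentialGreedyRecommender,
    SequentialMetaRecommender,
)

# Assumed field names: use each recommender in turn and, once the list is
# exhausted, keep re-using the last one (one of the behaviors described above).
recommender = SequentialMetaRecommender(
    recommenders=[RandomRecommender(), SequentialGreedyRecommender()],
    mode="reuse_last",
)
```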