
probability_below_threshold does assert_scalar for threshold while threshold can also be a TensorType #883

Open
ioananikova opened this issue Oct 23, 2024 · 6 comments
Labels
enhancement New feature or request

Comments

@ioananikova

Describe the bug
The class description of the probability_below_threshold acquisition function says that threshold can be a float or a TensorType. However, when it is initialized with a TensorType threshold, the tf.debugging.assert_scalar check fails and raises an error.

To reproduce
Steps to reproduce the behaviour:

  1. Create a problem with the ProbabilityOfFeasibility acquisition function (or test the probability_below_threshold class directly).
  2. Pass a TensorType as the threshold argument (this is useful when a ModelStack is used and a different threshold is needed for each constraint the model learns).
  3. Initialization fails with, e.g., "ValueError: Expected scalar shape, saw shape: (1, 2)" (see the sketch below).
  4. With a float threshold, no error is raised.

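As a minimal sketch of the failing check (the threshold shape matches the error above; the exact call path inside probability_below_threshold may differ slightly between versions):

```python
import tensorflow as tf

# One threshold per constraint learnt by the ModelStack (illustrative values), shape (1, 2).
threshold = tf.constant([[0.3, 0.7]])

# probability_below_threshold validates its threshold with this assertion,
# which rejects anything that is not rank-0:
tf.debugging.assert_scalar(threshold)
# ValueError: Expected scalar shape, saw shape: (1, 2)
```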

Expected behaviour
A TensorType threshold should be supported. I commented out the assertion and then got correct results with a TensorType. So, the assertion should be changed to accept both scalars and TensorTypes.

System information

  • OS: WSL
  • Python version: 3.11.6
  • Trieste version: 3.3.4 (but I saw that the latest version still has this assertion)
  • TensorFlow version: 2.15.0
  • GPflow version: 2.9.1


ioananikova added the bug (Something isn't working) label on Oct 23, 2024
@uri-granta
Collaborator

Apologies for the very slow response! It is true that threshold can be a TensorType, but as the docstring notes, it represents "the (scalar) probability of feasibility threshold". It can therefore currently only be a scalar value: either a float (e.g. 0.0) or a scalar tensor (e.g. tf.constant(0.0)).

Are you trying to do constrained optimisation with multiple constraints? If so, have you seen this tutorial, which combines two ProbabilityOfFeasibility functions via a product?
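As a rough sketch of that pattern (the tag names "OBJECTIVE", "CONSTRAINT1", "CONSTRAINT2" and the threshold values are placeholders, and the exact import paths are from memory of the tutorial and may differ between trieste versions):

```python
from trieste.acquisition import (
    ExpectedConstrainedImprovement,
    ProbabilityOfFeasibility,
    Product,
)

# One ProbabilityOfFeasibility per constraint model, each with its own
# (scalar) threshold, combined via their product.
pof1 = ProbabilityOfFeasibility(threshold=0.5).using("CONSTRAINT1")
pof2 = ProbabilityOfFeasibility(threshold=0.1).using("CONSTRAINT2")
eci = ExpectedConstrainedImprovement("OBJECTIVE", Product(pof1, pof2))
```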

@ioananikova
Author

I am indeed trying to use multiple constraints. I know the approach from the tutorial, but I recently switched to using ModelStack, which, as far as I understand, cannot be combined with the Reducer because it is seen as one model. So, I want to be able to give the stacked model to ProbabilityOfFeasibility along with a different constraint threshold for each constraint.

@uri-granta
Collaborator

I believe that's not currently supported, though I will check whether there exists any simple workaround.

@ioananikova
Author

From what I tried, ProbabilityOfFeasibility also appears to work with a TensorType threshold: if a ModelStack is provided, it returns the probability of feasibility for each sub-model using the corresponding threshold value.

@uri-granta
Collaborator

ProbabilityOfFeasibility definitely only supports a single (scalar) threshold at the moment. You can pass in a TensorType threshold, but only if that tensor is a single scalar value (e.g. tf.constant(1.0)): that's why we call tf.debugging.assert_scalar(threshold).
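To illustrate the distinction (plain TensorFlow, no trieste specifics):

```python
import tensorflow as tf

tf.debugging.assert_scalar(0.0)                      # OK: Python float
tf.debugging.assert_scalar(tf.constant(1.0))         # OK: rank-0 tensor
tf.debugging.assert_scalar(tf.constant([1.0, 2.0]))  # ValueError: not a scalar
```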

We will have to extend trieste if we want to support applying different thresholds to the different sub-models in a model stack. I'm still not sure what the best way of doing that is (there are a few options), but I'll change the ticket type to enhancement.

uri-granta added the enhancement (New feature or request) label and removed the bug (Something isn't working) label on Jan 9, 2025
@ioananikova
Author

probability_below_threshold uses tfp.distributions.Normal, which allows batch creation of Normal distributions (see example). That is also what I see in my experiments: if two thresholds are given to cdf and the model predictions used to create the Normal distribution are also of size two, then the first threshold is applied to the first distribution and the second to the second. This corresponds to applying the first threshold to the first model and the second threshold to the second model.
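A small sketch of that broadcasting behaviour (the loc, scale, and threshold values are illustrative, standing in for two sub-models' predictions and their thresholds):

```python
import tensorflow_probability as tfp

# Batch of two Normal distributions, e.g. one per sub-model in a ModelStack.
normal = tfp.distributions.Normal(loc=[0.0, 1.0], scale=[1.0, 2.0])

# cdf is applied elementwise: the first threshold is evaluated under the
# first distribution and the second under the second.
probs = normal.cdf([0.5, 0.2])  # shape (2,)
```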
