Unable to find mistake when using PosteriorMean acquisition function #1406
-
I am trying to use the PosteriorMean acquisition function to get the best possible candidate from my model (the goal is to find the candidate with the lowest cost). I tested the model with one datum [1, 1, 1, 1] from the training dataset and got the expected result. I then tested it with the candidate [1, 2, 2, 2], which is known to perform better than [1, 1, 1, 1], and got a lower cost as expected. The issue is that the candidate the optimiser suggests isn't really a good suggestion. The following are two candidates from the dataset itself; the second is the best candidate in the dataset.
CODE OUTPUT:
Please let me know what I am doing wrong in the optimisation process. I may have been using the BoTorch package incorrectly for a while now without realising it, because I was always getting a suggestion and did not really pay attention to the details.
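For reference, a minimal sketch of the kind of workflow described above (not the original code): the model, data, bounds, and cost values are placeholders, and `maximize=False` on `PosteriorMean` assumes a BoTorch version that supports that argument; with older versions you would instead model the negated cost.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import PosteriorMean
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Placeholder training data and search-space bounds (4 parameters, values assumed).
bounds = torch.tensor([[0.0] * 4, [4.0] * 4], dtype=torch.double)
train_X = bounds[0] + (bounds[1] - bounds[0]) * torch.rand(20, 4, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)  # placeholder "cost" to be minimised

model = SingleTaskGP(train_X, train_Y)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_mll(mll)

# Goal is the *lowest* cost, so tell PosteriorMean to minimise rather than maximise.
acqf = PosteriorMean(model, maximize=False)

# Sanity-check the acquisition value at known candidates (note the q=1 batch dim).
known = torch.tensor([[1.0, 1.0, 1.0, 1.0], [1.0, 2.0, 2.0, 2.0]], dtype=torch.double)
print(acqf(known.unsqueeze(1)))

# Ask the optimiser for the best candidate under the model.
candidate, value = optimize_acqf(acqf, bounds=bounds, q=1, num_restarts=10, raw_samples=256)
print(candidate, value)
```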
-
Hmm, my suspicion is that your model fits are wrong. You're doing the following:
so you're standardizing the observations, but you are not scaling the observed variances accordingly (you are leaving those on the original scale). This could result in the model being completely wrong. I would fix that and then take a look at the model fits to understand what's going on. See e.g. this tutorial for how to do cross validation: https://github.com/pytorch/botorch/blob/main/tutorials/batch_mode_cross_validation.ipynb
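As a hedged illustration of this point, assuming a fixed-noise model such as `FixedNoiseGP` and manual standardisation (variable names and shapes are placeholders): if the outcomes are divided by their standard deviation, the observed noise variances must be divided by the same standard deviation squared, or you can let the `Standardize` outcome transform rescale both for you.

```python
import torch
from botorch.models import FixedNoiseGP
from botorch.models.transforms.outcome import Standardize

# Placeholder data with observed noise variances (values are assumptions).
train_X = torch.rand(20, 4, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
train_Yvar = 0.01 * torch.ones_like(train_Y)

# Manual standardisation: the variances must be divided by the same std**2,
# otherwise they stay on the original scale relative to the standardized Y.
Y_mean = train_Y.mean(dim=0)
Y_std = train_Y.std(dim=0)
train_Y_std = (train_Y - Y_mean) / Y_std
train_Yvar_std = train_Yvar / Y_std**2

model_manual = FixedNoiseGP(train_X, train_Y_std, train_Yvar_std)

# Alternatively, pass the raw data and let the Standardize outcome transform
# rescale both Y and Yvar internally.
model_transformed = FixedNoiseGP(
    train_X, train_Y, train_Yvar, outcome_transform=Standardize(m=1)
)
```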
-
I found the mistake I was making. I was standardising the train_X dataset but wasn't standardising the bounds. That's why I was getting a suggestion that didn't look reasonable.
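For anyone hitting the same issue, a hedged sketch of two consistent ways to handle the input scaling (the bounds and data below are placeholders, not the original setup): either optimise over the transformed bounds and map the candidate back to raw units, or attach a `Normalize` input transform to the model and keep the raw bounds everywhere.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.models.transforms.input import Normalize
from botorch.utils.transforms import normalize, unnormalize

# Placeholder raw data and raw search-space bounds (values are assumptions).
raw_bounds = torch.tensor([[0.0] * 4, [4.0] * 4], dtype=torch.double)
train_X = raw_bounds[0] + (raw_bounds[1] - raw_bounds[0]) * torch.rand(20, 4, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)

# Option A: transform the inputs yourself, then optimise over the *transformed* bounds
# and map the returned candidate back to raw units afterwards.
train_X_unit = normalize(train_X, raw_bounds)
unit_bounds = torch.stack([torch.zeros(4), torch.ones(4)]).to(train_X)
# fit the model on train_X_unit, call optimize_acqf(..., bounds=unit_bounds),
# then unnormalize(candidate, raw_bounds).

# Option B (often simpler): attach a Normalize input transform to the model and keep
# the raw bounds everywhere, so the model and optimize_acqf agree on units.
model = SingleTaskGP(train_X, train_Y, input_transform=Normalize(d=4, bounds=raw_bounds))
# call optimize_acqf(..., bounds=raw_bounds) with this model.
```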