Multi-objective for Rank-Weighted GP Ensemble (RGPE) Model - Problems with the forward() instantiation of the custom model (MultitaskMultivariateNormal) #2491
-
Hi everyone, I am trying to set up an RGPE model for multi-objective problems, but I am running into problems when setting up the forward method of the custom RGPE model. I took the RGPE BoTorch tutorial as an example, but I am still getting stuck when trying to set up the MultitaskMultivariateNormal. The weights are calculated in a different method and are not part of my problem here (right now I compute them via Pareto dominance comparison, but I might switch to an objective-wise comparison). This is my setup:
I am stuck on this error: Traceback (most recent call last): Somehow I cannot figure out how to "sum" the different lazy_covariance_matrix objects. I would also appreciate some intuition about the "unstandardization" that is done in the BoTorch tutorial:
What would be an efficient way of integrating it into my application, and do I even need it? Thanks in advance for any help and tips!
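For intuition on the unstandardization: in the RGPE tutorial it undoes each base model's Standardize outcome transform, so all base posteriors live on the raw outcome scale before being mixed. A minimal numpy sketch of the underlying affine map (function and variable names here are mine, not from the tutorial):

```python
import numpy as np

def unstandardize_posterior(mean, covar, y_mean, y_std):
    """Map a posterior from standardized outcome space back to the
    raw outcome scale via y = y_std * z + y_mean.
    mean:  (n,) posterior mean in standardized space
    covar: (n, n) posterior covariance in standardized space
    y_mean, y_std: scalars recorded by the standardization step
    """
    # the affine map shifts and scales the mean ...
    raw_mean = y_std * mean + y_mean
    # ... and scales the covariance by y_std**2 (the shift drops out)
    raw_covar = (y_std ** 2) * covar
    return raw_mean, raw_covar

# toy check: a standardized posterior mapped back to the raw scale
mean = np.array([0.0, 1.0])
covar = np.array([[1.0, 0.2], [0.2, 1.0]])
raw_mean, raw_covar = unstandardize_posterior(mean, covar, y_mean=5.0, y_std=2.0)
print(raw_mean)   # [5. 7.]
print(raw_covar)  # [[4.  0.8] [0.8 4. ]]
```

Whether you need the manual step depends on how the base models are built: if they carry a Standardize outcome transform, their posterior call already applies it.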
Replies: 5 comments
-
Follow-up to my post yesterday: I figured out a way that seems to work for the addition of the weighted lazy_covariance_matrix objects:
Now I am able to apply the acquisition function and optimize_acqf without running into errors. From the printout this seems to work:

```
covar_x 0 dense: tensor([[2.0174e-02, 7.5638e-07, 4.3004e-06, 0.0000e+00, 0.0000e+00, 0.0000e+00,
Weighted Covariance 0 (Dense):
covar_x 1 dense: tensor([[9.8850e-03, 3.7062e-07, 2.1072e-06, 0.0000e+00, 0.0000e+00, 0.0000e+00,
Weighted Covariance 1 (Dense):
Summed Covariance (Dense):
```

Any intuition on whether I am going in the right direction would be appreciated :) Also, I am still uncertain about the unstandardization; this topic is still unclear to me.
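For reference, the mixture posterior in the RGPE tutorial combines base-model means with weights w_i and base-model covariances with w_i**2 (since Var[wX] = w**2 Var[X] for independent models). A dense numpy sketch of that summation, as a sanity check on the idea (the tutorial does the same with lazy/linear operators; the helper name is mine):

```python
import numpy as np

def mix_posteriors(means, covars, weights):
    """Combine independent base-model posteriors into one ensemble posterior.
    means:   list of (n,) arrays, one per base model
    covars:  list of (n, n) arrays, one per base model
    weights: sequence of model weights, assumed to sum to 1
    """
    weights = np.asarray(weights, dtype=float)
    # weighted sum of the predictive means
    mixed_mean = sum(w * m for w, m in zip(weights, means))
    # independent models => covariances add with squared weights
    mixed_covar = sum(w ** 2 * K for w, K in zip(weights, covars))
    return mixed_mean, mixed_covar

means = [np.zeros(3), np.ones(3)]
covars = [np.eye(3) * 2.0, np.eye(3) * 4.0]
mixed_mean, mixed_covar = mix_posteriors(means, covars, [0.5, 0.5])
print(mixed_mean)            # [0.5 0.5 0.5]
print(np.diag(mixed_covar))  # [1.5 1.5 1.5]
```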
-
Hi @MoBurmeister! Since you are treating each outcome (the values in Regarding standardization and normalization, those look good to me here. You may want to pass As an aside, the RGPE tutorial looks pretty dated (it calls FixedNoiseGP, which is deprecated), so I'll put up a PR to update it.
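The structure being suggested here (one single-output ensemble per objective, collected in a list-model whose joint posterior is just the per-objective posteriors side by side) can be sketched without any BoTorch dependency. In real code the wrapper would be BoTorch's ModelListGP; the class and method names below are made up for illustration:

```python
class WeightedEnsemble:
    """One RGPE-style ensemble for a single objective (toy sketch)."""
    def __init__(self, predict_fns, weights):
        self.predict_fns = predict_fns  # callables: X -> list of means
        self.weights = weights

    def posterior_mean(self, X):
        # weighted sum of the base models' predictive means
        preds = [fn(X) for fn in self.predict_fns]
        return [sum(w * p[i] for w, p in zip(self.weights, preds))
                for i in range(len(X))]

class ModelList:
    """Holds one single-output model per objective."""
    def __init__(self, *models):
        self.models = models

    def posterior_mean(self, X):
        # one row of means per objective
        return [m.posterior_mean(X) for m in self.models]

# toy usage: two objectives, each with two base "models"
obj1 = WeightedEnsemble([lambda X: [x + 1 for x in X],
                         lambda X: [x + 3 for x in X]], [0.5, 0.5])
obj2 = WeightedEnsemble([lambda X: [2 * x for x in X],
                         lambda X: [4 * x for x in X]], [0.75, 0.25])
mlist = ModelList(obj1, obj2)
print(mlist.posterior_mean([0.0, 1.0]))  # [[2.0, 3.0], [0.0, 2.5]]
```

The design point is that each objective keeps its own independent weighting across source tasks, and the list-model only concatenates the outputs.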
-
Thanks a lot for your suggestion; the extra step of combining the RGPEs into a single ModelList is a good idea, and I got it running:
As mentioned in the comment, this code is still missing the part that collects the SingleTaskGPs for each objective and feeds them into the corresponding RGPE model. Nevertheless, the code here works for the scenario, and I will adjust the rest in my own repository. I changed the unstandardization so that it works with SingleTaskGP:
Hopefully this is doing the right job. Do you maybe have an intuition on the weight calculation? I am a bit uncertain about the best choice here. Also, there is no way of activating "prune_baseline" for this setup, right? Thanks a lot for all your help!
-
I would use the original approach. In the RGPE, we want to weight each model according to how well it is able to predict the target task. Doing this for each objective independently is quite intuitive, because the optimal weighting (across tasks) may vary across objectives.
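For intuition on "how well a model predicts the target task": the RGPE tutorial scores each base model with a ranking loss, counting pairwise orderings of the target observations that the model misranks, and gives lower-loss models more weight. The numpy sketch below is a simplification for intuition only: it uses point predictions and inverse-loss weights, whereas the tutorial samples from the posterior to compute the weights.

```python
import numpy as np

def ranking_loss(preds, targets):
    """Count misranked pairs: the model orders a pair (i, j)
    differently than the observed target values do."""
    preds, targets = np.asarray(preds), np.asarray(targets)
    loss = 0
    n = len(targets)
    for i in range(n):
        for j in range(i + 1, n):
            if (preds[i] < preds[j]) != (targets[i] < targets[j]):
                loss += 1
    return loss

def inverse_loss_weights(all_preds, targets):
    """Toy weighting: lower ranking loss -> higher weight."""
    losses = np.array([ranking_loss(p, targets) for p in all_preds],
                      dtype=float)
    scores = 1.0 / (1.0 + losses)  # avoid division by zero
    return scores / scores.sum()

targets = [1.0, 2.0, 3.0]
good_model = [1.1, 1.9, 3.2]  # preserves the ordering -> loss 0
bad_model = [3.0, 2.0, 1.0]   # reverses the ordering -> loss 3
w = inverse_loss_weights([good_model, bad_model], targets)
print(w)  # the good model gets 4x the weight of the bad one
```

Running this per objective, with that objective's targets, is exactly why the weights can differ across objectives.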
-
Prune baseline can still be activated here (by passing
-
model.posterior automatically applies the outcome transform, so this shouldn't be necessary, since Standardize is used on the SingleTaskGPs. I merged a PR yesterday to update the RGPE tutorial to do this.