Replies: 37 comments
-
This is an interesting problem. It's quite related to the multi-fidelity setting, where we take a measurement at some fidelity and then project to a "target fidelity". This is done in I imagine you can do something similar where you essentially return the posterior of the fantasy model evaluated at time I'm pretty swamped right now, but i can take a look later this week - hope the pointers above are helpful in the meantime. |
Beta Was this translation helpful? Give feedback.
-
Thanks for the pointers -- I will give it a shot. Whenever you get the time, an illustration with some toy code would be great, since I am very new to the Pytorch paradigm. |
Beta Was this translation helpful? Give feedback.
-
Hi @r-ashwin. You can achieve this by passing the See the following simple example where I pretend that
|
Beta Was this translation helpful? Give feedback.
-
@saitcakmak Thanks for the tip -- I was not aware of the |
Beta Was this translation helpful? Give feedback.
-
Let me see if I understand this correctly. You want KG to be evaluated using fantasies conditioned on some In that case, a simple wrapper around qKG (below) may work. You could probably achieve this by passing an
Ps. This wrapper approach is not fully compatible with the heuristic for generating the inner solutions. The heuristic would maximize |
Beta Was this translation helpful? Give feedback.
-
Yes, my fantasy model is conditioned at |
Beta Was this translation helpful? Give feedback.
-
Alternatively to the |
Beta Was this translation helpful? Give feedback.
-
@saitcakmak The issue with the approach suggested above is that it wouldn't fantasize from
Hope this helps. |
Beta Was this translation helpful? Give feedback.
-
@Balandat, it would work properly if used with I like your approach better since it is less hacky. One issue I see with it is that it loses the smart heuristic used to generate the |
Beta Was this translation helpful? Give feedback.
-
@saitcakmak So it looks like I cannot use Update:I get the same error when using the multifidelity qKG. Happy to share that code as well if necessary, but did not want to clutter the space. It looks like the common problem for both cases is that the
|
Beta Was this translation helpful? Give feedback.
-
It looks like I also noticed that you're using |
Beta Was this translation helpful? Give feedback.
-
|
Beta Was this translation helpful? Give feedback.
-
Actually, subclassing If I may suggest, forward compatibility between any |
Beta Was this translation helpful? Give feedback.
-
@saitcakmak Just to make sure I understood correctly, in your |
Beta Was this translation helpful? Give feedback.
-
I think you're running into a bug that I introduced here: botorch/botorch/optim/initializers.py Line 328 in 7e2a404 That line would raise an @Balandat, is it the case that
The
You cannot modify
To make sure inner optimization is done at T, you could pass In your case, it is much cleaner to use
You could then wrap Note: You cannot use |
Beta Was this translation helpful? Give feedback.
-
Oh, I see what is going on with this one.
This here queries
Yes, this should be equivalent. There are differences in how |
Beta Was this translation helpful? Give feedback.
-
Your explanation makes sense and it does remove the error when |
Beta Was this translation helpful? Give feedback.
-
I have a follow-up question, if you have any thoughts on this. Placing it here since it is related to the original question. Can I fantasize at multiple Is this related to the |
Beta Was this translation helpful? Give feedback.
-
I can interpret this in two ways: i) you want to jointly fantasize at
Based on this documentation, assuming
|
Beta Was this translation helpful? Give feedback.
-
@saitcakmak Awesome! Thanks! |
Beta Was this translation helpful? Give feedback.
-
@saitcakmak When I use |
Beta Was this translation helpful? Give feedback.
-
The expected behavior is that all calls to the
botorch/botorch/acquisition/knowledge_gradient.py Lines 405 to 407 in 7e2a404 Using fixed_features , when called from within gen_candidates_scipy , you will have X_eval[..., 1] = t , where X_eval is n x 1 x 2 . expand(X_eval) will then be n x 4 x 2 (following my_expand definition above) with expand(X_eval)[..., 0, 1] = t , expand(X_eval)[..., 1, 1] = t1 , expand(X_eval)[..., 1, 1] = t2 and expand(X_eval)[..., 1, 1] = t3 . Any fantasy model generated here will be jointly over these four solutions.
|
Beta Was this translation helpful? Give feedback.
-
@saitcakmak , two remarks:
For V&V purposes, how can I ensure the output of |
Beta Was this translation helpful? Give feedback.
-
Setting a larger optimization budget for
BoTorch doesn't implement this. You could easily write your own KG implementation. The class NestedKG(MCAcquisitionFunction):
def __init__(...):
# define your init here, you can mostly copy qKG
def forward(X: Tensor) -> Tensor:
qKnowledgeGradient.evaluate(self, X) # passing self here is crucial If the recommendations here do not solve the issue and you think there is a different bug in play, I'd be happy to look into it deeper if you share a reproducible example. |
Beta Was this translation helpful? Give feedback.
-
Okay I will prepare a reproducible example and drop it here. One thing that is worth clarifying before that is that in the In this regard, passing an
Thanks for all your responses so far -- they were very useful! |
Beta Was this translation helpful? Give feedback.
-
If you use You can install #594 via |
Beta Was this translation helpful? Give feedback.
-
I see - thanks! Let me try both and see how it goes. Update: I was able to check that your implementation in #594 does indeed what I want. However, I am not sure I am able to see that the |
Beta Was this translation helpful? Give feedback.
-
@Balandat @saitcakmak |
Beta Was this translation helpful? Give feedback.
-
@r-ashwin I think the discrepancy you observe between I ran additional testing with
That is correct. You'd have to modify it and re-evaluate the inner solutions with botorch/botorch/acquisition/knowledge_gradient.py Lines 244 to 252 in d3d4497 Would be replaced by something like
I haven't tested this, but it should give the general idea. I've used similar implementations in the past, but they tend to be significantly slower than the one-shot approach. |
Beta Was this translation helpful? Give feedback.
-
That's interesting because I am indeed subclassing PS: thanks also for the gradient tip. From what I have tested so far, the |
Beta Was this translation helpful? Give feedback.
-
Issue description
I want to modify KG for time-dependent problems as follows. Given
x in X
(some compact space) and0 <= t <= T
, I have a GP model with priorGP(mu, k_xt)
, wherek_xt = k_x * k_t
withk_x
capturing covariance in 'x' space andk_t
in 't' space. At timet
I have dataD_t = {(x_i, t_i), y_i }, i=1,...,n
andt>t_n
. I want to define KG as followsa_KG(x, t) = E_x'[max_x' mu(x', T) | {(x, t), y_i}]
where y_i is sampled from
GP(mu(x, t), k_xt) | D_t)
. In other words, my 'fantasy model' is at current timet
however, my 'inner optimization' problem maximizes the posterior atT
predicted via the fantasy model. Also my acquisition functiona_KG
is defined att
Question: How should I modify the
qKnowledgeGradient
class to achieve this, so I can take advantage of the efficient one-shot implementation of qKG? I have provided code for the GP I am using if you want to work with that.Any help is greatly appreciated! Please let me know if you need more information. Thanks!
(apologies for trying to write equations in Markdown)
System Info
Please provide information about your setup, including
Beta Was this translation helpful? Give feedback.
All reactions