Hi.
I have a multi-objective optimisation problem for which I am using the Service API. I run an initial Models.SOBOL generation step, followed by Models.MOO for the optimisation proper.
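For context, my generation strategy is set up roughly like this (a minimal sketch; the parameter and objective names are illustrative placeholders, not my actual ones):

```python
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from ax.service.ax_client import AxClient
from ax.service.utils.instantiation import ObjectiveProperties

# Sobol for initial space-filling, then the multi-objective BO model.
gs = GenerationStrategy(
    steps=[
        GenerationStep(model=Models.SOBOL, num_trials=8),
        GenerationStep(model=Models.MOO, num_trials=-1),  # -1 = run indefinitely
    ]
)

ax_client = AxClient(generation_strategy=gs)
ax_client.create_experiment(
    name="moo_example",
    parameters=[
        {"name": "x1", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "x2", "type": "range", "bounds": [0.0, 1.0]},
    ],
    objectives={
        "obj_a": ObjectiveProperties(minimize=True),
        "obj_b": ObjectiveProperties(minimize=True),
    },
)
```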
The function I am minimising internally has 2 phases:
A "cheap" (say 5 minutes) initialisation phase.
An "expensive" (say 8 hours) evaluation phase.
After the initialisation phase, I often discover that the currently proposed parameters are actually infeasible. In these cases I don't even run the "expensive" phase; instead I just mark the trial as failed and ask the optimiser for new trial parameters.
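In Service API terms my loop looks roughly like this (a sketch; `run_cheap_init` and `run_expensive_eval` are hypothetical stand-ins for my actual two phases, and `ax_client` is the client set up above):

```python
# Hypothetical stand-ins for my actual two-phase evaluation:
def run_cheap_init(params) -> bool:
    """~5 minute initialisation; returns True iff the parameters are feasible."""
    ...

def run_expensive_eval(params) -> dict:
    """~8 hour evaluation; returns a dict of metric name -> value."""
    ...

params, trial_index = ax_client.get_next_trial()

if not run_cheap_init(params):
    # Infeasible after the cheap phase: skip the expensive phase,
    # report the trial as failed, and ask for new parameters.
    ax_client.log_trial_failure(trial_index=trial_index)
else:
    ax_client.complete_trial(
        trial_index=trial_index,
        raw_data=run_expensive_eval(params),
    )
```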
The issue is that the optimiser seems to sample from the "bad" region of the parameter space very often, presumably because it has high uncertainty in that area. It samples there so often, in fact, that the 5-minute initialisation phase starts looking not so cheap after all.
Unfortunately I don't have an easy way of framing my constraints in the parameter space, and definitely not with the current limitation of linear parameter constraints only.
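(For reference, this is the only shape of parameter constraint the Service API accepts today, with illustrative parameter names; my feasibility region can't be expressed this way:)

```python
# Parameter constraints must be linear inequalities over the parameters,
# passed as strings when creating the experiment.
parameter_constraints = ["x1 + x2 <= 1.5"]  # illustrative only
```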
To improve optimiser behaviour, I was thinking of adding an output metric and using it as an outcome constraint. But when I fail a trial I don't have all of the results, as the expensive step has not run yet.
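Concretely, I imagine declaring a metric that the cheap phase can always report and constraining it at experiment creation, something like this (a sketch; `infeasibility` is a hypothetical metric name, 1.0 meaning infeasible and 0.0 feasible):

```python
ax_client.create_experiment(
    name="moo_example",
    parameters=[
        {"name": "x1", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "x2", "type": "range", "bounds": [0.0, 1.0]},
    ],
    objectives={
        "obj_a": ObjectiveProperties(minimize=True),
        "obj_b": ObjectiveProperties(minimize=True),
    },
    # Outcome constraint on the metric reported by the cheap phase.
    outcome_constraints=["infeasibility <= 0.5"],
)
```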
What is the recommended approach for a scenario such as this, with Ax in 2023?
My brute-force approach would probably be the outcome constraint I described above, plus dummy (high) values for the objective metrics which I don't have yet, to discourage the optimiser from exploring that part of the search space. But this is quite hacky (particularly the somewhat arbitrary choice of "high" dummy values for the objective metrics), so I would be interested in hearing your opinion before going down this rabbit hole.
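That is, when the cheap phase flags infeasibility, complete the trial instead of failing it, along these lines (a sketch building on the hypothetical helpers and `infeasibility` metric above; the dummy values are placeholders I would have to pick by hand):

```python
DUMMY_HIGH_A = 1e3  # placeholder "bad" value for obj_a; would need tuning
DUMMY_HIGH_B = 1e3  # placeholder "bad" value for obj_b; would need tuning

params, trial_index = ax_client.get_next_trial()

if not run_cheap_init(params):
    # Complete (rather than fail) the trial, reporting the infeasibility
    # metric plus made-up "bad" objective values, so the model learns to
    # avoid this region. raw_data entries are (mean, SEM) pairs.
    ax_client.complete_trial(
        trial_index=trial_index,
        raw_data={
            "infeasibility": (1.0, 0.0),
            "obj_a": (DUMMY_HIGH_A, 0.0),
            "obj_b": (DUMMY_HIGH_B, 0.0),
        },
    )
else:
    results = run_expensive_eval(params)  # {"obj_a": ..., "obj_b": ...}
    results["infeasibility"] = (0.0, 0.0)
    ax_client.complete_trial(trial_index=trial_index, raw_data=results)
```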
For the record, I believe this is quite similar to #745, but I thought the details were sufficiently different to justify a new issue. Apologies in advance if I thought wrong!
Thank you,
Luca
Hi Luca, thank you for reaching out. "Failure-aware BO", where our models actively try to avoid suggesting trials that may fail, is actually an active area of research on the team, and we're hoping to ship some methods to handle this use case in the future (cc @Balandat, @j-wilson).
Until that happens, though, I believe the approach you describe, adding a new metric and outcome constraint to represent infeasibility, is likely the best way to go. In general we do not recommend giving Ax metric values that weren't actually observed, especially extreme values, since those tend to warp the predicted values of neighboring points; but adding dummy values that fall reasonably within the expected range might be all you need to avoid suggesting infeasible points.

For a bit of background, here is how outcome constraints work under the hood: we model both the objective and the constraint metrics, and the next candidate is then the point with the highest expected improvement multiplied by the probability that the point's modeled constraint metrics don't violate the constraints. Thus, if the model is fairly certain a candidate point will violate the outcome constraint (in your case, be infeasible), it will not suggest that point even if the expected improvement of the objective is good.
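To make that rule concrete, here is a toy numeric sketch under Gaussian posteriors (simplified to a single minimization objective and one constraint; this is not the actual acquisition code Ax uses, which handles multiple objectives and is more involved):

```python
import numpy as np
from scipy.stats import norm

def constrained_ei(mu_f, sigma_f, best_f, mu_c, sigma_c, c_bound):
    """Toy constrained EI for minimizing f subject to c <= c_bound.

    mu_f, sigma_f: posterior mean/std of the objective at a candidate point.
    best_f: best (lowest) feasible objective value observed so far.
    mu_c, sigma_c: posterior mean/std of the constraint metric at the candidate.
    """
    # Expected improvement over the incumbent for a minimization problem.
    z = (best_f - mu_f) / sigma_f
    ei = (best_f - mu_f) * norm.cdf(z) + sigma_f * norm.pdf(z)
    # Probability that the modeled constraint metric satisfies c <= c_bound.
    p_feasible = norm.cdf((c_bound - mu_c) / sigma_c)
    # Candidates are ranked by the product: good EI but likely-infeasible
    # points get heavily down-weighted.
    return ei * p_feasible

# A point with a good predicted objective but ~2% predicted feasibility
# scores far lower than a mediocre but clearly feasible one:
print(constrained_ei(mu_f=0.8, sigma_f=0.1, best_f=1.0, mu_c=1.2, sigma_c=0.1, c_bound=1.0))
print(constrained_ei(mu_f=0.95, sigma_f=0.1, best_f=1.0, mu_c=0.5, sigma_c=0.1, c_bound=1.0))
```

Running the two example calls shows the likely-infeasible candidate scoring roughly an order of magnitude lower despite its better predicted objective.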
I hope you find this advice helpful; what you're facing is a really interesting problem, and we hope to have some even better methods for addressing optimizations like this in Ax in the future. Let me know if you have any other questions, and Max and James, please chime in with your advice as well.