How to best handle infeasible search space regions? Not known a priori, can't be expressed as linear parameter constraints. #1392

Closed
optiluca opened this issue Jan 25, 2023 · 4 comments
Labels: question (Further information is requested)

@optiluca

Hi.

I have a multi-objective optimisation problem for which I am using the Service API. I run an initial Models.SOBOL generation step, followed by Models.MOO for the optimisation proper.
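For concreteness, the setup looks roughly like this (parameter names, metric names, and trial counts below are illustrative placeholders, not my actual problem):

```python
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from ax.service.ax_client import AxClient, ObjectiveProperties

# Sobol for initial space-filling, then model-based multi-objective
# optimisation for all remaining trials.
gs = GenerationStrategy(
    steps=[
        GenerationStep(model=Models.SOBOL, num_trials=16),
        GenerationStep(model=Models.MOO, num_trials=-1),
    ]
)

ax_client = AxClient(generation_strategy=gs)
ax_client.create_experiment(
    name="two_phase_problem",
    parameters=[
        {"name": "x1", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "x2", "type": "range", "bounds": [0.0, 1.0]},
    ],
    objectives={
        "cost": ObjectiveProperties(minimize=True),
        "quality": ObjectiveProperties(minimize=False),
    },
)
```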

The function I am minimising internally has 2 phases:

  • A "cheap" (say 5 minutes) initialisation phase.
  • An "expensive" (say 8 hours) evaluation phase.

After the initialisation phase, I often discover that the proposed parameters are actually infeasible. In these cases I don't even run the "expensive" phase; I just mark the trial as failed and ask the optimiser for new trial parameters.
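The evaluation loop then looks roughly like this, where `run_init_phase` and `run_expensive_phase` stand in for my own code:

```python
params, trial_index = ax_client.get_next_trial()

if not run_init_phase(params):  # "cheap" ~5 min phase; returns False if infeasible
    # Skip the expensive phase entirely and mark the trial as failed.
    ax_client.log_trial_failure(trial_index=trial_index)
else:
    results = run_expensive_phase(params)  # "expensive" ~8 h phase
    ax_client.complete_trial(trial_index=trial_index, raw_data=results)
```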

The issue is that the optimiser keeps sampling in the "bad" region of parameter space, presumably because it has high uncertainty there. It samples there so often, in fact, that the 5-minute initialisation phase starts looking not so cheap after all.

Unfortunately I don't have an easy way of expressing these constraints in the parameter space, and certainly not within the current limitation of linear parameter constraints only.

To improve the optimiser's behaviour, I was thinking of adding an output metric and using it as an outcome constraint. But when I fail a trial I don't have all the results (as the expensive step has not run yet).

What is the recommended approach for a scenario such as this, with Ax in 2023?

My brute-force approach would probably be the outcome constraint I described above, plus dummy (high) values for the objective metrics which I don't have yet, to discourage the optimiser from exploring that part of the search space. But this is quite hacky (particularly the somewhat empirical choice of "high" dummy values for the objective metrics), so I would be interested in hearing your opinion before going down this rabbit hole.
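Concretely, the brute-force version would look something like this; the `feasibility` metric name and the dummy values are placeholders I would have to tune:

```python
# Same experiment as above, plus an outcome constraint on a new
# "feasibility" metric (values > 0 mean the point is infeasible).
ax_client.create_experiment(
    name="two_phase_problem",
    parameters=[
        {"name": "x1", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "x2", "type": "range", "bounds": [0.0, 1.0]},
    ],
    objectives={
        "cost": ObjectiveProperties(minimize=True),
        "quality": ObjectiveProperties(minimize=False),
    },
    outcome_constraints=["feasibility <= 0.0"],
)

params, trial_index = ax_client.get_next_trial()
if not run_init_phase(params):
    # Instead of failing the trial, complete it with a violated
    # constraint value and dummy (pessimistic) objective values.
    ax_client.complete_trial(
        trial_index=trial_index,
        raw_data={"feasibility": 1.0, "cost": 1e6, "quality": 0.0},
    )
```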

For the record, I believe this is quite similar to #745, but I thought the details were sufficiently different to justify a new issue. Apologies in advance if I thought wrong!

Thank you,

Luca

@mpolson64
Contributor

Hi Luca, thank you for reaching out. "Failure-aware BO", where our models actively try to avoid suggesting trials that may fail, is actually an active area of research on the team, and we're hoping to ship some methods to handle this use case in the future (cc @Balandat @j-wilson).

Until that happens, though, I believe the approach you describe of adding a new metric and outcome constraint to represent infeasibility is likely the best way to go. In general we do not recommend giving Ax metric values that weren't actually observed, especially extreme values, since those tend to warp the predicted values of neighboring points; but adding dummy values that fall reasonably within the expected range might be all you need to avoid suggesting infeasible points. For a bit of background, here is how outcome constraints work under the hood: we model both the objective and the constraint metrics, and the next candidate is the point that maximizes expected improvement multiplied by the probability that the point's modeled constraint metrics satisfy the constraints. Thus, if the model is fairly certain a candidate point will violate the outcome constraint (in your case, be infeasible), it will not be suggested even if the expected improvement of the objective is good.
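If it helps to see the mechanics, here is a rough, self-contained sketch using BoTorch's analytic single-objective ConstrainedExpectedImprovement (our MOO models use a batched multi-objective analogue of this internally; the data below is purely illustrative):

```python
import torch
from botorch.acquisition.analytic import ConstrainedExpectedImprovement
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy data: output 0 is the objective, output 1 the constraint
# metric (feasible when <= 0). All values are purely illustrative.
train_X = torch.rand(8, 2, dtype=torch.double)
train_Y = torch.stack([train_X.sum(dim=-1), train_X[:, 0] - 0.5], dim=-1)
model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

cei = ConstrainedExpectedImprovement(
    model=model,
    best_f=1.0,                    # best feasible objective seen so far (illustrative)
    objective_index=0,             # which model output is the objective
    constraints={1: (None, 0.0)},  # output 1 must be <= 0.0
)

# Acquisition value = EI(x) * P(x satisfies the constraint), so a
# candidate the model is confident is infeasible scores near zero
# even if its expected improvement alone is large.
print(cei(torch.rand(5, 1, 2, dtype=torch.double)))
```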

I hope you find this advice helpful; what you're facing is a really interesting problem, and we hope to have even better methods for addressing optimizations like this in Ax in the future. Let me know if you have any other questions, and Max and James, please chime in with your advice as well.

mpolson64 added the question label on Jan 25, 2023
@optiluca
Author

Very thorough and clear, thank you!

For now I'll proceed with my "hacky" approach, then, unless @Balandat and @j-wilson jump in with a different proposal.

Thanks again,

Luca

@Balandat
Contributor

Also cc @saitcakmak, who is planning to work on this as well.

mpolson64 self-assigned this on Jan 25, 2023
@mpolson64
Contributor

Closing this Issue for now -- feel free to reopen or open a new Issue if you would like to discuss anything else.
