[Noisy NSGA2] Embedding method gives solutions with too few replications #8

Open
JusteRaimbault opened this issue Mar 19, 2019 · 3 comments

@JusteRaimbault
Contributor

I have come across this several times: when running a noisy NSGA2, some solutions have too few replications; most of the time, filtering the final population on the number of replications does the trick.
It sometimes becomes problematic, however (as we were discussing with @mathieuleclaire this morning), when very few solutions have more than, say, 10 replications, and these are relatively bad regarding the optimization objective.
This is due to the way we handle stochasticity: the problem is embedded into a higher dimension by adding an objective that is a decreasing function of the number of replications. Work is still in progress (here: https://github.com/openmole/mgo-benchmark) to test this method against other ways of handling stochasticity, such as a fixed number of replications, adaptive sampling, Kalman-based uncertainty computation, the RTEA algorithm, etc.
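
To make the embedding concrete, here is a minimal sketch in plain Scala (not the actual mgo implementation; the `Individual` type, the mean aggregation, and the `1/n` compromise objective are illustrative assumptions): the vector of aggregated objectives is extended with one extra objective that decreases with the number of replications, so the search trades solution quality against confidence in the estimate.

```scala
// Minimal sketch of the embedding idea (hypothetical types, not the mgo API):
// each individual carries the fitness samples gathered so far.
case class Individual(genome: Vector[Double], samples: Vector[Vector[Double]])

// Aggregate the replicated samples (here: mean per objective) and append an
// extra objective that decreases with the number of replications, so NSGA2
// sees "confidence in the solution" as one more dimension to minimize.
def embeddedObjectives(ind: Individual): Vector[Double] = {
  val n = ind.samples.size
  val dim = ind.samples.head.size
  val means = Vector.tabulate(dim)(o => ind.samples.map(_(o)).sum / n)
  means :+ 1.0 / n // decreasing in the number of replications
}
```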
So for the moment I would advise:

  • either going with the fixed replications method: wrap your stochastic model into a Replication task and use a deterministic NSGA2; this should be fine if the computation budget allows it,
  • or adding a parameter for a minimal number of replications per point (we only have a max for now); this would be a "patch", but it would ensure that no outlier with a single replication ends up in the final population (e.g. 10 replications remains cheap but ensures a bit more robustness). This is, however, against the philosophy of the embedding method (finding compromises between the optimization objective and confidence in the solution); see the sketch after this list.
I would be interested to hear your thoughts on that!
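
Concretely, the second option could look like the following minimal sketch (reusing the hypothetical `Individual` type from the sketch above; `minReplications` is an illustrative parameter name, not an existing mgo option):

```scala
// Sketch of the minimal-replications "patch" (hypothetical, not the mgo API):
// keep only candidates evaluated at least `minReplications` times.
def filterPopulation(
    population: Vector[Individual],
    minReplications: Int = 10
): Vector[Individual] =
  population.filter(_.samples.size >= minReplications)
```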
JusteRaimbault self-assigned this Mar 19, 2019
@mathieuleclaire

After a later discussion with @romainreuillon, it seems a bug was introduced recently with the last code refactoring. It is not normal behaviour to have so few replications for good candidates. We have to investigate this first before adding new features.

@JusteRaimbault
Contributor Author

OK, let's see if the bug can be fixed; it is true that even with the default reevaluation rate of 0.2, these solutions should be quickly removed.
However, I do not think this solves the big picture: with the way we handle it through a Pareto front, I think you can end up in situations where the noise is fat-tailed and such "one-replication" solutions end up on the Pareto front; once they are reevaluated they are dominated and removed, but they appear again later through mutation/crossover.
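
As a quick standalone illustration of the fat-tail argument (a self-contained sketch, not mgo code; the distribution and values are made up): with Cauchy-distributed noise around a mediocre true value, a single replication regularly looks better than the stable, well-replicated estimate of a genuinely good solution, so such lucky outliers keep reappearing on the front.

```scala
import scala.util.Random

// Standalone illustration of the fat-tail argument (not mgo code):
// a single draw from a Cauchy distribution around a mediocre true value
// can look better (lower, for minimization) than the well-estimated
// mean of a good solution.
object FatTailDemo extends App {
  val rng = new Random(42)

  // Sample a standard Cauchy shifted to `location` via the inverse CDF.
  def cauchy(location: Double): Double =
    location + math.tan(math.Pi * (rng.nextDouble() - 0.5))

  val goodSolutionMean = 1.0 // well-replicated estimate of a good solution
  val badSolutionTrue = 5.0  // true (worse) value of a one-replication point

  // Count how often a single lucky replication of the bad solution
  // dominates the good solution's stable estimate.
  val trials = 10000
  val lucky = (1 to trials).count(_ => cauchy(badSolutionTrue) < goodSolutionMean)
  println(f"lucky one-replication draws beating the good mean: ${100.0 * lucky / trials}%.1f%%")
}
```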

@romainreuillon
Contributor

I think I have found an explanation for this bug. It is not trivial to fix, though.
