Running batch trials is currently disabled in favor of early stopping, as the two strategies conflicted on Ax's side. Early stopping should be performed manually once the best point / best Pareto set can be returned from Ax.
Improve obtaining the best point / best Pareto set during optimization.
Improve setting termination criteria. For this it would be useful if Ax could return the best point; this is currently not implemented.
Adjust the number of cores to the number of batch trials once batch mode is enabled again.
Starting Point
This is a method of AxInterface (note that it takes self, so it is an instance method rather than a classmethod):
```python
import numpy as np

from ax.modelbridge.modelbridge_utils import (
    observed_pareto_frontier as observed_pareto,
    predicted_pareto_frontier as predicted_pareto,
)


def _get_best_results(self, modelbridge, optimization_config):
    raise NotImplementedError("Does not work so far.")
    n_obj = self.optimization_problem.n_objectives
    if n_obj > 1:
        pareto_optimal_observations = observed_pareto(
            modelbridge, optimization_config=optimization_config
        )
        pareto_optimal_predictions = predicted_pareto(
            modelbridge, optimization_config=optimization_config
        )
        # Here also work needs to be done.
    else:
        arm, obj = modelbridge.model_best_point()
        x = np.array(list(arm.parameters.values()))
        metrics = optimization_config.objective.metric_names
        assert len(metrics) == 1, "In single-objective problems, only 1 metric is allowed."
        f = obj[0][metrics[0]]
        return np.array([x], ndmin=2), np.array([f], ndmin=2)
```
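For the multi-objective branch marked "here also work needs to be done", one option is to flatten the frontier into parameter/objective arrays, mirroring what the single-objective branch returns. The sketch below is library-independent and hypothetical: it assumes each frontier point has already been unpacked into a pair of plain dicts (parameters, objectives), which is not the form Ax's frontier functions return directly.

```python
import numpy as np


def pareto_to_arrays(frontier_points, metric_names):
    """Stack Pareto-optimal points into (X, F) arrays.

    `frontier_points` is assumed to be a list of (parameters, objectives)
    pairs of plain dicts; Ax's actual frontier objects would need to be
    unpacked into this form first.
    """
    # One row per Pareto point, one column per parameter.
    X = np.array([list(params.values()) for params, _ in frontier_points], ndmin=2)
    # One row per Pareto point, one column per objective, in metric order.
    F = np.array(
        [[objs[m] for m in metric_names] for _, objs in frontier_points], ndmin=2
    )
    return X, F
```

This would make `_get_best_results` return the same `(X, F)` shape convention in both branches.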
However, Ax currently does not return the best point; the reason for this is unclear.
Here is another old snippet removed from the code. It may be another starting point for providing the current optimum to self._post_processing in the training loop.
```python
self._data = ax.Data.from_multiple_data([self._data, trial.fetch_data()])
new_value = trial.fetch_data().df["mean"].min()
print(
    f"Iteration: Best in iteration {new_value:.3f}, "
    f"Best so far: {self._data.df['mean'].min():.3f}"
)
```
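The logic in that snippet (per-iteration best plus running best, handed to a callback) can be isolated from Ax entirely. The following is a minimal sketch under that assumption; the `post_processing` callback name is hypothetical and stands in for `self._post_processing`.

```python
import math


class BestTracker:
    """Track the best (minimal) objective value seen so far and report
    it to a post-processing callback after every iteration."""

    def __init__(self, post_processing=None):
        self.best = math.inf
        self.post_processing = post_processing

    def update(self, iteration_values):
        # Best value observed within this iteration (e.g. one batch of trials).
        new_value = min(iteration_values)
        # Best value observed over the whole optimization so far.
        self.best = min(self.best, new_value)
        if self.post_processing is not None:
            self.post_processing(new_value, self.best)
        return self.best
```

Keeping this bookkeeping outside the Ax data model would let the training loop feed the current optimum to post-processing even while the Ax-side best-point retrieval is unresolved.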
To tackle this problem, I will first have a look at the multi-objective tutorial written for the Developer API (https://ax.dev/tutorials/saasbo_nehvi.html) and check the existing implementation of Ax to see whether we can use things closer to what the developers have in mind.
Global-level early stopping is definitely relevant here: https://ax.dev/tutorials/gss.html. Point 3 in the tutorial, "Write your own custom global strategy", seems highly relevant.
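A custom global stopping rule can be prototyped independently of Ax before wiring it into Ax's stopping-strategy interface. The sketch below is a generic improvement-based rule for minimization, not Ax's actual API; all parameter names and thresholds are illustrative assumptions.

```python
def should_stop(best_values, min_trials=10, window_size=5, improvement_bar=0.01):
    """Decide whether to stop a minimization run.

    `best_values[i]` is the best objective value observed up to trial i.
    Stops once the relative improvement of the running best over the last
    `window_size` trials falls below `improvement_bar`, but never before
    `min_trials` trials have completed.
    """
    if len(best_values) < max(min_trials, window_size + 1):
        return False  # not enough trials to judge convergence
    old_best = best_values[-window_size - 1]
    new_best = best_values[-1]
    # Relative improvement over the window (guard against division by zero).
    improvement = (old_best - new_best) / max(abs(old_best), 1e-12)
    return improvement < improvement_bar
```

Plugging a rule like this into Ax would then only require adapting it to whatever experiment/trial objects the strategy receives.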