Discussed in #1054
Originally posted by Therrm May 26, 2023
Hi there!
After running Flaml on RF only, I get the following best parameters:
best_hyperparams={"subsample": 1.0, "num_leaves": 256, "n_estimators": 300, "min_split_gain": 0.0, "min_child_samples": 30, "max_depth": -1, "learning_rate": 0.01, "colsample_bytree": 1}
But when I try to reproduce those predictions with the same parameters using sklearn's RandomForestClassifier, I get quite different results. For instance, I get only 3 to 4 distinct predictions, while those from Flaml were close to a random distribution.
What else does Flaml do that the plain RF doesn't? Is there some additional post-processing done by Flaml?
Note: I already pre-process my data by removing rows with missing values and normalizing the dataset (for both Flaml and RF).
Thanks
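One way to narrow down where such a discrepancy comes from is to compare the estimator FLAML actually fitted against a RandomForestClassifier rebuilt by hand. The sketch below is illustrative only: the dataset, time budget, and variable names are placeholders, and it relies on FLAML's automl.model.estimator and automl.best_config attributes.

from flaml import AutoML
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the original poster's dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = AutoML()
automl.fit(X_train, y_train, task="classification",
           estimator_list=["rf"], time_budget=60, metric="accuracy", seed=42)

# The estimator FLAML actually trained (a fitted sklearn-compatible model).
flaml_rf = automl.model.estimator
print("best_config reported by FLAML:", automl.best_config)

# To rebuild by hand, copy the resolved sklearn parameters from the fitted
# estimator rather than best_config, since FLAML may map some search-space
# names onto differently named sklearn arguments.
manual_rf = RandomForestClassifier(**flaml_rf.get_params()).fit(X_train, y_train)

# Any remaining mismatch here typically comes from FLAML's internal data
# preprocessing (which skip_transform disables) rather than the model itself.
agreement = (automl.predict(X_test) == manual_rf.predict(X_test)).mean()
print("agreement between FLAML and manual refit:", agreement)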
I have the same issue. I use an sklearn pipeline with flaml and then try to reproduce the result with a plain sklearn pipeline; the results are totally different. This happens not only for rf but also for k-nearest neighbors (which has no random seed effect).
from flaml import AutoML
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

standardizer = StandardScaler()  # StandardScaler assumed for the "standardizer" step
automl = AutoML()

automl_pipeline = Pipeline([
    ("standardizer", standardizer),
    ("automl", automl),
])
automl_settings = {
    "time_budget": 240,
    "estimator_list": ['kneighbor'],  # rf
    "eval_method": 'cv',
    "split_type": 'stratified',
    "n_splits": 5,
    "metric": 'accuracy',
    "task": 'classification',
    "log_file_name": "data.log",
    "seed": 42,
    "verbose": 5,
}
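For reference, FLAML's scikit-learn pipeline integration forwards the AutoML settings to the automl step via the automl__ prefix at fit time; a minimal sketch, where X_train and y_train stand in for the poster's data:

pipeline_settings = {f"automl__{key}": value for key, value in automl_settings.items()}
automl_pipeline.fit(X_train, y_train, **pipeline_settings)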
Hi @zuoxu3310, have you tried #1054 (comment)?
If that doesn't work, you can set skip_transform to True in the automl_settings and try again. The results should then be reproducible.
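Concretely, that means adding the flag to the settings dict above before fitting, so FLAML trains on the already-normalized data as-is instead of applying its own internal transformation; a one-line sketch:

automl_settings["skip_transform"] = True  # skip FLAML's internal data preprocessing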
Hi @zuoxu3310, I've recently added unit tests to FLAML which verify whether we can reproduce the loss reported by FLAML, given that we use the model FLAML provides and train and test on the same folds. The test can be found here.
At the time, I didn't find any errors with the reproducibility of the RandomForestClassifier, but I could have missed something.
One thing I'd recommend is looking over the evaluate_cv_folds_with_underlying_model function I wrote (used by the test linked above), which mimics the FLAML CV process (albeit with limitations).
One particularly important part is the shuffling of the data, i.e. train_index = rng.permutation(train_index). If you don't shuffle the data, you'll be training and testing on different folds from FLAML, so your results and predictions will differ (see the sketch below).
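As a rough sketch of that idea (not FLAML's exact internals), the loop below computes a stratified 5-fold CV score for a fixed estimator while shuffling the training indices of each fold; the estimator, dataset, and both random seeds are assumptions for illustration:

import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = KNeighborsClassifier(n_neighbors=5)  # stand-in for the estimator FLAML tuned
rng = np.random.RandomState(0)  # the RNG/seed FLAML uses internally may differ

scores = []
kf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)  # assumed split seed
for train_index, test_index in kf.split(X, y):
    # The important detail noted above: shuffle the training indices before
    # fitting, otherwise you train and test on different folds than FLAML did.
    train_index = rng.permutation(train_index)
    est = clone(model).fit(X[train_index], y[train_index])
    scores.append(accuracy_score(y[test_index], est.predict(X[test_index])))

print("mean CV accuracy:", np.mean(scores))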