Cannot reproduce Flaml predictions using SkLearn RF #1287

Open
zuoxu3310 opened this issue Mar 12, 2024 Discussed in #1054 · 2 comments

Comments

@zuoxu3310

Discussed in #1054

Originally posted by Therrm May 26, 2023
Hi there!

After running FLAML with RF as the only estimator, I get the following best parameters:

best_hyperparams={"subsample": 1.0, "num_leaves": 256, "n_estimators": 300, "min_split_gain": 0.0, "min_child_samples": 30, "max_depth": -1, "learning_rate": 0.01, "colsample_bytree": 1}

But when I try to reproduce those predictions with the same parameters using a scikit-learn RF, I get quite different results. For instance, I get only 3 to 4 distinct predicted values, while the predictions from FLAML were spread close to a random distribution.

What else does FLAML do that the plain RF doesn't? Is there some additional post-processing done by FLAML?

Note: I already pre-process my data by removing rows with empty values and normalizing the dataset (for both FLAML and RF).

Thanks

I have the same issue. I use a scikit-learn Pipeline with FLAML and then try to reproduce the result with a plain scikit-learn pipeline, but the results are totally different. This happens not only with RF but also with k-nearest neighbors, which has no random-seed effect. My setup is below (how the settings are forwarded to the pipeline is sketched after the snippet):
from flaml import AutoML
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

standardizer = StandardScaler()  # assumed; the original snippet left this undefined
automl = AutoML()

automl_pipeline = Pipeline([
    ("standardizer", standardizer),
    ("automl", automl),
])
automl_settings = {
    "time_budget": 240,
    "estimator_list": ["kneighbor"],  # also tried "rf"
    "eval_method": "cv",
    "split_type": "stratified",
    "n_splits": 5,
    "metric": "accuracy",
    "task": "classification",
    "log_file_name": "data.log",
    "seed": 42,
    "verbose": 5,
}
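
The snippet above doesn't show how the settings reach the pipeline; FLAML's documented scikit-learn pipeline example forwards them to the automl step with an automl__ prefix, roughly as follows (the synthetic dataset is a stand-in for the poster's data):

# Forwarding the settings to the "automl" pipeline step, following the
# pattern in FLAML's scikit-learn pipeline example. This continues from
# the snippet above; X and y are synthetic placeholders.
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, random_state=42)

pipeline_settings = {f"automl__{key}": value for key, value in automl_settings.items()}
automl_pipeline.fit(X, y, **pipeline_settings)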

@thinkall
Collaborator

thinkall commented Mar 14, 2024

Hi @zuoxu3310, have you tried #1054 (comment)?
If that doesn't work, you can set skip_transform to True in the automl_settings and try again; the results should then be reproducible.
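
A minimal sketch of that suggestion (the synthetic dataset, short time budget, and estimator choice here are illustrative, not from the original report):

# With FLAML's internal pre-processing disabled via skip_transform,
# predictions from the underlying scikit-learn estimator should match
# automl.predict on the same data.
from flaml import AutoML
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, random_state=42)

automl = AutoML()
automl.fit(
    X, y,
    task="classification",
    metric="accuracy",
    estimator_list=["rf"],
    time_budget=60,
    seed=42,
    skip_transform=True,  # disable FLAML's data pre-processing
)

sklearn_model = automl.model.estimator  # the underlying scikit-learn model
assert (sklearn_model.predict(X) == automl.predict(X)).all()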

@dannycg1996
Collaborator

Hi @zuoxu3310, I've recently added unit tests to FLAML which verify that we can reproduce the loss reported by FLAML, given that we use the model FLAML provides and train and test on the same folds. The test can be found here.

At the time, I didn't find any errors with the reproducibility of the RandomForestClassifier, but I could have missed something.
One thing I'd recommend is looking over the evaluate_cv_folds_with_underlying_model function I wrote (and which is used by the test linked above), which mimics the FLAML CV process (albeit with limitations).

One particularly important part is the shuffling of the data, i.e. train_index = rng.permutation(train_index). If you don't shuffle the data, you'll be training and testing on different folds from FLAML, so your results and predictions will differ.
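
To illustrate, here is a minimal sketch of that fold handling outside FLAML. The RandomState seed of 2020 mirrors what FLAML's internal CV loop uses at the time of writing, and the split seed and dataset are assumptions for illustration; check flaml/automl/ml.py for the current behaviour.

# Mimicking FLAML's CV fold handling: each fold's training indices are
# permuted before fitting. Skipping the permutation changes the row
# order (and, for some estimators, the fitted model) relative to FLAML.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=200, random_state=0)
model = RandomForestClassifier(random_state=0)

rng = np.random.RandomState(2020)  # assumed: FLAML's internal CV RNG seed
kf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)  # assumed split seed

for train_index, val_index in kf.split(X, y):
    train_index = rng.permutation(train_index)  # the shuffle discussed above
    model.fit(X[train_index], y[train_index])
    preds = model.predict(X[val_index])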
