I'm trying to benchmark the zero-shot `flaml.default.LGBMClassifier` and I have seen some unexpected results. I'm working on FLAML 2.1.1. With a 50/50 train/test split, the zero-shot classifier produces a test score of 0.3, which is chance level. Using the standard 75/25 split, I get an accuracy of 0.92, which is around the expected value. Using a random forest with scikit-learn defaults, I get 0.92 both for the 50/50 split and for the 75/25 split.
I assume there's an issue where a parameter configuration is chosen that doesn't allow growing a tree at all.
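A minimal sketch of the comparison described above. The issue's original example is not shown here, so scikit-learn's digits dataset is only a stand-in; results on the actual dataset may differ:

```python
from flaml.default import LGBMClassifier
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in dataset; the issue's original data is not shown.
X, y = load_digits(return_X_y=True)

for test_size in (0.5, 0.25):  # 50/50 split vs. the standard 75/25 split
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=test_size, random_state=0
    )

    # Zero-shot FLAML: hyperparameters are chosen from precomputed
    # default configurations instead of being tuned on this data.
    zs = LGBMClassifier()
    zs.fit(X_train, y_train)

    # Baseline: random forest with scikit-learn defaults.
    rf = RandomForestClassifier(random_state=0)
    rf.fit(X_train, y_train)

    print(
        f"test_size={test_size}: "
        f"zero-shot LGBM={zs.score(X_test, y_test):.2f}, "
        f"RF={rf.score(X_test, y_test):.2f}"
    )
```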
From what I observed, the small dataset's meta-features don't closely match any of the existing datapoints, so no good hyperparameter combination is matched for LGBM. I'll add this dataset as a datapoint to the LGBM default configs, so that KNN can match a good hyperparameter combination to this data. I'll raise a PR soon.
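For context, a rough conceptual sketch of the failure mode described above (this is not FLAML's actual implementation; all names and numbers are made up for illustration):

```python
# Conceptual illustration only -- not FLAML's internals. A zero-shot learner
# stores tuned configs for known datasets and serves the config of the
# nearest neighbor in meta-feature space. A dataset far from every stored
# datapoint still receives *some* config, which may fit it badly.
import numpy as np

# Hypothetical portfolio: (n_rows, n_features) meta-features -> tuned config.
portfolio = [
    ((100000.0, 20.0), {"n_estimators": 2000, "num_leaves": 255}),
    ((5000.0, 50.0), {"n_estimators": 500, "num_leaves": 31}),
]

def match_config(meta_features):
    """Pick the config of the nearest stored dataset (1-NN on meta-features)."""
    feats = np.asarray(meta_features, dtype=float)
    dists = [np.linalg.norm(np.asarray(k) - feats) for k, _ in portfolio]
    return portfolio[int(np.argmin(dists))][1]

# A small dataset is nowhere near either stored datapoint, yet it still
# inherits the config tuned for the 5000-row dataset -- potentially a poor fit.
print(match_config((120.0, 4.0)))
```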