search on hyper parameters #38
Comments
Hi, Simone. You're doing something somewhat strange and expecting the algorithm to do things it can't know about. Cross-validation in machine learning is easy when you have some figure of merit (ROC AUC, MSE, classification accuracy); in that case evaluation is quite straightforward. However, in the case of reweighting, correct validation requires 2 steps:
1) check that the 1-dimensional distributions of each feature agree between the reweighted original sample and the target (e.g. with a KS distance);
2) train a classifier to discriminate the reweighted original sample from the target and check that its ROC AUC stays close to 0.5.
(Also, is there any reason to optimize parameters automatically?)
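For concreteness, here is a minimal sketch of step 1), assuming pandas DataFrames `original` (MC) and `target` (sPlotted data) with the same columns; the reweighter settings are illustrative, and `ks_2samp_weighted` is the weighted KS helper from `hep_ml.metrics_utils`:

```python
import numpy as np
from hep_ml.reweight import GBReweighter
from hep_ml.metrics_utils import ks_2samp_weighted

# Fit the reweighter and get new weights for the original (MC) sample.
# These hyperparameter values are illustrative, not a recommendation.
reweighter = GBReweighter(n_estimators=40, learning_rate=0.2, max_depth=3,
                          min_samples_leaf=200)
reweighter.fit(original, target)
new_weights = reweighter.predict_weights(original)

# Step 1: compare each 1-dimensional distribution with a weighted KS distance.
for column in original.columns:
    ks = ks_2samp_weighted(original[column], target[column],
                           weights1=new_weights,
                           weights2=np.ones(len(target)))
    print(column, ks)
```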
Hi, OK, let me try to clarify the situation. I have played a bit with the hyper parameters and ended up using the following configuration
However, when I use different samples with much lower statistics, I am afraid the above settings are far from optimal, e.g. too many n_estimators, causing the reweighter to misbehave. In particular, after having created the reweighter I compute the ROC AUC on a number of variables of interest, which I could use as a FoM. Thanks
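A hedged sketch of that kind of per-variable figure of merit, assuming the same `original`/`target` DataFrames and the weights returned by the reweighter; the helper name `single_variable_auc` is made up for illustration:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def single_variable_auc(column, original, target, original_weights):
    """ROC AUC obtained when a single variable is used as the score to
    separate the reweighted original sample from the target;
    0.5 means the variable alone cannot tell the samples apart."""
    values = np.concatenate([original[column], target[column]])
    labels = np.concatenate([np.zeros(len(original)), np.ones(len(target))])
    weights = np.concatenate([original_weights, np.ones(len(target))])
    auc = roc_auc_score(labels, values, sample_weight=weights)
    return max(auc, 1.0 - auc)  # fold around 0.5 so larger always means worse
```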
Not really. 1-dimensional discrepancies are not all discrepancies: you can drive the 1-dimensional ROC AUCs to 0.5 with max_depth=1, but you won't cover any non-trivial difference between the distributions. (Well, you can use it as a starting point and then check the result using step 2, but no guarantees can be given for this approach.)
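A sketch of the step-2 check referred to here, assuming the same sample names as above; the classifier choice and the 50/50 split are arbitrary illustrative settings:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def discriminator_auc(original, target, new_weights, random_state=42):
    """ROC AUC of a classifier trained to separate the reweighted original
    sample from the target; values close to 0.5 mean good agreement."""
    X = np.concatenate([original.values, target.values])
    y = np.concatenate([np.zeros(len(original)), np.ones(len(target))])
    w = np.concatenate([new_weights, np.ones(len(target))])
    X_tr, X_te, y_tr, y_te, w_tr, w_te = train_test_split(
        X, y, w, test_size=0.5, random_state=random_state)
    clf = GradientBoostingClassifier(n_estimators=100, max_depth=3)
    clf.fit(X_tr, y_tr, sample_weight=w_tr)
    proba = clf.predict_proba(X_te)[:, 1]
    return roc_auc_score(y_te, proba, sample_weight=w_te)

print('discriminator ROC AUC:', discriminator_auc(original, target, new_weights))
```

An AUC that stays near 0.5 on the held-out half means the classifier cannot distinguish the reweighted sample from the target, which probes multi-dimensional differences that the per-variable checks miss.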
OK, so how do you suggest picking the hyper parameters?
If you really want to automate this process, you need to write an evaluation function that covers both steps 1) and 2) mentioned above, e.g. the sum over KS(feature_i) + abs(ROC AUC of the classifier - 0.5).

As for me: I pick a relatively small number of trees (30-50), select the leaf size and regularization according to the dataset, and play with the depth (2-4) and learning rate (0.1-0.3). I stop when I see that I have significantly reduced the discrepancy between the datasets. There are many other errors to be encountered in the analysis, and trying to minimize only one of them to zero isn't a wise strategy.
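If one did want to automate it, a rough sketch of such a combined score and a small manual scan over the ranges mentioned above, reusing the hypothetical `discriminator_auc` helper from the earlier sketch (all concrete values here are illustrative assumptions, not a recommended recipe):

```python
import itertools
import numpy as np
from hep_ml.reweight import GBReweighter
from hep_ml.metrics_utils import ks_2samp_weighted

def reweighting_score(original, target, new_weights, clf_auc):
    """Smaller is better: sum of weighted KS distances per feature
    plus the distance of the discriminating classifier's ROC AUC from 0.5."""
    ks_sum = sum(ks_2samp_weighted(original[col], target[col],
                                   weights1=new_weights,
                                   weights2=np.ones(len(target)))
                 for col in original.columns)
    return ks_sum + abs(clf_auc - 0.5)

# Small manual scan over the ranges mentioned above (depth 2-4, lr 0.1-0.3);
# leaf size and regularization are left at values one would tune per dataset.
for max_depth, learning_rate in itertools.product([2, 3, 4], [0.1, 0.2, 0.3]):
    reweighter = GBReweighter(n_estimators=40, max_depth=max_depth,
                              learning_rate=learning_rate,
                              min_samples_leaf=200)
    reweighter.fit(original, target)
    new_weights = reweighter.predict_weights(original)
    clf_auc = discriminator_auc(original, target, new_weights)  # helper above
    print(max_depth, learning_rate,
          reweighting_score(original, target, new_weights, clf_auc))
```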
Hi,
I am using this package to reweight MC to look like sPlotted data, and I would like to scan the hyper parameters to look for the best configuration
scikit-learn tools are available for this (e.g. GridSearchCV or RandomizedSearchCV), but I am having trouble interfacing the two packages
Has anyone done that? Are there alternative ways within hep_ml?
In particular, I have my pandas DataFrames for the original and target samples and I am trying something like
but I get the following error
However, I am not sure how to set the score method for GBReweighter
Any help/suggestions/examples would be much appreciated