Hi, I'm looking at the shape constraints tutorial, and the results for GBDT and DNN are listed in the tutorial as follows:
GBT Validation AUC: 0.7248634099960327
GBT Test AUC: 0.6980501413345337
DNN Validation AUC: 0.7518489956855774
DNN Testing AUC: 0.745200514793396
After the experiment results, the tutorial comments: "Note that even though the validation metric is better than the tree solution, the testing metric is much worse."
I don't understand where this comment comes from, since DNN outperforms GBT in both validation AUC and testing AUC.
Thanks for pointing out the issue. The text matched the results at the time the colab was written, but changes to the training of the GBT and DNN models have since shifted the numbers a bit. The primary reason for this instability is that we cannot train the models to convergence, because the tutorials are auto-generated from the colabs and need to run in a few minutes.
We are updating the colab to use less data and a more aggressive learning rate to get closer to convergence, re-optimizing the hyperparameters along the way. These changes will be reflected in the upcoming release.
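A minimal sketch of the kind of change described above: training the GBT baseline on a subsample of the data with a more aggressive learning rate so it gets closer to convergence within the colab's time budget. This uses scikit-learn as a stand-in rather than the tutorial's actual models, and the dataset and hyperparameter values here are illustrative assumptions, not the values used in the colab.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical dataset standing in for the tutorial's data.
X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Use less data so training fits in the tutorial's time budget.
X_small, _, y_small, _ = train_test_split(
    X_train, y_train, train_size=0.25, random_state=0)

# A more aggressive learning rate (0.3 vs. the library default of 0.1) trades
# some stability for faster convergence; a modest tree count keeps runtime short.
gbt = GradientBoostingClassifier(
    learning_rate=0.3, n_estimators=100, max_depth=3, random_state=0)
gbt.fit(X_small, y_small)

print("GBT Test AUC:", roc_auc_score(y_test, gbt.predict_proba(X_test)[:, 1]))
```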