"Optimal" gradient-tree-boosting
When agtboost handles
- Training on influence-adjusted derivatives
- Optimized L2-tuning from an adjusted information criterion (see the sketch after this list)
- Automatic stochastic variation over trees
- Smart internal feature engineering for categorical features

the resulting trained models should be extremely close to optimal gradient-tree-boosting ensembles.
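To make the L2-tuning bullet concrete, here is a minimal sketch of the general idea: choose the L2 leaf penalty by minimizing an information-criterion estimate of generalization loss on the training data alone, rather than by cross-validation. It assumes squared-error loss and a fixed leaf partition, and uses a Mallows'-Cp-style correction (training MSE plus an optimism term from effective degrees of freedom). This is a simplified stand-in, not agtboost's actual adjusted criterion, and it ignores the extra optimism from the adaptive split search that agtboost corrects for; all names (`shrunken_leaf_predictions`, `cp_criterion`, the assumed-known `sigma2`) are illustrative, not package API.

```python
import numpy as np

def shrunken_leaf_predictions(y, leaf_ids, lam):
    """Leaf weight = sum(y in leaf) / (n_leaf + lam): a Newton step with L2 penalty."""
    preds = np.empty_like(y, dtype=float)
    dof = 0.0  # effective degrees of freedom of this linear smoother
    for leaf in np.unique(leaf_ids):
        mask = leaf_ids == leaf
        n_leaf = mask.sum()
        preds[mask] = y[mask].sum() / (n_leaf + lam)
        dof += n_leaf / (n_leaf + lam)  # trace of the per-leaf hat matrix
    return preds, dof

def cp_criterion(y, leaf_ids, lam, sigma2):
    """Training MSE plus an optimism correction 2 * sigma^2 * dof / n (Mallows' Cp)."""
    preds, dof = shrunken_leaf_predictions(y, leaf_ids, lam)
    n = len(y)
    return np.mean((y - preds) ** 2) + 2.0 * sigma2 * dof / n

# Pick the penalty that minimizes the criterion, using the training data only.
rng = np.random.default_rng(0)
x = rng.uniform(size=200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=200)
leaf_ids = np.digitize(x, bins=np.linspace(0, 1, 9))  # a fixed 8-leaf "tree"
sigma2 = 0.3 ** 2                                      # noise variance assumed known here
lams = np.logspace(-2, 3, 30)
best_lam = min(lams, key=lambda lam: cp_criterion(y, leaf_ids, lam, sigma2))
print(f"selected L2 penalty: {best_lam:.3g}")
```

The design point this illustrates is the milestone's theme: every tuning decision that would normally require a validation set or cross-validation is replaced by a training-data-only generalization-loss estimate, which is what lets the boosting procedure run without manual hyperparameter search.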