This repository collects not only experience with parameter fine-tuning, but also other hands-on techniques such as model ensembling (boosting, bagging, and stacking) used in Kaggle and other competitions; a minimal ensembling sketch follows below.
Pull requests are welcome!
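For readers new to the three ensembling styles mentioned above, here is a minimal sketch using scikit-learn on a synthetic dataset. The dataset, base models, and hyperparameters are illustrative assumptions, not the setup used in any particular competition.

```python
# Sketch of bagging, boosting, and stacking with scikit-learn.
# All models and parameters here are placeholder choices for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

models = {
    # Bagging: average many high-variance learners trained on bootstrap samples.
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                                 random_state=0),
    # Boosting: fit learners sequentially, each one correcting its predecessor.
    "boosting": GradientBoostingClassifier(n_estimators=100, random_state=0),
    # Stacking: feed base-model predictions into a meta-learner (logistic regression).
    "stacking": StackingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
            ("gb", GradientBoostingClassifier(n_estimators=100, random_state=0)),
        ],
        final_estimator=LogisticRegression()),
}

for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```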
- My Neural Network isn't working! What should I do?
- williamFalcon/DeepRLHacks: Hacks for training RL systems from John Schulman's lecture at Deep RL Bootcamp (Aug 2017)
- A Brief Discussion of Code Efficiency Issues in Deep Learning
- An Overview of Model Ensembling Methods | Zhihu Column
- How to Finish in the Top 10% of Your First Kaggle Competition | Wille
- Introduction to Ensembling/Stacking in Python | Kaggle
- Ensemble Learning Explained in One Article (with learning resources) | 数据派THU
- [CVPR 2017] Squeeze-and-Excitation Networks (ILSVRC 2017 winner)
- Column | Momenta Explains SENet, the ImageNet 2017 Winning Architecture
- [FAIR 2017] Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour
- Kaggle Invasive Species Monitoring competition: 3rd place solution overview
- [LSMDC 2016] 1st place: Video Description by Combining Strong Representation and a Simple Nearest Neighbor Approach. Gil Levi, Dotan Kaufman, Lior Wolf (Tel Aviv University), Tal Hassner (University of Southern California) [slides]
- [LSMDC 2016] 1st place: Video Captioning and Retrieval Models with Semantic Attention. YoungJae Yu, Hyungjin Ko, Jongwook Choi, Gunhee Kim (Seoul National University) [slides]
- [LSMDC 2016] 2nd place: Video Description by Combining Strong Representation and a Simple Nearest Neighbor Approach. Gil Levi, Dotan Kaufman, Lior Wolf (Tel Aviv University), Tal Hassner (University of Southern California)