All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## 2.1.0 - 2020-11-12
### Added
- Hyperparameter search pipeline `kiwi search`, built on [Optuna](https://optuna.org/) (usage is sketched after this list)
- Docs for the search pipeline
- The `--example` flag, which prints an example config from `kiwi/assets/conf/` to the terminal for each pipeline
- Tests to increase coverage
- Readme link to the new OpenKiwiTasting demo
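As a quick illustration of the new search entry point and the `--example` flag, a minimal shell session might look like this. The config path is hypothetical, and we assume `kiwi search` takes a config file as its argument like the other pipelines:

```bash
# Print the bundled example config for a pipeline to the terminal
# (the example configs ship in kiwi/assets/conf/ inside the package):
kiwi search --example

# Run a hyperparameter search with your own config (hypothetical path):
kiwi search config/search.yaml
```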
### Changed
- Example configs in `conf/` so that they are clean, consistent, and have good defaults
- Moved the `feedforward` function from `kiwi.tensors` to `kiwi.modules.common.feedforward`, where it makes more sense (the import update is sketched after this list)
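For downstream code that imported `feedforward` from its old location, the move implies a one-line import update along these lines (a minimal sketch; call sites are unchanged):

```python
# Before (OpenKiwi <= 2.0.x):
# from kiwi.tensors import feedforward

# After (OpenKiwi >= 2.1.0):
from kiwi.modules.common.feedforward import feedforward
```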
### Fixed
- The broken relative links in the docs
- The evaluation pipeline, by adding the missing `quiet` and `verbose` options to the evaluate configuration (see the config sketch after this list)
- Migration of models from a previous OpenKiwi version, by removing the (never fully working) code in `kiwi.utils.migrations` entirely
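A minimal sketch of how the two restored options might appear in an evaluate config; only the option names come from this release, while their exact semantics and any surrounding keys are assumptions:

```yaml
# Illustrative evaluate configuration snippet
quiet: false   # assumed: when true, suppress most logging output
verbose: true  # assumed: when true, log additional detail
```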
### Removed
- Unused code in `kiwi.training.optimizers`, `kiwi.modules.common.scorer`, `kiwi.modules.common.layer_norm`, `kiwi.modules.sentence_level_output`, `kiwi.metrics.metrics`, `kiwi.modules.common.attention`, and `kiwi.modules.token_embeddings`
- All code that was already commented out
- The `systems.encoder.(predictor|bert|xlm|xlmroberta).encode_source` option, which was both confusing and never used
## 2.0.0

### Added
- XLMR, XLM, and BERT encoder models
- New pooling methods for the xlmr-encoder: `mixed`, `mean`, and `ll_mean`
- `freeze_for_number_of_steps`, which allows freezing the xlmr-encoder for a specific number of training steps
- `encoder_learning_rate`, which allows setting a learning rate for the encoder that differs from the one used for the rest of the system (both options appear in the config sketch after this list)
- Dataloaders now use a RandomBucketSampler, which groups sentences of the same size together to minimize padding
- fp16 support
- Support for HuggingFace's transformers models
- Pytorch-Lightning as a training framework
- This changelog
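A config sketch combining the new xlmr-encoder options above. Only the option names `freeze_for_number_of_steps` and `encoder_learning_rate` and the pooling values come from this release; the nesting, keys such as `model_name`, and all values are assumptions for illustration:

```yaml
system:
  model:
    encoder:
      model_name: xlm-roberta-base     # hypothetical HuggingFace model id
      pooling: mixed                   # one of: mixed, mean, ll_mean
      freeze_for_number_of_steps: 1000 # encoder stays frozen for the first 1000 steps
  optimizer:
    learning_rate: 1.0e-5
    encoder_learning_rate: 1.0e-6      # separate, smaller learning rate for the encoder
```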