PR #157 : Add CoDEx datasets and pretrained models (thanks @tsafavi)
PR #155 : Faster reading of triple files
d275419 , 87c5463 : Support parameter groups with group-specific optimizer args
PR #152 : Added training loss evaluation job
PR #147 : Support both minimization and maximization of metrics
PR #144 : Support automatic tuning of the subbatch size
PR #143 : Allow processing of large batches in smaller subbatches to save memory
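The subbatching idea above (PR #143) can be sketched as follows. This is a minimal illustrative sketch, not LibKGE's actual API: the function names and the toy loss are made up, and the point is only that summing loss and gradient over chunks reproduces the full-batch result while each chunk needs less memory at a time.

```python
import numpy as np

def batch_loss_grad(emb, batch):
    # Toy example: squared-norm "loss" over the embedding rows
    # indexed by the batch, with its gradient.
    loss = 0.0
    grad = np.zeros_like(emb)
    for i in batch:
        loss += 0.5 * float(np.dot(emb[i], emb[i]))
        grad[i] += emb[i]
    return loss, grad

def subbatched_loss_grad(emb, batch, subbatch_size):
    # Process the batch in smaller chunks and accumulate; because
    # loss and gradient are sums over examples, the result matches
    # the full-batch computation exactly.
    loss = 0.0
    grad = np.zeros_like(emb)
    for start in range(0, len(batch), subbatch_size):
        sub = batch[start:start + subbatch_size]
        sub_loss, sub_grad = batch_loss_grad(emb, sub)
        loss += sub_loss
        grad += sub_grad
    return loss, grad
```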
PR #140 : Calculate penalty for entities only once if the subject and object embedders are identical
PR #138 : Revision of hooks, fix of embedding normalization
PR #135 : Revised sampling API, faster negative sampling with shared samples
PR #112 : Initialize embeddings from a packaged model
PR #113 : Reduce memory consumption and loading times of large datasets
Various smaller improvements and bug fixes
PR #110 : Support for different tie-breaking methods in evaluation (thanks @Nzteb)
1d26e63 : Add head/tail evaluation per relation type
dfd0aac : Added squared error loss (thanks @Nzteb)
PR #104 : Fix incorrect relation type measurement (thanks @STayinloves)
PR #101 : Revise embedder penalty API (thanks @Nzteb)
PR #94 : Support for packaged models (thanks @AdrianKS)
Improved seeding of workers when a fixed NumPy seed is used
Various smaller improvements and bug fixes
Added more mappings from entity IDs to names for Freebase datasets (in entity_strings.del file)
Improved shared negative sampling (sampling without replacement (WOR), positive triples excluded from the negative sample)
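The shared-sample idea above can be sketched like this; the function name and signature are hypothetical, not LibKGE's implementation. A single set of negative entities is drawn without replacement and reused across all triples of the batch, with the batch's positive entities excluded:

```python
import numpy as np

def shared_negative_sample(num_entities, positive_ids, sample_size, rng):
    # One negative sample shared by every triple in the batch,
    # drawn without replacement (WOR) from all entities that do
    # not appear as positives in the batch.
    candidates = np.setdiff1d(np.arange(num_entities), positive_ids)
    return rng.choice(candidates, size=sample_size, replace=False)
```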
PR #86 : Support (s,?,o) queries for KvsAll training (thanks @vonVogelstein)
cf64dd2 : Fast dataset/index loading via cached pickle files
4bc86b1 : Add support for chunking a batch when training with negative sampling
14dc926 : Add ability to dump configs in various ways
PR #64 : Initial support for frequency-based negative sampling (thanks @AdrianKS)
PR #77 : Simpler use of command-line interface (thanks @cthoyt)
76a0077 : Added RotatE
7235e99 : Added option to apply a constant offset before computing BCE loss
67de6c5 : Added CP
a5ee441 : Added SimplE
PR #71 : Faster and more memory-efficient training with negative sampling (thanks @AdrianKS)
Initial release