diff --git a/CHANGELOG.md b/CHANGELOG.md
index 44da6cae..26083bcf 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -7,6 +7,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+-
+
+## [0.1.0] - 2024-06-22
+
 ### Added
 
 - Basic project structure.
@@ -18,7 +22,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
   - `CAGrad` from [Conflict-Averse Gradient Descent for Multi-task
     Learning](https://arxiv.org/pdf/2110.14048.pdf).
   - `Constant` to aggregate with constant weights.
-  - `DualProjWrapper` adapted from [Gradient Episodic
+  - `DualProj` adapted from [Gradient Episodic
     Memory for Continual Learning](https://proceedings.neurips.cc/paper/2017/file/f87522788a2be2d171666752f97ddebb-Paper.pdf).
   - `GradDrop` from [Just Pick a Sign: Optimizing Deep Multitask Models with Gradient Sign
     Dropout](https://arxiv.org/pdf/2010.06808.pdf).
@@ -28,14 +32,13 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
   - `Mean` to average the rows of the matrix.
   - `MGDA` from [Multiple-gradient descent algorithm (MGDA) for multiobjective optimization](https://www.sciencedirect.com/science/article/pii/S1631073X12000738/pdf?md5=2622857e4abde98b6f7ddc8a13a337e1&pid=1-s2.0-S1631073X12000738-main.pdf>).
   - `NashMTL` from [Multi-Task Learning as a Bargaining Game](https://arxiv.org/pdf/2202.01017.pdf).
-  - `NormalizingWrapper` to normalize the weights obtained by a wrapped `Weigthing`.
   - `PCGrad` from [Gradient Surgery for Multi-Task Learning](https://arxiv.org/pdf/2001.06782.pdf).
   - `Random` from [Reasonable Effectiveness of Random Weighting: A Litmus Test for Multi-Task
     Learning](https://arxiv.org/pdf/2111.10603.pdf).
   - `Sum` to sum the rows of the matrix.
   - `TrimmedMean` from [Byzantine-Robust Distributed Learning: Towards Optimal Statistical
     Rates](https://proceedings.mlr.press/v80/yin18a/yin18a.pdf).
-  - `UPGradWrapper` from [Jacobian Descent for Multi-Objective Optimization](https://arxiv.org/search/?query=jacobian+descent+for+multi-objective+optimization&searchtype=all&source=header).
+  - `UPGrad` from [Jacobian Descent for Multi-Objective Optimization](https://arxiv.org/search/?query=jacobian+descent+for+multi-objective+optimization&searchtype=all&source=header).
 - `backward` function to perform a step of Jacobian descent.
 - Documentation of the public API and of some usage examples.
 - Tests:
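
For reference, a minimal sketch of how the renamed `UPGrad` aggregator and the `backward` function named above fit together. The import paths `torchjd.backward` / `torchjd.aggregation.UPGrad` and the positional `backward(losses, aggregator)` call are assumptions based on later torchjd documentation; the exact 0.1.0 signature should be checked against the documented public API.

```python
import torch
from torch.nn import Linear, MSELoss, ReLU, Sequential

from torchjd import backward
from torchjd.aggregation import UPGrad

# Toy two-task setup: one shared model, one loss per task.
model = Sequential(Linear(16, 32), ReLU(), Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = MSELoss()

inputs = torch.randn(8, 16)
targets = torch.randn(8, 2)
outputs = model(inputs)
losses = [loss_fn(outputs[:, 0], targets[:, 0]), loss_fn(outputs[:, 1], targets[:, 1])]

optimizer.zero_grad()
# One step of Jacobian descent: the Jacobian of the losses is aggregated by
# UPGrad into a single update direction, which populates the .grad fields.
# (Call signature is an assumption based on later torchjd documentation.)
backward(losses, UPGrad())
optimizer.step()
```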