All notable changes to this project will be documented in this file.
- We are working on including the calculation of bootstrap confidence intervals.
- Minor bug fix affecting the multi-threaded calculation.
- We included micro-average metrics. In addition to the previously reported metrics, precision, recall, and F-score are now also calculated as micro-averages, by averaging the confusion matrices over targets before computing the aggregated metrics (precision, recall, etc.); see the micro-average sketch after this list.
- We included the calculation of the average precision score (APS) in the plot notebook; see the APS example after this list.
- plot.ipynb, added the calculation of the average precision score (APS).
- evaluation.py, added the micro-average calculation and refactored some of the core functions.
- parser.py, minor fixes and improvements.
- We changed the way alternative identifiers in ontology files are handled. Alternative identifiers are now recognized in both the ground truth and prediction files and mapped to their "canonical" terms; see the mapping sketch after this list.
- CHANGELOG.md, this file!
- graph.py, changed the Graph class.
- parser.py, changed the ground truth and prediction parsers to replace alternative identifiers with canonical terms.
- plot.ipynb, cleaned up.
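
Below is a minimal sketch of the micro-averaging scheme described above. The function and the toy counts are hypothetical, not the actual evaluation.py API; the point is only that confusion-matrix counts are pooled over targets *before* the aggregated metrics are computed (pooling by sum or by mean gives the same ratios, since the common factor cancels).

```python
import numpy as np

def micro_average(tp, fp, fn):
    """Micro-averaged precision, recall and F-score.

    tp, fp, fn: 1-D arrays of per-target true-positive,
    false-positive and false-negative counts. The counts are
    pooled over targets before the metrics are computed, which
    is what distinguishes the micro-average from the
    macro-average (metric first, mean second).
    """
    TP, FP, FN = np.sum(tp), np.sum(fp), np.sum(fn)
    precision = TP / (TP + FP) if TP + FP else 0.0
    recall = TP / (TP + FN) if TP + FN else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return precision, recall, f_score

# Toy example: three targets with different confusion counts.
tp = np.array([5, 0, 3])
fp = np.array([1, 2, 0])
fn = np.array([0, 4, 1])
print(micro_average(tp, fp, fn))
```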
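For reference, the average precision score can be computed with scikit-learn, as in the toy example below; the data is made up and the actual notebook may compute it differently.

```python
from sklearn.metrics import average_precision_score

y_true = [0, 1, 1, 0, 1]              # binary ground truth
y_score = [0.5, 0.8, 0.2, 0.6, 0.9]   # predicted scores
aps = average_precision_score(y_true, y_score)
print(f"APS = {aps:.3f}")
```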
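And a sketch of the alternative-identifier mapping, assuming the standard OBO format in which each `[Term]` stanza may carry `alt_id` tags. The helper below is hypothetical, not the actual parser.py code: it collects the alternative identifiers into a dictionary so that any alternative identifier found in a ground truth or prediction file can be replaced with its canonical term.

```python
def build_alt_id_map(obo_path):
    """Map alternative identifiers to canonical term ids in an OBO file."""
    alt_to_canonical = {}
    term_id = None
    with open(obo_path) as f:
        for line in f:
            line = line.strip()
            if line == "[Term]":
                term_id = None
            elif line.startswith("id: "):
                term_id = line[len("id: "):]
            elif line.startswith("alt_id: ") and term_id:
                alt_to_canonical[line[len("alt_id: "):]] = term_id
    return alt_to_canonical

# Replace alternative ids with canonical terms, e.g. in predictions.
alt_map = build_alt_id_map("go-basic.obo")   # example ontology path
term = "GO:0000108"                          # hypothetical alternative id
canonical = alt_map.get(term, term)          # fall back to the id itself
```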
First release after the CAFA5 challenge closed. The version used in Kaggle is provided under the 'Kaggle' branch, while the 'main' branch includes a packaged version of the code. Although usability and performance have been improved, the calculation is exactly the same.