
Releases: marcpinet/neuralnetlib

neuralnetlib 4.3.3

10 Dec 07:57
2357dfd
  • fix(configs): layer state saving and loading
  • fix(configs): loss state saving and loading
  • fix(configs): optimizer state saving and loading
  • fix(configs): model state saving and loading (round-trip sketched after this list)
  • ci: bump version to 4.3.3
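
The four config fixes above restore state round-tripping when a model is written to disk and read back. Below is a minimal, self-contained sketch of the get_config / from_config pattern involved; the Dense class here is an illustrative stand-in, not neuralnetlib's actual API.

```python
# Self-contained sketch of the get_config / from_config round-trip that the
# fixes above restore for layers, losses, optimizers and models.
# NOTE: this Dense class is an illustrative stand-in, NOT neuralnetlib's API.
import json
import numpy as np

class Dense:
    def __init__(self, units, activation="relu"):
        self.units = units
        self.activation = activation
        self.weights = np.zeros((1, units))  # trained state that must persist

    def get_config(self):
        return {"units": self.units,
                "activation": self.activation,
                "weights": self.weights.tolist()}

    @classmethod
    def from_config(cls, cfg):
        layer = cls(cfg["units"], cfg["activation"])
        layer.weights = np.asarray(cfg["weights"])
        return layer

layer = Dense(4)
restored = Dense.from_config(json.loads(json.dumps(layer.get_config())))
assert np.array_equal(layer.weights, restored.weights)  # state survived
```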

neuralnetlib 4.3.2

10 Dec 00:35
78c2eb1
  • docs: update readme
  • fix(dropout/batchnorm): save & load
  • ci: bump version to 4.3.2

neuralnetlib 4.3.1

09 Dec 23:50
65b5ced
  • fix(batchnorm): init
  • ci: bump version to 4.3.1

neuralnetlib 4.3.0

09 Dec 23:28
1ae3158
  • fix(git): file caching
  • feat: add support for multi-label classification
  • fix(cgan): y_train when None
  • feat(cgan): add label smoothing
  • fix(cgan): predictions
  • docs(gans): update
  • docs(notebook): add svm
  • feat(utils): add make_blobs
  • feat(utils): add make_classification
  • feat(metrics): add adjusted_rand_score
  • feat(metrics): add adjusted_mutual_info_score
  • fix(metrics): MAE, MSE, MAPE (standard definitions sketched after this list)
  • refactor: use standalone functions everywhere
  • ci: bump version to 4.3.0
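
For reference, the three fixed metrics reduce to their standard definitions. A plain-numpy sketch, independent of neuralnetlib's own implementations; the eps guard in MAPE is an assumption about how zero targets are handled.

```python
# Standard definitions of the three fixed metrics (plain numpy,
# independent of neuralnetlib's own implementations).
import numpy as np

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def mape(y_true, y_pred, eps=1e-12):
    # eps guards against division by zero when y_true contains zeros
    return np.mean(np.abs((y_true - y_pred) /
                          np.maximum(np.abs(y_true), eps))) * 100

y_true = np.array([1.0, 2.0, 4.0])
y_pred = np.array([1.1, 1.9, 4.2])
print(mae(y_true, y_pred), mse(y_true, y_pred), mape(y_true, y_pred))
```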

neuralnetlib 4.2.0

09 Dec 10:15
2087f48
  • docs(notebook): fresh run²
  • docs(readme): update quick examples
  • docs(readme): update²
  • docs: add conv example for gan
  • feat(layer): add Conv2DTranspose (output-shape arithmetic sketched after this list)
  • docs: add conv example for gan using conv2dtranspose
  • feat(GAN): add Conditional GAN (CGAN)
  • ci: bump version to 4.2.0
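
Conv2DTranspose upsamples by inverting a convolution's shape arithmetic: for input size n, kernel k, stride s and padding p, the output size is (n - 1) * s - 2 * p + k. A quick check of the doubling configuration typical of a DCGAN-style generator:

```python
# Output-size arithmetic for a transposed convolution:
#   out = (in - 1) * stride - 2 * padding + kernel_size
def conv2d_transpose_out(size, kernel_size, stride=1, padding=0):
    return (size - 1) * stride - 2 * padding + kernel_size

# Upsampling a 7x7 feature map with k=4, s=2, p=1 doubles it to 14x14,
# the usual building block in a DCGAN-style generator.
assert conv2d_transpose_out(7, kernel_size=4, stride=2, padding=1) == 14
```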

neuralnetlib 4.1.0

06 Dec 21:22
3211694
  • fix(LSTM): huge improvements in gradient flow
  • feat(gradient_norm): better batch handling
  • fix(Callbacks): now fully working generically
  • fix(metrics): val_loss
  • fix(Embedding): bias in output
  • fix(display): val_*
  • feat(metric): add pearsonr
  • feat(metric): add kurtosis
  • feat(metric): add skew
  • fix(metric): skewness and kurtosis when variance=0 (guard sketched after this list)
  • fix(gan): batch logs
  • fix(activation): config loading
  • feat(lstm): improved flexibility and parameterization
  • docs(notebook): fresh run
  • ci: bump version to 4.1.0
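
On the variance=0 fix: skewness and kurtosis both divide by the standard deviation, so a constant input would otherwise yield 0/0 and propagate nan. A plain-numpy sketch of the guard; returning 0.0 for the degenerate case is an assumption, and neuralnetlib may choose a different sentinel.

```python
# Why the variance=0 guard matters: both moments divide by sigma,
# so a constant input would otherwise produce 0/0 -> nan.
import numpy as np

def skew(x):
    x = np.asarray(x, dtype=float)
    std = x.std()
    if std == 0:          # the fixed edge case: constant input
        return 0.0
    return np.mean(((x - x.mean()) / std) ** 3)

def kurtosis(x):          # excess kurtosis (normal distribution -> 0)
    x = np.asarray(x, dtype=float)
    std = x.std()
    if std == 0:
        return 0.0
    return np.mean(((x - x.mean()) / std) ** 4) - 3.0

print(skew([1, 1, 1]), kurtosis([1, 1, 1]))  # 0.0 0.0 instead of nan
```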

neuralnetlib 4.0.7

06 Dec 21:07
d4065bf
  • fix(LSTM): huge improvements in gradient flow
  • feat(gradient_norm): better batch handling
  • fix(Callbacks): now fully working generically
  • fix(metrics): val_loss
  • fix(Embedding): bias in output
  • fix(display): val_*
  • feat(metric): add pearsonr
  • feat(metric): add kurtosis
  • feat(metric): add skew
  • fix(metric): skewness and kurtosis when variance=0
  • fix(gan): batch logs
  • fix(activation): config loading
  • feat(lstm): improved flexibility and parameterization
  • docs(notebook): fresh run
  • ci: bump version to 4.0.7

neuralnetlib 4.0.6

06 Dec 08:30
5ac546b
  • refactor: autopep8 format
  • ci: bump version to 4.0.6

neuralnetlib 4.0.5

05 Dec 20:48
b4668c2
  • fix(cosine_sim): division by zero (guard sketched after this list)
  • refactor: remove the use of enums
  • docs: huge update and new examples
  • docs(readme): update hyperlinks
  • feat: add SVM
  • fix(Dense): bad variable init
  • fix(Tokenizer): bpe tokenizing
  • fix: use the exact Jacobian matrix instead of an approximation
  • fix(AddNorm): backward pass
  • ci: bump version to 4.0.5
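
The cosine similarity fix addresses the zero-norm case: the denominator ||a|| * ||b|| vanishes when either vector is all zeros. A plain-numpy sketch; clamping with eps is one plausible guard, not necessarily the one neuralnetlib uses.

```python
# The division-by-zero case: cosine similarity divides by ||a|| * ||b||,
# which is 0 whenever either vector is all zeros.
import numpy as np

def cosine_similarity(a, b, eps=1e-12):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / max(denom, eps))  # eps keeps a zero vector finite

print(cosine_similarity([1, 0], [1, 1]))  # ~0.7071
print(cosine_similarity([0, 0], [1, 1]))  # 0.0 instead of nan
```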

neuralnetlib 4.0.4

03 Dec 18:47
fee19e9
  • feat(optimizers): add AdaBelief
  • feat(optimizers): add RAdam
  • feat(loss): add focal loss (formula sketched after this list)
  • fix(optimizers-losses-activations): from_config
  • feat(ensemble): add XGBoost
  • feat(layer): allow Activation to initialize its ActivationFunction from a string
  • ci: bump version to 4.0.4
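
Focal loss (Lin et al., 2017) down-weights well-classified examples so training focuses on hard ones: FL(p_t) = -α_t (1 - p_t)^γ log(p_t). A self-contained binary sketch of the formula; the α=0.25, γ=2 defaults are the paper's, and neuralnetlib's actual signature may differ.

```python
# Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t)
import numpy as np

def focal_loss(y_true, y_pred, alpha=0.25, gamma=2.0, eps=1e-7):
    y_pred = np.clip(y_pred, eps, 1 - eps)             # avoid log(0)
    p_t = np.where(y_true == 1, y_pred, 1 - y_pred)    # prob. of the true class
    alpha_t = np.where(y_true == 1, alpha, 1 - alpha)  # per-class weighting
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))

y_true = np.array([1, 0, 1, 1])
y_pred = np.array([0.9, 0.1, 0.6, 0.3])
print(focal_loss(y_true, y_pred))  # the hard example (p_t = 0.3) dominates
```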