Releases: mrazomej/AutoEncoderToolkit.jl
v0.1.2 | JOSS publication
We’re thrilled to announce the release of AutoEncoderToolkit.jl version v0.1.2!
Highlights:
• Type stability: All custom structs are now type-stable, which can improve performance.
• Bug fixes: All bugs that caused tests to fail on GitHub Actions have been fixed.
• Updated Documentation: The documentation now includes community guidelines.
We especially thank the superb JOSS reviewers for their excellent suggestions and constructive feedback. The entire review process has been a remarkable experience that gives us hope that the peer-review system can be fixed, and JOSS has taken a significant step in the right direction.
v0.1.1 | All tests pass.
This version contains minor changes to facilitate testing with GitHub Actions. Previously, the tests only passed locally; after replacing all `Dict`s with `NamedTuple`s, they now pass on GitHub as well.
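As a rough illustration of why this helps (a hypothetical sketch, not the package's actual internals): a `Dict` holding values of different types widens to `Dict{Symbol, Any}`, so lookups are not type-stable, whereas a `NamedTuple` records each field's concrete type.

```julia
# Hypothetical example, not code from the package itself.
# A Dict with heterogeneous values collapses to Dict{Symbol, Any},
# so the compiler cannot infer the type of each lookup:
outputs_dict = Dict(:µ => randn(Float32, 2), :σ => 1.0f0)
typeof(outputs_dict)  # Dict{Symbol, Any}

# The equivalent NamedTuple preserves each field's concrete type:
outputs_nt = (µ = randn(Float32, 2), σ = 1.0f0)
typeof(outputs_nt)    # NamedTuple{(:µ, :σ), Tuple{Vector{Float32}, Float32}}
```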
v0.1.0 | CUDA compatibility fully tested
Changes
- Fully tests all CUDA-related functionality (a generic GPU-training sketch appears after this list). This includes:
  - Custom GPU kernels
  - Training functions for all available models
- Encoders and decoders are now type-stable. For example, `JointGaussianLogDecoder` went from
```julia
struct JointGaussianLogDecoder <: AbstractGaussianLogDecoder
    decoder::Flux.Chain
    µ::Flux.Dense
    logσ::Flux.Dense
end
```
to
```julia
struct JointGaussianLogDecoder{D,M,L} <: AbstractGaussianLogDecoder
    decoder::D
    µ::M
    logσ::L
end
```
However, because of the depth of these nested structs, `typeof` only returns `JointGaussianLogDecoder{...}`, hiding the concrete types behind the parameters.
- Custom layers now use `Flux.@layer` rather than `Flux.@functor` (see the sketch after this list).
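For reference, here is a minimal sketch of that macro migration on a hypothetical struct (not one of the package's own types); `Flux.@layer` registers the struct's fields as trainable and enables `gpu`/`cpu` movement, much as `Flux.@functor` did:

```julia
using Flux

# Hypothetical custom layer, for illustration only.
struct SimpleDecoder{D}
    net::D
end

# Old style, now deprecated:
# Flux.@functor SimpleDecoder

# New style:
Flux.@layer SimpleDecoder
```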
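And for the CUDA testing mentioned in the first bullet, the general pattern being exercised looks roughly like the following; this is a sketch using only standard Flux/CUDA calls, not AutoEncoderToolkit.jl's own training functions:

```julia
using Flux, CUDA

# Toy autoencoder moved to the GPU; illustration only.
model = Chain(Dense(10 => 4, relu), Dense(4 => 10)) |> gpu
x = CUDA.randn(Float32, 10, 32)  # a batch of 32 samples on the GPU

# One gradient step on a reconstruction loss:
opt_state = Flux.setup(Adam(1f-3), model)
grads = Flux.gradient(m -> Flux.mse(m(x), x), model)
Flux.update!(opt_state, model, grads[1])
```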
v0.0.3 | Fixing bugs when training RHVAEs on GPU
This release fixes some bugs in the CUDA extension so that RHVAEs can be trained on CUDA GPUs.
Full Changelog: v0.0.2...v0.0.3