Motivation

Data normalization can be done on the fly on the GPU for each batch. It is faster on the GPU than on the CPU, and it cleans up the dataset's init method.

Implementation

This could very nicely use https://lightning.ai/docs/pytorch/stable/common/lightning_module.html#on-after-batch-transfer to normalize the data once it is on the GPU. That way you can never forget about it: every batch on the GPU is normalized. The stats could be provided by a yaml_object handler that can be accessed in the model's init. A sketch is given below.
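A minimal sketch of how that hook could look, assuming batches are plain tensors and the normalization stats (mean/std) are passed to the module at init; `on_after_batch_transfer` is the real Lightning hook, all other names are illustrative:

```python
import torch
import pytorch_lightning as pl


class NormalizingModule(pl.LightningModule):
    """Illustrative module that normalizes batches after device transfer."""

    def __init__(self, data_mean, data_std):
        super().__init__()
        # Register the stats as buffers so they are moved to the GPU
        # together with the module.
        self.register_buffer("data_mean", torch.as_tensor(data_mean))
        self.register_buffer("data_std", torch.as_tensor(data_std))

    def on_after_batch_transfer(self, batch, dataloader_idx):
        # Lightning calls this hook right after the batch has been moved
        # to the target device, so the normalization runs on the GPU and
        # is applied to every batch without the dataset having to do it.
        return (batch - self.data_mean) / self.data_std
```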
sounds cool @sadamov, are you thinking this for v0.3.0 or a later release? :)
v0.3.0
This feature is ready in #39. I don't have a strong opinion about the version it should be published in. :)