This PR addresses an issue with VRAM usage after generating sample images.
Without this PR, when training SDXL with batch size > 1, I would run out of VRAM even with 24GB the first time sample images were generated during training (without `--xformers` or `--mem_eff_attn`).
Notably, the `sample_images_common` method in `train_util` calls `clean_memory_on_device` near the end of the function, but I hypothesize that some of the memory is still in use at that point, so calling it again after the function returns frees the remainder.

I'm not sure if anyone else has had this issue or whether it's specific to my configuration. Please feel free to close this PR if it's unnecessary. Thank you for your consideration.
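To illustrate the idea, here is a minimal sketch of the pattern (not the exact diff): call `clean_memory_on_device` one more time in the training script, right after the sampling helper has returned. The import path `library.device_utils`, the `accelerator` object, and the `sample_fn` wrapper are assumptions for illustration; the real call sites are the existing `sample_images` calls in the training scripts.

```python
from library.device_utils import clean_memory_on_device  # assumed location of the helper


def sample_and_free(accelerator, args, epoch, global_step, sample_fn):
    # sample_fn stands in for the existing sampling call (e.g. train_util.sample_images),
    # which already calls clean_memory_on_device internally before it returns.
    sample_fn(accelerator, args, epoch, global_step)

    # Hypothesis: some tensors created during sampling are only released once the
    # sampling function has exited, so a second cleanup here reclaims the remaining
    # VRAM (gc.collect() plus a CUDA cache flush on CUDA devices).
    clean_memory_on_device(accelerator.device)
```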