Misconceptions

Epoch counts do not matter, step counts do.

An epoch is an arbitrary measure of training progress that does not reflect how much a LoRA has actually been trained. 10 epochs could be run on a dataset of 3 images or of 300, meaning 30 steps versus 3000 steps for the LoRA respectively. Which of those two LoRAs is going to see more progress on the concept(s) being trained?
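
As a concrete illustration, here is a minimal sketch of how total step count follows from dataset size rather than from epoch count alone. It assumes a kohya-style trainer where steps per epoch = (images × repeats) / batch size; the `repeats` and `batch_size` parameters are illustrative defaults, not values prescribed by this wiki.

```python
import math

def total_steps(num_images: int, epochs: int, repeats: int = 1, batch_size: int = 1) -> int:
    """Approximate total optimizer steps for a kohya-style LoRA training run."""
    steps_per_epoch = math.ceil((num_images * repeats) / batch_size)
    return steps_per_epoch * epochs

# Same epoch count, wildly different amounts of training:
print(total_steps(num_images=3, epochs=10))    # 30 steps
print(total_steps(num_images=300, epochs=10))  # 3000 steps
```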

YouTube videos are doomed to be obsolete.

Once published, YouTube videos are static and cannot be edited. Because they cannot be updated to reflect current knowledge, they are unreliable sources of information on LoRA training for beginners and experts alike.


More is not better, better is better.

Do not set your steps too high.

At best, excess step counts are a waste of time, and the same result can be achieved with fewer steps (and perhaps an adjusted dataset or hyperparameters). At worst, excess step counts can leave a LoRA completely deep-fried.

Do not set your net dim too high.

Some YouTube tutorials suggest setting net dim as high as 128 or even 192, but we have found this to be a detriment more often than a benefit. A net dim between 8 and 64 is enough for most purposes.
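
To see why a high net dim inflates a LoRA, consider the parameter count of the low-rank factors. This is a minimal sketch assuming a standard linear-layer LoRA (W approximated by B·A); the 768×768 projection size is an illustrative example from SD1.x attention layers, not a value taken from this wiki.

```python
def lora_params(dim: int, in_features: int, out_features: int) -> int:
    # A standard LoRA adapts a weight matrix W (out x in) with two
    # low-rank factors: A (dim x in) and B (out x dim).
    return dim * in_features + out_features * dim

# Illustrative 768x768 projection, as found in the SD1.x text encoder:
for dim in (8, 32, 128):
    n = lora_params(dim, 768, 768)
    print(f"net dim {dim:>3}: {n:,} params per adapted layer")
```

For this layer, dim 128 stores 16x the weights of dim 8, which inflates file size and gives the LoRA far more capacity to memorize noise from the dataset.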

Do not have too many images in your dataset.

Having too many images creates a management/curation problem: the more images you have to work with, the harder it becomes to find and fix issues in your dataset. Nor do thousands of images guarantee a successful LoRA; the captions in the dataset, as well as your hyperparameters, must also be considered.