Currently, users must manually break up their dataset if it contains multiple samples. We can help them by running Solo per batch for them. The main drawback is that this will be slower for users who have multiple GPUs available.
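The per-batch workflow described above can be sketched as follows. This is a minimal illustration, not Solo's actual API: the `run_solo` callable and the batch-label column are hypothetical stand-ins for invoking Solo on one batch's cells.

```python
# Minimal sketch of splitting a dataset by a metadata column and
# running a doublet caller once per batch. run_solo is a hypothetical
# placeholder for running Solo on one batch's worth of cells.
from collections import defaultdict

def split_by_batch(cell_ids, batch_labels):
    """Group cell indices by their batch label."""
    groups = defaultdict(list)
    for idx, label in zip(cell_ids, batch_labels):
        groups[label].append(idx)
    return dict(groups)

def run_per_batch(cell_ids, batch_labels, run_solo):
    """Run run_solo on each batch independently and collect results."""
    return {label: run_solo(ids)
            for label, ids in split_by_batch(cell_ids, batch_labels).items()}
```

On a single GPU the batches simply run one after another, which is the serial behavior the issue is proposing to automate.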
I was wondering about the status of this feature. I've been using a small wrapper script that runs Solo on multiple batches, using the Ray library to distribute the work over multiple GPUs; it can also partition how much memory each job uses. My script is pretty clunky since it requires some manual input and is mainly a wrapper. It would be ideal if Solo could just take a metadata column and run with it.
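For readers curious what such a wrapper does, the scheduling side can be sketched without Ray itself: assign each batch to a GPU round-robin, then launch one worker per batch pinned to its GPU (Ray does this via `@ray.remote(num_gpus=1)`). Everything here is an assumption about the wrapper's shape, not its actual code; `launch_solo` is a hypothetical stand-in for spawning a Solo run.

```python
# Hedged sketch of distributing per-batch Solo runs across GPUs.
# assign_gpus maps batch names to GPU indices round-robin; a real
# wrapper (e.g. one built on Ray) would pin each worker to its GPU,
# typically via the CUDA_VISIBLE_DEVICES environment variable.

def assign_gpus(batch_names, n_gpus):
    """Round-robin mapping from batch name to GPU index."""
    return {name: i % n_gpus for i, name in enumerate(batch_names)}

def plan_launches(batch_names, n_gpus, launch_solo):
    """Invoke launch_solo(batch, gpu) for every batch, returning results."""
    mapping = assign_gpus(batch_names, n_gpus)
    return {batch: launch_solo(batch, gpu) for batch, gpu in mapping.items()}
```

With two GPUs and four batches, batches alternate between GPU 0 and GPU 1; a real runner would execute the two queues concurrently rather than collecting results serially as this sketch does.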
Hi @cnk113, I haven't started implementing it yet. That said, if you already have a script that distributes across multiple GPUs, I would stick with it, because I don't plan on adding code to Solo to spread the computation across multiple GPUs. The compute environments out there are just too diverse, and engineering code within Solo to handle them all would not be worth the effort.
This issue is just for simplifying things for users working on a single GPU.