Is there any way to scale PhiFlow across multiple GPUs?
Multi-GPU execution is not officially supported yet. Here is what you can do:
You can list all available GPUs using `backend.default_backend().list_devices('GPU')` and then set one of them as the default device using `backend.default_backend().set_default_device()`. All tensor initializers will then allocate on that GPU.
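A minimal sketch of this approach, assuming the backend module is imported from `phi.math` (the exact import path may vary between PhiFlow versions) and that `set_default_device()` accepts a device object from the list returned by `list_devices()`:

```python
from phi.math import backend  # import path may differ in your PhiFlow version

# Query all GPUs visible to the currently active backend
# (e.g. PyTorch, TensorFlow or JAX).
gpus = backend.default_backend().list_devices('GPU')
print(gpus)

# Make one of the listed GPUs the default device; tensor initializers
# will then allocate their memory on that device.
if gpus:
    backend.default_backend().set_default_device(gpus[0])
```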
Alternatively, you can use one of the native backend functions, such as JAX's `pmap`, to parallelize your function. This currently requires passing only native tensors to the function.
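For the `pmap` route, a generic JAX sketch (independent of PhiFlow, operating purely on native JAX arrays) might look like this:

```python
import jax
import jax.numpy as jnp

# A pure function operating on native JAX arrays (not PhiFlow tensors).
def step(x):
    return jnp.sin(x) * 2.0

n_devices = jax.local_device_count()

# Shard the leading axis across the available devices and run `step`
# on each shard in parallel.
x = jnp.arange(n_devices * 4, dtype=jnp.float32).reshape(n_devices, 4)
y = jax.pmap(step)(x)

print(y.shape)  # (n_devices, 4), one shard per device
```

Note that `pmap` maps over the leading axis of its inputs, so that axis must not exceed the number of participating devices.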
Multi-GPU support may be added in the future, but it is not a priority for us right now. Contributions are welcome!