Scaling PhiFlow across multiple GPUs #107

Open

joyjitkundu032 opened this issue Feb 10, 2023 · 1 comment

@joyjitkundu032
Is there any way to scale PhiFlow across multiple GPUs?

@holl- (Collaborator) commented Feb 10, 2023

Multi-GPU is not officially supported as of yet. Here is what you can do:

You can list all available GPUs using `backend.default_backend().list_devices('GPU')`. Then you can set one of them as the default device using `backend.default_backend().set_default_device()`. All tensor initializers will then allocate on that GPU.
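For illustration, here is a minimal sketch of this device-selection approach; the import path (`phi.math.backend`) and the argument accepted by `set_default_device()` are assumptions and may differ between PhiFlow versions.

```python
# Minimal sketch: pick one GPU as the default allocation device.
# Assumption: `default_backend()` is available from `phi.math.backend`.
from phi.math import backend

b = backend.default_backend()

gpus = b.list_devices('GPU')   # enumerate GPUs visible to the current backend
print(gpus)

if gpus:
    # Assumption: set_default_device() accepts a device returned by list_devices().
    # Tensors created afterwards are allocated on this device.
    b.set_default_device(gpus[0])
```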

You can use one of the native backend functions, such as JAX's `pmap`, to parallelize your function. This currently requires you to pass only native tensors to the function.
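As an illustration of this second option, the sketch below applies JAX's `pmap` directly to native arrays; the step function and array shapes are hypothetical, and, as noted above, only native (JAX) tensors are passed into the parallelized function.

```python
# Minimal sketch: replicate a step function across all local devices with jax.pmap.
import jax
import jax.numpy as jnp

def step(u):
    # Hypothetical update on a native JAX array (a simple smoothing stencil).
    return u + 0.1 * (jnp.roll(u, 1, axis=-1) + jnp.roll(u, -1, axis=-1) - 2 * u)

n_dev = jax.local_device_count()
u0 = jnp.zeros((n_dev, 64))       # leading axis = one slice per device

parallel_step = jax.pmap(step)    # compiles and replicates `step` across devices
u1 = parallel_step(u0)            # each device processes its slice in parallel
print(u1.shape)                   # (n_dev, 64)
```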

Multi-GPU support may be added in the future but it's not a priority for us right now. Contributions are welcome!
