Discussion on mapping between amrex, numpy.ndarray, and torch.tensor data types #9
I think this would be a good basis for more complex amrex types. Since torch and python don't have a standardized framework for expressing AMR, this is (in my opinion) the lowest common denominator. We should also keep in mind how we deal with boxes whose indices don't start at 0. @ax3l's box type already has what we need, I think. So we might need to implement a thin wrapper around numpy and torch that maps amrex-style indexing to python indices (a sketch follows below). Also tagging @sayerhs |
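A minimal sketch of what such an index-offset wrapper could look like, assuming plain numpy storage; the BoxView class and its constructor are hypothetical illustrations, not existing pyAMReX API:

```python
import numpy as np

class BoxView:
    """Hypothetical wrapper mapping AMReX-style box indices
    (whose small end need not be 0, e.g. lo = (-2, -2))
    onto 0-based numpy indices."""

    def __init__(self, data, lo):
        self.data = np.asarray(data)
        self.lo = tuple(lo)

    def _shift(self, idx):
        # translate an AMReX index tuple to a 0-based tuple
        return tuple(i - l for i, l in zip(idx, self.lo))

    def __getitem__(self, idx):
        return self.data[self._shift(idx)]

    def __setitem__(self, idx, value):
        self.data[self._shift(idx)] = value

ghosted = BoxView(np.zeros((8, 8)), lo=(-2, -2))
ghosted[-2, -2] = 1.0  # AMReX index (-2, -2) writes to data[0, 0]
```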
Thanks for starting a sticky thread so we can collect the approaches. Let me start with what I am using so far:
General arrays (incl. numpy):
Device memory:
Compatibility:
|
Thanks @ax3l, that list is a good starting point. I would vote for the python buffer protocol strategy as a first step; this seems to work well with PyCUDA also (sketched below). We could then also implement some of the alternatives, depending on how much demand there is from applications, what benefits each offers, and how much bandwidth we all have. I'll do some reading to see if there is a benefit that would entice me to change my vote. (thanks for the references) |
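For reference, the zero-copy behavior the buffer protocol gives the consumer side, sketched with a plain bytearray standing in for memory that the C++ bindings would expose:

```python
import numpy as np

# any object exposing the buffer protocol works; a bytearray is a
# stand-in for host memory that the amrex bindings would expose
buf = bytearray(8 * 12)
a = np.frombuffer(buf, dtype=np.float64).reshape(3, 4)  # zero-copy view
a[:] = 1.0
print(bytes(buf[:8]))  # the underlying bytes changed: no copy was made
```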
Agreed, I think after going through all the material again:
to start with. This will give us exposure to exactly the libraries and communities we want to interface with. |
Starting support for AMD GPUs (and Intel) in |
Next is either the |
CUDA bindings for MultiFabs, including cupy, numba and pytorch, coming in via #30 |
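As a consumer-side illustration of the kind of interoperability those bindings enable (this assumes a CUDA device plus cupy and pytorch builds that implement the CUDA Array Interface):

```python
import cupy as cp
import torch

t = torch.zeros((3, 4), device="cuda")
a = cp.asarray(t)      # cupy reads t.__cuda_array_interface__: zero-copy
a += 1.0
print(t[0, 0].item())  # 1.0: both views share the same device memory
```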
Did some more DLPack deep diving with @scothalverson. What we want to implement here is primarily the producer (see the sketch after this comment). Relatively easy-to-read implementations are:
More involved or less documented are:
The
This object is referred to in the capsule we produce. |
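To make the producer/consumer split concrete: a capsule built by any DLPack producer can be wrapped by a consumer without a copy. A small host-side sketch, assuming numpy >= 1.22 (for ndarray.__dlpack__) and a recent pytorch:

```python
import numpy as np
from torch.utils.dlpack import from_dlpack

x = np.arange(12.0).reshape(3, 4)
capsule = x.__dlpack__()  # producer side: numpy builds the capsule
                          # holding a DLManagedTensor
t = from_dlpack(capsule)  # consumer side: torch wraps the same memory
t[0, 0] = 42.0
print(x[0, 0])            # 42.0: zero-copy sharing
```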
Hey, this is not so much an issue as a place to solicit public feedback.
I think we should implement type conversion from the amrex FArrayBox (or, more precisely, the Array4) data type to numpy.ndarray and torch.tensor, as well as suitable python CUDA variants.
I also think that this type conversion should have a copying and a referencing variant (sketched below).
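A sketch of what the two variants look like from the numpy side, with a hypothetical stand-in class playing the role of the Array4 binding; the real binding would expose the FAB's pointer in the same way:

```python
import ctypes
import numpy as np

class FakeArray4:
    """Hypothetical stand-in for an amrex::Array4 binding that
    exposes its host pointer via __array_interface__."""
    def __init__(self):
        self._buf = (ctypes.c_double * 12)(*range(12))
        self.__array_interface__ = {
            "shape": (3, 4),
            "typestr": "<f8",  # little-endian float64
            "data": (ctypes.addressof(self._buf), False),  # writable
            "version": 3,
        }

fab = FakeArray4()
view = np.asarray(fab)           # referencing variant: shares memory
copy = np.array(fab, copy=True)  # copying variant: independent buffer
view[0, 0] = 42.0
print(fab._buf[0], copy[0, 0])   # 42.0 0.0: only the view writes through
```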
This shouldn't be hard to implement (NO! This won't support python 2... I have a life, you know), and I volunteer my time. But first I want to run this past all y'all to see if anyone is already working on it and what you think.
Tagging @ax3l @maxpkatz @drummerdoc