Dask-Cuda running out of memory with Cupy Sobel #1250
The data comes from a file as a NumPy array, then I convert it to a CuPy array with cp.asarray(file). The program works fine with smaller datasets, and with larger-than-memory data on the CPU, but not with larger-than-memory data on the GPU. You said "best practice is to generate the chunked array directly on the workers"; can you give an example of how to do that?
Use one of the creation mechanisms that don't materialise the data on the client; e.g., if the data are in an HDF5 file, open the HDF5 file and use … Can you post a minimal failing example here so we can see what a likely route might be?
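A minimal sketch of that route, with made-up file and dataset names ("data.h5", "seismic") and chunk sizes, since the original code was not quoted here: an h5py dataset is a lazy handle onto the on-disk data, so wrapping it in da.from_array builds the chunked array without ever materialising the whole thing on the client. The tiny array written at the top only stands in for the real larger-than-memory file.

```python
import os
import tempfile

import numpy as np
import h5py
import dask.array as da

# A small HDF5 file standing in for the real larger-than-memory data.
path = os.path.join(tempfile.mkdtemp(), "data.h5")
with h5py.File(path, "w") as f:
    f.create_dataset(
        "seismic", data=np.arange(4096, dtype=np.float32).reshape(16, 16, 16)
    )

# An h5py dataset is a lazy, sliceable handle to on-disk data; opening it
# reads nothing but metadata.
f = h5py.File(path, "r")
dset = f["seismic"]

# Wrap the on-disk dataset in a chunked dask array. Workers read only the
# chunks they need; the full array is never loaded eagerly on the client.
x = da.from_array(dset, chunks=(8, 8, 8))

# Everything stays lazy until .compute() is called.
total = x.sum().compute()
```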
Sure, here is the minimal example:
OK, in this case, I recommend using …
Greetings, sadly we are not allowed to use SEGYSAK in our org... I also tried the data in a .npy file, but I still have the same issue. I suspect that … Here is the new update to the code:
The full code now:
I am still a little confused, …
Sorry for the confusion, I thought so... but as I mentioned earlier, I kept getting the following error:
I found a solution online with the …
Greetings, I have the following problem. I know that Dask-CUDA is still experimental, and I tried looking at the docs to solve the issue; however, nothing seems to work.
https://stackoverflow.com/questions/77186393/dask-cuda-running-out-of-memory-with-cupy-sobel
Thanks in advance.
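Since the linked question is about running a Sobel filter over a larger-than-memory volume, the chunked-filter pattern itself is worth sketching: a plain per-chunk map produces seams at chunk edges, because the 3x3 stencil needs neighbouring elements from adjacent chunks. This is an illustrative CPU analogue, not the poster's code: it uses scipy.ndimage.sobel (which has a GPU counterpart in cupyx.scipy.ndimage), and the array size and chunking are made up.

```python
import numpy as np
import scipy.ndimage as ndi
import dask.array as da

# Small 2-D example standing in for the real volume.
img = np.random.default_rng(0).random((64, 64)).astype(np.float32)
x = da.from_array(img, chunks=(32, 32))

# map_overlap shares a 1-element halo between chunks, so the 3x3 Sobel
# stencil sees its neighbours across chunk boundaries; depth=1 matches
# the filter radius, and boundary="reflect" pads the outer edge.
edges = x.map_overlap(ndi.sobel, depth=1, boundary="reflect").compute()

# Away from the outer border, this matches filtering the whole array at once.
reference = ndi.sobel(img)
```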