
Wrong definition in CUDA operator file src/sptr/attention/attention_cuda_kernel.cu #5

Open
COMoER opened this issue Apr 19, 2023 · 0 comments


COMoER commented Apr 19, 2023

First, thanks for sharing your work! However, while reading the code, I think there may be a mistake in src/sptr/attention/attention_cuda_kernel.cu, as follows:

    // start from h83
    int n_h = blockDim.x;
    int h_idx = blockIdx.y * n_h + threadIdx.y;

According to the thread-block definition dim3 threads(hdim, n_h);, blockDim.x is hdim, while n_h is actually blockDim.y. This mistake does not cause a runtime error when n_h == h, because in that case the y dimension of the grid is 1, so blockIdx.y is always 0 and the wrong stride is never multiplied by a nonzero block index.
Perhaps there is some mistake in my analysis, but if you agree, please fix this.
Looking forward to your reply!
