[Triton] Triton-generated kernel cannot be loaded correctly through the L0 API #659
The kernel loaded without error on integrated graphics.
My configuration: the iGPU is from an i5-11300H (https://www.intel.com/content/www/us/en/products/sku/196656/intel-core-i511300h-processor-8m-cache-up-to-4-40-ghz-with-ipu/specifications.html).
The case failed on both ATSM and the iGPU on Alder Lake.
Here is my configuration:
@silee2, that means the L0 module has been loaded correctly and we can iterate over the kernels in the module. But on my platform, the L0 module is created without the kernel.
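For reference, a minimal sketch of the check described above: listing the kernels contained in an L0 module with `zeModuleGetKernelNames`. The handle name `module` is a placeholder for a module already built with `zeModuleCreate`; on the failing platforms this is where the expected kernel does not show up.

```cpp
#include <level_zero/ze_api.h>
#include <cstdio>
#include <vector>

// Print the kernel names contained in an already-built L0 module.
// `module` is assumed to have been created earlier with zeModuleCreate.
void print_kernel_names(ze_module_handle_t module) {
    // First call: query how many kernel names the module exposes.
    uint32_t count = 0;
    ze_result_t res = zeModuleGetKernelNames(module, &count, nullptr);
    if (res != ZE_RESULT_SUCCESS) {
        std::printf("zeModuleGetKernelNames failed: 0x%x\n", res);
        return;
    }
    // Second call: retrieve the names themselves.
    std::vector<const char*> names(count);
    res = zeModuleGetKernelNames(module, &count, names.data());
    if (res != ZE_RESULT_SUCCESS) {
        std::printf("zeModuleGetKernelNames failed: 0x%x\n", res);
        return;
    }
    // A count of 0 here means the module was created without any kernels.
    std::printf("module contains %u kernel(s):\n", count);
    for (uint32_t i = 0; i < count; ++i)
        std::printf("  %s\n", names[i]);
}
```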
One very large Triton kernel cannot be loaded correctly through the L0 API. We got the error code 0x78000011 from the L0 API zeKernelCreate. We double-checked that the kernel name used is the same as the one in the SPIR-V IR.
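As context for where the error comes from, here is a minimal, hedged sketch of the zeModuleCreate/zeKernelCreate call path. The context/device handles, the SPIR-V buffer, and the kernel name "triton_kernel" are placeholders, not the actual values from the reproducer; 0x78000011 corresponds to ZE_RESULT_ERROR_INVALID_KERNEL_NAME in current ze_api.h headers, i.e. the driver reports that no kernel with the requested name exists in the module.

```cpp
#include <level_zero/ze_api.h>
#include <cstdio>
#include <vector>

// Build an L0 module from a SPIR-V blob and create one kernel from it.
// ctx, dev, spirv, and the kernel name are placeholders for the values
// used by the real reproducer.
ze_kernel_handle_t create_kernel(ze_context_handle_t ctx,
                                 ze_device_handle_t dev,
                                 const std::vector<uint8_t>& spirv) {
    ze_module_desc_t mod_desc = {};
    mod_desc.stype = ZE_STRUCTURE_TYPE_MODULE_DESC;
    mod_desc.format = ZE_MODULE_FORMAT_IL_SPIRV;
    mod_desc.inputSize = spirv.size();
    mod_desc.pInputModule = spirv.data();

    ze_module_handle_t module = nullptr;
    ze_result_t res = zeModuleCreate(ctx, dev, &mod_desc, &module, nullptr);
    if (res != ZE_RESULT_SUCCESS) {
        std::printf("zeModuleCreate failed: 0x%x\n", res);
        return nullptr;
    }

    ze_kernel_desc_t krn_desc = {};
    krn_desc.stype = ZE_STRUCTURE_TYPE_KERNEL_DESC;
    krn_desc.pKernelName = "triton_kernel";  // placeholder kernel name

    ze_kernel_handle_t kernel = nullptr;
    res = zeKernelCreate(module, &krn_desc, &kernel);
    if (res != ZE_RESULT_SUCCESS) {
        // On the failing platforms this prints 0x78000011
        // (ZE_RESULT_ERROR_INVALID_KERNEL_NAME).
        std::printf("zeKernelCreate failed: 0x%x\n", res);
        return nullptr;
    }
    return kernel;
}
```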
A simple C++ unit test for reproducing this issue:
https://github.com/intel-innersource/frameworks.ai.pytorch.ipex-gpu/tree/chengjun/test_dpcpp

You can use the following commands to build and run the test under the root directory of the code:

    mkdir build
    cd ./build/
    cmake ../ -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=dpcpp
    make all
    ./test_void_kernel/triton_void_kernel
Result on the ATSM platform: