[Feature Request] Add amx detection in cpuinfo #224
Comments
Note that the following code, which checks for Linux kernel support, does not work in the Chromium sandbox:

```c
#if (defined(__i386__) || defined(__x86_64__)) && defined(__linux__)
#include <stdbool.h>
#include <sys/syscall.h>
#include <unistd.h>

#define ARCH_REQ_XCOMP_PERM 0x1023
#define XFEATURE_XTILEDATA 18

/* SetTileDataUse() - invoke arch_prctl(ARCH_REQ_XCOMP_PERM) to request
   permission to use AMX tile data state. */
static bool SetTileDataUse(void) {
  if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA)) {
    return false;
  }
  return true;
}
#endif
```

Is there another way to test for OS support?
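One sandbox-friendly alternative is to read XCR0 with XGETBV instead of making a syscall: XGETBV is an unprivileged instruction, so it works where `arch_prctl` is filtered by seccomp. One caveat: on Linux the kernel advertises the AMX tile bits in XCR0 even before per-process permission has been granted (access is gated separately via XFD), so this tells you the OS manages AMX state, not that this process may already use it. A hedged sketch, assuming GCC/Clang's `cpuid.h` (`os_supports_amx_state` is a hypothetical helper, not an existing cpuinfo API):

```c
#include <stdbool.h>
#include <stdint.h>

#if defined(__x86_64__) || defined(__i386__)
#include <cpuid.h>

#define XCR0_XTILECFG  (1ull << 17)
#define XCR0_XTILEDATA (1ull << 18)

/* Read XCR0 via XGETBV (encoded as raw bytes for older assemblers);
   only valid when CPUID.1:ECX.OSXSAVE is set. */
static uint64_t read_xcr0(void) {
  uint32_t lo, hi;
  __asm__ volatile(".byte 0x0f, 0x01, 0xd0" /* xgetbv */
                   : "=a"(lo), "=d"(hi)
                   : "c"(0));
  return ((uint64_t)hi << 32) | lo;
}

/* True if the OS has enabled AMX tile state in XCR0; no syscall needed. */
static bool os_supports_amx_state(void) {
  unsigned int eax, ebx, ecx, edx;
  /* XGETBV faults with #UD unless OSXSAVE (CPUID.1:ECX bit 27) is set. */
  if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx) || !(ecx & (1u << 27))) {
    return false;
  }
  uint64_t xcr0 = read_xcr0();
  return (xcr0 & (XCR0_XTILECFG | XCR0_XTILEDATA)) ==
         (XCR0_XTILECFG | XCR0_XTILEDATA);
}
#else
static bool os_supports_amx_state(void) { return false; }
#endif
```

On Linux this check would still need to be paired with the `ARCH_REQ_XCOMP_PERM` call before actually executing tile instructions.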
@malfet Does this repo have CI to test the PRs?
There are a couple more issues with AMX detection, but I'm not sure they are in scope for pytorch/cpuinfo:
- detect OS support for AMX on Windows, Linux, etc.
- I assume the reason AMX is disabled by default is that it has a high cost on thread switches, so it would be good to enable AMX only once we actually know we'll be using it.
- unclear if it is intentional, but the AMX intrinsics header is only enabled for 64-bit x86, not 32-bit.

I think these may be beyond the scope of cpuinfo and/or not entirely solvable, so this issue can be closed.
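The "enable AMX only once we know we'll use it" idea can be sketched with a one-time initializer. This is a hedged sketch for Linux/x86 only, not cpuinfo's actual API (the `try_enable_amx` name is hypothetical); it uses `pthread_once` so the `arch_prctl` permission request is issued exactly once no matter how many threads race to dispatch an AMX kernel:

```c
#include <stdbool.h>

#if defined(__linux__) && (defined(__x86_64__) || defined(__i386__))
#include <pthread.h>
#include <sys/syscall.h>
#include <unistd.h>

#define ARCH_REQ_XCOMP_PERM 0x1023
#define XFEATURE_XTILEDATA  18

static pthread_once_t amx_once = PTHREAD_ONCE_INIT;
static bool amx_enabled = false;

static void enable_amx_once(void) {
  /* Ask the kernel for permission to use AMX tile data state;
     the syscall returns 0 on success. */
  amx_enabled =
      (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA) == 0);
}

/* Safe to call from any thread, any number of times; the syscall
   itself runs exactly once. */
static bool try_enable_amx(void) {
  pthread_once(&amx_once, enable_amx_once);
  return amx_enabled;
}
#else
static bool try_enable_amx(void) { return false; }
#endif
```

A dispatcher would call `try_enable_amx()` right before selecting an AMX code path, falling back to AVX-512/AVX2 kernels when it returns false (old kernel, sandbox, or no hardware support).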
@fbarchard As you have just mentioned, enabling AMX will require a syscall; the detection and the enabling should be decoupled, as you said. Currently in PyTorch, AMX is ONLY used inside oneDNN, so you don't have to worry about the initialization. But we are trying to use AMX intrinsics in some particular CPU kernels; one good example will be the
@mingfeima I suggest we add AMX enabling to pytorch/cpuinfo. If an app is not using oneDNN, this will still be helpful to all those apps and their users; tying it just to oneDNN is not the right approach. Maybe I didn't understand your response.
Sure, that's just our original plan :) We will replace all the platform checks currently implemented through oneDNN with cpuinfo.
This proposal is to add AMX detection in cpuinfo. AMX refers to Intel® Advanced Matrix Extensions (Intel® AMX): https://www.intel.com/content/www/us/en/products/docs/accelerator-engines/advanced-matrix-extensions/overview.html

Something like:
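Since the code sample did not survive the page scrape, here is a hedged sketch of what CPUID-based AMX detection could look like, assuming GCC/Clang's `cpuid.h`; the `detect_amx`/`amx_features` names are hypothetical, not cpuinfo's actual API:

```c
#include <stdbool.h>

struct amx_features {
  bool tile; /* AMX-TILE: tile architecture and load/store */
  bool int8; /* AMX-INT8: TDPB*D int8 dot-product instructions */
  bool bf16; /* AMX-BF16: TDPBF16PS bfloat16 dot product */
};

#if defined(__x86_64__) || defined(__i386__)
#include <cpuid.h>

/* CPUID leaf 7, sub-leaf 0, EDX feature bits (Intel SDM vol. 2). */
#define CPUID7_EDX_AMX_BF16 (1u << 22)
#define CPUID7_EDX_AMX_TILE (1u << 24)
#define CPUID7_EDX_AMX_INT8 (1u << 25)

static struct amx_features detect_amx(void) {
  struct amx_features f = {false, false, false};
  unsigned int eax, ebx, ecx, edx;
  if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
    f.tile = (edx & CPUID7_EDX_AMX_TILE) != 0;
    f.int8 = (edx & CPUID7_EDX_AMX_INT8) != 0;
    f.bf16 = (edx & CPUID7_EDX_AMX_BF16) != 0;
  }
  return f;
}
#else
static struct amx_features detect_amx(void) {
  struct amx_features f = {false, false, false};
  return f;
}
#endif
```

Note this only reports CPU capability; whether the OS supports and permits AMX state (XCR0/XFD, and the Linux `arch_prctl` permission request) is a separate question, as discussed above.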
Once this is settled, we can also switch the check in convolution from torch/aten to cpuinfo. Right now it is checked inside oneDNN via: https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/mkldnn/Utils.h#L99