add support for avx512 and allow users to choose the Stim architecture #524
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Cool change! It looks like it was relatively straightforward to implement.
The main thing missing here is unit tests. Look for the definition of TEST_EACH_WORD_SIZE_W in the code and add the 512 case, so that the 512-bit simd word is unit tested, verifying the methods work correctly. It's possible you will run into issues where some tests presupposed that the max bit width was 256 and need to be tweaked.
Unfortunately, we can't yet update the Python extension to use AVX because of the mysterious Colab crash that it causes. I'm scared that enabling AVX will silently crash some unknown subset of users, with no way for me to notice it is happening, and I don't yet understand what is actually causing the crash.
(I'd also ask you to add benchmarks, but the work to templatize the simulators so you can ask for a 512 or 256 simd simulation is not yet done, so these benchmarks are essentially impossible to define for the moment.)
# '-DSTIM_PYBIND11_MODULE_NAME=_stim_avx2',
# ],
# )
stim_avx2 = Extension(
Unfortunately, due to the Colab bug, this still can't be uncommented.
Hi @Strilanc! I will have a look at the unit tests for the 512-bit word. Regarding the crash, I cannot reproduce it on Colab with stim version 1.10.0 (published October 25th, so before the fix on November 19th, 2022). On a new, fresh notebook, I enter the following:
which returns:
Do you still have the crash? Moise
Even if it doesn't reproduce on Colab anymore... what if it still happens on other people's machines? That's the terrifying part.
Yeah, agreed, but I was suggesting we take a quick look at what could be happening. What if this crash also occurs with the AVX512 implementation?
@MoiseRousseau Even if the crash occurs with avx512, we can still add the implementation and turn it on later.
I'm going to close this as stale. Still open to something like this being added.
Hi!
I started implementing the AVX512F instructions for code vectorization; AVX512F is more recent and is available on newer AMD Zen 4 and on some Intel processors. This can give up to 4% improved performance over the AVX2 version in my very limited testing.
Also, I added in the Python binding the possibility for users to specify the architecture they want to use via the STIM_ARCHITECTURE environment variable. The default behavior is still SSE2 because of #432.
Below is a basic benchmark (1% improvement in this case):
with the code:
TODO: correct the function `inplace_transpose_block_pass_64_and_128_and_256` at line 234 in [src/stim/mem/simd_word_512_avx512.h](https://github.com/quantumlib/Stim/compare/main...MoiseRousseau:Stim:Moise/arch_avx512?expand=1#diff-75e1a1cb88702d8bf8b9f964584cababa9c07f6868db23d08a5b1015b3410ae3). However, it seems this function is not called in the code.
Let me know what you think,
Moise