You've probably seen AlphaTensor discovering new matrix multiplication algorithms. Is there interest in porting these to glam?

Replies: 3 comments
- I hadn't seen that, very interesting. I would potentially be interested in seeing the scalar implementations; they may or may not translate well to SIMD. I only skimmed the paper, but I wonder if they took floating-point accuracy into account when coming up with the new algorithms.
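For reference, here is a minimal scalar sketch of what this family of algorithms looks like. This is Strassen's classical 2x2 scheme, the ancestor of the AlphaTensor results, not one of the newly discovered algorithms: it trades one multiplication for several extra additions and subtractions, which is exactly where the floating-point accuracy question above comes in.

```rust
/// Strassen's 2x2 multiply: 7 products instead of the naive 8, paid
/// for with extra additions/subtractions. AlphaTensor-style algorithms
/// have the same shape (fewer multiplies, more linear combinations),
/// so rounding error can accumulate differently than in the naive form.
fn strassen_2x2(a: [[f32; 2]; 2], b: [[f32; 2]; 2]) -> [[f32; 2]; 2] {
    let m1 = (a[0][0] + a[1][1]) * (b[0][0] + b[1][1]);
    let m2 = (a[1][0] + a[1][1]) * b[0][0];
    let m3 = a[0][0] * (b[0][1] - b[1][1]);
    let m4 = a[1][1] * (b[1][0] - b[0][0]);
    let m5 = (a[0][0] + a[0][1]) * b[1][1];
    let m6 = (a[1][0] - a[0][0]) * (b[0][0] + b[0][1]);
    let m7 = (a[0][1] - a[1][1]) * (b[1][0] + b[1][1]);
    [
        [m1 + m4 - m5 + m7, m3 + m5],
        [m2 + m4, m1 - m2 + m3 + m6],
    ]
}

fn main() {
    let a = [[1.0, 2.0], [3.0, 4.0]];
    let b = [[5.0, 6.0], [7.0, 8.0]];
    // Expected: [[19, 22], [43, 50]]
    println!("{:?}", strassen_2x2(a, b));
}
```

The AlphaTensor 4x4 results have the same structure, only with larger linear combinations, so any rounding-error reasoning about Strassen-style algorithms carries over in spirit.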
- Forgot to link https://github.com/drbh/simd-alphatensor-rs. Here is a mat4x4 * mat4x4 implementation.
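For anyone wanting to compare against glam itself, the natural baseline is glam's built-in `Mat4` multiply. The linked crate's own API is not reproduced here (see its README); this is just a sketch of the glam side of such a comparison.

```rust
// Baseline for comparison: glam's own SIMD-backed Mat4 multiply.
use glam::Mat4;

fn main() {
    // from_cols_array takes the 16 elements in column-major order.
    let a = Mat4::from_cols_array(&[
        1.0, 2.0, 3.0, 4.0,
        5.0, 6.0, 7.0, 8.0,
        9.0, 10.0, 11.0, 12.0,
        13.0, 14.0, 15.0, 16.0,
    ]);
    let b = Mat4::from_rotation_z(0.5);
    let c = a * b; // glam's standard Mat4 * Mat4 product
    println!("{c:?}");
}
```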
- Hmm, they're only benchmarking with
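The comment above is cut off in the source, but for context, a typical micro-benchmark for this kind of comparison, written with the criterion crate, might look like the sketch below (the benchmark name and rotation inputs are illustrative, not taken from the linked repo):

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use glam::Mat4;

fn bench_mat4_mul(c: &mut Criterion) {
    let a = Mat4::from_rotation_x(0.3);
    let b = Mat4::from_rotation_y(0.7);
    c.bench_function("glam mat4 * mat4", |bencher| {
        // black_box keeps the optimizer from constant-folding the
        // multiply away, which would make a 4x4 benchmark meaningless.
        bencher.iter(|| black_box(a) * black_box(b))
    });
}

criterion_group!(benches, bench_mat4_mul);
criterion_main!(benches);
```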