diff --git a/external/sse2neon/CONTRIBUTING.md b/external/sse2neon/CONTRIBUTING.md index 862ffa52..4767ed7a 100644 --- a/external/sse2neon/CONTRIBUTING.md +++ b/external/sse2neon/CONTRIBUTING.md @@ -6,16 +6,457 @@ The following is a set of guidelines for contributing to [SSE2NEON](https://gith hosted on GitHub. These are mostly guidelines, not rules. Use your best judgment, and feel free to propose changes to this document in a pull request. +## Issues + +This project uses GitHub Issues to track ongoing development, discuss project plans, and keep track of bugs. Be sure to search for existing issues before you create another one. + +Visit our [Issues page on GitHub](https://github.com/DLTcollab/sse2neon/issues) to search and submit. + ## Add New Intrinsic The new intrinsic conversion should be added in the `sse2neon.h` file, and it should be placed in the correct classification with the alphabetical order. -The classification can be referenced from [Intel Intrinsics Guide](https://software.intel.com/sites/landingpage/IntrinsicsGuide/#). +The classification can be referenced from [Intel Intrinsics Guide](https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html). Classification: `SSE`, `SSE2`, `SSE3`, `SSSE3`, `SSE4.1`, `SSE4.2` ## Coding Convention -Software requirement: `clang-format-11` +We welcome all contributions from corporate, academic and individual developers. However, there are a number of fundamental ground rules that you must adhere to in order to participate. These rules are outlined as follows: +* All code must adhere to the existing C coding style (see below). While we are somewhat flexible in basic style, you will adhere to what is currently in place. Uncommented, complicated algorithmic constructs will be rejected. +* All external pull requests must contain sufficient documentation in the pull request comments in order to be accepted. + +Software requirement: [clang-format](https://clang.llvm.org/docs/ClangFormat.html) version 12 or later. + +Use the command `$ clang-format -i *.[ch]` to enforce a consistent coding style. + +## Naming Conventions + +There are some general rules. +* Names with leading and trailing underscores are reserved for system purposes, and most systems use them for names that the user should not have to know. +* Function, typedef, and variable names, as well as struct, union, and enum tag names should be in lower case. +* Many function-like macros are in all CAPS. +* Avoid names that differ only in case, like `foo` and `Foo`. Similarly, avoid `foobar` and `foo_bar`. The potential for confusion is considerable. +* Similarly, avoid names that look like each other. On many terminals and printers, `l`, `1` and `I` look quite similar. A variable named `l` is particularly bad because it looks so much like the constant `1`. + +In general, global names (including enums) should have a common prefix (`SSE2NEON_` for macros and enum constants; `_sse2neon_` for functions) identifying the module that they belong with. Globals may alternatively be grouped in a global structure. Typedeffed names often have `_t` appended to their name. + +Avoid using names that might conflict with other names used in standard libraries. There may be more library code included in some systems than you need. Your program could also be extended in the future. + +## Coding Style for Modern C + +This coding style is a variation of the K&R style.
Some general principles: honor tradition, but accept progress; be consistent; +embrace the latest C standards; embrace modern compilers, their static analysis +capabilities and sanitizers. + +### Indentation + +Use 4 spaces rather than tabs. + +### Line length + +All lines should generally be within 80 characters. Wrap long lines. +There are some good reasons behind this: +* It forces the developer to write more succinct code; +* Humans are better at processing information in smaller portions; +* It helps users of vi/vim (and potentially other editors) who use vertical splits. + +### Comments + +Multi-line comments shall have the opening and closing characters +on a separate line, with the lines containing the content prefixed by a space +and the `*` characters for alignment, e.g., +```c +/* + * This is a multi-line comment. + */ + +/* One line comment. */ +``` + +Use multi-line comments for more elaborate descriptions or before more +significant logical blocks of code. + +Single-line comments shall be written in C89 style: +```c + return (uintptr_t) val; /* return a bitfield */ +``` + +Leave two spaces between the statement and the inline comment. + +### Spacing and brackets + +Use one space after the conditional or loop keyword, no spaces around +their brackets, and one space before the opening curly bracket. + +Functions (their declarations or calls), `sizeof` operator or similar +macros shall not have a space after their name/keyword or around the +brackets, e.g., +```c +unsigned total_len = offsetof(obj_t, items[n]); +unsigned obj_len = sizeof(obj_t); +``` + +Use brackets to avoid ambiguity and with operators such as `sizeof`, +but otherwise avoid redundant or excessive brackets. + +### Variable names and declarations + +- Use descriptive names for global variables and short names for locals. +Find the right balance between descriptive and succinct. + +- Use [snakecase](https://en.wikipedia.org/wiki/Snake_case). +Do not use "camelcase". + +- Do not use Hungarian notation or other unnecessary prefixing or suffixing. + +- Use the following spacing for pointers: +```c +const char *name; /* pointer to const data; '*' with the name and space before it */ +conf_t * const cfg; /* const pointer; spaces around 'const' */ +const uint8_t * const charmap; /* const pointer and const data */ +const void * restrict key; /* pointer to const data which does not alias */ +``` + +### Type definitions + +Declarations shall be on the same line, e.g., +```c +typedef void (*dir_iter_t)(void *, const char *, struct dirent *); +``` + +_Typedef_ structures rather than pointers. Note that structures can be kept +opaque if they are not dereferenced outside the translation unit where they +are defined. Pointers can be _typedefed_ only if there is a very compelling +reason. + +New types may be suffixed with `_t`.
Structure name, when used within the +translation unit, may be omitted, e.g.: + +```c +typedef struct { + unsigned if_index; + unsigned addr_len; + addr_t next_hop; +} route_info_t; +``` + +### Initialization + +Embrace C99 structure initialization where reasonable, e.g., +```c +static const crypto_ops_t openssl_ops = { + .create = openssl_crypto_create, + .destroy = openssl_crypto_destroy, + .encrypt = openssl_crypto_encrypt, + .decrypt = openssl_crypto_decrypt, + .hmac = openssl_crypto_hmac, +}; +``` + +Embrace C99 array initialization, especially for the state machines, e.g., +```c +static const uint8_t tcp_fsm[TCP_NSTATES][2][TCPFC_COUNT] = { + [TCPS_CLOSED] = { + [FLOW_FORW] = { + /* Handshake (1): initial SYN. */ + [TCPFC_SYN] = TCPS_SYN_SENT, + }, + }, + ... +} +``` + +### Control structures + +Try to make the control flow easy to follow. Avoid long convoluted logic +expressions; try to split them where possible (into inline functions, +separate if-statements, etc). + +The control structure keyword and the expression in the brackets should be +separated by a single space. The opening curly bracket shall be in the +same line, also separated by a single space. Example: + +```c + for (;;) { + obj = get_first(); + while ((obj = get_next(obj))) { + ... + } + if (done) + break; + } +``` + +Do not add inner spaces around the brackets. There should be one space after +the semicolon when `for` has expressions: +```c + for (unsigned i = 0; i < __arraycount(items); i++) { + ... + } +``` + +#### Avoid unnecessary nesting levels + +Avoid: +```c +int inspect(obj_t *obj) +{ + if (cond) { + ... + /* long code block */ + ... + return 0; + } + return -1; +} +``` + +Consider: +```c +int inspect(obj_t *obj) +{ + if (!cond) + return -1; + + ... + return 0; +} +``` + +However, do not make logic more convoluted. + +### `if` statements + +Curly brackets and spacing follow the K&R style: +```c + if (a == b) { + .. + } else if (a < b) { + ... + } else { + ... + } +``` + +Simple and succinct one-line if-statements may omit curly brackets: +```c + if (!valid) + return -1; +``` + +However, do prefer curly brackets with multi-line or more complex statements. +If one branch uses curly brackets, then all other branches shall use the +curly brackets too. + +Wrap long conditions to the if-statement indentation adding extra 4 spaces: +```c + if (some_long_expression && + another_expression) { + ... + } +``` + +#### Avoid redundant `else` + +Avoid: +```c + if (flag & F_FEATURE_X) { + ... + return 0; + } else { + return -1; + } +``` + +Consider: +```c + if (flag & F_FEATURE_X) { + ... + return 0; + } + return -1; +``` + +### `switch` statements + +Switch statements should have the `case` blocks at the same indentation +level, e.g.: +```c + switch (expr) { + case A: + ... + break; + case B: + /* fallthrough */ + case C: + ... + break; + } +``` + +If the case block does not break, then it is strongly recommended to add a +comment containing "fallthrough" to indicate it. Modern compilers can also +be configured to require such comment (see gcc `-Wimplicit-fallthrough`). + +### Function definitions + +The opening and closing curly brackets shall also be in the separate lines (K&R style). + +```c +ssize_t hex_write(FILE *stream, const void *buf, size_t len) +{ + ... +} +``` + +Do not use old style K&R style C definitions. + +### Object abstraction + +Objects are often "simulated" by the C programmers with a `struct` and +its "public API". 
To enforce the information hiding principle, it is a +good idea to define the structure in the source file (translation unit) +and provide only the _declaration_ in the header. For example, `obj.c`: + +```c +#include "obj.h" + +struct obj { + int value; +}; + +obj_t *obj_create(void) +{ + return calloc(1, sizeof(obj_t)); +} + +void obj_destroy(obj_t *obj) +{ + free(obj); +} +``` + +With an example `obj.h`: +```c +#ifndef _OBJ_H_ +#define _OBJ_H_ + +typedef struct obj obj_t; + +obj_t *obj_create(void); +void obj_destroy(obj_t *); + +#endif +``` + +Such structuring will prevent direct access to the `obj_t` members outside +the `obj.c` source file. The implementation (of such "class" or "module") +may be large and abstracted within separate source files. In such a case, +consider separating structures and "methods" into separate headers (think of +different visibility), for example `obj_impl.h` (private) and `obj.h` (public). + +Consider `crypto_impl.h`: +```c +#ifndef _CRYPTO_IMPL_H_ +#define _CRYPTO_IMPL_H_ + +#if !defined(__CRYPTO_PRIVATE) +#error "only to be used by the crypto modules" +#endif + +#include "crypto.h" + +struct crypto { + crypto_cipher_t cipher; + void *key; + size_t key_len; + ... +}; +... + +#endif +``` + +And `crypto.h` (public API): + +```c +#ifndef _CRYPTO_H_ +#define _CRYPTO_H_ + +typedef struct crypto crypto_t; + +crypto_t *crypto_create(crypto_cipher_t); +void crypto_destroy(crypto_t *); +... + +#endif +``` + +### Use reasonable types + +Use `unsigned` for general iterators; use `size_t` for general sizes; use +`ssize_t` to return a size which may include an error. Of course, consider +possible overflows. + +Avoid using `uint8_t` or `uint16_t` or other sub-word types for general +iterators and similar cases, unless programming for micro-controllers or +other constrained environments. + +C has rather peculiar _type promotion rules_ and unnecessary use of sub-word +types might contribute to a bug once in a while. + +### Embrace portability + +#### Byte-order + +Do not assume x86 or little-endian architecture. Use endian conversion +functions for operating on on-disk and on-the-wire structures or other +cases where it is appropriate. + +#### Types + +- Do not assume a particular 32-bit vs 64-bit architecture, e.g., do not +assume the size of `long` or `unsigned long`. Use `int64_t` or `uint64_t` +for the 8-byte integers. + +- Do not assume `char` is signed; for example, on Arm it is unsigned. + +- Use C99 macros for constant prefixes or formatting of the fixed-width +types. + +Use: +```c +#define SOME_CONSTANT (UINT64_C(1) << 48) +printf("val %" PRIu64 "\n", SOME_CONSTANT); +``` + +Do not use: +```c +#define SOME_CONSTANT (1ULL << 48) +printf("val %lld\n", SOME_CONSTANT); +``` + +#### Avoid unaligned access + +Do not assume unaligned access is safe. It is not safe on Arm, POWER, +and various other architectures. Moreover, even on x86 unaligned access +is slower. + +#### Avoid extreme portability + +Unless programming for micro-controllers or exotic CPU architectures, +focus on the common denominator of the modern CPU architectures, avoiding +the very maximum portability which can make the code unnecessarily cumbersome. + +Some examples: +- It is fair to assume `sizeof(int) == 4` since it is the case on all modern +mainstream architectures. PDP-11 era is long gone. +- Using `1U` instead of `UINT32_C(1)` or `(uint32_t) 1` is also fine. +- It is fair to assume that `NULL` matches `(uintptr_t) 0` and it is fair +to `memset()` structures with zero.
Non-zero `NULL` is for retro computing. -Use the command `$ make indent` to enforce a consistent coding style. +## References +- [Linux kernel coding style](https://www.kernel.org/doc/html/latest/process/coding-style.html) +- 1999, Brian W. Kernighan and Rob Pike, The Practice of Programming, Addison–Wesley. +- 1993, Bill Shannon, [C Style and Coding Standards for SunOS](https://devnull-cz.github.io/unix-linux-prog-in-c/cstyle.ms.pdf) diff --git a/external/sse2neon/LICENSE b/external/sse2neon/LICENSE index 9cf10627..71488b16 100644 --- a/external/sse2neon/LICENSE +++ b/external/sse2neon/LICENSE @@ -1,5 +1,7 @@ MIT License +Copyright (c) 2015-2024 SSE2NEON Contributors + Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights diff --git a/external/sse2neon/Makefile b/external/sse2neon/Makefile index 132721ef..999a3a7b 100644 --- a/external/sse2neon/Makefile +++ b/external/sse2neon/Makefile @@ -1,3 +1,7 @@ +ifndef CC +override CC = gcc +endif + ifndef CXX override CXX = g++ endif @@ -5,6 +9,7 @@ endif ifndef CROSS_COMPILE processor := $(shell uname -m) else # CROSS_COMPILE was set + CC = $(CROSS_COMPILE)gcc CXX = $(CROSS_COMPILE)g++ CXXFLAGS += -static LDFLAGS += -static @@ -24,14 +29,32 @@ EXEC_WRAPPER = qemu-$(processor) endif # Follow platform-specific configurations -ifeq ($(processor),$(filter $(processor),aarch64 arm64)) - ARCH_CFLAGS = -march=armv8-a+fp+simd+crc -else ifeq ($(processor),$(filter $(processor),i386 x86_64)) - ARCH_CFLAGS = -maes -mpclmul -mssse3 -msse4.2 -else ifeq ($(processor),$(filter $(processor),arm armv7l)) - ARCH_CFLAGS = -mfpu=neon -else - $(error Unsupported architecture) +ARCH_CFLAGS ?= +ARCH_CFLAGS_IS_SET = +ifeq ($(ARCH_CFLAGS),) + ARCH_CFLAGS_IS_SET = true +endif +ifeq ($(ARCH_CFLAGS),none) + ARCH_CFLAGS_IS_SET = true +endif +ifdef ARCH_CFLAGS_IS_SET + ifeq ($(processor),$(filter $(processor),aarch64 arm64)) + override ARCH_CFLAGS := -march=armv8-a+fp+simd + else ifeq ($(processor),$(filter $(processor),i386 x86_64)) + override ARCH_CFLAGS := -maes -mpclmul -mssse3 -msse4.2 + else ifeq ($(processor),$(filter $(processor),arm armv7 armv7l)) + override ARCH_CFLAGS := -mfpu=neon + else + $(error Unsupported architecture) + endif +endif + +FEATURE ?= +ifneq ($(FEATURE),) +ifneq ($(FEATURE),none) +COMMA:= , +ARCH_CFLAGS := $(ARCH_CFLAGS)+$(subst $(COMMA),+,$(FEATURE)) +endif endif CXXFLAGS += -Wall -Wcast-qual -I. $(ARCH_CFLAGS) -std=gnu++14 @@ -53,15 +76,18 @@ $(EXEC): $(OBJS) $(CXX) $(LDFLAGS) -o $@ $^ check: tests/main +ifeq ($(processor),$(filter $(processor),aarch64 arm64 arm armv7l)) + $(CC) $(ARCH_CFLAGS) -c sse2neon.h +endif $(EXEC_WRAPPER) $^ indent: - @echo "Formating files with clang-format.." - @if ! hash clang-format-11; then echo "clang-format-11 is required to indent"; fi - clang-format-11 -i sse2neon.h tests/*.cpp tests/*.h + @echo "Formatting files with clang-format.." + @if ! 
hash clang-format-12; then echo "clang-format-12 is required to indent"; fi + clang-format-12 -i sse2neon.h tests/*.cpp tests/*.h .PHONY: clean check format clean: - $(RM) $(OBJS) $(EXEC) $(deps) + $(RM) $(OBJS) $(EXEC) $(deps) sse2neon.h.gch -include $(deps) diff --git a/external/sse2neon/README.md b/external/sse2neon/README.md index 6afe3503..c35bcc71 100644 --- a/external/sse2neon/README.md +++ b/external/sse2neon/README.md @@ -30,12 +30,40 @@ Header file | Extension | In order to deliver NEON-equivalent intrinsics for all SSE intrinsics used widely, please be aware that some SSE intrinsics exist a direct mapping with a concrete -NEON-equivalent intrinsic. However, others lack of 1-to-1 mapping, that means the -equivalents are implemented using several NEON intrinsics. +NEON-equivalent intrinsic. Others, unfortunately, lack a 1:1 mapping, meaning that +their equivalents are built utilizing a number of NEON intrinsics. For example, SSE intrinsic `_mm_loadu_si128` has a direct NEON mapping (`vld1q_s32`), but SSE intrinsic `_mm_maddubs_epi16` has to be implemented with 13+ NEON instructions. +### Floating-point compatibility + +Some conversions require several NEON intrinsics, which may produce inconsistent results +compared to their SSE counterparts due to differences in the arithmetic rules of IEEE-754. + +Taking a possible conversion of `_mm_rsqrt_ps` as example: + +```c +__m128 _mm_rsqrt_ps(__m128 in) +{ + float32x4_t out = vrsqrteq_f32(vreinterpretq_f32_m128(in)); + + out = vmulq_f32( + out, vrsqrtsq_f32(vmulq_f32(vreinterpretq_f32_m128(in), out), out)); + + return vreinterpretq_m128_f32(out); +} +``` + +The `_mm_rsqrt_ps` conversion will produce NaN if a source value is `0.0` (first INF for the +reciprocal square root of `0.0`, then INF * `0.0` using `vmulq_f32`). In contrast, +the SSE counterpart produces INF if a source value is `0.0`. +As a result, additional treatments should be applied to ensure consistency between the conversion and its SSE counterpart. + +## Requirement + +Developers are advised to utilize sse2neon.h with GCC version 10 or higher, or Clang version 11 or higher. While sse2neon.h might be compatible with earlier versions, certain vector operation errors have been identified in those versions. For further details, refer to the discussion in issue [#622](https://github.com/DLTcollab/sse2neon/issues/622). + ## Usage - Put the file `sse2neon.h` in to your source code directory. @@ -45,7 +73,7 @@ but SSE intrinsic `_mm_maddubs_epi16` has to be implemented with 13+ NEON instru #include #include ``` - {p,t,s,n,w}mmintrin.h should be replaceable, but the coverage of these extensions might be limited though. + {p,t,s,n,w}mmintrin.h could be replaceable as well. - Replace them with: ```C @@ -53,10 +81,14 @@ but SSE intrinsic `_mm_maddubs_epi16` has to be implemented with 13+ NEON instru ``` - Explicitly specify platform-specific options to gcc/clang compilers. 
- * On ARMv8-A targets, you should specify the following compiler option: (Remove `crypto` and/or `crc` if your architecture does not support cryptographic and/or CRC32 extensions) + * On ARMv8-A 64-bit targets, you should specify the following compiler option: (Remove `crypto` and/or `crc` if your architecture does not support cryptographic and/or CRC32 extensions) ```shell -march=armv8-a+fp+simd+crypto+crc ``` + * On ARMv8-A 32-bit targets, you should specify the following compiler option: + ```shell + -mfpu=neon-fp-armv8 + ``` * On ARMv7-A targets, you need to append the following compiler option: ```shell -mfpu=neon ``` @@ -64,10 +96,12 @@ but SSE intrinsic `_mm_maddubs_epi16` has to be implemented with 13+ NEON instru ## Compile-time Configurations +Although floating-point operations in NEON use the IEEE single-precision format, NEON does not fully comply with the IEEE standard when inputs or results are denormal or NaN values, in order to minimize power consumption and maximize performance. Considering the balance between correctness and performance, `sse2neon` recognizes the following compile-time configurations: -* `SSE2NEON_PRECISE_MINMAX`: Enable precise implementation of `_mm_min_ps` and `_mm_max_ps`. If you need consistent results such as NaN special cases, enable it. +* `SSE2NEON_PRECISE_MINMAX`: Enable precise implementation of `_mm_min_{ps,pd}` and `_mm_max_{ps,pd}`. If you need consistent results, such as NaN handling, enable it. * `SSE2NEON_PRECISE_DIV`: Enable precise implementation of `_mm_rcp_ps` and `_mm_div_ps` by additional Newton-Raphson iteration for accuracy. * `SSE2NEON_PRECISE_SQRT`: Enable precise implementation of `_mm_sqrt_ps` and `_mm_rsqrt_ps` by additional Newton-Raphson iteration for accuracy. +* `SSE2NEON_PRECISE_DP`: Enable precise implementation of `_mm_dp_pd`. When the conditional bit is not set, the corresponding multiplication would not be executed. The above are turned off by default, and you should define the corresponding macro(s) as `1` before including `sse2neon.h` if you need the precise implementations. @@ -80,56 +114,144 @@ runtime. Use the following commands to perform test cases: $ make check ``` -You can specify GNU toolchain for cross compilation as well. +To run the check with additional features enabled, you can assign the features with the `FEATURE` variable. +If `none` is assigned, then the command will be the same as simply calling `make check`. +The following command enables the `crypto` and `crc` features in the tests. +``` +$ make FEATURE=crypto+crc check +``` + +To run the check on a certain CPU, set the FPU mode, etc., +you can also assign the desired compiler options with the `ARCH_CFLAGS` variable. +If `none` is assigned, the command acts the same as calling `make check`. +For instance, to run the tests on a Cortex-A53 with the Arm VFPv4 extension and NEON enabled: +``` +$ make ARCH_CFLAGS="-mcpu=cortex-a53 -mfpu=neon-vfpv4" check +``` + +### Running tests on hosts other than ARM platform + +To run the tests on a host other than an Arm platform, +you can specify a GNU toolchain for cross compilation with the `CROSS_COMPILE` variable. [QEMU](https://www.qemu.org/) should be installed in advance.
+ +For ARMv8-A running in 64-bit mode type: ```shell $ make CROSS_COMPILE=aarch64-linux-gnu- check # ARMv8-A ``` -or + +For ARMv7-A type: ```shell $ make CROSS_COMPILE=arm-linux-gnueabihf- check # ARMv7-A ``` +For ARMv8-A running in 32-bit mode (A32 instruction set) type: +```shell +$ make \ + CROSS_COMPILE=arm-linux-gnueabihf- \ + ARCH_CFLAGS="-mcpu=cortex-a32 -mfpu=neon-fp-armv8" \ + check +``` + Check the details via [Test Suite for SSE2NEON](tests/README.md). ## Adoptions Here is a partial list of open source projects that have adopted `sse2neon` for Arm/Aarch64 support. +* [Aaru Data Preservation Suite](https://www.aaru.app/) is a fully-featured software package to preserve all storage media from the very old to the cutting edge, as well as to give detailed information about any supported image file (whether from Aaru or not) and to extract the files from those images. * [aether-game-utils](https://github.com/johnhues/aether-game-utils) is a collection of cross platform utilities for quickly creating small game prototypes in C++. +* [ALE](https://github.com/sc932/ALE), aka Assembly Likelihood Evaluation, is a tool for evaluating accuracy of assemblies without the need of a reference genome. +* [AnchorWave](https://github.com/baoxingsong/AnchorWave), Anchored Wavefront Alignment, identifies collinear regions via conserved anchors (full-length CDS and full-length exon have been implemented currently) and breaks collinear regions into shorter fragments, i.e., anchor and inter-anchor intervals. +* [ATAK-CIV](https://github.com/deptofdefense/AndroidTacticalAssaultKit-CIV), Android Tactical Assault Kit for Civilian Use, is the official geospatial-temporal and situational awareness tool used by the US Government. +* [Apache Doris](https://doris.apache.org/) is a Massively Parallel Processing (MPP) based interactive SQL data warehousing for reporting and analysis. * [Apache Impala](https://impala.apache.org/) is a lightning-fast, distributed SQL queries for petabytes of data stored in Apache Hadoop clusters. * [Apache Kudu](https://kudu.apache.org/) completes Hadoop's storage layer to enable fast analytics on fast data. +* [apollo](https://github.com/ApolloAuto/apollo) is a high performance, flexible architecture which accelerates the development of Autonomous Vehicles. +* [ares](https://github.com/ares-emulator/ares) is a cross-platform, open source, multi-system emulator, focusing on accuracy and preservation. * [ART](https://github.com/dinosaure/art) is an implementation in OCaml of [Adaptive Radix Tree](https://db.in.tum.de/~leis/papers/ART.pdf) (ART). * [Async](https://github.com/romange/async) is a set of c++ primitives that allows efficient and rapid development in C++17 on GNU/Linux systems. +* [avec](https://github.com/unevens/avec) is a little library for using SIMD instructions on both x86 and Arm. +* [BEAGLE](https://github.com/beagle-dev/beagle-lib) is a high-performance library that can perform the core calculations at the heart of most Bayesian and Maximum Likelihood phylogenetics packages. +* [BitMagic](https://github.com/tlk00/BitMagic) implements compressed bit-vectors and containers (vectors) based on ideas of bit-slicing transform and Rank-Select compression, offering sets of method to architect your applications to use HPC techniques to save memory (thus be able to fit more data in one compute unit) and improve storage and traffic patterns when storing data vectors and models in files or object stores. 
+* [bipartite\_motif\_finder](https://github.com/soedinglab/bipartite_motif_finder), also known as BMF (Bipartite Motif Finder), is an open source tool for finding co-occurrences of sequence motifs in genomic sequences. * [Blender](https://www.blender.org/) is the free and open source 3D creation suite, supporting the entirety of the 3D pipeline. * [Boo](https://github.com/AxioDL/boo) is a cross-platform windowing and event manager similar to SDL or SFML, with additional 3D rendering functionality. +* [Brickworks](https://github.com/sdangelo/brickworks) is a music DSP toolkit that supplies the fundamental building blocks for creating and enhancing audio engines on any platform. * [CARTA](https://github.com/CARTAvis/carta-backend) is a new visualization tool designed for viewing radio astronomy images in CASA, FITS, MIRIAD, and HDF5 formats (using the IDIA custom schema for HDF5). * [Catcoon](https://github.com/i-evi/catcoon) is a [feedforward neural network](https://en.wikipedia.org/wiki/Feedforward_neural_network) implementation in C. +* [compute-runtime](https://github.com/intel/compute-runtime), the Intel Graphics Compute Runtime for oneAPI Level Zero and OpenCL Driver, provides compute API support (Level Zero, OpenCL) for Intel graphics hardware architectures (HD Graphics, Xe). +* [contour](https://github.com/contour-terminal/contour) is a modern and actually fast virtual terminal emulator. +* [Cog](https://github.com/losnoco/Cog) is a free and open source audio player for macOS. * [dab-cmdline](https://github.com/JvanKatwijk/dab-cmdline) provides entries for the functionality to handle Digital audio broadcasting (DAB)/DAB+ through some simple calls. +* [DISTRHO](https://distrho.sourceforge.io/) is an open-source project for Cross-Platform Audio Plugins. +* [Dragonfly](https://github.com/dragonflydb/dragonfly) is a modern in-memory datastore, fully compatible with Redis and Memcached APIs. * [EDGE](https://github.com/3dfxdev/EDGE) is an advanced OpenGL source port spawned from the DOOM engine, with focus on easy development and expansion for modders and end-users. -* [Embree](https://github.com/embree/embree) a collection of high-performance ray tracing kernels. Its target users are graphics application engineers who want to improve the performance of their photo-realistic rendering application by leveraging Embree's performance-optimized ray tracing kernels. +* [Embree](https://github.com/embree/embree) is a collection of high-performance ray tracing kernels. Its target users are graphics application engineers who want to improve the performance of their photo-realistic rendering application by leveraging Embree's performance-optimized ray tracing kernels. * [emp-tool](https://github.com/emp-toolkit/emp-tool) aims to provide a benchmark for secure computation and allowing other researchers to experiment and extend. +* [Exudyn](https://github.com/jgerstmayr/EXUDYN) is a C++ based Python library for efficient simulation of flexible multibody dynamics systems. * [FoundationDB](https://www.foundationdb.org) is a distributed database designed to handle large volumes of structured data across clusters of commodity servers. -* [iqtree_arm_neon](https://github.com/joshlvmh/iqtree_arm_neon) is the Arm NEON port of [IQ-TREE](http://www.iqtree.org/), fast and effective stochastic algorithm to infer phylogenetic trees by maximum likelihood. +* [fsrc](https://github.com/elsamuko/fsrc) is capable of searching large codebases for text snippets.
+* [gmmlib](https://github.com/intel/gmmlib) is the Intel Graphics Memory Management Library that provides device specific and buffer management for the Intel Graphics Compute Runtime for OpenCL and the Intel Media Driver for VAAPI. +* [HISE](https://github.com/christophhart/HISE) is a cross-platform open source audio application for building virtual instruments, emphasizing on sampling, but includes some basic synthesis features for making hybrid instruments as well as audio effects. +* [iqtree2](https://github.com/iqtree/iqtree2) is an efficient and versatile stochastic implementation to infer phylogenetic trees by maximum likelihood. +* [indelPost](https://github.com/stjude/indelPost) is a Python library for indel processing via realignment and read-based phasing to resolve alignment ambiguities. +* [IResearch](https://github.com/iresearch-toolkit/iresearch) is a cross-platform, high-performance document oriented search engine library written entirely in C++ with the focus on a pluggability of different ranking/similarity models. +* [Kraken](https://github.com/Wabi-Studios/Kraken) is a 3D animation platform redefining animation composition, collaborative workflows, simulation engines, skeletal rigging systems, and look development from storyboard to final render. * [kram](https://github.com/alecazam/kram) is a wrapper to several popular encoders to and from PNG/[KTX](https://www.khronos.org/opengles/sdk/tools/KTX/file_format_spec/) files with [LDR/HDR and BC/ASTC/ETC2](https://developer.arm.com/solutions/graphics-and-gaming/developer-guides/learn-the-basics/adaptive-scalable-texture-compression/single-page). +* [Krita](https://invent.kde.org/graphics/krita) is a cross-platform application that offers an end-to-end solution for creating digital art files from scratch built on the KDE and Qt frameworks. +* [libCML](https://github.com/belosthomas/libCML) is a SLAM library and scientific tool, which include a novel fast thread-safe graph map implementation. +* [libhdfs3](https://github.com/ClickHouse/libhdfs3) is implemented based on native Hadoop RPC protocol and Hadoop Distributed File System (HDFS), a highly fault-tolerant distributed fs, data transfer protocol. +* [libpostal](https://github.com/openvenues/libpostal) is a C library for parsing/normalizing street addresses around the world using statistical NLP and open data. * [libscapi](https://github.com/cryptobiu/libscapi) stands for the "Secure Computation API", providing reliable, efficient, and highly flexible cryptographic infrastructure. +* [libstreamvbyte](https://github.com/wst24365888/libstreamvbyte) is a C++ implementation of [StreamVByte](https://arxiv.org/abs/1709.08990). * [libmatoya](https://github.com/matoya/libmatoya) is a cross-platform application development library, providing various features such as common cryptography tasks. +* [Loosejaw](https://github.com/TheHolyDiver/Loosejaw) provides deep hybrid CPU/GPU digital signal processing. +* [Madronalib](https://github.com/madronalabs/madronalib) enables efficient audio DSP on SIMD processors with readable and brief C++ code. * [minimap2](https://github.com/lh3/minimap2) is a versatile sequence alignment program that aligns DNA or mRNA sequences against a large reference database. +* [mixed-fem](https://github.com/tytrusty/mixed-fem) is an open source reference implementation of Mixed Variational Finite Elements for Implicit Simulation of Deformables. 
* [MMseqs2](https://github.com/soedinglab/MMseqs2) (Many-against-Many sequence searching) is a software suite to search and cluster huge protein and nucleotide sequence sets. * [MRIcroGL](https://github.com/rordenlab/MRIcroGL) is a cross-platform tool for viewing NIfTI, DICOM, MGH, MHD, NRRD, AFNI format medical images. * [N2](https://github.com/oddconcepts/n2o) is an approximate nearest neighborhoods algorithm library written in C++, providing a much faster search speed than other implementations when modeling large dataset. +* [nanors](https://github.com/sleepybishop/nanors) is a tiny, performant implementation of [Reed-Solomon codes](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction), capable of reaching multi-gigabit speeds on a single core. * [niimath](https://github.com/rordenlab/niimath) is a general image calculator with superior performance. -* [OBS Studio](https://github.com/obsproject/obs-studio) is software designed for capturing, compositing, encoding, recording, and streaming video content, efficiently. +* [NVIDIA GameWorks](https://developer.nvidia.com/gameworks-source-github) has been already used in a lot of games. These repositories are public on GitHub. +* [Nx Meta Platform Open Source Components](https://github.com/networkoptix/nx_open) are used to build all Powered-by-Nx products including Nx Witness Video Management System (VMS). +* [ofxNDI](https://github.com/leadedge/ofxNDI) is an [openFrameworks](https://openframeworks.cc/) addon to allow sending and receiving images over a network using the [NewTek](https://en.wikipedia.org/wiki/NewTek) Network Device Protocol. * [OGRE](https://github.com/OGRECave/ogre) is a scene-oriented, flexible 3D engine written in C++ designed to make it easier and more intuitive for developers to produce games and demos utilising 3D hardware. +* [Olive](https://github.com/olive-editor/olive) is a free non-linear video editor for Windows, macOS, and Linux. +* [OpenColorIO](https://github.com/AcademySoftwareFoundation/OpenColorIO) a complete color management solution geared towards motion picture production with an emphasis on visual effects and computer animation. * [OpenXRay](https://github.com/OpenXRay/xray-16) is an improved version of the X-Ray engine, used in world famous S.T.A.L.K.E.R. game series by GSC Game World. * [parallel-n64](https://github.com/libretro/parallel-n64) is an optimized/rewritten Nintendo 64 emulator made specifically for [Libretro](https://www.libretro.com/). +* [Pathfinder C++](https://github.com/floppyhammer/pathfinder-cpp) is a fast, practical, GPU-based rasterizer for fonts and vector graphics using Vulkan and C++. * [PFFFT](https://github.com/marton78/pffft) does 1D Fast Fourier Transforms, of single precision real and complex vectors. +* [pixaccess](https://github.com/oliverue/pixaccess) provides the abstractions for integer and float bitmaps, pixels, and aliased (nearest neighbor) and anti-aliased (bi-linearly interpolated) pixel access. * [PlutoSDR Firmware](https://github.com/seanstone/plutosdr-fw) is the customized firmware for the [PlutoSDR](https://wiki.analog.com/university/tools/pluto) that can be used to introduce fundamentals of Software Defined Radio (SDR) or Radio Frequency (RF) or Communications as advanced topics in electrical engineering in a self or instructor lead setting. +* [PowerToys](https://github.com/microsoft/PowerToys) is a set of utilities for power users to tune and streamline their Windows experience for greater productivity. 
* [Pygame](https://www.pygame.org) is cross-platform and designed to make it easy to write multimedia software, such as games, in Python. -* [simd_utils](https://github.com/JishinMaster/simd_utils) is a header-only library implementing common mathematical functions using SIMD intrinsics. +* [R:RandomFieldsUtils](https://cran.r-project.org/web/packages/RandomFieldsUtils) provides various utilities might be used in spatial statistics and elsewhere. (CRAN) +* [RAxML](https://github.com/stamatak/standard-RAxML) is tool for Phylogenetic Analysis and Post-Analysis of Large Phylogenies. +* [ReHLDS](https://github.com/gennadykataev/rehlds) is fully compatible with latest Half-Life Dedicated Server (HLDS) with a lot of defects and (potential) bugs fixed. +* [rkcommon](https://github.com/ospray/rkcommon) represents a common set of C++ infrastructure and CMake utilities used by various components of [Intel oneAPI Rendering Toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/rendering-toolkit.html). +* [RPCS3](https://github.com/RPCS3/rpcs3) is the world's first free and open-source PlayStation 3 emulator/debugger, written in C++. +* [simd\_utils](https://github.com/JishinMaster/simd_utils) is a header-only library implementing common mathematical functions using SIMD intrinsics. +* [Sire](https://github.com/OpenBioSim/sire) is a molecular modelling framework that provides extensive functionality to manipulate representations of biomolecular systems. * [SMhasher](https://github.com/rurban/smhasher) provides comprehensive Hash function quality and speed tests. +* [SNN++](https://github.com/ianmkim/snnpp) implements a single layer non linear Spiking Neural Network for images classification and generation. * [Spack](https://github.com/spack/spack) is a multi-platform package manager that builds and installs multiple versions and configurations of software. +* [SRA](https://github.com/ncbi/sra-tools) is a collection of tools and libraries for using data in the [INSDC Sequence Read Archives](https://www.ncbi.nlm.nih.gov/sra/docs/). * [srsLTE](https://github.com/srsLTE/srsLTE) is an open source SDR LTE software suite. +* [SSW](https://github.com/mengyao/Complete-Striped-Smith-Waterman-Library) is a fast implementation of the [Smith-Waterman algorithm](https://en.wikipedia.org/wiki/Smith%E2%80%93Waterman_algorithm), which uses the SIMD instructions to parallelize the algorithm at the instruction level. * [Surge](https://github.com/surge-synthesizer/surge) is an open source digital synthesizer. +* [The Forge](https://github.com/ConfettiFX/The-Forge) is a cross-platform rendering framework, providing building blocks to write your own game engine. +* [Typesense](https://github.com/typesense/typesense) is a fast, typo-tolerant search engine for building delightful search experiences. +* [Vcpkg](https://github.com/microsoft/vcpkg) is a C++ Library Manager for Windows, Linux, and macOS. +* [VelocyPack](https://github.com/arangodb/velocypack) is a fast and compact format for serialization and storage. +* [VOLK](https://github.com/gnuradio/volk), Vector-Optimized Library of Kernel, is a sub-project of [GNU Radio](https://www.gnuradio.org/). +* [Vowpal Wabbit](https://github.com/VowpalWabbit/vowpal_wabbit) is a machine learning system which pushes the frontier of machine learning with techniques such as online, hashing, allreduce, reductions, learning2search, active, and interactive learning. 
+* [Winter](https://github.com/rosenthj/Winter) is the top rated chess engine from Switzerland and has competed at top invite only computer chess events. +* [XEVE](https://github.com/mpeg5/xeve) (eXtra-fast Essential Video Encoder) is an open sourced and fast MPEG-5 EVC encoder. * [XMRig](https://github.com/xmrig/xmrig) is an open source CPU miner for [Monero](https://web.getmonero.org/) cryptocurrency. +* [xsimd](https://github.com/xtensor-stack/xsimd) provides a unified means for using SIMD intrinsics and parallelized, optimized mathematical functions. +* [YACL](https://github.com/secretflow/yasl) is a C++ library contains modules and utilities which [SecretFlow](https://github.com/secretflow) code depends on. ## Related Projects * [SIMDe](https://github.com/simd-everywhere/simde): fast and portable implementations of SIMD @@ -137,16 +259,28 @@ Here is a partial list of open source projects that have adopted `sse2neon` for * [CatBoost's sse2neon](https://github.com/catboost/catboost/blob/master/library/cpp/sse/sse2neon.h) * [ARM\_NEON\_2\_x86\_SSE](https://github.com/intel/ARM_NEON_2_x86_SSE) * [AvxToNeon](https://github.com/kunpengcompute/AvxToNeon) -* [POWER/PowerPC support for GCC](https://github.com/gcc-mirror/gcc/blob/master/gcc/config/rs6000) contains a series of headers simplifying porting x86_64 code that - makes explicit use of Intel intrinsics to powerpc64le (pure little-endian mode that has been introduced with the [POWER8](https://en.wikipedia.org/wiki/POWER8)). +* [sse2rvv](https://github.com/FeddrickAquino/sse2rvv): C header file that converts Intel SSE intrinsics to RISC-V Vector intrinsic. +* [sse2msa](https://github.com/i-evi/sse2msa): A C/C++ header file that converts Intel SSE intrinsics to MIPS/MIPS64 MSA intrinsics. +* [sse2zig](https://github.com/aqrit/sse2zig): Intel SSE intrinsics mapped to [Zig](https://ziglang.org/) vector extensions. +* [POWER/PowerPC support for GCC](https://github.com/gcc-mirror/gcc/blob/master/gcc/config/rs6000) contains a series of headers simplifying porting x86\_64 code that makes explicit use of Intel intrinsics to powerpc64le (pure little-endian mode that has been introduced with the [POWER8](https://en.wikipedia.org/wiki/POWER8)). - implementation: [xmmintrin.h](https://github.com/gcc-mirror/gcc/blob/master/gcc/config/rs6000/xmmintrin.h), [emmintrin.h](https://github.com/gcc-mirror/gcc/blob/master/gcc/config/rs6000/emmintrin.h), [pmmintrin.h](https://github.com/gcc-mirror/gcc/blob/master/gcc/config/rs6000/pmmintrin.h), [tmmintrin.h](https://github.com/gcc-mirror/gcc/blob/master/gcc/config/rs6000/tmmintrin.h), [smmintrin.h](https://github.com/gcc-mirror/gcc/blob/master/gcc/config/rs6000/smmintrin.h) ## Reference -* [Intel Intrinsics Guide](https://software.intel.com/sites/landingpage/IntrinsicsGuide/) +* [Intel Intrinsics Guide](https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html) +* [Microsoft: x86 intrinsics list](https://learn.microsoft.com/en-us/cpp/intrinsics/x86-intrinsics-list) * [Arm Neon Intrinsics Reference](https://developer.arm.com/architectures/instruction-sets/simd-isas/neon/intrinsics) * [Neon Programmer's Guide for Armv8-A](https://developer.arm.com/architectures/instruction-sets/simd-isas/neon/neon-programmers-guide-for-armv8-a) * [NEON Programmer's Guide](https://static.docs.arm.com/den0018/a/DEN0018A_neon_programmers_guide_en.pdf) -* [qemu/target/i386/ops_sse.h](https://github.com/qemu/qemu/blob/master/target/i386/ops_sse.h): Comprehensive SSE instruction emulation in C. Ideal for semantic checks. 
+* [qemu/target/i386/ops\_sse.h](https://github.com/qemu/qemu/blob/master/target/i386/ops_sse.h): Comprehensive SSE instruction emulation in C. Ideal for semantic checks. +* [Porting Takua Renderer to 64-bit ARM- Part 1](https://blog.yiningkarlli.com/2021/05/porting-takua-to-arm-pt1.html) +* [Porting Takua Renderer to 64-bit ARM- Part 2](https://blog.yiningkarlli.com/2021/07/porting-takua-to-arm-pt2.html) +* [Comparing SIMD on x86-64 and arm64](https://blog.yiningkarlli.com/2021/09/neon-vs-sse.html) +* [Port with SSE2Neon and SIMDe](https://developer.arm.com/documentation/102581/0200/Port-with-SSE2Neon-and-SIMDe) +* [Genomics: Optimizing the BWA aligner for Arm Servers](https://community.arm.com/arm-community-blogs/b/high-performance-computing-blog/posts/optimizing-genomics-and-the-bwa-aligner-for-arm-servers) +* [Bit twiddling with Arm Neon: beating SSE movemasks, counting bits and more](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/porting-x86-vector-bitmask-optimizations-to-arm-neon) +* [C/C++ on Graviton](https://github.com/aws/aws-graviton-getting-started/blob/main/c-c%2B%2B.md) +* [Tune graphics-intensive games for Apple silicon](https://developer.apple.com/games/planning/) +* [Benchmarking and Testing of Qualcomm Snapdragon System-on-Chip for JPL Space Applications and Missions](https://ieeexplore.ieee.org/abstract/document/9843518) ## Licensing diff --git a/external/sse2neon/sse2neon.h b/external/sse2neon/sse2neon.h index 9fc39876..2b12721b 100644 --- a/external/sse2neon/sse2neon.h +++ b/external/sse2neon/sse2neon.h @@ -1,34 +1,11 @@ #ifndef SSE2NEON_H #define SSE2NEON_H -// This header file provides a simple API translation layer -// between SSE intrinsics to their corresponding Arm/Aarch64 NEON versions -// -// This header file does not yet translate all of the SSE intrinsics. -// -// Contributors to this work are: -// John W. Ratcliff -// Brandon Rowlett -// Ken Fast -// Eric van Beurden -// Alexander Potylitsin -// Hasindu Gamaarachchi -// Jim Huang -// Mark Cheng -// Malcolm James MacLeod -// Devin Hussey (easyaspi314) -// Sebastian Pop -// Developer Ecosystem Engineering -// Danila Kutenin -// François Turban (JishinMaster) -// Pei-Hsuan Hung -// Yang-Hao Yuan -// Syoyo Fujita -// Brecht Van Lommel - /* * sse2neon is freely redistributable under the MIT License. * + * Copyright (c) 2015-2024 SSE2NEON Contributors. + * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights @@ -48,17 +25,44 @@ * SOFTWARE. */ +// This header file provides a simple API translation layer +// between SSE intrinsics to their corresponding Arm/Aarch64 NEON versions +// +// Contributors to this work are: +// John W. Ratcliff +// Brandon Rowlett +// Ken Fast +// Eric van Beurden +// Alexander Potylitsin +// Hasindu Gamaarachchi +// Jim Huang +// Mark Cheng +// Malcolm James MacLeod +// Devin Hussey (easyaspi314) +// Sebastian Pop +// Developer Ecosystem Engineering +// Danila Kutenin +// François Turban (JishinMaster) +// Pei-Hsuan Hung +// Yang-Hao Yuan +// Syoyo Fujita +// Brecht Van Lommel +// Jonathan Hue +// Cuda Chen +// Aymen Qader +// Anthony Roberts + /* Tunable configurations */ /* Enable precise implementation of math operations * This would slow down the computation a bit, but gives consistent result with - * x86 SSE2. (e.g. 
would solve a hole or NaN pixel in the rendering result) + * x86 SSE. (e.g. would solve a hole or NaN pixel in the rendering result) */ -/* _mm_min_ps and _mm_max_ps */ +/* _mm_min|max_ps|ss|pd|sd */ #ifndef SSE2NEON_PRECISE_MINMAX #define SSE2NEON_PRECISE_MINMAX (0) #endif -/* _mm_rcp_ps and _mm_div_ps */ +/* _mm_rcp_ps */ #ifndef SSE2NEON_PRECISE_DIV #define SSE2NEON_PRECISE_DIV (0) #endif @@ -66,38 +70,159 @@ #ifndef SSE2NEON_PRECISE_SQRT #define SSE2NEON_PRECISE_SQRT (0) #endif +/* _mm_dp_pd */ +#ifndef SSE2NEON_PRECISE_DP +#define SSE2NEON_PRECISE_DP (0) +#endif + +/* Enable inclusion of windows.h on MSVC platforms + * This makes _mm_clflush functional on windows, as there is no builtin. + */ +#ifndef SSE2NEON_INCLUDE_WINDOWS_H +#define SSE2NEON_INCLUDE_WINDOWS_H (0) +#endif +/* compiler specific definitions */ #if defined(__GNUC__) || defined(__clang__) #pragma push_macro("FORCE_INLINE") #pragma push_macro("ALIGN_STRUCT") #define FORCE_INLINE static inline __attribute__((always_inline)) #define ALIGN_STRUCT(x) __attribute__((aligned(x))) -#ifndef likely -#define likely(x) __builtin_expect(!!(x), 1) -#endif -#ifndef unlikely -#define unlikely(x) __builtin_expect(!!(x), 0) -#endif -#else -#error "Macro name collisions may happen with unsupported compiler." -#ifdef FORCE_INLINE -#undef FORCE_INLINE +#define _sse2neon_likely(x) __builtin_expect(!!(x), 1) +#define _sse2neon_unlikely(x) __builtin_expect(!!(x), 0) +#elif defined(_MSC_VER) +#if _MSVC_TRADITIONAL +#error Using the traditional MSVC preprocessor is not supported! Use /Zc:preprocessor instead. #endif +#ifndef FORCE_INLINE #define FORCE_INLINE static inline +#endif #ifndef ALIGN_STRUCT #define ALIGN_STRUCT(x) __declspec(align(x)) #endif +#define _sse2neon_likely(x) (x) +#define _sse2neon_unlikely(x) (x) +#else +#pragma message("Macro name collisions may happen with unsupported compilers.") +#endif + + +#if defined(__GNUC__) && !defined(__clang__) +#pragma push_macro("FORCE_INLINE_OPTNONE") +#define FORCE_INLINE_OPTNONE static inline __attribute__((optimize("O0"))) +#elif defined(__clang__) +#pragma push_macro("FORCE_INLINE_OPTNONE") +#define FORCE_INLINE_OPTNONE static inline __attribute__((optnone)) +#else +#define FORCE_INLINE_OPTNONE FORCE_INLINE #endif -#ifndef likely -#define likely(x) (x) + +#if !defined(__clang__) && defined(__GNUC__) && __GNUC__ < 10 +#warning "GCC versions earlier than 10 are not supported." #endif -#ifndef unlikely -#define unlikely(x) (x) + +/* C language does not allow initializing a variable with a function call. */ +#ifdef __cplusplus +#define _sse2neon_const static const +#else +#define _sse2neon_const const #endif #include #include +#if defined(_WIN32) +/* Definitions for _mm_{malloc,free} are provided by + * from both MinGW-w64 and MSVC. 
+ */ +#define SSE2NEON_ALLOC_DEFINED +#endif + +/* If using MSVC */ +#ifdef _MSC_VER +#include +#if SSE2NEON_INCLUDE_WINDOWS_H +#include +#include +#endif + +#if !defined(__cplusplus) +#error SSE2NEON only supports C++ compilation with this compiler +#endif + +#ifdef SSE2NEON_ALLOC_DEFINED +#include +#endif + +#if (defined(_M_AMD64) || defined(__x86_64__)) || \ + (defined(_M_ARM64) || defined(__arm64__)) +#define SSE2NEON_HAS_BITSCAN64 +#endif +#endif + +#if defined(__GNUC__) || defined(__clang__) +#define _sse2neon_define0(type, s, body) \ + __extension__({ \ + type _a = (s); \ + body \ + }) +#define _sse2neon_define1(type, s, body) \ + __extension__({ \ + type _a = (s); \ + body \ + }) +#define _sse2neon_define2(type, a, b, body) \ + __extension__({ \ + type _a = (a), _b = (b); \ + body \ + }) +#define _sse2neon_return(ret) (ret) +#else +#define _sse2neon_define0(type, a, body) [=](type _a) { body }(a) +#define _sse2neon_define1(type, a, body) [](type _a) { body }(a) +#define _sse2neon_define2(type, a, b, body) \ + [](type _a, type _b) { body }((a), (b)) +#define _sse2neon_return(ret) return ret +#endif + +#define _sse2neon_init(...) \ + { \ + __VA_ARGS__ \ + } + +/* Compiler barrier */ +#if defined(_MSC_VER) +#define SSE2NEON_BARRIER() _ReadWriteBarrier() +#else +#define SSE2NEON_BARRIER() \ + do { \ + __asm__ __volatile__("" ::: "memory"); \ + (void) 0; \ + } while (0) +#endif + +/* Memory barriers + * __atomic_thread_fence does not include a compiler barrier; instead, + * the barrier is part of __atomic_load/__atomic_store's "volatile-like" + * semantics. + */ +#if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L) +#include +#endif + +FORCE_INLINE void _sse2neon_smp_mb(void) +{ + SSE2NEON_BARRIER(); +#if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L) && \ + !defined(__STDC_NO_ATOMICS__) + atomic_thread_fence(memory_order_seq_cst); +#elif defined(__GNUC__) || defined(__clang__) + __atomic_thread_fence(__ATOMIC_SEQ_CST); +#else /* MSVC */ + __dmb(_ARM64_BARRIER_ISH); +#endif +} + /* Architecture-specific build options */ /* FIXME: #pragma GCC push_options is only available on GCC */ #if defined(__GNUC__) @@ -113,32 +238,77 @@ #pragma GCC push_options #pragma GCC target("fpu=neon") #endif -#elif defined(__aarch64__) -#if !defined(__clang__) +#elif defined(__aarch64__) || defined(_M_ARM64) +#if !defined(__clang__) && !defined(_MSC_VER) #pragma GCC push_options #pragma GCC target("+simd") #endif +#elif __ARM_ARCH == 8 +#if !defined(__ARM_NEON) || !defined(__ARM_NEON__) +#error \ + "You must enable NEON instructions (e.g. -mfpu=neon-fp-armv8) to use SSE2NEON." +#endif +#if !defined(__clang__) && !defined(_MSC_VER) +#pragma GCC push_options +#endif #else -#error "Unsupported target. Must be either ARMv7-A+NEON or ARMv8-A." +#error \ + "Unsupported target. Must be either ARMv7-A+NEON or ARMv8-A \ +(you could try setting target explicitly with -march or -mcpu)" #endif #endif #include +#if (!defined(__aarch64__) && !defined(_M_ARM64)) && (__ARM_ARCH == 8) +#if defined __has_include && __has_include() +#include +#endif +#endif + +/* Apple Silicon cache lines are double of what is commonly used by Intel, AMD + * and other Arm microarchitectures use. 
+ * From sysctl -a on Apple M1: + * hw.cachelinesize: 128 + */ +#if defined(__APPLE__) && (defined(__aarch64__) || defined(__arm64__)) +#define SSE2NEON_CACHELINE_SIZE 128 +#else +#define SSE2NEON_CACHELINE_SIZE 64 +#endif -/* Rounding functions require either Aarch64 instructions or libm failback */ -#if !defined(__aarch64__) +/* Rounding functions require either Aarch64 instructions or libm fallback */ +#if !defined(__aarch64__) && !defined(_M_ARM64) #include #endif +/* On ARMv7, some registers, such as PMUSERENR and PMCCNTR, are read-only + * or even not accessible in user mode. + * To write or access to these registers in user mode, + * we have to perform syscall instead. + */ +#if (!defined(__aarch64__) && !defined(_M_ARM64)) +#include +#endif + /* "__has_builtin" can be used to query support for built-in functions * provided by gcc/clang and other compilers that support it. */ #ifndef __has_builtin /* GCC prior to 10 or non-clang compilers */ /* Compatibility with gcc <= 9 */ -#if __GNUC__ <= 9 +#if defined(__GNUC__) && (__GNUC__ <= 9) #define __has_builtin(x) HAS##x #define HAS__builtin_popcount 1 #define HAS__builtin_popcountll 1 + +// __builtin_shuffle introduced in GCC 4.7.0 +#if (__GNUC__ >= 5) || ((__GNUC__ == 4) && (__GNUC_MINOR__ >= 7)) +#define HAS__builtin_shuffle 1 +#else +#define HAS__builtin_shuffle 0 +#endif + +#define HAS__builtin_shufflevector 0 +#define HAS__builtin_nontemporal_store 0 #else #define __has_builtin(x) 0 #endif @@ -155,6 +325,26 @@ #define _MM_SHUFFLE(fp3, fp2, fp1, fp0) \ (((fp3) << 6) | ((fp2) << 4) | ((fp1) << 2) | ((fp0))) +#if __has_builtin(__builtin_shufflevector) +#define _sse2neon_shuffle(type, a, b, ...) \ + __builtin_shufflevector(a, b, __VA_ARGS__) +#elif __has_builtin(__builtin_shuffle) +#define _sse2neon_shuffle(type, a, b, ...) \ + __extension__({ \ + type tmp = {__VA_ARGS__}; \ + __builtin_shuffle(a, b, tmp); \ + }) +#endif + +#ifdef _sse2neon_shuffle +#define vshuffle_s16(a, b, ...) _sse2neon_shuffle(int16x4_t, a, b, __VA_ARGS__) +#define vshuffleq_s16(a, b, ...) _sse2neon_shuffle(int16x8_t, a, b, __VA_ARGS__) +#define vshuffle_s32(a, b, ...) _sse2neon_shuffle(int32x2_t, a, b, __VA_ARGS__) +#define vshuffleq_s32(a, b, ...) _sse2neon_shuffle(int32x4_t, a, b, __VA_ARGS__) +#define vshuffle_s64(a, b, ...) _sse2neon_shuffle(int64x1_t, a, b, __VA_ARGS__) +#define vshuffleq_s64(a, b, ...) _sse2neon_shuffle(int64x2_t, a, b, __VA_ARGS__) +#endif + /* Rounding mode macros. */ #define _MM_FROUND_TO_NEAREST_INT 0x00 #define _MM_FROUND_TO_NEG_INF 0x01 @@ -162,10 +352,25 @@ #define _MM_FROUND_TO_ZERO 0x03 #define _MM_FROUND_CUR_DIRECTION 0x04 #define _MM_FROUND_NO_EXC 0x08 +#define _MM_FROUND_RAISE_EXC 0x00 +#define _MM_FROUND_NINT (_MM_FROUND_TO_NEAREST_INT | _MM_FROUND_RAISE_EXC) +#define _MM_FROUND_FLOOR (_MM_FROUND_TO_NEG_INF | _MM_FROUND_RAISE_EXC) +#define _MM_FROUND_CEIL (_MM_FROUND_TO_POS_INF | _MM_FROUND_RAISE_EXC) +#define _MM_FROUND_TRUNC (_MM_FROUND_TO_ZERO | _MM_FROUND_RAISE_EXC) +#define _MM_FROUND_RINT (_MM_FROUND_CUR_DIRECTION | _MM_FROUND_RAISE_EXC) +#define _MM_FROUND_NEARBYINT (_MM_FROUND_CUR_DIRECTION | _MM_FROUND_NO_EXC) #define _MM_ROUND_NEAREST 0x0000 #define _MM_ROUND_DOWN 0x2000 #define _MM_ROUND_UP 0x4000 #define _MM_ROUND_TOWARD_ZERO 0x6000 +/* Flush zero mode macros. */ +#define _MM_FLUSH_ZERO_MASK 0x8000 +#define _MM_FLUSH_ZERO_ON 0x8000 +#define _MM_FLUSH_ZERO_OFF 0x0000 +/* Denormals are zeros mode macros. 
*/ +#define _MM_DENORMALS_ZERO_MASK 0x0040 +#define _MM_DENORMALS_ZERO_ON 0x0040 +#define _MM_DENORMALS_ZERO_OFF 0x0000 /* indicate immediate constant argument in a given range */ #define __constrange(a, b) const @@ -181,13 +386,28 @@ typedef float32x4_t __m128; /* 128-bit vector containing 4 floats */ // On ARM 32-bit architecture, the float64x2_t is not supported. // The data type __m128d should be represented in a different way for related // intrinsic conversion. -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) typedef float64x2_t __m128d; /* 128-bit vector containing 2 doubles */ #else typedef float32x4_t __m128d; #endif typedef int64x2_t __m128i; /* 128-bit vector containing integers */ +// Some intrinsics operate on unaligned data types. +typedef int16_t ALIGN_STRUCT(1) unaligned_int16_t; +typedef int32_t ALIGN_STRUCT(1) unaligned_int32_t; +typedef int64_t ALIGN_STRUCT(1) unaligned_int64_t; + +// __int64 is defined in the Intrinsics Guide which maps to different datatype +// in different data model +#if !(defined(_WIN32) || defined(_WIN64) || defined(__int64)) +#if (defined(__x86_64__) || defined(__i386__)) +#define __int64 long long +#else +#define __int64 int64_t +#endif +#endif + /* type-safe casting between types */ #define vreinterpretq_m128_f16(x) vreinterpretq_f32_f16(x) @@ -267,7 +487,7 @@ typedef int64x2_t __m128i; /* 128-bit vector containing integers */ #define vreinterpret_f32_m64(x) vreinterpret_f32_s64(x) -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) #define vreinterpretq_m128d_s32(x) vreinterpretq_f64_s32(x) #define vreinterpretq_m128d_s64(x) vreinterpretq_f64_s64(x) @@ -301,10 +521,10 @@ typedef int64x2_t __m128i; /* 128-bit vector containing integers */ #endif // A struct is defined in this header file called 'SIMDVec' which can be used -// by applications which attempt to access the contents of an _m128 struct +// by applications which attempt to access the contents of an __m128 struct // directly. It is important to note that accessing the __m128 struct directly // is bad coding practice by Microsoft: @see: -// https://msdn.microsoft.com/en-us/library/ayeb3ayc.aspx +// https://learn.microsoft.com/en-us/cpp/cpp/m128 // // However, some legacy source code may try to access the contents of an __m128 // struct directly so the developer can use the SIMDVec as an alias for it. 
Any @@ -340,23 +560,38 @@ typedef union ALIGN_STRUCT(16) SIMDVec { #define vreinterpretq_nth_u32_m128i(x, n) (((SIMDVec *) &x)->m128_u32[n]) #define vreinterpretq_nth_u8_m128i(x, n) (((SIMDVec *) &x)->m128_u8[n]) +/* SSE macros */ +#define _MM_GET_FLUSH_ZERO_MODE _sse2neon_mm_get_flush_zero_mode +#define _MM_SET_FLUSH_ZERO_MODE _sse2neon_mm_set_flush_zero_mode +#define _MM_GET_DENORMALS_ZERO_MODE _sse2neon_mm_get_denormals_zero_mode +#define _MM_SET_DENORMALS_ZERO_MODE _sse2neon_mm_set_denormals_zero_mode + // Function declaration // SSE -FORCE_INLINE unsigned int _MM_GET_ROUNDING_MODE(); +FORCE_INLINE unsigned int _MM_GET_ROUNDING_MODE(void); FORCE_INLINE __m128 _mm_move_ss(__m128, __m128); +FORCE_INLINE __m128 _mm_or_ps(__m128, __m128); +FORCE_INLINE __m128 _mm_set_ps1(float); +FORCE_INLINE __m128 _mm_setzero_ps(void); // SSE2 +FORCE_INLINE __m128i _mm_and_si128(__m128i, __m128i); +FORCE_INLINE __m128i _mm_castps_si128(__m128); +FORCE_INLINE __m128i _mm_cmpeq_epi32(__m128i, __m128i); FORCE_INLINE __m128i _mm_cvtps_epi32(__m128); FORCE_INLINE __m128d _mm_move_sd(__m128d, __m128d); +FORCE_INLINE __m128i _mm_or_si128(__m128i, __m128i); FORCE_INLINE __m128i _mm_set_epi32(int, int, int, int); FORCE_INLINE __m128i _mm_set_epi64x(int64_t, int64_t); FORCE_INLINE __m128d _mm_set_pd(double, double); +FORCE_INLINE __m128i _mm_set1_epi32(int); +FORCE_INLINE __m128i _mm_setzero_si128(void); // SSE4.1 FORCE_INLINE __m128d _mm_ceil_pd(__m128d); FORCE_INLINE __m128 _mm_ceil_ps(__m128); FORCE_INLINE __m128d _mm_floor_pd(__m128d); FORCE_INLINE __m128 _mm_floor_ps(__m128); -FORCE_INLINE __m128d _mm_round_pd(__m128d, int); -FORCE_INLINE __m128 _mm_round_ps(__m128, int); +FORCE_INLINE_OPTNONE __m128d _mm_round_pd(__m128d, int); +FORCE_INLINE_OPTNONE __m128 _mm_round_ps(__m128, int); // SSE4.2 FORCE_INLINE uint32_t _mm_crc32_u8(uint32_t, uint8_t); @@ -364,7 +599,7 @@ FORCE_INLINE uint32_t _mm_crc32_u8(uint32_t, uint8_t); // Older gcc does not define vld1q_u8_x4 type #if defined(__GNUC__) && !defined(__clang__) && \ - ((__GNUC__ <= 10 && defined(__arm__)) || \ + ((__GNUC__ <= 13 && defined(__arm__)) || \ (__GNUC__ == 10 && __GNUC_MINOR__ < 3 && defined(__aarch64__)) || \ (__GNUC__ <= 9 && defined(__aarch64__))) FORCE_INLINE uint8x16x4_t _sse2neon_vld1q_u8_x4(const uint8_t *p) @@ -384,6 +619,57 @@ FORCE_INLINE uint8x16x4_t _sse2neon_vld1q_u8_x4(const uint8_t *p) } #endif +#if !defined(__aarch64__) && !defined(_M_ARM64) +/* emulate vaddv u8 variant */ +FORCE_INLINE uint8_t _sse2neon_vaddv_u8(uint8x8_t v8) +{ + const uint64x1_t v1 = vpaddl_u32(vpaddl_u16(vpaddl_u8(v8))); + return vget_lane_u8(vreinterpret_u8_u64(v1), 0); +} +#else +// Wraps vaddv_u8 +FORCE_INLINE uint8_t _sse2neon_vaddv_u8(uint8x8_t v8) +{ + return vaddv_u8(v8); +} +#endif + +#if !defined(__aarch64__) && !defined(_M_ARM64) +/* emulate vaddvq u8 variant */ +FORCE_INLINE uint8_t _sse2neon_vaddvq_u8(uint8x16_t a) +{ + uint8x8_t tmp = vpadd_u8(vget_low_u8(a), vget_high_u8(a)); + uint8_t res = 0; + for (int i = 0; i < 8; ++i) + res += tmp[i]; + return res; +} +#else +// Wraps vaddvq_u8 +FORCE_INLINE uint8_t _sse2neon_vaddvq_u8(uint8x16_t a) +{ + return vaddvq_u8(a); +} +#endif + +#if !defined(__aarch64__) && !defined(_M_ARM64) +/* emulate vaddvq u16 variant */ +FORCE_INLINE uint16_t _sse2neon_vaddvq_u16(uint16x8_t a) +{ + uint32x4_t m = vpaddlq_u16(a); + uint64x2_t n = vpaddlq_u32(m); + uint64x1_t o = vget_low_u64(n) + vget_high_u64(n); + + return vget_lane_u32((uint32x2_t) o, 0); +} +#else +// Wraps vaddvq_u16 +FORCE_INLINE uint16_t 
_sse2neon_vaddvq_u16(uint16x8_t a) +{ + return vaddvq_u16(a); +} +#endif + /* Function Naming Conventions * The naming convention of SSE intrinsics is straightforward. A generic SSE * intrinsic function is given as follows: @@ -396,7 +682,7 @@ FORCE_INLINE uint8x16x4_t _sse2neon_vld1q_u8_x4(const uint8_t *p) * This last part, , is a little complicated. It identifies the * content of the input values, and can be set to any of the following values: * + ps - vectors contain floats (ps stands for packed single-precision) - * + pd - vectors cantain doubles (pd stands for packed double-precision) + * + pd - vectors contain doubles (pd stands for packed double-precision) * + epi8/epi16/epi32/epi64 - vectors contain 8-bit/16-bit/32-bit/64-bit * signed integers * + epu8/epu16/epu32/epu64 - vectors contain 8-bit/16-bit/32-bit/64-bit @@ -418,59 +704,14 @@ FORCE_INLINE uint8x16x4_t _sse2neon_vld1q_u8_x4(const uint8_t *p) * 4, 5, 12, 13, 6, 7, 14, 15); * // Shuffle packed 8-bit integers * __m128i v_out = _mm_shuffle_epi8(v_in, v_perm); // pshufb - * - * Data (Number, Binary, Byte Index): - +------+------+-------------+------+------+-------------+ - | 1 | 2 | 3 | 4 | Number - +------+------+------+------+------+------+------+------+ - | 0000 | 0001 | 0000 | 0010 | 0000 | 0011 | 0000 | 0100 | Binary - +------+------+------+------+------+------+------+------+ - | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Index - +------+------+------+------+------+------+------+------+ - - +------+------+------+------+------+------+------+------+ - | 5 | 6 | 7 | 8 | Number - +------+------+------+------+------+------+------+------+ - | 0000 | 0101 | 0000 | 0110 | 0000 | 0111 | 0000 | 1000 | Binary - +------+------+------+------+------+------+------+------+ - | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Index - +------+------+------+------+------+------+------+------+ - * Index (Byte Index): - +------+------+------+------+------+------+------+------+ - | 1 | 0 | 2 | 3 | 8 | 9 | 10 | 11 | - +------+------+------+------+------+------+------+------+ - - +------+------+------+------+------+------+------+------+ - | 4 | 5 | 12 | 13 | 6 | 7 | 14 | 15 | - +------+------+------+------+------+------+------+------+ - * Result: - +------+------+------+------+------+------+------+------+ - | 1 | 0 | 2 | 3 | 8 | 9 | 10 | 11 | Index - +------+------+------+------+------+------+------+------+ - | 0001 | 0000 | 0000 | 0010 | 0000 | 0101 | 0000 | 0110 | Binary - +------+------+------+------+------+------+------+------+ - | 256 | 2 | 5 | 6 | Number - +------+------+------+------+------+------+------+------+ - - +------+------+------+------+------+------+------+------+ - | 4 | 5 | 12 | 13 | 6 | 7 | 14 | 15 | Index - +------+------+------+------+------+------+------+------+ - | 0000 | 0011 | 0000 | 0111 | 0000 | 0100 | 0000 | 1000 | Binary - +------+------+------+------+------+------+------+------+ - | 3 | 7 | 4 | 8 | Number - +------+------+------+------+------+------+-------------+ */ -/* Constants for use with _mm_prefetch. */ +/* Constants for use with _mm_prefetch. 
*/ enum _mm_hint { - _MM_HINT_NTA = 0, /* load data to L1 and L2 cache, mark it as NTA */ - _MM_HINT_T0 = 1, /* load data to L1 and L2 cache */ - _MM_HINT_T1 = 2, /* load data to L2 cache only */ - _MM_HINT_T2 = 3, /* load data to L2 cache only, mark it as NTA */ - _MM_HINT_ENTA = 4, /* exclusive version of _MM_HINT_NTA */ - _MM_HINT_ET0 = 5, /* exclusive version of _MM_HINT_T0 */ - _MM_HINT_ET1 = 6, /* exclusive version of _MM_HINT_T1 */ - _MM_HINT_ET2 = 7 /* exclusive version of _MM_HINT_T2 */ + _MM_HINT_NTA = 0, /* load data to L1 and L2 cache, mark it as NTA */ + _MM_HINT_T0 = 1, /* load data to L1 and L2 cache */ + _MM_HINT_T1 = 2, /* load data to L2 cache only */ + _MM_HINT_T2 = 3, /* load data to L2 cache only, mark it as NTA */ }; // The bit field mapping to the FPCR(floating-point control register) @@ -479,8 +720,9 @@ typedef struct { uint8_t res1 : 6; uint8_t bit22 : 1; uint8_t bit23 : 1; - uint8_t res2; -#if defined(__aarch64__) + uint8_t bit24 : 1; + uint8_t res2 : 7; +#if defined(__aarch64__) || defined(_M_ARM64) uint32_t res3; #endif } fpcr_bitfield; @@ -620,23 +862,24 @@ FORCE_INLINE __m128 _mm_shuffle_ps_2032(__m128 a, __m128 b) return vreinterpretq_m128_f32(vcombine_f32(a32, b20)); } -// Kahan summation for accurate summation of floating-point numbers. -// http://blog.zachbjornson.com/2019/08/11/fast-float-summation.html -FORCE_INLINE void _sse2neon_kadd_f32(float *sum, float *c, float y) -{ - y -= *c; - float t = *sum + y; - *c = (t - *sum) - y; - *sum = t; -} - -#if defined(__ARM_FEATURE_CRYPTO) +// For MSVC, we check only if it is ARM64, as every single ARM64 processor +// supported by WoA has crypto extensions. If this changes in the future, +// this can be verified via the runtime-only method of: +// IsProcessorFeaturePresent(PF_ARM_V8_CRYPTO_INSTRUCTIONS_AVAILABLE) +#if (defined(_M_ARM64) && !defined(__clang__)) || \ + (defined(__ARM_FEATURE_CRYPTO) && \ + (defined(__aarch64__) || __has_builtin(__builtin_arm_crypto_vmullp64))) // Wraps vmull_p64 FORCE_INLINE uint64x2_t _sse2neon_vmull_p64(uint64x1_t _a, uint64x1_t _b) { poly64_t a = vget_lane_p64(vreinterpret_p64_u64(_a), 0); poly64_t b = vget_lane_p64(vreinterpret_p64_u64(_b), 0); +#if defined(_MSC_VER) + __n64 a1 = {a}, b1 = {b}; + return vreinterpretq_u64_p128(vmull_p64(a1, b1)); +#else return vreinterpretq_u64_p128(vmull_p64(a, b)); +#endif } #else // ARMv7 polyfill // ARMv7/some A64 lacks vmull_p64, but it has vmull_p8. @@ -754,21 +997,17 @@ static uint64x2_t _sse2neon_vmull_p64(uint64x1_t _a, uint64x1_t _b) // return ret; // } #define _mm_shuffle_epi32_default(a, imm) \ - __extension__({ \ - int32x4_t ret; \ - ret = vmovq_n_s32( \ - vgetq_lane_s32(vreinterpretq_s32_m128i(a), (imm) & (0x3))); \ - ret = vsetq_lane_s32( \ - vgetq_lane_s32(vreinterpretq_s32_m128i(a), ((imm) >> 2) & 0x3), \ - ret, 1); \ - ret = vsetq_lane_s32( \ + vreinterpretq_m128i_s32(vsetq_lane_s32( \ + vgetq_lane_s32(vreinterpretq_s32_m128i(a), ((imm) >> 6) & 0x3), \ + vsetq_lane_s32( \ vgetq_lane_s32(vreinterpretq_s32_m128i(a), ((imm) >> 4) & 0x3), \ - ret, 2); \ - ret = vsetq_lane_s32( \ - vgetq_lane_s32(vreinterpretq_s32_m128i(a), ((imm) >> 6) & 0x3), \ - ret, 3); \ - vreinterpretq_m128i_s32(ret); \ - }) + vsetq_lane_s32(vgetq_lane_s32(vreinterpretq_s32_m128i(a), \ + ((imm) >> 2) & 0x3), \ + vmovq_n_s32(vgetq_lane_s32( \ + vreinterpretq_s32_m128i(a), (imm) & (0x3))), \ + 1), \ + 2), \ + 3)) // Takes the upper 64 bits of a and places it in the low end of the result // Takes the lower 64 bits of a and places it into the high end of the result. 
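
As a quick illustration of how the `imm8` selectors decoded by the shuffle helpers above are normally produced, here is a minimal sketch using the public `_mm_shuffle_epi32` macro together with `_MM_SHUFFLE`. It assumes `sse2neon.h` is on the include path; the expected output in the comment follows the Intel pseudocode.

```c
#include <stdint.h>
#include <stdio.h>
#include "sse2neon.h"

int main(void)
{
    __m128i v = _mm_setr_epi32(10, 20, 30, 40);

    /* _MM_SHUFFLE(d3, d2, d1, d0) packs one 2-bit source index per destination
     * lane, so _MM_SHUFFLE(0, 1, 2, 3) reverses the element order. */
    __m128i rev = _mm_shuffle_epi32(v, _MM_SHUFFLE(0, 1, 2, 3));

    int32_t out[4];
    _mm_storeu_si128((__m128i *) out, rev);
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]); /* 40 30 20 10 */
    return 0;
}
```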
@@ -852,25 +1091,18 @@ FORCE_INLINE __m128i _mm_shuffle_epi_3332(__m128i a) return vreinterpretq_m128i_s32(vcombine_s32(a32, a33)); } -// FORCE_INLINE __m128i _mm_shuffle_epi32_splat(__m128i a, __constrange(0,255) -// int imm) -#if defined(__aarch64__) -#define _mm_shuffle_epi32_splat(a, imm) \ - __extension__({ \ - vreinterpretq_m128i_s32( \ - vdupq_laneq_s32(vreinterpretq_s32_m128i(a), (imm))); \ - }) +#if defined(__aarch64__) || defined(_M_ARM64) +#define _mm_shuffle_epi32_splat(a, imm) \ + vreinterpretq_m128i_s32(vdupq_laneq_s32(vreinterpretq_s32_m128i(a), (imm))) #else -#define _mm_shuffle_epi32_splat(a, imm) \ - __extension__({ \ - vreinterpretq_m128i_s32( \ - vdupq_n_s32(vgetq_lane_s32(vreinterpretq_s32_m128i(a), (imm)))); \ - }) +#define _mm_shuffle_epi32_splat(a, imm) \ + vreinterpretq_m128i_s32( \ + vdupq_n_s32(vgetq_lane_s32(vreinterpretq_s32_m128i(a), (imm)))) #endif -// NEON does not support a general purpose permute intrinsic -// Selects four specific single-precision, floating-point values from a and b, -// based on the mask i. +// NEON does not support a general purpose permute intrinsic. +// Shuffle single-precision (32-bit) floating-point elements in a using the +// control in imm8, and store the results in dst. // // C equivalent: // __m128 _mm_shuffle_ps_default(__m128 a, __m128 b, @@ -881,33 +1113,27 @@ FORCE_INLINE __m128i _mm_shuffle_epi_3332(__m128i a) // return ret; // } // -// https://msdn.microsoft.com/en-us/library/vstudio/5f0858x0(v=vs.100).aspx -#define _mm_shuffle_ps_default(a, b, imm) \ - __extension__({ \ - float32x4_t ret; \ - ret = vmovq_n_f32( \ - vgetq_lane_f32(vreinterpretq_f32_m128(a), (imm) & (0x3))); \ - ret = vsetq_lane_f32( \ - vgetq_lane_f32(vreinterpretq_f32_m128(a), ((imm) >> 2) & 0x3), \ - ret, 1); \ - ret = vsetq_lane_f32( \ - vgetq_lane_f32(vreinterpretq_f32_m128(b), ((imm) >> 4) & 0x3), \ - ret, 2); \ - ret = vsetq_lane_f32( \ - vgetq_lane_f32(vreinterpretq_f32_m128(b), ((imm) >> 6) & 0x3), \ - ret, 3); \ - vreinterpretq_m128_f32(ret); \ - }) - -// Shuffles the lower 4 signed or unsigned 16-bit integers in a as specified -// by imm. -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/y41dkk37(v=vs.100) -// FORCE_INLINE __m128i _mm_shufflelo_epi16_function(__m128i a, -// __constrange(0,255) int -// imm) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_shuffle_ps +#define _mm_shuffle_ps_default(a, b, imm) \ + vreinterpretq_m128_f32(vsetq_lane_f32( \ + vgetq_lane_f32(vreinterpretq_f32_m128(b), ((imm) >> 6) & 0x3), \ + vsetq_lane_f32( \ + vgetq_lane_f32(vreinterpretq_f32_m128(b), ((imm) >> 4) & 0x3), \ + vsetq_lane_f32( \ + vgetq_lane_f32(vreinterpretq_f32_m128(a), ((imm) >> 2) & 0x3), \ + vmovq_n_f32( \ + vgetq_lane_f32(vreinterpretq_f32_m128(a), (imm) & (0x3))), \ + 1), \ + 2), \ + 3)) + +// Shuffle 16-bit integers in the low 64 bits of a using the control in imm8. +// Store the results in the low 64 bits of dst, with the high 64 bits being +// copied from a to dst. 
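
For the float variant handled by `_mm_shuffle_ps_default` above, the low two result lanes come from `a` and the high two from `b`. A small sketch (assuming `sse2neon.h` is available; expected values per the Intel pseudocode):

```c
#include <stdio.h>
#include "sse2neon.h"

int main(void)
{
    __m128 a = _mm_setr_ps(1.0f, 2.0f, 3.0f, 4.0f);
    __m128 b = _mm_setr_ps(5.0f, 6.0f, 7.0f, 8.0f);

    /* Lanes 0-1 of dst are selected from a, lanes 2-3 from b, so
     * _MM_SHUFFLE(3, 2, 1, 0) yields {a0, a1, b2, b3}. */
    __m128 mix = _mm_shuffle_ps(a, b, _MM_SHUFFLE(3, 2, 1, 0));

    float out[4];
    _mm_storeu_ps(out, mix);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]); /* 1 2 7 8 */
    return 0;
}
```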
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_shufflelo_epi16 #define _mm_shufflelo_epi16_function(a, imm) \ - __extension__({ \ - int16x8_t ret = vreinterpretq_s16_m128i(a); \ + _sse2neon_define1( \ + __m128i, a, int16x8_t ret = vreinterpretq_s16_m128i(_a); \ int16x4_t lowBits = vget_low_s16(ret); \ ret = vsetq_lane_s16(vget_lane_s16(lowBits, (imm) & (0x3)), ret, 0); \ ret = vsetq_lane_s16(vget_lane_s16(lowBits, ((imm) >> 2) & 0x3), ret, \ @@ -916,18 +1142,15 @@ FORCE_INLINE __m128i _mm_shuffle_epi_3332(__m128i a) 2); \ ret = vsetq_lane_s16(vget_lane_s16(lowBits, ((imm) >> 6) & 0x3), ret, \ 3); \ - vreinterpretq_m128i_s16(ret); \ - }) + _sse2neon_return(vreinterpretq_m128i_s16(ret));) -// Shuffles the upper 4 signed or unsigned 16-bit integers in a as specified -// by imm. -// https://msdn.microsoft.com/en-us/library/13ywktbs(v=vs.100).aspx -// FORCE_INLINE __m128i _mm_shufflehi_epi16_function(__m128i a, -// __constrange(0,255) int -// imm) +// Shuffle 16-bit integers in the high 64 bits of a using the control in imm8. +// Store the results in the high 64 bits of dst, with the low 64 bits being +// copied from a to dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_shufflehi_epi16 #define _mm_shufflehi_epi16_function(a, imm) \ - __extension__({ \ - int16x8_t ret = vreinterpretq_s16_m128i(a); \ + _sse2neon_define1( \ + __m128i, a, int16x8_t ret = vreinterpretq_s16_m128i(_a); \ int16x4_t highBits = vget_high_s16(ret); \ ret = vsetq_lane_s16(vget_lane_s16(highBits, (imm) & (0x3)), ret, 4); \ ret = vsetq_lane_s16(vget_lane_s16(highBits, ((imm) >> 2) & 0x3), ret, \ @@ -936,27 +1159,28 @@ FORCE_INLINE __m128i _mm_shuffle_epi_3332(__m128i a) 6); \ ret = vsetq_lane_s16(vget_lane_s16(highBits, ((imm) >> 6) & 0x3), ret, \ 7); \ - vreinterpretq_m128i_s16(ret); \ - }) + _sse2neon_return(vreinterpretq_m128i_s16(ret));) + +/* MMX */ + +//_mm_empty is a no-op on arm +FORCE_INLINE void _mm_empty(void) {} /* SSE */ -// Adds the four single-precision, floating-point values of a and b. -// -// r0 := a0 + b0 -// r1 := a1 + b1 -// r2 := a2 + b2 -// r3 := a3 + b3 -// -// https://msdn.microsoft.com/en-us/library/vstudio/c9848chc(v=vs.100).aspx +// Add packed single-precision (32-bit) floating-point elements in a and b, and +// store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_add_ps FORCE_INLINE __m128 _mm_add_ps(__m128 a, __m128 b) { return vreinterpretq_m128_f32( vaddq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b))); } -// adds the scalar single-precision floating point values of a and b. -// https://msdn.microsoft.com/en-us/library/be94x2y6(v=vs.100).aspx +// Add the lower single-precision (32-bit) floating-point element in a and b, +// store the result in the lower element of dst, and copy the upper 3 packed +// elements from a to the upper elements of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_add_ss FORCE_INLINE __m128 _mm_add_ss(__m128 a, __m128 b) { float32_t b0 = vgetq_lane_f32(vreinterpretq_f32_m128(b), 0); @@ -965,30 +1189,18 @@ FORCE_INLINE __m128 _mm_add_ss(__m128 a, __m128 b) return vreinterpretq_m128_f32(vaddq_f32(a, value)); } -// Computes the bitwise AND of the four single-precision, floating-point values -// of a and b. 
-// -// r0 := a0 & b0 -// r1 := a1 & b1 -// r2 := a2 & b2 -// r3 := a3 & b3 -// -// https://msdn.microsoft.com/en-us/library/vstudio/73ck1xc5(v=vs.100).aspx +// Compute the bitwise AND of packed single-precision (32-bit) floating-point +// elements in a and b, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_and_ps FORCE_INLINE __m128 _mm_and_ps(__m128 a, __m128 b) { return vreinterpretq_m128_s32( vandq_s32(vreinterpretq_s32_m128(a), vreinterpretq_s32_m128(b))); } -// Computes the bitwise AND-NOT of the four single-precision, floating-point -// values of a and b. -// -// r0 := ~a0 & b0 -// r1 := ~a1 & b1 -// r2 := ~a2 & b2 -// r3 := ~a3 & b3 -// -// https://msdn.microsoft.com/en-us/library/vstudio/68h7wd02(v=vs.100).aspx +// Compute the bitwise NOT of packed single-precision (32-bit) floating-point +// elements in a and then AND with b, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_andnot_ps FORCE_INLINE __m128 _mm_andnot_ps(__m128 a, __m128 b) { return vreinterpretq_m128_s32( @@ -998,13 +1210,7 @@ FORCE_INLINE __m128 _mm_andnot_ps(__m128 a, __m128 b) // Average packed unsigned 16-bit integers in a and b, and store the results in // dst. -// -// FOR j := 0 to 3 -// i := j*16 -// dst[i+15:i] := (a[i+15:i] + b[i+15:i] + 1) >> 1 -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_avg_pu16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_avg_pu16 FORCE_INLINE __m64 _mm_avg_pu16(__m64 a, __m64 b) { return vreinterpret_m64_u16( @@ -1013,182 +1219,199 @@ FORCE_INLINE __m64 _mm_avg_pu16(__m64 a, __m64 b) // Average packed unsigned 8-bit integers in a and b, and store the results in // dst. -// -// FOR j := 0 to 7 -// i := j*8 -// dst[i+7:i] := (a[i+7:i] + b[i+7:i] + 1) >> 1 -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_avg_pu8 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_avg_pu8 FORCE_INLINE __m64 _mm_avg_pu8(__m64 a, __m64 b) { return vreinterpret_m64_u8( vrhadd_u8(vreinterpret_u8_m64(a), vreinterpret_u8_m64(b))); } -// Compares for equality. -// https://msdn.microsoft.com/en-us/library/vstudio/36aectz5(v=vs.100).aspx +// Compare packed single-precision (32-bit) floating-point elements in a and b +// for equality, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpeq_ps FORCE_INLINE __m128 _mm_cmpeq_ps(__m128 a, __m128 b) { return vreinterpretq_m128_u32( vceqq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b))); } -// Compares for equality. -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/k423z28e(v=vs.100) +// Compare the lower single-precision (32-bit) floating-point elements in a and +// b for equality, store the result in the lower element of dst, and copy the +// upper 3 packed elements from a to the upper elements of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpeq_ss FORCE_INLINE __m128 _mm_cmpeq_ss(__m128 a, __m128 b) { return _mm_move_ss(a, _mm_cmpeq_ps(a, b)); } -// Compares for greater than or equal. -// https://msdn.microsoft.com/en-us/library/vstudio/fs813y2t(v=vs.100).aspx +// Compare packed single-precision (32-bit) floating-point elements in a and b +// for greater-than-or-equal, and store the results in dst. 
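
The packed compares return an all-ones or all-zeros mask per lane, which is typically combined with the bitwise operations above for branchless selection. A minimal sketch (assuming `sse2neon.h` is on the include path):

```c
#include <stdio.h>
#include "sse2neon.h"

int main(void)
{
    __m128 a = _mm_setr_ps(1.0f, 5.0f, 3.0f, 8.0f);
    __m128 b = _mm_setr_ps(4.0f, 2.0f, 9.0f, 7.0f);

    /* Each mask lane is all-ones where a > b and all-zeros elsewhere. */
    __m128 mask = _mm_cmpgt_ps(a, b);

    /* Branchless per-lane maximum: (mask & a) | (~mask & b). */
    __m128 max = _mm_or_ps(_mm_and_ps(mask, a), _mm_andnot_ps(mask, b));

    float out[4];
    _mm_storeu_ps(out, max);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]); /* 4 5 9 8 */
    return 0;
}
```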
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpge_ps FORCE_INLINE __m128 _mm_cmpge_ps(__m128 a, __m128 b) { return vreinterpretq_m128_u32( vcgeq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b))); } -// Compares for greater than or equal. -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/kesh3ddc(v=vs.100) +// Compare the lower single-precision (32-bit) floating-point elements in a and +// b for greater-than-or-equal, store the result in the lower element of dst, +// and copy the upper 3 packed elements from a to the upper elements of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpge_ss FORCE_INLINE __m128 _mm_cmpge_ss(__m128 a, __m128 b) { return _mm_move_ss(a, _mm_cmpge_ps(a, b)); } -// Compares for greater than. -// -// r0 := (a0 > b0) ? 0xffffffff : 0x0 -// r1 := (a1 > b1) ? 0xffffffff : 0x0 -// r2 := (a2 > b2) ? 0xffffffff : 0x0 -// r3 := (a3 > b3) ? 0xffffffff : 0x0 -// -// https://msdn.microsoft.com/en-us/library/vstudio/11dy102s(v=vs.100).aspx +// Compare packed single-precision (32-bit) floating-point elements in a and b +// for greater-than, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpgt_ps FORCE_INLINE __m128 _mm_cmpgt_ps(__m128 a, __m128 b) { return vreinterpretq_m128_u32( vcgtq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b))); } -// Compares for greater than. -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/1xyyyy9e(v=vs.100) +// Compare the lower single-precision (32-bit) floating-point elements in a and +// b for greater-than, store the result in the lower element of dst, and copy +// the upper 3 packed elements from a to the upper elements of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpgt_ss FORCE_INLINE __m128 _mm_cmpgt_ss(__m128 a, __m128 b) { return _mm_move_ss(a, _mm_cmpgt_ps(a, b)); } -// Compares for less than or equal. -// -// r0 := (a0 <= b0) ? 0xffffffff : 0x0 -// r1 := (a1 <= b1) ? 0xffffffff : 0x0 -// r2 := (a2 <= b2) ? 0xffffffff : 0x0 -// r3 := (a3 <= b3) ? 0xffffffff : 0x0 -// -// https://msdn.microsoft.com/en-us/library/vstudio/1s75w83z(v=vs.100).aspx +// Compare packed single-precision (32-bit) floating-point elements in a and b +// for less-than-or-equal, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmple_ps FORCE_INLINE __m128 _mm_cmple_ps(__m128 a, __m128 b) { return vreinterpretq_m128_u32( vcleq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b))); } -// Compares for less than or equal. -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/a7x0hbhw(v=vs.100) +// Compare the lower single-precision (32-bit) floating-point elements in a and +// b for less-than-or-equal, store the result in the lower element of dst, and +// copy the upper 3 packed elements from a to the upper elements of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmple_ss FORCE_INLINE __m128 _mm_cmple_ss(__m128 a, __m128 b) { return _mm_move_ss(a, _mm_cmple_ps(a, b)); } -// Compares for less than -// https://msdn.microsoft.com/en-us/library/vstudio/f330yhc8(v=vs.100).aspx +// Compare packed single-precision (32-bit) floating-point elements in a and b +// for less-than, and store the results in dst. 
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmplt_ps FORCE_INLINE __m128 _mm_cmplt_ps(__m128 a, __m128 b) { return vreinterpretq_m128_u32( vcltq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b))); } -// Compares for less than -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/fy94wye7(v=vs.100) +// Compare the lower single-precision (32-bit) floating-point elements in a and +// b for less-than, store the result in the lower element of dst, and copy the +// upper 3 packed elements from a to the upper elements of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmplt_ss FORCE_INLINE __m128 _mm_cmplt_ss(__m128 a, __m128 b) { return _mm_move_ss(a, _mm_cmplt_ps(a, b)); } -// Compares for inequality. -// https://msdn.microsoft.com/en-us/library/sf44thbx(v=vs.100).aspx +// Compare packed single-precision (32-bit) floating-point elements in a and b +// for not-equal, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpneq_ps FORCE_INLINE __m128 _mm_cmpneq_ps(__m128 a, __m128 b) { return vreinterpretq_m128_u32(vmvnq_u32( vceqq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b)))); } -// Compares for inequality. -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/ekya8fh4(v=vs.100) +// Compare the lower single-precision (32-bit) floating-point elements in a and +// b for not-equal, store the result in the lower element of dst, and copy the +// upper 3 packed elements from a to the upper elements of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpneq_ss FORCE_INLINE __m128 _mm_cmpneq_ss(__m128 a, __m128 b) { return _mm_move_ss(a, _mm_cmpneq_ps(a, b)); } -// Compares for not greater than or equal. -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/wsexys62(v=vs.100) +// Compare packed single-precision (32-bit) floating-point elements in a and b +// for not-greater-than-or-equal, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpnge_ps FORCE_INLINE __m128 _mm_cmpnge_ps(__m128 a, __m128 b) { - return _mm_cmplt_ps(a, b); + return vreinterpretq_m128_u32(vmvnq_u32( + vcgeq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b)))); } -// Compares for not greater than or equal. -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/fk2y80s8(v=vs.100) +// Compare the lower single-precision (32-bit) floating-point elements in a and +// b for not-greater-than-or-equal, store the result in the lower element of +// dst, and copy the upper 3 packed elements from a to the upper elements of +// dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpnge_ss FORCE_INLINE __m128 _mm_cmpnge_ss(__m128 a, __m128 b) { - return _mm_cmplt_ss(a, b); + return _mm_move_ss(a, _mm_cmpnge_ps(a, b)); } -// Compares for not greater than. -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/d0xh7w0s(v=vs.100) +// Compare packed single-precision (32-bit) floating-point elements in a and b +// for not-greater-than, and store the results in dst. 
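
The "not" comparison predicates differ from their ordered counterparts only when NaN is involved: `_mm_cmpnge_ps` computes NOT(a >= b), which is true for a NaN lane, whereas `_mm_cmplt_ps` is false for that lane. This is what the `vmvnq_u32(vcgeq_f32(...))` form above captures. A small demonstration (assuming `sse2neon.h` and C99 `NAN`; expected masks follow IEEE comparison semantics):

```c
#include <math.h>
#include <stdio.h>
#include "sse2neon.h"

int main(void)
{
    __m128 a = _mm_setr_ps(NAN, 1.0f, 2.0f, 3.0f);
    __m128 b = _mm_setr_ps(0.0f, 1.0f, 9.0f, -3.0f);

    /* NOT(a >= b): the NaN lane is "not greater-equal", so bit 0 is set. */
    int nge = _mm_movemask_ps(_mm_cmpnge_ps(a, b));
    /* a < b: the NaN lane compares false, so bit 0 is clear. */
    int lt = _mm_movemask_ps(_mm_cmplt_ps(a, b));

    printf("cmpnge mask = 0x%x, cmplt mask = 0x%x\n", nge, lt); /* 0x5, 0x4 */
    return 0;
}
```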
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpngt_ps FORCE_INLINE __m128 _mm_cmpngt_ps(__m128 a, __m128 b) { - return _mm_cmple_ps(a, b); + return vreinterpretq_m128_u32(vmvnq_u32( + vcgtq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b)))); } -// Compares for not greater than. -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/z7x9ydwh(v=vs.100) +// Compare the lower single-precision (32-bit) floating-point elements in a and +// b for not-greater-than, store the result in the lower element of dst, and +// copy the upper 3 packed elements from a to the upper elements of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpngt_ss FORCE_INLINE __m128 _mm_cmpngt_ss(__m128 a, __m128 b) { - return _mm_cmple_ss(a, b); + return _mm_move_ss(a, _mm_cmpngt_ps(a, b)); } -// Compares for not less than or equal. -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/6a330kxw(v=vs.100) +// Compare packed single-precision (32-bit) floating-point elements in a and b +// for not-less-than-or-equal, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpnle_ps FORCE_INLINE __m128 _mm_cmpnle_ps(__m128 a, __m128 b) { - return _mm_cmpgt_ps(a, b); + return vreinterpretq_m128_u32(vmvnq_u32( + vcleq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b)))); } -// Compares for not less than or equal. -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/z7x9ydwh(v=vs.100) +// Compare the lower single-precision (32-bit) floating-point elements in a and +// b for not-less-than-or-equal, store the result in the lower element of dst, +// and copy the upper 3 packed elements from a to the upper elements of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpnle_ss FORCE_INLINE __m128 _mm_cmpnle_ss(__m128 a, __m128 b) { - return _mm_cmpgt_ss(a, b); + return _mm_move_ss(a, _mm_cmpnle_ps(a, b)); } -// Compares for not less than. -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/4686bbdw(v=vs.100) +// Compare packed single-precision (32-bit) floating-point elements in a and b +// for not-less-than, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpnlt_ps FORCE_INLINE __m128 _mm_cmpnlt_ps(__m128 a, __m128 b) { - return _mm_cmpge_ps(a, b); + return vreinterpretq_m128_u32(vmvnq_u32( + vcltq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b)))); } -// Compares for not less than. -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/56b9z2wf(v=vs.100) +// Compare the lower single-precision (32-bit) floating-point elements in a and +// b for not-less-than, store the result in the lower element of dst, and copy +// the upper 3 packed elements from a to the upper elements of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpnlt_ss FORCE_INLINE __m128 _mm_cmpnlt_ss(__m128 a, __m128 b) { - return _mm_cmpge_ss(a, b); + return _mm_move_ss(a, _mm_cmpnlt_ps(a, b)); } -// Compares the four 32-bit floats in a and b to check if any values are NaN. -// Ordered compare between each value returns true for "orderable" and false for -// "not orderable" (NaN). 
-// https://msdn.microsoft.com/en-us/library/vstudio/0h9w00fx(v=vs.100).aspx see -// also: +// Compare packed single-precision (32-bit) floating-point elements in a and b +// to see if neither is NaN, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpord_ps +// +// See also: // http://stackoverflow.com/questions/8627331/what-does-ordered-unordered-comparison-mean // http://stackoverflow.com/questions/29349621/neon-isnanval-intrinsics FORCE_INLINE __m128 _mm_cmpord_ps(__m128 a, __m128 b) @@ -1203,15 +1426,18 @@ FORCE_INLINE __m128 _mm_cmpord_ps(__m128 a, __m128 b) return vreinterpretq_m128_u32(vandq_u32(ceqaa, ceqbb)); } -// Compares for ordered. -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/343t62da(v=vs.100) +// Compare the lower single-precision (32-bit) floating-point elements in a and +// b to see if neither is NaN, store the result in the lower element of dst, and +// copy the upper 3 packed elements from a to the upper elements of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpord_ss FORCE_INLINE __m128 _mm_cmpord_ss(__m128 a, __m128 b) { return _mm_move_ss(a, _mm_cmpord_ps(a, b)); } -// Compares for unordered. -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/khy6fk1t(v=vs.100) +// Compare packed single-precision (32-bit) floating-point elements in a and b +// to see if either is NaN, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpunord_ps FORCE_INLINE __m128 _mm_cmpunord_ps(__m128 a, __m128 b) { uint32x4_t f32a = @@ -1221,126 +1447,78 @@ FORCE_INLINE __m128 _mm_cmpunord_ps(__m128 a, __m128 b) return vreinterpretq_m128_u32(vmvnq_u32(vandq_u32(f32a, f32b))); } -// Compares for unordered. -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/2as2387b(v=vs.100) +// Compare the lower single-precision (32-bit) floating-point elements in a and +// b to see if either is NaN, store the result in the lower element of dst, and +// copy the upper 3 packed elements from a to the upper elements of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpunord_ss FORCE_INLINE __m128 _mm_cmpunord_ss(__m128 a, __m128 b) { return _mm_move_ss(a, _mm_cmpunord_ps(a, b)); } -// Compares the lower single-precision floating point scalar values of a and b -// using an equality operation. : -// https://msdn.microsoft.com/en-us/library/93yx2h2b(v=vs.100).aspx +// Compare the lower single-precision (32-bit) floating-point element in a and b +// for equality, and return the boolean result (0 or 1). 
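
Unlike the packed compares, the `_mm_comi*_ss` family compares only lane 0 and returns a plain `int`, so the result can drive an ordinary branch. A minimal usage sketch (assuming `sse2neon.h` is on the include path):

```c
#include <stdio.h>
#include "sse2neon.h"

int main(void)
{
    __m128 a = _mm_set_ss(1.0f);
    __m128 b = _mm_set_ss(2.0f);

    if (_mm_comilt_ss(a, b)) /* compares only the lowest lane */
        printf("lane 0 of a is less than lane 0 of b\n");

    printf("equal? %d\n", _mm_comieq_ss(a, b)); /* prints 0 */
    return 0;
}
```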
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_comieq_ss FORCE_INLINE int _mm_comieq_ss(__m128 a, __m128 b) { - // return vgetq_lane_u32(vceqq_f32(vreinterpretq_f32_m128(a), - // vreinterpretq_f32_m128(b)), 0); - uint32x4_t a_not_nan = - vceqq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(a)); - uint32x4_t b_not_nan = - vceqq_f32(vreinterpretq_f32_m128(b), vreinterpretq_f32_m128(b)); - uint32x4_t a_and_b_not_nan = vandq_u32(a_not_nan, b_not_nan); uint32x4_t a_eq_b = vceqq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b)); - return vgetq_lane_u32(vandq_u32(a_and_b_not_nan, a_eq_b), 0) & 0x1; + return vgetq_lane_u32(a_eq_b, 0) & 0x1; } -// Compares the lower single-precision floating point scalar values of a and b -// using a greater than or equal operation. : -// https://msdn.microsoft.com/en-us/library/8t80des6(v=vs.100).aspx +// Compare the lower single-precision (32-bit) floating-point element in a and b +// for greater-than-or-equal, and return the boolean result (0 or 1). +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_comige_ss FORCE_INLINE int _mm_comige_ss(__m128 a, __m128 b) { - // return vgetq_lane_u32(vcgeq_f32(vreinterpretq_f32_m128(a), - // vreinterpretq_f32_m128(b)), 0); - uint32x4_t a_not_nan = - vceqq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(a)); - uint32x4_t b_not_nan = - vceqq_f32(vreinterpretq_f32_m128(b), vreinterpretq_f32_m128(b)); - uint32x4_t a_and_b_not_nan = vandq_u32(a_not_nan, b_not_nan); uint32x4_t a_ge_b = vcgeq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b)); - return vgetq_lane_u32(vandq_u32(a_and_b_not_nan, a_ge_b), 0) & 0x1; + return vgetq_lane_u32(a_ge_b, 0) & 0x1; } -// Compares the lower single-precision floating point scalar values of a and b -// using a greater than operation. : -// https://msdn.microsoft.com/en-us/library/b0738e0t(v=vs.100).aspx +// Compare the lower single-precision (32-bit) floating-point element in a and b +// for greater-than, and return the boolean result (0 or 1). +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_comigt_ss FORCE_INLINE int _mm_comigt_ss(__m128 a, __m128 b) { - // return vgetq_lane_u32(vcgtq_f32(vreinterpretq_f32_m128(a), - // vreinterpretq_f32_m128(b)), 0); - uint32x4_t a_not_nan = - vceqq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(a)); - uint32x4_t b_not_nan = - vceqq_f32(vreinterpretq_f32_m128(b), vreinterpretq_f32_m128(b)); - uint32x4_t a_and_b_not_nan = vandq_u32(a_not_nan, b_not_nan); uint32x4_t a_gt_b = vcgtq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b)); - return vgetq_lane_u32(vandq_u32(a_and_b_not_nan, a_gt_b), 0) & 0x1; + return vgetq_lane_u32(a_gt_b, 0) & 0x1; } -// Compares the lower single-precision floating point scalar values of a and b -// using a less than or equal operation. : -// https://msdn.microsoft.com/en-us/library/1w4t7c57(v=vs.90).aspx +// Compare the lower single-precision (32-bit) floating-point element in a and b +// for less-than-or-equal, and return the boolean result (0 or 1). 
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_comile_ss FORCE_INLINE int _mm_comile_ss(__m128 a, __m128 b) { - // return vgetq_lane_u32(vcleq_f32(vreinterpretq_f32_m128(a), - // vreinterpretq_f32_m128(b)), 0); - uint32x4_t a_not_nan = - vceqq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(a)); - uint32x4_t b_not_nan = - vceqq_f32(vreinterpretq_f32_m128(b), vreinterpretq_f32_m128(b)); - uint32x4_t a_and_b_not_nan = vandq_u32(a_not_nan, b_not_nan); uint32x4_t a_le_b = vcleq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b)); - return vgetq_lane_u32(vandq_u32(a_and_b_not_nan, a_le_b), 0) & 0x1; + return vgetq_lane_u32(a_le_b, 0) & 0x1; } -// Compares the lower single-precision floating point scalar values of a and b -// using a less than operation. : -// https://msdn.microsoft.com/en-us/library/2kwe606b(v=vs.90).aspx Important -// note!! The documentation on MSDN is incorrect! If either of the values is a -// NAN the docs say you will get a one, but in fact, it will return a zero!! +// Compare the lower single-precision (32-bit) floating-point element in a and b +// for less-than, and return the boolean result (0 or 1). +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_comilt_ss FORCE_INLINE int _mm_comilt_ss(__m128 a, __m128 b) { - uint32x4_t a_not_nan = - vceqq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(a)); - uint32x4_t b_not_nan = - vceqq_f32(vreinterpretq_f32_m128(b), vreinterpretq_f32_m128(b)); - uint32x4_t a_and_b_not_nan = vandq_u32(a_not_nan, b_not_nan); uint32x4_t a_lt_b = vcltq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b)); - return vgetq_lane_u32(vandq_u32(a_and_b_not_nan, a_lt_b), 0) & 0x1; + return vgetq_lane_u32(a_lt_b, 0) & 0x1; } -// Compares the lower single-precision floating point scalar values of a and b -// using an inequality operation. : -// https://msdn.microsoft.com/en-us/library/bafh5e0a(v=vs.90).aspx +// Compare the lower single-precision (32-bit) floating-point element in a and b +// for not-equal, and return the boolean result (0 or 1). +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_comineq_ss FORCE_INLINE int _mm_comineq_ss(__m128 a, __m128 b) { - // return !vgetq_lane_u32(vceqq_f32(vreinterpretq_f32_m128(a), - // vreinterpretq_f32_m128(b)), 0); - uint32x4_t a_not_nan = - vceqq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(a)); - uint32x4_t b_not_nan = - vceqq_f32(vreinterpretq_f32_m128(b), vreinterpretq_f32_m128(b)); - uint32x4_t a_or_b_nan = vmvnq_u32(vandq_u32(a_not_nan, b_not_nan)); - uint32x4_t a_neq_b = vmvnq_u32( - vceqq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b))); - return vgetq_lane_u32(vorrq_u32(a_or_b_nan, a_neq_b), 0) & 0x1; + return !_mm_comieq_ss(a, b); } // Convert packed signed 32-bit integers in b to packed single-precision // (32-bit) floating-point elements, store the results in the lower 2 elements // of dst, and copy the upper 2 packed elements from a to the upper elements of // dst. 
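
A short sketch of `_mm_cvt_pi2ps`, which replaces only the low two float lanes with converted integers. The `__m64` operand is built here with `_mm_cvtps_pi32` purely so the example stays within intrinsics shown in this file; it assumes `sse2neon.h` is on the include path.

```c
#include <stdio.h>
#include "sse2neon.h"

int main(void)
{
    /* A __m64 holding {7, 9}, produced by converting the low two floats. */
    __m64 ints = _mm_cvtps_pi32(_mm_setr_ps(7.0f, 9.0f, 0.0f, 0.0f));

    /* Convert the integers into lanes 0-1 of dst; lanes 2-3 come from a. */
    __m128 a = _mm_setr_ps(0.0f, 0.0f, 50.0f, 60.0f);
    __m128 r = _mm_cvt_pi2ps(a, ints);

    float out[4];
    _mm_storeu_ps(out, r);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]); /* 7 9 50 60 */
    return 0;
}
```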
-// -// dst[31:0] := Convert_Int32_To_FP32(b[31:0]) -// dst[63:32] := Convert_Int32_To_FP32(b[63:32]) -// dst[95:64] := a[95:64] -// dst[127:96] := a[127:96] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvt_pi2ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvt_pi2ps FORCE_INLINE __m128 _mm_cvt_pi2ps(__m128 a, __m64 b) { return vreinterpretq_m128_f32( @@ -1350,16 +1528,11 @@ FORCE_INLINE __m128 _mm_cvt_pi2ps(__m128 a, __m64 b) // Convert packed single-precision (32-bit) floating-point elements in a to // packed 32-bit integers, and store the results in dst. -// -// FOR j := 0 to 1 -// i := 32*j -// dst[i+31:i] := Convert_FP32_To_Int32(a[i+31:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvt_ps2pi +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvt_ps2pi FORCE_INLINE __m64 _mm_cvt_ps2pi(__m128 a) { -#if defined(__aarch64__) +#if (defined(__aarch64__) || defined(_M_ARM64)) || \ + defined(__ARM_FEATURE_DIRECTED_ROUNDING) return vreinterpret_m64_s32( vget_low_s32(vcvtnq_s32_f32(vrndiq_f32(vreinterpretq_f32_m128(a))))); #else @@ -1371,11 +1544,7 @@ FORCE_INLINE __m64 _mm_cvt_ps2pi(__m128 a) // Convert the signed 32-bit integer b to a single-precision (32-bit) // floating-point element, store the result in the lower element of dst, and // copy the upper 3 packed elements from a to the upper elements of dst. -// -// dst[31:0] := Convert_Int32_To_FP32(b[31:0]) -// dst[127:32] := a[127:32] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvt_si2ss +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvt_si2ss FORCE_INLINE __m128 _mm_cvt_si2ss(__m128 a, int b) { return vreinterpretq_m128_f32( @@ -1384,10 +1553,11 @@ FORCE_INLINE __m128 _mm_cvt_si2ss(__m128 a, int b) // Convert the lower single-precision (32-bit) floating-point element in a to a // 32-bit integer, and store the result in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvt_ss2si +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvt_ss2si FORCE_INLINE int _mm_cvt_ss2si(__m128 a) { -#if defined(__aarch64__) +#if (defined(__aarch64__) || defined(_M_ARM64)) || \ + defined(__ARM_FEATURE_DIRECTED_ROUNDING) return vgetq_lane_s32(vcvtnq_s32_f32(vrndiq_f32(vreinterpretq_f32_m128(a))), 0); #else @@ -1399,14 +1569,7 @@ FORCE_INLINE int _mm_cvt_ss2si(__m128 a) // Convert packed 16-bit integers in a to packed single-precision (32-bit) // floating-point elements, and store the results in dst. -// -// FOR j := 0 to 3 -// i := j*16 -// m := j*32 -// dst[m+31:m] := Convert_Int16_To_FP32(a[i+15:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtpi16_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtpi16_ps FORCE_INLINE __m128 _mm_cvtpi16_ps(__m64 a) { return vreinterpretq_m128_f32( @@ -1416,13 +1579,7 @@ FORCE_INLINE __m128 _mm_cvtpi16_ps(__m64 a) // Convert packed 32-bit integers in b to packed single-precision (32-bit) // floating-point elements, store the results in the lower 2 elements of dst, // and copy the upper 2 packed elements from a to the upper elements of dst. 
-// -// dst[31:0] := Convert_Int32_To_FP32(b[31:0]) -// dst[63:32] := Convert_Int32_To_FP32(b[63:32]) -// dst[95:64] := a[95:64] -// dst[127:96] := a[127:96] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtpi32_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtpi32_ps FORCE_INLINE __m128 _mm_cvtpi32_ps(__m128 a, __m64 b) { return vreinterpretq_m128_f32( @@ -1432,16 +1589,10 @@ FORCE_INLINE __m128 _mm_cvtpi32_ps(__m128 a, __m64 b) // Convert packed signed 32-bit integers in a to packed single-precision // (32-bit) floating-point elements, store the results in the lower 2 elements -// of dst, then covert the packed signed 32-bit integers in b to +// of dst, then convert the packed signed 32-bit integers in b to // single-precision (32-bit) floating-point element, and store the results in // the upper 2 elements of dst. -// -// dst[31:0] := Convert_Int32_To_FP32(a[31:0]) -// dst[63:32] := Convert_Int32_To_FP32(a[63:32]) -// dst[95:64] := Convert_Int32_To_FP32(b[31:0]) -// dst[127:96] := Convert_Int32_To_FP32(b[63:32]) -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtpi32x2_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtpi32x2_ps FORCE_INLINE __m128 _mm_cvtpi32x2_ps(__m64 a, __m64 b) { return vreinterpretq_m128_f32(vcvtq_f32_s32( @@ -1450,14 +1601,7 @@ FORCE_INLINE __m128 _mm_cvtpi32x2_ps(__m64 a, __m64 b) // Convert the lower packed 8-bit integers in a to packed single-precision // (32-bit) floating-point elements, and store the results in dst. -// -// FOR j := 0 to 3 -// i := j*8 -// m := j*32 -// dst[m+31:m] := Convert_Int8_To_FP32(a[i+7:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtpi8_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtpi8_ps FORCE_INLINE __m128 _mm_cvtpi8_ps(__m64 a) { return vreinterpretq_m128_f32(vcvtq_f32_s32( @@ -1468,35 +1612,32 @@ FORCE_INLINE __m128 _mm_cvtpi8_ps(__m64 a) // packed 16-bit integers, and store the results in dst. Note: this intrinsic // will generate 0x7FFF, rather than 0x8000, for input values between 0x7FFF and // 0x7FFFFFFF. -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtps_pi16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtps_pi16 FORCE_INLINE __m64 _mm_cvtps_pi16(__m128 a) { return vreinterpret_m64_s16( - vmovn_s32(vreinterpretq_s32_m128i(_mm_cvtps_epi32(a)))); + vqmovn_s32(vreinterpretq_s32_m128i(_mm_cvtps_epi32(a)))); } // Convert packed single-precision (32-bit) floating-point elements in a to // packed 32-bit integers, and store the results in dst. -// -// FOR j := 0 to 1 -// i := 32*j -// dst[i+31:i] := Convert_FP32_To_Int32(a[i+31:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtps_pi32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtps_pi32 #define _mm_cvtps_pi32(a) _mm_cvt_ps2pi(a) +// Convert packed single-precision (32-bit) floating-point elements in a to +// packed 8-bit integers, and store the results in lower 4 elements of dst. +// Note: this intrinsic will generate 0x7F, rather than 0x80, for input values +// between 0x7F and 0x7FFFFFFF. 
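
The narrowing conversions above saturate rather than wrap, as the notes indicate (0x7FFF for `_mm_cvtps_pi16`, 0x7F for `_mm_cvtps_pi8`). A small sketch showing the 16-bit case; it assumes `sse2neon.h` is on the include path, and uses `_mm_extract_pi16` plus a cast because that intrinsic zero-extends the selected lane.

```c
#include <stdint.h>
#include <stdio.h>
#include "sse2neon.h"

int main(void)
{
    __m128 v = _mm_setr_ps(1.0f, 40000.0f, -40000.0f, 123.0f);

    /* vqmovn-based narrowing clamps out-of-range lanes to INT16_MAX/INT16_MIN. */
    __m64 packed = _mm_cvtps_pi16(v);

    printf("%d %d %d %d\n",
           (int16_t) _mm_extract_pi16(packed, 0),  /* 1 */
           (int16_t) _mm_extract_pi16(packed, 1),  /* 32767 (saturated) */
           (int16_t) _mm_extract_pi16(packed, 2),  /* -32768 (saturated) */
           (int16_t) _mm_extract_pi16(packed, 3)); /* 123 */
    return 0;
}
```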
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtps_pi8 +FORCE_INLINE __m64 _mm_cvtps_pi8(__m128 a) +{ + return vreinterpret_m64_s8(vqmovn_s16( + vcombine_s16(vreinterpret_s16_m64(_mm_cvtps_pi16(a)), vdup_n_s16(0)))); +} + // Convert packed unsigned 16-bit integers in a to packed single-precision // (32-bit) floating-point elements, and store the results in dst. -// -// FOR j := 0 to 3 -// i := j*16 -// m := j*32 -// dst[m+31:m] := Convert_UInt16_To_FP32(a[i+15:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtpu16_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtpu16_ps FORCE_INLINE __m128 _mm_cvtpu16_ps(__m64 a) { return vreinterpretq_m128_f32( @@ -1506,14 +1647,7 @@ FORCE_INLINE __m128 _mm_cvtpu16_ps(__m64 a) // Convert the lower packed unsigned 8-bit integers in a to packed // single-precision (32-bit) floating-point elements, and store the results in // dst. -// -// FOR j := 0 to 3 -// i := j*8 -// m := j*32 -// dst[m+31:m] := Convert_UInt8_To_FP32(a[i+7:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtpu8_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtpu8_ps FORCE_INLINE __m128 _mm_cvtpu8_ps(__m64 a) { return vreinterpretq_m128_f32(vcvtq_f32_u32( @@ -1523,21 +1657,13 @@ FORCE_INLINE __m128 _mm_cvtpu8_ps(__m64 a) // Convert the signed 32-bit integer b to a single-precision (32-bit) // floating-point element, store the result in the lower element of dst, and // copy the upper 3 packed elements from a to the upper elements of dst. -// -// dst[31:0] := Convert_Int32_To_FP32(b[31:0]) -// dst[127:32] := a[127:32] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtsi32_ss +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtsi32_ss #define _mm_cvtsi32_ss(a, b) _mm_cvt_si2ss(a, b) // Convert the signed 64-bit integer b to a single-precision (32-bit) // floating-point element, store the result in the lower element of dst, and // copy the upper 3 packed elements from a to the upper elements of dst. -// -// dst[31:0] := Convert_Int64_To_FP32(b[63:0]) -// dst[127:32] := a[127:32] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtsi64_ss +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtsi64_ss FORCE_INLINE __m128 _mm_cvtsi64_ss(__m128 a, int64_t b) { return vreinterpretq_m128_f32( @@ -1545,10 +1671,7 @@ FORCE_INLINE __m128 _mm_cvtsi64_ss(__m128 a, int64_t b) } // Copy the lower single-precision (32-bit) floating-point element of a to dst. -// -// dst[31:0] := a[31:0] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtss_f32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtss_f32 FORCE_INLINE float _mm_cvtss_f32(__m128 a) { return vgetq_lane_f32(vreinterpretq_f32_m128(a), 0); @@ -1556,21 +1679,16 @@ FORCE_INLINE float _mm_cvtss_f32(__m128 a) // Convert the lower single-precision (32-bit) floating-point element in a to a // 32-bit integer, and store the result in dst. 
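
For the scalar insert/extract pair defined above, a minimal round-trip sketch (assuming `sse2neon.h` is on the include path):

```c
#include <stdio.h>
#include "sse2neon.h"

int main(void)
{
    __m128 v = _mm_set_ps1(2.5f); /* {2.5, 2.5, 2.5, 2.5} */

    /* Overwrite lane 0 with the converted integer; lanes 1-3 are carried over. */
    v = _mm_cvtsi32_ss(v, 7);

    printf("lane0 = %g\n", _mm_cvtss_f32(v)); /* 7 */
    return 0;
}
```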
-// -// dst[31:0] := Convert_FP32_To_Int32(a[31:0]) -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtss_si32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtss_si32 #define _mm_cvtss_si32(a) _mm_cvt_ss2si(a) // Convert the lower single-precision (32-bit) floating-point element in a to a // 64-bit integer, and store the result in dst. -// -// dst[63:0] := Convert_FP32_To_Int64(a[31:0]) -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtss_si64 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtss_si64 FORCE_INLINE int64_t _mm_cvtss_si64(__m128 a) { -#if defined(__aarch64__) +#if (defined(__aarch64__) || defined(_M_ARM64)) || \ + defined(__ARM_FEATURE_DIRECTED_ROUNDING) return (int64_t) vgetq_lane_f32(vrndiq_f32(vreinterpretq_f32_m128(a)), 0); #else float32_t data = vgetq_lane_f32( @@ -1581,13 +1699,7 @@ FORCE_INLINE int64_t _mm_cvtss_si64(__m128 a) // Convert packed single-precision (32-bit) floating-point elements in a to // packed 32-bit integers with truncation, and store the results in dst. -// -// FOR j := 0 to 1 -// i := 32*j -// dst[i+31:i] := Convert_FP32_To_Int32_Truncate(a[i+31:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtt_ps2pi +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtt_ps2pi FORCE_INLINE __m64 _mm_cvtt_ps2pi(__m128 a) { return vreinterpret_m64_s32( @@ -1596,10 +1708,7 @@ FORCE_INLINE __m64 _mm_cvtt_ps2pi(__m128 a) // Convert the lower single-precision (32-bit) floating-point element in a to a // 32-bit integer with truncation, and store the result in dst. -// -// dst[31:0] := Convert_FP32_To_Int32_Truncate(a[31:0]) -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtt_ss2si +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtt_ss2si FORCE_INLINE int _mm_cvtt_ss2si(__m128 a) { return vgetq_lane_s32(vcvtq_s32_f32(vreinterpretq_f32_m128(a)), 0); @@ -1607,60 +1716,49 @@ FORCE_INLINE int _mm_cvtt_ss2si(__m128 a) // Convert packed single-precision (32-bit) floating-point elements in a to // packed 32-bit integers with truncation, and store the results in dst. -// -// FOR j := 0 to 1 -// i := 32*j -// dst[i+31:i] := Convert_FP32_To_Int32_Truncate(a[i+31:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvttps_pi32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvttps_pi32 #define _mm_cvttps_pi32(a) _mm_cvtt_ps2pi(a) // Convert the lower single-precision (32-bit) floating-point element in a to a // 32-bit integer with truncation, and store the result in dst. -// -// dst[31:0] := Convert_FP32_To_Int32_Truncate(a[31:0]) -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvttss_si32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvttss_si32 #define _mm_cvttss_si32(a) _mm_cvtt_ss2si(a) // Convert the lower single-precision (32-bit) floating-point element in a to a // 64-bit integer with truncation, and store the result in dst. 
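
The `_mm_cvtt*` variants always truncate toward zero, whereas the `_mm_cvt*` forms honour the current rounding mode (round-to-nearest by default). A tiny sketch of the difference, assuming `sse2neon.h` is on the include path:

```c
#include <stdio.h>
#include "sse2neon.h"

int main(void)
{
    __m128 x = _mm_set_ss(1.7f);

    int rounded = _mm_cvtss_si32(x);    /* 2 under default round-to-nearest */
    int truncated = _mm_cvttss_si32(x); /* 1: the extra 't' truncates toward zero */

    printf("%d %d\n", rounded, truncated);
    return 0;
}
```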
-// -// dst[63:0] := Convert_FP32_To_Int64_Truncate(a[31:0]) -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvttss_si64 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvttss_si64 FORCE_INLINE int64_t _mm_cvttss_si64(__m128 a) { return (int64_t) vgetq_lane_f32(vreinterpretq_f32_m128(a), 0); } -// Divides the four single-precision, floating-point values of a and b. -// -// r0 := a0 / b0 -// r1 := a1 / b1 -// r2 := a2 / b2 -// r3 := a3 / b3 -// -// https://msdn.microsoft.com/en-us/library/edaw8147(v=vs.100).aspx +// Divide packed single-precision (32-bit) floating-point elements in a by +// packed elements in b, and store the results in dst. +// Due to ARMv7-A NEON's lack of a precise division intrinsic, we implement +// division by multiplying a by b's reciprocal before using the Newton-Raphson +// method to approximate the results. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_div_ps FORCE_INLINE __m128 _mm_div_ps(__m128 a, __m128 b) { -#if defined(__aarch64__) && !SSE2NEON_PRECISE_DIV +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128_f32( vdivq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b))); #else float32x4_t recip = vrecpeq_f32(vreinterpretq_f32_m128(b)); recip = vmulq_f32(recip, vrecpsq_f32(recip, vreinterpretq_f32_m128(b))); -#if SSE2NEON_PRECISE_DIV // Additional Netwon-Raphson iteration for accuracy recip = vmulq_f32(recip, vrecpsq_f32(recip, vreinterpretq_f32_m128(b))); -#endif return vreinterpretq_m128_f32(vmulq_f32(vreinterpretq_f32_m128(a), recip)); #endif } -// Divides the scalar single-precision floating point value of a by b. -// https://msdn.microsoft.com/en-us/library/4y73xa49(v=vs.100).aspx +// Divide the lower single-precision (32-bit) floating-point element in a by the +// lower single-precision (32-bit) floating-point element in b, store the result +// in the lower element of dst, and copy the upper 3 packed elements from a to +// the upper elements of dst. +// Warning: ARMv7-A does not produce the same result compared to Intel and not +// IEEE-compliant. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_div_ss FORCE_INLINE __m128 _mm_div_ss(__m128 a, __m128 b) { float32_t value = @@ -1671,36 +1769,82 @@ FORCE_INLINE __m128 _mm_div_ss(__m128 a, __m128 b) // Extract a 16-bit integer from a, selected with imm8, and store the result in // the lower element of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_extract_pi16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_extract_pi16 #define _mm_extract_pi16(a, imm) \ (int32_t) vget_lane_u16(vreinterpret_u16_m64(a), (imm)) // Free aligned memory that was allocated with _mm_malloc. 
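
The ARMv7-A division fallback described above relies on NEON's reciprocal estimate plus Newton-Raphson refinement. A standalone sketch of that idea in plain NEON (not the exact header code; it only builds on an ARM target with NEON):

```c
#include <arm_neon.h>
#include <stdio.h>

/* a / b approximated as a * (1/b), with two Newton-Raphson refinement steps. */
static float32x4_t div_approx_f32x4(float32x4_t a, float32x4_t b)
{
    float32x4_t recip = vrecpeq_f32(b);              /* coarse 1/b estimate */
    recip = vmulq_f32(recip, vrecpsq_f32(recip, b)); /* refinement step 1 */
    recip = vmulq_f32(recip, vrecpsq_f32(recip, b)); /* refinement step 2 */
    return vmulq_f32(a, recip);
}

int main(void)
{
    float32x4_t a = vdupq_n_f32(1.0f), b = vdupq_n_f32(3.0f);
    printf("%f\n", vgetq_lane_f32(div_approx_f32x4(a, b), 0)); /* ~0.333333 */
    return 0;
}
```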
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_free +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_free +#if !defined(SSE2NEON_ALLOC_DEFINED) FORCE_INLINE void _mm_free(void *addr) { free(addr); } +#endif + +FORCE_INLINE uint64_t _sse2neon_get_fpcr(void) +{ + uint64_t value; +#if defined(_MSC_VER) + value = _ReadStatusReg(ARM64_FPCR); +#else + __asm__ __volatile__("mrs %0, FPCR" : "=r"(value)); /* read */ +#endif + return value; +} + +FORCE_INLINE void _sse2neon_set_fpcr(uint64_t value) +{ +#if defined(_MSC_VER) + _WriteStatusReg(ARM64_FPCR, value); +#else + __asm__ __volatile__("msr FPCR, %0" ::"r"(value)); /* write */ +#endif +} + +// Macro: Get the flush zero bits from the MXCSR control and status register. +// The flush zero may contain any of the following flags: _MM_FLUSH_ZERO_ON or +// _MM_FLUSH_ZERO_OFF +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_MM_GET_FLUSH_ZERO_MODE +FORCE_INLINE unsigned int _sse2neon_mm_get_flush_zero_mode(void) +{ + union { + fpcr_bitfield field; +#if defined(__aarch64__) || defined(_M_ARM64) + uint64_t value; +#else + uint32_t value; +#endif + } r; + +#if defined(__aarch64__) || defined(_M_ARM64) + r.value = _sse2neon_get_fpcr(); +#else + __asm__ __volatile__("vmrs %0, FPSCR" : "=r"(r.value)); /* read */ +#endif + + return r.field.bit24 ? _MM_FLUSH_ZERO_ON : _MM_FLUSH_ZERO_OFF; +} // Macro: Get the rounding mode bits from the MXCSR control and status register. // The rounding mode may contain any of the following flags: _MM_ROUND_NEAREST, // _MM_ROUND_DOWN, _MM_ROUND_UP, _MM_ROUND_TOWARD_ZERO -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_MM_GET_ROUNDING_MODE -FORCE_INLINE unsigned int _MM_GET_ROUNDING_MODE() +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_MM_GET_ROUNDING_MODE +FORCE_INLINE unsigned int _MM_GET_ROUNDING_MODE(void) { union { fpcr_bitfield field; -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) uint64_t value; #else uint32_t value; #endif } r; -#if defined(__aarch64__) - asm volatile("mrs %0, FPCR" : "=r"(r.value)); /* read */ +#if defined(__aarch64__) || defined(_M_ARM64) + r.value = _sse2neon_get_fpcr(); #else - asm volatile("vmrs %0, FPSCR" : "=r"(r.value)); /* read */ + __asm__ __volatile__("vmrs %0, FPSCR" : "=r"(r.value)); /* read */ #endif if (r.field.bit22) { @@ -1712,15 +1856,14 @@ FORCE_INLINE unsigned int _MM_GET_ROUNDING_MODE() // Copy a to dst, and insert the 16-bit integer i into dst at the location // specified by imm8. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_insert_pi16 -#define _mm_insert_pi16(a, b, imm) \ - __extension__({ \ - vreinterpret_m64_s16( \ - vset_lane_s16((b), vreinterpret_s16_m64(a), (imm))); \ - }) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_insert_pi16 +#define _mm_insert_pi16(a, b, imm) \ + vreinterpret_m64_s16(vset_lane_s16((b), vreinterpret_s16_m64(a), (imm))) -// Loads four single-precision, floating-point values. -// https://msdn.microsoft.com/en-us/library/vstudio/zzd50xxt(v=vs.100).aspx +// Load 128-bits (composed of 4 packed single-precision (32-bit) floating-point +// elements) from memory into dst. mem_addr must be aligned on a 16-byte +// boundary or a general-protection exception may be generated. 
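
The control-register accessors above are normally used in a save/modify/restore pattern. A short sketch, assuming `sse2neon.h` is on the include path and that `_MM_SET_ROUNDING_MODE` is provided alongside the getter shown here:

```c
#include <stdio.h>
#include "sse2neon.h"

int main(void)
{
    /* Save, change and restore the rounding mode; _mm_cvtss_si32 honours it. */
    unsigned int saved = _MM_GET_ROUNDING_MODE();
    _MM_SET_ROUNDING_MODE(_MM_ROUND_TOWARD_ZERO);
    printf("%d\n", _mm_cvtss_si32(_mm_set_ss(2.9f))); /* 2 while rounding toward zero */
    _MM_SET_ROUNDING_MODE(saved);

    /* The flush-to-zero bit is handled the same way. */
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    printf("FTZ on? %d\n", _MM_GET_FLUSH_ZERO_MODE() == _MM_FLUSH_ZERO_ON);
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_OFF);
    return 0;
}
```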
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_load_ps FORCE_INLINE __m128 _mm_load_ps(const float *p) { return vreinterpretq_m128_f32(vld1q_f32(p)); @@ -1734,52 +1877,40 @@ FORCE_INLINE __m128 _mm_load_ps(const float *p) // dst[95:64] := MEM[mem_addr+31:mem_addr] // dst[127:96] := MEM[mem_addr+31:mem_addr] // -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_load_ps1 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_load_ps1 #define _mm_load_ps1 _mm_load1_ps -// Loads an single - precision, floating - point value into the low word and -// clears the upper three words. -// https://msdn.microsoft.com/en-us/library/548bb9h4%28v=vs.90%29.aspx +// Load a single-precision (32-bit) floating-point element from memory into the +// lower of dst, and zero the upper 3 elements. mem_addr does not need to be +// aligned on any particular boundary. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_load_ss FORCE_INLINE __m128 _mm_load_ss(const float *p) { return vreinterpretq_m128_f32(vsetq_lane_f32(*p, vdupq_n_f32(0), 0)); } -// Loads a single single-precision, floating-point value, copying it into all -// four words -// https://msdn.microsoft.com/en-us/library/vstudio/5cdkf716(v=vs.100).aspx +// Load a single-precision (32-bit) floating-point element from memory into all +// elements of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_load1_ps FORCE_INLINE __m128 _mm_load1_ps(const float *p) { return vreinterpretq_m128_f32(vld1q_dup_f32(p)); } -// Sets the upper two single-precision, floating-point values with 64 -// bits of data loaded from the address p; the lower two values are passed -// through from a. -// -// r0 := a0 -// r1 := a1 -// r2 := *p0 -// r3 := *p1 -// -// https://msdn.microsoft.com/en-us/library/w92wta0x(v%3dvs.100).aspx +// Load 2 single-precision (32-bit) floating-point elements from memory into the +// upper 2 elements of dst, and copy the lower 2 elements from a to dst. +// mem_addr does not need to be aligned on any particular boundary. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_loadh_pi FORCE_INLINE __m128 _mm_loadh_pi(__m128 a, __m64 const *p) { return vreinterpretq_m128_f32( vcombine_f32(vget_low_f32(a), vld1_f32((const float32_t *) p))); } -// Sets the lower two single-precision, floating-point values with 64 -// bits of data loaded from the address p; the upper two values are passed -// through from a. -// -// Return Value -// r0 := *p0 -// r1 := *p1 -// r2 := a2 -// r3 := a3 -// -// https://msdn.microsoft.com/en-us/library/s57cyak2(v=vs.100).aspx +// Load 2 single-precision (32-bit) floating-point elements from memory into the +// lower 2 elements of dst, and copy the upper 2 elements from a to dst. +// mem_addr does not need to be aligned on any particular boundary. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_loadl_pi FORCE_INLINE __m128 _mm_loadl_pi(__m128 a, __m64 const *p) { return vreinterpretq_m128_f32( @@ -1789,21 +1920,17 @@ FORCE_INLINE __m128 _mm_loadl_pi(__m128 a, __m64 const *p) // Load 4 single-precision (32-bit) floating-point elements from memory into dst // in reverse order. mem_addr must be aligned on a 16-byte boundary or a // general-protection exception may be generated. 
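To make the lane behaviour of the load variants above concrete, a throwaway sketch (the helper name and values are arbitrary):

```c
#include "sse2neon.h"

static void load_variants_demo(void)
{
    float buf[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    __m128 lo = _mm_load_ss(buf);                            /* {1, 0, 0, 0} */
    __m128 all = _mm_load1_ps(buf);                          /* {1, 1, 1, 1} */
    __m128 hi = _mm_loadh_pi(lo, (const __m64 *) (buf + 2)); /* {1, 0, 3, 4} */
    (void) all;
    (void) hi;
}
```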
-// -// dst[31:0] := MEM[mem_addr+127:mem_addr+96] -// dst[63:32] := MEM[mem_addr+95:mem_addr+64] -// dst[95:64] := MEM[mem_addr+63:mem_addr+32] -// dst[127:96] := MEM[mem_addr+31:mem_addr] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_loadr_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_loadr_ps FORCE_INLINE __m128 _mm_loadr_ps(const float *p) { float32x4_t v = vrev64q_f32(vld1q_f32(p)); return vreinterpretq_m128_f32(vextq_f32(v, v, 2)); } -// Loads four single-precision, floating-point values. -// https://msdn.microsoft.com/en-us/library/x1b16s7z%28v=vs.90%29.aspx +// Load 128-bits (composed of 4 packed single-precision (32-bit) floating-point +// elements) from memory into dst. mem_addr does not need to be aligned on any +// particular boundary. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_loadu_ps FORCE_INLINE __m128 _mm_loadu_ps(const float *p) { // for neon, alignment doesn't matter, so _mm_load_ps and _mm_loadu_ps are @@ -1812,32 +1939,26 @@ FORCE_INLINE __m128 _mm_loadu_ps(const float *p) } // Load unaligned 16-bit integer from memory into the first element of dst. -// -// dst[15:0] := MEM[mem_addr+15:mem_addr] -// dst[MAX:16] := 0 -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_loadu_si16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_loadu_si16 FORCE_INLINE __m128i _mm_loadu_si16(const void *p) { return vreinterpretq_m128i_s16( - vsetq_lane_s16(*(const int16_t *) p, vdupq_n_s16(0), 0)); + vsetq_lane_s16(*(const unaligned_int16_t *) p, vdupq_n_s16(0), 0)); } // Load unaligned 64-bit integer from memory into the first element of dst. -// -// dst[63:0] := MEM[mem_addr+63:mem_addr] -// dst[MAX:64] := 0 -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_loadu_si64 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_loadu_si64 FORCE_INLINE __m128i _mm_loadu_si64(const void *p) { return vreinterpretq_m128i_s64( - vcombine_s64(vld1_s64((const int64_t *) p), vdup_n_s64(0))); + vsetq_lane_s64(*(const unaligned_int64_t *) p, vdupq_n_s64(0), 0)); } -// Allocate aligned blocks of memory. -// https://software.intel.com/en-us/ -// cpp-compiler-developer-guide-and-reference-allocating-and-freeing-aligned-memory-blocks +// Allocate size bytes of memory, aligned to the alignment specified in align, +// and return a pointer to the allocated memory. _mm_free should be used to free +// memory that is allocated with _mm_malloc. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_malloc +#if !defined(SSE2NEON_ALLOC_DEFINED) FORCE_INLINE void *_mm_malloc(size_t size, size_t align) { void *ptr; @@ -1849,11 +1970,12 @@ FORCE_INLINE void *_mm_malloc(size_t size, size_t align) return ptr; return NULL; } +#endif // Conditionally store 8-bit integer elements from a into memory using mask // (elements are not stored when the highest bit is not set in the corresponding // element) and a non-temporal memory hint. 
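A minimal sketch pairing `_mm_malloc` with the `_mm_free` defined earlier, so that the 16-byte alignment expected by the aligned load/store intrinsics holds (helper name and sizes are arbitrary; `_mm_store_ps` and `_mm_set1_ps` are defined further down):

```c
#include "sse2neon.h"

static void aligned_buffer_demo(void)
{
    float *buf = (float *) _mm_malloc(16 * sizeof(float), 16);
    if (!buf)
        return;
    _mm_store_ps(buf, _mm_set1_ps(0.0f)); /* 16-byte aligned store is safe */
    _mm_free(buf);
}
```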
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_maskmove_si64 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_maskmove_si64 FORCE_INLINE void _mm_maskmove_si64(__m64 a, __m64 mask, char *mem_addr) { int8x8_t shr_mask = vshr_n_s8(vreinterpret_s8_m64(mask), 7); @@ -1867,33 +1989,29 @@ FORCE_INLINE void _mm_maskmove_si64(__m64 a, __m64 mask, char *mem_addr) // Conditionally store 8-bit integer elements from a into memory using mask // (elements are not stored when the highest bit is not set in the corresponding // element) and a non-temporal memory hint. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_m_maskmovq +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_m_maskmovq #define _m_maskmovq(a, mask, mem_addr) _mm_maskmove_si64(a, mask, mem_addr) // Compare packed signed 16-bit integers in a and b, and store packed maximum // values in dst. -// -// FOR j := 0 to 3 -// i := j*16 -// dst[i+15:i] := MAX(a[i+15:i], b[i+15:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_max_pi16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_max_pi16 FORCE_INLINE __m64 _mm_max_pi16(__m64 a, __m64 b) { return vreinterpret_m64_s16( vmax_s16(vreinterpret_s16_m64(a), vreinterpret_s16_m64(b))); } -// Computes the maximums of the four single-precision, floating-point values of -// a and b. -// https://msdn.microsoft.com/en-us/library/vstudio/ff5d607a(v=vs.100).aspx +// Compare packed single-precision (32-bit) floating-point elements in a and b, +// and store packed maximum values in dst. dst does not follow the IEEE Standard +// for Floating-Point Arithmetic (IEEE 754) maximum value when inputs are NaN or +// signed-zero values. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_max_ps FORCE_INLINE __m128 _mm_max_ps(__m128 a, __m128 b) { #if SSE2NEON_PRECISE_MINMAX float32x4_t _a = vreinterpretq_f32_m128(a); float32x4_t _b = vreinterpretq_f32_m128(b); - return vbslq_f32(vcltq_f32(_b, _a), _a, _b); + return vreinterpretq_m128_f32(vbslq_f32(vcgtq_f32(_a, _b), _a, _b)); #else return vreinterpretq_m128_f32( vmaxq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b))); @@ -1902,22 +2020,19 @@ FORCE_INLINE __m128 _mm_max_ps(__m128 a, __m128 b) // Compare packed unsigned 8-bit integers in a and b, and store packed maximum // values in dst. -// -// FOR j := 0 to 7 -// i := j*8 -// dst[i+7:i] := MAX(a[i+7:i], b[i+7:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_max_pu8 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_max_pu8 FORCE_INLINE __m64 _mm_max_pu8(__m64 a, __m64 b) { return vreinterpret_m64_u8( vmax_u8(vreinterpret_u8_m64(a), vreinterpret_u8_m64(b))); } -// Computes the maximum of the two lower scalar single-precision floating point -// values of a and b. -// https://msdn.microsoft.com/en-us/library/s6db5esz(v=vs.100).aspx +// Compare the lower single-precision (32-bit) floating-point elements in a and +// b, store the maximum value in the lower element of dst, and copy the upper 3 +// packed elements from a to the upper element of dst. dst does not follow the +// IEEE Standard for Floating-Point Arithmetic (IEEE 754) maximum value when +// inputs are NaN or signed-zero values. 
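The `SSE2NEON_PRECISE_MINMAX` paths above reduce to a simple per-lane rule; a scalar model of it, shown here only as a sketch:

```c
/* Per-lane model of the precise _mm_max_ps/_mm_min_ps paths: the second
 * operand is returned whenever the comparison is false, which covers both
 * NaN inputs and +/-0.0 ties, matching x86 MAXPS/MINPS ordering. */
static inline float sse_max(float a, float b)
{
    return a > b ? a : b;
}

static inline float sse_min(float a, float b)
{
    return a < b ? a : b;
}
```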
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_max_ss FORCE_INLINE __m128 _mm_max_ss(__m128 a, __m128 b) { float32_t value = vgetq_lane_f32(_mm_max_ps(a, b), 0); @@ -1927,28 +2042,24 @@ FORCE_INLINE __m128 _mm_max_ss(__m128 a, __m128 b) // Compare packed signed 16-bit integers in a and b, and store packed minimum // values in dst. -// -// FOR j := 0 to 3 -// i := j*16 -// dst[i+15:i] := MIN(a[i+15:i], b[i+15:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_min_pi16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_min_pi16 FORCE_INLINE __m64 _mm_min_pi16(__m64 a, __m64 b) { return vreinterpret_m64_s16( vmin_s16(vreinterpret_s16_m64(a), vreinterpret_s16_m64(b))); } -// Computes the minima of the four single-precision, floating-point values of a -// and b. -// https://msdn.microsoft.com/en-us/library/vstudio/wh13kadz(v=vs.100).aspx +// Compare packed single-precision (32-bit) floating-point elements in a and b, +// and store packed minimum values in dst. dst does not follow the IEEE Standard +// for Floating-Point Arithmetic (IEEE 754) minimum value when inputs are NaN or +// signed-zero values. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_min_ps FORCE_INLINE __m128 _mm_min_ps(__m128 a, __m128 b) { #if SSE2NEON_PRECISE_MINMAX float32x4_t _a = vreinterpretq_f32_m128(a); float32x4_t _b = vreinterpretq_f32_m128(b); - return vbslq_f32(vcltq_f32(_a, _b), _a, _b); + return vreinterpretq_m128_f32(vbslq_f32(vcltq_f32(_a, _b), _a, _b)); #else return vreinterpretq_m128_f32( vminq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b))); @@ -1957,22 +2068,19 @@ FORCE_INLINE __m128 _mm_min_ps(__m128 a, __m128 b) // Compare packed unsigned 8-bit integers in a and b, and store packed minimum // values in dst. -// -// FOR j := 0 to 7 -// i := j*8 -// dst[i+7:i] := MIN(a[i+7:i], b[i+7:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_min_pu8 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_min_pu8 FORCE_INLINE __m64 _mm_min_pu8(__m64 a, __m64 b) { return vreinterpret_m64_u8( vmin_u8(vreinterpret_u8_m64(a), vreinterpret_u8_m64(b))); } -// Computes the minimum of the two lower scalar single-precision floating point -// values of a and b. -// https://msdn.microsoft.com/en-us/library/0a9y7xaa(v=vs.100).aspx +// Compare the lower single-precision (32-bit) floating-point elements in a and +// b, store the minimum value in the lower element of dst, and copy the upper 3 +// packed elements from a to the upper element of dst. dst does not follow the +// IEEE Standard for Floating-Point Arithmetic (IEEE 754) minimum value when +// inputs are NaN or signed-zero values. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_min_ss FORCE_INLINE __m128 _mm_min_ss(__m128 a, __m128 b) { float32_t value = vgetq_lane_f32(_mm_min_ps(a, b), 0); @@ -1980,8 +2088,10 @@ FORCE_INLINE __m128 _mm_min_ss(__m128 a, __m128 b) vsetq_lane_f32(value, vreinterpretq_f32_m128(a), 0)); } -// Sets the low word to the single-precision, floating-point value of b -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/35hdzazd(v=vs.100) +// Move the lower single-precision (32-bit) floating-point element from b to the +// lower element of dst, and copy the upper 3 packed elements from a to the +// upper elements of dst. 
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_move_ss
 FORCE_INLINE __m128 _mm_move_ss(__m128 a, __m128 b)
 {
     return vreinterpretq_m128_f32(
@@ -1989,25 +2099,26 @@ FORCE_INLINE __m128 _mm_move_ss(__m128 a, __m128 b)
         vreinterpretq_f32_m128(a), 0));
 }

-// Moves the upper two values of B into the lower two values of A.
-//
-// r3 := a3
-// r2 := a2
-// r1 := b3
-// r0 := b2
-FORCE_INLINE __m128 _mm_movehl_ps(__m128 __A, __m128 __B)
-{
-    float32x2_t a32 = vget_high_f32(vreinterpretq_f32_m128(__A));
-    float32x2_t b32 = vget_high_f32(vreinterpretq_f32_m128(__B));
+// Move the upper 2 single-precision (32-bit) floating-point elements from b to
+// the lower 2 elements of dst, and copy the upper 2 elements from a to the
+// upper 2 elements of dst.
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_movehl_ps
+FORCE_INLINE __m128 _mm_movehl_ps(__m128 a, __m128 b)
+{
+#if defined(__aarch64__) || defined(_M_ARM64)
+    return vreinterpretq_m128_u64(
+        vzip2q_u64(vreinterpretq_u64_m128(b), vreinterpretq_u64_m128(a)));
+#else
+    float32x2_t a32 = vget_high_f32(vreinterpretq_f32_m128(a));
+    float32x2_t b32 = vget_high_f32(vreinterpretq_f32_m128(b));
     return vreinterpretq_m128_f32(vcombine_f32(b32, a32));
+#endif
 }

-// Moves the lower two values of B into the upper two values of A.
-//
-// r3 := b1
-// r2 := b0
-// r1 := a1
-// r0 := a0
+// Move the lower 2 single-precision (32-bit) floating-point elements from b to
+// the upper 2 elements of dst, and copy the lower 2 elements from a to the
+// lower 2 elements of dst.
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_movelh_ps
 FORCE_INLINE __m128 _mm_movelh_ps(__m128 __A, __m128 __B)
 {
     float32x2_t a10 = vget_low_f32(vreinterpretq_f32_m128(__A));
@@ -2017,14 +2128,14 @@ FORCE_INLINE __m128 _mm_movelh_ps(__m128 __A, __m128 __B)

 // Create mask from the most significant bit of each 8-bit element in a, and
 // store the result in dst.
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_movemask_pi8
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_movemask_pi8
 FORCE_INLINE int _mm_movemask_pi8(__m64 a)
 {
     uint8x8_t input = vreinterpret_u8_m64(a);
-#if defined(__aarch64__)
-    static const int8x8_t shift = {0, 1, 2, 3, 4, 5, 6, 7};
+#if defined(__aarch64__) || defined(_M_ARM64)
+    static const int8_t shift[8] = {0, 1, 2, 3, 4, 5, 6, 7};
     uint8x8_t tmp = vshr_n_u8(input, 7);
-    return vaddv_u8(vshl_u8(tmp, shift));
+    return vaddv_u8(vshl_u8(tmp, vld1_s8(shift)));
 #else
     // Refer the implementation of `_mm_movemask_epi8`
     uint16x4_t high_bits = vreinterpret_u16_u8(vshr_n_u8(input, 7));
@@ -2036,17 +2147,16 @@ FORCE_INLINE int _mm_movemask_pi8(__m64 a)
 #endif
 }

-// NEON does not provide this method
-// Creates a 4-bit mask from the most significant bits of the four
-// single-precision, floating-point values.
-// https://msdn.microsoft.com/en-us/library/vstudio/4490ys29(v=vs.100).aspx
+// Set each bit of mask dst based on the most significant bit of the
+// corresponding packed single-precision (32-bit) floating-point element in a.
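A concrete lane picture for `_mm_movehl_ps` and `_mm_movelh_ps` above, as a throwaway sketch using `_mm_setr_ps` (defined later in this header):

```c
#include "sse2neon.h"

static void move_halves_demo(void)
{
    __m128 a = _mm_setr_ps(0.0f, 1.0f, 2.0f, 3.0f); /* lanes {0, 1, 2, 3} */
    __m128 b = _mm_setr_ps(4.0f, 5.0f, 6.0f, 7.0f); /* lanes {4, 5, 6, 7} */
    __m128 hl = _mm_movehl_ps(a, b);                /* {6, 7, 2, 3} */
    __m128 lh = _mm_movelh_ps(a, b);                /* {0, 1, 4, 5} */
    (void) hl;
    (void) lh;
}
```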
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_movemask_ps FORCE_INLINE int _mm_movemask_ps(__m128 a) { uint32x4_t input = vreinterpretq_u32_m128(a); -#if defined(__aarch64__) - static const int32x4_t shift = {0, 1, 2, 3}; +#if defined(__aarch64__) || defined(_M_ARM64) + static const int32_t shift[4] = {0, 1, 2, 3}; uint32x4_t tmp = vshrq_n_u32(input, 31); - return vaddvq_u32(vshlq_u32(tmp, shift)); + return vaddvq_u32(vshlq_u32(tmp, vld1q_s32(shift))); #else // Uses the exact same method as _mm_movemask_epi8, see that for details. // Shift out everything but the sign bits with a 32-bit unsigned shift @@ -2060,15 +2170,10 @@ FORCE_INLINE int _mm_movemask_ps(__m128 a) #endif } -// Multiplies the four single-precision, floating-point values of a and b. -// -// r0 := a0 * b0 -// r1 := a1 * b1 -// r2 := a2 * b2 -// r3 := a3 * b3 -// -// https://msdn.microsoft.com/en-us/library/vstudio/22kbk6t9(v=vs.100).aspx -FORCE_INLINE __m128 _mm_mul_ps(__m128 a, __m128 b) +// Multiply packed single-precision (32-bit) floating-point elements in a and b, +// and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_mul_ps +FORCE_INLINE_OPTNONE __m128 _mm_mul_ps(__m128 a, __m128 b) { return vreinterpretq_m128_f32( vmulq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b))); @@ -2077,11 +2182,7 @@ FORCE_INLINE __m128 _mm_mul_ps(__m128 a, __m128 b) // Multiply the lower single-precision (32-bit) floating-point element in a and // b, store the result in the lower element of dst, and copy the upper 3 packed // elements from a to the upper elements of dst. -// -// dst[31:0] := a[31:0] * b[31:0] -// dst[127:32] := a[127:32] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_mul_ss +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_mul_ss FORCE_INLINE __m128 _mm_mul_ss(__m128 a, __m128 b) { return _mm_move_ss(a, _mm_mul_ps(a, b)); @@ -2090,16 +2191,16 @@ FORCE_INLINE __m128 _mm_mul_ss(__m128 a, __m128 b) // Multiply the packed unsigned 16-bit integers in a and b, producing // intermediate 32-bit integers, and store the high 16 bits of the intermediate // integers in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_mulhi_pu16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_mulhi_pu16 FORCE_INLINE __m64 _mm_mulhi_pu16(__m64 a, __m64 b) { return vreinterpret_m64_u16(vshrn_n_u32( vmull_u16(vreinterpret_u16_m64(a), vreinterpret_u16_m64(b)), 16)); } -// Computes the bitwise OR of the four single-precision, floating-point values -// of a and b. -// https://msdn.microsoft.com/en-us/library/vstudio/7ctdsyy0(v=vs.100).aspx +// Compute the bitwise OR of packed single-precision (32-bit) floating-point +// elements in a and b, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_or_ps FORCE_INLINE __m128 _mm_or_ps(__m128 a, __m128 b) { return vreinterpretq_m128_s32( @@ -2108,91 +2209,110 @@ FORCE_INLINE __m128 _mm_or_ps(__m128 a, __m128 b) // Average packed unsigned 8-bit integers in a and b, and store the results in // dst. 
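A typical use of `_mm_movemask_ps` above is collapsing per-lane sign bits into a scalar test; a small sketch (the helper name is hypothetical):

```c
#include "sse2neon.h"

/* Non-zero if any lane of v has its sign bit set (negative, -0.0f, or a NaN
 * with the sign bit set); bit i of the mask is the sign bit of lane i. */
static int any_sign_set(__m128 v)
{
    return _mm_movemask_ps(v) != 0;
}
```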
-// -// FOR j := 0 to 7 -// i := j*8 -// dst[i+7:i] := (a[i+7:i] + b[i+7:i] + 1) >> 1 -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_m_pavgb +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_m_pavgb #define _m_pavgb(a, b) _mm_avg_pu8(a, b) // Average packed unsigned 16-bit integers in a and b, and store the results in // dst. -// -// FOR j := 0 to 3 -// i := j*16 -// dst[i+15:i] := (a[i+15:i] + b[i+15:i] + 1) >> 1 -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_m_pavgw +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_m_pavgw #define _m_pavgw(a, b) _mm_avg_pu16(a, b) // Extract a 16-bit integer from a, selected with imm8, and store the result in // the lower element of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_m_pextrw +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_m_pextrw #define _m_pextrw(a, imm) _mm_extract_pi16(a, imm) // Copy a to dst, and insert the 16-bit integer i into dst at the location // specified by imm8. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=m_pinsrw +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=m_pinsrw #define _m_pinsrw(a, i, imm) _mm_insert_pi16(a, i, imm) // Compare packed signed 16-bit integers in a and b, and store packed maximum // values in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_m_pmaxsw +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_m_pmaxsw #define _m_pmaxsw(a, b) _mm_max_pi16(a, b) // Compare packed unsigned 8-bit integers in a and b, and store packed maximum // values in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_m_pmaxub +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_m_pmaxub #define _m_pmaxub(a, b) _mm_max_pu8(a, b) // Compare packed signed 16-bit integers in a and b, and store packed minimum // values in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_m_pminsw +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_m_pminsw #define _m_pminsw(a, b) _mm_min_pi16(a, b) // Compare packed unsigned 8-bit integers in a and b, and store packed minimum // values in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_m_pminub +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_m_pminub #define _m_pminub(a, b) _mm_min_pu8(a, b) // Create mask from the most significant bit of each 8-bit element in a, and // store the result in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_m_pmovmskb +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_m_pmovmskb #define _m_pmovmskb(a) _mm_movemask_pi8(a) // Multiply the packed unsigned 16-bit integers in a and b, producing // intermediate 32-bit integers, and store the high 16 bits of the intermediate // integers in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_m_pmulhuw +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_m_pmulhuw #define _m_pmulhuw(a, b) _mm_mulhi_pu16(a, b) -// Loads one cache line of data from address p to a location closer to the -// processor. 
https://msdn.microsoft.com/en-us/library/84szxsww(v=vs.100).aspx -FORCE_INLINE void _mm_prefetch(const void *p, int i) +// Fetch the line of data from memory that contains address p to a location in +// the cache hierarchy specified by the locality hint i. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_prefetch +FORCE_INLINE void _mm_prefetch(char const *p, int i) { (void) i; - __builtin_prefetch(p); +#if defined(_MSC_VER) + switch (i) { + case _MM_HINT_NTA: + __prefetch2(p, 1); + break; + case _MM_HINT_T0: + __prefetch2(p, 0); + break; + case _MM_HINT_T1: + __prefetch2(p, 2); + break; + case _MM_HINT_T2: + __prefetch2(p, 4); + break; + } +#else + switch (i) { + case _MM_HINT_NTA: + __builtin_prefetch(p, 0, 0); + break; + case _MM_HINT_T0: + __builtin_prefetch(p, 0, 3); + break; + case _MM_HINT_T1: + __builtin_prefetch(p, 0, 2); + break; + case _MM_HINT_T2: + __builtin_prefetch(p, 0, 1); + break; + } +#endif } // Compute the absolute differences of packed unsigned 8-bit integers in a and // b, then horizontally sum each consecutive 8 differences to produce four // unsigned 16-bit integers, and pack these unsigned 16-bit integers in the low // 16 bits of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=m_psadbw +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=m_psadbw #define _m_psadbw(a, b) _mm_sad_pu8(a, b) // Shuffle 16-bit integers in a using the control in imm8, and store the results // in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_m_pshufw +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_m_pshufw #define _m_pshufw(a, imm) _mm_shuffle_pi16(a, imm) // Compute the approximate reciprocal of packed single-precision (32-bit) // floating-point elements in a, and store the results in dst. The maximum // relative error for this approximation is less than 1.5*2^-12. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_rcp_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_rcp_ps FORCE_INLINE __m128 _mm_rcp_ps(__m128 in) { float32x4_t recip = vrecpeq_f32(vreinterpretq_f32_m128(in)); @@ -2208,30 +2328,42 @@ FORCE_INLINE __m128 _mm_rcp_ps(__m128 in) // floating-point element in a, store the result in the lower element of dst, // and copy the upper 3 packed elements from a to the upper elements of dst. The // maximum relative error for this approximation is less than 1.5*2^-12. -// -// dst[31:0] := (1.0 / a[31:0]) -// dst[127:32] := a[127:32] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_rcp_ss +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_rcp_ss FORCE_INLINE __m128 _mm_rcp_ss(__m128 a) { return _mm_move_ss(a, _mm_rcp_ps(a)); } -// Computes the approximations of the reciprocal square roots of the four -// single-precision floating point values of in. -// The current precision is 1% error. -// https://msdn.microsoft.com/en-us/library/22hfsh53(v=vs.100).aspx +// Compute the approximate reciprocal square root of packed single-precision +// (32-bit) floating-point elements in a, and store the results in dst. The +// maximum relative error for this approximation is less than 1.5*2^-12. 
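A sketch of `_mm_prefetch` in a streaming loop using the hint mapping above; the 64-element look-ahead is an arbitrary choice and `n` is assumed to be a multiple of 4:

```c
#include <stddef.h>
#include "sse2neon.h"

/* Scale an array in place, prefetching a block ahead of the current position. */
static void scale_in_place(float *data, size_t n, float s)
{
    __m128 vs = _mm_set1_ps(s);
    for (size_t i = 0; i < n; i += 4) {
        if (i + 64 < n)
            _mm_prefetch((const char *) (data + i + 64), _MM_HINT_T0);
        _mm_storeu_ps(data + i, _mm_mul_ps(_mm_loadu_ps(data + i), vs));
    }
}
```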
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_rsqrt_ps FORCE_INLINE __m128 _mm_rsqrt_ps(__m128 in) { float32x4_t out = vrsqrteq_f32(vreinterpretq_f32_m128(in)); -#if SSE2NEON_PRECISE_SQRT - // Additional Netwon-Raphson iteration for accuracy + + // Generate masks for detecting whether input has any 0.0f/-0.0f + // (which becomes positive/negative infinity by IEEE-754 arithmetic rules). + const uint32x4_t pos_inf = vdupq_n_u32(0x7F800000); + const uint32x4_t neg_inf = vdupq_n_u32(0xFF800000); + const uint32x4_t has_pos_zero = + vceqq_u32(pos_inf, vreinterpretq_u32_f32(out)); + const uint32x4_t has_neg_zero = + vceqq_u32(neg_inf, vreinterpretq_u32_f32(out)); + out = vmulq_f32( out, vrsqrtsq_f32(vmulq_f32(vreinterpretq_f32_m128(in), out), out)); +#if SSE2NEON_PRECISE_SQRT + // Additional Netwon-Raphson iteration for accuracy out = vmulq_f32( out, vrsqrtsq_f32(vmulq_f32(vreinterpretq_f32_m128(in), out), out)); #endif + + // Set output vector element to infinity/negative-infinity if + // the corresponding input vector element is 0.0f/-0.0f. + out = vbslq_f32(has_pos_zero, (float32x4_t) pos_inf, out); + out = vbslq_f32(has_neg_zero, (float32x4_t) neg_inf, out); + return vreinterpretq_m128_f32(out); } @@ -2239,7 +2371,7 @@ FORCE_INLINE __m128 _mm_rsqrt_ps(__m128 in) // (32-bit) floating-point element in a, store the result in the lower element // of dst, and copy the upper 3 packed elements from a to the upper elements of // dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_rsqrt_ss +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_rsqrt_ss FORCE_INLINE __m128 _mm_rsqrt_ss(__m128 in) { return vsetq_lane_f32(vgetq_lane_f32(_mm_rsqrt_ps(in), 0), in, 0); @@ -2249,25 +2381,59 @@ FORCE_INLINE __m128 _mm_rsqrt_ss(__m128 in) // b, then horizontally sum each consecutive 8 differences to produce four // unsigned 16-bit integers, and pack these unsigned 16-bit integers in the low // 16 bits of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_sad_pu8 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sad_pu8 FORCE_INLINE __m64 _mm_sad_pu8(__m64 a, __m64 b) { uint64x1_t t = vpaddl_u32(vpaddl_u16( vpaddl_u8(vabd_u8(vreinterpret_u8_m64(a), vreinterpret_u8_m64(b))))); return vreinterpret_m64_u16( - vset_lane_u16(vget_lane_u64(t, 0), vdup_n_u16(0), 0)); + vset_lane_u16((int) vget_lane_u64(t, 0), vdup_n_u16(0), 0)); +} + +// Macro: Set the flush zero bits of the MXCSR control and status register to +// the value in unsigned 32-bit integer a. The flush zero may contain any of the +// following flags: _MM_FLUSH_ZERO_ON or _MM_FLUSH_ZERO_OFF +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_MM_SET_FLUSH_ZERO_MODE +FORCE_INLINE void _sse2neon_mm_set_flush_zero_mode(unsigned int flag) +{ + // AArch32 Advanced SIMD arithmetic always uses the Flush-to-zero setting, + // regardless of the value of the FZ bit. 
+ union { + fpcr_bitfield field; +#if defined(__aarch64__) || defined(_M_ARM64) + uint64_t value; +#else + uint32_t value; +#endif + } r; + +#if defined(__aarch64__) || defined(_M_ARM64) + r.value = _sse2neon_get_fpcr(); +#else + __asm__ __volatile__("vmrs %0, FPSCR" : "=r"(r.value)); /* read */ +#endif + + r.field.bit24 = (flag & _MM_FLUSH_ZERO_MASK) == _MM_FLUSH_ZERO_ON; + +#if defined(__aarch64__) || defined(_M_ARM64) + _sse2neon_set_fpcr(r.value); +#else + __asm__ __volatile__("vmsr FPSCR, %0" ::"r"(r)); /* write */ +#endif } -// Sets the four single-precision, floating-point values to the four inputs. -// https://msdn.microsoft.com/en-us/library/vstudio/afh0zf75(v=vs.100).aspx +// Set packed single-precision (32-bit) floating-point elements in dst with the +// supplied values. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_set_ps FORCE_INLINE __m128 _mm_set_ps(float w, float z, float y, float x) { float ALIGN_STRUCT(16) data[4] = {x, y, z, w}; return vreinterpretq_m128_f32(vld1q_f32(data)); } -// Sets the four single-precision, floating-point values to w. -// https://msdn.microsoft.com/en-us/library/vstudio/2x1se8ha(v=vs.100).aspx +// Broadcast single-precision (32-bit) floating-point value a to all elements of +// dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_set_ps1 FORCE_INLINE __m128 _mm_set_ps1(float _w) { return vreinterpretq_m128_f32(vdupq_n_f32(_w)); @@ -2277,22 +2443,22 @@ FORCE_INLINE __m128 _mm_set_ps1(float _w) // the value in unsigned 32-bit integer a. The rounding mode may contain any of // the following flags: _MM_ROUND_NEAREST, _MM_ROUND_DOWN, _MM_ROUND_UP, // _MM_ROUND_TOWARD_ZERO -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_MM_SET_ROUNDING_MODE +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_MM_SET_ROUNDING_MODE FORCE_INLINE void _MM_SET_ROUNDING_MODE(int rounding) { union { fpcr_bitfield field; -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) uint64_t value; #else uint32_t value; #endif } r; -#if defined(__aarch64__) - asm volatile("mrs %0, FPCR" : "=r"(r.value)); /* read */ +#if defined(__aarch64__) || defined(_M_ARM64) + r.value = _sse2neon_get_fpcr(); #else - asm volatile("vmrs %0, FPSCR" : "=r"(r.value)); /* read */ + __asm__ __volatile__("vmrs %0, FPSCR" : "=r"(r.value)); /* read */ #endif switch (rounding) { @@ -2313,48 +2479,57 @@ FORCE_INLINE void _MM_SET_ROUNDING_MODE(int rounding) r.field.bit23 = 0; } -#if defined(__aarch64__) - asm volatile("msr FPCR, %0" ::"r"(r)); /* write */ +#if defined(__aarch64__) || defined(_M_ARM64) + _sse2neon_set_fpcr(r.value); #else - asm volatile("vmsr FPSCR, %0" ::"r"(r)); /* write */ + __asm__ __volatile__("vmsr FPSCR, %0" ::"r"(r)); /* write */ #endif } // Copy single-precision (32-bit) floating-point element a to the lower element // of dst, and zero the upper 3 elements. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_set_ss +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_set_ss FORCE_INLINE __m128 _mm_set_ss(float a) { - float ALIGN_STRUCT(16) data[4] = {a, 0, 0, 0}; - return vreinterpretq_m128_f32(vld1q_f32(data)); + return vreinterpretq_m128_f32(vsetq_lane_f32(a, vdupq_n_f32(0), 0)); } -// Sets the four single-precision, floating-point values to w. 
-// -// r0 := r1 := r2 := r3 := w -// -// https://msdn.microsoft.com/en-us/library/vstudio/2x1se8ha(v=vs.100).aspx +// Broadcast single-precision (32-bit) floating-point value a to all elements of +// dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_set1_ps FORCE_INLINE __m128 _mm_set1_ps(float _w) { return vreinterpretq_m128_f32(vdupq_n_f32(_w)); } +// Set the MXCSR control and status register with the value in unsigned 32-bit +// integer a. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_setcsr +// FIXME: _mm_setcsr() implementation supports changing the rounding mode only. FORCE_INLINE void _mm_setcsr(unsigned int a) { _MM_SET_ROUNDING_MODE(a); } -// Sets the four single-precision, floating-point values to the four inputs in -// reverse order. -// https://msdn.microsoft.com/en-us/library/vstudio/d2172ct3(v=vs.100).aspx +// Get the unsigned 32-bit value of the MXCSR control and status register. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_getcsr +// FIXME: _mm_getcsr() implementation supports reading the rounding mode only. +FORCE_INLINE unsigned int _mm_getcsr(void) +{ + return _MM_GET_ROUNDING_MODE(); +} + +// Set packed single-precision (32-bit) floating-point elements in dst with the +// supplied values in reverse order. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_setr_ps FORCE_INLINE __m128 _mm_setr_ps(float w, float z, float y, float x) { float ALIGN_STRUCT(16) data[4] = {w, z, y, x}; return vreinterpretq_m128_f32(vld1q_f32(data)); } -// Clears the four single-precision, floating-point values. -// https://msdn.microsoft.com/en-us/library/vstudio/tk1t2tbz(v=vs.100).aspx +// Return vector of type __m128 with all elements set to zero. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_setzero_ps FORCE_INLINE __m128 _mm_setzero_ps(void) { return vreinterpretq_m128_f32(vdupq_n_f32(0)); @@ -2362,130 +2537,145 @@ FORCE_INLINE __m128 _mm_setzero_ps(void) // Shuffle 16-bit integers in a using the control in imm8, and store the results // in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_shuffle_pi16 -#if __has_builtin(__builtin_shufflevector) -#define _mm_shuffle_pi16(a, imm) \ - __extension__({ \ - vreinterpret_m64_s16(__builtin_shufflevector( \ - vreinterpret_s16_m64(a), vreinterpret_s16_m64(a), (imm & 0x3), \ - ((imm >> 2) & 0x3), ((imm >> 4) & 0x3), ((imm >> 6) & 0x3))); \ - }) -#else -#define _mm_shuffle_pi16(a, imm) \ - __extension__({ \ - int16x4_t ret; \ - ret = \ - vmov_n_s16(vget_lane_s16(vreinterpret_s16_m64(a), (imm) & (0x3))); \ - ret = vset_lane_s16( \ - vget_lane_s16(vreinterpret_s16_m64(a), ((imm) >> 2) & 0x3), ret, \ - 1); \ - ret = vset_lane_s16( \ - vget_lane_s16(vreinterpret_s16_m64(a), ((imm) >> 4) & 0x3), ret, \ - 2); \ - ret = vset_lane_s16( \ - vget_lane_s16(vreinterpret_s16_m64(a), ((imm) >> 6) & 0x3), ret, \ - 3); \ - vreinterpret_m64_s16(ret); \ - }) -#endif - -// Guarantees that every preceding store is globally visible before any -// subsequent store. 
-// https://msdn.microsoft.com/en-us/library/5h2w73d1%28v=vs.90%29.aspx +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_shuffle_pi16 +#ifdef _sse2neon_shuffle +#define _mm_shuffle_pi16(a, imm) \ + vreinterpret_m64_s16(vshuffle_s16( \ + vreinterpret_s16_m64(a), vreinterpret_s16_m64(a), (imm & 0x3), \ + ((imm >> 2) & 0x3), ((imm >> 4) & 0x3), ((imm >> 6) & 0x3))) +#else +#define _mm_shuffle_pi16(a, imm) \ + _sse2neon_define1( \ + __m64, a, int16x4_t ret; \ + ret = vmov_n_s16( \ + vget_lane_s16(vreinterpret_s16_m64(_a), (imm) & (0x3))); \ + ret = vset_lane_s16( \ + vget_lane_s16(vreinterpret_s16_m64(_a), ((imm) >> 2) & 0x3), ret, \ + 1); \ + ret = vset_lane_s16( \ + vget_lane_s16(vreinterpret_s16_m64(_a), ((imm) >> 4) & 0x3), ret, \ + 2); \ + ret = vset_lane_s16( \ + vget_lane_s16(vreinterpret_s16_m64(_a), ((imm) >> 6) & 0x3), ret, \ + 3); \ + _sse2neon_return(vreinterpret_m64_s16(ret));) +#endif + +// Perform a serializing operation on all store-to-memory instructions that were +// issued prior to this instruction. Guarantees that every store instruction +// that precedes, in program order, is globally visible before any store +// instruction which follows the fence in program order. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sfence FORCE_INLINE void _mm_sfence(void) { - __sync_synchronize(); + _sse2neon_smp_mb(); +} + +// Perform a serializing operation on all load-from-memory and store-to-memory +// instructions that were issued prior to this instruction. Guarantees that +// every memory access that precedes, in program order, the memory fence +// instruction is globally visible before any memory instruction which follows +// the fence in program order. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_mfence +FORCE_INLINE void _mm_mfence(void) +{ + _sse2neon_smp_mb(); +} + +// Perform a serializing operation on all load-from-memory instructions that +// were issued prior to this instruction. Guarantees that every load instruction +// that precedes, in program order, is globally visible before any load +// instruction which follows the fence in program order. 
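The classic use of `_mm_sfence` is ordering a payload store before a flag store; a simplified sketch only, not a complete C11 synchronisation recipe:

```c
#include "sse2neon.h"

static void publish(int *payload, volatile int *ready, int value)
{
    *payload = value;
    _mm_sfence(); /* payload store is globally visible before the flag store */
    *ready = 1;
}
```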
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_lfence +FORCE_INLINE void _mm_lfence(void) +{ + _sse2neon_smp_mb(); } // FORCE_INLINE __m128 _mm_shuffle_ps(__m128 a, __m128 b, __constrange(0,255) // int imm) -#if __has_builtin(__builtin_shufflevector) -#define _mm_shuffle_ps(a, b, imm) \ - __extension__({ \ - float32x4_t _input1 = vreinterpretq_f32_m128(a); \ - float32x4_t _input2 = vreinterpretq_f32_m128(b); \ - float32x4_t _shuf = __builtin_shufflevector( \ - _input1, _input2, (imm) & (0x3), ((imm) >> 2) & 0x3, \ - (((imm) >> 4) & 0x3) + 4, (((imm) >> 6) & 0x3) + 4); \ - vreinterpretq_m128_f32(_shuf); \ +#ifdef _sse2neon_shuffle +#define _mm_shuffle_ps(a, b, imm) \ + __extension__({ \ + float32x4_t _input1 = vreinterpretq_f32_m128(a); \ + float32x4_t _input2 = vreinterpretq_f32_m128(b); \ + float32x4_t _shuf = \ + vshuffleq_s32(_input1, _input2, (imm) & (0x3), ((imm) >> 2) & 0x3, \ + (((imm) >> 4) & 0x3) + 4, (((imm) >> 6) & 0x3) + 4); \ + vreinterpretq_m128_f32(_shuf); \ }) #else // generic -#define _mm_shuffle_ps(a, b, imm) \ - __extension__({ \ - __m128 ret; \ - switch (imm) { \ - case _MM_SHUFFLE(1, 0, 3, 2): \ - ret = _mm_shuffle_ps_1032((a), (b)); \ - break; \ - case _MM_SHUFFLE(2, 3, 0, 1): \ - ret = _mm_shuffle_ps_2301((a), (b)); \ - break; \ - case _MM_SHUFFLE(0, 3, 2, 1): \ - ret = _mm_shuffle_ps_0321((a), (b)); \ - break; \ - case _MM_SHUFFLE(2, 1, 0, 3): \ - ret = _mm_shuffle_ps_2103((a), (b)); \ - break; \ - case _MM_SHUFFLE(1, 0, 1, 0): \ - ret = _mm_movelh_ps((a), (b)); \ - break; \ - case _MM_SHUFFLE(1, 0, 0, 1): \ - ret = _mm_shuffle_ps_1001((a), (b)); \ - break; \ - case _MM_SHUFFLE(0, 1, 0, 1): \ - ret = _mm_shuffle_ps_0101((a), (b)); \ - break; \ - case _MM_SHUFFLE(3, 2, 1, 0): \ - ret = _mm_shuffle_ps_3210((a), (b)); \ - break; \ - case _MM_SHUFFLE(0, 0, 1, 1): \ - ret = _mm_shuffle_ps_0011((a), (b)); \ - break; \ - case _MM_SHUFFLE(0, 0, 2, 2): \ - ret = _mm_shuffle_ps_0022((a), (b)); \ - break; \ - case _MM_SHUFFLE(2, 2, 0, 0): \ - ret = _mm_shuffle_ps_2200((a), (b)); \ - break; \ - case _MM_SHUFFLE(3, 2, 0, 2): \ - ret = _mm_shuffle_ps_3202((a), (b)); \ - break; \ - case _MM_SHUFFLE(3, 2, 3, 2): \ - ret = _mm_movehl_ps((b), (a)); \ - break; \ - case _MM_SHUFFLE(1, 1, 3, 3): \ - ret = _mm_shuffle_ps_1133((a), (b)); \ - break; \ - case _MM_SHUFFLE(2, 0, 1, 0): \ - ret = _mm_shuffle_ps_2010((a), (b)); \ - break; \ - case _MM_SHUFFLE(2, 0, 0, 1): \ - ret = _mm_shuffle_ps_2001((a), (b)); \ - break; \ - case _MM_SHUFFLE(2, 0, 3, 2): \ - ret = _mm_shuffle_ps_2032((a), (b)); \ - break; \ - default: \ - ret = _mm_shuffle_ps_default((a), (b), (imm)); \ - break; \ - } \ - ret; \ - }) -#endif - -// Computes the approximations of square roots of the four single-precision, -// floating-point values of a. First computes reciprocal square roots and then -// reciprocals of the four values. 
-// -// r0 := sqrt(a0) -// r1 := sqrt(a1) -// r2 := sqrt(a2) -// r3 := sqrt(a3) -// -// https://msdn.microsoft.com/en-us/library/vstudio/8z67bwwk(v=vs.100).aspx +#define _mm_shuffle_ps(a, b, imm) \ + _sse2neon_define2( \ + __m128, a, b, __m128 ret; switch (imm) { \ + case _MM_SHUFFLE(1, 0, 3, 2): \ + ret = _mm_shuffle_ps_1032(_a, _b); \ + break; \ + case _MM_SHUFFLE(2, 3, 0, 1): \ + ret = _mm_shuffle_ps_2301(_a, _b); \ + break; \ + case _MM_SHUFFLE(0, 3, 2, 1): \ + ret = _mm_shuffle_ps_0321(_a, _b); \ + break; \ + case _MM_SHUFFLE(2, 1, 0, 3): \ + ret = _mm_shuffle_ps_2103(_a, _b); \ + break; \ + case _MM_SHUFFLE(1, 0, 1, 0): \ + ret = _mm_movelh_ps(_a, _b); \ + break; \ + case _MM_SHUFFLE(1, 0, 0, 1): \ + ret = _mm_shuffle_ps_1001(_a, _b); \ + break; \ + case _MM_SHUFFLE(0, 1, 0, 1): \ + ret = _mm_shuffle_ps_0101(_a, _b); \ + break; \ + case _MM_SHUFFLE(3, 2, 1, 0): \ + ret = _mm_shuffle_ps_3210(_a, _b); \ + break; \ + case _MM_SHUFFLE(0, 0, 1, 1): \ + ret = _mm_shuffle_ps_0011(_a, _b); \ + break; \ + case _MM_SHUFFLE(0, 0, 2, 2): \ + ret = _mm_shuffle_ps_0022(_a, _b); \ + break; \ + case _MM_SHUFFLE(2, 2, 0, 0): \ + ret = _mm_shuffle_ps_2200(_a, _b); \ + break; \ + case _MM_SHUFFLE(3, 2, 0, 2): \ + ret = _mm_shuffle_ps_3202(_a, _b); \ + break; \ + case _MM_SHUFFLE(3, 2, 3, 2): \ + ret = _mm_movehl_ps(_b, _a); \ + break; \ + case _MM_SHUFFLE(1, 1, 3, 3): \ + ret = _mm_shuffle_ps_1133(_a, _b); \ + break; \ + case _MM_SHUFFLE(2, 0, 1, 0): \ + ret = _mm_shuffle_ps_2010(_a, _b); \ + break; \ + case _MM_SHUFFLE(2, 0, 0, 1): \ + ret = _mm_shuffle_ps_2001(_a, _b); \ + break; \ + case _MM_SHUFFLE(2, 0, 3, 2): \ + ret = _mm_shuffle_ps_2032(_a, _b); \ + break; \ + default: \ + ret = _mm_shuffle_ps_default(_a, _b, (imm)); \ + break; \ + } _sse2neon_return(ret);) +#endif + +// Compute the square root of packed single-precision (32-bit) floating-point +// elements in a, and store the results in dst. +// Due to ARMv7-A NEON's lack of a precise square root intrinsic, we implement +// square root by multiplying input in with its reciprocal square root before +// using the Newton-Raphson method to approximate the results. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sqrt_ps FORCE_INLINE __m128 _mm_sqrt_ps(__m128 in) { -#if SSE2NEON_PRECISE_SQRT +#if (defined(__aarch64__) || defined(_M_ARM64)) && !SSE2NEON_PRECISE_SQRT + return vreinterpretq_m128_f32(vsqrtq_f32(vreinterpretq_f32_m128(in))); +#else float32x4_t recip = vrsqrteq_f32(vreinterpretq_f32_m128(in)); // Test for vrsqrteq_f32(0) -> positive infinity case. @@ -2496,28 +2686,23 @@ FORCE_INLINE __m128 _mm_sqrt_ps(__m128 in) recip = vreinterpretq_f32_u32( vandq_u32(vmvnq_u32(div_by_zero), vreinterpretq_u32_f32(recip))); - // Additional Netwon-Raphson iteration for accuracy recip = vmulq_f32( vrsqrtsq_f32(vmulq_f32(recip, recip), vreinterpretq_f32_m128(in)), recip); + // Additional Netwon-Raphson iteration for accuracy recip = vmulq_f32( vrsqrtsq_f32(vmulq_f32(recip, recip), vreinterpretq_f32_m128(in)), recip); // sqrt(s) = s * 1/sqrt(s) return vreinterpretq_m128_f32(vmulq_f32(vreinterpretq_f32_m128(in), recip)); -#elif defined(__aarch64__) - return vreinterpretq_m128_f32(vsqrtq_f32(vreinterpretq_f32_m128(in))); -#else - float32x4_t recipsq = vrsqrteq_f32(vreinterpretq_f32_m128(in)); - float32x4_t sq = vrecpeq_f32(recipsq); - return vreinterpretq_m128_f32(sq); #endif } -// Computes the approximation of the square root of the scalar single-precision -// floating point value of in. 
-// https://msdn.microsoft.com/en-us/library/ahfsc22d(v=vs.100).aspx +// Compute the square root of the lower single-precision (32-bit) floating-point +// element in a, store the result in the lower element of dst, and copy the +// upper 3 packed elements from a to the upper elements of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sqrt_ss FORCE_INLINE __m128 _mm_sqrt_ss(__m128 in) { float32_t value = @@ -2526,8 +2711,10 @@ FORCE_INLINE __m128 _mm_sqrt_ss(__m128 in) vsetq_lane_f32(value, vreinterpretq_f32_m128(in), 0)); } -// Stores four single-precision, floating-point values. -// https://msdn.microsoft.com/en-us/library/vstudio/s3h4ay6y(v=vs.100).aspx +// Store 128-bits (composed of 4 packed single-precision (32-bit) floating-point +// elements) from a into memory. mem_addr must be aligned on a 16-byte boundary +// or a general-protection exception may be generated. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_store_ps FORCE_INLINE void _mm_store_ps(float *p, __m128 a) { vst1q_f32(p, vreinterpretq_f32_m128(a)); @@ -2536,21 +2723,16 @@ FORCE_INLINE void _mm_store_ps(float *p, __m128 a) // Store the lower single-precision (32-bit) floating-point element from a into // 4 contiguous elements in memory. mem_addr must be aligned on a 16-byte // boundary or a general-protection exception may be generated. -// -// MEM[mem_addr+31:mem_addr] := a[31:0] -// MEM[mem_addr+63:mem_addr+32] := a[31:0] -// MEM[mem_addr+95:mem_addr+64] := a[31:0] -// MEM[mem_addr+127:mem_addr+96] := a[31:0] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_store_ps1 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_store_ps1 FORCE_INLINE void _mm_store_ps1(float *p, __m128 a) { float32_t a0 = vgetq_lane_f32(vreinterpretq_f32_m128(a), 0); vst1q_f32(p, vdupq_n_f32(a0)); } -// Stores the lower single - precision, floating - point value. -// https://msdn.microsoft.com/en-us/library/tzz10fbx(v=vs.100).aspx +// Store the lower single-precision (32-bit) floating-point element from a into +// memory. mem_addr does not need to be aligned on any particular boundary. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_store_ss FORCE_INLINE void _mm_store_ss(float *p, __m128 a) { vst1q_lane_f32(p, vreinterpretq_f32_m128(a), 0); @@ -2559,34 +2741,20 @@ FORCE_INLINE void _mm_store_ss(float *p, __m128 a) // Store the lower single-precision (32-bit) floating-point element from a into // 4 contiguous elements in memory. mem_addr must be aligned on a 16-byte // boundary or a general-protection exception may be generated. -// -// MEM[mem_addr+31:mem_addr] := a[31:0] -// MEM[mem_addr+63:mem_addr+32] := a[31:0] -// MEM[mem_addr+95:mem_addr+64] := a[31:0] -// MEM[mem_addr+127:mem_addr+96] := a[31:0] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_store1_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_store1_ps #define _mm_store1_ps _mm_store_ps1 -// Stores the upper two single-precision, floating-point values of a to the -// address p. -// -// *p0 := a2 -// *p1 := a3 -// -// https://msdn.microsoft.com/en-us/library/a7525fs8(v%3dvs.90).aspx +// Store the upper 2 single-precision (32-bit) floating-point elements from a +// into memory. 
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_storeh_pi FORCE_INLINE void _mm_storeh_pi(__m64 *p, __m128 a) { *p = vreinterpret_m64_f32(vget_high_f32(a)); } -// Stores the lower two single-precision floating point values of a to the -// address p. -// -// *p0 := a0 -// *p1 := a1 -// -// https://msdn.microsoft.com/en-us/library/h54t98ks(v=vs.90).aspx +// Store the lower 2 single-precision (32-bit) floating-point elements from a +// into memory. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_storel_pi FORCE_INLINE void _mm_storel_pi(__m64 *p, __m128 a) { *p = vreinterpret_m64_f32(vget_low_f32(a)); @@ -2595,13 +2763,7 @@ FORCE_INLINE void _mm_storel_pi(__m64 *p, __m128 a) // Store 4 single-precision (32-bit) floating-point elements from a into memory // in reverse order. mem_addr must be aligned on a 16-byte boundary or a // general-protection exception may be generated. -// -// MEM[mem_addr+31:mem_addr] := a[127:96] -// MEM[mem_addr+63:mem_addr+32] := a[95:64] -// MEM[mem_addr+95:mem_addr+64] := a[63:32] -// MEM[mem_addr+127:mem_addr+96] := a[31:0] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_storer_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_storer_ps FORCE_INLINE void _mm_storer_ps(float *p, __m128 a) { float32x4_t tmp = vrev64q_f32(vreinterpretq_f32_m128(a)); @@ -2609,22 +2771,24 @@ FORCE_INLINE void _mm_storer_ps(float *p, __m128 a) vst1q_f32(p, rev); } -// Stores four single-precision, floating-point values. -// https://msdn.microsoft.com/en-us/library/44e30x22(v=vs.100).aspx +// Store 128-bits (composed of 4 packed single-precision (32-bit) floating-point +// elements) from a into memory. mem_addr does not need to be aligned on any +// particular boundary. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_storeu_ps FORCE_INLINE void _mm_storeu_ps(float *p, __m128 a) { vst1q_f32(p, vreinterpretq_f32_m128(a)); } // Stores 16-bits of integer data a at the address p. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_storeu_si16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_storeu_si16 FORCE_INLINE void _mm_storeu_si16(void *p, __m128i a) { vst1q_lane_s16((int16_t *) p, vreinterpretq_s16_m128i(a), 0); } // Stores 64-bits of integer data a at the address p. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_storeu_si64 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_storeu_si64 FORCE_INLINE void _mm_storeu_si64(void *p, __m128i a) { vst1q_lane_s64((int64_t *) p, vreinterpretq_s64_m128i(a), 0); @@ -2632,7 +2796,7 @@ FORCE_INLINE void _mm_storeu_si64(void *p, __m128i a) // Store 64-bits of integer data from a into memory using a non-temporal memory // hint. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_stream_pi +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_stream_pi FORCE_INLINE void _mm_stream_pi(__m64 *p, __m64 a) { vst1_s64((int64_t *) p, vreinterpret_s64_m64(a)); @@ -2640,7 +2804,7 @@ FORCE_INLINE void _mm_stream_pi(__m64 *p, __m64 a) // Store 128-bits (composed of 4 packed single-precision (32-bit) floating- // point elements) from a into memory using a non-temporal memory hint. 
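For `_mm_storeh_pi` and `_mm_storel_pi` above, a small sketch that spills the two halves of a vector into separate 64-bit slots (the helper name is hypothetical):

```c
#include "sse2neon.h"

static void split_store(__m128 v, __m64 *lo, __m64 *hi)
{
    _mm_storel_pi(lo, v); /* lanes 0..1 */
    _mm_storeh_pi(hi, v); /* lanes 2..3 */
}
```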
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_stream_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_stream_ps FORCE_INLINE void _mm_stream_ps(float *p, __m128 a) { #if __has_builtin(__builtin_nontemporal_store) @@ -2650,14 +2814,10 @@ FORCE_INLINE void _mm_stream_ps(float *p, __m128 a) #endif } -// Subtracts the four single-precision, floating-point values of a and b. -// -// r0 := a0 - b0 -// r1 := a1 - b1 -// r2 := a2 - b2 -// r3 := a3 - b3 -// -// https://msdn.microsoft.com/en-us/library/vstudio/1zad2k61(v=vs.100).aspx +// Subtract packed single-precision (32-bit) floating-point elements in b from +// packed single-precision (32-bit) floating-point elements in a, and store the +// results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sub_ps FORCE_INLINE __m128 _mm_sub_ps(__m128 a, __m128 b) { return vreinterpretq_m128_f32( @@ -2668,11 +2828,7 @@ FORCE_INLINE __m128 _mm_sub_ps(__m128 a, __m128 b) // the lower single-precision (32-bit) floating-point element in a, store the // result in the lower element of dst, and copy the upper 3 packed elements from // a to the upper elements of dst. -// -// dst[31:0] := a[31:0] - b[31:0] -// dst[127:32] := a[127:32] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_sub_ss +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sub_ss FORCE_INLINE __m128 _mm_sub_ss(__m128 a, __m128 b) { return _mm_move_ss(a, _mm_sub_ps(a, b)); @@ -2681,7 +2837,7 @@ FORCE_INLINE __m128 _mm_sub_ss(__m128 a, __m128 b) // Macro: Transpose the 4x4 matrix formed by the 4 rows of single-precision // (32-bit) floating-point elements in row0, row1, row2, and row3, and store the // transposed matrix in these vectors (row0 now contains column 0, etc.). -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=MM_TRANSPOSE4_PS +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=MM_TRANSPOSE4_PS #define _MM_TRANSPOSE4_PS(row0, row1, row2, row3) \ do { \ float32x4x2_t ROW01 = vtrnq_f32(row0, row1); \ @@ -2705,8 +2861,26 @@ FORCE_INLINE __m128 _mm_sub_ss(__m128 a, __m128 b) #define _mm_ucomilt_ss _mm_comilt_ss #define _mm_ucomineq_ss _mm_comineq_ss +// Return vector of type __m128i with undefined elements. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=mm_undefined_si128 +FORCE_INLINE __m128i _mm_undefined_si128(void) +{ +#if defined(__GNUC__) || defined(__clang__) +#pragma GCC diagnostic push +#pragma GCC diagnostic ignored "-Wuninitialized" +#endif + __m128i a; +#if defined(_MSC_VER) + a = _mm_setzero_si128(); +#endif + return a; +#if defined(__GNUC__) || defined(__clang__) +#pragma GCC diagnostic pop +#endif +} + // Return vector of type __m128 with undefined elements. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_undefined_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_undefined_ps FORCE_INLINE __m128 _mm_undefined_ps(void) { #if defined(__GNUC__) || defined(__clang__) @@ -2714,24 +2888,21 @@ FORCE_INLINE __m128 _mm_undefined_ps(void) #pragma GCC diagnostic ignored "-Wuninitialized" #endif __m128 a; +#if defined(_MSC_VER) + a = _mm_setzero_ps(); +#endif return a; #if defined(__GNUC__) || defined(__clang__) #pragma GCC diagnostic pop #endif } -// Selects and interleaves the upper two single-precision, floating-point values -// from a and b. 
-// -// r0 := a2 -// r1 := b2 -// r2 := a3 -// r3 := b3 -// -// https://msdn.microsoft.com/en-us/library/skccxx7d%28v=vs.90%29.aspx +// Unpack and interleave single-precision (32-bit) floating-point elements from +// the high half a and b, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_unpackhi_ps FORCE_INLINE __m128 _mm_unpackhi_ps(__m128 a, __m128 b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128_f32( vzip2q_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b))); #else @@ -2742,18 +2913,12 @@ FORCE_INLINE __m128 _mm_unpackhi_ps(__m128 a, __m128 b) #endif } -// Selects and interleaves the lower two single-precision, floating-point values -// from a and b. -// -// r0 := a0 -// r1 := b0 -// r2 := a1 -// r3 := b1 -// -// https://msdn.microsoft.com/en-us/library/25st103b%28v=vs.90%29.aspx +// Unpack and interleave single-precision (32-bit) floating-point elements from +// the low half of a and b, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_unpacklo_ps FORCE_INLINE __m128 _mm_unpacklo_ps(__m128 a, __m128 b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128_f32( vzip1q_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b))); #else @@ -2764,9 +2929,9 @@ FORCE_INLINE __m128 _mm_unpacklo_ps(__m128 a, __m128 b) #endif } -// Computes bitwise EXOR (exclusive-or) of the four single-precision, -// floating-point values of a and b. -// https://msdn.microsoft.com/en-us/library/ss6k3wk8(v=vs.100).aspx +// Compute the bitwise XOR of packed single-precision (32-bit) floating-point +// elements in a and b, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_xor_ps FORCE_INLINE __m128 _mm_xor_ps(__m128 a, __m128 b) { return vreinterpretq_m128_s32( @@ -2775,42 +2940,32 @@ FORCE_INLINE __m128 _mm_xor_ps(__m128 a, __m128 b) /* SSE2 */ -// Adds the 8 signed or unsigned 16-bit integers in a to the 8 signed or -// unsigned 16-bit integers in b. -// https://msdn.microsoft.com/en-us/library/fceha5k4(v=vs.100).aspx +// Add packed 16-bit integers in a and b, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_add_epi16 FORCE_INLINE __m128i _mm_add_epi16(__m128i a, __m128i b) { return vreinterpretq_m128i_s16( vaddq_s16(vreinterpretq_s16_m128i(a), vreinterpretq_s16_m128i(b))); } -// Adds the 4 signed or unsigned 32-bit integers in a to the 4 signed or -// unsigned 32-bit integers in b. -// -// r0 := a0 + b0 -// r1 := a1 + b1 -// r2 := a2 + b2 -// r3 := a3 + b3 -// -// https://msdn.microsoft.com/en-us/library/vstudio/09xs4fkk(v=vs.100).aspx +// Add packed 32-bit integers in a and b, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_add_epi32 FORCE_INLINE __m128i _mm_add_epi32(__m128i a, __m128i b) { return vreinterpretq_m128i_s32( vaddq_s32(vreinterpretq_s32_m128i(a), vreinterpretq_s32_m128i(b))); } -// Adds the 4 signed or unsigned 64-bit integers in a to the 4 signed or -// unsigned 32-bit integers in b. -// https://msdn.microsoft.com/en-us/library/vstudio/09xs4fkk(v=vs.100).aspx +// Add packed 64-bit integers in a and b, and store the results in dst. 
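A lane picture for `_mm_unpacklo_ps` and `_mm_unpackhi_ps` above, as a throwaway sketch:

```c
#include "sse2neon.h"

static void unpack_demo(void)
{
    __m128 a = _mm_setr_ps(0.0f, 1.0f, 2.0f, 3.0f);
    __m128 b = _mm_setr_ps(4.0f, 5.0f, 6.0f, 7.0f);
    __m128 lo = _mm_unpacklo_ps(a, b); /* {0, 4, 1, 5} */
    __m128 hi = _mm_unpackhi_ps(a, b); /* {2, 6, 3, 7} */
    (void) lo;
    (void) hi;
}
```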
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_add_epi64 FORCE_INLINE __m128i _mm_add_epi64(__m128i a, __m128i b) { return vreinterpretq_m128i_s64( vaddq_s64(vreinterpretq_s64_m128i(a), vreinterpretq_s64_m128i(b))); } -// Adds the 16 signed or unsigned 8-bit integers in a to the 16 signed or -// unsigned 8-bit integers in b. -// https://technet.microsoft.com/en-us/subscriptions/yc7tcyzs(v=vs.90) +// Add packed 8-bit integers in a and b, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_add_epi8 FORCE_INLINE __m128i _mm_add_epi8(__m128i a, __m128i b) { return vreinterpretq_m128i_s8( @@ -2819,10 +2974,10 @@ FORCE_INLINE __m128i _mm_add_epi8(__m128i a, __m128i b) // Add packed double-precision (64-bit) floating-point elements in a and b, and // store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_add_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_add_pd FORCE_INLINE __m128d _mm_add_pd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64( vaddq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b))); #else @@ -2838,14 +2993,10 @@ FORCE_INLINE __m128d _mm_add_pd(__m128d a, __m128d b) // Add the lower double-precision (64-bit) floating-point element in a and b, // store the result in the lower element of dst, and copy the upper element from // a to the upper element of dst. -// -// dst[63:0] := a[63:0] + b[63:0] -// dst[127:64] := a[127:64] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_add_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_add_sd FORCE_INLINE __m128d _mm_add_sd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return _mm_move_sd(a, _mm_add_pd(a, b)); #else double *da = (double *) &a; @@ -2858,25 +3009,16 @@ FORCE_INLINE __m128d _mm_add_sd(__m128d a, __m128d b) } // Add 64-bit integers a and b, and store the result in dst. -// -// dst[63:0] := a[63:0] + b[63:0] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_add_si64 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_add_si64 FORCE_INLINE __m64 _mm_add_si64(__m64 a, __m64 b) { return vreinterpret_m64_s64( vadd_s64(vreinterpret_s64_m64(a), vreinterpret_s64_m64(b))); } -// Adds the 8 signed 16-bit integers in a to the 8 signed 16-bit integers in b -// and saturates. -// -// r0 := SignedSaturate(a0 + b0) -// r1 := SignedSaturate(a1 + b1) -// ... -// r7 := SignedSaturate(a7 + b7) -// -// https://msdn.microsoft.com/en-us/library/1a306ef8(v=vs.100).aspx +// Add packed signed 16-bit integers in a and b using saturation, and store the +// results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_adds_epi16 FORCE_INLINE __m128i _mm_adds_epi16(__m128i a, __m128i b) { return vreinterpretq_m128i_s16( @@ -2885,13 +3027,7 @@ FORCE_INLINE __m128i _mm_adds_epi16(__m128i a, __m128i b) // Add packed signed 8-bit integers in a and b using saturation, and store the // results in dst. 
-// -// FOR j := 0 to 15 -// i := j*8 -// dst[i+7:i] := Saturate8( a[i+7:i] + b[i+7:i] ) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_adds_epi8 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_adds_epi8 FORCE_INLINE __m128i _mm_adds_epi8(__m128i a, __m128i b) { return vreinterpretq_m128i_s8( @@ -2900,16 +3036,16 @@ FORCE_INLINE __m128i _mm_adds_epi8(__m128i a, __m128i b) // Add packed unsigned 16-bit integers in a and b using saturation, and store // the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_adds_epu16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_adds_epu16 FORCE_INLINE __m128i _mm_adds_epu16(__m128i a, __m128i b) { return vreinterpretq_m128i_u16( vqaddq_u16(vreinterpretq_u16_m128i(a), vreinterpretq_u16_m128i(b))); } -// Adds the 16 unsigned 8-bit integers in a to the 16 unsigned 8-bit integers in -// b and saturates.. -// https://msdn.microsoft.com/en-us/library/9hahyddy(v=vs.100).aspx +// Add packed unsigned 8-bit integers in a and b using saturation, and store the +// results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_adds_epu8 FORCE_INLINE __m128i _mm_adds_epu8(__m128i a, __m128i b) { return vreinterpretq_m128i_u8( @@ -2918,25 +3054,16 @@ FORCE_INLINE __m128i _mm_adds_epu8(__m128i a, __m128i b) // Compute the bitwise AND of packed double-precision (64-bit) floating-point // elements in a and b, and store the results in dst. -// -// FOR j := 0 to 1 -// i := j*64 -// dst[i+63:i] := a[i+63:i] AND b[i+63:i] -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_and_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_and_pd FORCE_INLINE __m128d _mm_and_pd(__m128d a, __m128d b) { return vreinterpretq_m128d_s64( vandq_s64(vreinterpretq_s64_m128d(a), vreinterpretq_s64_m128d(b))); } -// Computes the bitwise AND of the 128-bit value in a and the 128-bit value in -// b. -// -// r := a & b -// -// https://msdn.microsoft.com/en-us/library/vstudio/6d1txsa8(v=vs.100).aspx +// Compute the bitwise AND of 128 bits (representing integer data) in a and b, +// and store the result in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_and_si128 FORCE_INLINE __m128i _mm_and_si128(__m128i a, __m128i b) { return vreinterpretq_m128i_s32( @@ -2945,13 +3072,7 @@ FORCE_INLINE __m128i _mm_and_si128(__m128i a, __m128i b) // Compute the bitwise NOT of packed double-precision (64-bit) floating-point // elements in a and then AND with b, and store the results in dst. -// -// FOR j := 0 to 1 -// i := j*64 -// dst[i+63:i] := ((NOT a[i+63:i]) AND b[i+63:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_andnot_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_andnot_pd FORCE_INLINE __m128d _mm_andnot_pd(__m128d a, __m128d b) { // *NOTE* argument swap @@ -2959,12 +3080,9 @@ FORCE_INLINE __m128d _mm_andnot_pd(__m128d a, __m128d b) vbicq_s64(vreinterpretq_s64_m128d(b), vreinterpretq_s64_m128d(a))); } -// Computes the bitwise AND of the 128-bit value in b and the bitwise NOT of the -// 128-bit value in a. 
-// -// r := (~a) & b -// -// https://msdn.microsoft.com/en-us/library/vstudio/1beaceh8(v=vs.100).aspx +// Compute the bitwise NOT of 128 bits (representing integer data) in a and then +// AND with b, and store the result in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_andnot_si128 FORCE_INLINE __m128i _mm_andnot_si128(__m128i a, __m128i b) { return vreinterpretq_m128i_s32( @@ -2972,30 +3090,18 @@ FORCE_INLINE __m128i _mm_andnot_si128(__m128i a, __m128i b) vreinterpretq_s32_m128i(a))); // *NOTE* argument swap } -// Computes the average of the 8 unsigned 16-bit integers in a and the 8 -// unsigned 16-bit integers in b and rounds. -// -// r0 := (a0 + b0) / 2 -// r1 := (a1 + b1) / 2 -// ... -// r7 := (a7 + b7) / 2 -// -// https://msdn.microsoft.com/en-us/library/vstudio/y13ca3c8(v=vs.90).aspx +// Average packed unsigned 16-bit integers in a and b, and store the results in +// dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_avg_epu16 FORCE_INLINE __m128i _mm_avg_epu16(__m128i a, __m128i b) { return (__m128i) vrhaddq_u16(vreinterpretq_u16_m128i(a), vreinterpretq_u16_m128i(b)); } -// Computes the average of the 16 unsigned 8-bit integers in a and the 16 -// unsigned 8-bit integers in b and rounds. -// -// r0 := (a0 + b0) / 2 -// r1 := (a1 + b1) / 2 -// ... -// r15 := (a15 + b15) / 2 -// -// https://msdn.microsoft.com/en-us/library/vstudio/8zwh554a(v%3dvs.90).aspx +// Average packed unsigned 8-bit integers in a and b, and store the results in +// dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_avg_epu8 FORCE_INLINE __m128i _mm_avg_epu8(__m128i a, __m128i b) { return vreinterpretq_m128i_u8( @@ -3004,17 +3110,17 @@ FORCE_INLINE __m128i _mm_avg_epu8(__m128i a, __m128i b) // Shift a left by imm8 bytes while shifting in zeros, and store the results in // dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_bslli_si128 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_bslli_si128 #define _mm_bslli_si128(a, imm) _mm_slli_si128(a, imm) // Shift a right by imm8 bytes while shifting in zeros, and store the results in // dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_bsrli_si128 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_bsrli_si128 #define _mm_bsrli_si128(a, imm) _mm_srli_si128(a, imm) // Cast vector of type __m128d to type __m128. This intrinsic is only used for // compilation and does not generate any instructions, thus it has zero latency. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_castpd_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_castpd_ps FORCE_INLINE __m128 _mm_castpd_ps(__m128d a) { return vreinterpretq_m128_s64(vreinterpretq_s64_m128d(a)); @@ -3022,7 +3128,7 @@ FORCE_INLINE __m128 _mm_castpd_ps(__m128d a) // Cast vector of type __m128d to type __m128i. This intrinsic is only used for // compilation and does not generate any instructions, thus it has zero latency. 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_castpd_si128 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_castpd_si128 FORCE_INLINE __m128i _mm_castpd_si128(__m128d a) { return vreinterpretq_m128i_s64(vreinterpretq_s64_m128d(a)); @@ -3030,15 +3136,15 @@ FORCE_INLINE __m128i _mm_castpd_si128(__m128d a) // Cast vector of type __m128 to type __m128d. This intrinsic is only used for // compilation and does not generate any instructions, thus it has zero latency. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_castps_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_castps_pd FORCE_INLINE __m128d _mm_castps_pd(__m128 a) { return vreinterpretq_m128d_s32(vreinterpretq_s32_m128(a)); } -// Applies a type cast to reinterpret four 32-bit floating point values passed -// in as a 128-bit parameter as packed 32-bit integers. -// https://msdn.microsoft.com/en-us/library/bb514099.aspx +// Cast vector of type __m128 to type __m128i. This intrinsic is only used for +// compilation and does not generate any instructions, thus it has zero latency. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_castps_si128 FORCE_INLINE __m128i _mm_castps_si128(__m128 a) { return vreinterpretq_m128i_s32(vreinterpretq_s32_m128(a)); @@ -3046,36 +3152,52 @@ FORCE_INLINE __m128i _mm_castps_si128(__m128 a) // Cast vector of type __m128i to type __m128d. This intrinsic is only used for // compilation and does not generate any instructions, thus it has zero latency. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_castsi128_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_castsi128_pd FORCE_INLINE __m128d _mm_castsi128_pd(__m128i a) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64(vreinterpretq_f64_m128i(a)); #else return vreinterpretq_m128d_f32(vreinterpretq_f32_m128i(a)); #endif } -// Applies a type cast to reinterpret four 32-bit integers passed in as a -// 128-bit parameter as packed 32-bit floating point values. -// https://msdn.microsoft.com/en-us/library/bb514029.aspx +// Cast vector of type __m128i to type __m128. This intrinsic is only used for +// compilation and does not generate any instructions, thus it has zero latency. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_castsi128_ps FORCE_INLINE __m128 _mm_castsi128_ps(__m128i a) { return vreinterpretq_m128_s32(vreinterpretq_s32_m128i(a)); } -// Cache line containing p is flushed and invalidated from all caches in the -// coherency domain. : -// https://msdn.microsoft.com/en-us/library/ba08y07y(v=vs.100).aspx +// Invalidate and flush the cache line that contains p from all levels of the +// cache hierarchy. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_clflush +#if defined(__APPLE__) +#include +#endif FORCE_INLINE void _mm_clflush(void const *p) { (void) p; - // no corollary for Neon? + + /* sys_icache_invalidate is supported since macOS 10.5. + * However, it does not work on non-jailbroken iOS devices, although the + * compilation is successful. 
+ */ +#if defined(__APPLE__) + sys_icache_invalidate((void *) (uintptr_t) p, SSE2NEON_CACHELINE_SIZE); +#elif defined(__GNUC__) || defined(__clang__) + uintptr_t ptr = (uintptr_t) p; + __builtin___clear_cache((char *) ptr, + (char *) ptr + SSE2NEON_CACHELINE_SIZE); +#elif (_MSC_VER) && SSE2NEON_INCLUDE_WINDOWS_H + FlushInstructionCache(GetCurrentProcess(), p, SSE2NEON_CACHELINE_SIZE); +#endif } -// Compares the 8 signed or unsigned 16-bit integers in a and the 8 signed or -// unsigned 16-bit integers in b for equality. -// https://msdn.microsoft.com/en-us/library/2ay060te(v=vs.100).aspx +// Compare packed 16-bit integers in a and b for equality, and store the results +// in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpeq_epi16 FORCE_INLINE __m128i _mm_cmpeq_epi16(__m128i a, __m128i b) { return vreinterpretq_m128i_u16( @@ -3083,16 +3205,17 @@ FORCE_INLINE __m128i _mm_cmpeq_epi16(__m128i a, __m128i b) } // Compare packed 32-bit integers in a and b for equality, and store the results -// in dst +// in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpeq_epi32 FORCE_INLINE __m128i _mm_cmpeq_epi32(__m128i a, __m128i b) { return vreinterpretq_m128i_u32( vceqq_s32(vreinterpretq_s32_m128i(a), vreinterpretq_s32_m128i(b))); } -// Compares the 16 signed or unsigned 8-bit integers in a and the 16 signed or -// unsigned 8-bit integers in b for equality. -// https://msdn.microsoft.com/en-us/library/windows/desktop/bz5xk21a(v=vs.90).aspx +// Compare packed 8-bit integers in a and b for equality, and store the results +// in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpeq_epi8 FORCE_INLINE __m128i _mm_cmpeq_epi8(__m128i a, __m128i b) { return vreinterpretq_m128i_u8( @@ -3101,10 +3224,10 @@ FORCE_INLINE __m128i _mm_cmpeq_epi8(__m128i a, __m128i b) // Compare packed double-precision (64-bit) floating-point elements in a and b // for equality, and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmpeq_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpeq_pd FORCE_INLINE __m128d _mm_cmpeq_pd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_u64( vceqq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b))); #else @@ -3119,7 +3242,7 @@ FORCE_INLINE __m128d _mm_cmpeq_pd(__m128d a, __m128d b) // Compare the lower double-precision (64-bit) floating-point elements in a and // b for equality, store the result in the lower element of dst, and copy the // upper element from a to the upper element of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmpeq_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpeq_sd FORCE_INLINE __m128d _mm_cmpeq_sd(__m128d a, __m128d b) { return _mm_move_sd(a, _mm_cmpeq_pd(a, b)); @@ -3127,10 +3250,10 @@ FORCE_INLINE __m128d _mm_cmpeq_sd(__m128d a, __m128d b) // Compare packed double-precision (64-bit) floating-point elements in a and b // for greater-than-or-equal, and store the results in dst. 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmpge_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpge_pd FORCE_INLINE __m128d _mm_cmpge_pd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_u64( vcgeq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b))); #else @@ -3149,10 +3272,10 @@ FORCE_INLINE __m128d _mm_cmpge_pd(__m128d a, __m128d b) // Compare the lower double-precision (64-bit) floating-point elements in a and // b for greater-than-or-equal, store the result in the lower element of dst, // and copy the upper element from a to the upper element of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmpge_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpge_sd FORCE_INLINE __m128d _mm_cmpge_sd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return _mm_move_sd(a, _mm_cmpge_pd(a, b)); #else // expand "_mm_cmpge_pd()" to reduce unnecessary operations @@ -3167,39 +3290,27 @@ FORCE_INLINE __m128d _mm_cmpge_sd(__m128d a, __m128d b) #endif } -// Compares the 8 signed 16-bit integers in a and the 8 signed 16-bit integers -// in b for greater than. -// -// r0 := (a0 > b0) ? 0xffff : 0x0 -// r1 := (a1 > b1) ? 0xffff : 0x0 -// ... -// r7 := (a7 > b7) ? 0xffff : 0x0 -// -// https://technet.microsoft.com/en-us/library/xd43yfsa(v=vs.100).aspx +// Compare packed signed 16-bit integers in a and b for greater-than, and store +// the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpgt_epi16 FORCE_INLINE __m128i _mm_cmpgt_epi16(__m128i a, __m128i b) { return vreinterpretq_m128i_u16( vcgtq_s16(vreinterpretq_s16_m128i(a), vreinterpretq_s16_m128i(b))); } -// Compares the 4 signed 32-bit integers in a and the 4 signed 32-bit integers -// in b for greater than. -// https://msdn.microsoft.com/en-us/library/vstudio/1s9f2z0y(v=vs.100).aspx +// Compare packed signed 32-bit integers in a and b for greater-than, and store +// the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpgt_epi32 FORCE_INLINE __m128i _mm_cmpgt_epi32(__m128i a, __m128i b) { return vreinterpretq_m128i_u32( vcgtq_s32(vreinterpretq_s32_m128i(a), vreinterpretq_s32_m128i(b))); } -// Compares the 16 signed 8-bit integers in a and the 16 signed 8-bit integers -// in b for greater than. -// -// r0 := (a0 > b0) ? 0xff : 0x0 -// r1 := (a1 > b1) ? 0xff : 0x0 -// ... -// r15 := (a15 > b15) ? 0xff : 0x0 -// -// https://msdn.microsoft.com/zh-tw/library/wf45zt2b(v=vs.100).aspx +// Compare packed signed 8-bit integers in a and b for greater-than, and store +// the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpgt_epi8 FORCE_INLINE __m128i _mm_cmpgt_epi8(__m128i a, __m128i b) { return vreinterpretq_m128i_u8( @@ -3208,10 +3319,10 @@ FORCE_INLINE __m128i _mm_cmpgt_epi8(__m128i a, __m128i b) // Compare packed double-precision (64-bit) floating-point elements in a and b // for greater-than, and store the results in dst. 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmpgt_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpgt_pd FORCE_INLINE __m128d _mm_cmpgt_pd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_u64( vcgtq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b))); #else @@ -3230,10 +3341,10 @@ FORCE_INLINE __m128d _mm_cmpgt_pd(__m128d a, __m128d b) // Compare the lower double-precision (64-bit) floating-point elements in a and // b for greater-than, store the result in the lower element of dst, and copy // the upper element from a to the upper element of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmpgt_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpgt_sd FORCE_INLINE __m128d _mm_cmpgt_sd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return _mm_move_sd(a, _mm_cmpgt_pd(a, b)); #else // expand "_mm_cmpge_pd()" to reduce unnecessary operations @@ -3250,10 +3361,10 @@ FORCE_INLINE __m128d _mm_cmpgt_sd(__m128d a, __m128d b) // Compare packed double-precision (64-bit) floating-point elements in a and b // for less-than-or-equal, and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmple_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmple_pd FORCE_INLINE __m128d _mm_cmple_pd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_u64( vcleq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b))); #else @@ -3272,10 +3383,10 @@ FORCE_INLINE __m128d _mm_cmple_pd(__m128d a, __m128d b) // Compare the lower double-precision (64-bit) floating-point elements in a and // b for less-than-or-equal, store the result in the lower element of dst, and // copy the upper element from a to the upper element of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmple_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmple_sd FORCE_INLINE __m128d _mm_cmple_sd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return _mm_move_sd(a, _mm_cmple_pd(a, b)); #else // expand "_mm_cmpge_pd()" to reduce unnecessary operations @@ -3290,34 +3401,30 @@ FORCE_INLINE __m128d _mm_cmple_sd(__m128d a, __m128d b) #endif } -// Compares the 8 signed 16-bit integers in a and the 8 signed 16-bit integers -// in b for less than. -// -// r0 := (a0 < b0) ? 0xffff : 0x0 -// r1 := (a1 < b1) ? 0xffff : 0x0 -// ... -// r7 := (a7 < b7) ? 0xffff : 0x0 -// -// https://technet.microsoft.com/en-us/library/t863edb2(v=vs.100).aspx +// Compare packed signed 16-bit integers in a and b for less-than, and store the +// results in dst. Note: This intrinsic emits the pcmpgtw instruction with the +// order of the operands switched. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmplt_epi16 FORCE_INLINE __m128i _mm_cmplt_epi16(__m128i a, __m128i b) { return vreinterpretq_m128i_u16( vcltq_s16(vreinterpretq_s16_m128i(a), vreinterpretq_s16_m128i(b))); } - -// Compares the 4 signed 32-bit integers in a and the 4 signed 32-bit integers -// in b for less than. 
-// https://msdn.microsoft.com/en-us/library/vstudio/4ak0bf5d(v=vs.100).aspx +// Compare packed signed 32-bit integers in a and b for less-than, and store the +// results in dst. Note: This intrinsic emits the pcmpgtd instruction with the +// order of the operands switched. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmplt_epi32 FORCE_INLINE __m128i _mm_cmplt_epi32(__m128i a, __m128i b) { return vreinterpretq_m128i_u32( vcltq_s32(vreinterpretq_s32_m128i(a), vreinterpretq_s32_m128i(b))); } -// Compares the 16 signed 8-bit integers in a and the 16 signed 8-bit integers -// in b for lesser than. -// https://msdn.microsoft.com/en-us/library/windows/desktop/9s46csht(v=vs.90).aspx +// Compare packed signed 8-bit integers in a and b for less-than, and store the +// results in dst. Note: This intrinsic emits the pcmpgtb instruction with the +// order of the operands switched. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmplt_epi8 FORCE_INLINE __m128i _mm_cmplt_epi8(__m128i a, __m128i b) { return vreinterpretq_m128i_u8( @@ -3326,10 +3433,10 @@ FORCE_INLINE __m128i _mm_cmplt_epi8(__m128i a, __m128i b) // Compare packed double-precision (64-bit) floating-point elements in a and b // for less-than, and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmplt_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmplt_pd FORCE_INLINE __m128d _mm_cmplt_pd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_u64( vcltq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b))); #else @@ -3348,10 +3455,10 @@ FORCE_INLINE __m128d _mm_cmplt_pd(__m128d a, __m128d b) // Compare the lower double-precision (64-bit) floating-point elements in a and // b for less-than, store the result in the lower element of dst, and copy the // upper element from a to the upper element of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmplt_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmplt_sd FORCE_INLINE __m128d _mm_cmplt_sd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return _mm_move_sd(a, _mm_cmplt_pd(a, b)); #else uint64_t a0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(a)); @@ -3367,10 +3474,10 @@ FORCE_INLINE __m128d _mm_cmplt_sd(__m128d a, __m128d b) // Compare packed double-precision (64-bit) floating-point elements in a and b // for not-equal, and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmpneq_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpneq_pd FORCE_INLINE __m128d _mm_cmpneq_pd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_s32(vmvnq_s32(vreinterpretq_s32_u64( vceqq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b))))); #else @@ -3385,7 +3492,7 @@ FORCE_INLINE __m128d _mm_cmpneq_pd(__m128d a, __m128d b) // Compare the lower double-precision (64-bit) floating-point elements in a and // b for not-equal, store the result in the lower element of dst, and copy the // upper element from a to the upper element of dst. 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmpneq_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpneq_sd FORCE_INLINE __m128d _mm_cmpneq_sd(__m128d a, __m128d b) { return _mm_move_sd(a, _mm_cmpneq_pd(a, b)); @@ -3393,54 +3500,142 @@ FORCE_INLINE __m128d _mm_cmpneq_sd(__m128d a, __m128d b) // Compare packed double-precision (64-bit) floating-point elements in a and b // for not-greater-than-or-equal, and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmpnge_pd -#define _mm_cmpnge_pd(a, b) _mm_cmplt_pd(a, b) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpnge_pd +FORCE_INLINE __m128d _mm_cmpnge_pd(__m128d a, __m128d b) +{ +#if defined(__aarch64__) || defined(_M_ARM64) + return vreinterpretq_m128d_u64(veorq_u64( + vcgeq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b)), + vdupq_n_u64(UINT64_MAX))); +#else + uint64_t a0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(a)); + uint64_t a1 = (uint64_t) vget_high_u64(vreinterpretq_u64_m128d(a)); + uint64_t b0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(b)); + uint64_t b1 = (uint64_t) vget_high_u64(vreinterpretq_u64_m128d(b)); + uint64_t d[2]; + d[0] = + !((*(double *) &a0) >= (*(double *) &b0)) ? ~UINT64_C(0) : UINT64_C(0); + d[1] = + !((*(double *) &a1) >= (*(double *) &b1)) ? ~UINT64_C(0) : UINT64_C(0); + + return vreinterpretq_m128d_u64(vld1q_u64(d)); +#endif +} // Compare the lower double-precision (64-bit) floating-point elements in a and // b for not-greater-than-or-equal, store the result in the lower element of // dst, and copy the upper element from a to the upper element of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmpnge_sd -#define _mm_cmpnge_sd(a, b) _mm_cmplt_sd(a, b) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpnge_sd +FORCE_INLINE __m128d _mm_cmpnge_sd(__m128d a, __m128d b) +{ + return _mm_move_sd(a, _mm_cmpnge_pd(a, b)); +} // Compare packed double-precision (64-bit) floating-point elements in a and b // for not-greater-than, and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_cmpngt_pd -#define _mm_cmpngt_pd(a, b) _mm_cmple_pd(a, b) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_cmpngt_pd +FORCE_INLINE __m128d _mm_cmpngt_pd(__m128d a, __m128d b) +{ +#if defined(__aarch64__) || defined(_M_ARM64) + return vreinterpretq_m128d_u64(veorq_u64( + vcgtq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b)), + vdupq_n_u64(UINT64_MAX))); +#else + uint64_t a0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(a)); + uint64_t a1 = (uint64_t) vget_high_u64(vreinterpretq_u64_m128d(a)); + uint64_t b0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(b)); + uint64_t b1 = (uint64_t) vget_high_u64(vreinterpretq_u64_m128d(b)); + uint64_t d[2]; + d[0] = + !((*(double *) &a0) > (*(double *) &b0)) ? ~UINT64_C(0) : UINT64_C(0); + d[1] = + !((*(double *) &a1) > (*(double *) &b1)) ? ~UINT64_C(0) : UINT64_C(0); + + return vreinterpretq_m128d_u64(vld1q_u64(d)); +#endif +} // Compare the lower double-precision (64-bit) floating-point elements in a and // b for not-greater-than, store the result in the lower element of dst, and // copy the upper element from a to the upper element of dst. 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmpngt_sd -#define _mm_cmpngt_sd(a, b) _mm_cmple_sd(a, b) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpngt_sd +FORCE_INLINE __m128d _mm_cmpngt_sd(__m128d a, __m128d b) +{ + return _mm_move_sd(a, _mm_cmpngt_pd(a, b)); +} // Compare packed double-precision (64-bit) floating-point elements in a and b // for not-less-than-or-equal, and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmpnle_pd -#define _mm_cmpnle_pd(a, b) _mm_cmpgt_pd(a, b) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpnle_pd +FORCE_INLINE __m128d _mm_cmpnle_pd(__m128d a, __m128d b) +{ +#if defined(__aarch64__) || defined(_M_ARM64) + return vreinterpretq_m128d_u64(veorq_u64( + vcleq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b)), + vdupq_n_u64(UINT64_MAX))); +#else + uint64_t a0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(a)); + uint64_t a1 = (uint64_t) vget_high_u64(vreinterpretq_u64_m128d(a)); + uint64_t b0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(b)); + uint64_t b1 = (uint64_t) vget_high_u64(vreinterpretq_u64_m128d(b)); + uint64_t d[2]; + d[0] = + !((*(double *) &a0) <= (*(double *) &b0)) ? ~UINT64_C(0) : UINT64_C(0); + d[1] = + !((*(double *) &a1) <= (*(double *) &b1)) ? ~UINT64_C(0) : UINT64_C(0); + + return vreinterpretq_m128d_u64(vld1q_u64(d)); +#endif +} // Compare the lower double-precision (64-bit) floating-point elements in a and // b for not-less-than-or-equal, store the result in the lower element of dst, // and copy the upper element from a to the upper element of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmpnle_sd -#define _mm_cmpnle_sd(a, b) _mm_cmpgt_sd(a, b) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpnle_sd +FORCE_INLINE __m128d _mm_cmpnle_sd(__m128d a, __m128d b) +{ + return _mm_move_sd(a, _mm_cmpnle_pd(a, b)); +} // Compare packed double-precision (64-bit) floating-point elements in a and b // for not-less-than, and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmpnlt_pd -#define _mm_cmpnlt_pd(a, b) _mm_cmpge_pd(a, b) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpnlt_pd +FORCE_INLINE __m128d _mm_cmpnlt_pd(__m128d a, __m128d b) +{ +#if defined(__aarch64__) || defined(_M_ARM64) + return vreinterpretq_m128d_u64(veorq_u64( + vcltq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b)), + vdupq_n_u64(UINT64_MAX))); +#else + uint64_t a0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(a)); + uint64_t a1 = (uint64_t) vget_high_u64(vreinterpretq_u64_m128d(a)); + uint64_t b0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(b)); + uint64_t b1 = (uint64_t) vget_high_u64(vreinterpretq_u64_m128d(b)); + uint64_t d[2]; + d[0] = + !((*(double *) &a0) < (*(double *) &b0)) ? ~UINT64_C(0) : UINT64_C(0); + d[1] = + !((*(double *) &a1) < (*(double *) &b1)) ? ~UINT64_C(0) : UINT64_C(0); + + return vreinterpretq_m128d_u64(vld1q_u64(d)); +#endif +} // Compare the lower double-precision (64-bit) floating-point elements in a and // b for not-less-than, store the result in the lower element of dst, and copy // the upper element from a to the upper element of dst. 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmpnlt_sd -#define _mm_cmpnlt_sd(a, b) _mm_cmpge_sd(a, b) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpnlt_sd +FORCE_INLINE __m128d _mm_cmpnlt_sd(__m128d a, __m128d b) +{ + return _mm_move_sd(a, _mm_cmpnlt_pd(a, b)); +} // Compare packed double-precision (64-bit) floating-point elements in a and b // to see if neither is NaN, and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmpord_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpord_pd FORCE_INLINE __m128d _mm_cmpord_pd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) // Excluding NaNs, any two floating point numbers can be compared. uint64x2_t not_nan_a = vceqq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(a)); @@ -3469,10 +3664,10 @@ FORCE_INLINE __m128d _mm_cmpord_pd(__m128d a, __m128d b) // Compare the lower double-precision (64-bit) floating-point elements in a and // b to see if neither is NaN, store the result in the lower element of dst, and // copy the upper element from a to the upper element of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmpord_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpord_sd FORCE_INLINE __m128d _mm_cmpord_sd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return _mm_move_sd(a, _mm_cmpord_pd(a, b)); #else uint64_t a0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(a)); @@ -3491,10 +3686,10 @@ FORCE_INLINE __m128d _mm_cmpord_sd(__m128d a, __m128d b) // Compare packed double-precision (64-bit) floating-point elements in a and b // to see if either is NaN, and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmpunord_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpunord_pd FORCE_INLINE __m128d _mm_cmpunord_pd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) // Two NaNs are not equal in comparison operation. uint64x2_t not_nan_a = vceqq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(a)); @@ -3524,10 +3719,10 @@ FORCE_INLINE __m128d _mm_cmpunord_pd(__m128d a, __m128d b) // Compare the lower double-precision (64-bit) floating-point elements in a and // b to see if either is NaN, store the result in the lower element of dst, and // copy the upper element from a to the upper element of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cmpunord_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpunord_sd FORCE_INLINE __m128d _mm_cmpunord_sd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return _mm_move_sd(a, _mm_cmpunord_pd(a, b)); #else uint64_t a0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(a)); @@ -3544,13 +3739,73 @@ FORCE_INLINE __m128d _mm_cmpunord_sd(__m128d a, __m128d b) #endif } +// Compare the lower double-precision (64-bit) floating-point element in a and b +// for greater-than-or-equal, and return the boolean result (0 or 1). 
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_comige_sd +FORCE_INLINE int _mm_comige_sd(__m128d a, __m128d b) +{ +#if defined(__aarch64__) || defined(_M_ARM64) + return vgetq_lane_u64(vcgeq_f64(a, b), 0) & 0x1; +#else + uint64_t a0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(a)); + uint64_t b0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(b)); + + return (*(double *) &a0 >= *(double *) &b0); +#endif +} + +// Compare the lower double-precision (64-bit) floating-point element in a and b +// for greater-than, and return the boolean result (0 or 1). +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_comigt_sd +FORCE_INLINE int _mm_comigt_sd(__m128d a, __m128d b) +{ +#if defined(__aarch64__) || defined(_M_ARM64) + return vgetq_lane_u64(vcgtq_f64(a, b), 0) & 0x1; +#else + uint64_t a0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(a)); + uint64_t b0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(b)); + + return (*(double *) &a0 > *(double *) &b0); +#endif +} + +// Compare the lower double-precision (64-bit) floating-point element in a and b +// for less-than-or-equal, and return the boolean result (0 or 1). +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_comile_sd +FORCE_INLINE int _mm_comile_sd(__m128d a, __m128d b) +{ +#if defined(__aarch64__) || defined(_M_ARM64) + return vgetq_lane_u64(vcleq_f64(a, b), 0) & 0x1; +#else + uint64_t a0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(a)); + uint64_t b0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(b)); + + return (*(double *) &a0 <= *(double *) &b0); +#endif +} + +// Compare the lower double-precision (64-bit) floating-point element in a and b +// for less-than, and return the boolean result (0 or 1). +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_comilt_sd +FORCE_INLINE int _mm_comilt_sd(__m128d a, __m128d b) +{ +#if defined(__aarch64__) || defined(_M_ARM64) + return vgetq_lane_u64(vcltq_f64(a, b), 0) & 0x1; +#else + uint64_t a0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(a)); + uint64_t b0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(b)); + + return (*(double *) &a0 < *(double *) &b0); +#endif +} + // Compare the lower double-precision (64-bit) floating-point element in a and b // for equality, and return the boolean result (0 or 1). -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_comieq_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_comieq_sd FORCE_INLINE int _mm_comieq_sd(__m128d a, __m128d b) { -#if defined(__aarch64__) - return !!vgetq_lane_u64(vceqq_f64(a, b), 0); +#if defined(__aarch64__) || defined(_M_ARM64) + return vgetq_lane_u64(vceqq_f64(a, b), 0) & 0x1; #else uint32x4_t a_not_nan = vceqq_u32(vreinterpretq_u32_m128d(a), vreinterpretq_u32_m128d(a)); @@ -3561,38 +3816,24 @@ FORCE_INLINE int _mm_comieq_sd(__m128d a, __m128d b) vceqq_u32(vreinterpretq_u32_m128d(a), vreinterpretq_u32_m128d(b)); uint64x2_t and_results = vandq_u64(vreinterpretq_u64_u32(a_and_b_not_nan), vreinterpretq_u64_u32(a_eq_b)); - return !!vgetq_lane_u64(and_results, 0); + return vgetq_lane_u64(and_results, 0) & 0x1; #endif } // Compare the lower double-precision (64-bit) floating-point element in a and b // for not-equal, and return the boolean result (0 or 1). 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_comineq_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_comineq_sd FORCE_INLINE int _mm_comineq_sd(__m128d a, __m128d b) { -#if defined(__aarch64__) - return !vgetq_lane_u64(vceqq_f64(a, b), 0); -#else - // FIXME we should handle NaN condition here - uint32x4_t a_eq_b = - vceqq_u32(vreinterpretq_u32_m128d(a), vreinterpretq_u32_m128d(b)); - return !vgetq_lane_u64(vreinterpretq_u64_u32(a_eq_b), 0); -#endif + return !_mm_comieq_sd(a, b); } // Convert packed signed 32-bit integers in a to packed double-precision // (64-bit) floating-point elements, and store the results in dst. -// -// FOR j := 0 to 1 -// i := j*32 -// m := j*64 -// dst[m+63:m] := Convert_Int32_To_FP64(a[i+31:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtepi32_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtepi32_pd FORCE_INLINE __m128d _mm_cvtepi32_pd(__m128i a) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64( vcvtq_f64_s64(vmovl_s32(vget_low_s32(vreinterpretq_s32_m128i(a))))); #else @@ -3602,9 +3843,9 @@ FORCE_INLINE __m128d _mm_cvtepi32_pd(__m128i a) #endif } -// Converts the four signed 32-bit integer values of a to single-precision, -// floating-point values -// https://msdn.microsoft.com/en-us/library/vstudio/36bwxcx5(v=vs.100).aspx +// Convert packed signed 32-bit integers in a to packed single-precision +// (32-bit) floating-point elements, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtepi32_ps FORCE_INLINE __m128 _mm_cvtepi32_ps(__m128i a) { return vreinterpretq_m128_f32(vcvtq_f32_s32(vreinterpretq_s32_m128i(a))); @@ -3612,33 +3853,27 @@ FORCE_INLINE __m128 _mm_cvtepi32_ps(__m128i a) // Convert packed double-precision (64-bit) floating-point elements in a to // packed 32-bit integers, and store the results in dst. -// -// FOR j := 0 to 1 -// i := 32*j -// k := 64*j -// dst[i+31:i] := Convert_FP64_To_Int32(a[k+63:k]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtpd_epi32 -FORCE_INLINE __m128i _mm_cvtpd_epi32(__m128d a) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtpd_epi32 +FORCE_INLINE_OPTNONE __m128i _mm_cvtpd_epi32(__m128d a) { +// vrnd32xq_f64 not supported on clang +#if defined(__ARM_FEATURE_FRINT) && !defined(__clang__) + float64x2_t rounded = vrnd32xq_f64(vreinterpretq_f64_m128d(a)); + int64x2_t integers = vcvtq_s64_f64(rounded); + return vreinterpretq_m128i_s32( + vcombine_s32(vmovn_s64(integers), vdup_n_s32(0))); +#else __m128d rnd = _mm_round_pd(a, _MM_FROUND_CUR_DIRECTION); double d0 = ((double *) &rnd)[0]; double d1 = ((double *) &rnd)[1]; return _mm_set_epi32(0, 0, (int32_t) d1, (int32_t) d0); +#endif } // Convert packed double-precision (64-bit) floating-point elements in a to // packed 32-bit integers, and store the results in dst. 
-// -// FOR j := 0 to 1 -// i := 32*j -// k := 64*j -// dst[i+31:i] := Convert_FP64_To_Int32(a[k+63:k]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtpd_pi32 -FORCE_INLINE __m64 _mm_cvtpd_pi32(__m128d a) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtpd_pi32 +FORCE_INLINE_OPTNONE __m64 _mm_cvtpd_pi32(__m128d a) { __m128d rnd = _mm_round_pd(a, _MM_FROUND_CUR_DIRECTION); double d0 = ((double *) &rnd)[0]; @@ -3650,18 +3885,10 @@ FORCE_INLINE __m64 _mm_cvtpd_pi32(__m128d a) // Convert packed double-precision (64-bit) floating-point elements in a to // packed single-precision (32-bit) floating-point elements, and store the // results in dst. -// -// FOR j := 0 to 1 -// i := 32*j -// k := 64*j -// dst[i+31:i] := Convert_FP64_To_FP32(a[k+64:k]) -// ENDFOR -// dst[127:64] := 0 -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtpd_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtpd_ps FORCE_INLINE __m128 _mm_cvtpd_ps(__m128d a) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) float32x2_t tmp = vcvt_f32_f64(vreinterpretq_f64_m128d(a)); return vreinterpretq_m128_f32(vcombine_f32(tmp, vdup_n_f32(0))); #else @@ -3673,17 +3900,10 @@ FORCE_INLINE __m128 _mm_cvtpd_ps(__m128d a) // Convert packed signed 32-bit integers in a to packed double-precision // (64-bit) floating-point elements, and store the results in dst. -// -// FOR j := 0 to 1 -// i := j*32 -// m := j*64 -// dst[m+63:m] := Convert_Int32_To_FP64(a[i+31:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtpi32_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtpi32_pd FORCE_INLINE __m128d _mm_cvtpi32_pd(__m64 a) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64( vcvtq_f64_s64(vmovl_s32(vreinterpret_s32_m64(a)))); #else @@ -3693,20 +3913,17 @@ FORCE_INLINE __m128d _mm_cvtpi32_pd(__m64 a) #endif } -// Converts the four single-precision, floating-point values of a to signed -// 32-bit integer values. -// -// r0 := (int) a0 -// r1 := (int) a1 -// r2 := (int) a2 -// r3 := (int) a3 -// -// https://msdn.microsoft.com/en-us/library/vstudio/xdc42k5e(v=vs.100).aspx +// Convert packed single-precision (32-bit) floating-point elements in a to +// packed 32-bit integers, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtps_epi32 // *NOTE*. The default rounding mode on SSE is 'round to even', which ARMv7-A // does not support! It is supported on ARMv8-A however. FORCE_INLINE __m128i _mm_cvtps_epi32(__m128 a) { -#if defined(__aarch64__) +#if defined(__ARM_FEATURE_FRINT) + return vreinterpretq_m128i_s32(vcvtq_s32_f32(vrnd32xq_f32(a))); +#elif (defined(__aarch64__) || defined(_M_ARM64)) || \ + defined(__ARM_FEATURE_DIRECTED_ROUNDING) switch (_MM_GET_ROUNDING_MODE()) { case _MM_ROUND_NEAREST: return vreinterpretq_m128i_s32(vcvtnq_s32_f32(a)); @@ -3756,17 +3973,10 @@ FORCE_INLINE __m128i _mm_cvtps_epi32(__m128 a) // Convert packed single-precision (32-bit) floating-point elements in a to // packed double-precision (64-bit) floating-point elements, and store the // results in dst. 
-// -// FOR j := 0 to 1 -// i := 64*j -// k := 32*j -// dst[i+63:i] := Convert_FP32_To_FP64(a[k+31:k]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtps_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtps_pd FORCE_INLINE __m128d _mm_cvtps_pd(__m128 a) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64( vcvt_f64_f32(vget_low_f32(vreinterpretq_f32_m128(a)))); #else @@ -3777,13 +3987,10 @@ FORCE_INLINE __m128d _mm_cvtps_pd(__m128 a) } // Copy the lower double-precision (64-bit) floating-point element of a to dst. -// -// dst[63:0] := a[63:0] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtsd_f64 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtsd_f64 FORCE_INLINE double _mm_cvtsd_f64(__m128d a) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return (double) vgetq_lane_f64(vreinterpretq_f64_m128d(a), 0); #else return ((double *) &a)[0]; @@ -3792,13 +3999,10 @@ FORCE_INLINE double _mm_cvtsd_f64(__m128d a) // Convert the lower double-precision (64-bit) floating-point element in a to a // 32-bit integer, and store the result in dst. -// -// dst[31:0] := Convert_FP64_To_Int32(a[63:0]) -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtsd_si32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtsd_si32 FORCE_INLINE int32_t _mm_cvtsd_si32(__m128d a) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return (int32_t) vgetq_lane_f64(vrndiq_f64(vreinterpretq_f64_m128d(a)), 0); #else __m128d rnd = _mm_round_pd(a, _MM_FROUND_CUR_DIRECTION); @@ -3809,13 +4013,10 @@ FORCE_INLINE int32_t _mm_cvtsd_si32(__m128d a) // Convert the lower double-precision (64-bit) floating-point element in a to a // 64-bit integer, and store the result in dst. -// -// dst[63:0] := Convert_FP64_To_Int64(a[63:0]) -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtsd_si64 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtsd_si64 FORCE_INLINE int64_t _mm_cvtsd_si64(__m128d a) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return (int64_t) vgetq_lane_f64(vrndiq_f64(vreinterpretq_f64_m128d(a)), 0); #else __m128d rnd = _mm_round_pd(a, _MM_FROUND_CUR_DIRECTION); @@ -3826,20 +4027,17 @@ FORCE_INLINE int64_t _mm_cvtsd_si64(__m128d a) // Convert the lower double-precision (64-bit) floating-point element in a to a // 64-bit integer, and store the result in dst. -// -// dst[63:0] := Convert_FP64_To_Int64(a[63:0]) -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtsd_si64x +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtsd_si64x #define _mm_cvtsd_si64x _mm_cvtsd_si64 // Convert the lower double-precision (64-bit) floating-point element in b to a // single-precision (32-bit) floating-point element, store the result in the // lower element of dst, and copy the upper 3 packed elements from a to the // upper elements of dst. 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtsd_ss +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtsd_ss FORCE_INLINE __m128 _mm_cvtsd_ss(__m128 a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128_f32(vsetq_lane_f32( vget_lane_f32(vcvt_f32_f64(vreinterpretq_f64_m128d(b)), 0), vreinterpretq_f32_m128(a), 0)); @@ -3850,36 +4048,30 @@ FORCE_INLINE __m128 _mm_cvtsd_ss(__m128 a, __m128d b) } // Copy the lower 32-bit integer in a to dst. -// -// dst[31:0] := a[31:0] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtsi128_si32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtsi128_si32 FORCE_INLINE int _mm_cvtsi128_si32(__m128i a) { return vgetq_lane_s32(vreinterpretq_s32_m128i(a), 0); } // Copy the lower 64-bit integer in a to dst. -// -// dst[63:0] := a[63:0] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtsi128_si64 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtsi128_si64 FORCE_INLINE int64_t _mm_cvtsi128_si64(__m128i a) { return vgetq_lane_s64(vreinterpretq_s64_m128i(a), 0); } // Copy the lower 64-bit integer in a to dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtsi128_si64x +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtsi128_si64x #define _mm_cvtsi128_si64x(a) _mm_cvtsi128_si64(a) // Convert the signed 32-bit integer b to a double-precision (64-bit) // floating-point element, store the result in the lower element of dst, and // copy the upper element from a to the upper element of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtsi32_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtsi32_sd FORCE_INLINE __m128d _mm_cvtsi32_sd(__m128d a, int32_t b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64( vsetq_lane_f64((double) b, vreinterpretq_f64_m128d(a), 0)); #else @@ -3890,21 +4082,12 @@ FORCE_INLINE __m128d _mm_cvtsi32_sd(__m128d a, int32_t b) } // Copy the lower 64-bit integer in a to dst. -// -// dst[63:0] := a[63:0] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtsi128_si64x +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtsi128_si64x #define _mm_cvtsi128_si64x(a) _mm_cvtsi128_si64(a) -// Moves 32-bit integer a to the least significant 32 bits of an __m128 object, -// zero extending the upper bits. -// -// r0 := a -// r1 := 0x0 -// r2 := 0x0 -// r3 := 0x0 -// -// https://msdn.microsoft.com/en-us/library/ct3539ha%28v=vs.90%29.aspx +// Copy 32-bit integer a to the lower elements of dst, and zero the upper +// elements of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtsi32_si128 FORCE_INLINE __m128i _mm_cvtsi32_si128(int a) { return vreinterpretq_m128i_s32(vsetq_lane_s32(a, vdupq_n_s32(0), 0)); @@ -3913,10 +4096,10 @@ FORCE_INLINE __m128i _mm_cvtsi32_si128(int a) // Convert the signed 64-bit integer b to a double-precision (64-bit) // floating-point element, store the result in the lower element of dst, and // copy the upper element from a to the upper element of dst. 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtsi64_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtsi64_sd FORCE_INLINE __m128d _mm_cvtsi64_sd(__m128d a, int64_t b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64( vsetq_lane_f64((double) b, vreinterpretq_f64_m128d(a), 0)); #else @@ -3926,11 +4109,9 @@ FORCE_INLINE __m128d _mm_cvtsi64_sd(__m128d a, int64_t b) #endif } -// Moves 64-bit integer a to the least significant 64 bits of an __m128 object, -// zero extending the upper bits. -// -// r0 := a -// r1 := 0x0 +// Copy 64-bit integer a to the lower element of dst, and zero the upper +// element. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtsi64_si128 FORCE_INLINE __m128i _mm_cvtsi64_si128(int64_t a) { return vreinterpretq_m128i_s64(vsetq_lane_s64(a, vdupq_n_s64(0), 0)); @@ -3938,28 +4119,24 @@ FORCE_INLINE __m128i _mm_cvtsi64_si128(int64_t a) // Copy 64-bit integer a to the lower element of dst, and zero the upper // element. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtsi64x_si128 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtsi64x_si128 #define _mm_cvtsi64x_si128(a) _mm_cvtsi64_si128(a) // Convert the signed 64-bit integer b to a double-precision (64-bit) // floating-point element, store the result in the lower element of dst, and // copy the upper element from a to the upper element of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtsi64x_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtsi64x_sd #define _mm_cvtsi64x_sd(a, b) _mm_cvtsi64_sd(a, b) // Convert the lower single-precision (32-bit) floating-point element in b to a // double-precision (64-bit) floating-point element, store the result in the // lower element of dst, and copy the upper element from a to the upper element // of dst. -// -// dst[63:0] := Convert_FP32_To_FP64(b[31:0]) -// dst[127:64] := a[127:64] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtss_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtss_sd FORCE_INLINE __m128d _mm_cvtss_sd(__m128d a, __m128 b) { double d = (double) vgetq_lane_f32(vreinterpretq_f32_m128(b), 0); -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64( vsetq_lane_f64(d, vreinterpretq_f64_m128d(a), 0)); #else @@ -3970,7 +4147,7 @@ FORCE_INLINE __m128d _mm_cvtss_sd(__m128d a, __m128 b) // Convert packed double-precision (64-bit) floating-point elements in a to // packed 32-bit integers with truncation, and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvttpd_epi32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvttpd_epi32 FORCE_INLINE __m128i _mm_cvttpd_epi32(__m128d a) { double a0 = ((double *) &a)[0]; @@ -3980,7 +4157,7 @@ FORCE_INLINE __m128i _mm_cvttpd_epi32(__m128d a) // Convert packed double-precision (64-bit) floating-point elements in a to // packed 32-bit integers with truncation, and store the results in dst. 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvttpd_pi32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvttpd_pi32 FORCE_INLINE __m64 _mm_cvttpd_pi32(__m128d a) { double a0 = ((double *) &a)[0]; @@ -3989,9 +4166,9 @@ FORCE_INLINE __m64 _mm_cvttpd_pi32(__m128d a) return vreinterpret_m64_s32(vld1_s32(data)); } -// Converts the four single-precision, floating-point values of a to signed -// 32-bit integer values using truncate. -// https://msdn.microsoft.com/en-us/library/vstudio/1h005y6x(v=vs.100).aspx +// Convert packed single-precision (32-bit) floating-point elements in a to +// packed 32-bit integers with truncation, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvttps_epi32 FORCE_INLINE __m128i _mm_cvttps_epi32(__m128 a) { return vreinterpretq_m128i_s32(vcvtq_s32_f32(vreinterpretq_f32_m128(a))); @@ -3999,10 +4176,7 @@ FORCE_INLINE __m128i _mm_cvttps_epi32(__m128 a) // Convert the lower double-precision (64-bit) floating-point element in a to a // 32-bit integer with truncation, and store the result in dst. -// -// dst[63:0] := Convert_FP64_To_Int32_Truncate(a[63:0]) -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvttsd_si32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvttsd_si32 FORCE_INLINE int32_t _mm_cvttsd_si32(__m128d a) { double ret = *((double *) &a); @@ -4011,13 +4185,10 @@ FORCE_INLINE int32_t _mm_cvttsd_si32(__m128d a) // Convert the lower double-precision (64-bit) floating-point element in a to a // 64-bit integer with truncation, and store the result in dst. -// -// dst[63:0] := Convert_FP64_To_Int64_Truncate(a[63:0]) -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvttsd_si64 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvttsd_si64 FORCE_INLINE int64_t _mm_cvttsd_si64(__m128d a) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vgetq_lane_s64(vcvtq_s64_f64(vreinterpretq_f64_m128d(a)), 0); #else double ret = *((double *) &a); @@ -4027,24 +4198,15 @@ FORCE_INLINE int64_t _mm_cvttsd_si64(__m128d a) // Convert the lower double-precision (64-bit) floating-point element in a to a // 64-bit integer with truncation, and store the result in dst. -// -// dst[63:0] := Convert_FP64_To_Int64_Truncate(a[63:0]) -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvttsd_si64x +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvttsd_si64x #define _mm_cvttsd_si64x(a) _mm_cvttsd_si64(a) // Divide packed double-precision (64-bit) floating-point elements in a by // packed elements in b, and store the results in dst. 
-// -// FOR j := 0 to 1 -// i := 64*j -// dst[i+63:i] := a[i+63:i] / b[i+63:i] -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_div_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_div_pd FORCE_INLINE __m128d _mm_div_pd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64( vdivq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b))); #else @@ -4061,10 +4223,10 @@ FORCE_INLINE __m128d _mm_div_pd(__m128d a, __m128d b) // lower double-precision (64-bit) floating-point element in b, store the result // in the lower element of dst, and copy the upper element from a to the upper // element of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_div_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_div_sd FORCE_INLINE __m128d _mm_div_sd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) float64x2_t tmp = vdivq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b)); return vreinterpretq_m128d_f64( @@ -4074,33 +4236,29 @@ FORCE_INLINE __m128d _mm_div_sd(__m128d a, __m128d b) #endif } -// Extracts the selected signed or unsigned 16-bit integer from a and zero -// extends. -// https://msdn.microsoft.com/en-us/library/6dceta0c(v=vs.100).aspx +// Extract a 16-bit integer from a, selected with imm8, and store the result in +// the lower element of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_extract_epi16 // FORCE_INLINE int _mm_extract_epi16(__m128i a, __constrange(0,8) int imm) #define _mm_extract_epi16(a, imm) \ vgetq_lane_u16(vreinterpretq_u16_m128i(a), (imm)) -// Inserts the least significant 16 bits of b into the selected 16-bit integer -// of a. -// https://msdn.microsoft.com/en-us/library/kaze8hz1%28v=vs.100%29.aspx +// Copy a to dst, and insert the 16-bit integer i into dst at the location +// specified by imm8. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_insert_epi16 // FORCE_INLINE __m128i _mm_insert_epi16(__m128i a, int b, // __constrange(0,8) int imm) -#define _mm_insert_epi16(a, b, imm) \ - __extension__({ \ - vreinterpretq_m128i_s16( \ - vsetq_lane_s16((b), vreinterpretq_s16_m128i(a), (imm))); \ - }) +#define _mm_insert_epi16(a, b, imm) \ + vreinterpretq_m128i_s16( \ + vsetq_lane_s16((b), vreinterpretq_s16_m128i(a), (imm))) -// Loads two double-precision from 16-byte aligned memory, floating-point -// values. -// -// dst[127:0] := MEM[mem_addr+127:mem_addr] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_load_pd +// Load 128-bits (composed of 2 packed double-precision (64-bit) floating-point +// elements) from memory into dst. mem_addr must be aligned on a 16-byte +// boundary or a general-protection exception may be generated. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_load_pd FORCE_INLINE __m128d _mm_load_pd(const double *p) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64(vld1q_f64(p)); #else const float *fp = (const float *) p; @@ -4111,24 +4269,16 @@ FORCE_INLINE __m128d _mm_load_pd(const double *p) // Load a double-precision (64-bit) floating-point element from memory into both // elements of dst. 
-// -// dst[63:0] := MEM[mem_addr+63:mem_addr] -// dst[127:64] := MEM[mem_addr+63:mem_addr] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_load_pd1 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_load_pd1 #define _mm_load_pd1 _mm_load1_pd // Load a double-precision (64-bit) floating-point element from memory into the // lower of dst, and zero the upper element. mem_addr does not need to be // aligned on any particular boundary. -// -// dst[63:0] := MEM[mem_addr+63:mem_addr] -// dst[127:64] := 0 -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_load_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_load_sd FORCE_INLINE __m128d _mm_load_sd(const double *p) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64(vsetq_lane_f64(*p, vdupq_n_f64(0), 0)); #else const float *fp = (const float *) p; @@ -4137,8 +4287,9 @@ FORCE_INLINE __m128d _mm_load_sd(const double *p) #endif } -// Loads 128-bit value. : -// https://msdn.microsoft.com/en-us/library/atzzad1h(v=vs.80).aspx +// Load 128-bits of integer data from memory into dst. mem_addr must be aligned +// on a 16-byte boundary or a general-protection exception may be generated. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_load_si128 FORCE_INLINE __m128i _mm_load_si128(const __m128i *p) { return vreinterpretq_m128i_s32(vld1q_s32((const int32_t *) p)); @@ -4146,14 +4297,10 @@ FORCE_INLINE __m128i _mm_load_si128(const __m128i *p) // Load a double-precision (64-bit) floating-point element from memory into both // elements of dst. -// -// dst[63:0] := MEM[mem_addr+63:mem_addr] -// dst[127:64] := MEM[mem_addr+63:mem_addr] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_load1_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_load1_pd FORCE_INLINE __m128d _mm_load1_pd(const double *p) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64(vld1q_dup_f64(p)); #else return vreinterpretq_m128d_s64(vdupq_n_s64(*(const int64_t *) p)); @@ -4163,14 +4310,10 @@ FORCE_INLINE __m128d _mm_load1_pd(const double *p) // Load a double-precision (64-bit) floating-point element from memory into the // upper element of dst, and copy the lower element from a to dst. mem_addr does // not need to be aligned on any particular boundary. -// -// dst[63:0] := a[63:0] -// dst[127:64] := MEM[mem_addr+63:mem_addr] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_loadh_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_loadh_pd FORCE_INLINE __m128d _mm_loadh_pd(__m128d a, const double *p) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64( vcombine_f64(vget_low_f64(vreinterpretq_f64_m128d(a)), vld1_f64(p))); #else @@ -4180,7 +4323,7 @@ FORCE_INLINE __m128d _mm_loadh_pd(__m128d a, const double *p) } // Load 64-bit integer from memory into the first element of dst. 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_loadl_epi64 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_loadl_epi64 FORCE_INLINE __m128i _mm_loadl_epi64(__m128i const *p) { /* Load the lower 64 bits of the value pointed to by p into the @@ -4193,14 +4336,10 @@ FORCE_INLINE __m128i _mm_loadl_epi64(__m128i const *p) // Load a double-precision (64-bit) floating-point element from memory into the // lower element of dst, and copy the upper element from a to dst. mem_addr does // not need to be aligned on any particular boundary. -// -// dst[63:0] := MEM[mem_addr+63:mem_addr] -// dst[127:64] := a[127:64] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_loadl_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_loadl_pd FORCE_INLINE __m128d _mm_loadl_pd(__m128d a, const double *p) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64( vcombine_f64(vld1_f64(p), vget_high_f64(vreinterpretq_f64_m128d(a)))); #else @@ -4213,14 +4352,10 @@ FORCE_INLINE __m128d _mm_loadl_pd(__m128d a, const double *p) // Load 2 double-precision (64-bit) floating-point elements from memory into dst // in reverse order. mem_addr must be aligned on a 16-byte boundary or a // general-protection exception may be generated. -// -// dst[63:0] := MEM[mem_addr+127:mem_addr+64] -// dst[127:64] := MEM[mem_addr+63:mem_addr] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_loadr_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_loadr_pd FORCE_INLINE __m128d _mm_loadr_pd(const double *p) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) float64x2_t v = vld1q_f64(p); return vreinterpretq_m128d_f64(vextq_f64(v, v, 1)); #else @@ -4230,57 +4365,57 @@ FORCE_INLINE __m128d _mm_loadr_pd(const double *p) } // Loads two double-precision from unaligned memory, floating-point values. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_loadu_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_loadu_pd FORCE_INLINE __m128d _mm_loadu_pd(const double *p) { return _mm_load_pd(p); } -// Loads 128-bit value. : -// https://msdn.microsoft.com/zh-cn/library/f4k12ae8(v=vs.90).aspx +// Load 128-bits of integer data from memory into dst. mem_addr does not need to +// be aligned on any particular boundary. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_loadu_si128 FORCE_INLINE __m128i _mm_loadu_si128(const __m128i *p) { - return vreinterpretq_m128i_s32(vld1q_s32((const int32_t *) p)); + return vreinterpretq_m128i_s32(vld1q_s32((const unaligned_int32_t *) p)); } // Load unaligned 32-bit integer from memory into the first element of dst. -// -// dst[31:0] := MEM[mem_addr+31:mem_addr] -// dst[MAX:32] := 0 -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_loadu_si32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_loadu_si32 FORCE_INLINE __m128i _mm_loadu_si32(const void *p) { return vreinterpretq_m128i_s32( - vsetq_lane_s32(*(const int32_t *) p, vdupq_n_s32(0), 0)); + vsetq_lane_s32(*(const unaligned_int32_t *) p, vdupq_n_s32(0), 0)); } -// Multiplies the 8 signed 16-bit integers from a by the 8 signed 16-bit -// integers from b. 
-// -// r0 := (a0 * b0) + (a1 * b1) -// r1 := (a2 * b2) + (a3 * b3) -// r2 := (a4 * b4) + (a5 * b5) -// r3 := (a6 * b6) + (a7 * b7) -// https://msdn.microsoft.com/en-us/library/yht36sa6(v=vs.90).aspx +// Multiply packed signed 16-bit integers in a and b, producing intermediate +// signed 32-bit integers. Horizontally add adjacent pairs of intermediate +// 32-bit integers, and pack the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_madd_epi16 FORCE_INLINE __m128i _mm_madd_epi16(__m128i a, __m128i b) { int32x4_t low = vmull_s16(vget_low_s16(vreinterpretq_s16_m128i(a)), vget_low_s16(vreinterpretq_s16_m128i(b))); - int32x4_t high = vmull_s16(vget_high_s16(vreinterpretq_s16_m128i(a)), +#if defined(__aarch64__) || defined(_M_ARM64) + int32x4_t high = + vmull_high_s16(vreinterpretq_s16_m128i(a), vreinterpretq_s16_m128i(b)); + + return vreinterpretq_m128i_s32(vpaddq_s32(low, high)); +#else + int32x4_t high = vmull_s16(vget_high_s16(vreinterpretq_s16_m128i(a)), vget_high_s16(vreinterpretq_s16_m128i(b))); int32x2_t low_sum = vpadd_s32(vget_low_s32(low), vget_high_s32(low)); int32x2_t high_sum = vpadd_s32(vget_low_s32(high), vget_high_s32(high)); return vreinterpretq_m128i_s32(vcombine_s32(low_sum, high_sum)); +#endif } // Conditionally store 8-bit integer elements from a into memory using mask // (elements are not stored when the highest bit is not set in the corresponding // element) and a non-temporal memory hint. mem_addr does not need to be aligned // on any particular boundary. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_maskmoveu_si128 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_maskmoveu_si128 FORCE_INLINE void _mm_maskmoveu_si128(__m128i a, __m128i mask, char *mem_addr) { int8x16_t shr_mask = vshrq_n_s8(vreinterpretq_s8_m128i(mask), 7); @@ -4291,18 +4426,18 @@ FORCE_INLINE void _mm_maskmoveu_si128(__m128i a, __m128i mask, char *mem_addr) vst1q_s8((int8_t *) mem_addr, masked); } -// Computes the pairwise maxima of the 8 signed 16-bit integers from a and the 8 -// signed 16-bit integers from b. -// https://msdn.microsoft.com/en-us/LIBRary/3x060h7c(v=vs.100).aspx +// Compare packed signed 16-bit integers in a and b, and store packed maximum +// values in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_max_epi16 FORCE_INLINE __m128i _mm_max_epi16(__m128i a, __m128i b) { return vreinterpretq_m128i_s16( vmaxq_s16(vreinterpretq_s16_m128i(a), vreinterpretq_s16_m128i(b))); } -// Computes the pairwise maxima of the 16 unsigned 8-bit integers from a and the -// 16 unsigned 8-bit integers from b. -// https://msdn.microsoft.com/en-us/library/st6634za(v=vs.100).aspx +// Compare packed unsigned 8-bit integers in a and b, and store packed maximum +// values in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_max_epu8 FORCE_INLINE __m128i _mm_max_epu8(__m128i a, __m128i b) { return vreinterpretq_m128i_u8( @@ -4311,12 +4446,18 @@ FORCE_INLINE __m128i _mm_max_epu8(__m128i a, __m128i b) // Compare packed double-precision (64-bit) floating-point elements in a and b, // and store packed maximum values in dst. 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_max_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_max_pd FORCE_INLINE __m128d _mm_max_pd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) +#if SSE2NEON_PRECISE_MINMAX + float64x2_t _a = vreinterpretq_f64_m128d(a); + float64x2_t _b = vreinterpretq_f64_m128d(b); + return vreinterpretq_m128d_f64(vbslq_f64(vcgtq_f64(_a, _b), _a, _b)); +#else return vreinterpretq_m128d_f64( vmaxq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b))); +#endif #else uint64_t a0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(a)); uint64_t a1 = (uint64_t) vget_high_u64(vreinterpretq_u64_m128d(a)); @@ -4333,31 +4474,31 @@ FORCE_INLINE __m128d _mm_max_pd(__m128d a, __m128d b) // Compare the lower double-precision (64-bit) floating-point elements in a and // b, store the maximum value in the lower element of dst, and copy the upper // element from a to the upper element of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_max_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_max_sd FORCE_INLINE __m128d _mm_max_sd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return _mm_move_sd(a, _mm_max_pd(a, b)); #else double *da = (double *) &a; double *db = (double *) &b; - double c[2] = {fmax(da[0], db[0]), da[1]}; - return vld1q_f32((float32_t *) c); + double c[2] = {da[0] > db[0] ? da[0] : db[0], da[1]}; + return vreinterpretq_m128d_f32(vld1q_f32((float32_t *) c)); #endif } -// Computes the pairwise minima of the 8 signed 16-bit integers from a and the 8 -// signed 16-bit integers from b. -// https://msdn.microsoft.com/en-us/library/vstudio/6te997ew(v=vs.100).aspx +// Compare packed signed 16-bit integers in a and b, and store packed minimum +// values in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_min_epi16 FORCE_INLINE __m128i _mm_min_epi16(__m128i a, __m128i b) { return vreinterpretq_m128i_s16( vminq_s16(vreinterpretq_s16_m128i(a), vreinterpretq_s16_m128i(b))); } -// Computes the pairwise minima of the 16 unsigned 8-bit integers from a and the -// 16 unsigned 8-bit integers from b. -// https://msdn.microsoft.com/ko-kr/library/17k8cf58(v=vs.100).aspxx +// Compare packed unsigned 8-bit integers in a and b, and store packed minimum +// values in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_min_epu8 FORCE_INLINE __m128i _mm_min_epu8(__m128i a, __m128i b) { return vreinterpretq_m128i_u8( @@ -4366,12 +4507,18 @@ FORCE_INLINE __m128i _mm_min_epu8(__m128i a, __m128i b) // Compare packed double-precision (64-bit) floating-point elements in a and b, // and store packed minimum values in dst. 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_min_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_min_pd FORCE_INLINE __m128d _mm_min_pd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) +#if SSE2NEON_PRECISE_MINMAX + float64x2_t _a = vreinterpretq_f64_m128d(a); + float64x2_t _b = vreinterpretq_f64_m128d(b); + return vreinterpretq_m128d_f64(vbslq_f64(vcltq_f64(_a, _b), _a, _b)); +#else return vreinterpretq_m128d_f64( vminq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b))); +#endif #else uint64_t a0 = (uint64_t) vget_low_u64(vreinterpretq_u64_m128d(a)); uint64_t a1 = (uint64_t) vget_high_u64(vreinterpretq_u64_m128d(a)); @@ -4387,26 +4534,22 @@ FORCE_INLINE __m128d _mm_min_pd(__m128d a, __m128d b) // Compare the lower double-precision (64-bit) floating-point elements in a and // b, store the minimum value in the lower element of dst, and copy the upper // element from a to the upper element of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_min_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_min_sd FORCE_INLINE __m128d _mm_min_sd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return _mm_move_sd(a, _mm_min_pd(a, b)); #else double *da = (double *) &a; double *db = (double *) &b; - double c[2] = {fmin(da[0], db[0]), da[1]}; - return vld1q_f32((float32_t *) c); + double c[2] = {da[0] < db[0] ? da[0] : db[0], da[1]}; + return vreinterpretq_m128d_f32(vld1q_f32((float32_t *) c)); #endif } // Copy the lower 64-bit integer in a to the lower element of dst, and zero the // upper element. -// -// dst[63:0] := a[63:0] -// dst[127:64] := 0 -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_move_epi64 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_move_epi64 FORCE_INLINE __m128i _mm_move_epi64(__m128i a) { return vreinterpretq_m128i_s64( @@ -4416,11 +4559,7 @@ FORCE_INLINE __m128i _mm_move_epi64(__m128i a) // Move the lower double-precision (64-bit) floating-point element from b to the // lower element of dst, and copy the upper element from a to the upper element // of dst. -// -// dst[63:0] := b[63:0] -// dst[127:64] := a[127:64] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_move_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_move_sd FORCE_INLINE __m128d _mm_move_sd(__m128d a, __m128d b) { return vreinterpretq_m128d_f32( @@ -4428,10 +4567,9 @@ FORCE_INLINE __m128d _mm_move_sd(__m128d a, __m128d b) vget_high_f32(vreinterpretq_f32_m128d(a)))); } -// NEON does not provide a version of this function. -// Creates a 16-bit mask from the most significant bits of the 16 signed or -// unsigned 8-bit integers in a and zero extends the upper bits. -// https://msdn.microsoft.com/en-us/library/vstudio/s090c8fk(v=vs.100).aspx +// Create mask from the most significant bit of each 8-bit element in a, and +// store the result in dst. 
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_movemask_epi8 FORCE_INLINE int _mm_movemask_epi8(__m128i a) { // Use increasingly wide shifts+adds to collect the sign bits @@ -4514,19 +4652,17 @@ FORCE_INLINE int _mm_movemask_epi8(__m128i a) // Set each bit of mask dst based on the most significant bit of the // corresponding packed double-precision (64-bit) floating-point element in a. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_movemask_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_movemask_pd FORCE_INLINE int _mm_movemask_pd(__m128d a) { uint64x2_t input = vreinterpretq_u64_m128d(a); uint64x2_t high_bits = vshrq_n_u64(input, 63); - return vgetq_lane_u64(high_bits, 0) | (vgetq_lane_u64(high_bits, 1) << 1); + return (int) (vgetq_lane_u64(high_bits, 0) | + (vgetq_lane_u64(high_bits, 1) << 1)); } // Copy the lower 64-bit integer in a to dst. -// -// dst[63:0] := a[63:0] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_movepi64_pi64 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_movepi64_pi64 FORCE_INLINE __m64 _mm_movepi64_pi64(__m128i a) { return vreinterpret_m64_s64(vget_low_s64(vreinterpretq_s64_m128i(a))); @@ -4534,11 +4670,7 @@ FORCE_INLINE __m64 _mm_movepi64_pi64(__m128i a) // Copy the 64-bit integer a to the lower element of dst, and zero the upper // element. -// -// dst[63:0] := a[63:0] -// dst[127:64] := 0 -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_movpi64_epi64 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_movpi64_epi64 FORCE_INLINE __m128i _mm_movpi64_epi64(__m64 a) { return vreinterpretq_m128i_s64( @@ -4547,9 +4679,7 @@ FORCE_INLINE __m128i _mm_movpi64_epi64(__m64 a) // Multiply the low unsigned 32-bit integers from each packed 64-bit element in // a and b, and store the unsigned 64-bit results in dst. -// -// r0 := (a0 & 0xFFFFFFFF) * (b0 & 0xFFFFFFFF) -// r1 := (a2 & 0xFFFFFFFF) * (b2 & 0xFFFFFFFF) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_mul_epu32 FORCE_INLINE __m128i _mm_mul_epu32(__m128i a, __m128i b) { // vmull_u32 upcasts instead of masking, so we downcast. @@ -4560,10 +4690,10 @@ FORCE_INLINE __m128i _mm_mul_epu32(__m128i a, __m128i b) // Multiply packed double-precision (64-bit) floating-point elements in a and b, // and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_mul_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_mul_pd FORCE_INLINE __m128d _mm_mul_pd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64( vmulq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b))); #else @@ -4579,7 +4709,7 @@ FORCE_INLINE __m128d _mm_mul_pd(__m128d a, __m128d b) // Multiply the lower double-precision (64-bit) floating-point element in a and // b, store the result in the lower element of dst, and copy the upper element // from a to the upper element of dst. 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=mm_mul_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=mm_mul_sd FORCE_INLINE __m128d _mm_mul_sd(__m128d a, __m128d b) { return _mm_move_sd(a, _mm_mul_pd(a, b)); @@ -4587,25 +4717,17 @@ FORCE_INLINE __m128d _mm_mul_sd(__m128d a, __m128d b) // Multiply the low unsigned 32-bit integers from a and b, and store the // unsigned 64-bit result in dst. -// -// dst[63:0] := a[31:0] * b[31:0] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_mul_su32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_mul_su32 FORCE_INLINE __m64 _mm_mul_su32(__m64 a, __m64 b) { return vreinterpret_m64_u64(vget_low_u64( vmull_u32(vreinterpret_u32_m64(a), vreinterpret_u32_m64(b)))); } -// Multiplies the 8 signed 16-bit integers from a by the 8 signed 16-bit -// integers from b. -// -// r0 := (a0 * b0)[31:16] -// r1 := (a1 * b1)[31:16] -// ... -// r7 := (a7 * b7)[31:16] -// -// https://msdn.microsoft.com/en-us/library/vstudio/59hddw1d(v=vs.100).aspx +// Multiply the packed signed 16-bit integers in a and b, producing intermediate +// 32-bit integers, and store the high 16 bits of the intermediate integers in +// dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_mulhi_epi16 FORCE_INLINE __m128i _mm_mulhi_epi16(__m128i a, __m128i b) { /* FIXME: issue with large values because of result saturation */ @@ -4626,13 +4748,13 @@ FORCE_INLINE __m128i _mm_mulhi_epi16(__m128i a, __m128i b) // Multiply the packed unsigned 16-bit integers in a and b, producing // intermediate 32-bit integers, and store the high 16 bits of the intermediate // integers in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_mulhi_epu16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_mulhi_epu16 FORCE_INLINE __m128i _mm_mulhi_epu16(__m128i a, __m128i b) { uint16x4_t a3210 = vget_low_u16(vreinterpretq_u16_m128i(a)); uint16x4_t b3210 = vget_low_u16(vreinterpretq_u16_m128i(b)); uint32x4_t ab3210 = vmull_u16(a3210, b3210); -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) uint32x4_t ab7654 = vmull_high_u16(vreinterpretq_u16_m128i(a), vreinterpretq_u16_m128i(b)); uint16x8_t r = vuzp2q_u16(vreinterpretq_u16_u32(ab3210), @@ -4648,15 +4770,9 @@ FORCE_INLINE __m128i _mm_mulhi_epu16(__m128i a, __m128i b) #endif } -// Multiplies the 8 signed or unsigned 16-bit integers from a by the 8 signed or -// unsigned 16-bit integers from b. -// -// r0 := (a0 * b0)[15:0] -// r1 := (a1 * b1)[15:0] -// ... -// r7 := (a7 * b7)[15:0] -// -// https://msdn.microsoft.com/en-us/library/vstudio/9ks1472s(v=vs.100).aspx +// Multiply the packed 16-bit integers in a and b, producing intermediate 32-bit +// integers, and store the low 16 bits of the intermediate integers in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_mullo_epi16 FORCE_INLINE __m128i _mm_mullo_epi16(__m128i a, __m128i b) { return vreinterpretq_m128i_s16( @@ -4665,27 +4781,25 @@ FORCE_INLINE __m128i _mm_mullo_epi16(__m128i a, __m128i b) // Compute the bitwise OR of packed double-precision (64-bit) floating-point // elements in a and b, and store the results in dst. 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=mm_or_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=mm_or_pd FORCE_INLINE __m128d _mm_or_pd(__m128d a, __m128d b) { return vreinterpretq_m128d_s64( vorrq_s64(vreinterpretq_s64_m128d(a), vreinterpretq_s64_m128d(b))); } -// Computes the bitwise OR of the 128-bit value in a and the 128-bit value in b. -// -// r := a | b -// -// https://msdn.microsoft.com/en-us/library/vstudio/ew8ty0db(v=vs.100).aspx +// Compute the bitwise OR of 128 bits (representing integer data) in a and b, +// and store the result in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_or_si128 FORCE_INLINE __m128i _mm_or_si128(__m128i a, __m128i b) { return vreinterpretq_m128i_s32( vorrq_s32(vreinterpretq_s32_m128i(a), vreinterpretq_s32_m128i(b))); } -// Packs the 16 signed 16-bit integers from a and b into 8-bit integers and -// saturates. -// https://msdn.microsoft.com/en-us/library/k4y4f7w5%28v=vs.90%29.aspx +// Convert packed signed 16-bit integers from a and b to packed 8-bit integers +// using signed saturation, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_packs_epi16 FORCE_INLINE __m128i _mm_packs_epi16(__m128i a, __m128i b) { return vreinterpretq_m128i_s8( @@ -4693,19 +4807,9 @@ FORCE_INLINE __m128i _mm_packs_epi16(__m128i a, __m128i b) vqmovn_s16(vreinterpretq_s16_m128i(b)))); } -// Packs the 8 signed 32-bit integers from a and b into signed 16-bit integers -// and saturates. -// -// r0 := SignedSaturate(a0) -// r1 := SignedSaturate(a1) -// r2 := SignedSaturate(a2) -// r3 := SignedSaturate(a3) -// r4 := SignedSaturate(b0) -// r5 := SignedSaturate(b1) -// r6 := SignedSaturate(b2) -// r7 := SignedSaturate(b3) -// -// https://msdn.microsoft.com/en-us/library/393t56f9%28v=vs.90%29.aspx +// Convert packed signed 32-bit integers from a and b to packed 16-bit integers +// using signed saturation, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_packs_epi32 FORCE_INLINE __m128i _mm_packs_epi32(__m128i a, __m128i b) { return vreinterpretq_m128i_s16( @@ -4713,19 +4817,9 @@ FORCE_INLINE __m128i _mm_packs_epi32(__m128i a, __m128i b) vqmovn_s32(vreinterpretq_s32_m128i(b)))); } -// Packs the 16 signed 16 - bit integers from a and b into 8 - bit unsigned -// integers and saturates. -// -// r0 := UnsignedSaturate(a0) -// r1 := UnsignedSaturate(a1) -// ... -// r7 := UnsignedSaturate(a7) -// r8 := UnsignedSaturate(b0) -// r9 := UnsignedSaturate(b1) -// ... -// r15 := UnsignedSaturate(b7) -// -// https://msdn.microsoft.com/en-us/library/07ad1wx4(v=vs.100).aspx +// Convert packed signed 16-bit integers from a and b to packed 8-bit integers +// using unsigned saturation, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_packus_epi16 FORCE_INLINE __m128i _mm_packus_epi16(const __m128i a, const __m128i b) { return vreinterpretq_m128i_u8( @@ -4735,27 +4829,32 @@ FORCE_INLINE __m128i _mm_packus_epi16(const __m128i a, const __m128i b) // Pause the processor. This is typically used in spin-wait loops and depending // on the x86 processor typical values are in the 40-100 cycle range. The -// 'yield' instruction isn't a good fit beacuse it's effectively a nop on most +// 'yield' instruction isn't a good fit because it's effectively a nop on most // Arm cores. 
Experience with several databases has shown an 'isb' is // a reasonable approximation. -FORCE_INLINE void _mm_pause() +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_pause +FORCE_INLINE void _mm_pause(void) { +#if defined(_MSC_VER) + __isb(_ARM64_BARRIER_SY); +#else __asm__ __volatile__("isb\n"); +#endif } // Compute the absolute differences of packed unsigned 8-bit integers in a and // b, then horizontally sum each consecutive 8 differences to produce two // unsigned 16-bit integers, and pack these unsigned 16-bit integers in the low // 16 bits of 64-bit elements in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_sad_epu8 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sad_epu8 FORCE_INLINE __m128i _mm_sad_epu8(__m128i a, __m128i b) { uint16x8_t t = vpaddlq_u8(vabdq_u8((uint8x16_t) a, (uint8x16_t) b)); return vreinterpretq_m128i_u64(vpaddlq_u32(vpaddlq_u16(t))); } -// Sets the 8 signed 16-bit integer values. -// https://msdn.microsoft.com/en-au/library/3e0fek84(v=vs.90).aspx +// Set packed 16-bit integers in dst with the supplied values. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_set_epi16 FORCE_INLINE __m128i _mm_set_epi16(short i7, short i6, short i5, @@ -4769,33 +4868,31 @@ FORCE_INLINE __m128i _mm_set_epi16(short i7, return vreinterpretq_m128i_s16(vld1q_s16(data)); } -// Sets the 4 signed 32-bit integer values. -// https://msdn.microsoft.com/en-us/library/vstudio/019beekt(v=vs.100).aspx +// Set packed 32-bit integers in dst with the supplied values. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_set_epi32 FORCE_INLINE __m128i _mm_set_epi32(int i3, int i2, int i1, int i0) { int32_t ALIGN_STRUCT(16) data[4] = {i0, i1, i2, i3}; return vreinterpretq_m128i_s32(vld1q_s32(data)); } -// Returns the __m128i structure with its two 64-bit integer values -// initialized to the values of the two 64-bit integers passed in. -// https://msdn.microsoft.com/en-us/library/dk2sdw0h(v=vs.120).aspx +// Set packed 64-bit integers in dst with the supplied values. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_set_epi64 FORCE_INLINE __m128i _mm_set_epi64(__m64 i1, __m64 i2) { - return _mm_set_epi64x((int64_t) i1, (int64_t) i2); + return _mm_set_epi64x(vget_lane_s64(i1, 0), vget_lane_s64(i2, 0)); } -// Returns the __m128i structure with its two 64-bit integer values -// initialized to the values of the two 64-bit integers passed in. -// https://msdn.microsoft.com/en-us/library/dk2sdw0h(v=vs.120).aspx +// Set packed 64-bit integers in dst with the supplied values. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_set_epi64x FORCE_INLINE __m128i _mm_set_epi64x(int64_t i1, int64_t i2) { return vreinterpretq_m128i_s64( vcombine_s64(vcreate_s64(i2), vcreate_s64(i1))); } -// Sets the 16 signed 8-bit integer values. -// https://msdn.microsoft.com/en-us/library/x0cx8zd3(v=vs.90).aspx +// Set packed 8-bit integers in dst with the supplied values. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_set_epi8 FORCE_INLINE __m128i _mm_set_epi8(signed char b15, signed char b14, signed char b13, @@ -4823,11 +4920,11 @@ FORCE_INLINE __m128i _mm_set_epi8(signed char b15, // Set packed double-precision (64-bit) floating-point elements in dst with the // supplied values.
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_set_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_set_pd FORCE_INLINE __m128d _mm_set_pd(double e1, double e0) { double ALIGN_STRUCT(16) data[2] = {e0, e1}; -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64(vld1q_f64((float64_t *) data)); #else return vreinterpretq_m128d_f32(vld1q_f32((float32_t *) data)); @@ -4836,65 +4933,51 @@ FORCE_INLINE __m128d _mm_set_pd(double e1, double e0) // Broadcast double-precision (64-bit) floating-point value a to all elements of // dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_set_pd1 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_set_pd1 #define _mm_set_pd1 _mm_set1_pd // Copy double-precision (64-bit) floating-point element a to the lower element // of dst, and zero the upper element. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_set_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_set_sd FORCE_INLINE __m128d _mm_set_sd(double a) { +#if defined(__aarch64__) || defined(_M_ARM64) + return vreinterpretq_m128d_f64(vsetq_lane_f64(a, vdupq_n_f64(0), 0)); +#else return _mm_set_pd(0, a); +#endif } -// Sets the 8 signed 16-bit integer values to w. -// -// r0 := w -// r1 := w -// ... -// r7 := w -// -// https://msdn.microsoft.com/en-us/library/k0ya3x0e(v=vs.90).aspx +// Broadcast 16-bit integer a to all elements of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_set1_epi16 FORCE_INLINE __m128i _mm_set1_epi16(short w) { return vreinterpretq_m128i_s16(vdupq_n_s16(w)); } -// Sets the 4 signed 32-bit integer values to i. -// -// r0 := i -// r1 := i -// r2 := i -// r3 := I -// -// https://msdn.microsoft.com/en-us/library/vstudio/h4xscxat(v=vs.100).aspx +// Broadcast 32-bit integer a to all elements of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_set1_epi32 FORCE_INLINE __m128i _mm_set1_epi32(int _i) { return vreinterpretq_m128i_s32(vdupq_n_s32(_i)); } -// Sets the 2 signed 64-bit integer values to i. -// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/whtfzhzk(v=vs.100) +// Broadcast 64-bit integer a to all elements of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_set1_epi64 FORCE_INLINE __m128i _mm_set1_epi64(__m64 _i) { - return vreinterpretq_m128i_s64(vdupq_n_s64((int64_t) _i)); + return vreinterpretq_m128i_s64(vdupq_lane_s64(_i, 0)); } -// Sets the 2 signed 64-bit integer values to i. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_set1_epi64x +// Broadcast 64-bit integer a to all elements of dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_set1_epi64x FORCE_INLINE __m128i _mm_set1_epi64x(int64_t _i) { return vreinterpretq_m128i_s64(vdupq_n_s64(_i)); } -// Sets the 16 signed 8-bit integer values to b. -// -// r0 := b -// r1 := b -// ... -// r15 := b -// -// https://msdn.microsoft.com/en-us/library/6e14xhyf(v=vs.100).aspx +// Broadcast 8-bit integer a to all elements of dst. 
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_set1_epi8 FORCE_INLINE __m128i _mm_set1_epi8(signed char w) { return vreinterpretq_m128i_s8(vdupq_n_s8(w)); @@ -4902,23 +4985,18 @@ FORCE_INLINE __m128i _mm_set1_epi8(signed char w) // Broadcast double-precision (64-bit) floating-point value a to all elements of // dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_set1_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_set1_pd FORCE_INLINE __m128d _mm_set1_pd(double d) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64(vdupq_n_f64(d)); #else return vreinterpretq_m128d_s64(vdupq_n_s64(*(int64_t *) &d)); #endif } -// Sets the 8 signed 16-bit integer values in reverse order. -// -// Return Value -// r0 := w0 -// r1 := w1 -// ... -// r7 := w7 +// Set packed 16-bit integers in dst with the supplied values in reverse order. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_setr_epi16 FORCE_INLINE __m128i _mm_setr_epi16(short w0, short w1, short w2, @@ -4932,8 +5010,8 @@ FORCE_INLINE __m128i _mm_setr_epi16(short w0, return vreinterpretq_m128i_s16(vld1q_s16((int16_t *) data)); } -// Sets the 4 signed 32-bit integer values in reverse order -// https://technet.microsoft.com/en-us/library/security/27yb3ee5(v=vs.90).aspx +// Set packed 32-bit integers in dst with the supplied values in reverse order. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_setr_epi32 FORCE_INLINE __m128i _mm_setr_epi32(int i3, int i2, int i1, int i0) { int32_t ALIGN_STRUCT(16) data[4] = {i3, i2, i1, i0}; @@ -4941,14 +5019,14 @@ FORCE_INLINE __m128i _mm_setr_epi32(int i3, int i2, int i1, int i0) } // Set packed 64-bit integers in dst with the supplied values in reverse order. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_setr_epi64 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_setr_epi64 FORCE_INLINE __m128i _mm_setr_epi64(__m64 e1, __m64 e0) { return vreinterpretq_m128i_s64(vcombine_s64(e1, e0)); } -// Sets the 16 signed 8-bit integer values in reverse order. -// https://msdn.microsoft.com/en-us/library/2khb9c7k(v=vs.90).aspx +// Set packed 8-bit integers in dst with the supplied values in reverse order. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_setr_epi8 FORCE_INLINE __m128i _mm_setr_epi8(signed char b0, signed char b1, signed char b2, @@ -4976,110 +5054,104 @@ FORCE_INLINE __m128i _mm_setr_epi8(signed char b0, // Set packed double-precision (64-bit) floating-point elements in dst with the // supplied values in reverse order. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_setr_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_setr_pd FORCE_INLINE __m128d _mm_setr_pd(double e1, double e0) { return _mm_set_pd(e0, e1); } // Return vector of type __m128d with all elements set to zero. 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_setzero_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_setzero_pd FORCE_INLINE __m128d _mm_setzero_pd(void) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64(vdupq_n_f64(0)); #else return vreinterpretq_m128d_f32(vdupq_n_f32(0)); #endif } -// Sets the 128-bit value to zero -// https://msdn.microsoft.com/en-us/library/vstudio/ys7dw0kh(v=vs.100).aspx +// Return vector of type __m128i with all elements set to zero. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_setzero_si128 FORCE_INLINE __m128i _mm_setzero_si128(void) { return vreinterpretq_m128i_s32(vdupq_n_s32(0)); } -// Shuffles the 4 signed or unsigned 32-bit integers in a as specified by imm. -// https://msdn.microsoft.com/en-us/library/56f67xbk%28v=vs.90%29.aspx +// Shuffle 32-bit integers in a using the control in imm8, and store the results +// in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_shuffle_epi32 // FORCE_INLINE __m128i _mm_shuffle_epi32(__m128i a, // __constrange(0,255) int imm) -#if __has_builtin(__builtin_shufflevector) -#define _mm_shuffle_epi32(a, imm) \ - __extension__({ \ - int32x4_t _input = vreinterpretq_s32_m128i(a); \ - int32x4_t _shuf = __builtin_shufflevector( \ - _input, _input, (imm) & (0x3), ((imm) >> 2) & 0x3, \ - ((imm) >> 4) & 0x3, ((imm) >> 6) & 0x3); \ - vreinterpretq_m128i_s32(_shuf); \ +#if defined(_sse2neon_shuffle) +#define _mm_shuffle_epi32(a, imm) \ + __extension__({ \ + int32x4_t _input = vreinterpretq_s32_m128i(a); \ + int32x4_t _shuf = \ + vshuffleq_s32(_input, _input, (imm) & (0x3), ((imm) >> 2) & 0x3, \ + ((imm) >> 4) & 0x3, ((imm) >> 6) & 0x3); \ + vreinterpretq_m128i_s32(_shuf); \ }) #else // generic -#define _mm_shuffle_epi32(a, imm) \ - __extension__({ \ - __m128i ret; \ - switch (imm) { \ - case _MM_SHUFFLE(1, 0, 3, 2): \ - ret = _mm_shuffle_epi_1032((a)); \ - break; \ - case _MM_SHUFFLE(2, 3, 0, 1): \ - ret = _mm_shuffle_epi_2301((a)); \ - break; \ - case _MM_SHUFFLE(0, 3, 2, 1): \ - ret = _mm_shuffle_epi_0321((a)); \ - break; \ - case _MM_SHUFFLE(2, 1, 0, 3): \ - ret = _mm_shuffle_epi_2103((a)); \ - break; \ - case _MM_SHUFFLE(1, 0, 1, 0): \ - ret = _mm_shuffle_epi_1010((a)); \ - break; \ - case _MM_SHUFFLE(1, 0, 0, 1): \ - ret = _mm_shuffle_epi_1001((a)); \ - break; \ - case _MM_SHUFFLE(0, 1, 0, 1): \ - ret = _mm_shuffle_epi_0101((a)); \ - break; \ - case _MM_SHUFFLE(2, 2, 1, 1): \ - ret = _mm_shuffle_epi_2211((a)); \ - break; \ - case _MM_SHUFFLE(0, 1, 2, 2): \ - ret = _mm_shuffle_epi_0122((a)); \ - break; \ - case _MM_SHUFFLE(3, 3, 3, 2): \ - ret = _mm_shuffle_epi_3332((a)); \ - break; \ - case _MM_SHUFFLE(0, 0, 0, 0): \ - ret = _mm_shuffle_epi32_splat((a), 0); \ - break; \ - case _MM_SHUFFLE(1, 1, 1, 1): \ - ret = _mm_shuffle_epi32_splat((a), 1); \ - break; \ - case _MM_SHUFFLE(2, 2, 2, 2): \ - ret = _mm_shuffle_epi32_splat((a), 2); \ - break; \ - case _MM_SHUFFLE(3, 3, 3, 3): \ - ret = _mm_shuffle_epi32_splat((a), 3); \ - break; \ - default: \ - ret = _mm_shuffle_epi32_default((a), (imm)); \ - break; \ - } \ - ret; \ - }) +#define _mm_shuffle_epi32(a, imm) \ + _sse2neon_define1( \ + __m128i, a, __m128i ret; switch (imm) { \ + case _MM_SHUFFLE(1, 0, 3, 2): \ + ret = _mm_shuffle_epi_1032(_a); \ + break; \ + case _MM_SHUFFLE(2, 3, 0, 1): \ + ret = _mm_shuffle_epi_2301(_a); \ + break; \ + case _MM_SHUFFLE(0, 3, 2, 1): \ + ret = 
_mm_shuffle_epi_0321(_a); \ + break; \ + case _MM_SHUFFLE(2, 1, 0, 3): \ + ret = _mm_shuffle_epi_2103(_a); \ + break; \ + case _MM_SHUFFLE(1, 0, 1, 0): \ + ret = _mm_shuffle_epi_1010(_a); \ + break; \ + case _MM_SHUFFLE(1, 0, 0, 1): \ + ret = _mm_shuffle_epi_1001(_a); \ + break; \ + case _MM_SHUFFLE(0, 1, 0, 1): \ + ret = _mm_shuffle_epi_0101(_a); \ + break; \ + case _MM_SHUFFLE(2, 2, 1, 1): \ + ret = _mm_shuffle_epi_2211(_a); \ + break; \ + case _MM_SHUFFLE(0, 1, 2, 2): \ + ret = _mm_shuffle_epi_0122(_a); \ + break; \ + case _MM_SHUFFLE(3, 3, 3, 2): \ + ret = _mm_shuffle_epi_3332(_a); \ + break; \ + case _MM_SHUFFLE(0, 0, 0, 0): \ + ret = _mm_shuffle_epi32_splat(_a, 0); \ + break; \ + case _MM_SHUFFLE(1, 1, 1, 1): \ + ret = _mm_shuffle_epi32_splat(_a, 1); \ + break; \ + case _MM_SHUFFLE(2, 2, 2, 2): \ + ret = _mm_shuffle_epi32_splat(_a, 2); \ + break; \ + case _MM_SHUFFLE(3, 3, 3, 3): \ + ret = _mm_shuffle_epi32_splat(_a, 3); \ + break; \ + default: \ + ret = _mm_shuffle_epi32_default(_a, (imm)); \ + break; \ + } _sse2neon_return(ret);) #endif // Shuffle double-precision (64-bit) floating-point elements using the control // in imm8, and store the results in dst. -// -// dst[63:0] := (imm8[0] == 0) ? a[63:0] : a[127:64] -// dst[127:64] := (imm8[1] == 0) ? b[63:0] : b[127:64] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_shuffle_pd -#if __has_builtin(__builtin_shufflevector) -#define _mm_shuffle_pd(a, b, imm8) \ - vreinterpretq_m128d_s64(__builtin_shufflevector( \ - vreinterpretq_s64_m128d(a), vreinterpretq_s64_m128d(b), imm8 & 0x1, \ - ((imm8 & 0x2) >> 1) + 2)) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_shuffle_pd +#ifdef _sse2neon_shuffle +#define _mm_shuffle_pd(a, b, imm8) \ + vreinterpretq_m128d_s64( \ + vshuffleq_s64(vreinterpretq_s64_m128d(a), vreinterpretq_s64_m128d(b), \ + imm8 & 0x1, ((imm8 & 0x2) >> 1) + 2)) #else #define _mm_shuffle_pd(a, b, imm8) \ _mm_castsi128_pd(_mm_set_epi64x( \ @@ -5089,15 +5161,15 @@ FORCE_INLINE __m128i _mm_setzero_si128(void) // FORCE_INLINE __m128i _mm_shufflehi_epi16(__m128i a, // __constrange(0,255) int imm) -#if __has_builtin(__builtin_shufflevector) -#define _mm_shufflehi_epi16(a, imm) \ - __extension__({ \ - int16x8_t _input = vreinterpretq_s16_m128i(a); \ - int16x8_t _shuf = __builtin_shufflevector( \ - _input, _input, 0, 1, 2, 3, ((imm) & (0x3)) + 4, \ - (((imm) >> 2) & 0x3) + 4, (((imm) >> 4) & 0x3) + 4, \ - (((imm) >> 6) & 0x3) + 4); \ - vreinterpretq_m128i_s16(_shuf); \ +#if defined(_sse2neon_shuffle) +#define _mm_shufflehi_epi16(a, imm) \ + __extension__({ \ + int16x8_t _input = vreinterpretq_s16_m128i(a); \ + int16x8_t _shuf = \ + vshuffleq_s16(_input, _input, 0, 1, 2, 3, ((imm) & (0x3)) + 4, \ + (((imm) >> 2) & 0x3) + 4, (((imm) >> 4) & 0x3) + 4, \ + (((imm) >> 6) & 0x3) + 4); \ + vreinterpretq_m128i_s16(_shuf); \ }) #else // generic #define _mm_shufflehi_epi16(a, imm) _mm_shufflehi_epi16_function((a), (imm)) @@ -5105,11 +5177,11 @@ FORCE_INLINE __m128i _mm_setzero_si128(void) // FORCE_INLINE __m128i _mm_shufflelo_epi16(__m128i a, // __constrange(0,255) int imm) -#if __has_builtin(__builtin_shufflevector) +#if defined(_sse2neon_shuffle) #define _mm_shufflelo_epi16(a, imm) \ __extension__({ \ int16x8_t _input = vreinterpretq_s16_m128i(a); \ - int16x8_t _shuf = __builtin_shufflevector( \ + int16x8_t _shuf = vshuffleq_s16( \ _input, _input, ((imm) & (0x3)), (((imm) >> 2) & 0x3), \ (((imm) >> 4) & 0x3), (((imm) >> 6) & 0x3), 4, 5, 6, 7); \ 
vreinterpretq_m128i_s16(_shuf); \ @@ -5118,94 +5190,62 @@ FORCE_INLINE __m128i _mm_setzero_si128(void) #define _mm_shufflelo_epi16(a, imm) _mm_shufflelo_epi16_function((a), (imm)) #endif -// Shifts the 8 signed or unsigned 16-bit integers in a left by count bits while -// shifting in zeros. -// -// r0 := a0 << count -// r1 := a1 << count -// ... -// r7 := a7 << count -// -// https://msdn.microsoft.com/en-us/library/c79w388h(v%3dvs.90).aspx +// Shift packed 16-bit integers in a left by count while shifting in zeros, and +// store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sll_epi16 FORCE_INLINE __m128i _mm_sll_epi16(__m128i a, __m128i count) { uint64_t c = vreinterpretq_nth_u64_m128i(count, 0); - if (unlikely(c > 15)) + if (_sse2neon_unlikely(c & ~15)) return _mm_setzero_si128(); int16x8_t vc = vdupq_n_s16((int16_t) c); return vreinterpretq_m128i_s16(vshlq_s16(vreinterpretq_s16_m128i(a), vc)); } -// Shifts the 4 signed or unsigned 32-bit integers in a left by count bits while -// shifting in zeros. -// -// r0 := a0 << count -// r1 := a1 << count -// r2 := a2 << count -// r3 := a3 << count -// -// https://msdn.microsoft.com/en-us/library/6fe5a6s9(v%3dvs.90).aspx +// Shift packed 32-bit integers in a left by count while shifting in zeros, and +// store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sll_epi32 FORCE_INLINE __m128i _mm_sll_epi32(__m128i a, __m128i count) { uint64_t c = vreinterpretq_nth_u64_m128i(count, 0); - if (unlikely(c > 31)) + if (_sse2neon_unlikely(c & ~31)) return _mm_setzero_si128(); int32x4_t vc = vdupq_n_s32((int32_t) c); return vreinterpretq_m128i_s32(vshlq_s32(vreinterpretq_s32_m128i(a), vc)); } -// Shifts the 2 signed or unsigned 64-bit integers in a left by count bits while -// shifting in zeros. -// -// r0 := a0 << count -// r1 := a1 << count -// -// https://msdn.microsoft.com/en-us/library/6ta9dffd(v%3dvs.90).aspx +// Shift packed 64-bit integers in a left by count while shifting in zeros, and +// store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sll_epi64 FORCE_INLINE __m128i _mm_sll_epi64(__m128i a, __m128i count) { uint64_t c = vreinterpretq_nth_u64_m128i(count, 0); - if (unlikely(c > 63)) + if (_sse2neon_unlikely(c & ~63)) return _mm_setzero_si128(); int64x2_t vc = vdupq_n_s64((int64_t) c); return vreinterpretq_m128i_s64(vshlq_s64(vreinterpretq_s64_m128i(a), vc)); } -// Shifts the 8 signed or unsigned 16-bit integers in a left by count bits while -// shifting in zeros. -// -// r0 := a0 << count -// r1 := a1 << count -// ... -// r7 := a7 << count -// -// https://msdn.microsoft.com/en-us/library/es73bcsy(v=vs.90).aspx -#define _mm_slli_epi16(a, imm) \ - __extension__({ \ - __m128i ret; \ - if (unlikely((imm)) <= 0) { \ - ret = a; \ - } \ - if (unlikely((imm) > 15)) { \ - ret = _mm_setzero_si128(); \ - } else { \ - ret = vreinterpretq_m128i_s16( \ - vshlq_n_s16(vreinterpretq_s16_m128i(a), (imm))); \ - } \ - ret; \ - }) +// Shift packed 16-bit integers in a left by imm8 while shifting in zeros, and +// store the results in dst. 
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_slli_epi16 +FORCE_INLINE __m128i _mm_slli_epi16(__m128i a, int imm) +{ + if (_sse2neon_unlikely(imm & ~15)) + return _mm_setzero_si128(); + return vreinterpretq_m128i_s16( + vshlq_s16(vreinterpretq_s16_m128i(a), vdupq_n_s16(imm))); +} -// Shifts the 4 signed or unsigned 32-bit integers in a left by count bits while -// shifting in zeros. : -// https://msdn.microsoft.com/en-us/library/z2k3bbtb%28v=vs.90%29.aspx -// FORCE_INLINE __m128i _mm_slli_epi32(__m128i a, __constrange(0,255) int imm) +// Shift packed 32-bit integers in a left by imm8 while shifting in zeros, and +// store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_slli_epi32 FORCE_INLINE __m128i _mm_slli_epi32(__m128i a, int imm) { - if (unlikely(imm <= 0)) /* TODO: add constant range macro: [0, 255] */ - return a; - if (unlikely(imm > 31)) + if (_sse2neon_unlikely(imm & ~31)) return _mm_setzero_si128(); return vreinterpretq_m128i_s32( vshlq_s32(vreinterpretq_s32_m128i(a), vdupq_n_s32(imm))); @@ -5213,44 +5253,33 @@ FORCE_INLINE __m128i _mm_slli_epi32(__m128i a, int imm) // Shift packed 64-bit integers in a left by imm8 while shifting in zeros, and // store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_slli_epi64 FORCE_INLINE __m128i _mm_slli_epi64(__m128i a, int imm) { - if (unlikely(imm <= 0)) /* TODO: add constant range macro: [0, 255] */ - return a; - if (unlikely(imm > 63)) + if (_sse2neon_unlikely(imm & ~63)) return _mm_setzero_si128(); return vreinterpretq_m128i_s64( vshlq_s64(vreinterpretq_s64_m128i(a), vdupq_n_s64(imm))); } -// Shifts the 128-bit value in a left by imm bytes while shifting in zeros. imm -// must be an immediate. -// -// r := a << (imm * 8) -// -// https://msdn.microsoft.com/en-us/library/34d3k2kt(v=vs.100).aspx -// FORCE_INLINE __m128i _mm_slli_si128(__m128i a, __constrange(0,255) int imm) -#define _mm_slli_si128(a, imm) \ - __extension__({ \ - __m128i ret; \ - if (unlikely((imm) <= 0)) { \ - ret = a; \ - } \ - if (unlikely((imm) > 15)) { \ - ret = _mm_setzero_si128(); \ - } else { \ - ret = vreinterpretq_m128i_s8(vextq_s8( \ - vdupq_n_s8(0), vreinterpretq_s8_m128i(a), 16 - (imm))); \ - } \ - ret; \ - }) +// Shift a left by imm8 bytes while shifting in zeros, and store the results in +// dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_slli_si128 +#define _mm_slli_si128(a, imm) \ + _sse2neon_define1( \ + __m128i, a, int8x16_t ret; \ + if (_sse2neon_unlikely(imm == 0)) ret = vreinterpretq_s8_m128i(_a); \ + else if (_sse2neon_unlikely((imm) & ~15)) ret = vdupq_n_s8(0); \ + else ret = vextq_s8(vdupq_n_s8(0), vreinterpretq_s8_m128i(_a), \ + ((imm <= 0 || imm > 15) ? 0 : (16 - imm))); \ + _sse2neon_return(vreinterpretq_m128i_s8(ret));) // Compute the square root of packed double-precision (64-bit) floating-point // elements in a, and store the results in dst. 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_sqrt_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sqrt_pd FORCE_INLINE __m128d _mm_sqrt_pd(__m128d a) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64(vsqrtq_f64(vreinterpretq_f64_m128d(a))); #else double a0 = sqrt(((double *) &a)[0]); @@ -5262,53 +5291,43 @@ FORCE_INLINE __m128d _mm_sqrt_pd(__m128d a) // Compute the square root of the lower double-precision (64-bit) floating-point // element in b, store the result in the lower element of dst, and copy the // upper element from a to the upper element of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_sqrt_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sqrt_sd FORCE_INLINE __m128d _mm_sqrt_sd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return _mm_move_sd(a, _mm_sqrt_pd(b)); #else return _mm_set_pd(((double *) &a)[1], sqrt(((double *) &b)[0])); #endif } -// Shifts the 8 signed 16-bit integers in a right by count bits while shifting -// in the sign bit. -// -// r0 := a0 >> count -// r1 := a1 >> count -// ... -// r7 := a7 >> count -// -// https://msdn.microsoft.com/en-us/library/3c9997dk(v%3dvs.90).aspx +// Shift packed 16-bit integers in a right by count while shifting in sign bits, +// and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sra_epi16 FORCE_INLINE __m128i _mm_sra_epi16(__m128i a, __m128i count) { - int64_t c = (int64_t) vget_low_s64((int64x2_t) count); - if (unlikely(c > 15)) + int64_t c = vgetq_lane_s64(count, 0); + if (_sse2neon_unlikely(c & ~15)) return _mm_cmplt_epi16(a, _mm_setzero_si128()); - return vreinterpretq_m128i_s16(vshlq_s16((int16x8_t) a, vdupq_n_s16(-c))); + return vreinterpretq_m128i_s16( + vshlq_s16((int16x8_t) a, vdupq_n_s16((int) -c))); } -// Shifts the 4 signed 32-bit integers in a right by count bits while shifting -// in the sign bit. -// -// r0 := a0 >> count -// r1 := a1 >> count -// r2 := a2 >> count -// r3 := a3 >> count -// -// https://msdn.microsoft.com/en-us/library/ce40009e(v%3dvs.100).aspx +// Shift packed 32-bit integers in a right by count while shifting in sign bits, +// and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sra_epi32 FORCE_INLINE __m128i _mm_sra_epi32(__m128i a, __m128i count) { - int64_t c = (int64_t) vget_low_s64((int64x2_t) count); - if (unlikely(c > 31)) + int64_t c = vgetq_lane_s64(count, 0); + if (_sse2neon_unlikely(c & ~31)) return _mm_cmplt_epi32(a, _mm_setzero_si128()); - return vreinterpretq_m128i_s32(vshlq_s32((int32x4_t) a, vdupq_n_s32(-c))); + return vreinterpretq_m128i_s32( + vshlq_s32((int32x4_t) a, vdupq_n_s32((int) -c))); } -// Shift packed 16-bit integers in a right by imm while shifting in sign +// Shift packed 16-bit integers in a right by imm8 while shifting in sign // bits, and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_srai_epi16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_srai_epi16 FORCE_INLINE __m128i _mm_srai_epi16(__m128i a, int imm) { const int count = (imm & ~15) ? 
15 : imm; @@ -5317,82 +5336,53 @@ FORCE_INLINE __m128i _mm_srai_epi16(__m128i a, int imm) // Shift packed 32-bit integers in a right by imm8 while shifting in sign bits, // and store the results in dst. -// -// FOR j := 0 to 3 -// i := j*32 -// IF imm8[7:0] > 31 -// dst[i+31:i] := (a[i+31] ? 0xFFFFFFFF : 0x0) -// ELSE -// dst[i+31:i] := SignExtend32(a[i+31:i] >> imm8[7:0]) -// FI -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_srai_epi32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_srai_epi32 // FORCE_INLINE __m128i _mm_srai_epi32(__m128i a, __constrange(0,255) int imm) -#define _mm_srai_epi32(a, imm) \ - __extension__({ \ - __m128i ret; \ - if (unlikely((imm) == 0)) { \ - ret = a; \ - } else if (likely(0 < (imm) && (imm) < 32)) { \ - ret = vreinterpretq_m128i_s32( \ - vshlq_s32(vreinterpretq_s32_m128i(a), vdupq_n_s32(-imm))); \ - } else { \ - ret = vreinterpretq_m128i_s32( \ - vshrq_n_s32(vreinterpretq_s32_m128i(a), 31)); \ - } \ - ret; \ - }) +#define _mm_srai_epi32(a, imm) \ + _sse2neon_define0( \ + __m128i, a, __m128i ret; if (_sse2neon_unlikely((imm) == 0)) { \ + ret = _a; \ + } else if (_sse2neon_likely(0 < (imm) && (imm) < 32)) { \ + ret = vreinterpretq_m128i_s32( \ + vshlq_s32(vreinterpretq_s32_m128i(_a), vdupq_n_s32(-(imm)))); \ + } else { \ + ret = vreinterpretq_m128i_s32( \ + vshrq_n_s32(vreinterpretq_s32_m128i(_a), 31)); \ + } _sse2neon_return(ret);) -// Shifts the 8 signed or unsigned 16-bit integers in a right by count bits -// while shifting in zeros. -// -// r0 := srl(a0, count) -// r1 := srl(a1, count) -// ... -// r7 := srl(a7, count) -// -// https://msdn.microsoft.com/en-us/library/wd5ax830(v%3dvs.90).aspx +// Shift packed 16-bit integers in a right by count while shifting in zeros, and +// store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_srl_epi16 FORCE_INLINE __m128i _mm_srl_epi16(__m128i a, __m128i count) { uint64_t c = vreinterpretq_nth_u64_m128i(count, 0); - if (unlikely(c > 15)) + if (_sse2neon_unlikely(c & ~15)) return _mm_setzero_si128(); int16x8_t vc = vdupq_n_s16(-(int16_t) c); return vreinterpretq_m128i_u16(vshlq_u16(vreinterpretq_u16_m128i(a), vc)); } -// Shifts the 4 signed or unsigned 32-bit integers in a right by count bits -// while shifting in zeros. -// -// r0 := srl(a0, count) -// r1 := srl(a1, count) -// r2 := srl(a2, count) -// r3 := srl(a3, count) -// -// https://msdn.microsoft.com/en-us/library/a9cbttf4(v%3dvs.90).aspx +// Shift packed 32-bit integers in a right by count while shifting in zeros, and +// store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_srl_epi32 FORCE_INLINE __m128i _mm_srl_epi32(__m128i a, __m128i count) { uint64_t c = vreinterpretq_nth_u64_m128i(count, 0); - if (unlikely(c > 31)) + if (_sse2neon_unlikely(c & ~31)) return _mm_setzero_si128(); int32x4_t vc = vdupq_n_s32(-(int32_t) c); return vreinterpretq_m128i_u32(vshlq_u32(vreinterpretq_u32_m128i(a), vc)); } -// Shifts the 2 signed or unsigned 64-bit integers in a right by count bits -// while shifting in zeros. -// -// r0 := srl(a0, count) -// r1 := srl(a1, count) -// -// https://msdn.microsoft.com/en-us/library/yf6cf9k8(v%3dvs.90).aspx +// Shift packed 64-bit integers in a right by count while shifting in zeros, and +// store the results in dst. 
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_srl_epi64 FORCE_INLINE __m128i _mm_srl_epi64(__m128i a, __m128i count) { uint64_t c = vreinterpretq_nth_u64_m128i(count, 0); - if (unlikely(c > 63)) + if (_sse2neon_unlikely(c & ~63)) return _mm_setzero_si128(); int64x2_t vc = vdupq_n_s64(-(int64_t) c); @@ -5401,115 +5391,59 @@ FORCE_INLINE __m128i _mm_srl_epi64(__m128i a, __m128i count) // Shift packed 16-bit integers in a right by imm8 while shifting in zeros, and // store the results in dst. -// -// FOR j := 0 to 7 -// i := j*16 -// IF imm8[7:0] > 15 -// dst[i+15:i] := 0 -// ELSE -// dst[i+15:i] := ZeroExtend16(a[i+15:i] >> imm8[7:0]) -// FI -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_srli_epi16 -#define _mm_srli_epi16(a, imm) \ - __extension__({ \ - __m128i ret; \ - if (unlikely(imm) == 0) { \ - ret = a; \ - } else if (likely(0 < (imm) && (imm) < 16)) { \ - ret = vreinterpretq_m128i_u16( \ - vshlq_u16(vreinterpretq_u16_m128i(a), vdupq_n_s16(-imm))); \ - } else { \ - ret = _mm_setzero_si128(); \ - } \ - ret; \ - }) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_srli_epi16 +#define _mm_srli_epi16(a, imm) \ + _sse2neon_define0( \ + __m128i, a, __m128i ret; if (_sse2neon_unlikely((imm) & ~15)) { \ + ret = _mm_setzero_si128(); \ + } else { \ + ret = vreinterpretq_m128i_u16( \ + vshlq_u16(vreinterpretq_u16_m128i(_a), vdupq_n_s16(-(imm)))); \ + } _sse2neon_return(ret);) // Shift packed 32-bit integers in a right by imm8 while shifting in zeros, and // store the results in dst. -// -// FOR j := 0 to 3 -// i := j*32 -// IF imm8[7:0] > 31 -// dst[i+31:i] := 0 -// ELSE -// dst[i+31:i] := ZeroExtend32(a[i+31:i] >> imm8[7:0]) -// FI -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_srli_epi32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_srli_epi32 // FORCE_INLINE __m128i _mm_srli_epi32(__m128i a, __constrange(0,255) int imm) -#define _mm_srli_epi32(a, imm) \ - __extension__({ \ - __m128i ret; \ - if (unlikely((imm) == 0)) { \ - ret = a; \ - } else if (likely(0 < (imm) && (imm) < 32)) { \ - ret = vreinterpretq_m128i_u32( \ - vshlq_u32(vreinterpretq_u32_m128i(a), vdupq_n_s32(-imm))); \ - } else { \ - ret = _mm_setzero_si128(); \ - } \ - ret; \ - }) +#define _mm_srli_epi32(a, imm) \ + _sse2neon_define0( \ + __m128i, a, __m128i ret; if (_sse2neon_unlikely((imm) & ~31)) { \ + ret = _mm_setzero_si128(); \ + } else { \ + ret = vreinterpretq_m128i_u32( \ + vshlq_u32(vreinterpretq_u32_m128i(_a), vdupq_n_s32(-(imm)))); \ + } _sse2neon_return(ret);) // Shift packed 64-bit integers in a right by imm8 while shifting in zeros, and // store the results in dst. 
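// Example of the immediate form (illustrative values only):
//   __m128i v = _mm_set_epi32(0, 32, 0, 32);  /* both 64-bit lanes = 32 */
//   __m128i a = _mm_srli_epi64(v, 4);         /* both lanes become 2    */
//   __m128i z = _mm_srli_epi64(v, 64);        /* any imm8 > 63 gives 0  */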
-// -// FOR j := 0 to 1 -// i := j*64 -// IF imm8[7:0] > 63 -// dst[i+63:i] := 0 -// ELSE -// dst[i+63:i] := ZeroExtend64(a[i+63:i] >> imm8[7:0]) -// FI -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_srli_epi64 -#define _mm_srli_epi64(a, imm) \ - __extension__({ \ - __m128i ret; \ - if (unlikely((imm) == 0)) { \ - ret = a; \ - } else if (likely(0 < (imm) && (imm) < 64)) { \ - ret = vreinterpretq_m128i_u64( \ - vshlq_u64(vreinterpretq_u64_m128i(a), vdupq_n_s64(-imm))); \ - } else { \ - ret = _mm_setzero_si128(); \ - } \ - ret; \ - }) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_srli_epi64 +#define _mm_srli_epi64(a, imm) \ + _sse2neon_define0( \ + __m128i, a, __m128i ret; if (_sse2neon_unlikely((imm) & ~63)) { \ + ret = _mm_setzero_si128(); \ + } else { \ + ret = vreinterpretq_m128i_u64( \ + vshlq_u64(vreinterpretq_u64_m128i(_a), vdupq_n_s64(-(imm)))); \ + } _sse2neon_return(ret);) -// Shifts the 128 - bit value in a right by imm bytes while shifting in -// zeros.imm must be an immediate. -// -// r := srl(a, imm*8) -// -// https://msdn.microsoft.com/en-us/library/305w28yz(v=vs.100).aspx -// FORCE_INLINE _mm_srli_si128(__m128i a, __constrange(0,255) int imm) -#define _mm_srli_si128(a, imm) \ - __extension__({ \ - __m128i ret; \ - if (unlikely((imm) <= 0)) { \ - ret = a; \ - } \ - if (unlikely((imm) > 15)) { \ - ret = _mm_setzero_si128(); \ - } else { \ - ret = vreinterpretq_m128i_s8( \ - vextq_s8(vreinterpretq_s8_m128i(a), vdupq_n_s8(0), (imm))); \ - } \ - ret; \ - }) +// Shift a right by imm8 bytes while shifting in zeros, and store the results in +// dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_srli_si128 +#define _mm_srli_si128(a, imm) \ + _sse2neon_define1( \ + __m128i, a, int8x16_t ret; \ + if (_sse2neon_unlikely((imm) & ~15)) ret = vdupq_n_s8(0); \ + else ret = vextq_s8(vreinterpretq_s8_m128i(_a), vdupq_n_s8(0), \ + (imm > 15 ? 0 : imm)); \ + _sse2neon_return(vreinterpretq_m128i_s8(ret));) // Store 128-bits (composed of 2 packed double-precision (64-bit) floating-point // elements) from a into memory. mem_addr must be aligned on a 16-byte boundary // or a general-protection exception may be generated. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_store_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_store_pd FORCE_INLINE void _mm_store_pd(double *mem_addr, __m128d a) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) vst1q_f64((float64_t *) mem_addr, vreinterpretq_f64_m128d(a)); #else vst1q_f32((float32_t *) mem_addr, vreinterpretq_f32_m128d(a)); @@ -5519,10 +5453,10 @@ FORCE_INLINE void _mm_store_pd(double *mem_addr, __m128d a) // Store the lower double-precision (64-bit) floating-point element from a into // 2 contiguous elements in memory. mem_addr must be aligned on a 16-byte // boundary or a general-protection exception may be generated. 
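// Usage sketch (buffer name is hypothetical; the alignment attribute is
// shown in GCC/Clang syntax):
//   double buf[2] __attribute__((aligned(16)));
//   _mm_store_pd1(buf, _mm_set_pd(3.0, 7.0));  /* buf[0] == buf[1] == 7.0 */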
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_store_pd1 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_store_pd1 FORCE_INLINE void _mm_store_pd1(double *mem_addr, __m128d a) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) float64x1_t a_low = vget_low_f64(vreinterpretq_f64_m128d(a)); vst1q_f64((float64_t *) mem_addr, vreinterpretq_f64_m128d(vcombine_f64(a_low, a_low))); @@ -5535,18 +5469,19 @@ FORCE_INLINE void _mm_store_pd1(double *mem_addr, __m128d a) // Store the lower double-precision (64-bit) floating-point element from a into // memory. mem_addr does not need to be aligned on any particular boundary. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=mm_store_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=mm_store_sd FORCE_INLINE void _mm_store_sd(double *mem_addr, __m128d a) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) vst1_f64((float64_t *) mem_addr, vget_low_f64(vreinterpretq_f64_m128d(a))); #else vst1_u64((uint64_t *) mem_addr, vget_low_u64(vreinterpretq_u64_m128d(a))); #endif } -// Stores four 32-bit integer values as (as a __m128i value) at the address p. -// https://msdn.microsoft.com/en-us/library/vstudio/edk11s13(v=vs.100).aspx +// Store 128-bits of integer data from a into memory. mem_addr must be aligned +// on a 16-byte boundary or a general-protection exception may be generated. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_store_si128 FORCE_INLINE void _mm_store_si128(__m128i *p, __m128i a) { vst1q_s32((int32_t *) p, vreinterpretq_s32_m128i(a)); @@ -5555,42 +5490,34 @@ FORCE_INLINE void _mm_store_si128(__m128i *p, __m128i a) // Store the lower double-precision (64-bit) floating-point element from a into // 2 contiguous elements in memory. mem_addr must be aligned on a 16-byte // boundary or a general-protection exception may be generated. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#expand=9,526,5601&text=_mm_store1_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#expand=9,526,5601&text=_mm_store1_pd #define _mm_store1_pd _mm_store_pd1 // Store the upper double-precision (64-bit) floating-point element from a into // memory. -// -// MEM[mem_addr+63:mem_addr] := a[127:64] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_storeh_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_storeh_pd FORCE_INLINE void _mm_storeh_pd(double *mem_addr, __m128d a) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) vst1_f64((float64_t *) mem_addr, vget_high_f64(vreinterpretq_f64_m128d(a))); #else vst1_f32((float32_t *) mem_addr, vget_high_f32(vreinterpretq_f32_m128d(a))); #endif } -// Reads the lower 64 bits of b and stores them into the lower 64 bits of a. -// https://msdn.microsoft.com/en-us/library/hhwf428f%28v=vs.90%29.aspx +// Store 64-bit integer from the first element of a into memory. 
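// Typical use is an 8-byte store through a cast pointer, the conventional
// idiom for this intrinsic (`v` is a hypothetical __m128i):
//   int64_t out;
//   _mm_storel_epi64((__m128i *) &out, v);  /* writes only the low 8 bytes */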
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_storel_epi64 FORCE_INLINE void _mm_storel_epi64(__m128i *a, __m128i b) { - uint64x1_t hi = vget_high_u64(vreinterpretq_u64_m128i(*a)); - uint64x1_t lo = vget_low_u64(vreinterpretq_u64_m128i(b)); - *a = vreinterpretq_m128i_u64(vcombine_u64(lo, hi)); + vst1_u64((uint64_t *) a, vget_low_u64(vreinterpretq_u64_m128i(b))); } // Store the lower double-precision (64-bit) floating-point element from a into // memory. -// -// MEM[mem_addr+63:mem_addr] := a[63:0] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_storel_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_storel_pd FORCE_INLINE void _mm_storel_pd(double *mem_addr, __m128d a) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) vst1_f64((float64_t *) mem_addr, vget_low_f64(vreinterpretq_f64_m128d(a))); #else vst1_f32((float32_t *) mem_addr, vget_low_f32(vreinterpretq_f32_m128d(a))); @@ -5600,11 +5527,7 @@ FORCE_INLINE void _mm_storel_pd(double *mem_addr, __m128d a) // Store 2 double-precision (64-bit) floating-point elements from a into memory // in reverse order. mem_addr must be aligned on a 16-byte boundary or a // general-protection exception may be generated. -// -// MEM[mem_addr+63:mem_addr] := a[127:64] -// MEM[mem_addr+127:mem_addr+64] := a[63:0] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_storer_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_storer_pd FORCE_INLINE void _mm_storer_pd(double *mem_addr, __m128d a) { float32x4_t f = vreinterpretq_f32_m128d(a); @@ -5614,21 +5537,23 @@ FORCE_INLINE void _mm_storer_pd(double *mem_addr, __m128d a) // Store 128-bits (composed of 2 packed double-precision (64-bit) floating-point // elements) from a into memory. mem_addr does not need to be aligned on any // particular boundary. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_storeu_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_storeu_pd FORCE_INLINE void _mm_storeu_pd(double *mem_addr, __m128d a) { _mm_store_pd(mem_addr, a); } -// Stores 128-bits of integer data a at the address p. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_storeu_si128 +// Store 128-bits of integer data from a into memory. mem_addr does not need to +// be aligned on any particular boundary. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_storeu_si128 FORCE_INLINE void _mm_storeu_si128(__m128i *p, __m128i a) { vst1q_s32((int32_t *) p, vreinterpretq_s32_m128i(a)); } -// Stores 32-bits of integer data a at the address p. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_storeu_si32 +// Store 32-bit integer from the first element of a into memory. mem_addr does +// not need to be aligned on any particular boundary. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_storeu_si32 FORCE_INLINE void _mm_storeu_si32(void *p, __m128i a) { vst1q_lane_s32((int32_t *) p, vreinterpretq_s32_m128i(a), 0); @@ -5638,22 +5563,22 @@ FORCE_INLINE void _mm_storeu_si32(void *p, __m128i a) // elements) from a into memory using a non-temporal memory hint. mem_addr must // be aligned on a 16-byte boundary or a general-protection exception may be // generated. 
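// Sketch of a streaming copy (assumes `dst` and `src` are 16-byte aligned
// double arrays and `n` is even; following the stores with _mm_sfence is the
// customary pattern before the data is consumed elsewhere):
//   for (size_t i = 0; i < n; i += 2)
//       _mm_stream_pd(dst + i, _mm_load_pd(src + i));
//   _mm_sfence();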
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_stream_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_stream_pd FORCE_INLINE void _mm_stream_pd(double *p, __m128d a) { #if __has_builtin(__builtin_nontemporal_store) - __builtin_nontemporal_store(a, (float32x4_t *) p); -#elif defined(__aarch64__) + __builtin_nontemporal_store(a, (__m128d *) p); +#elif defined(__aarch64__) || defined(_M_ARM64) vst1q_f64(p, vreinterpretq_f64_m128d(a)); #else vst1q_s64((int64_t *) p, vreinterpretq_s64_m128d(a)); #endif } -// Stores the data in a to the address p without polluting the caches. If the -// cache line containing address p is already in the cache, the cache will be -// updated. -// https://msdn.microsoft.com/en-us/library/ba08y07y%28v=vs.90%29.aspx +// Store 128-bits of integer data from a into memory using a non-temporal memory +// hint. mem_addr must be aligned on a 16-byte boundary or a general-protection +// exception may be generated. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_stream_si128 FORCE_INLINE void _mm_stream_si128(__m128i *p, __m128i a) { #if __has_builtin(__builtin_nontemporal_store) @@ -5666,40 +5591,42 @@ FORCE_INLINE void _mm_stream_si128(__m128i *p, __m128i a) // Store 32-bit integer a into memory using a non-temporal hint to minimize // cache pollution. If the cache line containing address mem_addr is already in // the cache, the cache will be updated. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_stream_si32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_stream_si32 FORCE_INLINE void _mm_stream_si32(int *p, int a) { vst1q_lane_s32((int32_t *) p, vdupq_n_s32(a), 0); } +// Store 64-bit integer a into memory using a non-temporal hint to minimize +// cache pollution. If the cache line containing address mem_addr is already in +// the cache, the cache will be updated. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_stream_si64 +FORCE_INLINE void _mm_stream_si64(__int64 *p, __int64 a) +{ + vst1_s64((int64_t *) p, vdup_n_s64((int64_t) a)); +} + // Subtract packed 16-bit integers in b from packed 16-bit integers in a, and // store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_sub_epi16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sub_epi16 FORCE_INLINE __m128i _mm_sub_epi16(__m128i a, __m128i b) { return vreinterpretq_m128i_s16( vsubq_s16(vreinterpretq_s16_m128i(a), vreinterpretq_s16_m128i(b))); } -// Subtracts the 4 signed or unsigned 32-bit integers of b from the 4 signed or -// unsigned 32-bit integers of a. -// -// r0 := a0 - b0 -// r1 := a1 - b1 -// r2 := a2 - b2 -// r3 := a3 - b3 -// -// https://msdn.microsoft.com/en-us/library/vstudio/fhh866h0(v=vs.100).aspx +// Subtract packed 32-bit integers in b from packed 32-bit integers in a, and +// store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sub_epi32 FORCE_INLINE __m128i _mm_sub_epi32(__m128i a, __m128i b) { return vreinterpretq_m128i_s32( vsubq_s32(vreinterpretq_s32_m128i(a), vreinterpretq_s32_m128i(b))); } -// Subtract 2 packed 64-bit integers in b from 2 packed 64-bit integers in a, -// and store the results in dst. -// r0 := a0 - b0 -// r1 := a1 - b1 +// Subtract packed 64-bit integers in b from packed 64-bit integers in a, and +// store the results in dst. 
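// Example (wrapping, not saturating, subtraction; values are arbitrary):
//   __m128i a = _mm_set_epi64x(10, 1);  /* lanes {1, 10} */
//   __m128i b = _mm_set_epi64x(3, 2);   /* lanes {2, 3}  */
//   __m128i r = _mm_sub_epi64(a, b);    /* lanes {-1, 7} */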
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sub_epi64 FORCE_INLINE __m128i _mm_sub_epi64(__m128i a, __m128i b) { return vreinterpretq_m128i_s64( @@ -5708,7 +5635,7 @@ FORCE_INLINE __m128i _mm_sub_epi64(__m128i a, __m128i b) // Subtract packed 8-bit integers in b from packed 8-bit integers in a, and // store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_sub_epi8 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sub_epi8 FORCE_INLINE __m128i _mm_sub_epi8(__m128i a, __m128i b) { return vreinterpretq_m128i_s8( @@ -5718,16 +5645,10 @@ FORCE_INLINE __m128i _mm_sub_epi8(__m128i a, __m128i b) // Subtract packed double-precision (64-bit) floating-point elements in b from // packed double-precision (64-bit) floating-point elements in a, and store the // results in dst. -// -// FOR j := 0 to 1 -// i := j*64 -// dst[i+63:i] := a[i+63:i] - b[i+63:i] -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=mm_sub_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=mm_sub_pd FORCE_INLINE __m128d _mm_sub_pd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64( vsubq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b))); #else @@ -5744,71 +5665,50 @@ FORCE_INLINE __m128d _mm_sub_pd(__m128d a, __m128d b) // the lower double-precision (64-bit) floating-point element in a, store the // result in the lower element of dst, and copy the upper element from a to the // upper element of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_sub_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sub_sd FORCE_INLINE __m128d _mm_sub_sd(__m128d a, __m128d b) { return _mm_move_sd(a, _mm_sub_pd(a, b)); } // Subtract 64-bit integer b from 64-bit integer a, and store the result in dst. -// -// dst[63:0] := a[63:0] - b[63:0] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_sub_si64 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sub_si64 FORCE_INLINE __m64 _mm_sub_si64(__m64 a, __m64 b) { return vreinterpret_m64_s64( vsub_s64(vreinterpret_s64_m64(a), vreinterpret_s64_m64(b))); } -// Subtracts the 8 signed 16-bit integers of b from the 8 signed 16-bit integers -// of a and saturates. -// -// r0 := SignedSaturate(a0 - b0) -// r1 := SignedSaturate(a1 - b1) -// ... -// r7 := SignedSaturate(a7 - b7) -// -// https://technet.microsoft.com/en-us/subscriptions/3247z5b8(v=vs.90) +// Subtract packed signed 16-bit integers in b from packed 16-bit integers in a +// using saturation, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_subs_epi16 FORCE_INLINE __m128i _mm_subs_epi16(__m128i a, __m128i b) { return vreinterpretq_m128i_s16( vqsubq_s16(vreinterpretq_s16_m128i(a), vreinterpretq_s16_m128i(b))); } -// Subtracts the 16 signed 8-bit integers of b from the 16 signed 8-bit integers -// of a and saturates. -// -// r0 := SignedSaturate(a0 - b0) -// r1 := SignedSaturate(a1 - b1) -// ... -// r15 := SignedSaturate(a15 - b15) -// -// https://technet.microsoft.com/en-us/subscriptions/by7kzks1(v=vs.90) +// Subtract packed signed 8-bit integers in b from packed 8-bit integers in a +// using saturation, and store the results in dst. 
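// Example of the saturating behaviour, in contrast with the wrapping
// _mm_sub_epi8 above:
//   __m128i r = _mm_subs_epi8(_mm_set1_epi8(-128), _mm_set1_epi8(1));
//   /* every lane stays at -128; _mm_sub_epi8 would wrap to +127 */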
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_subs_epi8 FORCE_INLINE __m128i _mm_subs_epi8(__m128i a, __m128i b) { return vreinterpretq_m128i_s8( vqsubq_s8(vreinterpretq_s8_m128i(a), vreinterpretq_s8_m128i(b))); } -// Subtracts the 8 unsigned 16-bit integers of bfrom the 8 unsigned 16-bit -// integers of a and saturates.. -// https://technet.microsoft.com/en-us/subscriptions/index/f44y0s19(v=vs.90).aspx +// Subtract packed unsigned 16-bit integers in b from packed unsigned 16-bit +// integers in a using saturation, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_subs_epu16 FORCE_INLINE __m128i _mm_subs_epu16(__m128i a, __m128i b) { return vreinterpretq_m128i_u16( vqsubq_u16(vreinterpretq_u16_m128i(a), vreinterpretq_u16_m128i(b))); } -// Subtracts the 16 unsigned 8-bit integers of b from the 16 unsigned 8-bit -// integers of a and saturates. -// -// r0 := UnsignedSaturate(a0 - b0) -// r1 := UnsignedSaturate(a1 - b1) -// ... -// r15 := UnsignedSaturate(a15 - b15) -// -// https://technet.microsoft.com/en-us/subscriptions/yadkxc18(v=vs.90) +// Subtract packed unsigned 8-bit integers in b from packed unsigned 8-bit +// integers in a using saturation, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_subs_epu8 FORCE_INLINE __m128i _mm_subs_epu8(__m128i a, __m128i b) { return vreinterpretq_m128i_u8( @@ -5822,22 +5722,30 @@ FORCE_INLINE __m128i _mm_subs_epu8(__m128i a, __m128i b) #define _mm_ucomilt_sd _mm_comilt_sd #define _mm_ucomineq_sd _mm_comineq_sd -// Interleaves the upper 4 signed or unsigned 16-bit integers in a with the -// upper 4 signed or unsigned 16-bit integers in b. -// -// r0 := a4 -// r1 := b4 -// r2 := a5 -// r3 := b5 -// r4 := a6 -// r5 := b6 -// r6 := a7 -// r7 := b7 -// -// https://msdn.microsoft.com/en-us/library/03196cz7(v=vs.100).aspx +// Return vector of type __m128d with undefined elements. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_undefined_pd +FORCE_INLINE __m128d _mm_undefined_pd(void) +{ +#if defined(__GNUC__) || defined(__clang__) +#pragma GCC diagnostic push +#pragma GCC diagnostic ignored "-Wuninitialized" +#endif + __m128d a; +#if defined(_MSC_VER) + a = _mm_setzero_pd(); +#endif + return a; +#if defined(__GNUC__) || defined(__clang__) +#pragma GCC diagnostic pop +#endif +} + +// Unpack and interleave 16-bit integers from the high half of a and b, and +// store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_unpackhi_epi16 FORCE_INLINE __m128i _mm_unpackhi_epi16(__m128i a, __m128i b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128i_s16( vzip2q_s16(vreinterpretq_s16_m128i(a), vreinterpretq_s16_m128i(b))); #else @@ -5848,12 +5756,12 @@ FORCE_INLINE __m128i _mm_unpackhi_epi16(__m128i a, __m128i b) #endif } -// Interleaves the upper 2 signed or unsigned 32-bit integers in a with the -// upper 2 signed or unsigned 32-bit integers in b. -// https://msdn.microsoft.com/en-us/library/65sa7cbs(v=vs.100).aspx +// Unpack and interleave 32-bit integers from the high half of a and b, and +// store the results in dst. 
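// Example (element 0 is the lowest lane; values are arbitrary):
//   __m128i a = _mm_set_epi32(4, 3, 2, 1);  /* lanes {1, 2, 3, 4} */
//   __m128i b = _mm_set_epi32(8, 7, 6, 5);  /* lanes {5, 6, 7, 8} */
//   __m128i r = _mm_unpackhi_epi32(a, b);   /* lanes {3, 7, 4, 8} */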
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_unpackhi_epi32 FORCE_INLINE __m128i _mm_unpackhi_epi32(__m128i a, __m128i b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128i_s32( vzip2q_s32(vreinterpretq_s32_m128i(a), vreinterpretq_s32_m128i(b))); #else @@ -5864,33 +5772,27 @@ FORCE_INLINE __m128i _mm_unpackhi_epi32(__m128i a, __m128i b) #endif } -// Interleaves the upper signed or unsigned 64-bit integer in a with the -// upper signed or unsigned 64-bit integer in b. -// -// r0 := a1 -// r1 := b1 +// Unpack and interleave 64-bit integers from the high half of a and b, and +// store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_unpackhi_epi64 FORCE_INLINE __m128i _mm_unpackhi_epi64(__m128i a, __m128i b) { +#if defined(__aarch64__) || defined(_M_ARM64) + return vreinterpretq_m128i_s64( + vzip2q_s64(vreinterpretq_s64_m128i(a), vreinterpretq_s64_m128i(b))); +#else int64x1_t a_h = vget_high_s64(vreinterpretq_s64_m128i(a)); int64x1_t b_h = vget_high_s64(vreinterpretq_s64_m128i(b)); return vreinterpretq_m128i_s64(vcombine_s64(a_h, b_h)); +#endif } -// Interleaves the upper 8 signed or unsigned 8-bit integers in a with the upper -// 8 signed or unsigned 8-bit integers in b. -// -// r0 := a8 -// r1 := b8 -// r2 := a9 -// r3 := b9 -// ... -// r14 := a15 -// r15 := b15 -// -// https://msdn.microsoft.com/en-us/library/t5h7783k(v=vs.100).aspx +// Unpack and interleave 8-bit integers from the high half of a and b, and store +// the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_unpackhi_epi8 FORCE_INLINE __m128i _mm_unpackhi_epi8(__m128i a, __m128i b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128i_s8( vzip2q_s8(vreinterpretq_s8_m128i(a), vreinterpretq_s8_m128i(b))); #else @@ -5905,18 +5807,10 @@ FORCE_INLINE __m128i _mm_unpackhi_epi8(__m128i a, __m128i b) // Unpack and interleave double-precision (64-bit) floating-point elements from // the high half of a and b, and store the results in dst. -// -// DEFINE INTERLEAVE_HIGH_QWORDS(src1[127:0], src2[127:0]) { -// dst[63:0] := src1[127:64] -// dst[127:64] := src2[127:64] -// RETURN dst[127:0] -// } -// dst[127:0] := INTERLEAVE_HIGH_QWORDS(a[127:0], b[127:0]) -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_unpackhi_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_unpackhi_pd FORCE_INLINE __m128d _mm_unpackhi_pd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64( vzip2q_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b))); #else @@ -5926,22 +5820,12 @@ FORCE_INLINE __m128d _mm_unpackhi_pd(__m128d a, __m128d b) #endif } -// Interleaves the lower 4 signed or unsigned 16-bit integers in a with the -// lower 4 signed or unsigned 16-bit integers in b. -// -// r0 := a0 -// r1 := b0 -// r2 := a1 -// r3 := b1 -// r4 := a2 -// r5 := b2 -// r6 := a3 -// r7 := b3 -// -// https://msdn.microsoft.com/en-us/library/btxb17bw%28v=vs.90%29.aspx +// Unpack and interleave 16-bit integers from the low half of a and b, and store +// the results in dst. 
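// A common idiom: interleaving with zero widens the four low unsigned 16-bit
// lanes of a vector (`v` here is hypothetical) to 32 bits:
//   __m128i lo32 = _mm_unpacklo_epi16(v, _mm_setzero_si128());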
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_unpacklo_epi16 FORCE_INLINE __m128i _mm_unpacklo_epi16(__m128i a, __m128i b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128i_s16( vzip1q_s16(vreinterpretq_s16_m128i(a), vreinterpretq_s16_m128i(b))); #else @@ -5952,18 +5836,12 @@ FORCE_INLINE __m128i _mm_unpacklo_epi16(__m128i a, __m128i b) #endif } -// Interleaves the lower 2 signed or unsigned 32 - bit integers in a with the -// lower 2 signed or unsigned 32 - bit integers in b. -// -// r0 := a0 -// r1 := b0 -// r2 := a1 -// r3 := b1 -// -// https://msdn.microsoft.com/en-us/library/x8atst9d(v=vs.100).aspx +// Unpack and interleave 32-bit integers from the low half of a and b, and store +// the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_unpacklo_epi32 FORCE_INLINE __m128i _mm_unpacklo_epi32(__m128i a, __m128i b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128i_s32( vzip1q_s32(vreinterpretq_s32_m128i(a), vreinterpretq_s32_m128i(b))); #else @@ -5974,28 +5852,27 @@ FORCE_INLINE __m128i _mm_unpacklo_epi32(__m128i a, __m128i b) #endif } +// Unpack and interleave 64-bit integers from the low half of a and b, and store +// the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_unpacklo_epi64 FORCE_INLINE __m128i _mm_unpacklo_epi64(__m128i a, __m128i b) { +#if defined(__aarch64__) || defined(_M_ARM64) + return vreinterpretq_m128i_s64( + vzip1q_s64(vreinterpretq_s64_m128i(a), vreinterpretq_s64_m128i(b))); +#else int64x1_t a_l = vget_low_s64(vreinterpretq_s64_m128i(a)); int64x1_t b_l = vget_low_s64(vreinterpretq_s64_m128i(b)); return vreinterpretq_m128i_s64(vcombine_s64(a_l, b_l)); +#endif } -// Interleaves the lower 8 signed or unsigned 8-bit integers in a with the lower -// 8 signed or unsigned 8-bit integers in b. -// -// r0 := a0 -// r1 := b0 -// r2 := a1 -// r3 := b1 -// ... -// r14 := a7 -// r15 := b7 -// -// https://msdn.microsoft.com/en-us/library/xf7k860c%28v=vs.90%29.aspx +// Unpack and interleave 8-bit integers from the low half of a and b, and store +// the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_unpacklo_epi8 FORCE_INLINE __m128i _mm_unpacklo_epi8(__m128i a, __m128i b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128i_s8( vzip1q_s8(vreinterpretq_s8_m128i(a), vreinterpretq_s8_m128i(b))); #else @@ -6008,18 +5885,10 @@ FORCE_INLINE __m128i _mm_unpacklo_epi8(__m128i a, __m128i b) // Unpack and interleave double-precision (64-bit) floating-point elements from // the low half of a and b, and store the results in dst. 
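// Together with _mm_unpackhi_pd above this gives a 2x2 transpose
// (illustrative values):
//   __m128d r0 = _mm_set_pd(2.0, 1.0);     /* row 0 = {1, 2} */
//   __m128d r1 = _mm_set_pd(4.0, 3.0);     /* row 1 = {3, 4} */
//   __m128d c0 = _mm_unpacklo_pd(r0, r1);  /* {1, 3} */
//   __m128d c1 = _mm_unpackhi_pd(r0, r1);  /* {2, 4} */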
-// -// DEFINE INTERLEAVE_QWORDS(src1[127:0], src2[127:0]) { -// dst[63:0] := src1[63:0] -// dst[127:64] := src2[63:0] -// RETURN dst[127:0] -// } -// dst[127:0] := INTERLEAVE_QWORDS(a[127:0], b[127:0]) -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_unpacklo_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_unpacklo_pd FORCE_INLINE __m128d _mm_unpacklo_pd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64( vzip1q_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b))); #else @@ -6031,21 +5900,16 @@ FORCE_INLINE __m128d _mm_unpacklo_pd(__m128d a, __m128d b) // Compute the bitwise XOR of packed double-precision (64-bit) floating-point // elements in a and b, and store the results in dst. -// -// FOR j := 0 to 1 -// i := j*64 -// dst[i+63:i] := a[i+63:i] XOR b[i+63:i] -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_xor_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_xor_pd FORCE_INLINE __m128d _mm_xor_pd(__m128d a, __m128d b) { return vreinterpretq_m128d_s64( veorq_s64(vreinterpretq_s64_m128d(a), vreinterpretq_s64_m128d(b))); } -// Computes the bitwise XOR of the 128-bit value in a and the 128-bit value in -// b. https://msdn.microsoft.com/en-us/library/fzt08www(v=vs.100).aspx +// Compute the bitwise XOR of 128 bits (representing integer data) in a and b, +// and store the result in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_xor_si128 FORCE_INLINE __m128i _mm_xor_si128(__m128i a, __m128i b) { return vreinterpretq_m128i_s32( @@ -6057,21 +5921,11 @@ FORCE_INLINE __m128i _mm_xor_si128(__m128i a, __m128i b) // Alternatively add and subtract packed double-precision (64-bit) // floating-point elements in a to/from packed elements in b, and store the // results in dst. -// -// FOR j := 0 to 1 -// i := j*64 -// IF ((j & 1) == 0) -// dst[i+63:i] := a[i+63:i] - b[i+63:i] -// ELSE -// dst[i+63:i] := a[i+63:i] + b[i+63:i] -// FI -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_addsub_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_addsub_pd FORCE_INLINE __m128d _mm_addsub_pd(__m128d a, __m128d b) { - __m128d mask = _mm_set_pd(1.0f, -1.0f); -#if defined(__aarch64__) + _sse2neon_const __m128d mask = _mm_set_pd(1.0f, -1.0f); +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64(vfmaq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b), vreinterpretq_f64_m128d(mask))); @@ -6083,11 +5937,12 @@ FORCE_INLINE __m128d _mm_addsub_pd(__m128d a, __m128d b) // Alternatively add and subtract packed single-precision (32-bit) // floating-point elements in a to/from packed elements in b, and store the // results in dst. 
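// Example (even lanes subtract, odd lanes add); this pattern is a building
// block of vectorized complex multiplication:
//   __m128 a = _mm_setr_ps(1.0f, 2.0f, 3.0f, 4.0f);
//   __m128 b = _mm_setr_ps(10.0f, 20.0f, 30.0f, 40.0f);
//   __m128 r = _mm_addsub_ps(a, b);  /* {-9, 22, -27, 44} */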
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=addsub_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=addsub_ps FORCE_INLINE __m128 _mm_addsub_ps(__m128 a, __m128 b) { - __m128 mask = {-1.0f, 1.0f, -1.0f, 1.0f}; -#if defined(__aarch64__) || defined(__ARM_FEATURE_FMA) /* VFPv4+ */ + _sse2neon_const __m128 mask = _mm_setr_ps(-1.0f, 1.0f, -1.0f, 1.0f); +#if (defined(__aarch64__) || defined(_M_ARM64)) || \ + defined(__ARM_FEATURE_FMA) /* VFPv4+ */ return vreinterpretq_m128_f32(vfmaq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(mask), vreinterpretq_f32_m128(b))); @@ -6098,10 +5953,10 @@ FORCE_INLINE __m128 _mm_addsub_ps(__m128 a, __m128 b) // Horizontally add adjacent pairs of double-precision (64-bit) floating-point // elements in a and b, and pack the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_hadd_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_hadd_pd FORCE_INLINE __m128d _mm_hadd_pd(__m128d a, __m128d b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64( vpaddq_f64(vreinterpretq_f64_m128d(a), vreinterpretq_f64_m128d(b))); #else @@ -6112,12 +5967,12 @@ FORCE_INLINE __m128d _mm_hadd_pd(__m128d a, __m128d b) #endif } -// Computes pairwise add of each argument as single-precision, floating-point -// values a and b. -// https://msdn.microsoft.com/en-us/library/yd9wecaa.aspx +// Horizontally add adjacent pairs of single-precision (32-bit) floating-point +// elements in a and b, and pack the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_hadd_ps FORCE_INLINE __m128 _mm_hadd_ps(__m128 a, __m128 b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128_f32( vpaddq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b))); #else @@ -6132,13 +5987,14 @@ FORCE_INLINE __m128 _mm_hadd_ps(__m128 a, __m128 b) // Horizontally subtract adjacent pairs of double-precision (64-bit) // floating-point elements in a and b, and pack the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_hsub_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_hsub_pd FORCE_INLINE __m128d _mm_hsub_pd(__m128d _a, __m128d _b) { -#if defined(__aarch64__) - return vreinterpretq_m128d_f64(vsubq_f64( - vuzp1q_f64(vreinterpretq_f64_m128d(_a), vreinterpretq_f64_m128d(_b)), - vuzp2q_f64(vreinterpretq_f64_m128d(_a), vreinterpretq_f64_m128d(_b)))); +#if defined(__aarch64__) || defined(_M_ARM64) + float64x2_t a = vreinterpretq_f64_m128d(_a); + float64x2_t b = vreinterpretq_f64_m128d(_b); + return vreinterpretq_m128d_f64( + vsubq_f64(vuzp1q_f64(a, b), vuzp2q_f64(a, b))); #else double *da = (double *) &_a; double *db = (double *) &_b; @@ -6147,18 +6003,18 @@ FORCE_INLINE __m128d _mm_hsub_pd(__m128d _a, __m128d _b) #endif } -// Horizontally substract adjacent pairs of single-precision (32-bit) +// Horizontally subtract adjacent pairs of single-precision (32-bit) // floating-point elements in a and b, and pack the results in dst. 
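// Example, alongside _mm_hadd_ps above (arbitrary values):
//   __m128 a = _mm_setr_ps(1.0f, 2.0f, 3.0f, 4.0f);
//   __m128 b = _mm_setr_ps(10.0f, 20.0f, 30.0f, 40.0f);
//   __m128 s = _mm_hadd_ps(a, b);  /* {3, 7, 30, 70}     */
//   __m128 d = _mm_hsub_ps(a, b);  /* {-1, -1, -10, -10} */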
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_hsub_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_hsub_ps FORCE_INLINE __m128 _mm_hsub_ps(__m128 _a, __m128 _b) { -#if defined(__aarch64__) - return vreinterpretq_m128_f32(vsubq_f32( - vuzp1q_f32(vreinterpretq_f32_m128(_a), vreinterpretq_f32_m128(_b)), - vuzp2q_f32(vreinterpretq_f32_m128(_a), vreinterpretq_f32_m128(_b)))); + float32x4_t a = vreinterpretq_f32_m128(_a); + float32x4_t b = vreinterpretq_f32_m128(_b); +#if defined(__aarch64__) || defined(_M_ARM64) + return vreinterpretq_m128_f32( + vsubq_f32(vuzp1q_f32(a, b), vuzp2q_f32(a, b))); #else - float32x4x2_t c = - vuzpq_f32(vreinterpretq_f32_m128(_a), vreinterpretq_f32_m128(_b)); + float32x4x2_t c = vuzpq_f32(a, b); return vreinterpretq_m128_f32(vsubq_f32(c.val[0], c.val[1])); #endif } @@ -6166,27 +6022,20 @@ FORCE_INLINE __m128 _mm_hsub_ps(__m128 _a, __m128 _b) // Load 128-bits of integer data from unaligned memory into dst. This intrinsic // may perform better than _mm_loadu_si128 when the data crosses a cache line // boundary. -// -// dst[127:0] := MEM[mem_addr+127:mem_addr] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_lddqu_si128 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_lddqu_si128 #define _mm_lddqu_si128 _mm_loadu_si128 // Load a double-precision (64-bit) floating-point element from memory into both // elements of dst. -// -// dst[63:0] := MEM[mem_addr+63:mem_addr] -// dst[127:64] := MEM[mem_addr+63:mem_addr] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_loaddup_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_loaddup_pd #define _mm_loaddup_pd _mm_load1_pd // Duplicate the low double-precision (64-bit) floating-point element from a, // and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_movedup_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_movedup_pd FORCE_INLINE __m128d _mm_movedup_pd(__m128d a) { -#if (__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64( vdupq_laneq_f64(vreinterpretq_f64_m128d(a), 0)); #else @@ -6197,11 +6046,14 @@ FORCE_INLINE __m128d _mm_movedup_pd(__m128d a) // Duplicate odd-indexed single-precision (32-bit) floating-point elements // from a, and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_movehdup_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_movehdup_ps FORCE_INLINE __m128 _mm_movehdup_ps(__m128 a) { -#if __has_builtin(__builtin_shufflevector) - return vreinterpretq_m128_f32(__builtin_shufflevector( +#if defined(__aarch64__) || defined(_M_ARM64) + return vreinterpretq_m128_f32( + vtrn2q_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(a))); +#elif defined(_sse2neon_shuffle) + return vreinterpretq_m128_f32(vshuffleq_s32( vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(a), 1, 1, 3, 3)); #else float32_t a1 = vgetq_lane_f32(vreinterpretq_f32_m128(a), 1); @@ -6213,11 +6065,14 @@ FORCE_INLINE __m128 _mm_movehdup_ps(__m128 a) // Duplicate even-indexed single-precision (32-bit) floating-point elements // from a, and store the results in dst. 
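// Example; its counterpart _mm_movehdup_ps above copies the odd-indexed lanes:
//   __m128 a  = _mm_setr_ps(1.0f, 2.0f, 3.0f, 4.0f);
//   __m128 lo = _mm_moveldup_ps(a);  /* {1, 1, 3, 3} */
//   __m128 hi = _mm_movehdup_ps(a);  /* {2, 2, 4, 4} */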
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_moveldup_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_moveldup_ps FORCE_INLINE __m128 _mm_moveldup_ps(__m128 a) { -#if __has_builtin(__builtin_shufflevector) - return vreinterpretq_m128_f32(__builtin_shufflevector( +#if defined(__aarch64__) || defined(_M_ARM64) + return vreinterpretq_m128_f32( + vtrn1q_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(a))); +#elif defined(_sse2neon_shuffle) + return vreinterpretq_m128_f32(vshuffleq_s32( vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(a), 0, 0, 2, 2)); #else float32_t a0 = vgetq_lane_f32(vreinterpretq_f32_m128(a), 0); @@ -6231,13 +6086,7 @@ FORCE_INLINE __m128 _mm_moveldup_ps(__m128 a) // Compute the absolute value of packed signed 16-bit integers in a, and store // the unsigned results in dst. -// -// FOR j := 0 to 7 -// i := j*16 -// dst[i+15:i] := ABS(a[i+15:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_abs_epi16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_abs_epi16 FORCE_INLINE __m128i _mm_abs_epi16(__m128i a) { return vreinterpretq_m128i_s16(vabsq_s16(vreinterpretq_s16_m128i(a))); @@ -6245,13 +6094,7 @@ FORCE_INLINE __m128i _mm_abs_epi16(__m128i a) // Compute the absolute value of packed signed 32-bit integers in a, and store // the unsigned results in dst. -// -// FOR j := 0 to 3 -// i := j*32 -// dst[i+31:i] := ABS(a[i+31:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_abs_epi32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_abs_epi32 FORCE_INLINE __m128i _mm_abs_epi32(__m128i a) { return vreinterpretq_m128i_s32(vabsq_s32(vreinterpretq_s32_m128i(a))); @@ -6259,13 +6102,7 @@ FORCE_INLINE __m128i _mm_abs_epi32(__m128i a) // Compute the absolute value of packed signed 8-bit integers in a, and store // the unsigned results in dst. -// -// FOR j := 0 to 15 -// i := j*8 -// dst[i+7:i] := ABS(a[i+7:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_abs_epi8 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_abs_epi8 FORCE_INLINE __m128i _mm_abs_epi8(__m128i a) { return vreinterpretq_m128i_s8(vabsq_s8(vreinterpretq_s8_m128i(a))); @@ -6273,13 +6110,7 @@ FORCE_INLINE __m128i _mm_abs_epi8(__m128i a) // Compute the absolute value of packed signed 16-bit integers in a, and store // the unsigned results in dst. -// -// FOR j := 0 to 3 -// i := j*16 -// dst[i+15:i] := ABS(a[i+15:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_abs_pi16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_abs_pi16 FORCE_INLINE __m64 _mm_abs_pi16(__m64 a) { return vreinterpret_m64_s16(vabs_s16(vreinterpret_s16_m64(a))); @@ -6287,13 +6118,7 @@ FORCE_INLINE __m64 _mm_abs_pi16(__m64 a) // Compute the absolute value of packed signed 32-bit integers in a, and store // the unsigned results in dst. 
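// Example for the whole _mm_abs_* family; no saturation is performed, so the
// most negative value maps to itself:
//   __m128i p = _mm_abs_epi16(_mm_set1_epi16(-7));      /* every lane = 7       */
//   __m128i q = _mm_abs_epi16(_mm_set1_epi16(-32768));  /* every lane = -32768  */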
-// -// FOR j := 0 to 1 -// i := j*32 -// dst[i+31:i] := ABS(a[i+31:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_abs_pi32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_abs_pi32 FORCE_INLINE __m64 _mm_abs_pi32(__m64 a) { return vreinterpret_m64_s32(vabs_s32(vreinterpret_s32_m64(a))); @@ -6301,13 +6126,7 @@ FORCE_INLINE __m64 _mm_abs_pi32(__m64 a) // Compute the absolute value of packed signed 8-bit integers in a, and store // the unsigned results in dst. -// -// FOR j := 0 to 7 -// i := j*8 -// dst[i+7:i] := ABS(a[i+7:i]) -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_abs_pi8 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_abs_pi8 FORCE_INLINE __m64 _mm_abs_pi8(__m64 a) { return vreinterpret_m64_s8(vabs_s8(vreinterpret_s8_m64(a))); @@ -6315,71 +6134,69 @@ FORCE_INLINE __m64 _mm_abs_pi8(__m64 a) // Concatenate 16-byte blocks in a and b into a 32-byte temporary result, shift // the result right by imm8 bytes, and store the low 16 bytes in dst. -// -// tmp[255:0] := ((a[127:0] << 128)[255:0] OR b[127:0]) >> (imm8*8) -// dst[127:0] := tmp[127:0] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_alignr_epi8 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_alignr_epi8 +#if defined(__GNUC__) && !defined(__clang__) #define _mm_alignr_epi8(a, b, imm) \ __extension__({ \ + uint8x16_t _a = vreinterpretq_u8_m128i(a); \ + uint8x16_t _b = vreinterpretq_u8_m128i(b); \ __m128i ret; \ - if (unlikely((imm) >= 32)) { \ - ret = _mm_setzero_si128(); \ - } else { \ - uint8x16_t tmp_low, tmp_high; \ - if (imm >= 16) { \ - const int idx = imm - 16; \ - tmp_low = vreinterpretq_u8_m128i(a); \ - tmp_high = vdupq_n_u8(0); \ - ret = \ - vreinterpretq_m128i_u8(vextq_u8(tmp_low, tmp_high, idx)); \ - } else { \ - const int idx = imm; \ - tmp_low = vreinterpretq_u8_m128i(b); \ - tmp_high = vreinterpretq_u8_m128i(a); \ - ret = \ - vreinterpretq_m128i_u8(vextq_u8(tmp_low, tmp_high, idx)); \ - } \ - } \ + if (_sse2neon_unlikely((imm) & ~31)) \ + ret = vreinterpretq_m128i_u8(vdupq_n_u8(0)); \ + else if (imm >= 16) \ + ret = _mm_srli_si128(a, imm >= 16 ? imm - 16 : 0); \ + else \ + ret = \ + vreinterpretq_m128i_u8(vextq_u8(_b, _a, imm < 16 ? imm : 0)); \ ret; \ }) +#else +#define _mm_alignr_epi8(a, b, imm) \ + _sse2neon_define2( \ + __m128i, a, b, uint8x16_t __a = vreinterpretq_u8_m128i(_a); \ + uint8x16_t __b = vreinterpretq_u8_m128i(_b); __m128i ret; \ + if (_sse2neon_unlikely((imm) & ~31)) ret = \ + vreinterpretq_m128i_u8(vdupq_n_u8(0)); \ + else if (imm >= 16) ret = \ + _mm_srli_si128(_a, imm >= 16 ? imm - 16 : 0); \ + else ret = \ + vreinterpretq_m128i_u8(vextq_u8(__b, __a, imm < 16 ? imm : 0)); \ + _sse2neon_return(ret);) + +#endif + // Concatenate 8-byte blocks in a and b into a 16-byte temporary result, shift // the result right by imm8 bytes, and store the low 8 bytes in dst. 
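// The 128-bit _mm_alignr_epi8 above is the usual way to build a sliding
// window over two consecutive loads (pointer `p` is a hypothetical
// const uint8_t pointer):
//   __m128i lo = _mm_loadu_si128((const __m128i *) p);         /* bytes 0..15  */
//   __m128i hi = _mm_loadu_si128((const __m128i *) (p + 16));  /* bytes 16..31 */
//   __m128i w  = _mm_alignr_epi8(hi, lo, 5);                   /* bytes 5..20  */
// The __m64 form below works the same way on 8-byte blocks.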
-// -// tmp[127:0] := ((a[63:0] << 64)[127:0] OR b[63:0]) >> (imm8*8) -// dst[63:0] := tmp[63:0] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_alignr_pi8 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_alignr_pi8 #define _mm_alignr_pi8(a, b, imm) \ - __extension__({ \ - __m64 ret; \ - if (unlikely((imm) >= 16)) { \ + _sse2neon_define2( \ + __m64, a, b, __m64 ret; if (_sse2neon_unlikely((imm) >= 16)) { \ ret = vreinterpret_m64_s8(vdup_n_s8(0)); \ } else { \ - uint8x8_t tmp_low, tmp_high; \ - if (imm >= 8) { \ - const int idx = imm - 8; \ - tmp_low = vreinterpret_u8_m64(a); \ + uint8x8_t tmp_low; \ + uint8x8_t tmp_high; \ + if ((imm) >= 8) { \ + const int idx = (imm) -8; \ + tmp_low = vreinterpret_u8_m64(_a); \ tmp_high = vdup_n_u8(0); \ ret = vreinterpret_m64_u8(vext_u8(tmp_low, tmp_high, idx)); \ } else { \ - const int idx = imm; \ - tmp_low = vreinterpret_u8_m64(b); \ - tmp_high = vreinterpret_u8_m64(a); \ + const int idx = (imm); \ + tmp_low = vreinterpret_u8_m64(_b); \ + tmp_high = vreinterpret_u8_m64(_a); \ ret = vreinterpret_m64_u8(vext_u8(tmp_low, tmp_high, idx)); \ } \ - } \ - ret; \ - }) + } _sse2neon_return(ret);) -// Computes pairwise add of each argument as a 16-bit signed or unsigned integer -// values a and b. +// Horizontally add adjacent pairs of 16-bit integers in a and b, and pack the +// signed 16-bit results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_hadd_epi16 FORCE_INLINE __m128i _mm_hadd_epi16(__m128i _a, __m128i _b) { int16x8_t a = vreinterpretq_s16_m128i(_a); int16x8_t b = vreinterpretq_s16_m128i(_b); -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128i_s16(vpaddq_s16(a, b)); #else return vreinterpretq_m128i_s16( @@ -6388,20 +6205,25 @@ FORCE_INLINE __m128i _mm_hadd_epi16(__m128i _a, __m128i _b) #endif } -// Computes pairwise add of each argument as a 32-bit signed or unsigned integer -// values a and b. +// Horizontally add adjacent pairs of 32-bit integers in a and b, and pack the +// signed 32-bit results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_hadd_epi32 FORCE_INLINE __m128i _mm_hadd_epi32(__m128i _a, __m128i _b) { int32x4_t a = vreinterpretq_s32_m128i(_a); int32x4_t b = vreinterpretq_s32_m128i(_b); +#if defined(__aarch64__) || defined(_M_ARM64) + return vreinterpretq_m128i_s32(vpaddq_s32(a, b)); +#else return vreinterpretq_m128i_s32( vcombine_s32(vpadd_s32(vget_low_s32(a), vget_high_s32(a)), vpadd_s32(vget_low_s32(b), vget_high_s32(b)))); +#endif } // Horizontally add adjacent pairs of 16-bit integers in a and b, and pack the // signed 16-bit results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_hadd_pi16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_hadd_pi16 FORCE_INLINE __m64 _mm_hadd_pi16(__m64 a, __m64 b) { return vreinterpret_m64_s16( @@ -6410,18 +6232,19 @@ FORCE_INLINE __m64 _mm_hadd_pi16(__m64 a, __m64 b) // Horizontally add adjacent pairs of 32-bit integers in a and b, and pack the // signed 32-bit results in dst. 
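// Horizontal adds are handy for reductions; with the 128-bit _mm_hadd_epi32
// above (`v` is a hypothetical __m128i):
//   __m128i s = _mm_hadd_epi32(v, v);  /* {v0+v1, v2+v3, v0+v1, v2+v3} */
//   s = _mm_hadd_epi32(s, s);          /* every lane = v0+v1+v2+v3     */
//   int sum = _mm_cvtsi128_si32(s);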
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_hadd_pi32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_hadd_pi32 FORCE_INLINE __m64 _mm_hadd_pi32(__m64 a, __m64 b) { return vreinterpret_m64_s32( vpadd_s32(vreinterpret_s32_m64(a), vreinterpret_s32_m64(b))); } -// Computes saturated pairwise sub of each argument as a 16-bit signed -// integer values a and b. +// Horizontally add adjacent pairs of signed 16-bit integers in a and b using +// saturation, and pack the signed 16-bit results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_hadds_epi16 FORCE_INLINE __m128i _mm_hadds_epi16(__m128i _a, __m128i _b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) int16x8_t a = vreinterpretq_s16_m128i(_a); int16x8_t b = vreinterpretq_s16_m128i(_b); return vreinterpretq_s64_s16( @@ -6441,12 +6264,12 @@ FORCE_INLINE __m128i _mm_hadds_epi16(__m128i _a, __m128i _b) // Horizontally add adjacent pairs of signed 16-bit integers in a and b using // saturation, and pack the signed 16-bit results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_hadds_pi16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_hadds_pi16 FORCE_INLINE __m64 _mm_hadds_pi16(__m64 _a, __m64 _b) { int16x4_t a = vreinterpret_s16_m64(_a); int16x4_t b = vreinterpret_s16_m64(_b); -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpret_s64_s16(vqadd_s16(vuzp1_s16(a, b), vuzp2_s16(a, b))); #else int16x4x2_t res = vuzp_s16(a, b); @@ -6454,101 +6277,96 @@ FORCE_INLINE __m64 _mm_hadds_pi16(__m64 _a, __m64 _b) #endif } -// Computes pairwise difference of each argument as a 16-bit signed or unsigned -// integer values a and b. +// Horizontally subtract adjacent pairs of 16-bit integers in a and b, and pack +// the signed 16-bit results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_hsub_epi16 FORCE_INLINE __m128i _mm_hsub_epi16(__m128i _a, __m128i _b) { - int32x4_t a = vreinterpretq_s32_m128i(_a); - int32x4_t b = vreinterpretq_s32_m128i(_b); - // Interleave using vshrn/vmovn - // [a0|a2|a4|a6|b0|b2|b4|b6] - // [a1|a3|a5|a7|b1|b3|b5|b7] - int16x8_t ab0246 = vcombine_s16(vmovn_s32(a), vmovn_s32(b)); - int16x8_t ab1357 = vcombine_s16(vshrn_n_s32(a, 16), vshrn_n_s32(b, 16)); - // Subtract - return vreinterpretq_m128i_s16(vsubq_s16(ab0246, ab1357)); + int16x8_t a = vreinterpretq_s16_m128i(_a); + int16x8_t b = vreinterpretq_s16_m128i(_b); +#if defined(__aarch64__) || defined(_M_ARM64) + return vreinterpretq_m128i_s16( + vsubq_s16(vuzp1q_s16(a, b), vuzp2q_s16(a, b))); +#else + int16x8x2_t c = vuzpq_s16(a, b); + return vreinterpretq_m128i_s16(vsubq_s16(c.val[0], c.val[1])); +#endif } -// Computes pairwise difference of each argument as a 32-bit signed or unsigned -// integer values a and b. +// Horizontally subtract adjacent pairs of 32-bit integers in a and b, and pack +// the signed 32-bit results in dst. 
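// Example (pairs are taken within each operand; values are arbitrary):
//   __m128i a = _mm_set_epi32(4, 30, 2, 10);      /* lanes {10, 2, 30, 4}     */
//   __m128i b = _mm_set_epi32(40, 300, 20, 100);  /* lanes {100, 20, 300, 40} */
//   __m128i r = _mm_hsub_epi32(a, b);             /* lanes {8, 26, 80, 260}   */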
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_hsub_epi32 FORCE_INLINE __m128i _mm_hsub_epi32(__m128i _a, __m128i _b) { - int64x2_t a = vreinterpretq_s64_m128i(_a); - int64x2_t b = vreinterpretq_s64_m128i(_b); - // Interleave using vshrn/vmovn - // [a0|a2|b0|b2] - // [a1|a2|b1|b3] - int32x4_t ab02 = vcombine_s32(vmovn_s64(a), vmovn_s64(b)); - int32x4_t ab13 = vcombine_s32(vshrn_n_s64(a, 32), vshrn_n_s64(b, 32)); - // Subtract - return vreinterpretq_m128i_s32(vsubq_s32(ab02, ab13)); + int32x4_t a = vreinterpretq_s32_m128i(_a); + int32x4_t b = vreinterpretq_s32_m128i(_b); +#if defined(__aarch64__) || defined(_M_ARM64) + return vreinterpretq_m128i_s32( + vsubq_s32(vuzp1q_s32(a, b), vuzp2q_s32(a, b))); +#else + int32x4x2_t c = vuzpq_s32(a, b); + return vreinterpretq_m128i_s32(vsubq_s32(c.val[0], c.val[1])); +#endif } // Horizontally subtract adjacent pairs of 16-bit integers in a and b, and pack // the signed 16-bit results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_hsub_pi16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_hsub_pi16 FORCE_INLINE __m64 _mm_hsub_pi16(__m64 _a, __m64 _b) { - int32x4_t ab = - vcombine_s32(vreinterpret_s32_m64(_a), vreinterpret_s32_m64(_b)); - - int16x4_t ab_low_bits = vmovn_s32(ab); - int16x4_t ab_high_bits = vshrn_n_s32(ab, 16); - - return vreinterpret_m64_s16(vsub_s16(ab_low_bits, ab_high_bits)); + int16x4_t a = vreinterpret_s16_m64(_a); + int16x4_t b = vreinterpret_s16_m64(_b); +#if defined(__aarch64__) || defined(_M_ARM64) + return vreinterpret_m64_s16(vsub_s16(vuzp1_s16(a, b), vuzp2_s16(a, b))); +#else + int16x4x2_t c = vuzp_s16(a, b); + return vreinterpret_m64_s16(vsub_s16(c.val[0], c.val[1])); +#endif } // Horizontally subtract adjacent pairs of 32-bit integers in a and b, and pack // the signed 32-bit results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=mm_hsub_pi32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=mm_hsub_pi32 FORCE_INLINE __m64 _mm_hsub_pi32(__m64 _a, __m64 _b) { -#if defined(__aarch64__) int32x2_t a = vreinterpret_s32_m64(_a); int32x2_t b = vreinterpret_s32_m64(_b); - return vreinterpret_m64_s32(vsub_s32(vtrn1_s32(a, b), vtrn2_s32(a, b))); +#if defined(__aarch64__) || defined(_M_ARM64) + return vreinterpret_m64_s32(vsub_s32(vuzp1_s32(a, b), vuzp2_s32(a, b))); #else - int32x2x2_t trn_ab = - vtrn_s32(vreinterpret_s32_m64(_a), vreinterpret_s32_m64(_b)); - return vreinterpret_m64_s32(vsub_s32(trn_ab.val[0], trn_ab.val[1])); + int32x2x2_t c = vuzp_s32(a, b); + return vreinterpret_m64_s32(vsub_s32(c.val[0], c.val[1])); #endif } -// Computes saturated pairwise difference of each argument as a 16-bit signed -// integer values a and b. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_hsubs_epi16 +// Horizontally subtract adjacent pairs of signed 16-bit integers in a and b +// using saturation, and pack the signed 16-bit results in dst. 
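// Example of the saturation (illustrative values):
//   __m128i a = _mm_setr_epi16(-32768, 1, 100, 40, 0, 0, 0, 0);
//   __m128i r = _mm_hsubs_epi16(a, a);
//   /* lane 0 = saturate(-32768 - 1) = -32768, lane 1 = 100 - 40 = 60 */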
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_hsubs_epi16 FORCE_INLINE __m128i _mm_hsubs_epi16(__m128i _a, __m128i _b) { -#if defined(__aarch64__) int16x8_t a = vreinterpretq_s16_m128i(_a); int16x8_t b = vreinterpretq_s16_m128i(_b); - return vreinterpretq_s64_s16( +#if defined(__aarch64__) || defined(_M_ARM64) + return vreinterpretq_m128i_s16( vqsubq_s16(vuzp1q_s16(a, b), vuzp2q_s16(a, b))); #else - int32x4_t a = vreinterpretq_s32_m128i(_a); - int32x4_t b = vreinterpretq_s32_m128i(_b); - // Interleave using vshrn/vmovn - // [a0|a2|a4|a6|b0|b2|b4|b6] - // [a1|a3|a5|a7|b1|b3|b5|b7] - int16x8_t ab0246 = vcombine_s16(vmovn_s32(a), vmovn_s32(b)); - int16x8_t ab1357 = vcombine_s16(vshrn_n_s32(a, 16), vshrn_n_s32(b, 16)); - // Saturated subtract - return vreinterpretq_m128i_s16(vqsubq_s16(ab0246, ab1357)); + int16x8x2_t c = vuzpq_s16(a, b); + return vreinterpretq_m128i_s16(vqsubq_s16(c.val[0], c.val[1])); #endif } // Horizontally subtract adjacent pairs of signed 16-bit integers in a and b // using saturation, and pack the signed 16-bit results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_hsubs_pi16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_hsubs_pi16 FORCE_INLINE __m64 _mm_hsubs_pi16(__m64 _a, __m64 _b) { int16x4_t a = vreinterpret_s16_m64(_a); int16x4_t b = vreinterpret_s16_m64(_b); -#if defined(__aarch64__) - return vreinterpret_s64_s16(vqsub_s16(vuzp1_s16(a, b), vuzp2_s16(a, b))); +#if defined(__aarch64__) || defined(_M_ARM64) + return vreinterpret_m64_s16(vqsub_s16(vuzp1_s16(a, b), vuzp2_s16(a, b))); #else - int16x4x2_t res = vuzp_s16(a, b); - return vreinterpret_s64_s16(vqsub_s16(res.val[0], res.val[1])); + int16x4x2_t c = vuzp_s16(a, b); + return vreinterpret_m64_s16(vqsub_s16(c.val[0], c.val[1])); #endif } @@ -6556,15 +6374,10 @@ FORCE_INLINE __m64 _mm_hsubs_pi16(__m64 _a, __m64 _b) // signed 8-bit integer from b, producing intermediate signed 16-bit integers. // Horizontally add adjacent pairs of intermediate signed 16-bit integers, // and pack the saturated results in dst. -// -// FOR j := 0 to 7 -// i := j*16 -// dst[i+15:i] := Saturate_To_Int16( a[i+15:i+8]*b[i+15:i+8] + -// a[i+7:i]*b[i+7:i] ) -// ENDFOR +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_maddubs_epi16 FORCE_INLINE __m128i _mm_maddubs_epi16(__m128i _a, __m128i _b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) uint8x16_t a = vreinterpretq_u8_m128i(_a); int8x16_t b = vreinterpretq_s8_m128i(_b); int16x8_t tl = vmulq_s16(vreinterpretq_s16_u16(vmovl_u8(vget_low_u8(a))), @@ -6596,15 +6409,36 @@ FORCE_INLINE __m128i _mm_maddubs_epi16(__m128i _a, __m128i _b) #endif } +// Vertically multiply each unsigned 8-bit integer from a with the corresponding +// signed 8-bit integer from b, producing intermediate signed 16-bit integers. +// Horizontally add adjacent pairs of intermediate signed 16-bit integers, and +// pack the saturated results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_maddubs_pi16 +FORCE_INLINE __m64 _mm_maddubs_pi16(__m64 _a, __m64 _b) +{ + uint16x4_t a = vreinterpret_u16_m64(_a); + int16x4_t b = vreinterpret_s16_m64(_b); + + // Zero extend a + int16x4_t a_odd = vreinterpret_s16_u16(vshr_n_u16(a, 8)); + int16x4_t a_even = vreinterpret_s16_u16(vand_u16(a, vdup_n_u16(0xff))); + + // Sign extend by shifting left then shifting right. 
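    // Shifting left by 8 and then arithmetic-shifting right by 8 sign-extends
    // the low byte of each 16-bit lane (e.g. 0x00FF -> 0xFF00 -> 0xFFFF, i.e.
    // -1); the plain arithmetic `>> 8` used for b_odd below does the same for
    // the high byte.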
+ int16x4_t b_even = vshr_n_s16(vshl_n_s16(b, 8), 8); + int16x4_t b_odd = vshr_n_s16(b, 8); + + // multiply + int16x4_t prod1 = vmul_s16(a_even, b_even); + int16x4_t prod2 = vmul_s16(a_odd, b_odd); + + // saturated add + return vreinterpret_m64_s16(vqadd_s16(prod1, prod2)); +} + // Multiply packed signed 16-bit integers in a and b, producing intermediate // signed 32-bit integers. Shift right by 15 bits while rounding up, and store // the packed 16-bit integers in dst. -// -// r0 := Round(((int32_t)a0 * (int32_t)b0) >> 15) -// r1 := Round(((int32_t)a1 * (int32_t)b1) >> 15) -// r2 := Round(((int32_t)a2 * (int32_t)b2) >> 15) -// ... -// r7 := Round(((int32_t)a7 * (int32_t)b7) >> 15) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_mulhrs_epi16 FORCE_INLINE __m128i _mm_mulhrs_epi16(__m128i a, __m128i b) { // Has issues due to saturation @@ -6628,7 +6462,7 @@ FORCE_INLINE __m128i _mm_mulhrs_epi16(__m128i a, __m128i b) // Multiply packed signed 16-bit integers in a and b, producing intermediate // signed 32-bit integers. Truncate each intermediate integer to the 18 most // significant bits, round by adding 1, and store bits [16:1] to dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_mulhrs_pi16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_mulhrs_pi16 FORCE_INLINE __m64 _mm_mulhrs_pi16(__m64 a, __m64 b) { int32x4_t mul_extend = @@ -6640,14 +6474,14 @@ FORCE_INLINE __m64 _mm_mulhrs_pi16(__m64 a, __m64 b) // Shuffle packed 8-bit integers in a according to shuffle control mask in the // corresponding 8-bit element of b, and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_shuffle_epi8 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_shuffle_epi8 FORCE_INLINE __m128i _mm_shuffle_epi8(__m128i a, __m128i b) { int8x16_t tbl = vreinterpretq_s8_m128i(a); // input a uint8x16_t idx = vreinterpretq_u8_m128i(b); // input b uint8x16_t idx_masked = vandq_u8(idx, vdupq_n_u8(0x8F)); // avoid using meaningless bits -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128i_s8(vqtbl1q_s8(tbl, idx_masked)); #elif defined(__GNUC__) int8x16_t ret; @@ -6668,20 +6502,22 @@ FORCE_INLINE __m128i _mm_shuffle_epi8(__m128i a, __m128i b) #endif } +// Shuffle packed 8-bit integers in a according to shuffle control mask in the +// corresponding 8-bit element of b, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_shuffle_pi8 +FORCE_INLINE __m64 _mm_shuffle_pi8(__m64 a, __m64 b) +{ + const int8x8_t controlMask = + vand_s8(vreinterpret_s8_m64(b), vdup_n_s8((int8_t) (0x1 << 7 | 0x07))); + int8x8_t res = vtbl1_s8(vreinterpret_s8_m64(a), controlMask); + return vreinterpret_m64_s8(res); +} + // Negate packed 16-bit integers in a when the corresponding signed // 16-bit integer in b is negative, and store the results in dst. // Element in dst are zeroed out when the corresponding element // in b is zero. -// -// for i in 0..7 -// if b[i] < 0 -// r[i] := -a[i] -// else if b[i] == 0 -// r[i] := 0 -// else -// r[i] := a[i] -// fi -// done +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sign_epi16 FORCE_INLINE __m128i _mm_sign_epi16(__m128i _a, __m128i _b) { int16x8_t a = vreinterpretq_s16_m128i(_a); @@ -6691,7 +6527,7 @@ FORCE_INLINE __m128i _mm_sign_epi16(__m128i _a, __m128i _b) // (b < 0) ? 
0xFFFF : 0 uint16x8_t ltMask = vreinterpretq_u16_s16(vshrq_n_s16(b, 15)); // (b == 0) ? 0xFFFF : 0 -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) int16x8_t zeroMask = vreinterpretq_s16_u16(vceqzq_s16(b)); #else int16x8_t zeroMask = vreinterpretq_s16_u16(vceqq_s16(b, vdupq_n_s16(0))); @@ -6709,16 +6545,7 @@ FORCE_INLINE __m128i _mm_sign_epi16(__m128i _a, __m128i _b) // 32-bit integer in b is negative, and store the results in dst. // Element in dst are zeroed out when the corresponding element // in b is zero. -// -// for i in 0..3 -// if b[i] < 0 -// r[i] := -a[i] -// else if b[i] == 0 -// r[i] := 0 -// else -// r[i] := a[i] -// fi -// done +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sign_epi32 FORCE_INLINE __m128i _mm_sign_epi32(__m128i _a, __m128i _b) { int32x4_t a = vreinterpretq_s32_m128i(_a); @@ -6729,7 +6556,7 @@ FORCE_INLINE __m128i _mm_sign_epi32(__m128i _a, __m128i _b) uint32x4_t ltMask = vreinterpretq_u32_s32(vshrq_n_s32(b, 31)); // (b == 0) ? 0xFFFFFFFF : 0 -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) int32x4_t zeroMask = vreinterpretq_s32_u32(vceqzq_s32(b)); #else int32x4_t zeroMask = vreinterpretq_s32_u32(vceqq_s32(b, vdupq_n_s32(0))); @@ -6747,16 +6574,7 @@ FORCE_INLINE __m128i _mm_sign_epi32(__m128i _a, __m128i _b) // 8-bit integer in b is negative, and store the results in dst. // Element in dst are zeroed out when the corresponding element // in b is zero. -// -// for i in 0..15 -// if b[i] < 0 -// r[i] := -a[i] -// else if b[i] == 0 -// r[i] := 0 -// else -// r[i] := a[i] -// fi -// done +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sign_epi8 FORCE_INLINE __m128i _mm_sign_epi8(__m128i _a, __m128i _b) { int8x16_t a = vreinterpretq_s8_m128i(_a); @@ -6767,13 +6585,13 @@ FORCE_INLINE __m128i _mm_sign_epi8(__m128i _a, __m128i _b) uint8x16_t ltMask = vreinterpretq_u8_s8(vshrq_n_s8(b, 7)); // (b == 0) ? 0xFF : 0 -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) int8x16_t zeroMask = vreinterpretq_s8_u8(vceqzq_s8(b)); #else int8x16_t zeroMask = vreinterpretq_s8_u8(vceqq_s8(b, vdupq_n_s8(0))); #endif - // bitwise select either a or nagative 'a' (vnegq_s8(a) return nagative 'a') + // bitwise select either a or negative 'a' (vnegq_s8(a) return negative 'a') // based on ltMask int8x16_t masked = vbslq_s8(ltMask, vnegq_s8(a), a); // res = masked & (~zeroMask) @@ -6785,19 +6603,7 @@ FORCE_INLINE __m128i _mm_sign_epi8(__m128i _a, __m128i _b) // Negate packed 16-bit integers in a when the corresponding signed 16-bit // integer in b is negative, and store the results in dst. Element in dst are // zeroed out when the corresponding element in b is zero. -// -// FOR j := 0 to 3 -// i := j*16 -// IF b[i+15:i] < 0 -// dst[i+15:i] := -(a[i+15:i]) -// ELSE IF b[i+15:i] == 0 -// dst[i+15:i] := 0 -// ELSE -// dst[i+15:i] := a[i+15:i] -// FI -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_sign_pi16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sign_pi16 FORCE_INLINE __m64 _mm_sign_pi16(__m64 _a, __m64 _b) { int16x4_t a = vreinterpret_s16_m64(_a); @@ -6808,13 +6614,13 @@ FORCE_INLINE __m64 _mm_sign_pi16(__m64 _a, __m64 _b) uint16x4_t ltMask = vreinterpret_u16_s16(vshr_n_s16(b, 15)); // (b == 0) ? 
0xFFFF : 0 -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) int16x4_t zeroMask = vreinterpret_s16_u16(vceqz_s16(b)); #else int16x4_t zeroMask = vreinterpret_s16_u16(vceq_s16(b, vdup_n_s16(0))); #endif - // bitwise select either a or nagative 'a' (vneg_s16(a) return nagative 'a') + // bitwise select either a or negative 'a' (vneg_s16(a) return negative 'a') // based on ltMask int16x4_t masked = vbsl_s16(ltMask, vneg_s16(a), a); // res = masked & (~zeroMask) @@ -6826,19 +6632,7 @@ FORCE_INLINE __m64 _mm_sign_pi16(__m64 _a, __m64 _b) // Negate packed 32-bit integers in a when the corresponding signed 32-bit // integer in b is negative, and store the results in dst. Element in dst are // zeroed out when the corresponding element in b is zero. -// -// FOR j := 0 to 1 -// i := j*32 -// IF b[i+31:i] < 0 -// dst[i+31:i] := -(a[i+31:i]) -// ELSE IF b[i+31:i] == 0 -// dst[i+31:i] := 0 -// ELSE -// dst[i+31:i] := a[i+31:i] -// FI -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_sign_pi32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sign_pi32 FORCE_INLINE __m64 _mm_sign_pi32(__m64 _a, __m64 _b) { int32x2_t a = vreinterpret_s32_m64(_a); @@ -6849,13 +6643,13 @@ FORCE_INLINE __m64 _mm_sign_pi32(__m64 _a, __m64 _b) uint32x2_t ltMask = vreinterpret_u32_s32(vshr_n_s32(b, 31)); // (b == 0) ? 0xFFFFFFFF : 0 -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) int32x2_t zeroMask = vreinterpret_s32_u32(vceqz_s32(b)); #else int32x2_t zeroMask = vreinterpret_s32_u32(vceq_s32(b, vdup_n_s32(0))); #endif - // bitwise select either a or nagative 'a' (vneg_s32(a) return nagative 'a') + // bitwise select either a or negative 'a' (vneg_s32(a) return negative 'a') // based on ltMask int32x2_t masked = vbsl_s32(ltMask, vneg_s32(a), a); // res = masked & (~zeroMask) @@ -6867,19 +6661,7 @@ FORCE_INLINE __m64 _mm_sign_pi32(__m64 _a, __m64 _b) // Negate packed 8-bit integers in a when the corresponding signed 8-bit integer // in b is negative, and store the results in dst. Element in dst are zeroed out // when the corresponding element in b is zero. -// -// FOR j := 0 to 7 -// i := j*8 -// IF b[i+7:i] < 0 -// dst[i+7:i] := -(a[i+7:i]) -// ELSE IF b[i+7:i] == 0 -// dst[i+7:i] := 0 -// ELSE -// dst[i+7:i] := a[i+7:i] -// FI -// ENDFOR -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_sign_pi8 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_sign_pi8 FORCE_INLINE __m64 _mm_sign_pi8(__m64 _a, __m64 _b) { int8x8_t a = vreinterpret_s8_m64(_a); @@ -6890,13 +6672,13 @@ FORCE_INLINE __m64 _mm_sign_pi8(__m64 _a, __m64 _b) uint8x8_t ltMask = vreinterpret_u8_s8(vshr_n_s8(b, 7)); // (b == 0) ? 0xFF : 0 -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) int8x8_t zeroMask = vreinterpret_s8_u8(vceqz_s8(b)); #else int8x8_t zeroMask = vreinterpret_s8_u8(vceq_s8(b, vdup_n_s8(0))); #endif - // bitwise select either a or nagative 'a' (vneg_s8(a) return nagative 'a') + // bitwise select either a or negative 'a' (vneg_s8(a) return negative 'a') // based on ltMask int8x8_t masked = vbsl_s8(ltMask, vneg_s8(a), a); // res = masked & (~zeroMask) @@ -6909,50 +6691,43 @@ FORCE_INLINE __m64 _mm_sign_pi8(__m64 _a, __m64 _b) // Blend packed 16-bit integers from a and b using control mask imm8, and store // the results in dst. 
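+// Informal usage sketch: bit j of imm8 selects lane j from b, so for example
+// _mm_blend_epi16(a, b, 0x0F) returns the low four 16-bit lanes of b combined
+// with the high four lanes of a.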
-// -// FOR j := 0 to 7 -// i := j*16 -// IF imm8[j] -// dst[i+15:i] := b[i+15:i] -// ELSE -// dst[i+15:i] := a[i+15:i] -// FI -// ENDFOR +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_blend_epi16 // FORCE_INLINE __m128i _mm_blend_epi16(__m128i a, __m128i b, // __constrange(0,255) int imm) -#define _mm_blend_epi16(a, b, imm) \ - __extension__({ \ - const uint16_t _mask[8] = {((imm) & (1 << 0)) ? (uint16_t) -1 : 0x0, \ - ((imm) & (1 << 1)) ? (uint16_t) -1 : 0x0, \ - ((imm) & (1 << 2)) ? (uint16_t) -1 : 0x0, \ - ((imm) & (1 << 3)) ? (uint16_t) -1 : 0x0, \ - ((imm) & (1 << 4)) ? (uint16_t) -1 : 0x0, \ - ((imm) & (1 << 5)) ? (uint16_t) -1 : 0x0, \ - ((imm) & (1 << 6)) ? (uint16_t) -1 : 0x0, \ - ((imm) & (1 << 7)) ? (uint16_t) -1 : 0x0}; \ - uint16x8_t _mask_vec = vld1q_u16(_mask); \ - uint16x8_t _a = vreinterpretq_u16_m128i(a); \ - uint16x8_t _b = vreinterpretq_u16_m128i(b); \ - vreinterpretq_m128i_u16(vbslq_u16(_mask_vec, _b, _a)); \ - }) +#define _mm_blend_epi16(a, b, imm) \ + _sse2neon_define2( \ + __m128i, a, b, \ + const uint16_t _mask[8] = \ + _sse2neon_init(((imm) & (1 << 0)) ? (uint16_t) -1 : 0x0, \ + ((imm) & (1 << 1)) ? (uint16_t) -1 : 0x0, \ + ((imm) & (1 << 2)) ? (uint16_t) -1 : 0x0, \ + ((imm) & (1 << 3)) ? (uint16_t) -1 : 0x0, \ + ((imm) & (1 << 4)) ? (uint16_t) -1 : 0x0, \ + ((imm) & (1 << 5)) ? (uint16_t) -1 : 0x0, \ + ((imm) & (1 << 6)) ? (uint16_t) -1 : 0x0, \ + ((imm) & (1 << 7)) ? (uint16_t) -1 : 0x0); \ + uint16x8_t _mask_vec = vld1q_u16(_mask); \ + uint16x8_t __a = vreinterpretq_u16_m128i(_a); \ + uint16x8_t __b = vreinterpretq_u16_m128i(_b); _sse2neon_return( \ + vreinterpretq_m128i_u16(vbslq_u16(_mask_vec, __b, __a)));) // Blend packed double-precision (64-bit) floating-point elements from a and b // using control mask imm8, and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_blend_pd -#define _mm_blend_pd(a, b, imm) \ - __extension__({ \ - const uint64_t _mask[2] = { \ - ((imm) & (1 << 0)) ? ~UINT64_C(0) : UINT64_C(0), \ - ((imm) & (1 << 1)) ? ~UINT64_C(0) : UINT64_C(0)}; \ - uint64x2_t _mask_vec = vld1q_u64(_mask); \ - uint64x2_t _a = vreinterpretq_u64_m128d(a); \ - uint64x2_t _b = vreinterpretq_u64_m128d(b); \ - vreinterpretq_m128d_u64(vbslq_u64(_mask_vec, _b, _a)); \ - }) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_blend_pd +#define _mm_blend_pd(a, b, imm) \ + _sse2neon_define2( \ + __m128d, a, b, \ + const uint64_t _mask[2] = \ + _sse2neon_init(((imm) & (1 << 0)) ? ~UINT64_C(0) : UINT64_C(0), \ + ((imm) & (1 << 1)) ? ~UINT64_C(0) : UINT64_C(0)); \ + uint64x2_t _mask_vec = vld1q_u64(_mask); \ + uint64x2_t __a = vreinterpretq_u64_m128d(_a); \ + uint64x2_t __b = vreinterpretq_u64_m128d(_b); _sse2neon_return( \ + vreinterpretq_m128d_u64(vbslq_u64(_mask_vec, __b, __a)));) // Blend packed single-precision (32-bit) floating-point elements from a and b // using mask, and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_blend_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_blend_ps FORCE_INLINE __m128 _mm_blend_ps(__m128 _a, __m128 _b, const char imm8) { const uint32_t ALIGN_STRUCT(16) @@ -6968,15 +6743,7 @@ FORCE_INLINE __m128 _mm_blend_ps(__m128 _a, __m128 _b, const char imm8) // Blend packed 8-bit integers from a and b using mask, and store the results in // dst. 
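+// Informal sketch: each result byte comes from b when the most significant
+// bit of the corresponding mask byte is set, and from a otherwise; the
+// implementation below therefore widens the sign bit into an all-ones byte
+// with a signed shift right by 7 before the bitwise select.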
-// -// FOR j := 0 to 15 -// i := j*8 -// IF mask[i+7] -// dst[i+7:i] := b[i+7:i] -// ELSE -// dst[i+7:i] := a[i+7:i] -// FI -// ENDFOR +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_blendv_epi8 FORCE_INLINE __m128i _mm_blendv_epi8(__m128i _a, __m128i _b, __m128i _mask) { // Use a signed shift right to create a mask with the sign bit @@ -6989,12 +6756,12 @@ FORCE_INLINE __m128i _mm_blendv_epi8(__m128i _a, __m128i _b, __m128i _mask) // Blend packed double-precision (64-bit) floating-point elements from a and b // using mask, and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_blendv_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_blendv_pd FORCE_INLINE __m128d _mm_blendv_pd(__m128d _a, __m128d _b, __m128d _mask) { uint64x2_t mask = vreinterpretq_u64_s64(vshrq_n_s64(vreinterpretq_s64_m128d(_mask), 63)); -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) float64x2_t a = vreinterpretq_f64_m128d(_a); float64x2_t b = vreinterpretq_f64_m128d(_b); return vreinterpretq_m128d_f64(vbslq_f64(mask, b, a)); @@ -7007,7 +6774,7 @@ FORCE_INLINE __m128d _mm_blendv_pd(__m128d _a, __m128d _b, __m128d _mask) // Blend packed single-precision (32-bit) floating-point elements from a and b // using mask, and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_blendv_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_blendv_ps FORCE_INLINE __m128 _mm_blendv_ps(__m128 _a, __m128 _b, __m128 _mask) { // Use a signed shift right to create a mask with the sign bit @@ -7021,10 +6788,10 @@ FORCE_INLINE __m128 _mm_blendv_ps(__m128 _a, __m128 _b, __m128 _mask) // Round the packed double-precision (64-bit) floating-point elements in a up // to an integer value, and store the results as packed double-precision // floating-point elements in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_ceil_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_ceil_pd FORCE_INLINE __m128d _mm_ceil_pd(__m128d a) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64(vrndpq_f64(vreinterpretq_f64_m128d(a))); #else double *f = (double *) &a; @@ -7035,10 +6802,11 @@ FORCE_INLINE __m128d _mm_ceil_pd(__m128d a) // Round the packed single-precision (32-bit) floating-point elements in a up to // an integer value, and store the results as packed single-precision // floating-point elements in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_ceil_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_ceil_ps FORCE_INLINE __m128 _mm_ceil_ps(__m128 a) { -#if defined(__aarch64__) +#if (defined(__aarch64__) || defined(_M_ARM64)) || \ + defined(__ARM_FEATURE_DIRECTED_ROUNDING) return vreinterpretq_m128_f32(vrndpq_f32(vreinterpretq_f32_m128(a))); #else float *f = (float *) &a; @@ -7050,7 +6818,7 @@ FORCE_INLINE __m128 _mm_ceil_ps(__m128 a) // an integer value, store the result as a double-precision floating-point // element in the lower element of dst, and copy the upper element from a to the // upper element of dst. 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_ceil_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_ceil_sd FORCE_INLINE __m128d _mm_ceil_sd(__m128d a, __m128d b) { return _mm_move_sd(a, _mm_ceil_pd(b)); @@ -7060,11 +6828,7 @@ FORCE_INLINE __m128d _mm_ceil_sd(__m128d a, __m128d b) // an integer value, store the result as a single-precision floating-point // element in the lower element of dst, and copy the upper 3 packed elements // from a to the upper elements of dst. -// -// dst[31:0] := CEIL(b[31:0]) -// dst[127:32] := a[127:32] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_ceil_ss +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_ceil_ss FORCE_INLINE __m128 _mm_ceil_ss(__m128 a, __m128 b) { return _mm_move_ss(a, _mm_ceil_ps(b)); @@ -7074,7 +6838,7 @@ FORCE_INLINE __m128 _mm_ceil_ss(__m128 a, __m128 b) // in dst FORCE_INLINE __m128i _mm_cmpeq_epi64(__m128i a, __m128i b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128i_u64( vceqq_u64(vreinterpretq_u64_m128i(a), vreinterpretq_u64_m128i(b))); #else @@ -7087,16 +6851,18 @@ FORCE_INLINE __m128i _mm_cmpeq_epi64(__m128i a, __m128i b) #endif } -// Converts the four signed 16-bit integers in the lower 64 bits to four signed -// 32-bit integers. +// Sign extend packed 16-bit integers in a to packed 32-bit integers, and store +// the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtepi16_epi32 FORCE_INLINE __m128i _mm_cvtepi16_epi32(__m128i a) { return vreinterpretq_m128i_s32( vmovl_s16(vget_low_s16(vreinterpretq_s16_m128i(a)))); } -// Converts the two signed 16-bit integers in the lower 32 bits two signed -// 32-bit integers. +// Sign extend packed 16-bit integers in a to packed 64-bit integers, and store +// the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtepi16_epi64 FORCE_INLINE __m128i _mm_cvtepi16_epi64(__m128i a) { int16x8_t s16x8 = vreinterpretq_s16_m128i(a); /* xxxx xxxx xxxx 0B0A */ @@ -7105,16 +6871,18 @@ FORCE_INLINE __m128i _mm_cvtepi16_epi64(__m128i a) return vreinterpretq_m128i_s64(s64x2); } -// Converts the two signed 32-bit integers in the lower 64 bits to two signed -// 64-bit integers. +// Sign extend packed 32-bit integers in a to packed 64-bit integers, and store +// the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtepi32_epi64 FORCE_INLINE __m128i _mm_cvtepi32_epi64(__m128i a) { return vreinterpretq_m128i_s64( vmovl_s32(vget_low_s32(vreinterpretq_s32_m128i(a)))); } -// Converts the four unsigned 8-bit integers in the lower 16 bits to four -// unsigned 32-bit integers. +// Sign extend packed 8-bit integers in a to packed 16-bit integers, and store +// the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtepi8_epi16 FORCE_INLINE __m128i _mm_cvtepi8_epi16(__m128i a) { int8x16_t s8x16 = vreinterpretq_s8_m128i(a); /* xxxx xxxx xxxx DCBA */ @@ -7122,8 +6890,9 @@ FORCE_INLINE __m128i _mm_cvtepi8_epi16(__m128i a) return vreinterpretq_m128i_s16(s16x8); } -// Converts the four unsigned 8-bit integers in the lower 32 bits to four -// unsigned 32-bit integers. +// Sign extend packed 8-bit integers in a to packed 32-bit integers, and store +// the results in dst. 
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtepi8_epi32 FORCE_INLINE __m128i _mm_cvtepi8_epi32(__m128i a) { int8x16_t s8x16 = vreinterpretq_s8_m128i(a); /* xxxx xxxx xxxx DCBA */ @@ -7132,8 +6901,9 @@ FORCE_INLINE __m128i _mm_cvtepi8_epi32(__m128i a) return vreinterpretq_m128i_s32(s32x4); } -// Converts the two signed 8-bit integers in the lower 32 bits to four -// signed 64-bit integers. +// Sign extend packed 8-bit integers in the low 8 bytes of a to packed 64-bit +// integers, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtepi8_epi64 FORCE_INLINE __m128i _mm_cvtepi8_epi64(__m128i a) { int8x16_t s8x16 = vreinterpretq_s8_m128i(a); /* xxxx xxxx xxxx xxBA */ @@ -7143,16 +6913,18 @@ FORCE_INLINE __m128i _mm_cvtepi8_epi64(__m128i a) return vreinterpretq_m128i_s64(s64x2); } -// Converts the four unsigned 16-bit integers in the lower 64 bits to four -// unsigned 32-bit integers. +// Zero extend packed unsigned 16-bit integers in a to packed 32-bit integers, +// and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtepu16_epi32 FORCE_INLINE __m128i _mm_cvtepu16_epi32(__m128i a) { return vreinterpretq_m128i_u32( vmovl_u16(vget_low_u16(vreinterpretq_u16_m128i(a)))); } -// Converts the two unsigned 16-bit integers in the lower 32 bits to two -// unsigned 64-bit integers. +// Zero extend packed unsigned 16-bit integers in a to packed 64-bit integers, +// and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtepu16_epi64 FORCE_INLINE __m128i _mm_cvtepu16_epi64(__m128i a) { uint16x8_t u16x8 = vreinterpretq_u16_m128i(a); /* xxxx xxxx xxxx 0B0A */ @@ -7161,8 +6933,9 @@ FORCE_INLINE __m128i _mm_cvtepu16_epi64(__m128i a) return vreinterpretq_m128i_u64(u64x2); } -// Converts the two unsigned 32-bit integers in the lower 64 bits to two -// unsigned 64-bit integers. +// Zero extend packed unsigned 32-bit integers in a to packed 64-bit integers, +// and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtepu32_epi64 FORCE_INLINE __m128i _mm_cvtepu32_epi64(__m128i a) { return vreinterpretq_m128i_u64( @@ -7171,7 +6944,7 @@ FORCE_INLINE __m128i _mm_cvtepu32_epi64(__m128i a) // Zero extend packed unsigned 8-bit integers in a to packed 16-bit integers, // and store the results in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_cvtepu8_epi16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtepu8_epi16 FORCE_INLINE __m128i _mm_cvtepu8_epi16(__m128i a) { uint8x16_t u8x16 = vreinterpretq_u8_m128i(a); /* xxxx xxxx HGFE DCBA */ @@ -7179,9 +6952,9 @@ FORCE_INLINE __m128i _mm_cvtepu8_epi16(__m128i a) return vreinterpretq_m128i_u16(u16x8); } -// Converts the four unsigned 8-bit integers in the lower 32 bits to four -// unsigned 32-bit integers. -// https://msdn.microsoft.com/en-us/library/bb531467%28v=vs.100%29.aspx +// Zero extend packed unsigned 8-bit integers in a to packed 32-bit integers, +// and store the results in dst. 
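+// Informal example of the zero- vs sign-extension difference, assuming the low
+// four bytes of a are {0x80, 0x01, 0xFF, 0x7F}: _mm_cvtepu8_epi32 yields
+// {128, 1, 255, 127}, whereas the signed _mm_cvtepi8_epi32 above would yield
+// {-128, 1, -1, 127}.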
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtepu8_epi32 FORCE_INLINE __m128i _mm_cvtepu8_epi32(__m128i a) { uint8x16_t u8x16 = vreinterpretq_u8_m128i(a); /* xxxx xxxx xxxx DCBA */ @@ -7190,8 +6963,9 @@ FORCE_INLINE __m128i _mm_cvtepu8_epi32(__m128i a) return vreinterpretq_m128i_u32(u32x4); } -// Converts the two unsigned 8-bit integers in the lower 16 bits to two -// unsigned 64-bit integers. +// Zero extend packed unsigned 8-bit integers in the low 8 bytes of a to packed +// 64-bit integers, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cvtepu8_epi64 FORCE_INLINE __m128i _mm_cvtepu8_epi64(__m128i a) { uint8x16_t u8x16 = vreinterpretq_u8_m128i(a); /* xxxx xxxx xxxx xxBA */ @@ -7201,66 +6975,118 @@ FORCE_INLINE __m128i _mm_cvtepu8_epi64(__m128i a) return vreinterpretq_m128i_u64(u64x2); } +// Conditionally multiply the packed double-precision (64-bit) floating-point +// elements in a and b using the high 4 bits in imm8, sum the four products, and +// conditionally store the sum in dst using the low 4 bits of imm8. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_dp_pd +FORCE_INLINE __m128d _mm_dp_pd(__m128d a, __m128d b, const int imm) +{ + // Generate mask value from constant immediate bit value + const int64_t bit0Mask = imm & 0x01 ? UINT64_MAX : 0; + const int64_t bit1Mask = imm & 0x02 ? UINT64_MAX : 0; +#if !SSE2NEON_PRECISE_DP + const int64_t bit4Mask = imm & 0x10 ? UINT64_MAX : 0; + const int64_t bit5Mask = imm & 0x20 ? UINT64_MAX : 0; +#endif + // Conditional multiplication +#if !SSE2NEON_PRECISE_DP + __m128d mul = _mm_mul_pd(a, b); + const __m128d mulMask = + _mm_castsi128_pd(_mm_set_epi64x(bit5Mask, bit4Mask)); + __m128d tmp = _mm_and_pd(mul, mulMask); +#else +#if defined(__aarch64__) || defined(_M_ARM64) + double d0 = (imm & 0x10) ? vgetq_lane_f64(vreinterpretq_f64_m128d(a), 0) * + vgetq_lane_f64(vreinterpretq_f64_m128d(b), 0) + : 0; + double d1 = (imm & 0x20) ? vgetq_lane_f64(vreinterpretq_f64_m128d(a), 1) * + vgetq_lane_f64(vreinterpretq_f64_m128d(b), 1) + : 0; +#else + double d0 = (imm & 0x10) ? ((double *) &a)[0] * ((double *) &b)[0] : 0; + double d1 = (imm & 0x20) ? ((double *) &a)[1] * ((double *) &b)[1] : 0; +#endif + __m128d tmp = _mm_set_pd(d1, d0); +#endif + // Sum the products +#if defined(__aarch64__) || defined(_M_ARM64) + double sum = vpaddd_f64(vreinterpretq_f64_m128d(tmp)); +#else + double sum = *((double *) &tmp) + *(((double *) &tmp) + 1); +#endif + // Conditionally store the sum + const __m128d sumMask = + _mm_castsi128_pd(_mm_set_epi64x(bit1Mask, bit0Mask)); + __m128d res = _mm_and_pd(_mm_set_pd1(sum), sumMask); + return res; +} + // Conditionally multiply the packed single-precision (32-bit) floating-point // elements in a and b using the high 4 bits in imm8, sum the four products, // and conditionally store the sum in dst using the low 4 bits of imm. 
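+// Informal usage sketch: _mm_dp_ps(a, b, 0xFF) broadcasts the full dot product
+// a0*b0 + a1*b1 + a2*b2 + a3*b3 to every lane, while _mm_dp_ps(a, b, 0x71)
+// sums only the first three products (high nibble 0x7) and writes the sum to
+// lane 0 alone (low nibble 0x1), zeroing the remaining lanes.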
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_dp_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_dp_ps FORCE_INLINE __m128 _mm_dp_ps(__m128 a, __m128 b, const int imm) { -#if defined(__aarch64__) + float32x4_t elementwise_prod = _mm_mul_ps(a, b); + +#if defined(__aarch64__) || defined(_M_ARM64) /* shortcuts */ if (imm == 0xFF) { - return _mm_set1_ps(vaddvq_f32(_mm_mul_ps(a, b))); + return _mm_set1_ps(vaddvq_f32(elementwise_prod)); } - if (imm == 0x7F) { - float32x4_t m = _mm_mul_ps(a, b); - m[3] = 0; - return _mm_set1_ps(vaddvq_f32(m)); + + if ((imm & 0x0F) == 0x0F) { + if (!(imm & (1 << 4))) + elementwise_prod = vsetq_lane_f32(0.0f, elementwise_prod, 0); + if (!(imm & (1 << 5))) + elementwise_prod = vsetq_lane_f32(0.0f, elementwise_prod, 1); + if (!(imm & (1 << 6))) + elementwise_prod = vsetq_lane_f32(0.0f, elementwise_prod, 2); + if (!(imm & (1 << 7))) + elementwise_prod = vsetq_lane_f32(0.0f, elementwise_prod, 3); + + return _mm_set1_ps(vaddvq_f32(elementwise_prod)); } #endif - float s = 0, c = 0; - float32x4_t f32a = vreinterpretq_f32_m128(a); - float32x4_t f32b = vreinterpretq_f32_m128(b); + float s = 0.0f; - /* To improve the accuracy of floating-point summation, Kahan algorithm - * is used for each operation. - */ if (imm & (1 << 4)) - _sse2neon_kadd_f32(&s, &c, f32a[0] * f32b[0]); + s += vgetq_lane_f32(elementwise_prod, 0); if (imm & (1 << 5)) - _sse2neon_kadd_f32(&s, &c, f32a[1] * f32b[1]); + s += vgetq_lane_f32(elementwise_prod, 1); if (imm & (1 << 6)) - _sse2neon_kadd_f32(&s, &c, f32a[2] * f32b[2]); + s += vgetq_lane_f32(elementwise_prod, 2); if (imm & (1 << 7)) - _sse2neon_kadd_f32(&s, &c, f32a[3] * f32b[3]); - s += c; - - float32x4_t res = { - (imm & 0x1) ? s : 0, - (imm & 0x2) ? s : 0, - (imm & 0x4) ? s : 0, - (imm & 0x8) ? s : 0, + s += vgetq_lane_f32(elementwise_prod, 3); + + const float32_t res[4] = { + (imm & 0x1) ? s : 0.0f, + (imm & 0x2) ? s : 0.0f, + (imm & 0x4) ? s : 0.0f, + (imm & 0x8) ? s : 0.0f, }; - return vreinterpretq_m128_f32(res); + return vreinterpretq_m128_f32(vld1q_f32(res)); } -// Extracts the selected signed or unsigned 32-bit integer from a and zero -// extends. +// Extract a 32-bit integer from a, selected with imm8, and store the result in +// dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_extract_epi32 // FORCE_INLINE int _mm_extract_epi32(__m128i a, __constrange(0,4) int imm) #define _mm_extract_epi32(a, imm) \ vgetq_lane_s32(vreinterpretq_s32_m128i(a), (imm)) -// Extracts the selected signed or unsigned 64-bit integer from a and zero -// extends. +// Extract a 64-bit integer from a, selected with imm8, and store the result in +// dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_extract_epi64 // FORCE_INLINE __int64 _mm_extract_epi64(__m128i a, __constrange(0,2) int imm) #define _mm_extract_epi64(a, imm) \ vgetq_lane_s64(vreinterpretq_s64_m128i(a), (imm)) -// Extracts the selected signed or unsigned 8-bit integer from a and zero -// extends. -// FORCE_INLINE int _mm_extract_epi8(__m128i a, __constrange(0,16) int imm) -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_extract_epi8 +// Extract an 8-bit integer from a, selected with imm8, and store the result in +// the lower element of dst. 
FORCE_INLINE int _mm_extract_epi8(__m128i a, +// __constrange(0,16) int imm) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_extract_epi8 #define _mm_extract_epi8(a, imm) vgetq_lane_u8(vreinterpretq_u8_m128i(a), (imm)) // Extracts the selected single-precision (32-bit) floating-point from a. @@ -7270,10 +7096,10 @@ FORCE_INLINE __m128 _mm_dp_ps(__m128 a, __m128 b, const int imm) // Round the packed double-precision (64-bit) floating-point elements in a down // to an integer value, and store the results as packed double-precision // floating-point elements in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_floor_pd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_floor_pd FORCE_INLINE __m128d _mm_floor_pd(__m128d a) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128d_f64(vrndmq_f64(vreinterpretq_f64_m128d(a))); #else double *f = (double *) &a; @@ -7284,10 +7110,11 @@ FORCE_INLINE __m128d _mm_floor_pd(__m128d a) // Round the packed single-precision (32-bit) floating-point elements in a down // to an integer value, and store the results as packed single-precision // floating-point elements in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_floor_ps +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_floor_ps FORCE_INLINE __m128 _mm_floor_ps(__m128 a) { -#if defined(__aarch64__) +#if (defined(__aarch64__) || defined(_M_ARM64)) || \ + defined(__ARM_FEATURE_DIRECTED_ROUNDING) return vreinterpretq_m128_f32(vrndmq_f32(vreinterpretq_f32_m128(a))); #else float *f = (float *) &a; @@ -7299,7 +7126,7 @@ FORCE_INLINE __m128 _mm_floor_ps(__m128 a) // an integer value, store the result as a double-precision floating-point // element in the lower element of dst, and copy the upper element from a to the // upper element of dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_floor_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_floor_sd FORCE_INLINE __m128d _mm_floor_sd(__m128d a, __m128d b) { return _mm_move_sd(a, _mm_floor_pd(b)); @@ -7309,79 +7136,65 @@ FORCE_INLINE __m128d _mm_floor_sd(__m128d a, __m128d b) // an integer value, store the result as a single-precision floating-point // element in the lower element of dst, and copy the upper 3 packed elements // from a to the upper elements of dst. -// -// dst[31:0] := FLOOR(b[31:0]) -// dst[127:32] := a[127:32] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_floor_ss +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_floor_ss FORCE_INLINE __m128 _mm_floor_ss(__m128 a, __m128 b) { return _mm_move_ss(a, _mm_floor_ps(b)); } -// Inserts the least significant 32 bits of b into the selected 32-bit integer -// of a. +// Copy a to dst, and insert the 32-bit integer i into dst at the location +// specified by imm8. 
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_insert_epi32 // FORCE_INLINE __m128i _mm_insert_epi32(__m128i a, int b, // __constrange(0,4) int imm) -#define _mm_insert_epi32(a, b, imm) \ - __extension__({ \ - vreinterpretq_m128i_s32( \ - vsetq_lane_s32((b), vreinterpretq_s32_m128i(a), (imm))); \ - }) +#define _mm_insert_epi32(a, b, imm) \ + vreinterpretq_m128i_s32( \ + vsetq_lane_s32((b), vreinterpretq_s32_m128i(a), (imm))) -// Inserts the least significant 64 bits of b into the selected 64-bit integer -// of a. +// Copy a to dst, and insert the 64-bit integer i into dst at the location +// specified by imm8. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_insert_epi64 // FORCE_INLINE __m128i _mm_insert_epi64(__m128i a, __int64 b, // __constrange(0,2) int imm) -#define _mm_insert_epi64(a, b, imm) \ - __extension__({ \ - vreinterpretq_m128i_s64( \ - vsetq_lane_s64((b), vreinterpretq_s64_m128i(a), (imm))); \ - }) +#define _mm_insert_epi64(a, b, imm) \ + vreinterpretq_m128i_s64( \ + vsetq_lane_s64((b), vreinterpretq_s64_m128i(a), (imm))) -// Inserts the least significant 8 bits of b into the selected 8-bit integer -// of a. +// Copy a to dst, and insert the lower 8-bit integer from i into dst at the +// location specified by imm8. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_insert_epi8 // FORCE_INLINE __m128i _mm_insert_epi8(__m128i a, int b, // __constrange(0,16) int imm) -#define _mm_insert_epi8(a, b, imm) \ - __extension__({ \ - vreinterpretq_m128i_s8( \ - vsetq_lane_s8((b), vreinterpretq_s8_m128i(a), (imm))); \ - }) +#define _mm_insert_epi8(a, b, imm) \ + vreinterpretq_m128i_s8(vsetq_lane_s8((b), vreinterpretq_s8_m128i(a), (imm))) // Copy a to tmp, then insert a single-precision (32-bit) floating-point // element from b into tmp using the control in imm8. Store tmp to dst using // the mask in imm8 (elements are zeroed out when the corresponding bit is set). -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=insert_ps -#define _mm_insert_ps(a, b, imm8) \ - __extension__({ \ - float32x4_t tmp1 = vsetq_lane_f32(vgetq_lane_f32(b, (imm >> 6) & 0x3), \ - vreinterpretq_f32_m128(a), 0); \ - float32x4_t tmp2 = \ - vsetq_lane_f32(vgetq_lane_f32(tmp1, 0), vreinterpretq_f32_m128(a), \ - ((imm >> 4) & 0x3)); \ - const uint32_t data[4] = {((imm8) & (1 << 0)) ? UINT32_MAX : 0, \ - ((imm8) & (1 << 1)) ? UINT32_MAX : 0, \ - ((imm8) & (1 << 2)) ? UINT32_MAX : 0, \ - ((imm8) & (1 << 3)) ? UINT32_MAX : 0}; \ - uint32x4_t mask = vld1q_u32(data); \ - float32x4_t all_zeros = vdupq_n_f32(0); \ - \ - vreinterpretq_m128_f32( \ - vbslq_f32(mask, all_zeros, vreinterpretq_f32_m128(tmp2))); \ - }) - -// epi versions of min/max -// Computes the pariwise maximums of the four signed 32-bit integer values of a -// and b. -// -// A 128-bit parameter that can be defined with the following equations: -// r0 := (a0 > b0) ? a0 : b0 -// r1 := (a1 > b1) ? a1 : b1 -// r2 := (a2 > b2) ? a2 : b2 -// r3 := (a3 > b3) ? 
a3 : b3 -// -// https://msdn.microsoft.com/en-us/library/vstudio/bb514055(v=vs.100).aspx +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=insert_ps +#define _mm_insert_ps(a, b, imm8) \ + _sse2neon_define2( \ + __m128, a, b, \ + float32x4_t tmp1 = \ + vsetq_lane_f32(vgetq_lane_f32(_b, (imm8 >> 6) & 0x3), \ + vreinterpretq_f32_m128(_a), 0); \ + float32x4_t tmp2 = \ + vsetq_lane_f32(vgetq_lane_f32(tmp1, 0), \ + vreinterpretq_f32_m128(_a), ((imm8 >> 4) & 0x3)); \ + const uint32_t data[4] = \ + _sse2neon_init(((imm8) & (1 << 0)) ? UINT32_MAX : 0, \ + ((imm8) & (1 << 1)) ? UINT32_MAX : 0, \ + ((imm8) & (1 << 2)) ? UINT32_MAX : 0, \ + ((imm8) & (1 << 3)) ? UINT32_MAX : 0); \ + uint32x4_t mask = vld1q_u32(data); \ + float32x4_t all_zeros = vdupq_n_f32(0); \ + \ + _sse2neon_return(vreinterpretq_m128_f32( \ + vbslq_f32(mask, all_zeros, vreinterpretq_f32_m128(tmp2))));) + +// Compare packed signed 32-bit integers in a and b, and store packed maximum +// values in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_max_epi32 FORCE_INLINE __m128i _mm_max_epi32(__m128i a, __m128i b) { return vreinterpretq_m128i_s32( @@ -7390,7 +7203,7 @@ FORCE_INLINE __m128i _mm_max_epi32(__m128i a, __m128i b) // Compare packed signed 8-bit integers in a and b, and store packed maximum // values in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_max_epi8 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_max_epi8 FORCE_INLINE __m128i _mm_max_epi8(__m128i a, __m128i b) { return vreinterpretq_m128i_s8( @@ -7399,7 +7212,7 @@ FORCE_INLINE __m128i _mm_max_epi8(__m128i a, __m128i b) // Compare packed unsigned 16-bit integers in a and b, and store packed maximum // values in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_max_epu16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_max_epu16 FORCE_INLINE __m128i _mm_max_epu16(__m128i a, __m128i b) { return vreinterpretq_m128i_u16( @@ -7408,23 +7221,16 @@ FORCE_INLINE __m128i _mm_max_epu16(__m128i a, __m128i b) // Compare packed unsigned 32-bit integers in a and b, and store packed maximum // values in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_max_epu32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_max_epu32 FORCE_INLINE __m128i _mm_max_epu32(__m128i a, __m128i b) { return vreinterpretq_m128i_u32( vmaxq_u32(vreinterpretq_u32_m128i(a), vreinterpretq_u32_m128i(b))); } -// Computes the pariwise minima of the four signed 32-bit integer values of a -// and b. -// -// A 128-bit parameter that can be defined with the following equations: -// r0 := (a0 < b0) ? a0 : b0 -// r1 := (a1 < b1) ? a1 : b1 -// r2 := (a2 < b2) ? a2 : b2 -// r3 := (a3 < b3) ? a3 : b3 -// -// https://msdn.microsoft.com/en-us/library/vstudio/bb531476(v=vs.100).aspx +// Compare packed signed 32-bit integers in a and b, and store packed minimum +// values in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_min_epi32 FORCE_INLINE __m128i _mm_min_epi32(__m128i a, __m128i b) { return vreinterpretq_m128i_s32( @@ -7433,7 +7239,7 @@ FORCE_INLINE __m128i _mm_min_epi32(__m128i a, __m128i b) // Compare packed signed 8-bit integers in a and b, and store packed minimum // values in dst. 
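+// Informal example of the signed comparison, assuming two lanes hold 0xFF and
+// 0x01: _mm_min_epi8 picks 0xFF (interpreted as -1), whereas an unsigned
+// minimum such as _mm_min_epu8 would pick 0x01.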
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_min_epi8 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_min_epi8 FORCE_INLINE __m128i _mm_min_epi8(__m128i a, __m128i b) { return vreinterpretq_m128i_s8( @@ -7442,7 +7248,7 @@ FORCE_INLINE __m128i _mm_min_epi8(__m128i a, __m128i b) // Compare packed unsigned 16-bit integers in a and b, and store packed minimum // values in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_min_epu16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_min_epu16 FORCE_INLINE __m128i _mm_min_epu16(__m128i a, __m128i b) { return vreinterpretq_m128i_u16( @@ -7451,7 +7257,7 @@ FORCE_INLINE __m128i _mm_min_epu16(__m128i a, __m128i b) // Compare packed unsigned 32-bit integers in a and b, and store packed minimum // values in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_max_epu32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_max_epu32 FORCE_INLINE __m128i _mm_min_epu32(__m128i a, __m128i b) { return vreinterpretq_m128i_u32( @@ -7460,29 +7266,22 @@ FORCE_INLINE __m128i _mm_min_epu32(__m128i a, __m128i b) // Horizontally compute the minimum amongst the packed unsigned 16-bit integers // in a, store the minimum and index in dst, and zero the remaining bits in dst. -// -// index[2:0] := 0 -// min[15:0] := a[15:0] -// FOR j := 0 to 7 -// i := j*16 -// IF a[i+15:i] < min[15:0] -// index[2:0] := j -// min[15:0] := a[i+15:i] -// FI -// ENDFOR -// dst[15:0] := min[15:0] -// dst[18:16] := index[2:0] -// dst[127:19] := 0 -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_minpos_epu16 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_minpos_epu16 FORCE_INLINE __m128i _mm_minpos_epu16(__m128i a) { __m128i dst; uint16_t min, idx = 0; +#if defined(__aarch64__) || defined(_M_ARM64) // Find the minimum value -#if defined(__aarch64__) min = vminvq_u16(vreinterpretq_u16_m128i(a)); + + // Get the index of the minimum value + static const uint16_t idxv[] = {0, 1, 2, 3, 4, 5, 6, 7}; + uint16x8_t minv = vdupq_n_u16(min); + uint16x8_t cmeq = vceqq_u16(minv, vreinterpretq_u16_m128i(a)); + idx = vminvq_u16(vornq_u16(vld1q_u16(idxv), cmeq)); #else + // Find the minimum value __m64 tmp; tmp = vreinterpret_m64_u16( vmin_u16(vget_low_u16(vreinterpretq_u16_m128i(a)), @@ -7492,7 +7291,6 @@ FORCE_INLINE __m128i _mm_minpos_epu16(__m128i a) tmp = vreinterpret_m64_u16( vpmin_u16(vreinterpret_u16_m64(tmp), vreinterpret_u16_m64(tmp))); min = vget_lane_u16(vreinterpret_u16_m64(tmp), 0); -#endif // Get the index of the minimum value int i; for (i = 0; i < 8; i++) { @@ -7502,6 +7300,7 @@ FORCE_INLINE __m128i _mm_minpos_epu16(__m128i a) } a = _mm_srli_si128(a, 2); } +#endif // Generate result dst = _mm_setzero_si128(); dst = vreinterpretq_m128i_u16( @@ -7511,11 +7310,97 @@ FORCE_INLINE __m128i _mm_minpos_epu16(__m128i a) return dst; } +// Compute the sum of absolute differences (SADs) of quadruplets of unsigned +// 8-bit integers in a compared to those in b, and store the 16-bit results in +// dst. Eight SADs are performed using one quadruplet from b and eight +// quadruplets from a. One quadruplet is selected from b starting at on the +// offset specified in imm8. Eight quadruplets are formed from sequential 8-bit +// integers selected from a starting at the offset specified in imm8. 
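+// Informal sketch, assuming imm8 == 0: the selected quadruplet is b[0..3] and
+// the eight 16-bit results are
+//   dst[k] = |a[k+0]-b[0]| + |a[k+1]-b[1]| + |a[k+2]-b[2]| + |a[k+3]-b[3]|
+// for k = 0..7; imm8[1:0] selects which aligned quadruplet of b is used, and
+// setting bit 2 of imm8 starts the a windows at a[4] instead of a[0].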
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_mpsadbw_epu8 +FORCE_INLINE __m128i _mm_mpsadbw_epu8(__m128i a, __m128i b, const int imm) +{ + uint8x16_t _a, _b; + + switch (imm & 0x4) { + case 0: + // do nothing + _a = vreinterpretq_u8_m128i(a); + break; + case 4: + _a = vreinterpretq_u8_u32(vextq_u32(vreinterpretq_u32_m128i(a), + vreinterpretq_u32_m128i(a), 1)); + break; + default: +#if defined(__GNUC__) || defined(__clang__) + __builtin_unreachable(); +#elif defined(_MSC_VER) + __assume(0); +#endif + break; + } + + switch (imm & 0x3) { + case 0: + _b = vreinterpretq_u8_u32( + vdupq_n_u32(vgetq_lane_u32(vreinterpretq_u32_m128i(b), 0))); + break; + case 1: + _b = vreinterpretq_u8_u32( + vdupq_n_u32(vgetq_lane_u32(vreinterpretq_u32_m128i(b), 1))); + break; + case 2: + _b = vreinterpretq_u8_u32( + vdupq_n_u32(vgetq_lane_u32(vreinterpretq_u32_m128i(b), 2))); + break; + case 3: + _b = vreinterpretq_u8_u32( + vdupq_n_u32(vgetq_lane_u32(vreinterpretq_u32_m128i(b), 3))); + break; + default: +#if defined(__GNUC__) || defined(__clang__) + __builtin_unreachable(); +#elif defined(_MSC_VER) + __assume(0); +#endif + break; + } + + int16x8_t c04, c15, c26, c37; + uint8x8_t low_b = vget_low_u8(_b); + c04 = vreinterpretq_s16_u16(vabdl_u8(vget_low_u8(_a), low_b)); + uint8x16_t _a_1 = vextq_u8(_a, _a, 1); + c15 = vreinterpretq_s16_u16(vabdl_u8(vget_low_u8(_a_1), low_b)); + uint8x16_t _a_2 = vextq_u8(_a, _a, 2); + c26 = vreinterpretq_s16_u16(vabdl_u8(vget_low_u8(_a_2), low_b)); + uint8x16_t _a_3 = vextq_u8(_a, _a, 3); + c37 = vreinterpretq_s16_u16(vabdl_u8(vget_low_u8(_a_3), low_b)); +#if defined(__aarch64__) || defined(_M_ARM64) + // |0|4|2|6| + c04 = vpaddq_s16(c04, c26); + // |1|5|3|7| + c15 = vpaddq_s16(c15, c37); + + int32x4_t trn1_c = + vtrn1q_s32(vreinterpretq_s32_s16(c04), vreinterpretq_s32_s16(c15)); + int32x4_t trn2_c = + vtrn2q_s32(vreinterpretq_s32_s16(c04), vreinterpretq_s32_s16(c15)); + return vreinterpretq_m128i_s16(vpaddq_s16(vreinterpretq_s16_s32(trn1_c), + vreinterpretq_s16_s32(trn2_c))); +#else + int16x4_t c01, c23, c45, c67; + c01 = vpadd_s16(vget_low_s16(c04), vget_low_s16(c15)); + c23 = vpadd_s16(vget_low_s16(c26), vget_low_s16(c37)); + c45 = vpadd_s16(vget_high_s16(c04), vget_high_s16(c15)); + c67 = vpadd_s16(vget_high_s16(c26), vget_high_s16(c37)); + + return vreinterpretq_m128i_s16( + vcombine_s16(vpadd_s16(c01, c23), vpadd_s16(c45, c67))); +#endif +} + // Multiply the low signed 32-bit integers from each packed 64-bit element in // a and b, and store the signed 64-bit results in dst. -// -// r0 := (int64_t)(int32_t)a0 * (int64_t)(int32_t)b0 -// r1 := (int64_t)(int32_t)a2 * (int64_t)(int32_t)b2 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_mul_epi32 FORCE_INLINE __m128i _mm_mul_epi32(__m128i a, __m128i b) { // vmull_s32 upcasts instead of masking, so we downcast. @@ -7524,26 +7409,18 @@ FORCE_INLINE __m128i _mm_mul_epi32(__m128i a, __m128i b) return vreinterpretq_m128i_s64(vmull_s32(a_lo, b_lo)); } -// Multiplies the 4 signed or unsigned 32-bit integers from a by the 4 signed or -// unsigned 32-bit integers from b. -// https://msdn.microsoft.com/en-us/library/vstudio/bb531409(v=vs.100).aspx +// Multiply the packed 32-bit integers in a and b, producing intermediate 64-bit +// integers, and store the low 32 bits of the intermediate integers in dst. 
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_mullo_epi32 FORCE_INLINE __m128i _mm_mullo_epi32(__m128i a, __m128i b) { return vreinterpretq_m128i_s32( vmulq_s32(vreinterpretq_s32_m128i(a), vreinterpretq_s32_m128i(b))); } -// Packs the 8 unsigned 32-bit integers from a and b into unsigned 16-bit -// integers and saturates. -// -// r0 := UnsignedSaturate(a0) -// r1 := UnsignedSaturate(a1) -// r2 := UnsignedSaturate(a2) -// r3 := UnsignedSaturate(a3) -// r4 := UnsignedSaturate(b0) -// r5 := UnsignedSaturate(b1) -// r6 := UnsignedSaturate(b2) -// r7 := UnsignedSaturate(b3) +// Convert packed signed 32-bit integers from a and b to packed 16-bit integers +// using unsigned saturation, and store the results in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_packus_epi32 FORCE_INLINE __m128i _mm_packus_epi32(__m128i a, __m128i b) { return vreinterpretq_m128i_u16( @@ -7554,10 +7431,10 @@ FORCE_INLINE __m128i _mm_packus_epi32(__m128i a, __m128i b) // Round the packed double-precision (64-bit) floating-point elements in a using // the rounding parameter, and store the results as packed double-precision // floating-point elements in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_round_pd -FORCE_INLINE __m128d _mm_round_pd(__m128d a, int rounding) +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_round_pd +FORCE_INLINE_OPTNONE __m128d _mm_round_pd(__m128d a, int rounding) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) switch (rounding) { case (_MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC): return vreinterpretq_m128d_f64(vrndnq_f64(vreinterpretq_f64_m128d(a))); @@ -7624,9 +7501,10 @@ FORCE_INLINE __m128d _mm_round_pd(__m128d a, int rounding) // the rounding parameter, and store the results as packed single-precision // floating-point elements in dst. // software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_round_ps -FORCE_INLINE __m128 _mm_round_ps(__m128 a, int rounding) +FORCE_INLINE_OPTNONE __m128 _mm_round_ps(__m128 a, int rounding) { -#if defined(__aarch64__) +#if (defined(__aarch64__) || defined(_M_ARM64)) || \ + defined(__ARM_FEATURE_DIRECTED_ROUNDING) switch (rounding) { case (_MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC): return vreinterpretq_m128_f32(vrndnq_f32(vreinterpretq_f32_m128(a))); @@ -7683,7 +7561,7 @@ FORCE_INLINE __m128 _mm_round_ps(__m128 a, int rounding) // the rounding parameter, store the result as a double-precision floating-point // element in the lower element of dst, and copy the upper element from a to the // upper element of dst. 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_round_sd +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_round_sd FORCE_INLINE __m128d _mm_round_sd(__m128d a, __m128d b, int rounding) { return _mm_move_sd(a, _mm_round_pd(b, rounding)); @@ -7703,7 +7581,7 @@ FORCE_INLINE __m128d _mm_round_sd(__m128d a, __m128d b, int rounding) // (_MM_FROUND_TO_ZERO |_MM_FROUND_NO_EXC) // truncate, and suppress // exceptions _MM_FROUND_CUR_DIRECTION // use MXCSR.RC; see // _MM_SET_ROUNDING_MODE -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_round_ss +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_round_ss FORCE_INLINE __m128 _mm_round_ss(__m128 a, __m128 b, int rounding) { return _mm_move_ss(a, _mm_round_ps(b, rounding)); @@ -7712,10 +7590,7 @@ FORCE_INLINE __m128 _mm_round_ss(__m128 a, __m128 b, int rounding) // Load 128-bits of integer data from memory into dst using a non-temporal // memory hint. mem_addr must be aligned on a 16-byte boundary or a // general-protection exception may be generated. -// -// dst[127:0] := MEM[mem_addr+127:mem_addr] -// -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_stream_load_si128 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_stream_load_si128 FORCE_INLINE __m128i _mm_stream_load_si128(__m128i *p) { #if __has_builtin(__builtin_nontemporal_store) @@ -7727,16 +7602,16 @@ FORCE_INLINE __m128i _mm_stream_load_si128(__m128i *p) // Compute the bitwise NOT of a and then AND with a 128-bit vector containing // all 1's, and return 1 if the result is zero, otherwise return 0. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_test_all_ones +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_test_all_ones FORCE_INLINE int _mm_test_all_ones(__m128i a) { - return (uint64_t)(vgetq_lane_s64(a, 0) & vgetq_lane_s64(a, 1)) == + return (uint64_t) (vgetq_lane_s64(a, 0) & vgetq_lane_s64(a, 1)) == ~(uint64_t) 0; } // Compute the bitwise AND of 128 bits (representing integer data) in a and // mask, and return 1 if the result is zero, otherwise return 0. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_test_all_zeros +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_test_all_zeros FORCE_INLINE int _mm_test_all_zeros(__m128i a, __m128i mask) { int64x2_t a_and_mask = @@ -7749,27 +7624,34 @@ FORCE_INLINE int _mm_test_all_zeros(__m128i a, __m128i mask) // the bitwise NOT of a and then AND with mask, and set CF to 1 if the result is // zero, otherwise set CF to 0. Return 1 if both the ZF and CF values are zero, // otherwise return 0. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=mm_test_mix_ones_zero +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=mm_test_mix_ones_zero +// Note: Argument names may be wrong in the Intel intrinsics guide. 
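+// Informal usage sketch: _mm_test_mix_ones_zeros(a, mask) returns 1 only when
+// the bits selected by mask contain both ones and zeros in a, i.e. when
+// neither (a & mask) nor ((~a) & mask) is all zero.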
FORCE_INLINE int _mm_test_mix_ones_zeros(__m128i a, __m128i mask) { - uint64x2_t zf = - vandq_u64(vreinterpretq_u64_m128i(mask), vreinterpretq_u64_m128i(a)); - uint64x2_t cf = - vbicq_u64(vreinterpretq_u64_m128i(mask), vreinterpretq_u64_m128i(a)); - uint64x2_t result = vandq_u64(zf, cf); - return !(vgetq_lane_u64(result, 0) | vgetq_lane_u64(result, 1)); + uint64x2_t v = vreinterpretq_u64_m128i(a); + uint64x2_t m = vreinterpretq_u64_m128i(mask); + + // find ones (set-bits) and zeros (clear-bits) under clip mask + uint64x2_t ones = vandq_u64(m, v); + uint64x2_t zeros = vbicq_u64(m, v); + + // If both 128-bit variables are populated (non-zero) then return 1. + // For comparison purposes, first compact each var down to 32-bits. + uint32x2_t reduced = vpmax_u32(vqmovn_u64(ones), vqmovn_u64(zeros)); + + // if folding minimum is non-zero then both vars must be non-zero + return (vget_lane_u32(vpmin_u32(reduced, reduced), 0) != 0); } // Compute the bitwise AND of 128 bits (representing integer data) in a and b, // and set ZF to 1 if the result is zero, otherwise set ZF to 0. Compute the // bitwise NOT of a and then AND with b, and set CF to 1 if the result is zero, // otherwise set CF to 0. Return the CF value. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_testc_si128 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_testc_si128 FORCE_INLINE int _mm_testc_si128(__m128i a, __m128i b) { int64x2_t s64 = - vandq_s64(vreinterpretq_s64_s32(vmvnq_s32(vreinterpretq_s32_m128i(a))), - vreinterpretq_s64_m128i(b)); + vbicq_s64(vreinterpretq_s64_m128i(b), vreinterpretq_s64_m128i(a)); return !(vgetq_lane_s64(s64, 0) | vgetq_lane_s64(s64, 1)); } @@ -7778,14 +7660,14 @@ FORCE_INLINE int _mm_testc_si128(__m128i a, __m128i b) // bitwise NOT of a and then AND with b, and set CF to 1 if the result is zero, // otherwise set CF to 0. Return 1 if both the ZF and CF values are zero, // otherwise return 0. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_testnzc_si128 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_testnzc_si128 #define _mm_testnzc_si128(a, b) _mm_test_mix_ones_zeros(a, b) // Compute the bitwise AND of 128 bits (representing integer data) in a and b, // and set ZF to 1 if the result is zero, otherwise set ZF to 0. Compute the // bitwise NOT of a and then AND with b, and set CF to 1 if the result is zero, // otherwise set CF to 0. Return the ZF value. 
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_testz_si128 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_testz_si128 FORCE_INLINE int _mm_testz_si128(__m128i a, __m128i b) { int64x2_t s64 = @@ -7795,11 +7677,768 @@ FORCE_INLINE int _mm_testz_si128(__m128i a, __m128i b) /* SSE4.2 */ +static const uint16_t ALIGN_STRUCT(16) _sse2neon_cmpestr_mask16b[8] = { + 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, +}; +static const uint8_t ALIGN_STRUCT(16) _sse2neon_cmpestr_mask8b[16] = { + 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, + 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, +}; + +/* specify the source data format */ +#define _SIDD_UBYTE_OPS 0x00 /* unsigned 8-bit characters */ +#define _SIDD_UWORD_OPS 0x01 /* unsigned 16-bit characters */ +#define _SIDD_SBYTE_OPS 0x02 /* signed 8-bit characters */ +#define _SIDD_SWORD_OPS 0x03 /* signed 16-bit characters */ + +/* specify the comparison operation */ +#define _SIDD_CMP_EQUAL_ANY 0x00 /* compare equal any: strchr */ +#define _SIDD_CMP_RANGES 0x04 /* compare ranges */ +#define _SIDD_CMP_EQUAL_EACH 0x08 /* compare equal each: strcmp */ +#define _SIDD_CMP_EQUAL_ORDERED 0x0C /* compare equal ordered */ + +/* specify the polarity */ +#define _SIDD_POSITIVE_POLARITY 0x00 +#define _SIDD_MASKED_POSITIVE_POLARITY 0x20 +#define _SIDD_NEGATIVE_POLARITY 0x10 /* negate results */ +#define _SIDD_MASKED_NEGATIVE_POLARITY \ + 0x30 /* negate results only before end of string */ + +/* specify the output selection in _mm_cmpXstri */ +#define _SIDD_LEAST_SIGNIFICANT 0x00 +#define _SIDD_MOST_SIGNIFICANT 0x40 + +/* specify the output selection in _mm_cmpXstrm */ +#define _SIDD_BIT_MASK 0x00 +#define _SIDD_UNIT_MASK 0x40 + +/* Pattern Matching for C macros. + * https://github.com/pfultz2/Cloak/wiki/C-Preprocessor-tricks,-tips,-and-idioms + */ + +/* catenate */ +#define SSE2NEON_PRIMITIVE_CAT(a, ...) a##__VA_ARGS__ +#define SSE2NEON_CAT(a, b) SSE2NEON_PRIMITIVE_CAT(a, b) + +#define SSE2NEON_IIF(c) SSE2NEON_PRIMITIVE_CAT(SSE2NEON_IIF_, c) +/* run the 2nd parameter */ +#define SSE2NEON_IIF_0(t, ...) __VA_ARGS__ +/* run the 1st parameter */ +#define SSE2NEON_IIF_1(t, ...) t + +#define SSE2NEON_COMPL(b) SSE2NEON_PRIMITIVE_CAT(SSE2NEON_COMPL_, b) +#define SSE2NEON_COMPL_0 1 +#define SSE2NEON_COMPL_1 0 + +#define SSE2NEON_DEC(x) SSE2NEON_PRIMITIVE_CAT(SSE2NEON_DEC_, x) +#define SSE2NEON_DEC_1 0 +#define SSE2NEON_DEC_2 1 +#define SSE2NEON_DEC_3 2 +#define SSE2NEON_DEC_4 3 +#define SSE2NEON_DEC_5 4 +#define SSE2NEON_DEC_6 5 +#define SSE2NEON_DEC_7 6 +#define SSE2NEON_DEC_8 7 +#define SSE2NEON_DEC_9 8 +#define SSE2NEON_DEC_10 9 +#define SSE2NEON_DEC_11 10 +#define SSE2NEON_DEC_12 11 +#define SSE2NEON_DEC_13 12 +#define SSE2NEON_DEC_14 13 +#define SSE2NEON_DEC_15 14 +#define SSE2NEON_DEC_16 15 + +/* detection */ +#define SSE2NEON_CHECK_N(x, n, ...) n +#define SSE2NEON_CHECK(...) SSE2NEON_CHECK_N(__VA_ARGS__, 0, ) +#define SSE2NEON_PROBE(x) x, 1, + +#define SSE2NEON_NOT(x) SSE2NEON_CHECK(SSE2NEON_PRIMITIVE_CAT(SSE2NEON_NOT_, x)) +#define SSE2NEON_NOT_0 SSE2NEON_PROBE(~) + +#define SSE2NEON_BOOL(x) SSE2NEON_COMPL(SSE2NEON_NOT(x)) +#define SSE2NEON_IF(c) SSE2NEON_IIF(SSE2NEON_BOOL(c)) + +#define SSE2NEON_EAT(...) +#define SSE2NEON_EXPAND(...) __VA_ARGS__ +#define SSE2NEON_WHEN(c) SSE2NEON_IF(c)(SSE2NEON_EXPAND, SSE2NEON_EAT) + +/* recursion */ +/* deferred expression */ +#define SSE2NEON_EMPTY() +#define SSE2NEON_DEFER(id) id SSE2NEON_EMPTY() +#define SSE2NEON_OBSTRUCT(...) 
__VA_ARGS__ SSE2NEON_DEFER(SSE2NEON_EMPTY)() +#define SSE2NEON_EXPAND(...) __VA_ARGS__ + +#define SSE2NEON_EVAL(...) \ + SSE2NEON_EVAL1(SSE2NEON_EVAL1(SSE2NEON_EVAL1(__VA_ARGS__))) +#define SSE2NEON_EVAL1(...) \ + SSE2NEON_EVAL2(SSE2NEON_EVAL2(SSE2NEON_EVAL2(__VA_ARGS__))) +#define SSE2NEON_EVAL2(...) \ + SSE2NEON_EVAL3(SSE2NEON_EVAL3(SSE2NEON_EVAL3(__VA_ARGS__))) +#define SSE2NEON_EVAL3(...) __VA_ARGS__ + +#define SSE2NEON_REPEAT(count, macro, ...) \ + SSE2NEON_WHEN(count) \ + (SSE2NEON_OBSTRUCT(SSE2NEON_REPEAT_INDIRECT)()( \ + SSE2NEON_DEC(count), macro, \ + __VA_ARGS__) SSE2NEON_OBSTRUCT(macro)(SSE2NEON_DEC(count), \ + __VA_ARGS__)) +#define SSE2NEON_REPEAT_INDIRECT() SSE2NEON_REPEAT + +#define SSE2NEON_SIZE_OF_byte 8 +#define SSE2NEON_NUMBER_OF_LANES_byte 16 +#define SSE2NEON_SIZE_OF_word 16 +#define SSE2NEON_NUMBER_OF_LANES_word 8 + +#define SSE2NEON_COMPARE_EQUAL_THEN_FILL_LANE(i, type) \ + mtx[i] = vreinterpretq_m128i_##type(vceqq_##type( \ + vdupq_n_##type(vgetq_lane_##type(vreinterpretq_##type##_m128i(b), i)), \ + vreinterpretq_##type##_m128i(a))); + +#define SSE2NEON_FILL_LANE(i, type) \ + vec_b[i] = \ + vdupq_n_##type(vgetq_lane_##type(vreinterpretq_##type##_m128i(b), i)); + +#define PCMPSTR_RANGES(a, b, mtx, data_type_prefix, type_prefix, size, \ + number_of_lanes, byte_or_word) \ + do { \ + SSE2NEON_CAT( \ + data_type_prefix, \ + SSE2NEON_CAT(size, \ + SSE2NEON_CAT(x, SSE2NEON_CAT(number_of_lanes, _t)))) \ + vec_b[number_of_lanes]; \ + __m128i mask = SSE2NEON_IIF(byte_or_word)( \ + vreinterpretq_m128i_u16(vdupq_n_u16(0xff)), \ + vreinterpretq_m128i_u32(vdupq_n_u32(0xffff))); \ + SSE2NEON_EVAL(SSE2NEON_REPEAT(number_of_lanes, SSE2NEON_FILL_LANE, \ + SSE2NEON_CAT(type_prefix, size))) \ + for (int i = 0; i < number_of_lanes; i++) { \ + mtx[i] = SSE2NEON_CAT(vreinterpretq_m128i_u, \ + size)(SSE2NEON_CAT(vbslq_u, size)( \ + SSE2NEON_CAT(vreinterpretq_u, \ + SSE2NEON_CAT(size, _m128i))(mask), \ + SSE2NEON_CAT(vcgeq_, SSE2NEON_CAT(type_prefix, size))( \ + vec_b[i], \ + SSE2NEON_CAT( \ + vreinterpretq_, \ + SSE2NEON_CAT(type_prefix, \ + SSE2NEON_CAT(size, _m128i(a))))), \ + SSE2NEON_CAT(vcleq_, SSE2NEON_CAT(type_prefix, size))( \ + vec_b[i], \ + SSE2NEON_CAT( \ + vreinterpretq_, \ + SSE2NEON_CAT(type_prefix, \ + SSE2NEON_CAT(size, _m128i(a))))))); \ + } \ + } while (0) + +#define PCMPSTR_EQ(a, b, mtx, size, number_of_lanes) \ + do { \ + SSE2NEON_EVAL(SSE2NEON_REPEAT(number_of_lanes, \ + SSE2NEON_COMPARE_EQUAL_THEN_FILL_LANE, \ + SSE2NEON_CAT(u, size))) \ + } while (0) + +#define SSE2NEON_CMP_EQUAL_ANY_IMPL(type) \ + static int _sse2neon_cmp_##type##_equal_any(__m128i a, int la, __m128i b, \ + int lb) \ + { \ + __m128i mtx[16]; \ + PCMPSTR_EQ(a, b, mtx, SSE2NEON_CAT(SSE2NEON_SIZE_OF_, type), \ + SSE2NEON_CAT(SSE2NEON_NUMBER_OF_LANES_, type)); \ + return SSE2NEON_CAT( \ + _sse2neon_aggregate_equal_any_, \ + SSE2NEON_CAT( \ + SSE2NEON_CAT(SSE2NEON_SIZE_OF_, type), \ + SSE2NEON_CAT(x, SSE2NEON_CAT(SSE2NEON_NUMBER_OF_LANES_, \ + type))))(la, lb, mtx); \ + } + +#define SSE2NEON_CMP_RANGES_IMPL(type, data_type, us, byte_or_word) \ + static int _sse2neon_cmp_##us##type##_ranges(__m128i a, int la, __m128i b, \ + int lb) \ + { \ + __m128i mtx[16]; \ + PCMPSTR_RANGES( \ + a, b, mtx, data_type, us, SSE2NEON_CAT(SSE2NEON_SIZE_OF_, type), \ + SSE2NEON_CAT(SSE2NEON_NUMBER_OF_LANES_, type), byte_or_word); \ + return SSE2NEON_CAT( \ + _sse2neon_aggregate_ranges_, \ + SSE2NEON_CAT( \ + SSE2NEON_CAT(SSE2NEON_SIZE_OF_, type), \ + SSE2NEON_CAT(x, SSE2NEON_CAT(SSE2NEON_NUMBER_OF_LANES_, \ + type))))(la, 
lb, mtx); \ + } + +#define SSE2NEON_CMP_EQUAL_ORDERED_IMPL(type) \ + static int _sse2neon_cmp_##type##_equal_ordered(__m128i a, int la, \ + __m128i b, int lb) \ + { \ + __m128i mtx[16]; \ + PCMPSTR_EQ(a, b, mtx, SSE2NEON_CAT(SSE2NEON_SIZE_OF_, type), \ + SSE2NEON_CAT(SSE2NEON_NUMBER_OF_LANES_, type)); \ + return SSE2NEON_CAT( \ + _sse2neon_aggregate_equal_ordered_, \ + SSE2NEON_CAT( \ + SSE2NEON_CAT(SSE2NEON_SIZE_OF_, type), \ + SSE2NEON_CAT(x, \ + SSE2NEON_CAT(SSE2NEON_NUMBER_OF_LANES_, type))))( \ + SSE2NEON_CAT(SSE2NEON_NUMBER_OF_LANES_, type), la, lb, mtx); \ + } + +static int _sse2neon_aggregate_equal_any_8x16(int la, int lb, __m128i mtx[16]) +{ + int res = 0; + int m = (1 << la) - 1; + uint8x8_t vec_mask = vld1_u8(_sse2neon_cmpestr_mask8b); + uint8x8_t t_lo = vtst_u8(vdup_n_u8(m & 0xff), vec_mask); + uint8x8_t t_hi = vtst_u8(vdup_n_u8(m >> 8), vec_mask); + uint8x16_t vec = vcombine_u8(t_lo, t_hi); + for (int j = 0; j < lb; j++) { + mtx[j] = vreinterpretq_m128i_u8( + vandq_u8(vec, vreinterpretq_u8_m128i(mtx[j]))); + mtx[j] = vreinterpretq_m128i_u8( + vshrq_n_u8(vreinterpretq_u8_m128i(mtx[j]), 7)); + int tmp = _sse2neon_vaddvq_u8(vreinterpretq_u8_m128i(mtx[j])) ? 1 : 0; + res |= (tmp << j); + } + return res; +} + +static int _sse2neon_aggregate_equal_any_16x8(int la, int lb, __m128i mtx[16]) +{ + int res = 0; + int m = (1 << la) - 1; + uint16x8_t vec = + vtstq_u16(vdupq_n_u16(m), vld1q_u16(_sse2neon_cmpestr_mask16b)); + for (int j = 0; j < lb; j++) { + mtx[j] = vreinterpretq_m128i_u16( + vandq_u16(vec, vreinterpretq_u16_m128i(mtx[j]))); + mtx[j] = vreinterpretq_m128i_u16( + vshrq_n_u16(vreinterpretq_u16_m128i(mtx[j]), 15)); + int tmp = _sse2neon_vaddvq_u16(vreinterpretq_u16_m128i(mtx[j])) ? 1 : 0; + res |= (tmp << j); + } + return res; +} + +/* clang-format off */ +#define SSE2NEON_GENERATE_CMP_EQUAL_ANY(prefix) \ + prefix##IMPL(byte) \ + prefix##IMPL(word) +/* clang-format on */ + +SSE2NEON_GENERATE_CMP_EQUAL_ANY(SSE2NEON_CMP_EQUAL_ANY_) + +static int _sse2neon_aggregate_ranges_16x8(int la, int lb, __m128i mtx[16]) +{ + int res = 0; + int m = (1 << la) - 1; + uint16x8_t vec = + vtstq_u16(vdupq_n_u16(m), vld1q_u16(_sse2neon_cmpestr_mask16b)); + for (int j = 0; j < lb; j++) { + mtx[j] = vreinterpretq_m128i_u16( + vandq_u16(vec, vreinterpretq_u16_m128i(mtx[j]))); + mtx[j] = vreinterpretq_m128i_u16( + vshrq_n_u16(vreinterpretq_u16_m128i(mtx[j]), 15)); + __m128i tmp = vreinterpretq_m128i_u32( + vshrq_n_u32(vreinterpretq_u32_m128i(mtx[j]), 16)); + uint32x4_t vec_res = vandq_u32(vreinterpretq_u32_m128i(mtx[j]), + vreinterpretq_u32_m128i(tmp)); +#if defined(__aarch64__) || defined(_M_ARM64) + int t = vaddvq_u32(vec_res) ? 
1 : 0; +#else + uint64x2_t sumh = vpaddlq_u32(vec_res); + int t = vgetq_lane_u64(sumh, 0) + vgetq_lane_u64(sumh, 1); +#endif + res |= (t << j); + } + return res; +} + +static int _sse2neon_aggregate_ranges_8x16(int la, int lb, __m128i mtx[16]) +{ + int res = 0; + int m = (1 << la) - 1; + uint8x8_t vec_mask = vld1_u8(_sse2neon_cmpestr_mask8b); + uint8x8_t t_lo = vtst_u8(vdup_n_u8(m & 0xff), vec_mask); + uint8x8_t t_hi = vtst_u8(vdup_n_u8(m >> 8), vec_mask); + uint8x16_t vec = vcombine_u8(t_lo, t_hi); + for (int j = 0; j < lb; j++) { + mtx[j] = vreinterpretq_m128i_u8( + vandq_u8(vec, vreinterpretq_u8_m128i(mtx[j]))); + mtx[j] = vreinterpretq_m128i_u8( + vshrq_n_u8(vreinterpretq_u8_m128i(mtx[j]), 7)); + __m128i tmp = vreinterpretq_m128i_u16( + vshrq_n_u16(vreinterpretq_u16_m128i(mtx[j]), 8)); + uint16x8_t vec_res = vandq_u16(vreinterpretq_u16_m128i(mtx[j]), + vreinterpretq_u16_m128i(tmp)); + int t = _sse2neon_vaddvq_u16(vec_res) ? 1 : 0; + res |= (t << j); + } + return res; +} + +#define SSE2NEON_CMP_RANGES_IS_BYTE 1 +#define SSE2NEON_CMP_RANGES_IS_WORD 0 + +/* clang-format off */ +#define SSE2NEON_GENERATE_CMP_RANGES(prefix) \ + prefix##IMPL(byte, uint, u, prefix##IS_BYTE) \ + prefix##IMPL(byte, int, s, prefix##IS_BYTE) \ + prefix##IMPL(word, uint, u, prefix##IS_WORD) \ + prefix##IMPL(word, int, s, prefix##IS_WORD) +/* clang-format on */ + +SSE2NEON_GENERATE_CMP_RANGES(SSE2NEON_CMP_RANGES_) + +#undef SSE2NEON_CMP_RANGES_IS_BYTE +#undef SSE2NEON_CMP_RANGES_IS_WORD + +static int _sse2neon_cmp_byte_equal_each(__m128i a, int la, __m128i b, int lb) +{ + uint8x16_t mtx = + vceqq_u8(vreinterpretq_u8_m128i(a), vreinterpretq_u8_m128i(b)); + int m0 = (la < lb) ? 0 : ((1 << la) - (1 << lb)); + int m1 = 0x10000 - (1 << la); + int tb = 0x10000 - (1 << lb); + uint8x8_t vec_mask, vec0_lo, vec0_hi, vec1_lo, vec1_hi; + uint8x8_t tmp_lo, tmp_hi, res_lo, res_hi; + vec_mask = vld1_u8(_sse2neon_cmpestr_mask8b); + vec0_lo = vtst_u8(vdup_n_u8(m0), vec_mask); + vec0_hi = vtst_u8(vdup_n_u8(m0 >> 8), vec_mask); + vec1_lo = vtst_u8(vdup_n_u8(m1), vec_mask); + vec1_hi = vtst_u8(vdup_n_u8(m1 >> 8), vec_mask); + tmp_lo = vtst_u8(vdup_n_u8(tb), vec_mask); + tmp_hi = vtst_u8(vdup_n_u8(tb >> 8), vec_mask); + + res_lo = vbsl_u8(vec0_lo, vdup_n_u8(0), vget_low_u8(mtx)); + res_hi = vbsl_u8(vec0_hi, vdup_n_u8(0), vget_high_u8(mtx)); + res_lo = vbsl_u8(vec1_lo, tmp_lo, res_lo); + res_hi = vbsl_u8(vec1_hi, tmp_hi, res_hi); + res_lo = vand_u8(res_lo, vec_mask); + res_hi = vand_u8(res_hi, vec_mask); + + int res = _sse2neon_vaddv_u8(res_lo) + (_sse2neon_vaddv_u8(res_hi) << 8); + return res; +} + +static int _sse2neon_cmp_word_equal_each(__m128i a, int la, __m128i b, int lb) +{ + uint16x8_t mtx = + vceqq_u16(vreinterpretq_u16_m128i(a), vreinterpretq_u16_m128i(b)); + int m0 = (la < lb) ? 
0 : ((1 << la) - (1 << lb)); + int m1 = 0x100 - (1 << la); + int tb = 0x100 - (1 << lb); + uint16x8_t vec_mask = vld1q_u16(_sse2neon_cmpestr_mask16b); + uint16x8_t vec0 = vtstq_u16(vdupq_n_u16(m0), vec_mask); + uint16x8_t vec1 = vtstq_u16(vdupq_n_u16(m1), vec_mask); + uint16x8_t tmp = vtstq_u16(vdupq_n_u16(tb), vec_mask); + mtx = vbslq_u16(vec0, vdupq_n_u16(0), mtx); + mtx = vbslq_u16(vec1, tmp, mtx); + mtx = vandq_u16(mtx, vec_mask); + return _sse2neon_vaddvq_u16(mtx); +} + +#define SSE2NEON_AGGREGATE_EQUAL_ORDER_IS_UBYTE 1 +#define SSE2NEON_AGGREGATE_EQUAL_ORDER_IS_UWORD 0 + +#define SSE2NEON_AGGREGATE_EQUAL_ORDER_IMPL(size, number_of_lanes, data_type) \ + static int _sse2neon_aggregate_equal_ordered_##size##x##number_of_lanes( \ + int bound, int la, int lb, __m128i mtx[16]) \ + { \ + int res = 0; \ + int m1 = SSE2NEON_IIF(data_type)(0x10000, 0x100) - (1 << la); \ + uint##size##x8_t vec_mask = SSE2NEON_IIF(data_type)( \ + vld1_u##size(_sse2neon_cmpestr_mask##size##b), \ + vld1q_u##size(_sse2neon_cmpestr_mask##size##b)); \ + uint##size##x##number_of_lanes##_t vec1 = SSE2NEON_IIF(data_type)( \ + vcombine_u##size(vtst_u##size(vdup_n_u##size(m1), vec_mask), \ + vtst_u##size(vdup_n_u##size(m1 >> 8), vec_mask)), \ + vtstq_u##size(vdupq_n_u##size(m1), vec_mask)); \ + uint##size##x##number_of_lanes##_t vec_minusone = vdupq_n_u##size(-1); \ + uint##size##x##number_of_lanes##_t vec_zero = vdupq_n_u##size(0); \ + for (int j = 0; j < lb; j++) { \ + mtx[j] = vreinterpretq_m128i_u##size(vbslq_u##size( \ + vec1, vec_minusone, vreinterpretq_u##size##_m128i(mtx[j]))); \ + } \ + for (int j = lb; j < bound; j++) { \ + mtx[j] = vreinterpretq_m128i_u##size( \ + vbslq_u##size(vec1, vec_minusone, vec_zero)); \ + } \ + unsigned SSE2NEON_IIF(data_type)(char, short) *ptr = \ + (unsigned SSE2NEON_IIF(data_type)(char, short) *) mtx; \ + for (int i = 0; i < bound; i++) { \ + int val = 1; \ + for (int j = 0, k = i; j < bound - i && k < bound; j++, k++) \ + val &= ptr[k * bound + j]; \ + res += val << i; \ + } \ + return res; \ + } + +/* clang-format off */ +#define SSE2NEON_GENERATE_AGGREGATE_EQUAL_ORDER(prefix) \ + prefix##IMPL(8, 16, prefix##IS_UBYTE) \ + prefix##IMPL(16, 8, prefix##IS_UWORD) +/* clang-format on */ + +SSE2NEON_GENERATE_AGGREGATE_EQUAL_ORDER(SSE2NEON_AGGREGATE_EQUAL_ORDER_) + +#undef SSE2NEON_AGGREGATE_EQUAL_ORDER_IS_UBYTE +#undef SSE2NEON_AGGREGATE_EQUAL_ORDER_IS_UWORD + +/* clang-format off */ +#define SSE2NEON_GENERATE_CMP_EQUAL_ORDERED(prefix) \ + prefix##IMPL(byte) \ + prefix##IMPL(word) +/* clang-format on */ + +SSE2NEON_GENERATE_CMP_EQUAL_ORDERED(SSE2NEON_CMP_EQUAL_ORDERED_) + +#define SSE2NEON_CMPESTR_LIST \ + _(CMP_UBYTE_EQUAL_ANY, cmp_byte_equal_any) \ + _(CMP_UWORD_EQUAL_ANY, cmp_word_equal_any) \ + _(CMP_SBYTE_EQUAL_ANY, cmp_byte_equal_any) \ + _(CMP_SWORD_EQUAL_ANY, cmp_word_equal_any) \ + _(CMP_UBYTE_RANGES, cmp_ubyte_ranges) \ + _(CMP_UWORD_RANGES, cmp_uword_ranges) \ + _(CMP_SBYTE_RANGES, cmp_sbyte_ranges) \ + _(CMP_SWORD_RANGES, cmp_sword_ranges) \ + _(CMP_UBYTE_EQUAL_EACH, cmp_byte_equal_each) \ + _(CMP_UWORD_EQUAL_EACH, cmp_word_equal_each) \ + _(CMP_SBYTE_EQUAL_EACH, cmp_byte_equal_each) \ + _(CMP_SWORD_EQUAL_EACH, cmp_word_equal_each) \ + _(CMP_UBYTE_EQUAL_ORDERED, cmp_byte_equal_ordered) \ + _(CMP_UWORD_EQUAL_ORDERED, cmp_word_equal_ordered) \ + _(CMP_SBYTE_EQUAL_ORDERED, cmp_byte_equal_ordered) \ + _(CMP_SWORD_EQUAL_ORDERED, cmp_word_equal_ordered) + +enum { +#define _(name, func_suffix) name, + SSE2NEON_CMPESTR_LIST +#undef _ +}; +typedef int (*cmpestr_func_t)(__m128i a, int 
la, __m128i b, int lb); +static cmpestr_func_t _sse2neon_cmpfunc_table[] = { +#define _(name, func_suffix) _sse2neon_##func_suffix, + SSE2NEON_CMPESTR_LIST +#undef _ +}; + +FORCE_INLINE int _sse2neon_sido_negative(int res, int lb, int imm8, int bound) +{ + switch (imm8 & 0x30) { + case _SIDD_NEGATIVE_POLARITY: + res ^= 0xffffffff; + break; + case _SIDD_MASKED_NEGATIVE_POLARITY: + res ^= (1 << lb) - 1; + break; + default: + break; + } + + return res & ((bound == 8) ? 0xFF : 0xFFFF); +} + +FORCE_INLINE int _sse2neon_clz(unsigned int x) +{ +#ifdef _MSC_VER + unsigned long cnt = 0; + if (_BitScanReverse(&cnt, x)) + return 31 - cnt; + return 32; +#else + return x != 0 ? __builtin_clz(x) : 32; +#endif +} + +FORCE_INLINE int _sse2neon_ctz(unsigned int x) +{ +#ifdef _MSC_VER + unsigned long cnt = 0; + if (_BitScanForward(&cnt, x)) + return cnt; + return 32; +#else + return x != 0 ? __builtin_ctz(x) : 32; +#endif +} + +FORCE_INLINE int _sse2neon_ctzll(unsigned long long x) +{ +#ifdef _MSC_VER + unsigned long cnt; +#if defined(SSE2NEON_HAS_BITSCAN64) + if (_BitScanForward64(&cnt, x)) + return (int) (cnt); +#else + if (_BitScanForward(&cnt, (unsigned long) (x))) + return (int) cnt; + if (_BitScanForward(&cnt, (unsigned long) (x >> 32))) + return (int) (cnt + 32); +#endif /* SSE2NEON_HAS_BITSCAN64 */ + return 64; +#else /* assume GNU compatible compilers */ + return x != 0 ? __builtin_ctzll(x) : 64; +#endif +} + +#define SSE2NEON_MIN(x, y) (x) < (y) ? (x) : (y) + +#define SSE2NEON_CMPSTR_SET_UPPER(var, imm) \ + const int var = (imm & 0x01) ? 8 : 16 + +#define SSE2NEON_CMPESTRX_LEN_PAIR(a, b, la, lb) \ + int tmp1 = la ^ (la >> 31); \ + la = tmp1 - (la >> 31); \ + int tmp2 = lb ^ (lb >> 31); \ + lb = tmp2 - (lb >> 31); \ + la = SSE2NEON_MIN(la, bound); \ + lb = SSE2NEON_MIN(lb, bound) + +// Compare all pairs of character in string a and b, +// then aggregate the result. +// As the only difference of PCMPESTR* and PCMPISTR* is the way to calculate the +// length of string, we use SSE2NEON_CMP{I,E}STRX_GET_LEN to get the length of +// string a and b. +#define SSE2NEON_COMP_AGG(a, b, la, lb, imm8, IE) \ + SSE2NEON_CMPSTR_SET_UPPER(bound, imm8); \ + SSE2NEON_##IE##_LEN_PAIR(a, b, la, lb); \ + int r2 = (_sse2neon_cmpfunc_table[imm8 & 0x0f])(a, la, b, lb); \ + r2 = _sse2neon_sido_negative(r2, lb, imm8, bound) + +#define SSE2NEON_CMPSTR_GENERATE_INDEX(r2, bound, imm8) \ + return (r2 == 0) ? bound \ + : ((imm8 & 0x40) ? 
(31 - _sse2neon_clz(r2)) \ + : _sse2neon_ctz(r2)) + +#define SSE2NEON_CMPSTR_GENERATE_MASK(dst) \ + __m128i dst = vreinterpretq_m128i_u8(vdupq_n_u8(0)); \ + if (imm8 & 0x40) { \ + if (bound == 8) { \ + uint16x8_t tmp = vtstq_u16(vdupq_n_u16(r2), \ + vld1q_u16(_sse2neon_cmpestr_mask16b)); \ + dst = vreinterpretq_m128i_u16(vbslq_u16( \ + tmp, vdupq_n_u16(-1), vreinterpretq_u16_m128i(dst))); \ + } else { \ + uint8x16_t vec_r2 = \ + vcombine_u8(vdup_n_u8(r2), vdup_n_u8(r2 >> 8)); \ + uint8x16_t tmp = \ + vtstq_u8(vec_r2, vld1q_u8(_sse2neon_cmpestr_mask8b)); \ + dst = vreinterpretq_m128i_u8( \ + vbslq_u8(tmp, vdupq_n_u8(-1), vreinterpretq_u8_m128i(dst))); \ + } \ + } else { \ + if (bound == 16) { \ + dst = vreinterpretq_m128i_u16( \ + vsetq_lane_u16(r2 & 0xffff, vreinterpretq_u16_m128i(dst), 0)); \ + } else { \ + dst = vreinterpretq_m128i_u8( \ + vsetq_lane_u8(r2 & 0xff, vreinterpretq_u8_m128i(dst), 0)); \ + } \ + } \ + return dst + +// Compare packed strings in a and b with lengths la and lb using the control +// in imm8, and returns 1 if b did not contain a null character and the +// resulting mask was zero, and 0 otherwise. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpestra +FORCE_INLINE int _mm_cmpestra(__m128i a, + int la, + __m128i b, + int lb, + const int imm8) +{ + int lb_cpy = lb; + SSE2NEON_COMP_AGG(a, b, la, lb, imm8, CMPESTRX); + return !r2 & (lb_cpy > bound); +} + +// Compare packed strings in a and b with lengths la and lb using the control in +// imm8, and returns 1 if the resulting mask was non-zero, and 0 otherwise. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpestrc +FORCE_INLINE int _mm_cmpestrc(__m128i a, + int la, + __m128i b, + int lb, + const int imm8) +{ + SSE2NEON_COMP_AGG(a, b, la, lb, imm8, CMPESTRX); + return r2 != 0; +} + +// Compare packed strings in a and b with lengths la and lb using the control +// in imm8, and store the generated index in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpestri +FORCE_INLINE int _mm_cmpestri(__m128i a, + int la, + __m128i b, + int lb, + const int imm8) +{ + SSE2NEON_COMP_AGG(a, b, la, lb, imm8, CMPESTRX); + SSE2NEON_CMPSTR_GENERATE_INDEX(r2, bound, imm8); +} + +// Compare packed strings in a and b with lengths la and lb using the control +// in imm8, and store the generated mask in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpestrm +FORCE_INLINE __m128i +_mm_cmpestrm(__m128i a, int la, __m128i b, int lb, const int imm8) +{ + SSE2NEON_COMP_AGG(a, b, la, lb, imm8, CMPESTRX); + SSE2NEON_CMPSTR_GENERATE_MASK(dst); +} + +// Compare packed strings in a and b with lengths la and lb using the control in +// imm8, and returns bit 0 of the resulting bit mask. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpestro +FORCE_INLINE int _mm_cmpestro(__m128i a, + int la, + __m128i b, + int lb, + const int imm8) +{ + SSE2NEON_COMP_AGG(a, b, la, lb, imm8, CMPESTRX); + return r2 & 1; +} + +// Compare packed strings in a and b with lengths la and lb using the control in +// imm8, and returns 1 if any character in a was null, and 0 otherwise. 
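+//
+// Illustrative use of the explicit-length variants (names and lengths here
+// are hypothetical; `buf` must be at least 16 readable bytes):
+//
+//     const char set_chars[16] = ",;: ";  /* 4-character delimiter set */
+//     __m128i set = _mm_loadu_si128((const __m128i *) set_chars);
+//     __m128i txt = _mm_loadu_si128((const __m128i *) buf);
+//     int idx = _mm_cmpestri(set, 4, txt, 16,
+//                            _SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY |
+//                                _SIDD_LEAST_SIGNIFICANT);
+//     /* idx is the position of the first delimiter in buf, or 16 if none
+//        of the 16 bytes matches. */
+//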
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpestrs +FORCE_INLINE int _mm_cmpestrs(__m128i a, + int la, + __m128i b, + int lb, + const int imm8) +{ + (void) a; + (void) b; + (void) lb; + SSE2NEON_CMPSTR_SET_UPPER(bound, imm8); + return la <= (bound - 1); +} + +// Compare packed strings in a and b with lengths la and lb using the control in +// imm8, and returns 1 if any character in b was null, and 0 otherwise. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpestrz +FORCE_INLINE int _mm_cmpestrz(__m128i a, + int la, + __m128i b, + int lb, + const int imm8) +{ + (void) a; + (void) b; + (void) la; + SSE2NEON_CMPSTR_SET_UPPER(bound, imm8); + return lb <= (bound - 1); +} + +#define SSE2NEON_CMPISTRX_LENGTH(str, len, imm8) \ + do { \ + if (imm8 & 0x01) { \ + uint16x8_t equal_mask_##str = \ + vceqq_u16(vreinterpretq_u16_m128i(str), vdupq_n_u16(0)); \ + uint8x8_t res_##str = vshrn_n_u16(equal_mask_##str, 4); \ + uint64_t matches_##str = \ + vget_lane_u64(vreinterpret_u64_u8(res_##str), 0); \ + len = _sse2neon_ctzll(matches_##str) >> 3; \ + } else { \ + uint16x8_t equal_mask_##str = vreinterpretq_u16_u8( \ + vceqq_u8(vreinterpretq_u8_m128i(str), vdupq_n_u8(0))); \ + uint8x8_t res_##str = vshrn_n_u16(equal_mask_##str, 4); \ + uint64_t matches_##str = \ + vget_lane_u64(vreinterpret_u64_u8(res_##str), 0); \ + len = _sse2neon_ctzll(matches_##str) >> 2; \ + } \ + } while (0) + +#define SSE2NEON_CMPISTRX_LEN_PAIR(a, b, la, lb) \ + int la, lb; \ + do { \ + SSE2NEON_CMPISTRX_LENGTH(a, la, imm8); \ + SSE2NEON_CMPISTRX_LENGTH(b, lb, imm8); \ + } while (0) + +// Compare packed strings with implicit lengths in a and b using the control in +// imm8, and returns 1 if b did not contain a null character and the resulting +// mask was zero, and 0 otherwise. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpistra +FORCE_INLINE int _mm_cmpistra(__m128i a, __m128i b, const int imm8) +{ + SSE2NEON_COMP_AGG(a, b, la, lb, imm8, CMPISTRX); + return !r2 & (lb >= bound); +} + +// Compare packed strings with implicit lengths in a and b using the control in +// imm8, and returns 1 if the resulting mask was non-zero, and 0 otherwise. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpistrc +FORCE_INLINE int _mm_cmpistrc(__m128i a, __m128i b, const int imm8) +{ + SSE2NEON_COMP_AGG(a, b, la, lb, imm8, CMPISTRX); + return r2 != 0; +} + +// Compare packed strings with implicit lengths in a and b using the control in +// imm8, and store the generated index in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpistri +FORCE_INLINE int _mm_cmpistri(__m128i a, __m128i b, const int imm8) +{ + SSE2NEON_COMP_AGG(a, b, la, lb, imm8, CMPISTRX); + SSE2NEON_CMPSTR_GENERATE_INDEX(r2, bound, imm8); +} + +// Compare packed strings with implicit lengths in a and b using the control in +// imm8, and store the generated mask in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpistrm +FORCE_INLINE __m128i _mm_cmpistrm(__m128i a, __m128i b, const int imm8) +{ + SSE2NEON_COMP_AGG(a, b, la, lb, imm8, CMPISTRX); + SSE2NEON_CMPSTR_GENERATE_MASK(dst); +} + +// Compare packed strings with implicit lengths in a and b using the control in +// imm8, and returns bit 0 of the resulting bit mask. 
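+//
+// Illustrative use of the implicit-length variants with a range comparison
+// (names are hypothetical; `txt` must be a NUL-terminated, 16-byte-readable
+// buffer):
+//
+//     const char digits[16] = "09";  /* one range: '0' through '9' */
+//     int idx = _mm_cmpistri(_mm_loadu_si128((const __m128i *) digits),
+//                            _mm_loadu_si128((const __m128i *) txt),
+//                            _SIDD_UBYTE_OPS | _SIDD_CMP_RANGES |
+//                                _SIDD_LEAST_SIGNIFICANT);
+//     /* idx is the position of the first decimal digit in txt, or 16 if
+//        the chunk contains none before its terminating NUL. */
+//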
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpistro +FORCE_INLINE int _mm_cmpistro(__m128i a, __m128i b, const int imm8) +{ + SSE2NEON_COMP_AGG(a, b, la, lb, imm8, CMPISTRX); + return r2 & 1; +} + +// Compare packed strings with implicit lengths in a and b using the control in +// imm8, and returns 1 if any character in a was null, and 0 otherwise. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpistrs +FORCE_INLINE int _mm_cmpistrs(__m128i a, __m128i b, const int imm8) +{ + (void) b; + SSE2NEON_CMPSTR_SET_UPPER(bound, imm8); + int la; + SSE2NEON_CMPISTRX_LENGTH(a, la, imm8); + return la <= (bound - 1); +} + +// Compare packed strings with implicit lengths in a and b using the control in +// imm8, and returns 1 if any character in b was null, and 0 otherwise. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpistrz +FORCE_INLINE int _mm_cmpistrz(__m128i a, __m128i b, const int imm8) +{ + (void) a; + SSE2NEON_CMPSTR_SET_UPPER(bound, imm8); + int lb; + SSE2NEON_CMPISTRX_LENGTH(b, lb, imm8); + return lb <= (bound - 1); +} + // Compares the 2 signed 64-bit integers in a and the 2 signed 64-bit integers // in b for greater than. FORCE_INLINE __m128i _mm_cmpgt_epi64(__m128i a, __m128i b) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) return vreinterpretq_m128i_u64( vcgtq_s64(vreinterpretq_s64_m128i(a), vreinterpretq_s64_m128i(b))); #else @@ -7810,14 +8449,17 @@ FORCE_INLINE __m128i _mm_cmpgt_epi64(__m128i a, __m128i b) } // Starting with the initial value in crc, accumulates a CRC32 value for -// unsigned 16-bit integer v. -// https://msdn.microsoft.com/en-us/library/bb531411(v=vs.100) +// unsigned 16-bit integer v, and stores the result in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_crc32_u16 FORCE_INLINE uint32_t _mm_crc32_u16(uint32_t crc, uint16_t v) { #if defined(__aarch64__) && defined(__ARM_FEATURE_CRC32) __asm__ __volatile__("crc32ch %w[c], %w[c], %w[v]\n\t" : [c] "+r"(crc) : [v] "r"(v)); +#elif ((__ARM_ARCH == 8) && defined(__ARM_FEATURE_CRC32)) || \ + (defined(_M_ARM64) && !defined(__clang__)) + crc = __crc32ch(crc, v); #else crc = _mm_crc32_u8(crc, v & 0xff); crc = _mm_crc32_u8(crc, (v >> 8) & 0xff); @@ -7826,14 +8468,17 @@ FORCE_INLINE uint32_t _mm_crc32_u16(uint32_t crc, uint16_t v) } // Starting with the initial value in crc, accumulates a CRC32 value for -// unsigned 32-bit integer v. -// https://msdn.microsoft.com/en-us/library/bb531394(v=vs.100) +// unsigned 32-bit integer v, and stores the result in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_crc32_u32 FORCE_INLINE uint32_t _mm_crc32_u32(uint32_t crc, uint32_t v) { #if defined(__aarch64__) && defined(__ARM_FEATURE_CRC32) __asm__ __volatile__("crc32cw %w[c], %w[c], %w[v]\n\t" : [c] "+r"(crc) : [v] "r"(v)); +#elif ((__ARM_ARCH == 8) && defined(__ARM_FEATURE_CRC32)) || \ + (defined(_M_ARM64) && !defined(__clang__)) + crc = __crc32cw(crc, v); #else crc = _mm_crc32_u16(crc, v & 0xffff); crc = _mm_crc32_u16(crc, (v >> 16) & 0xffff); @@ -7842,47 +8487,87 @@ FORCE_INLINE uint32_t _mm_crc32_u32(uint32_t crc, uint32_t v) } // Starting with the initial value in crc, accumulates a CRC32 value for -// unsigned 64-bit integer v. -// https://msdn.microsoft.com/en-us/library/bb514033(v=vs.100) +// unsigned 64-bit integer v, and stores the result in dst. 
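+//
+// Illustrative CRC-32C of a byte buffer (`buf` and `len` are hypothetical,
+// and the sketch assumes `len` is a multiple of 8):
+//
+//     uint64_t crc = 0xFFFFFFFF;
+//     for (size_t i = 0; i < len; i += 8) {
+//         uint64_t chunk;
+//         memcpy(&chunk, buf + i, 8);  /* unaligned-safe load */
+//         crc = _mm_crc32_u64(crc, chunk);
+//     }
+//     uint32_t crc32c = (uint32_t) crc ^ 0xFFFFFFFF;
+//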
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_crc32_u64
 FORCE_INLINE uint64_t _mm_crc32_u64(uint64_t crc, uint64_t v)
 {
 #if defined(__aarch64__) && defined(__ARM_FEATURE_CRC32)
     __asm__ __volatile__("crc32cx %w[c], %w[c], %x[v]\n\t"
                          : [c] "+r"(crc)
                          : [v] "r"(v));
+#elif (defined(_M_ARM64) && !defined(__clang__))
+    crc = __crc32cd((uint32_t) crc, v);
 #else
-    crc = _mm_crc32_u32((uint32_t)(crc), v & 0xffffffff);
-    crc = _mm_crc32_u32((uint32_t)(crc), (v >> 32) & 0xffffffff);
+    crc = _mm_crc32_u32((uint32_t) (crc), v & 0xffffffff);
+    crc = _mm_crc32_u32((uint32_t) (crc), (v >> 32) & 0xffffffff);
 #endif
     return crc;
 }
 
 // Starting with the initial value in crc, accumulates a CRC32 value for
-// unsigned 8-bit integer v.
-// https://msdn.microsoft.com/en-us/library/bb514036(v=vs.100)
+// unsigned 8-bit integer v, and stores the result in dst.
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_crc32_u8
 FORCE_INLINE uint32_t _mm_crc32_u8(uint32_t crc, uint8_t v)
 {
 #if defined(__aarch64__) && defined(__ARM_FEATURE_CRC32)
     __asm__ __volatile__("crc32cb %w[c], %w[c], %w[v]\n\t"
                          : [c] "+r"(crc)
                          : [v] "r"(v));
+#elif ((__ARM_ARCH == 8) && defined(__ARM_FEATURE_CRC32)) || \
+    (defined(_M_ARM64) && !defined(__clang__))
+    crc = __crc32cb(crc, v);
 #else
     crc ^= v;
-    for (int bit = 0; bit < 8; bit++) {
-        if (crc & 1)
-            crc = (crc >> 1) ^ UINT32_C(0x82f63b78);
-        else
-            crc = (crc >> 1);
-    }
+#if defined(__ARM_FEATURE_CRYPTO)
+    // Adapted from: https://mary.rs/lab/crc32/
+    // Barrett reduction
+    uint64x2_t orig =
+        vcombine_u64(vcreate_u64((uint64_t) (crc) << 24), vcreate_u64(0x0));
+    uint64x2_t tmp = orig;
+
+    // Polynomial P(x) of CRC32C
+    uint64_t p = 0x105EC76F1;
+    // Barrett Reduction (in bit-reflected form) constant mu_{64} = \lfloor
+    // 2^{64} / P(x) \rfloor = 0x11f91caf6
+    uint64_t mu = 0x1dea713f1;
+
+    // Multiply by mu_{64}
+    tmp = _sse2neon_vmull_p64(vget_low_u64(tmp), vcreate_u64(mu));
+    // Divide by 2^{64} (mask away the unnecessary bits)
+    tmp =
+        vandq_u64(tmp, vcombine_u64(vcreate_u64(0xFFFFFFFF), vcreate_u64(0x0)));
+    // Multiply by P(x) (shifted left by 1 for alignment reasons)
+    tmp = _sse2neon_vmull_p64(vget_low_u64(tmp), vcreate_u64(p));
+    // Subtract original from result
+    tmp = veorq_u64(tmp, orig);
+
+    // Extract the 'lower' (in bit-reflected sense) 32 bits
+    crc = vgetq_lane_u32(vreinterpretq_u32_u64(tmp), 1);
+#else  // Fall back to the generic table lookup approach
+    // Adapted from: https://create.stephan-brumme.com/crc32/
+    // Apply the half-byte algorithm for a good trade-off between
+    // performance and lookup-table size.
+
+    // The lookup table just needs to store every 16th entry
+    // of the standard look-up table.
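+    // Each of the two steps below folds one nibble: the low 4 bits of the
+    // running CRC select a table entry and the CRC is shifted right by 4,
+    // so together they consume the byte XOR-ed into `crc` above.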
+ static const uint32_t crc32_half_byte_tbl[] = { + 0x00000000, 0x105ec76f, 0x20bd8ede, 0x30e349b1, 0x417b1dbc, 0x5125dad3, + 0x61c69362, 0x7198540d, 0x82f63b78, 0x92a8fc17, 0xa24bb5a6, 0xb21572c9, + 0xc38d26c4, 0xd3d3e1ab, 0xe330a81a, 0xf36e6f75, + }; + + crc = (crc >> 4) ^ crc32_half_byte_tbl[crc & 0x0F]; + crc = (crc >> 4) ^ crc32_half_byte_tbl[crc & 0x0F]; +#endif #endif return crc; } /* AES */ -#if !defined(__ARM_FEATURE_CRYPTO) +#if !defined(__ARM_FEATURE_CRYPTO) && (!defined(_M_ARM64) || defined(__clang__)) /* clang-format off */ -#define SSE2NEON_AES_DATA(w) \ +#define SSE2NEON_AES_SBOX(w) \ { \ w(0x63), w(0x7c), w(0x77), w(0x7b), w(0xf2), w(0x6b), w(0x6f), \ w(0xc5), w(0x30), w(0x01), w(0x67), w(0x2b), w(0xfe), w(0xd7), \ @@ -7922,53 +8607,114 @@ FORCE_INLINE uint32_t _mm_crc32_u8(uint32_t crc, uint8_t v) w(0xe6), w(0x42), w(0x68), w(0x41), w(0x99), w(0x2d), w(0x0f), \ w(0xb0), w(0x54), w(0xbb), w(0x16) \ } +#define SSE2NEON_AES_RSBOX(w) \ + { \ + w(0x52), w(0x09), w(0x6a), w(0xd5), w(0x30), w(0x36), w(0xa5), \ + w(0x38), w(0xbf), w(0x40), w(0xa3), w(0x9e), w(0x81), w(0xf3), \ + w(0xd7), w(0xfb), w(0x7c), w(0xe3), w(0x39), w(0x82), w(0x9b), \ + w(0x2f), w(0xff), w(0x87), w(0x34), w(0x8e), w(0x43), w(0x44), \ + w(0xc4), w(0xde), w(0xe9), w(0xcb), w(0x54), w(0x7b), w(0x94), \ + w(0x32), w(0xa6), w(0xc2), w(0x23), w(0x3d), w(0xee), w(0x4c), \ + w(0x95), w(0x0b), w(0x42), w(0xfa), w(0xc3), w(0x4e), w(0x08), \ + w(0x2e), w(0xa1), w(0x66), w(0x28), w(0xd9), w(0x24), w(0xb2), \ + w(0x76), w(0x5b), w(0xa2), w(0x49), w(0x6d), w(0x8b), w(0xd1), \ + w(0x25), w(0x72), w(0xf8), w(0xf6), w(0x64), w(0x86), w(0x68), \ + w(0x98), w(0x16), w(0xd4), w(0xa4), w(0x5c), w(0xcc), w(0x5d), \ + w(0x65), w(0xb6), w(0x92), w(0x6c), w(0x70), w(0x48), w(0x50), \ + w(0xfd), w(0xed), w(0xb9), w(0xda), w(0x5e), w(0x15), w(0x46), \ + w(0x57), w(0xa7), w(0x8d), w(0x9d), w(0x84), w(0x90), w(0xd8), \ + w(0xab), w(0x00), w(0x8c), w(0xbc), w(0xd3), w(0x0a), w(0xf7), \ + w(0xe4), w(0x58), w(0x05), w(0xb8), w(0xb3), w(0x45), w(0x06), \ + w(0xd0), w(0x2c), w(0x1e), w(0x8f), w(0xca), w(0x3f), w(0x0f), \ + w(0x02), w(0xc1), w(0xaf), w(0xbd), w(0x03), w(0x01), w(0x13), \ + w(0x8a), w(0x6b), w(0x3a), w(0x91), w(0x11), w(0x41), w(0x4f), \ + w(0x67), w(0xdc), w(0xea), w(0x97), w(0xf2), w(0xcf), w(0xce), \ + w(0xf0), w(0xb4), w(0xe6), w(0x73), w(0x96), w(0xac), w(0x74), \ + w(0x22), w(0xe7), w(0xad), w(0x35), w(0x85), w(0xe2), w(0xf9), \ + w(0x37), w(0xe8), w(0x1c), w(0x75), w(0xdf), w(0x6e), w(0x47), \ + w(0xf1), w(0x1a), w(0x71), w(0x1d), w(0x29), w(0xc5), w(0x89), \ + w(0x6f), w(0xb7), w(0x62), w(0x0e), w(0xaa), w(0x18), w(0xbe), \ + w(0x1b), w(0xfc), w(0x56), w(0x3e), w(0x4b), w(0xc6), w(0xd2), \ + w(0x79), w(0x20), w(0x9a), w(0xdb), w(0xc0), w(0xfe), w(0x78), \ + w(0xcd), w(0x5a), w(0xf4), w(0x1f), w(0xdd), w(0xa8), w(0x33), \ + w(0x88), w(0x07), w(0xc7), w(0x31), w(0xb1), w(0x12), w(0x10), \ + w(0x59), w(0x27), w(0x80), w(0xec), w(0x5f), w(0x60), w(0x51), \ + w(0x7f), w(0xa9), w(0x19), w(0xb5), w(0x4a), w(0x0d), w(0x2d), \ + w(0xe5), w(0x7a), w(0x9f), w(0x93), w(0xc9), w(0x9c), w(0xef), \ + w(0xa0), w(0xe0), w(0x3b), w(0x4d), w(0xae), w(0x2a), w(0xf5), \ + w(0xb0), w(0xc8), w(0xeb), w(0xbb), w(0x3c), w(0x83), w(0x53), \ + w(0x99), w(0x61), w(0x17), w(0x2b), w(0x04), w(0x7e), w(0xba), \ + w(0x77), w(0xd6), w(0x26), w(0xe1), w(0x69), w(0x14), w(0x63), \ + w(0x55), w(0x21), w(0x0c), w(0x7d) \ + } /* clang-format on */ /* X Macro trick. 
See https://en.wikipedia.org/wiki/X_Macro */ #define SSE2NEON_AES_H0(x) (x) -static const uint8_t SSE2NEON_sbox[256] = SSE2NEON_AES_DATA(SSE2NEON_AES_H0); +static const uint8_t _sse2neon_sbox[256] = SSE2NEON_AES_SBOX(SSE2NEON_AES_H0); +static const uint8_t _sse2neon_rsbox[256] = SSE2NEON_AES_RSBOX(SSE2NEON_AES_H0); #undef SSE2NEON_AES_H0 -// In the absence of crypto extensions, implement aesenc using regular neon +/* x_time function and matrix multiply function */ +#if !defined(__aarch64__) && !defined(_M_ARM64) +#define SSE2NEON_XT(x) (((x) << 1) ^ ((((x) >> 7) & 1) * 0x1b)) +#define SSE2NEON_MULTIPLY(x, y) \ + (((y & 1) * x) ^ ((y >> 1 & 1) * SSE2NEON_XT(x)) ^ \ + ((y >> 2 & 1) * SSE2NEON_XT(SSE2NEON_XT(x))) ^ \ + ((y >> 3 & 1) * SSE2NEON_XT(SSE2NEON_XT(SSE2NEON_XT(x)))) ^ \ + ((y >> 4 & 1) * SSE2NEON_XT(SSE2NEON_XT(SSE2NEON_XT(SSE2NEON_XT(x)))))) +#endif + +// In the absence of crypto extensions, implement aesenc using regular NEON // intrinsics instead. See: // https://www.workofard.com/2017/01/accelerated-aes-for-the-arm64-linux-kernel/ // https://www.workofard.com/2017/07/ghash-for-low-end-cores/ and -// https://github.com/ColinIanKing/linux-next-mirror/blob/b5f466091e130caaf0735976648f72bd5e09aa84/crypto/aegis128-neon-inner.c#L52 -// for more information Reproduced with permission of the author. -FORCE_INLINE __m128i _mm_aesenc_si128(__m128i EncBlock, __m128i RoundKey) +// for more information. +FORCE_INLINE __m128i _mm_aesenc_si128(__m128i a, __m128i RoundKey) { -#if defined(__aarch64__) - static const uint8_t shift_rows[] = {0x0, 0x5, 0xa, 0xf, 0x4, 0x9, - 0xe, 0x3, 0x8, 0xd, 0x2, 0x7, - 0xc, 0x1, 0x6, 0xb}; - static const uint8_t ror32by8[] = {0x1, 0x2, 0x3, 0x0, 0x5, 0x6, 0x7, 0x4, - 0x9, 0xa, 0xb, 0x8, 0xd, 0xe, 0xf, 0xc}; +#if defined(__aarch64__) || defined(_M_ARM64) + static const uint8_t shift_rows[] = { + 0x0, 0x5, 0xa, 0xf, 0x4, 0x9, 0xe, 0x3, + 0x8, 0xd, 0x2, 0x7, 0xc, 0x1, 0x6, 0xb, + }; + static const uint8_t ror32by8[] = { + 0x1, 0x2, 0x3, 0x0, 0x5, 0x6, 0x7, 0x4, + 0x9, 0xa, 0xb, 0x8, 0xd, 0xe, 0xf, 0xc, + }; uint8x16_t v; - uint8x16_t w = vreinterpretq_u8_m128i(EncBlock); + uint8x16_t w = vreinterpretq_u8_m128i(a); - // shift rows + /* shift rows */ w = vqtbl1q_u8(w, vld1q_u8(shift_rows)); - // sub bytes - v = vqtbl4q_u8(_sse2neon_vld1q_u8_x4(SSE2NEON_sbox), w); - v = vqtbx4q_u8(v, _sse2neon_vld1q_u8_x4(SSE2NEON_sbox + 0x40), w - 0x40); - v = vqtbx4q_u8(v, _sse2neon_vld1q_u8_x4(SSE2NEON_sbox + 0x80), w - 0x80); - v = vqtbx4q_u8(v, _sse2neon_vld1q_u8_x4(SSE2NEON_sbox + 0xc0), w - 0xc0); - - // mix columns - w = (v << 1) ^ (uint8x16_t)(((int8x16_t) v >> 7) & 0x1b); + /* sub bytes */ + // Here, we separate the whole 256-bytes table into 4 64-bytes tables, and + // look up each of the table. After each lookup, we load the next table + // which locates at the next 64-bytes. In the meantime, the index in the + // table would be smaller than it was, so the index parameters of + // `vqtbx4q_u8()` need to be added the same constant as the loaded tables. 
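+    // For example, a byte value of 0x95 misses the first two tables (0x95
+    // and 0x95 - 0x40 = 0x55 both exceed 0x3f), is resolved by the third
+    // lookup as index 0x95 - 0x80 = 0x15 into the table loaded from
+    // _sse2neon_sbox + 0x80, and is left untouched by the fourth lookup
+    // because 0x95 - 0xc0 wraps around to 0xd5.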
+ v = vqtbl4q_u8(_sse2neon_vld1q_u8_x4(_sse2neon_sbox), w); + // 'w-0x40' equals to 'vsubq_u8(w, vdupq_n_u8(0x40))' + v = vqtbx4q_u8(v, _sse2neon_vld1q_u8_x4(_sse2neon_sbox + 0x40), w - 0x40); + v = vqtbx4q_u8(v, _sse2neon_vld1q_u8_x4(_sse2neon_sbox + 0x80), w - 0x80); + v = vqtbx4q_u8(v, _sse2neon_vld1q_u8_x4(_sse2neon_sbox + 0xc0), w - 0xc0); + + /* mix columns */ + w = (v << 1) ^ (uint8x16_t) (((int8x16_t) v >> 7) & 0x1b); w ^= (uint8x16_t) vrev32q_u16((uint16x8_t) v); w ^= vqtbl1q_u8(v ^ w, vld1q_u8(ror32by8)); - // add round key + /* add round key */ return vreinterpretq_m128i_u8(w) ^ RoundKey; -#else /* ARMv7-A NEON implementation */ -#define SSE2NEON_AES_B2W(b0, b1, b2, b3) \ - (((uint32_t)(b3) << 24) | ((uint32_t)(b2) << 16) | ((uint32_t)(b1) << 8) | \ - (b0)) +#else /* ARMv7-A implementation for a table-based AES */ +#define SSE2NEON_AES_B2W(b0, b1, b2, b3) \ + (((uint32_t) (b3) << 24) | ((uint32_t) (b2) << 16) | \ + ((uint32_t) (b1) << 8) | (uint32_t) (b0)) +// multiplying 'x' by 2 in GF(2^8) #define SSE2NEON_AES_F2(x) ((x << 1) ^ (((x >> 7) & 1) * 0x011b /* WPOLY */)) +// multiplying 'x' by 3 in GF(2^8) #define SSE2NEON_AES_F3(x) (SSE2NEON_AES_F2(x) ^ x) #define SSE2NEON_AES_U0(p) \ SSE2NEON_AES_B2W(SSE2NEON_AES_F2(p), p, p, SSE2NEON_AES_F3(p)) @@ -7978,11 +8724,14 @@ FORCE_INLINE __m128i _mm_aesenc_si128(__m128i EncBlock, __m128i RoundKey) SSE2NEON_AES_B2W(p, SSE2NEON_AES_F3(p), SSE2NEON_AES_F2(p), p) #define SSE2NEON_AES_U3(p) \ SSE2NEON_AES_B2W(p, p, SSE2NEON_AES_F3(p), SSE2NEON_AES_F2(p)) + + // this generates a table containing every possible permutation of + // shift_rows() and sub_bytes() with mix_columns(). static const uint32_t ALIGN_STRUCT(16) aes_table[4][256] = { - SSE2NEON_AES_DATA(SSE2NEON_AES_U0), - SSE2NEON_AES_DATA(SSE2NEON_AES_U1), - SSE2NEON_AES_DATA(SSE2NEON_AES_U2), - SSE2NEON_AES_DATA(SSE2NEON_AES_U3), + SSE2NEON_AES_SBOX(SSE2NEON_AES_U0), + SSE2NEON_AES_SBOX(SSE2NEON_AES_U1), + SSE2NEON_AES_SBOX(SSE2NEON_AES_U2), + SSE2NEON_AES_SBOX(SSE2NEON_AES_U3), }; #undef SSE2NEON_AES_B2W #undef SSE2NEON_AES_F2 @@ -7992,11 +8741,15 @@ FORCE_INLINE __m128i _mm_aesenc_si128(__m128i EncBlock, __m128i RoundKey) #undef SSE2NEON_AES_U2 #undef SSE2NEON_AES_U3 - uint32_t x0 = _mm_cvtsi128_si32(EncBlock); - uint32_t x1 = _mm_cvtsi128_si32(_mm_shuffle_epi32(EncBlock, 0x55)); - uint32_t x2 = _mm_cvtsi128_si32(_mm_shuffle_epi32(EncBlock, 0xAA)); - uint32_t x3 = _mm_cvtsi128_si32(_mm_shuffle_epi32(EncBlock, 0xFF)); + uint32_t x0 = _mm_cvtsi128_si32(a); // get a[31:0] + uint32_t x1 = + _mm_cvtsi128_si32(_mm_shuffle_epi32(a, 0x55)); // get a[63:32] + uint32_t x2 = + _mm_cvtsi128_si32(_mm_shuffle_epi32(a, 0xAA)); // get a[95:64] + uint32_t x3 = + _mm_cvtsi128_si32(_mm_shuffle_epi32(a, 0xFF)); // get a[127:96] + // finish the modulo addition step in mix_columns() __m128i out = _mm_set_epi32( (aes_table[0][x3 & 0xff] ^ aes_table[1][(x0 >> 8) & 0xff] ^ aes_table[2][(x1 >> 16) & 0xff] ^ aes_table[3][x2 >> 24]), @@ -8011,54 +8764,254 @@ FORCE_INLINE __m128i _mm_aesenc_si128(__m128i EncBlock, __m128i RoundKey) #endif } +// Perform one round of an AES decryption flow on data (state) in a using the +// round key in RoundKey, and store the result in dst. 
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_aesdec_si128 +FORCE_INLINE __m128i _mm_aesdec_si128(__m128i a, __m128i RoundKey) +{ +#if defined(__aarch64__) + static const uint8_t inv_shift_rows[] = { + 0x0, 0xd, 0xa, 0x7, 0x4, 0x1, 0xe, 0xb, + 0x8, 0x5, 0x2, 0xf, 0xc, 0x9, 0x6, 0x3, + }; + static const uint8_t ror32by8[] = { + 0x1, 0x2, 0x3, 0x0, 0x5, 0x6, 0x7, 0x4, + 0x9, 0xa, 0xb, 0x8, 0xd, 0xe, 0xf, 0xc, + }; + + uint8x16_t v; + uint8x16_t w = vreinterpretq_u8_m128i(a); + + // inverse shift rows + w = vqtbl1q_u8(w, vld1q_u8(inv_shift_rows)); + + // inverse sub bytes + v = vqtbl4q_u8(_sse2neon_vld1q_u8_x4(_sse2neon_rsbox), w); + v = vqtbx4q_u8(v, _sse2neon_vld1q_u8_x4(_sse2neon_rsbox + 0x40), w - 0x40); + v = vqtbx4q_u8(v, _sse2neon_vld1q_u8_x4(_sse2neon_rsbox + 0x80), w - 0x80); + v = vqtbx4q_u8(v, _sse2neon_vld1q_u8_x4(_sse2neon_rsbox + 0xc0), w - 0xc0); + + // inverse mix columns + // multiplying 'v' by 4 in GF(2^8) + w = (v << 1) ^ (uint8x16_t) (((int8x16_t) v >> 7) & 0x1b); + w = (w << 1) ^ (uint8x16_t) (((int8x16_t) w >> 7) & 0x1b); + v ^= w; + v ^= (uint8x16_t) vrev32q_u16((uint16x8_t) w); + + w = (v << 1) ^ (uint8x16_t) (((int8x16_t) v >> 7) & + 0x1b); // multiplying 'v' by 2 in GF(2^8) + w ^= (uint8x16_t) vrev32q_u16((uint16x8_t) v); + w ^= vqtbl1q_u8(v ^ w, vld1q_u8(ror32by8)); + + // add round key + return vreinterpretq_m128i_u8(w) ^ RoundKey; + +#else /* ARMv7-A NEON implementation */ + /* FIXME: optimized for NEON */ + uint8_t i, e, f, g, h, v[4][4]; + uint8_t *_a = (uint8_t *) &a; + for (i = 0; i < 16; ++i) { + v[((i / 4) + (i % 4)) % 4][i % 4] = _sse2neon_rsbox[_a[i]]; + } + + // inverse mix columns + for (i = 0; i < 4; ++i) { + e = v[i][0]; + f = v[i][1]; + g = v[i][2]; + h = v[i][3]; + + v[i][0] = SSE2NEON_MULTIPLY(e, 0x0e) ^ SSE2NEON_MULTIPLY(f, 0x0b) ^ + SSE2NEON_MULTIPLY(g, 0x0d) ^ SSE2NEON_MULTIPLY(h, 0x09); + v[i][1] = SSE2NEON_MULTIPLY(e, 0x09) ^ SSE2NEON_MULTIPLY(f, 0x0e) ^ + SSE2NEON_MULTIPLY(g, 0x0b) ^ SSE2NEON_MULTIPLY(h, 0x0d); + v[i][2] = SSE2NEON_MULTIPLY(e, 0x0d) ^ SSE2NEON_MULTIPLY(f, 0x09) ^ + SSE2NEON_MULTIPLY(g, 0x0e) ^ SSE2NEON_MULTIPLY(h, 0x0b); + v[i][3] = SSE2NEON_MULTIPLY(e, 0x0b) ^ SSE2NEON_MULTIPLY(f, 0x0d) ^ + SSE2NEON_MULTIPLY(g, 0x09) ^ SSE2NEON_MULTIPLY(h, 0x0e); + } + + return vreinterpretq_m128i_u8(vld1q_u8((uint8_t *) v)) ^ RoundKey; +#endif +} + // Perform the last round of an AES encryption flow on data (state) in a using // the round key in RoundKey, and store the result in dst. 
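+//
+// Illustrative AES-128 ECB encryption of one block; `rk` is assumed to be an
+// 11-entry expanded key schedule computed elsewhere (see the key-expansion
+// sketch near _mm_aeskeygenassist_si128 below):
+//
+//     static __m128i aes128_encrypt_block(__m128i block, const __m128i rk[11])
+//     {
+//         block = _mm_xor_si128(block, rk[0]);        /* initial AddRoundKey */
+//         for (int i = 1; i < 10; i++)
+//             block = _mm_aesenc_si128(block, rk[i]); /* rounds 1..9 */
+//         return _mm_aesenclast_si128(block, rk[10]); /* round 10: no MixColumns */
+//     }
+//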
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_aesenclast_si128 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_aesenclast_si128 FORCE_INLINE __m128i _mm_aesenclast_si128(__m128i a, __m128i RoundKey) { +#if defined(__aarch64__) + static const uint8_t shift_rows[] = { + 0x0, 0x5, 0xa, 0xf, 0x4, 0x9, 0xe, 0x3, + 0x8, 0xd, 0x2, 0x7, 0xc, 0x1, 0x6, 0xb, + }; + + uint8x16_t v; + uint8x16_t w = vreinterpretq_u8_m128i(a); + + // shift rows + w = vqtbl1q_u8(w, vld1q_u8(shift_rows)); + + // sub bytes + v = vqtbl4q_u8(_sse2neon_vld1q_u8_x4(_sse2neon_sbox), w); + v = vqtbx4q_u8(v, _sse2neon_vld1q_u8_x4(_sse2neon_sbox + 0x40), w - 0x40); + v = vqtbx4q_u8(v, _sse2neon_vld1q_u8_x4(_sse2neon_sbox + 0x80), w - 0x80); + v = vqtbx4q_u8(v, _sse2neon_vld1q_u8_x4(_sse2neon_sbox + 0xc0), w - 0xc0); + + // add round key + return vreinterpretq_m128i_u8(v) ^ RoundKey; + +#else /* ARMv7-A implementation */ + uint8_t v[16] = { + _sse2neon_sbox[vgetq_lane_u8(vreinterpretq_u8_m128i(a), 0)], + _sse2neon_sbox[vgetq_lane_u8(vreinterpretq_u8_m128i(a), 5)], + _sse2neon_sbox[vgetq_lane_u8(vreinterpretq_u8_m128i(a), 10)], + _sse2neon_sbox[vgetq_lane_u8(vreinterpretq_u8_m128i(a), 15)], + _sse2neon_sbox[vgetq_lane_u8(vreinterpretq_u8_m128i(a), 4)], + _sse2neon_sbox[vgetq_lane_u8(vreinterpretq_u8_m128i(a), 9)], + _sse2neon_sbox[vgetq_lane_u8(vreinterpretq_u8_m128i(a), 14)], + _sse2neon_sbox[vgetq_lane_u8(vreinterpretq_u8_m128i(a), 3)], + _sse2neon_sbox[vgetq_lane_u8(vreinterpretq_u8_m128i(a), 8)], + _sse2neon_sbox[vgetq_lane_u8(vreinterpretq_u8_m128i(a), 13)], + _sse2neon_sbox[vgetq_lane_u8(vreinterpretq_u8_m128i(a), 2)], + _sse2neon_sbox[vgetq_lane_u8(vreinterpretq_u8_m128i(a), 7)], + _sse2neon_sbox[vgetq_lane_u8(vreinterpretq_u8_m128i(a), 12)], + _sse2neon_sbox[vgetq_lane_u8(vreinterpretq_u8_m128i(a), 1)], + _sse2neon_sbox[vgetq_lane_u8(vreinterpretq_u8_m128i(a), 6)], + _sse2neon_sbox[vgetq_lane_u8(vreinterpretq_u8_m128i(a), 11)], + }; + + return vreinterpretq_m128i_u8(vld1q_u8(v)) ^ RoundKey; +#endif +} + +// Perform the last round of an AES decryption flow on data (state) in a using +// the round key in RoundKey, and store the result in dst. 
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_aesdeclast_si128 +FORCE_INLINE __m128i _mm_aesdeclast_si128(__m128i a, __m128i RoundKey) +{ +#if defined(__aarch64__) + static const uint8_t inv_shift_rows[] = { + 0x0, 0xd, 0xa, 0x7, 0x4, 0x1, 0xe, 0xb, + 0x8, 0x5, 0x2, 0xf, 0xc, 0x9, 0x6, 0x3, + }; + + uint8x16_t v; + uint8x16_t w = vreinterpretq_u8_m128i(a); + + // inverse shift rows + w = vqtbl1q_u8(w, vld1q_u8(inv_shift_rows)); + + // inverse sub bytes + v = vqtbl4q_u8(_sse2neon_vld1q_u8_x4(_sse2neon_rsbox), w); + v = vqtbx4q_u8(v, _sse2neon_vld1q_u8_x4(_sse2neon_rsbox + 0x40), w - 0x40); + v = vqtbx4q_u8(v, _sse2neon_vld1q_u8_x4(_sse2neon_rsbox + 0x80), w - 0x80); + v = vqtbx4q_u8(v, _sse2neon_vld1q_u8_x4(_sse2neon_rsbox + 0xc0), w - 0xc0); + + // add round key + return vreinterpretq_m128i_u8(v) ^ RoundKey; + +#else /* ARMv7-A NEON implementation */ /* FIXME: optimized for NEON */ - uint8_t v[4][4] = { - {SSE2NEON_sbox[vreinterpretq_nth_u8_m128i(a, 0)], - SSE2NEON_sbox[vreinterpretq_nth_u8_m128i(a, 5)], - SSE2NEON_sbox[vreinterpretq_nth_u8_m128i(a, 10)], - SSE2NEON_sbox[vreinterpretq_nth_u8_m128i(a, 15)]}, - {SSE2NEON_sbox[vreinterpretq_nth_u8_m128i(a, 4)], - SSE2NEON_sbox[vreinterpretq_nth_u8_m128i(a, 9)], - SSE2NEON_sbox[vreinterpretq_nth_u8_m128i(a, 14)], - SSE2NEON_sbox[vreinterpretq_nth_u8_m128i(a, 3)]}, - {SSE2NEON_sbox[vreinterpretq_nth_u8_m128i(a, 8)], - SSE2NEON_sbox[vreinterpretq_nth_u8_m128i(a, 13)], - SSE2NEON_sbox[vreinterpretq_nth_u8_m128i(a, 2)], - SSE2NEON_sbox[vreinterpretq_nth_u8_m128i(a, 7)]}, - {SSE2NEON_sbox[vreinterpretq_nth_u8_m128i(a, 12)], - SSE2NEON_sbox[vreinterpretq_nth_u8_m128i(a, 1)], - SSE2NEON_sbox[vreinterpretq_nth_u8_m128i(a, 6)], - SSE2NEON_sbox[vreinterpretq_nth_u8_m128i(a, 11)]}, + uint8_t v[4][4]; + uint8_t *_a = (uint8_t *) &a; + for (int i = 0; i < 16; ++i) { + v[((i / 4) + (i % 4)) % 4][i % 4] = _sse2neon_rsbox[_a[i]]; + } + + return vreinterpretq_m128i_u8(vld1q_u8((uint8_t *) v)) ^ RoundKey; +#endif +} + +// Perform the InvMixColumns transformation on a and store the result in dst. 
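+// (In the usual AES decryption flow, encryption round keys 1 through N-1 are
+// passed through this transform once so that _mm_aesdec_si128 can consume
+// them, while the first and last round keys are used unchanged.)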
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_aesimc_si128 +FORCE_INLINE __m128i _mm_aesimc_si128(__m128i a) +{ +#if defined(__aarch64__) + static const uint8_t ror32by8[] = { + 0x1, 0x2, 0x3, 0x0, 0x5, 0x6, 0x7, 0x4, + 0x9, 0xa, 0xb, 0x8, 0xd, 0xe, 0xf, 0xc, }; - for (int i = 0; i < 16; i++) - vreinterpretq_nth_u8_m128i(a, i) = - v[i / 4][i % 4] ^ vreinterpretq_nth_u8_m128i(RoundKey, i); - return a; + uint8x16_t v = vreinterpretq_u8_m128i(a); + uint8x16_t w; + + // multiplying 'v' by 4 in GF(2^8) + w = (v << 1) ^ (uint8x16_t) (((int8x16_t) v >> 7) & 0x1b); + w = (w << 1) ^ (uint8x16_t) (((int8x16_t) w >> 7) & 0x1b); + v ^= w; + v ^= (uint8x16_t) vrev32q_u16((uint16x8_t) w); + + // multiplying 'v' by 2 in GF(2^8) + w = (v << 1) ^ (uint8x16_t) (((int8x16_t) v >> 7) & 0x1b); + w ^= (uint8x16_t) vrev32q_u16((uint16x8_t) v); + w ^= vqtbl1q_u8(v ^ w, vld1q_u8(ror32by8)); + return vreinterpretq_m128i_u8(w); + +#else /* ARMv7-A NEON implementation */ + uint8_t i, e, f, g, h, v[4][4]; + vst1q_u8((uint8_t *) v, vreinterpretq_u8_m128i(a)); + for (i = 0; i < 4; ++i) { + e = v[i][0]; + f = v[i][1]; + g = v[i][2]; + h = v[i][3]; + + v[i][0] = SSE2NEON_MULTIPLY(e, 0x0e) ^ SSE2NEON_MULTIPLY(f, 0x0b) ^ + SSE2NEON_MULTIPLY(g, 0x0d) ^ SSE2NEON_MULTIPLY(h, 0x09); + v[i][1] = SSE2NEON_MULTIPLY(e, 0x09) ^ SSE2NEON_MULTIPLY(f, 0x0e) ^ + SSE2NEON_MULTIPLY(g, 0x0b) ^ SSE2NEON_MULTIPLY(h, 0x0d); + v[i][2] = SSE2NEON_MULTIPLY(e, 0x0d) ^ SSE2NEON_MULTIPLY(f, 0x09) ^ + SSE2NEON_MULTIPLY(g, 0x0e) ^ SSE2NEON_MULTIPLY(h, 0x0b); + v[i][3] = SSE2NEON_MULTIPLY(e, 0x0b) ^ SSE2NEON_MULTIPLY(f, 0x0d) ^ + SSE2NEON_MULTIPLY(g, 0x09) ^ SSE2NEON_MULTIPLY(h, 0x0e); + } + + return vreinterpretq_m128i_u8(vld1q_u8((uint8_t *) v)); +#endif } +// Assist in expanding the AES cipher key by computing steps towards generating +// a round key for encryption cipher using data from a and an 8-bit round +// constant specified in imm8, and store the result in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_aeskeygenassist_si128 +// // Emits the Advanced Encryption Standard (AES) instruction aeskeygenassist. // This instruction generates a round key for AES encryption. See // https://kazakov.life/2017/11/01/cryptocurrency-mining-on-ios-devices/ // for details. 
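+//
+// Illustrative AES-128 key-expansion step built on top of this intrinsic
+// (names are hypothetical; `prev` is round key i-1 and the round constant
+// runs 0x01, 0x02, 0x04, ..., 0x36 for rounds 1..10):
+//
+//     static __m128i aes128_expand_step(__m128i prev, __m128i assist)
+//     {
+//         /* lane 3 of `assist` holds SubWord(RotWord(w3)) ^ rcon */
+//         assist = _mm_shuffle_epi32(assist, 0xff);
+//         prev = _mm_xor_si128(prev, _mm_slli_si128(prev, 4));
+//         prev = _mm_xor_si128(prev, _mm_slli_si128(prev, 4));
+//         prev = _mm_xor_si128(prev, _mm_slli_si128(prev, 4));
+//         return _mm_xor_si128(prev, assist);
+//     }
+//
+//     /* e.g. rk[1] = aes128_expand_step(rk[0],
+//                         _mm_aeskeygenassist_si128(rk[0], 0x01)); */
+//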
-// -// https://msdn.microsoft.com/en-us/library/cc714138(v=vs.120).aspx -FORCE_INLINE __m128i _mm_aeskeygenassist_si128(__m128i key, const int rcon) +FORCE_INLINE __m128i _mm_aeskeygenassist_si128(__m128i a, const int rcon) { - uint32_t X1 = _mm_cvtsi128_si32(_mm_shuffle_epi32(key, 0x55)); - uint32_t X3 = _mm_cvtsi128_si32(_mm_shuffle_epi32(key, 0xFF)); +#if defined(__aarch64__) + uint8x16_t _a = vreinterpretq_u8_m128i(a); + uint8x16_t v = vqtbl4q_u8(_sse2neon_vld1q_u8_x4(_sse2neon_sbox), _a); + v = vqtbx4q_u8(v, _sse2neon_vld1q_u8_x4(_sse2neon_sbox + 0x40), _a - 0x40); + v = vqtbx4q_u8(v, _sse2neon_vld1q_u8_x4(_sse2neon_sbox + 0x80), _a - 0x80); + v = vqtbx4q_u8(v, _sse2neon_vld1q_u8_x4(_sse2neon_sbox + 0xc0), _a - 0xc0); + + uint32x4_t v_u32 = vreinterpretq_u32_u8(v); + uint32x4_t ror_v = vorrq_u32(vshrq_n_u32(v_u32, 8), vshlq_n_u32(v_u32, 24)); + uint32x4_t ror_xor_v = veorq_u32(ror_v, vdupq_n_u32(rcon)); + + return vreinterpretq_m128i_u32(vtrn2q_u32(v_u32, ror_xor_v)); + +#else /* ARMv7-A NEON implementation */ + uint32_t X1 = _mm_cvtsi128_si32(_mm_shuffle_epi32(a, 0x55)); + uint32_t X3 = _mm_cvtsi128_si32(_mm_shuffle_epi32(a, 0xFF)); for (int i = 0; i < 4; ++i) { - ((uint8_t *) &X1)[i] = SSE2NEON_sbox[((uint8_t *) &X1)[i]]; - ((uint8_t *) &X3)[i] = SSE2NEON_sbox[((uint8_t *) &X3)[i]]; + ((uint8_t *) &X1)[i] = _sse2neon_sbox[((uint8_t *) &X1)[i]]; + ((uint8_t *) &X3)[i] = _sse2neon_sbox[((uint8_t *) &X3)[i]]; } return _mm_set_epi32(((X3 >> 8) | (X3 << 24)) ^ rcon, X3, ((X1 >> 8) | (X1 << 24)) ^ rcon, X1); +#endif } -#undef SSE2NEON_AES_DATA +#undef SSE2NEON_AES_SBOX +#undef SSE2NEON_AES_RSBOX + +#if defined(__aarch64__) +#undef SSE2NEON_XT +#undef SSE2NEON_MULTIPLY +#endif #else /* __ARM_FEATURE_CRYPTO */ // Implements equivalent of 'aesenc' by combining AESE (with an empty key) and @@ -8069,12 +9022,24 @@ FORCE_INLINE __m128i _mm_aeskeygenassist_si128(__m128i key, const int rcon) // for more details. FORCE_INLINE __m128i _mm_aesenc_si128(__m128i a, __m128i b) { - return vreinterpretq_m128i_u8( - vaesmcq_u8(vaeseq_u8(vreinterpretq_u8_m128i(a), vdupq_n_u8(0))) ^ - vreinterpretq_u8_m128i(b)); + return vreinterpretq_m128i_u8(veorq_u8( + vaesmcq_u8(vaeseq_u8(vreinterpretq_u8_m128i(a), vdupq_n_u8(0))), + vreinterpretq_u8_m128i(b))); +} + +// Perform one round of an AES decryption flow on data (state) in a using the +// round key in RoundKey, and store the result in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_aesdec_si128 +FORCE_INLINE __m128i _mm_aesdec_si128(__m128i a, __m128i RoundKey) +{ + return vreinterpretq_m128i_u8(veorq_u8( + vaesimcq_u8(vaesdq_u8(vreinterpretq_u8_m128i(a), vdupq_n_u8(0))), + vreinterpretq_u8_m128i(RoundKey))); } -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_aesenclast_si128 +// Perform the last round of an AES encryption flow on data (state) in a using +// the round key in RoundKey, and store the result in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_aesenclast_si128 FORCE_INLINE __m128i _mm_aesenclast_si128(__m128i a, __m128i RoundKey) { return _mm_xor_si128(vreinterpretq_m128i_u8(vaeseq_u8( @@ -8082,11 +9047,33 @@ FORCE_INLINE __m128i _mm_aesenclast_si128(__m128i a, __m128i RoundKey) RoundKey); } +// Perform the last round of an AES decryption flow on data (state) in a using +// the round key in RoundKey, and store the result in dst. 
+// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_aesdeclast_si128 +FORCE_INLINE __m128i _mm_aesdeclast_si128(__m128i a, __m128i RoundKey) +{ + return vreinterpretq_m128i_u8( + veorq_u8(vaesdq_u8(vreinterpretq_u8_m128i(a), vdupq_n_u8(0)), + vreinterpretq_u8_m128i(RoundKey))); +} + +// Perform the InvMixColumns transformation on a and store the result in dst. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_aesimc_si128 +FORCE_INLINE __m128i _mm_aesimc_si128(__m128i a) +{ + return vreinterpretq_m128i_u8(vaesimcq_u8(vreinterpretq_u8_m128i(a))); +} + +// Assist in expanding the AES cipher key by computing steps towards generating +// a round key for encryption cipher using data from a and an 8-bit round +// constant specified in imm8, and store the result in dst." +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_aeskeygenassist_si128 FORCE_INLINE __m128i _mm_aeskeygenassist_si128(__m128i a, const int rcon) { // AESE does ShiftRows and SubBytes on A uint8x16_t u8 = vaeseq_u8(vreinterpretq_u8_m128i(a), vdupq_n_u8(0)); +#ifndef _MSC_VER uint8x16_t dest = { // Undo ShiftRows step from AESE and extract X1 and X3 u8[0x4], u8[0x1], u8[0xE], u8[0xB], // SubBytes(X1) @@ -8096,6 +9083,33 @@ FORCE_INLINE __m128i _mm_aeskeygenassist_si128(__m128i a, const int rcon) }; uint32x4_t r = {0, (unsigned) rcon, 0, (unsigned) rcon}; return vreinterpretq_m128i_u8(dest) ^ vreinterpretq_m128i_u32(r); +#else + // We have to do this hack because MSVC is strictly adhering to the CPP + // standard, in particular C++03 8.5.1 sub-section 15, which states that + // unions must be initialized by their first member type. + + // As per the Windows ARM64 ABI, it is always little endian, so this works + __n128 dest{ + ((uint64_t) u8.n128_u8[0x4] << 0) | ((uint64_t) u8.n128_u8[0x1] << 8) | + ((uint64_t) u8.n128_u8[0xE] << 16) | + ((uint64_t) u8.n128_u8[0xB] << 24) | + ((uint64_t) u8.n128_u8[0x1] << 32) | + ((uint64_t) u8.n128_u8[0xE] << 40) | + ((uint64_t) u8.n128_u8[0xB] << 48) | + ((uint64_t) u8.n128_u8[0x4] << 56), + ((uint64_t) u8.n128_u8[0xC] << 0) | ((uint64_t) u8.n128_u8[0x9] << 8) | + ((uint64_t) u8.n128_u8[0x6] << 16) | + ((uint64_t) u8.n128_u8[0x3] << 24) | + ((uint64_t) u8.n128_u8[0x9] << 32) | + ((uint64_t) u8.n128_u8[0x6] << 40) | + ((uint64_t) u8.n128_u8[0x3] << 48) | + ((uint64_t) u8.n128_u8[0xC] << 56)}; + + dest.n128_u32[1] = dest.n128_u32[1] ^ rcon; + dest.n128_u32[3] = dest.n128_u32[3] ^ rcon; + + return dest; +#endif } #endif @@ -8103,7 +9117,7 @@ FORCE_INLINE __m128i _mm_aeskeygenassist_si128(__m128i a, const int rcon) // Perform a carry-less multiplication of two 64-bit integers, selected from a // and b according to imm8, and store the results in dst. 
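+//
+// Bit 0 of imm8 selects which 64-bit half of a is used and bit 4 selects the
+// half of b (0x00 = low*low, 0x11 = high*high). Illustrative use:
+//
+//     __m128i x = _mm_set_epi64x(0, 0x87);          /* x^7 + x^2 + x + 1 */
+//     __m128i y = _mm_set_epi64x(0, 0x02);          /* x */
+//     __m128i p = _mm_clmulepi64_si128(x, y, 0x00); /* low halves of both */
+//     /* p now holds 0x10e = x^8 + x^3 + x^2 + x: multiplication in GF(2)
+//        never propagates carries between bit positions. */
+//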
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_clmulepi64_si128 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_clmulepi64_si128 FORCE_INLINE __m128i _mm_clmulepi64_si128(__m128i _a, __m128i _b, const int imm) { uint64x2_t a = vreinterpretq_u64_m128i(_a); @@ -8126,14 +9140,36 @@ FORCE_INLINE __m128i _mm_clmulepi64_si128(__m128i _a, __m128i _b, const int imm) } } +FORCE_INLINE unsigned int _sse2neon_mm_get_denormals_zero_mode(void) +{ + union { + fpcr_bitfield field; +#if defined(__aarch64__) || defined(_M_ARM64) + uint64_t value; +#else + uint32_t value; +#endif + } r; + +#if defined(__aarch64__) || defined(_M_ARM64) + r.value = _sse2neon_get_fpcr(); +#else + __asm__ __volatile__("vmrs %0, FPSCR" : "=r"(r.value)); /* read */ +#endif + + return r.field.bit24 ? _MM_DENORMALS_ZERO_ON : _MM_DENORMALS_ZERO_OFF; +} + // Count the number of bits set to 1 in unsigned 32-bit integer a, and // return that count in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_popcnt_u32 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_popcnt_u32 FORCE_INLINE int _mm_popcnt_u32(unsigned int a) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) #if __has_builtin(__builtin_popcount) return __builtin_popcount(a); +#elif defined(_MSC_VER) + return _CountOneBits(a); #else return (int) vaddlv_u8(vcnt_u8(vcreate_u8((uint64_t) a))); #endif @@ -8155,12 +9191,14 @@ FORCE_INLINE int _mm_popcnt_u32(unsigned int a) // Count the number of bits set to 1 in unsigned 64-bit integer a, and // return that count in dst. -// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_popcnt_u64 +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_popcnt_u64 FORCE_INLINE int64_t _mm_popcnt_u64(uint64_t a) { -#if defined(__aarch64__) +#if defined(__aarch64__) || defined(_M_ARM64) #if __has_builtin(__builtin_popcountll) return __builtin_popcountll(a); +#elif defined(_MSC_VER) + return _CountOneBits64(a); #else return (int64_t) vaddlv_u8(vcnt_u8(vcreate_u8(a))); #endif @@ -8181,9 +9219,79 @@ FORCE_INLINE int64_t _mm_popcnt_u64(uint64_t a) #endif } +FORCE_INLINE void _sse2neon_mm_set_denormals_zero_mode(unsigned int flag) +{ + // AArch32 Advanced SIMD arithmetic always uses the Flush-to-zero setting, + // regardless of the value of the FZ bit. + union { + fpcr_bitfield field; +#if defined(__aarch64__) || defined(_M_ARM64) + uint64_t value; +#else + uint32_t value; +#endif + } r; + +#if defined(__aarch64__) || defined(_M_ARM64) + r.value = _sse2neon_get_fpcr(); +#else + __asm__ __volatile__("vmrs %0, FPSCR" : "=r"(r.value)); /* read */ +#endif + + r.field.bit24 = (flag & _MM_DENORMALS_ZERO_MASK) == _MM_DENORMALS_ZERO_ON; + +#if defined(__aarch64__) || defined(_M_ARM64) + _sse2neon_set_fpcr(r.value); +#else + __asm__ __volatile__("vmsr FPSCR, %0" ::"r"(r)); /* write */ +#endif +} + +// Return the current 64-bit value of the processor's time-stamp counter. +// https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=rdtsc +FORCE_INLINE uint64_t _rdtsc(void) +{ +#if defined(__aarch64__) || defined(_M_ARM64) + uint64_t val; + + /* According to ARM DDI 0487F.c, from Armv8.0 to Armv8.5 inclusive, the + * system counter is at least 56 bits wide; from Armv8.6, the counter + * must be 64 bits wide. So the system counter could be less than 64 + * bits wide and it is attributed with the flag 'cap_user_time_short' + * is true. 
+ */ +#if defined(_MSC_VER) + val = _ReadStatusReg(ARM64_SYSREG(3, 3, 14, 0, 2)); +#else + __asm__ __volatile__("mrs %0, cntvct_el0" : "=r"(val)); +#endif + + return val; +#else + uint32_t pmccntr, pmuseren, pmcntenset; + // Read the user mode Performance Monitoring Unit (PMU) + // User Enable Register (PMUSERENR) access permissions. + __asm__ __volatile__("mrc p15, 0, %0, c9, c14, 0" : "=r"(pmuseren)); + if (pmuseren & 1) { // Allows reading PMUSERENR for user mode code. + __asm__ __volatile__("mrc p15, 0, %0, c9, c12, 1" : "=r"(pmcntenset)); + if (pmcntenset & 0x80000000UL) { // Is it counting? + __asm__ __volatile__("mrc p15, 0, %0, c9, c13, 0" : "=r"(pmccntr)); + // The counter is set up to count every 64th cycle + return (uint64_t) (pmccntr) << 6; + } + } + + // Fallback to syscall as we can't enable PMUSERENR in user mode. + struct timeval tv; + gettimeofday(&tv, NULL); + return (uint64_t) (tv.tv_sec) * 1000000 + tv.tv_usec; +#endif +} + #if defined(__GNUC__) || defined(__clang__) #pragma pop_macro("ALIGN_STRUCT") #pragma pop_macro("FORCE_INLINE") +#pragma pop_macro("FORCE_INLINE_OPTNONE") #endif #if defined(__GNUC__) && !defined(__clang__) diff --git a/external/sse2neon/tests/README.md b/external/sse2neon/tests/README.md index afb1f9f8..b345b085 100644 --- a/external/sse2neon/tests/README.md +++ b/external/sse2neon/tests/README.md @@ -7,10 +7,10 @@ Once the conversion is implemented, the test can be added with the following ste * File `tests/impl.h` - Add the intrinsic under `#define INTRIN_FOREACH(TYPE)` macro. The naming convention + Add the intrinsic under `INTRIN_LIST` macro. The naming convention should be `mm_xxx`. Place it in the correct classification with the alphabetical order. - The classification can be referenced from [Intel Intrinsics Guide](https://software.intel.com/sites/landingpage/IntrinsicsGuide/#). + The classification can be referenced from [Intel Intrinsics Guide](https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html). * File `tests/impl.cpp` ```c diff --git a/external/sse2neon/tests/binding.cpp b/external/sse2neon/tests/binding.cpp index 4e9eaddb..ca1bb57f 100644 --- a/external/sse2neon/tests/binding.cpp +++ b/external/sse2neon/tests/binding.cpp @@ -8,8 +8,13 @@ namespace SSE2NEON void *platformAlignedAlloc(size_t size) { void *address; +#if defined(_WIN32) + address = _aligned_malloc(size, 16); + if (!address) { +#else int ret = posix_memalign(&address, 16, size); if (ret != 0) { +#endif fprintf(stderr, "Error at File %s line number %d\n", __FILE__, __LINE__); exit(EXIT_FAILURE); @@ -19,7 +24,12 @@ void *platformAlignedAlloc(size_t size) void platformAlignedFree(void *ptr) { +#if defined(_WIN32) + _aligned_free(ptr); +#else free(ptr); +#endif } + } // namespace SSE2NEON diff --git a/external/sse2neon/tests/common.cpp b/external/sse2neon/tests/common.cpp index ae849d27..f1427aca 100644 --- a/external/sse2neon/tests/common.cpp +++ b/external/sse2neon/tests/common.cpp @@ -274,7 +274,7 @@ result_t validateSingleFloatPair(float a, float b) const uint32_t *ua = (const uint32_t *) &a; const uint32_t *ub = (const uint32_t *) &b; // We do an integer (binary) compare rather than a - // floating point compare to take nands and infinities + // floating point compare to take NaNs and infinities // into account as well. return (*ua) == (*ub) ? 
TEST_SUCCESS : TEST_FAIL; } @@ -284,7 +284,7 @@ result_t validateSingleDoublePair(double a, double b) const uint64_t *ua = (const uint64_t *) &a; const uint64_t *ub = (const uint64_t *) &b; // We do an integer (binary) compare rather than a - // floating point compare to take nands and infinities + // floating point compare to take NaNs and infinities // into account as well. if (std::isnan(a) && std::isnan(b)) { @@ -316,6 +316,27 @@ result_t validateFloatEpsilon(__m128 a, float df1 = fabsf(t[1] - f1); float df2 = fabsf(t[2] - f2); float df3 = fabsf(t[3] - f3); + + // Due to floating-point error, subtracting floating-point number with NaN + // and zero value usually produces erroneous result. Therefore, we directly + // define the difference of two floating-point numbers to zero if both + // numbers are NaN or zero. + if ((std::isnan(t[0]) && std::isnan(f0)) || (t[0] == 0 && f0 == 0)) { + df0 = 0; + } + + if ((std::isnan(t[1]) && std::isnan(f1)) || (t[1] == 0 && f1 == 0)) { + df1 = 0; + } + + if ((std::isnan(t[2]) && std::isnan(f2)) || (t[2] == 0 && f2 == 0)) { + df2 = 0; + } + + if ((std::isnan(t[3]) && std::isnan(f3)) || (t[3] == 0 && f3 == 0)) { + df3 = 0; + } + ASSERT_RETURN(df0 < epsilon); ASSERT_RETURN(df1 < epsilon); ASSERT_RETURN(df2 < epsilon); @@ -336,19 +357,23 @@ result_t validateFloatError(__m128 a, float df2 = fabsf((t[2] - f2) / f2); float df3 = fabsf((t[3] - f3) / f3); - if (std::isnan(t[0]) && std::isnan(f0)) { + if ((std::isnan(t[0]) && std::isnan(f0)) || (t[0] == 0 && f0 == 0) || + (std::isinf(t[0]) && std::isinf(f0))) { df0 = 0; } - if (std::isnan(t[1]) && std::isnan(f1)) { + if ((std::isnan(t[1]) && std::isnan(f1)) || (t[1] == 0 && f1 == 0) || + (std::isinf(t[1]) && std::isinf(f1))) { df1 = 0; } - if (std::isnan(t[2]) && std::isnan(f2)) { + if ((std::isnan(t[2]) && std::isnan(f2)) || (t[2] == 0 && f2 == 0) || + (std::isinf(t[2]) && std::isinf(f2))) { df2 = 0; } - if (std::isnan(t[3]) && std::isnan(f3)) { + if ((std::isnan(t[3]) && std::isnan(f3)) || (t[3] == 0 && f3 == 0) || + (std::isinf(t[3]) && std::isinf(f3))) { df3 = 0; } diff --git a/external/sse2neon/tests/common.h b/external/sse2neon/tests/common.h index caadee72..163d4e68 100644 --- a/external/sse2neon/tests/common.h +++ b/external/sse2neon/tests/common.h @@ -1,15 +1,26 @@ #ifndef SSE2NEONCOMMON_H #define SSE2NEONCOMMON_H #include -#if defined(__aarch64__) || defined(__arm__) +#if (defined(__aarch64__) || defined(_M_ARM64)) || defined(__arm__) #include "sse2neon.h" #elif defined(__x86_64__) || defined(__i386__) #include #include #include #include +#include #include +// __int64 is defined in the Intrinsics Guide which maps to different datatype +// in different data model +#if !(defined(_WIN32) || defined(_WIN64) || defined(__int64)) +#if (defined(__x86_64__) || defined(__i386__)) +#define __int64 long long +#else +#define __int64 int64_t +#endif +#endif + #if defined(__GNUC__) || defined(__clang__) #pragma push_macro("ALIGN_STRUCT") #define ALIGN_STRUCT(x) __attribute__((aligned(x))) @@ -32,6 +43,12 @@ typedef union ALIGN_STRUCT(16) SIMDVec { #if defined(__GNUC__) || defined(__clang__) #pragma pop_macro("ALIGN_STRUCT") #endif + +/* Tunable testing configuration for precise testing */ +/* _mm_min|max_ps|ss|pd|sd */ +#ifndef SSE2NEON_PRECISE_MINMAX +#define SSE2NEON_PRECISE_MINMAX (0) +#endif #endif #define ASSERT_RETURN(x) \ @@ -50,22 +67,6 @@ extern int64_t NaN64; #define ALL_BIT_1_32 (*(float *) &NaN) #define ALL_BIT_1_64 (*(double *) &NaN64) -inline float getNAN(void) -{ - const float *fn = (const float *) 
&NaN; - return *fn; -} - -inline bool isNAN(float a) -{ - const int32_t *ia = (const int32_t *) &a; - return (*ia) == NaN ? true : false; -} -inline bool isNAN(double a) -{ - const int64_t *ia = (const int64_t *) &a; - return (*ia) == NaN64 ? true : false; -} template result_t validate128(T a, T b) { @@ -187,6 +188,303 @@ result_t validateFloatError(__m128 a, float err); result_t validateDouble(__m128d a, double d0, double d1); result_t validateFloatError(__m128d a, double d0, double d1, double err); + +#define VALIDATE_INT8_M128(A, B) \ + validateInt8(A, B[0], B[1], B[2], B[3], B[4], B[5], B[6], B[7], B[8], \ + B[9], B[10], B[11], B[12], B[13], B[14], B[15]) +#define VALIDATE_UINT8_M128(A, B) \ + validateUInt8(A, B[0], B[1], B[2], B[3], B[4], B[5], B[6], B[7], B[8], \ + B[9], B[10], B[11], B[12], B[13], B[14], B[15]) +#define VALIDATE_INT16_M128(A, B) \ + validateInt16(A, B[0], B[1], B[2], B[3], B[4], B[5], B[6], B[7]) +#define VALIDATE_UINT16_M128(A, B) \ + validateUInt16(A, B[0], B[1], B[2], B[3], B[4], B[5], B[6], B[7]) +#define VALIDATE_INT32_M128(A, B) validateInt32(A, B[0], B[1], B[2], B[3]) +#define VALIDATE_UINT32_M128(A, B) validateUInt32(A, B[0], B[1], B[2], B[3]) + +#define VALIDATE_INT8_M64(A, B) \ + validateInt8(A, B[0], B[1], B[2], B[3], B[4], B[5], B[6], B[7]) +#define VALIDATE_UINT8_M64(A, B) \ + validateUInt8(A, B[0], B[1], B[2], B[3], B[4], B[5], B[6], B[7]) +#define VALIDATE_INT16_M64(A, B) validateInt16(A, B[0], B[1], B[2], B[3]) +#define VALIDATE_UINT16_M64(A, B) validateUInt16(A, B[0], B[1], B[2], B[3]) +#define VALIDATE_INT32_M64(A, B) validateInt32(A, B[0], B[1]) +#define VALIDATE_UINT32_M64(A, B) validateUInt32(A, B[0], B[1]) +#define CHECK_RESULT(EXP) \ + if (EXP != TEST_SUCCESS) { \ + return TEST_FAIL; \ + } +#define IMM_2_ITER \ + TEST_IMPL(0) \ + TEST_IMPL(1) +#define IMM_4_ITER \ + IMM_2_ITER \ + TEST_IMPL(2) \ + TEST_IMPL(3) +#define IMM_8_ITER \ + IMM_4_ITER \ + TEST_IMPL(4) \ + TEST_IMPL(5) \ + TEST_IMPL(6) \ + TEST_IMPL(7) +#define IMM_16_ITER \ + IMM_8_ITER \ + TEST_IMPL(8) \ + TEST_IMPL(9) \ + TEST_IMPL(10) \ + TEST_IMPL(11) \ + TEST_IMPL(12) \ + TEST_IMPL(13) \ + TEST_IMPL(14) \ + TEST_IMPL(15) +#define IMM_32_ITER \ + IMM_16_ITER \ + TEST_IMPL(16) \ + TEST_IMPL(17) \ + TEST_IMPL(18) \ + TEST_IMPL(19) \ + TEST_IMPL(20) \ + TEST_IMPL(21) \ + TEST_IMPL(22) \ + TEST_IMPL(23) \ + TEST_IMPL(24) \ + TEST_IMPL(25) \ + TEST_IMPL(26) \ + TEST_IMPL(27) \ + TEST_IMPL(28) \ + TEST_IMPL(29) \ + TEST_IMPL(30) \ + TEST_IMPL(31) +#define IMM_64_ITER \ + IMM_32_ITER \ + TEST_IMPL(32) \ + TEST_IMPL(33) \ + TEST_IMPL(34) \ + TEST_IMPL(35) \ + TEST_IMPL(36) \ + TEST_IMPL(37) \ + TEST_IMPL(38) \ + TEST_IMPL(39) \ + TEST_IMPL(40) \ + TEST_IMPL(41) \ + TEST_IMPL(42) \ + TEST_IMPL(43) \ + TEST_IMPL(44) \ + TEST_IMPL(45) \ + TEST_IMPL(46) \ + TEST_IMPL(47) \ + TEST_IMPL(48) \ + TEST_IMPL(49) \ + TEST_IMPL(50) \ + TEST_IMPL(51) \ + TEST_IMPL(52) \ + TEST_IMPL(53) \ + TEST_IMPL(54) \ + TEST_IMPL(55) \ + TEST_IMPL(56) \ + TEST_IMPL(57) \ + TEST_IMPL(58) \ + TEST_IMPL(59) \ + TEST_IMPL(60) \ + TEST_IMPL(61) \ + TEST_IMPL(62) \ + TEST_IMPL(63) +#define IMM_128_ITER \ + IMM_64_ITER \ + TEST_IMPL(64) \ + TEST_IMPL(65) \ + TEST_IMPL(66) \ + TEST_IMPL(67) \ + TEST_IMPL(68) \ + TEST_IMPL(69) \ + TEST_IMPL(70) \ + TEST_IMPL(71) \ + TEST_IMPL(72) \ + TEST_IMPL(73) \ + TEST_IMPL(74) \ + TEST_IMPL(75) \ + TEST_IMPL(76) \ + TEST_IMPL(77) \ + TEST_IMPL(78) \ + TEST_IMPL(79) \ + TEST_IMPL(80) \ + TEST_IMPL(81) \ + TEST_IMPL(82) \ + TEST_IMPL(83) \ + TEST_IMPL(84) \ + TEST_IMPL(85) \ + 
TEST_IMPL(86) \ + TEST_IMPL(87) \ + TEST_IMPL(88) \ + TEST_IMPL(89) \ + TEST_IMPL(90) \ + TEST_IMPL(91) \ + TEST_IMPL(92) \ + TEST_IMPL(93) \ + TEST_IMPL(94) \ + TEST_IMPL(95) \ + TEST_IMPL(96) \ + TEST_IMPL(97) \ + TEST_IMPL(98) \ + TEST_IMPL(99) \ + TEST_IMPL(100) \ + TEST_IMPL(101) \ + TEST_IMPL(102) \ + TEST_IMPL(103) \ + TEST_IMPL(104) \ + TEST_IMPL(105) \ + TEST_IMPL(106) \ + TEST_IMPL(107) \ + TEST_IMPL(108) \ + TEST_IMPL(109) \ + TEST_IMPL(110) \ + TEST_IMPL(111) \ + TEST_IMPL(112) \ + TEST_IMPL(113) \ + TEST_IMPL(114) \ + TEST_IMPL(115) \ + TEST_IMPL(116) \ + TEST_IMPL(117) \ + TEST_IMPL(118) \ + TEST_IMPL(119) \ + TEST_IMPL(120) \ + TEST_IMPL(121) \ + TEST_IMPL(122) \ + TEST_IMPL(123) \ + TEST_IMPL(124) \ + TEST_IMPL(125) \ + TEST_IMPL(126) \ + TEST_IMPL(127) +#define IMM_256_ITER \ + IMM_128_ITER \ + TEST_IMPL(128) \ + TEST_IMPL(129) \ + TEST_IMPL(130) \ + TEST_IMPL(131) \ + TEST_IMPL(132) \ + TEST_IMPL(133) \ + TEST_IMPL(134) \ + TEST_IMPL(135) \ + TEST_IMPL(136) \ + TEST_IMPL(137) \ + TEST_IMPL(138) \ + TEST_IMPL(139) \ + TEST_IMPL(140) \ + TEST_IMPL(141) \ + TEST_IMPL(142) \ + TEST_IMPL(143) \ + TEST_IMPL(144) \ + TEST_IMPL(145) \ + TEST_IMPL(146) \ + TEST_IMPL(147) \ + TEST_IMPL(148) \ + TEST_IMPL(149) \ + TEST_IMPL(150) \ + TEST_IMPL(151) \ + TEST_IMPL(152) \ + TEST_IMPL(153) \ + TEST_IMPL(154) \ + TEST_IMPL(155) \ + TEST_IMPL(156) \ + TEST_IMPL(157) \ + TEST_IMPL(158) \ + TEST_IMPL(159) \ + TEST_IMPL(160) \ + TEST_IMPL(161) \ + TEST_IMPL(162) \ + TEST_IMPL(163) \ + TEST_IMPL(164) \ + TEST_IMPL(165) \ + TEST_IMPL(166) \ + TEST_IMPL(167) \ + TEST_IMPL(168) \ + TEST_IMPL(169) \ + TEST_IMPL(170) \ + TEST_IMPL(171) \ + TEST_IMPL(172) \ + TEST_IMPL(173) \ + TEST_IMPL(174) \ + TEST_IMPL(175) \ + TEST_IMPL(176) \ + TEST_IMPL(177) \ + TEST_IMPL(178) \ + TEST_IMPL(179) \ + TEST_IMPL(180) \ + TEST_IMPL(181) \ + TEST_IMPL(182) \ + TEST_IMPL(183) \ + TEST_IMPL(184) \ + TEST_IMPL(185) \ + TEST_IMPL(186) \ + TEST_IMPL(187) \ + TEST_IMPL(188) \ + TEST_IMPL(189) \ + TEST_IMPL(190) \ + TEST_IMPL(191) \ + TEST_IMPL(192) \ + TEST_IMPL(193) \ + TEST_IMPL(194) \ + TEST_IMPL(195) \ + TEST_IMPL(196) \ + TEST_IMPL(197) \ + TEST_IMPL(198) \ + TEST_IMPL(199) \ + TEST_IMPL(200) \ + TEST_IMPL(201) \ + TEST_IMPL(202) \ + TEST_IMPL(203) \ + TEST_IMPL(204) \ + TEST_IMPL(205) \ + TEST_IMPL(206) \ + TEST_IMPL(207) \ + TEST_IMPL(208) \ + TEST_IMPL(209) \ + TEST_IMPL(210) \ + TEST_IMPL(211) \ + TEST_IMPL(212) \ + TEST_IMPL(213) \ + TEST_IMPL(214) \ + TEST_IMPL(215) \ + TEST_IMPL(216) \ + TEST_IMPL(217) \ + TEST_IMPL(218) \ + TEST_IMPL(219) \ + TEST_IMPL(220) \ + TEST_IMPL(221) \ + TEST_IMPL(222) \ + TEST_IMPL(223) \ + TEST_IMPL(224) \ + TEST_IMPL(225) \ + TEST_IMPL(226) \ + TEST_IMPL(227) \ + TEST_IMPL(228) \ + TEST_IMPL(229) \ + TEST_IMPL(230) \ + TEST_IMPL(231) \ + TEST_IMPL(232) \ + TEST_IMPL(233) \ + TEST_IMPL(234) \ + TEST_IMPL(235) \ + TEST_IMPL(236) \ + TEST_IMPL(237) \ + TEST_IMPL(238) \ + TEST_IMPL(239) \ + TEST_IMPL(240) \ + TEST_IMPL(241) \ + TEST_IMPL(242) \ + TEST_IMPL(243) \ + TEST_IMPL(244) \ + TEST_IMPL(245) \ + TEST_IMPL(246) \ + TEST_IMPL(247) \ + TEST_IMPL(248) \ + TEST_IMPL(249) \ + TEST_IMPL(250) \ + TEST_IMPL(251) \ + TEST_IMPL(252) \ + TEST_IMPL(253) \ + TEST_IMPL(254) \ + TEST_IMPL(255) } // namespace SSE2NEON #endif diff --git a/external/sse2neon/tests/impl.cpp b/external/sse2neon/tests/impl.cpp index 841bfaf1..74330f5c 100644 --- a/external/sse2neon/tests/impl.cpp +++ b/external/sse2neon/tests/impl.cpp @@ -1,21 +1,50 @@ -#include "impl.h" #include #include +#include #include 
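The `IMM_*_ITER` ladders above exist because many of the intrinsics under test take their immediate operand as a compile-time constant, so a test body has to be stamped out once per legal immediate rather than looped over at run time. Below is a minimal usage sketch; the helper name and the choice of `_mm_srai_epi16` are illustrative only, and it leans on the `load_m128i`, `CHECK_RESULT` and `VALIDATE_INT16_M128` helpers introduced elsewhere in this patch.

```c
/* Hypothetical usage sketch (not part of this patch): run one check per
 * immediate 0..7 by expanding TEST_IMPL through IMM_8_ITER, then undefine
 * it so the next test can reuse the name. The reference result assumes
 * arithmetic >> for negative inputs, as mainstream compilers provide. */
static result_t check_srai_epi16_all_imms(const int16_t *_a)
{
    __m128i a = load_m128i(_a);
#define TEST_IMPL(IDX)                                       \
    do {                                                     \
        int16_t d##IDX[8];                                   \
        for (int i = 0; i < 8; i++)                          \
            d##IDX[i] = (int16_t) (_a[i] >> (IDX));          \
        __m128i ret##IDX = _mm_srai_epi16(a, IDX);           \
        CHECK_RESULT(VALIDATE_INT16_M128(ret##IDX, d##IDX)); \
    } while (0);
    IMM_8_ITER
#undef TEST_IMPL
    return TEST_SUCCESS;
}
```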
+#include #include #include #include #include #include + #include "binding.h" +#include "impl.h" // Try 10,000 random floating point values for each test we run #define MAX_TEST_VALUE 10000 +/* Pattern Matching for C macros. + * https://github.com/pfultz2/Cloak/wiki/C-Preprocessor-tricks,-tips,-and-idioms + */ + +/* catenate */ +#define PRIMITIVE_CAT(a, ...) a##__VA_ARGS__ + +#define IIF(c) PRIMITIVE_CAT(IIF_, c) +/* run the 2nd parameter */ +#define IIF_0(t, ...) __VA_ARGS__ +/* run the 1st parameter */ +#define IIF_1(t, ...) t + +// Some intrinsics operate on unaligned data types. +#if defined(__GNUC__) || defined(__clang__) +#define ALIGN_STRUCT(x) __attribute__((aligned(x))) +#elif defined(_MSC_VER) +#ifndef ALIGN_STRUCT +#define ALIGN_STRUCT(x) __declspec(align(x)) +#endif +#endif + +typedef int16_t ALIGN_STRUCT(1) unaligned_int16_t; +typedef int32_t ALIGN_STRUCT(1) unaligned_int32_t; +typedef int64_t ALIGN_STRUCT(1) unaligned_int64_t; + // This program a set of unit tests to ensure that each SSE call provide the // output we expect. If this fires an assert, then something didn't match up. // -// Functions with `test_` prefix will be called in runSingleTest. +// Functions with "test_" prefix will be called in runSingleTest. namespace SSE2NEON { // Forward declaration @@ -33,6 +62,10 @@ class SSE2NEONTestImpl : public SSE2NEONTest int32_t *mTestIntPointer2; float mTestFloats[MAX_TEST_VALUE]; int32_t mTestInts[MAX_TEST_VALUE]; + int8_t mTestUnalignedInts[32] = { + 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, + 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, + }; virtual ~SSE2NEONTestImpl(void) { @@ -62,6 +95,27 @@ class SSE2NEONTestImpl : public SSE2NEONTest mTestFloatPointer1[2] = 1.0f / mTestFloatPointer1[2]; mTestFloatPointer1[3] = 1.0f / mTestFloatPointer1[3]; } + if (test == it_mm_rcp_ps || test == it_mm_rcp_ss || + test == it_mm_rsqrt_ps || test == it_mm_rsqrt_ss) { + if ((rand() & 3) == 0) { + uint32_t r1 = rand() & 3; + uint32_t r2 = rand() & 3; + uint32_t r3 = rand() & 3; + uint32_t r4 = rand() & 3; + uint32_t r5 = rand() & 3; + uint32_t r6 = rand() & 3; + uint32_t r7 = rand() & 3; + uint32_t r8 = rand() & 3; + mTestFloatPointer1[r1] = 0.0f; + mTestFloatPointer1[r2] = 0.0f; + mTestFloatPointer1[r3] = 0.0f; + mTestFloatPointer1[r4] = 0.0f; + mTestFloatPointer1[r5] = -0.0f; + mTestFloatPointer1[r6] = -0.0f; + mTestFloatPointer1[r7] = -0.0f; + mTestFloatPointer1[r8] = -0.0f; + } + } if (test == it_mm_cmpge_ps || test == it_mm_cmpge_ss || test == it_mm_cmple_ps || test == it_mm_cmple_ss || test == it_mm_cmpeq_ps || test == it_mm_cmpeq_ss) { @@ -69,20 +123,111 @@ class SSE2NEONTestImpl : public SSE2NEONTest mTestFloatPointer1[3] = mTestFloatPointer2[3]; } - if (test == it_mm_cmpord_ps || test == it_mm_comilt_ss || - test == it_mm_comile_ss || test == it_mm_comige_ss || - test == it_mm_comieq_ss || test == it_mm_comineq_ss || - test == it_mm_comigt_ss) { // if testing for NAN's make sure we - // have some nans - // One out of four times - // Make sure a couple of values have NANs for testing purposes + if (test == it_mm_cmpord_ps || test == it_mm_cmpord_ss || + test == it_mm_cmpunord_ps || test == it_mm_cmpunord_ss || + test == it_mm_cmpeq_ps || test == it_mm_cmpeq_ss || + test == it_mm_cmpge_ps || test == it_mm_cmpge_ss || + test == it_mm_cmpgt_ps || test == it_mm_cmpgt_ss || + test == it_mm_cmple_ps || test == it_mm_cmple_ss || + test == it_mm_cmplt_ps || test == it_mm_cmplt_ss || + test == it_mm_cmpneq_ps || test == it_mm_cmpneq_ss || + test == 
it_mm_cmpnge_ps || test == it_mm_cmpnge_ss || + test == it_mm_cmpngt_ps || test == it_mm_cmpngt_ss || + test == it_mm_cmpnle_ps || test == it_mm_cmpnle_ss || + test == it_mm_cmpnlt_ps || test == it_mm_cmpnlt_ss || + test == it_mm_comieq_ss || test == it_mm_ucomieq_ss || + test == it_mm_comige_ss || test == it_mm_ucomige_ss || + test == it_mm_comigt_ss || test == it_mm_ucomigt_ss || + test == it_mm_comile_ss || test == it_mm_ucomile_ss || + test == it_mm_comilt_ss || test == it_mm_ucomilt_ss || + test == it_mm_comineq_ss || test == it_mm_ucomineq_ss) { + // Make sure the NaN values are included in the testing + // one out of four times. + if ((rand() & 3) == 0) { + uint32_t r1 = rand() & 3; + uint32_t r2 = rand() & 3; + mTestFloatPointer1[r1] = nanf(""); + mTestFloatPointer2[r2] = nanf(""); + } + } + + if (test == it_mm_cmpord_pd || test == it_mm_cmpord_sd || + test == it_mm_cmpunord_pd || test == it_mm_cmpunord_sd || + test == it_mm_cmpeq_pd || test == it_mm_cmpeq_sd || + test == it_mm_cmpge_pd || test == it_mm_cmpge_sd || + test == it_mm_cmpgt_pd || test == it_mm_cmpgt_sd || + test == it_mm_cmple_pd || test == it_mm_cmple_sd || + test == it_mm_cmplt_pd || test == it_mm_cmplt_sd || + test == it_mm_cmpneq_pd || test == it_mm_cmpneq_sd || + test == it_mm_cmpnge_pd || test == it_mm_cmpnge_sd || + test == it_mm_cmpngt_pd || test == it_mm_cmpngt_sd || + test == it_mm_cmpnle_pd || test == it_mm_cmpnle_sd || + test == it_mm_cmpnlt_pd || test == it_mm_cmpnlt_sd || + test == it_mm_comieq_sd || test == it_mm_ucomieq_sd || + test == it_mm_comige_sd || test == it_mm_ucomige_sd || + test == it_mm_comigt_sd || test == it_mm_ucomigt_sd || + test == it_mm_comile_sd || test == it_mm_ucomile_sd || + test == it_mm_comilt_sd || test == it_mm_ucomilt_sd || + test == it_mm_comineq_sd || test == it_mm_ucomineq_sd) { + // Make sure the NaN values are included in the testing + // one out of four times. + if ((rand() & 3) == 0) { + // FIXME: + // The argument "0xFFFFFFFFFFFF" is a tricky workaround to + // set the NaN value for doubles. The code is not intuitive + // and should be fixed in the future. + uint32_t r1 = ((rand() & 1) << 1) + 1; + uint32_t r2 = ((rand() & 1) << 1) + 1; + mTestFloatPointer1[r1] = nanf("0xFFFFFFFFFFFF"); + mTestFloatPointer2[r2] = nanf("0xFFFFFFFFFFFF"); + } + } + + if (test == it_mm_max_pd || test == it_mm_max_sd || + test == it_mm_min_pd || test == it_mm_min_sd) { + // Make sure the positive/negative inifinity values are included + // in the testing one out of four times. + if ((rand() & 3) == 0) { + uint32_t r1 = ((rand() & 1) << 1) + 1; + uint32_t r2 = ((rand() & 1) << 1) + 1; + uint32_t r3 = ((rand() & 1) << 1) + 1; + uint32_t r4 = ((rand() & 1) << 1) + 1; + mTestFloatPointer1[r1] = INFINITY; + mTestFloatPointer2[r2] = INFINITY; + mTestFloatPointer1[r3] = -INFINITY; + mTestFloatPointer1[r4] = -INFINITY; + } + } + +#if SSE2NEON_PRECISE_MINMAX + if (test == it_mm_max_ps || test == it_mm_max_ss || + test == it_mm_min_ps || test == it_mm_min_ss) { + // Make sure the NaN values are included in the testing + // one out of four times. if ((rand() & 3) == 0) { uint32_t r1 = rand() & 3; uint32_t r2 = rand() & 3; - mTestFloatPointer1[r1] = getNAN(); - mTestFloatPointer2[r2] = getNAN(); + mTestFloatPointer1[r1] = nanf(""); + mTestFloatPointer2[r2] = nanf(""); + } + } + + if (test == it_mm_max_pd || test == it_mm_max_sd || + test == it_mm_min_pd || test == it_mm_min_sd) { + // Make sure the NaN values are included in the testing + // one out of four times. 
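Two details of the set-up code above are easy to miss. `((rand() & 1) << 1) + 1` always evaluates to 1 or 3, that is, an odd float lane, which on a little-endian target is the upper 32-bit word of one of the two doubles overlaying the same 128-bit test buffer. The `nanf("0xFFFFFFFFFFFF")` workaround flagged by the FIXME matters because a default quiet float NaN (bit pattern 0x7FC00000) written into that upper word does not set all eleven exponent bits of the overlaid double, so the double would not read back as NaN. The standalone sketch below shows the intended bit layout; the 0x7FFFFFFF pattern is an assumption about what the C library produces for that payload string, not something the patch guarantees.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Four float lanes overlaying two double lanes (little-endian). */
    float lanes[4] = {0.0f, 0.0f, 0.0f, 0.0f};

    /* Hoped-for result of nanf("0xFFFFFFFFFFFF"): exponent and mantissa
     * bits all set. Written into odd lane 1 = high word of double 0. */
    uint32_t nan_bits = 0x7FFFFFFFu;
    memcpy(&lanes[1], &nan_bits, sizeof(nan_bits));

    double d0;
    memcpy(&d0, &lanes[0], sizeof(d0));
    printf("double lane 0 is NaN: %d\n", d0 != d0); /* prints 1 */

    /* A plain quiet float NaN in the same word is not enough: */
    uint32_t qnan_bits = 0x7FC00000u;
    memcpy(&lanes[1], &qnan_bits, sizeof(qnan_bits));
    memcpy(&d0, &lanes[0], sizeof(d0));
    printf("with 0x7FC00000 instead:  %d\n", d0 != d0); /* prints 0 */
    return 0;
}
```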
+ if ((rand() & 3) == 0) { + // FIXME: + // The argument "0xFFFFFFFFFFFF" is a tricky workaround to + // set the NaN value for doubles. The code is not intuitive + // and should be fixed in the future. + uint32_t r1 = ((rand() & 1) << 1) + 1; + uint32_t r2 = ((rand() & 1) << 1) + 1; + mTestFloatPointer1[r1] = nanf("0xFFFFFFFFFFFF"); + mTestFloatPointer2[r2] = nanf("0xFFFFFFFFFFFF"); } } +#endif // one out of every random 64 times or so, mix up the test floats to // contain some integer values @@ -136,16 +281,7 @@ class SSE2NEONTestImpl : public SSE2NEONTest } } } -#if 0 - { - mTestFloatPointer1[0] = getNAN(); - mTestFloatPointer2[0] = getNAN(); - result_t ok = test_mm_comilt_ss(mTestFloatPointer1, mTestFloatPointer1); - if (ok == TEST_FAIL) { - printf("Debug me"); - } - } -#endif + ret = runSingleTest(test, i); if (ret == TEST_FAIL) // the test failed... { @@ -159,7 +295,11 @@ class SSE2NEONTestImpl : public SSE2NEONTest } }; -const char *instructionString[] = {INTRIN_FOREACH(STR)}; +const char *instructionString[] = { +#define _(x) #x, + INTRIN_LIST +#undef _ +}; // Produce rounding which is the same as SSE instructions with _MM_ROUND_NEAREST // rounding mode @@ -227,10 +367,27 @@ static inline double bankersRounding(double val) return ret; } -static float ranf(void) +// SplitMix64 PRNG by Sebastiano Vigna, see: +// +static uint64_t state; // the state of SplitMix64 PRNG +const double TWOPOWER64 = pow(2, 64); + +#define SSE2NEON_INIT_RNG(seed) \ + do { \ + state = seed; \ + } while (0) + +static double next() +{ + uint64_t z = (state += 0x9e3779b97f4a7c15); + z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9; + z = (z ^ (z >> 27)) * 0x94d049bb133111eb; + return (double) (z ^ (z >> 31)); +} + +static float ranf() { - uint32_t ir = rand() & 0x7FFF; - return (float) ir * (1.0f / 32768.0f); + return (float) (next() / TWOPOWER64); } static float ranf(float low, float high) @@ -243,8 +400,8 @@ result_t test_mm_slli_si128(const SSE2NEONTestImpl &impl, uint32_t iter); result_t test_mm_srli_si128(const SSE2NEONTestImpl &impl, uint32_t iter); result_t test_mm_shuffle_pi16(const SSE2NEONTestImpl &impl, uint32_t iter); -// This function is not called from `runSingleTest`, but for other intrinsic -// tests that might need to call `_mm_set_epi32`. +// This function is not called from "runSingleTest", but for other intrinsic +// tests that might need to call "_mm_set_epi32". __m128i do_mm_set_epi32(int32_t x, int32_t y, int32_t z, int32_t w) { __m128i a = _mm_set_epi32(x, y, z, w); @@ -252,45 +409,42 @@ __m128i do_mm_set_epi32(int32_t x, int32_t y, int32_t z, int32_t w) return a; } -// This function is not called from `runSingleTest`, but for other intrinsic +// This function is not called from "runSingleTest", but for other intrinsic // tests that might need to load __m64 data. -__m64 do_mm_load_m64(const int64_t *p) +template +__m64 load_m64(const T *p) { - __m64 a = *((const __m64 *) p); - validateInt64(a, p[0]); - return a; + return *((const __m64 *) p); } -// This function is not called from `runSingleTest`, but for other intrinsic -// tests that might need to call `_mm_load_ps`. -__m128 do_mm_load_ps(const float *p) +// This function is not called from "runSingleTest", but for other intrinsic +// tests that might need to call "_mm_load_ps". 
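The `rand()`-based `ranf()` is replaced above with a SplitMix64 generator: a 64-bit state is advanced by the golden-ratio constant, mixed, and the result is scaled by 1/2^64 to give a uniform value in [0, 1). A standalone sketch of the same construction follows; the `sm64_*` names and the seed are illustrative and not taken from the patch.

```c
#include <stdint.h>
#include <stdio.h>

static uint64_t sm64_state = 42; /* arbitrary demo seed */

/* One SplitMix64 step: advance the state and mix it, as in next() above. */
static uint64_t sm64_next(void)
{
    uint64_t z = (sm64_state += 0x9e3779b97f4a7c15ULL);
    z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
    z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
    return z ^ (z >> 31);
}

/* Scale the 64-bit output by 1/2^64, mirroring ranf() above. */
static float sm64_unit(void)
{
    return (float) ((double) sm64_next() / 18446744073709551616.0);
}

int main(void)
{
    for (int i = 0; i < 4; i++)
        printf("%f\n", sm64_unit()); /* reproducible values in [0, 1) */
    return 0;
}
```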
+template +__m128 load_m128(const T *p) { - __m128 a = _mm_load_ps(p); - validateFloat(a, p[0], p[1], p[2], p[3]); - return a; + return _mm_loadu_ps((const float *) p); } -// This function is not called from `runSingleTest`, but for other intrinsic -// tests that might need to call `_mm_load_ps`. -__m128i do_mm_load_ps(const int32_t *p) +// This function is not called from "runSingleTest", but for other intrinsic +// tests that might need to call "_mm_load_ps". +template +__m128i load_m128i(const T *p) { - __m128 a = _mm_load_ps((const float *) p); + __m128 a = _mm_loadu_ps((const float *) p); __m128i ia = *(const __m128i *) &a; - validateInt32(ia, p[0], p[1], p[2], p[3]); return ia; } -// This function is not called from `runSingleTest`, but for other intrinsic -// tests that might need to call `_mm_load_pd`. -__m128d do_mm_load_pd(const double *p) +// This function is not called from "runSingleTest", but for other intrinsic +// tests that might need to call "_mm_load_pd". +template +__m128d load_m128d(const T *p) { - __m128d a = _mm_load_pd(p); - validateDouble(a, p[0], p[1]); - return a; + return _mm_loadu_pd((const double *) p); } -// This function is not called from `runSingleTest`, but for other intrinsic -// tests that might need to call `_mm_store_ps`. +// This function is not called from "runSingleTest", but for other intrinsic +// tests that might need to call "_mm_store_ps". result_t do_mm_store_ps(float *p, float x, float y, float z, float w) { __m128 a = _mm_set_ps(x, y, z, w); @@ -302,8 +456,8 @@ result_t do_mm_store_ps(float *p, float x, float y, float z, float w) return TEST_SUCCESS; } -// This function is not called from `runSingleTest`, but for other intrinsic -// tests that might need to call `_mm_store_ps`. +// This function is not called from "runSingleTest", but for other intrinsic +// tests that might need to call "_mm_store_ps". result_t do_mm_store_ps(int32_t *p, int32_t x, int32_t y, int32_t z, int32_t w) { __m128i a = _mm_set_epi32(x, y, z, w); @@ -315,118 +469,66 @@ result_t do_mm_store_ps(int32_t *p, int32_t x, int32_t y, int32_t z, int32_t w) return TEST_SUCCESS; } -float compord(float a, float b) +float cmp_noNaN(float a, float b) { - float ret; + return (!isnan(a) && !isnan(b)) ? ALL_BIT_1_32 : 0.0f; +} - bool isNANA = isNAN(a); - bool isNANB = isNAN(b); - ret = (!isNANA && !isNANB) ? getNAN() : 0.0f; - return ret; +double cmp_noNaN(double a, double b) +{ + return (!isnan(a) && !isnan(b)) ? ALL_BIT_1_64 : 0.0f; } -double compord(double a, double b) +float cmp_hasNaN(float a, float b) { - double ret; + return (isnan(a) || isnan(b)) ? ALL_BIT_1_32 : 0.0f; +} - bool isNANA = isNAN(a); - bool isNANB = isNAN(b); - ret = (!isNANA && !isNANB) ? getNAN() : 0.0f; - return ret; +double cmp_hasNaN(double a, double b) +{ + return (isnan(a) || isnan(b)) ? ALL_BIT_1_64 : 0.0f; } int32_t comilt_ss(float a, float b) { - int32_t ret; - - bool isNANA = isNAN(a); - bool isNANB = isNAN(b); - if (!isNANA && !isNANB) { - ret = a < b ? 1 : 0; - } else { - ret = 0; // **NOTE** The documentation on MSDN is in error! The actual - // hardware returns a 0, not a 1 if either of the values is a - // NAN! - } - return ret; + if (isnan(a) || isnan(b)) + return 0; + return (a < b); } int32_t comigt_ss(float a, float b) { - int32_t ret; - - bool isNANA = isNAN(a); - bool isNANB = isNAN(b); - if (!isNANA && !isNANB) { - ret = a > b ? 1 : 0; - } else { - ret = 0; // **NOTE** The documentation on MSDN is in error! 
The actual - // hardware returns a 0, not a 1 if either of the values is a - // NAN! - } - return ret; + if (isnan(a) || isnan(b)) + return 0; + return (a > b); } int32_t comile_ss(float a, float b) { - int32_t ret; - - bool isNANA = isNAN(a); - bool isNANB = isNAN(b); - if (!isNANA && !isNANB) { - ret = a <= b ? 1 : 0; - } else { - ret = 0; // **NOTE** The documentation on MSDN is in error! The actual - // hardware returns a 0, not a 1 if either of the values is a - // NAN! - } - return ret; + if (isnan(a) || isnan(b)) + return 0; + return (a <= b); } int32_t comige_ss(float a, float b) { - int32_t ret; - - bool isNANA = isNAN(a); - bool isNANB = isNAN(b); - if (!isNANA && !isNANB) { - ret = a >= b ? 1 : 0; - } else { - ret = 0; // **NOTE** The documentation on MSDN is in error! The actual - // hardware returns a 0, not a 1 if either of the values is a - // NAN! - } - return ret; + if (isnan(a) || isnan(b)) + return 0; + return (a >= b); } int32_t comieq_ss(float a, float b) { - int32_t ret; - - bool isNANA = isNAN(a); - bool isNANB = isNAN(b); - if (!isNANA && !isNANB) { - ret = a == b ? 1 : 0; - } else { - ret = 0; // **NOTE** The documentation on MSDN is in error! The actual - // hardware returns a 0, not a 1 if either of the values is a - // NAN! - } - return ret; + if (isnan(a) || isnan(b)) + return 0; + return (a == b); } int32_t comineq_ss(float a, float b) { - int32_t ret; - - bool isNANA = isNAN(a); - bool isNANB = isNAN(b); - if (!isNANA && !isNANB) { - ret = a != b ? 1 : 0; - } else { - ret = 1; - } - return ret; + if (isnan(a) || isnan(b)) + return 1; + return (a != b); } static inline int16_t saturate_16(int32_t a) @@ -445,7 +547,7 @@ uint32_t canonical_crc32_u8(uint32_t crc, uint8_t v) crc ^= v; for (int bit = 0; bit < 8; bit++) { if (crc & 1) - crc = (crc >> 1) ^ uint32_t(0x82f63b78); + crc = (crc >> 1) ^ UINT32_C(0x82f63b78); else crc = (crc >> 1); } @@ -468,8 +570,8 @@ uint32_t canonical_crc32_u32(uint32_t crc, uint32_t v) uint64_t canonical_crc32_u64(uint64_t crc, uint64_t v) { - crc = canonical_crc32_u32((uint32_t)(crc), v & 0xffffffff); - crc = canonical_crc32_u32((uint32_t)(crc), (v >> 32) & 0xffffffff); + crc = canonical_crc32_u32((uint32_t) (crc), v & 0xffffffff); + crc = canonical_crc32_u32((uint32_t) (crc), (v >> 32) & 0xffffffff); return crc; } @@ -498,7 +600,34 @@ static const uint8_t crypto_aes_sbox[256] = { 0xb0, 0x54, 0xbb, 0x16, }; +static const uint8_t crypto_aes_rsbox[256] = { + 0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38, 0xbf, 0x40, 0xa3, 0x9e, + 0x81, 0xf3, 0xd7, 0xfb, 0x7c, 0xe3, 0x39, 0x82, 0x9b, 0x2f, 0xff, 0x87, + 0x34, 0x8e, 0x43, 0x44, 0xc4, 0xde, 0xe9, 0xcb, 0x54, 0x7b, 0x94, 0x32, + 0xa6, 0xc2, 0x23, 0x3d, 0xee, 0x4c, 0x95, 0x0b, 0x42, 0xfa, 0xc3, 0x4e, + 0x08, 0x2e, 0xa1, 0x66, 0x28, 0xd9, 0x24, 0xb2, 0x76, 0x5b, 0xa2, 0x49, + 0x6d, 0x8b, 0xd1, 0x25, 0x72, 0xf8, 0xf6, 0x64, 0x86, 0x68, 0x98, 0x16, + 0xd4, 0xa4, 0x5c, 0xcc, 0x5d, 0x65, 0xb6, 0x92, 0x6c, 0x70, 0x48, 0x50, + 0xfd, 0xed, 0xb9, 0xda, 0x5e, 0x15, 0x46, 0x57, 0xa7, 0x8d, 0x9d, 0x84, + 0x90, 0xd8, 0xab, 0x00, 0x8c, 0xbc, 0xd3, 0x0a, 0xf7, 0xe4, 0x58, 0x05, + 0xb8, 0xb3, 0x45, 0x06, 0xd0, 0x2c, 0x1e, 0x8f, 0xca, 0x3f, 0x0f, 0x02, + 0xc1, 0xaf, 0xbd, 0x03, 0x01, 0x13, 0x8a, 0x6b, 0x3a, 0x91, 0x11, 0x41, + 0x4f, 0x67, 0xdc, 0xea, 0x97, 0xf2, 0xcf, 0xce, 0xf0, 0xb4, 0xe6, 0x73, + 0x96, 0xac, 0x74, 0x22, 0xe7, 0xad, 0x35, 0x85, 0xe2, 0xf9, 0x37, 0xe8, + 0x1c, 0x75, 0xdf, 0x6e, 0x47, 0xf1, 0x1a, 0x71, 0x1d, 0x29, 0xc5, 0x89, + 0x6f, 0xb7, 0x62, 0x0e, 0xaa, 0x18, 0xbe, 0x1b, 0xfc, 0x56, 
0x3e, 0x4b, + 0xc6, 0xd2, 0x79, 0x20, 0x9a, 0xdb, 0xc0, 0xfe, 0x78, 0xcd, 0x5a, 0xf4, + 0x1f, 0xdd, 0xa8, 0x33, 0x88, 0x07, 0xc7, 0x31, 0xb1, 0x12, 0x10, 0x59, + 0x27, 0x80, 0xec, 0x5f, 0x60, 0x51, 0x7f, 0xa9, 0x19, 0xb5, 0x4a, 0x0d, + 0x2d, 0xe5, 0x7a, 0x9f, 0x93, 0xc9, 0x9c, 0xef, 0xa0, 0xe0, 0x3b, 0x4d, + 0xae, 0x2a, 0xf5, 0xb0, 0xc8, 0xeb, 0xbb, 0x3c, 0x83, 0x53, 0x99, 0x61, + 0x17, 0x2b, 0x04, 0x7e, 0xba, 0x77, 0xd6, 0x26, 0xe1, 0x69, 0x14, 0x63, + 0x55, 0x21, 0x0c, 0x7d, +}; + +// XT is x_time function that muliplies 'x' by 2 in GF(2^8) #define XT(x) (((x) << 1) ^ ((((x) >> 7) & 1) * 0x1b)) + inline __m128i aesenc_128_reference(__m128i a, __m128i b) { uint8_t i, t, u, v[4][4]; @@ -514,6 +643,43 @@ inline __m128i aesenc_128_reference(__m128i a, __m128i b) v[i][2] ^= u ^ XT(v[i][2] ^ v[i][3]); v[i][3] ^= u ^ XT(v[i][3] ^ t); } + + for (i = 0; i < 16; ++i) { + ((SIMDVec *) &a)->m128_u8[i] = + v[i / 4][i % 4] ^ ((SIMDVec *) &b)->m128_u8[i]; + } + + return a; +} + +#define MULTIPLY(x, y) \ + (((y & 1) * x) ^ ((y >> 1 & 1) * XT(x)) ^ ((y >> 2 & 1) * XT(XT(x))) ^ \ + ((y >> 3 & 1) * XT(XT(XT(x)))) ^ ((y >> 4 & 1) * XT(XT(XT(XT(x)))))) + +inline __m128i aesdec_128_reference(__m128i a, __m128i b) +{ + uint8_t i, e, f, g, h, v[4][4]; + for (i = 0; i < 16; ++i) { + v[((i / 4) + (i % 4)) % 4][i % 4] = + crypto_aes_rsbox[((SIMDVec *) &a)->m128_u8[i]]; + } + + for (i = 0; i < 4; ++i) { + e = v[i][0]; + f = v[i][1]; + g = v[i][2]; + h = v[i][3]; + + v[i][0] = MULTIPLY(e, 0x0e) ^ MULTIPLY(f, 0x0b) ^ MULTIPLY(g, 0x0d) ^ + MULTIPLY(h, 0x09); + v[i][1] = MULTIPLY(e, 0x09) ^ MULTIPLY(f, 0x0e) ^ MULTIPLY(g, 0x0b) ^ + MULTIPLY(h, 0x0d); + v[i][2] = MULTIPLY(e, 0x0d) ^ MULTIPLY(f, 0x09) ^ MULTIPLY(g, 0x0e) ^ + MULTIPLY(h, 0x0b); + v[i][3] = MULTIPLY(e, 0x0b) ^ MULTIPLY(f, 0x0d) ^ MULTIPLY(g, 0x09) ^ + MULTIPLY(h, 0x0e); + } + for (i = 0; i < 16; ++i) { ((SIMDVec *) &a)->m128_u8[i] = v[i / 4][i % 4] ^ ((SIMDVec *) &b)->m128_u8[i]; @@ -533,27 +699,12 @@ inline __m128i aesenclast_128_reference(__m128i s, __m128i rk) return s; } -static inline uint32_t sub_word(uint32_t key) -{ - return (crypto_aes_sbox[key >> 24] << 24) | - (crypto_aes_sbox[(key >> 16) & 0xff] << 16) | - (crypto_aes_sbox[(key >> 8) & 0xff] << 8) | - crypto_aes_sbox[key & 0xff]; -} - // Rotates right (circular right shift) value by "amount" positions static inline uint32_t rotr(uint32_t value, uint32_t amount) { return (value >> amount) | (value << ((32 - amount) & 31)); } -inline __m128i aeskeygenassist_128_reference(__m128i a, const int rcon) -{ - const uint32_t X1 = sub_word(_mm_cvtsi128_si32(_mm_shuffle_epi32(a, 0x55))); - const uint32_t X3 = sub_word(_mm_cvtsi128_si32(_mm_shuffle_epi32(a, 0xFF))); - return _mm_set_epi32(rotr(X3, 8) ^ rcon, X3, rotr(X1, 8) ^ rcon, X1); -} - static inline uint64_t MUL(uint32_t a, uint32_t b) { return (uint64_t) a * (uint64_t) b; @@ -644,6 +795,12 @@ static std::pair clmul_64(uint64_t x, uint64_t y) return std::make_pair(xy0, xy1); } +/* MMX */ +result_t test_mm_empty(const SSE2NEONTestImpl &impl, uint32_t iter) +{ + return TEST_SUCCESS; +} + /* SSE */ result_t test_mm_add_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { @@ -654,8 +811,8 @@ result_t test_mm_add_ps(const SSE2NEONTestImpl &impl, uint32_t iter) float dz = _a[2] + _b[2]; float dw = _a[3] + _b[3]; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 c = _mm_add_ps(a, b); return validateFloat(c, dx, dy, dz, dw); } @@ -681,22 +838,23 @@ result_t test_mm_and_ps(const 
SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 c = _mm_and_ps(a, b); // now for the assertion... const uint32_t *ia = (const uint32_t *) &a; const uint32_t *ib = (const uint32_t *) &b; - uint32_t r0 = ia[0] & ib[0]; - uint32_t r1 = ia[1] & ib[1]; - uint32_t r2 = ia[2] & ib[2]; - uint32_t r3 = ia[3] & ib[3]; - __m128i ret = do_mm_set_epi32(r3, r2, r1, r0); - result_t r = validateInt32(*(const __m128i *) &c, r0, r1, r2, r3); - if (r) { - r = validateInt32(ret, r0, r1, r2, r3); + uint32_t r[4]; + r[0] = ia[0] & ib[0]; + r[1] = ia[1] & ib[1]; + r[2] = ia[2] & ib[2]; + r[3] = ia[3] & ib[3]; + __m128i ret = do_mm_set_epi32(r[3], r[2], r[1], r[0]); + result_t res = VALIDATE_INT32_M128(*(const __m128i *) &c, r); + if (res) { + res = VALIDATE_INT32_M128(ret, r); } - return r; + return res; } // r0 := ~a0 & b0 @@ -705,70 +863,73 @@ result_t test_mm_and_ps(const SSE2NEONTestImpl &impl, uint32_t iter) // r3 := ~a3 & b3 result_t test_mm_andnot_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { - result_t r = TEST_FAIL; const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 c = _mm_andnot_ps(a, b); // now for the assertion... const uint32_t *ia = (const uint32_t *) &a; const uint32_t *ib = (const uint32_t *) &b; - uint32_t r0 = ~ia[0] & ib[0]; - uint32_t r1 = ~ia[1] & ib[1]; - uint32_t r2 = ~ia[2] & ib[2]; - uint32_t r3 = ~ia[3] & ib[3]; - __m128i ret = do_mm_set_epi32(r3, r2, r1, r0); - r = validateInt32(*(const __m128i *) &c, r0, r1, r2, r3); - if (r) { - r = validateInt32(ret, r0, r1, r2, r3); + uint32_t r[4]; + r[0] = ~ia[0] & ib[0]; + r[1] = ~ia[1] & ib[1]; + r[2] = ~ia[2] & ib[2]; + r[3] = ~ia[3] & ib[3]; + __m128i ret = do_mm_set_epi32(r[3], r[2], r[1], r[0]); + result_t res = TEST_FAIL; + res = VALIDATE_INT32_M128(*(const __m128i *) &c, r); + if (res) { + res = VALIDATE_INT32_M128(ret, r); } - return r; + return res; } result_t test_mm_avg_pu16(const SSE2NEONTestImpl &impl, uint32_t iter) { const uint16_t *_a = (const uint16_t *) impl.mTestIntPointer1; const uint16_t *_b = (const uint16_t *) impl.mTestIntPointer2; - uint16_t d0 = (_a[0] + _b[0] + 1) >> 1; - uint16_t d1 = (_a[1] + _b[1] + 1) >> 1; - uint16_t d2 = (_a[2] + _b[2] + 1) >> 1; - uint16_t d3 = (_a[3] + _b[3] + 1) >> 1; + uint16_t d[4]; + d[0] = (_a[0] + _b[0] + 1) >> 1; + d[1] = (_a[1] + _b[1] + 1) >> 1; + d[2] = (_a[2] + _b[2] + 1) >> 1; + d[3] = (_a[3] + _b[3] + 1) >> 1; - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m64 c = _mm_avg_pu16(a, b); - return validateUInt16(c, d0, d1, d2, d3); + return VALIDATE_UINT16_M64(c, d); } result_t test_mm_avg_pu8(const SSE2NEONTestImpl &impl, uint32_t iter) { const uint8_t *_a = (const uint8_t *) impl.mTestIntPointer1; const uint8_t *_b = (const uint8_t *) impl.mTestIntPointer2; - uint8_t d0 = (_a[0] + _b[0] + 1) >> 1; - uint8_t d1 = (_a[1] + _b[1] + 1) >> 1; - uint8_t d2 = (_a[2] + _b[2] + 1) >> 1; - uint8_t d3 = (_a[3] + _b[3] + 1) >> 1; - uint8_t d4 = (_a[4] + _b[4] + 1) >> 1; - uint8_t d5 = (_a[5] + _b[5] + 1) >> 1; - uint8_t d6 = (_a[6] + _b[6] + 1) >> 1; - uint8_t d7 = (_a[7] + _b[7] + 1) >> 1; - - __m64 a = do_mm_load_m64((const 
int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + uint8_t d[8]; + d[0] = (_a[0] + _b[0] + 1) >> 1; + d[1] = (_a[1] + _b[1] + 1) >> 1; + d[2] = (_a[2] + _b[2] + 1) >> 1; + d[3] = (_a[3] + _b[3] + 1) >> 1; + d[4] = (_a[4] + _b[4] + 1) >> 1; + d[5] = (_a[5] + _b[5] + 1) >> 1; + d[6] = (_a[6] + _b[6] + 1) >> 1; + d[7] = (_a[7] + _b[7] + 1) >> 1; + + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m64 c = _mm_avg_pu8(a, b); - return validateUInt8(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_UINT8_M64(c, d); } result_t test_mm_cmpeq_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); int32_t result[4]; result[0] = _a[0] == _b[0] ? -1 : 0; @@ -778,15 +939,15 @@ result_t test_mm_cmpeq_ps(const SSE2NEONTestImpl &impl, uint32_t iter) __m128 ret = _mm_cmpeq_ps(a, b); __m128i iret = *(const __m128i *) &ret; - return validateInt32(iret, result[0], result[1], result[2], result[3]); + return VALIDATE_INT32_M128(iret, result); } result_t test_mm_cmpeq_ss(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); float result[4]; result[0] = _a[0] == _b[0] ? ALL_BIT_1_32 : 0; @@ -802,8 +963,8 @@ result_t test_mm_cmpge_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); int32_t result[4]; result[0] = _a[0] >= _b[0] ? -1 : 0; @@ -813,15 +974,15 @@ result_t test_mm_cmpge_ps(const SSE2NEONTestImpl &impl, uint32_t iter) __m128 ret = _mm_cmpge_ps(a, b); __m128i iret = *(const __m128i *) &ret; - return validateInt32(iret, result[0], result[1], result[2], result[3]); + return VALIDATE_INT32_M128(iret, result); } result_t test_mm_cmpge_ss(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); float result[4]; result[0] = _a[0] >= _b[0] ? ALL_BIT_1_32 : 0; @@ -837,8 +998,8 @@ result_t test_mm_cmpgt_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); int32_t result[4]; result[0] = _a[0] > _b[0] ? -1 : 0; @@ -848,15 +1009,15 @@ result_t test_mm_cmpgt_ps(const SSE2NEONTestImpl &impl, uint32_t iter) __m128 ret = _mm_cmpgt_ps(a, b); __m128i iret = *(const __m128i *) &ret; - return validateInt32(iret, result[0], result[1], result[2], result[3]); + return VALIDATE_INT32_M128(iret, result); } result_t test_mm_cmpgt_ss(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); float result[4]; result[0] = _a[0] > _b[0] ? 
ALL_BIT_1_32 : 0; @@ -872,8 +1033,8 @@ result_t test_mm_cmple_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); int32_t result[4]; result[0] = _a[0] <= _b[0] ? -1 : 0; @@ -883,15 +1044,15 @@ result_t test_mm_cmple_ps(const SSE2NEONTestImpl &impl, uint32_t iter) __m128 ret = _mm_cmple_ps(a, b); __m128i iret = *(const __m128i *) &ret; - return validateInt32(iret, result[0], result[1], result[2], result[3]); + return VALIDATE_INT32_M128(iret, result); } result_t test_mm_cmple_ss(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); float result[4]; result[0] = _a[0] <= _b[0] ? ALL_BIT_1_32 : 0; @@ -907,8 +1068,8 @@ result_t test_mm_cmplt_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); int32_t result[4]; result[0] = _a[0] < _b[0] ? -1 : 0; @@ -918,7 +1079,7 @@ result_t test_mm_cmplt_ps(const SSE2NEONTestImpl &impl, uint32_t iter) __m128 ret = _mm_cmplt_ps(a, b); __m128i iret = *(const __m128i *) &ret; - return validateInt32(iret, result[0], result[1], result[2], result[3]); + return VALIDATE_INT32_M128(iret, result); } result_t test_mm_cmplt_ss(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -926,8 +1087,8 @@ result_t test_mm_cmplt_ss(const SSE2NEONTestImpl &impl, uint32_t iter) const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); float result[4]; result[0] = _a[0] < _b[0] ? ALL_BIT_1_32 : 0; @@ -943,8 +1104,8 @@ result_t test_mm_cmpneq_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); int32_t result[4]; result[0] = _a[0] != _b[0] ? -1 : 0; @@ -954,15 +1115,15 @@ result_t test_mm_cmpneq_ps(const SSE2NEONTestImpl &impl, uint32_t iter) __m128 ret = _mm_cmpneq_ps(a, b); __m128i iret = *(const __m128i *) &ret; - return validateInt32(iret, result[0], result[1], result[2], result[3]); + return VALIDATE_INT32_M128(iret, result); } result_t test_mm_cmpneq_ss(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); float result[4]; result[0] = _a[0] != _b[0] ? ALL_BIT_1_32 : 0; @@ -978,8 +1139,8 @@ result_t test_mm_cmpnge_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); float result[4]; result[0] = !(_a[0] >= _b[0]) ? 
ALL_BIT_1_32 : 0; @@ -995,8 +1156,8 @@ result_t test_mm_cmpnge_ss(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); float result[4]; result[0] = !(_a[0] >= _b[0]) ? ALL_BIT_1_32 : 0; @@ -1012,8 +1173,8 @@ result_t test_mm_cmpngt_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); float result[4]; result[0] = !(_a[0] > _b[0]) ? ALL_BIT_1_32 : 0; @@ -1029,8 +1190,8 @@ result_t test_mm_cmpngt_ss(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); float result[4]; result[0] = !(_a[0] > _b[0]) ? ALL_BIT_1_32 : 0; @@ -1046,8 +1207,8 @@ result_t test_mm_cmpnle_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); float result[4]; result[0] = !(_a[0] <= _b[0]) ? ALL_BIT_1_32 : 0; @@ -1063,8 +1224,8 @@ result_t test_mm_cmpnle_ss(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); float result[4]; result[0] = !(_a[0] <= _b[0]) ? ALL_BIT_1_32 : 0; @@ -1080,8 +1241,8 @@ result_t test_mm_cmpnlt_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); float result[4]; result[0] = !(_a[0] < _b[0]) ? ALL_BIT_1_32 : 0; @@ -1097,8 +1258,8 @@ result_t test_mm_cmpnlt_ss(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); float result[4]; result[0] = !(_a[0] < _b[0]) ? 
ALL_BIT_1_32 : 0; @@ -1114,13 +1275,13 @@ result_t test_mm_cmpord_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); float result[4]; for (uint32_t i = 0; i < 4; i++) { - result[i] = compord(_a[i], _b[i]); + result[i] = cmp_noNaN(_a[i], _b[i]); } __m128 ret = _mm_cmpord_ps(a, b); @@ -1132,11 +1293,11 @@ result_t test_mm_cmpord_ss(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); float result[4]; - result[0] = compord(_a[0], _b[0]); + result[0] = cmp_noNaN(_a[0], _b[0]); result[1] = _a[1]; result[2] = _a[2]; result[3] = _a[3]; @@ -1150,13 +1311,13 @@ result_t test_mm_cmpunord_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); float result[4]; for (uint32_t i = 0; i < 4; i++) { - result[i] = (isNAN(_a[i]) || isNAN(_b[i])) ? getNAN() : 0.0f; + result[i] = cmp_hasNaN(_a[i], _b[i]); } __m128 ret = _mm_cmpunord_ps(a, b); @@ -1168,11 +1329,11 @@ result_t test_mm_cmpunord_ss(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); float result[4]; - result[0] = (isNAN(_a[0]) || isNAN(_b[0])) ? getNAN() : 0.0f; + result[0] = cmp_hasNaN(_a[0], _b[0]); result[1] = _a[1]; result[2] = _a[2]; result[3] = _a[3]; @@ -1184,28 +1345,32 @@ result_t test_mm_cmpunord_ss(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_comieq_ss(const SSE2NEONTestImpl &impl, uint32_t iter) { + // FIXME: + // The GCC does not implement _mm_comieq_ss correctly. + // See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98612 for more + // information. +#if defined(__GNUC__) && !defined(__clang__) + return TEST_UNIMPL; +#else const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); - - if (isNAN(_a[0]) || isNAN(_b[0])) - // Test disabled: GCC and Clang on x86_64 return different values. - return TEST_SUCCESS; + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); int32_t result = comieq_ss(_a[0], _b[0]); int32_t ret = _mm_comieq_ss(a, b); return result == ret ? 
TEST_SUCCESS : TEST_FAIL; +#endif } result_t test_mm_comige_ss(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); int32_t result = comige_ss(_a[0], _b[0]); int32_t ret = _mm_comige_ss(a, b); @@ -1217,8 +1382,8 @@ result_t test_mm_comigt_ss(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); int32_t result = comigt_ss(_a[0], _b[0]); int32_t ret = _mm_comigt_ss(a, b); @@ -1228,54 +1393,66 @@ result_t test_mm_comigt_ss(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_comile_ss(const SSE2NEONTestImpl &impl, uint32_t iter) { + // FIXME: + // The GCC does not implement _mm_comile_ss correctly. + // See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98612 for more + // information. +#if defined(__GNUC__) && !defined(__clang__) + return TEST_UNIMPL; +#else const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); - - if (isNAN(_a[0]) || isNAN(_b[0])) - // Test disabled: GCC and Clang on x86_64 return different values. - return TEST_SUCCESS; + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); int32_t result = comile_ss(_a[0], _b[0]); int32_t ret = _mm_comile_ss(a, b); return result == ret ? TEST_SUCCESS : TEST_FAIL; +#endif } result_t test_mm_comilt_ss(const SSE2NEONTestImpl &impl, uint32_t iter) { + // FIXME: + // The GCC does not implement _mm_comilt_ss correctly. + // See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98612 for more + // information. +#if defined(__GNUC__) && !defined(__clang__) + return TEST_UNIMPL; +#else const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); - - if (isNAN(_a[0]) || isNAN(_b[0])) - // Test disabled: GCC and Clang on x86_64 return different values. - return TEST_SUCCESS; + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); int32_t result = comilt_ss(_a[0], _b[0]); int32_t ret = _mm_comilt_ss(a, b); return result == ret ? TEST_SUCCESS : TEST_FAIL; +#endif } result_t test_mm_comineq_ss(const SSE2NEONTestImpl &impl, uint32_t iter) { + // FIXME: + // The GCC does not implement _mm_comineq_ss correctly. + // See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98612 for more + // information. +#if defined(__GNUC__) && !defined(__clang__) + return TEST_UNIMPL; +#else const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); - - if (isNAN(_a[0]) || isNAN(_b[0])) - // Test disabled: GCC and Clang on x86_64 return different values. - return TEST_SUCCESS; + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); int32_t result = comineq_ss(_a[0], _b[0]); int32_t ret = _mm_comineq_ss(a, b); return result == ret ? 
TEST_SUCCESS : TEST_FAIL; +#endif } result_t test_mm_cvt_pi2ps(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -1288,8 +1465,8 @@ result_t test_mm_cvt_pi2ps(const SSE2NEONTestImpl &impl, uint32_t iter) float dz = _a[2]; float dw = _a[3]; - __m128 a = do_mm_load_ps(_a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + __m128 a = load_m128(_a); + __m64 b = load_m64(_b); __m128 c = _mm_cvt_pi2ps(a, b); return validateFloat(c, dx, dy, dz, dw); @@ -1304,27 +1481,27 @@ result_t test_mm_cvt_ps2pi(const SSE2NEONTestImpl &impl, uint32_t iter) switch (iter & 0x3) { case 0: _MM_SET_ROUNDING_MODE(_MM_ROUND_NEAREST); - d[idx] = (int32_t)(bankersRounding(_a[idx])); + d[idx] = (int32_t) (bankersRounding(_a[idx])); break; case 1: _MM_SET_ROUNDING_MODE(_MM_ROUND_DOWN); - d[idx] = (int32_t)(floorf(_a[idx])); + d[idx] = (int32_t) (floorf(_a[idx])); break; case 2: _MM_SET_ROUNDING_MODE(_MM_ROUND_UP); - d[idx] = (int32_t)(ceilf(_a[idx])); + d[idx] = (int32_t) (ceilf(_a[idx])); break; case 3: _MM_SET_ROUNDING_MODE(_MM_ROUND_TOWARD_ZERO); - d[idx] = (int32_t)(_a[idx]); + d[idx] = (int32_t) (_a[idx]); break; } } - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); __m64 ret = _mm_cvt_ps2pi(a); - return validateInt32(ret, d[0], d[1]); + return VALIDATE_INT32_M64(ret, d); } result_t test_mm_cvt_si2ss(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -1337,7 +1514,7 @@ result_t test_mm_cvt_si2ss(const SSE2NEONTestImpl &impl, uint32_t iter) float dz = _a[2]; float dw = _a[3]; - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); __m128 c = _mm_cvt_si2ss(a, b); return validateFloat(c, dx, dy, dz, dw); @@ -1351,23 +1528,23 @@ result_t test_mm_cvt_ss2si(const SSE2NEONTestImpl &impl, uint32_t iter) switch (iter & 0x3) { case 0: _MM_SET_ROUNDING_MODE(_MM_ROUND_NEAREST); - d0 = (int32_t)(bankersRounding(_a[0])); + d0 = (int32_t) (bankersRounding(_a[0])); break; case 1: _MM_SET_ROUNDING_MODE(_MM_ROUND_DOWN); - d0 = (int32_t)(floorf(_a[0])); + d0 = (int32_t) (floorf(_a[0])); break; case 2: _MM_SET_ROUNDING_MODE(_MM_ROUND_UP); - d0 = (int32_t)(ceilf(_a[0])); + d0 = (int32_t) (ceilf(_a[0])); break; case 3: _MM_SET_ROUNDING_MODE(_MM_ROUND_TOWARD_ZERO); - d0 = (int32_t)(_a[0]); + d0 = (int32_t) (_a[0]); break; } - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); int32_t ret = _mm_cvt_ss2si(a); return ret == d0 ? 
TEST_SUCCESS : TEST_FAIL; } @@ -1381,7 +1558,7 @@ result_t test_mm_cvtpi16_ps(const SSE2NEONTestImpl &impl, uint32_t iter) float dz = (float) _a[2]; float dw = (float) _a[3]; - __m64 a = do_mm_load_m64((const int64_t *) _a); + __m64 a = load_m64(_a); __m128 c = _mm_cvtpi16_ps(a); return validateFloat(c, dx, dy, dz, dw); @@ -1397,8 +1574,8 @@ result_t test_mm_cvtpi32_ps(const SSE2NEONTestImpl &impl, uint32_t iter) float dz = _a[2]; float dw = _a[3]; - __m128 a = do_mm_load_ps(_a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + __m128 a = load_m128(_a); + __m64 b = load_m64(_b); __m128 c = _mm_cvtpi32_ps(a, b); return validateFloat(c, dx, dy, dz, dw); @@ -1414,8 +1591,8 @@ result_t test_mm_cvtpi32x2_ps(const SSE2NEONTestImpl &impl, uint32_t iter) float dz = (float) _b[0]; float dw = (float) _b[1]; - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m128 c = _mm_cvtpi32x2_ps(a, b); return validateFloat(c, dx, dy, dz, dw); @@ -1430,7 +1607,7 @@ result_t test_mm_cvtpi8_ps(const SSE2NEONTestImpl &impl, uint32_t iter) float dz = (float) _a[2]; float dw = (float) _a[3]; - __m64 a = do_mm_load_m64((const int64_t *) _a); + __m64 a = load_m64(_a); __m128 c = _mm_cvtpi8_ps(a); return validateFloat(c, dx, dy, dz, dw); @@ -1439,53 +1616,109 @@ result_t test_mm_cvtpi8_ps(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_cvtps_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; - float _b[4]; - int16_t trun[4]; + int16_t rnd[4]; - // FIXME: The rounding mode would affect the testing result - _MM_SET_ROUNDING_MODE(_MM_ROUND_NEAREST); - // Beyond int16_t range _mm_cvtps_pi16 function (both native and arm) - // do not behave the same as BankersRounding. - // Forcing the float input values to be in the int16_t range - // Dividing by 10.0f ensures (with the current data set) it, - // without forcing a saturation. - for (int j = 0; j < 4; j++) { - _b[j] = fabsf(_a[j]) > 32767.0f ? _a[j] / 10.0f : _a[j]; - trun[j] = (int16_t)(bankersRounding(_b[j])); + for (int i = 0; i < 4; i++) { + if ((float) INT16_MAX <= _a[i] && _a[i] <= (float) INT32_MAX) { + rnd[i] = INT16_MAX; + } else if (INT16_MIN < _a[i] && _a[i] < INT16_MAX) { + switch (iter & 0x3) { + case 0: + _MM_SET_ROUNDING_MODE(_MM_ROUND_NEAREST); + rnd[i] = (int16_t) bankersRounding(_a[i]); + break; + case 1: + _MM_SET_ROUNDING_MODE(_MM_ROUND_DOWN); + rnd[i] = (int16_t) floorf(_a[i]); + break; + case 2: + _MM_SET_ROUNDING_MODE(_MM_ROUND_UP); + rnd[i] = (int16_t) ceilf(_a[i]); + break; + case 3: + _MM_SET_ROUNDING_MODE(_MM_ROUND_TOWARD_ZERO); + rnd[i] = (int16_t) _a[i]; + break; + } + } else { + rnd[i] = INT16_MIN; + } } - __m128 b = do_mm_load_ps(_b); - __m64 ret = _mm_cvtps_pi16(b); - return validateInt16(ret, trun[0], trun[1], trun[2], trun[3]); + __m128 a = load_m128(_a); + __m64 ret = _mm_cvtps_pi16(a); + return VALIDATE_INT16_M64(ret, rnd); } result_t test_mm_cvtps_pi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; - int32_t d[2]; + int32_t d[2] = {}; - for (int i = 0; i < 2; i++) { - int32_t f = (int32_t) floor(_a[i]); - int32_t c = (int32_t) ceil(_a[i]); - float diff = _a[i] - floor(_a[i]); - // Round to nearest, ties to even - if (diff > 0.5) - d[i] = c; - else if (diff == 0.5) - d[i] = c & 1 ? 
f : c; - else - d[i] = f; + switch (iter & 0x3) { + case 0: + _MM_SET_ROUNDING_MODE(_MM_ROUND_NEAREST); + d[0] = (int32_t) bankersRounding(_a[0]); + d[1] = (int32_t) bankersRounding(_a[1]); + break; + case 1: + _MM_SET_ROUNDING_MODE(_MM_ROUND_DOWN); + d[0] = (int32_t) floorf(_a[0]); + d[1] = (int32_t) floorf(_a[1]); + break; + case 2: + _MM_SET_ROUNDING_MODE(_MM_ROUND_UP); + d[0] = (int32_t) ceilf(_a[0]); + d[1] = (int32_t) ceilf(_a[1]); + break; + case 3: + _MM_SET_ROUNDING_MODE(_MM_ROUND_TOWARD_ZERO); + d[0] = (int32_t) _a[0]; + d[1] = (int32_t) _a[1]; + break; } - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); __m64 ret = _mm_cvtps_pi32(a); - return validateInt32(ret, d[0], d[1]); + return VALIDATE_INT32_M64(ret, d); } result_t test_mm_cvtps_pi8(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + const float *_a = impl.mTestFloatPointer1; + int8_t rnd[8] = {}; + + for (int i = 0; i < 4; i++) { + if ((float) INT8_MAX <= _a[i] && _a[i] <= (float) INT32_MAX) { + rnd[i] = INT8_MAX; + } else if (INT8_MIN < _a[i] && _a[i] < INT8_MAX) { + switch (iter & 0x3) { + case 0: + _MM_SET_ROUNDING_MODE(_MM_ROUND_NEAREST); + rnd[i] = (int8_t) bankersRounding(_a[i]); + break; + case 1: + _MM_SET_ROUNDING_MODE(_MM_ROUND_DOWN); + rnd[i] = (int8_t) floorf(_a[i]); + break; + case 2: + _MM_SET_ROUNDING_MODE(_MM_ROUND_UP); + rnd[i] = (int8_t) ceilf(_a[i]); + break; + case 3: + _MM_SET_ROUNDING_MODE(_MM_ROUND_TOWARD_ZERO); + rnd[i] = (int8_t) _a[i]; + break; + } + } else { + rnd[i] = INT8_MIN; + } + } + + __m128 a = load_m128(_a); + __m64 ret = _mm_cvtps_pi8(a); + return VALIDATE_INT8_M64(ret, rnd); } result_t test_mm_cvtpu16_ps(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -1497,7 +1730,7 @@ result_t test_mm_cvtpu16_ps(const SSE2NEONTestImpl &impl, uint32_t iter) float dz = (float) _a[2]; float dw = (float) _a[3]; - __m64 a = do_mm_load_m64((const int64_t *) _a); + __m64 a = load_m64(_a); __m128 c = _mm_cvtpu16_ps(a); return validateFloat(c, dx, dy, dz, dw); @@ -1512,7 +1745,7 @@ result_t test_mm_cvtpu8_ps(const SSE2NEONTestImpl &impl, uint32_t iter) float dz = (float) _a[2]; float dw = (float) _a[3]; - __m64 a = do_mm_load_m64((const int64_t *) _a); + __m64 a = load_m64(_a); __m128 c = _mm_cvtpu8_ps(a); return validateFloat(c, dx, dy, dz, dw); @@ -1528,7 +1761,7 @@ result_t test_mm_cvtsi32_ss(const SSE2NEONTestImpl &impl, uint32_t iter) float dz = _a[2]; float dw = _a[3]; - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); __m128 c = _mm_cvtsi32_ss(a, b); return validateFloat(c, dx, dy, dz, dw); @@ -1544,7 +1777,7 @@ result_t test_mm_cvtsi64_ss(const SSE2NEONTestImpl &impl, uint32_t iter) float dz = _a[2]; float dw = _a[3]; - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); __m128 c = _mm_cvtsi64_ss(a, b); return validateFloat(c, dx, dy, dz, dw); @@ -1556,7 +1789,7 @@ result_t test_mm_cvtss_f32(const SSE2NEONTestImpl &impl, uint32_t iter) float f = _a[0]; - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); float c = _mm_cvtss_f32(a); return f == c ? 
TEST_SUCCESS : TEST_FAIL; @@ -1566,27 +1799,27 @@ result_t test_mm_cvtss_si32(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; - int32_t d0; + int32_t d0 = 0; switch (iter & 0x3) { case 0: _MM_SET_ROUNDING_MODE(_MM_ROUND_NEAREST); - d0 = (int32_t)(bankersRounding(_a[0])); + d0 = (int32_t) (bankersRounding(_a[0])); break; case 1: _MM_SET_ROUNDING_MODE(_MM_ROUND_DOWN); - d0 = (int32_t)(floorf(_a[0])); + d0 = (int32_t) (floorf(_a[0])); break; case 2: _MM_SET_ROUNDING_MODE(_MM_ROUND_UP); - d0 = (int32_t)(ceilf(_a[0])); + d0 = (int32_t) (ceilf(_a[0])); break; case 3: _MM_SET_ROUNDING_MODE(_MM_ROUND_TOWARD_ZERO); - d0 = (int32_t)(_a[0]); + d0 = (int32_t) (_a[0]); break; } - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); int32_t ret = _mm_cvtss_si32(a); return ret == d0 ? TEST_SUCCESS : TEST_FAIL; @@ -1596,27 +1829,27 @@ result_t test_mm_cvtss_si64(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; - int64_t d0; + int64_t d0 = 0; switch (iter & 0x3) { case 0: _MM_SET_ROUNDING_MODE(_MM_ROUND_NEAREST); - d0 = (int64_t)(bankersRounding(_a[0])); + d0 = (int64_t) (bankersRounding(_a[0])); break; case 1: _MM_SET_ROUNDING_MODE(_MM_ROUND_DOWN); - d0 = (int64_t)(floorf(_a[0])); + d0 = (int64_t) (floorf(_a[0])); break; case 2: _MM_SET_ROUNDING_MODE(_MM_ROUND_UP); - d0 = (int64_t)(ceilf(_a[0])); + d0 = (int64_t) (ceilf(_a[0])); break; case 3: _MM_SET_ROUNDING_MODE(_MM_ROUND_TOWARD_ZERO); - d0 = (int64_t)(_a[0]); + d0 = (int64_t) (_a[0]); break; } - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); int64_t ret = _mm_cvtss_si64(a); return ret == d0 ? TEST_SUCCESS : TEST_FAIL; @@ -1630,17 +1863,17 @@ result_t test_mm_cvtt_ps2pi(const SSE2NEONTestImpl &impl, uint32_t iter) d[0] = (int32_t) _a[0]; d[1] = (int32_t) _a[1]; - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); __m64 ret = _mm_cvtt_ps2pi(a); - return validateInt32(ret, d[0], d[1]); + return VALIDATE_INT32_M64(ret, d); } result_t test_mm_cvtt_ss2si(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); int ret = _mm_cvtt_ss2si(a); return ret == (int32_t) _a[0] ? TEST_SUCCESS : TEST_FAIL; @@ -1654,17 +1887,17 @@ result_t test_mm_cvttps_pi32(const SSE2NEONTestImpl &impl, uint32_t iter) d[0] = (int32_t) _a[0]; d[1] = (int32_t) _a[1]; - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); __m64 ret = _mm_cvttps_pi32(a); - return validateInt32(ret, d[0], d[1]); + return VALIDATE_INT32_M64(ret, d); } result_t test_mm_cvttss_si32(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); int ret = _mm_cvttss_si32(a); return ret == (int32_t) _a[0] ? TEST_SUCCESS : TEST_FAIL; @@ -1674,7 +1907,7 @@ result_t test_mm_cvttss_si64(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); int64_t ret = _mm_cvttss_si64(a); return ret == (int64_t) _a[0] ? 
TEST_SUCCESS : TEST_FAIL; @@ -1689,11 +1922,11 @@ result_t test_mm_div_ps(const SSE2NEONTestImpl &impl, uint32_t iter) float f2 = _a[2] / _b[2]; float f3 = _a[3] / _b[3]; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 c = _mm_div_ps(a, b); -#if defined(__arm__) && !defined(__aarch64__) +#if defined(__arm__) && !defined(__aarch64__) && !defined(_M_ARM64) // The implementation of "_mm_div_ps()" on ARM 32bit doesn't use "DIV" // instruction directly, instead it uses "FRECPE" instruction to approximate // it. Therefore, the precision is not as small as other architecture @@ -1713,11 +1946,11 @@ result_t test_mm_div_ss(const SSE2NEONTestImpl &impl, uint32_t iter) float d2 = _a[2]; float d3 = _a[3]; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 c = _mm_div_ss(a, b); -#if defined(__arm__) && !defined(__aarch64__) +#if defined(__arm__) && !defined(__aarch64__) && !defined(_M_ARM64) // The implementation of "_mm_div_ps()" on ARM 32bit doesn't use "DIV" // instruction directly, instead it uses "FRECPE" instruction to approximate // it. Therefore, the precision is not as small as other architecture @@ -1729,15 +1962,15 @@ result_t test_mm_div_ss(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_extract_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) { - // FIXME GCC has bug on `_mm_extract_pi16` intrinsics. We will enable this + // FIXME GCC has bug on "_mm_extract_pi16" intrinsics. We will enable this // test when GCC fix this bug. // see https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98495 for more // information -#if defined(__clang__) +#if defined(__clang__) || defined(_MSC_VER) uint64_t *_a = (uint64_t *) impl.mTestIntPointer1; const int idx = iter & 0x3; - __m64 a = do_mm_load_m64((const int64_t *) _a); + __m64 a = load_m64(_a); int c; switch (idx) { case 0: @@ -1762,9 +1995,23 @@ result_t test_mm_extract_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) #endif } +result_t test_mm_malloc(const SSE2NEONTestImpl &impl, uint32_t iter); result_t test_mm_free(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + /* We verify _mm_malloc first, and there is no need to check _mm_free . */ + return test_mm_malloc(impl, iter); +} + +result_t test_mm_get_flush_zero_mode(const SSE2NEONTestImpl &impl, + uint32_t iter) +{ + int res_flush_zero_on, res_flush_zero_off; + _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON); + res_flush_zero_on = _MM_GET_FLUSH_ZERO_MODE() == _MM_FLUSH_ZERO_ON; + _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_OFF); + res_flush_zero_off = _MM_GET_FLUSH_ZERO_MODE() == _MM_FLUSH_ZERO_OFF; + + return (res_flush_zero_on && res_flush_zero_off) ? 
TEST_SUCCESS : TEST_FAIL; } result_t test_mm_get_rounding_mode(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -1788,25 +2035,45 @@ result_t test_mm_get_rounding_mode(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_getcsr(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + // store original csr value for post test restoring + unsigned int originalCsr = _mm_getcsr(); + + unsigned int roundings[] = {_MM_ROUND_TOWARD_ZERO, _MM_ROUND_DOWN, + _MM_ROUND_UP, _MM_ROUND_NEAREST}; + for (size_t i = 0; i < sizeof(roundings) / sizeof(roundings[0]); i++) { + _mm_setcsr(_mm_getcsr() | roundings[i]); + if ((_mm_getcsr() & roundings[i]) != roundings[i]) { + return TEST_FAIL; + } + } + + // restore original csr value for remaining tests + _mm_setcsr(originalCsr); + + return TEST_SUCCESS; } result_t test_mm_insert_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; const int16_t insert = (int16_t) impl.mTestInts[iter]; - const int imm8 = 2; - - int16_t d[4]; - for (int i = 0; i < 4; i++) { - d[i] = _a[i]; - } - d[imm8] = insert; - - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = _mm_insert_pi16(a, insert, imm8); - - return validateInt16(b, d[0], d[1], d[2], d[3]); + __m64 a; + __m64 b; + +#define TEST_IMPL(IDX) \ + int16_t d##IDX[4]; \ + for (int i = 0; i < 4; i++) { \ + d##IDX[i] = _a[i]; \ + } \ + d##IDX[IDX] = insert; \ + \ + a = load_m64(_a); \ + b = _mm_insert_pi16(a, insert, IDX); \ + CHECK_RESULT(VALIDATE_INT16_M64(b, d##IDX)) + + IMM_4_ITER +#undef TEST_IMPL + return TEST_SUCCESS; } result_t test_mm_load_ps(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -1885,25 +2152,36 @@ result_t test_mm_loadu_ps(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_loadu_si16(const SSE2NEONTestImpl &impl, uint32_t iter) { -#if defined(__clang__) - const int16_t *addr = (const int16_t *) impl.mTestIntPointer1; + // The GCC version before 11 does not implement intrinsic function + // _mm_loadu_si16. Check https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95483 + // for more information. +#if (defined(__GNUC__) && !defined(__clang__)) && (__GNUC__ <= 10) + return TEST_UNIMPL; +#else + const unaligned_int16_t *addr = + (const unaligned_int16_t *) (impl.mTestUnalignedInts + 1); __m128i ret = _mm_loadu_si16((const void *) addr); return validateInt16(ret, addr[0], 0, 0, 0, 0, 0, 0, 0); -#else - // The intrinsic _mm_loadu_si16() does not exist in GCC - return TEST_UNIMPL; #endif } result_t test_mm_loadu_si64(const SSE2NEONTestImpl &impl, uint32_t iter) { - const int64_t *addr = (const int64_t *) impl.mTestIntPointer1; + // Versions of GCC prior to 9 do not implement intrinsic function + // _mm_loadu_si64. Check https://gcc.gnu.org/bugzilla/show_bug.cgi?id=78782 + // for more information. +#if (defined(__GNUC__) && !defined(__clang__)) && (__GNUC__ < 9) + return TEST_UNIMPL; +#else + const unaligned_int64_t *addr = + (const unaligned_int64_t *) (impl.mTestUnalignedInts + 1); __m128i ret = _mm_loadu_si64((const void *) addr); return validateInt64(ret, addr[0], 0); +#endif } result_t test_mm_malloc(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -1956,10 +2234,10 @@ result_t test_mm_max_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) c[2] = _a[2] > _b[2] ? _a[2] : _b[2]; c[3] = _a[3] > _b[3] ? 
_a[3] : _b[3]; - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m64 ret = _mm_max_pi16(a, b); - return validateInt16(ret, c[0], c[1], c[2], c[3]); + return VALIDATE_INT16_M64(ret, c); } result_t test_mm_max_ps(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -1973,8 +2251,8 @@ result_t test_mm_max_ps(const SSE2NEONTestImpl &impl, uint32_t iter) c[2] = _a[2] > _b[2] ? _a[2] : _b[2]; c[3] = _a[3] > _b[3] ? _a[3] : _b[3]; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 ret = _mm_max_ps(a, b); return validateFloat(ret, c[0], c[1], c[2], c[3]); } @@ -1994,10 +2272,10 @@ result_t test_mm_max_pu8(const SSE2NEONTestImpl &impl, uint32_t iter) c[6] = _a[6] > _b[6] ? _a[6] : _b[6]; c[7] = _a[7] > _b[7] ? _a[7] : _b[7]; - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m64 ret = _mm_max_pu8(a, b); - return validateUInt8(ret, c[0], c[1], c[2], c[3], c[4], c[5], c[6], c[7]); + return VALIDATE_UINT8_M64(ret, c); } result_t test_mm_max_ss(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -2028,10 +2306,10 @@ result_t test_mm_min_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) c[2] = _a[2] < _b[2] ? _a[2] : _b[2]; c[3] = _a[3] < _b[3] ? _a[3] : _b[3]; - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m64 ret = _mm_min_pi16(a, b); - return validateInt16(ret, c[0], c[1], c[2], c[3]); + return VALIDATE_INT16_M64(ret, c); } result_t test_mm_min_ps(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -2045,8 +2323,8 @@ result_t test_mm_min_ps(const SSE2NEONTestImpl &impl, uint32_t iter) c[2] = _a[2] < _b[2] ? _a[2] : _b[2]; c[3] = _a[3] < _b[3] ? _a[3] : _b[3]; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 ret = _mm_min_ps(a, b); return validateFloat(ret, c[0], c[1], c[2], c[3]); } @@ -2066,10 +2344,10 @@ result_t test_mm_min_pu8(const SSE2NEONTestImpl &impl, uint32_t iter) c[6] = _a[6] < _b[6] ? _a[6] : _b[6]; c[7] = _a[7] < _b[7] ? _a[7] : _b[7]; - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m64 ret = _mm_min_pu8(a, b); - return validateUInt8(ret, c[0], c[1], c[2], c[3], c[4], c[5], c[6], c[7]); + return VALIDATE_UINT8_M64(ret, c); } result_t test_mm_min_ss(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -2080,8 +2358,8 @@ result_t test_mm_min_ss(const SSE2NEONTestImpl &impl, uint32_t iter) c = _a[0] < _b[0] ? 
_a[0] : _b[0]; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 ret = _mm_min_ss(a, b); return validateFloat(ret, c, _a[1], _a[2], _a[3]); @@ -2091,14 +2369,14 @@ result_t test_mm_move_ss(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); float result[4]; - result[0] = b[0]; - result[1] = a[1]; - result[2] = a[2]; - result[3] = a[3]; + result[0] = _b[0]; + result[1] = _a[1]; + result[2] = _a[2]; + result[3] = _a[3]; __m128 ret = _mm_move_ss(a, b); return validateFloat(ret, result[0], result[1], result[2], result[3]); @@ -2114,8 +2392,8 @@ result_t test_mm_movehl_ps(const SSE2NEONTestImpl &impl, uint32_t iter) float f2 = _a[2]; float f3 = _a[3]; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 ret = _mm_movehl_ps(a, b); return validateFloat(ret, f0, f1, f2, f3); @@ -2131,8 +2409,8 @@ result_t test_mm_movelh_ps(const SSE2NEONTestImpl &impl, uint32_t iter) float f2 = _b[0]; float f3 = _b[1]; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 ret = _mm_movelh_ps(a, b); return validateFloat(ret, f0, f1, f2, f3); @@ -2173,7 +2451,7 @@ result_t test_mm_movemask_ps(const SSE2NEONTestImpl &impl, uint32_t iter) if (ip[3] & 0x80000000) { ret |= 8; } - __m128 a = do_mm_load_ps(p); + __m128 a = load_m128(p); int val = _mm_movemask_ps(a); return val == ret ? TEST_SUCCESS : TEST_FAIL; } @@ -2187,8 +2465,8 @@ result_t test_mm_mul_ps(const SSE2NEONTestImpl &impl, uint32_t iter) float dz = _a[2] * _b[2]; float dw = _a[3] * _b[3]; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 c = _mm_mul_ps(a, b); return validateFloat(c, dx, dy, dz, dw); } @@ -2203,8 +2481,8 @@ result_t test_mm_mul_ss(const SSE2NEONTestImpl &impl, uint32_t iter) float dz = _a[2]; float dw = _a[3]; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 c = _mm_mul_ss(a, b); return validateFloat(c, dx, dy, dz, dw); } @@ -2216,34 +2494,37 @@ result_t test_mm_mulhi_pu16(const SSE2NEONTestImpl &impl, uint32_t iter) uint16_t d[4]; for (uint32_t i = 0; i < 4; i++) { uint32_t m = (uint32_t) _a[i] * (uint32_t) _b[i]; - d[i] = (uint16_t)(m >> 16); + d[i] = (uint16_t) (m >> 16); } - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m64 c = _mm_mulhi_pu16(a, b); - return validateUInt16(c, d[0], d[1], d[2], d[3]); + return VALIDATE_UINT16_M64(c, d); } result_t test_mm_or_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 c = _mm_or_ps(a, b); // now for the assertion... 
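    /*
     * The VALIDATE_*_M128 and VALIDATE_*_M64 macros used throughout these
     * tests presumably expand an expected-value array into the per-lane
     * arguments of the matching validate* helper. A minimal sketch of the
     * 32-bit flavour (renamed here so it cannot clash with the real
     * definition in the test harness) could look like this:
     */
#define VALIDATE_INT32_M128_SKETCH(vec, arr) \
    validateInt32(vec, arr[0], arr[1], arr[2], arr[3])
    /*
     * Leaving the array argument unparenthesised would also explain call
     * sites such as VALIDATE_INT8_M128(c, (int8_t) d), where the cast ends
     * up applied to each element after expansion.
     */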
const uint32_t *ia = (const uint32_t *) &a; const uint32_t *ib = (const uint32_t *) &b; - uint32_t r0 = ia[0] | ib[0]; - uint32_t r1 = ia[1] | ib[1]; - uint32_t r2 = ia[2] | ib[2]; - uint32_t r3 = ia[3] | ib[3]; - __m128i ret = do_mm_set_epi32(r3, r2, r1, r0); - result_t r = validateInt32(*(const __m128i *) &c, r0, r1, r2, r3); - if (r) - r = validateInt32(ret, r0, r1, r2, r3); - return r; + uint32_t r[4]; + r[0] = ia[0] | ib[0]; + r[1] = ia[1] | ib[1]; + r[2] = ia[2] | ib[2]; + r[3] = ia[3] | ib[3]; + __m128i ret = do_mm_set_epi32(r[3], r[2], r[1], r[0]); + result_t res = VALIDATE_INT32_M128(*(const __m128i *) &c, r); + if (res) { + res = VALIDATE_INT32_M128(ret, r); + } + + return res; } result_t test_m_pavgb(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -2298,7 +2579,53 @@ result_t test_m_pmulhuw(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_prefetch(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + typedef struct { + __m128 a; + float r[4]; + } prefetch_test_t; + prefetch_test_t test_vec[8] = { + { + _mm_set_ps(-0.1f, 0.2f, 0.3f, 0.4f), + {0.4f, 0.3f, 0.2f, -0.1f}, + }, + { + _mm_set_ps(0.5f, 0.6f, -0.7f, -0.8f), + {-0.8f, -0.7f, 0.6f, 0.5f}, + }, + { + _mm_set_ps(0.9f, 0.10f, -0.11f, 0.12f), + {0.12f, -0.11f, 0.10f, 0.9f}, + }, + { + _mm_set_ps(-1.1f, -2.1f, -3.1f, -4.1f), + {-4.1f, -3.1f, -2.1f, -1.1f}, + }, + { + _mm_set_ps(100.0f, -110.0f, 120.0f, -130.0f), + {-130.0f, 120.0f, -110.0f, 100.0f}, + }, + { + _mm_set_ps(200.5f, 210.5f, -220.5f, 230.5f), + {995.74f, -93.04f, 144.03f, 902.50f}, + }, + { + _mm_set_ps(10.11f, -11.12f, -12.13f, 13.14f), + {13.14f, -12.13f, -11.12f, 10.11f}, + }, + { + _mm_set_ps(10.1f, -20.2f, 30.3f, 40.4f), + {40.4f, 30.3f, -20.2f, 10.1f}, + }, + }; + + for (size_t i = 0; i < (sizeof(test_vec) / (sizeof(test_vec[0]))); i++) { + _mm_prefetch(((const char *) &test_vec[i].a), _MM_HINT_T0); + _mm_prefetch(((const char *) &test_vec[i].a), _MM_HINT_T1); + _mm_prefetch(((const char *) &test_vec[i].a), _MM_HINT_T2); + _mm_prefetch(((const char *) &test_vec[i].a), _MM_HINT_NTA); + } + + return TEST_SUCCESS; } result_t test_m_psadbw(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -2310,8 +2637,8 @@ result_t test_m_psadbw(const SSE2NEONTestImpl &impl, uint32_t iter) d += abs(_a[i] - _b[i]); } - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m64 c = _m_psadbw(a, b); return validateUInt16(c, d, 0, 0, 0); } @@ -2329,9 +2656,9 @@ result_t test_mm_rcp_ps(const SSE2NEONTestImpl &impl, uint32_t iter) float dz = 1.0f / _a[2]; float dw = 1.0f / _a[3]; - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); __m128 c = _mm_rcp_ps(a); - return validateFloatEpsilon(c, dx, dy, dz, dw, 300.0f); + return validateFloatError(c, dx, dy, dz, dw, 0.001f); } result_t test_mm_rcp_ss(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -2342,44 +2669,43 @@ result_t test_mm_rcp_ss(const SSE2NEONTestImpl &impl, uint32_t iter) float dy = _a[1]; float dz = _a[2]; float dw = _a[3]; - - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); __m128 c = _mm_rcp_ss(a); - return validateFloatEpsilon(c, dx, dy, dz, dw, 300.0f); + return validateFloatError(c, dx, dy, dz, dw, 0.001f); } result_t test_mm_rsqrt_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = (const float *) impl.mTestFloatPointer1; - float f0 = 1 / sqrt(_a[0]); - float f1 = 1 / sqrt(_a[1]); - float f2 = 1 / sqrt(_a[2]); - float f3 = 1 / sqrt(_a[3]); + float f0 = 1 / 
sqrtf(_a[0]);
+    float f1 = 1 / sqrtf(_a[1]);
+    float f2 = 1 / sqrtf(_a[2]);
+    float f3 = 1 / sqrtf(_a[3]);
 
-    __m128 a = do_mm_load_ps(_a);
+    __m128 a = load_m128(_a);
     __m128 c = _mm_rsqrt_ps(a);
 
-    // Here, we ensure `_mm_rsqrt_ps()`'s error is under 1% compares to the C
-    // implementation.
-    return validateFloatError(c, f0, f1, f2, f3, 0.01f);
+    // Here, we ensure the error rate of "_mm_rsqrt_ps()" is under 0.1% compared
+    // to the C implementation.
+    return validateFloatError(c, f0, f1, f2, f3, 0.001f);
 }
 
 result_t test_mm_rsqrt_ss(const SSE2NEONTestImpl &impl, uint32_t iter)
 {
     const float *_a = (const float *) impl.mTestFloatPointer1;
 
-    float f0 = 1 / sqrt(_a[0]);
+    float f0 = 1 / sqrtf(_a[0]);
     float f1 = _a[1];
     float f2 = _a[2];
     float f3 = _a[3];
 
-    __m128 a = do_mm_load_ps(_a);
+    __m128 a = load_m128(_a);
     __m128 c = _mm_rsqrt_ss(a);
 
-    // Here, we ensure `_mm_rsqrt_ps()`'s error is under 1% compares to the C
-    // implementation.
-    return validateFloatError(c, f0, f1, f2, f3, 0.01f);
+    // Here, we ensure the error rate of "_mm_rsqrt_ss()" is under 0.1% compared
+    // to the C implementation.
+    return validateFloatError(c, f0, f1, f2, f3, 0.001f);
 }
 
 result_t test_mm_sad_pu8(const SSE2NEONTestImpl &impl, uint32_t iter)
@@ -2391,12 +2717,21 @@ result_t test_mm_sad_pu8(const SSE2NEONTestImpl &impl, uint32_t iter)
         d += abs(_a[i] - _b[i]);
     }
 
-    __m64 a = do_mm_load_m64((const int64_t *) _a);
-    __m64 b = do_mm_load_m64((const int64_t *) _b);
+    __m64 a = load_m64(_a);
+    __m64 b = load_m64(_b);
     __m64 c = _mm_sad_pu8(a, b);
     return validateUInt16(c, d, 0, 0, 0);
 }
 
+result_t test_mm_set_flush_zero_mode(const SSE2NEONTestImpl &impl,
+                                     uint32_t iter)
+{
+    // TODO:
+    // After the behavior of denormal numbers and flush-to-zero mode is fully
+    // investigated, this test will be added.
+    return TEST_UNIMPL;
+}
+
 result_t test_mm_set_ps(const SSE2NEONTestImpl &impl, uint32_t iter)
 {
     float x = impl.mTestFloats[iter];
@@ -2421,8 +2756,8 @@ result_t test_mm_set_rounding_mode(const SSE2NEONTestImpl &impl, uint32_t iter)
     const float *_a = impl.mTestFloatPointer1;
     result_t res_toward_zero, res_to_neg_inf, res_to_pos_inf, res_nearest;
 
-    __m128 a = do_mm_load_ps(_a);
-    __m128 b, c;
+    __m128 a = load_m128(_a);
+    __m128 b = _mm_setzero_ps(), c = _mm_setzero_ps();
 
     _MM_SET_ROUNDING_MODE(_MM_ROUND_TOWARD_ZERO);
     b = _mm_round_ps(a, _MM_FROUND_CUR_DIRECTION);
@@ -2491,24 +2826,35 @@ result_t test_mm_setzero_ps(const SSE2NEONTestImpl &impl, uint32_t iter)
 
 result_t test_mm_sfence(const SSE2NEONTestImpl &impl, uint32_t iter)
 {
-    return TEST_UNIMPL;
+    /* FIXME: Assume that memory barriers always function as intended. */
+    return TEST_SUCCESS;
 }
 
 result_t test_mm_shuffle_pi16(const SSE2NEONTestImpl &impl, uint32_t iter)
 {
+#if (__GNUC__ == 8) || (__GNUC__ == 9 && __GNUC_MINOR__ == 2)
+    // Older GCC releases (8.x and 9.2 are matched here; the operand mismatch
+    // error reportedly affects every version prior to GCC 10) cannot compile
+    // this expansion, so the test is skipped for those compilers.
+    return TEST_UNIMPL;
+#else const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; - const int32_t imm = 73; - - __m64 a = do_mm_load_m64((const int64_t *) _a); - - int16_t d0 = _a[imm & 0x3]; - int16_t d1 = _a[(imm >> 2) & 0x3]; - int16_t d2 = _a[(imm >> 4) & 0x3]; - int16_t d3 = _a[(imm >> 6) & 0x3]; - - __m64 d = _mm_shuffle_pi16(a, imm); + __m64 a; + __m64 d; + int16_t _d[4]; +#define TEST_IMPL(IDX) \ + a = load_m64(_a); \ + d = _mm_shuffle_pi16(a, IDX); \ + \ + _d[0] = _a[IDX & 0x3]; \ + _d[1] = _a[(IDX >> 2) & 0x3]; \ + _d[2] = _a[(IDX >> 4) & 0x3]; \ + _d[3] = _a[(IDX >> 6) & 0x3]; \ + if (VALIDATE_INT16_M64(d, _d) != TEST_SUCCESS) { \ + return TEST_FAIL; \ + } - return validateInt16(d, d0, d1, d2, d3); + IMM_256_ITER +#undef TEST_IMPL + return TEST_SUCCESS; +#endif } // Note, NEON does not have a general purpose shuffled command like SSE. @@ -2519,8 +2865,8 @@ result_t test_mm_shuffle_ps(const SSE2NEONTestImpl &impl, uint32_t iter) const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; result_t isValid = TEST_SUCCESS; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); // Test many permutations of the shuffle operation, including all // permutations which have an optimized/customized implementation __m128 ret; @@ -2584,34 +2930,46 @@ result_t test_mm_sqrt_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = (const float *) impl.mTestFloatPointer1; - float f0 = sqrt(_a[0]); - float f1 = sqrt(_a[1]); - float f2 = sqrt(_a[2]); - float f3 = sqrt(_a[3]); + float f0 = sqrtf(_a[0]); + float f1 = sqrtf(_a[1]); + float f2 = sqrtf(_a[2]); + float f3 = sqrtf(_a[3]); - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); __m128 c = _mm_sqrt_ps(a); - // Here, we ensure `_mm_sqrt_ps()`'s error is under 1% compares to the C - // implementation. - return validateFloatError(c, f0, f1, f2, f3, 0.01f); +#if defined(__arm__) && !defined(__arm64__) && !defined(_M_ARM64) + // Here, we ensure the error rate of "_mm_sqrt_ps()" ARMv7-A implementation + // is under 10^-4% compared to the C implementation. + return validateFloatError(c, f0, f1, f2, f3, 0.0001f); +#else + // Here, we ensure the error rate of "_mm_sqrt_ps()" is under 10^-6% + // compared to the C implementation. + return validateFloatError(c, f0, f1, f2, f3, 0.000001f); +#endif } result_t test_mm_sqrt_ss(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = (const float *) impl.mTestFloatPointer1; - float f0 = sqrt(_a[0]); + float f0 = sqrtf(_a[0]); float f1 = _a[1]; float f2 = _a[2]; float f3 = _a[3]; - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); __m128 c = _mm_sqrt_ss(a); - // Here, we ensure `_mm_sqrt_ps()`'s error is under 1% compares to the C - // implementation. - return validateFloatError(c, f0, f1, f2, f3, 0.01f); +#if defined(__arm__) && !defined(__arm64__) && !defined(_M_ARM64) + // Here, we ensure the error rate of "_mm_sqrt_ps()" ARMv7-A implementation + // is under 10^-4% compared to the C implementation. + return validateFloatError(c, f0, f1, f2, f3, 0.0001f); +#else + // Here, we ensure the error rate of "_mm_sqrt_ps()" is under 10^-6% + // compared to the C implementation. 
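    /*
     * validateFloatError (provided by the test harness) presumably checks the
     * relative error of every lane against the bound passed below. A spot
     * check for lane 0 written out by hand would look like this; res0 and
     * denom0 are names introduced here purely for illustration, and fabsf()
     * is already used elsewhere in this file:
     */
    float res0 = _mm_cvtss_f32(c);
    float denom0 = fabsf(f0) > 0.0f ? fabsf(f0) : 1.0f; /* avoid dividing by 0 */
    if (fabsf(res0 - f0) / denom0 > 0.000001f)
        return TEST_FAIL;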
+ return validateFloatError(c, f0, f1, f2, f3, 0.000001f); +#endif } result_t test_mm_store_ps(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -2635,7 +2993,7 @@ result_t test_mm_store_ps1(const SSE2NEONTestImpl &impl, uint32_t iter) float *p = impl.mTestFloatPointer1; float d[4]; - __m128 a = do_mm_load_ps(p); + __m128 a = load_m128(p); _mm_store_ps1(d, a); ASSERT_RETURN(d[0] == *p); @@ -2661,7 +3019,7 @@ result_t test_mm_store1_ps(const SSE2NEONTestImpl &impl, uint32_t iter) float *p = impl.mTestFloatPointer1; float d[4]; - __m128 a = do_mm_load_ps(p); + __m128 a = load_m128(p); _mm_store1_ps(d, a); ASSERT_RETURN(d[0] == *p); @@ -2706,7 +3064,7 @@ result_t test_mm_storer_ps(const SSE2NEONTestImpl &impl, uint32_t iter) float *p = impl.mTestFloatPointer1; float d[4]; - __m128 a = do_mm_load_ps(p); + __m128 a = load_m128(p); _mm_storer_ps(d, a); ASSERT_RETURN(d[0] == p[3]); @@ -2731,12 +3089,12 @@ result_t test_mm_storeu_si16(const SSE2NEONTestImpl &impl, uint32_t iter) // The GCC version before 11 does not implement intrinsic function // _mm_storeu_si16. Check https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95483 // for more information. -#if defined(__GNUC__) && __GNUC__ <= 10 +#if (defined(__GNUC__) && !defined(__clang__)) && (__GNUC__ <= 10) return TEST_UNIMPL; #else const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; - __m128i b; - __m128i a = do_mm_load_ps(_a); + __m128i b = _mm_setzero_si128(); + __m128i a = load_m128i(_a); _mm_storeu_si16(&b, a); int16_t *_b = (int16_t *) &b; int16_t *_c = (int16_t *) &a; @@ -2747,19 +3105,26 @@ result_t test_mm_storeu_si16(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_storeu_si64(const SSE2NEONTestImpl &impl, uint32_t iter) { + // Versions of GCC prior to 9 do not implement intrinsic function + // _mm_storeu_si64. Check https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87558 + // for more information. 
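    /*
     * The unaligned_int16_t / unaligned_int64_t pointer types used by the
     * companion load tests above are presumably typedefs with a relaxed
     * alignment requirement supplied by the test harness. A fully portable
     * way to express the same kind of misaligned 64-bit access, shown here
     * only as an illustration (it is not used by the check below), is to go
     * through memcpy from a byte pointer; this assumes <string.h> is
     * available:
     */
    {
        const unsigned char *misaligned =
            (const unsigned char *) impl.mTestIntPointer1 + 1;
        int64_t probe;
        memcpy(&probe, misaligned, sizeof(probe)); /* valid for any alignment */
        (void) probe; /* the value itself is not checked here */
    }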
+#if (defined(__GNUC__) && !defined(__clang__)) && (__GNUC__ < 9) + return TEST_UNIMPL; +#else const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; - __m128i b; - __m128i a = do_mm_load_ps(_a); + __m128i b = _mm_setzero_si128(); + __m128i a = load_m128i(_a); _mm_storeu_si64(&b, a); int64_t *_b = (int64_t *) &b; int64_t *_c = (int64_t *) &a; return validateInt64(b, _c[0], _b[1]); +#endif } result_t test_mm_stream_pi(const SSE2NEONTestImpl &impl, uint32_t iter) { const int64_t *_a = (const int64_t *) impl.mTestIntPointer1; - __m64 a = do_mm_load_m64(_a); + __m64 a = load_m64(_a); __m64 p; _mm_stream_pi(&p, a); @@ -2769,8 +3134,8 @@ result_t test_mm_stream_pi(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_stream_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; - __m128 a = do_mm_load_ps(_a); - float p[4]; + __m128 a = load_m128(_a); + alignas(16) float p[4]; _mm_stream_ps(p, a); ASSERT_RETURN(p[0] == _a[0]); @@ -2789,8 +3154,8 @@ result_t test_mm_sub_ps(const SSE2NEONTestImpl &impl, uint32_t iter) float dz = _a[2] - _b[2]; float dw = _a[3] - _b[3]; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 c = _mm_sub_ps(a, b); return validateFloat(c, dx, dy, dz, dw); } @@ -2804,8 +3169,8 @@ result_t test_mm_sub_ss(const SSE2NEONTestImpl &impl, uint32_t iter) float dz = _a[2]; float dw = _a[3]; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 c = _mm_sub_ss(a, b); return validateFloat(c, dx, dy, dz, dw); } @@ -2848,7 +3213,9 @@ result_t test_mm_ucomineq_ss(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_undefined_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + __m128 a = _mm_undefined_ps(); + a = _mm_xor_ps(a, a); + return validateFloat(a, 0, 0, 0, 0); } result_t test_mm_unpackhi_ps(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -2894,8 +3261,8 @@ result_t test_mm_xor_ps(const SSE2NEONTestImpl &impl, uint32_t iter) int32_t d2 = _a[2] ^ _b[2]; int32_t d3 = _a[3] ^ _b[3]; - __m128 a = do_mm_load_ps((const float *) _a); - __m128 b = do_mm_load_ps((const float *) _b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 c = _mm_xor_ps(a, b); return validateFloat(c, *((float *) &d0), *((float *) &d1), @@ -2908,35 +3275,37 @@ result_t test_mm_add_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; const int16_t *_b = (const int16_t *) impl.mTestIntPointer2; - int16_t d0 = _a[0] + _b[0]; - int16_t d1 = _a[1] + _b[1]; - int16_t d2 = _a[2] + _b[2]; - int16_t d3 = _a[3] + _b[3]; - int16_t d4 = _a[4] + _b[4]; - int16_t d5 = _a[5] + _b[5]; - int16_t d6 = _a[6] + _b[6]; - int16_t d7 = _a[7] + _b[7]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int16_t d[8]; + d[0] = _a[0] + _b[0]; + d[1] = _a[1] + _b[1]; + d[2] = _a[2] + _b[2]; + d[3] = _a[3] + _b[3]; + d[4] = _a[4] + _b[4]; + d[5] = _a[5] + _b[5]; + d[6] = _a[6] + _b[6]; + d[7] = _a[7] + _b[7]; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_add_epi16(a, b); - return validateInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_INT16_M128(c, d); } result_t test_mm_add_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = impl.mTestIntPointer1; const int32_t *_b = impl.mTestIntPointer2; - int32_t dx = _a[0] + _b[0]; - 
int32_t dy = _a[1] + _b[1]; - int32_t dz = _a[2] + _b[2]; - int32_t dw = _a[3] + _b[3]; + int32_t d[4]; + d[0] = _a[0] + _b[0]; + d[1] = _a[1] + _b[1]; + d[2] = _a[2] + _b[2]; + d[3] = _a[3] + _b[3]; - __m128i a = do_mm_load_ps(_a); - __m128i b = do_mm_load_ps(_b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_add_epi32(a, b); - return validateInt32(c, dx, dy, dz, dw); + return VALIDATE_INT32_M128(c, d); } result_t test_mm_add_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -2947,8 +3316,8 @@ result_t test_mm_add_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t d0 = _a[0] + _b[0]; int64_t d1 = _a[1] + _b[1]; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_add_epi64(a, b); return validateInt64(c, d0, d1); @@ -2958,28 +3327,28 @@ result_t test_mm_add_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) { const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; const int8_t *_b = (const int8_t *) impl.mTestIntPointer2; - int8_t d0 = _a[0] + _b[0]; - int8_t d1 = _a[1] + _b[1]; - int8_t d2 = _a[2] + _b[2]; - int8_t d3 = _a[3] + _b[3]; - int8_t d4 = _a[4] + _b[4]; - int8_t d5 = _a[5] + _b[5]; - int8_t d6 = _a[6] + _b[6]; - int8_t d7 = _a[7] + _b[7]; - int8_t d8 = _a[8] + _b[8]; - int8_t d9 = _a[9] + _b[9]; - int8_t d10 = _a[10] + _b[10]; - int8_t d11 = _a[11] + _b[11]; - int8_t d12 = _a[12] + _b[12]; - int8_t d13 = _a[13] + _b[13]; - int8_t d14 = _a[14] + _b[14]; - int8_t d15 = _a[15] + _b[15]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int8_t d[16]; + d[0] = _a[0] + _b[0]; + d[1] = _a[1] + _b[1]; + d[2] = _a[2] + _b[2]; + d[3] = _a[3] + _b[3]; + d[4] = _a[4] + _b[4]; + d[5] = _a[5] + _b[5]; + d[6] = _a[6] + _b[6]; + d[7] = _a[7] + _b[7]; + d[8] = _a[8] + _b[8]; + d[9] = _a[9] + _b[9]; + d[10] = _a[10] + _b[10]; + d[11] = _a[11] + _b[11]; + d[12] = _a[12] + _b[12]; + d[13] = _a[13] + _b[13]; + d[14] = _a[14] + _b[14]; + d[15] = _a[15] + _b[15]; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_add_epi8(a, b); - return validateInt8(c, d0, d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, - d12, d13, d14, d15); + return VALIDATE_INT8_M128(c, d); } result_t test_mm_add_pd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -2989,8 +3358,8 @@ result_t test_mm_add_pd(const SSE2NEONTestImpl &impl, uint32_t iter) double d0 = _a[0] + _b[0]; double d1 = _a[1] + _b[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_add_pd(a, b); return validateDouble(c, d0, d1); } @@ -3002,8 +3371,8 @@ result_t test_mm_add_sd(const SSE2NEONTestImpl &impl, uint32_t iter) double d0 = _a[0] + _b[0]; double d1 = _a[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_add_sd(a, b); return validateDouble(c, d0, d1); } @@ -3015,8 +3384,8 @@ result_t test_mm_add_si64(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t d0 = _a[0] + _b[0]; - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m64 c = _mm_add_si64(a, b); return validateInt64(c, d0); @@ -3026,54 +3395,53 @@ result_t test_mm_adds_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) 
impl.mTestIntPointer1; const int16_t *_b = (const int16_t *) impl.mTestIntPointer2; - int32_t d0 = (int32_t) _a[0] + (int32_t) _b[0]; - if (d0 > 32767) - d0 = 32767; - if (d0 < -32768) - d0 = -32768; - int32_t d1 = (int32_t) _a[1] + (int32_t) _b[1]; - if (d1 > 32767) - d1 = 32767; - if (d1 < -32768) - d1 = -32768; - int32_t d2 = (int32_t) _a[2] + (int32_t) _b[2]; - if (d2 > 32767) - d2 = 32767; - if (d2 < -32768) - d2 = -32768; - int32_t d3 = (int32_t) _a[3] + (int32_t) _b[3]; - if (d3 > 32767) - d3 = 32767; - if (d3 < -32768) - d3 = -32768; - int32_t d4 = (int32_t) _a[4] + (int32_t) _b[4]; - if (d4 > 32767) - d4 = 32767; - if (d4 < -32768) - d4 = -32768; - int32_t d5 = (int32_t) _a[5] + (int32_t) _b[5]; - if (d5 > 32767) - d5 = 32767; - if (d5 < -32768) - d5 = -32768; - int32_t d6 = (int32_t) _a[6] + (int32_t) _b[6]; - if (d6 > 32767) - d6 = 32767; - if (d6 < -32768) - d6 = -32768; - int32_t d7 = (int32_t) _a[7] + (int32_t) _b[7]; - if (d7 > 32767) - d7 = 32767; - if (d7 < -32768) - d7 = -32768; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int32_t d[8]; + d[0] = (int32_t) _a[0] + (int32_t) _b[0]; + if (d[0] > 32767) + d[0] = 32767; + if (d[0] < -32768) + d[0] = -32768; + d[1] = (int32_t) _a[1] + (int32_t) _b[1]; + if (d[1] > 32767) + d[1] = 32767; + if (d[1] < -32768) + d[1] = -32768; + d[2] = (int32_t) _a[2] + (int32_t) _b[2]; + if (d[2] > 32767) + d[2] = 32767; + if (d[2] < -32768) + d[2] = -32768; + d[3] = (int32_t) _a[3] + (int32_t) _b[3]; + if (d[3] > 32767) + d[3] = 32767; + if (d[3] < -32768) + d[3] = -32768; + d[4] = (int32_t) _a[4] + (int32_t) _b[4]; + if (d[4] > 32767) + d[4] = 32767; + if (d[4] < -32768) + d[4] = -32768; + d[5] = (int32_t) _a[5] + (int32_t) _b[5]; + if (d[5] > 32767) + d[5] = 32767; + if (d[5] < -32768) + d[5] = -32768; + d[6] = (int32_t) _a[6] + (int32_t) _b[6]; + if (d[6] > 32767) + d[6] = 32767; + if (d[6] < -32768) + d[6] = -32768; + d[7] = (int32_t) _a[7] + (int32_t) _b[7]; + if (d[7] > 32767) + d[7] = 32767; + if (d[7] < -32768) + d[7] = -32768; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_adds_epi16(a, b); - return validateInt16(c, (int16_t) d0, (int16_t) d1, (int16_t) d2, - (int16_t) d3, (int16_t) d4, (int16_t) d5, (int16_t) d6, - (int16_t) d7); + return VALIDATE_INT16_M128(c, d); } result_t test_mm_adds_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -3090,15 +3458,11 @@ result_t test_mm_adds_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) d[i] = -128; } - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_adds_epi8(a, b); - return validateInt8( - c, (int8_t) d[0], (int8_t) d[1], (int8_t) d[2], (int8_t) d[3], - (int8_t) d[4], (int8_t) d[5], (int8_t) d[6], (int8_t) d[7], - (int8_t) d[8], (int8_t) d[9], (int8_t) d[10], (int8_t) d[11], - (int8_t) d[12], (int8_t) d[13], (int8_t) d[14], (int8_t) d[15]); + return VALIDATE_INT8_M128(c, (int8_t) d); } result_t test_mm_adds_epu16(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -3107,88 +3471,81 @@ result_t test_mm_adds_epu16(const SSE2NEONTestImpl &impl, uint32_t iter) const uint16_t *_a = (const uint16_t *) impl.mTestIntPointer1; const uint16_t *_b = (const uint16_t *) impl.mTestIntPointer2; - uint16_t d0 = - (uint32_t) _a[0] + (uint32_t) _b[0] > max ? max : _a[0] + _b[0]; - uint16_t d1 = - (uint32_t) _a[1] + (uint32_t) _b[1] > max ? 
max : _a[1] + _b[1]; - uint16_t d2 = - (uint32_t) _a[2] + (uint32_t) _b[2] > max ? max : _a[2] + _b[2]; - uint16_t d3 = - (uint32_t) _a[3] + (uint32_t) _b[3] > max ? max : _a[3] + _b[3]; - uint16_t d4 = - (uint32_t) _a[4] + (uint32_t) _b[4] > max ? max : _a[4] + _b[4]; - uint16_t d5 = - (uint32_t) _a[5] + (uint32_t) _b[5] > max ? max : _a[5] + _b[5]; - uint16_t d6 = - (uint32_t) _a[6] + (uint32_t) _b[6] > max ? max : _a[6] + _b[6]; - uint16_t d7 = - (uint32_t) _a[7] + (uint32_t) _b[7] > max ? max : _a[7] + _b[7]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + uint16_t d[8]; + d[0] = (uint32_t) _a[0] + (uint32_t) _b[0] > max ? max : _a[0] + _b[0]; + d[1] = (uint32_t) _a[1] + (uint32_t) _b[1] > max ? max : _a[1] + _b[1]; + d[2] = (uint32_t) _a[2] + (uint32_t) _b[2] > max ? max : _a[2] + _b[2]; + d[3] = (uint32_t) _a[3] + (uint32_t) _b[3] > max ? max : _a[3] + _b[3]; + d[4] = (uint32_t) _a[4] + (uint32_t) _b[4] > max ? max : _a[4] + _b[4]; + d[5] = (uint32_t) _a[5] + (uint32_t) _b[5] > max ? max : _a[5] + _b[5]; + d[6] = (uint32_t) _a[6] + (uint32_t) _b[6] > max ? max : _a[6] + _b[6]; + d[7] = (uint32_t) _a[7] + (uint32_t) _b[7] > max ? max : _a[7] + _b[7]; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_adds_epu16(a, b); - return validateInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_INT16_M128(c, d); } result_t test_mm_adds_epu8(const SSE2NEONTestImpl &impl, uint32_t iter) { const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; const int8_t *_b = (const int8_t *) impl.mTestIntPointer2; - uint8_t d0 = (uint8_t) _a[0] + (uint8_t) _b[0]; - if (d0 < (uint8_t) _a[0]) - d0 = 255; - uint8_t d1 = (uint8_t) _a[1] + (uint8_t) _b[1]; - if (d1 < (uint8_t) _a[1]) - d1 = 255; - uint8_t d2 = (uint8_t) _a[2] + (uint8_t) _b[2]; - if (d2 < (uint8_t) _a[2]) - d2 = 255; - uint8_t d3 = (uint8_t) _a[3] + (uint8_t) _b[3]; - if (d3 < (uint8_t) _a[3]) - d3 = 255; - uint8_t d4 = (uint8_t) _a[4] + (uint8_t) _b[4]; - if (d4 < (uint8_t) _a[4]) - d4 = 255; - uint8_t d5 = (uint8_t) _a[5] + (uint8_t) _b[5]; - if (d5 < (uint8_t) _a[5]) - d5 = 255; - uint8_t d6 = (uint8_t) _a[6] + (uint8_t) _b[6]; - if (d6 < (uint8_t) _a[6]) - d6 = 255; - uint8_t d7 = (uint8_t) _a[7] + (uint8_t) _b[7]; - if (d7 < (uint8_t) _a[7]) - d7 = 255; - uint8_t d8 = (uint8_t) _a[8] + (uint8_t) _b[8]; - if (d8 < (uint8_t) _a[8]) - d8 = 255; - uint8_t d9 = (uint8_t) _a[9] + (uint8_t) _b[9]; - if (d9 < (uint8_t) _a[9]) - d9 = 255; - uint8_t d10 = (uint8_t) _a[10] + (uint8_t) _b[10]; - if (d10 < (uint8_t) _a[10]) - d10 = 255; - uint8_t d11 = (uint8_t) _a[11] + (uint8_t) _b[11]; - if (d11 < (uint8_t) _a[11]) - d11 = 255; - uint8_t d12 = (uint8_t) _a[12] + (uint8_t) _b[12]; - if (d12 < (uint8_t) _a[12]) - d12 = 255; - uint8_t d13 = (uint8_t) _a[13] + (uint8_t) _b[13]; - if (d13 < (uint8_t) _a[13]) - d13 = 255; - uint8_t d14 = (uint8_t) _a[14] + (uint8_t) _b[14]; - if (d14 < (uint8_t) _a[14]) - d14 = 255; - uint8_t d15 = (uint8_t) _a[15] + (uint8_t) _b[15]; - if (d15 < (uint8_t) _a[15]) - d15 = 255; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + uint8_t d[16]; + d[0] = (uint8_t) _a[0] + (uint8_t) _b[0]; + if (d[0] < (uint8_t) _a[0]) + d[0] = 255; + d[1] = (uint8_t) _a[1] + (uint8_t) _b[1]; + if (d[1] < (uint8_t) _a[1]) + d[1] = 255; + d[2] = (uint8_t) _a[2] + (uint8_t) _b[2]; + if (d[2] < (uint8_t) _a[2]) + d[2] = 255; + d[3] = (uint8_t) _a[3] + (uint8_t) _b[3]; + if (d[3] < (uint8_t) _a[3]) + 
d[3] = 255; + d[4] = (uint8_t) _a[4] + (uint8_t) _b[4]; + if (d[4] < (uint8_t) _a[4]) + d[4] = 255; + d[5] = (uint8_t) _a[5] + (uint8_t) _b[5]; + if (d[5] < (uint8_t) _a[5]) + d[5] = 255; + d[6] = (uint8_t) _a[6] + (uint8_t) _b[6]; + if (d[6] < (uint8_t) _a[6]) + d[6] = 255; + d[7] = (uint8_t) _a[7] + (uint8_t) _b[7]; + if (d[7] < (uint8_t) _a[7]) + d[7] = 255; + d[8] = (uint8_t) _a[8] + (uint8_t) _b[8]; + if (d[8] < (uint8_t) _a[8]) + d[8] = 255; + d[9] = (uint8_t) _a[9] + (uint8_t) _b[9]; + if (d[9] < (uint8_t) _a[9]) + d[9] = 255; + d[10] = (uint8_t) _a[10] + (uint8_t) _b[10]; + if (d[10] < (uint8_t) _a[10]) + d[10] = 255; + d[11] = (uint8_t) _a[11] + (uint8_t) _b[11]; + if (d[11] < (uint8_t) _a[11]) + d[11] = 255; + d[12] = (uint8_t) _a[12] + (uint8_t) _b[12]; + if (d[12] < (uint8_t) _a[12]) + d[12] = 255; + d[13] = (uint8_t) _a[13] + (uint8_t) _b[13]; + if (d[13] < (uint8_t) _a[13]) + d[13] = 255; + d[14] = (uint8_t) _a[14] + (uint8_t) _b[14]; + if (d[14] < (uint8_t) _a[14]) + d[14] = 255; + d[15] = (uint8_t) _a[15] + (uint8_t) _b[15]; + if (d[15] < (uint8_t) _a[15]) + d[15] = 255; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_adds_epu8(a, b); - return validateInt8(c, d0, d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, - d12, d13, d14, d15); + return VALIDATE_INT8_M128(c, d); } result_t test_mm_and_pd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -3199,8 +3556,8 @@ result_t test_mm_and_pd(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t d0 = _a[0] & _b[0]; int64_t d1 = _a[1] & _b[1]; - __m128d a = do_mm_load_pd((const double *) _a); - __m128d b = do_mm_load_pd((const double *) _b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_and_pd(a, b); return validateDouble(c, *((double *) &d0), *((double *) &d1)); @@ -3210,23 +3567,24 @@ result_t test_mm_and_si128(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = impl.mTestIntPointer1; const int32_t *_b = impl.mTestIntPointer2; - __m128i a = do_mm_load_ps(_a); - __m128i b = do_mm_load_ps(_b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128 fc = _mm_and_ps(*(const __m128 *) &a, *(const __m128 *) &b); __m128i c = *(const __m128i *) &fc; // now for the assertion... const uint32_t *ia = (const uint32_t *) &a; const uint32_t *ib = (const uint32_t *) &b; - uint32_t r0 = ia[0] & ib[0]; - uint32_t r1 = ia[1] & ib[1]; - uint32_t r2 = ia[2] & ib[2]; - uint32_t r3 = ia[3] & ib[3]; - __m128i ret = do_mm_set_epi32(r3, r2, r1, r0); - result_t r = validateInt32(c, r0, r1, r2, r3); - if (r) { - r = validateInt32(ret, r0, r1, r2, r3); + uint32_t r[4]; + r[0] = ia[0] & ib[0]; + r[1] = ia[1] & ib[1]; + r[2] = ia[2] & ib[2]; + r[3] = ia[3] & ib[3]; + __m128i ret = do_mm_set_epi32(r[3], r[2], r[1], r[0]); + result_t res = VALIDATE_INT32_M128(c, r); + if (res) { + res = VALIDATE_INT32_M128(ret, r); } - return r; + return res; } result_t test_mm_andnot_pd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -3234,8 +3592,8 @@ result_t test_mm_andnot_pd(const SSE2NEONTestImpl &impl, uint32_t iter) const double *_a = (const double *) impl.mTestFloatPointer1; const double *_b = (const double *) impl.mTestFloatPointer2; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_andnot_pd(a, b); // Take AND operation a complement of 'a' and 'b'. 
Bitwise operations are @@ -3252,69 +3610,71 @@ result_t test_mm_andnot_si128(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = impl.mTestIntPointer1; const int32_t *_b = impl.mTestIntPointer2; - result_t r = TEST_SUCCESS; - __m128i a = do_mm_load_ps(_a); - __m128i b = do_mm_load_ps(_b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128 fc = _mm_andnot_ps(*(const __m128 *) &a, *(const __m128 *) &b); __m128i c = *(const __m128i *) &fc; // now for the assertion... const uint32_t *ia = (const uint32_t *) &a; const uint32_t *ib = (const uint32_t *) &b; - uint32_t r0 = ~ia[0] & ib[0]; - uint32_t r1 = ~ia[1] & ib[1]; - uint32_t r2 = ~ia[2] & ib[2]; - uint32_t r3 = ~ia[3] & ib[3]; - __m128i ret = do_mm_set_epi32(r3, r2, r1, r0); - r = validateInt32(c, r0, r1, r2, r3); - if (r) { - r = validateInt32(ret, r0, r1, r2, r3); + uint32_t r[4]; + r[0] = ~ia[0] & ib[0]; + r[1] = ~ia[1] & ib[1]; + r[2] = ~ia[2] & ib[2]; + r[3] = ~ia[3] & ib[3]; + __m128i ret = do_mm_set_epi32(r[3], r[2], r[1], r[0]); + result_t res = TEST_SUCCESS; + res = VALIDATE_INT32_M128(c, r); + if (res) { + res = VALIDATE_INT32_M128(ret, r); } - return r; + return res; } result_t test_mm_avg_epu16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; const int16_t *_b = (const int16_t *) impl.mTestIntPointer2; - uint16_t d0 = ((uint16_t) _a[0] + (uint16_t) _b[0] + 1) >> 1; - uint16_t d1 = ((uint16_t) _a[1] + (uint16_t) _b[1] + 1) >> 1; - uint16_t d2 = ((uint16_t) _a[2] + (uint16_t) _b[2] + 1) >> 1; - uint16_t d3 = ((uint16_t) _a[3] + (uint16_t) _b[3] + 1) >> 1; - uint16_t d4 = ((uint16_t) _a[4] + (uint16_t) _b[4] + 1) >> 1; - uint16_t d5 = ((uint16_t) _a[5] + (uint16_t) _b[5] + 1) >> 1; - uint16_t d6 = ((uint16_t) _a[6] + (uint16_t) _b[6] + 1) >> 1; - uint16_t d7 = ((uint16_t) _a[7] + (uint16_t) _b[7] + 1) >> 1; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + uint16_t d[8]; + d[0] = ((uint16_t) _a[0] + (uint16_t) _b[0] + 1) >> 1; + d[1] = ((uint16_t) _a[1] + (uint16_t) _b[1] + 1) >> 1; + d[2] = ((uint16_t) _a[2] + (uint16_t) _b[2] + 1) >> 1; + d[3] = ((uint16_t) _a[3] + (uint16_t) _b[3] + 1) >> 1; + d[4] = ((uint16_t) _a[4] + (uint16_t) _b[4] + 1) >> 1; + d[5] = ((uint16_t) _a[5] + (uint16_t) _b[5] + 1) >> 1; + d[6] = ((uint16_t) _a[6] + (uint16_t) _b[6] + 1) >> 1; + d[7] = ((uint16_t) _a[7] + (uint16_t) _b[7] + 1) >> 1; + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_avg_epu16(a, b); - return validateUInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_UINT16_M128(c, d); } result_t test_mm_avg_epu8(const SSE2NEONTestImpl &impl, uint32_t iter) { const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; const int8_t *_b = (const int8_t *) impl.mTestIntPointer2; - uint8_t d0 = ((uint8_t) _a[0] + (uint8_t) _b[0] + 1) >> 1; - uint8_t d1 = ((uint8_t) _a[1] + (uint8_t) _b[1] + 1) >> 1; - uint8_t d2 = ((uint8_t) _a[2] + (uint8_t) _b[2] + 1) >> 1; - uint8_t d3 = ((uint8_t) _a[3] + (uint8_t) _b[3] + 1) >> 1; - uint8_t d4 = ((uint8_t) _a[4] + (uint8_t) _b[4] + 1) >> 1; - uint8_t d5 = ((uint8_t) _a[5] + (uint8_t) _b[5] + 1) >> 1; - uint8_t d6 = ((uint8_t) _a[6] + (uint8_t) _b[6] + 1) >> 1; - uint8_t d7 = ((uint8_t) _a[7] + (uint8_t) _b[7] + 1) >> 1; - uint8_t d8 = ((uint8_t) _a[8] + (uint8_t) _b[8] + 1) >> 1; - uint8_t d9 = ((uint8_t) _a[9] + (uint8_t) _b[9] + 1) >> 1; - uint8_t d10 = ((uint8_t) _a[10] + (uint8_t) _b[10] + 1) >> 1; - uint8_t d11 = ((uint8_t) _a[11] + 
(uint8_t) _b[11] + 1) >> 1; - uint8_t d12 = ((uint8_t) _a[12] + (uint8_t) _b[12] + 1) >> 1; - uint8_t d13 = ((uint8_t) _a[13] + (uint8_t) _b[13] + 1) >> 1; - uint8_t d14 = ((uint8_t) _a[14] + (uint8_t) _b[14] + 1) >> 1; - uint8_t d15 = ((uint8_t) _a[15] + (uint8_t) _b[15] + 1) >> 1; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + uint8_t d[16]; + d[0] = ((uint8_t) _a[0] + (uint8_t) _b[0] + 1) >> 1; + d[1] = ((uint8_t) _a[1] + (uint8_t) _b[1] + 1) >> 1; + d[2] = ((uint8_t) _a[2] + (uint8_t) _b[2] + 1) >> 1; + d[3] = ((uint8_t) _a[3] + (uint8_t) _b[3] + 1) >> 1; + d[4] = ((uint8_t) _a[4] + (uint8_t) _b[4] + 1) >> 1; + d[5] = ((uint8_t) _a[5] + (uint8_t) _b[5] + 1) >> 1; + d[6] = ((uint8_t) _a[6] + (uint8_t) _b[6] + 1) >> 1; + d[7] = ((uint8_t) _a[7] + (uint8_t) _b[7] + 1) >> 1; + d[8] = ((uint8_t) _a[8] + (uint8_t) _b[8] + 1) >> 1; + d[9] = ((uint8_t) _a[9] + (uint8_t) _b[9] + 1) >> 1; + d[10] = ((uint8_t) _a[10] + (uint8_t) _b[10] + 1) >> 1; + d[11] = ((uint8_t) _a[11] + (uint8_t) _b[11] + 1) >> 1; + d[12] = ((uint8_t) _a[12] + (uint8_t) _b[12] + 1) >> 1; + d[13] = ((uint8_t) _a[13] + (uint8_t) _b[13] + 1) >> 1; + d[14] = ((uint8_t) _a[14] + (uint8_t) _b[14] + 1) >> 1; + d[15] = ((uint8_t) _a[15] + (uint8_t) _b[15] + 1) >> 1; + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_avg_epu8(a, b); - return validateUInt8(c, d0, d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, - d12, d13, d14, d15); + return VALIDATE_UINT8_M128(c, d); } result_t test_mm_bslli_si128(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -3330,8 +3690,8 @@ result_t test_mm_bsrli_si128(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_castpd_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; - const __m128d a = do_mm_load_pd((const double *) _a); - const __m128 _c = do_mm_load_ps(_a); + const __m128d a = load_m128d(_a); + const __m128 _c = load_m128(_a); __m128 r = _mm_castpd_ps(a); @@ -3341,7 +3701,7 @@ result_t test_mm_castpd_ps(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_castpd_si128(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; - const __m128d a = do_mm_load_pd((const double *) _a); + const __m128d a = load_m128d(_a); const __m128i *_c = (const __m128i *) _a; __m128i r = _mm_castpd_si128(a); @@ -3352,7 +3712,7 @@ result_t test_mm_castpd_si128(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_castps_pd(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; - const __m128 a = do_mm_load_ps(_a); + const __m128 a = load_m128(_a); const __m128d *_c = (const __m128d *) _a; __m128d r = _mm_castps_pd(a); @@ -3366,7 +3726,7 @@ result_t test_mm_castps_si128(const SSE2NEONTestImpl &impl, uint32_t iter) const __m128i *_c = (const __m128i *) _a; - const __m128 a = do_mm_load_ps(_a); + const __m128 a = load_m128(_a); __m128i r = _mm_castps_si128(a); return validate128(r, *_c); @@ -3378,7 +3738,7 @@ result_t test_mm_castsi128_pd(const SSE2NEONTestImpl &impl, uint32_t iter) const __m128d *_c = (const __m128d *) _a; - const __m128i a = do_mm_load_ps(_a); + const __m128i a = load_m128i(_a); __m128d r = _mm_castsi128_pd(a); return validate128(r, *_c); @@ -3390,7 +3750,7 @@ result_t test_mm_castsi128_ps(const SSE2NEONTestImpl &impl, uint32_t iter) const __m128 *_c = (const __m128 *) _a; - const __m128i a = do_mm_load_ps(_a); + const __m128i a = load_m128i(_a); __m128 r = _mm_castsi128_ps(a); return 
validate128(r, *_c); @@ -3398,26 +3758,28 @@ result_t test_mm_castsi128_ps(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_clflush(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + /* FIXME: Assume that we have portable mechanisms to flush cache. */ + return TEST_SUCCESS; } result_t test_mm_cmpeq_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; const int16_t *_b = (const int16_t *) impl.mTestIntPointer2; - int16_t d0 = (_a[0] == _b[0]) ? ~UINT16_C(0) : 0x0; - int16_t d1 = (_a[1] == _b[1]) ? ~UINT16_C(0) : 0x0; - int16_t d2 = (_a[2] == _b[2]) ? ~UINT16_C(0) : 0x0; - int16_t d3 = (_a[3] == _b[3]) ? ~UINT16_C(0) : 0x0; - int16_t d4 = (_a[4] == _b[4]) ? ~UINT16_C(0) : 0x0; - int16_t d5 = (_a[5] == _b[5]) ? ~UINT16_C(0) : 0x0; - int16_t d6 = (_a[6] == _b[6]) ? ~UINT16_C(0) : 0x0; - int16_t d7 = (_a[7] == _b[7]) ? ~UINT16_C(0) : 0x0; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int16_t d[8]; + d[0] = (_a[0] == _b[0]) ? ~UINT16_C(0) : 0x0; + d[1] = (_a[1] == _b[1]) ? ~UINT16_C(0) : 0x0; + d[2] = (_a[2] == _b[2]) ? ~UINT16_C(0) : 0x0; + d[3] = (_a[3] == _b[3]) ? ~UINT16_C(0) : 0x0; + d[4] = (_a[4] == _b[4]) ? ~UINT16_C(0) : 0x0; + d[5] = (_a[5] == _b[5]) ? ~UINT16_C(0) : 0x0; + d[6] = (_a[6] == _b[6]) ? ~UINT16_C(0) : 0x0; + d[7] = (_a[7] == _b[7]) ? ~UINT16_C(0) : 0x0; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_cmpeq_epi16(a, b); - return validateInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_INT16_M128(c, d); } result_t test_mm_cmpeq_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -3425,44 +3787,45 @@ result_t test_mm_cmpeq_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) const int32_t *_a = impl.mTestIntPointer1; const int32_t *_b = impl.mTestIntPointer2; - int32_t d0 = (_a[0] == _b[0]) ? ~UINT32_C(0) : 0x0; - int32_t d1 = (_a[1] == _b[1]) ? ~UINT32_C(0) : 0x0; - int32_t d2 = (_a[2] == _b[2]) ? ~UINT32_C(0) : 0x0; - int32_t d3 = (_a[3] == _b[3]) ? ~UINT32_C(0) : 0x0; + int32_t d[4]; + d[0] = (_a[0] == _b[0]) ? ~UINT32_C(0) : 0x0; + d[1] = (_a[1] == _b[1]) ? ~UINT32_C(0) : 0x0; + d[2] = (_a[2] == _b[2]) ? ~UINT32_C(0) : 0x0; + d[3] = (_a[3] == _b[3]) ? ~UINT32_C(0) : 0x0; - __m128i a = do_mm_load_ps(_a); - __m128i b = do_mm_load_ps(_b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_cmpeq_epi32(a, b); - return validateInt32(c, d0, d1, d2, d3); + return VALIDATE_INT32_M128(c, d); } result_t test_mm_cmpeq_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) { const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; const int8_t *_b = (const int8_t *) impl.mTestIntPointer2; - int8_t d0 = (_a[0] == _b[0]) ? ~UINT8_C(0) : 0x00; - int8_t d1 = (_a[1] == _b[1]) ? ~UINT8_C(0) : 0x00; - int8_t d2 = (_a[2] == _b[2]) ? ~UINT8_C(0) : 0x00; - int8_t d3 = (_a[3] == _b[3]) ? ~UINT8_C(0) : 0x00; - int8_t d4 = (_a[4] == _b[4]) ? ~UINT8_C(0) : 0x00; - int8_t d5 = (_a[5] == _b[5]) ? ~UINT8_C(0) : 0x00; - int8_t d6 = (_a[6] == _b[6]) ? ~UINT8_C(0) : 0x00; - int8_t d7 = (_a[7] == _b[7]) ? ~UINT8_C(0) : 0x00; - int8_t d8 = (_a[8] == _b[8]) ? ~UINT8_C(0) : 0x00; - int8_t d9 = (_a[9] == _b[9]) ? ~UINT8_C(0) : 0x00; - int8_t d10 = (_a[10] == _b[10]) ? ~UINT8_C(0) : 0x00; - int8_t d11 = (_a[11] == _b[11]) ? ~UINT8_C(0) : 0x00; - int8_t d12 = (_a[12] == _b[12]) ? ~UINT8_C(0) : 0x00; - int8_t d13 = (_a[13] == _b[13]) ? ~UINT8_C(0) : 0x00; - int8_t d14 = (_a[14] == _b[14]) ? 
~UINT8_C(0) : 0x00; - int8_t d15 = (_a[15] == _b[15]) ? ~UINT8_C(0) : 0x00; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int8_t d[16]; + d[0] = (_a[0] == _b[0]) ? ~UINT8_C(0) : 0x00; + d[1] = (_a[1] == _b[1]) ? ~UINT8_C(0) : 0x00; + d[2] = (_a[2] == _b[2]) ? ~UINT8_C(0) : 0x00; + d[3] = (_a[3] == _b[3]) ? ~UINT8_C(0) : 0x00; + d[4] = (_a[4] == _b[4]) ? ~UINT8_C(0) : 0x00; + d[5] = (_a[5] == _b[5]) ? ~UINT8_C(0) : 0x00; + d[6] = (_a[6] == _b[6]) ? ~UINT8_C(0) : 0x00; + d[7] = (_a[7] == _b[7]) ? ~UINT8_C(0) : 0x00; + d[8] = (_a[8] == _b[8]) ? ~UINT8_C(0) : 0x00; + d[9] = (_a[9] == _b[9]) ? ~UINT8_C(0) : 0x00; + d[10] = (_a[10] == _b[10]) ? ~UINT8_C(0) : 0x00; + d[11] = (_a[11] == _b[11]) ? ~UINT8_C(0) : 0x00; + d[12] = (_a[12] == _b[12]) ? ~UINT8_C(0) : 0x00; + d[13] = (_a[13] == _b[13]) ? ~UINT8_C(0) : 0x00; + d[14] = (_a[14] == _b[14]) ? ~UINT8_C(0) : 0x00; + d[15] = (_a[15] == _b[15]) ? ~UINT8_C(0) : 0x00; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_cmpeq_epi8(a, b); - return validateInt8(c, d0, d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, - d12, d13, d14, d15); + return VALIDATE_INT8_M128(c, d); } result_t test_mm_cmpeq_pd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -3472,8 +3835,8 @@ result_t test_mm_cmpeq_pd(const SSE2NEONTestImpl &impl, uint32_t iter) uint64_t d0 = (_a[0] == _b[0]) ? 0xffffffffffffffff : 0; uint64_t d1 = (_a[1] == _b[1]) ? 0xffffffffffffffff : 0; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_cmpeq_pd(a, b); return validateDouble(c, *(double *) &d0, *(double *) &d1); } @@ -3485,8 +3848,8 @@ result_t test_mm_cmpeq_sd(const SSE2NEONTestImpl &impl, uint32_t iter) const uint64_t d0 = (_a[0] == _b[0]) ? ~UINT64_C(0) : 0; const uint64_t d1 = ((const uint64_t *) _a)[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_cmpeq_sd(a, b); return validateDouble(c, *(const double *) &d0, *(const double *) &d1); @@ -3499,8 +3862,8 @@ result_t test_mm_cmpge_pd(const SSE2NEONTestImpl &impl, uint32_t iter) uint64_t d0 = (_a[0] >= _b[0]) ? ~UINT64_C(0) : 0; uint64_t d1 = (_a[1] >= _b[1]) ? ~UINT64_C(0) : 0; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_cmpge_pd(a, b); return validateDouble(c, *(double *) &d0, *(double *) &d1); @@ -3513,8 +3876,8 @@ result_t test_mm_cmpge_sd(const SSE2NEONTestImpl &impl, uint32_t iter) uint64_t d0 = (_a[0] >= _b[0]) ? ~UINT64_C(0) : 0; uint64_t d1 = ((uint64_t *) _a)[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_cmpge_sd(a, b); return validateDouble(c, *(double *) &d0, *(double *) &d1); @@ -3524,28 +3887,29 @@ result_t test_mm_cmpgt_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; const int16_t *_b = (const int16_t *) impl.mTestIntPointer2; - uint16_t d0 = _a[0] > _b[0] ? ~UINT16_C(0) : 0; - uint16_t d1 = _a[1] > _b[1] ? ~UINT16_C(0) : 0; - uint16_t d2 = _a[2] > _b[2] ? ~UINT16_C(0) : 0; - uint16_t d3 = _a[3] > _b[3] ? ~UINT16_C(0) : 0; - uint16_t d4 = _a[4] > _b[4] ? ~UINT16_C(0) : 0; - uint16_t d5 = _a[5] > _b[5] ? ~UINT16_C(0) : 0; - uint16_t d6 = _a[6] > _b[6] ? ~UINT16_C(0) : 0; - uint16_t d7 = _a[7] > _b[7] ? 
~UINT16_C(0) : 0; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + uint16_t d[8]; + d[0] = _a[0] > _b[0] ? ~UINT16_C(0) : 0; + d[1] = _a[1] > _b[1] ? ~UINT16_C(0) : 0; + d[2] = _a[2] > _b[2] ? ~UINT16_C(0) : 0; + d[3] = _a[3] > _b[3] ? ~UINT16_C(0) : 0; + d[4] = _a[4] > _b[4] ? ~UINT16_C(0) : 0; + d[5] = _a[5] > _b[5] ? ~UINT16_C(0) : 0; + d[6] = _a[6] > _b[6] ? ~UINT16_C(0) : 0; + d[7] = _a[7] > _b[7] ? ~UINT16_C(0) : 0; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_cmpgt_epi16(a, b); - return validateInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_INT16_M128(c, d); } result_t test_mm_cmpgt_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = impl.mTestIntPointer1; const int32_t *_b = impl.mTestIntPointer2; - __m128i a = do_mm_load_ps(_a); - __m128i b = do_mm_load_ps(_b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); int32_t result[4]; @@ -3555,35 +3919,35 @@ result_t test_mm_cmpgt_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) result[3] = _a[3] > _b[3] ? -1 : 0; __m128i iret = _mm_cmpgt_epi32(a, b); - return validateInt32(iret, result[0], result[1], result[2], result[3]); + return VALIDATE_INT32_M128(iret, result); } result_t test_mm_cmpgt_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) { const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; const int8_t *_b = (const int8_t *) impl.mTestIntPointer2; - int8_t d0 = (_a[0] > _b[0]) ? ~UINT8_C(0) : 0x00; - int8_t d1 = (_a[1] > _b[1]) ? ~UINT8_C(0) : 0x00; - int8_t d2 = (_a[2] > _b[2]) ? ~UINT8_C(0) : 0x00; - int8_t d3 = (_a[3] > _b[3]) ? ~UINT8_C(0) : 0x00; - int8_t d4 = (_a[4] > _b[4]) ? ~UINT8_C(0) : 0x00; - int8_t d5 = (_a[5] > _b[5]) ? ~UINT8_C(0) : 0x00; - int8_t d6 = (_a[6] > _b[6]) ? ~UINT8_C(0) : 0x00; - int8_t d7 = (_a[7] > _b[7]) ? ~UINT8_C(0) : 0x00; - int8_t d8 = (_a[8] > _b[8]) ? ~UINT8_C(0) : 0x00; - int8_t d9 = (_a[9] > _b[9]) ? ~UINT8_C(0) : 0x00; - int8_t d10 = (_a[10] > _b[10]) ? ~UINT8_C(0) : 0x00; - int8_t d11 = (_a[11] > _b[11]) ? ~UINT8_C(0) : 0x00; - int8_t d12 = (_a[12] > _b[12]) ? ~UINT8_C(0) : 0x00; - int8_t d13 = (_a[13] > _b[13]) ? ~UINT8_C(0) : 0x00; - int8_t d14 = (_a[14] > _b[14]) ? ~UINT8_C(0) : 0x00; - int8_t d15 = (_a[15] > _b[15]) ? ~UINT8_C(0) : 0x00; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int8_t d[16]; + d[0] = (_a[0] > _b[0]) ? ~UINT8_C(0) : 0x00; + d[1] = (_a[1] > _b[1]) ? ~UINT8_C(0) : 0x00; + d[2] = (_a[2] > _b[2]) ? ~UINT8_C(0) : 0x00; + d[3] = (_a[3] > _b[3]) ? ~UINT8_C(0) : 0x00; + d[4] = (_a[4] > _b[4]) ? ~UINT8_C(0) : 0x00; + d[5] = (_a[5] > _b[5]) ? ~UINT8_C(0) : 0x00; + d[6] = (_a[6] > _b[6]) ? ~UINT8_C(0) : 0x00; + d[7] = (_a[7] > _b[7]) ? ~UINT8_C(0) : 0x00; + d[8] = (_a[8] > _b[8]) ? ~UINT8_C(0) : 0x00; + d[9] = (_a[9] > _b[9]) ? ~UINT8_C(0) : 0x00; + d[10] = (_a[10] > _b[10]) ? ~UINT8_C(0) : 0x00; + d[11] = (_a[11] > _b[11]) ? ~UINT8_C(0) : 0x00; + d[12] = (_a[12] > _b[12]) ? ~UINT8_C(0) : 0x00; + d[13] = (_a[13] > _b[13]) ? ~UINT8_C(0) : 0x00; + d[14] = (_a[14] > _b[14]) ? ~UINT8_C(0) : 0x00; + d[15] = (_a[15] > _b[15]) ? 
~UINT8_C(0) : 0x00; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_cmpgt_epi8(a, b); - return validateInt8(c, d0, d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, - d12, d13, d14, d15); + return VALIDATE_INT8_M128(c, d); } result_t test_mm_cmpgt_pd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -3593,8 +3957,8 @@ result_t test_mm_cmpgt_pd(const SSE2NEONTestImpl &impl, uint32_t iter) uint64_t d0 = (_a[0] > _b[0]) ? ~UINT64_C(0) : 0; uint64_t d1 = (_a[1] > _b[1]) ? ~UINT64_C(0) : 0; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_cmpgt_pd(a, b); return validateDouble(c, *(double *) &d0, *(double *) &d1); @@ -3607,8 +3971,8 @@ result_t test_mm_cmpgt_sd(const SSE2NEONTestImpl &impl, uint32_t iter) uint64_t d0 = (_a[0] > _b[0]) ? ~UINT64_C(0) : 0; uint64_t d1 = ((uint64_t *) _a)[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_cmpgt_sd(a, b); return validateDouble(c, *(double *) &d0, *(double *) &d1); @@ -3621,8 +3985,8 @@ result_t test_mm_cmple_pd(const SSE2NEONTestImpl &impl, uint32_t iter) uint64_t d0 = (_a[0] <= _b[0]) ? ~UINT64_C(0) : 0; uint64_t d1 = (_a[1] <= _b[1]) ? ~UINT64_C(0) : 0; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_cmple_pd(a, b); return validateDouble(c, *(double *) &d0, *(double *) &d1); @@ -3635,8 +3999,8 @@ result_t test_mm_cmple_sd(const SSE2NEONTestImpl &impl, uint32_t iter) uint64_t d0 = (_a[0] <= _b[0]) ? ~UINT64_C(0) : 0; uint64_t d1 = ((uint64_t *) _a)[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_cmple_sd(a, b); return validateDouble(c, *(double *) &d0, *(double *) &d1); @@ -3646,28 +4010,29 @@ result_t test_mm_cmplt_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; const int16_t *_b = (const int16_t *) impl.mTestIntPointer2; - uint16_t d0 = _a[0] < _b[0] ? ~UINT16_C(0) : 0; - uint16_t d1 = _a[1] < _b[1] ? ~UINT16_C(0) : 0; - uint16_t d2 = _a[2] < _b[2] ? ~UINT16_C(0) : 0; - uint16_t d3 = _a[3] < _b[3] ? ~UINT16_C(0) : 0; - uint16_t d4 = _a[4] < _b[4] ? ~UINT16_C(0) : 0; - uint16_t d5 = _a[5] < _b[5] ? ~UINT16_C(0) : 0; - uint16_t d6 = _a[6] < _b[6] ? ~UINT16_C(0) : 0; - uint16_t d7 = _a[7] < _b[7] ? ~UINT16_C(0) : 0; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + uint16_t d[8]; + d[0] = _a[0] < _b[0] ? ~UINT16_C(0) : 0; + d[1] = _a[1] < _b[1] ? ~UINT16_C(0) : 0; + d[2] = _a[2] < _b[2] ? ~UINT16_C(0) : 0; + d[3] = _a[3] < _b[3] ? ~UINT16_C(0) : 0; + d[4] = _a[4] < _b[4] ? ~UINT16_C(0) : 0; + d[5] = _a[5] < _b[5] ? ~UINT16_C(0) : 0; + d[6] = _a[6] < _b[6] ? ~UINT16_C(0) : 0; + d[7] = _a[7] < _b[7] ? 
~UINT16_C(0) : 0; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_cmplt_epi16(a, b); - return validateUInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_UINT16_M128(c, d); } result_t test_mm_cmplt_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = impl.mTestIntPointer1; const int32_t *_b = impl.mTestIntPointer2; - __m128i a = do_mm_load_ps(_a); - __m128i b = do_mm_load_ps(_b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); int32_t result[4]; result[0] = _a[0] < _b[0] ? -1 : 0; @@ -3676,35 +4041,35 @@ result_t test_mm_cmplt_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) result[3] = _a[3] < _b[3] ? -1 : 0; __m128i iret = _mm_cmplt_epi32(a, b); - return validateInt32(iret, result[0], result[1], result[2], result[3]); + return VALIDATE_INT32_M128(iret, result); } result_t test_mm_cmplt_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) { const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; const int8_t *_b = (const int8_t *) impl.mTestIntPointer2; - int8_t d0 = (_a[0] < _b[0]) ? ~UINT8_C(0) : 0x00; - int8_t d1 = (_a[1] < _b[1]) ? ~UINT8_C(0) : 0x00; - int8_t d2 = (_a[2] < _b[2]) ? ~UINT8_C(0) : 0x00; - int8_t d3 = (_a[3] < _b[3]) ? ~UINT8_C(0) : 0x00; - int8_t d4 = (_a[4] < _b[4]) ? ~UINT8_C(0) : 0x00; - int8_t d5 = (_a[5] < _b[5]) ? ~UINT8_C(0) : 0x00; - int8_t d6 = (_a[6] < _b[6]) ? ~UINT8_C(0) : 0x00; - int8_t d7 = (_a[7] < _b[7]) ? ~UINT8_C(0) : 0x00; - int8_t d8 = (_a[8] < _b[8]) ? ~UINT8_C(0) : 0x00; - int8_t d9 = (_a[9] < _b[9]) ? ~UINT8_C(0) : 0x00; - int8_t d10 = (_a[10] < _b[10]) ? ~UINT8_C(0) : 0x00; - int8_t d11 = (_a[11] < _b[11]) ? ~UINT8_C(0) : 0x00; - int8_t d12 = (_a[12] < _b[12]) ? ~UINT8_C(0) : 0x00; - int8_t d13 = (_a[13] < _b[13]) ? ~UINT8_C(0) : 0x00; - int8_t d14 = (_a[14] < _b[14]) ? ~UINT8_C(0) : 0x00; - int8_t d15 = (_a[15] < _b[15]) ? ~UINT8_C(0) : 0x00; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int8_t d[16]; + d[0] = (_a[0] < _b[0]) ? ~UINT8_C(0) : 0x00; + d[1] = (_a[1] < _b[1]) ? ~UINT8_C(0) : 0x00; + d[2] = (_a[2] < _b[2]) ? ~UINT8_C(0) : 0x00; + d[3] = (_a[3] < _b[3]) ? ~UINT8_C(0) : 0x00; + d[4] = (_a[4] < _b[4]) ? ~UINT8_C(0) : 0x00; + d[5] = (_a[5] < _b[5]) ? ~UINT8_C(0) : 0x00; + d[6] = (_a[6] < _b[6]) ? ~UINT8_C(0) : 0x00; + d[7] = (_a[7] < _b[7]) ? ~UINT8_C(0) : 0x00; + d[8] = (_a[8] < _b[8]) ? ~UINT8_C(0) : 0x00; + d[9] = (_a[9] < _b[9]) ? ~UINT8_C(0) : 0x00; + d[10] = (_a[10] < _b[10]) ? ~UINT8_C(0) : 0x00; + d[11] = (_a[11] < _b[11]) ? ~UINT8_C(0) : 0x00; + d[12] = (_a[12] < _b[12]) ? ~UINT8_C(0) : 0x00; + d[13] = (_a[13] < _b[13]) ? ~UINT8_C(0) : 0x00; + d[14] = (_a[14] < _b[14]) ? ~UINT8_C(0) : 0x00; + d[15] = (_a[15] < _b[15]) ? ~UINT8_C(0) : 0x00; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_cmplt_epi8(a, b); - return validateInt8(c, d0, d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, - d12, d13, d14, d15); + return VALIDATE_INT8_M128(c, d); } result_t test_mm_cmplt_pd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -3715,8 +4080,8 @@ result_t test_mm_cmplt_pd(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t f0 = (_a[0] < _b[0]) ? ~UINT64_C(0) : UINT64_C(0); int64_t f1 = (_a[1] < _b[1]) ? 
~UINT64_C(0) : UINT64_C(0); - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_cmplt_pd(a, b); return validateDouble(c, *(double *) &f0, *(double *) &f1); @@ -3726,11 +4091,11 @@ result_t test_mm_cmplt_sd(const SSE2NEONTestImpl &impl, uint32_t iter) { double *_a = (double *) impl.mTestFloatPointer1; double *_b = (double *) impl.mTestFloatPointer2; - uint64_t d0 = (_a[0] <= _b[0]) ? ~UINT64_C(0) : 0; + uint64_t d0 = (_a[0] < _b[0]) ? ~UINT64_C(0) : 0; uint64_t d1 = ((uint64_t *) _a)[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_cmplt_sd(a, b); return validateDouble(c, *(double *) &d0, *(double *) &d1); @@ -3744,8 +4109,8 @@ result_t test_mm_cmpneq_pd(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t f0 = (_a[0] != _b[0]) ? ~UINT64_C(0) : UINT64_C(0); int64_t f1 = (_a[1] != _b[1]) ? ~UINT64_C(0) : UINT64_C(0); - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_cmpneq_pd(a, b); return validateDouble(c, *(double *) &f0, *(double *) &f1); @@ -3759,8 +4124,8 @@ result_t test_mm_cmpneq_sd(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t f0 = (_a[0] != _b[0]) ? ~UINT64_C(0) : UINT64_C(0); int64_t f1 = ((int64_t *) _a)[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_cmpneq_sd(a, b); return validateDouble(c, *(double *) &f0, *(double *) &f1); @@ -3768,42 +4133,114 @@ result_t test_mm_cmpneq_sd(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_cmpnge_pd(const SSE2NEONTestImpl &impl, uint32_t iter) { - return test_mm_cmplt_pd(impl, iter); + const double *_a = (const double *) impl.mTestFloatPointer1; + const double *_b = (const double *) impl.mTestFloatPointer2; + uint64_t d0 = !(_a[0] >= _b[0]) ? ~UINT64_C(0) : 0; + uint64_t d1 = !(_a[1] >= _b[1]) ? ~UINT64_C(0) : 0; + + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); + __m128d c = _mm_cmpnge_pd(a, b); + + return validateDouble(c, *(double *) &d0, *(double *) &d1); } result_t test_mm_cmpnge_sd(const SSE2NEONTestImpl &impl, uint32_t iter) { - return test_mm_cmplt_sd(impl, iter); + double *_a = (double *) impl.mTestFloatPointer1; + double *_b = (double *) impl.mTestFloatPointer2; + uint64_t d0 = !(_a[0] >= _b[0]) ? ~UINT64_C(0) : 0; + uint64_t d1 = ((uint64_t *) _a)[1]; + + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); + __m128d c = _mm_cmpnge_sd(a, b); + + return validateDouble(c, *(double *) &d0, *(double *) &d1); } result_t test_mm_cmpngt_pd(const SSE2NEONTestImpl &impl, uint32_t iter) { - return test_mm_cmple_pd(impl, iter); + const double *_a = (const double *) impl.mTestFloatPointer1; + const double *_b = (const double *) impl.mTestFloatPointer2; + uint64_t d0 = !(_a[0] > _b[0]) ? ~UINT64_C(0) : 0; + uint64_t d1 = !(_a[1] > _b[1]) ? ~UINT64_C(0) : 0; + + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); + __m128d c = _mm_cmpngt_pd(a, b); + + return validateDouble(c, *(double *) &d0, *(double *) &d1); } result_t test_mm_cmpngt_sd(const SSE2NEONTestImpl &impl, uint32_t iter) { - return test_mm_cmple_sd(impl, iter); + double *_a = (double *) impl.mTestFloatPointer1; + double *_b = (double *) impl.mTestFloatPointer2; + uint64_t d0 = !(_a[0] > _b[0]) ? 
~UINT64_C(0) : 0; + uint64_t d1 = ((uint64_t *) _a)[1]; + + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); + __m128d c = _mm_cmpngt_sd(a, b); + + return validateDouble(c, *(double *) &d0, *(double *) &d1); } result_t test_mm_cmpnle_pd(const SSE2NEONTestImpl &impl, uint32_t iter) { - return test_mm_cmpgt_pd(impl, iter); + const double *_a = (const double *) impl.mTestFloatPointer1; + const double *_b = (const double *) impl.mTestFloatPointer2; + uint64_t d0 = !(_a[0] <= _b[0]) ? ~UINT64_C(0) : 0; + uint64_t d1 = !(_a[1] <= _b[1]) ? ~UINT64_C(0) : 0; + + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); + __m128d c = _mm_cmpnle_pd(a, b); + + return validateDouble(c, *(double *) &d0, *(double *) &d1); } result_t test_mm_cmpnle_sd(const SSE2NEONTestImpl &impl, uint32_t iter) { - return test_mm_cmpgt_sd(impl, iter); + double *_a = (double *) impl.mTestFloatPointer1; + double *_b = (double *) impl.mTestFloatPointer2; + uint64_t d0 = !(_a[0] <= _b[0]) ? ~UINT64_C(0) : 0; + uint64_t d1 = ((uint64_t *) _a)[1]; + + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); + __m128d c = _mm_cmpnle_sd(a, b); + + return validateDouble(c, *(double *) &d0, *(double *) &d1); } result_t test_mm_cmpnlt_pd(const SSE2NEONTestImpl &impl, uint32_t iter) { - return test_mm_cmpge_pd(impl, iter); + const double *_a = (const double *) impl.mTestFloatPointer1; + const double *_b = (const double *) impl.mTestFloatPointer2; + uint64_t d0 = !(_a[0] < _b[0]) ? ~UINT64_C(0) : 0; + uint64_t d1 = !(_a[1] < _b[1]) ? ~UINT64_C(0) : 0; + + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); + __m128d c = _mm_cmpnlt_pd(a, b); + + return validateDouble(c, *(double *) &d0, *(double *) &d1); } result_t test_mm_cmpnlt_sd(const SSE2NEONTestImpl &impl, uint32_t iter) { - return test_mm_cmpge_sd(impl, iter); + double *_a = (double *) impl.mTestFloatPointer1; + double *_b = (double *) impl.mTestFloatPointer2; + uint64_t d0 = !(_a[0] < _b[0]) ? ~UINT64_C(0) : 0; + uint64_t d1 = ((uint64_t *) _a)[1]; + + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); + __m128d c = _mm_cmpnlt_sd(a, b); + + return validateDouble(c, *(double *) &d0, *(double *) &d1); } result_t test_mm_cmpord_pd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -3816,7 +4253,7 @@ result_t test_mm_cmpord_pd(const SSE2NEONTestImpl &impl, uint32_t iter) double result[2]; for (uint32_t i = 0; i < 2; i++) { - result[i] = compord(_a[i], _b[i]); + result[i] = cmp_noNaN(_a[i], _b[i]); } __m128d ret = _mm_cmpord_pd(a, b); @@ -3831,7 +4268,7 @@ result_t test_mm_cmpord_sd(const SSE2NEONTestImpl &impl, uint32_t iter) __m128d a = _mm_load_pd(_a); __m128d b = _mm_load_pd(_b); - double c0 = compord(_a[0], _b[0]); + double c0 = cmp_noNaN(_a[0], _b[0]); double c1 = _a[1]; __m128d ret = _mm_cmpord_sd(a, b); @@ -3845,13 +4282,12 @@ result_t test_mm_cmpunord_pd(const SSE2NEONTestImpl &impl, uint32_t iter) __m128d a = _mm_load_pd(_a); __m128d b = _mm_load_pd(_b); - uint64_t result[2]; - result[0] = !((_a[0] == _a[0]) && (_b[0] == _b[0])) ? UINT64_MAX : 0; - result[1] = !((_a[1] == _a[1]) && (_b[1] == _b[1])) ? 
UINT64_MAX : 0; + double result[2]; + result[0] = cmp_hasNaN(_a[0], _b[0]); + result[1] = cmp_hasNaN(_a[1], _b[1]); __m128d ret = _mm_cmpunord_pd(a, b); - return validateDouble(ret, ((double *) &result)[0], - ((double *) &result)[1]); + return validateDouble(ret, result[0], result[1]); } result_t test_mm_cmpunord_sd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -3861,67 +4297,134 @@ result_t test_mm_cmpunord_sd(const SSE2NEONTestImpl &impl, uint32_t iter) __m128d a = _mm_load_pd(_a); __m128d b = _mm_load_pd(_b); - uint64_t result[2]; - result[0] = !((_a[0] == _a[0]) && (_b[0] == _b[0])) ? UINT64_MAX : 0; - result[1] = ((uint64_t *) _a)[1]; + double result[2]; + result[0] = cmp_hasNaN(_a[0], _b[0]); + result[1] = _a[1]; __m128d ret = _mm_cmpunord_sd(a, b); - return validateDouble(ret, ((double *) &result)[0], - ((double *) &result)[1]); + return validateDouble(ret, result[0], result[1]); } result_t test_mm_comieq_sd(const SSE2NEONTestImpl &impl, uint32_t iter) { + // FIXME: + // The GCC does not implement _mm_comieq_sd correctly. + // See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98612 for more + // information. +#if defined(__GNUC__) && !defined(__clang__) + return TEST_UNIMPL; +#else const double *_a = (const double *) impl.mTestFloatPointer1; const double *_b = (const double *) impl.mTestFloatPointer2; int32_t _c = (_a[0] == _b[0]) ? 1 : 0; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); int32_t c = _mm_comieq_sd(a, b); ASSERT_RETURN(c == _c); return TEST_SUCCESS; +#endif } result_t test_mm_comige_sd(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + const double *_a = (const double *) impl.mTestFloatPointer1; + const double *_b = (const double *) impl.mTestFloatPointer2; + int32_t _c = (_a[0] >= _b[0]) ? 1 : 0; + + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); + int32_t c = _mm_comige_sd(a, b); + + ASSERT_RETURN(c == _c); + return TEST_SUCCESS; } result_t test_mm_comigt_sd(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + const double *_a = (const double *) impl.mTestFloatPointer1; + const double *_b = (const double *) impl.mTestFloatPointer2; + int32_t _c = (_a[0] > _b[0]) ? 1 : 0; + + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); + int32_t c = _mm_comigt_sd(a, b); + + ASSERT_RETURN(c == _c); + return TEST_SUCCESS; } result_t test_mm_comile_sd(const SSE2NEONTestImpl &impl, uint32_t iter) { + // FIXME: + // The GCC does not implement _mm_comile_sd correctly. + // See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98612 for more + // information. +#if defined(__GNUC__) && !defined(__clang__) return TEST_UNIMPL; +#else + const double *_a = (const double *) impl.mTestFloatPointer1; + const double *_b = (const double *) impl.mTestFloatPointer2; + int32_t _c = (_a[0] <= _b[0]) ? 1 : 0; + + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); + int32_t c = _mm_comile_sd(a, b); + + ASSERT_RETURN(c == _c); + return TEST_SUCCESS; +#endif } result_t test_mm_comilt_sd(const SSE2NEONTestImpl &impl, uint32_t iter) { + // FIXME: + // The GCC does not implement _mm_comilt_sd correctly. + // See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98612 for more + // information. +#if defined(__GNUC__) && !defined(__clang__) return TEST_UNIMPL; +#else + const double *_a = (const double *) impl.mTestFloatPointer1; + const double *_b = (const double *) impl.mTestFloatPointer2; + int32_t _c = (_a[0] < _b[0]) ? 
1 : 0; + + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); + int32_t c = _mm_comilt_sd(a, b); + + ASSERT_RETURN(c == _c); + return TEST_SUCCESS; +#endif } result_t test_mm_comineq_sd(const SSE2NEONTestImpl &impl, uint32_t iter) { + // FIXME: + // The GCC does not implement _mm_comineq_sd correctly. + // See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98612 for more + // information. +#if defined(__GNUC__) && !defined(__clang__) + return TEST_UNIMPL; +#else const double *_a = (const double *) impl.mTestFloatPointer1; const double *_b = (const double *) impl.mTestFloatPointer2; int32_t _c = (_a[0] != _b[0]) ? 1 : 0; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); int32_t c = _mm_comineq_sd(a, b); ASSERT_RETURN(c == _c); return TEST_SUCCESS; +#endif } result_t test_mm_cvtepi32_pd(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = impl.mTestIntPointer1; - __m128i a = do_mm_load_ps(_a); + __m128i a = load_m128i(_a); double trun[2] = {(double) _a[0], (double) _a[1]}; __m128d ret = _mm_cvtepi32_pd(a); @@ -3931,7 +4434,7 @@ result_t test_mm_cvtepi32_pd(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_cvtepi32_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = impl.mTestIntPointer1; - __m128i a = do_mm_load_ps(_a); + __m128i a = load_m128i(_a); float trun[4]; for (uint32_t i = 0; i < 4; i++) { trun[i] = (float) _a[i]; @@ -3944,32 +4447,43 @@ result_t test_mm_cvtepi32_ps(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_cvtpd_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const double *_a = (const double *) impl.mTestFloatPointer1; - int32_t d[2]; + int32_t d[2] = {}; switch (iter & 0x3) { case 0: _MM_SET_ROUNDING_MODE(_MM_ROUND_NEAREST); - d[0] = (int32_t)(bankersRounding(_a[0])); - d[1] = (int32_t)(bankersRounding(_a[1])); + d[0] = (int32_t) (bankersRounding(_a[0])); + d[1] = (int32_t) (bankersRounding(_a[1])); break; case 1: _MM_SET_ROUNDING_MODE(_MM_ROUND_DOWN); - d[0] = (int32_t)(floor(_a[0])); - d[1] = (int32_t)(floor(_a[1])); + d[0] = (int32_t) (floor(_a[0])); + d[1] = (int32_t) (floor(_a[1])); break; case 2: _MM_SET_ROUNDING_MODE(_MM_ROUND_UP); - d[0] = (int32_t)(ceil(_a[0])); - d[1] = (int32_t)(ceil(_a[1])); + d[0] = (int32_t) (ceil(_a[0])); + d[1] = (int32_t) (ceil(_a[1])); break; case 3: _MM_SET_ROUNDING_MODE(_MM_ROUND_TOWARD_ZERO); - d[0] = (int32_t)(_a[0]); - d[1] = (int32_t)(_a[1]); + d[0] = (int32_t) (_a[0]); + d[1] = (int32_t) (_a[1]); break; } - __m128d a = do_mm_load_pd(_a); +#if defined(__ARM_FEATURE_FRINT) && !defined(__clang__) + /* Floats that cannot fit into 32-bits should instead return + * indefinite integer value (INT32_MIN). This behaviour is + * currently only emulated when using the round-to-integral + * instructions. 
*/ + for (int i = 0; i < 2; i++) { + if (_a[i] > (float) INT32_MAX || _a[i] < (float) INT32_MIN) + d[i] = INT32_MIN; + } +#endif + + __m128d a = load_m128d(_a); __m128i ret = _mm_cvtpd_epi32(a); return validateInt32(ret, d[0], d[1], 0, 0); @@ -3978,35 +4492,35 @@ result_t test_mm_cvtpd_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_cvtpd_pi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const double *_a = (const double *) impl.mTestFloatPointer1; - int32_t d[2]; + int32_t d[2] = {}; switch (iter & 0x3) { case 0: _MM_SET_ROUNDING_MODE(_MM_ROUND_NEAREST); - d[0] = (int32_t)(bankersRounding(_a[0])); - d[1] = (int32_t)(bankersRounding(_a[1])); + d[0] = (int32_t) (bankersRounding(_a[0])); + d[1] = (int32_t) (bankersRounding(_a[1])); break; case 1: _MM_SET_ROUNDING_MODE(_MM_ROUND_DOWN); - d[0] = (int32_t)(floor(_a[0])); - d[1] = (int32_t)(floor(_a[1])); + d[0] = (int32_t) (floor(_a[0])); + d[1] = (int32_t) (floor(_a[1])); break; case 2: _MM_SET_ROUNDING_MODE(_MM_ROUND_UP); - d[0] = (int32_t)(ceil(_a[0])); - d[1] = (int32_t)(ceil(_a[1])); + d[0] = (int32_t) (ceil(_a[0])); + d[1] = (int32_t) (ceil(_a[1])); break; case 3: _MM_SET_ROUNDING_MODE(_MM_ROUND_TOWARD_ZERO); - d[0] = (int32_t)(_a[0]); - d[1] = (int32_t)(_a[1]); + d[0] = (int32_t) (_a[0]); + d[1] = (int32_t) (_a[1]); break; } - __m128d a = do_mm_load_pd(_a); + __m128d a = load_m128d(_a); __m64 ret = _mm_cvtpd_pi32(a); - return validateInt32(ret, d[0], d[1]); + return VALIDATE_INT32_M64(ret, d); } result_t test_mm_cvtpd_ps(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -4014,7 +4528,7 @@ result_t test_mm_cvtpd_ps(const SSE2NEONTestImpl &impl, uint32_t iter) const double *_a = (const double *) impl.mTestFloatPointer1; float f0 = (float) _a[0]; float f1 = (float) _a[1]; - const __m128d a = do_mm_load_pd(_a); + const __m128d a = load_m128d(_a); __m128 r = _mm_cvtpd_ps(a); @@ -4024,7 +4538,7 @@ result_t test_mm_cvtpd_ps(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_cvtpi32_pd(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = impl.mTestIntPointer1; - __m64 a = do_mm_load_m64((const int64_t *) _a); + __m64 a = load_m64(_a); double trun[2] = {(double) _a[0], (double) _a[1]}; @@ -4036,37 +4550,37 @@ result_t test_mm_cvtpi32_pd(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_cvtps_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); int32_t d[4]; switch (iter & 0x3) { case 0: _MM_SET_ROUNDING_MODE(_MM_ROUND_NEAREST); for (uint32_t i = 0; i < 4; i++) { - d[i] = (int32_t)(bankersRounding(_a[i])); + d[i] = (int32_t) (bankersRounding(_a[i])); } break; case 1: _MM_SET_ROUNDING_MODE(_MM_ROUND_DOWN); for (uint32_t i = 0; i < 4; i++) { - d[i] = (int32_t)(floorf(_a[i])); + d[i] = (int32_t) (floorf(_a[i])); } break; case 2: _MM_SET_ROUNDING_MODE(_MM_ROUND_UP); for (uint32_t i = 0; i < 4; i++) { - d[i] = (int32_t)(ceilf(_a[i])); + d[i] = (int32_t) (ceilf(_a[i])); } break; case 3: _MM_SET_ROUNDING_MODE(_MM_ROUND_TOWARD_ZERO); for (uint32_t i = 0; i < 4; i++) { - d[i] = (int32_t)(_a[i]); + d[i] = (int32_t) (_a[i]); } break; } __m128i ret = _mm_cvtps_epi32(a); - return validateInt32(ret, d[0], d[1], d[2], d[3]); + return VALIDATE_INT32_M128(ret, d); } result_t test_mm_cvtps_pd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -4074,7 +4588,7 @@ result_t test_mm_cvtps_pd(const SSE2NEONTestImpl &impl, uint32_t iter) const float *_a = impl.mTestFloatPointer1; double d0 = (double) _a[0]; double d1 = 
(double) _a[1]; - const __m128 a = do_mm_load_ps(_a); + const __m128 a = load_m128(_a); __m128d r = _mm_cvtps_pd(a); @@ -4101,23 +4615,23 @@ result_t test_mm_cvtsd_si32(const SSE2NEONTestImpl &impl, uint32_t iter) switch (iter & 0x3) { case 0: _MM_SET_ROUNDING_MODE(_MM_ROUND_NEAREST); - d = (int32_t)(bankersRounding(_a[0])); + d = (int32_t) (bankersRounding(_a[0])); break; case 1: _MM_SET_ROUNDING_MODE(_MM_ROUND_DOWN); - d = (int32_t)(floor(_a[0])); + d = (int32_t) (floor(_a[0])); break; case 2: _MM_SET_ROUNDING_MODE(_MM_ROUND_UP); - d = (int32_t)(ceil(_a[0])); + d = (int32_t) (ceil(_a[0])); break; case 3: _MM_SET_ROUNDING_MODE(_MM_ROUND_TOWARD_ZERO); - d = (int32_t)(_a[0]); + d = (int32_t) (_a[0]); break; } - __m128d a = do_mm_load_pd(_a); + __m128d a = load_m128d(_a); int32_t ret = _mm_cvtsd_si32(a); return ret == d ? TEST_SUCCESS : TEST_FAIL; @@ -4126,28 +4640,28 @@ result_t test_mm_cvtsd_si32(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_cvtsd_si64(const SSE2NEONTestImpl &impl, uint32_t iter) { const double *_a = (const double *) impl.mTestFloatPointer1; - int64_t d; + int64_t d = 0; switch (iter & 0x3) { case 0: _MM_SET_ROUNDING_MODE(_MM_ROUND_NEAREST); - d = (int64_t)(bankersRounding(_a[0])); + d = (int64_t) (bankersRounding(_a[0])); break; case 1: _MM_SET_ROUNDING_MODE(_MM_ROUND_DOWN); - d = (int64_t)(floor(_a[0])); + d = (int64_t) (floor(_a[0])); break; case 2: _MM_SET_ROUNDING_MODE(_MM_ROUND_UP); - d = (int64_t)(ceil(_a[0])); + d = (int64_t) (ceil(_a[0])); break; case 3: _MM_SET_ROUNDING_MODE(_MM_ROUND_TOWARD_ZERO); - d = (int64_t)(_a[0]); + d = (int64_t) (_a[0]); break; } - __m128d a = do_mm_load_pd(_a); + __m128d a = load_m128d(_a); int64_t ret = _mm_cvtsd_si64(a); return ret == d ? TEST_SUCCESS : TEST_FAIL; @@ -4163,13 +4677,13 @@ result_t test_mm_cvtsd_ss(const SSE2NEONTestImpl &impl, uint32_t iter) const float *_a = impl.mTestFloatPointer1; const double *_b = (const double *) impl.mTestFloatPointer2; - float f0 = _b[0]; - float f1 = _a[1]; - float f2 = _a[2]; - float f3 = _a[3]; + float f0 = (float) _b[0]; + float f1 = (float) _a[1]; + float f2 = (float) _a[2]; + float f3 = (float) _a[3]; - __m128 a = do_mm_load_ps(_a); - __m128d b = do_mm_load_pd(_b); + __m128 a = load_m128(_a); + __m128d b = load_m128d(_b); __m128 c = _mm_cvtsd_ss(a, b); return validateFloat(c, f0, f1, f2, f3); @@ -4181,7 +4695,7 @@ result_t test_mm_cvtsi128_si32(const SSE2NEONTestImpl &impl, uint32_t iter) int32_t d = _a[0]; - __m128i a = do_mm_load_ps(_a); + __m128i a = load_m128i(_a); int c = _mm_cvtsi128_si32(a); return d == c ? TEST_SUCCESS : TEST_FAIL; @@ -4193,7 +4707,7 @@ result_t test_mm_cvtsi128_si64(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t d = _a[0]; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); int64_t c = _mm_cvtsi128_si64(a); return d == c ? 
TEST_SUCCESS : TEST_FAIL; @@ -4209,7 +4723,7 @@ result_t test_mm_cvtsi32_sd(const SSE2NEONTestImpl &impl, uint32_t iter) const double *_a = (const double *) impl.mTestFloatPointer1; const int32_t b = (const int32_t) impl.mTestInts[iter]; - __m128d a = do_mm_load_pd(_a); + __m128d a = load_m128d(_a); __m128d c = _mm_cvtsi32_sd(a, b); return validateDouble(c, b, _a[1]); @@ -4231,10 +4745,10 @@ result_t test_mm_cvtsi64_sd(const SSE2NEONTestImpl &impl, uint32_t iter) const double *_a = (const double *) impl.mTestFloatPointer1; const int64_t b = (const int64_t) impl.mTestInts[iter]; - __m128d a = do_mm_load_pd(_a); + __m128d a = load_m128d(_a); __m128d c = _mm_cvtsi64_sd(a, b); - return validateDouble(c, b, _a[1]); + return validateDouble(c, (double) b, _a[1]); } result_t test_mm_cvtsi64_si128(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -4265,8 +4779,8 @@ result_t test_mm_cvtss_sd(const SSE2NEONTestImpl &impl, uint32_t iter) double d0 = double(_b[0]); double d1 = _a[1]; - __m128d a = do_mm_load_pd(_a); - __m128 b = do_mm_load_ps(_b); + __m128d a = load_m128d(_a); + __m128 b = load_m128(_b); __m128d c = _mm_cvtss_sd(a, b); return validateDouble(c, d0, d1); } @@ -4275,9 +4789,9 @@ result_t test_mm_cvttpd_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const double *_a = (const double *) impl.mTestFloatPointer1; - __m128d a = do_mm_load_pd(_a); - int32_t d0 = (int32_t)(_a[0]); - int32_t d1 = (int32_t)(_a[1]); + __m128d a = load_m128d(_a); + int32_t d0 = (int32_t) (_a[0]); + int32_t d1 = (int32_t) (_a[1]); __m128i ret = _mm_cvttpd_epi32(a); return validateInt32(ret, d0, d1, 0, 0); @@ -4287,9 +4801,9 @@ result_t test_mm_cvttpd_pi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const double *_a = (const double *) impl.mTestFloatPointer1; - __m128d a = do_mm_load_pd(_a); - int32_t d0 = (int32_t)(_a[0]); - int32_t d1 = (int32_t)(_a[1]); + __m128d a = load_m128d(_a); + int32_t d0 = (int32_t) (_a[0]); + int32_t d1 = (int32_t) (_a[1]); __m64 ret = _mm_cvttpd_pi32(a); return validateInt32(ret, d0, d1); @@ -4298,14 +4812,14 @@ result_t test_mm_cvttpd_pi32(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_cvttps_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); int32_t trun[4]; for (uint32_t i = 0; i < 4; i++) { trun[i] = (int32_t) _a[i]; } __m128i ret = _mm_cvttps_epi32(a); - return validateInt32(ret, trun[0], trun[1], trun[2], trun[3]); + return VALIDATE_INT32_M128(ret, trun); } result_t test_mm_cvttsd_si32(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -4354,8 +4868,8 @@ result_t test_mm_div_pd(const SSE2NEONTestImpl &impl, uint32_t iter) if (_b[1] != 0.0) d1 = _a[1] / _b[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_div_pd(a, b); return validateDouble(c, d0, d1); } @@ -4368,8 +4882,8 @@ result_t test_mm_div_sd(const SSE2NEONTestImpl &impl, uint32_t iter) double d0 = _a[0] / _b[0]; double d1 = _a[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_div_sd(a, b); @@ -4380,8 +4894,8 @@ result_t test_mm_extract_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { uint16_t *_a = (uint16_t *) impl.mTestIntPointer1; const int idx = iter & 0x7; - __m128i a = do_mm_load_ps((const int32_t *) _a); - int c; + __m128i a = load_m128i(_a); + int c = 0; switch (idx) { case 0: c = _mm_extract_epi16(a, 0); 
@@ -4417,22 +4931,28 @@ result_t test_mm_insert_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; const int16_t insert = (int16_t) *impl.mTestIntPointer2; - const int imm8 = 2; - int16_t d[8]; - for (int i = 0; i < 8; i++) { - d[i] = _a[i]; - } - d[imm8] = insert; +#define TEST_IMPL(IDX) \ + int16_t d##IDX[8]; \ + for (int i = 0; i < 8; i++) { \ + d##IDX[i] = _a[i]; \ + } \ + d##IDX[IDX] = insert; \ + \ + __m128i a##IDX = load_m128i(_a); \ + __m128i b##IDX = _mm_insert_epi16(a##IDX, insert, IDX); \ + CHECK_RESULT(VALIDATE_INT16_M128(b##IDX, d##IDX)) + + IMM_8_ITER +#undef TEST_IMPL - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = _mm_insert_epi16(a, insert, imm8); - return validateInt16(b, d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7]); + return TEST_SUCCESS; } result_t test_mm_lfence(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + /* FIXME: Assume that memory barriers always function as intended. */ + return TEST_SUCCESS; } result_t test_mm_load_pd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -4462,7 +4982,7 @@ result_t test_mm_load_si128(const SSE2NEONTestImpl &impl, uint32_t iter) __m128i ret = _mm_load_si128((const __m128i *) addr); - return validateInt32(ret, addr[0], addr[1], addr[2], addr[3]); + return VALIDATE_INT32_M128(ret, addr); } result_t test_mm_load1_pd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -4479,7 +4999,7 @@ result_t test_mm_loadh_pd(const SSE2NEONTestImpl &impl, uint32_t iter) const double *_a = (const double *) impl.mTestFloatPointer1; const double *addr = (const double *) impl.mTestFloatPointer2; - __m128d a = do_mm_load_pd(_a); + __m128d a = load_m128d(_a); __m128d ret = _mm_loadh_pd(a, addr); return validateDouble(ret, _a[0], addr[0]); @@ -4499,7 +5019,7 @@ result_t test_mm_loadl_pd(const SSE2NEONTestImpl &impl, uint32_t iter) const double *_a = (const double *) impl.mTestFloatPointer1; const double *addr = (const double *) impl.mTestFloatPointer2; - __m128d a = do_mm_load_pd(_a); + __m128d a = load_m128d(_a); __m128d ret = _mm_loadl_pd(a, addr); return validateDouble(ret, addr[0], _a[1]); @@ -4523,22 +5043,26 @@ result_t test_mm_loadu_pd(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_loadu_si128(const SSE2NEONTestImpl &impl, uint32_t iter) { - const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; + const unaligned_int32_t *_a = + (const unaligned_int32_t *) (impl.mTestUnalignedInts + 1); __m128i c = _mm_loadu_si128((const __m128i *) _a); - return validateInt32(c, _a[0], _a[1], _a[2], _a[3]); + return VALIDATE_INT32_M128(c, _a); } result_t test_mm_loadu_si32(const SSE2NEONTestImpl &impl, uint32_t iter) { -#if defined(__clang__) - const int32_t *addr = (const int32_t *) impl.mTestIntPointer1; + // The GCC version before 11 does not implement intrinsic function + // _mm_loadu_si32. Check https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95483 + // for more information. 
+#if (defined(__GNUC__) && !defined(__clang__)) && (__GNUC__ <= 10) + return TEST_UNIMPL; +#else + const unaligned_int32_t *addr = + (const unaligned_int32_t *) (impl.mTestUnalignedInts + 1); __m128i ret = _mm_loadu_si32((const void *) addr); return validateInt32(ret, addr[0], 0, 0, 0); -#else - // The intrinsic _mm_loadu_si32() does not exist in GCC - return TEST_UNIMPL; #endif } @@ -4555,15 +5079,16 @@ result_t test_mm_madd_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) int32_t d6 = (int32_t) _a[6] * _b[6]; int32_t d7 = (int32_t) _a[7] * _b[7]; - int32_t e0 = d0 + d1; - int32_t e1 = d2 + d3; - int32_t e2 = d4 + d5; - int32_t e3 = d6 + d7; + int32_t e[4]; + e[0] = d0 + d1; + e[1] = d2 + d3; + e[2] = d4 + d5; + e[3] = d6 + d7; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_madd_epi16(a, b); - return validateInt32(c, e0, e1, e2, e3); + return VALIDATE_INT32_M128(c, e); } result_t test_mm_maskmoveu_si128(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -4572,8 +5097,8 @@ result_t test_mm_maskmoveu_si128(const SSE2NEONTestImpl &impl, uint32_t iter) const uint8_t *_mask = (const uint8_t *) impl.mTestIntPointer2; char mem_addr[16]; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i mask = do_mm_load_ps((const int32_t *) _mask); + __m128i a = load_m128i(_a); + __m128i mask = load_m128i(_mask); _mm_maskmoveu_si128(a, mask, mem_addr); for (int i = 0; i < 16; i++) { @@ -4587,66 +5112,75 @@ result_t test_mm_maskmoveu_si128(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_max_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { +#if (__GNUC__ == 8) || (__GNUC__ == 9 && __GNUC_MINOR__ == 2) +#error Using older gcc versions can lead to an operand mismatch error. This issue affects all versions prior to gcc 10. +#else const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; const int16_t *_b = (const int16_t *) impl.mTestIntPointer2; - int16_t d0 = _a[0] > _b[0] ? _a[0] : _b[0]; - int16_t d1 = _a[1] > _b[1] ? _a[1] : _b[1]; - int16_t d2 = _a[2] > _b[2] ? _a[2] : _b[2]; - int16_t d3 = _a[3] > _b[3] ? _a[3] : _b[3]; - int16_t d4 = _a[4] > _b[4] ? _a[4] : _b[4]; - int16_t d5 = _a[5] > _b[5] ? _a[5] : _b[5]; - int16_t d6 = _a[6] > _b[6] ? _a[6] : _b[6]; - int16_t d7 = _a[7] > _b[7] ? _a[7] : _b[7]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int16_t d[8]; + d[0] = _a[0] > _b[0] ? _a[0] : _b[0]; + d[1] = _a[1] > _b[1] ? _a[1] : _b[1]; + d[2] = _a[2] > _b[2] ? _a[2] : _b[2]; + d[3] = _a[3] > _b[3] ? _a[3] : _b[3]; + d[4] = _a[4] > _b[4] ? _a[4] : _b[4]; + d[5] = _a[5] > _b[5] ? _a[5] : _b[5]; + d[6] = _a[6] > _b[6] ? _a[6] : _b[6]; + d[7] = _a[7] > _b[7] ? _a[7] : _b[7]; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_max_epi16(a, b); - return validateInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_INT16_M128(c, d); +#endif } result_t test_mm_max_epu8(const SSE2NEONTestImpl &impl, uint32_t iter) { +#if (__GNUC__ == 8) || (__GNUC__ == 9 && __GNUC_MINOR__ == 2) +#error Using older gcc versions can lead to an operand mismatch error. This issue affects all versions prior to gcc 10. +#else const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; const int8_t *_b = (const int8_t *) impl.mTestIntPointer2; - uint8_t d0 = ((uint8_t) _a[0] > (uint8_t) _b[0]) ? ((uint8_t) _a[0]) - : ((uint8_t) _b[0]); - uint8_t d1 = ((uint8_t) _a[1] > (uint8_t) _b[1]) ? 
((uint8_t) _a[1]) - : ((uint8_t) _b[1]); - uint8_t d2 = ((uint8_t) _a[2] > (uint8_t) _b[2]) ? ((uint8_t) _a[2]) - : ((uint8_t) _b[2]); - uint8_t d3 = ((uint8_t) _a[3] > (uint8_t) _b[3]) ? ((uint8_t) _a[3]) - : ((uint8_t) _b[3]); - uint8_t d4 = ((uint8_t) _a[4] > (uint8_t) _b[4]) ? ((uint8_t) _a[4]) - : ((uint8_t) _b[4]); - uint8_t d5 = ((uint8_t) _a[5] > (uint8_t) _b[5]) ? ((uint8_t) _a[5]) - : ((uint8_t) _b[5]); - uint8_t d6 = ((uint8_t) _a[6] > (uint8_t) _b[6]) ? ((uint8_t) _a[6]) - : ((uint8_t) _b[6]); - uint8_t d7 = ((uint8_t) _a[7] > (uint8_t) _b[7]) ? ((uint8_t) _a[7]) - : ((uint8_t) _b[7]); - uint8_t d8 = ((uint8_t) _a[8] > (uint8_t) _b[8]) ? ((uint8_t) _a[8]) - : ((uint8_t) _b[8]); - uint8_t d9 = ((uint8_t) _a[9] > (uint8_t) _b[9]) ? ((uint8_t) _a[9]) - : ((uint8_t) _b[9]); - uint8_t d10 = ((uint8_t) _a[10] > (uint8_t) _b[10]) ? ((uint8_t) _a[10]) - : ((uint8_t) _b[10]); - uint8_t d11 = ((uint8_t) _a[11] > (uint8_t) _b[11]) ? ((uint8_t) _a[11]) - : ((uint8_t) _b[11]); - uint8_t d12 = ((uint8_t) _a[12] > (uint8_t) _b[12]) ? ((uint8_t) _a[12]) - : ((uint8_t) _b[12]); - uint8_t d13 = ((uint8_t) _a[13] > (uint8_t) _b[13]) ? ((uint8_t) _a[13]) - : ((uint8_t) _b[13]); - uint8_t d14 = ((uint8_t) _a[14] > (uint8_t) _b[14]) ? ((uint8_t) _a[14]) - : ((uint8_t) _b[14]); - uint8_t d15 = ((uint8_t) _a[15] > (uint8_t) _b[15]) ? ((uint8_t) _a[15]) - : ((uint8_t) _b[15]); - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + uint8_t d[16]; + d[0] = ((uint8_t) _a[0] > (uint8_t) _b[0]) ? ((uint8_t) _a[0]) + : ((uint8_t) _b[0]); + d[1] = ((uint8_t) _a[1] > (uint8_t) _b[1]) ? ((uint8_t) _a[1]) + : ((uint8_t) _b[1]); + d[2] = ((uint8_t) _a[2] > (uint8_t) _b[2]) ? ((uint8_t) _a[2]) + : ((uint8_t) _b[2]); + d[3] = ((uint8_t) _a[3] > (uint8_t) _b[3]) ? ((uint8_t) _a[3]) + : ((uint8_t) _b[3]); + d[4] = ((uint8_t) _a[4] > (uint8_t) _b[4]) ? ((uint8_t) _a[4]) + : ((uint8_t) _b[4]); + d[5] = ((uint8_t) _a[5] > (uint8_t) _b[5]) ? ((uint8_t) _a[5]) + : ((uint8_t) _b[5]); + d[6] = ((uint8_t) _a[6] > (uint8_t) _b[6]) ? ((uint8_t) _a[6]) + : ((uint8_t) _b[6]); + d[7] = ((uint8_t) _a[7] > (uint8_t) _b[7]) ? ((uint8_t) _a[7]) + : ((uint8_t) _b[7]); + d[8] = ((uint8_t) _a[8] > (uint8_t) _b[8]) ? ((uint8_t) _a[8]) + : ((uint8_t) _b[8]); + d[9] = ((uint8_t) _a[9] > (uint8_t) _b[9]) ? ((uint8_t) _a[9]) + : ((uint8_t) _b[9]); + d[10] = ((uint8_t) _a[10] > (uint8_t) _b[10]) ? ((uint8_t) _a[10]) + : ((uint8_t) _b[10]); + d[11] = ((uint8_t) _a[11] > (uint8_t) _b[11]) ? ((uint8_t) _a[11]) + : ((uint8_t) _b[11]); + d[12] = ((uint8_t) _a[12] > (uint8_t) _b[12]) ? ((uint8_t) _a[12]) + : ((uint8_t) _b[12]); + d[13] = ((uint8_t) _a[13] > (uint8_t) _b[13]) ? ((uint8_t) _a[13]) + : ((uint8_t) _b[13]); + d[14] = ((uint8_t) _a[14] > (uint8_t) _b[14]) ? ((uint8_t) _a[14]) + : ((uint8_t) _b[14]); + d[15] = ((uint8_t) _a[15] > (uint8_t) _b[15]) ? ((uint8_t) _a[15]) + : ((uint8_t) _b[15]); + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_max_epu8(a, b); - return validateInt8(c, d0, d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, - d12, d13, d14, d15); + return VALIDATE_INT8_M128(c, d); +#endif } result_t test_mm_max_pd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -4657,8 +5191,8 @@ result_t test_mm_max_pd(const SSE2NEONTestImpl &impl, uint32_t iter) double f0 = _a[0] > _b[0] ? _a[0] : _b[0]; double f1 = _a[1] > _b[1] ? 
_a[1] : _b[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_max_pd(a, b); return validateDouble(c, f0, f1); @@ -4668,11 +5202,11 @@ result_t test_mm_max_sd(const SSE2NEONTestImpl &impl, uint32_t iter) { const double *_a = (const double *) impl.mTestFloatPointer1; const double *_b = (const double *) impl.mTestFloatPointer2; - double d0 = fmax(_a[0], _b[0]); + double d0 = _a[0] > _b[0] ? _a[0] : _b[0]; double d1 = _a[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_max_sd(a, b); return validateDouble(c, d0, d1); @@ -4680,81 +5214,83 @@ result_t test_mm_max_sd(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_mfence(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + /* FIXME: Assume that memory barriers always function as intended. */ + return TEST_SUCCESS; } result_t test_mm_min_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; const int16_t *_b = (const int16_t *) impl.mTestIntPointer2; - int16_t d0 = _a[0] < _b[0] ? _a[0] : _b[0]; - int16_t d1 = _a[1] < _b[1] ? _a[1] : _b[1]; - int16_t d2 = _a[2] < _b[2] ? _a[2] : _b[2]; - int16_t d3 = _a[3] < _b[3] ? _a[3] : _b[3]; - int16_t d4 = _a[4] < _b[4] ? _a[4] : _b[4]; - int16_t d5 = _a[5] < _b[5] ? _a[5] : _b[5]; - int16_t d6 = _a[6] < _b[6] ? _a[6] : _b[6]; - int16_t d7 = _a[7] < _b[7] ? _a[7] : _b[7]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int16_t d[8]; + d[0] = _a[0] < _b[0] ? _a[0] : _b[0]; + d[1] = _a[1] < _b[1] ? _a[1] : _b[1]; + d[2] = _a[2] < _b[2] ? _a[2] : _b[2]; + d[3] = _a[3] < _b[3] ? _a[3] : _b[3]; + d[4] = _a[4] < _b[4] ? _a[4] : _b[4]; + d[5] = _a[5] < _b[5] ? _a[5] : _b[5]; + d[6] = _a[6] < _b[6] ? _a[6] : _b[6]; + d[7] = _a[7] < _b[7] ? _a[7] : _b[7]; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_min_epi16(a, b); - return validateInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_INT16_M128(c, d); } result_t test_mm_min_epu8(const SSE2NEONTestImpl &impl, uint32_t iter) { const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; const int8_t *_b = (const int8_t *) impl.mTestIntPointer2; - uint8_t d0 = + uint8_t d[16]; + d[0] = ((uint8_t) _a[0] < (uint8_t) _b[0]) ? (uint8_t) _a[0] : (uint8_t) _b[0]; - uint8_t d1 = + d[1] = ((uint8_t) _a[1] < (uint8_t) _b[1]) ? (uint8_t) _a[1] : (uint8_t) _b[1]; - uint8_t d2 = + d[2] = ((uint8_t) _a[2] < (uint8_t) _b[2]) ? (uint8_t) _a[2] : (uint8_t) _b[2]; - uint8_t d3 = + d[3] = ((uint8_t) _a[3] < (uint8_t) _b[3]) ? (uint8_t) _a[3] : (uint8_t) _b[3]; - uint8_t d4 = + d[4] = ((uint8_t) _a[4] < (uint8_t) _b[4]) ? (uint8_t) _a[4] : (uint8_t) _b[4]; - uint8_t d5 = + d[5] = ((uint8_t) _a[5] < (uint8_t) _b[5]) ? (uint8_t) _a[5] : (uint8_t) _b[5]; - uint8_t d6 = + d[6] = ((uint8_t) _a[6] < (uint8_t) _b[6]) ? (uint8_t) _a[6] : (uint8_t) _b[6]; - uint8_t d7 = + d[7] = ((uint8_t) _a[7] < (uint8_t) _b[7]) ? (uint8_t) _a[7] : (uint8_t) _b[7]; - uint8_t d8 = + d[8] = ((uint8_t) _a[8] < (uint8_t) _b[8]) ? (uint8_t) _a[8] : (uint8_t) _b[8]; - uint8_t d9 = + d[9] = ((uint8_t) _a[9] < (uint8_t) _b[9]) ? (uint8_t) _a[9] : (uint8_t) _b[9]; - uint8_t d10 = ((uint8_t) _a[10] < (uint8_t) _b[10]) ? (uint8_t) _a[10] - : (uint8_t) _b[10]; - uint8_t d11 = ((uint8_t) _a[11] < (uint8_t) _b[11]) ? 
(uint8_t) _a[11] - : (uint8_t) _b[11]; - uint8_t d12 = ((uint8_t) _a[12] < (uint8_t) _b[12]) ? (uint8_t) _a[12] - : (uint8_t) _b[12]; - uint8_t d13 = ((uint8_t) _a[13] < (uint8_t) _b[13]) ? (uint8_t) _a[13] - : (uint8_t) _b[13]; - uint8_t d14 = ((uint8_t) _a[14] < (uint8_t) _b[14]) ? (uint8_t) _a[14] - : (uint8_t) _b[14]; - uint8_t d15 = ((uint8_t) _a[15] < (uint8_t) _b[15]) ? (uint8_t) _a[15] - : (uint8_t) _b[15]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + d[10] = ((uint8_t) _a[10] < (uint8_t) _b[10]) ? (uint8_t) _a[10] + : (uint8_t) _b[10]; + d[11] = ((uint8_t) _a[11] < (uint8_t) _b[11]) ? (uint8_t) _a[11] + : (uint8_t) _b[11]; + d[12] = ((uint8_t) _a[12] < (uint8_t) _b[12]) ? (uint8_t) _a[12] + : (uint8_t) _b[12]; + d[13] = ((uint8_t) _a[13] < (uint8_t) _b[13]) ? (uint8_t) _a[13] + : (uint8_t) _b[13]; + d[14] = ((uint8_t) _a[14] < (uint8_t) _b[14]) ? (uint8_t) _a[14] + : (uint8_t) _b[14]; + d[15] = ((uint8_t) _a[15] < (uint8_t) _b[15]) ? (uint8_t) _a[15] + : (uint8_t) _b[15]; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_min_epu8(a, b); - return validateInt8(c, d0, d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, - d12, d13, d14, d15); + return VALIDATE_INT8_M128(c, d); } result_t test_mm_min_pd(const SSE2NEONTestImpl &impl, uint32_t iter) { const double *_a = (const double *) impl.mTestFloatPointer1; const double *_b = (const double *) impl.mTestFloatPointer2; - double f0 = fmin(_a[0], _b[0]); - double f1 = fmin(_a[1], _b[1]); + double f0 = _a[0] < _b[0] ? _a[0] : _b[0]; + double f1 = _a[1] < _b[1] ? _a[1] : _b[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_min_pd(a, b); return validateDouble(c, f0, f1); @@ -4764,11 +5300,11 @@ result_t test_mm_min_sd(const SSE2NEONTestImpl &impl, uint32_t iter) { const double *_a = (const double *) impl.mTestFloatPointer1; const double *_b = (const double *) impl.mTestFloatPointer2; - double d0 = fmin(_a[0], _b[0]); + double d0 = _a[0] < _b[0] ? 
_a[0] : _b[0]; double d1 = _a[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_min_sd(a, b); return validateDouble(c, d0, d1); @@ -4781,7 +5317,7 @@ result_t test_mm_move_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t d0 = _a[0]; int64_t d1 = 0; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); __m128i c = _mm_move_epi64(a); return validateInt64(c, d0, d1); @@ -4791,8 +5327,8 @@ result_t test_mm_move_sd(const SSE2NEONTestImpl &impl, uint32_t iter) { const double *_a = (const double *) impl.mTestFloatPointer1; const double *_b = (const double *) impl.mTestFloatPointer2; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); double result[2]; result[0] = _b[0]; @@ -4805,7 +5341,7 @@ result_t test_mm_move_sd(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_movemask_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = impl.mTestIntPointer1; - __m128i a = do_mm_load_ps(_a); + __m128i a = load_m128i(_a); const uint8_t *ip = (const uint8_t *) _a; int ret = 0; @@ -4828,7 +5364,7 @@ result_t test_mm_movemask_pd(const SSE2NEONTestImpl &impl, uint32_t iter) _c |= ((*(const uint64_t *) _a) >> 63) & 0x1; _c |= (((*(const uint64_t *) (_a + 1)) >> 62) & 0x2); - __m128d a = do_mm_load_pd(_a); + __m128d a = load_m128d(_a); int c = _mm_movemask_pd(a); ASSERT_RETURN((unsigned int) c == _c); @@ -4841,7 +5377,7 @@ result_t test_mm_movepi64_pi64(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t d0 = _a[0]; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); __m64 c = _mm_movepi64_pi64(a); return validateInt64(c, d0); @@ -4853,7 +5389,7 @@ result_t test_mm_movpi64_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t d0 = _a[0]; - __m64 a = do_mm_load_m64((const int64_t *) _a); + __m64 a = load_m64(_a); __m128i c = _mm_movpi64_epi64(a); return validateInt64(c, d0, 0); @@ -4863,8 +5399,8 @@ result_t test_mm_mul_epu32(const SSE2NEONTestImpl &impl, uint32_t iter) { const uint32_t *_a = (const uint32_t *) impl.mTestIntPointer1; const uint32_t *_b = (const uint32_t *) impl.mTestIntPointer2; - uint64_t dx = (uint64_t)(_a[0]) * (uint64_t)(_b[0]); - uint64_t dy = (uint64_t)(_a[2]) * (uint64_t)(_b[2]); + uint64_t dx = (uint64_t) (_a[0]) * (uint64_t) (_b[0]); + uint64_t dy = (uint64_t) (_a[2]) * (uint64_t) (_b[2]); __m128i a = _mm_loadu_si128((const __m128i *) _a); __m128i b = _mm_loadu_si128((const __m128i *) _b); @@ -4892,8 +5428,8 @@ result_t test_mm_mul_sd(const SSE2NEONTestImpl &impl, uint32_t iter) double dx = _a[0] * _b[0]; double dy = _a[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_mul_sd(a, b); return validateDouble(c, dx, dy); } @@ -4903,10 +5439,10 @@ result_t test_mm_mul_su32(const SSE2NEONTestImpl &impl, uint32_t iter) const uint32_t *_a = (const uint32_t *) impl.mTestIntPointer1; const uint32_t *_b = (const uint32_t *) impl.mTestIntPointer2; - uint64_t u = (uint64_t)(_a[0]) * (uint64_t)(_b[0]); + uint64_t u = (uint64_t) (_a[0]) * (uint64_t) (_b[0]); - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m64 r = _mm_mul_su32(a, b); return validateUInt64(r, u); @@ -4919,13 +5455,13 @@ result_t test_mm_mulhi_epi16(const SSE2NEONTestImpl 
&impl, uint32_t iter) int16_t d[8]; for (uint32_t i = 0; i < 8; i++) { int32_t m = (int32_t) _a[i] * (int32_t) _b[i]; - d[i] = (int16_t)(m >> 16); + d[i] = (int16_t) (m >> 16); } - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_mulhi_epi16(a, b); - return validateInt16(c, d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7]); + return VALIDATE_INT16_M128(c, d); } result_t test_mm_mulhi_epu16(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -4935,32 +5471,33 @@ result_t test_mm_mulhi_epu16(const SSE2NEONTestImpl &impl, uint32_t iter) uint16_t d[8]; for (uint32_t i = 0; i < 8; i++) { uint32_t m = (uint32_t) _a[i] * (uint32_t) _b[i]; - d[i] = (uint16_t)(m >> 16); + d[i] = (uint16_t) (m >> 16); } - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_mulhi_epu16(a, b); - return validateInt16(c, d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7]); + return VALIDATE_INT16_M128(c, d); } result_t test_mm_mullo_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; const int16_t *_b = (const int16_t *) impl.mTestIntPointer2; - int16_t d0 = _a[0] * _b[0]; - int16_t d1 = _a[1] * _b[1]; - int16_t d2 = _a[2] * _b[2]; - int16_t d3 = _a[3] * _b[3]; - int16_t d4 = _a[4] * _b[4]; - int16_t d5 = _a[5] * _b[5]; - int16_t d6 = _a[6] * _b[6]; - int16_t d7 = _a[7] * _b[7]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int16_t d[8]; + d[0] = _a[0] * _b[0]; + d[1] = _a[1] * _b[1]; + d[2] = _a[2] * _b[2]; + d[3] = _a[3] * _b[3]; + d[4] = _a[4] * _b[4]; + d[5] = _a[5] * _b[5]; + d[6] = _a[6] * _b[6]; + d[7] = _a[7] * _b[7]; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_mullo_epi16(a, b); - return validateInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_INT16_M128(c, d); } result_t test_mm_or_pd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -4971,8 +5508,8 @@ result_t test_mm_or_pd(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t d0 = _a[0] | _b[0]; int64_t d1 = _a[1] | _b[1]; - __m128d a = do_mm_load_pd((const double *) _a); - __m128d b = do_mm_load_pd((const double *) _b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_or_pd(a, b); return validateDouble(c, *((double *) &d0), *((double *) &d1)); @@ -4982,23 +5519,24 @@ result_t test_mm_or_si128(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = impl.mTestIntPointer1; const int32_t *_b = impl.mTestIntPointer2; - __m128i a = do_mm_load_ps(_a); - __m128i b = do_mm_load_ps(_b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128 fc = _mm_or_ps(*(const __m128 *) &a, *(const __m128 *) &b); __m128i c = *(const __m128i *) &fc; // now for the assertion... 
const uint32_t *ia = (const uint32_t *) &a; const uint32_t *ib = (const uint32_t *) &b; - uint32_t r0 = ia[0] | ib[0]; - uint32_t r1 = ia[1] | ib[1]; - uint32_t r2 = ia[2] | ib[2]; - uint32_t r3 = ia[3] | ib[3]; - __m128i ret = do_mm_set_epi32(r3, r2, r1, r0); - result_t r = validateInt32(c, r0, r1, r2, r3); - if (r) { - r = validateInt32(ret, r0, r1, r2, r3); + uint32_t r[4]; + r[0] = ia[0] | ib[0]; + r[1] = ia[1] | ib[1]; + r[2] = ia[2] | ib[2]; + r[3] = ia[3] | ib[3]; + __m128i ret = do_mm_set_epi32(r[3], r[2], r[1], r[0]); + result_t res = VALIDATE_INT32_M128(c, r); + if (res) { + res = VALIDATE_INT32_M128(ret, r); } - return r; + return res; } result_t test_mm_packs_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -5026,12 +5564,11 @@ result_t test_mm_packs_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) d[i + 8] = (int8_t) _b[i]; } - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_packs_epi16(a, b); - return validateInt8(c, d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7], d[8], - d[9], d[10], d[11], d[12], d[13], d[14], d[15]); + return VALIDATE_INT8_M128(c, d); } result_t test_mm_packs_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -5059,11 +5596,11 @@ result_t test_mm_packs_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) d[i + 4] = (int16_t) _b[i]; } - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_packs_epi32(a, b); - return validateInt16(c, d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7]); + return VALIDATE_INT16_M128(c, d); } result_t test_mm_packus_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -5091,12 +5628,11 @@ result_t test_mm_packus_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) d[i + 8] = (uint8_t) _b[i]; } - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_packus_epi16(a, b); - return validateUInt8(c, d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7], - d[8], d[9], d[10], d[11], d[12], d[13], d[14], d[15]); + return VALIDATE_UINT8_M128(c, d); } result_t test_mm_pause(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -5118,8 +5654,8 @@ result_t test_mm_sad_epu8(const SSE2NEONTestImpl &impl, uint32_t iter) d1 += abs(_a[i] - _b[i]); } - const __m128i a = do_mm_load_ps((const int32_t *) _a); - const __m128i b = do_mm_load_ps((const int32_t *) _b); + const __m128i a = load_m128i(_a); + const __m128i b = load_m128i(_b); __m128i c = _mm_sad_epu8(a, b); return validateUInt16(c, d0, 0, 0, 0, d1, 0, 0, 0); } @@ -5127,34 +5663,36 @@ result_t test_mm_sad_epu8(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_set_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; - int16_t d0 = _a[0]; - int16_t d1 = _a[1]; - int16_t d2 = _a[2]; - int16_t d3 = _a[3]; - int16_t d4 = _a[4]; - int16_t d5 = _a[5]; - int16_t d6 = _a[6]; - int16_t d7 = _a[7]; + int16_t d[8]; + d[0] = _a[0]; + d[1] = _a[1]; + d[2] = _a[2]; + d[3] = _a[3]; + d[4] = _a[4]; + d[5] = _a[5]; + d[6] = _a[6]; + d[7] = _a[7]; - __m128i c = _mm_set_epi16(d7, d6, d5, d4, d3, d2, d1, d0); - return validateInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + __m128i c = _mm_set_epi16(d[7], d[6], d[5], d[4], d[3], d[2], d[1], d[0]); + return VALIDATE_INT16_M128(c, d); } 
result_t test_mm_set_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { - int32_t x = impl.mTestInts[iter]; - int32_t y = impl.mTestInts[iter + 1]; - int32_t z = impl.mTestInts[iter + 2]; - int32_t w = impl.mTestInts[iter + 3]; - __m128i a = _mm_set_epi32(x, y, z, w); - return validateInt32(a, w, z, y, x); + int32_t d[4]; + d[3] = impl.mTestInts[iter]; + d[2] = impl.mTestInts[iter + 1]; + d[1] = impl.mTestInts[iter + 2]; + d[0] = impl.mTestInts[iter + 3]; + __m128i a = _mm_set_epi32(d[3], d[2], d[1], d[0]); + return VALIDATE_INT32_M128(a, d); } result_t test_mm_set_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) { const int64_t *_a = (const int64_t *) impl.mTestIntPointer1; - __m128i ret = _mm_set_epi64((__m64) _a[1], (__m64) _a[0]); + __m128i ret = _mm_set_epi64(load_m64(&_a[1]), load_m64(&_a[0])); return validateInt64(ret, _a[0], _a[1]); } @@ -5171,27 +5709,28 @@ result_t test_mm_set_epi64x(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_set_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) { const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; - int8_t d0 = _a[0]; - int8_t d1 = _a[1]; - int8_t d2 = _a[2]; - int8_t d3 = _a[3]; - int8_t d4 = _a[4]; - int8_t d5 = _a[5]; - int8_t d6 = _a[6]; - int8_t d7 = _a[7]; - int8_t d8 = _a[8]; - int8_t d9 = _a[9]; - int8_t d10 = _a[10]; - int8_t d11 = _a[11]; - int8_t d12 = _a[12]; - int8_t d13 = _a[13]; - int8_t d14 = _a[14]; - int8_t d15 = _a[15]; - - __m128i c = _mm_set_epi8(d15, d14, d13, d12, d11, d10, d9, d8, d7, d6, d5, - d4, d3, d2, d1, d0); - return validateInt8(c, d0, d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, - d12, d13, d14, d15); + int8_t d[16]; + d[0] = _a[0]; + d[1] = _a[1]; + d[2] = _a[2]; + d[3] = _a[3]; + d[4] = _a[4]; + d[5] = _a[5]; + d[6] = _a[6]; + d[7] = _a[7]; + d[8] = _a[8]; + d[9] = _a[9]; + d[10] = _a[10]; + d[11] = _a[11]; + d[12] = _a[12]; + d[13] = _a[13]; + d[14] = _a[14]; + d[15] = _a[15]; + + __m128i c = + _mm_set_epi8(d[15], d[14], d[13], d[12], d[11], d[10], d[9], d[8], d[7], + d[6], d[5], d[4], d[3], d[2], d[1], d[0]); + return VALIDATE_INT8_M128(c, d); } result_t test_mm_set_pd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -5243,7 +5782,7 @@ result_t test_mm_set1_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) { const int64_t *_a = (const int64_t *) impl.mTestIntPointer1; - __m128i ret = _mm_set1_epi64((__m64) _a[0]); + __m128i ret = _mm_set1_epi64(load_m64(&_a[0])); return validateInt64(ret, _a[0], _a[0]); } @@ -5281,22 +5820,21 @@ result_t test_mm_setr_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) __m128i c = _mm_setr_epi16(_a[0], _a[1], _a[2], _a[3], _a[4], _a[5], _a[6], _a[7]); - return validateInt16(c, _a[0], _a[1], _a[2], _a[3], _a[4], _a[5], _a[6], - _a[7]); + return VALIDATE_INT16_M128(c, _a); } result_t test_mm_setr_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; __m128i c = _mm_setr_epi32(_a[0], _a[1], _a[2], _a[3]); - return validateInt32(c, _a[0], _a[1], _a[2], _a[3]); + return VALIDATE_INT32_M128(c, _a); } result_t test_mm_setr_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) { - const __m64 *_a = (const __m64 *) impl.mTestIntPointer1; - __m128i c = _mm_setr_epi64(_a[0], _a[1]); - return validateInt64(c, (int64_t) _a[0], (int64_t) _a[1]); + const int64_t *_a = (const int64_t *) impl.mTestIntPointer1; + __m128i c = _mm_setr_epi64(load_m64(&_a[0]), load_m64(&_a[1])); + return validateInt64(c, _a[0], _a[1]); } result_t test_mm_setr_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -5307,9 +5845,7 
@@ result_t test_mm_setr_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) _a[7], _a[8], _a[9], _a[10], _a[11], _a[12], _a[13], _a[14], _a[15]); - return validateInt8(c, _a[0], _a[1], _a[2], _a[3], _a[4], _a[5], _a[6], - _a[7], _a[8], _a[9], _a[10], _a[11], _a[12], _a[13], - _a[14], _a[15]); + return VALIDATE_INT8_M128(c, _a); } result_t test_mm_setr_pd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -5338,217 +5874,259 @@ result_t test_mm_setzero_si128(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_shuffle_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { +#if (__GNUC__ == 8) || (__GNUC__ == 9 && __GNUC_MINOR__ == 2) +#error Using older gcc versions can lead to an operand mismatch error. This issue affects all versions prior to gcc 10. +#else const int32_t *_a = impl.mTestIntPointer1; - const int imm = 105; - - int32_t d0 = _a[((imm) &0x3)]; - int32_t d1 = _a[((imm >> 2) & 0x3)]; - int32_t d2 = _a[((imm >> 4) & 0x3)]; - int32_t d3 = _a[((imm >> 6) & 0x3)]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i c = _mm_shuffle_epi32(a, imm); + __m128i a, c; + int32_t _d[4]; - return validateInt32(c, d0, d1, d2, d3); +#define TEST_IMPL(IDX) \ + _d[0] = _a[((IDX) &0x3)]; \ + _d[1] = _a[((IDX >> 2) & 0x3)]; \ + _d[2] = _a[((IDX >> 4) & 0x3)]; \ + _d[3] = _a[((IDX >> 6) & 0x3)]; \ + \ + a = load_m128i(_a); \ + c = _mm_shuffle_epi32(a, IDX); \ + CHECK_RESULT(VALIDATE_INT32_M128(c, _d)) + + IMM_256_ITER +#undef TEST_IMPL + return TEST_SUCCESS; +#endif } result_t test_mm_shuffle_pd(const SSE2NEONTestImpl &impl, uint32_t iter) { const double *_a = (const double *) impl.mTestFloatPointer1; const double *_b = (const double *) impl.mTestFloatPointer2; - - double d0 = _a[iter & 0x1]; - double d1 = _b[(iter & 0x2) >> 1]; - - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); - __m128d c; - switch (iter & 0x3) { - case 0: - c = _mm_shuffle_pd(a, b, 0); - break; - case 1: - c = _mm_shuffle_pd(a, b, 1); - break; - case 2: - c = _mm_shuffle_pd(a, b, 2); - break; - case 3: - c = _mm_shuffle_pd(a, b, 3); - break; - } - - return validateDouble(c, d0, d1); + __m128d a, b, c; + +#define TEST_IMPL(IDX) \ + a = load_m128d(_a); \ + b = load_m128d(_b); \ + c = _mm_shuffle_pd(a, b, IDX); \ + \ + double d0##IDX = _a[IDX & 0x1]; \ + double d1##IDX = _b[(IDX & 0x2) >> 1]; \ + CHECK_RESULT(validateDouble(c, d0##IDX, d1##IDX)) + + IMM_4_ITER +#undef TEST_IMPL + return TEST_SUCCESS; } result_t test_mm_shufflehi_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { +#if (__GNUC__ == 8) || (__GNUC__ == 9 && __GNUC_MINOR__ == 2) +#error Using older gcc versions can lead to an operand mismatch error. This issue affects all versions prior to gcc 10. 
+#else const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; - const int imm = 112; - - int16_t d0 = _a[0]; - int16_t d1 = _a[1]; - int16_t d2 = _a[2]; - int16_t d3 = _a[3]; - int16_t d4 = ((const int64_t *) _a)[1] >> ((imm & 0x3) * 16); - int16_t d5 = ((const int64_t *) _a)[1] >> (((imm >> 2) & 0x3) * 16); - int16_t d6 = ((const int64_t *) _a)[1] >> (((imm >> 4) & 0x3) * 16); - int16_t d7 = ((const int64_t *) _a)[1] >> (((imm >> 6) & 0x3) * 16); - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i c = _mm_shufflehi_epi16(a, imm); - - return validateInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + __m128i a, c; + + int16_t _d[8]; +#define TEST_IMPL(IDX) \ + _d[0] = _a[0]; \ + _d[1] = _a[1]; \ + _d[2] = _a[2]; \ + _d[3] = _a[3]; \ + _d[4] = (int16_t) (((const int64_t *) _a)[1] >> ((IDX & 0x3) * 16)); \ + _d[5] = \ + (int16_t) (((const int64_t *) _a)[1] >> (((IDX >> 2) & 0x3) * 16)); \ + _d[6] = \ + (int16_t) (((const int64_t *) _a)[1] >> (((IDX >> 4) & 0x3) * 16)); \ + _d[7] = \ + (int16_t) (((const int64_t *) _a)[1] >> (((IDX >> 6) & 0x3) * 16)); \ + \ + a = load_m128i(_a); \ + c = _mm_shufflehi_epi16(a, IDX); \ + \ + CHECK_RESULT(VALIDATE_INT16_M128(c, _d)) + + IMM_256_ITER +#undef TEST_IMPL + return TEST_SUCCESS; +#endif } result_t test_mm_shufflelo_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { +#if (__GNUC__ == 8) || (__GNUC__ == 9 && __GNUC_MINOR__ == 2) +#error Using older gcc versions can lead to an operand mismatch error. This issue affects all versions prior to gcc 10. +#else const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; - const int imm = 112; - - int16_t d0 = ((const int64_t *) _a)[0] >> ((imm & 0x3) * 16); - int16_t d1 = ((const int64_t *) _a)[0] >> (((imm >> 2) & 0x3) * 16); - int16_t d2 = ((const int64_t *) _a)[0] >> (((imm >> 4) & 0x3) * 16); - int16_t d3 = ((const int64_t *) _a)[0] >> (((imm >> 6) & 0x3) * 16); - int16_t d4 = _a[4]; - int16_t d5 = _a[5]; - int16_t d6 = _a[6]; - int16_t d7 = _a[7]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i c = _mm_shufflelo_epi16(a, imm); - - return validateInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + __m128i a, c; + int16_t _d[8]; + +#define TEST_IMPL(IDX) \ + _d[0] = (int16_t) (((const int64_t *) _a)[0] >> ((IDX & 0x3) * 16)); \ + _d[1] = \ + (int16_t) (((const int64_t *) _a)[0] >> (((IDX >> 2) & 0x3) * 16)); \ + _d[2] = \ + (int16_t) (((const int64_t *) _a)[0] >> (((IDX >> 4) & 0x3) * 16)); \ + _d[3] = \ + (int16_t) (((const int64_t *) _a)[0] >> (((IDX >> 6) & 0x3) * 16)); \ + _d[4] = _a[4]; \ + _d[5] = _a[5]; \ + _d[6] = _a[6]; \ + _d[7] = _a[7]; \ + \ + a = load_m128i(_a); \ + c = _mm_shufflelo_epi16(a, IDX); \ + \ + CHECK_RESULT(VALIDATE_INT16_M128(c, _d)) + + IMM_256_ITER +#undef TEST_IMPL + return TEST_SUCCESS; +#endif } result_t test_mm_sll_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; - const int64_t count = (int64_t) iter; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = _mm_set1_epi64x(count); - __m128i c = _mm_sll_epi16(a, b); - if (count < 0 || count > 15) - return validateInt16(c, 0, 0, 0, 0, 0, 0, 0, 0); + __m128i a, b, c; + uint8_t idx; +#define TEST_IMPL(IDX) \ + uint16_t d##IDX[8]; \ + idx = IDX; \ + d##IDX[0] = (idx > 15) ? 0 : _a[0] << idx; \ + d##IDX[1] = (idx > 15) ? 0 : _a[1] << idx; \ + d##IDX[2] = (idx > 15) ? 0 : _a[2] << idx; \ + d##IDX[3] = (idx > 15) ? 0 : _a[3] << idx; \ + d##IDX[4] = (idx > 15) ? 0 : _a[4] << idx; \ + d##IDX[5] = (idx > 15) ? 
0 : _a[5] << idx; \ + d##IDX[6] = (idx > 15) ? 0 : _a[6] << idx; \ + d##IDX[7] = (idx > 15) ? 0 : _a[7] << idx; \ + \ + a = load_m128i(_a); \ + b = _mm_set1_epi64x(IDX); \ + c = _mm_sll_epi16(a, b); \ + CHECK_RESULT(VALIDATE_INT16_M128(c, d##IDX)) + + IMM_64_ITER +#undef TEST_IMPL - uint16_t d0 = _a[0] << count; - uint16_t d1 = _a[1] << count; - uint16_t d2 = _a[2] << count; - uint16_t d3 = _a[3] << count; - uint16_t d4 = _a[4] << count; - uint16_t d5 = _a[5] << count; - uint16_t d6 = _a[6] << count; - uint16_t d7 = _a[7] << count; - return validateInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + return TEST_SUCCESS; } result_t test_mm_sll_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; - const int64_t count = (int64_t) iter; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = _mm_set1_epi64x(count); - __m128i c = _mm_sll_epi32(a, b); - if (count < 0 || count > 31) - return validateInt32(c, 0, 0, 0, 0); - - uint32_t d0 = _a[0] << count; - uint32_t d1 = _a[1] << count; - uint32_t d2 = _a[2] << count; - uint32_t d3 = _a[3] << count; - return validateInt32(c, d0, d1, d2, d3); + __m128i a, b, c; + uint8_t idx; + +#define TEST_IMPL(IDX) \ + uint32_t d##IDX[4]; \ + idx = IDX; \ + d##IDX[0] = (idx > 31) ? 0 : _a[0] << idx; \ + d##IDX[1] = (idx > 31) ? 0 : _a[1] << idx; \ + d##IDX[2] = (idx > 31) ? 0 : _a[2] << idx; \ + d##IDX[3] = (idx > 31) ? 0 : _a[3] << idx; \ + \ + a = load_m128i(_a); \ + b = _mm_set1_epi64x(IDX); \ + c = _mm_sll_epi32(a, b); \ + CHECK_RESULT(VALIDATE_INT32_M128(c, d##IDX)) + + IMM_64_ITER +#undef TEST_IMPL + return TEST_SUCCESS; } result_t test_mm_sll_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) { const int64_t *_a = (const int64_t *) impl.mTestIntPointer1; - const int64_t count = (int64_t) iter; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = _mm_set1_epi64x(count); - __m128i c = _mm_sll_epi64(a, b); - if (count < 0 || count > 63) - return validateInt64(c, 0, 0); - - uint64_t d0 = _a[0] << count; - uint64_t d1 = _a[1] << count; - return validateInt64(c, d0, d1); + __m128i a, b, c; + +#define TEST_IMPL(IDX) \ + uint64_t d0##IDX = (IDX & ~63) ? 0 : _a[0] << IDX; \ + uint64_t d1##IDX = (IDX & ~63) ? 0 : _a[1] << IDX; \ + \ + a = load_m128i(_a); \ + b = _mm_set1_epi64x(IDX); \ + c = _mm_sll_epi64(a, b); \ + \ + CHECK_RESULT(validateInt64(c, d0##IDX, d1##IDX)) + + IMM_64_ITER +#undef TEST_IMPL + return TEST_SUCCESS; } result_t test_mm_slli_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; - const int count = 3; - - int16_t d0 = _a[0] << count; - int16_t d1 = _a[1] << count; - int16_t d2 = _a[2] << count; - int16_t d3 = _a[3] << count; - int16_t d4 = _a[4] << count; - int16_t d5 = _a[5] << count; - int16_t d6 = _a[6] << count; - int16_t d7 = _a[7] << count; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i c = _mm_slli_epi16(a, count); - return validateInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + __m128i a, c; + uint8_t idx; +#define TEST_IMPL(IDX) \ + int16_t d##IDX[8]; \ + idx = IDX; \ + d##IDX[0] = (idx > 15) ? 0 : _a[0] << idx; \ + d##IDX[1] = (idx > 15) ? 0 : _a[1] << idx; \ + d##IDX[2] = (idx > 15) ? 0 : _a[2] << idx; \ + d##IDX[3] = (idx > 15) ? 0 : _a[3] << idx; \ + d##IDX[4] = (idx > 15) ? 0 : _a[4] << idx; \ + d##IDX[5] = (idx > 15) ? 0 : _a[5] << idx; \ + d##IDX[6] = (idx > 15) ? 0 : _a[6] << idx; \ + d##IDX[7] = (idx > 15) ? 
0 : _a[7] << idx; \ + \ + a = load_m128i(_a); \ + c = _mm_slli_epi16(a, IDX); \ + CHECK_RESULT(VALIDATE_INT16_M128(c, d##IDX)) + + IMM_64_ITER +#undef TEST_IMPL + return TEST_SUCCESS; } result_t test_mm_slli_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; - const int32_t *_b = (const int32_t *) impl.mTestIntPointer2; #if defined(__clang__) // Clang compiler does not allow the second argument of _mm_slli_epi32() to // be greater than 31. - int count = (uint32_t) _b[0] % 32; + const int count = (int) (iter % 33 - 1); // range: -1 ~ 31 #else - int count = (uint32_t) _b[0] % 64; - // The value for doing the modulo should be greater - // than 32. Using 64 would provide more equal - // distribution for both under 32 and above 32 input value. + const int count = (int) (iter % 34 - 1); // range: -1 ~ 32 #endif - int32_t d0 = (count > 31) ? 0 : _a[0] << count; - int32_t d1 = (count > 31) ? 0 : _a[1] << count; - int32_t d2 = (count > 31) ? 0 : _a[2] << count; - int32_t d3 = (count > 31) ? 0 : _a[3] << count; + int32_t d[4]; + d[0] = (count & ~31) ? 0 : _a[0] << count; + d[1] = (count & ~31) ? 0 : _a[1] << count; + d[2] = (count & ~31) ? 0 : _a[2] << count; + d[3] = (count & ~31) ? 0 : _a[3] << count; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); __m128i c = _mm_slli_epi32(a, count); - return validateInt32(c, d0, d1, d2, d3); + return VALIDATE_INT32_M128(c, d); } result_t test_mm_slli_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) { const int64_t *_a = (const int64_t *) impl.mTestIntPointer1; - const int64_t *_b = (const int64_t *) impl.mTestIntPointer2; #if defined(__clang__) - // Clang compiler does not allow the second argument of `_mm_slli_epi64()` + // Clang compiler does not allow the second argument of "_mm_slli_epi64()" // to be greater than 63. - int count = (uint64_t) _b[0] % 64; + const int count = (int) (iter % 65 - 1); // range: -1 ~ 63 #else - int count = - (uint64_t) _b[0] % - 128; // The value for doing the modulo should be greater - // than 64. Using 128 would provide more equal - // distribution for both under 64 and above 64 input value. + const int count = (int) (iter % 66 - 1); // range: -1 ~ 64 #endif - int64_t d0 = (count > 63) ? 0 : _a[0] << count; - int64_t d1 = (count > 63) ? 0 : _a[1] << count; + int64_t d0 = (count & ~63) ? 0 : _a[0] << count; + int64_t d1 = (count & ~63) ? 0 : _a[1] << count; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); __m128i c = _mm_slli_epi64(a, count); return validateInt64(c, d0, d1); } result_t test_mm_slli_si128(const SSE2NEONTestImpl &impl, uint32_t iter) { - // FIXME: - // The shift value should be tested with random constant immediate value. 
const int32_t *_a = impl.mTestIntPointer1; int8_t d[16]; - int count = 5; + int count = (iter % 5) << 2; for (int i = 0; i < 16; i++) { if (i < count) d[i] = 0; @@ -5556,11 +6134,27 @@ result_t test_mm_slli_si128(const SSE2NEONTestImpl &impl, uint32_t iter) d[i] = ((const int8_t *) _a)[i - count]; } - __m128i a = do_mm_load_ps(_a); - __m128i ret = _mm_slli_si128(a, 5); + __m128i a = load_m128i(_a); + __m128i ret = _mm_setzero_si128(); + switch (iter % 5) { + case 0: + ret = _mm_slli_si128(a, 0); + break; + case 1: + ret = _mm_slli_si128(a, 4); + break; + case 2: + ret = _mm_slli_si128(a, 8); + break; + case 3: + ret = _mm_slli_si128(a, 12); + break; + case 4: + ret = _mm_slli_si128(a, 16); + break; + } - return validateInt8(ret, d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7], - d[8], d[9], d[10], d[11], d[12], d[13], d[14], d[15]); + return VALIDATE_INT8_M128(ret, d); } result_t test_mm_sqrt_pd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -5570,10 +6164,10 @@ result_t test_mm_sqrt_pd(const SSE2NEONTestImpl &impl, uint32_t iter) double f0 = sqrt(_a[0]); double f1 = sqrt(_a[1]); - __m128d a = do_mm_load_pd(_a); + __m128d a = load_m128d(_a); __m128d c = _mm_sqrt_pd(a); - return validateDouble(c, f0, f1); + return validateFloatError(c, f0, f1, 1.0e-15); } result_t test_mm_sqrt_sd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -5584,90 +6178,74 @@ result_t test_mm_sqrt_sd(const SSE2NEONTestImpl &impl, uint32_t iter) double f0 = sqrt(_b[0]); double f1 = _a[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_sqrt_sd(a, b); - return validateDouble(c, f0, f1); + return validateFloatError(c, f0, f1, 1.0e-15); } result_t test_mm_sra_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; - const int64_t count = (int64_t) iter; + const int64_t count = (int64_t) (iter % 18 - 1); // range: -1 ~ 16 + + int16_t d[8]; + d[0] = (count & ~15) ? (_a[0] < 0 ? ~UINT16_C(0) : 0) : (_a[0] >> count); + d[1] = (count & ~15) ? (_a[1] < 0 ? ~UINT16_C(0) : 0) : (_a[1] >> count); + d[2] = (count & ~15) ? (_a[2] < 0 ? ~UINT16_C(0) : 0) : (_a[2] >> count); + d[3] = (count & ~15) ? (_a[3] < 0 ? ~UINT16_C(0) : 0) : (_a[3] >> count); + d[4] = (count & ~15) ? (_a[4] < 0 ? ~UINT16_C(0) : 0) : (_a[4] >> count); + d[5] = (count & ~15) ? (_a[5] < 0 ? ~UINT16_C(0) : 0) : (_a[5] >> count); + d[6] = (count & ~15) ? (_a[6] < 0 ? ~UINT16_C(0) : 0) : (_a[6] >> count); + d[7] = (count & ~15) ? (_a[7] < 0 ? ~UINT16_C(0) : 0) : (_a[7] >> count); + __m128i a = _mm_load_si128((const __m128i *) _a); __m128i b = _mm_set1_epi64x(count); __m128i c = _mm_sra_epi16(a, b); - if (count > 15) { - int16_t d0 = _a[0] < 0 ? ~UINT16_C(0) : 0; - int16_t d1 = _a[1] < 0 ? ~UINT16_C(0) : 0; - int16_t d2 = _a[2] < 0 ? ~UINT16_C(0) : 0; - int16_t d3 = _a[3] < 0 ? ~UINT16_C(0) : 0; - int16_t d4 = _a[4] < 0 ? ~UINT16_C(0) : 0; - int16_t d5 = _a[5] < 0 ? ~UINT16_C(0) : 0; - int16_t d6 = _a[6] < 0 ? ~UINT16_C(0) : 0; - int16_t d7 = _a[7] < 0 ? 
~UINT16_C(0) : 0; - - return validateInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); - } - int16_t d0 = _a[0] >> count; - int16_t d1 = _a[1] >> count; - int16_t d2 = _a[2] >> count; - int16_t d3 = _a[3] >> count; - int16_t d4 = _a[4] >> count; - int16_t d5 = _a[5] >> count; - int16_t d6 = _a[6] >> count; - int16_t d7 = _a[7] >> count; - - return validateInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_INT16_M128(c, d); } result_t test_mm_sra_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; - const int64_t count = (int64_t) iter; + const int64_t count = (int64_t) (iter % 34 - 1); // range: -1 ~ 32 + + int32_t d[4]; + d[0] = (count & ~31) ? (_a[0] < 0 ? ~UINT32_C(0) : 0) : _a[0] >> count; + d[1] = (count & ~31) ? (_a[1] < 0 ? ~UINT32_C(0) : 0) : _a[1] >> count; + d[2] = (count & ~31) ? (_a[2] < 0 ? ~UINT32_C(0) : 0) : _a[2] >> count; + d[3] = (count & ~31) ? (_a[3] < 0 ? ~UINT32_C(0) : 0) : _a[3] >> count; + __m128i a = _mm_load_si128((const __m128i *) _a); __m128i b = _mm_set1_epi64x(count); __m128i c = _mm_sra_epi32(a, b); - if (count > 31) { - int32_t d0 = _a[0] < 0 ? ~UINT32_C(0) : 0; - int32_t d1 = _a[1] < 0 ? ~UINT32_C(0) : 0; - int32_t d2 = _a[2] < 0 ? ~UINT32_C(0) : 0; - int32_t d3 = _a[3] < 0 ? ~UINT32_C(0) : 0; - return validateInt32(c, d0, d1, d2, d3); - } - - int32_t d0 = _a[0] >> count; - int32_t d1 = _a[1] >> count; - int32_t d2 = _a[2] >> count; - int32_t d3 = _a[3] >> count; - - return validateInt32(c, d0, d1, d2, d3); + return VALIDATE_INT32_M128(c, d); } result_t test_mm_srai_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { - const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; - int64_t _b = (int64_t) iter; - const int b = _b; + const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; + const int32_t b = (int32_t) (iter % 18 - 1); // range: -1 ~ 16 + int16_t d[8]; + int count = (b & ~15) ? 15 : b; + + for (int i = 0; i < 8; i++) { + d[i] = _a[i] >> count; + } + __m128i a = _mm_load_si128((const __m128i *) _a); __m128i c = _mm_srai_epi16(a, b); - __m128i ret; - int count = (b & ~15) ? 15 : b; - for (size_t i = 0; i < 8; i++) { - ((SIMDVec *) &ret)->m128_i16[i] = - ((SIMDVec *) &a)->m128_i16[i] >> count; - } - return validate128(c, ret); + return VALIDATE_INT16_M128(c, d); } result_t test_mm_srai_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; - const int32_t b = (const int32_t) impl.mTestInts[iter]; + const int32_t b = (int32_t) (iter % 34 - 1); // range: -1 ~ 32 int32_t d[4]; int count = (b & ~31) ? 31 : b; @@ -5678,107 +6256,111 @@ result_t test_mm_srai_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) __m128i a = _mm_load_si128((const __m128i *) _a); __m128i c = _mm_srai_epi32(a, b); - return validateInt32(c, d[0], d[1], d[2], d[3]); + return VALIDATE_INT32_M128(c, d); } result_t test_mm_srl_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; - const int64_t count = (int64_t) iter; - __m128i a = do_mm_load_ps((const int32_t *) _a); + const int64_t count = (int64_t) (iter % 18 - 1); // range: -1 ~ 16 + + uint16_t d[8]; + d[0] = (count & ~15) ? 0 : (uint16_t) (_a[0]) >> count; + d[1] = (count & ~15) ? 0 : (uint16_t) (_a[1]) >> count; + d[2] = (count & ~15) ? 0 : (uint16_t) (_a[2]) >> count; + d[3] = (count & ~15) ? 0 : (uint16_t) (_a[3]) >> count; + d[4] = (count & ~15) ? 0 : (uint16_t) (_a[4]) >> count; + d[5] = (count & ~15) ? 
0 : (uint16_t) (_a[5]) >> count; + d[6] = (count & ~15) ? 0 : (uint16_t) (_a[6]) >> count; + d[7] = (count & ~15) ? 0 : (uint16_t) (_a[7]) >> count; + + __m128i a = load_m128i(_a); __m128i b = _mm_set1_epi64x(count); __m128i c = _mm_srl_epi16(a, b); - if (count < 0 || count > 15) - return validateInt16(c, 0, 0, 0, 0, 0, 0, 0, 0); - uint16_t d0 = (uint16_t)(_a[0]) >> count; - uint16_t d1 = (uint16_t)(_a[1]) >> count; - uint16_t d2 = (uint16_t)(_a[2]) >> count; - uint16_t d3 = (uint16_t)(_a[3]) >> count; - uint16_t d4 = (uint16_t)(_a[4]) >> count; - uint16_t d5 = (uint16_t)(_a[5]) >> count; - uint16_t d6 = (uint16_t)(_a[6]) >> count; - uint16_t d7 = (uint16_t)(_a[7]) >> count; - return validateInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_INT16_M128(c, d); } result_t test_mm_srl_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; - const int64_t count = (int64_t) iter; - __m128i a = do_mm_load_ps((const int32_t *) _a); + const int64_t count = (int64_t) (iter % 34 - 1); // range: -1 ~ 32 + + uint32_t d[4]; + d[0] = (count & ~31) ? 0 : (uint32_t) (_a[0]) >> count; + d[1] = (count & ~31) ? 0 : (uint32_t) (_a[1]) >> count; + d[2] = (count & ~31) ? 0 : (uint32_t) (_a[2]) >> count; + d[3] = (count & ~31) ? 0 : (uint32_t) (_a[3]) >> count; + + __m128i a = load_m128i(_a); __m128i b = _mm_set1_epi64x(count); __m128i c = _mm_srl_epi32(a, b); - if (count < 0 || count > 31) - return validateInt32(c, 0, 0, 0, 0); - uint32_t d0 = (uint32_t)(_a[0]) >> count; - uint32_t d1 = (uint32_t)(_a[1]) >> count; - uint32_t d2 = (uint32_t)(_a[2]) >> count; - uint32_t d3 = (uint32_t)(_a[3]) >> count; - return validateInt32(c, d0, d1, d2, d3); + return VALIDATE_INT32_M128(c, d); } result_t test_mm_srl_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) { const int64_t *_a = (const int64_t *) impl.mTestIntPointer1; - const int64_t count = (int64_t) iter; - __m128i a = do_mm_load_ps((const int32_t *) _a); + const int64_t count = (int64_t) (iter % 66 - 1); // range: -1 ~ 64 + + uint64_t d0 = (count & ~63) ? 0 : (uint64_t) (_a[0]) >> count; + uint64_t d1 = (count & ~63) ? 0 : (uint64_t) (_a[1]) >> count; + + __m128i a = load_m128i(_a); __m128i b = _mm_set1_epi64x(count); __m128i c = _mm_srl_epi64(a, b); - if (count < 0 || count > 63) - return validateInt64(c, 0, 0); - uint64_t d0 = (uint64_t)(_a[0]) >> count; - uint64_t d1 = (uint64_t)(_a[1]) >> count; return validateInt64(c, d0, d1); } result_t test_mm_srli_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; - const int count = impl.mTestInts[iter]; - - int16_t d0 = count & (~15) ? 0 : (uint16_t)(_a[0]) >> count; - int16_t d1 = count & (~15) ? 0 : (uint16_t)(_a[1]) >> count; - int16_t d2 = count & (~15) ? 0 : (uint16_t)(_a[2]) >> count; - int16_t d3 = count & (~15) ? 0 : (uint16_t)(_a[3]) >> count; - int16_t d4 = count & (~15) ? 0 : (uint16_t)(_a[4]) >> count; - int16_t d5 = count & (~15) ? 0 : (uint16_t)(_a[5]) >> count; - int16_t d6 = count & (~15) ? 0 : (uint16_t)(_a[6]) >> count; - int16_t d7 = count & (~15) ? 0 : (uint16_t)(_a[7]) >> count; - - __m128i a = do_mm_load_ps((const int32_t *) _a); + const int count = (int) (iter % 18 - 1); // range: -1 ~ 16 + + int16_t d[8]; + d[0] = count & (~15) ? 0 : (uint16_t) (_a[0]) >> count; + d[1] = count & (~15) ? 0 : (uint16_t) (_a[1]) >> count; + d[2] = count & (~15) ? 0 : (uint16_t) (_a[2]) >> count; + d[3] = count & (~15) ? 0 : (uint16_t) (_a[3]) >> count; + d[4] = count & (~15) ? 
0 : (uint16_t) (_a[4]) >> count; + d[5] = count & (~15) ? 0 : (uint16_t) (_a[5]) >> count; + d[6] = count & (~15) ? 0 : (uint16_t) (_a[6]) >> count; + d[7] = count & (~15) ? 0 : (uint16_t) (_a[7]) >> count; + + __m128i a = load_m128i(_a); __m128i c = _mm_srli_epi16(a, count); - return validateInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_INT16_M128(c, d); } result_t test_mm_srli_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; - const int count = impl.mTestInts[iter]; + const int count = (int) (iter % 34 - 1); // range: -1 ~ 32 - int32_t d0 = count & (~31) ? 0 : (uint32_t)(_a[0]) >> count; - int32_t d1 = count & (~31) ? 0 : (uint32_t)(_a[1]) >> count; - int32_t d2 = count & (~31) ? 0 : (uint32_t)(_a[2]) >> count; - int32_t d3 = count & (~31) ? 0 : (uint32_t)(_a[3]) >> count; + int32_t d[4]; + d[0] = count & (~31) ? 0 : (uint32_t) (_a[0]) >> count; + d[1] = count & (~31) ? 0 : (uint32_t) (_a[1]) >> count; + d[2] = count & (~31) ? 0 : (uint32_t) (_a[2]) >> count; + d[3] = count & (~31) ? 0 : (uint32_t) (_a[3]) >> count; - __m128i a = do_mm_load_ps(_a); + __m128i a = load_m128i(_a); __m128i c = _mm_srli_epi32(a, count); - return validateInt32(c, d0, d1, d2, d3); + return VALIDATE_INT32_M128(c, d); } result_t test_mm_srli_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) { const int64_t *_a = (const int64_t *) impl.mTestIntPointer1; - const int count = impl.mTestInts[iter]; + const int count = (int) (iter % 66 - 1); // range: -1 ~ 64 - int64_t d0 = count & (~63) ? 0 : (uint64_t)(_a[0]) >> count; - int64_t d1 = count & (~63) ? 0 : (uint64_t)(_a[1]) >> count; + int64_t d0 = count & (~63) ? 0 : (uint64_t) (_a[0]) >> count; + int64_t d1 = count & (~63) ? 0 : (uint64_t) (_a[1]) >> count; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); __m128i c = _mm_srli_epi64(a, count); return validateInt64(c, d0, d1); @@ -5786,10 +6368,8 @@ result_t test_mm_srli_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_srli_si128(const SSE2NEONTestImpl &impl, uint32_t iter) { - // FIXME: - // The shift value should be tested with random constant immediate value. 
const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; - int count = 5; + const int count = (iter % 5) << 2; int8_t d[16]; for (int i = 0; i < 16; i++) { @@ -5799,13 +6379,29 @@ result_t test_mm_srli_si128(const SSE2NEONTestImpl &impl, uint32_t iter) d[i] = _a[i + count]; } - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i ret = _mm_srli_si128(a, 5); - - return validateInt8(ret, d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7], - d[8], d[9], d[10], d[11], d[12], d[13], d[14], d[15]); -} - + __m128i a = load_m128i(_a); + __m128i ret = _mm_setzero_si128(); + switch (iter % 5) { + case 0: + ret = _mm_srli_si128(a, 0); + break; + case 1: + ret = _mm_srli_si128(a, 4); + break; + case 2: + ret = _mm_srli_si128(a, 8); + break; + case 3: + ret = _mm_srli_si128(a, 12); + break; + case 4: + ret = _mm_srli_si128(a, 16); + break; + } + + return VALIDATE_INT8_M128(ret, d); +} + result_t test_mm_store_pd(const SSE2NEONTestImpl &impl, uint32_t iter) { double *p = (double *) impl.mTestFloatPointer1; @@ -5825,7 +6421,7 @@ result_t test_mm_store_pd1(const SSE2NEONTestImpl &impl, uint32_t iter) double _a[2] = {(double) impl.mTestFloats[iter], (double) impl.mTestFloats[iter + 1]}; - __m128d a = do_mm_load_pd((const double *) _a); + __m128d a = load_m128d(_a); _mm_store_pd1(p, a); ASSERT_RETURN(p[0] == impl.mTestFloats[iter]); ASSERT_RETURN(p[1] == impl.mTestFloats[iter]); @@ -5838,7 +6434,7 @@ result_t test_mm_store_sd(const SSE2NEONTestImpl &impl, uint32_t iter) double _a[2] = {(double) impl.mTestFloats[iter], (double) impl.mTestFloats[iter + 1]}; - __m128d a = do_mm_load_pd((const double *) _a); + __m128d a = load_m128d(_a); _mm_store_sd(p, a); ASSERT_RETURN(p[0] == impl.mTestFloats[iter]); return TEST_SUCCESS; @@ -5847,12 +6443,12 @@ result_t test_mm_store_sd(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_store_si128(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; - int32_t p[4]; + alignas(16) int32_t p[4]; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); _mm_store_si128((__m128i *) p, a); - return validateInt32(a, p[0], p[1], p[2], p[3]); + return VALIDATE_INT32_M128(a, p); } result_t test_mm_store1_pd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -5865,7 +6461,7 @@ result_t test_mm_storeh_pd(const SSE2NEONTestImpl &impl, uint32_t iter) double *p = (double *) impl.mTestFloatPointer1; double mem; - __m128d a = do_mm_load_pd(p); + __m128d a = load_m128d(p); _mm_storeh_pd(&mem, a); ASSERT_RETURN(mem == p[1]); @@ -5877,10 +6473,10 @@ result_t test_mm_storel_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t *p = (int64_t *) impl.mTestIntPointer1; __m128i mem; - __m128i a = do_mm_load_ps((const int32_t *) p); + __m128i a = load_m128i(p); _mm_storel_epi64(&mem, a); - ASSERT_RETURN(mem[0] == p[0]); + ASSERT_RETURN(((SIMDVec *) &mem)->m128_u64[0] == (uint64_t) p[0]); return TEST_SUCCESS; } @@ -5889,7 +6485,7 @@ result_t test_mm_storel_pd(const SSE2NEONTestImpl &impl, uint32_t iter) double *p = (double *) impl.mTestFloatPointer1; double mem; - __m128d a = do_mm_load_pd(p); + __m128d a = load_m128d(p); _mm_storel_pd(&mem, a); ASSERT_RETURN(mem == p[0]); @@ -5901,10 +6497,10 @@ result_t test_mm_storer_pd(const SSE2NEONTestImpl &impl, uint32_t iter) double *p = (double *) impl.mTestFloatPointer1; double mem[2]; - __m128d a = do_mm_load_pd(p); + __m128d a = load_m128d(p); _mm_storer_pd(mem, a); - __m128d res = do_mm_load_pd(mem); + __m128d res = load_m128d(mem); return validateDouble(res, 
p[1], p[0]); } @@ -5925,10 +6521,10 @@ result_t test_mm_storeu_si128(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; __m128i b; - __m128i a = do_mm_load_ps(_a); + __m128i a = load_m128i(_a); _mm_storeu_si128(&b, a); int32_t *_b = (int32_t *) &b; - return validateInt32(a, _b[0], _b[1], _b[2], _b[3]); + return VALIDATE_INT32_M128(a, _b); } result_t test_mm_storeu_si32(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -5936,12 +6532,12 @@ result_t test_mm_storeu_si32(const SSE2NEONTestImpl &impl, uint32_t iter) // The GCC version before 11 does not implement intrinsic function // _mm_storeu_si32. Check https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95483 // for more information. -#if defined(__GNUC__) && __GNUC__ <= 10 +#if (defined(__GNUC__) && !defined(__clang__)) && (__GNUC__ <= 10) return TEST_UNIMPL; #else const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; - __m128i b; - __m128i a = do_mm_load_ps(_a); + __m128i b = _mm_setzero_si128(); + __m128i a = load_m128i(_a); _mm_storeu_si32(&b, a); int32_t *_b = (int32_t *) &b; return validateInt32(b, _a[0], _b[1], _b[2], _b[3]); @@ -5953,7 +6549,7 @@ result_t test_mm_stream_pd(const SSE2NEONTestImpl &impl, uint32_t iter) const double *_a = (const double *) impl.mTestFloatPointer1; double p[2]; - __m128d a = do_mm_load_pd(_a); + __m128d a = load_m128d(_a); _mm_stream_pd(p, a); return validateDouble(a, p[0], p[1]); @@ -5962,12 +6558,12 @@ result_t test_mm_stream_pd(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_stream_si128(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; - int32_t p[4]; + alignas(16) int32_t p[4]; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); _mm_stream_si128((__m128i *) p, a); - return validateInt32(a, p[0], p[1], p[2], p[3]); + return VALIDATE_INT32_M128(a, p); } result_t test_mm_stream_si32(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -5983,41 +6579,47 @@ result_t test_mm_stream_si32(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_stream_si64(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + const int64_t a = (const int64_t) impl.mTestInts[iter]; + __int64 p[1]; + _mm_stream_si64(p, a); + ASSERT_RETURN(p[0] == a); + return TEST_SUCCESS; } result_t test_mm_sub_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; const int16_t *_b = (const int16_t *) impl.mTestIntPointer2; - int16_t d0 = _a[0] - _b[0]; - int16_t d1 = _a[1] - _b[1]; - int16_t d2 = _a[2] - _b[2]; - int16_t d3 = _a[3] - _b[3]; - int16_t d4 = _a[4] - _b[4]; - int16_t d5 = _a[5] - _b[5]; - int16_t d6 = _a[6] - _b[6]; - int16_t d7 = _a[7] - _b[7]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int16_t d[8]; + d[0] = _a[0] - _b[0]; + d[1] = _a[1] - _b[1]; + d[2] = _a[2] - _b[2]; + d[3] = _a[3] - _b[3]; + d[4] = _a[4] - _b[4]; + d[5] = _a[5] - _b[5]; + d[6] = _a[6] - _b[6]; + d[7] = _a[7] - _b[7]; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_sub_epi16(a, b); - return validateInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_INT16_M128(c, d); } result_t test_mm_sub_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = impl.mTestIntPointer1; const int32_t *_b = impl.mTestIntPointer2; - int32_t dx = _a[0] - _b[0]; - int32_t dy = _a[1] - _b[1]; - int32_t dz = _a[2] - _b[2]; - int32_t dw 
= _a[3] - _b[3]; + int32_t d[4]; + d[0] = _a[0] - _b[0]; + d[1] = _a[1] - _b[1]; + d[2] = _a[2] - _b[2]; + d[3] = _a[3] - _b[3]; - __m128i a = do_mm_load_ps(_a); - __m128i b = do_mm_load_ps(_b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_sub_epi32(a, b); - return validateInt32(c, dx, dy, dz, dw); + return VALIDATE_INT32_M128(c, d); } result_t test_mm_sub_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -6027,8 +6629,8 @@ result_t test_mm_sub_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t d0 = _a[0] - _b[0]; int64_t d1 = _a[1] - _b[1]; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_sub_epi64(a, b); return validateInt64(c, d0, d1); } @@ -6037,28 +6639,28 @@ result_t test_mm_sub_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) { const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; const int8_t *_b = (const int8_t *) impl.mTestIntPointer2; - int8_t d0 = _a[0] - _b[0]; - int8_t d1 = _a[1] - _b[1]; - int8_t d2 = _a[2] - _b[2]; - int8_t d3 = _a[3] - _b[3]; - int8_t d4 = _a[4] - _b[4]; - int8_t d5 = _a[5] - _b[5]; - int8_t d6 = _a[6] - _b[6]; - int8_t d7 = _a[7] - _b[7]; - int8_t d8 = _a[8] - _b[8]; - int8_t d9 = _a[9] - _b[9]; - int8_t d10 = _a[10] - _b[10]; - int8_t d11 = _a[11] - _b[11]; - int8_t d12 = _a[12] - _b[12]; - int8_t d13 = _a[13] - _b[13]; - int8_t d14 = _a[14] - _b[14]; - int8_t d15 = _a[15] - _b[15]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int8_t d[16]; + d[0] = _a[0] - _b[0]; + d[1] = _a[1] - _b[1]; + d[2] = _a[2] - _b[2]; + d[3] = _a[3] - _b[3]; + d[4] = _a[4] - _b[4]; + d[5] = _a[5] - _b[5]; + d[6] = _a[6] - _b[6]; + d[7] = _a[7] - _b[7]; + d[8] = _a[8] - _b[8]; + d[9] = _a[9] - _b[9]; + d[10] = _a[10] - _b[10]; + d[11] = _a[11] - _b[11]; + d[12] = _a[12] - _b[12]; + d[13] = _a[13] - _b[13]; + d[14] = _a[14] - _b[14]; + d[15] = _a[15] - _b[15]; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_sub_epi8(a, b); - return validateInt8(c, d0, d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, - d12, d13, d14, d15); + return VALIDATE_INT8_M128(c, d); } result_t test_mm_sub_pd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -6068,8 +6670,8 @@ result_t test_mm_sub_pd(const SSE2NEONTestImpl &impl, uint32_t iter) double d0 = _a[0] - _b[0]; double d1 = _a[1] - _b[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_sub_pd(a, b); return validateDouble(c, d0, d1); } @@ -6081,8 +6683,8 @@ result_t test_mm_sub_sd(const SSE2NEONTestImpl &impl, uint32_t iter) double d0 = _a[0] - _b[0]; double d1 = _a[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_sub_sd(a, b); return validateDouble(c, d0, d1); } @@ -6094,8 +6696,8 @@ result_t test_mm_sub_si64(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t d = _a[0] - _b[0]; - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m64 c = _mm_sub_si64(a, b); return validateInt64(c, d); @@ -6119,11 +6721,11 @@ result_t test_mm_subs_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) d[i] = (int16_t) res; } - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) 
_b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_subs_epi16(a, b); - return validateInt16(c, d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7]); + return VALIDATE_INT16_M128(c, d); } result_t test_mm_subs_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -6137,115 +6739,115 @@ result_t test_mm_subs_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) for (int i = 0; i < 16; i++) { int16_t res = (int16_t) _a[i] - (int16_t) _b[i]; if (res > max) - d[i] = max; + d[i] = (int8_t) max; else if (res < min) - d[i] = min; + d[i] = (int8_t) min; else d[i] = (int8_t) res; } - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_subs_epi8(a, b); - return validateInt8(c, d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7], d[8], - d[9], d[10], d[11], d[12], d[13], d[14], d[15]); + return VALIDATE_INT8_M128(c, d); } result_t test_mm_subs_epu16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; const int16_t *_b = (const int16_t *) impl.mTestIntPointer2; - uint16_t d0 = (uint16_t) _a[0] - (uint16_t) _b[0]; - if (d0 > (uint16_t) _a[0]) - d0 = 0; - uint16_t d1 = (uint16_t) _a[1] - (uint16_t) _b[1]; - if (d1 > (uint16_t) _a[1]) - d1 = 0; - uint16_t d2 = (uint16_t) _a[2] - (uint16_t) _b[2]; - if (d2 > (uint16_t) _a[2]) - d2 = 0; - uint16_t d3 = (uint16_t) _a[3] - (uint16_t) _b[3]; - if (d3 > (uint16_t) _a[3]) - d3 = 0; - uint16_t d4 = (uint16_t) _a[4] - (uint16_t) _b[4]; - if (d4 > (uint16_t) _a[4]) - d4 = 0; - uint16_t d5 = (uint16_t) _a[5] - (uint16_t) _b[5]; - if (d5 > (uint16_t) _a[5]) - d5 = 0; - uint16_t d6 = (uint16_t) _a[6] - (uint16_t) _b[6]; - if (d6 > (uint16_t) _a[6]) - d6 = 0; - uint16_t d7 = (uint16_t) _a[7] - (uint16_t) _b[7]; - if (d7 > (uint16_t) _a[7]) - d7 = 0; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + uint16_t d[8]; + d[0] = (uint16_t) _a[0] - (uint16_t) _b[0]; + if (d[0] > (uint16_t) _a[0]) + d[0] = 0; + d[1] = (uint16_t) _a[1] - (uint16_t) _b[1]; + if (d[1] > (uint16_t) _a[1]) + d[1] = 0; + d[2] = (uint16_t) _a[2] - (uint16_t) _b[2]; + if (d[2] > (uint16_t) _a[2]) + d[2] = 0; + d[3] = (uint16_t) _a[3] - (uint16_t) _b[3]; + if (d[3] > (uint16_t) _a[3]) + d[3] = 0; + d[4] = (uint16_t) _a[4] - (uint16_t) _b[4]; + if (d[4] > (uint16_t) _a[4]) + d[4] = 0; + d[5] = (uint16_t) _a[5] - (uint16_t) _b[5]; + if (d[5] > (uint16_t) _a[5]) + d[5] = 0; + d[6] = (uint16_t) _a[6] - (uint16_t) _b[6]; + if (d[6] > (uint16_t) _a[6]) + d[6] = 0; + d[7] = (uint16_t) _a[7] - (uint16_t) _b[7]; + if (d[7] > (uint16_t) _a[7]) + d[7] = 0; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_subs_epu16(a, b); - return validateInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_INT16_M128(c, d); } result_t test_mm_subs_epu8(const SSE2NEONTestImpl &impl, uint32_t iter) { const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; const int8_t *_b = (const int8_t *) impl.mTestIntPointer2; - uint8_t d0 = (uint8_t) _a[0] - (uint8_t) _b[0]; - if (d0 > (uint8_t) _a[0]) - d0 = 0; - uint8_t d1 = (uint8_t) _a[1] - (uint8_t) _b[1]; - if (d1 > (uint8_t) _a[1]) - d1 = 0; - uint8_t d2 = (uint8_t) _a[2] - (uint8_t) _b[2]; - if (d2 > (uint8_t) _a[2]) - d2 = 0; - uint8_t d3 = (uint8_t) _a[3] - (uint8_t) _b[3]; - if (d3 > (uint8_t) _a[3]) - d3 = 0; - uint8_t d4 = (uint8_t) _a[4] - (uint8_t) _b[4]; - if (d4 > (uint8_t) _a[4]) - d4 = 0; - 
uint8_t d5 = (uint8_t) _a[5] - (uint8_t) _b[5]; - if (d5 > (uint8_t) _a[5]) - d5 = 0; - uint8_t d6 = (uint8_t) _a[6] - (uint8_t) _b[6]; - if (d6 > (uint8_t) _a[6]) - d6 = 0; - uint8_t d7 = (uint8_t) _a[7] - (uint8_t) _b[7]; - if (d7 > (uint8_t) _a[7]) - d7 = 0; - uint8_t d8 = (uint8_t) _a[8] - (uint8_t) _b[8]; - if (d8 > (uint8_t) _a[8]) - d8 = 0; - uint8_t d9 = (uint8_t) _a[9] - (uint8_t) _b[9]; - if (d9 > (uint8_t) _a[9]) - d9 = 0; - uint8_t d10 = (uint8_t) _a[10] - (uint8_t) _b[10]; - if (d10 > (uint8_t) _a[10]) - d10 = 0; - uint8_t d11 = (uint8_t) _a[11] - (uint8_t) _b[11]; - if (d11 > (uint8_t) _a[11]) - d11 = 0; - uint8_t d12 = (uint8_t) _a[12] - (uint8_t) _b[12]; - if (d12 > (uint8_t) _a[12]) - d12 = 0; - uint8_t d13 = (uint8_t) _a[13] - (uint8_t) _b[13]; - if (d13 > (uint8_t) _a[13]) - d13 = 0; - uint8_t d14 = (uint8_t) _a[14] - (uint8_t) _b[14]; - if (d14 > (uint8_t) _a[14]) - d14 = 0; - uint8_t d15 = (uint8_t) _a[15] - (uint8_t) _b[15]; - if (d15 > (uint8_t) _a[15]) - d15 = 0; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + uint8_t d[16]; + d[0] = (uint8_t) _a[0] - (uint8_t) _b[0]; + if (d[0] > (uint8_t) _a[0]) + d[0] = 0; + d[1] = (uint8_t) _a[1] - (uint8_t) _b[1]; + if (d[1] > (uint8_t) _a[1]) + d[1] = 0; + d[2] = (uint8_t) _a[2] - (uint8_t) _b[2]; + if (d[2] > (uint8_t) _a[2]) + d[2] = 0; + d[3] = (uint8_t) _a[3] - (uint8_t) _b[3]; + if (d[3] > (uint8_t) _a[3]) + d[3] = 0; + d[4] = (uint8_t) _a[4] - (uint8_t) _b[4]; + if (d[4] > (uint8_t) _a[4]) + d[4] = 0; + d[5] = (uint8_t) _a[5] - (uint8_t) _b[5]; + if (d[5] > (uint8_t) _a[5]) + d[5] = 0; + d[6] = (uint8_t) _a[6] - (uint8_t) _b[6]; + if (d[6] > (uint8_t) _a[6]) + d[6] = 0; + d[7] = (uint8_t) _a[7] - (uint8_t) _b[7]; + if (d[7] > (uint8_t) _a[7]) + d[7] = 0; + d[8] = (uint8_t) _a[8] - (uint8_t) _b[8]; + if (d[8] > (uint8_t) _a[8]) + d[8] = 0; + d[9] = (uint8_t) _a[9] - (uint8_t) _b[9]; + if (d[9] > (uint8_t) _a[9]) + d[9] = 0; + d[10] = (uint8_t) _a[10] - (uint8_t) _b[10]; + if (d[10] > (uint8_t) _a[10]) + d[10] = 0; + d[11] = (uint8_t) _a[11] - (uint8_t) _b[11]; + if (d[11] > (uint8_t) _a[11]) + d[11] = 0; + d[12] = (uint8_t) _a[12] - (uint8_t) _b[12]; + if (d[12] > (uint8_t) _a[12]) + d[12] = 0; + d[13] = (uint8_t) _a[13] - (uint8_t) _b[13]; + if (d[13] > (uint8_t) _a[13]) + d[13] = 0; + d[14] = (uint8_t) _a[14] - (uint8_t) _b[14]; + if (d[14] > (uint8_t) _a[14]) + d[14] = 0; + d[15] = (uint8_t) _a[15] - (uint8_t) _b[15]; + if (d[15] > (uint8_t) _a[15]) + d[15] = 0; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_subs_epu8(a, b); - return validateInt8(c, d0, d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, - d12, d13, d14, d15); + return VALIDATE_INT8_M128(c, d); } result_t test_mm_ucomieq_sd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -6280,12 +6882,16 @@ result_t test_mm_ucomineq_sd(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_undefined_pd(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + __m128d a = _mm_undefined_pd(); + a = _mm_xor_pd(a, a); + return validateDouble(a, 0, 0); } result_t test_mm_undefined_si128(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + __m128i a = _mm_undefined_si128(); + a = _mm_xor_si128(a, a); + return validateInt64(a, 0, 0); } result_t test_mm_unpackhi_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -6293,20 +6899,21 @@ result_t test_mm_unpackhi_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) const int16_t *_a = (const int16_t *) 
impl.mTestIntPointer1; const int16_t *_b = (const int16_t *) impl.mTestIntPointer2; - int16_t i0 = _a[4]; - int16_t i1 = _b[4]; - int16_t i2 = _a[5]; - int16_t i3 = _b[5]; - int16_t i4 = _a[6]; - int16_t i5 = _b[6]; - int16_t i6 = _a[7]; - int16_t i7 = _b[7]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int16_t d[8]; + d[0] = _a[4]; + d[1] = _b[4]; + d[2] = _a[5]; + d[3] = _b[5]; + d[4] = _a[6]; + d[5] = _b[6]; + d[6] = _a[7]; + d[7] = _b[7]; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i ret = _mm_unpackhi_epi16(a, b); - return validateInt16(ret, i0, i1, i2, i3, i4, i5, i6, i7); + return VALIDATE_INT16_M128(ret, d); } result_t test_mm_unpackhi_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -6314,16 +6921,17 @@ result_t test_mm_unpackhi_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; const int32_t *_b = (const int32_t *) impl.mTestIntPointer2; - int32_t i0 = _a[2]; - int32_t i1 = _b[2]; - int32_t i2 = _a[3]; - int32_t i3 = _b[3]; + int32_t d[4]; + d[0] = _a[2]; + d[1] = _b[2]; + d[2] = _a[3]; + d[3] = _b[3]; - __m128i a = do_mm_load_ps(_a); - __m128i b = do_mm_load_ps(_b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i ret = _mm_unpackhi_epi32(a, b); - return validateInt32(ret, i0, i1, i2, i3); + return VALIDATE_INT32_M128(ret, d); } result_t test_mm_unpackhi_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -6334,8 +6942,8 @@ result_t test_mm_unpackhi_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t i0 = _a[1]; int64_t i1 = _b[1]; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i ret = _mm_unpackhi_epi64(a, b); return validateInt64(ret, i0, i1); @@ -6346,29 +6954,29 @@ result_t test_mm_unpackhi_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; const int8_t *_b = (const int8_t *) impl.mTestIntPointer2; - int8_t i0 = _a[8]; - int8_t i1 = _b[8]; - int8_t i2 = _a[9]; - int8_t i3 = _b[9]; - int8_t i4 = _a[10]; - int8_t i5 = _b[10]; - int8_t i6 = _a[11]; - int8_t i7 = _b[11]; - int8_t i8 = _a[12]; - int8_t i9 = _b[12]; - int8_t i10 = _a[13]; - int8_t i11 = _b[13]; - int8_t i12 = _a[14]; - int8_t i13 = _b[14]; - int8_t i14 = _a[15]; - int8_t i15 = _b[15]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int8_t d[16]; + d[0] = _a[8]; + d[1] = _b[8]; + d[2] = _a[9]; + d[3] = _b[9]; + d[4] = _a[10]; + d[5] = _b[10]; + d[6] = _a[11]; + d[7] = _b[11]; + d[8] = _a[12]; + d[9] = _b[12]; + d[10] = _a[13]; + d[11] = _b[13]; + d[12] = _a[14]; + d[13] = _b[14]; + d[14] = _a[15]; + d[15] = _b[15]; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i ret = _mm_unpackhi_epi8(a, b); - return validateInt8(ret, i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, - i12, i13, i14, i15); + return VALIDATE_INT8_M128(ret, d); } result_t test_mm_unpackhi_pd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -6376,8 +6984,8 @@ result_t test_mm_unpackhi_pd(const SSE2NEONTestImpl &impl, uint32_t iter) const double *_a = (const double *) impl.mTestFloatPointer1; const double *_b = (const double *) impl.mTestFloatPointer2; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d ret = _mm_unpackhi_pd(a, b); return 
validateDouble(ret, _a[1], _b[1]); @@ -6388,20 +6996,21 @@ result_t test_mm_unpacklo_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; const int16_t *_b = (const int16_t *) impl.mTestIntPointer2; - int16_t i0 = _a[0]; - int16_t i1 = _b[0]; - int16_t i2 = _a[1]; - int16_t i3 = _b[1]; - int16_t i4 = _a[2]; - int16_t i5 = _b[2]; - int16_t i6 = _a[3]; - int16_t i7 = _b[3]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int16_t d[8]; + d[0] = _a[0]; + d[1] = _b[0]; + d[2] = _a[1]; + d[3] = _b[1]; + d[4] = _a[2]; + d[5] = _b[2]; + d[6] = _a[3]; + d[7] = _b[3]; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i ret = _mm_unpacklo_epi16(a, b); - return validateInt16(ret, i0, i1, i2, i3, i4, i5, i6, i7); + return VALIDATE_INT16_M128(ret, d); } result_t test_mm_unpacklo_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -6409,16 +7018,17 @@ result_t test_mm_unpacklo_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; const int32_t *_b = (const int32_t *) impl.mTestIntPointer2; - int32_t i0 = _a[0]; - int32_t i1 = _b[0]; - int32_t i2 = _a[1]; - int32_t i3 = _b[1]; + int32_t d[4]; + d[0] = _a[0]; + d[1] = _b[0]; + d[2] = _a[1]; + d[3] = _b[1]; - __m128i a = do_mm_load_ps(_a); - __m128i b = do_mm_load_ps(_b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i ret = _mm_unpacklo_epi32(a, b); - return validateInt32(ret, i0, i1, i2, i3); + return VALIDATE_INT32_M128(ret, d); } result_t test_mm_unpacklo_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -6429,8 +7039,8 @@ result_t test_mm_unpacklo_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t i0 = _a[0]; int64_t i1 = _b[0]; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i ret = _mm_unpacklo_epi64(a, b); return validateInt64(ret, i0, i1); @@ -6441,29 +7051,29 @@ result_t test_mm_unpacklo_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; const int8_t *_b = (const int8_t *) impl.mTestIntPointer2; - int8_t i0 = _a[0]; - int8_t i1 = _b[0]; - int8_t i2 = _a[1]; - int8_t i3 = _b[1]; - int8_t i4 = _a[2]; - int8_t i5 = _b[2]; - int8_t i6 = _a[3]; - int8_t i7 = _b[3]; - int8_t i8 = _a[4]; - int8_t i9 = _b[4]; - int8_t i10 = _a[5]; - int8_t i11 = _b[5]; - int8_t i12 = _a[6]; - int8_t i13 = _b[6]; - int8_t i14 = _a[7]; - int8_t i15 = _b[7]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int8_t d[16]; + d[0] = _a[0]; + d[1] = _b[0]; + d[2] = _a[1]; + d[3] = _b[1]; + d[4] = _a[2]; + d[5] = _b[2]; + d[6] = _a[3]; + d[7] = _b[3]; + d[8] = _a[4]; + d[9] = _b[4]; + d[10] = _a[5]; + d[11] = _b[5]; + d[12] = _a[6]; + d[13] = _b[6]; + d[14] = _a[7]; + d[15] = _b[7]; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i ret = _mm_unpacklo_epi8(a, b); - return validateInt8(ret, i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, - i12, i13, i14, i15); + return VALIDATE_INT8_M128(ret, d); } result_t test_mm_unpacklo_pd(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -6471,8 +7081,8 @@ result_t test_mm_unpacklo_pd(const SSE2NEONTestImpl &impl, uint32_t iter) const double *_a = (const double *) impl.mTestFloatPointer1; const double *_b = (const double *) impl.mTestFloatPointer2; - __m128d a = 
do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d ret = _mm_unpacklo_pd(a, b); return validateDouble(ret, _a[0], _b[0]); @@ -6486,8 +7096,8 @@ result_t test_mm_xor_pd(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t d0 = _a[0] ^ _b[0]; int64_t d1 = _a[1] ^ _b[1]; - __m128d a = do_mm_load_pd((const double *) _a); - __m128d b = do_mm_load_pd((const double *) _b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_xor_pd(a, b); return validateDouble(c, *((double *) &d0), *((double *) &d1)); @@ -6501,8 +7111,8 @@ result_t test_mm_xor_si128(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t d0 = _a[0] ^ _b[0]; int64_t d1 = _a[1] ^ _b[1]; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_xor_si128(a, b); return validateInt64(c, d0, d1); @@ -6517,8 +7127,8 @@ result_t test_mm_addsub_pd(const SSE2NEONTestImpl &impl, uint32_t iter) double d0 = _a[0] - _b[0]; double d1 = _a[1] + _b[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_addsub_pd(a, b); return validateDouble(c, d0, d1); @@ -6536,8 +7146,8 @@ result_t test_mm_addsub_ps(const SSE2NEONTestImpl &impl, uint32_t iter) float f2 = _a[2] - _b[2]; float f3 = _a[3] + _b[3]; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 c = _mm_addsub_ps(a, b); return validateFloat(c, f0, f1, f2, f3); @@ -6551,8 +7161,8 @@ result_t test_mm_hadd_pd(const SSE2NEONTestImpl &impl, uint32_t iter) double f0 = _a[0] + _a[1]; double f1 = _b[0] + _b[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_hadd_pd(a, b); return validateDouble(c, f0, f1); @@ -6570,8 +7180,8 @@ result_t test_mm_hadd_ps(const SSE2NEONTestImpl &impl, uint32_t iter) float f2 = _b[0] + _b[1]; float f3 = _b[2] + _b[3]; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 c = _mm_hadd_ps(a, b); return validateFloat(c, f0, f1, f2, f3); @@ -6585,8 +7195,8 @@ result_t test_mm_hsub_pd(const SSE2NEONTestImpl &impl, uint32_t iter) double f0 = _a[0] - _a[1]; double f1 = _b[0] - _b[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); __m128d c = _mm_hsub_pd(a, b); return validateDouble(c, f0, f1); @@ -6604,8 +7214,8 @@ result_t test_mm_hsub_ps(const SSE2NEONTestImpl &impl, uint32_t iter) float f2 = _b[0] - _b[1]; float f3 = _b[2] - _b[3]; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 c = _mm_hsub_ps(a, b); return validateFloat(c, f0, f1, f2, f3); @@ -6628,7 +7238,7 @@ result_t test_mm_loaddup_pd(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_movedup_pd(const SSE2NEONTestImpl &impl, uint32_t iter) { const double *p = (const double *) impl.mTestFloatPointer1; - __m128d a = do_mm_load_pd(p); + __m128d a = load_m128d(p); __m128d b = _mm_movedup_pd(a); return validateDouble(b, p[0], p[0]); @@ -6637,14 +7247,14 @@ result_t test_mm_movedup_pd(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_movehdup_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *p = 
impl.mTestFloatPointer1; - __m128 a = do_mm_load_ps(p); + __m128 a = load_m128(p); return validateFloat(_mm_movehdup_ps(a), p[1], p[1], p[3], p[3]); } result_t test_mm_moveldup_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *p = impl.mTestFloatPointer1; - __m128 a = do_mm_load_ps(p); + __m128 a = load_m128(p); return validateFloat(_mm_moveldup_ps(a), p[0], p[0], p[2], p[2]); } @@ -6652,104 +7262,96 @@ result_t test_mm_moveldup_ps(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_abs_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); __m128i c = _mm_abs_epi16(a); - uint32_t d0 = (_a[0] < 0) ? -_a[0] : _a[0]; - uint32_t d1 = (_a[1] < 0) ? -_a[1] : _a[1]; - uint32_t d2 = (_a[2] < 0) ? -_a[2] : _a[2]; - uint32_t d3 = (_a[3] < 0) ? -_a[3] : _a[3]; - uint32_t d4 = (_a[4] < 0) ? -_a[4] : _a[4]; - uint32_t d5 = (_a[5] < 0) ? -_a[5] : _a[5]; - uint32_t d6 = (_a[6] < 0) ? -_a[6] : _a[6]; - uint32_t d7 = (_a[7] < 0) ? -_a[7] : _a[7]; + uint32_t d[8]; + d[0] = (_a[0] < 0) ? -_a[0] : _a[0]; + d[1] = (_a[1] < 0) ? -_a[1] : _a[1]; + d[2] = (_a[2] < 0) ? -_a[2] : _a[2]; + d[3] = (_a[3] < 0) ? -_a[3] : _a[3]; + d[4] = (_a[4] < 0) ? -_a[4] : _a[4]; + d[5] = (_a[5] < 0) ? -_a[5] : _a[5]; + d[6] = (_a[6] < 0) ? -_a[6] : _a[6]; + d[7] = (_a[7] < 0) ? -_a[7] : _a[7]; - return validateUInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_UINT16_M128(c, d); } result_t test_mm_abs_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; - __m128i a = do_mm_load_ps(_a); + __m128i a = load_m128i(_a); __m128i c = _mm_abs_epi32(a); - uint32_t d0 = (_a[0] < 0) ? -_a[0] : _a[0]; - uint32_t d1 = (_a[1] < 0) ? -_a[1] : _a[1]; - uint32_t d2 = (_a[2] < 0) ? -_a[2] : _a[2]; - uint32_t d3 = (_a[3] < 0) ? -_a[3] : _a[3]; + uint32_t d[4]; + d[0] = (_a[0] < 0) ? -_a[0] : _a[0]; + d[1] = (_a[1] < 0) ? -_a[1] : _a[1]; + d[2] = (_a[2] < 0) ? -_a[2] : _a[2]; + d[3] = (_a[3] < 0) ? -_a[3] : _a[3]; - return validateUInt32(c, d0, d1, d2, d3); + return VALIDATE_UINT32_M128(c, d); } result_t test_mm_abs_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) { const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); __m128i c = _mm_abs_epi8(a); - uint32_t d0 = (_a[0] < 0) ? -_a[0] : _a[0]; - uint32_t d1 = (_a[1] < 0) ? -_a[1] : _a[1]; - uint32_t d2 = (_a[2] < 0) ? -_a[2] : _a[2]; - uint32_t d3 = (_a[3] < 0) ? -_a[3] : _a[3]; - uint32_t d4 = (_a[4] < 0) ? -_a[4] : _a[4]; - uint32_t d5 = (_a[5] < 0) ? -_a[5] : _a[5]; - uint32_t d6 = (_a[6] < 0) ? -_a[6] : _a[6]; - uint32_t d7 = (_a[7] < 0) ? -_a[7] : _a[7]; - uint32_t d8 = (_a[8] < 0) ? -_a[8] : _a[8]; - uint32_t d9 = (_a[9] < 0) ? -_a[9] : _a[9]; - uint32_t d10 = (_a[10] < 0) ? -_a[10] : _a[10]; - uint32_t d11 = (_a[11] < 0) ? -_a[11] : _a[11]; - uint32_t d12 = (_a[12] < 0) ? -_a[12] : _a[12]; - uint32_t d13 = (_a[13] < 0) ? -_a[13] : _a[13]; - uint32_t d14 = (_a[14] < 0) ? -_a[14] : _a[14]; - uint32_t d15 = (_a[15] < 0) ? -_a[15] : _a[15]; - - return validateUInt8(c, d0, d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, - d12, d13, d14, d15); + uint32_t d[16]; + for (int i = 0; i < 16; i++) { + d[i] = (_a[i] < 0) ? 
-_a[i] : _a[i]; + } + + return VALIDATE_UINT8_M128(c, d); } result_t test_mm_abs_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; - __m64 a = do_mm_load_m64((const int64_t *) _a); + __m64 a = load_m64(_a); __m64 c = _mm_abs_pi16(a); - uint32_t d0 = (_a[0] < 0) ? -_a[0] : _a[0]; - uint32_t d1 = (_a[1] < 0) ? -_a[1] : _a[1]; - uint32_t d2 = (_a[2] < 0) ? -_a[2] : _a[2]; - uint32_t d3 = (_a[3] < 0) ? -_a[3] : _a[3]; + uint32_t d[4]; + d[0] = (_a[0] < 0) ? -_a[0] : _a[0]; + d[1] = (_a[1] < 0) ? -_a[1] : _a[1]; + d[2] = (_a[2] < 0) ? -_a[2] : _a[2]; + d[3] = (_a[3] < 0) ? -_a[3] : _a[3]; - return validateUInt16(c, d0, d1, d2, d3); + return VALIDATE_UINT16_M64(c, d); } result_t test_mm_abs_pi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; - __m64 a = do_mm_load_m64((const int64_t *) _a); + __m64 a = load_m64(_a); __m64 c = _mm_abs_pi32(a); - uint32_t d0 = (_a[0] < 0) ? -_a[0] : _a[0]; - uint32_t d1 = (_a[1] < 0) ? -_a[1] : _a[1]; + uint32_t d[2]; + d[0] = (_a[0] < 0) ? -_a[0] : _a[0]; + d[1] = (_a[1] < 0) ? -_a[1] : _a[1]; - return validateUInt32(c, d0, d1); + return VALIDATE_UINT32_M64(c, d); } result_t test_mm_abs_pi8(const SSE2NEONTestImpl &impl, uint32_t iter) { const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; - __m64 a = do_mm_load_m64((const int64_t *) _a); + __m64 a = load_m64(_a); __m64 c = _mm_abs_pi8(a); - uint32_t d0 = (_a[0] < 0) ? -_a[0] : _a[0]; - uint32_t d1 = (_a[1] < 0) ? -_a[1] : _a[1]; - uint32_t d2 = (_a[2] < 0) ? -_a[2] : _a[2]; - uint32_t d3 = (_a[3] < 0) ? -_a[3] : _a[3]; - uint32_t d4 = (_a[4] < 0) ? -_a[4] : _a[4]; - uint32_t d5 = (_a[5] < 0) ? -_a[5] : _a[5]; - uint32_t d6 = (_a[6] < 0) ? -_a[6] : _a[6]; - uint32_t d7 = (_a[7] < 0) ? -_a[7] : _a[7]; + uint32_t d[8]; + d[0] = (_a[0] < 0) ? -_a[0] : _a[0]; + d[1] = (_a[1] < 0) ? -_a[1] : _a[1]; + d[2] = (_a[2] < 0) ? -_a[2] : _a[2]; + d[3] = (_a[3] < 0) ? -_a[3] : _a[3]; + d[4] = (_a[4] < 0) ? -_a[4] : _a[4]; + d[5] = (_a[5] < 0) ? -_a[5] : _a[5]; + d[6] = (_a[6] < 0) ? -_a[6] : _a[6]; + d[7] = (_a[7] < 0) ? 
-_a[7] : _a[7]; - return validateUInt8(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_UINT8_M64(c, d); } result_t test_mm_alignr_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -6759,8 +7361,7 @@ result_t test_mm_alignr_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) #else const uint8_t *_a = (const uint8_t *) impl.mTestIntPointer1; const uint8_t *_b = (const uint8_t *) impl.mTestIntPointer2; - // FIXME: The different immediate value should be tested in the future - const int shift = 18; + unsigned int shift = (iter % 5) << 3; uint8_t d[32]; if (shift >= 32) { @@ -6769,7 +7370,7 @@ result_t test_mm_alignr_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) memcpy((void *) d, (const void *) _b, 16); memcpy((void *) (d + 16), (const void *) _a, 16); // shifting - for (uint x = 0; x < sizeof(d); x++) { + for (size_t x = 0; x < sizeof(d); x++) { if (x + shift >= sizeof(d)) d[x] = 0; else @@ -6777,12 +7378,28 @@ result_t test_mm_alignr_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) } } - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); - __m128i ret = _mm_alignr_epi8(a, b, shift); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); + __m128i ret = _mm_setzero_si128(); + switch (iter % 5) { + case 0: + ret = _mm_alignr_epi8(a, b, 0); + break; + case 1: + ret = _mm_alignr_epi8(a, b, 8); + break; + case 2: + ret = _mm_alignr_epi8(a, b, 16); + break; + case 3: + ret = _mm_alignr_epi8(a, b, 24); + break; + case 4: + ret = _mm_alignr_epi8(a, b, 32); + break; + } - return validateUInt8(ret, d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7], - d[8], d[9], d[10], d[11], d[12], d[13], d[14], d[15]); + return VALIDATE_UINT8_M128(ret, d); #endif } @@ -6793,8 +7410,7 @@ result_t test_mm_alignr_pi8(const SSE2NEONTestImpl &impl, uint32_t iter) #else const uint8_t *_a = (const uint8_t *) impl.mTestIntPointer1; const uint8_t *_b = (const uint8_t *) impl.mTestIntPointer2; - // FIXME: The different immediate value should be tested in the future - const int shift = 10; + unsigned int shift = (iter % 3) << 3; uint8_t d[16]; if (shift >= 16) { @@ -6803,7 +7419,7 @@ result_t test_mm_alignr_pi8(const SSE2NEONTestImpl &impl, uint32_t iter) memcpy((void *) d, (const void *) _b, 8); memcpy((void *) (d + 8), (const void *) _a, 8); // shifting - for (uint x = 0; x < sizeof(d); x++) { + for (size_t x = 0; x < sizeof(d); x++) { if (x + shift >= sizeof(d)) d[x] = 0; else @@ -6811,11 +7427,23 @@ result_t test_mm_alignr_pi8(const SSE2NEONTestImpl &impl, uint32_t iter) } } - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); - __m64 ret = _mm_alignr_pi8(a, b, shift); + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); + uint8_t zeros[] = {0, 0, 0, 0, 0, 0, 0, 0}; + __m64 ret = load_m64(zeros); + switch (iter % 3) { + case 0: + ret = _mm_alignr_pi8(a, b, 0); + break; + case 1: + ret = _mm_alignr_pi8(a, b, 8); + break; + case 2: + ret = _mm_alignr_pi8(a, b, 16); + break; + } - return validateUInt8(ret, d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7]); + return VALIDATE_UINT8_M64(ret, d); #endif } @@ -6823,58 +7451,62 @@ result_t test_mm_hadd_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; const int16_t *_b = (const int16_t *) impl.mTestIntPointer2; - int16_t d0 = _a[0] + _a[1]; - int16_t d1 = _a[2] + _a[3]; - int16_t d2 = _a[4] + _a[5]; - int16_t d3 = _a[6] + _a[7]; - int16_t d4 = _b[0] + _b[1]; - int16_t d5 = _b[2] + _b[3]; - int16_t d6 = _b[4] + 
_b[5]; - int16_t d7 = _b[6] + _b[7]; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int16_t d[8]; + d[0] = _a[0] + _a[1]; + d[1] = _a[2] + _a[3]; + d[2] = _a[4] + _a[5]; + d[3] = _a[6] + _a[7]; + d[4] = _b[0] + _b[1]; + d[5] = _b[2] + _b[3]; + d[6] = _b[4] + _b[5]; + d[7] = _b[6] + _b[7]; + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i ret = _mm_hadd_epi16(a, b); - return validateInt16(ret, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_INT16_M128(ret, d); } result_t test_mm_hadd_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; const int32_t *_b = (const int32_t *) impl.mTestIntPointer2; - int32_t d0 = _a[0] + _a[1]; - int32_t d1 = _a[2] + _a[3]; - int32_t d2 = _b[0] + _b[1]; - int32_t d3 = _b[2] + _b[3]; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int32_t d[4]; + d[0] = _a[0] + _a[1]; + d[1] = _a[2] + _a[3]; + d[2] = _b[0] + _b[1]; + d[3] = _b[2] + _b[3]; + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i ret = _mm_hadd_epi32(a, b); - return validateInt32(ret, d0, d1, d2, d3); + return VALIDATE_INT32_M128(ret, d); } result_t test_mm_hadd_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; const int16_t *_b = (const int16_t *) impl.mTestIntPointer2; - int16_t d0 = _a[0] + _a[1]; - int16_t d1 = _a[2] + _a[3]; - int16_t d2 = _b[0] + _b[1]; - int16_t d3 = _b[2] + _b[3]; - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + int16_t d[4]; + d[0] = _a[0] + _a[1]; + d[1] = _a[2] + _a[3]; + d[2] = _b[0] + _b[1]; + d[3] = _b[2] + _b[3]; + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m64 ret = _mm_hadd_pi16(a, b); - return validateInt16(ret, d0, d1, d2, d3); + return VALIDATE_INT16_M64(ret, d); } result_t test_mm_hadd_pi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; const int32_t *_b = (const int32_t *) impl.mTestIntPointer2; - int32_t d0 = _a[0] + _a[1]; - int32_t d1 = _b[0] + _b[1]; - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + int32_t d[2]; + d[0] = _a[0] + _a[1]; + d[1] = _b[0] + _b[1]; + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m64 ret = _mm_hadd_pi32(a, b); - return validateInt32(ret, d0, d1); + return VALIDATE_INT32_M64(ret, d); } result_t test_mm_hadds_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -6901,12 +7533,11 @@ result_t test_mm_hadds_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) d16[i] = (int16_t) d32[i]; } - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_hadds_epi16(a, b); - return validateInt16(c, d16[0], d16[1], d16[2], d16[3], d16[4], d16[5], - d16[6], d16[7]); + return VALIDATE_INT16_M128(c, d16); } result_t test_mm_hadds_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -6929,11 +7560,11 @@ result_t test_mm_hadds_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) d16[i] = (int16_t) d32[i]; } - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m64 c = _mm_hadds_pi16(a, b); - return validateInt16(c, d16[0], d16[1], d16[2], d16[3]); + return VALIDATE_INT16_M64(c, d16); } result_t 
test_mm_hsub_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -6941,20 +7572,21 @@ result_t test_mm_hsub_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; const int16_t *_b = (const int16_t *) impl.mTestIntPointer1; - int16_t d0 = _a[0] - _a[1]; - int16_t d1 = _a[2] - _a[3]; - int16_t d2 = _a[4] - _a[5]; - int16_t d3 = _a[6] - _a[7]; - int16_t d4 = _b[0] - _b[1]; - int16_t d5 = _b[2] - _b[3]; - int16_t d6 = _b[4] - _b[5]; - int16_t d7 = _b[6] - _b[7]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int16_t d[8]; + d[0] = _a[0] - _a[1]; + d[1] = _a[2] - _a[3]; + d[2] = _a[4] - _a[5]; + d[3] = _a[6] - _a[7]; + d[4] = _b[0] - _b[1]; + d[5] = _b[2] - _b[3]; + d[6] = _b[4] - _b[5]; + d[7] = _b[6] - _b[7]; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_hsub_epi16(a, b); - return validateInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_INT16_M128(c, d); } result_t test_mm_hsub_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -6962,16 +7594,17 @@ result_t test_mm_hsub_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) const int32_t *_a = impl.mTestIntPointer1; const int32_t *_b = impl.mTestIntPointer1; - int32_t d0 = _a[0] - _a[1]; - int32_t d1 = _a[2] - _a[3]; - int32_t d2 = _b[0] - _b[1]; - int32_t d3 = _b[2] - _b[3]; + int32_t d[4]; + d[0] = _a[0] - _a[1]; + d[1] = _a[2] - _a[3]; + d[2] = _b[0] - _b[1]; + d[3] = _b[2] - _b[3]; - __m128i a = do_mm_load_ps(_a); - __m128i b = do_mm_load_ps(_b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_hsub_epi32(a, b); - return validateInt32(c, d0, d1, d2, d3); + return VALIDATE_INT32_M128(c, d); } result_t test_mm_hsub_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -6979,15 +7612,16 @@ result_t test_mm_hsub_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; const int16_t *_b = (const int16_t *) impl.mTestIntPointer2; - int16_t d0 = _a[0] - _a[1]; - int16_t d1 = _a[2] - _a[3]; - int16_t d2 = _b[0] - _b[1]; - int16_t d3 = _b[2] - _b[3]; - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + int16_t d[4]; + d[0] = _a[0] - _a[1]; + d[1] = _a[2] - _a[3]; + d[2] = _b[0] - _b[1]; + d[3] = _b[2] - _b[3]; + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m64 c = _mm_hsub_pi16(a, b); - return validateInt16(c, d0, d1, d2, d3); + return VALIDATE_INT16_M64(c, d); } result_t test_mm_hsub_pi32(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -6995,14 +7629,15 @@ result_t test_mm_hsub_pi32(const SSE2NEONTestImpl &impl, uint32_t iter) const int32_t *_a = impl.mTestIntPointer1; const int32_t *_b = impl.mTestIntPointer2; - int32_t d0 = _a[0] - _a[1]; - int32_t d1 = _b[0] - _b[1]; + int32_t d[2]; + d[0] = _a[0] - _a[1]; + d[1] = _b[0] - _b[1]; - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m64 c = _mm_hsub_pi32(a, b); - return validateInt32(c, d0, d1); + return VALIDATE_INT32_M64(c, d); } result_t test_mm_hsubs_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -7029,12 +7664,11 @@ result_t test_mm_hsubs_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) d16[i] = (int16_t) d32[i]; } - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = 
_mm_hsubs_epi16(a, b); - return validateInt16(c, d16[0], d16[1], d16[2], d16[3], d16[4], d16[5], - d16[6], d16[7]); + return VALIDATE_INT16_M128(c, d16); } result_t test_mm_hsubs_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -7056,52 +7690,74 @@ result_t test_mm_hsubs_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) } } - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m64 c = _mm_hsubs_pi16(a, b); - return validateInt16(c, _d[0], _d[1], _d[2], _d[3]); + return VALIDATE_INT16_M64(c, _d); } result_t test_mm_maddubs_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const uint8_t *_a = (const uint8_t *) impl.mTestIntPointer1; const int8_t *_b = (const int8_t *) impl.mTestIntPointer2; - int32_t d0 = (int32_t)(_a[0] * _b[0]); - int32_t d1 = (int32_t)(_a[1] * _b[1]); - int32_t d2 = (int32_t)(_a[2] * _b[2]); - int32_t d3 = (int32_t)(_a[3] * _b[3]); - int32_t d4 = (int32_t)(_a[4] * _b[4]); - int32_t d5 = (int32_t)(_a[5] * _b[5]); - int32_t d6 = (int32_t)(_a[6] * _b[6]); - int32_t d7 = (int32_t)(_a[7] * _b[7]); - int32_t d8 = (int32_t)(_a[8] * _b[8]); - int32_t d9 = (int32_t)(_a[9] * _b[9]); - int32_t d10 = (int32_t)(_a[10] * _b[10]); - int32_t d11 = (int32_t)(_a[11] * _b[11]); - int32_t d12 = (int32_t)(_a[12] * _b[12]); - int32_t d13 = (int32_t)(_a[13] * _b[13]); - int32_t d14 = (int32_t)(_a[14] * _b[14]); - int32_t d15 = (int32_t)(_a[15] * _b[15]); - - int16_t e0 = saturate_16(d0 + d1); - int16_t e1 = saturate_16(d2 + d3); - int16_t e2 = saturate_16(d4 + d5); - int16_t e3 = saturate_16(d6 + d7); - int16_t e4 = saturate_16(d8 + d9); - int16_t e5 = saturate_16(d10 + d11); - int16_t e6 = saturate_16(d12 + d13); - int16_t e7 = saturate_16(d14 + d15); - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int32_t d0 = (int32_t) (_a[0] * _b[0]); + int32_t d1 = (int32_t) (_a[1] * _b[1]); + int32_t d2 = (int32_t) (_a[2] * _b[2]); + int32_t d3 = (int32_t) (_a[3] * _b[3]); + int32_t d4 = (int32_t) (_a[4] * _b[4]); + int32_t d5 = (int32_t) (_a[5] * _b[5]); + int32_t d6 = (int32_t) (_a[6] * _b[6]); + int32_t d7 = (int32_t) (_a[7] * _b[7]); + int32_t d8 = (int32_t) (_a[8] * _b[8]); + int32_t d9 = (int32_t) (_a[9] * _b[9]); + int32_t d10 = (int32_t) (_a[10] * _b[10]); + int32_t d11 = (int32_t) (_a[11] * _b[11]); + int32_t d12 = (int32_t) (_a[12] * _b[12]); + int32_t d13 = (int32_t) (_a[13] * _b[13]); + int32_t d14 = (int32_t) (_a[14] * _b[14]); + int32_t d15 = (int32_t) (_a[15] * _b[15]); + + int16_t e[8]; + e[0] = saturate_16(d0 + d1); + e[1] = saturate_16(d2 + d3); + e[2] = saturate_16(d4 + d5); + e[3] = saturate_16(d6 + d7); + e[4] = saturate_16(d8 + d9); + e[5] = saturate_16(d10 + d11); + e[6] = saturate_16(d12 + d13); + e[7] = saturate_16(d14 + d15); + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_maddubs_epi16(a, b); - return validateInt16(c, e0, e1, e2, e3, e4, e5, e6, e7); + return VALIDATE_INT16_M128(c, e); } result_t test_mm_maddubs_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + const uint8_t *_a = (const uint8_t *) impl.mTestIntPointer1; + const int8_t *_b = (const int8_t *) impl.mTestIntPointer2; + int16_t d0 = (int16_t) (_a[0] * _b[0]); + int16_t d1 = (int16_t) (_a[1] * _b[1]); + int16_t d2 = (int16_t) (_a[2] * _b[2]); + int16_t d3 = (int16_t) (_a[3] * _b[3]); + int16_t d4 = (int16_t) (_a[4] * _b[4]); + int16_t d5 = (int16_t) (_a[5] * _b[5]); + int16_t d6 = (int16_t) (_a[6] * 
_b[6]); + int16_t d7 = (int16_t) (_a[7] * _b[7]); + + int16_t e[4]; + e[0] = saturate_16(d0 + d1); + e[1] = saturate_16(d2 + d3); + e[2] = saturate_16(d4 + d5); + e[3] = saturate_16(d6 + d7); + + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); + __m64 c = _mm_maddubs_pi16(a, b); + + return VALIDATE_INT16_M64(c, e); } result_t test_mm_mulhrs_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -7109,8 +7765,8 @@ result_t test_mm_mulhrs_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; const int16_t *_b = (const int16_t *) impl.mTestIntPointer2; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); int32_t _c[8]; for (int i = 0; i < 8; i++) { _c[i] = @@ -7118,8 +7774,7 @@ result_t test_mm_mulhrs_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) } __m128i c = _mm_mulhrs_epi16(a, b); - return validateInt16(c, _c[0], _c[1], _c[2], _c[3], _c[4], _c[5], _c[6], - _c[7]); + return VALIDATE_INT16_M128(c, _c); } result_t test_mm_mulhrs_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -7127,8 +7782,8 @@ result_t test_mm_mulhrs_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; const int16_t *_b = (const int16_t *) impl.mTestIntPointer2; - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); int32_t _c[4]; for (int i = 0; i < 4; i++) { _c[i] = @@ -7136,45 +7791,48 @@ result_t test_mm_mulhrs_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) } __m64 c = _mm_mulhrs_pi16(a, b); - return validateInt16(c, _c[0], _c[1], _c[2], _c[3]); + return VALIDATE_INT16_M64(c, _c); } result_t test_mm_shuffle_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) { - const int32_t *a = impl.mTestIntPointer1; - const int32_t *b = impl.mTestIntPointer2; - const uint8_t *tbl = (const uint8_t *) a; - const uint8_t *idx = (const uint8_t *) b; - int32_t r[4]; - - r[0] = ((idx[3] & 0x80) ? 0 : tbl[idx[3] % 16]) << 24; - r[0] |= ((idx[2] & 0x80) ? 0 : tbl[idx[2] % 16]) << 16; - r[0] |= ((idx[1] & 0x80) ? 0 : tbl[idx[1] % 16]) << 8; - r[0] |= ((idx[0] & 0x80) ? 0 : tbl[idx[0] % 16]); - - r[1] = ((idx[7] & 0x80) ? 0 : tbl[idx[7] % 16]) << 24; - r[1] |= ((idx[6] & 0x80) ? 0 : tbl[idx[6] % 16]) << 16; - r[1] |= ((idx[5] & 0x80) ? 0 : tbl[idx[5] % 16]) << 8; - r[1] |= ((idx[4] & 0x80) ? 0 : tbl[idx[4] % 16]); - - r[2] = ((idx[11] & 0x80) ? 0 : tbl[idx[11] % 16]) << 24; - r[2] |= ((idx[10] & 0x80) ? 0 : tbl[idx[10] % 16]) << 16; - r[2] |= ((idx[9] & 0x80) ? 0 : tbl[idx[9] % 16]) << 8; - r[2] |= ((idx[8] & 0x80) ? 0 : tbl[idx[8] % 16]); - - r[3] = ((idx[15] & 0x80) ? 0 : tbl[idx[15] % 16]) << 24; - r[3] |= ((idx[14] & 0x80) ? 0 : tbl[idx[14] % 16]) << 16; - r[3] |= ((idx[13] & 0x80) ? 0 : tbl[idx[13] % 16]) << 8; - r[3] |= ((idx[12] & 0x80) ? 
0 : tbl[idx[12] % 16]); + const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; + const int8_t *_b = (const int8_t *) impl.mTestIntPointer2; + int8_t dst[16]; - __m128i ret = _mm_shuffle_epi8(do_mm_load_ps(a), do_mm_load_ps(b)); + for (int i = 0; i < 16; i++) { + if (_b[i] & 0x80) { + dst[i] = 0; + } else { + dst[i] = _a[_b[i] & 0x0F]; + } + } + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); + __m128i ret = _mm_shuffle_epi8(a, b); - return validateInt32(ret, r[0], r[1], r[2], r[3]); + return VALIDATE_INT8_M128(ret, dst); } result_t test_mm_shuffle_pi8(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; + const int8_t *_b = (const int8_t *) impl.mTestIntPointer2; + int8_t dst[8]; + + for (int i = 0; i < 8; i++) { + if (_b[i] & 0x80) { + dst[i] = 0; + } else { + dst[i] = _a[_b[i] & 0x07]; + } + } + + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); + __m64 ret = _mm_shuffle_pi8(a, b); + + return VALIDATE_INT8_M64(ret, dst); } result_t test_mm_sign_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -7193,11 +7851,11 @@ result_t test_mm_sign_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) } } - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_sign_epi16(a, b); - return validateInt16(c, d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7]); + return VALIDATE_INT16_M128(c, d); } result_t test_mm_sign_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -7216,11 +7874,11 @@ result_t test_mm_sign_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) } } - __m128i a = do_mm_load_ps(_a); - __m128i b = do_mm_load_ps(_b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_sign_epi32(a, b); - return validateInt32(c, d[0], d[1], d[2], d[3]); + return VALIDATE_INT32_M128(c, d); } result_t test_mm_sign_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -7239,12 +7897,11 @@ result_t test_mm_sign_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) } } - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_sign_epi8(a, b); - return validateInt8(c, d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7], d[8], - d[9], d[10], d[11], d[12], d[13], d[14], d[15]); + return VALIDATE_INT8_M128(c, d); } result_t test_mm_sign_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -7263,11 +7920,11 @@ result_t test_mm_sign_pi16(const SSE2NEONTestImpl &impl, uint32_t iter) } } - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m64 c = _mm_sign_pi16(a, b); - return validateInt16(c, d[0], d[1], d[2], d[3]); + return VALIDATE_INT16_M64(c, d); } result_t test_mm_sign_pi32(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -7286,11 +7943,11 @@ result_t test_mm_sign_pi32(const SSE2NEONTestImpl &impl, uint32_t iter) } } - __m64 a = do_mm_load_m64((const int64_t *) _a); - __m64 b = do_mm_load_m64((const int64_t *) _b); + __m64 a = load_m64(_a); + __m64 b = load_m64(_b); __m64 c = _mm_sign_pi32(a, b); - return validateInt32(c, d[0], d[1]); + return VALIDATE_INT32_M64(c, d); } result_t test_mm_sign_pi8(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -7309,11 +7966,11 @@ result_t test_mm_sign_pi8(const SSE2NEONTestImpl &impl, uint32_t iter) } } - __m64 a = do_mm_load_m64((const 
int64_t *) _a);
-    __m64 b = do_mm_load_m64((const int64_t *) _b);
+    __m64 a = load_m64(_a);
+    __m64 b = load_m64(_b);
     __m64 c = _mm_sign_pi8(a, b);
 
-    return validateInt8(c, d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7]);
+    return VALIDATE_INT8_M64(c, d);
 }
 
 /* SSE4.1 */
@@ -7321,126 +7978,80 @@ result_t test_mm_blend_epi16(const SSE2NEONTestImpl &impl, uint32_t iter)
 {
     const int16_t *_a = (const int16_t *) impl.mTestIntPointer1;
     const int16_t *_b = (const int16_t *) impl.mTestIntPointer2;
-    const int mask = 104;
-    int16_t _c[8];
-    for (int j = 0; j < 8; j++) {
-        if ((mask >> j) & 0x1) {
-            _c[j] = _b[j];
-        } else {
-            _c[j] = _a[j];
-        }
-    }
-
-    __m128i a = do_mm_load_ps((const int32_t *) _a);
-    __m128i b = do_mm_load_ps((const int32_t *) _b);
-    __m128i c = _mm_blend_epi16(a, b, mask);
-
-    return validateInt16(c, _c[0], _c[1], _c[2], _c[3], _c[4], _c[5], _c[6],
-                         _c[7]);
+    int16_t _c[8];
+    __m128i a, b, c;
+
+#define TEST_IMPL(IDX)                      \
+    for (int j = 0; j < 8; j++) {           \
+        if ((IDX >> j) & 0x1) {             \
+            _c[j] = _b[j];                  \
+        } else {                            \
+            _c[j] = _a[j];                  \
+        }                                   \
+    }                                       \
+    a = load_m128i(_a);                     \
+    b = load_m128i(_b);                     \
+    c = _mm_blend_epi16(a, b, IDX);         \
+    CHECK_RESULT(VALIDATE_INT16_M128(c, _c));
+
+    IMM_256_ITER
+#undef TEST_IMPL
+    return TEST_SUCCESS;
 }
 
 result_t test_mm_blend_pd(const SSE2NEONTestImpl &impl, uint32_t iter)
 {
     const double *_a = (const double *) impl.mTestFloatPointer1;
     const double *_b = (const double *) impl.mTestFloatPointer2;
-    // the last argument must be a 2-bit immediate
-    const int mask = 3;
-
-    double _c[2];
-    for (int j = 0; j < 2; j++) {
-        if ((mask >> j) & 0x1) {
-            _c[j] = _b[j];
-        } else {
-            _c[j] = _a[j];
-        }
-    }
-
-    __m128d a = do_mm_load_pd((const double *) _a);
-    __m128d b = do_mm_load_pd((const double *) _b);
-    __m128d c = _mm_blend_pd(a, b, mask);
-
-    return validateDouble(c, _c[0], _c[1]);
+    __m128d a, b, c;
+
+#define TEST_IMPL(IDX)                      \
+    double _c##IDX[2];                      \
+    for (int j = 0; j < 2; j++) {           \
+        if ((IDX >> j) & 0x1) {             \
+            _c##IDX[j] = _b[j];             \
+        } else {                            \
+            _c##IDX[j] = _a[j];             \
+        }                                   \
+    }                                       \
+                                            \
+    a = load_m128d(_a);                     \
+    b = load_m128d(_b);                     \
+    c = _mm_blend_pd(a, b, IDX);            \
+    CHECK_RESULT(validateDouble(c, _c##IDX[0], _c##IDX[1]))
+
+    IMM_4_ITER
+#undef TEST_IMPL
+    return TEST_SUCCESS;
 }
 
 result_t test_mm_blend_ps(const SSE2NEONTestImpl &impl, uint32_t iter)
 {
     const float *_a = impl.mTestFloatPointer1;
     const float *_b = impl.mTestFloatPointer2;
-
-    const char mask = (char) iter;
-
-    float _c[4];
-    for (int i = 0; i < 4; i++) {
-        if (mask & (1 << i)) {
-            _c[i] = _b[i];
-        } else {
-            _c[i] = _a[i];
-        }
-    }
-
-    __m128 a = do_mm_load_ps(_a);
-    __m128 b = do_mm_load_ps(_b);
-
-    // gcc and clang can't compile call to _mm_blend_ps with 3rd argument as
-    // integer type due 4 bit size limitation and test framework doesn't support
-    // compile time constant so for testing decided explicit define all 16
-    // possible values
+    __m128 a = load_m128(_a);
+    __m128 b = load_m128(_b);
     __m128 c;
-    switch (mask & 0xF) {
-    case 0:
-        c = _mm_blend_ps(a, b, 0);
-        break;
-    case 1:
-        c = _mm_blend_ps(a, b, 1);
-        break;
-    case 2:
-        c = _mm_blend_ps(a, b, 2);
-        break;
-    case 3:
-        c = _mm_blend_ps(a, b, 3);
-        break;
-
-    case 4:
-        c = _mm_blend_ps(a, b, 4);
-        break;
-    case 5:
-        c = _mm_blend_ps(a, b, 5);
-        break;
-    case 6:
-        c = _mm_blend_ps(a, b, 6);
-        break;
-    case 7:
-        c = _mm_blend_ps(a, b, 7);
-        break;
-    case 8:
-        c = _mm_blend_ps(a, b, 8);
-        break;
-    case 9:
-        c = _mm_blend_ps(a, b, 9);
-        break;
-    case 10:
-        c = _mm_blend_ps(a, b, 10);
-        break;
-    case 11:
-        c = _mm_blend_ps(a, b, 11);
-        break;
-
-    case 12:
-        c = _mm_blend_ps(a, b, 12);
-        break;
-    case 13:
-        c = _mm_blend_ps(a, b, 13);
-        break;
-    case 14:
-        c = _mm_blend_ps(a, b, 14);
-        break;
-    case 15:
-        c = _mm_blend_ps(a, b, 15);
-        break;
-    }
-    return validateFloat(c, _c[0], _c[1], _c[2], _c[3]);
+    // gcc and clang can't compile a call to _mm_blend_ps with the 3rd
+    // argument as an integer type, due to the 4-bit size limitation.
+#define TEST_IMPL(IDX)                                                     \
+    float _c##IDX[4];                                                      \
+    for (int i = 0; i < 4; i++) {                                          \
+        if (IDX & (1 << i)) {                                              \
+            _c##IDX[i] = _b[i];                                            \
+        } else {                                                           \
+            _c##IDX[i] = _a[i];                                            \
+        }                                                                  \
+    }                                                                      \
+                                                                           \
+    c = _mm_blend_ps(a, b, IDX);                                           \
+    CHECK_RESULT(                                                          \
+        validateFloat(c, _c##IDX[0], _c##IDX[1], _c##IDX[2], _c##IDX[3]))
+
+    IMM_4_ITER
+#undef TEST_IMPL
+    return TEST_SUCCESS;
 }
 
 result_t test_mm_blendv_epi8(const SSE2NEONTestImpl &impl, uint32_t iter)
@@ -7465,14 +8076,12 @@ result_t test_mm_blendv_epi8(const SSE2NEONTestImpl &impl, uint32_t iter)
         }
     }
 
-    __m128i a = do_mm_load_ps((const int32_t *) _a);
-    __m128i b = do_mm_load_ps((const int32_t *) _b);
-    __m128i mask = do_mm_load_ps((const int32_t *) _mask);
+    __m128i a = load_m128i(_a);
+    __m128i b = load_m128i(_b);
+    __m128i mask = load_m128i(_mask);
 
     __m128i c = _mm_blendv_epi8(a, b, mask);
 
-    return validateInt8(c, _c[0], _c[1], _c[2], _c[3], _c[4], _c[5], _c[6],
-                        _c[7], _c[8], _c[9], _c[10], _c[11], _c[12], _c[13],
-                        _c[14], _c[15]);
+    return VALIDATE_INT8_M128(c, _c);
 }
 
 result_t test_mm_blendv_pd(const SSE2NEONTestImpl &impl, uint32_t iter)
@@ -7493,9 +8102,9 @@ result_t test_mm_blendv_pd(const SSE2NEONTestImpl &impl, uint32_t iter)
         }
     }
 
-    __m128d a = do_mm_load_pd(_a);
-    __m128d b = do_mm_load_pd(_b);
-    __m128d mask = do_mm_load_pd(_mask);
+    __m128d a = load_m128d(_a);
+    __m128d b = load_m128d(_b);
+    __m128d mask = load_m128d(_mask);
 
     __m128d c = _mm_blendv_pd(a, b, mask);
 
@@ -7521,9 +8130,9 @@ result_t test_mm_blendv_ps(const SSE2NEONTestImpl &impl, uint32_t iter)
         }
     }
 
-    __m128 a = do_mm_load_ps(_a);
-    __m128 b = do_mm_load_ps(_b);
-    __m128 mask = do_mm_load_ps(_mask);
+    __m128 a = load_m128(_a);
+    __m128 b = load_m128(_b);
+    __m128 mask = load_m128(_mask);
 
     __m128 c = _mm_blendv_ps(a, b, mask);
 
@@ -7537,7 +8146,7 @@ result_t test_mm_ceil_pd(const SSE2NEONTestImpl &impl, uint32_t iter)
     double dx = ceil(_a[0]);
     double dy = ceil(_a[1]);
 
-    __m128d a = do_mm_load_pd(_a);
+    __m128d a = load_m128d(_a);
     __m128d ret = _mm_ceil_pd(a);
 
     return validateDouble(ret, dx, dy);
@@ -7564,8 +8173,8 @@ result_t test_mm_ceil_sd(const SSE2NEONTestImpl &impl, uint32_t iter)
     double dx = ceil(_b[0]);
     double dy = _a[1];
 
-    __m128d a = do_mm_load_pd(_a);
-    __m128d b = do_mm_load_pd(_b);
+    __m128d a = load_m128d(_a);
+    __m128d b = load_m128d(_b);
     __m128d ret = _mm_ceil_sd(a, b);
 
     return validateDouble(ret, dx, dy);
@@ -7578,8 +8187,8 @@ result_t test_mm_ceil_ss(const SSE2NEONTestImpl &impl, uint32_t iter)
 
     float f0 = ceilf(_b[0]);
 
-    __m128 a = do_mm_load_ps(_a);
-    __m128 b = do_mm_load_ps(_b);
+    __m128 a = load_m128(_a);
+    __m128 b = load_m128(_b);
     __m128 c = _mm_ceil_ss(a, b);
 
     return validateFloat(c, f0, _a[1], _a[2], _a[3]);
@@ -7592,8 +8201,8 @@ result_t test_mm_cmpeq_epi64(const SSE2NEONTestImpl &impl, uint32_t iter)
     int64_t d0 = (_a[0] == _b[0]) ? 0xffffffffffffffff : 0x0;
     int64_t d1 = (_a[1] == _b[1]) ?
0xffffffffffffffff : 0x0; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_cmpeq_epi64(a, b); return validateInt64(c, d0, d1); } @@ -7602,15 +8211,16 @@ result_t test_mm_cvtepi16_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const int16_t *_a = (const int16_t *) impl.mTestIntPointer1; - int32_t i0 = (int32_t) _a[0]; - int32_t i1 = (int32_t) _a[1]; - int32_t i2 = (int32_t) _a[2]; - int32_t i3 = (int32_t) _a[3]; + int32_t d[4]; + d[0] = (int32_t) _a[0]; + d[1] = (int32_t) _a[1]; + d[2] = (int32_t) _a[2]; + d[3] = (int32_t) _a[3]; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); __m128i ret = _mm_cvtepi16_epi32(a); - return validateInt32(ret, i0, i1, i2, i3); + return VALIDATE_INT32_M128(ret, d); } result_t test_mm_cvtepi16_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -7620,7 +8230,7 @@ result_t test_mm_cvtepi16_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t i0 = (int64_t) _a[0]; int64_t i1 = (int64_t) _a[1]; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); __m128i ret = _mm_cvtepi16_epi64(a); return validateInt64(ret, i0, i1); @@ -7633,7 +8243,7 @@ result_t test_mm_cvtepi32_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t i0 = (int64_t) _a[0]; int64_t i1 = (int64_t) _a[1]; - __m128i a = do_mm_load_ps(_a); + __m128i a = load_m128i(_a); __m128i ret = _mm_cvtepi32_epi64(a); return validateInt64(ret, i0, i1); @@ -7643,34 +8253,36 @@ result_t test_mm_cvtepi8_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; - int16_t i0 = (int16_t) _a[0]; - int16_t i1 = (int16_t) _a[1]; - int16_t i2 = (int16_t) _a[2]; - int16_t i3 = (int16_t) _a[3]; - int16_t i4 = (int16_t) _a[4]; - int16_t i5 = (int16_t) _a[5]; - int16_t i6 = (int16_t) _a[6]; - int16_t i7 = (int16_t) _a[7]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); + int16_t d[8]; + d[0] = (int16_t) _a[0]; + d[1] = (int16_t) _a[1]; + d[2] = (int16_t) _a[2]; + d[3] = (int16_t) _a[3]; + d[4] = (int16_t) _a[4]; + d[5] = (int16_t) _a[5]; + d[6] = (int16_t) _a[6]; + d[7] = (int16_t) _a[7]; + + __m128i a = load_m128i(_a); __m128i ret = _mm_cvtepi8_epi16(a); - return validateInt16(ret, i0, i1, i2, i3, i4, i5, i6, i7); + return VALIDATE_INT16_M128(ret, d); } result_t test_mm_cvtepi8_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; - int32_t i0 = (int32_t) _a[0]; - int32_t i1 = (int32_t) _a[1]; - int32_t i2 = (int32_t) _a[2]; - int32_t i3 = (int32_t) _a[3]; + int32_t d[4]; + d[0] = (int32_t) _a[0]; + d[1] = (int32_t) _a[1]; + d[2] = (int32_t) _a[2]; + d[3] = (int32_t) _a[3]; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); __m128i ret = _mm_cvtepi8_epi32(a); - return validateInt32(ret, i0, i1, i2, i3); + return VALIDATE_INT32_M128(ret, d); } result_t test_mm_cvtepi8_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -7680,7 +8292,7 @@ result_t test_mm_cvtepi8_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t i0 = (int64_t) _a[0]; int64_t i1 = (int64_t) _a[1]; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); __m128i ret = _mm_cvtepi8_epi64(a); return validateInt64(ret, i0, i1); @@ -7690,15 +8302,16 @@ result_t test_mm_cvtepu16_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const uint16_t *_a = (const uint16_t *) 
impl.mTestIntPointer1; - int32_t i0 = (int32_t) _a[0]; - int32_t i1 = (int32_t) _a[1]; - int32_t i2 = (int32_t) _a[2]; - int32_t i3 = (int32_t) _a[3]; + int32_t d[4]; + d[0] = (int32_t) _a[0]; + d[1] = (int32_t) _a[1]; + d[2] = (int32_t) _a[2]; + d[3] = (int32_t) _a[3]; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); __m128i ret = _mm_cvtepu16_epi32(a); - return validateInt32(ret, i0, i1, i2, i3); + return VALIDATE_INT32_M128(ret, d); } result_t test_mm_cvtepu16_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -7708,7 +8321,7 @@ result_t test_mm_cvtepu16_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t i0 = (int64_t) _a[0]; int64_t i1 = (int64_t) _a[1]; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); __m128i ret = _mm_cvtepu16_epi64(a); return validateInt64(ret, i0, i1); @@ -7721,7 +8334,7 @@ result_t test_mm_cvtepu32_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t i0 = (int64_t) _a[0]; int64_t i1 = (int64_t) _a[1]; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); __m128i ret = _mm_cvtepu32_epi64(a); return validateInt64(ret, i0, i1); @@ -7731,34 +8344,36 @@ result_t test_mm_cvtepu8_epi16(const SSE2NEONTestImpl &impl, uint32_t iter) { const uint8_t *_a = (const uint8_t *) impl.mTestIntPointer1; - int16_t i0 = (int16_t) _a[0]; - int16_t i1 = (int16_t) _a[1]; - int16_t i2 = (int16_t) _a[2]; - int16_t i3 = (int16_t) _a[3]; - int16_t i4 = (int16_t) _a[4]; - int16_t i5 = (int16_t) _a[5]; - int16_t i6 = (int16_t) _a[6]; - int16_t i7 = (int16_t) _a[7]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); + int16_t d[8]; + d[0] = (int16_t) _a[0]; + d[1] = (int16_t) _a[1]; + d[2] = (int16_t) _a[2]; + d[3] = (int16_t) _a[3]; + d[4] = (int16_t) _a[4]; + d[5] = (int16_t) _a[5]; + d[6] = (int16_t) _a[6]; + d[7] = (int16_t) _a[7]; + + __m128i a = load_m128i(_a); __m128i ret = _mm_cvtepu8_epi16(a); - return validateInt16(ret, i0, i1, i2, i3, i4, i5, i6, i7); + return VALIDATE_INT16_M128(ret, d); } result_t test_mm_cvtepu8_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const uint8_t *_a = (const uint8_t *) impl.mTestIntPointer1; - int32_t i0 = (int32_t) _a[0]; - int32_t i1 = (int32_t) _a[1]; - int32_t i2 = (int32_t) _a[2]; - int32_t i3 = (int32_t) _a[3]; + int32_t d[4]; + d[0] = (int32_t) _a[0]; + d[1] = (int32_t) _a[1]; + d[2] = (int32_t) _a[2]; + d[3] = (int32_t) _a[3]; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); __m128i ret = _mm_cvtepu8_epi32(a); - return validateInt32(ret, i0, i1, i2, i3); + return VALIDATE_INT32_M128(ret, d); } result_t test_mm_cvtepu8_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -7768,120 +8383,131 @@ result_t test_mm_cvtepu8_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) int64_t i0 = (int64_t) _a[0]; int64_t i1 = (int64_t) _a[1]; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); __m128i ret = _mm_cvtepu8_epi64(a); return validateInt64(ret, i0, i1); } +#define MM_DP_PD_TEST_CASE_WITH(imm8) \ + do { \ + const double *_a = (const double *) impl.mTestFloatPointer1; \ + const double *_b = (const double *) impl.mTestFloatPointer2; \ + const int imm = imm8; \ + double d[2]; \ + double sum = 0; \ + for (size_t i = 0; i < 2; i++) \ + sum += ((imm) & (1 << (i + 4))) ? _a[i] * _b[i] : 0; \ + for (size_t i = 0; i < 2; i++) \ + d[i] = (imm & (1 << i)) ? 
sum : 0; \ + __m128d a = load_m128d(_a); \ + __m128d b = load_m128d(_b); \ + __m128d ret = _mm_dp_pd(a, b, imm); \ + if (validateDouble(ret, d[0], d[1]) != TEST_SUCCESS) \ + return TEST_FAIL; \ + } while (0) + +#define GENERATE_MM_DP_PD_TEST_CASES \ + MM_DP_PD_TEST_CASE_WITH(0xF0); \ + MM_DP_PD_TEST_CASE_WITH(0xF1); \ + MM_DP_PD_TEST_CASE_WITH(0xF2); \ + MM_DP_PD_TEST_CASE_WITH(0xFF); \ + MM_DP_PD_TEST_CASE_WITH(0x10); \ + MM_DP_PD_TEST_CASE_WITH(0x11); \ + MM_DP_PD_TEST_CASE_WITH(0x12); \ + MM_DP_PD_TEST_CASE_WITH(0x13); \ + MM_DP_PD_TEST_CASE_WITH(0x00); \ + MM_DP_PD_TEST_CASE_WITH(0x01); \ + MM_DP_PD_TEST_CASE_WITH(0x02); \ + MM_DP_PD_TEST_CASE_WITH(0x03); \ + MM_DP_PD_TEST_CASE_WITH(0x20); \ + MM_DP_PD_TEST_CASE_WITH(0x21); \ + MM_DP_PD_TEST_CASE_WITH(0x22); \ + MM_DP_PD_TEST_CASE_WITH(0x23); + result_t test_mm_dp_pd(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + GENERATE_MM_DP_PD_TEST_CASES + return TEST_SUCCESS; } +#define MM_DP_PS_TEST_CASE_WITH(IMM) \ + do { \ + const float *_a = impl.mTestFloatPointer1; \ + const float *_b = impl.mTestFloatPointer2; \ + const int imm = IMM; \ + __m128 a = load_m128(_a); \ + __m128 b = load_m128(_b); \ + __m128 out = _mm_dp_ps(a, b, imm); \ + float r[4]; /* the reference */ \ + float sum = 0; \ + for (size_t i = 0; i < 4; i++) \ + sum += ((imm) & (1 << (i + 4))) ? _a[i] * _b[i] : 0; \ + for (size_t i = 0; i < 4; i++) \ + r[i] = (imm & (1 << i)) ? sum : 0; \ + /* the epsilon has to be large enough, otherwise test suite fails. */ \ + if (validateFloatEpsilon(out, r[0], r[1], r[2], r[3], 2050.0f) != \ + TEST_SUCCESS) \ + return TEST_FAIL; \ + } while (0) + +#define GENERATE_MM_DP_PS_TEST_CASES \ + MM_DP_PS_TEST_CASE_WITH(0xFF); \ + MM_DP_PS_TEST_CASE_WITH(0x7F); \ + MM_DP_PS_TEST_CASE_WITH(0x9F); \ + MM_DP_PS_TEST_CASE_WITH(0x2F); \ + MM_DP_PS_TEST_CASE_WITH(0x0F); \ + MM_DP_PS_TEST_CASE_WITH(0x23); \ + MM_DP_PS_TEST_CASE_WITH(0xB5); + result_t test_mm_dp_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { - // FIXME: The rounding mode would affect the testing result on ARM platform. - _MM_SET_ROUNDING_MODE(_MM_ROUND_NEAREST); - const float *_a = impl.mTestFloatPointer1; - const float *_b = impl.mTestFloatPointer2; - const int imm = 0xFF; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); - __m128 out = _mm_dp_ps(a, b, imm); - - float r[4]; /* the reference */ - float sum = 0; - - for (size_t i = 0; i < 4; i++) - sum += ((imm) & (1 << (i + 4))) ? _a[i] * _b[i] : 0; - for (size_t i = 0; i < 4; i++) - r[i] = (imm & (1 << i)) ? sum : 0; - - /* the epsilon has to be large enough, otherwise test suite fails. 
*/ - return validateFloatEpsilon(out, r[0], r[1], r[2], r[3], 2050.0f); + GENERATE_MM_DP_PS_TEST_CASES + return TEST_SUCCESS; } result_t test_mm_extract_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { int32_t *_a = (int32_t *) impl.mTestIntPointer1; - const int idx = iter & 0x3; - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); int c; - switch (idx) { - case 0: - c = _mm_extract_epi32(a, 0); - break; - case 1: - c = _mm_extract_epi32(a, 1); - break; - case 2: - c = _mm_extract_epi32(a, 2); - break; - case 3: - c = _mm_extract_epi32(a, 3); - break; - } - ASSERT_RETURN(c == *(_a + idx)); +#define TEST_IMPL(IDX) \ + c = _mm_extract_epi32(a, IDX); \ + ASSERT_RETURN(c == *(_a + IDX)); + + IMM_4_ITER +#undef TEST_IMPL return TEST_SUCCESS; } result_t test_mm_extract_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) { int64_t *_a = (int64_t *) impl.mTestIntPointer1; + __m128i a = load_m128i(_a); + __int64 c; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __int64_t c; - - switch (iter & 0x1) { - case 0: - c = _mm_extract_epi64(a, 0); - break; - case 1: - c = _mm_extract_epi64(a, 1); - break; - } +#define TEST_IMPL(IDX) \ + c = _mm_extract_epi64(a, IDX); \ + ASSERT_RETURN(c == *(_a + IDX)); - ASSERT_RETURN(c == *(_a + (iter & 1))); + IMM_2_ITER +#undef TEST_IMPL return TEST_SUCCESS; } result_t test_mm_extract_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) { uint8_t *_a = (uint8_t *) impl.mTestIntPointer1; - const int idx = iter & 0x7; - - __m128i a = do_mm_load_ps((const int32_t *) _a); + __m128i a = load_m128i(_a); int c; - switch (idx) { - case 0: - c = _mm_extract_epi8(a, 0); - break; - case 1: - c = _mm_extract_epi8(a, 1); - break; - case 2: - c = _mm_extract_epi8(a, 2); - break; - case 3: - c = _mm_extract_epi8(a, 3); - break; - case 4: - c = _mm_extract_epi8(a, 4); - break; - case 5: - c = _mm_extract_epi8(a, 5); - break; - case 6: - c = _mm_extract_epi8(a, 6); - break; - case 7: - c = _mm_extract_epi8(a, 7); - break; - } - ASSERT_RETURN(c == *(_a + idx)); +#define TEST_IMPL(IDX) \ + c = _mm_extract_epi8(a, IDX); \ + ASSERT_RETURN(c == *(_a + IDX)); + + IMM_8_ITER +#undef TEST_IMPL return TEST_SUCCESS; } @@ -7892,22 +8518,12 @@ result_t test_mm_extract_ps(const SSE2NEONTestImpl &impl, uint32_t iter) __m128 a = _mm_load_ps(_a); int32_t c; - switch (iter & 0x3) { - case 0: - c = _mm_extract_ps(a, 0); - break; - case 1: - c = _mm_extract_ps(a, 1); - break; - case 2: - c = _mm_extract_ps(a, 2); - break; - case 3: - c = _mm_extract_ps(a, 3); - break; - } +#define TEST_IMPL(IDX) \ + c = _mm_extract_ps(a, IDX); \ + ASSERT_RETURN(c == *(const int32_t *) (_a + IDX)); - ASSERT_RETURN(c == *(const int32_t *) (_a + (iter & 0x3))); + IMM_4_ITER +#undef TEST_IMPL return TEST_SUCCESS; } @@ -7918,7 +8534,7 @@ result_t test_mm_floor_pd(const SSE2NEONTestImpl &impl, uint32_t iter) double dx = floor(_a[0]); double dy = floor(_a[1]); - __m128d a = do_mm_load_pd(_a); + __m128d a = load_m128d(_a); __m128d ret = _mm_floor_pd(a); return validateDouble(ret, dx, dy); @@ -7932,7 +8548,7 @@ result_t test_mm_floor_ps(const SSE2NEONTestImpl &impl, uint32_t iter) float dz = floorf(_a[2]); float dw = floorf(_a[3]); - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); __m128 c = _mm_floor_ps(a); return validateFloat(c, dx, dy, dz, dw); } @@ -7945,8 +8561,8 @@ result_t test_mm_floor_sd(const SSE2NEONTestImpl &impl, uint32_t iter) double dx = floor(_b[0]); double dy = _a[1]; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + 
__m128d b = load_m128d(_b); __m128d ret = _mm_floor_sd(a, b); return validateDouble(ret, dx, dy); @@ -7959,8 +8575,8 @@ result_t test_mm_floor_ss(const SSE2NEONTestImpl &impl, uint32_t iter) float f0 = floorf(_b[0]); - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); __m128 c = _mm_floor_ss(a, b); return validateFloat(c, f0, _a[1], _a[2], _a[3]); @@ -7970,74 +8586,89 @@ result_t test_mm_insert_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; const int32_t insert = (int32_t) *impl.mTestIntPointer2; - const int imm8 = 2; - - int32_t d[4]; - for (int i = 0; i < 4; i++) { - d[i] = _a[i]; - } - d[imm8] = insert; - - __m128i a = do_mm_load_ps(_a); - __m128i b = _mm_insert_epi32(a, (int) insert, imm8); - return validateInt32(b, d[0], d[1], d[2], d[3]); + __m128i a, b; + +#define TEST_IMPL(IDX) \ + int32_t d##IDX[4]; \ + for (int i = 0; i < 4; i++) { \ + d##IDX[i] = _a[i]; \ + } \ + d##IDX[IDX] = insert; \ + \ + a = load_m128i(_a); \ + b = _mm_insert_epi32(a, (int) insert, IDX); \ + CHECK_RESULT(VALIDATE_INT32_M128(b, d##IDX)); + + IMM_4_ITER +#undef TEST_IMPL + return TEST_SUCCESS; } result_t test_mm_insert_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) { const int64_t *_a = (const int64_t *) impl.mTestIntPointer1; int64_t insert = (int64_t) *impl.mTestIntPointer2; - const int imm8 = 1; + __m128i a, b; int64_t d[2]; - - d[0] = _a[0]; - d[1] = _a[1]; - d[imm8] = insert; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = _mm_insert_epi64(a, insert, imm8); - return validateInt64(b, d[0], d[1]); +#define TEST_IMPL(IDX) \ + d[0] = _a[0]; \ + d[1] = _a[1]; \ + d[IDX] = insert; \ + a = load_m128i(_a); \ + b = _mm_insert_epi64(a, insert, IDX); \ + CHECK_RESULT(validateInt64(b, d[0], d[1])); + + IMM_2_ITER +#undef TEST_IMPL + return TEST_SUCCESS; } result_t test_mm_insert_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) { const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; const int8_t insert = (int8_t) *impl.mTestIntPointer2; - const int imm8 = 2; - + __m128i a, b; int8_t d[16]; - for (int i = 0; i < 16; i++) { - d[i] = _a[i]; - } - d[imm8] = insert; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = _mm_insert_epi8(a, insert, imm8); - return validateInt8(b, d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7], d[8], - d[9], d[10], d[11], d[12], d[13], d[14], d[15]); +#define TEST_IMPL(IDX) \ + for (int i = 0; i < 16; i++) { \ + d[i] = _a[i]; \ + } \ + d[IDX] = insert; \ + a = load_m128i(_a); \ + b = _mm_insert_epi8(a, insert, IDX); \ + CHECK_RESULT(VALIDATE_INT8_M128(b, d)); + + IMM_16_ITER +#undef TEST_IMPL + return TEST_SUCCESS; } result_t test_mm_insert_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; const float *_b = impl.mTestFloatPointer2; - const uint8_t imm = 0x1 << 6 | 0x2 << 4 | 0x1 << 2; - - float d[4] = {_a[0], _a[1], _a[2], _a[3]}; - d[(imm >> 4) & 0x3] = _b[(imm >> 6) & 0x3]; - - for (int j = 0; j < 4; j++) { - if (imm & (1 << j)) { - d[j] = 0; - } - } - __m128 a = _mm_load_ps(_a); - __m128 b = _mm_load_ps(_b); - __m128 c = _mm_insert_ps(a, b, imm); - - return validateFloat(c, d[0], d[1], d[2], d[3]); + __m128 a, b, c; +#define TEST_IMPL(IDX) \ + float d##IDX[4] = {_a[0], _a[1], _a[2], _a[3]}; \ + d##IDX[(IDX >> 4) & 0x3] = _b[(IDX >> 6) & 0x3]; \ + \ + for (int j = 0; j < 4; j++) { \ + if (IDX & (1 << j)) { \ + d##IDX[j] = 0; \ + } \ + } \ + \ + a = _mm_load_ps(_a); \ + b = 
_mm_load_ps(_b); \ + c = _mm_insert_ps(a, b, IDX); \ + CHECK_RESULT(validateFloat(c, d##IDX[0], d##IDX[1], d##IDX[2], d##IDX[3])); + + IMM_256_ITER +#undef TEST_IMPL + return TEST_SUCCESS; } result_t test_mm_max_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -8045,45 +8676,46 @@ result_t test_mm_max_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; const int32_t *_b = (const int32_t *) impl.mTestIntPointer2; - int32_t d0 = _a[0] > _b[0] ? _a[0] : _b[0]; - int32_t d1 = _a[1] > _b[1] ? _a[1] : _b[1]; - int32_t d2 = _a[2] > _b[2] ? _a[2] : _b[2]; - int32_t d3 = _a[3] > _b[3] ? _a[3] : _b[3]; + int32_t d[4]; + d[0] = _a[0] > _b[0] ? _a[0] : _b[0]; + d[1] = _a[1] > _b[1] ? _a[1] : _b[1]; + d[2] = _a[2] > _b[2] ? _a[2] : _b[2]; + d[3] = _a[3] > _b[3] ? _a[3] : _b[3]; - __m128i a = do_mm_load_ps(_a); - __m128i b = do_mm_load_ps(_b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_max_epi32(a, b); - return validateInt32(c, d0, d1, d2, d3); + return VALIDATE_INT32_M128(c, d); } result_t test_mm_max_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) { const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; const int8_t *_b = (const int8_t *) impl.mTestIntPointer2; - int8_t d0 = _a[0] > _b[0] ? _a[0] : _b[0]; - int8_t d1 = _a[1] > _b[1] ? _a[1] : _b[1]; - int8_t d2 = _a[2] > _b[2] ? _a[2] : _b[2]; - int8_t d3 = _a[3] > _b[3] ? _a[3] : _b[3]; - int8_t d4 = _a[4] > _b[4] ? _a[4] : _b[4]; - int8_t d5 = _a[5] > _b[5] ? _a[5] : _b[5]; - int8_t d6 = _a[6] > _b[6] ? _a[6] : _b[6]; - int8_t d7 = _a[7] > _b[7] ? _a[7] : _b[7]; - int8_t d8 = _a[8] > _b[8] ? _a[8] : _b[8]; - int8_t d9 = _a[9] > _b[9] ? _a[9] : _b[9]; - int8_t d10 = _a[10] > _b[10] ? _a[10] : _b[10]; - int8_t d11 = _a[11] > _b[11] ? _a[11] : _b[11]; - int8_t d12 = _a[12] > _b[12] ? _a[12] : _b[12]; - int8_t d13 = _a[13] > _b[13] ? _a[13] : _b[13]; - int8_t d14 = _a[14] > _b[14] ? _a[14] : _b[14]; - int8_t d15 = _a[15] > _b[15] ? _a[15] : _b[15]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int8_t d[16]; + d[0] = _a[0] > _b[0] ? _a[0] : _b[0]; + d[1] = _a[1] > _b[1] ? _a[1] : _b[1]; + d[2] = _a[2] > _b[2] ? _a[2] : _b[2]; + d[3] = _a[3] > _b[3] ? _a[3] : _b[3]; + d[4] = _a[4] > _b[4] ? _a[4] : _b[4]; + d[5] = _a[5] > _b[5] ? _a[5] : _b[5]; + d[6] = _a[6] > _b[6] ? _a[6] : _b[6]; + d[7] = _a[7] > _b[7] ? _a[7] : _b[7]; + d[8] = _a[8] > _b[8] ? _a[8] : _b[8]; + d[9] = _a[9] > _b[9] ? _a[9] : _b[9]; + d[10] = _a[10] > _b[10] ? _a[10] : _b[10]; + d[11] = _a[11] > _b[11] ? _a[11] : _b[11]; + d[12] = _a[12] > _b[12] ? _a[12] : _b[12]; + d[13] = _a[13] > _b[13] ? _a[13] : _b[13]; + d[14] = _a[14] > _b[14] ? _a[14] : _b[14]; + d[15] = _a[15] > _b[15] ? _a[15] : _b[15]; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_max_epi8(a, b); - return validateInt8(c, d0, d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, - d12, d13, d14, d15); + return VALIDATE_INT8_M128(c, d); } result_t test_mm_max_epu16(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -8091,20 +8723,21 @@ result_t test_mm_max_epu16(const SSE2NEONTestImpl &impl, uint32_t iter) const uint16_t *_a = (const uint16_t *) impl.mTestIntPointer1; const uint16_t *_b = (const uint16_t *) impl.mTestIntPointer2; - uint16_t d0 = _a[0] > _b[0] ? _a[0] : _b[0]; - uint16_t d1 = _a[1] > _b[1] ? _a[1] : _b[1]; - uint16_t d2 = _a[2] > _b[2] ? _a[2] : _b[2]; - uint16_t d3 = _a[3] > _b[3] ? _a[3] : _b[3]; - uint16_t d4 = _a[4] > _b[4] ? 
_a[4] : _b[4]; - uint16_t d5 = _a[5] > _b[5] ? _a[5] : _b[5]; - uint16_t d6 = _a[6] > _b[6] ? _a[6] : _b[6]; - uint16_t d7 = _a[7] > _b[7] ? _a[7] : _b[7]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + uint16_t d[8]; + d[0] = _a[0] > _b[0] ? _a[0] : _b[0]; + d[1] = _a[1] > _b[1] ? _a[1] : _b[1]; + d[2] = _a[2] > _b[2] ? _a[2] : _b[2]; + d[3] = _a[3] > _b[3] ? _a[3] : _b[3]; + d[4] = _a[4] > _b[4] ? _a[4] : _b[4]; + d[5] = _a[5] > _b[5] ? _a[5] : _b[5]; + d[6] = _a[6] > _b[6] ? _a[6] : _b[6]; + d[7] = _a[7] > _b[7] ? _a[7] : _b[7]; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_max_epu16(a, b); - return validateUInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_UINT16_M128(c, d); } result_t test_mm_max_epu32(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -8112,16 +8745,17 @@ result_t test_mm_max_epu32(const SSE2NEONTestImpl &impl, uint32_t iter) const uint32_t *_a = (const uint32_t *) impl.mTestIntPointer1; const uint32_t *_b = (const uint32_t *) impl.mTestIntPointer2; - uint32_t d0 = _a[0] > _b[0] ? _a[0] : _b[0]; - uint32_t d1 = _a[1] > _b[1] ? _a[1] : _b[1]; - uint32_t d2 = _a[2] > _b[2] ? _a[2] : _b[2]; - uint32_t d3 = _a[3] > _b[3] ? _a[3] : _b[3]; + uint32_t d[4]; + d[0] = _a[0] > _b[0] ? _a[0] : _b[0]; + d[1] = _a[1] > _b[1] ? _a[1] : _b[1]; + d[2] = _a[2] > _b[2] ? _a[2] : _b[2]; + d[3] = _a[3] > _b[3] ? _a[3] : _b[3]; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_max_epu32(a, b); - return validateUInt32(c, d0, d1, d2, d3); + return VALIDATE_UINT32_M128(c, d); } result_t test_mm_min_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -8129,16 +8763,17 @@ result_t test_mm_min_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; const int32_t *_b = (const int32_t *) impl.mTestIntPointer2; - int32_t d0 = _a[0] < _b[0] ? _a[0] : _b[0]; - int32_t d1 = _a[1] < _b[1] ? _a[1] : _b[1]; - int32_t d2 = _a[2] < _b[2] ? _a[2] : _b[2]; - int32_t d3 = _a[3] < _b[3] ? _a[3] : _b[3]; + int32_t d[4]; + d[0] = _a[0] < _b[0] ? _a[0] : _b[0]; + d[1] = _a[1] < _b[1] ? _a[1] : _b[1]; + d[2] = _a[2] < _b[2] ? _a[2] : _b[2]; + d[3] = _a[3] < _b[3] ? _a[3] : _b[3]; - __m128i a = do_mm_load_ps(_a); - __m128i b = do_mm_load_ps(_b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_min_epi32(a, b); - return validateInt32(c, d0, d1, d2, d3); + return VALIDATE_INT32_M128(c, d); } result_t test_mm_min_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -8146,29 +8781,29 @@ result_t test_mm_min_epi8(const SSE2NEONTestImpl &impl, uint32_t iter) const int8_t *_a = (const int8_t *) impl.mTestIntPointer1; const int8_t *_b = (const int8_t *) impl.mTestIntPointer2; - int8_t d0 = _a[0] < _b[0] ? _a[0] : _b[0]; - int8_t d1 = _a[1] < _b[1] ? _a[1] : _b[1]; - int8_t d2 = _a[2] < _b[2] ? _a[2] : _b[2]; - int8_t d3 = _a[3] < _b[3] ? _a[3] : _b[3]; - int8_t d4 = _a[4] < _b[4] ? _a[4] : _b[4]; - int8_t d5 = _a[5] < _b[5] ? _a[5] : _b[5]; - int8_t d6 = _a[6] < _b[6] ? _a[6] : _b[6]; - int8_t d7 = _a[7] < _b[7] ? _a[7] : _b[7]; - int8_t d8 = _a[8] < _b[8] ? _a[8] : _b[8]; - int8_t d9 = _a[9] < _b[9] ? _a[9] : _b[9]; - int8_t d10 = _a[10] < _b[10] ? _a[10] : _b[10]; - int8_t d11 = _a[11] < _b[11] ? _a[11] : _b[11]; - int8_t d12 = _a[12] < _b[12] ? _a[12] : _b[12]; - int8_t d13 = _a[13] < _b[13] ? 
_a[13] : _b[13]; - int8_t d14 = _a[14] < _b[14] ? _a[14] : _b[14]; - int8_t d15 = _a[15] < _b[15] ? _a[15] : _b[15]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + int8_t d[16]; + d[0] = _a[0] < _b[0] ? _a[0] : _b[0]; + d[1] = _a[1] < _b[1] ? _a[1] : _b[1]; + d[2] = _a[2] < _b[2] ? _a[2] : _b[2]; + d[3] = _a[3] < _b[3] ? _a[3] : _b[3]; + d[4] = _a[4] < _b[4] ? _a[4] : _b[4]; + d[5] = _a[5] < _b[5] ? _a[5] : _b[5]; + d[6] = _a[6] < _b[6] ? _a[6] : _b[6]; + d[7] = _a[7] < _b[7] ? _a[7] : _b[7]; + d[8] = _a[8] < _b[8] ? _a[8] : _b[8]; + d[9] = _a[9] < _b[9] ? _a[9] : _b[9]; + d[10] = _a[10] < _b[10] ? _a[10] : _b[10]; + d[11] = _a[11] < _b[11] ? _a[11] : _b[11]; + d[12] = _a[12] < _b[12] ? _a[12] : _b[12]; + d[13] = _a[13] < _b[13] ? _a[13] : _b[13]; + d[14] = _a[14] < _b[14] ? _a[14] : _b[14]; + d[15] = _a[15] < _b[15] ? _a[15] : _b[15]; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_min_epi8(a, b); - return validateInt8(c, d0, d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, - d12, d13, d14, d15); + return VALIDATE_INT8_M128(c, d); } result_t test_mm_min_epu16(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -8176,20 +8811,21 @@ result_t test_mm_min_epu16(const SSE2NEONTestImpl &impl, uint32_t iter) const uint16_t *_a = (const uint16_t *) impl.mTestIntPointer1; const uint16_t *_b = (const uint16_t *) impl.mTestIntPointer2; - uint16_t d0 = _a[0] < _b[0] ? _a[0] : _b[0]; - uint16_t d1 = _a[1] < _b[1] ? _a[1] : _b[1]; - uint16_t d2 = _a[2] < _b[2] ? _a[2] : _b[2]; - uint16_t d3 = _a[3] < _b[3] ? _a[3] : _b[3]; - uint16_t d4 = _a[4] < _b[4] ? _a[4] : _b[4]; - uint16_t d5 = _a[5] < _b[5] ? _a[5] : _b[5]; - uint16_t d6 = _a[6] < _b[6] ? _a[6] : _b[6]; - uint16_t d7 = _a[7] < _b[7] ? _a[7] : _b[7]; - - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + uint16_t d[8]; + d[0] = _a[0] < _b[0] ? _a[0] : _b[0]; + d[1] = _a[1] < _b[1] ? _a[1] : _b[1]; + d[2] = _a[2] < _b[2] ? _a[2] : _b[2]; + d[3] = _a[3] < _b[3] ? _a[3] : _b[3]; + d[4] = _a[4] < _b[4] ? _a[4] : _b[4]; + d[5] = _a[5] < _b[5] ? _a[5] : _b[5]; + d[6] = _a[6] < _b[6] ? _a[6] : _b[6]; + d[7] = _a[7] < _b[7] ? _a[7] : _b[7]; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_min_epu16(a, b); - return validateUInt16(c, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_UINT16_M128(c, d); } result_t test_mm_min_epu32(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -8197,16 +8833,17 @@ result_t test_mm_min_epu32(const SSE2NEONTestImpl &impl, uint32_t iter) const uint32_t *_a = (const uint32_t *) impl.mTestIntPointer1; const uint32_t *_b = (const uint32_t *) impl.mTestIntPointer2; - uint32_t d0 = _a[0] < _b[0] ? _a[0] : _b[0]; - uint32_t d1 = _a[1] < _b[1] ? _a[1] : _b[1]; - uint32_t d2 = _a[2] < _b[2] ? _a[2] : _b[2]; - uint32_t d3 = _a[3] < _b[3] ? _a[3] : _b[3]; + uint32_t d[4]; + d[0] = _a[0] < _b[0] ? _a[0] : _b[0]; + d[1] = _a[1] < _b[1] ? _a[1] : _b[1]; + d[2] = _a[2] < _b[2] ? _a[2] : _b[2]; + d[3] = _a[3] < _b[3] ? 
_a[3] : _b[3]; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_min_epu32(a, b); - return validateUInt32(c, d0, d1, d2, d3); + return VALIDATE_UINT32_M128(c, d); } result_t test_mm_minpos_epu16(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -8219,18 +8856,39 @@ result_t test_mm_minpos_epu16(const SSE2NEONTestImpl &impl, uint32_t iter) min = (uint16_t) _a[i]; } } - uint16_t d0 = min; - uint16_t d1 = index; - uint16_t d2 = 0, d3 = 0, d4 = 0, d5 = 0, d6 = 0, d7 = 0; - __m128i a = do_mm_load_ps((const int32_t *) _a); + uint16_t d[8] = {min, index, 0, 0, 0, 0, 0, 0}; + + __m128i a = load_m128i(_a); __m128i ret = _mm_minpos_epu16(a); - return validateUInt16(ret, d0, d1, d2, d3, d4, d5, d6, d7); + return VALIDATE_UINT16_M128(ret, d); } result_t test_mm_mpsadbw_epu8(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + const uint8_t *_a = (const uint8_t *) impl.mTestIntPointer1; + const uint8_t *_b = (const uint8_t *) impl.mTestIntPointer2; + + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); + __m128i c; +#define TEST_IMPL(IDX) \ + uint8_t a_offset##IDX = ((IDX >> 2) & 0x1) * 4; \ + uint8_t b_offset##IDX = (IDX & 0x3) * 4; \ + \ + uint16_t d##IDX[8] = {}; \ + for (int i = 0; i < 8; i++) { \ + for (int j = 0; j < 4; j++) { \ + d##IDX[i] += \ + abs(_a[(a_offset##IDX + i) + j] - _b[b_offset##IDX + j]); \ + } \ + } \ + c = _mm_mpsadbw_epu8(a, b, IDX); \ + CHECK_RESULT(VALIDATE_UINT16_M128(c, d##IDX)); + + IMM_8_ITER +#undef TEST_IMPL + return TEST_SUCCESS; } result_t test_mm_mul_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -8238,8 +8896,8 @@ result_t test_mm_mul_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; const int32_t *_b = (const int32_t *) impl.mTestIntPointer2; - int64_t dx = (int64_t)(_a[0]) * (int64_t)(_b[0]); - int64_t dy = (int64_t)(_a[2]) * (int64_t)(_b[2]); + int64_t dx = (int64_t) (_a[0]) * (int64_t) (_b[0]); + int64_t dy = (int64_t) (_a[2]) * (int64_t) (_b[2]); __m128i a = _mm_loadu_si128((const __m128i *) _a); __m128i b = _mm_loadu_si128((const __m128i *) _b); @@ -8255,12 +8913,12 @@ result_t test_mm_mullo_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) int32_t d[4]; for (int i = 0; i < 4; i++) { - d[i] = (int32_t)((int64_t) _a[i] * (int64_t) _b[i]); + d[i] = (int32_t) ((int64_t) _a[i] * (int64_t) _b[i]); } - __m128i a = do_mm_load_ps(_a); - __m128i b = do_mm_load_ps(_b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_mullo_epi32(a, b); - return validateInt32(c, d[0], d[1], d[2], d[3]); + return VALIDATE_INT32_M128(c, d); } result_t test_mm_packus_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -8288,20 +8946,20 @@ result_t test_mm_packus_epi32(const SSE2NEONTestImpl &impl, uint32_t iter) d[i + 4] = (uint16_t) _b[i]; } - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i c = _mm_packus_epi32(a, b); - return validateUInt16(c, d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7]); + return VALIDATE_UINT16_M128(c, d); } result_t test_mm_round_pd(const SSE2NEONTestImpl &impl, uint32_t iter) { const double *_a = (double *) impl.mTestFloatPointer1; - double d[2]; + double d[2] = {}; __m128d ret; - __m128d a = do_mm_load_pd(_a); + __m128d a = load_m128d(_a); switch (iter & 0x7) { case 0: d[0] = bankersRounding(_a[0]); 
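/*
 * Editorial sketch, not part of the patch: the nearest-integer reference
 * values above are produced with the test harness's bankersRounding()
 * helper, which, as the name suggests, is expected to implement
 * round-half-to-even -- the default SSE/NEON rounding mode selected by
 * _MM_FROUND_TO_NEAREST_INT. A minimal stand-alone illustration of that
 * rounding rule follows; the harness's actual helper may be implemented
 * differently.
 */
#include <math.h>

static double round_half_to_even(double x)
{
    double f = floor(x);
    double diff = x - f;
    if (diff < 0.5)
        return f;
    if (diff > 0.5)
        return f + 1.0;
    /* exactly halfway between two integers: pick the even neighbour */
    return (fmod(f, 2.0) == 0.0) ? f : f + 1.0;
}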
@@ -8363,10 +9021,10 @@ result_t test_mm_round_pd(const SSE2NEONTestImpl &impl, uint32_t iter) result_t test_mm_round_ps(const SSE2NEONTestImpl &impl, uint32_t iter) { const float *_a = impl.mTestFloatPointer1; - float f[4]; + float f[4] = {}; __m128 ret; - __m128 a = do_mm_load_ps(_a); + __m128 a = load_m128(_a); switch (iter & 0x7) { case 0: f[0] = bankersRounding(_a[0]); @@ -8445,11 +9103,11 @@ result_t test_mm_round_sd(const SSE2NEONTestImpl &impl, uint32_t iter) { const double *_a = (double *) impl.mTestFloatPointer1; const double *_b = (double *) impl.mTestFloatPointer2; - double d[2]; + double d[2] = {}; __m128d ret; - __m128d a = do_mm_load_pd(_a); - __m128d b = do_mm_load_pd(_b); + __m128d a = load_m128d(_a); + __m128d b = load_m128d(_b); d[1] = _a[1]; switch (iter & 0x7) { case 0: @@ -8508,8 +9166,8 @@ result_t test_mm_round_ss(const SSE2NEONTestImpl &impl, uint32_t iter) float f[4]; __m128 ret; - __m128 a = do_mm_load_ps(_a); - __m128 b = do_mm_load_ps(_b); + __m128 a = load_m128(_a); + __m128 b = load_m128(_b); switch (iter & 0x7) { case 0: f[0] = bankersRounding(_b[0]); @@ -8570,13 +9228,13 @@ result_t test_mm_stream_load_si128(const SSE2NEONTestImpl &impl, uint32_t iter) __m128i ret = _mm_stream_load_si128((__m128i *) addr); - return validateInt32(ret, addr[0], addr[1], addr[2], addr[3]); + return VALIDATE_INT32_M128(ret, addr); } result_t test_mm_test_all_ones(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; - __m128i a = do_mm_load_ps(_a); + __m128i a = load_m128i(_a); int32_t d0 = ~_a[0] & (~(uint32_t) 0); int32_t d1 = ~_a[1] & (~(uint32_t) 0); @@ -8593,8 +9251,8 @@ result_t test_mm_test_all_zeros(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; const int32_t *_mask = (const int32_t *) impl.mTestIntPointer2; - __m128i a = do_mm_load_ps(_a); - __m128i mask = do_mm_load_ps(_mask); + __m128i a = load_m128i(_a); + __m128i mask = load_m128i(_mask); int32_t d0 = _a[0] & _mask[0]; int32_t d1 = _a[1] & _mask[1]; @@ -8612,17 +9270,18 @@ result_t test_mm_test_mix_ones_zeros(const SSE2NEONTestImpl &impl, { const int32_t *_a = (const int32_t *) impl.mTestIntPointer1; const int32_t *_mask = (const int32_t *) impl.mTestIntPointer2; - __m128i a = do_mm_load_ps(_a); - __m128i mask = do_mm_load_ps(_mask); + __m128i a = load_m128i(_a); + __m128i mask = load_m128i(_mask); - int32_t d0 = !((_a[0]) & _mask[0]) & !((!_a[0]) & _mask[0]); - int32_t d1 = !((_a[1]) & _mask[1]) & !((!_a[1]) & _mask[1]); - int32_t d2 = !((_a[2]) & _mask[2]) & !((!_a[2]) & _mask[2]); - int32_t d3 = !((_a[3]) & _mask[3]) & !((!_a[3]) & _mask[3]); - int32_t result = ((d0 & d1 & d2 & d3) == 0) ? 1 : 0; + int32_t ZF = 1; + int32_t CF = 1; + for (int i = 0; i < 4; i++) { + ZF &= ((_a[i] & _mask[i]) == 0); + CF &= ((~_a[i] & _mask[i]) == 0); + } + int32_t result = (ZF == 0 && CF == 0); int32_t ret = _mm_test_mix_ones_zeros(a, mask); - return result == ret ? TEST_SUCCESS : TEST_FAIL; } @@ -8665,41 +9324,1325 @@ result_t test_mm_testz_si128(const SSE2NEONTestImpl &impl, uint32_t iter) } /* SSE4.2 */ +#define IS_CMPESTRI 1 + +#define DEF_ENUM_MM_CMPESTRX_VARIANT(c, ...) 
c, + +#define EVAL_MM_CMPESTRX_TEST_CASE(c, type, data_type, im, IM) \ + do { \ + data_type *a = test_mm_##im##_##type##_data[c].a, \ + *b = test_mm_##im##_##type##_data[c].b; \ + int la = test_mm_##im##_##type##_data[c].la, \ + lb = test_mm_##im##_##type##_data[c].lb; \ + const int imm8 = IMM_##c; \ + IIF(IM) \ + (int expect = test_mm_##im##_##type##_data[c].expect, \ + data_type *expect = test_mm_##im##_##type##_data[c].expect); \ + __m128i ma, mb; \ + memcpy(&ma, a, sizeof(ma)); \ + memcpy(&mb, b, sizeof(mb)); \ + IIF(IM) \ + (int res = _mm_##im(ma, la, mb, lb, imm8), \ + __m128i res = _mm_##im(ma, la, mb, lb, imm8)); \ + if (IIF(IM)(res != expect, memcmp(expect, &res, sizeof(__m128i)))) \ + return TEST_FAIL; \ + } while (0); + +#define ENUM_MM_CMPESTRX_TEST_CASES(type, type_lower, data_type, func, FUNC, \ + IM) \ + enum { MM_##FUNC##_##type##_TEST_CASES(DEF_ENUM_MM_CMPESTRX_VARIANT) }; \ + MM_##FUNC##_##type##_TEST_CASES(EVAL_MM_CMPESTRX_TEST_CASE, type_lower, \ + data_type, func, IM) + +#define IMM_UBYTE_EACH_LEAST \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_LEAST_SIGNIFICANT) +#define IMM_UBYTE_EACH_LEAST_NEGATIVE \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_UBYTE_EACH_LEAST_MASKED_NEGATIVE \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_UBYTE_EACH_MOST \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_MOST_SIGNIFICANT) +#define IMM_UBYTE_EACH_MOST_NEGATIVE \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_UBYTE_EACH_MOST_MASKED_NEGATIVE \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_UBYTE_ANY_LEAST \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_LEAST_SIGNIFICANT) +#define IMM_UBYTE_ANY_LEAST_NEGATIVE \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_UBYTE_ANY_LEAST_MASKED_NEGATIVE \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_UBYTE_ANY_MOST \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_MOST_SIGNIFICANT) +#define IMM_UBYTE_ANY_MOST_NEGATIVE \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_UBYTE_ANY_MOST_MASKED_NEGATIVE \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_UBYTE_RANGES_LEAST \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_RANGES | _SIDD_LEAST_SIGNIFICANT) +#define IMM_UBYTE_RANGES_MOST \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_RANGES | _SIDD_MOST_SIGNIFICANT) +#define IMM_UBYTE_RANGES_LEAST_NEGATIVE \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_RANGES | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_UBYTE_RANGES_MOST_NEGATIVE \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_RANGES | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_UBYTE_RANGES_LEAST_MASKED_NEGATIVE \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_RANGES | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_UBYTE_RANGES_MOST_MASKED_NEGATIVE \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_RANGES | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_UBYTE_ORDERED_LEAST \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_LEAST_SIGNIFICANT) +#define IMM_UBYTE_ORDERED_LEAST_NEGATIVE \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ORDERED | 
_SIDD_LEAST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_UBYTE_ORDERED_MOST \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_MOST_SIGNIFICANT) +#define IMM_UBYTE_ORDERED_MOST_NEGATIVE \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_UBYTE_ORDERED_MOST_MASKED_NEGATIVE \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) + +#define IMM_SBYTE_EACH_LEAST \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_LEAST_SIGNIFICANT) +#define IMM_SBYTE_EACH_LEAST_NEGATIVE \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_SBYTE_EACH_LEAST_MASKED_NEGATIVE \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_SBYTE_EACH_MOST \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_MOST_SIGNIFICANT) +#define IMM_SBYTE_EACH_MOST_NEGATIVE \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_SBYTE_EACH_MOST_MASKED_NEGATIVE \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_SBYTE_ANY_LEAST \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_LEAST_SIGNIFICANT) +#define IMM_SBYTE_ANY_LEAST_NEGATIVE \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_SBYTE_ANY_MOST \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_MOST_SIGNIFICANT) +#define IMM_SBYTE_ANY_MOST_MASKED_NEGATIVE \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_SBYTE_RANGES_LEAST \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_RANGES | _SIDD_LEAST_SIGNIFICANT) +#define IMM_SBYTE_RANGES_LEAST_NEGATIVE \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_RANGES | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_SBYTE_RANGES_LEAST_MASKED_NEGATIVE \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_RANGES | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_SBYTE_RANGES_MOST \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_RANGES | _SIDD_MOST_SIGNIFICANT) +#define IMM_SBYTE_RANGES_MOST_NEGATIVE \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_RANGES | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_SBYTE_RANGES_MOST_MASKED_NEGATIVE \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_RANGES | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_SBYTE_ORDERED_LEAST \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_LEAST_SIGNIFICANT) +#define IMM_SBYTE_ORDERED_LEAST_NEGATIVE \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_SBYTE_ORDERED_LEAST_MASKED_NEGATIVE \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_SBYTE_ORDERED_MOST_NEGATIVE \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_SBYTE_ORDERED_MOST \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_MOST_SIGNIFICANT) +#define IMM_SBYTE_ORDERED_MOST_MASKED_NEGATIVE \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) + +#define IMM_UWORD_RANGES_LEAST \ + (_SIDD_UWORD_OPS | _SIDD_CMP_RANGES | _SIDD_LEAST_SIGNIFICANT) +#define IMM_UWORD_RANGES_LEAST_NEGATIVE \ + (_SIDD_UWORD_OPS | _SIDD_CMP_RANGES | _SIDD_LEAST_SIGNIFICANT | \ + 
_SIDD_NEGATIVE_POLARITY) +#define IMM_UWORD_RANGES_LEAST_MASKED_NEGATIVE \ + (_SIDD_UWORD_OPS | _SIDD_CMP_RANGES | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_UWORD_RANGES_MOST \ + (_SIDD_UWORD_OPS | _SIDD_CMP_RANGES | _SIDD_MOST_SIGNIFICANT) +#define IMM_UWORD_RANGES_MOST_NEGATIVE \ + (_SIDD_UWORD_OPS | _SIDD_CMP_RANGES | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_UWORD_RANGES_MOST_MASKED_NEGATIVE \ + (_SIDD_UWORD_OPS | _SIDD_CMP_RANGES | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_UWORD_EACH_LEAST \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_LEAST_SIGNIFICANT) +#define IMM_UWORD_EACH_MOST \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_MOST_SIGNIFICANT) +#define IMM_UWORD_EACH_LEAST_NEGATIVE \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_UWORD_EACH_LEAST_MASKED_NEGATIVE \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_UWORD_EACH_MOST_MASKED_NEGATIVE \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_UWORD_ANY_LEAST \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_LEAST_SIGNIFICANT) +#define IMM_UWORD_ANY_MOST \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_MOST_SIGNIFICANT) +#define IMM_UWORD_ANY_MOST_NEGATIVE \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_UWORD_ANY_LEAST_NEGATIVE \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_UWORD_ANY_LEAST_MASKED_NEGATIVE \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_UWORD_ORDERED_LEAST \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_LEAST_SIGNIFICANT) +#define IMM_UWORD_ORDERED_LEAST_NEGATIVE \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_UWORD_ORDERED_LEAST_MASKED_NEGATIVE \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_UWORD_ORDERED_MOST \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_MOST_SIGNIFICANT) +#define IMM_UWORD_ORDERED_MOST_NEGATIVE \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_UWORD_ORDERED_MOST_MASKED_NEGATIVE \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) + +#define IMM_SWORD_RANGES_LEAST \ + (_SIDD_SWORD_OPS | _SIDD_CMP_RANGES | _SIDD_LEAST_SIGNIFICANT) +#define IMM_SWORD_RANGES_MOST \ + (_SIDD_SWORD_OPS | _SIDD_CMP_RANGES | _SIDD_MOST_SIGNIFICANT) +#define IMM_SWORD_RANGES_LEAST_NEGATIVE \ + (_SIDD_SWORD_OPS | _SIDD_CMP_RANGES | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_SWORD_RANGES_LEAST_MASKED_NEGATIVE \ + (_SIDD_SWORD_OPS | _SIDD_CMP_RANGES | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_SWORD_RANGES_MOST_MASKED_NEGATIVE \ + (_SIDD_SWORD_OPS | _SIDD_CMP_RANGES | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_SWORD_EACH_LEAST \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_LEAST_SIGNIFICANT) +#define IMM_SWORD_EACH_MOST \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_MOST_SIGNIFICANT) +#define IMM_SWORD_EACH_LEAST_NEGATIVE \ + (_SIDD_SWORD_OPS | 
_SIDD_CMP_EQUAL_EACH | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_SWORD_EACH_LEAST_MASKED_NEGATIVE \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_SWORD_EACH_MOST_NEGATIVE \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_SWORD_EACH_MOST_MASKED_NEGATIVE \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_SWORD_ANY_LEAST \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_LEAST_SIGNIFICANT) +#define IMM_SWORD_ANY_LEAST_NEGATIVE \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_LEAST_SIGNIFICANT) +#define IMM_SWORD_ANY_LEAST_MASKED_NEGATIVE \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_SWORD_ANY_MOST \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_MOST_SIGNIFICANT) +#define IMM_SWORD_ANY_MOST_NEGATIVE \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_SWORD_ANY_MOST_MASKED_NEGATIVE \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_SWORD_ANY_MOST_MASKED_NEGATIVE \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_SWORD_ORDERED_LEAST \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_LEAST_SIGNIFICANT) +#define IMM_SWORD_ORDERED_LEAST_NEGATIVE \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_SWORD_ORDERED_LEAST_MASKED_NEGATIVE \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_LEAST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_SWORD_ORDERED_MOST \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_MOST_SIGNIFICANT) +#define IMM_SWORD_ORDERED_MOST_MASKED_NEGATIVE \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_MOST_SIGNIFICANT | \ + _SIDD_MASKED_NEGATIVE_POLARITY) + +typedef struct { + uint8_t a[16], b[16]; + int la, lb; + const int imm8; + int expect; +} test_mm_cmpestri_ubyte_data_t; +typedef struct { + int8_t a[16], b[16]; + int la, lb; + const int imm8; + int expect; +} test_mm_cmpestri_sbyte_data_t; +typedef struct { + uint16_t a[8], b[8]; + int la, lb; + const int imm8; + int expect; +} test_mm_cmpestri_uword_data_t; +typedef struct { + int16_t a[8], b[8]; + int la, lb; + const int imm8; + int expect; +} test_mm_cmpestri_sword_data_t; + +#define TEST_MM_CMPESTRA_UBYTE_DATA_LEN 3 +static test_mm_cmpestri_ubyte_data_t + test_mm_cmpestra_ubyte_data[TEST_MM_CMPESTRA_UBYTE_DATA_LEN] = { + {{20, 10, 33, 56, 78}, + {20, 10, 34, 98, 127, 20, 10, 32, 20, 10, 32, 11, 3, 20, 10, 31}, + 3, + 17, + IMM_UBYTE_ORDERED_MOST, + 1}, + {{20, 127, 0, 45, 77, 1, 34, 43, 109}, + {2, 127, 0, 54, 6, 43, 12, 110, 100}, + 9, + 20, + IMM_UBYTE_EACH_LEAST_NEGATIVE, + 0}, + {{22, 33, 90, 1}, + {22, 33, 90, 1, 1, 5, 4, 7, 98, 34, 1, 12, 13, 14, 15, 16}, + 4, + 11, + IMM_UBYTE_ANY_LEAST_MASKED_NEGATIVE, + 0}, +}; + +#define TEST_MM_CMPESTRA_SBYTE_DATA_LEN 3 +static test_mm_cmpestri_sbyte_data_t + test_mm_cmpestra_sbyte_data[TEST_MM_CMPESTRA_SBYTE_DATA_LEN] = { + {{45, -94, 38, -11, 84, -123, -43, -49, 25, -55, -121, -6, 57, 108, -55, + 69}, + {-26, -61, -21, -96, 48, -112, 95, -56, 29, -55, -121, -6, 57, 108, + -55, 69}, + 23, + 28, + IMM_SBYTE_RANGES_LEAST, + 0}, + {{-12, 8}, + {-12, 7, -12, 8, -13, 45, -12, 8}, + 2, + 8, + 
IMM_SBYTE_ORDERED_MOST_NEGATIVE, + 0}, + {{-100, -127, 56, 78, 21, -1, 9, 127, 45}, + {100, 126, 30, 65, 87, 54, 80, 81, -98, -101, 90, 1, 5, 60, -77, -65}, + 10, + 20, + IMM_SBYTE_ANY_LEAST, + 1}, +}; + +#define TEST_MM_CMPESTRA_UWORD_DATA_LEN 3 +static test_mm_cmpestri_uword_data_t + test_mm_cmpestra_uword_data[TEST_MM_CMPESTRA_UWORD_DATA_LEN] = { + {{10000, 20000, 30000, 40000, 50000}, + {40001, 50002, 10000, 20000, 30000, 40000, 50000}, + 5, + 10, + IMM_UWORD_ORDERED_LEAST, + 0}, + {{1001, 9487, 9487, 8000}, + {1001, 1002, 1003, 8709, 100, 1, 1000, 999}, + 4, + 6, + IMM_UWORD_RANGES_LEAST_MASKED_NEGATIVE, + 0}, + {{12, 21, 0, 45, 88, 10001, 10002, 65535}, + {22, 13, 3, 54, 888, 10003, 10000, 65530}, + 13, + 13, + IMM_UWORD_EACH_MOST, + 1}, +}; + +#define TEST_MM_CMPESTRA_SWORD_DATA_LEN 3 +static test_mm_cmpestri_sword_data_t + test_mm_cmpestra_sword_data[TEST_MM_CMPESTRA_SWORD_DATA_LEN] = { + {{-100, -80, -5, -1, 10, 1000}, + {-100, -99, -80, -2, 11, 789, 889, 999}, + 6, + 12, + IMM_SWORD_RANGES_LEAST_NEGATIVE, + 1}, + {{-30000, -90, -32766, 1200, 5}, + {-30001, 21, 10000, 1201, 888}, + 5, + 5, + IMM_SWORD_EACH_MOST, + 0}, + {{2001, -1928}, + {2000, 1928, 3000, 2289, 4000, 111, 2002, -1928}, + 2, + 9, + IMM_SWORD_ANY_LEAST_MASKED_NEGATIVE, + 0}, +}; + + +#define MM_CMPESTRA_UBYTE_TEST_CASES(_, ...) \ + _(UBYTE_ORDERED_MOST, __VA_ARGS__) \ + _(UBYTE_EACH_LEAST_NEGATIVE, __VA_ARGS__) \ + _(UBYTE_ANY_LEAST_MASKED_NEGATIVE, __VA_ARGS__) + +#define MM_CMPESTRA_SBYTE_TEST_CASES(_, ...) \ + _(SBYTE_RANGES_LEAST, __VA_ARGS__) \ + _(SBYTE_ORDERED_MOST_NEGATIVE, __VA_ARGS__) \ + _(SBYTE_ANY_LEAST, __VA_ARGS__) + +#define MM_CMPESTRA_UWORD_TEST_CASES(_, ...) \ + _(UWORD_ORDERED_LEAST, __VA_ARGS__) \ + _(UWORD_RANGES_LEAST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(UWORD_EACH_MOST, __VA_ARGS__) + +#define MM_CMPESTRA_SWORD_TEST_CASES(_, ...) 
\ + _(SWORD_RANGES_LEAST_NEGATIVE, __VA_ARGS__) \ + _(SWORD_EACH_MOST, __VA_ARGS__) \ + _(SWORD_ANY_LEAST_MASKED_NEGATIVE, __VA_ARGS__) + +#define GENERATE_MM_CMPESTRA_TEST_CASES \ + ENUM_MM_CMPESTRX_TEST_CASES(UBYTE, ubyte, uint8_t, cmpestra, CMPESTRA, \ + IS_CMPESTRI) \ + ENUM_MM_CMPESTRX_TEST_CASES(SBYTE, sbyte, int8_t, cmpestra, CMPESTRA, \ + IS_CMPESTRI) \ + ENUM_MM_CMPESTRX_TEST_CASES(UWORD, uword, uint16_t, cmpestra, CMPESTRA, \ + IS_CMPESTRI) \ + ENUM_MM_CMPESTRX_TEST_CASES(SWORD, sword, int16_t, cmpestra, CMPESTRA, \ + IS_CMPESTRI) + result_t test_mm_cmpestra(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + GENERATE_MM_CMPESTRA_TEST_CASES + return TEST_SUCCESS; } +#define TEST_MM_CMPESTRC_UBYTE_DATA_LEN 4 +static test_mm_cmpestri_ubyte_data_t + test_mm_cmpestrc_ubyte_data[TEST_MM_CMPESTRC_UBYTE_DATA_LEN] = { + {{66, 3, 3, 65}, + {66, 3, 3, 65, 67, 2, 2, 67, 56, 11, 1, 23, 66, 3, 3, 65}, + 4, + 16, + IMM_UBYTE_ORDERED_MOST_MASKED_NEGATIVE, + 1}, + {{1, 11, 2, 22, 3, 33, 4, 44, 5, 55, 6, 66, 7, 77, 8, 88}, + {2, 22, 3, 23, 5, 66, 255, 43, 6, 66, 7, 77, 9, 99, 10, 100}, + 16, + 16, + IMM_UBYTE_EACH_MOST, + 0}, + {{36, 72, 108}, {12, 24, 48, 96, 77, 84}, 3, 6, IMM_UBYTE_ANY_LEAST, 0}, + {{12, 24, 36, 48}, + {11, 49, 50, 56, 77, 15, 10}, + 4, + 7, + IMM_UBYTE_RANGES_LEAST_NEGATIVE, + 1}, +}; + +#define TEST_MM_CMPESTRC_SBYTE_DATA_LEN 4 +static test_mm_cmpestri_sbyte_data_t + test_mm_cmpestrc_sbyte_data[TEST_MM_CMPESTRC_SBYTE_DATA_LEN] = { + {{-22, -30, 40, 45}, + {-31, -32, 46, 77}, + 4, + 4, + IMM_SBYTE_RANGES_MOST, + 0}, + {{-12, -7, 33, 100, 12}, + {-12, -7, 33, 100, 11, -11, -7, 33, 100, 12}, + 5, + 10, + IMM_SBYTE_ORDERED_MOST_MASKED_NEGATIVE, + 1}, + {{1, 2, 3, 4, 5, -1, -2, -3, -4, -5}, + {1, 2, 3, 4, 5, -1, -2, -3, -5}, + 10, + 9, + IMM_SBYTE_ANY_MOST_MASKED_NEGATIVE, + 0}, + {{101, -128, -88, -76, 89, 109, 44, -12, -45, -100, 22, 1, 91}, + {102, -120, 88, -76, 98, 107, 33, 12, 45, -100, 22, 10, 19}, + 13, + 13, + IMM_SBYTE_EACH_MOST, + 1}, +}; + +#define TEST_MM_CMPESTRC_UWORD_DATA_LEN 4 +static test_mm_cmpestri_uword_data_t + test_mm_cmpestrc_uword_data[TEST_MM_CMPESTRC_UWORD_DATA_LEN] = { + {{1000, 2000, 4000, 8000, 16000}, + {40001, 1000, 2000, 40000, 8000, 16000}, + 5, + 6, + IMM_UWORD_ORDERED_LEAST_NEGATIVE, + 1}, + {{1111, 1212}, + {1110, 1213, 1110, 1214, 1100, 1220, 1000, 1233}, + 2, + 8, + IMM_UWORD_RANGES_MOST, + 0}, + {{10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000}, + {9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000}, + 13, + 13, + IMM_UWORD_EACH_LEAST_MASKED_NEGATIVE, + 1}, + {{12}, {11, 13, 14, 15, 10}, 1, 5, IMM_UWORD_ANY_MOST, 0}, +}; + +#define TEST_MM_CMPESTRC_SWORD_DATA_LEN 4 +static test_mm_cmpestri_sword_data_t + test_mm_cmpestrc_sword_data[TEST_MM_CMPESTRC_SWORD_DATA_LEN] = { + {{-100, -90, -80, -66, 1}, + {-101, -102, -1000, 2, 67, 10000}, + 5, + 6, + IMM_SWORD_RANGES_LEAST, + 0}, + {{12, 13, -700, 888, 44, -987, 19}, + {12, 13, -700, 888, 44, -987, 19}, + 7, + 7, + IMM_SWORD_EACH_MOST_NEGATIVE, + 0}, + {{2001, -1992, 1995, 10007, 2000}, + {2000, 1928, 3000, 9822, 5000, 1111, 2002, -1928}, + 5, + 9, + IMM_SWORD_ANY_LEAST_NEGATIVE, + 1}, + {{13, -26, 39}, + {12, -25, 33, 13, -26, 39}, + 3, + 6, + IMM_SWORD_ORDERED_MOST, + 1}, +}; + + +#define MM_CMPESTRC_UBYTE_TEST_CASES(_, ...) \ + _(UBYTE_ORDERED_MOST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(UBYTE_EACH_MOST, __VA_ARGS__) \ + _(UBYTE_ANY_LEAST, __VA_ARGS__) \ + _(UBYTE_RANGES_LEAST_NEGATIVE, __VA_ARGS__) + +#define MM_CMPESTRC_SBYTE_TEST_CASES(_, ...) 
\ + _(SBYTE_RANGES_MOST, __VA_ARGS__) \ + _(SBYTE_ORDERED_MOST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(SBYTE_ANY_MOST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(SBYTE_EACH_MOST, __VA_ARGS__) + +#define MM_CMPESTRC_UWORD_TEST_CASES(_, ...) \ + _(UWORD_ORDERED_LEAST_NEGATIVE, __VA_ARGS__) \ + _(UWORD_RANGES_MOST, __VA_ARGS__) \ + _(UWORD_EACH_LEAST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(UWORD_ANY_MOST, __VA_ARGS__) + +#define MM_CMPESTRC_SWORD_TEST_CASES(_, ...) \ + _(SWORD_RANGES_LEAST, __VA_ARGS__) \ + _(SWORD_EACH_MOST_NEGATIVE, __VA_ARGS__) \ + _(SWORD_ANY_LEAST_NEGATIVE, __VA_ARGS__) \ + _(SWORD_ORDERED_MOST, __VA_ARGS__) + +#define GENERATE_MM_CMPESTRC_TEST_CASES \ + ENUM_MM_CMPESTRX_TEST_CASES(UBYTE, ubyte, uint8_t, cmpestrc, CMPESTRC, \ + IS_CMPESTRI) \ + ENUM_MM_CMPESTRX_TEST_CASES(SBYTE, sbyte, int8_t, cmpestrc, CMPESTRC, \ + IS_CMPESTRI) \ + ENUM_MM_CMPESTRX_TEST_CASES(UWORD, uword, uint16_t, cmpestrc, CMPESTRC, \ + IS_CMPESTRI) \ + ENUM_MM_CMPESTRX_TEST_CASES(SWORD, sword, int16_t, cmpestrc, CMPESTRC, \ + IS_CMPESTRI) + result_t test_mm_cmpestrc(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + GENERATE_MM_CMPESTRC_TEST_CASES + return TEST_SUCCESS; } +#define TEST_MM_CMPESTRI_UBYTE_DATA_LEN 4 +static test_mm_cmpestri_ubyte_data_t + test_mm_cmpestri_ubyte_data[TEST_MM_CMPESTRI_UBYTE_DATA_LEN] = { + {{23, 89, 255, 0, 90, 45, 67, 12, 1, 56, 200, 141, 3, 4, 2, 76}, + {32, 89, 255, 128, 9, 54, 78, 12, 1, 56, 100, 41, 42, 68, 32, 5}, + 16, + 16, + IMM_UBYTE_ANY_LEAST_NEGATIVE, + 0}, + {{0, 83, 112, 12, 221, 54, 76, 83, 112, 10}, + {0, 83, 112, 83, 122, 45, 67, 83, 112, 9}, + 10, + 10, + IMM_UBYTE_EACH_LEAST, + 0}, + {{34, 78, 12}, + {56, 100, 11, 67, 35, 79, 67, 255, 0, 43, 121, 234, 225, 91, 31, 23}, + 3, + 16, + IMM_UBYTE_RANGES_LEAST, + 0}, + {{13, 10, 9, 32, 105, 103, 110, 111, 114, 101, 32, 116, 104, 105, 115, + 32}, + {83, 112, 108, 105, 116, 32, 13, 10, 9, 32, 108, 105, 110, 101, 32, + 32}, + 3, + 15, + IMM_UBYTE_ORDERED_LEAST, + 6}, +}; + +#define TEST_MM_CMPESTRI_SBYTE_DATA_LEN 4 +static test_mm_cmpestri_sbyte_data_t + test_mm_cmpestri_sbyte_data[TEST_MM_CMPESTRI_SBYTE_DATA_LEN] = { + {{-12, -1, 90, -128, 43, 6, 87, 127}, + {-1, -1, 9, -127, 126, 6, 78, 23}, + 8, + 8, + IMM_SBYTE_EACH_LEAST, + 1}, + {{34, 67, -90, 33, 123, -100, 43, 56}, + {43, 76, -90, 44, 20, -100, 54, 56}, + 8, + 8, + IMM_SBYTE_ANY_LEAST, + 0}, + {{-43, 67, 89}, + {-44, -54, -30, -128, 127, 34, 10, -62}, + 3, + 7, + IMM_SBYTE_RANGES_LEAST, + 2}, + {{90, 34, -32, 0, 5}, + {19, 34, -32, 90, 34, -32, 45, 0, 5, 90, 34, -32, 0, 5, 19, 87}, + 3, + 16, + IMM_SBYTE_ORDERED_LEAST, + 3}, +}; + +#define TEST_MM_CMPESTRI_UWORD_DATA_LEN 4 +static test_mm_cmpestri_uword_data_t + test_mm_cmpestri_uword_data[TEST_MM_CMPESTRI_UWORD_DATA_LEN] = { + {{45, 65535, 0, 87, 1000, 10, 45, 26}, + {65534, 0, 0, 78, 1000, 10, 32, 26}, + 8, + 8, + IMM_UWORD_EACH_LEAST, + 2}, + {{45, 23, 10, 54, 88, 10000, 20000, 100}, + {544, 10000, 20000, 1, 0, 2897, 2330, 2892}, + 8, + 8, + IMM_UWORD_ANY_LEAST, + 1}, + {{10000, 15000}, + {12, 45, 67, 899, 10001, 32, 15001, 15000}, + 2, + 8, + IMM_UWORD_RANGES_LEAST, + 4}, + {{0, 1, 54, 89, 100}, + {101, 102, 65535, 0, 1, 54, 89, 100}, + 5, + 8, + IMM_UWORD_ORDERED_LEAST, + 3}, +}; + +#define TEST_MM_CMPESTRI_SWORD_DATA_LEN 4 +static test_mm_cmpestri_sword_data_t + test_mm_cmpestri_sword_data[TEST_MM_CMPESTRI_SWORD_DATA_LEN] = { + {{13, 6, 5, 4, 3, 2, 1, 3}, + {-7, 16, 5, 4, -1, 6, 1, 3}, + 10, + 10, + IMM_SWORD_RANGES_MOST, + 7}, + {{13, 6, 5, 4, 3, 2, 1, 3}, + {-7, 16, 5, 4, -1, 6, 1, 3}, + 
8, + 8, + IMM_SWORD_EACH_LEAST, + 2}, + {{-32768, 90, 455, 67, -1000, -10000, 21, 12}, + {-7, 61, 455, 67, -32768, 32767, 11, 888}, + 8, + 8, + IMM_SWORD_ANY_LEAST, + 2}, + {{-12, -56}, + {-7, 16, 555, 554, -12, 61, -16, 3}, + 2, + 8, + IMM_SWORD_ORDERED_LEAST, + 8}, +}; + +#define MM_CMPESTRI_UBYTE_TEST_CASES(_, ...) \ + _(UBYTE_ANY_LEAST_NEGATIVE, __VA_ARGS__) \ + _(UBYTE_EACH_LEAST, __VA_ARGS__) \ + _(UBYTE_RANGES_LEAST, __VA_ARGS__) \ + _(UBYTE_ORDERED_LEAST, __VA_ARGS__) + +#define MM_CMPESTRI_SBYTE_TEST_CASES(_, ...) \ + _(SBYTE_EACH_LEAST, __VA_ARGS__) \ + _(SBYTE_ANY_LEAST, __VA_ARGS__) \ + _(SBYTE_RANGES_LEAST, __VA_ARGS__) \ + _(SBYTE_ORDERED_LEAST, __VA_ARGS__) + +#define MM_CMPESTRI_UWORD_TEST_CASES(_, ...) \ + _(UWORD_EACH_LEAST, __VA_ARGS__) \ + _(UWORD_ANY_LEAST, __VA_ARGS__) \ + _(UWORD_RANGES_LEAST, __VA_ARGS__) \ + _(UWORD_ORDERED_LEAST, __VA_ARGS__) + +#define MM_CMPESTRI_SWORD_TEST_CASES(_, ...) \ + _(SWORD_RANGES_MOST, __VA_ARGS__) \ + _(SWORD_EACH_LEAST, __VA_ARGS__) \ + _(SWORD_ANY_LEAST, __VA_ARGS__) \ + _(SWORD_ORDERED_LEAST, __VA_ARGS__) + +#define GENERATE_MM_CMPESTRI_TEST_CASES \ + ENUM_MM_CMPESTRX_TEST_CASES(UBYTE, ubyte, uint8_t, cmpestri, CMPESTRI, \ + IS_CMPESTRI) \ + ENUM_MM_CMPESTRX_TEST_CASES(SBYTE, sbyte, int8_t, cmpestri, CMPESTRI, \ + IS_CMPESTRI) \ + ENUM_MM_CMPESTRX_TEST_CASES(UWORD, uword, uint16_t, cmpestri, CMPESTRI, \ + IS_CMPESTRI) \ + ENUM_MM_CMPESTRX_TEST_CASES(SWORD, sword, int16_t, cmpestri, CMPESTRI, \ + IS_CMPESTRI) + result_t test_mm_cmpestri(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + GENERATE_MM_CMPESTRI_TEST_CASES + return TEST_SUCCESS; } +#define IS_CMPESTRM 0 + +typedef struct { + uint8_t a[16], b[16]; + int la, lb; + const int imm8; + uint8_t expect[16]; +} test_mm_cmpestrm_ubyte_data_t; +typedef struct { + int8_t a[16], b[16]; + int la, lb; + const int imm8; + int8_t expect[16]; +} test_mm_cmpestrm_sbyte_data_t; +typedef struct { + uint16_t a[8], b[8]; + int la, lb; + const int imm8; + uint16_t expect[8]; +} test_mm_cmpestrm_uword_data_t; +typedef struct { + int16_t a[8], b[8]; + int la, lb; + const int imm8; + int16_t expect[8]; +} test_mm_cmpestrm_sword_data_t; + +#define IMM_UBYTE_EACH_UNIT \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_UNIT_MASK) +#define IMM_UBYTE_EACH_UNIT_NEGATIVE \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_UNIT_MASK | \ + _SIDD_NEGATIVE_POLARITY) +#define IMM_UBYTE_ANY_UNIT \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_UNIT_MASK) +#define IMM_UBYTE_ANY_BIT \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_BIT_MASK) +#define IMM_UBYTE_RANGES_UNIT \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_RANGES | _SIDD_UNIT_MASK) +#define IMM_UBYTE_ORDERED_UNIT \ + (_SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_UNIT_MASK) + +#define IMM_SBYTE_EACH_UNIT \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_UNIT_MASK) +#define IMM_SBYTE_EACH_BIT_MASKED_NEGATIVE \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_BIT_MASK | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_SBYTE_ANY_UNIT \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_UNIT_MASK) +#define IMM_SBYTE_ANY_UNIT_MASKED_NEGATIVE \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_UNIT_MASK | \ + _SIDD_MASKED_NEGATIVE_POLARITY) +#define IMM_SBYTE_RANGES_UNIT \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_RANGES | _SIDD_UNIT_MASK) +#define IMM_SBYTE_ORDERED_UNIT \ + (_SIDD_SBYTE_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_UNIT_MASK) + +#define IMM_UWORD_RANGES_UNIT \ + (_SIDD_UWORD_OPS | _SIDD_CMP_RANGES | _SIDD_UNIT_MASK) +#define 
IMM_UWORD_EACH_UNIT \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_UNIT_MASK) +#define IMM_UWORD_ANY_UNIT \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_UNIT_MASK) +#define IMM_UWORD_ANY_BIT \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_BIT_MASK) +#define IMM_UWORD_ORDERED_UNIT \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_UNIT_MASK) +#define IMM_UWORD_ORDERED_UNIT_NEGATIVE \ + (_SIDD_UWORD_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_UNIT_MASK | \ + _SIDD_NEGATIVE_POLARITY) + +#define IMM_SWORD_RANGES_UNIT \ + (_SIDD_SWORD_OPS | _SIDD_CMP_RANGES | _SIDD_UNIT_MASK) +#define IMM_SWORD_RANGES_BIT \ + (_SIDD_SWORD_OPS | _SIDD_CMP_RANGES | _SIDD_BIT_MASK) +#define IMM_SWORD_EACH_UNIT \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_EACH | _SIDD_UNIT_MASK) +#define IMM_SWORD_ANY_UNIT \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_ANY | _SIDD_UNIT_MASK) +#define IMM_SWORD_ORDERED_UNIT \ + (_SIDD_SWORD_OPS | _SIDD_CMP_EQUAL_ORDERED | _SIDD_UNIT_MASK) + +#define TEST_MM_CMPESTRM_UBYTE_DATA_LEN 4 +static test_mm_cmpestrm_ubyte_data_t + test_mm_cmpestrm_ubyte_data[TEST_MM_CMPESTRM_UBYTE_DATA_LEN] = { + {{85, 115, 101, 70, 108, 97, 116, 65, 115, 115, 101, 109, 98, 108, 101, + 114}, + {85, 115, 105, 110, 103, 65, 110, 65, 115, 115, 101, 109, 98, 108, 101, + 114}, + 16, + 16, + IMM_UBYTE_EACH_UNIT_NEGATIVE, + {0, 0, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0}}, + {{97, 101, 105, 111, 117, 121}, + {89, 111, 117, 32, 68, 114, 105, 118, 101, 32, 77, 101, 32, 77, 97, + 100}, + 6, + 16, + IMM_UBYTE_ANY_UNIT, + {0, 255, 255, 0, 0, 0, 255, 0, 255, 0, 0, 255, 0, 0, 255, 0}}, + {{97, 122, 65, 90}, + {73, 39, 109, 32, 104, 101, 114, 101, 32, 98, 101, 99, 97, 117, 115, + 101}, + 4, + 16, + IMM_UBYTE_RANGES_UNIT, + {255, 0, 255, 0, 255, 255, 255, 255, 0, 255, 255, 255, 255, 255, 255, + 255}}, + {{87, 101, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, + {87, 104, 101, 110, 87, 101, 87, 105, 108, 108, 66, 101, 87, 101, 100, + 33}, + 2, + 16, + IMM_UBYTE_ORDERED_UNIT, + {0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0}}, +}; + +#define TEST_MM_CMPESTRM_SBYTE_DATA_LEN 4 +static test_mm_cmpestrm_sbyte_data_t + test_mm_cmpestrm_sbyte_data[TEST_MM_CMPESTRM_SBYTE_DATA_LEN] = { + {{-127, -127, 34, 88, 0, 1, -1, 78, 90, 9, 23, 34, 3, -128, 127, 0}, + {0, -127, 34, 88, 12, 43, -128, 78, 8, 9, 43, 32, 7, 126, 115, 0}, + 16, + 16, + IMM_SBYTE_EACH_UNIT, + {0, -1, -1, -1, 0, 0, 0, -1, 0, -1, 0, 0, 0, 0, 0, -1}}, + {{0, 32, 7, 115, -128, 44, 33}, + {0, -127, 34, 88, 12, 43, -128, 78, 8, 9, 43, 32, 7, 126, 115, 0}, + 7, + 10, + IMM_SBYTE_ANY_UNIT_MASKED_NEGATIVE, + {0, -1, -1, -1, -1, -1, 0, -1, -1, -1, 0, 0, 0, 0, 0, 0}}, + {{-128, -80, -90, 10, 33}, + {-126, -93, -80, -77, -56, -23, -10, -1, 0, 3, 10, 12, 13, 33, 34, 56}, + 5, + 16, + IMM_SBYTE_RANGES_UNIT, + {-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 0, 0, 0, 0, 0}}, + {{104, 9, -12}, + {0, 0, 87, 104, 9, -12, 89, -117, 9, 10, -11, 87, -114, 104, 9, -61}, + 3, + 16, + IMM_SBYTE_ORDERED_UNIT, + {0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}}, +}; + +#define TEST_MM_CMPESTRM_UWORD_DATA_LEN 4 +static test_mm_cmpestrm_uword_data_t + test_mm_cmpestrm_uword_data[TEST_MM_CMPESTRM_UWORD_DATA_LEN] = { + {{1, 5, 13, 19, 22}, + {12, 60000, 5, 1, 100, 1000, 34, 20}, + 5, + 8, + IMM_UWORD_RANGES_UNIT, + {0, 0, 65535, 65535, 0, 0, 0, 0}}, + {{65535, 12, 7, 9876, 3456, 12345, 10, 98}, + {65535, 0, 10, 9876, 3456, 0, 13, 32}, + 8, + 8, + IMM_UWORD_EACH_UNIT, + {65535, 0, 0, 65535, 65535, 0, 0, 0}}, + {{100, 0}, + {12345, 6766, 234, 0, 1, 34, 89, 100}, + 2, + 8, + IMM_UWORD_ANY_BIT, 
+ {136, 0, 0, 0, 0, 0, 0, 0}}, + {{123, 67, 890}, + {123, 67, 890, 8900, 4, 0, 123, 67}, + 3, + 8, + IMM_UWORD_ORDERED_UNIT, + {65535, 0, 0, 0, 0, 0, 65535, 0}}, +}; + +#define TEST_MM_CMPESTRM_SWORD_DATA_LEN 4 +static test_mm_cmpestrm_sword_data_t + test_mm_cmpestrm_sword_data[TEST_MM_CMPESTRM_SWORD_DATA_LEN] = { + {{13, 6, 5, 4, 3, 2, 1, 3}, + {-7, 16, 5, 4, -1, 6, 1, 3}, + 10, + 10, + IMM_SWORD_RANGES_UNIT, + {0, 0, 0, 0, 0, 0, -1, -1}}, + {{85, 115, 101, 70, 108, 97, 116, 65}, + {85, 115, 105, 110, 103, 65, 110, 65}, + 8, + 8, + IMM_SWORD_EACH_UNIT, + {-1, -1, 0, 0, 0, 0, 0, -1}}, + {{-32768, 10000, 10, -13}, + {-32767, 32767, -32768, 90, 0, -13, 23, 45}, + 4, + 8, + IMM_SWORD_ANY_UNIT, + {0, 0, -1, 0, 0, -1, 0, 0}}, + {{10, 20, -10, 60}, + {0, 0, 0, 10, 20, -10, 60, 10}, + 4, + 8, + IMM_SWORD_ORDERED_UNIT, + {0, 0, 0, -1, 0, 0, 0, -1}}, +}; + +#define MM_CMPESTRM_UBYTE_TEST_CASES(_, ...) \ + _(UBYTE_EACH_UNIT_NEGATIVE, __VA_ARGS__) \ + _(UBYTE_ANY_UNIT, __VA_ARGS__) \ + _(UBYTE_RANGES_UNIT, __VA_ARGS__) \ + _(UBYTE_ORDERED_UNIT, __VA_ARGS__) + +#define MM_CMPESTRM_SBYTE_TEST_CASES(_, ...) \ + _(SBYTE_EACH_UNIT, __VA_ARGS__) \ + _(SBYTE_ANY_UNIT_MASKED_NEGATIVE, __VA_ARGS__) \ + _(SBYTE_RANGES_UNIT, __VA_ARGS__) \ + _(SBYTE_ORDERED_UNIT, __VA_ARGS__) + +#define MM_CMPESTRM_UWORD_TEST_CASES(_, ...) \ + _(UWORD_RANGES_UNIT, __VA_ARGS__) \ + _(UWORD_EACH_UNIT, __VA_ARGS__) \ + _(UWORD_ANY_BIT, __VA_ARGS__) \ + _(UWORD_ORDERED_UNIT, __VA_ARGS__) + +#define MM_CMPESTRM_SWORD_TEST_CASES(_, ...) \ + _(SWORD_RANGES_UNIT, __VA_ARGS__) \ + _(SWORD_EACH_UNIT, __VA_ARGS__) \ + _(SWORD_ANY_UNIT, __VA_ARGS__) \ + _(SWORD_ORDERED_UNIT, __VA_ARGS__) + +#define GENERATE_MM_CMPESTRM_TEST_CASES \ + ENUM_MM_CMPESTRX_TEST_CASES(UBYTE, ubyte, uint8_t, cmpestrm, CMPESTRM, \ + IS_CMPESTRM) \ + ENUM_MM_CMPESTRX_TEST_CASES(SBYTE, sbyte, int8_t, cmpestrm, CMPESTRM, \ + IS_CMPESTRM) \ + ENUM_MM_CMPESTRX_TEST_CASES(UWORD, uword, uint16_t, cmpestrm, CMPESTRM, \ + IS_CMPESTRM) \ + ENUM_MM_CMPESTRX_TEST_CASES(SWORD, sword, int16_t, cmpestrm, CMPESTRM, \ + IS_CMPESTRM) + result_t test_mm_cmpestrm(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + GENERATE_MM_CMPESTRM_TEST_CASES + return TEST_SUCCESS; } +#undef IS_CMPESTRM + +#define TEST_MM_CMPESTRO_UBYTE_DATA_LEN 4 +static test_mm_cmpestri_ubyte_data_t + test_mm_cmpestro_ubyte_data[TEST_MM_CMPESTRO_UBYTE_DATA_LEN] = { + {{56, 78, 255, 1, 9}, + {56, 78, 43, 255, 1, 6, 9}, + 5, + 7, + IMM_UBYTE_ANY_MOST_NEGATIVE, + 0}, + {{33, 44, 100, 24, 3, 89, 127, 254, 33, 45, 250}, + {33, 44, 100, 22, 3, 98, 125, 254, 33, 4, 243}, + 11, + 11, + IMM_UBYTE_EACH_LEAST_MASKED_NEGATIVE, + 0}, + {{34, 27, 18, 9}, {}, 4, 16, IMM_UBYTE_RANGES_LEAST_MASKED_NEGATIVE, 1}, + {{3, 18, 216}, + {3, 18, 222, 3, 17, 216, 3, 18, 216}, + 3, + 9, + IMM_UBYTE_ORDERED_LEAST_NEGATIVE, + 1}, +}; + +#define TEST_MM_CMPESTRO_SBYTE_DATA_LEN 4 +static test_mm_cmpestri_sbyte_data_t + test_mm_cmpestro_sbyte_data[TEST_MM_CMPESTRO_SBYTE_DATA_LEN] = { + {{23, -23, 24, -24, 25, -25, 26, -26, 27, -27, 28, -28, -29, 29, 30, + 31}, + {24, -23, 25, -24, 25, -25, 26, -26, 27, -27, 28, -28, -29, 29, 30, + 31}, + 16, + 16, + IMM_SBYTE_EACH_MOST_NEGATIVE, + 1}, + {{34, 33, 67, 72, -90, 127, 33, -128, 123, -90, -100, 34, 43, 15, 56, + 3}, + {3, 14, 15, 65, 90, -127, 100, 100}, + 16, + 8, + IMM_SBYTE_ANY_MOST, + 1}, + {{-13, 0, 34}, + {-12, -11, 1, 12, 56, 57, 3, 2, -17}, + 6, + 9, + IMM_SBYTE_RANGES_MOST_MASKED_NEGATIVE, + 0}, + {{1, 2, 3, 4, 5, 6, 7, 8}, + {-1, -2, -3, -4, -5, -6, -7, -8, 1, 2, 3, 4, 
5, 6, 7, 8}, + 8, + 16, + IMM_SBYTE_ORDERED_MOST, + 0}, +}; + +#define TEST_MM_CMPESTRO_UWORD_DATA_LEN 4 +static test_mm_cmpestri_uword_data_t + test_mm_cmpestro_uword_data[TEST_MM_CMPESTRO_UWORD_DATA_LEN] = { + {{0, 0, 0, 4, 4, 4, 8, 8}, + {0, 0, 0, 3, 3, 16653, 3333, 222}, + 8, + 8, + IMM_UWORD_EACH_MOST_MASKED_NEGATIVE, + 0}, + {{12, 666, 9456, 10000, 32, 444, 57, 0}, + {11, 777, 9999, 32767, 23}, + 8, + 5, + IMM_UWORD_ANY_LEAST_MASKED_NEGATIVE, + 1}, + {{23, 32, 45, 67}, + {10022, 23, 32, 44, 66, 67, 12, 22}, + 4, + 8, + IMM_UWORD_RANGES_LEAST_NEGATIVE, + 1}, + {{222, 45, 8989}, + {221, 222, 45, 8989, 222, 45, 8989}, + 3, + 7, + IMM_UWORD_ORDERED_MOST, + 0}, +}; + +#define TEST_MM_CMPESTRO_SWORD_DATA_LEN 4 +static test_mm_cmpestri_sword_data_t + test_mm_cmpestro_sword_data[TEST_MM_CMPESTRO_SWORD_DATA_LEN] = { + {{-9999, -9487, -5000, -4433, -3000, -2999, -2000, -1087}, + {-32767, -30000, -4998}, + 100, + 3, + IMM_SWORD_RANGES_MOST_MASKED_NEGATIVE, + 1}, + {{-30, 89, 7777}, + {-30, 89, 7777}, + 3, + 3, + IMM_SWORD_EACH_MOST_MASKED_NEGATIVE, + 0}, + {{8, 9, -100, 1000, -5000, -32000, 32000, 7}, + {29999, 32001, 5, 555}, + 8, + 4, + IMM_SWORD_ANY_MOST_MASKED_NEGATIVE, + 1}, + {{-1, 56, -888, 9000, -23, 12, -1, -1}, + {-1, 56, -888, 9000, -23, 12, -1, -1}, + 8, + 8, + IMM_SWORD_ORDERED_MOST_MASKED_NEGATIVE, + 0}, +}; + +#define MM_CMPESTRO_UBYTE_TEST_CASES(_, ...) \ + _(UBYTE_ANY_MOST_NEGATIVE, __VA_ARGS__) \ + _(UBYTE_EACH_LEAST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(UBYTE_RANGES_LEAST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(UBYTE_ORDERED_LEAST_NEGATIVE, __VA_ARGS__) + +#define MM_CMPESTRO_SBYTE_TEST_CASES(_, ...) \ + _(SBYTE_EACH_MOST_NEGATIVE, __VA_ARGS__) \ + _(SBYTE_ANY_MOST, __VA_ARGS__) \ + _(SBYTE_RANGES_MOST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(SBYTE_ORDERED_MOST, __VA_ARGS__) + +#define MM_CMPESTRO_UWORD_TEST_CASES(_, ...) \ + _(UWORD_EACH_MOST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(UWORD_ANY_LEAST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(UWORD_RANGES_LEAST_NEGATIVE, __VA_ARGS__) \ + _(UWORD_ORDERED_MOST, __VA_ARGS__) + +#define MM_CMPESTRO_SWORD_TEST_CASES(_, ...) 
\ + _(SWORD_RANGES_MOST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(SWORD_EACH_MOST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(SWORD_ANY_MOST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(SWORD_ORDERED_MOST_MASKED_NEGATIVE, __VA_ARGS__) + +#define GENERATE_MM_CMPESTRO_TEST_CASES \ + ENUM_MM_CMPESTRX_TEST_CASES(UBYTE, ubyte, uint8_t, cmpestro, CMPESTRO, \ + IS_CMPESTRI) \ + ENUM_MM_CMPESTRX_TEST_CASES(SBYTE, sbyte, int8_t, cmpestro, CMPESTRO, \ + IS_CMPESTRI) \ + ENUM_MM_CMPESTRX_TEST_CASES(UWORD, uword, uint16_t, cmpestro, CMPESTRO, \ + IS_CMPESTRI) \ + ENUM_MM_CMPESTRX_TEST_CASES(SWORD, sword, int16_t, cmpestro, CMPESTRO, \ + IS_CMPESTRI) + result_t test_mm_cmpestro(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + GENERATE_MM_CMPESTRO_TEST_CASES + return TEST_SUCCESS; } +#define TEST_MM_CMPESTRS_UBYTE_DATA_LEN 2 +static test_mm_cmpestri_ubyte_data_t + test_mm_cmpestrs_ubyte_data[TEST_MM_CMPESTRS_UBYTE_DATA_LEN] = { + {{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}, + {0}, + 16, + 0, + IMM_UBYTE_ANY_MOST, + 0}, + {{1, 2, 3}, {1, 2, 3}, 3, 8, IMM_UBYTE_RANGES_MOST, 1}, +}; + +#define TEST_MM_CMPESTRS_SBYTE_DATA_LEN 2 +static test_mm_cmpestri_sbyte_data_t + test_mm_cmpestrs_sbyte_data[TEST_MM_CMPESTRS_SBYTE_DATA_LEN] = { + {{-1, -2, -3, -4, -100, 100, 1, 2, 3, 4}, + {-90, -80, 111, 67, 88}, + 10, + 5, + IMM_SBYTE_EACH_LEAST_MASKED_NEGATIVE, + 1}, + {{99, 100, 101, -99, -100, -101, 56, 7}, + {-128, -126, 100, 127}, + 23, + 4, + IMM_SBYTE_ORDERED_LEAST_MASKED_NEGATIVE, + 0}, +}; + +#define TEST_MM_CMPESTRS_UWORD_DATA_LEN 2 +static test_mm_cmpestri_uword_data_t + test_mm_cmpestrs_uword_data[TEST_MM_CMPESTRS_UWORD_DATA_LEN] = { + {{1}, + {90, 65535, 63355, 12, 8, 5, 34, 10000}, + 100, + 7, + IMM_UWORD_ANY_MOST_NEGATIVE, + 0}, + {{}, {0}, 0, 28, IMM_UWORD_RANGES_MOST_MASKED_NEGATIVE, 1}, +}; + +#define TEST_MM_CMPESTRS_SWORD_DATA_LEN 2 +static test_mm_cmpestri_sword_data_t + test_mm_cmpestrs_sword_data[TEST_MM_CMPESTRS_SWORD_DATA_LEN] = { + {{-30000, 2897, 1111, -4455}, + {30, 40, 500, 6000, 20, -10, -789, -29999}, + 4, + 8, + IMM_SWORD_ORDERED_LEAST_MASKED_NEGATIVE, + 1}, + {{34, 56, 789, 1024, 2048, 4096, 8192, -16384}, + {3, 9, -27, 81, -216, 1011}, + 9, + 6, + IMM_SWORD_EACH_LEAST_NEGATIVE, + 0}, +}; + +#define MM_CMPESTRS_UBYTE_TEST_CASES(_, ...) \ + _(UBYTE_ANY_MOST, __VA_ARGS__) \ + _(UBYTE_RANGES_MOST, __VA_ARGS__) + +#define MM_CMPESTRS_SBYTE_TEST_CASES(_, ...) \ + _(SBYTE_EACH_LEAST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(SBYTE_ORDERED_LEAST_MASKED_NEGATIVE, __VA_ARGS__) + +#define MM_CMPESTRS_UWORD_TEST_CASES(_, ...) \ + _(UWORD_ANY_MOST_NEGATIVE, __VA_ARGS__) \ + _(UWORD_RANGES_MOST_MASKED_NEGATIVE, __VA_ARGS__) + +#define MM_CMPESTRS_SWORD_TEST_CASES(_, ...) 
\ + _(SWORD_ANY_LEAST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(SWORD_EACH_LEAST_NEGATIVE, __VA_ARGS__) + +#define GENERATE_MM_CMPESTRS_TEST_CASES \ + ENUM_MM_CMPESTRX_TEST_CASES(UBYTE, ubyte, uint8_t, cmpestrs, CMPESTRS, \ + IS_CMPESTRI) \ + ENUM_MM_CMPESTRX_TEST_CASES(SBYTE, sbyte, int8_t, cmpestrs, CMPESTRS, \ + IS_CMPESTRI) \ + ENUM_MM_CMPESTRX_TEST_CASES(UWORD, uword, uint16_t, cmpestrs, CMPESTRS, \ + IS_CMPESTRI) \ + ENUM_MM_CMPESTRX_TEST_CASES(SWORD, sword, int16_t, cmpestrs, CMPESTRS, \ + IS_CMPESTRI) + result_t test_mm_cmpestrs(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + GENERATE_MM_CMPESTRS_TEST_CASES + return TEST_SUCCESS; } +#define TEST_MM_CMPESTRZ_UBYTE_DATA_LEN 2 +static test_mm_cmpestri_ubyte_data_t + test_mm_cmpestrz_ubyte_data[TEST_MM_CMPESTRZ_UBYTE_DATA_LEN] = { + {{0, 1, 2, 3, 4, 5, 6, 7}, + {12, 67, 0, 3}, + 8, + 4, + IMM_UBYTE_ANY_MOST_MASKED_NEGATIVE, + 1}, + {{255, 0, 127, 88}, + {1, 2, 4, 8, 16, 32, 64, 128, 254, 233, 209, 41, 66, 77, 90, 100}, + 4, + 16, + IMM_UBYTE_RANGES_MOST_MASKED_NEGATIVE, + 0}, +}; + +#define TEST_MM_CMPESTRZ_SBYTE_DATA_LEN 2 +static test_mm_cmpestri_sbyte_data_t + test_mm_cmpestrz_sbyte_data[TEST_MM_CMPESTRZ_SBYTE_DATA_LEN] = { + {{}, {-90, -80, 111, 67, 88}, 0, 18, IMM_SBYTE_EACH_LEAST_NEGATIVE, 0}, + {{9, 10, 10, -99, -100, -101, 56, 76}, + {-127, 127, -100, -120, 13, 108, 1, -66, -34, 89, -89, 123, 22, -19, + -8}, + 7, + 15, + IMM_SBYTE_ORDERED_LEAST_NEGATIVE, + 1}, +}; + +#define TEST_MM_CMPESTRZ_UWORD_DATA_LEN 2 +static test_mm_cmpestri_uword_data_t + test_mm_cmpestrz_uword_data[TEST_MM_CMPESTRZ_UWORD_DATA_LEN] = { + {{1}, + {9000, 33333, 63333, 120, 8, 55, 34, 100}, + 100, + 7, + IMM_UWORD_ANY_LEAST_NEGATIVE, + 1}, + {{1, 2, 3}, + {1, 10000, 65535, 8964, 9487, 32, 451, 666}, + 3, + 8, + IMM_UWORD_RANGES_MOST_NEGATIVE, + 0}, +}; + +#define TEST_MM_CMPESTRZ_SWORD_DATA_LEN 2 +static test_mm_cmpestri_sword_data_t + test_mm_cmpestrz_sword_data[TEST_MM_CMPESTRZ_SWORD_DATA_LEN] = { + {{30000, 28997, 11111, 4455}, + {30, 40, 500, 6000, 20, -10, -789, -29999}, + 4, + 8, + IMM_SWORD_ORDERED_LEAST_MASKED_NEGATIVE, + 0}, + {{789, 1024, 2048, 4096, 8192}, + {-3, 9, -27, 18, -217, 10111, 22222}, + 5, + 7, + IMM_SWORD_EACH_LEAST_MASKED_NEGATIVE, + 1}, +}; + +#define MM_CMPESTRZ_UBYTE_TEST_CASES(_, ...) \ + _(UBYTE_ANY_MOST, __VA_ARGS__) \ + _(UBYTE_RANGES_MOST, __VA_ARGS__) + +#define MM_CMPESTRZ_SBYTE_TEST_CASES(_, ...) \ + _(SBYTE_EACH_LEAST_NEGATIVE, __VA_ARGS__) \ + _(SBYTE_ORDERED_LEAST_NEGATIVE, __VA_ARGS__) + +#define MM_CMPESTRZ_UWORD_TEST_CASES(_, ...) \ + _(UWORD_ANY_LEAST_NEGATIVE, __VA_ARGS__) \ + _(UWORD_RANGES_MOST_NEGATIVE, __VA_ARGS__) + +#define MM_CMPESTRZ_SWORD_TEST_CASES(_, ...) 
\ + _(SWORD_ANY_LEAST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(SWORD_EACH_LEAST_MASKED_NEGATIVE, __VA_ARGS__) + +#define GENERATE_MM_CMPESTRZ_TEST_CASES \ + ENUM_MM_CMPESTRX_TEST_CASES(UBYTE, ubyte, uint8_t, cmpestrz, CMPESTRZ, \ + IS_CMPESTRI) \ + ENUM_MM_CMPESTRX_TEST_CASES(SBYTE, sbyte, int8_t, cmpestrz, CMPESTRZ, \ + IS_CMPESTRI) \ + ENUM_MM_CMPESTRX_TEST_CASES(UWORD, uword, uint16_t, cmpestrz, CMPESTRZ, \ + IS_CMPESTRI) \ + ENUM_MM_CMPESTRX_TEST_CASES(SWORD, sword, int16_t, cmpestrz, CMPESTRZ, \ + IS_CMPESTRI) + result_t test_mm_cmpestrz(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + GENERATE_MM_CMPESTRZ_TEST_CASES + return TEST_SUCCESS; } +#undef IS_CMPESTRI + result_t test_mm_cmpgt_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) { const int64_t *_a = (const int64_t *) impl.mTestIntPointer1; @@ -8709,46 +10652,938 @@ result_t test_mm_cmpgt_epi64(const SSE2NEONTestImpl &impl, uint32_t iter) result[0] = _a[0] > _b[0] ? -1 : 0; result[1] = _a[1] > _b[1] ? -1 : 0; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); __m128i iret = _mm_cmpgt_epi64(a, b); return validateInt64(iret, result[0], result[1]); } +#define IS_CMPISTRI 1 + +#define DEF_ENUM_MM_CMPISTRX_VARIANT(c, ...) c, + +#define EVAL_MM_CMPISTRX_TEST_CASE(c, type, data_type, im, IM) \ + do { \ + data_type *a = test_mm_##im##_##type##_data[c].a, \ + *b = test_mm_##im##_##type##_data[c].b; \ + const int imm8 = IMM_##c; \ + IIF(IM) \ + (int expect = test_mm_##im##_##type##_data[c].expect, \ + data_type *expect = test_mm_##im##_##type##_data[c].expect); \ + __m128i ma, mb; \ + memcpy(&ma, a, sizeof(ma)); \ + memcpy(&mb, b, sizeof(mb)); \ + IIF(IM) \ + (int res = _mm_##im(ma, mb, imm8), \ + __m128i res = _mm_##im(ma, mb, imm8)); \ + if (IIF(IM)(res != expect, memcmp(expect, &res, sizeof(__m128i)))) \ + return TEST_FAIL; \ + } while (0); + +#define ENUM_MM_CMPISTRX_TEST_CASES(type, type_lower, data_type, func, FUNC, \ + IM) \ + enum { MM_##FUNC##_##type##_TEST_CASES(DEF_ENUM_MM_CMPISTRX_VARIANT) }; \ + MM_##FUNC##_##type##_TEST_CASES(EVAL_MM_CMPISTRX_TEST_CASE, type_lower, \ + data_type, func, IM) + +typedef struct { + uint8_t a[16], b[16]; + const int imm8; + int expect; +} test_mm_cmpistri_ubyte_data_t; +typedef struct { + int8_t a[16], b[16]; + const int imm8; + int expect; +} test_mm_cmpistri_sbyte_data_t; +typedef struct { + uint16_t a[8], b[8]; + const int imm8; + int expect; +} test_mm_cmpistri_uword_data_t; +typedef struct { + int16_t a[8], b[8]; + const int imm8; + int expect; +} test_mm_cmpistri_sword_data_t; + +#define TEST_MM_CMPISTRA_UBYTE_DATA_LEN 4 +static test_mm_cmpistri_ubyte_data_t + test_mm_cmpistra_ubyte_data[TEST_MM_CMPISTRA_UBYTE_DATA_LEN] = { + {{10, 11, 12, 13, 14, 15, 16, 17, 18, 9, 20, 98, 97, 96, 95, 127}, + {1, 2, 3, 4, 5, 6, 7, 8, 99, 100, 101, 102, 103, 104, 105, 106}, + IMM_UBYTE_ANY_LEAST, + 1}, + {{1, 22, 33, 44, 5, 66, 7, 88, 9, 10, 111, 0}, + {2, 23, 34, 21, 6, 65, 8, 84, 99, 100, 11, 112, 123, 14, 15, 6}, + IMM_UBYTE_EACH_LEAST, + 1}, + {{5, 15, 25, 35, 45, 55, 65, 75, 0}, + {4, 6, 14, 16, 24, 26, 34, 36, 44, 46, 54, 56, 74, 76}, + IMM_UBYTE_RANGES_LEAST, + 0}, + {{4, 14, 64, 84, 0}, + {4, 14, 64, 84, 0, 4, 14, 65, 84, 0, 4, 14, 64, 84, 0, 1}, + IMM_UBYTE_ORDERED_MOST_NEGATIVE, + 0}, +}; + +#define TEST_MM_CMPISTRA_SBYTE_DATA_LEN 4 +static test_mm_cmpistri_sbyte_data_t + test_mm_cmpistra_sbyte_data[TEST_MM_CMPISTRA_SBYTE_DATA_LEN] = { + {{-11, -13, -43, -50, 66, 77, 87, 
98, -128, 127, 126, 99, 1, 2, 3, -5}, + {-12, -13, -43, -56, 66, 78, 88, 98, -125, 127, 120, 9, 100, 22, 54, + -10}, + IMM_SBYTE_EACH_LEAST, + 0}, + {{10, 11, 100, -90, 0}, + {8, 9, 10, 11, 0, 8, 9, 10, -90, 0}, + IMM_SBYTE_ANY_LEAST_NEGATIVE, + 0}, + {{-90, -60, -34, -25, 34, 56, 70, 79, 0}, + {-100, -59, -35, -24, -101, 33, 57, 69, 80, 81, -128, 100, 101, 102, + -101, -102}, + IMM_SBYTE_RANGES_LEAST, + 1}, + {{1, 1, 1, 1, -1, -1, -1, -1, -10, 10, -10, 10, 44, -44, 44, -44}, + {1, 1, -1, 1, -1, -1, -1, -1, -10, 10, -10, 10, 44, -44, 44, -44}, + IMM_SBYTE_ORDERED_LEAST, + 1}, +}; + +#define TEST_MM_CMPISTRA_UWORD_DATA_LEN 4 +static test_mm_cmpistri_uword_data_t + test_mm_cmpistra_uword_data[TEST_MM_CMPISTRA_UWORD_DATA_LEN] = { + {{88, 888, 8888, 31888, 10888, 18088, 10880, 28888}, + {888, 88, 8888, 32000, 10888, 18000, 10888, 28888}, + IMM_UWORD_EACH_LEAST_NEGATIVE, + 0}, + {{3, 4, 555, 6666, 7777, 888, 9, 100}, + {1, 2, 333, 4444, 5555, 666, 7, 8}, + IMM_UWORD_ANY_LEAST, + 1}, + {{1000, 2000, 2002, 3000, 3002, 4000, 5000, 5999}, + {999, 2001, 3001, 4001, 4002, 4999, 6000, 6001}, + IMM_UWORD_RANGES_LEAST, + 1}, + {{55, 66, 77, 888, 0}, + {55, 66, 77, 888, 0, 33, 2, 10000}, + IMM_UWORD_ORDERED_LEAST, + 0}, +}; + +#define TEST_MM_CMPISTRA_SWORD_DATA_LEN 4 +static test_mm_cmpistri_sword_data_t + test_mm_cmpistra_sword_data[TEST_MM_CMPISTRA_SWORD_DATA_LEN] = { + {{-32000, -28000, 0}, + {-32001, -29999, -28001, -28000, -27999, -26000, -32768}, + IMM_SWORD_RANGES_LEAST_MASKED_NEGATIVE, + 0}, + {{-12, -11, -10, -9, -8, -7, 90, 1000}, + {-13, -10, 9, -8, -7, 1000, 1000, 90}, + IMM_SWORD_EACH_LEAST, + 1}, + {{33, 44, 787, 23, 0}, + {32, 43, 788, 0, 32, 0, 43, 0}, + IMM_SWORD_ANY_LEAST, + 0}, + {{18, 78, 999, -56, 0}, + {18, 78, 999, 56, 18, 78, 999, 4}, + IMM_SWORD_ORDERED_LEAST, + 1}, +}; + +#define MM_CMPISTRA_UBYTE_TEST_CASES(_, ...) \ + _(UBYTE_ANY_LEAST, __VA_ARGS__) \ + _(UBYTE_EACH_LEAST, __VA_ARGS__) \ + _(UBYTE_RANGES_LEAST, __VA_ARGS__) \ + _(UBYTE_ORDERED_MOST_NEGATIVE, __VA_ARGS__) + +#define MM_CMPISTRA_SBYTE_TEST_CASES(_, ...) \ + _(SBYTE_EACH_LEAST, __VA_ARGS__) \ + _(SBYTE_ANY_LEAST_NEGATIVE, __VA_ARGS__) \ + _(SBYTE_RANGES_LEAST, __VA_ARGS__) \ + _(SBYTE_ORDERED_LEAST, __VA_ARGS__) + +#define MM_CMPISTRA_UWORD_TEST_CASES(_, ...) \ + _(UWORD_EACH_LEAST_NEGATIVE, __VA_ARGS__) \ + _(UWORD_ANY_LEAST, __VA_ARGS__) \ + _(UWORD_RANGES_LEAST, __VA_ARGS__) \ + _(UWORD_ORDERED_LEAST, __VA_ARGS__) + +#define MM_CMPISTRA_SWORD_TEST_CASES(_, ...) 
\ + _(SWORD_RANGES_LEAST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(SWORD_EACH_LEAST, __VA_ARGS__) \ + _(SWORD_ANY_LEAST, __VA_ARGS__) \ + _(SWORD_ORDERED_LEAST, __VA_ARGS__) + +#define GENERATE_MM_CMPISTRA_TEST_CASES \ + ENUM_MM_CMPISTRX_TEST_CASES(UBYTE, ubyte, uint8_t, cmpistra, CMPISTRA, \ + IS_CMPISTRI) \ + ENUM_MM_CMPISTRX_TEST_CASES(SBYTE, sbyte, int8_t, cmpistra, CMPISTRA, \ + IS_CMPISTRI) \ + ENUM_MM_CMPISTRX_TEST_CASES(UWORD, uword, uint16_t, cmpistra, CMPISTRA, \ + IS_CMPISTRI) \ + ENUM_MM_CMPISTRX_TEST_CASES(SWORD, sword, int16_t, cmpistra, CMPISTRA, \ + IS_CMPISTRI) + result_t test_mm_cmpistra(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + GENERATE_MM_CMPISTRA_TEST_CASES + return TEST_SUCCESS; } +#define TEST_MM_CMPISTRC_UBYTE_DATA_LEN 4 +static test_mm_cmpistri_ubyte_data_t + test_mm_cmpistrc_ubyte_data[TEST_MM_CMPISTRC_UBYTE_DATA_LEN] = { + {{89, 64, 88, 23, 11, 109, 34, 55, 0}, + {2, 64, 87, 32, 1, 110, 43, 66, 0}, + IMM_UBYTE_ANY_LEAST, + 1}, + {{99, 67, 2, 127, 125, 3, 24, 77, 32, 68, 96, 74, 70, 110, 111, 5}, + {98, 88, 67, 125, 111, 4, 56, 88, 33, 69, 99, 79, 123, 11, 10, 6}, + IMM_UBYTE_EACH_LEAST, + 0}, + {{2, 3, 74, 78, 81, 83, 85, 87, 89, 90, 0}, + {86, 90, 74, 85, 87, 81, 2, 3, 3, 3, 75, 76, 77, 78, 82, 85}, + IMM_UBYTE_RANGES_MOST_NEGATIVE, + 0}, + {{45, 67, 8, 9, 0}, + {67, 45, 67, 8, 9, 45, 67, 8, 9, 45, 67, 8, 9, 45, 67, 8}, + IMM_UBYTE_ORDERED_LEAST, + 1}, +}; + +#define TEST_MM_CMPISTRC_SBYTE_DATA_LEN 4 +static test_mm_cmpistri_sbyte_data_t + test_mm_cmpistrc_sbyte_data[TEST_MM_CMPISTRC_SBYTE_DATA_LEN] = { + {{35, -35, 67, -66, 34, 55, 12, -100, 34, -34, 66, -67, 52, 100, 127, + -128}, + {35, -35, 67, -66, 0, 55, 12, -100, 0, -34, 66, -67, 0, 100, 127, + -128}, + IMM_SBYTE_EACH_MOST_MASKED_NEGATIVE, + 0}, + {{-119, 112, 105, 104, 0}, + {119, -112, 105, -104, 104, -34, 112, -119, 0}, + IMM_SBYTE_ANY_LEAST, + 1}, + {{-79, -69, -40, -35, 34, 45, 67, 88, 0}, + {1, 2, 3, 4, 5, 6, 7, 8, 0}, + IMM_SBYTE_RANGES_LEAST, + 0}, + {{22, -109, 123, 115, -12, 0}, + {22, -109, 12, 115, 22, -109, 123, 115, -12, 0}, + IMM_SBYTE_ORDERED_LEAST, + 1}, +}; + +#define TEST_MM_CMPISTRC_UWORD_DATA_LEN 4 +static test_mm_cmpistri_uword_data_t + test_mm_cmpistrc_uword_data[TEST_MM_CMPISTRC_UWORD_DATA_LEN] = { + {{23, 45, 67, 89, 102, 121, 23, 45}, + {23, 45, 67, 89, 102, 121, 23, 44}, + IMM_UWORD_EACH_LEAST, + 1}, + {{1, 11, 55, 75}, {13, 14, 56, 77, 0}, IMM_UWORD_ANY_LEAST, 0}, + {{1, 9, 11, 19, 21, 29, 91, 99}, + {10, 29, 30, 40, 50, 60, 70, 80}, + IMM_UWORD_RANGES_LEAST, + 1}, + {{3, 4, 5, 0}, + {0, 3, 4, 5, 3, 4, 5, 0}, + IMM_UWORD_ORDERED_LEAST_MASKED_NEGATIVE, + 0}, +}; + +#define TEST_MM_CMPISTRC_SWORD_DATA_LEN 4 +static test_mm_cmpistri_sword_data_t + test_mm_cmpistrc_sword_data[TEST_MM_CMPISTRC_SWORD_DATA_LEN] = { + {{-78, -56, 1000, 1002}, + {-79, -55, -12, -13, 999, 1003, -80, 10000}, + IMM_SWORD_RANGES_LEAST, + 0}, + {{45, 32767, -30000, 2345, -23450, 0}, + {45, 32767, -30000, 2346, -23456, 0, 45, 333}, + IMM_SWORD_EACH_LEAST, + 1}, + {{-10000, -20000, -30000, 10000, 20000, 30000, 0}, + {10000, 20000, 30000, -10000, -20000, 20000, -30000, 12}, + IMM_SWORD_ANY_MOST_NEGATIVE, + 1}, + {{1, 2, -3, -55, -666, -7777, 8888}, + {2, -3, -55, -666, -7777, 8888, 1}, + IMM_SWORD_ORDERED_LEAST, + 0}, +}; + +#define MM_CMPISTRC_UBYTE_TEST_CASES(_, ...) \ + _(UBYTE_ANY_LEAST, __VA_ARGS__) \ + _(UBYTE_EACH_LEAST, __VA_ARGS__) \ + _(UBYTE_RANGES_MOST_NEGATIVE, __VA_ARGS__) \ + _(UBYTE_ORDERED_LEAST, __VA_ARGS__) + +#define MM_CMPISTRC_SBYTE_TEST_CASES(_, ...) 
\ + _(SBYTE_EACH_MOST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(SBYTE_ANY_LEAST, __VA_ARGS__) \ + _(SBYTE_RANGES_LEAST, __VA_ARGS__) \ + _(SBYTE_ORDERED_LEAST, __VA_ARGS__) + +#define MM_CMPISTRC_UWORD_TEST_CASES(_, ...) \ + _(UWORD_EACH_LEAST, __VA_ARGS__) \ + _(UWORD_ANY_LEAST, __VA_ARGS__) \ + _(UWORD_RANGES_LEAST, __VA_ARGS__) \ + _(UWORD_ORDERED_LEAST_MASKED_NEGATIVE, __VA_ARGS__) + +#define MM_CMPISTRC_SWORD_TEST_CASES(_, ...) \ + _(SWORD_RANGES_LEAST, __VA_ARGS__) \ + _(SWORD_EACH_LEAST, __VA_ARGS__) \ + _(SWORD_ANY_MOST_NEGATIVE, __VA_ARGS__) \ + _(SWORD_ORDERED_LEAST, __VA_ARGS__) + +#define GENERATE_MM_CMPISTRC_TEST_CASES \ + ENUM_MM_CMPISTRX_TEST_CASES(UBYTE, ubyte, uint8_t, cmpistrc, CMPISTRC, \ + IS_CMPISTRI) \ + ENUM_MM_CMPISTRX_TEST_CASES(SBYTE, sbyte, int8_t, cmpistrc, CMPISTRC, \ + IS_CMPISTRI) \ + ENUM_MM_CMPISTRX_TEST_CASES(UWORD, uword, uint16_t, cmpistrc, CMPISTRC, \ + IS_CMPISTRI) \ + ENUM_MM_CMPISTRX_TEST_CASES(SWORD, sword, int16_t, cmpistrc, CMPISTRC, \ + IS_CMPISTRI) + result_t test_mm_cmpistrc(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + GENERATE_MM_CMPISTRC_TEST_CASES + return TEST_SUCCESS; } +#define TEST_MM_CMPISTRI_UBYTE_DATA_LEN 4 +static test_mm_cmpistri_ubyte_data_t + test_mm_cmpistri_ubyte_data[TEST_MM_CMPISTRI_UBYTE_DATA_LEN] = { + {{104, 117, 110, 116, 114, 50, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, + {33, 64, 35, 36, 37, 94, 38, 42, 40, 41, 91, 93, 58, 59, 60, 62}, + IMM_UBYTE_ANY_LEAST, + 16}, + {{4, 5, 6, 7, 8, 111, 34, 21, 0, 0, 0, 0, 0, 0, 0, 0}, + {5, 6, 7, 8, 8, 111, 43, 12, 0, 0, 0, 0, 0, 0, 0, 0}, + IMM_UBYTE_EACH_MOST_MASKED_NEGATIVE, + 15}, + {{65, 90, 97, 122, 48, 57, 0}, + {47, 46, 43, 44, 42, 43, 45, 41, 40, 123, 124, 125, 126, 127, 1, 2}, + IMM_UBYTE_RANGES_LEAST, + 16}, + {{111, 222, 22, 0}, + {33, 44, 55, 66, 77, 88, 99, 111, 222, 22, 11, 0}, + IMM_UBYTE_ORDERED_LEAST, + 7}, +}; + +#define TEST_MM_CMPISTRI_SBYTE_DATA_LEN 4 +static test_mm_cmpistri_sbyte_data_t + test_mm_cmpistri_sbyte_data[TEST_MM_CMPISTRI_SBYTE_DATA_LEN] = { + {{1, 2, 3, 4, 5, -99, -128, -100, -1, 49, 0}, + {2, 3, 3, 4, 5, -100, -128, -99, 1, 44, 0}, + IMM_SBYTE_EACH_LEAST, + 2}, + {{99, 100, 23, -90, 0}, + {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 99, 100, 23, -90, -90, 100}, + IMM_SBYTE_ANY_LEAST, + 10}, + {{-10, -2, 89, 97, 0}, + {-11, -12, -3, 1, 97, 0}, + IMM_SBYTE_RANGES_LEAST_NEGATIVE, + 0}, + {{-10, -90, -22, 30, 87, 127, 0}, {0}, IMM_SBYTE_ORDERED_LEAST, 16}, +}; + +#define TEST_MM_CMPISTRI_UWORD_DATA_LEN 4 +static test_mm_cmpistri_uword_data_t + test_mm_cmpistri_uword_data[TEST_MM_CMPISTRI_UWORD_DATA_LEN] = { + {{38767, 99, 1234, 65535, 2222, 1, 34456, 11}, + {38768, 999, 1235, 4444, 2222, 1, 34456, 12}, + IMM_UWORD_EACH_LEAST, + 4}, + {{22222, 33333, 44444, 55555, 6000, 600, 60, 6}, + {0}, + IMM_UWORD_ANY_LEAST, + 8}, + {{34, 777, 1000, 1004, 0}, + {33, 32, 889, 1003, 0}, + IMM_UWORD_RANGES_LEAST, + 3}, + {{44, 555, 44, 0}, + {44, 555, 44, 555, 44, 555, 44, 0}, + IMM_UWORD_ORDERED_MOST_NEGATIVE, + 7}, +}; + +#define TEST_MM_CMPISTRI_SWORD_DATA_LEN 4 +static test_mm_cmpistri_sword_data_t + test_mm_cmpistri_sword_data[TEST_MM_CMPISTRI_SWORD_DATA_LEN] = { + {{-1, -5, 10, 30, 40, 0}, + {13, -2, 7, 80, 11, 0}, + IMM_SWORD_RANGES_LEAST, + 0}, + {{-12, 12, 6666, 777, 0}, + {11, 12, 6666, 777, 0}, + IMM_SWORD_EACH_LEAST, + 1}, + {{23, 22, 33, 567, 9999, 12345, 0}, + {23, 22, 23, 22, 23, 22, 23, 12222}, + IMM_SWORD_ANY_MOST, + 6}, + {{12, -234, -567, 8888, 0}, + {13, -234, -567, 8888, 12, -234, -567, 8889}, + IMM_SWORD_ORDERED_LEAST, + 8}, +}; + +#define 
MM_CMPISTRI_UBYTE_TEST_CASES(_, ...) \ + _(UBYTE_ANY_LEAST, __VA_ARGS__) \ + _(UBYTE_EACH_MOST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(UBYTE_RANGES_LEAST, __VA_ARGS__) \ + _(UBYTE_ORDERED_LEAST, __VA_ARGS__) + +#define MM_CMPISTRI_SBYTE_TEST_CASES(_, ...) \ + _(SBYTE_EACH_LEAST, __VA_ARGS__) \ + _(SBYTE_ANY_LEAST, __VA_ARGS__) \ + _(SBYTE_RANGES_LEAST_NEGATIVE, __VA_ARGS__) \ + _(SBYTE_ORDERED_LEAST, __VA_ARGS__) + +#define MM_CMPISTRI_UWORD_TEST_CASES(_, ...) \ + _(UWORD_EACH_LEAST, __VA_ARGS__) \ + _(UWORD_ANY_LEAST, __VA_ARGS__) \ + _(UWORD_RANGES_LEAST, __VA_ARGS__) \ + _(UWORD_ORDERED_MOST_NEGATIVE, __VA_ARGS__) + +#define MM_CMPISTRI_SWORD_TEST_CASES(_, ...) \ + _(SWORD_RANGES_LEAST, __VA_ARGS__) \ + _(SWORD_EACH_LEAST, __VA_ARGS__) \ + _(SWORD_ANY_MOST, __VA_ARGS__) \ + _(SWORD_ORDERED_LEAST, __VA_ARGS__) + +#define GENERATE_MM_CMPISTRI_TEST_CASES \ + ENUM_MM_CMPISTRX_TEST_CASES(UBYTE, ubyte, uint8_t, cmpistri, CMPISTRI, \ + IS_CMPISTRI) \ + ENUM_MM_CMPISTRX_TEST_CASES(SBYTE, sbyte, int8_t, cmpistri, CMPISTRI, \ + IS_CMPISTRI) \ + ENUM_MM_CMPISTRX_TEST_CASES(UWORD, uword, uint16_t, cmpistri, CMPISTRI, \ + IS_CMPISTRI) \ + ENUM_MM_CMPISTRX_TEST_CASES(SWORD, sword, int16_t, cmpistri, CMPISTRI, \ + IS_CMPISTRI) + result_t test_mm_cmpistri(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + GENERATE_MM_CMPISTRI_TEST_CASES + return TEST_SUCCESS; } +#define IS_CMPISTRM 0 + +typedef struct { + uint8_t a[16], b[16]; + const int imm8; + uint8_t expect[16]; +} test_mm_cmpistrm_ubyte_data_t; +typedef struct { + int8_t a[16], b[16]; + const int imm8; + int8_t expect[16]; +} test_mm_cmpistrm_sbyte_data_t; +typedef struct { + uint16_t a[8], b[8]; + const int imm8; + uint16_t expect[8]; +} test_mm_cmpistrm_uword_data_t; +typedef struct { + int16_t a[8], b[8]; + const int imm8; + int16_t expect[8]; +} test_mm_cmpistrm_sword_data_t; + +#define TEST_MM_CMPISTRM_UBYTE_DATA_LEN 4 +static test_mm_cmpistrm_ubyte_data_t + test_mm_cmpistrm_ubyte_data[TEST_MM_CMPISTRM_UBYTE_DATA_LEN] = { + {{88, 89, 90, 91, 92, 93, 0}, + {78, 88, 99, 127, 92, 93, 0}, + IMM_UBYTE_EACH_UNIT, + {0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, + 255}}, + {{30, 41, 52, 63, 74, 85, 0}, + {30, 42, 51, 63, 74, 85, 0}, + IMM_UBYTE_ANY_BIT, + {57, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}}, + {{34, 32, 21, 16, 7, 0}, + {34, 33, 32, 31, 30, 29, 10, 6, 0}, + IMM_UBYTE_RANGES_UNIT, + {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}}, + {{33, 21, 123, 89, 76, 56, 0}, + {33, 21, 124, 33, 21, 123, 89, 76, 56, 33, 21, 123, 89, 76, 56, 22}, + IMM_UBYTE_ORDERED_UNIT, + {0, 0, 0, 255, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0}}, +}; + +#define TEST_MM_CMPISTRM_SBYTE_DATA_LEN 4 +static test_mm_cmpistrm_sbyte_data_t + test_mm_cmpistrm_sbyte_data[TEST_MM_CMPISTRM_SBYTE_DATA_LEN] = { + {{-11, -90, -128, 127, 66, 45, 23, 32, 99, 10, 0}, + {-10, -90, -124, 33, 66, 45, 23, 22, 99, 100, 0}, + IMM_SBYTE_EACH_BIT_MASKED_NEGATIVE, + {-115, -2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}}, + {{13, 14, 55, 1, 32, 100, 101, 102, 103, 97, 23, 21, 45, 54, 55, 56}, + {22, 109, 87, 45, 1, 103, 22, 102, 43, 87, 78, 56, 65, 55, 44, 33}, + IMM_SBYTE_ANY_UNIT, + {0, 0, 0, -1, -1, -1, 0, -1, 0, 0, 0, -1, 0, -1, 0, 0}}, + {{-31, -28, -9, 10, 45, 67, 88, 0}, + {-30, -32, -33, -44, 93, 44, 9, 89, 0}, + IMM_SBYTE_RANGES_UNIT, + {-1, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0}}, + {{34, -10, 78, -99, -100, 100, 0}, + {34, 123, 88, 4, 34, -10, 78, -99, -100, 100, 34, -10, 78, -99, -100, + -100}, + IMM_SBYTE_ORDERED_UNIT, + {0, 0, 0, 0, -1, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}}, +}; + +#define TEST_MM_CMPISTRM_UWORD_DATA_LEN 4 +static test_mm_cmpistrm_uword_data_t + test_mm_cmpistrm_uword_data[TEST_MM_CMPISTRM_UWORD_DATA_LEN] = { + {{1024, 2048, 4096, 5000, 0}, + {1023, 1000, 2047, 1596, 5566, 5666, 4477, 9487}, + IMM_UWORD_RANGES_UNIT, + {0, 0, 65535, 65535, 0, 0, 65535, 0}}, + {{1, 2, 345, 7788, 10000, 0}, + {2, 1, 345, 7788, 10000, 0}, + IMM_UWORD_EACH_UNIT, + {0, 0, 65535, 65535, 65535, 65535, 65535, 65535}}, + {{100, 0}, + {12345, 6766, 234, 0, 1, 34, 89, 100}, + IMM_UWORD_ANY_UNIT, + {0, 0, 0, 0, 0, 0, 0, 0}}, + {{34, 122, 9000, 0}, + {34, 122, 9000, 34, 122, 9000, 34, 122}, + IMM_UWORD_ORDERED_UNIT_NEGATIVE, + {0, 65535, 65535, 0, 65535, 65535, 0, 65535}}, +}; + +#define TEST_MM_CMPISTRM_SWORD_DATA_LEN 4 +static test_mm_cmpistrm_sword_data_t + test_mm_cmpistrm_sword_data[TEST_MM_CMPISTRM_SWORD_DATA_LEN] = { + {{-39, -10, 17, 89, 998, 1000, 1234, 4566}, + {-40, -52, -39, -29, 100, 1024, 4565, 4600}, + IMM_SWORD_RANGES_BIT, + {0, 0, -1, -1, 0, 0, -1, 0}}, + {{345, -1900, -10000, -30000, 50, 6789, 0}, + {103, -1901, -10000, 32767, 50, 6780, 0}, + IMM_SWORD_EACH_UNIT, + {0, 0, -1, 0, -1, 0, -1, -1}}, + {{677, 10001, 1001, 23, 0}, + {345, 677, 10001, 1003, 1001, 32, 23, 677}, + IMM_SWORD_ANY_UNIT, + {0, -1, -1, 0, -1, 0, -1, -1}}, + {{1024, -2288, 3752, -4096, 0}, + {1024, 1024, -2288, 3752, -4096, 1024, -2288, 3752}, + IMM_SWORD_ORDERED_UNIT, + {0, -1, 0, 0, 0, -1, 0, 0}}, +}; + +#define MM_CMPISTRM_UBYTE_TEST_CASES(_, ...) \ + _(UBYTE_EACH_UNIT, __VA_ARGS__) \ + _(UBYTE_ANY_BIT, __VA_ARGS__) \ + _(UBYTE_RANGES_UNIT, __VA_ARGS__) \ + _(UBYTE_ORDERED_UNIT, __VA_ARGS__) + +#define MM_CMPISTRM_SBYTE_TEST_CASES(_, ...) \ + _(SBYTE_EACH_BIT_MASKED_NEGATIVE, __VA_ARGS__) \ + _(SBYTE_ANY_UNIT, __VA_ARGS__) \ + _(SBYTE_RANGES_UNIT, __VA_ARGS__) \ + _(SBYTE_ORDERED_UNIT, __VA_ARGS__) + +#define MM_CMPISTRM_UWORD_TEST_CASES(_, ...) \ + _(UWORD_RANGES_UNIT, __VA_ARGS__) \ + _(UWORD_EACH_UNIT, __VA_ARGS__) \ + _(UWORD_ANY_UNIT, __VA_ARGS__) \ + _(UWORD_ORDERED_UNIT_NEGATIVE, __VA_ARGS__) + +#define MM_CMPISTRM_SWORD_TEST_CASES(_, ...) 
\ + _(SWORD_RANGES_UNIT, __VA_ARGS__) \ + _(SWORD_EACH_UNIT, __VA_ARGS__) \ + _(SWORD_ANY_UNIT, __VA_ARGS__) \ + _(SWORD_ORDERED_UNIT, __VA_ARGS__) + +#define GENERATE_MM_CMPISTRM_TEST_CASES \ + ENUM_MM_CMPISTRX_TEST_CASES(UBYTE, ubyte, uint8_t, cmpistrm, CMPISTRM, \ + IS_CMPISTRM) \ + ENUM_MM_CMPISTRX_TEST_CASES(SBYTE, sbyte, int8_t, cmpistrm, CMPISTRM, \ + IS_CMPISTRM) \ + ENUM_MM_CMPISTRX_TEST_CASES(UWORD, uword, uint16_t, cmpistrm, CMPISTRM, \ + IS_CMPISTRM) \ + ENUM_MM_CMPISTRX_TEST_CASES(SWORD, sword, int16_t, cmpistrm, CMPISTRM, \ + IS_CMPISTRM) + result_t test_mm_cmpistrm(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + GENERATE_MM_CMPISTRM_TEST_CASES + return TEST_SUCCESS; } +#undef IS_CMPISTRM + +#define TEST_MM_CMPISTRO_UBYTE_DATA_LEN 4 +static test_mm_cmpistri_ubyte_data_t + test_mm_cmpistro_ubyte_data[TEST_MM_CMPISTRO_UBYTE_DATA_LEN] = { + {{3, 4, 5, 0}, {5, 5, 5, 4, 3, 0}, IMM_UBYTE_ANY_LEAST, 1}, + {{23, 127, 88, 3, 45, 6, 7, 2, 0}, + {32, 127, 87, 2, 44, 32, 1, 2, 0}, + IMM_UBYTE_EACH_MOST_NEGATIVE, + 1}, + {{3, 4, 55, 56, 0}, + {2, 3, 4, 5, 43, 54, 55, 56, 0}, + IMM_UBYTE_RANGES_LEAST, + 0}, + {{55, 66, 77, 11, 23, 0}, + {55, 55, 66, 77, 11, 23, 55, 66, 77, 11, 23, 33, 123, 18, 0}, + IMM_UBYTE_ORDERED_LEAST, + 0}, +}; + +#define TEST_MM_CMPISTRO_SBYTE_DATA_LEN 4 +static test_mm_cmpistri_sbyte_data_t + test_mm_cmpistro_sbyte_data[TEST_MM_CMPISTRO_SBYTE_DATA_LEN] = { + {{33, -33, 23, -32, -1, -1, 23, 46, 78, 34, 54, 100, 90, 91, 92, 101}, + {32, 33, 23, -33, -2, -3, 23, 46, -78, 43, 56, 10, 9, 91, 90, 126}, + IMM_SBYTE_EACH_LEAST, + 0}, + {{-1, -2, -3, -4, -5, -6, -7, -8, 87, 86, 85, 84, 83, 82, 81, 80}, + {87, 79, 0}, + IMM_SBYTE_ANY_LEAST, + 1}, + {{3, 4, 2, 0}, + {3, 3, 4, 5, 6, 2, 0}, + IMM_SBYTE_RANGES_MOST_NEGATIVE, + 0}, + {{23, 66, 1, 13, 17, 1, 13, 17, 0}, + {23, 66, 1, 13, 17, 1, 13, 17, 32, 23, 66, 1, 13, 17, 1, 13}, + IMM_SBYTE_ORDERED_LEAST, + 1}, +}; + +#define TEST_MM_CMPISTRO_UWORD_DATA_LEN 4 +static test_mm_cmpistri_uword_data_t + test_mm_cmpistro_uword_data[TEST_MM_CMPISTRO_UWORD_DATA_LEN] = { + {{3333, 4444, 10000, 20000, 40000, 50000, 65535, 0}, + {3332, 4443, 10000, 20001, 40000, 50000, 65534, 0}, + IMM_UWORD_EACH_LEAST, + 0}, + {{1, 2, 333, 4444, 55555, 7777, 23, 347}, + {4444, 7777, 55555, 23, 347, 2, 1, 0}, + IMM_UWORD_ANY_LEAST, + 1}, + {{356, 380, 320, 456, 0}, + {455, 379, 333, 319, 300, 299, 0}, + IMM_UWORD_RANGES_LEAST, + 1}, + {{3, 1001, 235, 0}, + {3, 1001, 235, 0, 3, 1001, 235, 0}, + IMM_UWORD_ORDERED_MOST_MASKED_NEGATIVE, + 0}, +}; + +#define TEST_MM_CMPISTRO_SWORD_DATA_LEN 4 +static test_mm_cmpistri_sword_data_t + test_mm_cmpistro_sword_data[TEST_MM_CMPISTRO_SWORD_DATA_LEN] = { + {{-10, -5, -100, -90, 45, 56, 1000, 1009}, + {54, -1, -5, -6, 1001, 10001, 1009, 1009}, + IMM_SWORD_RANGES_LEAST, + 1}, + {{456, -32768, 32767, 13, 0}, + {455, -32768, 32767, 31, 0}, + IMM_SWORD_EACH_LEAST, + 0}, + {{23, 46, -44, 32000, 0}, + {23, 66, -44, 678, 32000, 0}, + IMM_SWORD_ANY_MOST_MASKED_NEGATIVE, + 0}, + {{-7900, -101, -34, 666, 345, 0}, + {-7900, -101, -34, 666, 345, -7900, -191, -34}, + IMM_SWORD_ORDERED_LEAST, + 1}, +}; + +#define MM_CMPISTRO_UBYTE_TEST_CASES(_, ...) \ + _(UBYTE_ANY_LEAST, __VA_ARGS__) \ + _(UBYTE_EACH_MOST_NEGATIVE, __VA_ARGS__) \ + _(UBYTE_RANGES_LEAST, __VA_ARGS__) \ + _(UBYTE_ORDERED_LEAST, __VA_ARGS__) + +#define MM_CMPISTRO_SBYTE_TEST_CASES(_, ...) 
\ + _(SBYTE_EACH_LEAST, __VA_ARGS__) \ + _(SBYTE_ANY_LEAST, __VA_ARGS__) \ + _(SBYTE_RANGES_MOST_NEGATIVE, __VA_ARGS__) \ + _(SBYTE_ORDERED_LEAST, __VA_ARGS__) + +#define MM_CMPISTRO_UWORD_TEST_CASES(_, ...) \ + _(UWORD_EACH_LEAST, __VA_ARGS__) \ + _(UWORD_ANY_LEAST, __VA_ARGS__) \ + _(UWORD_RANGES_LEAST, __VA_ARGS__) \ + _(UWORD_ORDERED_MOST_MASKED_NEGATIVE, __VA_ARGS__) + +#define MM_CMPISTRO_SWORD_TEST_CASES(_, ...) \ + _(SWORD_RANGES_LEAST, __VA_ARGS__) \ + _(SWORD_EACH_LEAST, __VA_ARGS__) \ + _(SWORD_ANY_MOST_MASKED_NEGATIVE, __VA_ARGS__) \ + _(SWORD_ORDERED_LEAST, __VA_ARGS__) + +#define GENERATE_MM_CMPISTRO_TEST_CASES \ + ENUM_MM_CMPISTRX_TEST_CASES(UBYTE, ubyte, uint8_t, cmpistro, CMPISTRO, \ + IS_CMPISTRI) \ + ENUM_MM_CMPISTRX_TEST_CASES(SBYTE, sbyte, int8_t, cmpistro, CMPISTRO, \ + IS_CMPISTRI) \ + ENUM_MM_CMPISTRX_TEST_CASES(UWORD, uword, uint16_t, cmpistro, CMPISTRO, \ + IS_CMPISTRI) \ + ENUM_MM_CMPISTRX_TEST_CASES(SWORD, sword, int16_t, cmpistro, CMPISTRO, \ + IS_CMPISTRI) + result_t test_mm_cmpistro(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + GENERATE_MM_CMPISTRO_TEST_CASES + return TEST_SUCCESS; } +#define TEST_MM_CMPISTRS_UBYTE_DATA_LEN 4 +static test_mm_cmpistri_ubyte_data_t + test_mm_cmpistrs_ubyte_data[TEST_MM_CMPISTRS_UBYTE_DATA_LEN] = { + {{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}, + {1, 2, 3, 4, 5, 0}, + IMM_UBYTE_ANY_LEAST, + 0}, + {{127, 126, 125, 124, 0}, + {127, 1, 34, 43, 54, 0}, + IMM_UBYTE_EACH_LEAST, + 1}, + {{127, 127, 127, 127, 127, 127, 127, 127, 127, 127, 127, 127, 127, 127, + 127, 127}, + {56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 0}, + IMM_UBYTE_RANGES_LEAST, + 0}, + {{33, 44, 55, 78, 99, 100, 101, 102, 0}, + {0}, + IMM_UBYTE_ORDERED_LEAST, + 1}, +}; + +#define TEST_MM_CMPISTRS_SBYTE_DATA_LEN 4 +static test_mm_cmpistri_sbyte_data_t + test_mm_cmpistrs_sbyte_data[TEST_MM_CMPISTRS_SBYTE_DATA_LEN] = { + {{100, 99, 98, 97, -67, -4, -5, -6, -7, -1, -2, -3, -128, -128, -128, + -128}, + {0}, + IMM_SBYTE_EACH_LEAST, + 0}, + {{-128, -128, -128, -128, 127, 127, 127, 127, -128, -128, -128, -128, + 127, 127, 127, 127}, + {-1, -2, -11, -98, -12, 0}, + IMM_SBYTE_ANY_LEAST, + 0}, + {{0, 1, 2, 3, 4, 5, -6, -7}, + {0, 1, 2, 3, 4, 5, 6, 7}, + IMM_SBYTE_RANGES_LEAST, + 1}, + {{0, 1, 0, -1, 0, -2, 0, 0, -3, 4, 0, 0, 5, 6, 7, 8}, + {0}, + IMM_SBYTE_ORDERED_LEAST, + 1}, +}; + +#define TEST_MM_CMPISTRS_UWORD_DATA_LEN 4 +static test_mm_cmpistri_uword_data_t + test_mm_cmpistrs_uword_data[TEST_MM_CMPISTRS_UWORD_DATA_LEN] = { + {{0, 1, 2, 3, 65535, 0, 0, 0}, + {9, 8, 7, 6, 5, 4, 3, 2}, + IMM_UWORD_EACH_LEAST, + 1}, + {{4, 567, 65535, 32, 34, 0}, {0}, IMM_UWORD_ANY_LEAST, 1}, + {{65535, 65535, 65535, 65535, 65535, 65535, 65535, 65535}, + {1, 2, 3, 4, 900, 7890, 6767, 0}, + IMM_UWORD_RANGES_LEAST, + 0}, + {{1, 2, 3, 4, 5, 6, 7, 8}, {1, 2, 3, 4, 0}, IMM_UWORD_ORDERED_LEAST, 0}, +}; + +#define TEST_MM_CMPISTRS_SWORD_DATA_LEN 4 +static test_mm_cmpistri_sword_data_t + test_mm_cmpistrs_sword_data[TEST_MM_CMPISTRS_SWORD_DATA_LEN] = { + {{-32768, -32768, -32768, -32768, -32768, -32768, -32768, -3276}, + {34, 45, 6, 7, 9, 8, 7, 6}, + IMM_SWORD_RANGES_LEAST, + 0}, + {{1000, 2000, 4000, 8000, 16000, 32000, 32767, 0}, + {3, 4, 56, 23, 0}, + IMM_SWORD_EACH_LEAST, + 1}, + {{0, 1, 3, 4, -32768, 9, 0, 1}, + {56, 47, 43, 999, 1111, 0}, + IMM_SWORD_ANY_LEAST, + 1}, + {{1111, 1212, 831, 2345, 32767, 32767, -32768, 32767}, + {0}, + IMM_SWORD_ORDERED_LEAST, + 0}, +}; + +#define MM_CMPISTRS_UBYTE_TEST_CASES(_, ...) 
\ + _(UBYTE_ANY_LEAST, __VA_ARGS__) \ + _(UBYTE_EACH_LEAST, __VA_ARGS__) \ + _(UBYTE_RANGES_LEAST, __VA_ARGS__) \ + _(UBYTE_ORDERED_LEAST, __VA_ARGS__) + +#define MM_CMPISTRS_SBYTE_TEST_CASES(_, ...) \ + _(SBYTE_EACH_LEAST, __VA_ARGS__) \ + _(SBYTE_ANY_LEAST, __VA_ARGS__) \ + _(SBYTE_RANGES_LEAST, __VA_ARGS__) \ + _(SBYTE_ORDERED_LEAST, __VA_ARGS__) + +#define MM_CMPISTRS_UWORD_TEST_CASES(_, ...) \ + _(UWORD_EACH_LEAST, __VA_ARGS__) \ + _(UWORD_ANY_LEAST, __VA_ARGS__) \ + _(UWORD_RANGES_LEAST, __VA_ARGS__) \ + _(UWORD_ORDERED_LEAST, __VA_ARGS__) + +#define MM_CMPISTRS_SWORD_TEST_CASES(_, ...) \ + _(SWORD_RANGES_LEAST, __VA_ARGS__) \ + _(SWORD_EACH_LEAST, __VA_ARGS__) \ + _(SWORD_ANY_LEAST, __VA_ARGS__) \ + _(SWORD_ORDERED_LEAST, __VA_ARGS__) + +#define GENERATE_MM_CMPISTRS_TEST_CASES \ + ENUM_MM_CMPISTRX_TEST_CASES(UBYTE, ubyte, uint8_t, cmpistrs, CMPISTRS, \ + IS_CMPISTRI) \ + ENUM_MM_CMPISTRX_TEST_CASES(SBYTE, sbyte, int8_t, cmpistrs, CMPISTRS, \ + IS_CMPISTRI) \ + ENUM_MM_CMPISTRX_TEST_CASES(UWORD, uword, uint16_t, cmpistrs, CMPISTRS, \ + IS_CMPISTRI) \ + ENUM_MM_CMPISTRX_TEST_CASES(SWORD, sword, int16_t, cmpistrs, CMPISTRS, \ + IS_CMPISTRI) + result_t test_mm_cmpistrs(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + GENERATE_MM_CMPISTRS_TEST_CASES + return TEST_SUCCESS; } +#define TEST_MM_CMPISTRZ_UBYTE_DATA_LEN 4 +static test_mm_cmpistri_ubyte_data_t + test_mm_cmpistrz_ubyte_data[TEST_MM_CMPISTRZ_UBYTE_DATA_LEN] = { + {{0}, + {255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, + 255, 255}, + IMM_UBYTE_ANY_LEAST, + 0}, + {{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}, + {1, 1, 1, 1, 2, 2, 2, 2, 4, 5, 6, 7, 89, 89, 89, 89}, + IMM_UBYTE_EACH_LEAST, + 0}, + {{1, 2, 3, 4, 0}, {}, IMM_UBYTE_RANGES_LEAST, 1}, + {{127, 126, 125, 124, 124, 0}, + {100, 101, 123, 100, 111, 122, 0}, + IMM_UBYTE_ORDERED_LEAST, + 1}, +}; + +#define TEST_MM_CMPISTRZ_SBYTE_DATA_LEN 4 +static test_mm_cmpistri_sbyte_data_t + test_mm_cmpistrz_sbyte_data[TEST_MM_CMPISTRZ_SBYTE_DATA_LEN] = { + {{127, 126, 99, -100, 0}, + {-128, -128, -128, -128, -128, -128, -128, -128, -128, -128, -128, + -128, -128, -128, -128, -128}, + IMM_SBYTE_EACH_LEAST, + 0}, + {{120, 66, 54, 0}, {3, 4, 5, -99, -6, 0}, IMM_SBYTE_ANY_LEAST, 1}, + {{0}, + {127, 127, 127, 127, 126, 126, 126, 126, -127, -127, -127, -127, -1, + -1, -1, -1}, + IMM_SBYTE_RANGES_LEAST, + 0}, + {{12, 3, 4, 5, 6, 7, 8, 0}, + {-1, -2, -3, -4, -6, 75, 0}, + IMM_SBYTE_ORDERED_LEAST, + 1}, +}; + +#define TEST_MM_CMPISTRZ_UWORD_DATA_LEN 4 +static test_mm_cmpistri_uword_data_t + test_mm_cmpistrz_uword_data[TEST_MM_CMPISTRZ_UWORD_DATA_LEN] = { + {{10000, 20000, 50000, 40000, 0}, + {65535, 65533, 60000, 60000, 50000, 123, 1, 2}, + IMM_UWORD_EACH_LEAST, + 0}, + {{0}, + {65528, 65529, 65530, 65531, 65532, 65533, 65534, 65535}, + IMM_UWORD_ANY_LEAST, + 0}, + {{3, 333, 3333, 33333, 0}, {0}, IMM_UWORD_RANGES_LEAST, 1}, + {{123, 456, 7, 890, 0}, + {123, 456, 7, 900, 0}, + IMM_UWORD_ORDERED_LEAST, + 1}, +}; + +#define TEST_MM_CMPISTRZ_SWORD_DATA_LEN 4 +static test_mm_cmpistri_sword_data_t + test_mm_cmpistrz_sword_data[TEST_MM_CMPISTRZ_SWORD_DATA_LEN] = { + {{2, 22, 222, 2222, 22222, -2222, -222, -22}, + {-32768, 32767, -32767, 32766, -32766, 32765, -32768, 32767}, + IMM_SWORD_RANGES_LEAST, + 0}, + {{345, 10000, -10000, -30000, 0}, + {1, 2, 3, 4, 5, 6, 7, 0}, + IMM_SWORD_EACH_LEAST, + 1}, + {{}, {0}, IMM_SWORD_ANY_LEAST, 1}, + {{1, 2, -789, -1, -90, 0}, + {1, 10, 100, 1000, 10000, -10000, -1000, 1000}, + IMM_SWORD_ORDERED_LEAST, + 0}, +}; + 
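+/*
+ * Layout note (editorial comment, not part of the upstream change): the
+ * test_mm_cmpistrz_* tables above reuse the cmpistri data structs, so each
+ * entry holds the two source fragments (a, b), the imm8 control mode, and
+ * the expected return value.  _mm_cmpistrz reflects the ZFlag, i.e. whether
+ * operand b contains a terminating null inside the loaded 128-bit fragment,
+ * so the expected field is 1 exactly for the cases whose b array ends early;
+ * the companion test_mm_cmpistrs_* tables make the same check on operand a
+ * (the SFlag).  The macro lists below simply select which imm8 modes each
+ * data table is exercised with.
+ */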
+#define MM_CMPISTRZ_UBYTE_TEST_CASES(_, ...) \ + _(UBYTE_ANY_LEAST, __VA_ARGS__) \ + _(UBYTE_EACH_LEAST, __VA_ARGS__) \ + _(UBYTE_RANGES_LEAST, __VA_ARGS__) \ + _(UBYTE_ORDERED_LEAST, __VA_ARGS__) + +#define MM_CMPISTRZ_SBYTE_TEST_CASES(_, ...) \ + _(SBYTE_EACH_LEAST, __VA_ARGS__) \ + _(SBYTE_ANY_LEAST, __VA_ARGS__) \ + _(SBYTE_RANGES_LEAST, __VA_ARGS__) \ + _(SBYTE_ORDERED_LEAST, __VA_ARGS__) + +#define MM_CMPISTRZ_UWORD_TEST_CASES(_, ...) \ + _(UWORD_EACH_LEAST, __VA_ARGS__) \ + _(UWORD_ANY_LEAST, __VA_ARGS__) \ + _(UWORD_RANGES_LEAST, __VA_ARGS__) \ + _(UWORD_ORDERED_LEAST, __VA_ARGS__) + +#define MM_CMPISTRZ_SWORD_TEST_CASES(_, ...) \ + _(SWORD_RANGES_LEAST, __VA_ARGS__) \ + _(SWORD_EACH_LEAST, __VA_ARGS__) \ + _(SWORD_ANY_LEAST, __VA_ARGS__) \ + _(SWORD_ORDERED_LEAST, __VA_ARGS__) + +#define GENERATE_MM_CMPISTRZ_TEST_CASES \ + ENUM_MM_CMPISTRX_TEST_CASES(UBYTE, ubyte, uint8_t, cmpistrz, CMPISTRZ, \ + IS_CMPISTRI) \ + ENUM_MM_CMPISTRX_TEST_CASES(SBYTE, sbyte, int8_t, cmpistrz, CMPISTRZ, \ + IS_CMPISTRI) \ + ENUM_MM_CMPISTRX_TEST_CASES(UWORD, uword, uint16_t, cmpistrz, CMPISTRZ, \ + IS_CMPISTRI) \ + ENUM_MM_CMPISTRX_TEST_CASES(SWORD, sword, int16_t, cmpistrz, CMPISTRZ, \ + IS_CMPISTRI) + result_t test_mm_cmpistrz(const SSE2NEONTestImpl &impl, uint32_t iter) { - return TEST_UNIMPL; + GENERATE_MM_CMPISTRZ_TEST_CASES + return TEST_SUCCESS; } result_t test_mm_crc32_u16(const SSE2NEONTestImpl &impl, uint32_t iter) @@ -8801,6 +11636,19 @@ result_t test_mm_aesenc_si128(const SSE2NEONTestImpl &impl, uint32_t iter) return validate128(resultReference, resultIntrinsic); } +result_t test_mm_aesdec_si128(const SSE2NEONTestImpl &impl, uint32_t iter) +{ + const int32_t *a = (int32_t *) impl.mTestIntPointer1; + const int32_t *b = (int32_t *) impl.mTestIntPointer2; + __m128i data = _mm_loadu_si128((const __m128i *) a); + __m128i rk = _mm_loadu_si128((const __m128i *) b); + + __m128i resultReference = aesdec_128_reference(data, rk); + __m128i resultIntrinsic = _mm_aesdec_si128(data, rk); + + return validate128(resultReference, resultIntrinsic); +} + result_t test_mm_aesenclast_si128(const SSE2NEONTestImpl &impl, uint32_t iter) { const int32_t *a = (const int32_t *) impl.mTestIntPointer1; @@ -8814,23 +11662,93 @@ result_t test_mm_aesenclast_si128(const SSE2NEONTestImpl &impl, uint32_t iter) return validate128(resultReference, resultIntrinsic); } +result_t test_mm_aesdeclast_si128(const SSE2NEONTestImpl &impl, uint32_t iter) +{ + const uint8_t *a = (uint8_t *) impl.mTestIntPointer1; + const uint8_t *rk = (uint8_t *) impl.mTestIntPointer2; + __m128i _a = _mm_loadu_si128((const __m128i *) a); + __m128i _rk = _mm_loadu_si128((const __m128i *) rk); + uint8_t c[16] = {}; + + uint8_t v[4][4]; + for (int i = 0; i < 16; ++i) { + v[((i / 4) + (i % 4)) % 4][i % 4] = crypto_aes_rsbox[a[i]]; + } + for (int i = 0; i < 16; ++i) { + c[i] = v[i / 4][i % 4] ^ rk[i]; + } + + __m128i result_reference = _mm_loadu_si128((const __m128i *) c); + __m128i result_intrinsic = _mm_aesdeclast_si128(_a, _rk); + + return validate128(result_reference, result_intrinsic); +} + +result_t test_mm_aesimc_si128(const SSE2NEONTestImpl &impl, uint32_t iter) +{ + const uint8_t *a = (uint8_t *) impl.mTestIntPointer1; + __m128i _a = _mm_loadu_si128((const __m128i *) a); + + uint8_t e, f, g, h, v[4][4]; + for (int i = 0; i < 16; ++i) { + ((uint8_t *) v)[i] = a[i]; + } + for (int i = 0; i < 4; ++i) { + e = v[i][0]; + f = v[i][1]; + g = v[i][2]; + h = v[i][3]; + + v[i][0] = MULTIPLY(e, 0x0e) ^ MULTIPLY(f, 0x0b) ^ MULTIPLY(g, 0x0d) ^ + MULTIPLY(h, 
0x09); + v[i][1] = MULTIPLY(e, 0x09) ^ MULTIPLY(f, 0x0e) ^ MULTIPLY(g, 0x0b) ^ + MULTIPLY(h, 0x0d); + v[i][2] = MULTIPLY(e, 0x0d) ^ MULTIPLY(f, 0x09) ^ MULTIPLY(g, 0x0e) ^ + MULTIPLY(h, 0x0b); + v[i][3] = MULTIPLY(e, 0x0b) ^ MULTIPLY(f, 0x0d) ^ MULTIPLY(g, 0x09) ^ + MULTIPLY(h, 0x0e); + } + + __m128i result_reference = _mm_loadu_si128((const __m128i *) v); + __m128i result_intrinsic = _mm_aesimc_si128(_a); + + return validate128(result_reference, result_intrinsic); +} + +static inline uint32_t sub_word(uint32_t in) +{ + return (crypto_aes_sbox[(in >> 24) & 0xff] << 24) | + (crypto_aes_sbox[(in >> 16) & 0xff] << 16) | + (crypto_aes_sbox[(in >> 8) & 0xff] << 8) | + (crypto_aes_sbox[in & 0xff]); +} + // FIXME: improve the test case for AES-256 key expansion. // Reference: // https://github.com/randombit/botan/blob/master/src/lib/block/aes/aes_ni/aes_ni.cpp result_t test_mm_aeskeygenassist_si128(const SSE2NEONTestImpl &impl, uint32_t iter) { - const int32_t *a = (int32_t *) impl.mTestIntPointer1; - const int32_t *b = (int32_t *) impl.mTestIntPointer2; - __m128i data = _mm_loadu_si128((const __m128i *) a); - - (void) b; // parameter b is unused because we can only pass an 8-bit - // immediate to _mm_aeskeygenassist_si128. - const int8_t rcon = 0x40; /* an arbitrary 8-bit immediate */ - __m128i resultReference = aeskeygenassist_128_reference(data, rcon); - __m128i resultIntrinsic = _mm_aeskeygenassist_si128(data, rcon); - - return validate128(resultReference, resultIntrinsic); + const uint32_t *a = (uint32_t *) impl.mTestIntPointer1; + __m128i data = load_m128i(a); + uint32_t sub_x1 = sub_word(a[1]); + uint32_t sub_x3 = sub_word(a[3]); + __m128i result_reference; + __m128i result_intrinsic; +#define TEST_IMPL(IDX) \ + uint32_t res##IDX[4] = { \ + sub_x1, \ + rotr(sub_x1, 8) ^ IDX, \ + sub_x3, \ + rotr(sub_x3, 8) ^ IDX, \ + }; \ + result_reference = load_m128i(res##IDX); \ + result_intrinsic = _mm_aeskeygenassist_si128(data, IDX); \ + CHECK_RESULT(validate128(result_reference, result_intrinsic)); + + IMM_256_ITER +#undef TEST_IMPL + return TEST_SUCCESS; } /* Others */ @@ -8838,8 +11756,8 @@ result_t test_mm_clmulepi64_si128(const SSE2NEONTestImpl &impl, uint32_t iter) { const uint64_t *_a = (const uint64_t *) impl.mTestIntPointer1; const uint64_t *_b = (const uint64_t *) impl.mTestIntPointer2; - __m128i a = do_mm_load_ps((const int32_t *) _a); - __m128i b = do_mm_load_ps((const int32_t *) _b); + __m128i a = load_m128i(_a); + __m128i b = load_m128i(_b); auto result = clmul_64(_a[0], _b[0]); if (!validateUInt64(_mm_clmulepi64_si128(a, b, 0x00), result.first, result.second)) @@ -8859,27 +11777,100 @@ result_t test_mm_clmulepi64_si128(const SSE2NEONTestImpl &impl, uint32_t iter) return TEST_SUCCESS; } +result_t test_mm_get_denormals_zero_mode(const SSE2NEONTestImpl &impl, + uint32_t iter) +{ + int res_denormals_zero_on, res_denormals_zero_off; + + _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON); + res_denormals_zero_on = + _MM_GET_DENORMALS_ZERO_MODE() == _MM_DENORMALS_ZERO_ON; + + _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_OFF); + res_denormals_zero_off = + _MM_GET_DENORMALS_ZERO_MODE() == _MM_DENORMALS_ZERO_OFF; + + return (res_denormals_zero_on && res_denormals_zero_off) ? 
TEST_SUCCESS + : TEST_FAIL; +} + +static int popcnt_reference(uint64_t a) +{ + int count = 0; + while (a != 0) { + count += a & 1; + a >>= 1; + } + return count; +} + result_t test_mm_popcnt_u32(const SSE2NEONTestImpl &impl, uint32_t iter) { const uint64_t *a = (const uint64_t *) impl.mTestIntPointer1; - ASSERT_RETURN(__builtin_popcount(a[0]) == _mm_popcnt_u32(a[0])); + ASSERT_RETURN(popcnt_reference((uint32_t) a[0]) == + _mm_popcnt_u32((unsigned int) a[0])); return TEST_SUCCESS; } result_t test_mm_popcnt_u64(const SSE2NEONTestImpl &impl, uint32_t iter) { const uint64_t *a = (const uint64_t *) impl.mTestIntPointer1; - ASSERT_RETURN(__builtin_popcountll(a[0]) == _mm_popcnt_u64(a[0])); + ASSERT_RETURN(popcnt_reference(a[0]) == _mm_popcnt_u64(a[0])); return TEST_SUCCESS; } +result_t test_mm_set_denormals_zero_mode(const SSE2NEONTestImpl &impl, + uint32_t iter) +{ + result_t res_set_denormals_zero_on, res_set_denormals_zero_off; + float factor = 2; + float denormal = FLT_MIN / factor; + float denormals[4] = {denormal, denormal, denormal, denormal}; + float factors[4] = {factor, factor, factor, factor}; + __m128 ret = _mm_setzero_ps(); + + _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON); + ret = _mm_mul_ps(load_m128(denormals), load_m128(factors)); + res_set_denormals_zero_on = validateFloat(ret, 0, 0, 0, 0); + + _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_OFF); + ret = _mm_mul_ps(load_m128(denormals), load_m128(factors)); +#if defined(__arm__) + // AArch32 Advanced SIMD arithmetic always uses the Flush-to-zero setting, + // regardless of the value of the FZ bit. + res_set_denormals_zero_off = validateFloat(ret, 0, 0, 0, 0); +#else + res_set_denormals_zero_off = + validateFloat(ret, FLT_MIN, FLT_MIN, FLT_MIN, FLT_MIN); +#endif + + if (res_set_denormals_zero_on == TEST_FAIL || + res_set_denormals_zero_off == TEST_FAIL) + return TEST_FAIL; + return TEST_SUCCESS; +} + +result_t test_rdtsc(const SSE2NEONTestImpl &impl, uint32_t iter) +{ + uint64_t start = _rdtsc(); + for (int i = 0; i < 100000; i++) { +#if defined(_MSC_VER) + _ReadWriteBarrier(); +#else + __asm__ __volatile__("" ::: "memory"); +#endif + } + uint64_t end = _rdtsc(); + return end > start ? 
TEST_SUCCESS : TEST_FAIL; +} + SSE2NEONTestImpl::SSE2NEONTestImpl(void) { mTestFloatPointer1 = (float *) platformAlignedAlloc(sizeof(__m128)); mTestFloatPointer2 = (float *) platformAlignedAlloc(sizeof(__m128)); mTestIntPointer1 = (int32_t *) platformAlignedAlloc(sizeof(__m128i)); mTestIntPointer2 = (int32_t *) platformAlignedAlloc(sizeof(__m128i)); - srand(0); + SSE2NEON_INIT_RNG(123456); for (uint32_t i = 0; i < MAX_TEST_VALUE; i++) { mTestFloats[i] = ranf(-100000, 100000); mTestInts[i] = (int32_t) ranf(-100000, 100000); @@ -8924,7 +11915,12 @@ result_t SSE2NEONTestImpl::runSingleTest(InstructionTest test, uint32_t i) result_t ret = TEST_SUCCESS; switch (test) { - INTRIN_FOREACH(CASE) +#define _(x) \ + case it_##x: \ + ret = test_##x(*this, i); \ + break; + INTRIN_LIST +#undef _ } return ret; diff --git a/external/sse2neon/tests/impl.h b/external/sse2neon/tests/impl.h index f52dc06c..5cc5a6f8 100644 --- a/external/sse2neon/tests/impl.h +++ b/external/sse2neon/tests/impl.h @@ -1,536 +1,542 @@ #ifndef SSE2NEONTEST_H #define SSE2NEONTEST_H + #include "common.h" -#define ENUM(c) it_##c, -#define STR(c) #c, -#define CASE(c) \ - case it_##c: \ - ret = test_##c(*this, i); \ - break; -#define INTRIN_FOREACH(TYPE) \ - /* SSE */ \ - TYPE(mm_add_ps) \ - TYPE(mm_add_ss) \ - TYPE(mm_and_ps) \ - TYPE(mm_andnot_ps) \ - TYPE(mm_avg_pu16) \ - TYPE(mm_avg_pu8) \ - TYPE(mm_cmpeq_ps) \ - TYPE(mm_cmpeq_ss) \ - TYPE(mm_cmpge_ps) \ - TYPE(mm_cmpge_ss) \ - TYPE(mm_cmpgt_ps) \ - TYPE(mm_cmpgt_ss) \ - TYPE(mm_cmple_ps) \ - TYPE(mm_cmple_ss) \ - TYPE(mm_cmplt_ps) \ - TYPE(mm_cmplt_ss) \ - TYPE(mm_cmpneq_ps) \ - TYPE(mm_cmpneq_ss) \ - TYPE(mm_cmpnge_ps) \ - TYPE(mm_cmpnge_ss) \ - TYPE(mm_cmpngt_ps) \ - TYPE(mm_cmpngt_ss) \ - TYPE(mm_cmpnle_ps) \ - TYPE(mm_cmpnle_ss) \ - TYPE(mm_cmpnlt_ps) \ - TYPE(mm_cmpnlt_ss) \ - TYPE(mm_cmpord_ps) \ - TYPE(mm_cmpord_ss) \ - TYPE(mm_cmpunord_ps) \ - TYPE(mm_cmpunord_ss) \ - TYPE(mm_comieq_ss) \ - TYPE(mm_comige_ss) \ - TYPE(mm_comigt_ss) \ - TYPE(mm_comile_ss) \ - TYPE(mm_comilt_ss) \ - TYPE(mm_comineq_ss) \ - TYPE(mm_cvt_pi2ps) \ - TYPE(mm_cvt_ps2pi) \ - TYPE(mm_cvt_si2ss) \ - TYPE(mm_cvt_ss2si) \ - TYPE(mm_cvtpi16_ps) \ - TYPE(mm_cvtpi32_ps) \ - TYPE(mm_cvtpi32x2_ps) \ - TYPE(mm_cvtpi8_ps) \ - TYPE(mm_cvtps_pi16) \ - TYPE(mm_cvtps_pi32) \ - TYPE(mm_cvtps_pi8) \ - TYPE(mm_cvtpu16_ps) \ - TYPE(mm_cvtpu8_ps) \ - TYPE(mm_cvtsi32_ss) \ - TYPE(mm_cvtsi64_ss) \ - TYPE(mm_cvtss_f32) \ - TYPE(mm_cvtss_si32) \ - TYPE(mm_cvtss_si64) \ - TYPE(mm_cvtt_ps2pi) \ - TYPE(mm_cvtt_ss2si) \ - TYPE(mm_cvttps_pi32) \ - TYPE(mm_cvttss_si32) \ - TYPE(mm_cvttss_si64) \ - TYPE(mm_div_ps) \ - TYPE(mm_div_ss) \ - TYPE(mm_extract_pi16) \ - TYPE(mm_free) \ - TYPE(mm_get_rounding_mode) \ - TYPE(mm_getcsr) \ - TYPE(mm_insert_pi16) \ - TYPE(mm_load_ps) \ - TYPE(mm_load_ps1) \ - TYPE(mm_load_ss) \ - TYPE(mm_load1_ps) \ - TYPE(mm_loadh_pi) \ - TYPE(mm_loadl_pi) \ - TYPE(mm_loadr_ps) \ - TYPE(mm_loadu_ps) \ - TYPE(mm_loadu_si16) \ - TYPE(mm_loadu_si64) \ - TYPE(mm_malloc) \ - TYPE(mm_maskmove_si64) \ - TYPE(m_maskmovq) \ - TYPE(mm_max_pi16) \ - TYPE(mm_max_ps) \ - TYPE(mm_max_pu8) \ - TYPE(mm_max_ss) \ - TYPE(mm_min_pi16) \ - TYPE(mm_min_ps) \ - TYPE(mm_min_pu8) \ - TYPE(mm_min_ss) \ - TYPE(mm_move_ss) \ - TYPE(mm_movehl_ps) \ - TYPE(mm_movelh_ps) \ - TYPE(mm_movemask_pi8) \ - TYPE(mm_movemask_ps) \ - TYPE(mm_mul_ps) \ - TYPE(mm_mul_ss) \ - TYPE(mm_mulhi_pu16) \ - TYPE(mm_or_ps) \ - TYPE(m_pavgb) \ - TYPE(m_pavgw) \ - TYPE(m_pextrw) \ - TYPE(m_pinsrw) \ - TYPE(m_pmaxsw) \ - TYPE(m_pmaxub) \ - TYPE(m_pminsw) \ - 
TYPE(m_pminub) \ - TYPE(m_pmovmskb) \ - TYPE(m_pmulhuw) \ - TYPE(mm_prefetch) \ - TYPE(m_psadbw) \ - TYPE(m_pshufw) \ - TYPE(mm_rcp_ps) \ - TYPE(mm_rcp_ss) \ - TYPE(mm_rsqrt_ps) \ - TYPE(mm_rsqrt_ss) \ - TYPE(mm_sad_pu8) \ - TYPE(mm_set_ps) \ - TYPE(mm_set_ps1) \ - TYPE(mm_set_rounding_mode) \ - TYPE(mm_set_ss) \ - TYPE(mm_set1_ps) \ - TYPE(mm_setcsr) \ - TYPE(mm_setr_ps) \ - TYPE(mm_setzero_ps) \ - TYPE(mm_sfence) \ - TYPE(mm_shuffle_pi16) \ - TYPE(mm_shuffle_ps) \ - TYPE(mm_sqrt_ps) \ - TYPE(mm_sqrt_ss) \ - TYPE(mm_store_ps) \ - TYPE(mm_store_ps1) \ - TYPE(mm_store_ss) \ - TYPE(mm_store1_ps) \ - TYPE(mm_storeh_pi) \ - TYPE(mm_storel_pi) \ - TYPE(mm_storer_ps) \ - TYPE(mm_storeu_ps) \ - TYPE(mm_storeu_si16) \ - TYPE(mm_storeu_si64) \ - TYPE(mm_stream_pi) \ - TYPE(mm_stream_ps) \ - TYPE(mm_sub_ps) \ - TYPE(mm_sub_ss) \ - TYPE(mm_ucomieq_ss) \ - TYPE(mm_ucomige_ss) \ - TYPE(mm_ucomigt_ss) \ - TYPE(mm_ucomile_ss) \ - TYPE(mm_ucomilt_ss) \ - TYPE(mm_ucomineq_ss) \ - TYPE(mm_undefined_ps) \ - TYPE(mm_unpackhi_ps) \ - TYPE(mm_unpacklo_ps) \ - TYPE(mm_xor_ps) \ - /* SSE2 */ \ - TYPE(mm_add_epi16) \ - TYPE(mm_add_epi32) \ - TYPE(mm_add_epi64) \ - TYPE(mm_add_epi8) \ - TYPE(mm_add_pd) \ - TYPE(mm_add_sd) \ - TYPE(mm_add_si64) \ - TYPE(mm_adds_epi16) \ - TYPE(mm_adds_epi8) \ - TYPE(mm_adds_epu16) \ - TYPE(mm_adds_epu8) \ - TYPE(mm_and_pd) \ - TYPE(mm_and_si128) \ - TYPE(mm_andnot_pd) \ - TYPE(mm_andnot_si128) \ - TYPE(mm_avg_epu16) \ - TYPE(mm_avg_epu8) \ - TYPE(mm_bslli_si128) \ - TYPE(mm_bsrli_si128) \ - TYPE(mm_castpd_ps) \ - TYPE(mm_castpd_si128) \ - TYPE(mm_castps_pd) \ - TYPE(mm_castps_si128) \ - TYPE(mm_castsi128_pd) \ - TYPE(mm_castsi128_ps) \ - TYPE(mm_clflush) \ - TYPE(mm_cmpeq_epi16) \ - TYPE(mm_cmpeq_epi32) \ - TYPE(mm_cmpeq_epi8) \ - TYPE(mm_cmpeq_pd) \ - TYPE(mm_cmpeq_sd) \ - TYPE(mm_cmpge_pd) \ - TYPE(mm_cmpge_sd) \ - TYPE(mm_cmpgt_epi16) \ - TYPE(mm_cmpgt_epi32) \ - TYPE(mm_cmpgt_epi8) \ - TYPE(mm_cmpgt_pd) \ - TYPE(mm_cmpgt_sd) \ - TYPE(mm_cmple_pd) \ - TYPE(mm_cmple_sd) \ - TYPE(mm_cmplt_epi16) \ - TYPE(mm_cmplt_epi32) \ - TYPE(mm_cmplt_epi8) \ - TYPE(mm_cmplt_pd) \ - TYPE(mm_cmplt_sd) \ - TYPE(mm_cmpneq_pd) \ - TYPE(mm_cmpneq_sd) \ - TYPE(mm_cmpnge_pd) \ - TYPE(mm_cmpnge_sd) \ - TYPE(mm_cmpngt_pd) \ - TYPE(mm_cmpngt_sd) \ - TYPE(mm_cmpnle_pd) \ - TYPE(mm_cmpnle_sd) \ - TYPE(mm_cmpnlt_pd) \ - TYPE(mm_cmpnlt_sd) \ - TYPE(mm_cmpord_pd) \ - TYPE(mm_cmpord_sd) \ - TYPE(mm_cmpunord_pd) \ - TYPE(mm_cmpunord_sd) \ - TYPE(mm_comieq_sd) \ - TYPE(mm_comige_sd) \ - TYPE(mm_comigt_sd) \ - TYPE(mm_comile_sd) \ - TYPE(mm_comilt_sd) \ - TYPE(mm_comineq_sd) \ - TYPE(mm_cvtepi32_pd) \ - TYPE(mm_cvtepi32_ps) \ - TYPE(mm_cvtpd_epi32) \ - TYPE(mm_cvtpd_pi32) \ - TYPE(mm_cvtpd_ps) \ - TYPE(mm_cvtpi32_pd) \ - TYPE(mm_cvtps_epi32) \ - TYPE(mm_cvtps_pd) \ - TYPE(mm_cvtsd_f64) \ - TYPE(mm_cvtsd_si32) \ - TYPE(mm_cvtsd_si64) \ - TYPE(mm_cvtsd_si64x) \ - TYPE(mm_cvtsd_ss) \ - TYPE(mm_cvtsi128_si32) \ - TYPE(mm_cvtsi128_si64) \ - TYPE(mm_cvtsi128_si64x) \ - TYPE(mm_cvtsi32_sd) \ - TYPE(mm_cvtsi32_si128) \ - TYPE(mm_cvtsi64_sd) \ - TYPE(mm_cvtsi64_si128) \ - TYPE(mm_cvtsi64x_sd) \ - TYPE(mm_cvtsi64x_si128) \ - TYPE(mm_cvtss_sd) \ - TYPE(mm_cvttpd_epi32) \ - TYPE(mm_cvttpd_pi32) \ - TYPE(mm_cvttps_epi32) \ - TYPE(mm_cvttsd_si32) \ - TYPE(mm_cvttsd_si64) \ - TYPE(mm_cvttsd_si64x) \ - TYPE(mm_div_pd) \ - TYPE(mm_div_sd) \ - TYPE(mm_extract_epi16) \ - TYPE(mm_insert_epi16) \ - TYPE(mm_lfence) \ - TYPE(mm_load_pd) \ - TYPE(mm_load_pd1) \ - TYPE(mm_load_sd) \ - TYPE(mm_load_si128) \ - TYPE(mm_load1_pd) \ - 
TYPE(mm_loadh_pd) \ - TYPE(mm_loadl_epi64) \ - TYPE(mm_loadl_pd) \ - TYPE(mm_loadr_pd) \ - TYPE(mm_loadu_pd) \ - TYPE(mm_loadu_si128) \ - TYPE(mm_loadu_si32) \ - TYPE(mm_madd_epi16) \ - TYPE(mm_maskmoveu_si128) \ - TYPE(mm_max_epi16) \ - TYPE(mm_max_epu8) \ - TYPE(mm_max_pd) \ - TYPE(mm_max_sd) \ - TYPE(mm_mfence) \ - TYPE(mm_min_epi16) \ - TYPE(mm_min_epu8) \ - TYPE(mm_min_pd) \ - TYPE(mm_min_sd) \ - TYPE(mm_move_epi64) \ - TYPE(mm_move_sd) \ - TYPE(mm_movemask_epi8) \ - TYPE(mm_movemask_pd) \ - TYPE(mm_movepi64_pi64) \ - TYPE(mm_movpi64_epi64) \ - TYPE(mm_mul_epu32) \ - TYPE(mm_mul_pd) \ - TYPE(mm_mul_sd) \ - TYPE(mm_mul_su32) \ - TYPE(mm_mulhi_epi16) \ - TYPE(mm_mulhi_epu16) \ - TYPE(mm_mullo_epi16) \ - TYPE(mm_or_pd) \ - TYPE(mm_or_si128) \ - TYPE(mm_packs_epi16) \ - TYPE(mm_packs_epi32) \ - TYPE(mm_packus_epi16) \ - TYPE(mm_pause) \ - TYPE(mm_sad_epu8) \ - TYPE(mm_set_epi16) \ - TYPE(mm_set_epi32) \ - TYPE(mm_set_epi64) \ - TYPE(mm_set_epi64x) \ - TYPE(mm_set_epi8) \ - TYPE(mm_set_pd) \ - TYPE(mm_set_pd1) \ - TYPE(mm_set_sd) \ - TYPE(mm_set1_epi16) \ - TYPE(mm_set1_epi32) \ - TYPE(mm_set1_epi64) \ - TYPE(mm_set1_epi64x) \ - TYPE(mm_set1_epi8) \ - TYPE(mm_set1_pd) \ - TYPE(mm_setr_epi16) \ - TYPE(mm_setr_epi32) \ - TYPE(mm_setr_epi64) \ - TYPE(mm_setr_epi8) \ - TYPE(mm_setr_pd) \ - TYPE(mm_setzero_pd) \ - TYPE(mm_setzero_si128) \ - TYPE(mm_shuffle_epi32) \ - TYPE(mm_shuffle_pd) \ - TYPE(mm_shufflehi_epi16) \ - TYPE(mm_shufflelo_epi16) \ - TYPE(mm_sll_epi16) \ - TYPE(mm_sll_epi32) \ - TYPE(mm_sll_epi64) \ - TYPE(mm_slli_epi16) \ - TYPE(mm_slli_epi32) \ - TYPE(mm_slli_epi64) \ - TYPE(mm_slli_si128) \ - TYPE(mm_sqrt_pd) \ - TYPE(mm_sqrt_sd) \ - TYPE(mm_sra_epi16) \ - TYPE(mm_sra_epi32) \ - TYPE(mm_srai_epi16) \ - TYPE(mm_srai_epi32) \ - TYPE(mm_srl_epi16) \ - TYPE(mm_srl_epi32) \ - TYPE(mm_srl_epi64) \ - TYPE(mm_srli_epi16) \ - TYPE(mm_srli_epi32) \ - TYPE(mm_srli_epi64) \ - TYPE(mm_srli_si128) \ - TYPE(mm_store_pd) \ - TYPE(mm_store_pd1) \ - TYPE(mm_store_sd) \ - TYPE(mm_store_si128) \ - TYPE(mm_store1_pd) \ - TYPE(mm_storeh_pd) \ - TYPE(mm_storel_epi64) \ - TYPE(mm_storel_pd) \ - TYPE(mm_storer_pd) \ - TYPE(mm_storeu_pd) \ - TYPE(mm_storeu_si128) \ - TYPE(mm_storeu_si32) \ - TYPE(mm_stream_pd) \ - TYPE(mm_stream_si128) \ - TYPE(mm_stream_si32) \ - TYPE(mm_stream_si64) \ - TYPE(mm_sub_epi16) \ - TYPE(mm_sub_epi32) \ - TYPE(mm_sub_epi64) \ - TYPE(mm_sub_epi8) \ - TYPE(mm_sub_pd) \ - TYPE(mm_sub_sd) \ - TYPE(mm_sub_si64) \ - TYPE(mm_subs_epi16) \ - TYPE(mm_subs_epi8) \ - TYPE(mm_subs_epu16) \ - TYPE(mm_subs_epu8) \ - TYPE(mm_ucomieq_sd) \ - TYPE(mm_ucomige_sd) \ - TYPE(mm_ucomigt_sd) \ - TYPE(mm_ucomile_sd) \ - TYPE(mm_ucomilt_sd) \ - TYPE(mm_ucomineq_sd) \ - TYPE(mm_undefined_pd) \ - TYPE(mm_undefined_si128) \ - TYPE(mm_unpackhi_epi16) \ - TYPE(mm_unpackhi_epi32) \ - TYPE(mm_unpackhi_epi64) \ - TYPE(mm_unpackhi_epi8) \ - TYPE(mm_unpackhi_pd) \ - TYPE(mm_unpacklo_epi16) \ - TYPE(mm_unpacklo_epi32) \ - TYPE(mm_unpacklo_epi64) \ - TYPE(mm_unpacklo_epi8) \ - TYPE(mm_unpacklo_pd) \ - TYPE(mm_xor_pd) \ - TYPE(mm_xor_si128) \ - /* SSE3 */ \ - TYPE(mm_addsub_pd) \ - TYPE(mm_addsub_ps) \ - TYPE(mm_hadd_pd) \ - TYPE(mm_hadd_ps) \ - TYPE(mm_hsub_pd) \ - TYPE(mm_hsub_ps) \ - TYPE(mm_lddqu_si128) \ - TYPE(mm_loaddup_pd) \ - TYPE(mm_movedup_pd) \ - TYPE(mm_movehdup_ps) \ - TYPE(mm_moveldup_ps) \ - /* SSSE3 */ \ - TYPE(mm_abs_epi16) \ - TYPE(mm_abs_epi32) \ - TYPE(mm_abs_epi8) \ - TYPE(mm_abs_pi16) \ - TYPE(mm_abs_pi32) \ - TYPE(mm_abs_pi8) \ - TYPE(mm_alignr_epi8) \ - TYPE(mm_alignr_pi8) \ - 
TYPE(mm_hadd_epi16) \ - TYPE(mm_hadd_epi32) \ - TYPE(mm_hadd_pi16) \ - TYPE(mm_hadd_pi32) \ - TYPE(mm_hadds_epi16) \ - TYPE(mm_hadds_pi16) \ - TYPE(mm_hsub_epi16) \ - TYPE(mm_hsub_epi32) \ - TYPE(mm_hsub_pi16) \ - TYPE(mm_hsub_pi32) \ - TYPE(mm_hsubs_epi16) \ - TYPE(mm_hsubs_pi16) \ - TYPE(mm_maddubs_epi16) \ - TYPE(mm_maddubs_pi16) \ - TYPE(mm_mulhrs_epi16) \ - TYPE(mm_mulhrs_pi16) \ - TYPE(mm_shuffle_epi8) \ - TYPE(mm_shuffle_pi8) \ - TYPE(mm_sign_epi16) \ - TYPE(mm_sign_epi32) \ - TYPE(mm_sign_epi8) \ - TYPE(mm_sign_pi16) \ - TYPE(mm_sign_pi32) \ - TYPE(mm_sign_pi8) \ - /* SSE4.1 */ \ - TYPE(mm_blend_epi16) \ - TYPE(mm_blend_pd) \ - TYPE(mm_blend_ps) \ - TYPE(mm_blendv_epi8) \ - TYPE(mm_blendv_pd) \ - TYPE(mm_blendv_ps) \ - TYPE(mm_ceil_pd) \ - TYPE(mm_ceil_ps) \ - TYPE(mm_ceil_sd) \ - TYPE(mm_ceil_ss) \ - TYPE(mm_cmpeq_epi64) \ - TYPE(mm_cvtepi16_epi32) \ - TYPE(mm_cvtepi16_epi64) \ - TYPE(mm_cvtepi32_epi64) \ - TYPE(mm_cvtepi8_epi16) \ - TYPE(mm_cvtepi8_epi32) \ - TYPE(mm_cvtepi8_epi64) \ - TYPE(mm_cvtepu16_epi32) \ - TYPE(mm_cvtepu16_epi64) \ - TYPE(mm_cvtepu32_epi64) \ - TYPE(mm_cvtepu8_epi16) \ - TYPE(mm_cvtepu8_epi32) \ - TYPE(mm_cvtepu8_epi64) \ - TYPE(mm_dp_pd) \ - TYPE(mm_dp_ps) \ - TYPE(mm_extract_epi32) \ - TYPE(mm_extract_epi64) \ - TYPE(mm_extract_epi8) \ - TYPE(mm_extract_ps) \ - TYPE(mm_floor_pd) \ - TYPE(mm_floor_ps) \ - TYPE(mm_floor_sd) \ - TYPE(mm_floor_ss) \ - TYPE(mm_insert_epi32) \ - TYPE(mm_insert_epi64) \ - TYPE(mm_insert_epi8) \ - TYPE(mm_insert_ps) \ - TYPE(mm_max_epi32) \ - TYPE(mm_max_epi8) \ - TYPE(mm_max_epu16) \ - TYPE(mm_max_epu32) \ - TYPE(mm_min_epi32) \ - TYPE(mm_min_epi8) \ - TYPE(mm_min_epu16) \ - TYPE(mm_min_epu32) \ - TYPE(mm_minpos_epu16) \ - TYPE(mm_mpsadbw_epu8) \ - TYPE(mm_mul_epi32) \ - TYPE(mm_mullo_epi32) \ - TYPE(mm_packus_epi32) \ - TYPE(mm_round_pd) \ - TYPE(mm_round_ps) \ - TYPE(mm_round_sd) \ - TYPE(mm_round_ss) \ - TYPE(mm_stream_load_si128) \ - TYPE(mm_test_all_ones) \ - TYPE(mm_test_all_zeros) \ - TYPE(mm_test_mix_ones_zeros) \ - TYPE(mm_testc_si128) \ - TYPE(mm_testnzc_si128) \ - TYPE(mm_testz_si128) \ - /* SSE4.2 */ \ - TYPE(mm_cmpestra) \ - TYPE(mm_cmpestrc) \ - TYPE(mm_cmpestri) \ - TYPE(mm_cmpestrm) \ - TYPE(mm_cmpestro) \ - TYPE(mm_cmpestrs) \ - TYPE(mm_cmpestrz) \ - TYPE(mm_cmpgt_epi64) \ - TYPE(mm_cmpistra) \ - TYPE(mm_cmpistrc) \ - TYPE(mm_cmpistri) \ - TYPE(mm_cmpistrm) \ - TYPE(mm_cmpistro) \ - TYPE(mm_cmpistrs) \ - TYPE(mm_cmpistrz) \ - TYPE(mm_crc32_u16) \ - TYPE(mm_crc32_u32) \ - TYPE(mm_crc32_u64) \ - TYPE(mm_crc32_u8) \ - /* AES */ \ - TYPE(mm_aesenc_si128) \ - TYPE(mm_aesenclast_si128) \ - TYPE(mm_aeskeygenassist_si128) \ - /* Others */ \ - TYPE(mm_clmulepi64_si128) \ - TYPE(mm_popcnt_u32) \ - TYPE(mm_popcnt_u64) \ - TYPE(last) /* This indicates the end of macros */ + +#define INTRIN_LIST \ + /* MMX */ \ + _(mm_empty) \ + /* SSE */ \ + _(mm_add_ps) \ + _(mm_add_ss) \ + _(mm_and_ps) \ + _(mm_andnot_ps) \ + _(mm_avg_pu16) \ + _(mm_avg_pu8) \ + _(mm_cmpeq_ps) \ + _(mm_cmpeq_ss) \ + _(mm_cmpge_ps) \ + _(mm_cmpge_ss) \ + _(mm_cmpgt_ps) \ + _(mm_cmpgt_ss) \ + _(mm_cmple_ps) \ + _(mm_cmple_ss) \ + _(mm_cmplt_ps) \ + _(mm_cmplt_ss) \ + _(mm_cmpneq_ps) \ + _(mm_cmpneq_ss) \ + _(mm_cmpnge_ps) \ + _(mm_cmpnge_ss) \ + _(mm_cmpngt_ps) \ + _(mm_cmpngt_ss) \ + _(mm_cmpnle_ps) \ + _(mm_cmpnle_ss) \ + _(mm_cmpnlt_ps) \ + _(mm_cmpnlt_ss) \ + _(mm_cmpord_ps) \ + _(mm_cmpord_ss) \ + _(mm_cmpunord_ps) \ + _(mm_cmpunord_ss) \ + _(mm_comieq_ss) \ + _(mm_comige_ss) \ + _(mm_comigt_ss) \ + _(mm_comile_ss) \ + _(mm_comilt_ss) \ + 
_(mm_comineq_ss) \ + _(mm_cvt_pi2ps) \ + _(mm_cvt_ps2pi) \ + _(mm_cvt_si2ss) \ + _(mm_cvt_ss2si) \ + _(mm_cvtpi16_ps) \ + _(mm_cvtpi32_ps) \ + _(mm_cvtpi32x2_ps) \ + _(mm_cvtpi8_ps) \ + _(mm_cvtps_pi16) \ + _(mm_cvtps_pi32) \ + _(mm_cvtps_pi8) \ + _(mm_cvtpu16_ps) \ + _(mm_cvtpu8_ps) \ + _(mm_cvtsi32_ss) \ + _(mm_cvtsi64_ss) \ + _(mm_cvtss_f32) \ + _(mm_cvtss_si32) \ + _(mm_cvtss_si64) \ + _(mm_cvtt_ps2pi) \ + _(mm_cvtt_ss2si) \ + _(mm_cvttps_pi32) \ + _(mm_cvttss_si32) \ + _(mm_cvttss_si64) \ + _(mm_div_ps) \ + _(mm_div_ss) \ + _(mm_extract_pi16) \ + _(mm_free) \ + _(mm_get_flush_zero_mode) \ + _(mm_get_rounding_mode) \ + _(mm_getcsr) \ + _(mm_insert_pi16) \ + _(mm_load_ps) \ + _(mm_load_ps1) \ + _(mm_load_ss) \ + _(mm_load1_ps) \ + _(mm_loadh_pi) \ + _(mm_loadl_pi) \ + _(mm_loadr_ps) \ + _(mm_loadu_ps) \ + _(mm_loadu_si16) \ + _(mm_loadu_si64) \ + _(mm_malloc) \ + _(mm_maskmove_si64) \ + _(m_maskmovq) \ + _(mm_max_pi16) \ + _(mm_max_ps) \ + _(mm_max_pu8) \ + _(mm_max_ss) \ + _(mm_min_pi16) \ + _(mm_min_ps) \ + _(mm_min_pu8) \ + _(mm_min_ss) \ + _(mm_move_ss) \ + _(mm_movehl_ps) \ + _(mm_movelh_ps) \ + _(mm_movemask_pi8) \ + _(mm_movemask_ps) \ + _(mm_mul_ps) \ + _(mm_mul_ss) \ + _(mm_mulhi_pu16) \ + _(mm_or_ps) \ + _(m_pavgb) \ + _(m_pavgw) \ + _(m_pextrw) \ + _(m_pinsrw) \ + _(m_pmaxsw) \ + _(m_pmaxub) \ + _(m_pminsw) \ + _(m_pminub) \ + _(m_pmovmskb) \ + _(m_pmulhuw) \ + _(mm_prefetch) \ + _(m_psadbw) \ + _(m_pshufw) \ + _(mm_rcp_ps) \ + _(mm_rcp_ss) \ + _(mm_rsqrt_ps) \ + _(mm_rsqrt_ss) \ + _(mm_sad_pu8) \ + _(mm_set_flush_zero_mode) \ + _(mm_set_ps) \ + _(mm_set_ps1) \ + _(mm_set_rounding_mode) \ + _(mm_set_ss) \ + _(mm_set1_ps) \ + _(mm_setcsr) \ + _(mm_setr_ps) \ + _(mm_setzero_ps) \ + _(mm_sfence) \ + _(mm_shuffle_pi16) \ + _(mm_shuffle_ps) \ + _(mm_sqrt_ps) \ + _(mm_sqrt_ss) \ + _(mm_store_ps) \ + _(mm_store_ps1) \ + _(mm_store_ss) \ + _(mm_store1_ps) \ + _(mm_storeh_pi) \ + _(mm_storel_pi) \ + _(mm_storer_ps) \ + _(mm_storeu_ps) \ + _(mm_storeu_si16) \ + _(mm_storeu_si64) \ + _(mm_stream_pi) \ + _(mm_stream_ps) \ + _(mm_sub_ps) \ + _(mm_sub_ss) \ + _(mm_ucomieq_ss) \ + _(mm_ucomige_ss) \ + _(mm_ucomigt_ss) \ + _(mm_ucomile_ss) \ + _(mm_ucomilt_ss) \ + _(mm_ucomineq_ss) \ + _(mm_undefined_ps) \ + _(mm_unpackhi_ps) \ + _(mm_unpacklo_ps) \ + _(mm_xor_ps) \ + /* SSE2 */ \ + _(mm_add_epi16) \ + _(mm_add_epi32) \ + _(mm_add_epi64) \ + _(mm_add_epi8) \ + _(mm_add_pd) \ + _(mm_add_sd) \ + _(mm_add_si64) \ + _(mm_adds_epi16) \ + _(mm_adds_epi8) \ + _(mm_adds_epu16) \ + _(mm_adds_epu8) \ + _(mm_and_pd) \ + _(mm_and_si128) \ + _(mm_andnot_pd) \ + _(mm_andnot_si128) \ + _(mm_avg_epu16) \ + _(mm_avg_epu8) \ + _(mm_bslli_si128) \ + _(mm_bsrli_si128) \ + _(mm_castpd_ps) \ + _(mm_castpd_si128) \ + _(mm_castps_pd) \ + _(mm_castps_si128) \ + _(mm_castsi128_pd) \ + _(mm_castsi128_ps) \ + _(mm_clflush) \ + _(mm_cmpeq_epi16) \ + _(mm_cmpeq_epi32) \ + _(mm_cmpeq_epi8) \ + _(mm_cmpeq_pd) \ + _(mm_cmpeq_sd) \ + _(mm_cmpge_pd) \ + _(mm_cmpge_sd) \ + _(mm_cmpgt_epi16) \ + _(mm_cmpgt_epi32) \ + _(mm_cmpgt_epi8) \ + _(mm_cmpgt_pd) \ + _(mm_cmpgt_sd) \ + _(mm_cmple_pd) \ + _(mm_cmple_sd) \ + _(mm_cmplt_epi16) \ + _(mm_cmplt_epi32) \ + _(mm_cmplt_epi8) \ + _(mm_cmplt_pd) \ + _(mm_cmplt_sd) \ + _(mm_cmpneq_pd) \ + _(mm_cmpneq_sd) \ + _(mm_cmpnge_pd) \ + _(mm_cmpnge_sd) \ + _(mm_cmpngt_pd) \ + _(mm_cmpngt_sd) \ + _(mm_cmpnle_pd) \ + _(mm_cmpnle_sd) \ + _(mm_cmpnlt_pd) \ + _(mm_cmpnlt_sd) \ + _(mm_cmpord_pd) \ + _(mm_cmpord_sd) \ + _(mm_cmpunord_pd) \ + _(mm_cmpunord_sd) \ + _(mm_comieq_sd) \ + _(mm_comige_sd) 
\ + _(mm_comigt_sd) \ + _(mm_comile_sd) \ + _(mm_comilt_sd) \ + _(mm_comineq_sd) \ + _(mm_cvtepi32_pd) \ + _(mm_cvtepi32_ps) \ + _(mm_cvtpd_epi32) \ + _(mm_cvtpd_pi32) \ + _(mm_cvtpd_ps) \ + _(mm_cvtpi32_pd) \ + _(mm_cvtps_epi32) \ + _(mm_cvtps_pd) \ + _(mm_cvtsd_f64) \ + _(mm_cvtsd_si32) \ + _(mm_cvtsd_si64) \ + _(mm_cvtsd_si64x) \ + _(mm_cvtsd_ss) \ + _(mm_cvtsi128_si32) \ + _(mm_cvtsi128_si64) \ + _(mm_cvtsi128_si64x) \ + _(mm_cvtsi32_sd) \ + _(mm_cvtsi32_si128) \ + _(mm_cvtsi64_sd) \ + _(mm_cvtsi64_si128) \ + _(mm_cvtsi64x_sd) \ + _(mm_cvtsi64x_si128) \ + _(mm_cvtss_sd) \ + _(mm_cvttpd_epi32) \ + _(mm_cvttpd_pi32) \ + _(mm_cvttps_epi32) \ + _(mm_cvttsd_si32) \ + _(mm_cvttsd_si64) \ + _(mm_cvttsd_si64x) \ + _(mm_div_pd) \ + _(mm_div_sd) \ + _(mm_extract_epi16) \ + _(mm_insert_epi16) \ + _(mm_lfence) \ + _(mm_load_pd) \ + _(mm_load_pd1) \ + _(mm_load_sd) \ + _(mm_load_si128) \ + _(mm_load1_pd) \ + _(mm_loadh_pd) \ + _(mm_loadl_epi64) \ + _(mm_loadl_pd) \ + _(mm_loadr_pd) \ + _(mm_loadu_pd) \ + _(mm_loadu_si128) \ + _(mm_loadu_si32) \ + _(mm_madd_epi16) \ + _(mm_maskmoveu_si128) \ + _(mm_max_epi16) \ + _(mm_max_epu8) \ + _(mm_max_pd) \ + _(mm_max_sd) \ + _(mm_mfence) \ + _(mm_min_epi16) \ + _(mm_min_epu8) \ + _(mm_min_pd) \ + _(mm_min_sd) \ + _(mm_move_epi64) \ + _(mm_move_sd) \ + _(mm_movemask_epi8) \ + _(mm_movemask_pd) \ + _(mm_movepi64_pi64) \ + _(mm_movpi64_epi64) \ + _(mm_mul_epu32) \ + _(mm_mul_pd) \ + _(mm_mul_sd) \ + _(mm_mul_su32) \ + _(mm_mulhi_epi16) \ + _(mm_mulhi_epu16) \ + _(mm_mullo_epi16) \ + _(mm_or_pd) \ + _(mm_or_si128) \ + _(mm_packs_epi16) \ + _(mm_packs_epi32) \ + _(mm_packus_epi16) \ + _(mm_pause) \ + _(mm_sad_epu8) \ + _(mm_set_epi16) \ + _(mm_set_epi32) \ + _(mm_set_epi64) \ + _(mm_set_epi64x) \ + _(mm_set_epi8) \ + _(mm_set_pd) \ + _(mm_set_pd1) \ + _(mm_set_sd) \ + _(mm_set1_epi16) \ + _(mm_set1_epi32) \ + _(mm_set1_epi64) \ + _(mm_set1_epi64x) \ + _(mm_set1_epi8) \ + _(mm_set1_pd) \ + _(mm_setr_epi16) \ + _(mm_setr_epi32) \ + _(mm_setr_epi64) \ + _(mm_setr_epi8) \ + _(mm_setr_pd) \ + _(mm_setzero_pd) \ + _(mm_setzero_si128) \ + _(mm_shuffle_epi32) \ + _(mm_shuffle_pd) \ + _(mm_shufflehi_epi16) \ + _(mm_shufflelo_epi16) \ + _(mm_sll_epi16) \ + _(mm_sll_epi32) \ + _(mm_sll_epi64) \ + _(mm_slli_epi16) \ + _(mm_slli_epi32) \ + _(mm_slli_epi64) \ + _(mm_slli_si128) \ + _(mm_sqrt_pd) \ + _(mm_sqrt_sd) \ + _(mm_sra_epi16) \ + _(mm_sra_epi32) \ + _(mm_srai_epi16) \ + _(mm_srai_epi32) \ + _(mm_srl_epi16) \ + _(mm_srl_epi32) \ + _(mm_srl_epi64) \ + _(mm_srli_epi16) \ + _(mm_srli_epi32) \ + _(mm_srli_epi64) \ + _(mm_srli_si128) \ + _(mm_store_pd) \ + _(mm_store_pd1) \ + _(mm_store_sd) \ + _(mm_store_si128) \ + _(mm_store1_pd) \ + _(mm_storeh_pd) \ + _(mm_storel_epi64) \ + _(mm_storel_pd) \ + _(mm_storer_pd) \ + _(mm_storeu_pd) \ + _(mm_storeu_si128) \ + _(mm_storeu_si32) \ + _(mm_stream_pd) \ + _(mm_stream_si128) \ + _(mm_stream_si32) \ + _(mm_stream_si64) \ + _(mm_sub_epi16) \ + _(mm_sub_epi32) \ + _(mm_sub_epi64) \ + _(mm_sub_epi8) \ + _(mm_sub_pd) \ + _(mm_sub_sd) \ + _(mm_sub_si64) \ + _(mm_subs_epi16) \ + _(mm_subs_epi8) \ + _(mm_subs_epu16) \ + _(mm_subs_epu8) \ + _(mm_ucomieq_sd) \ + _(mm_ucomige_sd) \ + _(mm_ucomigt_sd) \ + _(mm_ucomile_sd) \ + _(mm_ucomilt_sd) \ + _(mm_ucomineq_sd) \ + _(mm_undefined_pd) \ + _(mm_undefined_si128) \ + _(mm_unpackhi_epi16) \ + _(mm_unpackhi_epi32) \ + _(mm_unpackhi_epi64) \ + _(mm_unpackhi_epi8) \ + _(mm_unpackhi_pd) \ + _(mm_unpacklo_epi16) \ + _(mm_unpacklo_epi32) \ + _(mm_unpacklo_epi64) \ + _(mm_unpacklo_epi8) \ + 
_(mm_unpacklo_pd) \ + _(mm_xor_pd) \ + _(mm_xor_si128) \ + /* SSE3 */ \ + _(mm_addsub_pd) \ + _(mm_addsub_ps) \ + _(mm_hadd_pd) \ + _(mm_hadd_ps) \ + _(mm_hsub_pd) \ + _(mm_hsub_ps) \ + _(mm_lddqu_si128) \ + _(mm_loaddup_pd) \ + _(mm_movedup_pd) \ + _(mm_movehdup_ps) \ + _(mm_moveldup_ps) \ + /* SSSE3 */ \ + _(mm_abs_epi16) \ + _(mm_abs_epi32) \ + _(mm_abs_epi8) \ + _(mm_abs_pi16) \ + _(mm_abs_pi32) \ + _(mm_abs_pi8) \ + _(mm_alignr_epi8) \ + _(mm_alignr_pi8) \ + _(mm_hadd_epi16) \ + _(mm_hadd_epi32) \ + _(mm_hadd_pi16) \ + _(mm_hadd_pi32) \ + _(mm_hadds_epi16) \ + _(mm_hadds_pi16) \ + _(mm_hsub_epi16) \ + _(mm_hsub_epi32) \ + _(mm_hsub_pi16) \ + _(mm_hsub_pi32) \ + _(mm_hsubs_epi16) \ + _(mm_hsubs_pi16) \ + _(mm_maddubs_epi16) \ + _(mm_maddubs_pi16) \ + _(mm_mulhrs_epi16) \ + _(mm_mulhrs_pi16) \ + _(mm_shuffle_epi8) \ + _(mm_shuffle_pi8) \ + _(mm_sign_epi16) \ + _(mm_sign_epi32) \ + _(mm_sign_epi8) \ + _(mm_sign_pi16) \ + _(mm_sign_pi32) \ + _(mm_sign_pi8) \ + /* SSE4.1 */ \ + _(mm_blend_epi16) \ + _(mm_blend_pd) \ + _(mm_blend_ps) \ + _(mm_blendv_epi8) \ + _(mm_blendv_pd) \ + _(mm_blendv_ps) \ + _(mm_ceil_pd) \ + _(mm_ceil_ps) \ + _(mm_ceil_sd) \ + _(mm_ceil_ss) \ + _(mm_cmpeq_epi64) \ + _(mm_cvtepi16_epi32) \ + _(mm_cvtepi16_epi64) \ + _(mm_cvtepi32_epi64) \ + _(mm_cvtepi8_epi16) \ + _(mm_cvtepi8_epi32) \ + _(mm_cvtepi8_epi64) \ + _(mm_cvtepu16_epi32) \ + _(mm_cvtepu16_epi64) \ + _(mm_cvtepu32_epi64) \ + _(mm_cvtepu8_epi16) \ + _(mm_cvtepu8_epi32) \ + _(mm_cvtepu8_epi64) \ + _(mm_dp_pd) \ + _(mm_dp_ps) \ + _(mm_extract_epi32) \ + _(mm_extract_epi64) \ + _(mm_extract_epi8) \ + _(mm_extract_ps) \ + _(mm_floor_pd) \ + _(mm_floor_ps) \ + _(mm_floor_sd) \ + _(mm_floor_ss) \ + _(mm_insert_epi32) \ + _(mm_insert_epi64) \ + _(mm_insert_epi8) \ + _(mm_insert_ps) \ + _(mm_max_epi32) \ + _(mm_max_epi8) \ + _(mm_max_epu16) \ + _(mm_max_epu32) \ + _(mm_min_epi32) \ + _(mm_min_epi8) \ + _(mm_min_epu16) \ + _(mm_min_epu32) \ + _(mm_minpos_epu16) \ + _(mm_mpsadbw_epu8) \ + _(mm_mul_epi32) \ + _(mm_mullo_epi32) \ + _(mm_packus_epi32) \ + _(mm_round_pd) \ + _(mm_round_ps) \ + _(mm_round_sd) \ + _(mm_round_ss) \ + _(mm_stream_load_si128) \ + _(mm_test_all_ones) \ + _(mm_test_all_zeros) \ + _(mm_test_mix_ones_zeros) \ + _(mm_testc_si128) \ + _(mm_testnzc_si128) \ + _(mm_testz_si128) \ + /* SSE4.2 */ \ + _(mm_cmpestra) \ + _(mm_cmpestrc) \ + _(mm_cmpestri) \ + _(mm_cmpestrm) \ + _(mm_cmpestro) \ + _(mm_cmpestrs) \ + _(mm_cmpestrz) \ + _(mm_cmpgt_epi64) \ + _(mm_cmpistra) \ + _(mm_cmpistrc) \ + _(mm_cmpistri) \ + _(mm_cmpistrm) \ + _(mm_cmpistro) \ + _(mm_cmpistrs) \ + _(mm_cmpistrz) \ + _(mm_crc32_u16) \ + _(mm_crc32_u32) \ + _(mm_crc32_u64) \ + _(mm_crc32_u8) \ + /* AES */ \ + _(mm_aesenc_si128) \ + _(mm_aesdec_si128) \ + _(mm_aesenclast_si128) \ + _(mm_aesdeclast_si128) \ + _(mm_aesimc_si128) \ + _(mm_aeskeygenassist_si128) \ + /* Others */ \ + _(mm_clmulepi64_si128) \ + _(mm_get_denormals_zero_mode) \ + _(mm_popcnt_u32) \ + _(mm_popcnt_u64) \ + _(mm_set_denormals_zero_mode) \ + _(rdtsc) \ + _(last) /* This indicates the end of macros */ namespace SSE2NEON { @@ -542,7 +548,11 @@ namespace SSE2NEON // of the 10,000 randomized input vectors. When running on ARM, then the results // are compared to the NEON approximate version. extern const char *instructionString[]; -enum InstructionTest { INTRIN_FOREACH(ENUM) }; +enum InstructionTest { +#define _(x) it_##x, + INTRIN_LIST +#undef _ +}; class SSE2NEONTest {