bit_cast/padding_cast #147
Comments
The current implementation is in …
But the implementation of bit_cast is not constexpr even if std::bit_cast/__builtin_bit_cast are available. I'm sure you can easily adapt that; then I would already be happy :-) thx
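The constexpr request above could look roughly like the following sketch. The name `my_bit_cast` and the `__has_builtin` detection are illustrative assumptions, not the actual Boost.Core implementation: when the compiler provides `__builtin_bit_cast` the function is constexpr, otherwise it falls back to a non-constexpr `memcpy`.

```cpp
#include <cstdint>
#include <cstring>
#include <type_traits>

// Portable fallback: compilers without __has_builtin treat every probe as 0.
#ifndef __has_builtin
#define __has_builtin(x) 0
#endif

#if __has_builtin(__builtin_bit_cast)
#define MY_BIT_CAST_CONSTEXPR constexpr
#else
#define MY_BIT_CAST_CONSTEXPR
#endif

// Hypothetical sketch of a constexpr-capable bit_cast (not the Boost.Core code).
template<class To, class From>
MY_BIT_CAST_CONSTEXPR To my_bit_cast(const From& from) noexcept
{
    static_assert(sizeof(To) == sizeof(From), "sizes must match");
    static_assert(std::is_trivially_copyable<From>::value, "");
    static_assert(std::is_trivially_copyable<To>::value, "");
#if __has_builtin(__builtin_bit_cast)
    return __builtin_bit_cast(To, from);   // usable in constant expressions
#else
    To to{};
    std::memcpy(&to, &from, sizeof(To));   // runtime-only fallback
    return to;
#endif
}
```

With the intrinsic available, e.g. `constexpr auto bits = my_bit_cast<std::uint32_t>(1.0f);` compiles; with only the `memcpy` path, the same call still works, just not at compile time.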
It is not obvious what the behavior should be when the sizes of the source and target types differ. And such behavior would likely be endian-dependent. I'm feeling skeptical about adding padding_cast.
I don't think I'll make use of …
What is the intended use of …?
I need this for some math/FP functions, especially for the type boost::float80_t. The memory size of this type (sizeof) can be 16, 12, or 10 bytes depending on the platform/compiler/etc. (of course, only 10 bytes are relevant). To cast boost::float80_t to, e.g., __int128 and back, padding_cast is helpful.
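The use case above could be served by something like the following sketch of a padding_cast. The name and semantics are an assumption based on this thread, not an agreed Boost interface: it copies only the common byte prefix and zero-fills the remainder, which is exactly the endian-dependence raised earlier — on a little-endian target the value bytes of an 80-bit long double sit at the low addresses, so the prefix copy preserves them.

```cpp
#include <cstdint>
#include <cstring>
#include <type_traits>

// Hypothetical sketch of padding_cast (illustrative, not a Boost API):
// copies min(sizeof(To), sizeof(From)) bytes, zero-fills any padding.
template<class To, class From>
To padding_cast(const From& from) noexcept
{
    static_assert(std::is_trivially_copyable<From>::value, "");
    static_assert(std::is_trivially_copyable<To>::value, "");
    To to{};  // zero-initialize so bytes beyond sizeof(From) are 0
    std::memcpy(&to, &from,
                sizeof(To) < sizeof(From) ? sizeof(To) : sizeof(From));
    return to;
}
```

A round trip through a wider type recovers the original value on any byte order, e.g. `padding_cast<std::uint32_t>(padding_cast<std::uint64_t>(x)) == x`; interpreting the wider value itself, however, is endian-dependent.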
Hello Peter, constexpr case: … non-constexpr case: … thx & regards
At the moment, all the compilers that have a … Checking the further restrictions requires type traits, but Core can't use TypeTraits (or …).
You can use boost::type_traits?
*Problem
In many cases a bit_cast is needed. However, it cannot be assumed that std::bit_cast is available. It would therefore make sense to "rebuild" it in Boost; my implementation:
cast.hpp.txt
*Functionality
*padding_cast
thx
Gero