Bug: overflow in value_cast #580
base: master
Conversation
#579 is merged. You can rebase this one on top of master. Do you have an idea of how to improve …
Let me have a look at the … One potential way I see is to always use a double multiplication. The multiplication factor will always remain a compile-time constant, so the compiler is free to elide/replace that with suitable integer arithmetic that gives the same result. Thus, we're basically pushing the issue to the compiler - which I think is a good thing here. Of course, one should verify on Compiler Explorer that this is indeed the case, some of which you may have previously done if I read your comment in the other thread correctly. However, one thing we'll lose if we always select the …
I am not sure if that is a good idea. First, there are rounding issues, as you mentioned already. Second, the use of a floating-point type may be a no-go on some platforms. Isn't it an issue for your embedded projects?
Hi @burnpanck! Do you still plan to work on this issue? You also planned to refactor …
Branch updated from 6cbf0f5 to f9ae701.
Sorry, during a rebase attempt, the PR got accidentally closed. There is a fundamental tension between accuracy/rounding and performance, and we'd need to compromise one for the other. Fundamentally, to perform the conversion accurately (…). But when either …
It is hard to say. We probably should not produce more assembly instructions than "naive" code using fundamental types would. Otherwise, people will complain and not use the library. Also, we always say that the library will never be perfect and will not prevent all the cases where an overflow may happen. If someone wants to avoid that, then …
I created a somewhat more extended discussion here: #599. Whatever the outcome of that discussion is, I believe the current implementation is sub-optimal for integral types, in that the choice between the floating-point (…
I'm inclined to suggest an implementation based on direct fixed-point multiplication, which gives the most accurate results with good confidence of reasonable assembly even without optimisation. For 32-bit input and output representations, this can be written very portably using a single 64-bit multiplication and a bit-shift. I believe there is no downside to this. I further believe that we should spend that 4x multiplication effort for 64-bit input types, because if we don't, we basically can't guarantee anything for general conversion factors. The only exception I would implement is sticking to pure multiplication or division if the conversion factor has a numerator or denominator equal to one. What do you think?
I experimented with the above three implementation strategies for constant multiplication; see Godbolt.
Thus, to summarise, the proposed fully-accurate fixed-point implementation without overflow beats the existing implementation almost everywhere (sometimes very significantly), even though the existing one may overflow for 64-bit inputs. The exception seems to be 32-bit on x86-64, where the proposed implementation is twice as large for 64-bit inputs (but, in contrast to the previous implementation, guarantees correctness). On the other hand, for 32-bit integers, it beats the existing implementation by a factor of 4. Based on those results, I strongly suggest we simply guarantee correctness for all representable values and use the fixed-point implementation whenever the conversion is not a pure integer multiplication or division.
What you write is really interesting, but I am unable to give any meaningful feedback as I am on vacation until mid-August. I will dig into it a bit more if I find some time in the next two weeks, or I will get back to you after my vacation.
Don't worry, enjoy your vacation! I will attempt to complete #598 with full correctness guarantees under the assumption that the quantity rescaling provides the same guarantees, but of course it would "work" even with a different resolution here - it will just become even more difficult to figure out when overflow will happen.
Hi @burnpanck! Do you plan to work on those in the near future? I plan to release mp-units 2.3 soon.
I generally do, but you know how it is :-). I will be pretty busy the coming week and won't have time the weekend after, so it's either going to be this weekend, or rather two weeks from now, or we'll get close to October.
I just found github-action-benchmark, which runs benchmarks as part of CI and can push graphs of the performance trend to GitHub Pages. It also appears to support Catch2 out of the box; thus, this may be a valuable tool to keep an eye on the effect of changes to core functionality such as what we're touching here.
It looks really interesting indeed
I was now able to find a test case highlighting the overflow I suspected in #579. I keep this as a separate issue/PR so that #579 can be merged before we fix the issue highlighted here. After all, it appears it's still a fairly uncommon issue, given that you need a value whose representation times the numerator of a rational conversion factor overflows std::intmax_t.