Before this commit, the code did:
int max = G_MAXINT; /* INT_MAX, 2147483647 */
float factor = 1.0f;
max *= factor;      /* i.e. max = (int)((float)max * factor); */
Dimitry Andric helped me understand the problem; here is the
explanation:
Here, "max * factor" is 2147483648.0, not 2147483647.0: the value is
rounded up because the float type has a mantissa of 23 bits only.
However, converting 2147483648.0 to an integer is an undefined
behaviour.
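The rounding step can be observed in isolation; a minimal sketch in
plain C (no GLib required), assuming a 32-bit int:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    float f = (float)INT_MAX;  /* nearest representable float is 2147483648.0f */
    printf("%.1f\n", f);       /* prints 2147483648.0 */
    return 0;
}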
The resulting value depends on the compiler and the level of
optimization:
GCC (all versions) with -O0: max = -2147483648
GCC (all versions) with -O2: max = 2147483647
Clang up to 3.5 with -O0: max = -2147483648
Clang up to 3.5 with -O2: max = 2147483647
(i.e. the same behaviour as GCC)
Clang 3.6+ with -O0: max = -2147483648
Clang 3.6+ with -O2: max = 0
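Assuming G_MAXINT equals INT_MAX (true on common platforms), the
results above can be reproduced with this standalone program, built at
the different optimization levels (e.g. "clang -O0 repro.c" versus
"clang -O2 repro.c"):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int max = INT_MAX;  /* stand-in for G_MAXINT */
    float factor = 1.0f;
    max *= factor;      /* out-of-range float-to-int conversion: undefined behaviour */
    printf("max = %d\n", max);
    return 0;
}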
In the context of the preferences dialog, this means that all integers
must lie between min=0 and max=0, i.e. every integer setting is forced
to 0.
The fix, suggested by Dimitry, is to use a double as an intermediate
variable: its 53-bit significand is wide enough to hold "max * factor"
(2147483647.0) exactly, without rounding. 2147483647.0 can then be
converted back to the int 2147483647.
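A sketch of the fixed pattern (variable names are illustrative, not
necessarily those used in the actual commit):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int max = INT_MAX;
    float factor = 1.0f;
    double value = (double)max * factor; /* exactly 2147483647.0, no rounding */
    max = (int)value;                    /* in range, so well defined */
    printf("max = %d\n", max);           /* prints max = 2147483647 */
    return 0;
}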
(cherry picked from commit 9d77a28)