Tags: java, jdbc, floating-point, cpu-architecture, ieee-754

Why does floating-point output differ across platforms?


As far as I know, due to the IEEE 754 standard, decimal values with more significant digits than a double can hold are not stored precisely in binary format. When such imprecisely stored values are converted back to decimal for display on the console, should the output remain consistent across different operating systems (Windows/Linux) and CPU architectures?

I'm asking because my client is using JDBC, and we noticed that the output values differ between Windows and Linux environments. For instance, after storing 123456.6543214586532653516 in the database and printing it to the console from both Linux and Windows systems, the results are different:

Linux:   123456.65432145866000000000
Windows: 123456.65432145864000000000

I understand that the input value cannot be precisely stored due to the limitations of floating-point representation, but I don't understand why the console output differs depending on the environment.

To investigate further, I tested the same value in both Python and Java on my x86 Windows and x86 Linux environments. Specifically, I assigned the value 123456.6543214586532653516 to a variable and printed it directly to the console. In both environments, the output was consistently 123456.65432145866 regardless of the operating system or virtual machine.
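For what it's worth, the Python side of this test is easy to reproduce, since Python floats are also IEEE 754 doubles; this is a minimal sketch of the check described above:

```python
# Python floats are IEEE 754 doubles; print() shows the shortest
# decimal string that uniquely identifies the stored double.
x = 123456.6543214586532653516
print(x)  # → 123456.65432145866
```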

Can someone explain what might be causing this inconsistency?


To be more specific, this issue originates from a question my colleague received from a customer. Since the question did not come to me directly, and my colleague and I handle different tasks, the information I have is unfortunately limited to what I'm sharing here. We both work for a database company called Tibero, whose SQL dialect is similar to Oracle's. The customer reported getting different results on Windows and Linux when running the same Java code. Unfortunately, the Java code belongs to the customer, so we cannot share it. However, the customer mentioned that the following SQL query, executed in tbsql (our company's CLI tool, similar to sqlplus), also produces the same discrepancy:

CREATE TABLE convert_test(c1 BINARY_DOUBLE);
INSERT INTO convert_test VALUES(123456.6543214586532653516);
SELECT TO_CHAR(c1, '999999999999999D9999999999999999999', 'NLS_NUMERIC_CHARACTERS = ''.''') AS formatted_value
FROM convert_test;

Unfortunately, I couldn't reproduce the issue in my own environment. I also realize that Tibero may not be a familiar or easily accessible product for you, which may make this question harder to answer.

Just to check, I also installed PostgreSQL on both Windows and Linux and executed the following query. The results were consistent across platforms, with no differences:

CREATE TABLE convert_test (
    c1 DOUBLE PRECISION
);
INSERT INTO convert_test VALUES (123456.6543214586532653516);
SELECT c1 FROM convert_test;

         c1
--------------------
 123456.65432145866

Solution

  • This is very likely a rounding error, as the math below shows:

    Converting the integer part:

    123456 / 2 = 61728  R 0
     61728 / 2 = 30864  R 0
     30864 / 2 = 15432  R 0
     15432 / 2 =  7716  R 0
      7716 / 2 =  3858  R 0
      3858 / 2 =  1929  R 0
      1929 / 2 =   964  R 1
       964 / 2 =   482  R 0
       482 / 2 =   241  R 0
       241 / 2 =   120  R 1
       120 / 2 =    60  R 0
        60 / 2 =    30  R 0
        30 / 2 =    15  R 0
        15 / 2 =     7  R 1
         7 / 2 =     3  R 1
         3 / 2 =     1  R 1
         1 / 2 =     0  R 1

    So, 123456 (base 10) = 11110001001000000 (base 2): 17 bits, reading the remainders from bottom to top.
    
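The repeated division above can be double-checked with a short Python sketch:

```python
# Collect remainders of repeated division by 2 (least significant bit first).
n = 123456
remainders = []
while n:
    remainders.append(n % 2)
    n //= 2

# Reading the remainders bottom-up gives the binary form.
bits = ''.join(str(b) for b in reversed(remainders))
print(bits, len(bits))  # → 11110001001000000 17
```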

    Converting fractional part:

    0.6543214586532653516   x   2   =   1.3086429173065307032
    0.3086429173065307032   x   2   =   0.6172858346130614064
    0.6172858346130614064   x   2   =   1.2345716692261228128
    0.2345716692261228128   x   2   =   0.4691433384522456256
    0.4691433384522456256   x   2   =   0.9382866769044912512
    0.9382866769044912512   x   2   =   1.8765733538089825024
    0.8765733538089825024   x   2   =   1.7531467076179650048
    0.7531467076179650048   x   2   =   1.5062934152359300096
    0.5062934152359300096   x   2   =   1.0125868304718600192
    0.0125868304718600192   x   2   =   0.0251736609437200384
    0.0251736609437200384   x   2   =   0.0503473218874400768
    0.0503473218874400768   x   2   =   0.1006946437748801536
    0.1006946437748801536   x   2   =   0.2013892875497603072
    0.2013892875497603072   x   2   =   0.4027785750995206144
    0.4027785750995206144   x   2   =   0.8055571501990412288
    0.8055571501990412288   x   2   =   1.6111143003980824576
    0.6111143003980824576   x   2   =   1.2222286007961649152
    0.2222286007961649152   x   2   =   0.4444572015923298304
    0.4444572015923298304   x   2   =   0.8889144031846596608
    0.8889144031846596608   x   2   =   1.7778288063693193216
    0.7778288063693193216   x   2   =   1.5556576127386386432
    0.5556576127386386432   x   2   =   1.1113152254772772864
    0.1113152254772772864   x   2   =   0.2226304509545545728
    0.2226304509545545728   x   2   =   0.4452609019091091456
    0.4452609019091091456   x   2   =   0.8905218038182182912
    0.8905218038182182912   x   2   =   1.7810436076364365824
    0.7810436076364365824   x   2   =   1.5620872152728731648
    0.5620872152728731648   x   2   =   1.1241744305457463296
    0.1241744305457463296   x   2   =   0.2483488610914926592
    0.2483488610914926592   x   2   =   0.4966977221829853184
    0.4966977221829853184   x   2   =   0.9933954443659706368
    0.9933954443659706368   x   2   =   1.9867908887319412736
    0.9867908887319412736   x   2   =   1.9735817774638825472
    0.9735817774638825472   x   2   =   1.9471635549277650944
    0.9471635549277650944   x   2   =   1.8943271098555301888
    0.8943271098555301888   x   2   =   1.7886542197110603776
    0.7886542197110603776   x   2   =   1.5773084394221207552
    0.5773084394221207552   x   2   =       1.xxx...
    So, 0.6543214586532653516 (base 10) ~= 0.10100111100000011001110001110001111111 (base 2), 38 bits.
    The integer part already uses 17 bits, so fraction bit 36 is the least
    significant bit of the 53-bit significand, and fraction bits 37-38 ("11")
    are the extra bits that show the value must be rounded up.
    
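The doubling steps above can be reproduced exactly with Python's decimal module (the products here stay within Decimal's default precision, so no rounding occurs):

```python
from decimal import Decimal

# Repeatedly double the fraction; the integer part of each product is the
# next binary digit, exactly as in the hand calculation above.
frac = Decimal('0.6543214586532653516')
bits = []
for _ in range(38):
    frac *= 2
    digit = int(frac)
    bits.append(str(digit))
    frac -= digit

print(''.join(bits))  # → 10100111100000011001110001110001111111
```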

    No further fractional digits need to be calculated, since 17 + 38 = 55 bits = 53 bits of significand + 2 extra bits, which are enough to show that the result must be rounded up.

    Merging these two results and dropping the extra bits:

    123456.6543214586532653516 in base 10 = 11110001001000000.101001111000000110011100011100011111 in base 2.
    

    Rounding up and shifting the radix:

    1.1110001001000000101001111000000110011100011100100000 x 2^16 in base 2 = 123456.6543214586563408374786376953125
    
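As a cross-check of this decimal value, Python can represent the original input as an exact fraction and let float() perform the round-to-nearest conversion (this mirrors the hand arithmetic above, not any particular database):

```python
from decimal import Decimal
from fractions import Fraction

# The input 123456.6543214586532653516 as an exact rational number.
exact = Fraction(1234566543214586532653516, 10**19)

d = float(exact)   # correctly rounded to the nearest IEEE 754 double
print(Decimal(d))  # exact decimal expansion of the stored double
# → 123456.6543214586563408374786376953125
```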

    Whereas, without rounding up:

    1.1110001001000000101001111000000110011100011100011111 x 2^16 in base 2 = 123456.654321458641788922250270843505859375
    

    Since Java prints only just enough decimal digits to uniquely distinguish the double (the trailing zeros in the database output come from the fixed-width output format), these two numbers reduce to:

    123456.65432145864 and 
    123456.65432145866 (note the re-rounding here, aka "double rounding").
    
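Both doubles can also be constructed directly from their bit patterns to demonstrate the shortest-round-trip printing (the hex literals below are hand-converted from the binary significands above):

```python
# 0x1.e240a7819c720p+16 is the hex form of
# 1.1110001001000000101001111000000110011100011100100000 x 2^16.
rounded   = float.fromhex('0x1.e240a7819c720p+16')  # rounded up
truncated = float.fromhex('0x1.e240a7819c71fp+16')  # not rounded up, 1 ulp below

print(repr(truncated))  # → 123456.65432145864
print(repr(rounded))    # → 123456.65432145866
```

The two values differ by exactly one ulp (2^-36 at this magnitude), yet their shortest unique decimal representations differ in the last digit by 2, which is the "double rounding" effect noted above.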

    Refer to https://numeral-systems.com/ieee-754-converter/ and https://docs.oracle.com/en/java/javase/12/docs/api/java.base/java/util/Formatter.html#dndec