System.out.println(1.03 - .42);
The output is
0.6100000000000001
In C#
Console.WriteLine((1.03F - .42F));
Console.WriteLine((1.03D - .42D));
The output is
0.61
0.61
The surprising bit is that in both languages float and double are implemented according to the IEEE-754 spec, so why does the output differ? I tried to figure this out by looking it up in the C# spec, but with no luck. The IEEE specification does not give strict guidelines on how to implement the operations themselves (add, subtract, multiply), only on the representation of the data - it should be stored in sign/exponent/mantissa form. So I believe the operation implementations differ - maybe due to some optimization - but that would be odd, because the decimal type was 'invented' precisely so that a programmer can state explicitly that they want an exact answer, not just relative accuracy.
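One way to narrow this down (a sketch of my own, not something taken from either spec) is to compare the raw IEEE-754 bit patterns of the two results: if Java and C# produce the same 64-bit pattern for 1.03 - .42, then the subtraction is identical and only the default string formatting differs. A Java-side check could look like this; the C# counterpart would be BitConverter.DoubleToInt64Bits on the same expression.

import java.util.Locale;

public class DoubleBitsCheck {
    public static void main(String[] args) {
        double d = 1.03 - .42;

        // Raw 64-bit IEEE-754 pattern of the result, as hex
        System.out.println(Long.toHexString(Double.doubleToLongBits(d)));

        // Java's default toString: the shortest decimal string that round-trips
        System.out.println(d);

        // Force 17 significant digits to see the full stored value
        System.out.printf(Locale.ROOT, "%.17g%n", d);
    }
}

Comparing the hex pattern with what the C# side reports would show whether the subtraction itself differs between the runtimes or only the default formatting of the result does.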