Sunday, June 26, 2011

Floating vs Double vs Decimal in C#

For the last few days I have been reading the Java Puzzlers book (the older edition). I am aware that decimal types should be used for money and other data that needs to be calculated precisely. I am testing the majority of the puzzles against C#, and I was surprised to find out that in Java:


System.out.println(1.03 - .42);

The output is

0.6100000000000001


In C#

Console.WriteLine((1.03F - .42F));
Console.WriteLine((1.03D - .42D));

The output is

0.61
0.61


The surprising bit is that in both languages float and double are implemented based on the IEEE-754 spec, so why does the output differ? I tried to figure this out by looking it up in the C# spec, but with no luck. As it turns out, the arithmetic is not the culprit: IEEE-754 does give strict guidelines for operations like add, subtract, and multiply (each must return the representable value nearest to the exact result), so both languages compute exactly the same double, which is slightly more than 0.61. The difference lies in the default conversion to a string. Java's Double.toString prints the shortest decimal string that round-trips back to the same double, hence 0.6100000000000001, while .NET's double.ToString rounds to 15 significant digits by default, which hides the error and prints 0.61. When a programmer is interested in a precise answer rather than relative accuracy, that is what the decimal type was 'invented' for.
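
To convince myself, here is a minimal sketch in Java (my own example, not one from the book): it prints the raw double, mimics the 15-digit rounding that .NET applied by default, and shows BigDecimal as Java's counterpart to C#'s decimal for exact results:

import java.math.BigDecimal;

public class FloatingOutput {
    public static void main(String[] args) {
        double d = 1.03 - 0.42;

        // Double.toString prints the shortest decimal string that
        // round-trips back to the same double:
        System.out.println(d);            // 0.6100000000000001

        // Rounding to 15 significant digits (roughly what .NET's default
        // double.ToString did) hides the representation error:
        System.out.printf("%.15g%n", d);  // 0.610000000000000

        // For an exact decimal result, the role played by C#'s decimal,
        // Java offers BigDecimal:
        BigDecimal exact = new BigDecimal("1.03").subtract(new BigDecimal("0.42"));
        System.out.println(exact);        // 0.61
    }
}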

Thursday, June 23, 2011

The world of floating points

From time to time in the world of computer science a question is raised that at first look is hard to figure out. I remember a few years ago there was a big fuss about an incorrect answer to a really simple algebra problem in MS Excel. Usually this class of problems comes down to some math basics that every computer scientist knows (at least I hope so), but the knowledge is really rusty, not often used, so the answer doesn't come to mind. A day ago a great question was raised dealing with floating-point arithmetic; at first look at the question I wouldn't even have guessed that floating points might be involved. But of course the answer is correct, and what's more, it leads to a great article, a good read.
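
The classic illustration of this class of problems (my own example, not the question from that post) is that simple decimal fractions have no exact binary representation:

public class FloatingSurprise {
    public static void main(String[] args) {
        double sum = 0.1 + 0.2;

        // Neither 0.1 nor 0.2 is exactly representable in binary,
        // so the sum is not exactly 0.3:
        System.out.println(sum);         // 0.30000000000000004
        System.out.println(sum == 0.3);  // false

        // The usual remedy: compare with a tolerance instead of ==.
        System.out.println(Math.abs(sum - 0.3) < 1e-9);  // true
    }
}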

Monday, June 20, 2011

Java Pitfall

Recently, I've been interested in some differences between C# and Java, mainly in how certain behaviours are implemented. Consider the following code:


public class OverloadPitfall {

    public static void test(Byte b) {
        System.out.println("byte");
        System.out.println(b);
    }

    public static void test(int b) {
        System.out.println("int");
        System.out.println(b);
    }

    public static void main(String[] args) {
        Byte bigByte = 13;      // already a boxed Byte: exact match for test(Byte)
        test(bigByte);

        byte smallByte = 14;    // primitive byte: widened to int rather than boxed
        test(smallByte);
    }
}



The output is:

byte
13
int
14


I did not expect that. Two mechanisms are at play in Java here: widening and autoboxing. A byte occupies 8 bits in memory while an int occupies 32 bits, so a byte value can always be widened to an int without loss. Autoboxing only arrived in Java 5.0, and to keep the behaviour of older code unchanged the compiler still prefers the pre-5.0 mechanism: it first looks for an overload reachable through widening (byte to int) and only falls back to autoboxing (byte to Byte) when no such overload exists. Maybe it should be a compile error instead, but the designers decided differently. That is why test(smallByte) resolves to test(int), while test(bigByte), whose argument is already a Byte, matches test(Byte) exactly.
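
To see that ordering at work, here is a small variation of the example above (my own sketch): with the test(int) overload removed, no widening candidate exists, so the compiler falls back to autoboxing and the primitive call lands in test(Byte):

public class OverloadPitfallBoxed {

    // Only the boxed overload exists now.
    public static void test(Byte b) {
        System.out.println("byte");
        System.out.println(b);
    }

    public static void main(String[] args) {
        byte smallByte = 14;
        test(smallByte);  // no test(int) to widen into, so 14 is boxed to Byte
    }
}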

I understand designers that added autboxing feature, and were trying to keep compatibility with older versions - in a behaviour - this is why Java 5.0 first tries to use widening mechanism and then autoboxing. What concerns me is that the code behaves unnaturally for a person without a knowledge about Java history (when what mechanism was added). In other words, because this code compiles, my first impression is that the method that takes as an argument Byte should be called when I send a message with a byte parameter, but the framework behaves differently.