## Thursday, July 25, 2013

### Decimal and Double once again

Yesterday I had a discussion with a co-worker about primitive types. I suggested that he should use decimal instead of double, otherwise the results of his computations might be misleading or incorrect. It is a booking application, and showing a correct price (fee, cost) is crucial. His argument was: "Double is a floating point type and decimal is not; that's why the precision kept in double is bigger and it is safer to use it." Most of that sentence is not true! Decimal is also a floating point type, just with base 10 instead of the base 2 that double has. First, have a look at the canonical floating point equation, taken from Wikipedia:
```
value = (-1)^sign × mantissa × base^exponent
```

It explains what the exponent and mantissa are. Because double and decimal have a fixed size in memory, the C# language designers had to decide how many bits to spend on the mantissa and how many on the exponent. For C# it works out as described below:
```
A double represents a number of the form +/- (1 + F / 2^52) x 2^(E-1023), where F is a 52 bit unsigned integer and E is an 11 bit unsigned integer; that makes 63 bits, and the remaining bit is the sign: zero for positive, one for negative.

A decimal represents a number in the form +/- V / 10^X, where V is a 96 bit unsigned integer and X is an integer between 0 and 28.
```
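The double layout quoted above is just the IEEE 754 binary64 format, which C#'s double shares with most other languages, so the sign, E, and F fields can be inspected directly. A minimal sketch in Python (the helper name `double_parts` is mine, not from any library):

```python
import struct

def double_parts(x: float):
    # Pack the double into 8 bytes and reinterpret them as a 64-bit integer.
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63                  # 1 sign bit
    exponent = (bits >> 52) & 0x7FF    # 11-bit E, biased by 1023
    fraction = bits & ((1 << 52) - 1)  # 52-bit F
    return sign, exponent, fraction

# 1.0 = (1 + 0/2^52) x 2^(1023-1023), so sign=0, E=1023, F=0
print(double_parts(1.0))
```

Running it on a few values makes the "(1 + F / 2^52) x 2^(E-1023)" formula concrete: for `-2.0` you get sign 1, E 1024, F 0.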

It all means that decimal, thanks to its base-10 representation, can store decimal fractions like 0.1 exactly, while a base-2 double can only approximate them. That is why decimal is much safer for correct monetary computations than double.
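The difference is easy to demonstrate. Here is an illustration using Python's decimal module, a base-10 floating point type much like C#'s decimal (though arbitrary-precision rather than 96-bit): 0.1 has no finite base-2 expansion, so a double stores only the nearest representable value, and the error accumulates.

```python
from decimal import Decimal

# In binary floating point, 0.1 is stored approximately,
# so repeatedly adding it drifts away from the exact answer.
total = sum(0.1 for _ in range(10))
print(total)   # 0.9999999999999999, not 1.0

# Base-10 floating point stores 0.10 exactly, so the sum is exact too.
exact = sum(Decimal("0.10") for _ in range(10))
print(exact)   # 1.00
```

Ten bookings with a 0.10 fee should total exactly 1.00; with double, a naive comparison against 1.0 would already fail.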