Showing posts with label math. Show all posts

Friday, September 8, 2017

Floating point arithmetic in the 21st century

I really hope that the entire concept will be dropped one day. The current approach was useful 30 years ago; now we have much more memory and could do it in a much better way. And yet, look at JavaScript:
0.1 + 0.2 === 0.3 //false
(0.1 + 0.2) + 0.3 === 0.1 + (0.2 + 0.3) //false


It is all because of how 0.1 is represented in the IEEE 754 standard.
s eeeeeeee mmmmmmmmmmmmmmmmmmmmmmm    1/n
0 01111011 10011001100110011001101
           |  ||  ||  ||  ||  || +- 8388608
           |  ||  ||  ||  ||  |+--- 2097152
           |  ||  ||  ||  ||  +---- 1048576
           |  ||  ||  ||  |+-------  131072
           |  ||  ||  ||  +--------   65536
           |  ||  ||  |+-----------    8192
           |  ||  ||  +------------    4096
           |  ||  |+---------------     512
           |  ||  +----------------     256
           |  |+-------------------      32
           |  +--------------------      16
           +-----------------------       2
The sign is positive, that's pretty easy.

The exponent bits give 64+32+16+8+2+1 = 123; subtracting the bias of 127 yields -4, so the multiplier is 2^-4, or 1/16.

The mantissa is chunky. It consists of 1 (the implicit leading bit) plus, for every set bit, a term worth 1/2^n, where n starts at 1 and increases to the right: {1/2, 1/16, 1/32, 1/256, 1/512, 1/4096, 1/8192, 1/65536, 1/131072, 1/1048576, 1/2097152, 1/8388608}.

When you add all these up, you get 1.60000002384185791015625.

When you multiply that by the multiplier, you get 0.100000001490116119384765625, which is why they say you cannot represent 0.1 exactly as an IEEE754 float.

Friday, January 15, 2016

XOR swap - or how to swap two numbers

It used to be explained in several books I read at university, but I do not see a clear, easy-to-follow explanation on the internet.

Let's say that we have two variables:
x = 5
y = 7
and we want to swap the values of these variables without using an additional one. We can achieve it using simple arithmetic operations:
x := x + y = 5 + 7 = 12
y := x - y = 12 - 7 = 5
x := x - y = 12 - 5 = 7
But there is a different technique that is a more natural fit for computers: since all data is kept in binary, it makes sense to take advantage of the simple XOR operation to do the work.
x= 101 in binary = 5 in decimal
y= 111 in binary = 7 in decimal


x=101
y=111
------ x xor y
x=010

x=010
y=111
------ x xor y
y=101 = 5 in decimal

x=010
y=101
------ x xor y
x=111 = 7 in decimal
One more note: you don't want to write this in real code, because the compiler should optimize the swap for you — even when you use an additional variable, the compiler may well use the XOR technique (or registers) to do the job.

Monday, May 20, 2013

Monday, December 17, 2012

RSA commonly uses keys of 1024, 2048, or even 3072 bits, while most symmetric algorithms use keys of only 112 to 256 bits.

The ultimate question is: should I use a longer key? And ladies and gentlemen, here is an answer from Bruce Schneier's book:

Longer key lengths are better, but only up to a point. AES will have 128-bit, 192-bit, and 256-bit key lengths. This is far longer than needed for the foreseeable future. In fact, we cannot even imagine a world where 256-bit brute force searches are possible. It requires some fundamental breakthroughs in physics and our understanding of the universe.

One of the consequences of the second law of thermodynamics is that a certain amount of energy is necessary to represent information. To record a single bit by changing the state of a system requires an amount of energy no less than kT, where T is the absolute temperature of the system and k is the Boltzmann constant. (Stick with me; the physics lesson is almost over.)

Given that k = 1.38 × 10^-16 erg/K, and that the ambient temperature of the universe is 3.2 Kelvin, an ideal computer running at 3.2 K would consume 4.4 × 10^-16 ergs every time it set or cleared a bit. To run a computer any colder than the cosmic background radiation would require extra energy to run a heat pump.

Now, the annual energy output of our sun is about 1.21 × 10^41 ergs. This is enough to power about 2.7 × 10^56 single bit changes on our ideal computer; enough state changes to put a 187-bit counter through all its values. If we built a Dyson sphere around the sun and captured all its energy for 32 years, without any loss, we could power a computer to count up to 2^192. Of course, it wouldn't have the energy left over to perform any useful calculations with this counter.

But that's just one star, and a measly one at that. A typical supernova releases something like 10^51 ergs. (About a hundred times as much energy would be released in the form of neutrinos, but let them go for now.) If all of this energy could be channeled into a single orgy of computation, a 219-bit counter could be cycled through all of its states.

These numbers have nothing to do with the technology of the devices; they are the maximums that thermodynamics will allow. And they strongly imply that brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space.


An excellent explanation

Wednesday, July 25, 2012

A good approximation of pi

Ah, yes, a really nice article about a Pi approximation.

Monday, April 2, 2012

Vacuous truth

Recently my dev team ran into a problem with LINQ operators on empty collections. As I found out, programmers are generally not aware of vacuous truth. Basic math, but still, many coders have never been exposed to this kind of problem, ech...
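The same trap exists in JavaScript, where `Array.prototype.every` plays the role of LINQ's `All()`; the LINQ behavior on empty collections is analogous:

```javascript
// Vacuous truth: a universally quantified statement over an empty set is true.
const empty = [];
console.log(empty.every(n => n > 10)); // true  — no counterexample exists
console.log(empty.some(n => n > 10));  // false — no witness exists either
```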

Thursday, February 2, 2012

Major unsolved problems in CS

There is a nice discussion going on at Stack Exchange about major unsolved CS problems, so if you are a math fan (like me), it is something to tackle in your free time.

Tuesday, December 6, 2011

Excellent article about BK-Trees

I love it when ideas are presented in a simple way. BK-trees are well-known structures, but in everyday work as a typical programmer, when I need to implement a more advanced search than MS SQL or SolrNet (Lucene) allows, I tend to do it the simple way - by combining the results of several sub-queries. This approach makes me forget about rarely used data structures in my day job. So when I find an article that is easy to follow, light in form, and cool in the subject it touches, I simply love it.

Thursday, June 23, 2011

The world of floating points

From time to time in the world of computer science a question is raised that at first look is hard to figure out. I remember a few years ago there was a big fuss about an incorrect answer to a really simple algebra problem in MS Excel. Usually this class of problems deals with math basics that every computer scientist knows (at least I hope so), but the knowledge is really rusty - not often used, so the answer doesn't come to mind. A day ago a great question was raised about floating point arithmetic; at first look I wouldn't even have guessed that it was because we might be dealing with floating points. But of course the answer is correct, and what's more, it leads to a great article, a good read.

Friday, April 1, 2011

P vs NP problem

There are some problems that are not easy to solve. One of them is the P vs NP problem. Recently I read a great CACM paper describing the history of the problem, its context, strategies for attacking it, and what it would mean to solve it. A good read.