Why can't floating point do money? It's a brilliant solution for speed of calculations in the computer, but how and why does moving the decimal point (well, in this case binary or radix point) help and how does it get currency so wrong?
3D Graphics Playlist: http://www.youtube.com/playlist?list=PLzH6n4zXuckrPkEUK5iMQrQyvj9Z6WCrm
The Trouble with Timezones: http://youtu.be/-5wpm-gesOY
More from Tom Scott: http://www.youtube.com/user/enyay and https://twitter.com/tomscott
This video was filmed and edited by Sean Riley.
Computerphile is a sister project to Brady Haran's Numberphile. See the full list of Brady's video projects at: http://bit.ly/bradychannels
Sorry to comment on a video from 4 years ago, but why don't modern compilers suffer from this anymore? For example, adding 0.20 and 0.10 in Visual Studio gives you 0.30 (0.3). Doesn't this mean the rounding error has been removed?
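The error hasn't actually been removed; the debugger and default formatters simply round the value for display. A quick Python sketch of the difference between what is stored and what is shown:

```python
# The stored sum is still off; default two-decimal formatting hides it.
a = 0.10 + 0.20

print(a == 0.30)   # False: the sum is not exactly 0.3
print(f"{a:.2f}")  # "0.30": rounded for display, error invisible
print(repr(a))     # "0.30000000000000004": the full stored value
```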
How I deal with this: I write the numbers into a string, then explode that into an array of characters and cast each to an int, then I do the calculation and return the result in float format. It has literally never failed me...
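For what it's worth, that digit-by-digit idea is essentially what decimal arithmetic libraries do for you; here is a sketch using Python's `decimal` module (note the values must be built from strings, not floats):

```python
from decimal import Decimal

# Decimal does base-10 arithmetic on digit strings, much like the
# hand-rolled approach above, so 0.1 + 0.2 comes out exact.
total = Decimal("0.1") + Decimal("0.2")
print(total)                     # 0.3
print(total == Decimal("0.3"))   # True
```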
That "1 MB per number" figure is entirely nonsensical. 256-bit fixed-precision numbers are already total overkill for dealing with both subatomic scales and the size of the universe, and modern CPUs have registers that can hold one of these. Sure, back in the day, the difference between 256 bits and 32 bits per number was crucial, but today it's a convenience thing, nothing more.
These rounding errors are quite obvious in systems that are highly sensitive to initial conditions (chaotic systems).
Basically to the point where if you run exactly the same program on a 32-bit CPU vs a 64-bit CPU, you will soon get completely different results.
For example, calculating the swing of a double pendulum.
Wow. I remember this from my early days of programming. Now I am learning programming in school, and those programming languages are so smart that they fix these errors for us. It makes me kind of sad to think that in the future these things will be done for us, and an understanding of this sort of thing is going to be obsolete.
It would be fantastic to see a video done on fixed point, which is the other way to solve the floating-point accuracy issues for numbers of some small fixed length, especially as you can store the decimal component as an integer and do some clever maths to fold the overflow back into the integer component. This is actually how programs like Excel solve the problem when you click the Currency button.
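A minimal fixed-point sketch (an illustration, not a claim about Excel's actual internals): keep money as an integer count of cents, so every addition and subtraction is exact, and convert to dollars only for display.

```python
# All arithmetic happens on exact integers; no binary fractions involved.
price_cents = 1999           # $19.99
tax_cents = 160              # $1.60
total_cents = price_cents + tax_cents

dollars, cents = divmod(total_cents, 100)
print(f"${dollars}.{cents:02d}")   # $21.59
```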
"32 bit computer" -> that's about addressing. They can handle 64 bit floating-point arithmetic.
You could build a computer that can do 128 bit floating-point arithmetic and uses only 16 bit for addressing. There certainly is some relation when it comes to CPU design and the floating-point unit size, register size and address size. But it's not really the same. It might even be handled by a coprocessor. And FPU stack registers are usually 80 bits (10 bytes) wide, not 64 bits. See fld, fild, and fbld in assembly.
The problem is of course once you need to compare numbers.
Especially for legal stuff related to stupid laws.
Suppose you must declare some stuff to the government if the total value is strictly more than 10,000 doubloons. When compliance is checked and you find a value of 10,000.0001, is that more than 10,000, or is it equal to 10,000 give or take the imprecision inherent in floating-point operations? And what precisely is your margin? Is it a static margin, or do you make it depend on the operations you had to perform to compute the total? (Since imprecision increases as more operations are performed.)
By the way, it helps to *not* think about this in terms of rounding errors but really in terms of imprecision. Rounding errors are made on top of that... Yes, ultimately they are the same thing, but conflating them doesn't help reasoning. It is easier to understand rounding as the base-10 rounding you do in the "real world", and imprecision as "the problem of the computer not being able to encode numbers perfectly".
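One common (if imperfect) answer to the margin question is to pick an explicit static tolerance. A sketch using Python's `math.isclose`, where the `1e-3` tolerance is an arbitrary assumption for illustration:

```python
import math

THRESHOLD = 10_000.0
total = 10_000.0001   # computed total, possibly carrying rounding error

# Static absolute margin: treat anything within EPS of the threshold as
# "equal", and only values clearly above it as "strictly more".
EPS = 1e-3
strictly_more = total > THRESHOLD + EPS
about_equal = math.isclose(total, THRESHOLD, abs_tol=EPS)

print(strictly_more)  # False: within the margin, so not declared
print(about_equal)    # True
```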
Thanks a lot for your explanation! Floating-point numbers are a very big problem in spreadsheet calculations that most people are not aware of. If you use a lookup or an IF function, the normal user expects that his numbers are correct, not slightly smaller or bigger. I always round values to cut the error off.
I remember the first time I experienced this. I was writing a Pac-Man clone, and I set Pac-Man's speed to be 0.2, where 1.0 would be the distance from one dot to another. Everything worked fine until I started coding the wall collisions, where Pac-Man keeps going straight ahead until hitting a wall, causing him to stop. The code checked to see if Pac-Man's location was a whole integer value, like 3.0, and if it was it would figure out if a wall had been hit. When I tested it, though, Pac-Man went straight through the walls. If I changed the speed to 0.25, though, it worked exactly as expected. I was baffled for a few moments, and then it hit me. Computers don't store decimal values the way you might first expect.
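That trap is easy to reproduce in a few lines; a sketch of the wall check, assuming positions accumulate by repeated addition as in the story above:

```python
# Adding 0.2 repeatedly never lands exactly on 3.0, so an equality check
# against whole numbers silently fails; 0.25 is an exact binary fraction
# (1/4) and accumulates with no error at all.
pos = 0.0
for _ in range(15):     # 15 steps of 0.2 "should" reach 3.0
    pos += 0.2
print(pos)              # close to 3.0, but not exactly
print(pos == 3.0)       # False: Pac-Man sails through the wall check

pos = 0.0
for _ in range(12):     # 12 steps of 0.25 reach 3.0 exactly
    pos += 0.25
print(pos == 3.0)       # True
```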
It's literally just a rounding error on the computer's end. By the way, I'm curious how calculators deal with this problem. For example, if you square the square root of two you should get 2, but of course the calculator can't know this, because it doesn't understand infinite decimal places. I assume they are just programmed with these special cases in mind.
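Whatever a given calculator does internally (many use extra guard digits or symbolic tricks to hide this), plain binary floats really do miss this case; a quick Python sketch:

```python
import math

# sqrt(2) is irrational, so the double holds a rounded approximation;
# squaring it lands a hair away from 2.
x = math.sqrt(2) ** 2
print(x)          # 2.0000000000000004
print(x == 2)     # False
```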
If you ever want to see the failure of precision of a 32-bit float in a video game, go watch kurtjmac's Far Lands or Bust. He has walked so far in Minecraft that his location on one axis has lost precision down to quarters of blocks! See especially episode 471, where he finds the boundary where the precision loss increases.
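Without speculating about Minecraft's internals, that quarter-block granularity is exactly what 32-bit float spacing predicts at coordinates in the millions. A sketch using a hypothetical `float32_ulp` helper that measures the gap between adjacent 32-bit floats:

```python
import struct

def float32_ulp(x: float) -> float:
    """Gap to the next representable 32-bit float above x (a sketch)."""
    # Round-trip x through a 32-bit float, then bump its bit pattern by one.
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    here = struct.unpack('<f', struct.pack('<I', bits))[0]
    above = struct.unpack('<f', struct.pack('<I', bits + 1))[0]
    return above - here

print(float32_ulp(100.0))        # ~0.0000076: plenty of precision near spawn
print(float32_ulp(3_000_000.0))  # 0.25: positions snap to quarter blocks
```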
This finally explains the strange results in a program I wrote in first year at uni. The program was meant to calculate change in the fewest possible coins, but whenever I had 10c left it would always give two 5c coins, and for the life of me I couldn't figure out why. Now I know. It also used to happen with the train ticket machines, so at least I'm not alone in making that error.
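A hypothetical reconstruction of that change-machine bug: a greedy coin loop comparing a float remainder against coin values will skip the 10c coin whenever the remainder lands a hair below 0.10.

```python
# The remainder "should" be 0.10 but lands just below it in binary.
change = 0.30 - 0.20
print(change)          # 0.09999999999999998
print(change >= 0.10)  # False: the 10c coin is skipped...
print(change >= 0.05)  # True: ...so two 5c coins come out instead
```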
After the start of the video, I'm like "Hm, really? That happens? Alright." So I opened up the Python shell (I just started learning Python) and typed in "0.1 + 0.2" and sure enough... It spat out "0.30000000000000004"