Why can’t floating point do money? It’s a brilliant solution for fast computation, but how and why does moving the decimal point (well, in this case the binary or radix point) help, and how does it get currency so wrong?

3D Graphics Playlist:

The Trouble with Timezones:

More from Tom Scott: and

This video was filmed and edited by Sean Riley.

Computerphile is a sister project to Brady Haran’s Numberphile. See the full list of Brady’s video projects at:


from philosophy to psychology to programming.

0.0. hello people

I was checking console.log(0.1 + 0.2 === 0.3) and it returned false, and then finally I ended up here

Simple and clear explanation.

Thank you so much sir for such an explanation that a 3 yr old can also understand :).

But I still have a question: 0.1 + 0.2 = 0.30000000000000004

But if I do 0.01 + 0.02 it gives me exactly 0.03

what's the reason behind that??

Just adding a zero gives me exactly the answer a human would calculate

Sir, please take me out of this confusion

Thank you!!!
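The question above is easy to check directly. A minimal sketch in Python (assuming CPython's default float printing): adding a zero doesn't fix floating point in general. It just happens that the rounding errors in 0.01 and 0.02 cancel, so their sum rounds to exactly the same double as 0.03, while 0.1 + 0.2 rounds to a double one step above 0.3.

```python
# Checking the commenter's observation with repr, which shows the
# shortest string that round-trips to the stored double.
print(repr(0.1 + 0.2))       # 0.30000000000000004
print(0.1 + 0.2 == 0.3)      # False
print(repr(0.01 + 0.02))     # 0.03
print(0.01 + 0.02 == 0.03)   # True -- the errors happen to cancel
```

So it is luck, not correctness: with other pairs of values the errors need not cancel, and the extra-zero trick fails again.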

Printf would like to know your location

Tom: Smaller than a molecule doesn't matter

Protein folding people: åååh, that's where you're wrong bud

Javascript math in a nutshell

Did anyone else have no idea what the hell "nought" was at the beginning of the video?

Finally I know why my floats sometimes behave like this!

I've seen those contractor's calculators that display rational numbers in fractional format. Is that a hardware or software solution? What does that format look like in the binary representation?
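Fraction display on those calculators is generally a software solution: the firmware keeps a numerator/denominator pair of integers rather than a binary fraction (an assumption about any particular model, but it is how rational arithmetic is normally implemented). Python's fractions module works the same way, so it serves as a sketch of the representation:

```python
from fractions import Fraction

# Exact rational arithmetic: numerator and denominator are stored as
# integers and reduced to lowest terms, so 1/3 really is one third.
third = Fraction(1, 3)
print(third + third + third)   # 1
print(Fraction(3, 12))         # 1/4 (automatically reduced)
```

In binary terms there is no radix point at all, just two integers, reduced to lowest terms after each operation.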

That's why you use Maple or Mathematica, or Wolfram Alpha… these systems understand symbolic math.

Awesome

No problems with that in COBOL using BCD numbers. Take that modern languages!
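COBOL's packed-decimal (BCD) fields store base-10 digits, so decimal fractions like 0.1 are exact. Python's decimal module is a rough modern analogue (not BCD internally, but the same base-10 idea):

```python
from decimal import Decimal

# Base-10 arithmetic: 0.1 and 0.2 are stored exactly as decimal digits,
# so their sum really is 0.3 and the classic float surprise disappears.
print(Decimal('0.1') + Decimal('0.2'))                     # 0.3
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))   # True
```

Note the string constructors: Decimal(0.1) would faithfully copy the float's binary rounding error instead of representing one tenth.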

How does one handle bitcoin operations…. 😂😅 let me disable that real quick

I've a lot to learn….

Must love programming, even after hours of classes, I still take in more programming information…😁

@3:14 spooky 👻

The problem with 0.1+0.2 not being equal to 0.3 is not that the 0.3 isn't accurate – it's the conversion of 0.1 to floating point storage and the conversion of 0.2 to floating point storage that loses the accuracy right there. The addition then just returns the correctly rounded sum of those stored values – the accuracy was already lost BEFORE the addition =)
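You can watch that conversion loss happen in Python: constructing a Decimal from a float converts the stored binary value exactly, so it prints the real number sitting in the variable before any arithmetic is done (a standard-library sketch):

```python
from decimal import Decimal

# Decimal(float) is an exact conversion of the stored double, so this
# reveals the error introduced at conversion time, not by the addition.
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.2))  # 0.200000000000000011102230246251565404236316680908203125
```

Both stored values are already slightly above the decimal numbers you typed, which is why their sum lands above 0.3.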

guess what problem i just had…. i hate this

I was only one minute into the video and you already answered my questions! I am not a specialist and literally had no idea what floating points are. After hours of searching, this is the first video that makes sense to me!! thanks

What if the universe is like a computer that uses floating-point numbers? What if the universe only stores values down to a certain, very large but finite, number of bits?

I've taken five semesters of calculus, chemistry, and several engineering and physics classes. I've used scientific notation for years…and this video is the first time I've heard an explanation for WHY scientific notation is used.

I hate recurring numbers, like your example adding 1/3s, because I hate that it doesn't work and it's not really right when we use decimals. Knew I was autistic or something; now I know I'm just a bit computer.

It's called floating-point. Why is nothing floating??

Great explanation man thanks for that!

Try coding a Maclaurin series on a 4-slice array processor which has NO division…

BTW, division is the most “expensive” simple arithmetic operation within any computer.

That's why we round numbers, so it comes back correct LOL, and floating point can be exact. Fractions are the PROBLEM. Genius, what school did he go to 😛 1/3 + 1/3 + 1/3 = 0.9999999…, but if you round it up to, let's say, 1 billion decimal accuracy it will still be 1 LOL

North?

That's Numberwang!

Hah, this reminds me of a programming exercise that I had to undertake in Algorithms 2. The teacher wanted us to calculate a continuous moving average for a set of values. Since the data requirement was so minimal, I decided to store the last n values in an array and cycle through them when new numbers appeared. When needed, the moving average was calculated by adding the numbers together and dividing by n.

My program would fail the automated test, because it failed to include the almost 3% error that the professor had gotten by updating a floating point average value for every step of the calculation. I had to explain to about 5 other students that their program was too accurate and needed to be downgraded.
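The two approaches in that story can be sketched like this (function names and sample data are mine, not the original assignment):

```python
# Buffer approach: keep the last n samples and recompute the mean fresh
# each time -- one rounded sum per query, minimal error accumulation.
def buffered_average(window):
    return sum(window) / len(window)

# Running approach: keep one float and nudge it each step. Every update
# adds a fresh rounding error on top of the previous ones, so the value
# can slowly drift from the true mean.
def running_average(values, n):
    avg = sum(values[:n]) / n
    for i in range(n, len(values)):
        avg += (values[i] - values[i - n]) / n   # incremental window update
    return avg

samples = [0.1 * (i % 7) for i in range(10_000)]
n = 100
exact = buffered_average(samples[-n:])
drifted = running_average(samples, n)
print(exact, drifted, abs(exact - drifted))
```

With double precision the drift here is tiny; a large error like the professor's 3% suggests lower precision or many more updates, but the mechanism is the same.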

3:48 I don't understand. Why does base 2 do fractions as one half, one quarter, etc. Why doesn't it do it as 0.1, 0.2, 0.4, 0.8 – the same as it does whole numbers?
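Base 2 does treat fractions exactly the way it treats whole numbers: every place is a power of the base. Left of the point the weights double (1, 2, 4, 8); right of the point they halve (1/2, 1/4, 1/8), just as decimal places right of the point are tenths and hundredths. A tiny sketch:

```python
# Reading the binary fraction 0.101: each bit's weight is half the
# previous one, mirroring how whole-number bit weights double.
bits = [1, 0, 1]                                     # digits after the point
value = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(bits))
print(value)   # 0.625, i.e. 1/2 + 1/8
```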

WONDERFUL

I'm just here to enjoy the accent

But wait… who the hell uses fractions in coding, first of all? Secondly, most floating point needs arise from computing money, and so yes, rounding is acceptable and necessary. I get the point about floating precision and computing limits, but it does have its place. Some say store money as cents… not sure how much that matters… it always comes back around to 2-decimal payouts
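The "store money as cents" advice matters because integer arithmetic is exact, while binary floats cannot represent most decimal fractions. A small sketch:

```python
# Dollars as floats vs. money as integer cents: the float sum picks up
# binary rounding error; the integer sum cannot.
float_total = 0.10 + 0.20
cent_total = 10 + 20
print(float_total == 0.30)   # False
print(cent_total == 30)      # True
print(f"${cent_total // 100}.{cent_total % 100:02d}")   # $0.30
```

With integer cents, rounding happens once, at a deliberate point (e.g. when applying a percentage), instead of silently on every operation.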

I prefer 3 bit floats. They have 1 sign bit, 1 exponent bit, and 1 significand bit. They can encode ±0, ±1, ±∞, and NaN, which is all you really need.
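For fun, the joke format above can actually be enumerated. Assuming IEEE-754 conventions with an exponent bias of 0 (an assumption; the comment doesn't pin one down): the all-ones exponent encodes infinity/NaN, and exponent 0 makes the single significand bit a subnormal worth 0 or 1.

```python
# All 8 encodings of a 1-1-1 minifloat, IEEE-style, bias 0 (assumed).
def decode(sign, exp, frac):
    s = -1.0 if sign else 1.0
    if exp == 1:                          # all-ones exponent: specials
        return float('nan') if frac else s * float('inf')
    # exponent 0: subnormal, value = sign * 0.f * 2**(1 - bias) = sign * frac
    return s * frac

values = [decode(s, e, f) for s in (0, 1) for e in (0, 1) for f in (0, 1)]
print(values)   # [0.0, 1.0, inf, nan, -0.0, -1.0, -inf, nan]
```

Exactly the promised ±0, ±1, ±∞, plus two NaN encodings.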

Luckily more modern number types solve this using the qualifier "and a little more".

Python 3.6.7

>>> 1 / 3 + 1 / 3 + 1 / 3
1.0

Thanks, Guido

It's not difficult to get your head around, right

I have a computer science degree from a top school, and yet nothing was ever explained nearly as well as this.

I love this YouTube channel. Absolutely brilliant explanation. Thank you!!

Rather use float than waste 4 bytes.

Exceptional vid – many thanks

Feel like his voice changed a bit. Less British or something lol. Also wow, I first subscribed to his own channel then I discovered this one. So what does he actually do???