Do you have a favourite example? How do you explain floating point inaccuracy to fresh programmers and laymen who still think computers are infinitely wise and accurate? The crux of the problem is that numbers are represented in this format as a sign, a fixed number of significant digits, and an exponent.
Binary floating point math works like this: in most programming languages, it is based on the IEEE 754 standard. So even if you write a tidy decimal such as 0.1, what gets stored is the nearest representable binary fraction, not the decimal value itself.
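A minimal sketch of that mismatch (Python here, though any language with IEEE 754 doubles behaves the same way):

```python
# 0.1 and 0.2 are each stored as the nearest binary fraction,
# so their sum is not exactly 0.3.
a = 0.1 + 0.2
print(a)         # 0.30000000000000004
print(a == 0.3)  # False

# Comparing with a tolerance is the usual workaround.
print(abs(a - 0.3) < 1e-9)  # True
```

The same surprise appears in ECMAScript, C, Java, and anything else built on IEEE 754 binary64.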
A value is stored as s × m × B^e, where B is a fixed base (a base determined by the format, not varying with the represented values), s is a sign, m is a significand with a fixed number of digits, and e is an exponent. Some numbers lose accuracy when stored as floating point numbers because a fraction that terminates in base 10 (like 0.1) repeats forever in base 2, so it has to be rounded to fit the significand. And also, any significant floating point program is likely to have significant integer work too, even if it's only calculating indices into arrays, loop counters, etc. Floating point numbers are more general purpose because they can represent very small or very large numbers in the same way, but there is a small penalty in having to do more work per arithmetic operation.
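To make the s × m × B^e decomposition concrete, here is a small Python sketch; math.frexp exposes the significand and exponent for base B = 2:

```python
import math
from decimal import Decimal

# Decompose a double into x = m * 2**e, with 0.5 <= |m| < 1.
# This makes the fixed base B = 2 explicit.
m, e = math.frexp(0.1)
print(m, e)      # 0.8 -3
print(m * 2**e)  # 0.1

# The exact value actually stored for "0.1" (53 significant bits):
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```

The last line shows why 0.1 "loses accuracy": the stored binary value is the closest double to 0.1, not 0.1 itself.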
You can't represent most decimal fractions exactly with binary floating point types (which is what ECMAScript uses to represent floating point values). However, if the result isn't a whole number, I always want to round the variable down, regardless of how close it is to the next integer.
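A sketch of that always-round-down behaviour using Python's math.floor (standing in for whatever floor or truncate routine your language provides):

```python
import math

# math.floor drops the fractional part, no matter how close the
# value is to the next whole number.
print(math.floor(7.25))  # 7
print(math.floor(7.99))  # 7
print(math.floor(8.0))   # 8  (whole numbers pass through unchanged)

# Watch out: binary representation error interacts with floor.
# (0.1 + 0.7) * 10 is stored as 7.999999999999999, so floor
# yields 7 where you might expect 8.
print(math.floor((0.1 + 0.7) * 10))  # 7
```

Because of that last pitfall, flooring a value that "should" be whole is a common source of off-by-one bugs; rounding to a few decimal places first can be safer when the input is conceptually decimal.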