Hacker News

0.30000000000000004 (0.30000000000000004.com)

768 points by beznet, posted 7 months ago | 50 comments

mcv said 7 months ago:

The big issue here is what you're going to use your numbers for. If you're going to do a lot of fast floating point operations for something like graphics or neural networks, these errors are fine. Speed is more important than exact accuracy.

If you're handling money, or numbers representing some other real, important concern where accuracy matters (most likely any number you intend to show to the user as a number), floats are not what you need.

Back when I started using Groovy, I was very pleased to discover that Groovy's default decimal number literal was translated to a BigDecimal rather than a float. For any sort of website, 9 times out of 10, that's what you need.

I'd really appreciate it if Javascript had a native decimal number type like that.
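
Until then, a third-party library like decimal.js (mentioned elsewhere in this thread) fills the gap. A minimal sketch, constructing from strings so the values never pass through a binary float:

  import Decimal from "decimal.js";

  new Decimal("0.1").plus("0.2").eq("0.3");  // true: exact decimal arithmetic
  0.1 + 0.2 === 0.3;                         // false with native binary floats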

umanwizard said 7 months ago:

Decimal numbers are not conceptually any more or less exact than binary numbers. For example, you can't represent 1/3 exactly in decimal, just like you can't represent 1/5 exactly in binary.

When handling money, we care about faithfully reproducing the human-centric quirks of decimal numbers, not "being more accurate". There's no reason in principle to regard a system that can't represent 1/3 as being fundamentally more accurate because it happens to be able to represent 1/5.
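
A quick JS illustration of the point (my example): exactness depends on which fractions terminate in the base, not on the base being decimal.

  0.5 + 0.25 === 0.75;  // true: 1/2 and 1/4 terminate in base 2
  0.1 + 0.2 === 0.3;    // false: 1/10 and 1/5 repeat in base 2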

dspillett said 7 months ago:

MS Excel tries to be clever and disguise the most common places this is noticed.

Give it =0.1+0.2-0.3 and it will see what you are trying to do and return 0.

Give it anything slightly more complicated, such as =(0.1+0.2-0.3), and the special-casing won't trip, in this example displaying 5.55112E-17 or similar.
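
For comparison, the same arithmetic in a JS console, where nothing papers over the residue:

  0.1 + 0.2 - 0.3;  // 5.551115123125783e-17, the value Excel displays
                    // once its special-casing doesn't fire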

brundolf said 7 months ago:

I remember in college when we learned about this and I had the thought, "Why don't we just store the numerator and denominator?", and threw together a little C++ class complete with (then novel, to me) operator-overloads, which implemented the concept. I felt very proud of myself. Then years later I learned that it's a thing people actually use: https://en.wikipedia.org/wiki/Rational_data_type
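
A toy version of the same idea in JS (a sketch, not a library): store the numerator and denominator as BigInts and reduce by the gcd.

  const gcd = (a, b) => (b === 0n ? (a < 0n ? -a : a) : gcd(b, a % b));

  class Rational {
    constructor(num, den) {
      const g = gcd(num, den);
      this.num = num / g;
      this.den = den / g;
    }
    add(other) {
      // a/b + c/d = (ad + cb) / bd, reduced by the constructor
      return new Rational(this.num * other.den + other.num * this.den,
                          this.den * other.den);
    }
    eq(other) {
      return this.num === other.num && this.den === other.den;
    }
  }

  const a = new Rational(1n, 10n), b = new Rational(2n, 10n);
  a.add(b).eq(new Rational(3n, 10n));  // true: 1/10 + 2/10 is exactly 3/10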

mark-r said 7 months ago:

Also the subject of one of the most popular questions on StackOverflow: https://stackoverflow.com/q/588004/5987

lordnacho said 7 months ago:

While it's true that floating point has its limitations, this stuff about not using it for money seems overblown to me. I've worked in finance for many years, and it really doesn't matter that much. There are de minimis clauses in contracts that basically say "forget about the fractions of a cent". Of course it might still trip up your position checking code, but that's easily fixed with a tiny tolerance.
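
The "tiny tolerance" fix, sketched in JS (the threshold here is an arbitrary stand-in for whatever de minimis level applies):

  const closeEnough = (a, b, tol = 1e-9) => Math.abs(a - b) < tol;

  closeEnough(0.1 + 0.2, 0.3);  // true: the sub-cent dust is ignored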

GuB-42 said 7 months ago:

That's one of the worst domain names ever. When the topic comes up, I always remember "that single-serving website with a domain name that looks like a number" and then take a surprisingly long time searching for it.

I have written a test framework and I am quite familiar with these problems, and comparing floating point numbers is a PITA. I had users complaining that 0.3 is not 0.3.

The code managing these comparisons turned out to be more complex than expected. The idea is that values are represented as ranges, so, for example, the IEEE-754 "0.3" is represented as ]0.299~, 0.300~[ which makes it equal to a true 0.3, because 0.3 is within that range.
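
One way to sketch that range idea in JS (not the framework's actual code, and the ULP estimate below is only good to a factor of two for normal numbers): treat each double as covering half a ULP on either side, and call two values equal when their ranges touch.

  const ulp = (x) => Math.abs(x) * Number.EPSILON;  // spacing of doubles near x

  const rangesOverlap = (a, b) =>
    Math.abs(a - b) <= (ulp(a) + ulp(b)) / 2;

  0.1 + 0.2 === 0.3;              // false: the values are one rounding step apart
  rangesOverlap(0.1 + 0.2, 0.3);  // true: their ranges overlap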

mc3 said 7 months ago:

This is a good thing to be aware of.

Also the "field" of floating point numbers is not commutative†, (can run on JS console:)

x=0;for (let i=0; i<10000; i++) { x+=0.0000000000000000001; }; x+=1

--> 1.000000000000001

x=1;for (let i=0; i<10000; i++) { x+=0.0000000000000000001; };

--> 1

Although most of the time a+b===b+a can be relied on. And for most of the stuff we do on the web it's fine!††

† edit: Please s/commutative/associative/, thanks for the comments below.

†† edit: that's wrong! Replace with (a+b)+c === a+(b+c)
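
The same non-associativity in one line:

  (0.1 + 0.2) + 0.3 === 0.1 + (0.2 + 0.3);  // false: 0.6000000000000001 vs 0.6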

maxdamantus said 7 months ago:

I feel like it should really be emphasised that the reason this occurs is due to a mismatch between binary exponentiation and decimal exponentiation.

0.1 = 1 × 10^-1, but there is no integer significand s and integer exponent e such that 0.1 = s × 2^e.

When this issue comes up, people seem to often talk about fixing it by using decimal floats or fixed-point numbers (using some 10^x divisor). If you change the base, you solve the problem of representing 0.1, but whatever base you choose, you're going to have unrepresentable rationals. Base 2 fails to represent 1/10 just as base 10 fails to represent 1/3. All you're doing by using something based around the number 10 is supporting numbers that we expect to be able to write on paper, not solving some fundamental issue of number representation.

Also, binary-coded decimal is irrelevant. The thing you're wanting to change is which base is used, not how any integers are represented in memory.
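
You can see the repeating binary pattern directly in JS, since toString(2) prints the stored double's exact (finite) binary expansion:

  (0.5).toString(2);  // "0.1": 1 × 2^-1, exactly representable
  (0.1).toString(2);  // "0.00011001100110011...": the 0011 pattern repeats
                      // until the 53-bit significand cuts it off and rounds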

ufo said 7 months ago:

One small tip about printf for floating point numbers. In addition to "%f", you can also print them using "%g". While the precision specifier in %f refers to digits after the decimal point, in %g it refers to the number of significant digits. The %g version is also allowed to use exponential notation, which often results in more pleasant-looking output than %f.

   printf("%.4g", 1.125e10) --> 1.125e+10
   printf("%.4f", 1.125e10) --> 11250000000.0000
amyjess said 7 months ago:

One of my favorite things about Perl 6 is that decimal-looking literals are stored as rationals. If you actually want a float, you have to use scientific notation.

Edit: Oh wait, it's listed in the main article under Raku. Forgot about the name change.

lelf said 7 months ago:

That’s only formatting.

The other (and more important) matter, which is not even mentioned, is comparison. E.g. in "rational by default in this specific case" languages (Perl 6),

  > 0.1+0.2==0.3
  True
Or APL (numbers are floats there, but comparison is special):

      0.1+0.2
  0.3
      ⎕PP←20 ⋄ 0.1+0.2
  0.30000000000000004
      (0.1+0.2) ≡ 0.3
  1
DonHopkins said 7 months ago:

The runner up for length is FORTRAN with: 0.300000000000000000000000000000000039

And the length (but not value) winner is Go with: 0.299999999999999988897769753748434595763683319091796875

jonny_eh said 7 months ago:

> It's actually pretty simple

The explanation then goes on to be very complex. e.g. "it can only express fractions that use a prime factor of the base".

Please don't say things like this when explaining things to people; it makes them feel stupid if it doesn't click on the first explanation.

I suggest instead "It's actually rather interesting".

garyclarke27 said 7 months ago:

PostgreSQL figured this out many years ago with their Decimal/Numeric type. It can handle any size number and it performs fractional arithmetic perfectly accurately; how amazing for the 21st century! It's comically tragic to me that all of the mainstream programming languages are still so far behind, so primitive that they don't have a native accurate number type that can handle fractions.

Ididntdothis said 7 months ago:

I still remember when I encountered this and nobody else in the office knew about it either. We speculated about broken CPUs and compilers until somebody found a newsgroup post that explained everything. Makes me wonder why we haven't switched to a better floating point model in the last few decades. It would probably be slower, but a lot of problems could be avoided.

combatentropy said 7 months ago:

In JavaScript, you could use a library like decimal.js. For simple situations, could you not just convert the final result to a precision of 15 or less?

  > 0.1 + 0.2;
  < 0.30000000000000004

  > (0.1 + 0.2).toPrecision(15);
  < "0.300000000000000"
From Wikipedia: "If a decimal string with at most 15 significant digits is converted to IEEE 754 double-precision representation, and then converted back to a decimal string with the same number of digits, the final result should match the original string." --- https://en.wikipedia.org/wiki/Double-precision_floating-poin...
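
And converting the trimmed string back gives a number that compares as expected:

  Number((0.1 + 0.2).toPrecision(15)) === 0.3;  // true
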
ChuckMcM said 7 months ago:

That is why I only used base 2310 for my floating point numbers :-). FWIW there are some really interesting decimal format floating point libraries out there (see http://speleotrove.com/decimal/ and https://github.com/MARTIMM/Decimal) and the early computers had decimal as a native type (https://en.wikipedia.org/wiki/Decimal_computer#Early_compute...)

skohan said 7 months ago:

This is part of the reason for Swift Numerics, which is helping to make numerical computing in Swift much nicer.

https://swift.org/blog/numerics/

gowld said 7 months ago:

This is a great shibboleth for identifying mature programmers who understand the complexity of computers, vs arrogant people who wonder aloud how systems developers and language designers could get such a "simple" thing wrong.

dunham said 7 months ago:

Interesting, I searched for "1.2-1.0" on google. The calculator comes up and it briefly flashes 0.19999999999999996 (and no calculator buttons) before changing to 0.2. This happens inconsistently on reload.

YeGoblynQueenne said 7 months ago:

SWI-Prolog (listed in the article) also supports rationals:

  ?- A is rationalize(0.1 + 0.2), format('~50f~n', [A]).
  0.30000000000000000000000000000000000000000000000000
  A = 3 rdiv 10.
okennedy said 7 months ago:

This specific issue nearly drove me insane trying to debug a SQL -> C++/Scala/OCaml transpiler years ago. We were using the TPC-H benchmark as part of our test suite, and (unbeknownst to me), the validation parameters for one of the queries (Q6) triggered this behavior (0.6+0.1 != 0.7), but only in the C/Scala targets. OCaml (around which we had built most of our debugging infrastructure) handled the math correctly...

Fun times.
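
The trigger, in one line of JS:

  0.6 + 0.1 === 0.7;  // false: the sum lands on 0.7000000000000001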

goosehonk said 7 months ago:

When did RFC1035 get thrown under the bus? According to it, with respect to domain name labels, "They must start with a letter" (2.3.1).

dec0dedab0de said 7 months ago:

I wish high level languages (specifically python) would default to using decimal, and only use a float when cast specifically. From what I understand that would make things slower, but as a higher level language you're already making the trade of running things slower to be easier to understand.

That said, it's one of my favorite trivia gotchas.

mytailorisrich said 7 months ago:

Fixed-point calculations seem to be somewhat of a lost art these days.

It used to be widespread because floating point processors were rare and any floating point computation was costly.

That's no longer the case, and everyone seems to immediately use floating point arithmetic without being fully aware of the limitations and/or without considering the precision needed.
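
A minimal fixed-point sketch in JS (my own illustration, assuming two fractional digits kept as BigInt hundredths): addition is exact integer addition, and multiplication needs a rescale.

  const SCALE = 100n;  // 2 fractional digits, BigInt to dodge the 2^53 limit

  const add = (a, b) => a + b;            // exact: plain integer addition
  const mul = (a, b) => (a * b) / SCALE;  // rescale; BigInt division truncates

  const price = 1999n;             // 19.99
  const total = mul(price, 300n);  // 3.00 × 19.99 = 5997n, i.e. exactly 59.97

  `${total / SCALE}.${(total % SCALE).toString().padStart(2, "0")}`;  // "59.97"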

qwerty456127 said 7 months ago:

As soon as I started developing real-life business apps, I started to dream about POWER, which is said to have hardware decimal type support. Java's BigDecimal solves the problem on x86, but it is at least an order of magnitude slower than FPU-accelerated types.

povik said 7 months ago:

In the Go example, can someone explain the difference between the first and the last case?

tus88 said 7 months ago:

Mods: Can we have a top level menu option called "Floating point explained"?

gumby said 7 months ago:

Not surprisingly, Common Lisp gets it right. I don't mean this as snark (I don't mean to imply you are a weenie if you don't use Lisp), but just to show that it picked a different region of the language design space.

thanatropism said 7 months ago:

Computer languages should default to fixed-precision decimals and offer floats with special syntax (e.g. “0.1f32”).

The status quo is that even Excel defaults to floats and wrong calculations with dollars and cents are widespread.

Waterluvian said 7 months ago:

The thing that surprised me the most (because I never learned any of this in school) was not just the lack of precision to represent some numbers, but that precision falls off a cliff for very large numbers.
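
The cliff is easy to demo in JS; above 2^53, even consecutive integers stop being representable:

  2 ** 53 === 2 ** 53 + 1;  // true: 9007199254740993 rounds back down
  2 ** 53 + 2;              // 9007199254740994: the spacing is now 2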

alberth said 7 months ago:

TL;DR: 0.1 in base 2 (binary) is the equivalent of 1/3 in base 10, meaning it's a repeating fraction that causes rounding issues (like 0.333333 repeating).

This is why you should never test "does X == 0.1": it might not evaluate accurately.

0xDEEPFAC said 7 months ago:

Whoo go Ada, one of the few to get it right. Must be the goto for secure programming for a reason.

Take that Rust and C ; )

bluetwo said 7 months ago:

Happy to see ColdFusion doing it right. Also, good for Julia for having the support for fractions.

cogburnd02 said 7 months ago:

I love how Awk, bc, and dc all DTRT. I wonder what postscript(/Ghostscript?) does.

xkriva11 said 7 months ago:

For Smalltalk, the list is not complete; it has scaled decimals and fractions too: 0.1s + 0.2s = 0.3s, and (1/10) + (2/10) = (3/10).

adamc said 7 months ago:

Those Babylonians were ahead of their time.

threatofrain said 7 months ago:

Use Int types for programming logic.

cellular said 7 months ago:

Why is D different than the rest?!

mttpgn said 7 months ago:

bc actually computes this correctly, and returns 0.3 for 0.1 + 0.2

idonotknowwhy said 7 months ago:

This has been posted here many times before. It even got mocked on n-gate in 2017 http://n-gate.com/hackernews/2017/04/07/

edisonjoao said 7 months ago:

lol what

beckerdo said 7 months ago:

Please check some of the online papers on Posit numbers and Unum computing, especially by John Gustafson. In general, Unums can represent more numbers, with less rounding and fewer exceptions, than floating point. Many software and hardware vendors are starting to do interesting work with Posits.

pmarreck said 7 months ago:

IEEE floating-point is disgusting. The non-determinism and the illusion of accuracy are just wrong.

I use integer or fixed-point decimal if at all possible. If the algorithm needs floats, I convert it to work with integer or fixed-point decimal instead. (Or if possible, I see the decimal point as a "rendering concern" and just do the math in integers and leave the view to put the decimal by whatever my selected precision is.)
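
The "rendering concern" approach sketched in JS: all arithmetic stays in integer cents, and the decimal point only appears at display time.

  const cents = 1050 + 2095;                          // 3145, exact integer math
  const display = (c) => `$${(c / 100).toFixed(2)}`;  // point added only here
  display(cents);                                     // "$31.45"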