Addition is one of those things that does work pretty predictably and error-free with floats. The problem with 0.30000000000000004 etc. is usually that the things you are adding are not what you expect (the float 0.1 is not exactly 1/10), i.e. the difficulty is usually in string<->float conversions rather than in float arithmetic itself.
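To make the conversion point concrete, a minimal Python sketch (the `decimal` module is used here only to display the exact binary value behind the literal):

```python
from decimal import Decimal

# Decimal(float) shows the exact value the literal 0.1 actually stores:
print(Decimal(0.1))      # 0.1000000000000000055511151231257827021181583404541015625

# The classic surprise comes from the conversions at the edges, not the addition:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```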
Even that error is a problem. A single addition might be fine, but do thousands or millions of additions and those tiny errors add up to something appreciable.
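For instance, summing "ten cents" a million times in Python (Decimal shown alongside for contrast):

```python
from decimal import Decimal

total_float = 0.0
total_decimal = Decimal("0")
for _ in range(1_000_000):
    total_float += 0.1
    total_decimal += Decimal("0.1")

print(total_decimal)            # exactly 100000.0
print(total_float)              # slightly off: the per-addition error has accumulated
print(total_float == 100000.0)  # False
```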
If you’re doing money operations at a scale where the computational difference between a true arbitrary-precision number type and floats is worth thinking about, then you’re also in the territory where tiny floating point errors really stack up into a problem.
As a consequence, there are very few scenarios where using floats for money actually makes sense. Either your computation is so simple that there’s no real benefit to using floats rather than arbitrary-precision number types, or your computation is so large that floating point errors sum to meaningful amounts.
The only scenario I can think of where floats might be the right choice is on a tiny embedded system where computational power and memory are very limited, and working with arbitrary-precision types is a real problem. But in that scenario, you probably don’t need to be educated on the issues with floats. If you’re the kind of person who is unsure whether you should use floats for money, then the answer is almost certainly a resounding “No. Do not use floats for money values”.
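For completeness, the usual alternative in Python is the `decimal` module. A sketch (the 8.25% tax rate is just an illustrative number, and the rounding rule is a placeholder for whatever your jurisdiction actually requires):

```python
from decimal import Decimal, ROUND_HALF_UP

# Build money values from strings, never from float literals,
# so the stored value is exactly what was typed.
price = Decimal("19.99")
subtotal = price * 3                           # exact: 59.97
tax = (subtotal * Decimal("0.0825")).quantize(
    Decimal("0.01"), rounding=ROUND_HALF_UP)   # round to whole cents, explicit rule
print(subtotal + tax)                          # 64.92
```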
I would argue the more correct answer is that if you don't know what maths to use for money (including e.g. any legal rules about how you do it), you shouldn't be doing maths on money.
That strikes me as an unnecessarily elitist answer that holds us back.
I'm hardly a mathematician, or even college educated, but AFAICT this all boils down to the fact that I can type a number into the computer, and it can't represent it exactly internally, so it misrepresents it, silently.
Where I come from, that's called "a bug", regardless of cause.
Non-mathematicians (and even non-accountants and non-financiers and the like) have to math money all the time. They do it in daily life. Some of them even write programs to do it, because they know enough about computers to do that.
They don't expect that their expensive smartphone is going to screw up the calculation due to some esoteric representational reason that they need four or eight years of college to be aware of, let alone to understand or explain.
And they shouldn't need to!
I would argue that if computers can't do the job correctly in every case without the user jumping through hoops, then we should be continuing to develop methods to make it better.
That's not what I'm talking about. There are legal frameworks about how to compute something (which I only vaguely know exist, so I know enough that I personally should not be implementing any of this without significant outside input and expertise), and if you do it wrong we get deaths. This has been a major issue in at least two countries (the UK with the Post Office and Australia with Robodebt). I'd argue it's more elitist to think "I can program numbers into a computer so I can do anything involving numbers", rather than referring to non-software-developer experts.
On needing years in college to learn this: the "weirdness" of floats should be covered (it was for me) in high school science, and is also drummed in during first-year science labs. Any time you actually need to work with numbers (rather than, say, checking whether a group is abelian), you're dealing with how to compute, and these rules predate electronics.
That’s because you haven’t looked at what floats are designed for. They were created for high-performance scientific computing, so they quite deliberately, and explicitly, trade off perfect accuracy for much greater performance.
Computers are perfectly good at providing arbitrary-precision numbers and doing arithmetic on them, and most languages expose explicit number types for that purpose. But there’s a reason why floats are called floats, and not numbers. It’s because floats aren’t numbers (at least not base-10 numbers)! They’re a pretty accurate, but highly performant, binary approximation of base-10 numbers.
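In Python, for example, the explicit exact types are `fractions.Fraction` and `decimal.Decimal`:

```python
from fractions import Fraction

tenth = Fraction(1, 10)
print(sum([tenth] * 10) == 1)  # True: ten exact tenths make exactly one

# Fraction(float) exposes the binary fraction behind the literal 0.1 —
# a huge power-of-two denominator, not 1/10:
print(Fraction(0.1))
print(Fraction(0.1) == Fraction(1, 10))  # False
```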
String->float conversion is a conversion from the accounting realm into a computer abstraction. If the conversion is not accurate and the results cannot be converted back into the accounting realm without artifacts, the use of such computation is problematic.
The conversion is accurate up to 15 significant digits. Round-tripping through floats should not cause artifacts if you know what you are doing and stay within that 15-digit bound at all times, and I believe the same applies to all basic arithmetic operations. So the question is: what are actually the cases where floats introduce "artifacts"?
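That bound can be probed directly. A Python sketch, assuming IEEE 754 doubles (where any decimal string with at most 15 significant digits maps to a distinct double):

```python
# A 15-significant-digit decimal string survives string -> float -> string:
s = "0.123456789012345"
print("%.15g" % float(s) == s)  # True

# Beyond the bound, distinct decimal strings collapse onto one double:
print(float("0.10000000000000000001") == float("0.1"))  # True

# And arithmetic can push a result past the bound:
print(0.1 + 0.2)  # 0.30000000000000004 — needs 17 digits to round-trip
```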