Consider adding 0.1 to itself ten times, expecting the result to be exactly one. Groovy does this as expected:
0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 = 1.0
That is because a decimal literal like 0.1 is a BigDecimal in Groovy.
In Clojure, the example is not that nice:
(+ 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1) results in 0.9999999999999999.
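The root cause is binary floating point, not Clojure itself. A minimal Java sketch of the same accumulation with plain doubles shows it:

```java
public class DoubleDrift {
    public static void main(String[] args) {
        double sum = 0.0;
        for (int i = 0; i < 10; i++) {
            sum += 0.1; // 0.1 has no exact binary representation
        }
        System.out.println(sum);        // 0.9999999999999999
        System.out.println(sum == 1.0); // false
    }
}
```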
Basically, the problem is solved by adding the M suffix (a BigDecimal literal) to each number in the expression:
(+ 0.1M 0.1M 0.1M 0.1M 0.1M 0.1M 0.1M 0.1M 0.1M 0.1M) = 1.0M
Or use rational numbers:
(+ 1/10 1/10 1/10 1/10 1/10 1/10 1/10 1/10 1/10 1/10) = 1N
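Both fixes work because the arithmetic stays in exact types. The BigDecimal variant can be sketched in plain Java, which is what all three languages delegate to on the JVM:

```java
import java.math.BigDecimal;

public class ExactSum {
    public static void main(String[] args) {
        BigDecimal tenth = new BigDecimal("0.1"); // exact decimal, not a binary double
        BigDecimal sum = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            sum = sum.add(tenth);
        }
        System.out.println(sum);                           // 1.0
        System.out.println(sum.compareTo(BigDecimal.ONE)); // 0, i.e. numerically equal to 1
    }
}
```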
The hypothetical problem: if a number (say, a monetary amount) is stored in the database, and the calculation is configured by the user through business rules in some DSL, the user could write a calculation that silently loses precision. For instance:
(* 100M 1.1) = 110.00000000000001
(class (* 100M 1.1)) = java.lang.Double
The result is a Double!
The same kind of calculation yields a BigDecimal in Scala:
scala> BigDecimal(1.5) * 1.5
res0: scala.math.BigDecimal = 2.25
The question is: what is the rationale behind this behavior, and which result is correct? To me, when writing an application that operates on monetary amounts, Scala's behavior is the correct one.
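Until someone explains the rationale, one way to guard a DSL evaluator against this is to coerce every incoming number to BigDecimal before doing arithmetic. This is a hypothetical sketch of my own (the `toBigDecimal` helper and its name are not from any of these languages' standard APIs):

```java
import java.math.BigDecimal;

public class SafeNumbers {
    // Hypothetical helper: coerce any Number produced by a user's rule into
    // BigDecimal before arithmetic, so a stray double literal cannot
    // contaminate the result.
    static BigDecimal toBigDecimal(Number n) {
        if (n instanceof BigDecimal) {
            return (BigDecimal) n;
        }
        // Going through toString() uses the double's canonical decimal form
        // ("1.1"), not its exact binary expansion, which is usually what the
        // rule author meant.
        return new BigDecimal(n.toString());
    }

    public static void main(String[] args) {
        BigDecimal result = toBigDecimal(100).multiply(toBigDecimal(1.1));
        System.out.println(result); // 110.0
    }
}
```

Clojure offers the same escape hatch natively via `bigdec`, but a coercion layer like this keeps the decision out of the user's hands.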