It’s mainly just an issue of precision.
Take 1/3 in decimal as an example...
In single-digit precision (not SINGLE), it might be represented as 0.3.
In double-digit precision, it might be represented as 0.33.
In six-digit precision, it might be 0.333333.
So multiplying by 10 with each precision, we'd get:
3
3.3
3.33333
If we're rounding to cents for monetary values, the 3 is WRONG, the 3.3 is wrong, and only the 3.33333 (which rounds to 3.33) would be right...
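To make that concrete, here's a small C sketch that simulates storing 1/3 at one, two, and six decimal digits, then multiplies by 10 and rounds to cents. The at_digits helper is purely illustrative, not a real API:

    #include <stdio.h>
    #include <math.h>

    /* Illustrative helper (not a real API): round x to a given number
       of decimal digits, as a stand-in for storing it at that precision. */
    static double at_digits(double x, int digits) {
        double scale = pow(10.0, digits);
        return round(x * scale) / scale;
    }

    int main(void) {
        int digits[] = {1, 2, 6};
        for (int i = 0; i < 3; i++) {
            double stored = at_digits(1.0 / 3.0, digits[i]);
            printf("%d-digit: stored %.6f, x10 = %.5f, cents = %.2f\n",
                   digits[i], stored, stored * 10.0,
                   round(stored * 10.0 * 100.0) / 100.0);
        }
        return 0;
    }

This prints 3.00, 3.30, and 3.33 for the cents column... only the six-digit version survives the rounding intact.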
The more decimal places you use, the smaller the margin of error becomes... but it never goes away completely. Even with six-digit precision, 1 / 3 * 1000000 would end up becoming 333333.0, which is 33 "cents" short of the correct total (333333.33...).
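Scaled up, the same truncation looks like this (again just simulating six-digit decimal storage, not any particular type):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* Simulate six-digit decimal storage of 1/3, then scale it up. */
        double stored = round((1.0 / 3.0) * 1e6) / 1e6;           /* 0.333333 */
        printf("scaled total: %.2f\n", stored * 1000000.0);       /* 333333.00 */
        printf("true total  : %.2f\n", (1.0 / 3.0) * 1000000.0);  /* 333333.33 */
        return 0;
    }

The tiny per-value error becomes visible money once you multiply it by something big.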
DOUBLE offers more precision than SINGLE, so you have a smaller margin of error. (FLOAT is usually just single precision under another name.) Wider, extended-precision types offer even greater precision and an even smaller margin of error...
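You can see the SINGLE/DOUBLE difference directly with C's float and double. The printed digits below are what a typical IEEE 754 system produces:

    #include <stdio.h>

    int main(void) {
        float  f = 1.0f / 3.0f;  /* SINGLE: roughly 7 significant decimal digits */
        double d = 1.0 / 3.0;    /* DOUBLE: roughly 16 significant decimal digits */
        printf("SINGLE: %.10f\n", f);  /* 0.3333333433 */
        printf("DOUBLE: %.18f\n", d);  /* 0.333333333333333315 */
        return 0;
    }

Both drift away from the true value eventually; DOUBLE just holds out for more digits before it does.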
BUT...
In the end, they ALL still have a margin of error due to rounding and the way the numbers are represented.
The only way to truly avoid the issue is to use integer values, or (yuck) string math.
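A minimal sketch of the integer approach, keeping money as a whole number of cents so that division error becomes an explicit remainder instead of a silent rounding:

    #include <stdio.h>

    int main(void) {
        /* Keep money as an integer count of cents: splitting $1.00 three
           ways gives an exact 33 cents each plus an explicit 1-cent
           remainder, rather than a silently rounded 0.333... */
        long long total_cents = 100;
        long long share       = total_cents / 3;  /* 33 */
        long long remainder   = total_cents % 3;  /* 1  */
        printf("share: %lld cents, left over: %lld cent\n", share, remainder);
        return 0;
    }

The nice part is that the leftover cent is right there in the code... you decide who gets it, instead of letting the floating-point hardware decide for you.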