It’s as Pete indicated: basic floating point math.
In decimal, what is 1/3?
0.3333333333333333333333333333333333333.....
We can’t truly represent 1/3 in decimal form, as it’s a never-ending fraction. The best we can do is approximate the value.
For a computer, numbers are represented by binary values (powers of 2) instead of decimal values (powers of 10). Using powers of 2, it’s impossible to exactly represent something as simple as 1/10...
2 ^ 4 = 16
2 ^ 3 = 8
2 ^ 2 = 4
2 ^ 1 = 2
2 ^ 0 = 1
2 ^ -1 = 1/2
2 ^ -2 = 1/4
2 ^ -3 = 1/8
2 ^ -4 = 1/16
We can’t represent 1/10th in binary, just as we can’t represent 1/3rd in decimal. It’s not a flaw in any particular language; it’s a basic limitation of the number base itself.
So how do we get such precision without errors?? Something like a financial program has to track tenths and hundredths exactly, so HOW do those programs work??
They track INTEGER values, not SINGLE. Instead of adding $10.03 + $2.19, they add/subtract the values as 1003 cents + 219 cents.
Your only choices are:
1) Convert to integer values, so you’re not dealing with single precision math.
OR
2) Allow for variance from rounding errors. Instead of a statement like IF a = b THEN... use a statement like IF ABS(a - b) < 0.001 THEN....
By using “IF ABS(a - b) < 0.001 THEN”, the values don’t have to be EXACT; they only need to be within a specified threshold.
You can also convert a value like .2974997 by rounding it to a lower level of precision, with a simple statement like: ra = INT(ra * 10000 + 0.5) / 10000. That rounds the 7-digit value down to a 4-digit value, which may give you the results you’re looking for. (Note that the multiplier and the divisor must both be 10000; if they don’t match, the decimal point ends up shifted.)
At the end of the day, all you can do is either:
1) Use integer values to avoid rounding errors.
OR
2) Write your program to account for the natural precision errors which WILL occur with single precision math.