Hi all
Trying to do this (although with figures from an array rather than directly programmed numbers - getting the same result):
finishseconds! = 39818.38
startseconds! = 34200.77
PRINT finishseconds! - startseconds!
I should get 5617.61 but for some reason I'm getting 5617.609.
Why and how can I be sure to get the exact result I need? Should I use a different type of number?
I'm using type ! because the numbers could go up to 86400 (the number of seconds in 24 hours) and I need two decimal places of accuracy (as that's what my timing system outputs).
Is there a better option, or alternatively, how can I make sure the answer always comes back rounded up to 5617.61?
Thanks
As Fellippe says, what you're seeing is the natural effect of binary math. The same kind of thing happens in decimal math (what we normally use, which is base-10), but folks don't quite grasp why it pops up where it does in binary. Let me break down what's happening for you, really simply:
In decimal math, what is 1/3? It's 0.333333333333333333333333333333333..... with more threes to infinity. There is no perfect decimal representation of 1/3. At some point, we have to have a cutoff for precision -- whatever we decide we need it to be, whether that's 2 digits (.33), 6 digits (.333333), or 12 digits (.333333333333). We can NEVER truly represent 1/3 as a decimal, so for practicality's sake, we break it off at some point.
But what happens when we take that 1/3, store it as a decimal, and then multiply it by 3? 1/3 * 3 = 1, and everyone knows that. But if we worked the math out on paper, we'd get .333333 * 3 = .999999... The more precision we have, the closer we come to the proper answer of 1, but we'll always be just a weeee bit short of it.
Which is why computers tend to calculate figures to one place past their precision level and then round. With a 6-digit precision limit, 1/3 gets carried internally as .3333333 -- 7 digits -- so .3333333 * 3 = .9999999, which then rounds back up to 1, to reduce errors as much as possible. Many times it works just fine, but sometimes it doesn't and values end up off by a small fraction -- especially when calculating values inside loops. A .000001 difference looped 100000 times adds up to a full 0.1 error in your answer.
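You can watch that drift happen for yourself with a little loop (a minimal sketch; the variable names are just made up for illustration):
' Add 0.01 to a single-precision total 100000 times.
' Mathematically the answer is exactly 1000, but the stored
' value of 0.01 is slightly off, and the error compounds.
total! = 0
FOR i& = 1 TO 100000
    total! = total! + 0.01!
NEXT
PRINT total! ' prints a value near, but usually not exactly, 1000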
And this is why, when coding with real (floating-point) number types, you shouldn't make IF checks absolute.
IF x / y = 0.1 THEN... This is very likely going to fail for you, as x / y might end up being 0.09999999999874 instead of 0.1.
The best way to do these types of checks with real variable types is to do them based upon an allowed level of variance -- a tolerance:
IF ABS(x / y - 0.1) < 0.00001 THEN... Here we're saying that as long as our value is *almost* 0.1, we're going to accept it and treat the difference as a rounding error from the floating-point math.
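Here's a quick sketch of both checks side by side, using the classic 0.1 + 0.2 case rather than your exact numbers:
a# = 0.1#
b# = 0.2#
' The exact comparison fails, because a# + b# gets stored as
' roughly 0.30000000000000004, not exactly 0.3:
IF a# + b# = 0.3# THEN PRINT "exact match" ELSE PRINT "no exact match"
' The tolerance-based comparison succeeds:
IF ABS((a# + b#) - 0.3#) < 0.00001 THEN PRINT "close enough"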
Now, why do we see this type of error where we do with binary math? Even something as simple as 1/10 can end up glitching out on us on a PC!!
Think of binary math, which uses base-2 values. Let's count from 0 to 10 in binary:
0000
0001
0010
0011
0100
0101
0110
0111
1000
1001
1010
So that's 0 to 10, represented in binary values. But what are fractional values?
Just like with decimal math, they're represented to the right of the point:
0000.1000 = 1/2
0000.0100 = 1/4
0000.0010 = 1/8
0000.0001 = 1/16
But the problem here is that, just like how we can't represent 1/3 in decimal math, there's no perfect binary representation of 1/10. The more precision we use, the closer we can come to representing 1/10, but we'll never hit it exactly. You just can't represent 1/10 with a finite number of binary digits.
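You can see the imperfect stored value right in QB64 if you widen a single-precision 0.1 out to double precision before printing it (a minimal sketch):
a! = 0.1 ' stored as the closest single-precision binary fraction
b# = a!  ' widen to double so PRINT shows more digits
PRINT b# ' prints something like .1000000014901161, not .1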
So what's the work around?
If one is talking about two-decimal-digit values, such as with banks and other organizations which deal with money, they simply NEVER use decimals. The bank doesn't count the number of dollars you have with them; they count the number of pennies!! Your cash isn't stored as $123.32; it's stored as 12332 pennies and then always processed with INTEGER math. Integer math has a smaller range, but it never loses precision. As long as the values you're dealing with stay below 2^53 (about 9 * 10^15, the exact-integer range of double-precision math), you shouldn't have any issues calculating them.
Swapping over to integer math might be the answer you seek.
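Applied to your timing numbers, that means storing each time as a whole count of hundredths of a second and only converting back to seconds for display. A sketch along those lines (variable names made up for illustration):
' 86400 seconds is only 8640000 hundredths, well within LONG range.
finishhundredths& = 3981838 ' 39818.38 seconds
starthundredths& = 3420077  ' 34200.77 seconds
elapsed& = finishhundredths& - starthundredths& ' exact integer math: 561761
PRINT USING "#####.##"; elapsed& / 100 ' prints 5617.61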
From the small example you posted, though, I'd think the easiest solution in this case is simply to use PRINT USING to force rounding to 2-digit precision:
finishseconds! = 39818.38
startseconds! = 34200.77
PRINT USING "#####.##"; finishseconds! - startseconds!
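That should print 5617.61, since PRINT USING rounds the value to fit the two decimal places in the format string, swallowing the stray .609 tail.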
If that doesn't work for you, then you can always swap over to using string math for your calculations, and set the level of precision to whatever point you need, but that's a completely different topic for another day....
The overall end result of what you're seeing is just the natural state of computers and binary math. If precision has to be absolute, use INTEGER values instead of REAL values -- count pennies instead of dollars. If it's just a matter of, "...BUT IT LOOKS FUNKY!!", then fix that with PRINT USING and format it to a point where you find it visually pleasing and suitable for your needs.