I'm confused.
What is the point of an _UNSIGNED variable if operations recognize the leftmost bit as a sign?
All functions have limits on their return values, according to their return type. In this case, it’s a SIGNED _INTEGER64.
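That doesn't make the unsigned types pointless, though: the exact same bits simply get read back two different ways, depending on the declared type. Here's a tiny sketch of mine using _BYTE, just so the pattern is easy to look at (an _INTEGER64 behaves the same way, just across 64 bits):

' Quick illustration (my own example): one bit pattern, two readings.
DIM sb AS _BYTE
DIM ub AS _UNSIGNED _BYTE
sb = -1 ' stored as the bit pattern 11111111; the leftmost bit is read as the sign
ub = 255 ' the same bit pattern 11111111, but all eight bits count as magnitude
PRINT sb ' -1
PRINT ub ' 255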
Now, why was a signed type chosen over an unsigned one, you ask?
Let’s look at a few values:
11111111 — as a signed byte, this represents the value for -1.
01111111 — Now, if we just shift it right once, it suddenly becomes +127?? <— This is basically what happens if we shift all eight bits along, with no regard to the sign bit being special.
Most people think of a bit shift as basically being a math operation of times two, or divide by two... not a case of -1 somehow landing on +127.
So, to preserve that “shift by a power of two” mechanic, we hold that first bit in reserve for negative values.
11111110 <— This is -2, in a signed byte.
11111111 <— Now, shift right once but keep that sign bit in place (it gets copied back into the top position), and we get -1... which is exactly -2 divided by two.
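In code, that works out to plain times-two / divide-by-two arithmetic. Here's a small sketch of mine using QB64's _SHL and _SHR shift functions (the -2 result leans on the signed _INTEGER64 return described above):

' Quick illustration (my own example) of shifts staying a power-of-two operation.
DIM a AS _INTEGER64
a = -1
PRINT _SHL(a, 1) ' -2: the all-ones pattern becomes ...11111110, i.e. -1 times two
PRINT _SHR(8, 1) ' 4: on a positive value, this is a plain divide by two
PRINT -2 \ 2 ' -1: the divide-by-two result that a sign-preserving right shift lines up with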
You might pass the function an _UNSIGNED variable, but internally it processes the signed version. Think of it as:
FUNCTION foo (foo2 AS _INTEGER64)
Now, we can pass foo an INTEGER, a SINGLE, a _FLOAT, or an _UNSIGNED _INTEGER64, but what is it actually going to work with internally?
(foo2 AS _INTEGER64) <— this says it accepts and works with a signed _INTEGER64, no matter what we send it...
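To make that concrete, here's a small runnable sketch built around that same skeleton (foo and foo2 are just the placeholder names from above, nothing built in):

PRINT foo&&(3.7) ' prints 4: the SINGLE value is converted to a signed _INTEGER64 on the way in
PRINT foo&&(-1) ' prints -1: negative values come through intact, because the parameter is signed

FUNCTION foo&& (foo2 AS _INTEGER64)
    ' Whatever numeric type the caller handed us, in here foo2 is a signed
    ' _INTEGER64, so any value with its leftmost bit set reads as negative.
    foo&& = foo2
END FUNCTION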