Author Topic: QB64x64 math differences from QB64x32  (Read 6949 times)


Offline SMcNeill

  • QB64 Developer
  • Forum Resident
  • Posts: 3972
    • View Profile
    • Steve’s QB64 Archive Forum
Re: QB64x64 math differences from QB64x32
« Reply #15 on: February 23, 2020, 08:48:13 am »
Quote
To FellippeHeitor
I see you have a lot to say in this Forum. Therefore, I agree to delete my profile from qb64.org/forum.
Goodbye

Bye.  /wave

Just curious though: What error does DEFLNG A-Z generate???
https://github.com/SteveMcNeill/Steve64 — A github collection of all things Steve!

Offline Dimster

  • Forum Resident
  • Posts: 500
    • View Profile
Re: QB64x64 math differences from QB64x32
« Reply #16 on: February 23, 2020, 10:50:50 am »
Sorry to see Ryster go. I'm the world's worst programmer in that I have very little understanding sometimes as to why some things work and some don't. The issue Ryster raised I have come across before - clueless as to why it worked in QBasic and not QB64x64, so I just threw different things at it until I found the Round function solved my problem. I was hoping this thread would give some more insight, and voilà, the "specific system architecture" was raised.

So is that referring to the OS (Windows, Apple, Android, etc.), to the hardware installed in a computer, or to how QB64 handles 32-bit versus 64-bit values when applying the same operation (i.e. is 10 x 354 handled differently in QB64x32 versus QB64x64)?


Offline bplus

  • Global Moderator
  • Forum Resident
  • Posts: 8053
  • b = b + ...
    • View Profile
Re: QB64x64 math differences from QB64x32
« Reply #17 on: February 23, 2020, 11:13:18 am »
Hi Dimster,

I'm no tech genius either, but floating-point math is handled by the programming language's compiled code, so if there were problems I would look there before the OS (which the PL does have to work with).

I too have found surprises: what I thought should be 0 wasn't exactly 0 according to, say, an IF evaluation, because of junk wandering in from high-precision float math.
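A minimal QB64 sketch of that kind of surprise, with a small-tolerance check as one common workaround (the epsilon of 0.0000001 is just an illustrative choice):

DIM x AS DOUBLE
x = 0.1# + 0.2# - 0.3# ' mathematically 0, but none of these values has an exact binary form
IF x = 0 THEN
    PRINT "exactly zero"
ELSE
    PRINT "not zero:"; x ' a tiny leftover; the exact digits can differ between the x32 and x64 builds
END IF
IF ABS(x) < 0.0000001 THEN PRINT "close enough to zero for an IF test"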

Offline Qwerkey

  • Forum Resident
  • Posts: 755
    • View Profile
Re: QB64x64 math differences from QB64x32
« Reply #18 on: February 23, 2020, 02:32:28 pm »
Quote
Therefore, I agree to delete my profile from qb64.org/forum.
Goodbye

That has got to be the most clearly illustrative example of cutting off one's nose to spite one's face.

Offline SMcNeill

  • QB64 Developer
  • Forum Resident
  • Posts: 3972
    • View Profile
    • Steve’s QB64 Archive Forum
Re: QB64x64 math differences from QB64x32
« Reply #19 on: February 23, 2020, 02:35:04 pm »
Quote
Sorry to see Ryster go. I'm the world's worst programmer in that I have very little understanding sometimes as to why some things work and some don't. The issue Ryster raised I have come across before - clueless as to why it worked in QBasic and not QB64x64, so I just threw different things at it until I found the Round function solved my problem. I was hoping this thread would give some more insight, and voilà, the "specific system architecture" was raised.

So is that referring to the OS (Windows, Apple, Android, etc.), to the hardware installed in a computer, or to how QB64 handles 32-bit versus 64-bit values when applying the same operation (i.e. is 10 x 354 handled differently in QB64x32 versus QB64x64)?

At the end of the day, QB64 just translates BAS code to C code. MinGW is the compiler we use to then compile that C code into an EXE.

GENERALLY SPEAKING:  *
G++ 32-bit defaults to 80-bit precision x87 FPU math.
G++ 64-bit defaults to 64-bit precision SSE2 math, as it's much faster.

That gives us a noticeable difference in results as the precision limits are different.  Usually this is a difference of something like 0.000000002 or such, and it’s hardly noticeable — BUT when rounding it can cause a huge change in values.

INT(15.9999999999999999) = 15
INT(16.0000000000000001) = 16

Only .0000000000000002 difference in those values, but their INT value is quite different.
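A small QB64 sketch of how that boundary behavior shows up in practice, plus the workaround Dimster mentioned earlier (round before truncating); the value 4.35 * 100 is just an illustrative case, and _ROUND is QB64's built-in rounding function:

DIM a AS DOUBLE
a = 4.35# * 100 ' mathematically 435, but the stored DOUBLE lands a hair below it
PRINT INT(a) ' truncates, typically printing 434
PRINT _ROUND(a) ' rounds to the nearest whole number instead, printing 435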



* You notice I mentioned GENERALLY SPEAKING above??  That's because various machines and OSes default to different architectures.  From what I've heard, Mac OS X and later all use 64-bit SSE2 processing, even on 32-bit Macs...

If one wants to alter this type of default behavior, they usually just need to set the proper flags to tell the compiler "I want the slower, 80-bit FPU math, rather than the faster 64-bit SSE2 math".
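To make that concrete, with G++ the choice can be spelled out with flags along these lines (a generic illustration for x86 targets, not necessarily the exact switches QB64 passes; prog.cpp is only a placeholder file name):

g++ -mfpmath=387 prog.cpp          (request the slower 80-bit x87 FPU math, e.g. on a 64-bit build)
g++ -mfpmath=sse -msse2 prog.cpp   (request 64-bit SSE2 math, e.g. on a 32-bit build)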


https://github.com/SteveMcNeill/Steve64 — A github collection of all things Steve!

FellippeHeitor

  • Guest
Re: QB64x64 math differences from QB64x32
« Reply #20 on: February 23, 2020, 02:35:08 pm »
@Qwerkey thanks for teaching me an expression! Had never heard that one. :-)

Offline Qwerkey

  • Forum Resident
  • Posts: 755
    • View Profile
Re: QB64x64 math differences from QB64x32
« Reply #21 on: February 23, 2020, 02:39:22 pm »
@Fellippe, it is a very odd expression indeed and very rarely used, but I thought that I'd foist it upon everyone.
« Last Edit: February 24, 2020, 11:54:28 am by Qwerkey »

Offline SMcNeill

  • QB64 Developer
  • Forum Resident
  • Posts: 3972
    • View Profile
    • Steve’s QB64 Archive Forum
Re: QB64x64 math differences from QB64x32
« Reply #22 on: February 23, 2020, 02:40:38 pm »
A few quick wiki links to help:

https://en.wikipedia.org/wiki/X87

https://en.wikipedia.org/wiki/SSE2

Quote
Differences between x87 FPU and SSE2
FPU (x87) instructions provide higher precision by calculating intermediate results with 80 bits of precision, by default, to minimise roundoff error in numerically unstable algorithms (see IEEE 754 design rationale and references therein). However, the x87 FPU is a scalar unit only whereas SSE2 can process a small vector of operands in parallel.

If codes designed for x87 are ported to the lower precision double precision SSE2 floating point, certain combinations of math operations or input datasets can result in measurable numerical deviation, which can be an issue in reproducible scientific computations, e.g. if the calculation results must be compared against results generated from a different machine architecture. A related issue is that, historically, language standards and compilers had been inconsistent in their handling of the x87 80-bit registers implementing double extended precision variables, compared with the double and single precision formats implemented in SSE2: the rounding of extended precision intermediate values to double precision variables was not fully defined and was dependent on implementation details such as when registers were spilled to memory.
https://github.com/SteveMcNeill/Steve64 — A github collection of all things Steve!

Offline luke

  • Administrator
  • Seasoned Forum Regular
  • Posts: 324
    • View Profile
Re: QB64x64 math differences from QB64x32
« Reply #23 on: February 23, 2020, 05:02:27 pm »

Offline MWheatley

  • Newbie
  • Posts: 64
    • View Profile
Re: QB64x64 math differences from QB64x32
« Reply #24 on: February 24, 2020, 12:17:09 pm »
Quote
@Fellippe, it is a very odd expression indeed and very rarely used, but I thought that I'd foist it upon everyone.

British, I think.

Malcolm

Offline bplus

  • Global Moderator
  • Forum Resident
  • Posts: 8053
  • b = b + ...
    • View Profile
Re: QB64x64 math differences from QB64x32
« Reply #25 on: February 24, 2020, 01:23:29 pm »
Rhinectomy - cutting off all nosy decimals.

« Last Edit: February 24, 2020, 01:58:14 pm by bplus »

Offline romichess

  • Forum Regular
  • Posts: 145
    • View Profile
Re: QB64x64 math differences from QB64x32
« Reply #26 on: February 24, 2020, 04:24:22 pm »
The reason this kind of problem exists is that values in a computer are digital, not analog. That means there are values that are simply impossible to represent exactly inside a computer. The error is bigger for single-precision floating point than for double-precision numbers. The point, then, is that if the 64-bit MinGW compiler uses different-size registers or a slightly different format, a different-size error gets propagated through the calculations, producing a slightly different answer. If absolute precision out to a number of decimal places that C++ cannot handle is required, there are languages that can handle much larger values. Some languages can string together hundreds of bytes to form any precision needed, though they tend to be very slow. Don't hold me to this, but I believe Euphoria 4.0 is one of those languages. Euphoria 4.0 is both an interpreter and an .e to .c compiler.
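A minimal QB64 sketch of that representation error, using 1 / 3 as the example value (the printed digits are approximate and can vary slightly by build):

DIM s AS SINGLE, d AS DOUBLE
s = 1 / 3 ' kept to roughly 7 significant digits
d = 1 / 3 ' kept to roughly 15-16 significant digits
PRINT s ' something like .3333333
PRINT d ' something like .3333333333333333
PRINT s - d ' the extra error SINGLE carries, on the order of 1E-08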
My name is Michael, but you can call me Mike :)