Author Topic: Division Does Not Take Longer Than Multiplication?  (Read 2754 times)

0 Members and 1 Guest are viewing this topic.

Offline Qwerkey

  • Forum Resident
  • Posts: 755
    • View Profile
Division Does Not Take Longer Than Multiplication?
« on: August 13, 2021, 07:31:34 am »
I'm just finishing a program involving lots of pi/2 calculations.  It came into my head that a processor would take longer to do a division than a multiplication.  So I did a comparison with a billion calculations (_PI/2 cf. 0.5*_PI), and division takes no longer than multiplication.  So my intuitive knowledge is hopeless, it seems! [I suppose that a "division" process is still required somewhere to get the value of 0.5.]

An interesting (?) but unimportant piece of knowledge.

Offline bplus

  • Global Moderator
  • Forum Resident
  • Posts: 8053
  • b = b + ...
    • View Profile
Re: Division Does Not Take Longer Than Multiplication?
« Reply #1 on: August 13, 2021, 08:26:26 am »
I haven't tested on QB64 but a quick google gets this:
Quote
Which is faster, division or multiplication?
Multiplication is faster than division. At university I was taught that division takes six times as long as multiplication. The actual timings are architecture-dependent, but in general multiplication will never be slower than, or even as slow as, division. (Jul 26, 2013)

Is multiplication faster than float division? - Stack Overflow https://stackoverflow.com › questions › is-multiplication-f...

Why does division take longer than multiplication?
The big difference is that in a long multiplication you just need to add up a bunch of numbers after shifting and masking. In a long division you have to test for overflow after each subtraction. (Nov 17, 2018)

Testing with a single pair of numbers is not very much to base a conclusion on.

Offline bplus

  • Global Moderator
  • Forum Resident
  • Posts: 8053
  • b = b + ...
    • View Profile
Re: Division Does Not Take Longer Than Multiplication?
« Reply #2 on: August 13, 2021, 11:03:04 am »
They are pretty close, I had to go to 10,000,000 to start seeing any difference.
Code: [Select]
DefDbl A-Z
Dim i As _Integer64
Const limit = 10000000
Dim test(1 To limit, 0 To 4) 'set up a test set of numbers
For i = 1 To limit
    test(i, 0) = Rnd * 10 ^ Rnd * 100 ' first number to mult or divide
    r = Rnd * 10 ^ Rnd * 100
    test(i, 1) = r 'test number 2 to divide
    test(i, 2) = 1 / r 'test number 3 to mult
Next

t1 = Timer(.001)
For i = 1 To limit ' do divisions
    test(i, 3) = test(i, 0) / test(i, 1)
Next
divideTime = Timer(.001) - t1
Print "Divide time"; divideTime

t1 = Timer(.001) 'reset
For i = 1 To limit ' do multiplications
    test(i, 4) = test(i, 0) * test(i, 2)
Next
multTime = Timer(.001) - t1
Print "Mult time"; multTime

' check that numbers are same for both division and mult
For i = 1 To limit
    If Abs((test(i, 3) - test(i, 4)) / test(i, 3)) > .01 Then Print i, test(i, 3), test(i, 4)
Next


Offline SMcNeill

  • QB64 Developer
  • Forum Resident
  • Posts: 3972
    • View Profile
    • Steve’s QB64 Archive Forum
Re: Division Does Not Take Longer Than Multiplication?
« Reply #3 on: August 13, 2021, 03:32:33 pm »
I’ve found that when it comes to speed of math operations, there’s *never* an easy answer — it’s all based on data type, hardware, OS, compiler, and color of God’s sneeze….

Is X + X faster than X * 2?

MAYBE…..

IF X is an INTEGER.  Maybe not.

MAYBE…..

If X is a _FLOAT.  Maybe not.

Maybe if that 2 is an INTEGER?  X = X * 2%
Maybe it depends on if X is an INTEGER, then that 2 needs to be an INTEGER?  If X is a _FLOAT then that 2 needs to be a _FLOAT?
Maybe on Linux the ruleset is backwards from Windows?  And Mac is different than either?
Maybe it’s different on 32-bit math processors than it is on 64-bit processors?
Maybe swapping out the compiler to a newer version will reverse results?
What if -ffast-math or -Os is set as a compiler option to optimize for math or for size?
What if you test it and your OS is busy processing background tasks, such as downloading the next windows update?  Can you *really* trust .000001 second variations?

It all depends on each individual piece of hardware you compare it on, what your data types are, what OS you’re running, and which version of the compiler you’re using.  It’s truly hard to set hard and fast rules like “x * x is faster than x ^ 2”…

https://github.com/SteveMcNeill/Steve64 — A github collection of all things Steve!