@NOVARSEG @bplus Thanks for replies.
I admit that well-written, hand-crafted, optimized assembly code would be the most elegant, resource-efficient, fastest and possibly the easiest to understand and debug. At this stage I am not quite ready to attempt assembly programming (Intel x86 syntax) and incorporate it seamlessly into QB64 programs. At present I would have to keep a "stand alone" assembly program and "cheat" by writing to a temporary disk file for (assembly <-> QB64) program interchange, as I cannot yet correctly pass data and commands between the two. Hopefully one day I will sort this out.
Yes, I have heard about Taylor series (and have even used them), and there are other series too (Maclaurin, etc.). The "fun" starts when the mathematical field of Numerical Analysis is brought in: among many other things, an analysis is done of how many additions, subtractions, etc. are performed to do a certain task, with a corresponding correlation to the precision/accuracy of the answer at each stage. And to top this, for "certain kinds" of tasks there are methods where a "seed" is used to "kick off" an iteration process that rapidly converges on the answer (when, e.g., only a predetermined accuracy of say 5 decimal places is needed). A simple example: to estimate factorial N (N! = 1 * 2 * 3 * ... * (N-1) * N), if the precision required is only the most significant digit (or two), then Stirling's approximation for the factorial may be sufficient.
Now I am at a "cross-roads" - having to spread my resources over
- assembly programming (long term)
- BPlus OHI approach (maybe up and going quickly)
- GMP and similar packages (probably get lost along the way)
- Learning C++ and to use 128bit registers (learning curve for C++ to overcome)
- my approach using the _MEM tools (just starting; however, _MEM expertise is already available from QB64 forum members, particularly @SMcNeill)
Just a side note: although many topics/replies have been made regarding FLOAT versus DOUBLE precision, in a nutshell you "always" need more bits (80 versus 64) to get a more accurate result. One thing I noticed missing from all the topics I have read is the actual possible "problems" with 64-bit math as performed by the Intel x64 CPU, and I am not referring to it being only 64 bits. Maybe I should reply on this if anyone is interested (my reply would rely on documentation from Intel).