I'm currently trying to port an algorithm from IDL to Python 3 and, while comparing results, I came across the following question: how do I look at the numbers and determine whether I'm effectively reproducing the results?
Assuming that different languages handle floating point precision slightly differently, it is expected that the results will differ a bit, but up to which point is this acceptable?
In the figure below I use the mean values of the data sets produced by IDL and Python to illustrate my point:
While in some of the calculations I can see that the values are similar, in others they don't quite hit the mark.
Looking at the step below, where a trace of the set of matrices is used to determine whether there are points where the solution is ill posed, I had the following results for both IDL and Python:
Which looks quite good (may I say that?).
The (100, y-dimension, x-dimension) matrices from which this trace was calculated are then re-organised to calculate a least squares fitting, which is ultimately where the values that make up the means arise.
I'm using those comparisons to find clues on what needs to be changed and improved in the Python version, so I appreciate any thoughts that can lead me in this direction.
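For reference, a minimal sketch of how such a comparison could be made quantitative ( idl_means and py_means are placeholder names for the arrays behind the figure ):

import numpy as np

idl_means = np.loadtxt( 'idl_means.txt' )   # placeholder: means exported from IDL
py_means  = np.loadtxt( 'py_means.txt' )    # placeholder: means from the Python port

# element-wise relative difference, guarded against division by zero
rel_diff  = np.abs( py_means - idl_means ) / np.maximum( np.abs( idl_means ), np.finfo( float ).tiny )

print( 'max relative difference:', rel_diff.max() )
print( 'agree to ~float32 level:', np.allclose( py_means, idl_means, rtol = 1e-6  ) )
print( 'agree to ~float64 level:', np.allclose( py_means, idl_means, rtol = 1e-12 ) )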
Thank you in advance for your time.
Each number representation injects into the computation process two quantities:
- some value ( the primary quantity represented by the number )
- some error ( the secondary quantity, as a side-effect, caused by a numerical representation )
No one can compute without the former ( The Value ... ).
No one can escape from the latter, which in practice becomes visible as the computation-process flow's final result error ( better said, its accumulated levels of uncertainty ).
There are not many strategies for coping with the principal errors ( uncertainties ) embedded in the simplified "common" representations, as advocated by IEEE-754(-1985).
Be it astronomy, be it interplanetary flight dynamics calculations, there are cases where IEEE-754 numbers soon fail to provide an acceptable service.
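A tiny, self-contained illustration in the standard Python REPL makes both quantities, the value and the representation error, visible:

>>> 0.1 + 0.2                                             # IEEE-754 binary64 cannot represent 0.1 exactly
0.30000000000000004
>>> sum( 0.1 for _ in range( 10 ) )                       # ten additions already accumulate a visible error
0.9999999999999999
>>> import decimal
>>> sum( decimal.Decimal( '0.1' ) for _ in range( 10 ) )  # a decimal representation keeps this value exact
Decimal('1.0')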
Here the computing tools provide a few solutions to choose from.
>>> import decimal
>>>
>>> with decimal.localcontext() as locCTX:
...     for aPREC in range( 20, 31 ):
...         locCTX.prec = aPREC
...         ( pure_dec_LSQ_5DoF( locCTX, dec_fmin_x0_SEARCH_TRIM_TO_BE_PRECISE, decX, decY ), pure_dec_RESi( locCTX, dec_fmin_x0_SEARCH_TRIM_TO_BE_PRECISE, decX, decY ) )
...
(Decimal('0.038471115298826195147'), (Decimal('0.023589050081780503'), Decimal('-0.082605913918299990'), Decimal('0.150647690402532134'), Decimal('-0.091630826566012630')))
(Decimal('0.0384711152988261953165'), (Decimal('0.0235890500817804889'), Decimal('-0.0826059139182999933'), Decimal('0.1506476904025321349'), Decimal('-0.0916308265660126301')))
(Decimal('0.03847111529882619531420'), (Decimal('0.02358905008178048823'), Decimal('-0.08260591391829999331'), Decimal('0.15064769040253213501'), Decimal('-0.09163082656601263007')))
(Decimal('0.038471115298826195324048'), (Decimal('0.023589050081780488368'), Decimal('-0.082605913918299993309'), Decimal('0.150647690402532135021'), Decimal('-0.091630826566012630071')))
(Decimal('0.0384711152988261953231489'), (Decimal('0.0235890500817804883582'), Decimal('-0.0826059139182999933087'), Decimal('0.1506476904025321350199'), Decimal('-0.0916308265660126300707')))
(Decimal('0.03847111529882619532322276'), (Decimal('0.02358905008178048835950'), Decimal('-0.08260591391829999330863'), Decimal('0.15064769040253213501998'), Decimal('-0.09163082656601263007070')))
(Decimal('0.038471115298826195323213788'), (Decimal('0.023589050081780488359358'), Decimal('-0.082605913918299993308625'), Decimal('0.150647690402532135019974'), Decimal('-0.091630826566012630070702')))
(Decimal('0.0384711152988261953232136753'), (Decimal('0.0235890500817804883593541'), Decimal('-0.0826059139182999933086251'), Decimal('0.1506476904025321350199740'), Decimal('-0.0916308265660126300707023')))
(Decimal('0.03847111529882619532321367314'), (Decimal('0.02358905008178048835935336'), Decimal('-0.08260591391829999330862505'), Decimal('0.15064769040253213501997413'), Decimal('-0.09163082656601263007070231')))
(Decimal('0.038471115298826195323213665675'), (Decimal('0.023589050081780488359353229'), Decimal('-0.082605913918299993308625043'), Decimal('0.150647690402532135019974132'), Decimal('-0.091630826566012630070702306')))
(Decimal('0.0384711152988261953232136649869'), (Decimal('0.0235890500817804883593532187'), Decimal('-0.0826059139182999933086250437'), Decimal('0.1506476904025321350199741307'), Decimal('-0.0916308265660126300707023064')))
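The project-specific functions aside, the very mechanics of steering the precision are this simple ( a self-contained sketch, using nothing but the decimal module ):

>>> import decimal
>>>
>>> with decimal.localcontext() as locCTX:                # a local, temporary context
...     for aPREC in ( 7, 28, 50 ):
...         locCTX.prec = aPREC                           # steer the number of significant digits
...         locCTX.divide( decimal.Decimal( 1 ), decimal.Decimal( 7 ) )
...
Decimal('0.1428571')
Decimal('0.1428571428571428571428571429')
Decimal('0.14285714285714285714285714285714285714285714285714')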
Python is happy to enjoy its almost-infinite precision mathematics, so the simplest step forward is to re-design the algorithm on the Python side so that it contains purely this sort of almost-non-degrading precision mathematics, and you suddenly stand on the safer side, independently of how precise the IDL original was or was not.
Once one has re-formulated all computation steps in this precision-non-degrading way, the results are worth the time:
def pure_dec_LSQ_5DoF( decCTX, Xopt, decX_measured, decY_measured ):
    """Sum of squared residuals of the 5-DoF model

           y( x ) ~ a * f**( b * x + c ) + d

       with Xopt ~ [ a, b, c, d, f ] and the measured points
       ( decX_measured[i], decY_measured[i] ), i = 0..3,
       evaluated purely via decimal.Context methods, so that
       no intermediate step degrades below the context's .prec digits.
    """
    return decCTX.add( decCTX.add( decCTX.power( decCTX.subtract( decCTX.fma( Xopt[0], decCTX.power( Xopt[4], decCTX.fma( Xopt[1], decX_measured[0], Xopt[2] ) ), Xopt[3] ), decY_measured[0] ), decimal.Decimal( 2 ) ),  # [PERF] ~ 2400 [us] @ .prec =   14
                                   decCTX.power( decCTX.subtract( decCTX.fma( Xopt[0], decCTX.power( Xopt[4], decCTX.fma( Xopt[1], decX_measured[1], Xopt[2] ) ), Xopt[3] ), decY_measured[1] ), decimal.Decimal( 2 ) )   #        ~ 2800 [us] @ .prec =   28
                                   ),
                       decCTX.add( decCTX.power( decCTX.subtract( decCTX.fma( Xopt[0], decCTX.power( Xopt[4], decCTX.fma( Xopt[1], decX_measured[2], Xopt[2] ) ), Xopt[3] ), decY_measured[2] ), decimal.Decimal( 2 ) ),  #        ~ 7700 [us] @ .prec =  100
                                   decCTX.power( decCTX.subtract( decCTX.fma( Xopt[0], decCTX.power( Xopt[4], decCTX.fma( Xopt[1], decX_measured[3], Xopt[2] ) ), Xopt[3] ), decY_measured[3] ), decimal.Decimal( 2 ) )   #        ~ 1340 [ms] @ .prec = 1000
                                   )
                       )
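The pure_dec_RESi() called in the REPL session above is not listed here; a sketch of what it may look like, assuming it returns the per-point residuals of the same 5-DoF model:

def pure_dec_RESi( decCTX, Xopt, decX_measured, decY_measured ):       # an assumed sketch, not the original code
    """Per-point residuals  a * f**( b * x[i] + c ) + d  -  y[i]
       of the same 5-DoF model, again evaluated purely via
       decimal.Context methods."""
    return tuple( decCTX.subtract( decCTX.fma( Xopt[0],
                                               decCTX.power( Xopt[4],
                                                             decCTX.fma( Xopt[1], decX_measured[i], Xopt[2] ) ),
                                               Xopt[3] ),
                                   decY_measured[i] )
                  for i in range( len( decX_measured ) ) )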
If one needs ~14-digit precision, just spend ~ 2.4 [ms] per such step,
if one needs ~28-digit precision, just spend ~ 2.8 [ms] per such step,
if one needs ~100-digit precision, just spend ~ 7.7 [ms] per such step,
if one needs 1000-digit precision, just spend ~ 1.3 [s] per such step,
not bad at all, is it?
# [PERF] ~ 2400 [us] @ .prec = 14
# ~ 2800 [us] @ .prec = 28
# ~ 7700 [us] @ .prec = 100
# ~ 1340 [ms] @ .prec = 1000
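A minimal sketch of how one may reproduce such per-step timings on one's own data ( decXopt, decX, decY being placeholder names here ):

import timeit, decimal

decCTX      = decimal.getcontext()
decCTX.prec = 28                                                       # target precision in significant digits

t_per_step  = timeit.timeit( lambda: pure_dec_LSQ_5DoF( decCTX, decXopt, decX, decY ),
                             number = 1000 ) / 1000                    # ~ [s] per one cost-function evaluation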
And that is all already included in the standard Python tools and cool to re-use, isn't it?