python, curve-fitting, exponential, chi-squared

Add power-law and exponential fits based on chi-square error minimization to my PDF


Hello, as the title suggests, I have been trying to add an exponential and a power-law fit to my PDF, as shown in this picture: (figure: the PDF with the desired power-law and exponential fits)

The code I am using produces the underlying graph: (figure: the PDF plotted as red points on log-log axes)

The code is this one:

    import matplotlib.pyplot as plt

    # PDF values (y axis)
    a11 = [9.76032106e-02, 6.73754187e-02, 3.20683249e-02, 2.21788509e-02,
           2.70850237e-02, 9.90377323e-03, 2.11573411e-02, 8.46232347e-03,
           8.49027869e-03, 7.33997745e-03, 5.71819070e-03, 4.62720448e-03,
           4.11562884e-03, 3.20064313e-03, 2.66192941e-03, 1.69116510e-03,
           1.94355212e-03, 2.55224949e-03, 1.23822395e-03, 5.29618250e-04,
           4.03769641e-04, 3.96865740e-04, 3.38530868e-04, 2.04124701e-04,
           1.63913557e-04, 2.04486864e-04, 1.82216592e-04, 1.34708400e-04,
           9.24289261e-05, 9.55074181e-05, 8.13695322e-05, 5.15610541e-05,
           4.15425149e-05, 4.68101099e-05, 3.33696885e-05, 1.61893058e-05,
           9.61743970e-06, 1.17314090e-05, 6.65239507e-06]

    # times (x axis)
    b11 = [3.97213201e+00, 4.77600082e+00, 5.74255432e+00, 6.90471618e+00,
           8.30207306e+00, 9.98222306e+00, 1.20023970e+01, 1.44314081e+01,
           1.73519956e+01, 2.08636432e+01, 2.50859682e+01, 3.01627952e+01,
           3.62670562e+01, 4.36066802e+01, 5.24316764e+01, 6.30426504e+01,
           7.58010432e+01, 9.11414433e+01, 1.09586390e+02, 1.31764173e+02,
           1.58430233e+02, 1.90492894e+02, 2.29044305e+02, 2.75397642e+02,
           3.31131836e+02, 3.98145358e+02, 4.78720886e+02, 5.75603061e+02,
           6.92091976e+02, 8.32155588e+02, 1.00056488e+03, 1.20305636e+03,
           1.44652749e+03, 1.73927162e+03, 2.09126048e+03, 2.51448384e+03,
           3.02335795e+03, 3.63521656e+03, 4.37090138e+03]

    plt.plot(b11, a11, 'ro')
    plt.yscale("log")
    plt.xscale("log")

    plt.show()
     

I would like to add to the underlying graph a power-law fit at smaller times and an exponential fit at longer times, based on the chi-square error minimization method.

The data is also saved in CSV form:


Solution

  • As mentioned in my comments, I think you can couple the power law and the exponential via a constant term. Alternatively, the data looks like it could be fitted by two power laws, although the comments suggest that there is truly an exponential behavior. Anyhow, I show both approaches here. In both cases I try to avoid any type of piece-wise definition; this also ensures that the fit function is $C^\infty$.

    In the first approach we have a * x**( -b ) for small x and c * exp( -d * x ) for large x. The idea is to choose a c such that the power law is much bigger than c for the required small x but significantly smaller otherwise. This allows for the function mentioned in my comment, namely ( a * x**( -b ) + c ) * exp( -d * x ). One may consider c as a transition parameter.
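
    Written out, the combined model and its limiting behavior are

    $$ f(x) = \left( a\,x^{-b} + c \right) e^{-d x}, \qquad f(x) \approx a\,x^{-b} \ \text{for small } x, \qquad f(x) \approx c\,e^{-d x} \ \text{for large } x . $$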

    In the alternative approach, I take two power laws. There are, hence, two regions: in the first one, the first function is smaller; in the second, the second one. As I always want the smaller function, I use inverse summation, i.e., f = 1 / ( 1 / f1 + 1 / f2 ). As can be seen in the code below, I add an additional parameter k ( technically in ] 0, ∞ [ ). This parameter controls the smoothness of the transition.
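
    With f1 = a * x**( -b ) and f2 = c * x**( -d ), the construction used in the code is

    $$ f(x) = \left( f_1(x)^{-k} + f_2(x)^{-k} \right)^{-1/k} \;\longrightarrow\; \min\!\left( f_1(x), f_2(x) \right) \quad \text{as } k \to \infty , $$

    so larger k gives a sharper kink at the crossover between the two power laws.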

    import matplotlib.pyplot as mp
    import numpy as np
    from scipy.optimize import curve_fit
    
    data = np.loadtxt( "7jyRi.txt", delimiter=',' )
    
    #### p-e: power law and exponential coupled via a small constant term
    #### fitting is done in log space, so curve_fit minimizes the
    #### chi-square of the log10 residuals
    def func_log( x, a, b, c, d ):
        return np.log10( ( a * x**( -b ) + c ) * np.exp( -d * x ) )
    
    guess = [ .1, .8, 0.01, .005 ]
    testx = np.logspace( 0, 3, 150 )
    testy = np.fromiter( ( 10**func_log( x, *guess ) for x in testx ), float )
    sol, _ = curve_fit( func_log, data[ :, 0 ], np.log10( data[ :, 1 ] ), p0=guess )
    fity = np.fromiter( ( 10**func_log( x, *sol ) for x in testx ), float )
    
    #### p-p: alternatively using two power laws
    #### the k-dependent inverse summation acts as a smooth minimum
    def double_power_log( x, a, b, c, d, k ):
        s1 = ( a * x**( -b ) )**k
        s2 = ( c * x**( -d ) )**k
        out = 1.0 / ( 1.0 / s1 + 1.0 / s2 )**( 1.0 / k )
        return np.log10( out )
    
    aguess = [ .1, .8, 1e7, 4, 1 ]
    atesty = np.fromiter( ( 10**double_power_log( x, *aguess ) for x in testx ), float )
    asol, _ = curve_fit( double_power_log, data[ :, 0 ], np.log10( data[ :, 1 ] ), p0=aguess )
    afity = np.fromiter( ( 10**double_power_log( x, *asol ) for x in testx ), float )
    
    #### plotting
    fig = mp.figure( figsize=( 10, 8 ) )
    ax = fig.add_subplot( 1, 1, 1 )
    ax.plot( data[ :, 0 ], data[ :, 1 ], ls='', marker='o', label="data" )
    ax.plot( testx, testy, ls=':', label="guess p-e" )
    ax.plot( testx, atesty, ls=':', label="guess p-p" )
    ax.plot( testx, fity, ls='-', label="fit p-e: {}".format( sol ) )
    ax.plot( testx, afity, ls='-', label="fit p-p: {}".format( asol ) )
    ax.set_xscale( "log" )
    ax.set_yscale( "log" )
    ax.set_xlim( [ 5e-1, 2e3 ] )
    ax.set_ylim( [ 1e-5, 2e-1 ] )
    ax.legend( loc=0 )
    mp.show()
    

    The results look like this:

    (figure: fit and alternative approach)
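
    Since the question asks for chi-square error minimization: the fits above are least-squares fits of the log10 of the model to the log10 of the data, so the quantity being minimized is the (unweighted) chi-square of the log residuals. A minimal follow-up sketch, reusing data, sol, asol, func_log and double_power_log from the code above, to compare the two fits by that number:

    #### chi-square (sum of squared log10 residuals) of both fits
    resid_pe = np.log10( data[ :, 1 ] ) - func_log( data[ :, 0 ], *sol )
    resid_pp = np.log10( data[ :, 1 ] ) - double_power_log( data[ :, 0 ], *asol )
    chi2_pe = np.sum( resid_pe**2 )
    chi2_pp = np.sum( resid_pp**2 )
    print( "chi-square p-e:", chi2_pe )
    print( "chi-square p-p:", chi2_pp )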