Tags: python, python-3.x, regression, gradient-descent

Gradient Descent to solve for Empirical Equation with 2 knowns and 2 unknowns


I have the following empirical equation (engineering):

Y = A + (X  - B) * (0.3026506 * (A/B))^0.3895556 * (0.2444663 * (A/B))^1.226 + 0.00000560643 * A^(0.00125 * B + 0.3026)

where I don't know the values of A and B (but know that they lie between some physical boundaries), and I am given the values of Y and X in table format:

X Y
35 179.92
40 181.46
50 184.53
60 187.61
70 190.69
90 196.84
100 199.92
110 203
120 206.08
130 209.16
140 212.23
150 215.31

My aim is to tweak the values of A and B so that the RHS of the equation reproduces the Y values in the table, given the constants in the equation. One idea is to use gradient descent for multivariate regression. I think I should take Y as my cost function, but how do I set up gradient descent if I don't know what range of values A and B should take? Maybe another approach is required? Basically, it is one equation with two knowns and two unknowns.

Thanks in advance


Solution

  • In Python you could minimize the sum of squared errors with `scipy.optimize.minimize`:

    import numpy as np
    from scipy.optimize import minimize
    
    def error(par, X, Y):
        # Sum of squared residuals between the model prediction and the data
        A, B = par
        V = (A + (X - B) * (0.3026506 * (A/B))**0.3895556
             * (0.2444663 * (A/B))**1.226
             + 0.00000560643 * A**(0.00125 * B + 0.3026))
        return ((Y - V)**2).sum()
    
    # X and Y must be NumPy arrays so the arithmetic in error() is vectorized
    X = np.array([35, 40, 50, 60, 70, 90, 100, 110, 120, 130, 140, 150], dtype=float)
    Y = np.array([179.92, 181.46, 184.53, 187.61, 190.69, 196.84, 199.92,
                  203.00, 206.08, 209.16, 212.23, 215.31])
    
    res = minimize(error, [1, 2], args=(X, Y))
    print(res.x)  # [202.39468192 108.03429635]
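Since the question says A and B lie between known physical boundaries, you could also pass those limits to `minimize` via its `bounds` argument together with a bounds-aware method such as `L-BFGS-B`. A sketch follows; the bounds (1–500 for both parameters) and the starting point are placeholders to be replaced with the real physical limits:

```python
import numpy as np
from scipy.optimize import minimize

def error(par, X, Y):
    # Sum of squared residuals between the model prediction and the data
    A, B = par
    V = (A + (X - B) * (0.3026506 * (A/B))**0.3895556
         * (0.2444663 * (A/B))**1.226
         + 0.00000560643 * A**(0.00125 * B + 0.3026))
    return ((Y - V)**2).sum()

X = np.array([35, 40, 50, 60, 70, 90, 100, 110, 120, 130, 140, 150], dtype=float)
Y = np.array([179.92, 181.46, 184.53, 187.61, 190.69, 196.84, 199.92,
              203.00, 206.08, 209.16, 212.23, 215.31])

# Placeholder physical bounds for A and B; substitute the real limits
bounds = [(1.0, 500.0), (1.0, 500.0)]
res = minimize(error, x0=[100.0, 50.0], args=(X, Y),
               method="L-BFGS-B", bounds=bounds)
print(res.x, res.fun)
```

Constraining the search this way also avoids evaluating the model where fractional powers of A/B would be undefined (e.g. negative B).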
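If you do want plain gradient descent as originally asked, you don't need the analytic gradient of this messy expression: a central finite-difference approximation works. Below is a minimal sketch; the starting point (150, 100), the learning rate, and the iteration count are illustrative assumptions that need tuning for real convergence:

```python
import numpy as np

def error(par, X, Y):
    # Sum of squared residuals for parameters par = [A, B]
    A, B = par
    V = (A + (X - B) * (0.3026506 * (A/B))**0.3895556
         * (0.2444663 * (A/B))**1.226
         + 0.00000560643 * A**(0.00125 * B + 0.3026))
    return ((Y - V)**2).sum()

def num_grad(f, par, eps=1e-6):
    # Central finite-difference approximation of the gradient of f at par
    g = np.zeros_like(par)
    for i in range(par.size):
        hi, lo = par.copy(), par.copy()
        hi[i] += eps
        lo[i] -= eps
        g[i] = (f(hi) - f(lo)) / (2.0 * eps)
    return g

X = np.array([35, 40, 50, 60, 70, 90, 100, 110, 120, 130, 140, 150], dtype=float)
Y = np.array([179.92, 181.46, 184.53, 187.61, 190.69, 196.84, 199.92,
              203.00, 206.08, 209.16, 212.23, 215.31])

par = np.array([150.0, 100.0])  # hypothetical start inside the physical bounds
lr = 1e-6                       # illustrative learning rate; needs tuning
for _ in range(20000):
    par = par - lr * num_grad(lambda p: error(p, X, Y), par)
```

In practice a library optimizer converges far faster than hand-rolled gradient descent here, which is why the solution above reaches for `scipy.optimize.minimize`.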