
How to optimise the Pearson's correlation coefficient by adjusting the weights?


I would like to adjust the weights w to maximise the r-squared of the Pearson correlation coefficient.

import numpy as np
from scipy import stats

x1_raw=np.array([277, 115, 196])
x2_raw=np.array([263, 118, 191])
x3_raw=np.array([270, 114, 191])

w=np.array([w1, w2, w3])  # w1, w2, w3 are the unknown weights to be found

x1=np.prod([w,x1_raw], axis=0).sum()
x2=np.prod([w,x2_raw], axis=0).sum()
x3=np.prod([w,x3_raw], axis=0).sum()

x=np.array([x1, x2, x3])

y=np.array([71.86, 71.14, 70.76])

slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)
r_squared = r_value**2

So what is the code to adjust [w1, w2, w3] to maximise the r_squared?
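As an aside on the construction of x above: `np.prod([w, x_raw], axis=0).sum()` is just the element-wise product summed, i.e. a dot product, so the three lines can be collapsed into one matrix-vector product. A small sketch (the weights here are arbitrary example values, since w1, w2, w3 are the unknowns in the question):

```python
import numpy as np

# arbitrary example weights, just to demonstrate the equivalence
w = np.array([1.0, 2.0, 3.0])

x1_raw = np.array([277, 115, 196])
x2_raw = np.array([263, 118, 191])
x3_raw = np.array([270, 114, 191])

# np.prod([w, x1_raw], axis=0).sum() multiplies element-wise and sums,
# which is exactly the dot product w . x1_raw
assert np.prod([w, x1_raw], axis=0).sum() == np.dot(w, x1_raw)

# all three weighted sums at once as a matrix-vector product
X_raw = np.vstack([x1_raw, x2_raw, x3_raw])
x = X_raw @ w
```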


Thank you @mathew gunther

The result I got from print(res) is:

final_simplex: (array([[ 0.41998763,  2.66314965,  3.34462572],
  [ 0.4199877 ,  2.66314968,  3.34462654],
  [ 0.41998749,  2.66314983,  3.34462649],
  [ 0.41998765,  2.66314917,  3.34462607]]), array([-1., -1., -1., -1.]))
      fun: -0.99999999999999822
  message: 'Optimization terminated successfully.'
     nfev: 130
      nit: 65
   status: 0
  success: True
        x: array([ 0.41998763,  2.66314965,  3.34462572])

I can understand that x: array([ 0.41998763, 2.66314965, 3.34462572]) is the w, that nfev is the number of function evaluations, and that nit is the number of iterations.

But what are the following parameters?

array([[ 0.41998763,  2.66314965,  3.34462572],
  [ 0.4199877 ,  2.66314968,  3.34462654],
  [ 0.41998749,  2.66314983,  3.34462649],
  [ 0.41998765,  2.66314917,  3.34462607]])

array([-1., -1., -1., -1.])
status: 0
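As I understand SciPy's OptimizeResult for the Nelder-Mead method, those fields mean: final_simplex holds the n + 1 = 4 vertices of the final simplex (four candidate weight vectors in 3-D) together with the objective value at each vertex (here -1 * r_squared, hence the four values near -1., matching fun); status: 0 is the solver-specific termination code, with 0 meaning success (consistent with success: True). A small sketch on a toy objective to illustrate the shapes:

```python
import numpy as np
from scipy import optimize

# toy 3-parameter objective, just to inspect the Nelder-Mead result fields
res = optimize.minimize(lambda v: (v ** 2).sum(), [1.0, 2.0, 3.0],
                        method='Nelder-Mead', tol=1e-6)

vertices, values = res.final_simplex
# Nelder-Mead keeps n + 1 simplex vertices for an n-dimensional problem:
# here n = 3, so 4 candidate points and 4 objective values
assert vertices.shape == (4, 3)
assert values.shape == (4,)

# status 0 means the solver terminated successfully
assert res.status == 0 and res.success
```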

Solution

  • I'm willing to bet there is a closed-form solution, but if hacked code suffices, see below

    (this solution is based on the scipy.optimize package https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html)

    (the minimization is turned into a maximization by returning -1 times r_squared)

    import numpy as np
    from scipy import stats
    from scipy import optimize
    
    def get_linregress(*args):
    
        w1,w2,w3 = args[0]
    
        x1_raw=np.array([277, 115, 196])
        x2_raw=np.array([263, 118, 191])
        x3_raw=np.array([270, 114, 191])
    
        w=np.array([w1, w2, w3])
        #w=np.array([1, 1, 1])
    
        x1=np.prod([w,x1_raw], axis=0).sum()
        x2=np.prod([w,x2_raw], axis=0).sum()
        x3=np.prod([w,x3_raw], axis=0).sum()
    
        x=np.array([x1, x2, x3])
    
        y=np.array([71.86, 71.14, 70.76])
    
        slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)
        r_squared = r_value**2
    
        return -1*r_squared
    
    res = optimize.minimize(get_linregress, [1,2,3], method='Nelder-Mead', tol=1e-6)
    
    res.x
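As a sanity check, the optimized weights can be plugged back into the objective to confirm the r_squared actually reached. A self-contained sketch of the same setup (using dot products in place of the np.prod construction, which is equivalent):

```python
import numpy as np
from scipy import stats
from scipy import optimize

def get_linregress(w):
    x1_raw = np.array([277, 115, 196])
    x2_raw = np.array([263, 118, 191])
    x3_raw = np.array([270, 114, 191])
    # weighted sums, equivalent to np.prod([w, x_raw], axis=0).sum()
    x = np.array([np.dot(w, x1_raw), np.dot(w, x2_raw), np.dot(w, x3_raw)])
    y = np.array([71.86, 71.14, 70.76])
    slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
    return -r_value ** 2  # negate so that minimizing maximizes r_squared

res = optimize.minimize(get_linregress, [1, 2, 3],
                        method='Nelder-Mead', tol=1e-6)
r_squared = -get_linregress(res.x)  # near 1, matching the fun value above
```

Note that with only three data points and three free weights a near-perfect fit (r_squared close to 1) is to be expected: the weights give enough freedom to place the three x-values so that the (x, y) points are essentially collinear.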