python-3.x, neural-network, recurrent-neural-network, unsupervised-learning

Implementing Oja's learning rule in a Hopfield network using Python


I am following this paper to implement Oja's learning rule in Python:

Oja's Learning Rule
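
The final equation from that link, as I read it, is the standard form of Oja's rule, with u the learning rate, V the output, and x the input pattern (my transcription, so worth checking against the paper):

    \Delta w_{ij} = u \, V_i \left( x_j - V_i \, w_{ij} \right)

My setup so far: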

u = 0.01
V = np.dot(self.weight, input_data.T)
print(V.shape, self.weight.shape, input_data.shape)  # (625, 2) (625, 625) (2, 625)

So far I have been able to follow the paper, but on reaching the final equation from the link I run into NumPy array dimension mismatch errors, which seems expected. This is my code for the final equation:

self.weight += u * V * (input_data.T - (V * self.weight))

If I break it down like so:

u = 0.01
V = np.dot(self.weight, input_data.T)
temp = u * V                                 # (625, 2)
x = input_data - np.dot(V.T, self.weight)    # (2, 625)
k = np.dot(temp, x)                          # (625, 625)
self.weight = np.add(self.weight, k, casting='same_kind')

This clears up the dimension errors, but the resulting pattern is far from correct (I was only fixing the dimension ordering, knowing full well the result would be wrong, since those matrix products are not the elementwise products the equation calls for). I want to know whether my interpretation of the equation in the first approach, which seemed like the logical reading, is correct. Any suggestions on implementing the equation properly?


Solution

  • I have implemented the rule based on this link: Oja Rule. The results I get are similar to those from the Hebbian learning rule, so I am not entirely sure about the correctness of the implementation. However, I am posting it so anyone looking for an implementation can get a few ideas and correct the code if it is wrong.

    u = 0.01
    # outputs for all patterns, one column per pattern; note V is computed
    # once from the starting weights rather than after each update
    V = np.dot(self.weight, input_data.T)

    for i, inp in enumerate(input_data):
        # output for pattern i as a column vector
        # (n_features = number of columns of input_data, 625 here)
        v = V[:, i].reshape((n_features, 1))
        # Oja's rule: dW_ij = u * (v_i * x_j - v_i**2 * W_ij);
        # u should scale both terms, not only the decay term
        self.weight += u * ((inp * v) - np.square(v) * self.weight)
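
For anyone who wants to experiment with this outside the class, below is a minimal, self-contained sketch of the same per-pattern update. The names (patterns, weights, n_features) and the random bipolar data are illustrative assumptions, not taken from the post:

    import numpy as np

    rng = np.random.default_rng(0)

    n_features = 625             # e.g. a 25x25 image, flattened
    n_patterns = 2
    # hypothetical random bipolar patterns standing in for real images
    patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n_features))

    u = 0.01
    # small random init; with all-zero weights the output v would stay
    # zero and the rule would never change anything
    weights = rng.normal(scale=0.01, size=(n_features, n_features))

    for epoch in range(10):
        for x in patterns:
            v = weights @ x      # output vector, shape (n_features,)
            # dW_ij = u * (v_i * x_j - v_i**2 * W_ij)
            weights += u * (np.outer(v, x) - (v ** 2)[:, None] * weights)

One design note: recomputing v from the current weights inside the loop (instead of once up front, as in the class snippet above) keeps the outputs in sync with the weights as they change; whether that difference matters over only two patterns is worth checking against the paper.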