Tags: python, neural-network

AI that follows the mouse cursor


I have two input nodes that hold the mouse X and mouse Y positions. The neural network seems to ignore the X position.

I tried adding another input node that represents the mouse X, just to see what happens.

Now the input nodes are:

1: mouseX

2: mouseY

3: mouseX

But now it seems to ignore the Y position and only use X. I think the code is wrong.

The compute() function performs the forward pass through the network.

#brains: number of AIs
#hiddenLayerAmount: number of hidden layers (in this case 1)
#hiddenNodes[j]: number of nodes in hidden layer j (here 2)

def compute():
    for i in range(brains):
        #INPUT LAYER TO FIRST HIDDEN LAYER
        for j in range(hiddenLayerAmount):
            for k in range(hiddenNodes[j]):
                if j == 0:
                    for l in range(inputNodes):
                        #OUTPUT                    #INPUT           #WEIGHT                     #BIAS
                        hiddenLayers[i][j][k][2] = inputLayer[i][l]*hiddenLayers[i][j][k][0][l]+hiddenLayers[i][j][k][1]
                        
                #HIDDEN LAYER TO NEXT HIDDEN LAYER
                else:
                    for l in range(hiddenNodes[j-1]):
                        #OUTPUT                    #INPUT                     #WEIGHT                     #BIAS
                        hiddenLayers[i][j][k][2] = hiddenLayers[i][j-1][k][2]*hiddenLayers[i][j][k][0][l]+hiddenLayers[i][j][k][1]
        #LAST HIDDEN LAYER TO OUTPUT LAYER
        for j in range(outputNodes):
            for k in range(hiddenNodes[-1]):
                #OUTPUT                #INPUT                    #WEIGHT                 #BIAS
                outputLayer[i][j][2] = hiddenLayers[i][-1][k][2]*outputLayer[i][j][0][k]+outputLayer[i][j][1]
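
For reference, the indexing above means each node is stored as a three-element list: [0] the incoming weights, [1] the bias, [2] the computed output. A minimal sketch of that layout for one brain (the weight values are just placeholders):

import random

#Each node is [weights, bias, output]; the numbers here are placeholders.
brains = 1
inputNodes = 2
hiddenNodes = [2]
outputNodes = 2

inputLayer = [[0.0, 0.0]]  #[mouseX, mouseY] for brain 0
hiddenLayers = [[  #brain 0
    [  #hidden layer 0
        [[random.uniform(-1, 1) for _ in range(inputNodes)], 0.0, 0.0],
        [[random.uniform(-1, 1) for _ in range(inputNodes)], 0.0, 0.0],
    ]
]]
outputLayer = [[  #brain 0
    [[random.uniform(-1, 1) for _ in range(hiddenNodes[-1])], 0.0, 0.0]
    for _ in range(outputNodes)
]]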

Solution

  • Looking at your compute() function, two things stand out. First, you are directly assigning the weighted sum to hiddenLayers[i][j][k][2] and outputLayer[i][j][2], which overwrites the value on every iteration of the inner loop; usually you'd start at zero and accumulate the weighted sum, adding the bias once. That would explain exactly what you're seeing: only the last input's contribution survives the loop, so with inputs [mouseX, mouseY] the network effectively sees only mouseY, and after you appended a third mouseX node it sees only mouseX. Second, you are using the raw weighted sum as the node output; usually an activation function like ReLU, Sigmoid, or Tanh is applied to introduce non-linearity. A minimal corrected sketch follows.
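
Something like this, assuming the [weights, bias, output] node layout your indexing implies; math.tanh is just one choice of activation, and I haven't tested it against your full setup:

import math

def compute(inputLayer, hiddenLayers, outputLayer, brains):
    for i in range(brains):
        for j in range(len(hiddenLayers[i])):
            #Inputs to this layer: the raw inputs for layer 0,
            #otherwise the previous hidden layer's outputs.
            if j == 0:
                prevOutputs = inputLayer[i]
            else:
                prevOutputs = [node[2] for node in hiddenLayers[i][j - 1]]
            for node in hiddenLayers[i][j]:
                #Start from the bias and accumulate the weighted sum
                #instead of overwriting it on every iteration.
                total = node[1]
                for l in range(len(prevOutputs)):
                    total += prevOutputs[l] * node[0][l]
                #Apply an activation to introduce non-linearity.
                node[2] = math.tanh(total)
        #Last hidden layer to output layer, same pattern.
        lastOutputs = [node[2] for node in hiddenLayers[i][-1]]
        for node in outputLayer[i]:
            total = node[1]
            for l in range(len(lastOutputs)):
                total += lastOutputs[l] * node[0][l]
            node[2] = math.tanh(total)

Note that this also indexes the previous layer's output with l rather than k in the hidden-to-hidden step; your else branch reads hiddenLayers[i][j-1][k][2], which mixes up the two indices. With your structures in place, compute(inputLayer, hiddenLayers, outputLayer, brains) fills in each node's output slot.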

    Just some thoughts, anyway...