I'm new to AI and neural networks. I've read a few articles on how to build a neural network from scratch in Python, so I decided to build my own.
My code consists of an NN class (no training function yet) that supports a dynamic number of layers. But every time I test it with random numbers, the predictions are always greater than 0.7.
What am I doing wrong?
A little info: layers_counts is a list of integers giving the neuron count of each layer, e.g. Layer 1: 5 neurons, Layer 2: 3 neurons -> [5, 3].
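For example, with 4 inputs and layers_counts = [5, 3], I expect the constructor to end up with two weight matrices, one per layer:

    net = NeuralNetwork(4, [5, 3])
    print([np.array(w).shape for w in net.weights])  # expecting [(4, 5), (5, 3)]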
import numpy as np

class NeuralNetwork:
    def __init__(self, inputs_count, layers_counts, bias=0):
        self.weights = []
        self.layer_schema = layers_counts
        self.bias = bias
        for lidx, litem in enumerate(layers_counts):
            if lidx == 0:
                # first layer: weights map the inputs to the first layer's neurons
                self.weights.append(np.random.rand(inputs_count, layers_counts[0]).tolist())
                if len(layers_counts) == 1:
                    break
                else:
                    continue
            # later layers: weights map the previous layer's neurons to this one's
            self.weights.append(np.random.rand(layers_counts[lidx - 1], litem).tolist())

    def train(self, inputs, epochs=100, acc_threshold=0.9):
        todolist = 1  # not implemented yet

    def predict(self, inputs):
        last_result = 0.0
        last_inputs = inputs
        for layer_c in range(0, len(self.layer_schema)):
            last_inputs = np.dot(last_inputs, self.weights[layer_c])
            last_result = last_inputs + self.bias
        # sigmoid is applied once, to the final layer's output plus the bias
        return 1 / (1 + np.exp(-last_result))

net = NeuralNetwork(4, [2, 1, 4], bias=5.3)
print(net.predict([-0.5, 0.3, 0.9, 1]))
Sample results:
[0.99845756 0.99601029 0.99808744 0.99788011]
or
[0.99716477 0.99547246 0.99525549 0.99702588]
It seems to be because you are adding a bias of 5.3 right before the sigmoid: in predict, last_result is the final layer's output plus self.bias, and sigmoid(z + 5.3) is already very close to 1 unless z is strongly negative. Since np.random.rand draws the weights uniformly from [0, 1), they are all non-negative, so the pre-activation rarely gets negative enough to offset the bias, and the outputs tend towards 1.
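A quick way to see this (a minimal check, reusing the NeuralNetwork class from the question): compare the sigmoid of the bias on its own with a run of the same architecture built with bias=0.

    import numpy as np

    # sigmoid of the bias alone is already ~0.995, so anything non-negative
    # added to it stays in the saturated part of the curve
    print(1 / (1 + np.exp(-5.3)))  # ~0.995

    # same architecture, but without the large bias: the outputs now vary
    # from run to run instead of being pinned near 1
    net_no_bias = NeuralNetwork(4, [2, 1, 4], bias=0)
    print(net_no_bias.predict([-0.5, 0.3, 0.9, 1]))

If you do want a bias, it is usually a small per-layer value that gets learned during training (often initialised to 0), so it does not dominate the weighted sums the way a fixed 5.3 does here.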