I got the results by running the code provided in this link: Neural Network – Predicting Values of Multiple Variables. I was able to compute losses, accuracy, etc. However, every time I run this code, I get a different result. Is it possible to get the same (consistent) result each run?
The code is full of random.randint() calls everywhere! Furthermore, the weights are usually initialized randomly as well, and the batch_size also has a (fairly minor) influence on the result.
Using adam as the optimizer means you'll be performing stochastic gradient descent, which starts its iterations from a random initial point in order to converge.

Solution: fix the seed with np.random.seed() (and the other random sources) before building and training the model.
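A minimal sketch of what I mean, assuming the linked code uses Keras with a TensorFlow backend; the seed value, the placeholder model, and the dummy data are illustrative only and not taken from the question:

```python
import os
import random

import numpy as np
import tensorflow as tf

SEED = 42  # arbitrary; any fixed value works

# Fix every source of randomness *before* the model is built.
os.environ["PYTHONHASHSEED"] = str(SEED)  # only fully effective if set before the interpreter starts
random.seed(SEED)                         # Python's built-in random (random.randint, etc.)
np.random.seed(SEED)                      # NumPy: weight init in pure-NumPy code, data shuffling
tf.random.set_seed(SEED)                  # TensorFlow/Keras: initializers, dropout, etc.

# Placeholder model, just to show where the seeding has to happen
# relative to model construction; replace with the network from the linked code.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Dummy data; with shuffle=False (or a seeded shuffle) the batch order is deterministic too.
X = np.random.rand(100, 4)
y = np.random.rand(100, 1)
model.fit(X, y, epochs=5, batch_size=10, shuffle=False, verbose=0)
```

With all seeds fixed like this, repeated runs of the same script should produce the same losses and accuracy (up to minor floating-point nondeterminism on some GPU setups).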
If I find a way to make the batch_size/epoch sampling consistent as well, I will edit my answer.