I have a dataset of about 100 numeric values. I have set the learning rate of my neural network to 0.0001 and have successfully trained it on the dataset for over 1 million iterations. My question is: what is the effect of very low learning rates in neural networks?
A low learning rate mainly implies slow convergence: you move down the loss function in smaller steps (the step size is proportional to the learning rate). If your loss function is convex this is not a problem: you will wait longer, but you will still reach a good solution.
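For intuition, here is a minimal sketch (a toy convex loss, not your actual network) showing that a smaller learning rate reaches the same minimum, just with many more update steps:

```python
def gradient_descent(lr, max_steps, x0=5.0):
    """Minimize the convex loss f(x) = x**2 with plain gradient descent.

    Returns the number of steps needed to get within 1e-6 of the minimum at x = 0.
    """
    x = x0
    for i in range(max_steps):
        grad = 2 * x       # derivative of x**2
        x = x - lr * grad  # the learning rate scales the step size
        if abs(x) < 1e-6:
            return i + 1
    return max_steps

print(gradient_descent(lr=0.1,    max_steps=1_000_000))  # converges in a few dozen steps
print(gradient_descent(lr=0.0001, max_steps=1_000_000))  # same minimum, tens of thousands of steps
```

Both runs end up at the same solution; the only cost of the tiny learning rate here is wall-clock time.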
If, as is the case for deep neural networks, your loss function is not convex, then a low learning rate can lead you to a "good" optimum that is not the best one: you can get stuck in a local minimum because the steps are not big enough to jump out of it.
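As an illustration only (a hand-picked 1D non-convex function, not representative of a real network's loss surface), here is a sketch where a tiny learning rate settles into the nearest, shallower minimum while a larger one overshoots it and ends up in the deeper basin:

```python
def f(x):
    # Toy non-convex loss: shallow minimum near x ~ 1.34, deeper one near x ~ -1.47
    return 0.5 * x**4 - 2 * x**2 + 0.5 * x

def grad(x):
    return 2 * x**3 - 4 * x + 0.5

def descend(lr, x0=2.5, steps=10_000):
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x, f(x)

print(descend(lr=0.001))  # tiny steps: stuck in the shallow minimum near x ~ 1.34
print(descend(lr=0.15))   # bigger steps: jumps past it into the deeper minimum near x ~ -1.47
```

This is of course a cherry-picked example; in practice a larger learning rate is no guarantee of finding a better optimum, but it does give the iterates a chance to leave shallow basins.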
That's why there are adaptive optimization algorithms: methods like Adam or RMSProp effectively maintain a different learning rate for each weight in the network (every per-weight rate starts from the same initial value). In this way the optimizer can adjust every single parameter independently, with the aim of finding a better solution and making the choice of the initial learning rate less critical.
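To make the "per-weight learning rate" idea concrete, here is a minimal NumPy sketch of the RMSProp update rule (the toy gradients are made up for illustration):

```python
import numpy as np

def rmsprop_step(w, g, v, lr=0.001, rho=0.9, eps=1e-8):
    """One RMSProp update: v is a per-weight running average of squared gradients."""
    v = rho * v + (1 - rho) * g**2
    # The effective step lr / (sqrt(v) + eps) differs for every weight:
    # weights with consistently large gradients take smaller steps, and vice versa.
    w = w - lr * g / (np.sqrt(v) + eps)
    return w, v

# Two weights with very different gradient magnitudes still make comparable progress.
w = np.array([1.0, 1.0])
v = np.zeros_like(w)
for _ in range(100):
    g = np.array([10.0, 0.01]) * w  # toy gradients: one steep direction, one shallow
    w, v = rmsprop_step(w, g, v)
print(w)
```

With plain SGD the same global learning rate would move the steep-gradient weight far more than the shallow one; the per-weight normalization is what makes the initial learning rate less critical.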