Is there any method that is faster and more efficient than gradient descent for updating the weights in a neural network? Can we use multiplicative weight updates in place of gradient descent, and would that be better?
You could take a look at the Levenberg-Marquardt algorithm (LMA). I've heard that MLPs are often trained most efficiently with it.
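As a rough illustration (not part of the original answer): Levenberg-Marquardt is a nonlinear least-squares method, so training has to be posed as minimising a vector of per-sample residuals. The sketch below does that for a tiny one-hidden-layer network, assuming SciPy is available; `scipy.optimize.least_squares(..., method='lm')` wraps MINPACK's LM implementation. The network size and toy data are made up for the example.

```python
# Minimal sketch: fit a tiny 1-hidden-layer MLP with Levenberg-Marquardt,
# by treating training as a nonlinear least-squares problem.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) plus a little noise.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * rng.standard_normal(200)

H = 5  # hidden units (illustrative choice)

def unpack(theta):
    """Split the flat parameter vector into the MLP's weights and biases."""
    W1 = theta[:H].reshape(1, H)
    b1 = theta[H:2 * H]
    W2 = theta[2 * H:3 * H].reshape(H, 1)
    b2 = theta[3 * H]
    return W1, b1, W2, b2

def residuals(theta):
    """Per-sample prediction errors; LM minimises their sum of squares."""
    W1, b1, W2, b2 = unpack(theta)
    hidden = np.tanh(X @ W1 + b1)        # shape (200, H)
    pred = (hidden @ W2).ravel() + b2    # shape (200,)
    return pred - y

theta0 = 0.1 * rng.standard_normal(3 * H + 1)

# method='lm' is SciPy's interface to MINPACK's Levenberg-Marquardt solver;
# it needs at least as many residuals as parameters (200 >= 16 here).
result = least_squares(residuals, theta0, method='lm')
print("final sum of squared errors:", np.sum(result.fun ** 2))
```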
Another promising algorithm is BFGS (Broyden-Fletcher-Goldfarb-Shanno).
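Again purely as a hedged sketch (the answer itself doesn't prescribe any implementation): BFGS works on a scalar loss and its gradient, so the same kind of toy network can be trained with SciPy's generic `minimize` using `method='BFGS'` and a hand-written gradient. All sizes and names below are illustrative.

```python
# Minimal sketch: train a tiny 1-hidden-layer MLP with BFGS via scipy.optimize.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()

H = 5  # hidden units (illustrative choice)

def loss_and_grad(theta):
    """Mean squared error of the MLP and its gradient w.r.t. all parameters."""
    W1 = theta[:H].reshape(1, H); b1 = theta[H:2 * H]
    W2 = theta[2 * H:3 * H].reshape(H, 1); b2 = theta[3 * H]
    h = np.tanh(X @ W1 + b1)             # (N, H)
    pred = (h @ W2).ravel() + b2         # (N,)
    err = pred - y
    loss = np.mean(err ** 2)

    # Backpropagate the MSE through the two layers to get the gradient.
    N = len(y)
    d_pred = 2 * err / N                                 # dL/dpred
    gW2 = h.T @ d_pred                                   # (H,)
    gb2 = d_pred.sum()
    d_h = np.outer(d_pred, W2.ravel()) * (1 - h ** 2)    # dL/d(pre-activation)
    gW1 = X.T @ d_h                                      # (1, H)
    gb1 = d_h.sum(axis=0)
    grad = np.concatenate([gW1.ravel(), gb1, gW2.ravel(), [gb2]])
    return loss, grad

theta0 = 0.1 * rng.standard_normal(3 * H + 1)

# jac=True tells SciPy that loss_and_grad returns (loss, gradient).
result = minimize(loss_and_grad, theta0, jac=True, method='BFGS')
print("final MSE:", result.fun)
```

In practice the limited-memory variant L-BFGS (`method='L-BFGS-B'`) is usually preferred once the number of weights grows, since full BFGS stores a dense Hessian approximation.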
Both algorithms are based on Newton's method, which uses a local quadratic approximation of the function being minimised, and in my opinion they are harder to understand than backpropagation.
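To make that connection concrete, here are the updates in standard (generic) notation, not taken from the answer above: gradient descent follows the negative gradient, while the Newton-type methods also use exact or approximate curvature information.

$$
\begin{aligned}
\text{gradient descent:}\quad & \theta_{t+1} = \theta_t - \eta\,\nabla L(\theta_t) \\
\text{Newton's method:}\quad & \theta_{t+1} = \theta_t - H(\theta_t)^{-1}\,\nabla L(\theta_t) \\
\text{Levenberg-Marquardt:}\quad & \theta_{t+1} = \theta_t - \big(J^\top J + \lambda I\big)^{-1} J^\top r \\
\text{BFGS:}\quad & \theta_{t+1} = \theta_t - \alpha_t\,B_t^{-1}\,\nabla L(\theta_t)
\end{aligned}
$$

Here $H$ is the Hessian of the loss $L$, $J$ is the Jacobian of the residual vector $r$ (so $J^\top J + \lambda I$ is a damped Gauss-Newton approximation of $H$), and $B_t$ is the Hessian approximation that BFGS builds up from successive gradient differences.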