Tags: tensorflow, deep-learning, torch

Will there be a difference in performance between deep learning implementations when the settings are the same?


I have used Torch, TensorFlow, and sknn, and found major differences in their design, syntax, environment requirements, and performance in terms of speed. However, I want to know whether there is any difference among these libraries when the neural network has exactly the same settings.

In other words, will the performance (in terms of, say, accuracy on classification tasks) differ across implementations when the network itself has the same settings (number of layers, types of layers, dropout, activations, objective function, etc.)?

Thank you so much.


Solution

  • Is there any difference among these libraries when the neural network has exactly the same settings?

    Yes. See the ConvNet Benchmarks, for example.

    There are always minor differences in how things are computed, even when the same quantity is being computed. For example, x^4 can be evaluated as tmp = (x*x); tmp*tmp or as x*(x*(x*x)), and because floating-point arithmetic is not associative, the rounded results can differ. The same goes for the loop order in a matrix multiplication, ijk vs. ikj (see my article for more). Compiler optimizations can also matter considerably.
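    A minimal Python sketch (not from the original answer) of the point about evaluation order: regrouping the same mathematical expression can change the rounded floating-point result, and the two x^4 evaluation orders mentioned above may or may not agree bit-for-bit depending on x.

```python
# Floating-point addition is not associative: the same three
# numbers summed in different groupings round differently.
a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)   # 0.6
print(a == b)           # False

# The two evaluation orders for x^4 from the answer; for a given x,
# their rounded results may or may not be bit-for-bit identical.
def pow4_paired(x):
    tmp = x * x
    return tmp * tmp

def pow4_chained(x):
    return x * (x * (x * x))
```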

    In other words, will the performance (in terms of, say, accuracy on classification tasks) differ across implementations when the network itself has the same settings (number of layers, types of layers, dropout, activations, objective function, etc.)?

    That is a different question from speed. Due to numerical issues, you may still get slightly different results, but the differences should not be significant.
