
Does MXNet have an auto-differentiation feature?


MXNet and TensorFlow both claim to support auto-differentiation.

In MXNet, I need to define the backward pass when creating a new op (like a loss function), but not in TensorFlow.

To my knowledge, auto-differentiation means I don't need to write the backward pass myself. So, does MXNet have an auto-differentiation feature?


Solution

  • Yes, MXNet has autograd.

    Here is a tutorial: http://gluon.mxnet.io/chapter01_crashcourse/autograd.html
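    As a minimal sketch (along the lines of the linked tutorial), MXNet's `autograd` records the forward computation and derives the backward pass for you, so no manual gradient code is needed:

    ```python
    import mxnet as mx
    from mxnet import autograd, nd

    x = nd.array([[1.0, 2.0], [3.0, 4.0]])
    x.attach_grad()            # allocate storage for the gradient of x
    with autograd.record():    # record operations for differentiation
        y = 2 * x * x
    y.backward()               # autograd computes dy/dx = 4x
    print(x.grad)              # gradient matches 4 * x
    ```

    You only need a hand-written backward when implementing a custom operator whose internals MXNet cannot trace; ordinary compositions of built-in ops are differentiated automatically, just as in TensorFlow.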