
Understanding JAX argnums parameter in its gradient function


I'm trying to understand the behaviour of argnums in JAX's gradient function. Suppose I have the following function:

import jax.numpy as jnp

def make_mse(x, t):
  def mse(w, b):
    # half the sum of squared residuals of the linear model x.dot(w) + b
    return jnp.sum(jnp.power(x.dot(w) + b - t, 2)) / 2
  return mse

And I'm taking the gradient in the following way:

w_gradient, b_gradient = grad(make_mse(train_data, y), argnums=(0, 1))(w, b)

argnums=(0, 1) in this case, but what does it mean? With respect to which variables is the gradient calculated? What would be the difference if I used argnums=0 instead? Also, can I use the same function to get the Hessian matrix?

I looked at the JAX documentation on it, but couldn't figure it out.


Solution

  • When you pass multiple argnums to grad, the result is a function that returns a tuple of gradients, one per argument, equivalent to computing each gradient separately:

    from jax import grad

    def f(x, y):
      return x ** 2 + x * y + y ** 2

    # a tuple argnums yields all requested gradients at once
    df_dxy = grad(f, argnums=(0, 1))
    df_dx = grad(f, argnums=0)   # derivative with respect to x only
    df_dy = grad(f, argnums=1)   # derivative with respect to y only

    x = 3.0
    y = 4.25
    assert df_dxy(x, y) == (df_dx(x, y), df_dy(x, y))
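    Applied to your example: argnums=(0, 1) differentiates mse with respect to both w and b (its first and second positional arguments), while argnums=0 would give only the gradient with respect to w. A minimal sketch, with made-up shapes for train_data and y:

    import jax.numpy as jnp
    from jax import grad

    train_data = jnp.ones((5, 3))   # hypothetical: 5 samples, 3 features
    y = jnp.zeros(5)                # hypothetical targets
    w, b = jnp.zeros(3), 0.0

    mse = make_mse(train_data, y)
    w_gradient, b_gradient = grad(mse, argnums=(0, 1))(w, b)  # both gradients
    w_gradient_only = grad(mse, argnums=0)(w, b)              # d(mse)/dw only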
    

    If you want to compute mixed second derivatives, you can do so by applying grad repeatedly:

    # d²f/dydx: differentiate df/dx (argnums=0) with respect to y (argnums=1)
    d2f_dxdy = grad(grad(f, argnums=0), argnums=1)
    assert d2f_dxdy(x, y) == 1
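    To get the full Hessian matrix, JAX also provides jax.hessian (a composition of jacfwd and jacrev), which accepts the same argnums. A minimal sketch, assuming the f defined above; the commented values are what this symmetric example should produce:

    import jax

    # with argnums=(0, 1), hessian returns a nested tuple holding all four
    # second partial derivatives of f(x, y) = x**2 + x*y + y**2
    hess = jax.hessian(f, argnums=(0, 1))
    print(hess(3.0, 4.25))  # ((2.0, 1.0), (1.0, 2.0)), up to array wrapping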