Tags: python, numpy, machine-learning, mathematical-optimization

Logistic regression and autograd


Let $X \in \mathbb{R}^{S \times F}$ be a matrix of $S$ samples with $F$ features each. I want to classify these samples using logistic regression with autograd [1]. The code I am using is similar to the one in the following example [2].

The only thing I want to change is that I have an additional weight matrix $W \in \mathbb{R}^{F \times L}$ that I want to apply to the features: each feature vector is first multiplied by $W$ and then fed into the logistic regression.
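Written out, with a logistic-regression weight vector $w \in \mathbb{R}^{L}$ (a symbol I introduce only for this description), the model I have in mind is

$$\hat{y} = \sigma(X W w),$$

where $\sigma$ is the logistic sigmoid, and both $W$ and $w$ should be learned.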

Is it somehow possible to train $W$ and the weights of the logistic regression simultaneously using autograd?

I have tried the following code, but unfortunately the weights stay at 0.

import autograd.numpy as np
from autograd import grad

def sigmoid(x):
    return 0.5 * (np.tanh(x) + 1)


def logistic_predictions(weights, inputs):
    # Outputs probability of a label being true according to logistic model.
    return sigmoid(np.dot(inputs, weights))


def training_loss(weights):
    global inputs
    # Training loss is the negative log-likelihood of the training labels.

    # The first 3 entries are the logistic-regression weights,
    # the remaining 9 entries are the 3x3 feature-weight matrix W.
    feature_weights = weights[3:]
    feature_weights = np.reshape(feature_weights, (3, 3))

    inputs = np.dot(inputs, feature_weights)

    preds = logistic_predictions(weights[0:3], inputs)
    label_probabilities = preds * targets + (1 - preds) * (1 - targets)

    return -np.sum(np.log(label_probabilities))


# Build a toy dataset.
inputs = np.array([[0.52, 1.12, 0.77],
                   [0.88, -1.08, 0.15],
                   [0.52, 0.06, -1.30],
                   [0.74, -2.49, 1.39]])

targets = np.array([True, True, False, True])

# Define a function that returns gradients of training loss using autograd.
training_gradient_fun = grad(training_loss)

# Optimize weights using gradient descent.
weights = np.zeros(3 + 3 * 3)
print("Initial loss:", training_loss(weights))
for i in range(100):
    print(i)
    print(weights)
    weights -= training_gradient_fun(weights) * 0.01

print("Trained loss:", training_loss(weights))

[1] https://github.com/HIPS/autograd

[2] https://github.com/HIPS/autograd/blob/master/examples/logistic_regression.py


Solution

  • Typical practice is to concatenate all "vectorized" parameters into a single decision-variable vector; the usage sketch after the code below shows how such a vector can be built and unpacked.

    If you update logistic_predictions to include the W matrix, via something like

    def logistic_predictions(weights_and_W, inputs):
        '''
        Here, :arg weights_and_W: is an array of the form [weights, W.ravel()].
        '''
        # Outputs probability of a label being true according to logistic model.
        weights = weights_and_W[:inputs.shape[1]]
        W_raveled = weights_and_W[inputs.shape[1]:]
        n_W = len(W_raveled)
        # Integer division so reshape receives an integer column count.
        W = W_raveled.reshape(inputs.shape[1], n_W // inputs.shape[1])

        return sigmoid(np.dot(np.dot(inputs, W), weights))
    

    then simply change training_loss (from the original example) to

    def training_loss(weights_and_W):
        # Training loss is the negative log-likelihood of the training labels.
        preds = logistic_predictions(weights_and_W, inputs)
        label_probabilities = preds * targets + (1 - preds) * (1 - targets)
        return -np.sum(np.log(label_probabilities))
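
    For completeness, here is a minimal sketch of how the concatenated parameter vector could be built, optimized, and unpacked again, assuming the toy dataset from the question and a 3x3 W (names such as weights_init and n_features are only illustrative). Note the small random initialization: if both the logistic weights and W start at exactly zero, every prediction is sigmoid(0) = 0.5 and the gradient is identically zero, so plain gradient descent never moves.

    import autograd.numpy as np
    import autograd.numpy.random as npr
    from autograd import grad

    # Toy dataset from the question.
    inputs = np.array([[0.52, 1.12, 0.77],
                       [0.88, -1.08, 0.15],
                       [0.52, 0.06, -1.30],
                       [0.74, -2.49, 1.39]])
    targets = np.array([True, True, False, True])

    n_features = inputs.shape[1]  # 3 for this toy dataset

    # Small random start instead of all zeros (see note above); seeded for reproducibility.
    rs = npr.RandomState(0)
    weights_init = 0.1 * rs.randn(n_features)
    W_init = 0.1 * rs.randn(n_features, n_features)

    # Concatenate the logistic weights and the raveled W into one decision-variable vector.
    weights_and_W = np.concatenate([weights_init, W_init.ravel()])

    training_gradient_fun = grad(training_loss)
    for i in range(100):
        weights_and_W -= 0.01 * training_gradient_fun(weights_and_W)

    # Unpack the trained parameters again.
    trained_weights = weights_and_W[:n_features]
    trained_W = weights_and_W[n_features:].reshape(n_features, n_features)
    print("Trained loss:", training_loss(weights_and_W))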