Tags: python, machine-learning, deep-learning, neural-network, mlp

Is there a way to use an MLP (or any other algorithm) that takes the objective and error functions as input and returns the optimum parameters?


I was wondering whether there is a pre-built implementation of an MLP in Python that can take my objective function, loss function, and tolerance as input and return the optimum parameters for my function. I have gone through the MLPs in TensorFlow and scikit-learn, but there seems to be nothing of this sort. Any suggestions are welcome.

Thanks in advance.


Solution

  • As long as your objective function is differentiable, this is exactly what a neural network is designed to do. You can write any differentiable function in TF as an objective and then train your MLP with, say, SGD. It is a matter of understanding how things work, or accepting that "pre-built" will not be as simple as a single "solve my problem" function; it requires a few more commands. But in the end, what you are asking for is any NN implementation, be it TF, Keras, etc.

    For example, you can use Keras and implement your custom loss:

    import tensorflow as tf

    def my_loss_fn(y_true, y_pred):
        squared_difference = tf.square(y_true - y_pred)
        return tf.reduce_mean(squared_difference, axis=-1)  # note the `axis=-1`

    model.compile(optimizer='adam', loss=my_loss_fn)
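    Putting the pieces together, here is a minimal end-to-end sketch. The layer sizes, the toy target `y = 2x + 1`, and the training settings are illustrative choices, not part of the answer above; the trained weights returned by `model.get_weights()` play the role of the "optimum parameters" the question asks for.

```python
import numpy as np
import tensorflow as tf

# Custom loss: mean squared error written by hand (same as the snippet above).
def my_loss_fn(y_true, y_pred):
    squared_difference = tf.square(y_true - y_pred)
    return tf.reduce_mean(squared_difference, axis=-1)

# A small MLP; the layer sizes here are arbitrary illustrative choices.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss=my_loss_fn)

# Toy data: fit y = 2x + 1 with a little noise (a stand-in objective).
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(256, 1)).astype("float32")
y = (2.0 * x + 1.0 + 0.01 * rng.standard_normal((256, 1))).astype("float32")

history = model.fit(x, y, epochs=50, batch_size=32, verbose=0)

# The "optimum parameters" are the trained weights of the network.
weights = model.get_weights()
```

    Swapping `my_loss_fn` for any other differentiable function of `y_true` and `y_pred` is all it takes to change the objective; Keras handles the gradient computation and the parameter updates.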