Tags: python, validation, machine-learning, confusion-matrix

Confusion matrix for numerical output? - Python


I want to evaluate the performance of my model. The problem is that I have always used a confusion matrix, because I have always built models with categorical output (classification). Now I have a model with numerical output, and I can find neither a way nor an explanation of how to evaluate its performance. When I use code from other kernels, they report a % accuracy (if that is what it is?), and I can't find any reference or work out how this % is computed.

So, for a model with numerical output, how and where can I find evaluation techniques? (And their explanation, because I don't like to use things I don't understand.)

I'm working with Python.


Solution

  • The most popular techniques that come to mind for evaluating regression models are:

    • Mean Squared Error (and its common variants, e.g. Mean Absolute Error, Mean Absolute Percentage Error, Mean Percentage Error)

    • R^2

    If you are interested in how percentage error is calculated, look at the sections "Mean absolute percentage error" and "Mean percentage error" in the article mentioned above; a minimal Python sketch of these metrics also follows below.
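
As a minimal sketch of how these metrics can be computed in Python, assuming scikit-learn and NumPy are installed; the `y_true` and `y_pred` arrays are made-up values used purely for illustration:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Hypothetical ground-truth values and model predictions (illustration only)
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.8, 5.4, 2.9, 6.5])

mse = mean_squared_error(y_true, y_pred)    # Mean Squared Error
mae = mean_absolute_error(y_true, y_pred)   # Mean Absolute Error
r2 = r2_score(y_true, y_pred)               # R^2 (coefficient of determination)

# MAPE and MPE computed directly from their definitions
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100  # Mean Absolute Percentage Error
mpe = np.mean((y_true - y_pred) / y_true) * 100           # Mean Percentage Error

print(f"MSE:  {mse:.3f}")
print(f"MAE:  {mae:.3f}")
print(f"R^2:  {r2:.3f}")
print(f"MAPE: {mape:.2f}%")
print(f"MPE:  {mpe:.2f}%")
```

For interpretation: lower MSE/MAE/MAPE values indicate better fit, while R^2 closer to 1 means the model explains more of the variance in the target.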