Tags: machine-learning, classification, octave, supervised-learning

How to fit a classifier with high accuracy on the training set using few features?


I have inputs (r, c) in the range (0, 1], the coordinates of a pixel in an image, and its color, which is only 1 or 2.

I have about 6,400 pixels. My attempt at fitting X = (r, c) and y = color was a failure; the accuracy won't go higher than 70%.

Here's the image: [an anime character]

The first is the actual image, the second is the image I train on (it has only 2 colors), and the last is the image the neural network generated after training about 500 weights for 50 iterations. The input layer has 2 units, the one hidden layer has 100, and the output layer has 2. (For binary classification like this I may need only one output unit, but I am preparing for multi-class classification.)
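
For reference, a sketch of that forward pass in Octave; the sigmoid activations are my assumption, but with biases folded into the weight matrices the count works out to (2+1)*100 + (100+1)*2 = 502, i.e. the roughly 500 weights mentioned above:

    % Sketch of the 2-100-2 forward pass (sigmoid units assumed).
    load input.txt;                            % restores coordinate (m x 2) and idx
    sigmoid = @(z) 1 ./ (1 + exp(-z));
    m = rows(coordinate);                      % number of pixels, ~6400
    Theta1 = 0.1 * randn(100, 3);              % hidden weights incl. bias (learned in practice)
    Theta2 = 0.1 * randn(2, 101);              % output weights incl. bias (learned in practice)
    A1 = [ones(m, 1), coordinate];             % inputs plus bias column, m x 3
    A2 = [ones(m, 1), sigmoid(A1 * Theta1')];  % hidden layer, m x 101
    H  = sigmoid(A2 * Theta2');                % class scores, m x 2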

Why does the classifier fail to fit the training set? I tried generating high-order polynomial terms of those 2 features, but it doesn't help. I tried using a Gaussian kernel with 20-100 random landmarks on the picture to add more features (built roughly as sketched below), and got similar output. I tried logistic regression; it doesn't help either.
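
The Gaussian-kernel landmark features were built roughly like this (the landmark count k and width sigma here are just example values):

    % Gaussian (RBF) features from k random landmark pixels;
    % coordinate is the m x 2 matrix of (r, c) values loaded above.
    k = 50;                                   % 20-100 landmarks were tried
    sigma = 0.1;                              % example kernel width
    L  = coordinate(randperm(rows(coordinate), k), :);
    D2 = sum(coordinate.^2, 2) + sum(L.^2, 2)' - 2 * coordinate * L';
    F  = exp(-D2 / (2 * sigma^2));            % m x k feature matrix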

Please help me increase the accuracy.

Here's the input: input.txt (you can load it into Octave; the variables are coordinate, holding the (r, c) features, and idx, holding the color).

You can try plotting it first to make sure that you understand the input, then try training on it and tell me if you get a better result.
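
For example, something like this should show the two color regions (assuming input.txt was saved in Octave's text format, so load restores both variables):

    % Load the data and plot the two color classes.
    load input.txt;                                  % defines coordinate and idx
    scatter(coordinate(:, 2), coordinate(:, 1), 4, idx, "filled");
    axis ij;                                         % flip the y-axis to match image orientation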


Solution

  • Your problem is hard to model. You are trying to fit a function from R^2 to R, which has lots of complexity: lots of "spikes" and lots of discontinuous regions (pixels that are completely separated from the rest). This is not an easy problem, and not a useful one. In order to overfit your network to such a setting you will need plenty of hidden units. So, what are the options for doing so?

    General things that are missing from the question but are important:

    1. Your output variable should be {0, 1} if you are fitting your network with a cross-entropy cost (log likelihood), which you should use for classification (see the encoding snippet after this list).
    2. 50 iterations (if you are talking about mini-batch iterations) is orders of magnitude too small, unless you mean 50 epochs (iterations over the whole training set).
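
    For point 1, a minimal sketch of both label encodings in Octave, assuming idx holds the {1, 2} colors from the question:

        % Remap the {1, 2} labels: to {0, 1} for a single logistic output,
        % or to one-hot rows for a 2-unit softmax output layer.
        y01 = idx - 1;              % {1, 2} -> {0, 1}
        Y   = eye(2)(idx, :);       % one-hot: row i is [1 0] or [0 1]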

    Actual things that will probably need to be done (at least one of the below):

    1. I assume that you are using ReLU activations (or Tanh, it is hard to say from the output) - you can instead use RBF activations and increase the number of hidden neurons to ~5000 (a sketch of this idea follows the list),
    2. If you do not want to go with RBFs, then you will need 1-2 additional hidden layers to fit a function of this complexity. Try an architecture of the 100-100-100 type instead.
    3. If the above fails, increase the number of hidden units; enough capacity is all you need.
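
    As a quick sanity check for point 1, here is a sketch of the RBF idea in Octave: fix the centers at a dense subset of training pixels and fit only the linear output weights by ridge regression (not a full RBF network with learned centers; the center density, sigma, and lambda are assumptions to tune):

        % RBF features centered on every 2nd training pixel (~3200 centers),
        % with linear output weights solved in closed form.
        C = coordinate(1:2:end, :);
        sigma  = 0.02;                            % narrow kernel for fine detail
        lambda = 1e-3;                            % ridge term for numerical stability
        D2  = sum(coordinate.^2, 2) + sum(C.^2, 2)' - 2 * coordinate * C';
        Phi = exp(-D2 / (2 * sigma^2));           % RBF activations, m x k
        Y   = eye(2)(idx, :);                     % one-hot targets
        W   = (Phi' * Phi + lambda * eye(columns(Phi))) \ (Phi' * Y);
        [~, pred] = max(Phi * W, [], 2);          % predicted class per pixel
        training_accuracy = mean(pred == idx)     % should approach 1 with enough centers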

    In general: neural networks are not designed for working with low-dimensional datasets. This is a nice example from the web showing that you can learn a pixel-position-to-color mapping, but it is completely artificial and seems to actually harm people's intuitions.