I have trained a neural network model (backpropagation), and now I want to classify a new instance.
What I've done:
The problem is how to classify a new instance that has a new value (or several new values) in one or more features, i.e. values outside what the existing model I built earlier has seen.
Does anyone have a solution for this situation, or references I could use to resolve the issue?
Thanks.
Actually, I discussed this with my stochastics lecturer at my university, and his idea is to resolve the problem by fitting a distribution to the errors obtained while building the model. A new instance can then be matched against that distribution, i.e. we look at the likelihood of the instance's error under the distribution (e.g. a Gaussian, a Gaussian mixture, or an empirical distribution). The catch with this idea is that we still have to compute the error for that instance before we can evaluate its likelihood, which means we still have to run the instance through the same existing model/function that produced the error distribution in the first place.
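A minimal sketch of the lecturer's idea, assuming a Gaussian fit: collect the errors seen while building the model, fit a normal distribution to them, and score the new instance by the likelihood of its error. All numbers here are made-up placeholders, and `gaussian_pdf` is a helper I introduce for illustration.

```python
import math

# Made-up errors collected while training the existing model.
training_errors = [0.02, 0.05, 0.03, 0.08, 0.04, 0.06]

# Fit a Gaussian: sample mean and (unbiased) sample standard deviation.
mu = sum(training_errors) / len(training_errors)
var = sum((e - mu) ** 2 for e in training_errors) / (len(training_errors) - 1)
sigma = math.sqrt(var)

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Error of the new instance under the existing model -- this is the catch:
# we still need the model to produce it before we can score the instance.
new_error = 0.07
likelihood = gaussian_pdf(new_error, mu, sigma)
print(likelihood)
```

A low likelihood would flag the instance as unlike anything the model was built on; the same scheme works with a mixture or an empirical distribution in place of the single Gaussian.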
I also discussed this with a friend, and his idea is to use an FFT in place of the usual normalization function, so the result is not confined to a fixed range. The downside is that the overall error may increase because of the error introduced by the FFT itself.
As a short-term solution, perhaps you could clamp the attribute's value to 0 or 1 (the bounds of the normalized range of the original dataset), depending on whether it falls below or above that range.
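For example, assuming min-max normalization was used, the clamp could look like this (feature values are made up for illustration):

```python
import numpy as np

# Normalization bounds learned from the original training data.
train = np.array([[1.0, 10.0],
                  [2.0, 20.0],
                  [3.0, 30.0]])
f_min, f_max = train.min(axis=0), train.max(axis=0)

def normalize(x):
    scaled = (x - f_min) / (f_max - f_min)
    # Force values unseen during training into [0, 1].
    return np.clip(scaled, 0.0, 1.0)

# First feature (5.0) is above the training maximum (3.0), so it clamps to 1.
new_instance = np.array([5.0, 15.0])
print(normalize(new_instance))
```

This keeps the network's inputs in the range it was trained on, at the cost of losing the information of *how far* outside the range the new value was.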
A longer-term solution would be to include such cases in future training of the neural network. Such values may skew the values of other instances to the left or right, so some attention may be required during preprocessing of the training data.
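A sketch of that longer-term route, again assuming min-max normalization: fold the out-of-range instances into the training data and recompute the bounds before retraining, so every instance maps back into [0, 1]. The values are made up for illustration.

```python
import numpy as np

# Original training data and the new out-of-range cases.
train = np.array([[1.0, 10.0],
                  [2.0, 20.0],
                  [3.0, 30.0]])
new_cases = np.array([[5.0, 15.0]])

# Recompute normalization bounds over the combined data.
combined = np.vstack([train, new_cases])
f_min, f_max = combined.min(axis=0), combined.max(axis=0)

# All instances, old and new, now fall inside [0, 1] under the new bounds,
# and the network would be retrained on this rescaled data.
normalized = (combined - f_min) / (f_max - f_min)
print(normalized.min(), normalized.max())
```

Note that changing the bounds shifts every normalized value, which is exactly the skew mentioned above, so the network does need retraining rather than just a new scaler.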
Hope this helps!