To improve the accuracy of an AdaBoost classifier (for image classification), I am using genetic programming to derive new statistical measures. Every time a new feature is generated, I evaluate its fitness by training an AdaBoost classifier on that feature alone and testing its performance. I would like to know whether that procedure is correct; that is, whether it is valid to train a learning model on a single feature.
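For context, the evaluation loop I have in mind looks roughly like this (a sketch assuming scikit-learn; `extract_feature` is a placeholder for one GP-derived measure, not real code from my project):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

def feature_fitness(extract_feature, images, labels):
    # Build an (n_samples, 1) design matrix from the single candidate feature
    X = np.array([[extract_feature(img)] for img in images])
    clf = AdaBoostClassifier(n_estimators=50)
    # Cross-validated accuracy is a less biased fitness than training accuracy
    return cross_val_score(clf, X, labels, cv=3).mean()
```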
You can build a model on one feature. I assume that by "one feature" you mean a single number in R (otherwise it would be completely "traditional" usage). However, this means that you are building a classifier in a one-dimensional space, and as such many classifiers will be redundant, since it is a really simple problem. More importantly, checking whether you can correctly classify objects using one particular dimension does not tell you whether that feature is good or bad once you use a combination of them. In particular it may be the case that:
you have four points `(0,0), (0,1), (1,0), (1,1)` such that `(0,0)` and `(1,1)` belong to one class and the rest to the other. If you look separately at each dimension, the best possible accuracy is `0.5`, because at each coordinate value (`0` or `1`) you always have points of both classes. Once the dimensions are combined, you can easily separate the classes, as this is the `xor` problem.

To sum up: it is fine to build a classifier in a one-dimensional space, but remember that a feature's accuracy in isolation tells you little about its value in combination with other features.
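The `xor` example can be checked directly. A minimal sketch assuming scikit-learn, with a decision tree standing in for the learner:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# The xor data set: (0,0) and (1,1) in class 0, the other two points in class 1
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# Train on each dimension separately: no threshold beats 0.5, since both
# coordinate values (0 and 1) contain one point of each class
single_feature_scores = [
    DecisionTreeClassifier(max_depth=1).fit(X[:, [d]], y).score(X[:, [d]], y)
    for d in range(2)
]

# Train on both dimensions: a depth-2 tree separates the classes perfectly
combined_score = DecisionTreeClassifier(max_depth=2).fit(X, y).score(X, y)

print(single_feature_scores, combined_score)  # [0.5, 0.5] 1.0
```

So each feature alone looks useless, yet together they classify the data perfectly, which is exactly why per-feature fitness can be misleading.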