From what I've read, a split of around 80% training / 20% validation data appears to be close to optimal. As the size of the validation set increases, the variance of the validation results should go down, at the cost of less effective training (and hence lower validation accuracy).
Therefore, I am confused by the following results, which seem to show optimal accuracy and low variance at TEST_SIZE=0.5
(each configuration was run multiple times, and one trial was selected to represent each test size).
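As a point of comparison, here is the kind of repeated-split check I have in mind. This is only a rough sketch: it uses scikit-learn's bundled wine dataset and a plain logistic-regression pipeline as stand-ins for my own data file and network, so the numbers are just indicative.

import numpy as np
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)  # 178 samples, 13 features, 3 classes

for test_size in (0.1, 0.5, 0.9):
    accs = []
    for trial in range(50):  # many random splits per setting
        X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=test_size, stratify=y)
        model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        accs.append(model.fit(X_tr, y_tr).score(X_val, y_val))
    print('test_size=%.1f: mean accuracy %.1f%%, std %.1f%%'
          % (test_size, 100 * np.mean(accs), 100 * np.std(accs)))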
TEST_SIZE=0.1: this should train effectively thanks to the large training set, but the validation results should show more variance (5 trials varied between 16% and 50% accuracy).
Epoch 0, Loss 0.021541, Targets [ 1. 0. 0.], Outputs [ 0.979 0.011 0.01 ], Inputs [ 0.086 0.052 0.08 0.062 0.101 0.093 0.107 0.058 0.108 0.08 0.084 0.115 0.104]
Epoch 100, Loss 0.001154, Targets [ 0. 0. 1.], Outputs [ 0. 0.001 0.999], Inputs [ 0.083 0.099 0.084 0.079 0.085 0.061 0.02 0.103 0.038 0.083 0.078 0.053 0.067]
Epoch 200, Loss 0.000015, Targets [ 0. 0. 1.], Outputs [ 0. 0. 1.], Inputs [ 0.076 0.092 0.087 0.107 0.077 0.063 0.02 0.13 0.054 0.106 0.054 0.051 0.086]
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 0
Target Class 1, Predicted Class 0
Target Class 1, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 0
Target Class 1, Predicted Class 0
Target Class 1, Predicted Class 0
Target Class 1, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 2, Predicted Class 2
50.0% overall accuracy for validation set.
TEST_SIZE=0.5: this should fall somewhere between the other two cases, yet for some reason 5 trials varied between 92% and 97% accuracy.
Epoch 0, Loss 0.547218, Targets [ 1. 0. 0.], Outputs [ 0.579 0.087 0.334], Inputs [ 0.106 0.08 0.142 0.133 0.129 0.115 0.127 0.13 0.12 0.068 0.123 0.126 0.11 ]
Epoch 100, Loss 0.002716, Targets [ 0. 1. 0.], Outputs [ 0.003 0.997 0. ], Inputs [ 0.09 0.059 0.097 0.114 0.088 0.108 0.102 0.144 0.125 0.036 0.186 0.113 0.054]
Epoch 200, Loss 0.002874, Targets [ 0. 1. 0.], Outputs [ 0.003 0.997 0. ], Inputs [ 0.102 0.067 0.088 0.109 0.088 0.097 0.091 0.088 0.092 0.056 0.113 0.141 0.089]
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 0
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
97.75280898876404% overall accuracy for validation set.
TEST_SIZE=0.9: this should generalize poorly due to the small training sample (5 trials varied between 38% and 54% accuracy).
Epoch 0, Loss 2.448474, Targets [ 0. 0. 1.], Outputs [ 0.707 0.206 0.086], Inputs [ 0.229 0.421 0.266 0.267 0.223 0.15 0.057 0.33 0.134 0.148 0.191 0.12 0.24 ]
Epoch 100, Loss 0.017506, Targets [ 1. 0. 0.], Outputs [ 0.983 0.017 0. ], Inputs [ 0.252 0.162 0.274 0.255 0.241 0.275 0.314 0.175 0.278 0.135 0.286 0.36 0.281]
Epoch 200, Loss 0.001819, Targets [ 0. 0. 1.], Outputs [ 0.002 0. 0.998], Inputs [ 0.245 0.348 0.248 0.274 0.284 0.153 0.167 0.212 0.191 0.362 0.145 0.125 0.183]
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
64.59627329192547% overall accuracy for validation set.
Importing and splitting the dataset
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.model_selection import train_test_split

def readInput(filename, delimiter, inputlen, outputlen, categories, test_size):
    def onehot(num, categories):
        # One-hot encode a 1-based class label
        arr = np.zeros(categories)
        arr[int(num[0]) - 1] = 1
        return arr

    inputs = list()
    outputs = list()
    with open(filename) as file:
        for line in file:
            values = [float(x) for x in line.split(delimiter)]
            assert len(values) == inputlen + outputlen
            outputs.append(onehot(values[:outputlen], categories))
            inputs.append(values[outputlen:outputlen + inputlen])
    inputs = np.array(inputs)
    outputs = np.array(outputs)

    inputs_train, inputs_val, outputs_train, outputs_val = train_test_split(inputs, outputs, test_size=test_size)
    assert len(inputs_train) > 0
    assert len(inputs_val) > 0
    # Note: the training and validation inputs are normalized independently,
    # column-wise (unit L2 norm along axis 0).
    return normalize(inputs_train, axis=0), outputs_train, normalize(inputs_val, axis=0), outputs_val
Some parameters
import numpy as np
import helper
FILE_NAME = 'data2.csv'
DATA_DELIM = ','
ACTIVATION_FUNC = 'tanh'
TESTING_FREQ = 100
EPOCHS = 200
LEARNING_RATE = 0.2
TEST_SIZE = 0.9
INPUT_SIZE = 13
HIDDEN_LAYERS = [5]
OUTPUT_SIZE = 3
The main program flow
# Methods of the Classifier class (the class definition, forward_propagate,
# backpropagate_errors and adjust_weights are not shown here).
def step(self, x, targets, lrate):
    # One training step on a single sample
    self.forward_propagate(x)
    self.backpropagate_errors(targets)
    self.adjust_weights(x, lrate)

def test(self, epoch, x, target):
    predictions = self.forward_propagate(x)
    print('Epoch %5i, Loss %2f, Targets %s, Outputs %s, Inputs %s'
          % (epoch, helper.crossentropy(target, predictions), target, predictions, x))

def train(self, inputs, targets, epochs, testfreq, lrate):
    xindices = [i for i in range(len(inputs))]
    for epoch in range(epochs):
        np.random.shuffle(xindices)  # visit the training samples in random order each epoch
        if epoch % testfreq == 0:
            self.test(epoch, inputs[xindices[0]], targets[xindices[0]])
        for i in xindices:
            self.step(inputs[i], targets[i], lrate)
    self.test(epochs, inputs[xindices[0]], targets[xindices[0]])

def validate(self, inputs, targets):
    correct = 0
    targets = np.argmax(targets, axis=1)
    for i in range(len(inputs)):
        prediction = np.argmax(self.forward_propagate(inputs[i]))
        if prediction == targets[i]:
            correct += 1
        print('Target Class %s, Predicted Class %s' % (targets[i], prediction))
    print('%s%% overall accuracy for validation set.' % (correct / len(inputs) * 100))

np.random.seed()
inputs_train, outputs_train, inputs_val, outputs_val = helper.readInput(
    FILE_NAME, DATA_DELIM, inputlen=INPUT_SIZE, outputlen=1, categories=OUTPUT_SIZE, test_size=TEST_SIZE)
nn = Classifier([INPUT_SIZE] + HIDDEN_LAYERS + [OUTPUT_SIZE], ACTIVATION_FUNC)
nn.train(inputs_train, outputs_train, EPOCHS, TESTING_FREQ, LEARNING_RATE)
nn.validate(inputs_val, outputs_val)
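To quantify the trial-to-trial spread mentioned above, here is a rough sketch that repeats the split/train/validate cycle and reports the mean and standard deviation of the validation accuracy. It just reuses the names defined above, assumes forward_propagate returns the network's output vector (as validate already relies on), and the number of trials is an arbitrary choice.

# Sketch: repeat the whole split/train/validate cycle several times and
# summarize the spread of the validation accuracy.
accuracies = []
for trial in range(10):
    Xtr, Ytr, Xval, Yval = helper.readInput(FILE_NAME, DATA_DELIM, inputlen=INPUT_SIZE,
                                            outputlen=1, categories=OUTPUT_SIZE, test_size=TEST_SIZE)
    nn = Classifier([INPUT_SIZE] + HIDDEN_LAYERS + [OUTPUT_SIZE], ACTIVATION_FUNC)
    nn.train(Xtr, Ytr, EPOCHS, TESTING_FREQ, LEARNING_RATE)
    preds = np.array([np.argmax(nn.forward_propagate(x)) for x in Xval])
    accuracies.append(np.mean(preds == np.argmax(Yval, axis=1)))
print('Mean validation accuracy %.1f%% (std %.1f%%) over %d trials'
      % (100 * np.mean(accuracies), 100 * np.std(accuracies), len(accuracies)))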
1) The sample size is very small. You have 13 dimensions and only 178 samples. Since you need to fit all the parameters of your neural network from this data, there is simply not enough of it no matter how you split. Your model is too complex for the amount of data you have, which leads to overfitting. This means your model does not generalize well: it will not give good results in the general case, and it will not give stable results.
2) There will always be some differences between the training and testing datasets. In your case, because of the small sample size, how well the statistics of the testing data match those of the training data is largely down to chance.
3) When you split 90-10, your test set contains only about 18 samples. You cannot get much value out of so few trials; it can hardly be called statistics. Try a different split and your results will change again (you have already seen this phenomenon, as I mentioned above regarding robustness).
4) Always compare your classifier against the performance of a random classifier. With 3 classes, you should at least do better than roughly 33% accuracy.
5) Read about cross-validation and leave-one-out; a minimal sketch follows below.
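For example, a minimal cross-validation / leave-one-out sketch with scikit-learn could look like the following. It uses scikit-learn's bundled wine data and a simple logistic-regression pipeline as a stand-in model, since your custom Classifier is not a scikit-learn estimator.

import numpy as np
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# 10-fold stratified cross-validation: every sample is used for validation exactly once
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
print('10-fold CV accuracy: %.1f%% +/- %.1f%%' % (100 * scores.mean(), 100 * scores.std()))

# Leave-one-out: one fold per sample, each with a single held-out example (slower, but uses all data)
loo_scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print('Leave-one-out accuracy: %.1f%%' % (100 * loo_scores.mean()))

Either of these gives you an accuracy estimate based on every sample, rather than on a single small held-out set.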