I used VGG16 for transfer learning and got very low accuracy. Is it possible to use data augmentation techniques to increase the accuracy when doing transfer learning?
Here is the code, for context:
# Imports needed for this snippet (Keras)
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, load_model
from keras.layers import Dense
from keras.optimizers import Adam

# Set the image paths (relative)
train_path = 'myNetDB/train'
valid_path = 'myNetDB/valid'
test_path = 'myNetDB/test'

train_batches = ImageDataGenerator().flow_from_directory(
    train_path, target_size=(224, 224), classes=['dog', 'cat'], batch_size=10)
valid_batches = ImageDataGenerator().flow_from_directory(
    valid_path, target_size=(224, 224), classes=['dog', 'cat'], batch_size=4)
test_batches = ImageDataGenerator().flow_from_directory(
    test_path, target_size=(224, 224), classes=['dog', 'cat'], batch_size=10)
# Load the pre-trained VGG16 model from disk
vgg16_model = load_model('Fetched_VGG.h5')

# Copy every layer except the original classifier into a Sequential model
model = Sequential()
for layer in vgg16_model.layers[:-1]:
    model.add(layer)

# Freeze the copied layers so their weights are not updated during training
for layer in model.layers:
    layer.trainable = False

# Add a new 2-class output layer
model.add(Dense(2, activation='softmax'))
# Train only the new output layer, then predict on the test set
model.compile(Adam(lr=.0001), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(train_batches, steps_per_epoch=4,
                    validation_data=valid_batches, validation_steps=4, epochs=5, verbose=2)
predictions = model.predict_generator(test_batches, steps=1, verbose=0)
If you got very low accuracy, it is probably because your dataset is very different from ImageNet, the dataset VGG16 was trained on. There are two possibilities:
- Your dataset is big enough that you can train the model starting from the pre-trained weights, i.e. fine-tune it (a sketch follows this list).
- Your dataset is small. In that case there are no shortcuts: consider a simpler model than VGG16 so that you're less likely to overfit.
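If you take the fine-tuning route, a minimal sketch using the same Keras API as your code might look like this; how many layers to unfreeze and the learning rate are illustrative choices here, not a prescription:

# Fine-tuning sketch, assuming the frozen `model` built above.
# Unfreeze the last few layers (here, the fully connected head);
# how many to unfreeze is an illustrative choice, not a prescription.
for layer in model.layers[-4:]:
    layer.trainable = True

# Recompile so the new trainable flags take effect, with a lower learning rate
model.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(train_batches, steps_per_epoch=4,
                    validation_data=valid_batches, validation_steps=4,
                    epochs=5, verbose=2)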
In both cases, to answer your question: yes, augmentation techniques, when applied sensibly, help increase accuracy.
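Concretely, in your setup augmentation just means passing transformation arguments to the ImageDataGenerator used for the training set; the parameter values below are illustrative, not tuned:

# Augment only the training data; validation/test generators stay unmodified.
# The transformation ranges below are illustrative defaults, not tuned values.
train_datagen = ImageDataGenerator(
    rotation_range=20,        # random rotations up to 20 degrees
    width_shift_range=0.1,    # random horizontal shifts
    height_shift_range=0.1,   # random vertical shifts
    zoom_range=0.1,           # random zoom in/out
    horizontal_flip=True)     # mirror images left-right

train_batches = train_datagen.flow_from_directory(
    train_path, target_size=(224, 224), classes=['dog', 'cat'], batch_size=10)

Because the generator produces new random variants of each image every epoch, the effective size of the training set grows, which mitigates overfitting; it cannot, however, compensate for a strong domain mismatch with ImageNet.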