Tags: keras, deep-learning, conv-neural-network, sequential

Error fitting the model - expected conv2d_3_input to have 4 dimensions


I am trying to build a model to predict handwritten characters using the dataset given here: https://www.kaggle.com/sachinpatel21/az-handwritten-alphabets-in-csv-format

EDIT (after making the changes suggested in the comments):

The error I get now is: ValueError: Error when checking input: expected conv2d_4_input to have shape (28, 28, 1) but got array with shape (249542, 784, 1)

Below is the code for the CNN:

from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras import backend as K
from keras.utils import np_utils
from sklearn.model_selection import train_test_split
import numpy as np
import pandas as pd 

seed = 785
np.random.seed(seed)

dataset = np.loadtxt('../input/A_Z Handwritten Data/A_Z Handwritten Data.csv', delimiter=',')

print(dataset.shape) # (372451, 785)

X = dataset[:,1:785]
Y = dataset[:,0]

(X_train, X_test, Y_train, Y_test) = train_test_split(X, Y, test_size=0.33, random_state=seed)

X_train = X_train / 255
X_test = X_test / 255

X_train = X_train.reshape((-1, X_train.shape[0], X_train.shape[1], 1))
X_test = X_test.reshape((-1, X_test.shape[0], X_test.shape[1], 1))

print(X_train.shape) # (1, 249542, 784, 1)

Y_train = np_utils.to_categorical(Y_train)
Y_test = np_utils.to_categorical(Y_test)

print(Y_test.shape) # (122909, 26)

num_classes = Y_test.shape[1] # 26

model = Sequential()
model.add(Conv2D(32, (5, 5), input_shape=(28, 28, 1), activation='relu', data_format="channels_last"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print("DONE")
model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=10, batch_size=256, verbose=2)


# Final evaluation of the model
scores = model.evaluate(X_test,Y_test, verbose=0)
print("CNN Error: %.2f%%" % (100-scores[1]*100))

model.save('weights.model')


Solution

  • The problem is that your data isn't shaped the way the Conv2D layer expects. Here is a working version:

    Read the data with pandas:

    data = pd.read_csv('/users/vpolimenov/Downloads/A_Z Handwritten Data.csv')
    data.shape
    # shape: (372450, 785)
    

    Get your X and y:

    data.rename(columns={'0':'label'}, inplace=True)
    
    X = data.drop('label',axis = 1)
    y = data['label']
    

    Split and scale:

    X_train, X_test, y_train, y_test = train_test_split(X, y)

    # MinMaxScaler needs an extra import; fitted on 0-255 pixel data it has
    # roughly the same effect as the /255 division in your code.
    from sklearn.preprocessing import MinMaxScaler

    min_max_scaler = MinMaxScaler()
    min_max_scaler.fit(X_train)

    X_train = min_max_scaler.transform(X_train)
    X_test = min_max_scaler.transform(X_test)
    

    Here is the key step: reshape each 784-pixel row into a 28x28x1 image, keeping the sample count in the first (batch) axis:

    X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32')
    X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32')
    
    y_train = np_utils.to_categorical(y_train)
    y_test = np_utils.to_categorical(y_test)
    
    X_train.shape
    # (279337, 28, 28, 1)
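
    For comparison, here is a small numpy sketch (using a dummy array with the same shape as the flattened data) showing why the reshape in the question gives Keras the wrong per-sample shape:

    import numpy as np

    # Stand-in for the flattened X_train from the question:
    # 249542 samples, 784 pixels each (uint8 just to keep the demo light).
    flat = np.zeros((249542, 784), dtype=np.uint8)

    # The question's reshape puts the sample count into the second axis,
    # leaving a single "batch" entry:
    bad = flat.reshape((-1, flat.shape[0], flat.shape[1], 1))
    print(bad.shape)     # (1, 249542, 784, 1)
    # Keras treats the first axis as the batch axis, so the error reports
    # samples of shape (249542, 784, 1) instead of the expected (28, 28, 1).

    # Keeping the sample count in the first axis gives one 28x28x1 image per sample:
    good = flat.reshape((-1, 28, 28, 1))
    print(good.shape)    # (249542, 28, 28, 1)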
    

    Here is your model:

    num_classes = y_test.shape[1] # 26
    
    model = Sequential()
    model.add(Conv2D(32, (5, 5), input_shape=(28, 28, 1), activation='relu', data_format="channels_last"))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))
    
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    print("DONE")
    model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=256, verbose=2) # the line that previously raised the error
    

    Summary of your model:

    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    conv2d_25 (Conv2D)           (None, 24, 24, 32)        832       
    _________________________________________________________________
    max_pooling2d_25 (MaxPooling (None, 12, 12, 32)        0         
    _________________________________________________________________
    dropout_1 (Dropout)          (None, 12, 12, 32)        0         
    _________________________________________________________________
    flatten_25 (Flatten)         (None, 4608)              0         
    _________________________________________________________________
    dense_42 (Dense)             (None, 128)               589952    
    _________________________________________________________________
    dense_43 (Dense)             (None, 26)                3354      
    =================================================================
    Total params: 594,138
    Trainable params: 594,138
    Non-trainable params: 0
    

    I've stopped it after the second epoch, but you can see it working:

    Train on 279337 samples, validate on 93113 samples
    Epoch 1/10
     - 80s - loss: 0.2478 - acc: 0.9308 - val_loss: 0.1021 - val_acc: 0.9720
    Epoch 2/10
     - 273s - loss: 0.0890 - acc: 0.9751 - val_loss: 0.0716 - val_acc: 0.9803
    Epoch 3/10
    

    Note:

    It takes so long to fit because of the huge number of parameters in your network. You can reduce them to get a much faster, more efficient network; see the sketch below.
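
    For example (a minimal sketch, not taken from the answer above): nearly all of the 594,138 parameters sit in the first Dense layer (4608 * 128 + 128 = 589,952), so shrinking the flattened feature map helps most. Adding a second, smaller Conv2D + MaxPooling2D block before Flatten is one way to do that:

    model = Sequential()
    model.add(Conv2D(32, (5, 5), input_shape=(28, 28, 1), activation='relu', data_format="channels_last"))
    model.add(MaxPooling2D(pool_size=(2, 2)))           # -> 12x12x32
    model.add(Conv2D(16, (3, 3), activation='relu'))    # -> 10x10x16
    model.add(MaxPooling2D(pool_size=(2, 2)))           # -> 5x5x16
    model.add(Dropout(0.2))
    model.add(Flatten())                                # 400 features instead of 4608
    model.add(Dense(128, activation='relu'))            # ~51k parameters instead of ~590k
    model.add(Dense(num_classes, activation='softmax'))

    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    # Total: ~60k trainable parameters instead of ~594k, so each epoch runs much faster.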