tensorflow · machine-learning · keras · text-classification

Text classification issue


I'm a newbie in ML and am trying to classify text into two categories. My dataset was made with Tokenizer from medical texts; it's unbalanced, with 572 records for training and 471 for testing.

It's really hard for me to make a model with diverse prediction output; almost all predicted values are the same. I've tried using models from examples like this and tweaking the parameters myself, but the output never makes sense.

Here are the tokenized and prepared data

Here is the script: Gist

Sample model that I used:

    # Imports assumed to come from the full script (Gist); with standalone Keras
    # these would be `import keras` / `from keras import layers` instead.
    from tensorflow import keras
    from tensorflow.keras import layers

    # Small feed-forward classifier over the tokenized bag-of-words input
    sequential_model = keras.Sequential([
        layers.Dense(15, activation='tanh', input_dim=vocab_size),
        layers.BatchNormalization(),
        layers.Dense(8, activation='relu'),
        layers.BatchNormalization(),
        layers.Dense(1, activation='sigmoid')
    ])

    sequential_model.summary()
    sequential_model.compile(optimizer='adam',
                             loss='binary_crossentropy',
                             metrics=['acc'])

    # class_weight down-weights the majority class to counter the imbalance
    train_history = sequential_model.fit(train_data,
                                         train_labels,
                                         epochs=15,
                                         batch_size=16,
                                         validation_data=(test_data, test_labels),
                                         class_weight={1: 1, 0: 0.2},
                                         verbose=1)

Unfortunately, I can't share the datasets. I've also tried using keras.utils.to_categorical on the class labels, but it didn't help.


Solution

  • Your loss curves make sense: we see the network overfitting the training set, while the validation loss shows the usual bowl-shaped curve.

    To make your network perform better, you can always deepen it (more layers), widen it (more units per hidden layer), and/or add more nonlinear activations so that your layers can map to a wider range of values.

    Also, I believe the reason why you originally got so many repeated values is the size of your network. Apparently each of your data points has roughly 20,000 features (a pretty large feature space); the size of your network is too small, so the space of output values it can map to is consequently smaller. I did some testing with larger hidden layers (and bumped up the number of layers) and was able to see the prediction values vary: [0.519], [0.41], [0.37]... (a sketch of such a wider/deeper model is given at the end of this answer).

    It is also understandable that your network's performance varies so much, because the number of features you have is about 50 times the size of your training set (usually you would want a much smaller ratio). Keep in mind that training for too many epochs (say, more than 10) on such a small training and test set just to see the loss improve is not great practice: you can seriously overfit, and it is probably a sign that your network needs to be wider/deeper.

    All of these factors, such as the number of layers, the hidden-layer sizes, and even the number of epochs, can be treated as hyperparameters. In other words, hold out some percentage of your training data as a validation split, go through each category of factors one by one, and optimize for the highest validation accuracy. To be fair, your training set is not very large, but I believe you should hold out some 10-20% of it as a validation set to tune these hyperparameters, given that you have such a large number of features per data point. At the end of this process you can determine your true test accuracy. This is how I would optimize this network for its best performance (a sketch of such a tuning loop is given below). Hope this helps.

    More about training, test, val split
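
    As a rough illustration of the widen/deepen suggestion, here is a minimal sketch of a larger model, assuming the same bag-of-words input and the vocab_size variable from your script; the layer widths (64/32/16) are placeholder choices, not tuned values:

        from tensorflow import keras
        from tensorflow.keras import layers

        # Wider and deeper variant of the original model; sizes are illustrative
        wider_model = keras.Sequential([
            layers.Dense(64, activation='relu', input_dim=vocab_size),
            layers.BatchNormalization(),
            layers.Dense(32, activation='relu'),
            layers.BatchNormalization(),
            layers.Dense(16, activation='relu'),
            layers.Dense(1, activation='sigmoid')
        ])

        wider_model.compile(optimizer='adam',
                            loss='binary_crossentropy',
                            metrics=['acc'])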
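
    And here is a minimal sketch of the hold-out tuning loop described above, again assuming the train_data/train_labels arrays from your script; the candidate widths, the 20% split and the EarlyStopping settings are all placeholder choices to illustrate the idea:

        from tensorflow import keras
        from tensorflow.keras import layers

        def build_model(hidden_units):
            model = keras.Sequential([
                layers.Dense(hidden_units, activation='relu', input_dim=vocab_size),
                layers.BatchNormalization(),
                layers.Dense(hidden_units // 2, activation='relu'),
                layers.Dense(1, activation='sigmoid')
            ])
            model.compile(optimizer='adam',
                          loss='binary_crossentropy',
                          metrics=['acc'])
            return model

        best_units, best_val_acc = None, 0.0
        for units in [32, 64, 128]:               # candidate hidden-layer widths
            model = build_model(units)
            history = model.fit(train_data, train_labels,
                                epochs=30,
                                batch_size=16,
                                validation_split=0.2,   # hold out 20% of training data
                                class_weight={1: 1, 0: 0.2},
                                callbacks=[keras.callbacks.EarlyStopping(
                                    monitor='val_loss', patience=3,
                                    restore_best_weights=True)],
                                verbose=0)
            # note: the key is 'val_accuracy' instead of 'val_acc' in some Keras versions
            val_acc = max(history.history['val_acc'])
            if val_acc > best_val_acc:
                best_units, best_val_acc = units, val_acc

        # Only after the hyperparameters are fixed, evaluate once on the held-out
        # test set to get the true test accuracy.
        print(best_units, best_val_acc)

    The same loop can be repeated for the other factors mentioned above (depth, activations, number of epochs) before you touch the test set.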