Tags: python, tensorflow, keras, lstm, tensorflow-datasets

How to create inputs and labels dataset lists for an LSTM without concatenating strings from multiple files at once?


I have multiple large text files, each about 1 GB in size. I want to train an LSTM model for next-word prediction on them using TensorFlow Keras. From the text formed by concatenating all the files' contents, I need to take a fixed-size block of words at a time, use all but the last word of the block as the input, and use the last word as the label. Every tutorial I have found loads the full text files, concatenates them, and builds two lists: one for inputs and one for labels. When I try this on my datasets, my machine runs out of memory and the OS kills the process. My machine has 8 GB of RAM. What is the best way to create the dataset with TensorFlow?

Example:

My text files look like this:

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam efficitur viverra lacus, at porttitor ex bibendum at. Aenean venenatis lacus ex. Mauris ultrices laoreet sapien, at pharetra dolor consectetur id. Proin eleifend, ex condimentum auctor tincidunt, felis erat pharetra tellus, et venenatis augue metus in leo. Donec euismod orci non cursus eleifend. Vivamus blandit gravida arcu, sed pulvinar arcu. Fusce lobortis mauris in lectus molestie, eget condimentum ipsum cursus. Proin ultrices lobortis mauris quis dignissim. Maecenas efficitur feugiat sem nec accumsan. Nam placerat sapien sit amet sem interdum tristique. Praesent eu nibh elementum, iaculis risus eget, cursus lectus.

What I want is lists as follows:

inputs = ["Lorem ipsum dolor", "ipsum dolor sit", "dolor sit amet,", ...]
labels = ["sit", "amet,", "consectetur", ...]
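For reference, pairs like these can also be produced lazily in plain Python, streaming the files word by word instead of materialising both full lists in memory. A minimal sketch (the function name and the block size of 4, i.e. 3 input words plus 1 label word as in the example above, are illustrative):

```python
def window_pairs(paths, block_size=4):
    """Yield (input, label) pairs one at a time instead of building
    two full lists in memory."""
    window = []  # rolling buffer of the last `block_size` words
    for path in paths:
        with open(path, encoding='utf-8') as f:
            for line in f:
                for word in line.split():
                    window.append(word)
                    if len(window) == block_size:
                        # First block_size - 1 words -> input,
                        # last word -> label.
                        yield ' '.join(window[:-1]), window[-1]
                        window.pop(0)
    # Note: the buffer is deliberately not reset between files, which
    # matches concatenating all files' contents into one string.
```

This keeps memory usage constant regardless of total corpus size, but for training it is usually more convenient to stay inside the tf.data pipeline, as in the solution below.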

Solution

  • You can try using tensorflow-text:

    import tensorflow as tf
    import tensorflow_text as tft
    
    # Write a small sample file (stand-in for the real 1 GB files).
    with open('data.txt', 'w') as f:
      f.write('Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam efficitur viverra lacus?\n')
    
    # TextLineDataset streams the files line by line, so their full
    # contents are never loaded into memory at once.
    train_data = tf.data.TextLineDataset(['data.txt'])
    
    # pad_to_max_tokens only applies to the multi_hot/count/tf_idf
    # output modes, so it is omitted here.
    vectorize_layer = tf.keras.layers.TextVectorization(output_mode='int', max_tokens=50)
    vectorize_layer.adapt(train_data)
    
    def sliding_window(x):
      window_size = 5
      encoded = vectorize_layer(x)
      # Inputs: every window of window_size consecutive tokens.
      x = tft.sliding_window(encoded, width=window_size, axis=0)
      # Labels: the token that follows each input window.
      y = tft.sliding_window(encoded, width=window_size + 1, axis=0)[:, -1]
      # x has one more window than y, so trim it to match.
      return x[:tf.shape(y)[0], :], y
    
    train_data = train_data.map(sliding_window)
    
    # Flatten the per-line batches of windows into one flat dataset each.
    inputs = train_data.map(lambda x, y: x).flat_map(tf.data.Dataset.from_tensor_slices)
    labels = train_data.map(lambda x, y: y).flat_map(tf.data.Dataset.from_tensor_slices)
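  • To go from these datasets to actual training, it is simpler to keep each window and its label together as a single (input, label) pair and feed the pairs straight to model.fit. A self-contained sketch along the same lines (the layer sizes, batch size, and model shape are arbitrary choices, not part of the original answer):

    ```python
    import tensorflow as tf
    import tensorflow_text as tft

    with open('data.txt', 'w') as f:
      f.write('Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam efficitur viverra lacus?\n')

    train_data = tf.data.TextLineDataset(['data.txt'])
    vectorize_layer = tf.keras.layers.TextVectorization(output_mode='int', max_tokens=50)
    vectorize_layer.adapt(train_data)

    window_size = 5

    def sliding_window(x):
      encoded = vectorize_layer(x)
      x = tft.sliding_window(encoded, width=window_size, axis=0)
      y = tft.sliding_window(encoded, width=window_size + 1, axis=0)[:, -1]
      return x[:tf.shape(y)[0], :], y

    # Keep (input, label) pairs together instead of splitting them into
    # two datasets, then flatten and batch for training.
    pairs = (train_data
             .map(sliding_window)
             .flat_map(lambda x, y: tf.data.Dataset.from_tensor_slices((x, y)))
             .batch(8))

    vocab_size = vectorize_layer.vocabulary_size()
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, 16),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(vocab_size),  # logits over the vocabulary
    ])
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
    model.fit(pairs, epochs=1, verbose=0)
    ```

    With multiple real files, pass all their paths to TextLineDataset; the pipeline still only ever holds one line's windows in memory at a time.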