I am passing a list of lists through the Keras Tokenizer with char_level=True, yet the result is word-level tokenization, not character-level tokenization.
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
# List of lists
train_data = [['SMITH', 'JOHN', '', 'CHESTERTOWN', 'MD', '21620', '555555555', 'F'], ['CROW', 'JOE', '', 'FREDERICK', 'MD', '217011313', '9999999999', 'F']]
t = Tokenizer(filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n', split=',', char_level=True, oov_token=True)
t.fit_on_texts(train_data)
train_token = np.array(t.texts_to_sequences(train_data))
print(train_token)
array([[ 5,  6,  2,  7,  3,  8,  9,  4],
       [10, 11,  2, 12,  3, 13, 14,  4]])
This is happening because each entry in your data should be a string, not a list. With char_level=True, the Tokenizer simply iterates over each text: iterating over a string yields individual characters, but iterating over a list yields whole words, so each word ends up as a single token. If you concatenate each record's words into one string, it will work as expected.
Just add the following to your code:
def concat_list(l):
    # Join the fields of one record into a single space-separated string,
    # so the Tokenizer sees a string (characters) rather than a list (words).
    concat = ''
    for word in l:
        concat += word + ' '
    return concat

train_data = [concat_list(data) for data in train_data]
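As a style note, Python's built-in str.join does the same concatenation in one line; a minimal sketch:
# Same idea with str.join; note this omits the trailing space that
# concat_list appends, so the last token of each sequence will differ slightly.
train_data = [' '.join(data) for data in train_data]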
You will then get:
[list([16, 9, 17, 10, 11, 2, 18, 7, 11, 19, 2, 2, 12, 11, 5, 16, 10, 5, 8, 10, 7, 20, 19, 2, 9, 13, 2, 14, 6, 23, 14, 21, 2, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 15, 2])
 list([12, 8, 7, 20, 2, 18, 7, 5, 2, 2, 15, 8, 5, 13, 5, 8, 17, 12, 24, 2, 9, 13, 2, 14, 6, 25, 21, 6, 6, 22, 6, 22, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 15, 2])]
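If you want to double-check that the fit is now character-level, inspecting the fitted vocabulary is a quick way. A small sketch, assuming train_data is the concatenated list from above; it uses a fresh Tokenizer so the earlier word-level fit doesn't linger in the accumulated counts:
from tensorflow.keras.preprocessing.text import Tokenizer

# Refit on the concatenated strings: every key in word_index should now
# be a single character (plus the OOV token, which gets index 1).
t = Tokenizer(filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n',
              char_level=True, oov_token=True)
t.fit_on_texts(train_data)
print(t.word_index)  # e.g. {True: 1, ' ': 2, ...} -- one character per key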