I thought I had really straightforward code for opening a file, reading it, and tokenizing it into sentences:
import nltk
text = open('1865-Lincoln.txt', 'r')
tokens = nltk.sent_tokenize(text)
print(tokens)
But I keep getting a long traceback that ends with
TypeError: expected string or bytes-like object
You need to call read() on the file object before tokenizing. open() returns a file object, not a string, and nltk.sent_tokenize() expects a string:
fileObj = open('1865-Lincoln.txt', 'r')
text = fileObj.read()  # read() returns the file contents as a string
tokens = nltk.sent_tokenize(text)
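To see why the TypeError happens, here is a minimal sketch (using a temporary file as a stand-in for 1865-Lincoln.txt, since that file is assumed to be local to the asker):

```python
from pathlib import Path
import tempfile

# Create a small sample file standing in for 1865-Lincoln.txt.
path = Path(tempfile.gettempdir()) / "sample.txt"
path.write_text("Fellow countrymen. At this second appearing.")

fileObj = open(path, 'r')
print(isinstance(fileObj, str))  # False: open() gives a file object, hence the TypeError
text = fileObj.read()
fileObj.close()
print(isinstance(text, str))     # True: .read() returns the contents as a string
```

Passing `text` (the string) rather than `fileObj` (the file object) to sent_tokenize() is what resolves the error.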