I have a problem with tokenization; the assignment is to split a sentence into words.
This is what I have at the moment:
def tokenize(s):
    d = []
    start = 0
    while start < len(s):
        # skip any whitespace before the next word
        while start < len(s) and s[start].isspace():
            start = start + 1
        # advance end to the first whitespace after the word
        end = start
        while end < len(s) and not s[end].isspace():
            end = end + 1
        d = d + [s[start:end]]
        start = end
    print(d)
Running the program:
>>> tokenize("He was walking, it was fun")
['He', 'was', 'walking,', 'it', 'was', 'fun']
This works fine, but as you can see, the problem is that the comma is included in the word "walking". I want to separate the comma (and other punctuation marks) as individual "words".
Such as:
['He', 'was', 'walking', ',', 'it', 'was', 'fun']
How can I modify my code to fix this?
Thanks in advance!
Here's a possible modification that works for your specific example, but it will definitely fail on inputs with consecutive punctuation such as "How are you?!", since it only splits off the last symbol of a token:
def tokenize(s):
    d = []
    start = 0
    while start < len(s):
        # skip any whitespace before the next word
        while start < len(s) and s[start].isspace():
            start = start + 1
        # advance end to the first whitespace after the word
        end = start
        while end < len(s) and not s[end].isspace():
            end = end + 1
        # if the token ends in a punctuation mark, split it off
        # as a separate "word"
        if s[end-1] in ["!", ",", ".", ";", ":"]:
            d = d + [s[start:(end-1)]]
            d = d + [s[end-1]]
        else:
            d = d + [s[start:end]]
        start = end
    print(d)
tokenize("He was walking, it was fun!")
# ['He', 'was', 'walking', ',', 'it', 'was', 'fun', '!']
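If you're allowed to use the standard library, a regular expression handles all of these cases at once, including consecutive punctuation like "How are you?!". Here's a sketch using `re.findall`; one caveat is that `\w+` will also split contractions like "don't" into three tokens:

```python
import re

def tokenize(s):
    # \w+ matches a run of word characters (a word);
    # [^\w\s] matches a single character that is neither a word
    # character nor whitespace, so each punctuation mark
    # becomes its own token.
    return re.findall(r"\w+|[^\w\s]", s)

print(tokenize("He was walking, it was fun!"))
# ['He', 'was', 'walking', ',', 'it', 'was', 'fun', '!']
print(tokenize("How are you?!"))
# ['How', 'are', 'you', '?', '!']
```

This also sidesteps the manual index bookkeeping entirely, since the regex engine does the scanning for you.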