I am trying to extract text from 3000+ PDFs into a single .txt file (while also removing the header from each page):
import PyPDF2

for x in range(15):  # testing on the first 15 files for now
    pdfFileObj = open(files[x], 'rb')
    pdfReader = PyPDF2.PdfFileReader(pdfFileObj)
    for pageNum in range(1, pdfReader.numPages):
        pageObj = pdfReader.getPage(pageNum)
        content = pageObj.extractText()
        # drop everything up to and including the last word of the header
        removeIndex = content.find('information.') + len('information.')
        newContent = content[removeIndex:]
        file.write(newContent)
file.close()
However, I get the following error:
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\ufb02' in position 5217: character maps to <undefined>
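For context: '\ufb02' is the typographic 'fl' ligature, and the error occurs because the output file was opened with the platform's default encoding (e.g. cp1252 on Windows), which has no mapping for that character. A minimal reproduction, using a hypothetical sample string:

```python
# U+FB02 is the 'fl' ligature, common in text extracted from PDFs.
text = 'e\ufb02ux'  # hypothetical extracted text containing the ligature

# cp1252 (a typical Windows default) cannot represent U+FB02:
try:
    text.encode('cp1252')
except UnicodeEncodeError as e:
    print(e.reason)  # character maps to <undefined>

# Opening the output file with an explicit UTF-8 encoding avoids the error:
with open('out.txt', 'w', encoding='utf-8') as f:
    f.write(text)  # no exception; the ligature is written as-is
```

The file name 'out.txt' here is just a placeholder for illustration.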
I was not able to check the encoding of each PDF, so I just used replace() on the offending character. Below is the working code:
for x in range(len(files)):
    pdfFileObj = open(os.path.join(filepath, files[x]), 'rb')
    pdfReader = PyPDF2.PdfFileReader(pdfFileObj)
    for pageNum in range(1, pdfReader.numPages):
        pageObj = pdfReader.getPage(pageNum)
        content = pageObj.extractText()
        removeIndex = content.find('information.') + len('information.')
        newContent = content[removeIndex:]
        newContent = newContent.replace('\n', ' ')
        newContent = newContent.replace('\ufb02', 'fl')  # U+FB02 is the 'fl' ligature
        # note: str(bytes) writes the b'...' repr to the file; opening the
        # output file with encoding='utf-8' and writing newContent directly
        # would be cleaner
        file.write(str(newContent.encode('utf-8')))
file.close()