I've used pdfminer to convert complex (tables, figures) and very long PDFs to HTML. I want to parse the results further (e.g. extract tables, paragraphs, etc.) and then use the sentence tokenizer from nltk for further analysis. For this purpose I want to save the HTML to a text file so I can figure out how to do the parsing. Unfortunately my code does not write the HTML to the txt file:
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.converter import HTMLConverter
from pdfminer.converter import TextConverter
from pdfminer.layout import LAParams
from pdfminer.pdfpage import PDFPage
from cStringIO import StringIO
def convert_pdf_to_html(path):
    rsrcmgr = PDFResourceManager()
    retstr = StringIO()
    codec = 'utf-8'
    laparams = LAParams()
    device = HTMLConverter(rsrcmgr, retstr, codec=codec, laparams=laparams)
    fp = file(path, 'rb')
    interpreter = PDFPageInterpreter(rsrcmgr, device)
    password = ""
    maxpages = 0  # 0 means all pages
    caching = True
    pagenos = set()
    for page in PDFPage.get_pages(fp, pagenos, maxpages=maxpages,
                                  password=password, caching=caching,
                                  check_extractable=True):
        interpreter.process_page(page)
    fp.close()
    device.close()
    str1 = retstr.getvalue()
    retstr.close()
    return str1
    with open("D:/my_new_file.txt", "wb") as fh:
        fh.write(str1)
Also, the code prints the whole HTML string to the shell: how can I avoid that?
First, unless I'm missing something trivial: the `with open(...)` block sits after the `return` statement, so it is unreachable and the txt file is never written. Move the write out of the function (or place it before the `return`).
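For illustration, a minimal sketch of the fixed flow: call the function, then write its return value. I've swapped in a dummy stand-in for `convert_pdf_to_html` and a temp-file path instead of `D:/my_new_file.txt` so the sketch runs anywhere:

```python
import os
import tempfile

def convert_pdf_to_html(path):
    # stand-in for the real pdfminer-based converter above;
    # it just returns a fixed HTML string
    return "<html><body>dummy</body></html>"

# call the function first, then write the string it returned
html = convert_pdf_to_html("some.pdf")
out_path = os.path.join(tempfile.gettempdir(), "my_new_file.txt")
with open(out_path, "w") as fh:
    fh.write(html)
```

With the real converter, `html` would hold the full HTML produced by pdfminer instead of the dummy string.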
Then, to suppress output to the console, redirect `sys.stdout` before running your routine. Note that `os.devnull` is just a path string, so you have to open it rather than assign it directly:

import sys, os
oldstdout = sys.stdout  # save it so you can restore it later
sys.stdout = open(os.devnull, 'w')
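If you are on Python 3.4+, a tidier alternative is `contextlib.redirect_stdout`, which restores stdout automatically when the block exits (`noisy_routine` here is a hypothetical placeholder for your conversion call):

```python
import os
from contextlib import redirect_stdout

def noisy_routine():
    # hypothetical stand-in for a routine that prints a lot
    print("lots of debug output")
    return 42

# everything printed inside the with-block goes to the null device;
# sys.stdout is restored automatically afterwards
with open(os.devnull, "w") as devnull, redirect_stdout(devnull):
    result = noisy_routine()
```

This avoids having to save and restore `sys.stdout` by hand.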