Tags: html, parsing, text, html-content-extraction

How do you parse a poorly formatted HTML file?


I have to parse a series of web pages in order to import data into an application. Each type of web page provides the same kind of data. The problem is that the HTML of each page is different, so the location of the data varies. Another problem is that the HTML code is poorly formatted, making it impossible to use an XML-like parser.

So far, the best strategy I can think of, is to define a template for each kind of page, like:

Template A:

<html>
...
  <tr><td>Table column that is missing a td 
      <td> Another table column</td></tr>
  <tr><td>$data_item_1$</td>
...
</html>

Template B:

<html>
...
  <ul><li>Yet another poorly formatted page <li>$data_item_1$</td></tr>
...
</html>

This way I would only need a single parser for all the pages; it would compare each page with its template and retrieve $data_item_1$, $data_item_2$, and so on. Still, it is going to be a lot of work. Can you think of a simpler solution? Is there any library that can help?
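To make the idea concrete, here is a rough sketch in Python of what I imagine the template matcher doing (the helper name and the sample template are made up): escape the literal HTML of the template and turn each $data_item_N$ placeholder into a capture group.

import re

def template_to_regex(template):
    # Escape regex metacharacters in the literal HTML of the template,
    # then turn each $data_item_N$ placeholder into a named capture group.
    pattern = re.escape(template)
    pattern = re.sub(r'\\\$(data_item_\d+)\\\$', r'(?P<\1>.*?)', pattern)
    return re.compile(pattern, re.DOTALL)

# Tiny example: one table row with two placeholders.
template = "<tr><td>$data_item_1$</td><td>$data_item_2$</td></tr>"
page = "<tr><td>Widget</td><td>4.99</td></tr>"

match = template_to_regex(template).search(page)
if match:
    print(match.groupdict())   # {'data_item_1': 'Widget', 'data_item_2': '4.99'}

Even this toy version shows why it feels like so much work: any difference in whitespace or attributes between the template and the real page breaks the match.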

Thanks


Solution

  • You can pass the page's source through Tidy (the HTML Tidy tool) to get a valid document. Tidy has bindings for many programming languages. Once the markup is repaired, you can use your favorite parser/content-extraction technique.
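As a minimal sketch of that pipeline in Python, assuming the pytidylib binding for Tidy and BeautifulSoup as the "favorite parser" (neither is prescribed here; any binding and any parser would do):

from tidylib import tidy_document      # pip install pytidylib (requires the Tidy C library)
from bs4 import BeautifulSoup          # pip install beautifulsoup4

# Badly formed markup similar to Template A: the first <td> is never closed.
broken_html = """
<html><body><table>
  <tr><td>Table column that is missing a td
      <td> Another table column</td></tr>
  <tr><td>First data item</td>
</table></body></html>
"""

# Step 1: let Tidy repair the markup (close open tags, balance the tree).
clean_html, errors = tidy_document(broken_html)

# Step 2: hand the now-valid document to whatever parser you prefer.
soup = BeautifulSoup(clean_html, "html.parser")
for cell in soup.find_all("td"):
    print(cell.get_text(strip=True))

BeautifulSoup is used here only as an example of the "favorite parser" step; any HTML parser that accepts the cleaned output would work the same way.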