python, html, xpath, lxml, urllib

Viewing html text between tags (python, lxml, urllib, xpath)


I am trying to parse some HTML and I want to retrieve the actual text between the tags, but instead my code is giving me the repr of the Element objects (their memory addresses), not their contents.

Here is my code so far:

import urllib.request, http.cookiejar
from lxml import etree
site = "http://somewebsite.com"


cj = http.cookiejar.CookieJar()
request = urllib.request.Request(site)
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
request.add_header('User-agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20100101 Firefox/17.0')
html = etree.HTML(opener.open(request).read())

xpath = "//li[1]//cite[1]"
filtered_html = html.xpath(xpath)
print(filtered_html)

Here is a piece of the html:

<div class="f kv">
<cite>
www.
<b>hello</b>
online.com/
</cite>
<span class="vshid">
</div>

Currently my code returns:

[<Element cite at 0x36a65e8>, <Element cite at 0x36a6510>, <Element cite at 0x36a64c8>]

How do I extract the actual html code between the cite tags? If I add "/text()" to the end of my xpath it gets me closer, but it leaves out what is in the b tags. My ultimate goal is for my code to give me "www.helloonline.com/".

Thank you


Solution

  • Use a relative .//text() on the matched element to get all of its text nodes, including those inside nested tags like <b>. Note that html.xpath() returns a list, so index into it first, and that a leading //text() (without the dot) would select every text node in the whole document, not just the ones under your element:

    text = filtered_html[0].xpath('.//text()')
    print(''.join(t.strip() for t in text))  # prints "www.helloonline.com/"
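
As a minimal, self-contained sketch, here is the same approach applied directly to the HTML snippet from the question (parsing a string instead of fetching a URL; the unclosed <span> is reproduced as-is, since lxml's HTML parser recovers from it):

```python
from lxml import etree

# The snippet from the question, including its unclosed <span>.
snippet = """
<div class="f kv">
<cite>
www.
<b>hello</b>
online.com/
</cite>
<span class="vshid">
</div>
"""

root = etree.HTML(snippet)
cite = root.xpath("//cite")[0]      # first matching <cite> element
parts = cite.xpath(".//text()")     # all text nodes under it, including inside <b>
result = "".join(t.strip() for t in parts)
print(result)  # www.helloonline.com/
```

Stripping each text node before joining removes the newlines that the pretty-printed HTML puts around "www." and "online.com/", which is what glues the pieces back into a single URL.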