Tags: python, beautifulsoup, urllib, mechanize-python

Simulate browser access to load all HTML elements


I am trying to load a YouTube page and get the <embed> element as follows. However, the embed element cannot be found (soup.find('embed') returns None).

import urllib
import urllib2
from bs4 import BeautifulSoup
import mechanize

YT_URL = 'http://www.youtube.com/watch'
vidId = 'OuSdU8tbcHY'

br = mechanize.Browser()
# Browser options
br.set_handle_equiv(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)
# Follow refresh 0, but don't hang on refresh > 0
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)
# User-Agent (this is cheating, ok?)
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]
br.open('%s?v=%s' % (YT_URL, vidId))
soup = BeautifulSoup(br.response().read())
print soup.find('embed')

However, when I write the soup out to an HTML file and open it in a browser, the <embed> element does appear. Presumably this has something to do with the browser being different from mechanize and some kind of document.onload() magic?

How can I simulate the browser loading the page so that I can see the <embed> element?


Solution

  • The page uses JavaScript to load its content dynamically. Mechanize simply cannot handle it. You have two options here:

    • try to simulate those JS calls manually in the script (a rough sketch is shown below)
    • switch to in-browser tools like selenium
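
    For the first option, the idea is to read the raw HTML that mechanize actually receives and pull the needed data out of it directly, since the <embed> tag only gets created later by the page's scripts. Below is a minimal sketch in the same Python 2 style as the question; the ytplayer.config marker is an assumption about how YouTube embedded its player data at the time and may not match what the site serves today:

    import re
    import urllib2

    url = 'http://www.youtube.com/watch?v=OuSdU8tbcHY'
    html = urllib2.urlopen(url).read()

    # The <embed> element is not in the raw HTML -- the page's scripts build it
    # from data embedded in an inline <script> block, so the "manual" route is
    # to locate and parse that data yourself.
    # NOTE: 'ytplayer.config' is an assumed marker, used only for illustration.
    match = re.search(r'ytplayer\.config\s*=\s*\{', html)
    if match:
        print 'player config found in the raw HTML at offset %d' % match.start()
    else:
        print 'player config not found - the page layout may have changed'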

    For the second option, here's the same sample using selenium:

    import selenium.webdriver as webdriver

    url = "http://www.youtube.com/watch?v=OuSdU8tbcHY"

    # A real browser runs the page's JavaScript, so the <embed>
    # element exists in the DOM by the time we query for it.
    driver = webdriver.Firefox()
    driver.get(url)

    embed = driver.find_elements_by_tag_name('embed')[0]

    print embed
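
    A side note in case you're on a recent Selenium release: the find_elements_by_tag_name helper has since been removed (newer Selenium 4 releases dropped the find_element_by_* / find_elements_by_* family), and the equivalent lookup goes through the By locator. A quick sketch of the same lookup; bear in mind that current YouTube pages use an HTML5 <video> player rather than a Flash <embed>, so the list may simply come back empty today:

    import selenium.webdriver as webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    driver.get("http://www.youtube.com/watch?v=OuSdU8tbcHY")

    # Same query as above, expressed with the Selenium 4 locator API.
    embeds = driver.find_elements(By.TAG_NAME, 'embed')
    print(embeds[0] if embeds else 'no <embed> element on the page')

    driver.quit()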
    

    Hope that helps.