Tags: r, rvest, xml2

read_html not retrieving all data from simple html page, instead returning incomplete html?


read_html() usually returns all the page html for a given url.

But when I try it on the URL below, I can see that only part of the page is returned.

Why is this (and more importantly, how do I fix it)?

Reproducible example

page_html <- "https://raw.githubusercontent.com/mjaniec2013/ExecutionTime/master/ExecutionTime.R" %>% 
  read_html

page_html %>% html_text() %>% cat()
# We can see not all the page html has been retrieved

# And just to be sure
page_html %>% as.character()

Notes

  • It looks like GitHub is okay with bots visiting, so I don't think the problem is GitHub blocking or truncating the request (a quick check is sketched just after this list)
  • I tried the same scrape with Ruby's Nokogiri library and it gives exactly the same result as read_html(), so it doesn't look like something specific to R or read_html()
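
For what it's worth, here is a minimal sketch (my own check, not part of the original question) showing that the full file is served over plain HTTP, which points the blame at the HTML parsing step rather than at GitHub:

library(rvest)

url <- "https://raw.githubusercontent.com/mjaniec2013/ExecutionTime/master/ExecutionTime.R"

# Fetch the raw text directly, with no HTML parsing involved
raw_text <- paste(readLines(url, warn = FALSE), collapse = "\n")

# Text that survives being run through read_html()
parsed_text <- url %>% read_html() %>% html_text()

# raw_text comes back complete; only parsed_text is truncated
nchar(raw_text)
nchar(parsed_text)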

Solution

  • This looks like the HTML parser is treating the R assignment operator <- in the page as the start of an unclosed tag, so everything after the first <- is dropped.

    fakepage <- "<html>the text after <- will be lost</html>"
    
    read_html(fakepage) %>%
      html_text()
    
    [1] "the text after "
    

    Since the page you're after is a plain text file rather than HTML, you can skip the HTML parser and read it with readr::read_file() in this instance.

    readr::read_file("https://raw.githubusercontent.com/mjaniec2013/ExecutionTime/master/ExecutionTime.R")
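
    For a quick visual check that the whole script now comes back, you can cat() the result (just a usage note on the same call, with the pipe from rvest/magrittr already loaded):

    "https://raw.githubusercontent.com/mjaniec2013/ExecutionTime/master/ExecutionTime.R" %>%
      readr::read_file() %>%
      cat()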