Tags: r, xml, web-scraping, rss, rvest

Cannot extract links from RSS feed using Rvest package


I am trying to get links to WSJ articles from an RSS feed.

The feed looks like this:

<rss xmlns:wsj="http://dowjones.net/rss/" xmlns:dj="http://dowjones.net/rss/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>WSJ.com: World News</title>
<link>http://online.wsj.com/page/2_0006.html</link>
<atom:link type="application/rss+xml" rel="self" href="http://online.wsj.com/page/2_0006.html"/>
<description>World News</description>
<language>en-us</language>
<pubDate>Mon, 09 Sep 2019 10:56:42 -0400</pubDate>
<lastBuildDate>Mon, 09 Sep 2019 10:56:42 -0400</lastBuildDate>
<copyright>Dow Jones &amp; Company, Inc.</copyright>
<generator>http://online.wsj.com/page/2_0006.html</generator>
<docs>http://cyber.law.harvard.edu/rss/rss.html</docs>
<image>
<title>WSJ.com: World News</title>
<link>http://online.wsj.com/page/2_0006.html</link>
<url>http://online.wsj.com/img/wsj_sm_logo.gif</url>
</image>
<item>
<title>
Boris Johnson Promises Oct. 31 Brexit as Law Passes to Rule Out No Deal
</title>
<link>
https://www.wsj.com/articles/boris-johnson-insists-he-wants-a-brexit-deal-despite-no-deal-planning-11568037248
</link>
<description>
<![CDATA[
British Prime Minister Boris Johnson stuck to his pledge that the U.K. would leave the European Union on Oct. 31—even as a bill aimed at preventing the country from leaving on that date without an agreement became law.
]]>
</description>
<content:encoded/>
<pubDate>Mon, 09 Sep 2019 10:46:00 -0400</pubDate>
<guid isPermaLink="false">SB10710731395272083797004585540162284821560</guid>
<category domain="AccessClassName">PAID</category>
<wsj:articletype>U.K. News</wsj:articletype>
</item>
<item>
<title>
Russian Opposition Puts Putin Under Pressure in Moscow Election
</title>
<link>
https://www.wsj.com/articles/russian-opposition-puts-putin-under-pressure-in-moscow-election-11568029495
</link>
<description>
<![CDATA[
Candidates backed by Russia’s opposition won nearly half the seats up for grabs in Moscow’s city elections Sunday, building on a wave of protests that exposed some of the frailties in President Putin’s closely controlled political machine, but failed to make significant inroads in local races elsewhere.
]]>
</description>
<content:encoded/>
<pubDate>Mon, 09 Sep 2019 07:44:00 -0400</pubDate>
<guid isPermaLink="false">SB10710731395272083797004585539862964447000</guid>
<category domain="AccessClassName">PAID</category>
<wsj:articletype>Russia News</wsj:articletype>
</item>
<!-- …additional items… -->
</channel>
</rss>

I've been using rvest to get the titles for each article, and that works, but the links come back blank every time. I've tried the code a couple of different ways; this was my most recent attempt:


rm(list=ls())
library(tidyverse)
library(rvest)
setwd("~/wsj/world_news")

wsj_1 <- "wsj-world_news-1568041806.39885.xml" # a file like the example one provided above

test <- wsj_1 %>% read_html() # reading in example file

items <- wsj_1 %>%
  read_html() %>%
  html_nodes('item') # parsing the xml to get each 'item' which is a separate article

title <- items %>% 
  html_nodes('title') %>% 
  html_text()

link <- items %>% 
  html_node('link') %>% 
  html_text()

Any idea why I cannot get the links to show up? I get an empty <link> node instead of the URL.

I'm also unable to extract the CDATA text in the description tag, but that's not my primary concern. If I can get the link, that will be enough.


Solution

  • Without the exact full RSS feed you are using, I'm going to go out on a limb and assume it looks like the public feed I can find. The problem is that `read_html()` parses the document as HTML, and in HTML `<link>` is a void (self-closing) element: the parser discards its closing tag, so the URL text ends up as the node's *next sibling* rather than its content. If you look at the parsed output you will see this, which means you can target the URL with an XPath `following-sibling::text()` expression. I use purrr to build a dataframe and str_squish to clean up the output.
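The void-element behavior is easy to verify in isolation. A minimal BeautifulSoup sketch on a hypothetical one-item snippet (not the real feed) shows the `<link>` tag parsing as empty, with the URL pushed into the following text node:

```python
from bs4 import BeautifulSoup

# Hypothetical one-item snippet standing in for the real feed
snippet = "<item><title>Example</title><link>https://example.com/a</link></item>"

soup = BeautifulSoup(snippet, "html.parser")
link = soup.item.link

print(repr(link.text))          # '' -- the parsed <link/> tag has no content
print(repr(link.next_sibling))  # 'https://example.com/a' -- the URL became a sibling text node
```

This is exactly why `html_text()` on the `<link>` node returns an empty string in the question's R code.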


    R:

    library(rvest)
    library(tidyverse)   # loads purrr and stringr
    
    wsj_1 <- 'https://feeds.a.dj.com/rss/RSSWorldNews.xml'
    
    # read_html() leaves each <link> as an empty void element,
    # with the URL as the text node that follows it
    nodes <- wsj_1 %>% read_html() %>% html_nodes('item')
    
    df <- map_df(nodes, function(item) {
      data.frame(
        title = str_squish(item %>% html_node('title') %>% html_text()),
        link  = str_squish(item %>% html_node(xpath = "*/following-sibling::text()") %>% html_text()),
        stringsAsFactors = FALSE
      )
    })
    



    Py:

    import requests, re
    from bs4 import BeautifulSoup as bs
    import pandas as pd
    
    r = requests.get('https://feeds.a.dj.com/rss/RSSWorldNews.xml')
    soup = bs(r.content, 'lxml')  # lxml's HTML parser also treats <link> as void
    titles = []; links = []
    
    for i in soup.select('item'):
        titles.append(re.sub(r'\s+', ' ', i.title.text.strip()))   # collapse runs of whitespace
        links.append(i.link.next_sibling.strip())  # URL is the text node after the empty <link>
    
    df = pd.DataFrame(zip(titles, links), columns=['Title', 'Link'])
    print(df)
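The sibling workaround is only needed because the feed is being parsed as HTML. If you parse it as XML instead, `<link>` is an ordinary element and its text survives, so no XPath tricks are required. A minimal stdlib sketch on an inline sample feed (hypothetical titles and URLs, not WSJ data):

```python
import xml.etree.ElementTree as ET

# Hypothetical two-item feed standing in for the real RSS document
feed = """<rss version="2.0"><channel>
  <item><title>First story</title><link>https://example.com/a</link></item>
  <item><title>Second story</title><link>https://example.com/b</link></item>
</channel></rss>"""

root = ET.fromstring(feed)

# With an XML parser, <link> keeps its text content
rows = [(item.findtext("title"), item.findtext("link")) for item in root.iter("item")]
print(rows)  # [('First story', 'https://example.com/a'), ('Second story', 'https://example.com/b')]
```

The same idea applies in R: use `xml2::read_xml()` in place of `read_html()`, after which `html_text()` on each item's `link` node returns the URL directly.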