Tags: python, regex, python-3.x, pandas, urlparse

Given a list with several junk links, how do I extract all the links that end in .pdf?


I have a pandas dataframe column with several links in each cell:

Name|COL
San Diego|'https://foo.com/energy_docs/tyv/2004/019787_S30_gasTOC.cfm https://foo.com/energy_docs/tyv/99/19787s022_gas.pdf https://foo.com/energy_docs/tyv/2000/19787s021_gas.pdf https://foo.com/energy_docs/tyv/2000/19787-s017_report.pdf https://foo.com/energy_docs/tyv/99/293-_9302SDFS 2.5 KM, 5.0 KM, & 10.0 KM GAS_TPC.cfm https://foo.com/energy_docs/tyv/99/19787-s018_gas.pdf https://foo.com/energy_docs/tyv/2000/19787-s017_report.pdf https://foo.com/energy_docs/tyv/98/019787-S16_gas 2.5 KM, 5.0 KM, & 10.0 KM GAS_TPC.cfm https://foo.com/energy_docs/tyv/97/019787-S15_gas 2.5 KM, 5.0 KM, & 10.0 KM GAS_TPC.cfm https://foo.com/energy_docs/tyv/97/019787-S14_gas 2.5 KM, 5.0 KM, & 10.0 KM GAS_TPC.cfm https://foo.com/energy_docs/tyv/97/19787-S013_gas.pdf https://foo.com/energy_docs/tyv/96/019787-S12_gas 2.5 KM, 5.0 KM, & 10.0 KM GAS_TPC.cfm https://foo.com/energy_docs/tyv/96/019787-S11_gas 2.5 KM, 5.0 KM, & 10.0 KM GAS_TPC.cfm https://foo.com/energy_docs/tyv/96/019787-S10_gas 2.5 KM, 5.0 KM, & 10.0 KM GAS_TPC.cfm https://foo.com/energy_docs/tyv/pre96/019787-S9_gas 2.5 KM, 5.0 KM, & 10.0 KM GAS_TPC.cfm https://foo.com/energy_docs/tyv/pre96/019787-S8_gas 2.5 KM, 5.0 KM, & 10.0 KM GAS_TPC.cfm https://foo.com/energy_docs/tyv/96/19-787s007_Amlodipine.cfm https://foo.com/energy_docs/tyv/pre96/019787-S6_gas 2.5 KM, 5.0 KM, & 10.0 KM GAS_TPC.cfm https://foo.com/energy_docs/tyv/pre96/019787-S5_gas 2.5 KM, 5.0 KM, & 10.0 KM GAS_TPC.cfm https://foo.com/energy_docs/tyv/pre96/019787-S4_gas GAS_TPC.cfm https://foo.com/energy_docs/tyv/pre96/019787-S3_gas_toc.cfm https://foo.com/energy_docs/tyv/pre96/019787-S2_gas GAS_TPC.cfm'
Washington|'https://foo.com/energy_docs/a32/2007/022136.cfm'
Texas|'https://foo.com/energy/29380/no_ant/USA/2/2007.pdf'

How can I extract all the links that end in .pdf in the following way:

Name|COL
San Diego|https://foo.com/energy_docs/tyv/99/19787s022_gas.pdf
San Diego|https://foo.com/energy_docs/tyv/2000/19787s021_gas.pdf
San Diego|https://foo.com/energy_docs/tyv/2000/19787-s017_report.pdf
San Diego|https://foo.com/energy_docs/tyv/99/19787-s018_gas.pdf 
San Diego|https://foo.com/energy_docs/tyv/2000/19787-s017_report.pdf 
San Diego|https://foo.com/energy_docs/tyv/97/19787-S013_gas.pdf
Washington|NaN
Texas|https://foo.com/energy/29380/no_ant/USA/2/2007.pdf

I tried:

import re

def url_extractor(row):
    url = str(row)
    # raw string avoids the invalid-escape warning for '\s' and '\.'
    r = re.compile(r'(http[^\s]+\.pdf)')
    urls = r.findall(url)
    if len(urls) == 0:
        return 'NaN'
    else:
        return ' '.join(urls)


In:

df4['COL'] = df4['COL'].apply(url_extractor)
df4

Out:

    Name    COL
0   San Diego   https://foo.com/energy_docs/tyv/99/19787s022_g...
1   Washington  NaN
2   Texas   https://foo.com/energy/29380/no_ant/USA/2/2007...

However, I do not understand how to do the stacking/splitting part in order to get one link/URL on each row. For example, let's check the first row:

In:

df4['COL'][0]

Out:

'https://foo.com/energy_docs/tyv/99/19787s022_gas.pdf https://foo.com/energy_docs/tyv/2000/19787s021_gas.pdf https://foo.com/energy_docs/tyv/2000/19787-s017_report.pdf https://foo.com/energy_docs/tyv/99/19787-s018_gas.pdf https://foo.com/energy_docs/tyv/2000/19787-s017_report.pdf https://foo.com/energy_docs/tyv/97/19787-S013_gas.pdf'

Each link should be "mapped" to its name, which is San Diego.


Solution

  • If this is already loaded into a pandas dataframe, you can use the built-in pandas string methods to break the strings in COL into lists, keep only the elements you want, restructure the column of lists into a long-format Series, and then merge that back with the original dataframe:

    # break COL into lists, keeping only the strings that end in '.pdf'
    COL_series = df.COL.str.split().apply(lambda x: [y for y in x if y.endswith('.pdf')])
    # expand each list into its own rows, producing a long-format Series
    # (reset_index drops the inner level of the resulting MultiIndex)
    COL_series = COL_series.apply(pd.Series).stack().reset_index(level=1, drop=True)
    COL_series.name = 'COL'
    
    # merge back with df on the original index; an outer merge keeps
    # rows with no .pdf links (Washington becomes NaN)
    pd.merge(df.Name.reset_index(), 
             COL_series.reset_index(), 
             how='outer', 
             on='index').drop('index', axis=1)
    
    # returns:
            Name                                                         COL
    0  San Diego        https://foo.com/energy_docs/tyv/99/19787s022_gas.pdf
    1  San Diego      https://foo.com/energy_docs/tyv/2000/19787s021_gas.pdf
    2  San Diego  https://foo.com/energy_docs/tyv/2000/19787-s017_report.pdf
    3  San Diego       https://foo.com/energy_docs/tyv/99/19787-s018_gas.pdf
    4  San Diego  https://foo.com/energy_docs/tyv/2000/19787-s017_report.pdf
    5  San Diego       https://foo.com/energy_docs/tyv/97/19787-S013_gas.pdf
    6 Washington                                                         NaN
    7      Texas          https://foo.com/energy/29380/no_ant/USA/2/2007.pdf
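
  • On pandas 0.25 or newer, the same result can be obtained more directly with `Series.str.findall` plus `DataFrame.explode`, which avoids the `apply(pd.Series).stack()` round trip. A sketch, using a small dataframe with made-up URLs in place of the original data:

    ```python
    import pandas as pd

    # toy stand-in for the original dataframe
    df = pd.DataFrame({
        'Name': ['San Diego', 'Washington', 'Texas'],
        'COL': [
            'https://foo.com/a.cfm https://foo.com/b.pdf https://foo.com/c.pdf',
            'https://foo.com/d.cfm',
            'https://foo.com/e.pdf',
        ],
    })

    # findall collects every .pdf link in each cell into a list;
    # explode then emits one row per list element, repeating Name.
    # Rows whose list is empty become NaN, matching the desired output.
    out = df.assign(COL=df.COL.str.findall(r'http\S+\.pdf')).explode('COL')
    ```

    `explode` keeps the original index values on the repeated rows, so a `reset_index(drop=True)` afterwards gives a clean 0..n index if needed.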