Tags: python, python-3.x, parsing, url, hierarchy

How do I hierarchically sort URLs in python?


Given an initial list of URLs crawled from a site:

https://somesite.com/
https://somesite.com/advertise
https://somesite.com/articles
https://somesite.com/articles/read
https://somesite.com/articles/read/1154
https://somesite.com/articles/read/1155
https://somesite.com/articles/read/1156
https://somesite.com/articles/read/1157
https://somesite.com/articles/read/1158
https://somesite.com/blogs

I am trying to turn the list into a tab-organized tree hierarchy:

https://somesite.com
    /advertise
    /articles
        /read
            /1154
            /1155
            /1156
            /1157
            /1158
    /blogs

I've tried using lists, tuples, and dictionaries. So far I have figured out two flawed ways to output the content.

Method 1 will miss elements if they have the same name and position in the hierarchy:

Input:
https://somesite.com
https://somesite.com/missions
https://somesite.com/missions/playit
https://somesite.com/missions/playit/extbasic
https://somesite.com/missions/playit/extbasic/0
https://somesite.com/missions/playit/stego
https://somesite.com/missions/playit/stego/0
Output:
https://somesite.com/
    /missions
        /playit
            /extbasic
                /0
            /stego

----------------^ Missing expected output "/0"
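
The failure makes sense in hindsight: both branches end in a /0 at the same depth, so any structure that identifies nodes by depth and name alone treats them as one node. A minimal illustration (not my actual Method 1 code, just the failure mode):

# Both branches end in "0" at the same depth, so a structure keyed only
# by (depth, name) keeps a single node and the second /0 disappears
seen = set()
for parts in [["missions", "playit", "extbasic", "0"],
              ["missions", "playit", "stego", "0"]]:
    for depth, name in enumerate(parts):
        seen.add((depth, name))

print(sorted(seen))
# [(0, 'missions'), (1, 'playit'), (2, 'extbasic'), (2, 'stego'), (3, '0')]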

Method 2 will not miss any elements, but it will print redundant content:

Input:
https://somesite.com
https://somesite.com/missions
https://somesite.com/missions/playit
https://somesite.com/missions/playit/extbasic
https://somesite.com/missions/playit/extbasic/0
https://somesite.com/missions/playit/stego
https://somesite.com/missions/playit/stego/0
Output:
https://somesite.com/
    /missions
        /playit
            /extbasic
                /0
    /missions       <- Redundant content
        /playit     <- Redundant content
            /stego      
                /0
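
Method 2's redundancy is the flip side: printing the full ancestor chain for every branch loses nothing, but the shared parents get printed once per branch. Roughly (again an illustration, not my actual code):

# Each branch prints its complete chain, so /missions and /playit
# appear once per branch instead of once overall
branches = [["missions", "playit", "extbasic", "0"],
            ["missions", "playit", "stego", "0"]]
for parts in branches:
    for depth, name in enumerate(parts, start=1):
        print("    " * depth + "/" + name)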

I'm not sure how to properly do this, and my googling has only turned up references to urllib that don't seem to be what I need. Perhaps there is a much better approach, but I have been unable to find it.
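
For context on the urllib dead end: urllib.parse splits a single URL into its components, which helps with the cleanup step but says nothing about arranging many URLs into a tree:

from urllib.parse import urlsplit

parts = urlsplit("https://somesite.com/articles/read/1154")
print(parts.scheme)  # https
print(parts.netloc)  # somesite.com
print(parts.path)    # /articles/read/1154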

My code for getting the content into a usable list:

#!/usr/bin/python3

import re

# Read the original list of URLs from file
with open("sitelist.raw", "r") as f:
    raw_site_list = f.readlines()

# Extract the prefix and domain from the first line
# (https?:// matches both schemes; http[s]:// only matched https)
first_line = raw_site_list[0]
prefix, domain = re.match(r"(https?://)([^/\s]+)", first_line).group(1, 2)

# Cut off the prefix and domain and trailing whitespace, and drop any lines
# that are empty or end in a slash (i.e. the site root). Slicing is used
# rather than str.strip(), which removes characters, not a substring.
clean_site_list = []
for line in raw_site_list:
    clean_line = line.strip()
    if clean_line.startswith(prefix + domain):
        clean_line = clean_line[len(prefix) + len(domain):]
    if clean_line and not clean_line.endswith("/"):
        clean_site_list.append(clean_line)

# Split the resulting relative paths into their component parts and filter out empty strings
split_site_list = []
for site in clean_site_list:
    split_site_list.append(list(filter(None, site.split("/"))))
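
For the sample input at the top of the question, split_site_list comes out as:

[['advertise'],
 ['articles'],
 ['articles', 'read'],
 ['articles', 'read', '1154'],
 ['articles', 'read', '1155'],
 ['articles', 'read', '1156'],
 ['articles', 'read', '1157'],
 ['articles', 'read', '1158'],
 ['blogs']]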

This gives a list to manipulate, but I've run out of ideas on how to output it without losing elements or outputting redundant elements.

Thanks


Edit: This is the final working code I put together based on the answer chosen below:

# Read list of URLs from file
with open("sitelist.raw", "r") as f:
    urls = f.readlines()

# Remove trailing newlines and any trailing slashes
# (rstrip only removes what is actually there, so the last line is safe
# even if the file has no final newline)
urls = [url.rstrip("\n").rstrip("/") for url in urls]

# Remove duplicate lines while preserving order
unique_urls = []
for url in urls:
    if url not in unique_urls:
        unique_urls.append(url)

# Do the actual work (modified to use unique_urls, use tabs instead of 4x
# spaces, handle depth changes of more than one level, and write to file)
base = unique_urls[0]
tabdepth = 0
tlen = len(base.split('/'))

final_urls = []
for url in unique_urls[1:]:
    t = url.split('/')
    lt = len(t)
    if lt != tlen:
        tabdepth += lt - tlen  # may step up or down several levels at once
        tlen = lt
    pad = '\t' * tabdepth
    final_urls.append(f'{pad}/{t[-1]}')

with open("sitelist.new", "wt") as f:
    f.write(base + "\n")
    for url in final_urls:
        f.write(url + "\n")
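
For the original list at the top of the question, sitelist.new comes out tab-indented like this:

https://somesite.com
	/advertise
	/articles
		/read
			/1154
			/1155
			/1156
			/1157
			/1158
	/blogs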

Solution

  • This works with your sample data:

    urls = ['https://somesite.com',
            'https://somesite.com/missions',
            'https://somesite.com/missions/playit',
            'https://somesite.com/missions/playit/extbasic',
            'https://somesite.com/missions/playit/extbasic/0',
            'https://somesite.com/missions/playit/stego',
            'https://somesite.com/missions/playit/stego/0']
    
    
    base = urls[0]
    print(base)
    tabdepth = 0
    tlen = len(base.split('/'))
    
    for url in urls[1:]:
        t = url.split('/')
        lt = len(t)
        if lt != tlen:
            tabdepth += 1 if lt > tlen else -1
            tlen = lt
        pad = '    ' * tabdepth
        print(f'{pad}/{t[-1]}')
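
    For the urls list above, this prints:

    https://somesite.com
        /missions
            /playit
                /extbasic
                    /0
                /stego
                    /0

  • One caveat: tabdepth moves one level per change, so this relies on the list being sorted and never climbing back up more than one level at a time. The edit at the end of the question adjusts tabdepth by the actual length difference to handle bigger jumps (such as /blogs following /articles/read/1158 in the first sample).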