Tags: python, authentication, scrapy, vbulletin

Scrapy login to vBulletin guidance needed


I have already read a lot of posts on this subject (including the Scrapy docs), but for some reason I am not able to log in to a vBulletin website. Let me clarify that I am not a developer and my programming/scraping knowledge is very basic, so if anyone decides to help, please be specific so that I can understand you.

Now let me explain the details:

I am trying to log in to our company forums to scrape information from them and organize it into Excel spreadsheets. The login page address is: https://forums.chaosgroup.com/auth/login-form

Besides the username (scrapy) and password (12345) fields, there are a few hidden fields in the page source:

<input type="hidden" name="url" value="aHR0cHM6Ly9mb3J1bXMuY2hhb3Nncm91cC5jb20v" />
<input type="hidden" id="vb_loginmd5" name="vb_login_md5password" value="">
<input type="hidden" id="vb_loginmd5_utf8" name="vb_login_md5password_utf" value="">

When I submit the form from the website, I see the following POST request in the Chrome DevTools:

url:aHR0cHM6Ly9mb3J1bXMuY2hhb3Nncm91cC5jb20v
username:scrapy
password:
vb_login_md5password:827ccb0eea8a706c4c34a16891f84e7b
vb_login_md5password_utf:827ccb0eea8a706c4c34a16891f84e7b

This information is static most of the time. Very rarely I have seen the hidden url value change its last character, but overall everything stays the same.
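As a side note (my own observation, not anything the site documents): the two odd-looking values appear to be derivable. The hidden url field looks like Base64 of the forum's base URL, and the two md5 fields look like the MD5 hex digest of the plain password, which would explain why the password field itself is posted empty. A quick Python check:

```python
import base64
import hashlib

# The hidden "url" field decodes as Base64 to the forum's base URL
decoded = base64.b64decode("aHR0cHM6Ly9mb3J1bXMuY2hhb3Nncm91cC5jb20v").decode("utf-8")
print(decoded)  # https://forums.chaosgroup.com/

# The md5 fields match the MD5 hex digest of the plain-text password
digest = hashlib.md5("12345".encode("utf-8")).hexdigest()
print(digest)   # 827ccb0eea8a706c4c34a16891f84e7b
```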

Now I try to submit that data from a Scrapy spider (code below) in order to log in, but the spider returns to the login page instead of opening the actual forums.

# -*- coding: utf-8 -*-
import scrapy
from scrapy.http import FormRequest
from scrapy.utils.response import open_in_browser
from scrapy.shell import inspect_response


class ForumsSpider(scrapy.Spider):
    name = 'forums'
    start_urls = ['https://forums.chaosgroup.com/auth/login-form/']


    def parse(self, response):
        return FormRequest.from_response(response,
                                         formdata={'url':'aHR0cHM6Ly9mb3J1bXMuY2hhb3Nncm91cC5jb20v',
                                                   'username':'scrapy',
                                                   'password':'',
                                                   'vb_login_md5password':'827ccb0eea8a706c4c34a16891f84e7b',
                                                   'vb_login_md5password_utf':'827ccb0eea8a706c4c34a16891f84e7b'},
                                         callback=self.scrape_home_page)


    def scrape_home_page(self, response):
        open_in_browser(response)
        a = response.css('h1::text').extract_first()
        print(a)
        # A spider callback must yield a Request, an item or a dict --
        # yielding a bare string raises an error, so wrap the value
        yield {'heading': a}

The full log file from Scrapy is on Pastebin for easier reading: https://pastebin.com/XtPHnBcF

D:\Scrapy\forum>scrapy crawl forums
2018-02-24 11:42:10 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: forum)
2018-02-24 11:42:10 [scrapy.utils.log] INFO: Versions: lxml 4.1.1.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.3.1, w3lib 1.19.0, Twisted 17.9.0, Python 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:04:45) [MSC v.1900 32 bit (Intel)], pyOpenSSL 17.5.0 (OpenSSL 1.1.0g 2 Nov 2017), cryptography 2.1.4, Platform Windows-8.1-6.3.9600-SP0
2018-02-24 11:42:10 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'forum', 'COOKIES_DEBUG': True, 'DOWNLOAD_DELAY': 3, 'NEWSPIDER_MODULE': 'forum.spiders', 'SPIDER_MODULES': ['forum.spiders']}
2018-02-24 11:42:10 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2018-02-24 11:42:10 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-02-24 11:42:10 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-02-24 11:42:10 [scrapy.middleware] INFO: Enabled item pipelines: []
2018-02-24 11:42:10 [scrapy.core.engine] INFO: Spider opened
2018-02-24 11:42:10 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-02-24 11:42:10 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-02-24 11:42:11 [scrapy.downloadermiddlewares.cookies] DEBUG: Received cookies from: <200 https://forums.chaosgroup.com/auth/login-form/>
Set-Cookie: bbsessionhash=97ed47f40f0376dd5c33276eefe2cb53; path=/; secure; HttpOnly
Set-Cookie: bblastvisit=1519465318; path=/; secure; HttpOnly
Set-Cookie: bblastactivity=1519465318; path=/; secure; HttpOnly
2018-02-24 11:42:11 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://forums.chaosgroup.com/auth/login-form/> (referer: None)
2018-02-24 11:42:11 [scrapy.downloadermiddlewares.cookies] DEBUG: Sending cookies to: <POST https://forums.chaosgroup.com/auth/login>
Cookie: bbsessionhash=97ed47f40f0376dd5c33276eefe2cb53; bblastvisit=1519465318; bblastactivity=1519465318
2018-02-24 11:42:13 [scrapy.downloadermiddlewares.cookies] DEBUG: Received cookies from: <200 https://forums.chaosgroup.com/auth/login>
Set-Cookie: bblastactivity=1519465321; path=/; secure; HttpOnly
Set-Cookie: bbsessionhash=58e04286cf781704ef718c38d4dbb0a2; path=/; secure; HttpOnly
2018-02-24 11:42:13 [scrapy.core.engine] DEBUG: Crawled (200) <POST https://forums.chaosgroup.com/auth/login> (referer: https://forums.chaosgroup.com/auth/login-form/)
None
2018-02-24 11:42:13 [scrapy.core.engine] INFO: Closing spider (finished)
2018-02-24 11:42:13 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 862,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 1,
 'downloader/request_method_count/POST': 1,
 'downloader/response_bytes': 3538,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 2, 24, 9, 42, 13, 954670),
 'log_count/DEBUG': 6,
 'log_count/INFO': 7,
 'request_depth_max': 1,
 'response_received_count': 2,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2018, 2, 24, 9, 42, 10, 928535)}
2018-02-24 11:42:13 [scrapy.core.engine] INFO: Spider closed (finished)

I have been trying to figure out what I'm doing wrong: I compared my code with other, similar code and tried (and succeeded) to log in to other websites, but I can't manage to make it work with our vBulletin site.

What am I doing wrong, what am I missing? If someone could point me in the right direction I would be extremely thankful, and I'll try to return the favor somehow.

Thanks in advance to everybody.


Solution

  • Your login data is posted to https://forums.chaosgroup.com/auth/login

    If you take a look at the source of that page (response.text in your scrape_home_page()), you will see, among other things:

    <div class="redirectMessage-wrapper">
            <div id="redirectMessage">Logging in...</div>
    </div>
    
    
    <script type="text/javascript">
    (function()
    {
            var url = "https://forums.chaosgroup.com" || "/";
    
            //remove hash from the url of the top most window (if any)
            var a = document.createElement('a');
            a.setAttribute('href', url);
            if (a.hash) {
                    url = url.replace(a.hash, '');
            }
            else if (url.lastIndexOf('#') != -1) { //a.hash with just # returns empty
                    url = url.replace('#', '');
            }
    
    
    
            window.open(url, '_top');
    })();
    </script>
    

    This shows that the login was indeed successful and you are being redirected to the index page via JavaScript (which Scrapy does not execute).
    So you're already logged in; to continue scraping, all you need to do is request the index page yourself.
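    One way to continue from scrape_home_page() (a sketch of my own; the helper name, the parse_index callback, and the regex are assumptions, not from the site): pull the target URL out of that script block and issue a normal Scrapy request for it, since Scrapy itself will not run the JavaScript redirect.

    ```python
    import re

    def extract_redirect_url(page_text):
        """Pull the target URL out of the `var url = "..."` line in the
        post-login redirect script. Returns None if no match is found."""
        match = re.search(r'var url = "([^"]+)"', page_text)
        return match.group(1) if match else None

    # In the spider, the callback could then follow the redirect itself:
    #
    #     def scrape_home_page(self, response):
    #         url = extract_redirect_url(response.text)
    #         if url:
    #             yield scrapy.Request(url, callback=self.parse_index)
    ```

    Since the session cookies are already set, simply hard-coding the index URL instead of parsing the script would work just as well.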