Tags: javascript, node.js, puppeteer, data-extraction

How to fix a failing Puppeteer scrape


I'd like to save a webpage's HTML with Node.js and Puppeteer. When I start the program with headless: false, I can see that the page loads fully and all the data is there. But when I try to save the HTML, I only get this:

<!DOCTYPE html><html><head>
<meta name="ROBOTS" content="NOINDEX, NOFOLLOW">
<meta http-equiv="cache-control" content="max-age=0">
<meta http-equiv="cache-control" content="no-cache">
<meta http-equiv="expires" content="0">
<meta http-equiv="expires" content="Tue, 01 Jan 1980 1:00:00 GMT">
<meta http-equiv="pragma" content="no-cache">
<meta http-equiv="refresh" content="10; url=/distil_r_captcha.html?requestId=16a-84c6-42b6-9023-a45b3854e34c&amp;httpReferrer=%2Fli">
<script type="text/javascript">
        (function(window){
                try {
                        if (typeof sessionStorage !== 'undefined'){
                                sessionStorage.setItem('distil_referrer', document.referrer);
                        }
                } catch (e){}
        })(window);
</script>
<script type="text/javascript" src="/elrhculcipoedjwh.js" defer=""></script><style type="text/css">#d__fFH{position:absolute;top:-5000px;left:-5000px}#d__fF{font-family:serif;font-size:200px;visibility:hidden}#xaqctssquudxqdqxzveurrreayw{display:none!important}</style></head>
<body>
<div id="distilIdentificationBlock">&nbsp;</div>
</body></html>

So I'm a little confused. If the webpage knows that the request came from a robot (so I can only download this blocked HTML), then why does the content show up in the browser? Or, from the other perspective: if the webpage doesn't know that the request came from a robot (so the content shows up), then why can I only download this blocked HTML?
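One way to confirm which document was actually captured is to check for the block page's marker div before saving (a quick diagnostic sketch; the id is taken from the dump above):

    // page.$ resolves to null when nothing matches, so this reveals whether
    // we saved the Distil challenge page or the real content.
    const blocked = await page.$('#distilIdentificationBlock');
    console.log(blocked ? 'captured the block page' : 'captured the real content');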

My code:

const puppeteer = require('puppeteer');

(async () => {

    const browser = await puppeteer.launch({ headless: false });
    const context = await browser.createIncognitoBrowserContext();
    const page = await context.newPage();

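    // Hide the automation flag: headless Chrome exposes navigator.webdriver = true by default.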
    await page.evaluateOnNewDocument(() => {
        Object.defineProperty(navigator, 'webdriver', {
            get: () => false,
        });
    });

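    // Fake a Chrome runtime object (the common stealth snippet sets window.chrome;
    // this variant attaches it to navigator).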
    await page.evaluateOnNewDocument(() => {
        window.navigator.chrome = {
            runtime: {},
        };
    });

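    // Align permissions.query with Notification.permission: headless Chrome returns
    // contradictory values for 'notifications', a well-known detection test.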
    await page.evaluateOnNewDocument(() => {
        const originalQuery = window.navigator.permissions.query;
        window.navigator.permissions.query = (parameters) => (
            parameters.name === 'notifications' ?
                Promise.resolve({
                    state: Notification.permission
                }) :
                originalQuery(parameters)
        );
    });

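    // Report a non-empty plugins array; headless Chrome normally exposes none.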
    await page.evaluateOnNewDocument(() => {
        Object.defineProperty(navigator, 'plugins', {
            get: () => [1, 2, 3, 4, 5],
        });
    });

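    // Report human-like language preferences ('en-EN' is not a real locale;
    // 'en-US' would be more convincing).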
    await page.evaluateOnNewDocument(() => {
        Object.defineProperty(navigator, 'languages', {
            get: () => ['en-EN', 'en'],
        });
    });

    await page.setViewport({
        width: 1024,
        height: 768,
        deviceScaleFactor: 1,
        isMobile: false,
        hasTouch: false,
        isLandscape: false
    });

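    // Use a regular desktop Chrome user agent string.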
    await page.setUserAgent('Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36');
    const url = 'https://example.com'; // placeholder: the question does not show the real target URL
    await page.goto(url, { waitUntil: 'load' });
    const html = await page.content();
    console.log(html);
    await browser.close();
})();

How can I solve this problem? Maybe I'm trying to save the HTML too early? Thanks in advance.


Solution

  • I think I found the solution. Since the targeted webpage has an anti-bot system, it first renders an "empty" challenge page containing only a single div, and only afterwards does it redirect to the real content. So I had to add

    await page.waitFor(5000)
    

    to wait until the page has fully loaded.
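    A fixed delay works, but it is fragile, and page.waitFor has since been deprecated in newer Puppeteer releases. A more robust variant (a sketch, not part of the original answer; '#real-content' is a placeholder selector) waits for an element that exists only on the real page:

    // Wait for the challenge redirect to finish instead of sleeping blindly.
    // Replace '#real-content' with a selector that appears on the real page
    // but not on the Distil block page.
    await page.waitForSelector('#real-content', { timeout: 30000 });
    const html = await page.content();

    On any Puppeteer version, a plain promise-based delay also works:

    await new Promise(resolve => setTimeout(resolve, 5000));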