I tried to scrape the content (`#recent_list_box > li`) of Samsung Newsroom Mexico with Scrapy, but it doesn't work. Can you tell me why?
I think the content is rendered with JavaScript, and I can't read it.
Versions: Scrapy 2.1.0, Splash 3.4.1.
Spider code:

import scrapy
from scrapy_splash import SplashRequest
from scrapy import Request


class CrawlspiderSpider(scrapy.Spider):
    name = 'crawlspider'
    allowed_domains = ['news.samsung.com/mx']
    page = 1
    start_urls = ['https://news.samsung.com/mx']

    def start_request(self):
        for url in self.start_urls:
            yield SplashRequest(
                url,
                self.main_parse,
                endpoint='render.html',
                args={'wait': 10},
            )

    def parse(self, response):
        lists = response.css('#recent_list_box > li').getAll()
        for list in lists:
            yield {"list": lists.get()}
I've included the middlewares involved. Settings code:
BOT_NAME = 'spider'
SPIDER_MODULES = ['spider.spiders']
NEWSPIDER_MODULE = 'spider.spiders'
LOG_FILE = 'log.txt'
AJAXCRAWL_ENABLED = True
ROBOTSTXT_OBEY = False
SPLASH_URL = 'http://127.0.0.1'
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
SPLASH_LOG_400 = True
Below are the relevant lines from the log file. I would appreciate it if you could tell me why this error appears and why I can't read the data I want.
2020-07-02 15:27:09 [scrapy.core.engine] INFO: Spider opened
2020-07-02 15:27:09 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-07-02 15:27:09 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6024
2020-07-02 15:27:09 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://news.samsung.com/mx/> from <GET https://news.samsung.com/mx>
2020-07-02 15:27:09 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://news.samsung.com/mx/> (referer: None)
2020-07-02 15:27:09 [scrapy.core.scraper] ERROR: Spider error processing <GET https://news.samsung.com/mx/> (referer: None)
Traceback (most recent call last):
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\utils\defer.py", line 117, in iter_errback
    yield next(it)
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\utils\python.py", line 345, in __next__
    return next(self.data)
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\utils\python.py", line 345, in __next__
    return next(self.data)
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy_splash\middleware.py", line 156, in process_spider_output
    for el in result:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
    for x in result:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 338, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "C:\scrapy_tutorial\spider\spider\spiders\crawlspider.py", line 22, in parse
    lists = response.css('#recent_list_box > li').getAll()
AttributeError: 'SelectorList' object has no attribute 'getAll'
2020-07-02 15:27:09 [scrapy.core.engine] INFO: Closing spider (finished)
2020-07-02 15:27:09 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
You have to change

lists = response.css('#recent_list_box > li').getAll()

to

lists = response.css('#recent_list_box > li').getall()

Note the lowercase 'a': Scrapy's SelectorList provides getall(), not getAll(), which is exactly what the AttributeError in your traceback is saying.