
Why does my Scrapy crawl fail?

[Unresolved question]

C:\Users\Administrator\Desktop\新建文件夹\xiaozhu>python -m scrapy crawl xiaozhu

2019-10-26 11:43:11 [scrapy.utils.log] INFO: Scrapy 1.7.3 started (bot: xiaozhu)
2019-10-26 11:43:11 [scrapy.utils.log] INFO: Versions: lxml 4.4.1.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.7.0, Python 3.5.3 (v3.5.3:1880cb95a742, Jan 16 2017, 15:51:26) [MSC v.1900 32 bit (Intel)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1c 28 May 2019), cryptography 2.7, Platform Windows-7-6.1.7601-SP1
2019-10-26 11:43:11 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'xiaozhu', 'SPIDER_MODULES': ['xiaozhu.spiders'], 'NEWSPIDER_MODULE': 'xiaozhu.spiders'}
2019-10-26 11:43:11 [scrapy.extensions.telnet] INFO: Telnet Password: c61bda45d63b8138
2019-10-26 11:43:11 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.logstats.LogStats']
2019-10-26 11:43:12 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-10-26 11:43:12 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-10-26 11:43:12 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-10-26 11:43:12 [scrapy.core.engine] INFO: Spider opened
2019-10-26 11:43:12 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-10-26 11:43:12 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-10-26 11:43:12 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (307) to <GET https://bizverify.xiaozhu.com?slideRedirect=https%3A%2F%2Fbj.xiaozhu.com%2Ffangzi%2F125535477903.html> from <GET http://bj.xiaozhu.com/fangzi/125535477903.html>
2019-10-26 11:43:12 [scrapy.core.engine] DEBUG: Crawled (400) <GET https://bizverify.xiaozhu.com?slideRedirect=https%3A%2F%2Fbj.xiaozhu.com%2Ffangzi%2F125535477903.html> (referer: None)
2019-10-26 11:43:12 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <400 https://bizverify.xiaozhu.com?slideRedirect=https%3A%2F%2Fbj.xiaozhu.com%2Ffangzi%2F125535477903.html>: HTTP status code is not handled or not allowed
2019-10-26 11:43:12 [scrapy.core.engine] INFO: Closing spider (finished)
2019-10-26 11:43:12 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 529,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 725,
 'downloader/response_count': 2,
 'downloader/response_status_count/307': 1,
 'downloader/response_status_count/400': 1,
 'elapsed_time_seconds': 0.427734,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 10, 26, 3, 43, 12, 889648),
 'httperror/response_ignored_count': 1,
 'httperror/response_ignored_status_count/400': 1,
 'log_count/DEBUG': 2,
 'log_count/INFO': 11,
 'response_received_count': 1,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2019, 10, 26, 3, 43, 12, 461914)}
2019-10-26 11:43:12 [scrapy.core.engine] INFO: Spider closed (finished)
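
The log already shows the failure point: the request to http://bj.xiaozhu.com/fangzi/125535477903.html is 307-redirected to bizverify.xiaozhu.com (a verification page), which answers 400; HttpErrorMiddleware then drops the non-2xx response, so nothing is scraped. The page the spider originally wanted survives, percent-encoded, in the slideRedirect query parameter, which a stdlib one-liner can confirm (a standalone check, not part of the spider):

```python
from urllib.parse import urlsplit, parse_qs

# The redirect target taken verbatim from the DEBUG line above.
redirect_url = ("https://bizverify.xiaozhu.com?slideRedirect="
                "https%3A%2F%2Fbj.xiaozhu.com%2Ffangzi%2F125535477903.html")

# parse_qs percent-decodes the query values, recovering the original page URL.
query = parse_qs(urlsplit(redirect_url).query)
original = query["slideRedirect"][0]
print(original)  # https://bj.xiaozhu.com/fangzi/125535477903.html
```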

kenny.feng | Beginner Level 1 | Beans: 151
Asked: 2019-10-26 11:56
All answers (1)

Could you adjust the log level first? All the INFO and DEBUG noise isn't telling you much. And to find where the crawl actually fails, shouldn't you set a breakpoint and step through the spider?

To set the logging level:
LOG_LEVEL = 'ERROR'
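
A minimal settings.py sketch along these lines; only LOG_LEVEL comes from the answer, while the User-Agent string and the allowed-status list are assumptions added because the 307-to-400 chain in the log looks like an anti-bot check rejecting Scrapy's default User-Agent:

```python
# settings.py -- sketch; values beyond LOG_LEVEL are illustrative assumptions

LOG_LEVEL = 'ERROR'  # print only errors, as the answer suggests

# Scrapy's default "Scrapy/x.y" User-Agent is a common trigger for
# verification redirects like bizverify; a browser UA may get further.
USER_AGENT = ('Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 '
              '(KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36')

# Let 400 responses reach the spider callback instead of being silently
# dropped by HttpErrorMiddleware, so the response body can be inspected.
HTTPERROR_ALLOWED_CODES = [400]
```

With HTTPERROR_ALLOWED_CODES set, the 400 page lands in the spider's parse callback, where response.status and response.text can be examined instead of only seeing "Ignoring response" in the log.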

小小咸鱼YwY | Beans: 3210 (Veteran Level 4) | 2019-10-26 13:52