I've been using some proxies to crawl some websites. Here is what I have in settings.py:
# Retry many times since proxies often fail
RETRY_TIMES = 10
# Retry on most error codes since proxies fail for different reasons
RETRY_HTTP_CODES = [500, 503, 504, 400, 403, 404, 408]

DOWNLOAD_DELAY = 3  # 3 seconds of delay

DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
    'myspider.comm.rotate_useragent.RotateUserAgentMiddleware': 100,
    'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 200,
    'myspider.comm.random_proxy.RandomProxyMiddleware': 300,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 400,
}
And I also have a proxy downloader middleware which has the following methods:
def process_request(self, request, spider):
    log('Requesting url %s with proxy %s...' % (request.url, proxy))

def process_response(self, request, response, spider):
    log('Response received from request url %s with proxy %s'
        % (request.url, proxy if proxy else 'nil'))

def process_exception(self, request, exception, spider):
    log_msg('Failed to request url %s with proxy %s with exception %s'
            % (request.url, proxy if proxy else 'nil', str(exception)))
    # retry again.
    return request
Since the proxies are not always stable, process_exception often logs a lot of request-failure messages. The problem is that the failed requests are never tried again.
As shown above, I've set RETRY_TIMES and RETRY_HTTP_CODES in the settings, and I also return the request for a retry in the process_exception method of the proxy middleware.
Why does Scrapy never retry the failed requests, and how can I make sure a request is tried at least the RETRY_TIMES I've set in settings.py?
Thanks for the help from @nyov of the Scrapy IRC channel. In my original settings the two middlewares were ordered like this:
'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 200,
'myspider.comm.random_proxy.RandomProxyMiddleware': 300,
Here the Retry middleware gets run first, so it retries the request before it ever makes it to the Proxy middleware. In my situation, Scrapy needs the proxies to crawl the website, or it will time out endlessly.
So I reversed the priorities of these two downloader middlewares:
'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 300,
'myspider.comm.random_proxy.RandomProxyMiddleware': 200,
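For reference, the complete reordered block in settings.py would then look roughly like the sketch below (the myspider.comm.* entries are the asker's own middlewares, copied from the question):

# settings.py -- sketch of the reordered downloader middleware priorities.
DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
    'myspider.comm.rotate_useragent.RotateUserAgentMiddleware': 100,
    # The proxy middleware now sits at 200 and RetryMiddleware at 300, so when
    # the downloader raises an exception, RetryMiddleware's process_exception
    # runs before the proxy middleware's and the request gets rescheduled up
    # to RETRY_TIMES times.
    'myspider.comm.random_proxy.RandomProxyMiddleware': 200,
    'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 300,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 400,
}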
It seems that your proxy downloader middleware's process_response is not playing by the rules and is hence breaking the middleware chain.
process_response() should either: return a Response object, return a Request object or raise an IgnoreRequest exception.
If it returns a Response (it could be the same given response, or a brand-new one), that response will continue to be processed with the process_response() of the next middleware in the chain.
...
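To make that concrete, here is a minimal sketch of the proxy middleware hooks written to follow that contract. It assumes the chosen proxy is stored in request.meta['proxy'] (how the asker actually stores it is not shown) and uses the standard logging module instead of the asker's log/log_msg helpers:

import logging

logger = logging.getLogger(__name__)

class RandomProxyMiddleware(object):
    """Sketch only: hooks that respect the downloader-middleware contract."""

    def process_request(self, request, spider):
        # ... pick a proxy and assign it to request.meta['proxy'] here ...
        logger.info('Requesting url %s with proxy %s',
                    request.url, request.meta.get('proxy'))
        # Returning None lets the request continue through the chain.

    def process_response(self, request, response, spider):
        logger.info('Response received from %s with proxy %s',
                    request.url, request.meta.get('proxy', 'nil'))
        # Must return a Response (or a Request, or raise IgnoreRequest);
        # falling off the end without a return breaks the chain.
        return response

    def process_exception(self, request, exception, spider):
        logger.info('Failed to request %s with proxy %s: %s',
                    request.url, request.meta.get('proxy', 'nil'), exception)
        # Returning None keeps the normal exception handling going, so
        # RetryMiddleware can reschedule the request itself.

The key line for this question is the explicit return response in process_response.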
This article was translated from Stack Overflow; the original question is at http://stackoverflow.com/questions/20533614