This time, let's analyze the source code of Scrapy's retry mechanism, learn from its design, and write a customized middleware that captures failed URLs and related information.
Scrapy is an application framework written for crawling websites and extracting structured data. It can be used in a wide range of programs, including data mining, information processing, and archiving historical data.
It was originally designed for page scraping (more precisely, web scraping), but it can also be used to fetch data returned by APIs (such as Amazon Associates Web Services) or as a general-purpose web crawler.
A single diagram makes the data flow inside Scrapy clear:
To get a quick sense of what each component does, look at the simplified data flow below:
No matter how powerful your machine is or how fast your network connection, in a large-scale Scrapy job the number of items actually crawled never quite equals the number you expected; there are always a few failed requests that slip through the net. By analyzing Scrapy's logs, we can see that failures fall into the following two categories:
Whether the failure is an exception or an HTTP error, Scrapy has a corresponding retry mechanism. We can configure the retry-related parameters in settings.py, and when an exception or error occurs at runtime, Scrapy handles it automatically. The key piece is the retry middleware, so let's take a look at Scrapy's RetryMiddleware.
In your Scrapy project's middlewares.py file, type the following:
from scrapy.downloadermiddlewares.retry import RetryMiddleware
Hold Ctrl (Command on a Mac) and left-click RetryMiddleware to jump to the file where this middleware lives. You can also find it by browsing the installed files directly; the path is:
site-packages/scrapy/downloadermiddlewares/retry.RetryMiddleware
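If you prefer not to rely on the IDE, the standard-library inspect module can also locate a class's source file programmatically. A minimal sketch, using json.JSONDecoder as a stand-in since Scrapy may not be installed in every environment (with Scrapy installed, you would pass RetryMiddleware instead):

```python
import inspect
import json

# Locate the file where a class is defined; substitute RetryMiddleware
# here to find scrapy/downloadermiddlewares/retry.py on your system.
path = inspect.getsourcefile(json.JSONDecoder)
print(path)  # e.g. .../lib/python3.x/json/decoder.py
```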
The source code is as follows:
import logging

from twisted.internet import defer
from twisted.internet.error import (
    TimeoutError, DNSLookupError, ConnectionRefusedError, ConnectionDone,
    ConnectError, ConnectionLost, TCPTimedOutError)
from twisted.web.client import ResponseFailed

from scrapy.exceptions import NotConfigured
from scrapy.utils.response import response_status_message
from scrapy.core.downloader.handlers.http11 import TunnelError
from scrapy.utils.python import global_object_name

logger = logging.getLogger(__name__)


class RetryMiddleware(object):

    # IOError is raised by the HttpCompression middleware when trying to
    # decompress an empty response
    # Exceptions that trigger a retry; note that some of them are exactly
    # the exceptions we saw in the logs above
    EXCEPTIONS_TO_RETRY = (defer.TimeoutError, TimeoutError, DNSLookupError,
                           ConnectionRefusedError, ConnectionDone, ConnectError,
                           ConnectionLost, TCPTimedOutError, ResponseFailed,
                           IOError, TunnelError)

    def __init__(self, settings):
        # Read the retry configuration from settings.py; if retrying is
        # not enabled, skip this middleware entirely
        if not settings.getbool('RETRY_ENABLED'):
            raise NotConfigured
        self.max_retry_times = settings.getint('RETRY_TIMES')
        self.retry_http_codes = set(int(x) for x in settings.getlist('RETRY_HTTP_CODES'))
        self.priority_adjust = settings.getint('RETRY_PRIORITY_ADJUST')

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings)

    # If the response's status code is one we want to retry
    def process_response(self, request, response, spider):
        if request.meta.get('dont_retry', False):
            return response
        if response.status in self.retry_http_codes:
            reason = response_status_message(response.status)
            return self._retry(request, reason, spider) or response
        return response

    # One of the exceptions that warrant a retry was raised
    def process_exception(self, request, exception, spider):
        if isinstance(exception, self.EXCEPTIONS_TO_RETRY) \
                and not request.meta.get('dont_retry', False):
            return self._retry(request, exception, spider)

    # The retry operation itself
    def _retry(self, request, reason, spider):
        retries = request.meta.get('retry_times', 0) + 1
        retry_times = self.max_retry_times
        if 'max_retry_times' in request.meta:
            retry_times = request.meta['max_retry_times']
        stats = spider.crawler.stats
        if retries <= retry_times:
            logger.debug("Retrying %(request)s (failed %(retries)d times): %(reason)s",
                         {'request': request, 'retries': retries, 'reason': reason},
                         extra={'spider': spider})
            retryreq = request.copy()
            retryreq.meta['retry_times'] = retries
            retryreq.dont_filter = True
            retryreq.priority = request.priority + self.priority_adjust
            if isinstance(reason, Exception):
                reason = global_object_name(reason.__class__)
            stats.inc_value('retry/count')
            stats.inc_value('retry/reason_count/%s' % reason)
            return retryreq
        else:
            stats.inc_value('retry/max_reached')
            logger.debug("Gave up retrying %(request)s (failed %(retries)d times): %(reason)s",
                         {'request': request, 'retries': retries, 'reason': reason},
                         extra={'spider': spider})
Reading the source, we can see that responses carrying an HTTP error code are handled by the process_response method. The handling is straightforward: check whether response.status is in the retry_http_codes set, which is read from the settings file:
RETRY_ENABLED = True  # retrying on failure is enabled by default; set False to turn it off
RETRY_TIMES = 3  # number of retries after a failure; the default is 2
RETRY_HTTP_CODES = [500, 502, 503, 504, 522, 524, 408]  # retry only on these status codes
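The process_response check boils down to set membership plus the `or response` idiom: _retry returns a new request while attempts remain and None once retries are exhausted, so `self._retry(...) or response` falls back to the original response. A stand-alone sketch of that logic in plain Python, with strings standing in for the request and response objects:

```python
RETRY_HTTP_CODES = [500, 502, 503, 504, 522, 524, 408]
retry_http_codes = set(int(x) for x in RETRY_HTTP_CODES)

def handle(status, retry_result, response="original-response"):
    # Mirror of process_response: retry only for the configured codes,
    # and fall back to the response when _retry() returned None.
    if status in retry_http_codes:
        return retry_result or response
    return response

print(handle(503, "retry-request"))      # retried -> "retry-request"
print(handle(503, None))                 # retries exhausted -> "original-response"
print(handle(200, "retry-request"))      # not a retry code -> "original-response"
```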
Exceptions are handled in the same spirit: the EXCEPTIONS_TO_RETRY tuple holds all the exception types to retry on, and process_exception checks whether the raised exception is an instance of one of them. If it is, the retry logic kicks in; if not, the exception is ignored.
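Because isinstance accepts a tuple of types, the whole EXCEPTIONS_TO_RETRY check is a single call. A small illustration with built-in exceptions standing in for the Twisted ones:

```python
# Built-in stand-ins for scrapy's EXCEPTIONS_TO_RETRY tuple
EXCEPTIONS_TO_RETRY = (TimeoutError, ConnectionRefusedError, IOError)

def should_retry(exception, dont_retry=False):
    # Mirror of process_exception's condition, including the
    # per-request 'dont_retry' escape hatch
    return isinstance(exception, EXCEPTIONS_TO_RETRY) and not dont_retry

print(should_retry(TimeoutError("timed out")))           # True
print(should_retry(ValueError("bad value")))             # False
print(should_retry(TimeoutError("x"), dont_retry=True))  # False
```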
Now that we understand how Scrapy handles exceptions, we can apply the same idea and write a middleware that captures the requests that slipped through, so we can re-crawl them later.
We subclass RetryMiddleware and override its process_response() and process_exception() methods. Talk is cheap, show the code:
class GetFailedUrl(RetryMiddleware):
    def __init__(self, settings):
        self.max_retry_times = settings.getint('RETRY_TIMES')
        self.retry_http_codes = set(int(x) for x in settings.getlist('RETRY_HTTP_CODES'))
        self.priority_adjust = settings.getint('RETRY_PRIORITY_ADJUST')

    def process_response(self, request, response, spider):
        if response.status in self.retry_http_codes:
            # Save the failed URL; you could write to any other storage instead
            with open(str(spider.name) + ".txt", "a") as f:
                f.write(response.url + "\n")
            return response
        return response

    def process_exception(self, request, exception, spider):
        # Handle the exceptions that would normally trigger a retry
        if isinstance(exception, self.EXCEPTIONS_TO_RETRY):
            with open(str(spider.name) + ".txt", "a") as f:
                f.write(str(request) + "\n")
            return None
Then register the middleware in settings.py:
DOWNLOADER_MIDDLEWARES = {
    'myspider.middlewares.TabelogDownloaderMiddleware': 543,
    'myspider.middlewares.RandomProxy': 200,
    'myspider.middlewares.GetFailedUrl': 220,
}
To test it, we can deliberately write a wrong URL or shrink download_delay; all kinds of exceptions will show up, but now we are able to capture them:
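Once failures are recorded, a small helper can read the file back and deduplicate the URLs so they can be fed into a follow-up crawl. A minimal sketch, assuming the `spider.name + ".txt"` convention from the middleware above (exception entries written as `str(request)` look like `<GET ...>` and are skipped here):

```python
def load_failed_urls(path):
    """Read recorded failures, keeping only deduplicated http(s) URLs
    in their original order; non-URL lines (exception entries) are skipped."""
    seen = set()
    urls = []
    with open(path) as f:
        for line in f:
            url = line.strip()
            if url.startswith(("http://", "https://")) and url not in seen:
                seen.add(url)
                urls.append(url)
    return urls

# The returned list could then serve as start_urls for a re-crawl spider.
```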