Scrapy Middleware

Introduction to middleware

  1. What middleware is for

          Throughout a Scrapy run, middleware performs actions at certain steps of the framework's operation to adapt them to your own project.

     For example, Scrapy's built-in HttpErrorMiddleware can handle HTTP requests that come back with errors.

  2. How to use middleware

          Configure settings.py; see the Scrapy documentation at https://doc.scrapy.org for details.
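
A minimal sketch of what that configuration looks like, assuming a hypothetical project named myproject (module paths and priority numbers are placeholders):

DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.MyDownloaderMiddleware': 543,
}
SPIDER_MIDDLEWARES = {
    'myproject.middlewares.MySpiderMiddleware': 543,
}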

Types of middleware

  Scrapy theoretically has three kinds of middleware (Scheduler Middleware, Spider Middleware, Downloader Middleware); in practice the following two are generally used:

  1. Spider Middleware

     Its main function is to process the objects flowing into and out of the spider while the crawl runs.

  2. Downloader Middleware

     Its main function is to process requests before pages are downloaded and responses once they have been.

 Middleware methods

      1. Spider Middleware manages the following methods (a short sketch follows this list):

       - process_spider_input: receives a response object and processes it.

         Its position is Downloader --> process_spider_input --> Spiders (Downloader and Spiders are components in the official Scrapy architecture diagram).

       - process_spider_exception: called when the spider raises an exception.

       - process_spider_output: called when the spider returns its result after processing a response.

       - process_start_requests: called when the spider issues its start requests.

         Its position is Spiders --> process_start_requests --> Scrapy Engine (the Scrapy Engine is a component in the official Scrapy architecture diagram).
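
As referenced above, here is a minimal sketch of a spider middleware, assuming dict items with a hypothetical required 'title' field (class name and field are illustrative):

class RequiredFieldMiddleware(object):
	# Drops output items that lack the required field; requests pass through.
	def process_spider_output(self, response, result, spider):
		for obj in result:
			if isinstance(obj, dict) and not obj.get('title'):
				spider.logger.warning('dropped item without title from %s', response.url)
				continue
			yield obj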

   2. Downloader Middleware manages the following methods (see the sketch after this list):

   - process_request: called when a request passes through the downloader middleware.

   - process_response: handles the download result as it passes back through the middleware.

   - process_exception: called when an exception occurs during the download.
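
As referenced above, a minimal sketch tying the three hooks together, assuming two hypothetical proxy URLs (class name and proxies are illustrative):

class ProxyRetryMiddleware(object):
	PROXIES = ['http://proxy1.example.com:8080', 'http://proxy2.example.com:8080']

	def process_request(self, request, spider):
		# Route every request through the first proxy; returning None
		# lets the download proceed normally.
		request.meta.setdefault('proxy', self.PROXIES[0])
		return None

	def process_response(self, request, response, spider):
		# A response must be returned (or replaced) to continue the chain.
		return response

	def process_exception(self, request, exception, spider):
		# On a download error, retry once through the second proxy;
		# returning a Request stops the process_exception() chain.
		if request.meta.get('proxy') == self.PROXIES[0]:
			retry = request.replace(dont_filter=True)
			retry.meta['proxy'] = self.PROXIES[1]
			return retry
		return None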

      When writing middleware, think about which stage best suits the feature you want to implement, and write the corresponding method.

      Middleware can be used to process requests and results, or to coordinate behaviour with signals; it can also add project-specific features on top of an existing spider. The latter can also be achieved with an extension, and extensions are in fact more decoupled, so they are the recommended approach.
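
Since extensions are recommended here, a minimal sketch of one using Scrapy's standard signals API (class name and log messages are illustrative):

from scrapy import signals

class SpiderLogExtension(object):
	# Logs when each spider opens and closes, without touching requests.
	def __init__(self, crawler):
		crawler.signals.connect(self.spider_opened, signal=signals.spider_opened)
		crawler.signals.connect(self.spider_closed, signal=signals.spider_closed)

	@classmethod
	def from_crawler(cls, crawler):
		return cls(crawler)

	def spider_opened(self, spider):
		spider.logger.info('spider %s opened', spider.name)

	def spider_closed(self, spider, reason):
		spider.logger.info('spider %s closed (%s)', spider.name, reason)

It would be enabled in settings.py via EXTENSIONS = {'myproject.extensions.SpiderLogExtension': 500} (module path hypothetical).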

Code examples

Downloader middleware example

from scrapy.http import HtmlResponse
from scrapy.http import Request

class Md1(object):
	@classmethod
	def from_crawler(cls, crawler):
		# This method is used by Scrapy to create your middleware instance.
		s = cls()
		return s

	def process_request(self, request, spider):
		# Called for each request that goes through the downloader
		# middleware.

		# Must either:
		# - return None: continue processing this request
		# - or return a Response object
		# - or return a Request object
		# - or raise IgnoreRequest: process_exception() methods of
		#   installed downloader middleware will be called
		print('md1.process_request',request)
		# 1. Return a Response (short-circuits the download):
		# import requests
		# result = requests.get(request.url)
		# return HtmlResponse(url=request.url, status=200, headers=None, body=result.content)
		# 2. Return a Request (replaces the current request):
		# return Request('https://dig.chouti.com/r/tec/hot/1')

		# 3. Raise an exception:
		# from scrapy.exceptions import IgnoreRequest
		# raise IgnoreRequest

		# 4. Modify the request in place (the most common case):
		# request.headers['user-agent'] = "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"


	def process_response(self, request, response, spider):
		# Called with the response returned from the downloader.

		# Must either:
		# - return a Response object
		# - return a Request object
		# - or raise IgnoreRequest
		print('m1.process_response',request,response)
		return response

	def process_exception(self, request, exception, spider):
		# Called when a download handler or a process_request()
		# (from other downloader middleware) raises an exception.

		# Must either:
		# - return None: continue processing this exception
		# - return a Response object: stops process_exception() chain
		# - return a Request object: stops process_exception() chain
		pass

Configuration

DOWNLOADER_MIDDLEWARES = {
   #'xdb.middlewares.XdbDownloaderMiddleware': 543,
	# 'xdb.proxy.XdbProxyMiddleware':751,
	'xdb.md.Md1':666,
	'xdb.md.Md2':667,
}
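
A note on priorities: in DOWNLOADER_MIDDLEWARES, lower numbers sit closer to the engine, so process_request runs in ascending priority order while process_response and process_exception run in descending order.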

Spider middleware example

Define the class

class Sd1(object):
	# Not all methods need to be defined. If a method is not defined,
	# scrapy acts as if the spider middleware does not modify the
	# passed objects.

	@classmethod
	def from_crawler(cls, crawler):
		# This method is used by Scrapy to create your middleware instance.
		s = cls()
		return s

	def process_spider_input(self, response, spider):
		# Called for each response that goes through the spider
		# middleware and into the spider.

		# Should return None or raise an exception.
		return None

	def process_spider_output(self, response, result, spider):
		# Called with the results returned from the Spider, after
		# it has processed the response.

		# Must return an iterable of Request, dict or Item objects.
		for i in result:
			yield i

	def process_spider_exception(self, response, exception, spider):
		# Called when a spider or process_spider_input() method
		# (from other spider middleware) raises an exception.

		# Should return either None or an iterable of Response, dict
		# or Item objects.
		pass

	# Executed only once, when the spider starts.
	def process_start_requests(self, start_requests, spider):
		# Called with the start requests of the spider, and works
		# similarly to the process_spider_output() method, except
		# that it doesn't have a response associated.

		# Must return only requests (not items).
		for r in start_requests:
			yield r

Configuration

SPIDER_MIDDLEWARES = {
   # 'xdb.middlewares.XdbSpiderMiddleware': 543,
	'xdb.sd.Sd1': 666,
	'xdb.sd.Sd2': 667,
}
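
The same priority rule applies to SPIDER_MIDDLEWARES: lower numbers are closer to the engine, so process_spider_input runs in ascending order and process_spider_output in descending order.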