Web Crawlers - Middleware in Scrapy

Introduction to Middleware

1. The role of middleware

Throughout scrapy's run, middleware lets you hook into certain steps of the framework's operation and perform actions adapted to your own project.

For example, the built-in HttpErrorMiddleware can do some handling when an HTTP request fails.

 

2. How to use middleware

Configure settings.py. See the scrapy documentation for details: https://doc.scrapy.org
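For instance, a middleware is enabled by adding its import path and an order number to settings.py; the path below is a hypothetical placeholder:

DOWNLOADER_MIDDLEWARES = {
    # hypothetical path; the number sets its position in the chain
    'myproject.middlewares.MyCustomMiddleware': 543,
}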

 

Types of Middleware

In theory scrapy has three kinds of middleware (Scheduler Middleware, Spider Middleware, Downloader Middleware); in practice, the following two are generally used:

1. Spider Middleware

   Its main role is to do some processing while the spider is running.

2. Downloader Middleware

   Its main role is to do some processing when pages are being downloaded, after requests have been sent.

 

Middleware Methods

1. Spider Middleware manages the following methods:

- process_spider_input: receives a response object and processes it.

  Its position is Downloader --> process_spider_input --> Spiders (Downloader and Spiders are components in the official scrapy architecture diagram).

- process_spider_exception: called when the spider raises an exception.

- process_spider_output: called when the Spider returns a result after processing a response.

- process_start_requests: called when the spider issues its start requests.

  Its position is Spiders --> process_start_requests --> Scrapy Engine (Scrapy Engine is a component in the official scrapy architecture diagram).

2. Downloader Middleware manages the following methods:

- process_request: called as each request passes through the downloader middleware.

- process_response: handles each download result as it passes back through the middleware.

- process_exception: called when an exception occurs during the download.

When writing middleware, think about which stage best fits the functionality you want to implement, then write the corresponding method; see the sketch below.
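For instance, filtering responses by HTTP status fits naturally in process_spider_input. Below is a minimal sketch in the spirit of the built-in HttpErrorMiddleware mentioned in the introduction; it is a simplified illustration, not the real implementation:

from scrapy.exceptions import IgnoreRequest

class SimpleHttpErrorMiddleware(object):
    # Simplified illustration only; scrapy's real HttpErrorMiddleware
    # is more nuanced (it respects handle_httpstatus_list, etc.).
    def process_spider_input(self, response, spider):
        # Returning None lets the response continue into the spider;
        # raising an exception stops it and hands control to the
        # process_spider_exception chain.
        if 200 <= response.status < 300:
            return None
        raise IgnoreRequest('non-2xx response: %s' % response)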

Middleware can be used to process requests, process results, or coordinate behaviour with signals. It can also add project-specific features on top of an existing crawler, but the same goal can be achieved by writing an extension; extensions are in fact more decoupled, so they are the recommended approach. A minimal extension sketch follows.
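A minimal sketch of such an extension (the class name and page counter are assumptions for illustration): it connects to scrapy's signals in from_crawler, counts downloaded responses, and logs the total when the spider closes, without touching the middleware chain at all.

from scrapy import signals

class PageCountExtension(object):
    # Hypothetical extension: counts responses via signals instead of
    # hooking the middleware chain.
    def __init__(self):
        self.pages = 0

    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        crawler.signals.connect(ext.response_received,
                                signal=signals.response_received)
        crawler.signals.connect(ext.spider_closed,
                                signal=signals.spider_closed)
        return ext

    def response_received(self, response, request, spider):
        self.pages += 1

    def spider_closed(self, spider):
        spider.logger.info('crawled %d pages', self.pages)

It would be enabled through the EXTENSIONS setting rather than the middleware settings, e.g. EXTENSIONS = {'xdb.extensions.PageCountExtension': 500} (path hypothetical).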

Code Examples

Downloader middleware code example

from scrapy.http import HtmlResponse
from scrapy.http import Request


class Md1(object):
    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.

        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        print('md1.process_request', request)
        # 1. Return a Response
        # import requests
        # result = requests.get(request.url)
        # return HtmlResponse(url=request.url, status=200, headers=None, body=result.content)

        # 2. Return a Request
        # return Request('https://dig.chouti.com/r/tec/hot/1')

        # 3. Raise an exception
        # from scrapy.exceptions import IgnoreRequest
        # raise IgnoreRequest

        # 4. Modify the request in place (*)
        # request.headers['user-agent'] = "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.

        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        print('md1.process_response', request, response)
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass
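The configuration below also registers an Md2, which is not shown in the original; a minimal companion sketch (assumed here) that only logs what passes through:

class Md2(object):
    # Hypothetical second middleware. Registered after Md1 (667 > 666),
    # so its process_request runs after Md1's and its process_response
    # runs before Md1's.
    def process_request(self, request, spider):
        print('md2.process_request', request)

    def process_response(self, request, response, spider):
        print('md2.process_response', request, response)
        return response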

Configuration

DOWNLOADER_MIDDLEWARES = {
    # 'xdb.middlewares.XdbDownloaderMiddleware': 543,
    # 'xdb.proxy.XdbProxyMiddleware': 751,
    'xdb.md.Md1': 666,
    'xdb.md.Md2': 667,
}
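The commented-out 'xdb.proxy.XdbProxyMiddleware' above is not shown in the original; a minimal sketch of what such a proxy middleware might look like (the proxy addresses are placeholders):

import random

class XdbProxyMiddleware(object):
    # Placeholder proxy list; replace with real proxies.
    PROXIES = [
        'http://127.0.0.1:8888',
        'http://127.0.0.1:8889',
    ]

    def process_request(self, request, spider):
        # Setting request.meta['proxy'] tells scrapy's downloader to
        # route this request through the given proxy.
        request.meta['proxy'] = random.choice(self.PROXIES)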

Spider middleware code example

Writing the class

class Sd1(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.

        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    # Runs only once, when the spider starts.
    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.

        # Must return only requests (not items).
        for r in start_requests:
            yield r
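The configuration below also registers an Sd2, not shown in the original; a minimal companion sketch (assumed here) that uses process_start_requests to tag every start request:

class Sd2(object):
    # Hypothetical second spider middleware, implementing only
    # process_start_requests.
    def process_start_requests(self, start_requests, spider):
        for r in start_requests:
            r.meta['from_start'] = True  # mark requests originating from start_urls
            yield r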

Configuration

SPIDER_MIDDLEWARES = {
    # 'xdb.middlewares.XdbSpiderMiddleware': 543,
    'xdb.sd.Sd1': 666,
    'xdb.sd.Sd2': 667,
}
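A note on the order numbers: scrapy merges these dicts with its built-in middleware settings, and lower values sit closer to the engine while higher values sit closer to the downloader (for DOWNLOADER_MIDDLEWARES) or the spider (for SPIDER_MIDDLEWARES). Concretely, process_request methods run in ascending order of these values and process_response methods in descending order, so Md1 (666) sees requests before Md2 (667) and responses after it.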