Web crawling with the Scrapy framework

1 Introduction

    Scrapy is an open-source, collaborative framework. It was originally designed for page scraping (more precisely, web scraping), and it lets you extract the data you need from websites in a fast, simple, and extensible way. Today Scrapy is used far more widely: for data mining, monitoring, and automated testing, as well as for consuming data returned by APIs (such as Amazon Associates Web Services) or for building general-purpose web crawlers.

    Scrapy is built on top of Twisted, a popular event-driven Python networking framework, so Scrapy uses non-blocking (i.e. asynchronous) code to achieve concurrency. The overall architecture is roughly as follows.

The data flow in Scrapy is controlled by the execution engine, and goes like this:

  1. The Engine gets the initial Requests to crawl from the Spider.
  2. The Engine schedules the Requests in the Scheduler and asks for the next Requests to crawl.
  3. The Scheduler returns the next Requests to the Engine.
  4. The Engine sends the Requests to the Downloader, passing through the Downloader Middlewares (see process_request()).
  5. Once the page finishes downloading the Downloader generates a Response (with that page) and sends it to the Engine, passing through the Downloader Middlewares (see process_response()).
  6. The Engine receives the Response from the Downloader and sends it to the Spider for processing, passing through the Spider Middleware (see process_spider_input()).
  7. The Spider processes the Response and returns scraped items and new Requests (to follow) to the Engine, passing through the Spider Middleware (see process_spider_output()).
  8. The Engine sends processed items to Item Pipelines, then sends the processed Requests to the Scheduler and asks for possible next Requests to crawl.
  9. The process repeats (from step 1) until there are no more requests from the Scheduler.

 

Components:

  1. Engine (ENGINE)

    The engine is responsible for controlling the data flow between all components of the system, and for triggering events when certain actions occur. See the data flow section above for details.

  2. Scheduler (SCHEDULER)
    Accepts requests handed over by the engine, pushes them onto a queue, and returns them when the engine asks again. Think of it as a priority queue of URLs: it decides which URL is crawled next, and it also removes duplicate URLs.
  3. Downloader (DOWNLOADER)
    Downloads page content and hands it back to the ENGINE. The downloader is built on Twisted, an efficient asynchronous model.
  4. Spiders (SPIDERS)
    SPIDERS are developer-defined classes that parse responses, extract items, and issue new requests.
  5. Item Pipelines (ITEM PIPELINES)
    Responsible for processing items after they are extracted: cleaning, validation, persistence (e.g. saving to a database), and so on.
  6. Downloader Middlewares
    Sit between the Scrapy engine and the downloader. They process the requests passed from the ENGINE to the DOWNLOADER and the responses passed from the DOWNLOADER back to the ENGINE. You can use this middleware to:
    1. process a request just before it is sent to the Downloader (i.e. right before Scrapy sends the request to the website);
    2. change received response before passing it to a spider;
    3. send a new Request instead of passing received response to a spider;
    4. pass response to a spider without fetching a web page;
    5. silently drop some requests.
  7. Spider Middlewares
    Sit between the ENGINE and the SPIDERS. Their main job is to process the spiders' input (responses) and output (requests).

Official docs: https://docs.scrapy.org/en/latest/topics/architecture.html

2 Installation

#Windows
    1. pip3 install wheel  #so that packages can be installed from .whl files; wheel downloads: https://www.lfd.uci.edu/~gohlke/pythonlibs
    2. pip3 install lxml
    3. pip3 install pyopenssl
    4. Download and install pywin32: https://sourceforge.net/projects/pywin32/files/pywin32/
    5. Download the Twisted wheel: http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted
    6. pip3 install <download dir>\Twisted-17.9.0-cp36-cp36m-win_amd64.whl
    7. pip3 install scrapy

#Linux
    1. pip3 install scrapy

3 The command-line tool

#1 View help
    scrapy -h
    scrapy <command> -h

#2 There are two kinds of commands: Project-only commands must be run from inside a project directory, Global commands do not.
    Global commands:
        startproject #create a project
        genspider    #create a spider
        settings     #if run inside a project directory, shows that project's settings
        runspider    #run a standalone python spider file, no project required
        shell        #scrapy shell <url>  interactive debugging, e.g. to check whether selector rules are correct
        fetch        #fetch a single page independently of any project; useful for inspecting the request headers
        view         #download the page and open it in a browser, which helps spot which data is loaded via ajax
        version      #scrapy version shows scrapy's version; scrapy version -v also shows the versions of its dependencies
    Project-only commands:
        crawl        #run a spider; requires a project, and make sure ROBOTSTXT_OBEY = False in the settings file
        check        #check the project for errors
        list         #list the spiders contained in the project
        edit         #open the spider in an editor; rarely used
        parse        #scrapy parse <url> --callback <callback>  useful for verifying that a callback works as expected
        bench        #scrapy bench runs a quick benchmark

#3 Official docs
    https://docs.scrapy.org/en/latest/topics/commands.html
#1. Running global commands: make sure you are NOT inside a project directory, so you are not affected by that project's settings
scrapy startproject MyProject

cd MyProject
scrapy genspider baidu www.baidu.com

scrapy settings --get XXX #if run inside the project directory, this shows that project's settings

scrapy runspider baidu.py

scrapy shell https://www.baidu.com
    response
    response.status
    response.body
    view(response)

scrapy view https://www.taobao.com #if the page shows up incomplete, the missing content is loaded via ajax; a quick way to spot that

scrapy fetch --nolog --headers https://www.taobao.com

scrapy version #scrapy's version

scrapy version -v #versions of the dependencies

#2. Running project commands: switch into the project directory first
scrapy crawl baidu
scrapy check
scrapy list
scrapy parse http://quotes.toscrape.com/ --callback parse
scrapy bench
    


4 Project structure and a brief intro to spider applications

project_name/
   scrapy.cfg
   project_name/
       __init__.py
       items.py
       pipelines.py
       settings.py
       spiders/
           __init__.py
           spider1.py
           spider2.py
           spider3.py

File descriptions:

  • scrapy.cfg   the project's main configuration, used when deploying scrapy; crawler-related settings live in settings.py.
  • items.py     defines the data models used to structure scraped data, similar to Django's Model
  • pipelines.py defines how data is processed, e.g. persisting structured data
  • settings.py  the configuration file, e.g. recursion depth, concurrency, download delay. Note: option names must be UPPERCASE or they are ignored; the correct form is USER_AGENT='xxxx'
  • spiders/     the spider directory, where you create spider files and write the crawling rules

Note: spider files are usually named after the target site's domain.

#To launch the project from an IDE, create entrypoint.py in the project root:
from scrapy.cmdline import execute
execute(['scrapy', 'crawl', 'xiaohua'])

#Windows console encoding fix (if console output comes out garbled):
import sys, io
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')

5 Spiders

1. Introduction

#1. Spiders are classes that define how a site (or group of sites) will be crawled: how to perform the crawl and how to extract structured data from the pages.

#2. In other words, a Spider is where you customize the crawling and parsing behavior for a particular site or group of sites.

2. A Spider cycles through the following steps

#1. Generate the initial Requests to crawl the first URLs, and specify a callback for them.
The first requests are defined in the start_requests() method, which by default builds Requests from the URLs in the start_urls list, with the parse method as the default callback. The callback is triggered automatically when the download finishes and a response comes back.

#2. In the callback, parse the response and return values.
The return value can be one of four things:
        a dict containing the parsed data
        an Item object
        a new Request object (new Requests also need a callback)
        or an iterable of Items and/or Requests

#3. Parse the page content inside the callback.
Usually with Scrapy's built-in Selectors, but you can just as well use BeautifulSoup, lxml, or whatever else you prefer.

#4. Finally, the returned Items are typically persisted to a database
via the Item Pipeline (https://docs.scrapy.org/en/latest/topics/item-pipeline.html#topics-item-pipeline)
or exported to files via Feed exports (https://docs.scrapy.org/en/latest/topics/feed-exports.html#topics-feed-exports); a minimal sketch of the whole loop follows.
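
To make the loop concrete, here is a minimal sketch (the site and the selectors are illustrative assumptions; quotes.toscrape.com is just a public practice site, not part of the original project) of a callback that yields both data and follow-up Requests:

```
import scrapy

class QuotesSketchSpider(scrapy.Spider):
    name = 'quotes_sketch'
    start_urls = ['http://quotes.toscrape.com/']  # hypothetical example site

    def parse(self, response):
        # yield plain dicts (or Items) that flow into the pipelines ...
        for quote in response.css('div.quote'):
            yield {'text': quote.css('span.text::text').extract_first()}
        # ... and yield new Requests to keep the loop going
        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page:
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse)
```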

3. Spiders ships with five generic classes:

#1. scrapy.spiders.Spider #scrapy.Spider is an alias for scrapy.spiders.Spider
#2. scrapy.spiders.CrawlSpider
#3. scrapy.spiders.XMLFeedSpider
#4. scrapy.spiders.CSVFeedSpider
#5. scrapy.spiders.SitemapSpider

4. Importing and using them

# -*- coding: utf-8 -*-
import scrapy
from scrapy.spiders import Spider,CrawlSpider,XMLFeedSpider,CSVFeedSpider,SitemapSpider

class AmazonSpider(scrapy.Spider): #a custom class inheriting from one of the Spider base classes
    name = 'amazon'
    allowed_domains = ['www.amazon.cn']
    start_urls = ['http://www.amazon.cn/']

    def parse(self, response):
        pass

5. class scrapy.spiders.Spider

This is the simplest spider class; every other spider (including the ones you define yourself) must inherit from it.

It provides no special functionality: just a default start_requests method that reads the URLs in start_urls and sends requests for them, with parse as the default callback.

class AmazonSpider(scrapy.Spider):
    name = 'amazon' 

```
allowed_domains = ['www.amazon.cn'] 

start_urls = ['http://www.amazon.cn/']

custom_settings = {
    'BOT_NAME' : 'Egon_Spider_Amazon',
    'REQUEST_HEADERS' : {
      'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
      'Accept-Language': 'en',
    }
}

def parse(self, response):
    pass
```
#1. name = 'amazon'
Defines the spider name; scrapy uses it to locate the spider program.
It is required and must be unique (in Python 2 this must be ASCII only).

#2. allowed_domains = ['www.amazon.cn']
Defines the domains the spider is allowed to crawl. If OffsiteMiddleware is enabled (it is by default),
domains not in this list, and their subdomains, will not be crawled.
If you are crawling https://www.example.com/1.html, add 'example.com' to the list.

#3. start_urls = ['http://www.amazon.cn/']
If no URLs are specified explicitly, the first requests are generated from this list.

#4. custom_settings
A dict of settings that override the project-wide settings while this spider runs.
It must be defined as a class attribute, because settings are loaded before the class is instantiated.

#5. settings
self.settings['SETTING_NAME'] gives access to the values in settings.py; if you defined custom_settings, those take precedence.

#6. logger
A logger named after the spider:
self.logger.debug('=============>%s' %self.settings['BOT_NAME'])

#7. crawler (background)
This attribute is set in the from_crawler class method.

#8. from_crawler(crawler, *args, **kwargs) (background)
You probably won’t need to override this directly  because the default implementation acts as a proxy to the __init__() method, calling it with the given arguments args and named arguments kwargs.

#9. start_requests()
Generates the first Requests and must return an iterable. Scrapy calls it exactly once, when the spider is opened.
By default it takes each url from start_urls and yields Request(url, dont_filter=True).

#On the dont_filter argument, see the custom de-duplication rules below; the sketch that follows shows where it comes from.
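
The default behaviour is roughly equivalent to the following sketch (a simplification written for illustration only, not the actual Scrapy source):

```
import scrapy

class DefaultStartRequestsSketch(scrapy.Spider):
    name = 'default_start_requests_sketch'
    start_urls = ['http://www.example.com/']

    def start_requests(self):
        # roughly what scrapy.Spider.start_requests does by default:
        # one GET Request per start_url, with de-duplication disabled
        for url in self.start_urls:
            yield scrapy.Request(url, dont_filter=True)
```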

If you want to change the initial Requests, override this method. For example, to start by sending a POST request:
class MySpider(scrapy.Spider):
    name = 'myspider'

```
def start_requests(self):
    return [scrapy.FormRequest("http://www.example.com/login",
                               formdata={'user': 'john', 'pass': 'secret'},
                               callback=self.logged_in)]

def logged_in(self, response):
    # here you would extract links to follow and return Requests for
    # each of them, with another callback
    pass
```

#10. parse(response)
The default callback. Every callback must return an iterable of Request and/or dicts or Item objects.

#11. log(message[, level, component]) (background)
Wrapper that sends a log message through the Spider’s logger, kept for backwards compatibility. For more information see Logging from Spiders.

#12. closed(reason)
Called automatically when the spider finishes (see the small sketch below).
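
A small illustrative sketch of closed() (a hypothetical spider, not taken from the original text); it is a convenient place for teardown or a final log line:

```
import scrapy

class CleanupSketchSpider(scrapy.Spider):
    name = 'cleanup_sketch'
    start_urls = ['http://www.example.com/']

    def parse(self, response):
        self.logger.info('parsed %s', response.url)

    def closed(self, reason):
        # reason is e.g. 'finished', 'cancelled' or 'shutdown'
        self.logger.info('spider closed, reason: %s', reason)
```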
De-duplication rules should be shared across spiders: once one spider has crawled a URL, no other spider should crawl it again. This can be implemented as follows.

#Method one:
1. Add a class attribute:
visited = set() #class attribute

2. Inside the parse callback:
def parse(self, response):
    if response.url in self.visited:
        return None
    .......

    self.visited.add(response.url)

#Improvement on method one: URLs can be long, so store a hash of the URL instead
from hashlib import md5

def parse(self, response):
    url = md5(response.request.url.encode('utf-8')).hexdigest()
    if url in self.visited:
        return None
    .......

    self.visited.add(url)

#Method two: Scrapy's built-in de-duplication
In the settings file:
DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter' #the default dedup rule; by default the seen set is kept in memory
DUPEFILTER_DEBUG = False
JOBDIR = "directory for persisting the crawl state, e.g.: /root/"  # the seen requests end up in /root/requests.seen, so the dedup data is kept on disk

Scrapy's built-in dedup rule is RFPDupeFilter by default; all we need to do is pass
Request(..., dont_filter=False). If dont_filter=True, that request is excluded from de-duplication.
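
For example (an illustrative snippet; the /refresh URL is made up), a URL you deliberately want to fetch again can opt out of the filter with dont_filter=True:

```
import scrapy

class RefreshSketchSpider(scrapy.Spider):
    name = 'refresh_sketch'
    start_urls = ['http://www.example.com/']

    def parse(self, response):
        # normal links go through the dupe filter (dont_filter defaults to False)
        for href in response.xpath('//a/@href').extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse)
        # this request is enqueued even if the URL has been seen before
        yield scrapy.Request('http://www.example.com/refresh',
                             callback=self.parse_refresh, dont_filter=True)

    def parse_refresh(self, response):
        pass
```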

#Method three:
We can also write our own dedup rule, modeled on RFPDupeFilter:

from scrapy.dupefilter import RFPDupeFilter  #read its source and model your class on BaseDupeFilter

#Step one: create a custom dedup module in the project directory, e.g. dup.py
class UrlFilter(object):
    def __init__(self):
        self.visited = set() #or keep this in a database

    @classmethod
    def from_settings(cls, settings):
        return cls()

    def request_seen(self, request):
        if request.url in self.visited:
            return True
        self.visited.add(request.url)
        return False

    def open(self):  # can return deferred
        pass

    def close(self, reason):  # can return a deferred
        pass

    def log(self, request, spider):  # log that a request has been filtered
        pass

#Step two: in settings.py:
DUPEFILTER_CLASS = 'project_name.dup.UrlFilter'

# Source walkthrough:
from scrapy.core.scheduler import Scheduler
See Scheduler.enqueue_request, which calls self.df.request_seen(request) to decide whether a request is a duplicate.
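
A quick way to see what "seen" means here (an exploratory snippet, assuming the request_fingerprint helper available in classic Scrapy versions):

```
from scrapy.http import Request
from scrapy.utils.request import request_fingerprint

# RFPDupeFilter stores request fingerprints, not raw URLs; the fingerprint
# is a hash of the method, the canonicalized URL and the body, so these two
# requests count as duplicates even though the query order differs.
r1 = Request('http://www.example.com/?a=1&b=2')
r2 = Request('http://www.example.com/?b=2&a=1')
print(request_fingerprint(r1) == request_fingerprint(r2))  # True
```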

#Example 1:
import scrapy

class MySpider(scrapy.Spider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = [
        'http://www.example.com/1.html',
        'http://www.example.com/2.html',
        'http://www.example.com/3.html',
    ]

```
def parse(self, response):
    self.logger.info('A response from %s just arrived!', response.url)
```

​    
#Example 2: a single callback returning multiple Requests and Items
import scrapy

class MySpider(scrapy.Spider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = [
        'http://www.example.com/1.html',
        'http://www.example.com/2.html',
        'http://www.example.com/3.html',
    ]

```
def parse(self, response):
    for h3 in response.xpath('//h3').extract():
        yield {"title": h3}

    for url in response.xpath('//a/@href').extract():
        yield scrapy.Request(url, callback=self.parse)
```

​            
#Example 3: specify the start URLs directly in start_requests(); start_urls is then ignored

import scrapy
from myproject.items import MyItem

class MySpider(scrapy.Spider):
    name = 'example.com'
    allowed_domains = ['example.com']

```
def start_requests(self):
    yield scrapy.Request('http://www.example.com/1.html', self.parse)
    yield scrapy.Request('http://www.example.com/2.html', self.parse)
    yield scrapy.Request('http://www.example.com/3.html', self.parse)

def parse(self, response):
    for h3 in response.xpath('//h3').extract():
        yield MyItem(title=h3)

    for url in response.xpath('//a/@href').extract():
        yield scrapy.Request(url, callback=self.parse)
```
You may need to pass arguments to the spider from the command line, e.g. an initial URL, like this:
#run from the command line
scrapy crawl myspider -a category=electronics

#the external arguments can be received in the __init__ method
import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'

```
def __init__(self, category=None, *args, **kwargs):
    super(MySpider, self).__init__(*args, **kwargs)
    self.start_urls = ['http://www.example.com/categories/%s' % category]
    #...

```

​        
#Note: all received arguments are strings; if you want structured data, parse them with something like json.loads
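
A small sketch of that conversion (illustrative only; the start_urls argument name is an assumption):

```
import json
import scrapy

class ArgSketchSpider(scrapy.Spider):
    name = 'arg_sketch'

    def __init__(self, start_urls='[]', *args, **kwargs):
        super(ArgSketchSpider, self).__init__(*args, **kwargs)
        # -a start_urls='["http://www.example.com/1.html"]' arrives as a plain string,
        # so decode it into a real list here
        self.start_urls = json.loads(start_urls)

    def parse(self, response):
        pass
```

Run it with: scrapy crawl arg_sketch -a start_urls='["http://www.example.com/1.html"]'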

6. Other generic Spiders: https://docs.scrapy.org/en/latest/topics/spiders.html#generic-spiders

6 Selectors

#1. // vs /
#2. text
#3. extract and extract_first: pull the content out of the selector objects
#4. attributes: prefix the attribute name with @ in xpath
#5. nested lookups
#6. default values
#7. lookup by attribute
#8. fuzzy lookup by attribute
#9. regular expressions
#10. relative xpath
#11. xpath with variables
response.selector.css()
response.selector.xpath()
can be shortened to
response.css()
response.xpath()
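
If you want to experiment with these selectors outside a running spider, a small standalone sketch (with made-up HTML) works too:

```
from scrapy.selector import Selector

html = '<div id="images"><a href="image1.html">Name: My image 1<br><img src="image1_thumb.jpg"></a></div>'
sel = Selector(text=html)
print(sel.xpath('//a/@href').extract_first())       # image1.html
print(sel.css('a::text').re_first(r'Name: (.*)'))   # My image 1
```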

#1. // vs /
response.xpath('//body/a')#
response.css('div a::text')

>>> response.xpath('//body/a') #the leading // searches the whole document; the / after body selects only body's direct children
[]
>>> response.xpath('//body//a') #the leading // searches the whole document; the // after body selects all of body's descendants
[<Selector xpath='//body//a' data='<a href="image1.html">Name: My image 1 <'>, <Selector xpath='//body//a' data='<a href="image2.html">Name: My image 2 <'>, <Selector xpath='//body//a' data='<a href="
image3.html">Name: My image 3 <'>, <Selector xpath='//body//a' data='<a href="image4.html">Name: My image 4 <'>, <Selector xpath='//body//a' data='<a href="image5.html">Name: My image 5 <'>]

#2 text
>>> response.xpath('//body//a/text()')
>>> response.css('body a::text')

#3. extract and extract_first: pull the content out of the selector objects
>>> response.xpath('//div/a/text()').extract()
['Name: My image 1 ', 'Name: My image 2 ', 'Name: My image 3 ', 'Name: My image 4 ', 'Name: My image 5 ']
>>> response.css('div a::text').extract()
['Name: My image 1 ', 'Name: My image 2 ', 'Name: My image 3 ', 'Name: My image 4 ', 'Name: My image 5 ']

>>> response.xpath('//div/a/text()').extract_first()
'Name: My image 1 '
>>> response.css('div a::text').extract_first()
'Name: My image 1 '

#4. attributes: prefix the attribute name with @ in xpath
>>> response.xpath('//div/a/@href').extract_first()
'image1.html'
>>> response.css('div a::attr(href)').extract_first()
'image1.html'

#5. nested lookups
>>> response.xpath('//div').css('a').xpath('@href').extract_first()
'image1.html'

#6. default values
>>> response.xpath('//div[@id="xxx"]').extract_first(default="not found")
'not found'

#7. lookup by attribute
response.xpath('//div[@id="images"]/a[@href="image3.html"]/text()').extract()
response.css('#images a[href="image3.html"]::text').extract()

#8. fuzzy lookup by attribute (substring match)
response.xpath('//a[contains(@href,"image")]/@href').extract()
response.css('a[href*="image"]::attr(href)').extract()

response.xpath('//a[contains(@href,"image")]/img/@src').extract()
response.css('a[href*="imag"] img::attr(src)').extract()

response.xpath('//*[@href="image1.html"]')
response.css('*[href="image1.html"]')

#9. regular expressions
response.xpath('//a/text()').re(r'Name: (.*)')
response.xpath('//a/text()').re_first(r'Name: (.*)')

#10. relative xpath
>>> res=response.xpath('//a[contains(@href,"3")]')[0]
>>> res.xpath('img')
[<Selector xpath='img' data='<img src="image3_thumb.jpg">'>]
>>> res.xpath('./img')
[<Selector xpath='./img' data='<img src="image3_thumb.jpg">'>]
>>> res.xpath('.//img')
[<Selector xpath='.//img' data='<img src="image3_thumb.jpg">'>]
>>> res.xpath('//img') #this scans from the document root again
[<Selector xpath='//img' data='<img src="image1_thumb.jpg">'>, <Selector xpath='//img' data='<img src="image2_thumb.jpg">'>, <Selector xpath='//img' data='<img src="image3_thumb.jpg">'>, <Selector xpa
th='//img' data='<img src="image4_thumb.jpg">'>, <Selector xpath='//img' data='<img src="image5_thumb.jpg">'>]

#11. xpath with variables
>>> response.xpath('//div[@id=$xxx]/a/text()',xxx='images').extract_first()
'Name: My image 1 '
>>> response.xpath('//div[count(a)=$yyy]/@id',yyy=5).extract_first() #the id of the div that contains 5 a tags
'images'
案例展現

7 Items

https://docs.scrapy.org/en/latest/topics/items.html
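
The linked page covers the details; as a minimal sketch (the field names here are invented for illustration), an items.py usually just declares the fields the spider plans to extract:

```
import scrapy

class AmazonItem(scrapy.Item):
    # declare one Field per piece of data the spider extracts
    name = scrapy.Field()
    price = scrapy.Field()
    url = scrapy.Field()
```

A spider can then yield AmazonItem(name=..., price=..., url=...) instead of a plain dict, which gives field validation and a single place to see the data model.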

8 Item Pipeline

#Part 1: you can define multiple Pipeline classes
#1. Whatever the higher-priority pipeline's process_item returns (a value or None) is automatically passed to the next pipeline's process_item
#2. If you only want the first pipeline to run, have its process_item raise DropItem()

#3. You can check spider.name == 'spider name' to control which spiders go through which pipelines (see the sketch right below)
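
A minimal sketch of that per-spider gating (the pipeline and spider names are placeholders):

```
class SelectivePipeline(object):
    def process_item(self, item, spider):
        # only handle items coming from the 'amazon' spider; pass everything else through untouched
        if spider.name == 'amazon':
            # ... persist or transform the item here ...
            pass
        return item
```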

Part 2: a demo
from scrapy.exceptions import DropItem

class CustomPipeline(object):
    def __init__(self, v):
        self.value = v

    @classmethod
    def from_crawler(cls, crawler):
        """
        Scrapy first uses getattr to check whether we defined from_crawler;
        if so, it calls it to build the instance
        """
        val = crawler.settings.getint('MMMM')
        return cls(val)

    def open_spider(self, spider):
        """
        Runs once, when the spider starts
        """
        print('000000')

    def close_spider(self, spider):
        """
        Runs once, when the spider closes
        """
        print('111111')

    def process_item(self, item, spider):
        # process and persist the item

        # returning the item hands it on to the next pipeline
        return item

        # to drop the item so that no later pipeline sees it:
        # raise DropItem()
#1. settings.py
HOST="127.0.0.1"
PORT=27017
USER="root"
PWD="123"
DB="amazon"
TABLE="goods"



ITEM_PIPELINES = {
   'Amazon.pipelines.CustomPipeline': 200,
}

#2. pipelines.py
from pymongo import MongoClient

class CustomPipeline(object):
    def __init__(self, host, port, user, pwd, db, table):
        self.host = host
        self.port = port
        self.user = user
        self.pwd = pwd
        self.db = db
        self.table = table

    @classmethod
    def from_crawler(cls, crawler):
        """
        Scrapy first uses getattr to check whether we defined from_crawler;
        if so, it calls it to build the instance
        """
        HOST = crawler.settings.get('HOST')
        PORT = crawler.settings.get('PORT')
        USER = crawler.settings.get('USER')
        PWD = crawler.settings.get('PWD')
        DB = crawler.settings.get('DB')
        TABLE = crawler.settings.get('TABLE')
        return cls(HOST, PORT, USER, PWD, DB, TABLE)

    def open_spider(self, spider):
        """
        Runs once, when the spider starts
        """
        self.client = MongoClient('mongodb://%s:%s@%s:%s' % (self.user, self.pwd, self.host, self.port))

    def close_spider(self, spider):
        """
        Runs once, when the spider closes
        """
        self.client.close()

    def process_item(self, item, spider):
        # process and persist the item
        self.client[self.db][self.table].save(dict(item))
        return item
案例展現

https://docs.scrapy.org/en/latest/topics/item-pipeline.html

9 Downloader Middleware

What downloader middleware is for:
    1. In process_request you can perform the download yourself, bypassing scrapy's downloader
    2. Post-processing requests before they are sent, e.g.
        setting request headers
        setting cookies
        adding a proxy
            scrapy's built-in proxy component:
                from scrapy.downloadermiddlewares.httpproxy import HttpProxyMiddleware
                from urllib.request import getproxies
class DownMiddleware1(object):
    def process_request(self, request, spider):
        """
        Called for every request that needs to be downloaded, in every downloader middleware
        :param request:
        :param spider:
        :return:
            None: continue with the remaining middlewares and the download
            Response object: stop calling process_request and start calling process_response
            Request object: stop the middleware chain and reschedule the Request
            raise IgnoreRequest: stop calling process_request and start calling process_exception
        """
        pass

    def process_response(self, request, response, spider):
        """
        Called on the way back, after the download has finished
        :param response:
        :param result:
        :param spider:
        :return:
            Response object: handed on to the other middlewares' process_response
            Request object: stop the middleware chain; the request is rescheduled for download
            raise IgnoreRequest: Request.errback is called
        """
        print('response1')
        return response

    def process_exception(self, request, exception, spider):
        """
        Called when the download handler or process_request (in a downloader middleware) raises an exception
        :param response:
        :param exception:
        :param spider:
        :return:
            None: hand the exception to the next middleware
            Response object: stop calling the remaining process_exception methods
            Request object: stop the middleware chain; the request will be rescheduled for download
        """
        return None
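
As a concrete use of process_request (an illustrative sketch, not part of the original text; the USER_AGENT_LIST setting name is an assumption you would add to settings.py), a middleware that rotates the User-Agent header could look like this:

```
import random

class RandomUserAgentMiddleware(object):
    def __init__(self, user_agents):
        self.user_agents = user_agents

    @classmethod
    def from_crawler(cls, crawler):
        # USER_AGENT_LIST is a custom setting assumed to hold a list of UA strings
        return cls(crawler.settings.getlist('USER_AGENT_LIST'))

    def process_request(self, request, spider):
        if self.user_agents:
            request.headers['User-Agent'] = random.choice(self.user_agents)
        return None  # let the download continue normally
```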
#1. Create proxy_handle.py in the same directory as middlewares.py
import requests

def get_proxy():
    return requests.get("http://127.0.0.1:5010/get/").text

def delete_proxy(proxy):
    requests.get("http://127.0.0.1:5010/delete/?proxy={}".format(proxy))
    
    

#2. middlewares.py
from Amazon.proxy_handle import get_proxy,delete_proxy

class DownMiddleware1(object):
    def process_request(self, request, spider):
        """
        Called for every request that needs to be downloaded, in every downloader middleware
        :return:
            None: continue with the remaining middlewares and the download
            Response object: stop calling process_request and start calling process_response
            Request object: stop the middleware chain and reschedule the Request
            raise IgnoreRequest: stop calling process_request and start calling process_exception
        """
        proxy = "http://" + get_proxy()
        request.meta['download_timeout'] = 20
        request.meta["proxy"] = proxy
        print('adding proxy %s for %s ' % (proxy, request.url), end='')
        print('request.meta is', request.meta)

    def process_response(self, request, response, spider):
        """
        Called on the way back, after the download has finished
        :return:
            Response object: handed on to the other middlewares' process_response
            Request object: stop the middleware chain; the request is rescheduled for download
            raise IgnoreRequest: Request.errback is called
        """
        print('response status', response.status)
        return response

    def process_exception(self, request, exception, spider):
        """
        Called when the download handler or process_request (in a downloader middleware) raises an exception
        :return:
            None: hand the exception to the next middleware
            Response object: stop calling the remaining process_exception methods
            Request object: stop the middleware chain; the request will be rescheduled for download
        """
        print('proxy %s failed for %s: %s' % (request.meta['proxy'], request.url, exception))
        import time
        time.sleep(5)
        delete_proxy(request.meta['proxy'].split("//")[-1])
        request.meta['proxy'] = 'http://' + get_proxy()

        return request

10 Spider Middleware

1. The spider middleware methods

from scrapy import signals

class SpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened) #fires spider_opened when the spider starts
        return s

    def spider_opened(self, spider):
        # spider.logger.info('spider middleware 1 opened: %s' % spider.name)
        print('spider middleware 1 opened: %s' % spider.name)

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn’t have a response associated.

        # Must return only requests (not items).
        print('start_requests1')
        for r in start_requests:
            yield r

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.

        # Return value: Should return None or raise an exception.
        #1. None: the remaining middlewares' process_spider_input keep running
        #2. Raising an exception:
        #   the remaining process_spider_input methods are skipped
        #   and the errback bound to the request is triggered;
        #   the errback's return value is passed backwards through the middlewares' process_spider_output;
        #   if no errback is found, process_spider_exception runs backwards instead

        print("input1")
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, dict or Item objects.
        print('output1')

        # yielding several times is equivalent to returning the iterable once;
        # remember that a function containing yield returns a generator and does not run immediately,
        # so the generator form can easily mislead you about the execution order of the middlewares
        # for i in result:
        #     yield i
        return result

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Response, dict
        # or Item objects.
        print('exception1')

2. When the spider starts and the initial requests are generated

#Step one:
'''
Uncomment:
SPIDER_MIDDLEWARES = {
   'Baidu.middlewares.SpiderMiddleware1': 200,
   'Baidu.middlewares.SpiderMiddleware2': 300,
   'Baidu.middlewares.SpiderMiddleware3': 400,
}

'''

#Step two: middlewares.py
from scrapy import signals

class SpiderMiddleware1(object):
    @classmethod
    def from_crawler(cls, crawler):
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened) #fires spider_opened when the spider starts
        return s

    def spider_opened(self, spider):
        print('spider middleware 1 opened: %s' % spider.name)

    def process_start_requests(self, start_requests, spider):
        # Must return only requests (not items).
        print('start_requests1')
        for r in start_requests:
            yield r


class SpiderMiddleware2(object):
    @classmethod
    def from_crawler(cls, crawler):
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)  #fires spider_opened when the spider starts
        return s

    def spider_opened(self, spider):
        print('spider middleware 2 opened: %s' % spider.name)

    def process_start_requests(self, start_requests, spider):
        print('start_requests2')
        for r in start_requests:
            yield r


class SpiderMiddleware3(object):
    @classmethod
    def from_crawler(cls, crawler):
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)  #fires spider_opened when the spider starts
        return s

    def spider_opened(self, spider):
        print('spider middleware 3 opened: %s' % spider.name)

    def process_start_requests(self, start_requests, spider):
        print('start_requests3')
        for r in start_requests:
            yield r

#Step three: analysing the run
#1. As soon as the spider starts:

spider middleware 1 opened: baidu
spider middleware 2 opened: baidu
spider middleware 3 opened: baidu

#2. Then the initial request is generated and passes through spider middlewares 1, 2, 3 in turn:
start_requests1
start_requests2
start_requests3

3. When process_spider_input returns None

#Step one: uncomment:
SPIDER_MIDDLEWARES = {
   'Baidu.middlewares.SpiderMiddleware1': 200,
   'Baidu.middlewares.SpiderMiddleware2': 300,
   'Baidu.middlewares.SpiderMiddleware3': 400,
}

#Step two: middlewares.py
from scrapy import signals

class SpiderMiddleware1(object):

```
def process_spider_input(self, response, spider):
    print("input1")

def process_spider_output(self, response, result, spider):
    print('output1')
    return result

def process_spider_exception(self, response, exception, spider):
    print('exception1')

```

class SpiderMiddleware2(object):

```
def process_spider_input(self, response, spider):
    print("input2")
    return None

def process_spider_output(self, response, result, spider):
    print('output2')
    return result

def process_spider_exception(self, response, exception, spider):
    print('exception2')

```

class SpiderMiddleware3(object):

```
def process_spider_input(self, response, spider):
    print("input3")
    return None

def process_spider_output(self, response, result, spider):
    print('output3')
    return result

def process_spider_exception(self, response, exception, spider):
    print('exception3')

```

#Step three: analysing the run

#1. On the way in, the response passes through spider middlewares 1, 2, 3 in order
input1
input2
input3

#2. After the spider has processed it, the output passes back through middlewares 3, 2, 1
output3
output2
output1

4. When process_spider_input raises an exception

#Step one:
'''
Uncomment:
SPIDER_MIDDLEWARES = {
   'Baidu.middlewares.SpiderMiddleware1': 200,
   'Baidu.middlewares.SpiderMiddleware2': 300,
   'Baidu.middlewares.SpiderMiddleware3': 400,
}

'''

#Step two: middlewares.py

from scrapy import signals

class SpiderMiddleware1(object):

```
def process_spider_input(self, response, spider):
    print("input1")

def process_spider_output(self, response, result, spider):
    print('output1')
    return result

def process_spider_exception(self, response, exception, spider):
    print('exception1')

```

class SpiderMiddleware2(object):

```
def process_spider_input(self, response, spider):
    print("input2")
    raise TypeError('input2 raised an exception')

def process_spider_output(self, response, result, spider):
    print('output2')
    return result

def process_spider_exception(self, response, exception, spider):
    print('exception2')

```

class SpiderMiddleware3(object):

```
def process_spider_input(self, response, spider):
    print("input3")
    return None

def process_spider_output(self, response, result, spider):
    print('output3')
    return result

def process_spider_exception(self, response, exception, spider):
    print('exception3')

```

​        

#Run results
input1
input2
exception3
exception2
exception1

#Analysis:
#1. The response goes through middleware 1's process_spider_input, which returns None, so it is handed to middleware 2's process_spider_input
#2. Middleware 2's process_spider_input raises, so the remaining process_spider_input calls are skipped and the exception is passed to the errback bound to that request in the Spider
#3. No errback is found, so the response is handled by neither the normal callback nor an errback; the Spider does nothing with it, and process_spider_exception starts running in reverse order
#4. If process_spider_exception returns None, it declines responsibility and hands the exception to the next process_spider_exception; if they all return None, the exception is finally raised by the Engine

5. Specifying an errback

#Step one: spider.py
import scrapy



class BaiduSpider(scrapy.Spider):
    name = 'baidu'
    allowed_domains = ['www.baidu.com']
    start_urls = ['http://www.baidu.com/']

```
def start_requests(self):
    yield scrapy.Request(url='http://www.baidu.com/',
                         callback=self.parse,
                         errback=self.parse_err,
                         )

def parse(self, response):
    pass

def parse_err(self,res):
    #res is the failure object; the exception has been handled by this function, so it is not raised any further, and process_spider_output starts running
    return [1,2,3,4,5] #extract whatever is useful from the failure and return it as an iterable, which sits in the pipe waiting to be picked up by process_spider_output

```



#Step two:
'''
Uncomment:
SPIDER_MIDDLEWARES = {
   'Baidu.middlewares.SpiderMiddleware1': 200,
   'Baidu.middlewares.SpiderMiddleware2': 300,
   'Baidu.middlewares.SpiderMiddleware3': 400,
}

'''

#Step three: middlewares.py

from scrapy import signals

class SpiderMiddleware1(object):

```
def process_spider_input(self, response, spider):
    print("input1")

def process_spider_output(self, response, result, spider):
    print('output1',list(result))
    return result

def process_spider_exception(self, response, exception, spider):
    print('exception1')

```

class SpiderMiddleware2(object):

```
def process_spider_input(self, response, spider):
    print("input2")
    raise TypeError('input2 raised an exception')

def process_spider_output(self, response, result, spider):
    print('output2',list(result))
    return result

def process_spider_exception(self, response, exception, spider):
    print('exception2')

```

class SpiderMiddleware3(object):

```
def process_spider_input(self, response, spider):
    print("input3")
    return None

def process_spider_output(self, response, result, spider):
    print('output3',list(result))
    return result

def process_spider_exception(self, response, exception, spider):
    print('exception3')

```



#Step four: analysing the run
input1
input2
output3 [1, 2, 3, 4, 5] #the errback's return value is placed in the pipe and can only be consumed once; inside output3 you could build a new request based on the failure
output2 []
output1 []

11 Custom extensions

Custom extensions (similar to Django signals)
    1. Django signals are Django's built-in extension points: once a signal fires, the associated handler runs
    2. The advantage of scrapy's custom extensions is that you can hook functionality in wherever you like, whereas the other components only let you add behaviour at fixed places
#1. Create a file next to settings.py, e.g. extentions.py, with the following contents
from scrapy import signals



class MyExtension(object):
    def __init__(self, value):
        self.value = value

```
@classmethod
def from_crawler(cls, crawler):
    val = crawler.settings.getint('MMMM')
    obj = cls(val)

    crawler.signals.connect(obj.spider_opened, signal=signals.spider_opened)
    crawler.signals.connect(obj.spider_closed, signal=signals.spider_closed)

    return obj

def spider_opened(self, spider):
    print('=============>open')

def spider_closed(self, spider):
    print('=============>close')

```

#2. Enable it in the settings
EXTENSIONS = {
    "Amazon.extentions.MyExtension":200
}

12 settings.py

#==>Part 1: basic configuration<===
#1. Project name; the default USER_AGENT is built from it, and it is also used as the logger name
BOT_NAME = 'Amazon'

#2. Paths of the spider modules
SPIDER_MODULES = ['Amazon.spiders']
NEWSPIDER_MODULE = 'Amazon.spiders'

#3. Client User-Agent request header
#USER_AGENT = 'Amazon (+http://www.yourdomain.com)'

#4. Whether to obey the robots.txt rules
# Obey robots.txt rules
ROBOTSTXT_OBEY = False

#5. Whether to support cookies; cookies are handled via cookiejar, enabled by default
#COOKIES_ENABLED = False

#6. The Telnet console lets you inspect and control the running crawler: telnet ip port, then issue commands
#TELNETCONSOLE_ENABLED = False
#TELNETCONSOLE_HOST = '127.0.0.1'
#TELNETCONSOLE_PORT = [6023,]

#7. Default request headers Scrapy uses for HTTP requests
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}



#===>Part 2: concurrency and delays<===
#1. Maximum number of concurrent requests handled by the downloader overall, default 16
#CONCURRENT_REQUESTS = 32

#2. Maximum number of concurrent requests per domain, default 8
#CONCURRENT_REQUESTS_PER_DOMAIN = 16

#3. Maximum number of concurrent requests per IP, default 0 (unlimited). Two things to note:
#I. If non-zero, CONCURRENT_REQUESTS_PER_DOMAIN is ignored: the concurrency limit is applied per IP rather than per domain
#II. This setting also affects DOWNLOAD_DELAY: if non-zero, the download delay is enforced per IP rather than per domain
#CONCURRENT_REQUESTS_PER_IP = 16

#4. If auto-throttling is not enabled, this is a fixed delay in seconds between requests to the same site
#DOWNLOAD_DELAY = 3

#===>Part 3: smart throttling / the AutoThrottle extension<===
#1: Introduction
from scrapy.contrib.throttle import AutoThrottle #http://scrapy.readthedocs.io/en/latest/topics/autothrottle.html#topics-autothrottle
Design goals:
1. Be nicer to sites than the default zero download delay
2. Automatically adjust scrapy to the optimal crawling speed, so the user does not have to tune the download delay by hand; the user only sets the maximum concurrency allowed, and the extension does the rest

#2: How does it work?
In Scrapy, the download latency is measured as the time between establishing the TCP connection and receiving the HTTP headers.
Note that these latencies are hard to measure accurately in a cooperative multitasking environment, because Scrapy may be busy running spider callbacks or unable to download. Still, they are a reasonable measure of how busy Scrapy (and the target server) is, and the extension is built on that premise.

#3: The throttling algorithm
The auto-throttle algorithm adjusts the download delay using these rules:
#1. Spiders start with a download delay of AUTOTHROTTLE_START_DELAY
#2. When a response is received, the target delay for the site = latency of that response / AUTOTHROTTLE_TARGET_CONCURRENCY
#3. The download delay for the next request is set to the average of the target delay and the previous download delay
#4. Latencies of non-200 responses are not allowed to decrease the delay
#5. The download delay can never go below DOWNLOAD_DELAY or above AUTOTHROTTLE_MAX_DELAY

#4: Configuration
#enable it (True); default is False
AUTOTHROTTLE_ENABLED = True
#initial delay
AUTOTHROTTLE_START_DELAY = 5
#minimum delay
DOWNLOAD_DELAY = 3
#maximum delay
AUTOTHROTTLE_MAX_DELAY = 10
#average number of concurrent requests per second; must not exceed CONCURRENT_REQUESTS_PER_DOMAIN or CONCURRENT_REQUESTS_PER_IP. Raising it increases throughput and hammers the target site harder; lowering it is more polite
#at any given moment the actual concurrency may be above or below this value; it is a target the crawler tries to reach, not a hard limit
AUTOTHROTTLE_TARGET_CONCURRENCY = 16.0
#debugging
AUTOTHROTTLE_DEBUG = True
CONCURRENT_REQUESTS_PER_DOMAIN = 16
CONCURRENT_REQUESTS_PER_IP = 16
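
A tiny sketch of rules 2-3 above, with purely illustrative numbers:

```
# assume the previous delay was 2.0s, the last response took 0.8s to arrive,
# and AUTOTHROTTLE_TARGET_CONCURRENCY = 2.0
prev_delay = 2.0
latency = 0.8
target_concurrency = 2.0

target_delay = latency / target_concurrency      # 0.4
next_delay = (target_delay + prev_delay) / 2.0   # 1.2, then clamped to [DOWNLOAD_DELAY, AUTOTHROTTLE_MAX_DELAY]
print(next_delay)
```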



#===>Part 4: crawl depth and crawl order<===
#1. Maximum depth the spider may crawl; the current depth can be inspected via meta; 0 means unlimited
# DEPTH_LIMIT = 3

#2. Crawl order: 0 means depth-first (LIFO, the default); 1 means breadth-first (FIFO)

# last in, first out: depth-first
# DEPTH_PRIORITY = 0
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue'
# first in, first out: breadth-first

# DEPTH_PRIORITY = 1
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'

#3. Scheduler queue
# SCHEDULER = 'scrapy.core.scheduler.Scheduler'
# from scrapy.core.scheduler import Scheduler

#4. URL de-duplication
# DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'



#===>Part 5: middlewares, Pipelines, extensions<===
#1. Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'Amazon.middlewares.AmazonSpiderMiddleware': 543,
#}

#2. Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
   # 'Amazon.middlewares.DownMiddleware1': 543,
}

#3. Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

#4. Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   # 'Amazon.pipelines.CustomPipeline': 200,
}



#===>Part 6: HTTP caching<===
"""
1. Enabling the cache
   Caches requests/responses that have already been sent, so they can be reused later

   from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
   from scrapy.extensions.httpcache import DummyPolicy
   from scrapy.extensions.httpcache import FilesystemCacheStorage
"""
# whether to enable the cache
# HTTPCACHE_ENABLED = True

# cache policy: every request is cached; on the next request the cached copy is served directly
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
# cache policy: cache according to the HTTP response headers Cache-Control, Last-Modified, etc.
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"

# cache expiry, in seconds
# HTTPCACHE_EXPIRATION_SECS = 0

# cache directory
# HTTPCACHE_DIR = 'httpcache'

# HTTP status codes to exclude from caching
# HTTPCACHE_IGNORE_HTTP_CODES = []

# cache storage backend
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

#===>Part 7: thread pool<===
REACTOR_THREADPOOL_MAXSIZE = 10

#Default: 10
#scrapy is built on the twisted asynchronous IO framework; the downloader uses threads, and this value is the maximum limit for the Twisted reactor thread pool size

#About the twisted thread pool:
http://twistedmatrix.com/documents/10.1.0/core/howto/threading.html

#Thread pool implementation: twisted.python.threadpool.ThreadPool
To resize the twisted thread pool:
from twisted.internet import reactor
reactor.suggestThreadPoolSize(30)

#Related scrapy source:
D:\python3.6\Lib\site-packages\scrapy\crawler.py

#Extra:
Tools for inspecting the thread count of a process on Windows:
    https://docs.microsoft.com/zh-cn/sysinternals/downloads/pslist
    or
    https://pan.baidu.com/s/1jJ0pMaM

The command is:
    pslist |findstr python

On Linux: top -p <pid>

#===>Part 8: other default settings for reference<===
D:\python3.6\Lib\site-packages\scrapy\settings\default_settings.py


13 Custom commands

  • Create a directory at the same level as spiders, e.g.: commands
  • Inside it create a file crawlall.py (the file name becomes the command name)
  • from scrapy.commands import ScrapyCommand
        from scrapy.utils.project import get_project_settings
    
    
        class Command(ScrapyCommand):
    
            requires_project = True
    
            def syntax(self):
                return '[options]'
    
            def short_desc(self):
                return 'Runs all of the spiders'
    
            def run(self, args, opts):
                spider_list = self.crawler_process.spiders.list()
                for name in spider_list:
                    self.crawler_process.crawl(name, **opts.__dict__)
                self.crawler_process.start()
  • Add COMMANDS_MODULE = 'project_name.directory_name' to settings.py
  • Run the command from the project directory: scrapy crawlall