Scrapy

1. Introduction

  Scrapy is an application framework written for crawling websites and extracting structured data. It can be used in a wide range of programs for data mining, information processing, or archiving historical data.

  It was originally designed for web scraping, but it can also be used to fetch data returned by APIs or as a general-purpose web crawler. Scrapy has a broad range of uses, including data mining, monitoring, and automated testing.

  Scrapy uses the Twisted asynchronous networking library to handle network communication. The overall architecture is roughly as follows:

  (architecture diagram)

Scrapy includes the following components:

  1. Engine

  The engine controls the flow of data between all components of the system and triggers events when certain actions occur. (It is the core of the framework.)

  2. Scheduler

  Accepts requests from the engine, pushes them onto a queue, and returns them when the engine asks again. You can think of it as a priority queue of URLs: it decides which URL to crawl next and also removes duplicate URLs.

  3. Downloader

  Downloads web pages and hands the page content back to the engine. The downloader is built on top of Twisted, an efficient asynchronous model.

  4. Spiders

  Spiders are classes written by the developer to parse responses, extract items, or emit new requests.

  5. Item Pipeline

  Processes items after they have been extracted; typical tasks are cleaning, validation, and persistence (e.g. saving to a database).

  6. Downloader Middlewares

  Sit between the Scrapy engine and the downloader and mainly process the requests and responses passed between them.

  7. Spider Middlewares

  A framework of hooks between the Scrapy engine and the spiders; they mainly process the spiders' input (responses) and output (requests and items).

The Scrapy workflow is roughly as follows (a minimal spider sketch follows the list):

  1. The engine takes a URL from the scheduler for the next crawl

  2. The engine wraps the URL in a Request and passes it to the downloader

  3. The downloader fetches the resource and wraps it in a Response

  4. The spider parses the Response

  5. If items are parsed out, they are handed to the item pipeline for further processing

  6. If URLs are parsed out, they are handed back to the scheduler to await crawling
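
To make the flow concrete, here is a minimal spider sketch. It is only an illustration: the quotes.toscrape.com site (also used in the command-line examples later) and the CSS selectors are assumptions, not part of the original example.

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    # the engine asks the spider for its start requests and hands them to the scheduler
    start_urls = ["http://quotes.toscrape.com/"]

    def parse(self, response):
        # the downloader has fetched the Response; the engine routes it back to this callback
        for quote in response.css("div.quote"):
            # extracted items continue on to the item pipeline
            yield {"text": quote.css("span.text::text").extract_first()}
        next_page = response.css("li.next a::attr(href)").extract_first()
        if next_page:
            # new Requests go back to the scheduler and wait to be crawled
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse)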

2. Installation

#Windows
    1. pip3 install wheel # enables installing packages from wheel files; wheel downloads: https://www.lfd.uci.edu/~gohlke/pythonlibs
    2. pip3 install lxml
    3. pip3 install pyopenssl
    4. Download and install pywin32: https://sourceforge.net/projects/pywin32/files/pywin32/
    5. Download the Twisted wheel file: http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted
    6. Run: pip3 install <download dir>\Twisted-17.9.0-cp36-cp36m-win_amd64.whl
    7. pip3 install scrapy
  
#Linux
    1. pip3 install scrapy

3. Command-line tool

#1 View help
    scrapy -h
    scrapy <command> -h

#2 There are two kinds of commands: Project-only commands must be run from inside a project directory, while Global commands do not need one
    Global commands:
        startproject #create a new project
        genspider    #create a new spider
        settings     #when run inside a project directory, shows that project's settings
        runspider    #run a standalone python spider file without creating a project
        shell        #scrapy shell <url>: interactive debugging, e.g. to check whether a selector rule is correct
        fetch        #fetch a single page independently of any project; handy for inspecting the request headers
        view         #download the page and open it in a browser, to see which parts of the page are loaded via ajax
        version      #scrapy version shows scrapy's version; scrapy version -v also shows the versions of its dependencies
    Project-only commands:
        crawl        #run a spider; requires a project; make sure ROBOTSTXT_OBEY = False in the settings file
        check        #check the project for syntax errors
        list         #list the spiders contained in the project
        edit         #open a spider in an editor; rarely used
        parse        #scrapy parse <url> --callback <callback>: verify that a callback works as expected
        bench        #scrapy bench: run a quick benchmark / stress test

#3 Official documentation
    https://docs.scrapy.org/en/latest/topics/commands.html
#1. Running global commands: make sure you are NOT inside a project directory, so you are not affected by that project's settings
scrapy startproject MyProject

cd MyProject
scrapy genspider baidu www.baidu.com

scrapy settings --get XXX # when run from inside a project directory, this shows that project's settings

scrapy runspider baidu.py

scrapy shell https://www.baidu.com
    response
    response.status
    response.body
    view(response)
    
scrapy view https://www.taobao.com # if parts of the page are missing, those parts are loaded via ajax requests; a quick way to pinpoint the problem

scrapy fetch --nolog --headers https://www.taobao.com

scrapy version # scrapy's version

scrapy version -v # versions of the dependency libraries


#2. Running project-only commands: cd into the project directory
scrapy crawl baidu
scrapy check
scrapy list
scrapy parse http://quotes.toscrape.com/ --callback parse
scrapy bench

4. Project structure and a quick look at a spider application

project_name/
   scrapy.cfg
   project_name/
       __init__.py
       items.py
       pipelines.py
       settings.py
       spiders/
           __init__.py
           spider1.py
           spider2.py
           spider3.py

File descriptions:

  • scrapy.cfg   The project's main configuration, used when deploying scrapy; the spider-related settings live in settings.py.
  • items.py     Defines the data schema for structured data, similar to Django's Model.
  • pipelines    Data-processing behaviour, e.g. persisting the structured data.
  • settings.py  Configuration file, e.g. recursion depth, concurrency, download delay. Note: setting names must be UPPERCASE or they are ignored; the correct form is e.g. USER_AGENT='xxxx'.
  • spiders      Spider directory: create your spider files here and write the crawl rules.

  Note: spider files are usually named after the domain of the site being crawled.

import scrapy
 
class XiaoHuarSpider(scrapy.spiders.Spider):
    name = "xiaohuar"                            # spider name *****
    allowed_domains = ["xiaohuar.com"]  # allowed domains
    start_urls = [
        "http://www.xiaohuar.com/hua/",   # start URL
    ]
 
    def parse(self, response):
        # callback invoked once the start URL has been fetched and the response is available
        pass

spider1.py
import sys, io
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')
About Windows console encoding (re-wrap stdout like this if console output appears garbled)

5. Spiders

# By default spiders are run from the command line; to run one from PyCharm, create entrypoint.py in the project directory:
from scrapy.cmdline import execute
execute(['scrapy', 'crawl', 'xiaohua'])

  Code example

import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request
 
 
class DigSpider(scrapy.Spider):
    # Name of the spider; used by the crawl command
    name = "dig"
 
    # Allowed domains
    allowed_domains = ["chouti.com"]
 
    # Start URLs
    start_urls = [
        'http://dig.chouti.com/',
    ]
 
    has_request_set = {}
 
    def parse(self, response):
        print(response.url)
 
        hxs = HtmlXPathSelector(response)
        page_list = hxs.select('//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
        for page in page_list:
            page_url = 'http://dig.chouti.com%s' % page
            key = self.md5(page_url)
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = page_url
                obj = Request(url=page_url, method='GET', callback=self.parse)
                yield obj
 
    @staticmethod
    def md5(val):
        import hashlib
        ha = hashlib.md5()
        ha.update(bytes(val, encoding='utf-8'))
        key = ha.hexdigest()
        return key

To run this spider, open a terminal, cd into the project directory, and execute:

scrapy crawl dig --nolog  # --nolog suppresses log output

The important points in the code above are:

  • Request is a class that wraps a crawl request; yielding it from a callback tells Scrapy to keep crawling that URL
  • HtmlXPathSelector parses the HTML and provides selector functionality (a rewrite with the current selector API is sketched below)
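
HtmlXPathSelector is a legacy API; in current Scrapy versions the same spider can be written with response.xpath, and the manual md5 bookkeeping can be left to Scrapy's built-in duplicate filter. A rough sketch under those assumptions:

import scrapy

class DigSpider(scrapy.Spider):
    name = "dig"
    allowed_domains = ["chouti.com"]
    start_urls = ["http://dig.chouti.com/"]

    def parse(self, response):
        print(response.url)
        # response.xpath replaces HtmlXPathSelector; the default RFPDupeFilter already
        # drops URLs that have been requested before, so no manual md5 set is needed
        hrefs = response.xpath(
            '//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\\d+")]/@href'
        ).extract()
        for href in hrefs:
            yield scrapy.Request(response.urljoin(href), callback=self.parse)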

6. Selectors

#1 // vs /
#2 text
#3 extract and extract_first: pull the content out of Selector objects
#4 attributes: prefix the attribute name with @ in XPath
#5 nested queries
#6 setting a default value
#7 querying by attribute
#8 fuzzy matching on an attribute value
#9 regular expressions
#10 relative XPath
#11 XPath with variables

  Examples

response.selector.css()
response.selector.xpath()
# can be shortened to
response.css()
response.xpath()

#1 // vs /
response.xpath('//body/a')
response.css('div a::text')

>>> response.xpath('//body/a') # a leading // searches the whole document; the / after body matches only direct children of body
[]
>>> response.xpath('//body//a') # a leading // searches the whole document; the // after body matches all descendants of body
[<Selector xpath='//body//a' data='<a href="image1.html">Name: My image 1 <'>, <Selector xpath='//body//a' data='<a href="image2.html">Name: My image 2 <'>, <Selector xpath='//body//a' data='<a href="
image3.html">Name: My image 3 <'>, <Selector xpath='//body//a' data='<a href="image4.html">Name: My image 4 <'>, <Selector xpath='//body//a' data='<a href="image5.html">Name: My image 5 <'>]

#2 text
>>> response.xpath('//body//a/text()')
>>> response.css('body a::text')

#3 extract and extract_first: pull the content out of Selector objects
>>> response.xpath('//div/a/text()').extract()
['Name: My image 1 ', 'Name: My image 2 ', 'Name: My image 3 ', 'Name: My image 4 ', 'Name: My image 5 ']
>>> response.css('div a::text').extract()
['Name: My image 1 ', 'Name: My image 2 ', 'Name: My image 3 ', 'Name: My image 4 ', 'Name: My image 5 ']

>>> response.xpath('//div/a/text()').extract_first()
'Name: My image 1 '
>>> response.css('div a::text').extract_first()
'Name: My image 1 '

#4 attributes: prefix the attribute name with @ in XPath
>>> response.xpath('//div/a/@href').extract_first()
'image1.html'
>>> response.css('div a::attr(href)').extract_first()
'image1.html'

#5 nested queries
>>> response.xpath('//div').css('a').xpath('@href').extract_first()
'image1.html'

#6 setting a default value
>>> response.xpath('//div[@id="xxx"]').extract_first(default="not found")
'not found'

#7 querying by attribute
response.xpath('//div[@id="images"]/a[@href="image3.html"]/text()').extract()
response.css('#images a[href="image3.html"]::text').extract()

#8 fuzzy matching on an attribute value
response.xpath('//a[contains(@href,"image")]/@href').extract()
response.css('a[href*="image"]::attr(href)').extract()

response.xpath('//a[contains(@href,"image")]/img/@src').extract()
response.css('a[href*="imag"] img::attr(src)').extract()

response.xpath('//*[@href="image1.html"]')
response.css('*[href="image1.html"]')

#9 regular expressions
response.xpath('//a/text()').re(r'Name: (.*)')
response.xpath('//a/text()').re_first(r'Name: (.*)')

#10 relative XPath
>>> res=response.xpath('//a[contains(@href,"3")]')[0]
>>> res.xpath('img')
[<Selector xpath='img' data='<img src="image3_thumb.jpg">'>]
>>> res.xpath('./img')
[<Selector xpath='./img' data='<img src="image3_thumb.jpg">'>]
>>> res.xpath('.//img')
[<Selector xpath='.//img' data='<img src="image3_thumb.jpg">'>]
>>> res.xpath('//img') # // here scans from the document root again
[<Selector xpath='//img' data='<img src="image1_thumb.jpg">'>, <Selector xpath='//img' data='<img src="image2_thumb.jpg">'>, <Selector xpath='//img' data='<img src="image3_thumb.jpg">'>, <Selector xpa
th='//img' data='<img src="image4_thumb.jpg">'>, <Selector xpath='//img' data='<img src="image5_thumb.jpg">'>]

#11 XPath with variables
>>> response.xpath('//div[@id=$xxx]/a/text()',xxx='images').extract_first()
'Name: My image 1 '
>>> response.xpath('//div[count(a)=$yyy]/@id',yyy=5).extract_first() # get the id of the div that contains exactly 5 <a> tags
'images'

7. Formatting the scraped data

  The example above only does simple processing, so everything is handled directly in the parse method. When you want to do more with the extracted data, you can use Scrapy items to structure it and then hand it over to pipelines for uniform processing.

import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request
from scrapy.http.cookies import CookieJar
from scrapy import FormRequest


class XiaoHuarSpider(scrapy.Spider):
    # Name of the spider; used by the crawl command
    name = "xiaohuar"
    # Allowed domains
    allowed_domains = ["xiaohuar.com"]

    start_urls = [
        "http://www.xiaohuar.com/list-1-1.html",
    ]
    # custom_settings = {
    #     'ITEM_PIPELINES':{
    #         'spider1.pipelines.JsonPipeline': 100
    #     }
    # }
    has_request_set = {}

    def parse(self, response):
        # Parse the page
        # Find the content on the page that matches our rules (the photo entries) and save it
        # Find all the <a> tags, then follow them and repeat, level by level

        hxs = HtmlXPathSelector(response)

        items = hxs.select('//div[@class="item_list infinite_scroll"]/div')
        for item in items:
            src = item.select('.//div[@class="img"]/a/img/@src').extract_first()
            name = item.select('.//div[@class="img"]/span/text()').extract_first()
            school = item.select('.//div[@class="img"]/div[@class="btns"]/a/text()').extract_first()
            url = "http://www.xiaohuar.com%s" % src
            from ..items import XiaoHuarItem
            obj = XiaoHuarItem(name=name, school=school, url=url)
            yield obj

        urls = hxs.select('//a[re:test(@href, "http://www.xiaohuar.com/list-1-\d+.html")]/@href')
        for url in urls:
            key = self.md5(url)
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = url
                req = Request(url=url,method='GET',callback=self.parse)
                yield req

    @staticmethod
    def md5(val):
        import hashlib
        ha = hashlib.md5()
        ha.update(bytes(val, encoding='utf-8'))
        key = ha.hexdigest()
        return key

spiders/xiaohuar.py
import scrapy

class XiaoHuarItem(scrapy.Item):
    name = scrapy.Field()
    school = scrapy.Field()
    url = scrapy.Field()
items.py
import json
import os
import requests


class JsonPipeline(object):
    def __init__(self):
        self.file = open('xiaohua.txt', 'w')

    def process_item(self, item, spider):
        v = json.dumps(dict(item), ensure_ascii=False)
        self.file.write(v)
        self.file.write('\n')
        self.file.flush()
        return item


class FilePipeline(object):
    def __init__(self):
        if not os.path.exists('imgs'):
            os.makedirs('imgs')

    def process_item(self, item, spider):
        response = requests.get(item['url'], stream=True)
        file_name = '%s_%s.jpg' % (item['name'], item['school'])
        with open(os.path.join('imgs', file_name), mode='wb') as f:
            f.write(response.content)
        return item

pipelines.py
ITEM_PIPELINES = {
   'spider1.pipelines.JsonPipeline': 100,
   'spider1.pipelines.FilePipeline': 300,
}
# The integer after each entry determines the order of execution: items pass through the pipelines from the lowest number to the highest. These values are conventionally kept in the 0-1000 range.
settings.py

  A pipeline can do quite a bit more, as shown below:

from scrapy.exceptions import DropItem

class CustomPipeline(object):
    def __init__(self,v):
        self.value = v

    def process_item(self, item, spider):
        # process the item and persist it

        # returning the item passes it on to the next pipeline
        return item

        # to drop the item so that no later pipeline sees it:
        # raise DropItem()


    @classmethod
    def from_crawler(cls, crawler):
        """
        Called once at start-up to create the pipeline object
        :param crawler: 
        :return: 
        """
        val = crawler.settings.getint('MMMM')
        return cls(val)

    def open_spider(self,spider):
        """
        Called when the spider starts running
        :param spider: 
        :return: 
        """
        print('000000')

    def close_spider(self,spider):
        """
        Called when the spider is closed
        :param spider: 
        :return: 
        """
        print('111111')


8. Middleware

class SpiderMiddleware(object):

    def process_spider_input(self,response, spider):
        """
        Called after the download finishes, before the response is handed to parse
        :param response: 
        :param spider: 
        :return: 
        """
        pass

    def process_spider_output(self,response, result, spider):
        """
        Called with the result returned by the spider
        :param response:
        :param result:
        :param spider:
        :return: must return an iterable containing Request or Item objects
        """
        return result

    def process_spider_exception(self,response, exception, spider):
        """
        Called when a spider callback raises an exception
        :param response:
        :param exception:
        :param spider:
        :return: None to pass the exception on to the next middleware; or an iterable containing Response or Item objects, handed to the scheduler or the pipelines
        """
        return None


    def process_start_requests(self,start_requests, spider):
        """
        Called when the spider starts
        :param start_requests:
        :param spider:
        :return: an iterable containing Request objects
        """
        return start_requests

Spider middleware
class DownMiddleware1(object):
    def process_request(self, request, spider):
        """
        Called for every request that needs to be downloaded; each downloader middleware's process_request is invoked
        :param request: 
        :param spider: 
        :return:  
            None: continue with the remaining middlewares and download the request
            Response object: stop calling process_request and start calling process_response
            Request object: stop the middleware chain and hand the Request back to the scheduler
            raise IgnoreRequest: stop calling process_request and start calling process_exception
        """
        pass



    def process_response(self, request, response, spider):
        """
        Called with the response returned by the downloader
        :param response:
        :param result:
        :param spider:
        :return: 
            Response object: passed on to the next middleware's process_response
            Request object: stop the middleware chain; the request is rescheduled for download
            raise IgnoreRequest: Request.errback is called
        """
        print('response1')
        return response

    def process_exception(self, request, exception, spider):
        """
        Called when a download handler or a downloader middleware's process_request() raises an exception
        :param response:
        :param exception:
        :param spider:
        :return: 
            None: pass the exception on to the next middleware
            Response object: stop calling the remaining process_exception methods
            Request object: stop the middleware chain; the request is rescheduled for download
        """
        return None

Downloader middleware

9. Custom commands

  • Create a directory (any name will do, e.g. commands) at the same level as spiders
  • Create a crawlall.py file inside it (the file name becomes the name of the custom command)
from scrapy.commands import ScrapyCommand
from scrapy.utils.project import get_project_settings
from scrapy.crawler import CrawlerProcess
class Command(ScrapyCommand):
    requires_project = True
    def syntax(self):
        return '[options]'
    def short_desc(self):
        return 'Runs all of the spiders'
    def run(self, args, opts):
        """
        Entry point for the command
        :param args:
        :param opts:
        :return:
        """
        spider_list = self.crawler_process.spiders.list() # get the names of all spiders
        for name in spider_list: # loop over every spider
            self.crawler_process.crawl(name, **opts.__dict__)
        self.crawler_process.start() # start crawling
crawlall.py
  • Add COMMANDS_MODULE = '<project name>.<directory name>' to settings.py (see the example below)
  • Run the command from the project directory: scrapy crawlall 
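
For example, with the MyProject project created in section 3 and a commands directory as described above (both names are just the assumed examples), the setting would be:

# settings.py
COMMANDS_MODULE = 'MyProject.commands'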

 10. Custom extensions (based on Scrapy signals)

  A custom extension uses signals to register a specific operation at a specific point in the crawl.

from scrapy import signals


class MyExtension(object):
    def __init__(self, value):
        self.value = value

    @classmethod
    def from_crawler(cls, crawler):
        val = crawler.settings.getint('MMMM')
        ext = cls(val)

        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)

        return ext

    def spider_opened(self, spider):
        print('open')

    def spider_closed(self, spider):
        print('close')
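
The extension also has to be registered before Scrapy will load it. A minimal settings sketch, reusing the step8_king.extensions.MyExtension path that appears in the settings section below (the MMMM value itself is only an assumed example):

EXTENSIONS = {
    'step8_king.extensions.MyExtension': 500,
}
MMMM = 10  # read back by MyExtension.from_crawler via crawler.settings.getint('MMMM')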

11. Avoiding duplicate requests

  By default Scrapy uses scrapy.dupefilter.RFPDupeFilter for request deduplication. The related settings are listed below, followed by a custom filter class that can be plugged in instead:

DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter'
DUPEFILTER_DEBUG = False
JOBDIR = "directory where the seen-request fingerprints are stored, e.g. /root/"  # the final path is /root/requests.seen
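
JOBDIR is the same directory Scrapy uses for pausing and resuming crawls, and it can also be passed on the command line; the directory name below is just an assumed example:

scrapy crawl dig -s JOBDIR=crawls/dig-1   # requests.seen is written inside this directory
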
class RepeatUrl:
    def __init__(self):
        self.visited_url = set()

    @classmethod
    def from_settings(cls, settings):
        """
        Called once at initialization time
        :param settings: 
        :return: 
        """
        return cls()

    def request_seen(self, request):
        """
        Check whether the current request has been seen before
        :param request: 
        :return: True if the request has already been visited; False otherwise
        """
        if request.url in self.visited_url:
            return True
        self.visited_url.add(request.url)
        return False

    def open(self):
        """
        Called when crawling starts
        :return: 
        """
        print('open replication')

    def close(self, reason):
        """
        Called when the crawl finishes
        :param reason: 
        :return: 
        """
        print('close replication')

    def log(self, request, spider):
        """
        Log the duplicate request
        :param request: 
        :param spider: 
        :return: 
        """
        print('repeat', request.url)

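To actually switch Scrapy over to the RepeatUrl filter above, point DUPEFILTER_CLASS at it in settings.py. A minimal sketch, assuming the class lives in a duplication module inside the project package (the same path that setting #16 in the next section uses):

DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'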

12. Settings

# -*- coding: utf-8 -*-

# Scrapy settings for step8_king project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

# 1. Bot name
BOT_NAME = 'step8_king'

# 2. Spider module paths
SPIDER_MODULES = ['step8_king.spiders']
NEWSPIDER_MODULE = 'step8_king.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# 3. Client User-Agent request header
# USER_AGENT = 'step8_king (+http://www.yourdomain.com)'

# Obey robots.txt rules
# 4. robots.txt compliance
# ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# 5. Number of concurrent requests
# CONCURRENT_REQUESTS = 4

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# 6. Download delay in seconds
# DOWNLOAD_DELAY = 2


# The download delay setting will honor only one of:
# 7. Concurrent requests per domain; the download delay is also applied per domain
# CONCURRENT_REQUESTS_PER_DOMAIN = 2
# Concurrent requests per IP; if set, CONCURRENT_REQUESTS_PER_DOMAIN is ignored and the download delay is applied per IP instead
# CONCURRENT_REQUESTS_PER_IP = 3

# Disable cookies (enabled by default)
# 8. Whether cookies are enabled (handled through a cookiejar)
# COOKIES_ENABLED = True
# COOKIES_DEBUG = True

# Disable Telnet Console (enabled by default)
# 9. The Telnet console lets you inspect and control the running crawler
#    connect with telnet <ip> <port>, then issue commands
# TELNETCONSOLE_ENABLED = True
# TELNETCONSOLE_HOST = '127.0.0.1'
# TELNETCONSOLE_PORT = [6023,]


# 10. Default request headers
# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#     'Accept-Language': 'en',
# }


# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
# 11. Item pipelines that process the scraped items
# ITEM_PIPELINES = {
#    'step8_king.pipelines.JsonPipeline': 700,
#    'step8_king.pipelines.FilePipeline': 500,
# }



# 12. Custom extensions, invoked via signals
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#     # 'step8_king.extensions.MyExtension': 500,
# }


# 13. Maximum crawl depth allowed; the current depth can be read from request.meta; 0 means no limit
# DEPTH_LIMIT = 3

# 14. Crawl order: 0 = depth-first (LIFO, the default); 1 = breadth-first (FIFO)

# LIFO, depth-first
# DEPTH_PRIORITY = 0
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue'
# FIFO, breadth-first

# DEPTH_PRIORITY = 1
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'

# 15. Scheduler queue
# SCHEDULER = 'scrapy.core.scheduler.Scheduler'
# from scrapy.core.scheduler import Scheduler


# 16. URL deduplication
# DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'


# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html

"""
17. AutoThrottle algorithm
    from scrapy.contrib.throttle import AutoThrottle
    How the automatic throttling is computed:
    1. take the minimum delay from DOWNLOAD_DELAY
    2. take the maximum delay from AUTOTHROTTLE_MAX_DELAY
    3. start with the initial download delay AUTOTHROTTLE_START_DELAY
    4. when a request finishes downloading, take its "connection" latency, i.e. the time from sending the request until the response headers are received
    5. combine it with AUTOTHROTTLE_TARGET_CONCURRENCY:
    target_delay = latency / self.target_concurrency
    new_delay = (slot.delay + target_delay) / 2.0 # slot.delay is the delay used for the previous request
    new_delay = max(target_delay, new_delay)
    new_delay = min(max(self.mindelay, new_delay), self.maxdelay)
    slot.delay = new_delay
"""

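# A worked example of the formulas above (all numbers are assumed, for illustration only):
# with AUTOTHROTTLE_TARGET_CONCURRENCY = 2.0, a previous slot.delay of 3.0 s and a measured
# latency of 4.0 s:
#   target_delay = 4.0 / 2.0 = 2.0
#   new_delay    = (3.0 + 2.0) / 2.0 = 2.5
#   new_delay    = max(2.0, 2.5) = 2.5
#   new_delay    = min(max(mindelay, 2.5), maxdelay)
# so the next request to that download slot waits roughly 2.5 seconds.
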
# enable AutoThrottle
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 10
# The average number of requests Scrapy should be sending in parallel to each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0

# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = True

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings


"""
18. Enable HTTP caching
    Used to cache requests and responses that have already been sent, so they can be reused later
    
    from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
    from scrapy.extensions.httpcache import DummyPolicy
    from scrapy.extensions.httpcache import FilesystemCacheStorage
"""
# whether the HTTP cache is enabled
# HTTPCACHE_ENABLED = True

# cache policy: cache every request; identical requests are later served straight from the cache
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
# cache policy: honor HTTP response headers such as Cache-Control and Last-Modified
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"

# cache expiration time in seconds (0 = never expire)
# HTTPCACHE_EXPIRATION_SECS = 0

# cache directory
# HTTPCACHE_DIR = 'httpcache'

# HTTP status codes that should not be cached
# HTTPCACHE_IGNORE_HTTP_CODES = []

# cache storage backend
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'


"""
19. Proxies, configured through environment variables
    from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware
    
    Option 1: use the default HttpProxyMiddleware and set environment variables
        os.environ
        {
            http_proxy:http://root:woshiniba@192.168.11.11:9999/
            https_proxy:http://192.168.11.11:9999/
        }
    Option 2: use a custom downloader middleware
    
    def to_bytes(text, encoding=None, errors='strict'):
        if isinstance(text, bytes):
            return text
        if not isinstance(text, six.string_types):
            raise TypeError('to_bytes must receive a unicode, str or bytes '
                            'object, got %s' % type(text).__name__)
        if encoding is None:
            encoding = 'utf-8'
        return text.encode(encoding, errors)
        
    class ProxyMiddleware(object):
        def process_request(self, request, spider):
            PROXIES = [
                {'ip_port': '111.11.228.75:80', 'user_pass': ''},
                {'ip_port': '120.198.243.22:80', 'user_pass': ''},
                {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
                {'ip_port': '101.71.27.120:80', 'user_pass': ''},
                {'ip_port': '122.96.59.104:80', 'user_pass': ''},
                {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
            ]
            proxy = random.choice(PROXIES)
            if proxy['user_pass']:
                request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
                encoded_user_pass = base64.b64encode(to_bytes(proxy['user_pass'])).decode('ascii')
                request.headers['Proxy-Authorization'] = to_bytes('Basic ' + encoded_user_pass)
                print("**************ProxyMiddleware have pass************" + proxy['ip_port'])
            else:
                print("**************ProxyMiddleware no pass************" + proxy['ip_port'])
                request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
    
    DOWNLOADER_MIDDLEWARES = {
       'step8_king.middlewares.ProxyMiddleware': 500,
    }
    
"""

"""
20. HTTPS access
    There are two cases when accessing HTTPS sites:
    1. the target site uses a trusted certificate (supported by default)
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"
        
    2. the target site uses a custom (e.g. self-signed) certificate
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"
        
        # https.py
        from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
        from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)
        
        class MySSLFactory(ScrapyClientContextFactory):
            def getCertificateOptions(self):
                from OpenSSL import crypto
                v1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())
                v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())
                return CertificateOptions(
                    privateKey=v1,  # a PKey object
                    certificate=v2,  # an X509 object
                    verify=False,
                    method=getattr(self, 'method', getattr(self, '_ssl_method', None))
                )
    Other:
        related classes
            scrapy.core.downloader.handlers.http.HttpDownloadHandler
            scrapy.core.downloader.webclient.ScrapyHTTPClientFactory
            scrapy.core.downloader.contextfactory.ScrapyClientContextFactory
        related settings
            DOWNLOADER_HTTPCLIENTFACTORY
            DOWNLOADER_CLIENTCONTEXTFACTORY

"""



"""
21. Spider middleware
    class SpiderMiddleware(object):

        def process_spider_input(self,response, spider):
            '''
            Called after the download finishes, before the response is handed to parse
            :param response: 
            :param spider: 
            :return: 
            '''
            pass
    
        def process_spider_output(self,response, result, spider):
            '''
            Called with the result returned by the spider
            :param response:
            :param result:
            :param spider:
            :return: must return an iterable containing Request or Item objects
            '''
            return result
    
        def process_spider_exception(self,response, exception, spider):
            '''
            Called when a spider callback raises an exception
            :param response:
            :param exception:
            :param spider:
            :return: None to pass the exception on to the next middleware; or an iterable containing Response or Item objects, handed to the scheduler or the pipelines
            '''
            return None
    
    
        def process_start_requests(self,start_requests, spider):
            '''
            Called when the spider starts
            :param start_requests:
            :param spider:
            :return: an iterable containing Request objects
            '''
            return start_requests
    
    Built-in spider middlewares:
        'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
        'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
        'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
        'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
        'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,

"""
# from scrapy.contrib.spidermiddleware.referer import RefererMiddleware
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
   # 'step8_king.middlewares.SpiderMiddleware': 543,
}


"""
22. Downloader middleware
    class DownMiddleware1(object):
        def process_request(self, request, spider):
            '''
            Called for every request that needs to be downloaded; each downloader middleware's process_request is invoked
            :param request:
            :param spider:
            :return:
                None: continue with the remaining middlewares and download the request
                Response object: stop calling process_request and start calling process_response
                Request object: stop the middleware chain and hand the Request back to the scheduler
                raise IgnoreRequest: stop calling process_request and start calling process_exception
            '''
            pass
    
    
    
        def process_response(self, request, response, spider):
            '''
            Called with the response returned by the downloader
            :param response:
            :param result:
            :param spider:
            :return:
                Response object: passed on to the next middleware's process_response
                Request object: stop the middleware chain; the request is rescheduled for download
                raise IgnoreRequest: Request.errback is called
            '''
            print('response1')
            return response
    
        def process_exception(self, request, exception, spider):
            '''
            Called when a download handler or a downloader middleware's process_request() raises an exception
            :param response:
            :param exception:
            :param spider:
            :return:
                None: pass the exception on to the next middleware
                Response object: stop calling the remaining process_exception methods
                Request object: stop the middleware chain; the request is rescheduled for download
            '''
            return None

    
    Default downloader middlewares
    {
        'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
        'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
        'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
        'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
        'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
        'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
        'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
        'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
        'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
        'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
        'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
        'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
        'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
        'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
    }

"""
# from scrapy.contrib.downloadermiddleware.httpauth import HttpAuthMiddleware
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#    'step8_king.middlewares.DownMiddleware1': 100,
#    'step8_king.middlewares.DownMiddleware2': 500,
# }


13. TinyScrapy

#!/usr/bin/env python
# -*- coding:utf-8 -*-
import types
from twisted.internet import defer
from twisted.web.client import getPage
from twisted.internet import reactor



class Request(object):
    def __init__(self, url, callback):
        self.url = url
        self.callback = callback
        self.priority = 0


class HttpResponse(object):
    def __init__(self, content, request):
        self.content = content
        self.request = request


class ChouTiSpider(object):

    def start_requests(self):
        url_list = ['http://www.cnblogs.com/', 'http://www.bing.com']
        for url in url_list:
            yield Request(url=url, callback=self.parse)

    def parse(self, response):
        print(response.request.url)
        # yield Request(url="http://www.baidu.com", callback=self.parse)




from queue import Queue
Q = Queue()


class CallLaterOnce(object):
    def __init__(self, func, *a, **kw):
        self._func = func
        self._a = a
        self._kw = kw
        self._call = None

    def schedule(self, delay=0):
        if self._call is None:
            self._call = reactor.callLater(delay, self)

    def cancel(self):
        if self._call:
            self._call.cancel()

    def __call__(self):
        self._call = None
        return self._func(*self._a, **self._kw)


class Engine(object):
    def __init__(self):
        self.nextcall = None
        self.crawlling = []
        self.max = 5
        self._closewait = None

    def get_response(self,content, request):
        response = HttpResponse(content, request)
        gen = request.callback(response)
        if isinstance(gen, types.GeneratorType):
            for req in gen:
                req.priority = request.priority + 1
                Q.put(req)


    def rm_crawlling(self,response,d):
        self.crawlling.remove(d)

    def _next_request(self,spider):
        if Q.qsize() == 0 and len(self.crawlling) == 0:
            self._closewait.callback(None)

        if len(self.crawlling) >= self.max:
            return
        while len(self.crawlling) < self.max:
            try:
                req = Q.get(block=False)
            except Exception as e:
                req = None
            if not req:
                return
            d = getPage(req.url.encode('utf-8'))
            self.crawlling.append(d)
            d.addCallback(self.get_response, req)
            d.addCallback(self.rm_crawlling,d)
            d.addCallback(lambda _: self.nextcall.schedule())


    @defer.inlineCallbacks
    def crawl(self):
        spider = ChouTiSpider()
        start_requests = iter(spider.start_requests())
        flag = True
        while flag:
            try:
                req = next(start_requests)
                Q.put(req)
            except StopIteration as e:
                flag = False

        self.nextcall = CallLaterOnce(self._next_request,spider)
        self.nextcall.schedule()

        self._closewait = defer.Deferred()
        yield self._closewait

    @defer.inlineCallbacks
    def pp(self):
        yield self.crawl()

_active = set()
obj = Engine()
d = obj.crawl()
_active.add(d)

li = defer.DeferredList(_active)
li.addBoth(lambda _,*a,**kw: reactor.stop())

reactor.run()
