Requests crawlers and multi-threaded crawling with the Scrapy framework

1. Single-threaded crawling based on Requests and BeautifulSoup

1.1 BeautifulSoup usage summary

1. find: get the first matching tag

tag = soup.find('a')
print(tag)
tag = soup.find(name='a', attrs={'class': 'sister'}, recursive=True, text='Lacie')
tag = soup.find(name='a', class_='sister', recursive=True, text='Lacie')
print(tag)

2. find_all: get all matching tags, including tags nested inside other tags; if you do not want nested tags, pass recursive=False (turn off recursive search)

tags = soup.find_all('a')
print(tags)
tags = soup.find_all(name='a', attrs={'class': 'sister'}, recursive=True, text='Lacie')
tags = soup.find_all(name='a', class_='sister', recursive=True, text='Lacie')
print(tags)

3. get: read the value of an attribute

img_url = soup.find('div',class_='main-image').find('img').get('src')

4. text: get the text content of a tag

title = soup.find('h2',class_='main-title').text.strip()
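
A quick self-contained sketch tying the four calls together; the HTML snippet below is made up for illustration and 'html.parser' is used only to avoid the lxml dependency:

from bs4 import BeautifulSoup

html = '''
<div class="main-image">
    <h2 class="main-title"> Demo title </h2>
    <img src="http://example.com/1.jpg">
</div>
'''
soup = BeautifulSoup(html, 'html.parser')

# find returns the first match; get/text read an attribute and the tag content
img_url = soup.find('div', class_='main-image').find('img').get('src')
title = soup.find('h2', class_='main-title').text.strip()
print(img_url)   # http://example.com/1.jpg
print(title)     # Demo title

# find_all returns every match as a list
print(soup.find_all('img'))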

1.2 A simple application: crawling mzitu images

import requests,os
from bs4 import BeautifulSoup


base_url = 'http://www.mzitu.com/'
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

r1 = requests.get(url=base_url)
# print(r1.text)
soup = BeautifulSoup(r1.text,features='lxml')
# collect the links to every photo set
tags = soup.find(name='ul',id="pins").find_all('li')
url_list = []
for tag in tags:
    url = tag.find('span').find('a').get('href')
    # print(img_url)
    url_list.append(url)

for url in url_list:
    # fetch the photo-set page
    r2 = requests.get(url=url)
    soup = BeautifulSoup(r2.text,features='lxml')

    title = soup.find('h2',class_='main-title').text.strip()
    # img_url = soup.find('div',class_='main-image').find('img').get('src')
    # total number of images in the set
    num = int(soup.find('div',class_='pagenavi').find_all('span')[-2].text)
    # folder to save into
    path = os.path.join(BASE_DIR,title)
    # print(path)
    if os.path.exists(path):
        pass
    else:
        os.makedirs(path)
    # loop over each image page to get the image URL
    for i in range(1,num+1):
        url_new = "%s/%s"%(url,i)
        r3 = requests.get(url=url_new)
        soup = BeautifulSoup(r3.text,features='lxml')
        img_url = str(soup.find('div',class_='main-image').find('img').get('src'))
        # add a Referer header to get past the image hotlink protection
        r4 = requests.get(url=img_url,
                    headers={'Referer':url_new})
        # print(type(img_url))
        # split the file name off the image URL
        parts = img_url.rsplit('/', maxsplit=1)
        file_name = os.path.join(path, parts[1])
        # print(file_name)
        with open(file_name,'wb') as f:
            f.write(r4.content)

1.3 Simulating a login to the chouti site and upvoting

import requests
from fake_useragent import UserAgent

agent = UserAgent()
# ############## Approach 1 ##############
"""
## 1. First request any page to obtain the initial cookies
i1 = requests.get(url="https://dig.chouti.com/",
                  headers={
                      "User-Agent":agent.random,
                  })
i1_cookies = i1.cookies.get_dict()
print(i1_cookies)

# ## 2. Log in, carrying the cookies from the previous request; the backend authorizes the gpsd value in the cookie
i2 = requests.post(
    url="https://dig.chouti.com/login",
    data={
        'phone': "8615057101356",
        'password': "199SulkyBuckets",
        'oneMonth': "1"
    },
    headers={"User-Agent":agent.random,},
    cookies=i1_cookies,
)

# ## 3. Upvote (only the already-authorized gpsd cookie needs to be sent)

i3 = requests.post(
    url="https://dig.chouti.com/link/vote?linksId=19444596",
    headers={"User-Agent":agent.random,},
    cookies=i1_cookies,
)
print(i3.text)
"""

# ############## Approach 2 ##############

# import requests

session = requests.Session()
i1 = session.get(url="https://dig.chouti.com",
                 headers={"User-Agent": agent.random})
i2 = session.post(
    url="https://dig.chouti.com/login",
    data={
        'phone': "8615057101356",
        'password': "199SulkyBuckets",
        'oneMonth': "1"
    },
    headers={"User-Agent": agent.random}
)
i3 = session.post(
    url="https://dig.chouti.com/link/vote?linksId=19444596",
    headers={"User-Agent": agent.random}
)
print(i3.text)
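
With Approach 2, requests.Session stores the cookies returned by the first GET and sends them automatically on the later POSTs, so there is no need to extract and pass the cookie dict by hand as in Approach 1.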

2. The Scrapy framework

 

Scrapy is an application framework written for crawling websites and extracting structured data. It can be used in a whole range of programs, such as data mining, information processing, or archiving historical data.
It was originally designed for page scraping (more precisely, web scraping), but it can also be used to fetch data returned by APIs (for example, Amazon Associates Web Services) or as a general-purpose web crawler. Scrapy has a wide range of uses: data mining, monitoring, and automated testing.

 

Scrapy uses the Twisted asynchronous networking library to handle network communication. The overall architecture is roughly as follows.

Scrapy mainly consists of the following components:

  • Engine (Scrapy)
    Handles the data flow across the whole system and triggers events (the core of the framework)
  • Scheduler
    Accepts requests from the engine, pushes them into a queue, and hands them back when the engine asks again. Think of it as a priority queue of URLs (the addresses of the pages to crawl); it decides which URL to crawl next and removes duplicate URLs
  • Downloader
    Downloads page content and returns it to the spiders (the downloader is built on Twisted, an efficient asynchronous model)
  • Spiders
    Do the actual work: they extract the information you need from specific pages, the so-called Items. You can also extract links from them and let Scrapy go on to crawl the next page
  • Item Pipeline
    Processes the items the spiders extract from pages; its main jobs are persisting items, validating them, and dropping unneeded data. After a page is parsed by a spider, the result is sent to the item pipeline and passed through several processing steps in order.
  • Downloader Middlewares
    A hook framework between the Scrapy engine and the downloader; mainly processes the requests and responses passing between them.
  • Spider Middlewares
    A hook framework between the Scrapy engine and the spiders; mainly processes the spiders' response input and request output.
  • Scheduler Middlewares
    Middleware between the Scrapy engine and the scheduler; processes the requests and responses sent between them.

The Scrapy workflow runs roughly as follows (a minimal spider sketch follows this list):

    1. The engine takes a URL from the scheduler for the next crawl
    2. The engine wraps the URL in a Request and passes it to the downloader
    3. The downloader fetches the resource and wraps it in a Response
    4. The spider parses the Response
    5. If an Item is parsed out, it is handed to the item pipeline for further processing
    6. If a link (URL) is parsed out, the URL is handed to the scheduler to wait for crawling
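
As a rough sketch of how these pieces fit together in code (the spider name, URL and XPath below are placeholders for illustration, not part of any project in this post):

import scrapy


class DemoSpider(scrapy.Spider):
    name = 'demo'                          # placeholder spider name
    start_urls = ['http://example.com/']   # the engine hands these URLs to the scheduler

    def parse(self, response):
        # the downloader has fetched the page; the engine calls parse with the Response
        for href in response.xpath('//a/@href').extract():
            # yielding a Request sends the URL back through the engine to the scheduler
            yield scrapy.Request(response.urljoin(href), callback=self.parse)
        # yielding an item (here a plain dict) sends it on to the item pipeline
        yield {'url': response.url}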

2.1 Basic commands

1. scrapy startproject <project name>
   - create a new project in the current directory (similar to Django)
 
2. scrapy genspider [-t template] <name> <domain>
   - create a spider
   e.g.:
      scrapy genspider -t basic oldboy oldboy.com
      scrapy genspider -t xmlfeed autohome autohome.com.cn
   PS:
      list available templates: scrapy genspider -l
      dump a template:          scrapy genspider -d <template name>
 
3. scrapy list
   - list the spiders in the project
 
4. scrapy crawl <spider name> --nolog   (--nolog suppresses log output)
   - run a single spider
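
For example, the mzitu project used in section 2.4 could be scaffolded and run roughly like this (the project name here is assumed; the spider name matches section 2.4):

   scrapy startproject mzitu
   cd mzitu
   scrapy genspider -t basic meizitu mzitu.com
   scrapy crawl meizitu --nolog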

2.2 Selectors

#!/usr/bin/env python
# -*- coding:utf-8 -*-
from scrapy.selector import Selector, HtmlXPathSelector
from scrapy.http import HtmlResponse
html = """<!DOCTYPE html>
<html>
    <head lang="en">
        <meta charset="UTF-8">
        <title></title>
    </head>
    <body>
        <ul>
            <li class="item-"><a id='i1' href="link.html">first item</a></li>
            <li class="item-0"><a id='i2' href="llink.html">first item</a></li>
            <li class="item-1"><a href="llink2.html">second item<span>vv</span></a></li>
        </ul>
        <div><a href="llink2.html">second item</a></div>
    </body>
</html>
"""
response = HtmlResponse(url='http://example.com', body=html,encoding='utf-8')
# hxs = HtmlXPathSelector(response)
# print(hxs)
# hxs = Selector(response=response).xpath('//a')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[2]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@id]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@id="i1"]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@href="link.html"][@id="i1"]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[contains(@href, "link")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[starts-with(@href, "link")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/text()').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/@href').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('/html/body/ul/li/a/@href').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('//body/ul/li/a/@href').extract_first()
# print(hxs)
 
# ul_list = Selector(response=response).xpath('//body/ul/li')
# for item in ul_list:
#     v = item.xpath('./a/span')
#     # or
#     # v = item.xpath('a/span')
#     # or
#     # v = item.xpath('*/a/span')
#     print(v)
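
# As a quick check (continuing the script above), extract() returns a list of strings
# and extract_first() a single string for the sample HTML:
hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/@href').extract()
print(hxs)    # ['link.html', 'llink.html']
hxs = Selector(response=response).xpath('//body/ul/li/a/@href').extract_first()
print(hxs)    # 'link.html'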

Automatic login and upvoting on chouti:

import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request
from scrapy.http.cookies import CookieJar
from scrapy import FormRequest


class ChouTiSpider(scrapy.Spider):
    # spider name, used to launch it from the command line
    name = "chouti"
    # domains the spider is allowed to crawl
    allowed_domains = ["chouti.com"]
    allowed_domains = ["chouti.com"]

    cookie_dict = {}
    has_request_set = {}
    # override the start-requests method
    def start_requests(self):
        url = 'http://dig.chouti.com/'
        # return [Request(url=url, callback=self.login)]
        yield Request(url=url, callback=self.login)

    def login(self, response):
        cookie_jar = CookieJar()
        cookie_jar.extract_cookies(response, response.request)
        for k, v in cookie_jar._cookies.items():
            for i, j in v.items():
                for m, n in j.items():
                    self.cookie_dict[m] = n.value
        print(self.cookie_dict)
        req = Request(
            url='http://dig.chouti.com/login',
            method='POST',
            headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
            body='phone=8615057101356&password=199SulkyBuckets&oneMonth=1',
            cookies=self.cookie_dict,
            callback=self.check_login
        )
        yield req

    def check_login(self, response):
        # print(response.text)
        req = Request(
            url='http://dig.chouti.com/',
            method='GET',
            callback=self.show,
            cookies=self.cookie_dict,
            dont_filter=True
        )
        yield req

    def show(self, response):
        # print(response.text)
        hxs = HtmlXPathSelector(response)
        news_list = hxs.select('//div[@id="content-list"]/div[@class="item"]')
        for new in news_list:
            # temp = new.xpath('div/div[@class="part2"]/@share-linkid').extract()
            link_id = new.xpath('*/div[@class="part2"]/@share-linkid').extract_first()
            yield Request(
                url='http://dig.chouti.com/link/vote?linksId=%s' %(link_id,),
                method='POST',
                cookies=self.cookie_dict,
                callback=self.do_favor
            )

        # page_list = hxs.select('//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
        # for page in page_list:
        #
        #     page_url = 'http://dig.chouti.com%s' % page
        #     import hashlib
        #     hash = hashlib.md5()
        #     hash.update(bytes(page_url,encoding='utf-8'))
        #     key = hash.hexdigest()
        #     if key in self.has_request_set:
        #         pass
        #     else:
        #         self.has_request_set[key] = page_url
        #         yield Request(
        #             url=page_url,
        #             method='GET',
        #             callback=self.show
        #         )

    def do_favor(self, response):
        print(response.text)

Note: set DEPTH_LIMIT = 1 in settings.py to limit how many levels of "recursion" the crawl follows.

When crawling the same page more than once, remember to set dont_filter=True on the Request so that Scrapy's built-in deduplication does not drop it.

2.3 Avoiding duplicate visits

By default Scrapy uses scrapy.dupefilter.RFPDupeFilter for deduplication; the related settings are:

DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter'
DUPEFILTER_DEBUG = False
JOBDIR = "保存範文記錄的日誌路徑,如:/root/"  # 最終路徑爲 /root/requests.seen

2.4 Crawling mzitu images

# -*- coding: utf-8 -*-
import scrapy
from scrapy.http import Request
from scrapy.selector import Selector,XmlXPathSelector
from ..items import MzituItem

class MeizituSpider(scrapy.Spider):
    name = 'meizitu'
    allowed_domains = ['mzitu.com']
    # start_urls = ['http://mzitu.com/']

    def start_requests(self):
        url = 'http://www.mzitu.com/all/'
        yield Request(url=url,method='GET',callback=self.main_page)

    def main_page(self,response):
        # collect every photo-set URL
        hxs = Selector(response = response).xpath('//p[contains(@class,"url")]/a/@href').extract()
        for url in hxs:
            req = Request(url = url,
                          callback=self.fenye)
            yield req

    def fenye(self,response):
        # grab the image URL and the title
        img_url = Selector(response=response).xpath('//div[@class="main-image"]//img/@src').extract_first().strip()
        title = Selector(response=response).xpath('//div[@class="main-image"]//img/@alt').extract_first().strip()
        yield MzituItem(img_url=img_url,title=title)
        # follow the pagination links at the bottom of the page
        xhs = Selector(response=response).xpath('//div[@class="pagenavi"]/a/@href').extract()
        for url in xhs:
            req = Request(
                url=url,
                callback=self.fenye,
            )
            yield req
meizitu.py
import scrapy
class MzituItem(scrapy.Item):
    # define the fields for your item here like:
    img_url = scrapy.Field()
    title = scrapy.Field()
items.py
from scrapy.exceptions import DropItem
import requests,os
base_path = r'F:\mzitu'  # raw string so the backslash is not treated as an escape
class MzituPipeline(object):
    def process_item(self, item, spider):
        # print(item['title'],item['img_url'])
        title = item['title']
        url = str(item['img_url'])
        if os.path.exists(os.path.join(base_path,item['title'])):
            pass
        else:
            os.makedirs(os.path.join(base_path,item['title']))
        # split the file name off the image URL
        parts = url.rsplit('/', maxsplit=1)
        file_name = os.path.join(base_path, title, parts[1])

        if os.path.exists(file_name):
            pass
        else:
            response = requests.get(url=url, headers={'Referer': 'http://www.mzitu.com/net/'})
            print('Downloading', title, '......')
            with open(file_name,'wb') as f:
                f.write(response.content)
            print('Download finished.')
        raise DropItem()
pipelines.py
ITEM_PIPELINES = {
   'mzitu.pipelines.MzituPipeline': 300,
}

# deduplication and crawl depth
DEPTH_LIMIT = 3
DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter'
DUPEFILTER_DEBUG = False
settings.py
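
With the spider, item, pipeline and settings in place, the crawl is started from the project directory with scrapy crawl meizitu (optionally with --nolog); images are saved under F:\mzitu, one folder per photo set.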

2.5 Other settings

# -*- coding: utf-8 -*-

# Scrapy settings for step8_king project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

# 1. Bot name
BOT_NAME = 'step8_king'

# 2. Spider module paths
SPIDER_MODULES = ['step8_king.spiders']
NEWSPIDER_MODULE = 'step8_king.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# 3. Client User-Agent request header
# USER_AGENT = 'step8_king (+http://www.yourdomain.com)'

# Obey robots.txt rules
# 4. Whether to obey robots.txt (set to False to ignore it)
# ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# 5. Number of concurrent requests
# CONCURRENT_REQUESTS = 4

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# 6. Download delay in seconds
# DOWNLOAD_DELAY = 2


# The download delay setting will honor only one of:
# 7. Concurrent requests per single domain; the download delay is also applied per domain
# CONCURRENT_REQUESTS_PER_DOMAIN = 2
# Concurrent requests per single IP; if set, CONCURRENT_REQUESTS_PER_DOMAIN is ignored and the download delay is applied per IP
# CONCURRENT_REQUESTS_PER_IP = 3

# Disable cookies (enabled by default)
# 8. Whether cookies are enabled (handled through a cookiejar)
# COOKIES_ENABLED = True
# COOKIES_DEBUG = True

# Disable Telnet Console (enabled by default)
# 9. The Telnet console can be used to inspect and control the running crawler
#    connect with telnet ip port, then issue commands
# TELNETCONSOLE_ENABLED = True
# TELNETCONSOLE_HOST = '127.0.0.1'
# TELNETCONSOLE_PORT = [6023,]


# 10. Default request headers
# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#     'Accept-Language': 'en',
# }


# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
# 11. Item pipelines for processing scraped items
# ITEM_PIPELINES = {
#    'step8_king.pipelines.JsonPipeline': 700,
#    'step8_king.pipelines.FilePipeline': 500,
# }



# 12. Custom extensions, invoked via signals
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#     # 'step8_king.extensions.MyExtension': 500,
# }


# 13. Maximum crawl depth; the current depth is available via request.meta; 0 means unlimited
# DEPTH_LIMIT = 3

# 14. Crawl order: 0 = depth-first, LIFO (default); 1 = breadth-first, FIFO

# last in, first out: depth-first
# DEPTH_PRIORITY = 0
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue'
# first in, first out: breadth-first

# DEPTH_PRIORITY = 1
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'

# 15. Scheduler queue
# SCHEDULER = 'scrapy.core.scheduler.Scheduler'
# from scrapy.core.scheduler import Scheduler


# 16. URL deduplication
# DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'


# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html

"""
17. 自動限速算法
    from scrapy.contrib.throttle import AutoThrottle
    自動限速設置
    1. 獲取最小延遲 DOWNLOAD_DELAY
    2. 獲取最大延遲 AUTOTHROTTLE_MAX_DELAY
    3. 設置初始下載延遲 AUTOTHROTTLE_START_DELAY
    4. 當請求下載完成後,獲取其"鏈接"時間 latency,即:請求鏈接到接受到響應頭之間的時間
    5. 用於計算的... AUTOTHROTTLE_TARGET_CONCURRENCY
    target_delay = latency / self.target_concurrency
    new_delay = (slot.delay + target_delay) / 2.0 # 表示上一次的延遲時間
    new_delay = max(target_delay, new_delay)
    new_delay = min(max(self.mindelay, new_delay), self.maxdelay)
    slot.delay = new_delay
"""

# enable AutoThrottle
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 10
# The average number of requests Scrapy should be sending in parallel to each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0

# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = True

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings


"""
18. 啓用緩存
    目的用於將已經發送的請求或相應緩存下來,以便之後使用
    
    from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
    from scrapy.extensions.httpcache import DummyPolicy
    from scrapy.extensions.httpcache import FilesystemCacheStorage
"""
# whether to enable the HTTP cache
# HTTPCACHE_ENABLED = True

# cache policy: cache every request; repeat requests are served straight from the cache
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
# cache policy: cache according to HTTP response headers such as Cache-Control and Last-Modified
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"

# cache expiration time in seconds
# HTTPCACHE_EXPIRATION_SECS = 0

# cache directory
# HTTPCACHE_DIR = 'httpcache'

# HTTP status codes that are never cached
# HTTPCACHE_IGNORE_HTTP_CODES = []

# cache storage backend
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'


"""
19. 代理,須要在環境變量中設置
    from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware
    
    方式一:使用默認
        os.environ
        {
            http_proxy:http://root:woshiniba@192.168.11.11:9999/
            https_proxy:http://192.168.11.11:9999/
        }
    方式二:使用自定義下載中間件
    
    def to_bytes(text, encoding=None, errors='strict'):
        if isinstance(text, bytes):
            return text
        if not isinstance(text, six.string_types):
            raise TypeError('to_bytes must receive a unicode, str or bytes '
                            'object, got %s' % type(text).__name__)
        if encoding is None:
            encoding = 'utf-8'
        return text.encode(encoding, errors)
        
    class ProxyMiddleware(object):
        def process_request(self, request, spider):
            PROXIES = [
                {'ip_port': '111.11.228.75:80', 'user_pass': ''},
                {'ip_port': '120.198.243.22:80', 'user_pass': ''},
                {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
                {'ip_port': '101.71.27.120:80', 'user_pass': ''},
                {'ip_port': '122.96.59.104:80', 'user_pass': ''},
                {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
            ]
            proxy = random.choice(PROXIES)
            if proxy['user_pass'] is not None:
                request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
                encoded_user_pass = base64.encodestring(to_bytes(proxy['user_pass']))
                request.headers['Proxy-Authorization'] = to_bytes('Basic ' + encoded_user_pass)
                print "**************ProxyMiddleware have pass************" + proxy['ip_port']
            else:
                print "**************ProxyMiddleware no pass************" + proxy['ip_port']
                request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
    
    DOWNLOADER_MIDDLEWARES = {
       'step8_king.middlewares.ProxyMiddleware': 500,
    }
    
"""

"""
20. Https訪問
    Https訪問時有兩種狀況:
    1. 要爬取網站使用的可信任證書(默認支持)
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"
        
    2. 要爬取網站使用的自定義證書
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"
        
        # https.py
        from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
        from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)
        
        class MySSLFactory(ScrapyClientContextFactory):
            def getCertificateOptions(self):
                from OpenSSL import crypto
                v1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())
                v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())
                return CertificateOptions(
                    privateKey=v1,  # pKey對象
                    certificate=v2,  # X509對象
                    verify=False,
                    method=getattr(self, 'method', getattr(self, '_ssl_method', None))
                )
    其餘:
        相關類
            scrapy.core.downloader.handlers.http.HttpDownloadHandler
            scrapy.core.downloader.webclient.ScrapyHTTPClientFactory
            scrapy.core.downloader.contextfactory.ScrapyClientContextFactory
        相關配置
            DOWNLOADER_HTTPCLIENTFACTORY
            DOWNLOADER_CLIENTCONTEXTFACTORY

"""



"""
21. 爬蟲中間件
    class SpiderMiddleware(object):

        def process_spider_input(self,response, spider):
            '''
            下載完成,執行,而後交給parse處理
            :param response: 
            :param spider: 
            :return: 
            '''
            pass
    
        def process_spider_output(self,response, result, spider):
            '''
            spider處理完成,返回時調用
            :param response:
            :param result:
            :param spider:
            :return: 必須返回包含 Request 或 Item 對象的可迭代對象(iterable)
            '''
            return result
    
        def process_spider_exception(self,response, exception, spider):
            '''
            異常調用
            :param response:
            :param exception:
            :param spider:
            :return: None,繼續交給後續中間件處理異常;含 Response 或 Item 的可迭代對象(iterable),交給調度器或pipeline
            '''
            return None
    
    
        def process_start_requests(self,start_requests, spider):
            '''
            爬蟲啓動時調用
            :param start_requests:
            :param spider:
            :return: 包含 Request 對象的可迭代對象
            '''
            return start_requests
    
    內置爬蟲中間件:
        'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
        'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
        'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
        'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
        'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,

"""
# from scrapy.contrib.spidermiddleware.referer import RefererMiddleware
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
   # 'step8_king.middlewares.SpiderMiddleware': 543,
}


"""
22. 下載中間件
    class DownMiddleware1(object):
        def process_request(self, request, spider):
            '''
            請求須要被下載時,通過全部下載器中間件的process_request調用
            :param request:
            :param spider:
            :return:
                None,繼續後續中間件去下載;
                Response對象,中止process_request的執行,開始執行process_response
                Request對象,中止中間件的執行,將Request從新調度器
                raise IgnoreRequest異常,中止process_request的執行,開始執行process_exception
            '''
            pass
    
    
    
        def process_response(self, request, response, spider):
            '''
            spider處理完成,返回時調用
            :param response:
            :param result:
            :param spider:
            :return:
                Response 對象:轉交給其餘中間件process_response
                Request 對象:中止中間件,request會被從新調度下載
                raise IgnoreRequest 異常:調用Request.errback
            '''
            print('response1')
            return response
    
        def process_exception(self, request, exception, spider):
            '''
            當下載處理器(download handler)或 process_request() (下載中間件)拋出異常
            :param response:
            :param exception:
            :param spider:
            :return:
                None:繼續交給後續中間件處理異常;
                Response對象:中止後續process_exception方法
                Request對象:中止中間件,request將會被從新調用下載
            '''
            return None

    
    默認下載中間件
    {
        'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
        'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
        'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
        'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
        'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
        'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
        'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
        'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
        'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
        'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
        'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
        'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
        'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
        'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
    }

"""
# from scrapy.contrib.downloadermiddleware.httpauth import HttpAuthMiddleware
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#    'step8_king.middlewares.DownMiddleware1': 100,
#    'step8_king.middlewares.DownMiddleware2': 500,
# }

settings.py: annotated configuration reference