Scrapy Log Levels and Passing Data Between Requests

 

Log Levels

- Log messages: when you run a spider with the command scrapy crawl <spider_file>, the output printed to the terminal is the log.

 

- Types of log messages:

  - ERROR: general errors;

  - WARNING: warnings;

  - INFO: general information;

  - DEBUG: debugging information;

 

- Controlling which log messages are output:

  - Add the following to the settings configuration file:

    - LOG_LEVEL = '<level>' sets the minimum severity of messages that get logged.

    - LOG_FILE = 'log.txt' writes the log messages to the specified file for storage.
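
For example, in settings.py (a minimal sketch; the level and file name here are arbitrary choices):

LOG_LEVEL = 'ERROR'    # only ERROR and CRITICAL messages are emitted
LOG_FILE = 'log.txt'   # log output is stored in log.txt instead of the terminal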

 

Passing Data Between Requests

- In some cases, the data we want to crawl is not all on the same page. For example, when crawling a movie site, the movie's name and score are on the first-level page, while the other details are on its second-level sub-page. This is where passing data between requests comes in.

 

- Pass data by adding the meta parameter to scrapy.Request(); the dict passed as meta is available in the callback via response.meta.

 

- Case study: crawl the movie site www.id97.com, extracting the movie name, genre, and score from the first-level pages, and the release date, director, and running time from the second-level pages.

  - Spider file:

# -*- coding: utf-8 -*-
import scrapy
from moviePro.items import MovieproItem

class MovieSpider(scrapy.Spider):
    name = 'movie'
    allowed_domains = ['www.id97.com']
    start_urls = ['http://www.id97.com/']

    def parse(self, response):
        div_list = response.xpath('//div[@class="col-xs-1-5 movie-item"]')

        for div in div_list:
            item = MovieproItem()
            item['name'] = div.xpath('.//h1/a/text()').extract_first()
            item['score'] = div.xpath('.//h1/em/text()').extract_first()
            # xpath('string(.)') extracts the text of the current node and all of
            # its descendants; (.) denotes the current node
            item['kind'] = div.xpath('.//div[@class="otherinfo"]').xpath('string(.)').extract_first()
            item['detail_url'] = div.xpath('./div/a/@href').extract_first()
            # request the second-level detail page; pass the item to the callback
            # via the Request's meta parameter
            yield scrapy.Request(url=item['detail_url'], callback=self.parse_detail, meta={'item': item})

    def parse_detail(self, response):
        # retrieve the item from the response
        item = response.meta['item']
        item['actor'] = response.xpath('//div[@class="row"]//table/tr[1]/a/text()').extract_first()
        item['time'] = response.xpath('//div[@class="row"]//table/tr[7]/td[2]/text()').extract_first()
        item['long'] = response.xpath('//div[@class="row"]//table/tr[8]/td[2]/text()').extract_first()
        # submit the item to the pipeline
        yield item

 

  - Items file:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class MovieproItem(scrapy.Item):
    # define the fields for your item here like:
    name = scrapy.Field()
    score = scrapy.Field()
    time = scrapy.Field()
    long = scrapy.Field()
    actor = scrapy.Field()
    kind = scrapy.Field()
    detail_url = scrapy.Field()
 

 

  - Pipeline file:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import json

class MovieproPipeline(object):
    def __init__(self):
        # open the output file once, when the pipeline is created
        self.fp = open('data.txt', 'w')

    def process_item(self, item, spider):
        # serialize each item to the file as a JSON object
        dic = dict(item)
        print(dic)
        json.dump(dic, self.fp, ensure_ascii=False)
        return item

    def close_spider(self, spider):
        self.fp.close()
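
For the pipeline to take effect, remember to register it in settings.py, as the boilerplate comment above notes (the module path below assumes the default layout of a project named moviePro, matching the spider's import):

ITEM_PIPELINES = {
    'moviePro.pipelines.MovieproPipeline': 300,
}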

 

Improving Scrapy's Crawl Efficiency

- Increase concurrency:

  - By default Scrapy performs at most 16 concurrent requests; this can be changed in the settings file:

    CONCURRENT_REQUESTS = 100

    - This raises the concurrency to 100.

 

- Lower the log level:

  - Running Scrapy produces a large volume of log output; to reduce CPU usage, set the log level to INFO or ERROR in settings.py:

    LOG_LEVEL = 'INFO'

 

- Disable cookies:

  - Unless cookies are genuinely needed, disable them during the crawl to reduce CPU usage and improve efficiency. In settings.py:

    COOKIES_ENABLED = False

 

- Disable retries:

  - Re-requesting failed HTTP requests (retrying) slows the crawl down, so retries can be disabled. In settings.py:

    RETRY_ENABLED = False

 

- Reduce the download timeout:

  - When crawling very slow links, a shorter download timeout lets stalled requests be abandoned quickly, improving efficiency. In settings.py:

    DOWNLOAD_TIMEOUT = 10
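
Taken together, a settings.py sketch with all four tweaks (the values are the ones used above; tune them per site):

CONCURRENT_REQUESTS = 100   # raise concurrency from the default 16
LOG_LEVEL = 'INFO'          # reduce log volume ('ERROR' for even less)
COOKIES_ENABLED = False     # skip cookie handling
RETRY_ENABLED = False       # do not retry failed requests
DOWNLOAD_TIMEOUT = 10       # give up on a download after 10 seconds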

 

- Test case:

# -*- coding: utf-8 -*-
import scrapy
from ..items import PicproItem
# 提高spider的爬取效率測試
# 爬取4k高清壁紙網站的圖片


class PicSpider(scrapy.Spider):
    name = 'pic'
    # allowed_domains = ['www.pic.com']
    start_urls = ['http://pic.netbian.com/']

    def parse(self, response):
        li_list = response.xpath('//div[@class="slist"]/ul/li')
        print(li_list)
        for li in li_list:
            img_url ="http://pic.netbian.com/"+li.xpath('./a/span/img/@src').extract_first()
            # print(66,img_url)
            title = li.xpath('./a/span/img/@alt').extract_first()
            print("title:", title)
            item = PicproItem()
            item["name"] = title

            yield scrapy.Request(url=img_url, callback=self.getImgData, meta={"item": item})


    def getImgData(self, response):
        item = response.meta['item']
        # the binary image data is carried in response.body
        item['img_data'] = response.body

        yield item
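
The spider imports PicproItem, whose definition is not shown; a minimal items.py sketch inferred from the two fields used above:

# -*- coding: utf-8 -*-
import scrapy

class PicproItem(scrapy.Item):
    name = scrapy.Field()       # image title, taken from the <img> alt attribute
    img_data = scrapy.Field()   # raw image bytes, filled in from response.body

  - Pipeline file: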

import os

class PicproPipeline(object):
    def open_spider(self, spider):
        # create the output directory once, when the spider starts
        if not os.path.exists('picLib'):
            os.mkdir('./picLib')

    def process_item(self, item, spider):
        # write the binary image data to a .jpg file named after the title
        imgPath = './picLib/' + item['name'] + ".jpg"
        with open(imgPath, 'wb') as fp:
            fp.write(item['img_data'])
            print(imgPath + ' downloaded successfully!')
        return item

Configuration file:

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36'


# Obey robots.txt rules
ROBOTSTXT_OBEY = False

ITEM_PIPELINES = {
   'picPro.pipelines.PicproPipeline': 300,
}


# log only concrete error messages
LOG_LEVEL = "ERROR"

# improve crawl efficiency
CONCURRENT_REQUESTS = 10
COOKIES_ENABLED = False
RETRY_ENABLED = False
DOWNLOAD_TIMEOUT = 5
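
With this configuration, running scrapy crawl pic downloads the wallpapers with 10 concurrent requests, no cookies or retries, and only ERROR-level log messages in the terminal.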