Scrapy Learning 22 - Extension Development

Developing Scrapy extensions
Definition
  The extension framework provides a mechanism for plugging your own custom functionality into Scrapy.
  Extensions are just regular classes; they are instantiated and initialized when Scrapy starts.
 
Note
  In fact, custom extensions, spider middlewares, and downloader middlewares are all extensions.
  Spider middlewares, downloader middlewares, and pipelines each have their own manager, and all of these managers inherit from the extension manager.
 
Extension settings
  Extensions use the Scrapy settings to manage their own settings, just like any other Scrapy code.
  It is customary for extensions to prefix their settings with their own name, to avoid collisions with existing (and future) extensions.
    For example, an extension that handles Google Sitemaps could use settings such as GOOGLESITEMAP_ENABLED and GOOGLESITEMAP_DEPTH, read back as shown in the sketch below.
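  A minimal sketch of how such an extension might read its prefixed settings in from_crawler (the GoogleSitemapExtension class and the GOOGLESITEMAP_* names are hypothetical, used only to illustrate the naming convention):
from scrapy.exceptions import NotConfigured

class GoogleSitemapExtension(object):
    """Hypothetical extension that reads its own prefixed settings."""

    def __init__(self, depth):
        self.depth = depth

    @classmethod
    def from_crawler(cls, crawler):
        # disable the extension unless its prefixed flag is set
        if not crawler.settings.getbool('GOOGLESITEMAP_ENABLED'):
            raise NotConfigured
        # read the remaining prefixed settings, with a default
        depth = crawler.settings.getint('GOOGLESITEMAP_DEPTH', 3)
        return cls(depth)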
 
Loading and activating extensions
  Extensions are loaded and activated when the extension class is instantiated. Therefore, all extension initialization code must be performed in the class constructor (__init__).
  To make an extension available, add it to the EXTENSIONS setting in your Scrapy settings. In EXTENSIONS, each extension is represented by a string: the full Python path to the extension class.
   For example:
      EXTENSIONS = {
          'scrapy.contrib.corestats.CoreStats': 500,
          'scrapy.telnet.TelnetConsole': 500,
      }
  As you can see, the EXTENSIONS setting is a dict in which the keys are the extension paths and the values are the orders, which define the order in which extensions are loaded.
     Extension order is not as important as middleware order, and extensions are generally unrelated to each other; the loading order does not matter because extensions do not depend on one another.
 
Disabling an extension
  To disable an extension, set its order to None in the EXTENSIONS setting:
EXTENSIONS = {
    'scrapy.contrib.corestats.CoreStats': None,
}

 

How to implement your extension
  Implementing your own extension is easy. Each extension is a single Python class which does not need to implement any particular method.
  The main entry point for a Scrapy extension (this also includes middlewares and pipelines) is the from_crawler class method, which receives a Crawler instance, the main object controlling the Scrapy crawler.
     Through that object you can access settings, signals, and stats, and control the crawler's behaviour, if your extension needs to.
  Typically, extensions connect to signals and perform the tasks they trigger.
  Finally, if the from_crawler method raises the NotConfigured exception, the extension will be disabled. Otherwise, the extension will be enabled.
 
Sample extension
from scrapy import signals
from scrapy.exceptions import NotConfigured

class SpiderOpenCloseLogging(object):

    def __init__(self, item_count):
        self.item_count = item_count
        self.items_scraped = 0

    @classmethod
    def from_crawler(cls, crawler):
        # first check if the extension should be enabled and raise
        # NotConfigured otherwise
        if not crawler.settings.getbool('MYEXT_ENABLED'):
            raise NotConfigured

        # get the number of items from settings
        item_count = crawler.settings.getint('MYEXT_ITEMCOUNT', 1000)

        # instantiate the extension object
        ext = cls(item_count)

        # connect the extension object to signals
        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        crawler.signals.connect(ext.item_scraped, signal=signals.item_scraped)

        # return the extension object
        return ext

    def spider_opened(self, spider):
        spider.log("opened spider %s" % spider.name)

    def spider_closed(self, spider):
        spider.log("closed spider %s" % spider.name)

    def item_scraped(self, item, spider):
        self.items_scraped += 1
        if self.items_scraped % self.item_count == 0:
            spider.log("scraped %d items" % self.items_scraped)

 

Overview of the built-in extensions
Log Stats extension: logs basic statistics, such as crawled pages and scraped items.
Core Stats extension: enables the collection of core stats, provided the stats collection is enabled.
Telnet console extension: provides a telnet console for getting into a Python console inside the currently running Scrapy process.
Memory usage extension: monitors the memory used by the Scrapy process; it sends a notification e-mail when usage exceeds a given value and closes the spider when it exceeds another given value.
Memory debugger extension: used for debugging memory usage; it collects information about objects not collected by the Python garbage collector and objects that should have been destroyed but are still alive.
Close spider extension: closes a spider automatically when certain conditions are met, using a specific closing reason for each condition (see the settings sketch after this list).
StatsMailer extension: a simple extension that can be used to send a notification e-mail every time a domain has finished scraping, including the Scrapy stats collected.
Debugging extensions: dump information about the spider process when a SIGQUIT or SIGUSR2 signal is received.
Debugger extension: invokes a Python debugger inside a running Scrapy process when a SIGUSR2 signal is received; after the debugger exits, the Scrapy process continues running normally.
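
  The close spider extension, for example, is driven entirely by settings; a minimal sketch, assuming the CLOSESPIDER_* setting names documented for that extension (the values are illustrative only):
# settings.py (illustrative values)
CLOSESPIDER_TIMEOUT = 3600      # close the spider after running for an hour
CLOSESPIDER_ITEMCOUNT = 5000    # close the spider after scraping 5000 items
CLOSESPIDER_PAGECOUNT = 10000   # close the spider after crawling 10000 responses
CLOSESPIDER_ERRORCOUNT = 50     # close the spider after 50 errors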

 

Source code of the built-in Core Stats extension
"""
Extension for collecting core stats like items scraped and start/finish times
"""
import datetime

from scrapy import signals

class CoreStats(object):

    def __init__(self, stats):
        self.stats = stats

    @classmethod
    def from_crawler(cls, crawler):
        o = cls(crawler.stats)
        crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(o.spider_closed, signal=signals.spider_closed)
        crawler.signals.connect(o.item_scraped, signal=signals.item_scraped)
        crawler.signals.connect(o.item_dropped, signal=signals.item_dropped)
        crawler.signals.connect(o.response_received, signal=signals.response_received)
        return o

    def spider_opened(self, spider):
        self.stats.set_value('start_time', datetime.datetime.utcnow(), spider=spider)

    def spider_closed(self, spider, reason):
        self.stats.set_value('finish_time', datetime.datetime.utcnow(), spider=spider)
        self.stats.set_value('finish_reason', reason, spider=spider)

    def item_scraped(self, item, spider):
        self.stats.inc_value('item_scraped_count', spider=spider)

    def response_received(self, spider):
        self.stats.inc_value('response_received_count', spider=spider)

    def item_dropped(self, item, spider, exception):
        reason = exception.__class__.__name__
        self.stats.inc_value('item_dropped_count', spider=spider)
        self.stats.inc_value('item_dropped_reasons_count/%s' % reason, spider=spider)
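
  The values collected above end up in the crawler's stats collector, so they can be read back once crawling is done; a minimal sketch using the spider's closed hook (the spider name and URL are placeholders):
import scrapy

class StatsAwareSpider(scrapy.Spider):
    name = "stats_aware"                  # placeholder name
    start_urls = ["http://example.com"]   # placeholder URL

    def closed(self, reason):
        # self.crawler.stats is the same stats collector CoreStats writes to
        stats = self.crawler.stats
        self.logger.info("items scraped: %s", stats.get_value('item_scraped_count', 0))
        self.logger.info("responses received: %s", stats.get_value('response_received_count', 0))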

 

Source code of the built-in Memory usage extension
"""
MemoryUsage extension

See documentation in docs/topics/extensions.rst
"""
import sys
import socket
import logging
from pprint import pformat
from importlib import import_module

from twisted.internet import task

from scrapy import signals
from scrapy.exceptions import NotConfigured
from scrapy.mail import MailSender
from scrapy.utils.engine import get_engine_status

logger = logging.getLogger(__name__)


class MemoryUsage(object):

    def __init__(self, crawler):
        if not crawler.settings.getbool('MEMUSAGE_ENABLED'):
            raise NotConfigured
        try:
            # stdlib's resource module is only available on unix platforms.
            self.resource = import_module('resource')
        except ImportError:
            raise NotConfigured

        self.crawler = crawler
        self.warned = False
        self.notify_mails = crawler.settings.getlist('MEMUSAGE_NOTIFY_MAIL')
        self.limit = crawler.settings.getint('MEMUSAGE_LIMIT_MB')*1024*1024
        self.warning = crawler.settings.getint('MEMUSAGE_WARNING_MB')*1024*1024
        self.check_interval = crawler.settings.getfloat('MEMUSAGE_CHECK_INTERVAL_SECONDS')
        self.mail = MailSender.from_settings(crawler.settings)
        crawler.signals.connect(self.engine_started, signal=signals.engine_started)
        crawler.signals.connect(self.engine_stopped, signal=signals.engine_stopped)

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def get_virtual_size(self):
        size = self.resource.getrusage(self.resource.RUSAGE_SELF).ru_maxrss
        if sys.platform != 'darwin':
            # on Mac OS X ru_maxrss is in bytes, on Linux it is in KB
            size *= 1024
        return size

    def engine_started(self):
        self.crawler.stats.set_value('memusage/startup', self.get_virtual_size())
        self.tasks = []
        tsk = task.LoopingCall(self.update)
        self.tasks.append(tsk)
        tsk.start(self.check_interval, now=True)
        if self.limit:
            tsk = task.LoopingCall(self._check_limit)
            self.tasks.append(tsk)
            tsk.start(self.check_interval, now=True)
        if self.warning:
            tsk = task.LoopingCall(self._check_warning)
            self.tasks.append(tsk)
            tsk.start(self.check_interval, now=True)

    def engine_stopped(self):
        for tsk in self.tasks:
            if tsk.running:
                tsk.stop()

    def update(self):
        self.crawler.stats.max_value('memusage/max', self.get_virtual_size())

    def _check_limit(self):
        if self.get_virtual_size() > self.limit:
            self.crawler.stats.set_value('memusage/limit_reached', 1)
            mem = self.limit/1024/1024
            logger.error("Memory usage exceeded %(memusage)dM. Shutting down Scrapy...",
                        {'memusage': mem}, extra={'crawler': self.crawler})
            if self.notify_mails:
                subj = "%s terminated: memory usage exceeded %dM at %s" % \
                        (self.crawler.settings['BOT_NAME'], mem, socket.gethostname())
                self._send_report(self.notify_mails, subj)
                self.crawler.stats.set_value('memusage/limit_notified', 1)

            open_spiders = self.crawler.engine.open_spiders
            if open_spiders:
                for spider in open_spiders:
                    self.crawler.engine.close_spider(spider, 'memusage_exceeded')
            else:
                self.crawler.stop()

    def _check_warning(self):
        if self.warned: # warn only once
            return
        if self.get_virtual_size() > self.warning:
            self.crawler.stats.set_value('memusage/warning_reached', 1)
            mem = self.warning/1024/1024
            logger.warning("Memory usage reached %(memusage)dM",
                        {'memusage': mem}, extra={'crawler': self.crawler})
            if self.notify_mails:
                subj = "%s warning: memory usage reached %dM at %s" % \
                        (self.crawler.settings['BOT_NAME'], mem, socket.gethostname())
                self._send_report(self.notify_mails, subj)
                self.crawler.stats.set_value('memusage/warning_notified', 1)
            self.warned = True

    def _send_report(self, rcpts, subject):
        """send notification mail with some additional useful info"""
        stats = self.crawler.stats
        s = "Memory usage at engine startup : %dM\r\n" % (stats.get_value('memusage/startup')/1024/1024)
        s += "Maximum memory usage           : %dM\r\n" % (stats.get_value('memusage/max')/1024/1024)
        s += "Current memory usage           : %dM\r\n" % (self.get_virtual_size()/1024/1024)

        s += "ENGINE STATUS ------------------------------------------------------- \r\n"
        s += "\r\n"
        s += pformat(get_engine_status(self.crawler.engine))
        s += "\r\n"
        self.mail.send(rcpts, subject, s)
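
  The settings read in __init__ above drive the whole extension; a minimal settings sketch (the values are illustrative, the names are the ones the code reads):
# settings.py (illustrative values)
MEMUSAGE_ENABLED = True
MEMUSAGE_LIMIT_MB = 2048                     # shut the crawler down above 2 GB
MEMUSAGE_WARNING_MB = 1536                   # warn (and optionally mail) above 1.5 GB
MEMUSAGE_NOTIFY_MAIL = ['ops@example.com']   # recipients of the notification mails
MEMUSAGE_CHECK_INTERVAL_SECONDS = 60.0       # how often the LoopingCall checks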