Scrapy-redis distributed components

The difference between Scrapy and scrapy-redis

Scrapy is a general-purpose crawling framework, but it does not support distributed crawling on its own. Scrapy-redis provides a set of Redis-based components (components only) to make distributed crawling with Scrapy easier.

pip install scrapy-redis

Scrapy-redis provides the following four components (using them means the four corresponding modules all need to be modified accordingly; a typical settings sketch follows the list):

  • Scheduler
  • Duplication Filter
  • Item Pipeline
  • Base Spider
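
A typical settings.py sketch that switches Scrapy over to these components (the Redis address below is an assumed example):

# settings.py -- enable the scrapy-redis components
SCHEDULER = "scrapy_redis.scheduler.Scheduler"              # replaces Scrapy's own scheduler
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"  # Redis-set based deduplication
SCHEDULER_PERSIST = True                                    # keep the request queue and dupefilter between runs

ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300,            # push scraped items into Redis
}

REDIS_HOST = 'localhost'   # assumed location of the shared redis-server
REDIS_PORT = 6379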

The scrapy-redis architecture

(architecture diagram)

As shown in the figure above, scrapy-redis adds Redis on top of the Scrapy architecture and, building on Redis features, extends the following components:

Scheduler:

Scrapy adapts Python's built-in collections.deque (a double-ended queue) into its own Scrapy queue (https://github.com/scrapy/que...), but multiple Scrapy spiders cannot share one queue of pending requests; in other words, Scrapy itself does not support distributed crawling. scrapy-redis solves this by replacing the Scrapy queue with a Redis database (that is, a Redis-backed queue): the requests to be crawled are kept on a single redis-server, so multiple spiders can read from the same database.

The part of Scrapy directly related to the pending-request queue is the scheduler (Scheduler). It enqueues new requests (adds them to the Scrapy queue), pops the next request to crawl (takes it from the Scrapy queue), and so on. It organizes the pending queues into a dict keyed by priority, for example:

{
    priority 0 : queue 0
    priority 1 : queue 1
    priority 2 : queue 2
}

The priority of each request then decides which queue it enters; when dequeuing, queues with smaller priority values are served first (a simplified sketch of this idea follows). To manage this fairly sophisticated dict of queues the scheduler has to provide a series of methods, and the original scheduler can no longer be used as-is, so the Scrapy-redis scheduler component is used instead.
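
In plain Python terms the idea looks roughly like this (a simplified model of the behaviour described above, not Scrapy's actual implementation):

from collections import deque

# Simplified model of the scheduler's priority -> queue mapping.
queues = {}

def enqueue(request, priority):
    queues.setdefault(priority, deque()).append(request)

def dequeue():
    # Queues with smaller priority values are served first.
    for priority in sorted(queues):
        if queues[priority]:
            return queues[priority].popleft()
    return None

enqueue('http://example.com/a', 1)
enqueue('http://example.com/b', 0)
print(dequeue())  # 'http://example.com/b' -- priority 0 comes out first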

Duplication Filter

Scrapy implements request deduplication with a set: the fingerprint of every request already sent is put into a set, and the fingerprint of the next request is checked against that set. If the fingerprint is already in the set, the request has been sent before; if not, processing continues. The core duplicate check is implemented like this:

def request_seen(self, request):
    # self.fingerprints is the set of fingerprints seen so far
    fp = self.request_fingerprint(request)

    # this is the core duplicate check
    if fp in self.fingerprints:
        return True
    self.fingerprints.add(fp)
    if self.file:
        self.file.write(fp + os.linesep)

In scrapy-redis, deduplication is handled by the Duplication Filter component, which neatly exploits the fact that a Redis set holds no duplicates. The scrapy-redis scheduler receives requests from the engine, stores each request's fingerprint in a Redis set to check whether it is a duplicate, and pushes the non-duplicate requests into the Redis request queue.

When the engine asks for a request (one produced by a spider), the scheduler pops a request from the Redis request queue according to priority and returns it to the engine, which hands it to the spider for processing.
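
A minimal sketch of the set-based check that scrapy-redis relies on (it assumes a local redis-server and uses a hypothetical key and a simplified fingerprint): SADD returns 1 when the fingerprint is new and 0 when it was already present.

import hashlib

import redis  # redis-py client, the same library scrapy-redis builds on

server = redis.StrictRedis(host='localhost', port=6379)  # assumed local redis-server

def seen_before(url, key='demo:dupefilter'):
    """Return True if this URL's fingerprint was already in the Redis set."""
    fp = hashlib.sha1(url.encode('utf-8')).hexdigest()  # simplified stand-in for request_fingerprint
    added = server.sadd(key, fp)  # 1 if newly added, 0 if it was already there
    return added == 0

print(seen_before('http://example.com/page'))  # False on the first call
print(seen_before('http://example.com/page'))  # True on the second call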

Item Pipeline:

The engine hands the items scraped by the spider to the Item Pipeline; the scrapy-redis Item Pipeline stores the scraped items in Redis's items queue.

With this modified Item Pipeline it is easy to pull items out of the items queue by key, which makes a distributed items-processing cluster possible; a consumer sketch follows.
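
For example, a separate worker process could drain that list roughly like this (a sketch assuming items were stored by RedisPipeline as JSON strings under the default '<spider>:items' key, with 'myspider' as a hypothetical spider name):

import json

import redis

server = redis.StrictRedis(host='localhost', port=6379)  # assumed redis-server location

def consume_items(key='myspider:items'):
    """Pop serialized items from the Redis list and process them one by one."""
    while True:
        data = server.lpop(key)  # RedisPipeline rpush'es items, so lpop yields FIFO order
        if data is None:
            break  # queue drained
        item = json.loads(data)  # items are serialized with ScrapyJSONEncoder, i.e. JSON
        print(item)  # replace with real post-processing / storage

consume_items()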

Base Spider

Instead of Scrapy's original Spider class, the rewritten RedisSpider is used; it inherits from both Spider and RedisMixin, where RedisMixin is the class that reads urls from redis.

When we write a spider that inherits from RedisSpider, the setup_redis function is called. It connects to the redis database and then registers the following signals (a minimal spider sketch follows the list):

  • One signal fires when the spider is idle: it calls the spider_idle function, which calls schedule_next_request to keep the spider alive, and then raises the DontCloseSpider exception.
  • The other fires when an item is scraped: it calls the item_scraped function, which also calls schedule_next_request to obtain the next request.
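
A minimal usage sketch (assuming the class is importable from scrapy_redis.spiders, as in the released package; the spider name and redis_key below are hypothetical, and the default key would be '<name>:start_urls'):

from scrapy_redis.spiders import RedisSpider


class DemoSpider(RedisSpider):
    """Reads start URLs from the Redis key 'demo:start_urls' whenever it runs out of work."""
    name = 'demo'
    redis_key = 'demo:start_urls'  # optional; defaults to '%(name)s:start_urls'

    def parse(self, response):
        # Extract data and/or yield further requests, as in any Scrapy spider.
        yield {'url': response.url, 'title': response.css('title::text').get()}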

Official site: https://github.com/rolando/sc...

The official scrapy-redis documentation is fairly terse and says nothing about how it works internally, so to fully understand how the distributed crawler runs, you still have to read the scrapy-redis source code.

The body of the scrapy-redis project is still the two libraries redis and scrapy; the project itself does not implement very much. It works like glue, binding the two together. Below we look at what each scrapy-redis source file implements, and finally at how they add up to a distributed crawling system:


connection.py

Responsible for instantiating the redis connection from the configuration in settings. It is called by the dupefilter and the scheduler; in short, anything that touches redis goes through this module.

# This imports the redis module, the redis-python client library used to access
# the redis database from Python. This file implements the redis connection logic;
# the connection helpers defined here are used throughout the other files.

import redis
import six

from scrapy.utils.misc import load_object

DEFAULT_REDIS_CLS = redis.StrictRedis

# Socket timeout, connect timeout and so on can be configured in the settings file
# Sane connection defaults.
DEFAULT_PARAMS = {
    'socket_timeout': 30,
    'socket_connect_timeout': 30,
    'retry_on_timeout': True,
}

# Connecting to redis is much like any other database: it takes an ip address, a port, an optional username/password and an integer database number
# Shortcut maps 'setting name' -> 'parmater name'.
SETTINGS_PARAMS_MAP = {
    'REDIS_URL': 'url',
    'REDIS_HOST': 'host',
    'REDIS_PORT': 'port',
}


def get_redis_from_settings(settings):
    """Returns a redis client instance from given Scrapy settings object.
    This function uses ``get_client`` to instantiate the client and uses
    ``DEFAULT_PARAMS`` global as defaults values for the parameters. You can
    override them using the ``REDIS_PARAMS`` setting.
    Parameters
    ----------
    settings : Settings
        A scrapy settings object. See the supported settings below.
    Returns
    -------
    server
        Redis client instance.
    Other Parameters
    ----------------
    REDIS_URL : str, optional
        Server connection URL.
    REDIS_HOST : str, optional
        Server host.
    REDIS_PORT : str, optional
        Server port.
    REDIS_PARAMS : dict, optional
        Additional client parameters.
    """
    params = DEFAULT_PARAMS.copy()
    params.update(settings.getdict('REDIS_PARAMS'))
    # XXX: Deprecate REDIS_* settings.
    for source, dest in SETTINGS_PARAMS_MAP.items():
        val = settings.get(source)
        if val:
            params[dest] = val

    # Allow ``redis_cls`` to be a path to a class.
    if isinstance(params.get('redis_cls'), six.string_types):
        params['redis_cls'] = load_object(params['redis_cls'])

    # Returns a Redis object from the redis library that can be used directly for data operations
    return get_redis(**params)


# Backwards compatible alias.
from_settings = get_redis_from_settings


def get_redis(**kwargs):
    """Returns a redis client instance.
    Parameters
    ----------
    redis_cls : class, optional
        Defaults to ``redis.StrictRedis``.
    url : str, optional
        If given, ``redis_cls.from_url`` is used to instantiate the class.
    **kwargs
        Extra parameters to be passed to the ``redis_cls`` class.
    Returns
    -------
    server
        Redis client instance.
    """
    redis_cls = kwargs.pop('redis_cls', DEFAULT_REDIS_CLS)
    url = kwargs.pop('url', None)


    if url:
        return redis_cls.from_url(url, **kwargs)
    else:
        return redis_cls(**kwargs)
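
A quick usage sketch of the helpers above, outside of Scrapy (it assumes the package is installed as scrapy_redis and that a redis-server is listening on the example URL):

from scrapy_redis.connection import get_redis

server = get_redis(url='redis://localhost:6379/0')  # uses StrictRedis.from_url under the hood
server.set('demo:key', 'hello')                      # from here on it is a plain redis-py client
print(server.get('demo:key'))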

dupefilter.py

Responsible for request deduplication, implemented rather cleverly with a Redis set. Note, however, that the scheduler does not use the dupefilter key implemented in this module to schedule requests; for that it uses the queues implemented in the queue.py module.

When a request is not a duplicate, it is stored in the queue and popped out when it gets scheduled.

import logging
import time

from scrapy.dupefilters import BaseDupeFilter
from scrapy.utils.request import request_fingerprint

from .connection import get_redis_from_settings


DEFAULT_DUPEFILTER_KEY = "dupefilter:%(timestamp)s"

logger = logging.getLogger(__name__)


# TODO: Rename class to RedisDupeFilter.
class RFPDupeFilter(BaseDupeFilter):
    """Redis-based request duplicates filter.
    This class can also be used with default Scrapy's scheduler.
    """

    logger = logger

    def __init__(self, server, key, debug=False):
        """Initialize the duplicates filter.
        Parameters
        ----------
        server : redis.StrictRedis
            The redis server instance.
        key : str
            Redis key Where to store fingerprints.
        debug : bool, optional
            Whether to log filtered requests.
        """
        self.server = server
        self.key = key
        self.debug = debug
        self.logdupes = True

    @classmethod
    def from_settings(cls, settings):
        """Returns an instance from given settings.
        This uses by default the key ``dupefilter:<timestamp>``. When using the
        ``scrapy_redis.scheduler.Scheduler`` class, this method is not used as
        it needs to pass the spider name in the key.
        Parameters
        ----------
        settings : scrapy.settings.Settings
        Returns
        -------
        RFPDupeFilter
            A RFPDupeFilter instance.
        """
        server = get_redis_from_settings(settings)
        # XXX: This creates one-time key. needed to support to use this
        # class as standalone dupefilter with scrapy's default scheduler
        # if scrapy passes spider on open() method this wouldn't be needed
        # TODO: Use SCRAPY_JOB env as default and fallback to timestamp.
        key = DEFAULT_DUPEFILTER_KEY % {'timestamp': int(time.time())}
        debug = settings.getbool('DUPEFILTER_DEBUG')
        return cls(server, key=key, debug=debug)

    @classmethod
    def from_crawler(cls, crawler):
        """Returns instance from crawler.
        Parameters
        ----------
        crawler : scrapy.crawler.Crawler
        Returns
        -------
        RFPDupeFilter
            Instance of RFPDupeFilter.
        """
        return cls.from_settings(crawler.settings)

    def request_seen(self, request):
        """Returns True if request was already seen.
        Parameters
        ----------
        request : scrapy.http.Request
        Returns
        -------
        bool
        """
        fp = self.request_fingerprint(request)
        # This returns the number of values added, zero if already exists.
        added = self.server.sadd(self.key, fp)
        return added == 0

    def request_fingerprint(self, request):
        """Returns a fingerprint for a given request.
        Parameters
        ----------
        request : scrapy.http.Request
        Returns
        -------
        str
        """
        return request_fingerprint(request)

    def close(self, reason=''):
        """Delete data on close. Called by Scrapy's scheduler.
        Parameters
        ----------
        reason : str, optional
        """
        self.clear()

    def clear(self):
        """Clears fingerprints data."""
        self.server.delete(self.key)

    def log(self, request, spider):
        """Logs given request.
        Parameters
        ----------
        request : scrapy.http.Request
        spider : scrapy.spiders.Spider
        """
        if self.debug:
            msg = "Filtered duplicate request: %(request)s"
            self.logger.debug(msg, {'request': request}, extra={'spider': spider})
        elif self.logdupes:
            msg = ("Filtered duplicate request %(request)s"
                   " - no more duplicates will be shown"
                   " (see DUPEFILTER_DEBUG to show all duplicates)")
            msg = "Filtered duplicate request: %(request)s"
            self.logger.debug(msg, {'request': request}, extra={'spider': spider})
            self.logdupes = False

This file looks fairly involved because it reimplements the request deduplication that Scrapy already provides. When Scrapy runs on a single machine, it only needs to check the in-memory request queue or the persisted request queue (Scrapy's default persistence appears to be a file rather than a database) to decide whether the request url about to be sent has already been requested or is currently scheduled; a local lookup is enough. When running distributed, however, the schedulers on all hosts must consult the same request pool in the same database to decide whether the current request is a duplicate.

In this file, redis-based deduplication is implemented by subclassing BaseDupeFilter and overriding its methods. From the source we can see that scrapy-redis reuses a fingerprint interface from Scrapy itself, request_fingerprint. This interface is quite interesting: according to the Scrapy docs it uses a hash to decide whether two urls are the same (identical urls produce identical hashes), and two urls with the same address and the same GET parameters in a different order still produce the same hash (which is rather neat). So scrapy-redis continues to use the url fingerprint to decide whether a request has already been seen.

Through its redis connection, this class inserts fingerprints into a redis set under one key (the key is the same for every instance of a given spider; redis is a key-value store, so the same key always reaches the same value. Using the spider name plus the dupefilter suffix as the key means that crawler instances on different hosts, as long as they belong to the same spider, all reach the same set, and that set is their shared url dedup pool). If sadd returns 0, the fingerprint already exists in the set (a set holds no duplicate members), so request_seen returns True, i.e. the request is a duplicate; if sadd returns 1, a fingerprint was just added, meaning the request has not been seen before, so request_seen returns False, and the new fingerprint is already stored in the database as a side effect. The dupefilter is used inside the scheduler class: every request is checked for duplication before entering the scheduler, and duplicates skip scheduling and are simply discarded, since crawling them again would just waste resources.
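
A quick illustration of the fingerprint behaviour described above (a sketch; it assumes a Scrapy version that still exposes request_fingerprint in scrapy.utils.request, the same helper imported in dupefilter.py):

from scrapy.http import Request
from scrapy.utils.request import request_fingerprint

# Same URL with the GET parameters in a different order: canonicalization makes
# the fingerprints identical, so the second request is treated as a duplicate.
r1 = Request('http://example.com/list?page=1&sort=asc')
r2 = Request('http://example.com/list?sort=asc&page=1')

print(request_fingerprint(r1) == request_fingerprint(r2))  # expected: True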


picklecompat.py

"""A pickle wrapper module with protocol=-1 by default."""

try:
    import cPickle as pickle  # PY2
except ImportError:
    import pickle


def loads(s):
    return pickle.loads(s)


def dumps(obj):
    return pickle.dumps(obj, protocol=-1)

This file implements the loads and dumps functions; in effect it is a serializer.

Because the redis database cannot store complex objects (keys can only be strings; values can only be strings, lists of strings, sets of strings and hashes), everything we want to store has to be serialized to text first.

What is used here is Python's pickle module, a serialization tool compatible with both py2 and py3. This serializer is mainly used later by the scheduler to store request objects.
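
A tiny round-trip sketch (assuming the package is importable as scrapy_redis):

from scrapy_redis import picklecompat

# Serialize a request-like dict to bytes and restore it again.
obj = {'url': 'http://example.com', 'method': 'GET', 'priority': 0}
data = picklecompat.dumps(obj)           # bytes, suitable as a redis list value
print(picklecompat.loads(data) == obj)   # expected: True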


pipelines.py

This is what enables distributed item processing: it stores items in redis so they can be processed elsewhere. Because the settings need to be read here, the from_crawler() function is used.

from scrapy.utils.misc import load_object
from scrapy.utils.serialize import ScrapyJSONEncoder
from twisted.internet.threads import deferToThread

from . import connection


default_serialize = ScrapyJSONEncoder().encode


class RedisPipeline(object):
    """Pushes serialized item into a redis list/queue"""

    def __init__(self, server,
                 key='%(spider)s:items',
                 serialize_func=default_serialize):
        self.server = server
        self.key = key
        self.serialize = serialize_func

    @classmethod
    def from_settings(cls, settings):
        params = {
            'server': connection.from_settings(settings),
        }
        if settings.get('REDIS_ITEMS_KEY'):
            params['key'] = settings['REDIS_ITEMS_KEY']
        if settings.get('REDIS_ITEMS_SERIALIZER'):
            params['serialize_func'] = load_object(
                settings['REDIS_ITEMS_SERIALIZER']
            )

        return cls(**params)

    @classmethod
    def from_crawler(cls, crawler):
        return cls.from_settings(crawler.settings)

    def process_item(self, item, spider):
        return deferToThread(self._process_item, item, spider)

    def _process_item(self, item, spider):
        key = self.item_key(item, spider)
        data = self.serialize(item)
        self.server.rpush(key, data)
        return item

    def item_key(self, item, spider):
        """Returns redis key based on given spider.
        Override this function to use a different key depending on the item
        and/or spider.
        """
        return self.key % {'spider': spider.name}

The pipelines file implements an item pipeline class of the same kind as a regular Scrapy item pipeline. It reads the REDIS_ITEMS_KEY we configured in settings as the key, serializes each item and stores it under that key in redis (the value is clearly a list, and each of our items becomes one element of that list). This pipeline simply stores the extracted items, mainly so that the data can be processed later at our convenience; the item_key hook can also be overridden to change how items are bucketed.
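
For instance, the item_key hook mentioned in the docstring could be overridden along these lines (a hypothetical sketch; the class name and key pattern are made up for illustration):

from scrapy_redis.pipelines import RedisPipeline


class PerDateRedisPipeline(RedisPipeline):
    """Hypothetical pipeline that shards items into one redis list per date field."""

    def item_key(self, item, spider):
        # e.g. 'myspider:items:2020-01-01' if the item carries a 'date' field
        return '%s:items:%s' % (spider.name, item.get('date', 'unknown'))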


queue.py

This file implements several container classes. They interact heavily with redis and use the serializer we defined above in picklecompat. The containers are largely the same, except that one is a FIFO queue, one is a stack and one is a priority queue. These three containers are instantiated later by the scheduler object to drive request scheduling: using SpiderQueue as the scheduling queue type gives first-in-first-out scheduling of requests, while using SpiderStack gives last-in-first-out.

The SpiderQueue implementation shows that its push function works like the other containers', except that the pushed request is first turned into a dict by Scrapy's request_to_dict helper (a request object is genuinely hard to serialize directly, having both methods and attributes), then serialized to a string with the picklecompat serializer, and finally stored in redis under a specific key (the key is the same for every instance of a given spider). Calling pop simply reads the value (a list) under that key from redis and takes out the element that went in earliest, giving first-in-first-out behaviour. These container classes all serve as the scheduler's request containers. One scheduler is instantiated on each host, paired one-to-one with a spider, so a distributed run has several instances of one spider and several scheduler instances spread across different hosts. But because the schedulers all use the same kind of container, those containers all connect to the same redis server, and they all read and write under a key built from the spider name plus the queue suffix, crawler instances on different hosts share one request scheduling pool, which gives unified scheduling across the distributed crawlers. The queue type itself is selected through a setting, as sketched below.
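
The queue type is chosen via the SCHEDULER_QUEUE_CLASS setting read by the scheduler further down; a sketch of the three options (the priority queue is the default):

# settings.py -- pick exactly one; SpiderPriorityQueue is the scheduler's default
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderPriorityQueue'   # order by request priority
# SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderQueue'         # FIFO
# SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderStack'         # LIFO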

from scrapy.utils.reqser import request_to_dict, request_from_dict

from . import picklecompat


class Base(object):
    """Per-spider queue/stack base class"""

    def __init__(self, server, spider, key, serializer=None):
        """Initialize per-spider redis queue.
        Parameters:
            server -- redis connection
            spider -- spider instance
            key -- key for this queue (e.g. "%(spider)s:queue")
        """
        if serializer is None:
            # Backward compatibility.
            # TODO: deprecate pickle.
            serializer = picklecompat
        if not hasattr(serializer, 'loads'):
            raise TypeError("serializer does not implement 'loads' function: %r"
                            % serializer)
        if not hasattr(serializer, 'dumps'):
            raise TypeError("serializer '%s' does not implement 'dumps' function: %r"
                            % serializer)

        self.server = server
        self.spider = spider
        self.key = key % {'spider': spider.name}
        self.serializer = serializer

    def _encode_request(self, request):
        """Encode a request object"""
        obj = request_to_dict(request, self.spider)
        return self.serializer.dumps(obj)

    def _decode_request(self, encoded_request):
        """Decode an request previously encoded"""
        obj = self.serializer.loads(encoded_request)
        return request_from_dict(obj, self.spider)

    def __len__(self):
        """Return the length of the queue"""
        raise NotImplementedError

    def push(self, request):
        """Push a request"""
        raise NotImplementedError

    def pop(self, timeout=0):
        """Pop a request"""
        raise NotImplementedError

    def clear(self):
        """Clear queue/stack"""
        self.server.delete(self.key)


class SpiderQueue(Base):
    """Per-spider FIFO queue"""

    def __len__(self):
        """Return the length of the queue"""
        return self.server.llen(self.key)

    def push(self, request):
        """Push a request"""
        self.server.lpush(self.key, self._encode_request(request))

    def pop(self, timeout=0):
        """Pop a request"""
        if timeout > 0:
            data = self.server.brpop(self.key, timeout)
            if isinstance(data, tuple):
                data = data[1]
        else:
            data = self.server.rpop(self.key)
        if data:
            return self._decode_request(data)


class SpiderPriorityQueue(Base):
    """Per-spider priority queue abstraction using redis' sorted set"""

    def __len__(self):
        """Return the length of the queue"""
        return self.server.zcard(self.key)

    def push(self, request):
        """Push a request"""
        data = self._encode_request(request)
        score = -request.priority
        # We don't use zadd method as the order of arguments change depending on
        # whether the class is Redis or StrictRedis, and the option of using
        # kwargs only accepts strings, not bytes.
        self.server.execute_command('ZADD', self.key, score, data)

    def pop(self, timeout=0):
        """
        Pop a request
        timeout not support in this queue class
        """
        # use atomic range/remove using multi/exec
        pipe = self.server.pipeline()
        pipe.multi()
        pipe.zrange(self.key, 0, 0).zremrangebyrank(self.key, 0, 0)
        results, count = pipe.execute()
        if results:
            return self._decode_request(results[0])


class SpiderStack(Base):
    """Per-spider stack"""

    def __len__(self):
        """Return the length of the stack"""
        return self.server.llen(self.key)

    def push(self, request):
        """Push a request"""
        self.server.lpush(self.key, self._encode_request(request))

    def pop(self, timeout=0):
        """Pop a request"""
        if timeout > 0:
            data = self.server.blpop(self.key, timeout)
            if isinstance(data, tuple):
                data = data[1]
        else:
            data = self.server.lpop(self.key)

        if data:
            return self._decode_request(data)


__all__ = ['SpiderQueue', 'SpiderPriorityQueue', 'SpiderStack']

scheduler.py

This extension replaces Scrapy's built-in scheduler (it is pointed to by the SCHEDULER setting) and is what actually implements distributed scheduling of the crawler. The data structures it relies on come from those implemented in queue.

The two kinds of distribution scrapy-redis provides, distributed crawling and distributed item processing, are implemented by the scheduler module and the pipelines module respectively; the other modules described here act as supporting modules for those two.

import importlib
import six

from scrapy.utils.misc import load_object

from . import connection


# TODO: add SCRAPY_JOB support.
class Scheduler(object):
    """Redis-based scheduler"""

    def __init__(self, server,
                 persist=False,
                 flush_on_start=False,
                 queue_key='%(spider)s:requests',
                 queue_cls='scrapy_redis.queue.SpiderPriorityQueue',
                 dupefilter_key='%(spider)s:dupefilter',
                 dupefilter_cls='scrapy_redis.dupefilter.RFPDupeFilter',
                 idle_before_close=0,
                 serializer=None):
        """Initialize scheduler.
        Parameters
        ----------
        server : Redis
            The redis server instance.
        persist : bool
            Whether to flush requests when closing. Default is False.
        flush_on_start : bool
            Whether to flush requests on start. Default is False.
        queue_key : str
            Requests queue key.
        queue_cls : str
            Importable path to the queue class.
        dupefilter_key : str
            Duplicates filter key.
        dupefilter_cls : str
            Importable path to the dupefilter class.
        idle_before_close : int
            Timeout before giving up.
        """
        if idle_before_close < 0:
            raise TypeError("idle_before_close cannot be negative")

        self.server = server
        self.persist = persist
        self.flush_on_start = flush_on_start
        self.queue_key = queue_key
        self.queue_cls = queue_cls
        self.dupefilter_cls = dupefilter_cls
        self.dupefilter_key = dupefilter_key
        self.idle_before_close = idle_before_close
        self.serializer = serializer
        self.stats = None

    def __len__(self):
        return len(self.queue)

    @classmethod
    def from_settings(cls, settings):
        kwargs = {
            'persist': settings.getbool('SCHEDULER_PERSIST'),
            'flush_on_start': settings.getbool('SCHEDULER_FLUSH_ON_START'),
            'idle_before_close': settings.getint('SCHEDULER_IDLE_BEFORE_CLOSE'),
        }

        # If these values are missing, it means we want to use the defaults.
        optional = {
            # TODO: Use custom prefixes for this settings to note that are
            # specific to scrapy-redis.
            'queue_key': 'SCHEDULER_QUEUE_KEY',
            'queue_cls': 'SCHEDULER_QUEUE_CLASS',
            'dupefilter_key': 'SCHEDULER_DUPEFILTER_KEY',
            # We use the default setting name to keep compatibility.
            'dupefilter_cls': 'DUPEFILTER_CLASS',
            'serializer': 'SCHEDULER_SERIALIZER',
        }
        for name, setting_name in optional.items():
            val = settings.get(setting_name)
            if val:
                kwargs[name] = val

        # Support serializer as a path to a module.
        if isinstance(kwargs.get('serializer'), six.string_types):
            kwargs['serializer'] = importlib.import_module(kwargs['serializer'])

        server = connection.from_settings(settings)
        # Ensure the connection is working.
        server.ping()

        return cls(server=server, **kwargs)

    @classmethod
    def from_crawler(cls, crawler):
        instance = cls.from_settings(crawler.settings)
        # FIXME: for now, stats are only supported from this constructor
        instance.stats = crawler.stats
        return instance

    def open(self, spider):
        self.spider = spider

        try:
            self.queue = load_object(self.queue_cls)(
                server=self.server,
                spider=spider,
                key=self.queue_key % {'spider': spider.name},
                serializer=self.serializer,
            )
        except TypeError as e:
            raise ValueError("Failed to instantiate queue class '%s': %s",
                             self.queue_cls, e)

        try:
            self.df = load_object(self.dupefilter_cls)(
                server=self.server,
                key=self.dupefilter_key % {'spider': spider.name},
                debug=spider.settings.getbool('DUPEFILTER_DEBUG'),
            )
        except TypeError as e:
            raise ValueError("Failed to instantiate dupefilter class '%s': %s",
                             self.dupefilter_cls, e)

        if self.flush_on_start:
            self.flush()
        # notice if there are requests already in the queue to resume the crawl
        if len(self.queue):
            spider.log("Resuming crawl (%d requests scheduled)" % len(self.queue))

    def close(self, reason):
        if not self.persist:
            self.flush()

    def flush(self):
        self.df.clear()
        self.queue.clear()

    def enqueue_request(self, request):
        if not request.dont_filter and self.df.request_seen(request):
            self.df.log(request, self.spider)
            return False
        if self.stats:
            self.stats.inc_value('scheduler/enqueued/redis', spider=self.spider)
        self.queue.push(request)
        return True

    def next_request(self):
        block_pop_timeout = self.idle_before_close
        request = self.queue.pop(block_pop_timeout)
        if request and self.stats:
            self.stats.inc_value('scheduler/dequeued/redis', spider=self.spider)
        return request

    def has_pending_requests(self):
        return len(self) > 0

This file rewrites the scheduler class to replace the original scheduler in scrapy.core.scheduler. The scheduling logic itself is not changed much; the main change is that redis is used as the storage medium, so all the crawlers can be scheduled in one place. The scheduler schedules each spider's requests. When it is initialized, it reads the queue and dupefilter types from the settings file (usually just the defaults above) and configures the keys they use (usually the spider name plus a queue or dupefilter suffix, so that different instances of the same spider share the same data). Whenever a request is to be scheduled, enqueue_request is called: the scheduler asks the dupefilter whether the url is a duplicate and, if not, adds the request to the queue container (FIFO, LIFO or priority, configurable in settings). When the next request is needed, next_request is called: the scheduler takes a request out through the queue container's interface and hands it to the spider so that the spider can crawl it. A sketch of the per-spider keys this leaves in redis follows.
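
Putting the defaults together, one might inspect the per-spider keys that the scheduler, dupefilter and pipeline leave behind roughly like this (a sketch assuming a spider named 'myspider', the default key patterns and a local redis-server):

import redis

server = redis.StrictRedis(host='localhost', port=6379)

# Default key layout for a spider named 'myspider':
print(server.zcard('myspider:requests'))    # pending requests (sorted set, SpiderPriorityQueue default)
print(server.scard('myspider:dupefilter'))  # fingerprints of requests already seen (set)
print(server.llen('myspider:items'))        # scraped items stored by RedisPipeline (list)
print(server.llen('myspider:start_urls'))   # start urls waiting to be read by the spider (list)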


spider.py

The spider designed here reads the urls to crawl from redis and crawls them; if the crawl yields more urls, it keeps going until all requests are done, then reads more urls from redis and repeats the cycle.

Analysis: this spider monitors the crawler's state by connecting to the signals.spider_idle signal. When the crawler goes idle, it returns new requests built with make_requests_from_url(url) to the engine, which passes them on to the scheduler.

from scrapy import signals
from scrapy.exceptions import DontCloseSpider
from scrapy.spiders import Spider, CrawlSpider

from . import connection


# Default batch size matches default concurrent requests setting.
DEFAULT_START_URLS_BATCH_SIZE = 16
DEFAULT_START_URLS_KEY = '%(name)s:start_urls'


class RedisMixin(object):
    """Mixin class to implement reading urls from a redis queue."""
    # Per spider redis key, default to DEFAULT_START_URLS_KEY.
    redis_key = None
    # Fetch this amount of start urls when idle. Default to DEFAULT_START_URLS_BATCH_SIZE.
    redis_batch_size = None
    # Redis client instance.
    server = None

    def start_requests(self):
        """Returns a batch of start requests from redis."""
        return self.next_requests()

    def setup_redis(self, crawler=None):
        """Setup redis connection and idle signal.
        This should be called after the spider has set its crawler object.
        """
        if self.server is not None:
            return

        if crawler is None:
            # We allow optional crawler argument to keep backwards
            # compatibility.
            # XXX: Raise a deprecation warning.
            crawler = getattr(self, 'crawler', None)

        if crawler is None:
            raise ValueError("crawler is required")

        settings = crawler.settings

        if self.redis_key is None:
            self.redis_key = settings.get(
                'REDIS_START_URLS_KEY', DEFAULT_START_URLS_KEY,
            )

        self.redis_key = self.redis_key % {'name': self.name}

        if not self.redis_key.strip():
            raise ValueError("redis_key must not be empty")

        if self.redis_batch_size is None:
            self.redis_batch_size = settings.getint(
                'REDIS_START_URLS_BATCH_SIZE', DEFAULT_START_URLS_BATCH_SIZE,
            )

        try:
            self.redis_batch_size = int(self.redis_batch_size)
        except (TypeError, ValueError):
            raise ValueError("redis_batch_size must be an integer")

        self.logger.info("Reading start URLs from redis key '%(redis_key)s' "
                         "(batch size: %(redis_batch_size)s)", self.__dict__)

        self.server = connection.from_settings(crawler.settings)
        # The idle signal is called when the spider has no requests left,
        # that's when we will schedule new requests from redis queue
        crawler.signals.connect(self.spider_idle, signal=signals.spider_idle)

    def next_requests(self):
        """Returns a request to be scheduled or none."""
        use_set = self.settings.getbool('REDIS_START_URLS_AS_SET')
        fetch_one = self.server.spop if use_set else self.server.lpop
        # XXX: Do we need to use a timeout here?
        found = 0
        while found < self.redis_batch_size:
            data = fetch_one(self.redis_key)
            if not data:
                # Queue empty.
                break
            req = self.make_request_from_data(data)
            if req:
                yield req
                found += 1
            else:
                self.logger.debug("Request not made from data: %r", data)

        if found:
            self.logger.debug("Read %s requests from '%s'", found, self.redis_key)

    def make_request_from_data(self, data):
        # By default, data is an URL.
        if '://' in data:
            return self.make_requests_from_url(data)
        else:
            self.logger.error("Unexpected URL from '%s': %r", self.redis_key, data)

    def schedule_next_requests(self):
        """Schedules a request if available"""
        for req in self.next_requests():
            self.crawler.engine.crawl(req, spider=self)

    def spider_idle(self):
        """Schedules a request if available, otherwise waits."""
        # XXX: Handle a sentinel to close the spider.
        self.schedule_next_requests()
        raise DontCloseSpider


class RedisSpider(RedisMixin, Spider):
    """Spider that reads urls from redis queue when idle."""

    @classmethod
    def from_crawler(self, crawler, *args, **kwargs):
        obj = super(RedisSpider, self).from_crawler(crawler, *args, **kwargs)
        obj.setup_redis(crawler)
        return obj


class RedisCrawlSpider(RedisMixin, CrawlSpider):
    """Spider that reads urls from redis queue when idle."""

    @classmethod
    def from_crawler(self, crawler, *args, **kwargs):
        obj = super(RedisCrawlSpider, self).from_crawler(crawler, *args, **kwargs)
        obj.setup_redis(crawler)
        return obj

The changes to the spider are not large either. The main one is binding the spider_idle signal through the connect interface. When the spider is initialized, setup_redis sets up the redis connection; afterwards next_requests reads start urls from redis under the key defined by REDIS_START_URLS_KEY in settings (REDIS_START_URLS_AS_SET decides whether that key is read as a set or a list). Note that this pool of initial urls is not the same thing as the queue's url pool above: the queue pool is used for scheduling, while the start url pool holds entry urls. Both live in redis, just under different keys, so think of them as different tables. From a small number of start urls the spider can spawn many new urls, which go to the scheduler for dedup and scheduling. When the spider has used up all the urls in the scheduling pool, the spider_idle signal fires, which triggers the spider's next_requests function to read another batch of urls from the redis start url pool; a sketch of seeding that pool follows.
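
A minimal sketch of seeding the start url pool for a hypothetical spider named 'demo', using the default list-based key (use sadd instead of lpush when REDIS_START_URLS_AS_SET is enabled):

import redis

server = redis.StrictRedis(host='localhost', port=6379)

# Seed the start url pool for a spider named 'demo' (default key: '<name>:start_urls').
server.lpush('demo:start_urls', 'http://example.com/category/1')
server.lpush('demo:start_urls', 'http://example.com/category/2')
# If REDIS_START_URLS_AS_SET is True in settings, use a set instead:
# server.sadd('demo:start_urls', 'http://example.com/category/1')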

Summary

Finally, the overall idea of scrapy-redis: the project rewrites the scheduler and spider classes so that scheduling and spider start-up interact with redis, and implements new dupefilter and queue classes so that deduplication and the scheduling containers interact with redis as well. Because the crawler processes on every host access the same redis database, scheduling and deduplication are managed in one place, which is what makes the crawler distributed. When a spider is initialized, a corresponding scheduler object is initialized alongside it; by reading the settings, that scheduler configures its scheduling container (queue) and its dedup tool (dupefilter). Whenever the spider produces a request, the Scrapy core hands it to that spider's scheduler object for scheduling; the scheduler checks redis for duplicates and, if the request is new, adds it to the scheduling pool in redis. When the scheduling conditions are met, the scheduler takes a request out of the redis pool and sends it to the spider to crawl. When the spider has crawled all the urls available for the moment and the scheduler finds that this spider's redis scheduling pool is empty, the spider_idle signal is triggered; on receiving it, the spider reads the start url pool directly from redis, takes a fresh batch of entry urls, and then the whole cycle repeats.
