Using Redis in Scrapy for URL deduplication and incremental crawling

Introduction

When I was collecting data before, I had two requirements: URL deduplication and incremental crawling (only newly added URLs should be requested, otherwise the load on the crawled site's server goes up). The initial idea was simply to use a Redis set for URL deduplication, but later in development the incremental-crawling part turned out to be solved almost as a side effect. The main code is posted below.
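The core of both requirements is a single Redis set: SADD stores a URL, SISMEMBER answers whether it has been seen before. A minimal sketch with redis-py (a local Redis on the default port is assumed, and seen_urls is only an illustrative key name, not the set name used later):

import redis

r = redis.Redis(host='localhost', port=6379, db=0)

def mark_seen(url):
    # SADD is idempotent: adding the same URL twice has no effect
    r.sadd('seen_urls', url)

def is_duplicate(url):
    # SISMEMBER returns True if the URL has been stored before
    return r.sismember('seen_urls', url)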

Implementation steps

  • Store every crawled URL in Redis (pipeline.py)
from newscrawl.redisopera import RedisOpera  # RedisOpera is shown in redisopera.py below

class InsertRedis(object):
    def __init__(self):
        # connection used for writing crawled URLs
        self.Redis = RedisOpera('insert')

    def process_item(self, item, spider):
        # record the crawled URL in the Redis set
        self.Redis.write(item['url'])
        return item

Note: the concrete Redis operations are not detailed here; the helper code is attached at the end of the post.
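For the pipeline to run it has to be enabled in settings.py. A sketch, assuming the project is called newscrawl and the class lives in the pipeline.py module mentioned above (the exact dotted path depends on your project layout):

# settings.py (sketch)
ITEM_PIPELINES = {
    'newscrawl.pipeline.InsertRedis': 300,
}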

  • Check whether each URL about to be requested has already been crawled (middleware.py)
from scrapy.exceptions import IgnoreRequest
from newscrawl.redisopera import RedisOpera

class IgnoreRequestMiddleware(object):
    def __init__(self):
        # connection used for membership checks
        self.Redis = RedisOpera('query')

    def process_request(self, request, spider):
        # drop the request if its URL is already stored in Redis, otherwise let it through
        if self.Redis.query(request.url):
            raise IgnoreRequest("IgnoreRequest : %s" % request.url)
        else:
            return None
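Likewise, the middleware must be registered in settings.py. A sketch, again assuming the newscrawl project layout with the class in middleware.py:

# settings.py (sketch)
DOWNLOADER_MIDDLEWARES = {
    'newscrawl.middleware.IgnoreRequestMiddleware': 543,
}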
  • Implement incremental crawling (in the spider)
def start_requests(self):
    # page_num is the pagination parameter
    yield FormRequest('https://www.demo.org/vuldb/vulnerabilities?page=' + str(self.page_num),
                      callback=self.parse_page)

def parse_page(self, response):
    # every table row on the listing page is one vulnerability entry
    urls = response.xpath('//tbody/tr').extract()
    for url in urls:
        request_url = Selector(text=url).xpath('//td[@class="vul-title-wrapper"]/a/@href').extract()[0]
        if re.search(r'/vuldb/ssvid-\d+', request_url):
            yield FormRequest('https://www.demo.org' + request_url.strip(),
                              callback=self.parse_item, dont_filter=False)
    if len(urls) == 20:
        # a full page (20 rows) means there may be another page to fetch
        self.page_num += 1

def parse_item(self, response):
    item = WebcrawlItem()
    self.count += 1
    item['url'] = response.url
    yield item
    # request the next listing page; this only happens when a new item got through
    yield FormRequest('https://www.demo.org/vuldb/vulnerabilities?page=' + str(self.page_num),
                      callback=self.parse_page)
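The three methods above live in one spider class, which also needs page_num and count initialised before they are used. A sketch of the surrounding skeleton, with an assumed spider name and item import path:

# spider skeleton (sketch)
import re

from scrapy import FormRequest, Selector
from scrapy.spiders import Spider

from newscrawl.items import WebcrawlItem  # assumed item module path


class VulnSpider(Spider):
    name = 'vuln'  # illustrative name

    def __init__(self, *args, **kwargs):
        super(VulnSpider, self).__init__(*args, **kwargs)
        self.page_num = 1   # first page of the listing
        self.count = 0      # number of items scraped in this run

    # start_requests(), parse_page() and parse_item() as shown above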

In the third snippet, parse_item() calls back into parse_page(). If Redis holds no URLs at all, the crawler keeps paging through the entire site. But if we already finished a full crawl at some earlier point, then on the next run every URL about to be requested is checked against Redis first. When a URL is a duplicate, the middleware drops the request, so parse_page() never reaches parse_item() (URL deduplication), and consequently yield FormRequest('https://www.demo.org/vuldb/vu...' + str(self.page_num), callback=self.parse_page) is never executed; the crawl therefore breaks out of the loop and stops.
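To see what has accumulated between runs, the sets can be inspected directly with redis-py (the set names come from util.py further down: wooyun, freebuf, or the fallback publicurls; the URL here is only an example):

import redis

r = redis.Redis(host='localhost', port=6379, db=0)
print(r.scard('publicurls'))                                            # how many deduplicated URLs are stored
print(r.sismember('publicurls', 'https://www.demo.org/vuldb/ssvid-1'))  # True if already crawled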

在這不上Redis相關的操做app

1 redisopera.py

# -*- coding: utf-8 -*-
import redis
from scrapy import log  # scrapy.log is deprecated in newer Scrapy versions; the standard logging module can be used instead
from newscrawl.util import RedisCollection


class RedisOpera:
    def __init__(self, stat):
        log.msg('init redis %s connection' % stat, log.INFO)
        self.r = redis.Redis(host='localhost', port=6379, db=0)

    def write(self, values):
        # add the URL to the set whose name is derived from the URL itself
        collectionname = RedisCollection(values).getCollectionName()
        self.r.sadd(collectionname, values)

    def query(self, values):
        # return True if the URL is already a member of its set
        collectionname = RedisCollection(values).getCollectionName()
        return self.r.sismember(collectionname, values)
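A quick usage sketch of RedisOpera outside of Scrapy (the URL is only an example):

opera = RedisOpera('insert')
opera.write('https://www.demo.org/vuldb/ssvid-1')            # stored in the 'publicurls' set

checker = RedisOpera('query')
print(checker.query('https://www.demo.org/vuldb/ssvid-1'))   # True, so the middleware would drop this request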

2 util.py

# -*- coding: utf-8 -*-
import re


class RedisCollection(object):
    def __init__(self, OneUrl):
        self.collectionname = OneUrl

    def getCollectionName(self):
        # use a site-specific set name when the URL matches a known site,
        # otherwise fall back to the shared 'publicurls' set
        if self.IndexAllUrls() is not None:
            name = self.IndexAllUrls()
        else:
            name = 'publicurls'
        return name

    def IndexAllUrls(self):
        # return the first site keyword found in the URL, or None
        allurls = ['wooyun', 'freebuf']
        result = None
        for keyword in allurls:
            if re.findall(keyword, self.collectionname):
                result = keyword
                break
        return result
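A quick check of how the set name is chosen (the URLs are only examples): a wooyun link gets its own set, anything else falls back to publicurls:

print(RedisCollection('http://www.wooyun.org/bugs/wooyun-2015-001').getCollectionName())  # 'wooyun'
print(RedisCollection('https://www.demo.org/vuldb/ssvid-1').getCollectionName())          # 'publicurls'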