When browsing the web we often notice that some sites periodically add new data on top of what they already have: a movie site, for example, keeps publishing the latest popular films in real time, and a fiction site releases new chapters as the author writes them. When we run into this kind of scenario while crawling, do we need to keep updating our program so that it can pick up the data the site has most recently published?
Concept: use a crawler to monitor a site for data updates, so that the newly published data can be crawled.
如何進行增量式的爬取工做:scrapy
寫入存儲介質時判斷內容是否是已經在介質中存在ide
Analysis: it is not hard to see that the core of incremental crawling is deduplication. As for which step the deduplication should happen in, each option has its pros and cons. In my view, you pick one of the first two approaches depending on the actual situation (and possibly use both). The first approach suits sites where new pages keep appearing, such as new chapters of a novel or each day's latest news; the second suits sites whose page content gets updated in place. The third approach is essentially the last line of defense. Combining them achieves deduplication to the greatest possible extent.
Deduplication methods
Store the URLs produced during crawling in a Redis set. The next time a crawl runs, before sending a request, check whether its URL is already in the stored set: if it is, skip the request; otherwise send it.
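A minimal sketch of this first idea outside of Scrapy (assuming a Redis instance on 127.0.0.1:6379; the URLs below are made up for illustration), where the return value of SADD doubles as the "seen before" check:

from redis import Redis

conn = Redis(host='127.0.0.1', port=6379)

def should_request(url):
    # sadd returns 1 if the url was newly added to the set (never seen before),
    # and 0 if it was already present (crawled on an earlier run).
    return conn.sadd('urls', url) == 1

for url in ['http://www.example.com/detail/1', 'http://www.example.com/detail/2']:
    if should_request(url):
        print('New url, send the request:', url)
    else:
        print('Already crawled, skip:', url)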
Define a unique fingerprint for the crawled page content and store that fingerprint in a Redis set. The next time page data is crawled, before persisting it, first check whether the data's fingerprint already exists in the Redis set, and decide on that basis whether to persist it.
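The second idea can be sketched the same way (again assuming a local Redis; the record dict here is purely hypothetical), with a SHA-256 hash of the parsed fields serving as the unique fingerprint:

import hashlib
from redis import Redis

conn = Redis(host='127.0.0.1', port=6379)

def is_new_record(record):
    # Build a deterministic fingerprint of the parsed content, then let
    # sadd's return value decide whether this record should be persisted.
    source = record['author'] + record['content']
    fingerprint = hashlib.sha256(source.encode()).hexdigest()
    return conn.sadd('data_id', fingerprint) == 1

record = {'author': 'someone', 'content': 'some parsed text'}
if is_new_record(record):
    print('New record, persist it')
else:
    print('Duplicate record, skip persistence')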
Spider file (URL-based dedup example):
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from redis import Redis
from incrementPro.items import IncrementproItem


class MovieSpider(CrawlSpider):
    name = 'movie'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['http://www.4567tv.tv/frim/index7-11.html']

    rules = (
        Rule(LinkExtractor(allow=r'/frim/index7-\d+\.html'), callback='parse_item', follow=True),
    )
    # Create the Redis connection object
    conn = Redis(host='127.0.0.1', port=6379)

    def parse_item(self, response):
        li_list = response.xpath('//li[@class="p1 m1"]')
        for li in li_list:
            # Get the detail page url
            detail_url = 'http://www.4567tv.tv' + li.xpath('./a/@href').extract_first()
            # Add the detail page url to the Redis set
            ex = self.conn.sadd('urls', detail_url)
            if ex == 1:
                print('This url has not been crawled yet, its data can be crawled')
                yield scrapy.Request(url=detail_url, callback=self.parse_detail)
            else:
                print('No data update yet, nothing new to crawl!')

    # Parse the movie name and genre on the detail page for persistence
    def parse_detail(self, response):
        item = IncrementproItem()
        item['name'] = response.xpath('//dt[@class="name"]/text()').extract_first()
        item['kind'] = response.xpath('//div[@class="ct-c"]/dl/dt[4]//text()').extract()
        item['kind'] = ''.join(item['kind'])
        yield item
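The spider imports IncrementproItem from incrementPro.items; that file is not shown in the original post, but judging from the fields the spider assigns it would presumably look like this:

# incrementPro/items.py (inferred from the fields used in the spider)
import scrapy

class IncrementproItem(scrapy.Item):
    name = scrapy.Field()   # movie name
    kind = scrapy.Field()   # movie genre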
Pipeline file:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import json

from redis import Redis


class IncrementproPipeline(object):
    conn = None

    def open_spider(self, spider):
        self.conn = Redis(host='127.0.0.1', port=6379)

    def process_item(self, item, spider):
        dic = {
            'name': item['name'],
            'kind': item['kind']
        }
        print(dic)
        # Redis cannot store a dict directly, so serialize it to JSON first
        self.conn.lpush('movieData', json.dumps(dic))
        return item
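For the pipeline to run it also has to be registered in settings.py; a minimal configuration would look like the snippet below (the priority 300 is just the conventional default, and the second project registers its IncrementbydataproPipeline the same way):

# incrementPro/settings.py (only the relevant part)
ITEM_PIPELINES = {
    'incrementPro.pipelines.IncrementproPipeline': 300,
}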
Spider file (content-fingerprint dedup example):
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from incrementByDataPro.items import IncrementbydataproItem
from redis import Redis
import hashlib


class QiubaiSpider(CrawlSpider):
    name = 'qiubai'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://www.qiushibaike.com/text/']

    rules = (
        Rule(LinkExtractor(allow=r'/text/page/\d+/'), callback='parse_item', follow=True),
        Rule(LinkExtractor(allow=r'/text/$'), callback='parse_item', follow=True),
    )
    # Create the Redis connection object
    conn = Redis(host='127.0.0.1', port=6379)

    def parse_item(self, response):
        div_list = response.xpath('//div[@id="content-left"]/div')
        for div in div_list:
            item = IncrementbydataproItem()
            item['author'] = div.xpath('./div[1]/a[2]/h2/text() | ./div[1]/span[2]/h2/text()').extract_first()
            item['content'] = div.xpath('.//div[@class="content"]/span/text()').extract_first()

            # Build a unique fingerprint from the parsed values for Redis-based dedup
            source = item['author'] + item['content']
            source_id = hashlib.sha256(source.encode()).hexdigest()
            # Store the fingerprint of the parsed content in the Redis set data_id
            ex = self.conn.sadd('data_id', source_id)

            if ex == 1:
                print('This record has not been crawled yet, crawling it......')
                yield item
            else:
                print('This record has already been crawled, no need to crawl it again!!!')
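As with the first project, the items file is not shown; based on the fields the spider assigns, incrementByDataPro/items.py would presumably be:

# incrementByDataPro/items.py (inferred from the fields used in the spider)
import scrapy

class IncrementbydataproItem(scrapy.Item):
    author = scrapy.Field()    # post author
    content = scrapy.Field()   # post text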
Pipeline file:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import json

from redis import Redis


class IncrementbydataproPipeline(object):
    conn = None

    def open_spider(self, spider):
        self.conn = Redis(host='127.0.0.1', port=6379)

    def process_item(self, item, spider):
        dic = {
            'author': item['author'],
            'content': item['content']
        }
        # print(dic)
        # Redis cannot store a dict directly, so serialize it to JSON first
        self.conn.lpush('qiubaiData', json.dumps(dic))
        return item
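After a run, the deduplicated results can be inspected straight from Redis; a small sketch, assuming the same local Redis and the list keys used above:

import json
from redis import Redis

conn = Redis(host='127.0.0.1', port=6379)

# How many new records each project has persisted so far
print(conn.llen('movieData'))
print(conn.llen('qiubaiData'))

# Peek at the most recently stored entries (lpush puts new items at the head)
for raw in conn.lrange('qiubaiData', 0, 4):
    print(json.loads(raw))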