Filtering duplicate data and incremental crawling with Scrapy

Original link

Preface

This note builds on the note from two posts back, 《scrapy電影天堂實戰(二)建立爬蟲項目》 (Scrapy Movie Heaven in practice, part 2: creating the crawler project). It also involves Redis, so I first got familiar with Redis and wrote up 《redis基礎筆記》 (Redis basics notes). To keep this post short, only the changed code is included.

My implementation approach

  • Filtering duplicate data

Write a RedisPipeline in pipelines.py: when an item passes through the pipeline, hash the crawled content into a key movie_hash and compare it against the movie_hash set fetched from Redis. If it is already in Redis, raise DropItem in the pipeline to discard the item, so that when items later pass through the MySQL pipeline and are written to the database, no duplicate rows are stored.
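
A minimal sketch of that RedisPipeline idea, assuming a redis-py connection and that the item's movie_link field is what gets hashed (neither is the project's actual code):

import hashlib

import redis
from scrapy.exceptions import DropItem


class RedisDedupPipeline(object):
    """Drop items whose content hash is already recorded in a Redis set."""

    def __init__(self):
        # connection details are assumptions; point this at your own Redis instance
        self.client = redis.StrictRedis(host='192.168.229.128', port=8889,
                                        password='123123', db=0)

    def process_item(self, item, spider):
        # hash a stable field of the item to get its fingerprint
        movie_hash = hashlib.sha1(item['movie_link'].encode('utf-8')).hexdigest()
        if self.client.sismember('movie_hash', movie_hash):
            # already seen: drop it so the MySQL pipeline never writes a duplicate row
            raise DropItem('duplicate item: ' + item['movie_link'])
        self.client.sadd('movie_hash', movie_hash)
        return item

Registered in ITEM_PIPELINES ahead of the MySQL pipeline, duplicates get dropped before they ever reach the database.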

  • Incremental crawling

Although there were no more duplicate rows, crawling was not incremental: once the spider stopped, the next run would start over from the beginning, which is very inefficient. My idea was to add handling for request and response URLs in a downloader middleware, store them in Redis, compare on each request, and raise IgnoreRequest for any request whose URL was already in Redis. In testing the requests were indeed ignored, but the spider still started from the beginning every time; it merely skipped those requests, so efficiency barely improved, and this is not really incremental crawling. In the end I quietly deleted my own code...
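
For reference, a rough sketch of what that abandoned downloader middleware looked like (class name, Redis key, and connection details are assumptions):

import redis
from scrapy.exceptions import IgnoreRequest


class SeenUrlDownloaderMiddleware(object):
    """Abandoned attempt: skip requests whose URL was already downloaded."""

    def __init__(self):
        # connection details are assumptions; point this at your own Redis instance
        self.client = redis.StrictRedis(host='192.168.229.128', port=8889,
                                        password='123123', db=0)

    def process_request(self, request, spider):
        # drop the request before downloading if its URL was already seen
        if self.client.sismember('seen_urls', request.url):
            raise IgnoreRequest('already crawled: ' + request.url)
        return None

    def process_response(self, request, response, spider):
        # record the URL of every downloaded page so it is skipped next time
        self.client.sadd('seen_urls', response.url)
        return response

As described above, this only skips the download step; every request is still generated from scratch on each run, which is why it is not real incremental crawling.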

Asking for help

After struggling for a long time I couldn't take it anymore and decided to ask for help. An earlier Google search had in fact already turned up the open-source project scrapy-redis, which implements exactly these features, but my own attempt with it didn't work (my own fault), so I went looking for a book that covers Scrapy systematically and found a Python/Scrapy PDF (too broke to buy the book). The book I found also uses scrapy-redis; I won't name it, out of respect for the author's work (buy it if you can afford to). After configuring things as the book described, duplicate filtering worked, but incremental crawling just wouldn't: once the spider stopped and was restarted, it only finished the requests already stored in Redis and then quit, and no new requests were generated on later restarts. After yet more struggling, someone in a chat group gave the answer: set dont_filter = True on the yield for the page-list links. I set it up, tested, and incremental crawling indeed works. Thanks again to that person for the help.

Concrete implementation

Create a Redis container

See the "using the redis image" section of 《redis基礎筆記》 for the details; they are not repeated here.
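
For context, a container started roughly like the following would match the REDIS_URL used in the settings below; the port mapping and password here are assumptions inferred from that URL:

docker run -d --name redis -p 8889:6379 redis redis-server --requirepass 123123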

Using scrapy-redis

Modify settings.py to add the configuration; for the full set of options see: https://github.com/rmax/scrapy-redis#usage

###### scrapy-redis settings start ######
# https://github.com/rmax/scrapy-redis
# Enables scheduling storing requests queue in redis.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Ensure all spiders share same duplicates filter through redis.
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Specify the full Redis URL for connecting (optional).
# If set, this takes precedence over the REDIS_HOST and REDIS_PORT settings.
REDIS_URL = 'redis://:123123@192.168.229.128:8889'

# Don't cleanup redis queues, allows to pause/resume crawls.
SCHEDULER_PERSIST = True

###### scrapy-redis settings end ######

Modify the spider

The reason my first attempt at using scrapy-redis failed was that I had set this parameter to dont_filter = False, and also that the order of crawling the next page and the detail pages was reversed: the page-list link should be crawled before the detail pages.

import logging

from scrapy import Request

from movie_heaven_bar.items import MovieHeavenBarItem


def parse(self, response):
    item = MovieHeavenBarItem()
    domain = "https://www.dytt8.net"

    # crawl the next list page
    last_page_num = response.xpath('//select[@name="sldd"]//option[last()]/text()').extract()[0]
    last_page_url = 'list_23_' + last_page_num + '.html'
    next_page_url = response.xpath('//div[@class="x"]//a[last() - 1]/@href').extract()[0]
    next_page_num = next_page_url.split('_')[-1].split('.')[0]
    if next_page_url != last_page_url:
        url = 'https://www.dytt8.net/html/gndy/dyzz/' + next_page_url
        logging.log(logging.INFO, f'***************** crawling page {next_page_num} ***************** ')
        # dont_filter=True: list-page requests are never filtered, so new requests
        # keep being generated after a restart (this is what makes crawling incremental)
        yield Request(url=url, callback=self.parse, meta={'item': item}, dont_filter=True)

    # crawl the detail pages
    urls = response.xpath('//b/a/@href').extract()     # list type
    #print('urls', urls)
    for url in urls:
        url = domain + url
        # dont_filter=False: detail-page requests go through the redis dupefilter,
        # so movies that were already crawled are skipped
        yield Request(url=url, callback=self.parse_single_page, meta={'item': item}, dont_filter=False)

The moment of truth

Log output

From the line 2019-07-24 09:45:31 [scrapy.crawler] INFO: Received SIGINT, shutting down gracefully. Send again to force you can see that I pressed Ctrl+C twice to force-stop the spider. Just before that it was crawling this link: 2019-07-24 09:45:30 [root] INFO: movie_link: https://www.dytt8.net/html/gndy/dyzz/20180826/57328.html. After restarting the spider, it continued with the next link: 2019-07-24 09:46:03 [root] INFO: crawling url: https://www.dytt8.net/html/gndy/dyzz/20180718/57146.html

... (output omitted) ...
2019-07-24 09:45:30 [root] INFO: crawling url: https://www.dytt8.net/html/gndy/dyzz/20180826/57328.html
2019-07-24 09:45:30 [root] INFO: **************** movie detail log ****************
2019-07-24 09:45:30 [root] INFO: movie_link: https://www.dytt8.net/html/gndy/dyzz/20180826/57328.html
2019-07-24 09:45:30 [root] INFO: movie_name: 金蟬脫殼2/金蟬脫殼2:冥府
2019-07-24 09:45:30 [root] INFO: movie_publish_date: 2018-06-13(菲律賓)/2018-06-29(中國)/2018-06-29(美國)
2019-07-24 09:45:30 [root] INFO: movie_score: 4.0/10 from 1,180 users
2019-07-24 09:45:30 [root] INFO: movie_directors: 史蒂芬·C·米勒 Steven C. Miller
2019-07-24 09:45:30 [root] INFO: ***************** commit to mysql *****************
2019-07-24 09:45:31 [scrapy.crawler] INFO: Received SIGINT, shutting down gracefully. Send again to force
2019-07-24 09:45:31 [scrapy.core.engine] INFO: Closing spider (shutdown)
2019-07-24 09:45:58 [scrapy.extensions.telnet] INFO: Telnet Password: 4e5dbb60f52f81fe
2019-07-24 09:45:58 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
2019-07-24 09:45:58 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'movie_heaven_bar.middlewares.MovieHeavenBarDownloaderMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-07-24 09:45:58 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-07-24 09:45:58 [scrapy.middleware] INFO: Enabled item pipelines:
['movie_heaven_bar.pipelines.MovieHeavenBarPipeline']
2019-07-24 09:45:58 [scrapy.core.engine] INFO: Spider opened
2019-07-24 09:45:58 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-07-24 09:45:58 [newest_movie] INFO: Spider opened: newest_movie
2019-07-24 09:45:58 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-07-24 09:46:03 [root] INFO: crawling url: https://www.dytt8.net/html/gndy/dyzz/20180718/57146.html
2019-07-24 09:46:03 [root] INFO: **************** movie detail log ****************
2019-07-24 09:46:03 [root] INFO: movie_link: https://www.dytt8.net/html/gndy/dyzz/20180718/57146.html
2019-07-24 09:46:03 [root] INFO: movie_name: Operation Red Sea
2019-07-24 09:46:03 [root] INFO: movie_publish_date: 2018-02-16(中國)
2019-07-24 09:46:03 [root] INFO: movie_score: 8.3/10 from 433,101 users
2019-07-24 09:46:03 [root] INFO: movie_directors: 林超賢 Dante Lam
2019-07-24 09:46:03 [root] INFO: movie_actors: 張譯 Yi Zhang , 黃景瑜 Jingyu Huang , 海清 Hai-Qing , 杜江 Jiang Du , 蔣璐霞 Luxia Jiang , 尹昉 Fang Yin , 王強 Qiang Wang , 郭鬱濱 Yubin Guo , 王雨甜 Yutian Wang , 麥亨利 Henry Mai , 張涵予 Hanyu Zhang , 王彥霖 Yanlin Wang
2019-07-24 09:46:03 [root] INFO: movie_download_link: magnet:?xt=urn:btih:3c1188fdbec2f63ce1e30d2061913fcba15ebb90&dn=%e9%98%b3%e5%85%89%e7%94%b5%e5%bd%b1www.ygdy8.com.%e7%ba%a2%e6%b5%b7%e8%a1%8c%e5%8a%a8.BD.720p.%e5%9b%bd%e8%af%ad%e4%b8%ad%e5%ad%97.mkv&tr=udp%3a%2f%2ftracker.leechers-paradise.org%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce&tr=udp%3a%2f%2feddie4.nl%3a6969%2fannounce&tr=udp%3a%2f%2fshadowshq.eddie4.nl%3a6969%2fannounce
2019-07-24 09:46:03 [root] INFO: ***************** commit to mysql *****************
... (output omitted) ...

Watching the Redis keys change

scrapy-redis creates two keys in Redis: a sorted set, requests, that stores the pending requests, and an unordered set, dupefilter, that stores fingerprints of filtered links. Once all requests have been consumed, the requests sorted set is removed and only recreated when new requests arrive; if it has not been fully consumed, new requests keep being appended to it. The dupefilter set, because SCHEDULER_PERSIST = True is set, is never removed, and fingerprints of new requests keep being appended to it.

root@fa2b076097e9:/data# redis-cli 
127.0.0.1:6379> AUTH 123123
OK
127.0.0.1:6379> KEYS *
1) "newest_movie:requests"
2) "newest_movie:dupefilter"
127.0.0.1:6379> TYPE newest_movie:requests
zset
127.0.0.1:6379> TYPE newest_movie:dupefilter
set
127.0.0.1:6379> ZCARD newest_movie:requests
(integer) 47
127.0.0.1:6379> ZCARD newest_movie:requests
(integer) 45
127.0.0.1:6379> ZCARD newest_movie:requests
(integer) 0
... (output omitted) ...
127.0.0.1:6379> KEYS *
1) "newest_movie:dupefilter"
127.0.0.1:6379> SCARD newest_movie:dupefilter
(integer) 775
127.0.0.1:6379> ZRANGE newest_movie:requests 0 -1
 1) "\x80\x04\x95\x7f\x01\x00\x00\x00\x00\x00\x00}\x94(\x8c\x03url\x94\x8c8https://www.dytt8.net/html/gndy/dyzz/20180107/56002.html\x94\x8c\bcallback\x94\x8c\x11parse_single_page\x94\x8c\aerrback\x94N\x8c\x06method\x94\x8c\x03GET\x94\x8c\aheaders\x94}\x94C\aReferer\x94]\x94C4https://www.dytt8.net/html/gndy/dyzz/list_23_31.html\x94as\x8c\x04body\x94C\x00\x94\x8c\acookies\x94}\x94\x8c\x04meta\x94}\x94(\x8c\x04item\x94\x8c\x16movie_heaven_bar.items\x94\x8c\x12MovieHeavenBarItem\x94\x93\x94)\x81\x94}\x94\x8c\a_values\x94}\x94sb\x8c\x05depth\x94K\x1fu\x8c\t_encoding\x94\x8c\x05utf-8\x94\x8c\bpriority\x94K\x00\x8c\x0bdont_filter\x94\x89\x8c\x05flags\x94]\x94u."
 2) "\x80\x04\x95\x7f\x01\x00\x00\x00\x00\x00\x00}\x94(\x8c\x03url\x94\x8c8https://www.dytt8.net/html/gndy/dyzz/20180108/56021.html\x94\x8c\bcallback\x94\x8c\x11parse_single_page\x94\x8c\aerrback\x94N\x8c\x06method\x94\x8c\x03GET\x94\x8c\aheaders\x94}\x94C\aReferer\x94]\x94C4https://www.dytt8.net/html/gndy/dyzz/list_23_31.html\x94as\x8c\x04body\x94C\x00\x94\x8c\acookies\x94}\x94\x8c\x04meta\x94}\x94(\x8c\x04item\x94\x8c\x16movie_heaven_bar.items\x94\x8c\x12MovieHeavenBarItem\x94\x93\x94)\x81\x94}\x94\x8c\a_values\x94}\x94sb\x8c\x05depth\x94K\x1fu\x8c\t_encoding\x94\x8c\x05utf-8\x94\x8c\bpriority\x94K\x00\x8c\x0bdont_filter\x94\x89\x8c\x05flags\x94]\x94u."
... (output omitted) ...
127.0.0.1:6379> SMEMBERS newest_movie:dupefilter
  1) "1bff65c147e71ea6d43b7e4a4ac86fc982375939"
  2) "9d99491255ee83dd4ffb72c3c59c17c938dbe08f"
... (output omitted) ...

Checking the data

Exporting the database to Excel, I found that out of 500 rows there were still 4 duplicates. This is probably because I force-stopped the spider many times during testing, so duplicate filtering and incremental crawling can be considered basically working.

newest_movie

Closing remarks

The basic functionality of this project is in place, but there are still some details to clean up, and I don't yet understand how scrapy-redis works internally; the next post will look into the internals of scrapy-redis.
