Scrapy-redis: A Distributed Crawler for Douban Movie Detail Pages

Crawlers are usually built on the Scrapy framework and run on a single machine, so the crawl speed rarely meets expectations, the data volume stays small, and the IP or account gets banned easily. You can work around this with proxy IPs or logged-in sessions, but proxy IPs are often useless unless you pay for them, and even paid ones behave quite differently from a real IP. This is where the Scrapy-redis distributed crawling framework comes in. It is built on top of Scrapy and swaps Scrapy's scheduler for a Redis-backed one, so multiple servers can crawl together, with automatic request deduplication and much higher throughput. By default the scraped data is stored in Redis, which is fast.

How Scrapy works:

How Scrapy-redis works:

The component in the middle is the scheduler.
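Everything the scheduler shares between workers lives in Redis, under keys derived from the spider name. As a minimal sketch of how you might peek at that shared state with redis-py (assuming the spider name 'douban' used below and the placeholder connection URL from settings.py):

import redis

# Connection URL mirrors the placeholder in settings.py below.
r = redis.from_url('redis://username:password@xxx.xxx.xxx.xxx:6379')

# scrapy-redis default key names, derived from the spider name:
#   douban:requests   -- the shared request queue the scheduler feeds from
#   douban:dupefilter -- request fingerprints used for deduplication
#   douban:items      -- items serialized by RedisPipeline
print(r.keys('douban:*'))
print(r.llen('douban:items'))  # number of items scraped so far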

A Simple Distributed Crawler for Douban Movies

Here I generate the start URLs directly in the spider, and the data is stored in MySQL.
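The get_urls() helper called by start_requests below is not shown in the original post. One plausible sketch, assuming it builds detail-page URLs from a prepared list of Douban subject IDs (the IDs and the module path are placeholders):

# MovieSpider/utils.py -- hypothetical module; the post does not show
# where get_urls() actually lives.
def get_urls():
    # Placeholder subject IDs; in practice these might come from a list
    # page crawl or a database of movie IDs.
    subject_ids = ['1292052', '1291546']
    return ['https://movie.douban.com/subject/%s/' % sid for sid in subject_ids]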

import scrapy
from scrapy_redis.spiders import RedisSpider

# MovieItem and get_urls live elsewhere in the project; these import paths
# are assumptions based on the project layout shown in settings.py.
from MovieSpider.items import MovieItem
from MovieSpider.utils import get_urls


class DoubanSpider(RedisSpider):
    name = 'douban'
    # Key that scrapy-redis watches for start URLs pushed into Redis.
    redis_key = 'douban:start_urls'
    allowed_domains = ['douban.com']

    def start_requests(self):
        # Generate start URLs locally instead of pushing them into Redis.
        urls = get_urls()
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)


    def parse(self, response):
        # item_loader = MovieItemLoader(item=MovieItem, response=response)
        #
        # item_loader.add_xpath('title', '')
        item = MovieItem()
        print(response.url)
        # The numeric subject ID sits in the URL: .../subject/<id>/
        item['movieId'] = int(response.url.split('subject/')[1].replace('/', ''))
        item['title'] = response.xpath('//h1/span/text()').extract()[0]
        try:
            # The second <h1> span looks like "(2019)"; strip the parentheses.
            item['year'] = response.xpath('//h1/span/text()').extract()[1].split('(')[1].split(')')[0]
        except Exception:
            item['year'] = '2019'
        item['url'] = response.url
        item['cover'] = response.xpath('//a[@class="nbgnbg"]/img/@src').extract()[0]
        try:
            item['director'] = response.xpath('//a[@rel="v:directedBy"]/text()').extract()[0]
        except Exception:
            item['director'] = '暫無'
        item['major'] = '/'.join(response.xpath('//a[@rel="v:starring"]/text()').extract())
        item['category'] = ','.join(response.xpath('//span[@property="v:genre"]/text()').extract())
        item['time'] = ','.join(response.xpath('//span[@property="v:initialReleaseDate"]/text()').extract())
        try:
            item['duration'] = response.xpath('//span[@property="v:runtime"]/text()').extract()[0]
        except Exception:
            item['duration'] = '暫無'
        item['score'] = response.xpath('//strong[@property="v:average"]/text()').extract()[0]
        item['comment_nums'] = response.xpath('//span[@property="v:votes"]/text()').extract()[0] or 0
        item['desc'] = response.xpath('//span[@property="v:summary"]/text()').extract()[0].strip()

        # Cast names and avatars; the avatar URL is embedded in the inline
        # style attribute as "background-image: url(...)".
        actor_list = response.xpath('//ul[@class="celebrities-list from-subject __oneline"]/li/a/@title').extract()
        actor_img_list = response.xpath('//ul[@class="celebrities-list from-subject __oneline"]/li/a/div/@style').extract()
        actor_img_list = [i.split('url(')[1].replace(')', '') for i in actor_img_list]

        item['actor_name_list'] = '----'.join(actor_list)
        item['actor_img_list'] = '----'.join(actor_img_list)

        yield item
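MovieItem is not shown in the original post either; a minimal items.py sketch covering exactly the fields that parse() fills:

import scrapy

class MovieItem(scrapy.Item):
    # Fields inferred from the parse() method above.
    movieId = scrapy.Field()
    title = scrapy.Field()
    year = scrapy.Field()
    url = scrapy.Field()
    cover = scrapy.Field()
    director = scrapy.Field()
    major = scrapy.Field()
    category = scrapy.Field()
    time = scrapy.Field()
    duration = scrapy.Field()
    score = scrapy.Field()
    comment_nums = scrapy.Field()
    desc = scrapy.Field()
    actor_name_list = scrapy.Field()
    actor_img_list = scrapy.Field()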

The settings.py file:

BOT_NAME = 'MovieSpider'

SPIDER_MODULES = ['MovieSpider.spiders']
NEWSPIDER_MODULE = 'MovieSpider.spiders'

# REDIS_URL takes precedence over REDIS_HOST/REDIS_PORT in scrapy-redis,
# so the two settings below stay commented out.
# REDIS_HOST = '127.0.0.1'
# REDIS_PORT = 6379
REDIS_URL = 'redis://username:password@xxx.xxx.xxx.xxx:6379'


# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Use the scrapy-redis scheduler so all workers share one request queue.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Ensure all spiders share same duplicates filter through redis.
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

ITEM_PIPELINES = {
    # Serializes every item into the 'douban:items' list in Redis.
    'scrapy_redis.pipelines.RedisPipeline': 300,
    # Custom pipeline that writes items to MySQL; runs first (lower number).
    'MovieSpider.pipelines.MysqlPipeline': 200,
}
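The MysqlPipeline registered above is not shown in the post. A minimal sketch using pymysql, where the connection parameters and the movie table layout are assumptions:

import pymysql

class MysqlPipeline(object):
    # Sketch of the pipeline registered in ITEM_PIPELINES above; the
    # credentials, database name, and table name are placeholders.
    def open_spider(self, spider):
        self.conn = pymysql.connect(host='localhost', user='root',
                                    password='password', db='douban',
                                    charset='utf8mb4')
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        # Backticks because field names like `desc` are MySQL keywords.
        keys = ', '.join('`%s`' % k for k in item.keys())
        placeholders = ', '.join(['%s'] * len(item))
        sql = 'INSERT INTO movie (%s) VALUES (%s)' % (keys, placeholders)
        self.cursor.execute(sql, list(item.values()))
        self.conn.commit()
        return item

    def close_spider(self, spider):
        self.cursor.close()
        self.conn.close()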

All of this is just so that multiple servers can crawl together; I did not manually push the start URLs into Redis.
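If you did want to seed the crawl through Redis instead, pushing a URL onto the redis_key declared in the spider is all it takes (standard scrapy-redis usage; the subject ID here is just an example):

import redis

r = redis.from_url('redis://username:password@xxx.xxx.xxx.xxx:6379')
# Any worker blocked on 'douban:start_urls' picks this up and starts crawling.
r.lpush('douban:start_urls', 'https://movie.douban.com/subject/1292052/')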

Now upload the crawler project to the other servers and start them all together.
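Starting each worker is just the usual "scrapy crawl douban" command. Because every instance talks to the same Redis, they share one request queue and one dupefilter, so no page is fetched twice across the cluster.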

The results look like this:
