If you reprint this post, please credit the source: http://www.cnblogs.com/codefish/p/4968260.html
When writing crawlers, one of the most common needs is downloading files and images. In other languages or frameworks we usually filter the data first and then download the files asynchronously with some download helper. Scrapy already ships with file and image download pipelines, which is very convenient: a few lines of code are enough. Below I will show how to use Scrapy to download the images on the front page of a Douban photo album.
Advantages:
1) Automatic de-duplication
2) Asynchronous operation that does not block the crawl
3) Thumbnails of specified sizes can be generated (see the settings sketch right after this list)
4) Expiration times are handled
5) Format conversion
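Points 3 and 4 are driven entirely from settings.py. The original post does not show this, but as a minimal sketch (the setting names below are standard Scrapy image-pipeline settings; the concrete sizes and values are only illustrative):

IMAGES_THUMBS = {
    'small': (100, 100),   # writes a thumbs/small/ copy of each image
    'big': (270, 270),     # writes a thumbs/big/ copy of each image
}
IMAGES_MIN_WIDTH = 110     # drop images narrower than 110 px
IMAGES_MIN_HEIGHT = 110    # drop images shorter than 110 px
IMAGES_EXPIRES = 90        # skip re-downloading images fetched within 90 days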
Coding steps:
1. Define the Item
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy
from scrapy import Item, Field


class DoubanImgsItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    image_urls = Field()
    images = Field()
    image_paths = Field()
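A side note that is not in the original post: image_urls and images are the field names the built-in ImagesPipeline reads from and writes to by default, so keeping those exact names avoids extra configuration. If different field names are preferred, they can be remapped through the standard settings, roughly like this:

IMAGES_URLS_FIELD = 'image_urls'      # field the pipeline takes the URLs from
IMAGES_RESULT_FIELD = 'images'        # field the pipeline stores the results in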
2. Define the spider
# coding=utf-8
from scrapy.spiders import Spider
import re
from douban_imgs.items import DoubanImgsItem
from scrapy.http.request import Request

# make sure the default encoding is utf-8, otherwise the page content
# raises encoding errors (Python 2 only)
import sys
reload(sys)
sys.setdefaultencoding('utf8')


class download_douban(Spider):
    name = 'download_douban'

    def __init__(self, url='152686895', *args, **kwargs):
        self.allowed_domains = ['douban.com']
        self.start_urls = [
            'http://www.douban.com/photos/album/%s/' % (url)
        ]
        self.url = url
        # call the base class constructor
        super(download_douban, self).__init__(*args, **kwargs)

    def parse(self, response):
        """
        :type response: the album page response
        """
        list_imgs = response.xpath('//div[@class="photolst clearfix"]//img/@src').extract()
        if list_imgs:
            item = DoubanImgsItem()
            item['image_urls'] = list_imgs
            yield item
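Because the album id is taken from the url argument of __init__, the spider can be pointed at another album straight from the command line with Scrapy's -a option, for example (assuming the douban_imgs project layout from the repository linked at the end):

scrapy crawl download_douban -a url=152686895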
3. Define the pipeline
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

from scrapy.pipelines.images import ImagesPipeline
from scrapy.exceptions import DropItem
from scrapy import Request
from scrapy import log


class DoubanImgsPipeline(object):
    def process_item(self, item, spider):
        return item


class DoubanImgDownloadPieline(ImagesPipeline):
    def get_media_requests(self, item, info):
        for image_url in item['image_urls']:
            yield Request(image_url)

    def item_completed(self, results, item, info):
        image_paths = [x['path'] for ok, x in results if ok]
        if not image_paths:
            raise DropItem("Item contains no images")
        item['image_paths'] = image_paths
        return item
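By default the pipeline saves every image under IMAGES_STORE as full/<SHA1 of the image URL>.jpg. The original post does not cover renaming, but as a minimal sketch (assuming the Scrapy 1.x ImagesPipeline API; the class name DoubanImgRenamePipeline is hypothetical), overriding file_path() is the usual hook for choosing friendlier names:

# Minimal sketch, not from the original post: a variant pipeline that names
# each saved file after the last segment of its URL instead of the default
# SHA1 hash.
from scrapy.pipelines.images import ImagesPipeline


class DoubanImgRenamePipeline(ImagesPipeline):
    def file_path(self, request, response=None, info=None):
        # e.g. ".../photo/l/public/p2301434250.jpg" -> "douban/p2301434250.jpg"
        return 'douban/%s' % request.url.split('/')[-1]

If used, it would replace DoubanImgDownloadPieline in the ITEM_PIPELINES setting shown in the next step.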
4. Edit settings.py and enable the item pipeline
# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'douban_imgs.pipelines.DoubanImgDownloadPieline': 300,
}

IMAGES_STORE = 'C:\\doubanimgs'

IMAGES_EXPIRES = 90
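With this configuration, the downloaded images end up under C:\doubanimgs in a full/ subdirectory, each named by the SHA1 hash of its URL (the pipeline's default naming scheme), and IMAGES_EXPIRES = 90 means an image that was already downloaded within the last 90 days is not fetched again.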
Running result:
GitHub repository: https://github.com/BruceDone/scrapy_demo
If this Scrapy and crawler series has been helpful, please recommend it; I will keep posting more articles in the series.