Scrapy is an application framework written for crawling websites and extracting structured data. It can be used in a wide range of programs, from data mining to information processing to archiving historical data.
It was originally designed for page scraping (more precisely, web scraping), but it can also be used to fetch data returned by APIs (such as Amazon Associates Web Services) or as a general-purpose web crawler. Scrapy has broad uses, including data mining, monitoring, and automated testing.
Scrapy uses the Twisted asynchronous networking library to handle network communication. The overall architecture works roughly as follows:
1. The spider hands the initial URLs to the engine and asks the engine to pass them to the scheduler.
2. The engine passes the initial URLs to the scheduler, which places them in its queue.
3. The scheduler tells the engine a request is ready, hands a URL back to the engine, and asks the engine to send it to the downloader.
4. The engine passes the URL to the downloader, which downloads the page source.
5. The downloader tells the engine the download has finished and hands the page source back to the engine as a response.
6. The engine passes the response to the spider, which parses it and extracts data.
7. The spider hands the extracted data to the engine, asking it to send any newly discovered URLs to the scheduler for queuing and the items to the Item Pipelines for storage.
8. The Item Pipelines save the extracted data and then tell the engine it can move on to the next URL.
9. Steps 3-8 repeat until the scheduler has no URLs left, at which point the crawl shuts down (if a URL fails to download, it is handed back for re-downloading).
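In code, the same loop looks roughly like the minimal sketch below (the spider name and URL are placeholders, not part of the tutorial project): start_urls seeds the scheduler, each downloaded response arrives in parse(), and whatever parse() yields is routed either to the Item Pipelines (items) or back to the scheduler (new Requests).

import scrapy


class ExampleSpider(scrapy.Spider):
    name = 'example'                          # placeholder spider name
    start_urls = ['http://example.com/']      # step 1: initial URLs handed to the engine/scheduler

    def parse(self, response):
        # steps 5-6: the downloader's response comes back here for parsing
        yield {'title': response.css('title::text').extract_first()}   # step 7: item -> Item Pipelines
        for href in response.css('a::attr(href)').extract():
            # step 7: new URLs go back to the scheduler; each response triggers parse() again
            yield scrapy.Request(url=response.urljoin(href), callback=self.parse)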
Scrapy basic commands:

# Create a project: creates a project directory under the current directory (similar to Django)
scrapy startproject sp1

# Generated layout:
sp1
  - sp1
    - spiders          directory that holds the spiders you create
    - middlewares.py   middlewares
    - items.py         item definitions; used together with pipelines.py for persistence
    - pipelines.py     persistence
    - settings.py      settings file
  - scrapy.cfg         project configuration

# Create spiders
cd sp1
scrapy genspider xiaohuar xiaohuar.com   # creates xiaohuar.py
scrapy genspider baidu baidu.com         # creates baidu.py

# List the spiders in the project
scrapy list

# Run a spider (from inside the project)
scrapy crawl baidu
scrapy crawl baidu --nolog
File descriptions: see the notes next to each file in the layout above.
Note: a spider file is usually named after the domain of the site it crawls.
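For reference, the skeleton that scrapy genspider baidu baidu.com generates looks roughly like this (the exact template varies between Scrapy versions):

# -*- coding: utf-8 -*-
import scrapy


class BaiduSpider(scrapy.Spider):
    name = 'baidu'
    allowed_domains = ['baidu.com']
    start_urls = ['http://baidu.com/']

    def parse(self, response):
        pass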
items.py defines the fields each scraped item carries:

# -*- coding: utf-8 -*-
import scrapy


class QuoteItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    text = scrapy.Field()    # the quote text
    author = scrapy.Field()  # the author
    tags = scrapy.Field()    # the tags
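A QuoteItem behaves like a dictionary restricted to its declared fields, which is why the spider below can simply assign item['text'], item['author'], and item['tags']. A quick illustration:

item = QuoteItem()
item['text'] = 'a quote'      # declared field: OK
item['author'] = 'an author'
print(dict(item))             # {'text': 'a quote', 'author': 'an author'}
# item['foo'] = 1             # undeclared field: raises KeyError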
quotes.py, the spider, extracts the items and follows the pagination links:

# -*- coding: utf-8 -*-
import scrapy
from quotetutorial.items import QuoteItem


class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # print(response.text)
        quotes = response.css('.quote')  # one selector per quote block
        for quote in quotes:
            item = QuoteItem()  # create an item to fill in
            # ::text selects text nodes; extract_first() returns the first match
            text = quote.css('.text::text').extract_first()
            author = quote.css('.author::text').extract_first()
            # extract() returns every match, so we get all tags of the quote
            tags = quote.css('.tags .tag::text').extract()
            item['text'] = text
            item['author'] = author
            item['tags'] = tags
            yield item
        # ::attr(href) selects an attribute value; here, the link to the next page
        next_page = response.css('.pager .next a::attr(href)').extract_first()
        url = response.urljoin(next_page)  # turn the relative link into an absolute URL
        yield scrapy.Request(url=url, callback=self.parse)  # parse() is the callback for the next page
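The CSS selectors used above can be checked interactively with scrapy shell before running the full crawl:

scrapy shell 'http://quotes.toscrape.com/'
>>> quote = response.css('.quote')[0]
>>> quote.css('.text::text').extract_first()                    # the quote text
>>> quote.css('.tags .tag::text').extract()                     # all tags of this quote
>>> response.css('.pager .next a::attr(href)').extract_first()  # relative link to the next page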
pipelines.py cleans the items and stores them in MongoDB:

# -*- coding: utf-8 -*-
import pymongo
from scrapy.exceptions import DropItem


class TextPipeline(object):
    # Trim the quote text: if it is longer than 50 characters, truncate it and append '...'
    def __init__(self):
        self.limit = 50

    def process_item(self, item, spider):
        if item['text']:
            if len(item['text']) > self.limit:
                item['text'] = item['text'][0:self.limit].rstrip() + '...'
            return item
        else:
            raise DropItem('Missing Text')


class MongoPipeline(object):
    # Store items in MongoDB
    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        # class method: pull the connection details from settings.py
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DB')
        )

    def open_spider(self, spider):
        # open the database connection when the spider starts
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def process_item(self, item, spider):
        # insert the item into a collection named after the item class (QuoteItem)
        name = item.__class__.__name__
        self.db[name].insert_one(dict(item))
        return item

    def close_spider(self, spider):
        self.client.close()
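Any other storage backend follows the same open_spider / process_item / close_spider hooks. As a minimal sketch (the quotes.jl filename is just an assumption for illustration), a pipeline that appends each item to a local JSON Lines file would look like this, and it would be registered in ITEM_PIPELINES the same way as the two above:

import json


class JsonLinesPipeline(object):
    def open_spider(self, spider):
        # open the output file when the spider starts (filename chosen for illustration)
        self.file = open('quotes.jl', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # one JSON object per line
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        self.file.close()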
settings.py holds the project configuration, including the MongoDB connection details and the pipeline registration:

# -*- coding: utf-8 -*-

# Scrapy settings for quotetutorial project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'quotetutorial'

SPIDER_MODULES = ['quotetutorial.spiders']
NEWSPIDER_MODULE = 'quotetutorial.spiders'

MONGO_URI = 'localhost'
MONGO_DB = 'quotestutorial'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'quotetutorial (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'quotetutorial.middlewares.QuotetutorialSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'quotetutorial.middlewares.QuotetutorialDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'quotetutorial.pipelines.TextPipeline': 300,
   'quotetutorial.pipelines.MongoPipeline': 400,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
Note: if there are several item pipelines (several ways of saving the data), each class must be registered in ITEM_PIPELINES; the number after it (300 above) can be chosen freely.
The integer assigned to each class determines the order in which the pipelines run: the lower the value, the higher the component's priority and the earlier it runs. Here TextPipeline (300) runs before MongoPipeline (400), so the text is already truncated by the time it is stored.
Run the spider, optionally exporting the scraped items with -o (the file extension selects the feed format):

scrapy crawl quotes
scrapy crawl quotes -o quotes.{json | jl | csv | xml | pickle | marshal}
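As a quick check (a sketch that assumes a MongoDB instance is running on localhost and pymongo >= 3.7), the items stored by MongoPipeline can be read back directly; the database name comes from MONGO_DB and the collection is named after the item class, QuoteItem:

import pymongo

client = pymongo.MongoClient('localhost')
db = client['quotestutorial']
print(db['QuoteItem'].count_documents({}))   # number of stored quotes
for doc in db['QuoteItem'].find().limit(3):
    print(doc['author'], '-', doc['text'])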