Scrapy is a step up for crawler development: it fetches target content concurrently, simplifies code logic, and improves development efficiency, which makes it popular among crawler developers. This article uses a stock-quotes website as an example to walk through building a crawler with Scrapy. It is intended for learning and reference only; corrections are welcome.
Scrapy is an application framework written in Python for crawling websites and extracting structured data. It handles network communication with Twisted, an efficient asynchronous networking framework. Scrapy architecture:
The components of the Scrapy architecture are described as follows:
Scrapy data flow:
From the command line, install Scrapy with pip install scrapy, as shown below:
When the following message appears, the installation succeeded.
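A quick way to double-check the installation is to ask Scrapy for its version from the command line:

scrapy version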
From the command line, switch to the directory where the project should live and create the crawler project with scrapy startproject stockstar, as shown below:
Following the prompt, create a spider from the provided template (command format: scrapy genspider <spider-name> <domain>), as shown below:
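Put together, the project and spider used in this article can be created with roughly the following commands (the spider name stock and the domain match the code shown later):

scrapy startproject stockstar
cd stockstar
scrapy genspider stock quote.stockstar.com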
Note: the spider name must not be identical to the project name, otherwise an error is raised, as shown below:
Open the newly created Scrapy project in PyCharm, as shown below:
This example crawls the stock IDs and names from the quotes center of a securities website, as shown below:
Once the project has been created from the command line, the basic Scrapy scaffolding is in place; what remains is to fill in the business code.
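For orientation, a freshly generated project has roughly the following layout (standard Scrapy scaffolding; stock.py is the spider created by genspider above):

stockstar/
    scrapy.cfg            # deployment/configuration entry point
    stockstar/
        __init__.py
        items.py          # item (field) definitions
        middlewares.py    # spider and downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/
            __init__.py
            stock.py      # the spider created by genspider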
Define the fields to be crawled in items.py, as shown below:
import scrapy


class StockstarItem(scrapy.Item):
    """
    Define the fields to be crawled.
    """
    # define the fields for your item here like:
    # name = scrapy.Field()
    stock_type = scrapy.Field()  # stock type
    stock_id = scrapy.Field()    # stock ID
    stock_name = scrapy.Field()  # stock name
The structure of a Scrapy spider is fixed: define a class that inherits from scrapy.Spider, declare its attributes (spider name, allowed domains, start URLs), and override the parent's parse method. The crawling logic specific to the target pages goes into parse, as shown below:
import scrapy

from stockstar.items import StockstarItem


class StockSpider(scrapy.Spider):
    name = 'stock'
    allowed_domains = ['quote.stockstar.com']  # allowed domain
    start_urls = ['http://quote.stockstar.com/stock/stock_index.htm']  # starting URL

    def parse(self, response):
        """
        Parse the stock index page.
        :param response:
        :return:
        """
        styles = ['滬A', '滬B', '深A', '深B']
        # enumerate: the loop index matches the numeric suffix of each ul id
        for index, style in enumerate(styles):
            print('******************** crawling ' + style + ' stocks ********************')
            ids = response.xpath(
                '//div[@class="w"]/div[@class="main clearfix"]/div[@class="seo_area"]/div['
                '@class="seo_keywordsCon"]/ul[@id="index_data_' + str(index) + '"]/li/span/a/text()').getall()
            names = response.xpath(
                '//div[@class="w"]/div[@class="main clearfix"]/div[@class="seo_area"]/div['
                '@class="seo_keywordsCon"]/ul[@id="index_data_' + str(index) + '"]/li/a/text()').getall()
            for i in range(len(ids)):
                # create a fresh item per row; yielding one shared item would
                # hand the same mutable object to the pipeline repeatedly
                item = StockstarItem()
                item['stock_type'] = style
                item['stock_id'] = str(ids[i])
                item['stock_name'] = str(names[i])
                yield item
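XPath expressions like the ones above are easiest to work out interactively. scrapy shell fetches a page and drops into a Python prompt with response already populated; the shortened XPath below is just an illustration:

scrapy shell http://quote.stockstar.com/stock/stock_index.htm
>>> response.xpath('//ul[@id="index_data_0"]/li/span/a/text()').getall()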
The pipeline processes the scraped items; to keep this example simple, it just prints them to the console, as shown below:
class StockstarPipeline:
    def process_item(self, item, spider):
        print('stock type>>>>' + item['stock_type']
              + ' stock id>>>>' + item['stock_id']
              + ' stock name>>>>' + item['stock_name'])
        return item
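In practice, items usually go somewhere more durable than the console. Below is a minimal sketch of a pipeline that appends each item to a JSON-lines file; the class name JsonWriterPipeline and the file name stocks.jl are arbitrary choices for illustration:

import json


class JsonWriterPipeline:
    """Illustrative pipeline: write each item to a JSON-lines file."""

    def open_spider(self, spider):
        self.file = open('stocks.jl', 'w', encoding='utf-8')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

To take effect, such a pipeline would have to be registered in ITEM_PIPELINES in settings.py, just like StockstarPipeline below.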
Note: item values can only be assigned dict-style, item['key'] = value; attribute-style assignment (item.key = value) is not supported.
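A quick illustration (on scrapy.Item subclasses, attribute-style assignment raises AttributeError; the value below is arbitrary):

item = StockstarItem()
item['stock_id'] = '600000'  # correct: dict-style assignment
# item.stock_id = '600000'   # wrong: raises AttributeError
print(dict(item))            # items convert cleanly to a plain dict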
Configuration goes in settings.py, covering request headers, pipelines, the robots protocol, and more, as shown below:
# Scrapy settings for stockstar project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'stockstar'

SPIDER_MODULES = ['stockstar.spiders']
NEWSPIDER_MODULE = 'stockstar.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'stockstar (+http://www.yourdomain.com)'

# Obey robots.txt rules (whether to honor the robots protocol)
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    # 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Mobile Safari/537.36'
    # 'Accept-Language': 'en,zh-CN,zh;q=0.9'
}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'stockstar.middlewares.StockstarSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'stockstar.middlewares.StockstarDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'stockstar.pipelines.StockstarPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
Because a Scrapy project is a set of separate modules rather than a single script, it is normally run from the terminal, in the format scrapy crawl <spider-name>, as shown below:
scrapy crawl stock
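If launching from a script is more convenient (for example, to debug in PyCharm), here is a minimal sketch using Scrapy's CrawlerProcess, assuming the script sits next to scrapy.cfg in the project root:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# load this project's settings.py and run the 'stock' spider
process = CrawlerProcess(get_project_settings())
process.crawl('stock')
process.start()  # blocks until the crawl finishes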
The crawl result is shown in the figure below:
This example is intentionally simple and only demonstrates common Scrapy usage; everything it crawls is already present in the source returned by the first request, i.e. what you see is what you get.
Two small issues remain:
Both will be analyzed further when they come up later. To close, here is Tao Yuanming's poem "Returning to Garden and Field," shared with you.
Returning to Garden and Field (I)
In youth I never fit the common mold; by nature I loved the hills and mountains. By mistake I fell into the world's dusty net, and thirty years slipped by.
The caged bird longs for its old forest; the pond fish misses its former depths. I clear wasteland at the edge of the southern wilds; keeping to simplicity, I return to garden and field.
My plot is some ten mu; my thatched hut has eight or nine rooms. Elms and willows shade the back eaves; peach and plum stand ranged before the hall.
Dim in the distance lies a village; soft and lingering, the smoke of its hearths. A dog barks deep in the lane; a cock crows atop a mulberry tree.
No dust or clutter within my gate and yard; in the bare rooms, leisure to spare. Long was I held inside the cage; now I return to nature once more.