Goal: obtain the names and trading information of all stocks listed on the Shanghai and Shenzhen stock exchanges.
Output: save the data to a file.
Technical approach: the Scrapy crawler framework
Language: Python 3.5
Since the principle behind crawling this stock information was already covered in an earlier post, it is not repeated here; see that post (link) for details. This article focuses on how to implement the project with the Scrapy framework.
The Scrapy framework is built around an engine that coordinates a scheduler, a downloader, spiders, and item pipelines.
Within this framework we mainly need to do two things:
(1) write a spider that crawls the links and parses the pages;
(2) write a pipeline that processes the parsed stock data and stores it in a file.
Steps:
(1) Create a project and generate the Spider template
Open a cmd window, change to the directory where the project should live, and run: scrapy startproject BaiduStocks. This creates a new project named BaiduStocks in that directory. Next, enter the project with: cd BaiduStocks, and then generate a spider with: scrapy genspider stocks baidu.com. Afterwards you will find a stocks.py file under the spiders/ directory.
(2) Write the Spider: configure stocks.py, changing how returned pages are processed and how crawl requests for newly discovered URLs are generated
Open the stocks.py file; the generated template looks like this:
# -*- coding: utf-8 -*-
import scrapy


class StocksSpider(scrapy.Spider):
    name = 'stocks'
    allowed_domains = ['baidu.com']
    start_urls = ['http://baidu.com/']

    def parse(self, response):
        pass
Modify the code above as follows:
# -*- coding: utf-8 -*-
import scrapy
import re


class StocksSpider(scrapy.Spider):
    name = "stocks"
    start_urls = ['http://quote.eastmoney.com/stocklist.html']

    def parse(self, response):
        # Walk every link on the listing page and keep only those whose
        # href contains a stock code of the form sh###### or sz######.
        for href in response.css('a::attr(href)').extract():
            try:
                stock = re.findall(r"[s][hz]\d{6}", href)[0]
                url = 'https://gupiao.baidu.com/stock/' + stock + '.html'
                yield scrapy.Request(url, callback=self.parse_stock)
            except:
                # Links without a stock code raise IndexError; skip them.
                continue

    def parse_stock(self, response):
        # Parse the <dt>/<dd> key-value pairs on an individual stock page.
        infoDict = {}
        stockInfo = response.css('.stock-bets')
        name = stockInfo.css('.bets-name').extract()[0]
        keyList = stockInfo.css('dt').extract()
        valueList = stockInfo.css('dd').extract()
        for i in range(len(keyList)):
            key = re.findall(r'>.*</dt>', keyList[i])[0][1:-5]
            try:
                val = re.findall(r'\d+\.?.*</dd>', valueList[i])[0][0:-5]
            except:
                val = '--'
            infoDict[key] = val
        infoDict.update(
            {'股票名稱': re.findall('\s.*\(', name)[0].split()[0] +
                         re.findall('\>.*\<', name)[0][1:-1]})
        yield infoDict
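To make the link-filtering step concrete, the snippet below runs the same regular expression on a few made-up hrefs (the sample URLs are illustrative only, not taken from the real listing page); links without a sh/sz code produce an empty list, which is what makes the except branch above skip them.

import re

# Hypothetical sample hrefs for illustration only.
sample_hrefs = [
    'http://quote.eastmoney.com/sh600104.html',  # Shanghai-style code
    'http://quote.eastmoney.com/sz000002.html',  # Shenzhen-style code
    'http://quote.eastmoney.com/about.html',     # no stock code
]

for href in sample_hrefs:
    print(href, '->', re.findall(r"[s][hz]\d{6}", href))
# Expected output: ['sh600104'], ['sz000002'], and [] respectively.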
(3) Configure pipelines.py and define the class that handles the scraped items
Open the pipelines.py file; the generated template looks like this:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html


class BaidustocksPipeline(object):
    def process_item(self, item, spider):
        return item
Modify the code above as follows:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html


class BaidustocksPipeline(object):
    def process_item(self, item, spider):
        return item


# A pipeline class can define the following three methods.
class BaidustocksInfoPipeline(object):
    # Called when the spider is opened: open the output file.
    def open_spider(self, spider):
        self.f = open('BaiduStockInfo.txt', 'w')

    # Called when the spider is closed or finishes: close the output file.
    def close_spider(self, spider):
        self.f.close()

    # Called for every scraped item; this is the core method of a pipeline.
    def process_item(self, item, spider):
        try:
            line = str(dict(item)) + '\n'
            self.f.write(line)
        except:
            pass
        return item
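str(dict(item)) writes Python dict literals to the file. If you would rather have one JSON record per line (easier to load back later), a variant pipeline could look like the sketch below; the class name is hypothetical and this is not part of the original tutorial:

import json


class BaidustocksJsonPipeline(object):
    # Hypothetical alternative pipeline that writes one JSON object per line.

    def open_spider(self, spider):
        # encoding='utf-8' avoids depending on the platform's default encoding.
        self.f = open('BaiduStockInfo.jsonl', 'w', encoding='utf-8')

    def close_spider(self, spider):
        self.f.close()

    def process_item(self, item, spider):
        # ensure_ascii=False keeps the Chinese field names readable in the file.
        self.f.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item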
(4) Edit settings.py so that the framework can find the class we wrote in pipelines.py
Add the following to settings.py:
# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'BaiduStocks.pipelines.BaidustocksInfoPipeline': 300,
}
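The value 300 is the pipeline's order: when several pipelines are enabled, Scrapy runs them in ascending order of this number, conventionally chosen in the 0-1000 range. With only one pipeline enabled here, any value in that range works.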
At this point, the program is complete.
(5) Run the program
At the command line, from inside the project directory, enter: scrapy crawl stocks
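When the crawl finishes, the scraped records should appear in BaiduStockInfo.txt in the directory from which the command was run. As an aside, Scrapy's built-in feed export can also dump items without a custom pipeline, e.g. scrapy crawl stocks -o stocks.json, if that better suits your needs.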