Installation steps differ by operating system; see the official documentation, Install Scrapy.
Enter the following at the command line:
```shell
scrapy startproject tutorial
```
Enter the project directory and create a spider:
```shell
cd tutorial
scrapy genspider quotes domain.com
```
```python
import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
```
To run Scrapy, enter this command in the project's top-level directory:
```shell
scrapy crawl quotes
```
In the QuotesSpider class, name identifies the spider, start_requests issues the initial requests, and parse handles the responses. start_requests can be replaced with a start_urls list; Scrapy will then issue the requests automatically and use parse as the default callback. Other parameters can also be set; see the documentation for details.
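As a concrete illustration of that shortcut, here is a minimal sketch of the same spider rewritten with start_urls (the parse body is unchanged from the example above):

```python
import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    # Scrapy builds the initial requests from this list and
    # routes each response to parse() by default.
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
        'http://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
```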
Scrapy ships with built-in CSS and XPath selectors; you can of course use another parsing library instead, such as BeautifulSoup. Let's briefly demonstrate the built-in selectors with scrapy shell. At the command line, enter:
```shell
scrapy shell https://docs.scrapy.org/en/latest/_static/selectors-sample1.html
```
The sample page's source:
```html
<html>
 <head>
  <base href='http://example.com/' />
  <title>Example website</title>
 </head>
 <body>
  <div id='images'>
   <a href='image1.html'>Name: My image 1 <br /><img src='image1_thumb.jpg' /></a>
   <a href='image2.html'>Name: My image 2 <br /><img src='image2_thumb.jpg' /></a>
   <a href='image3.html'>Name: My image 3 <br /><img src='image3_thumb.jpg' /></a>
   <a href='image4.html'>Name: My image 4 <br /><img src='image4_thumb.jpg' /></a>
   <a href='image5.html'>Name: My image 5 <br /><img src='image5_thumb.jpg' /></a>
  </div>
 </body>
</html>
```
```python
# Get the title
# 'selector' can be omitted
# extract() returns a list
response.selector.xpath('//title/text()').extract_first()
response.selector.css('title::text').extract_first()

# Get the content of the a tags' href attributes
response.xpath('//a/@href').extract()
response.css('a::attr(href)').extract()

# Mix XPath and CSS to get the src attribute of the img tags
response.xpath('//div[@id="images"]').css('img::attr(src)').extract()

# Get the href attributes of a tags whose href contains "image"
response.xpath('//a[contains(@href, "image")]/@href').extract()
response.css('a[href*=image]::attr(href)').extract()

# Use regular expressions
response.css('a::text').re(r'Name:(.*)')
response.css('a::text').re_first(r'Name:(.*)')

# Pass a default argument to specify the value returned when nothing matches
response.css('aa').extract_first(default='')
```
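To put the selectors to work in a spider, the parse callback from earlier could yield structured data instead of saving raw HTML. A sketch against the markup of quotes.toscrape.com (the CSS classes div.quote, span.text, and small.author come from that site, not from the sample page above):

```python
def parse(self, response):
    # Each quote on the page lives in a <div class="quote"> block
    for quote in response.css('div.quote'):
        yield {
            'text': quote.css('span.text::text').extract_first(),
            'author': quote.css('small.author::text').extract_first(),
        }
```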
Items returned by the parse callback can be fed through an Item Pipeline for further processing, mainly data cleaning and formatting.
```python
from scrapy.exceptions import DropItem


# Filter out duplicate items
class DuplicatePipeline(object):
    def __init__(self):
        self.items = set()

    def process_item(self, item, spider):
        if item['id'] in self.items:
            raise DropItem('Duplicate item found: %s' % item['id'])
        else:
            self.items.add(item['id'])
            return item
```
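The pipeline above assumes each item carries an id field. A minimal sketch of a matching Item definition (the class and field names are assumptions for illustration, not part of the original project):

```python
import scrapy


class QuoteItem(scrapy.Item):
    # 'id' exists only so DuplicatePipeline can deduplicate on it
    id = scrapy.Field()
    text = scrapy.Field()
    author = scrapy.Field()
```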
The custom pipeline then needs to be registered in settings.py:
```python
ITEM_PIPELINES = {
    'tutorial.pipelines.TutorialPipeline': 300,
    'tutorial.pipelines.DuplicatePipeline': 200,
}
```
The lower the number, the higher the priority.