CrawlSpider is in fact a subclass of Spider. Besides inheriting Spider's features and functionality, it adds more powerful features of its own, the most notable being the "LinkExtractors" link extractor. Spider is the base class of all spiders, and it is designed only to crawl the pages listed in start_urls; when the URLs extracted from those pages need to be crawled in turn, CrawlSpider is the better fit.
Create a Scrapy project: scrapy startproject projectName
Create a spider file: scrapy genspider -t crawl spiderName www.xxx.com -- compared with the earlier command, this one adds "-t crawl", meaning the generated spider file is based on the CrawlSpider class rather than the plain Spider base class.
The generated spider file looks like this:

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class ChoutidemoSpider(CrawlSpider):
    name = 'choutiDemo'
    #allowed_domains = ['www.chouti.com']
    start_urls = ['http://www.chouti.com/']

    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        i = {}
        #i['domain_id'] = response.xpath('//input[@id="sid"]/@value').extract()
        #i['name'] = response.xpath('//div[@id="name"]').extract()
        #i['description'] = response.xpath('//div[@id="description"]').extract()
        return i

- The from ... import statements bring in the CrawlSpider-related modules (LinkExtractor, CrawlSpider, Rule).
- The class definition shows that this spider is based on the CrawlSpider class.
- The rules attribute defines the link-extraction rules.
- parse_item is the parsing method (callback).

The biggest difference between CrawlSpider and Spider is that CrawlSpider has an extra rules attribute, whose job is to define the "extraction actions". rules can contain one or more Rule objects, and each Rule object contains a LinkExtractor object.
LinkExtractor(
    allow=r'Items/',       # links matching the regular expression are extracted; if empty, everything matches.
    deny=xxx,              # links matching this regular expression are NOT extracted.
    restrict_xpaths=xxx,   # links matching this XPath expression are extracted.
    restrict_css=xxx,      # links matching this CSS expression are extracted.
    deny_domains=xxx,      # domains whose links will not be extracted.
)

- Purpose: extract the links in a response that match the rules.
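A LinkExtractor can also be used on its own inside a callback to see exactly which links a rule would pick up. Below is a minimal sketch; the spider name, start URL and the allow pattern are illustrative assumptions, not taken from the examples in this post:

import scrapy
from scrapy.linkextractors import LinkExtractor


class LinkDemoSpider(scrapy.Spider):
    # hypothetical spider used only to print what a LinkExtractor would match
    name = 'linkDemo'
    start_urls = ['http://www.xxx.com/']          # placeholder start page

    page_link = LinkExtractor(allow=r'page=\d+')  # hypothetical pagination pattern

    def parse(self, response):
        # extract_links() returns a list of Link objects with .url and .text attributes
        for link in self.page_link.extract_links(response):
            print(link.url, link.text)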
Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True)

- Parameters:
  - Parameter 1: the link extractor to use.
  - Parameter 2: the rule (callback function) the rule parser uses to parse the data.
  - Parameter 3: whether to keep applying the link extractor to the pages reached through the extracted links. When callback is None, parameter 3 defaults to True.
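To make the follow behaviour concrete, here is a minimal sketch with two rules; the spider name, URL patterns and callback name are illustrative assumptions. The first rule has no callback, so follow defaults to True and pagination links keep being crawled; the second has a callback but no follow, so detail pages are parsed but their own links are not followed:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class FollowDemoSpider(CrawlSpider):
    # hypothetical spider, only to illustrate how follow works
    name = 'followDemo'
    start_urls = ['http://www.xxx.com/']

    rules = (
        # no callback: follow defaults to True, so page-number links keep being crawled
        Rule(LinkExtractor(allow=r'page=\d+')),
        # callback given, follow omitted: detail pages are parsed but not followed further
        Rule(LinkExtractor(allow=r'/detail/\d+'), callback='parse_detail'),
    )

    def parse_detail(self, response):
        pass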
The overall CrawlSpider crawling workflow:
a) The spider first fetches the page content of each start URL.
b) The link extractor extracts links from that page content according to the specified extraction rules.
c) The rule parser fetches the pages behind the extracted links and parses them according to the specified parsing rule (the callback).
d) The parsed data is wrapped in items and submitted to the pipeline for persistent storage.
A complete example: crawl every page of the qiushibaike joke list and store the author and content of each post.

Spider file

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from qiubaiBycrawl.items import QiubaibycrawlItem


class QiubaitestSpider(CrawlSpider):
    name = 'qiubaiTest'
    # start URL
    start_urls = ['http://www.qiushibaike.com/']

    # define the link extractor and specify its extraction rule
    page_link = LinkExtractor(allow=r'/8hr/page/\d+/')

    rules = (
        # define the rule parser and point its parsing rule to the callback function
        Rule(page_link, callback='parse_item', follow=True),
    )

    # the parsing function used by the rule parser
    def parse_item(self, response):
        div_list = response.xpath('//div[@id="content-left"]/div')
        for div in div_list:
            # build the item
            item = QiubaibycrawlItem()
            # extract the post's author with an XPath expression
            item['author'] = div.xpath('./div/a[2]/h2/text()').extract_first().strip('\n')
            # extract the post's content with an XPath expression
            item['content'] = div.xpath('.//div[@class="content"]/span/text()').extract_first().strip('\n')
            yield item  # submit the item to the pipeline
Items file

import scrapy


class QiubaibycrawlItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    author = scrapy.Field()   # author
    content = scrapy.Field()  # content
Pipeline file
class QiubaibycrawlPipeline(object):

    def __init__(self):
        self.fp = None

    def open_spider(self, spider):
        print('start crawling')
        self.fp = open('./data.txt', 'w')

    def process_item(self, item, spider):
        # write the item submitted by the spider to a file for persistent storage
        self.fp.write(item['author'] + ':' + item['content'] + '\n')
        return item

    def close_spider(self, spider):
        print('finished crawling')
        self.fp.close()
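For this pipeline to actually receive items it has to be enabled in the project's settings.py. A minimal sketch, assuming the project is named qiubaiBycrawl (as the import in the spider suggests); the exact values are the usual tutorial defaults, not copied from the original post:

# settings.py (only the relevant lines)
ITEM_PIPELINES = {
    'qiubaiBycrawl.pipelines.QiubaibycrawlPipeline': 300,
}
ROBOTSTXT_OBEY = False   # commonly disabled in this kind of tutorial project

The spider is then started with: scrapy crawl qiubaiTest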
A second example: crawl the job titles from the zhipin.com listing pages and the job descriptions from the detail pages.

Spider file
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from bossPro.items import DetailItem, FirstItem


# crawls the job title (list page) and the job description (detail page)
class BossSpider(CrawlSpider):
    name = 'boss'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://www.zhipin.com/c101010100/?query=python%E5%BC%80%E5%8F%91&page=1&ka=page-prev']

    # extract all the page-number links
    link = LinkExtractor(allow=r'page=\d+')
    # extract the detail-page links, e.g.
    # /job_detail/f2a47b2f40c53bd41XJ93Nm_GVQ~.html
    # /job_detail/47dc9803e93701581XN80ty7GFI~.html
    link_detail = LinkExtractor(allow=r'/job_detail/.*?html')

    rules = (
        Rule(link, callback='parse_item', follow=True),
        Rule(link_detail, callback='parse_detail'),
    )

    # parse the job titles from the pages reached through the page-number links
    def parse_item(self, response):
        li_list = response.xpath('//div[@class="job-list"]/ul/li')
        for li in li_list:
            item = FirstItem()
            job_title = li.xpath('.//div[@class="job-title"]/text()').extract_first()
            item['job_title'] = job_title
            # print(job_title)
            yield item

    # parse the job description from the detail pages
    def parse_detail(self, response):
        job_desc = response.xpath('//*[@id="main"]/div[3]/div/div[2]/div[2]/div[1]/div//text()').extract()
        item = DetailItem()
        job_desc = ''.join(job_desc)
        item['job_desc'] = job_desc
        yield item
Items file
# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class DetailItem(scrapy.Item):
    # define the fields for your item here like:
    job_desc = scrapy.Field()


class FirstItem(scrapy.Item):
    # define the fields for your item here like:
    job_title = scrapy.Field()
Pipeline file
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class BossproPipeline(object):

    f1, f2 = None, None

    def open_spider(self, spider):
        self.f1 = open('a.txt', 'w', encoding='utf-8')
        self.f2 = open('b.txt', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # at any given moment process_item receives exactly one item object,
        # so check which item class it belongs to before writing
        if item.__class__.__name__ == 'FirstItem':
            job_title = item['job_title']
            self.f1.write(job_title + '\n')
        else:
            job_desc = item['job_desc']
            self.f2.write(job_desc)
        return item

    def close_spider(self, spider):
        # close both output files when the spider finishes
        self.f1.close()
        self.f2.close()
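Dispatching on item.__class__.__name__ works, but comparing against the item class itself is slightly more robust (a rename of the class is caught by the interpreter instead of silently falling through to the else branch). A minimal alternative sketch; the class name BossproPipelineAlt is hypothetical, only the isinstance() check differs from the pipeline above:

from bossPro.items import FirstItem


class BossproPipelineAlt(object):
    # hypothetical variant of BossproPipeline, dispatching with isinstance()

    def open_spider(self, spider):
        self.f1 = open('a.txt', 'w', encoding='utf-8')
        self.f2 = open('b.txt', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # dispatch on the item class itself instead of comparing its name string
        if isinstance(item, FirstItem):
            self.f1.write(item['job_title'] + '\n')
        else:
            self.f2.write(item['job_desc'])
        return item

    def close_spider(self, spider):
        self.f1.close()
        self.f2.close()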
Settings file
# -*- coding: utf-8 -*-

# Scrapy settings for bossPro project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'bossPro'

SPIDER_MODULES = ['bossPro.spiders']
NEWSPIDER_MODULE = 'bossPro.spiders'

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'bossPro (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'bossPro.middlewares.BossproSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'bossPro.middlewares.BossproDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'bossPro.pipelines.BossproPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'