Continuing from the previous post, "Scraping Douban Group Data with Scrapy (Part 1)": http://my.oschina.net/chengye/blog/124157
1. Import CrawlSpider, another predefined spider class in Scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
2. Define a new class GroupSpider based on CrawlSpider, and add the corresponding crawl rules.
class GroupSpider(CrawlSpider):
    name = "Group"
    allowed_domains = ["douban.com"]
    start_urls = [
        "http://www.douban.com/group/explore?tag=%E8%B4%AD%E7%89%A9",
        "http://www.douban.com/group/explore?tag=%E7%94%9F%E6%B4%BB",
        "http://www.douban.com/group/explore?tag=%E7%A4%BE%E4%BC%9A",
        "http://www.douban.com/group/explore?tag=%E8%89%BA%E6%9C%AF",
        "http://www.douban.com/group/explore?tag=%E5%AD%A6%E6%9C%AF",
        "http://www.douban.com/group/explore?tag=%E6%83%85%E6%84%9F",
        "http://www.douban.com/group/explore?tag=%E9%97%B2%E8%81%8A",
        "http://www.douban.com/group/explore?tag=%E5%85%B4%E8%B6%A3"
    ]

    rules = [
        Rule(SgmlLinkExtractor(allow=('/group/[^/]+/$', )), callback='parse_group_home_page', process_request='add_cookie'),
        Rule(SgmlLinkExtractor(allow=('/group/explore\?tag', )), follow=True, process_request='add_cookie'),
    ]

start_urls predefines all of Douban's group category pages; the spider starts from these pages to discover groups.
The rules definition is the most important part of a CrawlSpider. It can be read as: when the spider encounters a certain type of page, this is how it should be handled.
For example, the following rule handles pages whose URLs end with /group/XXXX/: it calls parse_group_home_page as the callback, and calls add_cookie before the request is sent to attach cookie information.
Rule(SgmlLinkExtractor(allow=('/group/[^/]+/$', )), callback='parse_group_home_page', process_request='add_cookie'),

Another example: the following rule fetches the matching pages and automatically follows the links they contain for further crawling, but does not otherwise process the page content.
Rule(SgmlLinkExtractor(allow=('/group/explore\?tag', )), follow=True, process_request='add_cookie'),
Define the following function, and, as described above, add process_request='add_cookie' to the Rule definitions.
def add_cookie(self, request):
    # Request.replace() returns a copy, so return the new request that carries the cookies
    return request.replace(cookies=[
        {'name': 'COOKIE_NAME', 'value': 'VALUE', 'domain': '.douban.com', 'path': '/'},
    ])

Most websites use cookies on the client side to store the user's session information, so attaching cookie data lets the spider impersonate a logged-in user while scraping.
First, you can try attaching a logged-in user's cookies when scraping. Even if the pages you scrape are public, sending cookies may keep the spider from being banned at the application layer. I haven't actually verified this, but it certainly does no harm.
Second, even as an authorized user, if you make requests too frequently your IP may still get banned, so you generally need the spider to rest 1-2 seconds between requests.
Also, configure the User-Agent, and rotate between different User-Agents when fetching pages whenever possible (a middleware sketch follows the settings below).
In the Scrapy project's settings.py, add the following settings:
DOWNLOAD_DELAY = 2
RANDOMIZE_DOWNLOAD_DELAY = True
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5'
COOKIES_ENABLED = True
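The USER_AGENT setting above pins a single browser string. To actually rotate User-Agents, as suggested earlier, one common approach is a small downloader middleware. The following is only a minimal sketch, assuming it lives in a hypothetical douban/middlewares.py; the class name and the User-Agent pool are made up for illustration and are not part of the original project:

# douban/middlewares.py (hypothetical module)
import random

class RotateUserAgentMiddleware(object):
    # A small pool of User-Agent strings; extend or replace as needed
    user_agents = [
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.94 Safari/537.36',
        'Mozilla/5.0 (Windows NT 6.1; rv:21.0) Gecko/20100101 Firefox/21.0',
    ]

    def process_request(self, request, spider):
        # Pick a random User-Agent for every outgoing request
        request.headers['User-Agent'] = random.choice(self.user_agents)

It would then be enabled in settings.py with something like:

DOWNLOADER_MIDDLEWARES = {
    'douban.middlewares.RotateUserAgentMiddleware': 400,
}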
================
At this point, the spider for scraping Douban group pages is done. Next, you can follow the same pattern to define a Spider that scrapes data from the group discussion pages, and then just let the spiders crawl! Have Fun!
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from douban.items import DoubanItem
import re

class GroupSpider(CrawlSpider):
    name = "Group"
    allowed_domains = ["douban.com"]
    start_urls = [
        "http://www.douban.com/group/explore?tag=%E8%B4%AD%E7%89%A9",
        "http://www.douban.com/group/explore?tag=%E7%94%9F%E6%B4%BB",
        "http://www.douban.com/group/explore?tag=%E7%A4%BE%E4%BC%9A",
        "http://www.douban.com/group/explore?tag=%E8%89%BA%E6%9C%AF",
        "http://www.douban.com/group/explore?tag=%E5%AD%A6%E6%9C%AF",
        "http://www.douban.com/group/explore?tag=%E6%83%85%E6%84%9F",
        "http://www.douban.com/group/explore?tag=%E9%97%B2%E8%81%8A",
        "http://www.douban.com/group/explore?tag=%E5%85%B4%E8%B6%A3"
    ]

    rules = [
        Rule(SgmlLinkExtractor(allow=('/group/[^/]+/$', )), callback='parse_group_home_page', process_request='add_cookie'),
        # Rule(SgmlLinkExtractor(allow=('/group/[^/]+/discussion\?start\=(\d{1,4})$', )), callback='parse_group_topic_list', process_request='add_cookie'),
        Rule(SgmlLinkExtractor(allow=('/group/explore\?tag', )), follow=True, process_request='add_cookie'),
    ]

    def __get_id_from_group_url(self, url):
        # Extract the group id from a group home page URL
        m = re.search("^http://www.douban.com/group/([^/]+)/$", url)
        if m:
            return m.group(1)
        else:
            return 0

    def add_cookie(self, request):
        # Cookie values are omitted here; Request.replace() returns a new request, so return it
        return request.replace(cookies=[])

    def parse_group_topic_list(self, response):
        self.log("Fetch group topic list page: %s" % response.url)
        pass

    def parse_group_home_page(self, response):
        self.log("Fetch group home page: %s" % response.url)
        hxs = HtmlXPathSelector(response)
        item = DoubanItem()

        # get group name
        item['groupName'] = hxs.select('//h1/text()').re("^\s+(.*)\s+$")[0]

        # get group id
        item['groupURL'] = response.url
        groupid = self.__get_id_from_group_url(response.url)

        # get group members number
        members_url = "http://www.douban.com/group/%s/members" % groupid
        members_text = hxs.select('//a[contains(@href, "%s")]/text()' % members_url).re("\((\d+)\)")
        item['totalNumber'] = members_text[0]

        # get relative groups
        item['RelativeGroups'] = []
        groups = hxs.select('//div[contains(@class, "group-list-item")]')
        for group in groups:
            url = group.select('div[contains(@class, "title")]/a/@href').extract()[0]
            item['RelativeGroups'].append(url)
        # item['RelativeGroups'] = ','.join(relative_groups)
        return item
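As a starting point for the discussion-page spider mentioned above, the commented-out Rule in the listing could be enabled and the parse_group_topic_list stub filled in along the lines below. This is only a rough sketch: the XPath expressions and the topicTitle/topicURL fields are assumptions about Douban's markup and about DoubanItem, not verified against the site:

    # In rules, enable the discussion-list Rule:
    # Rule(SgmlLinkExtractor(allow=('/group/[^/]+/discussion\?start\=(\d{1,4})$', )),
    #      callback='parse_group_topic_list', process_request='add_cookie'),

    def parse_group_topic_list(self, response):
        self.log("Fetch group topic list page: %s" % response.url)
        hxs = HtmlXPathSelector(response)
        items = []
        # The table structure and class names below are assumptions about the page markup
        for row in hxs.select('//table[contains(@class, "olt")]//tr'):
            link = row.select('td[contains(@class, "title")]/a')
            if not link:
                continue
            item = DoubanItem()
            item['topicTitle'] = link.select('text()').extract()[0]  # assumed item field
            item['topicURL'] = link.select('@href').extract()[0]     # assumed item field
            items.append(item)
        return items

Once everything is in place, the spider can be started from the project root with the usual Scrapy command, e.g. scrapy crawl Group.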