Python Web Crawler Tutorial: A Zhihu Spider Example

1. zhihuSpider.py spider code:

#!/usr/bin/env python
# -*- coding:utf-8 -*-
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.selector import Selector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.http import Request, FormRequest
from zhihu.items import ZhihuItem

class ZhihuSpider(CrawlSpider):
    name = "zhihu"
    allowed_domains = ["www.zhihu.com"]
    start_urls = ["http://www.zhihu.com"]
    rules = (
        Rule(SgmlLinkExtractor(allow=(r'/question/\d+#.*?',)),
             callback='parse_page', follow=True),
        Rule(SgmlLinkExtractor(allow=(r'/question/\d+',)),
             callback='parse_page', follow=True),
    )
    headers = {
        "Accept": "*/*",
        "Accept-Encoding": "gzip,deflate",
        "Accept-Language": "en-US,en;q=0.8,zh-TW;q=0.6,zh;q=0.4",
        "Connection": "keep-alive",
        "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) "
                      "AppleWebKit/537.36 (KHTML, like Gecko) "
                      "Chrome/38.0.2125.111 Safari/537.36",
        "Referer": "http://www.zhihu.com/"
    }

    # Override the spider's default entry point to send a custom login
    # request first; when it completes, the callback below is invoked.
    def start_requests(self):
        return [Request("https://www.zhihu.com/login",
                        meta={'cookiejar': 1},
                        callback=self.post_login)]

    # A bare FormRequest ran into problems, hence from_response below.
    def post_login(self, response):
        print 'Preparing login'
        # Extract the _xsrf token from the returned login page; it must
        # be submitted with the form for the login to succeed.
        xsrf = Selector(response).xpath(
            '//input[@name="_xsrf"]/@value').extract()[0]
        print xsrf
        # FormRequest.from_response is a Scrapy helper for POSTing a form.
        # After a successful login, the after_login callback is invoked.
        return [FormRequest.from_response(
            response,  # "http://www.zhihu.com/login",
            meta={'cookiejar': response.meta['cookiejar']},
            headers=self.headers,  # note the headers passed here
            formdata={
                '_xsrf': xsrf,
                'email': '1095511864@qq.com',
                'password': '123456'
            },
            callback=self.after_login,
            dont_filter=True
        )]

    def after_login(self, response):
        for url in self.start_urls:
            yield self.make_requests_from_url(url)

    def parse_page(self, response):
        problem = Selector(response)
        item = ZhihuItem()
        item['url'] = response.url
        item['name'] = problem.xpath('//span[@class="name"]/text()').extract()
        print item['name']
        item['title'] = problem.xpath(
            '//h2[@class="zm-item-title zm-editable-content"]/text()').extract()
        item['description'] = problem.xpath(
            '//div[@class="zm-editable-content"]/text()').extract()
        item['answer'] = problem.xpath(
            '//div[@class=" zm-editable-content clearfix"]/text()').extract()
        return item
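Assuming the standard Scrapy project layout implied by the imports above (a zhihu project package containing this spider), the crawl can be started from the project root with Scrapy's built-in command, optionally exporting the scraped items through the feed exporter:

scrapy crawl zhihu -o items.json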

2. Item class definition

from scrapy.item import Item, Field

class ZhihuItem(Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    url = Field()          # URL of the scraped question
    title = Field()        # question title
    description = Field()  # question description
    answer = Field()       # answers to the question
    name = Field()         # user name
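The tutorial stops at defining the item, but to make the example end-to-end, here is a minimal sketch of an item pipeline that persists scraped items as JSON lines. The pipeline class, output file name, and ITEM_PIPELINES entry are illustrative assumptions, not part of the original project:

# pipelines.py -- illustrative sketch, not from the original tutorial
import json

class JsonWriterPipeline(object):
    def open_spider(self, spider):
        # hypothetical output file name
        self.file = open('zhihu_items.jl', 'w')

    def process_item(self, item, spider):
        # each ZhihuItem behaves like a dict, so it serializes directly
        self.file.write(json.dumps(dict(item)) + '\n')
        return item

    def close_spider(self, spider):
        self.file.close()

To activate it, register it in settings.py, e.g. ITEM_PIPELINES = {'zhihu.pipelines.JsonWriterPipeline': 300}.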

3. settings.py: setting the crawl delay

BOT_NAME = 'zhihu'
SPIDER_MODULES = ['zhihu.spiders']
NEWSPIDER_MODULE = 'zhihu.spiders'
DOWNLOAD_DELAY = 0.25  # set the download delay to 250 ms
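A few related Scrapy settings are often tuned together with DOWNLOAD_DELAY. The options below are standard Scrapy settings, though the values shown are illustrative (they happen to match the defaults) and are not from the original tutorial:

RANDOMIZE_DOWNLOAD_DELAY = True  # scatter the delay between 0.5x and 1.5x of DOWNLOAD_DELAY
COOKIES_ENABLED = True  # the login flow above depends on cookies
CONCURRENT_REQUESTS_PER_DOMAIN = 8  # cap simultaneous requests to a single domain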

4. How cookies work

HTTP is a stateless protocol (even though it runs on top of connection-oriented TCP), so the Cookie mechanism was introduced to preserve state across requests.

A cookie is an attribute carried in HTTP message headers and includes:

- the cookie's name (Name)
- the cookie's value (Value)
- the expiration time (Expires/Max-Age)
- the path it applies to (Path)
- the domain it belongs to (Domain)
- whether the cookie may only be sent over a secure connection (Secure)

The first two, Name and Value, are required for a cookie to be usable. In addition, browsers limit cookie size (Size) and count, and these limits differ between browsers.
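To make these attributes concrete, here is a minimal sketch (in the same Python 2 style as the spider code) using the standard-library cookielib and urllib2 modules to capture the cookies a server sets and print their attributes; this mirrors what Scrapy's 'cookiejar' meta key manages internally:

import cookielib
import urllib2

# a jar that stores every cookie the server sets via Set-Cookie headers
jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
response = opener.open("http://www.zhihu.com/")

# print the main attributes of each received cookie
for cookie in jar:
    print cookie.name, cookie.value, cookie.domain, cookie.path, cookie.expires, cookie.secure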
