1. To store the crawled data in a local database, first create a MySQL database named example on the local machine, and then create a table for the movie data in it (named douban_db below), for example:
DROP TABLE IF EXISTS `douban_db`;
CREATE TABLE `douban_db` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `url` varchar(20) NOT NULL,
  `direct` varchar(30),
  `performer` date,
  `type` varchar(30),
  `district` varchar(20) NOT NULL,
  `language` varchar(30),
  `date` varchar(30),
  `time` varchar(30),
  `alias` varchar(20) NOT NULL,
  `score` varchar(30),
  `comments` varchar(300),
  `scenario` varchar(300),
  `IMDb` varchar(30),
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
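If it is more convenient to do this from Python rather than the mysql client, the same setup can be scripted with mysql.connector, the driver used later in this article. This is only a sketch; the credentials and host below are placeholders for your own environment:

# -*- coding: utf-8 -*-
import mysql.connector

# Placeholder credentials -- adjust to your own MySQL server.
cnx = mysql.connector.connect(user='root', password='root', host='127.0.0.1')
cur = cnx.cursor()

# Create the database, switch to it, and create the table described above.
cur.execute("CREATE DATABASE IF NOT EXISTS example DEFAULT CHARACTER SET utf8")
cur.execute("USE example")
cur.execute("""CREATE TABLE IF NOT EXISTS `douban_db` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `url` varchar(20) NOT NULL,
    `direct` varchar(30),
    `performer` date,
    `type` varchar(30),
    `district` varchar(20) NOT NULL,
    `language` varchar(30),
    `date` varchar(30),
    `time` varchar(30),
    `alias` varchar(20) NOT NULL,
    `score` varchar(30),
    `comments` varchar(300),
    `scenario` varchar(300),
    `IMDb` varchar(30),
    PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8""")

cnx.commit()
cur.close()
cnx.close()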
2. If you crawl with the open-source framework pyspider, the results are by default written into result.db, a SQLite database. To make the data easier to work with, we store the results in MySQL instead. The next step is therefore to override the on_result method and have it instantiate and call an SQL class of our own. A concrete example follows:
#!/usr/bin/env python
# -*- encoding: utf-8 -*-
# Created on 2015-03-20 09:46:20
# Project: fly_spider

import re
from pyspider.database.mysql.mysqldb import SQL
from pyspider.libs.base_handler import *


class Handler(BaseHandler):

    headers = {
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
        "Accept-Encoding": "gzip, deflate, sdch",
        "Accept-Language": "zh-CN,zh;q=0.8",
        "Cache-Control": "max-age=0",
        "Connection": "keep-alive",
        "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.101 Safari/537.36"
    }

    crawl_config = {
        "headers": headers,
        "timeout": 100
    }

    @every(minutes=24 * 60)
    def on_start(self):
        self.crawl('http://movie.douban.com/tag/', callback=self.index_page)

    @config(age=10 * 24 * 60 * 60)
    def index_page(self, response):
        for each in response.doc('a[href^="http"]').items():
            if re.match("http://movie.douban.com/tag/\w+", each.attr.href, re.U):
                self.crawl(each.attr.href, callback=self.list_page)

    @config(age=10 * 24 * 60 * 60, priority=2)
    def list_page(self, response):
        for each in response.doc('html > body > div#wrapper > div#content > div.grid-16-8.clearfix > div.article > div > table tr.item > td > div.pl2 > a').items():
            self.crawl(each.attr.href, priority=9, callback=self.detail_page)

    @config(priority=3)
    def detail_page(self, response):
        return {
            "url": response.url,
            "title": response.doc('html > body > #wrapper > #content > h1 > span').text(),
            "direct": ",".join(x.text() for x in response.doc('a[rel="v:directedBy"]').items()),
            "performer": ",".join(x.text() for x in response.doc('a[rel="v:starring"]').items()),
            "type": ",".join(x.text() for x in response.doc('span[property="v:genre"]').items()),
            # "district": "".join(x.text() for x in response.doc('a[rel="v:starring"]').items()),
            # "language": "".join(x.text() for x in response.doc('a[rel="v:starring"]').items()),
            "date": ",".join(x.text() for x in response.doc('span[property="v:initialReleaseDate"]').items()),
            "time": ",".join(x.text() for x in response.doc('span[property="v:runtime"]').items()),
            # "alias": "".join(x.text() for x in response.doc('a[rel="v:starring"]').items()),
            "score": response.doc('.rating_num').text(),
            "comments": response.doc('html > body > div#wrapper > div#content > div.grid-16-8.clearfix > div.article > div#comments-section > div.mod-hd > h2 > i').text(),
            "scenario": response.doc('html > body > div#wrapper > div#content > div.grid-16-8.clearfix > div.article > div.related-info > div#link-report.indent').text(),
            "IMDb": "".join(x.text() for x in response.doc('span[href]').items()),
        }

    def on_result(self, result):
        if not result or not result['title']:
            return
        sql = SQL()
        sql.replace('douban_db', **result)
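Most of the extraction work happens in detail_page, which pulls each field out of the movie page with CSS selectors (Douban exposes them through rel/property attributes such as v:directedBy and v:genre). A quick way to experiment with such selectors outside pyspider is pyquery, the library behind response.doc. The snippet below is only a sketch and assumes a movie page has already been saved locally as movie.html (a made-up filename):

# -*- coding: utf-8 -*-
from pyquery import PyQuery as pq

# movie.html is assumed to be a previously downloaded Douban movie page.
doc = pq(open('movie.html').read())

print ",".join(x.text() for x in doc('a[rel="v:directedBy"]').items())      # directors
print ",".join(x.text() for x in doc('span[property="v:genre"]').items())   # genres
print doc('.rating_num').text()                                             # rating score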
A few points about this handler deserve explanation:
a. To keep the server from recognising the client as a crawler and blocking its IP (which shows up as a 403 Forbidden response), we attach HTTP headers to every request so that it looks like an ordinary browser visit. The usage is as follows:
headers = {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
    "Accept-Encoding": "gzip, deflate, sdch",
    "Accept-Language": "zh-CN,zh;q=0.8",
    "Cache-Control": "max-age=0",
    "Connection": "keep-alive",
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.101 Safari/537.36"
}

crawl_config = {
    "headers": headers,
    "timeout": 100
}
b. @every(minutes=24 * 60) means on_start is triggered once a day, and @config(age=10 * 24 * 60 * 60) means the crawled data expires after 10 days; a URL already crawled within that window is treated as still valid and is not fetched again even when on_start fires.
c. The next point is the important one: overriding the on_result method, which in effect is polymorphism. Whenever a callback returns, on_result is executed on the return value; by default on_result flushes the data into SQLite, but if we want the data inserted into MySQL we have to override it, like this:
def on_result(self, result):
    if not result or not result['title']:
        return
    sql = SQL()
    sql.replace('douban_db', **result)
Note that the check if not result or not result['title']: is important; on_result is also invoked for callbacks that return nothing, so without this guard the program raises an error complaining that result is of an undefined type (None).
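For the same reason, a slightly more defensive variant (a sketch, not part of the original code) can use dict.get so that a detail page whose title element is missing is skipped as well:

def on_result(self, result):
    # result is None for callbacks that return nothing (index_page, list_page).
    if not result or not result.get('title'):
        return
    sql = SQL()
    sql.replace('douban_db', **result)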
3. The code above instantiates and calls an SQL class that we implement ourselves and imports it with from pyspider.database.mysql.mysqldb import SQL, so this module must exist in that directory. Concretely:
Save the following content as mysqldb.py in the pyspider/pyspider/database/mysql/ directory:
from six import itervalues
import mysql.connector
from mysql.connector import errorcode
from datetime import date, datetime, timedelta


class SQL:

    username = 'root'           # database user name
    password = 'root'           # database password
    database = 'test'           # database name
    host = '172.30.25.231'      # database host address
    connection = ''
    connect = True
    placeholder = '%s'

    def __init__(self):
        if self.connect:
            SQL.connect(self)

    def escape(self, string):
        return '`%s`' % string

    def connect(self):
        config = {
            'user': SQL.username,
            'password': SQL.password,
            'host': SQL.host
        }
        if SQL.database != None:
            config['database'] = SQL.database
        try:
            cnx = mysql.connector.connect(**config)
            SQL.connection = cnx
            return True
        except mysql.connector.Error as err:
            if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
                print "The credentials you provided are not correct."
            elif err.errno == errorcode.ER_BAD_DB_ERROR:
                print "The database you provided does not exist."
            else:
                print "Something went wrong: ", err
            return False

    def replace(self, tablename=None, **values):
        if SQL.connection == '':
            print "Please connect first"
            return False
        tablename = self.escape(tablename)
        if values:
            _keys = ", ".join(self.escape(k) for k in values)
            _values = ", ".join([self.placeholder, ] * len(values))
            sql_query = "REPLACE INTO %s (%s) VALUES (%s)" % (tablename, _keys, _values)
        else:
            sql_query = "REPLACE INTO %s DEFAULT VALUES" % tablename
        cur = SQL.connection.cursor()
        try:
            if values:
                cur.execute(sql_query, list(itervalues(values)))
            else:
                cur.execute(sql_query)
            SQL.connection.commit()
            return True
        except mysql.connector.Error as err:
            print "An error occurred: {}".format(err)
            return False
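Before wiring the module into pyspider it can be sanity-checked on its own. A minimal sketch (the field values below are made-up sample data) could look like this:

from pyspider.database.mysql.mysqldb import SQL

sql = SQL()    # connects with the class-level credentials defined above
ok = sql.replace('douban_db',
                 url='1292052',            # sample values only
                 direct='Frank Darabont',
                 score='9.6')
print ok       # True if the REPLACE INTO succeeded

Note that in the schema above url is only varchar(20), so full Douban URLs returned by detail_page will be truncated (or rejected in strict SQL mode); widening that column may be worthwhile.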
Learning material: http://blog.binux.me/2015/01/pyspider-tutorial-level-1-html-and-css-selector/
Test environment: http://demo.pyspider.org/