For data extraction, Scrapy provides two kinds of selectors: XPath and CSS. XPath is the more commonly used of the two (and I'm not very familiar with CSS anyway), so this post focuses on XPath.
XPath is a language designed for locating information in XML documents. A detailed tutorial is available here: http://www.w3school.com.cn/xpath/, but beginners don't need anything that elaborate; the examples in the official Scrapy tutorial are enough to get started.
Here are some examples:
/html/head/title: selects the <title> element inside the <head> element of an HTML document
/html/head/title/text(): selects the text inside the aforementioned <title> element
//td: selects all the <td> elements
//div[@class="mine"]: selects all div elements which contain an attribute class="mine"
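To try expressions like these without running a full crawl, here is a minimal sketch using Scrapy's Selector class on a made-up HTML snippet (the markup below is invented purely for illustration, and it assumes a Scrapy version where Selector(text=...) is available):

from scrapy.selector import Selector

# Invented HTML snippet, just to exercise the XPath expressions above
html = """
<html>
  <head><title>Example page</title></head>
  <body>
    <div class="mine">first div</div>
    <div class="other">second div</div>
  </body>
</html>
"""

sel = Selector(text=html)
print(sel.xpath('/html/head/title/text()').extract())      # the title text
print(sel.xpath('//div[@class="mine"]/text()').extract())  # text of the div with class="mine"
print(sel.xpath('//div').extract())                         # both <div> elements, markup included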
To make debugging easier, Scrapy provides an interactive shell for fetching and analyzing a site (note: the URL must be wrapped in quotes, or things go wrong...):
scrapy shell "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"
If everything works, you should see output like this:
[ ... Scrapy log here ... ]
2014-01-23 17:11:42-0400 [default] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
[s] Available Scrapy objects:
[s]   crawler    <scrapy.crawler.Crawler object at 0x3636b50>
[s]   item       {}
[s]   request    <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/>
[s]   response   <200 http://www.dmoz.org/Computers/Programming/Languages/Python/Books/>
[s]   settings   <scrapy.settings.Settings object at 0x3fadc50>
[s]   spider     <Spider 'default' at 0x3cebf50>
[s] Useful shortcuts:
[s]   shelp()           Shell help (print this help)
[s]   fetch(req_or_url) Fetch request (or URL) and update local objects
[s]   view(response)    View response in a browser

In [1]:
This gives you six objects, crawler among them. For now we only care about response, which, as the name suggests, holds the fetched page. Calling view(response) opens the crawled page in your default browser (Firefox in my case). response.body contains the raw HTML source, which you can inspect right in the shell, though it usually comes without line breaks or indentation, so it's pretty brutal to read...
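For example, once the shell is open you might poke around like this (just a sketch of things to type; output omitted):

view(response)        # open the fetched page in the default browser
response.body[:200]   # first 200 bytes of the raw, unformatted HTML
len(response.body)    # total size of the downloaded page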
response provides four basic methods for data extraction:
xpath(): returns a list of selectors, each of them representing the nodes selected by the XPath expression given as argument.
css(): returns a list of selectors, each of them representing the nodes selected by the CSS expression given as argument.
extract(): returns a unicode string with the selected data (a list of strings when called on a list of selectors, as in the examples below).
re(): returns a list of unicode strings extracted by applying the regular expression given as argument.
In short, xpath() is the selector itself, extract() returns the unicode content between the HTML tags, and re() is the hook for regular expressions (which I hate).
For example:
In [1]: response.xpath('//title')
Out[1]: [<Selector xpath='//title' data=u'<title>Open Directory - Computers: Progr'>]

In [2]: response.xpath('//title').extract()
Out[2]: [u'<title>Open Directory - Computers: Programming: Languages: Python: Books</title>']

In [3]: response.xpath('//title/text()')
Out[3]: [<Selector xpath='//title/text()' data=u'Open Directory - Computers: Programming:'>]

In [4]: response.xpath('//title/text()').extract()
Out[4]: [u'Open Directory - Computers: Programming: Languages: Python: Books']

In [5]: response.xpath('//title/text()').re('(\w+):')
Out[5]: [u'Computers', u'Programming', u'Languages', u'Python']
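For comparison, the css() method can express the same queries; a small sketch (assuming the ::text pseudo-element, which newer Scrapy versions support):

response.css('title')                   # selectors for the <title> node, like //title
response.css('title::text').extract()   # the title text, like //title/text()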
response.xpath() returns a list of selectors, and you can call xpath(), extract(), and the other methods again on any element of that list.
Like this:
>>> response.xpath('//title')[0].xpath('text()').extract()
[u'DMOZ - Computers: Programming: Languages: Python: Books']
All that's left is to store the extracted elements in items, and then have a pipeline write them out in whatever format you want:
import scrapy
from tutorial.items import DmozItem

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        for sel in response.xpath('//ul/li'):
            item = DmozItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            yield item
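For reference, the DmozItem imported from tutorial.items is not shown in this post; a minimal definition matching the three fields used above would presumably look like this:

import scrapy

class DmozItem(scrapy.Item):
    # one Field per piece of data the spider extracts
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()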
A word about the yield keyword: a function that uses yield becomes a generator in Python, which turns the loop into an iterator that hands back one value at a time instead of returning everything at once. For details see: http://www.ibm.com/developerworks/cn/opensource/os-cn-python-yield/
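As a quick illustration unrelated to Scrapy, here is a tiny generator sketch:

def count_up_to(n):
    # pauses at each yield and resumes when the caller asks for the next value
    i = 1
    while i <= n:
        yield i
        i += 1

for number in count_up_to(3):
    print(number)   # prints 1, then 2, then 3

The parse() method above works the same way: each yield hands one item back to Scrapy, which then passes it on to the pipeline.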