Scrapy: A First Test Run on Python 3

1. Introduction

The article "A First Look at Scrapy's Architecture" covered Scrapy's architecture; this article actually installs and runs a Scrapy crawler. It uses the official tutorial as the example, and the complete code can be downloaded from GitHub.

2. Environment Setup

  • Test environment: Windows 10, Python 3.4.3 (32-bit)

  • Install Scrapy: $ pip install Scrapy  # during the actual install, unstable server connections caused the download to abort midway several times

3. Writing and Running the First Scrapy Spider

3.1. Generate a new project: tutorial

$ scrapy startproject tutorial

The project directory structure is as follows (shown as a screenshot in the original; this is the standard layout generated by startproject):

tutorial/
    scrapy.cfg
    tutorial/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py

3.2. Define the items to scrape

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy

class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()
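A scrapy.Item behaves like a dict whose keys are restricted to the declared Fields: assigning an undeclared key raises a KeyError. The following toy stand-in (plain Python, no Scrapy import, class names hypothetical) sketches that behaviour:

```python
class ToyItem(dict):
    """Toy stand-in for scrapy.Item: only declared fields can be set."""
    fields = ()

    def __setitem__(self, key, value):
        if key not in self.fields:
            raise KeyError("%s is not a declared field" % key)
        super().__setitem__(key, value)


class ToyDmozItem(ToyItem):
    fields = ("title", "link", "desc")


item = ToyDmozItem()
item["title"] = ["About"]   # works: "title" is declared
try:
    item["author"] = "x"    # fails: not declared, just like scrapy.Item
except KeyError:
    pass
```

This declared-fields check is what lets Scrapy catch typos in field names early, instead of silently storing them.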

3.3. Define the Spider

import scrapy
from tutorial.items import DmozItem

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        for sel in response.xpath('//ul/li'):
            item = DmozItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            yield item
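To see concretely what the spider's relative XPath queries pick out of each li element, here is a minimal sketch using only the standard library (xml.etree) in place of Scrapy's selectors; the HTML fragment is a hypothetical sample:

```python
import xml.etree.ElementTree as ET

fragment = """
<ul>
  <li><a href="/docs/en/about.html">About</a> short description</li>
  <li><a href="/editors/">Login</a> editor login</li>
</ul>
"""

root = ET.fromstring(fragment)
items = []
for li in root.findall("li"):           # plays the role of //ul/li
    a = li.find("a")
    items.append({
        "title": a.text,                # ~ sel.xpath('a/text()')
        "link": a.get("href"),          # ~ sel.xpath('a/@href')
        "desc": (a.tail or "").strip(), # ~ sel.xpath('text()')
    })

print(items[0])
```

Scrapy's extract() returns a list of all matches (hence the list values in the JSON output below), whereas this sketch keeps only the first match per field.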

3.4. Run the spider

$ scrapy crawl dmoz -o item.json

1) The first run failed with errors:
A) ImportError: cannot import name '_win32stdio'
B) ImportError: No module named 'win32api'

2) Troubleshooting: checking the official FAQ and StackOverflow showed that Scrapy had not yet been fully tested on Python 3 and still had minor issues.

3) Fixes:
A) Manually download _win32stdio and _pollingfile from twisted/internet and place them under Lib\site-packages\twisted\internet in the Python install directory
B) Download and install pywin32

Running again succeeded! Scrapy's log output appears in the console; after the crawl finishes, open the result file item.json in the project directory to see the scraped results stored in JSON format.

[
{"title": ["        About       "], "desc": [" ", " "], "link": ["/docs/en/about.html"]},
{"title": ["   Become an Editor "], "desc": [" ", " "], "link": ["/docs/en/help/become.html"]},
{"title": ["            Suggest a Site          "], "desc": [" ", " "], "link": ["/docs/en/add.html"]},
{"title": [" Help             "], "desc": [" ", " "], "link": ["/docs/en/help/helpmain.html"]},
{"title": [" Login                       "], "desc": [" ", " "], "link": ["/editors/"]},
{"title": [], "desc": [" ", " Share via Facebook "], "link": []},
{"title": [], "desc": [" ", "  Share via Twitter  "], "link": []},
{"title": [], "desc": [" ", " Share via LinkedIn "], "link": []},
{"title": [], "desc": [" ", " Share via e-Mail   "], "link": []},
{"title": [], "desc": [" ", " "], "link": []},
{"title": [], "desc": [" ", "  "], "link": []},
{"title": ["        About       "], "desc": [" ", " "], "link": ["/docs/en/about.html"]},
{"title": ["   Become an Editor "], "desc": [" ", " "], "link": ["/docs/en/help/become.html"]},
{"title": ["            Suggest a Site          "], "desc": [" ", " "], "link": ["/docs/en/add.html"]},
{"title": [" Help             "], "desc": [" ", " "], "link": ["/docs/en/help/helpmain.html"]},
{"title": [" Login                       "], "desc": [" ", " "], "link": ["/editors/"]},
{"title": [], "desc": [" ", " Share via Facebook "], "link": []},
{"title": [], "desc": [" ", "  Share via Twitter  "], "link": []},
{"title": [], "desc": [" ", " Share via LinkedIn "], "link": []},
{"title": [], "desc": [" ", " Share via e-Mail   "], "link": []},
{"title": [], "desc": [" ", " "], "link": []},
{"title": [], "desc": [" ", "  "], "link": []}
]
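Many of the entries above have empty title/link lists (they come from the share widgets on the page). A small post-processing sketch that filters them out and strips the whitespace; the sample below is inlined so the snippet runs without the crawl having been executed:

```python
import json

# Trimmed sample in the same shape as item.json.
sample = '''[
{"title": ["        About       "], "desc": [" ", " "], "link": ["/docs/en/about.html"]},
{"title": [], "desc": [" ", " Share via Facebook "], "link": []}
]'''

items = json.loads(sample)
# Keep only entries that actually captured a title and a link.
cleaned = [
    {"title": it["title"][0].strip(), "link": it["link"][0]}
    for it in items
    if it["title"] and it["link"]
]
print(cleaned)  # [{'title': 'About', 'link': '/docs/en/about.html'}]
```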

The first test run of Scrapy succeeded.

4,接下來的工做

Next, we will use the GooSeeker API to implement the web crawler, saving the effort of manually writing and testing an XPath for each item. There are currently two plans:

  • Wrap a method in gsExtractor that automatically extracts each item's XPath from the XSLT content

  • Automatically extract each item's result from gsExtractor's extraction output

Which plan to adopt will be decided in upcoming experiments and released in a new version of gsExtractor.

5. Revision History

2016-06-15: V1.0, first release
