Scrapy Tutorial -- Debugging Notes

Notes on problems I ran into while learning the Scrapy framework, recorded here for future reference!!!

After installing Scrapy, I worked through the official getting-started guide: the Scrapy Tutorial.

I debugged using the code provided in the tutorial, shown below:

items.py

import scrapy

class DmozItem(scrapy.Item):
    # One scraped directory entry from DMOZ
    title = scrapy.Field()  # entry title text
    link = scrapy.Field()   # entry URL
    desc = scrapy.Field()   # entry description
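
Field() only declares the attribute names; a DmozItem instance then behaves like a dict, which is how a spider fills it in. A quick illustration (the values here are made up):

item = DmozItem(title='Example Book')
item['link'] = 'http://example.com'
print(item['title'])  # 'Example Book'
# item['other'] = 'x' would raise KeyError: only declared fields may be set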

dmoz_spider.py

import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        # Name the output file after the second-to-last URL segment
        # (e.g. "Books" or "Resources") and dump the raw page into it
        filename = response.url.split("/")[-2]
        with open(filename, 'wb') as f:
            f.write(response.body)
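
Note that DmozItem is not used by this first version of parse; a later step of the same tutorial swaps in a parse that extracts each directory entry and yields an item. A sketch roughly along those lines (with from tutorial.items import DmozItem added at the top of dmoz_spider.py; the XPath expressions assume the old DMOZ listing markup of <ul><li> entries containing a link plus trailing description text):

    def parse(self, response):
        # Each directory entry is an <li>; copy its pieces into an item
        # and yield it so Scrapy can pass it through the item pipeline
        for sel in response.xpath('//ul/li'):
            item = DmozItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            yield item

Once items are being yielded, scrapy crawl dmoz -o items.json would export the scraped data to a JSON file instead of raw HTML dumps.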

From the project's root directory, run the following command to start the spider:

scrapy crawl dmoz

The run was then interrupted by this error:

[twisted] CRITICAL: Unhandled error in Deferred

Searching Baidu and Google turned up no direct cause, but the general consensus pointed to a problem with the pywin32 package in my environment (64-bit Windows 10). Repeatedly reinstalling the 64-bit installer did not fix it, though.

Later, after reading pywin32's notes on choosing an installer more carefully, I realized the choice depends on the installed Python version. Both 64-bit and 32-bit Python run fine on a 64-bit machine; the installer just has to match the Python build, not the machine architecture.
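
A quick way to check which build of Python is actually installed, i.e. the value the pywin32 installer has to match:

import platform
import struct

# Reports the interpreter's build, not the OS: a 32-bit Python on
# 64-bit Windows prints ('32bit', 'WindowsPE')
print(platform.architecture())
# Pointer size in bits: prints 32 or 64
print(struct.calcsize("P") * 8)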

So I reinstalled the library with pywin32-220.win32-py2.7.exe (the 32-bit build for Python 2.7) instead!
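
After installing, a one-line import check confirms that pywin32 loads under the current interpreter (a mismatched build typically fails here with an ImportError / DLL load error):

python -c "import win32api; print(win32api.GetComputerName())"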

Rerunning the crawl then went through, as follows:

2016-11-01 05:59:22 [scrapy] INFO: Scrapy 1.2.1 started (bot: tutorial)
2016-11-01 05:59:22 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'tutorial'}
2016-11-01 05:59:22 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2016-11-01 05:59:23 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-11-01 05:59:23 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-11-01 05:59:23 [scrapy] INFO: Enabled item pipelines:
[]
2016-11-01 05:59:23 [scrapy] INFO: Spider opened
2016-11-01 05:59:23 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-11-01 05:59:23 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-11-01 05:59:24 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/robots.txt> (referer: None)
2016-11-01 05:59:24 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
2016-11-01 05:59:24 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/> (referer: None)
2016-11-01 05:59:25 [scrapy] INFO: Closing spider (finished)
2016-11-01 05:59:25 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 734,
 'downloader/request_count': 3,
 'downloader/request_method_count/GET': 3,
 'downloader/response_bytes': 15997,
 'downloader/response_count': 3,
 'downloader/response_status_count/200': 3,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 11, 1, 5, 59, 25, 139000),
 'log_count/DEBUG': 4,
 'log_count/INFO': 7,
 'response_received_count': 3,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2016, 11, 1, 5, 59, 23, 306000)}
2016-11-01 05:59:25 [scrapy] INFO: Spider closed (finished)

I also found the two generated files in the project root!!!
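
The two filenames come straight from the second-to-last segment of each start URL, which is what response.url.split("/")[-2] picks out:

>>> "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/".split("/")[-2]
'Books'
>>> "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/".split("/")[-2]
'Resources'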

Reference: http://blog.csdn.net/u011170540/article/details/48968739
