Every Scrapy project has, by default, a file structure similar to the following:
scrapy.cfg
myproject/
    __init__.py
    items.py
    pipelines.py
    settings.py
    spiders/
        __init__.py
        spider1.py
        spider2.py
        ...
scrapy.cfg
The directory containing this file is considered the project root directory. The file contains a field with the name of the Python module that defines the project's settings. For example:
[settings]
default = myproject.settings
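Since scrapy.cfg is a plain INI file, the settings field above can be read with Python's stdlib configparser. A minimal sketch (the config string is inlined here purely for illustration):

```python
# Sketch: scrapy.cfg is standard INI, so configparser can read the
# [settings] section shown above (config text inlined for illustration).
import configparser

cfg_text = "[settings]\ndefault = myproject.settings\n"
parser = configparser.ConfigParser()
parser.read_string(cfg_text)
settings_module = parser["settings"]["default"]  # 'myproject.settings'
```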
Generally, the first thing you do with the scrapy tool is create your Scrapy project:
scrapy startproject myproject
This command creates a Scrapy project in the myproject directory.
Next, change into the project directory:
cd myproject
At this point you can use the scrapy command to manage and control your project.
You use the scrapy tool from inside your project to control and manage it.
For example, to create a new spider:
scrapy genspider mydomain mydomain.com
This section lists the available built-in commands, each with a description and some usage examples. You can always get detailed information about any command by running:
scrapy <command> -h
You can also see all available commands:
scrapy -h
Global commands:
Project-only commands:
scrapy genspider [-t template] <name> <domain>
Create a new spider in the current project. This is just a convenient shortcut that can generate spiders from pre-defined templates; you can also create the spider source files yourself.
$ scrapy genspider -l
Available templates:
  basic
  crawl
  csvfeed
  xmlfeed

$ scrapy genspider -d basic
import scrapy

class $classname(scrapy.Spider):
    name = "$name"
    allowed_domains = ["$domain"]
    start_urls = (
        'http://www.$domain/',
    )

    def parse(self, response):
        pass

$ scrapy genspider -t basic example example.com
Created spider 'example' using template 'basic' in module:
  mybot.spiders.example
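The $classname, $name and $domain placeholders in the template output above follow Python's stdlib string.Template syntax. A hedged sketch of how such a substitution works (the rendered spider below is illustrative only, not genspider's actual implementation):

```python
# Sketch: $-placeholders like those in the basic template can be
# filled in with the stdlib string.Template class.
from string import Template

spider_template = Template(
    'class $classname(scrapy.Spider):\n'
    '    name = "$name"\n'
    '    allowed_domains = ["$domain"]\n'
)
rendered = spider_template.substitute(
    classname="ExampleSpider", name="example", domain="example.com"
)
```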
scrapy crawl <spider>
Start crawling using a spider.
Example:
$ scrapy crawl myspider
[ ... myspider starts crawling ... ]
scrapy check [-l] <spider>
Run contract checks.
Example:
$ scrapy check -l
first_spider
  * parse
  * parse_item
second_spider
  * parse
  * parse_item

$ scrapy check
[FAILED] first_spider:parse_item
>>> 'RetailPricex' field is missing

[FAILED] first_spider:parse
>>> Returned 92 requests, expected 0..4
scrapy list
List all available spiders in the current project, one spider per line of output.
Usage example:
$ scrapy list
spider1
spider2
scrapy edit <spider>
Edit the given spider using the editor defined in the EDITOR setting.
This command is provided only as a convenient shortcut; developers are free to choose any other tool or IDE to write and debug spiders.
Example:
$ scrapy edit spider1
scrapy fetch <url>
Download the given URL using the Scrapy downloader and write its contents to standard output.
This command fetches the page the way the spider would download it. For example, if the spider has a USER_AGENT attribute that overrides the User Agent, that is what this command will use.
So this command can be used to check how your spider would fetch a certain page.
If used outside a project, the command runs with the default Scrapy downloader settings.
Example:
$ scrapy fetch --nolog http://www.example.com/some/page.html
[ ... html content here ... ]

$ scrapy fetch --nolog --headers http://www.example.com/
{'Accept-Ranges': ['bytes'],
 'Age': ['1263   '],
 'Connection': ['close   '],
 'Content-Length': ['596'],
 'Content-Type': ['text/html; charset=UTF-8'],
 'Date': ['Wed, 18 Aug 2010 23:59:46 GMT'],
 'Etag': ['"573c1-254-48c9c87349680"'],
 'Last-Modified': ['Fri, 30 Jul 2010 15:30:18 GMT'],
 'Server': ['Apache/2.2.3 (CentOS)']}
scrapy view <url>
Open the given URL in a browser, displayed the way your Scrapy spider would "see" it. Sometimes spiders see pages differently from regular users, so this command can be used to check what the spider gets and confirm it is what you expect.
Example:
$ scrapy view http://www.example.com/some/page.html
[ ... browser starts ... ]
scrapy shell [url]
Start the Scrapy shell for the given URL (if given), or empty if no URL is given. See Scrapy shell for more information.
Example:
$ scrapy shell http://www.example.com/some/page.html
[ ... scrapy shell starts ... ]
scrapy runspider <spider_file.py>
Run a spider self-contained in a Python file, without having to create a project.
Example:
$ scrapy runspider myspider.py
[ ... spider starts crawling ... ]
scrapy version [-v]
Print the Scrapy version. Used with -v, the command also prints Python, Twisted and platform information, which is useful for bug reports.
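The kind of environment information reported by -v can also be gathered manually with the stdlib. A rough sketch (the dict layout below is made up, not Scrapy's actual output format):

```python
# Sketch: collecting Python and platform info like `scrapy version -v`
# reports (layout is illustrative only).
import platform
import sys

env_info = {
    "python": sys.version.split()[0],   # e.g. '3.7.4'
    "platform": platform.platform(),    # e.g. 'Windows-10-...'
}
```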
scrapy bench
New in version 0.17. Run a benchmark test of crawl speed. See Benchmarking.
The Feed Exports provided by Scrapy make it easy to save the scraped results.
To save the results as a JSON file (after the run, a quotes.json file appears in the project):

    scrapy crawl quotes -o quotes.json

To output each Item as one line of JSON, use the .jl suffix, short for JSON Lines:

    scrapy crawl quotes -o quotes.jl

or

    scrapy crawl quotes -o quotes.jsonline

The following commands produce csv, xml, pickle and marshal output, as well as remote output over FTP:

    scrapy crawl quotes -o quotes.csv
    scrapy crawl quotes -o quotes.xml
    scrapy crawl quotes -o quotes.pickle
    scrapy crawl quotes -o quotes.marshal
    scrapy crawl quotes -o ftp://user:pass@ftp.example.com/path/to/quotes.csv

FTP output requires correct configuration of the username, password, host and output path.
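The difference between the .json and .jl feeds can be sketched with the stdlib json module: .json writes one array, while JSON Lines writes one object per line, so items can be appended as they are scraped (the item dicts below are made up):

```python
# Sketch: JSON feed vs. JSON Lines feed (items are made-up examples).
import json

items = [{"text": "quote one"}, {"text": "quote two"}]

json_feed = json.dumps(items)                      # one JSON array
jl_feed = "\n".join(json.dumps(i) for i in items)  # one object per line
```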
A leading dot (.) at the front of an XPath selector means the data is extracted from inside the current element; without the dot, extraction starts from the root node. //img selects img elements anywhere in the HTML document.
response.xpath() and response.css() are shortcut methods; use extract() to pull out the actual content.
Selecting inner text and attributes with XPath: ('//a/text()').extract() and ('//a/@href').extract()
/text() gets a node's inner text; /@href gets the node's href attribute. Whatever follows @ is the name of the attribute to extract.
The extract_first() method accepts a default value parameter, used when the XPath matches nothing: extract_first(default='')
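A stdlib sketch of these ideas using xml.etree.ElementTree, which supports only a limited XPath subset, unlike Scrapy's full selectors (the sample markup below is made up):

```python
# Sketch: relative vs. absolute extraction, text vs. attributes,
# and a first-match-or-default, using only the stdlib.
import xml.etree.ElementTree as ET

root = ET.fromstring(
    "<html><body><div id='images'>"
    "<a href='image1.html'>Name: My image 1</a>"
    "<a href='image2.html'>Name: My image 2</a>"
    "</div></body></html>"
)
div = root.find(".//div")                             # leading '.' = relative search
links = div.findall(".//a")                           # descendants of div
texts = [a.text for a in links]                       # inner text, like /text()
hrefs = [a.get("href") for a in links]                # attribute, like /@href
first_href = hrefs[0] if hrefs else ""                # like extract_first(default='')
```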
With CSS selectors, text and attributes are selected with ::text and ::attr().
The re_first() regex method selects the first element of the matched list.
Note that a Response object cannot call re() and re_first() directly; apply them to a selector instead.
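The behaviour of .re() and .re_first() on a selector can be sketched with the stdlib re module (the input strings below are made up to match the example page's link texts):

```python
# Sketch of what selector .re() / .re_first() do, via stdlib re
# (input strings are illustrative).
import re

texts = ['Name: My image 1 ', 'Name: My image 2 ']
names = [re.search(r'Name:(.*)', t).group(1) for t in texts]  # like .re()
first_name = names[0] if names else None                      # like .re_first()
```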
<html>
 <head>
  <base href='http://example.com/' />
  <title>Example website</title>
 </head>
 <body>
  <div id='images'>
   <a href='image1.html'>Name: My image 1 <br /><img src='image1_thumb.jpg' /></a>
   <a href='image2.html'>Name: My image 2 <br /><img src='image2_thumb.jpg' /></a>
   <a href='image3.html'>Name: My image 3 <br /><img src='image3_thumb.jpg' /></a>
   <a href='image4.html'>Name: My image 4 <br /><img src='image4_thumb.jpg' /></a>
   <a href='image5.html'>Name: My image 5 <br /><img src='image5_thumb.jpg' /></a>
  </div>
 </body>
</html>

>>> response.selector
<Selector xpath=None data='<html>\n <head>\n  <base href="http://exam'>
>>> response.selector.xpath('//title/text()')
[<Selector xpath='//title/text()' data='Example website'>]
>>> response.selector.xpath('//title/text()').extract_first()
'Example website'
>>> response.selector.css('title::text').extract_first()
'Example website'
>>> response.css('title::text').extract_first()
'Example website'
>>> response.xpath('//div[@id="image"]').css('img')
[]
>>> response.xpath('//div[@id="images"]').css('img')
[<Selector xpath='descendant-or-self::img' data='<img src="image1_thumb.jpg">'>,
 <Selector xpath='descendant-or-self::img' data='<img src="image2_thumb.jpg">'>,
 <Selector xpath='descendant-or-self::img' data='<img src="image3_thumb.jpg">'>,
 <Selector xpath='descendant-or-self::img' data='<img src="image4_thumb.jpg">'>,
 <Selector xpath='descendant-or-self::img' data='<img src="image5_thumb.jpg">'>]
>>> response.xpath('//div[@id="images"]').css('img::attr(src)')
[<Selector xpath='descendant-or-self::img/@src' data='image1_thumb.jpg'>,
 <Selector xpath='descendant-or-self::img/@src' data='image2_thumb.jpg'>,
 <Selector xpath='descendant-or-self::img/@src' data='image3_thumb.jpg'>,
 <Selector xpath='descendant-or-self::img/@src' data='image4_thumb.jpg'>,
 <Selector xpath='descendant-or-self::img/@src' data='image5_thumb.jpg'>]
>>> response.xpath('//div[@id="images"]').css('img::attr(src)').extract()
['image1_thumb.jpg', 'image2_thumb.jpg', 'image3_thumb.jpg', 'image4_thumb.jpg', 'image5_thumb.jpg']
>>> response.xpath('//div[@id="images"]').css('img::attr(src)').extract_first()
'image1_thumb.jpg'
>>> response.xpath('//div[@id="images"]').css('img::attr(src)').extract_first(default='')
'image1_thumb.jpg'
>>> response.xpath('//a/@href')
[<Selector xpath='//a/@href' data='image1.html'>,
 <Selector xpath='//a/@href' data='image2.html'>,
 <Selector xpath='//a/@href' data='image3.html'>,
 <Selector xpath='//a/@href' data='image4.html'>,
 <Selector xpath='//a/@href' data='image5.html'>]
>>> response.xpath('//a/@href').extract()
['image1.html', 'image2.html', 'image3.html', 'image4.html', 'image5.html']
>>> response.css('a').extract()
['<a href="image1.html">Name: My image 1 <br><img src="image1_thumb.jpg"></a>',
 '<a href="image2.html">Name: My image 2 <br><img src="image2_thumb.jpg"></a>',
 '<a href="image3.html">Name: My image 3 <br><img src="image3_thumb.jpg"></a>',
 '<a href="image4.html">Name: My image 4 <br><img src="image4_thumb.jpg"></a>',
 '<a href="image5.html">Name: My image 5 <br><img src="image5_thumb.jpg"></a>']
>>> response.css('a::attr(href)').extract()
['image1.html', 'image2.html', 'image3.html', 'image4.html', 'image5.html']
>>> response.xpath('//a/text()').extract()
['Name: My image 1 ', 'Name: My image 2 ', 'Name: My image 3 ', 'Name: My image 4 ', 'Name: My image 5 ']
>>> response.css('a::text()').extract()
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "c:\python3.7\lib\site-packages\scrapy\http\response\text.py", line 122, in css
    return self.selector.css(query)
  File "c:\python3.7\lib\site-packages\parsel\selector.py", line 262, in css
    return self.xpath(self._css2xpath(query))
  File "c:\python3.7\lib\site-packages\parsel\selector.py", line 265, in _css2xpath
    return self._csstranslator.css_to_xpath(query)
  File "c:\python3.7\lib\site-packages\parsel\csstranslator.py", line 109, in css_to_xpath
    return super(HTMLTranslator, self).css_to_xpath(css, prefix)
  File "c:\python3.7\lib\site-packages\cssselect\xpath.py", line 192, in css_to_xpath
    for selector in parse(css))
  File "c:\python3.7\lib\site-packages\cssselect\xpath.py", line 192, in <genexpr>
    for selector in parse(css))
  File "c:\python3.7\lib\site-packages\cssselect\xpath.py", line 222, in selector_to_xpath
    xpath = self.xpath_pseudo_element(xpath, selector.pseudo_element)
  File "c:\python3.7\lib\site-packages\parsel\csstranslator.py", line 72, in xpath_pseudo_element
    % pseudo_element.name)
cssselect.xpath.ExpressionError: The functional pseudo-element ::text() is unknown
>>> response.css('a::text').extract()
['Name: My image 1 ', 'Name: My image 2 ', 'Name: My image 3 ', 'Name: My image 4 ', 'Name: My image 5 ']
>>> response.xpath('//a[contains(@href, "image")]/@href')
[<Selector xpath='//a[contains(@href, "image")]/@href' data='image1.html'>,
 <Selector xpath='//a[contains(@href, "image")]/@href' data='image2.html'>,
 <Selector xpath='//a[contains(@href, "image")]/@href' data='image3.html'>,
 <Selector xpath='//a[contains(@href, "image")]/@href' data='image4.html'>,
 <Selector xpath='//a[contains(@href, "image")]/@href' data='image5.html'>]
>>> response.xpath('//a[contains(@href, "image")]/@href').extract()
['image1.html', 'image2.html', 'image3.html', 'image4.html', 'image5.html']
>>> response.css('a[href*=image]::attr(hr)')
[]
>>> response.css('a[href*=image]::attr(href)')
[<Selector xpath="descendant-or-self::a[@href and contains(@href, 'image')]/@href" data='image1.html'>,
 <Selector xpath="descendant-or-self::a[@href and contains(@href, 'image')]/@href" data='image2.html'>,
 <Selector xpath="descendant-or-self::a[@href and contains(@href, 'image')]/@href" data='image3.html'>,
 <Selector xpath="descendant-or-self::a[@href and contains(@href, 'image')]/@href" data='image4.html'>,
 <Selector xpath="descendant-or-self::a[@href and contains(@href, 'image')]/@href" data='image5.html'>]
>>> response.css('a[href*=image]::attr(href)').extract()
['image1.html', 'image2.html', 'image3.html', 'image4.html', 'image5.html']
>>> response.xpath('//a/img/@src').extract()
['image1_thumb.jpg', 'image2_thumb.jpg', 'image3_thumb.jpg', 'image4_thumb.jpg', 'image5_thumb.jpg']
>>> response.xpath('//a[contains(@href, "image")]/img/@src').extract()
['image1_thumb.jpg', 'image2_thumb.jpg', 'image3_thumb.jpg', 'image4_thumb.jpg', 'image5_thumb.jpg']
>>> response.css('a[href*=image] img::attr(src)').extract()
['image1_thumb.jpg', 'image2_thumb.jpg', 'image3_thumb.jpg', 'image4_thumb.jpg', 'image5_thumb.jpg']
>>> response.css('a::text').re('Name\:(.*)')
[' My image 1 ', ' My image 2 ', ' My image 3 ', ' My image 4 ', ' My image 5 ']
>>> response.css('a::text').re_first('Name\:(.*)')
' My image 1 '