The Scrapy Framework

 

Introduction to Scrapy

Scrapy is an application framework written for crawling websites and extracting structured data.
It is well known and very powerful. A "framework" here means a highly reusable project template with various capabilities already integrated: high-performance asynchronous downloading, queuing, distributed crawling, parsing, persistence, and so on.
When learning a framework, the focus should be on its features and how to use each of its capabilities.

 

Environment Setup

- Linux:

  - Just run pip install scrapy

 

- Windows:

- First install wheel:
    pip3 install wheel

- Then download Twisted: on the page below, find the wheel that matches your Python interpreter version.
    Download page: http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted

- In the directory where the Twisted wheel was downloaded, run:
    pip3 install Twisted-17.1.0-cp35-cp35m-win_amd64.whl (use the full name of the downloaded file)

- Then run:
    pip install pywin32
    pip install scrapy
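
A quick way to confirm the install worked on either platform (a minimal sketch; the version printed depends on your environment):

import scrapy
print(scrapy.__version__)  # any version string here means the import succeeded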

 

Basic Commands

- Create a project:

  scrapy startproject project_name

 

- Project directory structure:

project_name/
   scrapy.cfg
   project_name/
       __init__.py
       items.py
       pipelines.py
       settings.py
       spiders/
           __init__.py


scrapy.cfg    the project's main configuration file (the settings that actually matter for crawling live in settings.py)
items.py      defines data-storage templates used to structure scraped data, similar to Django's Model
pipelines.py  handles data persistence
settings.py   configuration file, e.g. recursion depth, concurrency, download delay
spiders       the spider directory, i.e. where spider files with parsing rules are created
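
As a rough sketch of how items.py and pipelines.py relate (the QiubaiItem/QiubaiPipeline names and fields below are illustrative, not generated by Scrapy):

# items.py -- declare one Field per attribute you want to scrape
import scrapy

class QiubaiItem(scrapy.Item):
    author = scrapy.Field()
    content = scrapy.Field()

# pipelines.py -- each item a spider yields is passed to process_item,
# which persists it and returns it so later pipelines can run too
class QiubaiPipeline(object):
    def process_item(self, item, spider):
        return item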

 

- Create a spider inside the project:

  - Enter the project directory: cd project_name

  - Creation command:

    scrapy genspider spider_name start_url (e.g. scrapy genspider qiubai www.qiushibaike.com)

 

  - When the command finishes, a file named spider_name.py (the spider name given above) is generated in the spiders directory; its source is as follows:

# -*- coding: utf-8 -*-
import scrapy

class QiubaiSpider(scrapy.Spider):
    name = 'qiubai'  # spider name

    # domains the spider is allowed to crawl (URLs outside these domains yield no data);
    # note this takes bare domain names, not full URLs
    allowed_domains = ['www.qiushibaike.com']

    # the URLs crawling starts from
    start_urls = ['https://www.qiushibaike.com/']

    # callback invoked once a start URL has been fetched; its response parameter is the
    # response object obtained for that request. The return value must be an iterable or None.
    def parse(self, response):
        print(response.text)  # response content as a string
        print(response.body)  # response content as bytes
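
In practice, parse usually extracts structured data with selectors rather than printing the raw response. A minimal sketch (the XPath expressions are assumptions about the page layout, not taken from the real site):

def parse(self, response):
    # iterate over each post block; the class name here is hypothetical
    for div in response.xpath('//div[@class="article"]'):
        author = div.xpath('./a/h2/text()').extract_first()
        content = div.xpath('.//span/text()').extract_first()
        print(author, content)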

 

- Run the spider:

  scrapy crawl spider_name

  PS: scrapy crawl spider_name --nolog runs the spider without printing log output.
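
Spiders can also be launched from a plain Python script instead of the shell; a small sketch using Scrapy's cmdline helper (assuming the qiubai spider above, run from the project root):

from scrapy import cmdline

# equivalent to typing "scrapy crawl qiubai --nolog" in a terminal
cmdline.execute('scrapy crawl qiubai --nolog'.split())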

 

The settings configuration file

- Source file:

# -*- coding: utf-8 -*-

# Scrapy settings for scrapyproDemo project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'scrapyproDemo'

SPIDER_MODULES = ['scrapyproDemo.spiders']
NEWSPIDER_MODULE = 'scrapyproDemo.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'scrapyproDemo (+http://www.yourdomain.com)'

# Obey robots.txt rules
# Whether to obey the robots protocol; defaults to True, but is usually set to False
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'scrapyproDemo.middlewares.ScrapyprodemoSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'scrapyproDemo.middlewares.ScrapyprodemoDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'scrapyproDemo.pipelines.ScrapyprodemoPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

 

- Spoof the request carrier (browser) identity: USER_AGENT

- Ignore the robots protocol: ROBOTSTXT_OBEY = False
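
For reference, the two overrides above typically look like this in settings.py (the User-Agent string is just an example browser UA):

# settings.py
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'
ROBOTSTXT_OBEY = False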
