014 - Serves You Right for Getting Your Crawler Banned: a Scrapy IP Proxy Middleware

This is the 14th post in my plan to keep up technical writing (including translations); small goal: 999 posts, at least two per week.

Background: my lease is up.
Requirement: find a cheap, well-connected rental and get a feel for the current rental market, to help with bargaining.

While crawling rental data from 58.com, Ganji, Lianjia, and Anjuke, getting banned is routine. Because of that, I forked and modified two libraries: one for scraping free proxy IPs, the other to support crawling the rental data.

Note: data from rental sites is very likely inaccurate; treat it as a reference only.

A screenshot of part of the data:

This post only covers the Scrapy IP proxy middleware; it barely touches on how to crawl the rental sites or analyze the data, which may come in a later post.

Getting proxy IPs

A paid proxy IP service is better if you have one; if not, you can use the Docker image I built:

docker run -p8765:8765 -d anjia0532/ipproxy-dockerfile

Wait 2-5 minutes, then visit http://${docker ip}:8765/ ; if it returns results, the proxy IP scraping is working.
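Per the IPProxyPool README, that endpoint responds with a JSON array of [ip, port, score] triples. Assuming that response shape (verify against your own running instance), a minimal sketch for turning the payload into proxy URLs:

```python
import json

# Assumed response shape from the IPProxyPool API (per its README):
# a JSON array of [ip, port, score] triples. The sample below is made up.
SAMPLE_RESPONSE = '[["122.226.189.55", 8080, 10], ["61.135.217.7", 80, 9]]'

def to_proxy_urls(raw_json):
    """Turn the IPProxyPool JSON payload into http:// proxy URLs."""
    return ["http://%s:%d" % (ip, port) for ip, port, _score in json.loads(raw_json)]

print(to_proxy_urls(SAMPLE_RESPONSE))
# ['http://122.226.189.55:8080', 'http://61.135.217.7:80']
```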

scrapy-proxies-tool

Installation

pip install scrapy-proxies-tool

Configuration

Modify your Scrapy settings.py (the original repo only supports reading proxy IPs from a file):

# Retry many times since proxies often fail
RETRY_TIMES = 10
# Retry on most error codes since proxies fail for different reasons
RETRY_HTTP_CODES = [500, 503, 504, 400, 403, 404, 408]

DOWNLOADER_MIDDLEWARES = {
  'scrapy.downloadermiddlewares.retry.RetryMiddleware': 90,
  'scrapy_proxies.RandomProxy': 100,
  'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
}

PROXY_SETTINGS = {
  # Proxy list containing entries like
  # http://host1:port
  # http://username:password@host2:port
  # http://host3:port
  # ...
  # if PROXY_SETTINGS[from_proxies_server] = True , proxy_list is server address (ref https://github.com/qiyeboy/IPProxyPool and https://github.com/awolfly9/IPProxyTool )
  # Only support http(ref https://github.com/qiyeboy/IPProxyPool#%E5%8F%82%E6%95%B0)
  # list : ['http://localhost:8765?protocol=0'],
  'list':['/path/to/proxy/list.txt'],

  # disable proxy settings and use real ip when all proxies are unusable
  'use_real_when_empty':False,
  'from_proxies_server':False,

  # If proxy mode is 2 uncomment this sentence :
  # 'custom_proxy': "http://host1:port",

  # Proxy mode
  # 0 = Every requests have different proxy
  # 1 = Take only one proxy from the list and assign it to every requests
  # 2 = Put a custom proxy to use in the settings
  'mode':0
}
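The three proxy modes in the settings above can be summarized in plain Python. This is a hypothetical sketch of the mode semantics, not the library's actual implementation:

```python
import random

def pick_proxy(proxy_list, mode, custom_proxy=None, _fixed={}):
    """Hypothetical sketch of the three proxy modes from the settings above.

    mode 0: a random proxy for every request
    mode 1: pick one proxy once, then reuse it for every request
    mode 2: always use the custom proxy from the settings
    """
    if mode == 0:
        return random.choice(proxy_list)
    if mode == 1:
        if "proxy" not in _fixed:          # chosen once, then cached
            _fixed["proxy"] = random.choice(proxy_list)
        return _fixed["proxy"]
    if mode == 2:
        return custom_proxy
    raise ValueError("unknown proxy mode: %r" % mode)

proxies = ["http://host1:3128", "http://host2:3128"]
print(pick_proxy(proxies, mode=0))  # one of the two entries, chosen at random
```

Mode 0 spreads requests across the pool (best for avoiding bans); mode 1 keeps a stable exit IP (useful when a site ties sessions to IPs); mode 2 pins everything to `custom_proxy`.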

You can verify that the proxy IPs are taking effect by crawling myip.ipip.net/ and checking which exit IP it reports.
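myip.ipip.net returns a short plain-text line containing the caller's IP. Assuming that format (the sample body below is made up), a minimal sketch of pulling the IP out of the response so a spider callback can log it:

```python
import re

# Assumed response format of myip.ipip.net (plain text), roughly:
#   当前 IP：<your ip>  来自于：<location>
SAMPLE_BODY = "当前 IP：122.226.189.55  来自于：中国 浙江"

def extract_ip(body):
    """Pull the first IPv4 address out of the response text, or None."""
    match = re.search(r"(\d{1,3}(?:\.\d{1,3}){3})", body)
    return match.group(1) if match else None

print(extract_ip(SAMPLE_BODY))
# 122.226.189.55
```

If the logged IP changes across requests (with proxy mode 0), the middleware is working.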
