Visit heroku.com and sign up for a free account (the sign-up page invokes Google reCAPTCHA for human verification, and the login page likewise requires a proxy/VPN to reach from mainland China; visiting the pages of a running app has no such problem). A free account can create and run up to 5 apps.
Visit redislabs.com and sign up for a free account, which provides 30 MB of storage; it is used below to build a distributed crawler with scrapy-redis.
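Before going further, it may be worth confirming that the Redis Labs instance is reachable. A minimal sketch, assuming the redis-py client; the host, port, and password are placeholders to be taken from your Redis Labs dashboard:

```python
# Optional sanity check: ping the Redis Labs instance before wiring it
# into the cluster. Host, port, and password are placeholders.
import redis  # pip install redis

r = redis.Redis(host='your-redis-host', port=12345,
                password='your-redis-password', socket_timeout=5)
print(r.ping())  # True means the instance is reachable
```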
Deploy the Heroku apps (via the browser, or via the command line as shown below):

- four Scrapyd apps, named svr-1, svr-2, svr-3 and svr-4;
- one ScrapydWeb app, named myscrapydweb;
- optionally, extra environment variables such as SCRAPYD_SERVER_2 with VALUE svr-2.herokuapp.com:80#group2 to assign a Scrapyd server to a group.
<details>
<summary>View the steps</summary>
Install Git and the Heroku CLI; to interact with Redis from Python, simply run the `pip install redis` command. Then open a new command prompt:
```bash
git clone https://github.com/my8100/scrapyd-cluster-on-heroku
cd scrapyd-cluster-on-heroku
```
```bash
heroku login
# outputs:
# heroku: Press any key to open up the browser to login or q to exit:
# Opening browser to https://cli-auth.heroku.com/auth/browser/12345-abcde
# Logging in... done
# Logged in as username@gmail.com
```
```bash
cd scrapyd
git init
# explore and update the files if needed
git status
git add .
git commit -a -m "first commit"
git status
```
```bash
heroku apps:create svr-1
heroku git:remote -a svr-1
git remote -v
git push heroku master
heroku logs --tail
# Press ctrl+c to stop logs outputting
# Visit https://svr-1.herokuapp.com
```
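Beyond watching the build logs, you can verify that the Scrapyd service itself is up. A small sketch (not part of the original steps) that queries Scrapyd's daemonstatus.json endpoint on the svr-1 app:

```python
# Query the deployed Scrapyd server's status endpoint to confirm it is up.
import json
from urllib.request import urlopen

with urlopen('https://svr-1.herokuapp.com/daemonstatus.json') as resp:
    print(json.load(resp))
# Expected shape: {'status': 'ok', 'running': 0, 'pending': 0, 'finished': 0, 'node_name': '...'}
```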
Add environment variables:
```bash
# python -c "import tzlocal; print(tzlocal.get_localzone())"
heroku config:set TZ=Asia/Shanghai
# heroku config:get TZ
```
```bash
heroku config:set REDIS_HOST=your-redis-host
heroku config:set REDIS_PORT=your-redis-port
heroku config:set REDIS_PASSWORD=your-redis-password
```
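For reference, here is a sketch of how a scrapy-redis project's settings.py might consume these variables. The fallback values are assumptions, though REDIS_HOST, REDIS_PORT, and REDIS_PARAMS are standard scrapy-redis settings:

```python
# settings.py (sketch): read the connection details exported via `heroku config:set`.
import os

REDIS_HOST = os.environ.get('REDIS_HOST', '127.0.0.1')
REDIS_PORT = int(os.environ.get('REDIS_PORT', '6379'))
# scrapy-redis passes this dict through to redis.Redis()
REDIS_PARAMS = {'password': os.environ.get('REDIS_PASSWORD')}
```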
Repeat the steps above to deploy svr-2, svr-3 and svr-4 (one way to automate the repetition is sketched below).
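A sketch only, assuming the Heroku CLI is on PATH, you are already logged in, and the current directory is the scrapyd sub-repo committed above:

```python
# Hypothetical helper: create and deploy the remaining Scrapyd apps in a loop.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

for name in ['svr-2', 'svr-3', 'svr-4']:
    run('heroku', 'apps:create', name)
    run('heroku', 'git:remote', '-a', name)   # repoints the "heroku" remote
    run('git', 'push', 'heroku', 'master')
    run('heroku', 'config:set', 'TZ=Asia/Shanghai')
```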
```bash
cd ..
cd scrapydweb
git init
# explore and update the files if needed
git status
git add .
git commit -a -m "first commit"
git status
```
```bash
heroku apps:create myscrapydweb
heroku git:remote -a myscrapydweb
git remote -v
git push heroku master
```
Add environment variables:
```bash
heroku config:set TZ=Asia/Shanghai
```
```bash
heroku config:set SCRAPYD_SERVER_1=svr-1.herokuapp.com:80
heroku config:set SCRAPYD_SERVER_2=svr-2.herokuapp.com:80#group1
heroku config:set SCRAPYD_SERVER_3=svr-3.herokuapp.com:80#group1
heroku config:set SCRAPYD_SERVER_4=svr-4.herokuapp.com:80#group2
```
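The #groupN suffix attaches a group label to each Scrapyd server. As an illustration of the convention only (not ScrapydWeb's actual parsing code), the value splits apart like this:

```python
# Illustration: how a "host:port#group" value breaks apart.
def split_server(value):
    server, _, group = value.partition('#')   # the group label is optional
    host, _, port = server.partition(':')
    return host, port or '80', group or None

print(split_server('svr-2.herokuapp.com:80#group1'))
# ('svr-2.herokuapp.com', '80', 'group1')
```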
</details>
The demo spider mycrawler_redis waits for start URLs in the Redis queue mycrawler:start_urls.
Trigger the spider and check the results:

```python
In [1]: import redis  # pip install redis

In [2]: r = redis.Redis(host='your-redis-host', port=your-redis-port, password='your-redis-password')

In [3]: r.delete('mycrawler_redis:requests', 'mycrawler_redis:dupefilter', 'mycrawler_redis:items')
Out[3]: 0

In [4]: r.lpush('mycrawler:start_urls', 'http://books.toscrape.com', 'http://quotes.toscrape.com')
Out[4]: 2

# wait for a minute
In [5]: r.lrange('mycrawler_redis:items', 0, 1)
Out[5]: [b'{"url": "http://quotes.toscrape.com/", "title": "Quotes to Scrape", "hostname": "d6cf94d5-324e-4def-a1ab-e7ee2aaca45a", "crawled": "2019-04-02 03:42:37", "spider": "mycrawler_redis"}',
 b'{"url": "http://books.toscrape.com/index.html", "title": "All products | Books to Scrape - Sandbox", "hostname": "d6cf94d5-324e-4def-a1ab-e7ee2aaca45a", "crawled": "2019-04-02 03:42:37", "spider": "mycrawler_redis"}']
```
優勢服務器
缺點網絡