Recently I came across a library called SpiderKeeper. Its main purpose is to work together with scrapyd to manage your spiders: it supports one-click deployment, scheduled crawl jobs, starting, pausing, and a series of other operations.
Simply put, it wraps the scrapyd API and minimizes the number of times you have to interact with the command line. I have to say that is a great thing.
https://github.com/DormyMo/Sp... SpiderKeeper's GitHub link
Since scrapyd has better compatibility on Python 3 and above, the environment we need is:
xxx
to deploy our crawl jobs. Once installation is complete, we can start the server. I am using Ubuntu, so Ubuntu is used in the examples below; Windows & macOS are basically the same.
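As a minimal sketch, assuming pip on Python 3 and the usual PyPI package names (check PyPI if unsure), installing the pieces looks roughly like this:

# scrapyd runs the spiders, scrapyd-client provides the scrapyd-deploy command,
# and SpiderKeeper is the web UI wrapped around scrapyd's API
pip install scrapy scrapyd scrapyd-client
pip install spiderkeeper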
Single server
sudo spiderkeeper # start with a single server; by default it connects to the local scrapyd service at http://localhost:6800 | SpiderKeeper's default port is 5000
Connecting to multiple scrapyd servers
In a distributed setup we certainly have more than one server, and SpiderKeeper solves this problem nicely.
sudo spiderkeeper --server=http://localhost:6800 --server=http://111.111.111.111:6800 # one SpiderKeeper instance can deploy spiders to both servers at the same time
config.py
Change the username & password
# Statement for enabling the development environment
import os
DEBUG = True

# Define the application directory
BASE_DIR = os.path.abspath(os.path.dirname(__file__))

SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(os.path.abspath('.'), 'SpiderKeeper.db')
SQLALCHEMY_TRACK_MODIFICATIONS = False
DATABASE_CONNECT_OPTIONS = {}

# Application threads. A common general assumption is
# using 2 per available processor cores - to handle
# incoming requests using one and performing background
# operations using the other.
THREADS_PER_PAGE = 2

# Enable protection against *Cross-site Request Forgery (CSRF)*
CSRF_ENABLED = True

# Use a secure, unique and absolutely secret key for
# signing the data.
CSRF_SESSION_KEY = "secret"

# Secret key for signing cookies
SECRET_KEY = "secret"

# log
LOG_LEVEL = 'INFO'

# spider services
SERVER_TYPE = 'scrapyd'
SERVERS = ['http://localhost:6800']

# basic auth -- change the username & password here
NO_AUTH = False
BASIC_AUTH_USERNAME = 'admin'
BASIC_AUTH_PASSWORD = 'admin'
BASIC_AUTH_FORCE = True
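By default SpiderKeeper keeps its metadata in the local SpiderKeeper.db SQLite file. Since storage is configured through SQLAlchemy, you could point SQLALCHEMY_DATABASE_URI at another backend instead; the MySQL URI below is purely an illustrative assumption (driver, credentials, and host are not from this post):

# hypothetical override: store SpiderKeeper metadata in MySQL instead of SQLite
# requires a MySQL driver such as pymysql; user/password/host are placeholders
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://sk_user:sk_password@127.0.0.1:3306/spiderkeeper'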
run.py
Change the port number
def parse_opts(config):
    parser = OptionParser(usage="%prog [options]", description="Admin ui for spider service")
    # bind ip; the default 0.0.0.0 allows access from everyone
    parser.add_option("--host", help="host, default:0.0.0.0", dest='host', default='0.0.0.0')
    # default port is 5000; change it to whatever you need
    parser.add_option("--port", help="port, default:5000", dest='port', type="int", default=5000)
    parser.add_option("--username", help="basic auth username ,default: %s" % config.get('BASIC_AUTH_USERNAME'), dest='username', default=config.get('BASIC_AUTH_USERNAME'))
    parser.add_option("--password", help="basic auth password ,default: %s" % config.get('BASIC_AUTH_PASSWORD'), dest='password', default=config.get('BASIC_AUTH_PASSWORD'))
    parser.add_option("--type", help="access spider server type, default: %s" % config.get('SERVER_TYPE'), dest='server_type', default=config.get('SERVER_TYPE'))
    parser.add_option("--server", help="servers, default: %s" % config.get('SERVERS'), dest='servers', action='append', default=[])
    parser.add_option("--database-url", help='SpiderKeeper metadata database default: %s' % config.get('SQLALCHEMY_DATABASE_URI'), dest='database_url', default=config.get('SQLALCHEMY_DATABASE_URI'))
    parser.add_option("--no-auth", help="disable basic auth", dest='no_auth', action='store_true')
    parser.add_option("-v", "--verbose", help="log level", dest='verbose', action='store_true')
    return parser.parse_args()
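Taken together with config.py, this means most settings can be overridden at launch time instead of editing files; the command below is only a hypothetical example (port 5001, the credentials, and the second scrapyd address are placeholders):

# hypothetical launch: custom port, custom basic-auth credentials, two scrapyd servers
sudo spiderkeeper --port 5001 --username myuser --password mypass --server=http://localhost:6800 --server=http://192.168.1.20:6800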
Start scrapyd
Use scrapyd-deploy to deploy your project to your local server, so that your local scrapyd receives the corresponding .egg file.
python C:\Users\dengyi\AppData\Local\Programs\Python\Python36\Scripts\scrapyd-deploy cqvip -p Cqvip
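For scrapyd-deploy to know where to push the project, the Scrapy project's scrapy.cfg needs a matching deploy target. A minimal sketch, assuming the target is named cqvip and the project Cqvip as in the command above (the settings module name is an assumption):

# scrapy.cfg -- illustrative only; adjust the settings module and URL to your project
[settings]
default = Cqvip.settings

[deploy:cqvip]
url = http://localhost:6800/
project = Cqvip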
Start SpiderKeeper
I started several instances here; open the UI at http://localhost:5000
Deploy
Deployment: the first step in creating a job is to go to Deploy. Create a new project, which we name test, and upload the .egg file on the Deploy page (a sketch of how to build that egg follows below).
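How that .egg gets produced: a minimal sketch, assuming scrapyd-client is installed and the command is run from the Scrapy project root (the output filename is a placeholder):

# build the project egg locally without deploying it to a scrapyd server,
# then upload the resulting file on SpiderKeeper's Deploy page
scrapyd-deploy --build-egg output.egg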
The Dashboard is the control panel: there you can start your spiders and monitor their running status. At this point, a complete SpiderKeeper setup is up and running.