Initial configuration
Configure time synchronization
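The original gives no commands for this step. As a minimal sketch, assuming a systemd-based host (the ufw command below suggests Ubuntu/Debian), either the built-in systemd-timesyncd or chrony can keep the cluster clocks in sync:

# Option 1: systemd-timesyncd
timedatectl set-ntp true
timedatectl status              # look for "System clock synchronized: yes"

# Option 2: chrony (package/service names assume Ubuntu/Debian)
apt-get install -y chrony
systemctl enable --now chrony
chronyc tracking                # show the current synchronization state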
Disable the firewall
ufw disable
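ufw is Ubuntu/Debian-specific. On a RHEL/CentOS host the equivalent (an assumption, not part of the original) would be:

# Stop firewalld now and keep it from starting on boot
systemctl stop firewalld
systemctl disable firewalld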
Set ulimit
See https://my.oschina.net/u/914655/blog/3067520
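The linked post is not reproduced here. As a hedged sketch, the limits Elasticsearch 7.x's bootstrap checks care about are at least 65535 open files and 4096 threads for the user running it; they can be raised in /etc/security/limits.conf (the values below are typical, adjust to your environment):

# /etc/security/limits.conf
elasticsearch soft nofile 65535
elasticsearch hard nofile 65535
elasticsearch soft nproc  4096
elasticsearch hard nproc  4096

# Verify in a fresh login session as the elasticsearch user
ulimit -n    # open files
ulimit -u    # max user processes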
Modify the vm.max_map_count value
On Linux, Elasticsearch stores index files with the hybrid mmapfs / niofs store by default, so the number of memory-mapped areas the process may create is capped by the kernel's vm.max_map_count. If the limit is too low, Elasticsearch throws an exception at startup, so we raise it with the commands below.
# Temporary setting (effective immediately, lost after a reboot)
sysctl -w vm.max_map_count=655360
vm.max_map_count = 655360
# Permanent setting
echo 'vm.max_map_count=655360' >> /etc/sysctl.conf
sysctl -p
vm.max_map_count = 655360
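A quick way to confirm the value took effect (not in the original):

sysctl vm.max_map_count            # should print vm.max_map_count = 655360
cat /proc/sys/vm/max_map_count     # same value, read straight from procfs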
Installation
cd /app
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.3.1-linux-x86_64.tar.gz
tar zxvf elasticsearch-7.3.1-linux-x86_64.tar.gz
cd elasticsearch-7.3.1/
# Elasticsearch cannot run as root, so create a dedicated user
useradd elasticsearch
# Create the data and log directories
cd /data
mkdir elastic
cd elastic
mkdir data
mkdir logs
# Hand the install and data directories over to that user
chown -R elasticsearch:elasticsearch /app/elasticsearch-7.3.1
chown -R elasticsearch:elasticsearch /data/elastic
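The Linux tarball of Elasticsearch 7.x ships with a bundled OpenJDK, so installing a separate JVM is usually unnecessary. A quick check of which Java will be used (paths assume the layout created above; if JAVA_HOME is set, Elasticsearch 7.x prefers it over the bundled JDK):

# Bundled JDK inside the tarball
/app/elasticsearch-7.3.1/jdk/bin/java -version
# Any externally configured JDK
echo "$JAVA_HOME"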
Configuration files
# Adjust the JVM heap to match the machine's memory (keep Xms and Xmx equal)
vim config/jvm.options
-Xms12g
-Xmx12g

vim config/elasticsearch.yml
# Cluster name; must be identical on every node
cluster.name: jp-es
# Node name; must be unique within the cluster, usually numbered incrementally
node.name: ES-node1
# Custom node attribute (a rack id here); can be used for shard allocation awareness
node.attr.rack: r1
# Whether this node is eligible to be elected master
node.master: true
# Whether this node stores data
node.data: true
# Data path
path.data: /data/elastic/data
# Log path
path.logs: /data/elastic/logs
# Allow access from other hosts
network.host: 0.0.0.0
# HTTP port
http.port: 9200
# Seed hosts used to discover the other nodes of the cluster
discovery.seed_hosts: ["10.13.6.9","10.13.6.10","10.13.6.11","10.13.6.12","10.13.6.13"]
# Master-eligible nodes that take part in the first election when the cluster is bootstrapped
cluster.initial_master_nodes: ["ES-node1", "ES-node2", "ES-node3"]
For the other nodes, only node.name (and node.master, where appropriate) needs to change; everything else can stay the same, and then the cluster can be tested. A sketch of rolling the config out follows.
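A minimal sketch of distributing the config, assuming passwordless SSH to the hosts listed in discovery.seed_hosts, the same directory layout on every node, and that 10.13.6.9 is ES-node1 (the numbering scheme is an assumption):

#!/usr/bin/env bash
# Copy elasticsearch.yml to the remaining nodes, changing only node.name.
hosts=(10.13.6.10 10.13.6.11 10.13.6.12 10.13.6.13)
i=2
for h in "${hosts[@]}"; do
  scp /app/elasticsearch-7.3.1/config/elasticsearch.yml "$h":/app/elasticsearch-7.3.1/config/
  ssh "$h" "sed -i 's/^node.name: .*/node.name: ES-node$i/' /app/elasticsearch-7.3.1/config/elasticsearch.yml"
  i=$((i+1))
done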
Start
su elasticsearch
cd /app/elasticsearch-7.3.1
bin/elasticsearch -d
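To confirm the node came up, check the process and follow the cluster log; the main log file is named after cluster.name, so with the configuration above it should be /data/elastic/logs/jp-es.log (treat the exact path as an assumption):

ps -ef | grep '[e]lasticsearch'     # the [e] trick hides the grep process itself
tail -f /data/elastic/logs/jp-es.log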
Test
curl http://localhost:9200
{
  "name" : "ES-node1",
  "cluster_name" : "jp-es",
  "cluster_uuid" : "KiRH05UbTTC25NPOr2zDRQ",
  "version" : {
    "number" : "7.3.1",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "4749ba6",
    "build_date" : "2019-08-19T20:19:25.651794Z",
    "build_snapshot" : false,
    "lucene_version" : "8.1.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

curl http://localhost:9200/_cluster/health
{
  "cluster_name" : "jp-es",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 5,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100
}
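For a per-node view (output not reproduced here), the _cat APIs are convenient:

# One line per node: roles, heap usage, and which node is the elected master
curl http://localhost:9200/_cat/nodes?v
# One-line cluster health summary
curl http://localhost:9200/_cat/health?v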