1. Installing and Running Elasticsearch
1. Download and extract the package.
2. Go to the elasticsearch bin directory.
3. Run the ./elasticsearch command to start it.
4. Open http://ip:9200/ in a browser; if the node information page appears, Elasticsearch started successfully.
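The same check can be done from the command line (using the same ip placeholder as in step 4); the root endpoint returns basic node and cluster information such as name, cluster_name and version:

curl http://ip:9200/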
2. Cluster Setup
1. In my case, ES is installed on three machines to form the cluster.
2. Go to the elasticsearch config directory and edit elasticsearch.yml:
# ======================== Elasticsearch Configuration =========================
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
# cluster.name must be identical on all three machines
cluster.name: my-askingdata
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
# node.name must be different on each of the three machines
node.name: node-1
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.mlockall: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
# the IP address of this node
network.host: 192.168.1.106
#
# Set a custom port for HTTP:
# HTTP port
http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
# list the IPs of all three machines
discovery.zen.ping.unicast.hosts: ["192.168.1.106", "192.168.1.108","192.168.1.109"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
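# With three nodes the majority is 3 / 2 + 1 = 2, so (assuming the zen
# discovery module of ES 2.x, which the settings above belong to) you would
# typically also set:
# discovery.zen.minimum_master_nodes: 2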
3. Installing and Using the head Plugin
1. Create a head directory under the ES plugins directory.
2. Download head and extract it into plugins/head.
3. Start ES on each of the three machines.
4. Open http://192.168.1.106:9200/_plugin/head/
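As an alternative to copying the files by hand, on ES 2.x the head plugin can usually be installed directly with the plugin manager (mobz/elasticsearch-head is the commonly used GitHub coordinate and may differ for other versions):

$ $ES_HOME/bin/plugin install mobz/elasticsearch-head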
4. Index Operations
4.1 Creating an index document
Submit the following song record to ES to create the index:
url:http://127.0.0.1:9200/song001/list001/1
data:{"number":32768,"singer":"楊坤","size":"5109132","song":"今夜二十歲","tag":"中國好聲音","timelen":319}
The index name is: song001;
the index type is: list001;
the id of this record is: 1.
The response shows that creation succeeded and that the version number is 1. ES tracks a version for every record: a newly created record starts at version 1, and each subsequent modification of the same record increments it by 1.
With that, one record has been submitted to ES and indexed. Note that the HTTP method is PUT; don't pick the wrong one.
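Written as a curl call (a sketch of the exact request described above):

curl -XPUT 'http://127.0.0.1:9200/song001/list001/1' -d '{"number":32768,"singer":"楊坤","size":"5109132","song":"今夜二十歲","tag":"中國好聲音","timelen":319}'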
4.2 Querying an index document
The RESTful interface for querying a document by the ID it was indexed with is as follows:
url:http://127.0.0.1:9200/song001/list001/1
The HTTP method is GET.
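For example:

curl -XGET 'http://127.0.0.1:9200/song001/list001/1'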
4.3 Updating an index document
The RESTful interface for updating a document's content by its ID is as follows:
url:http://127.0.0.1:9200/song001/list001/1
The HTTP method is PUT.
Change the singer from 「楊坤」 to 「楊坤獨唱」.
In the result the version field has become 2, because this was a modification and the index version is incremented; the created field is false, indicating an update rather than a new creation.
The update interface is exactly the same as the creation interface: ES checks whether the record exists; if it does not, the call creates it, and if it does, the call updates it.
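A sketch of the update request, re-PUTting the whole document with the changed singer field:

curl -XPUT 'http://127.0.0.1:9200/song001/list001/1' -d '{"number":32768,"singer":"楊坤獨唱","size":"5109132","song":"今夜二十歲","tag":"中國好聲音","timelen":319}'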
4.4 Deleting an index document
The RESTful interface for deleting a document by its ID is as follows:
url:http://127.0.0.1:9200/song001/list001/1
The HTTP method is DELETE.
After deletion, querying through the query interface will no longer return the record.
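For example:

curl -XDELETE 'http://127.0.0.1:9200/song001/list001/1'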
Summary:
The RESTful URL form for create, update, query and delete is: http://localhost:9200/<index>/<type>/[<id>]
Create, update, query and delete map to the HTTP PUT, PUT, GET and DELETE methods respectively. A PUT call creates the document if it does not exist and updates it if it already does.
Fast bulk insertion of data
Download https://github.com/codelibs/elasticsearch-reindexing
Or install it online (no download needed) with: $ $ES_HOME/bin/plugin install org.codelibs/elasticsearch-reindexing/2.1.1
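Independently of that plugin, Elasticsearch's standard _bulk API can also load many documents in a single request. A minimal sketch against the song001 index used above (the body is newline-delimited JSON, each action line followed by its document, and it must end with a newline):

curl -XPOST 'http://192.168.1.106:9200/song001/list001/_bulk' -d '
{"index":{"_id":"2"}}
{"singer":"楊坤","song":"今夜二十歲","tag":"中國好聲音"}
{"index":{"_id":"3"}}
{"singer":"楊坤獨唱","song":"今夜二十歲","tag":"中國好聲音"}
'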
When a max_result_window error occurs while querying,
run the following command; the max_result_window value (30000 here) must be >= the total number of documents:
curl -XPUT "http://192.168.1.106:9200/shark/_settings" -d '{ "index" : { "max_result_window" :30000 } }'
Shutting down a node
ps -ef | grep elasticsearch
kill ****  (**** is the PID from the ps output above)