[elk] Implementing hot-cold data separation in Elasticsearch

This article uses the latest elasticsearch-6.3.0.tar.gz as an example. To save resources, replicas are set to 0 and no client role is used.

https://www.elastic.co/blog/hot-warm-architecture-in-elasticsearch-5-x

In ES 2.x this was configured in elasticsearch.yml as node.tag: hot; that setting no longer takes effect.

It was replaced by:

node.attr.box_type: hot
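Once the nodes are up, you can confirm the attribute was applied with the standard `_cat/nodeattrs` API (available since ES 5.0); each data node should report its box_type:

```
GET _cat/nodeattrs?v&h=node,attr,value
```

The hot node should show a `box_type` / `hot` row and the cold node a `box_type` / `cold` row.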

ES architecture

ES configuration for each node

Master node:
[root@n1 ~]# cat /usr/local/elasticsearch/config/elasticsearch.yml 
cluster.name: elk
node.master: true
node.data: false
node.name: 192.168.2.11
#node.attr.box_type: hot
#node.tag: hot
path.data: /data/es 
path.logs: /data/log
network.host: 192.168.2.11
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.zen.ping.unicast.hosts: ["192.168.2.11"]
cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%
indices.fielddata.cache.size: 10%
indices.breaker.fielddata.limit: 30%
http.cors.enabled: true
http.cors.allow-origin: "*"
- Client node (not configured here)

node.master: false
node.data: false
- Hot node
[root@n2 ~]# cat /usr/local/elasticsearch/config/elasticsearch.yml 
cluster.name: elk
node.master: false
node.data: true
node.name: 192.168.2.12
node.attr.box_type: hot
path.data: /data/es
network.host: 192.168.2.12
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.zen.ping.unicast.hosts: ["192.168.2.11"]
cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%
indices.fielddata.cache.size: 10%
indices.breaker.fielddata.limit: 30%
http.cors.enabled: true
http.cors.allow-origin: "*"
- Cold node
[root@n3 ~]# cat /usr/local/elasticsearch/config/elasticsearch.yml 
cluster.name: elk
node.master: false
node.data: true
node.name: 192.168.2.13
node.attr.box_type: cold
path.data: /data/es
network.host: 192.168.2.13
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
discovery.zen.ping.unicast.hosts: ["192.168.2.11"]
cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%
indices.fielddata.cache.size: 10%
indices.breaker.fielddata.limit: 30%
http.cors.enabled: true
http.cors.allow-origin: "*"

How do you write a given index's data to specific nodes? (By node tag.)

My hot node is tagged:

node.attr.box_type: hot

Create a template (here I use Kibana to call the ES API):

PUT _template/test
{
    "index_patterns": "test-*",
    "settings": {
        "index.number_of_replicas": "0",
        "index.routing.allocation.require.box_type": "hot"
     }
}
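To double-check what was stored, the template can be read back with the standard template API:

```
GET _template/test
```

The response should echo the index_patterns and settings above.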

This means that data for any index named test-* is placed on the hot nodes.

How do you migrate data from the hot nodes to the cold nodes?

Taking the test-2018.07.05 index as an example, migrate it from the hot node to the cold node.

In Kibana:

PUT /test-2018.07.05/_settings
{
  "settings": {
    "index.routing.allocation.require.box_type": "cold"
  }
}
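After this settings update, ES relocates the shards automatically. Progress can be watched with the standard `_cat/shards` API; the shard's node column should move from the hot node (192.168.2.12) to the cold node (192.168.2.13):

```
GET _cat/shards/test-2018.07.05?v
```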

In production, an index may be generated per day or per hour:

test-2018.07.01
test-2018.07.02
test-2018.07.03
test-2018.07.04
test-2018.07.05
...

I can write a shell cron job that migrates data every night. For example, if the hot nodes keep only 7 days of data, I match the indices older than 7 days and run the migration command above each night.
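A minimal sketch of such a nightly job, assuming GNU date, curl, daily indices named test-YYYY.MM.dd, and the master node address from the configs above (adjust to your cluster):

```shell
#!/bin/sh
# Nightly migration sketch: mark the 7-day-old index as cold so ES
# relocates its shards to the cold node.
ES_HOST="192.168.2.11:9200"

# Print the index name from N days before a reference date (default: today).
index_for() {
  days="$1"
  ref="${2:-now}"
  echo "test-$(date -d "$ref -$days days" +%Y.%m.%d)"
}

# Update the index's allocation setting; ES handles the shard relocation.
migrate_to_cold() {
  idx="$(index_for 7)"
  curl -s -X PUT "http://$ES_HOST/$idx/_settings" \
    -H 'Content-Type: application/json' \
    -d '{"index.routing.allocation.require.box_type": "cold"}'
}

# Uncomment to run from cron:
# migrate_to_cold
```

Scheduled with a crontab entry such as `0 1 * * * /path/to/migrate.sh`, this moves exactly one index per night, matching the daily index rollover.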

How about keeping the cold-node data for one month?
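The same nightly job could also delete the index that has aged out of the cold tier. `DELETE /<index>` is the standard API; the index name below is a hypothetical example (30 days before 2018.07.05). Tools such as Elasticsearch Curator can manage this kind of retention as well:

```
DELETE /test-2018.06.05
```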

https://www.cnblogs.com/iiiiher/p/8029062.html

Optimization tips:

1. To improve throughput:

path.data: /data1,/data2,/data3,/data4,/data5 (each directory can be mounted on its own disk)

2. If there are 10 hot nodes, you can set 10 shards.
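The shard count is also set in the template, so a sketch of point 2 (assuming 10 hot nodes, replacing the `test` template body shown earlier) would be:

```
PUT _template/test
{
    "index_patterns": "test-*",
    "settings": {
        "index.number_of_shards": "10",
        "index.number_of_replicas": "0",
        "index.routing.allocation.require.box_type": "hot"
    }
}
```

One primary shard per hot node spreads the indexing load evenly across the tier.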

Logstash test

input { stdin { } }
output {
    elasticsearch { 
        index => "test-%{+YYYY.MM.dd}"
        hosts => ["192.168.2.11:9200"] 
    }
    stdout { codec => rubydebug }
}

/usr/local/logstash/bin/logstash -f logstash.yaml --config.reload.automatic

About ES index templates

When data is indexed into ES, it is matched against an index template; by default the logstash template is matched.

A template consists of roughly two parts: settings and mappings.

  1. settings covers index-level configuration, such as shard count, replica count, translog sync conditions, refresh interval, and so on.
  2. mappings is mostly descriptive information, roughly split into _all, _source, and properties: https://elasticsearch.cn/article/335

Which index template applies is decided by matching the index name. Note that index templates are stored in the cluster state and apply cluster-wide, not per node; steering an index to particular nodes is done through the allocation settings inside the template (the box_type tags above).
