Elasticsearch: exporting and importing data with elasticdump

1. Back up and import all indices

Install:

# install Node.js and npm first (CentOS), then elasticdump globally
sudo yum install npm
npm install elasticdump -g

# or run it from a checkout of the source
git clone https://github.com/taskrabbit/elasticsearch-dump.git
cd elasticsearch-dump
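A quick sanity check after the global install (assuming npm's global bin directory is on your PATH):

# confirm the binary is reachable and see which version npm installed
which elasticdump
npm list -g elasticdump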


(1) Create the backup directory
mkdir /data/es_data_backup
(2) Migrate all indices from the source machine to the target machine
# export the mapping structure and the data of the original indices
elasticdump --input=http://10.200.57.118:9200/ --output=/data/es_data_backup/cmdb_dump-mapping.json --all=true --type=mapping
elasticdump --input=http://10.200.57.118:9200/ --output=/data/es_data_backup/cmdb_dump.json --all=true --type=data

# import the mappings and the data into the new cluster node
elasticdump --input=/data/es_data_backup/cmdb_dump-mapping.json --output=http://10.200.57.118:9200/ --bulk=true
elasticdump --input=/data/es_data_backup/cmdb_dump.json --output=http://10.200.57.118:9200/ --bulk=true
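The four commands above can also be wrapped in a small shell script so the cluster addresses and backup path are not hard-coded; a minimal sketch, where SRC_ES, DST_ES and BACKUP_DIR are placeholders you would adjust for your own environment:

#!/bin/bash
# export everything from the source cluster, then replay mappings and data into the target
SRC_ES="http://10.200.57.118:9200/"    # source cluster (placeholder)
DST_ES="http://10.200.57.118:9200/"    # target cluster (placeholder)
BACKUP_DIR="/data/es_data_backup"

mkdir -p "$BACKUP_DIR"

# export mappings and data
elasticdump --input="$SRC_ES" --output="$BACKUP_DIR/cmdb_dump-mapping.json" --all=true --type=mapping
elasticdump --input="$SRC_ES" --output="$BACKUP_DIR/cmdb_dump.json" --all=true --type=data

# import mappings first, then data
elasticdump --input="$BACKUP_DIR/cmdb_dump-mapping.json" --output="$DST_ES" --bulk=true
elasticdump --input="$BACKUP_DIR/cmdb_dump.json" --output="$DST_ES" --bulk=true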

2. Back up and import a specific index


curl -XGET '192.168.11.10:9200/_cat/indices?v&pretty'   # list the existing indices
health status index      pri rep docs.count docs.deleted store.size pri.store.size
green  open   jyall-test 5   1   18908740   2077368      25gb       12.5gb


# Back up index data to a file:
elasticdump --input=http://10.200.57.118:9200/ele_nginx_clusters --output=/data/es_data_backup/ele_nginx_clusters_mapping.json --type=mapping

elasticdump --input=http://10.200.57.118:9200/ele_nginx_clusters --output=/data/es_data_backup/ele_nginx_clusters.json --type=data
# Or use gzip; in my tests this saved more than 10x the space. When importing, run gunzip on ele_nginx_clusters.json.gz first, then import.
# Back up an index to a gzip file using stdout:
elasticdump --input=http://10.200.57.118:9200/ele_nginx_clusters --output=$ | gzip > /data/es_data_backup/ele_nginx_clusters.json.gz


Import:
elasticdump --input=/data/es_data_backup/ele_nginx_clusters_mapping.json --output=http://10.200.57.118:9200/ --bulk=true
elasticdump --input=/data/es_data_backup/ele_nginx_clusters.json --output=http://10.200.57.118:9200/ --bulk=true
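If the two clusters can reach each other directly, the intermediate dump file can be skipped and the index copied cluster-to-cluster; a hedged sketch, with SRC and DST as placeholders for your own source and target hosts:

# copy a single index straight between clusters: mapping first, then data
SRC="http://10.200.57.118:9200"   # source cluster (placeholder)
DST="http://10.200.57.119:9200"   # target cluster (hypothetical host)
elasticdump --input="$SRC/ele_nginx_clusters" --output="$DST/ele_nginx_clusters" --type=mapping
elasticdump --input="$SRC/ele_nginx_clusters" --output="$DST/ele_nginx_clusters" --type=data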

3. Errors and issues encountered during export

(1) The error:
Thu, 26 Apr 2018 09:14:49 GMT | Error Emitted => read ECONNRESET
Thu, 26 Apr 2018 09:14:49 GMT | Total Writes: 19800
Thu, 26 Apr 2018 09:14:49 GMT | dump ended with error ( get phase) => Error: read ECONNRESET
(2) The fix (quoted from the references below):
It sounds like your issue is being caused by elasticdump opening too many sockets to your elasticsearch cluster. You can use the --maxSockets option to limit the number of sockets opened.
elasticdump --input http://192.168.2.222:9200/index1 --output http://192.168.2.222:9200/index2 --type=data --maxSockets=5

Reference:
https://stackoverflow.com/questions/33248267/dump-ended-with-error-set-phase-error-read-econnreset
https://github.com/nodejs/node/issues/10563
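On top of --maxSockets, elasticdump's --limit option (the number of documents moved per batch, 100 by default) is another knob worth trying when the source cluster keeps resetting connections; a hedged example combining the two, with illustrative values:

# at most 5 concurrent sockets, 200 documents per bulk request (values are illustrative)
elasticdump --input=http://192.168.2.222:9200/index1 --output=http://192.168.2.222:9200/index2 --type=data --maxSockets=5 --limit=200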

How to import Elasticsearch 6.0 data into Elasticsearch 6.7:

# Install the IK analysis plugin on the target 6.7 cluster first (the indices here presumably use the ik analyzer)
bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.7.0/elasticsearch-analysis-ik-6.7.0.zip

# Confirm the plugin is loaded
curl http://192.168.150.116:9210/_cat/plugins

# Copy all data from the 6.0 cluster into the 6.7 cluster
elasticdump --input=http://192.168.150.166:9200/ --output=http://192.168.150.114:9210 --all=true --type=data
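After the copy completes, a quick way to verify it is to compare the index list and document counts on both clusters, using the same hosts as above:

# document counts on the 6.0 source vs. the 6.7 target
curl 'http://192.168.150.166:9200/_cat/indices?v'
curl 'http://192.168.150.114:9210/_cat/indices?v'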
