The Kubernetes and Ceph environment we use is described in:
http://www.javashuo.com/article/p-szvgmdvc-gz.html
https://blog.51cto.com/leejia/2499684
Elastic Cloud on Kubernetes (ECK) is a new orchestration product built on the Kubernetes Operator pattern; with it, users can provision, manage, and run Elasticsearch clusters on Kubernetes. ECK's vision is to deliver a SaaS-like experience for Elastic products and solutions on Kubernetes.
ECK is built with the Kubernetes Operator pattern and must be installed inside the Kubernetes cluster. It handles deployment and, beyond that, focuses on simplifying all day-2 operational work.
Kubernetes is currently the leader in container orchestration, and by releasing ECK the Elastic community makes it easier to run Elasticsearch in the cloud, adding another brick to the cloud-native ecosystem and keeping up with the times.
Deploy ECK and check that its logs look normal:
# kubectl apply -f https://download.elastic.co/downloads/eck/1.1.2/all-in-one.yaml
# kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
After a few minutes, check that elastic-operator is running normally; ECK consists of a single elastic-operator pod:
# kubectl get pods -n elastic-system
NAME                 READY   STATUS    RESTARTS   AGE
elastic-operator-0   1/1     Running   1          2m55s
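Besides the operator itself, the all-in-one manifest installs the Elastic custom resource definitions. A quick sanity check (a sketch, not part of the original steps) is to list them:

# kubectl get crd | grep k8s.elastic.co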
For our test we deploy the cluster with one master node and one data node; in production, three or more master nodes are recommended. The manifest below configures the heap size of each instance, the memory available to the container, and the container's virtual memory (vm.max_map_count); adjust these to your cluster's needs:
# vim es.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.7.1
  nodeSets:
  - name: master-nodes
    count: 1
    config:
      node.master: true
      node.data: false
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms1g -Xmx1g
          resources:
            requests:
              memory: 2Gi
            limits:
              memory: 2Gi
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: rook-ceph-block
  - name: data-nodes
    count: 1
    config:
      node.master: false
      node.data: true
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms1g -Xmx1g
          resources:
            requests:
              memory: 2Gi
            limits:
              memory: 2Gi
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: rook-ceph-block

# kubectl apply -f es.yaml
After a while, check the status of the elasticsearch cluster:
# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
quickstart-es-data-nodes-0     1/1     Running   0          54s
quickstart-es-master-nodes-0   1/1     Running   0          54s
# kubectl get elasticsearch
NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    2       7.7.1     Ready   73s
Check the PV status; we can see that the requested PVs have been created and bound successfully:
# kubectl get pv
pvc-512cc739-3654-41f4-8339-49a44a093ecf   10Gi   RWO   Retain   Bound   default/elasticsearch-data-quickstart-es-data-nodes-0     rook-ceph-block   9m5s
pvc-eff8e0fd-f669-448a-8b9f-05b2d7e06220   5Gi    RWO   Retain   Bound   default/elasticsearch-data-quickstart-es-master-nodes-0   rook-ceph-block   9m5s
The cluster enables basic authentication by default; the username is elastic and the password can be read from a secret. HTTPS with a self-signed certificate is also enabled by default. We can access elasticsearch through its service resource:
# kubectl get services
NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
quickstart-es-data-nodes     ClusterIP   None             <none>        <none>     4m10s
quickstart-es-http           ClusterIP   10.107.201.126   <none>        9200/TCP   4m11s
quickstart-es-master-nodes   ClusterIP   None             <none>        <none>     4m10s
quickstart-es-transport      ClusterIP   None             <none>        9300/TCP   4m11s
# kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
# curl https://10.107.201.126:9200 -u 'elastic:J1fO9bu88j8pYK8rIu91a73o' -k
{
  "name" : "quickstart-es-data-nodes-0",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "AQxFX8NiTNa40mOPapzNXQ",
  "version" : {
    "number" : "7.7.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ad56dce891c901a492bb1ee393f12dfff473a423",
    "build_date" : "2020-05-28T16:30:01.040088Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
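For repeated calls it is convenient to keep the password in a shell variable. The following sketch reuses the secret lookup above and queries the standard _cat/nodes API (the IP is the quickstart-es-http ClusterIP shown above):

# PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
# curl -k -u "elastic:$PASSWORD" 'https://10.107.201.126:9200/_cat/nodes?v'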
Scale out by one data node without downtime: change the value of count under data-nodes in es.yaml to 2 (see the excerpt below), then apply es.yaml.
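For reference, the only change in es.yaml is the count field of the data-nodes nodeSet; a minimal excerpt:

  - name: data-nodes
    count: 2    # was 1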
# kubectl apply -f es.yaml
# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
quickstart-es-data-nodes-0     1/1     Running   0          24m
quickstart-es-data-nodes-1     1/1     Running   0          8m22s
quickstart-es-master-nodes-0   1/1     Running   0          24m
# kubectl get elasticsearch
NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    3       7.7.1     Ready   25m
Scale in by one data node without downtime; the data is migrated off the node automatically: change the value of count under data-nodes in es.yaml to 1, then apply es.yaml.
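One way to watch the shard migration during scale-in (a sketch reusing the $PASSWORD variable and ClusterIP from above) is the standard _cat/shards API; RELOCATING shards disappear once the drain is complete:

# curl -k -u "elastic:$PASSWORD" 'https://10.107.201.126:9200/_cat/shards?v'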
Since kibana also enables HTTPS with a self-signed certificate by default, we can choose to disable it. Let's deploy kibana with ECK:
# vim kibana.yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.7.1
  count: 1
  elasticsearchRef:
    name: quickstart
  http:
    tls:
      selfSignedCertificate:
        disabled: true

# kubectl apply -f kibana.yaml
# kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
quickstart-es-data-nodes-0       1/1     Running   0          31m
quickstart-es-data-nodes-1       1/1     Running   1          15m
quickstart-es-master-nodes-0     1/1     Running   0          31m
quickstart-kb-6558457759-2rd7l   1/1     Running   1          4m3s
# kubectl get kibana
NAME         HEALTH   NODES   VERSION   AGE
quickstart   green    1       7.7.1     4m27s
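If you want a quick look at kibana before wiring up the ingress below, a port-forward against the quickstart-kb-http service (the same service the TransportServer below references) works; since TLS was disabled above, plain http://localhost:5601 is fine. A sketch:

# kubectl port-forward service/quickstart-kb-http 5601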
Add a layer-4 proxy for kibana in the ingress to expose it externally:
# vim tsp-kibana.yaml
apiVersion: k8s.nginx.org/v1alpha1
kind: GlobalConfiguration
metadata:
  name: nginx-configuration
  namespace: nginx-ingress
spec:
  listeners:
  - name: kibana-tcp
    port: 5601
    protocol: TCP
---
apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
  name: kibana-tcp
spec:
  listener:
    name: kibana-tcp
    protocol: TCP
  upstreams:
  - name: kibana-app
    service: quickstart-kb-http
    port: 5601
  action:
    pass: kibana-app

# kubectl apply -f tsp-kibana.yaml
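A quick check from the command line (assuming the ingress controller node is reachable at 172.18.2.175, the same address used for cerebro below, and that it exposes the 5601 listener) might look like:

# curl -I http://172.18.2.175:5601    # 172.18.2.175 is an assumed ingress node address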
By default kibana connects to elasticsearch as the user elastic; the password is obtained as follows:
# kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
Access kibana through a browser:
Delete elasticsearch, kibana, and ECK:
# kubectl get namespaces --no-headers -o custom-columns=:metadata.name \
  | xargs -n1 kubectl delete elastic --all -n
# kubectl delete -f https://download.elastic.co/downloads/eck/1.1.2/all-in-one.yaml
First install helm, the package manager for Kubernetes applications. Helm wraps the YAML files of native Kubernetes applications, lets you customize some application metadata at deploy time, and relies on charts to distribute applications on k8s. Install helm and add the stable chart repository:
# wget https://get.helm.sh/helm-v3.2.3-linux-amd64.tar.gz
# tar -zxvf helm-v3.2.3-linux-amd64.tar.gz
# mv linux-amd64/helm /usr/local/bin/helm
# helm repo add stable https://kubernetes-charts.storage.googleapis.com
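To confirm that helm is installed and the repository was added:

# helm version
# helm repo list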
Install cerebro via helm:
# helm install stable/cerebro --version 1.1.4 --generate-name
Check cerebro's status:
# kubectl get pods | grep cerebro
cerebro-1591777586-7fd87f7d48-hmlp7   1/1   Running   0   11m
Since the elasticsearch deployed by ECK serves HTTPS with a self-signed certificate by default, we can configure cerebro to skip HTTPS certificate verification (alternatively, add the CA of the self-signed certificate to cerebro so the certificate is trusted), then restart cerebro:
1. Export cerebro's configmap:
# kubectl get configmap cerebro-1591777586 -o yaml > cerebro.yaml
2. Replace the hosts-related configuration of cerebro in the configmap with the following (where quickstart-es-http is the name of the elasticsearch service resource):
play.ws.ssl.loose.acceptAnyCertificate = true
hosts = [
  {
    host = "https://quickstart-es-http.default.svc:9200"
    name = "k8s elasticsearch"
  }
]
3. Apply cerebro's configmap and restart the cerebro pod:
# kubectl apply -f cerebro.yaml
# kubectl get pods | grep cerebro
cerebro-1591777586-7fd87f7d48-hmlp7   1/1   Running   0   11m
# kubectl get pod cerebro-1591777586-7fd87f7d48-hmlp7 -o yaml | kubectl replace --force -f -
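On kubectl 1.15+ a gentler alternative to force-replacing the pod is a rolling restart of the deployment (the deployment name is assumed to match the pod name prefix):

# kubectl rollout restart deployment cerebro-1591777586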
First confirm cerebro's service resource, then configure an ingress to add a layer-7 proxy for cerebro:
# kubectl get services
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
cerebro-1591777586   ClusterIP   10.111.107.171   <none>        80/TCP    19m

# vim cerebro-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cerebro-ingress
spec:
  rules:
  - host: cerebro.myk8s.com
    http:
      paths:
      - path: /
        backend:
          serviceName: cerebro-1591777586
          servicePort: 80

# kubectl apply -f cerebro-ingress.yaml
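Before editing /etc/hosts you can verify the ingress rule directly with curl by overriding the Host header (a sketch, using the ingress node address from the next step):

# curl -H 'Host: cerebro.myk8s.com' http://172.18.2.175/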
Add the host entry "172.18.2.175 cerebro.myk8s.com" to the /etc/hosts file on your local PC, then access it through a browser:
Delete cerebro:
# helm list
NAME                 NAMESPACE   REVISION   UPDATED                                   STATUS     CHART           APP VERSION
cerebro-1591777586   default     1          2020-06-10 16:26:30.419723417 +0800 CST   deployed   cerebro-1.1.4   0.8.4
# helm delete cerebro-1591777586