Deploying a Highly Available PostgreSQL Cluster on Kubernetes with Stolon

Preface

This post uses Stolon to build a highly available PostgreSQL setup, mainly to provide an HA database for Harbor. For setting up Harbor itself, see my earlier posts on deploying Harbor on Kubernetes without pitfalls and on Harbor registry replication; HA setups for Redis and for Harbor will follow in later posts.

Comparing the options

A quick comparison of several PostgreSQL HA solutions:

Quoted from https://studygolang.com/articles/19002?fr=sidebar

  • First, repmgr's algorithm has obvious flaws and is not a mainstream distributed consensus algorithm, so it is ruled out immediately;
  • Stolon and Patroni are more cloud native than Crunchy, which is built on pgPool.
  • Crunchy and Patroni have more users than Stolon, and both provide an Operator for later management and scaling.

Based on this quick comparison I ended up choosing Stolon; the author of the quoted article chose Patroni, and in practice the difference does not seem significant.

1. Stolon overview

Stolon (https://github.com/sorintlab/stolon) consists of three components:

  • keeper: manages a PostgreSQL instance, converging it to the clusterview computed by the sentinel(s).
  • sentinel: discovers and monitors the keepers and computes the optimal clusterview.
  • proxy: the client access point; it enforces connections to the correct PostgreSQL master and forcibly closes connections to non-elected masters.

Stolon uses etcd or consul as the main cluster state store (the manifests in this post instead use the Kubernetes API itself, storing state in configmaps).
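
Once the manifests below are applied, each component shows up as an ordinary Kubernetes workload. A quick way to see them side by side (a sketch assuming the stolon-cluster=kube-stolon and component labels used in the example manifests):

# List the Stolon pods with their component (keeper / sentinel / proxy) as an extra column
kubectl get pods -l stolon-cluster=kube-stolon -L component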

2. Installation

git clone https://github.com/sorintlab/stolon.git
cd stolon/examples/kubernetes

[Figure: the stolon repository's kubernetes example directory]
If you are interested, the official guide is at https://github.com/sorintlab/stolon/blob/master/examples/kubernetes/README.md
The places in the YAML that need attention and modification are listed below:

  • Set the PostgreSQL superuser name in stolon-keeper.yaml:
- name: STKEEPER_PG_SU_USERNAME
  value: "postgres"
  • Set the Stolon data volume in stolon-keeper.yaml (this example uses a storage class named nfs):
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
        - "ReadWriteOnce"
      resources:
        requests:
          storage: "512Mi"
      storageClassName: nfs
  • Set the user password in secret.yaml (see the base64 note right after this list):
apiVersion: v1
kind: Secret
metadata:
    name: stolon
type: Opaque
data:
    password: cGFzc3dvcmQx
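
The password value is base64-encoded, as Kubernetes Secret data requires; cGFzc3dvcmQx decodes to password1. To encode your own password (the -n matters, it keeps a trailing newline out of the encoded value):

echo -n 'password1' | base64     # -> cGFzc3dvcmQx
echo 'cGFzc3dvcmQx' | base64 -d  # decode to verify -> password1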

Below is the complete Stolon manifest I put together; you can modify it and use it directly.

# This is an example and generic rbac role definition for stolon. It could be
# fine tuned and split per component.
# The required permission per component should be:
# keeper/proxy/sentinel: update their own pod annotations
# sentinel/stolonctl: get, create, update configmaps
# sentinel/stolonctl: list components pods
# sentinel/stolonctl: get components pods annotations

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: stolon
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - configmaps
  - events
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: stolon
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: stolon
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stolon-sentinel
spec:
  replicas: 2
  template:
    metadata:
      labels:
        component: stolon-sentinel
        stolon-cluster: kube-stolon
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
    spec:
      containers:
      - name: stolon-sentinel
        image: sorintlab/stolon:master-pg10
        command:
          - "/bin/bash"
          - "-ec"
          - |
            exec gosu stolon stolon-sentinel
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: STSENTINEL_CLUSTER_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.labels['stolon-cluster']
          - name: STSENTINEL_STORE_BACKEND
            value: "kubernetes"
          - name: STSENTINEL_KUBE_RESOURCE_KIND
            value: "configmap"
          - name: STSENTINEL_METRICS_LISTEN_ADDRESS
            value: "0.0.0.0:8080"
          ## Uncomment this to enable debug logs
          #- name: STSENTINEL_DEBUG
          #  value: "true"
        ports:
          - containerPort: 8080
---
apiVersion: v1
kind: Secret
metadata:
    name: stolon
type: Opaque
data:
    password: cGFzc3dvcmQx
---
# PetSet was renamed to StatefulSet in k8s 1.5
# apiVersion: apps/v1alpha1
# kind: PetSet
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: stolon-keeper
spec:
  serviceName: "stolon-keeper"
  replicas: 2
  template:
    metadata:
      labels:
        component: stolon-keeper
        stolon-cluster: kube-stolon
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: stolon-keeper
        image: sorintlab/stolon:master-pg10
        command:
          - "/bin/bash"
          - "-ec"
          - |
            # Generate our keeper uid using the pod index
            IFS='-' read -ra ADDR <<< "$(hostname)"
            export STKEEPER_UID="keeper${ADDR[-1]}"
            export POD_IP=$(hostname -i)
            export STKEEPER_PG_LISTEN_ADDRESS=$POD_IP
            export STOLON_DATA=/stolon-data
            chown stolon:stolon $STOLON_DATA
            exec gosu stolon stolon-keeper --data-dir $STOLON_DATA
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: STKEEPER_CLUSTER_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.labels['stolon-cluster']
          - name: STKEEPER_STORE_BACKEND
            value: "kubernetes"
          - name: STKEEPER_KUBE_RESOURCE_KIND
            value: "configmap"
          - name: STKEEPER_PG_REPL_USERNAME
            value: "repluser"
            # Or use a password file like in the below supersuser password
          - name: STKEEPER_PG_REPL_PASSWORD
            value: "replpassword"
          - name: STKEEPER_PG_SU_USERNAME
            value: "postgres"
          - name: STKEEPER_PG_SU_PASSWORDFILE
            value: "/etc/secrets/stolon/password"
          - name: STKEEPER_METRICS_LISTEN_ADDRESS
            value: "0.0.0.0:8080"
          # Uncomment this to enable debug logs
          #- name: STKEEPER_DEBUG
          #  value: "true"
        ports:
          - containerPort: 5432
          - containerPort: 8080
        volumeMounts:
        - mountPath: /stolon-data
          name: data
        - mountPath: /etc/secrets/stolon
          name: stolon
      volumes:
        - name: stolon
          secret:
            secretName: stolon
  # Define your own volumeClaimTemplate. This example uses dynamic PV provisioning with a storage class named "nfs" (the upstream example uses "standard", which works by default with minikube).
  # In production you should use your own defined storage-class and configure your persistent volumes (statically or dynamically using a provisioner, see related k8s doc).
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
        - "ReadWriteOnce"
      resources:
        requests:
          storage: "512Mi"
      storageClassName: nfs
---
apiVersion: v1
kind: Service
metadata:
  name: stolon-proxy-service
spec:
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    component: stolon-proxy
    stolon-cluster: kube-stolon
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stolon-proxy
spec:
  replicas: 2
  template:
    metadata:
      labels:
        component: stolon-proxy
        stolon-cluster: kube-stolon
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
    spec:
      containers:
      - name: stolon-proxy
        image: sorintlab/stolon:master-pg10
        command:
          - "/bin/bash"
          - "-ec"
          - |
            exec gosu stolon stolon-proxy
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: STPROXY_CLUSTER_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.labels['stolon-cluster']
          - name: STPROXY_STORE_BACKEND
            value: "kubernetes"
          - name: STPROXY_KUBE_RESOURCE_KIND
            value: "configmap"
          - name: STPROXY_LISTEN_ADDRESS
            value: "0.0.0.0"
          - name: STPROXY_METRICS_LISTEN_ADDRESS
            value: "0.0.0.0:8080"
          ## Uncomment this to enable debug logs
          #- name: STPROXY_DEBUG
          #  value: "true"
        ports:
          - containerPort: 5432
          - containerPort: 8080
        readinessProbe:
          tcpSocket:
            port: 5432
          initialDelaySeconds: 10
          timeoutSeconds: 5

3. Deploying Stolon

kubectl apply -f stolon.yaml
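
After applying the manifest you can watch the pods come up; note that the keepers wait for the cluster to be initialized (next step) before PostgreSQL actually starts:

kubectl get pods -l stolon-cluster=kube-stolon -w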

Initialize the cluster (this step initializes the Stolon cluster's state in the store; the official explanation is quoted below):

All the stolon components wait for an existing clusterdata entry in the store. So the first time you have to initialize a new cluster. For more details see the cluster initialization doc. You can do this step at every moment, now or after having started the stolon components.
You can execute stolonctl in different ways:

  • as a one shot command executed inside a temporary pod:
kubectl run -i -t stolonctl --image=sorintlab/stolon:master-pg10 --restart=Never --rm -- /usr/local/bin/stolonctl --cluster-name=kube-stolon --store-backend=kubernetes --kube-resource-kind=configmap init
  • from a machine that can access the store backend:
stolonctl --cluster-name=kube-stolon --store-backend=kubernetes --kube-resource-kind=configmap init
  • later from one of the pods running the stolon components.

Run:

kubectl run -i -t stolonctl --image=sorintlab/stolon:master-pg10 --restart=Never --rm -- /usr/local/bin/stolonctl --cluster-name=kube-stolon --store-backend=kubernetes --kube-resource-kind=configmap init
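
Once initialized, you can inspect the cluster state with stolonctl status, again from a temporary pod (a sketch using the same flags as the init command above):

kubectl run -i -t stolonctl-status --image=sorintlab/stolon:master-pg10 --restart=Never --rm -- /usr/local/bin/stolonctl --cluster-name=kube-stolon --store-backend=kubernetes --kube-resource-kind=configmap status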

As shown below, the deployment succeeded.

[Figure: stolon pods]

4. Tearing down the PostgreSQL cluster

kubectl delete -f stolon.yaml
kubectl delete pvc data-stolon-keeper-0 data-stolon-keeper-1
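
Deleting the StatefulSet does not remove the PVCs created from its volumeClaimTemplates, which is why they are deleted explicitly above. If you ran more than two keeper replicas, list the claims first; their names follow the pattern <claim-template>-<statefulset>-<ordinal>:

kubectl get pvc | grep data-stolon-keeper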

5. Verifying the PostgreSQL installation (with some quick tests)

1) Verify data replication

Connect to the master and create a test table (replace <node-ip> with one of your nodes' IPs; port 30543 assumes the NodePort setup described in the note after this snippet):

psql --host <node-ip> --port 30543 postgres -U postgres -W
postgres=# create table test (id int primary key not null, value text not null);
CREATE TABLE
postgres=# insert into test values (1, 'value1');
INSERT 0 1
postgres=# select * from test;
 id | value
----+--------
  1 | value1
(1 row)
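
A note on port 30543: the Service in the manifest above is a plain ClusterIP service, so it is not reachable on a node port out of the box. One way to expose the proxy externally (a sketch; the fixed nodePort 30543 is only an assumption to match the commands in this section):

kubectl patch svc stolon-proxy-service -p '{"spec": {"type": "NodePort", "ports": [{"port": 5432, "targetPort": 5432, "nodePort": 30543}]}}'

Port 30544, used below for the standby, would similarly need its own Service selecting the standby keeper.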

You can also exec into a pod and run PostgreSQL commands there:

kubectl exec -ti stolon-proxy-5977cdbcfc-csnkq bash
# log in with psql
psql --host localhost --port 5432 postgres -U postgres
\l                      # list all databases
\c dbname               # switch to a database
create table test (id int primary key not null, value text not null);
insert into test values (1, 'value1');
select * from test;
\d                      # list all tables in the current database
\q                      # quit psql
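
From inside the cluster no NodePort is needed; any pod in the same namespace can reach the current master through the proxy Service's DNS name:

psql --host stolon-proxy-service --port 5432 postgres -U postgres -W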

Connect to the standby and check the data; since the standby is read-only, seeing the row there confirms it was replicated from the master.

psql --host <node-ip> --port 30544 postgres -U postgres -W
postgres=# select * from test;
 id | value
----+--------
  1 | value1
(1 row)

2) Test failover

This test follows the StatefulSet example in the official repository. In short, to simulate the master going down, we first delete the master's StatefulSet without cascading and then delete the master's pod:

kubectl delete statefulset stolon-keeper --cascade=false
kubectl delete pod stolon-keeper-0
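
You can follow the election from the sentinel side (a sketch assuming the component=stolon-sentinel label from the manifest; on older kubectl versions pass a pod name instead of a label selector):

kubectl logs -f -l component=stolon-sentinel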

Then, in the sentinel logs, we can see a new master being elected:

no keeper info available db=cb96f42d keeper=keeper0
no keeper info available db=cb96f42d keeper=keeper0
master db is failed db=cb96f42d keeper=keeper0
trying to find a standby to replace failed master
electing db as the new master db=087ce88a keeper=keeper1

Now, if we repeat the previous query in the two terminals from before, we see the following output:

postgres=# select * from test;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
postgres=# select * from test;
 id | value
----+--------
  1 | value1
(1 row)

The Kubernetes Service drops the unavailable pod from its endpoints and routes requests to the available ones, so the new read connection is routed to a healthy pod.
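
To restore the setup after the test, re-apply the manifest so that the keeper StatefulSet (deleted above with --cascade=false) is recreated:

kubectl apply -f stolon.yaml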

You can also use chaoskube to simulate random pod failures (worth trying in a pre-production environment).

Another good way to test cluster resilience is chaoskube, a small utility that periodically kills random pods in the cluster. It can also be deployed with its Helm chart:

helm install --set labels="release=factual-crocodile,component!=factual-crocodile-etcd" --set interval=5m stable/chaoskube

This command runs chaoskube so that it deletes one pod every 5 minutes. It selects pods labeled release=factual-crocodile but ignores the etcd pods.

This post follows the official guide and mainly prepares the ground for the upcoming Harbor HA setup. If you found it helpful, give it a like; follow-up posts will cover Redis and Harbor high availability.

References:
http://www.javashuo.com/article/p-xcdhdjuv-eu.html
https://github.com/sorintlab/stolon/tree/master/examples/kubernetes
https://studygolang.com/articles/19002?fr=sidebar
