Kubernetes in Practice (5): Installing Redis Sentinel on k8s with Persistent Storage

1. Creating the PVs

  Create the PVs on NFS (or any other type of backend storage). First create the shared directories:

[root@nfs ~]# cat /etc/exports
/k8s/redis-sentinel/0 *(rw,sync,no_subtree_check,no_root_squash)
/k8s/redis-sentinel/1 *(rw,sync,no_subtree_check,no_root_squash)
/k8s/redis-sentinel/2 *(rw,sync,no_subtree_check,no_root_squash)
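
  If the shared directories do not exist yet, they can be created and exported on the NFS server with something like the following (a sketch, assuming the same paths as in /etc/exports above):

[root@nfs ~]# mkdir -p /k8s/redis-sentinel/{0,1,2}
[root@nfs ~]# exportfs -r    # re-export everything listed in /etc/exports
[root@nfs ~]# exportfs -v    # verify the exported directories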

  Download the yaml files:

https://github.com/dotbalo/k8s/

  Create the PVs. Note that the Redis volume size can be adjusted as needed:

[root@k8s-master01 redis-sentinel]# kubectl create -f redis-sentinel-pv.yaml
[root@k8s-master01 redis-sentinel]# kubectl get pv | grep redis
pv-redis-sentinel-0   4Gi   RWX   Recycle   Bound   public-service/redis-sentinel-master-storage-redis-sentinel-master-ss-0   redis-sentinel-storage-class   16h
pv-redis-sentinel-1   4Gi   RWX   Recycle   Bound   public-service/redis-sentinel-slave-storage-redis-sentinel-slave-ss-0     redis-sentinel-storage-class   16h
pv-redis-sentinel-2   4Gi   RWX   Recycle   Bound   public-service/redis-sentinel-slave-storage-redis-sentinel-slave-ss-1     redis-sentinel-storage-class   16h
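
  For reference, a minimal sketch of one entry in redis-sentinel-pv.yaml, matching the capacity, access mode, reclaim policy, and storage class shown above (the NFS server address is an assumption; the actual file in the repo may differ):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-redis-sentinel-0
spec:
  capacity:
    storage: 4Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: redis-sentinel-storage-class
  nfs:
    server: 192.168.1.100        # assumed NFS server address; replace with your own
    path: /k8s/redis-sentinel/0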

 

2. Creating the namespace

  By default, the Redis Sentinel setup is created in the public-service namespace:

kubectl create namespace public-service
# If you are not using public-service, replace public-service with your own namespace in all the yaml files:
# sed -i "s#public-service#YOUR_NAMESPACE#g" *.yaml

 

3. Creating the ConfigMap

  Adjust the Redis configuration as needed; by default it uses RDB persistence:

[root@k8s-master01 redis-sentinel]# kubectl create -f redis-sentinel-configmap.yaml
[root@k8s-master01 redis-sentinel]# kubectl get configmap -n public-service
NAME                    DATA      AGE
redis-sentinel-config   2         17h

  Note that in the ConfigMap, the master address in the slaveof directive of redis-slave.conf is the Headless Service address of the StatefulSet.
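
  For reference, a minimal sketch of the relevant part of the ConfigMap (other keys omitted; the FQDN assumes the master Headless Service described in the next section, and the actual file in the repo may differ):

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-sentinel-config
  namespace: public-service
data:
  redis-slave.conf: |
    port 6379
    dir /data
    # replicate from the master via its Headless Service FQDN
    slaveof redis-sentinel-master-ss-0.redis-sentinel-master-ss.public-service.svc.cluster.local 6379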

 

4. Creating the Services

  The Services mainly provide access between pods; a StatefulSet communicates through its Headless Service, whose DNS name format is statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local, where:

  - serviceName is the name of the Headless Service

  - 0..N-1 is the ordinal of the Pod, from 0 to N-1

  - statefulSetName is the name of the StatefulSet

  - namespace is the namespace the service lives in; the Headless Service and the StatefulSet must be in the same namespace

  - .cluster.local is the Cluster Domain

  For example, the Headless Service addresses in this cluster are:

    Master:

      redis-sentinel-master-ss-0.redis-sentinel-master-ss.public-service.svc.cluster.local:6379

    Slave:

      redis-sentinel-slave-ss-0.redis-sentinel-slave-ss.public-service.svc.cluster.local:6379

      redis-sentinel-slave-ss-1.redis-sentinel-slave-ss.public-service.svc.cluster.local:6379

  Create the Services:

[root@k8s-master01 redis-sentinel]# kubectl create -f redis-sentinel-service-master.yaml -f redis-sentinel-service-slave.yaml
[root@k8s-master01 redis-sentinel]# kubectl get service -n public-service
NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
redis-sentinel-master-ss   ClusterIP   None         <none>        6379/TCP   16h
redis-sentinel-slave-ss    ClusterIP   None         <none>        6379/TCP   <invalid>
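
  A minimal sketch of what the master Headless Service (redis-sentinel-service-master.yaml) might look like; clusterIP: None is what makes it headless, and the selector label is an assumption that must match the Pod labels of the StatefulSet (the file in the repo may differ):

apiVersion: v1
kind: Service
metadata:
  name: redis-sentinel-master-ss
  namespace: public-service
spec:
  clusterIP: None                 # headless: gives each Pod a stable DNS record
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
  selector:
    app: redis-sentinel-master    # assumed label; must match the StatefulSet's Pod labels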

 

5. Creating the StatefulSets

[root@k8s-master01 redis-sentinel]# kubectl create -f redis-sentinel-rbac.yaml -f redis-sentinel-ss-master.yaml -f redis-sentinel-ss-slave.yaml
[root@k8s-master01 redis-sentinel]# kubectl get statefulset -n public-service
NAME                       DESIRED   CURRENT   AGE
redis-sentinel-master-ss   1         1         16h
redis-sentinel-slave-ss    2         2         16h
rmq-cluster                3         3         3d
[root@k8s-master01 redis-sentinel]# kubectl get pods -n public-service
NAME                         READY     STATUS    RESTARTS   AGE
redis-sentinel-master-ss-0   1/1       Running   0          16h
redis-sentinel-slave-ss-0    1/1       Running   0          16h
redis-sentinel-slave-ss-1    1/1       Running   0          16h

  At this point, a Redis master-slave setup has effectively been created on k8s.
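
  For reference, a minimal sketch of the master StatefulSet (redis-sentinel-ss-master.yaml). The volumeClaimTemplate name, storage class, access mode, and size match the PVs shown earlier; the image, command, labels, and config mount path are assumptions, and the actual manifest in the repo may differ (for example in probes and resources):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-sentinel-master-ss
  namespace: public-service
spec:
  serviceName: redis-sentinel-master-ss    # must be the Headless Service name
  replicas: 1
  selector:
    matchLabels:
      app: redis-sentinel-master           # assumed label, matching the Service selector
  template:
    metadata:
      labels:
        app: redis-sentinel-master
    spec:
      containers:
      - name: redis
        image: redis:4.0                    # assumed image/tag
        command: ["redis-server", "/etc/redis/redis-master.conf"]   # assumes a redis-master.conf key in the ConfigMap
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: redis-sentinel-master-storage
          mountPath: /data
        - name: config
          mountPath: /etc/redis
      volumes:
      - name: config
        configMap:
          name: redis-sentinel-config
  volumeClaimTemplates:
  - metadata:
      name: redis-sentinel-master-storage
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: redis-sentinel-storage-class
      resources:
        requests:
          storage: 4Gi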

 

6. Checking in the dashboard

  Status check

  Pod-to-pod communication tests

  Test: master connecting to the slaves

[root@k8s-master01 redis-sentinel]# kubectl exec -ti redis-sentinel-master-ss-0 -n public-service -- redis-cli -h redis-sentinel-slave-ss-0.redis-sentinel-slave-ss.public-service.svc.cluster.local ping
PONG
[root@k8s-master01 redis-sentinel]# kubectl exec -ti redis-sentinel-master-ss-0 -n public-service -- redis-cli -h redis-sentinel-slave-ss-1.redis-sentinel-slave-ss.public-service.svc.cluster.local ping
PONG

  Test: slaves connecting to the master

[root@k8s-master01 redis-sentinel]# kubectl exec -ti redis-sentinel-slave-ss-0 -n public-service -- redis-cli -h redis-sentinel-master-ss-0.redis-sentinel-master-ss.public-service.svc.cluster.local ping
PONG
[root@k8s-master01 redis-sentinel]# kubectl exec -ti redis-sentinel-slave-ss-1 -n public-service -- redis-cli -h redis-sentinel-master-ss-0.redis-sentinel-master-ss.public-service.svc.cluster.local ping
PONG

  Check the replication status

[root@k8s-master01 redis-sentinel]# kubectl exec -ti redis-sentinel-slave-ss-1 -n public-service -- redis-cli -h redis-sentinel-master-ss-0.redis-sentinel-master-ss.public-service.svc.cluster.local info replication
# Replication
role:master
connected_slaves:2
slave0:ip=172.168.5.94,port=6379,state=online,offset=80410,lag=1
slave1:ip=172.168.6.113,port=6379,state=online,offset=80410,lag=0
master_replid:ad4341815b25f12d4aeb390a19a8bd8452875879
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:80410
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:80410

  Replication test

# Write data on the master
[root@k8s-master01 redis-sentinel]# kubectl exec -ti redis-sentinel-slave-ss-1 -n public-service -- redis-cli -h redis-sentinel-master-ss-0.redis-sentinel-master-ss.public-service.svc.cluster.local set test test_data
OK
# Read the data from the master
[root@k8s-master01 redis-sentinel]# kubectl exec -ti redis-sentinel-slave-ss-1 -n public-service -- redis-cli -h redis-sentinel-master-ss-0.redis-sentinel-master-ss.public-service.svc.cluster.local get test
"test_data"
# Read the data from the slave
[root@k8s-master01 redis-sentinel]# kubectl exec -ti redis-sentinel-slave-ss-1 -n public-service -- redis-cli get test
"test_data"

  The slave nodes cannot accept writes:

[root@k8s-master01 redis-sentinel]# kubectl exec -ti redis-sentinel-slave-ss-1 -n public-service -- redis-cli set k v
(error) READONLY You can't write against a read only replica.

  Check the stored data on NFS:

[root@nfs redis-sentinel]# tree .
.
├── 0
│   └── dump.rdb
├── 1
│   └── dump.rdb
└── 2
    └── dump.rdb

3 directories, 3 files

  Note: Personally, I think running Redis Sentinel on k8s is pointless. In testing, when the master node goes down, Sentinel promotes a new node to master, but once the original master comes back it can no longer rejoin the cluster. On physical machines, Sentinel's probing and configuration rewriting are based on IPs, and replication is also set up by IP. In containers, even though the master/slave relationship is established through the StatefulSet's Headless Service, once it is established the master, slaves, and sentinels all record the resolved IPs, and a pod's IP changes on every restart. As a result, Sentinel cannot recognize a master that went down and was restarted, so it can never rejoin the cluster. This could be worked around by pinning the pod IP, using a NodePort, or querying Sentinel for the current master IP and rewriting the configuration files, but I don't think it is worth it. Sentinel provides high availability for a Redis master/slave setup by monitoring the master's state and performing failover. In k8s, however, whether you use a Deployment or a StatefulSet, pods are kept at the desired replica count, and with k8s liveness probes a pod (or the service inside it) is restarted automatically when its port or service becomes unavailable. So once Redis master/slave replication is set up in k8s, it is effectively already highly available, and Sentinel's failover is not necessarily faster than k8s recreating the pod. For these reasons I think running Sentinel on k8s is unnecessary, and the Sentinel setup steps below can be skipped.

  PS: Redis Cluster: https://github.com/dotbalo/k8s/tree/master/redis/k8s-redis-cluster

7. Creating the Sentinels

[root@k8s-master01 redis-sentinel]# kubectl create -f redis-sentinel-ss-sentinel.yaml -f redis-sentinel-service-sentinel.yaml
[root@k8s-master01 redis-sentinel]# kubectl get service -n public-servicve
No resources found.
[root@k8s-master01 redis-sentinel]# kubectl get service -n public-service
NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                          AGE
redis-sentinel-master-ss     ClusterIP   None            <none>        6379/TCP                         17h
redis-sentinel-sentinel-ss   ClusterIP   None            <none>        26379/TCP                        36m
redis-sentinel-slave-ss      ClusterIP   None            <none>        6379/TCP                         1h
rmq-cluster                  ClusterIP   None            <none>        5672/TCP                         3d
rmq-cluster-balancer         NodePort    10.107.221.85   <none>        15672:30051/TCP,5672:31892/TCP   3d
[root@k8s-master01 redis-sentinel]# kubectl get statefulset -n public-service
NAME                         DESIRED   CURRENT   AGE
redis-sentinel-master-ss     1         1         17h
redis-sentinel-sentinel-ss   3         3         8m
redis-sentinel-slave-ss      2         2         17h
rmq-cluster                  3         3         3d
[root@k8s-master01 redis-sentinel]# kubectl get pods -n public-service | grep sentinel
redis-sentinel-master-ss-0     1/1       Running   0          17h
redis-sentinel-sentinel-ss-0   1/1       Running   0          2m
redis-sentinel-sentinel-ss-1   1/1       Running   0          2m
redis-sentinel-sentinel-ss-2   1/1       Running   0          2m
redis-sentinel-slave-ss-0      1/1       Running   0          17h
redis-sentinel-slave-ss-1      1/1       Running   0          17h
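
  For reference, a minimal sketch of the Sentinel configuration a setup like this would typically mount from the ConfigMap. The monitored master name mymaster matches the Sentinel output below; the address, quorum, and timeouts are assumptions, and the actual file in the repo may differ:

# sentinel.conf (sketch)
port 26379
dir /data
# monitor the master; quorum of 2 assumed
# NOTE: older Redis versions only accept an IP here; using a hostname needs a newer Redis with hostname resolution enabled
sentinel monitor mymaster redis-sentinel-master-ss-0.redis-sentinel-master-ss.public-service.svc.cluster.local 6379 2
sentinel down-after-milliseconds mymaster 30000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1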

 

8. Checking the logs

  Check the Sentinel status:

[root@k8s-master01 ~]# kubectl exec -ti redis-sentinel-sentinel-ss-0 -n public-service -- redis-cli -h 127.0.0.1 -p 26379 info Sentinel
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=172.168.6.111:6379,slaves=2,sentinels=3
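
  To print only the address Sentinel currently considers the master, the standard SENTINEL command can also be used, for example:

[root@k8s-master01 ~]# kubectl exec -ti redis-sentinel-sentinel-ss-0 -n public-service -- redis-cli -p 26379 sentinel get-master-addr-by-name mymaster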

 

9. Failover test

# Check the current data
[root@k8s-master01 ~]# kubectl exec -ti redis-sentinel-master-ss-0 -n public-service -- redis-cli -h 127.0.0.1 -p 6379 get test
"test_data"

  Shut down the master node

  Check the pod status:

[root@k8s-master01 ~]# kubectl get pods -n public-service
NAME                           READY     STATUS    RESTARTS   AGE
redis-sentinel-sentinel-ss-0   1/1       Running   0          22m
redis-sentinel-sentinel-ss-1   1/1       Running   0          22m
redis-sentinel-sentinel-ss-2   1/1       Running   0          22m
redis-sentinel-slave-ss-0      1/1       Running   0          17h
redis-sentinel-slave-ss-1      1/1       Running   0          17h

  Check the Sentinel status:

[root@k8s-master01 redis]# kubectl exec -ti redis-sentinel-sentinel-ss-2 -n public-service -- redis-cli -h 127.0.0.1 -p 26379 info Sentinel
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=172.168.6.116:6379,slaves=2,sentinels=3
[root@k8s-master01 redis]# kubectl exec -ti redis-sentinel-slave-ss-0 -n public-service -- redis-cli -h 127.0.0.1 -p 6379 info replication
# Replication
role:slave
master_host:172.168.6.116
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:82961
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:4097ccd725a7ffc6f3767f7c726fc883baf3d7ef
master_replid2:603280e5266e0a6b0f299d2b33384c1fd8c3ee64
master_repl_offset:82961
second_repl_offset:68647
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:82961
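
  Sentinel has promoted 172.168.6.116 to master. To map that IP back to a pod, something like the following could be used (the IP is the one from the output above):

[root@k8s-master01 redis]# kubectl get pods -n public-service -o wide | grep 172.168.6.116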
