What is a distributed lock?
1) A distributed lock is a lock implementation that controls access to a shared resource across a distributed system, or across different systems.
2) When different systems, or different hosts within the same system, share a resource, mutual exclusion is usually used to keep them from interfering with each other.
3) It must not deadlock: even if a server fails while holding the lock and never releases it, other servers must still be able to acquire the lock later.
What is the purpose of a distributed lock?
It guarantees that, in a cluster of distributed application instances, a given resource can only be operated on by one thread on one machine at any moment.
What problems can a Redis distributed lock run into?
Problem 1: the service crashes between SETNX and setting the expiry, so the lock is never released.
Solutions: 1) make SETNX and the expiry one atomic step, for example with a Lua script; 2) since Redis 2.6.12, a single SET command can do both via the NX and EX options.
Problem 2: server1's task runs longer than the expiry it set, so the lock times out, server2 acquires it, and server1's unlock then deletes server2's lock.
Solution: before removing the lock, each server must check that the lock's current value is its own value and only then delete it; this check-and-delete should itself be a Lua script so the two steps are atomic.
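The check-and-delete rule from Problem 2 can be sketched in plain Java. This is a minimal illustration, not a Redis client: a map stands in for Redis, the class and method names (`SafeUnlock`, `tryLock`, `unlock`) are made up for this example, and a `synchronized` block plays the role that the Lua script plays in real Redis (making compare and delete atomic).

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of SETNX-style locking with safe unlock, using an in-memory map
// to stand in for Redis. In real Redis the compare-and-delete must run
// inside a Lua script; here a synchronized block provides the atomicity.
public class SafeUnlock {
    static final Map<String, String> store = new ConcurrentHashMap<>();

    // SETNX-style acquire: returns a unique token on success, null if the lock is held.
    static String tryLock(String key) {
        String token = UUID.randomUUID().toString();
        return store.putIfAbsent(key, token) == null ? token : null;
    }

    // Delete the lock only if it still holds our token.
    static boolean unlock(String key, String token) {
        synchronized (store) {
            if (token.equals(store.get(key))) {
                store.remove(key);
                return true;
            }
            return false;  // lock was taken over by someone else: do not delete it
        }
    }

    public static void main(String[] args) {
        String t1 = tryLock("lock:order");
        System.out.println(tryLock("lock:order") == null);      // second acquire fails
        System.out.println(unlock("lock:order", "stale-token")); // a stale token cannot unlock
        System.out.println(unlock("lock:order", t1));            // the owner can
    }
}
```

Without the value check, the `unlock` call with a stale token would silently delete a lock belonging to another server, which is exactly the bug described above.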
Create a redis-replication directory under the redis directory
mkdir redis-replication
Create directories 6380 and 6381 under redis-replication
mkdir 6380
mkdir 6381
Copy redis-server from the src directory into redis-replication
[root@bogon redis-replication]# cp ../src/redis-server redis-server
Copy redis.conf into the 6380 and 6381 directories
[root@bogon redis-replication]# cp ../redis.conf 6380
[root@bogon redis-replication]# cp ../redis.conf 6381
Edit the redis.conf in 6380 and 6381
daemonize yes           # run as a daemon
port 6380
slaveof 127.0.0.1 6379  # in this demo all three Redis instances run on the same machine
daemonize yes           # run as a daemon
port 6381
slaveof 127.0.0.1 6379  # in this demo all three Redis instances run on the same machine
Start the three node processes
[root@bogon redis-replication]# ./redis-server ../redis.conf
[root@bogon redis-replication]# ./redis-server 6380/redis.conf
[root@bogon redis-replication]# ./redis-server 6381/redis.conf
Run redis-cli against the 6379 node and check `info replication`, then `set` a key on the master and `get` it on each replica; if both replicas return the value, the replication setup is complete
[root@bogon /]# redis-cli
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6380,state=online,offset=1639,lag=0
slave1:ip=127.0.0.1,port=6381,state=online,offset=1639,lag=0
master_replid:5f13eaede714511b727b6058ed7299c317c66d8b
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1639
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:1639
127.0.0.1:6379>
**Full sync** When the master-replica link is first established, the replica sends a SYNC command to the master. The master starts a background save to disk and, in parallel, buffers every write command it receives; it then transfers the snapshot and the buffered commands to the replica.
**Incremental sync** After that, the master streams each new write command it receives to the replica in order, keeping the replica's data in sync.
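The full-vs-incremental decision can be modeled with a toy backlog. This is an assumed simplification for illustration only (the real protocol uses PSYNC with replication IDs and a circular byte buffer, and the names `ReplBacklog`, `sync`, `write` are invented here): the master keeps a bounded backlog of recent writes with a running offset; a reconnecting replica asks for everything after its offset, and gets an incremental stream if the backlog still covers it, or must fall back to a full resync if not.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a replication backlog: each write advances the stream; a replica
// that reconnects with an offset still inside the backlog gets the missing
// commands, otherwise it needs a full resync (returned as null here).
public class ReplBacklog {
    final List<String> backlog = new ArrayList<>();
    long firstOffset = 0;  // stream offset of backlog.get(0)

    void write(String cmd, int maxBacklog) {
        backlog.add(cmd);
        if (backlog.size() > maxBacklog) {  // oldest entry falls out of the backlog
            backlog.remove(0);
            firstOffset++;
        }
    }

    // Commands after replicaOffset, or null when the replica is too far behind.
    List<String> sync(long replicaOffset) {
        if (replicaOffset < firstOffset) return null;  // backlog no longer covers it
        return new ArrayList<>(
            backlog.subList((int) (replicaOffset - firstOffset), backlog.size()));
    }
}
```

This also makes the latency trade-off below concrete: a replica is always serving data that is some number of backlog entries behind the master.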
The biggest drawback of Redis master-replica replication is latency: the master handles writes and the replicas hold copies, and that propagation takes time. When the system is busy the lag gets worse, and adding more replicas makes it worse still.
Copy sentinel.conf from the redis-5.0.5-1 directory into redis-replication
[root@bogon redis-5.0.5-1]# cp sentinel.conf redis-replication/
Rename sentinel.conf to sentinel_1.conf and edit the configuration
sentinel monitor mymaster 127.0.0.1 6379 1       # master to monitor; the trailing 1 is the quorum: one sentinel flagging the node as down is enough
sentinel down-after-milliseconds mymaster 10000  # milliseconds without a valid reply before a sentinel considers the server down
sentinel failover-timeout mymaster 60000         # failover timeout in milliseconds
sentinel parallel-syncs mymaster 1               # max replicas syncing with the new master at once during failover; the smaller this is, the longer the failover takes
Save, then start redis-sentinel
[root@bogon redis-5.0.5-1]# ./src/redis-sentinel redis-replication/sentinel_1.conf
19559:X 04 Nov 2019 18:45:56.540 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
19559:X 04 Nov 2019 18:45:56.540 # Redis version=5.0.5, bits=64, commit=00000000, modified=0, pid=19559, just started
19559:X 04 Nov 2019 18:45:56.540 # Configuration loaded
19559:X 04 Nov 2019 18:45:56.541 * Increased maximum number of open files to 10032 (it was originally set to 1024).
(Redis ASCII-art startup banner: Redis 5.0.5 (00000000/0) 64 bit, Running in sentinel mode, Port: 26379, PID: 19559)
19559:X 04 Nov 2019 18:45:56.542 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
19559:X 04 Nov 2019 18:45:56.543 # Sentinel ID is a3bbbce34899d8534d98f7328ba9d61857a94168
19559:X 04 Nov 2019 18:45:56.543 # +monitor master mymaster 127.0.0.1 6379 quorum 1
19559:X 04 Nov 2019 18:45:56.545 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
19559:X 04 Nov 2019 18:45:56.545 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
Then shut down the master node 6379. In the log below you can see 6381 being elected as the new master; when 6379 is started again, it rejoins as a replica of 6381.
./redis-cli -p 6379 shutdown
19559:X 04 Nov 2019 18:47:24.078 # +sdown master mymaster 127.0.0.1 6379
19559:X 04 Nov 2019 18:47:24.078 # +odown master mymaster 127.0.0.1 6379 #quorum 1/1
19559:X 04 Nov 2019 18:47:24.078 # +new-epoch 1
19559:X 04 Nov 2019 18:47:24.078 # +try-failover master mymaster 127.0.0.1 6379
19559:X 04 Nov 2019 18:47:24.079 # +vote-for-leader a3bbbce34899d8534d98f7328ba9d61857a94168 1
19559:X 04 Nov 2019 18:47:24.079 # +elected-leader master mymaster 127.0.0.1 6379
19559:X 04 Nov 2019 18:47:24.079 # +failover-state-select-slave master mymaster 127.0.0.1 6379
19559:X 04 Nov 2019 18:47:24.133 # +selected-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
19559:X 04 Nov 2019 18:47:24.133 * +failover-state-send-slaveof-noone slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
19559:X 04 Nov 2019 18:47:24.191 * +failover-state-wait-promotion slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
19559:X 04 Nov 2019 18:47:25.120 # +promoted-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
19559:X 04 Nov 2019 18:47:25.120 # +failover-state-reconf-slaves master mymaster 127.0.0.1 6379
19559:X 04 Nov 2019 18:47:25.172 * +slave-reconf-sent slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
19559:X 04 Nov 2019 18:47:26.128 * +slave-reconf-inprog slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
19559:X 04 Nov 2019 18:47:26.128 * +slave-reconf-done slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
19559:X 04 Nov 2019 18:47:26.222 # +failover-end master mymaster 127.0.0.1 6379
19559:X 04 Nov 2019 18:47:26.222 # +switch-master mymaster 127.0.0.1 6379 127.0.0.1 6381
19559:X 04 Nov 2019 18:47:26.222 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6381
19559:X 04 Nov 2019 18:47:26.222 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
How does Sentinel work?
Subjective down (sdown): a single Sentinel instance, on its own, judges a server to be down. A server is flagged subjectively down when it fails to give a valid reply to the Sentinel's PING within down-after-milliseconds.
Objective down (odown): multiple Sentinels vote and agree that a server is down, i.e. the quorum is reached.
The objective-down state applies only to master nodes; replicas are only ever flagged subjectively down.
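The sdown-to-odown transition above is just a vote count against the quorum. A minimal sketch (the class `DownState` and its methods are invented for this illustration; real Sentinel additionally runs a leader election before failing over):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the sdown/odown distinction: each Sentinel independently flags the
// master as subjectively down; the master becomes objectively down once at
// least `quorum` Sentinels agree.
public class DownState {
    final Set<String> sdownVotes = new HashSet<>();  // sentinel IDs that flagged sdown
    final int quorum;

    DownState(int quorum) { this.quorum = quorum; }

    void markSubjectivelyDown(String sentinelId) { sdownVotes.add(sentinelId); }

    boolean isObjectivelyDown() { return sdownVotes.size() >= quorum; }
}
```

With the quorum of 1 configured in sentinel_1.conf above, a single Sentinel's sdown immediately becomes odown, which is why the failover log shows `#quorum 1/1`.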
Just add the following configuration to the yml file
spring:
  redis:
    sentinel:
      master: mymaster
      nodes: 192.168.47.129:26379  # separate multiple sentinels with commas
After the application starts, set a value
redisTemplate.opsForValue().set(key, value);
return redisTemplate.opsForValue().get(key);
You may run into this problem
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: no further information: /127.0.0.1:6379
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:327)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:670)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
The cause is this setting in sentinel.conf: Sentinel announces the master's address to clients, so announcing 127.0.0.1 makes a remote client try to connect to its own loopback interface.
sentinel monitor mymaster 127.0.0.1 6379 1
Change it to:
sentinel monitor mymaster 192.168.47.129 6379 1
Problem solved.
Create a redis_cluster folder under /usr/local, create six node directories under it, and copy redis.conf into each
cd /usr/local/
mkdir redis_cluster
cd redis_cluster
mkdir 7000 7001 7002 7003 7004 7005
cp /usr/local/redis/redis.conf /usr/local/redis_cluster/7000
# repeat the copy for 7001 through 7005
Edit redis.conf in each of the six node directories
daemonize yes                        # run Redis in the background
port 7000                            # 7000, 7001, 7002, 7003, 7004, 7005 — one per node
cluster-enabled yes                  # enable cluster mode (remove the leading #)
cluster-config-file nodes-xxx.conf   # cluster state file, generated automatically on first start
cluster-node-timeout 5000            # node timeout; 5 seconds is enough here
appendonly yes                       # enable the AOF log if you need it; every write is then logged
bind 127.0.0.1 192.168.80.129        # the second address is your own LAN IP (check it with `ip addr` on CentOS 7, or try `ifconfig` on other systems)
Start all the nodes
cd /usr/local/redis_cluster/7000
../redis-server ./redis.conf
cd /usr/local/redis_cluster/7001
../redis-server ./redis.conf
cd /usr/local/redis_cluster/7002
../redis-server ./redis.conf
cd /usr/local/redis_cluster/7003
../redis-server ./redis.conf
cd /usr/local/redis_cluster/7004
../redis-server ./redis.conf
cd /usr/local/redis_cluster/7005
../redis-server ./redis.conf
Note: versions before Redis 5.x need Ruby installed. For Redis 5.x, see point 8 below.
yum -y install ruby ruby-devel rubygems rpm-build
gem install redis
gem install redis may fail with the following error
ERROR:  Error installing redis:
        redis requires Ruby version >= 2.3.0.
The CentOS 7 repositories only provide Ruby up to 2.0.0, but the redis gem (I am using Redis 5.0.5) requires at least 2.3.0.
The fix
# download RVM
curl -L get.rvm.io | bash -s stable
cd /usr/local/rvm/archives
# unpack
tar xvzf rvm-1.29.9.tgz
cd rvm-1.29.9
./install
source /usr/local/rvm/scripts/rvm
# list all known versions
rvm list known
# install 2.3.3
rvm install 2.3.3
rvm use 2.3.3
ruby --version
gem install redis
For versions before Redis 5.x, go into Redis's src directory and run
./redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
For Redis 5.x, go into Redis's src directory and run
./redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 --cluster-replicas 1
The following output indicates the cluster was created successfully
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 127.0.0.1:7004 to 127.0.0.1:7000
Adding replica 127.0.0.1:7005 to 127.0.0.1:7001
Adding replica 127.0.0.1:7003 to 127.0.0.1:7002
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: ebb3d01bd2578d5b38400f5330ba7c9a5bdaefba 127.0.0.1:7000
   slots:[0-5460] (5461 slots) master
M: 842bbbcbfd9c7b98067a8cf8cff09cb54817c8cc 127.0.0.1:7001
   slots:[5461-10922] (5462 slots) master
M: 815fe658394702b8e38498918510aa9ea75ff73d 127.0.0.1:7002
   slots:[10923-16383] (5461 slots) master
S: 55784ce9df135a51efeba908643018b13e33d11a 127.0.0.1:7003
   replicates ebb3d01bd2578d5b38400f5330ba7c9a5bdaefba
S: b630b0dae92977f1447623ad34981a1086bfdbfa 127.0.0.1:7004
   replicates 842bbbcbfd9c7b98067a8cf8cff09cb54817c8cc
S: 7a383e5d8a3b1c22c7b47c6ce9f92f820f44f9d5 127.0.0.1:7005
   replicates 815fe658394702b8e38498918510aa9ea75ff73d
Can I set the above configuration? (type 'yes' to accept): yes
Answer `yes` at the `Can I set the above configuration?` prompt to accept.
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: ebb3d01bd2578d5b38400f5330ba7c9a5bdaefba 127.0.0.1:7000
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 7a383e5d8a3b1c22c7b47c6ce9f92f820f44f9d5 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 815fe658394702b8e38498918510aa9ea75ff73d
M: 815fe658394702b8e38498918510aa9ea75ff73d 127.0.0.1:7002
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: b630b0dae92977f1447623ad34981a1086bfdbfa 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 842bbbcbfd9c7b98067a8cf8cff09cb54817c8cc
M: 842bbbcbfd9c7b98067a8cf8cff09cb54817c8cc 127.0.0.1:7001
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 55784ce9df135a51efeba908643018b13e33d11a 127.0.0.1:7003
   slots: (0 slots) slave
   replicates ebb3d01bd2578d5b38400f5330ba7c9a5bdaefba
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Test with `redis-cli -c -p 7002`; the -c flag connects in cluster mode. Output like the following means everything works
[root@bogon src]# redis-cli -c -p 7002
127.0.0.1:7002> set w x
-> Redirected to slot [3696] located at 127.0.0.1:7000
OK
127.0.0.1:7000> get w
"x"
[root@bogon src]# redis-cli -c -p 7003
127.0.0.1:7003> get w
-> Redirected to slot [3696] located at 127.0.0.1:7000
"x"
127.0.0.1:7000>
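The `Redirected to slot [3696]` lines come from the cluster's key-to-slot mapping: each key is hashed with CRC16 (the CCITT/XMODEM variant) and taken modulo 16384 to pick one of the slots distributed across the masters. A small sketch of that computation (the class name `HashSlot` is made up; this also skips the `{hash tag}` rule, where only the tagged part of the key is hashed):

```java
// Computes the Redis Cluster hash slot for a key: CRC16(key) mod 16384.
public class HashSlot {
    // CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0x0000.
    static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;  // keep it a 16-bit value
            }
        }
        return crc;
    }

    static int slot(String key) {
        return crc16(key.getBytes()) % 16384;
    }

    public static void main(String[] args) {
        System.out.println(slot("w"));  // the key set in the session above; prints 3696
    }
}
```

This matches the transcript: the key `w` lands in slot 3696, which belongs to the master serving slots 0-5460 (127.0.0.1:7000), so both 7002 and 7003 redirect there.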