First, a quick review of how ZooKeeper implements a distributed lock:
When a client wants to lock a given method, it creates a unique ephemeral sequential node under the znode designated for that method. Checking whether the lock was acquired is simple: the client whose node has the smallest sequence number holds the lock. To release the lock, the client just deletes its ephemeral node. Because the nodes are ephemeral, a client crash automatically removes its node, which avoids the deadlock that would otherwise result from a lock that can never be released.
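The "smallest sequence number wins" check can be sketched in plain Java. This is a self-contained illustration, not Curator's actual code; the 10-digit sequence suffix follows ZooKeeper's sequential-node naming convention, and the node names are made up for the example:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class LockCheck {
    // Given the children of the lock znode and our own node name,
    // we hold the lock iff our node has the smallest sequence number.
    static boolean holdsLock(List<String> children, String ourNode) {
        List<String> sorted = new ArrayList<>(children);
        sorted.sort(Comparator.comparingInt(LockCheck::sequenceOf));
        return sorted.get(0).equals(ourNode);
    }

    // Sequential znodes end with a zero-padded 10-digit counter, e.g. "lock-0000000003"
    static int sequenceOf(String node) {
        return Integer.parseInt(node.substring(node.length() - 10));
    }

    public static void main(String[] args) {
        List<String> children = Arrays.asList(
                "lock-0000000003", "lock-0000000001", "lock-0000000002");
        System.out.println(holdsLock(children, "lock-0000000001")); // true: smallest sequence
        System.out.println(holdsLock(children, "lock-0000000002")); // false: must wait
    }
}
```

In practice a waiting client does not poll the full list; it watches only the node immediately before its own, so that a release wakes exactly one waiter.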
Let's verify in code that ephemeral sequential nodes really are created.
```java
package com.jv.zookeeper.curator;

import java.util.concurrent.TimeUnit;

import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class TestInterProcessMutex {
    public static void main(String[] args) throws Exception {
        RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
        CuratorFramework client = CuratorFrameworkFactory.newClient("192.168.245.101:2181", retryPolicy);
        client.start();
        InterProcessMutex lock = new InterProcessMutex(client, "/mylock");
        // lock.acquire(1000, TimeUnit.MILLISECONDS): try to acquire the lock, timing out after 1000 ms
        if (lock.acquire(1000, TimeUnit.MILLISECONDS)) {
            try {
                System.out.println("Lock acquired, executing");
                // Simulate a long-running task so we can inspect the ephemeral znodes under /mylock
                Thread.sleep(10000000);
            } finally {
                lock.release();
                System.out.println("Lock released");
            }
        }
    }
}
```
```java
package com.jv.zookeeper.curator;

import java.util.concurrent.TimeUnit;

import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class TestInterProcessMutex2 {
    public static void main(String[] args) throws Exception {
        RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
        CuratorFramework client = CuratorFrameworkFactory.newClient("192.168.245.101:2181", retryPolicy);
        client.start();
        InterProcessMutex lock = new InterProcessMutex(client, "/mylock");
        // Use a long enough timeout, then inspect the znodes in ZK to verify that the
        // distributed lock really is implemented with ephemeral sequential znodes
        if (lock.acquire(1000000, TimeUnit.MILLISECONDS)) {
            try {
                System.out.println("Lock acquired, executing");
                Thread.sleep(10000000);
            } finally {
                lock.release();
                System.out.println("Lock released");
            }
        }
    }
}
```
To run the code, add the following dependencies to pom.xml:
```xml
<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.4.6</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.curator/curator-recipes -->
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>4.0.0</version>
</dependency>
```
Run TestInterProcessMutex first, then TestInterProcessMutex2.
Log in to the ZooKeeper host with Xshell or SecureCRT, change to the bin directory under the ZooKeeper installation, and run:

```shell
./zkCli.sh
ls /mylock
```
You can see that two ephemeral sequential nodes were indeed created, and the client whose node has the smaller sequence number acquired the lock.
Curator's wrapper really does make this convenient.
One more thing: Curator also makes leader election easy.
```java
LeaderSelectorListener listener = new LeaderSelectorListenerAdapter() {
    public void takeLeadership(CuratorFramework client) throws Exception {
        // This method is called when you become leader; do all of your leader work here.
        // Do not return from this method until you want to relinquish leadership.
    }
};

LeaderSelector selector = new LeaderSelector(client, path, listener);
selector.autoRequeue(); // not required, but this is behavior that you will probably expect
selector.start();
```
Under the hood it simply wraps InterProcessMutex: once a LeaderSelector starts, it tries to acquire the lock, and whichever instance acquires it has its listener.takeLeadership method invoked.
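Conceptually, the loop inside LeaderSelector looks roughly like this. The sketch below is not Curator's actual code: it uses a plain ReentrantLock as a stand-in for InterProcessMutex so that it runs without a ZooKeeper server, and the Listener interface is a hypothetical simplification of LeaderSelectorListener:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LeaderLoopSketch {
    interface Listener {
        void takeLeadership() throws Exception;
    }

    // The shared lock models InterProcessMutex; whoever holds it is the leader
    static final ReentrantLock mutex = new ReentrantLock();

    static void runOnce(Listener listener) {
        mutex.lock();                  // block until we win the election
        try {
            listener.takeLeadership(); // leadership lasts until this returns
        } catch (Exception ignored) {
            // Curator notifies the listener of errors; here we just fall through to release
        } finally {
            mutex.unlock();            // returning relinquishes leadership
        }
    }

    public static void main(String[] args) {
        runOnce(() -> System.out.println("I am the leader"));
    }
}
```

With autoRequeue(), Curator re-enters this loop after takeLeadership returns, so the instance lines up for leadership again instead of dropping out.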
This style of election is still quite simplistic; it takes no account of resources or data. ZooKeeper's own leader election has to compare transaction IDs (zxids): only the server holding the largest zxid can become leader, and the followers then sync from the leader any transactions newer than their own, bringing the data into consistency.
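The ordering rule ZooKeeper applies when comparing votes can be sketched as follows. This is a simplified illustration of the idea behind FastLeaderElection's vote comparison (epoch first, then zxid, then server id as the tie-breaker), not the actual ZooKeeper source:

```java
public class VoteCompare {
    // Returns true if the proposed vote (newEpoch, newZxid, newId) should
    // supersede the current vote (curEpoch, curZxid, curId):
    // higher epoch wins; on a tie, higher zxid wins; on another tie, higher server id wins.
    static boolean supersedes(long newEpoch, long newZxid, long newId,
                              long curEpoch, long curZxid, long curId) {
        if (newEpoch != curEpoch) {
            return newEpoch > curEpoch;
        }
        if (newZxid != curZxid) {
            return newZxid > curZxid;
        }
        return newId > curId;
    }

    public static void main(String[] args) {
        // The server that has seen more transactions (larger zxid) wins the election,
        // which is exactly why the new leader holds the most complete data.
        System.out.println(supersedes(1, 200, 1, 1, 100, 2)); // true: larger zxid
        System.out.println(supersedes(1, 100, 1, 1, 100, 2)); // false: tie broken by server id
    }
}
```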
In real applications, weigh the situation of your distributed components before deciding whether ZooKeeper's simple election strategy is enough.