I recently ran into some high-concurrency scenarios at work that required locking to keep the business logic correct, with the constraint that the locking must not hurt performance too much. The initial idea was to lock on keys derived from the data, such as its timestamp or id, so that different kinds of data could still be processed concurrently. The locks provided by Java's own API are too coarse-grained to satisfy both requirements at once, so I wrote a few simple extensions myself...
1. Segment lock
Borrowing the segmentation idea from ConcurrentHashMap: a fixed number of locks is created up front, and at use time the key is mapped to its corresponding lock. This is the simplest and fastest of the implementations here, and it is the strategy that was ultimately adopted. The code is below:
import java.util.HashMap;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Segment lock: a fixed number of underlying locks is created up front, and the lock
 * for a given key object is selected by its hash value.
 * Note: if the hash value of the key object changes while it is locked, the lock may
 * fail to be released!
 */
public class SegmentLock<T> {
    private Integer segments = 16; // default number of segments
    private final HashMap<Integer, ReentrantLock> lockMap = new HashMap<>();

    public SegmentLock() {
        init(null, false);
    }

    public SegmentLock(Integer counts, boolean fair) {
        init(counts, fair);
    }

    private void init(Integer counts, boolean fair) {
        if (counts != null) {
            segments = counts;
        }
        for (int i = 0; i < segments; i++) {
            lockMap.put(i, new ReentrantLock(fair));
        }
    }

    public void lock(T key) {
        ReentrantLock lock = lockMap.get((key.hashCode() >>> 1) % segments);
        lock.lock();
    }

    public void unlock(T key) {
        ReentrantLock lock = lockMap.get((key.hashCode() >>> 1) % segments);
        lock.unlock();
    }
}
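A minimal usage sketch of the class above (the OrderService class, the order-id key and the update logic are hypothetical, only to show the lock/unlock pairing around a keyed critical section):

public class OrderService {
    // 32 segments, non-fair locks; orders whose ids hash to the same segment share a lock
    private final SegmentLock<String> segmentLock = new SegmentLock<>(32, false);

    public void updateOrder(String orderId) {
        segmentLock.lock(orderId);
        try {
            // ... business logic for this order id (placeholder) ...
        } finally {
            segmentLock.unlock(orderId); // always release in a finally block
        }
    }
}

Keys that hash into the same segment contend for the same lock; that is the price paid for creating only a small, fixed number of lock objects.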
2. Hash lock
The second lock strategy grew out of the segment lock above, aiming at truly fine-grained locking: every object with a distinct hash value gets its own independent lock. In my tests, when the locked code ran very quickly, it was roughly 30% slower than the segment lock; with long-running operations inside the lock I would expect it to compare more favorably. The code is below:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Hash lock: every distinct key gets its own ReentrantLock. A segment lock guards the
 * creation and removal of the per-key lock entries.
 */
public class HashLock<T> {
    private boolean isFair = false;
    private final SegmentLock<T> segmentLock = new SegmentLock<>(); // guards lockMap maintenance
    private final ConcurrentHashMap<T, LockInfo> lockMap = new ConcurrentHashMap<>();

    public HashLock() {
    }

    public HashLock(boolean fair) {
        isFair = fair;
    }

    public void lock(T key) {
        LockInfo lockInfo;
        segmentLock.lock(key);
        try {
            lockInfo = lockMap.get(key);
            if (lockInfo == null) {
                lockInfo = new LockInfo(isFair);
                lockMap.put(key, lockInfo);
            } else {
                lockInfo.count.incrementAndGet();
            }
        } finally {
            segmentLock.unlock(key);
        }
        lockInfo.lock.lock();
    }

    public void unlock(T key) {
        LockInfo lockInfo = lockMap.get(key);
        if (lockInfo.count.get() == 1) {
            segmentLock.lock(key);
            try {
                if (lockInfo.count.get() == 1) {
                    lockMap.remove(key); // last holder: drop the entry from the map
                }
            } finally {
                segmentLock.unlock(key);
            }
        }
        lockInfo.count.decrementAndGet();
        lockInfo.unlock();
    }

    private static class LockInfo {
        public ReentrantLock lock;
        public AtomicInteger count = new AtomicInteger(1); // number of threads holding or waiting

        private LockInfo(boolean fair) {
            this.lock = new ReentrantLock(fair);
        }

        public void lock() {
            this.lock.lock();
        }

        public void unlock() {
            this.lock.unlock();
        }
    }
}
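For reference, a minimal usage sketch of HashLock, again with a hypothetical service and key type; unlike the segment lock, two different keys never block each other:

public class AccountService {
    private final HashLock<Long> hashLock = new HashLock<>();

    public void credit(Long accountId, long amount) {
        hashLock.lock(accountId); // only callers using this exact accountId contend
        try {
            // ... update the balance for accountId (placeholder) ...
        } finally {
            hashLock.unlock(accountId);
        }
    }
}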
3. Weak-reference lock
The hash lock still relies on a segment lock to synchronize the creation and destruction of per-key locks, which always felt like a blemish, so I wrote a third lock in search of better performance and even finer granularity. The idea is to create locks behind Java weak references and hand lock destruction over to the JVM's garbage collector, avoiding the extra bookkeeping.
Slightly disappointingly, because ConcurrentHashMap is used as the lock container, it never fully escapes segment locking. In my tests it was roughly 10% faster than HashLock. The lock code:
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Weak-reference lock: provides an independent lock for every distinct hash key.
 * Locks are held via weak references, so unused locks are reclaimed by the GC.
 */
public class WeakHashLock<T> {
    private ConcurrentHashMap<T, WeakLockRef<T, ReentrantLock>> lockMap = new ConcurrentHashMap<>();
    private ReferenceQueue<ReentrantLock> queue = new ReferenceQueue<>();

    public ReentrantLock get(T key) {
        if (lockMap.size() > 1000) {
            clearEmptyRef();
        }
        WeakReference<ReentrantLock> lockRef = lockMap.get(key);
        ReentrantLock lock = (lockRef == null ? null : lockRef.get());
        while (lock == null) {
            lockMap.putIfAbsent(key, new WeakLockRef<>(new ReentrantLock(), queue, key));
            lockRef = lockMap.get(key);
            lock = (lockRef == null ? null : lockRef.get());
            if (lock != null) {
                return lock;
            }
            clearEmptyRef();
        }
        return lock;
    }

    @SuppressWarnings("unchecked")
    private void clearEmptyRef() {
        // Remove map entries whose locks have already been garbage collected
        Reference<? extends ReentrantLock> ref;
        while ((ref = queue.poll()) != null) {
            WeakLockRef<T, ? extends ReentrantLock> weakLockRef = (WeakLockRef<T, ? extends ReentrantLock>) ref;
            lockMap.remove(weakLockRef.key);
        }
    }

    private static final class WeakLockRef<T, K> extends WeakReference<K> {
        final T key; // remembered so the map entry can be removed after collection

        private WeakLockRef(K referent, ReferenceQueue<? super K> q, T key) {
            super(referent, q);
            this.key = key;
        }
    }
}
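Because WeakHashLock hands back the ReentrantLock itself instead of wrapping lock/unlock, the caller must keep a strong reference to the lock for the whole critical section so it cannot be collected mid-use. A minimal sketch, with a hypothetical service and key:

import java.util.concurrent.locks.ReentrantLock;

public class DeviceService {
    private final WeakHashLock<String> weakHashLock = new WeakHashLock<>();

    public void refresh(String deviceId) {
        // The local variable keeps the lock strongly reachable while it is held
        ReentrantLock lock = weakHashLock.get(deviceId);
        lock.lock();
        try {
            // ... refresh state for deviceId (placeholder) ...
        } finally {
            lock.unlock();
        }
    }
}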
Postscript
I originally wanted to build the fine-grained locks on top of LockSupport and AQS, but partway through I realized that what I was implementing differed little from Java's native locks, so I gave up and switched to wrapping Java's built-in locks instead, wasting quite a bit of time.
In fact, after implementing these fine-grained locks, new ideas came up. For example, the same segmentation idea could be used to hand data off to dedicated worker threads, which would cut down the time many threads spend blocked. That is left for future exploration...
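That idea is not implemented in this post, but here is a rough, hypothetical sketch of what it could look like (the class name, executor count, and task type are all assumptions, not part of the original code): route each key to a single-threaded executor chosen by hash, so work for the same segment is serialized on one thread without any blocking lock.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch of the "segment + dedicated thread" idea mentioned above:
// tasks whose keys fall in the same segment run on the same single thread, so no lock is needed.
public class SegmentExecutor<T> {
    private final ExecutorService[] workers;

    public SegmentExecutor(int segments) {
        workers = new ExecutorService[segments];
        for (int i = 0; i < segments; i++) {
            workers[i] = Executors.newSingleThreadExecutor();
        }
    }

    public void submit(T key, Runnable task) {
        int index = (key.hashCode() >>> 1) % workers.length; // same hashing scheme as SegmentLock
        workers[index].submit(task);
    }

    public void shutdown() {
        for (ExecutorService worker : workers) {
            worker.shutdown();
        }
    }
}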