Netty's Memory Manager ByteBufAllocator and Memory Allocation

The ByteBufAllocator memory manager:

  In Netty, the top-level abstraction for memory allocation is ByteBufAllocator, which is responsible for allocating all ByteBuf types of memory. Its API surface is actually quite small; the important methods are the following:

public interface ByteBufAllocator {
    /**
     * Allocate a {@link ByteBuf}. Whether it is a direct or heap buffer depends on the actual implementation.
     * (Allocates a block of memory and decides automatically between heap and direct memory.)
     */
    ByteBuf buffer();

    /**
     * Allocate a {@link ByteBuf}, preferably a direct buffer which is suitable for I/O.
     * (Tries to allocate direct memory; falls back to heap memory if the platform does not support it.)
     */
    ByteBuf ioBuffer();

    /**
     * Allocate a heap {@link ByteBuf}.
     */
    ByteBuf heapBuffer();

    /**
     * Allocate a direct {@link ByteBuf}.
     */
    ByteBuf directBuffer();

    /**
     * Allocate a {@link CompositeByteBuf} that combines multiple ByteBufs into a single whole.
     * Whether it is a direct or heap buffer depends on the actual implementation.
     */
    CompositeByteBuf compositeBuffer();
}
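  To make the API above concrete, here is a minimal, hedged usage sketch. It assumes Netty 4.1 on the classpath and uses ByteBufAllocator.DEFAULT, Netty's default allocator; the class name AllocatorApiDemo is just an illustration.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.CompositeByteBuf;

public class AllocatorApiDemo {
    public static void main(String[] args) {
        ByteBufAllocator alloc = ByteBufAllocator.DEFAULT;

        ByteBuf any = alloc.buffer();          // heap or direct, decided by the implementation
        ByteBuf io = alloc.ioBuffer();         // prefers direct memory, suitable for I/O
        ByteBuf heap = alloc.heapBuffer();     // always heap memory
        ByteBuf direct = alloc.directBuffer(); // always direct memory
        CompositeByteBuf composite = alloc.compositeBuffer(); // combines several ByteBufs into one view

        // Netty buffers are reference counted, so release them once they are no longer needed
        any.release();
        io.release();
        heap.release();
        direct.release();
        composite.release();
    }
}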

  At this point some readers may wonder: why doesn't the API above include allocation methods for the eight ByteBuf types mentioned earlier? Let's look at ByteBufAllocator's base implementation class, AbstractByteBufAllocator, and focus on how the main APIs are implemented. For example, the source of the buffer() method is as follows:

public abstract class AbstractByteBufAllocator implements ByteBufAllocator {
    @Override
    public ByteBuf buffer() {
        // Check whether direct buffers are used by default
        if (directByDefault) {
            return directBuffer();
        }
        return heapBuffer();
    }
}
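  Before walking through the two branches, a small hedged sketch of the observable effect: the preferDirect flag passed to the allocator's constructor (together with platform support for Unsafe) is what ultimately drives directByDefault. The class name and capacities below are illustrative only.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.UnpooledByteBufAllocator;

public class PreferDirectDemo {
    public static void main(String[] args) {
        ByteBuf a = new UnpooledByteBufAllocator(true).buffer();  // preferDirect = true
        ByteBuf b = new UnpooledByteBufAllocator(false).buffer(); // preferDirect = false

        System.out.println(a.isDirect()); // true on typical JVMs where Unsafe is available
        System.out.println(b.isDirect()); // false: a heap buffer is returned

        a.release();
        b.release();
    }
}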

  We can see that buffer() checks whether direct buffers are used by default: if so it allocates a direct buffer, otherwise a heap buffer. The implementations of directBuffer() and heapBuffer() are almost identical, so let's look at directBuffer():

@Override
public ByteBuf directBuffer() {
    // Allocate with the default initial capacity of 256 and a default max capacity of Integer.MAX_VALUE
    return directBuffer(DEFAULT_INITIAL_CAPACITY, DEFAULT_MAX_CAPACITY);
}

@Override
public ByteBuf directBuffer(int initialCapacity, int maxCapacity) {
    if (initialCapacity == 0 && maxCapacity == 0) {
        return emptyBuf;
    }
    // Validate the initial capacity and max capacity
    validate(initialCapacity, maxCapacity);
    return newDirectBuffer(initialCapacity, maxCapacity);
}

/**
 * Create a direct {@link ByteBuf} with the given initialCapacity and maxCapacity.
 */
protected abstract ByteBuf newDirectBuffer(int initialCapacity, int maxCapacity);

  directBuffer() has several overloads that eventually call newDirectBuffer(), which turns out to be an abstract method whose implementation is left to the subclasses of AbstractByteBufAllocator. Likewise, heapBuffer() ultimately calls newHeapBuffer(), which is also abstract and implemented by the subclasses. AbstractByteBufAllocator has two main subclasses: PooledByteBufAllocator and UnpooledByteBufAllocator. The class structure of the AbstractByteBufAllocator subclasses is shown in the diagram below:

  At this point we only know how the directBuffer/heapBuffer and pooled/unpooled choices are made. How are unsafe and non-unsafe told apart? Netty decides that for us automatically: if the underlying platform supports Unsafe, the unsafe read/write path is used, otherwise the non-unsafe path. We can verify this in the source of UnpooledByteBufAllocator:

public final class UnpooledByteBufAllocator extends AbstractByteBufAllocator implements ByteBufAllocatorMetricProvider {

    @Override
    protected ByteBuf newHeapBuffer(int initialCapacity, int maxCapacity) {
        return PlatformDependent.hasUnsafe()
                ? new UnpooledUnsafeHeapByteBuf(this, initialCapacity, maxCapacity)
                : new UnpooledHeapByteBuf(this, initialCapacity, maxCapacity);
    }

    @Override
    protected ByteBuf newDirectBuffer(int initialCapacity, int maxCapacity) {
        ByteBuf buf = PlatformDependent.hasUnsafe()
                ? UnsafeByteBufUtil.newUnsafeDirectByteBuf(this, initialCapacity, maxCapacity)
                : new UnpooledDirectByteBuf(this, initialCapacity, maxCapacity);
        return disableLeakDetector ? buf : toLeakAwareBuffer(buf);
    }
}

  咱們發如今newHeapBuffer()方法和newDirectBuffer()方法中,分配內存判斷PlatformDependent 是否支持Unsafa,若是支持則建立Unsafe 類型的Buffer,不然建立非Unsafe 類型的Buffer。由Netty 幫咱們自動判斷了。ui

Unpooled (non-pooled) memory allocation:

  Heap memory allocation:

  Now let's look at how UnpooledByteBufAllocator allocates memory. First, the heapBuffer allocation logic, starting from the newHeapBuffer() method:

public final class UnpooledByteBufAllocator extends AbstractByteBufAllocator implements ByteBufAllocatorMetricProvider {

    protected ByteBuf newHeapBuffer(int initialCapacity, int maxCapacity) {
        // Whether Unsafe can be used is determined by the JDK; if the Unsafe object is available, use it
        return PlatformDependent.hasUnsafe()
                ? new UnpooledUnsafeHeapByteBuf(this, initialCapacity, maxCapacity)
                : new UnpooledHeapByteBuf(this, initialCapacity, maxCapacity);
    }
}

  PlatformDependent.hasUnsafe() is called to check whether the platform supports Unsafe. If it does, an UnpooledUnsafeHeapByteBuf is created, otherwise an UnpooledHeapByteBuf. Let's first step into the UnpooledUnsafeHeapByteBuf constructor and see what it does:

final class UnpooledUnsafeHeapByteBuf extends UnpooledHeapByteBuf {
    ....
    UnpooledUnsafeHeapByteBuf(ByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
        super(alloc, initialCapacity, maxCapacity); // parent constructor, see below
    }
    ....
}

public UnpooledHeapByteBuf(ByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
    super(maxCapacity); // parent constructor, see below
    checkNotNull(alloc, "alloc");
    if (initialCapacity > maxCapacity) {
        throw new IllegalArgumentException(String.format(
                "initialCapacity(%d) > maxCapacity(%d)", initialCapacity, maxCapacity));
    }
    this.alloc = alloc;
    // Assign the freshly allocated array (new byte[initialCapacity]) to the field `array`
    setArray(allocateArray(initialCapacity));
    setIndex(0, 0);
}

protected AbstractReferenceCountedByteBuf(int maxCapacity) {
    super(maxCapacity); // parent constructor, see below
}

protected AbstractByteBuf(int maxCapacity) {
    checkPositiveOrZero(maxCapacity, "maxCapacity");
    this.maxCapacity = maxCapacity;
}

  The key call is setArray(), whose implementation is very simple: it assigns the freshly allocated array new byte[initialCapacity] to the field array. Right after that, setIndex() is called, which eventually initializes readerIndex and writerIndex in setIndex0():

@Override
public ByteBuf setIndex(int readerIndex, int writerIndex) {
    if (checkBounds) {
        // Validate the relationship between readerIndex, writerIndex and capacity
        checkIndexBounds(readerIndex, writerIndex, capacity());
    }
    setIndex0(readerIndex, writerIndex);
    return this;
}

// AbstractByteBuf: set the reader and writer index positions
final void setIndex0(int readerIndex, int writerIndex) {
    this.readerIndex = readerIndex;
    this.writerIndex = writerIndex;
}
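  The effect of the two indexes is easy to observe from user code. A small sketch, using an unpooled heap buffer purely for illustration:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class IndexDemo {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer(16);      // readerIndex = 0, writerIndex = 0
        buf.writeByte(1);
        buf.writeByte(2);                       // writerIndex is now 2
        System.out.println(buf.readerIndex());  // 0
        System.out.println(buf.writerIndex());  // 2

        buf.readByte();                         // readerIndex advances to 1
        System.out.println(buf.readerIndex());  // 1

        buf.setIndex(0, 2);                     // set both indexes explicitly, as setIndex0() does internally
        buf.release();
    }
}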

  Since UnpooledUnsafeHeapByteBuf and UnpooledHeapByteBuf both go through UnpooledHeapByteBuf's constructor, what is the real difference between them? The fundamental difference lies in how the I/O reads and writes are performed, which we can see by comparing their getByte() methods. First, the getByte() implementation of UnpooledHeapByteBuf:

@Override
public byte getByte(int index) {
    ensureAccessible();
    return _getByte(index);
}

@Override
protected byte _getByte(int index) {
    return HeapByteBufUtil.getByte(array, index);
}

final class HeapByteBufUtil {
    // Read the value directly via array indexing
    static byte getByte(byte[] memory, int index) {
        return memory[index];
    }
}

  Now the getByte() implementation of UnpooledUnsafeHeapByteBuf:

@Override
public byte getByte(int index) {
    checkIndex(index);
    return _getByte(index);
}

@Override
protected byte _getByte(int index) {
    return UnsafeByteBufUtil.getByte(array, index);
}

// UnsafeByteBufUtil: delegate to the underlying Unsafe to read the data
static byte getByte(byte[] array, int index) {
    return PlatformDependent.getByte(array, index);
}

  This comparison makes the difference between UnpooledUnsafeHeapByteBuf and UnpooledHeapByteBuf essentially clear.

  Direct (off-heap) memory allocation:

  Back to the newDirectBuffer() method of UnpooledByteBufAllocator:

@Override
protected ByteBuf newDirectBuffer(int initialCapacity, int maxCapacity) {
    ByteBuf buf = PlatformDependent.hasUnsafe()
            ? UnsafeByteBufUtil.newUnsafeDirectByteBuf(this, initialCapacity, maxCapacity)
            : new UnpooledDirectByteBuf(this, initialCapacity, maxCapacity);
    return disableLeakDetector ? buf : toLeakAwareBuffer(buf);
}

  Just as with heap allocation, if Unsafe is supported UnsafeByteBufUtil.newUnsafeDirectByteBuf() is called, otherwise an UnpooledDirectByteBuf is created. Let's first look at the UnpooledDirectByteBuf constructor:

protected UnpooledDirectByteBuf(ByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
    super(maxCapacity); // parent constructor, see below
    // parameter checks omitted
    this.alloc = alloc;
    // ByteBuffer.allocateDirect() asks the JDK to allocate a direct buffer; the rest is essentially one assignment
    setByteBuffer(ByteBuffer.allocateDirect(initialCapacity));
}

private void setByteBuffer(ByteBuffer buffer) {
    ByteBuffer oldBuffer = this.buffer;
    if (oldBuffer != null) {
        if (doNotFree) {
            doNotFree = false;
        } else {
            freeDirect(oldBuffer);
        }
    }
    this.buffer = buffer;
    tmpNioBuf = null;
    // remaining() returns the remaining usable length, which becomes the buffer's capacity
    capacity = buffer.remaining();
}

protected AbstractReferenceCountedByteBuf(int maxCapacity) {
    super(maxCapacity); // parent constructor, see below
}

protected AbstractByteBuf(int maxCapacity) {
    if (maxCapacity < 0) {
        throw new IllegalArgumentException("maxCapacity: " + maxCapacity + " (expected: >= 0)");
    }
    this.maxCapacity = maxCapacity;
}
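  The call to ByteBuffer.allocateDirect() relies on a JDK property worth spelling out: a freshly allocated direct ByteBuffer has position 0 and limit equal to its capacity, so remaining() equals the requested size. A minimal sketch using only the JDK:

import java.nio.ByteBuffer;

public class DirectByteBufferDemo {
    public static void main(String[] args) {
        ByteBuffer buffer = ByteBuffer.allocateDirect(256); // off-heap allocation performed by the JDK
        System.out.println(buffer.isDirect());   // true
        System.out.println(buffer.position());   // 0
        System.out.println(buffer.limit());      // 256
        System.out.println(buffer.remaining());  // 256, which UnpooledDirectByteBuf records as its capacity
    }
}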

  Next, let's continue into UnsafeByteBufUtil.newUnsafeDirectByteBuf() and look at its logic:

static UnpooledUnsafeDirectByteBuf newUnsafeDirectByteBuf(
        ByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
    if (PlatformDependent.useDirectBufferNoCleaner()) {
        return new UnpooledUnsafeNoCleanerDirectByteBuf(alloc, initialCapacity, maxCapacity);
    }
    return new UnpooledUnsafeDirectByteBuf(alloc, initialCapacity, maxCapacity);
}

protected UnpooledUnsafeDirectByteBuf(ByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
    super(maxCapacity); // parent constructor, see below
    // parameter checks omitted
    this.alloc = alloc;
    setByteBuffer(allocateDirect(initialCapacity), false);
}

protected AbstractReferenceCountedByteBuf(int maxCapacity) {
    super(maxCapacity); // parent constructor, see below
}

protected AbstractByteBuf(int maxCapacity) {
    if (maxCapacity < 0) {
        throw new IllegalArgumentException("maxCapacity: " + maxCapacity + " (expected: >= 0)");
    }
    // Record the buffer's maximum capacity
    this.maxCapacity = maxCapacity;
}

  Its logic is similar to that of the UnpooledDirectByteBuf constructor, so we focus on the setByteBuffer() method:

final void setByteBuffer(ByteBuffer buffer, boolean tryFree) {
    if (tryFree) {
        ByteBuffer oldBuffer = this.buffer;
        if (oldBuffer != null) {
            if (doNotFree) {
                doNotFree = false;
            } else {
                freeDirect(oldBuffer);
            }
        }
    }
    this.buffer = buffer;
    memoryAddress = PlatformDependent.directBufferAddress(buffer);
    tmpNioBuf = null;
    // remaining() returns the remaining usable length, which becomes the buffer's capacity
    capacity = buffer.remaining();
}

  As before, the buffer created by the JDK is stored first. The important extra step is the call to PlatformDependent.directBufferAddress(), which obtains the buffer's real memory address and stores it in the memoryAddress field. Let's step into PlatformDependent.directBufferAddress() to see how:

public static long directBufferAddress(ByteBuffer buffer) {
    return PlatformDependent0.directBufferAddress(buffer);
}

static long directBufferAddress(ByteBuffer buffer) {
    return getLong(buffer, ADDRESS_FIELD_OFFSET);
}

private static long getLong(Object object, long fieldOffset) {
    return UNSAFE.getLong(object, fieldOffset);
}

  As we can see, it ends up calling UNSAFE's getLong(), a native method that reads the address directly from the buffer object's memory location plus a field offset. At this point the difference between UnpooledUnsafeDirectByteBuf and UnpooledDirectByteBuf is basically clear: the non-unsafe variant accesses data through index-based lookups, while the unsafe variant operates directly on memory addresses, which is naturally more efficient.
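  From user code, the presence of that raw address can be checked through the public ByteBuf API. A hedged sketch (which concrete buffer class is returned depends on the platform, as discussed above):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.UnpooledByteBufAllocator;

public class MemoryAddressDemo {
    public static void main(String[] args) {
        ByteBuf direct = UnpooledByteBufAllocator.DEFAULT.directBuffer(64);
        if (direct.hasMemoryAddress()) {
            // Only the unsafe-based implementations expose the raw off-heap address
            System.out.println("memoryAddress = " + direct.memoryAddress());
        } else {
            System.out.println("no raw memory address (non-unsafe implementation)");
        }
        direct.release();
    }
}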

Pooled memory allocation:

  From here on we analyze how pooled memory allocation works. We start from PooledByteBufAllocator, the subclass of AbstractByteBufAllocator that implements the two allocation methods newDirectBuffer() and newHeapBuffer(); taking newDirectBuffer() as the example:

public class PooledByteBufAllocator extends AbstractByteBufAllocator {
    ......
    @Override
    protected ByteBuf newDirectBuffer(int initialCapacity, int maxCapacity) {
        PoolThreadCache cache = threadCache.get();
        PoolArena<ByteBuffer> directArena = cache.directArena;

        ByteBuf buf;
        if (directArena != null) {
            buf = directArena.allocate(cache, initialCapacity, maxCapacity);
        } else {
            if (PlatformDependent.hasUnsafe()) {
                buf = UnsafeByteBufUtil.newUnsafeDirectByteBuf(this, initialCapacity, maxCapacity);
            } else {
                buf = new UnpooledDirectByteBuf(this, initialCapacity, maxCapacity);
            }
        }
        return toLeakAwareBuffer(buf);
    }
    ......
}

  First, threadCache.get() returns a cache object of type PoolThreadCache; from that cache we obtain the directArena object, and finally directArena.allocate() is called to allocate the ByteBuf. This may look a bit confusing at first, so let's analyze it in detail. The threadCache field is actually a variable of type PoolThreadLocalCache; here is the source of PoolThreadLocalCache:

final class PoolThreadLocalCache extends FastThreadLocal<PoolThreadCache> {
    @Override
    protected synchronized PoolThreadCache initialValue() {
        final PoolArena<byte[]> heapArena = leastUsedArena(heapArenas);
        final PoolArena<ByteBuffer> directArena = leastUsedArena(directArenas);
        return new PoolThreadCache(
                heapArena, directArena, tinyCacheSize, smallCacheSize, normalCacheSize,
                DEFAULT_MAX_CACHED_BUFFER_CAPACITY, DEFAULT_CACHE_TRIM_INTERVAL);
    }
    .......
}

  Here we encounter PoolArena. Netty's overall memory pool is an array, and each element of the array is an independent memory pool. Think of it as a country (Netty) made up of several provinces (PoolArena), each managing its own region autonomously.

  As the name suggests, PoolThreadLocalCache's initialValue() method is what initializes the PoolThreadLocalCache. It first calls leastUsedArena() to obtain the heapArena and directArena objects, both of type PoolArena, and then passes them to the PoolThreadCache constructor. So where are heapArena and directArena themselves initialized? Searching the code, we find that the PooledByteBufAllocator constructor calls newArenaArray() to populate heapArenas and directArenas:

public PooledByteBufAllocator(boolean preferDirect, int nHeapArena, int nDirectArena, int pageSize, int maxOrder,
        int tinyCacheSize, int smallCacheSize, int normalCacheSize) {
    ......
    if (nHeapArena > 0) {
        heapArenas = newArenaArray(nHeapArena);
        ......
    } else {
        heapArenas = null;
        heapArenaMetrics = Collections.emptyList();
    }
    if (nDirectArena > 0) {
        directArenas = newArenaArray(nDirectArena);
        ......
    } else {
        directArenas = null;
        directArenaMetrics = Collections.emptyList();
    }
}

private static <T> PoolArena<T>[] newArenaArray(int size) {
    return new PoolArena[size];
}

  So it simply creates a PoolArena array of fixed size, determined by the nHeapArena and nDirectArena parameters. Going back to the PooledByteBufAllocator constructors to see how nHeapArena and nDirectArena are initialized, we find the overloaded constructors:

public PooledByteBufAllocator(boolean preferDirect) {
    // Delegates to the overloaded constructor below
    this(preferDirect, DEFAULT_NUM_HEAP_ARENA, DEFAULT_NUM_DIRECT_ARENA, DEFAULT_PAGE_SIZE, DEFAULT_MAX_ORDER);
}

public PooledByteBufAllocator(boolean preferDirect, int nHeapArena, int nDirectArena, int pageSize, int maxOrder) {
    this(preferDirect, nHeapArena, nDirectArena, pageSize, maxOrder,
            DEFAULT_TINY_CACHE_SIZE, DEFAULT_SMALL_CACHE_SIZE, DEFAULT_NORMAL_CACHE_SIZE);
}

  So nHeapArena and nDirectArena take their default values from the constants DEFAULT_NUM_HEAP_ARENA and DEFAULT_NUM_DIRECT_ARENA. Following those constants to where they are defined (in a static initializer block):

final int defaultMinNumArena = runtime.availableProcessors() * 2;
final int defaultChunkSize = DEFAULT_PAGE_SIZE << DEFAULT_MAX_ORDER;
DEFAULT_NUM_HEAP_ARENA = Math.max(0,
        SystemPropertyUtil.getInt(
                "io.netty.allocator.numHeapArenas",
                (int) Math.min(
                        defaultMinNumArena,
                        runtime.maxMemory() / defaultChunkSize / 2 / 3)));
DEFAULT_NUM_DIRECT_ARENA = Math.max(0,
        SystemPropertyUtil.getInt(
                "io.netty.allocator.numDirectArenas",
                (int) Math.min(
                        defaultMinNumArena,
                        PlatformDependent.maxDirectMemory() / defaultChunkSize / 2 / 3)));

  Only now do we know the default values of nHeapArena and nDirectArena: by default, twice the number of CPU cores, i.e. defaultMinNumArena is assigned to both nHeapArena and nDirectArena. "CPU cores * 2" should ring a bell: when an EventLoopGroup assigns threads, its default thread count is also CPU cores * 2. Why did Netty design it this way? The main goal is to ensure that every task thread in Netty can have an exclusive Arena, so that no locking is needed when each thread allocates memory.
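  Both defaults can be overridden, either through the system properties shown above or by passing explicit values to the constructor. A hedged sketch; the values 4, 8192 and 11 are arbitrary illustrations (8192 << 11 is a 16 MB chunk):

import io.netty.buffer.PooledByteBufAllocator;

public class ArenaConfigDemo {
    public static void main(String[] args) {
        // Arguments: preferDirect, nHeapArena, nDirectArena, pageSize, maxOrder
        PooledByteBufAllocator alloc = new PooledByteBufAllocator(true, 4, 4, 8192, 11);
        System.out.println(alloc.metric().numHeapArenas());   // 4
        System.out.println(alloc.metric().numDirectArenas()); // 4
    }
}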

  Based on the analysis above, we know Arenas come as heapArena and directArena, referred to collectively here as Arena. Suppose we have four threads; four Arenas will be assigned accordingly. When a ByteBuf is created, the Arena object is first obtained through PoolThreadCache and assigned to its member field; then whenever a thread calls get() on the PoolThreadCache it receives its own underlying Arena: EventLoop1 gets Arena1, EventLoop2 gets Arena2, and so on, as shown in the figure below:

  Besides allocating from its Arena, a PoolThreadCache can also allocate from the ByteBuf cache lists it maintains internally. For example: we create a 1024-byte ByteBuf through PooledByteBufAllocator; after we are done with it and release it, we may later allocate another 1024-byte ByteBuf somewhere else. In that case there is no need to allocate from the Arena at all; the buffer can be returned directly from the ByteBuf cache list maintained inside PoolThreadCache. PooledByteBufAllocator maintains cache lists of three size classes, controlled by three values: tinyCacheSize, smallCacheSize and normalCacheSize:

public class PooledByteBufAllocator extends AbstractByteBufAllocator {
    private final int tinyCacheSize;
    private final int smallCacheSize;
    private final int normalCacheSize;

    static {
        DEFAULT_TINY_CACHE_SIZE = SystemPropertyUtil.getInt("io.netty.allocator.tinyCacheSize", 512);
        DEFAULT_SMALL_CACHE_SIZE = SystemPropertyUtil.getInt("io.netty.allocator.smallCacheSize", 256);
        DEFAULT_NORMAL_CACHE_SIZE = SystemPropertyUtil.getInt("io.netty.allocator.normalCacheSize", 64);
    }

    public PooledByteBufAllocator(boolean preferDirect) {
        this(preferDirect, DEFAULT_NUM_HEAP_ARENA, DEFAULT_NUM_DIRECT_ARENA, DEFAULT_PAGE_SIZE, DEFAULT_MAX_ORDER);
    }

    public PooledByteBufAllocator(int nHeapArena, int nDirectArena, int pageSize, int maxOrder) {
        this(false, nHeapArena, nDirectArena, pageSize, maxOrder);
    }

    public PooledByteBufAllocator(boolean preferDirect, int nHeapArena, int nDirectArena, int pageSize, int maxOrder) {
        this(preferDirect, nHeapArena, nDirectArena, pageSize, maxOrder,
                DEFAULT_TINY_CACHE_SIZE, DEFAULT_SMALL_CACHE_SIZE, DEFAULT_NORMAL_CACHE_SIZE);
    }

    public PooledByteBufAllocator(boolean preferDirect, int nHeapArena, int nDirectArena, int pageSize, int maxOrder,
            int tinyCacheSize, int smallCacheSize, int normalCacheSize) {
        super(preferDirect);
        threadCache = new PoolThreadLocalCache();
        this.tinyCacheSize = tinyCacheSize;
        this.smallCacheSize = smallCacheSize;
        this.normalCacheSize = normalCacheSize;
        final int chunkSize = validateAndCalculateChunkSize(pageSize, maxOrder);
        ......
    }
}

  We can see that the PooledByteBufAllocator constructor assigns tinyCacheSize=512, smallCacheSize=256 and normalCacheSize=64. Through this mechanism Netty pre-creates caches of fixed size classes for us, which greatly improves memory allocation performance.
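  A hedged sketch of the reuse scenario described above; whether the second allocation is actually served from the thread-local cache is an internal detail, but the calling pattern looks like this:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class ThreadCacheReuseDemo {
    public static void main(String[] args) {
        PooledByteBufAllocator alloc = PooledByteBufAllocator.DEFAULT;

        ByteBuf first = alloc.directBuffer(1024);
        first.release(); // the 1024-byte piece of memory can go back into the thread-local cache

        // A later 1024-byte allocation on the same thread can then be served from that cache,
        // without touching the arena at all
        ByteBuf second = alloc.directBuffer(1024);
        second.release();
    }
}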

DirectArena memory allocation flow:

  An Arena allocates memory in three basic steps:

  1. Reuse a PooledByteBuf taken from the object pool;
  2. Allocate memory from the cache;
  3. Allocate memory from the memory heap.

  Taking directBuffer as the example, let's first look at reusing a PooledByteBuf from the object pool, stepping once again into PooledByteBufAllocator's newDirectBuffer() method:

@Override
protected ByteBuf newDirectBuffer(int initialCapacity, int maxCapacity) {
    PoolThreadCache cache = threadCache.get();
    PoolArena<ByteBuffer> directArena = cache.directArena;

    ByteBuf buf;
    if (directArena != null) {
        buf = directArena.allocate(cache, initialCapacity, maxCapacity);
    } else {
       .......
    }
    return toLeakAwareBuffer(buf);
}

  We already understand PoolArena from the discussion above, so let's go straight to PoolArena's allocate() method:

PooledByteBuf<T> allocate(PoolThreadCache cache, int reqCapacity, int maxCapacity) {
    PooledByteBuf<T> buf = newByteBuf(maxCapacity);
    allocate(cache, buf, reqCapacity);
    return buf;
}

  Here things become very clear: first newByteBuf() is called to obtain a PooledByteBuf object, then allocate() allocates a chunk of memory via the thread-private PoolThreadCache and initializes the buf's memory address and related fields. Let's step into newByteBuf(), choosing the DirectArena implementation:

@Override
protected PooledByteBuf<ByteBuffer> newByteBuf(int maxCapacity) {
    if (HAS_UNSAFE) {
        return PooledUnsafeDirectByteBuf.newInstance(maxCapacity);
    } else {
        return PooledDirectByteBuf.newInstance(maxCapacity);
    }
}

  It first checks whether Unsafe is supported; in most environments it is, so we continue into the newInstance() method of PooledUnsafeDirectByteBuf:

final class PooledUnsafeDirectByteBuf extends PooledByteBuf<ByteBuffer> {
    private static final Recycler<PooledUnsafeDirectByteBuf> RECYCLER = new Recycler<PooledUnsafeDirectByteBuf>() {
        @Override
        protected PooledUnsafeDirectByteBuf newObject(Handle<PooledUnsafeDirectByteBuf> handle) {
            return new PooledUnsafeDirectByteBuf(handle, 0);
        }
    };

    static PooledUnsafeDirectByteBuf newInstance(int maxCapacity) {
        PooledUnsafeDirectByteBuf buf = RECYCLER.get();
        buf.reuse(maxCapacity);
        return buf;
    }
    .......
}

  As the name suggests, a buf is first obtained via the get() method of the RECYCLER object (essentially an object recycle bin). As the snippet above shows, RECYCLER implements a newObject() method that creates a new buf whenever the recycler has no reusable one. Because the buf we get back may have come out of the recycler, it must be reset before reuse; that is why buf.reuse() is called next:

final void reuse(int maxCapacity) {
    maxCapacity(maxCapacity);
    setRefCnt(1);
    setIndex0(0, 0);
    discardMarks();
}

  reuse() simply resets all the fields back to their initial state. At this point we have the complete picture of how the buf object is obtained from the object pool. Next, back in PoolArena's allocate() method, how is the real memory allocated? There are two main cases: allocation from the cache and allocation from the memory heap. Here is the concrete logic of allocate():

private void allocate(PoolThreadCache cache, PooledByteBuf<T> buf, final int reqCapacity) {
    ...
    if (normCapacity <= chunkSize) {
        if (cache.allocateNormal(this, buf, reqCapacity, normCapacity)) {
            // was able to allocate out of the cache so move on
            return;
        }
        allocateNormal(buf, reqCapacity, normCapacity);
    } else {
        // Huge allocations are never served via the cache so just call allocateHuge
        allocateHuge(buf, reqCapacity);
    }
}

  This code looks rather complex, but the logic we elided essentially just determines the size class of the request and tries to allocate from the corresponding cache. If no size class fits, allocateHuge() is called directly to perform a real memory allocation.
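  For example, assuming the common default chunk size of 16 MB (pageSize 8192 << maxOrder 11; newer Netty versions may use a smaller default), a 32 MB request exceeds one chunk and is therefore served by allocateHuge() instead of the caches. A hedged sketch:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class HugeAllocationDemo {
    public static void main(String[] args) {
        ByteBuf huge = PooledByteBufAllocator.DEFAULT.directBuffer(32 * 1024 * 1024);
        System.out.println(huge.capacity()); // 33554432
        huge.release();
    }
}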
