Understanding HW Bitmaps (introduced in Oreo)

Hardware Bitmap was introduced in Oreo. As its name suggests, it allocates the Bitmap in graphics memory, which sets it apart from the traditional allocation on the Java heap.

First, a quick review of how a Bitmap is traditionally allocated: Android development normally goes through the BitmapFactory factory class to create a Bitmap object, and to support reuse and size-scaling optimizations it gained an Options class, defined inside BitmapFactory.

public static Bitmap decodeStream(InputStream is, Rect outPadding, Options opts) {
        ...
        Bitmap bm = null;

        Trace.traceBegin(Trace.TRACE_TAG_GRAPHICS, "decodeBitmap");
        try {
            if (is instanceof AssetManager.AssetInputStream) {
                final long asset = ((AssetManager.AssetInputStream) is).getNativeAsset();
                bm = nativeDecodeAsset(asset, outPadding, opts);
            } else {
                bm = decodeStreamInternal(is, outPadding, opts);
            }
            ...
        return bm;
    }
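The scaling optimization mentioned above centers on BitmapFactory.Options fields such as inSampleSize (and inBitmap for reuse). As a rough illustration, the helper below shows the common pattern for computing a power-of-two inSampleSize; the method name and class are my own, not framework code:

```java
// Sketch: compute a power-of-two sample size so the decoded bitmap stays
// at least as large as the requested dimensions. calculateInSampleSize
// is a hypothetical helper in the spirit of the usual Options pattern,
// not part of BitmapFactory itself.
public class SampleSizeDemo {
    static int calculateInSampleSize(int width, int height, int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        if (height > reqHeight || width > reqWidth) {
            final int halfHeight = height / 2;
            final int halfWidth = width / 2;
            // Keep doubling until the next step would drop below the request.
            while ((halfHeight / inSampleSize) >= reqHeight
                    && (halfWidth / inSampleSize) >= reqWidth) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }

    public static void main(String[] args) {
        // A 2048x1536 source decoded toward a 512x384 target: sample size 4.
        System.out.println(calculateInSampleSize(2048, 1536, 512, 384));
    }
}
```

In real code the result would be assigned to `options.inSampleSize` before calling decodeStream, so the decoder allocates a bitmap one quarter the width and height of the source.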

All of BitmapFactory's nativeXXX calls travel through JNI and ultimately reach the doDecode function in BitmapFactory.cpp.

The part relevant to memory allocation parses Options to obtain the width, height, format (ARGB_8888, RGB_565), and so on. In the default case of decoding from a file stream, a HeapAllocator mallocs a block of native memory for the pixels.

bitmap::createBitmap(env, defaultAllocator.getStorageObjAndReset(),bitmapCreateFlags, ninePatchChunk, ninePatchInsets, -1);
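The size of the buffer that HeapAllocator requests follows directly from the decoded dimensions and the format: RGB_565 stores 2 bytes per pixel and ARGB_8888 stores 4. A minimal sketch of that arithmetic (the class and method names are illustrative, not from the framework):

```java
// Sketch: byte cost of a decoded bitmap per pixel format.
// The format names mirror Bitmap.Config, but this class is illustrative only.
public class BitmapSizeDemo {
    enum Format { RGB_565, ARGB_8888 }

    static long byteCount(int width, int height, Format format) {
        int bytesPerPixel = (format == Format.RGB_565) ? 2 : 4;
        return (long) width * height * bytesPerPixel;
    }

    public static void main(String[] args) {
        // A 1080x1920 image: about 4 MB at 565, about 8 MB at 8888.
        System.out.println(byteCount(1080, 1920, Format.RGB_565));   // 4147200
        System.out.println(byteCount(1080, 1920, Format.ARGB_8888)); // 8294400
    }
}
```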

Finally, the native layer calls back into the Java layer, handing the native Bitmap to the Java Bitmap. In other words, the Bitmap.java constructor is invoked from a native callback (understanding this point is very important), and the Java-level Bitmap is tied to the native Bitmap object through mNativePtr. This was the pre-Oreo approach.
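That "constructed from below" direction can be mimicked in plain Java: the native side produces a pointer (a long) and hands it to the Java constructor, which merely stores it. The class below is a simplified stand-in for Bitmap.java, not the real framework source:

```java
// Sketch: a Java object constructed "from below" that only holds a handle
// to native state, in the spirit of Bitmap.mNativePtr. Simplified stand-in.
public class NativeHandleDemo {
    static class FakeBitmap {
        private final long mNativePtr; // opaque handle owned by native code

        // In the framework this constructor is reached via a JNI callback;
        // Java code never mints the pointer itself.
        FakeBitmap(long nativePtr) {
            mNativePtr = nativePtr;
        }

        long getNativePtr() {
            return mNativePtr;
        }
    }

    // Stand-in for the JNI callback that creates the Java wrapper.
    static FakeBitmap createFromNative(long ptr) {
        return new FakeBitmap(ptr);
    }

    public static void main(String[] args) {
        FakeBitmap bm = createFromNative(0xCAFEL);
        System.out.println(bm.getNativePtr());
    }
}
```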

So what exactly is a Hardware Bitmap?

Its memory allocation path is different: the same native doDecode entry point instead uses the allocateHardwareBitmap method in hwui/Bitmap.cpp.

sk_sp<Bitmap> Bitmap::allocateHardwareBitmap(SkBitmap& bitmap) {
    return uirenderer::renderthread::RenderProxy::allocateHardwareBitmap(bitmap);
}

It posts a MethodInvokeRenderTask to the RenderThread, which generates a texture and uploads it to the GPU.

// RenderThread.cpp
sk_sp<Bitmap> RenderThread::allocateHardwareBitmap(SkBitmap& skBitmap) {
    auto renderType = Properties::getRenderPipelineType();
    switch (renderType) {
        case RenderPipelineType::OpenGL:
            return OpenGLPipeline::allocateHardwareBitmap(*this, skBitmap);
        case RenderPipelineType::SkiaGL:
            return skiapipeline::SkiaOpenGLPipeline::allocateHardwareBitmap(*this, skBitmap);
        case RenderPipelineType::SkiaVulkan:
            return skiapipeline::SkiaVulkanPipeline::allocateHardwareBitmap(*this, skBitmap);
        default:
            LOG_ALWAYS_FATAL("canvas context type %d not supported", (int32_t) renderType);
            break;
    }
    return nullptr;
}

// SkiaOpenGLPipeline.cpp
sk_sp<Bitmap> SkiaOpenGLPipeline::allocateHardwareBitmap(renderthread::RenderThread& renderThread,
        SkBitmap& skBitmap) {
        ...
            // glTexSubImage2D is synchronous in sense that it memcpy() from pointer that we provide.
            // But asynchronous in sense that driver may upload texture onto hardware buffer when we first
            // use it in drawing
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, info.width(), info.height(), format, type,
                    bitmap.getPixels());
            GL_CHECKPOINT(MODERATE);
        ...
}

Once it returns, the GPU holds a Bitmap texture uploaded by the current process, and the in-process native Bitmap becomes eligible for collection. Recapping the path: BitmapFactory.java initiates a decode request asking for the hardware config; the JNI flow drops into the native layer and calls doDecode; graphics/Bitmap.cpp sends a MethodInvokeRenderTask to the render thread via RenderProxy to generate the Bitmap texture; and once all of that completes, the JNI flow returns along the original path, releasing every local reference created along the way.

You may well ask what the best practices for HW Bitmaps are. I am still trying them out in a project myself, and more ideas are welcome in the comments.
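The MethodInvokeRenderTask round trip, posting work to a dedicated render thread and blocking until it finishes, can be sketched with a plain single-thread executor. The names below are mine; the real RenderThread uses its own task queue rather than java.util.concurrent:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: synchronously invoking work on a dedicated "render" thread,
// analogous to RenderProxy posting a MethodInvokeRenderTask and waiting.
public class RenderTaskDemo {
    private static final ExecutorService RENDER_THREAD =
            Executors.newSingleThreadExecutor(r -> {
                Thread t = new Thread(r, "FakeRenderThread");
                t.setDaemon(true);
                return t;
            });

    // Stand-in for allocateHardwareBitmap: the work runs on the render
    // thread while the caller blocks on the Future, just as the JNI caller
    // blocks until the task completes.
    static String allocateOnRenderThread() throws Exception {
        Future<String> result = RENDER_THREAD.submit(
                () -> "uploaded on " + Thread.currentThread().getName());
        return result.get(); // block until the render thread finishes
    }

    public static void main(String[] args) throws Exception {
        System.out.println(allocateOnRenderThread());
    }
}
```

The blocking hand-off is why decode with the hardware config is still a synchronous call from the caller's point of view, even though the texture upload happens on another thread.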
