Android: Loading Huge Images in High Definition, No Compression

1. Overview

It has been a while since the last post, mainly because of some personal matters, so let's get back into the groove with a relatively simple one.

Loading images is familiar territory; to avoid OOM as much as possible, the usual practice is:

    For displaying an image: compress (downsample) the image to match the size of the control that will show it (see the sketch after this list).
    If there are a great many images: use a caching mechanism such as LruCache to keep the total memory occupied by images within a bounded range.
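
As a quick refresher, here is a minimal sketch of that conventional compress-to-fit approach. The helper class and method name are purely illustrative (they are not part of this post's project):

import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public class BitmapUtil
{
    // Decode a resource image scaled down just enough to cover reqWidth x reqHeight,
    // which is typically the size of the ImageView that will display it
    public static Bitmap decodeSampledBitmap(Resources res, int resId, int reqWidth, int reqHeight)
    {
        // First pass: read only the bounds, no pixel memory is allocated
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inJustDecodeBounds = true;
        BitmapFactory.decodeResource(res, resId, options);

        // Halve the resolution until it would drop below the requested size
        int inSampleSize = 1;
        while (options.outWidth / (inSampleSize * 2) >= reqWidth
                && options.outHeight / (inSampleSize * 2) >= reqHeight)
        {
            inSampleSize *= 2;
        }

        // Second pass: decode the actual pixels at the reduced sample size
        options.inSampleSize = inSampleSize;
        options.inJustDecodeBounds = false;
        return BitmapFactory.decodeResource(res, resId, options);
    }
}

Called with the target control's width and height, this keeps the decoded bitmap roughly the size of the view that shows it.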

There is actually one more scenario for image loading: a single image that is enormous and must not be compressed, for example a world map, the "Along the River During the Qingming Festival" scroll, or a long Weibo image.

So how should we handle this requirement?

First, since we are not compressing and are loading at the original size, the screen is certainly not big enough, and given memory constraints we cannot load the whole image into memory at once. So we have to load it region by region, which is where the following class comes in:

    BitmapRegionDecoder

Second, since the image does not fit on screen, we at least need a drag gesture (up, down, left, right) so the user can pan around to view it.

Putting that together, the goal of this post is to build a custom View that displays a huge image and lets the user drag to view it. The rough effect looks like this:

Well, the Qingming scroll is really long; to see the full image, download the source at the end of the post (the image is in the assets directory). Of course, if your image is also very tall, you can drag vertically as well.
2. Getting to Know BitmapRegionDecoder

BitmapRegionDecoder decodes a rectangular region of an image. If you need to display a specific region of an image, this class is a perfect fit.

Its usage is very simple: since it displays a region of an image, at minimum it needs one method to set the image and one method to pass in the region to display. In detail:

    BitmapRegionDecoder provides a series of newInstance methods to construct an instance; they accept a file path, a file descriptor, an InputStream, and so on.

    For example:

     BitmapRegionDecoder bitmapRegionDecoder =
      BitmapRegionDecoder.newInstance(inputStream, false);
 

    That takes care of passing in the image we want to work with; next comes displaying the specified region.

    bitmapRegionDecoder.decodeRegion(rect, options);

    The first parameter is obviously a Rect, and the second is a BitmapFactory.Options, through which you can control inSampleSize, inPreferredConfig, and so on.
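
    As a quick illustration, those options can also downsample the region while decoding. A minimal sketch, assuming the bitmapRegionDecoder created above (the rectangle values are arbitrary, not from the sample project):

    // Decode a 500x500 region of the source image, downsampled by 2 and in RGB_565,
    // so the returned bitmap is roughly 250x250 and uses 2 bytes per pixel
    Rect region = new Rect(100, 100, 600, 600);
    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inSampleSize = 2;
    opts.inPreferredConfig = Bitmap.Config.RGB_565;
    Bitmap regionBitmap = bitmapRegionDecoder.decodeRegion(region, opts);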

Now let's look at a super simple example:

package com.zhy.blogcodes.largeImage;

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.BitmapRegionDecoder;
import android.graphics.Rect;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.widget.ImageView;

import com.zhy.blogcodes.R;

import java.io.IOException;
import java.io.InputStream;

public class LargeImageViewActivity extends AppCompatActivity
{
    private ImageView mImageView;

    @Override
    protected void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_large_image_view);

        mImageView = (ImageView) findViewById(R.id.id_imageview);
        try
        {
            InputStream inputStream = getAssets().open("tangyan.jpg");

            // Read the image's width and height without loading its pixels
            BitmapFactory.Options tmpOptions = new BitmapFactory.Options();
            tmpOptions.inJustDecodeBounds = true;
            BitmapFactory.decodeStream(inputStream, null, tmpOptions);
            int width = tmpOptions.outWidth;
            int height = tmpOptions.outHeight;
            inputStream.close();

            // Re-open the stream (the bounds decode above has consumed it),
            // then decode and display the center region of the image
            inputStream = getAssets().open("tangyan.jpg");
            BitmapRegionDecoder bitmapRegionDecoder = BitmapRegionDecoder.newInstance(inputStream, false);
            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inPreferredConfig = Bitmap.Config.RGB_565;
            Bitmap bitmap = bitmapRegionDecoder.decodeRegion(new Rect(width / 2 - 100, height / 2 - 100, width / 2 + 100, height / 2 + 100), options);
            mImageView.setImageBitmap(bitmap);


        } catch (IOException e)
        {
            e.printStackTrace();
        }


    }

}

The code above uses BitmapRegionDecoder to load an image from assets, calls bitmapRegionDecoder.decodeRegion to decode the central rectangular region of the image into a bitmap, and finally displays it in an ImageView.

Screenshot:

The small image above shows the central region of the large image below.

OK, now that we understand the basic usage of BitmapRegionDecoder, extending it into a custom control that displays a huge image is straightforward: the Rect covers exactly the size of our View, and as the user performs move gestures we simply keep updating the Rect's parameters.
3. A Custom View for Displaying Huge Images

Based on the analysis above, the plan for this custom view is very clear:

    Provide an entry point for setting the image
    Override onTouchEvent and, based on the user's move gesture, update the parameters of the displayed region
    After every update of the region parameters, call invalidate; in onDraw, use regionDecoder.decodeRegion to get the bitmap and draw it

Once that is sorted out, it turns out to be quite easy. Here is the code:

package com.zhy.blogcodes.largeImage.view;

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.BitmapRegionDecoder;
import android.graphics.Canvas;
import android.graphics.Rect;
import android.util.AttributeSet;
import android.view.MotionEvent;
import android.view.View;

import java.io.IOException;
import java.io.InputStream;

/**
 * Created by zhy on 15/5/16.
 */
public class LargeImageView extends View
{
    private BitmapRegionDecoder mDecoder;
    /**
     * Width and height of the image
     */
    private int mImageWidth, mImageHeight;
    /**
     * The region of the image to draw
     */
    private volatile Rect mRect = new Rect();

    private MoveGestureDetector mDetector;


    private static final BitmapFactory.Options options = new BitmapFactory.Options();

    static
    {
        options.inPreferredConfig = Bitmap.Config.RGB_565;
    }

    public void setInputStream(InputStream is)
    {
        try
        {
            mDecoder = BitmapRegionDecoder.newInstance(is, false);
            // Grab the bounds for the scene dimensions directly from the decoder;
            // re-decoding the (already consumed) stream for its bounds is unreliable
            mImageWidth = mDecoder.getWidth();
            mImageHeight = mDecoder.getHeight();

            requestLayout();
            invalidate();
        } catch (IOException e)
        {
            e.printStackTrace();
        } finally
        {

            try
            {
                if (is != null) is.close();
            } catch (Exception e)
            {
                // ignore failures while closing the stream
            }
        }
    }


    public void init()
    {
        mDetector = new MoveGestureDetector(getContext(), new MoveGestureDetector.SimpleMoveGestureDetector()
        {
            @Override
            public boolean onMove(MoveGestureDetector detector)
            {
                int moveX = (int) detector.getMoveX();
                int moveY = (int) detector.getMoveY();

                if (mImageWidth > getWidth())
                {
                    mRect.offset(-moveX, 0);
                    checkWidth();
                    invalidate();
                }
                if (mImageHeight > getHeight())
                {
                    mRect.offset(0, -moveY);
                    checkHeight();
                    invalidate();
                }

                return true;
            }
        });
    }


    private void checkWidth()
    {


        Rect rect = mRect;
        int imageWidth = mImageWidth;
        int imageHeight = mImageHeight;

        if (rect.right > imageWidth)
        {
            rect.right = imageWidth;
            rect.left = imageWidth - getWidth();
        }

        if (rect.left < 0)
        {
            rect.left = 0;
            rect.right = getWidth();
        }
    }


    private void checkHeight()
    {

        Rect rect = mRect;
        int imageWidth = mImageWidth;
        int imageHeight = mImageHeight;

        if (rect.bottom > imageHeight)
        {
            rect.bottom = imageHeight;
            rect.top = imageHeight - getHeight();
        }

        if (rect.top < 0)
        {
            rect.top = 0;
            rect.bottom = getHeight();
        }
    }


    public LargeImageView(Context context, AttributeSet attrs)
    {
        super(context, attrs);
        init();
    }

    @Override
    public boolean onTouchEvent(MotionEvent event)
    {
        mDetector.onToucEvent(event);
        return true;
    }

    @Override
    protected void onDraw(Canvas canvas)
    {
        if (mDecoder == null) return; // nothing to draw until setInputStream() has been called
        Bitmap bm = mDecoder.decodeRegion(mRect, options);
        canvas.drawBitmap(bm, 0, 0, null);
    }

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec)
    {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);

        int width = getMeasuredWidth();
        int height = getMeasuredHeight();

        int imageWidth = mImageWidth;
        int imageHeight = mImageHeight;

        // Show the center region of the image by default; adjust as needed
        mRect.left = imageWidth / 2 - width / 2;
        mRect.top = imageHeight / 2 - height / 2;
        mRect.right = mRect.left + width;
        mRect.bottom = mRect.top + height;

    }


}

Walking through the source above:

    setInputStream obtains the real width and height of the image and initializes our mDecoder
    onMeasure assigns the display region rect, sized to match the view
    onTouchEvent listens for move gestures; in the callback we change the rect's parameters, perform boundary checks, and finally invalidate
    onDraw simply gets the bitmap for the rect and draws it

OK, none of that is complicated. But did you notice that the code listening for the user's move gesture looks a bit unusual? That is because it imitates the framework's ScaleGestureDetector: I wrote a MoveGestureDetector, shown below:

    MoveGestureDetector


    package com.zhy.blogcodes.largeImage.view;

    import android.content.Context;
    import android.graphics.PointF;
    import android.view.MotionEvent;

    public class MoveGestureDetector extends BaseGestureDetector
    {

        private PointF mCurrentPointer;
        private PointF mPrePointer;
        // Only to avoid allocating a new object on every event
        private PointF mDeltaPointer = new PointF();

        // Records the final result, which is returned to callers
        private PointF mExtenalPointer = new PointF();

        private OnMoveGestureListener mListenter;


        public MoveGestureDetector(Context context, OnMoveGestureListener listener)
        {
            super(context);
            mListenter = listener;
        }

        @Override
        protected void handleInProgressEvent(MotionEvent event)
        {
            int actionCode = event.getAction() & MotionEvent.ACTION_MASK;
            switch (actionCode)
            {
                case MotionEvent.ACTION_CANCEL:
                case MotionEvent.ACTION_UP:
                    mListenter.onMoveEnd(this);
                    resetState();
                    break;
                case MotionEvent.ACTION_MOVE:
                    updateStateByEvent(event);
                    boolean update = mListenter.onMove(this);
                    if (update)
                    {
                        mPreMotionEvent.recycle();
                        mPreMotionEvent = MotionEvent.obtain(event);
                    }
                    break;

            }
        }

        @Override
        protected void handleStartProgressEvent(MotionEvent event)
        {
            int actionCode = event.getAction() & MotionEvent.ACTION_MASK;
            switch (actionCode)
            {
                case MotionEvent.ACTION_DOWN:
                    resetState(); // in case CANCEL or UP was never received, just to be safe
                    mPreMotionEvent = MotionEvent.obtain(event);
                    updateStateByEvent(event);
                    break;
                case MotionEvent.ACTION_MOVE:
                    mGestureInProgress = mListenter.onMoveBegin(this);
                    break;
            }

        }

        protected void updateStateByEvent(MotionEvent event)
        {
            final MotionEvent prev = mPreMotionEvent;

            mPrePointer = caculateFocalPointer(prev);
            mCurrentPointer = caculateFocalPointer(event);

            //Log.e("TAG", mPrePointer.toString() + " ,  " + mCurrentPointer);

            boolean mSkipThisMoveEvent = prev.getPointerCount() != event.getPointerCount();

            //Log.e("TAG", "mSkipThisMoveEvent = " + mSkipThisMoveEvent);
            mExtenalPointer.x = mSkipThisMoveEvent ? 0 : mCurrentPointer.x - mPrePointer.x;
            mExtenalPointer.y = mSkipThisMoveEvent ? 0 : mCurrentPointer.y - mPrePointer.y;

        }

        /**
         * Computes the center point of all pointers in the event
         *
         * @param event
         * @return
         */
        private PointF caculateFocalPointer(MotionEvent event)
        {
            final int count = event.getPointerCount();
            float x = 0, y = 0;
            for (int i = 0; i < count; i++)
            {
                x += event.getX(i);
                y += event.getY(i);
            }

            x /= count;
            y /= count;

            return new PointF(x, y);
        }


        public float getMoveX()
        {
            return mExtenalPointer.x;

        }

        public float getMoveY()
        {
            return mExtenalPointer.y;
        }


        public interface OnMoveGestureListener
        {
            public boolean onMoveBegin(MoveGestureDetector detector);

            public boolean onMove(MoveGestureDetector detector);

            public void onMoveEnd(MoveGestureDetector detector);
        }

        public static class SimpleMoveGestureDetector implements OnMoveGestureListener
        {

            @Override
            public boolean onMoveBegin(MoveGestureDetector detector)
            {
                return true;
            }

            @Override
            public boolean onMove(MoveGestureDetector detector)
            {
                return false;
            }

            @Override
            public void onMoveEnd(MoveGestureDetector detector)
            {
            }
        }

    }

    BaseGestureDetector

    package com.zhy.blogcodes.largeImage.view;

    import android.content.Context;
    import android.view.MotionEvent;


    public abstract class BaseGestureDetector
    {

        protected boolean mGestureInProgress;

        protected MotionEvent mPreMotionEvent;
        protected MotionEvent mCurrentMotionEvent;

        protected Context mContext;

        public BaseGestureDetector(Context context)
        {
            mContext = context;
        }


        public boolean onToucEvent(MotionEvent event)
        {

            if (!mGestureInProgress)
            {
                handleStartProgressEvent(event);
            } else
            {
                handleInProgressEvent(event);
            }

            return true;

        }

        protected abstract void handleInProgressEvent(MotionEvent event);

        protected abstract void handleStartProgressEvent(MotionEvent event);

        protected abstract void updateStateByEvent(MotionEvent event);

        protected void resetState()
        {
            if (mPreMotionEvent != null)
            {
                mPreMotionEvent.recycle();
                mPreMotionEvent = null;
            }
            if (mCurrentMotionEvent != null)
            {
                mCurrentMotionEvent.recycle();
                mCurrentMotionEvent = null;
            }
            mGestureInProgress = false;
        }


    }

    You may say that this is a lot of code for a simple move gesture, and indeed it is: detecting a move gesture is very easy. The reason it is written this way is reusability. Imagine having a whole family of XXXGestureDetector classes; whenever we need to listen for some gesture, we just grab the corresponding detector, which is very convenient. I am sure many of you have also wondered why Google only provides a ScaleGestureDetector and not, say, a RotateGestureDetector.

With that, you should understand why it is done this way. Of course it is not mandatory; everyone has their own style.

    It is worth mentioning that this gesture-detection pattern is not my own idea; it comes from the open-source project https://github.com/rharter/android-gesture-detectors, which contains many gesture detectors. The accompanying article is http://code.almeros.com/android-multitouch-gesture-detectors#.VibzzhArJXg. So the two classes above are what I picked up from there ~ ha
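
For comparison, if reuse were not a concern, the same move tracking could be written straight into the View's onTouchEvent. A rough sketch (not part of this project's code) that reuses the mRect, checkWidth() and checkHeight() members of the LargeImageView shown earlier:

    // Inline move tracking inside LargeImageView, without a separate detector class
    private float mLastX, mLastY;

    @Override
    public boolean onTouchEvent(MotionEvent event)
    {
        switch (event.getAction() & MotionEvent.ACTION_MASK)
        {
            case MotionEvent.ACTION_DOWN:
                mLastX = event.getX();
                mLastY = event.getY();
                break;
            case MotionEvent.ACTION_MOVE:
                int dx = (int) (event.getX() - mLastX);
                int dy = (int) (event.getY() - mLastY);
                // Dragging right should reveal the part of the image further left
                mRect.offset(-dx, -dy);
                checkWidth();
                checkHeight();
                invalidate();
                mLastX = event.getX();
                mLastY = event.getY();
                break;
        }
        return true;
    }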

4. Testing

There is not much to say about testing: just drop our LargeImageView into a layout file and set its InputStream in the Activity.

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
                xmlns:tools="http://schemas.android.com/tools"
                android:layout_width="match_parent"
                android:layout_height="match_parent">


    <com.zhy.blogcodes.largeImage.view.LargeImageView
        android:id="@+id/id_largetImageview"
        android:layout_width="match_parent"
        android:layout_height="match_parent"/>

</RelativeLayout>

   

Then set the image in the Activity:

package com.zhy.blogcodes.largeImage;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;

import com.zhy.blogcodes.R;
import com.zhy.blogcodes.largeImage.view.LargeImageView;

import java.io.IOException;
import java.io.InputStream;

public class LargeImageViewActivity extends AppCompatActivity
{
    private LargeImageView mLargeImageView;

    @Override
    protected void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_large_image_view);

        mLargeImageView = (LargeImageView) findViewById(R.id.id_largetImageview);
        try
        {
            InputStream inputStream = getAssets().open("world.jpg");
            mLargeImageView.setInputStream(inputStream);

        } catch (IOException e)
        {
            e.printStackTrace();
        }


    }

}

 

Screenshot:

OK, that wraps up the approach and the full code for displaying huge images; overall it is quite simple.
In a real project, however, there may be more requirements, such as pinch-to-zoom in and out or fling gestures. For those, take a look at this library: https://github.com/johnnylambada/WorldMap, which covers most of these needs. With the approach from this post in mind, reading that library will be much easier, and so will customizing it. The world-map image I used comes from that library.
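
If you just want to experiment before pulling in that library, one crude way to add zooming out to the LargeImageView above is to keep a power-of-two scale factor, enlarge the decoded region by that factor, and let inSampleSize shrink the result back to the view size. A sketch under those assumptions (mScale and setScale are hypothetical additions, not part of the code above):

    // Hypothetical additions to LargeImageView: zoom out by decoding a larger region
    // with a matching inSampleSize, so the drawn bitmap stays roughly view-sized
    private int mScale = 1; // 1 = full detail, 2 = half, 4 = quarter ...

    public void setScale(int scale)
    {
        mScale = scale;
        // Re-center the region so it now covers scale * view-size pixels of the image
        // (the boundary checks would also need to account for the larger region)
        int cx = mRect.centerX();
        int cy = mRect.centerY();
        int halfW = getWidth() * scale / 2;
        int halfH = getHeight() * scale / 2;
        mRect.set(cx - halfW, cy - halfH, cx + halfW, cy + halfH);
        invalidate();
    }

    @Override
    protected void onDraw(Canvas canvas)
    {
        if (mDecoder == null) return;
        options.inSampleSize = mScale; // shrink the larger region back down while decoding
        Bitmap bm = mDecoder.decodeRegion(mRect, options);
        canvas.drawBitmap(bm, 0, 0, null);
    }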

Ha, once you have this down, you can quietly show off a little in your next interview: after elegantly covering the usual Android image-loading strategies, add "actually, there is one more case, displaying a huge image in full resolution, in which case we should..." and the interviewer will surely think much better of you ~ have a nice day ~

Click here to download the source code
