Author: @魷魚先生. This article is original; when reposting, please credit: juejin.im/user/5aff97…
Articles on Android camera development are already too numerous to count. Today I want to share some of the little secrets of Android camera development, and of course cover some basics along the way 😄. If you don't yet have any camera development background, I suggest opening Google's documentation for Camera
and the Camera Guide, studying those first, and then combining them with this article; that should give you twice the result for half the effort.
Here is the clone address for the reference code up front. ps: 😊 this thoughtful blogger deliberately used Gitee so readers in China can access the code at full speed.
Gitee: Camera-Android
This article covers Android Camera1; as for Camera2, please wait for my update :)😊
From the API docs and most material online, the usual startup boilerplate looks like this:
/** A safe way to get an instance of the Camera object. */
public static Camera getCameraInstance(){
    Camera c = null;
    try {
        c = Camera.open(); // attempt to get a Camera instance
    }
    catch (Exception e){
        // Camera is not available (in use or does not exist)
    }
    return c; // returns null if camera is unavailable
}
But when this function is called to obtain a camera instance, it is usually invoked directly on the main thread:
@Override
protected void onCreate(Bundle savedInstanceState) {
    // ...
    Camera camera = getCameraInstance();
}
Let's look at the implementation in the Android source, Camera.java:
/**
 * Creates a new Camera object to access the first back-facing camera on the
 * device. If the device does not have a back-facing camera, this returns
 * null.
 * @see #open(int)
 */
public static Camera open() {
    int numberOfCameras = getNumberOfCameras();
    CameraInfo cameraInfo = new CameraInfo();
    for (int i = 0; i < numberOfCameras; i++) {
        getCameraInfo(i, cameraInfo);
        if (cameraInfo.facing == CameraInfo.CAMERA_FACING_BACK) {
            return new Camera(i);
        }
    }
    return null;
}

Camera(int cameraId) {
    mShutterCallback = null;
    mRawImageCallback = null;
    mJpegCallback = null;
    mPreviewCallback = null;
    mPostviewCallback = null;
    mUsingPreviewAllocation = false;
    mZoomListener = null;

    Looper looper;
    if ((looper = Looper.myLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else if ((looper = Looper.getMainLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else {
        mEventHandler = null;
    }

    String packageName = ActivityThread.currentPackageName();
    native_setup(new WeakReference<Camera>(this), cameraId, packageName);
}
Pay attention to mEventHandler: if the thread that opens the camera has no Looper of its own, mEventHandler falls back to the UI thread's default Looper. From the source we can see that EventHandler is responsible for dispatching the callbacks coming up from the native layer. Normally we want all callbacks to land on the UI thread, which makes it convenient to drive the page logic directly. For some special scenarios, though, we can do something special with this; keep this detail in mind, we will use it later.
SurfaceHolder preview

Following the official Guide, we use a SurfaceView directly as the preview target.
@Override
protected void onCreate(Bundle savedInstanceState) {
    // ...
    SurfaceView surfaceView = findViewById(R.id.camera_surface_view);
    surfaceView.getHolder().addCallback(this);
}

@Override
public void surfaceCreated(SurfaceHolder holder) {
    // TODO: Connect Camera.
    if (null != mCamera) {
        try {
            mCamera.setPreviewDisplay(holder);
            mCamera.startPreview();
            mHolder = holder;
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Re-run the program and I'm sure you can already see the preview. It may have some orientation issues, but at least we can see the camera picture.
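For completeness, here is a minimal sketch of the matching lifecycle handling (my own addition; the guide code above stops at surfaceCreated): stop the preview when the surface goes away, and release the camera when the page does.

@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    // Preview size or rotation changes would be handled here.
}

@Override
public void surfaceDestroyed(SurfaceHolder holder) {
    // The surface is going away; stop feeding it preview frames.
    if (null != mCamera) {
        mCamera.stopPreview();
    }
}

@Override
protected void onPause() {
    super.onPause();
    // Release the camera so other apps can open it.
    if (null != mCamera) {
        mCamera.release();
        mCamera = null;
    }
}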
SurfaceTexture preview

This mode is mainly for scenarios that use OpenGL ES for GPU-side camera preview. The target View also changes to GLSurfaceView. ⚠️ Watch out for three small details when using it:
1. Basic GLSurfaceView setup

GLSurfaceView surfaceView = findViewById(R.id.gl_surfaceview);
surfaceView.setEGLContextClientVersion(2); // enable OpenGL ES 2.0 support
surfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY); // switch to on-demand (dirty) rendering
surfaceView.setRenderer(this);
What enabling on-demand rendering means is explained in detail under point 3.

2. Create the SurfaceTexture backing the texture
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    // Init Camera
    int[] textureIds = new int[1];
    GLES20.glGenTextures(1, textureIds, 0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureIds[0]);
    // Clamp texture coordinates outside the [0, 1] range to the edge.
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    // Filtering (mapping texels to coordinates): GL_NEAREST for minification, GL_LINEAR for magnification.
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    mSurfaceTexture = new SurfaceTexture(textureIds[0]);
    mCameraTexture = textureIds[0];
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, 0);
    try {
        // Use the SurfaceTexture we just created as the preview texture.
        mCamera.setPreviewTexture(mSurfaceTexture);
        mCamera.startPreview();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
The texture created here has a special target that comes from an OpenGL ES extension: GLES11Ext.GL_TEXTURE_EXTERNAL_OES. Only with this texture type can a developer process the camera feed in real time with their own GPU code, as the shader sketch below illustrates.
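As a quick illustration, here is a minimal fragment-shader sketch (my own; the names v_texCoord and u_cameraTexture are assumptions, not from the sample repo) embedded as a Java string constant, the usual way on Android. Sampling such a texture requires the GL_OES_EGL_image_external extension and the samplerExternalOES sampler type:

// Hypothetical shader constant: samplerExternalOES is the only sampler type
// that can read a GL_TEXTURE_EXTERNAL_OES texture.
private static final String OES_FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 v_texCoord;\n" +
        "uniform samplerExternalOES u_cameraTexture;\n" +
        "void main() {\n" +
        "    // Any real-time per-pixel processing of the camera image goes here.\n" +
        "    gl_FragColor = texture2D(u_cameraTexture, v_texCoord);\n" +
        "}\n";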
3. Data-driven refresh

Change the GLSurfaceView from continuous rendering to rendering only when the data actually changes.
GLSurfaceView surfaceView = findViewById(R.id.gl_surfaceview);
surfaceView.setEGLContextClientVersion(2);
surfaceView.setRenderer(this);
// Add the following setting to switch to passive GL rendering.
// Change SurfaceView render mode to RENDERMODE_WHEN_DIRTY.
surfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
When the data changes, we can notify the view like this:
mSurfaceTexture.setOnFrameAvailableListener(surfaceTexture -> {
    // New data is available to display; wake the GL thread to do its work.
    mSurfaceView.requestRender();
});
The rest can stay unchanged. The benefit is that the refresh frame rate now follows the camera's frame rate instead of the view redrawing continuously on its own and burning GPU power for nothing.
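One step the snippets above leave implicit: each requestRender() call ends up in the renderer's onDrawFrame, and that is where the newest camera frame must be latched into the OES texture. A minimal sketch, assuming the renderer kept the mSurfaceTexture created in onSurfaceCreated:

@Override
public void onDrawFrame(GL10 gl) {
    // Latch the most recent camera frame into the OES texture.
    // updateTexImage() must run on the GL thread, which onDrawFrame guarantees.
    mSurfaceTexture.updateTexImage();
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    // ... draw a full-screen quad sampling the OES texture ...
}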
This section focuses on the technique of previewing the camera picture from YUV data. The main real-world use for this is face detection or other real-time CV algorithms that consume the raw frames.
Camera preview with YUV callback buffers

This step uses the legacy API Camera.setPreviewCallbackWithBuffer, and using this function comes with one mandatory chore: you must add callback data buffers to the camera.
// Set the target preview resolution; 1280*720 is safe since practically every camera today supports it.
parameters.setPreviewSize(previewSize.first, previewSize.second);
// Make the camera write its NV21 callback data into user-supplied buffers.
mCamera.setPreviewCallbackWithBuffer(this);
mCamera.setParameters(parameters);
// Add 4 byte[] buffer objects for the camera to fill.
mCamera.addCallbackBuffer(createPreviewBuffer(previewSize.first, previewSize.second));
mCamera.addCallbackBuffer(createPreviewBuffer(previewSize.first, previewSize.second));
mCamera.addCallbackBuffer(createPreviewBuffer(previewSize.first, previewSize.second));
mCamera.addCallbackBuffer(createPreviewBuffer(previewSize.first, previewSize.second));
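The createPreviewBuffer helper is not spelled out in this post; a minimal sketch, assuming NV21 (the default preview format), sizes the buffer from ImageFormat.getBitsPerPixel so the camera actually accepts it:

// Sketch of a createPreviewBuffer helper: the array must be at least the
// size of one NV21 frame, otherwise the camera ignores it and falls back
// to internally allocated buffers.
private byte[] createPreviewBuffer(int width, int height) {
    int bitsPerPixel = ImageFormat.getBitsPerPixel(ImageFormat.NV21); // 12 for NV21
    long sizeInBits = (long) width * height * bitsPerPixel;
    int bufferSize = (int) Math.ceil(sizeInBits / 8.0d) + 1;
    byte[] buffer = new byte[bufferSize];
    // The demo additionally registers the array in bytesToByteBuffer so the
    // callback below can recognize it later.
    return buffer;
}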
Note ⚠️: if you set the preview callback with Camera.setPreviewCallback instead, the data array passed to onPreviewFrame(byte[] data, Camera camera) is allocated internally by the camera. With setPreviewCallbackWithBuffer, the callback must recycle each buffer itself:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    // TODO: Pre-process the camera input data.
    if (!bytesToByteBuffer.containsKey(data)) {
        Log.d(TAG, "Skipping frame. Could not find ByteBuffer associated with the image "
            + "data from the camera.");
    } else {
        // Because we use setPreviewCallbackWithBuffer, data must be handed back.
        mCamera.addCallbackBuffer(data);
    }
}
If you skip the mCamera.addCallbackBuffer(byte[]) call, onPreviewFrame stops firing after exactly 4 callbacks. You'll notice that number is exactly the count of buffers added at camera initialization.
Since our goal is to render with the data returned by onPreviewFrame, the mCamera.setPreviewTexture call should be removed; we don't want the camera to keep pushing preview data to the SurfaceTexture we set earlier, which would just waste system resources.

😂 The code block with mCamera.setPreviewTexture(mSurfaceTexture); commented out:
try {
    // mCamera.setPreviewTexture(mSurfaceTexture);
    mCamera.startPreview();
} catch (Exception e) {
    e.printStackTrace();
}
Testing then shows that onPreviewFrame stops working altogether. A quick look at the documentation turns up the following:
/**
 * Starts capturing and drawing preview frames to the screen
 * Preview will not actually start until a surface is supplied
 * with {@link #setPreviewDisplay(SurfaceHolder)} or
 * {@link #setPreviewTexture(SurfaceTexture)}.
 *
 * <p>If {@link #setPreviewCallback(Camera.PreviewCallback)},
 * {@link #setOneShotPreviewCallback(Camera.PreviewCallback)}, or
 * {@link #setPreviewCallbackWithBuffer(Camera.PreviewCallback)} were
 * called, {@link Camera.PreviewCallback#onPreviewFrame(byte[], Camera)}
 * will be called when preview data becomes available.
 *
 * @throws RuntimeException if starting preview fails; usually this would be
 *    because of a hardware or other low-level error, or because release()
 *    has been called on this Camera instance.
 */
public native final void startPreview();
The camera starts the preview correctly only after the corresponding Surface target has been supplied.
Now for the moment the magic happens:
/**
 * The dummy surface texture must be assigned a chosen name. Since we never use an OpenGL context,
 * we can choose any ID we want here. The dummy surface texture is not a crazy hack - it is
 * actually how the camera team recommends using the camera without a preview.
 */
private static final int DUMMY_TEXTURE_NAME = 100;

@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    // ... codes
    SurfaceTexture dummySurfaceTexture = new SurfaceTexture(DUMMY_TEXTURE_NAME);
    mCamera.setPreviewTexture(dummySurfaceTexture);
    // ... codes
}
After this change, the camera's onPreviewFrame starts firing again. This dummy SurfaceTexture is enough to get the camera working, and by additionally setting:
dummySurfaceTexture.setOnFrameAvailableListener(surfaceTexture -> {
    Log.d(TAG, "dummySurfaceTexture working.");
});
we can observe that the system figures out on its own that this SurfaceTexture is not a real consumer: onFrameAvailable never reacts at all.
Rendering YUV to the SurfaceView

Android's default preview YUV format is NV21, so a Shader is needed to convert the format; in OpenGL we can only draw RGB colors. The conversion shader can be found in nv21_to_rgba_fs.glsl:
#ifdef GL_ES
precision highp float;
#endif

varying vec2 v_texCoord;
uniform sampler2D y_texture;
uniform sampler2D uv_texture;

void main (void) {
    float r, g, b, y, u, v;

    // We had put the Y values of each pixel to the R,G,B components by
    // GL_LUMINANCE, that's why we're pulling it from the R component,
    // we could also use G or B
    y = texture2D(y_texture, v_texCoord).r;

    // We had put the U and V values of each pixel to the A and R,G,B
    // components of the texture respectively using GL_LUMINANCE_ALPHA.
    // Since U,V bytes are interspread in the texture, this is probably
    // the fastest way to use them in the shader
    u = texture2D(uv_texture, v_texCoord).a - 0.5;
    v = texture2D(uv_texture, v_texCoord).r - 0.5;

    // The numbers are just YUV to RGB conversion constants
    r = y + 1.13983*v;
    g = y - 0.39465*u - 0.58060*v;
    b = y + 2.03211*u;

    // We finally set the RGB color of our pixel
    gl_FragColor = vec4(r, g, b, 1.0);
}
The main idea is to split the NV21 data into two textures and do the color-format math in the fragment shader, converting back to RGBA.
mYTexture = new Texture();
created = mYTexture.create(mYuvBufferWidth, mYuvBufferHeight, GLES10.GL_LUMINANCE);
if (!created) {
    throw new RuntimeException("Create Y texture fail.");
}
mUVTexture = new Texture();
// UV carries two channels, so GL_LUMINANCE_ALPHA is the right data format for it.
created = mUVTexture.create(mYuvBufferWidth/2, mYuvBufferHeight/2, GLES10.GL_LUMINANCE_ALPHA);
if (!created) {
    throw new RuntimeException("Create UV texture fail.");
}

// ... some logic omitted ...

// Copy the Y channel of the image into its buffer, the first (width*height) bytes are the Y channel
yBuffer.put(data.array(), 0, mPreviewSize.first * mPreviewSize.second);
yBuffer.position(0);
// Copy the UV channels of the image into their buffer, the following (width*height/2) bytes are the UV channel; the U and V bytes are interspread
uvBuffer.put(data.array(), mPreviewSize.first * mPreviewSize.second, (mPreviewSize.first * mPreviewSize.second)/2);
uvBuffer.position(0);
mYTexture.load(yBuffer);
mUVTexture.load(uvBuffer);
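The Texture wrapper used above comes from the sample repo; its internals may differ, but a minimal sketch of what the create/load pair boils down to is a plain GL_TEXTURE_2D allocated once with glTexImage2D and refilled per frame with glTexSubImage2D:

// Sketch only; field and method names are assumptions about the repo's class.
public class Texture {
    private int mId;
    private int mWidth, mHeight, mFormat;

    public boolean create(int width, int height, int format) {
        int[] ids = new int[1];
        GLES20.glGenTextures(1, ids, 0);
        mId = ids[0];
        mWidth = width; mHeight = height; mFormat = format;
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mId);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        // Luminance rows are tightly packed; avoid the default 4-byte row alignment.
        GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1);
        // Allocate storage without initial data; load() fills it each frame.
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, format, width, height, 0,
                format, GLES20.GL_UNSIGNED_BYTE, null);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
        return GLES20.glGetError() == GLES20.GL_NO_ERROR;
    }

    public void load(java.nio.ByteBuffer pixels) {
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mId);
        // Re-upload this frame's bytes into the existing storage.
        GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, mWidth, mHeight,
                mFormat, GLES20.GL_UNSIGNED_BYTE, pixels);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
    }
}

At draw time the two textures would then be bound to separate texture units and wired to the y_texture/uv_texture sampler uniforms via glActiveTexture and glUniform1i.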
The rate at which the camera delivers YUV frames and the rate at which OpenGL ES renders the preview are not necessarily matched, so there is room to optimize. Since this is a camera preview, we must guarantee that the frame currently being rendered is always the latest one. We can use a shared pendingFrameData resource to synchronize the render thread with the camera callback thread and keep the picture fresh.
synchronized (lock) {
    if (pendingFrameData != null) { // Frame data that has not been processed. Just return it to the Camera.
        camera.addCallbackBuffer(pendingFrameData.array());
        pendingFrameData = null;
    }
    pendingFrameData = bytesToByteBuffer.get(data);
    // Notify the processor thread if it is waiting on the next frame (see below).
    // In the demo this wakes the GLThread render thread if it is parked waiting.
    lock.notifyAll();
}
// Tell the GLSurfaceView it can refresh now.
mSurfaceView.requestRender();
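The other half of that handshake runs on the GL thread. A minimal sketch (my own, following the same pattern; the names match the snippet above) of how the render side takes ownership of the pending frame:

// GL-thread side: take the pending frame under the same lock, use it,
// then hand the byte[] back to the camera for reuse.
ByteBuffer frame;
synchronized (lock) {
    try {
        while (pendingFrameData == null) {
            lock.wait(); // woken by lock.notifyAll() in onPreviewFrame
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return;
    }
    frame = pendingFrameData;
    pendingFrameData = null;
}
// ... split `frame` into the Y/UV textures as shown earlier ...
mCamera.addCallbackBuffer(frame.array()); // recycle the buffer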
Finally, one small optimization trick ㊙️ that ties back to the Handler point made in the camera startup discussion. If we call Camera.open() on the Android main thread, or on a worker thread without a Looper, the end result is the same: all camera callbacks are processed on the Looper returned by Looper.getMainLooper(). You can imagine that if the UI thread is busy with heavy work, the preview frame rate will inevitably suffer, so the best approach is to spin up a worker thread for opening the camera.
final ConditionVariable startDone = new ConditionVariable();
new Thread() {
    @Override
    public void run() {
        Log.v(TAG, "start loopRun");
        // Set up a looper to be used by camera.
        Looper.prepare();
        // Save the looper so that we can terminate this thread
        // after we are done with it.
        mLooper = Looper.myLooper();
        mCamera = Camera.open(cameraId);
        Log.v(TAG, "camera is opened");
        startDone.open();
        Looper.loop(); // Blocks forever until Looper.quit() is called.
        if (LOGV) Log.v(TAG, "initializeMessageLooper: quit.");
    }
}.start();

Log.v(TAG, "start waiting for looper");
if (!startDone.block(WAIT_FOR_COMMAND_TO_COMPLETE)) {
    Log.v(TAG, "initializeMessageLooper: start timeout");
    fail("initializeMessageLooper: start timeout");
}
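The comment about saving the looper hints at the teardown; a minimal sketch, mirroring the snippet above, of shutting that thread down once the camera is no longer needed:

// Release the camera first, then quit the background Looper so the
// thread created above can return from Looper.loop() and exit.
if (mCamera != null) {
    mCamera.release();
    mCamera = null;
}
if (mLooper != null) {
    mLooper.quit();
}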
How the preview data is oriented depends on where the camera sensor is mounted. That topic deserves its own article, so here I'll go straight to the code.
private void setRotation(Camera camera, Camera.Parameters parameters, int cameraId) {
    WindowManager windowManager = (WindowManager) getSystemService(Context.WINDOW_SERVICE);
    int degrees = 0;
    int rotation = windowManager.getDefaultDisplay().getRotation();
    switch (rotation) {
        case Surface.ROTATION_0:
            degrees = 0;
            break;
        case Surface.ROTATION_90:
            degrees = 90;
            break;
        case Surface.ROTATION_180:
            degrees = 180;
            break;
        case Surface.ROTATION_270:
            degrees = 270;
            break;
        default:
            Log.e(TAG, "Bad rotation value: " + rotation);
    }

    Camera.CameraInfo cameraInfo = new Camera.CameraInfo();
    Camera.getCameraInfo(cameraId, cameraInfo);

    int angle;
    int displayAngle;
    if (cameraInfo.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
        angle = (cameraInfo.orientation + degrees) % 360;
        displayAngle = (360 - angle) % 360; // compensate for it being mirrored
    } else { // back-facing
        angle = (cameraInfo.orientation - degrees + 360) % 360;
        displayAngle = angle;
    }

    // This corresponds to the rotation constants.
    mRotation = angle;
    camera.setDisplayOrientation(displayAngle);
    parameters.setRotation(angle);
}
But in testing you will find this has no effect in the YUV-data preview mode, because the rotation parameters do not affect the byte order of the data returned by PreviewCallback#onPreviewFrame. Reading the source comments makes us even more certain of this:
/**
 * Set the clockwise rotation of preview display in degrees. This affects
 * the preview frames and the picture displayed after snapshot. This method
 * is useful for portrait mode applications. Note that preview display of
 * front-facing cameras is flipped horizontally before the rotation, that
 * is, the image is reflected along the central vertical axis of the camera
 * sensor. So the users can see themselves as looking into a mirror.
 *
 * <p>This does not affect the order of byte array passed in {@link
 * PreviewCallback#onPreviewFrame}, JPEG pictures, or recorded videos. This
 * method is not allowed to be called during preview.
 *
 * <p>If you want to make the camera image show in the same orientation as
 * the display, you can use the following code.
 * <pre>
 * public static void setCameraDisplayOrientation(Activity activity,
 *         int cameraId, android.hardware.Camera camera) {
 *     android.hardware.Camera.CameraInfo info =
 *             new android.hardware.Camera.CameraInfo();
 *     android.hardware.Camera.getCameraInfo(cameraId, info);
 *     int rotation = activity.getWindowManager().getDefaultDisplay()
 *             .getRotation();
 *     int degrees = 0;
 *     switch (rotation) {
 *         case Surface.ROTATION_0: degrees = 0; break;
 *         case Surface.ROTATION_90: degrees = 90; break;
 *         case Surface.ROTATION_180: degrees = 180; break;
 *         case Surface.ROTATION_270: degrees = 270; break;
 *     }
 *
 *     int result;
 *     if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
 *         result = (info.orientation + degrees) % 360;
 *         result = (360 - result) % 360;  // compensate the mirror
 *     } else {  // back-facing
 *         result = (info.orientation - degrees + 360) % 360;
 *     }
 *     camera.setDisplayOrientation(result);
 * }
 * </pre>
 *
 * <p>Starting from API level 14, this method can be called when preview is
 * active.
 *
 * <p><b>Note: </b>Before API level 24, the default value for orientation is 0. Starting in
 * API level 24, the default orientation will be such that applications in forced-landscape mode
 * will have correct preview orientation, which may be either a default of 0 or
 * 180. Applications that operate in portrait mode or allow for changing orientation must still
 * call this method after each orientation change to ensure correct preview display in all
 * cases.</p>
 *
 * @param degrees the angle that the picture will be rotated clockwise.
 *                Valid values are 0, 90, 180, and 270.
 * @throws RuntimeException if setting orientation fails; usually this would
 *         be because of a hardware or other low-level error, or because
 *         release() has been called on this Camera instance.
 * @see #setPreviewDisplay(SurfaceHolder)
 */
public native final void setDisplayOrientation(int degrees);
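So if a downstream consumer (a face detector, say) needs upright pixels, the NV21 buffer itself has to be rotated on the CPU. A minimal sketch of the classic 90-degree-clockwise NV21 rotation, my own illustration rather than code from the sample repo:

// Rotates one NV21 frame 90 degrees clockwise. NV21 is a full-resolution Y
// plane followed by a half-resolution interleaved V/U plane.
private static byte[] rotateNV21Cw90(byte[] input, int width, int height) {
    byte[] output = new byte[input.length];
    int i = 0;
    // Y plane: each input column, read bottom-up, becomes an output row.
    for (int x = 0; x < width; x++) {
        for (int y = height - 1; y >= 0; y--) {
            output[i++] = input[y * width + x];
        }
    }
    // VU plane: same walk over the half-height rows, keeping V/U pairs together.
    int uvOffset = width * height;
    for (int x = 0; x < width; x += 2) {
        for (int y = height / 2 - 1; y >= 0; y--) {
            output[i++] = input[uvOffset + y * width + x];     // V
            output[i++] = input[uvOffset + y * width + x + 1]; // U
        }
    }
    return output;
}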
For the on-screen rendering path, getting the correct orientation only requires changing coordinates in the YUV drawing step. Here I used a fairly blunt trick: adjust the texture coordinates directly.
private static final float FULL_RECTANGLE_COORDS[] = {
    -1.0f, -1.0f, // 0 bottom left
     1.0f, -1.0f, // 1 bottom right
    -1.0f,  1.0f, // 2 top left
     1.0f,  1.0f, // 3 top right
};

// FIXME: To draw with the correct orientation, the texture coordinates are computed
// as rotated by 90 degrees, which also includes one mirror pass over the texture data.
private static final float FULL_RECTANGLE_TEX_COORDS[] = {
    1.0f, 1.0f, // 0 bottom left
    1.0f, 0.0f, // 1 bottom right
    0.0f, 1.0f, // 2 top left
    0.0f, 0.0f  // 3 top right
};
Restart the program. Perfect, done.
Android camera development, in summary, is a march through pitfalls. If you're studying this, I suggest combining the material in my references with the camera source code itself; you will get a lot out of it. I also hope this write-up of my experience helps you on your way. 🍻🍻🍻