The previous articles analyzed the initialization and preview flows of Camera2 in detail; this article analyzes the Camera2 capture (photo-taking) flow.
As covered in the preview analysis, once the preview is up the ShutterButton is enabled and a photo can be taken. The ShutterButton's click listener is the onShutterButtonClick method:
//CaptureModule.java
@Override
public void onShutterButtonClick() {
    // The camera is not open yet.
    if (mCamera == null) {
        return;
    }
    int countDownDuration = mSettingsManager.getInteger(
            SettingsManager.SCOPE_GLOBAL, Keys.KEY_COUNTDOWN_DURATION);
    if (countDownDuration > 0) {
        // Start the countdown.
        mAppController.getCameraAppUI().transitionToCancel();
        mAppController.getCameraAppUI().hideModeOptions();
        mUI.setCountdownFinishedListener(this);
        mUI.startCountdown(countDownDuration);
        // Will take picture later via listener callback.
    } else {
        // Take the picture immediately.
        takePictureNow();
    }
}
First, the camera settings are read to decide whether a countdown (delayed capture) is needed. Here we analyze the no-countdown case, which calls takePictureNow directly:
//CaptureModule.java
private void takePictureNow() {
    if (mCamera == null) {
        Log.i(TAG, "Not taking picture since Camera is closed.");
        return;
    }
    // Create and start a capture session.
    CaptureSession session = createAndStartCaptureSession();
    // Get the device orientation.
    int orientation = mAppController.getOrientationManager()
            .getDeviceOrientation().getDegrees();
    // Initialize the picture parameters; "this" (CaptureModule) is the PictureCallback implementation.
    PhotoCaptureParameters params = new PhotoCaptureParameters(
            session.getTitle(), orientation, session.getLocation(),
            mContext.getExternalCacheDir(), this, mPictureSaverCallback,
            mHeadingSensor.getCurrentHeading(), mZoomValue, 0);
    // Decorate the session.
    decorateSessionAtCaptureTime(session);
    // Take the picture.
    mCamera.takePicture(params, session);
}
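The orientation passed into PhotoCaptureParameters is what eventually ends up in the JPEG_ORIENTATION key of the capture request. The computation in CameraUtil.getJpegRotation is not listed in this article; a minimal sketch following the formula documented for CaptureRequest.JPEG_ORIENTATION (an illustration, not the AOSP source) looks like this:

// Sketch of the JPEG orientation computation documented for CaptureRequest.JPEG_ORIENTATION.
// It illustrates the idea behind CameraUtil.getJpegRotation; it is not that method's source.
import android.hardware.camera2.CameraCharacteristics;
import android.view.OrientationEventListener;

class JpegRotationSketch {
    static int getJpegRotation(int deviceOrientation, CameraCharacteristics characteristics) {
        if (deviceOrientation == OrientationEventListener.ORIENTATION_UNKNOWN) {
            return 0;
        }
        int sensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
        // Round the device orientation to a multiple of 90 degrees.
        deviceOrientation = (deviceOrientation + 45) / 90 * 90;
        // Front-facing cameras mirror the rotation direction.
        boolean facingFront = characteristics.get(CameraCharacteristics.LENS_FACING)
                == CameraCharacteristics.LENS_FACING_FRONT;
        if (facingFront) {
            deviceOrientation = -deviceOrientation;
        }
        // The JPEG must be rotated relative to the sensor orientation to appear upright.
        return (sensorOrientation + deviceOrientation + 360) % 360;
    }
}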
takePictureNow first calls createAndStartCaptureSession to create a CaptureSession and start it; this is also where the initial parameters are set up, for example registering CaptureModule (passed as this) as the picture-processing callback (analyzed later):
//CaptureModule.java
private CaptureSession createAndStartCaptureSession() {
    // Session timestamp.
    long sessionTime = getSessionTime();
    // Current location.
    Location location = mLocationManager.getCurrentLocation();
    // Picture title.
    String title = CameraUtil.instance().createJpegName(sessionTime);
    // Create the session.
    CaptureSession session = getServices().getCaptureSessionManager()
            .createNewSession(title, sessionTime, location);
    // Start the session.
    session.startEmpty(new CaptureStats(mHdrPlusEnabled),
            new Size((int) mPreviewArea.width(), (int) mPreviewArea.height()));
    return session;
}
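createJpegName is not shown in this article; a plausible sketch of this kind of timestamp-based picture naming (a hypothetical illustration, not the AOSP implementation) is:

// Hypothetical sketch of timestamp-based picture naming in the spirit of
// CameraUtil.createJpegName; not the actual AOSP code.
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

class JpegNameSketch {
    private static final SimpleDateFormat NAME_FORMAT =
            new SimpleDateFormat("'IMG'_yyyyMMdd_HHmmss", Locale.US);

    // Turns a session timestamp in milliseconds into a title such as IMG_20240101_120000.
    static String createJpegName(long sessionTimeMillis) {
        synchronized (NAME_FORMAT) {
            return NAME_FORMAT.format(new Date(sessionTimeMillis));
        }
    }
}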
createAndStartCaptureSession first gathers the session parameters, including the session time, the picture title and the location, then asks the capture session manager to create a CaptureSession, and finally starts it. With the session created and started, we return to the capture flow above: mCamera.takePicture calls OneCameraImpl's takePicture method to take the picture:
//OneCameraImpl.java
@Override
public void takePicture(final PhotoCaptureParameters params, final CaptureSession session) {
    ...
    // Broadcast a not-ready state until this capture has returned.
    broadcastReadyState(false);
    // Create a Runnable that performs the capture.
    mTakePictureRunnable = new Runnable() {
        @Override
        public void run() {
            // Take the picture.
            takePictureNow(params, session);
        }
    };
    // Register the callback (analyzed later); it is CaptureModule, which implements PictureCallback.
    mLastPictureCallback = params.callback;
    mTakePictureStartMillis = SystemClock.uptimeMillis();
    // If autofocus is still scanning, defer the capture until the lens stops.
    if (mLastResultAFState == AutoFocusState.ACTIVE_SCAN) {
        mTakePictureWhenLensIsStopped = true;
    } else {
        // Take the picture now.
        takePictureNow(params, session);
    }
}
takePicture first broadcasts a not-ready state, then registers the picture callback, and checks whether autofocus is still actively scanning. If it is, mTakePictureWhenLensIsStopped is set to true, so the capture is deferred until the lens stops; otherwise OneCameraImpl's takePictureNow is called to issue the capture request:
//OneCameraImpl.java
public void takePictureNow(PhotoCaptureParameters params, CaptureSession session) {
    long dt = SystemClock.uptimeMillis() - mTakePictureStartMillis;
    try {
        // Build a still-capture (JPEG) request.
        CaptureRequest.Builder builder = mDevice.createCaptureRequest(
                CameraDevice.TEMPLATE_STILL_CAPTURE);
        builder.setTag(RequestTag.CAPTURE);
        addBaselineCaptureKeysToRequest(builder);

        // Enable lens-shading correction for even better DNGs.
        if (sCaptureImageFormat == ImageFormat.RAW_SENSOR) {
            builder.set(CaptureRequest.STATISTICS_LENS_SHADING_MAP_MODE,
                    CaptureRequest.STATISTICS_LENS_SHADING_MAP_MODE_ON);
        } else if (sCaptureImageFormat == ImageFormat.JPEG) {
            builder.set(CaptureRequest.JPEG_QUALITY, JPEG_QUALITY);
            builder.set(CaptureRequest.JPEG_ORIENTATION, CameraUtil
                    .getJpegRotation(params.orientation, mCharacteristics));
        }
        // Target surface used for the preview.
        builder.addTarget(mPreviewSurface);
        // Target surface that receives the captured image.
        builder.addTarget(mCaptureImageReader.getSurface());
        CaptureRequest request = builder.build();

        if (DEBUG_WRITE_CAPTURE_DATA) {
            final String debugDataDir = makeDebugDir(params.debugDataFolder,
                    "normal_capture_debug");
            Log.i(TAG, "Writing capture data to: " + debugDataDir);
            CaptureDataSerializer.toFile("Normal Capture", request,
                    new File(debugDataDir, "capture.txt"));
        }
        // Issue the capture; mCaptureCallback is the callback.
        mCaptureSession.capture(request, mCaptureCallback, mCameraHandler);
    } catch (CameraAccessException e) {
        Log.e(TAG, "Could not access camera for still image capture.");
        broadcastReadyState(true);
        params.callback.onPictureTakingFailed();
        return;
    }
    synchronized (mCaptureQueue) {
        mCaptureQueue.add(new InFlightCapture(params, session));
    }
}
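The request targets both the preview surface and mCaptureImageReader.getSurface(). The ImageReader setup itself is not shown in this article; a minimal sketch of how such a still-capture ImageReader is typically created (the size, maxImages count and handler are assumptions for illustration, not the OneCameraImpl code) is:

// Minimal sketch of a still-capture ImageReader like the one behind
// mCaptureImageReader.getSurface(); an illustration, not the OneCameraImpl source.
import android.graphics.ImageFormat;
import android.media.Image;
import android.media.ImageReader;
import android.os.Handler;
import android.view.Surface;

class StillImageReaderSketch {
    static ImageReader createCaptureImageReader(int width, int height, Handler handler) {
        // maxImages = 2: one image being consumed while another is in flight.
        ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 2);
        reader.setOnImageAvailableListener(r -> {
            // Each completed still capture delivers one Image here.
            Image image = r.acquireNextImage();
            if (image != null) {
                // Hand the image off for saving, then close it to release the buffer.
                image.close();
            }
        }, handler);
        return reader;
    }

    // This Surface is what gets added to the capture request via builder.addTarget(...).
    static Surface captureSurface(ImageReader reader) {
        return reader.getSurface();
    }
}

In OneCameraImpl the Image delivered by this listener is paired with the TotalCaptureResult from the capture callback before the JPEG is saved.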
As with preview, takePictureNow communicates with the camera through a CaptureRequest; the capture is issued via the session's capture method, with mCaptureCallback registered as the capture callback:
//CameraCaptureSessionImpl.java
@Override
public synchronized int capture(CaptureRequest request, CaptureCallback callback,
        Handler handler) throws CameraAccessException {
    ...
    handler = checkHandler(handler, callback);
    return addPendingSequence(mDeviceImpl.capture(request,
            createCaptureCallbackProxy(handler, callback), mDeviceHandler));
}
The code is similar to the preview case: the request is added to the set of pending request sequences. Now let us look at the CaptureCallback:
//OneCameraImpl.java
private final CameraCaptureSession.CaptureCallback mCaptureCallback =
        new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureStarted(CameraCaptureSession session, CaptureRequest request,
            long timestamp, long frameNumber) {
        // Similar to preview.
        if (request.getTag() == RequestTag.CAPTURE && mLastPictureCallback != null) {
            mLastPictureCallback.onQuickExpose();
        }
    }
    ...
    @Override
    public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request,
            TotalCaptureResult result) {
        autofocusStateChangeDispatcher(result);
        if (result.get(CaptureResult.CONTROL_AF_STATE) == null) {
            // Check the autofocus state.
            AutoFocusHelper.checkControlAfState(result);
        }
        ...
        if (request.getTag() == RequestTag.CAPTURE) {
            InFlightCapture capture = null;
            synchronized (mCaptureQueue) {
                if (mCaptureQueue.getFirst().setCaptureResult(result).isCaptureComplete()) {
                    capture = mCaptureQueue.removeFirst();
                }
            }
            if (capture != null) {
                // The capture has finished.
                OneCameraImpl.this.onCaptureCompleted(capture);
            }
        }
        super.onCaptureCompleted(session, request, result);
    }
    ...
};
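InFlightCapture itself is not listed in this article. Conceptually it pairs the Image delivered by the ImageReader with the TotalCaptureResult delivered by this callback, and a capture only counts as complete once both halves are present. A simplified sketch of that idea (an assumption about its shape, not the exact AOSP class):

// Simplified sketch of the pairing performed by InFlightCapture: an Image from the
// ImageReader plus a TotalCaptureResult from the capture callback. Not the AOSP class.
import android.hardware.camera2.TotalCaptureResult;
import android.media.Image;

class InFlightCaptureSketch {
    private Image image;
    private TotalCaptureResult result;

    synchronized InFlightCaptureSketch setImage(Image image) {
        this.image = image;
        return this;
    }

    synchronized InFlightCaptureSketch setCaptureResult(TotalCaptureResult result) {
        this.result = result;
        return this;
    }

    // Both the image data and the capture metadata must have arrived before saving can start.
    synchronized boolean isCaptureComplete() {
        return image != null && result != null;
    }
}

This is why onCaptureCompleted only removes the head of mCaptureQueue once setCaptureResult(result).isCaptureComplete() returns true.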
While the native layer processes the request, it invokes the corresponding callbacks on mCaptureCallback: when the capture starts it calls onCaptureStarted (analyzed in detail in the preview article), and when the capture finishes it calls onCaptureCompleted. There, the autofocus state is checked against the CaptureResult, and if the request's tag marks it as a capture, the head of the capture queue is examined: once that in-flight capture is complete it is removed from the queue, since the request has been handled, and OneCameraImpl's onCaptureCompleted is called to process it:
//OneCameraImpl.java
private void onCaptureCompleted(InFlightCapture capture) {
    if (sCaptureImageFormat == ImageFormat.RAW_SENSOR) {
        ...
        File dngFile = new File(RAW_DIRECTORY, capture.session.getTitle() + ".dng");
        writeDngBytesAndClose(capture.image, capture.totalCaptureResult,
                mCharacteristics, dngFile);
    } else {
        // Extract the JPEG bytes from the captured image.
        byte[] imageBytes = acquireJpegBytesAndClose(capture.image);
        // Save the JPEG picture.
        saveJpegPicture(imageBytes, capture.parameters, capture.session,
                capture.totalCaptureResult);
    }
    broadcastReadyState(true);
    // Invoke the callback.
    capture.parameters.callback.onPictureTaken(capture.session);
}
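acquireJpegBytesAndClose, which appears in the JPEG branch above, is not shown in this article either; for a JPEG capture it essentially copies the single plane of the android.media.Image into a byte array and closes the Image. A minimal sketch under that assumption (not the actual AOSP implementation):

// Minimal sketch of extracting JPEG bytes from an android.media.Image and closing it.
// This mirrors what acquireJpegBytesAndClose is assumed to do for ImageFormat.JPEG.
import android.media.Image;
import java.nio.ByteBuffer;

class JpegBytesSketch {
    static byte[] acquireJpegBytesAndClose(Image image) {
        try {
            // A JPEG Image has exactly one plane whose buffer holds the encoded bytes.
            ByteBuffer buffer = image.getPlanes()[0].getBuffer();
            byte[] bytes = new byte[buffer.remaining()];
            buffer.get(bytes);
            return bytes;
        } finally {
            // Always close the Image so the ImageReader can reuse the buffer.
            image.close();
        }
    }
}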
To summarize onCaptureCompleted: the image data is first extracted from the completed capture, saveJpegPicture is then called to persist the resulting JPEG bytes, and finally the onPictureTaken method of the callback carried in the parameters (CaptureModule, which as noted during parameter initialization implements the PictureCallback interface) is invoked. Let us first look at saveJpegPicture:
//OneCameraImpl.java
private void saveJpegPicture(byte[] jpegData, final PhotoCaptureParameters captureParams,
        CaptureSession session, CaptureResult result) {
    ...
    ListenableFuture<Optional<Uri>> futureUri =
            session.saveAndFinish(jpegData, width, height, rotation, exif);
    Futures.addCallback(futureUri, new FutureCallback<Optional<Uri>>() {
        @Override
        public void onSuccess(Optional<Uri> uriOptional) {
            captureParams.callback.onPictureSaved(uriOptional.orNull());
        }

        @Override
        public void onFailure(Throwable throwable) {
            captureParams.callback.onPictureSaved(null);
        }
    });
}
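session.saveAndFinish does considerably more than a raw file write (placeholder updates, media insertion, returning a content Uri), but at its core it has to persist the encoded JPEG bytes. A very reduced sketch of that core step, with a hypothetical file name scheme, looks like this:

// Reduced sketch of the disk write that session.saveAndFinish() ultimately delegates to.
// The directory and naming are assumptions for illustration, not the AOSP CaptureSession code.
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

class JpegSaveSketch {
    // Writes the encoded JPEG data into the given directory and returns the resulting file.
    static File writeJpeg(byte[] jpegData, File directory, String title) throws IOException {
        File out = new File(directory, title + ".jpg");
        try (FileOutputStream fos = new FileOutputStream(out)) {
            fos.write(jpegData);
        }
        return out;
    }
}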
Once the save completes, the callback's onPictureSaved method is invoked with the resulting Uri (or null on failure), so we turn to CaptureModule's onPictureSaved:
//CaptureModule.java
@Override
public void onPictureSaved(Uri uri) {
    mAppController.notifyNewMedia(uri);
}
mAppController is implemented by CameraActivity, so we look at its notifyNewMedia method:
//CameraActivity.java
@Override
public void notifyNewMedia(Uri uri) {
    ...
    if (FilmstripItemUtils.isMimeTypeVideo(mimeType)) {
        // A video was recorded.
        sendBroadcast(new Intent(CameraUtil.ACTION_NEW_VIDEO, uri));
        newData = mVideoItemFactory.queryContentUri(uri);
        ...
    } else if (FilmstripItemUtils.isMimeTypeImage(mimeType)) {
        // A picture was captured.
        CameraUtil.broadcastNewPicture(mAppContext, uri);
        newData = mPhotoItemFactory.queryContentUri(uri);
        ...
    } else {
        return;
    }
    new AsyncTask<FilmstripItem, Void, FilmstripItem>() {
        @Override
        protected FilmstripItem doInBackground(FilmstripItem... params) {
            FilmstripItem data = params[0];
            MetadataLoader.loadMetadata(getAndroidContext(), data);
            return data;
        }
        ...
    }.executeOnExecutor(AsyncTask.SERIAL_EXECUTOR, newData);
}
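CameraUtil.broadcastNewPicture is assumed to announce the freshly saved photo to other apps (for example gallery sync) via the legacy new-picture broadcast; a hedged sketch of that idea, not the exact AOSP implementation:

// Sketch of what CameraUtil.broadcastNewPicture is assumed to do: announce the new
// photo Uri via the legacy new-picture broadcast action.
import android.content.Context;
import android.content.Intent;
import android.net.Uri;

class NewPictureBroadcastSketch {
    // "android.hardware.action.NEW_PICTURE" is the value of the (deprecated)
    // android.hardware.Camera.ACTION_NEW_PICTURE constant.
    private static final String ACTION_NEW_PICTURE = "android.hardware.action.NEW_PICTURE";

    static void broadcastNewPicture(Context context, Uri uri) {
        context.sendBroadcast(new Intent(ACTION_NEW_PICTURE, uri));
    }
}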
Back in notifyNewMedia, two kinds of media are handled: video and image. Since we are analyzing captured image data, the corresponding photo item is first queried through mPhotoItemFactory using the Uri passed into the callback, and an asynchronous task is then started to process it, loading its metadata via MetadataLoader.loadMetadata and returning the item. With that, the analysis of the capture flow is essentially complete; the sequence diagram of the whole capture flow is given below: