GLSurfaceView sets up an OpenGL environment for us. If we want to render the camera preview through a GLSurfaceView, we need some basic OpenGL knowledge — in particular, how to draw with the OpenGL ES API. The key points are covered below.
To draw on the GLSurfaceView I define a CustomRender class that implements the GLSurfaceView.Renderer interface. In its onDrawFrame method we can use OpenGL to draw the camera preview. Usually, besides the preview itself, we also need to draw watermarks, stickers, filters, and so on, so I abstract each piece of drawable content as a node. NodesRender manages these nodes, and we can add or remove content dynamically as needed. frontBuffer is the texture backing a framebuffer; NodesRender renders all nodes into that framebuffer.
```kotlin
override fun onDrawFrame(gl: GL10?) {
    // Render all nodes into the front buffer
    val frontBuffer = nodesRender.render()
    // Clear the screen
    GLES20.glViewport(0, 0, width, height)
    GLES20.glClearColor(1.0f, 1.0f, 1.0f, 1.0f)
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT or GLES20.GL_DEPTH_BUFFER_BIT)
    // Lazily create the OpenGL program used for display
    if (displayProgram == null) {
        displayProgram = TextureProgram()
    }
    frontBuffer ?: return
    // Transform the display position to support scaling and panning of the preview
    matrixHandler.applyVertexMatrix(OpenGlUtils.BufferUtils.SQUARE_VERTICES, displayPosition)
    // Draw
    displayProgram!!.draw(
        frontBuffer.textureId,     // texture to draw
        displayPosition,           // vertex positions
        displayTextureCoordinate,  // texture coordinate range
        matrixValues               // identity matrix: no extra transform applied here
    )
}
```
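The node abstraction can be pictured with a small, non-Android sketch. Everything here is illustrative (this `Node` interface and `NodesRender` class are simplified stand-ins I made up, with a string list standing in for the framebuffer, not the project's actual classes): each node draws into a shared offscreen target, and `render()` returns that target for final display.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified version of the node abstraction described above.
interface Node {
    void draw(List<String> target); // stand-in for drawing into a framebuffer
}

class NodesRender {
    private final List<Node> nodes = new ArrayList<>();
    private final List<String> frontBuffer = new ArrayList<>(); // stand-in for the FBO texture

    void addNode(Node n) {
        nodes.add(n);
    }

    List<String> render() {
        frontBuffer.clear();
        // Nodes draw in insertion order: preview first, overlays on top
        for (Node n : nodes) {
            n.draw(frontBuffer);
        }
        return frontBuffer;
    }
}

public class NodesRenderSketch {
    public static void main(String[] args) {
        NodesRender render = new NodesRender();
        render.addNode(t -> t.add("preview"));
        render.addNode(t -> t.add("watermark"));
        System.out.println(render.render()); // prints [preview, watermark]
    }
}
```

The point of the indirection is that adding a sticker or filter later is just another `addNode` call; the display path in onDrawFrame never changes.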
What does the OpenGL program in TextureProgram look like? First, its vertex and fragment shaders. Both are very simple; the only thing to note is that the position coordinate is transformed by an MVP matrix. Here we pass the identity matrix, which is equivalent to no transform. If our positions were defined in the View coordinate system rather than in OpenGL's normalized range, we would need to build the MVP matrix from the actual viewport size.
""" attribute vec2 vPosition; attribute vec2 vInputTextureCoordinate; uniform mat4 mvpMatrix; varying vec2 vTextureCoordinate; void main(){ gl_Position = mvpMatrix * vec4(vPosition,0.0,1.0); vTextureCoordinate = vInputTextureCoordinate; } """, """ precision mediump float; uniform sampler2D inputTexture; varying vec2 vTextureCoordinate; void main(){ gl_FragColor = texture2D(inputTexture, vTextureCoordinate); } """
How does matrixHandler implement scaling and panning? It does so by scaling and translating a display rectangle. That rectangle is computed in the updateVertexMatrix method, whose inputs are the screen width/height and the width/height of the texture being displayed. From these it derives both a fit-center rectangle and a fill-center rectangle, and we can choose whichever suits the use case. The details are in the code below.
```java
protected void updateVertexMatrix(int screenWidth, int screenHeight,
                                  int sourceWidth, int sourceHeight) {
    if (screenWidth <= 0 || screenHeight <= 0 || sourceWidth <= 0 || sourceHeight <= 0) {
        return;
    }
    boolean isLandscape = sourceRotate % 180 != 0;
    screenRectF = new RectF(0, 0, screenWidth, screenHeight);
    sourceRectF = new RectF(0, 0,
            isLandscape ? sourceHeight : sourceWidth,
            isLandscape ? sourceWidth : sourceHeight);
    if (displayRectF == null) {
        displayRectF = new RectF(0, 0, screenWidth, screenHeight);
    }
    minimumScaleSourceRectF = new RectF();
    maximumScaleSourceRectF = new RectF();
    float scaleH = displayRectF.height() / sourceRectF.height();
    float scaleW = displayRectF.width() / sourceRectF.width();
    minimumRealScale = scaleH < scaleW ? scaleH : scaleW;
    maximumRealScale = scaleH > scaleW ? scaleH : scaleW;

    // Fit-center rect: center the source, then scale by the smaller factor
    Matrix matrix = new Matrix();
    matrix.setTranslate(displayRectF.centerX() - sourceRectF.centerX(),
            displayRectF.centerY() - sourceRectF.centerY());
    matrix.postScale(minimumRealScale, minimumRealScale,
            displayRectF.centerX(), displayRectF.centerY());
    matrix.mapRect(minimumScaleSourceRectF, sourceRectF);

    // Fill-center rect: same, but with the larger factor
    matrix.reset();
    matrix.setTranslate(displayRectF.centerX() - sourceRectF.centerX(),
            displayRectF.centerY() - sourceRectF.centerY());
    matrix.postScale(maximumRealScale, maximumRealScale,
            displayRectF.centerX(), displayRectF.centerY());
    matrix.mapRect(maximumScaleSourceRectF, sourceRectF);

    currentScaleRectF = new RectF(minimumScaleSourceRectF);
    currentScale = minimumRealScale;
    initRectFlag = true;
    updateMatrix();
}
```
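The heart of updateVertexMatrix is the pair of scale factors: the smaller one letterboxes the source inside the display rect (fit center), the larger one covers the display rect completely (fill center). Stripped of the Android Matrix/RectF plumbing, the arithmetic reduces to the following standalone sketch (class and method names here are mine, for illustration only):

```java
public class CenterScales {
    // Returns {fitScale, fillScale} for a source of (srcW, srcH)
    // displayed inside a rect of (dispW, dispH)
    static float[] scales(float dispW, float dispH, float srcW, float srcH) {
        float scaleW = dispW / srcW;
        float scaleH = dispH / srcH;
        return new float[] { Math.min(scaleW, scaleH), Math.max(scaleW, scaleH) };
    }

    // Size of the source once scaled by s (it stays centered on the display rect)
    static float[] scaledSize(float srcW, float srcH, float s) {
        return new float[] { srcW * s, srcH * s };
    }

    public static void main(String[] args) {
        // A square 1080x1080 source on a 1080x1920 portrait screen
        float[] s = scales(1080, 1920, 1080, 1080);
        float[] fit = scaledSize(1080, 1080, s[0]);   // fits width, letterboxed vertically
        float[] fill = scaledSize(1080, 1080, s[1]);  // fills height, cropped horizontally
        System.out.println("fit=" + fit[0] + "x" + fit[1]
                + " fill=" + fill[0] + "x" + fill[1]);
    }
}
```

For the square-source example, fit center yields scale 1.0 (the source is shown whole) while fill center yields 1920/1080 ≈ 1.78 (the sides are cropped).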
With the display rectangle computed, we next need to update the vertex transform matrix so that the rectangle maps correctly onto OpenGL positions. The updateMatrix() method rebuilds this matrix, and applyVertexMatrix then applies it to the vertex coordinates to control where content appears. When a scale or pan gesture occurs, we scale or translate the currentScaleRectF rectangle and call updateMatrix() again.
```java
private void updateMatrix() {
    vertexMatrixLock.lock();
    try {
        float scaleX = currentScaleRectF.width() / screenRectF.width();
        float scaleY = currentScaleRectF.height() / screenRectF.height();
        applyVertexMatrix.reset();
        applyVertexMatrix.setScale(scaleX, scaleY);
        applyVertexMatrix.postTranslate(
                (currentScaleRectF.centerX() - screenRectF.centerX()) * 2 / screenRectF.width(),
                -(currentScaleRectF.centerY() - screenRectF.centerY()) * 2 / screenRectF.height());
        needUpdateVertexMatrix = true;
    } finally {
        vertexMatrixLock.unlock();
    }
}
```
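updateMatrix maps the display rectangle into normalized device coordinates: the scale is the rect's size over the screen's, and the translation doubles the center offset (NDC spans 2 units per axis) with y negated because the View y-axis points down while the OpenGL y-axis points up. A self-contained check of that arithmetic (names here are illustrative, not the project's):

```java
public class NdcMapping {
    // Describes where a display rect of size (rectW, rectH) centered at
    // (rectCx, rectCy) lands in NDC; returns {scaleX, scaleY, translateX, translateY}
    static float[] ndcTransform(float rectCx, float rectCy, float rectW, float rectH,
                                float screenW, float screenH) {
        float scaleX = rectW / screenW;
        float scaleY = rectH / screenH;
        // NDC is 2 units wide/high, hence the factor 2; y is flipped
        // because View-system y grows downward
        float tx = (rectCx - screenW / 2f) * 2f / screenW;
        float ty = -(rectCy - screenH / 2f) * 2f / screenH;
        return new float[] { scaleX, scaleY, tx, ty };
    }

    public static void main(String[] args) {
        // A 540x960 rect occupying the top-left quadrant of a 1080x1920 screen
        float[] t = ndcTransform(270, 480, 540, 960, 1080, 1920);
        // -> 0.5 0.5 -0.5 0.5 : half size, shifted left and up in NDC
        System.out.println(t[0] + " " + t[1] + " " + t[2] + " " + t[3]);
    }
}
```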
Since the camera needs a Surface to receive captured frames, let's look at how to create a Surface from a textureId. I define a CombineSurfaceTexture class that wraps a surface-backed texture: first create a textureId, then build a SurfaceTexture from it, then build a Surface from the SurfaceTexture. I use GLSurfaceView.RENDERMODE_WHEN_DIRTY here, so the OnFrameAvailableListener must notify the GLSurfaceView to redraw. And when drawing this texture's content, we must actively call surfaceTexture.updateTexImage() to pull in the latest frame.
```kotlin
class CombineSurfaceTexture(
    width: Int,
    height: Int,
    orientation: Int,
    flipX: Boolean = false,
    flipY: Boolean = false,
    notify: () -> Unit = {}
) : BasicTexture(width, height, orientation, flipX, flipY) {

    private val surfaceTexture: SurfaceTexture
    val surface: Surface

    init {
        textureId = OpenGlUtils.createTexture()
        surfaceTexture = SurfaceTexture(textureId)
        surfaceTexture.setDefaultBufferSize(width, height)
        surfaceTexture.setOnFrameAvailableListener {
            notify.invoke()
        }
        surface = Surface(surfaceTexture)
    }

    fun update() {
        if (surface.isValid) {
            surfaceTexture.updateTexImage()
        }
    }

    override fun release() {
        super.release()
        surface.release()
        surfaceTexture.release()
    }
}
```
When drawing a Surface-backed texture, we must declare the input texture's type as samplerExternalOES.
""" attribute vec2 vPosition; attribute vec2 vInputTextureCoordinate; uniform mat4 mvpMatrix; varying vec2 vTextureCoordinate; void main(){ gl_Position = mvpMatrix * vec4(vPosition,0.0,1.0); vTextureCoordinate = vInputTextureCoordinate; } """, """ #extension GL_OES_EGL_image_external : require precision mediump float; uniform samplerExternalOES inputTexture; varying vec2 vTextureCoordinate; void main(){ gl_FragColor = texture2D(inputTexture, vTextureCoordinate); } """
Note that the Surface-backed texture is not drawn directly to the GLSurfaceView; it is first rendered into a framebuffer and only then to the GLSurfaceView. This indirection lays the groundwork for later features such as recording, filters, and watermarks.
How are switching between front and rear cameras and changing the preview resolution handled? As the code below shows, in both cases the texture used for the preview is released, and a new texture and Surface are created. Since drawing is encapsulated in nodes, this amounts to replacing the preview node.
```kotlin
switchCamera.setOnClickListener {
    nodesRender.runInRender {
        val cameraId = when (cameraHolder.cameraId) {
            CAMERA_REAR -> CAMERA_FRONT
            else -> CAMERA_REAR
        }
        cameraHolder.cameraId = cameraId
        updatePreviewNode(
            cameraHolder.previewSizes.first().width,
            cameraHolder.previewSizes.first().height
        )
        cameraHolder.setSurface(cameraPreviewNode!!.combineSurfaceTexture.surface)
            .invalidate()
    }
}

previewSize.setOnClickListener {
    cameraHolder?.let {
        it.previewSizes?.let { sizes ->
            val builder = AlertDialog.Builder(this@MainActivity)
            val sizesString: Array<String> = Array(sizes.size) { "" }
            sizes.forEachIndexed { index, item ->
                sizesString[index] = "${item.width}*${item.height}"
            }
            builder.setItems(sizesString) { _, index ->
                val size = sizesString[index].split("*")
                val width = size[0].toInt()
                val height = size[1].toInt()
                nodesRender.runInRender {
                    updatePreviewNode(width, height)
                    cameraHolder.setSurface(cameraPreviewNode!!.combineSurfaceTexture.surface)
                        .invalidate()
                }
            }
            builder.create().show()
        }
    }
}
```