The example is the TFMediaPlayer project, a live-stream player I wrote following ijkPlayer. Running it requires the compiled ffmpeg libraries; a copy is stored on the cloud drive, extraction code: vjce. The OpenGL ES playback code lives in the OpenGLES folder.
Working through LearnOpenGL up to the point where you can use textures is enough background.
Playing video just means showing pictures one after another, like a frame animation. Decoding a video frame yields a chunk of memory in some format, and that data holds all the color information for one picture, e.g. yuv420p. The article 圖文詳解YUV420數據格式 ("YUV420 data format, illustrated") explains this well.
YUV and RGB are both color spaces. My understanding: a color space is an agreed-upon arrangement of color values. In RGB, for example, the red, green, and blue components are laid out one after another, each typically one byte with a value of 0-255.
YUV420p puts the Y, U, and V components in three separate layers, like YYYYUUVV: all the Y samples sit together, whereas RGB interleaves as RGBRGBRGB. Formats where each component is stored contiguously are called **planar** formats. The 420 scheme pairs every four Y samples with a single pair of UV samples, which saves space.
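To make the layout concrete, here is a minimal C sketch (function and variable names are mine; it assumes even dimensions and tightly packed rows) that locates the three planes inside one yuv420p buffer and fetches the samples for one pixel:

```c
#include <stdint.h>

// Offsets of the three planes inside one tightly packed yuv420p frame,
// and the samples belonging to pixel (col, row).
void yuv420pSample(const uint8_t *data, int width, int height, int col, int row) {
    const uint8_t *yPlane = data;                                // width * height bytes
    const uint8_t *uPlane = yPlane + width * height;             // (width/2) * (height/2) bytes
    const uint8_t *vPlane = uPlane + (width / 2) * (height / 2); // (width/2) * (height/2) bytes

    uint8_t Y = yPlane[row * width + col];                   // one Y per pixel
    uint8_t U = uPlane[(row / 2) * (width / 2) + (col / 2)]; // one U per 2x2 pixel block
    uint8_t V = vPlane[(row / 2) * (width / 2) + (col / 2)]; // one V per 2x2 pixel block
    (void)Y; (void)U; (void)V; // silence unused-variable warnings in this sketch
}
```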
To display a YUV420p image, the yuv data has to be converted to rgba, because OpenGL only outputs rgba.
The OpenGL part follows the same logic on every platform, so readers not on iOS can skip this section.
Displaying with a framebuffer:
Override `layerClass` so the view is backed by a `CAEAGLLayer`:

```objc
+ (Class)layerClass {
    return [CAEAGLLayer class];
}
```
```objc
- (BOOL)setupOpenGLContext {
    _renderLayer = (CAEAGLLayer *)self.layer;
    _renderLayer.opaque = YES;
    _renderLayer.contentsScale = [UIScreen mainScreen].scale;
    _renderLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                                       [NSNumber numberWithBool:NO], kEAGLDrawablePropertyRetainedBacking,
                                       kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat,
                                       nil];

    _context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
    //_context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    if (!_context) {
        NSLog(@"alloc EAGLContext failed!");
        return false;
    }
    EAGLContext *preContex = [EAGLContext currentContext];
    if (![EAGLContext setCurrentContext:_context]) {
        NSLog(@"set current EAGLContext failed!");
        return false;
    }
    [self setupFrameBuffer];
    [EAGLContext setCurrentContext:preContex];
    return true;
}
```
- `opaque` is set to YES to skip layer blending and the performance cost it brings.
- `contentsScale` follows the main screen's scale, so the view adapts across devices.
- `kEAGLDrawablePropertyRetainedBacking` set to YES would keep the rendered contents around after presentation. We don't need that, since a video frame is useless once shown, so the feature is turned off to avoid needless cost.

With this context created and made the current context, the OpenGL calls in the drawing code take effect on it, and it can deliver its output to where it is needed.
```objc
- (void)setupFrameBuffer {
    glGenFramebuffers(1, &_frameBuffer);   // glGenFramebuffers, not glGenBuffers
    glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);

    glGenRenderbuffers(1, &_colorBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _colorBuffer);
    [_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_renderLayer];
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _colorBuffer);

    GLint width, height;
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
    _bufferSize.width = width;
    _bufferSize.height = height;
    glViewport(0, 0, _bufferSize.width, _bufferSize.height);

    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"failed to make complete framebuffer object %x", status);
    }
}
```
The line `[_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_renderLayer];` is the crucial one: it is what ties the renderbuffer, the context, and the layer together. Per Apple's documentation, the layer that does the displaying shares memory with the renderbuffer, which is how content rendered into the renderbuffer ends up on screen.

Rendering splits into two parts: data preparation before the first draw, and the per-frame draw loop.
The display logic with OpenGL: draw a rectangle, turn each decoded video frame into a texture, apply the texture to the rectangle, and show it. Since the drawn geometry never changes, the shaders and the data (VAO and so on) are all fixed: set them up once before the first draw and they never have to change.
```objc
if (!_renderConfiged) {
    [self configRenderData];
}
```
```objc
- (BOOL)configRenderData {
    if (_renderConfiged) {
        return true;
    }
    GLfloat vertices[] = {
        -1.0f,  1.0f, 0.0f,  0.0f, 0.0f, // left top
        -1.0f, -1.0f, 0.0f,  0.0f, 1.0f, // left bottom
         1.0f,  1.0f, 0.0f,  1.0f, 0.0f, // right top
         1.0f, -1.0f, 0.0f,  1.0f, 1.0f, // right bottom
    };
    // NSString *vertexPath = [[NSBundle mainBundle] pathForResource:@"frameDisplay" ofType:@"vs"];
    // NSString *fragmentPath = [[NSBundle mainBundle] pathForResource:@"frameDisplay" ofType:@"fs"];
    //_frameProgram = new TFOPGLProgram(std::string([vertexPath UTF8String]), std::string([fragmentPath UTF8String]));
    _frameProgram = new TFOPGLProgram(TFVideoDisplay_common_vs, TFVideoDisplay_yuv420_fs);

    glGenVertexArrays(1, &VAO);
    glBindVertexArray(VAO);

    glGenBuffers(1, &VBO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    // sizeof(GLfloat), not sizeof(GL_FLOAT): the latter measures the enum constant.
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), 0);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (void *)(3 * sizeof(GLfloat)));
    glEnableVertexAttribArray(1);

    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindVertexArray(0);

    // gen textures
    glGenTextures(TFMAX_TEXTURE_COUNT, textures);
    for (int i = 0; i < TFMAX_TEXTURE_COUNT; i++) {
        glBindTexture(GL_TEXTURE_2D, textures[i]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        // clamp both 2D axes, S and T (the original set R, which only applies to 3D textures)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    }

    _renderConfiged = YES;
    return YES;
}
```
Shader compilation and linking are handled inside the `TFOPGLProgram` class. First, the shaders:
```objc
const GLchar *TFVideoDisplay_common_vs = "        \n\
#version 300 es                                   \n\
                                                  \n\
layout (location = 0) in highp vec3 position;     \n\
layout (location = 1) in highp vec2 inTexcoord;   \n\
                                                  \n\
out highp vec2 texcoord;                          \n\
                                                  \n\
void main()                                       \n\
{                                                 \n\
    gl_Position = vec4(position, 1.0);            \n\
    texcoord = inTexcoord;                        \n\
}                                                 \n\
";
```
```objc
const GLchar *TFVideoDisplay_yuv420_fs = "        \n\
#version 300 es                                   \n\
precision highp float;                            \n\
                                                  \n\
in vec2 texcoord;                                 \n\
out vec4 FragColor;                               \n\
uniform lowp sampler2D yPlaneTex;                 \n\
uniform lowp sampler2D uPlaneTex;                 \n\
uniform lowp sampler2D vPlaneTex;                 \n\
                                                  \n\
void main()                                       \n\
{                                                 \n\
    // (1) y - 16 (2) rgb * 1.164                 \n\
    vec3 yuv;                                     \n\
    yuv.x = texture(yPlaneTex, texcoord).r;       \n\
    yuv.y = texture(uPlaneTex, texcoord).r - 0.5f;\n\
    yuv.z = texture(vPlaneTex, texcoord).r - 0.5f;\n\
                                                  \n\
    mat3 trans = mat3(1,      1,        1,        \n\
                      0,     -0.34414,  1.772,    \n\
                      1.402, -0.71414,  0);       \n\
                                                  \n\
    FragColor = vec4(trans * yuv, 1.0);           \n\
}                                                 \n\
";
```
The vertex shader just writes `gl_Position` and hands the texture coordinate to the fragment shader. The fragment shader is the important one, because the yuv-to-rgb conversion happens there.
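A note on the commented hint `(1) y - 16 (2) rgb * 1.164` inside the fragment shader: it refers to video-range sources, where Y occupies only 16-235 rather than 0-255. The shader as written assumes full-range yuv; for a video-range source, Y would first be offset and rescaled (my reading of the hint, not code from the project):

```
Y' = 1.164 * (Y - 16/255)
```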
Because yuv420p keeps the three components in separate planes, loading the whole yuv buffer as one texture would make the lookup painful: one texture coordinate has to fetch three components from three different places, and every fragment would have to do that index math. yuv420p looks like this:

```
YyYYYYYY
YYYYYYYY
uUUUvVVV
```

Say you want the color at coordinate (2,1): Y is at (2,1), but U is at (1,3) and V at (5,3). Worse, the aspect ratio changes the layout:

```
YyYYYYYY
YYYYYYYY
YyYYYYYY
YYYYYYYY
uUUUuUUU
vVVVvVVV
```

and now U and V are no longer on the same row.
So each component gets its own texture instead. The neat consequence is that all three textures can share the same texture coordinate:
```objc
glBindTexture(GL_TEXTURE_2D, textures[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, overlay->pixels[0]);
glGenerateMipmap(GL_TEXTURE_2D);

glBindTexture(GL_TEXTURE_2D, textures[1]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width/2, height/2, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, overlay->pixels[1]);
glGenerateMipmap(GL_TEXTURE_2D);

glBindTexture(GL_TEXTURE_2D, textures[2]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width/2, height/2, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, overlay->pixels[2]);
glGenerateMipmap(GL_TEXTURE_2D);
```
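One gotcha the snippet above doesn't show: `glTexImage2D` assumes 4-byte row alignment by default, so a plane whose row width isn't a multiple of 4 (easy to hit with the half-width u/v planes) comes out sheared. This isn't from the project's code, but the standard fix is to relax the unpack alignment before uploading:

```objc
// Rows of the u/v planes are width/2 bytes; if that isn't a multiple of 4,
// the default GL_UNPACK_ALIGNMENT of 4 misreads the rows. Use byte alignment.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
```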
`overlay` is just a struct that packages a frame of video data; `pixels[0]`, `pixels[1]`, and `pixels[2]` point to the start of the y, u, and v planes respectively. The format is `GL_LUMINANCE`, i.e. a single color channel. Examples online use `GL_RED`, which I tried first, and it doesn't work here. Finally, the yuv-to-rgb conversion uses this formula:
```
R = Y + 1.402 (Cr-128)
G = Y - 0.34414 (Cb-128) - 0.71414 (Cr-128)
B = Y + 1.772 (Cb-128)
```
One more thing to watch: the difference between YUV and YCbCr. YCbCr is an offset version of YUV, which is why 0.5 is subtracted above (everything is mapped into the 0-1 range, so 128 becomes 0.5). That said, the right formula really depends on what was configured at encode time, i.e. how rgb was converted to yuv when the video was produced; as long as the two ends match, you're fine.
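A subtlety connecting that formula to the shader's `mat3`: GLSL matrix constructors fill column by column, so the `trans` written above actually represents, in conventional row form:

```
| R |   | 1    0         1.402   |   | Y       |
| G | = | 1   -0.34414  -0.71414 | * | U - 0.5 |
| B |   | 1    1.772     0       |   | V - 0.5 |
```

which matches the formula once 128 is rescaled to 0.5.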
The per-frame draw pass binds the framebuffer, feeds the three plane textures to the program, draws, and presents:

```objc
glBindFramebuffer(GL_FRAMEBUFFER, self.frameBuffer);
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);

_frameProgram->use();
_frameProgram->setTexture("yPlaneTex", GL_TEXTURE_2D, textures[0], 0);
_frameProgram->setTexture("uPlaneTex", GL_TEXTURE_2D, textures[1], 1);
_frameProgram->setTexture("vPlaneTex", GL_TEXTURE_2D, textures[2], 2);

glBindVertexArray(VAO);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

glBindRenderbuffer(GL_RENDERBUFFER, self.colorBuffer);
[self.context presentRenderbuffer:GL_RENDERBUFFER];
```
Drawing also has to stop when the app goes inactive, since OpenGL ES rendering in the background can get an iOS app killed. So we watch the notifications and set a flag:

```objc
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(catchAppResignActive) name:UIApplicationWillResignActiveNotification object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(catchAppBecomeActive) name:UIApplicationDidBecomeActiveNotification object:nil];

......

- (void)catchAppResignActive {
    _appIsUnactive = YES;
}

- (void)catchAppBecomeActive {
    _appIsUnactive = NO;
}

......

if (self.appIsUnactive) {
    return; // checked before drawing; just bail out
}
```
**Moving drawing to a background thread.** On iOS all of these OpenGL ES operations can be moved off the main thread, including the final `presentRenderbuffer`. The key is that context setup, data preparation (VAO, textures, etc.), and rendering all stay on one thread. Multi-threading them is possible too, but for video playback it isn't necessary; keeping everything on a single thread avoids needless overhead, and no locks are needed at all.
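A minimal sketch of the single-thread idea (the serial queue and block structure here are my illustration, not the project's exact code):

```objc
// One serial queue owns every GL call, so no locking is ever needed.
dispatch_queue_t renderQueue = dispatch_queue_create("videoDisplay.render", DISPATCH_QUEUE_SERIAL);

dispatch_async(renderQueue, ^{
    [EAGLContext setCurrentContext:self.context]; // the context is current only on this thread
    // ... upload the yuv planes, draw the quad ...
    [self.context presentRenderbuffer:GL_RENDERBUFFER];
});
```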
**Handling changes to the layer's frame**
```objc
- (void)layoutSubviews {
    [super layoutSubviews];
    // If the context has been set up and the layer's size has changed, realloc the render buffer.
    if (self.context && !CGSizeEqualToSize(self.layer.frame.size, self.bufferSize)) {
        _needReallocRenderBuffer = YES;
    }
}

...........

if (_needReallocRenderBuffer) {
    [self reallocRenderBuffer];
    _needReallocRenderBuffer = NO;
}

.........

- (void)reallocRenderBuffer {
    glBindRenderbuffer(GL_RENDERBUFFER, _colorBuffer);
    [_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_renderLayer];
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _colorBuffer);
    ......
}
```
`layoutSubviews` is where the render buffer needs reallocating, and it always runs on the main thread, so we only set a flag there; the actual reallocation happens on the render thread before the next draw. To sum up, the essentials are how the fragment shader reads the yuv components: `GL_LUMINANCE` single-channel textures, with the u and v textures half the width and height of the y texture.