Source: http://www.cnblogs.com/michaellfx/p/understanding_-VideoToolboxDemo.html
-VideoToolboxDemo is a simple sample application of VideoToolbox.
(1) Initializing FFmpeg
The SuperVideoFrameExtractor class provides two initializers,

`initWithVideo:usesTcp:`
`initWithVideo:`

one for network streams and one for local files.
To improve responsiveness, the demo walks the `AVFormatContext.streams[]` array by hand when reading a network stream; for local files it calls `av_find_best_stream()` to locate the desired audio/video streams. As the name suggests, `av_find_best_stream()` has to read more of the stream before it can return the "best" stream information, and since network throughput is unstable this call can sometimes take noticeably long.
In addition, when reading a network stream, the network layer must be initialized and the transport protocol set before the stream is opened; this is done with `avformat_network_init()` and `av_dict_set("rtsp_transport", "tcp")`.
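For reference, a minimal sketch of that network-stream setup (placeholder URL and variable names, not the project's exact code):

```objc
// Hedged sketch: open an RTSP stream over TCP with FFmpeg.
avformat_network_init();

AVDictionary *options = NULL;
av_dict_set(&options, "rtsp_transport", "tcp", 0);   // force TCP instead of UDP

AVFormatContext *pFormatCtx = NULL;
if (avformat_open_input(&pFormatCtx, "rtsp://example.com/stream", NULL, &options) != 0) {
    NSLog(@"Could not open stream");
}
av_dict_free(&options);
```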
(2) Video Duration and Current Playback Time
The total video duration can be read from `AVFormatContext`; the unit is microseconds.
```objc
// video.duration
- (double)duration {
    return (double)pFormatCtx->duration / AV_TIME_BASE /* 1000000 */;
}
```
The `duration` field is documented as follows:
```c
/**
 * Duration of the stream, in AV_TIME_BASE fractional
 * seconds. Only set this value if you know none of the individual stream
 * durations and also do not set any of them. This is deduced from the
 * AVStream values if not set.
 *
 * Demuxing only, set by libavformat.
 */
```
The current playback time is computed from `AVPacket.pts` and `AVStream.time_base`:
```objc
- (double)currentTime {
    AVRational timeBase = pFormatCtx->streams[videoStream]->time_base;
    return packet.pts * (double)timeBase.num / timeBase.den;
}
```
In practice, an `AVPacket` sometimes carries no pts at all; FFmpeg then propagates the presentation timestamp of the packet's first frame to the frames that follow, so this value is not always accurate.
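One common workaround (not in the project itself) is to fall back to the decoding timestamp when the pts is missing:

```objc
// Hedged sketch: fall back to dts when pts is absent (AV_NOPTS_VALUE).
int64_t ts = (packet.pts != AV_NOPTS_VALUE) ? packet.pts : packet.dts;
AVRational timeBase = pFormatCtx->streams[videoStream]->time_base;
double currentTime = ts * (double)timeBase.num / timeBase.den;
```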
(3) Refreshing the Video
A `CADisplayLink` calls the `displayLinkCallback:` method 60 times per second to refresh the video. The method reads the presentation timestamps (pts) and the array of decoded image buffers; once enough timestamps have accumulated (`if ([self.presentationTimes count] == 3)`) it signals the decoder to continue, and whenever an image buffer is available it converts it to an image and shows it in a UIImageView.
```objc
- (void)displayLinkCallback:(CADisplayLink *)sender {
    if ([self.outputFrames count] && [self.presentationTimes count]) {
        CVImageBufferRef imageBuffer = NULL;
        NSNumber *insertionIndex = nil;
        id imageBufferObject = nil;
        @synchronized(self) {
            insertionIndex = [self.presentationTimes firstObject];
            imageBufferObject = [self.outputFrames firstObject];
            imageBuffer = (__bridge CVImageBufferRef)imageBufferObject;
        }
        @synchronized(self) {
            if (imageBufferObject) {
                [self.outputFrames removeObjectAtIndex:0];
            }
            if (insertionIndex) {
                [self.presentationTimes removeObjectAtIndex:0];
                if ([self.presentationTimes count] == 3) {
                    NSLog(@"====== start ======");
                    dispatch_semaphore_signal(self.bufferSemaphore);
                }
            }
        }
        if (imageBuffer) {
            NSLog(@"====== show ====== %lu", (unsigned long)self.presentationTimes.count);
            [self displayImage:imageBuffer];
        }
    }
}
```
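For context, such a display link is typically created along these lines (a sketch; the project's own setup may differ in detail):

```objc
// Hedged sketch: drive displayLinkCallback: from the main run loop at display rate.
CADisplayLink *displayLink =
    [CADisplayLink displayLinkWithTarget:self
                                selector:@selector(displayLinkCallback:)];
[displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
```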
(4) Converting a CVImageBuffer to a UIImage
The conversion follows the flowchart shown at the beginning of this section (figure not reproduced here) and is implemented by the `displayImage:` method.
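Since neither the flowchart nor the method body is reproduced here, a common way to convert a BGRA pixel buffer looks roughly like this (a sketch, not necessarily the project's exact `displayImage:` implementation):

```objc
// Hedged sketch: CVPixelBuffer (kCVPixelFormatType_32BGRA) -> UIImage.
- (UIImage *)imageFromPixelBuffer:(CVImageBufferRef)imageBuffer {
    CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];

    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
    return image;
}
```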
(5) Grabbing the Current Frame as an Image
When decoding with VideoToolbox the image could be produced directly from the CVPixelBuffer; this project, however, implements the feature with FFmpeg software decoding.
A. Converting an AVFrame to an AVPicture
```objc
sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0,
          pCodecCtx->height, picture.data, picture.linesize);
```
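For completeness, `img_convert_ctx` and `picture` are typically prepared roughly as follows (a sketch; the project's own setup may differ, and it uses the old AVPicture API to match `avpicture_free` later on):

```objc
// Hedged sketch: allocate the RGB picture and the scaler context used above.
AVPicture picture;
avpicture_alloc(&picture, AV_PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);

struct SwsContext *img_convert_ctx =
    sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
                   pCodecCtx->width, pCodecCtx->height, AV_PIX_FMT_RGB24,
                   SWS_BICUBIC, NULL, NULL, NULL);
```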
B. Saving the image to flash storage in PPM format (optional)
```objc
- (void)savePicture:(AVPicture)pict width:(int)width height:(int)height index:(int)iFrame {
    FILE *pFile;
    NSString *fileName;
    int y;

    fileName = [Utilities documentsPath:[NSString stringWithFormat:@"image%04d.ppm", iFrame]];
    // Open file
    NSLog(@"write image file: %@", fileName);
    pFile = fopen([fileName cStringUsingEncoding:NSASCIIStringEncoding], "wb");
    if (pFile == NULL)
        return;

    // Write header
    fprintf(pFile, "P6\n%d %d\n255\n", width, height);

    // Write pixel data
    for (y = 0; y < height; y++)
        fwrite(pict.data[0] + y * pict.linesize[0], 1, width * 3, pFile);

    // Close file
    fclose(pFile);
}
```
C. Converting an AVPicture to a UIImage
```objc
- (UIImage *)imageFromAVPicture:(AVPicture)pict width:(int)width height:(int)height {
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CFDataRef data = CFDataCreateWithBytesNoCopy(kCFAllocatorDefault, pict.data[0],
                                                 pict.linesize[0] * height, kCFAllocatorNull);
    CGDataProviderRef provider = CGDataProviderCreateWithCFData(data);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImage = CGImageCreate(width, height, 8, 24, pict.linesize[0],
                                       colorSpace, bitmapInfo, provider,
                                       NULL, NO, kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGDataProviderRelease(provider);
    CFRelease(data);
    return image;
}
```
(6) Initializing Audio
Audio is initialized only when an audio stream exists. This is done by `-[SuperVideoFrameExtractor setupAudioDecoder]`, whose main tasks are allocating the buffer (`_audioBuffer`), opening the decoder, and initializing the AVAudioSession.
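A rough sketch of the AVAudioSession part of that setup might look like this (an assumption, not copied from the project; the buffer size is also assumed):

```objc
// Hedged sketch: activate an audio session for playback and allocate a decode buffer.
#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayback error:&error];
[session setActive:YES error:&error];
if (error) {
    NSLog(@"Failed to set up AVAudioSession: %@", error);
}

// Buffer used by the audio decode path (size is an assumption, not the project's value).
_audioBufferSize = 192000;
_audioBuffer = (uint8_t *)av_malloc(_audioBufferSize);
```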
(1) Video
The `play:` method starts a background thread for decoding; every time a valid video packet is read, VideoToolbox is called to decode it. In the VideoToolbox output callback, the resulting CVPixelBuffer is handed over via a delegate method, converted to an image, and displayed.
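A simplified sketch of such a read/decode loop (names such as `decodePacket:` are placeholders, not the project's exact API):

```objc
// Hedged sketch: read packets on a background queue and feed them to the decoder.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    AVPacket packet;
    while (av_read_frame(pFormatCtx, &packet) >= 0) {
        if (packet.stream_index == videoStream) {
            [self decodePacket:&packet];   // hand the NALU to VideoToolbox
        }
        av_free_packet(&packet);
    }
});
```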
(2) Audio
Audio output would be handled through AVAudioSession; this project does not actually implement audio output.
iOS 8 opened up the H.264 hardware codec interface, VideoToolbox.framework. The decoding workflow is:

- `VTDecompressionSessionCreate`: create a decompression session
- `VTDecompressionSessionDecodeFrame`: decode one video frame
- `VTDecompressionSessionInvalidate`: tear down the decompression session

Add VideoToolbox.framework to the project and `#include <VideoToolbox/VideoToolbox.h>` to start hardware-decoding work.
Before hardware-decoding H.264, VideoToolbox must be configured. Simply put, VideoToolbox can only decode effectively once it knows what the input looks like; our job is to give it the H.264 SPS (Sequence Parameter Set) and PPS (Picture Parameter Set) data, build a format description object from them, and create the decompression session from that.
(1) SPS and PPS
The H.264 SPS and PPS carry the parameters needed to initialize an H.264 decoder, including the encoding profile and level, the image width and height, the deblocking filter, and so on.
For H.264 data in an MP4 or MPEG-TS container, the SPS and PPS can be read from the AVCodecContext's `extradata`, for example:
```objc
uint8_t *data = pCodecCtx->extradata;
int size = pCodecCtx->extradata_size;
```
When receiving a raw H.264 Elementary Stream from the network, there are no separate SPS/PPS packets or frames; they are attached in front of an I-frame instead, typically stored as `00 00 00 01 SPS 00 00 00 01 PPS 00 00 00 01 I-frame`. The leading `00 00 ...` bytes are the start code; they are not part of the SPS/PPS content and serve only as delimiters, so they must be filtered out when creating the CMFormatDescription.
Because the VideoToolbox interface only accepts the MP4-style packaging, an H.264 stream received as an Elementary Stream must have its start code (3- or 4-byte header) replaced with a length (4-byte header): `00 00 01` or `00 00 00 01` => e.g. `00 00 80 00` (the length of the current frame). Every frame has to be processed this way.
The URL used in this project is `rtsp://192.168.2.73:1935/vod/sample.mp4`, and it is handled as follows:
```objc
for (int i = 0; i < size; i++) {
    if (i >= 3) {
        if (data[i] == 0x01 && data[i-1] == 0x00 && data[i-2] == 0x00 && data[i-3] == 0x00) {
            if (startCodeSPSIndex == 0) {
                startCodeSPSIndex = i;
            }
            if (i > startCodeSPSIndex) {
                startCodePPSIndex = i;
            }
        }
    }
}

spsLength = startCodePPSIndex - startCodeSPSIndex - 4;
ppsLength = size - (startCodePPSIndex + 1);

nalu_type = ((uint8_t)data[startCodeSPSIndex + 1] & 0x1F);
if (nalu_type == 7 /* Sequence parameter set (non-VCL) */) {
    spsData = [NSData dataWithBytes:&(data[startCodeSPSIndex + 1]) length:spsLength];
}

nalu_type = ((uint8_t)data[startCodePPSIndex + 1] & 0x1F);
if (nalu_type == 8 /* Picture parameter set (non-VCL) */) {
    ppsData = [NSData dataWithBytes:&(data[startCodePPSIndex + 1]) length:ppsLength];
}
```
As I understand it, because the source is an MP4 container, all that is needed here is to extract the SPS and PPS data.
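Note that MP4 `extradata` often carries no Annex B start codes at all: when its first byte is 1 it is an `avcC` box, whose fixed layout lets the SPS/PPS be read directly. A hedged sketch, assuming a single SPS and PPS (layout per the AVC file-format spec, not the project's code):

```objc
// Hedged sketch: extract the first SPS/PPS from avcC-formatted extradata.
if (size > 7 && data[0] == 1) {                    // avcC starts with configurationVersion = 1
    int spsLen = (data[6] << 8) | data[7];         // 16-bit SPS length after the SPS count byte
    NSData *sps = [NSData dataWithBytes:&data[8] length:spsLen];

    int ppsOffset = 8 + spsLen + 1;                // skip the numOfPictureParameterSets byte
    int ppsLen = (data[ppsOffset] << 8) | data[ppsOffset + 1];
    NSData *pps = [NSData dataWithBytes:&data[ppsOffset + 2] length:ppsLen];

    NSLog(@"sps %lu bytes, pps %lu bytes",
          (unsigned long)sps.length, (unsigned long)pps.length);
}
```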
(2) Creating the Video Format Description
A CMFormatDescription describes the basic properties of the video; it also appears as CMVideoFormatDescriptionRef (`typedef CMFormatDescriptionRef CMVideoFormatDescriptionRef;`). (Diagram not reproduced here.)
There are two APIs for creating the CMFormatDescriptionRef. This project uses `CMVideoFormatDescriptionCreateFromH264ParameterSets`, because the SPS and PPS have already been extracted above:
```objc
const uint8_t* const parameterSetPointers[2] = {
    (const uint8_t *)[spsData bytes],
    (const uint8_t *)[ppsData bytes]
};
const size_t parameterSetSizes[2] = { [spsData length], [ppsData length] };

status = CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault, 2,
                                                             parameterSetPointers,
                                                             parameterSetSizes, 4,
                                                             &videoFormatDescr);
```
The other API skips that processing and hands the extradata to VideoToolbox as an atom, letting it parse the parameter sets itself, e.g.:
```objc
// Note: CFMutableDictionarySetData / CFMutableDictionarySetObject are shorthand helpers
// (thin wrappers around CFDictionarySetValue), not CoreFoundation API.
CFMutableDictionaryRef atoms = CFDictionaryCreateMutable(NULL, 0,
                                                         &kCFTypeDictionaryKeyCallBacks,
                                                         &kCFTypeDictionaryValueCallBacks);
CFMutableDictionarySetData(atoms, CFSTR("avcC"), (uint8_t *)extradata, extradata_size);

CFMutableDictionaryRef extensions = CFDictionaryCreateMutable(NULL, 0,
                                                              &kCFTypeDictionaryKeyCallBacks,
                                                              &kCFTypeDictionaryValueCallBacks);
CFMutableDictionarySetObject(extensions, CFSTR("SampleDescriptionExtensionAtoms"), (CFTypeRef *)atoms);

CMVideoFormatDescriptionCreate(NULL, format_id, width, height, extensions, &videoFormatDescr);
```
(3) Creating the Decompression Session
Creating the decompression session requires supplying a callback so that the system can return the decoded data, status, and related information when a frame finishes decoding.
```objc
VTDecompressionOutputCallbackRecord callback;
callback.decompressionOutputCallback = didDecompress;
callback.decompressionOutputRefCon = (__bridge void *)self;
```
The callback prototype is:

```objc
void didDecompress(void *decompressionOutputRefCon,
                   void *sourceFrameRefCon,
                   OSStatus status,
                   VTDecodeInfoFlags infoFlags,
                   CVImageBufferRef imageBuffer,
                   CMTime presentationTimeStamp,
                   CMTime presentationDuration);
```

It is called once for every decoded video frame.
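A hedged sketch of what such a callback body typically does in this design, retaining each decoded frame and queuing it with its timestamp for the display link (the property names mirror those used earlier, but the code is not copied from the project):

```objc
// Hedged sketch: store each decoded frame and its pts for displayLinkCallback:.
void didDecompress(void *decompressionOutputRefCon, void *sourceFrameRefCon,
                   OSStatus status, VTDecodeInfoFlags infoFlags,
                   CVImageBufferRef imageBuffer, CMTime presentationTimeStamp,
                   CMTime presentationDuration) {
    if (status != noErr || imageBuffer == NULL) {
        return;
    }
    SuperVideoFrameExtractor *extractor =
        (__bridge SuperVideoFrameExtractor *)decompressionOutputRefCon;
    @synchronized (extractor) {
        // NSMutableArray retains the CVPixelBuffer when it is added.
        [extractor.outputFrames addObject:(__bridge id)imageBuffer];
        [extractor.presentationTimes addObject:@(CMTimeGetSeconds(presentationTimeStamp))];
    }
}
```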
CVImageBufferRef and CVPixelBufferRef are the same type (`typedef CVImageBufferRef CVPixelBufferRef;`): a pixel buffer is an image buffer whose pixels live in main memory. A CVPixelBuffer carries a description of the image, such as width, height, and pixel format type. (Diagram not reproduced here.)
Because creating and releasing CVPixelBuffers is expensive, Apple provides CVPixelBufferPool to manage them; behind the scenes it recycles CVPixelBuffers efficiently. The pool tracks each CVPixelBuffer's reference count: when it drops to zero, the buffer goes back into the reuse queue, and the next request for a CVPixelBuffer is served from that queue instead of allocating a new buffer.
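For illustration, a pool is created and drawn from roughly like this (a sketch; this project relies on the pool VideoToolbox manages internally rather than creating one explicitly, and the dimensions are placeholders):

```objc
// Hedged sketch: create a CVPixelBufferPool and draw buffers from it.
NSDictionary *pixelBufferAttributes = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
    (id)kCVPixelBufferWidthKey  : @(1280),
    (id)kCVPixelBufferHeightKey : @(720)
};

CVPixelBufferPoolRef pool = NULL;
CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL,
                        (__bridge CFDictionaryRef)pixelBufferAttributes, &pool);

CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pixelBuffer);
// ... use pixelBuffer ...
CVPixelBufferRelease(pixelBuffer);   // once unused, the buffer can be recycled by the pool
```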
Creating the session also takes a dictionary of decoding hints, such as whether the decoded buffers must be OpenGL ES compatible, or whether YUV should be converted to RGB (usually left unset: converting in OpenGL is more efficient, whereas having VideoToolbox convert costs both extra memory and CPU), for example:
```objc
NSDictionary *destinationImageBufferAttributes =
    [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithBool:NO], (id)kCVPixelBufferOpenGLESCompatibilityKey,
        [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
        nil];
```
With the preparation done, the decompression session can now be created:

```objc
VTDecompressionSessionCreate(kCFAllocatorDefault, videoFormatDescr, NULL,
                             (CFDictionaryRef)destinationImageBufferAttributes,
                             &callback, &session);
```

The same session is then used for every subsequent decode call.
(4) Decoding
A. Before decoding, copy the AVPacket data (the Network Abstraction Layer Unit, NALU) into a CMBlockBuffer:
```objc
int nalu_type = ((uint8_t)packet.data[startCodeIndex + 1] & 0x1F);
if (nalu_type == 1 /* Coded slice of a non-IDR picture (VCL) */ ||
    nalu_type == 5 /* Coded slice of an IDR picture (VCL) */) {
    CMBlockBufferCreateWithMemoryBlock(NULL, packet.data, packet.size,
                                       kCFAllocatorNull, NULL, 0, packet.size,
                                       0, &videoBlock);
}
```
CMBlockBuffer is the basic way to wrap an arbitrary block of Core Media data; nearly all compressed video data moving through the processing pipeline is wrapped in a CMBlockBuffer.
B. Replace the separator (start) code with a 4-byte length code (the length of the NAL unit, including its header byte):
```objc
int reomveHeaderSize = packet.size - 4;
const uint8_t sourceBytes[] = {
    (uint8_t)(reomveHeaderSize >> 24), (uint8_t)(reomveHeaderSize >> 16),
    (uint8_t)(reomveHeaderSize >> 8),  (uint8_t)reomveHeaderSize
};
status = CMBlockBufferReplaceDataBytes(sourceBytes, videoBlock, 0, 4);
```
C. Create a CMSampleBuffer from the CMBlockBuffer:
```objc
const size_t sampleSizeArray[] = { packet.size };
CMSampleBufferCreate(kCFAllocatorDefault, videoBlock, true, NULL, NULL,
                     videoFormatDescr, 1, 0, NULL /* a CMSampleTimingInfo can be passed here */,
                     1, sampleSizeArray, &sbRef);
```
A CMSampleBuffer wraps sampled data. For video it can wrap either a compressed or an uncompressed frame, and it ties together a CMTime (the sample's presentation time), a CMVideoFormatDescription (describing the data the CMSampleBuffer contains), and the payload itself: a CMBlockBuffer for a compressed frame, or for an uncompressed rasterized image either a CVPixelBuffer or a CMBlockBuffer. (Figure not reproduced here.)
D. Decode
VideoToolbox supports both synchronous and asynchronous decoding, selected through VTDecodeFrameFlags (`VTDecodeFrameFlags flags = kVTDecodeFrame_EnableAsynchronousDecompression;`); the default is synchronous.
With synchronous decoding, the system invokes our output callback before `VTDecompressionSessionDecodeFrame` returns. With asynchronous decoding the callback order is not guaranteed, so the caller has to reorder the frames itself.
```objc
VTDecompressionSessionDecodeFrame(session, sbRef, flags, &sbRef, &flagOut);
```
(5) Timing
The project contains a piece of unused code, shown below:
```objc
int32_t timeSpan = 90000;
CMSampleTimingInfo timingInfo;
timingInfo.presentationTimeStamp = CMTimeMake(0, timeSpan);
timingInfo.duration = CMTimeMake(3000, timeSpan);
timingInfo.decodeTimeStamp = kCMTimeInvalid;
```
Because the CMSampleBuffer built above carries only picture data and no timing information, extra timing has to be supplied at decode time.
CMTime is the basic description of time used by VideoToolbox. (Diagram not reproduced here.) Since a CMTime keeps growing and is awkward to control directly, Apple also provides the more controllable CMTimebase.
(6) Flushing Frames Still Being Decoded
```objc
/* Flush in-process frames. */
VTDecompressionSessionFinishDelayedFrames(session);
/* Block until our callback has been called with the last frame. */
VTDecompressionSessionWaitForAsynchronousFrames(session);
```
(7) Cleaning Up
```objc
/* Clean up. */
VTDecompressionSessionInvalidate(session);
CFRelease(session);
CFRelease(videoFormatDescr);
```
```objc
- (void)dumpPacketData {
    // Log dump
    int index = 0;
    NSString *tmp = [NSString new];
    for (int i = 0; i < packet.size; i++) {
        NSString *str = [NSString stringWithFormat:@" %.2X", packet.data[i]];
        if (i == 4) {
            NSString *header = [NSString stringWithFormat:@"%.2X", packet.data[i]];
            NSLog(@" header ====>> %@", header);
            if ([header isEqualToString:@"41"]) {
                NSLog(@"P Frame");
            }
            if ([header isEqualToString:@"65"]) {
                NSLog(@"I Frame");
            }
        }
        tmp = [tmp stringByAppendingString:str];
        index++;
        if (index == 16) {
            NSLog(@"%@", tmp);
            tmp = @"";
            index = 0;
        }
    }
}
```
```objc
// Free scaler
sws_freeContext(img_convert_ctx);
// Free RGB picture
avpicture_free(&picture);
// Free the packet that was allocated by av_read_frame
av_free_packet(&packet);
// Free the YUV frame
av_free(pFrame);
// Close the codec
if (pCodecCtx) avcodec_close(pCodecCtx);
// Close the video file
if (pFormatCtx) avformat_close_input(&pFormatCtx);
```
```objc
- (void)seekTime:(double)seconds {
    AVRational timeBase = pFormatCtx->streams[videoStream]->time_base;
    int64_t targetFrame = (int64_t)((double)timeBase.den / timeBase.num * seconds);
    avformat_seek_file(pFormatCtx, videoStream, targetFrame, targetFrame, targetFrame, AVSEEK_FLAG_FRAME);
    avcodec_flush_buffers(pCodecCtx);
}
```
Once set, the SPS and PPS apply to all subsequent NALUs; only updating them affects how new NALUs are decoded. Like OpenGL, the state persists once it has been set.
In an Elementary Stream, the SPS and PPS are carried as NALUs inside the stream itself, so they travel with it. MP4 instead extracts them and places them in the file header, which makes random access possible.
(Figure: SPS/PPS conversion between an Elementary Stream and container formats such as MP4.)
(Figure: differences between NAL unit headers across container formats.)
The conversion itself replaces the Elementary Stream start codes with lengths, as described earlier.
As mentioned earlier, CMTime is hard to control directly, so AVSampleBufferDisplayLayer offers control through a CMTimebase.
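A minimal sketch of wiring a CMTimebase to an AVSampleBufferDisplayLayer (an illustration only; this project renders through a UIImageView instead):

```objc
// Hedged sketch: attach a controllable timebase to an AVSampleBufferDisplayLayer.
#import <AVFoundation/AVFoundation.h>

AVSampleBufferDisplayLayer *displayLayer = [[AVSampleBufferDisplayLayer alloc] init];

CMTimebaseRef timebase = NULL;
CMTimebaseCreateWithMasterClock(kCFAllocatorDefault, CMClockGetHostTimeClock(), &timebase);
CMTimebaseSetTime(timebase, kCMTimeZero);   // start playback time at 0
CMTimebaseSetRate(timebase, 1.0);           // play at normal speed

displayLayer.controlTimebase = timebase;
// Decoded frames would then be submitted with [displayLayer enqueueSampleBuffer:sampleBuffer];
```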
0x67 is the NAL header of an SPS and 0x68 that of a PPS. For example:
```
[000]=0x00 [001]=0x00 [002]=0x00 [003]=0x01 [004]=0x67 [005]=0x64 [006]=0x00 [007]=0x32
[008]=0xAC [009]=0xB3 [010]=0x00 [011]=0xF0 [012]=0x04 [013]=0x4F [014]=0xCB [015]=0x08
[016]=0x00 [017]=0x00 [018]=0x03 [019]=0x00 [020]=0x08 [021]=0x00 [022]=0x00 [023]=0x03
[024]=0x01 [025]=0x94 [026]=0x78 [027]=0xC1 [028]=0x93 [029]=0x40 [030]=0x00 [031]=0x00
[032]=0x00 [033]=0x01 [034]=0x68 [035]=0xE9 [036]=0x73 [037]=0x2C [038]=0x8B [039]=0x00
[040]=0x00 [041]=0x01 [042]=0x65
```
Its components are:
Start code: `0x00 0x00 0x00 0x01`
The SPS starts at [004] and is 26 bytes long:
```
0x67 0x64 0x00 0x32 0xAC 0xB3 0x00 0xF0 0x04 0x4F 0xCB 0x08 0x00
0x00 0x03 0x00 0x08 0x00 0x00 0x03 0x01 0x94 0x78 0xC1 0x93 0x40
```
SPS contents:
```
profile_idc = 66
constrained_set0_flag = 1
constrained_set1_flag = 1
constrained_set2_flag = 1
constrained_set3_flag = 0
level_idc = 20
seq_parameter_set_id = 0
chroma_format_idc = 1
bit_depth_luma_minus8 = 0
bit_depth_chroma_minus8 = 0
seq_scaling_matrix_present_flag = 0
log2_max_frame_num_minus4 = 0
pic_order_cnt_type = 2
log2_max_pic_order_cnt_lsb_minus4 = 0
delta_pic_order_always_zero_flag = 0
offset_for_non_ref_pic = 0
offset_for_top_to_bottom_field = 0
num_ref_frames_in_pic_order_cnt_cycle = 0
num_ref_frames = 1
gaps_in_frame_num_value_allowed_flag = 0
pic_width_in_mbs_minus1 = 21
pic_height_in_mbs_minus1 = 17
frame_mbs_only_flag = 1
mb_adaptive_frame_field_flag = 0
direct_8x8_interence_flag = 0
frame_cropping_flag = 0
frame_cropping_rect_left_offset = 0
frame_cropping_rect_right_offset = 0
frame_cropping_rect_top_offset = 0
frame_cropping_rect_bottom_offset = 0
vui_parameters_present_flag = 0
```
`pic_width_in_mbs_minus1 = 21` and `pic_height_in_mbs_minus1 = 17` give the image width and height in macroblock (16x16) units, minus 1. The actual width is therefore (21+1)*16 = 352, and the height is (17+1)*16 = 288.
PPS contents:
```
pic_parameter_set_id = 0
seq_parameter_set_id = 0
entropy_coding_mode_flag = 0
pic_order_present_flag = 0
num_slice_groups_minus1 = 0
slice_group_map_type = 0
num_ref_idx_l0_active_minus1 = 0
num_ref_idx_l1_active_minus1 = 0
weighted_pref_flag = 0
weighted_bipred_idc = 0
pic_init_qp_minus26 = 0
pic_init_qs_minus26 = 0
chroma_qp_index_offset = 10
deblocking_filter_control_present_flag = 1
constrained_intra_pred_flag = 0
redundant_pic_cnt_present_flag = 0
transform_8x8_mode_flag = 0
pic_scaling_matrix_present_flag = 0
second_chroma_qp_index_offset = 10
```
Parsing method
`67 42 e0 0a 89 95 42 c1 2c 80` (67 is the SPS NAL header)
```
0100 0010 1110 0000 0000 1010 1000 1001 1001 0101 0100 0010 1100 0001 0010 1100 1000 0000
```
| FIELD | No. of BITS | VALUE | CodeNum | Descriptor |
|---|---|---|---|---|
| profile_idc | 8 | 01000010 | 66 | u(8) |
| constraint_set0_flag | 1 | 1 | | u(1) |
| constraint_set1_flag | 1 | 1 | | u(1) |
| constraint_set2_flag | 1 | 1 | | u(1) |
| constraint_set3_flag | 1 | 0 | | u(1) |
| reserved_zero_4bits | 4 | 0000 | | u(4) |
| level_idc | 8 | 00001010 | 10 | u(8) |
| seq_parameter_set_id | 1 | 1 | 0 | ue(v) |
| log2_max_frame_num_minus4 | 7 | 0001001 | 8 | ue(v) |
| pic_order_cnt_type | 1 | 1 | 0 | ue(v) |
| log2_max_pic_order_cnt_lsb_minus4 | 5 | 00101 | 4 | ue(v) |
| num_ref_frames | 3 | 010 | | ue(v) |
| gaps_in_frame_num_value_allowed_flag | 1 | 1 | | u(1) |
| pic_width_in_mbs_minus1 | 9 | 000010110 | 20 | ue(v) |
| pic_height_in_map_units_minus1 | 9 | 000010010 | 16 | ue(v) |
| frame_mbs_only_flag | 1 | 1 | 0 | u(1) |
| mb_adaptive_frame_field_flag | 1 | 1 | 0 | u(1) |
| direct_8x8_inference_flag | 1 | 0 | | u(1) |
| frame_cropping_flag | 1 | 0 | | u(1) |
| vui_parameters_present_flag | 1 | 1 | 0 | u(1) |
`68 ce 05 8b 72` (68 is the PPS NAL header)
```
1100 1110 0000 0101 1000 1011 0111 0010
```
| FIELD | No. of BITS | VALUE | CodeNum | Descriptor |
|---|---|---|---|---|
| pic_parameter_set_id | 1 | 1 | 0 | ue(v) |
| seq_parameter_set_id | 1 | 1 | 0 | ue(v) |
| entropy_coding_mode_flag | 1 | 0 | | u(1) |
| pic_order_present_flag | 1 | 0 | | u(1) |
| num_slice_groups_minus1 | 1 | 1 | 0 | ue(v) |
| num_ref_idx_l0_active_minus1 | 1 | 1 | 0 | ue(v) |
| num_ref_idx_l1_active_minus1 | 1 | 1 | 0 | ue(v) |
| weighted_pred_flag | 1 | 0 | | u(1) |
| weighted_bipred_idc | 2 | 00 | | u(2) |
| pic_init_qp_minus26 | 7 | 0001011 | 10 (-5) | se(v) |
| pic_init_qs_minus26 | 7 | 0001011 | 10 (-5) | se(v) |
| chroma_qp_index_offset | 3 | 011 | 2 (-1) | se(v) |
| deblocking_filter_control_present_flag | 1 | 1 | | u(1) |
| constrained_intra_pred_flag | 1 | 0 | | u(1) |
| redundant_pic_cnt_present_flag | 1 | 0 | | u(1) |
The MP4 container format is defined on top of the QuickTime container format and keeps the media description separate from the media data. It is widely used today to package H.264 video and AAC audio, and is representative of HD video/HDV content.