Live555 Source Code Analysis (8)

9. H.264 RTP Transport in Detail (1): Session

In the earlier chapters on the server side, one fairly important question was left unexamined: how is a file opened and its SDP information obtained? Let's start from there.

When the RTSPServer receives a DESCRIBE request for some medium, it finds the corresponding ServerMediaSession and calls ServerMediaSession::generateSDPDescription(). generateSDPDescription() iterates over every ServerMediaSubsession in the ServerMediaSession, obtains each subsession's SDP via subsession->sdpLines(), and concatenates them into one complete SDP description that it returns.
We can be almost certain, then, that opening and parsing the file happens inside each subsession's sdpLines() function. Let's look at it:

char const* OnDemandServerMediaSubsession::sdpLines()
{
  if (fSDPLines == NULL) {
    // We need to construct a set of SDP lines that describe this
    // subsession (as a unicast stream).  To do so, we first create
    // dummy (unused) source and "RTPSink" objects,
    // whose parameters we use for the SDP lines:
    unsigned estBitrate;
    FramedSource* inputSource = createNewStreamSource(0, estBitrate);
    if (inputSource == NULL) return NULL; // file not found

    struct in_addr dummyAddr;
    dummyAddr.s_addr = 0;
    Groupsock dummyGroupsock(envir(), dummyAddr, 0, 0);
    unsigned char rtpPayloadType = 96 + trackNumber() - 1; // if dynamic
    RTPSink* dummyRTPSink
      = createNewRTPSink(&dummyGroupsock, rtpPayloadType, inputSource);

    setSDPLinesFromRTPSink(dummyRTPSink, inputSource, estBitrate);
    Medium::close(dummyRTPSink);
    closeStreamSource(inputSource);
  }

  return fSDPLines;
}

What it does is this: the subsession caches the SDP for its media file directly, but on first access fSDPLines is NULL, so it has to be generated first. The approach is surprisingly heavyweight: it actually builds a temporary source and RTPSink, links them into a stream, and only after "playing" for a while is fSDPLines obtained. createNewStreamSource() and createNewRTPSink() are both virtual functions, so the source and sink created here are whatever the derived class specifies. Since we are analyzing H.264, that means whatever H264VideoFileServerMediaSubsession specifies. Let's look at these two functions:

FramedSource* H264VideoFileServerMediaSubsession::createNewStreamSource(
    unsigned /*clientSessionId*/, unsigned& estBitrate)
{
  estBitrate = 500; // kbps, estimate

  // Create the video source:
  ByteStreamFileSource* fileSource
    = ByteStreamFileSource::createNew(envir(), fFileName);
  if (fileSource == NULL) return NULL;
  fFileSize = fileSource->fileSize();

  // Create a framer for the Video Elementary Stream:
  return H264VideoStreamFramer::createNew(envir(), fileSource);
}


RTPSink* H264VideoFileServerMediaSubsession::createNewRTPSink(
    Groupsock* rtpGroupsock, unsigned char rtpPayloadTypeIfDynamic,
    FramedSource* /*inputSource*/)
{
  return H264VideoRTPSink::createNew(envir(), rtpGroupsock,
                                     rtpPayloadTypeIfDynamic);
}


As you can see, an H264VideoStreamFramer and an H264VideoRTPSink are created. H264VideoStreamFramer is certainly a source itself, but internally it uses yet another source: ByteStreamFileSource. We will analyze why later; set it aside for now. We still have not seen the code that actually opens the file, so let's keep exploring:

void OnDemandServerMediaSubsession::setSDPLinesFromRTPSink(
    RTPSink* rtpSink, FramedSource* inputSource, unsigned estBitrate)
{
  if (rtpSink == NULL) return;

  char const* mediaType = rtpSink->sdpMediaType();
  unsigned char rtpPayloadType = rtpSink->rtpPayloadType();
  struct in_addr serverAddrForSDP;
  serverAddrForSDP.s_addr = fServerAddressForSDP;
  char* const ipAddressStr = strDup(our_inet_ntoa(serverAddrForSDP));
  char* rtpmapLine = rtpSink->rtpmapLine();
  char const* rangeLine = rangeSDPLine();
  char const* auxSDPLine = getAuxSDPLine(rtpSink, inputSource);
  if (auxSDPLine == NULL) auxSDPLine = "";

  char const* const sdpFmt =
      "m=%s %u RTP/AVP %d\r\n"
      "c=IN IP4 %s\r\n"
      "b=AS:%u\r\n"
      "%s"
      "%s"
      "%s"
      "a=control:%s\r\n";
  unsigned sdpFmtSize = strlen(sdpFmt)
    + strlen(mediaType) + 5 /* max short len */
    + 3 /* max char len */
    + strlen(ipAddressStr) + 20 /* max int len */
    + strlen(rtpmapLine) + strlen(rangeLine) + strlen(auxSDPLine)
    + strlen(trackId());
  char* sdpLines = new char[sdpFmtSize];
  sprintf(sdpLines, sdpFmt,
          mediaType,      // m= <media>
          fPortNumForSDP, // m= <port>
          rtpPayloadType, // m= <fmt list>
          ipAddressStr,   // c= address
          estBitrate,     // b=AS:<bandwidth>
          rtpmapLine,     // a=rtpmap:... (if present)
          rangeLine,      // a=range:... (if present)
          auxSDPLine,     // optional extra SDP line
          trackId());     // a=control:<track-id>
  delete[] (char*) rangeLine;
  delete[] rtpmapLine;
  delete[] ipAddressStr;

  fSDPLines = strDup(sdpLines);
  delete[] sdpLines;
}


This function builds the subsession's SDP and saves it in fSDPLines. The file must already have been opened by this point, whether in rtpSink->rtpmapLine() or even back when the source was created. Let's set that question aside for now and first work through the SDP acquisition completely, so the focus shifts to getAuxSDPLine():

char const* OnDemandServerMediaSubsession::getAuxSDPLine(
    RTPSink* rtpSink, FramedSource* /*inputSource*/)
{
  // Default implementation:
  return rtpSink == NULL ? NULL : rtpSink->auxSDPLine();
}


Simple enough: it calls rtpSink->auxSDPLine(), so we would look at H264VideoRTPSink::auxSDPLine() next. No need, really: it just takes the SPS/PPS saved in the source and forms the a=fmtp line. But in fact it is not that simple, because H264VideoFileServerMediaSubsession overrides getAuxSDPLine()! If no override were needed, the aux SDP line would already be available from the earlier file parsing; the fact that it is overridden tells us it was not obtained earlier and has to be produced here. Look at this function in H264VideoFileServerMediaSubsession:

char const* H264VideoFileServerMediaSubsession::getAuxSDPLine(
    RTPSink* rtpSink, FramedSource* inputSource)
{
  if (fAuxSDPLine != NULL)
    return fAuxSDPLine; // it's already been set up (for a previous client)

  if (fDummyRTPSink == NULL) { // we're not already setting it up for another, concurrent stream
    // Note: For H264 video files, the 'config' information ("profile-level-id" and
    // "sprop-parameter-sets") isn't known until we start reading the file.
    // This means that "rtpSink"s "auxSDPLine()" will be NULL initially,
    // and we need to start reading data from our file until this changes.
    fDummyRTPSink = rtpSink;

    // Start reading the file:
    fDummyRTPSink->startPlaying(*inputSource, afterPlayingDummy, this);

    // Check whether the sink's 'auxSDPLine()' is ready:
    checkForAuxSDPLine(this);
  }

  envir().taskScheduler().doEventLoop(&fDoneFlag);

  return fAuxSDPLine;
}

The comment explains it clearly: for H.264 the SPS/PPS cannot be taken from a file header; the file (which is, of course, a raw elementary stream with no header) must actually be "played" for a while before they are known. In other words, they cannot be obtained from the rtpSink up front. To guarantee the aux SDP line is available before the function returns, the event loop is moved right here. afterPlayingDummy() is executed when playback ends, that is, after the aux SDP line has been obtained. What does the checkForAuxSDPLine() call before the event loop do?

void H264VideoFileServerMediaSubsession::checkForAuxSDPLine1()
{
  char const* dasl;

  if (fAuxSDPLine != NULL) {
    // Signal the event loop that we're done:
    setDoneFlag();
  } else if (fDummyRTPSink != NULL
             && (dasl = fDummyRTPSink->auxSDPLine()) != NULL) {
    fAuxSDPLine = strDup(dasl);
    fDummyRTPSink = NULL;

    // Signal the event loop that we're done:
    setDoneFlag();
  } else {
    // try again after a brief delay:
    int uSecsToDelay = 100000; // 100 ms
    nextTask() = envir().taskScheduler().scheduleDelayedTask(uSecsToDelay,
        (TaskFunc*) checkForAuxSDPLine, this);
  }
}


It checks whether the aux SDP line has already been obtained; if so, it sets the done flag and returns. Otherwise it checks whether the sink now has the aux SDP line; if so, it saves it, sets the done flag, and returns. If the line is still not available, it reschedules itself as a delayed task, checking again every 100 milliseconds; each check essentially amounts to one call to fDummyRTPSink->auxSDPLine(). The event loop stops when it detects that fDoneFlag has changed, at which point the aux SDP line is in hand. If, however, the file ends without the aux SDP line ever being obtained, afterPlayingDummy() is executed and stops the event loop from there. The parent subsession class then closes these temporary source and sink objects, which are recreated for the real playback.
