8. RTSPClient Analysis
Where there is an RTSPServer, there must of course be an RTSPClient.
If we follow the server-side architecture, we might imagine the client side being composed like this:
Since it has to connect to an RTSP server, RTSPClient needs a TCP socket. After receiving the server's DESCRIBE response, it should create a ClientMediaSession corresponding to the ServerMediaSession, and within it a ClientMediaSubsession for each track. When the RTP session is established, it should send a SETUP request for each of its tracks, create an RTP socket for each track once the responses arrive, then request PLAY, after which data transfer begins. Is that really how it works? Only the code can tell.
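Before diving in, it helps to recall what that exchange looks like on the wire. The trace below is hypothetical (the URL, ports and session ID are invented), but it is the sequence the rest of this analysis follows:
C->S: OPTIONS rtsp://example.com/stream RTSP/1.0
S->C: RTSP/1.0 200 OK (lists the supported methods)
C->S: DESCRIBE rtsp://example.com/stream RTSP/1.0
S->C: RTSP/1.0 200 OK (body: an SDP description, one "m=" section per track)
C->S: SETUP rtsp://example.com/stream/track1 RTSP/1.0 (Transport: ...;client_port=50000-50001)
S->C: RTSP/1.0 200 OK (Session: 12345678; ...;server_port=6970-6971)
... one SETUP per track ...
C->S: PLAY rtsp://example.com/stream RTSP/1.0 (Session: 12345678)
S->C: RTSP/1.0 200 OK, after which RTP/RTCP packets start flowing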
openRTSP in testProgs is the classic RTSPClient example, so let's analyze that.
The main() function lives in playCommon.cpp. Its flow is fairly simple and not much different from the server side: create the task scheduler object, create the usage environment object, process the user's arguments (the RTSP URL), create the RTSPClient instance, issue the first RTSP request (which may be OPTIONS or may be DESCRIBE), and enter the event loop.
The RTSP TCP connection is established only when the first RTSP request is sent. The various sendXXXXXXCommand() functions of RTSPClient all ultimately call sendRequest(), and sendRequest() establishes the TCP connection if circumstances require it. As soon as the connection is up, a socket handler for receiving data on it is added to the task scheduler: RTSPClient::incomingDataHandler().
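As a minimal sketch of that skeleton (not openRTSP itself; the URL, the verbosity level and the empty continueAfterDESCRIBE() body are placeholders), the flow looks like:
#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

void continueAfterDESCRIBE(RTSPClient* client, int resultCode, char* resultString) {
  // next: build a MediaSession from resultString (the SDP), then send SETUP ...
}

int main(int argc, char** argv) {
  // Task scheduler + environment, just as on the server side:
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  // Create the client; note that no TCP connection is made yet:
  RTSPClient* client = RTSPClient::createNew(*env, "rtsp://example.com/stream",
                                             1 /*verbosity*/, "myClient");

  // The first request triggers the TCP connect inside sendRequest():
  client->sendDescribeCommand(continueAfterDESCRIBE);

  env->taskScheduler().doEventLoop(); // does not return
  return 0;
}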
Next comes sending the RTSP requests. OPTIONS is not worth looking at, so let's start with DESCRIBE:
void getSDPDescription(RTSPClient::responseHandler* afterFunc)
{
ourRTSPClient->sendDescribeCommand(afterFunc, ourAuthenticator);
}
unsigned RTSPClient::sendDescribeCommand(responseHandler* responseHandler,
Authenticator* authenticator)
{
if (authenticator != NULL)
fCurrentAuthenticator = *authenticator;
return sendRequest(new RequestRecord(++fCSeq, "DESCRIBE", responseHandler));
}
The responseHandler parameter is a callback supplied by the caller; it is invoked after the response to the request has been processed, and inside that callback the next request is issued; all the requests are chained this way, one after another. Callbacks are used mainly because sending and receiving on the socket are not synchronous. The RequestRecord class represents one request: it holds not only the RTSP request's data but also the callback to run when the request completes, namely responseHandler. Some requests are issued before the TCP connection exists and cannot be sent immediately; they are put on the fRequestsAwaitingConnection queue. Others have been sent and are waiting for the server's response; they go on the fRequestsAwaitingResponse queue and are removed from it when the response arrives.
RTSPClient::sendRequest() is too convoluted to list here; it does nothing more than assemble the RTSP request string and send it over the TCP socket.
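For reference, the request string it assembles for DESCRIBE comes out roughly like this (a schematic example; the URL and CSeq value are invented):
DESCRIBE rtsp://example.com/stream RTSP/1.0
CSeq: 2
User-Agent: myClient (LIVE555 Streaming Media)
Accept: application/sdp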
Now let's see how the DESCRIBE response is handled once it arrives. In theory a MediaSession should be built from the media information; let's see whether that is so:
void continueAfterDESCRIBE(RTSPClient*, int resultCode, char* resultString)
{
char* sdpDescription = resultString;
// Create a media session object from this SDP description:
session = MediaSession::createNew(*env, sdpDescription);
delete[] sdpDescription;
// Then, setup the "RTPSource"s for the session:
MediaSubsessionIterator iter(*session);
MediaSubsession *subsession;
Boolean madeProgress = False;
char const* singleMediumToTest = singleMedium;
//Loop over all the MediaSubsessions, configuring each one's RTPSource parameters
while ((subsession = iter.next()) != NULL) {
//Initiate the subsession; inside, the RTP/RTCP sockets and the RTPSource are created.
if (subsession->initiate(simpleRTPoffsetArg)) {
madeProgress = True;
if (subsession->rtpSource() != NULL) {
// Because we're saving the incoming data, rather than playing
// it in real time, allow an especially large time threshold
// (1 second) for reordering misordered incoming packets:
unsigned const thresh = 1000000; // 1 second
subsession->rtpSource()->setPacketReorderingThresholdTime(thresh);
// Set the RTP source's OS socket buffer size as appropriate - either if we were explicitly asked (using -B),
// or if the desired FileSink buffer size happens to be larger than the current OS socket buffer size.
// (The latter case is a heuristic, on the assumption that if the user asked for a large FileSink buffer size,
// then the input data rate may be large enough to justify increasing the OS socket buffer size also.)
int socketNum = subsession->rtpSource()->RTPgs()->socketNum();
unsigned curBufferSize = getReceiveBufferSize(*env,socketNum);
if (socketInputBufferSize > 0 || fileSinkBufferSize > curBufferSize) {
unsigned newBufferSize = socketInputBufferSize > 0 ?
socketInputBufferSize : fileSinkBufferSize;
newBufferSize = setReceiveBufferTo(*env, socketNum, newBufferSize);
if (socketInputBufferSize > 0) { // The user explicitly asked for the new socket buffer size; announce it:
*env
<< "Changed socket receive buffer size for the \""
<< subsession->mediumName() << "/"
<< subsession->codecName()
<< "\" subsession from " << curBufferSize
<< " to " << newBufferSize << " bytes\n";
}
}
}
}
}
if (!madeProgress)
shutdown();
// Perform additional 'setup' on each subsession, before playing them:
//The next step is to send the SETUP requests, once for each track.
setupStreams();
}
This function has had many branches pruned away here, so don't be too shocked to find it differs from the original.
A MediaSession is indeed created after the DESCRIBE response, and we discover that the client-side MediaSession is not called ClientMediaSession, nor is the subsession. Now I would like to see how MediaSession and MediaSubsession are built:
MediaSession* MediaSession::createNew(UsageEnvironment& env,char const* sdpDescription)
{
MediaSession* newSession = new MediaSession(env);
if (newSession != NULL) {
if (!newSession->initializeWithSDP(sdpDescription)) {
delete newSession;
return NULL;
}
}
return newSession;
}
I can tell you there is nothing worth seeing in the MediaSession constructor, so let's look at initializeWithSDP():
It is too long to list, so here is the gist: it processes the SDP line by line, initializing member variables as it goes. Whenever an "m=" line is encountered, a MediaSubsession is created, and the lines between it and the next "m=" line are used to initialize that MediaSubsession's variables; this repeats until the end. (A sample SDP is sketched below.) However, no RTP socket is created in there. We saw that in continueAfterDESCRIBE(), after the MediaSession is created, subsession->initiate(simpleRTPoffsetArg) is called; is the socket created inside it? Look at the listing that follows the example:
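To make that concrete, here is a hypothetical two-track SDP of the kind a DESCRIBE response carries; each "m=" section becomes one MediaSubsession, initialized from the lines beneath it:
v=0
o=- 1234567890 1 IN IP4 192.0.2.1
s=Example Stream
t=0 0
a=control:*
m=video 0 RTP/AVP 96        <-- first MediaSubsession ("video", codec "H264")
a=rtpmap:96 H264/90000
a=control:track1
m=audio 0 RTP/AVP 97        <-- second MediaSubsession ("audio", codec "MPEG4-GENERIC")
a=rtpmap:97 MPEG4-GENERIC/44100/2
a=control:track2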
Boolean MediaSubsession::initiate(int useSpecialRTPoffset)
{
if (fReadSource != NULL)
return True; // has already been initiated
do {
if (fCodecName == NULL) {
env().setResultMsg("Codec is unspecified");
break;
}
// Create RTP and RTCP 'Groupsocks' on which to receive incoming data.
// (Groupsocks will work even for unicast addresses)
struct in_addr tempAddr;
tempAddr.s_addr = connectionEndpointAddress();
// This could get changed later, as a result of a RTSP "SETUP"
if (fClientPortNum != 0) {
// The sockets' port numbers were specified for us. Use these:
fClientPortNum = fClientPortNum & ~1; // even
if (isSSM()) {
fRTPSocket = new Groupsock(env(), tempAddr, fSourceFilterAddr,
fClientPortNum);
} else {
fRTPSocket = new Groupsock(env(), tempAddr, fClientPortNum,
255);
}
if (fRTPSocket == NULL) {
env().setResultMsg("Failed to create RTP socket");
break;
}
// Set our RTCP port to be the RTP port +1
portNumBits const rtcpPortNum = fClientPortNum | 1;
if (isSSM()) {
fRTCPSocket = new Groupsock(env(), tempAddr, fSourceFilterAddr,
rtcpPortNum);
} else {
fRTCPSocket = new Groupsock(env(), tempAddr, rtcpPortNum, 255);
}
if (fRTCPSocket == NULL) {
char tmpBuf[100];
sprintf(tmpBuf, "Failed to create RTCP socket (port %d)",
rtcpPortNum);
env().setResultMsg(tmpBuf);
break;
}
} else {
//The server did not specify client ports, so we find our own. The reason this is done so
//elaborately is to obtain two consecutive port numbers: RTP/RTCP ports must be adjacent, remember?
// Port numbers were not specified in advance, so we use ephemeral port numbers.
// Create sockets until we get a port-number pair (even: RTP; even+1: RTCP).
// We need to make sure that we don't keep trying to use the same bad port numbers over and over again.
// so we store bad sockets in a table, and delete them all when we're done.
HashTable* socketHashTable = HashTable::create(ONE_WORD_HASH_KEYS);
if (socketHashTable == NULL)
break;
Boolean success = False;
NoReuse dummy; // ensures that our new ephemeral port number won't be one that's already in use
while (1) {
// Create a new socket:
if (isSSM()) {
fRTPSocket = new Groupsock(env(), tempAddr,
fSourceFilterAddr, 0);
} else {
fRTPSocket = new Groupsock(env(), tempAddr, 0, 255);
}
if (fRTPSocket == NULL) {
env().setResultMsg(
"MediaSession::initiate(): unable to create RTP and RTCP sockets");
break;
}
// Get the client port number, and check whether it's even (for RTP):
Port clientPort(0);
if (!getSourcePort(env(), fRTPSocket->socketNum(),
clientPort)) {
break;
}
fClientPortNum = ntohs(clientPort.num());
if ((fClientPortNum & 1) != 0) { // it's odd
// Record this socket in our table, and keep trying:
unsigned key = (unsigned) fClientPortNum;
Groupsock* existing = (Groupsock*) socketHashTable->Add(
(char const*) key, fRTPSocket);
delete existing; // in case it wasn't NULL
continue;
}
// Make sure we can use the next (i.e., odd) port number, for RTCP:
portNumBits rtcpPortNum = fClientPortNum | 1;
if (isSSM()) {
fRTCPSocket = new Groupsock(env(), tempAddr,
fSourceFilterAddr, rtcpPortNum);
} else {
fRTCPSocket = new Groupsock(env(), tempAddr, rtcpPortNum,
255);
}
if (fRTCPSocket != NULL && fRTCPSocket->socketNum() >= 0) {
// Success! Use these two sockets.
success = True;
break;
} else {
// We couldn't create the RTCP socket (perhaps that port number's already in use elsewhere?).
delete fRTCPSocket;
// Record the first socket in our table, and keep trying:
unsigned key = (unsigned) fClientPortNum;
Groupsock* existing = (Groupsock*) socketHashTable->Add(
(char const*) key, fRTPSocket);
delete existing; // in case it wasn't NULL
continue;
}
}
// Clean up the socket hash table (and contents):
Groupsock* oldGS;
while ((oldGS = (Groupsock*) socketHashTable->RemoveNext()) != NULL) {
delete oldGS;
}
delete socketHashTable;
if (!success)
break; // a fatal error occurred trying to create the RTP and RTCP sockets; we can't continue
}
// Try to use a big receive buffer for RTP - at least 0.1 second of
// specified bandwidth and at least 50 KB
unsigned rtpBufSize = fBandwidth * 25 / 2; // 1 kbps * 0.1 s = 12.5 bytes
if (rtpBufSize < 50 * 1024)
rtpBufSize = 50 * 1024;
increaseReceiveBufferTo(env(), fRTPSocket->socketNum(), rtpBufSize);
// ASSERT: fRTPSocket != NULL && fRTCPSocket != NULL
if (isSSM()) {
// Special case for RTCP SSM: Send RTCP packets back to the source via unicast:
fRTCPSocket->changeDestinationParameters(fSourceFilterAddr, 0, ~0);
}
//This is where the RTPSource gets created
// Create "fRTPSource" and "fReadSource":
if (!createSourceObjects(useSpecialRTPoffset))
break;
if (fReadSource == NULL) {
env().setResultMsg("Failed to create read source");
break;
}
// Finally, create our RTCP instance. (It starts running automatically)
if (fRTPSource != NULL) {
// If bandwidth is specified, use it and add 5% for RTCP overhead.
// Otherwise make a guess at 500 kbps.
unsigned totSessionBandwidth =
fBandwidth ? fBandwidth + fBandwidth / 20 : 500;
fRTCPInstance = RTCPInstance::createNew(env(), fRTCPSocket,
totSessionBandwidth, (unsigned char const*) fParent.CNAME(),
NULL /* we're a client */, fRTPSource);
if (fRTCPInstance == NULL) {
env().setResultMsg("Failed to create RTCP instance");
break;
}
}
return True;
} while (0);
//Execution reaches here on failure
delete fRTPSocket;
fRTPSocket = NULL;
delete fRTCPSocket;
fRTCPSocket = NULL;
Medium::close(fRTCPInstance);
fRTCPInstance = NULL;
Medium::close(fReadSource);
fReadSource = fRTPSource = NULL;
fClientPortNum = 0;
return False;
}
Indeed, the RTP/RTCP sockets are created in it, and so is the RTPSource; the RTPSource is created in createSourceObjects(). Let's take a look:
Boolean MediaSubsession::createSourceObjects(int useSpecialRTPoffset)
{
do {
// First, check "fProtocolName"
if (strcmp(fProtocolName, "UDP") == 0) {
// A UDP-packetized stream (*not* a RTP stream)
fReadSource = BasicUDPSource::createNew(env(), fRTPSocket);
fRTPSource = NULL; // Note!
if (strcmp(fCodecName, "MP2T") == 0) { // MPEG-2 Transport Stream
fReadSource = MPEG2TransportStreamFramer::createNew(env(),
fReadSource);
// this sets "durationInMicroseconds" correctly, based on the PCR values
}
} else {
// Check "fCodecName" against the set of codecs that we support,
// and create our RTP source accordingly
// (Later make this code more efficient, as this set grows #####)
// (Also, add more fmts that can be implemented by SimpleRTPSource#####)
Boolean createSimpleRTPSource = False; // by default; can be changed below
Boolean doNormalMBitRule = False; // default behavior if "createSimpleRTPSource" is True
if (strcmp(fCodecName, "QCELP") == 0) { // QCELP audio
fReadSource = QCELPAudioRTPSource::createNew(env(), fRTPSocket,
fRTPSource, fRTPPayloadFormat, fRTPTimestampFrequency);
// Note that fReadSource will differ from fRTPSource in this case
} else if (strcmp(fCodecName, "AMR") == 0) { // AMR audio (narrowband)
fReadSource = AMRAudioRTPSource::createNew(env(), fRTPSocket,
fRTPSource, fRTPPayloadFormat, 0 /*isWideband*/,
fNumChannels, fOctetalign, fInterleaving,
fRobustsorting, fCRC);
// Note that fReadSource will differ from fRTPSource in this case
} else if (strcmp(fCodecName, "AMR-WB") == 0) { // AMR audio (wideband)
fReadSource = AMRAudioRTPSource::createNew(env(), fRTPSocket,
fRTPSource, fRTPPayloadFormat, 1 /*isWideband*/,
fNumChannels, fOctetalign, fInterleaving,
fRobustsorting, fCRC);
// Note that fReadSource will differ from fRTPSource in this case
} else if (strcmp(fCodecName, "MPA") == 0) { // MPEG-1 or 2 audio
fReadSource = fRTPSource = MPEG1or2AudioRTPSource::createNew(
env(), fRTPSocket, fRTPPayloadFormat,
fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "MPA-ROBUST") == 0) { // robust MP3 audio
fRTPSource = MP3ADURTPSource::createNew(env(), fRTPSocket,
fRTPPayloadFormat, fRTPTimestampFrequency);
if (fRTPSource == NULL)
break;
// Add a filter that deinterleaves the ADUs after depacketizing them:
MP3ADUdeinterleaver* deinterleaver = MP3ADUdeinterleaver::createNew(
env(), fRTPSource);
if (deinterleaver == NULL)
break;
// Add another filter that converts these ADUs to MP3 frames:
fReadSource = MP3FromADUSource::createNew(env(), deinterleaver);
} else if (strcmp(fCodecName, "X-MP3-DRAFT-00") == 0) {
// a non-standard variant of "MPA-ROBUST" used by RealNetworks
// (one 'ADU'ized MP3 frame per packet; no headers)
fRTPSource = SimpleRTPSource::createNew(env(), fRTPSocket,
fRTPPayloadFormat, fRTPTimestampFrequency,
"audio/MPA-ROBUST" /*hack*/);
if (fRTPSource == NULL)
break;
// Add a filter that converts these ADUs to MP3 frames:
fReadSource = MP3FromADUSource::createNew(env(), fRTPSource,
False /*no ADU header*/);
} else if (strcmp(fCodecName, "MP4A-LATM") == 0) { // MPEG-4 LATM audio
fReadSource = fRTPSource = MPEG4LATMAudioRTPSource::createNew(
env(), fRTPSocket, fRTPPayloadFormat,
fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "AC3") == 0
|| strcmp(fCodecName, "EAC3") == 0) { // AC3 audio
fReadSource = fRTPSource = AC3AudioRTPSource::createNew(env(),
fRTPSocket, fRTPPayloadFormat, fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "MP4V-ES") == 0) { // MPEG-4 Elem Str vid
fReadSource = fRTPSource = MPEG4ESVideoRTPSource::createNew(
env(), fRTPSocket, fRTPPayloadFormat,
fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "MPEG4-GENERIC") == 0) {
fReadSource = fRTPSource = MPEG4GenericRTPSource::createNew(
env(), fRTPSocket, fRTPPayloadFormat,
fRTPTimestampFrequency, fMediumName, fMode, fSizelength,
fIndexlength, fIndexdeltalength);
} else if (strcmp(fCodecName, "MPV") == 0) { // MPEG-1 or 2 video
fReadSource = fRTPSource = MPEG1or2VideoRTPSource::createNew(
env(), fRTPSocket, fRTPPayloadFormat,
fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "MP2T") == 0) { // MPEG-2 Transport Stream
fRTPSource = SimpleRTPSource::createNew(env(), fRTPSocket,
fRTPPayloadFormat, fRTPTimestampFrequency, "video/MP2T",
0, False);
fReadSource = MPEG2TransportStreamFramer::createNew(env(),
fRTPSource);
// this sets "durationInMicroseconds" correctly, based on the PCR values
} else if (strcmp(fCodecName, "H261") == 0) { // H.261
fReadSource = fRTPSource = H261VideoRTPSource::createNew(env(),
fRTPSocket, fRTPPayloadFormat, fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "H263-1998") == 0
|| strcmp(fCodecName, "H263-2000") == 0) { // H.263+
fReadSource = fRTPSource = H263plusVideoRTPSource::createNew(
env(), fRTPSocket, fRTPPayloadFormat,
fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "H264") == 0) {
fReadSource = fRTPSource = H264VideoRTPSource::createNew(env(),
fRTPSocket, fRTPPayloadFormat, fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "DV") == 0) {
fReadSource = fRTPSource = DVVideoRTPSource::createNew(env(),
fRTPSocket, fRTPPayloadFormat, fRTPTimestampFrequency);
} else if (strcmp(fCodecName, "JPEG") == 0) { // motion JPEG
fReadSource = fRTPSource = JPEGVideoRTPSource::createNew(env(),
fRTPSocket, fRTPPayloadFormat, fRTPTimestampFrequency,
videoWidth(), videoHeight());
} else if (strcmp(fCodecName, "X-QT") == 0
|| strcmp(fCodecName, "X-QUICKTIME") == 0) {
// Generic QuickTime streams, as defined in
// <http://developer.apple.com/quicktime/icefloe/dispatch026.html>
char* mimeType = new char[strlen(mediumName())
+ strlen(codecName()) + 2];
sprintf(mimeType, "%s/%s", mediumName(), codecName());
fReadSource = fRTPSource = QuickTimeGenericRTPSource::createNew(
env(), fRTPSocket, fRTPPayloadFormat,
fRTPTimestampFrequency, mimeType);
delete[] mimeType;
} else if (strcmp(fCodecName, "PCMU") == 0 // PCM u-law audio
|| strcmp(fCodecName, "GSM") == 0 // GSM audio
|| strcmp(fCodecName, "DVI4") == 0 // DVI4 (IMA ADPCM) audio
|| strcmp(fCodecName, "PCMA") == 0 // PCM a-law audio
|| strcmp(fCodecName, "MP1S") == 0 // MPEG-1 System Stream
|| strcmp(fCodecName, "MP2P") == 0 // MPEG-2 Program Stream
|| strcmp(fCodecName, "L8") == 0 // 8-bit linear audio
|| strcmp(fCodecName, "L16") == 0 // 16-bit linear audio
|| strcmp(fCodecName, "L20") == 0 // 20-bit linear audio (RFC 3190)
|| strcmp(fCodecName, "L24") == 0 // 24-bit linear audio (RFC 3190)
|| strcmp(fCodecName, "G726-16") == 0 // G.726, 16 kbps
|| strcmp(fCodecName, "G726-24") == 0 // G.726, 24 kbps
|| strcmp(fCodecName, "G726-32") == 0 // G.726, 32 kbps
|| strcmp(fCodecName, "G726-40") == 0 // G.726, 40 kbps
|| strcmp(fCodecName, "SPEEX") == 0 // SPEEX audio
|| strcmp(fCodecName, "T140") == 0 // T.140 text (RFC 4103)
|| strcmp(fCodecName, "DAT12") == 0 // 12-bit nonlinear audio (RFC 3190)
) {
createSimpleRTPSource = True;
useSpecialRTPoffset = 0;
} else if (useSpecialRTPoffset >= 0) {
// We don't know this RTP payload format, but try to receive
// it using a 'SimpleRTPSource' with the specified header offset:
createSimpleRTPSource = True;
} else {
env().setResultMsg(
"RTP payload format unknown or not supported");
break;
}
if (createSimpleRTPSource) {
char* mimeType = new char[strlen(mediumName())
+ strlen(codecName()) + 2];
sprintf(mimeType, "%s/%s", mediumName(), codecName());
fReadSource = fRTPSource = SimpleRTPSource::createNew(env(),
fRTPSocket, fRTPPayloadFormat, fRTPTimestampFrequency,
mimeType, (unsigned) useSpecialRTPoffset,
doNormalMBitRule);
delete[] mimeType;
}
}
return True;
} while (0);
return False; // an error occurred
}
As you can see, this function mainly creates the appropriate source based on the media and transport information parsed earlier.
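One pattern worth noting: for most codecs fReadSource and fRTPSource end up as the same object, but wherever the code remarks "fReadSource will differ from fRTPSource", a filter has been chained between them. A small sketch of inspecting that after initiate() succeeds (the variable names are ours):
// After a successful initiate(), what a caller sees:
FramedSource* read = subsession->readSource(); // what a sink will consume
RTPSource* rtp = subsession->rtpSource();      // the raw RTP depacketizer (NULL for plain UDP)
if (read != rtp) {
  // A filter sits between them, e.g. MPEG2TransportStreamFramer for "MP2T",
  // or MP3FromADUSource for "MPA-ROBUST".
}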
The sockets are built and the source is created; the next step should be to connect a sink, completing the stream. There has been no sign of a sink so far; presumably it is created in the next step, SETUP. We saw setupStreams() called at the end of continueAfterDESCRIBE(), so let's explore setupStreams():
void setupStreams()
{
static MediaSubsessionIterator* setupIter = NULL;
if (setupIter == NULL)
setupIter = new MediaSubsessionIterator(*session);
//Each call to this function issues a SETUP request for just one subsession.
while ((subsession = setupIter->next()) != NULL) {
// We have another subsession left to set up:
if (subsession->clientPortNum() == 0)
continue; // port # was not set
//Send a SETUP request for one subsession. When the response has been processed,
//continueAfterSETUP() is called; it in turn calls setupStreams(), which sends the
//SETUP request for the next subsession, until every subsession has been handled.
setupSubsession(subsession, streamUsingTCP, continueAfterSETUP);
return;
}
//By the time execution reaches here, all the subsessions have been looped through
// We're done setting up subsessions.
delete setupIter;
if (!madeProgress)
shutdown();
//Create the output files; so the sinks are created here after all. Once a sink is created, it is
//started ("played"). This playing should merely add the socket handlers to the task scheduler,
//with no data being received or sent; data only flows after the PLAY request is issued.
// Create output files:
if (createReceivers) {
if (outputQuickTimeFile) {
// Create a "QuickTimeFileSink", to write to 'stdout':
qtOut = QuickTimeFileSink::createNew(*env, *session, "stdout",
fileSinkBufferSize, movieWidth, movieHeight, movieFPS,
packetLossCompensate, syncStreams, generateHintTracks,
generateMP4Format);
if (qtOut == NULL) {
*env << "Failed to create QuickTime file sink for stdout: "
<< env->getResultMsg();
shutdown();
}
qtOut->startPlaying(sessionAfterPlaying, NULL);
} else if (outputAVIFile) {
// Create an "AVIFileSink", to write to 'stdout':
aviOut = AVIFileSink::createNew(*env, *session, "stdout",
fileSinkBufferSize, movieWidth, movieHeight, movieFPS,
packetLossCompensate);
if (aviOut == NULL) {
*env << "Failed to create AVI file sink for stdout: "
<< env->getResultMsg();
shutdown();
}
aviOut->startPlaying(sessionAfterPlaying, NULL);
} else {
// Create and start "FileSink"s for each subsession:
madeProgress = False;
MediaSubsessionIterator iter(*session);
while ((subsession = iter.next()) != NULL) {
if (subsession->readSource() == NULL)
continue; // was not initiated
// Create an output file for each desired stream:
char outFileName[1000];
if (singleMedium == NULL) {
// Output file name is
// "<filename-prefix><medium_name>-<codec_name>-<counter>"
static unsigned streamCounter = 0;
snprintf(outFileName, sizeof outFileName, "%s%s-%s-%d",
fileNamePrefix, subsession->mediumName(),
subsession->codecName(), ++streamCounter);
} else {
sprintf(outFileName, "stdout");
}
FileSink* fileSink;
if (strcmp(subsession->mediumName(), "audio") == 0
&& (strcmp(subsession->codecName(), "AMR") == 0
|| strcmp(subsession->codecName(), "AMR-WB")
== 0)) {
// For AMR audio streams, we use a special sink that inserts AMR frame hdrs:
fileSink = AMRAudioFileSink::createNew(*env, outFileName,
fileSinkBufferSize, oneFilePerFrame);
} else if (strcmp(subsession->mediumName(), "video") == 0
&& (strcmp(subsession->codecName(), "H264") == 0)) {
// For H.264 video stream, we use a special sink that insert start_codes:
fileSink = H264VideoFileSink::createNew(*env, outFileName,
subsession->fmtp_spropparametersets(),
fileSinkBufferSize, oneFilePerFrame);
} else {
// Normal case:
fileSink = FileSink::createNew(*env, outFileName,
fileSinkBufferSize, oneFilePerFrame);
}
subsession->sink = fileSink;
if (subsession->sink == NULL) {
*env << "Failed to create FileSink for \"" << outFileName
<< "\": " << env->getResultMsg() << "\n";
} else {
if (singleMedium == NULL) {
*env << "Created output file: \"" << outFileName
<< "\"\n";
} else {
*env << "Outputting data from the \""
<< subsession->mediumName() << "/"
<< subsession->codecName()
<< "\" subsession to 'stdout'\n";
}
if (strcmp(subsession->mediumName(), "video") == 0
&& strcmp(subsession->codecName(), "MP4V-ES") == 0 &&
subsession->fmtp_config() != NULL) {
// For MPEG-4 video RTP streams, the 'config' information
// from the SDP description contains useful VOL etc. headers.
// Insert this data at the front of the output file:
unsigned configLen;
unsigned char* configData
= parseGeneralConfigStr(subsession->fmtp_config(), configLen);
struct timeval timeNow;
gettimeofday(&timeNow, NULL);
fileSink->addData(configData, configLen, timeNow);
delete[] configData;
}
//Start the data flow
subsession->sink->startPlaying(*(subsession->readSource()),
subsessionAfterPlaying, subsession);
// Also set a handler to be called if a RTCP "BYE" arrives
// for this subsession:
if (subsession->rtcpInstance() != NULL) {
subsession->rtcpInstance()->setByeHandler(
subsessionByeHandler, subsession);
}
madeProgress = True;
}
}
if (!madeProgress)
shutdown();
}
}
// Finally, start playing each subsession, to start the data flow:
if (duration == 0) {
if (scale > 0)
duration = session->playEndTime() - initialSeekTime; // use SDP end time
else if (scale < 0)
duration = initialSeekTime;
}
if (duration < 0)
duration = 0.0;
endTime = initialSeekTime;
if (scale > 0) {
if (duration <= 0)
endTime = -1.0f;
else
endTime = initialSeekTime + duration;
} else {
endTime = initialSeekTime - duration;
if (endTime < 0)
endTime = 0.0f;
}
//Send the PLAY request; only after this can data be received from the server
startPlayingSession(session, initialSeekTime, endTime, scale,
continueAfterPLAY);
}
Read the comments carefully and this function should be easy to understand.
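For completeness, here is a minimal sketch (ours, not playCommon.cpp verbatim) of what setupSubsession() and startPlayingSession() boil down to in the current RTSPClient API; the extra arguments shown carry illustrative values:
void setupSubsession(MediaSubsession* subsession, Boolean streamUsingTCP,
                     RTSPClient::responseHandler* afterFunc) {
  // One SETUP per subsession; afterFunc (continueAfterSETUP) re-enters
  // setupStreams() to set up the next track:
  ourRTSPClient->sendSetupCommand(*subsession, afterFunc,
                                  False /*streamOutgoing*/, streamUsingTCP);
}

void startPlayingSession(MediaSession* session, double start, double end,
                         float scale, RTSPClient::responseHandler* afterFunc) {
  // A single PLAY for the whole session; RTP packets start arriving afterwards:
  ourRTSPClient->sendPlayCommand(*session, afterFunc, start, end, scale);
}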