Before starting the analysis, here is a brief description of the Asterisk data structures and variables involved in codec negotiation.
ast_channel: defines the generic channel data structure
struct ast_channel {
    const struct ast_channel_tech *tech;   /*!< Technology (point to channel driver) */
    void *tech_pvt;                        /*!< Private data used by the technology driver */
    ...
};
The tech and tech_pvt members are specific to the technology the channel uses. tech points to the driver data structure for a particular channel technology (such as SIP); its type is struct ast_channel_tech. The channel driver defines the channel type and description, the basic call-related function pointers (call, hangup, answer, transfer, bridge, early-bridge), the frame read/write function pointers, the senders for DTMF, text, image, HTML and video, the channel state indication function pointer (indicate), and so on; the operations on a channel are essentially defined here. tech_pvt holds the private data of the concrete channel technology, such as sip_pvt. Both depend on the technology the channel uses, whereas the remaining members of ast_channel can be regarded as generic data common to all channel types.
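To make the description of tech more concrete, here is an abbreviated sketch of struct ast_channel_tech; the field and callback names follow the 1.8-era channel.h that this walkthrough is based on, but the structure is trimmed and not verbatim:

/* Abbreviated sketch of struct ast_channel_tech (see channel.h); many callbacks omitted. */
struct ast_channel_tech {
    const char *type;            /* channel technology name, e.g. "SIP" */
    const char *description;
    format_t capabilities;       /* formats this driver handles natively */

    struct ast_channel *(*requester)(const char *type, format_t cap,
                                     const struct ast_channel *requestor, void *data, int *cause);
    int (*call)(struct ast_channel *chan, char *addr, int timeout);
    int (*hangup)(struct ast_channel *chan);
    int (*answer)(struct ast_channel *chan);
    struct ast_frame *(*read)(struct ast_channel *chan);
    int (*write)(struct ast_channel *chan, struct ast_frame *frame);
    int (*indicate)(struct ast_channel *chan, int condition, const void *data, size_t datalen);
    /* ... transfer, bridge, early_bridge, send_digit_begin/end, send_text,
     *     send_html, send_image, send_video, ... */
};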
Two global variables that hold the default codec preferences: sip_cfg.capability and default_prefs
sip_cfg.capability:
sip_cfg is the global structure that holds the [general] section configuration from sip.conf, and sip_cfg.capability is the set of codecs Asterisk supports as configured in that section. sip_cfg.capability is initialized in reload_config: first the five codecs in the DEFAULT_CAPABILITY macro (defined in sip.h), namely ulaw | testlaw | alaw | gsm | h263, are added to sip_cfg.capability, then the disallow and allow options of the [general] section of sip.conf are parsed (ast_parse_allow_disallow), removing the disallow codecs first and then adding the allow codecs to sip_cfg.capability.
default_prefs:
default_prefs is the global structure that holds the default audio codec preference order.
default_prefs is also initialized in reload_config. The disallow and allow options of the [general] section of sip.conf are parsed (ast_parse_allow_disallow), the disallow codecs are removed first, and then the audio codecs from allow are added to default_prefs. default_prefs contains no video codecs, because Asterisk cannot transcode video and can only use the video codecs that are offered.
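As an illustration (the codec choices below are hypothetical), a [general] section like the following is processed by ast_parse_allow_disallow in exactly this order: disallow strips codecs first, then each allow adds a codec back and, for audio codecs, appends it to default_prefs in the order written:

[general]
disallow=all        ; first remove every codec from the capability set
allow=ulaw          ; then add codecs back; audio codecs also go into default_prefs in this order
allow=alaw
allow=gsm
allow=h263          ; video: added to sip_cfg.capability but never to default_prefs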
Codec-related members of the sip_pvt structure p:
p->peercapability: the codecs supported by the endpoint (device) behind the user/peer.
p->capability: the codec configuration of the user/peer. It is initialized to the codecs configured by the allow options in the [general] section of sip.conf, and reassigned in check_peer_ok to the codecs of the matching user/peer.
p->prefcodec: used only for outbound calls; passed in as a parameter from the inbound channel.
p->jointcapability: for an inbound channel, the codecs that both the user/peer configuration and the endpoint support. For an outbound channel, before the INVITE is sent and an SDP-bearing response is received, it is the subset of p->capability that can be transcoded to and from the nativeformats passed in from the inbound channel (i.e. p->prefcodec); once an SDP-bearing response from the endpoint is received, it is set to the codecs carried in that SDP while the SDP is processed.
p->prefs: the audio codec preference of the user/peer. It is initialized in sip_alloc to the value of the global default_prefs structure, and reassigned in check_peer_ok to the audio codecs of the matching user/peer.
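For reference, the codec-related members of struct sip_pvt look roughly like this (an abbreviated excerpt of the structure in chan_sip; other members are omitted and the comments paraphrase the descriptions above):

/* Abbreviated excerpt: codec-related members of struct sip_pvt. */
struct sip_pvt {
    /* ... */
    format_t capability;             /* configured codecs for this user/peer */
    format_t jointcapability;        /* codecs usable at both ends of this call */
    format_t peercapability;         /* codecs supported by the remote endpoint */
    format_t prefcodec;              /* preferred codec (outbound calls only) */
    int noncodeccapability;          /* e.g. RFC 2833 telephone-event */
    int jointnoncodeccapability;     /* joint non-codec capability */
    struct ast_codec_pref prefs;     /* audio codec preference order */
    /* ... */
};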
Inbound call negotiation:
When the chan_sip module is loaded or reloaded, build_peer is called to read the configuration from sip.conf or users.conf and store each account's configuration in a sip_peer structure. build_peer calls set_peer_defaults to initialize the members of this structure (e.g. sippeer->capability = sip_cfg.capability; sippeer->prefs = default_prefs;), and then parses sip.conf (or users.conf). If the account's section contains allow options, the initial values of the peer's capability and prefs members are overridden: all codecs from allow are assigned to sippeer->capability, and the audio codecs from allow are assigned to sippeer->prefs in their original order.
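A minimal sketch of this default-then-override flow, under the assumption that the peer's allow/disallow lines are fed to ast_parse_allow_disallow while the config section is walked (condensed; not the verbatim build_peer code, and the variable names v/varlist are illustrative):

/* Simplified sketch of the per-peer codec setup in build_peer()/set_peer_defaults(). */
static void set_peer_codec_defaults(struct sip_peer *peer)
{
    peer->capability = sip_cfg.capability;   /* start from the [general] capability */
    peer->prefs = default_prefs;             /* start from the [general] audio preference order */
}

/* ... later, while walking the peer's section in sip.conf/users.conf ... */
for (v = varlist; v; v = v->next) {
    if (!strcasecmp(v->name, "allow") || !strcasecmp(v->name, "disallow")) {
        /* disallow removes codecs from peer->capability; allow adds them and
         * appends the audio codecs to peer->prefs in the order they are listed. */
        ast_parse_allow_disallow(&peer->prefs, &peer->capability, v->value,
                                 !strcasecmp(v->name, "allow"));
    }
}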
Initialization of p->prefs, p->capability and the inbound channel's nativeformats:
When Asterisk receives a SIP request, it goes through handle_request_do -> find_call -> sip_alloc. In sip_alloc the capability and prefs members of the sip_pvt structure p are initialized: p->capability = sip_cfg.capability; p->prefs = default_prefs.
Next, along handle_request_do -> handle_incoming -> handle_request_invite -> check_user_full -> check_peer_ok, check_peer_ok assigns the codec members of the sip_peer that was found to p->capability, p->jointcapability and p->prefs, re-initializing p->capability and p->prefs and initializing p->jointcapability:
p->capability = peer->capability;
p->prefs = peer->prefs;
p->jointcapability = peer->capability;
Then, along handle_request_do -> handle_incoming -> handle_request_invite -> sip_new, sip_new creates the SIP channel structure tmp and sets its nativeformats member. First, ast_codec_choose walks the codecs in the sip_pvt member i->prefs (the i here is the sip_pvt structure p allocated in sip_alloc, passed in as a parameter) in order from the front and picks the first one that is also in the sip_pvt capability as the value of nativeformats. If none is found, ast_best_codec is called to pick one; that function contains an internal array of audio codecs named prefs and walks it in order until it finds an audio codec that is in the sip_pvt capability. The result is then OR-ed with the video and text capabilities and assigned to tmp->nativeformats.
/* Select our native format based on codec preference until we receive
   something from another device to the contrary. */
if (i->jointcapability) {               /* The joint capabilities of us and peer */
    what = i->jointcapability;
    video = i->jointcapability & AST_FORMAT_VIDEO_MASK;
    text = i->jointcapability & AST_FORMAT_TEXT_MASK;
} else if (i->capability) {             /* Our configured capability for this peer */
    what = i->capability;
    video = i->capability & AST_FORMAT_VIDEO_MASK;
    text = i->capability & AST_FORMAT_TEXT_MASK;
} else {
    what = sip_cfg.capability;          /* Global codec support */
    video = sip_cfg.capability & AST_FORMAT_VIDEO_MASK;
    text = sip_cfg.capability & AST_FORMAT_TEXT_MASK;
}

/* Set the native formats for audio and merge in video */
tmp->nativeformats = ast_codec_choose(&i->prefs, what, 1) | video | text;
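The selection step described above can be summarized with the following simplified sketch of the ast_codec_choose / ast_best_codec behavior (not the verbatim frame.c source):

/* Simplified sketch: choose one codec from 'formats', honoring the order in 'pref';
 * fall back to ast_best_codec() when nothing in the preference list matches. */
static format_t choose_codec_sketch(struct ast_codec_pref *pref, format_t formats, int find_best)
{
    int x;

    for (x = 0; x < 64; x++) {
        format_t bit = ast_codec_pref_index(pref, x);   /* next codec in preference order */
        if (!bit)
            break;                                      /* end of the preference list */
        if (formats & bit)
            return bit;                                 /* first preferred codec that is allowed */
    }

    /* No preferred codec matched: let ast_best_codec() walk its own fixed
     * audio-quality list and pick one codec out of 'formats'. */
    return find_best ? ast_best_codec(formats) : 0;
}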
handle_request_invite then calls process_sdp (note: process_sdp does not handle the INVITE of an outbound call; it is called for incoming INVITEs and for 200 OK responses). It parses the codecs carried in the INVITE's SDP one by one, ORs their codes together into p->peercapability, and assigns p->capability & p->peercapability to p->jointcapability.
/* Scan media stream (m=) specific parameters loop */
while (!ast_strlen_zero(nextm)) {
    int audio = FALSE;
    int video = FALSE;
    int image = FALSE;
    int text = FALSE;
    char protocol[5] = {0,};
    int x;

    numberofports = 1;
    len = -1;
    start = next;
    m = nextm;
    iterator = next;
    nextm = get_sdp_iterate(&next, req, "m");

    /* Search for audio media definition */
    /* Handle the SDP "m=" audio media line, e.g.: m=audio 13422 RTP/AVP 0 3 101 */
    if ((sscanf(m, "audio %30u/%30u RTP/%4s %n", &x, &numberofports, protocol, &len) == 3 && len > 0 && x) ||
        (sscanf(m, "audio %30u RTP/%4s %n", &x, protocol, &len) == 2 && len > 0 && x)) {
        if (!strcmp(protocol, "SAVP")) {
            secure_audio = 1;
        } else if (strcmp(protocol, "AVP")) {
            ast_log(LOG_WARNING, "unknown SDP media protocol in offer: %s\n", protocol);
            continue;
        }
        if (p->offered_media[SDP_AUDIO].order_offered) {
            ast_log(LOG_WARNING, "Multiple audio streams are not supported\n");
            return -3;
        }
        audio = TRUE;
        p->offered_media[SDP_AUDIO].order_offered = ++numberofmediastreams;
        portno = x;

        /* Scan through the RTP payload types specified in a "m=" line: */
        codecs = m + len;
        ast_copy_string(p->offered_media[SDP_AUDIO].codecs, codecs, sizeof(p->offered_media[SDP_AUDIO].codecs));
        for (; !ast_strlen_zero(codecs); codecs = ast_skip_blanks(codecs + len)) {
            if (sscanf(codecs, "%30u%n", &codec, &len) != 1) {
                ast_log(LOG_WARNING, "Error in codec string '%s'\n", codecs);
                return -1;
            }
            if (debug)
                ast_verbose("Found RTP audio format %d\n", codec);
            ast_rtp_codecs_payloads_set_m_type(&newaudiortp, NULL, codec);
        }
    /* Search for video media definition */
    /* Handle the SDP "m=" video media line, e.g.: m=video 12036 RTP/AVP 34 98 99 */
    } else if ((sscanf(m, "video %30u/%30u RTP/%4s %n", &x, &numberofports, protocol, &len) == 3 && len > 0 && x) ||
               (sscanf(m, "video %30u RTP/%4s %n", &x, protocol, &len) == 2 && len >= 0 && x)) {
        if (!strcmp(protocol, "SAVP")) {
            secure_video = 1;
        } else if (strcmp(protocol, "AVP")) {
            ast_log(LOG_WARNING, "unknown SDP media protocol in offer: %s\n", protocol);
            continue;
        }
        if (p->offered_media[SDP_VIDEO].order_offered) {
            ast_log(LOG_WARNING, "Multiple video streams are not supported\n");
            return -3;
        }
        video = TRUE;
        p->novideo = FALSE;
        p->offered_media[SDP_VIDEO].order_offered = ++numberofmediastreams;
        vportno = x;

        /* Scan through the RTP payload types specified in a "m=" line: */
        codecs = m + len;
        ast_copy_string(p->offered_media[SDP_VIDEO].codecs, codecs, sizeof(p->offered_media[SDP_VIDEO].codecs));
        for (; !ast_strlen_zero(codecs); codecs = ast_skip_blanks(codecs + len)) {
            if (sscanf(codecs, "%30u%n", &codec, &len) != 1) {
                ast_log(LOG_WARNING, "Error in codec string '%s'\n", codecs);
                return -1;
            }
            if (debug)
                ast_verbose("Found RTP video format %d\n", codec);
            ast_rtp_codecs_payloads_set_m_type(&newvideortp, NULL, codec);
        }
    /* Search for text media definition */
    }
    /* ...... */

    /* Media stream specific parameters */
    while ((type = get_sdp_line(&iterator, next - 1, req, &value)) != '\0') {
        int processed = FALSE;

        switch (type) {
        case 'c':
            if (audio) {
                if (process_sdp_c(value, &audiosa)) {
                    processed = TRUE;
                    sa = &audiosa;
                }
            } else if (video) {
                if (process_sdp_c(value, &videosa)) {
                    processed = TRUE;
                    vsa = &videosa;
                }
            } else if (text) {
                if (process_sdp_c(value, &textsa)) {
                    processed = TRUE;
                    tsa = &textsa;
                }
            } else if (image) {
                if (process_sdp_c(value, &imagesa)) {
                    processed = TRUE;
                    isa = &imagesa;
                }
            }
            break;
        /* Handle the SDP "a=" media attribute lines, e.g.: a=rtpmap:0 PCMU/8000 */
        case 'a':
            /* Audio specific scanning */
            if (audio) {
                if (process_sdp_a_sendonly(value, &sendonly))
                    processed = TRUE;
                else if (process_crypto(p, p->rtp, &p->srtp, value))
                    processed = TRUE;
                /* process_sdp_a_audio calls ast_rtp_codecs_payloads_set_rtpmap_type_rate, which,
                 * based on the codec code (e.g. AST_FORMAT_G726) and the payload type, adds the
                 * corresponding codec to newaudiortp */
                else if (process_sdp_a_audio(value, p, &newaudiortp, &last_rtpmap_codec))
                    processed = TRUE;
            }
            /* Video specific scanning */
            else if (video) {
                if (process_sdp_a_sendonly(value, &vsendonly))
                    processed = TRUE;
                else if (process_crypto(p, p->vrtp, &p->vsrtp, value))
                    processed = TRUE;
                /* process_sdp_a_video calls ast_rtp_codecs_payloads_set_rtpmap_type_rate, which,
                 * based on the codec code (e.g. AST_FORMAT_H264) and the payload type, adds the
                 * corresponding codec to newvideortp */
                else if (process_sdp_a_video(value, p, &newvideortp, &last_rtpmap_codec))
                    processed = TRUE;
            }
            /* Text (T.140) specific scanning */
            else if (text) {
                if (process_sdp_a_text(value, p, &newtextrtp, red_fmtp, &red_num_gen, red_data_pt, &last_rtpmap_codec))
                    processed = TRUE;
                else if (process_crypto(p, p->trtp, &p->tsrtp, value))
                    processed = TRUE;
            }
            /* Image (T.38 FAX) specific scanning */
            else if (image) {
                if (process_sdp_a_image(value, p))
                    processed = TRUE;
            }
            break;
        }

        ast_debug(3, "Processing media-level (%s) SDP %c=%s... %s\n",
                  (audio == TRUE)? "audio" : (video == TRUE)? "video" : "image",
                  type, value, (processed == TRUE)? "OK." : "UNSUPPORTED.");
    }
}

/* ...... */

/* Now gather all of the codecs that we are asked for: */
/* Add the Asterisk codecs found in newaudiortp to peercapability, and the non-codec payloads
 * (AST_RTP_CN, AST_RTP_DTMF, AST_RTP_CISCO_DTMF) to peernoncodeccapability */
ast_rtp_codecs_payload_formats(&newaudiortp, &peercapability, &peernoncodeccapability);
ast_rtp_codecs_payload_formats(&newvideortp, &vpeercapability, &vpeernoncodeccapability);
ast_rtp_codecs_payload_formats(&newtextrtp, &tpeercapability, &tpeernoncodeccapability);

newjointcapability = p->capability & (peercapability | vpeercapability | tpeercapability);
newpeercapability = (peercapability | vpeercapability | tpeercapability);
newnoncodeccapability = p->noncodeccapability & peernoncodeccapability;

/* ...... */

if (portno != -1 || vportno != -1 || tportno != -1) {
    /* We are now ready to change the sip session and p->rtp and p->vrtp
       with the offered codecs, since they are acceptable */
    /* Assign p->jointcapability and p->peercapability */
    p->jointcapability = newjointcapability;              /* Our joint codec profile for this call */
    p->peercapability = newpeercapability;                /* The other sides capability in latest offer */
    p->jointnoncodeccapability = newnoncodeccapability;   /* DTMF capabilities */

    /* respond with single most preferred joint codec, limiting the other side's choice */
    if (ast_test_flag(&p->flags[1], SIP_PAGE2_PREFERRED_CODEC)) {
        p->jointcapability = ast_codec_choose(&p->prefs, p->jointcapability, 1);
    }
}
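A hypothetical worked example of the intersection computed above: if the peer/user configuration allows ulaw, alaw and h263, and the endpoint's SDP offers ulaw, gsm and h263, the joint capability ends up as ulaw | h263:

/* Hypothetical example of the capability intersection computed in process_sdp(). */
format_t capability      = AST_FORMAT_ULAW | AST_FORMAT_ALAW | AST_FORMAT_H263;  /* p->capability      */
format_t peercapability  = AST_FORMAT_ULAW | AST_FORMAT_GSM  | AST_FORMAT_H263;  /* offered in the SDP */
format_t jointcapability = capability & peercapability;   /* == AST_FORMAT_ULAW | AST_FORMAT_H263 */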
Outbound call negotiation:
dial_exec -> dial_exec_full -> ast_request -> sip_request_call: the inbound channel's nativeformats is passed through ast_request to sip_request_call, which calls sip_alloc to allocate the sip_pvt structure q for the outbound channel (note: the sip_pvt structure p in the code below is this q). Along sip_request_call -> create_addr -> find_peer, find_peer looks up the sip_peer data structure by the called peer name. In non-realtime mode, find_peer can look up a sip_peer in two ways: by peer name or by IP address. For an outbound call the first way is used (the addr parameter passed to find_peer is NULL); for an inbound call with type=peer the second way is used. Then sip_request_call -> create_addr -> create_addr_from_peer assigns the codec configuration of the called sip_peer to q->capability and calls dialog_initialize_rtp to initialize RTP. Video RTP is initialized in either of two cases:
1. the peer/user's videosupport option is always (SIP_PAGE2_VIDEOSUPPORT_ALWAYS);
2. the peer/user's videosupport option is true (SIP_PAGE2_VIDEOSUPPORT) and q->capability contains a video codec.
In both cases video RTP is initialized. Control then returns to sip_request_call, which assigns the inbound channel's nativeformats to q->prefcodec and initializes q->jointcapability to q->prefcodec & q->capability.
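Condensed, the assignments described in this outbound path amount to the following sketch (not verbatim chan_sip code; q and peer are the structures named above, and format stands for the nativeformats passed into sip_request_call):

/* Sketch of the outbound codec setup described above. */
q->capability = peer->capability;            /* in create_addr_from_peer(): the called peer's codec config */
q->prefcodec  = format;                      /* in sip_request_call(): nativeformats of the inbound channel */
q->jointcapability = q->prefcodec & q->capability;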
dial_exec -> dial_exec_full -> ast_call -> sip_call: sip_call calls ast_rtp_instance_available_formats to check whether there are usable translation paths between the codecs in p->capability and those in p->prefcodec, and assigns to p->jointcapability the codecs in p->capability that are either identical to, or can be transcoded to and from, the codecs in p->prefcodec.
p->jointcapability = ast_rtp_instance_available_formats(p->rtp, p->capability, p->prefcodec);
sip_call -> transmit_invite -> add_sdp: add_sdp adds the codecs to the SDP of the INVITE request. First, if p->prefcodec and p->jointcapability share an audio codec, the audio codec from p->prefcodec is added to the SDP (p->prefcodec contains only one audio codec). Next, the codecs in p->jointcapability that also appear in p->prefs are added to the SDP (p->prefs contains only audio codecs). Finally, the remaining codecs in p->jointcapability, including the video codecs, are added to the SDP.
capability = p->jointcapability;

/* ...... */

/* First, if p->prefcodec and capability (i.e. p->jointcapability) share an audio codec,
 * add the audio codec from p->prefcodec to the SDP (p->prefcodec holds only one audio codec). */
//if (capability & p->prefcodec) {
if (capability & p->prefcodec & AST_FORMAT_AUDIO_MASK) {
    /* When capability and p->prefcodec share a video codec but no audio codec, the original
     * condition causes a negotiation error, e.g.:
     *   p->jointcapability: AST_FORMAT_ULAW | AST_FORMAT_ALAW | AST_FORMAT_H264
     *   p->prefcodec:       AST_FORMAT_GSM  | AST_FORMAT_H264
     * In that case GSM would be added to the SDP, which is clearly wrong; the condition should
     * be changed to: if (capability & p->prefcodec & AST_FORMAT_AUDIO_MASK) */
    format_t codec = p->prefcodec & AST_FORMAT_AUDIO_MASK;  /* p->prefcodec holds only one audio codec */

    add_codec_to_sdp(p, codec, &m_audio, &a_audio, debug, &min_audio_packet_size);
    alreadysent |= codec;
}

/* Next, add the codecs in capability (i.e. p->jointcapability) that are also in p->prefs
 * (initialized from the global default_prefs, which holds only audio codecs) to the SDP. */
/* Start by sending our preferred audio/video codecs */
for (x = 0; x < 64; x++) {
    format_t codec;

    if (!(codec = ast_codec_pref_index(&p->prefs, x)))
        break;

    if (!(capability & codec))
        continue;

    if (alreadysent & codec)
        continue;

    add_codec_to_sdp(p, codec, &m_audio, &a_audio, debug, &min_audio_packet_size);
    alreadysent |= codec;
}

/* Finally, add the remaining codecs in capability (i.e. p->jointcapability),
 * including the video and text codecs, to the SDP. */
/* Now send any other common audio and video codecs, and non-codec formats: */
for (x = 1ULL; x <= AST_FORMAT_TEXT_MASK; x <<= 1) {   /* walk every format bit */
    if (!(capability & x))      /* Codec not requested */
        continue;

    if (alreadysent & x)        /* Already added to SDP */
        continue;

    if (x & AST_FORMAT_AUDIO_MASK)
        add_codec_to_sdp(p, x, &m_audio, &a_audio, debug, &min_audio_packet_size);
    else if (x & AST_FORMAT_VIDEO_MASK)
        add_vcodec_to_sdp(p, x, &m_video, &a_video, debug, &min_video_packet_size);
    else if (x & AST_FORMAT_TEXT_MASK)
        add_tcodec_to_sdp(p, x, &m_text, &a_text, debug, &min_text_packet_size);
}
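To see why the extra AST_FORMAT_AUDIO_MASK guard matters, here is a small hypothetical check of the two conditions using the masks from the comment above:

/* Hypothetical masks taken from the comment above. */
format_t jointcapability = AST_FORMAT_ULAW | AST_FORMAT_ALAW | AST_FORMAT_H264;
format_t prefcodec       = AST_FORMAT_GSM  | AST_FORMAT_H264;

if (jointcapability & prefcodec) {
    /* Original condition: true, because both sets contain H.264 (a video codec),
     * so GSM (= prefcodec & AST_FORMAT_AUDIO_MASK) would be put into the SDP even
     * though it is not part of jointcapability. */
}

if (jointcapability & prefcodec & AST_FORMAT_AUDIO_MASK) {
    /* Fixed condition: false, because the sets share no audio codec; this branch is
     * skipped and the preference/joint loops above pick the audio codec instead. */
}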