On the sender side we transmit to port 40020, so here we listen on port 40020; the payload type 97 matches the sender's configuration.
audioRtpWrapper.open(40020, 97, 1000);
Set the RTP callback used to receive the raw AAC data.
audioRtpWrapper.setCallback { data, len ->
    if (len < 4) return@setCallback
    val index = indexArray.take()
    if (currentTime == 0L) {
        currentTime = System.currentTimeMillis()
    }
    val buffer = audioDecodeCodec.mediaCodec.getInputBuffer(index)
    val time = (System.currentTimeMillis() - currentTime) * 1000
    if (hasAuHeader) {
        // Skip the 4-byte AU header section before the AAC frame
        buffer?.position(0)
        buffer?.put(data, 4, len - 4)
        buffer?.position(0)
        audioDecodeCodec.mediaCodec.queueInputBuffer(index, 0, len - 4, time, 1)
    } else {
        buffer?.position(0)
        buffer?.put(data, 0, len)
        buffer?.position(0)
        audioDecodeCodec.mediaCodec.queueInputBuffer(index, 0, len, time, 1)
    }
}
Once AAC data arrives it is written straight into a MediaCodec input buffer; the handling depends on whether an AU header is present. If there is an AU header, skip it: the AU-headers-length field and the AU header together occupy 4 bytes. When fetching an input buffer by index, pay close attention to the index's validity: it must be the index of a buffer that MediaCodec has released back to us, otherwise a buffer-write error will occur.
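The 4-byte layout described above follows RFC 3640 (mpeg4-generic, aac-hbr mode): a 16-bit AU-headers-length field (in bits, 16 for a single AU), then one AU header consisting of a 13-bit AU-size and a 3-bit AU-index. As a sketch, a hypothetical helper (not part of the article's code) that extracts the AU size could look like this:

```kotlin
// Hypothetical helper: parse the 4-byte AU header section of an
// RFC 3640 aac-hbr RTP payload and return the AU size in bytes.
fun parseAuSize(data: ByteArray): Int {
    require(data.size >= 4) { "need at least 4 bytes of AU header" }
    // Bytes 0-1: AU-headers-length in bits (16 for a single AU header)
    val headerLengthBits = ((data[0].toInt() and 0xff) shl 8) or (data[1].toInt() and 0xff)
    require(headerLengthBits == 16) { "expected a single 16-bit AU header" }
    // Bytes 2-3: 13-bit AU-size followed by 3-bit AU-index
    val auHeader = ((data[2].toInt() and 0xff) shl 8) or (data[3].toInt() and 0xff)
    return auHeader ushr 3
}

fun main() {
    // A 100-byte AAC frame: AU-size 100 -> AU header = 100 shl 3 = 0x0320
    println(parseAuSize(byteArrayOf(0x00, 0x10, 0x03, 0x20))) // 100
}
```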
When creating the MediaCodec you must specify the sample rate, channel count, format, and so on; these must match the sender's settings.
val sampleRate = 44100
val audioFormat = MediaFormat.createAudioFormat(
    MediaFormat.MIMETYPE_AUDIO_AAC, sampleRate, audioChannelCount
)
audioFormat.setByteBuffer("csd-0", audioSpecificConfig)
var currentTime = 0L
indexArray.clear()
audioDecodeCodec = object : AudioDecodeCodec(audioFormat) {
    override fun onInputBufferAvailable(codec: MediaCodec, index: Int) {
        indexArray.put(index)
    }
}
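The article does not show how indexArray is declared; assuming it is a blocking queue, a minimal sketch of the handoff looks like this. onInputBufferAvailable runs on the codec's callback thread and publishes free input-buffer indices; the RTP callback thread then blocks in take() until MediaCodec has actually released a buffer, which is exactly the index-validity guarantee warned about above.

```kotlin
import java.util.concurrent.LinkedBlockingQueue

// Assumed declaration of the article's indexArray: a thread-safe queue of
// input-buffer indices that MediaCodec has released for reuse.
val indexArray = LinkedBlockingQueue<Int>()

fun main() {
    indexArray.put(3)           // codec callback thread: buffer 3 is free
    indexArray.put(7)           // codec callback thread: buffer 7 is free
    println(indexArray.take())  // RTP thread receives 3 first (FIFO order)
    println(indexArray.take())  // then 7
}
```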
Pay particular attention to the "csd-0" parameter: leaving it unset, or setting it incorrectly, will cause buffer read/write errors during decoding. The audioSpecificConfig variable can be generated as in the following code:
private val audioChannelCount = 1
private val audioProfile = 1  // profile + 1 = 2, i.e. AAC-LC
/**
 * Sampling-frequency index table:
 * 96000, 88200, 64000, 48000, 44100, 32000, 24000, 22050,
 * 16000, 12000, 11025, 8000, 7350, -, -, -
 */
private val audioIndex = 4  // 44100 Hz
private val audioSpecificConfig = ByteArray(2).apply {
    // byte 0: 5-bit audioObjectType | top 3 bits of the 4-bit frequency index
    this[0] = ((audioProfile + 1).shl(3).and(0xff))
        .or(audioIndex.ushr(1).and(0xff)).toByte()
    // byte 1: low bit of the frequency index | 4-bit channel configuration
    this[1] = (audioIndex.and(0x01).shl(7))
        .or(audioChannelCount.shl(3).and(0xff)).toByte()
}.let {
    val buffer = ByteBuffer.allocate(2)
    buffer.put(it)
    buffer.position(0)
    buffer
}
Of course, even without setting csd-0, decoding can still work if the RTP stream itself carries the config data. But RTP is broadcast-style: it has no way to know when a receiver needs the config data rebroadcast, so setting csd-0 is the safer choice. (Note: "config data" here refers to the config buffer that the sender's MediaCodec produces when encoding starts.)
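As a sanity check (a sketch, not part of the article's code), the two AudioSpecificConfig bytes can be decoded back into profile, frequency index, and channel count. For the values above (AAC-LC, 44100 Hz, mono) the bytes are 0x12 0x08:

```kotlin
// Decode a 2-byte AudioSpecificConfig: 5-bit audioObjectType,
// 4-bit sampling-frequency index, 4-bit channel configuration.
fun decodeAsc(b0: Int, b1: Int): Triple<Int, Int, Int> {
    val profile = (b0 ushr 3) and 0x1f
    val index = ((b0 and 0x07) shl 1) or ((b1 ushr 7) and 0x01)
    val channels = (b1 ushr 3) and 0x0f
    return Triple(profile, index, channels)
}

fun main() {
    // AAC-LC (audioObjectType 2), index 4 (44100 Hz), 1 channel
    println(decodeAsc(0x12, 0x08)) // (2, 4, 1)
}
```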
First create an AudioTrack from the MediaFormat information and start playback. The parameters set here must match those of the MediaCodec; also note that playback uses stream mode (MODE_STREAM).
override fun onOutputFormatChanged(codec: MediaCodec, format: MediaFormat) {
    val sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE)
    val channelCount = format.getInteger(MediaFormat.KEY_CHANNEL_COUNT)
    // AudioTrack expects an output channel mask, not a raw channel count
    val channelConfig = if (channelCount == 1) AudioFormat.CHANNEL_OUT_MONO
                        else AudioFormat.CHANNEL_OUT_STEREO
    val minBufferSize = AudioTrack.getMinBufferSize(
        sampleRate, channelConfig, AudioFormat.ENCODING_PCM_16BIT
    )
    audioTrack = AudioTrack(
        AudioManager.STREAM_VOICE_CALL, sampleRate, channelConfig,
        AudioFormat.ENCODING_PCM_16BIT, minBufferSize, AudioTrack.MODE_STREAM
    )
    audioTrack?.play()
}
Read the decoded data from MediaCodec and write it to the AudioTrack, here using blocking writes.
override fun onOutputBufferAvailable(codec: MediaCodec, index: Int, info: MediaCodec.BufferInfo) {
    Log.d("audio_dragon", "onOutputBufferAvailable $index")
    kotlin.runCatching {
        val buffer = codec.getOutputBuffer(index) ?: return
        buffer.position(info.offset)
        // Blocking write: returns only after the PCM data is queued to the track
        audioTrack?.write(buffer, info.size, AudioTrack.WRITE_BLOCKING)
        codec.releaseOutputBuffer(index, false)
    }
}