MediaPlayer, MediaRecorder, AudioRecord: these three Android multimedia classes should ring a bell for everyone (= = and if they don't, at least pretend they do). Between them they cover audio/video playback, audio/video recording and so on...
But there is also an abandoned problem child: AudioTrack, which everyone walks right past because it is so awkward to use (definitely not out of laziness). With it, the Android multimedia family of four is complete: MediaPlayer and MediaRecorder are the nicely wrapped playback and recording classes, while AudioRecord and AudioTrack are what you reach for when you need a certain amount of control over the data and your own customisation. (What, there's also SoundPool? La la la, I can't hear you...)
When that guy raised this requirement, I thought of the AudioTrack class. Its usage is very similar to AudioRecord: initialise it, then write your data buffer to it and it starts making noise. And since the data is read out by you, you can do whatever you like to the audio data along the way.
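As a quick reminder of how little there is to AudioTrack, a minimal streaming sketch looks roughly like this (the 44.1 kHz mono format and the readPcmChunk() data source are placeholder assumptions of mine, not part of the real player):
// Minimal AudioTrack streaming sketch: assumes 44.1 kHz mono 16-bit PCM and a
// hypothetical readPcmChunk() that hands back raw PCM bytes.
int sampleRate = 44100;
int minBuf = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        minBuf, AudioTrack.MODE_STREAM);
track.play();                                  // start playback, waiting for writes
byte[] chunk;
while ((chunk = readPcmChunk()) != null) {     // hypothetical PCM source
    track.write(chunk, 0, chunk.length);       // blocking write; this is what makes the sound
}
track.stop();
track.release();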
But then comes the problem: AudioTrack can only play raw PCM audio, so what about MP3? At this point almighty Google pointed me in a direction: "port libmad to the Android platform", much like the previous article's use of mp3lame to transcode while recording (worth a look if you're interested, it's quite good).
But WTF (ノಠ益ಠ)ノ彡┻━┻, such a heavyweight approach is hardly a fit for our agile (read: lazy) development, and debugging JNI means falling into pit after pit. So, being a responsible socialist youth, I went looking and found MP3RadioStreamPlayer. From its description: "An MP3 online Stream player that uses MediaExtractor, MediaFormat, MediaCodec and AudioTrack meant as an alternative to using MediaPlayer...." Hmm~ I am moved to tears and know not what to say.
It needs Android 4.1 or above (which by now is basically every system), supports mp3, wma and more, and can be used for both encoding and decoding. Thank heavens, my former self really was ill-informed.
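As an aside, if you want to see which audio decoders a given device actually exposes, you can enumerate them through MediaCodecList (a small sketch using the pre-API-21 enumeration since we target 4.1+; the log tag is just for illustration):
// List every non-encoder codec that handles an audio/* MIME type.
for (int i = 0; i < MediaCodecList.getCodecCount(); i++) {
    MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
    if (info.isEncoder()) continue;            // only interested in decoders here
    for (String type : info.getSupportedTypes()) {
        if (type.startsWith("audio/")) {
            Log.i("CodecList", info.getName() + " decodes " + type);
        }
    }
}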
Among these, MediaExtractor is what gives us support for network data: it takes care of the middle step, parsing the raw data obtained from the DataSource into the elementary stream (ES) data the decoder needs, and handing it out through the MediaSource interface.
Let's go straight to the code; it's all commented (I'm really not just too lazy to explain ╮(╯_╰)╭):
The flow: set up the buffers, initialise MediaExtractor to fetch the data, let MediaCodec decode it, and initialise AudioTrack to play the decoded data.
ByteBuffer[] codecInputBuffers;
ByteBuffer[] codecOutputBuffers;

// configure the data source here (a file path or URL)
extractor = new MediaExtractor();
try {
    extractor.setDataSource(this.mUrlString);
} catch (Exception e) {
    mDelegateHandler.onRadioPlayerError(MP3RadioStreamPlayer.this);
    return;
}

// read the media file's format information
MediaFormat format = extractor.getTrackFormat(0);
// MIME type
String mime = format.getString(MediaFormat.KEY_MIME);

// make sure it is an audio track
if (!mime.startsWith("audio/")) {
    Log.e("MP3RadioStreamPlayer", "not an audio file!");
    return;
}

// channel count: mono or stereo
int channels = format.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
// if duration is 0, we are probably playing a live stream
// duration
duration = format.getLong(MediaFormat.KEY_DURATION);
// System.out.println("song total duration in seconds: " + duration / 1000000);

// bitrate
int bitrate = format.getInteger(MediaFormat.KEY_BIT_RATE);

// the actual decoder
try {
    // create a decoder for the given MIME type; it will hand us the decoded data
    codec = MediaCodec.createDecoderByType(mime);
} catch (IOException e) {
    e.printStackTrace();
}
codec.configure(format, null /* surface */, null /* crypto */, 0 /* flags */);
codec.start();
// buffers holding the compressed input data
codecInputBuffers = codec.getInputBuffers();
// buffers holding the decoded output data
codecOutputBuffers = codec.getOutputBuffers();

// get the sample rate to configure AudioTrack
int sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
// channel mask: AudioFormat.CHANNEL_OUT_MONO for mono, AudioFormat.CHANNEL_OUT_STEREO for stereo
int channelConfiguration = channels == 1 ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO;
//Log.i(TAG, "channelConfiguration=" + channelConfiguration);
Log.i(LOG_TAG, "mime " + mime);
Log.i(LOG_TAG, "sampleRate " + sampleRate);

// create our AudioTrack instance
audioTrack = new AudioTrack(
        AudioManager.STREAM_MUSIC,
        sampleRate,
        channelConfiguration,
        AudioFormat.ENCODING_PCM_16BIT,
        AudioTrack.getMinBufferSize(
                sampleRate,
                channelConfiguration,
                AudioFormat.ENCODING_PCM_16BIT
        ),
        AudioTrack.MODE_STREAM
);

// start playing; sound comes out once we start writing data
audioTrack.play();
extractor.selectTrack(0); // select the audio track to read from

// start decoding
final long kTimeOutUs = 10000; // dequeue timeout
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
boolean sawInputEOS = false;
boolean sawOutputEOS = false;
int noOutputCounter = 0;
int noOutputCounterLimit = 50;

while (!sawOutputEOS && noOutputCounter < noOutputCounterLimit && !doStop) {
    //Log.i(LOG_TAG, "loop ");
    noOutputCounter++;
    if (!sawInputEOS) {
        inputBufIndex = codec.dequeueInputBuffer(kTimeOutUs);
        bufIndexCheck++;
        // Log.d(LOG_TAG, " bufIndexCheck " + bufIndexCheck);
        if (inputBufIndex >= 0) {
            ByteBuffer dstBuf = codecInputBuffers[inputBufIndex];
            int sampleSize =
                    extractor.readSampleData(dstBuf, 0 /* offset */);
            long presentationTimeUs = 0;
            if (sampleSize < 0) {
                Log.d(LOG_TAG, "saw input EOS.");
                sawInputEOS = true;
                sampleSize = 0;
            } else {
                presentationTimeUs = extractor.getSampleTime();
            }
            // can throw illegal state exception (???)
            codec.queueInputBuffer(
                    inputBufIndex,
                    0 /* offset */,
                    sampleSize,
                    presentationTimeUs,
                    sawInputEOS ? MediaCodec.BUFFER_FLAG_END_OF_STREAM : 0);
            if (!sawInputEOS) {
                extractor.advance();
            }
        } else {
            Log.e(LOG_TAG, "inputBufIndex " + inputBufIndex);
        }
    }

    // decode to PCM and push it to the AudioTrack player
    int res = codec.dequeueOutputBuffer(info, kTimeOutUs);
    if (res >= 0) {
        //Log.d(LOG_TAG, "got frame, size " + info.size + "/" + info.presentationTimeUs);
        if (info.size > 0) {
            noOutputCounter = 0;
        }
        int outputBufIndex = res;
        ByteBuffer buf = codecOutputBuffers[outputBufIndex];
        final byte[] chunk = new byte[info.size];
        buf.get(chunk);
        buf.clear();
        if (chunk.length > 0) {
            // play the decoded PCM
            audioTrack.write(chunk, 0, chunk.length);
            // pack the bytes into shorts according to the platform's endianness,
            // then compute the volume of the audio data for feature detection
            short[] music = (!isBigEnd()) ? byteArray2ShortArrayLittle(chunk, chunk.length / 2) :
                    byteArray2ShortArrayBig(chunk, chunk.length / 2);
            sendData(music, music.length);
            calculateRealVolume(music, music.length);
            if (this.mState != State.Playing) {
                mDelegateHandler.onRadioPlayerPlaybackStarted(MP3RadioStreamPlayer.this);
            }
            this.mState = State.Playing;
            hadPlay = true;
        }
        // release the output buffer back to the codec
        codec.releaseOutputBuffer(outputBufIndex, false /* render */);
        if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
            Log.d(LOG_TAG, "saw output EOS.");
            sawOutputEOS = true;
        }
    } else if (res == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
        codecOutputBuffers = codec.getOutputBuffers();
        Log.d(LOG_TAG, "output buffers have changed.");
    } else if (res == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        MediaFormat oformat = codec.getOutputFormat();
        Log.d(LOG_TAG, "output format has changed to " + oformat);
    } else {
        Log.d(LOG_TAG, "dequeueOutputBuffer returned " + res);
    }
}

Log.d(LOG_TAG, "stopping...");
relaxResources(true);
this.mState = State.Stopped;
doStop = true;
// attempt reconnect
if (sawOutputEOS) {
    try {
        if (isLoop || !hadPlay) {
            MP3RadioStreamPlayer.this.play();
        }
        return;
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
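The byteArray2ShortArrayLittle / byteArray2ShortArrayBig helpers used above aren't shown in this snippet; all they do is pack pairs of PCM bytes into 16-bit samples in the right byte order. A rough sketch of what they might look like (my own guess, assuming 16-bit PCM with two bytes per sample):
// Pack little-endian PCM bytes into shorts: low byte first, then high byte.
private short[] byteArray2ShortArrayLittle(byte[] data, int items) {
    short[] out = new short[items];
    for (int i = 0; i < items; i++) {
        out[i] = (short) ((data[i * 2] & 0xFF) | (data[i * 2 + 1] << 8));
    }
    return out;
}

// Pack big-endian PCM bytes into shorts: high byte first, then low byte.
private short[] byteArray2ShortArrayBig(byte[] data, int items) {
    short[] out = new short[items];
    for (int i = 0; i < items; i++) {
        out[i] = (short) ((data[i * 2] << 8) | (data[i * 2 + 1] & 0xFF));
    }
    return out;
}
A ByteBuffer with the matching ByteOrder and asShortBuffer() would do the same job with less bit fiddling.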
Now that we have the data, the waveform is no problem at all: just like last time, feed it straight ┑( ̄Д  ̄)┍ into AudioWaveView's List and we're done.
There used to be a pit here that I lay in for quite a while. Back then I was still a telecom-engineering kid with a head full of FFTs, envelopes, autocorrelation, convolution and the like, so I happily pulled an algorithm off the internet to compute frequency and spectrum. The final result was rather worrying, especially the real-time behaviour while recording; who told me my maths isn't as good as other people's kids' ┑( ̄Д  ̄)┍.
Anyway, this time the implementation is nothing that deep, just a very low-tech approach:
if (mBaseRecorder == null)
    return;
// get the current volume
int volume = mBaseRecorder.getRealVolume();
//Log.e("volume ", "volume " + volume);
// scale down to filter out small values
int scale = (volume / 100);
// below the given threshold, just remember the value and bail out
if (scale < 5) {
    mPreFFtCurrentFrequency = scale;
    return;
}
// ratio between this value and the previous one
int fftScale = 0;
if (mPreFFtCurrentFrequency != 0) {
    fftScale = scale / mPreFFtCurrentFrequency;
}
// change colour after several frames in a row, or on a big jump
if (mColorChangeFlag == 4 || fftScale > 10) {
    mColorChangeFlag = 0;
}
if (mColorChangeFlag == 0) {
    if (mColorPoint == 1) {
        mColorPoint = 2;
    } else if (mColorPoint == 2) {
        mColorPoint = 3;
    } else if (mColorPoint == 3) {
        mColorPoint = 1;
    }
    int color;
    if (mColorPoint == 1) {
        color = mColor1;
    } else if (mColorPoint == 2) {
        color = mColor3;
    } else {
        color = mColor2;
    }
    mPaint.setColor(color);
}
mColorChangeFlag++;
// remember the value for the next round
if (scale != 0)
    mPreFFtCurrentFrequency = scale;
...
/**
 * This calculation comes from a Samsung developer sample
 *
 * @param buffer   buffer
 * @param readSize readSize
 */
protected void calculateRealVolume(short[] buffer, int readSize) {
    double sum = 0;
    for (int i = 0; i < readSize; i++) {
        // the arithmetic is left unoptimised here to keep the code easy to follow
        sum += buffer[i] * buffer[i];
    }
    if (readSize > 0) {
        double amplitude = sum / readSize;
        mVolume = (int) Math.sqrt(amplitude);
    }
}
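What calculateRealVolume computes is simply the RMS of the sample block, and the mBaseRecorder.getRealVolume() call in the colour-changing snippet above is presumably nothing more than a getter over that mVolume field (a trivial sketch of my assumption, with a worked example in the comments):
// calculateRealVolume() is just the RMS of the block, e.g. for samples
// {1000, -1000, 2000, -2000}:
//   mean of squares = (1e6 + 1e6 + 4e6 + 4e6) / 4 = 2.5e6
//   volume          = sqrt(2.5e6) ≈ 1581
// The colour logic then reads it back through a getter like this (assumed to
// match the mBaseRecorder.getRealVolume() call above):
public int getRealVolume() {
    return mVolume;
}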
So, pretty simple, right? You may well feel I've watered down yet another article <( ̄︶ ̄)>. I hope you got something out of it anyway; comments are welcome.
Sometimes I hear people say that writing business code is just laying bricks and does nothing for your own skills. Personally I don't really agree with that. Compared with doing open source or learning new technology on your own, business code forces you to treat your code more rigorously and confronts you with problems you cannot dodge, and all those different pits are exactly what levels you up. The prerequisite, of course, is that you keep good notes on every pit instead of jumping into the same one every time. So be a bit nicer to your day job..... ((/- -)/
My GitHub: github.com/CarGuo