Android Development: Audio Recording with AudioRecord

Preface

  The Android SDK provides two audio capture APIs: MediaRecorder and AudioRecord. The former is a higher-level API that takes the audio picked up by the phone's microphone, encodes and compresses it (e.g. to AMR or MP3), and saves it straight to a file; the latter is closer to the hardware, offers much freer and finer-grained control, and delivers the raw PCM audio data frame by frame.

 

Implementation Flow

  1. Obtain the permissions
  2. Get the buffer size for each frame of the stream
  3. Initialize the AudioRecord
  4. Start recording and save the recorded audio to a file
  5. Stop recording
  6. Add header information to the audio file and convert it to WAV
  7. Release the AudioRecord; the recording flow is complete

Obtaining the Permissions

    <!-- Audio recording permission -->
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <!-- Read and write external storage permissions -->
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />

On Android 6.0 (API level 23) and above, these three permissions must also be requested at runtime, as sketched below.
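A minimal sketch of the runtime request, assuming an Activity plus the AndroidX ContextCompat/ActivityCompat helpers; the method name and request code below are placeholders for illustration:

private static final int REQUEST_CODE_AUDIO = 1;//arbitrary request code

private void requestAudioPermissions() {
        String[] permissions = {
                Manifest.permission.RECORD_AUDIO,
                Manifest.permission.WRITE_EXTERNAL_STORAGE,
                Manifest.permission.READ_EXTERNAL_STORAGE};
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
                != PackageManager.PERMISSION_GRANTED) {
            //prompt the user; the result arrives in onRequestPermissionsResult()
            ActivityCompat.requestPermissions(this, permissions, REQUEST_CODE_AUDIO);
        }
    }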

Getting the Buffer Size for Each Frame of the Stream

private Integer mRecordBufferSize;
private void initMinBufferSize(){
        //get the size in bytes of one frame of the stream
        mRecordBufferSize = AudioRecord.getMinBufferSize(8000 
                , AudioFormat.CHANNEL_IN_MONO
                , AudioFormat.ENCODING_PCM_16BIT);
    }
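getMinBufferSize() returns AudioRecord.ERROR_BAD_VALUE when the recording parameters are not supported by the hardware, or AudioRecord.ERROR when the size cannot be queried, so the result is worth validating. A hardened variant of initMinBufferSize(), as a sketch:

private void initMinBufferSize(){
        int size = AudioRecord.getMinBufferSize(8000
                , AudioFormat.CHANNEL_IN_MONO
                , AudioFormat.ENCODING_PCM_16BIT);
        //a negative value means this sample rate/channel/encoding combination is unusable
        if (size == AudioRecord.ERROR_BAD_VALUE || size == AudioRecord.ERROR) {
            Log.e(TAG, "initMinBufferSize: unsupported recording parameters");
            return;
        }
        mRecordBufferSize = size;
    }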

The first parameter, sampleRateInHz, is the sample rate in hertz. As the method's documentation notes,

it can only take values in the range 4000~192000.

In the AudioFormat class:
public static final int SAMPLE_RATE_HZ_MIN = 4000; // minimum: 4000
public static final int SAMPLE_RATE_HZ_MAX = 192000; // maximum: 192000


The second parameter, channelConfig, describes the configuration of the audio channels, e.g. left channel/right channel/front channel/back channel.

In the AudioFormat class:
public static final int CHANNEL_IN_LEFT = 0x4;//left channel
public static final int CHANNEL_IN_RIGHT = 0x8;//right channel
public static final int CHANNEL_IN_FRONT = 0x10;//front channel
public static final int CHANNEL_IN_BACK = 0x20;//back channel
public static final int CHANNEL_IN_LEFT_PROCESSED = 0x40;
public static final int CHANNEL_IN_RIGHT_PROCESSED = 0x80;
public static final int CHANNEL_IN_FRONT_PROCESSED = 0x100;
public static final int CHANNEL_IN_BACK_PROCESSED = 0x200;
public static final int CHANNEL_IN_PRESSURE = 0x400;
public static final int CHANNEL_IN_X_AXIS = 0x800;
public static final int CHANNEL_IN_Y_AXIS = 0x1000;
public static final int CHANNEL_IN_Z_AXIS = 0x2000;
public static final int CHANNEL_IN_VOICE_UPLINK = 0x4000;
public static final int CHANNEL_IN_VOICE_DNLINK = 0x8000;
public static final int CHANNEL_IN_MONO = CHANNEL_IN_FRONT;//mono
public static final int CHANNEL_IN_STEREO = (CHANNEL_IN_LEFT | CHANNEL_IN_RIGHT);//stereo (left and right channels)
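To see the channel configuration's effect in practice, you can query the minimum buffer size for mono and stereo side by side; a purely illustrative sketch:

int monoSize = AudioRecord.getMinBufferSize(8000
        , AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
int stereoSize = AudioRecord.getMinBufferSize(8000
        , AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
//stereo packs two channels into each frame, so its minimum buffer is larger
Log.d(TAG, "mono=" + monoSize + " bytes, stereo=" + stereoSize + " bytes");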

 

The third parameter, audioFormat, indicates the format of the audio data.

Note: typical phones may only support 16-bit PCM; with other encodings the call is likely to fail with a bad-value error.

public static final int ENCODING_PCM_16BIT = 2; //16-bit PCM
public static final int ENCODING_PCM_8BIT = 3; //8-bit PCM
public static final int ENCODING_PCM_FLOAT = 4; //single-precision floating-point PCM
public static final int ENCODING_AC3 = 5;
public static final int ENCODING_E_AC3 = 6;
public static final int ENCODING_DTS = 7;
public static final int ENCODING_DTS_HD = 8;
public static final int ENCODING_MP3 = 9; //MP3; may fail on devices that do not support it
public static final int ENCODING_AAC_LC = 10;
public static final int ENCODING_AAC_HE_V1 = 11;
public static final int ENCODING_AAC_HE_V2 = 12;

Initializing the AudioRecord

private AudioRecord mAudioRecord;
private void initAudioRecord(){
        mAudioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC
                , 8000
                , AudioFormat.CHANNEL_IN_MONO
                , AudioFormat.ENCODING_PCM_16BIT
                , mRecordBufferSize);
    }
  • The first parameter, audioSource: the audio source; here we use the microphone, MediaRecorder.AudioSource.MIC
  • The second parameter, sampleRateInHz: the sample rate in hertz; keep it consistent with the getMinBufferSize call above
  • The third parameter, channelConfig: the channel configuration (left/right/front/back); keep it consistent with the call above
  • The fourth parameter, audioFormat: the format of the audio data; keep it consistent with the call above
  • The fifth parameter: the buffer size, i.e. the AudioRecord.getMinBufferSize value we obtained above; a state check is sketched below
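If construction fails (unsupported parameters, busy hardware, missing permission), the constructor can leave the object in STATE_UNINITIALIZED rather than always throwing, so a cheap safeguard is to check the state at the end of initAudioRecord(); a minimal sketch:

//verify that the native layer actually initialized before using the object
if (mAudioRecord.getState() != AudioRecord.STATE_INITIALIZED) {
    Log.e(TAG, "initAudioRecord: AudioRecord failed to initialize");
    mAudioRecord.release();
    mAudioRecord = null;
}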

Starting Recording and Saving the Audio File

private boolean mWhetherRecord;
private File pcmFile;
private void startRecord(){
        pcmFile = new File(AudioRecordActivity.this.getExternalCacheDir().getPath(),"audioRecord.pcm");
        mWhetherRecord = true;
        new Thread(new Runnable() {
            @Override
            public void run() {
                mAudioRecord.startRecording();//start recording
                FileOutputStream fileOutputStream = null;
                try {
                    fileOutputStream = new FileOutputStream(pcmFile);
                    byte[] bytes = new byte[mRecordBufferSize];
                    while (mWhetherRecord){
                        //read() returns how many bytes were actually delivered;
                        //write only that many, not the whole buffer
                        int length = mAudioRecord.read(bytes, 0, bytes.length);
                        if (length > 0) {
                            fileOutputStream.write(bytes, 0, length);
                            fileOutputStream.flush();
                        }
                    }
                    Log.e(TAG, "run: recording stopped");
                    mAudioRecord.stop();//stop recording
                    fileOutputStream.flush();
                    fileOutputStream.close();
                    addHeadData();//add the WAV header and convert to .wav
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                    mAudioRecord.stop();
                } catch (IOException e) {
                    e.printStackTrace();
                }

            }
        }).start();
    }

A word on why a boolean flag is used to end the recording: some readers will have noticed that AudioRecord can report its recording state, and may be tempted to use that state to decide whether the while loop should keep processing the stream. That is the wrong approach. The microphone lives at the hardware layer, and anything at the hardware layer is asynchronous with considerable latency, so the reported state lags as well: sometimes the stream has already ended while the state still says recording.

Stopping the Recording

 Stopping just means calling mAudioRecord.stop(); but since the code above already calls stop() after the stream has been saved, all that is needed here is to flip the boolean flag to end the audio recording.

private void stopRecord(){
        mWhetherRecord = false;
    }

Adding Header Information and Converting to WAV

Once recording finishes, if you go to the storage directory and try to play the audio file, the player will report that it cannot play it. That is because no header information has been added: audio captured from the microphone is generally raw PCM, which carries no header, so a player has no way of knowing the sample rate, bit depth, and other parameters and therefore cannot play it, which is obviously inconvenient. To convert PCM to WAV, we only need to prepend at least 44 bytes of WAV header information at the start of the PCM file.

Offset  Field          Content
00-03   ChunkId        "RIFF"
04-07   ChunkSize      total bytes from the next address to the end of the file (size of this chunk's data)
08-11   fccType        "WAVE"
12-15   SubChunkId1    "fmt " (the last character is a space)
16-19   SubChunkSize1  usually 16, meaning the fmt chunk's data block is 16 bytes
20-21   FormatTag      1 means PCM encoding
22-23   Channels       number of channels: 1 for mono, 2 for stereo
24-27   SamplesPerSec  sample rate
28-31   BytesPerSec    byte rate: sampleRate * (bitsPerSample / 8) * channels
32-33   BlockAlign     size of one sample frame: bitsPerSample * channels / 8
34-35   BitsPerSample  bit depth
36-39   SubChunkId2    "data"
40-43   SubChunkSize2  length of the audio data
44-...  data           the audio data
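As a worked example, for the recording in this article (8000 Hz, mono, 16-bit PCM) the computed fields come out as: BytesPerSec = 8000 * (16 / 8) * 1 = 16000, BlockAlign = 16 * 1 / 8 = 2, BitsPerSample = 16, and ChunkSize = length of the audio data + 36.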
private File handlerWavFile;
private void addHeadData(){
        pcmFile = new File(AudioRecordActivity.this.getExternalCacheDir().getPath(),"audioRecord.pcm");
        handlerWavFile = new File(AudioRecordActivity.this.getExternalCacheDir().getPath(),"audioRecord_handler.wav");
        PcmToWavUtil pcmToWavUtil = new PcmToWavUtil(8000,AudioFormat.CHANNEL_IN_MONO,AudioFormat.ENCODING_PCM_16BIT);
        pcmToWavUtil.pcmToWav(pcmFile.toString(),handlerWavFile.toString());
    }

The utility class that writes the header information

Note that the input File and the output File must not be the same file, because the conversion is not buffered through an intermediate file.

public class PcmToWavUtil {
    private static final String TAG = "PcmToWavUtil";

    /**
     * Buffer size for the audio data
     */
    private int mBufferSize;
    /**
     * Sample rate
     */
    private int mSampleRate;
    /**
     * Channel configuration
     */
    private int mChannel;


    /**
     * @param sampleRate the sample rate
     * @param channel the channel configuration
     * @param encoding the audio data format
     */
    PcmToWavUtil(int sampleRate, int channel, int encoding) {
        this.mSampleRate = sampleRate;
        this.mChannel = channel;
        this.mBufferSize = AudioRecord.getMinBufferSize(mSampleRate, mChannel, encoding);
    }


    /**
     * Convert a PCM file to a WAV file.
     *
     * @param inFilename source file path
     * @param outFilename destination file path
     */
    public void pcmToWav(String inFilename, String outFilename) {
        FileInputStream in;
        FileOutputStream out;
        long totalAudioLen;//total length of the recorded audio data
        long totalDataLen;//total data length (audio length plus the 36 header bytes after ChunkSize)
        long longSampleRate = mSampleRate;
        int channels = mChannel == AudioFormat.CHANNEL_IN_MONO ? 1 : 2;
        long byteRate = 16 * mSampleRate * channels / 8;
        byte[] data = new byte[mBufferSize];
        try {
            in = new FileInputStream(inFilename);
            out = new FileOutputStream(outFilename);
            totalAudioLen = in.getChannel().size();
            totalDataLen = totalAudioLen + 36;

            writeWaveFileHeader(out, totalAudioLen, totalDataLen,
                    longSampleRate, channels, byteRate);
            int length;
            while ((length = in.read(data)) != -1) {
                //write only the bytes actually read, not the whole buffer
                out.write(data, 0, length);
                out.flush();
            }
            Log.e(TAG, "pcmToWav: conversion finished");
            in.close();
            out.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }


    /**
     * Write the 44-byte WAV file header.
     */
    private void writeWaveFileHeader(FileOutputStream out, long totalAudioLen,
                                     long totalDataLen, long longSampleRate, int channels, long byteRate)
            throws IOException {
        byte[] header = new byte[44];
        // RIFF/WAVE header
        header[0] = 'R';
        header[1] = 'I';
        header[2] = 'F';
        header[3] = 'F';
        header[4] = (byte) (totalDataLen & 0xff);
        header[5] = (byte) ((totalDataLen >> 8) & 0xff);
        header[6] = (byte) ((totalDataLen >> 16) & 0xff);
        header[7] = (byte) ((totalDataLen >> 24) & 0xff);
        //WAVE
        header[8] = 'W';
        header[9] = 'A';
        header[10] = 'V';
        header[11] = 'E';
        // 'fmt ' chunk
        header[12] = 'f';
        header[13] = 'm';
        header[14] = 't';
        header[15] = ' ';
        // 4 bytes: size of 'fmt ' chunk
        header[16] = 16;
        header[17] = 0;
        header[18] = 0;
        header[19] = 0;
        // format = 1
        header[20] = 1;
        header[21] = 0;
        header[22] = (byte) channels;
        header[23] = 0;
        header[24] = (byte) (longSampleRate & 0xff);
        header[25] = (byte) ((longSampleRate >> 8) & 0xff);
        header[26] = (byte) ((longSampleRate >> 16) & 0xff);
        header[27] = (byte) ((longSampleRate >> 24) & 0xff);
        header[28] = (byte) (byteRate & 0xff);
        header[29] = (byte) ((byteRate >> 8) & 0xff);
        header[30] = (byte) ((byteRate >> 16) & 0xff);
        header[31] = (byte) ((byteRate >> 24) & 0xff);
        // block align = channels * bitsPerSample / 8
        header[32] = (byte) (channels * 16 / 8);
        header[33] = 0;
        // bits per sample
        header[34] = 16;
        header[35] = 0;
        //data
        header[36] = 'd';
        header[37] = 'a';
        header[38] = 't';
        header[39] = 'a';
        header[40] = (byte) (totalAudioLen & 0xff);
        header[41] = (byte) ((totalAudioLen >> 8) & 0xff);
        header[42] = (byte) ((totalAudioLen >> 16) & 0xff);
        header[43] = (byte) ((totalAudioLen >> 24) & 0xff);
        out.write(header, 0, 44);
    }
}

Releasing the AudioRecord; the Recording Flow Is Complete

Call the release() method to free the resources:

mAudioRecord.release();
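Putting the last steps together, a teardown might look like the sketch below; note that release() should only run once the recording thread is finished with the object, which the boolean-flag design above makes easy to arrange:

stopRecord();//flip mWhetherRecord so the recording thread exits its loop and calls stop()
//...wait for the recording thread to finish before the next two lines...
mAudioRecord.release();//free the native recording resources
mAudioRecord = null;//guard against accidental reuse after release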

After that, you can find the audio file in the target directory and play it.

 

Finally, a Few Other APIs

Getting the AudioRecord initialization state

public int getState() {
    return mState;
}
 

Note: this is the initialization state, not the recording state, and it only ever returns one of two values:

  • AudioRecord#STATE_INITIALIZED    //initialized
  • AudioRecord#STATE_UNINITIALIZED  //not initialized

Getting the AudioRecord recording state

public int getRecordingState() {
        synchronized (mRecordingStateLock) {
            return mRecordingState;
        }
    }
 

This returns the recording state, which is one of two values:

  • AudioRecord#RECORDSTATE_STOPPED    //recording stopped
  • AudioRecord#RECORDSTATE_RECORDING    //currently recording
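As discussed in the recording section, this state lags too much to drive the read loop, but it is perfectly usable for display purposes; a sketch, where updateRecordButton is a hypothetical UI helper:

boolean recording = mAudioRecord.getRecordingState()
        == AudioRecord.RECORDSTATE_RECORDING;
updateRecordButton(recording);//hypothetical helper that refreshes the record button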