Recording video with javacv while running iFlytek voiceprint authentication

A recent demo required recording audio and video while running liveness detection. I first tried recording with MediaRecorder plus Camera, but Camera.onPreviewFrame cannot be used at the same time as MediaRecorder. Liveness detection works by taking the frame data from the camera preview callback onPreviewFrame(byte[] data, Camera camera) and passing it to the detection engine, which returns a result code. For various reasons Camera2 was not an option, so after some searching I found that the javacv library can record video. I downloaded a few demos and found that it not only meets the requirement but also produces decent video quality. Recording is done with javacv's FrameRecorder: while recording, the record method is called to write frame data and audio data. We also had a requirement that, during recording, the audio be captured in real time for voiceprint authentication. This raised two problems:

Problem 1:

Speech recognition uses the iFlytek SDK, which requires an audio sample rate of 8 kHz or 16 kHz. But after calling FrameRecorder.setSampleRate(8000), FrameRecorder.start() fails with the following error:

avcodec_encode_audio2() error 2: Could not encode audio packet.

 

Problem 2:

In the official javacv recording demo, the data read from AudioRecord ends up in a ShortBuffer, while the iFlytek SDK method expects a byte array. Its signature is:

public void writeAudio(byte[] data, int start, int length)

Searching Baidu and Google turned up nothing, so I had to work it out myself.


  • Recording video with javacv

Below is sample code for recording video with javacv:

1. Initialize the ffmpeg recorder

public void initRecorder() {
    String ffmpeg_link = parentPath + "/" + "video.mp4";
    Log.w(LOG_TAG, "init recorder");

    if (yuvIplimage == null) {
        // Two-channel 8-bit image, large enough to hold the NV21 preview data
        yuvIplimage = IplImage.create(cameraManager.getDefaultSize().width,
                cameraManager.getDefaultSize().height, IPL_DEPTH_8U, 2);
        Log.i(LOG_TAG, "create yuvIplimage");
    }

    Log.i(LOG_TAG, "ffmpeg_url: " + ffmpeg_link);
    recorder = new FFmpegFrameRecorder(ffmpeg_link,
            cameraManager.getDefaultSize().width,
            cameraManager.getDefaultSize().height, 1);
    recorder.setFormat("mp4");
    recorder.setSampleRate(sampleAudioRateInHz);
    // Set in the surface changed method
    recorder.setFrameRate(frameRate);

    Log.i(LOG_TAG, "recorder initialize success");

    audioRecordRunnable = new AudioRecordRunnable();
    audioThread = new Thread(audioRecordRunnable);
    try {
        recorder.start();
    } catch (Exception e) {
        e.printStackTrace();
    }
    // Start capturing audio only after the recorder has started
    audioThread.start();
}
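For context, initRecorder() relies on cameraManager.getDefaultSize(), so it should only run once the preview size is known. One plausible wiring is sketched below; none of it is from the original post, and the preview-start code is app specific:

public class RecordActivityWiring implements SurfaceHolder.Callback {
    // Sketch only: call initRecorder() once the surface/preview size is fixed.
    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        // start or restart the camera preview here (app-specific code omitted)
        initRecorder();   // safe now that cameraManager.getDefaultSize() is valid
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) { }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) { }
}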

 

2. Capture camera video data:

public void onPreviewFrame(byte[] data, Camera camera) {
    int during = checkIfMax(new Date().getTime());
    /* get video data */
    if (yuvIplimage != null && isStart) {
        // Copy the NV21 preview bytes straight into the IplImage buffer
        yuvIplimage.getByteBuffer().put(data);
        //yuvIplimage = rotateImage(yuvIplimage.asCvMat(), 90).asIplImage();
        Log.v(LOG_TAG, "Writing Frame");
        try {
            System.out.println(System.currentTimeMillis() - videoStartTime);
            if (during < 6000) {
                // setTimestamp expects microseconds; 'during' is in milliseconds
                recorder.setTimestamp(1000 * during);
                recorder.record(yuvIplimage);
            }
        } catch (FFmpegFrameRecorder.Exception e) {
            Log.v(LOG_TAG, e.getMessage());
            e.printStackTrace();
        }
    }
}
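checkIfMax() is not shown in the post; judging from how it is used, it returns the elapsed recording time in milliseconds (the 6-second cap is then enforced by the during < 6000 check above). A purely hypothetical sketch, assuming videoStartTime is a field set to the wall-clock time when recording begins:

// Hypothetical helper, not from the original demo.
private int checkIfMax(long nowMillis) {
    if (videoStartTime == 0) {
        videoStartTime = nowMillis;  // first frame: start the clock
    }
    return (int) (nowMillis - videoStartTime);
}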

3. Capture audio data:

class AudioRecordRunnable implements Runnable {

    @Override
    public void run() {
        android.os.Process
                .setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);

        // Audio
        int bufferSize;
        short[] audioData;
        int bufferReadResult;

        bufferSize = AudioRecord.getMinBufferSize(sampleAudioRateInHz,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT);
        audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
                sampleAudioRateInHz,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufferSize);

        audioData = new short[bufferSize];

        Log.d(LOG_TAG, "audioRecord.startRecording()");
        audioRecord.startRecording();

        /* ffmpeg_audio encoding loop */
        while (!isFinished) {
            // Log.v(LOG_TAG,"recording? " + recording);
            bufferReadResult = audioRecord.read(audioData, 0,
                    audioData.length);
            if (bufferReadResult > 0) {
                // Log.v(LOG_TAG, "bufferReadResult: " + bufferReadResult);
                // If "recording" isn't true when this thread starts, it never
                // gets set according to this if statement... Why? Good question...
                if (isStart) {
                    try {
                        Buffer[] barray = new Buffer[1];
                        barray[0] = ShortBuffer.wrap(audioData, 0,
                                bufferReadResult);
                        recorder.record(barray);
                        // Log.v(LOG_TAG,"recording " + 1024*i + " to " +
                        // 1024*i+1024);
                    } catch (FFmpegFrameRecorder.Exception e) {
                        Log.v(LOG_TAG, e.getMessage());
                        e.printStackTrace();
                    }
                }
            }
        }
        Log.v(LOG_TAG, "AudioThread Finished, release audioRecord");

        /* encoding finished, release recorder */
        if (audioRecord != null) {
            audioRecord.stop();
            audioRecord.release();
            audioRecord = null;
            Log.v(LOG_TAG, "audioRecord released");
        }
    }
}
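The demo above never shows how recording is shut down. A minimal sketch (not from the original demo) of one way to stop it: signal the audio thread to exit, wait for it, then stop and release the recorder.

public void stopRecording() {
    isStart = false;
    isFinished = true;          // lets the audio loop exit and release AudioRecord
    try {
        audioThread.join();     // wait until the audio thread has finished
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    if (recorder != null) {
        try {
            recorder.stop();    // flush and close the output file
            recorder.release();
        } catch (FFmpegFrameRecorder.Exception e) {
            e.printStackTrace();
        }
    }
}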

 

Solving Problem 1:

The demo's default FrameRecorder.setSampleRate(44100) works fine, which suggested a workaround: keep 44100 on the recorder and set 8000 where the audio is actually captured for speech. That worked. The method that computes the timestamp has to be adjusted accordingly, though:

public static int getTimeStampInNsFromSampleCounted(int paramInt) {
    // timestamp (microseconds) = samples / sampleRate * 1,000,000
    // return (int) (paramInt / 0.0441D);   // 44.1 kHz
    return (int) (paramInt / 0.0080D);      // 8 kHz capture rate
}
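The divisor comes straight from the sample rate: timestamp in microseconds = samples / sampleRate × 1,000,000, which gives 0.0441 for 44.1 kHz, 0.008 for 8 kHz, and would give 0.016 if you captured at 16 kHz instead. A rate-agnostic variant might look like the sketch below (the method and parameter names are mine):

// Sketch: microsecond timestamp for a count of mono 16-bit samples at any rate.
public static int getTimestampInUsFromSampleCount(int sampleCount, int sampleRate) {
    return (int) (sampleCount * 1_000_000L / sampleRate);
}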


Solving Problem 2:

Convert the short array to a byte array; note that the array length doubles:

public static byte[] short2byte(short[] sData) {
    int shortArrsize = sData.length;
    byte[] bytes = new byte[shortArrsize * 2];

    for (int i = 0; i < shortArrsize; i++) {
        // Little-endian: low byte first, then high byte (16-bit PCM order)
        bytes[i * 2] = (byte) (sData[i] & 0x00FF);
        bytes[(i * 2) + 1] = (byte) (sData[i] >> 8);
        sData[i] = 0; // clear the source buffer as we go
    }
    return bytes;
}
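The same conversion can also be written with ByteBuffer, which makes the little-endian byte order of 16-bit PCM explicit. A self-contained sketch (without the original's zeroing of the source array):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PcmUtil {
    // short 0x1234 becomes the two bytes 0x34, 0x12 (little-endian PCM).
    public static byte[] short2byteNio(short[] sData, int length) {
        ByteBuffer buffer = ByteBuffer.allocate(length * 2)
                .order(ByteOrder.LITTLE_ENDIAN);
        buffer.asShortBuffer().put(sData, 0, length);
        return buffer.array();
    }
}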

 

Audio recording source:

 /**
     * Thread that records audio.
     */
    class AudioRecordRunnable implements Runnable {
        short[] audioData;
        private final AudioRecord audioRecord;
        private int mCount = 0;
        int sampleRate = Constants.AUDIO_SAMPLING_RATE;

        private AudioRecordRunnable() {
            int bufferSize = AudioRecord.getMinBufferSize(sampleRate,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
            audioData = new short[bufferSize];
        }

        /**
         * Writes a buffer of audio samples (data plus start position) to the recorder.
         *
         * @param buffer the audio samples to write
         */
        private void record(Buffer buffer) {
            synchronized (mAudioRecordLock) {
                this.mCount += buffer.limit();
                if (!mIsPause) {
                    try {
                        if (mRecorder != null) {
                            mRecorder.record(sampleRate, new Buffer[]{buffer});
                        }
                    } catch (FrameRecorder.Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        }

        /**
         * Updates the audio timestamp.
         */
        private void updateTimestamp() {
            int i = Util.getTimeStampInNsFromSampleCounted(this.mCount);
            if (mAudioTimestamp != i) {
                mAudioTimestamp = i;
                mAudioTimeRecorded = System.nanoTime();
            }
        }

        public void run() {
            android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
            if (audioRecord != null) {
                // Wait until the AudioRecord has been initialized
                while (this.audioRecord.getState() == AudioRecord.STATE_UNINITIALIZED) {
                    try {
                        Thread.sleep(100L);
                    } catch (InterruptedException localInterruptedException) {
                    }
                }
                this.audioRecord.startRecording();
                while (runAudioThread) {
                    updateTimestamp();
                    int bufferReadResult = this.audioRecord.read(audioData, 0, audioData.length);
                    if (bufferReadResult > 0) {
                        // Feed the samples to the video recorder...
                        if (recording || (mVideoTimestamp > mAudioTimestamp)) {
                            record(ShortBuffer.wrap(audioData, 0, bufferReadResult));
                        }
                        // ...and, in parallel, to the iFlytek engine as bytes
                        // (the byte length is twice the number of shorts read)
                        if (SpeechManager.getInstance().isListening()) {
                            SpeechManager.getInstance().writeAudio(Util.short2byte(audioData), 0, bufferReadResult * 2);
                        }
                    }
                }
                SpeechManager.getInstance().stopListener();
                this.audioRecord.stop();
                this.audioRecord.release();
            }
        }
    }
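How this runnable is started and stopped is not shown above. A minimal sketch (not from the original post) of one way to wire it up: start it on its own thread when recording begins, and clear runAudioThread to let the loop exit when recording ends.

private Thread mAudioThread;

private void startAudioCapture() {
    runAudioThread = true;
    mAudioThread = new Thread(new AudioRecordRunnable(), "AudioRecordThread");
    mAudioThread.start();
}

private void stopAudioCapture() {
    runAudioThread = false;          // the run() loop checks this flag
    try {
        mAudioThread.join();         // run() stops the listener and releases AudioRecord
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}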