Developing a React Component for Audio Recording and Waveform Drawing

Introduction


Demo

Recently, for a work requirement, I needed a recording feature in React. Recording mainly involves starting, pausing, and stopping, plus drawing the audio curve to a canvas. At first I reached for an existing package, but that third-party package supported neither pausing nor audio transcoding — it could only output audio/webm — so I decided over the weekend to write a new React recording plugin from scratch.

Usage

The package has been published to npm; if you need it, run

npm install react-audio-analyser --save

to install it locally. For more detailed usage, see here. Everyone is welcome to use it, and please file issues. If you're interested, read on: the rest of the article walks through the recording implementation and the development process in detail.

Project overview (react-audio-analyser)


The project itself mainly consists of two folders; component is where the react-audio-analyser component lives.

component:

  • audioConvertWav.js — converts audio/webm to audio/wav
  • index.js — the outer index.js exposes the component; the inner index.js is the component's container (the component itself)
  • MediaRecorder.js — the component's main recording logic
  • RenderCanvas.js — the waveform-drawing logic
  • index.css — not yet used

demo:

  • The demo demonstrates the component: it renders the control buttons (start, pause, stop) and handles their logic.

react-audio-analyser

index.js

import React, {Component} from "react";
import MediaRecorder from "./MediaRecorder";
import RenderCanvas from "./RenderCanvas";
import "./index.css";

@MediaRecorder
@RenderCanvas
class AudioAnalyser extends Component {

    componentDidUpdate(prevProps) { // watch for changes to the incoming status prop
        if (this.props.status !== prevProps.status) {
            const event = {
                inactive: this.stopAudio,
                recording: this.startAudio,
                paused: this.pauseAudio
            }[this.props.status];
            event && event();
        }
    }

    render() {
        const {
            children, className, audioSrc
        } = this.props;
        return (
            <div className={className}>
                <div>
                    {this.renderCanvas()} {/* canvas rendering */}
                </div>
                {children} {/* control buttons */}
                {
                    audioSrc &&
                    <div>
                        <audio controls src={audioSrc}/>
                    </div>
                }
            </div>
        );
    }
}

AudioAnalyser.defaultProps = {
    status: "", //組件狀態
    audioSrc: "", //音頻資源URL
    backgroundColor: "rgba(0, 0, 0, 1)", //背景色
    strokeColor: "#ffffff", //音頻曲線顏色
    className: "audioContainer", //樣式類
    audioBitsPerSecond: 128000, //音頻碼率
    audioType: "audio/webm", //輸出格式
    width: 500, //canvas寬
    height: 100 //canvas高
};

export default AudioAnalyser;

The overall idea of the component: src/component/AudioAnalyser/index.js renders the audio canvas and pulls the control buttons in through a slot (children). The benefit is that whoever uses the component controls the button styling; the component's className is also exposed, so a parent can pass in a new class to restyle the whole component.
Accordingly, the component's start, pause, and stop states are triggered by the buttons supplied at the call site: they change the status, which is passed into the component, and the component reacts to the prop change through the lifecycle hook shown above.
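As a sketch of that usage (the prop names status, audioType, and stopCallback come from the component source above; the demo markup itself is hypothetical):

import React, {Component} from "react";
import AudioAnalyser from "./component/AudioAnalyser";

class Demo extends Component {
    state = {status: ""};

    // changing the status prop is what drives the component's hooks
    controlAudio = (status) => this.setState({status});

    render() {
        return (
            <AudioAnalyser
                status={this.state.status}
                audioType="audio/wav"
                stopCallback={(blob) => console.log("recorded blob:", blob)}
            >
                {/* the buttons arrive through the children slot */}
                <button onClick={() => this.controlAudio("recording")}>start</button>
                <button onClick={() => this.controlAudio("paused")}>pause</button>
                <button onClick={() => this.controlAudio("inactive")}>stop</button>
            </AudioAnalyser>
        );
    }
}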

Two decorators are mounted on the component, MediaRecorder and RenderCanvas, which handle the audio logic and the canvas curve rendering respectively. Each decorator returns a class that extends the decorated one, so they share a single context, making it easy to call properties and methods across the pieces.
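Stripped to its essentials, that decorator pattern is just a function from a class to a subclass (a minimal sketch, not the actual decorator bodies):

// A class decorator: takes the decorated class and returns a subclass of it.
// Because the returned class extends the target, `this` is shared, so
// AudioAnalyser can call this.startAudio(), this.renderCanvas(), etc. directly.
const MediaRecorderFn = Target => class extends Target {
    startAudio = () => { /* recording logic lives here */ };
};
export default MediaRecorderFn;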

MediaRecorder

/**
 * @author j_bleach 2018/8/18
 * @describe media recording (start, pause, stop: stream handling and callbacks)
 * @param Target the decorated class (AudioAnalyser)
 */
import convertWav from "./audioConvertWav";

const MediaRecorderFn = Target => {
    const constraints = {audio: true};
    return class MediaRecorderClass extends Target {
        static audioChunk = [] // stored audio data chunks
        static mediaRecorder = null // MediaRecorder instance
        static audioCtx = new (window.AudioContext || window.webkitAudioContext)(); // audio context

        constructor(props) {
            super(props);
            MediaRecorderClass.compatibility();
            this.analyser = MediaRecorderClass.audioCtx.createAnalyser();
        }

        /**
         * @author j_bleach 2018/08/02 17:06
         * @describe compatibility shim for navigator.mediaDevices across browsers
         */
        static compatibility() {
            const promisifiedOldGUM = (constraints) => {
                // First get ahold of getUserMedia, if present
                const getUserMedia =
                    navigator.getUserMedia ||
                    navigator.webkitGetUserMedia ||
                    navigator.mozGetUserMedia;

                // Some browsers just don't implement it - return a rejected promise with an error
                // to keep a consistent interface
                if (!getUserMedia) {
                    return Promise.reject(
                        new Error("getUserMedia is not implemented in this browser")
                    );
                }
                // Otherwise, wrap the call to the old navigator.getUserMedia with a Promise
                return new Promise(function (resolve, reject) {
                    getUserMedia.call(navigator, constraints, resolve, reject);
                });
            };

            // Older browsers might not implement mediaDevices at all, so we set an empty object first
            if (navigator.mediaDevices === undefined) {
                navigator.mediaDevices = {};
            }

            // Some browsers partially implement mediaDevices. We can't just assign an object
            // with getUserMedia as it would overwrite existing properties.
            // Here, we will just add the getUserMedia property if it's missing.
            if (navigator.mediaDevices.getUserMedia === undefined) {
                navigator.mediaDevices.getUserMedia = promisifiedOldGUM;
            }
        }

        /**
         * @author j_bleach 2018/8/19
         * @describe run the given callback if it is actually a function
         * @param fn: function the callback to check
         * @param e: object the event object
         */
        static checkAndExecFn(fn, e) {
            typeof fn === "function" && fn(e);
        }

        /**
         * @author j_bleach 2018/8/19
         * @describe convert the recorded audio stream into a Blob
         * @param type: string the audio mime-type
         * @param cb: function callback invoked when recording stops
         */
        static audioStream2Blob(type, cb) {
            let wavBlob = null;
            const chunk = MediaRecorderClass.audioChunk;
            const audioWav = () => {
                const fr = new FileReader();
                // attach the handler before starting the (asynchronous) read
                fr.onload = (e) => {
                    const buffer = e.target.result;
                    MediaRecorderClass.audioCtx.decodeAudioData(buffer).then(data => {
                        wavBlob = new Blob([new DataView(convertWav(data))], {
                            type: "audio/wav"
                        });
                        MediaRecorderClass.checkAndExecFn(cb, wavBlob);
                    });
                };
                fr.readAsArrayBuffer(new Blob(chunk, {type}));
            };
            switch (type) {
                case "audio/webm":
                    MediaRecorderClass.checkAndExecFn(cb, new Blob(chunk, {type}));
                    break;
                case "audio/wav":
                    audioWav();
                    break;
                default:
                    return void 0
            }
        }

        /**
         * @author j_bleach 2018/8/18
         * @describe start recording
         */
        startAudio = () => {
            const recorder = MediaRecorderClass.mediaRecorder;
            if (!recorder || (recorder && recorder.state === "inactive")) {
                navigator.mediaDevices.getUserMedia(constraints).then(stream => {
                    this.recordAudio(stream);
                }).catch(err => {
                        throw new Error(`getUserMedia failed: ${err}`);
                    }
                )
                return false
            }
            if (recorder && recorder.state === "paused") {
                MediaRecorderClass.resumeAudio();
            }
        }
        /**
         * @author j_bleach 2018/8/19
         * @describe pause recording
         */
        pauseAudio = () => {
            const recorder = MediaRecorderClass.mediaRecorder;
            if (recorder && recorder.state === "recording") {
                recorder.pause();
                recorder.onpause = () => {
                    MediaRecorderClass.checkAndExecFn(this.props.pauseCallback);
                }
                MediaRecorderClass.audioCtx.suspend();
            }
        }
        /**
         * @author j_bleach 2018/8/18
         * @describe stop recording
         */
        stopAudio = () => {
            const {audioType} = this.props;
            const recorder = MediaRecorderClass.mediaRecorder;
            if (recorder && ["recording", "paused"].includes(recorder.state)) {
                recorder.stop();
                recorder.onstop = () => {
                    MediaRecorderClass.audioStream2Blob(audioType, this.props.stopCallback);
                    MediaRecorderClass.audioChunk = []; // clear stored audio once recording ends
                }
                MediaRecorderClass.audioCtx.suspend();
                this.initCanvas();
            }
        }

        /**
         * @author j_bleach 2018/8/18
         * @describe record the audio stream with MediaRecorder
         * @param stream: MediaStream the audio input stream
         */
        recordAudio(stream) {
            const {audioBitsPerSecond, mimeType} = this.props;
            MediaRecorderClass.mediaRecorder = new MediaRecorder(stream, {audioBitsPerSecond, mimeType});
            MediaRecorderClass.mediaRecorder.ondataavailable = (event) => {
                MediaRecorderClass.audioChunk.push(event.data);
            }
            MediaRecorderClass.audioCtx.resume();
            MediaRecorderClass.mediaRecorder.start();
            MediaRecorderClass.mediaRecorder.onstart = (e) => {
                MediaRecorderClass.checkAndExecFn(this.props.startCallback, e);
            }
            MediaRecorderClass.mediaRecorder.onresume = (e) => {
                MediaRecorderClass.checkAndExecFn(this.props.startCallback, e);
            }
            MediaRecorderClass.mediaRecorder.onerror = (e) => {
                MediaRecorderClass.checkAndExecFn(this.props.errorCallback, e);
            }
            const source = MediaRecorderClass.audioCtx.createMediaStreamSource(stream);
            source.connect(this.analyser);
            this.renderCurve(this.analyser);
        }

        /**
         * @author j_bleach 2018/8/19
         * @describe resume recording
         */
        static resumeAudio() {
            MediaRecorderClass.audioCtx.resume();
            MediaRecorderClass.mediaRecorder.resume();
        }
    }
}
export default MediaRecorderFn;

This decorator mainly relies on two APIs: navigator.mediaDevices.getUserMedia and MediaRecorder. navigator.mediaDevices.getUserMedia is the API for accessing hardware devices; you need it whenever you work with the microphone or the camera. When I previously did video-related development, I also used two other mediaDevices members, MediaDevices.ondevicechange and navigator.mediaDevices.enumerateDevices, to detect input-hardware changes and to query the device list respectively. I skipped both here because, in my experience, most devices ship with a default audio input and the requirements are not as strict as for video, so this component only does the navigator.mediaDevices compatibility shim. If you want, you can add those two methods yourself.
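If you do want those checks, a minimal sketch could look like this (both APIs are standard members of navigator.mediaDevices; the handler bodies are placeholders):

// list every connected media device and look for microphones
navigator.mediaDevices.enumerateDevices().then(devices => {
    const mics = devices.filter(d => d.kind === "audioinput");
    if (mics.length === 0) {
        console.warn("no audio input device found");
    }
});

// fires whenever a device is plugged in or removed
navigator.mediaDevices.ondevicechange = () => {
    // re-run enumerateDevices() here and update component state
};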

The main API used to record the audio is MediaRecorder, which has limited browser support — at the time of writing, only Chrome and Firefox.
MediaRecorder has four control methods, MediaRecorder.start(), MediaRecorder.pause(), MediaRecorder.resume(), and MediaRecorder.stop(), each corresponding to one recording state and firing a matching event (onstart, onpause, onresume, onstop).
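Viewed in isolation, the method-to-event pairing looks like this (a minimal sketch outside the component):

navigator.mediaDevices.getUserMedia({audio: true}).then(stream => {
    const recorder = new MediaRecorder(stream);
    // each control method fires a matching event on the recorder
    recorder.onstart = () => console.log(recorder.state);  // "recording"
    recorder.onpause = () => console.log(recorder.state);  // "paused"
    recorder.onresume = () => console.log(recorder.state); // "recording"
    recorder.onstop = () => console.log(recorder.state);   // "inactive"

    recorder.start();
    // later: recorder.pause(); recorder.resume(); recorder.stop();
});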


The decorator contains three key handlers, startAudio, pauseAudio, and stopAudio, one per state; they are triggered by changing the status prop passed into the component. During development I did not distinguish the start callback from the resume callback — both fire startCallback — which is arguably an omission; if you need to tell them apart, you have to do it yourself where the upper-level state machine changes.
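One hypothetical way to tell them apart in the parent is to remember which status you are leaving before switching to "recording":

// in the parent component: startCallback fires for both start and resume,
// so record the status we are transitioning away from
controlAudio = (nextStatus) => {
    if (nextStatus === "recording") {
        this.wasResumed = this.state.status === "paused";
    }
    this.setState({status: nextStatus});
};

handleStart = (e) => {
    console.log(this.wasResumed ? "resumed" : "started", e);
};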

RenderCanvas

In MediaRecorder.js, once recording starts, the device's input audio stream is turned into an audio source object via the AudioContext, and that object is then connected to an AnalyserNode (an analysis node used for audio visualization). That is:

const source = MediaRecorderClass.audioCtx.createMediaStreamSource(stream);
source.connect(this.analyser);
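Note that the analyser is a pass-through tap: the stream keeps feeding the MediaRecorder while the AnalyserNode only inspects it. A self-contained sketch of the wiring (the fftSize value is just the default made explicit):

navigator.mediaDevices.getUserMedia({audio: true}).then(stream => {
    const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    const analyser = audioCtx.createAnalyser();
    analyser.fftSize = 2048; // sets the length of the time-domain buffer read while drawing

    const source = audioCtx.createMediaStreamSource(stream);
    source.connect(analyser);
    // deliberately NOT connected to audioCtx.destination, so the microphone
    // is not played back through the speakers while recording
});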

When the component mounts, it initializes a canvas with a black background and a white center line.

configCanvas() {
    const {height, width, backgroundColor, strokeColor} = this.props;
    const canvas = RenderCanvasClass.canvasRef.current;
    RenderCanvasClass.canvasCtx = canvas.getContext("2d");
    RenderCanvasClass.canvasCtx.clearRect(0, 0, width, height);
    RenderCanvasClass.canvasCtx.fillStyle = backgroundColor;
    RenderCanvasClass.canvasCtx.fillRect(0, 0, width, height);
    RenderCanvasClass.canvasCtx.lineWidth = 2;
    RenderCanvasClass.canvasCtx.strokeStyle = strokeColor;
    RenderCanvasClass.canvasCtx.beginPath();
}

This canvas is what the component shows initially, and what it reverts to after recording stops.
Once recording starts, we first create an unsigned 8-bit typed array whose length is the analyser node's fftSize (fft: fast Fourier transform), 2048 by default. Then the analyser node's getByteTimeDomainData API stores the current audio data in that typed array. This yields a 2048-entry array of waveform samples; since the values are unsigned 8-bit, 128 is the midpoint and represents silence. The canvas width is divided into 2048 slices, and starting from the vertical midpoint of the canvas's left edge, the curve is drawn using each array value as the height, i.e.:

renderCurve = () => {
    const {height, width} = this.props;
    RenderCanvasClass.animationId = window.requestAnimationFrame(this.renderCurve); // schedule the next animation frame
    const bufferLength = this.analyser.fftSize; // 2048 by default
    const dataArray = new Uint8Array(bufferLength);
    this.analyser.getByteTimeDomainData(dataArray); // copy the current waveform into the typed array
    this.configCanvas();
    const sliceWidth = Number(width) / bufferLength;
    let x = 0;
    for (let i = 0; i < bufferLength; i++) {
        const v = dataArray[i] / 128.0; // normalize so that 1.0 is the center line
        const y = v * height / 2;
        RenderCanvasClass.canvasCtx[i === 0 ? "moveTo" : "lineTo"](x, y);
        x += sliceWidth;
    }
    RenderCanvasClass.canvasCtx.lineTo(width, height / 2);
    RenderCanvasClass.canvasCtx.stroke();
}

The animation is driven by requestAnimationFrame, an API commonly used for animation rendering — I also used it recently for map route navigation. It renders views with better performance than setTimeout. The caveat is the same as with timers: when recording ends, you must cancel the animation manually, i.e.:

window.cancelAnimationFrame(RenderCanvasClass.animationId);
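The stopAudio handler shown earlier calls this.initCanvas(); its body isn't reproduced in this article, but a plausible sketch (method name from the source, implementation assumed) is:

initCanvas() {
    // stop the requestAnimationFrame loop started in renderCurve...
    window.cancelAnimationFrame(RenderCanvasClass.animationId);
    // ...and redraw the idle canvas: background plus center line
    this.configCanvas();
    const {width, height} = this.props;
    RenderCanvasClass.canvasCtx.moveTo(0, height / 2);
    RenderCanvasClass.canvasCtx.lineTo(width, height / 2);
    RenderCanvasClass.canvasCtx.stroke();
}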

That completes the drawing of the audio curve. The project still has some small details to polish, and small iterations will be released over time — for example new audio formats and new curve visualizations; watch the git repo for updates.

Project repository

https://github.com/jiwenjiang...
