TheAmazingAudioEngine is an open-source third-party iOS audio framework by Michael Tyson, and quite a few audio apps are built on it.
With this framework, the various audio effects needed in iOS audio development can be implemented fairly easily.
Before getting started, I put together the diagram below; it may make the various audio frameworks in iOS development, and how they relate to each other, a little clearer. (Compiled from the official Using Audio documentation and the objc China article 音頻API一覽, an overview of the audio APIs. Corrections are welcome.)
Figure: the various audio frameworks on iOS
TheAmazingAudioEngine is essentially a wrapper around the Audio Unit, Audio Toolbox, and AVFoundation frameworks that makes them more convenient to use.
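All of the snippets in this post assume an AEAudioController instance (the _audioController used throughout) has already been created and started. Here is a minimal setup sketch based on the TAAE 1.x API; the audio description and the inputEnabled flag (needed for the recording examples later) are choices, not requirements:

#import <TheAmazingAudioEngine/TheAmazingAudioEngine.h>

// Create the audio controller; enable input because recording/playthrough is used later
_audioController = [[AEAudioController alloc]
    initWithAudioDescription:[AEAudioController nonInterleaved16BitStereoAudioDescription]
                inputEnabled:YES];

// Start the audio engine
NSError *error = nil;
if ( ![_audioController start:&error] ) {
    NSLog(@"Couldn't start the audio engine: %@", error.localizedDescription);
}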
Playback works much like the official AVAudioPlayer and AVAudioEngine: take a file path (or an audio buffer) and call the relevant playback methods. The example here plays a file.
The concrete steps: create an AEAudioFilePlayer from the file URL, then add it to the AEAudioController with addChannels: to start playback.
#pragma mark - Audio playback
- (void)playNewSongCH1:(NSURL *)songURL {
    if (_selectedSongCH1Player) {
        [_audioController removeChannels:@[_selectedSongCH1Player]];
        _selectedSongCH1Player = nil;
    }

    // Create the AEAudioFilePlayer
    _selectedSongCH1Player = [[AEAudioFilePlayer alloc] initWithURL:songURL error:nil];

    // Add it to the audio controller to start playback
    [_audioController addChannels:@[_selectedSongCH1Player]];
}
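AEAudioFilePlayer also exposes a few per-channel properties that are handy once playback is running. A small sketch; the property names below are taken from the TAAE 1.x AEAudioFilePlayer header, so double-check them against the version you are using:

// Optional tweaks on the player channel
_selectedSongCH1Player.volume = 0.8f;           // per-channel volume
_selectedSongCH1Player.pan = 0.0f;              // -1.0 (left) to 1.0 (right)
_selectedSongCH1Player.loop = YES;              // loop instead of playing once
_selectedSongCH1Player.channelIsPlaying = NO;   // pause; set back to YES to resume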
As for getting the audio file's path: if the file was simply dragged into the Xcode project, an NSURL can be built from the file name and extension, like so:
// Song name and extension
static NSString *audioFileName = @"leftRightTest";
static NSString *audioFileFormat = @"mp3";

NSURL *songURL = [[NSBundle mainBundle] URLForResource:audioFileName withExtension:audioFileFormat];
If you want a song from the device's music library instead, get its URL in MPMediaPickerController's delegate method mediaPicker:didPickMediaItems:, like so:
#pragma mark - MPMediaPickerControllerDelegate
- (void)mediaPicker:(MPMediaPickerController *)mediaPicker didPickMediaItems:(MPMediaItemCollection *)mediaItemCollection {
    // Two songs are played here, so there are two MPMediaPickerController objects; tell them apart first
    if (mediaPicker == _mediaCH1PickerController) {
        // mediaItemCollection.representativeItem.assetURL is the URL of the song the user picked
        // Note: the playNewSongCH1: method shown above is wrapped in a custom engine class
        [[HNMCManager shareManager].engine playNewSongCH1:mediaItemCollection.representativeItem.assetURL];
    } else {
        [[HNMCManager shareManager].engine playNewSongCH2:mediaItemCollection.representativeItem.assetURL];
    }
    [self dismissViewControllerAnimated:YES completion:nil];
}
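For completeness, here is a sketch of how the picker itself might be presented; pickSongForCH1 is a hypothetical method name, matching the _mediaCH1PickerController ivar checked in the delegate code above. Keep in mind that assetURL can be nil for DRM-protected or cloud-only items, which is why showsCloudItems is turned off here:

#pragma mark - Picking a song from the device library
- (void)pickSongForCH1 {
    _mediaCH1PickerController = [[MPMediaPickerController alloc] initWithMediaTypes:MPMediaTypeMusic];
    _mediaCH1PickerController.delegate = self;
    _mediaCH1PickerController.allowsPickingMultipleItems = NO;
    _mediaCH1PickerController.showsCloudItems = NO;   // cloud-only items have no local assetURL
    [self presentViewController:_mediaCH1PickerController animated:YES completion:nil];
}

// Dismiss the picker if the user cancels
- (void)mediaPickerDidCancel:(MPMediaPickerController *)mediaPicker {
    [self dismissViewControllerAnimated:YES completion:nil];
}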
Recording is handled by the AERecorder class. The steps: create an AERecorder, start it recording to a file with beginRecordingToFileAtPath:fileType:error:, and add it as an input and/or output receiver on the AEAudioController; to stop, call finishRecording and remove the receivers.
Example:
// Name of the saved recording file
static NSString *ch1RecorderFileName = @"ch1Recording.m4a";

#pragma mark - Start recording
- (void)setupCH1RecorderBeginRecording {
    // Create the AERecorder
    _ch1Recorder = [[AERecorder alloc] initWithAudioController:_audioController];

    // Path the recording will be written to
    NSString *filePath = [self getFilePathWithFileName:ch1RecorderFileName];

    NSError *error = nil;
    if (![_ch1Recorder beginRecordingToFileAtPath:filePath fileType:kAudioFileM4AType error:&error]) {
        return;
    }

    // Record both the input and the output (i.e. the microphone as well as whatever the phone is playing)
    [_audioController addInputReceiver:_ch1Recorder];
    [_audioController addOutputReceiver:_ch1Recorder];
}

#pragma mark Helper Method
- (NSString *)getFilePathWithFileName:(NSString *)fileName {
    NSString *documentsFolder = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) firstObject];
    NSString *filePath = [documentsFolder stringByAppendingPathComponent:fileName];
    return filePath;
}

#pragma mark - Stop recording
- (void)stopCH1Recording {
    if (_ch1Recorder) {
        [_ch1Recorder finishRecording];
        [_audioController removeInputReceiver:_ch1Recorder];
        [_audioController removeOutputReceiver:_ch1Recorder];
        _ch1Recorder = nil;
    }
}

#pragma mark - Play back the recording
- (void)playRecordCH1 {
    // Build the file path from the file name
    NSString *filePath = [self getFilePathWithFileName:ch1RecorderFileName];

    // Bail out if the file doesn't exist
    if ( ![[NSFileManager defaultManager] fileExistsAtPath:filePath] ) {
        return;
    }

    NSError *error = nil;
    // Play it back with an AEAudioFilePlayer
    _ch1RecorderPlayer = [[AEAudioFilePlayer alloc] initWithURL:[NSURL fileURLWithPath:filePath] error:&error];

    if (!_ch1RecorderPlayer) {
        [[[UIAlertView alloc] initWithTitle:@"Error"
                                    message:[NSString stringWithFormat:@"Couldn't start playback: %@", [error localizedDescription]]
                                   delegate:nil
                          cancelButtonTitle:nil
                          otherButtonTitles:@"OK", nil] show];
        return;
    }

    // Post a notification when playback finishes (optional)
    __weak HNAudioEngine *weakSelf = self;
    _ch1RecorderPlayer.completionBlock = ^{
        weakSelf.ch1RecorderPlayer = nil;
        [[NSNotificationCenter defaultCenter] postNotificationName:kNotificationPlayRecordCH1Completed object:nil];
    };

    // Start playback
    [_audioController addChannels:@[_ch1RecorderPlayer]];
}
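To show the calling sequence, here is a hypothetical bit of view-controller wiring that reuses the [HNMCManager shareManager].engine access pattern from the picker code above (the IBAction names are made up). Recording from the microphone also needs the user's record permission, e.g. via AVAudioSession's requestRecordPermission:.

// Hypothetical button actions driving the recorder methods above
- (IBAction)recordButtonTapped:(id)sender {
    [[HNMCManager shareManager].engine setupCH1RecorderBeginRecording];
}

- (IBAction)stopButtonTapped:(id)sender {
    [[HNMCManager shareManager].engine stopCH1Recording];
}

- (IBAction)playRecordingButtonTapped:(id)sender {
    [[HNMCManager shareManager].engine playRecordCH1];
}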
TheAmazingAudioEngine's AEPlaythroughChannel makes simultaneous record-and-play (live monitoring) easy to set up. As an application, imagine plugging the phone into a speaker: the phone becomes a megaphone (although, of course, there would still be plenty of noise and echo to deal with).
The code is fairly simple:
#pragma mark Playthrough (record and play simultaneously)
- (void)setupCH1playthroughChannelBeginRecording {
    // Create the AEPlaythroughChannel
    _ch1playthroughChannel = [[AEPlaythroughChannel alloc] init];

    // Add it to the AEAudioController with addInputReceiver:
    [_audioController addInputReceiver:_ch1playthroughChannel];

    // Add it to the AEAudioController with addChannels:
    // As I understand it: the previous line captures the input, this line plays it back
    [_audioController addChannels:@[_ch1playthroughChannel]];
}

#pragma mark Set the volume
- (void)setupCH1playthroughChannelVolume:(double)volume {
    if (_ch1playthroughChannel) {
        _ch1playthroughChannel.volume = volume;
    }
}

#pragma mark Stop
- (void)stopCH1Playthrough {
    if (_ch1playthroughChannel) {
        [_audioController removeInputReceiver:_ch1playthroughChannel];
        [_audioController removeChannels:@[_ch1playthroughChannel]];
        _ch1playthroughChannel = nil;
    }
}
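For the megaphone-style use case, latency matters. AEAudioController (TAAE 1.x) exposes a preferredBufferDuration property that can be lowered to reduce it; a one-line sketch, with the usual trade-off that smaller buffers cost more CPU:

// Optional: request a shorter IO buffer to reduce playthrough latency
// (smaller values increase CPU load; the system may not honor very small requests)
_audioController.preferredBufferDuration = 0.005;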
All of the effects are built on the AEAudioUnitFilter class.
TheAmazingAudioEngine offers richer effects than Apple's AVAudioEngine, and they are easier to implement.
The general steps: create an instance of the appropriate AEAudioUnitFilter subclass, add it to the AEAudioController with addFilter:, then shape the effect by setting the filter's properties.
For example, the framework ships with a ready-made high-pass filter class:
#pragma mark High-pass effect
- (void)setupFilterHighPass:(double)cutoffFrequency {
    // Create and add the filter instance
    [self addHighpassFilter];
    // Control the effect by setting the relevant property
    _highPassFilter.cutoffFrequency = cutoffFrequency;
}

- (void)addHighpassFilter {
    // _highPassFilter is an instance of AEHighPassFilter,
    // which is a subclass of AEAudioUnitFilter
    if (!_highPassFilter) {
        _highPassFilter = [[AEHighPassFilter alloc] init];
        [_audioController addFilter:_highPassFilter];
    } else {
        if ( ![_audioController.filters containsObject:_highPassFilter] ) {
            [_audioController addFilter:_highPassFilter];
        }
    }
}
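The same pattern works for the other ready-made AEAudioUnitFilter subclasses that ship with the framework, for example the low-pass filter. A sketch, assuming an AELowPassFilter ivar named _lowPassFilter; its cutoffFrequency property mirrors the high-pass one above:

#pragma mark Low-pass effect
- (void)setupFilterLowPass:(double)cutoffFrequency {
    // _lowPassFilter is assumed to be an AELowPassFilter ivar,
    // another AEAudioUnitFilter subclass bundled with the framework
    if (!_lowPassFilter) {
        _lowPassFilter = [[AELowPassFilter alloc] init];
        [_audioController addFilter:_lowPassFilter];
    } else if ( ![_audioController.filters containsObject:_lowPassFilter] ) {
        [_audioController addFilter:_lowPassFilter];
    }
    _lowPassFilter.cutoffFrequency = cutoffFrequency;
}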
Because I didn't know much about audio concepts and terminology to begin with, getting the EQ adjustment working took quite a bit of effort. What needed to be implemented was a 10-band EQ adjustment.
This can be done with the AEParametricEqFilter class, another subclass of AEAudioUnitFilter. To adjust ten EQ bands, create ten AEParametricEqFilter objects and set each one's centerFrequency property to a value between 20 Hz and 20,000 Hz, depending on which frequency you want to adjust. The effect itself is then controlled through the gain property, whose range is -20 dB to 20 dB.
#pragma mark EQ effects
// Create the ten AEParametricEqFilter objects
- (void)createEqFilters {
    _eq20HzFilter  = [[AEParametricEqFilter alloc] init];
    _eq50HzFilter  = [[AEParametricEqFilter alloc] init];
    _eq100HzFilter = [[AEParametricEqFilter alloc] init];
    _eq200HzFilter = [[AEParametricEqFilter alloc] init];
    _eq500HzFilter = [[AEParametricEqFilter alloc] init];
    _eq1kFilter    = [[AEParametricEqFilter alloc] init];
    _eq2kFilter    = [[AEParametricEqFilter alloc] init];
    _eq5kFilter    = [[AEParametricEqFilter alloc] init];
    _eq10kFilter   = [[AEParametricEqFilter alloc] init];
    _eq20kFilter   = [[AEParametricEqFilter alloc] init];

    _eqFilters = @[_eq20HzFilter, _eq50HzFilter, _eq100HzFilter, _eq200HzFilter, _eq500HzFilter,
                   _eq1kFilter, _eq2kFilter, _eq5kFilter, _eq10kFilter, _eq20kFilter];
}

- (void)setupFilterEq:(NSInteger)eqType value:(double)gain {
    switch (eqType) {
        case EQ_20Hz: {
            // Pick the band to adjust and pass the incoming gain value through to the gain property
            [self setupEqFilter:_eq20HzFilter centerFrequency:20 gain:gain];
            break;
        }
        case EQ_50Hz: {
            [self setupEqFilter:_eq50HzFilter centerFrequency:50 gain:gain];
            break;
        }
        case EQ_100Hz: {
            [self setupEqFilter:_eq100HzFilter centerFrequency:100 gain:gain];
            break;
        }
        case EQ_200Hz: {
            [self setupEqFilter:_eq200HzFilter centerFrequency:200 gain:gain];
            break;
        }
        case EQ_500Hz: {
            [self setupEqFilter:_eq500HzFilter centerFrequency:500 gain:gain];
            break;
        }
        case EQ_1K: {
            [self setupEqFilter:_eq1kFilter centerFrequency:1000 gain:gain];
            break;
        }
        case EQ_2K: {
            [self setupEqFilter:_eq2kFilter centerFrequency:2000 gain:gain];
            break;
        }
        case EQ_5K: {
            [self setupEqFilter:_eq5kFilter centerFrequency:5000 gain:gain];
            break;
        }
        case EQ_10K: {
            [self setupEqFilter:_eq10kFilter centerFrequency:10000 gain:gain];
            break;
        }
        case EQ_20K: {
            [self setupEqFilter:_eq20kFilter centerFrequency:20000 gain:gain];
            break;
        }
    }
}

- (void)setupEqFilter:(AEParametricEqFilter *)eqFilter centerFrequency:(double)centerFrequency gain:(double)gain {
    // Add the filter to the audio controller the first time it is used,
    // but only if it is one of the ten known EQ filters
    if ( ![_audioController.filters containsObject:eqFilter] ) {
        for (AEParametricEqFilter *existEqFilter in _eqFilters) {
            if (eqFilter == existEqFilter) {
                [_audioController addFilter:eqFilter];
                break;
            }
        }
    }
    eqFilter.centerFrequency = centerFrequency;
    eqFilter.qFactor = 1.0;
    eqFilter.gain = gain;
}
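The EQ_20Hz through EQ_20K constants used in the switch above are not shown here; a plausible definition plus a usage example might look like this (both the enum layout and the slider action name are assumptions):

// Assumed band enum backing setupFilterEq:value:
typedef NS_ENUM(NSInteger, HNEqBand) {
    EQ_20Hz, EQ_50Hz, EQ_100Hz, EQ_200Hz, EQ_500Hz,
    EQ_1K, EQ_2K, EQ_5K, EQ_10K, EQ_20K
};

// Hypothetical slider action; the slider's range would be -20 to 20 (dB)
- (IBAction)eq100HzSliderChanged:(UISlider *)slider {
    [[HNMCManager shareManager].engine setupFilterEq:EQ_100Hz value:slider.value];
}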
That wraps up this simple, hands-on look at using the TheAmazingAudioEngine framework for audio playback, recording, and effects.
Of course, the framework can do much more; if you have the time, keep digging into it.