Requirement: the company recently needed a building-intercom feature. The door unit (connected to Wi-Fi) dials an indoor unit (on the corresponding Wi-Fi). When the indoor unit receives the call, it rebroadcasts the received data over UDP. Once the phone (connected to the same Wi-Fi) receives the video stream, it displays the video in real time. (The phone can answer and hang up; after the phone answers, the indoor unit no longer shows the video and only keeps forwarding it.)
Put simply, the phone client needs to work like a live-streaming app: it has to show the video in real time, play the received audio in real time, and stream the audio picked up by the phone's microphone back to the indoor unit, which forwards it to the door unit.
This article covers how to record audio and play back received audio data in real time on iOS.
To play back and record audio in real time with the system frameworks, you need Audio Queue Services, which live in the AudioToolbox framework and can handle both playback and recording.
An audio queue consists of three parts:
1. Three buffers: each buffer is a temporary store for audio data.
2. A buffer queue: an ordered queue holding the audio buffers.
3. A callback: a custom callback function for the queue.
For the details of how it all runs internally, just Baidu it!
My simple understanding:
For playback: the system keeps cycling through the buffer queue, taking the data out of each buffer and playing it. All we have to do is keep filling the buffers with the data we receive; the system handles the rest.
For recording: the system automatically drops the recorded audio into each buffer in the queue. All we have to do is take the data out in the callback and convert it into our own format.
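To make the two roles concrete, here is a minimal sketch of the two callback shapes. DemoOutputCallback and DemoInputCallback are names made up for this sketch only; the real classes further down do the actual work.

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Playback side: the queue hands back a buffer it has finished playing.
// Refill it with the next chunk of PCM (silence here, as a placeholder)
// and enqueue it again so the queue never runs dry.
static void DemoOutputCallback(void *inUserData,
                               AudioQueueRef inAQ,
                               AudioQueueBufferRef inBuffer)
{
    memset(inBuffer->mAudioData, 0, inBuffer->mAudioDataBytesCapacity);
    inBuffer->mAudioDataByteSize = inBuffer->mAudioDataBytesCapacity;
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

// Recording side: the queue hands over a buffer it has already filled.
// Copy the bytes out for your own use, then re-enqueue the buffer so it
// can be reused for the next chunk.
static void DemoInputCallback(void *inUserData,
                              AudioQueueRef inAQ,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc)
{
    NSData *pcm = [NSData dataWithBytes:inBuffer->mAudioData
                                 length:inBuffer->mAudioDataByteSize];
    NSLog(@"received %lu bytes of recorded PCM", (unsigned long)pcm.length);
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}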
#pragma mark -- Real-time playback
1. Import the system frameworks AudioToolbox.framework and AVFoundation.framework.
2. Get microphone permission: add the Privacy - Microphone Usage Description key to the project's Info.plist, with a description such as "The app wants to access your microphone" (see the small permission-request sketch after this list).
3. Create the playback class EYAudio.
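If you want the system microphone prompt to appear at a moment of your choosing rather than when the first recording starts, you can request access explicitly. This is only a small sketch; the helper name EYRequestMicrophonePermission is made up, and the Info.plist key above is still required either way.

#import <AVFoundation/AVFoundation.h>

// Hypothetical helper: trigger the microphone prompt up front so it does not
// pop up in the middle of the first call. The block may run on a background queue.
static void EYRequestMicrophonePermission(void)
{
    [[AVAudioSession sharedInstance] requestRecordPermission:^(BOOL granted) {
        if (!granted) {
            NSLog(@"Microphone permission denied; recording will be silent");
        }
    }];
}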
EYAudio.h
#import <Foundation/Foundation.h>

@interface EYAudio : NSObject

// Feed the received stream data into the player
- (void)playWithData:(NSData *)data;

// Reset the player if playback runs into trouble
- (void)resetPlay;

// Stop playback
- (void)stop;

@end
EYAudio.m
#import "EYAudio.h"
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

#define MIN_SIZE_PER_FRAME 1920  // Size of each packet; the indoor unit requires 960, see the format settings below
#define QUEUE_BUFFER_SIZE  3     // Number of buffers
#define SAMPLE_RATE        16000 // Sample rate

@interface EYAudio ()
{
    AudioQueueRef audioQueue;                                 // Audio output queue
    AudioStreamBasicDescription _audioDescription;
    AudioQueueBufferRef audioQueueBuffers[QUEUE_BUFFER_SIZE]; // Audio buffers
    BOOL audioQueueBufferUsed[QUEUE_BUFFER_SIZE];             // Whether each buffer is in use
    NSLock *sysnLock;
    NSMutableData *tempData;
    OSStatus osState;
}
@end

@implementation EYAudio

#pragma mark - Set AVAudioSessionCategoryMultiRoute up front for playback and recording
+ (void)initialize
{
    NSError *error = nil;
    // Playback only: AVAudioSessionCategoryPlayback
    // Recording only: AVAudioSessionCategoryRecord
    // To play back and record at the same time it has to be AVAudioSessionCategoryMultiRoute,
    // not AVAudioSessionCategoryPlayAndRecord (that one did not work for me)
    BOOL ret = [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryMultiRoute error:&error];
    if (!ret) {
        NSLog(@"Failed to set the audio session category");
        return;
    }
    // Activate the audio session
    ret = [[AVAudioSession sharedInstance] setActive:YES error:&error];
    if (!ret) {
        NSLog(@"Failed to activate the audio session");
        return;
    }
}

- (void)resetPlay
{
    if (audioQueue != nil) {
        AudioQueueReset(audioQueue);
    }
}

- (void)stop
{
    if (audioQueue != nil) {
        AudioQueueStop(audioQueue, true);
    }
    audioQueue = nil;
    sysnLock = nil;
}

- (instancetype)init
{
    self = [super init];
    if (self) {
        sysnLock = [[NSLock alloc] init];

        // Audio format settings; the exact values have to match what the other side sends
        _audioDescription.mSampleRate = SAMPLE_RATE;
        _audioDescription.mFormatID = kAudioFormatLinearPCM;
        _audioDescription.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        // 1 = mono
        _audioDescription.mChannelsPerFrame = 1;
        // Frames per packet, i.e. how many frames each packet contains
        _audioDescription.mFramesPerPacket = 1;
        // 16-bit quantization per sample
        _audioDescription.mBitsPerChannel = 16;
        _audioDescription.mBytesPerFrame = (_audioDescription.mBitsPerChannel / 8) * _audioDescription.mChannelsPerFrame;
        // Total bytes per packet = bytes per frame * frames per packet
        _audioDescription.mBytesPerPacket = _audioDescription.mBytesPerFrame * _audioDescription.mFramesPerPacket;

        // Create the output queue; playback runs on the queue's internal thread
        AudioQueueNewOutput(&_audioDescription, AudioPlayerAQInputCallback, (__bridge void * _Nullable)(self), nil, 0, 0, &audioQueue);

        // Set the volume
        AudioQueueSetParameter(audioQueue, kAudioQueueParam_Volume, 1.0);

        // Allocate the buffers
        for (int i = 0; i < QUEUE_BUFFER_SIZE; i++) {
            audioQueueBufferUsed[i] = false;
            osState = AudioQueueAllocateBuffer(audioQueue, MIN_SIZE_PER_FRAME, &audioQueueBuffers[i]);
        }

        osState = AudioQueueStart(audioQueue, NULL);
        if (osState != noErr) {
            NSLog(@"AudioQueueStart Error");
        }
    }
    return self;
}

// Play a chunk of received data
- (void)playWithData:(NSData *)data
{
    [sysnLock lock];

    tempData = [NSMutableData new];
    [tempData appendData:data];
    NSUInteger len = tempData.length;
    Byte *bytes = (Byte *)malloc(len);
    [tempData getBytes:bytes length:len];

    // Find a buffer that is not in use
    int i = 0;
    while (true) {
        if (!audioQueueBufferUsed[i]) {
            audioQueueBufferUsed[i] = true;
            break;
        } else {
            i++;
            if (i >= QUEUE_BUFFER_SIZE) {
                i = 0;
            }
        }
    }

    audioQueueBuffers[i]->mAudioDataByteSize = (unsigned int)len;
    // Copy len bytes starting at bytes into the i-th buffer's mAudioData
    memcpy(audioQueueBuffers[i]->mAudioData, bytes, len);
    // Free the temporary copy
    free(bytes);
    // Enqueue the i-th buffer; the system takes it from here
    AudioQueueEnqueueBuffer(audioQueue, audioQueueBuffers[i], 0, NULL);

    [sysnLock unlock];
}

// ************************** Callback **********************************
// When a buffer finishes playing, the callback marks it as unused
static void AudioPlayerAQInputCallback(void *inUserData, AudioQueueRef audioQueueRef, AudioQueueBufferRef audioQueueBufferRef)
{
    EYAudio *audio = (__bridge EYAudio *)inUserData;
    [audio resetBufferState:audioQueueRef and:audioQueueBufferRef];
}

- (void)resetBufferState:(AudioQueueRef)audioQueueRef and:(AudioQueueBufferRef)audioQueueBufferRef
{
    // Safety net: if there is no data, enqueue a single zero byte so an empty
    // queue does not stop the audio queue from playing anything afterwards
    if (tempData.length == 0) {
        audioQueueBufferRef->mAudioDataByteSize = 1;
        Byte *byte = (Byte *)audioQueueBufferRef->mAudioData;
        *byte = 0;
        AudioQueueEnqueueBuffer(audioQueueRef, audioQueueBufferRef, 0, NULL);
    }

    // Mark this buffer as unused
    for (int i = 0; i < QUEUE_BUFFER_SIZE; i++) {
        if (audioQueueBufferRef == audioQueueBuffers[i]) {
            audioQueueBufferUsed[i] = false;
        }
    }
}

@end

Usage from the outside: keep calling the method below, passing in the received NSData:
- (void)playWithData:(NSData *)data;
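For example, a caller could look roughly like the sketch below. EYCallSession and didReceiveAudioPacket: are hypothetical names for whatever object owns your UDP socket; the point is simply that one EYAudio instance lives for the whole call and every received PCM chunk is pushed into it.

#import "EYAudio.h"

// Hypothetical owner of the playback side of a call.
@interface EYCallSession : NSObject
@property (nonatomic, strong) EYAudio *eyAudio;
@end

@implementation EYCallSession

- (instancetype)init
{
    self = [super init];
    if (self) {
        // One player for the whole call
        _eyAudio = [[EYAudio alloc] init];
    }
    return self;
}

// Called from the UDP socket callback with one received PCM chunk
- (void)didReceiveAudioPacket:(NSData *)packet
{
    [self.eyAudio playWithData:packet];
}

@end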
#pragma mark -- Real-time recording
1. Import the system frameworks AudioToolbox.framework and AVFoundation.framework.
2. Create the recording class EYRecord.
EYRecord.h
#import <Foundation/Foundation.h>

@interface ESARecord : NSObject

// Start recording
- (void)startRecording;

// Stop recording
- (void)stopRecording;

@end
EYRecord.m
#import "EYRecord.h"
#import <AudioToolbox/AudioToolbox.h>

#define QUEUE_BUFFER_SIZE 3                // Number of input queue buffers
#define kDefaultBufferDurationSeconds 0.03 // Tune this so the recording buffer comes out at 960 bytes; it can be less, so chunks shorter than 960 have to be handled
#define kDefaultSampleRate 16000           // Sample rate of 16000

extern NSString * const ESAIntercomNotifationRecordString;

static BOOL isRecording = NO;

@interface ESARecord ()
{
    AudioQueueRef _audioQueue;                            // Audio input queue
    AudioStreamBasicDescription _recordFormat;
    AudioQueueBufferRef _audioBuffers[QUEUE_BUFFER_SIZE]; // Audio buffers
}
@property (nonatomic, assign) BOOL isRecording;
@end

@implementation ESARecord

- (instancetype)init
{
    self = [super init];
    if (self) {
        // Zero the format struct first
        memset(&_recordFormat, 0, sizeof(_recordFormat));
        _recordFormat.mSampleRate = kDefaultSampleRate;
        _recordFormat.mChannelsPerFrame = 1;
        _recordFormat.mFormatID = kAudioFormatLinearPCM;
        _recordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
        _recordFormat.mBitsPerChannel = 16;
        _recordFormat.mBytesPerPacket = _recordFormat.mBytesPerFrame = (_recordFormat.mBitsPerChannel / 8) * _recordFormat.mChannelsPerFrame;
        _recordFormat.mFramesPerPacket = 1;

        // Create the audio input queue
        AudioQueueNewInput(&_recordFormat, inputBufferHandler, (__bridge void *)(self), NULL, NULL, 0, &_audioQueue);

        // Estimate the buffer size
        int frames = (int)ceil(kDefaultBufferDurationSeconds * _recordFormat.mSampleRate);
        int bufferByteSize = frames * _recordFormat.mBytesPerFrame;
        NSLog(@"Buffer size: %d", bufferByteSize);

        // Allocate and enqueue the buffers
        for (int i = 0; i < QUEUE_BUFFER_SIZE; i++) {
            AudioQueueAllocateBuffer(_audioQueue, bufferByteSize, &_audioBuffers[i]);
            AudioQueueEnqueueBuffer(_audioQueue, _audioBuffers[i], 0, NULL);
        }
    }
    return self;
}

- (void)startRecording
{
    // Start recording
    AudioQueueStart(_audioQueue, NULL);
    isRecording = YES;
}

void inputBufferHandler(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, const AudioTimeStamp *inStartTime, UInt32 inNumPackets, const AudioStreamPacketDescription *inPacketDesc)
{
    if (inNumPackets > 0) {
        ESARecord *recorder = (__bridge ESARecord *)inUserData;
        [recorder processAudioBuffer:inBuffer withQueue:inAQ];
    }
    if (isRecording) {
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    }
}

- (void)processAudioBuffer:(AudioQueueBufferRef)audioQueueBufferRef withQueue:(AudioQueueRef)audioQueueRef
{
    NSMutableData *dataM = [NSMutableData dataWithBytes:audioQueueBufferRef->mAudioData length:audioQueueBufferRef->mAudioDataByteSize];
    if (dataM.length < 960) {
        // Pad chunks shorter than 960 bytes with 0x00
        Byte byte[] = {0x00};
        NSData *zeroData = [[NSData alloc] initWithBytes:byte length:1];
        for (NSUInteger i = dataM.length; i < 960; i++) {
            [dataM appendData:zeroData];
        }
    }
    // NSLog(@"Recorded data: %@", dataM);
    // Post a notification to hand dataM to whoever needs it
    [[NSNotificationCenter defaultCenter] postNotificationName:@"EYRecordNotifacation" object:@{@"data" : dataM}];
}

- (void)stopRecording
{
    if (isRecording) {
        isRecording = NO;
        // Stop the recording queue and dispose of the buffers; no need to check the return values here
        AudioQueueStop(_audioQueue, true);
        // Dispose of the queue; true stops recording immediately, false finishes processing the buffers first
        AudioQueueDispose(_audioQueue, true);
    }
    NSLog(@"Recording stopped");
}

@end

If it does not work, try renaming EYRecord.m to EYRecord.mm.
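On the receiving end, one way to consume the recorded chunks is to observe that notification and forward each chunk to the indoor unit. This is only a minimal sketch: EYRecordForwarder is a made-up class name and the UDP send is left as a comment.

#import "EYRecord.h"

// Hypothetical listener that starts the recorder and forwards every recorded chunk.
@interface EYRecordForwarder : NSObject
@property (nonatomic, strong) ESARecord *record;
@property (nonatomic, strong) id observerToken;
@end

@implementation EYRecordForwarder

- (void)start
{
    self.record = [[ESARecord alloc] init];
    // Keep the token so the observer can be removed when the call ends
    self.observerToken = [[NSNotificationCenter defaultCenter]
        addObserverForName:@"EYRecordNotifacation"
                    object:nil
                     queue:nil
                usingBlock:^(NSNotification *note) {
        NSData *pcm = note.object[@"data"];
        // Forward 'pcm' to the indoor unit over the UDP socket here
        NSLog(@"recorded chunk: %lu bytes", (unsigned long)pcm.length);
    }];
    [self.record startRecording];
}

- (void)stop
{
    [self.record stopRecording];
    [[NSNotificationCenter defaultCenter] removeObserver:self.observerToken];
}

@end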
That's all for this article. I hope it helps with your learning, and please keep supporting VEVB武林網(wǎng).