AVAudioSession

Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.

AVAudioSession Documentation

Posts under AVAudioSession tag

87 Posts
Post not yet marked as solved
3 Replies
64 Views
I'm facing an issue where I can't play an audio file stored in my project after receiving a push-to-talk notification. Strangely, I'm able to play the audio file by tapping a button before receiving the push notification, but afterward it doesn't work, and there are no error messages. I've ensured that everything is set up correctly in my project's capabilities. Any insights into what might be causing this issue would be greatly appreciated. My setup:
- I set everything in Capabilities and set the permission in the .plist.
- I request permission in the app delegate.
- I connect to the room when the app becomes active, and the connection succeeds.
- I then set .halfDuplex for this channel.
- In restoredChannelUUID I activate the AVAudioSession.
- After sending the PTT push, I parse the speaker and make it the activeRemoteParticipant. I can see that the delegate function channelManager didActivate is called as expected.
- When I try to play audio from my player, I see its prints in the console, but no sound plays.
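A quick way to narrow down this kind of silent failure is to log the session state and the result of play() at the moment the delegate fires. This is a hypothetical diagnostic sketch (player stands in for the poster's player), not a fix:

    import AVFoundation

    // Hypothetical diagnostic: AVAudioPlayer.play() returns false when playback
    // cannot start, and currentRoute shows where output would actually go.
    func debugPlayback(_ player: AVAudioPlayer) {
        let session = AVAudioSession.sharedInstance()
        print("category: \(session.category.rawValue), mode: \(session.mode.rawValue)")
        print("route: \(session.currentRoute)")
        let started = player.play()
        print("play() returned \(started), volume: \(player.volume)")
    }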
Posted by dmitry225.
Post not yet marked as solved
0 Replies
51 Views
In my application, I use CallKit and have supportsHolding = true set. During my phone call, another call comes in (e.g., GSM). I accept the incoming call and put the current call on hold. If I end the active call myself, everything is fine, and CallKit calls the method provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession). However, if the other party ends the call, the second call remains on hold. In the application, the user taps unhold, and I notify CallKit that the hold has ended. But in this case, the didActivate method is not called at all. If I try to activate the audio myself after the unhold, I receive the error:

    Error Domain=NSOSStatusErrorDomain Code=561017449 "Session activation failed"
    UserInfo={NSLocalizedDescription=Session activation failed}

(Code 561017449 is AVAudioSessionErrorCodeInsufficientPriority.) What needs to be done for CallKit to activate my audio?
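One pattern worth checking, since CallKit generally activates the session only for state changes it performs itself: request the unhold through CXCallController rather than reporting it directly. A minimal sketch (whether this triggers didActivate in this exact scenario is unverified):

    import CallKit

    // Sketch: ask CallKit to take the call off hold, so it owns the state change
    // and can reactivate the audio session with the proper priority.
    let callController = CXCallController()

    func requestUnhold(callUUID: UUID) {
        let unhold = CXSetHeldCallAction(call: callUUID, onHold: false)
        callController.request(CXTransaction(action: unhold)) { error in
            if let error = error {
                print("Unhold request failed: \(error)")
            }
        }
    }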
Post not yet marked as solved
0 Replies
92 Views
I am using the category .playAndRecord and mode .videoChat with the option .duckOthers. During an audio call, when I call setActive(true), an exception occurs; when I call setActive(true) again after the audio call has ended, I get no exception, but no voice comes through. Below is what I am doing; once the initial session-activation attempt fails, the system never activates the session. I have already used AVAudioSession.interruptionNotification, but it still doesn't set up the session as desired when the audio call ends.

    try session.setCategory(.playAndRecord, mode: .videoChat, options: .duckOthers)
    try session.setActive(true)
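For reference, the canonical interruption handler reactivates the session from the .ended notification when .shouldResume is present; the poster already observes this notification, so this sketch is offered only for completeness:

    import AVFoundation

    // Sketch: reactivate the session once the phone-call interruption ends.
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.interruptionNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main
    ) { note in
        guard let info = note.userInfo,
              let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: typeValue),
              type == .ended else { return }

        let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
        if AVAudioSession.InterruptionOptions(rawValue: optionsValue).contains(.shouldResume) {
            try? AVAudioSession.sharedInstance().setActive(true)
        }
    }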
Post marked as solved
1 Reply
94 Views
I'm attempting to record from a device's microphone (under iOS) using AVAudioRecorder. The examples are all quite simple, and I'm following the same method. But I'm getting error messages on attempts to record, and the resulting M4A file (after several seconds of recording) is only 552 bytes long and won't load. Here's the recorder usage:

    func startRecording() {
        let settings = [
            AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
            AVSampleRateKey: 22050,
            AVNumberOfChannelsKey: 1,
            AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
        ]
        do {
            recorder = try AVAudioRecorder(url: tempFileURL(), settings: settings)
            recorder?.delegate = self
            recorder!.record()
            recording = true
        } catch {
            recording = false
            recordingFinished(success: false)
        }
    }

The immediate sign of trouble appears to be the following, in the console. Note the 0 bits per channel and irrelevant 8K sample rate:

    AudioQueueObject.cpp:1580  BuildConverter: AudioConverterNew returned -50
    from: 0 ch, 8000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame
    to: 1 ch, 8000 Hz, Int16

A subsequent attempt to load the file into AVAudioPlayer results in:

    MP4_BoxParser.cpp:1089   DataSource read failed
    MP4AudioFile.cpp:4365    MP4Parser_PacketProvider->GetASBD() failed
    AudioFileObject.cpp:105  OpenFromDataSource failed
    AudioFileObject.cpp:80   Open failed

But that's not surprising given that it's only 500+ bytes and we had the earlier error. Anybody have an idea here? Every example on the Web shows essentially this exact method. I've also tried constructing the recorder with

    let audioFormat = AVAudioFormat.init(standardFormatWithSampleRate: 44100, channels: 1)
    if audioFormat == nil {
        print("Audio format failed.")
    } else {
        do {
            recorder = try AVAudioRecorder(url: tempFileURL(), format: audioFormat!)
            ...

with mostly the same result. In that case the instantiation error message was the following, which at least mentions the requested sample rate:

    AudioQueueObject.cpp:1580  BuildConverter: AudioConverterNew returned -50
    from: 0 ch, 44100 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame
    to: 1 ch, 44100 Hz, Int32
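The "from: 0 ch" in the converter log is what a missing input looks like; record permission plus an active record-capable session are prerequisites that the snippets above don't show. A sketch of that setup step, on the assumption that it is in fact absent:

    import AVFoundation

    // Sketch: configure the session for recording and confirm permission
    // before creating the AVAudioRecorder.
    let session = AVAudioSession.sharedInstance()
    try? session.setCategory(.playAndRecord, mode: .default)
    try? session.setActive(true)

    session.requestRecordPermission { granted in
        guard granted else { return }
        DispatchQueue.main.async {
            startRecording()   // the poster's function above
        }
    }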
Post not yet marked as solved
0 Replies
75 Views
I need to duck the audio coming from ApplicationMusicPlayer while playing a local file using AVAudioPlayer. I've tried using the duckOthers option as follows, but it doesn't work:

    let appAudioSession = AVAudioSession.sharedInstance()
    do {
        try appAudioSession.setCategory(.playAndRecord, mode: .default, options: .duckOthers)

Maybe this is because there's one session for the entire app, and ApplicationMusicPlayer is using it? This is a fairly critical problem for my application, since Music content is always much louder than locally recorded content. Any insight appreciated.
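Two details may matter here: ducking applies only to audio from other apps' sessions (so audio played through this app's own session, which ApplicationMusicPlayer may well share, would not be ducked), and it takes effect when the session is activated. A sketch completing the snippet under those assumptions:

    let appAudioSession = AVAudioSession.sharedInstance()
    do {
        try appAudioSession.setCategory(.playAndRecord, mode: .default, options: .duckOthers)
        try appAudioSession.setActive(true)   // ducking is applied on activation
    } catch {
        print("Session setup failed: \(error)")
    }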
Post not yet marked as solved
1 Reply
799 Views
I'd like to allow the speech synthesizer to play on the device speaker while simultaneously mixing with a phone call. I've tried a number of different configurations but am unable to find one that achieves the functionality I'm after, or that allows mixing with a phone call at all. There is a flag, mixToTelephonyUplink, that seems to suggest at least some mixing with a phone call is possible using the speech synthesizer, but I'm currently unable to find almost any documentation about it beyond the basic API docs. I've had some luck at least getting the synthesizer to always play to the speaker with the following audio session configuration, but the sound is never mixed into the phone call; instead, it is ducked and muted while the call takes place. I've tried quite a few combinations of category and overrides, but nothing works quite as I'd expect.

    synthesizer.mixToTelephonyUplink = true
    try? audioSession.setCategory(.playback, mode: .voicePrompt, options: [.mixWithOthers, .defaultToSpeaker])
    try? audioSession.setActive(true, options: [])
    try? audioSession.overrideOutputAudioPort(.speaker)

Is there some kind of documentation for this that's off the beaten path that I'm somehow missing? I'm going to continue with guess-and-check, but I'm starting to think this flag, and the functionality it implies, was never fully implemented.
Posted by d-skinio.
Post not yet marked as solved
0 Replies
101 Views
Background: when I receive the InterruptionBegan notification (interruption type AVAudioSessionInterruptionTypeBegan), I pause the music; when I receive the InterruptionEnded notification (interruption type AVAudioSessionInterruptionTypeEnded), I resume playing. However, sometimes I get the error code AVAudioSessionErrorCodeCannotInterruptOthers (560557684). Some solutions: I searched Stack Overflow, where there are some similar questions, but the solutions there are not very satisfying, because I don't want my app to mix with others (and, once again, it all works most of the time), and my app already uses remote control events, so that doesn't solve anything. Questions:
1. Has anyone else encountered this problem?
2. Can this problem be solved, and how?
3. In addition, I noticed the otherAudioPlaying property on AVAudioSession, which tells us that another app is playing. Can we know which app is playing?
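On question 3: the session exposes only whether other audio is playing, not which app is responsible. A short sketch of the two related hints:

    let session = AVAudioSession.sharedInstance()
    // True if any other app is currently playing audio; the API does not say which app.
    print("isOtherAudioPlaying:", session.isOtherAudioPlaying)
    // True if other audio should silence this app's secondary audio.
    print("secondaryAudioShouldBeSilencedHint:", session.secondaryAudioShouldBeSilencedHint)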
Posted by haitao_.
Post not yet marked as solved
1 Reply
289 Views
My project uses an AVAudioEngine with a very simple setup: a speech recognizer running on a tap on the engine's input, with separate AVAudioPlayerNodes handling playback.

    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setActive(true, options: .notifyOthersOnDeactivation)
    try session.setAllowHapticsAndSystemSoundsDuringRecording(true)

    filePlayerNode       --> engine.mainMixerNode
    bufferPlayerNode     --> engine.mainMixerNode
    engine.mainMixerNode --> engine.outputNode
    // bufferPlayer.scheduleBuffer() is called on its own queue

The input works fine, since the buffers can be collected into a file that plays back correctly, and the recognizer works fine too. But when I try to play the live audio by sending the buffer to the bufferPlayer on this or another device, the buffer audio plays at a very low volume, sometimes with severe distortion. If I lower the sample rate via AVAudioConverter, the distortion gets worse. I've tried experimenting with the AVAudioSession category options, having separate AVAudioEngines, and much, much more, yet I still haven't figured this out. It's gotten to the point where I've fixed almost all the arcane and minor issues in my audio system, yet I still can't play back my voice properly. The ability to play and record simultaneously is a basic feature of phones; when on speaker mode, a phone doesn't need to behave like a walkie-talkie. It's inconceivable to me that the relatively new AVAudioEngine doesn't have an implementation for this, since the main issue (feedback loops) can be dealt with via a simple primitive circuit. Live video chat apps like FaceTime wouldn't be possible without this, yet to my surprise I found no answers online (what I did find were articles explaining how to write a file while playback is occurring). Is there truly no way to do this with AVAudioEngine? Am I missing something fundamental? Any pointers would be greatly appreciated.
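One avenue the post doesn't mention: simultaneous play-and-record voice apps on iOS normally rely on the system's voice processing (echo cancellation and gain control), which is enabled through the session mode or directly on the engine's I/O nodes. A sketch of both forms, offered as a suggestion rather than a verified fix for the distortion described:

    // Option 1: request voice processing via the session mode.
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .voiceChat,
                            options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true)

    // Option 2 (iOS 13+): enable voice processing on the engine's input node.
    try engine.inputNode.setVoiceProcessingEnabled(true)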
Posted by wmk.
Post not yet marked as solved
1 Reply
169 Views
I'm trying to integrate CallKit into a Flutter app that uses WebRTC for calls, and I have an issue with taking calls on the locked screen. CXAnswerCallAction requires the action.fulfill() method to be called after the connection is established. Here is a piece of code that does not wait for the connection to be established:

    guard let call = self.callManager?.callWithUUID(uuid: action.callUUID) else {
        action.fail()
        return
    }
    call.data.isAccepted = true
    self.answerCall = call
    self.callManager?.updateCall(call)
    sendEvent(SwiftCallKeepPlugin.ACTION_CALL_ACCEPT, call.data.toJSON())
    DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(1200)) {
        self.configureAudioSession()
    }
    action.fulfill()
    }

This causes the connection-time counter to be visible on screen immediately, but the user still has to wait for the connection to be established and can't hear anything. Here is the code that waits for the connection to be established before calling action.fulfill():

    if self.awaitedConnection.uuid != uuid {
        action.fail()
    } else if self.awaitedConnection.isConnected {
        DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(1200)) {
            self.configureAudioSession()
        }
        action.fulfill()
    } else {
        DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(1000)) {
            self.waitForConnection(uuid: uuid, action: action)
        }
    }
    }

    public func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
        guard let call = self.callManager?.callWithUUID(uuid: action.callUUID) else {
            action.fail()
            return
        }
        call.data.isAccepted = true
        self.answerCall = call
        self.callManager?.updateCall(call)
        self.awaitedConnection.uuid = action.callUUID
        self.awaitedConnection.isConnected = false
        sendEvent(SwiftCallKeepPlugin.ACTION_CALL_ACCEPT, call.data.toJSON())
        waitForConnection(uuid: action.callUUID, action: action)
    }

Unfortunately, though it works great on iOS 15.7, on 17.3 it results in no audio: no sound and no recording. I also can't enable audio later while the call is ongoing. For reference:

    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(AVAudioSession.Category.playAndRecord, options: AVAudioSession.CategoryOptions.allowBluetooth)
        try session.setMode(self.getAudioSessionMode(data?.audioSessionMode ?? "voiceChat"))
        try session.setActive(data?.audioSessionActive ?? true)
        try session.setPreferredSampleRate(data?.audioSessionPreferredSampleRate ?? 44100.0)
        try session.setPreferredIOBufferDuration(data?.audioSessionPreferredIOBufferDuration ?? 0.005)
    } catch {
        print(error)
    }
    }

I can see in the docs for action.fulfill() that "You should only call this method from the implementation of a CXProviderDelegate method". Is this the reason for the issue? And how can I comply if I need to wait for the connection asynchronously while the provider method is synchronous?
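For comparison, the pattern Apple's CallKit documentation points toward is to fulfill the answer action promptly inside the delegate method, configure the session there, and start audio I/O only in provider(_:didActivate:), letting the network connection finish asynchronously afterward. A sketch of that shape (connectToRoomAsync and startAudioUnits are hypothetical placeholders, and this is not a verified fix for the iOS 17.3 behavior):

    public func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
        configureAudioSession()   // configure, but let CallKit activate the session
        connectToRoomAsync()      // hypothetical: WebRTC setup continues in the background
        action.fulfill()          // fulfill synchronously, inside the delegate method
    }

    public func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        startAudioUnits()         // hypothetical: begin capture/playback only here
    }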
Posted by cand123.
Post not yet marked as solved
1 Reply
254 Views
Hello, I’ve been trying to play system sounds in my app, but this hasn’t really been working. I am frequently switching between speech recognition (the Speech framework) and sounds, so perhaps that’s where the issue lies. However, despite my best efforts, I haven't been able to solve the issue. I've been resetting the AVAudioSession category before playing a sound or starting speech recognition (as shown in the code snippet below), to no avail. Has this happened to anyone else? Does anybody know how to fix the issue?

    recognizer = nil
    try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [])
    try? AVAudioSession.sharedInstance().setActive(true)
    AudioServicesPlaySystemSound(1113)
    try? AVAudioSession.sharedInstance().setCategory(.record, mode: .spokenAudio, options: [])
    try? AVAudioSession.sharedInstance().setActive(true)
    recognizer = SpeechRecognition(word: wordSheet)
    recognizer!.startRecognition()

Thank you.
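One configuration worth trying (an assumption, not a confirmed fix): keep a single .playAndRecord session active for both uses, so the category isn't being flipped underneath the system sound:

    import AVFoundation
    import AudioToolbox

    // Sketch: one session configuration covering both playback and recording.
    let session = AVAudioSession.sharedInstance()
    try? session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
    try? session.setActive(true)

    AudioServicesPlaySystemSound(1113)
    // ...start speech recognition later without changing the category.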
Post not yet marked as solved
1 Reply
238 Views
There is a CustomPlayer class that uses MTAudioProcessingTap internally to modify the audio buffer. Say there are two instances, A and B, of this class, both running. On iOS 17.1, when A finishes its operation and the instance is terminated, B's MTAudioProcessingTap process is stopped and its finalize callback fires, even though B still has work left to do. The same code in the same project does not behave this way on iOS 17.0 or lower: there, terminating A has no impact on B, which completes its task. What changed in iOS 17.1 to produce this result? I'd appreciate an answer on how to avoid the issue.

    let audioMix = AVMutableAudioMix()
    var audioMixParameters: [AVMutableAudioMixInputParameters] = []
    try composition.tracks(withMediaType: .audio).forEach { track in
        let inputParameter = AVMutableAudioMixInputParameters(track: track)
        inputParameter.trackID = track.trackID
        var callbacks = MTAudioProcessingTapCallbacks(
            version: kMTAudioProcessingTapCallbacksVersion_0,
            clientInfo: UnsafeMutableRawPointer(
                Unmanaged.passRetained(clientInfo).toOpaque()
            ),
            init: { tap, clientInfo, tapStorageOut in
                tapStorageOut.pointee = clientInfo
            },
            finalize: { tap in
                Unmanaged<ClientInfo>.fromOpaque(MTAudioProcessingTapGetStorage(tap)).release()
            },
            prepare: nil,
            unprepare: nil,
            process: { tap, numberFrames, flags, bufferListInOut, numberFramesOut, flagsOut in
                var timeRange = CMTimeRange.zero
                let status = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                                                flagsOut, &timeRange, numberFramesOut)
                if noErr == status {
                    ....
                }
            })
        var tap: Unmanaged<MTAudioProcessingTap>?
        let status = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                                kMTAudioProcessingTapCreationFlag_PostEffects, &tap)
        guard noErr == status else { return }
        inputParameter.audioTapProcessor = tap?.takeUnretainedValue()
        audioMixParameters.append(inputParameter)
        tap?.release()
    }
    audioMix.inputParameters = audioMixParameters
    return audioMix
Posted by koggo2.
Post marked as solved
1 Reply
262 Views
I am writing code to monitor the incoming audio levels in visionOS. It works properly in the simulator, but gets an error on the device. Curious if anyone has any tips. I took out some of the code so it's a bit shorter; it fails in setupAudioEngine when I try to start the engine, with this error:

    Error starting audio engine: The operation couldn’t be completed. (com.apple.coreaudio.avfaudio error 561145187.)

Thanks in advance! Here is my code:

    class AudioInputMonitor: ObservableObject {
        private var audioEngine: AVAudioEngine?
        @Published var inputLevel: Float = 0

        init() {
            requestMicrophonePermission()
        }

        private func requestMicrophonePermission() {
            AVAudioApplication.requestRecordPermission { granted in
                DispatchQueue.main.async {
                    if granted {
                        self.setupAudioSessionAndEngine()
                    } else {
                        print("Microphone permission not granted")
                        // Handle the case where permission is not granted
                    }
                }
            }
        }

        private func setupAudioSessionAndEngine() {
            do {
                let audioSession = AVAudioSession.sharedInstance()
                try audioSession.setCategory(.playAndRecord, mode: .measurement, options: [])
                try audioSession.setActive(true)
                self.setupAudioEngine()
            } catch {
                print("Failed to set up the audio session: \(error)")
            }
        }

        private func setupAudioEngine() {
            audioEngine = AVAudioEngine()
            guard let inputNode = audioEngine?.inputNode else {
                print("Failed to get the audio input node")
                return
            }
            let recordingFormat = inputNode.outputFormat(forBus: 0)
            inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] (buffer, _) in
                self?.analyzeAudio(buffer: buffer)
            }
            do {
                try audioEngine?.start()
            } catch {
                print("Error starting audio engine: \(error.localizedDescription)")
            }
        }

        private func analyzeAudio(buffer: AVAudioPCMBuffer) {
            // removed to be brief
        }

        func stopMonitoring() {
            // removed to be brief
        }
    }
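As an aside, AVAudioSession error codes like this one are FourCC values, and a small hypothetical helper makes them readable: 561145187 decodes to "!rec", i.e. AVAudioSession.ErrorCode.cannotStartRecording:

    // Hypothetical helper: decode an OSStatus-style FourCC error code.
    func fourCC(_ code: UInt32) -> String {
        let bytes = [24, 16, 8, 0].map { UInt8((code >> $0) & 0xFF) }
        return String(bytes: bytes, encoding: .ascii) ?? "\(code)"
    }

    print(fourCC(561145187))   // "!rec" -> cannotStartRecording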
Posted by pj4533.
Post not yet marked as solved
0 Replies
229 Views
Hi all, I have created a QuickLook preview for my custom data type in my app. I use SwiftUI wrapped in UIKit for the preview. My issue is that when I try to play audio using AVAudioPlayer, I receive an OSStatus error -50. Does anyone know if there are separate permissions I need to request before being able to do this? Here are the errors I get while trying to set my audio session active and play on the AVAudioPlayer. Thanks for your help and advice!

    The operation couldn’t be completed. (OSStatus error -50.)
    nwi_state: registration failed (9)
    connection <connection: 0x100e0b270> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 } : error <dictionary: 0x251524530> { count = 1, transaction: 0, voucher = 0x0, contents = "XPCErrorDescription" => <string: 0x2515246c8> { length = 18, contents = "Connection invalid" } }
    auto-cancelling <connection: 0x100e0b270> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 }
    0x2816bf680 reply: XPC_ERROR_CONNECTION_INVALID
    throwing swix::exception: !(is_valid())
    AQ_API_V2Impl.cpp:134 AudioQueueNew: <-AudioQueueNew failed -302
    rebuilding null connection
    0x2816bf680 reply: XPC_ERROR_CONNECTION_INVALID
    connection <connection: 0x100822a90> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 } : error <dictionary: 0x251524530> { count = 1, transaction: 0, voucher = 0x0, contents = "XPCErrorDescription" => <string: 0x2515246c8> { length = 18, contents = "Connection invalid" } }
    throwing swix::exception: !(is_valid())
    auto-cancelling <connection: 0x100822a90> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 }
    AQ_API_V2Impl.cpp:134 AudioQueueNew: <-AudioQueueNew failed -302
Posted by emilea.
Post not yet marked as solved
1 Reply
839 Views
I know that if you want background audio from AVPlayer, you need to detach your AVPlayer from either your AVPlayerViewController or your AVPlayerLayer, in addition to having your AVAudioSession configured correctly. I have all of that squared away, and background audio is fine until we introduce AVPictureInPictureController or use the PiP behavior baked into AVPlayerViewController. If you want PiP to behave as expected when you put your app into the background by switching to another app or going to the Home Screen, you can't perform the detachment operation, otherwise the PiP display fails. On an iPad, if PiP is active and you lock your device, you continue to get background audio playback. However, on an iPhone, if PiP is active and you lock the device, the audio pauses. If PiP is inactive and you lock the device, the audio also pauses and you have to manually tap play on the Lock Screen controls; this is the same on iPad and iPhone. My questions are:
1. Is there a way to keep background-audio playback going when PiP is inactive and the device is locked? (iPhone and iPad)
2. Is there a way to keep background-audio playback going when PiP is active and the device is locked? (iPhone)
Posted by jblaker.
Post not yet marked as solved
0 Replies
185 Views
Hello, I am having an issue setting my AVAudioSession output to a Bluetooth A2DP device. I want to use the built-in mic for the input and an A2DP device (AirPods Pro 2) for the output route. Whenever I set .allowBluetoothA2DP in my AVAudioSession options, the output changes to the speaker; the mode is .default and the category is .playAndRecord. If I follow the same procedure with AirPods Pro 1, the output is set to the AirPods Pro 1. I only hit this trouble when I use AirPods Pro 2 with an iPhone on iOS 17; there seems to be no issue on iOS versions below 17. Has anyone run into this kind of issue? Thank you in advance.
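For anyone trying to reproduce this, a minimal configuration along the lines described, plus a route dump to see what the system actually selected:

    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [.allowBluetoothA2DP])
    try session.setActive(true)
    print(session.currentRoute)   // inspect the inputs/outputs the system chose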
Posted by jeeeeeee.
Post not yet marked as solved
0 Replies
254 Views
When setting the mode during the configuration of an audio session in Swift, the previously configured categoryOptions get reset. For example, if you call setMode as shown below, you will observe that all previously set categoryOptions are cleared:

    try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .videoChat, options: [.allowBluetooth, .defaultToSpeaker])
    try AVAudioSession.sharedInstance().setMode(.voiceChat)

If you need to change the mode while maintaining the categoryOptions, you have to call setCategory once again. I haven't identified the exact reason for this behavior, and its practical impact on an application's functionality is not yet clear to me. Why do you think this handling is in place?
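A workaround consistent with this observation is to avoid bare setMode calls and instead re-assert the category, the new mode, and the options together (sketch):

    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .voiceChat,
                            options: [.allowBluetooth, .defaultToSpeaker])
    print(session.categoryOptions)   // the options survive in this form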
Post not yet marked as solved
1 Reply
281 Views
I've been trying to make a native version of my iPad app, which uses AVAudioPlayer. Everything works fine on iOS and iPadOS; however, when running on visionOS, the audio sounds like it's constantly skipping (both in the simulator and on an actual device). Does anyone know why this might be, how to fix it, or a workaround?
Posted by kudit.
Post not yet marked as solved
0 Replies
284 Views
Hi everyone, I was working on some code that involves recording audio with AVAudioEngine and got an issue that just crashes the app:

    EXC_BREAKPOINT
    Exception 6, Code 1, Subcode 4304279688
    +0x009888 AudioRecordModule.setupAudioEngine
    +0x009788 AudioRecordModule.setupAudioEngine
    +0x00c5bc AudioRecordModule.handleConfigurationChange

Below is the relevant code in the Recorder class:

    public class AudioRecordModule: Module {
        private var audioEngine: AVAudioEngine?

        private func startRecording(options recordingOptions: RecordingOptions) {
            try AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: .mixWithOthers)
            try AVAudioSession.sharedInstance().setActive(true)
            outputFormat = AVAudioFormat(
                commonFormat: recordingOptions.bitDepth == 32 ? .pcmFormatInt32 : .pcmFormatInt16,
                sampleRate: Double(recordingOptions.sampleRate),
                channels: AVAudioChannelCount(recordingOptions.channels),
                interleaved: true
            )!
            let fileUri = URL(string: recordingOptions.fileUri)!
            let formatSettings: [String: Any] = [
                AVFormatIDKey: kAudioFormatMPEG4AAC,
                AVSampleRateKey: recordingOptions.sampleRate,
                AVNumberOfChannelsKey: recordingOptions.channels,
                AVEncoderBitRateStrategyKey: AVAudioBitRateStrategy_Constant,
                AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue,
            ]
            self.recordedFile = try AVAudioFile(
                forWriting: fileUri,
                settings: formatSettings,
                commonFormat: outputFormat.commonFormat,
                interleaved: outputFormat.isInterleaved
            )
            if !hadSetupNotification {
                setupNotifications()
            }
        }

        func handleConfigurationChange() {
            DispatchQueue.main.async {
                self.releaseAudioEngine()
                self.setupAudioEngine()
                if self.state == "recording" {
                    // we could attempt to keep recording
                    do {
                        try self.audioEngine?.start()
                    } catch {
                        self.internalPauseRecording()
                        self.sendInterruptEvent()
                    }
                }
            }
        }

        func setupNotifications() {
            nc.addObserver(
                forName: Notification.Name.AVAudioEngineConfigurationChange,
                object: nil,
                queue: nil
            ) { [weak self] _ in
                guard let weakself = self else { return }
                if weakself.state != "inactive" {
                    weakself.handleConfigurationChange()
                }
            }
        }

        private func setupAudioEngine() {
            self.audioEngine = nil
            let audioEngine = AVAudioEngine()
            self.audioEngine = audioEngine
            let inputNode = audioEngine.inputNode
            let inputFormat = inputNode.inputFormat(forBus: 0)
            let converter = AVAudioConverter(from: inputFormat, to: outputFormat)!
            inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
                do {
                    let inputBlock: AVAudioConverterInputBlock = { _, outStatus in
                        outStatus.pointee = AVAudioConverterInputStatus.haveData
                        return buffer
                    }
                    let frameCapacity = AVAudioFrameCount(self.outputFormat.sampleRate) * buffer.frameLength / AVAudioFrameCount(buffer.format.sampleRate)
                    let outputBuffer = AVAudioPCMBuffer(
                        pcmFormat: self.outputFormat,
                        frameCapacity: frameCapacity
                    )!
                    var error: NSError?
                    converter.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock)
                    if let error = error {
                        throw error
                    } else {
                        try self.recordedFile?.write(from: outputBuffer)
                    }
                } catch {
                    print(error)
                }
            }
        }

        private func releaseAudioEngine() {
            if let audioEngine = self.audioEngine {
                audioEngine.inputNode.removeTap(onBus: 0)
                audioEngine.stop()
            }
            audioEngine = nil
        }
    }

Besides that, the record module works normally; it is just the configuration change that it does not handle well. I understand that when the configuration changes, I need to reinitialize the audio engine to get the correct input format (since the new configuration/audio device can have a different sample rate and such); if I don't do that, the app also crashes, perhaps due to the mismatch. AVAudioRecorder is not an option for me. Thank you for your help.
Posted by hoang_ff.
Post not yet marked as solved
1 Reply
313 Views
:( We are currently in the process of developing a video-calling app using WebRTC. We initiate one-to-one video calls with the AVAudioSession configured as follows:

    do {
        if audioSession.category != .playAndRecord {
            try audioSession.setCategory(
                AVAudioSession.Category.playAndRecord,
                options: [
                    .defaultToSpeaker
                ]
            )
            try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
        }
        if audioSession.mode != .videoChat {
            try audioSession.setMode(.videoChat)
        }
    } catch {
        logger.error(msg: "AVAudioSession: \(error.localizedDescription)")
    }

After initiating a video call, we recorded the call using the iOS built-in screen-recording feature, and the recorded video includes the system audio. However, iOS/iPadOS apps with similar features (Zoom, Skype, Slack) do not include audio in their recordings. Why does this difference occur? Is this behavior a security feature of iOS, and are there specific conditions required? Is some sort of configuration needed in AVAudioSession? Additionally :( I also reached out to Apple Developer Technical Support, and they responded, "We were able to reproduce it, but since we don't understand the issue, we will investigate it." What's that about...
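One variable worth isolating (an assumption, not a known cause): the snippet sets the mode in a separate setMode call, which, as noted in the setMode post earlier on this page, can clear previously set category options. A consolidated form removes that ambiguity when testing how screen recording treats the call audio:

    try audioSession.setCategory(.playAndRecord, mode: .videoChat,
                                 options: [.defaultToSpeaker])
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)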
Posted by soejima.