AVAudioSession


Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.

AVAudioSession Documentation

Posts under AVAudioSession tag

89 Posts
Post not yet marked as solved
1 Reply
683 Views
AVSpeechSynthesisVoice.speechVoices() returns voices that are no longer available after upgrading from iOS 16 to iOS 17 (although this has been an issue for a long time, I think). To reproduce:

1. On iOS 16, download one or more enhanced voices under "Accessibility > Spoken Content > Voices".
2. Upgrade to iOS 17.
3. Call AVSpeechSynthesisVoice.speechVoices() and note that the voices installed in step 1 are still present, yet they are no longer downloaded and therefore don't work.

There is no property on AVSpeechSynthesisVoice to indicate whether a voice is still available. This is a problem for apps that allow users to choose among the available system voices; I receive many support emails about this issue around iOS upgrades, and I have to tell users to re-download the voices, which is not obvious to them. I've created a feedback item for this as well (FB12994908).
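For anyone who wants to reproduce the listing side of this, a minimal sketch (the language filter is only for brevity):

import AVFoundation

// List every voice the API reports, including enhanced voices that may no
// longer be downloaded. Note there is nothing on AVSpeechSynthesisVoice that
// indicates whether the underlying voice asset is actually still installed.
for voice in AVSpeechSynthesisVoice.speechVoices() where voice.language == "en-US" {
    let quality = voice.quality == .enhanced ? "enhanced" : "default"
    print(voice.identifier, voice.name, quality)
}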
Post not yet marked as solved
0 Replies
693 Views
Hi! I am working on an audio application on iOS. This is how I retrieve the workgroup from the RemoteIO audio unit (ioUnit). The unit is initialized and working fine (meaning that it is regularly called by the system).

os_workgroup_t os_workgroup{nullptr};
uint32_t os_workgroup_index_size;
if (status = AudioUnitGetProperty(ioUnit,
                                  kAudioOutputUnitProperty_OSWorkgroup,
                                  kAudioUnitScope_Global,
                                  0,
                                  &os_workgroup,
                                  &os_workgroup_index_size);
    status != noErr) {
    throw runtime_error("AudioUnitGetProperty kAudioOutputUnitProperty_OSWorkgroup - Failed with OSStatus: " + to_string(status));
}

However, the resulting os_workgroup's value is 0x40, which does not seem correct. No wonder I cannot join any other realtime threads to the workgroup either. The returned status, however, is a solid 0. Can anyone help?
Post not yet marked as solved
0 Replies
579 Views
When recording audio over Bluetooth from AirPods to iPhone using AVAudioRecorder, the Bluetooth audio codec used is always AAC-ELD, independent of the storage codec selected in the AVAudioRecorder instance. As far as I know, every Bluetooth device must support SBC; hence, it should be possible for the AirPods to transmit the recorded audio using the SBC codec instead of AAC-ELD. However, I could not find any resource on how to request this codec using AVAudioRecorder or AVAudioEngine. Is it possible to request SBC at all, and if yes, how?
Posted by dubachti.
Post not yet marked as solved
3 Replies
1.7k Views
Hi folks, I'm currently working on a video conferencing app and use AVRoutePickerView for output device selection. Since the iOS 16 release it has started to display incorrect names for the earpiece and speaker options: for both of them the name is "iPhone" (before, it was "iPhone" / "Speaker"). Output switching works correctly, but the displayed names are confusing. Here is my audio session configuration:

private func configureSession() {
    let configuration = RTCAudioSessionConfiguration.webRTC()
    configuration.category = AVAudioSession.Category.playAndRecord.rawValue
    configuration.categoryOptions = [.allowBluetooth, .allowBluetoothA2DP, .duckOthers, .mixWithOthers]
    let session = RTCAudioSession.sharedInstance()
    session.lockForConfiguration()
    do {
        try session.setConfiguration(configuration)
    } catch let error as NSError {
        logError("[AudioSessionManager] Unable to configure RTCAudioSession with error: \(error.localizedDescription)")
    }
    session.unlockForConfiguration()
}

private func overrideOutputPortIfNeeded() {
    DispatchQueue.main.async {
        guard let currentOutputType = self.session.currentRoute.outputs.first?.portType else { return }
        self.session.lockForConfiguration()
        let shouldOverride = [.builtInReceiver, .builtInSpeaker].contains(currentOutputType)
        logDebug("[AudioSessionManager] Should override output to speaker? \(shouldOverride)")
        if shouldOverride {
            do {
                try self.session.overrideOutputAudioPort(.speaker)
            } catch let error as NSError {
                logError("[AudioSessionManager] Unable to override output to Speaker: \(error.localizedDescription)")
            }
        }
        self.session.unlockForConfiguration()
    }
}

Any help appreciated. Thanks!
Post not yet marked as solved
0 Replies
901 Views
Our app is a game written in Unity, where most of our audio playback is handled by Unity. However, one of our game experiences uses microphone input for speech recognition, so in order to perform echo cancellation (while the game has audio playback), we set up an audio stream from Unity to native Swift code that performs the mixing of the input/output nodes. We found, however, that by streaming the audio buffer to our AVAudioSession:

- The volume of the audio playback outputs differently.
- When capturing a screen recording of the app, the audio played through AVAudioSession does not get captured at all.

We are looking to figure out what could be causing the discrepancy in playback as well as the capture behaviour during screen recordings. We set up the AVAudioSession with this configuration: AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord, options: .mixWithOthers), with inputNode.setVoiceProcessingEnabled(true) after connecting our IO and mixer nodes. Any suggestions or ideas on what to look out for would be appreciated!
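For reference, a minimal sketch of the configuration described above, with assumed engine and node names (not taken from the app's actual code):

import AVFoundation

func configureVoiceProcessing(engine: AVAudioEngine) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, options: .mixWithOthers)
    try session.setActive(true)
    // Enable after the IO/mixer graph is connected. Voice processing also
    // alters the output path (echo cancellation taps both directions), which
    // is one plausible source of the playback level difference noted above.
    try engine.inputNode.setVoiceProcessingEnabled(true)
}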
Post not yet marked as solved
0 Replies
651 Views
I have a Flutter mobile app and I'm using the "record" Flutter package for recording audio. I'm facing an issue recording audio while the phone is locked. App behavior:

1. First we start the app and connect it to a Bluetooth device.
2. Then the app waits for a trigger of 1 from the connected device.
3. On receiving the trigger from the device it starts recording, while the phone is locked and the app is running in the background.

AVAudioSession_iOS.mm:2367 Failed to set category, error: '!int'
Failed to set up audio session: Error Domain=NSOSStatusErrorDomain Code=560557684 "(null)"

I'm getting this error when AVAudioSession sets the category. My app is for user-safety purposes, so it needs to record in the background. Let me know how I can achieve this functionality.
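Error code 560557684 is the FourCC '!int', i.e. AVAudioSession.ErrorCode.cannotInterruptOthers: a non-mixable session cannot be activated while the app is in the background. A minimal sketch of one common mitigation, assuming the app has the 'audio' UIBackgroundMode enabled: configure and activate the session while the app is still in the foreground, and make it mixable so later activations don't need to interrupt other audio.

import AVFoundation

func prepareSessionForBackgroundRecording() throws {
    let session = AVAudioSession.sharedInstance()
    // A mixable session can be (re)activated in the background without
    // interrupting other apps' audio, which avoids the '!int' error.
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.mixWithOthers, .allowBluetooth])
    try session.setActive(true) // call this while still in the foreground
}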
Post not yet marked as solved
0 Replies
707 Views
The loop plays smoothly in Audacity, but when I run it on the device or in the simulator it clicks on each loop, at different intensities. I configure the session at the app level:

let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(.playback, mode: .default, options: [.mixWithOthers])
    try audioSession.setActive(true)
} catch {
    print("Setting category session for AVAudioSession Failed")
}

And then I have this method in my class:

func playSound(soundId: Int) {
    let sound = ModelData.shared.sounds[soundId]
    if let bundle = Bundle.main.path(forResource: sound.filename, ofType: "flac") {
        let backgroundMusic = NSURL(fileURLWithPath: bundle)
        do {
            audioPlayer = try AVAudioPlayer(contentsOf: backgroundMusic as URL)
            audioPlayer?.prepareToPlay()
            audioPlayer?.numberOfLoops = -1 // loop indefinitely
            audioPlayer?.play()
            isPlayingSounds = true
        } catch {
            print(error)
        }
    }
}

Does anyone have any clue? Thanks!

PS: If I use AVQueuePlayer and repeat the item, the click noise disappears (but that's no use, because I would need to repeat it indefinitely without wasting memory); if I use AVLooper I get silence between loops. All with the same sound.

PS2: The same happens with ALAC files.
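One approach that may be worth trying, sketched under the assumption of a local file URL: load the whole file into a PCM buffer and loop it on an AVAudioPlayerNode. Buffer looping is sample-accurate, which sidesteps the loop-boundary artifacts that decoder priming and padding can introduce with compressed formats:

import AVFoundation

func playLoop(url: URL) throws -> (AVAudioEngine, AVAudioPlayerNode) {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    engine.attach(player)

    let file = try AVAudioFile(forReading: url)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: AVAudioFrameCount(file.length)) else {
        throw NSError(domain: "playLoop", code: -1)
    }
    try file.read(into: buffer)

    engine.connect(player, to: engine.mainMixerNode, format: buffer.format)
    try engine.start()
    player.scheduleBuffer(buffer, at: nil, options: .loops) // seamless repeat
    player.play()
    return (engine, player) // keep both alive while playing
}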
Post not yet marked as solved
2 Replies
1.1k Views
I am using AVSpeechSynthesizer to get audio buffers and play them, using AVAudioEngine and AVAudioPlayerNode to play the buffers. But I am getting this error:

[avae] AVAEInternal.h:76 required condition is false: [AVAudioPlayerNode.mm:734:ScheduleBuffer: (_outputFormat.channelCount == buffer.format.channelCount)]
2023-05-02 03:14:35.709020-0700 AudioPlayer[12525:308940] *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: _outputFormat.channelCount == buffer.format.channelCount'

Can anyone please help me play the AVAudioBuffer from the AVSpeechSynthesizer write method?
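The exception means the format of the player node's connection doesn't match the buffers the synthesizer delivers. A sketch of one way around this, assuming it is acceptable to connect the node lazily with the format of the first delivered buffer:

import AVFoundation

final class SpeechBufferPlayer {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private let synthesizer = AVSpeechSynthesizer()
    private var connected = false

    init() {
        engine.attach(player)
    }

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        synthesizer.write(utterance) { [self] buffer in
            guard let pcm = buffer as? AVAudioPCMBuffer, pcm.frameLength > 0 else { return }
            if !connected {
                // Connect with the synthesizer's own buffer format so the
                // channel counts are guaranteed to match.
                engine.connect(player, to: engine.mainMixerNode, format: pcm.format)
                try? engine.start()
                player.play()
                connected = true
            }
            player.scheduleBuffer(pcm)
        }
    }
}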
Posted by connyhung.
Post not yet marked as solved
1 Reply
1.9k Views
I'm developing a voice communication app for the iPad with both playback and record, using an AudioUnit of type kAudioUnitSubType_VoiceProcessingIO to get echo cancellation. When playing audio before initializing the recording audio unit, the volume is high. But if I play the audio after initializing the audio unit, or when switching to RemoteIO and then back to VoiceProcessingIO, the playback volume is low. It seems like a bug in iOS; is there any solution or workaround? Searching the net I only found this post, without any solution: https://developer.apple.com/forums/thread/671836
Posted by ObCG.
Post not yet marked as solved
1 Reply
1.4k Views
In my app, which gives voice prompts and sound warnings while driving, I test whether the audio session is connected to CarPlay, in order to use CarPlay settings for AVAudioSession, by doing:

BOOL found = NO;
for (AVAudioSessionPortDescription *portDescr in [AVAudioSession sharedInstance].currentRoute.outputs) {
    if ([portDescr.portType isEqualToString:AVAudioSessionPortCarAudio])
        found = YES;
}

This has worked for several years. But now, on iOS 16, when the app is in the background, this method fails to detect that the iPhone is connected to CarPlay most of the time. As a result, I set up the audio session in a way which won't play any sound to the user. I have seen this happening in other apps such as Google Maps, so my guess is that something is not working with AVAudioSession in the background on iOS 16. Has anyone experienced this? Any workaround? I tried notifications to detect a route change, but sometimes I wouldn't get the notification if the app was in the background when connecting to CarPlay, so that workaround didn't work either. Any ideas? Thank you.
Post not yet marked as solved
0 Replies
411 Views
I am wondering if it's possible to obtain audio focus when the app is in the background without using the duckOthers option. I tested the Amazon Echo Buds with the Alexa app and found that it can obtain audio focus; I am curious how this is accomplished. I have a BLE device that can connect to my app. After connecting the device and my app, I put my app in the background and play a song in the Spotify app. Then, when I press a button on my BLE device, it sends a BLE command to my app to play music. However, my app cannot obtain audio focus, so the music cannot be played. The only way to make it work is to configure duckOthers. Compared with the Echo Buds, if we do the same steps, they can get audio focus. Is it because they have MFi?

do {
    let options: AVAudioSession.CategoryOptions = [.allowBluetoothA2DP, .defaultToSpeaker, .duckOthers]
    try audioSession.setCategory(.playAndRecord, mode: .spokenAudio, options: options)
    DDLogDebug("\(LOG_TAG) \(#function) setting category: \(audioSession.category.rawValue), " +
               "options: \(audioSession.categoryOptions.rawValue)")
} catch {
    DDLogWarn("\(LOG_TAG) \(#function) Failed to configure audio session: \(error.localizedDescription)")
}
Posted by cindyQin1.
Post not yet marked as solved
0 Replies
441 Views
Hi everyone! We develop a fitness app, and we've met an issue with the sound level in our videos. The flow is: the user is listening to music from a streaming platform. Then they open our app and decide to start a workout. While doing the exercises, they have the option to watch a video of how to do them. And that's where something weird happens: when the user taps a video and it starts playing, the music from the streaming app is reduced a little, not as loud as it was before. When the user reduces the volume with the side button or with the volume slider in our video player, the music volume is also reduced. And sometimes the music volume is reduced and doesn't go back up until the user kills our app and restarts the workout. So there's some conflict between our app and the streaming app. Maybe someone knows how to fix this issue so we do not change the streaming app's volume while users are watching videos? Thanks in advance; we'd be glad to receive your suggestions.
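The behaviour described (other audio dipping when a video starts, and sometimes staying quiet) matches system-level ducking, which is driven by the audio session configuration rather than the player. A minimal sketch of a session setup that mixes with other audio instead of ducking it; whether it fits depends on the player in use:

import AVFoundation

func configureNonDuckingPlayback() throws {
    let session = AVAudioSession.sharedInstance()
    // .mixWithOthers plays alongside other apps' audio instead of ducking it.
    try session.setCategory(.playback, options: [.mixWithOthers])
    try session.setActive(true)
}

Also worth noting: a ducked app's volume is restored when the ducking session is deactivated with the .notifyOthersOnDeactivation option, which may explain the cases where the music stays quiet until the app is killed.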
Post not yet marked as solved
0 Replies
1.1k Views
I have a timer in one of my apps, and I now want to add audio that plays at the end of the timer. It's a workout app, and the sound should remind the user that it is time for the next exercise. The audio should duck music playback (Apple Music / Spotify) and also work in the background. Background audio is enabled for the app, but I am not able to achieve everything at the same time. I set the audio session to category playback with the duckOthers option:

do {
    try AVAudioSession.sharedInstance().setCategory(
        .playback,
        options: .duckOthers
    )
} catch {
    print(error)
}

For playback I just use AVAudioPlayer. When the user starts the timer, I schedule a timer in the future and play the sound. While this works perfectly in the foreground, the sound is not played back when going to the background, as timers are not fired in the background, but rather when the user puts the app back in the foreground. I have also tried using AVAudioEngine and AVAudioPlayerNode, as the latter can start playback delayed. The case from above works now, but the audio ducking begins immediately when initialising the AVAudioEngine, which is also not what I want. Is there any other approach that I am not aware of?
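For reference, a sketch of the delayed-start variant mentioned above: AVAudioPlayerNode can schedule a file at a future point on its own timeline, so no foreground timer is needed. The trade-off is exactly the one described: ducking follows session activation (engine start), not the scheduled start time. The file and delay value are assumed:

import AVFoundation

func scheduleCue(engine: AVAudioEngine,
                 player: AVAudioPlayerNode,
                 file: AVAudioFile,
                 afterSeconds delay: Double) throws {
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)
    try engine.start() // the session goes active here, so ducking starts here
    player.play()      // the player's sample timeline starts at 0

    let sampleRate = file.processingFormat.sampleRate
    let startTime = AVAudioTime(sampleTime: AVAudioFramePosition(delay * sampleRate),
                                atRate: sampleRate)
    player.scheduleFile(file, at: startTime) // plays 'delay' seconds after play()
}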
Posted by denrase.
Post not yet marked as solved
1 Reply
991 Views
I am analysing sounds by tapping the mic on the Mac. All is working well, but it disrupts other (what I assume are) low-priority sounds, e.g. dragging an item off the Dock, sending a message in Messages, speaking something in Shortcuts or Terminal. Other sounds, like Music.app playing or Siri speaking, are not disrupted. The disruption sounds like the last part of the sound being repeated two extra times; very noticeable. This is the code:

import Cocoa
import AVFAudio

class AudioHelper: NSObject {
    let audioEngine = AVAudioEngine()

    func start() async throws {
        audioEngine.inputNode.installTap(onBus: 0, bufferSize: 8192, format: nil) { buffer, time in
        }
        try audioEngine.start()
    }
}

I have tried increasing the buffer size, changing the QoS to utility (in the hope the sound analysis would become less important than the disrupted sounds), and running on a non-main thread, but no luck. macOS 13.4.1. Any assistance would be appreciated.
Post not yet marked as solved
3 Replies
1.7k Views
I have a music app that can play in the background, using AVQueuePlayer. I'm in the process of adding support for CloudKit sync of the Core Data store, switching from NSPersistentContainer to NSPersistentCloudKitContainer. The initial sync can be fairly large (10,000+ records), depending on how much the user has used the app. The issue I'm seeing is this:

✅ When the app is in the foreground, CloudKit sync uses a lot of CPU, nearly 100% for a long time (this is expected during the initial sync).
✅ If I AM NOT playing music, when I put the app in the background, CloudKit sync eventually stops syncing until I bring the app to the foreground again (this is also expected).
❌ If I AM playing music, when I put the app in the background, CloudKit never stops syncing, which leads the system to terminate the app after a certain amount of time due to high CPU usage.

Is there any way to pause the CloudKit sync when the app is in the background, or is there any way to mitigate this?
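There is no public API to pause NSPersistentCloudKitContainer's sync, but its activity can be observed, which at least makes it possible to monitor the long-running import and react (defer other work, surface state in the UI). A sketch using the event notification; 'container' is assumed to be the app's NSPersistentCloudKitContainer:

import CoreData

func observeCloudKitSync(container: NSPersistentCloudKitContainer) {
    NotificationCenter.default.addObserver(
        forName: NSPersistentCloudKitContainer.eventChangedNotification,
        object: container,
        queue: .main
    ) { note in
        guard let event = note.userInfo?[NSPersistentCloudKitContainer.eventNotificationUserInfoKey]
                as? NSPersistentCloudKitContainer.Event else { return }
        // endDate == nil means the setup/import/export event is still running.
        print("CloudKit sync event:", event.type, "finished:", event.endDate != nil)
    }
}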
Posted by jbuckner.
Post not yet marked as solved
1 Reply
1.7k Views
Hi, I have multiple audio files and I want to decide which channel goes to which output. For example, how do I route four 2-channel audio files to an 8-channel output? Also, if I have an AVAudioPlayerNode playing a 2-channel track through headphones, can I flip the channels on the output for playback, i.e. swap left and right? I have read the following thread, which seeks to do something similar, but it is from 2012 and I do not quite understand how it would work today. Many thanks; I am a bit stumped.
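On the left/right flip specifically, a hedged sketch of one modern option, assuming an AVAudioEngine-based setup: AUAudioUnit exposes a channelMap property on output units that remaps engine channels to hardware channels (availability depends on OS version, so treat this as something to verify rather than a guaranteed fit):

import AVFoundation

let engine = AVAudioEngine()
// ... attach and connect the AVAudioPlayerNode(s) here ...

// [1, 0] sends engine channel 0 to device channel 1 and vice versa,
// swapping left and right on a stereo output.
engine.outputNode.auAudioUnit.channelMap = [1, 0]

For the four-files-to-8-channels case, the same channel-mapping idea applies at the output, but the upstream graph also has to produce an 8-channel stream; that part is not shown here.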
Posted by jaolan.
Post not yet marked as solved
0 Replies
745 Views
When configuring an AVAudioSession as playAndRecord, I have to select the CarPlay input as preferredInput to make sure that the output is also routed to the car; if I set the preferredInput to the built-in mic, the output is routed to the speakers instead. However, when I select the CarPlay input as preferredInput, AVAudioSession configures the output as mono:

(lldb) po session.currentRoute.inputs.first!
<AVAudioSessionPortDescription: 0x282fcec30, type = CarAudio; name = CarPlay; UID = 48:F0:7B:C6:21:A8-Audio-AudioMain-92004763965583; selectedDataSource = (null)>

(lldb) po session.currentRoute.inputs.first!.channels
▿ Optional<Array<AVAudioSessionChannelDescription>>
  ▿ some : 1 element
    - 0 : <AVAudioSessionChannelDescription: 0x282fccc70, name = CarPlay; label = 0 (0x0); number = 1; port UID = 48:F0:7B:C6:21:A8-Audio-AudioMain-92004763965583>

(lldb) po session.currentRoute.outputs
▿ 1 element
  - 0 : <AVAudioSessionPortDescription: 0x282fce9d0, type = CarAudio; name = CarPlay; UID = 48:F0:7B:C6:21:A8-Audio-AudioMain-92004763965583; selectedDataSource = (null)>

(lldb) po session.currentRoute.outputs.first!.channels
▿ Optional<Array<AVAudioSessionChannelDescription>>
  ▿ some : 1 element
    - 0 : <AVAudioSessionChannelDescription: 0x282fd8590, name = CarPlay; label = 0 (0x0); number = 1; port UID = 48:F0:7B:C6:21:A8-Audio-AudioMain-92004763965583>

When I configure the session only for playback, the output is stereo, as you'd expect from a car system. This is on iOS 17 beta 1; I'm afraid I can't check whether this is a new regression or has already existed before, but it quite likely has. Any advice on how I can circumvent this issue?
Post not yet marked as solved
1 Reply
2.5k Views
When the screen is unlocked, AVSpeechSynthesizer.speak works fine; when the screen is locked, it does not:

do {
    try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord, mode: .default, options: .defaultToSpeaker)
    try AVAudioSession.sharedInstance().setActive(true, options: .notifyOthersOnDeactivation)
} catch {
    print("audioSession properties weren't set because of an error.")
}
let utterance = AVSpeechUtterance(string: voiceOutdata)
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
let synth = AVSpeechSynthesizer()
synth.speak(utterance)
defer { disableAVSession() }

Error log in the locked state:

[AXTTSCommon] Failure starting audio queue alp![AXTTSCommon] _BeginSpeaking: couldn't begin playback
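Two things in the snippet above are worth ruling out, sketched below: the synthesizer is a local variable that can be deallocated before it finishes speaking, and the defer deactivates the session as soon as the scope exits, even though speak() is asynchronous. A hedged sketch that keeps a strong reference and deactivates in the delegate callback; the category is switched to plain .playback here on the assumption that recording isn't needed for this path:

import AVFoundation

final class Speaker: NSObject, AVSpeechSynthesizerDelegate {
    private let synth = AVSpeechSynthesizer() // strong reference, outlives the utterance

    func speak(_ text: String) {
        try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio)
        try? AVAudioSession.sharedInstance().setActive(true)
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        synth.delegate = self
        synth.speak(utterance)
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        // Deactivate only after speech has actually finished.
        try? AVAudioSession.sharedInstance().setActive(false,
                                                       options: .notifyOthersOnDeactivation)
    }
}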
Posted by ruwabi.
Post not yet marked as solved
0 Replies
500 Views
Hi, I am developing a proof-of-concept music player app. I use AVAudioSession; I have implemented background music and integration with the command center, and I am focusing now on volume. I am able to receive volume changes with systemVolumeDidChange. As for setting the volume, I am able to set it using MPVolumeView, but not for remote Wi-Fi audio devices (for example, HomePods). I have the following open points:

- The native Podcasts app is able to control volume when connected to HomePods. How does it do that?
- The native Podcasts app has icons for AirPods, HomePods, even car Bluetooth. Are there icon properties for audioSession.currentRoute.outputs? Or what should I use instead?
Post not yet marked as solved
4 Replies
3.1k Views
I want to record both IMU data and audio data from AirPods Pro. I have tried many times and failed. I can successfully record the IMU data and the iPhone's microphone data simultaneously. When I choose the AirPods Pro microphone via the setCategory() call, the IMU data collection stops. If I change recordingSession.setCategory(.playAndRecord, mode: .default, options: .allowBluetooth) to recordingSession.setCategory(.playAndRecord, mode: .default), everything is okay except the audio is recorded from the phone. If I add options: .allowBluetooth, the IMU updates stop. Could you give me some suggestions? Below are some parts of my code.

let My_IMU = CMHeadphoneMotionManager()
let My_writer = CSVWriter()
var write_state: Bool = false

func test() {
    recordingSession = AVAudioSession.sharedInstance()
    do {
        try recordingSession.setCategory(.playAndRecord, mode: .default, options: .allowBluetooth)
        try recordingSession.setActive(true)
        recordingSession.requestRecordPermission() { [unowned self] allowed in
            DispatchQueue.main.async {
                if allowed == false { print("failed to record!") }
            }
        }
    } catch {
        print("failed to record!")
    }

    let audioFilename = getDocumentsDirectory().appendingPathComponent("test_motion_Audio.m4a")
    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 8000,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    do {
        audioRecorder = try AVAudioRecorder(url: audioFilename, settings: settings)
        audioRecorder.delegate = self
        audioRecorder.record()
    } catch {
        print("Fail to record!")
        finishRecording()
    }

    write_state.toggle()
    let dir = FileManager.default.urls(
        for: .documentDirectory,
        in: .userDomainMask
    ).first!

    let filename = "test_motion_Audio.csv"
    let fileUrl = dir.appendingPathComponent(filename)
    My_writer.open(fileUrl)

    APP.startDeviceMotionUpdates(to: OperationQueue.current!, withHandler: { [weak self] motion, error in
        guard let motion = motion, error == nil else { return }
        self?.My_writer.write(motion)
    })
}
Posted by wang0665.