AVAudioSession


Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.

AVAudioSession Documentation

Posts under AVAudioSession tag

88 Posts
Post not yet marked as solved
0 Replies
733 Views
When configuring an AVAudioSession as playAndRecord, I have to select the CarPlay input as preferredInput to make sure that the output is also routed to the car; if I set the preferredInput to the built-in mic, the output is routed to the speakers instead. However, when I select the CarPlay input as preferredInput, AVAudioSession configures the output as mono:

(lldb) po session.currentRoute.inputs.first!
<AVAudioSessionPortDescription: 0x282fcec30, type = CarAudio; name = CarPlay; UID = 48:F0:7B:C6:21:A8-Audio-AudioMain-92004763965583; selectedDataSource = (null)>

(lldb) po session.currentRoute.inputs.first!.channels
▿ Optional<Array<AVAudioSessionChannelDescription>>
  ▿ some : 1 element
    - 0 : <AVAudioSessionChannelDescription: 0x282fccc70, name = CarPlay; label = 0 (0x0); number = 1; port UID = 48:F0:7B:C6:21:A8-Audio-AudioMain-92004763965583>

(lldb) po session.currentRoute.outputs
▿ 1 element
  - 0 : <AVAudioSessionPortDescription: 0x282fce9d0, type = CarAudio; name = CarPlay; UID = 48:F0:7B:C6:21:A8-Audio-AudioMain-92004763965583; selectedDataSource = (null)>

(lldb) po session.currentRoute.outputs.first!.channels
▿ Optional<Array<AVAudioSessionChannelDescription>>
  ▿ some : 1 element
    - 0 : <AVAudioSessionChannelDescription: 0x282fd8590, name = CarPlay; label = 0 (0x0); number = 1; port UID = 48:F0:7B:C6:21:A8-Audio-AudioMain-92004763965583>

When I configure the session only for playback, the output is stereo, as you'd expect from a car system. This is on iOS 17 beta 1, and I'm afraid I can't check whether this is a new regression or has already existed before, but it's quite likely it has existed before. Any advice on how I can circumvent this issue?
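For anyone who wants to reproduce this, a minimal sketch of the routing setup described above; the .carAudio port lookup is an assumption about how the CarPlay input is identified, not a confirmed workaround:

import AVFoundation

func routeThroughCarPlay() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord)
    // Assumption: the CarPlay input reports port type .carAudio, as in the
    // route description above.
    if let carPlayInput = session.availableInputs?.first(where: { $0.portType == .carAudio }) {
        try session.setPreferredInput(carPlayInput)
    }
    try session.setActive(true)
}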
Post not yet marked as solved
6 Replies
2k Views
Turn on Address Sanitizer in Xcode, run on a real device, and put a Test.mp3 file in the Xcode project. The app then crashes when you initialise an AVAudioPlayer with the mp3 file (with a wav file it works fine). I have filed this in Feedback Assistant as FB12425453.

var player: AVAudioPlayer?

func playSound() {
    if let url = Bundle.main.url(forResource: "Test", withExtension: "mp3") {
        // Crashes here with "deallocation of non-allocated memory";
        // with a "wav" file it works.
        self.player = try? AVAudioPlayer(contentsOf: url)
    }
}
Post not yet marked as solved
1 Reply
979 Views
I am analysing sounds by tapping the mic on the Mac. All is working well, but it disrupts other, presumably low-priority, sounds, e.g. dragging an item off the Dock, sending a message in Messages, speaking something in Shortcuts or Terminal. Other sounds, like Music.app playing or Siri speaking, are not disrupted. The disruption sounds like the last part of the sound being repeated two extra times, very noticeably. This is the code:

import Cocoa
import AVFAudio

class AudioHelper: NSObject {
    let audioEngine = AVAudioEngine()

    func start() async throws {
        audioEngine.inputNode.installTap(onBus: 0, bufferSize: 8192, format: nil) { buffer, time in }
        try audioEngine.start()
    }
}

I have tried increasing the buffer size, changing the QoS to utility (in the hope the sound analysis would become less important than the disrupted sounds), and running on a non-main thread, but no luck. macOS 13.4.1. Any assistance would be appreciated.
Post not yet marked as solved
0 Replies
1.1k Views
I have a timer in one of my apps. I now want to add audio that plays at the end of the timer. It's a workout app and the sound should remind the user that it is time for the next exercise. The audio should duck music playback (Apple Music / Spotify) and also work in background. Background audio is enabled for the app. I am not able to achieve everything at the same time. I set the audio session to category playback with options duckOthers. do { try AVAudioSession.sharedInstance().setCategory( .playback, options: .duckOthers ) } catch { print(error) } For playback I just use the AVAudioPlayer. When the user starts the timer, i schedule a timer in the future and play the sound. While this works perfectly in the foreground, the sound is not played back when going to background, as timers are not fired in the background, but rather when the user puts the app back in foreground. I have also tried using AVAudioEngine and AVAudioPlayerNode, as the latter can start playback delayed. The case from above works now, but the audio ducking begins immediately when initialising the AVAudioEngine, which is also not what i want. Is there any other approach that I am not aware of?
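This doesn't solve the background-timer limitation, but one detail worth noting: ducking is tied to session activation, so keeping the session inactive until the cue actually fires avoids the premature ducking. A minimal sketch under that assumption (the player is assumed to be prepared elsewhere):

func playCue(with player: AVAudioPlayer) {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playback, options: .duckOthers)
        try session.setActive(true) // other audio ducks from this point on
        player.play()
    } catch {
        print(error)
    }
}

// After playback finishes (e.g. in audioPlayerDidFinishPlaying), deactivate so
// the music volume is restored:
// try? AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation)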
Post not yet marked as solved
0 Replies
436 Views
Hi everyone! We develop a fitness app, and we hit an issue with the sound level in our videos. The flow: the user is listening to music from a streaming platform, then opens our app and starts a workout. While doing the exercises, they have the option to watch a video showing how to perform them. And that's where something weird happens: when the user plays the video, the music from the streaming app is reduced a little, not as loud as it was before. When the user reduces the volume with the side buttons or with the volume slider in our video player, the music volume is also reduced. And sometimes the music volume stays reduced and doesn't go back up until the user kills our app and restarts the workout. So there's some conflict between our app and the streaming app. Does anyone know a solution that fixes this issue without changing the streaming app's volume while users are watching videos? Thanks in advance, we'll be glad to receive your suggestions.
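One direction to try, a minimal sketch rather than a confirmed fix: use a mixable playback category so the system doesn't duck the streaming app at all, and deactivate the session with .notifyOthersOnDeactivation when the video ends so other apps' volume is restored:

import AVFoundation

func beginVideoPlayback() throws {
    let session = AVAudioSession.sharedInstance()
    // .mixWithOthers avoids the automatic ducking of other audio.
    try session.setCategory(.playback, options: [.mixWithOthers])
    try session.setActive(true)
}

func endVideoPlayback() throws {
    // Tells other apps (e.g. the streaming app) they can restore their volume.
    try AVAudioSession.sharedInstance().setActive(false, options: [.notifyOthersOnDeactivation])
}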
Post not yet marked as solved
1 Reply
988 Views
I'd like to allow the speech synthesizer to play on the device speaker while simultaneously mixing with a phone call. I've worked with a number of different configurations but am unable to find one that achieves the functionality I am trying to achieve, or that allows mixing with a phone call at all. There is a flag, mixToTelephonyUplink, that seems to suggest that at least some mixing with a phone call is possible using the speech synthesizer, but I'm currently unable to find almost any documentation about this flag besides the basic API docs. I've had some luck at least getting the synthesizer to always play to the speaker with the following audio session configuration, but the sound is never mixed with a phone call. Instead, it is ducked and muted while the phone call takes place. I've tried quite a few configuration combinations for the category and overrides, but nothing seems to work quite as I'd expect it to:

synthesizer.mixToTelephonyUplink = true
try? audioSession.setCategory(.playback, mode: .voicePrompt, options: [.mixWithOthers, .defaultToSpeaker])
try? audioSession.setActive(true, options: [])
try? audioSession.overrideOutputAudioPort(.speaker)

Is there some kind of documentation for this that's off the beaten path that I'm somehow missing? I'm going to continue with guess and check, but I'm starting to think this flag, and the functionality it implies, actually wasn't ever fully implemented.
Post not yet marked as solved
0 Replies
400 Views
I am wondering if it's possible to obtain audio focus when the app is in the background without using the duckOthers option. I tested the Amazon Echo Buds with the Alexa app and found that it can obtain audio focus, and I am curious how this is accomplished. I have a BLE device that can connect with my app. After connecting the device and my app, I put my app in the background and play a song in the Spotify app. Then, when I press a button on my BLE device, it sends a BLE command to my app to play music. However, my app cannot obtain audio focus, so the music cannot be played. The only way to make it work is to configure duckOthers. If we do the same steps with the Echo Buds, they can get audio focus. Is it because they are MFi?

do {
    let options: AVAudioSession.CategoryOptions = [.allowBluetoothA2DP, .defaultToSpeaker, .duckOthers]
    try audioSession.setCategory(.playAndRecord, mode: .spokenAudio, options: options)
    DDLogDebug("\(LOG_TAG) \(#function) setting category: \(audioSession.category.rawValue), " +
               "options: \(audioSession.categoryOptions.rawValue)")
} catch {
    DDLogWarn("\(LOG_TAG) \(#function) Failed to configure audio session: \(error.localizedDescription)")
}
Post not yet marked as solved
1 Reply
950 Views
I was playing a PCM file (24 kHz, 16-bit) with AudioQueue, mixed with another sound (192 kHz, 24-bit), which I'll call sound2. The AVAudioSession is set up with category AVAudioSessionCategoryPlayback and options AVAudioSessionCategoryOptionMixWithOthers | AVAudioSessionCategoryOptionDuckOthers. When the PCM file plays, sound2's volume should be lowered as configured. BUT there is an absolute mute lasting about 0.5 seconds when sound2 becomes quiet, and only after that 0.5 s mute does the ducked sound come out. This only happens with the high-quality sound2 (192 kHz, 24-bit); if sound2's quality is lower, everything is OK.
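For reference, the session configuration described above translated to Swift, a sketch only; the Objective-C constants map to these category options:

try AVAudioSession.sharedInstance().setCategory(
    .playback,
    options: [.mixWithOthers, .duckOthers] // duck other audio while this app plays
)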
Post not yet marked as solved
0 Replies
698 Views
The loop plays smoothly in Audacity, but when I run it on the device or simulator it clicks on each loop, at different intensities. I configure the session at app level:

let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(.playback, mode: .default, options: [.mixWithOthers])
    try audioSession.setActive(true)
} catch {
    print("Setting category session for AVAudioSession failed")
}

And then I made my method on my class:

func playSound(soundId: Int) {
    let sound = ModelData.shared.sounds[soundId]
    if let bundle = Bundle.main.path(forResource: sound.filename, ofType: "flac") {
        let backgroundMusic = NSURL(fileURLWithPath: bundle)
        do {
            audioPlayer = try AVAudioPlayer(contentsOf: backgroundMusic as URL)
            audioPlayer?.prepareToPlay()
            audioPlayer?.numberOfLoops = -1 // loop indefinitely
            audioPlayer?.play()
            isPlayingSounds = true
        } catch {
            print(error)
        }
    }
}

Does anyone have any clue? Thanks!

PS: If I use AVQueuePlayer and repeat the item, the click noise disappears (but that's no use, because I would need to repeat it indefinitely without wasting memory). If I use AVPlayerLooper, I get a silence between loops. All with the same sound.

PS2: The same happens with ALAC files.
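One commonly used route to gapless looping is scheduling a PCM buffer on an AVAudioPlayerNode with the .loops option. A sketch, assuming the file is short enough to hold in memory; the names here are illustrative:

import AVFoundation

let engine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()

func playLoop(url: URL) throws {
    // Read the whole file into a single PCM buffer.
    let file = try AVAudioFile(forReading: url)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: AVAudioFrameCount(file.length)) else { return }
    try file.read(into: buffer)

    engine.attach(playerNode)
    engine.connect(playerNode, to: engine.mainMixerNode, format: buffer.format)
    try engine.start()

    // .loops replays the buffer seamlessly without rescheduling.
    playerNode.scheduleBuffer(buffer, at: nil, options: .loops)
    playerNode.play()
}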
Post not yet marked as solved
0 Replies
640 Views
I have a Flutter mobile app and I'm using the record Flutter package for recording audio. I'm facing an issue recording audio while the phone is locked.

App behavior: first we start the app and connect it to a Bluetooth device. The app then waits for a trigger from the connected device; on receiving the trigger it starts recording, while the phone is locked and the app is running in the background. I'm getting this error when AVAudioSession sets the category:

AVAudioSession_iOS.mm:2367 Failed to set category, error: '!int'
Failed to set up audio session: Error Domain=NSOSStatusErrorDomain Code=560557684 "(null)"

My app is for the user's security, so it needs to record in the background. Let me know how I can achieve this functionality.
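A hedged pointer: error code 560557684 is the FourCC '!int' (AVAudioSessionErrorCodeCannotInterruptOthers), which is typically raised when a non-mixable session is activated while the app is in the background. On the native side, one direction is to declare the audio background mode in Info.plist (UIBackgroundModes: audio) and use a mixable category; a sketch, not verified against the record plugin:

import AVFoundation

func configureBackgroundRecording() throws {
    let session = AVAudioSession.sharedInstance()
    // A mixable category so activation from the background doesn't have to
    // interrupt other audio.
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.mixWithOthers, .allowBluetooth])
    try session.setActive(true)
}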
Post not yet marked as solved
0 Replies
882 Views
Our app is a game written in Unity, where most of our audio playback is handled by Unity. However, one of our game experiences uses microphone input for speech recognition, so in order for us to perform echo cancellation (while the game has audio playback), we set up an audio stream from Unity to native Swift code that performs the mixing of the input/output nodes. We found that by streaming the audio buffer through our AVAudioSession:

1. The volume of the audio playback appears to output differently.
2. When capturing a screen recording of the app, the audio playback being played via AVAudioSession does not get captured at all.

We're looking to figure out what could be causing the discrepancy in playback as well as the capture behaviour during screen recordings. We set up the AVAudioSession with this configuration:

AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord, options: .mixWithOthers)

with

inputNode.setVoiceProcessingEnabled(true)

after connecting our IO and mixer nodes. Any suggestions or ideas on what to look out for would be appreciated!
Post not yet marked as solved
1 Reply
798 Views
AVSpeechSynthesizer is not working; it was working perfectly before. Below is my Objective-C code:

- (void)playVoiceMemoforMessageEVO:(NSString *)msg {
    [[AVAudioSession sharedInstance] overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:nil];
    // Note: the synthesizer is locally scoped here, so it can be deallocated
    // before the utterance finishes speaking.
    AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
    AVSpeechUtterance *speechutt = [AVSpeechUtterance speechUtteranceWithString:msg];
    speechutt.volume = 90.0f;
    speechutt.rate = 0.50f;
    speechutt.pitchMultiplier = 0.80f;
    [speechutt setRate:0.3f];
    speechutt.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-us"];
    [synthesizer speakUtterance:speechutt];
}

Please help me solve this issue.
Post not yet marked as solved
0 Replies
567 Views
When recording audio over Bluetooth from AirPods to iPhone using AVAudioRecorder, the Bluetooth audio codec used is always AAC-ELD, independent of the storage codec selected in the AVAudioRecorder instance. As far as I know, every Bluetooth device must support SBC; hence it should be possible for the AirPods to transmit the recorded audio using the SBC codec instead of AAC-ELD. However, I could not find any resource on how to request this codec using AVAudioRecorder or AVAudioEngine. Is it possible to request SBC at all, and if yes, how?
Post not yet marked as solved
1 Reply
671 Views
AVSpeechSynthesisVoice.speechVoices() returns voices that are no longer available after upgrading from iOS 16 to iOS 17 (although this has been an issue for a long time, I think). To reproduce:

1. On iOS 16, download one or more enhanced voices under Accessibility > Spoken Content > Voices.
2. Upgrade to iOS 17.
3. Call AVSpeechSynthesisVoice.speechVoices() and note that the voices installed in step 1 are still present, yet they are no longer downloaded and therefore don't work.

And there is no property on AVSpeechSynthesisVoice to indicate whether the voice is still available. This is a problem for apps that allow users to choose among the available system voices. I receive many support emails about this issue surrounding iOS upgrades; I have to tell users to re-download the voices, which is not obvious to them. I've created a feedback item for this as well (FB12994908).
Post not yet marked as solved
0 Replies
680 Views
Hi! I am working on an audio application on iOS. This is how I retrieve the workgroup from the remote IO audio unit (ioUnit). The unit is initialized and working fine (meaning it is regularly called by the system):

os_workgroup_t os_workgroup{nullptr};
// Note: the size argument of AudioUnitGetProperty is in/out and should be
// initialized to the size of the receiving variable before the call.
uint32_t os_workgroup_index_size = sizeof(os_workgroup);
if (status = AudioUnitGetProperty(ioUnit, kAudioOutputUnitProperty_OSWorkgroup, kAudioUnitScope_Global, 0, &os_workgroup, &os_workgroup_index_size); status != noErr) {
    throw runtime_error("AudioUnitGetProperty kAudioOutputUnitProperty_OSWorkgroup - Failed with OSStatus: " + to_string(status));
}

However, the resulting os_workgroup's value is 0x40, which seems not correct. No wonder I cannot join any other realtime threads to the workgroup as well. The returned status, however, is a solid 0. Can anyone help?
Post not yet marked as solved
0 Replies
649 Views
From an app that reads audio from the built-in microphone, I'm receiving many crash logs where the AVAudioEngine fails to start again after the app was suspended. Basically, I'm calling these two methods in the app delegate's applicationDidBecomeActive and applicationDidEnterBackground methods respectively:

let audioSession = AVAudioSession.sharedInstance()

func startAudio() throws {
    self.audioEngine = AVAudioEngine()
    try self.audioSession.setCategory(.record, mode: .measurement)
    try audioSession.setActive(true)
    self.audioEngine!.inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil, block: { ... })
    self.audioEngine!.prepare()
    try self.audioEngine!.start()
}

func stopAudio() throws {
    self.audioEngine?.stop()
    self.audioEngine?.inputNode.removeTap(onBus: 0)
    self.audioEngine = nil
    try self.audioSession.setActive(false, options: [.notifyOthersOnDeactivation])
}

In the crash logs (iOS 16.6) I'm seeing that this works fine several times as the app is opened and closed, but suddenly the audioEngine.start() call fails with the error

Error Domain=com.apple.coreaudio.avfaudio Code=-10851 "(null)" UserInfo={failed call=err = AUGraphParser::InitializeActiveNodesInInputChain(ThisGraph, *GetInputNode())}

and audioEngine!.inputNode.outputFormat(forBus: 0) is something like <AVAudioFormat 0x282301c70: 2 ch, 0 Hz, Float32, deinterleaved>. Also, right before installing the tap, audioSession.availableInputs contains an entry of type MicrophoneBuiltIn, but audioSession.currentRoute lists no inputs at all. I have not been able to reproduce this situation on my own devices yet. Does anyone have an idea why this is happening?
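A hedged sketch of one thing worth ruling out: restarting only in applicationDidBecomeActive can miss audio session interruptions (calls, Siri, other recording apps). Observing AVAudioSession.interruptionNotification and rebuilding the engine when the interruption ends covers that path; startAudio()/stopAudio() below are hypothetical stand-ins for the methods in the post:

import AVFoundation

// Hypothetical hooks standing in for the post's startAudio()/stopAudio().
func startAudio() throws { /* rebuild engine, install tap, start */ }
func stopAudio() throws { /* stop engine, remove tap, deactivate */ }

let observer = NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { note in
    guard let rawType = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }
    switch type {
    case .began:
        try? stopAudio()   // tear down while the session is interrupted
    case .ended:
        try? startAudio()  // rebuild once the interruption ends
    @unknown default:
        break
    }
}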
Post not yet marked as solved
0 Replies
341 Views
When I detect other sounds using [AVAudioSession sharedInstance].otherAudioPlaying and it returns true, I inspect the currentRoute. But I've run into an issue: the route's inputs and outputs counts are both > 0. How can I judge whether the sound comes from an input device or an output device? Please help!
Post marked as solved
1 Reply
712 Views
In iOS 17 (21A5326A), audioSession.setCategory(.playAndRecord, mode: .default, options: .allowBluetooth) does not set the input to Bluetooth. In iOS 16 it does. Here are the steps to reproduce:

1. Create a project with a storyboard.
2. In Info.plist, add NSMicrophoneUsageDescription: "Your microphone will be used to record your speech when you press the 'Start Recording' button."
3. Put this in ViewController:

import UIKit
import Speech
import AVFAudio

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
    }

    override public func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        AVAudioSession.sharedInstance().requestRecordPermission { granted in }
    }

    func startAudioSession() {
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(.playAndRecord, mode: .default, options: .allowBluetooth)
            try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
            print(audioSession.currentRoute.description)
        } catch {
        }
    }

    @IBAction func btnTap(_ sender: UIButton) {
        startAudioSession()
    }
}

4. Put a button on Main.storyboard and link it to btnTap.
5. Connect a Bluetooth headset to the iPhone, start the app, and tap the button.

On iOS 16, the current route is Bluetooth; on iOS 17, the current route is the speaker.
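A hedged workaround sketch, untested on the build mentioned above: select the Bluetooth HFP input explicitly after activating the session rather than relying on .allowBluetooth alone:

let session = AVAudioSession.sharedInstance()
// Assumption: the headset shows up among availableInputs as .bluetoothHFP.
if let btInput = session.availableInputs?.first(where: { $0.portType == .bluetoothHFP }) {
    try? session.setPreferredInput(btInput)
}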
Post not yet marked as solved
1 Reply
582 Views
I've integrated MPVolumeView into my view, and it correctly responds to hardware volume changes as expected. However, once I initiate audio streaming using AVAudioEngine to capture microphone audio and an AudioUnit for decoding, the MPVolumeView ceases to reflect changes made using the hardware volume buttons. Additionally, even when I adjust the volume using the slider on MPVolumeView, it doesn't change the system volume. Has anyone else encountered this issue? What might be causing MPVolumeView to stop responding to hardware volume changes once streaming starts? For the AVAudioSession.Mode, I use the default setting, because using .voiceChat permanently prevents MPVolumeView from updating on device volume changes.

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, options: [.allowBluetooth])
    try session.setActive(true)
} catch {
    print(error.localizedDescription)
}
Post not yet marked as solved
1 Reply
540 Views
Hello, I have struggled to resolve the issue in the title. I can speak an utterance while my iPhone is on, but when my iPhone goes to background mode (screen turned off), it doesn't speak any more. I think it should be possible to play audio or speak an utterance in the background, since I can play music in the background in YouTube. Any help please?
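A hedged sketch of the usual recipe, assuming the "audio" background mode is enabled in the target's capabilities (UIBackgroundModes: audio) and that the synthesizer should route through the app's own session:

import AVFoundation

let synthesizer = AVSpeechSynthesizer() // keep a strong reference; a local one can be deallocated mid-speech

func speak(_ text: String) {
    let session = AVAudioSession.sharedInstance()
    try? session.setCategory(.playback, mode: .spokenAudio)
    try? session.setActive(true)

    synthesizer.usesApplicationAudioSession = true // use the session configured above
    synthesizer.speak(AVSpeechUtterance(string: text))
}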