AVAudioNode


Use the AVAudioNode abstract class to represent an audio generation, processing, or I/O block.

AVAudioNode Documentation

Posts under AVAudioNode tag

20 Posts
Post not yet marked as solved
1 Reply
285 Views
My project uses an AVAudioEngine with a very simple setup: a speech recognizer running on a tap on the engine's input, with separate AVAudioPlayerNodes handling playback.

try session.setCategory(.playAndRecord, mode: .default, options: [])
try session.setActive(true, options: .notifyOthersOnDeactivation)
try session.setAllowHapticsAndSystemSoundsDuringRecording(true)

filePlayerNode ---> engine.mainMixerNode
bufferPlayerNode --> engine.mainMixerNode
engine.mainMixerNode --> engine.outputNode
// bufferPlayer.scheduleBuffer() is called on its own queue

The input works fine, since the buffers can be collected into a file that plays back correctly, and the recognizer also works fine. But when I try to play the live audio by sending the buffer to the bufferPlayer on this or another device, the buffer audio plays at a very low volume, sometimes with severe distortion. If I lower the sample rate via AVAudioConverter, the distortion gets worse.

I've tried experimenting with the AVAudioSession category options, using separate AVAudioEngines, and much, much more, yet I still haven't figured this out. It's gotten to the point where I've fixed almost all the arcane and minor issues in my audio system, yet I still can't play back my voice properly.

The ability to both play and record simultaneously is a basic feature of phones: when on speaker mode, a phone doesn't need to behave like a walkie-talkie. In my mind, it's inconceivable that the relatively new AVAudioEngine doesn't have an implementation for this, since the main issue (feedback loops) can be dealt with via a simple primitive circuit. Live video chat apps like FaceTime wouldn't be possible without this, yet to my surprise I found no answers online (what I did find were articles explaining how to write a file while playback is occurring).

Is there truly no way to do this on AVAudioEngine? Am I missing something fundamental? Any pointers would be greatly appreciated.
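A format mismatch between the tapped input buffers and the format the buffer player is connected with is a common cause of quiet or distorted live playback. The sketch below is an assumption-based illustration, not the poster's actual code: the engine, player node, and buffer sizes are stand-ins, and it converts each tapped buffer to the main mixer's format with AVAudioConverter before scheduling it.

import AVFoundation

// Hedged sketch: convert tapped input buffers to the mixer's format before
// scheduling them on a player node. Assumes the engine graph is already built
// and the player node is connected to the main mixer.
func installLoopbackTap(engine: AVAudioEngine, bufferPlayer: AVAudioPlayerNode) {
    let inputFormat = engine.inputNode.outputFormat(forBus: 0)
    let mixerFormat = engine.mainMixerNode.outputFormat(forBus: 0)
    guard let converter = AVAudioConverter(from: inputFormat, to: mixerFormat) else { return }

    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { buffer, _ in
        let ratio = mixerFormat.sampleRate / inputFormat.sampleRate
        let capacity = AVAudioFrameCount(Double(buffer.frameLength) * ratio) + 1
        guard let converted = AVAudioPCMBuffer(pcmFormat: mixerFormat, frameCapacity: capacity) else { return }

        var error: NSError?
        var consumed = false
        converter.convert(to: converted, error: &error) { _, outStatus in
            // Hand the tap buffer to the converter exactly once per callback.
            if consumed { outStatus.pointee = .noDataNow; return nil }
            consumed = true
            outStatus.pointee = .haveData
            return buffer
        }
        if error == nil {
            bufferPlayer.scheduleBuffer(converted, completionHandler: nil)
        }
    }
}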
Posted by wmk. Last updated.
Post not yet marked as solved
1 Reply
295 Views
Hello everyone, I'm relatively new to iOS development, and I'm currently working on a Flutter plugin package. I want to use the AVFAudio package to load instrument sounds from an SF2 file into different channels. Specifically, I'd like to load individual instruments from the SF2 file onto separate channels. However, I've been struggling to find a way to achieve this. Could someone guide me on how to load SF2 instrument sounds into different channels using AVFAudio? I've tried various combinations of parameters (program number, soundbank MSB, and soundbank LSB), but none seem to work. If anyone has experience with AVFAudio and SF2 files, I'd greatly appreciate your help. Perhaps there's a proven approach or a way to determine the correct values for these parameters? Should I use a soundfont editor to inspect specific values within the SF2 file? Thank you in advance for any assistance! Best regards, Melih
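One common approach, sketched below under assumptions (the program numbers are placeholders; the real values come from the soundfont, and a soundfont editor such as Polyphone can show them): attach one AVAudioUnitSampler per instrument and load each preset from the same SF2 with loadSoundBankInstrument.

import AVFoundation
import AudioToolbox // for the kAUSampler_* bank constants

// Hedged sketch: one sampler per "channel"/instrument, each loading a
// different preset from the same SF2 file.
func loadInstruments(from sf2URL: URL, programs: [UInt8], into engine: AVAudioEngine) throws -> [AVAudioUnitSampler] {
    var samplers: [AVAudioUnitSampler] = []
    for program in programs {
        let sampler = AVAudioUnitSampler()
        engine.attach(sampler)
        engine.connect(sampler, to: engine.mainMixerNode, format: nil)
        try sampler.loadSoundBankInstrument(
            at: sf2URL,
            program: program, // placeholder; take the real program number from the SF2
            bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB), // percussion presets use kAUSampler_DefaultPercussionBankMSB
            bankLSB: UInt8(kAUSampler_DefaultBankLSB)
        )
        samplers.append(sampler)
    }
    return samplers
}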
Posted. Last updated.
Post not yet marked as solved
0 Replies
190 Views
Hi everybody, I'm trying to use the multiple inputs of a USB device with AVAudioEngine. My aim is to connect different inputNode channels to two or more different audio nodes (e.g. mixers). I'm able to get a specific input channel from the engine's inputNode with

OSStatus err = AudioUnitSetProperty(avEngine.inputNode.audioUnit,
                                    kAudioOutputUnitProperty_ChannelMap,
                                    kAudioUnitScope_Output,
                                    1,
                                    outputChannelMap,
                                    propSize);

but this changes the routing for the whole input node and therefore for all the destination mixer nodes. How can I send channel 1 of the inputNode to mixerNode1 and channel 2 to another mixerNode2?
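One workaround, sketched here under assumptions (playerA and playerB are hypothetical player nodes already attached and connected to the two mixers): tap the multichannel input, copy each channel into its own mono buffer, and schedule those buffers on separate player nodes.

import AVFoundation

// Hedged sketch: split a multichannel input into per-channel mono streams by
// copying each channel out of the tap buffer. Assumes playerA feeds mixerNode1
// and playerB feeds mixerNode2.
func splitInputChannels(engine: AVAudioEngine, playerA: AVAudioPlayerNode, playerB: AVAudioPlayerNode) {
    let inputFormat = engine.inputNode.outputFormat(forBus: 0) // typically deinterleaved Float32
    let monoFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                   sampleRate: inputFormat.sampleRate,
                                   channels: 1,
                                   interleaved: false)!

    func monoCopy(of buffer: AVAudioPCMBuffer, channel: Int) -> AVAudioPCMBuffer? {
        guard channel < Int(buffer.format.channelCount),
              let src = buffer.floatChannelData,
              let mono = AVAudioPCMBuffer(pcmFormat: monoFormat, frameCapacity: buffer.frameLength),
              let dst = mono.floatChannelData else { return nil }
        mono.frameLength = buffer.frameLength
        // Copy one channel's samples into the mono buffer.
        memcpy(dst[0], src[channel], Int(buffer.frameLength) * MemoryLayout<Float>.size)
        return mono
    }

    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { buffer, _ in
        if let ch0 = monoCopy(of: buffer, channel: 0) { playerA.scheduleBuffer(ch0, completionHandler: nil) }
        if let ch1 = monoCopy(of: buffer, channel: 1) { playerB.scheduleBuffer(ch1, completionHandler: nil) }
    }
}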
Posted by Untruth. Last updated.
Post not yet marked as solved
0 Replies
211 Views
I have a PCM audio buffer (AVAudioPCMFormatInt16). When I try to play it using AVAudioPlayerNode / AVAudioEngine, an exception is thrown:

"[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868"

(related thread: https://forums.developer.apple.com/forums/thread/700497?answerId=780530022#780530022)

If I convert the buffer to AVAudioPCMFormatFloat32, playback works. My questions are:

Does AVAudioEngine / AVAudioPlayerNode require AVAudioPCMBuffer to be in the Float32 format? Is there a way I can configure it to accept another format instead for my application?
If 1 is YES, is this documented anywhere?
If 1 is YES, is this required format subject to change at any point?

Thanks! I was also looking to watch the "AVAudioEngine in Practice" session video from WWDC 2014, but I can't find it anywhere (https://forums.developer.apple.com/forums/thread/747008).
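A hedged sketch of the conversion the poster describes, using AVAudioConverter to produce a Float32 copy of an Int16 buffer before scheduling it (names and the one-shot conversion pattern are illustrative, not taken from the post):

import AVFoundation

// Hedged sketch: convert an Int16 PCM buffer to a deinterleaved Float32 buffer
// so it matches the format the player node is connected with.
func floatCopy(of int16Buffer: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
    let srcFormat = int16Buffer.format
    guard let dstFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                        sampleRate: srcFormat.sampleRate,
                                        channels: srcFormat.channelCount,
                                        interleaved: false),
          let converter = AVAudioConverter(from: srcFormat, to: dstFormat),
          let out = AVAudioPCMBuffer(pcmFormat: dstFormat, frameCapacity: int16Buffer.frameLength)
    else { return nil }

    var error: NSError?
    var consumed = false
    converter.convert(to: out, error: &error) { _, outStatus in
        // Provide the source buffer exactly once.
        if consumed { outStatus.pointee = .noDataNow; return nil }
        consumed = true
        outStatus.pointee = .haveData
        return int16Buffer
    }
    return error == nil ? out : nil
}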
Posted. Last updated.
Post not yet marked as solved
0 Replies
282 Views
Hi everyone, I was working on some code that involves recording audio with AVAudioEngine and hit an issue that just crashes the app:

EXC_BREAKPOINT
Exception 6, Code 1, Subcode 4304279688
+0x009888 AudioRecordModule.setupAudioEngine
+0x009788 AudioRecordModule.setupAudioEngine
+0x00c5bc AudioRecordModule.handleConfigurationChange

Below is the relevant code in the Recorder class.

public class AudioRecordModule: Module {
  private var audioEngine: AVAudioEngine?

  private func startRecording(options recordingOptions: RecordingOptions) {
    try AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: .mixWithOthers)
    try AVAudioSession.sharedInstance().setActive(true)

    outputFormat = AVAudioFormat(
      commonFormat: recordingOptions.bitDepth == 32 ? .pcmFormatInt32 : .pcmFormatInt16,
      sampleRate: Double(recordingOptions.sampleRate),
      channels: AVAudioChannelCount(recordingOptions.channels),
      interleaved: true
    )!

    let fileUri = URL(string: recordingOptions.fileUri)!
    let formatSettings: [String: Any] = [
      AVFormatIDKey: kAudioFormatMPEG4AAC,
      AVSampleRateKey: recordingOptions.sampleRate,
      AVNumberOfChannelsKey: recordingOptions.channels,
      AVEncoderBitRateStrategyKey: AVAudioBitRateStrategy_Constant,
      AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue,
    ]
    self.recordedFile = try AVAudioFile(
      forWriting: fileUri,
      settings: formatSettings,
      commonFormat: outputFormat.commonFormat,
      interleaved: outputFormat.isInterleaved
    )

    if !hadSetupNotification {
      setupNotifications()
    }
  }

  func handleConfigurationChange() {
    DispatchQueue.main.async {
      self.releaseAudioEngine()
      self.setupAudioEngine()
      if self.state == "recording" {
        // we could attempt to keep recording
        do {
          try self.audioEngine?.start()
        } catch {
          self.internalPauseRecording()
          self.sendInterruptEvent()
        }
      }
    }
  }

  func setupNotifications() {
    nc.addObserver(
      forName: Notification.Name.AVAudioEngineConfigurationChange,
      object: nil,
      queue: nil
    ) { [weak self] _ in
      guard let weakself = self else { return }
      if weakself.state != "inactive" {
        weakself.handleConfigurationChange()
      }
    }
  }

  private func setupAudioEngine() {
    self.audioEngine = nil
    let audioEngine = AVAudioEngine()
    self.audioEngine = audioEngine

    let inputNode = audioEngine.inputNode
    let inputFormat = inputNode.inputFormat(forBus: 0)
    let converter = AVAudioConverter(from: inputFormat, to: outputFormat)!

    inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
      do {
        let inputBlock: AVAudioConverterInputBlock = { _, outStatus in
          outStatus.pointee = AVAudioConverterInputStatus.haveData
          return buffer
        }
        let frameCapacity = AVAudioFrameCount(self.outputFormat.sampleRate) * buffer.frameLength / AVAudioFrameCount(buffer.format.sampleRate)
        let outputBuffer = AVAudioPCMBuffer(
          pcmFormat: self.outputFormat,
          frameCapacity: frameCapacity
        )!
        var error: NSError?
        converter.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock)
        if let error = error {
          throw error
        } else {
          try self.recordedFile?.write(from: outputBuffer)
        }
      } catch {
        print(error)
      }
    }
  }

  private func releaseAudioEngine() {
    if let audioEngine = self.audioEngine {
      audioEngine.inputNode.removeTap(onBus: 0)
      audioEngine.stop()
    }
    audioEngine = nil
  }
}

Besides that, the record module works normally; it is just the configuration change that it does not handle well. I understand that when the configuration changes, I need to reinitialize the audio engine to get the correct input format (since the new configuration/audio device can have a different sample rate and such). If I don't do that, the app also crashes, perhaps due to the mismatch. AVAudioRecorder is not an option for me. Thank you for your help.
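The crash trace points into setupAudioEngine, where the AVAudioConverter is force-unwrapped; after a configuration change the input node can briefly report an unusable format (for example 0 channels or 0 Hz), and AVAudioConverter(from:to:) then returns nil. A hedged sketch of a defensive check, written as a standalone helper rather than the post's exact method:

import AVFoundation

// Hedged sketch: validate the post-change input format before building the
// converter instead of force-unwrapping it.
func makeInputConverter(engine: AVAudioEngine, outputFormat: AVAudioFormat) -> AVAudioConverter? {
    let inputFormat = engine.inputNode.inputFormat(forBus: 0)
    guard inputFormat.channelCount > 0, inputFormat.sampleRate > 0 else {
        // Not ready yet; try again on the next AVAudioEngineConfigurationChange notification.
        return nil
    }
    return AVAudioConverter(from: inputFormat, to: outputFormat)
}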
Posted by hoang_ff. Last updated.
Post not yet marked as solved
5 Replies
1.3k Views
Hello, I'm facing an issue with Xcode 15 and iOS 17: it seems impossible to get AVAudioEngine's audio input node to work on simulator. inputNode has a 0 ch, 0 kHz input format, and connecting the input node to any node or installing a tap on it fails systematically.

What we tested:

Everything works fine on iOS simulators <= 16.4, even with Xcode 15.
Nothing works on the iOS 17.0 simulator with Xcode 15.
Everything works fine on an iOS 17.0 device with Xcode 15.

More details on this here: https://github.com/Fesongs/InputNodeFormat

Any idea on this? Something I'm missing? Thanks for your help 🙏 Tom

PS: I filed a bug on Feedback Assistant, but it usually takes ages to get any answer so I'm also trying here 😉
Posted. Last updated.
Post not yet marked as solved
0 Replies
627 Views
Hi there, I'm having some trouble with AVAudioMixerNode only working when there is a single input, and outputting silence or very quiet buzzing when more than one input node is connected. My setup has voice processing enabled, input going to a sink, and N source nodes going to the main mixer node, which goes to the output node. In all cases I am connecting nodes in the graph with the same declared format: 48kHz 1 channel Float32 PCM.

This is working great for 1 source node, but as soon as I add a second it breaks. I can reproduce this behaviour in the SignalGenerator sample when the same format is used everywhere. Again, it'll work fine with 1 source node even in this configuration, but add another and there's silence.

Am I doing something wrong with formats here? Is this expected? As I understood it, with voice processing on and use of a mixer node I should be able to use my own format essentially everywhere in my graph? My SignalGenerator modified repro example follows:

import Foundation
import AVFoundation

// True replicates my real app's behaviour, which is broken.
// You can remove one source node connection
// to make it work even when this is true.
let showBrokenState: Bool = true

// SignalGenerator constants.
let frequency: Float = 440
let amplitude: Float = 0.5
let duration: Float = 5.0
let twoPi = 2 * Float.pi

let sine = { (phase: Float) -> Float in
    return sin(phase)
}
let whiteNoise = { (phase: Float) -> Float in
    return ((Float(arc4random_uniform(UINT32_MAX)) / Float(UINT32_MAX)) * 2 - 1)
}

// My "application" format.
let format: AVAudioFormat = .init(commonFormat: .pcmFormatFloat32,
                                  sampleRate: 48000,
                                  channels: 1,
                                  interleaved: true)!

// Engine setup.
let engine = AVAudioEngine()
let mainMixer = engine.mainMixerNode
let output = engine.outputNode
try! output.setVoiceProcessingEnabled(true)

let outputFormat = engine.outputNode.inputFormat(forBus: 0)
let sampleRate = Float(format.sampleRate)
let inputFormat = format

var currentPhase: Float = 0
let phaseIncrement = (twoPi / sampleRate) * frequency

let srcNodeOne = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        let value = sine(currentPhase) * amplitude
        currentPhase += phaseIncrement
        if currentPhase >= twoPi {
            currentPhase -= twoPi
        }
        if currentPhase < 0.0 {
            currentPhase += twoPi
        }
        for buffer in ablPointer {
            let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
            buf[frame] = value
        }
    }
    return noErr
}

let srcNodeTwo = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        let value = whiteNoise(currentPhase) * amplitude
        currentPhase += phaseIncrement
        if currentPhase >= twoPi {
            currentPhase -= twoPi
        }
        if currentPhase < 0.0 {
            currentPhase += twoPi
        }
        for buffer in ablPointer {
            let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
            buf[frame] = value
        }
    }
    return noErr
}

engine.attach(srcNodeOne)
engine.attach(srcNodeTwo)
engine.connect(srcNodeOne, to: mainMixer, format: inputFormat)
engine.connect(srcNodeTwo, to: mainMixer, format: inputFormat)
engine.connect(mainMixer, to: output, format: showBrokenState ? inputFormat : outputFormat)

// Put the input node to a sink just to match the formats and make VP happy.
let sink: AVAudioSinkNode = .init { timestamp, numFrames, data in .zero }
engine.attach(sink)
engine.connect(engine.inputNode, to: sink, format: showBrokenState ? inputFormat : outputFormat)

mainMixer.outputVolume = 0.5

try! engine.start()
CFRunLoopRunInMode(.defaultMode, CFTimeInterval(duration), false)
engine.stop()
Posted by RichLogan. Last updated.
Post not yet marked as solved
0 Replies
425 Views
As the title suggests, I am using AVAudioEngine for SpeechRecognition input and AVAudioPlayer for sound output. Apple says in this talk (https://developer.apple.com/videos/play/wwdc2019/510) that the setVoiceProcessingEnabled function very usefully cancels the speaker output that reaches the mic. I set voice processing on the input and output nodes. It seems to work; however, the volume is low, even when the system volume is turned up. Any solution to this would be much appreciated.
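Low playback volume with a playAndRecord session is often just the audio being routed to the receiver (earpiece) instead of the loud speaker once voice processing is involved. A hedged sketch of a session configuration that routes to the speaker; the category/mode choice is an assumption about the poster's app, not a confirmed fix:

import AVFoundation

// Hedged sketch: route playAndRecord audio to the speaker rather than the
// receiver, a common cause of "quiet" output when voice processing is enabled.
func configureSessionForVoiceProcessing() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat,
                            options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true)
}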
Posted by sleep9. Last updated.
Post not yet marked as solved
1 Reply
700 Views
AVSpeechSynthesizer is not working. It was working perfectly before. Below is my Objective-C code.

- (void)playVoiceMemoforMessageEVO:(NSString *)msg {
    [[AVAudioSession sharedInstance] overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:nil];
    AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
    AVSpeechUtterance *speechutt = [AVSpeechUtterance speechUtteranceWithString:msg];
    speechutt.volume = 90.0f;
    speechutt.rate = 0.50f;
    speechutt.pitchMultiplier = 0.80f;
    [speechutt setRate:0.3f];
    speechutt.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-us"];
    [synthesizer speakUtterance:speechutt];
}

Please help me solve this issue.
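One thing that stands out in the snippet is that the synthesizer is a local variable, so it can be released before it finishes (or even starts) speaking; keeping a strong reference for the duration of playback is the usual fix. A hedged Swift sketch of that pattern (the post's code is Objective-C, but the same idea applies there):

import AVFoundation

final class VoiceMemoPlayer {
    // Keep a strong reference so the synthesizer isn't deallocated mid-utterance.
    private let synthesizer = AVSpeechSynthesizer()

    func play(_ message: String) {
        let utterance = AVSpeechUtterance(string: message)
        utterance.rate = 0.5
        utterance.pitchMultiplier = 0.8
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        synthesizer.speak(utterance)
    }
}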
Posted by mubarak. Last updated.
Post not yet marked as solved
0 Replies
781 Views
Our app is a game written in Unity, where most of our audio playback is handled by Unity. However, one of our game experiences utilizes microphone input for speech recognition, so in order to perform echo cancellation (while the game has audio playback), we set up an audio stream from Unity to native Swift code that performs the mixing of the input/output nodes. We found, however, that by streaming the audio buffer to our AVAudioSession:

The volume of the audio playback appears to output differently.
When capturing a screen recording of the app, the audio playback coming from AVAudioSession does not get captured at all.

We're looking to figure out what could be causing the discrepancy in playback as well as the capture behaviour during screen recordings. We set up the AVAudioSession with this configuration:

AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord, options: .mixWithOthers)

with inputNode.setVoiceProcessingEnabled(true) after connecting our IO and mixer nodes. Any suggestions or ideas on what to look out for would be appreciated!
Posted. Last updated.
Post not yet marked as solved
2 Replies
910 Views
I am using AVSpeechSynthesizer to get an audio buffer and play it; I am using AVAudioEngine and AVAudioPlayerNode to play the buffer. But I am getting this error:

[avae] AVAEInternal.h:76 required condition is false: [AVAudioPlayerNode.mm:734:ScheduleBuffer: (_outputFormat.channelCount == buffer.format.channelCount)]
2023-05-02 03:14:35.709020-0700 AudioPlayer[12525:308940] *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: _outputFormat.channelCount == buffer.format.channelCount'

Can anyone please help me play the AVAudioBuffer from the AVSpeechSynthesizer write method?
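The assertion says the player node's connection format has a different channel count than the synthesized buffers. A hedged sketch, not the poster's code: connect the player node using the format of the first buffer that AVSpeechSynthesizer.write delivers, so the two formats necessarily match.

import AVFoundation

// Hedged sketch: derive the player connection format from the synthesizer's
// own buffers instead of choosing one up front.
final class SpokenBufferPlayer {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        engine.attach(player)
        let utterance = AVSpeechUtterance(string: text)

        var connected = false
        synthesizer.write(utterance) { [weak self] buffer in
            guard let self = self,
                  let pcm = buffer as? AVAudioPCMBuffer,
                  pcm.frameLength > 0 else { return }
            if !connected {
                // The first buffer tells us the real sample rate and channel count.
                self.engine.connect(self.player, to: self.engine.mainMixerNode, format: pcm.format)
                try? self.engine.start()
                self.player.play()
                connected = true
            }
            self.player.scheduleBuffer(pcm, completionHandler: nil)
        }
    }
}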
Posted by connyhung. Last updated.
Post not yet marked as solved
0 Replies
779 Views
We are developing an app that uses external hardware to measure analogue hearing-loop performance. It uses the audio jack on the phone/iPad. With the new hardware on iPad using USB-C, we have noticed that the same input, one through a Lightning adapter and one through a USB-C adapter, produces very different input levels. The USB-C is ~23 dB lower, with the same code and settings. That's almost a 10x difference. Is there any way to control the USB-C adapter? Am I missing something? The code simply uses AVAudioInputNode and a block attached to it via self.inputNode.installTap. We do adjust the gain to 1.0:

let gain: Float = 1.0
try session.setInputGain(gain)

But that still does not help. I wish there were an Apple lab I could go to, to speak to engineers about it.
Posted by greggj. Last updated.
Post not yet marked as solved
1 Reply
2.1k Views
I've noticed that enabling voice processing on AVAudioInputNode changes the node's format, most noticeably the channel count.

let inputNode = avEngine.inputNode
print("Format #1: \(inputNode.outputFormat(forBus: 0))")
// Format #1: <AVAudioFormat 0x600002bb4be0: 1 ch, 44100 Hz, Float32>
try! inputNode.setVoiceProcessingEnabled(true)
print("Format #2: \(inputNode.outputFormat(forBus: 0))")
// Format #2: <AVAudioFormat 0x600002b18f50: 3 ch, 44100 Hz, Float32, deinterleaved>

Is this expected? How can I interpret these channels? My input device is an aggregate device where each channel comes from a different microphone. I then record each channel to a separate file. But when voice processing messes with the channel layout, I cannot rely on this anymore.
Posted by smialek. Last updated.
Post not yet marked as solved
0 Replies
1k Views
I have a timer in one of my apps. I now want to add audio that plays at the end of the timer. It's a workout app, and the sound should remind the user that it is time for the next exercise. The audio should duck music playback (Apple Music / Spotify) and also work in the background. Background audio is enabled for the app. I am not able to achieve everything at the same time.

I set the audio session to the playback category with the duckOthers option:

do {
    try AVAudioSession.sharedInstance().setCategory(
        .playback,
        options: .duckOthers
    )
} catch {
    print(error)
}

For playback I just use AVAudioPlayer. When the user starts the timer, I schedule a timer in the future and play the sound. While this works perfectly in the foreground, the sound is not played back when going to the background, as timers are not fired in the background but rather when the user puts the app back in the foreground. I have also tried using AVAudioEngine and AVAudioPlayerNode, as the latter can start playback delayed. The case from above works now, but the audio ducking begins immediately when initialising the AVAudioEngine, which is also not what I want. Is there any other approach that I am not aware of?
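Ducking is tied to the session being active, not to sound actually playing, so one pattern is to activate the ducking session only around the cue and deactivate it with .notifyOthersOnDeactivation afterwards. A hedged sketch of that half of the problem; it does not address firing the cue while the app is suspended in the background, which needs a separate mechanism:

import AVFoundation

// Hedged sketch: duck other audio only for the duration of the cue.
final class CuePlayer: NSObject, AVAudioPlayerDelegate {
    private var player: AVAudioPlayer?

    func playCue(url: URL) throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, options: .duckOthers)
        try session.setActive(true) // ducking begins here, not at app launch
        player = try AVAudioPlayer(contentsOf: url)
        player?.delegate = self
        player?.play()
    }

    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
        // Other audio un-ducks as soon as the cue is done.
        try? AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation)
    }
}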
Posted by denrase. Last updated.
Post not yet marked as solved
1 Reply
864 Views
I am analysing sounds by tapping the mic on the Mac. All is working well, but it disrupts other (what I assume are) low-priority sounds, e.g. dragging an item off the Dock, sending a message in Messages, or speaking something in Shortcuts or Terminal. Other sounds, like Music.app playing or Siri speaking, are not disrupted. The disruption sounds like the last part of the sound being repeated two extra times, which is very noticeable. This is the code:

import Cocoa
import AVFAudio

class AudioHelper: NSObject {
    let audioEngine = AVAudioEngine()

    func start() async throws {
        audioEngine.inputNode.installTap(onBus: 0, bufferSize: 8192, format: nil) { buffer, time in
        }
        try audioEngine.start()
    }
}

I have tried increasing the buffer size, changing the QoS to utility (in the hope the sound analysis would become less important than the disrupted sounds), and running on a non-main thread, but no luck. macOS 13.4.1. Any assistance would be appreciated.
Posted. Last updated.
Post not yet marked as solved
2 Replies
1.2k Views
I cannot seem to create an AVAudioFile from a URL to be played in an AVAudioEngine. Here is my complete code, following the documentation.

import UIKit
import AVKit
import AVFoundation

class ViewController: UIViewController {
    let audioEngine = AVAudioEngine()
    let audioPlayerNode = AVAudioPlayerNode()

    override func viewDidLoad() {
        super.viewDidLoad()
        streamAudioFromURL(urlString: "https://samplelib.com/lib/preview/mp3/sample-9s.mp3")
    }

    func streamAudioFromURL(urlString: String) {
        guard let url = URL(string: urlString) else {
            print("Invalid URL")
            return
        }
        let audioFile = try! AVAudioFile(forReading: url)

        let audioEngine = AVAudioEngine()
        let playerNode = AVAudioPlayerNode()

        audioEngine.attach(playerNode)
        audioEngine.connect(playerNode, to: audioEngine.outputNode, format: audioFile.processingFormat)

        playerNode.scheduleFile(audioFile, at: nil, completionCallbackType: .dataPlayedBack) { _ in
            /* Handle any work that's necessary after playback. */
        }

        do {
            try audioEngine.start()
            playerNode.play()
        } catch {
            /* Handle the error. */
        }
    }
}

I am getting the following error on let audioFile = try! AVAudioFile(forReading: url):

Thread 1: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.coreaudio.avfaudio Code=2003334207 "(null)" UserInfo={failed call=ExtAudioFileOpenURL((CFURLRef)fileURL, &_extAudioFile)}

I have tried many other .mp3 file URLs as well as .wav and .m4a and none seem to work. The documentation makes this look so easy, but I have been trying for hours to no avail. If you have any suggestions, they would be greatly appreciated!
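AVAudioFile(forReading:) opens local files; handing it an https URL is what makes ExtAudioFileOpenURL fail here. A hedged sketch of one workaround: download the audio to a temporary file first and open the local copy (for genuine streaming, AVPlayer is usually the simpler route). The helper name and error handling are illustrative.

import AVFoundation

// Hedged sketch: fetch remote audio to a temporary file, then open it with AVAudioFile.
func loadRemoteAudioFile(from remoteURL: URL, completion: @escaping (AVAudioFile?) -> Void) {
    URLSession.shared.downloadTask(with: remoteURL) { tempURL, _, error in
        guard let tempURL = tempURL, error == nil else { completion(nil); return }
        // Keep the original extension so Core Audio can recognise the container format.
        let ext = remoteURL.pathExtension.isEmpty ? "mp3" : remoteURL.pathExtension
        let localURL = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension(ext)
        do {
            try FileManager.default.moveItem(at: tempURL, to: localURL)
            completion(try AVAudioFile(forReading: localURL))
        } catch {
            completion(nil)
        }
    }.resume()
}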
Posted. Last updated.
Post not yet marked as solved
1 Reply
1.5k Views
Hi, I have multiple audio files and I want to decide which channel goes to which output. For example, how do I route four 2-channel audio files to an 8-channel output? Also, if I have an AVAudioPlayerNode playing a 2-channel track through headphones, can I flip the channels on the output for playback, i.e. flip left and right? I have read the following thread, which seeks to do something similar, but it is from 2012 and I do not quite understand how it would work in the modern day. Many thanks, I am a bit stumped.
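For the left/right flip specifically, one simple approach is to read the file into a buffer, swap the two channels' samples, and schedule the swapped buffer; a hedged sketch follows (it only covers the stereo flip, not the general N-file to 8-channel routing, and the function name is illustrative).

import AVFoundation

// Hedged sketch: swap left and right before scheduling the buffer on a player node.
func scheduleFlipped(file: AVAudioFile, on player: AVAudioPlayerNode) throws {
    let format = file.processingFormat
    guard format.channelCount == 2,
          let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: AVAudioFrameCount(file.length)) else { return }
    try file.read(into: buffer)

    if let channels = buffer.floatChannelData {
        let frames = Int(buffer.frameLength)
        for frame in 0..<frames {
            let left = channels[0][frame]
            channels[0][frame] = channels[1][frame]
            channels[1][frame] = left
        }
    }
    player.scheduleBuffer(buffer, completionHandler: nil)
}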
Posted by jaolan. Last updated.
Post not yet marked as solved
1 Reply
902 Views
I'm trying to change the audio input (microphone) between all the available devices from AVAudioSession.sharedInstance().availableInputs. I'm using AVAudioSession.routeChangeNotification to get automatic route changes when devices get connected/disconnected and change the preferred input with setPreferredInput; then I restart my audioEngine and it works fine. But when I try to change the preferred input programmatically, it doesn't change the audio capture inputNode: even though AVAudioSession.sharedInstance().currentRoute.inputs changes, audioEngine?.inputNode keeps the last connected device and keeps capturing from it despite the setPreferredInput call. WhatsApp seems to have done this without any issues. Any suggestions or leads are highly appreciated. Thanks.
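One pattern worth trying, sketched below under assumptions (whether it resolves this particular case is unverified): tear the tap and engine down before calling setPreferredInput, then build a fresh AVAudioEngine so its inputNode binds to the new route.

import AVFoundation

// Hedged sketch: rebuild the engine after switching the preferred input, since an
// already-running inputNode can stay bound to the route it started with.
final class CaptureController {
    private var engine = AVAudioEngine()

    func switchInput(to input: AVAudioSessionPortDescription) throws {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()

        try AVAudioSession.sharedInstance().setPreferredInput(input)

        // Recreate the engine so inputNode picks up the new route's format.
        engine = AVAudioEngine()
        let format = engine.inputNode.outputFormat(forBus: 0)
        engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            // process captured buffers
        }
        try engine.start()
    }
}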
Posted. Last updated.
Post marked as solved
2 Replies
1.1k Views
Hi. After the iOS 16 update, users are experiencing audio distortion when attempting to change pitch and/or tempo in the app, which uses AVAudioUnitTimePitch to achieve this. Prior to iOS 16 it worked quite admirably. I see suggestions to change the algorithm used, but am unable to achieve this. Any pointers are appreciated.
Posted by nanduvad. Last updated.