AudioUnit

Create audio unit extensions and add sophisticated audio manipulation and processing capabilities to your app using AudioUnit.

AudioUnit Documentation

Posts under AudioUnit tag

39 Posts
Post not yet marked as solved
2 Replies
651 Views
I have some visualisation-heavy AUv3s, and the goal is to avoid performing graphics-intensive work when the plugin window is not open inside the host app (such as Logic Pro). On iOS this is easily accomplished with the viewWillAppear etc. overrides. But on macOS, these overrides do not seem to be called every time the user opens or closes the plugin window in the host application. I tried some alternative approaches, such as traversing the view/controller hierarchy and using the window property, to no avail. What substitute mechanism could I use to determine the visibility status of an AUv3 on macOS? Thanks in advance, Zoltan
Posted by znyari. Last updated.
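One possible substitute, sketched below and not verified in every host: track window attachment and occlusion from the view itself rather than relying on the view-controller callbacks. The class and callback names here are illustrative, not from the post.

```swift
import AppKit

// Sketch: infer plugin-window visibility from window attachment plus
// NSWindow occlusion, since viewWillAppear/viewDidAppear are unreliable here.
final class VisibilityTrackingView: NSView {
    var onVisibilityChange: ((Bool) -> Void)?
    private var occlusionObserver: NSObjectProtocol?

    override func viewDidMoveToWindow() {
        super.viewDidMoveToWindow()
        if let observer = occlusionObserver {
            NotificationCenter.default.removeObserver(observer)
            occlusionObserver = nil
        }
        guard let window = self.window else {
            // Detached from any window: the host closed the plugin UI.
            onVisibilityChange?(false)
            return
        }
        onVisibilityChange?(window.occlusionState.contains(.visible))
        // Keep tracking minimize/expose while we stay attached.
        occlusionObserver = NotificationCenter.default.addObserver(
            forName: NSWindow.didChangeOcclusionStateNotification,
            object: window, queue: .main
        ) { [weak self] note in
            guard let win = note.object as? NSWindow else { return }
            self?.onVisibilityChange?(win.occlusionState.contains(.visible))
        }
    }
}
```

The idea is that AppKit calls viewDidMoveToWindow whenever the host attaches or detaches the plugin view, and the occlusion notification covers minimize/expose while the window stays open.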
Post not yet marked as solved
0 Replies
198 Views
Hello, we are trying to implement audio calling functionality on visionOS, with no success since the visionOS update. We do not use CallKit for this flow. We configure the audio session as follows:

```objc
[sessionInstance setCategory:AVAudioSessionCategoryPlayAndRecord
                        mode:AVAudioSessionModeVoiceChat
                     options:(AVAudioSessionCategoryOptionAllowBluetooth |
                              AVAudioSessionCategoryOptionAllowBluetoothA2DP |
                              AVAudioSessionCategoryOptionMixWithOthers)
                       error:&error_];
```

We create our audio unit as follows:

```objc
AudioComponentDescription desc_;
desc_.componentType = kAudioUnitType_Output;
desc_.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc_.componentManufacturer = kAudioUnitManufacturer_Apple;
desc_.componentFlags = 0;
desc_.componentFlagsMask = 0;

AudioComponent comp_ = AudioComponentFindNext(NULL, &desc_);
IMSXThrowIfError(AudioComponentInstanceNew(comp_, &_audioUnit),
                 "couldn't create a new instance of Apple Voice Processing IO.");

UInt32 one_ = 1;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_EnableIO,
                                      kAudioUnitScope_Input, audioUnitElementIOInput,
                                      &one_, sizeof(one_)),
                 "could not enable input on Apple Voice Processing IO");
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_EnableIO,
                                      kAudioUnitScope_Output, audioUnitElementIOOutput,
                                      &one_, sizeof(one_)),
                 "could not enable output on Apple Voice Processing IO");

IMSTagLogInfo(kIMSTagAudio, @"Rate: %ld", _rate);
bool isInterleaved = _channel == 2 ? true : false;
self.ioFormat = CAStreamBasicDescription(_rate, _channel,
                                         CAStreamBasicDescription::kPCMFormatInt16,
                                         isInterleaved);
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_StreamFormat,
                                      kAudioUnitScope_Input, 0,
                                      &_ioFormat, sizeof(self.ioFormat)),
                 "couldn't set the input client format on Apple Voice Processing IO");
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_StreamFormat,
                                      kAudioUnitScope_Output, 1,
                                      &_ioFormat, sizeof(self.ioFormat)),
                 "couldn't set the output client format on Apple Voice Processing IO");

UInt32 maxFramesPerSlice_ = 4096;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                                      kAudioUnitScope_Global, 0,
                                      &maxFramesPerSlice_, sizeof(UInt32)),
                 "couldn't set max frames per slice on Apple Voice Processing IO");
UInt32 propSize_ = sizeof(UInt32);
IMSXThrowIfError(AudioUnitGetProperty(self.audioUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                                      kAudioUnitScope_Global, 0,
                                      &maxFramesPerSlice_, &propSize_),
                 "couldn't get max frames per slice on Apple Voice Processing IO");

AURenderCallbackStruct renderCallbackStruct_;
renderCallbackStruct_.inputProc = playbackCallback;
renderCallbackStruct_.inputProcRefCon = (__bridge void *)self;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_SetRenderCallback,
                                      kAudioUnitScope_Output, 0,
                                      &renderCallbackStruct_, sizeof(renderCallbackStruct_)),
                 "couldn't set render callback on Apple Voice Processing IO");

AURenderCallbackStruct inputCallbackStruct_;
inputCallbackStruct_.inputProc = recordingCallback;
inputCallbackStruct_.inputProcRefCon = (__bridge void *)self;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_SetInputCallback,
                                      kAudioUnitScope_Input, 0,
                                      &inputCallbackStruct_, sizeof(inputCallbackStruct_)),
                 "couldn't set input callback on Apple Voice Processing IO");
```

As soon as we try to start the AudioUnit, we get the following error:

```
PhaseIOImpl.mm:1514 phaseextio@0x107a54320: failed to start IO directions 0x3, num IO streams [1, 1]:
Error Domain=com.apple.coreaudio.phase Code=1346924646
"failed to pause/resume stream 6B273F5B-D6EF-41B3-8460-0E34B00D10A6"
UserInfo={NSLocalizedDescription=failed to pause/resume stream 6B273F5B-D6EF-41B3-8460-0E34B00D10A6}
```

We do not use the PHASE framework on our side, and the error is neither clear to us nor documented anywhere. We also tried an AudioUnit that only does speaker output, which works perfectly, but as soon as we try to record from an AudioUnit, the start fails as well, with the error AVAudioSessionErrorCodeCannotStartRecording. We suspect that somewhere inside PHASE an IO VoIP audio unit is running that we cannot stop when we try to create our own, which blocks the whole flow. This used to work on visionOS 1.0.1. Regards, Summit-tech
Posted by kjijijiji. Last updated.
Post marked as solved
2 Replies
412 Views
Some of our installers suddenly became broken for users running the latest version of macOS. The reason is that we install a Core Audio HAL driver, and because I wanted to avoid a system reboot, I relaunched the Core Audio daemon from a pkg post-install script:

```
sudo launchctl kickstart -kp system/com.apple.audio.coreaudiod
```

With the OS update, this command fails if the computer has SIP enabled (the default):

```
sudo launchctl kickstart -kp system/com.apple.audio.coreaudiod
Password:
Could not kickstart service "com.apple.audio.coreaudiod": 1: Operation not permitted
```

It would be super nice if either the change could be reverted, or people in my situation could learn a workaround for hot-plugging (and unplugging) such a HAL driver.
Posted. Last updated.
Post not yet marked as solved
2 Replies
714 Views
Hi, I'm having trouble saving user presets in the plugin for Audio Units. Saving user presets in the host works well, but I get an error when trying to save them in the plugin. I'm not using a parameter tree; instead I use the fullState getter and setter to save and restore a dictionary with the state. With some simplified parameters it looks something like this:

```swift
var gain: Double = 0.0
var frequency: Double = 440.0

private var currentState: [String: Any] = [:]

override var fullState: [String: Any]? {
    get {
        // Save the current state
        currentState["gain"] = gain
        currentState["frequency"] = frequency
        // Return the preset state
        return ["myPresetKey": currentState]
    }
    set {
        // Extract the preset state
        currentState = newValue?["myPresetKey"] as? [String: Any] ?? [:]
        // Set the audio unit's properties
        gain = currentState["gain"] as? Double ?? 0.0
        frequency = currentState["frequency"] as? Double ?? 440.0
    }
}
```

This works perfectly well for storing user presets when they are saved in the host. When trying to save them in the plugin, so they can be reused across hosts, I get the following error in the interface: "Missing key in preset state map". Note that I am testing mostly in AUM. I could not find any documentation on what the missing key is or how to get around this. Any ideas?
Posted. Last updated.
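One direction worth checking, as a sketch rather than a confirmed fix: the base AUAudioUnit implementation of fullState includes the standard preset metadata keys (preset name, component type/subtype/manufacturer/version), so a getter that returns a dictionary built entirely from scratch could plausibly produce a "missing key" complaint. Merging the custom state into super.fullState keeps those keys intact; myPresetKey, gain, and frequency are taken from the post above.

```swift
override var fullState: [String: Any]? {
    get {
        // Start from the base class dictionary so its required keys survive.
        var state = super.fullState ?? [:]
        state["myPresetKey"] = ["gain": gain, "frequency": frequency]
        return state
    }
    set {
        // Let the base class consume its keys first, then restore ours.
        super.fullState = newValue
        let preset = newValue?["myPresetKey"] as? [String: Any] ?? [:]
        gain = preset["gain"] as? Double ?? 0.0
        frequency = preset["frequency"] as? Double ?? 440.0
    }
}
```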
Post not yet marked as solved
0 Replies
138 Views
We develop virtual instruments for Mac/AU and are trying to get our AU plugins and our standalone player to work with Audio Workgroups. When the standalone app or Logic Pro is in the foreground and active, all is well and as expected. However, when the app or Logic Pro is not in focus, all my auxiliary threads run on E-cores, even though they are properly joined to the processing thread's workgroup. This leads to many audible dropouts because deadlines are no longer met. The processing thread itself stays on a P-core, but it has to wait for the other threads to finish. How can I opt out of this behaviour? Our users certainly have use cases where they expect the player to run smoothly even when a different app is currently in focus.
Posted. Last updated.
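For readers less familiar with the setup being described, a minimal sketch of the join/leave pattern for an auxiliary DSP thread, assuming the os/workgroup.h functions import into Swift as written here; the workgroup itself would come from AUAudioUnit.osWorkgroup or the render context, and the thread body is elided.

```swift
import Foundation
import os

// Join an auxiliary render thread to the plugin's workgroup for its lifetime.
func startAuxiliaryDSPThread(joining workgroup: OSWorkgroup) {
    let thread = Thread {
        var token = os_workgroup_join_token_s()
        guard os_workgroup_join(workgroup, &token) == 0 else { return }
        defer { os_workgroup_leave(workgroup, &token) }
        while !Thread.current.isCancelled {
            // ... render one block, then block on a semaphore/condition
            //     until the processing thread signals the next cycle ...
        }
    }
    // Scheduling class is separate from workgroup membership; both matter.
    thread.qualityOfService = .userInteractive
    thread.start()
}
```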
Post not yet marked as solved
0 Replies
192 Views
Hi everybody, I'm trying to use the multiple inputs of a USB device with AVAudioEngine. My aim is to connect different inputNode channels to two or more different audio nodes (e.g. mixers). I'm able to select a specific input channel from the engine's inputNode with

```objc
OSStatus err = AudioUnitSetProperty(avEngine.inputNode.audioUnit,
                                    kAudioOutputUnitProperty_ChannelMap,
                                    kAudioUnitScope_Output, 1,
                                    outputChannelMap, propSize);
```

but this changes the routing for the whole input node and hence for all destination mixer nodes. How do I send channel 1 of the inputNode to mixerNode1 and channel 2 to another mixerNode2?
Posted by Untruth. Last updated.
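One workaround, sketched under the assumption that a tap buffer of extra latency is acceptable: split the channels manually instead of fighting the channel map, by tapping the inputNode and feeding each channel to its own player/mixer chain. All node and function names here are illustrative.

```swift
import AVFoundation

// Route input channel 0 to mixer1 and channel 1 to mixer2 by
// deinterleaving inside a tap on the input node.
func splitInput(of engine: AVAudioEngine) throws {
    let playerL = AVAudioPlayerNode(), playerR = AVAudioPlayerNode()
    let mixer1 = AVAudioMixerNode(), mixer2 = AVAudioMixerNode()

    let inputFormat = engine.inputNode.inputFormat(forBus: 0)
    let monoFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                   sampleRate: inputFormat.sampleRate,
                                   channels: 1, interleaved: false)!

    for (player, mixer) in [(playerL, mixer1), (playerR, mixer2)] {
        engine.attach(player)
        engine.attach(mixer)
        engine.connect(player, to: mixer, format: monoFormat)
        engine.connect(mixer, to: engine.mainMixerNode, format: monoFormat)
    }

    engine.inputNode.installTap(onBus: 0, bufferSize: 512,
                                format: inputFormat) { buffer, _ in
        guard let src = buffer.floatChannelData,
              buffer.format.channelCount >= 2 else { return }
        // Copy each input channel into its own mono buffer and schedule it.
        for (channel, player) in [(0, playerL), (1, playerR)] {
            let mono = AVAudioPCMBuffer(pcmFormat: monoFormat,
                                        frameCapacity: buffer.frameLength)!
            mono.frameLength = buffer.frameLength
            mono.floatChannelData![0].update(from: src[channel],
                                             count: Int(buffer.frameLength))
            player.scheduleBuffer(mono, completionHandler: nil)
        }
    }

    try engine.start()
    playerL.play()
    playerR.play()
}
```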
Post not yet marked as solved
0 Replies
229 Views
Is there a way for an FXPlug to access the source audio? Or do we need to make an AU plugin, apply it to an audio source (either a video or an audio track), and feed the info to an FXPlug via shared memory? Is there an AU plugin that lets external processes "listen" to the audio?
Posted by belisoful. Last updated.
Post not yet marked as solved
1 Reply
248 Views
Is this an uncaught C++ exception that could have originated from my code, or something else? (This report is from a tester.) (Also, why can't the crash reporter tell you anything about what exception wasn't caught?) (Per the instructions here, you'll need to rename the attached .txt to .ips to view the crash report.) Thanks! AudulusAU-2024-02-14-020421.txt
Posted by Audulus. Last updated.
Post not yet marked as solved
0 Replies
298 Views
Hello, I am working on an AUv3 extension project using SwiftUI in Xcode and have encountered a peculiar issue when implementing a simple alert on Mac Catalyst. The code is straightforward; it's merely an alert triggered by a button within a SwiftUI view. Here's the relevant portion of the code:

```swift
import SwiftUI

struct SwiftAUv3ExtensionMainView: View {
    var parameterTree: ObservableAUParameterGroup
    @State var showingAlert = false

    var body: some View {
        VStack {
            ParameterSlider(param: parameterTree.global.gain)
            Button(action: { showingAlert = true }, label: {
                Text("Button")
            })
        }
        .alert("Alert", isPresented: $showingAlert, actions: {}, message: {
            Text("Message")
        })
    }
}
```

The problem arises when this alert is displayed and subsequently closed. Upon closing the alert, the cursor turns into a spinning rainbow and the app freezes for several seconds. Additionally, Xcode's console displays the warning: -[NSWindow makeKeyWindow] called on _NSAlertPanel which returned NO from -[NSWindow canBecomeKeyWindow]. I am looking for insights or solutions to address this issue. Has anyone else experienced similar problems with SwiftUI alerts in AUv3 extension projects, especially when using Mac Catalyst? Any advice or suggestions would be greatly appreciated. Thank you.
Posted by ststnd. Last updated.
Post not yet marked as solved
0 Replies
257 Views
This can be reproduced easily with Xcode's generated AUv3-extension projects. For MIDI Processor type AUv3 extensions, the contextName property is only set once, during initialization, when the unit is added as a MIDI FX in Logic Pro, but not after the track's name is changed manually. For Music Effect type AUv3 extensions, contextName is set initially when the unit is added as an Audio FX in Logic Pro and is also updated as expected after the track's name is changed manually. Am I missing something, or is this a Logic Pro bug? Thanks, Tobias
Posted by suelliman. Last updated.
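A small diagnostic that may help narrow down whether Logic ever pushes an update: contextName is an Objective-C property on AUAudioUnit, so (assuming the host sets it through the property, which the Audio FX behaviour suggests) it can be key-value observed from inside the extension. A sketch:

```swift
import Foundation
import AudioToolbox

// Log every contextName update the host delivers to this audio unit.
final class ContextNameLogger {
    private var observation: NSKeyValueObservation?

    init(audioUnit: AUAudioUnit) {
        observation = audioUnit.observe(\.contextName,
                                        options: [.initial, .new]) { _, change in
            print("contextName is now:", change.newValue as Any)
        }
    }
}
```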
Post not yet marked as solved
1 Reply
465 Views
I tried the same code on iOS 17 and iOS 16 with Address Sanitizer enabled; on iOS 17 it crashes. Why? Can anyone help me?

```objc
AudioComponent comp = {0};
AudioComponentDescription compDesc = {0};
compDesc.componentType = kAudioUnitType_Output;
compDesc.componentSubType = kAudioUnitSubType_RemoteIO;
compDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
compDesc.componentFlags = 0;
compDesc.componentFlagsMask = 0;
comp = AudioComponentFindNext(NULL, &compDesc);
if (comp == NULL) {
    assert(false);
}

AudioUnit tempAudioUnit;
osResult = AudioComponentInstanceNew(comp, &tempAudioUnit);
if (osResult != noErr) {
    assert(false);
}
```
Posted by ymzhang. Last updated.
Post not yet marked as solved
1 Reply
1k Views
Hi, this topic is about workgroups. I create child processes, and I'd like to communicate an os_workgroup_t to my child processes so they can join the workgroup as well. As far as I understand, the os_workgroup_t value is local to the process. I've found that one can use os_workgroup_copy_port() and os_workgroup_create_with_port(), but I'm not familiar at all with ports, and I wonder what the minimal effort to achieve this would be. Thank you very much! Alex
Posted by abique. Last updated.
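A sketch of the two endpoints, assuming the os/workgroup.h calls import into Swift as written here. The actual transfer of the port between processes (XPC, bootstrap registration, or inherited port rights) is elided, and that plumbing is most of the work.

```swift
import os

// Parent process: copy a send right for the workgroup.
func makePortForChild(_ workgroup: OSWorkgroup) -> mach_port_t {
    var port: mach_port_t = 0 // MACH_PORT_NULL
    let rc = os_workgroup_copy_port(workgroup, &port)
    precondition(rc == 0, "os_workgroup_copy_port failed: \(rc)")
    return port // transfer to the child (e.g. via XPC); the caller owns this right
}

// Child process: rebuild a workgroup from the received right and join it.
func joinWorkgroup(fromReceived port: mach_port_t) {
    guard let workgroup = os_workgroup_create_with_port("child-dsp", nil, port) else {
        fatalError("os_workgroup_create_with_port returned nil")
    }
    var token = os_workgroup_join_token_s()
    if os_workgroup_join(workgroup, &token) == 0 {
        // ... real-time work on this thread ...
        os_workgroup_leave(workgroup, &token)
    }
}
```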
Post not yet marked as solved
2 Replies
463 Views
Hi, I have an app built on AudioUnit RemoteIO with render callbacks. The app has been performing fine, except on iOS 17 with devices like the iPhone 14 or iPhone 15. On the iPhone 14, the same app (a metronome app) was performing fine on iOS 16; when the customer updated to iOS 17, the audio suddenly became glitchy, with ghost sounds and other artifacts. This does not happen on an iPhone 11 Pro with iOS 17 (works fine!). However, I have been able to reproduce it on an iPhone 15 Pro with iOS 17: it works fine at lower BPMs, but once the BPM goes over a certain threshold the audio starts glitching. The audio buffers are precomputed, so the render callback is relatively straightforward. Has anyone else seen this kind of issue on the iPhone 14 or iPhone 15 running iOS 17? I'm following up with Apple on this, but thought I would see if others are facing similar issues with their apps. Thanks, Sridhar
Posted. Last updated.
Post not yet marked as solved
0 Replies
426 Views
I am trying to migrate an Audio Unit host based on the AUv2 C API to the newer AUv3 API. While the migration itself was relatively straightforward (in terms of getting it to compile), the actual rendering fails at run time with error -10876, aka kAudioUnitErr_NoConnection. The app does not use AUGraph or AVAudioEngine; perhaps that is an issue? Since the AUv3 and AUv2 APIs are bridged in both directions, and the rendering works fine with the v2 API, I would expect there to be some way to make it work via the v3 API too. Perhaps someone has an idea why (or under which circumstances) the render block throws this error? For context, the app is Mixxx, an open-source DJing application, and here is the full diff of my AUv2 -> AUv3 migration: https://github.com/fwcd/mixxx/pull/5/files
Posted by fwcd. Last updated.
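Without knowing Mixxx's exact call site, one pattern worth checking, sketched here with illustrative names: in the v3 API an effect's input counts as connected only if the caller supplies input, either by setting inputProvider on the AUAudioUnit or by passing a pull-input block to renderBlock; rendering with a nil input block is one way to end up at kAudioUnitErr_NoConnection.

```swift
import AudioToolbox

// Drive an AUv3 effect directly, supplying input via a pull block.
// allocateRenderResources() must already have succeeded on `unit`.
func renderOneSlice(unit: AUAudioUnit,
                    frames: AUAudioFrameCount,
                    timestamp: UnsafePointer<AudioTimeStamp>,
                    output: UnsafeMutablePointer<AudioBufferList>) -> AUAudioUnitStatus {
    var flags = AudioUnitRenderActionFlags()
    let render = unit.renderBlock // ideally fetched once, off the audio thread
    return render(&flags, timestamp, frames, 0, output) { _, _, frameCount, bus, inputData in
        // Fill `inputData` with `frameCount` frames for input `bus`
        // (e.g. from the host's upstream buffer); noErr marks the input live.
        return noErr
    }
}
```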
Post not yet marked as solved
0 Replies
381 Views
I'm battling with Audio Workgroups on macOS. I've got them working for standalone apps, getting the workgroup from the HAL/device, and for AUv2/AUv3 plugins. I can verify that my plugin's and app's processing threads execute together with the main workgroup thread, using P-cores. So far so good! Now I'm trying to get this working over IPC with my ***** app. From the documentation, I figured that I can get the mach port from the main audio workgroup (in my Audio Unit) using the os_workgroup_copy_port call. Then I pass this port over IPC to my ***** process, where I want to create a new workgroup from this mach port (which should be slaved to the master workgroup) using the os_workgroup_create_with_port call. However, when doing this, I get an access-violation error in my external process. In my test case, I'm hosting an AUv2 in the AUXPC_arrow process (with Logic) and sending the mach port ID over to my ***** app, which is also signed with what I think are the appropriate entitlements for accessing mach ports: com.apple.security.temporary-exception.mach-lookup.global-name. Now, the question is: should this automagically allow me to use a mach port owned by the AUXPC process? Does that process ALSO have to use some specific entitlement? I of course cannot change the entitlements of Apple's bundles. Many thanks for any assistance.
Posted by saleteg. Last updated.
Post not yet marked as solved
3 Replies
951 Views
Back at the start of January this year, we filed a bug report that still hasn't been acknowledged. Before we file a code-level support ticket, we would love to know if there's anyone else out there who has experienced anything similar. We have read the documentation (repeatedly) and searched the web, and we still found no solution; this issue looks like it could be a bug in the system (or in our coding) rather than proper behaviour. The app is a host for a v3 audio unit which is itself a workstation that can host other audio units. The host and the audio unit are both working well in all areas other than this issue. Note: this is not running on Catalyst; it is a native macOS app (not iOS). The problem is that when an AUv3 is hosted out of process (on the Mac) and then goes to fetch a list of all the other available installed audio units, the list returned by the system does not contain any of the other v3 audio units on the system; it contains only v2. We see this issue when we load our AU out of process in our own bespoke host, and also when it loads into Logic Pro, which gives no choice but to load out of process. This means that, as things stand, when we release the app our users will have limited functionality in Logic Pro, and possibly by then in other hosts too. In our own host we can load our hosting AU in process, and then it can find and use all the available units, both v2 and v3. So no issue there, but sadly, when loaded into the only other host that can do anything like this (Logic Pro at the time of posting), it won't be able to use v3 units, which is quite a serious limitation. SUMMARY: A v3 audio unit is hosted out of process. The audio unit fetches the list of available audio units on the system. V3 audio units are not provided in the list; only v2 units are presented. EXPECTED: In some ways this seems to be the opposite of the behaviour we would expect. We would expect to see, and host, ALL the other units installed on the system. "Out of process" suggests the safer of the two options, so this looks like it could be related to some kind of sandboxing issue. But sadly we cannot work out a solution, hence this report. Is Quinn "The Eskimo!" still around? Can you help us out here?
Posted. Last updated.
Post not yet marked as solved
1 Reply
494 Views
While playing sound, I need to create an AudioUnit to record from the microphone at the same time. To get echo cancellation, I chose the kAudioUnitSubType_VoiceProcessingIO subtype when initializing the AudioUnit. This works well on iOS 16 and below, but on iOS 17 the playback volume decreases while playing audio and recording. Thank you for your help; I hope to see your suggestions.
Posted. Last updated.
Post not yet marked as solved
3 Replies
1.1k Views
Hi community, I'm developing an application for macOS and I need to capture the mic audio stream. Currently, using Core Audio in Swift, I'm able to capture the audio stream using IO procs, and I have applied AUVoiceProcessing to prevent echo from the speaker device. I was able to connect the audio unit and perform the echo cancellation. The problem I'm seeing is that when I use AUVoiceProcessing, the gain of the two devices gets reduced, which affects the volume of both devices (microphone and speaker). I have tried to disable the AGC using the property kAUVoiceIOProperty_VoiceProcessingEnableAGC, but the results are the same. Is there any option to disable the gain reduction, or is there a better approach to get echo cancellation working?
Posted. Last updated.
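For reference, a sketch of the property call being described; element and scope conventions vary between examples in the wild, so treat this as a starting point rather than a confirmed cure. kAUVoiceIOProperty_BypassVoiceProcessing is the blunter alternative, at the cost of losing echo cancellation entirely.

```swift
import AudioToolbox

// Toggle the voice-processing unit's automatic gain control.
func setAGC(on unit: AudioUnit, enabled: Bool) -> OSStatus {
    var value: UInt32 = enabled ? 1 : 0
    return AudioUnitSetProperty(unit,
                                kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                kAudioUnitScope_Global,
                                0, // element; some samples use 1 (the input element)
                                &value,
                                UInt32(MemoryLayout<UInt32>.size))
}
```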
Post not yet marked as solved
5 Replies
1.1k Views
I'm the developer of a small utility for Mac called "MusicDeviceHost": https://apps.apple.com/us/app/musicdevicehost/id1261046263?mt=12 As the name suggests, it is a host application for audio units (music device components). See also "Using Sound Canvas VA with QMidi": https://youtu.be/F9C4BiBR A problem occurs while trying to authorize the "Sound Canvas VA" component: Roland Cloud Manager (v3.0.3) returns the following error: "Authorization Error - RM Service not connected. Error Connecting to Roland Cloud Manager Service." I guess the error is caused by some permission being denied to the sandboxed version of the application; the NOT sandboxed version of MDH works flawlessly. I am using the following entitlements: com.apple.security.app-sandbox and com.apple.security.network.client. Connecting to the service should therefore work, because com.apple.security.network.client is enabled. Roland says: "Cloud Manager isn't supported in a sandboxed environment." But as far as I can see, MainStage and other sandboxed apps work fine... So what is the right answer? Is there someone out there with the same issue? Thanks for helping :)
Posted by mixage. Last updated.
Post not yet marked as solved
1 Reply
477 Views
I'm trying to build an audio unit that can load another one. The first step, listing and getting the components, only works in the example code if and when the audio unit is loaded by its accompanying main app. But when it is loaded inside Logic Pro, for example, the listed components are limited to Apple-manufactured ones. On another forum, although not for an app extension, someone indicated that the solution was to enable Inter-App Audio, which is now deprecated. I have tried all three matching methods of AVAudioUnitComponentManager to get the component list. Please advise. Many thanks, Zoltan
Posted by znyari. Last updated.
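For anyone comparing results, these appear to be the three query styles AVAudioUnitComponentManager offers, sketched below. Inside a sandboxed or out-of-process extension, the system may filter what any of them return, which would be consistent with what is described above.

```swift
import AVFoundation

let manager = AVAudioUnitComponentManager.shared()

// 1. Match against a component description (zeroed fields act as wildcards).
var effectDesc = AudioComponentDescription()
effectDesc.componentType = kAudioUnitType_Effect
let byDescription = manager.components(matching: effectDesc)

// 2. Match with a predicate over AVAudioUnitComponent keys.
let byPredicate = manager.components(
    matching: NSPredicate(format: "typeName == %@", AVAudioUnitTypeEffect))

// 3. Match with a test block.
let byTest = manager.components(passingTest: { component, _ in
    component.manufacturerName != "Apple"
})

print(byDescription.count, byPredicate.count, byTest.count)
```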