Core Audio

Interact with the audio hardware of a device using Core Audio.

Core Audio Documentation

Posts under Core Audio tag

53 Posts
Post not yet marked as solved
3 Replies
623 Views
I've added a listener block for camera notifications. This works as expected: the listener block is invoked when the camera is activated/deactivated. However, when I call CMIOObjectRemovePropertyListenerBlock to remove the listener block, the call succeeds but camera notifications are still delivered to the listener block. Since the header file states that this function "Unregisters the given CMIOObjectPropertyListenerBlock from receiving notifications when the given properties change," I'd assume that once it is called, no more notifications would be delivered. Sample code:

#import <Foundation/Foundation.h>
#import <CoreMediaIO/CMIOHardware.h>
#import <AVFoundation/AVCaptureDevice.h>

int main(int argc, const char * argv[]) {
    AVCaptureDevice* camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    OSStatus status = -1;
    CMIOObjectID deviceID = 0;
    CMIOObjectPropertyAddress propertyStruct = {0};
    propertyStruct.mSelector = kAudioDevicePropertyDeviceIsRunningSomewhere;
    propertyStruct.mScope = kAudioObjectPropertyScopeGlobal;
    propertyStruct.mElement = kAudioObjectPropertyElementMain;

    deviceID = (UInt32)[camera performSelector:NSSelectorFromString(@"connectionID") withObject:nil];

    CMIOObjectPropertyListenerBlock listenerBlock = ^(UInt32 inNumberAddresses, const CMIOObjectPropertyAddress addresses[]) {
        NSLog(@"Callback: CMIOObjectPropertyListenerBlock invoked");
    };

    status = CMIOObjectAddPropertyListenerBlock(deviceID, &propertyStruct, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), listenerBlock);
    if(noErr != status) {
        NSLog(@"ERROR: CMIOObjectAddPropertyListenerBlock() failed with %d", status);
        return -1;
    }

    NSLog(@"Monitoring %@ (uuid: %@ / %x)", camera.localizedName, camera.uniqueID, deviceID);
    sleep(10);

    status = CMIOObjectRemovePropertyListenerBlock(deviceID, &propertyStruct, dispatch_get_main_queue(), listenerBlock);
    if(noErr != status) {
        NSLog(@"ERROR: 'AudioObjectRemovePropertyListenerBlock' failed with %d", status);
        return -1;
    }

    NSLog(@"Stopped monitoring %@ (uuid: %@ / %x)", camera.localizedName, camera.uniqueID, deviceID);
    sleep(10);
    return 0;
}

Compiling and running this code outputs:

Monitoring FaceTime HD Camera (uuid: 3F45E80A-0176-46F7-B185-BB9E2C0E436A / 21)
Callback: CMIOObjectPropertyListenerBlock invoked
Callback: CMIOObjectPropertyListenerBlock invoked
Stopped monitoring FaceTime HD Camera (uuid: 3F45E80A-0176-46F7-B185-BB9E2C0E436A / 21)
Callback: CMIOObjectPropertyListenerBlock invoked
Callback: CMIOObjectPropertyListenerBlock invoked

Note the last two log messages showing that the CMIOObjectPropertyListenerBlock is still invoked, even though CMIOObjectRemovePropertyListenerBlock has successfully been invoked. Am I just doing something wrong here? Or is the API broken?
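One detail worth flagging in the sample above (an observation, not a confirmed diagnosis): the listener is added on a global queue but removed with dispatch_get_main_queue(), and the analogous AudioObject remove call documents its queue parameter as the queue the block was scheduled on. A minimal Swift sketch of the symmetric add/remove pattern, with deviceID as a placeholder:

import CoreAudio
import CoreMediaIO
import Foundation

// Placeholder: in real code this comes from the device's connectionID.
let deviceID: CMIOObjectID = 0

var address = CMIOObjectPropertyAddress(
    mSelector: CMIOObjectPropertySelector(kAudioDevicePropertyDeviceIsRunningSomewhere),
    mScope: CMIOObjectPropertyScope(kAudioObjectPropertyScopeGlobal),
    mElement: CMIOObjectPropertyElement(kAudioObjectPropertyElementMain))

// Keep the queue and the block around so the exact same pair can be
// passed to both the add and the remove call.
let queue = DispatchQueue(label: "camera.listener")
let listener: CMIOObjectPropertyListenerBlock = { _, _ in
    print("property changed")
}

var status = CMIOObjectAddPropertyListenerBlock(deviceID, &address, queue, listener)
// ... later, unregister with the same queue and block ...
status = CMIOObjectRemovePropertyListenerBlock(deviceID, &address, queue, listener)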
Post marked as solved
6 Replies
715 Views
After updating Xcode to 15, I encountered a crash with UnsafeMutableRawPointer. To recreate the problem I wrote this simple test code:

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        test()
    }

    private func test() {
        var abl = AudioBufferList()
        let capacity = 4096
        let lp1 = UnsafeMutableAudioBufferListPointer(&abl)
        let outputBuffer1 = UnsafeMutablePointer<Int8>.allocate(capacity: capacity)
        let outputBuffer2 = UnsafeMutablePointer<Int8>.allocate(capacity: capacity)
        // It crashed here
        lp1[0].mData = UnsafeMutableRawPointer(outputBuffer1)
        lp1[0].mNumberChannels = 1
        lp1[0].mDataByteSize = UInt32(capacity)
        lp1[1].mData = UnsafeMutableRawPointer(outputBuffer2)
        lp1[1].mNumberChannels = 1
        lp1[1].mDataByteSize = UInt32(capacity)
        let lp2 = UnsafeMutableAudioBufferListPointer(&abl)
        let data = (
            UnsafeMutablePointer<Int16>.allocate(capacity: 4096),
            packet: 1
        )
        lp2[0].mData = UnsafeMutableRawPointer(data.0)
    }
}

I checked the Xcode 15 Release Notes and found that they did something with pointer default initialization (P1020R1 - Smart pointer creation with default initialization). Is this causing the problem, or am I doing it wrong? It works perfectly fine with Xcode 14.3.1 and below.

P.S.: I can't provide the full crash logs because they're company property, but I can provide this:

Exception Type:  EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note:  EXC_CORPSE_NOTIFY
Triggered by Thread: 5

Application Specific Information:
stack buffer overflow

Thread 5 name: Dispatch queue: com.apple.NSXPCConnection.user.endpoint
Thread 5 Crashed:
0  libsystem_kernel.dylib   0x20419ab78  __pthread_kill + 8
1  libsystem_pthread.dylib  0x23de0c3bc  pthread_kill + 268
2  libsystem_c.dylib        0x1d780c44c  __abort + 128
3  libsystem_c.dylib        0x1d77f7868  __stack_chk_fail + 96

Clearly there is something wrong with the memory address after initializing an UnsafeMutableRawPointer from the UnsafeMutablePointer<Int8>.
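For what it's worth, a bare AudioBufferList struct has inline storage for exactly one AudioBuffer, so assigning to lp1[1] writes past the end of the local variable; the __stack_chk_fail frame in the log is consistent with the stack protector catching exactly that (a guess from the posted code, not a confirmed diagnosis). A sketch that heap-allocates a list with room for two buffers up front:

import AudioToolbox
import Foundation

let capacity = 4096
// AudioBufferList.allocate reserves trailing storage for the requested
// number of AudioBuffers; AudioBufferList() only has room for one, so
// writing element [1] runs off the end of the local variable.
let list = AudioBufferList.allocate(maximumBuffers: 2)

for i in 0..<2 {
    let samples = UnsafeMutablePointer<Int8>.allocate(capacity: capacity)
    list[i].mData = UnsafeMutableRawPointer(samples)
    list[i].mNumberChannels = 1
    list[i].mDataByteSize = UInt32(capacity)
}

// ... use the list ...

// Clean up: free the sample buffers, then the list itself.
for i in 0..<2 { list[i].mData?.deallocate() }
free(list.unsafeMutablePointer)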
Post not yet marked as solved
0 Replies
527 Views
Hello, I used kAudioDevicePropertyDeviceIsRunningSomewhere to check whether an internal or external microphone is being used. My code works well for the internal microphone and for microphones connected with a cable. External microphones connected over Bluetooth do not report their status: the query always succeeds, but the status is always reported as inactive. The main relevant parts of my code:

static inline AudioObjectPropertyAddress makeGlobalPropertyAddress(AudioObjectPropertySelector selector) {
    AudioObjectPropertyAddress address = {
        selector,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster,
    };
    return address;
}

static BOOL getBoolProperty(AudioDeviceID deviceID, AudioObjectPropertySelector selector) {
    AudioObjectPropertyAddress const address = makeGlobalPropertyAddress(selector);
    UInt32 prop;
    UInt32 propSize = sizeof(prop);
    OSStatus const status = AudioObjectGetPropertyData(deviceID, &address, 0, NULL, &propSize, &prop);
    if (status != noErr) {
        return 0; // this line never gets executed in my tests; the call above always succeeds, but it always gives back "false"
    }
    return static_cast<BOOL>(prop == 1);
}

...

__block BOOL microphoneActive = NO;
iterateThroughAllInputDevices(^(AudioObjectID object, BOOL *stop) {
    if (getBoolProperty(object, kAudioDevicePropertyDeviceIsRunningSomewhere) != 0) {
        microphoneActive = YES;
        *stop = YES;
    }
});

What could cause this and how could it be fixed? Thank you for your help in advance!
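The iterateThroughAllInputDevices helper isn't shown in the post; for context, a Swift sketch of the enumeration such a helper typically performs (all names here are illustrative, not the poster's actual code):

import CoreAudio

// Fetch every audio device known to the HAL.
func allAudioDeviceIDs() -> [AudioDeviceID] {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDevices,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var dataSize: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(AudioObjectID(kAudioObjectSystemObject),
                                         &address, 0, nil, &dataSize) == noErr else { return [] }
    var devices = [AudioDeviceID](repeating: 0,
                                  count: Int(dataSize) / MemoryLayout<AudioDeviceID>.size)
    guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                     &address, 0, nil, &dataSize, &devices) == noErr else { return [] }
    return devices
}

// A device counts as an input device if it has streams in the input scope.
func isInputDevice(_ device: AudioDeviceID) -> Bool {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyStreams,
        mScope: kAudioDevicePropertyScopeInput,
        mElement: kAudioObjectPropertyElementMain)
    var dataSize: UInt32 = 0
    let status = AudioObjectGetPropertyDataSize(device, &address, 0, nil, &dataSize)
    return status == noErr && dataSize > 0
}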
Post not yet marked as solved
1 Reply
554 Views
I am trying to migrate an Audio Unit host based on the AUv2 C API to the newer AUv3 API. While the migration itself was relatively straightforward (in terms of getting it to compile), the actual rendering fails at run time with error -10876, a.k.a. kAudioUnitErr_NoConnection. The app does not use AUGraph or AVAudioEngine; perhaps that is an issue? Since the AUv3 and AUv2 APIs are bridged in both directions, and rendering works fine with the v2 API, I would expect there to be some way to make it work via the v3 API too. Perhaps someone has an idea why (or under which circumstances) the render block throws this error? For context, the app is Mixxx, an open-source DJing application, and here is the full diff of my AUv2 -> v3 migration: https://github.com/fwcd/mixxx/pull/5/files
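Not Mixxx's actual code, but for comparison, a minimal sketch of the AUv3 hosting sequence (component description and formats are placeholders). Two common sources of kAudioUnitErr_NoConnection, hedged as possibilities rather than the confirmed cause here: bus formats not set before allocateRenderResources, and calling the render block on an effect without supplying a pull-input block:

import AVFAudio
import AudioToolbox

var desc = AudioComponentDescription(
    componentType: kAudioUnitType_Effect,
    componentSubType: 0x64656D6F,        // placeholder four-char code
    componentManufacturer: 0x64656D6F,   // placeholder four-char code
    componentFlags: 0,
    componentFlagsMask: 0)

AUAudioUnit.instantiate(with: desc, options: []) { unit, error in
    guard let unit = unit else {
        print("instantiate failed: \(String(describing: error))")
        return
    }
    do {
        let format = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 2)!
        // Formats must be in place before render resources are allocated.
        try unit.inputBusses[0].setFormat(format)
        try unit.outputBusses[0].setFormat(format)
        unit.maximumFramesToRender = 512
        try unit.allocateRenderResources()
    } catch {
        print("setup failed: \(error)")
        return
    }
    let render = unit.renderBlock
    // For an effect, the final argument (the pull-input block) must supply
    // the upstream samples; omitting it is one way to see -10876.
    _ = render // call as: render(&flags, &timestamp, frames, 0, ablPtr, pullBlock)
}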
Post not yet marked as solved
0 Replies
412 Views
I am trying to monitor sound input on an output device with the lowest possible latency on Mac and iPhone. I would like to know if it is possible to send the input buffer to the output device without having to go through the callbacks of both processes, that is, as close as possible to redirecting them in hardware. I am using the Core Audio API, specifically Audio Queue Services, to achieve this. I also use the HAL for configuration, but I would not like to depend too much on the HAL, since I understand that it is not accessible from iOS.
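There is no public way to splice input to output entirely in hardware, but for comparison, a sketch of the shortest app-level path on iOS using AVAudioEngine rather than Audio Queues (the session calls are iOS-only; wear headphones when trying this, or the open mic feeds straight back into the speaker):

import AVFAudio

// Sketch: route the input node straight to the output node and request
// a small I/O buffer; the hardware may grant a larger one.
let engine = AVAudioEngine()

func startPassthrough() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, options: [.allowBluetooth])
    try session.setPreferredIOBufferDuration(0.002) // request ~2 ms buffers
    try session.setActive(true)

    engine.connect(engine.inputNode, to: engine.outputNode,
                   format: engine.inputNode.outputFormat(forBus: 0))
    try engine.start()
}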
Post not yet marked as solved
1 Reply
647 Views
We've been doing the following in our app for years without issues:

[[NSSound soundSystem:@"Basso"] play]

Suddenly we're seeing hundreds of crashes from macOS 14.0 users and we're not sure what's causing this. There are no memory leaks within the app and all the stack traces are around NSSound:

0   AudioToolbox       0x1f558    MEDeviceStreamClient::RemoveRunningClient(AQIONodeClient&, bool, bool) + 3096
1   AudioToolbox       0x1e8fc    AQMEDevice::RemoveRunningClient(AQIONodeClient&, bool) + 108
2   AudioToolbox       0x1e854    AQMixEngine_Base::RemoveRunningClient(AQIONodeClient&, bool) + 76
3   AudioToolbox       0xcdd78    AudioQueueObject::StopRunning(AQIONode*, bool) + 244
4   AudioToolbox       0xcbdd0    AudioQueueObject::Stop(bool, bool, int*) + 736
5   AudioToolbox       0xf1840    AudioQueueXPC_Server::Stop(unsigned int, bool) + 172
6   AudioToolbox       0x1418b4   ___ZN20AudioQueueXPC_Bridge4StopEjb_block_invoke + 72
7   libdispatch.dylib  0x3910     _dispatch_client_callout + 20
8   libdispatch.dylib  0x130f8    _dispatch_sync_invoke_and_complete_recurse + 64
9   AudioToolbox       0x141844   AudioQueueXPC_Bridge::Stop(unsigned int, bool) + 184
10  AudioToolbox       0xa09b0    AQ::API::V2Impl::AudioQueueStop(OpaqueAudioQueue*, unsigned char) + 492
11  AVFAudio           0xbe12c    AVAudioPlayerCpp::disposeQueue(bool) + 188
12  AVFAudio           0x341dc    -[AudioPlayerImpl dealloc] + 72
13  AVFAudio           0x358a0    -[AVAudioPlayer dealloc] + 36
14  AppKit             0x1b13b4   -[NSAVAudioPlayerSoundEngine dealloc] + 44
15  AppKit             0x1b132c   -[NSSound dealloc] + 164
16  libobjc.A.dylib    0xf418     AutoreleasePoolPage::releaseUntil(objc_object**) + 196
17  libobjc.A.dylib    0xbaf0     objc_autoreleasePoolPop + 260
18  CoreFoundation     0x3c57c    _CFAutoreleasePoolPop + 32
19  Foundation         0x30e88    -[NSAutoreleasePool drain] + 140
20  Foundation         0x31f94    _NSAppleEventManagerGenericHandler + 92
21  AE                 0xbd8c     _AppleEventsCheckInAppWithBlock + 13808
22  AE                 0xb6b4     _AppleEventsCheckInAppWithBlock + 12056
23  AE                 0x4cc4     aeProcessAppleEvent + 488
24  HIToolbox          0x402d4    AEProcessAppleEvent + 68
25  AppKit             0x3a29c    _DPSNextEvent + 1440
26  AppKit             0x80db94   -[NSApplication(NSEventRouting) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 716
27  AppKit             0x2d43c    -[NSApplication run] + 476
28  AppKit             0x4708     NSApplicationMain + 880
29  ???                0x180739058 (Missing)
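The trace shows the NSSound being deallocated during an autorelease-pool drain while its audio queue is still stopping. Not a confirmed fix for the 14.0 crash, but a defensive pattern is to hold a strong reference to the sound until it reports completion (a sketch; the class and names are illustrative):

import AppKit

// Keeps each NSSound alive until it reports completion, so it is not
// deallocated (and its AudioQueue torn down) while still playing.
final class SoundPlayer: NSObject, NSSoundDelegate {
    private var active: Set<NSSound> = []

    func play(named name: String) {
        guard let sound = NSSound(named: name) else { return }
        sound.delegate = self
        active.insert(sound)   // strong reference for the duration of playback
        sound.play()
    }

    func sound(_ sound: NSSound, didFinishPlaying flag: Bool) {
        active.remove(sound)   // release only after playback has ended
    }
}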
Post not yet marked as solved
1 Reply
545 Views
Dear Sirs, I'm trying to find a way to save and restore some settings of an Audio Server Plugin so that they are available again after a reboot. I came across the functions WriteToStorage and CopyFromStorage, which seem to work correctly, but after a reboot my settings seem to be gone. Am I doing something wrong, and should this storage normally survive a reboot, or is this not the intended way to have persistent settings? What would be the recommended way if I want to use these settings right from the start, before any user-mode app is started? Thanks and best regards, Johannes
Post marked as solved
2 Replies
593 Views
Hello, I am using Core Audio to output a sine wave at a constant frequency (256 Hz). The problem I have is that the sound starts very nice and pure, but gets distorted over time; it feels like there is some sort of cumulative error which gets worse as time goes by. I am using AudioDeviceCreateIOProcID to create a callback, in which I populate the buffer with samples. I only have a single buffer, because my samples are interleaved. The buffer size is always constant (12800 bytes). Samples are floats (from -1 to 1). Here is what I tried in order to identify the reason for the distortion:

1. I validated that each subsequent callback starts generating samples at the proper phase, i.e. the one at which the previous callback ended. E.g. if the last sample from the previous callback was 0.8f, then the first sample in the next callback is going to be 0.82f, as expected.
2. I was wondering if maybe the hardware plays the buffer while I am filling it, so I even used a mutex to lock the buffer as I write to it, but that did not change anything at all. This probably means that the buffer passed to the callback by the OS is already safe to write to.
3. I inspected the AudioStreamBasicDescription, the buffer size and how many bytes I write to the buffer - it all matches my expectations.

Any ideas on what might be causing this sound distortion over time?
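One classic culprit for exactly this symptom (a guess consistent with the description, not a diagnosis from the actual code): computing each sample as sin(2π·f·t) from a time value that grows without bound, so floating-point resolution degrades as t gets large. A sketch of a phase accumulator that wraps once per cycle and therefore never loses precision:

import Foundation

// Sine generator that carries its phase across callbacks and wraps it
// every cycle, so precision does not degrade as elapsed time grows.
final class SineGenerator {
    private var phase: Double = 0
    private let phaseIncrement: Double

    init(frequency: Double, sampleRate: Double) {
        phaseIncrement = 2.0 * .pi * frequency / sampleRate
    }

    // Fills an interleaved Float buffer, matching the post's format.
    func fill(_ buffer: UnsafeMutablePointer<Float>, frames: Int, channels: Int) {
        for frame in 0..<frames {
            let sample = Float(sin(phase))
            for channel in 0..<channels {
                buffer[frame * channels + channel] = sample
            }
            phase += phaseIncrement
            if phase >= 2.0 * .pi { phase -= 2.0 * .pi } // wrap once per cycle
        }
    }
}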
Post not yet marked as solved
0 Replies
688 Views
Hi! I am working on an audio application on iOS. This is how I retrieve the workgroup from the remote IO audio unit (ioUnit). The unit is initialized and is working fine (meaning that it is regularly called by the system).

os_workgroup_t os_workgroup{nullptr};
uint32_t os_workgroup_index_size;
if (status = AudioUnitGetProperty(ioUnit, kAudioOutputUnitProperty_OSWorkgroup, kAudioUnitScope_Global, 0, &os_workgroup, &os_workgroup_index_size); status != noErr) {
    throw runtime_error("AudioUnitGetProperty kAudioOutputUnitProperty_OSWorkgroup - Failed with OSStatus: " + to_string(status));
}

However, the resulting os_workgroup's value is 0x40, which does not seem correct. No wonder I cannot join any other realtime threads to the workgroup either. The returned status, however, is a solid 0. Can anyone help?
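One thing that stands out in the snippet (offered as a possible cause, not a confirmed one): os_workgroup_index_size is passed in uninitialized, but AudioUnitGetProperty's ioDataSize parameter is in/out and must start out as the size of the destination buffer. A sketch of the same call with the size initialized, keeping the post's C++:

// Sketch: ioDataSize must be initialized before the call, not left
// indeterminate as in the snippet above.
os_workgroup_t workgroup = nullptr;
UInt32 size = sizeof(workgroup); // size of the destination, set up front
OSStatus status = AudioUnitGetProperty(ioUnit,
                                       kAudioOutputUnitProperty_OSWorkgroup,
                                       kAudioUnitScope_Global,
                                       0,
                                       &workgroup,
                                       &size);
if (status != noErr || workgroup == nullptr) {
    // handle the error instead of using the workgroup
}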
Post not yet marked as solved
0 Replies
666 Views
I'd like to know whether NullAudio.c is an official SDK sample or not, and the reason its enums and UIDs are defined in NullAudio.c rather than in SDK header files. I am trying to use kObjectID_Mute_Output_Master, but it is defined with a different value in each third-party plugin:

kObjectID_Mute_Output_Master = 10 // NullAudio.c
kObjectID_Mute_Output_Master = 9  // https://github.com/ExistentialAudio/BlackHole
kObjectID_Mute_Output_Master = 6  // https://github.com/q-p/SoundPusher

I can build BlackHole and SoundPusher, and these plugins work. This enum should be defined in an SDK header and keep the same value in each SDK version. I'd like to know why third parties define different values. If you know the history of NullAudio.c, please let me know.
Post not yet marked as solved
3 Replies
1.3k Views
Hi community, I'm developing an application for macOS and I need to capture the mic audio stream. Currently, using Core Audio in Swift, I am able to capture the audio stream using IOProcs, and I have applied AUVoiceProcessing to prevent echo from the speaker device. I was able to connect the audio unit and perform the echo cancellation. The problem I'm getting is that with AUVoiceProcessing the gain of the two devices is reduced, which affects the volume of both the microphone and the speaker. I have tried to disable the AGC using the property kAUVoiceIOProperty_VoiceProcessingEnableAGC, but the results are the same. Is there any option to disable the gain reduction, or is there a better approach to get the echo cancellation working?
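For reference, a sketch of disabling AGC on a voice-processing I/O unit, mirroring what the poster describes trying (voiceUnit is assumed to be an initialized kAudioUnitSubType_VoiceProcessingIO instance). If the residual attenuation is the voice processor's own ducking rather than AGC, kAUVoiceIOProperty_BypassVoiceProcessing also exists, but it trades the gain behaviour away together with the echo cancellation:

import AudioToolbox

// Sketch: turn off automatic gain control on a VoiceProcessingIO unit.
func disableAGC(on voiceUnit: AudioUnit) -> OSStatus {
    var enableAGC: UInt32 = 0
    return AudioUnitSetProperty(voiceUnit,
                                kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                kAudioUnitScope_Global,
                                0,
                                &enableAGC,
                                UInt32(MemoryLayout<UInt32>.size))
}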
Post not yet marked as solved
0 Replies
886 Views
We are developing an app that uses external hardware to measure analogue hearing-loop performance. It uses the audio jack on the phone/iPad. With the new hardware on iPad using USB-C, we have noticed that the same input produces very different input levels with a Lightning adapter versus a USB-C adapter. The USB-C input is ~23 dB lower, with the same code and settings - more than a 10x difference in amplitude. Is there any way to control the USB-C adapter? Am I missing something? The code simply uses AVAudioInputNode and a block attached to it via self.inputNode.installTap. We do adjust the gain to 1.0:

let gain: Float = 1.0
try session.setInputGain(gain)

But that still does not help. I wish there was an Apple lab I could go to, to speak to engineers about it.
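Worth checking (an assumption about a likely cause, not a confirmed one): whether the USB-C route exposes a software-settable gain at all. setInputGain only takes effect when isInputGainSettable is true, and some USB audio adapters present a fixed-gain input. A sketch:

import AVFAudio

// Sketch: only apply the gain when the active route actually supports it,
// and log the route when it does not.
func applyUnityGainIfPossible() throws {
    let session = AVAudioSession.sharedInstance()
    if session.isInputGainSettable {
        try session.setInputGain(1.0)
    } else {
        print("Input gain is fixed for this route:", session.currentRoute.inputs)
    }
}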
Post not yet marked as solved
0 Replies
608 Views
A: iPhone SE (2nd generation, iOS 16.5), Bluetooth headset: Shokz OpenRun S803
B: Any mobile device

1. A uses the Bluetooth microphone/speaker and makes a call to B using an iPhone app.
2. A mutes the headset (the Bluetooth device supports muting in hardware).
3. While A is muted, B speaks.
4. A unmutes the headset.
5. Every time B speaks, B can hear the echo.

Since there is no audio data while the hardware is muted, VPIO gets no audio reference data with which to remove the echo signal. Is there any alternative to resolve this echo in VoIP software using VPIO?
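One workaround to consider (an assumption about what might keep the echo canceller converged, not a verified fix): mute in software instead of in the headset, so the VoiceProcessingIO unit keeps running and keeps seeing its reference signal, while the app replaces the transmitted samples with silence. A sketch:

import AudioToolbox
import Foundation

// Hypothetical: `muted` is toggled by the app's mute button. Capture still
// flows through VPIO (preserving its echo reference); only the samples
// handed to the network are zeroed.
var muted = false

func scrubCapturedAudio(_ ablPointer: UnsafeMutablePointer<AudioBufferList>) {
    guard muted else { return }
    for buffer in UnsafeMutableAudioBufferListPointer(ablPointer) {
        if let data = buffer.mData {
            memset(data, 0, Int(buffer.mDataByteSize)) // send silence instead of voice
        }
    }
}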
Post not yet marked as solved
0 Replies
568 Views
I am currently working on a project that involves real-time audio processing in my iOS/macOS application. I have been exploring the Audio Unit Hosting API and specifically the AUHAL units for handling audio input and output. My goal is to establish a direct connection between an input AUHAL unit and an output AUHAL unit to achieve seamless real-time audio processing. I've been researching and experimenting with the API, but I haven't been able to find a clear solution or documentation regarding this specific scenario. Has anyone attempted such a configuration or encountered similar requirements? I would greatly appreciate any insights, suggestions, or pointers to relevant documentation that could help me achieve this direct connection between the input and output AUHAL units. Thank you in advance for your time and assistance. Best regards, Yosemite
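For reference, the C API for a direct unit-to-unit wire is kAudioUnitProperty_MakeConnection; a sketch follows. A hedged caveat: with two AUHAL instances on different devices, this connection alone is generally not enough, because each device runs on its own clock - a ring buffer between the input and output IOProcs is the common pattern.

import AudioToolbox

// Sketch: wire the capture side of one AUHAL (output element 1) into the
// render side of another (input element 0). `inputUnit` and `outputUnit`
// are assumed to be configured AUHAL instances.
func connect(inputUnit: AudioUnit, to outputUnit: AudioUnit) -> OSStatus {
    var connection = AudioUnitConnection(sourceAudioUnit: inputUnit,
                                         sourceOutputNumber: 1, // AUHAL captured audio
                                         destInputNumber: 0)
    return AudioUnitSetProperty(outputUnit,
                                kAudioUnitProperty_MakeConnection,
                                kAudioUnitScope_Input,
                                0,
                                &connection,
                                UInt32(MemoryLayout<AudioUnitConnection>.size))
}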
Post not yet marked as solved
2 Replies
902 Views
Hi, I'm having trouble saving user presets in the plugin for Audio Units. This works well for saving the user presets in the host, but I get an error when trying to save them in the plugin. I'm not using a parameter tree; instead I use the fullState getter and setter for saving and retrieving a dictionary with the state. With some simplified parameters it looks something like this:

var gain: Double = 0.0
var frequency: Double = 440.0
private var currentState: [String: Any] = [:]

override var fullState: [String: Any]? {
    get {
        // Save the current state
        currentState["gain"] = gain
        currentState["frequency"] = frequency
        // Return the preset state
        return ["myPresetKey": currentState]
    }
    set {
        // Extract the preset state
        currentState = newValue?["myPresetKey"] as? [String: Any] ?? [:]
        // Set the Audio Unit's properties
        gain = currentState["gain"] as? Double ?? 0.0
        frequency = currentState["frequency"] as? Double ?? 440.0
    }
}

This works perfectly well for storing user presets when saved in the host. When trying to save them in the plugin, to be able to reuse them across hosts, I get the following error in the interface: "Missing key in preset state map". Note that I am testing mostly in AUM. I could not find any documentation on what the missing key is or how to get around this. Any ideas?
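A pattern worth trying (hedged: this matches how fullState is documented to behave, but it is not confirmed to resolve AUM's specific error): merge with super.fullState instead of replacing it. The AUAudioUnit base class keeps bookkeeping entries (component type, subtype, manufacturer, version) in that dictionary, and returning only your own keys drops them. A sketch of the same simplified override:

override var fullState: [String: Any]? {
    get {
        // Start from the superclass dictionary so its required keys survive.
        var state = super.fullState ?? [:]
        state["myPresetKey"] = ["gain": gain, "frequency": frequency]
        return state
    }
    set {
        super.fullState = newValue
        let preset = newValue?["myPresetKey"] as? [String: Any] ?? [:]
        gain = preset["gain"] as? Double ?? 0.0
        frequency = preset["frequency"] as? Double ?? 440.0
    }
}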
Post not yet marked as solved
6 Replies
1.9k Views
Hi, so I have a little bit of work left on the Asus Xonar family of audio devices. Thanks to Apple's SamplePCIAudioDriver code and their excellent documentation, Evgeny Gavrilov's kxAudio driver for Mac, and Takashi Iwai's exceptional documentation of the ALSA API, I have something that is ready for testing. The stats look good, but unfortunately this is my second HDAV1.3 Deluxe; the other one is also in the same room, consuming all of my devices with powered audio outputs. No matter - I am in the process of acquiring another Xonar sound card in this family. Which brings me to my question: what is the benefit of getting an Apple developer account for 99 dollars a year? Will I be able to distribute a beta kext with my signature that will allow people to test the binary? I don't think others could run a self-signed kext built on one machine on another, correct? So would a developer license allow others to test a binary built on my machine, assuming they're x86? My hope is that the developer program would allow me to test the binaries and solicit input from enthusiast Mac Pro owners worldwide. I then hope to create a new program that will give us the wealth of mixers/controls this fantastic line is capable of providing.
Post marked as solved
35 Replies
14k Views
Hi all, Apple dropping ongoing development for FireWire devices that were supported with the Core Audio driver standard is a catastrophe for a lot of struggling musicians who need to both keep up to date on the security updates that come with new OS releases and continue to utilise their hard-earned investments in very expensive and still pristine audio devices that have been reduced to e-waste by Apple's seemingly tone-deaf ignorance of the cries for ongoing support. I have one of said audio devices, and I'd like to keep using it while keeping my 2019 Intel MacBook Pro up to date with the latest security updates and OS features.

Probably not the first time you gurus have had someone make the logical leap leading to a request for something like this, but I was wondering if it might somehow be possible to shoehorn the code from previous versions of macOS - the code that allowed the Mac to speak with the audio features of such devices - into the Ventura version of the OS. Would it be possible? Would it involve a lot of work? I don't think I'd be the only person willing to pay for a third-party application or utility that restored this functionality. There have to be hundreds of thousands of people who would be happy to spare some cash to stop their multi-thousand-dollar investment in gear from being so thoughtlessly resigned to the scrap heap. Any comments or layman-friendly explanations as to why this couldn't happen would be gratefully received! Thanks, em
Post not yet marked as solved
9 Replies
3.6k Views
I am getting an error in iOS 16 that doesn't appear in previous iOS versions. I am using RemoteIO to play back live audio at 4000 Hz. The error is the following:

Input data proc returned inconsistent 2 packets for 186 bytes; at 2 bytes per packet, that is actually 93 packets

This is how the audio format and the callback are set:

// Set the audio format
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 4000;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;

AURenderCallbackStruct callbackStruct;
// Set output callback
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Global,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));

Note that the mSampleRate I set is 4000 Hz. In iOS 15 I get 0.02322 seconds of buffer duration (IOBufferDuration) and 93 frames in each callback. This is expected, because:

number of frames / buffer duration = sampling rate
93 / 0.02322 ≈ 4000 Hz

However, in iOS 16 I am getting the aforementioned error in the callback:

Input data proc returned inconsistent 2 packets for 186 bytes; at 2 bytes per packet, that is actually 93 packets

Since the number of frames is equal to the number of packets, I am getting 1 or 2 frames in the callback while the buffer duration is still 0.02322 seconds. This didn't affect the playback of the "raw" signal, but it did affect the playback of the "processed" signal:

number of frames / buffer duration = sampling rate
2 / 0.02322 ≈ 86 Hz

That doesn't make any sense. This error appears for different sampling rates (8000, 16000, 32000), but not for 44100. However, I would like to keep 4000 as my sampling rate. I have also tried to set the sampling rate using the setPreferredSampleRate(_:) function of AVAudioSession, but the attempt didn't succeed: the sampling rate was still 44100 after calling that function. Any help on this issue would be appreciated.
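For reference, a sketch of requesting the session parameters (hedged: preferred values are requests only, and the hardware commonly stays at 44100 or 48000 Hz, with the remote-IO unit converting between the session rate and the ASBD set on the unit):

import AVFAudio

// Sketch: ask for the desired rate and a matching buffer duration, then
// read back the actual values the hardware granted after activation.
func configureSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord)
    try session.setPreferredSampleRate(4000)          // often coerced to 44100/48000
    try session.setPreferredIOBufferDuration(0.02322) // ~93 frames at 4000 Hz
    try session.setActive(true)

    print("actual sample rate:", session.sampleRate)
    print("actual IO buffer duration:", session.ioBufferDuration)
}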