Core Audio


Interact with the audio hardware of a device using Core Audio.

Core Audio Documentation

Posts under Core Audio tag

55 Posts
Post marked as solved
1 Reply
104 Views
Hi, my application doesn't start playback anymore after signing it with entitlements.

<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <key>com.apple.security.files.user-selected.read-only</key>
    <true/>
    <key>com.apple.security.device.audio-input</key>
    <true/>
    <key>com.apple.security.device.microphone</key>
    <true/>
    <key>com.apple.security.assets.music.read-write</key>
    <true/>
    <key>com.apple.security.network.server</key>
    <true/>
</dict>
</plist>

Regards, Joël
Posted
by
Post not yet marked as solved
0 Replies
97 Views
The device-listing Core Audio API kAudioHardwarePropertyDevices does not list AirPlay devices if our virtual audio driver is selected as the sound output device in System Settings. This virtual audio driver is developed by us and is named BoomAudio. We need to select BoomAudio as the sound output in System Settings so that we can capture system audio and apply Boom effects/enhancement. But whenever BoomAudio is selected as the sound output, the AirPlay device does not appear in the device-list API, so we cannot play through to the AirPlay output device.

Steps:
1. Select BoomAudio as Sound output in System Settings. (The issue occurs if any other sound output device like Headphones/Internal Speakers is selected.)
2. If an Apple TV is connected, we should not AirPlay the system display; only the system's sound output should be AirPlayed.
3. Build and run the sample project we have attached, "SampleAirplayAudio".
4. Click the button "Sound Output Device List".

Output: In the Xcode console, the AirPlay device does not get listed.

BoomAudio can be installed from the following path: https://d3jbf8nvvpx3fh.cloudfront.net/gdassets/airplaydts/Boom+2+Installer.zip
The sample project 'SampleAirplayAudio' is available at this path: https://d3jbf8nvvpx3fh.cloudfront.net/gdassets/airplaydts/SampleAirplayAudio.zip
We have already raised a bug report in Apple Feedback Assistant; the bug ID is FB7543204.
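For context, enumeration against kAudioHardwarePropertyDevices typically looks like the minimal sketch below; error handling is omitted and the printed names are whatever the system returns, so this is only a reference point for the behavior described above, not the attached sample project.

#include <CoreAudio/CoreAudio.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    AudioObjectPropertyAddress devicesAddr = {
        kAudioHardwarePropertyDevices, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMain
    };
    UInt32 size = 0;
    AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &devicesAddr, 0, NULL, &size);
    AudioObjectID *devices = malloc(size);
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &devicesAddr, 0, NULL, &size, devices);
    UInt32 count = size / sizeof(AudioObjectID);

    AudioObjectPropertyAddress nameAddr = {
        kAudioObjectPropertyName, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMain
    };
    for (UInt32 i = 0; i < count; i++) {
        CFStringRef name = NULL;
        UInt32 nameSize = sizeof(name);
        if (AudioObjectGetPropertyData(devices[i], &nameAddr, 0, NULL, &nameSize, &name) == noErr && name) {
            char buf[256];
            CFStringGetCString(name, buf, sizeof(buf), kCFStringEncodingUTF8);
            printf("device 0x%x: %s\n", devices[i], buf);   // AirPlay outputs should show up in this list
            CFRelease(name);
        }
    }
    free(devices);
    return 0;
}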
Posted
by
Post not yet marked as solved
0 Replies
216 Views
Hi all, I'm working on an app that involves measuring the heading of one iPhone relative to another iPhone. I need to be able to record audio from at least two of the built-in data sources at once. Does anyone know how I can achieve this? I've found that, when using the .measurement mode for an AVAudioSession, the stereo polar pattern is not available. Also, it doesn't seem possible to select multiple data sources. Is there something I'm missing? If this is not possible, why not?
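For comparison, stereo capture from the built-in microphone is normally configured along these lines. This is only a rough sketch using standard AVAudioSession API, with error handling omitted; as the post notes, the stereo polar pattern is not offered when the session mode is .measurement.

#import <AVFoundation/AVFoundation.h>

static void configureStereoCapture(void) {
    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSError *error = nil;
    [session setCategory:AVAudioSessionCategoryPlayAndRecord
                    mode:AVAudioSessionModeDefault   // per the post, stereo is unavailable under AVAudioSessionModeMeasurement
                 options:0
                   error:&error];

    // Pick the built-in mic and ask one of its data sources for the stereo polar pattern.
    for (AVAudioSessionPortDescription *port in session.availableInputs) {
        if (![port.portType isEqualToString:AVAudioSessionPortBuiltInMic]) continue;
        AVAudioSessionDataSourceDescription *source = port.dataSources.firstObject;
        if ([source.supportedPolarPatterns containsObject:AVAudioSessionPolarPatternStereo]) {
            [source setPreferredPolarPattern:AVAudioSessionPolarPatternStereo error:&error];
        }
        [port setPreferredDataSource:source error:&error];
        [session setPreferredInput:port error:&error];
        break;
    }
    [session setPreferredInputOrientation:AVAudioSessionStereoOrientationPortrait error:&error];
    [session setActive:YES error:&error];
}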
Posted
by
Post not yet marked as solved
0 Replies
336 Views
Hello, We are trying to use audio calling functionality on visionOS, with no success since the visionOS update. We do not use CallKit for this flow. We set the AudioSession as follows:

[sessionInstance setCategory:AVAudioSessionCategoryPlayAndRecord
                        mode:AVAudioSessionModeVoiceChat
                     options:(AVAudioSessionCategoryOptionAllowBluetooth | AVAudioSessionCategoryOptionAllowBluetoothA2DP | AVAudioSessionCategoryOptionMixWithOthers)
                       error:&error_];

We create our audio unit as follows:

AudioComponentDescription desc_;
desc_.componentType = kAudioUnitType_Output;
desc_.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc_.componentManufacturer = kAudioUnitManufacturer_Apple;
desc_.componentFlags = 0;
desc_.componentFlagsMask = 0;

AudioComponent comp_ = AudioComponentFindNext(NULL, &desc_);
IMSXThrowIfError(AudioComponentInstanceNew(comp_, &_audioUnit),
                 "couldn't create a new instance of Apple Voice Processing IO.");

UInt32 one_ = 1;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input,
                                      audioUnitElementIOInput, &one_, sizeof(one_)),
                 "could not enable input on Apple Voice Processing IO");
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output,
                                      audioUnitElementIOOutput, &one_, sizeof(one_)),
                 "could not enable output on Apple Voice Processing IO");

IMSTagLogInfo(kIMSTagAudio, @"Rate: %ld", _rate);

bool isInterleaved = _channel == 2 ? true : false;
self.ioFormat = CAStreamBasicDescription(_rate, _channel, CAStreamBasicDescription::kPCMFormatInt16, isInterleaved);

IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input,
                                      0, &_ioFormat, sizeof(self.ioFormat)),
                 "couldn't set the input client format on Apple Voice Processing IO");
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output,
                                      1, &_ioFormat, sizeof(self.ioFormat)),
                 "couldn't set the output client format on Apple Voice Processing IO");

UInt32 maxFramesPerSlice_ = 4096;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global,
                                      0, &maxFramesPerSlice_, sizeof(UInt32)),
                 "couldn't set max frames per slice on Apple Voice Processing IO");

UInt32 propSize_ = sizeof(UInt32);
IMSXThrowIfError(AudioUnitGetProperty(self.audioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global,
                                      0, &maxFramesPerSlice_, &propSize_),
                 "couldn't get max frames per slice on Apple Voice Processing IO");

AURenderCallbackStruct renderCallbackStruct_;
renderCallbackStruct_.inputProc = playbackCallback;
renderCallbackStruct_.inputProcRefCon = (__bridge void *)self;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Output,
                                      0, &renderCallbackStruct_, sizeof(renderCallbackStruct_)),
                 "couldn't set render callback on Apple Voice Processing IO");

AURenderCallbackStruct inputCallbackStruct_;
inputCallbackStruct_.inputProc = recordingCallback;
inputCallbackStruct_.inputProcRefCon = (__bridge void *)self;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Input,
                                      0, &inputCallbackStruct_, sizeof(inputCallbackStruct_)),
                 "couldn't set render callback on Apple Voice Processing IO");

And as soon as we try to start the AudioUnit, we get the following error:

PhaseIOImpl.mm:1514 phaseextio@0x107a54320: failed to start IO directions 0x3, num IO streams [1, 1]: Error Domain=com.apple.coreaudio.phase Code=1346924646 "failed to pause/resume stream 6B273F5B-D6EF-41B3-8460-0E34B00D10A6" UserInfo={NSLocalizedDescription=failed to pause/resume stream 6B273F5B-D6EF-41B3-8460-0E34B00D10A6}

We do not use the PHASE framework on our side, and the error is not clear to us nor documented anywhere. We also tried an AudioUnit that only does speaker output, which works perfectly, but as soon as we try to record from an AudioUnit, the start fails as well, with the error AVAudioSessionErrorCodeCannotStartRecording. We suppose that somehow an IO VoIP audio unit is running inside PHASE that prevents us from stopping/killing it when we try to create our own, and stops the whole flow. It used to work on visionOS 1.0.1. Regards, Summit-tech
Posted
by
Post not yet marked as solved
3 Replies
294 Views
I see unexpected behavior when using AudioObjectGetPropertyData() to get the Channel Number Name or the Channel Category Name for the iPhone Microphone or the MacBook Pro Microphone audio devices. I am running macOS 14.4 Sonoma on an Intel MacBook Pro 15" 2019.
I have a test program that loops through all audio devices on a system, and all channels on each device. It uses AudioObjectGetPropertyData() to get the device name and manufacturer name, then iterates over the input and output channels getting Channel Number Name, Channel Name and Channel Category. I would expect some of these values (as Channel Name frequently is) to be empty CFStrings, or for others to return FALSE from AudioObjectHasProperty() if the driver does not implement the property. And that is how things behave for most devices...
... except for the MacBook Pro Microphone and iPhone Microphone devices. There AudioObjectHasProperty() returns TRUE, but then an AudioObjectGetPropertyData() call with the exact same AudioObjectPropertyAddress returns with the error code 'WHAT'. It took me a little while to realize the error code being returned was 'WHAT', not 'what', and I added a modified checkError() function to capture that and more.
So what surprised me is:
If AudioObjectHasProperty() returns TRUE, then I expect the matching AudioObjectGetPropertyData() call to work.
And what the heck is 'WHAT'? I assume it is supposed to mean 'what', aka kAudioHardwareUnspecifiedError. Why is the actual error value not returned? Are there other places that return 'WHAT' or capitalized versions of these standard OSStatus Core Audio errors?
The example program is not complex but is too long for here, so it's on GitHub at https://github.com/Darryl-Ramm/Wot
Here is some output from that program showing the unexpected behavior: output.txt
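A checkError()-style helper of the kind mentioned above can be sketched roughly as follows; it prints an OSStatus as a four-character code when all four bytes are printable (which makes 'WHAT' and 'what' easy to tell apart) and as a plain integer otherwise.

#include <CoreFoundation/CoreFoundation.h>
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Print and abort on a non-zero OSStatus, showing it as a four-char code when possible.
static void checkError(OSStatus error, const char *operation) {
    if (error == noErr) return;
    char code[5] = {0};
    UInt32 bigEndian = CFSwapInt32HostToBig((UInt32)error);
    memcpy(code, &bigEndian, 4);
    if (isprint(code[0]) && isprint(code[1]) && isprint(code[2]) && isprint(code[3]))
        fprintf(stderr, "Error: %s ('%s', %d)\n", operation, code, (int)error);
    else
        fprintf(stderr, "Error: %s (%d)\n", operation, (int)error);
    exit(1);
}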
Posted
by
dkr
Post not yet marked as solved
1 Reply
286 Views
Hi, hopefully a simple question. I just reached for AudioObjectShow() to help debug stuff and it does not appear to work on audio devices or audio streams. It prints nothing for them. It does work on kAudioObjectSystemObject; I've not explored what else it does or does not work on.
I could not find any other posts about this. Is it expected to work? On all audio objects? I'm on macOS 14.4.
Here is a simple demo. AudioObjectShow() prints out info for kAudioObjectSystemObject but then prints nothing as we loop through the audio devices in the system (and the same for all streams on all these devices, but I'm not showing that here).

#include <CoreAudio/AudioHardware.h>
#include <stdio.h>
#include <stdlib.h>

static const AudioObjectPropertyAddress devicesAddress = {
    kAudioHardwarePropertyDevices,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMain
};

static const AudioObjectPropertyAddress nameAddress = {
    kAudioObjectPropertyName,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMain
};

int main(int argc, const char *argv[]) {
    UInt32 size;
    AudioObjectID *audioDevices;

    AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &devicesAddress, 0, NULL, &size);
    audioDevices = (AudioObjectID *) malloc(size);
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &devicesAddress, 0, NULL, &size, audioDevices);
    UInt32 nDevices = size / sizeof(AudioObjectID);

    printf("--- AudioObjectShow(kAudioObjectSystemObject):\n");
    AudioObjectShow(kAudioObjectSystemObject);

    for (int i = 0; i < nDevices; i++) {
        printf("-------------------------------------------------\n");
        printf("audioDevices[%d] = 0x%x\n", i, audioDevices[i]);

        AudioObjectGetPropertyDataSize(audioDevices[i], &nameAddress, 0, NULL, &size);
        CFStringRef cfString = NULL;  // filled in by AudioObjectGetPropertyData
        AudioObjectGetPropertyData(audioDevices[i], &nameAddress, 0, NULL, &size, &cfString);
        CFShow(cfString);

        // Does AudioObjectShow() give us anything?
        printf("--- AudioObjectShow(audioDevices[%d]=0x%x):\n", i, audioDevices[i]);
        AudioObjectShow(audioDevices[i]);
        printf("---\n");
    }
}

Start of output...

AudioObjectID: 0x1
Class: Audio System Object
Name: The Audio System Object
-------------------------------------------------
audioDevices[0] = 0xd2
Darryl's iPhone Microphone
--- AudioObjectShow(audioDevices[0]=0xd2):
---
-------------------------------------------------
audioDevices[1] = 0x41
LG UltraFine Display Audio
--- AudioObjectShow(audioDevices[1]=0x41):
---
-------------------------------------------------
audioDevices[2] = 0x3b
LG UltraFine Display Audio
--- AudioObjectShow(audioDevices[2]=0x3b):
---
-------------------------------------------------
audioDevices[3] = 0x5d
BlackHole 16ch
--- AudioObjectShow(audioDevices[3]=0x5d):
---
-------------------------------------------------
Posted
by
dkr
Post not yet marked as solved
0 Replies
232 Views
The CoreAudio framework has a process class property, kAudioProcessPropertyDevices, which is used to obtain an array of AudioObjectIDs that represent the devices currently used by the process for output. But this property behaves incorrectly. Specifically, if a process switches from one microphone to another while streaming, this property returns the output device ID as the input device ID.

Steps to reproduce:
1. Run FaceTime.
2. Select "MacBook Pro Microphone" as the input device from the FaceTime menu.
3. Select "MacBook Pro Speaker" as the output device from the FaceTime menu.
4. Start a call.
5. Get kAudioProcessPropertyDevices for the Input scope: returns ID1 - "MacBook Pro Microphone" [CORRECT]
6. Get kAudioProcessPropertyDevices for the Output scope: returns ID2 - "MacBook Pro Speaker" [CORRECT]
7. Change the input device in the FaceTime menu to any other microphone ("AirPods Pro" - ID3).
8. Get kAudioProcessPropertyDevices for the Input scope: returns ID2 "MacBook Pro Speaker" but should be ID3 "AirPods Pro" [INCORRECT]
9. Get kAudioProcessPropertyDevices for the Output scope: returns ID2 "MacBook Pro Speaker" [CORRECT]

Monitoring the property change for kAudioProcessPropertyDevices could provide a means to track audio streaming processes, but its current flaw renders it unusable. So I'm curious if the macOS developers plan to address this issue in future releases, or if anyone can come up with a reliable alternative for identifying processes and associated audio devices being used for playback or recording.
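For reference, the per-scope query described above looks roughly like the sketch below. The process object ID is assumed to have been obtained elsewhere (for example from kAudioHardwarePropertyProcessObjectList), and error handling is minimal.

#include <CoreAudio/AudioHardware.h>
#include <stdio.h>
#include <stdlib.h>

// Query the devices a process object currently uses in one direction.
// Pass kAudioObjectPropertyScopeInput or kAudioObjectPropertyScopeOutput as the scope.
static void printProcessDevices(AudioObjectID processObject, AudioObjectPropertyScope scope) {
    AudioObjectPropertyAddress addr = {
        kAudioProcessPropertyDevices,
        scope,
        kAudioObjectPropertyElementMain
    };
    UInt32 size = 0;
    if (AudioObjectGetPropertyDataSize(processObject, &addr, 0, NULL, &size) != noErr) return;
    AudioObjectID *devices = malloc(size);
    if (AudioObjectGetPropertyData(processObject, &addr, 0, NULL, &size, devices) == noErr) {
        UInt32 count = size / sizeof(AudioObjectID);
        for (UInt32 i = 0; i < count; i++)
            printf("scope %u: device 0x%x\n", (unsigned)scope, devices[i]);
    }
    free(devices);
}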
Posted
by
Post marked as solved
4 Replies
1.8k Views
Some of the installers we ship have suddenly become broken for users running the latest version of macOS. I found that the reason was that we install a Core Audio HAL driver, and because I wanted to avoid a system reboot I relaunched the Core Audio daemon from a pkg post-install script:

sudo launchctl kickstart -kp system/com.apple.audio.coreaudiod

With the OS update the command fails if a computer has SIP enabled (which is the default):

sudo launchctl kickstart -kp system/com.apple.audio.coreaudiod
Password:
Could not kickstart service "com.apple.audio.coreaudiod": 1: Operation not permitted

It would be super nice if either the change could be reverted, OR I and people in a similar situation could learn a workaround for how to hot-plug (and unplug) such a HAL driver.
Post not yet marked as solved
0 Replies
289 Views
We develop virtual instruments for Mac/AU and are trying to get our AU plugins and our standalone player to work with Audio Workgroups. When the standalone app or Logic Pro is in the foreground and active, all is well and as expected. However, when the app or Logic Pro is not in focus, all my auxiliary threads are running on E-cores, even though they are properly joined to the processing thread's workgroup. This leads to a lot of audible dropouts because deadlines are not met anymore. The processing thread itself stays on a P-core but has to wait for the other threads to finish. How can I opt out of this behaviour? Our users certainly have use cases where they expect the player to run smoothly even though they currently have a different app in focus.
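For context, the join pattern being described looks roughly like the sketch below for the standalone case, where the workgroup comes from the output device (an AU would instead receive the host's workgroup through the render context). Here deviceID is assumed to be the device driving the real-time IO thread, and error handling is omitted.

#include <CoreAudio/AudioHardware.h>
#include <os/workgroup.h>

// Fetch the os_workgroup_t of the device's IO thread.
static os_workgroup_t copyIOWorkgroup(AudioObjectID deviceID) {
    AudioObjectPropertyAddress addr = {
        kAudioDevicePropertyIOThreadOSWorkgroup,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMain
    };
    os_workgroup_t wg = NULL;
    UInt32 size = sizeof(wg);
    AudioObjectGetPropertyData(deviceID, &addr, 0, NULL, &size, &wg);
    return wg;
}

// On each auxiliary render thread: join for the duration of the work, then leave.
static void doAuxiliaryRenderWork(os_workgroup_t wg) {
    os_workgroup_join_token_s token;
    if (os_workgroup_join(wg, &token) == 0) {
        // ... perform this thread's slice of the render work ...
        os_workgroup_leave(wg, &token);
    }
}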
Posted
by
Post not yet marked as solved
0 Replies
415 Views
I am currently working on planning a multi-component software system that consists of an Audio Server plugin and an application for user interaction. I have very little experience with IPC/XPC and its performance implications, so I hope I can find a little guidance here.
The Audio Server plugin publishes a number of multi-channel output devices on which it should perform computations and pass the result on to a different Core Audio device. My concerns here are:
- Can the plugin directly access other Core Audio devices for audio output, or is this prohibited by the sandboxing?
- If it cannot, would relaying the audio data via XPC be a good idea in terms of low-latency stability?
- Can I use Metal compute from within the Audio Server plugin? I have not found any information about Metal-related sandboxing entitlements. I am also concerned about performance implications, as above.
Regarding the user interface application, I would like to know:
- Whether a process that has not been started by launchd can communicate with the Audio Server plugin using XPC.
- If not, would a user agent instead of an app be a better choice? Or are there other communication channels that would work with sandboxing?
Thank you very much! Andreas
Posted
by
Post not yet marked as solved
0 Replies
350 Views
At least under macOS Sonoma 14.2.1, kAudioFormatFlagIsBigEndian for 24-bit audio doesn't seem to be supported by the Core Audio engine when providing kAudioServerPlugInIOOperationWriteMix streaming buffers to our Core Audio server plugin. Is that correct and to be expected? Or how should the AudioStreamBasicDescription be filled out on a kAudioStreamPropertyPhysicalFormat request to correctly announce 24-bit big-endian audio to Core Audio? Thanks, hagen.
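For comparison, a packed little-endian 24-bit physical format (i.e. without kAudioFormatFlagIsBigEndian) would be described roughly as below; the sample rate and channel count are placeholders rather than values taken from the post.

#include <CoreAudio/CoreAudio.h>

// Packed, signed-integer, little-endian 24-bit PCM: 3 bytes per sample, no IsBigEndian flag.
static AudioStreamBasicDescription make24BitLittleEndianFormat(Float64 sampleRate, UInt32 channels) {
    AudioStreamBasicDescription asbd = {0};
    asbd.mSampleRate       = sampleRate;          // e.g. 48000.0
    asbd.mFormatID         = kAudioFormatLinearPCM;
    asbd.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    asbd.mBitsPerChannel   = 24;
    asbd.mChannelsPerFrame = channels;            // e.g. 2
    asbd.mBytesPerFrame    = 3 * channels;
    asbd.mFramesPerPacket  = 1;
    asbd.mBytesPerPacket   = asbd.mBytesPerFrame;
    return asbd;
}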
Posted
by
Post not yet marked as solved
0 Replies
431 Views
I have a music player that is able to save and restore AU parameters using the kAudioUnitProperty_ClassInfo property. For non-Apple AUs, this works fine. But for any of the Apple units, the class info can be set only the first time after the audio graph is built. Subsequent sets of the property do not stick, even though the OSStatus code is 0 upon return. Previously this had worked fine, but at some point, not sure when, the Apple-provided AUs changed their behavior and are now causing me problems. Can anyone help shed light on this? Thanks in advance for the help. Jeff Frey
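For reference, the basic save/restore calls under discussion look roughly like this sketch; au is assumed to be an initialized AudioUnit in the graph, and error handling is minimal.

#include <AudioToolbox/AudioToolbox.h>

// Capture an Audio Unit's full state as a property list (caller releases the result).
static CFPropertyListRef copyClassInfo(AudioUnit au) {
    CFPropertyListRef classInfo = NULL;
    UInt32 size = sizeof(classInfo);
    OSStatus err = AudioUnitGetProperty(au, kAudioUnitProperty_ClassInfo,
                                        kAudioUnitScope_Global, 0, &classInfo, &size);
    return (err == noErr) ? classInfo : NULL;
}

// Push previously saved state back into the unit.
static OSStatus restoreClassInfo(AudioUnit au, CFPropertyListRef classInfo) {
    return AudioUnitSetProperty(au, kAudioUnitProperty_ClassInfo,
                                kAudioUnitScope_Global, 0,
                                &classInfo, sizeof(classInfo));
}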
Posted
by
Post not yet marked as solved
4 Replies
612 Views
I am trying to get the raw audio data from the system microphone using the AudioToolbox and CoreFoundation frameworks. So far the write-packets-to-file logic works, but when I try to capture the raw data into a separate file I get white noise. The callback function looks like this:

static void MyAQInputCallback(void *inUserData, AudioQueueRef inQueue,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc)
{
    MyRecorder *recorder = (MyRecorder *)inUserData;
    if (inNumPackets > 0) {
        CheckError(AudioFileWritePackets(recorder->recordFile, FALSE,
                                         inBuffer->mAudioDataByteSize, inPacketDesc,
                                         recorder->recordPacket, &inNumPackets,
                                         inBuffer->mAudioData),
                   "AudioFileWritePackets failed");
        recorder->recordPacket += inNumPackets;

        int sampleCount = inBuffer->mAudioDataByteSize / sizeof(AUDIO_DATA_TYPE_FORMAT);
        AUDIO_DATA_TYPE_FORMAT *samples = (AUDIO_DATA_TYPE_FORMAT *)inBuffer->mAudioData;
        FILE *fp = fopen(filename, "a");
        for (int i = 0; i < sampleCount; i++) {
            fprintf(fp, "%i;\n", samples[i]);
        }
        fclose(fp);
    }
    if (recorder->running)
        CheckError(AudioQueueEnqueueBuffer(inQueue, inBuffer, 0, NULL),
                   "AudioQueueEnqueueBuffer failed");
}

Some parameters:
NumberRecordBuffers = 3
buffer duration = 0.1
format->mFramesPerPacket = 4096
samplerate = 44100
inNumPackets = 1
recordFormat.mFormatID = kAudioFormatAppleLossless;
recordFormat.mChannelsPerFrame = 1;
recordFormat.mBitsPerChannel = 16;

Is this the correct way to do this? I could not find much information in the documentation. Any help is appreciated. Thank you in advance.
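One thing worth noting: with kAudioFormatAppleLossless the queue delivers encoded ALAC packets, so the bytes in inBuffer->mAudioData are compressed data rather than samples, and printing them as integers will look like noise. If the intent is to read samples directly from the buffer, a linear-PCM recording format would be needed instead; a rough sketch with placeholder values:

#include <AudioToolbox/AudioToolbox.h>

// 16-bit signed integer, mono, packed linear PCM: buffer bytes can be read directly as int16_t samples.
static AudioStreamBasicDescription makePCM16MonoFormat(Float64 sampleRate) {
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = sampleRate;           // e.g. 44100.0
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 2;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = 2;
    return fmt;
}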
Posted
by
Post not yet marked as solved
1 Reply
427 Views
I have created a recording application. If the user switches the device off or kills the application, how can I save the recording that is in progress at that moment?
Posted
by
Post not yet marked as solved
2 Replies
640 Views
I am working on an app that uses Core Audio through the JUCE library for audio. The problem I'm trying to solve is that when the app uses a full-duplex audio interface, such as one from the Focusrite Scarlett series, for output, the app shows a dialog requesting permission to use the microphone. The root cause of the issue is that by default, Core Audio opens full-duplex devices for both input and output. On previous macOS versions, I was able to work around the problem by disabling the input stream before starting the IOProc, by setting AudioHardwareIOProcStreamUsage to all zero for input. On macOS Sonoma this disables input so that the microphone indicator is not shown, but the permission popup is still shown. What other reasons are there to show the popup? I have noticed that Chrome and Slack have the same problem, showing the microphone popup when trying to play sounds on the Focusrite, but Deezer, for example, manages without the popup.
Posted
by
Post not yet marked as solved
2 Replies
1.9k Views
I report here some messages from Apple Community for an untracked bug in macOS Sonoma (from 14.0 to 14.2 beta 4 at the time). https://discussions.apple.com/thread/255214328

Original message 1: I've finally noticed a pattern that occurs rather frequently on macOS Sonoma. I was blaming Bluetooth issues before, but it looks like it's more about audio in general. What happens is that at some point, all audio freezes. The hotkeys for the audio controls show a "Stop" sign, as if there are no audio outputs connected, and the taskbar is completely unresponsive: Control Centre shows a spinning circle, and the sidebar is not opening (Spotlight works, though). If you go to System Settings, some menu items will be unresponsive: Sound doesn't open, Bluetooth does not open, and Accessibility and Siri & Spotlight all do not open. Then there is a new bug that I've just started to notice recently: the screen flashes as if an Accessibility feature is enabled that uses a warning flash instead of sound. It appears just randomly, out of nowhere. Immediately after that, sound works just normally. When this is happening, video/audio content in the browser and elsewhere does not work, Tidal shows many random errors, and Firefox just completely hangs when you try to play a video on YouTube. I've tried to stop coreaudiod, and it did restart the daemon, but nothing else happened. The device is a very fresh M1 Max MacBook, and nothing like that was happening on Ventura. I've had audio cracks on another M1 Pro laptop, but this one didn't even have those. P.S. This happened just as I was writing this post, and I had disabled Bluetooth just before. Now the Bluetooth section in Settings is opening, but others are still unresponsive. For reference, I have yabai and BetterSnapTool installed, which modify system behavior, but with system protection enabled. Siri is disabled. I've tried to stop a bunch of random processes when this happened, but none helped so far. This issue has constantly haunted me since I upgraded, and it's extremely annoying.

Original message 2: Yes, I'm thinking it's a combination of Bluetooth and audio issues. All apps that try to use audio crash right after I connect my Bluetooth earbuds. Now I see that coreaudiod is just not running this time - I tried to connect to a Slack huddle and it just hung, sound is unresponsive again, and the Settings app is not working as I mentioned before. I checked Activity Monitor and found that the process that handles audio on macOS (coreaudiod) is not running. I attempted to launch coreaudiod with sudo launchctl load /system/library/launchdaemons/com.apple.audio.coreaudiod.plis, and got Load failed: 5: Input/output error as a response. After a while, when I disabled the earbuds, it started again on its own, coreaudiod is running, and the audio controls are working once more.

Original message 3: I just accidentally looked at the Console app while looking for logs for other things, and found out that my coreaudiod is crashing repeatedly, 10 to 50 times every day, with intervals from 1 second to a couple of hours, around 5 minutes on average. The crash is the following:

Crashed Thread: 18  Dispatch queue: com.apple.audio.device.~:AMS2_StackedOutput:0.event
Exception Type:     EXC_BAD_ACCESS (SIGSEGV)
Exception Codes:    KERN_INVALID_ADDRESS at 0x0000000000000000

I also found that avconferenced occasionally fails too, though very rarely. I believe that's the process that connects an iPad as a second screen, and it too fails with SIGSEGV at 0x0, though attempting to read memory at 0 is not exactly a unique bug, so it may just be a coincidence. @Flomaster do you use Sidecar by chance?

My message: I too have this problem on my MacBook Pro M2 Pro since upgrading to macOS Sonoma. It mainly occurs with AirPods Pro 2, but I have also had it happen using OnePlus Buds. The blockages are the same as you have experienced and, as I often work in video conferences, blocking MS Teams or Google Meet is really becoming a serious problem. Desperate, I tried installing the macOS Sonoma 14.2 beta, but none of the updates solved the problem. I even tried a full restore, re-importing the data from Time Machine, but to no avail. Indeed, with beta 4 the problem seems to have worsened, because the AirPods now even struggle to connect.
Posted
by
Post not yet marked as solved
0 Replies
534 Views
Hello, I am an audio developer, currently using macOS version 14.1.1. I noticed that after disabling the microphone, the small yellow dot in the Control Center disappears immediately, but the one in the menu bar takes about 20 seconds to disappear. I tested the built-in Voice Memos app and found the same behavior. Our users may be concerned about their privacy being violated, even though the software is not using the microphone at that time. We believe this is a bug, and the microphone icon in the menu bar should disappear immediately after the microphone is no longer in use. Do you have plans to fix this issue in future versions? Additionally, is there any workaround for the current version? As a supplement, we are using CoreAudio API with AudioDeviceStart & AudioDeviceStop, not AudioUnit.
Posted
by