Post not yet marked as solved
There is a CustomPlayer class that uses MTAudioProcessingTap internally to modify the audio buffer.
Say there are two instances, A and B, of the CustomPlayer class.
While A and B are both playing, when A finishes and its instance is deallocated, B's MTAudioProcessingTap process callback stops being called and its finalize callback fires, even though B still has content left to play.
The same code in the same project does not behave this way on iOS 17.0 or lower: there, when A is deallocated, B completes its playback without any impact.
What change in iOS 17.1 causes this? I'd appreciate an answer on how to avoid the issue.
let audioMix = AVMutableAudioMix()
var audioMixParameters: [AVMutableAudioMixInputParameters] = []
try composition.tracks(withMediaType: .audio).forEach { track in
    let inputParameter = AVMutableAudioMixInputParameters(track: track)
    inputParameter.trackID = track.trackID
    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: UnsafeMutableRawPointer(
            Unmanaged.passRetained(clientInfo).toOpaque()
        ),
        init: { tap, clientInfo, tapStorageOut in
            tapStorageOut.pointee = clientInfo
        },
        finalize: { tap in
            Unmanaged<ClientInfo>.fromOpaque(MTAudioProcessingTapGetStorage(tap)).release()
        },
        prepare: nil,
        unprepare: nil,
        process: { tap, numberFrames, flags, bufferListInOut, numberFramesOut, flagsOut in
            var timeRange = CMTimeRange.zero
            let status = MTAudioProcessingTapGetSourceAudio(tap,
                                                            numberFrames,
                                                            bufferListInOut,
                                                            flagsOut,
                                                            &timeRange,
                                                            numberFramesOut)
            if noErr == status {
                // ...
            }
        })
    var tap: Unmanaged<MTAudioProcessingTap>?
    let status = MTAudioProcessingTapCreate(kCFAllocatorDefault,
                                            &callbacks,
                                            kMTAudioProcessingTapCreationFlag_PostEffects,
                                            &tap)
    guard noErr == status else {
        return
    }
    inputParameter.audioTapProcessor = tap?.takeUnretainedValue()
    audioMixParameters.append(inputParameter)
    tap?.release()
}
audioMix.inputParameters = audioMixParameters
return audioMix
Post not yet marked as solved
If a user has AirPods and has renamed them to "xxxx", how can I get the original name of the AirPods?
I am writing code to monitor the incoming audio levels in VisionOS. It works properly in the simulator, but gets an error on the device. Curious if anyone has any tips.
I took out some of the code to keep this short; it fails in setupAudioEngine when I try to start the engine, with this error:
Error starting audio engine: The operation couldn’t be completed. (com.apple.coreaudio.avfaudio error 561145187.)
Thanks in advance!
Here is my code:
class AudioInputMonitor: ObservableObject {
    private var audioEngine: AVAudioEngine?
    @Published var inputLevel: Float = 0

    init() {
        requestMicrophonePermission()
    }

    private func requestMicrophonePermission() {
        AVAudioApplication.requestRecordPermission { granted in
            DispatchQueue.main.async {
                if granted {
                    self.setupAudioSessionAndEngine()
                } else {
                    print("Microphone permission not granted")
                    // Handle the case where permission is not granted
                }
            }
        }
    }

    private func setupAudioSessionAndEngine() {
        do {
            let audioSession = AVAudioSession.sharedInstance()
            try audioSession.setCategory(.playAndRecord, mode: .measurement, options: [])
            try audioSession.setActive(true)
            self.setupAudioEngine()
        } catch {
            print("Failed to set up the audio session: \(error)")
        }
    }

    private func setupAudioEngine() {
        audioEngine = AVAudioEngine()
        guard let inputNode = audioEngine?.inputNode else {
            print("Failed to get the audio input node")
            return
        }
        let recordingFormat = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] (buffer, _) in
            self?.analyzeAudio(buffer: buffer)
        }
        do {
            try audioEngine?.start()
        } catch {
            print("Error starting audio engine: \(error.localizedDescription)")
        }
    }

    private func analyzeAudio(buffer: AVAudioPCMBuffer) {
        // removed to be brief
    }

    func stopMonitoring() {
        // removed to be brief
    }
}
Post not yet marked as solved
Hi all,
I have created a QuickLook Preview for my custom datatype in my app.
I use SwiftUI wrapped in UIKit for the preview. My issue is that when I try to play audio using AVAudioPlayer, I receive an OSStatus -50 error.
Does anyone know if there are separate permissions I need to request before being able to do this?
Here are the errors I get while trying to set my audio session active and play with the AVAudioPlayer:
Thanks for your help and advice!
The operation couldn’t be completed. (OSStatus error -50.)
nwi_state: registration failed (9)
connection <connection: 0x100e0b270> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 } : error <dictionary: 0x251524530> { count = 1, transaction: 0, voucher = 0x0, contents =
"XPCErrorDescription" => <string: 0x2515246c8> { length = 18, contents = "Connection invalid" }
}
auto-cancelling <connection: 0x100e0b270> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 }
0x2816bf680 reply: XPC_ERROR_CONNECTION_INVALID
throwing swix::exception: !(is_valid())
AQ_API_V2Impl.cpp:134 AudioQueueNew: <-AudioQueueNew failed -302
rebuilding null connection
0x2816bf680 reply: XPC_ERROR_CONNECTION_INVALID
connection <connection: 0x100822a90> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 } : error <dictionary: 0x251524530> { count = 1, transaction: 0, voucher = 0x0, contents =
"XPCErrorDescription" => <string: 0x2515246c8> { length = 18, contents = "Connection invalid" }
}
throwing swix::exception: !(is_valid())
auto-cancelling <connection: 0x100822a90> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 }
AQ_API_V2Impl.cpp:134 AudioQueueNew: <-AudioQueueNew failed -302
Post not yet marked as solved
I know that if you want background audio from AVPlayer you need to detach your AVPlayer from either your AVPlayerViewController or your AVPlayerLayer, in addition to having your AVAudioSession configured correctly.
I have all that squared away, and background audio is fine until we introduce AVPictureInPictureController or use the PiP behavior baked into AVPlayerViewController.
If you want PiP to behave as expected when you background the app (by switching to another app or going to the home screen), you can't perform the detachment operation; otherwise the PiP display fails.
On an iPad, if PiP is active and you lock the device, background audio playback continues. However, on an iPhone, if PiP is active and you lock the device, the audio pauses.
If PiP is inactive and you lock the device, the audio pauses and you have to manually tap play on the lock-screen controls. This part is the same on iPad and iPhone.
My questions are:
Is there a way to keep background-audio playback going when PiP is inactive and the device is locked? (iPhone and iPad)
Is there a way to keep background-audio playback going when PiP is active and the device is locked? (iPhone)
Post not yet marked as solved
Hello, I am having an issue setting my AVAudioSession output to a Bluetooth A2DP device.
I want to use the built-in mic for input and an A2DP device (AirPods Pro 2) for the output route.
Whenever I set the .allowBluetoothA2DP option on my AVAudioSession, the output changes to the speaker.
The mode is .default and the category is .playAndRecord.
If I do the same with AirPods Pro 1, the output is set to the AirPods Pro 1.
The trouble occurs when I use AirPods Pro 2 with an iPhone on iOS 17; there seems to be no issue on iOS versions below 17.
Has anyone run into this kind of issue?
Thank you in advance.
Post not yet marked as solved
When setting the mode during the configuration of an audio session in Swift, the previously configured categoryOptions get reset. For example, if you perform setMode as shown below, you will observe that all previously set categoryOptions are cleared.
Example:
try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .videoChat, options: [.allowBluetooth, .defaultToSpeaker])
try AVAudioSession.sharedInstance().setMode(.voiceChat)
If you need to change the mode while keeping the categoryOptions, you have to call setCategory once again. I haven't identified the exact reason for this behavior, and its practical impact on an application's functionality is not yet clear to me. Why do you think this handling is in place?
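As a workaround sketch (not an explanation of why setMode behaves this way): AVAudioSession has a combined setCategory(_:mode:options:) overload, so setting the category, mode, and options in a single call avoids having a later setMode call clear the options.

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()

// Set category, mode, and options atomically in one call.
// A separate setMode(_:) afterward is what clears previously set options,
// so avoiding the two-step sequence sidesteps the problem.
try session.setCategory(.playAndRecord,
                        mode: .voiceChat,
                        options: [.allowBluetooth, .defaultToSpeaker])
```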
Post not yet marked as solved
I've been trying to make a native version of my iPad app which uses AVAudioPlayer. Everything works fine in iOS and iPad OS, however, when running on visionOS, it sounds like it's constantly skipping (both in the simulator and on an actual device).
Does anyone know why this might be, how to fix it, or a workaround?
Post not yet marked as solved
Hi everyone, I was working on some code that involves recording audio with AVAudioEngine and got an issue that just crashes the app:
EXC_BREAKPOINT
Exception 6, Code 1, Subcode 4304279688
+0x009888 AudioRecordModule.setupAudioEngine
+0x009788
AudioRecordModule.setupAudioEngine
+0x00c5bc
AudioRecordModule.handleConfigurationChange
Below is the relevant code in the Recorder class.
public class AudioRecordModule: Module {
    private var audioEngine: AVAudioEngine?

    private func startRecording(options recordingOptions: RecordingOptions) {
        try AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: .mixWithOthers)
        try AVAudioSession.sharedInstance().setActive(true)
        outputFormat = AVAudioFormat(
            commonFormat: recordingOptions.bitDepth == 32 ? .pcmFormatInt32 : .pcmFormatInt16,
            sampleRate: Double(recordingOptions.sampleRate),
            channels: AVAudioChannelCount(recordingOptions.channels),
            interleaved: true
        )!
        let fileUri = URL(string: recordingOptions.fileUri)!
        let formatSettings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: recordingOptions.sampleRate,
            AVNumberOfChannelsKey: recordingOptions.channels,
            AVEncoderBitRateStrategyKey: AVAudioBitRateStrategy_Constant,
            AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue,
        ]
        self.recordedFile = try AVAudioFile(
            forWriting: fileUri,
            settings: formatSettings,
            commonFormat: outputFormat.commonFormat,
            interleaved: outputFormat.isInterleaved
        )
        if !hadSetupNotification {
            setupNotifications()
        }
    }

    func handleConfigurationChange() {
        DispatchQueue.main.async {
            self.releaseAudioEngine()
            self.setupAudioEngine()
            if self.state == "recording" {
                // we could attempt to keep recording
                do {
                    try self.audioEngine?.start()
                } catch {
                    self.internalPauseRecording()
                    self.sendInterruptEvent()
                }
            }
        }
    }

    func setupNotifications() {
        nc.addObserver(
            forName: Notification.Name.AVAudioEngineConfigurationChange,
            object: nil,
            queue: nil
        ) { [weak self] _ in
            guard let weakself = self else {
                return
            }
            if weakself.state != "inactive" {
                weakself.handleConfigurationChange()
            }
        }
    }

    private func setupAudioEngine() {
        self.audioEngine = nil
        let audioEngine = AVAudioEngine()
        self.audioEngine = audioEngine
        let inputNode = audioEngine.inputNode
        let inputFormat = inputNode.inputFormat(forBus: 0)
        let converter = AVAudioConverter(from: inputFormat, to: outputFormat)!
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) {
            (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
            do {
                let inputBlock: AVAudioConverterInputBlock = { _, outStatus in
                    outStatus.pointee = AVAudioConverterInputStatus.haveData
                    return buffer
                }
                let frameCapacity =
                    AVAudioFrameCount(self.outputFormat.sampleRate) * buffer.frameLength
                    / AVAudioFrameCount(buffer.format.sampleRate)
                let outputBuffer = AVAudioPCMBuffer(
                    pcmFormat: self.outputFormat,
                    frameCapacity: frameCapacity
                )!
                var error: NSError?
                converter.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock)
                if let error = error {
                    throw error
                } else {
                    try self.recordedFile?.write(from: outputBuffer)
                }
            } catch {
                print(error)
            }
        }
    }

    private func releaseAudioEngine() {
        if let audioEngine = self.audioEngine {
            audioEngine.inputNode.removeTap(onBus: 0)
            audioEngine.stop()
        }
        audioEngine = nil
    }
}
Besides that, the record module works normally; it is just the configuration change that it does not handle well.
I understand that when the configuration changes, I need to reinitialize the audio engine to get the correct input format (since the new configuration/audio device can have a different sample rate and so on). If I don't do that, the app also crashes, perhaps due to the format mismatch.
AVAudioRecorder is not an option for me.
Thank you for your help.
Post not yet marked as solved
:(
We are currently in the process of developing a video calling app using WebRTC.
We initiate one-to-one video calls with the AVAudioSession configured as follows:
do {
    if audioSession.category != .playAndRecord {
        try audioSession.setCategory(
            AVAudioSession.Category.playAndRecord,
            options: [
                .defaultToSpeaker
            ]
        )
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    }
    if audioSession.mode != .videoChat {
        try audioSession.setMode(.videoChat)
    }
} catch {
    logger.error(msg: "AVAudioSession: \(error.localizedDescription)")
}
After initiating a video call, we recorded this app's video call using the iOS default screen recording feature.
As a result, the recorded video includes system audio.
However, iOS/iPadOS apps with similar features (Zoom, Skype, Slack) do not include audio in their recordings.
Why does this difference occur?
Is this behavior a security feature of iOS, and are there specific conditions required?
Is there a need for some sort of configuration in AVAudioSession?
additional :(
I also reached out to Apple Developer Technical Support, and they responded, "We were able to reproduce it, but since we don't understand the issue, we will investigate it."
What's that about...
Post not yet marked as solved
I am creating an app where you can record a video and listen to music in the background. At the top of my viewDidLoad I set the AVAudioSession Category to .playAndRecord
let audioSession = AVAudioSession.sharedInstance()
AVCaptureSession().automaticallyConfiguresApplicationAudioSession = false
do {
    try audioSession.setCategory(AVAudioSession.Category.playAndRecord,
                                 options: [.mixWithOthers, .allowAirPlay, .allowBluetoothA2DP])
    try audioSession.setActive(true)
} catch {
    print("error trying to record and play audio")
}
However, when I do this, the audio cuts out for a second or less at app open and app close. I would like the audio to continue playing without cutting out. Is there anything I can do to ensure the audio continues to play?
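Not an authoritative fix, but one thing worth double-checking in the snippet above: automaticallyConfiguresApplicationAudioSession is set on a throwaway AVCaptureSession() instance, not on the session the app actually runs, so the real capture session may still reconfigure (and briefly interrupt) the shared audio session when it starts and stops. A sketch, assuming the app keeps its capture session in a captureSession property:

```swift
import AVFoundation

// Hypothetical property standing in for the app's real capture session.
let captureSession = AVCaptureSession()

// Disable automatic audio-session management on the *same* instance that
// gets configured and started, before calling startRunning().
captureSession.automaticallyConfiguresApplicationAudioSession = false

let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(.playAndRecord,
                                 options: [.mixWithOthers, .allowAirPlay, .allowBluetoothA2DP])
    try audioSession.setActive(true)
} catch {
    print("error configuring audio session: \(error)")
}
```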
Post not yet marked as solved
Hi There,
I am trying to record a meeting and upload it to an AWS server. The recording is in .m4a format and the upload request is a URLSession request.
The following code works perfectly for recordings under 15 minutes, but for longer recordings it gets stuck.
Could you please help me out in this?
func startRecording() {
    let audioURL = getAudioURL()
    let audioSettings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 12000,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    do {
        audioRecorder = try AVAudioRecorder(url: audioURL, settings: audioSettings)
        audioRecorder.delegate = self
        audioRecorder.record()
    } catch {
        finishRecording(success: false)
    }
}

func uploadRecordedAudio() {
    let _ = videoURL.startAccessingSecurityScopedResource()
    let input = UploadVideoInput(signedUrl: signedUrlResponse, videoUrl: videoURL, fileExtension: "m4a")
    self.fileExtension = "m4a"
    uploadService.uploadFile(videoUrl: videoURL, input: input)
    videoURL.stopAccessingSecurityScopedResource()
}

func uploadFileWithMultipart(endPoint: UploadEndpoint) {
    let urlRequest = endPoint.urlRequest
    uploadTask = URLSession.shared.uploadTask(withStreamedRequest: urlRequest)
    uploadTask?.delegate = self
    uploadTask?.resume()
}
Post not yet marked as solved
Hello,
I'm facing an issue with Xcode 15 and iOS 17: it seems impossible to get AVAudioEngine's audio input node to work on the simulator.
inputNode has a 0-channel, 0 kHz input format, and
connecting the input node to any other node, or installing a tap on it, fails systematically.
What we tested:
Everything works fine on iOS simulators <= 16.4, even with Xcode 15.
Nothing works on iOS simulator 17.0 on Xcode 15.
Everything works fine on iOS 17.0 device with Xcode 15.
More details on this here: https://github.com/Fesongs/InputNodeFormat
Any idea on this? Something I'm missing?
Thanks for your help 🙏
Tom
PS: I filed a bug on Feedback Assistant, but it usually takes ages to get any answer so I'm also trying here 😉
I am creating a camera app where I would like music from another app (Apple Music, Spotify, etc.) to continue playing once the app is opened. Currently I am using .mixWithOthers to do this in my viewDidLoad.
let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(AVAudioSession.Category.playback, options: [.mixWithOthers])
    try audioSession.setActive(true)
} catch {
    print("error trying to record and play audio")
}
However, I am running into an issue where music only plays if you resume playback after you start recording a video. Otherwise, when you open the app, the music stops as soon as the preview appears. Interestingly, if you start playing music while recording, the music continues to play in the preview view once you stop. If you close the app (not force close) and reopen it, music playback continues as expected; once you force close the app, it returns to the original behavior. I've tried to research this but haven't found anything. Any help is appreciated; let me know if more details are needed.
Post not yet marked as solved
There is a method setPreferredInput in AVAudioSession which can be used to select a different input device. Is there any similar function, like a "setPreferredOutput", so that in my app I can select a specific audio output device to play audio?
I do not want the user to change it through system interfaces (such as the Control Center), but via logic inside the app.
Thanks!
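For context (and as far as I know), AVAudioSession has no setPreferredOutput counterpart; apart from overrideOutputAudioPort(.speaker), output routing follows the system's route policy. The closest in-app option is to present the system route picker from your own UI with AVKit's AVRoutePickerView, though the user still makes the final choice — a sketch:

```swift
import AVKit
import UIKit

// Present the system output-route picker from inside the app.
// The user picks the route; there is no public API (that I know of)
// to force a specific output device programmatically.
func addRoutePicker(to view: UIView) {
    let picker = AVRoutePickerView(frame: CGRect(x: 0, y: 0, width: 44, height: 44))
    picker.prioritizesVideoDevices = false // show audio routes first
    view.addSubview(picker)
}
```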
Post not yet marked as solved
Hi,
Is it technically possible to stream an external BT device's microphone input to the iPhone's speaker output?
Post not yet marked as solved
How can I record audio in a keyboard extension? I've enabled microphone support by enabling "RequestsOpenAccess". When I try to record, I get the error below in the console. This doesn't make sense as Apple's docs seem to say that microphone access is allowed with Full Keyboard Access. What is the point of enabling the microphone if the app cannot access the data from the microphone?
-CMSUtilities- CMSUtility_IsAllowedToStartRecording: Client sid:0x2205e, XXXXX(17965), 'prim' with PID 17965 was NOT allowed to start recording because it is an extension and doesn't have entitlements to record audio.
Post not yet marked as solved
I was playing a PCM file (24 kHz, 16-bit) with AudioQueue, mixed with another sound (192 kHz, 24-bit) called sound2.
The AVAudioSession is set up as:
category (AVAudioSessionCategoryPlayback),
options (AVAudioSessionCategoryOptionMixWithOthers | AVAudioSessionCategoryOptionDuckOthers)
When the PCM plays, sound2 should simply be ducked to a lower volume. But there is a complete mute lasting about 0.5 seconds as sound2 ducks; only after that 0.5 s of silence does the ducked sound come through.
This only happens with sound2 at that quality (192 kHz, 24-bit); if sound2's quality is lower, everything is OK.
Post not yet marked as solved
Basically for this iPhone app I want to be able to record from either the built in microphone or from a connected USB audio device while simultaneously playing back processed audio on connected AirPods. It's a pretty simple AVAudioEngine setup that includes a couple of effects units. The category is set to .playAndRecord with the .allowBluetooth and .allowBluetoothA2DP options added. With no attempts to set the preferred input and AirPods connected, the AirPods mic will be used and output also goes to the AirPods. If I call setPreferredInput to either built in mic or a USB audio device I will get input as desired but then output will always go to the speaker. I don't really see a good explanation for this and overrideOutputAudioPort does not really seem to have suitable options.
Testing this on iPhone 14 Pro
Post not yet marked as solved
I work on a video conferencing application, which makes use of AVAudioEngine and the videoChat AVAudioSession.Mode
This past Friday, an internal user reported an "audio cutting in and out" issue with their new iPhone 14 Pro, and I was able to reproduce the issue later that day on my iPhone 14 Pro Max. No other iOS devices running iOS 16 are exhibiting this issue.
I have narrowed down the root cause to the videoChat AVAudioSession.Mode after changing line 53 of the ViewController.swift file in Apple's "Using Voice Processing" sample project (https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing) from:
try session.setCategory(.playAndRecord, options: .defaultToSpeaker)
to
try session.setCategory(.playAndRecord, mode: .videoChat, options: .defaultToSpeaker)
This only causes issues on my iPhone 14 Pro Max device, not on my iPhone 13 Pro Max, so it seems specific to the new iPhones only.
I am also seeing the following logged to the console using either device, which appears to be specific to iOS 16, but am not sure if it is related to the videoChat issue or not:
2022-09-19 08:23:20.087578-0700 AVEchoTouch[2388:1474002] [as] ATAudioSessionPropertyManager.mm:71 Invalid input size for property 1684431725
2022-09-19 08:23:20.087605-0700 AVEchoTouch[2388:1474002] [as] ATAudioSessionPropertyManager.mm:225 Invalid input size for property 1684431725
I am assuming 1684431725 is 'dfcm' but I am not sure what Audio Session Property that might be.