AVFoundation

Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.

Posts under AVFoundation tag

200 Posts

AVFoundation: Strange error while trying to switch camera formats with the touch of a single button.
I'm getting the following output from my iOS app's debug console; note the error on the last line:

Capture format keys: ["600x600@25", "1200x1200@5", "1200x1200@30", "1600x1200@2", "1600x1200@30", "3200x2400@15", "3200x2400@2", "600x600@30"]
Start capture session for 1600x1200@30: <AVCaptureSession: 0x303c70190 [AVCaptureSessionPresetPhoto]>
Stop capture session: <AVCaptureSession: 0x303c70190 [AVCaptureSessionPresetInputPriority]>
    <AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCaptureVideoDataOutput: 0x303edf1e0>
    <AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCapturePhotoOutput: 0x303ee3e20>
    <AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCaptureVideoPreviewLayer: 0x3030b33c0>
Start capture session for 600x600@30: <AVCaptureSession: 0x303c70190 [AVCaptureSessionPresetInputPriority]>
    <AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCaptureVideoDataOutput: 0x303edf1e0>
    <AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCapturePhotoOutput: 0x303ee3e20>
    <AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCaptureVideoPreviewLayer: 0x3030b33c0>
<<<< FigSharedMemPool >>>> Fig assert: "blkHdr->useCount > 0" at (FigSharedMemPool.c:591) - (err=0)

This is in response to trying to switch capture formats between the two key modes that my application must use regularly. Below you will find the functions I use to start and stop capturing frames to my preview layer. I have a UI with three buttons: Off, Mode 1, and Mode 2. If I tap the Off button in between tapping Mode 1 or Mode 2, all is well; I can do this all day. However, if I attempt to jump between Mode 1 and Mode 2 directly, I run into issues. I added a layer of software between the UI and the underlying functions so that I could make sure to turn off the camera before turning it back on in the opposite mode, and was surprised to get this output. Can someone at Apple please tell me what is going on here? For the rest of you, if anyone knows the magic incantation to safely switch camera formats, please paste that code here. Thanks. I've included my code below.

func start(for deviceFormat: String) {
    sessionQueue.async { [unowned self] in
        logger.debug("Start capture session for \(deviceFormat): \(self.captureSession)")
        do {
            guard let format = formatDict[deviceFormat] else { throw Error.captureFormatNotFound }
            captureSession.stopRunning()
            captureSession.beginConfiguration()      // May not be necessary.
            try captureDevice.lockForConfiguration() // Without this we get an error.
            captureDevice.activeFormat = format
            captureDevice.unlockForConfiguration()   // Matching function: Necessary.
            captureSession.commitConfiguration()     // Matching function: May not be necessary.
            captureSession.startRunning()
        } catch {
            logger.fault("Failed to start camera: \(error.localizedDescription)")
            errorPublisher.send(error)
        }
    }
}

func stop() {
    sessionQueue.async { [unowned self] in
        logger.debug("Stop capture session: \(self.captureSession)")
        captureSession.stopRunning()
    }
}
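For anyone else chasing the same FigSharedMemPool assert, here is a minimal sketch of a format switch that does not stop and restart the session between modes. The names sessionQueue, captureSession, captureDevice, formatDict, and logger are assumed to match the poster's setup, and this is an illustration rather than a confirmed fix:

func switchFormat(to key: String) {
    sessionQueue.async {
        guard let format = formatDict[key] else { return }
        do {
            // Changing activeFormat only requires locking the device; the session can
            // keep running, which avoids the stop/start churn between Mode 1 and Mode 2.
            try captureDevice.lockForConfiguration()
            captureDevice.activeFormat = format
            captureDevice.unlockForConfiguration()
        } catch {
            logger.fault("Format switch failed: \(error.localizedDescription)")
        }
    }
}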
Replies: 0 · Boosts: 0 · Views: 100 · Activity: 4d
AirPlay stuck on buffering state
We are facing a weird behaviour when implementing the AirPlay functionality of our iOS app. When we test our app on Apple TV devices everything works fine. On some smart TVs with a specific AirPlay receiver version (more details below), the stream gets stuck in the buffering state immediately after switching to AirPlay mode. On other smart TVs, with a different AirPlay receiver version, everything works as expected. The interesting part is that other free or DRM-protected streams work fine on all devices.

Smart TVs on which AirPlay works fine:
AirPlay Version -> 25.06 (19.9.9)

Smart TVs on which AirPlay gets stuck in the buffering state:
AirPlayReceiverSDKVersion -> 3.3.0.54
AirPlayReceiverAppVersion -> 53.122.0

You can reproduce this issue using the following stream URL:
https://tr.vod.cdn.cosmotetvott.gr/v1/310/668/1674288197219/1674288197219.ism/.m3u8?qual=a&ios=1&hdnts=st=1713194669\~exp=1713237899\~acl=\*/310/668/1674288197219/1674288197219.ism/\*\~id=cab757e3-9922-48a5-988b-3a4f5da368b6\~data=de9bbd0100a8926c0311b7dbe5389f7d91e94a199d73b6dc75ea46a4579769d7~hmac=77b648539b8f3a823a7d398d69e5dc7060632c29

If this link expires, let me know and I will send a new one for testing. Could you please give us any specific suggestions as to what causes this issue on those specific streams?
Replies: 3 · Boosts: 0 · Views: 498 · Activity: 1w
AVPlayer CoreMediaErrorDomain -12642
Hi everyone, I am having a problem with AVPlayer when I try to play some videos. The video starts for a few seconds, but immediately after that I see a black screen, and the console shows the following error:

<__NSArrayM 0x14dbf9f30>(
{
    StreamPlaylistError = "-12314";
    comment = "have audio audio-aacl-54 in STREAMINF without EXT-X-MEDIA audio group";
    date = "2024-05-13 20:46:19 +0000";
    domain = CoreMediaErrorDomain;
    status = "-12642";
    uri = "http://127.0.0.1:8080/master.m3u8";
},
{
    "c-conn-type" = 1;
    "c-severity" = 2;
    comment = "Playlist parse error";
    "cs-guid" = "871C1871-D566-4A3A-8465-2C58FDC18A19";
    date = "2024-05-13 20:46:19 +0000";
    domain = CoreMediaErrorDomain;
    status = "-12642";
    uri = "http://127.0.0.1:8080/master.m3u8";
}
)
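For readers hitting the same -12642/-12314 pair: the first comment suggests the master playlist references an audio rendition group ("audio-aacl-54") in an EXT-X-STREAM-INF AUDIO attribute without declaring a matching EXT-X-MEDIA entry. A minimal sketch of what a conforming master playlist can look like is below; the URIs, bandwidth, and codec strings are placeholders, not values from the actual stream:

#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio-aacl-54",NAME="English",DEFAULT=YES,AUTOSELECT=YES,URI="audio/prog_index.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=2000000,CODECS="avc1.640028,mp4a.40.2",AUDIO="audio-aacl-54"
video/prog_index.m3u8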
Replies: 1 · Boosts: 0 · Views: 206 · Activity: 1w
HEIC Image generation broken for iOS 17.5 simulator?
This code, which writes UIImage data as HEIC, works in the iOS simulator with iOS < 17.5:

import AVFoundation
import UIKit

extension UIImage {
    public var heic: Data? { heic() }

    public func heic(compressionQuality: CGFloat = 1) -> Data? {
        let mutableData = NSMutableData()
        guard let destination = CGImageDestinationCreateWithData(mutableData, AVFileType.heic as CFString, 1, nil),
              let cgImage = cgImage
        else { return nil }
        let options: NSDictionary = [
            kCGImageDestinationLossyCompressionQuality: compressionQuality,
            kCGImagePropertyOrientation: cgImageOrientation.rawValue,
        ]
        CGImageDestinationAddImage(destination, cgImage, options)
        guard CGImageDestinationFinalize(destination) else { return nil }
        return mutableData as Data
    }

    public var isHeicSupported: Bool {
        (CGImageDestinationCopyTypeIdentifiers() as! [String]).contains("public.heic")
    }

    var cgImageOrientation: CGImagePropertyOrientation { .init(imageOrientation) }
}

extension CGImagePropertyOrientation {
    init(_ uiOrientation: UIImage.Orientation) {
        switch uiOrientation {
        case .up: self = .up
        case .upMirrored: self = .upMirrored
        case .down: self = .down
        case .downMirrored: self = .downMirrored
        case .left: self = .left
        case .leftMirrored: self = .leftMirrored
        case .right: self = .right
        case .rightMirrored: self = .rightMirrored
        @unknown default: fatalError()
        }
    }
}

But with the iOS 17.5 simulator it seems to be broken. The call to CGImageDestinationFinalize writes this error to the console:

writeImageAtIndex:962: *** CMPhotoCompressionSessionAddImage: err = kCMPhotoError_UnsupportedOperation [-16994] (codec: 'hvc1')

On physical devices it still seems to work. Is there any known workaround for the iOS simulator?
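I'm not aware of a documented workaround for the simulator's missing hvc1 encoder. One defensive option, sketched here under the assumption that JPEG output is acceptable for simulator runs, is to fall back when HEIC encoding is unavailable or fails. It reuses the extension from the post above:

// Minimal sketch: prefer HEIC, fall back to JPEG when the encoder is unavailable
// (for example in a simulator without hvc1 support). Callers should expect JPEG
// data on the fallback path rather than HEIC.
func encodedImageData(for image: UIImage, quality: CGFloat = 0.9) -> Data? {
    if image.isHeicSupported, let heicData = image.heic(compressionQuality: quality) {
        return heicData
    }
    return image.jpegData(compressionQuality: quality)
}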
Replies: 1 · Boosts: 1 · Views: 136 · Activity: 1w
How can I disable Camera Reaction Effects on macOS
I have an app that has the camera continuously running, as it is doing its own AI, so I have zero need for Apple's video effects, and I am seeing a 200% performance hit after updating to Sonoma. The video effects are the "heaviest stack trace" when profiling my app with the Instruments CPU profiler (see below). Is forcing your software onto developers not something Microsoft would do? Is there really no way to opt out?

6671 Jamscape_exp (23038)
2697 start_wqthread
2697 _pthread_wqthread
2183 _dispatch_workloop_worker_thread
2156 _dispatch_root_queue_drain_deferred_wlh
2153 _dispatch_lane_invoke
2146 _dispatch_lane_serial_drain
1527 _dispatch_client_callout
1493 _dispatch_call_block_and_release
777 __88-[PTHandGestureDetector initWithFrameSize:asyncInitQueue:externalHandDetectionsEnabled:]_block_invoke
777 -[VCPHandGestureVideoRequest initWithOptions:]
508 -[VCPHandGestureClassifier initWithMinHandSize:]
508 -[VCPCoreMLRequest initWithModelName:]
506 +[MLModel modelWithContentsOfURL:configuration:error:]
506 -[MLModelAsset modelWithError:]
506 -[MLModelAsset load:]
506 +[MLLoader loadModelFromAssetAtURL:configuration:error:]
506 +[MLLoader _loadModelFromAssetAtURL:configuration:loaderEvent:error:]
505 +[MLLoader _loadModelFromArchive:configuration:loaderEvent:useUpdatableModelLoaders:error:]
505 +[MLLoader _loadWithModelLoaderFromArchive:configuration:loaderEvent:useUpdatableModelLoaders:error:]
505 +[MLLoader _loadModelFromArchive:configuration:modelVersion:compilerVersion:loaderEvent:useUpdatableModelLoaders:loadingClasses:error:]
505 +[MLLoader _loadModelWithClass:fromArchive:modelVersionInfo:compilerVersionInfo:configuration:error:]
445 +[MLMultiFunctionProgramEngine loadModelFromCompiledArchive:modelVersionInfo:compilerVersionInfo:configuration:error:]
333 -[MLMultiFunctionProgramEngine initWithProgramContainer:configuration:error:]
333 -[MLNeuralNetworkEngine initWithContainer:configuration:error:]
318 -[MLNeuralNetworkEngine _setupContextAndPlanWithConfiguration:usingCPU:reshapeWithContainer:error:]
313 -[MLNeuralNetworkEngine _addNetworkToPlan:error:]
313 espresso_plan_add_network
313 EspressoLight::espresso_plan::add_network(char const*, espresso_storage_type_t)
313 EspressoLight::espresso_plan::add_network(char const*, espresso_storage_type_t, std::__1::shared_ptrEspresso::net)
313 Espresso::load_network(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator> const&, std::__1::shared_ptrEspresso::abstract_context const&, Espresso::compute_path, bool)
235 Espresso::reload_network_on_context(std::__1::shared_ptrEspresso::net const&, std::__1::shared_ptrEspresso::abstract_context const&, Espresso::compute_path)
226 Espresso::load_and_shape_network(std::__1::shared_ptrEspresso::SerDes::generic_serdes_object const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator> const&, std::__1::shared_ptrEspresso::abstract_context const&, Espresso::network_shape const&, Espresso::compute_path, std::__1::shared_ptrEspresso::blob_storage_abstract const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator> const&)
214 Espresso::load_network_layers_internal(std::__1::shared_ptrEspresso::SerDes::generic_serdes_object, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator> const&, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator> const&, std::__1::shared_ptrEspresso::abstract_context const&, Espresso::network_shape const&, std::__1::basic_istream<char, std::__1::char_traits>, Espresso::compute_path, bool, std::__1::shared_ptrEspresso::blob_storage_abstract const&)
208 Espresso::run_dispatch_v2(std::__1::shared_ptrEspresso::abstract_context, std::__1::shared_ptrEspresso::net, std::__1::vector<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object, std::__1::allocator<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object>> const&, Espresso::network_shape const&, Espresso::compute_path const&, std::__1::basic_istream<char, std::__1::char_traits>)
141 try_dispatch(std::__1::shared_ptrEspresso::abstract_context, std::__1::shared_ptrEspresso::net, std::__1::vector<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object, std::__1::allocator<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object>> const&, Espresso::network_shape const&, Espresso::compute_path const&, std::__1::basic_istream<char, std::__1::char_traits>, Espresso::platform const&, Espresso::compute_path const&)
131 Espresso::get_net_info_ir(std::__1::shared_ptrEspresso::abstract_context, std::__1::shared_ptrEspresso::net, std::__1::vector<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object, std::__1::allocator<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object>> const&, Espresso::network_shape const&, Espresso::compute_path const&, Espresso::platform const&, Espresso::compute_path const&, std::__1::shared_ptrEspresso::cpu_context_transfer_algo_t&, std::__1::shared_ptrEspresso::net_info_ir_t&, std::__1::shared_ptrEspresso::kernels_validation_status_t&)
131 Espresso::cpu_context_transfer_algo_t::create_net_info_ir(std::__1::vector<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object, std::__1::allocator<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object>> const&, std::__1::shared_ptrEspresso::abstract_context, Espresso::network_shape const&, Espresso::compute_path, std::__1::shared_ptrEspresso::net_info_ir_t)
120 Espresso::cpu_context_transfer_algo_t::check_all_kernels_availability_on_context(std::__1::vector<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object, std::__1::allocator<std::__1::shared_ptrEspresso::SerDes::generic_serdes_object>> const&, std::__1::shared_ptrEspresso::abstract_context&, Espresso::compute_path, std::__1::shared_ptrEspresso::net_info_ir_t&)
120 is_kernel_available_on_engine(unsigned long, std::__1::shared_ptrEspresso::base_kernel, Espresso::kernel_info_t const&, std::__1::shared_ptrEspresso::SerDes::generic_serdes_object, std::__1::shared_ptrEspresso::abstract_context, Espresso::compute_path, std::__1::shared_ptrEspresso::net_info_ir_t, std::__1::shared_ptrEspresso::kernels_validation_status_t)
83 Espresso::ANECompilerEngine::mix_reshape_kernel::is_valid_for_engine(std::__1::shared_ptrEspresso::kernels_validation_status_t, Espresso::base_kernel::validate_for_engine_args_t const&) const
45 int ValidateLayer<ANECReshapeLayerDesc, ZinIrReshapeUnit, ZinIrReshapeUnitInfo, ANECReshapeLayerDescAlternate>(void, ANECReshapeLayerDesc const*, ANECTensorDesc const*, unsigned long, unsigned long*, ANECReshapeLayerDescAlternate**, ANECTensorValueDesc const*)
45 void ValidateLayer_Impl<ANECReshapeLayerDesc, ZinIrReshapeUnit, ZinIrReshapeUnitInfo, ANECReshapeLayerDescAlternate>(void*, ANECReshapeLayerDesc const*, ANECTensorDesc const*, unsigned long, unsigned long*, ANECReshapeLayerDescAlternate**, ANECTensorValueDesc const*)
(...)
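As far as I know there is no public API to switch Reaction effects off programmatically; the toggle lives in the user's Control Center video-effects menu. What an app can do is detect the state and prompt the user, roughly as sketched below (assuming macOS 14 and a current SDK):

import AVFoundation

// Sketch: these class properties report the user's Control Center settings; they are
// read-only, so the most an app can do is detect the state and ask the user to change it.
if #available(macOS 14.0, *) {
    if AVCaptureDevice.reactionEffectsEnabled {
        print("Reaction effects are on; gestures enabled: \(AVCaptureDevice.reactionEffectGesturesEnabled)")
        // Show UI asking the user to disable them via the camera menu bar item.
    }
}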
Replies: 3 · Boosts: 0 · Views: 186 · Activity: 1w
Getting 561015905 while trying to initiate recording when the app is in background
I'm trying to start and stop recording periodically while my app is in the background. I implemented it using Timer and DispatchQueue. However, whenever I try to initiate the recording I get this error. The issue does not exist in the foreground. Here is the current state of my app and configuration. I have added the "Background Modes" capability in Signing & Capabilities, and I checked Audio and Self Care. Here is my Info.plist:

<plist version="1.0">
<dict>
    <key>UIBackgroundModes</key>
    <array>
        <string>audio</string>
    </array>
    <key>WKBackgroundModes</key>
    <array>
        <string>self-care</string>
    </array>
</dict>
</plist>

I also used AVAudioSession with the .record category and activated it. Here is the code snippet:

func startPeriodicMonitoring() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(AVAudioSession.Category.record, mode: .default, options: [.mixWithOthers])
        try session.setActive(true, options: [])
        print("Session Activated")
        print(session)
        // Start recording.
        measurementTimer = Timer.scheduledTimer(withTimeInterval: measurementInterval, repeats: true) { _ in
            self.startMonitoring()
            DispatchQueue.main.asyncAfter(deadline: .now() + self.recordingDuration) {
                self.stopMonitoring()
            }
        }
        measurementTimer?.fire() // Start immediately
    } catch let error {
        print("Unable to set up the audio session: \(error.localizedDescription)")
    }
}

Any thoughts on this? I have tried most of the obvious approaches but the issue is still there.
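A small aside that may help with diagnosis: AVAudioSession error codes are four-character codes packed into an integer, so the raw number can be decoded to see which case it is. The helper below is illustrative, not part of the poster's project; 561015905 decodes to "!pla", which appears to correspond to AVAudioSession.ErrorCode.cannotStartPlaying rather than a recording-specific error.

// Illustrative helper: turn an OSStatus-style error code into its four-character form,
// e.g. 561015905 -> "!pla".
func fourCharCode(from status: Int) -> String {
    let value = UInt32(bitPattern: Int32(truncatingIfNeeded: status))
    let bytes = [24, 16, 8, 0].map { UInt8((value >> $0) & 0xFF) }
    return String(bytes: bytes, encoding: .ascii) ?? "\(status)"
}

print(fourCharCode(from: 561015905)) // "!pla"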
Replies: 3 · Boosts: 0 · Views: 162 · Activity: 1w
AVPlayer and TLS 1.3 compliance for low latency HLS live stream
Hi guys, I'm investigating a failure to play a low-latency Live HLS stream and I'm getting the following error:

(String) “<AVPlayerItemErrorLog: 0x30367da10>\n#Version: 1.0\n#Software: AppleCoreMedia/1.0.0.21L227 (Apple TV; U; CPU OS 17_4 like Mac OS X; en_us)\n#Date: 2024/05/17 13:11:46.046\n#Fields: date time uri cs-guid s-ip status domain comment cs-iftype\n2024/05/17 13:11:16.016 https://s2-h21-nlivell01.cdn.xxxxxx.***/..../xxxx.m3u8 -15410 \“CoreMediaErrorDomain\” \“Low Latency: Server must support http2 ECN and SACK\” -\n2024/05/17 13:11:17.017 -15410 \“CoreMediaErrorDomain\” \“Invalid server blocking reload behavior for low latency\” -\n2024/05/17 13:11:17.017

The stream works when loading from a dev server with TLS 1.3, but fails on CDN servers with TLS 1.2. Regular Live streams and VOD streams work normally on those CDN servers. I tried to configure TLSv1.2 in Info.plist, but that didn't help. When running nscurl --ats-diagnostics --verbose it passes for the server with TLS 1.3, but fails for the CDN servers with TLS 1.2 with the error Code=-1005 "The network connection was lost." Is TLS 1.3 required or just recommended? Referring to https://developer.apple.com/documentation/http-live-streaming/enabling-low-latency-http-live-streaming-hls and https://datatracker.ietf.org/doc/html/draft-pantos-hls-rfc8216bis Is it possible to configure AVPlayer to skip ECN and SACK validation? Thanks.
Replies: 0 · Boosts: 0 · Views: 175 · Activity: 2w
Random crash from AVFAudio library
Hi everyone! I'm getting random crashes when I'm using the Speech Recognizer functionality in my app. This is an old bug (reported for 8 years on the Apple Forums) and I would really appreciate it if anyone from Apple could find a fix for these crashes. Could anyone also please help me understand what I could do to keep the Speech Recognizer functionality available in my app while avoiding these crashes (whether with another native library or a CocoaPod library)? Here is my code and also the crash log for it.

Code:

func startRecording() {
    startStopRecordBtn.setImage(UIImage(#imageLiteral(resourceName: "microphone_off")), for: .normal)
    if UserDefaults.standard.bool(forKey: Constants.darkTheme) {
        commentTextView.textColor = .white
    } else {
        commentTextView.textColor = .black
    }
    commentTextView.isUserInteractionEnabled = false
    recordingLabel.text = Constants.recording
    if recognitionTask != nil {
        recognitionTask?.cancel()
        recognitionTask = nil
    }
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSession.Category.record)
        try audioSession.setMode(AVAudioSession.Mode.measurement)
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    } catch {
        showAlertWithTitle(message: Constants.error)
    }
    recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    let inputNode = audioEngine.inputNode
    guard let recognitionRequest = recognitionRequest else {
        fatalError(Constants.error)
    }
    recognitionRequest.shouldReportPartialResults = true
    recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
        var isFinal = false
        if result != nil {
            self.commentTextView.text = result?.bestTranscription.formattedString
            isFinal = (result?.isFinal)!
        }
        if error != nil || isFinal {
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)
            self.recognitionRequest = nil
            self.recognitionTask = nil
            self.startStopRecordBtn.isEnabled = true
        }
    })
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] (buffer: AVAudioPCMBuffer, when: AVAudioTime) in // CRASH HERE
        self?.recognitionRequest?.append(buffer)
    }
    audioEngine.prepare()
    do {
        try audioEngine.start()
    } catch {
        showAlertWithTitle(message: Constants.error)
    }
}

Here is the crash log:

Thanks very much for reading this!
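Without the crash log attached it is hard to be definitive, but one frequent AVAudioEngine tap crash happens when the input node reports a 0 Hz / 0-channel format, for example when the audio session is not yet fully active. A small defensive guard, offered purely as a sketch around the poster's existing code, looks like this:

let recordingFormat = inputNode.outputFormat(forBus: 0)
// Sketch: bail out instead of installing a tap with an invalid hardware format,
// which is a common source of AVFAudio "required condition is false" crashes.
guard recordingFormat.sampleRate > 0, recordingFormat.channelCount > 0 else {
    showAlertWithTitle(message: Constants.error)
    return
}
inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] buffer, _ in
    self?.recognitionRequest?.append(buffer)
}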
Replies: 2 · Boosts: 0 · Views: 186 · Activity: 2w
Background AVExportSession for UHD Videos
First off - I have read and fully understand this post - Apple doesn't want us abusing users' hardware so as to maximize the quality of experience across apps for their customers. I am 100% on board with this philosophy, I understand all design decisions and agree with them.

To the problem: I have an app that takes photo assets, processes them for network (exportSession.shouldOptimizeForNetworkUse = true), and then uploads them. Some users have been having trouble; did some digging, and they're trying to upload 4K 60FPS videos. I think that is ridiculous, but it's not my place. The issue is that the export time for a 4K60FPS video that is ~40s long can be as long as 2m. So if they select a video to upload and then background the app, that upload will ALWAYS fail because the processing fails (I have BG uploads working just fine).

My understanding is that by default I have 30s to run things in the background. I can use UIApplication.pleasegivemebackgroundtime to request up to 30 more seconds. That is obviously not enough. So my second option is BGProcessingTask - but that's not guaranteed to run ever. Which I understand and agree with, but when the user selects a video while the app is in the foreground the expectation is that it immediately begins processing. So I can't use a BGProcessingTask?

Just wondering what the expected resolution here is. I run tasks, beg for time, and if it doesn't complete I queue up a BGTask that may or may not ever run? That seems ****** for a user - they start the process, see it begin, but then if the video is too big they just have to deal with it possibly just not happening until later? They open up the app and the progress bar has magically regressed. That would infuriate me. Is there another option I'm not seeing? Some way to say "this is a large background task that will ideally take 30-60s, but it may take up to ~5-7m. It is user-facing though so it must start right away"? (I have never seen 5-7m, but 1-2m is common.)

Another option is to just upload the full 4K60FPS behemoth and do the processing on my end, but that seems worse for the user? We're gobbling upload bandwidth for something we're going to downsample anyway. Seems worse to waste data than battery (since that's the tradeoff at the end of the day). Anyway, just wondering what the right way to do this is. Trivially reproducible - record a 1m 4K60FPS video, create an export session, export, background, enjoy failure.
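For what it's worth, here is a sketch of the usual pattern of pairing the export with a background task assertion so a foreground-started export keeps whatever extra runtime the system is willing to grant. It assumes an already-configured AVAssetExportSession and does not solve the multi-minute case the poster describes:

import AVFoundation
import UIKit

// Sketch: request background execution time around an already-configured export session.
// This buys roughly 30 extra seconds, not the minutes a long 4K60 export can need.
func exportWithBackgroundTime(_ session: AVAssetExportSession, completion: @escaping (Bool) -> Void) {
    var taskID = UIBackgroundTaskIdentifier.invalid
    taskID = UIApplication.shared.beginBackgroundTask(withName: "VideoExport") {
        // Expiration handler: the system is out of patience; cancel and clean up.
        session.cancelExport()
        UIApplication.shared.endBackgroundTask(taskID)
    }
    session.exportAsynchronously {
        completion(session.status == .completed)
        UIApplication.shared.endBackgroundTask(taskID)
    }
}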
Replies: 2 · Boosts: 0 · Views: 174 · Activity: 2w
The FOV is not consistent when taking a photo if stabilization is applied on the iPhone 15 Pro Max
Hi, I am developing an iOS mobile camera app. I noticed one issue related to user privacy: when AVCaptureVideoStabilizationModeStandard is set on an AVCaptureConnection whose session preset is 1920x1080, after using the system API to take a photo, the FOV of the photo will be bigger than the preview stream and it will show more content, especially with the iPhone 15 Pro Max rear camera. I think this inconsistency will cause a user privacy issue. Can you show me a solution if I don't want to turn the stabilization mode OFF? I tried other devices and the issue is not noticeable there, but on the iPhone 15 Pro Max it is very obvious. Any suggestions are appreciated.
Replies: 1 · Boosts: 0 · Views: 187 · Activity: 3w
Will the new iPad Pro support RAW capture?
Just watched the new product release, and I'm really hoping the new iPad Pro being advertised as the next creative tool for filmmakers and artists will finally allow RAW captures in the native Camera app or AVFoundation API (currently RAW available devices returns 0 on the previous iPad Pro). With all these fancy multicam camera features and camera hardware, I don't think it really takes that much to enable ProRAW and Action Mode on the software side of the iPad. Unless their strategy is to make us "shoot on iPhone and edit on iPad" (as implied in their video credits) which has been my workflow with the iPhone 15 and 2022 iPad Pro :( :(
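For reference, a minimal sketch of how RAW availability is usually probed with AVFoundation; on current iPads this list reportedly comes back empty, which is the poster's point:

import AVFoundation

// Sketch: query Apple ProRAW / Bayer RAW support on the current device.
// Note: the output must be added to a configured AVCaptureSession before
// availableRawPhotoPixelFormatTypes reports anything meaningful.
let photoOutput = AVCapturePhotoOutput()
print("ProRAW supported: \(photoOutput.isAppleProRAWSupported)")
print("RAW pixel formats: \(photoOutput.availableRawPhotoPixelFormatTypes)")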
Replies: 0 · Boosts: 0 · Views: 194 · Activity: 3w
HDR10 MVHEVC cannot play on Safari
Hi, I just generated an HDR10 MVHEVC file; the mediainfo output is below:

Color range : Limited
Color primaries : BT.2020
Transfer characteristics : PQ
Matrix coefficients : BT.2020 non-constant
Codec configuration box : hvcC+lhvC

Then I generate the segment files with the command below:

mediafilesegmenter --iso-fragmented -t 4 -f av_1 av_new_1.mov

Then I upload the segment files and prog_index.m3u8 to a web server. I find that I cannot play the HLS stream in Safari... The URL is http://ip/vod/prog_index.m3u8. I have checked that if I remove the Transfer characteristics : PQ tag when generating the MVHEVC file, run the same mediafilesegmenter command, and upload the files to the web server, the new version of the HLS stream does play in Safari... Is there any way to play HLS PQ video in Safari? Thanks.
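One thing worth checking, offered only as a guess since the generated playlists are not shown: the HLS authoring guidelines expect PQ content to be declared with a VIDEO-RANGE=PQ attribute on the variant in a master playlist, roughly like the placeholder below (the codec string and bandwidth are illustrative, not taken from the actual file):

#EXT-X-STREAM-INF:BANDWIDTH=8000000,CODECS="hvc1.2.4.L150.B0",VIDEO-RANGE=PQ
prog_index.m3u8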
Replies: 0 · Boosts: 0 · Views: 185 · Activity: 3w
AVCaptureDevice.setExposureModeCustom takes too long?
Hello, I am working on a fairly complex iPhone app that controls the front built-in wide angle camera. I need to take and display a sequence of photos that cover the whole range of focus values available. Here is how I do it:

call setExposureModeCustom to set the first lens position
wait for the completionHandler to be called back
capture a photo
do it again for the next lens position, etc.

This works fine, but it takes longer than I expected for the completionHandler to be called back. From what I've seen, the delay scales with the exposure duration. When I set the exposure duration to the max value:

on the iPhone 14 Pro, it takes about 3 seconds (3 times the max exposure)
on the iPhone 8, 1.3s (4 times the max exposure)

I was expecting a delay of two times the exposure duration: take a photo, throw one away while changing lens position, take the next photo, etc., but this takes more than that. I also tried the same thing with changing the ISO instead of the focus position and I get the same kind of delays. Also, I do not think the problem is linked to the way I process the images, because I get the same delay even if I do nothing with the output. Is there something I could do to make things go faster for this use case? Any input would be appreciated. Thanks.

I created a minimal testing app to reproduce the issue:

import Foundation
import AVFoundation

class Main: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let dispatchQueue = DispatchQueue(label: "VideoQueue", qos: .userInitiated)
    let session: AVCaptureSession
    let videoDevice: AVCaptureDevice
    var focus: Float = 0

    override init() {
        session = AVCaptureSession()
        session.beginConfiguration()
        session.sessionPreset = .photo
        videoDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)!
        super.init()
        let videoDeviceInput = try! AVCaptureDeviceInput(device: videoDevice)
        session.addInput(videoDeviceInput)
        let videoDataOutput = AVCaptureVideoDataOutput()
        if session.canAddOutput(videoDataOutput) {
            session.addOutput(videoDataOutput)
            videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
            videoDataOutput.setSampleBufferDelegate(self, queue: dispatchQueue)
        }
        session.commitConfiguration()
        dispatchQueue.async {
            self.startSession()
        }
    }

    func startSession() {
        session.startRunning()
        // lock max exposure duration
        try! videoDevice.lockForConfiguration()
        let exposure = videoDevice.activeFormat.maxExposureDuration.seconds * 0.5
        print("set max exposure", exposure)
        videoDevice.setExposureModeCustom(duration: CMTime(seconds: exposure, preferredTimescale: 1000), iso: videoDevice.activeFormat.minISO) { time in
            print("did set max exposure")
            self.changeFocus()
        }
        videoDevice.unlockForConfiguration()
    }

    func changeFocus() {
        let date = Date.now
        print("set focus", focus)
        try! videoDevice.lockForConfiguration()
        videoDevice.setFocusModeLocked(lensPosition: focus) { time in
            let dt = abs(date.timeIntervalSinceNow)
            print("did set focus - took:", dt, "frames:", dt / self.videoDevice.exposureDuration.seconds)
            self.next()
        }
        videoDevice.unlockForConfiguration()
    }

    func next() {
        focus += 0.02
        if focus > 1 {
            print("done")
            return
        }
        changeFocus()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        print("did receive video frame")
    }
}
Replies: 0 · Boosts: 0 · Views: 201 · Activity: 4w
Voice Processing in multiple apps simultaneously
Hi everyone! We are wondering whether it's possible to have two macOS apps use Voice Processing from Audio Engine at the same time, since we have had issues trying to do so. Specifically, our app seems to cut off the other app's input stream, but only when Voice Processing is enabled. We are developing a macOS app that records microphone input simultaneously with videoconference apps like Zoom. We are utilizing Voice Processing from Audio Engine as in this sample: https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing We have also noticed this behaviour in Safari when recording audio with the JavaScript Web Audio API, which also seems to use Voice Processing under the hood due to the echo cancellation. Any leads on this would be greatly appreciated! Thanks
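For context, a minimal sketch of the voice-processing setup in question (based on the linked sample, with error handling trimmed), so it is clear which API's cross-app behaviour is being asked about:

import AVFAudio

let engine = AVAudioEngine()
do {
    // Enabling voice processing routes the input through Apple's echo-cancellation path;
    // this is the call whose interaction between two simultaneously running apps is in question.
    try engine.inputNode.setVoiceProcessingEnabled(true)
} catch {
    print("Could not enable voice processing: \(error)")
}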
Replies: 0 · Boosts: 0 · Views: 207 · Activity: May ’24
Error on first photo captured after device orientation change
I am using AVFoundation to capture a photo. This was all working fine, then I realized all the photos were saving to the photo library in portrait mode. I wanted them to save in the orientation the device was in when the camera took the picture, much as the built-in Camera app does on iOS. So I added this code:

if let videoConnection = photoOutput.connection(with: .video), videoConnection.isVideoOrientationSupported {
    // From() is just a helper to get video orientations from the device orientation.
    videoConnection.videoOrientation = .from(UIDevice.current.orientation)
    print("Photo orientation set to \(videoConnection.videoOrientation).")
}

With this addition, the first photo taken after a device rotation logs this error in the debugger:

<<<< FigCaptureSessionRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSessionRemote.m:866) - (err=-12784)

Subsequent photos will not repeat the error. Once you rotate the device again, same behavior. Photos taken after the app loads, but before any rotations have been made, do not produce this error. I have tried many things, no dice. If I comment this code out it works without error, but of course the photos are all saved in portrait mode again.
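One avenue that might be worth trying, sketched here as an assumption rather than a known fix: on iOS 17 and later the orientation API was superseded by videoRotationAngle, driven by AVCaptureDevice.RotationCoordinator, which avoids mutating the connection at rotation time. The captureDevice and previewLayer names below are assumed to exist in the poster's setup:

// Sketch (iOS 17+): let the rotation coordinator supply the angle at capture time.
let rotationCoordinator = AVCaptureDevice.RotationCoordinator(device: captureDevice,
                                                              previewLayer: previewLayer)

func applyRotation(to photoOutput: AVCapturePhotoOutput) {
    guard let connection = photoOutput.connection(with: .video) else { return }
    let angle = rotationCoordinator.videoRotationAngleForHorizonLevelCapture
    if connection.isVideoRotationAngleSupported(angle) {
        connection.videoRotationAngle = angle
    }
}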
Replies: 0 · Boosts: 1 · Views: 191 · Activity: May ’24