Photos and Imaging

Integrate still images and other forms of photography into your apps.

Posts under the Photos and Imaging tag

86 Posts
Post not yet marked as solved
1 Reply
292 Views
Hello Apple Developer Community, I'm excited to make my first post here and am seeking guidance for a feature I'd like to implement in my app. My objective is to enable users to select an image and crop it. Ideally, there should be a visible indicator, like a rectangle, to show the area that will be cropped. Upon tapping the save button, the image would be saved with the selected cropped area. I'm aiming for functionality similar to the image editor in the Photos app. Is there a straightforward method for this that sticks to Apple's native frameworks, without resorting to external GitLab repositories? Thank you in advance for your assistance. Best regards, Nicola
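There's no public API that exposes the Photos app's crop editor directly, so the overlay rectangle has to be your own view; the save step itself can be as small as cropping the backing CGImage. A minimal sketch, assuming cropRect comes from your overlay in the image's coordinate space and orientation handling stays simple:

```swift
import UIKit

/// Crop a UIImage to a rect given in point coordinates.
/// `cropRect` is assumed to come from your own overlay rectangle.
func cropped(_ image: UIImage, to cropRect: CGRect) -> UIImage? {
    // CGImage coordinates are in pixels, so convert from points first.
    let pixelRect = CGRect(x: cropRect.origin.x * image.scale,
                           y: cropRect.origin.y * image.scale,
                           width: cropRect.size.width * image.scale,
                           height: cropRect.size.height * image.scale)
    guard let cgImage = image.cgImage?.cropping(to: pixelRect) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}
```

The cropped UIImage can then be written back with UIImageWriteToSavedPhotosAlbum or a PHAssetCreationRequest.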
Post not yet marked as solved
0 Replies
381 Views
Hello, I came across the Object Capture for iOS example from WWDC23, which utilizes the LiDAR sensor. However, I'm interested in using the TrueDepth camera system instead. What I have tried is saving depth photos (.HEIC) to the Images/ folder (based on modifying the example below), which is hopefully used by the Photogrammetry session. But I haven't been successful so far in starting the 3D reconstruction. Could there be something I've missed, or is the Object Capture sample code exclusively designed for LiDAR? Or maybe .HEIC is not the right format to use? Thank you for your assistance.

```swift
import AVFoundation
import UIKit

class DepthPhotoCapture: NSObject, AVCapturePhotoCaptureDelegate {
    let photoOutput = AVCapturePhotoOutput()
    let captureSession = AVCaptureSession()

    override init() {
        super.init()
        setupCaptureSession()
    }

    func setupCaptureSession() {
        // Get the front camera (TrueDepth camera)
        guard let frontCamera = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front) else {
            print("Unable to access front camera!")
            return
        }
        do {
            // Create an input object from the camera and add it to the session
            let input = try AVCaptureDeviceInput(device: frontCamera)
            captureSession.addInput(input)
        } catch {
            print("Unable to create AVCaptureDeviceInput: \(error)")
        }
        // Add the photo output *before* querying depth support:
        // isDepthDataDeliverySupported is only meaningful once the output
        // is attached to a session with a depth-capable input.
        captureSession.addOutput(photoOutput)
        if photoOutput.isDepthDataDeliverySupported {
            photoOutput.isDepthDataDeliveryEnabled = true
        }
        // Start the capture session
        captureSession.startRunning()
    }

    func captureDepthPhoto() {
        // Create a photo settings object for HEVC (HEIC container) output
        let photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
        photoSettings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
        // Capture a photo with depth data
        photoOutput.capturePhoto(with: photoSettings, delegate: self)
    }

    // AVCapturePhotoCaptureDelegate
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let imageData = photo.fileDataRepresentation() else {
            print("Error while generating image from photo capture data.")
            return
        }
        // Build Images/<uuid>.heic under Documents, creating the folder first
        // (writing into a nonexistent subdirectory fails otherwise).
        let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
        let imagesDirectory = documentsDirectory.appendingPathComponent("Images", isDirectory: true)
        try? FileManager.default.createDirectory(at: imagesDirectory, withIntermediateDirectories: true)
        let fileURL = imagesDirectory.appendingPathComponent(UUID().uuidString).appendingPathExtension("heic")
        do {
            try imageData.write(to: fileURL)
            print("Saved photo with depth data to \(fileURL)")
        } catch {
            print("Failed to write the image data to disk: \(error)")
        }
    }
}
```
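For reference, a minimal sketch of driving reconstruction over that folder with PhotogrammetrySession; imagesURL and outputURL are placeholders, and this says nothing about whether TrueDepth-only input is sufficient:

```swift
import RealityKit

// Sketch: run a PhotogrammetrySession over a folder of captured images.
func reconstruct(imagesURL: URL, outputURL: URL) async throws {
    let session = try PhotogrammetrySession(input: imagesURL)
    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])
    for try await output in session.outputs {
        switch output {
        case .requestError(_, let error):
            print("Reconstruction failed: \(error)")
        case .processingComplete:
            print("Model written to \(outputURL)")
        default:
            break // progress updates, per-sample diagnostics, etc.
        }
    }
}
```

The per-sample diagnostics (e.g. invalidSample) emitted on session.outputs may also hint at why the HEIC files are being rejected.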
Posted by cy_w. Last updated.
Post not yet marked as solved
0 Replies
332 Views
Hello. I have three questions about the Sensitive Content Analysis (SCA) framework:

1. SCA seems to be asynchronous. Is there a limit to how much a single app can send through it at a time?
2. For video analysis, can the video be broken into smaller chunks, and then all chunks be hit concurrently?
3. Can a video stream be sampled as it's being streamed? e.g. Maybe it samples one frame every 3 seconds and scans those?

Thanks.
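For the single-file case, a minimal sketch of one analysis call, assuming the Sensitive Content Analysis entitlement is in place; whether concurrent calls are throttled is exactly the open question above:

```swift
import SensitiveContentAnalysis

// Sketch: analyze one image file. Batching/throttling behavior for many
// concurrent calls is not asserted here.
func checkImage(at url: URL) async throws -> Bool {
    let analyzer = SCSensitivityAnalyzer()
    // The device-level setting can disable analysis entirely.
    guard analyzer.analysisPolicy != .disabled else { return false }
    let analysis = try await analyzer.analyzeImage(at: url)
    return analysis.isSensitive
}
```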
Posted
by onehat.
Last updated
.
Post not yet marked as solved
0 Replies
417 Views
Hi iOS community, I need your help. I am working on an application where I am capturing a photo from the back camera using AVCaptureSession. It works fine on devices running iOS 17+, but I am facing an error on an iPhone X running iOS 16.7.4.

ERROR:

```
error: Optional(Error Domain=AVFoundationErrorDomain Code=-11803 "Cannot Record" UserInfo={NSUnderlyingError=0x283f0b780 {Error Domain=NSOSStatusErrorDomain Code=-16409 "(null)"}, NSLocalizedRecoverySuggestion=Try recording again., AVErrorRecordingFailureDomainKey=3, NSLocalizedDescription=Cannot Record})
```

My code:

```swift
import AVFoundation
import UIKit

final class CedulaScanningVC: UIViewController {
    var captureSession: AVCaptureSession!
    var stillImageOutput: AVCapturePhotoOutput!
    var videoPreviewLayer: AVCaptureVideoPreviewLayer!
    var delegate: ScanCedulaDelegate?

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        self.captureSession.stopRunning()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        setupCamera()
    }

    // MARK: - Configure Camera
    func setupCamera() {
        captureSession = AVCaptureSession()
        captureSession.sessionPreset = .medium
        guard let backCamera = AVCaptureDevice.default(for: .video) else {
            print("Unable to access back camera!")
            return
        }
        do {
            let input = try AVCaptureDeviceInput(device: backCamera)
            stillImageOutput = AVCapturePhotoOutput()
            if captureSession.canAddInput(input) && captureSession.canAddOutput(stillImageOutput) {
                captureSession.addInput(input)
                captureSession.addOutput(stillImageOutput)
                setupLivePreview()
            }
        } catch {
            print("Error: unable to initialize back camera: \(error.localizedDescription)")
        }
    }

    func setupLivePreview() {
        videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        videoPreviewLayer.videoGravity = .resizeAspectFill
        videoPreviewLayer.connection?.videoOrientation = .portrait
        self.view.layer.addSublayer(videoPreviewLayer)
        DispatchQueue.global(qos: .userInitiated).async { [weak self] in
            self?.captureSession.startRunning()
            DispatchQueue.main.async {
                self?.videoPreviewLayer.frame = self?.view.bounds ?? .zero
            }
        }
    }

    func failed() {
        let ac = UIAlertController(title: "Scanning not supported",
                                   message: "Your device does not support scanning a code from an item. Please use a device with a camera.",
                                   preferredStyle: .alert)
        ac.addAction(UIAlertAction(title: "OK", style: .default))
        present(ac, animated: true)
        captureSession = nil
    }

    // MARK: - Actions
    func cameraButtonPressed() {
        let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
        stillImageOutput.capturePhoto(with: settings, delegate: self)
    }
}

extension CedulaScanningVC: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        print("error: \(String(describing: error))")
        captureSession.stopRunning()
        DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) { [weak self] in
            guard let self = self else { return }
            guard let imageData = photo.fileDataRepresentation() else {
                print("No image captured")
                return
            }
            let image = UIImage(data: imageData)
            self.delegate?.capturedImage(image: image)
        }
    }
}
```

I don't know what I'm doing wrong.
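One thing worth ruling out for -11803: triggering capturePhoto before the session is actually running, since startRunning happens on a background queue here. A hedged sketch, not a confirmed fix; safeCameraButtonPressed and observeRuntimeErrors are illustrative names:

```swift
import AVFoundation

// Sketch: gate the shutter on a running session and surface runtime errors.
extension CedulaScanningVC {
    func safeCameraButtonPressed() {
        guard captureSession?.isRunning == true else {
            print("Session not running yet; ignoring shutter")
            return
        }
        let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
        stillImageOutput.capturePhoto(with: settings, delegate: self)
    }

    func observeRuntimeErrors() {
        NotificationCenter.default.addObserver(forName: .AVCaptureSessionRuntimeError,
                                               object: captureSession,
                                               queue: .main) { note in
            // AVError details often narrow down -11803-style failures.
            let error = note.userInfo?[AVCaptureSessionErrorKey] as? AVError
            print("Capture session runtime error: \(String(describing: error))")
        }
    }
}
```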
Post not yet marked as solved
0 Replies
458 Views
We have a Share Extension that fails in Photos on macOS when trying to share a JPEG image, for the following reason: from the NSItemProvider we get from NSExtensionItem.attachments, we try to load the image using loadFileRepresentation(forTypeIdentifier: "public.image", completionHandler: …). This fails for .jpeg images in the library; there seems to be a mismatch between the expected and actual file extension internally. Here is the log:

```
Error copying file type public.image. Error: Error Domain=NSItemProviderErrorDomain Code=-1000 "Cannot load representation of type public.jpeg" UserInfo={NSLocalizedDescription=Cannot load representation of type public.jpeg, NSUnderlyingError=0x1527c1a80 {Error Domain=NSItemProviderErrorDomain Code=-1 "Cannot copy file at URL file:///Users/frank/Library/Containers/com.apple.Photos/Data/tmp/TemporaryItems/ShareKit-Exports/7CCFA760-AAC9-42B0-812D-68F051ED1543/F912E593-2BE5-4E70-86AB-7657A40657E5/IMG_3517.jpg." UserInfo={NSLocalizedDescription=Cannot copy file at URL file:///Users/frank/Library/Containers/com.apple.Photos/Data/tmp/TemporaryItems/ShareKit-Exports/7CCFA760-AAC9-42B0-812D-68F051ED1543/F912E593-2BE5-4E70-86AB-7657A40657E5/IMG_3517.jpg., NSUnderlyingError=0x152789670 {Error Domain=NSItemProviderErrorDomain Code=-1 "Cannot create a temporary file. Error: Undefined error: 0" UserInfo={NSLocalizedDescription=Cannot create a temporary file. Error: Undefined error: 0}}}}
```

In the specified folder there is an image; however, it's named IMG_3517.jpeg, not IMG_3517.jpg. This seems to be a bug in Photos' item provider implementation. If we use loadObject(ofClass: URL.self, completionHandler: …) instead, we get the correct .jpeg URL in the completion handler.
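A minimal sketch of that URL-based workaround, assuming loadImageURL is your own wrapper around the provider:

```swift
import Foundation

// Sketch: ask for a URL object instead of a file representation, which
// sidesteps the .jpg/.jpeg extension mismatch described above.
func loadImageURL(from provider: NSItemProvider, completion: @escaping (URL?) -> Void) {
    guard provider.canLoadObject(ofClass: URL.self) else {
        completion(nil)
        return
    }
    _ = provider.loadObject(ofClass: URL.self) { url, error in
        if let error { print("loadObject failed: \(error)") }
        completion(url)
    }
}
```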
Post not yet marked as solved
1 Reply
385 Views
When I:

1. open an existing project
2. create a new PhotoExtensions target
3. run the new target in an iOS simulator (e.g. iPhone 15, iOS 17.0)
4. select Photos as the app to run
5. open a photo
6. tap the ... button at the top right

I see Copy, Duplicate, Hide, etc., but I do not see my new extension. Is there something else I need to be doing in order to see my new extension in action?
Post marked as solved
2 Replies
638 Views
I'm running this SwiftUI sample app for photos without any modifications except for adding my developer profile, which is necessary to build it. When I tap on the thumbnail to see the photo library (after granting access to my photo library), I see that some of the thumbnails are stuck in a loading state, and when I tap on thumbnails, I only see a low-resolution image (the thumbnail), not the full-resolution image that should load.

In the console I can see this error, which occurs when tapping on a thumbnail to load the full-resolution image:

```
CachedImageManager requestImage error: The operation couldn’t be completed. (PHPhotosErrorDomain error 3164.)
```

When I make the few modifications necessary to run the app as a native macOS app, all the thumbnails load immediately, and clicking on them reveals the full-resolution images.
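Error 3164 in PHPhotosErrorDomain appears to correspond to network access being required, i.e. the original is in iCloud and the request wasn't allowed to download it. A sketch of the likely fix (requestFullImage is an illustrative helper):

```swift
import Photos
import UIKit

// Sketch: allow Photos to download iCloud originals when requesting the
// full-resolution image.
func requestFullImage(for asset: PHAsset, targetSize: CGSize,
                      completion: @escaping (UIImage?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true   // fetch from iCloud if not local
    options.deliveryMode = .highQualityFormat
    PHImageManager.default().requestImage(for: asset,
                                          targetSize: targetSize,
                                          contentMode: .aspectFit,
                                          options: options) { image, _ in
        completion(image)
    }
}
```

That would also explain why the native macOS build behaves differently if its library copy is fully local.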
Post not yet marked as solved
2 Replies
415 Views
Since iOS 17 I have an issue when saving a video and then reading it back from PHPickerViewController: the video comes back as a JPEG file. Below is the code that saves the video; I believe the cause is that I changed creationDate, because on iOS 16 this bug does not occur.

```swift
func doVertifyAccessAblum() {
    DispatchQueue.global(qos: .background).async {
        if let url = URL(string: videoURL), let urlData = NSData(contentsOf: url) {
            let galleryPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
            let filePath = "\(galleryPath)/\(url.lastPathComponent).mp4"
            DispatchQueue.main.async {
                urlData.write(toFile: filePath, atomically: true)
                PHPhotoLibrary.shared().performChanges({
                    let changeRequest = PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: URL(fileURLWithPath: filePath))
                    changeRequest?.creationDate = Date()
                }) { success, error in
                    LOGGING.debug("Save video with status: success=\(success) error=\(String(describing: error))")
                }
            }
        }
    }
}
```
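On the reading side, one way to sidestep the provider's default representation is to ask explicitly for a movie file. A sketch, assuming loadVideo is your own helper:

```swift
import PhotosUI
import UniformTypeIdentifiers

// Sketch: explicitly request a movie representation from the picker result,
// rather than trusting the provider's first registered type.
func loadVideo(from result: PHPickerResult, completion: @escaping (URL?) -> Void) {
    let provider = result.itemProvider
    guard provider.hasItemConformingToTypeIdentifier(UTType.movie.identifier) else {
        completion(nil)
        return
    }
    _ = provider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { url, _ in
        guard let url else { completion(nil); return }
        // The URL is temporary; copy the file out before the handler returns.
        let dest = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension(url.pathExtension)
        try? FileManager.default.copyItem(at: url, to: dest)
        completion(dest)
    }
}
```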
Post not yet marked as solved
0 Replies
254 Views
I am running Sonoma 14.1.1 and having an issue with DNG ProRAW images when choosing to edit. Whether I edit in the Photos app or with Photoshop, once Edit is chosen the image becomes less clear and slightly fuzzy, with loss of detail. This also happens when I export the photo from Photos as a DNG file and then edit it with Photoshop. There is a drastically noticeable difference in the quality of the image. This appears to be the way the image is handled in the Photos app itself; even if I save directly from the iPhone to Files, the same thing happens once on the Mac. Attached are some screenshot examples; the difference is still clear to see.
Posted by ronwoods. Last updated.
Post not yet marked as solved
0 Replies
344 Views
We have an iOS/iPadOS app that uses the ImageCaptureCore framework to communicate with PTP cameras. The same app also works with the 'macOS Catalyst' destination, but we would like to deprecate macOS Catalyst and use the 'Designed for iPad' destination instead. However, when I use this destination, no cameras are provided to the app (ICDeviceBrowser).

I've noticed that on iOS devices the Settings app usually shows that our app is allowed to use 'Camera', but this option doesn't appear when the destination is 'Designed for iPad'. To my understanding the entitlements below are used for a 'real' macOS app (including Catalyst), but I have added them anyway:

- com.apple.security.device.camera
- com.apple.security.device.usb
- com.apple.security.personal-information.photos-library

all set to YES, along with the same privacy-related entries needed for iOS. Any pointers would be appreciated, thanks! macOS 14.1.1; Xcode 15.0.1
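An untested idea: under 'Designed for iPad' the app runs against iOS-style permissions, so it may be worth requesting camera authorization explicitly before starting the browser. A sketch (CameraBrowser is an illustrative wrapper, not a confirmed fix):

```swift
import AVFoundation
import ImageCaptureCore

// Sketch (assumption): request iOS camera authorization first, then start
// browsing for PTP devices.
final class CameraBrowser: NSObject, ICDeviceBrowserDelegate {
    let browser = ICDeviceBrowser()

    func start() {
        AVCaptureDevice.requestAccess(for: .video) { granted in
            guard granted else { return }
            DispatchQueue.main.async {
                self.browser.delegate = self
                self.browser.start()
            }
        }
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didAdd device: ICDevice, moreComing: Bool) {
        print("Found device: \(device.name ?? "unnamed")")
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didRemove device: ICDevice, moreGoing: Bool) {
        print("Removed device: \(device.name ?? "unnamed")")
    }
}
```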
Posted by bims. Last updated.
Post not yet marked as solved
0 Replies
403 Views
I'm developing a 3D scanner that works on iPad, using AVCapturePhoto and PhotogrammetrySession. My photo capture delegate looks like this:

```swift
extension PhotoCaptureDelegate: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        let fileUrl = CameraViewModel.instance.imageDir!.appendingPathComponent("\(PhotoCaptureDelegate.name)\(id).heic")
        let img = CIImage(cvPixelBuffer: photo.pixelBuffer!, options: [
            .auxiliaryDepth: true,
            .properties: photo.metadata
        ])
        let depthData = photo.depthData!.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)
        let fileData = CIContext().heifRepresentation(of: img, format: .RGBA8, colorSpace: colorSpace!, options: [
            .avDepthData: depthData
        ])
        try? fileData!.write(to: fileUrl, options: .atomic)
    }
}
```

But the Photogrammetry session emits warning messages:

```
Sample 0 missing LiDAR point cloud!
Sample 1 missing LiDAR point cloud!
...
Sample 10 missing LiDAR point cloud!
```

The session creates a .usdz 3D model, but the scale is not correct. I think the point cloud could help the Photogrammetry session find the right scale, but I don't know how to attach a point cloud.
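I'm not aware of a public API for attaching a raw LiDAR point cloud to a folder-based session. One possible route, where the sequence-based session initializer is available, is to feed PhotogrammetrySample values directly and attach the depth map per sample. A sketch under that assumption (makeSamples is illustrative):

```swift
import RealityKit
import CoreVideo

// Sketch (assumption): build samples with per-image depth instead of relying
// on files on disk. Depth buffers are expected as kCVPixelFormatType_DepthFloat32.
func makeSamples(images: [CVPixelBuffer], depths: [CVPixelBuffer]) -> [PhotogrammetrySample] {
    zip(images, depths).enumerated().map { index, pair in
        var sample = PhotogrammetrySample(id: index, image: pair.0)
        sample.depthDataMap = pair.1
        return sample
    }
}

// Usage sketch:
// let session = try PhotogrammetrySession(input: AnySequence(makeSamples(images: imgs, depths: dps)))
```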
Post not yet marked as solved
0 Replies
377 Views
I am trying to implement Object Capture using ObjectCaptureSession in iOS 17. I have been following the example supplied by Apple, but I cannot get the session object into the correct state to allow ObjectCaptureSession.beginNewScanPassAfterFlip() to be called. I get the following error when I call session.beginNewScanPassAfterFlip():

```
Can't beginNewScanPassAfterFlip() from state == capturing
Must be .paused from .capturing
```

To start with, there is no state of ObjectCaptureSession called .paused, so is this referring to .isPaused?

- I have tried using session.pause() and confirmed that it is paused via .isPaused, but I get the same error as above.
- I have checked the output of session.state and confirmed it is .capturing.
- I have put print statements in the example and confirmed that, before session.beginNewScanPassAfterFlip() is called at line 104 of OnboardingButtonView, the state is .capturing.

This goes against the documentation on this page: https://developer.apple.com/documentation/realitykit/objectcapturesession/beginnewscanpassafterflip()

Note: I have also tried pausing the session and then calling beginNewScanPassAfterFlip(), but this results in the warning:

I am hoping for some clarification in case there is something I am missing.
Post not yet marked as solved
1 Reply
486 Views
```swift
guard let rawfilter = CoreImage.CIRAWFilter(imageData: data, identifierHint: nil) else { return }
guard let ciImage = rawfilter.outputImage else { return }

let width = Int(ciImage.extent.width)
let height = Int(ciImage.extent.height)
let rect = CGRect(x: 0, y: 0, width: width, height: height)

let context = CIContext()
guard let cgImage = context.createCGImage(ciImage, from: rect, format: .RGBA16, colorSpace: CGColorSpaceCreateDeviceRGB()) else { return }
print("cgImage prepared")

guard let dataProvider = cgImage.dataProvider else { return }
let rgbaData = CFDataCreateMutableCopy(kCFAllocatorDefault, 0, dataProvider.data)
```

In iOS 16 this process is much faster than the same process in iOS 17. Is there a way to speed up the decoding?
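One variable worth isolating regardless of OS version: the snippet creates a new CIContext per decode, and context creation is expensive. A sketch that reuses one (RAWDecoder is an illustrative name, and this may not explain the iOS 16 vs 17 gap by itself):

```swift
import CoreImage

// Sketch: reuse a single CIContext across decodes. Context creation compiles
// and caches GPU state, so per-image contexts can dominate decode time.
enum RAWDecoder {
    static let context = CIContext() // create once, reuse

    static func decode(_ data: Data) -> CGImage? {
        guard let filter = CIRAWFilter(imageData: data, identifierHint: nil),
              let image = filter.outputImage else { return nil }
        return context.createCGImage(image,
                                     from: image.extent,
                                     format: .RGBA16,
                                     colorSpace: CGColorSpaceCreateDeviceRGB())
    }
}
```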
Posted by qdwang. Last updated.
Post not yet marked as solved
2 Replies
1.3k Views
Environment: iOS 16 beta 2, beta 3. iPhone 11 Pro, 12 mini.

Steps to reproduce:

1. Subscribe to Photo Library changes via PHPhotoLibraryChangeObserver, with some logs to track inserted/deleted objects:

```swift
func photoLibraryDidChange(_ changeInstance: PHChange) {
    if let changeDetails = changeInstance.changeDetails(for: allPhotosFetchResult) {
        for insertion in changeDetails.insertedObjects {
            print("🥶 INSERTED: ", insertion.localIdentifier)
        }
        for deletion in changeDetails.removedObjects {
            print("🥶 DELETED: ", deletion.localIdentifier)
        }
    }
}
```

2. Save a photo to the camera roll with PHAssetCreationRequest.
3. Go to the Photo Library and delete the newly saved photo.
4. Come back to the app and watch the logs:

```
🥶 INSERTED:  903933C3-7B83-4212-8DF1-37C2AD3A923D/L0/001
🥶 DELETED:  39F673E7-C5AC-422C-8BAA-1BF865120BBF/L0/001
```

Expected result: the localIdentifier of the saved and deleted asset is the same string in both logs. In fact, it's different. So it appears that either the localIdentifier of an asset gets changed after successful saving, or it's a bug in the Photos framework in iOS 16. I've checked: in iOS 15 it works fine (the IDs in the logs match).
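To pin down whether the identifier really changes, it may help to record the placeholder's localIdentifier at creation time and compare it with what the change observer reports. A sketch (savePhoto is an illustrative helper):

```swift
import Photos

// Sketch: capture the placeholder identifier at save time so it can be
// compared against what photoLibraryDidChange later logs.
func savePhoto(data: Data, completion: @escaping (String?) -> Void) {
    var placeholderID: String?
    PHPhotoLibrary.shared().performChanges({
        let request = PHAssetCreationRequest.forAsset()
        request.addResource(with: .photo, data: data, options: nil)
        placeholderID = request.placeholderForCreatedAsset?.localIdentifier
    }) { success, _ in
        completion(success ? placeholderID : nil)
    }
}
```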
Posted by popov_v_d. Last updated.
Post not yet marked as solved
5 Replies
1.3k Views
```
Unhandled error (NSCocoaErrorDomain, 134093) occurred during faulting and was thrown: Error Domain=NSCocoaErrorDomain Code=134093 "(null)"
Fatal Exception: NSInternalInconsistencyException
0  CoreFoundation          0xed5e0   __exceptionPreprocess
1  libobjc.A.dylib         0x2bc00   objc_exception_throw
2  CoreData                0x129c8   _PFFaultHandlerLookupRow
3  CoreData                0x11d60   _PF_FulfillDeferredFault
4  CoreData                0x11c58   _pvfk_header
5  CoreData                0x98e64   _sharedIMPL_pvfk_core_c
6  PhotoLibraryServices    0x6d8b0   -[PLInternalResource orientation]
7  PhotoLibraryServices    0x6d7bc   -[PLInternalResource orientedWidth]
8  Photos                  0x147e74  ___presentFullResourceAtIndex_block_invoke
9  PhotoLibraryServices    0x174ee4  __53-[PLManagedObjectContext _directPerformBlockAndWait:]_block_invoke
10 CoreData                0x208ec   developerSubmittedBlockToNSManagedObjectContextPerform
11 libdispatch.dylib       0x4300    _dispatch_client_callout
12 libdispatch.dylib       0x136b4   _dispatch_lane_barrier_sync_invoke_and_complete
13 CoreData                0x207f8   -[NSManagedObjectContext performBlockAndWait:]
14 PhotoLibraryServices    0x174e98  -[PLManagedObjectContext _directPerformBlockAndWait:]
15 PhotoLibraryServices    0x1738c8  -[PLManagedObjectContext performBlockAndWait:]
16 Photos                  0x147d30  _presentFullResourceAtIndex
17 Photos                  0x1476bc  PHChooserListContinueEnumerating
18 Photos                  0x1445e0  -[PHImageResourceChooser presentNextQualifyingResource]
19 Photos                  0x2ea74   -[PHImageRequest startRequest]
20 Photos                  0x3f2c0   -[PHMediaRequestContext _registerAndStartRequests:]
21 Photos                  0x3e484   -[PHMediaRequestContext start]
22 Photos                  0x1f0710  -[PHImageManager runRequestWithContext:]
23 Photos                  0x1efdb0  -[PHImageManager requestImageDataAndOrientationForAsset:options:resultHandler:]
24 TeraBox                 0x2497f0c closure #1 in LocalPhotoLibManager.getDataFrom(_:_:) + 549 (LocalPhotoLibManager.swift:549)
25 TeraBox                 0x1835fc4 thunk for @escaping @callee_guaranteed () -> () (<compiler-generated>)
26 TeraBox                 0x1cb1288 +[DuboxOCException tryOC:catchException:] + 18 (DuboxOCException.m:18)
27 TeraBox                 0x249b4d4 specialized LocalPhotoLibManager.convert(with:_:) + 548 (LocalPhotoLibManager.swift:548)
28 TeraBox                 0x2493b24 closure #1 in closure #1 in closure #1 in LocalPhotoLibManager.scanAlbumUpdateLocalphotoTable(_:) + 173 (LocalPhotoLibManager.swift:173)
29 TeraBox                 0x1835fc4 thunk for @escaping @callee_guaranteed () -> () (<compiler-generated>)
30 libdispatch.dylib       0x26a8    _dispatch_call_block_and_release
31 libdispatch.dylib       0x4300    _dispatch_client_callout
32 libdispatch.dylib       0x744c    _dispatch_queue_override_invoke
33 libdispatch.dylib       0x15be4   _dispatch_root_queue_drain
34 libdispatch.dylib       0x163ec   _dispatch_worker_thread2
35 libsystem_pthread.dylib 0x1928    _pthread_wqthread
36 libsystem_pthread.dylib 0x1a04    start_wqthread
```
Post not yet marked as solved
2 Replies
401 Views
Hi there :) We're on our way to making an app that can communicate with DSLR cameras, using the ImageCaptureCore framework for PTP communication. In our app, we send PTP commands to the camera using requestSendPTPCommand(_:outData:completion:). This is our snippet for executing a PTP command:

```swift
public extension ICCameraDevice {
    func sendCommand(command: Command) async {
        do {
            print("sendCommand ++++++++++++++++++++++++++++++++++++++++++++++++++++")
            print("sendCommand \(command.tag()) : sendCommand Started")
            let result = try await self.requestSendPTPCommand(command.encodeCommand().commandBuffer, outData: nil)
            let (data, response) = result
            print("sendCommand \(command.tag()) : sendCommand Finished")
            print("sendCommand data: \(data.bytes.count)")
            print("sendCommand response: \(response.bytes.count)")
            if !response.bytes.isEmpty {
                command.decodeResponse(responseData: response)
            }
            print("sendCommand \(command.tag()) : sendCommand Finished with response code \(command.responseCode)")
            print("sendCommand ++++++++++++++++++++++++++++++++++++++++++++++++++++")
            if command.responseCode != .ok {
                isRunning = false
                errorResponseCode = command.responseCode.rawValue
                print("response error with code = \(command.responseCode)")
                return
            }
            let copiedData = data.deepCopy()
            command.decodeData(data: copiedData)
        } catch {
            isRunning = false
            print("Error Send Command with error: \(error.localizedDescription)")
        }
    }
}
```

The function sendCommand(command:) is called in a while-loop in an async/await fashion, so it waits for a sent command to finish before executing another. The loop keeps running while the device is connected to the camera. The result: the process runs with no problem at all for several minutes; it can get the camera's settings, device info, even its images. But then problems occur. The amount of time is random: sometimes it takes only 15 minutes, sometimes 1 hour. Two problems were recorded in our case:

1. requestSendPTPCommand(_:outData:completion:) returns empty data without throwing any error (the error is never caught in the catch block). This is my printed result:

```
sendCommand +++++++++++++++++++++++++++++++++++++
sendCommand GetObjectHandlesCommand : sendCommand Started
sendCommand GetObjectHandlesCommand : sendCommand Finished
sendCommand data: 0
sendCommand response: 0
sendCommand GetObjectHandlesCommand : sendCommand Finished with response code undefined
sendCommand +++++++++++++++++++++++++++++++++++++
```

2. It crashes, with the last messages in my logger:

```
sendCommand +++++++++++++++++++++++++++++++++++++
sendCommand GetObjectHandlesCommand : sendCommand Started
2023-10-27 10:44:37.186768+0700 PTPHelper_Example[76486:11538353] [PTPHelper_Example] remoteCamera ! Canon EOS 200D - Error Domain=NSCocoaErrorDomain Code=4097 "connection to service with pid 76493 created from an endpoint" UserInfo={NSDebugDescription=connection to service with pid 76493 created from an endpoint}
2023-10-27 10:44:37.187146+0700 PTPHelper_Example[76486:11538353] [PTPHelper_Example] failureBlock ! Canon EOS 200D - Failure block was called due to a connection failure.
```

For the crash, I tried to attach the log to this post, but it always failed with the message "An error occured while uploading this log. Please try again later." So I uploaded it to Google Drive instead: https://drive.google.com/file/d/1IvJohGwp1zSkncTWHc3430weGntciB3K/view?usp=sharing

Reproduced on iOS 16.3.1. I've checked the stack traces of the other threads, but nothing suspicious got my attention. It might relate to this issue: https://developer.apple.com/forums/thread/104576, but I can't be sure. Any ideas on how to address the crashes shown above? Thank you!
Posted by IvanPN. Last updated.
Post not yet marked as solved
1 Reply
440 Views
Using ImageCaptureCore to send PTP commands to cameras via tether, I noticed that all of my Nikon cameras can take up to an entire minute before PTP events start logging. My Canons and Sonys are ready instantly. Any idea why? I use ICDeviceBrowser to browse for cameras and then request to open the session. According to the docs, the device is ready after it enumerates its objects. If that's the case, is there a way to bypass that? Even with an empty SD card it's slow.
Post not yet marked as solved
2 Replies
850 Views
The latest iPhone 15 Pro models support additional focal lengths on the main 24mm (1x) lens: 28mm ("1.2x") and 35mm ("1.5x"). These are supposed to use data from the full sensor to achieve optical quality images (i.e. no upscaling), so I would expect these new focal lengths to appear in the secondaryNativeResolutionZoomFactors array, just like the 2x option does. However, the activeFormat.secondaryNativeResolutionZoomFactors property still only reports [2.0] when using the main 1x lens. Is this an oversight, or is there something special (other than setting the zoom factor) we need to do to access the high-quality 28mm and 35mm modes? I'm wary of simply setting 1.2 or 1.5 as the zoom factor, as that isn't truly the ratio between the base 24mm and the virtual focal lengths.
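Before committing to a zoom value, it may help to dump what every format reports rather than relying on activeFormat alone. A sketch (dumpSecondaryZoomFactors and setZoom are illustrative helpers):

```swift
import AVFoundation
import CoreMedia

// Sketch: inspect what each format actually reports before assuming
// 1.2/1.5 entries exist anywhere.
func dumpSecondaryZoomFactors(for device: AVCaptureDevice) {
    for format in device.formats {
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        print("\(dims.width)x\(dims.height)",
              "secondary native zoom factors:",
              format.secondaryNativeResolutionZoomFactors)
    }
}

// Setting a zoom factor (whatever value turns out to be correct) must
// happen inside a configuration lock.
func setZoom(_ factor: CGFloat, on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    device.videoZoomFactor = factor
    device.unlockForConfiguration()
}
```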
Posted by tenuki. Last updated.
Post not yet marked as solved
6 Replies
14k Views
Running on: iMac 27" 5K late 2015, 64 GB RAM, and a 16 TB Pegasus Promise2 R4 (RAID 5) via Thunderbolt. After trying Big Sur and finding issues with the Luminar photo app, I decided to return to Catalina on the iMac. I reformatted my internal drive, reinstalled Catalina 10.15.5, and reformatted the RAID. But I keep getting the following message upon restarting: "Incompatible Disk. This disk uses features that are not supported on this version of macOS", and my Pegasus2 R4 volume no longer appears on the desktop or in Disk Utility. I looked into this and discovered that it may be an issue of Mac OS Extended vs. APFS. The iMac was formatted as APFS prior to installing OS 11, so I reformatted to APFS when returning to Catalina. The issues persisted, so I re-reformatted from a bootable USB, this time to Mac OS Extended (Journaled), and the issue seems to be resolved. The iMac runs slower on Mac OS Extended, but it is running and the RAID is recognized. I'd love to go back to APFS but am afraid it will "break" things. Any thoughts on this would be welcome. Thanks, Nick