Photos and Imaging


Integrate still images and other forms of photography into your apps.

Posts under Photos and Imaging tag

81 Posts
Post not yet marked as solved
0 Replies
435 Views
Hi iOS community, I need your help. I am working on an application where I capture photos from the back camera using AVCaptureSession. It works fine on devices running iOS 17+, but on an iPhone X running iOS 16.7.4 I get this error:

```
Error Domain=AVFoundationErrorDomain Code=-11803 "Cannot Record"
UserInfo={NSUnderlyingError=0x283f0b780 {Error Domain=NSOSStatusErrorDomain Code=-16409 "(null)"},
NSLocalizedRecoverySuggestion=Try recording again., AVErrorRecordingFailureDomainKey=3,
NSLocalizedDescription=Cannot Record}
```

My code:

```swift
import UIKit
import AVFoundation

final class CedulaScanningVC: UIViewController {
    var captureSession: AVCaptureSession!
    var stillImageOutput: AVCapturePhotoOutput!
    var videoPreviewLayer: AVCaptureVideoPreviewLayer!
    var delegate: ScanCedulaDelegate?

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        self.captureSession.stopRunning()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        setupCamera()
    }

    // MARK: - Configure Camera
    func setupCamera() {
        captureSession = AVCaptureSession()
        captureSession.sessionPreset = .medium
        guard let backCamera = AVCaptureDevice.default(for: .video) else {
            print("Unable to access back camera!")
            return
        }
        do {
            let input = try AVCaptureDeviceInput(device: backCamera)
            stillImageOutput = AVCapturePhotoOutput()
            if captureSession.canAddInput(input) && captureSession.canAddOutput(stillImageOutput) {
                captureSession.addInput(input)
                captureSession.addOutput(stillImageOutput)
                setupLivePreview()
            }
        } catch {
            print("Error: Unable to initialize back camera: \(error.localizedDescription)")
        }
    }

    func setupLivePreview() {
        videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        videoPreviewLayer.videoGravity = .resizeAspectFill
        videoPreviewLayer.connection?.videoOrientation = .portrait
        self.view.layer.addSublayer(videoPreviewLayer)
        DispatchQueue.global(qos: .userInitiated).async { [weak self] in
            self?.captureSession.startRunning()
            DispatchQueue.main.async {
                self?.videoPreviewLayer.frame = self?.view.bounds ?? .zero
            }
        }
    }

    func failed() {
        let ac = UIAlertController(title: "Scanning not supported",
                                   message: "Your device does not support scanning a code from an item. Please use a device with a camera.",
                                   preferredStyle: .alert)
        ac.addAction(UIAlertAction(title: "OK", style: .default))
        present(ac, animated: true)
        captureSession = nil
    }

    // MARK: - Actions
    func cameraButtonPressed() {
        let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
        stillImageOutput.capturePhoto(with: settings, delegate: self)
    }
}

extension CedulaScanningVC: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        print("error: \(String(describing: error))")
        captureSession.stopRunning()
        DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) { [weak self] in
            guard let self = self else { return }
            guard let imageData = photo.fileDataRepresentation() else {
                print("No image captured")
                return
            }
            let image = UIImage(data: imageData)
            self.delegate?.capturedImage(image: image)
        }
    }
}
```

What am I doing wrong?
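A hedged sketch of a common mitigation, assuming the class above — not a confirmed fix for this exact -11803 error: on older devices, failures like this can occur when the requested photo settings don't match what the output supports, or when capture is requested before the session is fully running. Guarding both is cheap; the method names below are illustrative additions to `CedulaScanningVC`.

```swift
// Sketch: guard the capture settings against what this device/output actually supports.
// Assumes the `stillImageOutput` and `captureSession` properties from the post above.
func makePhotoSettings(for output: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    // Only request JPEG explicitly if the output advertises it; otherwise
    // fall back to the output's default codec for this device.
    if output.availablePhotoCodecTypes.contains(.jpeg) {
        return AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
    }
    return AVCapturePhotoSettings()
}

func cameraButtonPressedSafely() {
    // Capturing while the session is not yet running is a common cause of
    // "Cannot Record"-style failures, so check first.
    guard captureSession.isRunning else {
        print("Session not running yet; ignoring capture request")
        return
    }
    stillImageOutput.capturePhoto(with: makePhotoSettings(for: stillImageOutput), delegate: self)
}
```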
Post not yet marked as solved
0 Replies
484 Views
We have a Share Extension that fails in Photos on macOS when trying to share a JPEG image, for the following reason: from the NSItemProvider we get from NSExtensionItem.attachments, we try to load the image using loadFileRepresentation(forTypeIdentifier: "public.image", completionHandler: ...). This fails for .jpeg images in the library; there seems to be a mismatch between the expected and actual file extension internally. Here is the log:

```
Error copying file type public.image. Error: Error Domain=NSItemProviderErrorDomain Code=-1000
"Cannot load representation of type public.jpeg"
UserInfo={NSLocalizedDescription=Cannot load representation of type public.jpeg,
NSUnderlyingError=0x1527c1a80 {Error Domain=NSItemProviderErrorDomain Code=-1
"Cannot copy file at URL file:///Users/frank/Library/Containers/com.apple.Photos/Data/tmp/TemporaryItems/ShareKit-Exports/7CCFA760-AAC9-42B0-812D-68F051ED1543/F912E593-2BE5-4E70-86AB-7657A40657E5/IMG_3517.jpg."
UserInfo={NSLocalizedDescription=Cannot copy file at URL file:///Users/frank/Library/Containers/com.apple.Photos/Data/tmp/TemporaryItems/ShareKit-Exports/7CCFA760-AAC9-42B0-812D-68F051ED1543/F912E593-2BE5-4E70-86AB-7657A40657E5/IMG_3517.jpg.,
NSUnderlyingError=0x152789670 {Error Domain=NSItemProviderErrorDomain Code=-1
"Cannot create a temporary file. Error: Undefined error: 0"
UserInfo={NSLocalizedDescription=Cannot create a temporary file. Error: Undefined error: 0}}}}}
```

In the specified folder there is an image; however, it's named IMG_3517.jpeg, not IMG_3517.jpg. This seems to be a bug in Photos' item provider implementation. If we use loadObject(ofClass: URL.self, completionHandler: ...) instead, we get the correct .jpeg URL in the completion handler.
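A minimal sketch of the workaround the post describes, assuming a standard share-extension flow; the function name and error handling are illustrative:

```swift
import Foundation
import UniformTypeIdentifiers

// Sketch: fall back to loadObject(ofClass:) when loadFileRepresentation fails.
// `provider` is an NSItemProvider from NSExtensionItem.attachments.
func loadSharedImageURL(from provider: NSItemProvider,
                        completion: @escaping (URL?) -> Void) {
    provider.loadFileRepresentation(forTypeIdentifier: UTType.image.identifier) { url, error in
        if let url {
            // loadFileRepresentation's URL is only valid inside this handler,
            // so copy the file out before handing it back.
            let dest = FileManager.default.temporaryDirectory
                .appendingPathComponent(url.lastPathComponent)
            try? FileManager.default.removeItem(at: dest)
            if (try? FileManager.default.copyItem(at: url, to: dest)) != nil {
                completion(dest)
                return
            }
        }
        // Workaround for the .jpg/.jpeg mismatch: ask for a URL object instead.
        _ = provider.loadObject(ofClass: URL.self) { url, _ in
            completion(url)
        }
    }
}
```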
Post not yet marked as solved
1 Reply
410 Views
When I:

1. Open an existing project
2. Create a new PhotoExtensions target
3. Run the new target in an iOS simulator (e.g. iPhone 15, iOS 17.0), selecting Photos as the app to run
4. Open a photo
5. Tap the ... button at the top right

I see Copy, Duplicate, Hide, etc., but I do not see my new extension. Is there something else I need to be doing in order to see my new extension in 'action'?
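Not a confirmed fix, but two things worth checking. First, a photo editing extension only shows up if its Info.plist declares the photo-editing extension point and supported media types; a sketch of the relevant keys (these values come from the standard Xcode template, so they should already be present if the target was created from it):

```xml
<key>NSExtension</key>
<dict>
    <key>NSExtensionPointIdentifier</key>
    <string>com.apple.photo-editing</string>
    <key>NSExtensionAttributes</key>
    <dict>
        <key>PHSupportedMediaTypes</key>
        <array>
            <string>Image</string>
        </array>
    </dict>
    <key>NSExtensionPrincipalClass</key>
    <string>$(PRODUCT_MODULE_NAME).PhotoEditingViewController</string>
</dict>
```

Second, photo editing extensions appear inside the editing UI rather than in the overflow menu on the photo itself: open the photo, tap Edit, then tap the ... button within the editor. The ... menu described in the steps above (Copy, Duplicate, Hide) is the photo's overflow menu, which never lists editing extensions.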
Post marked as solved
2 Replies
719 Views
I'm running this SwiftUI sample app for photos without any modifications except for adding my developer profile, which is necessary to build it. When I tap the thumbnail to open the photo library (after granting access to my photo library), some of the thumbnails are stuck in a loading state, and when I tap a thumbnail I only see a low-resolution image (the thumbnail), not the full-resolution image that should load. In the console I see this error when tapping a thumbnail to load the full-resolution image:

```
CachedImageManager requestImage error: The operation couldn't be completed. (PHPhotosErrorDomain error 3164.)
```

When I make the few modifications necessary to run the app as a native macOS app, all the thumbnails load immediately, and clicking on them reveals the full-resolution images.
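PHPhotosErrorDomain error 3164 corresponds to PHPhotosError.networkAccessRequired, i.e. the full-resolution asset lives in iCloud and the request was not allowed to download it. A minimal sketch of the usual fix — whether this is the root cause on this particular device is an assumption:

```swift
import Photos
import UIKit

// Sketch: explicitly allow PhotoKit to fetch iCloud originals over the network.
func requestFullImage(for asset: PHAsset,
                      targetSize: CGSize,
                      completion: @escaping (UIImage?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true   // without this, iCloud-only assets fail with error 3164
    options.deliveryMode = .highQualityFormat
    PHImageManager.default().requestImage(for: asset,
                                          targetSize: targetSize,
                                          contentMode: .aspectFit,
                                          options: options) { image, _ in
        completion(image)
    }
}
```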
Post not yet marked as solved
2 Replies
437 Views
Since iOS 17, I have an issue when saving a video and then reading it back from PHPickerViewController: the video turns into a JPEG file. Below is the code that saves the video; as far as I can tell the cause is that I set creationDate on the change request, but on iOS 16 the same code has no bug.

```swift
func doVertifyAccessAblum() {
    DispatchQueue.global(qos: .background).async {
        if let url = URL(string: videoURL), let urlData = NSData(contentsOf: url) {
            let galleryPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
            let filePath = "\(galleryPath)/\(url.lastPathComponent).mp4"
            DispatchQueue.main.async {
                urlData.write(toFile: filePath, atomically: true)
                PHPhotoLibrary.shared().performChanges({
                    let changeRequest = PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: URL(fileURLWithPath: filePath))
                    changeRequest?.creationDate = Date()
                }) { success, error in
                    LOGGING.debug("Save video with status: success=\(success) error=\(String(describing: error))")
                }
            }
        }
    }
}
```
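For what it's worth, a sketch of reading the picked item back in a type-aware way, which can help confirm whether the stored asset is actually a video or whether only the picker's representation is wrong; the function name is illustrative:

```swift
import PhotosUI
import UniformTypeIdentifiers

// Sketch: inspect what a PHPickerResult actually delivers.
func handle(result: PHPickerResult) {
    let provider = result.itemProvider
    if provider.hasItemConformingToTypeIdentifier(UTType.movie.identifier) {
        provider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { url, error in
            print("movie url:", url as Any, "error:", error as Any)
        }
    } else {
        // If no movie representation is offered, the asset itself was
        // likely saved/registered incorrectly in the library.
        print("registered types:", provider.registeredTypeIdentifiers)
    }
}
```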
Post not yet marked as solved
0 Replies
270 Views
I am running Sonoma 14.1.1 and having an issue with DNG ProRAW images when choosing to edit. Whether I edit in the Photos app or in Photoshop, as soon as editing begins the image becomes less clear and slightly fuzzy, with loss of detail. The same happens when I export the photo from Photos as a DNG file and then edit it in Photoshop: there is a drastically noticeable difference in the quality of the image. This appears to be the way the image is handled in the Photos app itself; even if I save directly from the iPhone to Files, the same thing happens once the file is on the Mac. Attached are some screenshot examples, but the difference is still clear to see.
Post not yet marked as solved
0 Replies
360 Views
We have an iOS/iPadOS app that uses the ImageCaptureCore framework to communicate with PTP cameras. The same app also works with the 'macOS Catalyst' destination, but we would like to drop macOS Catalyst and use the 'Designed for iPad' destination instead. However, with that destination no cameras are provided to the app (ICDeviceBrowser)! :( I've noticed that on iOS devices the Settings app usually shows that our app is allowed to use the Camera, but this entry doesn't appear when the destination is 'Designed for iPad'. To my understanding the entitlements below are used for a 'real' macOS app (including Catalyst), but I have added them anyway:

com.apple.security.device.camera
com.apple.security.device.usb
com.apple.security.personal-information.photos-library

all set to YES, together with the same privacy-related entries needed for iOS. Any pointers would be appreciated, thanks! :) macOS 14.1.1; Xcode 15.0.1
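For reference, a minimal sketch of the browser setup being described; this mirrors the standard ImageCaptureCore pattern and does not by itself answer whether 'Designed for iPad' supports camera enumeration at all:

```swift
import ImageCaptureCore

final class CameraBrowser: NSObject, ICDeviceBrowserDelegate {
    private let browser = ICDeviceBrowser()

    func start() {
        browser.delegate = self
        // On macOS you would also set browser.browsedDeviceTypeMask to
        // include cameras; on iOS the framework only exposes cameras.
        browser.start()
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didAdd device: ICDevice, moreComing: Bool) {
        print("Found device:", device.name ?? "unnamed")
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didRemove device: ICDevice, moreGoing: Bool) {
        print("Removed device:", device.name ?? "unnamed")
    }
}
```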
Post not yet marked as solved
0 Replies
423 Views
I'm developing a 3D scanner that works on iPad, using AVCapturePhoto and PhotogrammetrySession. My photoCaptureDelegate looks like this:

```swift
extension PhotoCaptureDelegate: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        let fileUrl = CameraViewModel.instance.imageDir!.appendingPathComponent("\(PhotoCaptureDelegate.name)\(id).heic")
        let img = CIImage(cvPixelBuffer: photo.pixelBuffer!, options: [
            .auxiliaryDepth: true,
            .properties: photo.metadata
        ])
        let depthData = photo.depthData!.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)
        let fileData = CIContext().heifRepresentation(of: img, format: .RGBA8, colorSpace: colorSpace!, options: [
            .avDepthData: depthData
        ])
        try? fileData!.write(to: fileUrl, options: .atomic)
    }
}
```

But the Photogrammetry session emits warning messages:

```
Sample 0 missing LiDAR point cloud!
Sample 1 missing LiDAR point cloud!
...
Sample 10 missing LiDAR point cloud!
```

The session creates a USDZ 3D model, but the scale is not correct. I think the point cloud could help the Photogrammetry session find the right scale, but I don't know how to attach the point cloud.
Post not yet marked as solved
0 Replies
400 Views
I am trying to implement Object Capture using ObjectCaptureSession in iOS 17. I have been following the example supplied by Apple, but I cannot get the session object into the correct state to allow ObjectCaptureSession.beginNewScanPassAfterFlip() to be called. I get the following error when I call session.beginNewScanPassAfterFlip():

```
Can't beginNewScanPassAfterFlip() from state == capturing
Must be .paused from .capturing
```

To start with, there is no state of ObjectCaptureSession called .paused, so is this referring to .isPaused? I have tried using session.pause() and confirmed the session is paused via .isPaused, but I get the same error as above. I have checked the output of session.state and confirmed it is .capturing. I have put print statements in the example and confirmed that before session.beginNewScanPassAfterFlip() is called at line 104 of OnboardingButtonView the state is .capturing. This goes against the documentation on this page: https://developer.apple.com/documentation/realitykit/objectcapturesession/beginnewscanpassafterflip(). Note, I have also tried pausing the session and then calling beginNewScanPassAfterFlip(), but this results in the same warning. I am hoping for some clarification in case there is something I am missing.
Post not yet marked as solved
0 Replies
368 Views
I'm developing a 3D scanner that works on an iPad (6th gen, 12-inch). Photogrammetry with ObjectCaptureSession was successful, but other attempts are not. I've tried Photogrammetry with URL inputs, using pictures from AVCapturePhoto. It is strange: if the metadata is not replaced, photogrammetry finishes, but it seems no depth data or gravity info is used (depth and gravity are separate files); if the metadata is injected, the attempt fails. This time I tried Photogrammetry with a PhotogrammetrySample sequence and it also failed. The settings are:

camera: back LiDAR camera
image format: kCVPixelFormatType_32BGRA (fails with a crash) or HEVC (just fails)
image depth format: kCVPixelFormatType_DisparityFloat32 or kCVPixelFormatType_DepthFloat32
photo settings: isDepthDataDeliveryEnabled = true, isDepthDataFiltered = false, embedsDepthDataInPhoto = true

I wonder whether iPad supports Photogrammetry with PhotogrammetrySamples. I've already tested the sample code provided by Apple:

https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app
https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera
https://developer.apple.com/documentation/realitykit/taking_pictures_for_3d_object_capture

What should I do to make Photogrammetry successful?
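For comparison, a sketch of feeding a PhotogrammetrySession with in-memory samples; the property names follow the RealityKit API, but whether the iPad hardware accepts this input path is exactly the open question here, and the buffer formats are assumptions carried over from the post:

```swift
import RealityKit
import CoreVideo

// Sketch: build PhotogrammetrySamples with depth attached and feed them to
// a session. `pixelBuffer`/`depthBuffer` are assumed to come from
// AVCapturePhoto (32BGRA color and DepthFloat32 depth, respectively).
func makeSample(id: Int, pixelBuffer: CVPixelBuffer, depthBuffer: CVPixelBuffer) -> PhotogrammetrySample {
    var sample = PhotogrammetrySample(id: id, image: pixelBuffer)
    sample.depthDataMap = depthBuffer
    return sample
}

func runSession(samples: [PhotogrammetrySample], output: URL) throws -> PhotogrammetrySession {
    let session = try PhotogrammetrySession(input: samples,
                                            configuration: PhotogrammetrySession.Configuration())
    try session.process(requests: [.modelFile(url: output)])
    return session
}
```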
Post not yet marked as solved
1 Reply
505 Views
```swift
guard let rawfilter = CoreImage.CIRAWFilter(imageData: data, identifierHint: nil) else { return }
guard let ciImage = rawfilter.outputImage else { return }

let width = Int(ciImage.extent.width)
let height = Int(ciImage.extent.height)
let rect = CGRect(x: 0, y: 0, width: width, height: height)
let context = CIContext()
guard let cgImage = context.createCGImage(ciImage,
                                          from: rect,
                                          format: .RGBA16,
                                          colorSpace: CGColorSpaceCreateDeviceRGB()) else { return }
print("cgImage prepared")
guard let dataProvider = cgImage.dataProvider else { return }
let rgbaData = CFDataCreateMutableCopy(kCFAllocatorDefault, 0, dataProvider.data)
```

In iOS 16 this process is much faster than the same process in iOS 17. Is there a method to boost the decoding speed?
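One general observation, independent of the iOS 16 vs 17 regression: CIContext creation is expensive, so building one per decode adds overhead. A sketch of hoisting it out — that this accounts for the version difference is an assumption worth profiling, not a claim:

```swift
import CoreImage

// Sketch: share one CIContext across RAW decodes instead of creating it per call.
enum RAWDecoder {
    static let context = CIContext()   // reuse; context creation is costly

    static func decode(_ data: Data) -> CGImage? {
        guard let rawFilter = CIRAWFilter(imageData: data, identifierHint: nil),
              let ciImage = rawFilter.outputImage else { return nil }
        return context.createCGImage(ciImage,
                                     from: ciImage.extent,
                                     format: .RGBA16,
                                     colorSpace: CGColorSpaceCreateDeviceRGB())
    }
}
```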
Post not yet marked as solved
1 Reply
621 Views
Hi! I am creating a document-based app and I am wondering how I can save images in my model and integrate them into the document file. "Image" does not conform to PersistentModel, though I am sure that "Data" would be saved in the document file. Any clue on how I can do it? Many thanks.
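Assuming this is a SwiftData document-based app (the mention of PersistentModel suggests so), the usual pattern is to store the image bytes as Data on the model and convert to and from an image type at the edges. A sketch, with illustrative names:

```swift
import SwiftData
import UIKit

@Model
final class Note {
    var title: String
    // Image persisted as raw bytes; .externalStorage keeps large blobs
    // out of the main store file where possible.
    @Attribute(.externalStorage) var imageData: Data?

    init(title: String, image: UIImage? = nil) {
        self.title = title
        self.imageData = image?.jpegData(compressionQuality: 0.9)
    }

    // Convenience accessor for views.
    var image: UIImage? {
        imageData.flatMap(UIImage.init(data:))
    }
}
```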
Post not yet marked as solved
2 Replies
423 Views
Hi there :) We're on our way to make an app that can communicate with DSLR cameras using the ImageCaptureCore framework for PTP communication. In our app, we send PTP commands to the camera using requestSendPTPCommand(_:outData:completion:). This is our snippet to execute a PTP command:

```swift
public extension ICCameraDevice {
    func sendCommand(command: Command) async {
        do {
            print("sendCommand ++++++++++++++++++++++++++++++++++++++++")
            print("sendCommand \(command.tag()) : sendCommand Started")
            let result = try await self.requestSendPTPCommand(command.encodeCommand().commandBuffer, outData: nil)
            let (data, response) = result
            print("sendCommand \(command.tag()) : sendCommand Finished")
            print("sendCommand data: \(data.bytes.count)")
            print("sendCommand response: \(response.bytes.count)")
            if !response.bytes.isEmpty {
                command.decodeResponse(responseData: response)
            }
            print("sendCommand \(command.tag()) : sendCommand Finished with response code \(command.responseCode)")
            print("sendCommand ++++++++++++++++++++++++++++++++++++++++")
            if command.responseCode != .ok {
                isRunning = false
                errorResponseCode = command.responseCode.rawValue
                print("response error with code = \(command.responseCode)")
                return
            }
            let copiedData = data.deepCopy()
            command.decodeData(data: copiedData)
        } catch {
            isRunning = false
            print("Error Send Command with error: \(error.localizedDescription)")
        }
    }
}
```

sendCommand(command:) is called in a while loop in an async/await way, so each command waits for the previous one to finish before executing. The loop keeps running as long as the device is connected to the camera. The result: the process runs with no problem at all for several minutes — it can get the camera's settings, device info, even its images — but then problems occur. The amount of time is random; sometimes it takes only 15 minutes, sometimes an hour. Two problems are recorded in our case:

1. requestSendPTPCommand(_:outData:completion:) returns empty data without throwing any error, because the error is never caught in the catch block. This is my printed result:

```
sendCommand ++++++++++++++++++++++++++++++++++++++
sendCommand GetObjectHandlesCommand : sendCommand Started
sendCommand GetObjectHandlesCommand : sendCommand Finished
sendCommand data: 0
sendCommand response: 0
sendCommand GetObjectHandlesCommand : sendCommand Finished with response code undefined
sendCommand ++++++++++++++++++++++++++++++++++++++
```

2. It crashes, with these last messages in my logger:

```
sendCommand ++++++++++++++++++++++++++++++++++++++
sendCommand GetObjectHandlesCommand : sendCommand Started
2023-10-27 10:44:37.186768+0700 PTPHelper_Example[76486:11538353] [PTPHelper_Example] remoteCamera ! Canon EOS 200D - Error Domain=NSCocoaErrorDomain Code=4097 "connection to service with pid 76493 created from an endpoint" UserInfo={NSDebugDescription=connection to service with pid 76493 created from an endpoint}
2023-10-27 10:44:37.187146+0700 PTPHelper_Example[76486:11538353] [PTPHelper_Example] failureBlock ! Canon EOS 200D - Failure block was called due to a connection failure.
```

For the crash, I tried to attach the log to this post, but it always failed with the message "An error occurred while uploading this log. Please try again later.", so I uploaded it to my Google Drive: https://drive.google.com/file/d/1IvJohGwp1zSkncTWHc3430weGntciB3K/view?usp=sharing

Reproduced on iOS 16.3.1. I've checked the stack traces of the other threads, but nothing suspicious got my attention. It might relate to this issue: https://developer.apple.com/forums/thread/104576, but I can't be sure. Any good idea of how to address these crashes? Thank you!
Post not yet marked as solved
1 Reply
462 Views
Using ImageCaptureCore to send PTP commands to cameras over tether, I noticed that all of my Nikon cameras can take up to an entire minute before PTP events start logging, while my Canons and Sonys are ready instantly. Any idea why? I use ICDeviceBrowser to browse for cameras and then request to open the session. According to the docs, the device is ready after it enumerates its objects. If that's the case, is there a way to bypass the enumeration? It is slow even with an empty SD card.
Post not yet marked as solved
1 Reply
475 Views
I'm working on a game which uses HDR display output for a much brighter range. One of the features of the game is the ability to export in-game photos, and the only appropriate format I found for this is OpenEXR. The built-in Photos app is capable of showing HDR photos on an HDR display. However, if I drop an EXR file with a large range into Photos, it is not properly displayed in HDR mode with the full range. At the same time, pressing Edit on the file makes it HDR-displayable, and it remains displayable if I save the edit with any change, even a tiny one. Moreover, if the broken EXR file is placed next to a 'true' HDR one (or an EXR 'fixed' as above), then during scrolling between the files, the broken EXR magically fixes itself at the exact moment the other HDR image slides onto the screen. I tested different files with various internal formats; this seems to be a common problem for all of them. Tested on the latest iOS 17.0.3. Thank you in advance.
Post not yet marked as solved
0 Replies
329 Views
My goal is to get/save a captured photo (from the default camera) immediately in my app while the app is in the background. When I capture a photo with the default camera, photoLibraryDidChange(_:) does not execute at that time. When I reopen my app, the function executes and delivers the images that were captured in the meantime. How can I get photoLibraryDidChange(_:) to execute while the app is in the background?
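For context, a sketch of the standard change-observer registration; note that PhotoKit only delivers these callbacks while the process is running, so a suspended app will not be woken for library changes — catching up on foregrounding, as the post observes, is the expected behavior:

```swift
import Photos

final class LibraryObserver: NSObject, PHPhotoLibraryChangeObserver {
    private var fetchResult: PHFetchResult<PHAsset>

    override init() {
        fetchResult = PHAsset.fetchAssets(with: .image, options: nil)
        super.init()
        PHPhotoLibrary.shared().register(self)   // callbacks arrive only while the app runs
    }

    func photoLibraryDidChange(_ changeInstance: PHChange) {
        guard let changes = changeInstance.changeDetails(for: fetchResult) else { return }
        fetchResult = changes.fetchResultAfterChanges
        print("Inserted assets:", changes.insertedObjects.count)
    }

    deinit {
        PHPhotoLibrary.shared().unregisterChangeObserver(self)
    }
}
```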
Post not yet marked as solved
0 Replies
325 Views
Hi developers. I am currently developing an iOS camera app, and I need to add a feature like Photographic Styles (as in iOS 13) — I only need that page view, not the filters. This is my big problem: I used UIPageViewController with a swipe gesture, but if I put the page view in the background, the main camera view functions also run. I have one button, and when I press it I want to show a view like the Photographic Styles view — just the view. This is my problem and I can't solve it, so if anyone reads this comment, please help. Thanks in advance.
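A minimal sketch of one way to approach this, under the assumption that the goal is a swipeable overlay shown on demand over the camera preview; all names here are illustrative:

```swift
import UIKit

// Sketch: present a swipeable "styles" page view as a child overlay,
// toggled by a button, without replacing the camera view underneath.
final class CameraViewController: UIViewController {
    private let stylesPager = UIPageViewController(transitionStyle: .scroll,
                                                   navigationOrientation: .horizontal)

    @objc func stylesButtonPressed() {
        if stylesPager.parent == nil {
            addChild(stylesPager)
            stylesPager.view.frame = view.bounds
            stylesPager.view.backgroundColor = .clear   // camera preview stays visible behind it
            view.addSubview(stylesPager.view)
            stylesPager.didMove(toParent: self)
            // stylesPager.dataSource would supply the per-style pages here.
        } else {
            stylesPager.willMove(toParent: nil)
            stylesPager.view.removeFromSuperview()
            stylesPager.removeFromParent()
        }
    }
}
```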