Discuss using the camera on Apple devices.

Posts under Camera tag

177 Posts
Post not yet marked as solved
0 Replies
241 Views
■detail
In a Xamarin iOS app, there is a screen (Screen A) designed for capturing ID photos. We've written code to set the default camera zoom to 2x when opening Screen A, enabling users to take photos by pressing a button. The subsequent screen (Screen B) serves as a preview screen for the photos taken on Screen A. The issue at hand is that photos captured on Screen A are unintentionally displayed in grayscale on Screen B; the correct behavior is to display them in color. This problem occurs only on iPhone 14 Pro Max with iOS 17.0; it does not occur on iPhone 15 Pro with iOS 17.1. Moreover, when the 2x zoom code is not present in the capture settings, photos are displayed in color on Screen B on iPhone 14 Pro Max with iOS 17.0. If the 2x zoom code is present and the AVCaptureSession's SessionPreset is set to Preset640x480, the photos are also displayed in color on Screen B on iPhone 14 Pro Max with iOS 17.0. Is there a case where the AVCaptureSession's SessionPreset setting on iPhone 14 Pro Max with iOS 17.0 causes an unintentional grayscale conversion when processing images after taking a 2x zoom photo?
■how to reproduce
Use the camera 2x zoom code with the AVCaptureSession's SessionPreset set to Preset during capture on iPhone 14 Pro Max with iOS 17.0, using Xcode 15.1's iOS SDK (17.2).
■environment
We are building the app with Xamarin.iOS in Visual Studio for Mac. During the build process, Xcode 15.1 (iOS SDK 17) is used.
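For reference, a minimal native Swift sketch of the capture setup described above (the app itself is Xamarin C#, so the function name and the preset value here are illustrative assumptions, not the app's actual code):

```swift
import AVFoundation

// Hypothetical, simplified equivalent of the setup in question: a session preset plus a 2x zoom.
func configureSession(_ session: AVCaptureSession, device: AVCaptureDevice) throws {
    session.beginConfiguration()
    session.sessionPreset = .photo   // the post reports different behavior with .vga640x480

    let input = try AVCaptureDeviceInput(device: device)
    if session.canAddInput(input) { session.addInput(input) }

    let photoOutput = AVCapturePhotoOutput()
    if session.canAddOutput(photoOutput) { session.addOutput(photoOutput) }
    session.commitConfiguration()

    // The 2x zoom that the post associates with the grayscale preview issue.
    try device.lockForConfiguration()
    device.videoZoomFactor = min(2.0, device.activeFormat.videoMaxZoomFactor)
    device.unlockForConfiguration()
}
```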
Posted by ethan731. Last updated.
Post not yet marked as solved
0 Replies
260 Views
Hello, I've been developing a web app for which I need the front camera, and I need to take pictures at a higher resolution. But I have one issue: when I call navigator.mediaDevices.getUserMedia() in the browser to get the resolution of the camera, it reports 2052 x 2736, even though it is a 12 MP front camera. When I take a picture of myself using the Camera app on the iPad, it produces a 12 MP picture. The back camera reports its resolution fine. You can also test this on webcamtests.com to see the resolution.
Posted. Last updated.
Post not yet marked as solved
0 Replies
359 Views
Is it possible to get the camera intrinsic matrix for a captured single photo on iOS? I know that one can get the cameraCalibrationData from an AVCapturePhoto, which also contains the intrinsicMatrix. However, this is only provided when using a constituent (i.e. multi-camera) capture device and setting virtualDeviceConstituentPhotoDeliveryEnabledDevices to multiple devices (or enabling isDualCameraDualPhotoDeliveryEnabled on older iOS versions). Then photoOutput(_:didFinishProcessingPhoto:) is called multiple times, delivering one photo for each camera specified, and those photos contain the calibration data. As far as I know, there is no way to get the calibration data for a normal, single-camera photo capture. I also found that one can set isCameraIntrinsicMatrixDeliveryEnabled on a capture connection that leads to an AVCaptureVideoDataOutput. The buffers that arrive at that output's delegate then contain the intrinsic matrix via the kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix attachment. However, this requires adding another output to the capture session, which feels quite wasteful just for getting this piece of metadata, and I would somehow need to figure out which buffer was temporally closest to when the actual photo was taken. Is there a better, simpler way of getting the camera intrinsic matrix for a single photo capture? If not, is there a way to calculate the matrix from the image's metadata?
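For illustration, a minimal Swift sketch of the video-data-output workaround mentioned above, assuming the session already has a camera input attached:

```swift
import AVFoundation
import simd

// Add a video data output purely to receive the intrinsic matrix as a per-buffer attachment.
func enableIntrinsicDelivery(on session: AVCaptureSession, output: AVCaptureVideoDataOutput) {
    if session.canAddOutput(output) {
        session.addOutput(output)
    }
    if let connection = output.connection(with: .video),
       connection.isCameraIntrinsicMatrixDeliverySupported {
        connection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}

// Inside captureOutput(_:didOutput:from:), the matrix can be read from the buffer attachment.
func intrinsicMatrix(from sampleBuffer: CMSampleBuffer) -> matrix_float3x3? {
    guard let data = CMGetAttachment(sampleBuffer,
                                     key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                     attachmentModeOut: nil) as? Data else { return nil }
    return data.withUnsafeBytes { $0.load(as: matrix_float3x3.self) }
}
```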
Posted. Last updated.
Post not yet marked as solved
1 Reply
303 Views
I am new to Objective-C and relatively new to iOS development. I have an AVCaptureDevice object at hand and would like to print the maximum supported photo dimensions, as provided by activeFormat.supportedMaxPhotoDimensions, using Objective-C. I tried the following:

```objc
for (NSValue *obj in device.activeFormat.supportedMaxPhotoDimensions) {
    CMVideoDimensions *vd = (__bridge CMVideoDimensions *)obj;
    NSString *s = [NSString stringWithFormat:@"res=%d:%d", vd->width, vd->height];
    // print that string
}
```

If I run this code, I get: res=314830880:24994. This is way too high, and there is obviously something I am doing wrong, but I don't know what it could be. According to the information I see on the developer forum, I should get something closer to 4000:3000. I can successfully read device.activeFormat.videoFieldOfView and other fields, so I believe my code is sound overall.
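For comparison, a minimal Swift sketch of reading the same property; in Swift the array bridges directly to CMVideoDimensions values, whereas in Objective-C the NSValue needs to be unwrapped (for example with -getValue:size:) rather than bridge-cast:

```swift
import AVFoundation

// Print every supported maximum photo dimension of the device's active format (iOS 16+ API).
func printSupportedMaxPhotoDimensions(for device: AVCaptureDevice) {
    for dimensions in device.activeFormat.supportedMaxPhotoDimensions {
        print("res=\(dimensions.width):\(dimensions.height)")
    }
}
```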
Posted. Last updated.
Post not yet marked as solved
4 Replies
443 Views
Using an iPhone 8 and iOS 16.7.5, when taking a picture, the EXIF data I'm getting does not seem to make sense. I am getting: FocalLength: 399/100, FocalLengthIn35mmFilm: 177. The FocalLength EXIF field is correct, since the iPhone 8's back lens does have a focal length of 3.99mm. The FocalLengthIn35mmFilm value, however, is wrong: the actual value should (obviously) be much lower, probably between 23 and 27mm (ish). Could this be a bug, or might FocalLengthIn35mmFilm be expressed in a unit I am not aware of? Thanks for your help.
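As a rough sanity check of what FocalLengthIn35mmFilm should be, here is the standard conversion; the sensor diagonal below is an assumed approximate value, not a confirmed spec:

```swift
// 35mm-equivalent focal length = actual focal length * (35mm frame diagonal / sensor diagonal)
let focalLengthMM = 399.0 / 100.0      // 3.99 mm, from the EXIF FocalLength rational
let fullFrameDiagonalMM = 43.27        // diagonal of a 36 x 24 mm frame
let assumedSensorDiagonalMM = 6.0      // assumption: roughly a 1/3"-class sensor
let equivalentMM = focalLengthMM * (fullFrameDiagonalMM / assumedSensorDiagonalMM)
print(equivalentMM)                    // ≈ 28.8 mm, nowhere near 177
```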
Posted. Last updated.
Post not yet marked as solved
0 Replies
246 Views
The 24-megapixel mode is what gets used most for daily photography; the 48-megapixel mode is really only for landscapes or the occasional shot, since its files take up too much storage. The biggest problem with the 14 Pro Max right now is that its photography falls short, and its 12-megapixel camera has long lagged behind Android. Adding a 24-megapixel mode would be far more valuable than another iOS update, and it would make a huge difference to the experience.
Posted by ASD499. Last updated.
Post not yet marked as solved
0 Replies
325 Views
The code found in the "Camera" file in the "Capturing Photos" template gives warnings. How can I get around this while still keeping the same functions as the original "Camera" file? I am using a feature that REQUIRES Mac Catalyst 17.0 or above.
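Assuming the warnings are availability warnings (the exact warning text is not quoted above), the usual way to keep the rest of the file unchanged is to gate just the newer call behind an availability check; a minimal sketch:

```swift
// Hypothetical example: only the call that needs the newer SDK is wrapped.
func configureNewCaptureFeature() {
    if #available(iOS 17.0, macCatalyst 17.0, *) {
        // Call the API that requires Mac Catalyst 17.0 or later here.
    } else {
        // Fall back to what the template already does on older systems.
    }
}
```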
Posted by Pokka. Last updated.
Post not yet marked as solved
2 Replies
332 Views
Apple, in all their wisdom, has deprecated almost every API that can be used to get the interface orientation, because they want developers to treat an orientation change as a simple size change. However, there are uses for the interface orientation other than adjusting the UI. Since the camera is fixed to the device and does not rotate with the interface, images from the camera need to be adjusted for orientation when displaying them and/or processing them for computer vision tasks. Using traits is not a reliable way of determining the orientation, especially when running on an iPad. What is the recommended way to determine the relative angle of the camera with respect to the interface, now that all the interfaceOrientation APIs are deprecated? And specifically: how do you get notified of an interface orientation change?
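One non-deprecated route, offered here as a sketch rather than an official recommendation, is to read the orientation from the view's UIWindowScene and pick up changes in viewWillTransition(to:with:):

```swift
import UIKit

class CameraViewController: UIViewController {
    // Current interface orientation, read from the scene rather than the deprecated APIs.
    var currentInterfaceOrientation: UIInterfaceOrientation {
        view.window?.windowScene?.interfaceOrientation ?? .portrait
    }

    // Interface rotations arrive as size changes, so the new orientation can be read
    // once the transition completes.
    override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
        super.viewWillTransition(to: size, with: coordinator)
        coordinator.animate(alongsideTransition: nil) { _ in
            let orientation = self.currentInterfaceOrientation
            // Recompute the camera-to-interface rotation here.
            print("Interface orientation is now \(orientation.rawValue)")
        }
    }
}
```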
Posted by Aaargh. Last updated.
Post not yet marked as solved
1 Reply
356 Views
I'm trying to use the host camera from inside a virtual machine created using the Virtualization framework in Swift. I can't seem to figure out how to achieve this, though; unlike audio devices, keyboards, displays, etc., there doesn't seem to be a corresponding class and docs page for cameras or generic USB devices. Is there any way to connect a built-in Apple camera to a Mac virtual machine created with the Virtualization framework?
Posted. Last updated.
Post not yet marked as solved
0 Replies
483 Views
I have an app that is getting rejected from TestFlight because of this error: ITMS-90683: Missing purpose string in Info.plist - Your app’s code references one or more APIs that access sensitive user data, or the app has one or more entitlements that permit such access. The Info.plist file for the “TurtleTuner.app” bundle should contain a NSCameraUsageDescription key with a user-facing purpose string explaining clearly and completely why your app needs the data. If you’re using external libraries or SDKs, they may reference APIs that require a purpose string. While your app might not use these APIs, a purpose string is still required. For details, visit: https://developer.apple.com/documentation/uikit/protecting_the_user_s_privacy/requesting_access_to_protected_resources. The app does not use the camera, only the microphone. I cannot find references to the camera in any of the third party libraries I'm using. What are some ways to troubleshoot this beyond looking for "camera" in the few dependencies? For context, this commit allows the app to get through successfully to TestFlight: https://github.com/tsargent/turtle-tuner/commit/67d4a52e62839ad6c2a49848bea9c408d983f17a While this following commit, which reverts the commit, fails on TestFlight with the mentioned camera permission error: https://github.com/tsargent/turtle-tuner/commit/c95b0b16c4e85d77e625d36b816ed53faa826cf5
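Since the rejection text itself says a purpose string is required even when the APIs are only referenced by a dependency, one workaround is simply to add the key to Info.plist; the wording below is only a placeholder:

```xml
<key>NSCameraUsageDescription</key>
<string>This app does not capture photos or video; a linked library references camera APIs.</string>
```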
Posted by twsargent. Last updated.
Post not yet marked as solved
1 Reply
711 Views
I have an iOS app that uses the (camera) video feed and applies CoreImage filters to simulate a specific real-world effect (for educational purposes). Now I want to make a similar app for visionOS and apply the same CoreImage filters to the content (live view) users see while wearing the Apple Vision Pro headset. Is there a way to do this with the current APIs, and what would you recommend? I saw that we cannot get the video feed from the camera(s); is there a way to do it with ARKit and apply the filters using that? I know visionOS is a young/fresh platform, but any help would be great! Thank you!
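Whatever the pixel source ends up being, the Core Image step itself is source-agnostic; a minimal sketch (the filter choice is just an example):

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Apply a filter to one frame; the CVPixelBuffer could come from any source you do have access to.
func filtered(_ pixelBuffer: CVPixelBuffer) -> CIImage? {
    let input = CIImage(cvPixelBuffer: pixelBuffer)
    let filter = CIFilter.sepiaTone()   // example effect
    filter.inputImage = input
    filter.intensity = 0.8
    return filter.outputImage
}
```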
Posted by darescore. Last updated.
Post not yet marked as solved
2 Replies
543 Views
I’m using the AVFoundation Swift APIs to record video (CMSampleBuffers) and audio (CMSampleBuffers) to a file using AVAssetWriter. Initializing the AVAssetWriter happens quite quickly, but calling assetWriter.startWriting() fully blocks the entire application AND ALL THREADS for 3 seconds. This only happens in Debug builds, not in Release. Since it blocks all threads and only happens in Debug, I’m led to believe that this is an Xcode/Debugger/LLDB hang issue that I’m seeing. Does anyone experience something similar? Here’s how I set all of that up: startRecording(...) And here’s the line that makes it hang for 3+ seconds: assetWriter.startWriting(...)
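The actual setup code is not included above; for comparison, a minimal AVAssetWriter sketch with startWriting() kept off the main thread (the output URL and settings are placeholders):

```swift
import AVFoundation

// Minimal writer setup; in the report above the hang occurs on startWriting() in Debug builds.
func makeWriter(outputURL: URL, videoSettings: [String: Any]) throws -> (AVAssetWriter, AVAssetWriterInput) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
    videoInput.expectsMediaDataInRealTime = true
    if writer.canAdd(videoInput) { writer.add(videoInput) }
    return (writer, videoInput)
}

func startRecording(with writer: AVAssetWriter, firstSampleTime: CMTime) {
    DispatchQueue.global(qos: .userInitiated).async {
        writer.startWriting()
        writer.startSession(atSourceTime: firstSampleTime)
    }
}
```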
Posted by mrousavy. Last updated.
Post not yet marked as solved
0 Replies
212 Views
My app uses the camera and the photo library. I found that if a user follows certain steps, they will no longer be able to change the photo permissions for my app in the Settings app. The steps are as follows:
1. Press the camera button in the app to launch the camera.
2. Take a picture with camera permissions granted.
3. Grant ".addOnly" permission to the photo library.
4. Press the photo library button in the app to read the photo library.
5. Deny ".readWrite" permission to the photo library.
After step 5, the Settings app only shows an item for switching the ".addOnly" permission, but not the ".readWrite" permission. I am aware that in iOS 14 or later, the permission required after a photo is taken with the camera should be ".addOnly", so I suspect that this problem also occurs in other apps. So far I have worked around this problem in my app, but is this the expected behavior of the Settings app? If so, how can I avoid this problem?
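For reference, a minimal sketch of how the two authorization levels involved in the steps above are requested with the iOS 14+ API (where exactly the app calls these is an assumption):

```swift
import Photos

// Step 3: saving a captured photo only needs the add-only level.
func requestAddOnlyAccess(completion: @escaping (PHAuthorizationStatus) -> Void) {
    PHPhotoLibrary.requestAuthorization(for: .addOnly, handler: completion)
}

// Steps 4 and 5: reading the library requires the read-write level, which the user then denies.
func requestReadWriteAccess(completion: @escaping (PHAuthorizationStatus) -> Void) {
    PHPhotoLibrary.requestAuthorization(for: .readWrite, handler: completion)
}
```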
Posted by MikageX86. Last updated.
Post not yet marked as solved
1 Reply
354 Views
I have two USB cameras. Both are recognized properly as cameras under Windows, but on macOS one of them is recognized only as a USB device, not as a camera. I would like to ask whether there is any idea for a solution, or a reference to the relevant underlying code. Thank you.
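A quick way to check whether macOS exposes the device as a capture device at all is to enumerate external cameras; a minimal sketch (.external requires macOS 14, older systems used .externalUnknown):

```swift
import AVFoundation

// List every external video capture device the system currently exposes.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.external],
    mediaType: .video,
    position: .unspecified
)
for device in discovery.devices {
    print(device.localizedName, device.uniqueID)
}
```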
Posted by CloserNs. Last updated.
Post not yet marked as solved
0 Replies
456 Views
This isn't just my observation; lots of people around me say the same, and you can find tonnes of feedback on the interwebs. Images taken with the front-facing camera on the 15 (and I think the 14 before it) are so over-processed that I know people who have jumped to other phones. And they're right; the 15 exacerbates it even more. You can turn off HDR (a viewing thing), and you can prioritise speed over processing, but you really cannot turn this off. You can take a Live Photo and then choose a different frame, and the processing is less. As a developer I look at that and think it's bonkers; it's just software, so why hasn't anyone produced a camera app that makes faces look good (not AI processing) from the front camera? I could be all enthusiastic and say I will develop one, but it seems like a simple, obvious fix for Apple. To have the settings so bad that I have friends returning their phones seems pretty bad. And as a photographer I would agree. There's a lot to love with Apple on the 15, the Log recording and ProRes, but a simple selfie produces such ugly results. That's an actual problem. So I'm throwing it out there: what does everyone think? Cheers, Paul
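For what it's worth, the "prioritise speed over processing" knob mentioned above is also exposed to third-party capture apps; a minimal sketch (this reduces, but does not disable, the processing):

```swift
import AVFoundation

// Ask the photo pipeline to favour speed over additional processing.
func speedPrioritizedSettings(for photoOutput: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    photoOutput.maxPhotoQualityPrioritization = .speed
    let settings = AVCapturePhotoSettings()
    settings.photoQualityPrioritization = .speed
    return settings
}
```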
Posted. Last updated.
Post not yet marked as solved
0 Replies
430 Views
Hi iOS community, I need your help. I am working on an application where I am capturing a photo from the back camera using AVCaptureSession. It is working fine on devices running iOS 17+, but I am facing an error on an iPhone X running iOS 16.7.4.

ERROR:
error: Optional(Error Domain=AVFoundationErrorDomain Code=-11803 "Cannot Record" UserInfo={NSUnderlyingError=0x283f0b780 {Error Domain=NSOSStatusErrorDomain Code=-16409 "(null)"}, NSLocalizedRecoverySuggestion=Try recording again., AVErrorRecordingFailureDomainKey=3, NSLocalizedDescription=Cannot Record})

My code:

```swift
final class CedulaScanningVC: UIViewController {
    var captureSession: AVCaptureSession!
    var stillImageOutput: AVCapturePhotoOutput!
    var videoPreviewLayer: AVCaptureVideoPreviewLayer!
    var delegate: ScanCedulaDelegate?

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        self.captureSession.stopRunning()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        setupCamera()
    }

    // MARK: - Configure Camera
    func setupCamera() {
        captureSession = AVCaptureSession()
        captureSession.sessionPreset = .medium
        guard let backCamera = AVCaptureDevice.default(for: AVMediaType.video) else {
            print("Unable to access back camera!")
            return
        }
        let input: AVCaptureDeviceInput
        do {
            input = try AVCaptureDeviceInput(device: backCamera)
            // Step 9
            stillImageOutput = AVCapturePhotoOutput()
            if captureSession.canAddInput(input) && captureSession.canAddOutput(stillImageOutput) {
                captureSession.addInput(input)
                captureSession.addOutput(stillImageOutput)
                setupLivePreview()
            }
        } catch let error {
            print("Error Unable to initialize back camera: \(error.localizedDescription)")
        }
    }

    func setupLivePreview() {
        videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        videoPreviewLayer.videoGravity = .resizeAspectFill
        videoPreviewLayer.connection?.videoOrientation = .portrait
        self.view.layer.addSublayer(videoPreviewLayer)
        // Step 12
        DispatchQueue.global(qos: .userInitiated).async { [weak self] in
            self?.captureSession.startRunning()
            // Step 13
            DispatchQueue.main.async {
                self?.videoPreviewLayer.frame = self?.view.bounds ?? .zero
            }
        }
    }

    func failed() {
        let ac = UIAlertController(title: "Scanning not supported", message: "Your device does not support scanning a code from an item. Please use a device with a camera.", preferredStyle: .alert)
        ac.addAction(UIAlertAction(title: "OK", style: .default))
        present(ac, animated: true)
        captureSession = nil
    }

    // MARK: - Actions
    func cameraButtonPressed() {
        let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
        stillImageOutput.capturePhoto(with: settings, delegate: self)
    }
}

extension CedulaScanningVC: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        print("error: \(error)")
        captureSession.stopRunning()
        DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) { [weak self] in
            guard let self = self else { return }
            guard let imageData = photo.fileDataRepresentation() else {
                print("NO image captured")
                return
            }
            let image = UIImage(data: imageData)
            self.delegate?.capturedImage(image: image)
        }
    }
}
```

I don't know what I am doing wrong?
Posted. Last updated.
Post not yet marked as solved
0 Replies
186 Views
There is a need to obtain data on the position of the TrueDepth camera matrix. I couldn't find anything in the documentation. Has anyone solved this problem? Is it possible to obtain this data at all?
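If "camera matrix" here means the TrueDepth camera's intrinsic and extrinsic matrices (an assumption), ARKit exposes both on every frame of a face-tracking session; a minimal sketch:

```swift
import ARKit

// Each ARFrame from a TrueDepth-based face-tracking session carries the camera's
// intrinsics (3x3) and its pose relative to world space (4x4 transform).
func readCameraMatrices(from session: ARSession) {
    guard let frame = session.currentFrame else { return }
    let intrinsics = frame.camera.intrinsics    // simd_float3x3
    let extrinsics = frame.camera.transform     // simd_float4x4 camera-to-world pose
    print(intrinsics, extrinsics)
}
```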
Posted by AlexChuk. Last updated.