Discuss using the camera on Apple devices.

Posts under Camera tag

169 Posts
Post not yet marked as solved
0 Replies
86 Views
There's a similar post on StackOverflow, and multiple people have reported this (you will encounter it if you run the app for around 10 minutes). I'm hoping this can get Apple's attention somehow. After downloading the project code (https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera) and running the Swift sample code on an iPhone 14 Pro, the app crashes intermittently, throwing this error: "Execution of the command buffer was aborted due to an error during execution. Caused GPU Timeout Error (00000002:kIOGPUCommandBufferCallbackErrorTimeout)". Sometimes it crashes within a few seconds; sometimes it takes around 10 minutes. Has anyone here experienced this crash with the sample code, or while using the LiDAR camera? I have spent a long time trying to solve this issue, searched the web high and low, and submitted (I think) a report to Apple about it. I am unable to get Xcode to show me the line of code where the crash is happening. Any help would be greatly appreciated.
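Not part of the original post, but one way to see where the GPU timeout originates (assuming the sample's Metal rendering path) is to attach a completed handler to the command buffer and log its error instead of relying on the crash report; the function name below is illustrative:

import Metal

// Sketch: log command buffer failures so the timeout can be correlated with a frame.
// `commandBuffer` is assumed to come from the sample's MTLCommandQueue.
func commitWithDiagnostics(_ commandBuffer: MTLCommandBuffer) {
    commandBuffer.addCompletedHandler { buffer in
        if let error = buffer.error {
            // A GPU timeout surfaces here as an MTLCommandBufferError.
            print("Command buffer failed: \(error)")
        }
    }
    commandBuffer.commit()
}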
Posted Last updated
.
Post not yet marked as solved
0 Replies
117 Views
I am currently renovating an application for macOS Sonoma (14.4) which triggers a Canon 60D via USB cable. Unlike what happened before in macOS 10.6, the camera (ICCameraDevice) has a description that contains only 2 capabilities:
{
    UUIDString = "00000000-0000-0000-0000-000004A93215";
    autolaunchApplicationPath = "";
    capabilities = (
        ICCameraDeviceCanDeleteOneFile,
        ICCameraDeviceCanAcceptPTPCommands
    );
    class = ICCameraDevice;
    connectionID = 0xffff0001;
    delegate = "<0x600003157ac0>";
    deviceID = 0xffff0001;
    deviceRef = 0xffff0001;
    iconPath = "(null)";
    locationDescription = ICDeviceLocationDescriptionUSB;
    moduleExecutableArchitecture = 0;
    modulePath = "/System/Library/Image Capture/Devices/PTPCamera.app";
    moduleVersion = "1.0";
    name = "Canon EOS 60D";
    persistentIDString = "00000000-0000-0000-0000-000004A93215";
    shared = NO;
    softwareInstallPercentDone = "0.000000";
    transportType = ICTransportTypeUSB;
    type = 0x00000101;
}
timeOffset : 0.000000
hasConfigurableWiFiInterface : N/A
isAccessRestrictedAppleDevice : NO
As you can see, ICCameraDeviceCanTakePicture is not present now, so I cannot take a picture with requestTakePicture. Do I need to do anything special to regain these capabilities, as in older versions of macOS? Is my only option to use PTP commands? Thanks!
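A hedged sketch of the PTP route, in case that turns out to be the only option: the completion-based requestSendPTPCommand(_:outData:completion:) is assumed to be available (macOS 10.15+), the container layout (length, type, opcode, transaction ID, little-endian) follows the standard PTP operation request format, and 0x100E (InitiateCapture) is only an example; Canon bodies may require a vendor-specific opcode instead.

import ImageCaptureCore

// Append a fixed-width integer to the packet in little-endian byte order.
func appendLE<T: FixedWidthInteger>(_ value: T, to data: inout Data) {
    withUnsafeBytes(of: value.littleEndian) { data.append(contentsOf: $0) }
}

// Hypothetical helper: send a bare PTP InitiateCapture to the camera.
func sendInitiateCapture(to camera: ICCameraDevice, transactionID: UInt32) {
    var packet = Data()
    appendLE(UInt32(12), to: &packet)      // container length (no parameters)
    appendLE(UInt16(1), to: &packet)       // container type: command block
    appendLE(UInt16(0x100E), to: &packet)  // opcode: InitiateCapture (example only)
    appendLE(transactionID, to: &packet)
    camera.requestSendPTPCommand(packet, outData: Data()) { _, _, error in
        if let error = error { print("PTP command failed: \(error)") }
    }
}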
Posted
by da_gagnon.
Last updated
.
Post not yet marked as solved
2 Replies
141 Views
When running an iOS app designed for iPad on an M1 Mac mini, the UIImagePickerController.isSourceTypeAvailable(.camera) API returns true, leading to a crash (attached) if the camera is selected to upload an image to the app, since my much-loved Mac mini does not have a camera. For the moment I have disabled the camera when the platform is Mac by adding the qualification ProcessInfo().isiOSAppOnMac == false, but this seems like a bug. Or does the crash also happen on Macs that do have cameras? Other image picker options work fine. Crash log
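For reference, a minimal sketch of the workaround described above (checking both camera availability and whether the iOS app is running on a Mac); the function name is illustrative:

import UIKit

// Only offer the camera source when it is reported available and we are not
// an iOS app running on a Mac, to avoid the crash on camera-less machines.
func cameraSourceIsUsable() -> Bool {
    guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return false }
    if ProcessInfo.processInfo.isiOSAppOnMac { return false }
    return true
}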
Posted
by ptclarke.
Last updated
.
Post marked as solved
1 Reply
114 Views
I want to take 48 MP photos and get the same ISO and exposure duration that I set.
Configuration:
Set the active AVCaptureDevice.Format to a format whose supportedMaxPhotoDimensions contains the (8064, 6048) size
Set AVCapturePhotoOutput.maxPhotoDimensions to (8064, 6048)
If AVCaptureDevice.isExposureModeSupported(.custom), set AVCaptureDevice.exposureMode = .custom
Call AVCaptureDevice.setExposureModeCustomWithDuration:1/20 ISO:100 completionHandler:handler
Taking a photo:
Set AVCapturePhotoSettings.maxPhotoDimensions to (8064, 6048)
The API discussion of setExposureModeCustomWithDuration (https://developer.apple.com/documentation/avfoundation/avcapturedevice/1624646-setexposuremodecustomwithduratio/) told me: "To ensure that the receiver's ISO and exposureDuration values are honored while in AVCaptureExposureModeCustom or AVCaptureExposureModeLocked, you must set your AVCapturePhotoSettings.photoQualityPrioritization property to AVCapturePhotoQualityPrioritizationSpeed."
But at the last step, when I set AVCapturePhotoSettings.photoQualityPrioritization = .speed, the photo resolution is (4000, 3000), only 12 MP, not (8000, 6000); the ISO and exposure duration on the photo are the same as what I set. And when I set AVCapturePhotoSettings.photoQualityPrioritization = .balanced or .quality, the photo is (8000, 6000), but the ISO and exposure duration obtained on the photo differ from the ones I set. What do I need to do to take 48 MP photos and set the ISO and exposure duration successfully?
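For reference, a minimal sketch of the configuration described above (device and output variables are placeholders assumed to be wired into a running session); note the apparent conflict: .speed is what the documentation requires for custom exposure, yet it seems to cap the output at 12 MP.

import AVFoundation

// Sketch: custom exposure plus 48 MP dimensions, assuming `device` is the
// active AVCaptureDevice and `photoOutput` is attached to the session.
func configureCustomExposure(device: AVCaptureDevice, photoOutput: AVCapturePhotoOutput) throws {
    try device.lockForConfiguration()
    if device.isExposureModeSupported(.custom) {
        device.setExposureModeCustom(duration: CMTime(value: 1, timescale: 20),
                                     iso: 100,
                                     completionHandler: nil)
    }
    device.unlockForConfiguration()
    photoOutput.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
}

func makeCustomExposureSettings() -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings()
    settings.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
    // Required for the custom ISO/duration to be honored, but appears to cap output at 12 MP.
    settings.photoQualityPrioritization = .speed
    return settings
}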
Posted
by Zard.
Last updated
.
Post not yet marked as solved
3 Replies
209 Views
The methods described in https://developer.apple.com/forums/thread/715452?answerId=729571022#729571022 to obtain 48 MP image captures no longer seem to work on iOS 17.4 under certain circumstances. Previously, the following steps were sufficient to get 48 MP capture from AVFoundation:
Configuration:
Set the active AVCaptureDevice.Format to a format where supportedMaxPhotoDimensions contains the (8064, 6048) size
Set AVCapturePhotoOutput.maxPhotoDimensions to (8064, 6048)
Set AVCapturePhotoOutput.maxPhotoQualityPrioritization to .quality
Taking a photo:
Set AVCapturePhotoSettings.maxPhotoDimensions to (8064, 6048)
Set AVCapturePhotoSettings.photoQualityPrioritization to .quality
As of iOS 17.4, the exact same code that worked through 17.3 no longer works if the session was configured manually (resulting in the .inputPriority session preset) rather than using a session preset (like .high). When configuring the session manually, all the intervening steps work (an active format can be found with the appropriate dimensions, the photo output settings can be set to 8064x6048 successfully, etc.), but the resulting photo is 4032x3024. Again, these same steps worked flawlessly prior to iOS 17.4. Am I missing something? Did iOS 17.4 change the requirements for 48 MP capture, or is this a bug?
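For reference, a sketch of the steps listed above as code (assuming `device` and `photoOutput` are already attached to a manually configured session); this is the pattern that reportedly stopped yielding 48 MP output on 17.4:

import AVFoundation

// Configuration: pick a format that supports 8064x6048 and raise the output limits.
func configureFor48MP(device: AVCaptureDevice, photoOutput: AVCapturePhotoOutput) throws {
    let target = CMVideoDimensions(width: 8064, height: 6048)
    guard let format = device.formats.first(where: { candidate in
        candidate.supportedMaxPhotoDimensions.contains {
            $0.width == target.width && $0.height == target.height
        }
    }) else { return }
    try device.lockForConfiguration()
    device.activeFormat = format
    device.unlockForConfiguration()
    photoOutput.maxPhotoDimensions = target
    photoOutput.maxPhotoQualityPrioritization = .quality
}

// Taking a photo: repeat the dimensions and prioritization on the settings object.
func capture48MP(with photoOutput: AVCapturePhotoOutput, delegate: AVCapturePhotoCaptureDelegate) {
    let settings = AVCapturePhotoSettings()
    settings.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
    settings.photoQualityPrioritization = .quality
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}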
Posted
by tenuki.
Last updated
.
Post not yet marked as solved
1 Reply
172 Views
I'm currently working on an iPad application that uses a third-party SDK to scan a driver's license and then allows the user to take a picture of themselves. However, when the user is directed to the self-photo view, the AVCaptureSession preview freezes. The app as a whole does not freeze, only the preview. I believe this is an issue with the OS, because it only happens on 9th-generation iPads; all the other iPads work fine. Has anyone else seen this issue? Also, is there any way to see logs from the AVCaptureSession so I can see what is happening? Maybe there is a way to detect when it freezes and then restart it.
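Not from the original post, but one hedged way to get logs when the preview stalls is to observe the session's runtime-error and interruption notifications; the observer tokens must be kept alive for as long as you want the logging:

import AVFoundation

// Sketch: log when the capture session errors out or is interrupted.
func observe(session: AVCaptureSession) -> [NSObjectProtocol] {
    let center = NotificationCenter.default
    var tokens: [NSObjectProtocol] = []
    tokens.append(center.addObserver(forName: .AVCaptureSessionRuntimeError,
                                     object: session, queue: .main) { note in
        print("Capture session runtime error:",
              note.userInfo?[AVCaptureSessionErrorKey] ?? "unknown")
    })
    tokens.append(center.addObserver(forName: .AVCaptureSessionWasInterrupted,
                                     object: session, queue: .main) { note in
        print("Capture session interrupted:", note.userInfo ?? [:])
    })
    tokens.append(center.addObserver(forName: .AVCaptureSessionInterruptionEnded,
                                     object: session, queue: .main) { _ in
        print("Capture session interruption ended")
    })
    return tokens
}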
Posted Last updated
.
Post not yet marked as solved
5 Replies
1.2k Views
I'm creating an app that uses AVCaptureSession to pass camera input to AVCaptureMetadataOutput and scan QR codes. After updating to iPadOS 17.4, an issue has appeared where the delegate method of AVCaptureMetadataOutputObjectsDelegate is not called on some devices. The following devices are experiencing this issue:
iPad (7th Gen)
iPad (6th Gen)
iPad Pro (10.5")
iPad Pro (12.9" 2nd Gen)
This issue does not occur on any other devices I have; it may only affect devices with model number "iPad7,x". I tried running the AVFoundation sample code from the Apple Developer site on the devices above, and the same problem still occurs: https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces
Are any additional settings required after iPadOS 17.4, or is there a problem on the OS side?
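For comparison, a minimal metadata-output setup of the kind described above; as far as I know nothing here changed in 17.4, so it is only a baseline to test against:

import AVFoundation

// Sketch: attach a QR-code metadata output to an already-configured session.
func configureQRScanning(session: AVCaptureSession,
                         delegate: AVCaptureMetadataOutputObjectsDelegate) {
    let output = AVCaptureMetadataOutput()
    guard session.canAddOutput(output) else { return }
    session.addOutput(output)
    output.setMetadataObjectsDelegate(delegate, queue: .main)
    // metadataObjectTypes must be set after the output has been added to the session.
    output.metadataObjectTypes = [.qr]
}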
Posted
by N.Otani.
Last updated
.
Post not yet marked as solved
1 Reply
119 Views
Hi, hope all are well! We've been working on a live-streaming app and it's going quite well; we just got the aspect ratio locked as desired. Now the audio: its volume is extremely low. It sounds like it's using the headset mic instead of the bottom mic that's used on FaceTime or on speakerphone calls. We tried flipping cameras and specifying sample rates, almost every constraint in MediaConstraints, with no luck. Is there any way to specify this? Thanks in advance!
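Not from the original post, but if the capture pipeline is WebRTC-based, the built-in mic selection and output route are usually governed by the native AVAudioSession rather than by MediaConstraints; a hedged sketch of the FaceTime/speakerphone-style configuration:

import AVFoundation

// Sketch: route audio like a video call (bottom mic / speaker) before starting the stream.
func configureAudioSessionForStreaming() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .videoChat,
                            options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true)
}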
Posted Last updated
.
Post not yet marked as solved
1 Reply
291 Views
As the title already suggests: is it possible with the current Apple Vision Pro simulator to recognize objects/humans, as is currently possible on the iPhone? I am not even sure whether there is an API for accessing the cameras of the Vision Pro. My goal is to recognize, for example, a human and attach a 3D object to them, for example a hat. Can this be done?
Posted
by wladislaw.
Last updated
.
Post not yet marked as solved
2 Replies
145 Views
Dear Team, I am trying to add a contact from a QR code, but it seems that the built-in QR code reader of the iPhone camera isn't able to correctly decode a full name whose last name contains a space, e.g. Collin A. Al Miller. I have attached all the screenshots for your reference. Here are the examples:
1) When I focus the iPhone camera on the QR code containing the full name (Collin A. Al Miller), the scan gives an empty result without the full name. The attached screenshots: a) CameraQRNotWorking b) NotWorkingQRCOde
2) When I remove the blank space and add a comma or hyphen to the full name, it gets recognised and works perfectly. The attached screenshots: a) CameraQRCodeWorking b) workingQRCODE
3) Both full names work perfectly in the QR camera scanner on Android: Collin A. Al-Miller or Collin A, Al Miller. The attached screenshot: AndroidQRCODE
Hope this issue will get resolved in an upcoming release. Kindly provide feedback related to this issue.
Code to generate the vCard:
var str = "BEGIN:VCARD \n" +
    "VERSION:2.1 \n" +
    "FN:\("Collin A. Al Miller") \n" +
    "TITLE:\("") \n"
if options.showPersonalPhone {
    str.append(contentsOf: "item1.TEL;CELL:\("+91987654320") \n")
    str.append(contentsOf: "item1.X-ABLabel:Mobile\n")
}
if options.showWorkPhone {
    str.append(contentsOf: "item2.TEL;WORK;VOICE:\("+91987654320") \n")
    str.append(contentsOf: "item2.X-ABLabel:Work Phone\n")
}
if options.showEmail {
    str.append(contentsOf: "item3.EMAIL;WORK;INTERNET:\("test@gmail.com") \n")
    str.append(contentsOf: "item3.X-ABLabel:Work Email\n")
}
if options.showWebsite {
    str.append(contentsOf: "URL:www.test.com \n")
}
if options.showLocation {
    str.append(contentsOf: "ADR;WORK:;;\("Bangalore") \n")
}
str.append(contentsOf: "END:VCARD")
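Not part of the original code, but one hedged thing to try: contact scanners often read the structured N: property rather than FN:, so supplying an N line with the multi-word surname in the family-name slot may let the camera keep "Al Miller" intact. This is an untested assumption, not a confirmed fix:

// Assumption: a structured N: property may help the iOS scanner parse a
// family name that contains a space. Untested sketch.
var vcard = "BEGIN:VCARD\n"
vcard += "VERSION:2.1\n"
vcard += "N:Al Miller;Collin;A.;;\n"   // family;given;additional;prefix;suffix
vcard += "FN:Collin A. Al Miller\n"
vcard += "END:VCARD"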
Posted
by Shohib.
Last updated
.
Post not yet marked as solved
0 Replies
165 Views
I need to capture 4K photos with a 4:3 ratio from the camera. I can do this, but I want to disable video stabilization. I can disable video stabilization using the AVCaptureSessionPresetHigh preset, but AVCaptureSessionPresetHigh gives me a 16:9 photo with the surroundings cropped, and unfortunately the 16:9 ratio does not meet my needs. When I run the session using the AVCaptureSessionPresetPhoto preset and add AVCapturePhotoOutput, I cannot turn off image stabilization.

self.capturePhotoOutput = AVCapturePhotoOutput()
self.captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: AVMediaType.video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)
    if captureSession?.canAddOutput(capturePhotoOutput!) == true {   // was `!= nil`, which is always true
        captureSession?.addOutput(capturePhotoOutput!)
    }
    if let connection = capturePhotoOutput?.connection(with: .video) {
        if connection.isVideoStabilizationSupported {
            connection.preferredVideoStabilizationMode = .off
        }
    }
    DispatchQueue.main.async { [self] in
        self.capturePhotoOutput?.isHighResolutionCaptureEnabled = true
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
} catch {
    print("Failed to configure capture session: \(error)")
}

@objc private func handleTakePhoto() {
    let photoSettings = AVCapturePhotoSettings()
    if let photoPreviewType = photoSettings.availablePreviewPhotoPixelFormatTypes.first {
        photoSettings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String: photoPreviewType]
        photoSettings.isAutoStillImageStabilizationEnabled = false
        capturePhotoOutput?.capturePhoto(with: photoSettings, delegate: self)
    }
}

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    if let dataImage = photo.fileDataRepresentation() {
        print(UIImage(data: dataImage)?.size as Any)
        let dataProvider = CGDataProvider(data: dataImage as CFData)
        let cgImageRef: CGImage! = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
        let image = UIImage(cgImage: cgImageRef, scale: 1.0, orientation: rotateImage(orientation: currentOrientation))
    } else {
        print("some error here")
    }
}

As a temporary solution, I added only AVCaptureVideoDataOutput to the session, without adding AVCapturePhotoOutput, and I can capture in 4:3 format with the captureOutput(_:didOutput:from:) function. However, this way I cannot get a 4K image. In short, I need to turn off video stabilization in a session with AVCapturePhotoOutput added.

self.captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: AVMediaType.video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)
    videoDataOutput = AVCaptureVideoDataOutput()
    videoDataOutput?.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
    videoDataOutput?.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
    if captureSession?.canAddOutput(videoDataOutput!) == true {
        captureSession?.addOutput(videoDataOutput!)
    }
    /* If I uncomment these lines, video stabilization is enabled again.
    if captureSession?.canAddOutput(capturePhotoOutput!) == true {
        captureSession?.addOutput(capturePhotoOutput!)
    }
    */
    DispatchQueue.main.async { [self] in
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
} catch {
    print("Failed to configure capture session: \(error)")
}

@objc private func handleTakePhoto() {
    takePicture = true
}

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if !takePicture {
        return // we have nothing to do with the image buffer
    }
    // try to get a CVImageBuffer out of the sample buffer
    guard let cvBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let rect = CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(cvBuffer), height: CVPixelBufferGetHeight(cvBuffer))
    let ciImage = CIImage(cvImageBuffer: cvBuffer)
    let ciContext = CIContext()
    let cgImage = ciContext.createCGImage(ciImage, from: rect)
    guard cgImage != nil else { return }
    let uiImage = UIImage(cgImage: cgImage!)
}
Posted Last updated
.
Post not yet marked as solved
0 Replies
137 Views
While trying to use an external camera, iOS does not detect the exposure settings of the connected camera, and checking isExposureModeSupported always returns false. The captured image also doesn't contain any exposure details. How can we use or change these settings?
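A hedged sketch for checking what the connected camera actually reports, assuming iOS 17's external (.external) device type; many UVC cameras simply do not expose custom exposure controls through AVFoundation, which would explain isExposureModeSupported returning false:

import AVFoundation

// Sketch: locate the external camera and print which exposure modes it supports.
func inspectExternalCameraExposure() {
    let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.external],
                                                     mediaType: .video,
                                                     position: .unspecified)
    guard let camera = discovery.devices.first else {
        print("No external camera found")
        return
    }
    print("custom exposure supported:", camera.isExposureModeSupported(.custom))
    print("continuous auto exposure supported:",
          camera.isExposureModeSupported(.continuousAutoExposure))
}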
Posted
by Parv156.
Last updated
.
Post not yet marked as solved
1 Reply
190 Views
When I use LiDAR, AVCaptureDeviceTypeBuiltInLiDARDepthCamera is used. AVCaptureDeviceTypeBuiltInLiDARDepthCamera is a device that consists of two cameras, one LiDAR and one YUV. I found that the LiDAR data is 30 fps, which forces the YUV data to 30 fps as well. But I really need 240 fps YUV data. Is there a way to use the 30 fps LiDAR together with a 240 fps YUV camera? Any reply would be appreciated.
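Not an answer, but a hedged way to check what the hardware offers is to enumerate the LiDAR device's formats together with their frame-rate ranges and paired depth formats; if no format advertises 240 fps, the combination is simply not available on that device:

import AVFoundation

// Sketch: dump the LiDAR device's video formats, their frame rates, and depth pairings.
func dumpLiDARFormats() {
    guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera,
                                               for: .video, position: .back) else { return }
    for format in device.formats {
        let fps = format.videoSupportedFrameRateRanges
            .map { "\($0.minFrameRate)-\($0.maxFrameRate)" }
            .joined(separator: ", ")
        print("fps ranges: [\(fps)], paired depth formats: \(format.supportedDepthDataFormats.count)")
    }
}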
Posted
by zqj2000.
Last updated
.
Post not yet marked as solved
1 Reply
184 Views
Is it possible to develop an iOS app that connects to an external camera through the Lightning or USB-C port and receives its video stream? We need to be able to get this video stream even while the app is in the background or the phone is locked. We could also have the camera connected wirelessly rather than through the Lightning port. Is there an available library or a sample app featuring such functionality? Thanks.
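Not a confirmed answer, but as of iOS 17 AVFoundation can reportedly see UVC cameras attached over USB-C through the .external device type; a hedged sketch of the discovery and session setup follows. Capture is normally suspended while the app is in the background or the device is locked, so that part of the requirement likely needs a different approach.

import AVFoundation

// Sketch, assuming iOS 17 external (UVC) camera support over USB-C.
func makeExternalCameraSession() -> AVCaptureSession? {
    let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.external],
                                                     mediaType: .video,
                                                     position: .unspecified)
    guard let camera = discovery.devices.first,
          let input = try? AVCaptureDeviceInput(device: camera) else { return nil }
    let session = AVCaptureSession()
    guard session.canAddInput(input) else { return nil }
    session.addInput(input)
    return session
}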
Posted
by FaridHage.
Last updated
.
Post marked as solved
2 Replies
174 Views
Hello Community, I'm planning an app to correct a specific child's behavior. For this to work, the app needs to run in the background and will be triggered to use the front camera when a pre-defined app is on screen (YouTube, for example). Snapshots will be taken for image processing and then deleted every few seconds. When the specific behavior is detected, the app will lower the device volume (and restore it when the behavior is "fixed"). The user's photos/data are deleted and nothing is sent, saved, or shared. My main concern is that the app is always in the background and uses the camera frequently. I'm unsure whether that is possible/allowed, and if so, how stable it will be. Most importantly, I do not want this behavior to be flagged as suspicious when uploading the app to the store. Hope this is clear. I would appreciate any advice. Thanks, Avi
Posted
by avihaybi.
Last updated
.
Post not yet marked as solved
3 Replies
295 Views
Platform: iPhone XR. System: iOS 17.3.1.
Using the iPhone front camera (normal camera), I configure the data output format to kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange ('420v', video range). I found that Cb and Cr stay inside [16, 240], but Y falls outside the range [16, 235], e.g. 240 or 255. As a result, after converting to RGB, an RGB component can be negative; after clamping the R, G, B values between 0 and 255 and converting the clamped RGB back to YUV, the YUV differs from the original YUV. The maximum difference in the Y channel is about 20. Both processing on the CPU and using a Metal shader give this result.
CVPixelBuffer.h:
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v', /* Bi-Planar Component Y'CbCr 8-bit 4:2:0, video-range (luma=[16,235] chroma=[16,240]). baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct */
// ... some code ...
// configure camera data output format
NSDictionary* options = @{
    (__bridge NSString*)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange),
    //(__bridge NSString*)kCVPixelBufferMetalCompatibilityKey : @(YES),
};
[_videoDataOutput setVideoSettings:options];
// ... some code ...
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection;
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferRef pixelBuffer = imageBuffer;
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    uint8_t* yBase = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    uint8_t* uvBase = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    int imageWidth = (int)CVPixelBufferGetWidth(pixelBuffer);   // 720
    int imageHeight = (int)CVPixelBufferGetHeight(pixelBuffer); // 1280
    int y_width = (int)CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);   // 720
    int y_height = (int)CVPixelBufferGetHeightOfPlane(pixelBuffer, 0); // 1280
    int uv_width = (int)CVPixelBufferGetWidthOfPlane(pixelBuffer, 1);   // 360
    int uv_height = (int)CVPixelBufferGetHeightOfPlane(pixelBuffer, 1); // 640
    int y_stride = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    int uv_stride = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1); // 768
    // check Y-plane
    if (TRUE) {
        for (int i = 0; i < imageHeight; i++) {
            for (int j = 0; j < imageWidth; j++) {
                uint8_t nv12pixel = *(yBase + y_stride * i + j);
                if (nv12pixel < 16 || nv12pixel > 235) { // [16, 235]
                    NSLog(@"%s: y plane out of range, coord (x:%d, y:%d), h-coord (x:%d, y:%d) ; nv12 %u",
                          __FUNCTION__, j, i, j/2, i/2, nv12pixel);
                }
            }
        }
    }
    // NOTE: the original snippet called Lock a second time here; Unlock is presumably what was intended.
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}
// ... some code ...
How should I deal with this case? Hope to get a reply, thanks.
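Not from the original thread, and shown in Swift rather than Objective-C: one hedged workaround is to request the full-range pixel format instead, so no [16, 235] assumption is baked into the later YCbCr-to-RGB conversion; whether full range fits the rest of the pipeline is an assumption.

import AVFoundation

// Sketch: ask the data output for full-range 4:2:0 instead of '420v'.
func configureFullRangeOutput(_ output: AVCaptureVideoDataOutput) {
    output.videoSettings = [
        kCVPixelBufferPixelFormatTypeKey as String:
            Int(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
    ]
}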
Posted
by ZoGo996.
Last updated
.
Post not yet marked as solved
1 Reply
818 Views
CVPixelBuffer.h defines:
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v', /* Bi-Planar Component Y'CbCr 8-bit 4:2:0, video-range (luma=[16,235] chroma=[16,240]). baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct */
kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange = 'x420', /* 2 plane YCbCr10 4:2:0, each 10 bits in the MSBs of 16bits, video-range (luma=[64,940] chroma=[64,960]) */
But when I set the camera output to the formats above, I find that the output pixel buffer's values exceed those ranges: I see [0, 255] for 420YpCbCr8BiPlanarVideoRange and [0, 1023] for 420YpCbCr10BiPlanarVideoRange. Is this a bug, or is something wrong with the output? If not, how can I choose the correct matrix to transfer the YUV data to RGB?
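For reference, a hedged sketch of a video-range BT.709 conversion that clamps out-of-nominal-range samples instead of producing negative RGB; whether BT.601 or BT.709 coefficients apply should follow the buffer's colorimetry attachments, so the 709 constants here are an assumption:

// Video-range 8-bit BT.709 YCbCr to RGB with clamping.
// 1.164 = 255/219 (luma scale); the chroma coefficients include the 255/224 scale.
func rgbFromVideoRangeYCbCr709(y: UInt8, cb: UInt8, cr: UInt8) -> (r: UInt8, g: UInt8, b: UInt8) {
    let yf = 1.164 * (Double(y) - 16.0)
    let cbf = Double(cb) - 128.0
    let crf = Double(cr) - 128.0
    func clamp(_ v: Double) -> UInt8 { UInt8(min(max(v.rounded(), 0.0), 255.0)) }
    let r = clamp(yf + 1.793 * crf)
    let g = clamp(yf - 0.213 * cbf - 0.533 * crf)
    let b = clamp(yf + 2.112 * cbf)
    return (r, g, b)
}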
Posted
by vrsure.
Last updated
.
Post not yet marked as solved
0 Replies
222 Views
Hello there. I am working on a project to control the iPhone camera. Over Bluetooth I can take pictures using the Vol+ command. Just wondering, is it possible to do the same over USB? Thanks in advance. Regards.
Posted
by sakibnaz.
Last updated
.
Post not yet marked as solved
0 Replies
178 Views
■ Detail
In a Xamarin iOS app, there is a screen (Screen A) designed for capturing ID photos. We've written code to set the default camera zoom to 2x when opening Screen A, enabling users to take photos by pressing a button. The subsequent screen (Screen B) serves as a preview screen for the photos taken on Screen A. The issue at hand is that photos captured on Screen A are unintentionally displayed in grayscale on Screen B; the correct behavior would be to display them in color on Screen B. This problem occurs only on iPhone 14 Pro Max with iOS 17.0; it does not occur on iPhone 15 Pro with iOS 17.1. Moreover, when the code for a 2x zoom is not present during the capture settings, photos are displayed in color on Screen B on iPhone 14 Pro Max with iOS 17.0. If the code for a 2x zoom is present during the capture settings, and the AVCaptureSession's SessionPreset is set to Preset640x480, the photos are displayed in color on Screen B on iPhone 14 Pro Max with iOS 17.0. Is there a case where the AVCaptureSession's SessionPreset setting on iPhone 14 Pro Max with iOS 17.0 causes an unintentional grayscale conversion when processing images after taking a 2x zoom photo?
■ How to reproduce
Use the camera 2x zoom code with AVCaptureSession's SessionPreset set to Preset during capturing on iPhone 14 Pro Max with iOS 17.0, using Xcode 15.1's iOS SDK (17.2).
■ Environment
We are building the program using Xamarin.iOS in Visual Studio for Mac. During the build process, Xcode 15.1 (iOS SDK 17) is used.
Posted
by ethan731.
Last updated
.
Post not yet marked as solved
0 Replies
188 Views
Hello, I've been developing a web app for which I need the front camera and need to take a picture at a higher resolution, but I have one issue. When I call navigator.mediaDevices.getUserMedia() in the browser to get the resolution of the camera, it reports 2052 x 2736, yet it's a 12 MP front camera. When I take a picture of myself using the camera app on the iPad, it produces a 12 MP picture. The back camera reports its resolution fine. You can also test it on webcamtests.com to see the resolution.
Posted Last updated
.