Discuss using the camera on Apple devices.

Posts under Camera tag

171 Posts
Post not yet marked as solved
3 Replies
377 Views
The methods described in https://developer.apple.com/forums/thread/715452?answerId=729571022#729571022 to obtain 48 MP image captures no longer seem to work on iOS 17.4 under certain circumstances. Previously, the following steps were sufficient to get 48 MP capture from AVFoundation:

Configuration
- Set the active AVCaptureDevice.Format to a format whose supportedMaxPhotoDimensions contains the (8064, 6048) size
- Set AVCapturePhotoOutput.maxPhotoDimensions to (8064, 6048)
- Set AVCapturePhotoOutput.maxPhotoQualityPrioritization to .quality

Taking a photo
- Set AVCapturePhotoSettings.maxPhotoDimensions to (8064, 6048)
- Set AVCapturePhotoSettings.photoQualityPrioritization to .quality

As of iOS 17.4, the exact same code that worked through 17.3 no longer works if the session was configured manually (resulting in the .inputPriority session preset) rather than using a session preset (like .high). When configuring the session manually, all the intervening steps work (an active format can be found with the appropriate dimensions, the photo output settings can be set to 8064x6048 successfully, etc.), but the resulting photo is 4032x3024. Again, these same steps worked flawlessly prior to iOS 17.4. Am I missing something? Did iOS 17.4 change the requirements for 48 MP capture, or is this a bug?
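For reference, a minimal Swift sketch of the configuration described above (manual session configuration; the device, session, and output names are illustrative, not taken from the post):

import AVFoundation

// Minimal sketch of the 48 MP configuration described above.
// Assumes `device` is the back camera and capture authorization has been granted.
func configureFor48MP(session: AVCaptureSession,
                      device: AVCaptureDevice,
                      photoOutput: AVCapturePhotoOutput) throws {
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    let input = try AVCaptureDeviceInput(device: device)
    if session.canAddInput(input) { session.addInput(input) }
    if session.canAddOutput(photoOutput) { session.addOutput(photoOutput) }

    // Pick a format whose supportedMaxPhotoDimensions includes 8064x6048.
    if let format = device.formats.first(where: { f in
        f.supportedMaxPhotoDimensions.contains { $0.width == 8064 && $0.height == 6048 }
    }) {
        try device.lockForConfiguration()
        device.activeFormat = format
        device.unlockForConfiguration()
    }

    photoOutput.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
    photoOutput.maxPhotoQualityPrioritization = .quality
}

func take48MPPhoto(with photoOutput: AVCapturePhotoOutput,
                   delegate: AVCapturePhotoCaptureDelegate) {
    let settings = AVCapturePhotoSettings()
    settings.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
    settings.photoQualityPrioritization = .quality
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}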
Posted by tenuki. Last updated.
Post not yet marked as solved
1 Reply
298 Views
I'm currently working on an iPad application that uses a third-party SDK to scan a driver's license and then lets the user take a picture of themselves. However, when the user is directed to the self-photo view, the AVCaptureSession preview freezes. The app as a whole does not freeze, only the preview. I believe this is an issue with the OS, because it only happens on 9th-generation iPads; all the other iPads work fine. Has anyone else seen this issue? Also, is there any way to see logs from the AVCaptureSession so I can tell what is happening? Maybe there is a way to detect when it freezes and then restart it.
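Not a fix, but one way to see what the session is doing: AVCaptureSession posts runtime-error and interruption notifications that can be logged and used to restart the session. A minimal Swift sketch (the class name is illustrative):

import AVFoundation

// Sketch: log AVCaptureSession problems and restart after a runtime error.
final class SessionWatcher: NSObject {
    private let session: AVCaptureSession

    init(session: AVCaptureSession) {
        self.session = session
        super.init()
        let center = NotificationCenter.default
        center.addObserver(self, selector: #selector(runtimeError(_:)),
                           name: .AVCaptureSessionRuntimeError, object: session)
        center.addObserver(self, selector: #selector(wasInterrupted(_:)),
                           name: .AVCaptureSessionWasInterrupted, object: session)
        center.addObserver(self, selector: #selector(interruptionEnded(_:)),
                           name: .AVCaptureSessionInterruptionEnded, object: session)
    }

    @objc private func runtimeError(_ note: Notification) {
        let error = note.userInfo?[AVCaptureSessionErrorKey] as? NSError
        print("Capture session runtime error: \(String(describing: error))")
        // Restart off the main thread.
        DispatchQueue.global(qos: .userInitiated).async { self.session.startRunning() }
    }

    @objc private func wasInterrupted(_ note: Notification) {
        print("Capture session interrupted: \(note.userInfo ?? [:])")
    }

    @objc private func interruptionEnded(_ note: Notification) {
        print("Capture session interruption ended")
    }
}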
Posted. Last updated.
Post not yet marked as solved
1 Reply
246 Views
Hi, hope all are well! We've been working on a live-streaming app and it's going quite well; we just got the aspect ratio locked as desired. Now the audio: its volume is extremely low. It sounds like it's using the headset mic instead of the bottom mic that's used on FaceTime or speakerphone calls. We tried flipping cameras and specifying sample rates, almost every constraint in MediaConstraints, with no luck. Is there any way to specify this? Thanks in advance!
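If the app can still configure the native audio session alongside the streaming stack (the post mentions MediaConstraints, so this may be WebRTC; treat that as an assumption), one way to steer capture toward the built-in bottom microphone is to set the preferred input and data source on AVAudioSession. A rough Swift sketch:

import AVFoundation

// Sketch (assumption): prefer the built-in bottom microphone over a headset mic.
func preferBuiltInBottomMic() throws {
    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.playAndRecord,
                                 mode: .videoChat,
                                 options: [.defaultToSpeaker, .allowBluetooth])

    // Find the built-in microphone among the available inputs.
    if let builtInMic = audioSession.availableInputs?
        .first(where: { $0.portType == .builtInMic }) {
        try audioSession.setPreferredInput(builtInMic)

        // Pick the bottom data source if the device exposes one.
        if let bottom = builtInMic.dataSources?
            .first(where: { $0.orientation == .bottom }) {
            try builtInMic.setPreferredDataSource(bottom)
        }
    }
    try audioSession.setActive(true)
}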
Posted. Last updated.
Post not yet marked as solved
1 Reply
440 Views
As the title already suggests: is it possible with the current Apple Vision Pro simulator to recognize objects/humans, as is currently possible on the iPhone? I am not even sure whether there is an API for accessing the cameras of the Vision Pro. My goal is to recognize, for example, a human and attach a 3D object to them, for example a hat. Can this be done?
Posted by wladislaw. Last updated.
Post not yet marked as solved
2 Replies
238 Views
Dear Team, I am trying to add a contact from a QR code, but it seems that the built-in QR code reader of the iPhone camera isn't able to correctly decode a full name that contains a space in the last name, e.g. Collin A. Al Miller. I have attached screenshots for reference. Here are the examples:

1) When I focus the iPhone camera on the QR code and scan the full name (Collin A. Al Miller), it gives an empty result without the full name. Attached screenshots: a) CameraQRNotWorking b) NotWorkingQRCOde
2) When I remove the blank space and add a comma or a hyphen to the full name, it is recognised and works perfectly. Attached screenshots: a) CameraQRCodeWorking b) workingQRCODE
3) Both full names work perfectly in the QR camera scanner on Android (Collin A. Al-Miller or Collin A, Al Miller). Attached screenshot: AndroidQRCODE

Hope this issue will get resolved in an upcoming release. Kindly provide feedback related to this issue.

Code to generate the vCard:

var str = "BEGIN:VCARD \n" +
    "VERSION:2.1 \n" +
    "FN:\("Collin A. Al Miller") \n" +
    "TITLE:\("") \n"
if options.showPersonalPhone {
    str.append(contentsOf: "item1.TEL;CELL:\("+91987654320") \n")
    str.append(contentsOf: "item1.X-ABLabel:Mobile\n")
}
if options.showWorkPhone {
    str.append(contentsOf: "item2.TEL;WORK;VOICE:\("+91987654320") \n")
    str.append(contentsOf: "item2.X-ABLabel:Work Phone\n")
}
if options.showEmail {
    str.append(contentsOf: "item3.EMAIL;WORK;INTERNET:\("test@gmail.com") \n")
    str.append(contentsOf: "item3.X-ABLabel:Work Email\n")
}
if options.showWebsite {
    str.append(contentsOf: "URL:www.test.com \n")
}
if options.showLocation {
    str.append(contentsOf: "ADR;WORK:;;\("Bangalore") \n")
}
str.append(contentsOf: "END:VCARD")
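As an aside, and purely as an assumption to test (not something confirmed in the post): vCard readers often key on the structured N property rather than FN, and the spec uses CRLF line endings, so an alternative encoding along these lines may be worth trying. The property layout follows RFC 2426/6350; the sample values are the ones from the post.

// Sketch (assumption): vCard 3.0 with an explicit structured N property and CRLF endings.
let familyName = "Al Miller"
let givenName = "Collin"
let middleName = "A."

var vcard = "BEGIN:VCARD\r\n"
vcard += "VERSION:3.0\r\n"
vcard += "N:\(familyName);\(givenName);\(middleName);;\r\n"   // Family;Given;Middle;Prefix;Suffix
vcard += "FN:\(givenName) \(middleName) \(familyName)\r\n"
vcard += "TEL;TYPE=CELL:+91987654320\r\n"
vcard += "END:VCARD\r\n"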
Posted by Shohib. Last updated.
Post not yet marked as solved
0 Replies
279 Views
I need to capture 4K photos with a 4:3 ratio from the camera. I can do this, but I want to disable video stabilization. I can disable video stabilization using the AVCaptureSessionPresetHigh preset, but AVCaptureSessionPresetHigh gives me a 16:9 photo with the surroundings cropped, and unfortunately the 16:9 ratio does not meet my needs. When I run the session using the AVCaptureSessionPresetPhoto preset and add AVCapturePhotoOutput, I cannot turn off image stabilization.

self.capturePhotoOutput = AVCapturePhotoOutput.init()
self.captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: AVMediaType.video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)
    if ((captureSession?.canAddOutput(capturePhotoOutput!)) != nil) {
        captureSession?.addOutput(capturePhotoOutput!)
    }
    if let connection = capturePhotoOutput?.connection(with: .video) {
        if connection.isVideoStabilizationSupported {
            connection.preferredVideoStabilizationMode = .off
        }
    }
    DispatchQueue.main.async { [self] in
        self.capturePhotoOutput?.isHighResolutionCaptureEnabled = true
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
}
}

@objc private func handleTakePhoto() {
    let photoSettings = AVCapturePhotoSettings()
    if let photoPreviewType = photoSettings.availablePreviewPhotoPixelFormatTypes.first {
        photoSettings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String: photoPreviewType]
        photoSettings.isAutoStillImageStabilizationEnabled = false
        capturePhotoOutput?.capturePhoto(with: photoSettings, delegate: self)
    }
}

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    if let dataImage = photo.fileDataRepresentation() {
        print(UIImage(data: dataImage)?.size as Any)
        let dataProvider = CGDataProvider(data: dataImage as CFData)
        let cgImageRef: CGImage! = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
        let image = UIImage(cgImage: cgImageRef, scale: 1.0, orientation: rotateImage(orientation: currentOrientation))
    } else {
        print("some error here")
    }
}

As a temporary solution, I added only AVCaptureVideoDataOutput to the session without adding AVCapturePhotoOutput, and I can capture in 4:3 format with the captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) function. However, this way I cannot get a 4K image. In short, I need to turn off video stabilization in a session that has AVCapturePhotoOutput added.

self.captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: AVMediaType.video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)
    videoDataOutput = AVCaptureVideoDataOutput()
    videoDataOutput?.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
    videoDataOutput?.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
    if ((captureSession?.canAddOutput(videoDataOutput!)) != nil) {
        captureSession?.addOutput(videoDataOutput!)
    }
    /* If I uncomment these lines, video stabilization is enabled.
    if ((captureSession?.canAddOutput(capturePhotoOutput!)) != nil) {
        captureSession?.addOutput(capturePhotoOutput!)
    }
    */
    DispatchQueue.main.async { [self] in
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
}
}

@objc private func handleTakePhoto() {
    takePicture = true
}

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if !takePicture {
        return // we have nothing to do with the image buffer
    }
    // try and get a CVImageBuffer out of the sample buffer
    guard let cvBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let rect = CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(cvBuffer), height: CVPixelBufferGetHeight(cvBuffer))
    let ciImage = CIImage.init(cvImageBuffer: cvBuffer)
    let ciContext = CIContext()
    let cgImage = ciContext.createCGImage(ciImage, from: rect)
    guard cgImage != nil else { return }
    let uiImage = UIImage(cgImage: cgImage!)
}
Posted. Last updated.
Post not yet marked as solved
0 Replies
220 Views
While trying to use an external camera, iOS does not detect the exposure settings of the connected camera, and checking isExposureModeSupported always returns false. The captured image also doesn't have any exposure details. How can we use or change these settings?
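For reference, a minimal Swift sketch of the kind of check being described; the device here is whichever AVCaptureDevice represents the external camera (how it is discovered is not part of the post):

import AVFoundation

// Sketch: report which exposure modes a capture device claims to support.
func logExposureSupport(for device: AVCaptureDevice) {
    let modes: [(String, AVCaptureDevice.ExposureMode)] = [
        ("locked", .locked),
        ("autoExpose", .autoExpose),
        ("continuousAutoExposure", .continuousAutoExposure),
        ("custom", .custom)
    ]
    for (name, mode) in modes {
        print("\(name): \(device.isExposureModeSupported(mode))")
    }
    // Exposure-related ranges reported by the active format.
    print("ISO range: \(device.activeFormat.minISO) ... \(device.activeFormat.maxISO)")
    print("Exposure duration: \(device.activeFormat.minExposureDuration.seconds) ... \(device.activeFormat.maxExposureDuration.seconds)")
}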
Posted by Parv156. Last updated.
Post not yet marked as solved
1 Reply
296 Views
When I use LiDAR, AVCaptureDeviceTypeBuiltInLiDARDepthCamera is used. AVCaptureDeviceTypeBuiltInLiDARDepthCamera is a device that consists of two cameras, one LiDAR and one YUV. I found that the LiDAR data is 30 fps, which also limits the YUV data to 30 fps. But I really need 240 fps YUV data. Is there a way to use the 30 fps LiDAR together with a 240 fps YUV camera? Any reply would be appreciated.
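One direction that might be worth exploring (an assumption, not a confirmed solution): run the LiDAR depth camera and the plain wide camera as two separate inputs in an AVCaptureMultiCamSession, so each can use its own format. Whether a 240 fps format is actually offered while multi-cam is active depends on the device. A rough Swift sketch:

import AVFoundation

// Sketch (assumption): LiDAR depth device plus wide camera in one multi-cam session.
func makeMultiCamSession() throws -> AVCaptureMultiCamSession? {
    guard AVCaptureMultiCamSession.isMultiCamSupported else { return nil }
    let session = AVCaptureMultiCamSession()
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    if let lidar = AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back) {
        let lidarInput = try AVCaptureDeviceInput(device: lidar)
        if session.canAddInput(lidarInput) { session.addInput(lidarInput) }
        let depthOutput = AVCaptureDepthDataOutput()
        if session.canAddOutput(depthOutput) { session.addOutput(depthOutput) }
    }

    if let wide = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) {
        let wideInput = try AVCaptureDeviceInput(device: wide)
        if session.canAddInput(wideInput) { session.addInput(wideInput) }
        let videoOutput = AVCaptureVideoDataOutput()
        if session.canAddOutput(videoOutput) { session.addOutput(videoOutput) }

        // Look for a high-frame-rate, multi-cam-capable format on the wide camera.
        if let fast = wide.formats.first(where: { f in
            f.isMultiCamSupported &&
            f.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 240 }
        }) {
            try wide.lockForConfiguration()
            wide.activeFormat = fast
            wide.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 240)
            wide.unlockForConfiguration()
        }
    }
    return session
}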
Posted by zqj2000. Last updated.
Post not yet marked as solved
1 Reply
305 Views
Is it possible to develop an iOS app that connects to an external camera through the Lightning or USB-C port and receives its video stream? We need to be able to get this video stream even while the app is in the background or the phone is locked. We could also have the camera connected wirelessly through the Lightning port. Is there an available library or a sample app featuring such functionality? Thanks.
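For the wired case, recent OS releases expose external (UVC) cameras to AVFoundation through a dedicated device type; this is documented primarily for iPadOS 17 and Mac Catalyst, so whether it applies to a particular iPhone setup is an assumption to verify, and it does not address capture while the app is in the background or the device is locked, which is generally restricted. A discovery sketch:

import AVFoundation

// Sketch (assumption): discover an external camera plugged into the USB-C port.
func findExternalCamera() -> AVCaptureDevice? {
    guard #available(iOS 17.0, *) else { return nil }
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.external],
        mediaType: .video,
        position: .unspecified)
    return discovery.devices.first
}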
Posted by FaridHage. Last updated.
Post marked as solved
2 Replies
264 Views
Hello Community, I am planning an app to correct a specific child's behavior. For this to work, the app needs to run in the background and will be triggered to use the front camera when a pre-defined app is on screen (YouTube, for example). Snapshots will be taken every few seconds for image processing and then deleted. When the specific behavior is found, the app will turn down the device volume (and restore it when "fixed"). The user's photos/data are deleted and nothing is sent, saved, or shared. My main concern is that the app is always in the background and uses the camera frequently. I'm unsure whether that is possible/allowed, and if so, how stable it will be. But most importantly, I do not want this code activity to be flagged as suspicious when uploading the app to the store. Hope this is clear. I would appreciate any advice. Thanks, Avi
Posted by avihaybi. Last updated.
Post not yet marked as solved
3 Replies
412 Views
Platform: iPhone XR
System: iOS 17.3.1

Using the iPhone front camera (a normal camera), I configure the data output format to kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange ('420v', video range). I found that Cb and Cr stay inside [16, 240], but Y falls outside the range [16, 235], e.g. 240 or 255. As a result, after converting to RGB some values may be negative; after clamping r, g, b between 0 and 255 and converting the clamped RGB back to YUV, the YUV differs from the original YUV. The maximum difference in the Y channel can be 20. Both processing on the pure CPU and using a Metal shader give this result.

CVPixelBuffer.h:

kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v',
/* Bi-Planar Component Y'CbCr 8-bit 4:2:0, video-range (luma=[16,235] chroma=[16,240]).
   baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct */

// ... some code ...
// configure camera data output format
NSDictionary* options = @{
    (__bridge NSString*)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange),
    //(__bridge NSString*)kCVPixelBufferMetalCompatibilityKey : @(YES),
};
[_videoDataOutput setVideoSettings:options];

// ... some code ...
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferRef pixelBuffer = imageBuffer;
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    uint8_t* yBase  = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    uint8_t* uvBase = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    int imageWidth  = (int)CVPixelBufferGetWidth(pixelBuffer);  // 720
    int imageHeight = (int)CVPixelBufferGetHeight(pixelBuffer); // 1280
    int y_width   = (int)CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);  // 720
    int y_height  = (int)CVPixelBufferGetHeightOfPlane(pixelBuffer, 0); // 1280
    int uv_width  = (int)CVPixelBufferGetWidthOfPlane(pixelBuffer, 1);  // 360
    int uv_height = (int)CVPixelBufferGetHeightOfPlane(pixelBuffer, 1); // 640
    int y_stride  = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    int uv_stride = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1); // 768

    // check Y-plane
    if (TRUE) {
        for (int i = 0; i < imageHeight; i++) {
            for (int j = 0; j < imageWidth; j++) {
                uint8_t nv12pixel = *(yBase + y_stride * i + j);
                if (nv12pixel < 16 || nv12pixel > 235) { // [16, 235]
                    NSLog(@"%s: y plane out of range, coord (x:%d, y:%d), h-coord (x:%d, y:%d) ; nv12 %u",
                          __FUNCTION__, j, i, j/2, i/2, nv12pixel);
                }
            }
        }
    }

    // Unlock to balance the lock above (the original snippet called Lock twice here, which looks like a typo).
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}
// ... some code ...

How can I deal with this case? Hoping for a reply, thanks.
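One way to sidestep the problem, if the rest of the pipeline can accept it (a suggestion, not something from the post): request the full-range variant of the format, whose nominal luma range is [0, 255], so out-of-video-range luma no longer needs special handling. A Swift sketch:

import AVFoundation

// Sketch (assumption): ask for full-range 4:2:0 output instead of video-range,
// and use the matching full-range matrix in the YUV-to-RGB conversion.
func configureFullRangeOutput(_ output: AVCaptureVideoDataOutput) {
    let fullRange = kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
    if output.availableVideoPixelFormatTypes.contains(fullRange) {
        output.videoSettings = [
            kCVPixelBufferPixelFormatTypeKey as String: fullRange
        ]
    }
}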
Posted by ZoGo996. Last updated.
Post not yet marked as solved
1 Reply
933 Views
CVPixelBuffer.h defines:

kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v',
/* Bi-Planar Component Y'CbCr 8-bit 4:2:0, video-range (luma=[16,235] chroma=[16,240]).
   baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct */

kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange = 'x420',
/* 2 plane YCbCr10 4:2:0, each 10 bits in the MSBs of 16 bits, video-range (luma=[64,940] chroma=[64,960]) */

But when I set the camera output to the formats above, I find that the output pixel buffer's values exceed those ranges: I see [0, 255] for 420YpCbCr8BiPlanarVideoRange and [0, 1023] for 420YpCbCr10BiPlanarVideoRange. Is this a bug, or is something wrong with the output? If it is not, how can I choose the correct matrix to transfer the YUV data to RGB?
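For the last part of the question, one commonly used approach (a sketch, assuming the buffers carry the usual CoreVideo attachments) is to read the YCbCr matrix attachment from the pixel buffer and choose the conversion coefficients accordingly:

import CoreVideo

// Sketch: inspect the buffer's colorimetry attachment to decide which
// YCbCr-to-RGB matrix (BT.601, BT.709, BT.2020) the conversion should use.
func ycbcrMatrixName(of pixelBuffer: CVPixelBuffer) -> String {
    guard let attachment = CVBufferCopyAttachment(pixelBuffer,
                                                  kCVImageBufferYCbCrMatrixKey,
                                                  nil) else {
        return "unknown (no attachment)"
    }
    return attachment as? String ?? "unknown"
}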
Posted by vrsure. Last updated.
Post not yet marked as solved
0 Replies
301 Views
Hello there. I am working on a project to control the iPhone camera. Using Bluetooth, I can take pictures with a Vol+ command. Just wondering: is it possible to do the same over USB? Thanks in advance. Regards.
Posted by sakibnaz. Last updated.
Post not yet marked as solved
0 Replies
226 Views
■ Detail
In a Xamarin iOS app, there is a screen (Screen A) designed for capturing ID photos. We've written code to set the default camera zoom to 2x when opening Screen A, enabling users to take photos by pressing a button. The subsequent screen (Screen B) serves as a preview screen for the photos taken on Screen A. The issue is that photos captured on Screen A are unintentionally displayed in grayscale on Screen B; the correct behavior would be to display them in color. The problem occurs only on an iPhone 14 Pro Max with iOS 17.0; it does not occur on an iPhone 15 Pro with iOS 17.1. Moreover, when the 2x zoom code is not present in the capture settings, photos are displayed in color on Screen B on the iPhone 14 Pro Max with iOS 17.0. If the 2x zoom code is present and the AVCaptureSession's SessionPreset is set to Preset640x480, the photos are also displayed in color on Screen B on that device. Is there a case where the setting of the AVCaptureSession's SessionPreset on an iPhone 14 Pro Max with iOS 17.0 causes an unintentional grayscale conversion when processing images after taking a 2x zoom photo?

■ How to reproduce
Use the camera 2x zoom code with the AVCaptureSession's SessionPreset set to Preset during capture on an iPhone 14 Pro Max with iOS 17.0, building with Xcode 15.1's iOS SDK (17.2).

■ Environment
We are building the app using Xamarin.iOS in Visual Studio for Mac. During the build process, Xcode 15.1 (iOS SDK 17) is used.
Posted by ethan731. Last updated.
Post not yet marked as solved
0 Replies
246 Views
Hello, I've been developing a web app for which I need the front camera and need to take pictures at a higher resolution. But I have one issue: when I call navigator.mediaDevices.getUserMedia() in the browser to get the resolution of the camera, it reports 2052 x 2736, even though it's a 12 MP front camera. When I take a picture of myself using the camera app on the iPad, it produces a 12 MP picture. The back camera reports its resolution fine. You can also test it on webcamtests.com to see the resolution.
Posted. Last updated.
Post not yet marked as solved
0 Replies
327 Views
Is it possible to get the camera intrinsic matrix for a captured single photo on iOS? I know that one can get the cameraCalibrationData from a AVCapturePhoto, which also contains the intrinsicMatrix. However, this is only provided when using a constituent (i.e. multi-camera) capture device and setting virtualDeviceConstituentPhotoDeliveryEnabledDevices to multiple devices (or enabling isDualCameraDualPhotoDeliveryEnabled on older iOS versions). Then photoOutput(_:didFinishProcessingPhoto:) is called multiple times, delivering one photo for each camera specified. Those then contain the calibration data. As far as I know, there is no way to get the calibration data for a normal, single-camera photo capture. I also found that one can set isCameraIntrinsicMatrixDeliveryEnabled on a capture connection that leads to a AVCaptureVideoDataOutput. The buffers that arrive at the delegate of that output then contain the intrinsic matrix via the kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix metadata. However, this requires adding another output to the capture session, which feels quite wasteful just for getting this piece of metadata. Also, I would somehow need to figure out which buffer was temporarily closest to when the actual photo was taken. Is there a better, simpler way for getting the camera intrinsic matrix for a single photo capture? If not, is there a way to calculate the matrix based on the image's metadata?
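For context, a minimal Swift sketch of the video-data-output workaround described above (the helper names are illustrative; matching a buffer to the photo's timestamp is left out):

import AVFoundation
import simd

// Sketch: enable intrinsic-matrix delivery on a video data output connection
// and read the matrix back from a sample buffer attachment.
func enableIntrinsics(on videoOutput: AVCaptureVideoDataOutput) {
    if let connection = videoOutput.connection(with: .video),
       connection.isCameraIntrinsicMatrixDeliverySupported {
        connection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}

func intrinsicMatrix(from sampleBuffer: CMSampleBuffer) -> matrix_float3x3? {
    guard let data = CMGetAttachment(sampleBuffer,
                                     key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                     attachmentModeOut: nil) as? Data,
          data.count >= MemoryLayout<matrix_float3x3>.size else { return nil }
    var matrix = matrix_float3x3()
    _ = withUnsafeMutableBytes(of: &matrix) { data.copyBytes(to: $0) }
    return matrix
}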
Posted. Last updated.
Post not yet marked as solved
1 Reply
281 Views
I am new to Objective C and relatively new to iOS development. I have an AVCaptureDevice object at hand and would like to print the maximum supported photo dimensions as provided by activeFormat.supportedMaxPhotoDimensions, using Objective C. I tried the following:

for (NSValue *obj in device.activeFormat.supportedMaxPhotoDimensions) {
    CMVideoDimensions *vd = (__bridge CMVideoDimensions *)obj;
    NSString *s = [NSString stringWithFormat:@"res=%d:%d", vd->width, vd->height];
    // print that string
}

If I run this code, I get: res=314830880:24994. This is way too high, and there is obviously something I am doing wrong, but I don't know what it could be. According to the information I see on the developer forum, I should get something closer to 4000:3000. I can successfully read device.activeFormat.videoFieldOfView and other fields, so I believe my code is sound overall.
Posted. Last updated.
Post not yet marked as solved
4 Replies
419 Views
Using an iPhone 8 and iOS 16.7.5, when taking a picture, the EXIF data I'm getting does not seem to make sense. I am getting: FocalLength: 399/100, FocalLengthIn35mmFilm: 177. The FocalLength EXIF field is correct, since the iPhone 8's back lens does have a focal length of 3.99mm. The FocalLengthIn35mmFilm value, however, is wrong: the actual value is (obviously) much less, probably between 23 and 27mm (ish). Could this be a bug, or maybe FocalLengthIn35mmFilm is expressed in a unit I am not aware of? Thanks for your help.
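For a rough sanity check (the sensor diagonal below is an assumed, commonly quoted figure, not something from the post): the 35mm-equivalent focal length is the real focal length scaled by the ratio of the 35mm frame diagonal (about 43.3mm) to the sensor diagonal, so for the iPhone 8 wide camera roughly 3.99 mm x (43.3 mm / ~6 mm) ≈ 28 mm, which matches the commonly cited 28mm-equivalent lens and is nowhere near 177.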
Posted. Last updated.
Post not yet marked as solved
0 Replies
229 Views
The 24-megapixel mode is the most useful for daily photography, while 48-megapixel capture is only used for landscapes or special photos; after all, it takes up too much storage. The biggest problem with the 14 Pro Max now is that its photography falls short: its 12-megapixel camera has long lagged far behind Android phones. Adding a 24-megapixel mode would be much more valuable than updating iOS, and the experience would be directly doubled.
Posted by ASD499. Last updated.
Post not yet marked as solved
0 Replies
311 Views
The code found in the file "Camera" in the template for "Capturing Photos" gives warnings. How can I work around this while still keeping the same functions as the original "Camera" file? I am using a feature that REQUIRES Mac Catalyst 17.0 or above.
Posted by Pokka. Last updated.