Discuss using the camera on Apple devices.

Posts under Camera tag

177 Posts
Post not yet marked as solved
0 Replies
525 Views
We've recently acquired a third-party app that our drivers can activate if they feel unsafe. When activated, the app streams audio and video from the iPhone to an operation centre. Unfortunately, when the phone is connected to CarPlay, the operation centre only receives a blank feed and can't capture video. If we disconnect CarPlay, everything works as expected. Is this expected behaviour, or is this something the developer is able to fix?
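This looks like a capture-session interruption rather than an app bug, so a useful first diagnostic is to log why the session stops when CarPlay connects. A minimal sketch, assuming a hypothetical `session` standing in for whatever the app actually uses:

```swift
import AVFoundation

// Hypothetical session owned by the streaming feature.
let session = AVCaptureSession()

// Log why capture stops (e.g. when CarPlay takes over the device).
NotificationCenter.default.addObserver(
    forName: AVCaptureSession.wasInterruptedNotification,
    object: session,
    queue: .main
) { note in
    if let raw = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
       let reason = AVCaptureSession.InterruptionReason(rawValue: raw) {
        print("Capture interrupted, reason: \(reason)")
    }
}
```

The reported reason (and the matching `AVCaptureSessionInterruptionEnded` notification) should tell the developer whether the system is revoking camera access while CarPlay is active or something else is at fault.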
Posted
by
brf
Post not yet marked as solved
1 Reply
671 Views
Is there a setting in the iPhone 14's iOS that allows the flashlight (torch) to remain on while the camera takes a still shot? If not, is there an app that can do that?
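There is no system setting for this, but an app can do it with AVFoundation: lock the torch on, then capture with the flash disabled. A minimal sketch, assuming an already-configured session, output, and delegate (whether the torch stays lit through the exposure depends on the hardware):

```swift
import AVFoundation

func captureStillWithTorchOn(device: AVCaptureDevice,
                             output: AVCapturePhotoOutput,
                             delegate: AVCapturePhotoCaptureDelegate) throws {
    // Keep the torch lit across the capture.
    try device.lockForConfiguration()
    if device.hasTorch {
        device.torchMode = .on
    }
    device.unlockForConfiguration()

    // Disable the flash so the system doesn't run its own strobe sequence.
    let settings = AVCapturePhotoSettings()
    settings.flashMode = .off // the torch, not the flash, supplies the light
    output.capturePhoto(with: settings, delegate: delegate)
}
```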
Posted
by
Post marked as solved
3 Replies
815 Views
I am currently writing a software product which involves a Camera Extension and a Cocoa application. I would like to share some files between the two components, and as I understand it, this should be straightforward: put both applications into the same App Group and then access that group's container. However, doing so results in the two components accessing different locations for the Group Container. I am using the following piece of code to create a new folder inside the container:

```swift
let directory = FileManager.default.containerURL(forSecurityApplicationGroupIdentifier: "group.235ADAK9D5.com.creativetoday.camskool")!
let newDirectory = directory.appendingPathComponent("Mydir")
try? FileManager.default.createDirectory(at: newDirectory, withIntermediateDirectories: false)
```

If I run this, I find that the Cocoa application accesses the following location and creates the directory there:
/Users//Library/Group Containers//
whereas the Camera Extension accesses the following location and creates the directory there:
/private/var/db/cmiodalassistants/Library/Group Containers//
If I create a file in one directory, it does not appear in the other. I tried having each component access the other's directory, but that results in a permission-denied message. What am I doing wrong?
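A minimal diagnostic sketch, run from both targets, to confirm where each process resolves the group container (the group ID is the one from the post; note the extension runs under a different user than the logged-in one, which is why the container can resolve elsewhere):

```swift
import Foundation

let groupID = "group.235ADAK9D5.com.creativetoday.camskool"
if let url = FileManager.default.containerURL(forSecurityApplicationGroupIdentifier: groupID) {
    // Compare this output between the app and the extension.
    NSLog("Group container: %@", url.path)
}
```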
Posted
by
Post not yet marked as solved
0 Replies
732 Views
I have an Electron app for Mac Catalyst. I have implemented audio/video calling functionality, and that works well. I have also implemented screen-sharing functionality using the code below:

```javascript
navigator.mediaDevices.getDisplayMedia(options).then(
  (streams) => {
    var peer_connection = session.sessionDescriptionHandler.peerConnection;
    var video_track = streams.getVideoTracks()[0];
    var sender_kind = peer_connection.getSenders().find((sender) => {
      return sender.track.kind == video_track.kind;
    });
    sender_kind.replaceTrack(video_track);
    video_track.onended = () => {};
  },
  () => {
    console.log("Error occurred while sharing screen");
  }
);
```

But when I hit the button to share the screen using the code above, I get the error below:

Uncaught (in promise) DOMException: Not supported

I have also tried navigator.getUserMedia(options, success, error). That is supported by Mac Catalyst desktop apps, but it only gives the webcam stream. I have also checked online whether navigator.mediaDevices.getDisplayMedia(options) is supported in Mac Catalyst. It is, but I am still facing this issue. I have also tried the desktopCapturer API of Electron, but I don't know how to get the streams from it.

```javascript
// CODE OF 'main.js'
ipcMain.on("ask_permission", () => {
  desktopCapturer
    .getSources({ types: ["window", "screen"] })
    .then(async (sources) => {
      for (const source of sources) {
        // console.log(source);
        if (source.name === "Entire screen") {
          win.webContents.send("SET_SOURCE", source.id);
          return;
        }
      }
    });
});
```

I have tried to get the streams using the code below in preload.js, but I get the error "Cannot read property 'srcObject' of undefined".

```javascript
window.addEventListener("DOMContentLoaded", (event) => {
  ipcRenderer.on("SET_SOURCE", async (event, sourceId) => {
    try {
      const stream = await navigator.mediaDevices.getUserMedia({
        audio: false,
        video: {
          mandatory: {
            chromeMediaSource: "desktop",
            chromeMediaSourceId: sourceId,
            minWidth: 1280,
            maxWidth: 1280,
            minHeight: 720,
            maxHeight: 720,
          },
        },
      });
      handleStream(stream);
    } catch (e) {
      handleError(e);
    }
  });

  let btn = document.getElementById("btnStartShareOutgoingScreens");
  btn.addEventListener("click", () => {
    if (isSharing == false) {
      ipcRenderer.send("ask_permission");
    } else {
      console.error("User is already sharing the screen");
    }
  });
});

function handleStream(stream) {
  const video = document.createElement("video");
  video.srcObject = stream;
  video.muted = true;
  video.id = "screenShareVideo";
  video.style.display = "none";
  const box = document.getElementById("app");
  box.appendChild(video);
  isSharing = true;
}
```

How can I resolve this? If getDisplayMedia is not supported in Mac Catalyst, is there any other way to share the screen from a Mac Catalyst app using WebRTC?
Posted
by
Post not yet marked as solved
0 Replies
840 Views
In session wwdc2023-10106 it was explained that "iPads with USB-C connectors support external cameras." I'm working on an iPadOS app intended only for iPads that support this feature. I would like to state a requirement using the UIRequiredDeviceCapabilities key so that users with unsupported iPads do not accidentally download the app from the App Store. I have picked out a few candidates from the existing keys that could serve this purpose.

driverkit: This key is a good choice, but it excludes the iPad (10th gen.), iPad mini (6th gen.), iPad Air (4th gen.), 11-in. iPad Pro (1st and 2nd gen.), and 12.9-in. iPad Pro (3rd and 4th gen.), even though those iPads have a USB-C connector.

iphone-ipad-minimum-performance-a12: This is a very close match. It includes all iPads with a USB-C connector. However, it also includes iPads with a Lightning connector: iPad (9th gen.), iPad Air (3rd gen.), and iPad mini (5th gen.).

Is there a more appropriate UIRequiredDeviceCapabilities key that resolves these two issues? Also, is it possible to use external cameras on an iPad with a Lightning connector via a "Lightning to USB 3 Camera Adapter" or "Lightning to USB Camera Adapter"?
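Whether or not an exact UIRequiredDeviceCapabilities key exists, the app can at least detect external-camera support at runtime with the iPadOS 17 `.external` device type and degrade gracefully on unsupported iPads. A minimal sketch:

```swift
import AVFoundation

// Runtime check (iPadOS 17+): does this iPad currently expose an external camera?
// An empty list may only mean nothing is plugged in, so in practice this would
// be combined with user guidance rather than used as a hard gate.
@available(iOS 17.0, *)
func externalCameraConnected() -> Bool {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.external],
        mediaType: .video,
        position: .unspecified)
    return !discovery.devices.isEmpty
}
```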
Posted
by
Post not yet marked as solved
0 Replies
442 Views
I have a bug where the tab bar at the bottom is not shown properly in the camera view.

The home view:

```swift
var body: some View {
    TabView {
        VStack {
            ScrollView {
                StatisticView()
                HistoryView()
            }
        }
        .tabItem {
            Label("Home", systemImage: "house")
        }
        ManualItemAddView()
            .tabItem {
                Label("Food", systemImage: "carrot.fill")
            }
        CameraView()
            .tabItem {
                Label("Scan", systemImage: "camera")
            }
        AddWaterView(waterAmount: $waterAmount)
            .tabItem {
                Label("Water", systemImage: "drop.fill")
            }
        InformationView()
            .tabItem {
                Label("Information", systemImage: "person")
            }
        DeveloperView()
            .tabItem {
                Label("Developer", systemImage: "hammer")
            }
    }
}
```

The camera view:

```swift
var body: some View {
    ZStack {
        CameraPreview(session: controller.session)
            .onAppear(perform: {
                controller.startSession()
            })
            .onDisappear(perform: {
                controller.session.stopRunning()
            })
            .edgesIgnoringSafeArea(.all) // Make CameraPreview take up the whole screen
        VStack {
            Spacer()
            HStack {
                Button(action: {
                    controller.processLastFrame()
                }) {
                    Text("Snap")
                        .font(.system(size: 20, weight: .regular, design: .default))
                        .foregroundColor(.white)
                        .fontWeight(.regular)
                        .padding(.vertical, 15.0)
                        .padding(.horizontal, 20.0)
                        .frame(maxWidth: .infinity)
                        .background(Color.blue)
                        .accentColor(.white)
                        .cornerRadius(17.0)
                }
            }
            .padding()
        }
        .alert(isPresented: $controller.showAlert) {
            Alert(title: Text("Warning"), message: Text("Nothing detected with sufficient confidence."), dismissButton: .default(Text("OK")))
        }
        .sheet(isPresented: $controller.shouldShowDetectedItemSheet) {
            if let detectedItem = controller.detectedItem {
                let itemImage = controller.lastFrame
                let itemCategory = controller.itemCategory
                let itemCalories = controller.itemCalories
                let itemSugar = controller.itemSugar
                let itemDescription = controller.itemDescription
                HistoryItemView(detectedItemName: detectedItem, date: Date(), shouldShowDetectedItemSheet: $controller.shouldShowDetectedItemSheet, isNewDetection: .constant(true))
            }
        }
    }
    .onReceive(controller.$shouldDismiss, perform: { shouldDismiss in
        if shouldDismiss {
            controller.shouldDismiss = false
            controller.shouldNavigate = false // Reset the navigation flag
        }
    })
}
```

[Screenshots: how it's supposed to look vs. what it actually looks like in the camera view]
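One likely culprit is the `.edgesIgnoringSafeArea(.all)` on the preview, which lets it extend under the tab bar. A minimal sketch of the usual fix, keeping the bottom safe-area inset intact (hedged: whether this resolves the bug depends on how CameraPreview lays itself out):

```swift
CameraPreview(session: controller.session)
    // Full-bleed at the top only; leaving the bottom inset lets the
    // tab bar keep its normal background and hit area.
    .ignoresSafeArea(.all, edges: .top)
```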
Posted
by
Post not yet marked as solved
0 Replies
596 Views
Hello, I am trying to extract a Bayer matrix from the sensor of an iPhone camera. However, I cannot find any examples of how to do this online. From my understanding, I'm supposed to use the camera buffer. This link is actually the only indication that what I want to do is even possible, but I cannot find any information about kCVPixelFormatType_14Bayer_RGGB and where to use it in a command, even in Apple's own Swift AVFoundation documentation. Any help or links to proper documentation would be much appreciated. Thanks in advance.
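kCVPixelFormatType_14Bayer_RGGB is a CoreVideo pixel format constant, and the place it would be used is the videoSettings of a video data output. A minimal sketch, hedged: most iPhone configurations only vend processed formats, so the availability check below will often fail and RAW capture via AVCapturePhotoOutput is the usual fallback.

```swift
import AVFoundation

// Ask the video data output for raw 14-bit RGGB Bayer frames, if the
// current device/format exposes them at all.
let output = AVCaptureVideoDataOutput()
let bayer = kCVPixelFormatType_14Bayer_RGGB
if output.availableVideoPixelFormatTypes.contains(bayer) {
    output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: bayer]
} else {
    print("No 14-bit RGGB Bayer stream here; consider AVCapturePhotoOutput RAW capture instead")
}
```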
Posted
by
Post not yet marked as solved
2 Replies
599 Views
I need to take pictures with parameters settable from Swift code, such as f-stop, exposure time, and zoom level, on an iPhone 14 Pro. What AVCam methods control these? If there are none, is there another way to do this? The option to keep the torch constantly on, or off, is also needed. Flash will not be used.
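These controls live on AVCaptureDevice rather than in the AVCam sample itself. One caveat: the iPhone's aperture is fixed, so there is no settable f-stop (lensAperture is read-only). A minimal sketch of the rest, assuming a device already attached to a session:

```swift
import AVFoundation

func configure(device: AVCaptureDevice) throws {
    try device.lockForConfiguration()

    // Exposure time and ISO; both must lie within the active format's
    // supported ranges (clamp in real code).
    let duration = CMTime(value: 1, timescale: 125) // 1/125 s
    device.setExposureModeCustom(duration: duration,
                                 iso: device.activeFormat.minISO,
                                 completionHandler: nil)

    // Zoom:
    device.videoZoomFactor = min(2.0, device.activeFormat.videoMaxZoomFactor)

    // Torch constantly on (independent of the flash):
    if device.hasTorch { device.torchMode = .on }

    device.unlockForConfiguration()
}
```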
Posted
by
Post not yet marked as solved
0 Replies
561 Views
I am trying to mimic the flash behavior of a disposable camera on iPhone. With a disposable camera, when the capture button is clicked, the flash instantly and quickly goes off once, right as the photo is captured. With the iPhone's camera, however, when the capture button is pressed, the torch is turned on for roughly a second, and then the shutter sound plays and the torch flashes on and off as the photo is captured. When using the straightforward Apple API of setting AVCapturePhotoSettings.flashMode to on:

```swift
var photoSettings = AVCapturePhotoSettings()
photoSettings.flashMode = .on
```

the system default flash behavior is applied. Instead, I would like to create my own custom flash behavior which does not include the torch being turned on for roughly a second before it flashes again, in order to mimic a disposable camera. My idea was to manually toggle the torch either right before or during the AVCapturePhotoOutput.capturePhoto() process:

```swift
private let flashSessionQueue = DispatchQueue(label: "flash session queue")

func toggleTorch(_ on: Bool) {
    let device = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back)!
    do {
        try device.lockForConfiguration()
        device.torchMode = on ? .on : .off
        device.unlockForConfiguration()
    } catch {
        print("Torch could not be used")
    }
}

func photoOutput(_ output: AVCapturePhotoOutput, willBeginCaptureFor resolvedSettings: AVCaptureResolvedPhotoSettings) {
    flashSessionQueue.async {
        self.toggleTorch(true)
    }
    flashSessionQueue.asyncAfter(deadline: .now() + 0.2) {
        self.toggleTorch(false)
    }
}
```

However, when I do this, the photo is not actually captured, and instead I get the error "AVFoundationErrorDomain Code=-11800". The flash does occur (the torch turns on and off), but no photo is captured. Would it even be possible to create a custom flash by manually toggling the torch before or during a camera capture? I have been doing research on AVCaptureMultiCamSession and am also wondering if that could provide a solution.
Posted
by
Post not yet marked as solved
0 Replies
536 Views
Here is a code snippet by which I attempted to set a neutral white balance:

```swift
func prepare(completionHandler: @escaping (Error?) -> Void) {
    func createCaptureSession() {
        self.captureSession = AVCaptureSession()
    }
    func configureCaptureDevices() throws {
        let session = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .back)
        let cameras = (session.devices.compactMap { $0 })
        if cameras.isEmpty { throw CameraControllerError.noCamerasAvailable }
        for camera in cameras {
            try camera.lockForConfiguration()
            camera.setWhiteBalanceModeLocked(
                with whiteBalanceGains: AVCaptureDevice.WhiteBalanceGains(
                    redGain: 1.0, greenGain: 1.0, blueGain: 1.0),
                completionHandler handler: ((CMTime) -> Void)? = nil
            )
            // Errors: Extra arguments at positions #2, #3 in call; Cannot find 'with' in scope
            camera.whiteBalanceMode = .locked
            camera.unlockForConfiguration()
        } catch {
            // To Do: Error handling here
        }
    }
```

My call to AVCaptureDevice.WhiteBalanceGains() gets errors. What is the correct way to do this?
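The errors come from pasting the method's declaration syntax into the call site; the internal parameter names and default values don't appear in a call. A minimal corrected sketch:

```swift
// Inside the lockForConfiguration()/unlockForConfiguration() block:
let gains = AVCaptureDevice.WhiteBalanceGains(redGain: 1.0, greenGain: 1.0, blueGain: 1.0)
camera.setWhiteBalanceModeLocked(with: gains, completionHandler: nil)
// Setting whiteBalanceMode = .locked afterwards is redundant; the call above locks it.
// Each gain must be >= 1.0 and <= camera.maxWhiteBalanceGain.
```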
Posted
by
Post marked as solved
1 Reply
560 Views
I am attempting to follow this example of how to create a camera app: https://www.appcoda.com/avfoundation-swift-guide/

It describes a sequence of four steps, and these steps are embodied in four functions:

```swift
func prepare(completionHandler: @escaping (Error?) -> Void) {
    func createCaptureSession() { }
    func configureCaptureDevices() throws { }
    func configureDeviceInputs() throws { }
    func configurePhotoOutput() throws { }

    DispatchQueue(label: "prepare").async {
        do {
            createCaptureSession()
            try configureCaptureDevices()
            try configureDeviceInputs()
            try configurePhotoOutput()
        } catch {
            DispatchQueue.main.async {
                completionHandler(error)
            }
            return
        }
        DispatchQueue.main.async {
            completionHandler(nil)
        }
    }
}
```

What is confusing me is that these four steps appear to need sequential execution, but in the dispatch queue they seem to be executed simultaneously because .async is used. For example, farther down that webpage the functions createCaptureSession() and configureCaptureDevices(), among others, are defined. In createCaptureSession() the member variables self.frontCamera and self.rearCamera are given values which are used in configureCaptureDevices(). So configureCaptureDevices() depends on createCaptureSession() having already been executed, something that apparently cannot be relied upon if both functions are executing simultaneously in separate threads. What then is the benefit of using DispatchQueue()? How is it assured the above dependencies are met? What is the label parameter of DispatchQueue()'s initializer used for?
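The four calls are not run simultaneously: .async enqueues a single closure, and the statements inside that closure execute in order on the queue. .async only means the *caller* doesn't wait for the closure to finish; the label is just a name that shows up in debuggers and Instruments. A minimal sketch of the distinction:

```swift
import Dispatch

// DispatchQueue(label:) creates a serial queue by default.
let queue = DispatchQueue(label: "prepare")

queue.async {
    // One closure, one thread: these run strictly in order.
    print("step 1")
    print("step 2") // never begins before step 1 finishes
}
print("caller continues immediately") // this is all that .async changes
```

The benefit in the tutorial is that session setup (which can block) is moved off the main thread while still executing its steps sequentially.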
Posted
by
Post not yet marked as solved
0 Replies
649 Views
I am developing a CMIO Camera Extension on macOS Ventura. Initially, I based this on the template camera extension (which creates its own frames). Later, I added a sink stream so that I could send the extension video from an app. That all works.

Recently, I added the ability for the extension itself to initiate a capture session, so that it can augment the video from any available AVCaptureDevice without running its controlling app. That works, but I had to add the Camera capability to the extension's sandbox configuration and add a camera usage string. This caused the OS to put up the user permission dialog asking for permission to use the camera. However, the dialog uses the extension's bundle ID for its name, which is long and not user-friendly. Furthermore, the extension isn't visible to the user (it is packaged inside the app which installs and controls it), so even a user-friendly name doesn't make that much sense to the end user. I tried adding a CFBundleDisplayName to the extension's plist, but the OS didn't use it in the permissions dialog. Is there a way to get the OS to present a more user-friendly name? Should I expect to see a permissions dialog pertaining to the extension at all? Where does the OS get the name from?

After the changes (Camera access, adding a camera usage string), I noticed that the extension's icon (the generic extension icon) showed up in the Dock, with its name equal to its bundle ID. Also, in Activity Monitor, the extension's process is displayed using its CFBundleDisplayName (good). But about 30 s after activation, the name is displayed in red, with " (not responding)" appended, although it is still working. The extension does respond to the requests I send it over the CMIO interface, and it continues to process video, but it isn't handling user events while the OS thinks that it should, probably because of one or more of the changes to the plist that I have had to make.

To get the icon out of the Dock, I added LSUIElement=true to its plist. To get rid of the red "not responding", I changed the code in its main.swift from the template: it used to simply call CFRunLoopRun(); I commented out that call and instead make this call:

```swift
_ = NSApplicationMain(CommandLine.argc, CommandLine.unsafeArgv)
```

That appears to work, but has the unfortunate side effect of increasing the CPU usage of the extension when it is idle from 0.3% to 1.0%. I do want the extension to be able to process Intents, so there is a price to be paid for that, but it doesn't need to do so until it is actively dealing with video. Is there a way to reduce the CPU usage of a background app, perhaps dynamically, making a tradeoff between CPU usage and response latency? Is it to be expected that a CMIOExtension shows up in the Dock, ever?
Posted
by
Post not yet marked as solved
0 Replies
645 Views
How does one get the list of controls which a CMIOObject has to offer? How do the objects in the CMIO hierarchy map to CMIOExtension objects? I expected the hierarchy to be something like this:

- the system has owned objects of type 'aplg' (`kCMIOPlugInClassID`), which
  - have owned objects of type 'adev' (`kCMIODeviceClassID`), which
    - may have owned objects of type 'actl' (`kCMIOControlClassID`) and
    - have at least one owned object of type 'astr' (`kCMIOStreamClassID`), each of which
      - may have owned objects of type 'actl' (`kCMIOControlClassID`)

Instead, when I recursively traverse the object hierarchy, I find the devices and the plug-ins at the same level (under the system object). Only some of the devices in my system have owned streams, although they all have a kCMIODevicePropertyStreams ('stm#') property. None of the devices or streams appear to have any controls, and none of the streams have any owned objects. I'm not using the qualifier when searching for owned objects, because the documentation implies that it may be nil if I'm not interested in narrowing my search.

Should I expect to find any devices or streams with controls? And if so, how do I get a list of them? CMIOHardwareObject.h says that "Wildcards... are especially useful ...for querying an CMIOObject's list of CMIOControls.", but there's no example of how to do this.

My own device (from my camera extension) has no owned objects of type stream. I don't see any API call to convey ownership of the stream I create by the device it belongs to. How does the OS decide that a stream is 'owned' by a device?

I've tried various scopes and elements: kCMIOObjectPropertyScopeGlobal, kCMIOObjectPropertyScopeWildcard, kCMIOControlPropertyScope, and kCMIOObjectPropertyElementMain, kCMIOObjectPropertyElementWildcard, and kCMIOControlPropertyElement. I can't get a list of controls using any of these.

Ultimately, I'm trying to find my provider, my devices, and my streams using the CMIO interface, so that I can set and query properties on them. Is it reasonable to assume that the CMIOObject of type 'aplg' is the one corresponding to a CMIOExtensionProviderSource? This is on Ventura 13.4.1 on M1.
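For the "how do I list owned objects" part, a minimal sketch of the property-based traversal, assuming the C API behaves as declared in CMIOHardwareObject.h (read kCMIOObjectPropertyOwnedObjects as an array of CMIOObjectIDs; a non-nil qualifier of class IDs would narrow the search):

```swift
import CoreMediaIO

func ownedObjects(of objectID: CMIOObjectID) -> [CMIOObjectID] {
    var address = CMIOObjectPropertyAddress(
        mSelector: CMIOObjectPropertySelector(kCMIOObjectPropertyOwnedObjects),
        mScope: CMIOObjectPropertyScope(kCMIOObjectPropertyScopeGlobal),
        mElement: CMIOObjectPropertyElement(kCMIOObjectPropertyElementMain))

    var dataSize: UInt32 = 0
    guard CMIOObjectGetPropertyDataSize(objectID, &address, 0, nil, &dataSize) == 0, // kCMIOHardwareNoError
          dataSize > 0 else { return [] }

    var objects = [CMIOObjectID](repeating: 0,
                                 count: Int(dataSize) / MemoryLayout<CMIOObjectID>.size)
    var dataUsed: UInt32 = 0
    guard CMIOObjectGetPropertyData(objectID, &address, 0, nil,
                                    dataSize, &dataUsed, &objects) == 0 else { return [] }
    return objects
}

// Start at the system object and recurse; controls, if present, should show
// up among a device's or stream's owned objects.
let topLevel = ownedObjects(of: CMIOObjectID(kCMIOObjectSystemObject))
```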
Posted
by
Post not yet marked as solved
1 Reply
570 Views
I am following these directions to code picture taking: https://www.appcoda.com/avfoundation-swift-guide/ These directions are for taking JPEG shots, but what is needed is to output to a lossless format such as DNG, PNG, or BMP. How would those instructions be modified to output pictures in a lossless format? Is there a tutorial similar to the one linked above that explains how to handle lossless formats?
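The tutorial's delegate-based flow carries over; what changes is the AVCapturePhotoSettings. For DNG, request a RAW capture and write photo.fileDataRepresentation() (which is a DNG container for RAW photos) to disk. A minimal sketch, assuming the session and output from the tutorial (the destination URL is hypothetical):

```swift
import AVFoundation

func captureDNG(with photoOutput: AVCapturePhotoOutput,
                delegate: AVCapturePhotoCaptureDelegate) {
    // RAW availability varies by device and session configuration.
    guard let rawType = photoOutput.availableRawPhotoPixelFormatTypes.first else {
        print("RAW capture is not available on this device/configuration")
        return
    }
    let settings = AVCapturePhotoSettings(rawPixelFormatType: rawType)
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}

// In photoOutput(_:didFinishProcessingPhoto:error:):
//   let data = photo.fileDataRepresentation() // DNG bytes for a RAW capture
//   try data?.write(to: destinationURL)       // destinationURL is hypothetical
```

For PNG, the usual route is to capture normally and re-encode the image data (e.g. via CGImageDestination) rather than asking the capture pipeline for PNG directly.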
Posted
by
Post marked as solved
1 Reply
581 Views
There is a crash on the closing brace of the photo capture completion handler, as commented below:

```swift
// Initiate image capture:
cameraController.captureImage() { (image: UIImage?, error: Error?) in
    // This closure is the completion handler; it is called when
    // the picture taking is completed.
    print("camera completion handler called.")
    guard let image = image else {
        print(error ?? "Image capture error")
        return
    }
    try? PHPhotoLibrary.shared().performChangesAndWait {
        PHAssetChangeRequest.creationRequestForAsset(from: image)
    } // crash here
}
```

When stepping through pauses on that closing brace and the next step is taken, there is a crash. Once crashed, what I see next is assembly code, as shown in the disassembly below:

```
libsystem_kernel.dylib`:
    0x1e7d4a39c <+0>:  mov   x16, #0x209
    0x1e7d4a3a0 <+4>:  svc   #0x80
->  0x1e7d4a3a4 <+8>:  b.lo  0x1e7d4a3c4   ; <+40>  Thread 11: signal SIGABRT
    0x1e7d4a3a8 <+12>: pacibsp
    0x1e7d4a3ac <+16>: stp   x29, x30, [sp, #-0x10]!
    0x1e7d4a3b0 <+20>: mov   x29, sp
    0x1e7d4a3b4 <+24>: bl    0x1e7d3d984   ; cerror_nocancel
    0x1e7d4a3b8 <+28>: mov   sp, x29
    0x1e7d4a3bc <+32>: ldp   x29, x30, [sp], #0x10
    0x1e7d4a3c0 <+36>: retab
    0x1e7d4a3c4 <+40>: ret
```

Obviously I have screwed up picture taking somewhere. I would much appreciate suggestions on what diagnostics will lead to the resolution of this problem. I can make the entire picture-taking code available on request as an attachment; it is too lengthy to post here.
Posted
by
Post not yet marked as solved
1 Reply
517 Views
This handler in my photo app:

```swift
cameraController.captureImage() { (image: UIImage?, error: Error?) in
    // This closure is the completion handler; it is called when the picture taking is completed.
    if let error = error {
        print("camera completion handler called with error: " + error.localizedDescription)
    } else {
        print("camera completion handler called.")
    }
    // If no image was captured, return here:
    guard let image = image else {
        print(error ?? "Image capture error")
        return
    }
    var phpPhotoLib = PHPhotoLibrary.shared()
    if phpPhotoLib.unavailabilityReason.debugDescription != "nil" {
        print("phpPhotoLib.unavailabilityReason.debugDescription = " + phpPhotoLib.unavailabilityReason.debugDescription)
    } else {
        // There will be a crash right here unless the key "Privacy - Photo Library Usage Description" exists in this app's .plist file
        try? phpPhotoLib.performChangesAndWait {
            PHAssetChangeRequest.creationRequestForAsset(from: image)
        }
    }
}
```

asks the user for permission to store the captured photo data in the library. It does this once; it does not ask on subsequent photo captures. Is there a way for the app to bypass this user permission request dialog, or at least present it to the user in advance of triggering a photo capture, when the app first runs? In this situation the camera is being used to capture science data in a laboratory setting. The scientist using this app already expects, and needs, the photo to be stored. The redundant permission request dialog makes the data collection cumbersome and inconvenient, due to the extra effort required to reach the camera where it is located.
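The prompt cannot be bypassed (it is a system privacy gate), but it can be moved to a convenient moment by requesting authorization up front, for example at launch. A minimal sketch, using add-only access since the app only saves photos (iOS 14+, with NSPhotoLibraryAddUsageDescription in the Info.plist):

```swift
import Photos

// Trigger the system dialog at app startup instead of mid-capture.
PHPhotoLibrary.requestAuthorization(for: .addOnly) { status in
    // .authorized once granted; subsequent launches return immediately.
    print("Photo library add access status: \(status.rawValue)")
}
```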
Posted
by
Post marked as solved
1 Reply
842 Views
I listed the AVCapturePhotoSettings.availablePhotoPixelFormatTypes array on my iPhone 14 during a running photo session and I got these type numbers: 875704422, 875704438, 1111970369. I have no idea what these numbers mean. How can I use these numbers to look up a human-readable string that tells me what these types are in terms I am familiar with, such as JPEG, TIFF, PNG, BMP, DNG, etc., so I know which of these numbers to choose when I instantiate the class AVCaptureSession?
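Those numbers are FourCC codes packed into an OSType; rendering the four ASCII bytes makes them recognizable. Note they name in-memory pixel formats (YCbCr, BGRA), not file formats like JPEG. A minimal decoding sketch, with the three values from the post decoded in the comments:

```swift
import CoreVideo

// Unpack an OSType into its four-character code.
func fourCharCode(_ type: OSType) -> String {
    let bytes = [24, 16, 8, 0].map { UInt8((type >> $0) & 0xFF) }
    return String(bytes: bytes, encoding: .ascii) ?? String(type)
}

// 875704422  -> "420f" (kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
// 875704438  -> "420v" (kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange)
// 1111970369 -> "BGRA" (kCVPixelFormatType_32BGRA)
print(fourCharCode(875704422), fourCharCode(875704438), fourCharCode(1111970369))
```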
Posted
by
Post not yet marked as solved
0 Replies
811 Views
Hi there! I've been trying to recreate an issue with a camera app I'm developing on a clean install. I can easily start up a new VM with VirtualBuddy, UTM, or Parallels, but I'm having trouble getting a camera device to pass through. Parallels seems the most promising, but whenever I try to connect the camera, there is an error. Does anyone have advice on the best VM provider for passing through USB devices or native cameras? Thanks!
Posted
by