VisionKit


Scan documents with the camera on iPhone and iPad devices using VisionKit.

VisionKit Documentation

Posts under VisionKit tag

63 Posts
Post not yet marked as solved
1 Reply
67 Views
Hi all Apple devs! I am a young developer who is completely new to everything programming. I am currently trying to develop an app where I want to use VisionKit, but I can't for the life of me figure out how to implement its features. I've been stuck on this for several days, so I am now resorting to asking all of you experts for help! Your assistance would be immensely appreciated!

I started to develop the app trying to use SwiftUI exclusively, to future-proof my app. Upon figuring out what VisionKit is, my understanding is that it is more compatible with UIKit, so I rewrote the part of my code that will use VisionKit into a UIKit-based view to simplify the integration of VisionKit's features. It might just have overcomplicated my code. Can VisionKit be easily implemented using only SwiftUI? I noticed that in the demo in the video tutorial the code is in a view controller, not a ContentView; is this what makes my image unresponsive? My image is not interactable like her demo in the video. Where in my code do I go wrong? Help a noob out!

The desired user flow is like this: the user selects an image through the "Open Camera" or "Open Camera Roll" buttons. Upon selection, the UIKit-based view opens and the selected image is displayed in it. (This is where I want to implement the VisionKit features.) The user interacts with the image by touching it; when touching a subject, the subject should be lifted out of the rest of the image and assigned to editedImage, which in turn displays only the subject, without the background, in the ContentView. (For now the image is assigned to editedImage by long-pressing, without any subject lifting, since I can't get VisionKit to work as I want.)

Anyways, here's a code snippet of my peculiar effort to implement subject lifting and VisionKit into my app:
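For reference, a minimal sketch of one way subject lifting can be bridged into SwiftUI, assuming iOS 17's subject-lifting APIs (ImageAnalyzer, ImageAnalysisInteraction); the LiftableImageView wrapper and its bindings are illustrative placeholders, not the poster's snippet:

import SwiftUI
import UIKit
import VisionKit

// Hedged sketch: a UIKit-backed SwiftUI wrapper around VisionKit subject lifting.
// Requires iOS 17 for the subjects / Subject.image APIs.
@available(iOS 17.0, *)
struct LiftableImageView: UIViewRepresentable {
    let image: UIImage
    @Binding var editedImage: UIImage?

    func makeUIView(context: Context) -> UIImageView {
        let imageView = UIImageView(image: image)
        imageView.contentMode = .scaleAspectFit
        imageView.isUserInteractionEnabled = true

        // Attach the interaction so subjects respond to touch.
        let interaction = ImageAnalysisInteraction()
        interaction.preferredInteractionTypes = [.imageSubject]
        imageView.addInteraction(interaction)

        // Run the analysis asynchronously, then publish the first lifted subject.
        Task { @MainActor in
            let analyzer = ImageAnalyzer()
            let configuration = ImageAnalyzer.Configuration([.visualLookUp])
            guard let analysis = try? await analyzer.analyze(image, configuration: configuration) else { return }
            interaction.analysis = analysis
            if let subject = await interaction.subjects.first {
                editedImage = try? await subject.image   // subject with background removed
            }
        }
        return imageView
    }

    func updateUIView(_ uiView: UIImageView, context: Context) {}
}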
Posted by emol. Last updated.
Post not yet marked as solved
0 Replies
95 Views
For example: we use DockKit for birdwatching, so we have an unknown field distance and direction (distance = ?, direction = ?) from, say, the rock the observation is made from. The task is to recognize the number of birds caught in the frame, draw a detection box around each, and collect statistics. Question: what is the maximum number of frames that can be processed with custom object recognition? If that is not enough, can I do the calculations myself and hand the results to DockKit for fast movement?
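This doesn't answer the frame-rate limit, but as a sketch of the do-it-yourself half of the question, per-frame detection with a custom Core ML object detector via Vision might look like the following (the model is a placeholder; forwarding the resulting boxes to DockKit tracking is not shown):

import Vision
import CoreML
import CoreVideo

// Hedged sketch: counts detections per camera frame with a custom detector model.
final class BirdCounter {
    private let request: VNCoreMLRequest

    init(model: MLModel) throws {
        let visionModel = try VNCoreMLModel(for: model)
        request = VNCoreMLRequest(model: visionModel)
        request.imageCropAndScaleOption = .scaleFill
    }

    // Returns the detections for one frame; the count is the number of birds seen.
    func detections(in pixelBuffer: CVPixelBuffer) throws -> [VNRecognizedObjectObservation] {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
        try handler.perform([request])
        return (request.results as? [VNRecognizedObjectObservation]) ?? []
    }
}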
Posted. Last updated.
Post not yet marked as solved
0 Replies
167 Views
Where can I find the Puzzle Game demo code shown in the video on lifting subjects from images in the app? Thank you! https://developer.apple.com/videos/play/wwdc2023/10176/
Posted by az_az. Last updated.
Post not yet marked as solved
2 Replies
300 Views
Hi guys, has anyone tried using Vision Pro on a train? I was getting a "Tracking lost" or "Tracking unavailable" message (I don't remember precisely). I could not quite get even the Home screen: it was kind of shaky, and as the train moved it drifted sideways. I could not record video when looking out of the window; again, the same error message. I was trying to look inward, so that the device detected minimal movement. There were no people in front of me, just empty seats, so I was expecting Vision Pro to be able to lock onto the surrounding space, but without any success. I managed to start one app I work on and began watching a movie, but the screen stayed in place for only 30 seconds or so, then started moving around a little, drifted sideways, flew out of the window, zipped past me, and stayed somewhere behind on the track.

Is it possible to switch Vision Pro into a mode where it ignores the surroundings? Not sure if Airplane mode could help, but it was very difficult to even open the Home screen, Settings, or Control Center before I got the error message. It should be a relatively simple algorithm to detect that, say, 70% of the surroundings is roughly in place and ignore the moving scene (like the landscape passing in the window). Apple, please could you fix this, or provide a hint within "Tips" on how to make Vision Pro work inside moving vehicles, if this is already possible? It would be great Vision Pro usability if I could watch movies when traveling and then at home do something meaningful, like taking a nap. Thanks.
Posted. Last updated.
Post not yet marked as solved
4 Replies
444 Views
Hi guys, if you have started using Vision Pro, I'm sure you have already found some limitations. Let's join forces and make feature requests. When creating Feedback, a request from one guy may not get any attention from Apple, but if more of us make the same request, we might just push those ideas through. Feel free to add your ideas, and don't forget to create Feedback:

1. App windows can only be moved forward to a distance of about 20 ft / 6 m. I'm pretty sure some users would like to push a window as far as a few miles away and make it large enough to still be legible. This would be very interesting especially when using Environments and the 360-degree view. I really want to put some apps up in the sky above the mountains and around me, even those iOS apps just made compatible with Vision Pro.

2. When capturing the screen, I always get the message "Video capture not possible due to insufficient lighting". Why? I have an Environment loaded and extended 360 degrees with some apps opened, so there is no need for external lighting (at least I think it's not needed). I just want to capture what I see. Imagine creating tutorials, recording lessons for various subjects, etc. Actual Vision Pro users might prefer loading their own environments and setting up apps in the spatial domain, but for those who don't have the device yet, or when creating videos to be viewed on antique 2D computer screens, it may be useful to create 2D videos this way.

3. 3D video recording is not very good: kind of shaky, not when Vision Pro is static, but when walking and especially when turning the head left/right/up/down (even relatively slowly). I think the hardware should be able to capture and create nice, smooth video. It's possible that Apple just designed a simple camera app and wants to give developers a chance to create a better one, but it still would be nice to have something better out of the box.

4. I would like to be able to walk through Environments. I understand the safety purpose of the see-through effect, so users don't hit any obstacles, but perhaps obstacles could be detected: when the user gets within 6 ft / 2 m of an obstacle it could first present a warning (there is already "You are close to an object") and then make the surroundings visible. But if there are no obstacles (the user can be in a large space and can place a tape or a thread around the safe area), I should be able to walk around and take a look inside that crater on the Moon.

5. We need Environments, Environments, Environments, and yet more of them. I was hoping for hundreds, so we could even pick some of them and use them in our apps, like games where you want to set up a specific environment.

Well, that's just a beginning and I could go on and on and on, but tell me what you guys think. Regards and enjoy the new virtual adventure! Robert
Posted. Last updated.
Post not yet marked as solved
1 Reply
406 Views
As the title already suggests: is it possible with the current Apple Vision Pro simulator to recognize objects/humans, as is currently possible on the iPhone? I am not even sure whether there is an API for accessing the cameras of the Vision Pro. My goal is to recognize, for example, a human and attach a 3D object to them, for example a hat. Can this be done?
Posted by wladislaw. Last updated.
Post not yet marked as solved
0 Replies
214 Views
Can you drag a view with Transferable content from one WindowGroup and drop it into an ImmersiveSpace containing a RealityView? I can drag, but the drop event isn't captured when the target is a RealityView:

var body: some View {
    let droppable = Droppable(model: model)
    RealityView { content in
        // Add the initial RealityKit content
        content.add(floorEntity)
    }
    .onDrop(of: ...
    // or
    .dropDestination(for: ...) { }
    // or
    .gesture(
        DragGesture()
            .targetedToAnyEntity()
            .onChanged { value in

None of them triggers the drop.
Posted. Last updated.
Post not yet marked as solved
1 Reply
388 Views
I am new to visionOS development, just slowly figuring out the differences between immersion styles to decide how I want my app to behave. It seems that when you use a progressive immersive space, the minimum immersion level (set via the Digital Crown) is not 0? Meaning, there is no way to go from mixed to full by using the Digital Crown. Even when I try to set it to 0 (such as in the Destination Video sample), it pops back up to around 30-40%, and I always see the background. Is this expected behavior, or are there settings that allow me to change this minimum immersion level?

Further, in the video 'Meet ARKit for spatial computing', it is stated that to get access to ARKit tracking data you must use a 'Full Space', not the 'Shared Space'. This wording is confusing to me. Is an ImmersiveSpace set to the .mixed (or .progressive) immersion style still a 'Full Space' (because it isn't in the shared space with other apps)? Or is ARKit only available in an ImmersiveSpace with the .full immersion style? It just feels like maybe 'full' is being used in two different ways here... Thanks in advance, -pj
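For reference, a minimal sketch of where the immersion styles in question are declared on an ImmersiveSpace; the scene id, ContentView, and empty RealityView content are placeholders, and this shows the .progressive/.full choice rather than a way to change the minimum Crown level:

import SwiftUI
import RealityKit

@main
struct ImmersionDemoApp: App {
    @State private var immersionStyle: ImmersionStyle = .progressive

    var body: some Scene {
        WindowGroup {
            ContentView()
        }

        ImmersiveSpace(id: "viewer") {
            RealityView { content in
                // Add immersive content here.
            }
        }
        // The Digital Crown adjusts the amount of immersion only while the style is .progressive.
        .immersionStyle(selection: $immersionStyle, in: .progressive, .full)
    }
}

struct ContentView: View {
    var body: some View {
        Text("Open the immersive space from here.")
    }
}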
Posted by pj4533. Last updated.
Post marked as solved
1 Reply
416 Views
After migrating my Ionic Cordova app to Ionic Capacitor, I am encountering a persistent white screen on a particular page. Along with this, I have observed the following error messages in the console:

Error Message: [com.apple.VisionKit.RemoveBackground] Request to remove background on an unsupported device. Error Domain=com.apple.VisionKit.RemoveBackground Code=-8 "(null)"

Error Message: [UILog] Called -[UIContextMenuInteraction updateVisibleMenuWithBlock:] while no context menu is visible. This won't do anything.

The actual page becomes visible after clicking on the white screen. The same code works fine in the Android build, but the issue occurs on iOS.
Posted. Last updated.
Post not yet marked as solved
0 Replies
316 Views
I'm using DataScannerViewController with SwiftUI to scan text and barcodes from a card. I would like the user to be able to hold the card in front of the device, but I am not finding a way to select the front camera with DataScannerViewController. Does anyone know of a way to select the front camera?
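For reference, a sketch of a typical DataScannerViewController configuration; as far as the public initializer goes, it exposes no camera-position parameter, which is the limitation being asked about. The data types and options below are only examples:

import VisionKit

// Hedged sketch: configures a scanner for text and barcodes on a card.
@MainActor
func makeCardScanner() -> DataScannerViewController {
    DataScannerViewController(
        recognizedDataTypes: [.text(), .barcode()],
        qualityLevel: .accurate,
        recognizesMultipleItems: true,
        isHighlightingEnabled: true
    )
}

// After presenting the controller:
// try? scanner.startScanning()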
Posted by alanlbird. Last updated.
Post not yet marked as solved
2 Replies
423 Views
Hello! I'm implementing a mechanism for cropping an object out of an image:

@MainActor
static func detectObjectOnImage(image: UIImage) async throws -> UIImage {
    let analyser = ImageAnalyzer()
    let interaction = ImageAnalysisInteraction()
    let configuration = ImageAnalyzer.Configuration([.visualLookUp])
    let analysis = try await analyser.analyze(image, configuration: configuration)
    interaction.analysis = analysis
    return try await interaction.image(for: interaction.subjects)
}

My app supports iOS 16 and the compiler doesn't complain about the code. However, when I run it on a simulator with iOS 16, I get a "symbol not found" error at app launch. Does anybody know what the issue can be?
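One hedged guess: ImageAnalysisInteraction.subjects and image(for:) are iOS 17 APIs, so referencing them without an availability gate could explain the missing symbol on an iOS 16 launch. A minimal sketch of gating the call (the function name is illustrative; it falls back to the unmodified image on older systems):

import UIKit
import VisionKit

@MainActor
func liftedSubjectOrOriginal(from image: UIImage) async -> UIImage {
    // Only touch the iOS 17-only subject-lifting symbols when they exist.
    guard #available(iOS 17.0, *) else { return image }
    do {
        let analyser = ImageAnalyzer()
        let interaction = ImageAnalysisInteraction()
        let configuration = ImageAnalyzer.Configuration([.visualLookUp])
        let analysis = try await analyser.analyze(image, configuration: configuration)
        interaction.analysis = analysis
        return try await interaction.image(for: interaction.subjects)
    } catch {
        return image
    }
}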
Posted. Last updated.
Post not yet marked as solved
0 Replies
371 Views
I'm using RealityKit to give an immersive view of 360° pictures. However, I'm seeing a problem where the window disappears when I enter immersive mode and returns when I rotate my head. Interestingly, applying ".glassBackground()" to the back of the window cures the issue; however, I'd prefer not to use it as the UI's backdrop. How can I deal with this? Here is a link to a GIF: https://firebasestorage.googleapis.com/v0/b/affirmation-604e2.appspot.com/o/Simulator%20Screen%20Recording%20-%20Apple%20Vision%20Pro%20-%202024-01-30%20at%2011.33.39.gif?alt=media&token=3fab9019-4902-4564-9312-30d49b15ea48
Posted. Last updated.
Post not yet marked as solved
0 Replies
331 Views
Hi, I looked at the Diorama example and wanted to do the same, which is to have a custom PointOfInterestComponent on the anchor object in Reality Composer Pro. In the scene, I iterate to find entities with that component and attach the corresponding attachment (which I created when adding that scene) in the update closure:

update: { content, attachments in
    viewModel.rootEntity?.scene?.performQuery(Self.runtimeQuery).forEach { entity in
        guard let component = entity.components[PointOfInterestRuntimeComponent.self] else { return }
        guard let attachmentEntity = attachments.entity(for: component.attachmentTag) else { return }
        guard attachmentEntity.parent == nil else { return }
        attachmentEntity.setPosition([0.0, 0.4, 0], relativeTo: entity)
        entity.addChild(attachmentEntity, preservingWorldTransform: true)
    }
}

This doesn't show the attachment entity, but if I do content.add(attachmentEntity) instead of entity.addChild, it shows up. What could be wrong?
Posted. Last updated.
Post not yet marked as solved
2 Replies
400 Views
Hello everyone, I don't know what to do about my problem. I have a barcode reader in my application, implemented with VisionKit. The other pages in the bottom bar are handled by a TabView. The problem is that when I switch screens, my camera freezes. Does anyone know how to solve this? Thanks for any reply.
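One hedged sketch, assuming the scanner is a DataScannerViewController that simply isn't restarted when its tab comes back: drive startScanning()/stopScanning() from onAppear/onDisappear. The ScannerTab and ScannerRepresentable names are illustrative, and the controller is created once elsewhere and passed in:

import SwiftUI
import UIKit
import VisionKit

struct ScannerTab: View {
    // A single controller owned by the caller.
    let scanner: DataScannerViewController

    var body: some View {
        ScannerRepresentable(controller: scanner)
            .onAppear { try? scanner.startScanning() }   // resume when the tab is shown again
            .onDisappear { scanner.stopScanning() }      // release the camera while hidden
    }
}

// Minimal UIKit-to-SwiftUI bridge for the scanner controller.
struct ScannerRepresentable: UIViewControllerRepresentable {
    let controller: DataScannerViewController

    func makeUIViewController(context: Context) -> DataScannerViewController { controller }
    func updateUIViewController(_ uiViewController: DataScannerViewController, context: Context) {}
}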
Posted by MORADENS. Last updated.
Post not yet marked as solved
0 Replies
381 Views
As per the Apple Human Interface Guidelines for visionOS immersive experiences (https://developer.apple.com/design/human-interface-guidelines/immersive-experiences), if a person moves more than about a meter, the system automatically makes all displayed content translucent to help them navigate their surroundings. What is intended by this "translucent" behavior? Will the app content be fully invisible, or displayed with some transparency?
Posted by divya_ms. Last updated.
Post not yet marked as solved
0 Replies
540 Views
Vision Pro has a primary user mode and guest modes. Is it possible to change the primary user through factory reset or similar mechanism once it is configured?
Posted by divya_ms. Last updated.
Post not yet marked as solved
0 Replies
409 Views
I heard the Vision Pro is expected to be available in the US market very soon, but it will be delayed for other markets. Any idea whether Apple still accepts applications for the Vision Pro Developer Kit loan program?
Posted by divya_ms. Last updated.
Post not yet marked as solved
0 Replies
365 Views
In our app, we needed to use the VisionKit framework to lift the subject from an image and crop it. Here is the piece of code:

if #available(iOS 17.0, *) {
    let analyzer = ImageAnalyzer()
    let analysis = try? await analyzer.analyze(image, configuration: self.visionKitConfiguration)
    let interaction = ImageAnalysisInteraction()
    interaction.analysis = analysis
    interaction.preferredInteractionTypes = [.automatic]
    guard let subject = await interaction.subjects.first else { return image }
    let s = await interaction.subjects
    print(s.first?.bounds)
    guard let cropped = try? await subject.image else { return image }
    return cropped
}

But s.first?.bounds always returns a CGRect with all zero values. Is there another way to get the position of the cropped subject? I need the position in the original image from which the subject was cropped. Can anyone help?
Posted by utshas. Last updated.