visionOS


Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

1,202 Posts
Post not yet marked as solved
0 Replies
32 Views
Hi team, I'm running into the following issue, for which I don't seem to find a good solution. I would like to be able to drag and drop items from a view into empty space to open a new window that displays detailed information about the dragged item. I know something similar has already been flagged in this post (FB13545880: Support drag and drop to create a new window on visionOS). HOWEVER, all that does is launch the app again with the SAME WindowGroup and show ContentView in a different state (e.g. with a selected product). What I would like to do instead is launch ONLY the new WindowGroup, without a new instance of ContentView. This is the closest I have gotten so far. It opens the desired window, but it also displays the ContentView WindowGroup:

WindowGroup {
    ContentView()
        .onContinueUserActivity(Activity.openWindow, perform: handleOpenDetail)
}

WindowGroup(id: "Detail View", for: Reminder.ID.self) { $reminderId in
    ReminderDetailView(reminderId: reminderId!)
}

.onDrag({
    let userActivity = NSUserActivity(activityType: Activity.openWindow)
    let localizedString = NSLocalizedString("DroppedReminterTitle", comment: "Activity title with reminder name")
    userActivity.title = String(format: localizedString, reminder.title)
    userActivity.targetContentIdentifier = "\(reminder.id)"
    try? userActivity.setTypedPayload(reminder.id)

    // When setting the identifier
    let encoder = JSONEncoder()
    if let jsonData = try? encoder.encode(reminder.persistentModelID),
       let jsonString = String(data: jsonData, encoding: .utf8) {
        userActivity.userInfo = ["id": jsonString]
    }
    return NSItemProvider(object: userActivity)
})

func handleOpenDetail(_ userActivity: NSUserActivity) {
    guard let idString = userActivity.userInfo?["id"] as? String else {
        print("Invalid or missing identifier in user activity")
        return
    }
    if let jsonData = idString.data(using: .utf8) {
        do {
            let decoder = JSONDecoder()
            let persistentID = try decoder.decode(PersistentIdentifier.self, from: jsonData)
            openWindow(id: "Detail View", value: persistentID)
        } catch {
            print("Failed to decode PersistentIdentifier: \(error)")
        }
    } else {
        print("Failed to convert string to data")
    }
}
Posted. Last updated.
Post not yet marked as solved
0 Replies
31 Views
Is there a maximum distance at which an entity will register a TapGesture()? I'm unable to interact with entities farther than 8 or 9 meters away. The code below generates a series of entities progressively farther away; after about 8 meters, the entities no longer respond to tap gestures.

var body: some View {
    RealityView { content in
        for i in 0..<10 {
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
                immersiveContentEntity.position = SIMD3<Float>(x: Float(-i*i), y: 0.75, z: Float(-1*i)-3)
            }
        }
    }
    .gesture(tap)
}

var tap: some Gesture {
    TapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            AudioServicesPlaySystemSound(1057)
            print(value.entity.name)
        }
}
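For reference, a minimal hedged sketch (not the code above) of explicitly giving a RealityKit entity collision geometry and an input target, since tap targeting needs both; whether the roughly 8-9 meter limit described above is a system behavior is a separate question this sketch doesn't answer.

import RealityKit

// Hedged sketch: make an entity hit-testable for gaze/pinch and direct input.
func makeTappable(_ entity: ModelEntity) {
    // Collision shapes derived from the entity's own model geometry.
    entity.generateCollisionShapes(recursive: true)
    // Mark the entity as an input target so the system considers it for taps.
    entity.components.set(InputTargetComponent())
}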
Posted by smicker. Last updated.
Post not yet marked as solved
0 Replies
40 Views
Hi! I was trying to port our SDK to visionOS. While going through the documentation I saw this video: https://developer.apple.com/videos/play/wwdc2023/10089/ Is there any working code sample for it? The same goes for the ARKit C API. I couldn't find any links. Thanks in advance. Sahil
Posted by saagn. Last updated.
Post not yet marked as solved
0 Replies
39 Views
I am developing an immersive application in which the user's hands interact with my virtual objects. When my hand passes through an object, the rendered hand color is blended with the object's color, so both appear semi-transparent. I wonder if it is possible to make my hand always render "opaque", i.e. the alpha value of the rendered hand (since it's video see-through) is always 1, while the object's alpha value could vary depending on whether it is interacting with the hand. (I was thinking this kind of feature might be exposed by a specific component, just like HoverEffectComponent, but I didn't find one.)
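I'm not aware of a per-entity component for this, but as a hedged pointer: SwiftUI on visionOS has an upperLimbVisibility scene modifier that expresses a preference for how the passthrough hands/arms are shown over rendered content. A minimal sketch, assuming a made-up space id ("HandsSpace") and scene name; whether it gives exactly the opaque-hand behavior described above would need testing.

import SwiftUI
import RealityKit

// Hedged sketch: ask the system to keep the user's upper limbs visible
// over rendered content. "HandsSpace" is a placeholder id.
struct HandsSpaceScene: Scene {
    var body: some Scene {
        ImmersiveSpace(id: "HandsSpace") {
            RealityView { _ in
                // Add the virtual objects the hands will interact with here.
            }
        }
        // Preference for showing passthrough hands/arms over rendered content.
        .upperLimbVisibility(.visible)
    }
}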
Posted by milanowth. Last updated.
Post not yet marked as solved
0 Replies
42 Views
Good day. Is there a way to test functionality between the Apple Pencil Pro and Apple Vision Pro? I'm trying to work on an idea that would require a tool like the Pencil as an input device. Will there be an SDK for this kind of connectivity?
Posted by MrDanger. Last updated.
Post not yet marked as solved
0 Replies
38 Views
I have been trying to replicate the entity transform functionality present in the magnificent app Museum That Never Was (https://apps.apple.com/us/app/the-museum-that-never-was/id6477230794) -- it allows you to simultaneously rotate, magnify, and translate the entity using gestures with both hands (as opposed to the normal DragGesture(), which is a one-handed gesture). I am able to rotate & magnify simultaneously, but translating via drag does not activate while doing two-handed gestures. Any ideas? My setup is something like this:

Gestures:

var drag: some Gesture {
    DragGesture()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureTranslation = value.convert(value.translation3D, from: .local, to: .scene)
        }
        .onEnded { value in
            itemTranslation += gestureTranslation
            gestureTranslation = .init()
        }
}

var rotate: some Gesture {
    RotateGesture3D()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureRotation = simd_quatf(value.rotation.quaternion).inverse
        }
        .onEnded { value in
            itemRotation = gestureRotation * itemRotation
            gestureRotation = .identity
        }
}

var magnify: some Gesture {
    MagnifyGesture()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureScale = Float(value.magnification)
        }
        .onEnded { value in
            itemScale *= gestureScale
            gestureScale = 1.0
        }
}

RealityView modifiers:

.simultaneousGesture(drag)
.simultaneousGesture(rotate)
.simultaneousGesture(magnify)

RealityView update block:

entity.position = itemTranslation + gestureTranslation + exhibitDefaultPosition
entity.orientation = gestureRotation * itemRotation
entity.scaleAll(itemScale * gestureScale)
Posted by matti777. Last updated.
Post not yet marked as solved
2 Replies
262 Views
I've been getting random crashes in an immersive RealityView, using model entities with physics. While the crashes do randomly happen with fewer objects, they can be reproduced consistently by placing 100 objects on top of each other. The project I used as a starting point is here: https://developer.apple.com/documentation/visionos/incorporating-real-world-surroundings-in-an-immersive-experience To reproduce the error, create 100 cubes in a loop on top of each other in addCube, instead of creating just one. This gives exc_bad_access as soon as the cubes are created. Tested on a Vision Pro, not the simulator. Any advice on how to resolve this? I'm trying to have around 100 objects moving around the environment, but it still gives exc_bad_access eventually. I'm trying to attach the crash log, but I keep getting the error about sensitive materials. Testing to see if I can post without it.
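For anyone who wants to reproduce this, here's a minimal hedged sketch of the loop described above (the cube size, spacing, material, and the contentEntity parameter are assumptions, not the sample project's exact code):

import RealityKit

// Hedged sketch of the reproduction described above: stack 100 dynamic
// physics cubes on top of each other under a given root entity.
func addCubes(to contentEntity: Entity) {
    for i in 0..<100 {
        let cube = ModelEntity(
            mesh: .generateBox(size: 0.1),
            materials: [SimpleMaterial(color: .white, isMetallic: false)]
        )
        // Stack the cubes vertically about 1.5 m in front of the user.
        cube.position = [0, 1.0 + Float(i) * 0.11, -1.5]
        cube.components.set(CollisionComponent(shapes: [.generateBox(size: [0.1, 0.1, 0.1])]))
        cube.components.set(PhysicsBodyComponent(
            shapes: [.generateBox(size: [0.1, 0.1, 0.1])],
            mass: 1.0,
            mode: .dynamic
        ))
        contentEntity.addChild(cube)
    }
}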
Posted by app_log. Last updated.
Post not yet marked as solved
2 Replies
205 Views
I think it's kind of essential to have eye tracking data available to apps in VR mode (with the user's permission). The biggest problem I've observed is that Unity isn't able to implement dynamic foveated rendering without eye tracking data. Without eye tracking, only fixed foveated rendering is possible. That gives a performance boost to rendering, but it also means the image gets blurry if the user looks to the side without turning their head. I understand why it's a privacy issue to have apps tracking where the user is looking in the real world, but video passthrough is disabled in VR -- so it should be OK to enable eye tracking in VR (with the user's permission). Unity already supports dynamic foveated rendering (with eye tracking) for other VR headsets, and Vision Pro has the best eye tracking -- so Vision Pro should definitely have the best dynamic foveated rendering in VR.
Posted by Hawaiianz. Last updated.
Post not yet marked as solved
0 Replies
66 Views
[visionOS Question] I'm using the hierarchy of an entity loaded from a Reality Composer Pro project to drive the content of a NavigationSplitView. I'd like to render any of the child entities in a RealityView in the detail pane when the user selects that child entity's name from the list in the NavigationSplitView. I haven't been able to render the entity in the detail view yet. I have tried updating the position/scaling to no avail. I also tried adding an AnchorEntity and setting the child entity's parent to it. I'm starting to suspect that the way to do it is to create a scene for each individual child entity in the Reality Composer Pro project. I'd prefer to avoid this approach, as I want a data-driven approach. Is there a way to implement my idea in RealityKit in code?
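For what it's worth, here's a minimal hedged sketch of one data-driven approach: list the children of the loaded root entity in the sidebar and, on selection, clone the chosen child into the detail RealityView. The scene name "Scene", the bundle, and the view/property names are placeholders, not taken from the post.

import SwiftUI
import RealityKit
import RealityKitContent

// Hedged sketch: drive a NavigationSplitView from an entity hierarchy and
// show the selected child in the detail RealityView. Names are placeholders.
struct EntityBrowserView: View {
    @State private var rootEntity: Entity?
    @State private var detailRoot = Entity()
    @State private var selection: String?

    var body: some View {
        NavigationSplitView {
            // Sidebar: one row per child of the loaded root entity.
            List(rootEntity?.children.map(\.name) ?? [], id: \.self, selection: $selection) { name in
                Text(name)
            }
            .task {
                // "Scene" and realityKitContentBundle are placeholders for your project.
                rootEntity = try? await Entity(named: "Scene", in: realityKitContentBundle)
            }
        } detail: {
            RealityView { content in
                content.add(detailRoot)
            } update: { _ in
                // Swap the displayed child whenever the selection changes.
                for child in Array(detailRoot.children) {
                    child.removeFromParent()
                }
                if let name = selection,
                   let chosen = rootEntity?.findEntity(named: name) {
                    // Clone so the original hierarchy stays intact.
                    detailRoot.addChild(chosen.clone(recursive: true))
                }
            }
        }
    }
}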
Posted by eisen. Last updated.
Post not yet marked as solved
1 Reply
75 Views
I'm trying to use a RealityView with attachments and this error is being thrown. Am I using the RealityView wrong? I've seen other people use a RealityView with attachments in visionOS... Please let this be a bug...

RealityView { content, attachments in
    contentEntity = ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.5))
    content.add(contentEntity!)
} attachments: {
    Text("Hello!")
}
.task {
    await loadImage()
    await runSession()
    await processImageTrackingUpdates()
}
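If it helps, here's a minimal hedged sketch of the attachments pattern as I understand the released visionOS API: each attachment view is wrapped in Attachment(id:) and then looked up with attachments.entity(for:) inside the make closure. The "label" id, view name, and plane size are placeholders.

import SwiftUI
import RealityKit

struct AttachmentExampleView: View {
    var body: some View {
        RealityView { content, attachments in
            let plane = ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.5))
            content.add(plane)

            // Look up the SwiftUI attachment by id and place it in the scene.
            if let label = attachments.entity(for: "label") {
                label.position = [0, 0.35, 0]
                content.add(label)
            }
        } attachments: {
            // Attachment views are identified by id so they can be retrieved above.
            Attachment(id: "label") {
                Text("Hello!")
            }
        }
    }
}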
Posted by teezy_dev. Last updated.
Post not yet marked as solved
2 Replies
92 Views
I am trying to verify my understanding of adding a HoverEffectComponent to entities inside a scene in a RealityView. Inside Reality Composer Pro, I have added the required Input Target and Collision components to one entity inside a node with multiple siblings, and left the options at their defaults. They appear to create appropriately sized bounding boxes etc. for these objects. In my RealityView I programmatically add the HoverEffectComponents to the entities, as I don't see them in RCP. On device, this appears to "work" in the sense that when I gaze at the entity, it lights up - but so does every other entity in the scene, even those without Input Target and Collision components attached. Because the documentation on these components is sparse, I am unsure whether this is behavior as designed (e.g. all entities in that node are activated), a bug, or something in between. Has anyone encountered this, and is there an appropriate way of setting these relationships up? Thanks
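As a point of comparison, here's a minimal hedged sketch of attaching the hover effect to exactly one entity in code (the entity name and box size are placeholders); in my understanding, only entities that carry all three components, and whose collision shape the gaze hits, should highlight.

import RealityKit

// Hedged sketch: make a single named entity hover-highlightable.
// "TargetEntity" is a placeholder for the entity name in the RCP scene.
func enableHover(in sceneRoot: Entity) {
    guard let target = sceneRoot.findEntity(named: "TargetEntity") else { return }

    // Collision geometry the gaze ray can hit (placeholder box size).
    target.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
    // Mark the entity as an input target so the system considers it for gaze/pinch.
    target.components.set(InputTargetComponent())
    // The hover (gaze) highlight itself.
    target.components.set(HoverEffectComponent())
}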
Posted. Last updated.
Post not yet marked as solved
0 Replies
55 Views
Hi, I just generated an HDR10 MVHEVC file; the mediainfo is below:

Color range : Limited
Color primaries : BT.2020
Transfer characteristics : PQ
Matrix coefficients : BT.2020 non-constant
Codec configuration box : hvcC+lhvC

Then I generated the segment files with the command below:

mediafilesegmenter --iso-fragmented -t 4 -f av_1 av_new_1.mov

and uploaded the segment files and prog_index.m3u8 to a web server. I find that I cannot play the HLS stream in Safari... the URL is http://ip/vod/prog_index.m3u8. I also checked that if I remove the "Transfer characteristics : PQ" tag when generating the MVHEVC file, run the same mediafilesegmenter command, and upload the files to the web server, the new version of the HLS stream does play in Safari... Is there any way to play HLS PQ video in Safari? Thanks.
Posted. Last updated.
Post not yet marked as solved
1 Reply
118 Views
Hi all, I need some help debugging some code I wrote. Just as a preface, I'm an extremely new VR/AR developer and also very new to using ARKit + RealityKit. So please bear with me :) I'm just trying to make a simple program that will track an image and place an entity on it. The image is tracked correctly, but the moment the program recognizes the image and tries to place an entity on it, the program crashes. Here’s my code:

VIEWMODEL CODE:

@Observable
class ImageTrackingModel {
    var session = ARKitSession() // ARSession used to manage AR content
    var imageAnchors = [UUID: Bool]() // Tracks whether specific anchors have been processed
    var entityMap = [UUID: ModelEntity]() // Maps anchors to their corresponding ModelEntity
    var rootEntity = Entity() // Root entity to which all other entities are added

    let imageInfo = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "referancePaper")
    )

    init() {
        setupImageTracking()
    }

    func setupImageTracking() {
        if ImageTrackingProvider.isSupported {
            Task {
                try await session.run([imageInfo])
                for await update in imageInfo.anchorUpdates {
                    updateImage(update.anchor)
                }
            }
        }
    }

    func updateImage(_ anchor: ImageAnchor) {
        let entity = ModelEntity(mesh: .generateSphere(radius: 0.05)) // THIS IS WHERE THE CODE CRASHES

        if imageAnchors[anchor.id] == nil {
            rootEntity.addChild(entity)
            imageAnchors[anchor.id] = true
            print("Added new entity for anchor \(anchor.id)")
        }

        if anchor.isTracked {
            entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
            print("Updated transform for anchor \(anchor.id)")
        }
    }
}

APP:

@main
struct MyApp: App {
    @State var session = ARKitSession()
    @State var immersionState: ImmersionStyle = .mixed
    private var viewModel = ImageTrackingModel()

    var body: some Scene {
        WindowGroup {
            ModeSelectView()
        }
        ImmersiveSpace(id: "appSpace") {
            ModeSelectView()
        }
        .immersionStyle(selection: $immersionState, in: .mixed)
    }
}

Content View:

RealityView { content in
    Task {
        viewModel.setupImageTracking()
    }
}
// I'm seriously so clueless on how to use this view
Posted by teezy_dev. Last updated.
Post not yet marked as solved
1 Reply
169 Views
I just bought a Vision Pro to build and run my app on it, but I'm encountering this error from Xcode: Developer Mode is enabled. Computer is paired. Device is paired, and it's connected to my Mac via the Developer Strap. In the "VPN & Device Management" setting that it says to navigate to, there is no "Developer App certificate". Others have suggested clearing the ModuleCache in DerivedData, but that's also been fruitless. I've cleaned the build multiple times and restarted both devices and Xcode. I have no idea what else I can possibly do to resolve this. Does anyone have any other ideas?
Posted. Last updated.
Post not yet marked as solved
1 Reply
125 Views
I was executing some code from Incorporating real-world surroundings in an immersive experience:

func processReconstructionUpdates() async {
    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor
        guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }
        switch update.event {
        case .added:
            let entity = ModelEntity()
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
            entity.components.set(InputTargetComponent())
            entity.physicsBody = PhysicsBodyComponent(mode: .static)
            meshEntities[meshAnchor.id] = entity
            contentEntity.addChild(entity)
        case .updated:
            guard let entity = meshEntities[meshAnchor.id] else { continue }
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision?.shapes = [shape]
        case .removed:
            meshEntities[meshAnchor.id]?.removeFromParent()
            meshEntities.removeValue(forKey: meshAnchor.id)
        }
    }
}

I would like to toggle the Occlusion mesh available in the dev tools below, but programmatically: I would like a button that activates and deactivates it. I checked .showSceneUnderstanding, but it does not seem to work in visionOS; I get the error "'ARView' is unavailable in visionOS" when I try what is available in Visualizing and Interacting with a Reconstructed Scene.
Posted by Hygison. Last updated.
Post not yet marked as solved
2 Replies
106 Views
I'm trying to control the LOD of textures in an app for Vision Pro. With the default image node in Reality Composer Pro the UVs are correct, but the LOD is not what I want; I would like to have control over it. I see there is a node called "RealityKitTexture2DLOD", but as soon as I try to use that one the UVs are all messed up. Am I missing something? Do we need to do something specific to use this node? I tried to use the "Place 2D" and "UsdTransform2d" nodes but could not get the texture to align. Any help appreciated.
Posted by gl5. Last updated.
Post not yet marked as solved
2 Replies
473 Views
I'm unable to figure out how to know when my app no longer has focus. ScenePhase only changes when the WindowGroup gets created or closed. UIApplication.didBecomeActiveNotification and UIApplication.didEnterBackgroundNotification are not posted either when, say, you move focus to Safari. What's the trick?
Posted by Lucky7. Last updated.
Post not yet marked as solved
1 Reply
275 Views
In SwiftUI, it looks like when a window is not in view, its scene phase is set to ScenePhase.background after a minute or so. When every window of an app has been closed, the last window to be closed also has its phase set to ScenePhase.background. This makes it quite difficult to differentiate reopening an app from a window that was out of view being glanced at again. I have tried implementing a solution where I count opened/closed windows with an onChange that watches scenePhase, but unfortunately it looks like not every window closing gets detected (specifically, if there are two windows, one window is in the background, and the one active window is closed, its scenePhase onChange is never triggered). Is there a better way to differentiate between reopening an app and looking back at a suspended window that was never actually closed?
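For context, a minimal hedged sketch of the counting approach described above; the WindowTracker type, property names, and view are made up for illustration, and it has exactly the limitation mentioned: a window that is already in the background when it is closed may never report a phase change.

import SwiftUI
import Observation

// Hedged sketch: shared counter of windows currently in the .active phase.
@Observable
final class WindowTracker {
    var activeWindows = 0
}

struct TrackedContentView: View {
    @Environment(\.scenePhase) private var scenePhase
    let tracker: WindowTracker

    var body: some View {
        Text("Window is \(scenePhase == .active ? "active" : "inactive")")
            .onChange(of: scenePhase) { oldPhase, newPhase in
                // Count transitions into and out of the .active phase.
                if newPhase == .active && oldPhase != .active {
                    tracker.activeWindows += 1
                } else if oldPhase == .active && newPhase != .active {
                    tracker.activeWindows = max(0, tracker.activeWindows - 1)
                }
            }
    }
}

The idea would be to create one WindowTracker at the App level and pass it into every window's root view, though as noted above, a background window being closed can slip past this.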
Posted by ncitron. Last updated.