visionOS


Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

1,203 Posts
Post not yet marked as solved
0 Replies
40 Views
Hi team, I'm running into the following issue, for which I don't seem to find a good solution. I would like to be able to drag and drop items from a view into empty space to open a new window that displays detailed information about the item. I know something similar has been flagged already in this post (FB13545880: Support drag and drop to create a new window on visionOS). However, all that does is launch the app again with the SAME WindowGroup and display ContentView in a different state (showing a selected product, for example). What I would like to do instead is launch ONLY the new WindowGroup, without a new instance of ContentView. This is the closest I've got so far. It opens the desired window, but it also displays the ContentView WindowGroup:

WindowGroup {
    ContentView()
        .onContinueUserActivity(Activity.openWindow, perform: handleOpenDetail)
}

WindowGroup(id: "Detail View", for: Reminder.ID.self) { $reminderId in
    ReminderDetailView(reminderId: reminderId!)
}

.onDrag({
    let userActivity = NSUserActivity(activityType: Activity.openWindow)
    let localizedString = NSLocalizedString("DroppedReminterTitle", comment: "Activity title with reminder name")
    userActivity.title = String(format: localizedString, reminder.title)
    userActivity.targetContentIdentifier = "\(reminder.id)"
    try? userActivity.setTypedPayload(reminder.id)

    // When setting the identifier
    let encoder = JSONEncoder()
    if let jsonData = try? encoder.encode(reminder.persistentModelID),
       let jsonString = String(data: jsonData, encoding: .utf8) {
        userActivity.userInfo = ["id": jsonString]
    }
    return NSItemProvider(object: userActivity)
})

func handleOpenDetail(_ userActivity: NSUserActivity) {
    guard let idString = userActivity.userInfo?["id"] as? String else {
        print("Invalid or missing identifier in user activity")
        return
    }
    if let jsonData = idString.data(using: .utf8) {
        do {
            let decoder = JSONDecoder()
            let persistentID = try decoder.decode(PersistentIdentifier.self, from: jsonData)
            openWindow(id: "Detail View", value: persistentID)
        } catch {
            print("Failed to decode PersistentIdentifier: \(error)")
        }
    } else {
        print("Failed to convert string to data")
    }
}
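One direction that might be worth trying, offered as a sketch rather than a confirmed answer: SwiftUI's handlesExternalEvents modifiers steer which scene responds to an incoming user activity, so the detail WindowGroup could claim the drop-created activity while the main window opts out. The scene id and Activity.openWindow are the ones from the code above; whether this routing behaves the same way on visionOS is an assumption.

WindowGroup {
    ContentView()
        // Keep the main window from claiming externally delivered activities.
        .handlesExternalEvents(preferring: [], allowing: [])
}

WindowGroup(id: "Detail View", for: Reminder.ID.self) { $reminderId in
    ReminderDetailView(reminderId: reminderId!)
}
// Route user activities matching this identifier to this scene only.
.handlesExternalEvents(matching: [Activity.openWindow])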
Posted by
Post not yet marked as solved
0 Replies
37 Views
Is there a maximum distance at which an entity will register a TapGesture()? I'm unable to interact with entities farther than 8 or 9 meters away. The code below generates a series of entities progressively farther away; after about 8 meters, the entities no longer respond to tap gestures.

var body: some View {
    RealityView { content in
        for i in 0..<10 {
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
                immersiveContentEntity.position = SIMD3<Float>(x: Float(-i*i), y: 0.75, z: Float(-1*i)-3)
            }
        }
    }
    .gesture(tap)
}

var tap: some Gesture {
    TapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            AudioServicesPlaySystemSound(1057)
            print(value.entity.name)
        }
}
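There is no documented hard limit that I know of, but one diagnostic sketch (an assumption, not a confirmed fix) is to give each spawned entity an explicit input target and a collision shape that grows with distance inside the RealityView make closure, so the indirect pinch ray still has a reasonably sized target to hit far away:

for i in 0..<10 {
    if let entity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
        entity.position = SIMD3<Float>(x: Float(-i * i), y: 0.75, z: Float(-1 * i) - 3)
        entity.components.set(InputTargetComponent())
        // Grow the hit box with distance; the 0.3 m base size is an arbitrary assumption.
        let size = 0.3 * Float(i + 1)
        entity.components.set(CollisionComponent(shapes: [.generateBox(size: [size, size, size])]))
        content.add(entity)
    }
}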
Posted by
Post not yet marked as solved
0 Replies
45 Views
Hi! I was trying to port our SDK to visionOS. I was going through the documentation and saw this video: https://developer.apple.com/videos/play/wwdc2023/10089/ Is there any working code sample for it? The same question goes for the ARKit C API; I couldn't find any links. Thanks in advance. Sahil
Posted by
Post not yet marked as solved
0 Replies
40 Views
I am developing an immersive application featuring hands interacting with my virtual objects. When my hand passes through an object, the rendered hand looks like a blend of the hand color and the object's color, both semi-transparent. I wonder if it is possible to make my hand always render as "opaque", meaning the alpha of the rendered (video see-through) hand is always 1, while the object's alpha can vary depending on whether it is interacting with the hand. (I was thinking this kind of feature might be supported by a specific component, just like HoverEffectComponent, but I didn't find one.)
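One thing that might be worth trying, offered as a guess rather than a confirmed fix: the upperLimbVisibility scene modifier asks the system to keep rendering the passthrough hands on top of virtual content. The immersive space id and view name below are placeholders.

ImmersiveSpace(id: "HandInteractionSpace") {   // id is a placeholder
    ImmersiveView()
}
.upperLimbVisibility(.visible)   // request passthrough hands drawn over virtual content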
Posted by
Post not yet marked as solved
0 Replies
44 Views
Good day. Is there a way to test functionality between the Apple Pencil Pro and Apple Vision Pro? I'm working on an idea that would require a tool like the Pencil as an input device. Will there be an SDK for this kind of connectivity?
Posted by
Post not yet marked as solved
0 Replies
38 Views
I have been trying to replicate the entity transform functionality present in the magnificent app Museum That Never Was (https://apps.apple.com/us/app/the-museum-that-never-was/id6477230794): it allows you to simultaneously rotate, magnify and translate the entity, using gestures with both hands (as opposed to the normal DragGesture(), which is a one-handed gesture). I am able to rotate and magnify simultaneously, but translating via drag does not activate while doing two-handed gestures. Any ideas? My setup is something like this:

Gestures:

var drag: some Gesture {
    DragGesture()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureTranslation = value.convert(value.translation3D, from: .local, to: .scene)
        }
        .onEnded { value in
            itemTranslation += gestureTranslation
            gestureTranslation = .init()
        }
}

var rotate: some Gesture {
    RotateGesture3D()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureRotation = simd_quatf(value.rotation.quaternion).inverse
        }
        .onEnded { value in
            itemRotation = gestureRotation * itemRotation
            gestureRotation = .identity
        }
}

var magnify: some Gesture {
    MagnifyGesture()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureScale = Float(value.magnification)
        }
        .onEnded { value in
            itemScale *= gestureScale
            gestureScale = 1.0
        }
}

RealityView modifiers:

.simultaneousGesture(drag)
.simultaneousGesture(rotate)
.simultaneousGesture(magnify)

RealityView update block:

entity.position = itemTranslation + gestureTranslation + exhibitDefaultPosition
entity.orientation = gestureRotation * itemRotation
entity.scaleAll(itemScale * gestureScale)
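One approach that might help, sketched under assumptions rather than taken from the app in question: fold the three gestures into a single gesture with .simultaneously(with:), so one recognizer reports drag, magnify and rotate values together instead of three competing recognizers. The state variables and MyComponent are the ones from the setup above.

var transformGesture: some Gesture {
    DragGesture()
        .simultaneously(with: MagnifyGesture())
        .simultaneously(with: RotateGesture3D())
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            // Drag component of the combined value (nil until a drag is recognized).
            if let drag = value.gestureValue.first?.first {
                gestureTranslation = value.convert(drag.translation3D, from: .local, to: .scene)
            }
            // Magnify component.
            if let magnify = value.gestureValue.first?.second {
                gestureScale = Float(magnify.magnification)
            }
            // Rotate component.
            if let rotate = value.gestureValue.second {
                gestureRotation = simd_quatf(rotate.rotation.quaternion).inverse
            }
        }
        .onEnded { _ in
            itemTranslation += gestureTranslation
            itemRotation = gestureRotation * itemRotation
            itemScale *= gestureScale
            gestureTranslation = .init()
            gestureRotation = .identity
            gestureScale = 1.0
        }
}

This would be attached with a single .gesture(transformGesture) instead of the three .simultaneousGesture calls.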
Posted by
Post not yet marked as solved
0 Replies
68 Views
[visionOS Question] I’m using the hierarchy of an entity loaded from a Reality Composer Pro project to drive the content of a NavigationSplitView. I’d like to render any of the child entities in a RealityView in the detail pane when a user selects the child entity's name from the list in the NavigationSplitView. I haven’t been able to render the entity in the detail view yet. I have tried updating the position/scaling to no avail. I also tried adding an AnchorEntity and setting the child entity's parent to it. I’m starting to suspect that the way to do it is to create a scene for each individual child entity in the Reality Composer Pro project. I’d prefer to avoid this approach, as I want a data-driven solution. Is there a way to implement my idea in RealityKit in code?
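A sketch of one data-driven possibility, based on assumptions rather than a known-good recipe: clone the selected child out of the loaded hierarchy and show only the clone in the detail pane's RealityView. EntityDetailView is a made-up name.

import SwiftUI
import RealityKit

struct EntityDetailView: View {
    let selectedEntity: Entity   // the child picked from the NavigationSplitView list

    var body: some View {
        RealityView { content in
            // Clone so the original hierarchy stays untouched, then re-center the copy.
            let copy = selectedEntity.clone(recursive: true)
            copy.position = .zero
            content.add(copy)
        }
    }
}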
Posted by
Post not yet marked as solved
1 Reply
76 Views
I'm trying to use a RealityView with attachments and this error is being thrown. Am I using the RealityView wrong? I've seen other people use a RealityView with attachments in visionOS... Please let this be a bug...

RealityView { content, attachments in
    contentEntity = ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.5))
    content.add(contentEntity!)
} attachments: {
    Text("Hello!")
}
.task {
    await loadImage()
    await runSession()
    await processImageTrackingUpdates()
}
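In case it helps, here is a sketch of how the attachments builder is typically used, assuming the issue is that the builder expects Attachment(id:) values and the resulting entity has to be added to the content explicitly; the "label" id is made up.

RealityView { content, attachments in
    let plane = ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.5))
    content.add(plane)

    if let label = attachments.entity(for: "label") {
        label.position = [0, 0.3, 0]   // hover the text above the plane
        content.add(label)
    }
} attachments: {
    Attachment(id: "label") {
        Text("Hello!")
    }
}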
Posted by
Post not yet marked as solved
2 Replies
92 Views
I am trying to verify my understanding of adding a HoverEffectComponent on entities inside a scene in RealityViews. Inside RealityComposer Pro, I have added the required Input Target and Collision components to one entity inside a node with multiple siblings, and left any options as defaults. They appear to create appropriately sized bounding boxes etc for these objects. In my RealityView I programmatically add the HoverEffectComponents to the entities as I don't see them in RCP. On device, this appears to "work" in the sense that when I gaze at the entity, it lights up - but so does every other entity in the scene - even those without Input Target and Collision components attached. Because the documentation on the components is sparse I am unsure if this is behavior as designed (e.g. all entities in that node are activated) or a bug or something in between. Has anyone encountered this and is there an appropriate way of setting these relationships up? Thanks
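For comparison, a sketch of scoping the hover effect, on the assumption that attaching it only to entities that already carry collision and input-target components is the intended setup; the traversal helper below is not API, just an illustration.

// Attach HoverEffectComponent only where Input Target + Collision are present.
func addHoverEffects(under root: Entity) {
    for child in root.children {
        if child.components.has(CollisionComponent.self),
           child.components.has(InputTargetComponent.self) {
            child.components.set(HoverEffectComponent())
        }
        addHoverEffects(under: child)   // recurse through the hierarchy
    }
}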
Posted by
Post not yet marked as solved
0 Replies
56 Views
Hi, I just generated an HDR10 MV-HEVC file. The mediainfo is below:

Color range: Limited
Color primaries: BT.2020
Transfer characteristics: PQ
Matrix coefficients: BT.2020 non-constant
Codec configuration box: hvcC+lhvC

Then I generated the segment files with the command below:

mediafilesegmenter --iso-fragmented -t 4 -f av_1 av_new_1.mov

and uploaded the segment files and prog_index.m3u8 to a web server. I find that I cannot play the HLS stream in Safari; the URL is http://ip/vod/prog_index.m3u8. I checked that if I remove the "Transfer characteristics: PQ" tag when generating the MV-HEVC file, run the same mediafilesegmenter command, and upload the files to the web server, that version of the HLS stream does play in Safari. Is there any way to play HLS PQ video in Safari? Thanks.
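One thing worth checking, as an assumption rather than a verified fix: Apple's HLS authoring rules expect HDR variants to be declared in a multivariant playlist with a VIDEO-RANGE=PQ attribute, rather than pointing Safari at the media playlist directly. A minimal illustrative playlist might look like this (the BANDWIDTH, CODECS, and RESOLUTION values are placeholders, not taken from the stream above):

#EXTM3U
#EXT-X-VERSION:7
#EXT-X-STREAM-INF:BANDWIDTH=20000000,CODECS="hvc1.2.4.L153.B0",VIDEO-RANGE=PQ,RESOLUTION=3840x2160
prog_index.m3u8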
Posted by
Post not yet marked as solved
1 Reply
118 Views
Hi all, I need some help debugging some code I wrote. Just as a preface, I'm an extremely new VR/AR developer and also very new to using ARKit + RealityKit, so please bear with me :) I'm just trying to make a simple program that will track an image and place an entity on it. The image is tracked correctly, but the moment the program recognizes the image and tries to place an entity on it, the program crashes. Here’s my code:

VIEWMODEL CODE:

@Observable
class ImageTrackingModel {
    var session = ARKitSession() // ARSession used to manage AR content
    var imageAnchors = [UUID: Bool]() // Tracks whether specific anchors have been processed
    var entityMap = [UUID: ModelEntity]() // Maps anchors to their corresponding ModelEntity
    var rootEntity = Entity() // Root entity to which all other entities are added

    let imageInfo = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "referancePaper")
    )

    init() {
        setupImageTracking()
    }

    func setupImageTracking() {
        if ImageTrackingProvider.isSupported {
            Task {
                try await session.run([imageInfo])
                for await update in imageInfo.anchorUpdates {
                    updateImage(update.anchor)
                }
            }
        }
    }

    func updateImage(_ anchor: ImageAnchor) {
        let entity = ModelEntity(mesh: .generateSphere(radius: 0.05)) // THIS IS WHERE THE CODE CRASHES

        if imageAnchors[anchor.id] == nil {
            rootEntity.addChild(entity)
            imageAnchors[anchor.id] = true
            print("Added new entity for anchor \(anchor.id)")
        }

        if anchor.isTracked {
            entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
            print("Updated transform for anchor \(anchor.id)")
        }
    }
}

APP:

@main
struct MyApp: App {
    @State var session = ARKitSession()
    @State var immersionState: ImmersionStyle = .mixed
    private var viewModel = ImageTrackingModel()

    var body: some Scene {
        WindowGroup {
            ModeSelectView()
        }
        ImmersiveSpace(id: "appSpace") {
            ModeSelectView()
        }
        .immersionStyle(selection: $immersionState, in: .mixed)
    }
}

Content View:

RealityView { content in
    Task {
        viewModel.setupImageTracking()
    }
}
// I'm seriously so clueless on how to use this view
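A guess at the crash, with a sketch of something to try; this is an assumption, not a confirmed diagnosis: the anchorUpdates loop runs off the main actor, while creating entities and mutating the RealityKit hierarchy generally belongs on the main actor. Hopping to the main actor for the entity work would look roughly like this (the names reuse the ones in the view model above):

func updateImage(_ anchor: ImageAnchor) async {
    await MainActor.run {
        if imageAnchors[anchor.id] == nil {
            // Create and attach the entity on the main actor.
            let entity = ModelEntity(mesh: .generateSphere(radius: 0.05))
            entityMap[anchor.id] = entity
            rootEntity.addChild(entity)
            imageAnchors[anchor.id] = true
        }
        if anchor.isTracked {
            entityMap[anchor.id]?.transform = Transform(matrix: anchor.originFromAnchorTransform)
        }
    }
}

The loop in setupImageTracking would then call await updateImage(update.anchor).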
Posted by
Post not yet marked as solved
1 Reply
125 Views
I was executing some code from Incorporating real-world surroundings in an immersive experience:

func processReconstructionUpdates() async {
    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor
        guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }
        switch update.event {
        case .added:
            let entity = ModelEntity()
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
            entity.components.set(InputTargetComponent())
            entity.physicsBody = PhysicsBodyComponent(mode: .static)
            meshEntities[meshAnchor.id] = entity
            contentEntity.addChild(entity)
        case .updated:
            guard let entity = meshEntities[meshAnchor.id] else { continue }
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision?.shapes = [shape]
        case .removed:
            meshEntities[meshAnchor.id]?.removeFromParent()
            meshEntities.removeValue(forKey: meshAnchor.id)
        }
    }
}

I would like to toggle the Occlusion mesh available in the dev tools shown below, but programmatically: I'd like a button that activates and deactivates it. I looked at .showSceneUnderstanding, but it does not seem to work in visionOS; I get the error "'ARView' is unavailable in visionOS" when I try what is shown in Visualizing and Interacting with a Reconstructed Scene.
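There does not appear to be a public switch equivalent to ARView's .showSceneUnderstanding on visionOS. One possible workaround, sketched under assumptions (the mesh-conversion helper, the geometries cache the update loop would have to maintain, and the green debug material are all illustrative, not API), is to give each reconstruction entity a visible ModelComponent built from the anchor's geometry and toggle that component from a button:

import ARKit
import RealityKit

// Build a RealityKit mesh from a MeshAnchor's geometry (assumes float3 positions and
// 32-bit face indices, which is an assumption about the reconstruction data layout).
func debugMeshResource(from geometry: MeshAnchor.Geometry) throws -> MeshResource {
    let vertices = geometry.vertices
    let positions = (0..<vertices.count).map { index -> SIMD3<Float> in
        let pointer = vertices.buffer.contents()
            .advanced(by: vertices.offset + vertices.stride * index)
        return SIMD3<Float>(pointer.loadUnaligned(as: Float.self),
                            pointer.loadUnaligned(fromByteOffset: 4, as: Float.self),
                            pointer.loadUnaligned(fromByteOffset: 8, as: Float.self))
    }

    let faces = geometry.faces
    let indices = (0..<faces.count * 3).map { index -> UInt32 in
        faces.buffer.contents()
            .advanced(by: index * faces.bytesPerIndex)
            .loadUnaligned(as: UInt32.self)
    }

    var descriptor = MeshDescriptor(name: "debugMesh")
    descriptor.positions = MeshBuffer(positions)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}

// Toggle a visible material on the already-created reconstruction entities.
@MainActor
func setOcclusionMeshVisible(_ visible: Bool,
                             meshEntities: [UUID: ModelEntity],
                             geometries: [UUID: MeshAnchor.Geometry]) {
    for (id, entity) in meshEntities {
        if visible, let geometry = geometries[id],
           let mesh = try? debugMeshResource(from: geometry) {
            entity.components.set(ModelComponent(mesh: mesh,
                                                 materials: [UnlitMaterial(color: .green)]))
        } else {
            entity.components.remove(ModelComponent.self)
        }
    }
}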
Posted by
Post not yet marked as solved
1 Reply
169 Views
I just bought a Vision Pro to build and run my app on it, but I'm encountering this error from Xcode: Developer Mode is enabled. The computer is paired. The device is paired and connected to my Mac via the Developer Strap. In the "VPN & Device Management" setting it says to navigate to, there is no "Developer App certificate". Others have suggested clearing the ModuleCache in DerivedData, but that's also been fruitless. I've cleaned the build multiple times and restarted both devices and Xcode. I have no idea what else I can possibly do to resolve this. Does anyone have any other ideas?
Posted by
Post not yet marked as solved
2 Replies
106 Views
I'm trying to control the LOD of textures in an app for Vision Pro. With the default image node in Reality Composer Pro the UVs are correct, but the LOD is not what I want; I would like to have control over it. I see there is a node called "RealityKitTexture2DLOD", but as soon as I try to use that one the UVs are all messed up. Am I missing something? Do we need to do something specific to use this node? I tried to use the "Place 2D" and "UsdTransform2d" nodes but could not get the texture to align. Any help appreciated.
Posted by gl5
Post not yet marked as solved
4 Replies
150 Views
We have a random issue where, when ARKitSession.run() is called, monitorSessionEvents() receives .paused and it never transitions to .running. If we exit the Immersive Space and call ARKitSession.run() again, it works fine. Unfortunately this is very difficult to manage in the flow of our app.
Posted by
Post marked as solved
1 Reply
102 Views
Context: https://developer.apple.com/forums/thread/751036

I found some sample code that does the process I described in my other post for ModelEntity here: https://www.youtube.com/watch?v=TqZ72kVle8A&ab_channel=ZackZack

At runtime I'm loading:

1. An immersive scene in a RealityView from Reality Composer Pro with the robot model baked into the file (not remote - asset in project)
2. A Model3D view that pulls in the robot model from the web URL
3. A RemoteObjectView (RealityView) which downloads the model to temp storage, creates a ModelEntity, and adds it to the content of the RealityView

Method 1 above is fine, but Methods 2 + 3 load the model with a pure black texture for some reason. The ideal state is that Methods 2 + 3 look like the Method 1 result (see screenshot). Am I doing something wrong? E.g. should I not use multiple RealityViews at once?

Screenshot

Code:

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                // Add an ImageBasedLight for the immersive content
                guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
                let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
                immersiveContentEntity.components.set(iblComponent)
                immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))

                // Put skybox here. See example in World project available at
                // https://developer.apple.com/
            }
        }

        Model3D(url: URL(string: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")!)

        SkyboxView()

        // RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/retrotv/tv_retro.usdz")
        RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")
    }
}
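One guess about the black textures, sketched as an assumption rather than a confirmed cause: entities loaded at runtime outside the Reality Composer Pro scene do not get the ImageBasedLight attached to immersiveContentEntity, so they may render unlit. Attaching the same IBL to the downloaded entity would look roughly like this (the resource name and exponent come from the code above):

func applyImageBasedLight(to entity: Entity) async {
    guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
    let ibl = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
    entity.components.set(ibl)
    // Point the receiver at the entity that carries the light component.
    entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))
}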
Posted by
Post not yet marked as solved
0 Replies
181 Views
When the dinosaur protrudes from the portal in the Encounter Dinosaurs app, it appears to be lit by the real room lighting, just like any other RealityKit content is by default. When the dinosaur is inside the portal, it appears to be lit by the virtual environment, and the two light sources seem to be smoothly blended at the plane of the portal. How is this done? ImageBasedLightReceiverComponent allows the IBL to be changed on a per-entity basis, but the actual lighting calculation in the shader appears to be a black box, and I have not seen a way to specify which IBL texture is used on a per-fragment basis.
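For reference, the per-world lighting split (though not the cross-fade at the portal plane) can be sketched with WorldComponent and PortalComponent; this is an assumption about a possible setup, not a claim about how Encounter Dinosaurs actually does it, and the environment resource and entity names are placeholders.

func makePortalScene(showing dinosaur: Entity) async -> (world: Entity, portal: Entity) {
    // Content parented under a WorldComponent entity is only visible through a portal
    // and can carry its own image-based light, separate from the room lighting.
    let world = Entity()
    world.components.set(WorldComponent())
    if let environment = try? await EnvironmentResource(named: "PortalEnvironment") { // placeholder name
        world.components.set(ImageBasedLightComponent(source: .single(environment), intensityExponent: 1.0))
        world.components.set(ImageBasedLightReceiverComponent(imageBasedLight: world))
    }
    world.addChild(dinosaur)

    // The portal surface that looks into the world.
    let portal = Entity()
    portal.components.set(ModelComponent(mesh: .generatePlane(width: 1, height: 1, cornerRadius: 0.5),
                                         materials: [PortalMaterial()]))
    portal.components.set(PortalComponent(target: world))
    return (world, portal)
}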
Posted by
Post not yet marked as solved
1 Reply
123 Views
I can't find a way to download a USDZ at runtime and load it into a RealityView with RealityKit. As an example, imagine downloading one of the 3D models from this Apple Developer page: https://developer.apple.com/augmented-reality/quick-look/

I think the process should be:

1. Download the file from the web and store it in temporary storage with the FileManager API.
2. Load the entity from the temp file location using Entity.init (I believe Entity.load is being deprecated in Swift 6 - it throws up a compiler warning) - https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file
3. Add the entity to content in the RealityView.

I'm doing this at runtime on visionOS in the simulator. I can get this to work with textures using slightly different APIs, so I think the logic is sound, but in that case I'm creating the entity with a mesh and material. Not sure if file size has an effect. Is there any official guidance or a code sample for this use case?
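No official sample seems to cover exactly this, but here is a sketch of the three steps above under a few assumptions: the URL is one of the Quick Look models mentioned in this thread, error handling is minimal, and the async Entity(contentsOf:) initializer is the one the linked article appears to refer to.

import Foundation
import SwiftUI
import RealityKit

func loadRemoteEntity(from remoteURL: URL) async throws -> Entity {
    // 1. Download to a temporary file. URLSession already gives a temp location, but it is
    //    short-lived, so move it somewhere stable and give it a .usdz extension so
    //    RealityKit recognizes the format.
    let (tempURL, _) = try await URLSession.shared.download(from: remoteURL)
    let destination = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("usdz")
    try FileManager.default.moveItem(at: tempURL, to: destination)

    // 2. Load the entity from the local file.
    return try await Entity(contentsOf: destination)
}

struct RemoteModelView: View {
    let url = URL(string: "https://developer.apple.com/augmented-reality/quick-look/models/retrotv/tv_retro.usdz")!

    var body: some View {
        RealityView { content in
            // 3. Add the downloaded entity to the RealityView's content.
            if let entity = try? await loadRemoteEntity(from: url) {
                content.add(entity)
            }
        }
    }
}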
Posted by