RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.

RealityKit Documentation

Posts under RealityKit tag

429 Posts
Post not yet marked as solved
4 Replies
1.1k Views
What is the most efficient way to use a MTLTexture (created procedurally at run-time) as a RealityKit TextureResource? I update the MTLTexture per-frame using regular Metal rendering, so it’s not something I can do offline. Is there a way to wrap it without doing a copy? A specific example would be great. Thank you!
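One approach that often comes up for per-frame texture updates is TextureResource.DrawableQueue, which lets RealityKit hand you Metal drawables to render into instead of copying a finished MTLTexture each frame. A minimal sketch, assuming an existing TextureResource and treating the pixel format, size, and usage flags as placeholders (not a definitive answer on whether this avoids all copies):

import Metal
import RealityKit

// Sketch: attach a DrawableQueue to an existing TextureResource so its contents
// can be updated from Metal every frame. Format, size, and usage are placeholders.
func attachDrawableQueue(to textureResource: TextureResource) throws -> TextureResource.DrawableQueue {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: 1024,
        height: 1024,
        usage: [.renderTarget, .shaderRead],
        mipmapsMode: .none)
    let queue = try TextureResource.DrawableQueue(descriptor)
    textureResource.replace(withDrawables: queue)
    return queue
}

// Per frame: acquire a drawable, encode your Metal pass into drawable.texture, then present.
func renderFrame(into queue: TextureResource.DrawableQueue) {
    guard let drawable = try? queue.nextDrawable() else { return }
    // ... encode render/compute commands that write into drawable.texture ...
    drawable.present()
}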
Posted Last updated
.
Post not yet marked as solved
0 Replies
48 Views
I'm trying to get a similar experience to Apple TV's immersive videos, but I cannot figure out how to present the AVPlayerViewController controls detached from the video. I am able to use the same AVPlayer in a window and projected on a VideoMaterial, but I can't figure out how to just present the controls, while displaying the video only in the 3D entity, without having a 2D projection in any view. Is this even possible?
Posted Last updated
.
Post marked as solved
1 Reply
77 Views
Hi all, I am trying to use ARWorldTrackingConfiguration to find any faces in my scene. However, when I query the scene, using the same type of query one would use with ARFaceTrackingConfiguration, I don't get an Entity back. Here's my code:

var entityCollection: Set<Entity> = []

let faceEntity = scene.performQuery(query1).first {
    $0.components[SceneUnderstandingComponent.self]?.entityType == .face
}

Every single time, faceEntity comes back empty. Any help/pointers would be appreciated!
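For context, query1 is not shown in the snippet above; a scene-understanding query is typically constructed along these lines (a sketch, not the poster's original code; scene is assumed to be arView.scene):

import RealityKit

// Sketch: a query for entities that carry a SceneUnderstandingComponent.
let query1 = EntityQuery(where: .has(SceneUnderstandingComponent.self))
let candidates = scene.performQuery(query1)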
Posted
by rakin4321.
Last updated
.
Post not yet marked as solved
1 Reply
146 Views
I want to show the progress of a certain part of the game using an entity that looks like a "pie chart": basically a cylinder with a cut-out, where the entity fills in as the progress value changes from 0 to 100. Is there a way to create this kind of model entity? I know there are ways to animate entities and warp them between meshes, but I was wondering if somebody knows how to achieve this in the simplest way possible. Maybe some kind of custom shader that just changes how the material is rendered? I don't need a physics body, just the visual. I know how to do this in UIKit and the classic 2D Apple UI frameworks, but working with model entities it gets a bit tricky for me. Here is an example of how it would look; the examples are in 2D, but you can imagine them being 3D cylinders with a cut-out. Thank you!
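A minimal sketch of one possible approach, regenerating a flat pie-slice mesh with MeshDescriptor whenever the progress value changes; the radius, segment count, and the idea of swapping the mesh in place are assumptions, and winding/normals may need adjusting for your setup:

import Foundation
import RealityKit

// Sketch: build a flat pie-slice mesh (a triangle fan in the XZ plane) for a 0...1 progress value.
func pieMesh(progress: Float, radius: Float = 0.1, segments: Int = 64) throws -> MeshResource {
    let clamped = max(0, min(progress, 1))
    let angle = clamped * 2 * Float.pi
    let steps = max(1, Int(Float(segments) * clamped))
    var positions: [SIMD3<Float>] = [.zero]            // center vertex of the fan
    for i in 0...steps {
        let a = angle * Float(i) / Float(steps)
        positions.append(SIMD3<Float>(cosf(a) * radius, 0, sinf(a) * radius))
    }
    var indices: [UInt32] = []
    for i in 1..<(positions.count - 1) {
        indices.append(contentsOf: [0, UInt32(i), UInt32(i + 1)])
    }
    var descriptor = MeshDescriptor(name: "pie")
    descriptor.positions = MeshBuffer(positions)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}

// Usage sketch: regenerate and swap the mesh on a ModelEntity as progress changes.
// progressEntity.model?.mesh = try pieMesh(progress: 0.4)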
Posted
by darescore.
Last updated
.
Post not yet marked as solved
0 Replies
111 Views
This is easy to replicate with the ObjectPlacement sample app. Just run the app, position the View behind you, and enjoy building a tower with the blocks. Eventually the app will enter the background and exit the Immersive Space. This is actually a big problem: while you can ignore the change of scenePhase to .background, doing so removes the only chance you have of knowing that the user pinched the circle X button to close the View. You can run the Hello World app, enter the Immersive Space, and then close the View; the Immersive Space stays up and you can't get the View back. So you need to close the Immersive Space when the user closes the View (as ObjectPlacement does). But if you do that and the user simply doesn't look at the View, it exits the Immersive Space. Either option is a bad user experience. Being able to hide the View while in the Immersive Space (such that the user can't close it) would be a good option; unfortunately, while you can hide all of its content, the bar and circle X button remain visible. A second option would be for the View not to go into the background when the user doesn't look at it, at least while they are in an Immersive Space.
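For reference, the pattern the post refers to in ObjectPlacement (tearing down the Immersive Space when the scene phase leaves .active) looks roughly like this; a sketch of that pattern only, not a fix for the focus-loss trade-off described above:

import SwiftUI

struct MainWindowView: View {
    @Environment(\.scenePhase) private var scenePhase
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    var body: some View {
        Text("Main window")
            .onChange(of: scenePhase) { _, newPhase in
                // Sketch: dismiss the Immersive Space when the window goes to the background,
                // e.g. because the user pinched the circle X button to close the View.
                if newPhase == .background {
                    Task { await dismissImmersiveSpace() }
                }
            }
    }
}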
Posted Last updated
.
Post not yet marked as solved
0 Replies
80 Views
Hello. I am trying to load my own image-based lighting file in a visionOS RealityView. I used the code you get when creating a new project from scratch with the immersive space set to Full. With the sample file Apple provides, it works. But when I put my own image, in PNG, HEIC or EXR format, in the same location as the example file, it doesn't load, and the error states: Failed to find resource with name "SkyboxUpscaled2" in bundle. In this image you can see the file "ImageBasedLight", which is the one that comes with the project, and the file "SkyboxUpscaled2", which is my own in .exr format.

if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
    content.add(immersiveContentEntity)
    do {
        let resource = try await EnvironmentResource(named: "SkyboxUpscaled2")
        let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
        immersiveContentEntity.components.set(iblComponent)
        immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))
    } catch {
        print(error.localizedDescription)
    }
}

Does anyone have an idea why the file is not found? Thanks in advance!
Posted
by Flex05.
Last updated
.
Post not yet marked as solved
1 Reply
105 Views
I am planning to build a VisionOS app and need to get access to the persona (avatar). I have not found any information regarding integration possibilities in the docs. Does anyone know if and how I can access the user's persona? Other applications like Zoom and Teams for VisionOS use the persona, so I think it is basically possible. Apparently (if it's not fake) there is also a chess game with integrated persona: https://www.youtube.com/watch?v=mMzK8C3t14I Any help is very welcome, thanks.
Posted
by vill33.
Last updated
.
Post not yet marked as solved
0 Replies
93 Views
Transparency in RealityKit is not rendered properly from specific ordinal axes. It seems like a depth-sorting issue where some transparent surfaces are rejected when they should not be. Some view directions relative to specific ordinal axes are fine; I have not narrowed down which specific axis is the problem. This is true across particle systems and/or meshes, and it is very easy to replicate using multiple transparent meshes or particle systems. In the first gif you can see the problem in multiple instances: the fire and snow particles are sorted behind the terrain, which has transparency since it is a procedural blend of grass, rock, and ice, but they are correctly sorted in front of opaque materials such as the rocks and wood. In the second gif there are two back-to-back grid meshes (since double-sided rendering is not supported) with a custom surface shader that animates the mesh in a wave and also applies transparency. In the distance the transparency is rendered/overlapped correctly, but as the overlap approaches the screen (and crosses an ordinal axis), the transparent portion of the surface renders black, when the green of the mesh behind should be rendered. This is a blocking problem for the development of this demo.
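One workaround sometimes suggested for transparent-over-transparent ordering is to give the offending models an explicit sort group and order; a sketch under the assumption that ModelSortGroupComponent is available on your OS version and that the entity names below stand in for the terrain and wave meshes (whether this helps with particle systems is untested here):

import RealityKit

// Sketch: force an explicit draw order between two transparent meshes.
// Assumption: lower order values draw earlier within the same group.
let sortGroup = ModelSortGroup(depthPass: nil)
terrainEntity.components.set(ModelSortGroupComponent(group: sortGroup, order: 0))
waveMeshEntity.components.set(ModelSortGroupComponent(group: sortGroup, order: 1))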
Posted
by rngd.
Last updated
.
Post marked as solved
2 Replies
132 Views
I have a simple visionOS app that uses a RealityView to map floors and ceilings using PlaneDetectionProvider and PlaneAnchors. I can look at a location on the floor or ceiling, tap, and place an object at that location (I am currently placing a small cube with X-Y-Z axes sticking out at the location). The tap locations are consistently about 0.35m off along the horizontal plane (never off vertically) from where I was looking. Has anyone else run into the issue of a spatial tap gesture resulting in a location offset from where they are looking? If I move to different locations, the offset is the same in real space, so the offset doesn't appear to be associated with the orientation of the Apple Vision Pro (e.g. it isn't off a little to the left of the headset of where I was looking). Attached is an image showing this. I focused on the corner of the carpet (yellow circle), tapped my fingers to trigger a tap gesture in RealityView, extracted the location, and placed a purple cube at that location. I stood in 4 different locations (where the orange squares are), looked at the corner of the rug (yellow circle) and tapped. All 4 purple cubes are placed at about the same location, ~0.35m away from the look location. Here is how I captured the tap gesture and extracted the 3D location:

var myTapGesture: some Gesture {
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { event in
            let location3D = event.convert(event.location3D, from: .global, to: .scene)
            let entity = event.entity
            model.handleTap(location: location3D, entity: entity)
        }
}

Here is how I set the position of the purple cube:

func handleTap(location: SIMD3<Float>, entity: Entity) {
    let positionEntity = Entity()
    positionEntity.setPosition(location, relativeTo: nil)
    ...
}
Posted
by Todd2.
Last updated
.
Post marked as solved
2 Replies
136 Views
Hello, I tried to build something with scene reconstruction, but I want to add occlusion on the surfaces. How can I do that? I tried to create an entity and then apply an OcclusionMaterial, but I receive a ShapeResource, while I need a MeshResource to create a mesh for the entity and then apply the material. Any suggestions?
Posted Last updated
.
Post not yet marked as solved
1 Reply
135 Views
Hello, I have started using the DragRotationModifier from the Hello World demo project by Apple. I have run into a bug that I can't seem to figure out for the life of me, where everything seems to work fine for about 3-5 seconds of movement before it starts rapidly spinning for some reason. I took a video, but it looks like I am unable to post any link to outside sources like imgur or youtube, so I'll try to describe it as best I can: basically I can spin the sample USDZ Nike Airforce from the Apple sample objects perfectly, but after about 3-5 seconds it seems to rapidly snap between different other rotations and the rotation where the "cursor" is. A couple of additional notes: this only happens when the finger pinch/drag gesture is interacting with the object, and this spin only affects the Yaw rotation axis of the object. I created an "Imported Model Entity" wrapper that does some additional stuff when importing a USDZ model, similar to the Hello World demo. Then, within a RealityView, I create an instance of this ImportedModelEntity and attach the Drag Rotation Modifier to the view like this:

RealityView { content in
    let modelEntity = await ImportedModelEntity(configuration: modelViewModel.modelConfiguration)
    content.add(modelEntity)
    self.modelEntity = modelEntity
    content.add(BoundsVisualizer(bounds: [0.6, 0.6, 0.6]))

    // Scale object to half of the size of Volume view
    let bounds = content.convert(geometry.frame(in: .local), from: .local, to: content)
    let minExtent = bounds.extents.min()
    modelViewModel.modelConfiguration.scale = minExtent
} update: { content in
    modelEntity?.update(configuration: modelViewModel.modelConfiguration)
}
.if(modelEntity != nil) { view in
    view.dragRotation(
        pitchLimit: .degrees(90),
        targetEntity: modelEntity!,
        sensitivity: 10,
        axRotateClockwise: axRotateClockwise,
        axRotateCounterClockwise: axRotateCounterClockwise)
}

For reference, here is my ImportedModelEntity:

import Foundation
import RealityKit

class ImportedModelEntity: Entity {
    // MARK: - Sub-entities
    private var model: ModelEntity = ModelEntity()
    private let rotator = Entity()

    // MARK: - Internal state

    // MARK: - Initializers
    @MainActor required init() {
        super.init()
    }

    init(configuration: Configuration) async {
        super.init()
        if configuration.modelURL == nil {
            fatalError("Provided modelURL is NOT valid!!")
        }

        // Load the custom model on main thread
        // DispatchQueue.main.async {
        do {
            let input: ModelEntity? = try await ModelEntity(contentsOf: configuration.modelURL!)
            guard let model = input else { return }
            self.model = model

            // let material = SimpleMaterial(color: .green, isMetallic: false)
            // model.model?.materials = [material]

            // Add input components
            model.components.set(InputTargetComponent())
            model.generateCollisionShapes(recursive: true)

            // Add Hover Effect
            model.components.set(HoverEffectComponent())
            // self.model.components.set(GroundingShadowComponent(castsShadow: configuration.castsShadow))

            // Add Rotator
            self.addChild(rotator)

            // Add Model to rotator
            rotator.addChild(model)
        } catch is CancellationError {
            // The entity initializer can throw this error if an enclosing
            // RealityView disappears before the model loads. Exit gracefully.
            return
        } catch let error {
            print("Failed to load model: \(error)")
        }
        // }

        update(configuration: configuration)
    }

    // MARK: - Update Handlers
    func update(configuration: Configuration) {
        rotator.orientation = configuration.rotation
        move(to: Transform(
            scale: SIMD3(repeating: configuration.scale),
            rotation: orientation,
            translation: configuration.position),
            relativeTo: parent)
    }
}

Any help is greatly appreciated!
Posted Last updated
.
Post not yet marked as solved
1 Reply
157 Views
I'm in Europe, and Vision Pro isn't available here yet. I'm a developer/designer, and I want to find out whether it's worthwhile to try and sell the idea of investing in a bunch of Vision Pro devices, as well as in app development for it, to the people overseeing the budget for a project I'm part of. The project is broadly in an "industry" where several constraints apply, most of them around security and safety. So far, all the Vision Pro discussion I've seen is about consumer-level media consumption and tippy-tappy-app-stuff for a broad user base. Now, the hardware, the OS features and the SDK definitely look like professional niche use cases are possible. But some features, such as SharePlay, will for example require an Apple ID and internet connection (I guess?). This, for example, is a strict nope in my case, for security reasons. I'd like to start a discussion of what works and what doesn't work, outside the realm of watching Disney+ in your condo.

Potentially, this device has several marks ticked with regards to incredibly useful features in general: very good indoor tracking, pass-through with good fidelity, and hands-free operation. The first point especially is kind of a really big deal, and for me, the biggest open question. I have multiple make-or-break questions with regard to this. (These features are not available in the simulator.)

For the sake of argument, let's say the app I'm building is Cave Mapper: it's meant to be used by archeologists inside a cave system where we have no internet, no reliable compass, and no GPS. We have a local network that we can carry around, though. We can also bring lights. One feature of the app is to build out a catalog of cave paintings and store them in a database. The archeologist wants to walk around, look at a cave painting, and tap on it to capture its position relative to the cave entrance. The next day, another archeologist may work inside the same cave, and they would want synchronised access to the same spatial data from the day before. For that:

How good, precise, reliable and stable is the indoor tracking really? Hyped reviewers said it's rock solid, others have said it can drift.
How well do the persistent WorldAnchor objects work? How well do they work when you're in a concrete bunker or a cave without GPS?
Can I somehow share a world anchor with another user? Is it possible to sync the ARKit map that one device has built with another device?
Other showstoppers? In case you cannot share your mapped world or world anchors: how solid is the tracking of an ImageAnchor (which we could physically nail to the cave entrance to use as a shared positional/rotational reference)?
Other, practical stuff: can you wear Vision Pro with a safety helmet? Does it work with gloves?
Posted
by jpenca.
Last updated
.
Post not yet marked as solved
3 Replies
796 Views
In my project, I want to use the new ShaderGraphMaterial to do stereoscopic rendering, and I noticed that there is a node called Camera Index Switch that can do this. But when I tried it, I found that: 1. It can only output an Integer-type value; when I change it to a Float value, it changes back again, and I don't know if that is a bug. 2. When I test this node with an IF node, its output is weird. Below, where zero should be output, it is black, but when I switch to the IF node it is grey; it is neither 0 nor 1 (my IF node's TRUE result is 1, FALSE result is 0). I want to ask whether this is a bug, and whether this is the correct way to do stereoscopic rendering.
Posted
by bYsdTd.
Last updated
.
Post not yet marked as solved
0 Replies
139 Views
For me, any View that is an Attachment will rebuild at full frame rate even with nothing changing; indeed, even with no variables in the view. In addition to causing CPU usage that isn't needed, if there are @State variables in the View they do not always update. I am updating the var on DispatchQueue.main.async and most of the time it works. On occasions it is updated instantly; on other occasions it might take 30 seconds or more before the var changes become visible. If I set a breakpoint where the @State variables are changed, I can see the change... but the new value is not visible in the View (on Vision Pro). I have also used print("title \(title)") and I can see the correct value in the console, but what you see in the View on AVP is not correct (though it will, eventually, update). Important to note: 70% of the time the values are updated immediately. I've tried @StateObject with a class conforming to ObservableObject, and while that made it better, it doesn't fix the issue. The app is in full immersion at the time; I have no way of knowing if that is related or not. Below is the latest iteration of the variable:

@StateObject var alertState = AlertState()

class AlertState: ObservableObject {
    @Published var description: String = ""
    @Published var title: String = ""
}
Posted Last updated
.
Post not yet marked as solved
5 Replies
445 Views
In the WWDC talk "Enhance your spatial computing app with RealityKit", we see how to create a portal effect with RealityKit. In the "Encounter Dinosaurs" experience on Vision Pro there is a similar portal, except that portal allows entities to stick out of it. Using the provided example code, I have been unable to replicate this effect: anything that sticks out of the portal gets clipped. How do I get entities to stick out of the portal in a way similar to the "Encounter Dinosaurs" experience? I am familiar with the old way of using OcclusionMaterial to create portals, but if the camera gets between the OcclusionMaterial and the entity (such as when walking behind the portal), this can break the effect, and I was unable to break the effect in the "Encounter Dinosaurs" experience. If it helps at all: I have noticed that if you look from the edge of the portal very closely, the rocks do not stick out the way that the dinosaurs do; the rocks get clipped. Therefore, the dinosaurs are somehow being rendered differently.
Posted
by CodeName.
Last updated
.
Post not yet marked as solved
0 Replies
251 Views
I have the following issue regarding running two AR services. I am trying to develop an app for my master's thesis. Case 1: I first scan the room using the RoomPlan API. Then I stop the RoomPlan session and start the RealityKit session. When the RealityKit session starts, the camera shows nothing but a black screen. Case 2: After hitting the issue in case 1, I tried a separate test app with two separate screens, one for the RoomPlan API and one for RealityKit, with no relation between them. But as soon as I introduced the RoomPlan API, RealityKit stopped working, with the same black screen as above. There might be some state changed by the RoomPlan API that prevents RealityKit from accessing the camera. Let me know if you have any idea about it or any sample. I am using the following stack: Xcode (latest), SwiftUI, latest OS on a Mac mini and iPhone.
Posted
by shohandot.
Last updated
.
Post not yet marked as solved
0 Replies
100 Views
Hello! I'm working on an AR project using SwiftUI and RealityKit, and I've encountered a challenge. I need to pass a custom data type from a SwiftUI view to a RealityKit view (full immersion). The data type in question is an Album, defined as follows:

struct Album: Identifiable, Hashable {
    var id = UUID()
    var image: String
    var title: String
    var subTitle: String
}

Please help.
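A common pattern, sketched here under the assumption of a WindowGroup plus an ImmersiveSpace in the same App, is to keep the selected Album in a shared ObservableObject that both scenes receive, rather than passing the value directly into the immersive view; the view and scene names below are hypothetical:

import SwiftUI
import RealityKit

// Sketch: shared state that both the 2D window and the immersive RealityView can read.
final class AppModel: ObservableObject {
    @Published var selectedAlbum: Album?
}

@main
struct AlbumApp: App {
    @StateObject private var model = AppModel()

    var body: some Scene {
        WindowGroup {
            AlbumPickerView()
                .environmentObject(model)
        }
        ImmersiveSpace(id: "AlbumSpace") {
            ImmersiveAlbumView()
                .environmentObject(model)
        }
    }
}

struct AlbumPickerView: View {
    @EnvironmentObject private var model: AppModel
    var body: some View {
        Button("Show album") {
            model.selectedAlbum = Album(image: "cover", title: "Title", subTitle: "Subtitle")
        }
    }
}

struct ImmersiveAlbumView: View {
    @EnvironmentObject private var model: AppModel
    var body: some View {
        RealityView { content in
            // Build entities for model.selectedAlbum here.
        } update: { content in
            // React here when model.selectedAlbum changes.
        }
    }
}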
Posted
by Code2aum.
Last updated
.
Post not yet marked as solved
0 Replies
94 Views
I have no idea how to position the RealityView content at the bottom-trailing edge of the volume. Could someone help me?

var body: some View {
    RealityView { content in
        if let scene = try? await Entity(named: "Volume", in: realityKitContentBundle) {
            content.add(scene)
            bookEntity = scene.findEntity(named: "Book")
            crossEntity = scene.findEntity(named: "Cross")
        }
    }
    .toolbar {
        if isShowToolbar {
            ToolbarItemGroup(placement: .bottomOrnament) {
                Text("The toolbar is shown")
            }
        }
    }
    .gesture(tapGesture())
}

I have tried several ways, but none work, including adding a ZStack to align content with the bottom. For now, the bounds of my volume are as follows:
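One approach, reusing the conversion trick the Hello World sample uses for scaling, is to wrap the RealityView in a GeometryReader3D, convert the view's frame into scene space, and pin the loaded scene to the resulting corner. A sketch only; which min/max corner corresponds to "bottom-trailing" in your layout is an assumption you may need to flip per axis:

import SwiftUI
import RealityKit
import RealityKitContent

struct VolumeView: View {
    var body: some View {
        GeometryReader3D { geometry in
            RealityView { content in
                if let scene = try? await Entity(named: "Volume", in: realityKitContentBundle) {
                    content.add(scene)

                    // Convert the volume's frame into RealityKit scene coordinates.
                    let bounds = content.convert(geometry.frame(in: .local),
                                                 from: .local, to: content)

                    // Sketch: pin the loaded scene to one corner of the volume
                    // (bottom = min y, trailing = max x in a left-to-right layout).
                    scene.position = SIMD3<Float>(bounds.max.x, bounds.min.y, bounds.max.z)
                }
            }
        }
    }
}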
Posted
by cjlalala.
Last updated
.