RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.

RealityKit Documentation

Posts under RealityKit tag

432 Posts
Post not yet marked as solved
2 Replies
47 Views
Hello, I would like to change the appearance (scale, texture, color) of a 3D element (ModelEntity) when I hover over it with my eyes. What should I do if I want to create a request for this feature? And how would I know whether it will ever be considered, or when it will appear?
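As far as I know there is currently no public API for reacting to gaze with custom visuals (eye-gaze data is privacy-protected); the closest built-in behavior is HoverEffectComponent, which applies a system highlight when the user looks at an entity, and feature requests are filed through Feedback Assistant (https://feedbackassistant.apple.com), though Apple generally doesn't share whether or when a request will ship. A minimal sketch of the existing hover highlight, with a placeholder entity:

import RealityKit

// Placeholder entity; the mesh and material are arbitrary.
let sphere = ModelEntity(
    mesh: .generateSphere(radius: 0.1),
    materials: [SimpleMaterial(color: .blue, isMetallic: false)])

// The entity needs input targeting and collision shapes for hover to register.
sphere.components.set(InputTargetComponent())
sphere.generateCollisionShapes(recursive: true)

// System-provided highlight on look; its appearance is not customizable today.
sphere.components.set(HoverEffectComponent())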
Posted. Last updated.
Post not yet marked as solved
0 Replies
271 Views
I'm developing a 3D scanner that runs on an iPad (6th gen, 12-inch). Photogrammetry with ObjectCaptureSession was successful, but my other attempts are not. I've tried Photogrammetry with URL inputs; the pictures come from AVCapturePhoto. It is strange: if the metadata is not replaced, photogrammetry finishes, but it seems no depthData or gravity info is used (depth and gravity are separate files). If the metadata is injected, the attempt fails. I then tried Photogrammetry with a PhotogrammetrySample sequence and it also failed. The settings are:
camera: back LiDAR camera
image format: kCVPixelFormatType_32BGRA (failed with a crash) or HEVC (just failed)
image depth format: kCVPixelFormatType_DisparityFloat32 or kCVPixelFormatType_DepthFloat32
photoSettings: isDepthDataDeliveryEnabled = true, isDepthDataFiltered = false, embedded = true
I wonder whether the iPad supports Photogrammetry with PhotogrammetrySamples. I've already tested some sample code provided by Apple: https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera https://developer.apple.com/documentation/realitykit/taking_pictures_for_3d_object_capture What should I do to make Photogrammetry succeed?
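For reference, PhotogrammetrySession does accept a sequence of PhotogrammetrySample values (each carrying an image plus optional depthDataMap and gravity) on devices that support it, and the outputs stream reports why individual samples are rejected, which may point at the format problem. A sketch of that path, assuming the samples have already been built from your captures:

import Foundation
import RealityKit

func runPhotogrammetry(with samples: [PhotogrammetrySample], outputURL: URL) async throws {
    // Not every device supports photogrammetry; check before creating a session.
    guard PhotogrammetrySession.isSupported else { return }

    let session = try PhotogrammetrySession(
        input: samples,
        configuration: PhotogrammetrySession.Configuration())

    try session.process(requests: [.modelFile(url: outputURL)])

    for try await output in session.outputs {
        switch output {
        case .invalidSample(let id, let reason):
            // Surfaces per-sample problems, e.g. unsupported pixel formats or metadata.
            print("Sample \(id) rejected: \(reason)")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        case .processingComplete:
            print("Finished")
        default:
            break
        }
    }
}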
Posted. Last updated.
Post not yet marked as solved
1 Reply
129 Views
Transparency in RealityKit is not rendered properly when viewed from specific ordinal axes. It seems to be a depth-sorting issue where some transparent surfaces are rejected when they should not be. Some view directions relative to specific ordinal axes are fine; I have not narrowed down which specific axis is the problem. This is true across particle systems and/or meshes, and it is very easy to replicate using multiple transparent meshes or particle systems. In the first gif you can see the problem in multiple instances: the fire and snow particles are sorted behind the terrain, which has transparency since it is a procedural blend of grass, rock, and ice, but they are correctly sorted in front of opaque materials such as the rocks and wood. The second gif shows two back-to-back grid meshes (since double-sided rendering is not supported) that have a custom surface shader to animate the mesh in a wave and also apply transparency. In the distance the transparency is rendered/overlapped correctly, but as the overlap approaches the screen (and crosses an ordinal axis) the transparent portion of the surface renders black, when the green of the mesh behind should show through. This is a blocking problem for the development of this demo.
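Not a fix for the underlying issue, but one workaround that may be worth trying (assuming this is visionOS RealityKit) is taking the transparent entities out of the automatic per-frame sort with ModelSortGroupComponent, so their draw order is explicit; the entity names below are placeholders for the terrain, water, and particle entities described above:

import RealityKit

// terrainEntity, waterEntity and fireParticles are placeholders for your entities.
let transparencyGroup = ModelSortGroup(depthPass: nil)

terrainEntity.components.set(ModelSortGroupComponent(group: transparencyGroup, order: 0))
waterEntity.components.set(ModelSortGroupComponent(group: transparencyGroup, order: 1))
fireParticles.components.set(ModelSortGroupComponent(group: transparencyGroup, order: 2))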
Posted by rngd. Last updated.
Post not yet marked as solved
1 Reply
66 Views
I'm trying to build a project with a moderately complex Reality Composer Pro project, but am unable to because my Mac mini (2023, 8GB RAM) keeps running out of memory. I'm wondering if there are any known memory leaks in realitytool; basically the tool is taking up 20-30GB (!) of memory during builds. I have a Mac Pro for content creation, which is why I didn't go for more RAM on the mini; it was supposed to just be a build machine for Apple Silicon compatibility, as my Pro is Intel. But I'm kinda stuck here. I have a scene that builds fine, but any time I add a USD (in this case a tree asset) with lots of instances or a lot of geometry, I run into the memory issue. I've tried greatly simplifying the model, but even a 2MB USD results in the crash. I'm failing to see how adding a 2MB asset would cause the memory use of realitytool to balloon so much during builds. If someone from Apple is willing to look, I can provide the scene, but it's proprietary, so I can't just post it publicly here.
Posted by Gregory_w. Last updated.
Post not yet marked as solved
5 Replies
1.2k Views
What is the most efficient way to use a MTLTexture (created procedurally at run-time) as a RealityKit TextureResource? I update the MTLTexture per-frame using regular Metal rendering, so it’s not something I can do offline. Is there a way to wrap it without doing a copy? A specific example would be great. Thank you!
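As far as I know there is no zero-copy wrapper in the public API before LowLevelTexture appeared in newer SDKs; the usual route is TextureResource.DrawableQueue, where instead of wrapping your own MTLTexture you render each frame into the MTLTexture the queue vends. A sketch with placeholder sizes, where textureResource stands for the resource your material already uses and encodeFrame stands in for your existing Metal pass:

import RealityKit
import Metal

// Placeholder pixel format and size; match them to your render pass.
let descriptor = TextureResource.DrawableQueue.Descriptor(
    pixelFormat: .bgra8Unorm,
    width: 1024,
    height: 1024,
    usage: [.renderTarget, .shaderRead],
    mipmapsMode: .none)
let queue = try TextureResource.DrawableQueue(descriptor)

// Attach the queue to the TextureResource referenced by your material.
textureResource.replace(withDrawables: queue)

// Per frame: draw into the drawable's MTLTexture with your Metal pass, then present.
func renderFrame() throws {
    let drawable = try queue.nextDrawable()
    encodeFrame(into: drawable.texture)   // hypothetical helper: your existing Metal rendering
    drawable.present()
}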
Posted. Last updated.
Post not yet marked as solved
2 Replies
172 Views
Hello, I have started using the DragRotationModifier from the Hello World demo project by Apple. I have run into a bug that I can't seem to figure out for the life of me: everything works fine for about 3-5 seconds of movement before it starts rapidly spinning for some reason. I took a video, but it looks like I am unable to post any link to outside sources like Imgur or YouTube, so I'll try to describe it as best I can: basically I can spin the sample USDZ Nike Airforce from the Apple sample objects perfectly, but after about 3-5 seconds it rapidly snaps between other rotations and the rotation where the "cursor" is. A couple of additional notes: this only happens while the finger pinch/drag gesture is interacting with the object, and the spin only affects the yaw rotation axis of the object. I created an "Imported Model Entity" wrapper that does some additional work when importing a USDZ model, similar to the Hello World demo. Then, within a RealityView, I create an instance of this ImportedModelEntity and attach the DragRotationModifier to the view like this:

RealityView { content in
    let modelEntity = await ImportedModelEntity(configuration: modelViewModel.modelConfiguration)
    content.add(modelEntity)
    self.modelEntity = modelEntity
    content.add(BoundsVisualizer(bounds: [0.6, 0.6, 0.6]))

    // Scale object to half of the size of the Volume view
    let bounds = content.convert(geometry.frame(in: .local), from: .local, to: content)
    let minExtent = bounds.extents.min()
    modelViewModel.modelConfiguration.scale = minExtent
} update: { content in
    modelEntity?.update(configuration: modelViewModel.modelConfiguration)
}
.if(modelEntity != nil) { view in
    view.dragRotation(
        pitchLimit: .degrees(90),
        targetEntity: modelEntity!,
        sensitivity: 10,
        axRotateClockwise: axRotateClockwise,
        axRotateCounterClockwise: axRotateCounterClockwise)
}

For reference, here is my ImportedModelEntity:

import Foundation
import RealityKit

class ImportedModelEntity: Entity {
    // MARK: - Sub-entities
    private var model: ModelEntity = ModelEntity()
    private let rotator = Entity()

    // MARK: - Internal state

    // MARK: - Initializers
    @MainActor required init() {
        super.init()
    }

    init(configuration: Configuration) async {
        super.init()

        if configuration.modelURL == nil {
            fatalError("Provided modelURL is NOT valid!!")
        }

        // Load the custom model on the main thread
        // DispatchQueue.main.async {
        do {
            let input: ModelEntity? = try await ModelEntity(contentsOf: configuration.modelURL!)
            guard let model = input else { return }
            self.model = model

            // let material = SimpleMaterial(color: .green, isMetallic: false)
            // model.model?.materials = [material]

            // Add input components
            model.components.set(InputTargetComponent())
            model.generateCollisionShapes(recursive: true)

            // Add Hover Effect
            model.components.set(HoverEffectComponent())

            // self.model.components.set(GroundingShadowComponent(castsShadow: configuration.castsShadow))

            // Add Rotator
            self.addChild(rotator)

            // Add Model to rotator
            rotator.addChild(model)
        } catch is CancellationError {
            // The entity initializer can throw this error if an enclosing
            // RealityView disappears before the model loads. Exit gracefully.
            return
        } catch let error {
            print("Failed to load model: \(error)")
        }
        // }

        update(configuration: configuration)
    }

    // MARK: - Update Handlers
    func update(configuration: Configuration) {
        rotator.orientation = configuration.rotation
        move(to: Transform(
            scale: SIMD3(repeating: configuration.scale),
            rotation: orientation,
            translation: configuration.position),
            relativeTo: parent)
    }
}

Any help is greatly appreciated!
Posted. Last updated.
Post not yet marked as solved
1 Reply
121 Views
SwiftUI in visionOS has a modifier called preferredSurroundingsEffect that takes a SurroundingsEffect. From what I can tell, there is only a single effect available: .systemDark.

ImmersiveSpace(id: "MyView") {
    MyView()
        .preferredSurroundingsEffect(.systemDark)
}

I'd like to create another effect to tint the color of passthrough video if possible. Does anyone know how to create custom SurroundingsEffects?
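For what it's worth, I don't think custom SurroundingsEffect types can be created, but I believe newer visionOS releases added a built-in color-multiply effect; treat the exact name and availability below as an assumption to verify against the current SwiftUI documentation:

import SwiftUI

// Assumed API (newer visionOS): SurroundingsEffect.colorMultiply(_:) tints passthrough.
ImmersiveSpace(id: "MyView") {
    MyView()
        .preferredSurroundingsEffect(.colorMultiply(.blue))
}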
Posted. Last updated.
Post not yet marked as solved
0 Replies
77 Views
When isAutoFocusEnabled is set to true, the entity in the scene keeps shaking. When isAutoFocusEnabled is set to false, the camera does not focus. How can I configure this to solve the problem?

override func viewDidLoad() {
    super.viewDidLoad()
    arView.session.delegate = self

    guard let arCGImage = UIImage(named: "111", in: .main, with: .none)?.cgImage else { return }
    let arReferenceImage = ARReferenceImage(arCGImage, orientation: .up, physicalWidth: CGFloat(0.1))
    let arImages: Set<ARReferenceImage> = [arReferenceImage]

    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = arImages
    configuration.maximumNumberOfTrackedImages = 1
    configuration.isAutoFocusEnabled = false
    arView.session.run(configuration)
}

func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    anchors.compactMap { $0 as? ARImageAnchor }.forEach {
        let anchor = AnchorEntity(anchor: $0)
        let mesh = MeshResource.generateBox(size: 0.1, cornerRadius: 0.005)
        let material = SimpleMaterial(color: .gray, roughness: 0.15, isMetallic: true)
        let model = ModelEntity(mesh: mesh, materials: [material])
        model.transform.translation.y = 0.05
        anchor.children.append(model)
        arView.scene.addAnchor(anchor)
    }
}
Posted by caopengxu. Last updated.
Post marked as solved
3 Replies
464 Views
Hello! I've been trying to create a custom USDZ viewer using the Vision Pro. Basically, I want to be able to load in a file and have a custom control system I can use to transform it, play back animations, etc. I'm getting stuck right at the starting line, however. As far as I can tell, the only way to access the file system through SwiftUI is to use the DocumentGroup struct to bring up the view. This requires implementing a file type through the FileDocument protocol. All of the resources I'm finding use text files as their example, so I'm unsure of how to implement USDZ files. Here is the FileDocument I've built so far:

import SwiftUI
import UniformTypeIdentifiers
import RealityKit

struct CoreUsdzFile: FileDocument {
    // we only support .usdz files
    static var readableContentTypes = [UTType.usdz]

    // make empty by default
    var content: ModelEntity = .init()

    // initializer to create new, empty usdz files
    init(initialContent: ModelEntity = .init()) {
        content = initialContent
    }

    // import or read file
    init(configuration: ReadConfiguration) throws {
        if let data = configuration.file.regularFileContents {
            // convert file content to ModelEntity?
            content = ModelEntity.init(mesh: data)
        } else {
            throw CocoaError(.fileReadCorruptFile)
        }
    }

    // save file wrapper
    func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
        let data = Data(content)
        return FileWrapper(regularFileWithContents: data)
    }
}

My errors are on the conversion of the file data into a ModelEntity and the reverse. I'm not sure if ModelEntity is the correct type here, but as far as I can tell .usdz files are imported as ModelEntities. Any help is much appreciated! Dylan
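A sketch of one pattern that should work for read-only viewing, assuming you only need to open .usdz files: keep the raw Data in the document and load the ModelEntity from a temporary file URL, since RealityKit's loaders take URLs rather than in-memory Data. The UsdzFile name and loadEntity helper are illustrative, not an established API:

import SwiftUI
import UniformTypeIdentifiers
import RealityKit

struct UsdzFile: FileDocument {
    static var readableContentTypes = [UTType.usdz]

    // The raw bytes of the .usdz file as read from disk.
    var data: Data

    init(data: Data = Data()) {
        self.data = data
    }

    init(configuration: ReadConfiguration) throws {
        guard let data = configuration.file.regularFileContents else {
            throw CocoaError(.fileReadCorruptFile)
        }
        self.data = data
    }

    func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
        FileWrapper(regularFileWithContents: data)
    }

    // Call this from the view, for example inside a RealityView make closure.
    func loadEntity() async throws -> ModelEntity {
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension("usdz")
        try data.write(to: url)
        return try await ModelEntity(contentsOf: url)
    }
}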
Posted by dganim8s. Last updated.
Post not yet marked as solved
0 Replies
96 Views
I'm trying to get a similar experience to Apple TV's immersive videos, but I cannot figure out how to present the AVPlayerViewController controls detached from the video. I am able to use the same AVPlayer in a window and projected on a VideoMaterial, but I can't figure out how to just present the controls, while displaying the video only in the 3D entity, without having a 2D projection in any view. Is this even possible?
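A minimal sketch of the shared-player setup described above (one AVPlayer feeding both a VideoMaterial entity and a 2D controls view); whether the system controls can be presented with no flat video at all is exactly the open question, so the VideoPlayer here still shows a 2D copy:

import SwiftUI
import RealityKit
import AVKit

struct SharedPlayerView: View {
    // Placeholder URL; in practice this would be your media item.
    @State private var player = AVPlayer(url: URL(fileURLWithPath: "/path/to/video.mp4"))

    var body: some View {
        VStack {
            RealityView { content in
                // 3D side: a plane whose material renders the player's frames.
                let screen = ModelEntity(
                    mesh: .generatePlane(width: 1.6, height: 0.9),
                    materials: [VideoMaterial(avPlayer: player)])
                content.add(screen)
            }
            // 2D side: system playback controls bound to the same player instance.
            VideoPlayer(player: player)
                .frame(height: 180)
        }
        .onAppear { player.play() }
    }
}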
Posted. Last updated.
Post marked as solved
1 Reply
108 Views
Hi all, I am trying to use ARWorldTrackingConfiguration to find any faces in my scene. However, when I query the scene, using the same type of query one would use with ARFaceTrackingConfiguration, I don't get an Entity back. Here's my code:

var entityCollection: Set<Entity> = []

let faceEntity = scene.performQuery(query1).first {
    $0.components[SceneUnderstandingComponent.self]?.entityType == .face
}

Every single time, faceEntity returns as empty. Any help/pointers would be appreciated!
Posted by rakin4321. Last updated.
Post not yet marked as solved
1 Reply
178 Views
I want to show the progress of a certain part of the game using an entity that looks like a "pie chart": basically a cylinder with a cut-out, where the entity gets fuller as progress goes from 0 to 100. Is there a way to create this kind of model entity? I know there are ways to animate entities and warp them between meshes, but I was wondering if somebody knows how to achieve this in the simplest way possible. Maybe some kind of custom shader that would just change how the material is rendered? I do not need a physics body, just to show it. I know how to do this in UIKit and the classic 2D Apple UI frameworks, but working with model entities it gets a bit tricky for me. Here is an example of how it would look; the examples are in 2D, but you can imagine them being 3D cylinders with a cut-out. Thank you!
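A minimal sketch of one way to do this without shaders, assuming a flat pie is enough (a true cylinder with a cut-out would add side and bottom faces built the same way): rebuild a sector mesh from a triangle fan whenever the progress value changes and swap it into the entity's ModelComponent.

import Foundation
import RealityKit

// progress is 0...1; radius and segment count are placeholders.
func makePieMesh(progress: Float, radius: Float = 0.1, segments: Int = 64) throws -> MeshResource {
    let sweep = 2 * Float.pi * progress
    let steps = max(1, Int(Float(segments) * progress))

    var positions: [SIMD3<Float>] = [.zero]      // fan center
    var indices: [UInt32] = []
    for i in 0...steps {
        let angle = sweep * Float(i) / Float(steps)
        positions.append([radius * cos(angle), 0, radius * sin(angle)])
        if i > 0 {
            indices.append(contentsOf: [0, UInt32(i), UInt32(i + 1)])   // one fan triangle
        }
    }

    var descriptor = MeshDescriptor(name: "pie")
    descriptor.positions = MeshBuffers.Positions(positions)
    // Flat mesh in the XZ plane facing +Y; flip the index order if it renders
    // invisible from the side you look at.
    descriptor.normals = MeshBuffers.Normals(Array(repeating: SIMD3<Float>(0, 1, 0), count: positions.count))
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}

// Usage: update an existing entity as progress changes.
// pieEntity.model?.mesh = try makePieMesh(progress: 0.42)

Regenerating a small mesh like this per progress change should be cheap; a shader-driven cut-out would avoid regeneration but needs a custom material authored elsewhere.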
Posted by darescore. Last updated.
Post not yet marked as solved
0 Replies
140 Views
This is easy to replicate with the ObjectPlacement sample app. Just run the app, position the View behind you, and enjoy building a tower with the blocks. Eventually the app will enter the background and exit Immersive Space. This is actually a big problem, because while you can ignore the change of scenePhase to .background, doing so removes your only chance of knowing that the user pinched the circle-X button to close the View. You can run the Hello World app, enter Immersive Space, and then close the View: Immersive Space stays up and you can't get the View back. As such, you need to close Immersive Space if the user closes the View (like ObjectPlacement does). But if you do that and the user simply doesn't look at the View, it exits Immersive Space. Either option is a bad user experience. Being able to hide the View while in Immersive Space (so that the user can't close it) would be a good option; unfortunately, while you can hide all the content, the bar and circle-X button are still visible. A second option would be for the View not to go into the background when the user doesn't look at it, at least while they are in Immersive Space.
Posted. Last updated.
Post not yet marked as solved
0 Replies
121 Views
Hello. I am trying to load my own image-based lighting file in a visionOS RealityView. I used the code you get when creating a new project from scratch and selecting the immersive space to be full when creating the project. With the sample file Apple provides, it works. But when I put my image, in PNG, HEIC or EXR format, in the same location the example file was in, it doesn't load, and the error states:

Failed to find resource with name "SkyboxUpscaled2" in bundle

In this image you can see the file "ImageBasedLight", which is the one that comes with the project, and the file "SkyboxUpscaled2", which is my own in the .exr format.

if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
    content.add(immersiveContentEntity)
    do {
        let resource = try await EnvironmentResource(named: "SkyboxUpscaled2")
        let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
        immersiveContentEntity.components.set(iblComponent)
        immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))
    } catch {
        print(error.localizedDescription)
    }
}

Does anyone have an idea why the file is not found? Thanks in advance!
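One thing worth checking (this is an assumption on my part, so verify against the EnvironmentResource docs): if SkyboxUpscaled2 sits inside the RealityKitContent package next to the sample file, the plain initializer searches the app's main bundle, so the package bundle may need to be passed explicitly, and the file must actually be a member of that target:

import RealityKit
import RealityKitContent

// Assumed overload that takes a bundle; if it isn't available in your SDK, make sure
// the .exr is instead added to the app target's resources and loaded from .main.
let resource = try await EnvironmentResource(named: "SkyboxUpscaled2", in: realityKitContentBundle)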
Posted by Flex05. Last updated.
Post not yet marked as solved
1 Reply
138 Views
I am planning to build a visionOS app and need access to the persona (avatar). I have not found any information regarding integration possibilities in the docs. Does anyone know if and how I can access the user's persona? Other applications like Zoom and Teams for visionOS use the persona, so I think it is basically possible. Apparently (if it's not fake) there is also a chess game with an integrated persona: https://www.youtube.com/watch?v=mMzK8C3t14I Any help is very welcome, thanks.
Posted by vill33. Last updated.
Post marked as solved
2 Replies
168 Views
I have a simple visionOS app that uses a RealityView to map floors and ceilings using PlaneDetectionProvider and PlaneAnchors. I can look at a location on the floor or ceiling, tap, and place an object at that location (currently a small cube with X-Y-Z axes sticking out of it). The tap locations are consistently about 0.35 m off along the horizontal plane (never off vertically) from where I was looking. Has anyone else run into the issue of a spatial tap gesture resulting in a location offset from where they are looking? If I move to different locations, the offset is the same in real space, so it doesn't appear to be associated with the orientation of the Apple Vision Pro (e.g. it isn't off a little to the left of where the headset was looking). Attached is an image showing this. I focused on the corner of the carpet (yellow circle), tapped my fingers to trigger a tap gesture in RealityView, extracted the location, and placed a purple cube at that location. I stood in 4 different locations (the orange squares), looked at the corner of the rug (yellow circle), and tapped. All 4 purple cubes are placed at about the same location, roughly 0.35 m away from the look location. Here is how I captured the tap gesture and extracted the 3D location:

var myTapGesture: some Gesture {
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { event in
            let location3D = event.convert(event.location3D, from: .global, to: .scene)
            let entity = event.entity
            model.handleTap(location: location3D, entity: entity)
        }
}

Here is how I set the position of the purple cube:

func handleTap(location: SIMD3<Float>, entity: Entity) {
    let positionEntity = Entity()
    positionEntity.setPosition(location, relativeTo: nil)
    ...
}
Posted by Todd2. Last updated.
Post marked as solved
2 Replies
158 Views
Hello, I tried to build something with scene reconstruction, but I want to add occlusion on the surfaces. How can I do that? I tried to create an entity and then apply an OcclusionMaterial, but I received a ShapeResource, and I need to pass a MeshResource to create a mesh for the entity before applying a material. Any suggestions?
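Assuming this is visionOS scene reconstruction (a SceneReconstructionProvider delivering MeshAnchor updates), one route that should work is building a MeshResource from the anchor's geometry and pairing it with an OcclusionMaterial; the meshDescriptor(from:) helper below is hypothetical and stands in for copying the anchor's vertex positions and triangle indices into a MeshDescriptor:

import ARKit
import RealityKit

// meshDescriptor(from:) is a hypothetical helper: it would read the positions and
// face indices out of MeshAnchor.Geometry and fill a MeshDescriptor with them.
func occlusionEntity(for anchor: MeshAnchor) throws -> ModelEntity {
    let descriptor = meshDescriptor(from: anchor.geometry)
    let mesh = try MeshResource.generate(from: [descriptor])

    // OcclusionMaterial hides anything rendered behind this geometry without
    // drawing the geometry itself.
    let entity = ModelEntity(mesh: mesh, materials: [OcclusionMaterial()])
    entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
    return entity
}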
Posted. Last updated.
Post not yet marked as solved
1 Reply
183 Views
I'm in Europe; Vision Pro isn't available here yet. I'm a developer / designer, and I want to find out whether it's worthwhile to try and sell the idea of investing in a bunch of Vision Pro devices, as well as in app development for it, to the people overseeing the budget for a project I'm part of. The project is broadly in an "industry" where several constraints apply, most of them around security and safety. So far, all the Vision Pro discussion I've seen is about consumer-level media consumption and tippy-tappy app stuff for a broad user base. Now, the hardware, the OS features, and the SDK definitely look like professional niche use cases are possible. But some features, such as SharePlay, will for example require an Apple ID and an internet connection (I guess?). That, for example, is a strict nope in my case, for security reasons. I'd like to start a discussion of what works and what doesn't work, outside the realm of watching Disney+ in your condo. Potentially, this device has several marks ticked with regard to incredibly useful features in general:
very good indoor tracking
passthrough with good fidelity
hands-free operation
The first point especially is kind of a really big deal, and for me, the biggest open question. I have multiple make-or-break questions with regard to this. (These features are not available in the simulator.) For the sake of argument, let's say the app I'm building is Cave Mapper. It's meant to be used by archeologists inside a cave system where we have no internet, no reliable compass, and no GPS. We have a local network that we can carry around, though. We can also bring lights. One feature of the app is to build out a catalog of cave paintings and store them in a database. The archeologist wants to walk around, look at a cave painting, and tap on it to capture its position relative to the cave entrance. The next day, another archeologist may work inside the same cave, and they would want synchronised access to the same spatial data from the day before. For that (see the sketch after this post):
How good, precise, reliable, and stable is the indoor tracking really? Hyped reviewers said it's rock solid; others have said it can drift.
How well do the persistent WorldAnchor objects work? How well do they work when you're in a concrete bunker or a cave without GPS?
Can I somehow share a world anchor with another user? Is it possible to sync the ARKit map that one device has built with another device?
Other showstoppers?
In case you cannot share your mapped world or world anchors: how solid is the tracking of an ImageAnchor (which we could physically nail to the cave entrance to use as a shared positional / rotational reference)?
Other, practical stuff: can you wear Vision Pro with a safety helmet? Does it work with gloves?
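A sketch of the per-device persistence part of the WorldAnchor question, assuming visionOS ARKit (ARKitSession plus WorldTrackingProvider): anchors added this way are persisted by the system and re-delivered on the same device across sessions, but to my knowledge there is no public API to share the underlying map or its anchors with a second device, which is why the ImageAnchor fallback above is worth testing.

import ARKit

// Placeholder session/provider setup; error handling trimmed for brevity.
let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func startTracking() async throws {
    try await session.run([worldTracking])
}

// Called when the archeologist taps a painting; `transform` would come from the tap.
func markPainting(at transform: simd_float4x4) async throws {
    let anchor = WorldAnchor(originFromAnchorTransform: transform)
    try await worldTracking.addAnchor(anchor)
    // Persist anchor.id (a UUID) with the catalog record so the painting can be
    // matched when the same anchor is delivered again in a later session.
}

func observeAnchors() async {
    for await update in worldTracking.anchorUpdates {
        print("World anchor \(update.anchor.id): \(update.event)")
    }
}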
Posted by jpenca. Last updated.