Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Posts under ARKit tag

354 Posts
Post marked as solved
1 Replies
76 Views
Hi all, I am trying to use ARWorldTrackingConfiguration to find any faces in my scene. However, when I query the scene using the same type of query one would use with ARFaceTrackingConfiguration, I don't get an Entity back. Here's my code:

var entityCollection: Set<Entity> = []

let faceEntity = scene.performQuery(query1).first {
    $0.components[SceneUnderstandingComponent.self]?.entityType == .face
}

Every single time, faceEntity comes back empty. Any help/pointers would be appreciated!
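(Not the poster's code — a minimal sketch under the assumption that this is an iOS RealityKit app: ARWorldTrackingConfiguration only tracks the user's face when userFaceTrackingEnabled is set on supported TrueDepth devices, and the face then arrives as an ARFaceAnchor via the session delegate rather than through a scene-understanding entity query.)

import ARKit
import RealityKit

// Sketch only: enable user face tracking alongside world tracking (if supported),
// then watch for ARFaceAnchor in the ARSessionDelegate callbacks.
func runWorldTrackingWithFaceTracking(on arView: ARView, delegate: ARSessionDelegate) {
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsUserFaceTracking {
        config.userFaceTrackingEnabled = true
    }
    arView.session.delegate = delegate
    arView.session.run(config)
}

// In the delegate, faces show up as ARFaceAnchor:
// func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
//     let faces = anchors.compactMap { $0 as? ARFaceAnchor }
// }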
Posted
by rakin4321.
Last updated
.
Post marked as solved
1 Replies
91 Views
I'm trying to take an object capture and scale it. What I did so far: I created a Reality Composer project, inserted the .objcap file into the project, and scaled it from 100% to 200%. I then exported it as a USDZ. It just won't show up in the Xcode preview now, and I'm not sure why. Is there any way to fix this? I'm going crazy trying to find a fix that works.
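A hedged alternative while the exported USDZ is being sorted out: load the original, unscaled capture at runtime and apply the scale in RealityKit code instead of baking it into the asset. "MyCapture" is a hypothetical resource name.

import RealityKit

// Sketch only: scale the model 2x in code rather than in the exported USDZ.
func loadScaledCapture() throws -> ModelEntity {
    let model = try ModelEntity.loadModel(named: "MyCapture")  // hypothetical asset name
    model.scale = SIMD3<Float>(repeating: 2.0)                 // 200%
    return model
}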
Posted Last updated
.
Post not yet marked as solved
0 Replies
61 Views
I am developing an iOS app intended to be used only at a specific location (a campus). I'd like to use ARGeoAnchor to anchor content across a relatively large space, though in the pedestrian-only areas this is not supported and tracking begins to fail. Is it possible to additionally use ARReferenceObject to re-localize to a specific location when I am walking in an unsupported area? (FB13719373)
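For reference, a minimal sketch of the object-detection fallback, assuming a plain ARWorldTrackingConfiguration is acceptable while outside geo-tracking coverage; "CampusLandmarks" is a hypothetical AR Resource Group name, and whether this can be combined with ARGeoTrackingConfiguration in a single session is not confirmed here.

import ARKit

// Sketch only: re-localize against scanned reference objects in unsupported areas.
func runObjectRelocalization(on session: ARSession) {
    let config = ARWorldTrackingConfiguration()
    config.detectionObjects = ARReferenceObject.referenceObjects(
        inGroupNamed: "CampusLandmarks", bundle: nil) ?? []
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
    // Detected objects arrive as ARObjectAnchor via the ARSessionDelegate.
}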
Posted
by Jake3231.
Last updated
.
Post not yet marked as solved
1 Replies
103 Views
I am planning to build a visionOS app and need access to the user's persona (avatar). I have not found any information regarding integration possibilities in the docs. Does anyone know if and how I can access the user's persona? Other applications like Zoom and Teams for visionOS use the persona, so I think it is basically possible. Apparently (if it's not fake) there is also a chess game with an integrated persona: https://www.youtube.com/watch?v=mMzK8C3t14I Any help is very welcome, thanks.
Posted
by vill33.
Last updated
.
Post marked as solved
2 Replies
129 Views
I have a simple visionOS app that uses a RealityView to map floors and ceilings using PlaneDetectionProvider and PlaneAnchors. I can look at a location on the floor or ceiling, tap, and place an object at that location (I am currently placing a small cube with X-Y-Z axes sticking out at the location). The tap locations are consistently about 0.35 m off along the horizontal plane (never off vertically) from where I was looking. Has anyone else run into the issue of a spatial tap gesture resulting in a location offset from where they are looking? If I move to different locations, the offset is the same in real space, so it doesn't appear to be associated with the orientation of the Apple Vision Pro (e.g., it isn't off a little to the left of where the headset was looking). Attached is an image showing this. I focused on the corner of the carpet (yellow circle), tapped my fingers to trigger a tap gesture in the RealityView, extracted the location, and placed a purple cube at that location. I stood in 4 different locations (the orange squares), looked at the corner of the rug (yellow circle), and tapped. All 4 purple cubes are placed at about the same location, ~0.35 m away from the look location.

Here is how I captured the tap gesture and extracted the 3D location:

var myTapGesture: some Gesture {
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { event in
            let location3D = event.convert(event.location3D, from: .global, to: .scene)
            let entity = event.entity
            model.handleTap(location: location3D, entity: entity)
        }
}

Here is how I set the position of the purple cube:

func handleTap(location: SIMD3<Float>, entity: Entity) {
    let positionEntity = Entity()
    positionEntity.setPosition(location, relativeTo: nil)
    ...
}
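One hedged thing to check (a guess, not a confirmed fix for this offset): Apple's visionOS sample code converts the tap location from the gesture's .local coordinate space rather than .global, roughly like this:

// Sketch only: same gesture, but converting from .local instead of .global.
// Whether this removes the ~0.35 m offset in this particular app is an assumption.
var myTapGestureLocal: some Gesture {
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { event in
            let location3D = event.convert(event.location3D, from: .local, to: .scene)
            model.handleTap(location: location3D, entity: event.entity)
        }
}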
Posted
by Todd2.
Last updated
.
Post not yet marked as solved
3 Replies
179 Views
Flow: the user enters the app and starts an ARKit session with world tracking and scene reconstruction. The user closes the app, so we stop the session. The user re-enters the app and we try to run the session again, but the app crashes with the error: "It is not possible to re-run a stopped data provider." If we remove the code that stops the session, then when the user re-enters the app the scene reconstruction doesn't work properly and shows inaccurate meshing data. Is this a bug, or am I doing something wrong here? Any ideas or insight are appreciated.
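For context, a minimal sketch of the pattern I'd expect to avoid the error (an assumption, not a confirmed answer): keep the ARKitSession around, but create fresh provider instances every time the session is run, since a stopped DataProvider cannot be re-run.

import ARKit

// Sketch only (visionOS): new providers per run; reusing stopped providers triggers the error.
let session = ARKitSession()

func startProviders() async throws {
    let worldTracking = WorldTrackingProvider()
    let sceneReconstruction = SceneReconstructionProvider()
    try await session.run([worldTracking, sceneReconstruction])
}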
Posted Last updated
.
Post not yet marked as solved
3 Replies
131 Views
It appears that when a class adopts multiple delegate protocols, like the following:

class RoomCaptureViewController: UIViewController, RoomCaptureViewDelegate, ARSCNViewDelegate, MTKViewDelegate, ARSessionDelegate, RoomCaptureSessionDelegate

each message is delivered to a delegate by some priority-sensitive ordering, and one message can be processed by only one delegate rather than being passed off to the other delegates if they don't have the proper entry points. Specifically, I noted that changing the order seems to result in a delegate not getting a message that it should be seeing. Is there a "handoff" call that can be made after a delegate has seen a message but needs to pass it off to another delegate for processing? This is a pattern typically used in interrupt handlers for PCIe and other messaging protocols, and I have not been able to find a similar capability in the voluminous documentation available for iOS and macOS. I would also like to know how a message is dispatched by a class to the particular delegate for which the message was intended. Is there a detailed document that explains how this messaging works that is not so fragmented as to require multiple monitors just to form a coherent picture of the delegate interface for a class?
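For what it's worth, a minimal sketch of the usual manual approach (not an Apple-provided handoff mechanism): a delegate property is a single object reference, so each callback is delivered to exactly one receiver, and "handoff" is done by forwarding the call yourself. ForwardingSessionDelegate and downstream are hypothetical names.

import ARKit

// Sketch only: the primary delegate does its own work, then forwards the same
// message to another handler that also implements the (optional) method.
final class ForwardingSessionDelegate: NSObject, ARSessionDelegate {
    weak var downstream: ARSessionDelegate?

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Handle the frame here first...
        // ...then pass it along; the optional-chained call is a no-op if the
        // downstream object doesn't implement this method.
        downstream?.session?(session, didUpdate: frame)
    }
}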
Posted
by mfstanton.
Last updated
.
Post not yet marked as solved
2 Replies
144 Views
While WorldTrackingProvider.removeAnchor() completes without error, the WorldAnchor might be back the next time the app is run. This can easily be replicated with the ObjectPlacement sample: just add 10 objects, tap Remove All, then run the app again. On the first run the anchors might be gone, but run the app a couple more times and the anchors will come back. This becomes a big problem when paired with the issue that anchors are not always found when the app enters the immersive space. When an anchor is not found, our app adds one. That usually works fine for that run; the next run, however, the other anchors show up again. Anchors accumulate and it becomes difficult to keep track of them.
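For reference, a hedged sketch of how the anchors can at least be reconciled per run (an assumption, not a fix for the persistence bug itself): watch anchorUpdates and explicitly remove any world anchor whose UUID is not in the app's own saved set. savedAnchorIDs is a hypothetical name.

import ARKit
import Foundation

// Sketch only (visionOS): cull world anchors that reappear from earlier runs.
func reconcileAnchors(_ worldTracking: WorldTrackingProvider,
                      savedAnchorIDs: Set<UUID>) async {
    for await update in worldTracking.anchorUpdates {
        guard case .added = update.event,
              !savedAnchorIDs.contains(update.anchor.id) else { continue }
        // An anchor we previously removed came back; try removing it again.
        try? await worldTracking.removeAnchor(update.anchor)
    }
}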
Posted Last updated
.
Post marked as solved
2 Replies
135 Views
Hello, I tried to build something with scene reconstruction, but I want to add occlusion on the surfaces. How can I do that? I tried to create an entity and then apply an OcclusionMaterial, but I received a ShapeResource, and I would need to pass a MeshResource to create a mesh for the entity and then apply the material. Any suggestions?
Posted Last updated
.
Post not yet marked as solved
1 Replies
155 Views
I'm in Europe; Vision Pro isn't available here yet. I'm a developer / designer, and I want to find out whether it's worthwhile to try and sell the idea of investing in a bunch of Vision Pro devices, as well as in app development for them, to the people overseeing the budget for a project I'm part of. The project is broadly in an "industry" where several constraints apply, most of them around security and safety. So far, all the Vision Pro discussion I've seen is about consumer-level media consumption and tippy-tappy-app-stuff for a broad user base. Now, the hardware, the OS features, and the SDK definitely look like professional niche use cases are possible. But some features, such as SharePlay, will for example require an Apple ID and an internet connection (I guess?). This, for example, is a strict nope in my case, for security reasons. I'd like to start a discussion of what works and what doesn't work, outside the realm of watching Disney+ in your condo. Potentially, this device has several marks ticked with regards to incredibly useful features in general:
very good indoor tracking
pass-through with good fidelity
hands-free operation
The first point especially is kind of a really big deal, and for me, the biggest open question. I have multiple make-or-break questions with regard to this. (These features are not available in the simulator.) For the sake of argument, let's say the app I'm building is Cave Mapper. It's meant to be used by archeologists inside a cave system where we have no internet, no reliable compass, and no GPS. We have a local network that we can carry around, though. We can also bring lights. One feature of the app is to build out a catalog of cave paintings and store them in a database. The archeologist wants to walk around, look at a cave painting, and tap on it to capture its position relative to the cave entrance. The next day, another archeologist may work inside the same cave, and they would want to have synchronised access to the same spatial data from the day before. For that:
How good, precise, reliable, and stable is the indoor tracking really? Hyped reviewers said it's rock solid; others have said it can drift.
How well do the persistent WorldAnchor objects work? How well do they work when you're in a concrete bunker or a cave without GPS?
Can I somehow share a world anchor with another user? Is it possible to sync the ARKit map that one device has built with another device?
Other showstoppers?
In case you cannot share your mapped world or world anchors: how solid is the tracking of an ImageAnchor (which we could physically nail to the cave entrance to use as a shared positional / rotational reference)?
Other, practical stuff:
Can you wear Vision Pro with a safety helmet?
Does it work with gloves?
Posted
by jpenca.
Last updated
.
Post not yet marked as solved
0 Replies
73 Views
Hi, I'm a Unity developer using the Apple ARKit XR Plugin package for my project. I want to access the ARKit RGB image frames and convert them to a Texture2D in my project. It seems that ARKit takes over camera authorization for both the back and front cameras, so grabbing a webcam texture using another API (e.g. Unity's WebCamTexture class) is not allowed. ARKit also does not provide an official route to directly get frames from the AR camera. Has anyone resolved this issue? Thank you.
Posted
by JuChanSeo.
Last updated
.
Post not yet marked as solved
0 Replies
137 Views
Hi, I want to place an object in 3D world space without using hit testing or plane detection, in iOS Swift code. Please suggest the best method. Right now, I take the camera center matrix and use simd_mul to place the object. It works, but the object gets placed at the centre of the mobile screen. I want to select an x and y position in the 2D screen coordinates and place the object there. I tried using the unprojectPoint function to get the AR scene world coordinate of the point I touch on the mobile screen. I get x, y, and z values, and they are very close to the values from the camera center matrix. When I replace the camera center matrix values with the unprojectPoint values, I don't see a difference in the location of the placed object. The code below always places the object at the center of the screen at the specified depth, but I need to place the object at a user-specified (x, y) screen position at a given depth, i.e. going from the 2D pixel coordinate system of the renderer to the 3D world coordinate system of the scene.

/* Create a transform with a translation of 0.2 meters in front of the camera. */
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2
let transform = simd_mul(view.session.currentFrame.camera.transform, translation)

Reference: https://developer.apple.com/documentation/arkit/arskview/providing_2d_virtual_content_with_spritekit

The code I used for replacing the camera center matrix with the unprojectPoint result is:

let vpWithZ = SCNVector3(x: 100.0, y: 100.0, z: -1.0)
let worldPoint = sceneView.unprojectPoint(vpWithZ)
var translation = matrix_identity_float4x4
translation.columns.3.z = Float(Depth)
var translation2 = sceneView.session.currentFrame!.camera.transform
translation2.columns.3.x = worldPoint.x
translation2.columns.3.y = worldPoint.y
translation2.columns.3.z = worldPoint.z
let new_transform = simd_mul(translation2, translation)
/* add the object you want in your project */
let sphere = SCNSphere(radius: 0.03)
let objectNode = SCNNode(geometry: sphere)
objectNode.position = SCNVector3(x: transform.columns.3.x, y: transform.columns.3.y, z: transform.columns.3.z)

The image below shows an outline of my idea.
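A hedged sketch of one common approach (an assumption about what's wanted, not a confirmed fix): project a reference point at the desired depth to learn the normalized z for that distance, then unproject the tapped screen point using that same z.

import ARKit
import SceneKit

// Sketch only: world position under a 2D screen point, `depth` meters from the camera.
func worldPosition(at screenPoint: CGPoint, depth: Float,
                   in sceneView: ARSCNView) -> SCNVector3? {
    guard let frame = sceneView.session.currentFrame else { return nil }
    // A point `depth` meters straight in front of the camera, in world space.
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -depth
    let inFront = simd_mul(frame.camera.transform, translation)
    let reference = SCNVector3(inFront.columns.3.x, inFront.columns.3.y, inFront.columns.3.z)
    // Its projected z gives the normalized depth to reuse for the tapped point.
    let projected = sceneView.projectPoint(reference)
    return sceneView.unprojectPoint(
        SCNVector3(Float(screenPoint.x), Float(screenPoint.y), projected.z))
}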
Posted Last updated
.
Post not yet marked as solved
0 Replies
249 Views
I have the following issue with running two AR services. I am trying to develop an app for my master's thesis. Case 1: I first scan the room using the RoomPlan API. Then I stop the RoomPlan session and start the RealityKit session. When the RealityKit session starts, the camera shows nothing but a black screen. Case 2: After hitting the issue in case 1, I tried a separate test app with two separate screens, one for the RoomPlan API and one for RealityKit, with no relation between them. But as soon as I introduced the RoomPlan API, RealityKit stopped working, showing the same black screen as above. There might be some state changed by the RoomPlan API that prevents RealityKit from accessing the camera. Let me know if you have any idea about it, or any sample. I am using the following stack: Xcode (latest), SwiftUI, and the latest OS on a Mac mini and iPhone.
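A hedged sketch of what I'd try first, assuming the iOS 17 RoomPlan API is available (not a confirmed fix): stop the capture without pausing the underlying ARSession, so the camera feed stays usable by whatever runs next.

import RoomPlan

// Sketch only: keep the shared ARSession alive when RoomPlan finishes.
func finishRoomCapture(_ captureSession: RoomCaptureSession) {
    captureSession.stop(pauseARSession: false)
    // captureSession.arSession can then be reused instead of creating a second session.
}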
Posted
by shohandot.
Last updated
.
Post not yet marked as solved
1 Replies
120 Views
I'm developing a motion tracking app that requires a real-time view of an iPhone camera to capture the person's body. The motion is mapped to a virtual body. Currently this appears overlaid on the person that the iPhone sees. However, I want to transmit this real-time 3D virtual body to a different Apple device, as an AR app, so the other user can place it in their environment. Any suggestions on how I can make this 3D model viewable by another user (and keep it updating live based on the motion tracking)?
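A hedged sketch of one possible route (an assumption, not the only option): stream the ARBodyAnchor joint transforms over MultipeerConnectivity each frame and let the receiving device drive its own rigged model. mcSession is a hypothetical, already-connected MCSession.

import ARKit
import MultipeerConnectivity

// Sketch only: send the tracked skeleton's joint transforms to connected peers.
final class BodyStreamer: NSObject, ARSessionDelegate {
    let mcSession: MCSession
    init(mcSession: MCSession) { self.mcSession = mcSession }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let body as ARBodyAnchor in anchors {
            let joints = body.skeleton.jointModelTransforms      // [simd_float4x4]
            let data = joints.withUnsafeBufferPointer { Data(buffer: $0) }
            try? mcSession.send(data, toPeers: mcSession.connectedPeers, with: .unreliable)
        }
    }
}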
Posted Last updated
.
Post not yet marked as solved
1 Replies
141 Views
When running a modified version of the RoomPlan demo I get frequent Session Interrupted conditions. Looking at the traces, I find a status of SensorDidPause on the interruption side of the error, but I am mystified as to how to determine which sensor it was that paused and how to diagnose it. It appears there is a bitmap of available and active sensor devices in the sensor info passed with the session data on the error. In the error status I can see that one or two of the motion sensors have had a problem. How do I do further diagnostic checks on the cause of the error? I am also curious why the error occurred as soon as the AR session for my test started via the session.run call. The documentation in this area seems difficult to find. Attached are traces from running the test and stack dumps for the calls. Please send me guidance on how to proceed. The device in question is an iPad, "iPhone(3)", that is attached to the Mac mini named "Hawkeye". There is no known direct involvement of the Hawkeye system.
Posted
by mfstanton.
Last updated
.
Post not yet marked as solved
0 Replies
164 Views
I have a RealityKit based app in TestFlight and I see the following crash happening twice. It appears to be coming from the RealityKit framework itself, in cv3d::applecv3d::concurrent_sd::SurfaceDetection::PushAndDetect. Has anyone seen this before, and have you discovered what is causing it?

Thread 32 Crashed:
0 libsystem_kernel.dylib 0x00000001cfd81fbc __pthread_kill + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001f271f680 pthread_kill + 268 (pthread.c:1681)
2 libsystem_c.dylib 0x000000019069ab90 abort + 180 (abort.c:118)
3 Recon3D 0x0000000211b8cd7c cv3d::acv::surfacedetection::DepthMapPlaneDetector::detect(cv3d::esn::arr::ArrayView<float const, cv3d::esn::dim::DX<2u>, float const*>, cv3d::esn::arr::ArrayView<float const, cv3d::esn::dim::DX<2u... + 6136 (DepthMapPlaneDetector.cpp:346)
4 Recon3D 0x0000000211bb0fe4 cv3d::acv::surfacedetection::SurfaceDetector::detectAndTrack(cv3d::acv::surfacedetection::SurfaceDetector::DetectAndTrackWithDepthParams const&) + 844 (SurfaceDetector.cpp:635)
5 Recon3D 0x000000021142fd24 cv3d::applecv3d::concurrent_sd::SurfaceDetection::PushAndDetect(cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle const&) + 2672 (SurfaceDetection.cpp:645)
6 Recon3D 0x00000002114678ec cv3d::kit::concurrency::detail::ProcessorInputMessageHandlingStrategy<cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle, std::experimental::expected<cv3d::applecv3d::concurrent_sd::Surf... + 92 (ProcessorInputMessageHandlingStrategy.h:136)
7 Recon3D 0x00000002114675b4 std::__1::__function::__func<void cv3d::kit::concurrency::detail::Processor<cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle, std::experimental::expected<cv3d::applecv3d::concurrent_sd... + 184 (function.h:356)
8 Recon3D 0x0000000211794330 void std::__1::__invoke_void_return_wrapper<void, true>::__call<std::__1::future<void> cv3d::esn::thread::IWorkQueue::DispatchAsync<void>(std::__1::function<void ()>&&)::'lambda'()&>(std::__1::futu... + 68 (invoke.h:487)
9 Recon3D 0x0000000212387830 dispatch_async_C_CallBack + 76 (GrandCentralDispatchUtil.cpp:94)
10 libdispatch.dylib 0x00000001905e2300 _dispatch_client_callout + 20 (object.m:561)
11 libdispatch.dylib 0x00000001905e9964 _dispatch_lane_serial_drain + 956 (queue.c:3885)
12 libdispatch.dylib 0x00000001905ea3f8 _dispatch_lane_invoke + 432 (queue.c:3976)
13 libdispatch.dylib 0x00000001905eb6a8 _dispatch_workloop_invoke + 1756 (queue.c:4485)
14 libdispatch.dylib 0x00000001905f5004 _dispatch_root_queue_drain_deferred_wlh + 288 (queue.c:6913)
15 libdispatch.dylib 0x00000001905f4878 _dispatch_workloop_worker_thread + 404 (queue.c:6507)
16 libsystem_pthread.dylib 0x00000001f271b964 _pthread_wqthread + 288 (pthread.c:2629)
17 libsystem_pthread.dylib 0x00000001f271ba04 start_wqthread + 8 (:-1)
Posted Last updated
.
Post not yet marked as solved
1 Replies
145 Views
Hello there, do you know what happens if I call one of the following but the joint is not tracked?

var anchorFromJointTransform: simd_float4x4
The position and orientation of this joint relative to the base joint of the skeleton.

var parentFromJointTransform: simd_float4x4
The transform from the joint to its parent joint's coordinate system.
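For context, a hedged sketch of the defensive pattern (assuming this is visionOS hand tracking): check isTracked on both the anchor and the joint before using the transform, since an untracked joint's transform shouldn't be trusted.

import ARKit

// Sketch only: world transform of the wrist joint, or nil if it isn't tracked.
func wristTransform(for handAnchor: HandAnchor) -> simd_float4x4? {
    guard handAnchor.isTracked,
          let skeleton = handAnchor.handSkeleton else { return nil }
    let joint = skeleton.joint(.wrist)
    guard joint.isTracked else { return nil }
    return handAnchor.originFromAnchorTransform * joint.anchorFromJointTransform
}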
Posted
by kentvchr.
Last updated
.
Post not yet marked as solved
1 Replies
141 Views
Hello Community, I'm encountering an issue with the latest iOS 17 update, specifically related to RoomPlan version 2. In iOS 16, when using RoomPlan version 1, we were able to display stairs in our app. However, after upgrading to iOS 17 and implementing RoomPlan version 2, the stairs are no longer visible. Despite thorough investigation, I couldn't find any option within the code to show or hide stairs, or any other objects for that matter. It seems like a specific issue with the update rather than a coding error on our part. Has anyone else encountered a similar problem? If so, I would greatly appreciate any insights or solutions you might have. It's crucial for our app's functionality to have stairs displayed accurately, and we're currently at a loss on how to address this issue. Thank you in advance for any assistance you can provide. Best regards
Posted
by Ramneet.
Last updated
.