Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Posts under ARKit tag

351 Posts
Post not yet marked as solved
3 Replies
1.2k Views
Hi, can you support scan-to-BIM in the ARKit 6 API, with export to IFC? That is, scan a room or space, detect doors, windows, and furniture using AI, and produce a floor plan that can be exported to IFC or another BIM format. Second, can you make the LiDAR scan produce a real solid (for example in STL format) rather than a point cloud, again using AI to detect shapes? See https://aecmag.com/technology/oda-advances-with-scan-to-bim-sdk/, but also for solid parts. Best regards and happy new year, Ivo
Posted
by
Post not yet marked as solved
6 Replies
1.3k Views
Hi, we're working on an app that uses RealityKit to present products in the customer's home. We have a bug where every once in a while (somewhere around 1 in 100 runs), all entities are rendered using the green channel only (see image below). It seems to affect all entities in the ARView, regardless of model or material type. Due to the flaky nature of this bug, I've found it really hard to debug, and I can't rule out an internal issue in RealityKit. Has anyone run into similar issues, or have any hints on where to look for the culprit?
Posted
by
Post marked as solved
2 Replies
1.4k Views
Hi, I'm trying to determine whether a point in 3D space is covered by other objects, like a human hand or a wall. I don't want to use a raycast, so my idea is to calculate two things: 1) the distance between the iPad camera and this point, and 2) the position of this 3D point projected onto the 2D arView, then look up the depth information from the depthMap at that point. If the depth is smaller than the distance to the point, I can assume the point is covered by something. My code works well when the iPad faces our 3D point straight on, but when we rotate the iPad a little, calculation 2 (based on depth) gains an error. It looks like calculations 1 and 2 use two different points on the iPad as a reference (camera position), but I could not find any logic in it. This is my code:

let viewSize = arView.bounds.size
let frame = arView.session.currentFrame!

// Transform to translate between arView and depth map
let displayTransform = frame.displayTransform(for: arView.interfaceOrientation, viewportSize: viewSize)

guard let depthPixelBuffer = frame.sceneDepth?.depthMap else { return }
let depthWidth = CVPixelBufferGetWidth(depthPixelBuffer)
let depthWidthFloat = CGFloat(depthWidth)
let depthHeight = CVPixelBufferGetHeight(depthPixelBuffer)
let depthHeightFloat = CGFloat(depthHeight)

// Point in 3D space (our point, red square on images)
let object3Dposition = self.position

// Calculate distance between camera and point in 3D space
// (this always works well)
let distanceToObject = distance(object3Dposition, arView.cameraTransform.translation)

// 2D point on arView projected from the 3D position (where this point is visible on arView)
guard let pointOnArView = arView.project(object3Dposition) else { return }

// Normalize 2D point (0-1)
let pointOnArViewNormalized = CGPoint(x: pointOnArView.x / viewSize.width, y: pointOnArView.y / viewSize.height)

// Transform from arView position to depthMap position
let pointOnDepthMapNormalized = CGPointApplyAffineTransform(pointOnArViewNormalized, displayTransform.inverted())

// Point on depth map (from normalized coordinates to true coordinates)
let pointOnDepthMap = CGPoint(x: pointOnDepthMapNormalized.x * depthWidthFloat, y: pointOnDepthMapNormalized.y * depthHeightFloat)

guard pointOnDepthMap.x >= 0 && pointOnDepthMap.y >= 0 && pointOnDepthMap.x < depthWidthFloat && pointOnDepthMap.y < depthHeightFloat else {
    // Point not visible, outside of screen
    isVisibleByCamera = false
    return
}

// Read depth from buffer
let depth: Float32
CVPixelBufferLockBaseAddress(depthPixelBuffer, CVPixelBufferLockFlags(rawValue: 2))
let floatBuffer = unsafeBitCast(
    CVPixelBufferGetBaseAddress(depthPixelBuffer),
    to: UnsafeMutablePointer<Float32>.self
)

// Get depth at 'pointOnDepthMap' (convert from X,Y coordinates to a buffer index)
let depthBufferIndex = depthWidth * Int(pointOnDepthMap.y) + Int(pointOnDepthMap.x)

// This depth is incorrect when the iPad is rotated
depth = floatBuffer[depthBufferIndex]

CVPixelBufferUnlockBaseAddress(depthPixelBuffer, CVPixelBufferLockFlags(rawValue: 2))

if distanceToObject > depth + 0.05 {
    isVisibleByCamera = false
} else {
    isVisibleByCamera = true
}

Thank you :)
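One possible explanation, offered as an assumption rather than a confirmed cause: the value sampled from the sceneDepth depth map may represent distance measured along the camera's forward (Z) axis, while distanceToObject is a Euclidean camera-to-point distance, and the two only agree when the point is near the center of view. A minimal sketch of a camera-space comparison under that assumption (the helper name is made up):

import ARKit
import simd

// Hypothetical helper: depth of a world-space point measured along the
// camera's forward axis, for comparison with a value read from the depth map.
func forwardAxisDepth(of worldPoint: SIMD3<Float>, in frame: ARFrame) -> Float {
    // Bring the world-space point into camera space.
    let worldToCamera = frame.camera.transform.inverse
    let p = worldToCamera * SIMD4<Float>(worldPoint.x, worldPoint.y, worldPoint.z, 1)
    // In ARKit's camera space the view direction is -Z, so the
    // "straight ahead" distance is the negated z component.
    return -p.z
}

// Usage under the same assumption: compare this value, rather than the
// Euclidean distance, against the sampled depth.
// let isCovered = forwardAxisDepth(of: object3Dposition, in: frame) > depth + 0.05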
Posted
by
Post not yet marked as solved
2 Replies
1.4k Views
Hi Apple community, I'm currently developing an iOS application using the RoomPlan API, and I'd like to remove real-world objects detected with RoomPlan from my ARView. I already tried the following code, but it deletes only the anchor entities (customText, UI instructions...) attached to the anchor: arView.scene.removeAnchor(anchor). My aim is to visually remove real-world object content from my ARView, like in this example (I get an error when uploading files like png, jpg, or pdf, so here is a link): https://ibb.co/yR8CRVy Is there any way to do that using the RoomPlan API and ARKit? Thanks in advance, Goat
Posted
by
Post not yet marked as solved
3 Replies
1.4k Views
Hi, I am not sure what is going on... I have been working on this model for a while in Reality Composer and had no problem testing it that way; it always worked perfectly. So I imported the file into a brand new Xcode project: I created a new AR app and used SwiftUI. I actually did it twice, and also tested the version Apple provides with the box. In Apple's version, the app appears, but the whole part where it tries to detect planes didn't show up. So I am confused. I found a question that mentions the error messages I am getting, but I am not sure how to get around it: https://developer.apple.com/forums/thread/691882

//
// ContentView.swift
// AppToTest-02-14-23
//
// Created by M on 2/14/23.
//
import SwiftUI
import RealityKit

struct ContentView : View {
  var body: some View {
    return ARViewContainer().edgesIgnoringSafeArea(.all)
  }
}

struct ARViewContainer: UIViewRepresentable {
  func makeUIView(context: Context) -> ARView {
    let arView = ARView(frame: .zero)
    // Load the "Box" scene from the "Experience" Reality File
    //let boxAnchor = try! Experience.loadBox()
    let anchor = try! MyAppToTest.loadFirstScene()
    // Add the box anchor to the scene
    arView.scene.anchors.append(anchor)
    return arView
  }

  func updateUIView(_ uiView: ARView, context: Context) {}
}

#if DEBUG
struct ContentView_Previews : PreviewProvider {
  static var previews: some View {
    ContentView()
  }
}
#endif

This is what I get at the bottom:

2023-02-14 17:14:53.630477-0500 AppToTest-02-14-23[21446:1307215] Metal GPU Frame Capture Enabled
2023-02-14 17:14:53.631192-0500 AppToTest-02-14-23[21446:1307215] Metal API Validation Enabled
2023-02-14 17:14:54.531766-0500 AppToTest-02-14-23[21446:1307215] [AssetTypes] Registering library (/System/Library/PrivateFrameworks/CoreRE.framework/default.metallib) that already exists in shader manager. Library will be overwritten.
2023-02-14 17:14:54.716866-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/suFeatheringCreateMergedOcclusionMask.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.743580-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arKitPassthrough.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.744961-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/drPostAndComposition.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.745988-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arSegmentationComposite.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.747245-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute0.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.748750-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute1.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.749140-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute2.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.761189-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute3.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.761611-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute4.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.761983-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute5.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.762604-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute6.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.763575-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute7.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.764859-0500 AppToTest-02-14-23[21446:1307215] [Foundation.Serialization] Json Parse Error line 18: Json Deserialization; unknown member 'EnableARProbes' - skipping.
2023-02-14 17:14:54.764902-0500 AppToTest-02-14-23[21446:1307215] [Foundation.Serialization] Json Parse Error line 20: Json Deserialization; unknown member 'EnableGuidedFilterOcclusion' - skipping.
2023-02-14 17:14:55.531748-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534559-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534633-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534680-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534733-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534777-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534825-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534871-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534955-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:56.207438-0500 AppToTest-02-14-23[21446:1307383] [Technique] ARWorldTrackingTechnique <0x1149cd900>: World tracking performance is being affected by resource constraints [2]
2023-02-14 17:17:15.741931-0500 AppToTest-02-14-23[21446:1307414] [Technique] ARWorldTrackingTechnique <0x1149cd900>: World tracking performance is being affected by resource constraints [1]
2023-02-14 17:22:07.075990-0500 AppToTest-02-14-23[21446:1308137] [Technique] ARWorldTrackingTechnique <0x1149cd900>: World tracking performance is being affected by resource constraints [1]
Posted
by
Post not yet marked as solved
1 Reply
320 Views
I'm looking for ARKit standard face mesh with blendshapes to download, is this available?
Posted
by
Post not yet marked as solved
2 Replies
786 Views
If the anchor picture is out of the camera view, the AR experience disappears. This happens both with USDZ files created with Reality Composer and opened directly on iPhone or iPad (with AR Quick Look), and with Adobe Aero. So I suppose the bug is in ARKit.
Posted
by
Post marked as solved
1 Reply
527 Views
Hi, I need to render thousands of simple shapes in our AR experience. I'm using ARView to render 3D models, handle raycasts, lighting, etc., and I'm looking for the correct way to "inject" Metal code into ARView. I implemented the rendering code inside ARView's renderCallbacks.postProcess:

let blitEncoder = context.commandBuffer.makeBlitCommandEncoder()
blitEncoder?.copy(from: context.sourceColorTexture, to: context.targetColorTexture)
blitEncoder?.endEncoding()

let renderDescriptor = MTLRenderPassDescriptor()
...
...
let commandEncoder = context.commandBuffer.makeRenderCommandEncoder(descriptor: renderDescriptor)!
...
...
commandEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: count)
commandEncoder.endEncoding()

This solution seems to work for me, but sometimes strange things happen: the camera image rendered by the blitEncoder starts to jitter, or the FPS drops to 20, and sometimes it just works fine. Question 1: Is my solution the correct way to draw simple shapes in ARView? Question 2: What can cause those problems during rendering? Thanks :)
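For context, a minimal sketch (the function name and setup are assumptions, not the poster's code) of how such a post-process callback can be registered on an ARView on iOS 15 and later. It only copies the rendered frame through; custom encoders would be appended afterwards on the same command buffer.

import RealityKit
import Metal

func installPostProcess(on arView: ARView) {
    arView.renderCallbacks.postProcess = { context in
        // RealityKit expects the callback to write the target texture; the
        // simplest valid callback blits the source frame through unchanged.
        guard let blit = context.commandBuffer.makeBlitCommandEncoder() else { return }
        blit.copy(from: context.sourceColorTexture, to: context.targetColorTexture)
        blit.endEncoding()
        // Custom render passes (for example, the simple shapes mentioned above)
        // would be encoded here, targeting context.targetColorTexture, before
        // RealityKit commits the command buffer.
    }
}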
Posted
by
Post not yet marked as solved
0 Replies
971 Views
Currently, I have a requirement to use models created in Unity in RealityKit, so I need to convert the models to USDZ format. I used this approach (How to easily create AR content for iPhone using Unity), but the result was not as expected: converted models do not display correctly, and animations on objects do not appear in the converted files. I also noticed that objects made with the Unity particle system (e.g., confetti) were not converted with this approach. I also tried converting by selecting the 'Export selected as USDZ' menu from Unity's main menu bar, but nothing worked. So is there any effective way to convert Unity models, including particle systems, to USDZ?
Posted
by
Post not yet marked as solved
0 Replies
739 Views
Hi everybody, I am an engineering student, and at university we have to create a little AR app. In Xcode I want to set up image tracking, and above that image it should show my 3D object. I followed this video, "https://www.youtube.com/watch?v=VmPHE8M2GZI", up to minute 39:18. After that, it doesn't work. The simulator detects the image and shows a light grey plane above it, even if I move around, but the 3D model doesn't show up. What I have tried:
I imported the ns.obj file into art.scnassets
Converted it to a SceneKit .scn file
Changed the "diffuse" texture to green
I tried to scale it, but still no result
I also tried with a 3D object downloaded from the internet
Long story short... it doesn't work. Does anyone know what the problem could be? Thank you very much. Greetings, Rosario. PS: I use Xcode 14.3. This is my code in the ViewController.swift file:

import SwiftUI
import RealityKit
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!
    var nsNode: SCNNode?

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.autoenablesDefaultLighting = true
        let nsScene = SCNScene(named: "art.scnassets/ns.scn")
        nsNode = nsScene?.rootNode
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARImageTrackingConfiguration()
        if let trackingImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: Bundle.main) {
            configuration.trackingImages = trackingImages
            configuration.maximumNumberOfTrackedImages = 2
        }
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }

    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        let node = SCNNode()
        if let imageAnchor = anchor as? ARImageAnchor {
            let size = imageAnchor.referenceImage.physicalSize
            let plane = SCNPlane(width: size.width, height: size.height)
            plane.firstMaterial?.diffuse.contents = UIColor.white.withAlphaComponent(0.5)
            plane.cornerRadius = 0.005
            let planeNode = SCNNode(geometry: plane)
            planeNode.eulerAngles.x = -.pi / 2
            node.addChildNode(planeNode)
            if let shapeNode = nsNode {
                node.addChildNode(shapeNode)
            }
        }
        return node
    }
}
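As a hedged aside, one common reason an imported .obj/.scn model stays invisible above a tracked image is its scale or placement, so it can help to scale and position the node explicitly before adding it. A minimal sketch with made-up values (the function name and numbers are assumptions, not a confirmed fix):

import SceneKit

func placeModel(_ modelNode: SCNNode, under anchorNode: SCNNode) {
    // Assumed scale factor; the right value depends on the model's units.
    modelNode.scale = SCNVector3(0.01, 0.01, 0.01)
    // Lift the model slightly so it sits above the image plane rather than inside it.
    modelNode.position = SCNVector3(0, 0.01, 0)
    anchorNode.addChildNode(modelNode)
}

In the renderer(_:nodeFor:) method above, this would take the place of the plain node.addChildNode(shapeNode) call.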
Posted
by
Post not yet marked as solved
1 Reply
415 Views
I am trying to make a simple 2D overlay for the face anchor mesh. I am unsure how to line my graphic up with what the mesh is showing. Is there a template image I should paint over and then apply in Xcode? Any tutorials or links to point me in the right direction would be much appreciated.
Posted
by
Post not yet marked as solved
0 Replies
672 Views
I am developing an application with an active ARSession. The session runs an ARWorldTrackingConfiguration in which I set planeDetection to one of the modes, or to both as [.horizontal, .vertical]. This is done via a button press and a dropdown menu: the dropdown chooses between horizontal, vertical, or everything, and the button toggles plane tracking entirely (sets planeTracking = []). When a mode change or toggle happens, I want to print a correct list of anchors to the console, because I will use the list of anchors later on. When a mode change occurs, I perform the steps below inside a switch:

switch planeTrackingMode {
case .horizontal:
    configuration.planeDetection = [.horizontal]
    for anchor in (planeAnchors ?? []) {
        if (anchor as? ARPlaneAnchor)?.alignment == .vertical {
            session?.remove(anchor: anchor)
        }
    }
case .vertical:
    configuration.planeDetection = [.vertical]
    for anchor in (planeAnchors ?? []) {
        if (anchor as? ARPlaneAnchor)?.alignment == .horizontal {
            session?.remove(anchor: anchor)
        }
    }
case .horizontalAndVertical:
    configuration.planeDetection = [.horizontal, .vertical]
default:
    configuration.planeDetection = []
    for anchor in (planeAnchors ?? []) {
        if anchor is ARPlaneAnchor {
            session?.remove(anchor: anchor)
        }
    }
}
session?.run(config, options: [])

The button toggle just performs the same steps as the default case. The issue is that after disabling or removing vertical anchors, the previous anchors are still in memory with the same UUIDs. Even if I disable via the button, turn the phone so the camera can't see the vertical plane, and then turn tracking back on, the vertical plane is still there. When I put a log statement after deletion, the list of anchors looks fine. But when I put a print in the session(_ session: ARSession, didUpdate anchors: [ARAnchor]) delegate function, I see that none of the previous vertical plane anchors were deleted, and some of them start spinning and drifting away from their original positions, then their positions turn into NaN values. I tried getting the list of existing anchors both with currentFrame.anchors and with the session.getCurrentWorldMap function; the results are the same. There is no ARSCNView or anything, just an ARSession, its camera, and a Metal view to render things. Even if I go for session?.run(config, options: [.removeExistingAnchors]), which should remove all anchors, the behaviour is the same. If ARSession is doing some cache magic, it seems to be doing it wrong. If anyone can replicate this, please let me know whether this is a bug or not. The behaviour is the same on iPhone 12 Pro Max and iPad Pro 6th Gen, with both iOS 15.* and 16.* versions.
Posted
by
Post marked as solved
1 Reply
668 Views
I have an ARSCNView with nodes in AR. The UI guides you in scaling an object, and the captured images are used to create a 3D model. When I try to take a picture using arView.snapshot(), it basically takes a screenshot: the resolution is the screen resolution, and the nodes are visible. I also noticed that the AR view displays lower-quality output than the Camera app. If I try to get a separate camera output with AVFoundation, the arView doesn't get an output, and vice versa; only one view is allowed to access the camera. Is it possible to get a full-resolution image from the AR view without the nodes visible (if I hide them, capture the image, and show them again, they flicker)? Or is it possible to have two camera streams, one to the arView and the other used solely to capture images (this one does not need to be visible on screen)?
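One avenue worth noting, sketched under the assumption that the raw camera feed is acceptable as the source: ARKit exposes the current frame's camera image as a CVPixelBuffer at the camera's native resolution via ARFrame.capturedImage, and virtual nodes are not composited into it. The helper below is illustrative, not a complete solution.

import ARKit
import CoreImage
import UIKit

func captureCameraImage(from sceneView: ARSCNView) -> UIImage? {
    guard let frame = sceneView.session.currentFrame else { return nil }
    // The raw camera image, without any SceneKit content rendered on top.
    let ciImage = CIImage(cvPixelBuffer: frame.capturedImage)
    let context = CIContext()
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    // Note: the buffer is in the sensor's native (landscape) orientation;
    // rotating it to match the interface orientation is left out of this sketch.
    return UIImage(cgImage: cgImage)
}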
Posted
by
Post not yet marked as solved
0 Replies
235 Views
How can virtual objects added in ARKit provide an alpha channel for image processing? For example, when I use the painting-transfer function, I want to process the virtual object part separately. Because each virtual object has a collision plane, the channels I obtained before all include this collision plane, which is wrong. Is there any good way to get the alpha channel of just the virtual object? Thanks!
Posted
by
Post not yet marked as solved
0 Replies
764 Views
MAXST SDK Version: 5.0
Development Environment: Unity-iOS 2019.4.29f1
Tracker/Scanner: Image Tracker
License Type (Free / Pro-One Time Fee / Pro-Subscription / Enterprise): Pro-One Time Fee
Target Device (Optional): iPhone + iPad, min iOS 11.0
Xcode version: 14.3.1

I've attached screenshots of the error below. This is very strange, and I have not encountered it before. All three packages are required to be added, but the build does not get past them. Please tell me what I am doing wrong; maybe it's the new version of Xcode. Before, I just opened a project, added the packages, and everything worked. Now when I add the packages, it throws errors. Interestingly, if I specify "Do Not Embed" for the packages, the application starts, but augmented reality does not work. When I set "Embed & Sign", the errors appear.
Posted
by
Post not yet marked as solved
0 Replies
545 Views
I don't think I can post a video. But in the screenshots, I'm slightly making a circle with the phone, and the green lines will disappear and then reappear. Those green lines are drawn via .addChildNode(). We're using RoomPlan to detect cabinets (.storage), and then we outline the cabinets with SCNNodes. We have other methods to capture cabinets that don't use RoomPlan, and the lines for those cabinets do not wink in and out. Perhaps there is a bug with visibility culling? We're pretty dang sure the nodes are not disappearing because of any .hide() call on our side. Perhaps object detection from RoomPlan running in the background is interfering?
Posted
by