Meet ARKit for spatial computing

Discuss the WWDC23 session "Meet ARKit for spatial computing".

Posts under wwdc2023-10082 tag

12 Posts
Post not yet marked as solved
3 Replies
865 Views
Hello! The TimeForCube sample was demonstrated in the WWDC23 session "Meet ARKit for spatial computing". It's a great session to start with, but there is no full source code and some important details are left out. For example, there is no code for the createFingertip() method, and so on... Where can we get the full TimeForCube Xcode project? Thanks!
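Until the full project is published, here is a minimal sketch of what a fingertip entity could look like, based on how the session describes the technique; the 5 mm radius, the unlit material, and the kinematic physics mode are assumptions, not the actual sample code:

import RealityKit

// Hypothetical stand-in for the sample's createFingertip() helper.
func createFingertip() -> ModelEntity {
    // A small sphere that will follow a tracked fingertip joint (5 mm radius is assumed).
    let radius: Float = 0.005
    let fingertip = ModelEntity(
        mesh: .generateSphere(radius: radius),
        materials: [UnlitMaterial(color: .cyan)])
    // Collision plus kinematic physics so the fingertip can push other entities.
    fingertip.components.set(CollisionComponent(shapes: [.generateSphere(radius: radius)]))
    fingertip.components.set(PhysicsBodyComponent(mode: .kinematic))
    // Keep it hidden until the hand anchor delivers a valid transform.
    fingertip.isEnabled = false
    return fingertip
}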
Posted. Last updated.
Post not yet marked as solved
2 Replies
1.3k Views
The new ARKit 3D hand tracking looks amazing, but most of the demos seem to be done on Vision Pro, which has far more sensors than other iOS devices. Will ARKit 3D hand tracking also be available on iOS devices with LiDAR? If not, are there any alternatives developers can use to achieve similar 3D hand tracking on mobile devices and keep the interaction experience consistent across devices? (I know Vision only detects 2D hand pose.) Thanks!
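Not an official answer, but on LiDAR devices one workaround is to combine Vision's 2D hand pose with ARKit's scene depth to approximate 3D fingertip positions. A rough sketch under that assumption (the orientation handling and depth-map sampling are simplified; a real implementation would also unproject into world space):

import ARKit
import Vision

func estimateIndexTipDepth(in frame: ARFrame) {
    guard let depthMap = frame.sceneDepth?.depthMap else { return }

    // 2D hand pose from Vision on the camera image.
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 1
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, orientation: .right)
    try? handler.perform([request])
    guard let hand = request.results?.first,
          let tip = try? hand.recognizedPoint(.indexTip),
          tip.confidence > 0.5 else { return }

    // Sample the LiDAR depth map at the fingertip's normalized image location.
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }
    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    let x = Int(tip.location.x * CGFloat(width - 1))
    let y = Int((1 - tip.location.y) * CGFloat(height - 1))
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return }
    let depthMeters = base.advanced(by: y * rowBytes + x * MemoryLayout<Float32>.size)
        .assumingMemoryBound(to: Float32.self).pointee
    print("Index tip is roughly \(depthMeters) m from the camera")
}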
Posted by Dave_evaD. Last updated.
Post not yet marked as solved
2 Replies
1.7k Views
Hi! With the following code I am able to receive the location of taps in my RealityView. How do I find out which of my entities was tapped (in order to execute an animation or movement)? I was not able to find anything close to ARView's entity(at:), unfortunately. Am I missing something, or is this not possible in the current beta of visionOS?

struct ImmersiveView: View {
    var tap: some Gesture {
        SpatialTapGesture()
            .onEnded { event in
                print("Tapped at \(event.location)")
            }
    }

    var body: some View {
        RealityView { content in
            let anchor = AnchorEntity(.plane(.horizontal, classification: .table, minimumBounds: [0.3, 0.3]))
            // adding some entities here...
            content.add(anchor)
        }
        .gesture(tap.targetedToAnyEntity())
    }
}
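For what it's worth, when the gesture is targeted to an entity, the gesture value itself carries the hit entity, so an entity(at:) equivalent isn't needed. A sketch along those lines, assuming the entities added to the anchor are ModelEntity instances (they need collision shapes and an InputTargetComponent to be hit-testable):

import SwiftUI
import RealityKit

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let anchor = AnchorEntity(.plane(.horizontal, classification: .table, minimumBounds: [0.3, 0.3]))
            // ... add entities to the anchor here ...
            content.add(anchor)

            // Every tappable entity needs collision shapes and an input target.
            for case let model as ModelEntity in anchor.children {
                model.generateCollisionShapes(recursive: true)
                model.components.set(InputTargetComponent())
            }
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // value.entity is the specific entity that was hit by the tap.
                    print("Tapped \(value.entity.name)")
                }
        )
    }
}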
Posted by RK123. Last updated.
Post not yet marked as solved
1 Reply
673 Views
First, I add the provider to the session and run it:

do {
    if WorldTrackingProvider.isSupported {
        try await session.run([worldTracking])
        print("World Tracking Provider Started.")
    } else {
        print("World Tracking not supported >.>")
    }
} catch {
    print("ARKitSession error:", error)
}

Then I try to add a world anchor:

var task: Task<Void, Never>?

func trackAnchor(_ anchor: WorldAnchor) {
    task = Task {
        do {
            try await self.worldTracking.addAnchor(anchor)
            print("Added anchor to tracking provider!")
        } catch {
            print("Error: \(error)")
        }
    }
}

The awaited call never finishes. A breakpoint is not hit and errors are not thrown. As a result, when the app is quit and restarted, the system does not recover the tracked world anchor. Any ideas?
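No confirmed fix, but for comparison, here is a minimal sketch of the flow that is expected to work, assuming a single WorldTrackingProvider instance that is already running before addAnchor is called; observing anchorUpdates is also a useful way to verify that the anchor was actually added and is recovered on relaunch:

import ARKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func startTracking() async {
    do {
        try await session.run([worldTracking])
    } catch {
        print("ARKitSession error:", error)
        return
    }
    // Each added or recovered world anchor shows up here.
    for await update in worldTracking.anchorUpdates {
        print("World anchor \(update.anchor.id): \(update.event)")
    }
}

func addAnchor(at originFromAnchorTransform: simd_float4x4) async {
    let anchor = WorldAnchor(originFromAnchorTransform: originFromAnchorTransform)
    do {
        try await worldTracking.addAnchor(anchor)
        print("Added anchor \(anchor.id)")
    } catch {
        print("addAnchor failed:", error)
    }
}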
Posted by J0hn. Last updated.
Post not yet marked as solved
0 Replies
635 Views
When running RoomPlan in a debug session and pausing in the middle of the job to look at the data structures being carried, it was noted that there were no polygon edges or polygon corners in the data set. When are they created? One would think they would be created when the polygon was, so they could be joined up when the sibling surfaces came into existence.

roomDataForProcessing  RoomPlan.CapturedRoomData
  keyframes  [Foundation.UUID : RSKeyframe]  24 key/value pairs
  coreAsset  RSAsset  0x0000000281ebbd20
    baseNSObject@0  NSObject
    _isCaptured  bool  0x0000000000000000
    _floorPlan  RSFloorPlan?  0x00000002820e69a0
      baseNSObject@0  NSObject
      groupId  unsigned int  0x0000000000000000
      ceilingHeight  float  0x000000003fecf048
      floorHeight  float  0x00000000bf2557a1
      rotationAngleAlongZ  float  0x0000000000000000
      walls  __NSArrayI *  5 elements  0x0000000280847f00
        [0]  RS3DSurface?  0x0000000102819a20
          baseNSObject@0  NSObject
          isa  Class  RS3DSurface  0x010000025bdbf4b9
          type  unsigned char  '\0'
          individualUpdate  char  '\x01'
          merged  bool  false
          removed  bool  false
          confidence  float  0.98292613
          groupId  unsigned int  0
          wallStatus  int  1
          parentWallStatus  int  -1
          offset  float  0
          depth  float  0
          depthWeight  float  1
          identifier  __NSConcreteUUID *  0x28198e2a0  0x000000028198e2a0
          parentIdentifier  id  0x0  0x0000000000000000
          room_id  unsigned long long  0
          room_class_idx  unsigned long long  0
          multiroom_all_idx  unsigned long long  0
          storyLevel  long long  0
        [1]  RS3DSurface?  0x0000000102819b70
        [2]  RS3DSurface?  0x000000010282a620
        [3]  RS3DSurface?  0x000000010282a770
        [4]  RS3DSurface?  0x000000010281c250
      doors  __NSArrayM *  1 element  0x0000000281e4d200
      windows  __NSArrayM *  0 elements  0x0000000281e4cf60
      openings  __NSArrayM *  1 element  0x0000000281e4d140
      opendoors  __NSArrayM *  0 elements  0x0000000281e4f8d0
      objects  _TtCs19__EmptyArrayStorage *  0x207680b60  0x0000000207680b60
      curvedWalls  __NSArray0 *  0 elements  0x000000020be723b8
      roomTypes  __NSArray0 *  0 elements  0x000000020be723b8
      floors  __NSSingleObjectArrayI *  1 element  0x00000002850c6e00
      curvedWindows  __NSArrayM *  0 elements  0x0000000281e4c750
      curvedDoors  __NSArrayM *  0 elements  0x0000000281e4d260
      wallLists  id  0x0  0x0000000000000000
      storyLevel  long long  0x0000000000000000
      _mirrorPoints  __NSArray0 *  0 elements  0x000000020be723b8
      _version  long long  0x0000000000000002
    _rawFloorPlan  RSFloorPlan?  0x00000002820e6640
error  Error?  worldTrackingFailure
self  RoomPlanExampleApp.RoomCaptureViewController  0x0000000106811c00
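Not a definitive answer, but it may be worth comparing this intermediate CapturedRoomData against the finished CapturedRoom that RoomBuilder produces, since the polygon data appears to be filled in during that final processing step rather than during capture. A sketch of that check (whether polygonCorners/polygonEdges are populated at this point is exactly the open question):

import RoomPlan

func inspectPolygons(in roomData: CapturedRoomData) async {
    let builder = RoomBuilder(options: [.beautifyObjects])
    do {
        let room = try await builder.capturedRoom(from: roomData)
        for (index, wall) in room.walls.enumerated() {
            // Per-wall polygon data on the processed room.
            print("Wall \(index): \(wall.polygonCorners.count) corners, \(wall.polygonEdges.count) edges")
        }
    } catch {
        print("RoomBuilder failed:", error)
    }
}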
Posted by mfstanton. Last updated.
Post not yet marked as solved
0 Replies
536 Views
With our SyncReality tool we enable placing 3D assets automatically onto the digital double of the end user's home. We, as the developer, won't need access to the point cloud, or even to the end user's RoomPlan data, at least not in a concrete sense, only in an abstract, rule-based sense: place UI no. 1 on the biggest table, place UI no. 2 onto the largest free wall, etc. Would that be possible on Vision Pro?
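This kind of rule-based placement looks feasible on Vision Pro without touching raw scan data, using ARKit's plane detection and its semantic classifications. A rough sketch of "find the biggest table" under that assumption (measuring "biggest" by the plane extent's area is my own simplification):

import ARKit

func trackLargestTable() async throws {
    let session = ARKitSession()
    let planeDetection = PlaneDetectionProvider(alignments: [.horizontal])
    try await session.run([planeDetection])

    var largestTable: PlaneAnchor?
    for await update in planeDetection.anchorUpdates {
        guard update.anchor.classification == .table else { continue }
        let extent = update.anchor.geometry.extent
        let area = extent.width * extent.height
        let bestArea = largestTable.map { $0.geometry.extent.width * $0.geometry.extent.height } ?? 0
        if area > bestArea {
            largestTable = update.anchor
            // Place "UI no. 1" relative to largestTable.originFromAnchorTransform here.
        }
    }
}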
Posted. Last updated.
Post not yet marked as solved
0 Replies
584 Views
ARKit is not correctly building polygonEdges items with correct Edge enumeration values. It appears to be finding an edge that should be one of the set Top/Bottom/Left/Right but leaving garbage in the field that should identify where the edge belongs, resulting in some confusion later on when processing. I attached the data for the edges of the polygons as displayed in the debugger panel. When the debugger finds an edge type that it does not recognize, it dumps the raw value in the field out to the text; if it is valid, the enumeration label for the field is dumped. That is why I think garbage is being passed along: the edge classification processing bails out early and the edge's attributes are left without a known classification.

This is from a posting about a bug in the Xcode debugger pane: when running the debugger for a session, the data for the polygonEdges field is displayed with unknown enumeration values shown for the label. With "raw data" selected one would presumably see the hex representing the value that fits the labeled enumerated field of the structure, but that is not what shows. Attached is a somewhat lengthy sample of the data for the field. The fact that the fields are invalid will be taken up with those responsible for the ARKit implementation in another venue. It appears that formatting preferences really don't mean anything and are ignored.

([RoomPlan.CapturedRoom.Surface.Edge]) polygonEdges = 1456 values {
  [0] = (0xa9)  [1] = (0xc5)  [2] = (0xa2)  [3] = (0xe1)  [4] = right  [5] = top  [6] = top  [7] = top  [8] = (0xb2)  [9] = (0x10)  [10] = top  [11] = top
  [12] = top  [13] = top  [14] = top  [15] = top  [16] = (0x40)  [17] = (0x1a)  [18] = (0x58)  [19] = (0x4)  [20] = bottom  [21] = top  [22] = top  [23] = top
  [24] = (0xa9)  [25] = (0xc5)  [26] = (0xa2)  [27] = (0xe1)  [28] = right  [29] = top  [30] = top  [31] = top  [32] = (0xcb)  [33] = (0x19)  [34] = top  [35] = top
  [36] = top  [37] = top  [38] = top  [39] = top  [40] = (0x40)  [41] = (0x1a)  [42] = (0x58)  [43] = (0x4)  [44] = bottom  [45] = top  [46] = top  [47] = top
  [48] = (0xa9)  [49] = (0xc5)  [50] = (0xa2)  [51] = (0xe1)  [52] = right  [53] = top  [54] = top  [55] = top  [56] = (0x2f)  [57] = (0x1c)  [58] = top  [59] = top
  [60] = top  [61] = top  [62] = top  [63] = top  [64] = (0x40)  [65] = (0x1a)  [66] = (0x58)  [67] = (0x4)  [68] = bottom  [69] = top  [70] = top  [71] = top
  [72] = (0xa9)  [73] = (0xc5)  [74] = (0xa2)  [75] = (0xe1)  [76] = right  [77] = top  [78] = top  [79] = top  [80] = (0xd0)  [81] = (0xc)  [82] = top  [83] = top
  [84] = top  [85] = top  [86] = top  [87] = top  [88] = (0x40)  [89] = (0x1a)  [90] = (0x58)  [91] = (0x4)  [92] = bottom  [93] = top  [94] = top  [95] = top
  [96] = (0xa9)  [97] = (0xc5)  [98] = (0xa2)  [99] = (0xe1)  [100] = right  [101] = top  [102] = top  [103] = top  [104] = (0x92)  [105] = (0xa)  [106] = top  [107] = top
  [108] = top  [109] = top  [110] = top  [111] = top  [112] = (0x40)  [113] = (0x1a)  [114] = (0x58)  [115] = (0x4)  [116] = bottom  [117] = top  [118] = top  [119] = top
  [120] = (0xa9)  [121] = (0xc5)  [122] = (0xa2)  [123] = (0xe1)  [124] = right  [125] = top  [126] = top  [127] = top  [128] = (0x8)  [129] = (0x1d)  [130] = top  [131] = top
  [132] = top  [133] = top  [134] = top  [135] = top  [136] = (0x40)  [137] = (0x1a)  [138] = (0x58)  [139] = (0x4)  [140] = bottom  [141] = top  [142] = top  [143] = top
  [144] = (0xa9)  [145] = (0xc5)  [146] = (0xa2)  [147] = (0xe1)  [148] = right
}
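Until this is resolved on the framework side, one defensive way to consume the data is to validate each edge before trusting it; a minimal sketch, assuming Top/Bottom/Left/Right really are the only documented cases as described above:

import RoomPlan

func validEdges(_ edges: [CapturedRoom.Surface.Edge]) -> [CapturedRoom.Surface.Edge] {
    edges.enumerated().compactMap { index, edge in
        switch edge {
        case .top, .bottom, .left, .right:
            return edge
        default:
            // Garbage or unrecognized value: log it and drop it.
            print("Unexpected edge value at index \(index)")
            return nil
        }
    }
}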
Posted by mfstanton. Last updated.
Post not yet marked as solved
0 Replies
1.2k Views
Hi there! From the documentation and sample code (https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera), my understanding is that AVFoundation provides more manual control as well as 2x higher resolution depth images than ARKit. However, reading the https://developer.apple.com/augmented-reality/arkit/ website as well as the WWDC video (https://developer.apple.com/videos/play/wwdc2022/10126/), it looks like ARKit 6 now also supports 4K video capture (RGB mode) while scene understanding is running under the hood. I was wondering if anyone knows whether the resolution of depth images is still a limitation of ARKit versus AVFoundation. I'm trying to build a capture app that relies on high-quality depth/LiDAR information. What would you suggest, and are there other considerations I should keep in mind? Thank you!
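For reference, here is roughly how the two capture paths are configured. As far as I can tell, ARKit's sceneDepth stays at its lower native resolution even when 4K color is enabled, whereas AVFoundation's LiDAR depth camera can deliver higher-resolution depth with manual format control, but treat that comparison as something to verify on-device rather than a confirmed spec:

import ARKit
import AVFoundation

// ARKit path: 4K color video plus scene depth (the depth resolution is not configurable here).
func makeARConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    if let format = ARWorldTrackingConfiguration.recommendedVideoFormatFor4KResolution {
        configuration.videoFormat = format
    }
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        configuration.frameSemantics.insert(.sceneDepth)
    }
    return configuration
}

// AVFoundation path: the LiDAR depth camera exposes formats with higher-resolution depth output.
func makeLiDARDevice() -> AVCaptureDevice? {
    AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back)
}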
Posted. Last updated.
Post not yet marked as solved
0 Replies
651 Views
Hi, can you support IFC import and conversion to the USDZ file format, and also export USDZ to IFC through an open source library like three.js, just walls, openings, windows, and doors? Also, a USDZ file comparator to load two USDZ files and show their differences. Best regards, Ivo
Posted by Isain. Last updated.