Using CoreML in visionOS with Multipeer Connectivity

I've recently been working on a visionOS app that uses CoreML to identify specific body parts and display a window with information about the identified body part. Since access to the Vision Pro's cameras is blocked, I'm using an iPhone to perform the image classification and then sending the label to the headset over Multipeer Connectivity. I'd like to display a volume once the user selects a body part. Could the iPhone return enough spatial information for me to fully take advantage of the Vision Pro's mixed reality capabilities?
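
For context, the sending side on the iPhone looks roughly like the sketch below. It assumes an MCSession that is already connected and managed elsewhere; the class and names are placeholders, not my final implementation.

```swift
import MultipeerConnectivity

// Minimal sketch: broadcast a classification label produced by a
// Vision/CoreML request to every connected peer (e.g. the Vision Pro).
// Assumes `session` is an already-connected MCSession managed elsewhere.
final class LabelSender {
    let session: MCSession

    init(session: MCSession) {
        self.session = session
    }

    func send(label: String) {
        guard !session.connectedPeers.isEmpty,
              let data = label.data(using: .utf8) else { return }
        do {
            // .reliable preserves delivery and ordering, which is fine for
            // occasional classification results.
            try session.send(data, toPeers: session.connectedPeers, with: .reliable)
        } catch {
            print("Failed to send label: \(error)")
        }
    }
}
```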

Updates on Integrating Motion Tracking into a visionOS App

Lately I've been looking for ways to perform motion tracking inside my visionOS app. The best approach I've found so far is to add an ARView to my current scanner iOS app so I can use the device's integrated LiDAR to measure the distance from the camera to the detected device, generating coordinates as well as a precise distance metric. I also integrated a CoreML object detection model and use its bounding box data to pick the coordinates of the point closest to the center of the bounding box. The resulting coordinates are then sent to the Vision Pro headset via Multipeer Connectivity.
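
To sketch the idea on the iPhone side: the center of the Vision bounding box is projected into the LiDAR-backed AR scene with a raycast, and the resulting world position is serialized and sent to the headset. The class and property names are placeholders, and using an estimated-plane raycast (rather than reading scene depth directly) is an assumption for illustration.

```swift
import ARKit
import RealityKit
import Vision
import MultipeerConnectivity

// Sketch: turn a CoreML/Vision bounding box into a world-space position
// using the LiDAR-backed AR scene, then send that position to the headset.
// `arView` and `mpcSession` are assumed to be configured elsewhere.
final class DevicePositionReporter {
    let arView: ARView
    let mpcSession: MCSession

    init(arView: ARView, mpcSession: MCSession) {
        self.arView = arView
        self.mpcSession = mpcSession
    }

    func report(observation: VNRecognizedObjectObservation) {
        // Vision bounding boxes are normalized with the origin at the
        // bottom-left; convert the box center into view coordinates.
        let box = observation.boundingBox
        let center = CGPoint(
            x: box.midX * arView.bounds.width,
            y: (1 - box.midY) * arView.bounds.height
        )

        // Raycast from the box center onto scene geometry; on a LiDAR
        // device this hits the reconstructed / estimated surface.
        guard let result = arView.raycast(from: center,
                                          allowing: .estimatedPlane,
                                          alignment: .any).first else { return }

        // Extract the hit position in the iPhone's world coordinate space.
        let t = result.worldTransform.columns.3
        let position: [Float] = [t.x, t.y, t.z]

        // Serialize and send to all connected peers (the Vision Pro).
        guard !mpcSession.connectedPeers.isEmpty,
              let data = try? JSONEncoder().encode(position) else { return }
        try? mpcSession.send(data, toPeers: mpcSession.connectedPeers, with: .reliable)
    }
}
```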

visionOS App Adjustments

To be able to place entities at the detected device's position, I created a function that places an entity with a world anchor wherever the user performs a tap gesture, so the user can drop a reference point at the scanning device's position. I may replace this with an image anchor later. The app then uses the placed entity's coordinates as a reference to place a new entity at the received position, as sketched below.
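
A rough sketch of what I have in mind on the visionOS side follows. The entity names, the tappable surface, and the interpretation of the received coordinates as an offset from the reference anchor are assumptions for illustration; this presumes the iPhone's coordinates are expressed relative to the scanning device's own position.

```swift
import SwiftUI
import RealityKit

// Sketch: a tap places a world-anchored reference entity at the scanning
// iPhone's position; coordinates later received over Multipeer Connectivity
// are placed relative to that reference.
struct ScannerReferenceView: View {
    @State private var referenceAnchor: AnchorEntity?

    var body: some View {
        RealityView { content in
            // A large tappable surface so the spatial tap gesture has
            // something to hit; in the real app this could be any entity
            // with collision and input-target components.
            let tapTarget = ModelEntity(
                mesh: .generatePlane(width: 2, depth: 2),
                materials: [SimpleMaterial(color: .clear, isMetallic: false)]
            )
            tapTarget.generateCollisionShapes(recursive: false)
            tapTarget.components.set(InputTargetComponent())
            tapTarget.position = [0, 0, -1]
            content.add(tapTarget)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Convert the tap into RealityKit scene coordinates and
                    // drop a world anchor there as the reference point.
                    let position = value.convert(value.location3D,
                                                 from: .local,
                                                 to: .scene)
                    let anchor = AnchorEntity(world: position)
                    anchor.addChild(makeMarker())
                    value.entity.scene?.addAnchor(anchor)
                    referenceAnchor = anchor
                }
        )
    }

    /// Called when a position (in meters, relative to the reference point)
    /// arrives from the iPhone over Multipeer Connectivity.
    func place(receivedOffset: SIMD3<Float>) {
        guard let referenceAnchor else { return }
        let marker = makeMarker()
        marker.position = receivedOffset
        referenceAnchor.addChild(marker)
    }

    private func makeMarker() -> ModelEntity {
        ModelEntity(mesh: .generateSphere(radius: 0.02),
                    materials: [SimpleMaterial(color: .cyan, isMetallic: false)])
    }
}
```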
