Vision Pro CoreML inference 10x slower than M1 Mac/seems to run on CPU

I have a CoreML model that I run in my app, Spatial Media Toolkit, which lets you convert 2D photos to Spatial photos.

Running the model on my 13" M1 Mac gets 70ms inference. Running the exact same code on my Vision Pro takes 700ms. I'm working on adding video support, but Vision Pro inference is feeling impossible at 700ms per frame: that's roughly 20x slower than realtime for 30fps (1 second of video takes ~21 seconds to process).

There's an MLModelConfiguration you can provide when loading the model, and when I force CPU-only I get exactly the same performance as the default, which suggests the default path is already falling back to CPU.
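For reference, this is roughly how the compute units get forced (a minimal sketch; `MyModel` is a placeholder for whatever class Xcode generated for the compiled model):

```swift
import CoreML

// Pick which hardware CoreML is allowed to use. Comparing timings
// across these settings is how I concluded it's stuck on CPU:
// .all (default), .cpuAndNeuralEngine, .cpuAndGPU, .cpuOnly
let config = MLModelConfiguration()
config.computeUnits = .cpuOnly  // same 700ms as .all on Vision Pro

// Load the model with the restricted configuration.
let model = try MyModel(configuration: config)
```

If `.all` and `.cpuOnly` produce identical latency, the ANE/GPU paths are presumably never being taken.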

Either it's only running on CPU, the Neural Engine is throttled, or maybe the GPU isn't allowed to help out. Disappointing, but it also feels like a software issue. Would be curious if anyone else has hit this or has any workarounds.