Is the Apple Neural Scene Analyzer (ANSA) backbone available to devs?

Hello,

My understanding of the paper below is that iOS ships with a MobileNetV3-based backbone model, which feeds different task-specific heads across the OS.

I understand that this backbone is accessible for various uses through the Vision framework, but I was wondering whether it is also available for on-device fine-tuning for other purposes. As an example, if I want a model that detects some unique object in a photo, can I use the built-in backbone, or do I have to bundle my own in the app?

Thanks very much for any advice and apologies if I didn't understand something correctly.

Source: https://machinelearning.apple.com/research/on-device-scene-analysis

Accepted Reply

Create ML Components has an image feature extractor that uses Vision; see https://developer.apple.com/documentation/createmlcomponents/imagefeatureprint
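
For what it's worth, here is a minimal sketch of how that could look: ImageFeaturePrint acts as the frozen, OS-provided backbone, and only the small classifier appended after it is trained on your own images. The function name, the "widget"/"other" labels, and the image arrays are placeholders, and the API usage follows the Create ML Components API as introduced at WWDC22, so treat it as a starting point rather than a verified implementation.

```swift
import CoreImage
import CreateMLComponents

// Sketch: train a small classifier head on top of the built-in feature
// extractor. Labels, names, and data here are hypothetical placeholders.
func classifyUniqueObject(positives: [CIImage],
                          negatives: [CIImage],
                          query: CIImage) async throws -> String {
    // ImageFeaturePrint is the frozen, OS-provided extractor; appending a
    // classifier gives a trainable pipeline where only the head is fitted.
    let task = ImageFeaturePrint()
        .appending(FullyConnectedNetworkClassifier<Float, String>())

    // AnnotatedFeature pairs each training image with its class label.
    let examples =
        positives.map { AnnotatedFeature(feature: $0, annotation: "widget") } +
        negatives.map { AnnotatedFeature(feature: $0, annotation: "other") }

    // Fitting trains only the fully connected head; the extractor's
    // weights are not updated.
    let model = try await task.fitted(to: examples)

    // Classify a new image and return the most probable label.
    let distribution = try await model.applied(to: query)
    return distribution.mostLikelyLabel ?? "unknown"
}
```

Note that this trains only the classifier head on device; the feature extractor itself stays fixed, so you get transfer learning on top of the built-in backbone rather than fine-tuning the backbone's own weights, and you don't need to ship your own backbone in the app.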
