SFSpeechRecognizer: Failed to access assets

Hello! We have an app that uses the Speech framework (SFSpeechRecognizer), specifically local on-device speech recognition of audio files in a user-selected language.
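For context, here is a simplified sketch of how we invoke on-device recognition (names such as `recognizeLocally`, `fileURL`, and `localeID` are illustrative, not our actual code):

```swift
import Speech

// Simplified sketch of our recognition call. `fileURL` and `localeID` stand
// in for the user-selected audio file and language; authorization via
// SFSpeechRecognizer.requestAuthorization is assumed to have been granted.
func recognizeLocally(fileURL: URL, localeID: String) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: localeID)),
          recognizer.supportsOnDeviceRecognition else {
        print("On-device recognition unavailable for \(localeID)")
        return
    }

    let request = SFSpeechURLRecognitionRequest(url: fileURL)
    request.requiresOnDeviceRecognition = true // force local processing only

    // In production we keep a reference to the task so we can cancel it.
    _ = recognizer.recognitionTask(with: request) { result, error in
        if let error {
            print("Recognition failed: \(error.localizedDescription)")
        } else if let result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}
```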

Up until recently it worked as expected. However, after updating one of our test devices to iOS 17.4.1, we found that local recognition on it stopped working completely.

The error we are getting has code 102, and its localised description reads: "Failed to access assets".
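The error reaches the recognition task's result handler as an NSError. A minimal sketch of the logging that surfaces it (note that the `kAFAssistantErrorDomain` domain name shown in the comment is what we observe at runtime, not a documented constant):

```swift
import Speech

// Minimal error logging from the recognition result handler.
func logRecognitionError(_ error: Error?) {
    guard let nsError = error as NSError? else { return }
    // On the affected device this prints:
    //   domain: kAFAssistantErrorDomain, code: 102, Failed to access assets
    print("domain: \(nsError.domain), code: \(nsError.code), \(nsError.localizedDescription)")
}
```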

That sounds just like a rare though known issue in previous iOS versions. The workaround was inconvenient for our users, but at least it worked: they had to go to the system Settings and toggle the dictation setting in the Keyboard section.

Right now no tweaks of this sort appear to fix the situation. We even tried resetting the device settings (not a factory reset, though). The error persists.

It appears on one of our devices 100% of the time, halting the local recognition process. On other devices it sometimes appears for certain languages while other languages keep working.
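A diagnostic along these lines can help map which languages are affected on a given device (a sketch; note that `supportsOnDeviceRecognition` reflects whether the locale supports local recognition at all, so it will not necessarily catch the asset failure itself):

```swift
import Speech

// Diagnostic: list every supported locale and whether the recognizer
// claims on-device support and availability for it on this device.
func dumpOnDeviceSupport() {
    let locales = SFSpeechRecognizer.supportedLocales()
        .sorted { $0.identifier < $1.identifier }
    for locale in locales {
        let recognizer = SFSpeechRecognizer(locale: locale)
        print("\(locale.identifier): onDevice=\(recognizer?.supportsOnDeviceRecognition ?? false), available=\(recognizer?.isAvailable ?? false)")
    }
}
```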

As this is a UX-breaking bug for our app, today I decided to check the Console app logs at the moment of the recognition attempt.

There are lots of errors with code 1101, which from our research appear to be general notifications about local recognition setup problems.

Filtering the 1101 errors out of the log, some interesting entries remain that are (almost) never mentioned on any searchable webpage on the Internet. I assume these are the private API calls that the Speech framework executes under the hood:

```
default		localspeechrecognition			-[UAFAssetSet assetNamed:]_block_invoke 9067C4F1-0B29-4A57-85DD-F8740DF7C344: No assets in asset set com.apple.siri.understanding
default		localspeechrecognition			-[UAFAssetSet assetNamed:] 9067C4F1-0B29-4A57-85DD-F8740DF7C344: Returning com.apple.siri.asr.assistant from source none
error		localspeechrecognition			-[SFEntitledAssetManager _assetWithAssetConfig:regionId:] No asset found with name: com.apple.siri.asr.assistant, asset set: com.apple.siri.understanding, usage: <private>
error		localspeechrecognition			+[LSRConnection modelRootWithLanguage:clientID:modelOverrideURL:returningAssetType:error:] Fetch asset error (null)
error		localspeechrecognition			-[LSRConnection prepareRecognizerWithLanguage:recognitionOverrides:modelOverrideURL:anyConfiguration:task:clientID:error:] modelRoot is nil (null)
default		OurApp							[0x113e96d40] invalidated because the current process cancelled the connection by calling xpc_connection_cancel()
```

It looks like some language-model-related problems appeared after the device was updated to 17.4.1.

The languages under Settings -> General -> Keyboard -> Dictation Languages appear to be configured correctly and the dictation toggle is on; we tried tweaking all of these settings, rebooting the device, and resetting the device settings. However, the log lines still tell us that something is wrong with the private resources of the Speech framework.
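For completeness, recognizer availability can also be observed through SFSpeechRecognizerDelegate, which in principle should fire if the system finishes (or fails) setting up the language assets after one of these tweaks. A minimal sketch (`RecognizerObserver` is an illustrative name):

```swift
import Speech

// Observe availability changes for one locale's recognizer.
final class RecognizerObserver: NSObject, SFSpeechRecognizerDelegate {
    private let recognizer: SFSpeechRecognizer?

    init(localeID: String) {
        recognizer = SFSpeechRecognizer(locale: Locale(identifier: localeID))
        super.init()
        recognizer?.delegate = self
        print("initially available: \(recognizer?.isAvailable ?? false)")
    }

    // Called when the recognizer's availability changes, e.g. after the
    // system downloads or drops the language assets.
    func speechRecognizer(_ speechRecognizer: SFSpeechRecognizer,
                          availabilityDidChange available: Bool) {
        print("availability changed: \(available)")
    }
}
```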

We are very concerned, as speech recognition is the core of our application's logic. We don't understand the possible scale of the impact of this faulty behaviour (rare occurrences / some users / all users?) or how we can fix it to give our users the expected behaviour.
