As this is a feature I hadn't yet implemented in an app myself, but one I'm planning for a future app, I did some investigation...
In no particular order, here are some points of interest from my investigation, and from helping them debug their app:
- You must register the voice command file (using VoiceCommandService.InstallCommandSetsFromFileAsync) - It's not enough just to include the VCD/XML file in the project.
- You must include the ID_CAP_SPEECH_RECOGNITION, ID_CAP_MICROPHONE, and ID_CAP_NETWORKING capabilities. - Otherwise you'll get an AccessException when you try to register the file.
- CommandSets are culturally sensitive. So, if you specify the language of the commandset as "en-US", it won't work on a phone with speech set to "English (United Kingdom)". You can, however, include multiple commandsets in a file, and you can use a culturally neutral language.
- You can specify wildcards in the ListenFor section by using "{*}" and these will match anything. Unfortunately, the recogniser won't tell you what the user said though. You just get an ellipsis in the response. (Some spelunking through some old mailing lists shows that this was escalated to Microsoft, by the developers behind some big apps, when the SDK was still very new, so hopefully we'll be able to retrieve (be told) everything that was spoken in a future version.)
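To make the points above concrete, here's a minimal sketch of what a VCD file can look like. The prefix "MyApp", the command name "ShowNotes", and the phrases are made up for illustration; note the two CommandSet elements with different xml:lang values (so the file works whether the phone's speech language is US or UK English) and the "{*}" wildcard in ListenFor:

```xml
<?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.0">
  <!-- One CommandSet per speech language you want to support -->
  <CommandSet xml:lang="en-US">
    <CommandPrefix>MyApp</CommandPrefix>
    <Example>show my notes</Example>
    <Command Name="ShowNotes">
      <Example>show my notes</Example>
      <!-- "{*}" matches anything, but you won't be told what was said -->
      <ListenFor>show {*} notes</ListenFor>
      <Feedback>Showing your notes</Feedback>
      <Navigate Target="MainPage.xaml" />
    </Command>
  </CommandSet>
  <CommandSet xml:lang="en-GB">
    <CommandPrefix>MyApp</CommandPrefix>
    <Example>show my notes</Example>
    <Command Name="ShowNotes">
      <Example>show my notes</Example>
      <ListenFor>show {*} notes</ListenFor>
      <Feedback>Showing your notes</Feedback>
      <Navigate Target="MainPage.xaml" />
    </Command>
  </CommandSet>
</VoiceCommands>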
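And registering the file looks something like this. The file name VoiceCommandDefinition.xml is an assumption (use whatever you named your VCD file in the project); the API itself is the one mentioned above:

```csharp
using System;
using Windows.Phone.Speech.VoiceCommands;

// Typically done once, e.g. during app launch. Including the file
// in the project is not enough - it must be installed explicitly.
// Requires ID_CAP_SPEECH_RECOGNITION, ID_CAP_MICROPHONE and
// ID_CAP_NETWORKING in WMAppManifest.xml, or this call will throw.
await VoiceCommandService.InstallCommandSetsFromFileAsync(
    new Uri("ms-appx:///VoiceCommandDefinition.xml", UriKind.Absolute));
```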
I hope this helps someone else save some time in future. :)