The new Voice Control feature is really great: you can talk to your iOS device and have it perform actions by voice. (Apple video)
My only complaint is that finding information about it is difficult. I don't see any WWDC session covering it, and I can't find any other documentation.
It seems to be powered essentially by accessibility labels. Since each accessibility element can only have one `accessibilityLabel`, it appears (as far as I can tell) to be limited to that.
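To illustrate what I mean, here is a minimal sketch (my own example; the button and label names are hypothetical). The accessibility label is what Voice Control seems to use as the element's spoken name:

```swift
import UIKit

// Hypothetical share button. Its visual title is a glyph,
// so it has no useful spoken name on its own.
let shareButton = UIButton(type: .system)
shareButton.setTitle("↑", for: .normal)

// Voice Control appears to pick up the accessibility label as the
// element's name, so saying "Tap Share" activates this button.
shareButton.accessibilityLabel = "Share"
```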
Is that correct? Is there a way to provide users with more custom actions? For example, the custom accessibility actions API lets you add extra actions that VoiceOver users can reach by swiping up/down on an element, but those actions don't seem to be available through Voice Control at all; only the accessibility label is.
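Here is the kind of thing I mean (again a hypothetical example, with made-up action names). With VoiceOver, users swipe up/down on the element to cycle through these actions and double-tap to perform one, but I can't find any way for a Voice Control user to invoke them:

```swift
import UIKit

// Hypothetical table cell exposing extra actions to VoiceOver.
let cell = UITableViewCell()
cell.accessibilityCustomActions = [
    UIAccessibilityCustomAction(name: "Archive") { _ in
        // archive logic would go here (placeholder)
        return true
    },
    UIAccessibilityCustomAction(name: "Mark as Read") { _ in
        // mark-as-read logic would go here (placeholder)
        return true
    }
]
```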
It's a really great feature, but with custom actions and VoiceOver rotor actions I can normally make actions more easily reachable for users, and I can't figure out how to do the same for someone using Voice Control.