This covers both the legacy SFSpeechRecognizer API and the new iOS 26+ SpeechAnalyzer actor-based approach for converting speech to text. You'll find working examples for live microphone transcription with AVAudioEngine, file-based recognition, and the complete authorization flow for both speech and microphone permissions. The SpeechAnalyzer section is especially useful if you're targeting modern iOS: it shows the async/await pattern with AsyncSequence results and the asset download system for on-device recognition. A side-by-side comparison table makes it clear when to use which API, and the skill is a good reference for avoiding common AVAudioEngine tap setup mistakes and for understanding the difference between server-based and on-device recognition modes.
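For orientation, here is a minimal sketch of the legacy SFSpeechRecognizer file-based flow the skill covers. The locale choice and the `transcribeFile` helper are illustrative assumptions, not the skill's exact code; the skill itself also covers the live AVAudioEngine tap and the full permission flow.

```swift
import Speech
import Foundation

// Hypothetical helper sketching file-based recognition with the legacy API.
// Assumes NSSpeechRecognitionUsageDescription is present in Info.plist.
func transcribeFile(at url: URL) {
    // Step 1: request speech-recognition authorization before any recognition.
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
              recognizer.isAvailable else {
            print("Speech recognition unavailable or not authorized")
            return
        }

        // Step 2: build a request for a prerecorded audio file.
        let request = SFSpeechURLRecognitionRequest(url: url)
        // Flip to true to force on-device recognition instead of the server.
        request.requiresOnDeviceRecognition = false

        // Step 3: run the task and read partial/final transcriptions.
        recognizer.recognitionTask(with: request) { result, error in
            if let result, result.isFinal {
                print(result.bestTranscription.formattedString)
            } else if let error {
                print("Recognition failed: \(error.localizedDescription)")
            }
        }
    }
}
```

Note that `requiresOnDeviceRecognition` is the legacy API's switch between the server and on-device modes the description mentions; the iOS 26+ SpeechAnalyzer path handles on-device assets through its own download system instead.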
npx skills add https://github.com/dpearson2699/swift-ios-skills --skill speech-recognition