Covers both the modern Swift-native Vision API (iOS 18+) with async/await and the legacy VNRequest pattern for backward compatibility. You get text recognition with language correction and custom vocabulary, face detection with landmarks, barcode scanning across a dozen symbologies, document scanning with structured layout understanding (paragraphs, tables, lists), plus object tracking for video. The modern API is cleaner (`try await request.perform(on: image)` instead of handler boilerplate), but you'll need the legacy patterns if you're supporting pre-iOS 18. Includes VisionKit's DataScannerViewController for live camera scanning and VNCoreMLRequest patterns for running custom Core ML models through Vision's pipeline.
npx skills add https://github.com/dpearson2699/swift-ios-skills --skill vision-framework
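A minimal sketch of the two text-recognition styles the description contrasts, the modern struct-based request versus the legacy completion-handler pattern (function names and the `imageURL`/`cgImage` inputs are illustrative; verify property names against current Apple docs):

```swift
import Vision
import CoreGraphics

// Modern Swift-native API (iOS 18+): the request is a value type and
// `perform(on:)` returns observations directly via async/await.
func recognizeTextModern(in imageURL: URL) async throws -> [String] {
    var request = RecognizeTextRequest()
    request.usesLanguageCorrection = true   // enable language-model correction
    let observations = try await request.perform(on: imageURL)
    return observations.compactMap { $0.topCandidates(1).first?.string }
}

// Legacy VNRequest pattern (pre-iOS 18): a completion handler plus an
// explicit VNImageRequestHandler. The handler runs synchronously, so the
// captured results are populated before perform(_:) returns.
func recognizeTextLegacy(in cgImage: CGImage) throws -> [String] {
    var results: [String] = []
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        results = observations.compactMap { $0.topCandidates(1).first?.string }
    }
    request.usesLanguageCorrection = true
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return results
}
```

Both functions return the top candidate string per detected text region; the main migration cost is moving result handling out of closures and into straight-line async code.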