This is for apps that need to work with motorized iPhone stands like Belkin's MagSafe tracking mount. The framework handles motor control and subject detection automatically through built-in machine learning, so you get pan and tilt tracking without writing any computer vision code. System tracking works out of the box with AVFoundation cameras, or you can disable it and feed in your own Vision observations for custom behavior. The framing modes are handy when you need the subject positioned left or right instead of centered, which matters when UI overlays are eating screen real estate. Requires iOS 17 or later and real hardware, since the Simulator can't talk to motors. Good for video calling apps, content recording tools, or anything where keeping a person in frame matters more than the camera operator.
npx skills add https://github.com/dpearson2699/swift-ios-skills --skill dockkit
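
A minimal sketch of what the skill covers, based on the iOS 17 DockKit API surface (`DockAccessoryManager`, `DockAccessory`); exact signatures may differ slightly, and this only runs on a real device with a docked accessory:

```swift
import DockKit

@MainActor
final class DockController {
    private var accessory: DockAccessory?

    // Watch for a motorized stand docking or undocking.
    func observeDocking() async {
        do {
            for await change in try DockAccessoryManager.shared.accessoryStateChanges {
                // `accessory` is nil while undocked.
                accessory = change.accessory
            }
        } catch {
            print("DockKit unavailable: \(error)")
        }
    }

    // Bias the subject toward the left third of the frame,
    // e.g. to keep overlays on the right side unobstructed.
    func frameSubjectLeft() async {
        do {
            try await accessory?.setFramingMode(.left)
        } catch {
            print("Framing change failed: \(error)")
        }
    }

    // Opt out of system tracking to drive the motors from your
    // own Vision pipeline instead.
    func useCustomTracking() async throws {
        try await DockAccessoryManager.shared.setSystemTrackingEnabled(false)
        // From here, pass your Vision results to the accessory via
        // its track(_:cameraInformation:) method.
    }
}
```

The split shown here mirrors the two modes the description mentions: leave system tracking on and just adjust framing, or turn it off and supply observations yourself.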