This is your Layer 3 constraint reference for building ML/AI apps in Rust when you need to make the right tradeoffs. It maps domain rules like "large data means efficient memory" to concrete Rust implications like zero-copy views and streaming. The big win is the decision matrix: it points you to tract for ONNX inference-only cases, candle or burn when you need training, and tch-rs when you need compatibility with existing PyTorch models. It also includes patterns for model singletons with OnceLock and batched inference for GPU efficiency. The "trace down" section connects ML constraints to the right Layer 2 modules, so you're not guessing which skill to reach for when you need async data loading or lazy evaluation.
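To make the two named patterns concrete, here is a minimal sketch of a model singleton with `OnceLock` combined with batch-at-once inference. The `Model` type and its `scale` field are hypothetical stand-ins for a real model handle (e.g. one loaded via tract or candle); only the `OnceLock` pattern itself is the point.

```rust
use std::sync::OnceLock;

// Hypothetical stand-in for a real model handle (e.g. a tract or candle model).
struct Model {
    scale: f32,
}

impl Model {
    // Pretend "load": a real app would parse an ONNX file or weights here, once.
    fn load() -> Model {
        Model { scale: 2.0 }
    }

    // Run inference on a whole batch instead of item-by-item; batching is
    // where GPU backends recover their throughput.
    fn infer_batch(&self, inputs: &[f32]) -> Vec<f32> {
        inputs.iter().map(|x| x * self.scale).collect()
    }
}

// Process-wide singleton: loaded exactly once, on first use, then shared
// by every thread for the lifetime of the process.
static MODEL: OnceLock<Model> = OnceLock::new();

fn model() -> &'static Model {
    MODEL.get_or_init(Model::load)
}

fn main() {
    let out = model().infer_batch(&[1.0, 2.0, 3.0]);
    println!("{:?}", out); // [2.0, 4.0, 6.0]
}
```

The singleton avoids reloading weights per request, and `&'static Model` lets worker threads share it without an `Arc` or a lock on the hot path.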
npx skills add https://github.com/zhanghandong/rust-skills --skill domain-ml