This skill tackles the RAG chunking problem systematically, walking you through five strategy levels from basic fixed-size splits to semantic boundary detection with embedding similarity. It includes validation commands you can actually run to check coherence scores and retrieval precision, plus concrete parameter recommendations such as starting at 512 tokens with 15% overlap. The iterative tuning guidance is solid: if precision drops below 0.7, reduce chunk size by 25%. What's useful here is the decision tree for matching strategies to document types, whether you're processing unstructured text, code with AST parsing, or complex documents with thematic shifts. It won't magically solve retrieval quality issues, but it gives you a methodical framework instead of guessing at chunk sizes.
npx skills add https://github.com/giuseppe-trisciuoglio/developer-kit --skill chunking-strategy
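The baseline parameters and tuning rule described above can be sketched roughly as follows. This is an illustrative sketch, not the skill's actual implementation: it uses whitespace splitting as a stand-in for a real tokenizer, and the function names (`chunk_fixed`, `tune_chunk_size`) are hypothetical.

```python
# Sketch of level-1 fixed-size chunking with overlap, using the recommended
# starting point of 512 tokens and 15% overlap. Whitespace splitting stands
# in for a real tokenizer; swap in a proper one for production use.

def chunk_fixed(text: str, chunk_size: int = 512, overlap_ratio: float = 0.15) -> list[str]:
    """Split text into chunks of `chunk_size` tokens, each overlapping
    the previous chunk by roughly `overlap_ratio` of its length."""
    tokens = text.split()  # stand-in tokenizer
    step = max(1, int(chunk_size * (1 - overlap_ratio)))  # advance ~85% per chunk
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(" ".join(tokens[start:start + chunk_size]))
        if start + chunk_size >= len(tokens):
            break
    return chunks

def tune_chunk_size(precision: float, chunk_size: int) -> int:
    """The iterative tuning rule from the description: if retrieval
    precision drops below 0.7, shrink the chunk size by 25%."""
    return int(chunk_size * 0.75) if precision < 0.7 else chunk_size
```

With 512-token chunks and 15% overlap, consecutive chunks share about 77 tokens, which helps keep sentences that straddle a boundary retrievable from either side.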