Configures and audits robots.txt for search engine and AI crawler control. Handles the usual suspects like blocking /admin/ and /api/, but the real value is in the AI crawler strategy table that breaks down which bots respect robots.txt and which don't (ChatGPT-User stopped respecting it in December 2025, for instance). Includes clean separation between crawl control (robots.txt) and index control (noindex meta tags), with clear guidance on when to use each. The critical blocker list is helpful: don't block CSS/JS, don't block pages that use noindex, don't block Next.js asset paths. Pairs with the indexing skill for the full picture.
npx skills add https://github.com/kostja94/marketing-skills --skill robots-txt
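As a sketch of the crawl-vs-index separation described above (the paths and comments are illustrative, not taken from the skill itself):

```
# robots.txt — crawl control only (illustrative rules)
User-agent: *
Disallow: /admin/
Disallow: /api/

# Leave CSS/JS and framework asset paths (e.g. /_next/) crawlable,
# or rendering-based indexing can break.

# Index control does NOT belong here. Use a meta tag in the page instead:
#   <meta name="robots" content="noindex">
# A noindexed page must stay crawlable (not blocked above),
# otherwise crawlers never see the tag.
```

The key design point: robots.txt stops crawling, not indexing, so blocking a page and noindexing it are mutually exclusive strategies.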