Handles the grunt work of loading CSV, JSON, Parquet, and Arrow files into Elasticsearch without eating your RAM. A stream-based architecture means you can ingest files larger than memory at 50k+ docs/sec on ordinary hardware. The transform layer is handy when you need to reshape, filter, or split records on the way in. It infers mappings from CSV headers when asked, and accepts data piped in via stdin for batch workflows. Built by Elastic, so the connection handling and bulk indexing patterns are solid. This is for file-to-index jobs, not general pipeline design or reindexing existing data. If you regularly move flat files into ES, this beats writing your own streaming parser.
npx skills add https://github.com/elastic/agent-skills --skill elasticsearch-file-ingest
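For a sense of what the skill is automating: a minimal sketch of the streaming ingest pattern using Python generators and elasticsearch-py. `csv_to_actions`, the `transform` hook, and the sample data are illustrative, not part of the skill's actual API; the only real library call referenced is `elasticsearch.helpers.streaming_bulk`.

```python
import csv
import io

def csv_to_actions(lines, index, transform=None):
    """Yield Elasticsearch bulk actions from CSV lines, one row at a time.

    A generator keeps memory flat regardless of file size. `transform` may
    reshape a record or return None to drop it (the filter step).
    """
    for row in csv.DictReader(lines):
        if transform is not None:
            row = transform(row)
            if row is None:
                continue  # record filtered out
        yield {"_index": index, "_source": row}

# To actually index, feed the generator to elasticsearch-py's
# helpers.streaming_bulk, which batches _bulk requests without ever
# buffering the whole file:
#
#   from elasticsearch import Elasticsearch, helpers
#   es = Elasticsearch("http://localhost:9200")
#   with open("data.csv") as f:
#       for ok, item in helpers.streaming_bulk(es, csv_to_actions(f, "my-index")):
#           ...

# Demo without a cluster: filter out rows during ingest.
sample = io.StringIO("name,age\nada,36\ngrace,45\n")
keep_40_plus = lambda r: r if int(r["age"]) >= 40 else None
actions = list(csv_to_actions(sample, "people", transform=keep_40_plus))
print(actions)  # [{'_index': 'people', '_source': {'name': 'grace', 'age': '45'}}]
```

The same shape works for JSON-lines or Arrow record batches: anything that yields dicts can feed the bulk helper, which is why a transform layer slots in so cheaply.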