This guide walks you through running Ollama as your local embedding provider for GrepAI, so your code never leaves your machine. It covers installation on macOS, Linux, and Windows, then shows how to pull nomic-embed-text (the recommended 768-dimension model, a 274 MB download) and verify everything works with a quick curl test. The guide is thorough on the basics: starting the service, checking whether it's running, and troubleshooting "connection refused" errors. If you want private code search without API costs or network latency, this takes you from zero to a working local setup in about five minutes.
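The pull-and-verify flow the guide describes can be sketched as a few shell commands. This is a minimal sketch assuming a default Ollama install (local service on port 11434, the standard `/api/embeddings` endpoint); consult the guide itself for platform-specific installation steps:

```shell
# Pull the recommended embedding model (~274 MB download)
ollama pull nomic-embed-text

# Start the service if it isn't already running
# (the macOS/Windows desktop apps usually start it for you)
ollama serve &

# Sanity check: request an embedding for a test string
curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "hello world"}'
# A working setup returns JSON containing a 768-element "embedding" array.
# "connection refused" here usually means the service isn't running yet.
```

If the curl call fails, `ollama list` is a quick way to confirm the daemon is reachable before pointing GrepAI at the local endpoint.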
npx skills add https://github.com/yoanbernabeu/grepai-skills --skill grepai-ollama-setup