OpenViking replaces fragmented vector stores with a filesystem paradigm for agent context. You organize memories, resources, and skills in directories, then query them with tiered loading: L0 for always-loaded identity, L1 for on-demand resources, and L2 for semantic skill retrieval. Session compression automatically extracts long-term memories from conversation turns, which is cleaner than managing embeddings by hand. Setup means wiring your VLM and embedding providers through a config file; OpenAI, LiteLLM, and local models are supported. The observable trajectory tracking is genuinely useful for debugging why your agent pulled the wrong context. Best suited to agents that need persistent memory across tasks without blowing the token budget.
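To make the tier semantics concrete, here is a minimal conceptual sketch of tiered context loading. This is not the OpenViking API; the `Entry` class, `build_context` function, and all field names are illustrative assumptions about how the three tiers behave (L0 always included, L1 pulled on explicit request, L2 ranked by embedding similarity).

```python
# Conceptual sketch of tiered loading -- NOT the OpenViking API.
# All names here are assumptions for illustration; consult the project docs.
from dataclasses import dataclass, field

@dataclass
class Entry:
    tier: int                 # 0 = always loaded, 1 = on demand, 2 = semantic
    path: str                 # filesystem-style key, e.g. "memories/identity.md"
    text: str
    embedding: tuple = field(default=())  # only tier-2 entries carry one

def similarity(a, b):
    # Cosine similarity, no external dependencies.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def build_context(store, requested_paths, query_embedding, k=1):
    ctx = [e for e in store if e.tier == 0]          # L0: identity, always present
    ctx += [e for e in store
            if e.tier == 1 and e.path in requested_paths]  # L1: explicit loads
    skills = sorted(
        (e for e in store if e.tier == 2),
        key=lambda e: similarity(e.embedding, query_embedding),
        reverse=True,
    )[:k]                                            # L2: top-k semantic retrieval
    return ctx + skills

store = [
    Entry(0, "memories/identity.md", "You are a coding agent."),
    Entry(1, "resources/style.md", "Prefer small functions."),
    Entry(2, "skills/sql.md", "How to write SQL.", (1.0, 0.0)),
    Entry(2, "skills/regex.md", "How to write regexes.", (0.0, 1.0)),
]
ctx = build_context(store, {"resources/style.md"}, (0.9, 0.1))
print([e.path for e in ctx])
# → ['memories/identity.md', 'resources/style.md', 'skills/sql.md']
```

The point of the tiering is token economy: only L0 costs tokens on every turn, while L1 and L2 entries enter the prompt when requested or semantically relevant.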
npx skills add https://github.com/aradotso/trending-skills --skill openviking-context-database
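For the provider wiring, a config along these lines is what you would expect; this is a hypothetical sketch, not the actual OpenViking config schema, and every key name below is an assumption to be checked against the project docs.

```json
{
  "vlm": {
    "provider": "openai",
    "model": "gpt-4o",
    "api_key_env": "OPENAI_API_KEY"
  },
  "embedding": {
    "provider": "litellm",
    "model": "text-embedding-3-small"
  }
}
```

Routing through LiteLLM (or pointing the provider at a local server) is what makes the "OpenAI, LiteLLM, and local models" support interchangeable without touching agent code.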