If you're building an AI agent that can actually send transactions or move money, this gives you a layered defense checklist. It covers prompt injection detection tuned for wallet addresses and transfer commands, hard spend caps with daily limits, mandatory pre-send simulation with slippage checks, circuit breakers that halt on drawdown, and wallet isolation patterns. The examples are Python but the ideas port anywhere. The core insight is right: no single control is enough when a bad LLM output can drain funds. Treat every piece of external data as hostile, simulate before you sign, and keep the agent's keys separate from your treasury.
npx skills add https://github.com/affaan-m/everything-claude-code --skill llm-trading-agent-security
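To make the "layered defense" idea concrete, here is a minimal sketch of two of the controls described above: a hard daily spend cap and a drawdown circuit breaker. All names and thresholds here are hypothetical illustrations, not the skill's actual API.

```python
from dataclasses import dataclass, field
import time

@dataclass
class SpendGuard:
    """Illustrative guard combining a daily spend cap with a drawdown halt."""
    daily_cap: float          # max total outflow per rolling 24h window
    max_drawdown: float       # fraction below peak equity that trips the breaker
    spent_today: float = 0.0
    window_start: float = field(default_factory=time.time)
    peak_equity: float = 0.0
    halted: bool = False

    def _roll_window(self) -> None:
        # Reset the daily counter once 24 hours have elapsed.
        if time.time() - self.window_start >= 86_400:
            self.spent_today = 0.0
            self.window_start = time.time()

    def record_equity(self, equity: float) -> None:
        # Circuit breaker: permanently halt if equity falls too far from peak.
        self.peak_equity = max(self.peak_equity, equity)
        if self.peak_equity > 0 and equity < self.peak_equity * (1 - self.max_drawdown):
            self.halted = True

    def allow(self, amount: float) -> bool:
        # Deny by default: every send must clear both checks before signing.
        self._roll_window()
        if self.halted or self.spent_today + amount > self.daily_cap:
            return False
        self.spent_today += amount
        return True
```

A guard like this sits in front of the signing step, so even a fully compromised LLM output can only request transfers the policy layer already permits:

```python
guard = SpendGuard(daily_cap=100.0, max_drawdown=0.2)
guard.allow(60.0)   # permitted: under the cap
guard.allow(60.0)   # denied: would exceed the daily cap
```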