Try it: `npx skills add https://github.com/affaan-m/everything-claude-code --skill autonomous-loops`

Compatibility note (v1.8.0): `autonomous-loops` is retained for one release. The canonical skill name is now `continuous-agent-loop`. New loop guidance should be authored there, while this skill remains available to avoid breaking existing workflows.
Patterns, architectures, and reference implementations for running Claude Code autonomously in loops. Covers everything from simple claude -p pipelines to full RFC-driven multi-agent DAG orchestration.
From simplest to most sophisticated:
| Pattern | Complexity | Best For |
|---|---|---|
| Sequential Pipeline | Low | Daily dev steps, scripted workflows |
| NanoClaw REPL | Low | Interactive persistent sessions |
| Infinite Agentic Loop | Medium | Parallel content generation, spec-driven work |
| Continuous Claude PR Loop | Medium | Multi-day iterative projects with CI gates |
| De-Sloppify Pattern | Add-on | Quality cleanup after any Implementer step |
| Ralphinho / RFC-Driven DAG | High | Large features, multi-unit parallel work with merge queue |
The simplest loop. Break daily development into a sequence of non-interactive claude -p calls. Each call is a focused step with a clear prompt.
If you can't figure out a loop like this, it means you can't even drive the LLM to fix your code in interactive mode.
The -p flag runs Claude Code non-interactively with a single prompt and exits when done. Chain calls to build a pipeline:
#!/bin/bash
# daily-dev.sh — Sequential pipeline for a feature branch
set -e
# Step 1: Implement the feature
claude -p "Read the spec in docs/auth-spec.md. Implement OAuth2 login in src/auth/. Write tests first (TDD). Do NOT create any new documentation files."
# Step 2: De-sloppify (cleanup pass)
claude -p "Review all files changed by the previous commit. Remove any unnecessary type tests, overly defensive checks, or testing of language features (e.g., testing that TypeScript generics work). Keep real business logic tests. Run the test suite after cleanup."
# Step 3: Verify
claude -p "Run the full build, lint, type check, and test suite. Fix any failures. Do not add new features."
# Step 4: Commit
claude -p "Create a conventional commit for all staged changes. Use 'feat: add OAuth2 login flow' as the message."
Each claude -p call starts with a fresh context, so there is no context bleed between steps, and set -e stops the pipeline on the first failure.

With model routing:
# Research with Opus (deep reasoning)
claude -p --model opus "Analyze the codebase architecture and write a plan for adding caching..."
# Implement with Sonnet (fast, capable)
claude -p "Implement the caching layer according to the plan in docs/caching-plan.md..."
# Review with Opus (thorough)
claude -p --model opus "Review all changes for security issues, race conditions, and edge cases..."
With environment context:
# Pass context via files, not prompt length
echo "Focus areas: auth module, API rate limiting" > .claude-context.md
claude -p "Read .claude-context.md for priorities. Work through them in order."
rm .claude-context.md
With --allowedTools restrictions:
# Read-only analysis pass
claude -p --allowedTools "Read,Grep,Glob" "Audit this codebase for security vulnerabilities..."
# Implementation pass (read, edit, and shell access)
claude -p --allowedTools "Read,Write,Edit,Bash" "Implement the fixes from security-audit.md..."
ECC's built-in persistent loop. A session-aware REPL that calls claude -p synchronously with full conversation history.
# Start the default session
node scripts/claw.js
# Named session with skill context
CLAW_SESSION=my-project CLAW_SKILLS=tdd-workflow,security-review node scripts/claw.js
Sessions persist to ~/.claude/claw/{session}.md, and every turn calls claude -p with the full conversation history as context.

| Use Case | NanoClaw | Sequential Pipeline |
|---|---|---|
| Interactive exploration | Yes | No |
| Scripted automation | No | Yes |
| Session persistence | Built-in | Manual |
| Context accumulation | Grows per turn | Fresh each step |
| CI/CD integration | Poor | Excellent |
See the /claw command documentation for full details.
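The mechanism can be sketched in a few lines of bash. This is a hypothetical simplification (the real NanoClaw lives in scripts/claw.js), and the claude call is stubbed so the sketch runs standalone:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of NanoClaw's core loop: persist every turn to a
# per-session markdown file and replay the whole file as context.
set -e
SESSION="${CLAW_SESSION:-default}"
HISTORY="${CLAW_DIR:-$HOME/.claude/claw}/$SESSION.md"
mkdir -p "$(dirname "$HISTORY")" && touch "$HISTORY"

# Stub so the sketch runs without the CLI; delete to call the real claude.
claude() { echo "(reply)"; }

claw_turn() {
  printf '## User\n%s\n' "$1" >> "$HISTORY"    # record the user turn
  local reply
  reply=$(claude -p "$(cat "$HISTORY")")        # full history as context
  printf '## Assistant\n%s\n' "$reply" >> "$HISTORY"
  echo "$reply"
}

claw_turn "Summarize the auth module"
```

This is why context accumulation "grows per turn" in the comparison table: every call re-sends the whole session file.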
A two-prompt system that orchestrates parallel sub-agents for specification-driven generation. Developed by disler (credit: @disler).
PROMPT 1 (Orchestrator) PROMPT 2 (Sub-Agents)
┌─────────────────────┐ ┌──────────────────────┐
│ Parse spec file │ │ Receive full context │
│ Scan output dir │ deploys │ Read assigned number │
│ Plan iteration │────────────│ Follow spec exactly │
│ Assign creative dirs │ N agents │ Generate unique output │
│ Manage waves │ │ Save to output dir │
└─────────────────────┘ └──────────────────────┘
Create .claude/commands/infinite.md:
Parse the following arguments from $ARGUMENTS:
1. spec_file — path to the specification markdown
2. output_dir — where iterations are saved
3. count — integer 1-N or "infinite"
PHASE 1: Read and deeply understand the specification.
PHASE 2: List output_dir, find highest iteration number. Start at N+1.
PHASE 3: Plan creative directions — each agent gets a DIFFERENT theme/approach.
PHASE 4: Deploy sub-agents in parallel (Task tool). Each receives:
- Full spec text
- Current directory snapshot
- Their assigned iteration number
- Their unique creative direction
PHASE 5 (infinite mode): Loop in waves of 3-5 until context is low.
Invoke:
/project:infinite specs/component-spec.md src/ 5
/project:infinite specs/component-spec.md src/ infinite
| Count | Strategy |
|---|---|
| 1-5 | All agents simultaneously |
| 6-20 | Batches of 5 |
| infinite | Waves of 3-5, progressive sophistication |
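The wave strategy above can be sketched as a plain bash scheduler (a hypothetical simplification of what the orchestrator does with the Task tool; the agent call is stubbed so it runs standalone):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of wave scheduling: launch agents in batches and
# wait for each wave to finish before starting the next.
WAVE=5
COUNT=12
LOG=$(mktemp)

# Stub agent; delete to invoke the real CLI.
claude() { echo "$2" >> "$LOG"; }

for ((start = 1; start <= COUNT; start += WAVE)); do
  end=$((start + WAVE - 1)); ((end > COUNT)) && end=$COUNT
  for ((i = start; i <= end; i++)); do
    claude -p "Generate iteration $i with creative direction #$i." &  # parallel within the wave
  done
  wait  # the wave must finish before the next one launches
done
```

Batching caps concurrent agents regardless of the total count, which is what keeps "infinite" mode from flooding the machine.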
Don't rely on agents to self-differentiate. The orchestrator assigns each agent a specific creative direction and iteration number. This prevents duplicate concepts across parallel agents.
A production-grade shell script that runs Claude Code in a continuous loop, creating PRs, waiting for CI, and merging automatically. Created by AnandChowdhary (credit: @AnandChowdhary).
┌─────────────────────────────────────────────────────┐
│ CONTINUOUS CLAUDE ITERATION │
│ │
│ 1. Create branch (continuous-claude/iteration-N) │
│ 2. Run claude -p with enhanced prompt │
│ 3. (Optional) Reviewer pass — separate claude -p │
│ 4. Commit changes (claude generates message) │
│ 5. Push + create PR (gh pr create) │
│ 6. Wait for CI checks (poll gh pr checks) │
│ 7. CI failure? → Auto-fix pass (claude -p) │
│ 8. Merge PR (squash/merge/rebase) │
│ 9. Return to main → repeat │
│ │
│ Limit by: --max-runs N | --max-cost $X │
│ --max-duration 2h | completion signal │
└─────────────────────────────────────────────────────┘
Warning: Install continuous-claude from its repository after reviewing the code. Do not pipe external scripts directly to bash.
# Basic: 10 iterations
continuous-claude --prompt "Add unit tests for all untested functions" --max-runs 10
# Cost-limited
continuous-claude --prompt "Fix all linter errors" --max-cost 5.00
# Time-boxed
continuous-claude --prompt "Improve test coverage" --max-duration 8h
# With code review pass
continuous-claude \
--prompt "Add authentication feature" \
--max-runs 10 \
--review-prompt "Run npm test && npm run lint, fix any failures"
# Parallel via worktrees
continuous-claude --prompt "Add tests" --max-runs 5 --worktree tests-worker &
continuous-claude --prompt "Refactor code" --max-runs 5 --worktree refactor-worker &
wait
The critical innovation: a SHARED_TASK_NOTES.md file persists across iterations:
## Progress
- [x] Added tests for auth module (iteration 1)
- [x] Fixed edge case in token refresh (iteration 2)
- [ ] Still need: rate limiting tests, error boundary tests
## Next Steps
- Focus on rate limiting module next
- The mock setup in tests/helpers.ts can be reused
Claude reads this file at iteration start and updates it at iteration end. This bridges the context gap between independent claude -p invocations.
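A minimal version of this bridge looks like the sketch below, assuming the notes file name above (the agent call is stubbed so it runs standalone):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the context bridge: seed the notes file once,
# then tell every fresh claude -p iteration to read and rewrite it.
set -e
cd "$(mktemp -d)"   # scratch dir for the sketch
NOTES="SHARED_TASK_NOTES.md"
[ -f "$NOTES" ] || printf '## Progress\n\n## Next Steps\n- (none yet)\n' > "$NOTES"

# Stub; delete to call the real CLI.
claude() { :; }

for i in 1 2 3; do
  claude -p "Read $NOTES for prior progress before doing anything.
Continue the work. Before exiting, update $NOTES: check off finished
items and record concrete next steps for the next iteration."
done
```

The notes file, not the prompt, is what carries state: each iteration inherits exactly what the previous one chose to write down.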
When PR checks fail, Continuous Claude automatically:
- Detects the failing checks via gh run list
- Launches a fresh claude -p pass with the CI failure context
- Claude reads the logs via gh run view, fixes the code, commits, and pushes
- Repeats until the checks pass (up to --ci-retry-max attempts)

Claude can signal "I'm done" by outputting a magic phrase:
continuous-claude \
--prompt "Fix all bugs in the issue tracker" \
--completion-signal "CONTINUOUS_CLAUDE_PROJECT_COMPLETE" \
--completion-threshold 3 # Stops after 3 consecutive signals
Three consecutive iterations signaling completion stops the loop, preventing wasted runs on finished work.
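The threshold logic is easy to sketch. The magic phrase follows the example above; the agent is stubbed (here it always signals completion) so the sketch runs standalone:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of completion-signal counting: only N consecutive
# signals stop the loop; any productive iteration resets the streak.
SIGNAL="CONTINUOUS_CLAUDE_PROJECT_COMPLETE"
THRESHOLD=3
streak=0
runs=0

# Stub that always signals completion; delete to call the real CLI.
claude() { echo "Nothing left to do. $SIGNAL"; }

for i in $(seq 1 10); do
  runs=$i
  out=$(claude -p "Fix remaining bugs. If none remain, output $SIGNAL.")
  if grep -q "$SIGNAL" <<< "$out"; then
    streak=$((streak + 1))
  else
    streak=0   # productive work resets the streak
  fi
  [ "$streak" -ge "$THRESHOLD" ] && break
done
echo "Stopped after $runs runs (streak=$streak)."
```

Resetting the streak on any non-signaling iteration is the point: one stray "done" claim cannot stop a loop that is still making progress.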
| Flag | Purpose |
|---|---|
| --max-runs N | Stop after N successful iterations |
| --max-cost $X | Stop after spending $X |
| --max-duration 2h | Stop after the time elapses |
| --merge-strategy squash | squash, merge, or rebase |
| --worktree <name> | Parallel execution via git worktrees |
| --disable-commits | Dry-run mode (no git operations) |
| --review-prompt "..." | Add a reviewer pass per iteration |
| --ci-retry-max N | Auto-fix CI failures (default: 1) |
An add-on pattern for any loop. Add a dedicated cleanup/refactor step after each Implementer step.
When you ask an LLM to implement with TDD, it takes "write tests" too literally, producing:

- Tests of behavior the type system already enforces (e.g., asserting typeof x === 'string' on a typed parameter)
- Over-defensive checks for impossible states
- Tests of language/framework features rather than business logic

Adding "don't test type systems" or "don't add unnecessary checks" to the Implementer prompt has downstream effects on the quality of the rest of the implementation.
Instead of constraining the Implementer, let it be thorough. Then add a focused cleanup agent:
# Step 1: Implement (let it be thorough)
claude -p "Implement the feature with full TDD. Be thorough with tests."
# Step 2: De-sloppify (separate context, focused cleanup)
claude -p "Review all changes in the working tree. Remove:
- Tests that verify language/framework behavior rather than business logic
- Redundant type checks that the type system already enforces
- Over-defensive error handling for impossible states
- Console.log statements
- Commented-out code
Keep all business logic tests. Run the test suite after cleanup to ensure nothing breaks."
for feature in "${features[@]}"; do
# Implement
claude -p "Implement $feature with TDD."
# De-sloppify
claude -p "Cleanup pass: review changes, remove test/code slop, run tests."
# Verify
claude -p "Run build + lint + tests. Fix any failures."
# Commit
claude -p "Commit with message: feat: add $feature"
done
Rather than adding negative instructions which have downstream quality effects, add a separate de-sloppify pass. Two focused agents outperform one constrained agent.
The most sophisticated pattern. An RFC-driven, multi-agent pipeline that decomposes a spec into a dependency DAG, runs each unit through a tiered quality pipeline, and lands them via an agent-driven merge queue. Created by enitrat (credit: @enitrat).
RFC/PRD Document
│
▼
DECOMPOSITION (AI)
Break RFC into work units with dependency DAG
│
▼
┌──────────────────────────────────────────────────────┐
│ RALPH LOOP (up to 3 passes) │
│ │
│ For each DAG layer (sequential, by dependency): │
│ │
│ ┌── Quality Pipelines (parallel per unit) ───────┐ │
│ │ Each unit in its own worktree: │ │
│ │ Research → Plan → Implement → Test → Review │ │
│ │ (depth varies by complexity tier) │ │
│ └────────────────────────────────────────────────┘ │
│ │
│ ┌── Merge Queue ─────────────────────────────────┐ │
│ │ Rebase onto main → Run tests → Land or evict │ │
│ │ Evicted units re-enter with conflict context │ │
│ └────────────────────────────────────────────────┘ │
│ │
└──────────────────────────────────────────────────────┘
AI reads the RFC and produces work units:
interface WorkUnit {
id: string; // kebab-case identifier
name: string; // Human-readable name
rfcSections: string[]; // Which RFC sections this addresses
description: string; // Detailed description
deps: string[]; // Dependencies (other unit IDs)
acceptance: string[]; // Concrete acceptance criteria
tier: "trivial" | "small" | "medium" | "large";
}
The decomposition's dependency DAG determines execution order:
Layer 0: [unit-a, unit-b] ← no deps, run in parallel
Layer 1: [unit-c] ← depends on unit-a
Layer 2: [unit-d, unit-e] ← depend on unit-c
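Layer assignment can be sketched in bash: a unit is ready once all its deps have landed. The `id:dep1,dep2` string encoding below is hypothetical; a real implementation would operate on the WorkUnit JSON:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of DAG layering over "id:comma-separated-deps" strings.
units=("unit-a:" "unit-b:" "unit-c:unit-a" "unit-d:unit-c" "unit-e:unit-c")
landed=" "
layer=0
remaining=("${units[@]}")

while ((${#remaining[@]} > 0)); do
  ready=(); blocked=()
  for u in "${remaining[@]}"; do
    id="${u%%:*}" deps="${u#*:}" ok=1
    IFS=',' read -ra ds <<< "$deps"
    for d in "${ds[@]}"; do
      [[ -z "$d" || "$landed" == *" $d "* ]] || ok=0   # dep not landed yet
    done
    ((ok)) && ready+=("$id") || blocked+=("$u")
  done
  ((${#ready[@]} == 0)) && { echo "Cycle detected"; break; }  # guard against cyclic deps
  echo "Layer $layer: ${ready[*]}"
  landed+="${ready[*]} "
  remaining=("${blocked[@]}")
  layer=$((layer + 1))
done
```

For the example units this prints the three layers shown above, with each layer's units eligible to run in parallel.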
Different tiers get different pipeline depths:
| Tier | Pipeline Stages |
|---|---|
| trivial | implement → test |
| small | implement → test → code-review |
| medium | research → plan → implement → test → PRD-review + code-review → review-fix |
| large | research → plan → implement → test → PRD-review + code-review → review-fix → final-review |
This prevents expensive operations on simple changes while ensuring architectural changes get thorough scrutiny.
Each stage runs in its own agent process with its own context window:
| Stage | Model | Purpose |
|---|---|---|
| Research | Sonnet | Read codebase + RFC, produce context doc |
| Plan | Opus | Design implementation steps |
| Implement | Codex | Write code following the plan |
| Test | Sonnet | Run build + test suite |
| PRD Review | Sonnet | Spec compliance check |
| Code Review | Opus | Quality + security check |
| Review Fix | Codex | Address review issues |
| Final Review | Opus | Quality gate (large tier only) |
Critical design: The reviewer never wrote the code it reviews. This eliminates author bias — the most common source of missed issues in self-review.
After quality pipelines complete, units enter the merge queue:
Unit branch
│
├─ Rebase onto main
│ └─ Conflict? → EVICT (capture conflict context)
│
├─ Run build + tests
│ └─ Fail? → EVICT (capture test output)
│
└─ Pass → Fast-forward main, push, delete branch
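The land-or-evict decision above can be sketched for one unit branch. The VCS and test steps are stubbed functions here; swapping in real commands (a rebase, your test runner, a fast-forward push) is left to the implementation:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the merge queue's land-or-evict decision.
EVICT_DIR="$(mktemp -d)"

land_unit() {
  local branch="$1"
  if ! try_rebase "$branch"; then
    capture_context "$branch" "rebase conflict" > "$EVICT_DIR/$branch.md"
    return 1  # EVICT: unit re-enters the next pass with conflict context
  fi
  if ! run_tests "$branch"; then
    capture_context "$branch" "test failure" > "$EVICT_DIR/$branch.md"
    return 1  # EVICT: unit re-enters with the failing test output
  fi
  echo "landed $branch"  # pass: fast-forward main, push, delete branch
}

# Stubs so the sketch runs standalone:
try_rebase() { true; }
run_tests() { [ "$1" != "flaky-unit" ]; }
capture_context() { printf '## EVICTED %s: %s\n' "$1" "$2"; }

land_unit "unit-a"
land_unit "flaky-unit" || echo "evicted flaky-unit"
```

The key property is that eviction is not a dead end: the captured context file becomes the implementer's input on the next Ralph pass.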
File Overlap Intelligence: units that touch the same files are landed sequentially rather than in parallel (see the file-overlap antipattern below).
Eviction Recovery: When evicted, full context is captured (conflicting files, diffs, test output) and fed back to the implementer on the next Ralph pass:
## MERGE CONFLICT — RESOLVE BEFORE NEXT LANDING
Your previous implementation conflicted with another unit that landed first.
Restructure your changes to avoid the conflicting files/lines below.
{full eviction context with diffs}
research.contextFilePath ──────────────────→ plan
plan.implementationSteps ──────────────────→ implement
implement.{filesCreated, whatWasDone} ─────→ test, reviews
test.failingSummary ───────────────────────→ reviews, implement (next pass)
reviews.{feedback, issues} ────────────────→ review-fix → implement (next pass)
final-review.reasoning ────────────────────→ implement (next pass)
evictionContext ───────────────────────────→ implement (after merge conflict)
Every unit runs in an isolated worktree (uses jj/Jujutsu, not git):
/tmp/workflow-wt-{unit-id}/
Pipeline stages for the same unit share a worktree, preserving state (context files, plan files, code changes) across research → plan → implement → test → review.
| Signal | Use Ralphinho | Use Simpler Pattern |
|---|---|---|
| Multiple interdependent work units | Yes | No |
| Need parallel implementation | Yes | No |
| Merge conflicts likely | Yes | No (sequential is fine) |
| Single-file change | No | Yes (sequential pipeline) |
| Multi-day project | Yes | Maybe (continuous-claude) |
| Spec/RFC already written | Yes | Maybe |
| Quick iteration on one thing | No | Yes (NanoClaw or pipeline) |
Is the task a single focused change?
├─ Yes → Sequential Pipeline or NanoClaw
└─ No → Is there a written spec/RFC?
├─ Yes → Do you need parallel implementation?
│ ├─ Yes → Ralphinho (DAG orchestration)
│ └─ No → Continuous Claude (iterative PR loop)
└─ No → Do you need many variations of the same thing?
├─ Yes → Infinite Agentic Loop (spec-driven generation)
└─ No → Sequential Pipeline with de-sloppify
These patterns compose well:
Sequential Pipeline + De-Sloppify — The most common combination. Every implement step gets a cleanup pass.
Continuous Claude + De-Sloppify — Add --review-prompt with a de-sloppify directive to each iteration.
Any loop + Verification — Use ECC's /verify command or verification-loop skill as a gate before commits.
Ralphinho's tiered approach in simpler loops — Even in a sequential pipeline, you can route simple tasks to Haiku and complex tasks to Opus:
# Simple formatting fix
claude -p --model haiku "Fix the import ordering in src/utils.ts"
# Complex architectural change
claude -p --model opus "Refactor the auth module to use the strategy pattern"
Infinite loops without exit conditions — Always have a max-runs, max-cost, max-duration, or completion signal.
No context bridge between iterations — Each claude -p call starts fresh. Use SHARED_TASK_NOTES.md or filesystem state to bridge context.
Retrying the same failure — If an iteration fails, don't just retry. Capture the error context and feed it to the next attempt.
Negative instructions instead of cleanup passes — Don't say "don't do X." Add a separate pass that removes X.
All agents in one context window — For complex workflows, separate concerns into different agent processes. The reviewer should never be the author.
Ignoring file overlap in parallel work — If two parallel agents might edit the same file, you need a merge strategy (sequential landing, rebase, or conflict resolution).
| Project | Author | Link |
|---|---|---|
| Ralphinho | enitrat | credit: @enitrat |
| Infinite Agentic Loop | disler | credit: @disler |
| Continuous Claude | AnandChowdhary | credit: @AnandChowdhary |
| NanoClaw | ECC | /claw command in this repo |
| Verification Loop | ECC | skills/verification-loop/ in this repo |