Wraps Google's Nano Banana 2 edit endpoint through RunComfy's CLI to preserve subject identity while swapping backgrounds, clothing, or making localized edits. The big trick here is batch consistency: you can feed up to 20 input images in one call and get coherent variations back, which is legitimately useful for SKU galleries or A/B creative tests.

Prompting matters more than usual. Lead with what stays unchanged, end with the edit, and use spatial language like "background only" or "left object" instead of vague instructions.

For multilingual in-image text edits you still want GPT Image 2, and for precise single-reference local edits Flux Kontext is sharper, but this handles the middle ground of identity-preserved transforms at scale.
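The prompt ordering above can be sketched as a small helper. This is an illustrative sketch only: `build_edit_prompt` and its wording templates are hypothetical and not part of the RunComfy CLI or the Nano Banana 2 API; it just shows the recommended structure of "what stays unchanged" first, spatial scope next, and the edit last, repeated per image for a batch.

```python
# Hypothetical helper illustrating the recommended prompt ordering;
# not an actual RunComfy CLI or Nano Banana 2 API call.

def build_edit_prompt(preserve: str, region: str, edit: str) -> str:
    """Preserved subject first, spatial scope next, edit instruction last."""
    return f"Keep {preserve} unchanged. In the {region}, {edit}."

# One prompt per input image; a batch call accepts up to 20 inputs.
prompts = [
    build_edit_prompt("the model's face and pose", "background only",
                      "replace the scene with plain studio grey"),
    build_edit_prompt("the product and its label", "left object",
                      "change the strap color to navy blue"),
]

for p in prompts:
    print(p)
```

Keeping the edit clause last and the scope explicit ("background only", "left object") is what steers the model away from drifting the preserved subject.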
npx skills add https://github.com/agentspace-so/runcomfy-agent-skills --skill nano-banana-edit