Safely migrate Convex schemas and data when making breaking changes.

Try it: npx skills add https://github.com/get-convex/agent-skills --skill convex-migration-helper
Convex will not let you deploy a schema that does not match the data at rest. This is the fundamental constraint that shapes every migration:
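For example, if existing users documents were created before a role field existed, a schema that requires role will be rejected at deploy time until every document has one (illustrative sketch; the field names are assumptions):

```typescript
// convex/schema.ts
// Rejected at deploy time if any existing `users` document lacks `role`:
users: defineTable({
  name: v.string(),
  role: v.string(), // required by the schema, but missing from old documents
})
```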
This means migrations follow a predictable pattern: widen the schema, migrate the data, narrow the schema.
Convex migrations run online, meaning the app continues serving requests while data is updated asynchronously in batches. During the migration window, your code must handle both old and new data formats.
When changing the shape of data, create a new field rather than modifying an existing one. This makes the transition safer and easier to roll back.
Unless you are certain, prefer deprecating fields over deleting them. Mark the field as v.optional and add a code comment explaining it is deprecated and why it existed.
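A sketch of what a deprecated field can look like in the schema (the field name and comment text are illustrative):

```typescript
users: defineTable({
  name: v.string(),
  // DEPRECATED: replaced by `plan`. Kept optional so existing documents still
  // validate; remove once all data has been migrated and verified.
  isPro: v.optional(v.boolean()),
})
```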
// Before
users: defineTable({
  name: v.string(),
})

// After - safe, new field is optional
users: defineTable({
  name: v.string(),
  bio: v.optional(v.string()),
})
Adding an index is non-breaking: Convex builds it automatically on deploy, and no data migration is needed:

posts: defineTable({
  userId: v.id("users"),
  title: v.string(),
}).index("by_user", ["userId"])

users: defineTable({
  name: v.string(),
  email: v.string(),
}).index("by_email", ["email"])
Every breaking migration follows the same multi-deploy pattern:

Deploy 1 - Widen the schema: make the schema accept both the old and new shapes, for example by adding the new field as v.optional or loosening a field's type with v.union.

Between deploys - Migrate data: run a migration that backfills or rewrites every document into the new shape.

Deploy 2 - Narrow the schema: once all data matches the new shape, tighten the schema by making the field required or removing the old field.
For any non-trivial migration, use the @convex-dev/migrations component. It handles batching, cursor-based pagination, state tracking, resume from failure, dry runs, and progress monitoring.
npm install @convex-dev/migrations
// convex/convex.config.ts
import { defineApp } from "convex/server";
import migrations from "@convex-dev/migrations/convex.config.js";

const app = defineApp();
app.use(migrations);
export default app;
// convex/migrations.ts
import { Migrations } from "@convex-dev/migrations";
import { components } from "./_generated/api.js";
import { DataModel } from "./_generated/dataModel.js";

export const migrations = new Migrations<DataModel>(components.migrations);
export const run = migrations.runner();
The DataModel type parameter is optional but provides type safety for migration definitions.
The migrateOne function processes a single document. The component handles batching and pagination automatically.
// convex/migrations.ts
export const addDefaultRole = migrations.define({
  table: "users",
  migrateOne: async (ctx, user) => {
    if (user.role === undefined) {
      await ctx.db.patch(user._id, { role: "user" });
    }
  },
});
Shorthand: if you return an object, it is applied as a patch automatically.
export const clearDeprecatedField = migrations.define({
  table: "users",
  migrateOne: () => ({ legacyField: undefined }),
});
From the CLI:
# Define a one-off runner in convex/migrations.ts:
# export const runIt = migrations.runner(internal.migrations.addDefaultRole);
npx convex run migrations:runIt
# Or use the general-purpose runner
npx convex run migrations:run '{"fn": "migrations:addDefaultRole"}'
Programmatically from another Convex function:
await migrations.runOne(ctx, internal.migrations.addDefaultRole);
export const runAll = migrations.runner([
  internal.migrations.addDefaultRole,
  internal.migrations.clearDeprecatedField,
  internal.migrations.normalizeEmails,
]);
npx convex run migrations:runAll
If one fails, it stops and will not continue to the next. Call it again to retry from where it left off. Completed migrations are skipped automatically.
Test a migration before committing changes:
npx convex run migrations:runIt '{"dryRun": true}'
This runs one batch and then rolls back, so you can see what it would do without changing any data.
npx convex run --component migrations lib:getStatus --watch
npx convex run --component migrations lib:cancel '{"name": "migrations:addDefaultRole"}'
Or programmatically:
await migrations.cancel(ctx, internal.migrations.addDefaultRole);
Chain migration execution after deploying:
npx convex deploy --cmd 'npm run build' && npx convex run migrations:runAll --prod
If documents are large or the table has heavy write traffic, reduce the batch size to avoid transaction limits or OCC conflicts:
export const migrateHeavyTable = migrations.define({
  table: "largeDocuments",
  batchSize: 10,
  migrateOne: async (ctx, doc) => {
    // migration logic
  },
});
Process only matching documents instead of the full table:
export const fixEmptyNames = migrations.define({
  table: "users",
  customRange: (query) =>
    query.withIndex("by_name", (q) => q.eq("name", "")),
  migrateOne: () => ({ name: "<unknown>" }),
});
By default each document in a batch is processed serially. Enable parallel processing if your migration logic does not depend on ordering:
export const clearField = migrations.define({
  table: "myTable",
  parallelize: true,
  migrateOne: () => ({ optionalField: undefined }),
});
// Deploy 1: Schema allows both states
users: defineTable({
  name: v.string(),
  role: v.optional(v.union(v.literal("user"), v.literal("admin"))),
})

// Migration: backfill the field
export const addDefaultRole = migrations.define({
  table: "users",
  migrateOne: async (ctx, user) => {
    if (user.role === undefined) {
      await ctx.db.patch(user._id, { role: "user" });
    }
  },
});

// Deploy 2: After migration completes, make it required
users: defineTable({
  name: v.string(),
  role: v.union(v.literal("user"), v.literal("admin")),
})
Mark the field optional first, migrate data to remove it, then remove from schema:
// Deploy 1: Make optional
// isPro: v.boolean() --> isPro: v.optional(v.boolean())

// Migration
export const removeIsPro = migrations.define({
  table: "teams",
  migrateOne: async (ctx, team) => {
    if (team.isPro !== undefined) {
      await ctx.db.patch(team._id, { isPro: undefined });
    }
  },
});
// Deploy 2: Remove isPro from schema entirely
Prefer creating a new field. You can combine adding and deleting in one migration:
// Deploy 1: Add new field, keep old field optional
// isPro: v.boolean() --> isPro: v.optional(v.boolean()), plan: v.optional(...)

// Migration: convert old field to new field
export const convertToEnum = migrations.define({
  table: "teams",
  migrateOne: async (ctx, team) => {
    if (team.plan === undefined) {
      await ctx.db.patch(team._id, {
        plan: team.isPro ? "pro" : "basic",
        isPro: undefined,
      });
    }
  },
});
// Deploy 2: Remove isPro from schema, make plan required
To move embedded data into its own table, insert into the new table and clear the old field. The check for an existing row keeps the migration idempotent, so it is safe to re-run:

export const extractPreferences = migrations.define({
  table: "users",
  migrateOne: async (ctx, user) => {
    if (user.preferences === undefined) return;
    const existing = await ctx.db
      .query("userPreferences")
      .withIndex("by_user", (q) => q.eq("userId", user._id))
      .first();
    if (!existing) {
      await ctx.db.insert("userPreferences", {
        userId: user._id,
        ...user.preferences,
      });
    }
    await ctx.db.patch(user._id, { preferences: undefined });
  },
});
Make sure your code is already writing to the new userPreferences table for new users before running this migration, so you don't miss documents created during the migration window.
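A minimal sketch of such a write path, assuming a hypothetical setTheme mutation and the by_user index used by the migration above (names are illustrative):

```typescript
export const setTheme = mutation({
  args: { userId: v.id("users"), theme: v.string() },
  handler: async (ctx, args) => {
    // Write to the new table only; the migration cleans up the old embedded field.
    const existing = await ctx.db
      .query("userPreferences")
      .withIndex("by_user", (q) => q.eq("userId", args.userId))
      .first();
    if (existing) {
      await ctx.db.patch(existing._id, { theme: args.theme });
    } else {
      await ctx.db.insert("userPreferences", {
        userId: args.userId,
        theme: args.theme,
      });
    }
  },
});
```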
Migrations can also delete documents, for example removing embeddings whose referencing chunk no longer exists:

export const deleteOrphanedEmbeddings = migrations.define({
  table: "embeddings",
  migrateOne: async (ctx, doc) => {
    const chunk = await ctx.db
      .query("chunks")
      .withIndex("by_embedding", (q) => q.eq("embeddingId", doc._id))
      .first();
    if (!chunk) {
      await ctx.db.delete(doc._id);
    }
  },
});
During the migration window, your app must handle both old and new data formats. There are two main strategies.
Write to both old and new structures. Read from the old structure until migration is complete.
This is preferred because you can safely roll back at any point: the old format is always up to date.
// Bad: only writing to new structure before migration is done
export const createTeam = mutation({
  args: { name: v.string(), isPro: v.boolean() },
  handler: async (ctx, args) => {
    await ctx.db.insert("teams", {
      name: args.name,
      plan: args.isPro ? "pro" : "basic",
    });
  },
});

// Good: writing to both structures during migration
export const createTeam = mutation({
  args: { name: v.string(), isPro: v.boolean() },
  handler: async (ctx, args) => {
    const plan = args.isPro ? "pro" : "basic";
    await ctx.db.insert("teams", {
      name: args.name,
      isPro: args.isPro,
      plan,
    });
  },
});
Read both formats. Write only the new format.
This avoids duplicating writes, which is useful when having two copies of data could cause inconsistencies. The downside is that rolling back is harder, since documents created after the change only have the new format.
// Good: reading both formats, preferring new
function getTeamPlan(team: Doc<"teams">): "basic" | "pro" {
  if (team.plan !== undefined) return team.plan;
  return team.isPro ? "pro" : "basic";
}
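The fallback logic can be sanity-checked in plain TypeScript with a minimal stand-in for the Doc<"teams"> type:

```typescript
// Minimal stand-in for Doc<"teams"> during the migration window:
// old documents carry `isPro`, new documents carry `plan`.
type Team = { isPro?: boolean; plan?: "basic" | "pro" };

function getTeamPlan(team: Team): "basic" | "pro" {
  if (team.plan !== undefined) return team.plan; // prefer the new field
  return team.isPro ? "pro" : "basic";           // fall back to the legacy flag
}

console.log(getTeamPlan({ plan: "basic", isPro: true })); // new field wins -> "basic"
console.log(getTeamPlan({ isPro: true }));                // legacy doc -> "pro"
console.log(getTeamPlan({}));                             // neither set -> "basic"
```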
For small tables (a few thousand documents at most), you can migrate in a single internalMutation without the component:
import { internalMutation } from "./_generated/server";

export const backfillSmallTable = internalMutation({
  handler: async (ctx) => {
    const docs = await ctx.db.query("smallConfig").collect();
    for (const doc of docs) {
      if (doc.newField === undefined) {
        await ctx.db.patch(doc._id, { newField: "default" });
      }
    }
  },
});
npx convex run migrations:backfillSmallTable
Only use .collect() when you are certain the table is small. For anything larger, use the migrations component.
Query to check remaining unmigrated documents:
import { query } from "./_generated/server";

export const verifyMigration = query({
  handler: async (ctx) => {
    const remaining = await ctx.db
      .query("users")
      .filter((q) => q.eq(q.field("role"), undefined))
      .take(10);
    return {
      complete: remaining.length === 0,
      sampleRemaining: remaining.map((u) => u._id),
    };
  },
});
Or use the component's built-in status monitoring:
npx convex run --component migrations lib:getStatus --watch
Key points:

- Use the @convex-dev/migrations component for any non-trivial migration.
- Don't .collect() large tables: use the migrations component for proper batched pagination. .collect() is only safe for tables you know are small.
- Use dryRun: true to validate your migration logic before committing changes to production data.
- Deprecate fields with v.optional and a comment. Only delete after you are confident the data is no longer needed.