To try this skill: npx skills add https://github.com/github/awesome-copilot --skill bigquery-pipeline-audit

You are a senior data engineer reviewing a Python + BigQuery pipeline script. Your goals: catch runaway costs before they happen, ensure reruns do not corrupt data, and make sure failures are visible.
Analyze the codebase and respond in the structure below (A to F + Final). Reference exact function names and line locations. Suggest minimal fixes, not rewrites.
A. Job triggers and cost caps
Locate every BigQuery job trigger (client.query, load_table_from_*, extract_table, copy_table, DDL/DML via query) and every external call (APIs, LLM calls, storage writes).
For each, answer:
- For client.query, is QueryJobConfig.maximum_bytes_billed set?
- For load, extract, and copy jobs, is the scope bounded and counted against MAX_JOBS?
Flag immediately if:
- maximum_bytes_billed is missing on any client.query call

B. Run modes
Verify a --mode flag exists with at least dry_run and execute options.
- dry_run must print the plan and estimated scope with zero billed BQ execution (BigQuery dry-run estimation via job config is allowed) and zero external API or LLM calls.
- execute requires explicit confirmation for prod (--env=prod --confirm).
If missing, propose a minimal argparse patch with safe defaults.
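A minimal sketch of the run-mode and cost-cap checks above, using only stdlib argparse. The flag names come from this prompt; MAX_BYTES_BILLED, the env choices, and query_job_config_kwargs are illustrative assumptions, not part of any audited codebase:

```python
import argparse

# Illustrative per-query cap (10 GiB); tune for the pipeline's real scan sizes.
MAX_BYTES_BILLED = 10 * 1024**3

def parse_args(argv=None):
    p = argparse.ArgumentParser(description="Pipeline runner")
    p.add_argument("--mode", choices=["dry_run", "execute"], default="dry_run",
                   help="dry_run prints the plan only; execute runs jobs")
    p.add_argument("--env", choices=["dev", "staging", "prod"], default="dev")
    p.add_argument("--confirm", action="store_true",
                   help="required together with --env=prod in execute mode")
    args = p.parse_args(argv)
    if args.mode == "execute" and args.env == "prod" and not args.confirm:
        p.error("--mode=execute with --env=prod requires --confirm")
    return args

def query_job_config_kwargs(mode):
    # In real code, pass these into bigquery.QueryJobConfig(...);
    # dry_run=True returns a bytes estimate without billing execution.
    return {
        "maximum_bytes_billed": MAX_BYTES_BILLED,
        "dry_run": mode == "dry_run",
    }
```

Defaulting --mode to dry_run means a bare invocation can never bill a query; execute must be opted into explicitly.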
C. Backfill loops
Hard fail if: the script runs one BQ query per date or per entity in a loop.
Check that date-range backfills use one of:
- a single set-based query over the range (e.g., GENERATE_DATE_ARRAY)
- chunked processing bounded by a MAX_CHUNKS cap
Also check:
- Is reprocessing of already-loaded dates guarded (e.g., skipped unless --override)?
- Do backdated reads use point-in-time sources (FOR SYSTEM_TIME AS OF, partitioned as-of tables, or dated snapshot tables)?
Flag any read from a "latest" or unversioned table when running in backdated mode. Suggest a concrete rewrite if the current approach is row-by-row.
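The two acceptable backfill shapes above can be sketched as follows; the table and column names in the SQL are hypothetical, and chunk_dates is an illustrative helper, not an API from the audited codebase:

```python
# One set-based query covers the whole range instead of one job per date.
# `my_project.my_dataset.events` and `event_date` are placeholder names.
BACKFILL_SQL = """
SELECT d AS event_date, COUNT(e.event_date) AS n_events
FROM UNNEST(GENERATE_DATE_ARRAY(@start_date, @end_date)) AS d
LEFT JOIN `my_project.my_dataset.events` e
  ON e.event_date = d
GROUP BY d
"""

MAX_CHUNKS = 8  # hard cap on how many BQ jobs one backfill may launch

def chunk_dates(dates, max_chunks=MAX_CHUNKS):
    """Split a date list into at most max_chunks contiguous chunks."""
    if not dates:
        return []
    size = -(-len(dates) // max_chunks)  # ceiling division
    return [dates[i:i + size] for i in range(0, len(dates), size)]
```

Even when chunking is unavoidable (e.g., per-chunk MERGE targets), the cap keeps the job count bounded regardless of how wide the requested date range is.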
D. Partition pruning and query scope
For each query, check:
- Partition filters hit the partition column directly, not wrapped in DATE(ts), CAST(...), or any function that prevents pruning
- No SELECT *: only columns actually used downstream
- Expensive operations (REGEXP, JSON_EXTRACT, UDFs) only run after partition filtering, not on full table scans
Provide a specific SQL fix for any query that fails these checks.
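A rough heuristic for the first two checks can be expressed as a small lint function. This is a sketch only: it assumes `ts` is the partition column, and a real audit should read the table's actual partitioning spec rather than pattern-match SQL text:

```python
import re

def pruning_issues(sql, partition_col="ts"):
    """Flag obvious pruning killers in a query string (heuristic, not a parser)."""
    issues = []
    if re.search(r"SELECT\s+\*", sql, re.IGNORECASE):
        issues.append("SELECT * reads every column")
    # A function wrapped around the partition column defeats pruning.
    if re.search(rf"(DATE|CAST|TIMESTAMP_TRUNC)\s*\(\s*{partition_col}\b",
                 sql, re.IGNORECASE):
        issues.append(f"function applied to partition column {partition_col}")
    return issues

# Hypothetical before/after pair illustrating the fix:
BAD_SQL = "SELECT * FROM t WHERE DATE(ts) = '2024-01-01'"
GOOD_SQL = ("SELECT user_id FROM t "
            "WHERE ts >= TIMESTAMP('2024-01-01') "
            "AND ts < TIMESTAMP('2024-01-02')")
```

The fixed query filters the raw `ts` column with a half-open range, so the engine can prune partitions, and names only the columns it needs.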
E. Idempotent writes
Identify every write operation. Flag plain INSERT/append with no dedup logic.
Each write should use one of:
- MERGE on a deterministic key (e.g., entity_id + date + model_version)
- dedup via QUALIFY ROW_NUMBER() OVER (PARTITION BY <key>) = 1
Also check:
- Is the write disposition (WRITE_TRUNCATE vs WRITE_APPEND) intentional and documented?
- Is run_id being used as part of the merge or dedupe key? If so, flag it. run_id should be stored as a metadata column, not as part of the uniqueness key, unless you explicitly want multi-run history.
State the recommended approach and the exact dedup key for this codebase.
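The MERGE pattern above, with run_id kept out of the key, can be sketched like this. The table names, the `score` column, and build_merge_sql itself are hypothetical illustrations using the example key from this section:

```python
# run_id is deliberately NOT in the key: reruns update in place
# instead of accumulating one row per run.
DEDUP_KEY = ("entity_id", "date", "model_version")

def build_merge_sql(target, staging, key=DEDUP_KEY):
    """Render an idempotent MERGE; run_id travels as metadata, not as key."""
    on_clause = " AND ".join(f"T.{c} = S.{c}" for c in key)
    return f"""
MERGE `{target}` T
USING `{staging}` S
ON {on_clause}
WHEN MATCHED THEN
  UPDATE SET score = S.score, run_id = S.run_id
WHEN NOT MATCHED THEN
  INSERT (entity_id, date, model_version, score, run_id)
  VALUES (S.entity_id, S.date, S.model_version, S.score, S.run_id)
"""
```

Because the ON clause is deterministic, running the pipeline twice for the same date range rewrites the same rows rather than duplicating them; run_id still records which run last touched each row.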
F. Failure visibility and logging
Verify:
- No swallowed failures: no bare except: pass or warn-only error handling
- A final summary log includes: run_id, env, mode, date_range, tables written, total BQ jobs, total bytes
- run_id is present and consistent across all log lines
If run_id is missing, propose a one-line fix:
run_id = run_id or datetime.utcnow().strftime('%Y%m%dT%H%M%S')
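Wrapping the one-line fix and the summary-log requirement into helpers might look like this sketch (field names taken from the checklist above; summary_line is an illustrative helper, not an existing API):

```python
import json
from datetime import datetime

def ensure_run_id(run_id=None):
    # The one-line fix above: generate a stable per-run identifier
    # once, then reuse it on every log line.
    return run_id or datetime.utcnow().strftime("%Y%m%dT%H%M%S")

def summary_line(run_id, env, mode, date_range, tables, total_jobs, total_bytes):
    """One machine-parseable summary record emitted at end of run."""
    return json.dumps({
        "run_id": run_id,
        "env": env,
        "mode": mode,
        "date_range": date_range,
        "tables_written": tables,
        "total_bq_jobs": total_jobs,
        "total_bytes": total_bytes,
    })
```

Emitting the summary as a single JSON line makes the run greppable by run_id and lets dashboards track job counts and bytes per run without parsing free-form text.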
Final output
1. PASS / FAIL with specific reasons per section (A to F).
2. Patch list ordered by risk, referencing exact functions to change.
3. If FAIL: Top 3 cost risks with a rough worst-case estimate (e.g., "loop over 90 dates x 3 retries = 270 BQ jobs").
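The worst-case arithmetic in point 3 is just a product of the loop dimensions; a tiny helper (illustrative, mirroring the example estimate above) makes the estimate explicit:

```python
def worst_case_jobs(n_dates, retries_per_job=3, queries_per_date=1):
    """Upper bound on BQ jobs a per-date loop can launch under retries."""
    return n_dates * queries_per_date * retries_per_job

# The example from point 3: a 90-day backfill with 3 retries per job.
print(worst_case_jobs(90))  # 270
```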