Most teams pick tech like it’s a vibes-based wine tasting. Looks nice… might pair well… pray the hangover is mild.
That’s why “we’ll figure it out in sprint 5” becomes a six‑figure rewrite. Vendors pitch. Engineers debate. Finance flinches. Nobody is working from the same cost‑and‑timeline reality.
Here’s the fix: a Build Options Matrix that forces trade‑offs into the light with fresh pricing, real usage math, and explicit QA gates. No mysticism. Just evidence.
You’ll get to “we’re choosing Option B” in 60–90 minutes using five prompts.
I’ve run this process with Fortune 500 launches and scrappy field apps: it cuts scope creep and post‑launch surprises… and gets the budget signed.
Today you will:

- Capture your project inputs once (Prompt 1)
- Pull current pricing and case‑study benchmarks (Prompts 1–2)
- Build small/medium/large cost scenarios with confidence bands (Prompt 2)
- Compare 2–3 build options side by side and pick a winner (Prompt 3)
- Lock in risks, mitigations, and QA gates (Prompt 4)
- Write three ADR one‑pagers that document the decision (Prompt 5)

Before we dive into prompts, here’s the flow in plain English. You’ll give ChatGPT your key inputs once. Prompt 1 collects them all. After that, the system uses those inputs plus fresh web context to generate every artifact you need. Your outcome: a complete, copy‑paste ready Options Pack built step by step.
Have these ready:

- Project name and a 1–2 sentence goal statement with measurable success criteria
- Top 5 must‑haves and top 5 nice‑to‑haves, ranked
- Constraints: team size/roles, security/compliance, vendor limits, hard deadlines
- Existing systems: APIs, data models, hosting, CI/CD
- Usage assumptions: sign‑ins, map tiles, messages, MAU, region
- Cloud provider and region, budget ceiling, target dates, and the decision owner
Simple rails. Big leverage.
(We include inline citations whenever web context informs numbers or claims.)
Paste each prompt in order. Fill the bold placeholders once in Prompt 1. Browsing is assumed and used in Prompts 1–2 for current pricing and benchmarks.
Use this to capture your project inputs and scan the web for current benchmarks. Summarize only what materially improves output. Prefer last 12–24 months.
👉 Copy this prompt into ChatGPT, fill in the bold placeholders, then run it.
DESCRIPTION
Capture project goals, constraints, and assets. Pull fresh web context: pricing, calculators, and recent case studies to benchmark costs and timelines.
TASK
Produce a Project Snapshot and Web Findings that will anchor cost models and optioning.
ROLE
You are OptionsScout Pro: a senior product and architecture consultant with web search and source evaluation skills.
INPUTS
- PROJECT NAME: **[PROJECT_NAME]**
- GOAL STATEMENT: **[1–2 sentences on outcome and measurable success criteria]**
- REQUIREMENTS
- Must‑haves (ranked, top 5): **[list with short why notes]**
- Nice‑to‑haves (ranked, top 5): **[list with short why notes]**
- CONSTRAINTS: **[team size/roles, security/compliance, vendor or contract limits, hard deadlines]**
- EXISTING SYSTEMS: **[APIs, data models, hosting, CI/CD, key repos/docs links]**
- USAGE ASSUMPTIONS: **[auth sign‑ins/month, map tiles/month, messages/month, MAU, region]**
- CLOUD & REGION: **[AWS/Azure/GCP + region]**
- BUDGET & TIMELINE WINDOW: **[budget ceiling, target start/end]**
- DECISION OWNER/APPROVER: **[roles or names]**
- FEATURE OR COMPONENTS IN SCOPE: **[e.g., Auth, Maps, Messaging, Core Hosting]**
AUTO WEB PULLS (last 12–24 months; summarize only what matters)
- Pricing pages: Auth0, Okta, Firebase Auth; Mapbox, Google Maps Platform, HERE; Twilio, SendGrid, Firebase Cloud Messaging
- Cloud calculators/docs: AWS, Azure, Google Cloud
- 3 public case studies or postmortems for similar builds
- SERP top‑10 for “[FEATURE] architecture case study” and “implementation tradeoffs”
WORKFLOW (no steps skipped)
1) Validate inputs: list any missing or ambiguous fields.
2) Restate the GOAL and SUCCESS CRITERIA plainly; flag conflicts with constraints.
3) Run web pulls. Capture only pricing units, free tiers, notable thresholds, and any recency notes.
4) Extract 3–5 timeline or cost signals from case studies for benchmarking (with short context).
5) List 3–6 key assumptions that will drive cost (usage, region, data egress, SMS geography).
6) Output the Project Snapshot and Web Findings with concise citations.
OUTPUT FORMAT
- Project Snapshot
• Goal & Success Criteria:
• Scope Components:
• Must‑haves vs Nice‑to‑haves (ranked):
• Constraints (people, security, vendor, deadlines):
• Existing Systems Summary:
• Usage Assumptions:
• Cloud & Region:
• Budget & Timeline Window:
• Missing Inputs To Confirm:
- Web Findings (Recent)
• Pricing Signals by Component (Auth, Maps, Messaging) with units and thresholds [Source, Year]
• Cloud Calculator Notes and cost drivers [Source, Year]
• 3 Case Study Benchmarks: summary + timeline/cost signal [Source, Year]
• SERP Highlights: 3 insights on tradeoffs [Source, Year]
- Key Cost Assumptions List (numbered, 5–8 items)
RULES
- Prefer primary/official sources; last 12–24 months where possible.
- Use short inline citations like (Auth0 Docs, 2024).
- No chain‑of‑thought; concise bullets; no fluff.
- Mask any sensitive or proprietary data.
- If a vendor page requires sign‑in, skip it and note the omission.
FALLBACK
If browsing is unavailable: produce a Web Pull Plan listing exact URLs to fetch and mark data as [PLACEHOLDER]. Use conservative defaults for units and thresholds and flag confidence as LOW.
Convert the raw web findings into usable unit rates and usage bands. Then build small/medium/large cost scenarios with confidence bands.
👉 Copy this prompt into ChatGPT, then run it as‑is. Do not add new inputs.
DESCRIPTION
Turn the Project Snapshot and Web Findings into a cost model: unit rates, usage bands, and monthly/annual scenarios.
TASK
Produce unit cost tables and small/medium/large scenarios for Auth, Maps, Messaging, and Cloud, with confidence bands and citations.
ROLE
You are CostModeler CXO: a senior product finance and cloud architecture analyst with pricing calculator fluency.
INPUTS
- Use all outputs from Prompt 1 (Project Snapshot, Web Findings, Key Assumptions).
WORKFLOW (no steps skipped)
1) Normalize usage assumptions: define SMALL / MEDIUM / LARGE bands. If usage inputs are missing, set conservative default bands and label each one as an Assumption.
2) Extract unit rates from Web Findings: auth MAUs or logins, map tile/MAU pricing, SMS/email per‑unit rates, and cloud drivers (compute, storage, egress).
3) Compute monthly and annual costs for each band per component; include free tier offsets where applicable.
4) Aggregate into a simple total for each band; show a +/- confidence range based on source clarity and variability.
5) Note any material breakpoints that could change the winner (e.g., MAU > 50k flips auth pricing).
6) Output tables plus a CSV block suitable for spreadsheet import.
OUTPUT FORMAT
- Usage Bands
• SMALL:
• MEDIUM:
• LARGE:
• Notes:
- Unit Rates Table (Markdown)
| Component | Unit | Unit Rate | Free Tier | Key Thresholds | Source (Year) |
- Cost Scenarios Table (Markdown)
| Band | Auth/mo | Maps/mo | Messaging/mo | Cloud/mo | Total/mo | Total/yr | Confidence |
- CSV Export (copy‑paste)
component,band,units,unit_rate,monthly_cost,annual_cost,source_year,notes
[multiple rows...]
- Breakpoints To Watch (bullets, with citations)
RULES
- Cite each rate or threshold inline like (Mapbox Pricing, 2024).
- Keep tables tight and legible; no long prose.
- No chain‑of‑thought; compute directly and list assumptions clearly.
- Use region‑specific pricing if provided; otherwise note as “generic”.
FALLBACK
If a component lacks current pricing: use last known official rates and mark confidence LOW; provide a link placeholder.
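The band math in steps 3–4 is easy to sanity‑check outside ChatGPT: billable units are usage minus the free tier (never below zero), times the unit rate. A minimal Python sketch for one component, where the rate and free tier are placeholders, not current vendor pricing:

```python
def monthly_cost(units: float, unit_rate: float, free_tier_units: float = 0.0) -> float:
    """Cost for one component in one band, after the free-tier offset."""
    billable = max(units - free_tier_units, 0.0)
    return billable * unit_rate

# Placeholder figures -- swap in the unit rates from your Web Findings.
bands = {"SMALL": 10_000, "MEDIUM": 50_000, "LARGE": 250_000}  # e.g., auth MAUs
AUTH_RATE, AUTH_FREE = 0.02, 7_500  # assumed $/MAU and free-tier MAUs

for band, maus in bands.items():
    m = monthly_cost(maus, AUTH_RATE, AUTH_FREE)
    print(f"{band}: ${m:,.2f}/mo  ${m * 12:,.2f}/yr")
```

Run the same check on any rate ChatGPT cites; if a total doesn’t reproduce, the model likely missed a free tier or a threshold.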
Draft 2–3 concrete options and the side‑by‑side matrix your execs can approve.
👉 Copy this prompt into ChatGPT, then run it as‑is. Do not add new inputs.
DESCRIPTION
Synthesize options and present a compact side‑by‑side comparison with costs, timelines, complexity, team fit, feature coverage, and lock‑in.
TASK
Create 2–3 realistic build options using the cost model. Include short narrative summaries and a comparison table.
ROLE
You are OptionSmith: a senior product and solution architect who communicates like a COO.
INPUTS
- Use outputs from Prompts 1–2: Project Snapshot, Unit Rates, Cost Scenarios.
WORKFLOW (no steps skipped)
1) Define 2–3 viable options spanning buy‑heavy, build‑lean, and hybrid patterns.
2) For each option, specify:
- Short Name
- Required Components
- High‑Level Architecture (1–2 bullets)
- Estimated Cost Range (monthly and annual, tie to bands)
- Estimated Timeline Range (incl. any critical path assumptions)
- Complexity (Low/Med/High) with 1‑line rationale
- Team Fit (roles required and alignment with available team)
- Feature Coverage vs Must‑haves/Nice‑to‑haves
- Vendor Lock‑in Notes
3) Build a side‑by‑side comparison table with the above attributes.
4) Call out any breakpoints from Prompt 2 that could flip the recommendation.
5) Recommend one option and state why, referencing success criteria and constraints.
OUTPUT FORMAT
- Option Summaries (2–3)
• [Option Name]
- Components:
- Architecture:
- Cost Range (mo/yr, band basis):
- Timeline Range:
- Complexity:
- Team Fit:
- Feature Coverage:
- Vendor Lock‑in:
- Side‑by‑Side Options Matrix (Markdown)
| Attribute | Option A | Option B | Option C |
- Breakpoints That Could Flip Decision (bullets; cite where used)
- Recommendation
• Chosen Option:
• Rationale linked to success criteria, constraints, and costs (include short citations as needed).
RULES
- Keep each option summary ≤ 12 lines.
- Include inline citations for any web‑informed claims (Source, Year).
- No new research; rely on Prompts 1–2.
- No chain‑of‑thought.
FALLBACK
If only one option seems viable, still produce a “Do Nothing” baseline and a “Deferred Hybrid” option for contrast.
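Feature coverage in step 2 is just set arithmetic, so you can verify it yourself if the matrix looks off. A quick Python sketch with hypothetical feature names and options (none of these come from the prompts):

```python
# Hypothetical must-haves and option feature sets for illustration only.
must_haves = {"sso", "mfa", "offline_maps", "push", "audit_log"}
options = {
    "A (buy-heavy)": {"sso", "mfa", "push", "audit_log"},
    "B (hybrid)": {"sso", "mfa", "offline_maps", "push"},
}

for name, features in options.items():
    covered = must_haves & features          # intersection = what's delivered
    missing = must_haves - features          # difference = the gap to flag
    pct = 100 * len(covered) / len(must_haves)
    print(f"{name}: {pct:.0f}% must-have coverage; missing: {sorted(missing) or 'none'}")
```

A coverage gap on a ranked must‑have is exactly the kind of breakpoint that should flip the recommendation in step 4.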
Lock in release quality: risks, mitigations, and acceptance gates. Then pre‑answer the two objections that usually stall sign‑off.
👉 Copy this prompt into ChatGPT, then run it as‑is. Do not add new inputs.
DESCRIPTION
Turn options into an execution‑ready risk plan with QA gates and monitoring triggers. Address top objections.
TASK
Produce a risk register per option with mitigations and explicit QA/acceptance gates. Add two common objections with counters tied to the artifact.
ROLE
You are RiskSherpa: a senior delivery lead and QA architect who thinks in gates and rollbacks.
INPUTS
- Use all prior outputs, especially the chosen option and constraints.
WORKFLOW (no steps skipped)
1) For each option, list Top 5 Risks with:
- Severity (Low/Med/High)
- Mitigation (specific action/owner)
- Residual Risk note
2) Define QA/Acceptance Gates:
- Gate name, entry criteria, exit criteria
- One test or metric per gate that defines readiness
- Monitoring or rollback trigger tied to each gate
3) Add a “Deployment Fitness Checklist” for the chosen option.
4) Objections:
- “This is too expensive”
- “Vendor lock‑in will hurt us later”
Provide 2–3 line counters tied to the matrix and cost model.
5) State recommended funnel placement:
- Where this Options Pack sits in your process
- Who approves and when
OUTPUT FORMAT
- Risk Register (per option, table)
| Risk | Severity | Mitigation | Residual Risk |
- QA/Acceptance Gates (chosen option)
| Gate | Entry Criteria | Exit Criteria | Test/Metric | Rollback Trigger |
- Deployment Fitness Checklist (bullets)
- Objections & Counters (bullets, 2–3 lines each)
- Process Placement & Roles (bullets)
RULES
- Be specific: owners, metrics, and exit criteria must be verifiable.
- Keep each risk line ≤ 20 words.
- No chain‑of‑thought; crisp and shippable.
FALLBACK
If risks feel thin, broaden to operational risks: data quality, observability, on‑call coverage, accounts/permissions, and support runbooks.
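If your team later automates these gates, each one reduces to a metric, a threshold, and a rollback trigger. A minimal Python sketch, assuming hypothetical gate names and thresholds:

```python
from dataclasses import dataclass

@dataclass
class Gate:
    """One QA/acceptance gate: a metric, a threshold, and a pass direction."""
    name: str
    metric: str
    threshold: float
    higher_is_better: bool = True

    def passes(self, observed: float) -> bool:
        if self.higher_is_better:
            return observed >= self.threshold
        return observed <= self.threshold

# Illustrative gates and readings -- not prescriptive values.
gates = [
    Gate("Load test", "p95_latency_ms", 300, higher_is_better=False),
    Gate("Auth soak", "login_success_rate", 0.995),
]
observed = {"p95_latency_ms": 280, "login_success_rate": 0.997}

for g in gates:
    status = "PASS" if g.passes(observed[g.metric]) else "FAIL -> rollback"
    print(f"{g.name}: {g.metric}={observed[g.metric]} ({status})")
```

The point of the sketch: a gate that can’t be expressed this simply probably has unverifiable exit criteria, which the RULES above forbid.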
Generate the three ADR one‑pagers. These are the receipts that prevent amnesia six weeks later.
👉 Copy this prompt into ChatGPT, then run it as‑is. Do not add new inputs.
DESCRIPTION
Write three Architecture Decision Records for the highest‑impact choices and align them to the matrix and costs.
TASK
Produce three ADR one‑pagers: auth provider, hosting model, messaging stack. Include context, alternatives, decision, consequences, and review cadence.
ROLE
You are ADRForge: a principal architect who writes decisions teams actually read.
INPUTS
- Use prior outputs only: Project Snapshot, Cost Scenarios, Options Matrix, Risks/QA.
WORKFLOW (no steps skipped)
1) Select the three highest‑impact decisions:
- Auth provider
- Hosting model (cloud/region)
- Messaging stack (SMS/email/push)
If a different decision is clearly higher impact in prior outputs, swap it in.
2) For each ADR, include:
- Title: ADR‑### Short Decision Name
- Status: Proposed / Accepted
- Context: project goal, constraints, key usage
- Alternatives Considered: 2–3 with brief pros/cons
- Decision: what we chose and why (link to success criteria and costs)
- Consequences & Risks: with mitigations and any QA gates
- Review Cadence: trigger events or dates
- Sources: short inline citations (Source, Year)
3) Run a consistency pass: ensure cited numbers match Prompt 2 and the Options Matrix.
OUTPUT FORMAT
- ADR 1: Auth Provider Choice (Markdown one‑pager)
- ADR 2: Hosting Model (Markdown one‑pager)
- ADR 3: Messaging Stack (Markdown one‑pager)
RULES
- One page per ADR; bullets over prose.
- Include at least two citations where costs or benchmarks informed the decision.
- No chain‑of‑thought.
FALLBACK
If insufficient data for one ADR: mark Status as Proposed and list the 2–3 inputs needed to finalize.
You’ll produce a decision‑ready Options Pack:

- Project Snapshot and Web Findings with citations
- Unit rates and small/medium/large cost scenarios, plus a CSV export
- A side‑by‑side Options Matrix with a recommendation
- A risk register, QA/acceptance gates, and objection counters
- Three ADR one‑pagers your team can file and revisit
This guide turns “we’ll know it when we see it” into “we picked Option B for clear reasons.” It sits between discovery and backlog definition so engineering starts on rails, not quicksand.
Founders, CTOs, and product leads benefit when budgets are tight, timelines are real, and vendor pitches are noisy. If you’re inheriting a mess or scaling a v1, this is your pre‑commit sanity check.
Run Prompt 1 with your inputs. In one working session you’ll have costs, options, risks, and ADRs your execs can sign. Then route the winner into backlog grooming and start cutting tickets.
Decide faster. Ship smarter. Sleep better.
Book a 20‑Minute Audit → https://calendly.com/seisan-jt/15min