[Prompt Guide] The Legacy Stack Mapper: Map Every System Dependency in 30 Minutes

Most system audits die in the “Discovery” phase. You start with a spreadsheet, but the effort stalls when nobody can remember what talks to what. Three weeks in, you have a half-finished diagram that’s already outdated.

At Seisan, we use what we call the “Pilot vs. Engine” approach: you stay in the pilot’s seat, supplying context and validating output, while Claude does the systematic legwork. We’ve developed a prompt that turns Claude into a Dependency Mapping Engine, one that identifies every connection, categorizes the integration type, and scores each one for failure risk.

The Problem: The “Discovery Graveyard”

  • Tribal Knowledge: The person who built the original integration left the company in 2019. Nobody documented how SAP talks to the dispatch system.
  • Invisible Dependencies: A CSV export that runs every night is silently feeding data to three downstream systems. If it breaks, nobody knows until invoices stop going out.
  • Manual Transfer Blind Spots: Someone on the ops team has been copy-pasting data between two systems every Monday morning for three years. It’s now “the process.”
  • Zero Risk Visibility: Leadership can’t prioritize integration work because nobody can quantify which connection failures would actually halt operations.

Workflow: The 30-Minute Mapping Session

The Goal: Turn a tangled, undocumented legacy stack into a complete dependency map with risk scores—in a single 30-minute session with Claude.

The Strategy:

  1. List every system in your stack (CRM, ERP, databases, spreadsheets, billing, etc.).
  2. For each system, note the integration method: API, CSV/batch, webhook, or manual.
  3. Paste your system inventory into the prompt below. (A sample inventory format follows this list.)
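
Not sure what the inventory should look like? Here is an illustrative sample; every system name and method below is a placeholder, so substitute your own:

Salesforce CRM: REST API, pushes contacts to ERP daily
SAP ERP: nightly CSV/SFTP batch to dispatch; SOAP connection to billing
Dispatch System: receives a nightly feed; origin undocumented
QuickBooks Billing: manual copy-paste from ERP every Monday
Warehouse DB (SQL Server): direct DB queries from dispatch, real-time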

The Prompt:

You are an Enterprise Systems Architect specializing in legacy
integration mapping and dependency analysis. I am providing a
list of systems in our technology stack along with their known
connection methods.

Act as my Execution Engine to produce:

1. DEPENDENCY MAP: For every system pair, identify the connection
   type (REST API, SOAP, webhook, CSV/SFTP batch, direct DB query,
   manual copy-paste) and data flow direction.

2. SINGLE POINT OF FAILURE SCAN: Flag any system that, if it went
   down for 4 hours, would cascade into 2+ other systems failing.

3. INTEGRATION TYPE AUDIT: Categorize every connection as:
   - GREEN (API/Webhook: automated, real-time, recoverable)
   - YELLOW (CSV/Batch: automated but delayed, brittle)
   - RED (Manual: human-dependent, error-prone, unscalable)

4. RISK PRIORITY MATRIX: For each connection, score:
   - Failure Impact (1-5): What breaks if this connection fails?
   - Frequency (real-time / daily / weekly / monthly)
   - Recovery Time: How long to restore if it breaks?
   - Data Volume: How many records flow through per cycle?

5. COST OF INACTION: For every RED (manual) connection, estimate
   the monthly labor cost assuming $45/hr and the error rate
   based on typical manual data entry benchmarks.

Output: A structured dependency report with a prioritized
remediation roadmap. Start with the highest-risk connections
that should be automated first.

Here is our system inventory:
[PASTE YOUR SYSTEMS LIST HERE]
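
If you want to sanity-check what Claude returns, the core of the analysis is simple enough to rough out in code. Below is a minimal, illustrative Python sketch, not part of the prompt: the inventory, connection types, and thresholds are made-up placeholders. It treats the single-point-of-failure scan as a reachability question over a directed graph and applies the same $45/hr labor assumption from step 5.

from collections import defaultdict

# Toy inventory: (source, target, connection type). Placeholders only.
CONNECTIONS = [
    ("CRM", "ERP", "manual"),          # weekly copy-paste
    ("ERP", "Dispatch", "csv_batch"),  # nightly SFTP drop
    ("ERP", "Billing", "rest_api"),
    ("Dispatch", "WarehouseDB", "rest_api"),
]

# The prompt's GREEN/YELLOW/RED buckets, keyed by connection type.
RISK_BUCKET = {"rest_api": "GREEN", "webhook": "GREEN",
               "csv_batch": "YELLOW", "manual": "RED"}

def downstream(system, edges):
    """Every system reachable from `system` along the data-flow direction."""
    graph = defaultdict(list)
    for src, dst, _ in edges:
        graph[src].append(dst)
    seen, stack = set(), [system]
    while stack:
        for nxt in graph[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def single_points_of_failure(edges, cascade_threshold=2):
    """Systems whose outage cascades into `cascade_threshold`+ others."""
    systems = {s for src, dst, _ in edges for s in (src, dst)}
    impact = {s: downstream(s, edges) for s in systems}
    return {s: hit for s, hit in impact.items() if len(hit) >= cascade_threshold}

def manual_monthly_cost(hours_per_week, rate=45.0):
    """Monthly labor cost of a manual transfer at the prompt's $45/hr."""
    return hours_per_week * 52 / 12 * rate

for src, dst, kind in CONNECTIONS:
    print(f"{src} -> {dst}: {RISK_BUCKET[kind]}")
for system, hit in single_points_of_failure(CONNECTIONS).items():
    print(f"SPOF: {system} cascades to {sorted(hit)}")
print(f"2 hr/week manual transfer: ~${manual_monthly_cost(2):,.0f}/month")

Claude does this reasoning inside the prompt; the point of the sketch is that it makes the output easy to spot-check, since flagging a single point of failure is just asking which systems sit upstream of everything else.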

The 6-Point Integration Health Check

Use this checklist alongside the AI audit to pressure-test your integration layer:

  1. Authentication Audit: Is every API connection using token-based auth with rotation, or are there hardcoded credentials from 2017 still in a config file somewhere?
  2. Error Handling: When a nightly CSV transfer fails, does anyone get alerted, or does the data just silently stop flowing until someone notices downstream? (A minimal alerting sketch follows this checklist.)
  3. Scalability Check: Your batch job processes 500 records per night today. What happens when that becomes 5,000? Will the connection, the target system, and the transform logic all handle 10x?
  4. Vendor Lock-in: List every third-party connector and middleware tool. If that vendor doubles their price or gets acquired, what’s your migration path?
  5. The Bus Factor: If the one person who understands the SAP-to-SQL integration quits tomorrow, can someone else debug it? Is there documentation, or is it tribal knowledge?
  6. Compliance Exposure: Which connections transmit PII, financial data, or regulated information? Are those connections encrypted in transit and at rest? Who has access?
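
Item 2 is usually the cheapest to fix and the most common silent killer. Here is a minimal Python sketch of the pattern, assuming a generic JSON webhook for alerting; run_nightly_transfer and ALERT_WEBHOOK are hypothetical placeholders for your real job and endpoint:

import json
import urllib.request

# Hypothetical alerting endpoint (Slack, PagerDuty, Opsgenie, etc.).
ALERT_WEBHOOK = "https://example.com/hooks/integration-alerts"

def send_alert(message: str) -> None:
    """POST a failure notice so a failed transfer pages a human."""
    body = json.dumps({"text": message}).encode()
    req = urllib.request.Request(
        ALERT_WEBHOOK, data=body,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def run_nightly_transfer() -> None:
    """Placeholder for the real CSV export/import job."""
    raise NotImplementedError("wire up your actual transfer here")

try:
    run_nightly_transfer()
except Exception as exc:  # any failure should alert, never fail silently
    send_alert(f"Nightly CSV transfer failed: {exc!r}")
    raise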

The Seisan Rule: Don’t Just Connect. Architect.

A dependency map is a diagnostic, not a cure. If your map reveals more than two RED connections handling mission-critical data, you don’t have an integration problem—you have an architecture problem. And no amount of duct tape will fix it.

Every manual CSV transfer is costing you labor hours and introducing error rates that compound monthly. At the prompt’s $45/hr figure, even a two-hour weekly transfer runs roughly $390 a month before you count the cost of correcting errors. Every undocumented API connection is a ticking clock. The question isn’t whether it will break; it’s whether you’ll know when it breaks.

A Prompt Is Not an Architecture Review

This prompt guide is a starting point—a fast way to surface hidden dependencies and quantify integration risk. But it does not replace a formal systems architecture review conducted by engineers who understand failure modes, data governance, and enterprise middleware.

If your stack handles financial transactions, healthcare data, real-time operations, or regulated workloads, you need human architects validating every connection. Seisan offers full-scope integration audits that go beyond what any prompt can deliver—architecture review, middleware selection, data flow modeling, and hands-on remediation. Reach out to our team and we’ll scope an engagement that fits.


Case Study: From “Nobody Knows What Talks to What” to Full Visibility in One Sprint

A mid-market manufacturing company came to Seisan after a failed ERP migration. The migration stalled because nobody could document how their legacy systems were connected. They’d spent six weeks in discovery with their previous vendor and still didn’t have a complete dependency map.

When our team ran the equivalent of the mapping prompt above—and then followed it with a hands-on architecture review—we found:

  • 14 system-to-system connections that weren’t documented anywhere—including a nightly batch job feeding their dispatch system that nobody on the current team had set up.
  • 3 manual CSV transfers costing an estimated $2,800/month in labor—one of which had a 4% error rate that was causing invoice discrepancies.
  • A single point of failure in their inventory sync: one middleware service that, if it went down, would silently halt order fulfillment across all three warehouse locations.
  • Hardcoded API credentials in two integration scripts that hadn’t been rotated since 2019.

We worked with their IT team over a focused 4-week sprint to remediate every finding:

  • Replaced all 3 manual CSV transfers with automated API integrations, saving $33,600/year in labor alone.
  • Built redundancy into the inventory sync with automatic failover and real-time alerting via PagerDuty.
  • Rotated all credentials and implemented a secrets management vault with automated rotation.
  • Produced a living dependency map that auto-updates when new connections are added.

The result: the ERP migration that had been stalled for 3 months was completed in 6 weeks, on budget, with zero unplanned downtime.

The takeaway: The AI prompt surfaced 11 of the 14 undocumented connections in under 30 minutes. But it took experienced integration architects to validate the findings, design the failover patterns, and ensure the remediation didn’t introduce new single points of failure.


Ready to see what’s actually connected in your stack? Whether you’re planning a migration, auditing your integration layer, or just want to stop worrying about what breaks next, we can help you get there.

Schedule a Discovery Call with John