25+ years of engineering experience means I know the difference between improvised prompting and structured, production-grade AI development. Here's how I actually build.
Every AI-assisted task begins with a SPEC — a structured document that eliminates ambiguity and ensures consistent, reproducible results. Here's what goes into each one:
The single, measurable outcome this task must achieve. No ambiguity, no scope creep.
Explicit boundaries — what's included, what's not, and why. Prevents AI hallucination into adjacent features.
Domain-specific terms explained. The AI and humans share the same vocabulary.
What data, context, and resources are available. No assumptions about what exists.
Expected deliverables with explicit format, structure, and acceptance criteria.
Constraints and requirements. What must always be true, what must never happen.
Anticipated exceptions and how to handle them. The AI knows what to do when things go wrong.
The critical piece. Explicit statements the AI must treat as absolute truth — no questioning, no reinterpretation.
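Taken together, a SPEC can be captured as a small structured record. The sketch below is one way to express it in Python; the field names and example values are my own illustration, not a fixed template.

    from dataclasses import dataclass, field

    @dataclass
    class Spec:
        """Illustrative shape of one task's SPEC; field names are hypothetical."""
        objective: str                                               # single, measurable outcome
        in_scope: list[str] = field(default_factory=list)            # explicit boundaries
        out_of_scope: list[str] = field(default_factory=list)
        glossary: dict[str, str] = field(default_factory=dict)       # domain term -> definition
        inputs: list[str] = field(default_factory=list)              # available data, context, resources
        outputs: list[str] = field(default_factory=list)             # deliverables + acceptance criteria
        rules: list[str] = field(default_factory=list)               # must always / must never
        edge_cases: dict[str, str] = field(default_factory=dict)     # exception -> how to handle it
        locked_assumptions: list[str] = field(default_factory=list)  # treated as absolute truth

    spec = Spec(
        objective="Rate-limit the /login endpoint to 5 attempts per minute per IP",
        in_scope=["middleware", "config flag"],
        out_of_scope=["account lockout policy"],
        rules=["Never log raw passwords"],
        locked_assumptions=["Redis is available in every environment"],
    )

Whether a SPEC lives as code, YAML, or a plain document matters less than filling every one of these slots before the AI sees the task.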
Read: Gather context, understand existing code, identify dependencies
Plan: Structure the approach, create the SPEC, define success criteria
Code: Execute with AI assistance, following the SPEC exactly
Validate: Test against acceptance criteria, review with human oversight
Most AI coding failures happen because developers jump straight to "Code" without reading existing context or planning the approach. The AI makes assumptions, those assumptions conflict with reality, and debugging spirals begin.
My workflow ensures the AI always has complete context before generating a single line. Planning happens explicitly, not implicitly. And validation isn't an afterthought — it's built into every cycle.
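Here is a toy skeleton of that cycle, with placeholder functions standing in for what are really human-plus-AI activities; none of these names come from a real tool.

    def read_context(spec: dict) -> dict:
        # Read: gather the existing code, docs and dependencies the SPEC points at
        return {"sources": spec.get("inputs", [])}

    def make_plan(spec: dict, context: dict) -> list[str]:
        # Plan: an explicit, human-reviewed sequence of steps with success criteria
        return [f"implement: {spec['objective']}", "write tests", "run tests"]

    def generate_code(spec: dict, plan: list[str]) -> str:
        # Code: AI-assisted generation, constrained by the SPEC and the plan
        return "# generated change goes here"

    def validate(artifact: str, spec: dict) -> bool:
        # Validate: test the output against the SPEC's acceptance criteria
        return bool(artifact)

    def run_task(spec: dict) -> str:
        context = read_context(spec)
        plan = make_plan(spec, context)
        artifact = generate_code(spec, plan)
        if not validate(artifact, spec):
            raise RuntimeError("Acceptance criteria not met: revise the SPEC, do not just retry")
        return artifact

The gate at the end is the point: nothing moves forward on the AI's own assurance that it worked.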
Architecture decisions, technology selection, environment setup. I establish the scaffolding that AI will work within — not the other way around. Human judgment defines the boundaries.
Break down requirements into SPEC-compliant tasks. Each task is scoped for a single AI session. Locked assumptions documented. Dependencies mapped (see the sketch below).
Execute the Read → Plan → Code → Validate cycle for each SPEC. AI generates, humans review. Every output tested against acceptance criteria before proceeding.
Combine validated components. System-level testing. Performance validation. Security review. The AI assists, but integration decisions remain human.
SPECs become living documentation. New features follow the same process. Technical debt tracked and addressed systematically. The methodology scales.
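The dependency mapping from the decomposition phase can be as lightweight as a dictionary of task IDs. In the hypothetical breakdown below, Python's standard-library graphlib turns that map into an execution order and catches cycles early.

    from graphlib import TopologicalSorter

    # Hypothetical decomposition of one feature into SPEC-scoped tasks, each small
    # enough for a single AI session, with prerequisites mapped explicitly.
    tasks = {
        "rate-limit-config":     [],
        "rate-limit-middleware": ["rate-limit-config"],
        "login-endpoint-wiring": ["rate-limit-middleware"],
        "integration-tests":     ["login-endpoint-wiring"],
    }

    # graphlib (Python 3.9+) yields a valid execution order and fails fast on cycles.
    order = list(TopologicalSorter(tasks).static_order())
    print(order)  # ['rate-limit-config', 'rate-limit-middleware', ...]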
AI debugging effectiveness drops dramatically after 2-4 attempts in a single session.
I've observed — and the research confirms — that AI debugging follows a decay curve. The first attempt at fixing an issue has the highest success rate. Each subsequent attempt in the same context has diminishing returns as the AI accumulates conflicting assumptions.
My solution: the strategic reset. When I hit the decay threshold (typically 2-4 attempts), I don't keep pushing. I reset with a fresh context, updated SPEC, and explicit documentation of what I've learned. This isn't giving up — it's engineering discipline applied to AI collaboration.
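One way to picture that discipline in code; the three-attempt threshold, the lesson strings, and the attempt_fix callable are all illustrative, not a real tool.

    MAX_ATTEMPTS = 3   # somewhere in the observed 2-4 range
    MAX_RESETS = 2     # after this, escalate to a human instead of grinding on

    def debug_with_resets(spec: dict, attempt_fix) -> bool:
        """attempt_fix(spec) is any debugging step returning (fixed, lesson)."""
        for _reset in range(MAX_RESETS + 1):
            for _attempt in range(MAX_ATTEMPTS):
                fixed, lesson = attempt_fix(spec)
                if fixed:
                    return True
                # Record what this failed attempt taught us before trying again.
                spec.setdefault("lessons", []).append(lesson)
            # Decay threshold hit: fold the lessons into the SPEC as locked
            # assumptions and restart with a fresh context, not a patched one.
            spec.setdefault("locked_assumptions", []).extend(spec.pop("lessons", []))
        return False

The mechanics matter less than the rule they encode: after a handful of failed attempts, the learnings move into the SPEC and the old context gets thrown away.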
I don't chase the latest AI model announcements. I go deep on one stack — Anthropic's Claude — because depth beats breadth when you're building production systems.
If Claude can't do something, I build the tooling myself. If I can't build it, then — and only then — I look elsewhere. This discipline ensures I understand my tools completely, not superficially.
The result: predictable behavior, accumulated expertise, and no surprises when it matters most.