My Methodology

I'm an AI Architect, not a prompt hacker.

25+ years of engineering experience means I know the difference between improvised prompting and structured, production-grade AI development. Here's how I actually build.

The Problem

Vibe Coding vs. SPEC-Driven Development

What I don't do

Vibe Coding

  • Improvised, intuition-driven prompts
  • Hidden assumptions in every request
  • Unpredictable, untestable results
  • "Hope-based" debugging strategies
  • Context lost between sessions
  • No systematic approach to complexity
  • Delivery dates are guesswork

What I do

SPEC-Driven Development

  • Structured specifications with locked assumptions
  • Explicit context for every task
  • Deterministic, testable outputs
  • Systematic debugging with reset protocols
  • Persistent context architecture
  • Scalable methodology for any complexity
  • 90 days, guaranteed

The Framework

The SPEC Architecture

Every AI-assisted task begins with a SPEC — a structured document that eliminates ambiguity and ensures consistent, reproducible results. Here's what goes into each one:

Goal

The single, measurable outcome this task must achieve. No ambiguity, no scope creep.

Scope

Explicit boundaries — what's included, what's not, and why. Prevents the AI from hallucinating its way into adjacent features.

Definitions

Domain-specific terms explained. The AI and humans share the same vocabulary.

Inputs

What data, context, and resources are available. No assumptions about what exists.

Outputs

Expected deliverables with explicit format, structure, and acceptance criteria.

Rules

Constraints and requirements. What must always be true, what must never happen.

Edge Cases

Anticipated exceptions and how to handle them. The AI knows what to do when things go wrong.

Locked Assumptions

The critical piece. Explicit statements the AI must treat as absolute truth — no questioning, no reinterpretation.
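
To make the structure concrete, here's a minimal sketch of a SPEC captured as a data object. The class and the example values are illustrative assumptions, not a fixed schema; a SPEC can just as easily live in a plain document, as long as every field is filled in before the AI sees the task.

```python
from dataclasses import dataclass, field

@dataclass
class Spec:
    """A single AI-assisted task, fully specified before any code is generated."""
    goal: str                                                    # one measurable outcome
    scope_included: list[str] = field(default_factory=list)      # explicit boundaries: in
    scope_excluded: list[str] = field(default_factory=list)      # explicit boundaries: out
    definitions: dict[str, str] = field(default_factory=dict)    # shared vocabulary
    inputs: list[str] = field(default_factory=list)              # data, context, resources
    outputs: list[str] = field(default_factory=list)             # deliverables + acceptance criteria
    rules: list[str] = field(default_factory=list)               # must-always / must-never constraints
    edge_cases: dict[str, str] = field(default_factory=dict)     # exception -> handling
    locked_assumptions: list[str] = field(default_factory=list)  # treated as absolute truth

# Illustrative example, not a real project SPEC:
spec = Spec(
    goal="Add retry logic to the payment client; all transient failures retried at most 3 times",
    scope_included=["payment_client.py"],
    scope_excluded=["checkout UI", "refund flow"],
    definitions={"transient failure": "HTTP 5xx or network timeout"},
    inputs=["existing payment_client.py", "error taxonomy doc"],
    outputs=["patched client", "unit tests covering retry paths"],
    rules=["never retry on 4xx", "preserve public method signatures"],
    edge_cases={"retries exhausted": "raise PaymentUnavailable with the last error attached"},
    locked_assumptions=["the gateway is idempotent for identical request IDs"],
)
```

The value of the structure is that an empty field is visible: a missing locked assumption becomes a review comment before the session starts, not a surprise in the middle of it.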

The Process

Read → Plan → Code → Validate

Read

Gather context, understand existing code, identify dependencies

Plan

Structure the approach, create SPEC, define success criteria

Code

Execute with AI assistance, following the SPEC exactly

Validate

Test against criteria, review with human oversight

Why This Order Matters

Most AI coding failures happen because developers jump straight to "Code" without reading existing context or planning the approach. The AI makes assumptions, those assumptions conflict with reality, and debugging spirals begin.

My workflow ensures the AI always has complete context before generating a single line. Planning happens explicitly, not implicitly. And validation isn't an afterthought — it's built into every cycle.
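
As a sketch of how this ordering can be enforced by tooling rather than habit, here is one possible shape for the cycle in Python. The step functions are injected placeholders for whatever sits behind each phase (repository readers, planning prompts, model calls, test runners); only the ordering and the exit conditions are the point.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ValidationReport:
    passed: bool
    failures: list[str]

def run_cycle(
    spec,
    read_context: Callable,                     # Read: existing code, dependencies, constraints
    make_plan: Callable,                        # Plan: approach + success criteria, written down
    generate_code: Callable,                    # Code: AI-assisted generation, SPEC followed exactly
    validate: Callable[..., ValidationReport],  # Validate: test against acceptance criteria
    max_attempts: int = 3,
):
    """Run one Read -> Plan -> Code -> Validate cycle for a single SPEC."""
    context = read_context(spec)
    plan = make_plan(spec, context)

    notes: list[str] = []                       # explicit record of what failed attempts taught us
    for attempt in range(1, max_attempts + 1):
        candidate = generate_code(spec, context, plan, notes)
        report = validate(candidate, spec)
        if report.passed:
            return candidate
        notes.extend(report.failures)           # carry findings forward explicitly, not implicitly

    # Past the decay threshold: stop pushing and reset with an updated SPEC instead.
    raise RuntimeError("Validation failed after max attempts; reset with an updated SPEC")
```

Validation failures feed forward as explicit notes rather than as an ever-growing chat transcript, which matters for the decay curve discussed further down.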

The Lifecycle

AI-Native SDLC

01. Foundation

Architecture decisions, technology selection, environment setup. I establish the scaffolding that AI will work within — not the other way around. Human judgment defines the boundaries.

02. SPEC Creation

Break down requirements into SPEC-compliant tasks. Each task is scoped for a single AI session. Locked assumptions documented. Dependencies mapped, as sketched below the lifecycle.

03. Build & Validate

Execute the Read → Plan → Code → Validate cycle for each SPEC. AI generates, humans review. Every output tested against acceptance criteria before proceeding.

04. Integrate & Test

Combine validated components. System-level testing. Performance validation. Security review. The AI assists, but integration decisions remain human.

05. Maintain & Evolve

SPECs become living documentation. New features follow the same process. Technical debt tracked and addressed systematically. The methodology scales.
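
To make phase 02's "dependencies mapped" concrete, here is a small sketch using Python's standard-library topological sort. The task names and the split are hypothetical; the only point is that build order falls out of the declared dependencies rather than being decided ad hoc.

```python
from graphlib import TopologicalSorter

# Hypothetical feature broken into single-session SPEC tasks: task -> its prerequisites.
spec_tasks = {
    "spec-01-data-model": set(),
    "spec-02-api-endpoints": {"spec-01-data-model"},
    "spec-03-auth-middleware": {"spec-01-data-model"},
    "spec-04-integration-tests": {"spec-02-api-endpoints", "spec-03-auth-middleware"},
}

# Each task runs its own Read -> Plan -> Code -> Validate cycle, in an order
# that guarantees its prerequisites have already been built and validated.
build_order = list(TopologicalSorter(spec_tasks).static_order())
# one valid order: ['spec-01-data-model', 'spec-02-api-endpoints',
#                   'spec-03-auth-middleware', 'spec-04-integration-tests']
```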

The Science

Debugging Decay

[Chart: debugging effectiveness decay curve: 100%, 75%, 45%, 25%, 12%]

AI debugging effectiveness drops dramatically after 2-4 attempts in a single session.

I've observed — and the research confirms — that AI debugging follows a decay curve. The first attempt at fixing an issue has the highest success rate. Each subsequent attempt in the same context has diminishing returns as the AI accumulates conflicting assumptions.

My solution: the strategic reset. When I hit the decay threshold (typically 2-4 attempts), I don't keep pushing. I reset with a fresh context, updated SPEC, and explicit documentation of what I've learned. This isn't giving up — it's engineering discipline applied to AI collaboration.
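
As a sketch of how that discipline can be enforced by tooling rather than willpower, a session object can count attempts and force the reset once the threshold is reached. The threshold value and the helper names are assumptions for illustration, and the `Spec` object is the one sketched earlier.

```python
DECAY_THRESHOLD = 3  # assumed value; in practice tuned per project, typically in the 2-4 range

class DebugSession:
    """Tracks debugging attempts within one AI context and forces a strategic
    reset once the decay threshold is hit, instead of letting attempts pile up."""

    def __init__(self, spec):
        self.spec = spec
        self.attempts = 0
        self.lessons: list[str] = []      # what each failed attempt taught us

    def record_failure(self, lesson: str) -> None:
        self.attempts += 1
        self.lessons.append(lesson)

    def should_reset(self) -> bool:
        return self.attempts >= DECAY_THRESHOLD

    def reset(self) -> "DebugSession":
        """Fold the lessons into the SPEC as locked assumptions and start a
        fresh session with a clean context, rather than pushing on."""
        self.spec.locked_assumptions.extend(self.lessons)
        return DebugSession(self.spec)
```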

The Stack

Anthropic-Only Architecture

One stack. Infinite depth.

I don't chase the latest AI model announcements. I go deep on one stack — Anthropic's Claude — because depth beats breadth when you're building production systems.

If Claude can't do something, I build the tooling myself. If I can't build it, then — and only then — I look elsewhere. This discipline ensures I understand my tools completely, not superficially.

The result: predictable behavior, accumulated expertise, and no surprises when it matters most.
