My AI Coding Workflow
How I'm using Claude Code to build a full-stack app while maintaining architectural understanding
I've been using AI agents to facilitate my work for a year. As a staff engineer, my work mostly revolves around coding (less and less), writing proposals, and driving technical strategy. I've seen a tremendous amount of hype, and valid concern, around software engineers being replaced by AI. Anecdotally, my role has shifted significantly as AI agents and LLMs have improved.
Before November of 2025, I spent far more time painstakingly tracing how different areas of the codebase worked, or wrangling complicated configuration by hand. I spent days or weeks writing RFCs. I also wrote codemods by hand for complex migrations. Claude Code has significantly sped up those aspects of my job.
Even so, with the help of AI I'm working more, not less, both in my day job at Honeycomb and on my side projects.
Any software engineer knows the backlog never ends. There's always another bug to fix, another feature request to ship, and another piece of tech debt to resolve. Before AI tools, the speed at which I could write code acted as a natural bottleneck. I'd finish a task, feel the weight of context-switching, and sometimes just stop for the day.
Now, that bottleneck is gone. I finish a task and immediately start the next one because starting is almost free. Oftentimes, I'm working on 4 or 5 tasks simultaneously. The activation energy has dropped to near zero.
This is not necessarily a benefit. I used to have to spend time validating my ideas and proving they were worth a significant investment. Now I can build any idea that pops into my head, but most of my ideas aren't all that good.
Going fast in the wrong direction is the same thing as going slow.
Let's assume for a moment that we actually are more productive with AI –– although I do think this remains to be seen.
Every company that has adopted AI tools in a significant way is getting the same speed boost. Shipping faster isn't a differentiator anymore. It's the new baseline.
And the system adapts. Stakeholders see what's possible, so they expect more. The backlog doesn't shrink –– it grows. The faster I ship, the faster new requests come in. The tool didn't reduce my workload. It increased my throughput, and the system (and my own brain) responded by feeding me more work.
If raw feature velocity isn't a differentiator anymore, what is?
I think the answer is platform work: the infrastructure, tooling, and guardrails that make all future work cheaper and higher quality.
I have spent the last decade arguing with leaders, managers, and other ICs about how important a high-quality codebase is to keeping feature work moving. But AI is shifting those norms, and I think now is the time to seize the opportunity and build better, stronger platforms.
Here are a few places I'd start.
AI agents don't know that your team decided to stop using Redux two years ago. They don't know you're halfway through migrating from REST to GraphQL. They'll repeat old mistakes over and over unless that knowledge is encoded somewhere they can access.
You can make this easier in at least two ways.
Add backstops via linters. If you have an in-progress migration, add lint rules that prevent new code from using the old pattern. This was always good practice, but it's now essential. An engineer might remember the migration. An AI agent won't. A linter catches both. I wrote more about using codemods to scale migrations.
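As a concrete sketch of what that can look like, here's ESLint's built-in no-restricted-imports rule used to block a hypothetical in-progress Redux migration (the packages and messages are illustrative; substitute whatever pattern your team is actually moving away from):

```js
// eslint.config.js: a minimal sketch using ESLint's flat config.
// The Redux packages below stand in for the "old pattern"; swap in
// whatever your team is migrating away from.
export default [
  {
    files: ["src/**/*.{js,jsx,ts,tsx}"],
    rules: {
      "no-restricted-imports": [
        "error",
        {
          paths: [
            { name: "redux", message: "We migrated off Redux; use the current state pattern instead." },
            { name: "react-redux", message: "We migrated off Redux; use the current state pattern instead." },
          ],
        },
      ],
    },
  },
];
```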
Colocate documentation with code. Docs in a wiki don't help an AI agent that's reading your codebase. Architecture decisions, pattern guides, and migration notes need to live next to the code they describe. I'm talking CLAUDE.md files, READMEs in directories, comments at the top of modules explaining why something exists.
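For example, a directory-level CLAUDE.md might look something like this (the directory, dates, and decisions here are made up for illustration):

```md
# src/billing/CLAUDE.md

- State management: we stopped using Redux in 2023. New code uses React context
  plus hooks; do not add new Redux stores or reducers.
- API: we are partway through a REST-to-GraphQL migration. New reads go through
  the GraphQL client; the REST client remains only for legacy invoicing endpoints.
- If you change the data model, update the architecture notes in this directory
  in the same PR.
```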
If you work on the web, you're probably dealing with outdated dependencies constantly. AI makes this worse. The agent reads documentation for the latest version of a library, but your codebase is three versions behind. It writes code that looks right but doesn't work.
At Honeycomb, we decreased frontend dependency drift by 39.5% in a little less than a year. I wrote about the process we used. Keeping dependencies current isn't just maintenance –– it's infrastructure that makes AI tools more effective.
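One common way to keep that drift from accumulating is automated update PRs, for example with Renovate. The config below is a generic sketch, not necessarily the process we used:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "labels": ["dependencies"],
  "prConcurrentLimit": 5
}
```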
The faster engineers and agents get signal on whether something works, the more throughput actually converts to real output. This means investing in CI/CD speed and local development experience.
If your CI pipeline takes 20 minutes, that's 20 minutes where an AI agent is blocked or an engineer has moved on to something else. If your local dev server takes 30 seconds to reload, every iteration is slower than it needs to be. These are multipliers. Shaving a minute off your test suite pays dividends on every change, by every engineer, forever.
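What that investment looks like depends on the stack, but dependency caching and test sharding are usually the first wins. A sketch, assuming GitHub Actions, npm, and a Vitest suite:

```yaml
# .github/workflows/ci.yml (excerpt): a sketch assuming GitHub Actions, npm, and Vitest.
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1, 2, 3, 4]   # split the suite across four parallel jobs
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: npm          # reuse the npm cache between runs
      - run: npm ci
      - run: npx vitest run --shard=${{ matrix.shard }}/4
```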
Code review is a bottleneck that doesn't get faster just because code gets written faster. If anything, it gets worse –– more PRs in the queue, more context-switching for reviewers, more fatigue.
The fix isn't to skip review. It's to make sure human reviewers are spending their time on what matters: design decisions, edge cases, architectural fit. Everything else –– style, formatting, import ordering, common anti-patterns –– should be caught by linters and automated PR review tools before a human ever looks at it.
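In a JS/TS codebase, a common way to do that is ESLint and Prettier run via lint-staged from a pre-commit hook, with the same commands repeated in CI. A package.json sketch (the tool choices are assumptions, not a prescription):

```json
{
  "scripts": {
    "lint": "eslint . --max-warnings 0",
    "format:check": "prettier --check ."
  },
  "lint-staged": {
    "*.{js,jsx,ts,tsx}": ["eslint --fix", "prettier --write"],
    "*.{json,css,md}": ["prettier --write"]
  }
}
```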
Platform work has always been important; it's just never felt urgent enough to prioritize over the next feature. But now that feature velocity is table stakes, the teams that invest in their platform are the ones that will actually feel like they're moving forward instead of just running faster in place.
It makes me wonder why we're willing to invest in quality for machines, but not for people.