AI-Assisted Dependency Review
How I used Conductor and Claude to streamline my team's Dependabot review workflow, and how a colleague turned it into a GitHub Action
Happy Wednesday!
Today's Issue: Debugging the TypeScript compiler, making assertions instead of suggestions, and mistakes engineers make in large, established codebases.
I recently proposed and executed a migration of a frontend codebase to TypeScript strict mode. This surfaced over 10,000 previously hidden TypeScript errors, which I suppressed with a combination of ts-migrate and pure grit. Eventually I turned the strict flag on. Celebration ensued.
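For context, the flag itself is a one-line tsconfig change (this is a minimal sketch, not the project's actual config), and tools like ts-migrate get you there by auto-inserting suppression comments above each error rather than fixing the code:

```jsonc
{
  "compilerOptions": {
    // "strict" enables noImplicitAny, strictNullChecks,
    // strictFunctionTypes, and several other checks in one flag
    "strict": true
  }
}
```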
Until... reports started rolling in that some devs were seeing the occasional TypeScript error appear while developing locally. On first pass, the TypeScript compiler reported no errors. If the dev then made an innocuous change in the offending file (even adding a comment), the compiler would report the error! All of the errors were variations on the same Symbol.iterator complaint.
I was able to trace the issue to a difference between the output of tsc --noEmit and tsc --noEmit --watch. For some reason, tsc in incremental mode could find the error on the second pass.
My colleague Elliott debugged the TypeScript compiler codebase and discovered that there is an actual caching bug in TypeScript related specifically to the Symbol.iterator error, and found the very line that caused it.
Extremely small bug reproduction
They used the Debugging the TypeScript Codebase blog post to learn how to debug the TypeScript codebase and find the exact line that caused TypeScript to skip reporting this error on the first pass.
pnpm recently introduced a new feature called catalogs. It lets you define a dependency's version range once per workspace and bucket dependencies into any number of named categories, moving beyond dependencies and devDependencies. It's still early days, but Anthony Fu is bullish about future tooling around categorized dependencies.
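As a sketch of how that looks (the package names and version ranges here are illustrative, not from the article): version ranges live in pnpm-workspace.yaml, and each package.json then references them with the `catalog:` protocol, e.g. `"react": "catalog:"` for the default catalog or `"typescript": "catalog:build"` for a named one.

```yaml
# pnpm-workspace.yaml
packages:
  - "packages/*"

# The default catalog: define a version range once, reuse it everywhere
catalog:
  react: ^18.3.1

# Named catalogs act as dependency "buckets" beyond deps/devDeps
catalogs:
  build:
    typescript: ^5.5.4
    vite: ^5.4.0
```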
If you want that next promotion, consider making assertions instead of suggestions. Assertions involve an element of risk (you have to defend your position, convince others your approach is the right one, and take the blame if your idea fails), but they're far more valuable than just sharing an insight or making a suggestion.
Working in a large codebase isn't easy, and there are traps even seasoned engineers fall into. Even well-intentioned changes (like trying to protect your nice new code from all the old legacy stuff everything else relies on) could result in bugs or an incident that you didn't foresee.
Instead of starting a migration by rewriting code, set up backstops first. A lint rule that prevents new code from using the old pattern stops the problem from growing, even if the migration stalls.
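One way to build that backstop is ESLint's built-in no-restricted-imports rule (a hedged sketch; `legacy-api` and `MIGRATION.md` are made-up stand-ins for whatever the migration is retiring):

```js
// eslint.config.js - flat-config sketch; "legacy-api" is a hypothetical
// module name standing in for the old pattern being migrated away from
export default [
  {
    files: ["src/**/*.{js,ts}"],
    rules: {
      // Fails lint when new code imports the legacy module, so the
      // problem can't keep growing even if the migration stalls
      "no-restricted-imports": [
        "error",
        {
          paths: [
            {
              name: "legacy-api",
              message: "Use the new client instead; see MIGRATION.md.",
            },
          ],
        },
      ],
    },
  },
];
```

Existing violations can be grandfathered in with file-level disables, so the rule only blocks new occurrences.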
If everyone's shipping faster with AI, nobody has an advantage. The real edge is platform work.