AI-Assisted Dependency Review

Juliana Gomez · February 23, 2026 · Guest Post

How I used Conductor and Claude to streamline my team's Dependabot review workflow — and how a colleague turned it into a GitHub Action

I can't stop raving about Conductor.build. I'm not really a terminal person — there's something about a GUI that just works for me. So when everyone started using multiple Claude instances and git worktrees, I sat out the hype. Luckily for me, Conductor emerged. If you haven't heard of it yet, Conductor is a new tool that lets you manage multiple Claude instances with git worktrees, inside a clean UI that keeps track of them all.

When I started using Conductor, it coincided with my Goalie week at Honeycomb. Each week, someone on our team (the Goalie) is responsible for any urgent tasks that pop up and for taking care of Dependabot PRs. To be honest, the Dependabots take the longest, and any time I've taken down prod in the past it's been with a dependency upgrade — so I'm always extra careful with them.

With my new Conductor powers, I sent this prompt to several Claude Code instances at once:

Review this Dependabot PR and let me know if it's safe to merge: [link to PR]

We have GitHub integrated with Claude, so Claude was able to see the PR, understand the version change and the library in question, and check whether the build was passing. Running this across several instances in parallel meant I could quickly get the information I needed to assess whether each dependency was safe to update. I got through most of the Dependabots much faster than when I was manually tracing where a dependency was used and whether the update affected our code.

Happy with my progress, I shared the workflow with my team — and then more magic happened. A colleague, Dean, suggested turning this into a GitHub Action that runs automatically when a Dependabot PR is opened. Now the review is even faster, because a comment with Claude's recommendation is already waiting in the PR whenever the Goalie sits down to review it.

It's not a perfect process. Sometimes the build takes longer than the GitHub Action's 15-minute timeout, and the Goalie has to re-run the GitHub Action (GHA) manually after the build passes. Sometimes the build fails on a flaky test, and the GHA still flags the PR as potentially unsafe because of that failure — so the Goalie needs to use their judgment about whether it was a flake or a real problem.
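For readers who want to try something similar, here's a minimal sketch of what such a workflow could look like. This is an illustration, not our exact config: the action name and version (`anthropics/claude-code-action`), its input names, and the prompt wording are all assumptions you'd need to check against the action's own documentation.

```yaml
name: Dependabot PR review

on:
  pull_request:
    types: [opened]

jobs:
  review:
    # Only run for PRs authored by Dependabot
    if: github.actor == 'dependabot[bot]'
    runs-on: ubuntu-latest
    timeout-minutes: 15   # the timeout mentioned above
    permissions:
      contents: read
      pull-requests: write   # needed to post the review comment
    steps:
      - uses: actions/checkout@v4
      # Assumed action and inputs -- verify against the action's README
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Review this Dependabot PR and let me know if it's safe to merge.
            Post your recommendation as a comment on the PR.
```

The `if: github.actor == 'dependabot[bot]'` guard is what keeps the review from running on every PR, and the `timeout-minutes` setting is why a slow build can cause the run to be cut short.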

We could look into auto-merging, but I'm not comfortable with that. Another colleague, Drew, shared some of her PhD research and gave us a vision where AI helps us make better decisions, but where the final call always belongs to a person. I love that framing — especially for something that's taken down prod on me more than once. I'll be watching how this evolves, but here's what I'd take away:

  • Look for ways Claude can surface information faster so you can make better decisions.
  • Look for ways to incorporate GitHub Actions with Claude.
  • Share your work with colleagues — it sparks new ideas and better approaches.

Hope this helps, and if you think of ways to improve it, I'm all ears! You can reach out to me on LinkedIn.
