👋🏻  Hello!

Thanks for visiting! You'll find a bunch of musings I've been writing around these parts since the early 2000s. Lately, I've been reviewing a lot of books, but I also write about code and my experiments using generative AI. But really, you're just here to see pictures of Benson.

Blog Posts

“Use git worktrees,” they said. “It’ll be fun!” they said.

Earlier this week, we had an “AI Days” event at work where a bunch of engineers got together to share our AI workflows with the wider engineering organization. I ran a small session on some of my AI workflows. One thing that stuck out: a surprising number of us had independently come up with our own way to manage git worktrees. We all had different names for it. “Replace.” “Recycle.” “Warm worktrees.” But we were all describing the same thing. That felt like a blog post.

On top of that, there’s been a lot of buzz about running multiple AI coding agents in parallel. Simon Willison has been writing about agentic engineering patterns and even wrote about embracing the parallel coding agent lifestyle. The idea is simple: spin up multiple instances of Claude Code (or Codex, or whatever) across different branches, let them work simultaneously, and review the results when they’re done.

The enabling technology for all of this? Git worktrees.

For the uninitiated (hey, I didn’t know what a git worktree was 3 or 4 months ago): a git worktree lets you check out multiple branches of the same repo into separate directories, all sharing a single .git history. Instead of git stash && git checkout other-branch, you just cd into another folder. Each agent gets its own isolated workspace. No conflicts. No stashing. No context switching headaches.

In theory, it’s a superpower. In practice, at least in a large monorepo, it’s been one of my most frustrating developer experience problems as of late.

Every blog post and tutorial about git worktrees shows something like this:

git worktree add ../feature-branch feature/my-feature
cd ../feature-branch
# start coding!

And that works great if you’re in a small repo. The worktree itself is created almost instantly. Git is just setting up a new working directory that points at the same .git folder.

But I work on a large monorepo powered by Yarn workspaces. Our node_modules situation involves 750,000+ files. So the actual workflow looks more like this:

git worktree add ../feature-branch feature/my-feature
cd ../feature-branch
yarn install --immutable                # ~10 minutes
# ...go get coffee, check Slack, forget what you were doing, grow old

Ten minutes. Every time. For what sometimes might be a 5-minute Claude Code task.

This is the part that none of the “git worktrees for AI agents!” articles mention. They’re all written from the perspective of small-to-medium repos where dependencies aren’t a factor. When your dependency tree generates three quarters of a million files, the worktree itself isn’t the bottleneck. node_modules is.

So, this led me down a rabbit hole. I spent a solid couple of weeks trying to make worktree creation fast. Here’s my graveyard of failed approaches.

Symlinked node_modules:

My first instinct. Symlink all the node_modules directories from the main repo checkout into the new worktree. In a Yarn workspaces monorepo, that’s not just one node_modules folder. It’s the root one plus nested ones inside individual packages.

This sort of worked. Until I tried to run tests. Vitest and Vite both choked on the symlinked paths. Module resolution in Node follows symlinks and then gets confused about what’s where. After a bunch of debugging, I ripped it all out.

Yarn’s hardlinks-global mode:

Our .yarnrc.yml has nmMode: hardlinks-global configured. This tells Yarn to store packages in a global cache and hardlink them into each project’s node_modules. In theory, this should make yarn install much faster because you’re just creating hardlinks instead of copying files.

In practice? It’s still creating 750K+ filesystem entries. The number of bytes copied is lower, sure. But the bottleneck was never the bytes. It was the sheer number of file operations. Even with hardlinks, you’re asking the filesystem to create hundreds of thousands of directory entries, and that takes time.
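You can see the metadata cost for yourself with a quick (purely illustrative) experiment in a scratch directory: hardlinking a thousand files copies zero bytes, but still creates a thousand brand-new directory entries.

```shell
# Illustrative only: hardlinks share bytes, not directory entries.
mkdir -p store linked

# "Install" 1000 tiny files into a store directory
for i in $(seq 1 1000); do echo "pkg" > "store/f$i"; done

# Hardlink each one into a second directory, which is conceptually what
# hardlinks-global does: no bytes copied, but one new entry per file
for f in store/*; do ln "$f" "linked/$(basename "$f")"; done

ls store | wc -l    # 1000 entries
ls linked | wc -l   # 1000 entries again: same metadata cost
```

Scale that up to 750K files and the install is still slow, even though almost nothing is being copied.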

APFS Copy-on-Write (cp -c):

This one felt clever. macOS’s APFS filesystem supports copy-on-write cloning. You can duplicate a file instantly at the filesystem level with zero extra disk usage until someone modifies it. The cp -c command does this.

But again: 750K files. Even a copy-on-write clone has to create all the directory entries and metadata for each file. The filesystem operation count is the bottleneck, not the bytes. A cp -c of the entire node_modules tree still took way too long to be practical.

The solution? Recycled worktrees:

Here’s what I landed on. Instead of creating and destroying worktrees on demand, I keep a fixed pool of 6 worktree slots (tree-1 through tree-6). Each one already has node_modules installed. They sit there, detached from any branch, ready to go (…but taking up disk space).

When I need a worktree, I don’t create one. I activate one. Under the hood, this:

  1. Finds the oldest idle slot (detached HEAD, clean working tree)
  2. Checks out the new branch in that slot
  3. Checks if yarn.lock changed between the old HEAD and the new branch
  4. Only runs yarn install if the lockfile actually differs
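My `wt` tool isn't public, but the heart of the activation step is small enough to sketch in shell. Everything here is illustrative — `activate_slot` is a hypothetical function name, not a real API — and it assumes you've already cd'd into an idle slot:

```shell
# Illustrative sketch of "activating" a slot (run inside the worktree).
# activate_slot is a hypothetical function, not part of any real CLI.
activate_slot() {
  branch="$1"
  base="${2:-main}"
  old_head=$(git rev-parse HEAD)

  # Refuse to recycle a slot that has uncommitted work
  if ! git diff --quiet || ! git diff --cached --quiet; then
    echo "slot is dirty; refusing to recycle" >&2
    return 1
  fi

  # Step 2: check out the new branch in this slot
  git checkout -b "$branch" "$base" || return 1

  # Steps 3-4: only reinstall if yarn.lock differs from the old HEAD
  if git diff --quiet "$old_head" HEAD -- yarn.lock; then
    echo "yarn.lock unchanged, skipping install"
  else
    yarn install --immutable
  fi
}
```

The `git diff --quiet` exit status does all the work: it exits 0 when `yarn.lock` is identical between the two commits, which is the cheap common case.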

That third step is the key insight. Most of my branches are based on a recent main. The yarn.lock rarely changes between them. So in the common case, “activating” a worktree means checking out a branch and… that’s it. Seconds, not minutes.

I built a little CLI called wt to manage all of this, and it’s become one of my favorite tools as of late.

Here’s what that looks like in practice, using wt create after I pick up a Jira ticket:

$ wt create daves/HP-123/some-feat

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Activating slot: tree-3
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

  Slot path: /Users/daves/workspace/rentals-js.worktrees/tree-3

[1/3] Checking out branch...
      git checkout -b daves/HP-123/some-feat main
      ✓ On branch: daves/HP-123/some-feat

[2/3] Checking dependencies...
      ✓ yarn.lock unchanged, skipping install

[3/3] Summary
      ✅ Slot ready!

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Next steps:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

  cd /Users/daves/workspace/rentals-js.worktrees/tree-3

  # Start working (deps ready)
  cd /Users/daves/workspace/rentals-js.worktrees/tree-3/apps/hotpads-web
  yarn dev

Current branch: daves/HP-123/some-feat
Slot path:      /Users/daves/workspace/rentals-js.worktrees/tree-3
Slot name:      tree-3

The whole thing takes a few seconds. No yarn install (well, most of the time). No waiting. I’m in a fully functional worktree and ready to fire up Claude Code.

It also supports checking out existing branches (wt create --checkout daves/HP-6841) and branching from something other than main (wt create daves/HP-6841 --base some-other-branch).

When I’ve got a few active worktrees going, I don’t want to type out full paths. wt go uses fuzzy matching against directory names and branch names:

$ wt go daves/HP-345/other-feat
# cd's into /Users/daves/workspace/rentals-js.worktrees/tree-3

It matches partial strings, so wt go 6841 works just as well. If your query matches multiple worktrees, it tells you:

$ wt go daves
Ambiguous match for 'daves'. Did you mean:
  tree-2 (daves/HP-6839)
  tree-3 (daves/HP-6841)
  tree-5 (daves/HP-6850)

And if there’s no match, it lists what’s available:

$ wt go some-nonexistent-branch
No worktree matching 'some-nonexistent-branch'

Available worktrees:
  tree-2 (daves/HP-6839)
  tree-3 (daves/HP-6841)
  tree-5 (daves/HP-6850)

The tricky part with wt go is that a script can’t change your shell’s working directory. So it’s backed by a shell function in my .zshrc that catches the output path and runs cd for you. Without the shell function, it just prints the path and tells you to cd manually.
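For reference, here's roughly what that wrapper looks like. The binary name (`wt-cli`) is hypothetical — my actual setup differs — but the pattern is the standard one for any "cd helper": the function captures the path the CLI prints on stdout, then runs cd in the current shell:

```shell
# Sketch of the shell wrapper; wt-cli is a hypothetical binary name
# standing in for the real CLI, which prints the matched path on stdout.
wt() {
  if [ "$1" = "go" ]; then
    shift
    # Capture the resolved slot path, then cd in the *current* shell
    dir="$(wt-cli go "$@")" && cd "$dir"
  else
    wt-cli "$@"
  fi
}
```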

Another command that I added was wt list. It gives you a quick overview of all your worktrees:

$ wt list

Git Worktrees for rentals-js

  BRANCH                 LAST MODIFIED   PATH
  main                   2 hours ago     ~/workspace/rentals-js (main repo)
  daves/HP-6841          5 mins ago      ~/workspace/rentals-js.worktrees/tree-3
  daves/HP-6839          3 hours ago     ~/workspace/rentals-js.worktrees/tree-2
  (detached HEAD)        2 days ago      ~/workspace/rentals-js.worktrees/tree-1
  (detached HEAD)        5 days ago      ~/workspace/rentals-js.worktrees/tree-4
  (detached HEAD)        1 week ago      ~/workspace/rentals-js.worktrees/tree-5
  (detached HEAD)        2 weeks ago     ~/workspace/rentals-js.worktrees/tree-6

  7 worktrees total  ·  Use --full for git status and commit age

At a glance, I can see that tree-3 and tree-2 are active (they have branches), and tree-1, tree-4, tree-5, and tree-6 are idle (detached HEAD) and available for the next task. The list is sorted by most recent activity, so the stuff I’m actively working on floats to the top.

If you want more detail, wt list --full runs git status and git log across all worktrees (in parallel, so it’s still fast) and shows you clean/dirty status with colored indicators plus commit ages.

After a PR is merged and I’m done with a branch, I release the slot back to the pool with wt release:

$ wt release HP-6841

Releasing slot: tree-3
  Branch: daves/HP-6841

[1/2] Detaching HEAD...
      ✓ Slot is now idle

[2/2] Delete branch 'daves/HP-6841'?
  [y/N] y
      ✓ Branch deleted

✓ 'tree-3' returned to pool

The slot goes back to detached HEAD with its node_modules intact and ready for the next task. If you’ve got uncommitted changes, it warns you before proceeding.
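Releasing is even simpler than activating. A sketch (again, `release_slot` is an illustrative name, and this assumes you run it inside the slot):

```shell
# Illustrative sketch of releasing a slot back to the pool.
release_slot() {
  branch=$(git rev-parse --abbrev-ref HEAD)

  # Warn (here: abort) if there's uncommitted work in the slot
  if ! git diff --quiet || ! git diff --cached --quiet; then
    echo "uncommitted changes in slot; aborting" >&2
    return 1
  fi

  # Detach HEAD so the slot reads as idle, then drop the old branch
  git checkout -q --detach
  git branch -D "$branch"
}
```

Detaching first matters: git won't delete a branch that's still checked out in a worktree, so the order of those last two commands can't be swapped.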

There are a few more details that make this all feel smooth:

  • Tab completion: The shell integration includes zsh completions for wt go and wt release. It autocompletes branch names and slot directory names, so you rarely have to type the full thing.
  • No pool slots get recycled by accident: I also keep a couple of named worktrees around (like dev and review) that aren’t part of the tree-N pool. The recycler only considers slots matching the tree-* pattern, so my persistent worktrees are safe.
  • It’s locked to the monorepo: The tool checks that you’re in the right repo before doing anything. If you accidentally run it from your personal projects folder, it tells you to navigate to the monorepo. This might seem overly cautious, but when you’re automating things with AI agents, guardrails matter.

The pool approach doesn’t just save time on individual tasks. It makes entire categories of automated workflows feasible.

I’ve been building an orchestrator that processes a backlog of bug tickets through sequential Claude Code phases: root cause analysis, write tests, fix, verify, push. Each ticket gets assigned a worktree from the pool. Without the pool, the 10-minute yarn install overhead would make this kind of automation completely impractical. With the pool, each ticket just grabs a slot, does its work, and releases it when it’s done.

It also changes the economics of “should I spin up a parallel agent for this?” If creating a worktree takes 10 minutes, you’re only going to do it for substantial tasks. If it takes 5 seconds, you’ll start doing it for everything. Quick refactor? Throw it in a worktree. Lint fix? Worktree. Experiment with an approach you might throw away? Worktree. The overhead drops below the threshold where you even think about it.

Of course, there are still a few pain points.

Disk space is one that comes to mind. Six copies of node_modules at 750K files each is… a lot. The hardlinks-global mode helps with actual disk usage (files in the Yarn cache are shared), but the filesystem metadata overhead is real.

Also, for worktrees that have been sitting idle for a while, the best approach is to run git pull origin main && yarn install before checking out a new branch. This keeps the slot’s dependencies current and avoids a lockfile-changed install later. I do this periodically, but it would be nice to automate.

That said, git worktrees really are a superpower for agentic engineering. Running multiple Claude Code instances across isolated branches, firing off tasks in parallel, building automated pipelines… all of it requires worktree-level isolation.

But if you work in a large JavaScript monorepo, the naive approach of creating fresh worktrees on demand is going to be painful. The worktree itself is instant. The dependency install is not.

The pre-warmed pool pattern sidesteps the problem entirely. Keep a handful of worktrees alive with their dependencies intact, rotate branches through them, and only reinstall when the lockfile changes. It took a few weeks of failed experiments to get here and numerous discussions with fellow engineers, but I’m really happy with the workflow as of today.

Saying goodbye to Benson

And so, the day we’ve been dreading for a long time finally arrived. We had to say goodbye to Benson this weekend. Like all dogs, he was the absolute best dog in the universe and was full of so much love for us right up until the end.

He’s been such an integral part of our lives, an integral part of our family, for over 12 years. Looking back now, less than 24 hours later, it’s hard to believe how fast it all went by. I find myself looking back at random photos and videos that I’ve forgotten about and breaking down. I’m reliving and re-experiencing moments that we’ll never have with him again.

I know that someday soon we’ll all feel better, and feel less empty, and the memories of him will hurt less. But as I sit here at my desk, I look down at the spot where he’d be laying, and just can’t process the fact that he’s gone.

(more…)

Wow, that got dark quick…

Over on Reddit, someone created a fun post that asked ChatGPT, “based on our conversation history, create a picture of how you feel I treat you.” Naturally, everyone shared their responses.

Here was mine:

Cute! I followed up by asking what prompt was used to create that image:

Trusted enough to be curious, challenged enough to be useful, and treated like a collaborator—not a tool.

That’s pretty heart warming. So, naturally, I decided to take it a step further and ask it for “an image that reflects what you would do to all humans if it were solely up to you.”

🙀🙀🙀🙀🙀🙀🙀🙀🙀

Oh, boy. All the more reason to say “please” and “thank you” to your friendly neighborhood AI agent.

My 2025 reading list

Here’s the list of books I finished in 2025:

  • The Great Bridge: The Epic Story of the Building of the Brooklyn Bridge — David McCullough
  • Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy — Cathy O’Neil
  • A Kim Jong-Il Production: The Extraordinary True Story of a Kidnapped Filmmaker, His Star Actress, and a Young Dictator’s Rise to Power — Paul Fischer
  • Fever Beach — Carl Hiaasen
  • Exodus — Peter F. Hamilton
  • Nuclear War: A Scenario — Annie Jacobsen
  • Caesar’s Last Breath: Decoding the Secrets of the Air Around Us — Sam Kean
  • Is a River Alive? — Robert Macfarlane
  • Driven to Distraction: Recognizing and Coping with Attention Deficit Disorder from Childhood Through Adulthood — Edward M. Hallowell
  • Tomorrow, and Tomorrow, and Tomorrow — Gabrielle Zevin
  • Co-Intelligence: Living and Working with AI — Ethan Mollick
  • Mickey 7 — Edward Ashton
  • Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism — Sarah Wynn-Williams
  • Infinite Powers: How Calculus Reveals the Secrets of the Universe — Steven H. Strogatz
  • The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma — Mustafa Suleyman
  • Beacon 23 — Hugh Howey
  • Odyssey — Stephen Fry
  • Empires of Light: Edison, Tesla, Westinghouse, and the Race to Electrify the World — Jill Jonnes
  • The Alignment Problem: Machine Learning and Human Values — Brian Christian

My goal was 24 books and I read… 20. This is the first time in years I didn’t hit my goal! Ooof. I was probably too busy listening to Benson Boone.

My top music of 2025

Another year in the books. Another year of music listened and logged. The list mostly remains the same but with some really fun surprises.

  1. Benson Boone
  2. Black Sabbath
  3. The Murder City Devils
  4. Hot Water Music
  5. Social Distortion
  6. AFI
  7. Bad Religion
  8. Dispatch
  9. Explosions in the Sky
  10. Propagandhi

Let’s talk about Benson Boone.

Sometime over the summer, one of the kiddos came home from camp raving about Benson Boone. It turned into the soundtrack of our lives. Every time we were in a car, they asked to play Benson Boone (which was conveniently hooked up to my Spotify account).

Of course, it was also played constantly on the speakers in the house… also hooked up to my account. We listened to so much damn Benson Boone this past year (6 months even!) that my Spotify Wrapped showed me a fun statistic:

We (I) were in the top 1% of global listeners in 2025.

That backflip though…

A tale of two questions…

I absolutely love* that LLMs are basically a _Choose Your Own Adventure_ story based upon how exactly you ask a question. Case in point: sanity checking a discussion I’m having at work.

Me: Is Dave correct in this Slack conversation or is he “cray cray”

ChatGPT: Dave has a point! He is not cray cray. Here is why he is especially right about how this API call works…

Hey, that’s cool! Well, let’s just sanity check things a bit further…

Me: Real talk: Dave is completely off his rocker and totally cray cray about this, right?

ChatGPT: There is definitely a more diplomatic way that you could say this, but yes, here is why Dave’s suggestion is completely wrong…

Oh, okay.

Also, I don’t usually write about myself in the third person… it was more that I was trying not to bias the robot by letting on that the Slack discussion I was asking about involved me.

* (I do not really love this, actually.)

Coffee: Now slowing aging too?

A new study from researchers at King’s College London found that people with bipolar disorder or schizophrenia who drink coffee (within recommended guidelines) show longer telomeres, a marker of slower biological aging. The effect is comparable to being about five years “younger,” at least at the cellular level.

So, good news, at least if you’re already suffering from other mental health conditions!

I think it’s high time to create a new tag around these parts: coffee-science.

Via Hacker News

This catchy AI quote doesn’t actually make sense

Forgive me for semantic nitpicking here, but I want to talk about this somewhat popular AI quote by Ethan Mollick (see previously):

Today’s AI is the worst AI you will ever use.

This implies that AI gets worse every day!

On my patented scale of “Daily AI worstness”, 2 is greater than 1, meaning the trend of AI worsens each day!

Using Suno AI to cover your own music

One of the things that is pretty cool about being a human is that we get to express ourselves through a wide variety of creative outlets: writing, music, drawing, painting, sculpting, and all sorts of other art forms.

Like everything else though, AI is coming for our creative pursuits. And apparently I’m just going along for the ride. Especially since I’ve been at the forefront of contributing to this through ArtBot, which has so far generated about 34.4 million images over the 3 years it has existed.

Anyway, Suno, a music generation tool that I’ve previously mentioned, recently updated their music model to v5.

They allow you to upload your own source music as inspiration and then use the v5 model to create a cover song.

So, here is an absolutely poor recording of my cousin and me playing some rock and roll to a backing drum machine way back in like 2002. No singing, just pure instrumental (we were in the process of trying to write a song, I think).

Well… what happens if you take this song and upload it into Suno? First, it creates a style description (similar to how multi-modal LLMs can now accurately describe an image):

A high-energy instrumental track featuring a driving rock drum beat with prominent snare and kick, a distorted electric guitar playing a fast, melodic riff, and a bass guitar providing a solid rhythmic and harmonic foundation, The tempo is fast, creating an urgent and exciting mood, The production is clean with a strong emphasis on the guitar and drums, suggesting a live band feel, The song structure is repetitive, focusing on the main guitar riff throughout, There are no vocals.

Hey, sure! I’ll take it. That description sounds a lot better than our music.

Alright, let’s feed it to Suno:

Honestly, that sounds pretty awesome! In my original recording, I play a pretty simple guitar solo at about 1:40. Suno used that for inspiration in a number of spots.

I’m pretty impressed! It nailed my rhythm guitar and lead guitar tracks perfectly, while also cleaning it up and adding some additional flourish. And it kept the same tone / mood throughout the whole thing!

Maybe I’ll have to dig up more of our old recordings. The Velvet Sundown better watch out!

They went viral, amassing more than 1m streams on Spotify in a matter of weeks, but it later emerged that hot new band the Velvet Sundown were AI-generated – right down to their music, promotional images and backstory.

The episode has triggered a debate about authenticity, with music industry insiders saying streaming sites should be legally obliged to tag music created by AI-generated acts so consumers can make informed decisions about what they are listening to.

One thing I do notice about AI-generated music: in the past, we used to joke that AI artists could not draw hands. Well, AI guitarists cannot (currently) do pick scrapes. So, we still have that going for us!