← Back to blog

Letting Go

Header image for Letting Go

Many years ago I became a team lead for the first time. I remember thinking how awesome it would be - there were three of us now; we could build three times as fast. Of course, it didn’t work out like that. Sure the team did good work, but often it wasn’t what I wanted. Part of that was my fault - I hadn’t learnt to brief them well. And part of it was inevitable - they weren’t me. They had different strengths and weaknesses. They hit problems and solved them in their own ways. They heard something different from what I meant.

Over time I learnt to let go - to accept that the team would do things differently from the way I’d do it. To accept that I could only guide and encourage; I couldn’t dictate. To work out new success metrics. To develop a sense of smell for the things that might go wrong.

But letting go is hard. You are giving up a big part of your identity in exchange for skills that seem far more nebulous. You are no longer a builder; you are a facilitator.

Years later I watched as other new leads struggled with the exact same problem. Some tried to cope by micro-managing. Others tried to try to do all the work of the team themselves. Several decided leadership wasn’t for them and went back to a technical role. Learning to let go is hard.

Something similar applies to AI coding. Multiple engineers have told me that the AI code is fine, but it’s not the way they’d write it - so they rewrite it. Some micromanage the AI to force it do exactly what they want. Others abandon AI and go back to writing by hand. It seems awfully familiar. New leads struggling to let go?

The skills you need

The rise of agentic coding makes these team lead skills more important than ever. And, simultaneously, makes traditional technical skills matter less.

This weekend I continued to build out mdhavers (my new not-entirely-serious programming language). Over the weekend Claude extended the LLVM backend, and added over 5,000 unit tests. Last week I could play tetris using the interpreter; now I can compile to a native binary and play in a terminal. A year ago getting an AI to build tetris was considered a hard task. Now we can build a new language and implement tetris in it. There’s 100+kloc of increasingly well tested Rust behind this. All implemented as a throwaway see-how-far-I-can-push-the-models project.

This is me working with my team of AI agents - Codex, Claude, Gemini. Finding the bits each does best. Guiding and course correcting. I don’t need to understand how to build a compiler. Or optimise it. But I do need to apply software engineering fundamentals. Review designs and plans. Ask questions. Ensure the tests are up to snuff. Correct Claude when it decided to #[ignore] a slew of tests rather than fixing the underlying bugs the tests revealed.

AI feels human…

Each tool has a personality. They remind me of people I’ve worked with before; I find myself reprising the techniques I used when I managed people with similar traits.

Codex is the conversationally awkward know-it all engineer who gives you answers full of jargon that are hard to parse. You wouldn’t want to go to the pub alone with them.

Gemini is knowledgeable and keen. But isn’t comfortable admitting it doesn’t know - and has no qualms about lying instead. You need to take whatever it says with a pinch of salt.

Claude is friendly and a little loquacious. But good at explaining and smart as well. Definitely someone I want on my team.

And then there’s the uncanny valley behaviour. I built a MCP server to allow Codex CLI to spawn additional Codex agents. The idea was the agents did the work while Codex CLI monitored. But Codex CLI got antsy - it polled the agents incessantly. And when the agents didn’t progress fast enough, it killed them and did the task itself. Echoes of the new lead who doesn’t trust their team and does all the work themself?

And so?

A couple of years ago I assumed all software engineers would develop the skills to use AI effectively. But that’s not how its played out. It’s becoming clear using AI effectively requires a different skill set from those of a traditional software engineer. Raw technical knowledge matters less; tech lead skills are far more important. What happens to the engineers who can’t make this transition? I’ve watched enough team leads fail to know not everyone will make it. What does “going back to a technical role” mean now?

Engineers who thrived as ICs five years ago might not have the personality traits needed for effective AI collaboration. Being technically knowledgeable, caring about how code is written rather than that it just works - these used to be strengths. Are they now liabilities?


Originally published on Martin Davidson’s Substack. Follow Martin for more on AI and software engineering.