I Fell In Love With Writing Code Again
I recently fell in love with writing code again. Not because anything changed about my free time as a VP of Engineering, but because the nature of building software itself changed.
My job doesn’t afford me a lot of time to actually write code. Two years ago, that sentence would have felt like a real loss. Even early last year, coding with GenAI felt like working with an overly ambitious autocomplete plugin — occasionally faster than searching StackOverflow, but not fundamentally different. Fast-forward to today and the conversation has shifted from “should you be using AI coding assistants?” to something more interesting: over 90% of engineers are already using them. The question now is how to adapt your workflow to make the most of them.
For me, the answer has been agentic development, and it has made building software feel genuinely exciting again.
The Problem With The Old Way
Recently, I needed to help a team meet some regulatory requirements that involved verifying data against a series of repositories provided by a third party. They had an API, so the plan was to build a small tool to simplify interaction with these data sources and eventually automate the process entirely. Multiple endpoints, rate limits, batching logic for partial failures, a range of error codes, multi-environment support — not a trivial project, but it was relatively self-contained and not related to anything on any of my teams’ roadmaps.
In a pre-AI world, my workflow would have looked something like this: pick a language, find boilerplate examples, wrestle with authentication, build a stub, test, fail, revisit docs, test again, succeed. Then repeat for every endpoint. Then remember all the things I forgot: error handling, secrets management, test coverage.
Given that my opportunities for deep, focused coding time are rare, getting to a working MVP normally took a week or more, and I’d be working in fits and starts the whole time.
The worst part was that almost none of that learning transferred. Every API is different. I’d spend most of my time acquiring knowledge I’d never use again. The boilerplate (the part that isn’t actually adding value to the solution) consumed the most time every single time.
The Agentic Approach
This time, I wanted to try something different: a more agentic workflow. This is fundamentally different from AI-assisted coding, where the engineer is still the primary driver and the AI makes just-in-time suggestions.
In an agentic workflow, you’re guiding an AI agent while the agent does most of the work. It’s not unlike how a senior engineer might work with an early-career engineer today: describe the desired end state, the constraints, the available resources, and then let them go solve the problem.
I started with a plan. Instead of jumping into code, I gave the agent all of the API documentation as context, described the endpoints I needed to consume, explained the input format, and specified what the output should look like. Then I asked it to draft an approach before writing a single line of code. This mirrors how we already work on complex engineering problems. We write an RFC before building, catching gaps in logic before the expensive work begins.
The plan was solid. The agent understood the API specs from the PDF I’d shared, and even suggested support for multiple data upload formats I hadn’t considered. I approved the plan and moved on to other work while it built.
A few minutes later, I had working software to test against. The happy path ran fine. I tried breaking it and found missing error states with unhelpful messages. I pointed the agent to the error code documentation and asked it to update the solution accordingly. A few moments later, I had a more robust iteration.
This continued for about 90 minutes. Sometimes I told the agent what to do. Other times I asked it to compare two approaches and recommend one. By the end, I had a production-ready tool that handled failure cases, multi-threading, and rate limiting. The tool did its job: the regulatory verification process that previously required manual effort can now run automatically.
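To make one of those pieces concrete: client-side rate limiting can be as small as a thread-safe sliding window of call timestamps. This is a hedged sketch in Python, not the tool the agent actually built (which isn’t shown in this post); the class and parameter names here are my own:

```python
import threading
import time


class RateLimiter:
    """Allow at most `max_calls` calls per `period` seconds, across threads."""

    def __init__(self, max_calls, period=1.0):
        self.max_calls = max_calls
        self.period = period
        self.lock = threading.Lock()
        self.calls = []  # timestamps of calls inside the current window

    def acquire(self):
        """Block (if needed) until a call is allowed, then record it."""
        with self.lock:
            now = time.monotonic()
            # Drop timestamps that have aged out of the window.
            self.calls = [t for t in self.calls if now - t < self.period]
            if len(self.calls) >= self.max_calls:
                # Wait until the oldest call in the window expires.
                sleep_for = self.period - (now - self.calls[0])
            else:
                sleep_for = 0.0
            self.calls.append(now + sleep_for)
        if sleep_for > 0:
            time.sleep(sleep_for)
```

Each API call site would call `acquire()` before issuing a request, which keeps the tool under the provider’s published limit without scattering sleeps through the code.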
Building this the old way would have taken weeks.
Software Engineering Is Dead. Long Live Software Engineering.
Here’s what surprised me most: this experiment worked because of my software engineering background, not despite it.
One API endpoint supported concurrent requests. I knew that meant I could process data faster, but also that any one of those requests could fail partially. I asked the agent to record successes and failures separately so I could inspect failures without reprocessing everything. That’s not intuition the agent would have surfaced on its own — it came from experience with distributed systems. A less experienced guide might not have caught it.
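That pattern — fan requests out concurrently but keep successes and failures in separate buckets so failures can be retried without reprocessing everything — can be sketched in a few lines of Python. The function names are hypothetical; the real tool’s code isn’t shown in this post:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def verify_batch(records, verify_one, max_workers=8):
    """Run `verify_one` over `records` concurrently.

    Returns (successes, failures): successes pairs each record with its
    result, failures pairs each record with the exception it raised, so
    only the failed records need to be inspected or resubmitted.
    """
    successes, failures = [], []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(verify_one, r): r for r in records}
        for fut in as_completed(futures):
            record = futures[fut]
            try:
                successes.append((record, fut.result()))
            except Exception as exc:
                failures.append((record, exc))
    return successes, failures
```

Keeping the two result sets separate is the distributed-systems instinct at work: partial failure is the normal case, so the tool is designed around resuming from it rather than starting over.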
In another moment, my tests failed due to a variable referenced outside the scope where it was defined. I spotted the issue immediately and told the agent exactly where to look and how to fix it, which was faster and cheaper (in tokens) than asking it to debug from scratch.
This points to something important about where the industry is heading:
Software engineering principles still apply. The how matters less. You no longer need to know which HTTP library a language favors or how to structure a project scaffold from memory. But you still need to understand resiliency, testability, and performance to guide an agent toward a solution that actually holds up.
Understanding the goal matters more than ever. They say you truly understand something when you can teach it. To be effective in an agentic workflow, you need to deeply understand what you’re building and why, because that context is the primary input.
This is a meaningful unlock for engineering leaders. Most writing (and research) about AI-assisted coding addresses engineers who code all day. This is for the ones who used to. Agentic tools lower the floor on staying technically close to the work, whether that’s building ad-hoc tooling, supporting a team under pressure, or prototyping a solution quickly. The old methods are changing in ways that make them more accessible to those of us who haven’t had the maker time to participate in the same way.
When you don’t have to fight the learning curve, building is fun again. Previously, my limited coding time was consumed by just getting the setup working and the boilerplate in place. Now I can jump to the actual problem in a fraction of the time.
What This Means Going Forward
As an industry, we’re still working through the implications: converging roles, upended assumptions about who can write code and what good code looks like, genuine uncertainty about what engineering careers look like in five years.
But the leaders who will thrive in this next phase won’t be the ones who hand off technical judgment to the agent. They’ll be the ones who bring deep enough understanding of what good software looks like to guide it well.
The floor on staying technically close just dropped. The ceiling on what that closeness can produce just went up.
I first learned to code because I wanted to build things — to solve my own problems, then other people’s. In a world where engineering leaders are being asked to deliver more in less time with fewer resources, time is the most precious commodity we have. Agentic workflows give some of that time back, and they do it in a way that makes the building feel like building again.