You Bought The Tools. Now What?
Across my engineering teams, everyone has equal access to the same AI tools. Same licenses, same capabilities, same starting line. And yet when I look across the organization, I see a divide that has nothing to do with seniority or which part of the business someone works in: some engineers have fundamentally changed how they work, while most are still trying to bolt AI onto their existing workflows.
Same tools. Wildly different outcomes.
This isn’t unique to my teams. The real question isn’t what separates the early adopters from the rest; it’s what prevents the majority of an organization from adopting these new ways of working at all.
Geoffrey Moore wrote about a version of this problem in Crossing the Chasm. His framework describes the gap between early adopters and the mainstream majority as the central challenge of technology adoption. Early adopters move on novelty; they read the docs unprompted, piece together their own workflows, and show up energized. The majority won’t cross unless they see that the technology solves their specific problem, in a way they can immediately apply to their actual work. They need to feel the benefit before they’ll change the behavior.
Buying the tool gets you early adopters. Getting to the other side of the chasm requires something else entirely.
Your Team Is Having an Identity Crisis
One thing I’ve learned to expect when rolling out AI tools: at some point, someone is going to have a quiet moment of “what is my job now?”
For a lot of engineers, writing code is deeply tied to professional identity. When a tool shows up that can produce a working implementation from a short description, it can feel less like a productivity boost and more like a referendum on their career. I’ve heard it described, directly and indirectly, as “cheating.” That word is telling. It implies there’s a right way to do the work, and that leaning on an agent somehow violates it.
The most useful reframe I’ve found: writing code is just one of the tools engineers use to do their job. The job itself is something different. It’s thinking critically about hard problems, making decisions under uncertainty, understanding user needs well enough to translate them into working software. AI doesn’t do that. It can help with that, if you know what you’re trying to accomplish.
And real transformation is more than just vibe-coding your way to production. Just because someone on your team one-shotted an agent into building a task tracker doesn’t mean your company is ready to replace Jira. Security, resiliency, availability, performance, and user experience are still deeply human concerns. The bar for production software hasn’t moved.
AI Works Like We Work
Here’s an analogy I keep coming back to: when was the last time someone handed you a single-sentence requirement and you built exactly what they envisioned, with no bugs, on the first pass?
Probably never.
So why do we expect AI to work that way? The quality of the output is directly proportional to the quality of the input. Starting with a clear understanding of what needs to be built and why is more important with AI, not less. The planning work that engineers sometimes skip when working alone becomes essential when working with an agent.
The other piece is that AI needs coaching, and it needs context that is specific to your organization. There’s a reason the capabilities you configure for an agent are called “skills” — they represent accumulated knowledge about how your team works, what your codebase expects, and what good looks like in your specific environment. Your newest engineer didn’t arrive on day one knowing your team prefers tabs over spaces, or how you structure a PR description, or which corners are never worth cutting in a regulated context. You built that understanding together over time. An agent is no different. The ones that work well have been shaped; the ones that frustrate people usually haven’t been.
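In practice, that accumulated context often ends up written down as a plain instructions file checked into the repo for the agent to read. A hypothetical sketch of what one might look like — every project name, rule, and detail below is invented for illustration:

```markdown
# Agent instructions — payments-service (hypothetical example)

## Code style
- Indent with tabs, not spaces; the repo linter enforces this.

## Pull requests
- PR descriptions follow the team template: problem, approach, testing notes.

## Corners never worth cutting
- Audit logging on every money-movement path — regulatory requirement.
- No generated schema migrations merged without a manual review.
```

The point isn’t the format; it’s that the agent inherits the same tribal knowledge you’d give a new hire, in writing, once.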
I’ve started encouraging engineers to treat AI like a capable coworker rather than a search engine. That means starting conversations the way you’d start a working session: “Here’s what I’m trying to build. I’m thinking about Option A versus Option B. What am I missing? Is there an Option C I haven’t considered?” Brainstorm before you build. Get alignment on the approach before you start generating output.
The other shift worth pushing for: stop thinking of AI as a tool that assists while you do the work, and start asking how it changes the shape of the work. The goal isn’t to keep doing your job the same way with an AI reviewing your output. The goal is to shift toward instructing AI to do the work while you review. That’s a fundamentally different workflow, and it takes real time to internalize.
Making It Easier to Do the Right Thing
Beyond individual workflows, you need structure. Policies and frameworks that make it easy to do the right thing and harder to accidentally do the wrong one. This is especially true in regulated environments, where the stakes of getting it wrong are real.
Some questions worth answering explicitly: Can engineers use any MCP server, or is there an approved list? What’s the process for piloting a new AI tool before it becomes standard practice? Where does AI-generated code require additional review? Leaving these unanswered doesn’t create freedom; it creates ambiguity that prevents even your early adopters from experimenting and learning to their fullest potential.
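One concrete form the “approved list” answer can take is a checked-in MCP client configuration that pins exactly which servers engineers get, rather than leaving it to individual machines. A sketch only — the server name and package below are made up, and the exact file shape depends on which client your team uses:

```json
{
  "mcpServers": {
    "internal-docs": {
      "command": "npx",
      "args": ["-y", "@acme/docs-mcp-server"]
    }
  }
}
```

A config like this does double duty: it’s the policy and the implementation of the policy, reviewable in the same PR process as everything else.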
Equally important: show people what great looks like. Not “look at my AI-generated code” as an example, but “I used to spend three hours a week on this, and now I can focus on the thing that actually moves the needle.” Use cases that were too expensive or time-consuming to attempt before are the ones worth amplifying. People need to see a different way of working before they can imagine it for themselves.
Training, Experimentation, and the Question You Have to Answer
You can’t stumble people into competency. AI is not a stepwise change; it’s foundational, and it doesn’t happen overnight. Guided training matters, and so does making time for it — which means treating it as an organizational priority, not a personal development side project that engineers are supposed to squeeze in between sprints.
Create space for experimentation, but set the expectation that some share of those experiments convert into real process improvements. Exploration that doesn’t feed back into the system is just tinkering.
And then, after all of that: measure. If the goal is “be more productive,” what does that mean specifically? Has issue cycle time improved? Has deployment frequency increased? Have you been able to redirect time from reactive support work toward roadmap investments? If you don’t have answers to those questions, you’re spending a lot of money to feel more productive without knowing whether you actually are.
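If “cycle time” is going to be the yardstick, it’s worth pinning down the arithmetic before and after the rollout so the comparison is honest. A minimal sketch, assuming issue open/close timestamps exported from your tracker — the data here is invented for illustration:

```python
from datetime import datetime
from statistics import median

# Hypothetical issue records: (created, closed) dates pulled from
# your tracker's API. Real exports would carry IDs and more fields.
issues = [
    ("2024-01-02", "2024-01-09"),
    ("2024-01-03", "2024-01-05"),
    ("2024-01-10", "2024-01-20"),
]

def cycle_time_days(created: str, closed: str) -> int:
    """Whole days between an issue being opened and closed."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(closed, fmt) - datetime.strptime(created, fmt)).days

times = [cycle_time_days(c, d) for c, d in issues]
print(f"median cycle time: {median(times)} days")  # → median cycle time: 7 days
```

Medians resist the one ancient ticket that finally got closed; track the number quarterly and the “are we actually faster?” question stops being a matter of opinion.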
The tools are table stakes. Getting your team to the other side of the chasm is the actual work.