<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Jordan Stone]]></title><description><![CDATA[Engineering leadership, AI strategy, and building high-performing teams.]]></description><link>https://www.jordanstone.tech</link><image><url>https://substackcdn.com/image/fetch/$s_!U_Lu!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d5e0658-52c1-4698-b887-3e77ceeb7e9f_800x800.png</url><title>Jordan Stone</title><link>https://www.jordanstone.tech</link></image><generator>Substack</generator><lastBuildDate>Sat, 18 Apr 2026 15:43:33 GMT</lastBuildDate><atom:link href="https://www.jordanstone.tech/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Jordan Stone]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[engineeringslo@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[engineeringslo@substack.com]]></itunes:email><itunes:name><![CDATA[Jordan Stone]]></itunes:name></itunes:owner><itunes:author><![CDATA[Jordan Stone]]></itunes:author><googleplay:owner><![CDATA[engineeringslo@substack.com]]></googleplay:owner><googleplay:email><![CDATA[engineeringslo@substack.com]]></googleplay:email><googleplay:author><![CDATA[Jordan Stone]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[I Fell In Love With Writing Code Again]]></title><description><![CDATA[I recently fell in love with writing code again.]]></description><link>https://www.jordanstone.tech/p/i-fell-in-love-with-writing-code</link><guid isPermaLink="false">https://www.jordanstone.tech/p/i-fell-in-love-with-writing-code</guid><dc:creator><![CDATA[Jordan 
Stone]]></dc:creator><pubDate>Wed, 04 Mar 2026 13:03:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!U_Lu!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d5e0658-52c1-4698-b887-3e77ceeb7e9f_800x800.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I recently fell in love with writing code again. Not because anything changed about my free time as a VP of Engineering, but because the nature of building software itself changed.</p><p>My job doesn&#8217;t afford me a lot of time to actually write code. Two years ago, that sentence would have felt like a real loss. Even early last year, coding with GenAI felt like working with an overly ambitious autocomplete plugin &#8212; occasionally faster than searching StackOverflow, but not fundamentally different. Fast-forward to today and the conversation has shifted from &#8220;should you be using AI coding assistants?&#8221; to something more interesting: over 90% of engineers are already using them. The question now is how to adapt your workflow to make the most of them.</p><p>For me, the answer has been agentic development, and it has made building software feel genuinely exciting again.</p><h2>The Problem With The Old Way</h2><p>Recently, I needed to help a team meet some regulatory requirements that involved verifying data against a series of repositories provided by a third-party. They had an API, so the plan was to build a small tool to simplify interaction with these data sources and eventually automate the process entirely. 
Multiple endpoints, rate limits, batching logic for partial failures, a range of error codes, multi-environment support &#8212; not a trivial project, but it was relatively self-contained and not related to anything on any of my teams&#8217; roadmaps.</p><p>In a pre-AI world, my workflow would have looked something like this: pick a language, find boilerplate examples, wrestle with authentication, build a stub, test, fail, revisit docs, test again, succeed. Then repeat for every endpoint. Then remember all the things I forgot: error handling, secrets management, test coverage.</p><p>Given that my opportunities for deep, focused coding time are rare, getting to a working MVP normally took a week or more, and I&#8217;d be working in fits and starts the whole time.</p><p>The worst part was that almost none of that learning transferred. Every API is different. I&#8217;d spend most of my time acquiring knowledge I&#8217;d never use again. The boilerplate (the part that isn&#8217;t actually adding value to the solution) consumed the most time every single time.</p><h2>The Agentic Approach</h2><p>This time, I wanted to try something different: a more agentic workflow. This is fundamentally different from AI-assisted coding, where the engineer is still the primary driver and the AI is making just-in-time suggestions.</p><p>In an agentic workflow, you&#8217;re <em>guiding</em> an AI agent while the agent does most of the work. It&#8217;s not unlike how a senior engineer might work with an early-career engineer today: describe the desired end state, the constraints, the available resources, and then let them go solve the problem.</p><p>I started with a plan. Instead of jumping into code, I gave the agent all of the API documentation as context, described the endpoints I needed to consume, explained the input format, and specified what the output should look like. Then I asked it to draft an approach before writing a single line of code. 
This mirrors how we already work on complex engineering problems. We write an RFC before building, catching gaps in logic before the expensive work begins.</p><p>The plan was solid. The agent understood the API specs from the PDF I&#8217;d shared, and even suggested support for multiple data upload formats I hadn&#8217;t considered. I approved the plan and moved on to other work while it built.</p><p>A few minutes later, I had working software to test against. The happy path ran fine. I tried breaking it and found missing error states with unhelpful messages. I pointed the agent to the error code documentation and asked it to update the solution accordingly. A few moments later, I had a more robust iteration.</p><p>This continued for about 90 minutes. Sometimes I told the agent what to do. Other times I asked it to compare two approaches and recommend one. By the end, I had a production-ready tool that handled failure cases, multi-threading, and rate limiting. The tool did its job: the regulatory verification process that previously required manual effort can now run automatically.</p><p>Building this the old way would have taken weeks.</p><h2>Software Engineering Is Dead. Long Live Software Engineering.</h2><p>Here&#8217;s what surprised me most: this experiment worked <em>because</em> of my software engineering background, not despite it.</p><p>One API endpoint supported concurrent requests. I knew that meant I could process data faster, but also that any one of those requests could fail partially. I asked the agent to record successes and failures separately so I could inspect failures without reprocessing everything. That&#8217;s not intuition the agent would have surfaced on its own &#8212; it came from experience with distributed systems. A less experienced guide might not have caught it.</p><p>In another moment, my tests failed due to a variable referenced outside the scope where it was defined. 
I spotted the issue immediately and told the agent exactly where to look and how to fix it, which was faster and cheaper (in tokens) than asking it to debug from scratch.</p><p>This points to something important about where the industry is heading:</p><p><strong>Software engineering principles still apply.</strong> The <em>how</em> matters less. You no longer need to know which HTTP library a language favors or how to structure a project scaffold from memory. But you still need to understand resiliency, testability, and performance to guide an agent toward a solution that actually holds up.</p><p><strong>Understanding the </strong><em><strong>goal</strong></em><strong> matters more than ever.</strong> They say you truly understand something when you can teach it. To be effective in an agentic workflow, you need to deeply understand what you&#8217;re building and why, because that context is the primary input.</p><p><strong>This is a meaningful unlock for engineering leaders.</strong> Most writing (and research) about AI-assisted coding addresses engineers who code all day. This is for the ones who used to. Agentic tools lower the floor on staying technically close to the work, whether that&#8217;s building ad-hoc tooling, supporting a team under pressure, or prototyping a solution quickly. The old methods are changing in ways that make them more accessible to those of us who haven&#8217;t had the maker time to participate in the same way.</p><p><strong>When you don&#8217;t have to fight the learning curve, building is fun again.</strong> Previously, my limited coding time was consumed just getting my environment set up and boilerplate in place. 
Now I can jump to the actual problem in a fraction of the time.</p><h2>What This Means Going Forward</h2><p>As an industry, we&#8217;re still working through the implications: converging roles, upended assumptions about who can write code and what good code looks like, genuine uncertainty about what engineering careers look like in five years.</p><p>But the leaders who will thrive in this next phase won&#8217;t be the ones who hand off technical judgment to the agent. They&#8217;ll be the ones who bring deep enough understanding of what good software looks like to guide it well.</p><p>The floor on staying technically close just dropped. The ceiling on what that closeness can produce just went up.</p><p>I first learned to code because I wanted to <em>build things</em> &#8212; to solve my own problems, then other people&#8217;s. In a world where engineering leaders are being asked to deliver more in less time with fewer resources, time is the most precious commodity we have. Agentic workflows give some of that time back, and they do it in a way that makes the building feel like building again.</p>]]></content:encoded></item><item><title><![CDATA[Why I Write a Weekly Engineering Update]]></title><description><![CDATA[Clear communication is one of the most important skills a leader can develop.]]></description><link>https://www.jordanstone.tech/p/why-i-write-a-weekly-engineering</link><guid isPermaLink="false">https://www.jordanstone.tech/p/why-i-write-a-weekly-engineering</guid><dc:creator><![CDATA[Jordan Stone]]></dc:creator><pubDate>Tue, 03 Feb 2026 13:02:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!-rz6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facc7e8f4-571f-42eb-93ba-b7ff1f938635_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Clear communication is one of the most important skills a leader can develop. 
More than technical depth or strategic insight, the ability to communicate ideas clearly determines whether those ideas ever turn into impact. If people can&#8217;t understand what you&#8217;re thinking, they can&#8217;t act on it&#8212;and the value of your thinking stays locked in your own head.</p><p>In a remote-first world, where most collaboration happens asynchronously and an increasing amount of work depends on how well we structure context for other humans <em>and</em> for LLMs, written communication has quietly become a force multiplier for leadership.</p><p>Writing doesn&#8217;t just communicate intent&#8212;it sharpens it. The act of writing forces you to slow down, clarify assumptions, and pressure-test your own thinking before asking others to engage with it. When shared, writing scales your priorities far beyond what meetings or real-time conversations can reach. It creates durable signal. People look to leaders for cues about what matters, and writing those priorities down reinforces them in a way spoken communication rarely can.</p><p>This is why establishing a weekly internal writing practice has been one of the highest-leverage habits I&#8217;ve developed as an engineering leader.</p><h3><strong>Communicating Is Hard. Communicating Broadly Is Harder.</strong></h3><p>As organizations grow, communication doesn&#8217;t get incrementally harder. It changes entirely.</p><p>Early on, when a team is fewer than ten people, communication is direct. Everyone can be involved in most discussions, context travels quickly, and consensus is relatively easy to reach. Somewhere around ten people, that starts to break down. Roles specialize, attention fragments, and not everyone needs, or wants, to be part of every conversation.</p><p>You tend to see additional inflection points around 25, 50, and again around 100 people. 
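</p>

<p>The effect is easy to quantify with a quick sketch (the team sizes below are arbitrary examples): the number of possible one-to-one communication paths among <em>n</em> people is n(n - 1)/2, so paths grow quadratically while headcount grows linearly.</p>

```python
# Possible one-to-one communication paths in a team of n people: n * (n - 1) / 2.
# Headcount grows linearly; paths grow quadratically.
def comm_paths(n: int) -> int:
    return n * (n - 1) // 2

for size in (10, 25, 50, 100):
    print(size, comm_paths(size))
# 10 -> 45, 25 -> 300, 50 -> 1225, 100 -> 4950
```

<p>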
The exact numbers vary, but the pattern is consistent: as headcount grows, the cost of shared context rises faster than the team itself.</p><p>Metcalfe&#8217;s Law helps explain why. Originally developed to describe the value of telecommunications networks, the law observes that as the number of nodes in a network grows, the number of possible connections between those nodes grows quadratically. A team of 25 people has 300 possible communication paths. At that scale, it&#8217;s no longer feasible to rely on synchronous conversation or informal channels to keep everyone aligned.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!-rz6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facc7e8f4-571f-42eb-93ba-b7ff1f938635_2816x1536.png" width="1456" height="794" alt="" loading="lazy"></figure></div><p>The key takeaway is simple: while team size grows linearly, communication complexity grows quadratically. At a certain point, it becomes impossible to traverse every path, and leaders must shift from <em>participating in communication</em> to <em>designing for it</em>.</p><h3><strong>My Journey</strong></h3><p>For most of my career, collaborative chat tools have been the primary mode of communication (RIP HipChat). 
I&#8217;ve leaned heavily into ChatOps across multiple roles, pulling deployments, incidents, customer support requests, and operational alerts into tools like Slack and Teams.</p><p>While these tools are incredibly powerful, they&#8217;re also incredibly noisy.</p><p>I&#8217;ve written before about the <a href="https://open.substack.com/pub/engineeringslo/p/got-a-sec-is-killing-your-productivity?r=4t7y0&amp;utm_campaign=post&amp;utm_medium=web">cost of constant interruption</a>, but there&#8217;s a second-order effect that shows up as teams scale: important announcements get lost. Major changes, new initiatives, or business updates scroll by in a sea of messages. Some people see them in the moment. Others catch up later. Some never see them at all.</p><p>When teams are small, this is manageable. Daily standups and regular team meetings provide natural reinforcement. But as organizations grow, those shared forums disappear. You now have multiple teams, each with their own priorities and roadmaps. Chat remains great for real-time coordination, but its ephemeral nature makes it a poor substitute for durable communication.</p><p>What gets harder still is communicating <em>why</em> certain initiatives matter: product changes, engineering investments, or shared learning across teams. People who were around in the early days often miss the sense of omniscience they once had. They want context, but there&#8217;s no longer a single place to get it.</p><h3>Publishing Alignment</h3><p>By the time an engineering organization reaches ~20 people, no one, including the leader, has full visibility anymore. The challenge becomes maintaining alignment without sacrificing autonomy.</p><p>This is where a lightweight, weekly written update becomes incredibly effective.</p><p>I&#8217;ve found that a short internal post, shared with Engineering and key partners in Product, Design, and other functions, strikes the right balance. 
The exact format matters less than the consistency, but this structure has worked well for me:</p><p><strong>Intro</strong></p><p>Start with an industry article, research paper, or internal topic. Summarize it briefly and share your perspective. This sets the tone and signals the kinds of problems you&#8217;re spending time thinking about.</p><p><strong>Org-Wide Updates</strong></p><p>Use this section to highlight initiatives that affect the entire engineering organization. This reinforces priorities and helps teams understand what&#8217;s happening outside their immediate scope.</p><p>This is also where you should share <em>your</em> work. As a leader, your job is to work on the system, not in it. Making that work visible helps others understand how decisions are made and what outcomes you&#8217;re driving toward.</p><p><strong>Team Updates</strong></p><p>A few sentences per team is enough. Focus on outcomes: launches, decisions, progress against goals. This keeps people connected to the broader organization without dragging them into day-to-day details.</p><p><strong>Pro Tip:</strong> Once you have managers reporting to you, delegate this section. It gives them practice writing for a broad audience and reinforces an outcomes-over-output mindset.</p><p><strong>Optional Sections</strong></p><p>Add sections as needed. Reminders for infrequent but important tasks. A &#8220;tip of the week&#8221; when rolling out a new tool or process. Evolve the format as your organization evolves.</p><h3><strong>Publishing Tips</strong></h3><p><em>Consistency matters</em>. Block time on your calendar, publish on the same day each week, and share the post automatically in Slack or Teams. I&#8217;ve found mid-day Friday tends to get more engagement than end-of-day posts that disappear into the weekend.</p><p><em>Templates help</em>. 
Whether you use Confluence, Notion, or another tool, formalizing the structure reduces friction and lowers the activation energy to write.</p><p><em>Start early</em>. Create a draft on Monday and add to it throughout the week. By the time your writing block arrives, you&#8217;re editing, not starting from scratch. This also creates a subtle accountability mechanism for the initiatives you plan to push forward that week.</p><h3><strong>The Benefits</strong></h3><p>Writing clarifies thinking. Publishing amplifies it. This practice also creates another opportunity for you to start to <a href="https://en.wikipedia.org/wiki/Nemawashi">nemawashi</a> ideas with a broader audience, which helps to simplify the decision-making process later on.</p><p>Over time, these posts also create a living record of your organization&#8217;s progress. Quarterly and annual reviews become much easier when you already have a weekly trail of decisions, outcomes, and lessons learned.</p><p><strong>Pro Tip:</strong> Periodically feed your posts into an LLM to generate summaries of your organization&#8217;s most impactful work over longer time horizons. It&#8217;s an efficient way to surface patterns you might otherwise miss, and helps to avoid recency bias.</p><h3>Summary</h3><p>As teams scale, communication stops being something you do and starts being something you design. The informal mechanisms that work for small groups do not survive growth, and real-time tools alone cannot carry shared context across an organization.</p><p>A weekly internal writing practice is a simple but powerful way to address that gap. It forces clearer thinking, creates durable signal, and scales alignment without pulling people into unnecessary meetings or conversations. Over time, it also builds a record of decisions, priorities, and progress that makes reflection and course correction easier.</p><p>Writing is a leadership tool. 
Used consistently, it helps teams stay aligned as complexity grows, which is exactly when alignment matters most.</p>]]></content:encoded></item><item><title><![CDATA[AI Doesn’t Fix Queues]]></title><description><![CDATA[Why Coding Assistants Don&#8217;t Always Improve Organizational Performance]]></description><link>https://www.jordanstone.tech/p/ai-doesnt-fix-queues</link><guid isPermaLink="false">https://www.jordanstone.tech/p/ai-doesnt-fix-queues</guid><dc:creator><![CDATA[Jordan Stone]]></dc:creator><pubDate>Wed, 21 Jan 2026 13:03:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!U_Lu!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d5e0658-52c1-4698-b887-3e77ceeb7e9f_800x800.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You&#8217;ve seen the headlines, read the articles in your leadership newsletter, and set aside some budget for the tools. You announce to your team that everyone has access to, and must use, GitHub Copilot (or OpenAI Codex, or Gemini Code Assist, or Claude Code, or insert-your-tool-of-choice-here). Productivity will grow exponentially. Features will ship in a fraction of the time. Parades will be held in your hometown, heralding your achievements.</p><p>Then reality kicks in.</p><p>Weeks or months after the rollout, you review the key metrics you identified to measure impact (you <em><strong>did</strong></em> identify metrics to measure impact, right?) and see that pull request throughput is up significantly: engineers are opening more PRs now than they were before the rollout of the coding assistant.</p><p>But wait&#8230;PR cycle time isn&#8217;t down. In fact, it might be flat or even climbing. Your engineers are writing more code faster, but you&#8217;re not shipping features any faster. Local speed increased while end-to-end delivery did not. What gives?</p><h2>It&#8217;s a Queueing Problem</h2><p>This is a classic queueing problem.</p><p>When I was studying for my undergraduate degree in Industrial and Systems Engineering, one of my courses centered around queueing theory: the mathematical study of waiting lines. It was in this course that I learned the basic principles of work-in-progress (WIP), scheduling policies, and customer waiting behavior. It also helped explain why intuitive local optimizations often fail at the system level.</p><p>In queueing theory, the major components include the arrival process, service process, number of servers, queue capacity, and the customer population.</p><p>To use a non-technical example, think of a coffee shop. Customers arrive and get in line behind one or more registers, where a barista takes a customer&#8217;s order and makes their coffee. 
When the customer receives their coffee, they leave the shop.</p><p>How efficiently that queue runs depends on several factors:</p><ul><li><p>How many people are in line?</p></li><li><p>How many baristas are working?</p></li><li><p>How long does each order take to complete?</p></li><li><p>Is it the 7 AM rush or the 4 PM lull?</p></li></ul><p>This relationship is often summarized in <strong>Little&#8217;s Law</strong>, which describes how work accumulates in a system over time:</p><p><code>L = &#955;W</code></p><p>Here, <em>L</em> is the average number of customers in the system, &#955; is their arrival rate, and <em>W</em> is the average time each one spends in the system. In practical terms, if people arrive faster than they can be served, the size of the queue will grow. Pretty obvious, right?</p><h2>The Software Development Queue</h2><p>I first began connecting queueing theory to software development when trying to understand why a team&#8217;s velocity didn&#8217;t improve even after finding ways to speed up coding time. Even adding engineering capacity didn&#8217;t drive meaningful results. It followed a familiar pattern: a local improvement that failed to produce a system-wide result.</p><p>Sure, there&#8217;s the whole <a href="https://en.wikipedia.org/wiki/The_Mythical_Man-Month">mythical man month</a> problem, but surely adding just a single engineer should have a positive impact. My explanation ultimately had less to do with the number of software engineers and more to do with the number of QA engineers.</p><p>In a team where dedicated QA is part of the process, and where software must be verified before it can be considered complete, accelerating development simply increases the size of the queue. In the equation above, &#955; represents the arrival rate of work in the form of tickets requiring verification, while W represents the average time a ticket spends in the verification system, from entering the queue to being fully verified, not just the hands-on review time.</p><p>If QA capacity remains fixed, writing more software increases arrivals without increasing service capability. Waiting time grows, work piles up, and overall throughput remains unchanged. 
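</p>

<p>A quick back-of-the-envelope calculation shows how the law plays out (the numbers below are hypothetical, for illustration only):</p>

```python
# Little's Law: L = lambda * W (hypothetical numbers for illustration).
arrival_rate = 12        # lambda: tickets entering QA per day
time_in_system = 2.5     # W: average days a ticket spends waiting plus being verified
tickets_in_flight = arrival_rate * time_in_system  # L
print(tickets_in_flight)  # 30.0 tickets sitting in verification on average
```

<p>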
To resolve this, we had to either increase QA capacity or reduce verification time. Only then did throughput across the system improve.</p><h2>Coding Assistants Don&#8217;t Speed Up The System</h2><p>Software development is a series of interconnected queues. There&#8217;s a queue for planning and estimation, a queue for writing code, a queue for reviewing that code, and a queue for deploying to production.</p><p>A coding assistant improves the code-writing queue, but on its own it does nothing to change the service rate of the others. The output of the writing queue becomes the input to the review queue, increasing the arrival rate. If the rest of the system remains unchanged, throughput does not increase.</p><p>Just like in Braess&#8217;s Paradox, where adding roads can <em>decrease</em> traffic flow, adding more PRs to be reviewed can slow down overall team productivity.</p><h2>Addressing Service Time In AI-Enabled Systems</h2><p>A <a href="https://www.faros.ai/blog/key-takeaways-from-the-dora-report-2025">recent study</a> from FarosAI found that while AI significantly improved individual developer performance, code review time increased by 91%. Code was produced faster than it could be reviewed, creating a bottleneck that resulted in no measurable improvement in organizational performance.</p><p>Software development is a symphony of systems that must work together to produce quality output. Improving a single component can generate impressive local gains while leaving overall system performance unchanged. To see real impact, service time must be addressed across the system, not just at the point of code creation. That means looking beyond where AI <em>feels</em> productive and applying it deliberately at the constraints that limit flow. Some practical ways to address constraints on flow include:</p><ul><li><p>Incorporate AI into later stages of the SDLC, including code review and deployment. 
Productivity gains achieved during code writing must propagate downstream. When review and deployment service times decrease alongside code creation, queues stabilize and throughput improves.</p></li><li><p>Consider automatic approval of certain classes of pull requests. This approach requires mature test automation, observability, and governance, and may not be viable in highly regulated environments. </p></li></ul><p>The 2025 DORA report describes AI as an organizational mirror, reflecting existing strengths and weaknesses. Teams lacking clear standards, documentation, testing, or release discipline will see those gaps amplified. Teams with mature systems in place are more likely to experience AI as an accelerant.</p><h2>Wrapping It Up</h2><p>As an industry, we are still early in AI adoption. Even with widespread use, teams continue to grapple with tools that do not fully understand intent and occasionally produce incorrect output. But we have moved past asking whether AI is useful and into the harder work of learning how to use it well.</p><p>There are no silver bullets. Bolting an AI tool onto existing workflows is not enough. Teams that ground themselves in fundamental software engineering principles, then deliberately adapt those principles to new tools, new collaboration patterns, and new delivery models, will see the greatest benefit.</p><p>AI does not fix queues. It makes them visible. 
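</p>

<p>The bottleneck dynamic can be sketched in a few lines (the stage rates are hypothetical): in a serial pipeline, steady-state throughput is capped by the slowest stage, so doubling code-writing speed changes nothing while review remains the constraint.</p>

```python
# Steady-state throughput of a serial pipeline is capped by its slowest stage.
def pipeline_throughput(stage_rates):
    """stage_rates: items per day each stage (write, review, deploy) can service."""
    return min(stage_rates)

before = pipeline_throughput([5, 3, 4])   # write=5, review=3, deploy=4 -> 3 PRs/day
after = pipeline_throughput([10, 3, 4])   # AI doubles writing speed -> still 3 PRs/day
print(before, after)
```

<p>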
If you want organizational impact, start by mapping your delivery system end to end, then invest where AI can reduce service time at your constraints, not just where it increases arrival rates, to improve flow across the system.</p>]]></content:encoded></item><item><title><![CDATA[Impact Over Productivity: Rethinking Engineering Metrics]]></title><description><![CDATA[This post is about Engineering Metrics, but it won&#8217;t tell you how to measure the productivity of your engineering teams.]]></description><link>https://www.jordanstone.tech/p/impact-over-productivity-rethinking</link><guid isPermaLink="false">https://www.jordanstone.tech/p/impact-over-productivity-rethinking</guid><dc:creator><![CDATA[Jordan Stone]]></dc:creator><pubDate>Sun, 01 Jun 2025 19:51:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!U_Lu!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d5e0658-52c1-4698-b887-3e77ceeb7e9f_800x800.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This post is about Engineering Metrics, but it won&#8217;t tell you how to measure the productivity of your engineering teams. 
On the contrary, if you don&#8217;t have systems in place to identify low performers and mechanisms to move them out of your organization; if you&#8217;re trying to squeeze every hour of coding time out of your engineers; or if you think you can just &#8220;turn on&#8221; data collection and see improvements, an Engineering Metrics program is likely to fail from the start. </p><h2>Answering a Basic Question</h2><p>I started my own journey toward establishing an Engineering Metrics program years ago in an attempt to answer a basic question: How can I know if the Engineering organization I lead is best-in-class? And what does &#8220;best-in-class&#8221; even mean? Okay, so maybe that&#8217;s two questions, but there&#8217;s an important distinction here: if you&#8217;re trying to use metrics to answer questions like &#8220;is my team working enough&#8221; or &#8220;should people be writing more code?&#8221;, you&#8217;ll erode any trust you might have built and end up spending more time working against people gaming metrics than you will working with your team to improve performance. More on trust and metrics later. But my original question, the very basic question you may also be asking, is surprisingly hard to answer. </p><h2>Measuring in the Wrong Direction</h2><p>Like any good Engineering leader, I was well aware of <a href="https://cloud.google.com/devops/state-of-devops">Google&#8217;s annual State of DevOps Report (DORA)</a> and the four key metrics the research group espoused as being indicative of elite organizations. For the uninitiated among us, there are plenty of other articles explaining what DORA metrics are and how to measure them, and I will leave researching them in detail as an activity for the reader. The first time I tried to institute an Engineering Metrics program, I built it around the DORA metrics. After all, if it was good enough for Google, it must be good enough for us, right? 
As it turns out, I found starting with DORA to be difficult for two reasons: First, I found instrumenting systems to reliably measure some of the DORA metrics, such as change failure rate, to be exceedingly difficult. Measuring a failed deployment is relatively easy, but that&#8217;s really just a signal of your automation tooling&#8217;s quality. If you want to measure a failed software change, I&#8217;ve found that you also need a way to reliably measure whether that software failure is an escaped failure or a latent one, and <em>that</em> differentiation tends to be more of a rule-of-thumb and a judgement call than it is something you can automatically measure. And if you can&#8217;t automatically measure it &#8212; if you rely on people to &#8220;mark it down&#8221; every time it happens &#8212; you&#8217;re unlikely to get strong, consistent measurement of any metric as your organization grows. Some organizations do have ways to measure this, but I found that trying to solve for it with a brand-new program introduced too much friction to overcome at the start. Setting aside the hard-to-measure metrics, though, even the easier metrics, like deployment frequency, revealed a second challenge with DORA: no one outside of engineering really cared. I believe this was ultimately due to the fact that DORA metrics are <em>engineering</em> metrics, not impact metrics, and Engineering&#8217;s ultimate responsibility is to positively impact the business. My first few metrics programs failed to launch because we weren&#8217;t able to get broader buy-in from the business on the importance of measuring the things that more research than I could ever commit to the topic had shown to be highly correlated with elite performance among companies. What was I doing wrong?</p><h2>Measuring Where, Not How</h2><p>DORA metrics are ultimately a measurement of <em>how</em> a team or organization delivers. 
Teams who want to improve deployment frequency or lead time should first introduce automation and reduce manual toil, for example. The missing link here is that my peers and other leaders were more interested in <em>what</em> we were delivering. Product had their roadmaps and the things they wanted to ship each quarter to move the needle on specific business metrics, and while we were delivering against that roadmap, we knew our time was also being spent in other areas &#8212; we just didn&#8217;t know how much. This made capacity planning a challenge. We tried fancy spreadsheets with people-to-hours formulas and we built in plenty of buffer for time off, unplanned work, and even some effort to &#8220;pay down tech debt&#8221;, but there was still a gap we couldn&#8217;t reconcile. Where was our time going? Eventually, this became my new single question to answer and is the basis around which a successful metrics program was started. I set out to answer the question &#8220;where is Engineering spending our time, and how can we shift those investments into the places we <em>want</em> to spend time as opposed to where we <em>have</em> to spend time?&#8221; Ok, again, that was two questions but if you noted that, at least I know you&#8217;re paying attention. Once we began to measure where teams were spending their time, the details of which I will leave for a future post, we were able to get a much better understanding of why our original approach to planning wasn&#8217;t capturing everything that was being worked on. I think this is an incredibly important story to tell for Engineering organizations, because while no sustainable organization is spending all of their time adding new features, it&#8217;s often the case that stakeholders outside of Engineering are focused only on the new features and enhancements being built. This makes sense, since these stakeholders are primarily focused on <em>growing the business</em>. 
However, what is often missed is an acknowledgement that Engineering plays a critical role in <em>running the business</em> as well &#8212; there is time invested in keeping software running just to meet customers&#8217; minimum expectations. Once we had a better understanding of where we were spending our time, it allowed us to focus on improving Engineering impact. I prefer &#8220;impact&#8221; instead of &#8220;productivity&#8221; because I think the words we choose here matter. &#8220;Productivity&#8221; implies a machine-like focus on inputs and outputs. It also tends to suggest that we are measuring how hard a person is working, or how many hours they spend behind the keyboard. Instead, &#8220;impact&#8221; communicates that we trust people to do the right thing and that we are focused on leveraging those people to drive the best possible outcome for the business. By focusing on &#8220;impact,&#8221; you clarify that you&#8217;re measuring the outcomes of the Engineering organization as a whole on the business, and not the output of individuals. This focus on impact allowed us to shift the conversation around things like DORA metrics from being the goal towards being a tool we could use to measure the impact of changes <em>after</em> we&#8217;ve identified a specific challenge or opportunity.</p><h2>Starting Your Metrics Program</h2><p>How you start your Engineering Metrics program depends a lot on the size and culture of your organization. If your organization is large (I&#8217;d say anything over 50 engineers), it&#8217;s best to start measuring metrics on a single team with a single goal and to scale from there. For example, if you&#8217;re looking to improve delivery predictability, find a relatively well-performing team and work together to categorize where that team is spending their time. Surprised by how much effort is going into getting software out to Production? 
Now you can use DORA metrics like lead time or deployment frequency to test changes to team process or to measure improvements from automation. As these metrics improve, you&#8217;ll see the time invested in Production deployments go down, freeing that time for reinvestment elsewhere. Now you can use that success story to scale your metrics program to other teams across the organization. Again, the goal here is not &#8220;get to 10 deployments a day because that&#8217;s what Elite organizations do.&#8221; Instead, use deployment frequency to measure whether changes, like additional automation or improved PR processes, impact the metric. If your organization is small enough (say, about 10 or fewer), you can take the same measurement approach, but you&#8217;ll likely be able to just roll it out to everyone. While starting a metrics program with smaller teams may seem like a premature optimization, using metrics to improve team processes and deliver more business value takes time, and there&#8217;s a reasonably large barrier to overcome in building trust throughout the organization that metrics are being used the right way. I&#8217;ve found this much easier to accomplish in smaller teams, and it helps to set the culture early on of using metrics and data to measure and improve effectiveness. As you grow, that culture of measurement and trust in how that data is used is much easier to scale. You start hiring people who are data-driven and focused on continuous improvement, and a flywheel effect takes place. When you first start collecting data, from investment areas to things like DORA metrics, collect passively at first. Don&#8217;t roll into a team retrospective waving your Metrics Bible and telling the team their lead time is too long. Listen to the challenges your teams raise in one-on-ones and retrospectives, and see if you can identify the metrics you can watch to validate whether or not changes on those teams are having an impact. 
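</p><p>As a minimal sketch of that validation step, with entirely made-up deploy dates, comparing deployment frequency before and after a process change might look like:</p>

```python
# Hypothetical check: did deployment frequency move after a process change?
# The deploy timestamps below are invented, illustrative data.
from datetime import date

deploys = [date(2025, 3, d) for d in (3, 7, 12, 19, 26)]        # before the change
deploys += [date(2025, 4, d) for d in (1, 3, 6, 8, 10, 14, 16)] # after the change
change_date = date(2025, 4, 1)

def weekly_frequency(dates):
    """Average deployments per week over the span of the sample."""
    span_days = (max(dates) - min(dates)).days or 1
    return len(dates) / (span_days / 7)

before = weekly_frequency([d for d in deploys if d < change_date])
after = weekly_frequency([d for d in deploys if d >= change_date])
print(f"{before:.1f} -> {after:.1f} deploys/week")  # 1.5 -> 3.3 deploys/week
```

<p>Both the dates and the before/after split are invented; the point is that the metric is used to check a specific change, not chased as a target. 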
Celebrate the wins publicly, and avoid using metrics to point out failures or things that aren&#8217;t moving in the right direction.</p><p>I&#8217;ve also found it helpful to identify champions on teams who are data-driven and interested in the topic of using metrics for continuous improvement. Teach them how to review the same metrics dashboards you&#8217;re looking at. Tell them what you&#8217;re looking for when reviewing this data, and ask what stands out to them. Ask what&#8217;s hard about getting work done, and then look together through the data to identify what metrics you can watch to measure improvement based on different initiatives. Finally, encourage them to suggest these initiatives and to advocate for using those metrics to monitor change. When teams are bought into using data to improve the way they get work done and the impact they have on the business, your metrics efforts are far more likely to be successful.</p><p>When we decided to fully invest in our metrics program at <a href="https://www.paytient.com">Paytient</a>, it meant we needed to commit to collecting quality data. For us, this meant adding a new required field to our issue tracking system. There were two aspects of this change I underappreciated initially &#8212; first, I was worried that adding a new required field would introduce too much friction in our process and discourage the creation of issues, thus reducing overall visibility. This turned out not to be the case, I think largely because the issue creation process was already fairly lightweight, so one more field didn&#8217;t meaningfully change that, and because we focused on investment areas that were <a href="https://en.wikipedia.org/wiki/MECE_principle">mutually exclusive and collectively exhaustive</a> (MECE). Even still, the second thing I learned from this is that the definitions of the investment areas you choose need regular, consistent reiteration. 
You want the categories to be clear enough that people don&#8217;t need to spend too much time thinking about which category a piece of work falls into, but that requires talking about the categories, providing examples of prior work and how that was categorized, and reviewing the categorization of future work regularly. Finally, remember that when it comes to categorizing work into investment areas, good enough is good enough. You&#8217;re looking for trends and patterns, not the precise allocation of every activity by every individual. Encourage your team to use their best judgement, and provide the examples and review opportunities to create a shared understanding.</p><h2>Don&#8217;t Use Engineering Metrics in Isolation</h2><p>Engineering Metrics, from investment areas to metrics like DORA, never tell the whole story. You can have the shortest lead time, lowest change failure rate, and spend 80% of your time building against the roadmap, but none of that necessarily means you&#8217;re helping to grow the business. You need to work with Product to combine your organization&#8217;s metrics with theirs to truly understand the impact the Engineering organization is making on the business. Deployment frequency doesn&#8217;t really matter if you keep deploying things your customers don&#8217;t want. Use investment area data to improve the product roadmap planning process. Knowing where your team has historically spent their time helps you and your partners in Product to be better informed about how much time you&#8217;ll have to invest in the roadmap in the future. 
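</p><p>As a sketch of how lightweight this analysis can be (the category names and counts below are invented, not real allocations), tallying tagged issues into rough percentages is enough to see the trend:</p>

```python
# Hypothetical tally of where engineering time went, using issues tagged with
# a MECE investment area. Categories and counts are invented for illustration.
from collections import Counter

issues = (["roadmap"] * 46 + ["keeping the lights on"] * 22
          + ["tech debt"] * 12 + ["customer support"] * 20)

def allocation(tags):
    """Percentage of issues per investment area (rounded; trend-level precision)."""
    counts = Counter(tags)
    total = sum(counts.values())
    return {area: round(100 * n / total) for area, n in counts.items()}

print(allocation(issues))
```

<p>With even this rough cut of the data in hand, roadmap conversations get more concrete. 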
And you can tell a much better story about how the time your teams spent improving documentation and application logging led to shorter incident resolution times and ultimately more time being available to build new features!</p><h2>Benchmarking Internally and Externally</h2><p>I think the hardest part of implementing a metrics program is figuring out what to do <em>after</em> you&#8217;ve started measuring things. If your teams spend 60% of their time on product roadmap work, is that enough? Should you be aiming for 80%? What if a team&#8217;s PR cycle time is two days &#8212; how much time and energy should you spend trying to shorten that even further? I&#8217;ve found benchmarking to be a useful tool in figuring out where to go after your first data points are collected. First, start benchmarking internally and only within the team you&#8217;re measuring; trying to benchmark across teams can be difficult due to different team compositions and areas of focus. Where has the team historically invested their time? Does it shift in a predictable pattern, such as spending more time on customer support and incident response after a major feature release? Is time spent deploying to Production steadily increasing over time? The goal here is not to compare teams or to pit them against each other. We&#8217;re using metrics for learning and improvement within a team, remember? If each team can make even marginal improvements, those aggregate across the organization.</p><p>That said, it&#8217;s helpful at an organizational level to understand how all teams are performing relative to your peers (or competition), and external benchmarking can be very helpful here. The challenge is where to get this data from, and the most reliable source I&#8217;ve found is the benchmark data available in Engineering Performance Management tools like Jellyfish, LinearB, etc. 
These tools have benchmark data that goes beyond just DORA metrics and includes things like investment allocations, sprint predictability, issue lifecycle, and more. Typically, you can segment the benchmark data by things like industry and company size to ensure you&#8217;re benchmarking against companies similar to your own. Investment allocation is where this benchmarking data can be especially useful in conversations with Product and other business stakeholders, because it can help quantify the story you&#8217;re telling around striking a balance between growing the business versus running it.</p><h2>When Metrics Won&#8217;t Help</h2><p>Collecting and using metrics in and of itself won&#8217;t solve any of your problems. If you&#8217;re trying to use metrics to identify low performers, don&#8217;t. If you lack the feedback mechanisms to identify these folks on your teams outside the context of Engineering metrics, you likely lack the psychological safety required to make any metrics program successful in the first place. It requires a significant amount of trust from the people doing the work you&#8217;re trying to measure to launch and scale a metrics program, and if they think this data is going to be used to micromanage them, you&#8217;re sure to fail. Similarly, do not tie Engineering metrics to performance reviews or financial incentives like compensation or bonuses. These are the kinds of initiatives that fuel the horror stories of failed metrics programs and engineers gaming the system. Metrics should be used to measure engineering effectiveness, to diagnose problems, and to measure the impact of changes to the ways teams work. </p><h2>Iterate Regularly</h2><p>Once you&#8217;ve established an Engineering Metrics Program, it becomes simpler to answer all kinds of questions. But start small. Don&#8217;t overload your issue tracker with required fields in the hopes that you&#8217;ll one day need or want to slice and dice the data along those dimensions. 
You&#8217;ll be drowning in data at first, and will need time to acclimate to your newfound insights. From there you can start asking new questions and incrementally adding fields, labels, issue types, etc. to gain new insights. Some of the questions I&#8217;ve been able to answer by adding new dimensions after the launch of a successful program are things like &#8220;how much time do we spend doing work for specific customers?&#8221;, &#8220;what was the ROI of that engineering initiative?&#8221;, and &#8220;how much planned versus unplanned work are we doing?&#8221; Look for ways to improve the process as you learn more about what data you have, what data you need, and how you can collect it. And remember: never stop improving!</p>]]></content:encoded></item><item><title><![CDATA[‘Got a sec?’ Is Killing Your Productivity]]></title><description><![CDATA[One of the most challenging aspects of leadership isn&#8217;t formulating strategy or organizational design.]]></description><link>https://www.jordanstone.tech/p/got-a-sec-is-killing-your-productivity</link><guid isPermaLink="false">https://www.jordanstone.tech/p/got-a-sec-is-killing-your-productivity</guid><dc:creator><![CDATA[Jordan Stone]]></dc:creator><pubDate>Mon, 24 Mar 2025 16:04:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!74Wt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facd2bf87-ffb3-4e5f-8d63-0ecaca242578_2332x1668.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One of the most challenging aspects of leadership isn&#8217;t formulating strategy or organizational design. While these present their own unique hurdles, I&#8217;ve found <em>context switching and urgent interruptions</em> to be far more disruptive. 
In fact, constant context switching often undermines effective strategy formulation or coherent organizational design far more than either of them inhibits one&#8217;s ability to context switch. </p><p>To a certain extent, managing interruptions has always been a core challenge of leadership, but I&#8217;ve seen this become exponentially more difficult to manage since 2020, when COVID-19 forced companies into remote work. </p><p>Already popular, tools like Slack took on new levels of business criticality as we leaned heavily into chat-first work. Early reviews of our new work cultures popularized the notion that remote-first companies were more productive and that office culture, especially the open-office layout, served to foster distraction more than it did to foster collaboration. Over time, I&#8217;ve come to disagree with the notion that remote-first <em>equals</em> async-first (which I believe is where those productivity gains truly come from), or that remote workers are inherently more productive. Without careful consideration for how you encourage remote collaboration &#8212; and without clear personal execution strategies &#8212; leaders at all levels within the organization risk becoming less impactful and more prone to burn out.</p><p>We&#8217;ve all been there: You&#8217;re just getting started on that major presentation to the executive leadership team when you hear a <em>ding</em> and a notification slides in from the top-right of your screen:</p><blockquote><p>@your-name can you take a look at this document and provide your comments?</p></blockquote><p>In the worst case, you stop what you are doing, assuming the request is urgent. Much has been written about urgency vs importance and how tools like the Eisenhower Matrix can help with determining how to treat those situations. But even in the best case &#8212; where you determine that this ask isn&#8217;t urgent &#8212; you&#8217;ve still lost time and focus. 
First, the alert drew your attention away from the task at hand. Then you needed to review the message and document, assess its urgency, and possibly delegate it. That means finding the right person, passing along context, and ensuring they&#8217;re set up for success. </p><p>By the time you get back to preparing for your major presentation, 30-60 minutes has passed. And if your schedule is anything like mine, that might&#8217;ve been your only block of time to focus during the workday. Worse still, the more teams you support, the more likely another interruption will arrive before you finish the first one. Each interruption delays your ability to move your prior task forward until you arrive at the end of the day only to realize you got 80% of ten different things completed and none of them are the one thing you <em>needed</em> to finish.</p><p>Let me be clear: I believe that remote and/or hybrid work environments remain the best way to build strong, talented, diverse teams and cultures. I also believe that tools like Slack have increased our overall productivity by helping to centralize traditionally disparate tools into one place and have increased our overall knowledge by creating spaces where context is shared and information silos are reduced. </p><p>But those benefits aren&#8217;t automatic &#8212; and they&#8217;re not free. I&#8217;ve also seen tools like Slack become a crutch for eschewing the responsibility of self-serving information. Instead of searching internal documentation, wikis, or intranets, chat tools unconsciously encourage making the act of finding information someone else&#8217;s task: <em>Can the product do X? What&#8217;s the status of Project Y?</em> This behavior replaces active information-seeking with passive interruption. </p><p>Unlike in a physical office &#8212; where interrupting someone meant walking over and catching their attention &#8212; chat tools eliminate that friction. 
This has created a set-it-and-forget-it culture of information acquisition, where people toss questions into chat and move on, expecting someone else to provide the answers. That shift has made context switching even more costly for many leaders.</p><p>So how can you reduce context switching and increase productivity and impact? I&#8217;ve found a few strategies that have been helpful over the years. Each of these will work in isolation but becomes much more valuable when used in concert with the other strategies. They also require a certain level of commitment from you to adhere to these practices and to set the right boundaries and expectations with your peers and coworkers that you will be operating using a modified engagement model moving forward. </p><h3>Schedule Time for Certain Tasks</h3><p>This has been the most impactful change I&#8217;ve made in the last few years to the way I work. In his book &#8220;<a href="https://www.nirandfar.com/indistractable/">Indistractable</a>&#8221;, Nir Eyal describes using his calendar to allocate time for specific tasks. I started by manually scheduling time for things like contract reviews, architectural reviews, and hiring. </p><p>Now I use <a href="http://Reclaim.ai">Reclaim.ai</a> to manage most of this for me. I can create Tasks, tell Reclaim how long I need to complete the task, when it&#8217;s due, how important it is, and even whether the task can be broken into chunks, and it will automatically schedule those tasks onto my calendar around existing meetings. I also set up Habits for things I do regularly, like reviewing architectural docs, recruiting, metrics reviews, monthly budget reviews, catching up on email, and my own professional learning and development. </p><p>This time blocking sends a signal: both to your <em>future self</em> and to others trying to schedule time with you. It makes your focus work visible and reinforces accountability. 
And that&#8217;s critical &#8212; because if you block the time and don&#8217;t honor it, this practice won&#8217;t help you.</p><h3>Establish Deep Work and Response Time Expectations</h3><p>When the leaders I coach tell me they are having trouble getting other work done due to constant interruptions and urgent requests for feedback, the first thing I tell them to do is to shut down Slack. </p><p>Actually, the first step is to let people know that you are shutting down Slack and to provide a way for them to reach you in a true emergency. That message might look something like this:</p><blockquote><p>Hey all, I am shutting down Slack to work on {important task} for the next hour and a half. If you need to reach me urgently in that time, please call me at 123-456-7890. Otherwise, I&#8217;ve got time blocked off at {time} to review and respond to any Slack messages and mentions. Thanks!</p></blockquote><p>I&#8217;m not saying it&#8217;ll <em>never</em> happen, but I&#8217;ve yet to hear of someone who sent a message like this and then received a call. That little bit of friction helps others consider whether their request is truly urgent or if it can wait until a better time.</p><h3>Create Contiguous Blocks of Time</h3><p>Lara Hogan&#8217;s post on <a href="https://larahogan.me/blog/manager-energy-drain/">defragging your calendar</a> inspired me to begin optimizing my calendar as a tool to <em>enable</em> deep work, not prevent it. 
My old calendar was filled with short 30-minute gaps between meetings, looking something like the example calendar below: </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!74Wt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facd2bf87-ffb3-4e5f-8d63-0ecaca242578_2332x1668.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!74Wt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facd2bf87-ffb3-4e5f-8d63-0ecaca242578_2332x1668.jpeg 424w, https://substackcdn.com/image/fetch/$s_!74Wt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facd2bf87-ffb3-4e5f-8d63-0ecaca242578_2332x1668.jpeg 848w, https://substackcdn.com/image/fetch/$s_!74Wt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facd2bf87-ffb3-4e5f-8d63-0ecaca242578_2332x1668.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!74Wt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facd2bf87-ffb3-4e5f-8d63-0ecaca242578_2332x1668.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!74Wt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facd2bf87-ffb3-4e5f-8d63-0ecaca242578_2332x1668.jpeg" width="2332" height="1668" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/acd2bf87-ffb3-4e5f-8d63-0ecaca242578_2332x1668.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:1668,&quot;width&quot;:2332,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:0,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!74Wt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facd2bf87-ffb3-4e5f-8d63-0ecaca242578_2332x1668.jpeg 424w, https://substackcdn.com/image/fetch/$s_!74Wt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facd2bf87-ffb3-4e5f-8d63-0ecaca242578_2332x1668.jpeg 848w, https://substackcdn.com/image/fetch/$s_!74Wt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facd2bf87-ffb3-4e5f-8d63-0ecaca242578_2332x1668.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!74Wt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facd2bf87-ffb3-4e5f-8d63-0ecaca242578_2332x1668.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">My calendar, with plenty of short blocks of white space</figcaption></figure></div><p>Over the years I&#8217;ve come to learn that it takes me about 30-45 minutes to just get into a creative, deep-work headspace, and all of those open 30-minute blocks between meetings were just serving to allow time for shallow work. Now, I try to schedule meetings back-to-back (especially similar ones like 1:1s) to create larger uninterrupted blocks for deep work. And if someone schedules a meeting with me, I&#8217;m pretty open about my preference not to schedule short gaps in my day and will ask for meetings to be moved such that they back up against other already-scheduled time blocks. With this small change in place, my calendar typically looks something more like this. 
Notice the larger blocks of white space, which I can use for deep work:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!DNPr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc2ee221-0d15-40e8-ba47-3be4f63d2ecc_2335x1670.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DNPr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc2ee221-0d15-40e8-ba47-3be4f63d2ecc_2335x1670.jpeg 424w, https://substackcdn.com/image/fetch/$s_!DNPr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc2ee221-0d15-40e8-ba47-3be4f63d2ecc_2335x1670.jpeg 848w, https://substackcdn.com/image/fetch/$s_!DNPr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc2ee221-0d15-40e8-ba47-3be4f63d2ecc_2335x1670.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!DNPr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc2ee221-0d15-40e8-ba47-3be4f63d2ecc_2335x1670.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DNPr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc2ee221-0d15-40e8-ba47-3be4f63d2ecc_2335x1670.jpeg" width="2335" height="1670" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fc2ee221-0d15-40e8-ba47-3be4f63d2ecc_2335x1670.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:1670,&quot;width&quot;:2335,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:0,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!DNPr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc2ee221-0d15-40e8-ba47-3be4f63d2ecc_2335x1670.jpeg 424w, https://substackcdn.com/image/fetch/$s_!DNPr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc2ee221-0d15-40e8-ba47-3be4f63d2ecc_2335x1670.jpeg 848w, https://substackcdn.com/image/fetch/$s_!DNPr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc2ee221-0d15-40e8-ba47-3be4f63d2ecc_2335x1670.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!DNPr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc2ee221-0d15-40e8-ba47-3be4f63d2ecc_2335x1670.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">My calendar after grouping meetings into contiguous blocks</figcaption></figure></div><h3>Use Your Chat Tool&#8217;s Chat Management Features</h3><p>Recognizing that real-time chat can often become a real-time distraction, many chat tools have introduced features to help users manage the deluge of mentions, DMs, and notifications. If you&#8217;re not ready to deal with a message in a particular moment, use features like &#8220;Remind me in 30 minutes&#8221; or &#8220;Remind me on Monday at 9am&#8221;. I use these regularly to triage messages. </p><p>Slack&#8217;s &#8220;Save For Later&#8221; can also help &#8212; provided you&#8217;re disciplined about reviewing saved items. Some people use an emoji (such as a &#128204;) to mark important messages for follow-up. You can even use Slack workflows to trigger automations when you add a reaction to a message. </p><p>One small but powerful habit: include need-by dates in requests. 
Consider the difference between:</p><blockquote><p>Hey, I need you to review this presentation.</p></blockquote><p>vs</p><blockquote><p>Hey, I need you to review this presentation by 2pm on Thursday.</p></blockquote><p>The second message includes just four more words and takes about two seconds longer to write, but it conveys far more about the urgency of the request and helps the recipient prioritize it. If you received the second message on Monday at 8am, you&#8217;d prioritize the request very differently than if you received it at 4pm on Wednesday. Without the need-by date, you don&#8217;t know how to prioritize the first message and are more likely to delay more <em>important</em> work to handle something you assumed was urgent. With a need-by date, you can use your chat tool&#8217;s reminder or message-management features to deal with the message at a more appropriate time. If someone sends you a message like the first one, ask for a need-by date. And lead by example: give your teams the information they need to prioritize their own work by including need-by dates in your own requests.</p><h3>Bringing it all Together</h3><p>While any one of these suggestions is helpful on its own, the real power comes from using them in combination.</p><ul><li><p>Automatically block time for deep work</p></li><li><p>Group meetings to create larger blocks of uninterrupted time</p></li><li><p>Use message management tools to keep chat asynchronous and prioritize messages</p></li><li><p>Shut down tools like Slack entirely when work demands full focus</p></li></ul><p>None of these strategies are perfect. They take practice, discipline, and the ability to reset when you slip. But learning to manage context switching &#8212; instead of being managed by it &#8212; is what separates good leaders from great ones. 
</p>]]></content:encoded></item><item><title><![CDATA[What is Engineering Leadership, Anyways?]]></title><description><![CDATA[Great software engineering organizations adopt best practices such as CI/CD, infrastructure-as-code, and automated testing.]]></description><link>https://www.jordanstone.tech/p/what-is-engineering-leadership-anyways</link><guid isPermaLink="false">https://www.jordanstone.tech/p/what-is-engineering-leadership-anyways</guid><dc:creator><![CDATA[Jordan Stone]]></dc:creator><pubDate>Fri, 14 Feb 2025 16:30:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!U_Lu!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d5e0658-52c1-4698-b887-3e77ceeb7e9f_800x800.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Great software engineering organizations adopt best practices such as CI/CD, infrastructure-as-code, and automated testing. These are truisms our industry has adopted over the years, and if you move from one organization to another, you&#8217;ll see that the quality of a team&#8217;s code tends to reflect how well these practices have (or haven&#8217;t) been adopted. Why, then, are the leadership roles most responsible for establishing, implementing, and maintaining these practices so loosely defined? An Engineering Manager at one company is a Team Lead at another. Or maybe it&#8217;s called a Tech Lead, Dev Manager, or just Manager. And what the hell does a Director or VP of Engineering even do, anyways? If engineers are responsible for writing code to drive towards positive outcomes for the business, and an engineering leader isn&#8217;t writing code, what is their impact? Why do we need middle managers in the first place? Why do we promote the most prolific software engineers into roles where they no longer do real work? 
In 2023 and 2024 we saw this prioritization of &#8220;doers&#8221; over managers in Meta&#8217;s Year of Efficiency, which focused on <a href="https://www.businessinsider.com/mark-zuckerberg-flatter-org-chart-middle-managers-comments-2024-9">flattening the company by removing middle managers</a>. Amazon <a href="https://www.businessinsider.com/amazon-ceo-return-to-office-mandate-5-days-week-2024-9">quickly followed</a>. If companies such as Meta and Amazon, renowned for their innovation and technical supremacy, were reducing waste by removing managers and increasing a manager&#8217;s number of direct reports to as high as 20, surely this must be the way.</p><h2>Doing Real Work</h2><p>I recently read a book about why startups are so much better positioned than large, traditional organizations to change existing markets and products (or create new ones). In that book, the author posited that in successful startups, frontline managers occasionally do &#8220;real work&#8221; like writing code or chipping in on software development, as opposed to traditional companies, where managers just manage people and tasks. I think this is a very misguided, and often destructive, line of thinking. Engineering Managers should not be software engineers who also do people management. I&#8217;ve seen too many companies, and interviewed too many candidates, where an engineering manager&#8217;s role is 50% hands-on-keyboard writing code and 50% people management (read: performance reviews). This diminishes the value of <em>both</em> roles. Software engineers should not be valued merely for their ability to write code. Instead, they should be valued for their ability to solve problems. Yes, writing code is part of how problems are solved, but so is talking to customers, understanding the intended value or impact of a given feature, and being able to run and maintain that software in Production after it&#8217;s been deployed. 
Similarly, an engineering leader should not be valued for their ability to manage people. Yes, managing people is part of leadership, but so is consistently delivering new capabilities in line with estimates, developing (or hiring) new skills on a team to unlock new ways for technology to solve business problems, and ensuring the individuals who report to you have an opportunity for growth and development. Neither software engineering nor engineering leadership is a part-time job, so it doesn&#8217;t make sense to ask a person to do both. This isn&#8217;t to say that engineering leaders shouldn&#8217;t be technical. A good engineering leader understands and can empathize with the challenges their teams face, and empathy is built through experience. Review a pull request or technical design document. Try to deploy code to Production, or pair with someone on your team as they do it. But your real work as an engineering leader is not realized through the occasional code that you write. To the contrary, you should be impactful far more often than that! The thing I find most difficult for companies, and sometimes new leaders, to come to terms with is how the timescale of impact differs for engineers relative to engineering leaders. While an engineer&#8217;s output is seen in Production in hours or days (you <em>are</em> deploying to Production this often, right?), an engineering manager, director, or VP may not see the outcome of their work for weeks, months, or even years. Their decisions tend to be less easily reversible, as well. It&#8217;s much easier to back out a code change than it is to back out a re-org. Coaching and mentorship take time. Establishing new processes or fixing broken ones takes time. Changing code is hard; changing people is even harder. Most often in startups, we over-value immediacy and under-value the future. 
This isn&#8217;t to say no one is thinking ahead, but startups tend to live on the knife&#8217;s edge of existence, such that being able to <em>see</em> outcomes now makes it easier to justify the value. I won&#8217;t go into organizational design and team structure in this article, but suffice it to say that if you&#8217;re expecting your engineering leaders to also be software engineers, you are prioritizing short-term benefit over long-term viability.  </p><h2>An Engineering Leader&#8217;s Focus</h2><p>Any engineering leader&#8217;s primary focus should be delivery. And I don&#8217;t mean &#8220;we shipped it&#8221;. That&#8217;s the easy part. Instead, great engineering leaders focus on the quality of delivery. Was it on time? Did it launch without much fanfare? (Boring launches are just as strategic a decision as <a href="https://mcfunley.com/choose-boring-technology">boring technology</a>.) Are you able to measure the impact your team has on broader business goals? These are the questions to ask when measuring how well your teams are delivering and whether you&#8217;re helping to drive continuous improvement in this area. How do you impact delivery excellence? I believe engineering leaders have the greatest influence over three things: team, people, and process. While org design is another post for another time, team design is a place engineering leaders at any level can apply their vantage point to great effect. If the company is making moves in AI and you have no skills in machine learning or artificial intelligence on your team, you need to decide whether to build that expertise or hire it. Time and money are the two limiting factors here, and you need at least one of them to fill the skills gap effectively. 
Even if you have the expertise, understanding the product roadmap and ensuring the team is prepared to adopt and maintain any new technologies that might be required to meet Product&#8217;s needs is another way leaders can leverage their team&#8217;s collective expertise to help drive value. Teams, though, are nothing more than a construct for the people who comprise them. As an engineering leader, it is your responsibility to make sure that the individuals on your team are motivated, challenged, and rewarded for their work. It&#8217;s your responsibility to tell them when they&#8217;ve done well just as much as it&#8217;s your responsibility to coach them in the areas where they need to grow. Establishing a career framework (and establishing it early) is a great tool for developing the people on your team. Lastly, <em>how</em> the team executes and <em>how</em> the individuals on your team feel about the way work gets done is shaped by process. For a variety of reasons, &#8220;process&#8221; is a dirty word in software engineering, but I believe it is one of the most effective ways to drive impact as an engineering leader. Doing the work is hard (and it should be, if it&#8217;s worth doing). The work required to do the work shouldn&#8217;t also be hard. What is this meta-work I speak of? How decisions are made, how work is estimated, how tasks are broken up, how code moves through the deployment pipeline or release train, and how Engineering engages with other organizations across the company all describe <em>how</em> work gets done. This is where getting closer to the work (while being careful not to devolve into hands-on-keyboard responsibilities) is most beneficial. As an engineering leader, it is your responsibility to make sure your teams have the tools they need to do their best work. Could you build a house with only a hammer? Sure. But it wouldn&#8217;t be very fun to do. 
And it would take a lot longer than if you had the right tools, the right blueprints, and the right processes. If there is friction in your team&#8217;s daily work, it is your responsibility to remove it. In doing so, you empower your teams to drive the kind of impact that makes high-functioning software engineering organizations a rewarding place for the people on your teams to build and grow their careers.</p><p>All of this is real work. It&#8217;s just different work. For this reason, I think one of the most detrimental things an Engineering organization can do is build career paths that make management something you get promoted into, as opposed to a different role you transition into. How engineering leaders drive impact differs from how individual contributors do. It&#8217;s not less, or more, but different. If you&#8217;re an engineering leader trying to figure out how you can have the most impact on your team(s), think about working <em>on</em> the system and not <em>in</em> the system. In so doing, you will help your team achieve outcomes they couldn&#8217;t have reached with just a manager focused on performance reviews, or a manager who moonlights as a software engineer. 
It&#8217;s this unique combination of perspective and focus that makes engineering leaders at all levels a critical part of the organization.</p>]]></content:encoded></item><item><title><![CDATA[Musings on Systems, Leadership, and Organizations]]></title><description><![CDATA[Oh, hello.]]></description><link>https://www.jordanstone.tech/p/musings-on-systems-leadership-and</link><guid isPermaLink="false">https://www.jordanstone.tech/p/musings-on-systems-leadership-and</guid><dc:creator><![CDATA[Jordan Stone]]></dc:creator><pubDate>Sat, 01 Feb 2025 22:39:13 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!U_Lu!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d5e0658-52c1-4698-b887-3e77ceeb7e9f_800x800.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Oh, hello. I didn&#8217;t see you there. My name is Jordan Stone and I&#8217;m a Scorpio. Maybe you are, too (a Scorpio, at least. If you&#8217;re also Jordan Stone, please reach out). Either way, you&#8217;re here, by accident or on purpose, because you lead Software Engineering teams or an entire Engineering organization, and want to do that better. You want to do it better <em>on purpose. </em>That&#8217;s what I&#8217;ve been doing for the last 10 years or so. I haven&#8217;t always succeeded. In fact, I&#8217;ve failed a few times and messed up a lot more than that. But I&#8217;ve also learned a few things along the way, tailoring my philosophy on leadership and the practical steps needed to build strong Engineering organizations. In software engineering we rely on best practices and patterns to build software. Those patterns and practices ebb and flow over time as we identify better ways to build and scale software systems, but they provide us with a template we can use. We identify a problem, find a design pattern best suited to solve that problem, and implement it. 
Building teams, or entire organizations, often lacks the same kind of established best practices and patterns we&#8217;ve come to rely on when building software. There are patterns, to be sure, but you can&#8217;t blindly apply them without considering the context, culture, and stage of your particular organization. This newsletter (this stack? Is &#8220;stack&#8221; the noun here?) is an attempt to bridge that gap. </p><p>This is a topic I&#8217;ve been wanting to write about, and have been encouraged to write about, for a long time. I&#8217;ve struggled to get started. I&#8217;ve struggled because, as with most topics on the Internet, there is so much content available that I didn&#8217;t want to just add to an already crowded conversation. There are plenty of resources available about improving the way you lead a team, about transitioning from an IC role to a manager role, and about broad industry trends and practices. However, I&#8217;ve found fewer resources on how to implement a re-org, considerations when rolling out a career framework, the challenges in introducing new team practices, or using data to help improve an organization&#8217;s effectiveness. When we make a change in our code, the software we&#8217;re writing happily complies (though maybe in ways we didn&#8217;t intend). Tools, processes, culture, and entire organizations don&#8217;t respond to change in the same way. Every team is unique, which makes pattern matching the way we do with software design more difficult. And so, &#8220;blog name&#8221; will focus primarily on aspects of leadership such as moving from one team to multiple teams, scaling organizational and operational excellence <em>alongside</em> scaling software, pushing (and sometimes pulling) your teams along an organizational maturity curve, and working to meet both business and technology objectives. Many of these topics apply to more than just Engineering orgs. 
While some topics may relate specifically to leading technology organizations, many are what I have found to be truisms in any industry and in any team. And that&#8217;s my goal: to contribute back to the collective consciousness on ways to lead effectively, to build sustainable organizations, and to help you make your organization a place people can build and grow their careers.</p><p>Ok, so interesting set of topics, but why me? Why not just Google (or ask ChatGPT) &#8220;how to do squads&#8221; and lead your organization the same way Spotify does? Leading organizations is about doing it for the org you are <em>today</em>, not the one you hope to be in 3-5 years or, worse yet, the one you read about online. And, more importantly, it&#8217;s about doing it in a way that best serves your unique culture and the people who comprise it. I&#8217;ve had the opportunity to lead at scale-up and growth organizations for much of my career. I&#8217;ve done the hyper-growth thing a few times. I understand how culture changes when you go from being the only engineer to a team of five, ten, twenty-five, and beyond. I&#8217;ve not only managed managers, but introduced the concept of engineering management to companies several times. And I&#8217;ve done it while still being involved in the technical details and occasionally being hands-on-keyboard myself. If you&#8217;re leading an organization in a growth phase and trying to figure out how to maintain your culture, establish best practices, or push your org to success, this newsletter is for you. If you&#8217;re at a larger, established organization and are wanting to create more leverage, grow the next round of leaders, or become a more data-driven organization, this newsletter is also for you. Fortunately, I don&#8217;t know everything. I&#8217;ve messed up a lot. And from my failings, I hope to also share learnings to help others avoid the same mistakes. 
I want to contribute to the collective conversation about engineering leadership, organizational design, decision making, team building, culture, and operational excellence. And it should be just that &#8212; a conversation. If you have questions, or if a future newsletter speaks to you or a challenge you&#8217;re currently facing, please don&#8217;t hesitate to reach out. Our teams, our organizations, and our industry at large get better when we work together to lift all ships. I look forward to being one more rising tide that lifts others, helps to cultivate stronger leaders, fosters healthier cultures, and drives meaningful progress in the way we build and scale engineering organizations.</p>]]></content:encoded></item><item><title><![CDATA[Coming soon]]></title><description><![CDATA[This is Jordan Stone, a newsletter of musings on technology and leadership.]]></description><link>https://www.jordanstone.tech/p/coming-soon</link><guid isPermaLink="false">https://www.jordanstone.tech/p/coming-soon</guid><dc:creator><![CDATA[Jordan Stone]]></dc:creator><pubDate>Wed, 29 Dec 2021 13:57:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!U_Lu!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d5e0658-52c1-4698-b887-3e77ceeb7e9f_800x800.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>This is Jordan Stone</strong>, a newsletter of musings on technology and leadership.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.jordanstone.tech/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.jordanstone.tech/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item></channel></rss>