Velocity Is a Shadow; Throughput Is Reality

In the previous post, we argued that estimation is a smell—a signal that teams are compensating for unmanaged uncertainty. Velocity, as a direct descendant of estimation, inherits that smell (Fowler, 2018).

Scrum reminds us that the purpose of a Sprint is to produce an Increment of value, not to consume a predetermined number of points (Schwaber & Sutherland, 2020). Velocity measures effort expended inside the team. Customers, however, experience outcomes: features delivered, bugs fixed, and capabilities unlocked. No customer has ever benefited from a higher velocity.

Why Velocity Persists

Velocity survives because it is visible, numeric, and easy to chart. It gives leaders something to point at and teams something to optimize. Unfortunately, that optimization rarely aligns with value delivery.

Research on metrics and incentives consistently shows that when a measure becomes a target, it ceases to be a good measure (Goodhart, 1975). When velocity becomes important, teams respond predictably. Estimates inflate. Stories are sliced to satisfy point targets rather than user needs. Lower-risk, lower-value work gets pulled forward because it fits neatly into a sprint. The metric begins shaping behavior—and not in a way that improves outcomes.

Throughput Reframes the Conversation

Throughput shifts the question from “How busy were we?” to “What did we actually finish?”

Instead of abstract units of effort, throughput counts completed work items over time. It anchors delivery conversations in reality: features shipped, defects resolved, and capabilities made available to users. This reframing matters because it reconnects measurement to outcomes rather than internal activity. Flow-based metrics such as throughput, lead time, and cycle time have deep roots in Lean systems and queuing theory, where stability and predictability emerge from managing flow rather than utilization (Little, 1961; Anderson, 2010).
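Little's Law, cited above, makes the relationship between these flow metrics concrete: over a stable interval, average WIP equals average throughput times average cycle time. A minimal sketch, using made-up numbers rather than data from any real team:

```python
# Little's Law relates three long-run averages:
#   WIP = throughput * cycle_time   (L = lambda * W)
# Given any two, the third follows. Numbers below are illustrative only.

avg_wip = 12.0     # work items in progress, on average
throughput = 3.0   # items finished per week, on average

# Average time an item spends in progress, implied by the other two.
avg_cycle_time = avg_wip / throughput
print(f"Average cycle time: {avg_cycle_time:.1f} weeks")  # 4.0 weeks
```

The practical reading: with throughput held steady, every extra item in progress lengthens how long each item takes. Cutting WIP, not pushing people harder, is what shortens cycle time.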

At the organizational level, this distinction becomes even more important. Strategy is not executed in story points—it is realized through delivered features.

Applying Throughput at the Right Level

In many enterprise environments, meaningful commitments are made at the Epic level, often on a quarterly cadence. That makes Epics a far more useful unit of throughput than sprint-level stories.

Tracking completed Epics per quarter aligns measurement with how the organization actually plans, funds, and communicates progress. It also exposes reality quickly: when Epics linger quarter after quarter, the issue is no longer estimation accuracy—it is flow. This aligns with empirical forecasting approaches that use historical completion rates to reason about future delivery ranges rather than deterministic commitments (Vacanti, 2015).
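The forecasting style Vacanti describes can be sketched in a few lines: resample historical quarterly throughput many times and report a range instead of a single commitment. The history below is hypothetical, and the percentile choices (15th/50th/85th) are one common convention, not a standard:

```python
import random

# Illustrative history: Epics completed in each of the last six quarters.
quarterly_throughput = [3, 5, 2, 4, 4, 3]

def forecast_epics(history, quarters=1, trials=10_000, seed=42):
    """Monte Carlo forecast: resample past quarterly throughput to
    estimate how many Epics might finish over the next N quarters."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.choice(history) for _ in range(quarters))
        for _ in range(trials)
    )
    # Percentiles give a delivery range, not a deterministic commitment.
    return {p: totals[int(trials * p / 100)] for p in (15, 50, 85)}

print(forecast_epics(quarterly_throughput, quarters=2))
```

The point is not the simulation itself but its inputs: it needs only a count of finished Epics per quarter—no estimates, no points, no re-planning ceremonies.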

This does not eliminate the need for stories or tasks. It simply recognizes that stories are implementation details, while Epics represent customer-visible outcomes.

Making Throughput Visible (Without Turning It Into Another Ritual)

A throughput-first approach does not require elaborate tooling, but it does require discipline.

Work should be clearly linked so that delivery rolls up meaningfully from stories to Epics. Flow metrics such as lead time and cycle time should describe how long work actually takes, not how long it was supposed to take. Work-in-progress must be constrained so that finishing becomes more important than starting—an idea consistently supported by Lean and Kanban research (Anderson, 2010; Reinertsen, 2009).
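Both metrics fall out of timestamps most trackers already record. A minimal sketch, with hypothetical records of the form (created, started, finished)—lead time runs from request to delivery, cycle time from start of work to delivery:

```python
from datetime import date

# Illustrative work-item records; dates and schema are made up.
items = [
    (date(2024, 1, 2), date(2024, 1, 5), date(2024, 1, 12)),
    (date(2024, 1, 3), date(2024, 1, 9), date(2024, 1, 20)),
    (date(2024, 1, 8), date(2024, 1, 10), date(2024, 1, 15)),
]

# Lead time: customer's clock (created -> finished).
lead_times = [(done - created).days for created, _, done in items]
# Cycle time: team's clock (started -> finished).
cycle_times = [(done - started).days for _, started, done in items]

print("avg lead time :", round(sum(lead_times) / len(lead_times), 1), "days")
print("avg cycle time:", round(sum(cycle_times) / len(cycle_times), 1), "days")
```

Note that both are measured, not estimated: no one is asked how long the work was supposed to take.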

Tools like Jira can support this view—but they should remain servants, not masters. The goal is not better dashboards. The goal is faster feedback and fewer surprises.

The Bottom Line

Velocity is an abstraction layered on top of estimation. Throughput is a direct expression of delivered value.

When teams optimize for velocity, they optimize for internal efficiency theater. When they optimize for throughput, they are forced to confront reality: how work flows, where it stalls, and what actually reaches users.

Reality > rituals holds here as well. Stop asking how many points were burned. Start asking which features landed—and how reliably they do so.

Agile Engineering: Rhetoric vs Reality series

References

Anderson, D. J. (2010). Kanban: Successful evolutionary change for your technology business. Blue Hole Press.

Fowler, M. (2018). Metrics are not goals. martinfowler.com.

Goodhart, C. A. E. (1975). Problems of monetary management: The U.K. experience. Papers in Monetary Economics, 1, 216–231.

Little, J. D. C. (1961). A proof for the queuing formula: L = λW. Operations Research, 9(3), 383–387.

Reinertsen, D. G. (2009). The principles of product development flow. Celeritas Publishing.

Schwaber, K., & Sutherland, J. (2020). The Scrum Guide. scrumguides.org.

Vacanti, D. (2015). Actionable Agile metrics for predictability. ActionableAgile Press.

