Estimation is a Smell
Part two in the Agile Engineering: Rhetoric vs Reality series.
Agile teams frequently assert that they value outcomes over outputs, learning over prediction, and adaptability over rigid plans. Yet in practice, many teams devote a disproportionate amount of time and emotional energy to debating whether a piece of work is a 3 or an 8. This tension reveals a persistent gap between Agile rhetoric and Agile reality.
This post advances a simple claim: when estimation becomes a focal point of contention, it is a smell. Not because estimation itself is inherently flawed, but because prolonged debate over story points often signals avoidance of deeper issues—namely uncertainty, risk, and value delivery.
Estimation and the Illusion of Precision
Story points were originally introduced as a lightweight, relative mechanism to support short-term planning under uncertainty (Schwaber & Sutherland, 2020). They were never intended to function as precise measurements of effort or productivity. However, once estimates are tracked, compared, and reported, they acquire an aura of objectivity and control.
Research on complex systems suggests that this desire for precision is misplaced. In environments characterized by high variability and interdependence, attempts to impose exact forecasts tend to increase error rather than reduce it (Taleb, 2007). The more confidently a team defends a specific estimate, the more likely it is to be mistaking guesswork for knowledge.
This dynamic explains why estimation discussions so often become contentious. The argument is rarely about the number itself. Instead, it reflects unresolved ambiguity about scope, hidden dependencies, or incomplete understanding of the problem space.
What Teams Are Actually Arguing About
Empirically, estimation disagreements correlate with work that is poorly defined or novel. Teams sense risk but lack a shared language to articulate it. As a result, uncertainty is displaced onto numerical proxies.
Rather than explicitly stating, “We have insufficient information,” teams negotiate point values that implicitly encode fear of commitment or concern about downstream accountability. This phenomenon runs counter to the Agile Manifesto’s emphasis on transparency and inspection (Beck et al., 2001).
From an engineering management perspective, this behavior should be treated as diagnostic data. When estimates require prolonged negotiation, the problem is not calibration—it is understanding.
Velocity as a Misapplied Metric
Velocity compounds the problem when it is elevated from an internal planning heuristic to an external performance signal. Empirical studies of high-performing technology organizations consistently show that delivery speed and stability are better predicted by flow metrics, such as cycle time and throughput, than by capacity-based metrics like velocity (Forsgren et al., 2018).
Little’s Law further reinforces this insight by demonstrating a stable mathematical relationship between work in progress, throughput, and cycle time (Little, 1961). Notably absent from this formulation is any notion of story points. Velocity offers no explanatory power once work leaves the boundary of a single team, yet it is frequently used for precisely that purpose.
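The relationship can be sketched in a few lines of code. The numbers below are invented for illustration, not drawn from any real team; the point is simply that cycle time falls directly out of work in progress and throughput, with story points nowhere in the formula:

```python
# Little's Law: L = lam * W, where
#   L   = average work in progress (items)
#   lam = average throughput (items per unit time)
#   W   = average cycle time (same time unit)

def cycle_time(wip: float, throughput: float) -> float:
    """Average cycle time W = L / lam, in days if throughput is items/day."""
    return wip / throughput

# Illustrative figures (assumptions): 12 items in progress, 3 finished per day
print(cycle_time(wip=12, throughput=3))  # -> 4.0 days on average

# Halving WIP halves average cycle time, throughput unchanged
print(cycle_time(wip=6, throughput=3))   # -> 2.0 days
```

Note what the calculation never asks for: an estimate. Limiting work in progress shortens cycle time regardless of how any item was pointed.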
Three Practical Shifts Toward Healthier Practice
If contentious estimation is a smell, the appropriate response is not to replace story points with a different estimation technique, but to change how teams engage with uncertainty. The following practices offer a lightweight, procedural starting point.
1. Constrain Estimation to Enable Learning
Treat estimation as a brief sense-making activity, not a negotiation. Time-box the discussion and explicitly surface assumptions. If consensus cannot be reached quickly, that outcome should trigger a learning activity (such as a spike), rather than further debate. This reframes uncertainty as something to investigate rather than suppress.
2. Manage Flow Instead of Optimizing Velocity
Shift planning and review conversations toward observable delivery behavior. Track cycle time, work-in-progress limits, and throughput trends to understand system performance over time. These measures align more closely with empirical evidence on software delivery effectiveness and discourage gaming behaviors associated with velocity targets (Forsgren et al., 2018).
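As a minimal sketch of what tracking these measures involves, the snippet below derives per-item cycle time and throughput from start and finish dates. The ticket log is hypothetical data invented for illustration; a real team would pull these timestamps from its issue tracker:

```python
from datetime import date

# Hypothetical ticket log: (started, finished) pairs. Invented for illustration.
tickets = [
    (date(2024, 3, 1), date(2024, 3, 4)),
    (date(2024, 3, 2), date(2024, 3, 9)),
    (date(2024, 3, 3), date(2024, 3, 5)),
    (date(2024, 3, 4), date(2024, 3, 12)),
]

# Cycle time per ticket, in calendar days from start to finish
cycle_times = [(done - start).days for start, done in tickets]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Throughput: items finished per calendar day over the observed window
window_days = (max(d for _, d in tickets) - min(s for s, _ in tickets)).days
throughput = len(tickets) / window_days

print(f"avg cycle time: {avg_cycle_time:.1f} days")  # -> 5.0 days
print(f"throughput: {throughput:.2f} items/day")     # -> 0.36 items/day
```

Both numbers describe observable delivery behavior, which makes them harder to game than a velocity target: a team cannot inflate cycle time the way it can inflate point estimates.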
3. Review Outcomes, Not Estimates
In retrospectives and stakeholder reviews, focus on what was delivered, what was learned, and what remains uncertain. Estimates should not be defended after the fact. Their sole purpose is to support forward-looking decisions, not to evaluate past performance. This distinction reinforces psychological safety and supports continuous improvement.
Conclusion
Agile frameworks emphasize adaptation, learning, and responsiveness. Yet when teams cling to estimation debates as a source of certainty, they undermine those very principles. The persistent defense of story points is rarely a sign of rigor. More often, it reflects discomfort with ambiguity.
Organizations seeking to close the gap between Agile rhetoric and reality would do well to treat estimation arguments as signals. When teams feel safer debating numbers than confronting uncertainty, the system—not the people—needs attention.
Agile Engineering: Rhetoric vs Reality series
- Part 1: Agile isn't broken. We just stopped practicing Engineering
- Part 2: Estimation is a Smell
- Part 3: [Upcoming]
- Part 4: [Upcoming]
References
Beck, K., et al. (2001). *Manifesto for Agile Software Development*. [https://agilemanifesto.org/](https://agilemanifesto.org/)
Forsgren, N., Humble, J., & Kim, G. (2018). *Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations*. IT Revolution Press.
Little, J. D. C. (1961). A proof for the queuing formula: L = λW. *Operations Research, 9*(3), 383–387.
Schwaber, K., & Sutherland, J. (2020). *The Scrum Guide*. [https://scrumguides.org/](https://scrumguides.org/)
Taleb, N. N. (2007). *The Black Swan: The Impact of the Highly Improbable*. Random House.
