Estimates are waste. Not only are they not necessary, but they introduce dysfunction into the team. We should really just stop doing them.
This thinking comes from the #NoEstimates movement (of which I’m a card-carrying member), which came onto the scene a couple of years ago when @WoodyZuill created the #NoEstimates hashtag on Twitter. #NoEstimates is not a thing: a process or practice or methodology or framework. It’s a discussion. Consequently, there are as many variants as there are adherents. I’m going to talk about my approach in this article, but mine is not the only approach. The point of the exercise is the discussion.
I’m making this point up front because a #NoEstimates approach has real value, and I don’t want it to go the way of Agile(tm), which has transmogrified from a valuable development culture and philosophy that encourages certain ways of doing things into a set of rigid by-the-book practices that have no bearing whatever on real agility. You can have stand-up meetings until your knees collapse, but that won’t move you one iota towards real agility.
In fact, the term “Agile” has become so separated from the original intent that I’m reluctant to use it when I go into a company to train or consult. I teach agility, not “Agile.” The word has to apply to the process, too. I’ve found that starting with #NoEstimates and the underlying Lean principles is by far the most effective way to get across what it means to work in an agile way, so that’s another advantage of this particular discussion. Introducing agility by eliminating estimation also carries a whole host of benefits, like moving the culture forward (changing the way people think) along with introducing the practices.
So, neither Agile nor #NoEstimates is about prescribed practices. Each is a culture and philosophy from which effective practices emerge. #NoEstimates, then, is an agile-development technique that moves you in a more Lean direction by eliminating a major source of waste.
So, why do we estimate?
Estimates are always inaccurate, usually wildly so. There’s only one situation, in fact, where an estimate can be viable: when you’re using a team to build something that that same team has built in the past using identical technology. If you’re building yet another web site for the same sort of company based on the same WordPress template, go ahead and estimate. Everything else is guesswork.
We’re hampered, here, by two problems.
First, we don’t really know how to estimate. The much-maligned CHAOS report (which says that something like 80% of projects are late) is proof of that. Those projects are not “late.” They just didn’t meet some estimate-based deadline. All that tells me is that we don’t know how to estimate! People who believe in the estimation myth always address this issue by saying that we just have to get better at estimating. Well, we’ve been trying to do that for well over 60 years, and we haven’t improved our ability one iota.
This is not like estimating house construction, where you can come up with an accurate list of materials based on a detailed plan that won’t change, and which also implies a precise list of well-understood tasks. There’s not a software project on the planet that’s that constrained. Building software is nothing like building houses.
For one thing, in software, you don’t know what you’re building or how you’ll build it when you start the project. You might think you do, but you’re usually wrong. You learn as you work. The problem is not just a technical one: your users learn, too, but only after they have working software in their hands. That concrete software lets them see things that they didn’t see when they were thinking in the abstract. No amount of diligence in an up-front requirements-gathering process will have the impact of early delivery. Our goal is to discover significant flaws in our thinking as early as possible (rather than a year from now, as would be the case in a waterfall project). Given that constant discovery, however, you don’t have much information to base an estimate on.
Realistically, then, there’s no value in wasting time estimating something that you know is going to change, and making long-term plans based on those estimates is foolhardy. Building the wrong thing on time and on budget doesn’t buy you much.
Moreover, planning around something that you know to be inaccurate can be actively destructive. The inevitable “push” to get done “on time” burns people out, drives them to leave the company, and creates considerable dysfunction. You also have to set up an enormous (and enormously expensive) infrastructure to make sure you’re “on schedule.” Most middle managers do nothing but that, in fact. That infrastructure sucks up money that would be better spent doing actual work.
We all know all this. So why do we estimate, anyway?
I personally think that it’s just habit—ritual. Nobody questions the need for an estimate because “we’ve always done it that way.” We do it even when we can’t come up with an actual reason.
For example, an answer I often get to the “why do you estimate when you know it’s going to be wrong” question is “It helps us think about the problem.” Leaving aside the obvious response that nothing’s stopping you from thinking about the problem without using estimating as an excuse, I’d argue that the focus of much of that thinking is so narrow as to be ineffective. You estimate by breaking down a problem into small implementation tasks and assessing each one, but the interesting question is whether you need to do those tasks at all, not how long they’ll take. If you focus on implementation too early, you tend to subconsciously reject better implementations that come up later—you don’t want to “throw away all that work.”
Estimation thinking tends to be pervasive, though. Consider how dysfunctional estimation has infected the Scrum sprint. The basic notion of the sprint timebox is that it gives a rhythm to your work. It forces you to pause every couple weeks to assess what you’re doing, for example. The end of the sprint is not, and never was, meant to be an arbitrary deadline by which you guarantee you’ll finish the work you’d estimated two weeks ago. Nonetheless, people impose estimation onto this system.
Many “Scrum” (deliberately in quotes) shops use points for up-front estimation. They then commit to doing some arbitrary number of points of work by the end of the sprint. They worry about ridiculous (to me) things like “earning” points and improving “velocity,” which they define as points per sprint. (But velocity is something you measure, not something you can control.)
This notion of points appears in neither the Scrum Guide nor the Agile Manifesto, so it has nothing to do with Scrum or agility (which are different, but related, topics). Ron Jeffries, who’s credited with the invention of story points, said:
“Story Points were invented to obfuscate duration so that certain managers would not pressure the team over estimates.”
It seems bizarre to use them as a form of effort estimate (person-hours, for example). More to the point (so to speak), committing to complete a certain number of points in a certain amount of time is no different from the usual waterfall approach; it’s just a shorter time frame. I believe that we’re doing it because we came to Scrum from waterfall and can’t imagine giving up such a basic part of the old process.
The fact is that if you “commit” to a scope of work that can’t change, you’re not agile in any real sense of the word. The word “commit” does not appear in the Scrum Guide, which uses the word “forecast.”
The Scrum Guide also says that, as soon as you realize that you can’t complete the work you set out to do within the sprint (almost always the case), you sit down with the PO and change the scope of the work to something you can complete. Scope changes constantly. You should deliver something of value at the end of the sprint, but there’s no guarantee that you’ll deliver anything that you set out to deliver. You do have to satisfy the “Sprint Goal,” but the Guide defines that as “any…coherence that causes the Development Team to work together rather than on separate initiatives,” not as an arbitrary set of stories that you decided on two weeks ago.
So, rather than estimating a fixed set of stories and working towards an arbitrary deadline, it’s best to just always work on the most important thing next. Sprint planning is easy: your backlog should be prioritized by value to the user, so just pick the top couple of stories and work on those. It’s not a big deal if you get it wrong; you just put the work you can’t do back into the backlog. If the story you’re working on turns out to be too much, narrow it, and create a new story from whatever you left out. Point values are irrelevant.
Among other things, this sort of thinking encourages stretch goals—stories that you know you can’t complete within the sprint—because you also know that the inevitable scope reduction is valuable. Reducing scope often adds focus. This thinking also encourages you not to waste time on small low-value stories that you’d otherwise pick from the backlog only to fill out the timebox.
But, what about business-level planning? Don’t you need an estimate to decide whether or not to do a project?
Putting on my strategic-CTO hat for a moment, I’d say that you’re asking the wrong question. If your main worry about whether or not to build some piece of software is cost, I’d question whether you should be building it at all. The real question is: can the business survive (much less thrive) if we don’t build the software? Not doing the project is not an option.
Given that immediacy, the question becomes: how can we build it with the resources at hand? Can we do this essential work before we run out of funds (or how much funding do we need to build it)? If we can’t finish, we need to decide to build something else. If we have plenty of funds but are under time pressure, we need to add people. (Note, by the way, that Brooks’s Law, “adding manpower to a late project makes it later,” doesn’t apply here. The project is not late.)
It turns out that you can answer all those questions, and make all those decisions, without estimates. However, you need to actually do some work before making the decision. That is, the core of the #NoEstimates approach is to make decisions based on projections that come from actual measurements, and to get measurements, you have to do some work. #NoEstimates won’t tell you whether to start the project at all, or what sort of funding you’ll need, before you’ve touched finger to keyboard. What it will do is let you make those decisions very early (within 4-6 weeks of project inception), typically in less time than would be required to come up with an estimate. The worst case is that you’ll end up killing a project that’s six weeks old, but the cost of that is far lower than the cost of finding out a year from now that you won’t finish. You’ll ask for funds six weeks into the project (think of it as a proof-of-concept phase followed by a development phase) rather than entirely up front. You can’t eliminate all risks (as adherents to an estimate-driven approach often seem to think you can), but you can minimize them.
You also have to base your planning on reality. There’s what you’d like to have and what you can have. We all know from experience that we’re just being realistic when we say that we can’t guarantee that anything will be done by a specific delivery date. #NoEstimates-style projections will help you focus on the “can” part of that equation.
That delivery date is important, however. Time is money. In an agile shop, our burn rate is constant (teams are stable, overhead is predictable, everybody’s working at a steady, “sustainable” pace), so the budget and the delivery date become the same thing. If that date (budget) is fixed, then the only variable left is scope. Fixing scope as well is, of course, a classic death march, so you can’t do that.
Basic Lean principles also tell us that defects are a form of waste. Time spent delivering software with no known defects is immediately recovered in shortened development time. In an agile environment, where you’re constantly reworking existing code, bugs can slow you down to the point where you’ll fail outright. So, deliberately delivering buggy software to meet a deadline is also off the table.
The real questions, then, become: “What can we do in six months?” and “What’s critical and what’s nice to have?” Agile planning takes that a step further and asks “What’s the highest-priority thing, the thing that we can’t not have?”
You can answer those questions, and also provide the tools to dynamically reconfigure as the project progresses, without estimates. What you do need are projections, and you want to base those projections on actual measurement, not guesses. The main metric you need for that is average number of stories completed per unit time (one week, two weeks?). Note that I haven’t said “points per sprint.” I’m not doing any estimating, here. I’m just counting stories. Size doesn’t matter because we’re using averages. (I should say that the smaller the stories the more accurate our average will be, so it’s worth narrowing them to something that you can do in a couple days or so.)
Think of a pile of blocks, where each block is a story and the size of the block is proportional to the point value of the story. The height of the pile represents some amount of effort. Now, replace that pile with the same number of blocks, but each block is the average point value; they’re all the same size. The height of the pile is the same. You can translate effort into cost because the burn rate is constant. You know how many stories are in the pile, and you know how many stories you can complete in a week, so you know how much it will cost to do the work (and roughly when the work will be done).
Using those tools (stories/week, number of stories), you can easily project where you’ll be in the backlog at any point in the future, which gives you the power to do reality-based management rather than just guessing. If the feature set at a given date is inadequate, or if the time to deliver all the stories is too far out, you can adjust scope—the number of stories—by removing the less critical ones at the bottom of the pile. If you can’t remove any of them you need to reassess whether you want to continue at all, knowing that you can’t possibly be done “on time.”
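The projection arithmetic described above is simple enough to sketch in a few lines of Python. Everything in this sketch is a hypothetical example (the weekly story counts, the backlog size, and the burn rate are made-up numbers, and the function names are mine), but it shows how counting stories, rather than estimating points, is all a projection needs:

```python
# Sketch of a #NoEstimates-style projection: count stories, don't estimate points.
# All numbers below are hypothetical examples, not real project data.

def project(stories_remaining, weekly_story_counts, weekly_burn_rate):
    """Project weeks-to-done and cost from measured throughput.

    weekly_story_counts: actual stories completed in each past week (measured).
    weekly_burn_rate: team cost per week, constant for a stable team.
    """
    # Average stories completed per week, from real measurements
    avg_per_week = sum(weekly_story_counts) / len(weekly_story_counts)
    # Story size doesn't matter because we're using averages
    weeks_left = stories_remaining / avg_per_week
    cost = weeks_left * weekly_burn_rate
    return avg_per_week, weeks_left, cost

# Six weeks of measured story counts (hypothetical)
history = [3, 5, 4, 4, 6, 5]
avg, weeks, cost = project(stories_remaining=54,
                           weekly_story_counts=history,
                           weekly_burn_rate=20_000)
print(f"avg {avg:.1f} stories/week -> ~{weeks:.0f} weeks, ~${cost:,.0f}")
# prints: avg 4.5 stories/week -> ~12 weeks, ~$240,000
```

If twelve weeks (or $240,000) is too much, you drop low-priority stories from the bottom of the pile and rerun the projection; the numbers update every time you complete a story or add one to the backlog.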
Also, the averages tend to stabilize very quickly (within a few weeks). Consequently, you can make useful projections very early in the project, early enough to hire people if necessary. In an estimate-driven approach, you usually don’t know that you’re in trouble until it’s too late to do anything about it.
Finally, tools like Story Maps and kanban Cumulative Flow Diagrams are extremely helpful in organizing the stories and visualizing your projections. (Electronic tools are not—a physical drawing up on the wall will radiate information in a way that an electronic tool will never do.)
Note that the process I’m describing (measure, make projections, adjust scope) is a dynamic one. It’s not something that you do once before you start the project and then hold people to. Your projections change every time you complete a story. They change every time you add something to the backlog (which happens more often than you’d like). Planning happens constantly.
So that’s my take on #NoEstimates in a nutshell: Don’t estimate. Instead use projections based on real measurements to dynamically adjust your scope as you learn about the problem. If you’d like to learn more, check out Vasco Duarte’s No Estimates Book. There’s also a video of my #NoEstimates keynote at last year’s DevWeek on the video’s page. The video goes into a bit more detail on story counting and Cumulative-Flow Diagrams.