Estimating when a new software feature will be done is a tricky but important task. “When will you be done?” Typically the person on the other end of this question is the one writing your check. Maybe not directly, but follow the request up the chain and you will certainly end up eye to eye with the check writer. Another business fundamental is the predictability of the development pipeline: the voice of the customer goes in one end, and software features come out the other. The ability to estimate how long the pipeline will be tied up on a feature is important for scheduling resources and planning. Thankfully, applying some fundamentals will yield faster, more accurate estimates.
Fundamentals of Estimation
First let’s look at a few examples from everyday life. Here is the first estimation problem:
An object traveling exactly 100 miles per hour is traveling a distance of exactly 100 miles.
Estimate how long it will take for the object to travel the distance. A bit of simple math (remember high school physics?) gives us

distance = velocity × time

Solving for time, we get

time = distance / velocity
In this case we plug in 100 miles for the distance and 100 miles per hour for the velocity, and we calculate 1.0 hour as the time for the object to travel the distance. What certainty do we have in our estimate? In this problem, both the distance and the velocity were known exactly, so the accuracy of our estimate is 100%. The error is zero.
This is really not an estimate but a calculation; estimates generally involve some approximation. However, from this simple problem we can already see some fundamentals of estimation that we will use. The relationship between time, velocity, and distance will be useful.
In the case of software development, ‘distance’ is usually a measure of the total size (or effort) of the features to be estimated. A common measure of size is ‘story points’; Mike Cohn has a great blog post on the topic. Software development teams can also measure their ‘velocity’ in terms of story points per unit of time. For example, in scrum the velocity might be measured as 40 story points per sprint. Sprints are always time-boxed, so if the sprint length is 2 weeks, this team has a velocity of 20 story points per week. Before discussing estimation in software development further, let’s discuss uncertainty.
Uncertainty Affects Accuracy
Let’s add a bit of uncertainty to our first example.
You walk from your current position to the nearest door.
Estimate how long it will take you to accomplish this task. Unless you have a satellite phone and are in the desert, your estimate is probably in units of minutes or seconds. What is the uncertainty in your estimate? Most likely it is in units of seconds.
Let’s try another estimate.
You walk from Seattle, Washington to Washington D.C.
Estimate how long this journey will take. Go ahead and really do this. I’ll wait.
What thought process did you go through? It was probably a bit different from the previous estimate. In this case, you probably needed to find out the distance (unless you are a geography wizard). Most people don’t possess the domain knowledge required to make this estimate without a bit of homework. No problem; a quick Google search says it is about 2,700 miles. Then you probably estimated your velocity. An average person walks at around 3 miles per hour, so that is about 900 hours of walking. Let’s say we walk 8 hours a day. That puts us in Washington D.C. in about 113 days. What is the uncertainty in your estimate? Is it in units of hours? Days? Because of the size of this task, the initial uncertainty is probably in days (and maybe even weeks).
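The arithmetic above can be sketched in a few lines, using the rough figures from the text:

```python
# Back-of-the-envelope walking estimate, Seattle to Washington D.C.
distance_miles = 2700      # approximate road distance
walking_speed_mph = 3      # average walking pace
hours_per_day = 8          # hours walked per day

total_hours = distance_miles / walking_speed_mph   # 900.0 hours of walking
total_days = total_hours / hours_per_day           # 112.5 -> about 113 days
print(total_hours, total_days)
```

Every input here is an approximation, which is exactly why the resulting 113 days carries days or weeks of uncertainty.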
The estimate to the nearest door was so simple that you were probably able to provide it without the formal process of approximating a distance and a velocity. In the second example, the size (or effort) of the problem (in terms of distance) was much larger, and there was a lot more uncertainty in our distance and velocity approximations. The uncertainty in our estimate was correspondingly much larger.
Now let’s do one more estimation task. Imagine you are 100 days into your journey to Washington. Traveling for 100 days has certainly provided a lot of experience. You probably have a much more accurate approximation of your velocity. You have also gained some confidence and understanding of your abilities. What has happened to your estimates? When will you arrive? What is the uncertainty?
What happens to your estimation and accuracy the closer you get? When the city limit sign for Washington D.C. becomes visible, you could certainly provide an estimate accurate to within minutes. There is a relationship between the accuracy and the amount of time remaining in the trip. Within steps of the finish line your uncertainty is measured in seconds, while at the start of the trip it was probably measured in weeks.
From our thought experiments we can deduce the following ‘principles’:
1. Estimates require domain knowledge.
2. Tasks with a large size (or effort) are harder to estimate than smaller tasks.
3. Uncertainty decreases as we gain confidence and understand our abilities.
4. Uncertainty continually decreases as we get closer to the finish.
So let’s apply this knowledge to estimation in our software projects.
Size The Work
First let’s spend some time discussing how we determine the size of the features we must estimate. We must learn from the past and not fall into the trap of the waterfall development process. Most development teams are moving toward a more agile or lean development methodology. In sizing our features, we have to be careful not to fully analyze, break down into tasks, and plan the development for all the features up front.
On the surface this can appear desirable. The problem is that a lot of energy (resources and time) goes into the analysis, breakdown, and planning, but the tasks we plan are subject to more uncertainty the farther in the future they will occur (principle #4). This uncertainty generally manifests itself in the form of change: changes to requirements, design, development, testing, etc. That can result in rework, which in terms of lean development represents waste. One of the best discussions of ‘what went wrong’ with waterfall and why agile methodologies are better is in a video by Glenn Vanderburg at JRubyConf 2010.
Sizing a task requires that people with the domain knowledge be involved (principle #1). Be certain to include cross-functional representatives (requirements, designers, developers, testers). Minimize the sizing team for efficiency, but try to include a representative from each domain on the team. If a story involves a specialized domain that is normally not represented, size the story with the current team, then query the domain expert outside the sizing meeting to determine whether they concur with the estimate. If the original estimate is not congruent with the expert’s, cycle the story back through the sizing team.
Typically sizing is done using relative units, not hours. Sizing the work in hours is more difficult and requires the higher-level story to be broken down into tasks. Answering which is bigger (a relative measurement), a Chihuahua or a German Shepherd, is easier than answering how tall a Chihuahua or a German Shepherd is in inches (an absolute measurement). A common relative sizing scale is the Fibonacci series (1, 2, 3, 5, 8, 13, 21, 34, etc.) or a derivative thereof (1, 2, 3, 5, 8, 13, 20, 40, 100). Other sizing teams use t-shirt sizes. The challenge with t-shirt sizes is that they are not numerical (though you could always map them to numbers), which makes computations (velocity, burn down, etc.) more difficult.
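If a team does use t-shirt sizes, one way around the computation problem is to map them onto numbers. The mapping below is purely illustrative, not a standard:

```python
# Hypothetical mapping from t-shirt sizes onto a Fibonacci-like point
# scale, so velocity and burn-down computations remain possible.
TSHIRT_POINTS = {"XS": 1, "S": 2, "M": 3, "L": 5, "XL": 8}

sized_stories = ["S", "M", "M", "L", "XL"]
total_points = sum(TSHIRT_POINTS[s] for s in sized_stories)
print(total_points)  # 2 + 3 + 3 + 5 + 8 = 21
```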
Features should be broken down into smaller units of work. If the size of a unit of work is too large (e.g. greater than 50), an epic for example, it introduces more uncertainty into the estimate (principle #2). To minimize the uncertainty, break these large units of work into user stories with sizes less than 20 (a rule of thumb). Once you are in that range, only break a story down further if there is a logical reason for doing so.
A popular scrum activity for sizing is called planning poker. Essentially, the team is issued cards (shown above). Individually, each team member estimates the story. The average of the team members’ sizes can be used as the final size. If there are outliers (everyone votes around 8 except one team member who votes 50), they should be discussed. Over time this will generally result in an equal amount of over- and under-estimates.
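A minimal sketch of that averaging step, with a hypothetical rule for flagging outliers (a vote several times larger or smaller than the median; the ratio of 3 is just an assumption):

```python
from statistics import mean, median

def size_story(votes, ratio=3):
    """Average planning-poker votes; flag outliers for discussion.

    A vote counts as an outlier (an illustrative rule, not a standard)
    if it is more than `ratio` times the median, or less than the
    median divided by `ratio`.
    """
    med = median(votes)
    outliers = [v for v in votes if v > med * ratio or v < med / ratio]
    return mean(votes), outliers

average, flagged = size_story([8, 8, 5, 8, 50])
print(average, flagged)  # 15.8 [50] -> discuss the 50 before accepting a size
```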
The goal of the sizing team is to move quickly through the stories (thus the timer in the above image). If a story is too large or not clear, do not spend too much time on it; make a note and handle it offline. When all the stories have been sized, total the numbers to determine the size of the release.
The key to sizing and our ability to measure velocity is to consistently size stories. In other words, a story sized today and the same story sized six months from now should result in the same numerical value. This consistency will result in less uncertainty in the final estimate.
Measure Your Velocity
Recall our trip to Washington at the 100-day mark. We had a much better estimate of our velocity, more confidence in our ability, and could better estimate our arrival time. The same is true for our development teams. Teams should be allowed to work through the stages of group development (forming – storming – norming – performing). This allows the team to gain confidence and an understanding of its abilities (principle #3).
If we want a good approximation of our team’s velocity, we also need metrics. If you don’t measure the velocity, you will not be able to provide its average value or understand its uncertainty. All of the agile methodologies already have a velocity concept baked in.
In scrum the team iterates on time-boxed intervals called sprints.
During the sprint planning the team selects the highest priority stories from the product backlog. They select enough stories to fill their sprint backlog based upon the current estimate for their velocity (story points per sprint). When the sprint concludes, the team’s velocity is recalculated based upon the sprint just completed.
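One simple way to recalculate the velocity is a rolling average over the last few completed sprints (the window size and the history below are illustrative assumptions, not a prescribed method):

```python
def current_velocity(completed_per_sprint, window=3):
    """Estimate velocity as the average story points completed
    over the last `window` sprints."""
    recent = completed_per_sprint[-window:]
    return sum(recent) / len(recent)

sprint_history = [34, 42, 38, 40]        # story points completed each sprint
print(current_velocity(sprint_history))  # (42 + 38 + 40) / 3 = 40.0
```

Using only recent sprints lets the estimate track the team as it moves through forming, storming, norming, and performing.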
In pull-based methodologies (e.g. kanban), a cumulative flow diagram (shown below) is generally used to calculate the lead time.
The lead time is the time it takes a story to propagate through the production pipeline. It is measured by recording when a story enters the pipeline and when it exits. As stories exit, the lead-time calculation is adjusted to provide a more accurate velocity for the team.
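A sketch of that bookkeeping, with made-up entry and exit dates for a few stories:

```python
from datetime import date

# Hypothetical (entered, exited) dates for stories pulled through the pipeline.
stories = [
    (date(2023, 3, 1), date(2023, 3, 9)),    # 8 days
    (date(2023, 3, 2), date(2023, 3, 14)),   # 12 days
    (date(2023, 3, 6), date(2023, 3, 16)),   # 10 days
]

lead_times = [(exited - entered).days for entered, exited in stories]
average_lead_time = sum(lead_times) / len(lead_times)
print(average_lead_time)  # (8 + 12 + 10) / 3 = 10.0 days
```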
One very important point is to empower the team to deliver repeatable performance and thus a predictable velocity. Yes, this is a business, and the team should be expected to perform. However, it can be detrimental to predictability to interfere with the team by changing team members, expecting all-nighters for a week to meet a deadline, or not enforcing work-in-process limits. These activities are generally not sustainable and undermine the performance of the team.
Dealing With Uncertainty
Now we have our best approximations of the sizes of the stories. Our team has been working together for a while and is performing with a consistent velocity. Adding up all the story points and dividing by the velocity provides an estimate of the lead time for these features. This is our very best estimate. Even so, there is still some uncertainty in it. What can we do?
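In code, the whole estimate reduces to one division (the numbers are illustrative):

```python
# Best estimate of remaining lead time: total story points / velocity.
remaining_points = 240     # sum of the sized, unfinished stories
velocity_per_week = 20     # measured story points per week

weeks_remaining = remaining_points / velocity_per_week
print(weeks_remaining)  # 12.0 weeks
```

The division is trivial; the value of the estimate comes entirely from how carefully the two inputs were measured.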
First, realize that nothing is perfect. Striving for perfection can lead to paralysis by analysis. It is better to be professional about your estimates by discussing the uncertainty with the customer. Explain the sizing process, velocity and how estimates are calculated. End with an explanation of the ‘cone of uncertainty’ or the concept that uncertainty continually decreases as we near the finish (principle #4).
The key to dealing with this is communication with the customer. Many of the agile methodologies include structure to encourage this communication. As the software evolves, the customer should get frequent opportunities to inspect the product. Explain your development methodology.
Describe and relay the importance of their role in the process. Just as the software evolves, so should our estimate. The estimate will become more accurate as we proceed. Customers are not surprised by the product, schedule or costs when professionals are involved. Constant communication is key.
The customer has the responsibility to prioritize the stories so the most important ones get developed first. There may be exceptions related to retiring risky stories early. Generally, though, the product should evolve with the customer’s highest-priority requirements first.
There are uncertainties in our estimate, but there are also uncertainties in the customer’s expectations. Many times, during a customer inspection, they will begin to speak about new features. Often seeing a prototype in action opens up creativity. This can lead to new user stories, and the customer may even prioritize them above the currently scoped work. Because this is common, a frank and professional discussion about scope changes should happen at the start of the project. There may be financial, time, or other constraints that prevent the scope from expanding. To prevent disappointment, customers must understand your scope-change policy.
Remember, estimation is not the same thing as guessing. An estimate is composed of approximations that introduce uncertainty into our answer. Although we can do our best to minimize the uncertainty in our approximations, we simply cannot make it go away. Apply the four principles and deal with uncertainty in a professional manner. The key to providing good estimates is to collect and leverage metrics on the team. This post avoided the complexity of mathematical statistics, but adding a bit of statistical analysis to the metrics can provide valuable information. Providing good estimates can be hard. Hopefully, this discussion makes it a bit easier.