Poor man’s project forecasting

Three seemingly independent occurrences happened to me recently:

  • A couple of weeks ago I had the privilege to read an upcoming book about #NoEstimates by Vasco Duarte.
  • A couple of days ago I read a very interesting blog post called ”How far have we come?” by Marcus Hammarberg.
  • My boss recently gave me the following task: ”Can you make a status report on where we are in the project?”

I thought for a while: can I connect these things? There must be a way to solve my immediate task using what I’ve learnt recently. This is what I came up with.

Poor man’s project forecasting

Some time ago I asked the team to start measuring how many stickies they complete per week. I got this idea from the book ”Kanban in Action”, which you really should buy! If you wonder why, I’ve written a review of it. The reason we started this measuring was to see whether our kaizen efforts (continuous improvements) were having a positive or negative effect.

Every Friday the team leader counts all the stickies in the ”Done” column on the Kanban board, writes the figure on the board, and then removes them. In our kanban, each sticky corresponds to a bug fix or a sub-task of a story/feature. Hence, the measuring is done on the lowest level of work that we track, and not on the story or feature level (which is what Vasco Duarte suggests in his book).

[Chart: Stickies completed per week]

Plotting the numbers stored for each week produced the picture above. The spreadsheet program I’m using could also calculate an average, in this case 18 tasks/week.
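The spreadsheet math is trivial to reproduce. Here is a minimal Python sketch; the weekly counts are made up for illustration (the post doesn’t publish the raw numbers), chosen so they average out to the 18 tasks/week mentioned above:

```python
# Hypothetical weekly sticky counts (illustrative only; not the post's real data)
weekly_done = [10, 25, 12, 24, 15, 22]

# Average throughput: the single number used for forecasting
average = sum(weekly_done) / len(weekly_done)
print(f"Average throughput: {average:.0f} tasks/week")  # → 18
```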

You can argue that some of those tasks were one hour of work, while others took much longer to complete (days or even weeks). That is true of course, but by measuring over a longer period of time, these things ”even out”.

One more argument against this kind of measurement is that work in progress is not considered or valued. For example, a task originally estimated at five days that now has only one day left should be 80% done. My answer to that is: how do you know that 20% is left? That is just an estimate. You don’t know that it’s done before it’s done! 🙂

Remember that the effort for performing this measurement is less than five minutes per week!

Actions possible thanks to forecasting

Someone once did a time plan stating that the project needed 6 weeks for bug fixing. I have to admit, it was me, back in my ”old life”. 🙂

Looking inside our bug tracking system, I found that 213 bugs were assigned to the project. Do you, just like me, hate to manage long lists in bug tracking systems or spreadsheets? Maybe you can make use of the priority pyramid.

Now it was time to fire up my favorite calculator:

  1. Forecast time to solve all bugs = 213 / 18 ≈ 12 weeks (double the planned time)
  2. Number of bugs that can be solved within the given time plan = 6 * 18 = 108 (roughly half the scope)
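The two calculations above can be sketched in a few lines of Python, using the figures from the post:

```python
# Known quantities from the post
open_bugs = 213      # bugs assigned to the project
throughput = 18      # average tasks completed per week
planned_weeks = 6    # weeks allotted in the original time plan

# 1. Weeks needed to clear the backlog at the current pace
weeks_needed = open_bugs / throughput      # ≈ 11.8, i.e. about 12 weeks

# 2. Bugs that fit within the planned 6 weeks
bugs_in_plan = planned_weeks * throughput  # 108, roughly half the scope

print(f"Weeks needed: {weeks_needed:.1f}, bugs solvable in plan: {bugs_in_plan}")
```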

[Chart: Bug burndown]

Consulting the Iron Triangle, we could draw the following conclusions:

  • Cost – Investigate the possibility of borrowing some members from another team to help out with this project
  • Time – Investigate the possibility of delaying the project delivery with the steering group
  • Scope – Initiate activities to manage the scope. Must this bug be solved in this project, or can it wait? Is it old and can be removed? Is this bug already fixed, but our tracking system has not been updated? And so on.

Summary

Wait a minute, couldn’t you have come to the same conclusions without forecasting? Well, we probably could in some way. We knew that the project was ”running late” (that’s why the status report activity was initiated). Based on our so-called ”gut feeling” we could have reached similar conclusions. However, I’d rather base my decisions on actual data that my brain can analyze than on ”feelings” from my stomach. 🙂

Was it hard to do this? No. Was it useful? Yes. We held a meeting to discuss the outcome, and immediate actions could be taken. This way of doing poor man’s project forecasting will be in my toolbox from now on!

Finally, #NoEstimates is now so established that jokes about it are popping up. This is one of them, enjoy!

”How many #NoEstimates advocates does it take to change a lightbulb?
Sorry, I can’t say until they have a stable throughput in 3–4 sprints.” – Neil Killick (@neil_killick)

All the best,
 Tomas from TheAgileist


10 comments

  1. The average doesn’t tell you much about the future. The “average” of 40 and 60 is 50. The “average” of 10 and 90 is 50. You need the Standard Deviation of the time series of values, and the Mode. Then you’ll see the STD is 6.6 on an average of 18, or a likely ±30% swing in the likelihood of the next outcome for tasks.

    This is useful information, in that it tells you that you need a 30% margin if you’re going to make decisions based on the past performance of the project.

    This approach, by the way, is called “Estimating” – the kind of estimating done when the microeconomics of decision making is used by the business to determine whether the “value” being produced in exchange for the cost is “earning its keep.”
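The commenter’s point about spread can be sketched with Python’s standard library. The weekly counts below are hypothetical, picked to roughly match the commenter’s figures (mean 18, standard deviation around 6.5):

```python
import statistics

# Hypothetical weekly counts (illustrative; chosen to approximate the
# commenter's mean of 18 and standard deviation of ~6.6)
weekly_done = [10, 25, 12, 24, 15, 22]

mean = statistics.mean(weekly_done)   # 18
std = statistics.stdev(weekly_done)   # sample standard deviation, ≈ 6.5
margin = std / mean                   # ≈ 0.36, i.e. roughly a ±30-40% swing

print(f"mean={mean:.1f}, std={std:.2f}, margin=±{margin:.0%}")
```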


    1. Hi, and thanks for your comment! I agree that an average is not the best way of calculating – there is definitely room for improvement here. I deliberately named this blog post ”poor man’s” to highlight that this is something simple to start out with and then build upon. Those ”next steps” are not covered here, but they could be a topic for an upcoming blog post.


  2. Yeah, exactly.

    I tend more and more to steer away from making estimates and instead just present the data. In your case you augmented that with some calculation for worst and best case.

    This brings the real facts to the people wanting the estimates, and they can do the prognosis in their heads if they want to.

    If we then add the fact that you keep updating this frequently (right now weekly for you; updating the moment you complete a sticky would be the lowest possible granularity, though maybe not very useful), it can be a really good foundation for decision-making.

    Tell it like it is!


  3. The real benefit of this approach is that you base the forecast on the real capabilities (!) of the system (!) and not on some gut instinct – even if it’s based on experience – about how long it takes to solve a task never solved before (which is common in knowledge work).


    1. Yes, decisions based on “informed” estimates are mandatory for any credibility. But there are many ways to arrive at an informed estimate in the absence of past data as well. Monte Carlo simulation based on a reference class, or simple ± ranges to test the viability of the model, are a good start. For example, a −5%/+15% range on all the work items to see what the outcome might be. If the “modeled” outcome is “close enough”, that might be a good start.
      But using the average is always a BAD idea in the absence of variance.
      Then there’s a much bigger problem of using small samples of highly variant past performance, as suggested by the person producing that original chart.
      The future is rarely like the past, so without the assessment in the post here http://goo.gl/r75zug you’re going to be disappointed you didn’t see the outcomes coming.
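The Monte Carlo idea mentioned here can be sketched by bootstrapping from past weekly throughput. The weekly samples below are hypothetical (the post doesn’t publish its raw data); the backlog of 213 bugs comes from the post:

```python
import random

random.seed(42)  # fixed seed so repeated runs give the same forecast

# Hypothetical past weekly throughput samples (illustrative only)
weekly_done = [10, 25, 12, 24, 15, 22]
backlog = 213  # open bugs from the post

def simulate_weeks():
    """Resample past weekly throughput until the backlog is cleared."""
    remaining, weeks = backlog, 0
    while remaining > 0:
        remaining -= random.choice(weekly_done)
        weeks += 1
    return weeks

# Run many simulated projects and read off percentiles of the finish week
runs = sorted(simulate_weeks() for _ in range(10_000))
p50, p85 = runs[len(runs) // 2], runs[int(len(runs) * 0.85)]
print(f"50% chance done in {p50} weeks, 85% chance done in {p85} weeks")
```

Instead of a single-point ”12 weeks” answer, this gives a range with an attached probability, which is exactly the kind of spread the comments above argue for.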

