Feature Velocity and Forecasting

If you can’t measure it, you can’t manage it.
— Peter Drucker

The idea that "being agile" excuses teams from providing delivery dates is a myth. It is ironic, because the underlying data gathered in empirical methods like agile is exactly what makes reliable forecasts possible. It is only reasonable that we set expectations with our customers and stakeholders on the value we deliver.

The root of the myth seems to be two-fold. First, teams are often concerned that their forecasts will be interpreted as commitments. Treating a forecast as a commitment is reasonable for a short time box, like a sprint, but loses its efficacy over longer time frames like traditional releases. Second, teams may not have the knowledge to develop the relevant measures to support forecasting, especially at scale. We address this second challenge here with the introduction of Feature Velocity.

The Challenge

I was visiting my friend Hiren Doshi recently when the topic of relevant measurement came up. The basis of agility is empirical process control, so data and effective measurement are foundational. We came upon an analogy. 

Imagine an airline executive who wants to assess overall airline efficacy. When the executive visits an aircraft, the crew ushers them to the cockpit, where the pilot explains one dial for ground speed, another for altitude, and another for outside air temperature. Someone proudly quotes the route's on-time arrival statistics. While all of these data seem relevant, they are operational and efficiency measures, each demonstrating a single aspect of delivering happy customers, safely, to their destinations.

Engineering teams often measure their own capacities in terms of sprint velocity or cycle times. The mistake is equating this measure directly to the delivery of feature value. Consider the following complicating factors:

  • Focus factor
  • Several contributing teams
  • Deadlines

The first challenge is understanding that not all of a team's capacity goes to feature development. Some percentage goes to defects, infrastructure, and other work, and it can account for over half of a team's overall capacity. This concept is known as Focus Factor. (e.g., the need to stock coffee or maintain baggage-transport machinery.)
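The effect of focus factor on forecasting can be sketched in a few lines. The function name and the numbers below are illustrative assumptions, not values from the text:

```python
# Sketch: how much of a team's sprint velocity is actually available
# for feature work, after focus factor is applied.

def feature_capacity(sprint_velocity: float, focus_factor: float) -> float:
    """Points per sprint available for feature development.

    focus_factor is the fraction of capacity left after defects,
    infrastructure, and other non-feature work are subtracted.
    """
    return sprint_velocity * focus_factor

# A team delivering 40 points per sprint, with half its capacity
# consumed by non-feature work, has only 20 points for features.
print(feature_capacity(40, 0.5))  # 20.0
```

Forecasting from raw sprint velocity without this adjustment will roughly double the apparent feature throughput in this example.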

Next, we must consider the number of teams that may be working concurrently on a feature. There are limits, of course, but the ability to bring different teams to a piece of work is a significant advantage of organizational agility. (e.g., baggage handlers contribute to both the front and back end of a flight.)

Finally, delivering on a deadline is often over-emphasized. Agility does not dissolve the constraints of the iron triangle, and fixing both time and scope usually results in compromised quality. (Imagine if on-board clocks showed only the scheduled arrival time, and testing were bypassed to meet the schedule when falling behind.)

Team and Feature Velocities

Into Practice

The solution is to apply the concept of Feature Velocity: aggregate each team's velocity contribution to a given feature over time, then develop forecasts for the delivery of that feature. In practice, each team must track its own focus factor and include each feature's contribution within it. This is usually trivial with most agile tooling, but spreadsheets work well too. In the figure we see the contributions of Team 1 and Team 2 to Feature A and Feature B, as well as the aggregated roll-up for Feature A.

Forecasting is a natural outcome of evaluating feature velocity. Using the same methods described for team hurricane charts to assess a team's future capacity, we can assess feature velocity for an accurate forecast of delivery. Assess the variability in feature velocity to develop forecast date ranges, a more accurate representation of the data than a single date. Note the high and low forecast dates shown above.
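One simple way to turn velocity variability into a forecast range is to project remaining work against the best and worst observed feature velocities. This is a minimal sketch under that assumption; the observed velocities and remaining size are made-up numbers:

```python
import statistics

# Sketch: derive an optimistic/pessimistic forecast range from observed
# feature velocity. All numbers are illustrative.
observed = [13, 9, 15, 11, 12]   # feature velocity over five sprints
remaining = 60                    # points of the feature still to deliver

low_v, high_v = min(observed), max(observed)
mean_v = statistics.mean(observed)

optimistic = remaining / high_v   # sprints left if velocity stays high
pessimistic = remaining / low_v   # sprints left if velocity stays low

print(f"forecast: {optimistic:.1f} to {pessimistic:.1f} sprints "
      f"(mean {remaining / mean_v:.1f})")
```

Converting the sprint range into calendar dates gives the high and low forecast dates the text refers to; a percentile-based spread (rather than min/max) is a common refinement once enough sprints of data exist.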

Feature growth

Continuously growing work is a common problem, and a forecast will reveal it as the date range slips out sprint after sprint. To gain insight here, establish a simple measure: feature size over time. Record the total size of the feature and assess it at regular intervals. This keeps the problem visible and encourages a discussion about tradeoffs: time, scope, or resources.
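The measure above needs nothing more than a record of total size at each interval. A minimal sketch, with hypothetical numbers:

```python
# Sketch: track total feature size at each sprint boundary and report
# growth. Data is illustrative.
size_over_time = {1: 80, 2: 85, 3: 96, 4: 110}  # sprint -> total points

sprints = sorted(size_over_time)
growth = size_over_time[sprints[-1]] - size_over_time[sprints[0]]

print(f"Feature grew {growth} points over "
      f"{sprints[-1] - sprints[0]} sprints")  # Feature grew 30 points over 3 sprints
```

Reviewing this series alongside the forecast range makes it obvious whether a slipping date is a velocity problem or a scope problem.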

Takeaways

  • Embrace the complexity in mature software engineering and develop the right measures to assess progress.
  • Use feature velocity to develop forecast ranges for feature delivery.
  • Use feature size over time to gain insights into changing scope.
  • Assess these measures regularly, ideally every sprint.