Estimating Defects: Should Bugs Be Part Of Velocity Metrics?

Defining Software Defects and Velocity Metrics

Before diving into the debate around including bugs in velocity metrics, it is important to clearly define key terminology. A software defect or “bug” refers to an error, flaw, failure, or fault in a computer program that causes it to produce an incorrect or unexpected result or behave in unintended ways. Defects can arise due to mistakes made by developers during coding, complex system interactions, gaps in requirement specifications, and various other root causes.

On the other hand, velocity refers to the amount of software development work a team can complete and deliver over a certain time period, often measured in terms of story points per sprint iteration in agile frameworks like Scrum. Velocity metrics provide insight into team capacity and progress to support planning and forecasting. They indicate if the team is taking on an appropriate and sustainable amount of work.

The Ongoing Debate: To Include or Not Include Bugs

There are two schools of thought on whether defect resolution efforts should be included as part of team velocity calculations. Proponents argue that fixing bugs is as much real work as building new features, so it should factor into velocity, since it reduces the team’s available capacity. However, opponents counter that bugs do not directly equate to new functionality delivered, so mixing defect fixes with new development inflates velocity and makes it less accurate for planning.

In favor of including bugs, proponents consider defect resolution essential technical debt repayment that deserves an equal place next to new feature development. Bugs left unfixed degrade code quality and incur interest payments in the form of extra maintenance effort down the road, so spending time on bugs frees up future capacity. Additionally, excluding bugs from velocity may incentivize teams to focus only on new features at the cost of code health.

On the other side, those arguing against including bugs believe that defects are failures in delivery, so counting them positively inflates estimates of a team’s capacity to deliver working software. Teams may then take on more work than they can handle, setting unrealistic expectations. Additionally, some view bug-fixing as interruptive rework rather than planned progress, while others see it as an overhead cost separate from direct development efforts.

Factors to Consider When Deciding

There are several important considerations when determining if bug resolution efforts should contribute to velocity metrics:

  • Development team skills and experience – less seasoned teams may generate more defects unintentionally
  • Codebase quality and technical debt – poor structure with accumulated debt tends to manifest more bugs
  • Testing practices and automation – comprehensive testing and coverage lowers defect escape rates

For teams with many defects largely outside their control due to legacy code issues or inexperienced members, including bugs in velocity may unfairly deflate their metrics. On the other hand, for a senior team working in a relatively pristine codebase, bugs could reasonably count as capacity spent.

Examples and Sample Calculations

Below are two examples of how velocity metrics would differ based on whether bugs get included:

Example 1: Bugs Excluded from Velocity

  • Sprint 1:
    • Completed Stories: 50 points
    • Bugs Fixed: 20 points
  • Sprint 2:
    • Completed Stories: 40 points
    • Bugs Fixed: 10 points

Velocity with bugs excluded = 50 + 40 = 90 story points over two sprints (an average of 45 points per sprint)

Example 2: Bugs Included in Velocity

  • Sprint 1
    • Completed Stories: 50 points
    • Bugs Fixed: 20 points
  • Sprint 2
    • Completed Stories: 40 points
    • Bugs Fixed: 10 points

Velocity with bugs included = 50 + 20 + 40 + 10 = 120 story points over two sprints (an average of 60 points per sprint)

In this simplified example, including defects inflates velocity by around 33% compared to excluding bugs. This could skew forecasts if used for planning without adjustments.
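The arithmetic above can be sketched as a short script. This is a minimal illustration, assuming each sprint is recorded as a (completed story points, bug-fix points) pair as in the examples:

```python
# Sprint data from the examples above: (completed story points, bug-fix points)
sprints = [(50, 20), (40, 10)]

# Velocity with bugs excluded: only completed story points count
velocity_excluding = sum(stories for stories, _ in sprints)

# Velocity with bugs included: bug-fix points also count as delivered work
velocity_including = sum(stories + bugs for stories, bugs in sprints)

inflation = velocity_including / velocity_excluding - 1

print(velocity_excluding)    # 90
print(velocity_including)    # 120
print(f"{inflation:.0%}")    # 33%
```

Running the same comparison on a few real sprints is a quick way to see how much the two counting policies would diverge for a particular team before committing to one.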

Best Practices and Recommendations

In most cases, the best approach is to track bugs separately from new development velocity. However, their impact on capacity should still be understood and accounted for during planning. Consider including bugs selectively for teams with quality norms enforcing minimal defect rates.

Specific guidelines on when bugs may contribute to velocity metrics include:

  • Defect escape rate stays below 5% consistently
  • Most bugs arise from unclear requirements rather than poor coding
  • Comprehensive unit, integration and automation testing is implemented
  • Teams are staffed with senior engineers proficient with the code

If these indicators do not apply, exclude bugs from velocity, but have the team track and report on defect resolution efforts separately to understand the full picture.
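One common way to compute the defect escape rate mentioned above is the share of defects found after release out of all defects found. A minimal check against the 5% guideline might look like this; the function name, input fields, and the sample figures are illustrative assumptions, not a standard API:

```python
def defect_escape_rate(found_in_testing: int, found_in_production: int) -> float:
    """Share of defects that escaped to production, out of all defects found."""
    total = found_in_testing + found_in_production
    if total == 0:
        return 0.0  # no defects found at all
    return found_in_production / total

# Hypothetical sprint figures: 38 defects caught in testing, 1 escaped to production
rate = defect_escape_rate(found_in_testing=38, found_in_production=1)
meets_guideline = rate < 0.05

print(f"{rate:.1%}")      # 2.6%
print(meets_guideline)    # True
```

A team tracking this per sprint can use a sustained sub-5% rate as one signal that counting bug fixes in velocity is unlikely to distort its forecasts.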

Key Takeaways and Conclusion

Counting bug fixes as part of team velocity remains a divisive topic with reasonable arguments on both sides. Those in favor consider all capacity spent on defects as technical debt repayment that frees up future bandwidth. But opponents see bugs as capacity lost towards interruptive rather than planned productive work.

In the end, software teams need to decide based on their own context if bug resolution aligns with their interpretations of velocity. Factors like code quality, testing rigor, system stability, and team proficiency should inform this decision. Guideposts around maximal defect escape rates can indicate when including bugs may or may not inflate velocity figures.

By understanding the full impact of bugs on available capacity over time, teams can plan and track productivity more effectively towards meeting their project commitments.
