Accounting For All Work Items In Scrum, Including Bugs And Technical Tasks

In Scrum, all work related to a project must be accounted for and tracked transparently. This includes product backlog items, bugs, technical tasks, assignments, progress reporting, and closures. Comprehensive tracking ensures visibility into the true state of a project and what work remains.

Defining Work Items in Scrum

Scrum teams track progress by breaking down work into manageable chunks called work items. Common categories of work items include:

  • Product Backlog Items: Features, user stories, or requirements for the product.
  • Bugs: Defects in existing functionality that must be fixed.
  • Technical Tasks: Work needed to support development like infrastructure changes, refactoring, automation, etc.

Each work item should contribute to a shippable increment of value for stakeholders. Good work items follow the INVEST criteria: Independent, Negotiable, Valuable, Estimable, Small, and Testable.

Attributes of Work Items

Work items share common attributes including:

  • Title: Short description of the work.
  • Description: More detailed explanation of the scope and requirements.
  • Type: Category of work item (feature, bug, task, etc.).
  • Priority: Relative importance and order for addressing.
  • Effort: Expected work required to complete.
  • Assignee: Team member responsible for the work.
  • Status: Current state of progress (not started, in progress, complete).

Tracking these standard attributes makes reporting and planning easier for Scrum teams when many work items are in play.
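
As a minimal sketch, these common attributes map naturally onto a simple data structure. The field and enum names below are illustrative rather than tied to any particular tool:

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ItemType(Enum):
    FEATURE = "feature"
    BUG = "bug"
    TASK = "task"

class Status(Enum):
    NOT_STARTED = "not started"
    IN_PROGRESS = "in progress"
    COMPLETE = "complete"

@dataclass
class WorkItem:
    title: str                           # short description of the work
    description: str                     # detailed scope and requirements
    item_type: ItemType                  # feature, bug, task, etc.
    priority: int                        # lower number = address sooner
    effort: int                          # expected work, e.g. story points
    assignee: Optional[str] = None       # team member responsible
    status: Status = Status.NOT_STARTED  # current state of progress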

Tracking Product Backlog Items

The Product Backlog is an ordered list of desired functionality for the product. It evolves over time based on customer feedback, technology changes, and business objectives. The Product Owner manages the backlog and prioritizes items for development by the team. Teams estimate effort for backlog items during backlog refinement to enable sprint planning.

Defining Product Backlog Items

Properly defined product backlog items (PBIs) clearly communicate requirements to the development team. Effective PBIs exhibit specificity, testability, and customer focus. Best practices for writing PBIs include:

  • User Story Format: As a [user role], I want [feature] so that [benefit].
  • Acceptance Criteria Checklists: Detailed success metrics that must be met to close an item.
  • Mockups and Examples: Visual representations to demonstrate desired functionality.
  • Non-Technical Language: Express outcomes rather than technical implementation.
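
For instance, a hypothetical PBI following these practices might be captured as structured data like this (the story, criteria, and field names are invented for illustration):

pbi = {
    "title": "Saved searches",
    "story": "As a returning shopper, I want to save a search "
             "so that I can re-run it without retyping my filters.",
    "acceptance_criteria": [
        "A logged-in user can save the current search with a name",
        "Saved searches appear on the account page",
        "Running a saved search reproduces the original filters",
    ],
    "priority": 2,
    "effort": 3,  # story points
}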

Tracking PBIs

There are two primary methods for tracking PBIs, either at the Product Backlog level or as items elaborated under individual sprints:

  1. Backlog Tracking: The backlog contains detailed PBIs to account for all desired work. Items move to a sprint backlog when selected for an iteration. The Product Owner owns monitoring here.
  2. Sprint-Level Tracking: High-level epics or themes reside in the product backlog. These encompass large functionality areas and are broken down into granular PBIs under sprints. The Development Team tracks progress in each sprint.

The optimal approach depends on how dynamic the project scope is and the pace of development. More variable projects benefit from sprint-level elaboration of items. Tools like JIRA allow teams to track items at both the product and sprint levels for full accountability.

Tracking Bugs

Bugs are defects in existing functionality of a product. They surface via quality assurance testing, customer reports, or discovery during development work. Bugs are inevitable on complex products, and Scrum teams must make fixing them a priority to deliver a shippable increment each sprint.

Bug Tracking Life Cycle

Effective bug tracking processes include these steps (sketched as a simple state machine after the list):

  1. Reporting: QA or users document bugs with steps to reproduce, expected vs. actual behavior, screenshots, etc.
  2. Triaging: Dev team reviews and sets priority level based on severity and scope.
  3. Assignment: Bug gets assigned to a developer to investigate and fix.
  4. Status Updates: Assignee updates progress during the sprint.
  5. Verification: Fixed code gets tested to confirm resolution of the bug.
  6. Closure: Bug gets marked as Done when the fix meets requirements.
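
A minimal sketch of this life cycle as a small state machine; the status names follow the steps above and the transition table is illustrative rather than prescriptive:

# Allowed transitions in the bug life cycle described above.
BUG_TRANSITIONS = {
    "reported":    {"triaged"},
    "triaged":     {"assigned"},
    "assigned":    {"in_progress"},
    "in_progress": {"fixed"},
    "fixed":       {"verified", "in_progress"},  # verification may bounce a fix back
    "verified":    {"closed"},
    "closed":      {"reported"},                 # a reopened bug starts the cycle again
}

def advance(current: str, target: str) -> str:
    """Move a bug to a new status, enforcing the life cycle order."""
    if target not in BUG_TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move bug from {current!r} to {target!r}")
    return target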

Tracking Bug Metrics

Tracking key bug measures provides insight into broader product stability and areas needing attention. Useful metrics include:

  • Total Active Bugs
  • Bug Volume by Priority Level
  • Mean Time to Resolution
  • Bug Reopen Rates
  • Bugs by Assignment Area

These metrics can highlight spikes and drive process improvements, such as shifting QA resources or stabilizing components prone to defects.
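
As a rough sketch, two of these measures, mean time to resolution and reopen rate, could be computed from a list of bug records like this (the record fields are assumptions, not any specific tool's export format):

from datetime import datetime
from statistics import mean

bugs = [
    {"opened": datetime(2024, 3, 1), "resolved": datetime(2024, 3, 4), "reopen_count": 0},
    {"opened": datetime(2024, 3, 2), "resolved": datetime(2024, 3, 9), "reopen_count": 1},
    {"opened": datetime(2024, 3, 5), "resolved": None, "reopen_count": 0},  # still active
]

resolved = [b for b in bugs if b["resolved"] is not None]

total_active = sum(1 for b in bugs if b["resolved"] is None)
mean_days_to_resolution = mean((b["resolved"] - b["opened"]).days for b in resolved)
reopen_rate = sum(1 for b in resolved if b["reopen_count"] > 0) / len(resolved)

print(total_active, mean_days_to_resolution, reopen_rate)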

Tracking Technical Tasks

Technical tasks encompass work that supports development but isn’t part of the shippable product. These sustaining activities are essential for long-term velocity. Common examples include:

  • Infrastructure enhancements
  • Technical debt repayment
  • Automated testing
  • Tooling upgrades
  • Legacy code refactoring

These tasks don’t directly produce new customer functionality but are imperative for an efficient, sustainable pace of development.

Risks of Neglecting Tasks

What happens when teams ignore technical tasks?

  • Reliability suffers as more defects emerge.
  • Code quality declines, slowing productivity.
  • Team spends time firefighting rather than innovating.
  • Upgrades take longer due to outdated platforms.
  • Bottlenecks materialize as technical debt accumulates.

Technical shortcuts taken early on cause compounding pain over time. Tracking these tasks counters this by making the work transparent.

Tracking Techniques

Teams have two options for monitoring technical activities:

  1. Dedicate Sprint Capacity: Reserve iteration time exclusively for technical tasks, with no PBIs planned.
  2. Allocate Within Sprints: Mix tasks in with normal product backlog work on the sprint backlog.

The best approach depends on the size of the technical debt backlog and the team's skills mix. Dedicated capacity relieves acute problems quickly, while embedding tasks in regular sprints distributes the responsibility across the team.
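
A small sketch of the second option: reserving a fixed share of sprint capacity for technical tasks. The 20% split is an assumption a team would tune for itself:

def split_capacity(total_points: int, technical_share: float = 0.20) -> tuple[int, int]:
    """Split sprint capacity between technical tasks and product backlog work."""
    technical = round(total_points * technical_share)
    return technical, total_points - technical

tech_points, pbi_points = split_capacity(40)  # e.g. 8 points of tasks, 32 of PBIs
print(tech_points, pbi_points)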

Assigning Work Items

For work items to progress, they must have clear ownership with assignees responsible for completion. Conscious assignment considers several factors:

  • Skills and strengths
  • Current workload
  • Career growth areas
  • Cross-training opportunities

Balancing these dimensions keeps work advancing on time while growing capabilities across the whole team.

Automated Assignment

As teams scale, manual work item assignment becomes hard to sustain. "Intelligent" assignment algorithms can augment human judgment for better routing. These algorithms assess aspects like:

  • Experience in the relevant skill domain
  • Proficiency based on past item resolutions
  • Relative availability across the team
  • Affinity between team members and work categories

Automated assigners free humans to focus on complicated decisions while consistently handling repetitive matching based on historical data.
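
A toy sketch of such a scoring approach: each candidate receives a weighted score from skill match, past resolutions in the item's area, and current load. The weights and profile fields are invented for illustration:

def score(candidate: dict, item_area: str) -> float:
    """Higher is better: reward relevant skill and history, penalize current load."""
    skill = 1.0 if item_area in candidate["skills"] else 0.0
    history = candidate["resolved_in_area"].get(item_area, 0) / 10.0
    load_penalty = candidate["open_items"] * 0.2
    return 2.0 * skill + history - load_penalty

def assign(item_area: str, candidates: list[dict]) -> str:
    best = max(candidates, key=lambda c: score(c, item_area))
    return best["name"]

team = [
    {"name": "Ana",  "skills": {"frontend"}, "resolved_in_area": {"frontend": 12}, "open_items": 3},
    {"name": "Badr", "skills": {"backend"},  "resolved_in_area": {"frontend": 1},  "open_items": 1},
]
print(assign("frontend", team))  # Ana, despite her heavier load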

Tracking Progress on Work Items

Once work begins, teams need ways to gauge advancement. In Scrum, tracking occurs on two levels:

  1. Day-to-Day: Assignees update status during sprint execution. This demonstrates accountability to the team.
  2. Sprint Reviews: The Scrum Team presents functionality built that sprint to stakeholders, confirming its fit with expectations documented in PBIs and meeting the Definition of Done for those items. This signals progress on customer requests to the organization.

These inspection points ensure issues surface promptly for the fastest resolution with maximum transparency.

Indicators of Progress

Teams quantify progression by considering metrics like:

  • % complete
  • Story points done
  • Hours remaining
  • # of tests executed and passed

As items near completion, their status transitions to reflect that later stage. Common markers include In QA, In Staging, Ready for Prod, Pending Acceptance, and Done.
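
A sketch of deriving two of these indicators, story points done and percent complete, from a sprint backlog (the statuses and fields are illustrative):

sprint_backlog = [
    {"title": "Saved searches", "points": 3, "status": "Done"},
    {"title": "Fix login bug",  "points": 2, "status": "In QA"},
    {"title": "Export to CSV",  "points": 5, "status": "In Progress"},
]

done_points = sum(i["points"] for i in sprint_backlog if i["status"] == "Done")
total_points = sum(i["points"] for i in sprint_backlog)
percent_complete = 100 * done_points / total_points

print(f"{done_points}/{total_points} points done ({percent_complete:.0f}%)")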

Closing Out Work Items

Closure indicates a work item’s fulfillment according to “done” criteria. But how does a team confirm item completion?

Definition of Done

Teams reference their Definition of Done (DoD), which outlines all necessary steps and quality standards for product functionality. Typical DoD elements include:

  • Peer code reviews passed
  • All tests automated
  • Meets coding standards
  • Security verified
  • UX validated
  • Performance benchmarked
  • Production deployment
  • End user acceptance signoff

With a robust DoD, teams know exactly what hurdles work must clear before it graduates from “in progress” to “complete.”
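
A minimal sketch of such a gate: an item only graduates to complete when every DoD element has been checked off. The checklist keys mirror the elements above:

DEFINITION_OF_DONE = [
    "peer_review_passed",
    "tests_automated",
    "coding_standards_met",
    "security_verified",
    "ux_validated",
    "performance_benchmarked",
    "deployed_to_production",
    "user_acceptance_signoff",
]

def is_done(item_checks: dict[str, bool]) -> bool:
    """True only if every Definition of Done element is satisfied."""
    return all(item_checks.get(step, False) for step in DEFINITION_OF_DONE)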

Actual Hours vs Estimates

Comparing initial estimates to actual hours spent also signals item resolution. Assuming the estimator properly accounted for all scope initially, close alignment on hours implies full delivery of the promised functionality at the expected level of effort.

Material deviations could indicate problems like scope creep, inadequate testing, or architectural misses. Scrutinizing these gaps uncovers areas for team and process improvement.
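
As a sketch, a team could flag material deviations between estimated and actual hours like this; the 25% tolerance is an arbitrary illustration each team would set for itself:

def deviation_flag(estimated_hours: float, actual_hours: float, tolerance: float = 0.25) -> bool:
    """Return True when actuals deviate from the estimate by more than the tolerance."""
    if estimated_hours <= 0:
        return True  # an unestimated item is itself worth scrutinizing
    return abs(actual_hours - estimated_hours) / estimated_hours > tolerance

print(deviation_flag(8, 9))   # False: within 25% of the estimate
print(deviation_flag(8, 14))  # True: 75% over, worth a retrospective look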

Reporting on All Work Items

Holistic reporting tells a transparent story about how the team is tracking toward delivering a product vision that delights customers.

Scrum Artifacts

Core Scrum artifacts to reference when reporting status include:

  • Product Backlog: What functionality could incrementally improve the product?
  • Sprint Backlog: Which items did the team commit to this iteration?
  • Increment: What shippable functionality was delivered across the sprints?
  • Metrics: How productive is the team quantitatively?

Automated Reporting

Thankfully, manual reporting is no longer necessary in most environments thanks to abundant tooling. Scrum tools like JIRA automatically generate digestible reports and visualizations that tell this story, including:

  • Burndowns: Work remaining across sprints
  • Velocity: Average story points per sprint
  • Ageing: Time work has been in progress
  • Cycle Time: Average time for an item to move from “to do” to “done”
  • Forecasting: Predicted completion dates for releases based on historical data

These exhibits shine light on progress impediments like inadequate testing bandwidth and dependencies holding up change approval.

Example Code Snippets for Tracking Work

Tools provide graphical models for digesting information, but it can be instructive to query the underlying data directly with code. Here are sample scripts demonstrating common work item reporting queries engineering teams might run:

Active JIRA Tickets By Assignee

-- Illustrative only: real JIRA database schemas vary by version, so the table
-- and column names below are simplified stand-ins rather than an exact schema.
SELECT
  a.name AS assignee,
  COUNT(i.id) AS num_assigned_issues
FROM
  jiraissue i
  JOIN jirauser a ON a.id = i.assignee
WHERE
  i.status != 'CLOSED'
GROUP BY a.name
ORDER BY num_assigned_issues DESC;

All GitHub Issues Without Labels

# GitHub's GraphQL issues connection cannot filter to unlabeled issues directly,
# so this uses the search API with the "no:label" qualifier instead.
query UnlabeledIssues {
  search(query: "repo:myorg/frontend is:issue no:label", type: ISSUE, first: 100) {
    nodes {
      ... on Issue {
        title
        url
      }
    }
  }
}

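Average Velocity From Completed Story Points

Velocity and similar metrics can also be computed directly from exported work item data. This Python sketch assumes a simple list of completed items tagged by sprint; the data shape is illustrative rather than any specific tool's export:

from collections import defaultdict
from statistics import mean

completed_items = [
    {"sprint": "Sprint 14", "points": 3},
    {"sprint": "Sprint 14", "points": 5},
    {"sprint": "Sprint 15", "points": 8},
    {"sprint": "Sprint 15", "points": 2},
    {"sprint": "Sprint 15", "points": 3},
]

points_per_sprint = defaultdict(int)
for item in completed_items:
    points_per_sprint[item["sprint"]] += item["points"]

velocity = mean(points_per_sprint.values())
print(dict(points_per_sprint))  # {'Sprint 14': 8, 'Sprint 15': 13}
print(f"Average velocity: {velocity:.1f} points per sprint")
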
These snippets demonstrate pulling custom slices of data from systems holding work items to uncover insights like risk areas or policy violations. Scripting provides flexibility to explore all aspects of the status quo.
