Policies For Handling Defects In A Kanban Workflow

Identifying Defects

The identification of defects is a critical first step in handling them appropriately within a Kanban workflow. Teams should establish clear policies and procedures to detect defects early and consistently throughout the development process.

Using Acceptance Criteria to Flag Defects Early

Well-defined acceptance criteria provide a mechanism to detect defects at the start of the workflow. The product owner should collaborate with team members to specify detailed, testable criteria that outline the functional and non-functional expectations for user stories. As work items flow through the Kanban board, assignees can validate code against these criteria to uncover gaps or issues. This approach enables the team to identify bugs early, before significant rework is required.
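One lightweight way to make acceptance criteria "testable" is to express each criterion as a named predicate attached to the story, so unmet criteria surface as flagged defects. The sketch below assumes this representation; the class, field names, and sample criteria are illustrative, not from any particular tool.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch: acceptance criteria as named, testable predicates
# attached to a user story. Any criterion that fails flags a potential defect.
@dataclass
class UserStory:
    title: str
    criteria: dict[str, Callable[[dict], bool]] = field(default_factory=dict)

    def failing_criteria(self, result: dict) -> list[str]:
        """Return the names of acceptance criteria the delivered result fails."""
        return [name for name, check in self.criteria.items() if not check(result)]

story = UserStory(
    title="Apply coupon to cart total",
    criteria={
        "discount applied": lambda r: r["total"] < r["subtotal"],
        "total never negative": lambda r: r["total"] >= 0,
    },
)

print(story.failing_criteria({"subtotal": 100.0, "total": 100.0}))
# ['discount applied']
```

Because each criterion has a name, a failed check can be carried straight onto a defect card with a clear description of the gap.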

Performing Regular Code Reviews

In addition to verifying user stories against acceptance criteria, teams should perform proactive code reviews to find defects missed during implementation. Teams should set review frequency based on the risk profile and complexity of new code – more complex modules warrant more fine-grained inspection. Code reviews also provide broader opportunities for knowledge sharing, mentoring amongst team members, and improving overall code quality.

Potential policies could mandate that all code must go through peer review before final testing stages, or that modules ranked high-risk must pass through two levels of code review. Teams should clarify the goals, scope, and procedures in written guidelines for code reviews.

Monitoring Production Errors and Exceptions

Robust monitoring of production systems allows teams to detect defects that were missed during earlier testing phases. Tracking error logs, user complaints in help desks, and operational issues can point to larger software problems. Teams should implement centralized logging, monitoring, and alerting infrastructure to properly assess issues in deployed products.

Policies can specify required monitoring capabilities based on application type and environment. For example, systems handling sensitive data may require more stringent tracing, audit logs, and access controls to pass compliance processes. Teams should regularly tune alert rules and log aggregation queries as the application moves through new releases in production.
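As a minimal illustration of centralized error tracking, a custom handler on Python's standard `logging` module can count ERROR-level records per logger, giving a simple signal for alert rules. This is only a sketch of the idea; production systems would use dedicated monitoring and aggregation tooling.

```python
import logging

# Minimal sketch of centralized error monitoring using the standard library:
# a handler that counts ERROR-level records per logger name, so spikes in a
# component's error rate can feed an alert rule.
class ErrorCounterHandler(logging.Handler):
    def __init__(self):
        super().__init__(level=logging.ERROR)
        self.counts = {}

    def emit(self, record: logging.LogRecord) -> None:
        # Tally errors by originating logger (e.g. per subsystem).
        self.counts[record.name] = self.counts.get(record.name, 0) + 1

counter = ErrorCounterHandler()
logging.getLogger().addHandler(counter)

logging.getLogger("checkout").error("coupon calculation failed")
logging.getLogger("checkout").error("coupon calculation failed")
print(counter.counts)  # {'checkout': 2}
```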

Prioritizing Defect Resolution

Not all defects uncovered have the same level of urgency or impact on end-users. Kanban teams must categorize and prioritize bugs to sequence remediation activities appropriately across the workflow states.

Categorizing Priority Based on Severity and Impact

Defects can be tagged with a priority level (P1, P2, P3) according to detailed criteria on severity, scope, and downstream impact. Teams should clearly define each level in policies, but an example scheme could be:

  • P1 – Critical: Blocks key product releases; causes data loss or compliance/security issues
  • P2 – Major: Affects core functionality; high customer visibility; results in crashes
  • P3 – Minor: Small fixes; edge cases; UI adjustments; does not cause work stoppage

Additionally, the frequency of occurrence can elevate lower-ranked bugs. For example, a low-severity memory leak (P3) occurring very frequently may be raised to P2 and fast-tracked.
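The tagging scheme above, including frequency-based escalation, can be captured as a small policy function. The severity-to-priority mapping follows the example scheme; the occurrence threshold is invented for illustration, and real policies would define their own.

```python
# Illustrative sketch of the priority scheme above: severity maps to a base
# priority, and a high occurrence rate escalates a P3 defect to P2.
# The 50/day threshold is an assumption for the example, not a standard.
PRIORITY_BY_SEVERITY = {"critical": "P1", "major": "P2", "minor": "P3"}

def assign_priority(severity: str, occurrences_per_day: int) -> str:
    priority = PRIORITY_BY_SEVERITY[severity]
    # Frequently occurring low-severity defects get fast-tracked one level.
    if priority == "P3" and occurrences_per_day > 50:
        priority = "P2"
    return priority

print(assign_priority("minor", occurrences_per_day=120))  # P2
print(assign_priority("minor", occurrences_per_day=3))    # P3
```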

Considering Downstream Work and Dependencies

Priority decisions should account for downstream dependencies so that higher-priority defects blocking other initiatives are clearly visible. For example, bugs stopping the transition of core features from development environments to production may require the highest focus. Assigning priority should trigger further analysis of second- and third-order impacts.

Policies can mandate documenting connected initiatives, risks, and resource needs as part of onboarding new defects.

Example: P1 – Critical, Blocks Releases; P2 – Major, Affects Core Functionality; P3 – Minor, Small Fixes

As described above, P1 defects critically stall key releases and should be resolved ahead of all other work. P2 defects still require urgent resolution given their impact on major functionality – they can be worked on concurrently and split across teams where feasible. P3 defects have minimal user impact and can be batched together into larger fixes; their low severity permits flexible timelines.

Assigning and Tracking Defects

Kanban teams must establish policies on bringing defects onto the board, assigning ownership, updating status, and tracking through resolution. This provides full visibility into quality issues and the remediation process.

Adding Defects as Kanban Cards with Priority Tags

As defects are identified via testing activities, code reviews, or production monitoring, they should be formalized into Kanban cards. Required fields should capture summary, descriptions, steps to reproduce, priority categorization, affected components, and other relevant metadata. Unique IDs can be assigned to reference bugs across tools.

Tagging cards by priority gives visual cues in the workflow. Color codes, flags, and clear labeling of P1/P2 classifications make sure assignment accounts for urgency. Teams define standards for bringing defects onto boards.
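The required card fields described above can be formalized as a simple record type with an auto-assigned unique ID for cross-tool references. The field names and ID scheme below are illustrative assumptions, not tied to any particular Kanban tool.

```python
import itertools
from dataclasses import dataclass, field

# Hypothetical sketch of a defect card carrying the fields described above.
# The "DEF-NNNN" ID scheme is an assumption for the example.
_ids = itertools.count(1)

@dataclass
class DefectCard:
    title: str
    description: str
    steps_to_reproduce: str
    priority: str                      # "P1" | "P2" | "P3"
    affected_components: list
    card_id: str = field(default_factory=lambda: f"DEF-{next(_ids):04d}")

card = DefectCard(
    title="Cart miscalculating discounts",
    description="Coupon not applied correctly when Category A items are in cart",
    steps_to_reproduce="Add Category A item, apply coupon code, check total",
    priority="P2",
    affected_components=["payments", "cart"],
)
print(card.card_id)  # DEF-0001
```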

Assigning Defects to Developers/Teams

Once properly captured, defects must be carefully assigned to individuals or teams for resolution based on experience, capability, and capacity. Critical P1 bugs likely require pulling in senior specialists, even if that means temporarily reshuffling capacity. Tools should allow flexible assignment, notifications, and handover support.

Policies could govern documentation requirements for cross-team work and help teams locate expertise for specialized components. Assignment policies also determine whether individuals handle specific priority levels or classes of bugs are spread across larger groups.

Updating Status as Defects Move Through To Do, In Progress, In Review, Done

As defects are worked through the workflow, their card position should be updated from left to right to capture the latest status, giving teams insight into cycle time and throughput. When team members pick up defects from the backlog queue to start work, the card moves from ‘To Do’ to ‘In Progress’.

Further policy standardization around more finely-grained steps within each major state can provide greater visibility. For example, additional Review and Validation states help track obstacles even if defects are technically done from a coding perspective.
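The left-to-right progression, including the finer-grained Review and Validation states mentioned above, can be sketched as a tiny ordered workflow. The column names follow the section's examples; the extra Validation column is the kind of finer-grained state a team might add.

```python
# Minimal sketch of left-to-right workflow transitions, including the
# finer-grained Review/Validation states discussed above.
WORKFLOW = ["To Do", "In Progress", "In Review", "Validation", "Done"]

def advance(status: str) -> str:
    """Move a card one column to the right; 'Done' is terminal."""
    idx = WORKFLOW.index(status)
    return WORKFLOW[min(idx + 1, len(WORKFLOW) - 1)]

print(advance("To Do"))      # In Progress
print(advance("In Review"))  # Validation
print(advance("Done"))       # Done
```

Enforcing transitions through a single function like this keeps status updates consistent, which in turn keeps cycle-time measurements trustworthy.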

Example Card: Title: Cart Calculation Defect, Assignee: Payments Team, Priority: P2, Status: In Progress

Kanban cards for defects should consolidate all relevant information for shared understanding. An example is shown below with key fields called out:

Title: Cart miscalculating discounts
Description: Coupon codes not applied correctly to total purchase amount if items from Category A in cart
Steps to reproduce: Add items from Category A, apply coupon code, check final amount

Assignee: Payments Team

Priority: P2
Status: In Progress

Verifying and Closing Defects

Once code fixes for defects are complete, teams follow well-defined confirmation processes before final closure and reporting on relevant data and trends.

Performing QA on Resolved Defects Before Closing

Code fixes should go through QA verification to ensure proper resolution and prevent regressions. Engineers resolve tickets from their own perspective, but structured testing across a range of inputs validates the fix from the end user's point of view. Depending on severity, third-party QA may also be required for the most critical issues. Mandating testing via policy hardens software quality.

Documenting Root Cause Analysis and Solution Details

Policies should clearly specify analysis expectations, evidence, and reporting requirements for future improvements. Teams perform thorough root cause analysis on defects, especially for high-severity, wide-impact issues. Collating background context, interim solutions, and eventual permanent fixes provides long-term learnings.

Standards can cover format, timeline, and storage locations for retrospective analysis. Lessons learned play an important role in maturing practices.

Example of Verifying Closure: Defect: Cart Miscalculating Discounts; Resolution Notes: Fixed Discount Coupon Logic, Added Test Coverage

Release management processes should look for details on resolution testing and steps to prevent regression, as in the example below:

Defect: Cart miscalculating discounts

Resolution Notes:
– Fixed discount coupon logic not checking category exclusions
– Added explicit test cases with category and coupon combinations
– All coupon tests now passing, defect closed
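The regression coverage mentioned in the resolution notes might look like the sketch below. The `cart_total` function and the rule that coupons exclude Category A items are assumptions inferred from the example, not the actual implementation.

```python
# Hedged sketch of the regression tests the resolution notes describe.
# `cart_total` is a hypothetical version of the fixed coupon logic, assuming
# the rule is that coupon discounts exclude Category A items.
def cart_total(items, coupon_pct):
    discountable = sum(price for cat, price in items if cat != "A")
    excluded = sum(price for cat, price in items if cat == "A")
    return excluded + discountable * (1 - coupon_pct / 100)

def test_coupon_excludes_category_a():
    items = [("A", 50.0), ("B", 100.0)]
    # Category A item keeps full price; 10% coupon applies only to the rest.
    assert cart_total(items, coupon_pct=10) == 50.0 + 90.0

def test_coupon_applies_without_category_a():
    assert cart_total([("B", 100.0)], coupon_pct=10) == 90.0

test_coupon_excludes_category_a()
test_coupon_applies_without_category_a()
print("All coupon tests passing")
```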

Analyzing Defect Metrics and Trends

Analyzing defects and quality trends over time offers useful indicators into the efficacy of processes, training gaps, and potential technical debt hotspots. Teams should implement consistent reporting.

Tracking Number of Defects by Priority/Type

A basic outcome metric is the number of open bugs by priority level over time – it measures backlog and queue management. Tracking daily/weekly trends shows the impact of code freezes, system issues, and validation bottlenecks on the full pipeline. The data can inform resource allocation decisions.

Type dimensions (requirement gaps, runtime crashes, data errors) spot weak points by origin. Count metrics by assignee can indicate training needs.
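A count-by-priority metric is straightforward to compute from card records, as in the sketch below. The record fields and sample data are illustrative, not tied to any particular tool.

```python
from collections import Counter

# Simple sketch of the open-defects-by-priority metric, computed over a
# list of card records (fields are illustrative).
open_defects = [
    {"id": "DEF-1", "priority": "P1", "status": "In Progress"},
    {"id": "DEF-2", "priority": "P3", "status": "To Do"},
    {"id": "DEF-3", "priority": "P3", "status": "In Review"},
    {"id": "DEF-4", "priority": "P2", "status": "Done"},  # closed, excluded
]

counts = Counter(d["priority"] for d in open_defects if d["status"] != "Done")
print(dict(sorted(counts.items())))  # {'P1': 1, 'P3': 2}
```

Snapshotting this count daily or weekly produces exactly the trend lines the section describes.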

Monitoring Average Time to Resolution

Cycle time by priority provides process optimization insights – fluctuations may reveal capability gaps. Long resolution times may necessitate policy or staffing changes, or technical improvements. Stratifying average resolution time by type and owner similarly highlights capability-building opportunities and architectural bottleneck areas.
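Average resolution time stratified by priority reduces to a simple aggregation over opened/closed dates. The dates below are invented sample data for the sketch.

```python
from datetime import date
from statistics import mean

# Sketch: average resolution time in days, stratified by priority.
# Sample records are invented for illustration.
resolved = [
    {"priority": "P1", "opened": date(2023, 6, 1), "closed": date(2023, 6, 2)},
    {"priority": "P2", "opened": date(2023, 6, 1), "closed": date(2023, 6, 8)},
    {"priority": "P2", "opened": date(2023, 6, 3), "closed": date(2023, 6, 6)},
]

def avg_resolution_days(defects, priority):
    days = [(d["closed"] - d["opened"]).days
            for d in defects if d["priority"] == priority]
    return mean(days) if days else None

print(avg_resolution_days(resolved, "P2"))  # 5
```

Swapping the `priority` filter for a defect type or owner field gives the other stratifications the section mentions.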

Identifying Common Sources/Root Causes

Pareto categorization of defect origins informs targeted resolutions – roughly 20% of causes lead to 80% of issues. For example, unclear requirements may disproportionately lead to a high number of defects. Drill-down analysis then suggests courses of action such as tighter specifications or increased collaboration touchpoints.
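A Pareto breakdown can be computed by ranking cause categories and accumulating counts until the chosen threshold is reached. The category names and counts below are invented sample data.

```python
from collections import Counter

# Sketch of a Pareto breakdown: rank root-cause categories and keep the
# "vital few" that account for ~80% of defects. Counts are invented.
cause_counts = Counter({
    "unclear requirements": 40,
    "missing tests": 25,
    "integration errors": 20,
    "config drift": 10,
    "other": 5,
})

def pareto_cutoff(counts, threshold=0.80):
    total = sum(counts.values())
    running, vital_few = 0, []
    for cause, n in counts.most_common():
        running += n
        vital_few.append(cause)
        if running / total >= threshold:
            break
    return vital_few

print(pareto_cutoff(cause_counts))
# ['unclear requirements', 'missing tests', 'integration errors']
```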

Example Graph: Defects Opened/Fixed Over Time

A sample monthly trend report could track simple opened vs. fixed rates to spot backlog issues, as well as break the data down to see whether lower-priority defects languish despite process changes. The data below indicates a growing gap that requires attention to capacity or staffing.

Month – Defects Opened – Defects Fixed
June – 82 – 32
July – 132 – 51
Aug – 88 – 35
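The growing gap in the monthly figures becomes explicit when the cumulative backlog is computed from the opened/fixed rates:

```python
# Sketch: cumulative backlog gap derived from the monthly opened/fixed data.
monthly = [("June", 82, 32), ("July", 132, 51), ("Aug", 88, 35)]

def cumulative_backlog(rows):
    backlog, trend = 0, []
    for month, opened, fixed in rows:
        backlog += opened - fixed  # net new defects carried forward
        trend.append((month, backlog))
    return trend

print(cumulative_backlog(monthly))
# [('June', 50), ('July', 131), ('Aug', 184)]
```

The steadily increasing backlog (50 → 131 → 184) is the signal that remediation capacity is not keeping pace with new defects.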
