Refining Task Breakdowns To Improve Estimates For Unfamiliar Work

Understanding the Problem: Inaccuracies When Estimating Unfamiliar Tasks

Estimating the effort required to complete an unfamiliar task is inherently challenging. Without previous experience as a baseline, estimators often struggle to account for all components that factor into the total workload. The result is inaccurate estimates that skew optimistic because steps are overlooked or the difficulty of portions of the work is misjudged.

When estimates are too low, projects risk being under-resourced, requiring staff to work longer hours to complete the work in the allotted timeframe. Quality can also suffer when teams feel pressured to deliver without adequate time. Customers may be left unhappy if deadlines are missed or products fail to meet specifications.

To improve estimate accuracy for unfamiliar tasks, the work must be broken down into detailed components that can be sized based on historical data. Where unknowns remain, research is required to transform them into known entities. Building buffers into estimates can account for residual uncertainty, while ranges provide flexibility versus point estimates.

Breaking Down Tasks into Granular Components

The first step towards better estimates is a work breakdown structure that decomposes deliverables into hierarchies of granular subtasks. High-level activities must be divided into small, discrete pieces of effort that can be estimated based on past experience.

For example, an unfamiliar feature like “Implement predictive text engine” remains hard to size directly. Breaking it down into components like “profile existing algorithms”, “develop candidate generation rules”, “create language model”, and “optimize with machine learning” creates chunks simple enough to estimate.

Ideally, checklists and templates can guide the breakdown process so components are not overlooked. Subject matter experts can validate completeness of the structure.

When estimating the overall work, summing detailed subtask estimates from the bottom up improves accuracy over guessing effort for high-level tasks.
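As a minimal illustration, assuming each subtask already carries a rough effort figure in person-days (the names and numbers below are invented for the predictive text example), the bottom-up estimate is just a sum over the breakdown:

# Minimal sketch: bottom-up estimation by summing subtask estimates.
# Subtask names and effort figures are illustrative only.
subtasks = {
    "profile existing algorithms": 5,        # person-days
    "develop candidate generation rules": 12,
    "create language model": 8,
    "optimize with machine learning": 10,
}

feature_estimate = sum(subtasks.values())
print(f"Bottom-up estimate: {feature_estimate} person-days")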

Categorizing Components by Knowns vs Unknowns

With work decomposed into subtasks, the next step is categorizing each piece as a “known” or “unknown” element. Knowns represent activities similar enough to past work to leverage historical data for sizing.

In our predictive text example, steps like “profile existing algorithms” or “optimize with machine learning” may match prior analytics and machine learning projects, enabling statistics like velocity or hourly burn rates to inform estimates.

Unknown portions are those unfamiliar enough that previous estimate data does not apply. For instance, if our team lacked experience generating linguistic rules, the subtask “Develop candidate generation rules” would be an unknown.

Understanding known knowns, known unknowns, and unknown unknowns (unk-unks) enables estimating knowns from data, researching unknowns, and reserving buffers for unk-unks that emerge later.
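A simple sketch of that triage, assuming the team keeps a list of activity types it has historical data for (the topics and matching rule below are hypothetical placeholders, not a real classifier):

# Tag each subtask as "known" or "unknown" based on whether comparable
# historical records exist. History set and topic labels are illustrative.
history = {"algorithm profiling", "ml optimization", "language modeling"}

subtask_topics = {
    "profile existing algorithms": "algorithm profiling",
    "develop candidate generation rules": "linguistic rule generation",
    "create language model": "language modeling",
    "optimize with machine learning": "ml optimization",
}

classified = {
    name: ("known" if topic in history else "unknown")
    for name, topic in subtask_topics.items()
}
print(classified)
# "develop candidate generation rules" comes back "unknown" -> needs research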

Estimating Knowns Based on Past Experience

For subtasks identified as known activities, historical estimate data can provide a solid baseline for forecasting effort and cost. Over time, tracking actuals against initial estimates also allows calculating metrics like average variance percentage, which guide setting contingency buffers.

Ideally, benchmarks distinguish between different activity types, system components impacted, skills required, and so forth. The more granular the historical data, the more precisely it can inform estimates for new efforts based on key parameters.

For our predictive text example, past completion rates for comparable analytics algorithms provide good estimates for the subtask “profile existing algorithms”. Machine learning optimization velocities inform baseline estimates for later tuning efforts as well.

Estimate data may reside in organizational repositories, resource management systems, or project tracking tools. If unavailable, notebooks tracking individual teams’ velocities can provide initial benchmarks.
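As a small sketch of data-driven sizing, assuming a handful of past actuals for a comparable activity type are available (the figures below are invented), the baseline and its historical spread fall out of basic statistics:

# Derive a baseline estimate and spread from past actuals for a comparable
# activity type. The sample data is made up for illustration.
from statistics import mean, pstdev

past_actuals_days = [4.0, 6.5, 5.0, 5.5, 4.5]   # prior "algorithm profiling" tasks

baseline = mean(past_actuals_days)
spread = pstdev(past_actuals_days)

print(f"Baseline: {baseline:.1f} days, historical spread: +/- {spread:.1f} days")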

Researching Unknowns to Make Them Known

For subtasks identified as unknowns, guesses about level of effort lack reliable grounding. To improve estimate accuracy, unknowns must be researched until enough information emerges that allows credible forecasting.

This reconnaissance may involve breaking down subtasks further until portions align with past experience. Subject matter experts can also advise based on precedents. Reference checks with colleagues who have completed similar work provide another perspective.

External research introduces relevant data as well. In our example, reading research papers about linguistic rule generation may reveal insights about algorithms and models that improve estimates for that subcomponent.

Other techniques like creating prototypes, conducting spikes, or enumerating scenarios help transform unknown activities into known work items. These upfront investments mitigate surprises down the road.

Building Time Buffers Into Estimates

Despite breaking down and researching subtasks, some uncertainty around estimates inevitably remains. Unknown unknowns also lurk, waiting to emerge during later project stages.

To account for these risks, building schedule and resource buffers into estimates at both the subtask and overall project levels helps ensure adequate protection. Conservative commitments also leave flexibility for emerging priorities.

Typical buffers range from 15-30% beyond base estimates, depending on variability risk factors like system complexity, organizational maturity, resource skill levels, and so forth. Too little buffer and teams remain vulnerable to variability exceeding reserves. Too much pads estimates unnecessarily.
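One way to make that trade-off explicit, purely as a sketch, is to scale the buffer between the 15% floor and 30% ceiling using a few scored risk factors; the factors and weights below are illustrative, not a published formula:

# Pick a buffer between 15% and 30% of the base estimate by scaling with
# simple risk factors, each scored 0.0 (low risk) to 1.0 (high risk).
def buffered_estimate(base_days, complexity, maturity_gap, skill_gap):
    risk = (complexity + maturity_gap + skill_gap) / 3
    buffer_pct = 0.15 + 0.15 * risk          # 15% floor, 30% ceiling
    return base_days * (1 + buffer_pct)

print(buffered_estimate(40, complexity=0.8, maturity_gap=0.5, skill_gap=0.3))
# ~49.2 days, i.e. roughly a 23% buffer on a 40-day base estimate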

By tracking actuals against estimates and specifically documenting when and why buffers get used, organizations can optimize buffer sizing over time. Such data also fuels estimate improvement initiatives.

Updating Estimates As Knowledge Increases

Initial estimates represent best-case guesses, even for researched subcomponents. But as work begins and knowledge grows, assumptions get tested and replaced with real data.

To leverage this new information, estimators should plan to revisit and update estimates periodically. Checkpoints might align with iteration boundaries, phase transitions, or set intervals like bi-weekly.

Updating estimates requires comparing completed work against expectations, validating remaining subtasks, and adjusting forecasts to align with emerging trends. New unk-unks surfaced get evaluated and factored in as well.

If velocity or productivity thus far falls short of baseline assumptions, future phases get adjusted to reflect the differential. Surfaced unknowns also increase remaining workload unless offset by newly completed subtasks.
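A minimal re-forecasting sketch, assuming iteration velocity is the tracked metric (all numbers invented), scales the remaining work by observed rather than planned throughput:

# Re-forecast remaining work using observed velocity instead of the
# velocity assumed at kickoff. Figures are illustrative.
planned_velocity = 20      # story points per iteration, assumed at kickoff
observed_velocity = 16     # actual average over completed iterations
remaining_points = 120

original_iterations = remaining_points / planned_velocity
updated_iterations = remaining_points / observed_velocity

print(f"Original forecast: {original_iterations:.1f} iterations")
print(f"Updated forecast:  {updated_iterations:.1f} iterations")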

By continuously updating estimates with the latest data, accuracy improves. Such evidence-based adjustments also aid buy-in from stakeholders who balk at surprises and perceived overruns.

Providing Estimates As Ranges Rather Than Absolute Values

Since many underlying assumptions prove untrue during execution, precise estimates mislead stakeholders about the level of variability inherent in initial forecasts. Presenting a prediction like “7 months” falsely signals high confidence when real completion times may range from 5-9 months.

An antidote involves expressing estimates as value ranges based on variability factors like historical variance, technical uncertainty, known unknowns and unk-unks.

In the predictive text example, “Develop candidate generation rules” could be sized as 3-6 weeks rather than 4 based on research. Such ranges signal to customers that actuals may miss point estimates. Still, by updating estimates as work progresses, the spread should decrease.
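One hedged way to derive such a range, assuming the team tracks how far past actuals have landed below and above estimates (the 25% and 50% figures below are illustrative, chosen to reproduce the 3-6 week spread, and asymmetric because overruns usually exceed underruns):

# Turn a point estimate into a range using historical relative error.
point_estimate_weeks = 4
underrun = 0.25    # best past outcome: ~25% under estimate
overrun = 0.50     # worst past outcome: ~50% over estimate

low = point_estimate_weeks * (1 - underrun)
high = point_estimate_weeks * (1 + overrun)
print(f"Estimate range: {low:.0f}-{high:.0f} weeks")   # 3-6 weeks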

Anchoring estimates to empirical, data-driven ranges also combats tendencies to pad estimates across the board or prematurely narrow ranges despite delivery risks. Such dysfunction undermines organizational improvement plans.

Setting Expectations on Accuracy Upfront

Legacy cultures and contracts often penalize teams that exceed initial estimate targets, driving dysfunction like padding estimates or declining to call out new discoveries mid-flight.

But when scoped properly upfront, stakeholders understand that estimates entail a margin of error that narrows over time rather than fixed, immutable commitments. Such alignment reduces politics around updating projections to reflect new learnings.

For example, framing predictions as statistically driven ranges based on past relative error at various stage gates sets realistic expectations. Any agreement should account for uncertainty, not just project initial range midpoints.
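As a hypothetical illustration of that framing, assuming relative error narrows at each stage gate (the error percentages below are placeholders, not organizational data), the quoted range tightens as the project advances:

# Show how the quoted range around a midpoint narrows at each stage gate.
stage_gate_error = {
    "concept": 0.50,
    "requirements": 0.30,
    "design complete": 0.20,
    "mid-construction": 0.10,
}

midpoint_months = 7
for gate, err in stage_gate_error.items():
    low, high = midpoint_months * (1 - err), midpoint_months * (1 + err)
    print(f"{gate:>16}: {low:.1f}-{high:.1f} months")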

Enlightened governance models accept some variability as normal rather than reflecting flawed processes or teams. Exploring alternate collaboration models minimizes conflicts of interest around evolving projections.

Example Code for Task Breakdown and Estimation

Automating aspects of the task breakdown and estimating process improves consistency while reducing manual overhead. Tools can also incorporate historical data more easily.

For example, Python code could ingest a new product feature, decompose it into subtasks leveraging an ontology, classify them based on similarity to past work, query historical metrics to inform estimates for known elements, and output project plans including ranges based on variability buffers:

import project as p   # hypothetical estimation helper module, as described above

feature = "Predictive text engine"

# Decompose the feature into granular subtasks using the ontology.
subtasks = p.decompose_work(feature)

for subtask in subtasks:
    # Classify each subtask based on similarity to past work.
    classification = p.assess_precedent(subtask)
    if classification == "Known":
        # Query historical metrics and wrap them in variability buffers.
        metric = p.check_database(subtask)
        lower_bound, upper_bound = p.add_buffers(metric)
        subtask.estimate = [lower_bound, upper_bound]
    else:
        # Unknowns get a placeholder estimate pending research.
        subtask.estimate = p.compute_placeholder(subtask)

# Roll the sized subtasks back up into an overall plan with estimate ranges.
plan = p.recompose_work(subtasks)
print(plan.output_plan())

Automated breakdowns get analysts closer to complete estimates more quickly. Advanced systems also update projections in response to tracking data, taking humans out of the estimate maintenance loop.
