In a distressed MtO project, the feature burndown shows whether the team is closing scope gaps. But there is a second dimension the burndown does not capture: product quality. Features can be declared “done” while carrying unresolved defects that compound across releases and eventually lead to ugly customer escalations—or worse, field failures. The result is shipping a broken product—not because the team hasn’t worked hard on new features, but because product quality wasn’t managed early on.
An important visual metric to assess the trend in the MtO project is the defect curve—the quality counterpart to the feature burndown. Together, they form a complete picture of project health: one tracks what gets delivered, the other tracks how well it was built.
No Specification, No Defect
The crucial precondition for managing defects effectively is identifying the failed requirement. A defect can only be defined against a specification. This is not a technicality—it is the legal and technical foundation of defect management, and it sometimes gets overlooked in the hectic pace of an MtO delivery.
A defect is a deviation from specified, agreed, or contractually mandated behavior—a “requirements baseline.” Without a specification—a document that stipulates what the system is supposed to do—there is no objective basis for calling anything a defect. An engineer who believes something is broken and an engineer who believes it is working as intended will frequently argue indefinitely, because neither has a reference point.
Some projects fail to see the value of this crucial distinction, which is why proper coaching is so decisive in project management—and why specification quality is a prerequisite for meaningful defect tracking. In MtO projects, the specification spans the entire V-model: from system-level behavioral specs, through architecture and design decisions, down to software-level interface and module specifications. The granularity and formality of traceability between V-stages vary by project complexity and customer requirements, but at every level the specification serves as the baseline: no specification, no defect.
The practical implication is that, in a turnaround project, one of the first diagnostic questions is whether specifications actually exist at the level of granularity needed to drive defect assessment. If they do not, defect tracking is noise—and the first fix is not in the defect tracker; it is in the specification.
Minimum Viable Traceability
Traceability is one of the most over-engineered topics in MtO project management. Teams frequently spend months building elaborate trace matrices across hundreds of artifacts (sometimes driven by assessors or redundant quality representatives)—only to produce something nobody reads or maintains. That is not traceability. That is compliance theater.
The goal of traceability, in practice, is not completeness for its own sake. It is the ability to answer one question when a defect surfaces: what was specified, how was it tested, and what did the test reveal? Everything beyond that is fundamentally optional.
For defect management specifically, the minimum viable traceability chain has four links:
- Specification → Test Case. Every test case must trace back to the specification element it is verifying. This is the foundational link. Without it, there is no way to determine whether a failing test reflects a real deviation from a requirement or an error in the test itself. It also answers the question that arises in every Quality Triage: “Is this a defect against the spec, or did the test case misinterpret the spec?” The trace makes the distinction possible.
- Test Case → Test Run(s). A test case, as an isolated document, says little about product quality. A test run is an instance of that test case executed at a specific point in time, against a specific build, with a specific result. One test case typically produces multiple test runs across builds, releases, and configurations. The trace from test case to test run enables the team to distinguish between a defect that appeared in build 1.3.2 and was resolved in 1.4.0 versus one that has been consistently failing for six builds.
- Test Run → Test Run Data. Each test run must record its inputs, configuration, build identifier, execution environment, and result (pass, fail, or blocked). That helps ensure that the issue is not a random—perhaps test-environment-induced—aberration. Without a cleanly recorded test trace, finding the root cause can prove impossible. The data must be captured at the moment of execution, automatically where possible.
- Defect → Test Case → Specification. When a defect is logged, it must trace back to the test case that revealed it, and through that test case, back to the specification element that defines the expected behavior. This chain makes the Quality Triage efficient.
This four-link chain is not optional in a non-trivial MtO project.
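The four-link chain can be sketched as a minimal data model. This is an illustrative sketch only, not the schema of any real tool; all class names, field names, and the example spec/test identifiers are assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class TestRun:
    """Link 3: one execution of a test case, with its run data."""
    build: str          # build identifier the test ran against
    environment: str    # execution environment
    result: str         # "pass", "fail", or "blocked"


@dataclass
class TestCase:
    """Links 1 and 2: traces to a spec element and owns its runs."""
    test_id: str
    spec_element: str                         # specification element being verified
    runs: list = field(default_factory=list)  # TestRun instances over time


@dataclass
class Defect:
    """Link 4: a defect traces back to the test case that revealed it."""
    defect_id: str
    revealed_by: TestCase

    def specified_behavior(self) -> str:
        # Answers the triage question: which spec element was violated?
        return self.revealed_by.spec_element


# Usage: walk the chain from a defect back to the specification.
# Identifiers below are hypothetical examples.
tc = TestCase("TC-104", spec_element="SRS-42: CAN frame latency < 5 ms")
tc.runs.append(TestRun(build="1.3.2", environment="HIL rig A", result="fail"))
tc.runs.append(TestRun(build="1.4.0", environment="HIL rig A", result="pass"))
d = Defect("DEF-881", revealed_by=tc)
print(d.specified_behavior())
```

The run history also answers the build question from the second link: the example defect failed in 1.3.2 and passed in 1.4.0.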
The Distinction between Feature, Defect, and Change Request
Once specifications exist, the team must agree on what a defect actually is—and what it is not. Often, the issue categories are confused, and the confusion is expensive.
A feature is a defined unit of customer-relevant scope. It exists because the customer or an applicable standard demands it. It is planned, owned, sized, and sequenced into a release. A feature is a delivery commitment.
A defect is a deviation from the specification, which, in turn, defines the project scope. The implementation does not match the specified behavior or a specified quality attribute, such as timing or data transfer rate. A defect is not a missing feature. It is a broken promise on something that was already agreed upon.
A change request is a request to modify the specification—to add, modify, or remove a scope-relevant product attribute. It comes from the customer, from a regulatory update, or from a technical decision that invalidates a prior agreement (requirements baseline). A change request is not a defect. It is a new or modified scope that must be assessed, negotiated, and planned like any other feature.
Misclassifying a change request as a defect inflates the defect backlog and obscures real scope changes.
The rule is simple: defects go in the defect tracker. Change requests go through the scope change process. These are different workflows, different owners, different planning implications. Both can still be tracked in one tool, as long as the issue types are clearly defined and the workflows and responsibilities stay separate.
Frequent Symptoms in Troubled Projects
Most distressed projects log bugs in Jira, a spreadsheet, or a makeshift team-specific board. The data is not systematically used to improve product quality.
The symptoms are familiar:
Defect inflation. A single issue spawns multiple duplicates logged by different engineers across disciplines. The backlog balloons, but nobody can tell how many real problems exist.
Defect hiding. Critical issues are quietly downgraded before milestone reviews or quickly fixed without logging them at all.
No closure discipline. Defects are opened enthusiastically and closed reluctantly—or not at all. Nobody feels responsible for driving issues to resolution, because nobody owns them.
No trend analysis. Management asks: “How many open bugs do we have?” The answer is a number. That number, in isolation, is meaningless. What matters is the trend—and that requires a tool capable of generating it.
A Word on Tooling: Spreadsheets Are Not an Option
In a non-trivial MtO project—anything with more than a handful of engineers, multiple suppliers, and a formal release cycle—managing defects in a spreadsheet or a makeshift Kanban board is a path to disaster. It is not a question of preference; it is a structural problem.
Spreadsheets do not enforce ownership. They do not generate trends automatically. They cannot link defects to features, to specifications, or to release plans. They break under concurrent edits. They have no audit trail. And they require manual effort to produce any report, which means reports are produced infrequently, and always too late.
The defect curve requires daily data. Daily data requires a proper issue management system. The tool must support automated status tracking, configurable severity schemas, traceability to features and specifications, and report generation without human intervention.
The investment is not large. The cost of not having it—in lost data, opaque status, and undetected trends—is enormous.
What Is the Defect Curve?
The defect curve is not a single data point; rather, it is a set of three tracked metrics, plotted over time—typically daily or weekly (“time unit”), aligned to release cycles:
- New defects discovered (inflow) — how many new issues are found per time unit?
- Defects closed (outflow) — how many are resolved and verified per time unit?
- Open defect backlog (net) — how many valid, unresolved defects exist right now?
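The three metrics follow mechanically from a defect log that records when each defect was opened and, if applicable, closed. A minimal sketch, assuming weekly time units and a log of (opened_week, closed_week) pairs:

```python
def defect_curve(defects, weeks):
    """Compute inflow, outflow, and open backlog per week.

    defects: list of (opened_week, closed_week_or_None) tuples.
    weeks:   iterable of week numbers to report on.
    """
    curve = []
    for w in weeks:
        inflow = sum(1 for o, c in defects if o == w)
        outflow = sum(1 for o, c in defects if c == w)
        # Backlog: opened on or before w, not yet closed by end of w.
        backlog = sum(1 for o, c in defects if o <= w and (c is None or c > w))
        curve.append({"week": w, "new": inflow, "closed": outflow, "open": backlog})
    return curve


# Usage: three defects across a four-week window; one is still open.
log = [(1, 2), (1, None), (2, 4)]
for row in defect_curve(log, range(1, 5)):
    print(row)
```

In a real project these rows would be generated daily from the issue management system, not computed by hand.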
These three metrics tell a story that no status meeting can replicate. In a healthy release cycle, the curve follows a predictable shape: discovery peaks early in integration, closure accelerates behind it, and the open backlog rises briefly, then falls as the release stabilizes. In a distressed project, the story is different: discovery keeps rising, closure is flat, and the backlog compounds week over week. The wave never breaks. Sometimes, no defects are discovered at all, and then—often just at the time of customer delivery—the product falls apart, and everyone acts surprised.
That discrepancy is the earliest warning signal of a quality crisis. If you are watching the curve, you see it in time to act. If you are not, you discover it at system integration, when it is too late.
Figure 1 (chart placeholder): Defect curve — new defects/week (coral), closed/week (teal, dashed), and open backlog (blue) across a 12-week release cycle, with phase bands for Construction, Integration, Stabilization, and Release/SOP; the red dashed line marks the SOP acceptance threshold.
Why Trends Matter — More Than the Count
The defect count at any given moment is a snapshot that says nothing about the health of the product release. Trends, on the other hand, provide the context for assessing the direction the release is taking. Trends matter for three reasons:
Visibility. The customer often expects to see the defect curve—not as a courtesy, but as a control mechanism. In the final phase before SOP (Start of Production), most customers will insist on it. A team that can produce a credible, data-backed defect trend curve earns trust.
Risk management. A widening gap between discovery and closure is a risk indicator. It tells you, weeks in advance, that the release timeline is at risk. That is early enough to act: add resources, cut scope, adjust the release date. Detected at the milestone review, the same information arrives too late for anything other than damage control.
Resource demand. A rising defect backlog indicates insufficient team velocity. The closure rate is not keeping pace with the discovery rate. This is a concrete, measurable signal that either more people are needed, or the scope of “done” needs to be restructured.
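The widening-gap signal above can be checked mechanically from the weekly inflow and outflow series. A sketch, with the window length as an assumption:

```python
def gap_widening(inflow, outflow, window=3):
    """Warn if the gap between discovery and closure has grown in each
    of the last `window` weeks: the earliest sign that the backlog
    will compound and the release timeline is at risk."""
    gaps = [i - o for i, o in zip(inflow, outflow)]
    recent = gaps[-(window + 1):]
    return all(b > a for a, b in zip(recent, recent[1:]))


# Usage: discovery accelerating while closure stays flat fires the warning;
# discovery falling below closure does not.
print(gap_widening(inflow=[4, 6, 9, 13], outflow=[4, 5, 5, 5]))  # True
print(gap_widening(inflow=[9, 7, 5, 4], outflow=[6, 7, 8, 8]))   # False
```

The point is that the check runs on data already in the tracker, weeks before a milestone review would surface the same problem.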
Severity Classification
The defect curve only works if defects are classified honestly and consistently. A minimum severity model for MtO projects has four levels:
- Critical (Blocker): Safety, security, or compliance-relevant function fails. Release is blocked until resolved or explicitly accepted with documented rationale and customer agreement.
- Major: Significant functional degradation. Customer-visible and reproducible. Must be resolved or formally accepted before release.
- Minor: Limited impact. Accepted with rationale and logged in the release notes. Planned for a future release.
- Cosmetic / Observation: No functional impact. Tracked but not included in the release curve.
Severity is assigned by the VVT engineer (Verification, Validation, and Test) who discovers the defect during testing. That initial rating is then reviewed—and, if necessary, overruled—in the Quality Triage. Developer self-classification is a conflict of interest. The person who wrote the code is not the right person to assess its severity.
For this review meeting, I suggest a constructive name: Quality Triage. The name reflects what it actually is: a fast, focused daily or near-daily review, attended by relevant feature owners and the Project Lead when needed.
The Quality Triage answers three questions for each defect:
- Severity confirmed? Does the initial VVT engineer rating hold under technical scrutiny?
- Impact assessed? Which feature and specification are affected? Which release? Which customer-visible behavior?
- Owner assigned and release planned? Who owns the fix, and in which release does it land?
That third question is where the Triage connects directly to release planning. A defect that cannot be fixed in the current release gets planned into the next one. It becomes a work item in the future release scope, with an owner and a target date.
Defects without a planned release are a wasteful dead end — deferred until no one remembers their original context.
Preventing Duplicate Defects
Duplicate defects are one of the most persistent sources of waste in any large defect backlog. Two engineers encounter the same failure in different test contexts, log it separately, and the triage team spends time debating two entries that describe the same root cause. In a project with hundreds of open defects and multiple suppliers logging independently, duplicate rates of 20–30% are not unusual.
This is a problem LLMs are well-suited to solve—and in 2026, there is no good reason not to use them for it.
Before it reaches the Quality Triage, a new defect is automatically screened against the existing open backlog using an LLM-assisted deduplication step. The model compares the new defect’s description, affected component, failure mode, and reproduction steps against the open issues and returns a ranked list of likely duplicates, with a confidence score. The VVT engineer reviews the candidates in seconds. If a true duplicate is found, the new entry is linked and closed immediately.
Beyond deduplication, LLMs can also assist the triage process itself: pre-assessing likely severity based on the failure description and specification context, suggesting a probable owner based on component and historical patterns, and flagging whether the defect description is sufficient. This does not replace the Quality Triage, but it significantly compresses preparation time.
Every Defect Has an Owner
Ownerless defects are backlog theater — they exist in the tracker, surface in meetings, and never get resolved.
Ownership is assigned in the Quality Triage, no later than 24 hours after the defect is logged. The owner is responsible for the defect from that point until closure. The owner drives the resolution (not necessarily personally).
The daily Sync applies the same ownership logic to defects as to features: “This critical defect has not moved in three days. What is the actual blocker?”
Definition of Done for Defects
A feature can be declared done while carrying open defects. This is not a contradiction—it is a deliberate and documented quality decision.
The rule is: a feature is done when all open defects against it are rated Minor or Cosmetic, and those defects are formally logged, owned, and planned for a future release. A feature with open Critical or Major defects is not “done.”
The VVT Lead, together with the Project Lead, makes the release recommendation on this basis, not on a zero count. The test report and release notes are the formal record: every unresolved defect that ships with the release is listed, classified, and owned.
The customer sees this document. That is not a risk — it is professionalism.
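The Definition of Done rule translates into a simple, mechanical check. A sketch, assuming each open defect against the feature carries one of the four severity levels plus an owner and a planned release:

```python
# Severity levels from the minimum model; the two blocking classes
# must be resolved or formally accepted before the feature is "done".
BLOCKING = {"critical", "major"}
DEFERRABLE = {"minor", "cosmetic"}   # may ship if logged, owned, and planned


def feature_done(open_defects):
    """A feature is done only if no open defect is Critical or Major,
    and every deferrable defect is owned and planned for a release."""
    for d in open_defects:
        if d["severity"] in BLOCKING:
            return False
        if d["owner"] is None or d["planned_release"] is None:
            return False  # deferrable, but not yet owned and planned
    return True


# Usage: one owned, planned Minor defect -> done; an unowned Major -> not done.
defects = [{"severity": "minor", "owner": "A. Engineer", "planned_release": "R5"}]
print(feature_done(defects))   # True
defects.append({"severity": "major", "owner": None, "planned_release": None})
print(feature_done(defects))   # False
```

The same check, aggregated over all features, is what the VVT Lead's release recommendation rests on.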
Escalations
Serious customer complaints, by definition, are escalations. Escalations must be treated with urgency and transparency. Routing a customer escalation through standard backlog processes is a trust-destroying mistake. The customer who escalates is already frustrated. Making them wait for the next triage cycle makes it worse.
The practical response is structural: plan a buffer of resources for escalations. In every release cycle, reserve a fixed percentage of available engineering bandwidth—held back from planned feature work—for escalation response.
The Anatomy of a Release Quality Cycle
Every MtO release has a natural quality lifecycle. Understanding it prevents the most common misreadings of the defect curve.
Phase 1 — Construction: Features are being implemented. Unit tests run. Defect discovery is low, not because quality is high, but because systematic integration testing has not yet begun. A suspiciously flat discovery curve in this phase is not reassuring; it signals that testing is not aggressive enough.
Phase 2 — Integration: Subsystems connect. Integration tests run. Discovery accelerates sharply. This is expected. A rising defect count during integration is the system doing its job. The critical question is whether the closure rate is keeping pace.
Phase 3 — Stabilization: New discovery slows. Closure dominates. The open backlog falls. The Quality Triage shifts from assessment-heavy to closure-heavy. Remaining defects are classified and owned, and either resolved in this release or explicitly planned for the next.
Phase 4 — Release: Open Critical defects: resolved or formally accepted. Open Major defects: resolved or planned. All defects documented in the test report and release notes. The product ships with a known quality state — not a hoped-for one.
The defect curve makes each phase visible and the transitions legible. If Phase 3 never starts — if discovery keeps rising with no closure acceleration — that is data. It tells you the product is not ready, regardless of the schedule.
Predicting the Defect Curve
One of the most important things to understand about the defect curve is that it looks very different depending on where you are in the product’s release lifecycle — and that the shape is predictable.
Early releases tend to be quiet. The scope is limited. Test coverage is growing but not yet comprehensive. Defect counts are low. This is normal. A low count in early releases is a function of coverage, not quality.
Middle releases are where defect volumes ramp up. Features are delivered in larger batches. Integration testing reveals cross-feature interactions that unit tests missed. The discovery curve steepens.
The final release before SOP is where the curve peaks. Every feature that has been deferred, every integration edge case that was “noted for later,” every customer complaint from field testing: all of it converges here. This is the defect “storm,” and it must be planned for. It is not a surprise. It is a structural feature of the MtO project lifecycle, and teams that are unprepared for it get destroyed by it.
There are several approaches to planning the “storm” phase. I will mention two of the most frequently used in my practice.
Tiger teams. A dedicated group of the project’s most experienced engineers. They are absolute insiders who know the product in depth. This team is assembled to attack the Critical and Major backlog head-on. This approach works best for systemic or deeply rooted defect clusters that require expert knowledge to resolve quickly.
Feature owner-driven resolution. For feature-specific, well-understood defects, the feature owner drives resolution directly with their development team. This is the default path. The feature owner who delivered the feature is responsible for its defect closure, with the same urgency and ownership logic as the original delivery.
Both approaches require deliberate capacity planning — without it, the customer may pressure the team into relying on weakly qualified “best-cost” resources.
The Customer Is Watching
In the late project phase, most customers will not ask for a defect count. They will ask for the defect curve. They will often impose, as part of the contract, a limit on the number of open defects. The product cannot proceed to SOP unless the open defect count for each severity class is below a defined threshold.
That may be perceived as annoying, but it is a healthy expectation. A customer who tracks the defect curve is engaged in the product’s quality. Limiting the number of high-severity defects helps assess the project risk early.
This is also why the defect curve must be established early in the project. The customer needs history. A curve that only covers the last month of a two-year project proves nothing.
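The contractual SOP gate described above can be expressed as a per-severity threshold check. A sketch; the limit values below are illustrative, not taken from any real contract:

```python
def sop_gate(open_counts, thresholds):
    """Check open defect counts per severity class against SOP limits.

    Returns (passed, violations): violations lists every severity class
    whose open count exceeds the contractually agreed threshold.
    """
    violations = [
        sev for sev, limit in thresholds.items()
        if open_counts.get(sev, 0) > limit
    ]
    return (not violations), violations


# Illustrative contract limits: no Criticals, at most 3 Majors, 20 Minors.
limits = {"critical": 0, "major": 3, "minor": 20}
ok, bad = sop_gate({"critical": 1, "major": 2, "minor": 12}, limits)
print(ok, bad)   # a single open Critical blocks SOP
```

Run daily against the tracker, this check turns the contractual threshold from a milestone surprise into a continuously visible gap.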
The CORE SPICE Connection
The five project turnaround measures from the companion article (Feature-Based Project Tracking: How to Regain Control in Distressed MtO Projects) apply to defect management as well.
No task left behind. Every defect is an owned task. Unowned defects do not exist. Every open issue has a name and a planned release.
Maintain the sense of urgency. The Critical backlog is reviewed daily. A Critical defect that has not moved in 48 hours is a Sync conversation, not a footnote.
End-to-end responsibility. Feature owners own their feature’s defect state, even after implementation is complete. Defects against their feature are their problem until they are closed.
Radical transparency. The defect curve, the Quality Triage outcomes, and the release notes are visible to everyone. That includes the core team, suppliers, and customers. This is especially important in the SOP phase, when the customer is actively tracking the curve.
Automate everything. The defect curve must be generated automatically from the issue management system. That must happen daily, without manual effort. In a non-trivial project, any other approach is not just inefficient—it is a data integrity risk.
Putting It All Together
The feature burndown and the defect curve are the two instruments of a distressed project’s recovery dashboard.
- Feature burndown converging: delivery is on track.
- Defect curve converging: quality is on track.
- Defect backlog planned into future releases: nothing is lost, everything is actively managed.
- Escalation buffer in place: the customer relationship is protected.
- Defect curve shared with the customer: trust and team confidence grow.
The lifecycle of defect volume is predictable: quiet in early releases, rising through integration, peaking before SOP. Plan for that peak. Staff the tiger team. Protect the feature owner’s bandwidth. Set the customer’s expectations with data, not assurances.
In a distressed project, provided the sponsor actively supports the aforementioned measures, a turnaround is always possible.
If the Critical backlog is not falling—or not falling fast enough to meet the customer’s SOP threshold—quality is not under control. But if it is falling, with a known, documented, owned residual state, the product is under control. And everyone can see it.
References
- Feature-Based Project Tracking — The companion burndown article: projectcrunch.com/feature-based-project-tracking-how-to-regain-control-in-distressed-mto-projects/
- CORE SPICE Coaching Concept — The 12 CORE SPICE principles: projectcrunch.com/core-spice-coaching-concept/
- Car IT Reloaded — Disruption in the Car Industry. Springer Verlag, 2025. ISBN 3658476907.
I am a project manager (Project Management Professional, PMP), a Project Coach, a management consultant, and a book author. I have worked in the software industry since 1992 and as a management consultant since 1998. Please visit my United Mentors home page for more details. Contact me on LinkedIn for direct feedback on my articles.
