A developer pushes code to production. The ticket moves to "Done." Two days later, the bug reports start rolling in. QA never tested it. Documentation wasn't updated. The feature flag is still off for 90% of users.
Is it done? Well, the code merged. But is it done done?
This is the Definition of Done (DoD) problem: every team has one, but half the time it's either too vague ("all work complete") or so exhaustive nobody actually follows it.
The DoD isn't just a formality—it's your quality bar. When it's unclear, you get half-finished features, technical debt, and endless "is this ready to ship?" Slack threads. When it's too strict, you get developers checking boxes instead of thinking, and velocity grinds to a halt.
Here's how to build a DoD that your team will actually use.
## The Three Levels of Done
Most teams make one critical mistake: they try to create one Definition of Done for everything. But "done" means different things at different stages.
You need three:
### 1. Code Done (Developer)
What must be true before code review?
- Feature works locally
- Unit tests written and passing
- No linter errors
- Self-reviewed (yes, really—read your own diff first)
### 2. Story Done (Team)
What must be true before moving to "Ready for Deploy"?
- Code reviewed and approved
- Automated tests passing (unit + integration)
- Acceptance criteria met
- Tested in staging environment
- Edge cases handled (error states, loading states, empty states)
### 3. Production Done (Product)
What must be true before calling it shipped?
- Deployed to production
- Feature flag enabled (or rollout plan executed)
- Documentation updated
- Support team notified (if customer-facing)
- Monitoring/alerts configured
Most confusion happens because someone says "done" when they mean Code Done, but the PM hears Production Done. Be explicit about which level you're talking about.
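Production Done's "feature flag enabled (or rollout plan executed)" item usually means a staged percentage rollout. A minimal sketch of deterministic user bucketing, assuming a hash-based assignment (the function name and flag keys are illustrative, not from any particular flag service):

```python
import hashlib

def in_rollout(user_id: str, flag: str, percent: int) -> bool:
    """Deterministically bucket a user into a percentage rollout.

    Hashing (flag, user_id) together means a user stays in the
    rollout as the percentage ramps up, and different flags get
    independent buckets.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return bucket < percent

# A 100% rollout includes everyone; a 0% rollout includes no one.
print(in_rollout("user-42", "new-checkout", 100))  # True
print(in_rollout("user-42", "new-checkout", 0))    # False
```

Because the bucketing is deterministic, "rollout plan executed" becomes verifiable: the same user sees the same behavior on every request at a given percentage.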
## Real Example: The Checkbox That Caught Fire
I once worked with a team whose DoD included "Code reviewed by two engineers." Sounds reasonable, right?
Problem: On a 4-person dev team, this meant every PR needed half the team to stop work and review. Velocity tanked. Reviews became rubber stamps because nobody had time for deep analysis.
We changed it to: "Code reviewed by one engineer, plus automated test coverage >80%." Single reviewer for most PRs, second reviewer only for architecture changes or security-sensitive code.
Result: Review time dropped 40%, but quality didn't suffer because the tests caught what humans missed.
Lesson: Your DoD should enable shipping, not block it.
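A coverage gate like the one above is cheap to automate. coverage.py, for example, can emit a JSON report (`coverage json`) whose totals a CI step can check; a sketch of that check, with the threshold and report fragment as illustrative values:

```python
import json

def coverage_gate(report_json: str, threshold: float = 80.0) -> bool:
    """Return True if total line coverage meets the threshold.

    Expects the JSON report produced by coverage.py's `coverage json`;
    a CI job would fail the build when this returns False.
    """
    totals = json.loads(report_json)["totals"]
    return totals["percent_covered"] >= threshold

# Example fragment in coverage.py's JSON report shape:
report = '{"totals": {"percent_covered": 84.2}}'
print(coverage_gate(report))  # True: 84.2 >= 80
```

(Many test runners can do this in one flag, e.g. pytest-cov's `--cov-fail-under=80`; the point is that the gate runs without a human.)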
## What Doesn't Belong in a DoD
Some things teams put in their DoD that should live elsewhere:
- "Product Owner approves" → That's acceptance criteria, not DoD. The PO approves the story, not every commit.
- "No bugs" → Unrealistic. Change it to "No known critical bugs."
- "100% test coverage" → Arbitrary metric. Better: "All happy paths and critical error cases tested."
- "Stakeholders notified" → That's a release checklist item, not a per-story DoD.
Your DoD is about quality standards, not process steps. Keep it focused.
## The Async DoD Template
For remote teams, your DoD needs to be self-service. No "check with Sarah before deploying" steps. Here's a template that works across timezones:
## Story Done Checklist
**Code Quality:**
- [ ] Code reviewed and approved
- [ ] Automated tests passing (CI green)
- [ ] No linter/type errors
**Functionality:**
- [ ] Acceptance criteria met
- [ ] Tested in staging by developer
- [ ] Edge cases handled (errors, loading, empty states)
**Documentation:**
- [ ] Inline code comments for complex logic
- [ ] README updated (if public API changed)
- [ ] Changelog entry added (if customer-facing)
**Deploy Readiness:**
- [ ] Feature flag configured (or deployment plan documented)
- [ ] Monitoring/logs in place
- [ ] Rollback plan identified
Notice: No "ask someone" steps. Everything is verifiable by the developer or automated tools.
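Because every item is a markdown task-list checkbox, "verifiable by automated tools" can be literal: a CI step can fail a PR whose description still contains unchecked boxes. A sketch, assuming GitHub-style `- [ ]` task-list syntax (the CI wiring itself is left out):

```python
import re

def unchecked_items(markdown: str) -> list[str]:
    """Return the text of any unchecked '- [ ]' task-list items."""
    return re.findall(r"^\s*[-*] \[ \] (.+)$", markdown, flags=re.MULTILINE)

pr_description = """\
- [x] Code reviewed and approved
- [ ] Changelog entry added (if customer-facing)
"""
missing = unchecked_items(pr_description)
if missing:
    print("Story not Done. Missing:", missing)  # lists the changelog item
```

Wiring this into CI keeps the checklist honest without adding a human gatekeeper.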
## What to Avoid
- The "Aspirational" DoD: Your DoD says "all features have E2E tests," but nobody has time to write them. The result? Constant exceptions, and the DoD becomes a joke. Make your DoD reflect reality, or change reality to match it.
- The "One-Size-Fits-All" Trap: A 2-line hotfix and a 3-month feature shouldn't have the same DoD. Create a "Fast Track DoD" for critical bugs: code review + passing tests + deployed with monitoring. Skip the full ceremony.
- The "Unspoken" DoD: If your DoD exists only in the PM's head, it doesn't exist. Write it down, link it from your Jira/Linear templates, and review it quarterly.
## When Your DoD Is Working
You know your DoD is good when:
- Developers don't ask "is this ready to merge?"
- QA isn't finding obvious gaps (missing error handling, broken edge cases)
- Production incidents decrease over time
- Code review comments focus on architecture, not "you forgot tests again"
You know it's broken when:
- Every story needs an exception ("we'll add tests later")
- Bugs slip through because "I thought someone else checked that"
- Developers are checking boxes without thinking
## Takeaways
- Use three levels of "done": Code Done, Story Done, Production Done. Be explicit about which one you mean.
- Your DoD should enable shipping, not block it. If it slows velocity without improving quality, fix it.
- Avoid "ask someone" steps—make your DoD self-service for remote teams.
- Review your DoD quarterly. If you're granting constant exceptions, the DoD is wrong—not the team.
## Resources
- Atlassian: Definition of Done vs Definition of Ready
- Scrum.org: Walking Through a Definition of Done
- ThinkCloudly: DoD and Delivery Quality