What Are Entry and Exit Criteria?
Entry and exit criteria are essential checkpoints in the software testing lifecycle that define when a testing phase should begin and when it can be concluded. These criteria act as quality gates, ensuring that testing activities start only when prerequisites are met and conclude only when objectives are achieved.
Entry Criteria specify the conditions that must be satisfied before testing can commence. They ensure that the testing environment is ready and that the product is in a testable state.
Exit Criteria define the conditions that must be met to conclude a testing phase successfully. They ensure that testing objectives have been achieved and that the product meets quality standards.
Why Entry and Exit Criteria Matter
Benefits of Well-Defined Criteria
- Clear expectations: All stakeholders understand when testing starts and stops
- Resource optimization: Prevents wasted effort on premature or incomplete testing
- Risk mitigation: Identifies potential issues before they impact the project
- Objective decision-making: Removes subjectivity from testing progression
- Stakeholder confidence: Provides transparent, measurable quality indicators
Consequences of Missing Criteria
Without proper entry and exit criteria, projects often experience:
- Premature testing: Starting tests before the system is ready, leading to false failures
- Endless testing: Lack of clear completion signals, causing scope creep
- Scope confusion: Unclear testing boundaries resulting in missed defects
- Resource waste: Inefficient allocation of testing resources
- Quality uncertainty: No objective measure of when quality goals are met
Entry Criteria: Prerequisites for Testing
Common Entry Criteria Examples
| Category | Criteria |
|---|---|
| Environment | Test environment configured and accessible |
| | Test data prepared and loaded |
| | Required tools installed and licensed |
| Documentation | Test plan approved and baselined |
| | Requirements specifications finalized |
| | Test cases reviewed and approved |
| Product | Build deployed successfully |
| | Smoke tests passed |
| | Known blockers resolved |
| Resources | Test team allocated and available |
| | Required skills and training completed |
| | Access permissions granted |
Entry Criteria by Testing Level
Unit Testing Entry Criteria
✓ Code compiled without errors
✓ Code review completed
✓ Unit test framework configured
✓ Code coverage tool integrated
✓ Developer testing guidelines available
Integration Testing Entry Criteria
✓ Unit test execution completed with a minimum 80% pass rate
✓ Modules/components deployed to integration environment
✓ API documentation completed
✓ Integration test scenarios defined
✓ Mock services/stubs ready if needed (a minimal stub is sketched below)
✓ Database schemas synchronized
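When a downstream dependency is not yet available, a lightweight stub can satisfy the mock/stub criterion above. Here is a minimal stdlib-only sketch; the `/api/payments/status` route, port, and payload are hypothetical stand-ins for whatever interface your integration tests exercise.

```python
# Minimal HTTP stub for a downstream dependency (stdlib only).
# The /api/payments/status route and its payload are hypothetical examples.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/api/payments/status':
            body = json.dumps({'status': 'AVAILABLE'}).encode()
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == '__main__':
    # Integration tests point their base URL at http://localhost:8081
    HTTPServer(('localhost', 8081), StubHandler).serve_forever()
```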
System Testing Entry Criteria
✓ Integration testing completed with 95% pass rate
✓ System requirements traceability matrix available
✓ Complete build deployed to test environment
✓ Performance baseline established
✓ Security scan completed
✓ User acceptance test scenarios prepared
User Acceptance Testing (UAT) Entry Criteria
✓ System testing completed with open defects under 5% of executed test cases
✓ All critical and high-priority defects resolved
✓ UAT environment configured to mirror production
✓ Business users trained and available
✓ Acceptance test scripts finalized
✓ Sign-off from system testing obtained
Practical Example: Web Application Testing
Consider a web application release:
Entry Criteria for System Testing:
Build Quality
- Application successfully deployed to staging environment
- Smoke test suite executed with 100% pass rate
- All P1/P2 defects from the previous sprint resolved
Documentation
- Release notes published with feature descriptions
- API documentation updated for new endpoints
- Database migration scripts tested in pre-production
Test Readiness
- Regression test suite updated with new scenarios
- Test data refreshed from production sanitized dump
- Performance testing baseline captured
Team Readiness
- QA team completed feature walkthrough
- Access to logging and monitoring tools verified
- On-call support schedule confirmed
Exit Criteria: Completion Signals
Common Exit Criteria Examples
| Category | Criteria |
|---|---|
| Coverage | 100% of planned test cases executed |
| | 90%+ requirements covered by tests |
| | All critical business flows tested |
| Quality | 95%+ test pass rate achieved |
| | No open critical or high-priority defects |
| | Defect density within acceptable limits |
| Documentation | Test execution report completed |
| | Known issues documented |
| | Traceability matrix updated |
| Sign-off | Stakeholder approval obtained |
| | Risk assessment accepted |
| | Go-live checklist completed |
Exit Criteria by Testing Level
Unit Testing Exit Criteria
✓ Code coverage ≥ 80% (statements; an automated gate is sketched below)
✓ All unit tests passing
✓ No critical code quality violations
✓ Cyclomatic complexity within standards
✓ No memory leaks detected
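The coverage criterion above can be gated automatically. As one approach, assuming your coverage tool emits a Cobertura-style coverage.xml (coverage.py does via `coverage xml`), a short script can fail the build when statement coverage drops below the threshold; the report path and threshold are assumptions to adapt.

```python
# Gate on statement coverage from a Cobertura-style coverage.xml
# (produced by e.g. `coverage xml`). Path and threshold are assumptions.
import sys
import xml.etree.ElementTree as ET

THRESHOLD = 0.80

def coverage_met(report_path='coverage.xml', threshold=THRESHOLD):
    root = ET.parse(report_path).getroot()
    line_rate = float(root.get('line-rate'))  # 0.0-1.0 statement coverage
    print(f'Statement coverage: {line_rate:.1%} (target {threshold:.0%})')
    return line_rate >= threshold

if __name__ == '__main__':
    sys.exit(0 if coverage_met() else 1)
```

With pytest-cov, the equivalent gate is available directly via `--cov-fail-under=80`.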
Integration Testing Exit Criteria
✓ All integration test scenarios executed
✓ API contract tests passing (see the contract-check sketch below)
✓ Data flow validation completed
✓ Error handling verified for all interfaces
✓ 90%+ integration test pass rate
✓ Integration defects ≤ 10% of total test cases
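A minimal interpretation of "API contract tests passing" is asserting that responses still carry the fields and types consumers depend on. The sketch below uses only the standard library; the endpoint URL and expected fields are hypothetical.

```python
# Minimal response-shape contract check (stdlib only).
# The endpoint URL and expected fields are hypothetical assumptions.
import json
from urllib.request import urlopen

# field -> expected type; JSON numbers may arrive as int or float
CONTRACT = {'order_id': str, 'status': str, 'total': (int, float)}

def check_contract(url='http://localhost:8081/api/orders/42'):
    with urlopen(url) as resp:
        payload = json.load(resp)
    violations = [
        field for field, expected in CONTRACT.items()
        if not isinstance(payload.get(field), expected)
    ]
    assert not violations, f'Contract violations: {violations}'
```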
System Testing Exit Criteria
✓ 95% of test cases passed
✓ 100% of high-priority test scenarios executed
✓ No open P1 defects
✓ P2 defects < 5% of total executed tests
✓ Performance benchmarks met
✓ Security vulnerabilities assessed and mitigated
✓ Regression test suite 98%+ passing
UAT Exit Criteria
✓ All business-critical scenarios validated
✓ User acceptance obtained from business stakeholders
✓ Training materials validated
✓ Production readiness checklist completed
✓ Rollback plan verified
✓ Go-live approval signed
Practical Example: Mobile App Release
Exit Criteria for Pre-Production Testing:
Functional Quality
- 98% of test cases passed
- Zero P1 defects open
- P2 defects ≤ 3 (with documented workarounds)
- All user journeys tested on iOS and Android
Performance Quality
- App launch time < 2 seconds on target devices
- API response time < 500ms at the 95th percentile (see the sketch below)
- Memory consumption within 200MB threshold
- Battery drain < 5% per hour of active use
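The p95 latency target above can be verified directly from raw samples. A minimal sketch using the standard library (the sample values are invented for illustration):

```python
# Check the p95 latency target from collected samples (stdlib only).
from statistics import quantiles

def p95_within_target(latencies_ms, target_ms=500):
    # quantiles(..., n=100) returns 99 cut points; index 94 is the 95th percentile
    p95 = quantiles(latencies_ms, n=100)[94]
    print(f'p95 = {p95:.0f} ms (target < {target_ms} ms)')
    return p95 < target_ms

# Illustrative samples only
samples = [120, 180, 240, 310, 95, 450, 220, 380, 160, 510, 270, 330]
assert p95_within_target(samples)
```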
Compatibility
- Tested on top 10 device models (80% market share)
- iOS 15+ and Android 11+ compatibility verified
- Different screen sizes and orientations validated
Compliance
- App store guidelines compliance checked
- Privacy policy review completed
- Accessibility standards (WCAG 2.1 AA) met
- Security penetration testing passed
Defining Effective Entry and Exit Criteria
SMART Criteria Framework
Effective entry and exit criteria should be SMART:
- Specific: Clearly defined, not vague or ambiguous
- Measurable: Quantifiable with objective metrics
- Achievable: Realistic given project constraints
- Relevant: Aligned with project goals and risks
- Time-bound: Tied to specific project phases
Example: Transforming Vague to SMART
| Vague Criteria | SMART Criteria |
|---|---|
| “Most tests passed” | “95% of planned test cases executed with pass status” |
| “Environment is ready” | “Test environment deployed with v2.3.1 build, database seeded with 10K test records, and smoke tests passing” |
| “Critical bugs fixed” | “Zero open defects with P1 severity, P2 defects ≤ 5 with documented workarounds” |
| “Team is prepared” | “QA team (5 members) trained on new features, with test case review completed and approved” |
Stakeholder Alignment Process
```mermaid
graph TD
    A[Identify Testing Phase] --> B[Draft Initial Criteria]
    B --> C[Review with Development Team]
    C --> D[Review with Product/Business]
    D --> E[Review with Management]
    E --> F{Consensus Reached?}
    F -->|No| G[Revise Criteria]
    G --> C
    F -->|Yes| H[Document in Test Plan]
    H --> I[Baseline and Communicate]
```
Common Pitfalls and Solutions
Pitfall 1: Overly Rigid Criteria
Problem: Criteria so strict that testing never starts or never ends.
Solution: Balance rigor with pragmatism. Use risk-based prioritization to focus on critical criteria and allow flexibility for lower-risk items. This approach aligns with the principles outlined in Test Plan vs Test Strategy.
Example: Instead of “100% of test cases passing,” use “95% overall pass rate with 100% pass rate for critical business flows.”
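A rule like this is straightforward to encode: compute the overall pass rate and the critical-flow pass rate separately, and require both. A minimal sketch with invented result data:

```python
# Tiered exit check: 95% overall pass rate AND 100% for critical flows.
def exit_gate(results):
    """results: list of dicts like {'passed': bool, 'critical': bool}"""
    overall = sum(r['passed'] for r in results) / len(results)
    critical = [r for r in results if r['critical']]
    critical_ok = all(r['passed'] for r in critical)
    return overall >= 0.95 and critical_ok

# Illustrative data only
results = [{'passed': True, 'critical': i < 5} for i in range(100)]
print(exit_gate(results))  # True: 100% overall, all critical cases passing
```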
Pitfall 2: Unmeasurable Criteria
Problem: Criteria that cannot be objectively verified.
Solution: Define clear metrics and measurement methods for each criterion.
Bad Example: “Code quality is good”
Good Example: “SonarQube quality gate passed with A rating, technical debt ≤ 5 days”
Pitfall 3: Ignoring Context
Problem: Using generic criteria without considering project specifics.
Solution: Tailor criteria to project risk profile, timeline, and business criticality.
Example: A hotfix release may have relaxed entry criteria (minimal test case updates) but strict exit criteria (100% regression test pass rate for affected areas).
Pitfall 4: Set and Forget
Problem: Criteria defined once and never revisited.
Solution: Review and adjust criteria at retrospectives or when project conditions change.
Example: If defect density trends upward, tighten exit criteria to require additional test coverage or lower acceptable defect rates.
Implementing Entry and Exit Criteria in Practice
Integration with Test Management
Most test management tools support criteria tracking:
```python
# Example: entry criteria gate wired to a test management tool.
# The helper functions are placeholders for your tool's API (e.g., TestRail).
def check_entry_criteria(test_run_id):
    criteria_results = {
        'environment_ready': check_environment_status(),
        'smoke_tests_passed': get_smoke_test_results(),
        'test_cases_reviewed': check_review_completion(),
        'team_availability': verify_team_capacity(),
    }
    all_met = all(criteria_results.values())
    if all_met:
        update_test_run_status(test_run_id, 'ready_to_start')
        notify_team('Entry criteria met. Testing can begin.')
    else:
        # Report exactly which criteria are blocking the start of testing
        failed = [k for k, v in criteria_results.items() if not v]
        notify_stakeholders(f'Entry criteria not met: {failed}')
    return all_met
```
Exit Criteria Dashboard
Create a visual dashboard for exit criteria tracking. Monitoring key testing metrics and KPIs helps teams make data-driven decisions about test phase completion:
| Exit Criterion | Target | Current | Status |
|---|---|---|---|
| Test Execution | 100% | 98% | ⚠️ In Progress |
| Pass Rate | ≥95% | 97% | ✅ Met |
| P1 Defects | 0 | 0 | ✅ Met |
| P2 Defects | ≤5 | 7 | ❌ Not Met |
| Code Coverage | ≥80% | 85% | ✅ Met |
| Performance | <500ms | 450ms | ✅ Met |
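The Status column in such a dashboard can be derived rather than maintained by hand. Below is a minimal sketch that evaluates each criterion against its target and prints a simplified Met/Not Met status; the values mirror the illustrative numbers in the table above.

```python
# Derive dashboard statuses from (name, target-check, current, unit) entries.
# Values mirror the illustrative table above; "In Progress" is collapsed
# into Not Met for simplicity.
criteria = [
    ('Test Execution', lambda v: v >= 100, 98, '%'),
    ('Pass Rate',      lambda v: v >= 95,  97, '%'),
    ('P1 Defects',     lambda v: v == 0,    0, ''),
    ('P2 Defects',     lambda v: v <= 5,    7, ''),
    ('Code Coverage',  lambda v: v >= 80,  85, '%'),
    ('Performance',    lambda v: v < 500, 450, 'ms'),
]

for name, check, current, unit in criteria:
    status = 'Met' if check(current) else 'Not Met'
    print(f'{name:15} {current}{unit:3} -> {status}')
```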
Checkpoint Meetings
Schedule formal go/no-go meetings to evaluate criteria:
- Pre-Test Checkpoint: Review entry criteria before starting testing phase
- Mid-Test Checkpoint: Assess progress toward exit criteria
- Exit Checkpoint: Final review of exit criteria before phase closure
Real-World Case Study
Scenario: E-commerce Platform Major Release
Context: Major release adding new payment gateway and redesigned checkout flow.
Entry Criteria for System Testing:
- New payment gateway integration completed and deployed to staging
- Checkout flow UI changes reviewed and approved by UX team
- Regression test suite updated with 25 new test cases
- Performance baseline established: checkout completion < 3 seconds
- Test data: 500 test customer accounts with varied payment methods
- 100% of API integration tests passing
Exit Criteria for System Testing:
- 95% of 500 test cases passed (475+ passing)
- All payment methods tested across 3 browsers
- Checkout flow completion rate: 98%+ in test scenarios
- Zero P1 defects (payment processing failures)
- P2 defects ≤ 10 (with workarounds documented)
- Performance: 95th percentile checkout time < 3 seconds
- Security scan: No high-severity vulnerabilities
- Cross-browser testing: Chrome, Safari, Firefox compatibility verified
Outcome: Testing revealed 12 P2 defects, exceeding the limit of 10. The team decided to:
- Fix the 3 critical P2 defects affecting user experience
- Document workarounds for the remaining 9 issues
- Re-run the affected test cases (achieving a 96% pass rate)
- Obtain stakeholder approval for a conditional release
- Plan a hotfix for the remaining issues in the next sprint
Best Practices Checklist
✅ Define criteria early: Establish criteria during test planning, not mid-testing
✅ Make criteria visible: Share criteria in test plans, dashboards, and stakeholder communications
✅ Quantify everything: Use metrics and numbers, avoid subjective language
✅ Align with risk: Prioritize criteria based on business risk and impact
✅ Get stakeholder buy-in: Ensure all parties agree on criteria before testing begins
✅ Document exceptions: When criteria aren’t met, document reasons and mitigation plans
✅ Review and adapt: Revisit criteria regularly and adjust based on learnings
✅ Automate checks: Where possible, automate criteria verification (e.g., code coverage, test pass rates). Consider implementing test automation using the pyramid strategy to efficiently validate entry and exit criteria at different testing levels
Conclusion
Entry and exit criteria are fundamental to disciplined, effective software testing. They transform testing from an ambiguous activity into a structured, measurable process with clear boundaries and objectives.
Well-defined criteria provide:
- Clarity for when to start and stop testing
- Objectivity in measuring testing completeness
- Confidence that quality goals are met
- Efficiency through focused, purposeful testing
By implementing SMART entry and exit criteria tailored to your project’s context, you create a framework for consistent, high-quality testing that delivers value to stakeholders and users alike.
Remember: the best criteria are those that balance rigor with practicality, serving as helpful guideposts rather than rigid constraints. Review, refine, and adapt your criteria as you learn what works best for your team and projects.