What Is the Software Testing Life Cycle?

The Software Testing Life Cycle (STLC) is a systematic approach to testing that defines the steps and activities performed during each testing phase. Just as software development follows the SDLC (Software Development Life Cycle), testing follows the STLC.

The STLC ensures that testing is organized, thorough, and traceable. It transforms testing from an ad-hoc activity (“let me click around and see if it works”) into a structured process with clear inputs, outputs, and quality criteria.

The Six Phases of STLC

```mermaid
graph LR
    RA[1. Requirements<br/>Analysis] --> TP[2. Test<br/>Planning]
    TP --> TCD[3. Test Case<br/>Design]
    TCD --> ES[4. Environment<br/>Setup]
    ES --> TE[5. Test<br/>Execution]
    TE --> TC[6. Test<br/>Closure]
    style RA fill:#4CAF50,color:#fff
    style TP fill:#2196F3,color:#fff
    style TCD fill:#FF9800,color:#fff
    style ES fill:#9C27B0,color:#fff
    style TE fill:#F44336,color:#fff
    style TC fill:#795548,color:#fff
```

Phase 1: Requirements Analysis

Objective: Understand what needs to be tested and determine testability.

Entry criteria:

  • Requirements document (BRD, SRS, or user stories) is available
  • Stakeholders are available for clarification

Activities:

  • Review requirements for completeness, clarity, and testability
  • Identify types of testing required (functional, performance, security)
  • Determine test scope (what is in scope and out of scope)
  • Identify automation feasibility for each requirement
  • Conduct Three Amigos sessions (Business + Dev + QA) for complex requirements
  • Identify gaps, ambiguities, and contradictions in requirements

Deliverables:

  • Requirements Traceability Matrix (RTM) — maps requirements to test cases
  • List of testable requirements
  • Automation feasibility report
  • List of questions and clarifications for stakeholders

Exit criteria:

  • All requirements reviewed and signed off
  • RTM created
  • Automation feasibility determined

Phase 2: Test Planning

Objective: Define the testing strategy, scope, effort, and schedule.

Entry criteria:

  • Requirements analysis complete
  • Project plan and schedule available

Activities:

  • Define test strategy (manual vs. automated, testing types)
  • Estimate testing effort (time, people, tools)
  • Plan test environment requirements
  • Identify testing tools needed
  • Define roles and responsibilities
  • Create risk analysis and mitigation plan
  • Define entry and exit criteria for testing
  • Estimate budget

Deliverables:

  • Test Plan document
  • Effort estimation document
  • Resource planning document

Exit criteria:

  • Test Plan reviewed and approved
  • Resources allocated
  • Tools and environments identified

Phase 3: Test Case Design

Objective: Create detailed test cases and test data.

Entry criteria:

  • Test Plan approved
  • Requirements clear and stable

Activities:

  • Write test cases based on requirements and acceptance criteria
  • Design test data sets (positive, negative, boundary values)
  • Create test scripts for automated tests
  • Review and prioritize test cases
  • Identify reusable test cases from previous projects
  • Map test cases to requirements in the RTM

Deliverables:

  • Test cases (with steps, expected results, preconditions)
  • Test data sets
  • Automated test scripts
  • Updated Requirements Traceability Matrix

Exit criteria:

  • Test cases reviewed and approved
  • Test data prepared
  • RTM updated with test case mappings
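
Test data design with boundary values, as listed in the activities above, translates directly into a parametrized automated script. A minimal pytest sketch, using a hypothetical password-length rule (8 to 64 characters) as the requirement under test:

```python
import pytest

# Hypothetical requirement for illustration: passwords must be 8-64 characters.
def is_valid_password(password: str) -> bool:
    return 8 <= len(password) <= 64

# Boundary-value test data: just below, on, and just above each boundary.
@pytest.mark.parametrize("password,expected", [
    ("a" * 7, False),   # below lower boundary
    ("a" * 8, True),    # on lower boundary
    ("a" * 64, True),   # on upper boundary
    ("a" * 65, False),  # above upper boundary
])
def test_password_length_boundaries(password, expected):
    assert is_valid_password(password) == expected
```

Each parametrized row is one test case in the RTM sense: it has a precondition (the input), a step (the call), and an expected result.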

Phase 4: Environment Setup

Objective: Prepare the test environment and verify it is ready.

Entry criteria:

  • Test Plan specifies environment requirements
  • Infrastructure team available

Activities:

  • Set up test environment (hardware, software, network)
  • Install and configure the application under test
  • Prepare test data in the environment
  • Configure test tools (test management, automation, monitoring)
  • Perform smoke testing to verify environment readiness
  • Document environment configuration

Deliverables:

  • Test environment ready and verified
  • Environment configuration document
  • Smoke test results

Exit criteria:

  • Environment matches specifications
  • Smoke tests pass
  • All tools configured and accessible

Note: Environment setup often runs in parallel with Test Case Design to save time.
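
The smoke test that gates environment readiness can be as simple as a scripted checklist. A minimal sketch, where each check function is a hypothetical placeholder for a real probe (health endpoint, database query, test data lookup):

```python
# Hypothetical smoke checks; in a real environment each would probe the
# application, database, and test data set up in this phase.
def check_app_responds() -> bool:
    return True  # e.g. GET /health returns 200

def check_database_reachable() -> bool:
    return True  # e.g. a trivial SELECT succeeds

def check_test_data_loaded() -> bool:
    return True  # e.g. expected sample records exist

def run_smoke_tests(checks) -> bool:
    """The environment is ready only if every smoke check passes."""
    results = {check.__name__: check() for check in checks}
    for name, passed in results.items():
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
    return all(results.values())

ready = run_smoke_tests([check_app_responds,
                         check_database_reachable,
                         check_test_data_loaded])
print("Environment ready:", ready)
```

A single failing check blocks the Test Execution phase from starting, which is exactly what the entry criteria require.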

Phase 5: Test Execution

Objective: Execute test cases, log results, and report defects.

Entry criteria:

  • Test environment ready
  • Test cases reviewed
  • Build deployed to test environment

Activities:

  • Execute test cases (manual and automated)
  • Compare actual results with expected results
  • Log defects for failed test cases
  • Re-test fixed defects
  • Execute regression testing
  • Track test execution progress
  • Update test case status in test management tool

Deliverables:

  • Test execution results (pass/fail/blocked/skipped)
  • Defect reports
  • Daily/weekly test status reports
  • Updated RTM with execution results

Exit criteria:

  • All planned test cases executed
  • All critical and high-severity defects resolved or deferred with approval
  • Exit criteria defined in Test Plan are met
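
Tracking execution progress means tallying statuses and computing a pass rate. A sketch using an illustrative execution log (the IDs and statuses are made up for the example):

```python
from collections import Counter

# Hypothetical execution log: (test case ID, status) pairs as they would
# come out of a test management tool.
executions = [
    ("TC-001", "passed"), ("TC-002", "passed"), ("TC-003", "failed"),
    ("TC-004", "passed"), ("TC-005", "blocked"),
]

def execution_summary(executions):
    """Tally statuses; pass rate is computed over executed (non-blocked) cases."""
    counts = Counter(status for _, status in executions)
    executed = counts["passed"] + counts["failed"]
    pass_rate = counts["passed"] / executed if executed else 0.0
    return counts, pass_rate

counts, pass_rate = execution_summary(executions)
print(dict(counts), f"pass rate: {pass_rate:.0%}")  # pass rate: 75%
```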

Phase 6: Test Closure

Objective: Evaluate testing completeness and capture lessons learned.

Entry criteria:

  • Test execution complete
  • Exit criteria met

Activities:

  • Evaluate test completion criteria against actuals
  • Analyze test metrics (pass rate, defect density, coverage)
  • Document lessons learned
  • Create test summary report
  • Archive test artifacts (cases, data, scripts, reports)
  • Provide feedback for process improvement

Deliverables:

  • Test Summary Report (also called Test Closure Report)
  • Lessons learned document
  • Test metrics report
  • Archived test artifacts

Exit criteria:

  • Test Summary Report approved by stakeholders
  • All test artifacts archived
  • Lessons learned documented and shared

STLC vs SDLC

The STLC does not exist in isolation — it aligns with the SDLC:

| SDLC Phase | STLC Phase | QA Activity |
|---|---|---|
| Requirements | Requirements Analysis | Review requirements, create RTM |
| Design | Test Planning | Create test strategy and plan |
| Implementation | Test Case Design + Environment Setup | Write tests, set up environment |
| Testing | Test Execution | Execute tests, report defects |
| Deployment | Test Closure | Summarize results, lessons learned |
| Maintenance | Ongoing testing | Regression testing for patches |

Key difference: SDLC focuses on building the product. STLC focuses on validating that the product meets requirements. They run in parallel, not sequentially.

Entry and Exit Criteria

Every STLC phase has entry criteria (what must be true before starting) and exit criteria (what must be true before moving on). These criteria prevent two common problems:

  1. Starting too early — beginning test execution without a stable environment
  2. Moving on too soon — closing testing when critical defects are still open

| Phase | Entry Criteria Example | Exit Criteria Example |
|---|---|---|
| Requirements Analysis | Requirements document available | RTM created, all questions resolved |
| Test Planning | Requirements analysis complete | Test Plan approved |
| Test Case Design | Test Plan approved | Test cases reviewed, test data ready |
| Environment Setup | Environment specs defined | Smoke tests pass |
| Test Execution | Build deployed, environment ready | All critical tests executed, exit criteria met |
| Test Closure | Test execution complete | Test Summary Report approved |
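
Criteria like these are most useful when they are mechanically checkable. A sketch of a measurable exit gate for Test Execution, using illustrative thresholds (95% execution, zero open critical defects):

```python
# Illustrative exit gate; thresholds are example values, not a standard.
def exit_criteria_met(executed, planned, open_critical, min_execution=0.95):
    checks = {
        "execution rate >= threshold": planned > 0 and executed / planned >= min_execution,
        "no open critical defects": open_critical == 0,
    }
    return all(checks.values()), checks

ok, checks = exit_criteria_met(executed=45, planned=50, open_critical=1)
print(ok)  # False: only 90% executed and one critical defect still open
```

A gate like this turns "are we done testing?" from a debate into a lookup.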

The Requirements Traceability Matrix (RTM)

The RTM is one of the most important QA artifacts. It maps every requirement to its corresponding test cases, ensuring complete coverage:

| Req ID | Requirement | Test Case IDs | Status |
|---|---|---|---|
| REQ-001 | User can register with email | TC-001, TC-002, TC-003 | Passed |
| REQ-002 | User can log in | TC-004, TC-005 | Passed |
| REQ-003 | User can reset password | TC-006, TC-007, TC-008 | 1 Failed |
| REQ-004 | Admin can export reports | TC-009 | Not Executed |

Why RTM matters:

  • Ensures no requirement is untested
  • Shows test coverage at a glance
  • Tracks requirement status through testing
  • Identifies gaps in test coverage
  • Required by many quality standards (ISO, CMMI)
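
A simple mapping is enough to make these checks mechanical. A sketch mirroring the example RTM above:

```python
# RTM sketch: requirement IDs mapped to test case IDs and an overall status,
# mirroring the example table in this section.
rtm = {
    "REQ-001": {"tests": ["TC-001", "TC-002", "TC-003"], "status": "Passed"},
    "REQ-002": {"tests": ["TC-004", "TC-005"], "status": "Passed"},
    "REQ-003": {"tests": ["TC-006", "TC-007", "TC-008"], "status": "1 Failed"},
    "REQ-004": {"tests": ["TC-009"], "status": "Not Executed"},
}

def coverage_gaps(rtm):
    """Requirements with no mapped test cases are coverage gaps."""
    return [req for req, entry in rtm.items() if not entry["tests"]]

def at_risk(rtm):
    """Requirements whose tests failed or were never run."""
    return [req for req, entry in rtm.items() if entry["status"] != "Passed"]

print(coverage_gaps(rtm))  # []
print(at_risk(rtm))        # ['REQ-003', 'REQ-004']
```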

Exercise: Create STLC Deliverables for a Project

You are the QA lead for a new online banking feature: Bill Payment. The feature allows users to:

  • Add billers (electricity, water, internet providers)
  • Schedule one-time or recurring payments
  • View payment history
  • Receive notifications for upcoming and completed payments

Your task:

Create the following STLC deliverables:

  1. Requirements Analysis: Create an RTM with at least 6 requirements and identify 2 testability concerns
  2. Test Planning: Write a brief test plan outline (scope, testing types, environment needs, risks)
  3. Test Case Design: Write 3 detailed test cases (one positive, one negative, one boundary)
  4. Test Execution: Given these results, determine if exit criteria are met:
    • 45 test cases executed out of 50
    • 3 critical bugs found, 2 fixed, 1 deferred
    • 8 medium bugs found, 6 fixed, 2 open
    • 40 passed, 3 failed, 2 blocked

Hint

For the RTM, think about both functional and non-functional requirements. Bill payment involves financial transactions — security and accuracy are critical.

For the exit criteria evaluation, consider: would you approve release with 1 deferred critical bug and 2 open medium bugs?

Sample Solution

1. Requirements Traceability Matrix

| Req ID | Requirement | Test Cases | Priority |
|---|---|---|---|
| BP-001 | User can add a biller by name or account number | TC-001, TC-002, TC-003 | High |
| BP-002 | User can schedule a one-time payment | TC-004, TC-005, TC-006 | High |
| BP-003 | User can schedule a recurring payment (weekly/monthly) | TC-007, TC-008, TC-009, TC-010 | High |
| BP-004 | User can view payment history (last 12 months) | TC-011, TC-012 | Medium |
| BP-005 | User receives notification 2 days before scheduled payment | TC-013, TC-014 | Medium |
| BP-006 | Payment amount must not exceed account balance | TC-015, TC-016, TC-017 | Critical |

Testability concerns:

  1. Recurring payment testing requires time-based triggers — need ability to simulate date changes in test environment
  2. Payment processing involves third-party APIs (bank, biller) — need mock services or sandbox environments

2. Test Plan Outline

Scope: Functional testing of bill payment feature including add biller, schedule payment, payment history, and notifications.

Out of scope: Payment processing backend (tested by backend team), mobile app version (separate test cycle).

Testing types: Functional, security (payment data encryption), performance (concurrent payments), usability, regression.

Environment: Staging environment with mock payment gateway, test database with sample billers, notification service in test mode.

Risks:

  • Mock payment gateway may not reflect real gateway behavior (mitigation: smoke test with sandbox)
  • Recurring payments are time-dependent (mitigation: time simulation tool)
  • Concurrent payment testing requires load testing infrastructure (mitigation: provision early)

3. Test Cases

TC-004: Schedule a one-time payment (Positive)

  • Precondition: User logged in, has added biller “City Electric”, balance $500
  • Steps: 1) Select “City Electric” 2) Enter amount $100 3) Select date: tomorrow 4) Click “Schedule Payment”
  • Expected: Payment scheduled, confirmation shown, balance unchanged until payment date

TC-016: Payment exceeds account balance (Negative)

  • Precondition: User logged in, balance $50
  • Steps: 1) Select biller 2) Enter amount $100 3) Click “Schedule Payment”
  • Expected: Error message “Insufficient funds. Your balance is $50.00”, payment not scheduled

TC-017: Payment equals exact account balance (Boundary)

  • Precondition: User logged in, balance $100.00
  • Steps: 1) Select biller 2) Enter amount $100.00 3) Click “Schedule Payment”
  • Expected: Payment scheduled successfully (balance boundary)
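
Test cases like TC-016 and TC-017 map directly onto executable checks. A sketch against a hypothetical `schedule_payment` stand-in for the service under test:

```python
# Hypothetical stand-in for the bill payment service, for illustration only.
class InsufficientFunds(Exception):
    pass

def schedule_payment(balance: float, amount: float) -> str:
    if amount > balance:
        raise InsufficientFunds(f"Insufficient funds. Your balance is ${balance:.2f}")
    return "scheduled"

# TC-016 (negative): amount above balance is rejected with a clear message
try:
    schedule_payment(balance=50.00, amount=100.00)
except InsufficientFunds as e:
    print(e)  # Insufficient funds. Your balance is $50.00

# TC-017 (boundary): amount exactly equal to balance succeeds
print(schedule_payment(balance=100.00, amount=100.00))  # scheduled
```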

4. Exit Criteria Evaluation

Results summary:

  • Execution: 45/50 (90%) — 5 not executed
  • Critical bugs: 3 found, 2 fixed, 1 deferred
  • Medium bugs: 8 found, 6 fixed, 2 open
  • Pass rate: 40/45 = 89%

Recommendation: DO NOT approve release.

Reasons:

  • 1 deferred critical bug is a release blocker for a financial feature
  • 2 open medium bugs should be resolved
  • 5 test cases not executed may cover critical scenarios
  • 89% pass rate is below typical 95% threshold for financial features

Action items:

  1. Fix or re-evaluate the deferred critical bug
  2. Fix 2 open medium bugs
  3. Execute remaining 5 test cases
  4. Re-test after fixes
  5. Re-evaluate exit criteria

Common STLC Pitfalls

1. Skipping Requirements Analysis

Many teams jump straight to test case writing. Without requirements analysis, you write tests that miss critical scenarios.

2. Incomplete Test Planning

A vague test plan leads to scope creep, missed deadlines, and inadequate resource allocation.

3. No Entry/Exit Criteria

Without formal criteria, phases start and end based on dates rather than readiness, leading to quality issues.

4. Poor Test Closure

Teams often skip test closure because they are rushing to the next project. This means lessons are never learned and the same mistakes repeat.

Pro Tips

  1. Automate RTM updates. Use test management tools (TestRail, Zephyr, Xray) that automatically link test cases to requirements and update status.

  2. Start environment setup early. Environment issues are the number one cause of testing delays. Begin setup in parallel with test case design.

  3. Make entry/exit criteria measurable. “Testing is complete” is not an exit criterion. “95% of test cases executed, zero critical open defects” is.

  4. Conduct test closure even in agile. In agile, test closure happens at the end of each sprint (mini-closure) and at the end of the release (full closure).

  5. Use the RTM in stakeholder meetings. When someone asks “are we ready to release?” the RTM provides an objective, data-driven answer.