Assessment Overview

Congratulations on reaching the end of Module 1: Software Testing Fundamentals. This assessment tests your understanding of all topics covered in lessons 1.1 through 1.29.

The assessment has three parts:

| Part | Format | Questions | Time Estimate |
| --- | --- | --- | --- |
| Part 1 | Multiple-choice quiz | 10 questions | 10 minutes |
| Part 2 | Scenario-based questions | 3 scenarios | 15 minutes |
| Part 3 | Practical exercise | 1 exercise | 20 minutes |

How to Use This Assessment

Before you begin:

  • Review your notes from Module 1
  • Do not use reference materials during the quiz (Part 1) — test your recall
  • For Parts 2 and 3, you may reference earlier lessons
  • There are no trick questions — every question has a clearly correct answer

Scoring guide:

  • Part 1: 10 points (1 point per correct answer)
  • Part 2: 15 points (5 points per scenario)
  • Part 3: 15 points (rubric provided)
  • Total: 40 points
  • Passing score: 28/40 (70%)

Topics Covered

This assessment covers all major topics from Module 1:

  1. Testing fundamentals — What is testing, why it matters, testing vs debugging
  2. The testing mindset — Quality mindset, tester vs developer thinking
  3. SDLC and testing — Waterfall, Agile, V-Model, and testing’s role in each
  4. STLC — Software Testing Life Cycle phases
  5. Test levels — Unit, integration, system, acceptance
  6. Test types — Functional, non-functional, regression, smoke, sanity
  7. Entry and exit criteria — Gates for testing phases
  8. Test metrics and KPIs — DRE, defect density, coverage, MTTR
  9. Requirements Traceability Matrix — Forward/backward traceability
  10. Test process improvement — TMMi and TPI Next
  11. Regulated industries — Healthcare, finance, automotive standards
  12. Standards — IEEE 829 and ISO 29119
  13. Test strategy — Building a strategy from scratch

Part 1: Multiple-Choice Quiz

The quiz questions are in the frontmatter of this lesson (10 questions). Take the quiz first before proceeding to Parts 2 and 3.

After completing the quiz, check your answers against the explanations. Note any topics where you answered incorrectly — these are areas worth reviewing before moving to Module 2.

Part 2: Scenario-Based Questions

Scenario A: The Startup QA Challenge

Context: You have just been hired as the sole QA engineer at a fintech startup. The company has a mobile banking app with 100K users. There are 20 developers and no other testers. The team deploys twice per week. Last month, three critical bugs reached production: a payment calculation error, a security vulnerability in the login flow, and a data display issue on Android.

Questions (5 points):

  1. Based on the three production incidents, what STLC phase appears to be most deficient? Explain your reasoning. (2 points)

  2. What are the top 3 metrics you would start tracking immediately and why? (3 points)

Solution

1. Most deficient STLC phase: Test Design and Test Execution

The variety of escaped defects (calculation logic, security, platform-specific display) suggests insufficient test coverage rather than a single process gap. This points to the Test Design phase — test cases are either not being created or do not cover critical scenarios. Additionally, the Test Execution phase may be deficient if tests exist but are not being run before deployment.

A secondary deficiency is in the Requirements Analysis/Planning phase — if risk-based testing were in place, payment calculations and login security would be the highest-priority test areas.

2. Top 3 metrics:

  1. Defect Escape Rate — Track how many defects reach production per release. This directly measures the problem (bugs escaping). Target: reduce from current rate by 50% within 3 months.

  2. Test Coverage (by risk level) — Measure what percentage of high-risk features (payments, security, core flows) have test cases. This will reveal where gaps exist. Target: 100% coverage for Critical and High-risk features.

  3. Deployment Success Rate — Track what percentage of deployments go to production without rollback. This gives a single number that management can track. Target: 95%+ success rate.

Why these three: They directly address the current problem (production defects), are easy to start collecting immediately, and provide actionable data (coverage gaps tell you where to focus testing effort).
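
The first metric above can be made concrete in a few lines. This is an illustrative sketch, not a standard implementation: the function name and sample defect counts are hypothetical, and teams differ in how they define "escaped" and the denominator.

```python
# Hypothetical sketch: defect escape rate for one release.
# Escaped defects = defects found in production; the denominator
# is all defects known for that release (testing + production).

def defect_escape_rate(found_in_testing: int, found_in_production: int) -> float:
    """Percentage of all known defects that reached production."""
    total = found_in_testing + found_in_production
    if total == 0:
        return 0.0  # no defects recorded at all
    return found_in_production / total * 100

# Example: 27 defects caught in testing, 3 escaped to production.
print(f"{defect_escape_rate(27, 3):.1f}%")  # 10.0%
```

Tracking this number per release makes the 50%-reduction target measurable sprint over sprint.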

Scenario B: The RTM Gap

Context: Your team is preparing for a regulatory audit of a healthcare application. The auditor requests to see your Requirements Traceability Matrix. You discover the following:

  • 200 requirements are documented
  • 180 requirements have at least one test case (20 have none)
  • 50 test cases exist that are not linked to any requirement
  • 15 requirements are marked as “deferred” with no test cases
  • All test cases linked to requirements have been executed

Questions (5 points):

  1. Calculate the requirements coverage percentage. (1 point)
  2. What are the three types of gaps in this RTM and what risk does each present? (2 points)
  3. What would you recommend to the QA Lead before the audit? (2 points)

Solution

1. Requirements coverage:

  • Total requirements: 200; deferred: 15, leaving 200 - 15 = 185 active requirements
  • Requirements without any test case: 200 - 180 = 20
  • Best case — if all 15 deferred requirements are among those 20, only 5 active requirements lack tests: coverage = (185 - 5) / 185 × 100 ≈ 97.3%
  • Conservative — counting every documented requirement: 180 / 200 × 100 = 90%

The exact percentage depends on whether the 15 deferred requirements are among the 20 without test cases. For the audit, present both the total and active coverage.
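
The arithmetic behind both figures can be written out explicitly. A small sketch (variable names are my own):

```python
# Scenario B coverage arithmetic, both interpretations.
total_requirements = 200
with_test_cases = 180
deferred = 15  # deferred requirements, assumed to have no test cases

active = total_requirements - deferred                       # 185 active
without_tests = total_requirements - with_test_cases         # 20 untested
# Best case: all 15 deferred requirements are among the 20 untested.
untested_active = without_tests - deferred                   # 5
active_coverage = (active - untested_active) / active * 100  # ~97.3%

# Conservative: count every documented requirement.
conservative_coverage = with_test_cases / total_requirements * 100  # 90.0%

print(f"Active: {active_coverage:.1f}%, conservative: {conservative_coverage:.1f}%")
```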

2. Three types of gaps:

| Gap Type | Count | Risk |
| --- | --- | --- |
| Requirements without test cases | 5-20 | Regulatory risk — auditor will flag untested requirements as non-compliance. If any of these relate to patient safety, this is a critical finding. |
| Orphaned test cases (no requirement) | 50 | Scope creep risk — these may be testing features not in the requirements, wasting effort. Alternatively, they may test valid scenarios where the link was simply not documented. |
| Deferred requirements without tests | 15 | Tracking risk — deferred requirements need formal documentation explaining why they are deferred and when they will be addressed. Without this, the auditor may question whether they were simply forgotten. |

3. Recommendations before the audit:

  1. Immediately link or justify the 20 unlinked requirements. For the 5 non-deferred requirements without tests, either write test cases or document a risk-based justification for not testing them. This is the most critical action.

  2. Review the 50 orphaned test cases. For each, either link to an existing requirement or document why the test exists without a formal requirement (e.g., security best practice, edge case coverage). Remove any truly unnecessary tests.

  3. Document the 15 deferred requirements. Create a formal deferral record for each: why it was deferred, who approved the deferral, what the plan is for future implementation, and confirmation that the deferral does not affect patient safety.

  4. Prepare an RTM summary report showing: total coverage percentage, coverage by risk level, list of gaps with justifications, and sign-off from the QA Lead and Compliance Officer.

Scenario C: Process Improvement Decision

Context: You are a QA Manager at a 300-person software company. The CTO asks you to evaluate whether to adopt TMMi or TPI Next for test process improvement. The company has:

  • 5 product teams, each with different testing approaches
  • Some teams use automation, others are manual-only
  • No organizational test policy
  • Plans to pursue ISO 27001 certification next year
  • Limited budget for process improvement ($50K/year)

Questions (5 points):

  1. Which framework would you recommend and why? (2 points)
  2. What would be your first 3 improvement actions? (3 points)

Solution

1. Recommendation: TPI Next

Reasoning:

  • Flexibility: With 5 teams at different maturity levels, TPI Next’s ability to improve individual key areas independently is more practical than TMMi’s requirement to satisfy all areas at a level.
  • Budget: $50K/year does not support formal TMMi assessment and certification (which can cost $20K+ per assessment). TPI Next can be done with internal resources.
  • Quick wins: TPI Next allows targeting specific weak areas for immediate improvement, delivering visible results that build momentum and executive support.
  • ISO 27001 alignment: TPI Next’s security-related key areas (test environment, test data management) directly support ISO 27001 preparation.

However, if the company later needs formal maturity certification (e.g., for outsourcing contracts), TMMi can be pursued on top of TPI Next improvements.

2. First 3 improvement actions:

  1. Create an Organizational Test Policy (TPI Next: Stakeholder Management, Controlled level)

    • Define minimum testing standards all 5 teams must follow
    • Include: mandatory test types, minimum coverage thresholds, defect management process, reporting requirements
    • Timeline: 1 month to draft, 1 month for review and rollout
    • Cost: Internal effort only (~$5K in time)
    • Why first: This establishes the foundation. Without a policy, each team will continue doing things differently.
  2. Standardize Reporting across all teams (TPI Next: Reporting, Controlled level)

    • Define a common test status report template
    • Implement weekly reporting from all 5 teams
    • Create a consolidated quality dashboard for the CTO
    • Timeline: 2 months
    • Cost: ~$5K (tool configuration)
    • Why second: Visibility is essential. The CTO needs data to justify continued investment, and teams need to see how they compare.
  3. Establish a common defect management process (TPI Next: Defect Management, Controlled level)

    • Define defect lifecycle, severity/priority definitions, SLAs by severity
    • Ensure all teams use the same tracking tool and workflow
    • Start tracking DRE and defect escape rate across all teams
    • Timeline: 2 months
    • Cost: ~$5K (tool configuration, training)
    • Why third: Consistent defect management enables meaningful cross-team metrics and identifies which teams need the most help.

Total Phase 1 cost: ~$15K, leaving $35K for tool investments, training, and Phase 2 improvements.

Part 3: Practical Exercise

Create a Test Plan Outline

Scenario: You are the QA Lead for an online learning platform (similar to this course). The platform has:

  • Web application (desktop and mobile responsive)
  • Features: user registration, course enrollment, lesson viewing, quiz taking, progress tracking, certificate generation
  • 50K students, 200 courses, 10K lessons
  • Payment processing for premium courses (Stripe integration)
  • Tech stack: Next.js frontend, Go backend, PostgreSQL, deployed on AWS
  • Team: 25 developers, 5 testers
  • Release cycle: bi-weekly (every 2 weeks)
  • No regulatory requirements, but GDPR compliance needed for EU users

Your task: Create a test plan outline that includes:

  1. Scope — What is in/out of scope for testing
  2. Risk assessment — Risk matrix for the top 6 features
  3. Test approach — Testing types, automation strategy
  4. Entry/Exit criteria — For system testing
  5. Key metrics — 4-5 metrics you would track
  6. Resource allocation — How to use the 5 testers

Scoring rubric (15 points):

| Criterion | Points | Description |
| --- | --- | --- |
| Scope clarity | 3 | Clear in/out scope with justification |
| Risk assessment quality | 3 | Realistic risk ratings with reasoning |
| Test approach completeness | 3 | Covers all relevant testing types |
| Entry/exit criteria | 2 | Measurable, realistic criteria |
| Metrics relevance | 2 | Metrics tied to quality goals |
| Resource allocation | 2 | Practical team utilization |

Hint
  • Payment processing is the highest-risk feature (financial data, Stripe integration)
  • Quiz taking must be accurate (affects certification)
  • Certificate generation must be reliable (has legal/professional value)
  • Consider GDPR for user data handling
  • With 5 testers and bi-weekly releases, you cannot test everything manually every sprint
  • The testing pyramid applies: automate regression, manually explore new features

Solution

Test Plan Outline: Online Learning Platform

1. Scope

In scope:

  • All student-facing features (registration, enrollment, lessons, quizzes, progress, certificates)
  • Payment processing (Stripe) — all payment scenarios
  • API testing — all public endpoints
  • Cross-browser: Chrome, Firefox, Safari, Edge (latest 2 versions)
  • Mobile responsive: iOS Safari, Android Chrome
  • Performance: page load times, video streaming, quiz submission
  • Security: authentication, payment data, user data protection
  • GDPR: consent management, data export, data deletion, cookie policy
  • Accessibility: WCAG 2.1 AA for all student-facing pages

Out of scope:

  • Course content creation admin panel (low risk, internal users only, separate test cycle)
  • Analytics and reporting dashboard (informational only, no data modification)
  • Infrastructure testing beyond application layer (AWS responsibility)
  • Load testing beyond 5x current traffic (planned separately)

2. Risk Assessment

| Feature | Business Impact | Likelihood | Test Priority |
| --- | --- | --- | --- |
| Payment processing | Critical (revenue) | Medium | Highest |
| Quiz accuracy | High (certification validity) | Medium | High |
| Certificate generation | High (professional value) | Low | High |
| Lesson viewing | High (core value) | Low | Medium |
| User registration | Medium | Low | Medium |
| Progress tracking | Medium | Medium | Medium |
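
One way to derive such ratings is a numeric score: a common scheme multiplies an impact weight by a likelihood weight. The weights below are illustrative, and the table's final priorities also reflect judgment (certificate generation is rated High despite Low likelihood because of its professional stakes), so scores need not map one-to-one to the ratings.

```python
# Illustrative risk scoring: score = impact weight * likelihood weight.
# Weights are hypothetical; real matrices are calibrated per organization.

IMPACT = {"Critical": 4, "High": 3, "Medium": 2, "Low": 1}
LIKELIHOOD = {"High": 3, "Medium": 2, "Low": 1}

def risk_score(impact: str, likelihood: str) -> int:
    return IMPACT[impact] * LIKELIHOOD[likelihood]

features = {
    "Payment processing": ("Critical", "Medium"),  # 4 * 2 = 8
    "Quiz accuracy": ("High", "Medium"),           # 3 * 2 = 6
    "Progress tracking": ("Medium", "Medium"),     # 2 * 2 = 4
    "Certificate generation": ("High", "Low"),     # 3 * 1 = 3
}
ranked = sorted(features, key=lambda f: risk_score(*features[f]), reverse=True)
print(ranked[0])  # payment processing scores highest
```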

3. Test Approach

| Testing Type | Approach | Coverage Target |
| --- | --- | --- |
| Unit testing | Automated (developers) | 80% code coverage |
| API testing | Automated (Playwright API) | All endpoints |
| E2E regression | Automated (Playwright) | Critical paths |
| New feature testing | Manual exploratory | Each sprint |
| Payment testing | Automated + manual sandbox testing | All payment scenarios |
| Performance | Automated (k6) | Monthly |
| Security | OWASP ZAP + manual review | Quarterly |
| Accessibility | axe-core + manual audit | Quarterly |
| GDPR compliance | Manual checklist | Twice yearly |

Automation strategy: automate regression first (payment flow, quiz flow, enrollment flow). Target: 70% regression automation within 6 months.

4. Entry/Exit Criteria for System Testing

Entry:

  • Integration testing complete with 95%+ pass rate
  • Build deployed to staging environment
  • Test data prepared (courses, users, payment sandbox)
  • All critical/high defects from previous sprint resolved

Exit:

  • 95% of planned test cases executed
  • Zero open Critical defects, fewer than 3 High defects
  • Payment flow: 100% test execution, 100% pass rate
  • Regression suite passes with <2% failure rate
  • Performance benchmarks met (pages load <3s, quiz submission <1s)
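
Because these criteria are measurable, they can be checked mechanically, for example as a release gate in CI. A minimal sketch, with hypothetical field names and sample numbers:

```python
# Hypothetical exit-criteria gate for system testing.
# Thresholds mirror the exit criteria listed above.
from dataclasses import dataclass

@dataclass
class SystemTestRun:
    planned_cases: int
    executed_cases: int
    open_critical: int
    open_high: int
    regression_failure_pct: float

def exit_criteria_met(run: SystemTestRun) -> bool:
    return (
        run.executed_cases / run.planned_cases >= 0.95  # 95% of planned cases executed
        and run.open_critical == 0                      # zero open Critical defects
        and run.open_high < 3                           # fewer than 3 High defects
        and run.regression_failure_pct < 2.0            # regression failures < 2%
    )

run = SystemTestRun(planned_cases=400, executed_cases=392,
                    open_critical=0, open_high=2, regression_failure_pct=1.1)
print(exit_criteria_met(run))  # True: 392/400 = 98% executed, all gates pass
```

The payment-flow and performance gates would need their own checks on top of this, since they are measured from different data sources.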

5. Key Metrics

  1. DRE — Target >90%. Measures overall testing effectiveness.
  2. Defect Escape Rate — Target <5%. Tracks bugs reaching production.
  3. Regression Automation Coverage — Target 70% in 6 months. Measures automation progress.
  4. Payment Defect Rate — Target 0 critical/high. Tracks our highest-risk area.
  5. Sprint Velocity Impact — Track whether testing delays releases. Target: <5% of sprints delayed by testing.

6. Resource Allocation

| Tester | Primary Focus | Secondary Focus |
| --- | --- | --- |
| Tester 1 (Senior) | Automation framework + payment testing | Mentoring |
| Tester 2 | Automation: quiz and enrollment flows | API testing |
| Tester 3 | Manual: new features each sprint | Exploratory testing |
| Tester 4 | Manual: cross-browser + mobile responsive | GDPR compliance |
| Tester 5 | Performance + security testing | Certificate testing |

Rotation: All testers participate in sprint planning and do at least one 30-minute exploratory session per sprint on any feature.

What is Next

If you scored 28+ out of 40, you are ready for Module 2: Test Levels and Types. If you scored below 28, review the topics where you lost points before proceeding. There is no shame in revisiting — the goal is solid understanding, not speed.

Module 2 will build directly on the concepts from Module 1, going deeper into test levels (unit, integration, system, acceptance) and test types (functional, non-functional, structural, regression). The foundation you built here will make Module 2 significantly easier.