Module 2 Assessment Overview

Congratulations on reaching the final lesson of Module 2. This comprehensive assessment tests your understanding of all topics covered across the module’s 35 lessons.

The assessment consists of three parts:

  1. Knowledge Questions — 10 quiz questions in the frontmatter (take them before reading further)
  2. Scenario-Based Questions — Classify and apply testing concepts to real-world situations
  3. Practical Exercise — Create a testing strategy for a new project

Preparation Tips

Before taking this assessment:

  • Review your notes from lessons 2.1 through 2.34
  • Pay special attention to the relationships between concepts (e.g., how testing levels relate to the testing pyramid, how static and dynamic testing complement each other)
  • Think about practical applications, not just definitions

Scoring Guide

  • Part 1 (Quiz): 10 questions, 3 points each = 30 points
  • Part 2 (Scenarios): 5 scenarios, 6 points each = 30 points
  • Part 3 (Exercise): 40 points (detailed rubric below)
  • Total: 100 points
  • Passing score: 70 points

Topics Covered

This assessment covers the following Module 2 topics:

| Topic Area | Lessons | Key Concepts |
| --- | --- | --- |
| Testing Levels | 2.1-2.6 | Unit, integration, system, E2E, UAT, testing pyramid |
| Functional Testing Types | 2.7-2.10 | Smoke, sanity, regression, retesting |
| Non-Functional Testing | 2.11-2.25 | Performance, security, usability, accessibility, compatibility, reliability |
| Testing Methods | 2.26-2.28 | White-box, black-box, grey-box |
| Static vs. Dynamic | 2.29-2.31 | Reviews, inspections, static analysis, dynamic execution |
| Exploratory Approaches | 2.32-2.34 | Exploratory testing, ad hoc, monkey testing, SBTM |

Part 2: Scenario-Based Questions

For each scenario, identify the most appropriate testing type(s) and explain your reasoning.

Scenario 1: Your team just deployed a hotfix to production that changes the login authentication flow. The fix was for a critical security vulnerability. You have 30 minutes before the next deployment window closes.

What type(s) of testing should you perform and why?

Scenario 2: A healthcare application stores patient records, processes insurance claims, and generates medical reports. The application is subject to HIPAA regulations. A new version adds a prescription management feature.

List all the testing types that should be applied to this release, organized by priority.

Scenario 3: Your team uses SonarQube for code quality. The latest report for a pull request shows: 0 bugs, 0 vulnerabilities, 3 code smells (all minor), 45% code coverage on new code. The Quality Gate is set to require 80% coverage on new code.

Should the PR be merged? What actions should be taken?

Scenario 4: A mobile banking app has been receiving user complaints about “random crashes” that the QA team cannot reproduce using their scripted test cases. The crashes occur on various Android devices.

What testing approach(es) would you recommend to investigate these crashes?

Scenario 5: Your company is building a new e-commerce platform from scratch. The project is in the planning phase. You have been asked to define the testing strategy.

At which point in the SDLC should each type of testing begin? Create a timeline.

Solution — Scenario 1

Testing types: Smoke testing + targeted security testing + targeted regression testing

Reasoning:

  • Smoke testing first — verify the application starts, critical paths work, and the authentication flow functions. This takes 5-10 minutes.
  • Targeted security testing — specifically test the vulnerability that was fixed. Verify the fix actually closes the security hole. Test common bypass attempts. This takes 10-15 minutes.
  • Targeted regression testing — test the features most likely affected by the auth flow change: login, logout, session management, password reset, SSO if applicable. This takes 10-15 minutes.

You do NOT have time for a full regression suite. Prioritize risk: security fix verification > smoke test > regression of adjacent features.
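The risk-based prioritization above can be sketched as a simple time-budget selection: pick tests in priority order until the deployment window is used up. The test names, durations, and priority values below are hypothetical illustrations, not a prescribed tool.

```python
# Illustrative sketch: choosing what to test in a 30-minute hotfix window.
# Names, durations, and priorities are invented for this example.

def plan_hotfix_testing(tests, budget_minutes):
    """Greedily pick tests in priority order until the time budget runs out."""
    plan, remaining = [], budget_minutes
    for name, minutes, priority in sorted(tests, key=lambda t: t[2]):
        if minutes <= remaining:
            plan.append(name)
            remaining -= minutes
    return plan

tests = [
    # (name, estimated minutes, priority: lower = more critical)
    ("security fix verification", 12, 1),
    ("smoke test of critical paths", 8, 2),
    ("regression: login/logout/session", 10, 3),
    ("full regression suite", 120, 4),  # cannot fit the 30-minute window
]

print(plan_hotfix_testing(tests, 30))
```

The full regression suite is dropped automatically because it exceeds the remaining budget, which mirrors the "prioritize risk" rule above.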

Solution — Scenario 2

By priority:

Critical (must complete before release):

  1. Functional testing — All prescription management features work as specified
  2. Security testing — HIPAA compliance, data encryption, access controls, audit logging
  3. Regression testing — Existing patient records, insurance claims, and reports still work
  4. Integration testing — Prescription module integrates correctly with patient records and insurance

High priority:

  5. Performance testing — System handles expected concurrent users with the new feature
  6. Accessibility testing — Healthcare apps must be accessible (Section 508 compliance)
  7. Compatibility testing — Works across browsers/devices used by healthcare staff
  8. Usability testing — Healthcare workers can use the new feature efficiently

Medium priority:

  9. Recovery testing — System recovers from failures without data loss (patient data is critical)
  10. Exploratory testing — Investigate edge cases and unexpected interactions
  11. Static testing — Code review and static analysis for the new module

Should also include:

  12. UAT — Healthcare professionals validate the prescription workflow matches their needs

Solution — Scenario 3

Should the PR be merged? No — the Quality Gate fails due to 45% coverage on new code (below the 80% threshold).

Actions:

  1. The developer should write additional tests to bring new code coverage to at least 80%
  2. Review the 3 minor code smells — while they do not block the PR, they should be addressed to prevent tech debt accumulation
  3. Investigate why coverage is so low — is the code hard to test (possible design issue) or were tests simply not written?
  4. Do NOT lower the Quality Gate threshold — this sets a bad precedent
  5. After adding tests and fixing smells, re-run the SonarQube analysis to verify the Quality Gate passes
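The quality-gate decision in this scenario can be expressed as a small predicate: the gate fails if any condition is violated. This is an illustration of the logic only; the metric names and thresholds here are assumptions for the example, not SonarQube's actual API.

```python
# Illustrative quality-gate logic for this scenario. Metric names and
# thresholds are invented for the example, not SonarQube's real API.

def quality_gate_passes(metrics, min_new_coverage=80.0,
                        max_bugs=0, max_vulnerabilities=0):
    """Return (passed, reasons) for a simple quality gate."""
    reasons = []
    if metrics["new_code_coverage"] < min_new_coverage:
        reasons.append(
            f"new code coverage {metrics['new_code_coverage']}% "
            f"is below {min_new_coverage}%"
        )
    if metrics["bugs"] > max_bugs:
        reasons.append(f"{metrics['bugs']} bugs exceed the limit")
    if metrics["vulnerabilities"] > max_vulnerabilities:
        reasons.append(f"{metrics['vulnerabilities']} vulnerabilities exceed the limit")
    return (not reasons, reasons)

# The PR from the scenario: clean except for coverage.
pr_metrics = {"bugs": 0, "vulnerabilities": 0, "code_smells": 3,
              "new_code_coverage": 45.0}
passed, reasons = quality_gate_passes(pr_metrics)
print(passed)      # False
print(reasons[0])  # coverage is the single failing condition
```

Note that the minor code smells do not appear in the gate conditions, matching the scenario: they should be cleaned up, but coverage alone blocks the merge.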

Solution — Scenario 4

Recommended approaches:

  1. Monkey testing (immediate) — Use the Android Monkey tool to generate random events on the most-affected devices. This is the fastest way to reproduce random crashes:

```
adb shell monkey -p com.app.banking --throttle 200 -v -v 100000
```

  2. Exploratory testing with SBTM — Conduct focused sessions targeting:

    • Charter: Explore rapid navigation between screens, switching between features, and using the back button during transactions
    • Charter: Explore app behavior under poor network conditions on various Android devices

  3. Compatibility testing — Test on the specific device models and Android versions from crash reports

  4. Crash log analysis — Review Crashlytics/Sentry logs to identify patterns: specific Android versions, memory thresholds, specific user actions before crash

  5. Performance/stress testing on mobile — Some “random” crashes are memory-related. Profile memory usage during extended sessions.

  6. Grey-box testing — If crash logs point to specific services, verify those services handle edge cases correctly
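The core idea behind monkey testing is small enough to sketch in plain Python: fire a reproducible stream of random events at the app and watch for crashes. The `BankingScreen` stub and its event handlers below are hypothetical stand-ins for a real app under test; the Android Monkey tool does the same thing at the OS-event level.

```python
# Minimal sketch of monkey testing: random events against a component,
# with a fixed seed so any crash found is reproducible.
# BankingScreen is a hypothetical stub, not a real app driver.
import random

class BankingScreen:
    def __init__(self):
        self.screen = "home"
    def tap(self):
        self.screen = random.choice(["home", "transfer", "history"])
    def back(self):
        self.screen = "home"
    def rotate(self):
        pass  # a real app might crash here; the stub does not

def monkey_test(app, events=10_000, seed=42):
    """Replay a reproducible stream of random events; report any crash."""
    random.seed(seed)  # fixed seed: a found crash can be replayed exactly
    actions = [app.tap, app.back, app.rotate]
    for i in range(events):
        try:
            random.choice(actions)()
        except Exception as exc:  # crash found: report the event index
            return f"crash at event {i}: {exc!r}"
    return "no crash observed"

print(monkey_test(BankingScreen()))  # no crash observed
```

Seeding the random generator is the key detail: it turns "random crashes the QA team cannot reproduce" into a replayable event sequence.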

Solution — Scenario 5

Testing timeline for new e-commerce platform:

| SDLC Phase | Testing Activities |
| --- | --- |
| Requirements | Static testing: requirements reviews, acceptance criteria review |
| Design | Static testing: architecture review, design review, security threat modeling |
| Implementation | Unit testing (developers), static analysis (SonarQube in CI), code reviews |
| Integration | Integration testing, API contract testing, grey-box testing of service interactions |
| System testing | Functional testing, regression testing, smoke testing, exploratory testing |
| Non-functional | Performance testing, security testing (OWASP), accessibility testing, compatibility testing, usability testing |
| Pre-release | UAT with business stakeholders, E2E testing of critical journeys, reliability testing |
| Post-release | Production smoke testing, monitoring, chaos engineering, ongoing exploratory sessions |

Key principle: Testing starts in the requirements phase with static testing and progressively adds dynamic testing types as code becomes available. Shift-left: catch defects as early as possible.

Part 3: Practical Exercise — Build a Testing Strategy

You are the QA Lead for a new project: an online learning platform (similar to Coursera or Udemy). The platform includes:

  • User management: Registration, login (email + OAuth), profiles, subscription management
  • Course catalog: Search, filters, categories, course previews
  • Video player: Streaming, progress tracking, playback speed, subtitles
  • Quiz engine: Multiple choice, fill-in-the-blank, timed quizzes with scoring
  • Payment system: Credit card, PayPal, subscription billing, refunds
  • Notification system: Email, push notifications, in-app messages
  • Admin panel: Course creation, user management, analytics dashboard
  • Mobile apps: iOS and Android native apps

Create a comprehensive testing strategy document covering:

Part A (10 points): Testing levels — Which testing levels will you use? What percentage of test effort should go to each level? Justify your testing pyramid/trophy/diamond choice.

Part B (10 points): Testing types matrix — Create a matrix mapping features to testing types (functional, performance, security, usability, accessibility, compatibility, reliability).

Part C (10 points): Static vs. dynamic testing plan — When will you use static testing? When dynamic? What tools for each?

Part D (10 points): Exploratory testing plan — Which areas will get exploratory testing? Write 3 test charters. How will you manage sessions (SBTM)?

Grading Rubric

Part A (10 points):

  • Lists all relevant testing levels (2 pts)
  • Provides reasonable percentage allocation (3 pts)
  • Justifies the choice of pyramid/trophy/diamond with project context (5 pts)

Part B (10 points):

  • Matrix covers all 8 features (3 pts)
  • Includes all relevant testing types per feature (4 pts)
  • Prioritizes correctly (payment and video get more security/performance attention) (3 pts)

Part C (10 points):

  • Identifies appropriate static testing points in SDLC (3 pts)
  • Identifies appropriate dynamic testing types (3 pts)
  • Selects reasonable tools for both (2 pts)
  • Explains how they complement each other (2 pts)

Part D (10 points):

  • Identifies high-risk areas for exploration (2 pts)
  • Charters are well-formed (Explore/With/To discover) (4 pts)
  • SBTM plan includes session duration, metrics, and debrief cadence (4 pts)
Solution — Example Testing Strategy

Part A: Testing Levels

A testing trophy approach (emphasizing integration tests) is recommended, because the platform relies heavily on service interactions (video streaming, payment processing, notifications).

| Level | Effort % | Justification |
| --- | --- | --- |
| Unit | 25% | Business logic: quiz scoring, subscription calculations, search algorithms |
| Integration | 35% | Critical: payment gateway, video CDN, OAuth providers, notification services |
| System | 25% | UI workflows, course enrollment flow, admin panel operations |
| E2E | 10% | Critical journeys only: register → enroll → watch → complete quiz → certificate |
| UAT | 5% | Instructors validate course creation; learners validate learning experience |

Part B: Testing Types Matrix

| Feature | Functional | Performance | Security | Usability | Accessibility | Compatibility | Reliability |
| --- | --- | --- | --- | --- | --- | --- | --- |
| User management | High | Medium | Critical | High | High | High | Medium |
| Course catalog | High | High | Low | High | High | High | Medium |
| Video player | High | Critical | Medium | Critical | Critical | Critical | High |
| Quiz engine | High | Medium | Medium | High | High | Medium | High |
| Payment | Critical | Medium | Critical | High | Medium | High | Critical |
| Notifications | Medium | Medium | Low | Medium | Medium | High | High |
| Admin panel | High | Low | High | Medium | Medium | Medium | Low |
| Mobile apps | High | High | High | Critical | High | Critical | High |

Part C: Static vs. Dynamic Plan

Static testing:

  • Requirements phase: Review all user stories and acceptance criteria (informal review)
  • Design: Architecture review of video streaming and payment integration (technical review)
  • Implementation: SonarQube in CI pipeline (quality gate: 80% new code coverage, zero critical bugs/vulnerabilities), ESLint for frontend, code reviews for all PRs
  • Test plans: Peer review of test plans before execution

Dynamic testing:

  • Unit: Jest (frontend), pytest/JUnit (backend) — developers write, run in CI
  • Integration: Postman/Newman for API testing, contract tests with Pact
  • System: Playwright for web, Appium for mobile
  • Performance: k6 for load testing, Lighthouse for web performance
  • Security: OWASP ZAP for DAST, Semgrep for SAST
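At the unit level, the strategy above means developer-written tests over pure business logic such as quiz scoring. As a sketch, here is a hypothetical scoring function with a pytest-style test; `score_quiz` and its rules are invented for the example, not part of the platform's real code.

```python
# Hypothetical example of the unit level from the plan above: a
# quiz-scoring function plus a pytest-style test. All names are invented.

def score_quiz(answers, key, points_per_question=1):
    """Return the score for a multiple-choice quiz: points per correct answer."""
    return sum(points_per_question
               for question, given in answers.items()
               if key.get(question) == given)

def test_score_quiz_counts_only_correct_answers():
    key = {"q1": "a", "q2": "c", "q3": "b"}
    answers = {"q1": "a", "q2": "b", "q3": "b"}  # q2 is wrong
    assert score_quiz(answers, key) == 2

test_score_quiz_counts_only_correct_answers()  # pytest would collect this by name
```

In CI this runs on every commit, which is what makes the 25% unit-test share cheap to maintain relative to the slower system and E2E layers.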

Part D: Exploratory Testing Plan

High-risk areas: Video player (cross-browser, network conditions), Payment flow (edge cases, error recovery), Mobile apps (device fragmentation).

Charter 1: Explore the video player on mobile devices with network throttling, device rotation during playback, and background/foreground switching to discover playback failures, progress tracking loss, and buffering edge cases.

Charter 2: Explore the payment and refund flow with expired cards, insufficient funds, browser back button during payment, and concurrent subscription changes to discover payment state inconsistencies, double charges, and refund processing gaps.

Charter 3: Explore the quiz engine under time pressure with rapid answer switching, browser tab switching during timed quizzes, and network drops during submission to discover scoring errors, timer inconsistencies, and answer loss scenarios.

SBTM plan: 90-minute sessions, debrief within 1 hour of session completion, track bugs per session and % on charter, plan 2-3 sessions per sprint targeting the highest risk areas.
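The "% on charter" metric from the SBTM plan is simple to compute: the share of session time spent on the chartered work versus opportunity testing and setup. The activity split below follows common SBTM convention; the minute values are invented for illustration.

```python
# Sketch of the SBTM "% on charter" session metric. The charter/opportunity/
# setup split follows SBTM convention; the numbers are invented.

def percent_on_charter(minutes_by_activity):
    """Share of total session time spent on the charter itself."""
    total = sum(minutes_by_activity.values())
    return round(100 * minutes_by_activity["charter"] / total, 1)

session = {"charter": 60, "opportunity": 20, "setup": 10}  # 90-minute session
print(percent_on_charter(session))  # 66.7
```

Tracked per session, a falling "% on charter" signals that setup overhead or off-charter distractions are eating into the 90-minute window, which is exactly what the debrief is meant to catch.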

What’s Next

Congratulations on completing Module 2. You now have a comprehensive understanding of testing levels, types, and methods — the foundation for all specialized testing topics ahead.

Module 3: Test Design Techniques takes the black-box techniques introduced in Lesson 2.27 and explores them in depth: equivalence partitioning, boundary value analysis, decision tables, state transition testing, pairwise testing, and more. These techniques will make you a more effective tester regardless of the testing type or level you are working at.