The Technique Selection Problem

You have learned over 20 test design techniques across this module. Equivalence partitioning, boundary value analysis, decision tables, state transitions, pairwise testing, MC/DC, path coverage, mutation testing, and more. The challenge is no longer “what techniques exist?” but “which technique should I use right now?”

Choosing the wrong technique wastes effort. Using EP on a stateful protocol misses transition bugs. Using state transition testing on a calculation engine misses boundary defects. Effective testers match the technique to the problem.

Decision Framework

Step 1: What Type of Feature Are You Testing?

| Feature Type | Best-Fit Techniques |
| --- | --- |
| Input validation (forms, fields) | Equivalence partitioning + BVA |
| Business rules with conditions | Decision tables |
| Workflows, protocols, sessions | State transition testing |
| Configuration/compatibility | Pairwise testing |
| Calculations, formulas | Domain analysis + BVA |
| Text search, pattern matching | Equivalence partitioning + error guessing |
| APIs with multiple parameters | Combinatorial testing |
| Critical algorithms (finance, safety) | MC/DC + path coverage |
| Complex user journeys | Use case testing + state transitions |

Step 2: What Information Do You Have?

| Available Information | Applicable Techniques |
| --- | --- |
| Requirements/specifications only | Black-box: EP, BVA, decision tables, state transitions |
| Source code available | White-box: statement/decision coverage, path coverage, MC/DC |
| No documentation | Experience-based: error guessing, exploratory testing |
| Formal model exists | Model-based testing |
| Historical defect data | Risk-based: focus techniques on high-defect areas |

Step 3: What Is the Risk Level?

| Risk Level | Recommended Approach |
| --- | --- |
| Safety-critical | MC/DC + domain analysis + mutation testing to validate tests |
| Financial/regulatory | Decision tables + BVA + combinatorial testing |
| Core business logic | EP + BVA + state transitions + path coverage |
| Standard features | EP + BVA + error guessing |
| Low-risk/cosmetic | Error guessing + checklist-based |

Technique Mapping by Category

Data Input Testing

When testing how a system handles input data:

  1. Start with equivalence partitioning — identify valid and invalid classes
  2. Apply BVA — test boundaries of each class
  3. Add domain analysis — if multiple inputs interact
  4. Use error guessing — add tests for common input mistakes (empty, null, special characters, very long strings)
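
Steps 1 and 2 can be sketched in a few lines. The field name and length limits below are hypothetical, chosen only to illustrate the pattern of one test per equivalence class plus the boundary values around each edge:

```python
# Hypothetical rule: a username is valid when 3..20 characters long.
MIN_LEN, MAX_LEN = 3, 20

def is_valid_length(username: str) -> bool:
    """Three equivalence classes: too short, valid, too long."""
    return MIN_LEN <= len(username) <= MAX_LEN

def boundary_lengths(lo: int, hi: int) -> list[int]:
    """Two-value BVA: each boundary plus its nearest invalid neighbour."""
    return [lo - 1, lo, hi, hi + 1]

# EP: one representative per class...
assert not is_valid_length("ab")       # invalid: too short
assert is_valid_length("alice")        # valid
assert not is_valid_length("x" * 21)   # invalid: too long

# ...then BVA: the four boundary lengths of the valid class.
for n in boundary_lengths(MIN_LEN, MAX_LEN):
    assert is_valid_length("x" * n) == (MIN_LEN <= n <= MAX_LEN)
```

Error guessing then adds cases the partitioning does not suggest: empty string, whitespace-only, unicode, and very long inputs.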

Business Logic Testing

When testing rules that determine system behavior:

  1. Start with decision tables — map all condition combinations to actions
  2. Add state transitions — if behavior depends on previous state
  3. Apply cause-effect graphing — if conditions have complex dependencies
  4. Use combinatorial testing — if many parameters interact
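
A decision table translates almost directly into a test suite: one case per rule column. The shipping-fee rule below is invented for illustration; the point is that the table itself is the oracle, and the tests simply walk its columns:

```python
# Hypothetical rule: (is member?, order >= 50?) determines the shipping fee.
DECISION_TABLE = {
    # (is_member, big_order): fee
    (True,  True):  0.0,   # rule 1: free shipping
    (True,  False): 2.5,   # rule 2: member rate
    (False, True):  0.0,   # rule 3: free over threshold
    (False, False): 5.0,   # rule 4: standard fee
}

def shipping_fee(is_member: bool, order_total: float) -> float:
    return DECISION_TABLE[(is_member, order_total >= 50)]

# One test per rule column exercises every condition combination.
assert shipping_fee(True, 80) == 0.0
assert shipping_fee(True, 20) == 2.5
assert shipping_fee(False, 80) == 0.0
assert shipping_fee(False, 20) == 5.0
```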

Structural Testing (White-Box)

When testing code coverage:

  1. Start with statement coverage — basic minimum
  2. Add decision coverage — test both branches of every decision
  3. Apply MC/DC — if safety-critical
  4. Use path coverage — for critical algorithms
  5. Validate with mutation testing — ensure tests are actually effective
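
Step 3 is easiest to see on a concrete compound decision. The guard function below is hypothetical; the test set shows the MC/DC property that each of the three conditions independently flips the outcome while the others are held fixed, which needs only n+1 = 4 cases instead of all 8 combinations:

```python
def should_deploy(tests_pass: bool, approved: bool, hotfix: bool) -> bool:
    """Hypothetical guard with the compound decision a and (b or c)."""
    return tests_pass and (approved or hotfix)

# MC/DC set: each condition shown to independently affect the outcome.
mcdc_cases = [
    # (tests_pass, approved, hotfix) -> expected
    ((True,  True,  False), True),   # baseline
    ((False, True,  False), False),  # only tests_pass flipped vs. baseline
    ((True,  False, False), False),  # only approved flipped vs. baseline
    ((True,  False, True),  True),   # only hotfix flipped vs. previous case
]
for args, expected in mcdc_cases:
    assert should_deploy(*args) == expected
```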

Integration Testing

When testing how components interact:

  1. State transition testing — for protocol-based interactions
  2. Pairwise testing — for configuration combinations
  3. Use case testing — for end-to-end workflows
  4. Data flow testing — for tracking data through components
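
For configuration combinations, the payoff of pairwise testing is concrete: every *pair* of parameter values appears in some test, with far fewer tests than the full cartesian product. The greedy generator below is a simplified sketch (real tools such as PICT or ACTS do this better), and the browser/OS/locale parameters are invented:

```python
from itertools import combinations, product

def all_pairs(params):
    """Every value pair across every pair of parameter positions."""
    return {
        (i, a, j, b)
        for (i, vi), (j, vj) in combinations(enumerate(params), 2)
        for a, b in product(vi, vj)
    }

def greedy_pairwise(params):
    """Greedy sketch: repeatedly pick the full combination that covers
    the most still-uncovered pairs. Not minimal, but much smaller than
    the full cartesian product."""
    uncovered, suite = all_pairs(params), []
    idx_pairs = list(combinations(range(len(params)), 2))
    while uncovered:
        best = max(
            product(*params),
            key=lambda c: sum((i, c[i], j, c[j]) in uncovered for i, j in idx_pairs),
        )
        suite.append(best)
        for i, j in idx_pairs:
            uncovered.discard((i, best[i], j, best[j]))
    return suite

# Hypothetical configuration space: 3 x 2 x 2 = 12 full combinations.
params = [["chrome", "firefox", "safari"], ["win", "mac"], ["en", "de"]]
suite = greedy_pairwise(params)
assert len(suite) < 12  # pairwise coverage with fewer than all combinations
```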

Real-World Decision Examples

Example 1: Login Form

  • Username field: EP (valid/invalid formats) + BVA (min/max length)
  • Password field: EP (meets/doesn’t meet rules) + BVA (length bounds)
  • Login button behavior: State transitions (locked after 3 failures)
  • Overall: Error guessing (SQL injection, XSS, empty fields)
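
The lockout rule lends itself to a tiny state machine test. The class below is a minimal sketch, not a real authentication API; the threshold of three failures comes from the example above, and the state names are illustrative:

```python
class LoginSession:
    """Sketch: three consecutive failures move the session to LOCKED."""
    MAX_FAILURES = 3

    def __init__(self):
        self.state = "IDLE"
        self.failures = 0

    def attempt(self, password_ok: bool) -> str:
        if self.state == "LOCKED":
            return self.state  # invalid event: no transition out of LOCKED
        if password_ok:
            self.state, self.failures = "AUTHENTICATED", 0
        else:
            self.failures += 1
            self.state = "LOCKED" if self.failures >= self.MAX_FAILURES else "IDLE"
        return self.state

# State transition tests: cover each transition, including the
# invalid "correct password while locked" event.
s = LoginSession()
assert s.attempt(False) == "IDLE"     # failure 1
assert s.attempt(False) == "IDLE"     # failure 2
assert s.attempt(False) == "LOCKED"   # failure 3 triggers lockout
assert s.attempt(True) == "LOCKED"    # correct password does not unlock
assert LoginSession().attempt(True) == "AUTHENTICATED"
```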

Example 2: Insurance Quote Calculator

  • Premium calculation: Decision tables (age, coverage, history rules)
  • Input ranges: BVA + Domain analysis (age, income boundaries)
  • Rate tiers: EP (standard, preferred, high-risk classes)
  • Discount combinations: Pairwise testing (multi-policy, good driver, etc.)
  • Critical calculations: Path coverage + Mutation testing

Example 3: E-commerce Checkout

  • Cart states: State transition testing (empty, has items, checkout, ordered)
  • Payment methods: Pairwise testing (method x currency x amount range)
  • Shipping rules: Decision tables (weight, destination, speed)
  • Coupon validation: EP + BVA (expired, min purchase, one-time use)
  • End-to-end flow: Use case testing
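
The cart states above can be written down as an explicit transition table, which makes both the valid paths and the invalid events testable. The states and event names below are illustrative, not a real checkout API:

```python
# Cart lifecycle from the example, as a (state, event) -> next-state table.
TRANSITIONS = {
    ("empty",     "add_item"):         "has_items",
    ("has_items", "add_item"):         "has_items",
    ("has_items", "remove_last_item"): "empty",
    ("has_items", "begin_checkout"):   "checkout",
    ("checkout",  "confirm_payment"):  "ordered",
    ("checkout",  "cancel"):           "has_items",
}

def apply(state: str, event: str) -> str:
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid event {event!r} in state {state!r}")

# Valid path: the happy flow exercises one chain of transitions.
state = "empty"
for event in ["add_item", "begin_checkout", "confirm_payment"]:
    state = apply(state, event)
assert state == "ordered"

# Invalid transition: checking out an empty cart must be rejected.
try:
    apply("empty", "begin_checkout")
    assert False, "expected rejection"
except ValueError:
    pass
```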

Exercise: Technique Selection

Problem 1

For each feature below, select the primary and secondary test design techniques. Justify your choices.

  1. A tax calculation engine that applies different rates based on income brackets, filing status, deductions, and state of residence
  2. A music player that supports play, pause, skip, shuffle, repeat, and queue management
  3. A search function that accepts text queries with optional filters (date range, category, sort order)
  4. An elevator control system for a 20-floor building with multiple elevators
  5. A password strength meter that evaluates length, character diversity, common patterns, and dictionary words
Solution
  1. Tax calculation:

    • Primary: Decision tables — complex rules with many conditions
    • Secondary: BVA — income bracket boundaries; Domain analysis — multi-variable boundaries interact; Path coverage — verify calculation paths
  2. Music player:

    • Primary: State transition testing — player has clear states (stopped, playing, paused) with events
    • Secondary: Pairwise testing — combinations of shuffle/repeat settings; Error guessing — corrupt files, empty playlist
  3. Search function:

    • Primary: Equivalence partitioning — valid/invalid queries, result categories
    • Secondary: Pairwise testing — filter combinations; BVA — date range boundaries; Error guessing — empty queries, special characters, SQL injection
  4. Elevator control:

    • Primary: State transition testing — elevator states (idle, moving up, moving down, doors open)
    • Secondary: Model-based testing — complex state interactions between multiple elevators; Combinatorial testing — floor request combinations
  5. Password strength meter:

    • Primary: Equivalence partitioning — strength categories (weak, medium, strong)
    • Secondary: BVA — length thresholds; Decision tables — character type combinations; Error guessing — common passwords, unicode, empty string
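
The password strength solution combines EP categories with BVA on the length thresholds. The rule below is a deliberately simplified stand-in (length only, with assumed thresholds of 8 and 12); a real meter would also score character diversity and dictionary words:

```python
def strength(password: str) -> str:
    """Hypothetical length-only rule: < 8 weak, 8-11 medium, >= 12 strong."""
    n = len(password)
    if n < 8:
        return "weak"
    if n < 12:
        return "medium"
    return "strong"

# EP: one representative per strength category.
assert strength("x" * 4) == "weak"
assert strength("x" * 9) == "medium"
assert strength("x" * 15) == "strong"

# BVA: both sides of each threshold.
assert [strength("x" * n) for n in (7, 8, 11, 12)] == \
    ["weak", "medium", "medium", "strong"]
```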

Problem 2

You are the QA lead for a new feature: a hotel booking system. The system must handle room search (dates, guests, room type), pricing (dynamic rates, discounts, taxes), reservation management (create, modify, cancel), and payment processing.

Create a testing strategy document mapping each sub-feature to specific test design techniques. Include your rationale.

Solution
| Sub-Feature | Primary Technique | Secondary Technique | Rationale |
| --- | --- | --- | --- |
| Room search dates | BVA | EP | Date inputs have clear boundaries (check-in before check-out, no past dates) |
| Guest count | BVA + EP | Error guessing | Boundaries (min 1, max per room), invalid values (0, negative, very large) |
| Room type selection | EP | Pairwise | Categories of rooms; combinations of room type + dates + guests |
| Dynamic pricing | Decision tables | Domain analysis | Complex rules (season, demand, day of week); multi-variable boundaries |
| Discount application | Decision tables | BVA | Rules for when discounts apply; discount amount boundaries |
| Tax calculation | BVA + decision tables | Path coverage | Jurisdictional rules; boundary amounts; verify calculation logic |
| Reservation lifecycle | State transitions | Use case testing | States: pending, confirmed, modified, cancelled; event sequences matter |
| Modify reservation | State transitions | EP | Valid/invalid modifications from each state |
| Cancel reservation | State transitions | Decision tables | Cancellation policies (refund rules based on timing) |
| Payment processing | State transitions | Error guessing | Payment states (pending, authorized, captured, refunded); edge cases (timeout, double-charge) |
| Search + book flow | Use case testing | Exploratory testing | End-to-end happy path and alternative paths |
| Configuration | Pairwise | Checklist-based | Browser/device combinations |

Anti-Patterns in Technique Selection

Using only one technique. Teams that apply only EP everywhere miss state-dependent bugs and boundary defects.

Skipping experience-based techniques. Formal techniques cannot cover everything. Error guessing and exploratory testing find the “weird” bugs.

Over-engineering low-risk features. Applying MC/DC to a marketing page is waste. Match rigor to risk.

Ignoring white-box techniques entirely. Even if you test from the outside, structural coverage data reveals gaps.

Key Takeaways

  • No single technique is sufficient — effective testing requires combining techniques
  • Match technique to feature type: state-dependent → state transitions, rules → decision tables, inputs → EP+BVA
  • Risk level determines rigor: safety-critical needs MC/DC, standard features need EP+BVA
  • Available information constrains choices: no code = black-box only, no spec = experience-based
  • Always supplement formal techniques with error guessing and exploratory testing
  • Build a technique selection habit — for every feature, consciously ask “which technique fits best?”