The Exploratory Testing Dilemma
Exploratory testing is powerful—it leverages human creativity and domain knowledge to find bugs that scripted tests miss. Yet it faces criticism for being unstructured, difficult to manage, and impossible to measure. How do you track progress when there’s no predefined test case list? How do you know when you’re done? How do you justify the time spent?
Session-Based Test Management (SBTM), pioneered by Jon Bach and James Bach, solves this by bringing structure to exploration without sacrificing the benefits of investigative testing. It provides a framework that makes exploratory testing trackable, manageable, and measurable.
What Is Session-Based Test Management?
SBTM organizes exploratory testing into time-boxed sessions guided by test charters. Each session focuses on a specific mission and typically lasts 60-120 minutes (90 minutes is the common default). During the session the tester records findings on a session sheet; afterward, those findings are reviewed in a structured debrief.
Core Components
1. Test Charter: A mission statement defining what to explore
2. Session: Time-boxed testing activity (typically 90 minutes)
3. Session Sheet: Documentation template capturing testing activities
4. Debriefing: Post-session review and metrics collection
5. Metrics: Data-driven insights into testing progress and coverage
Test Charters: Defining the Mission
A test charter specifies what to test and why, providing focus without rigid scripts. The charter format typically follows this structure:
Charter: Explore [target]
With [resources]
To discover [information]
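Teams that manage their charter backlog in code can map this three-part format onto a small record type. A minimal sketch in Python; the `Charter` class and its field names are illustrative, not part of any SBTM standard:

```python
from dataclasses import dataclass

@dataclass
class Charter:
    """One exploratory mission in the Explore / With / To discover format."""
    target: str       # what to explore
    resources: str    # tools, data, or configurations to use
    information: str  # what the session should reveal

    def __str__(self) -> str:
        return (f"Charter: Explore {self.target}\n"
                f"With {self.resources}\n"
                f"To discover {self.information}")

checkout = Charter(
    target="the checkout workflow",
    resources="various payment methods (credit card, PayPal, Apple Pay)",
    information="how the system handles payment failures and retries",
)
print(checkout)
```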
Examples of Well-Crafted Charters
Example 1: E-Commerce Checkout
Charter: Explore the checkout workflow
With various payment methods (credit card, PayPal, Apple Pay)
To discover how the system handles payment failures and retries
Example 2: Mobile App Performance
Charter: Explore the photo upload feature
With different network conditions (3G, 4G, WiFi, offline)
To discover performance degradation and error handling
Example 3: API Integration
Charter: Explore the user authentication API
With invalid tokens, expired sessions, and concurrent requests
To discover security vulnerabilities and edge case handling
Example 4: Data Migration
Charter: Explore the legacy system data import
With corrupted CSV files, unexpected data types, and large datasets
To discover data validation failures and performance bottlenecks
Charter Scope Guidelines
Good Charters:
- Focused enough for a 90-minute session
- Clear target and objectives
- Testable and actionable
- Aligned with current project risks
Avoid:
- Too broad: “Test the entire application”
- Too narrow: “Verify login button is blue”
- Vague: “Play around with the system”
- Duplicate: Repeating already-covered areas without reason
The SBTM Session Structure
A typical 90-minute session breaks down into:
Setup (5-10 minutes):
- Review charter
- Prepare test data, tools, environment
- Set session objectives
Testing (70-80 minutes):
- Execute exploratory testing guided by charter
- Document findings in real-time
- Follow interesting paths
- Note bugs, questions, risks
Wrap-up (5-10 minutes):
- Summarize findings
- Complete session sheet
- Identify follow-up areas
Managing Session Time
SBTM sessions are meant to run uninterrupted. If an interruption is unavoidable, pause the session clock until testing resumes. This keeps time tracking accurate and preserves the tester's flow state.
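If your tooling has no built-in session timer, pause/resume is easy to approximate. A minimal sketch; the `SessionClock` class is hypothetical:

```python
import time

class SessionClock:
    """Tracks on-session testing time, excluding interruptions."""

    def __init__(self) -> None:
        self._elapsed = 0.0       # seconds of counted testing time
        self._started_at = None   # None while the clock is paused

    def start(self) -> None:
        if self._started_at is None:
            self._started_at = time.monotonic()

    def pause(self) -> None:
        """Call when interrupted; paused time is not counted."""
        if self._started_at is not None:
            self._elapsed += time.monotonic() - self._started_at
            self._started_at = None

    def minutes(self) -> float:
        running = 0.0
        if self._started_at is not None:
            running = time.monotonic() - self._started_at
        return (self._elapsed + running) / 60
```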
Session Types:
- Normal Session: Full testing time dedicated to the charter
- Short Session: < 60 minutes (when time is limited)
- Long Session: > 120 minutes (for complex integrations; break into multiple sessions if possible)
The Session Sheet Template
The session sheet is the core artifact of SBTM. It captures what was tested, what was found, and time allocation.
Standard Session Sheet Format
## Session Information
Charter: [Charter text]
Tester: [Name]
Date: [Date]
Start Time: [Time]
Duration: [X minutes]
Task Breakdown: [Test%, Bug%, Setup%]
## Areas Tested
- Login flow with SSO integration
- Password reset functionality
- Session timeout behavior
- Multi-factor authentication
## Test Notes
- Tested with Google, Microsoft, and Okta SSO providers
- Verified token refresh mechanism
- Explored edge case: expired SSO session during app usage
- Checked behavior with disabled MFA vs enabled MFA
## Bugs Found
BUG-1234: SSO login fails silently when Okta returns 503 error
BUG-1235: Session timeout prompt appears behind modal, making it unreachable
BUG-1236: MFA code accepts 5 digits instead of required 6
## Questions/Risks
Q: What happens if user changes password while session is active?
Q: Is there rate limiting on password reset requests?
RISK: No logging for failed SSO attempts, difficult to diagnose user issues
## Issues/Blockers
- Okta test environment was down for 20 minutes
- Unable to test Microsoft SSO due to missing test tenant
## Data Files
- test_users.csv (100 user accounts with various states)
- sso_tokens_invalid.json (expired and malformed tokens)
## Metrics
Session Duration: 90 minutes
Charter vs Opportunity: 70% charter, 30% opportunity
Test vs Bug vs Setup: 75% test, 15% bug investigation, 10% setup
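Because the sheet follows a fixed heading structure, it can be parsed programmatically for later aggregation. A minimal sketch, assuming sheets are saved as markdown files using the `##` headings shown above:

```python
def parse_session_sheet(text: str) -> dict[str, list[str]]:
    """Split a session sheet into {section heading: non-empty content lines}."""
    sections: dict[str, list[str]] = {}
    current = "Preamble"  # lines before the first heading, if any
    for line in text.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = []
        elif line.strip():
            sections.setdefault(current, []).append(line.strip())
    return sections

# Example: count the bugs recorded in one sheet
# sheet = parse_session_sheet(open("session_2025-10-05.md").read())
# print(len(sheet.get("Bugs Found", [])))
```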
Task Breakdown: The TBS Ratio
SBTM tracks time allocation across three categories:
- Test: Time spent actively testing
- Bug: Time spent investigating and reporting bugs
- Setup: Time spent preparing environment, data, tools
Typical Ratios:
- Healthy: 70% Test, 20% Bug, 10% Setup
- Concerning: 40% Test, 10% Bug, 50% Setup (environment issues)
- Bug-heavy: 50% Test, 45% Bug, 5% Setup (unstable build)
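Computing these percentages from a session's raw time log is simple arithmetic. A sketch, assuming each activity is logged as a (category, minutes) pair:

```python
def tbs_ratio(entries: list[tuple[str, int]]) -> dict[str, int]:
    """entries: (category, minutes) pairs, category in {'test', 'bug', 'setup'}."""
    totals = {"test": 0, "bug": 0, "setup": 0}
    for category, minutes in entries:
        totals[category] += minutes
    session_total = sum(totals.values())
    return {cat: round(100 * mins / session_total) for cat, mins in totals.items()}

log = [("setup", 9), ("test", 50), ("bug", 14), ("test", 17)]
print(tbs_ratio(log))  # {'test': 74, 'bug': 16, 'setup': 10}
```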
Charter vs. Opportunity
Not all testing follows the charter. Testers might discover interesting tangents worth exploring.
- Charter Time: Testing aligned with the session charter
- Opportunity Time: Testing tangential but interesting findings
Example: The charter focuses on payment processing, but the tester notices suspicious CORS errors in the browser console. Spending 20 minutes investigating the CORS issue is “opportunity” testing.
Ideal Balance: 70-80% charter, 20-30% opportunity. Too much opportunity suggests poorly scoped charters or highly unstable software.
The Debriefing Process
After each session, the tester and test manager conduct a brief debrief (5-15 minutes). This serves multiple purposes:
Debrief Objectives
- Validate Findings: Are bugs reproducible? Is the description clear?
- Identify Follow-ups: Do we need additional sessions for this area?
- Share Knowledge: What did we learn? What patterns emerged?
- Adjust Strategy: Should we reprioritize charters based on findings?
Debrief Questions
For the Tester:
- What did you find most interesting?
- What surprised you?
- What would you test differently next time?
- What risks did you identify?
For the Manager:
- Is the charter still relevant?
- Should we invest more/less in this area?
- Are there dependencies blocking progress?
- What support does the tester need?
Debrief Outputs
- Charter Updates: Refine or retire charters based on learnings
- New Charters: Spawn follow-up missions from findings
- Bug Prioritization: Triage severity and assign to developers
- Coverage Tracking: Update test coverage maps
SBTM Metrics and Reporting
Unlike scripted testing, SBTM metrics focus on coverage and learning rather than pass/fail counts.
Key Metrics
1. Session Count: Number of completed sessions per area/feature
2. Coverage: Which areas received testing attention
3. Bug Discovery Rate: Bugs found per session hour
4. Session Distribution: Time allocation across different charters
5. Opportunity %: How much testing deviated from charters
Session Coverage Map
Visualize where testing effort has been invested:
| Feature Area | Sessions | Total Hours | Bugs Found | Last Tested |
| --- | --- | --- | --- | --- |
| User Auth | 8 | 12h | 14 | 2025-10-05 |
| Payment | 12 | 18h | 23 | 2025-10-06 |
| Shipping | 4 | 6h | 5 | 2025-10-03 |
| Admin Panel | 2 | 3h | 7 | 2025-10-01 |
| Reporting | 0 | 0h | 0 | Never |
Insights:
- Payment processing received most attention (high risk area)
- Admin panel shows high bug density (7 bugs in 2 sessions)
- Reporting area untested (schedule sessions)
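A map like this can be generated directly from completed session records rather than maintained by hand. A minimal sketch, assuming each session is stored as a record with an area, duration, bug count, and ISO date:

```python
from collections import defaultdict

def coverage_map(sessions: list[dict]) -> dict[str, dict]:
    """Aggregate per-area session counts, hours, bug totals, and last-tested date."""
    areas: dict[str, dict] = defaultdict(
        lambda: {"sessions": 0, "hours": 0.0, "bugs": 0, "last_tested": None}
    )
    for s in sessions:
        row = areas[s["area"]]
        row["sessions"] += 1
        row["hours"] += s["minutes"] / 60
        row["bugs"] += s["bugs"]
        # ISO dates (YYYY-MM-DD) compare correctly as strings
        if row["last_tested"] is None or s["date"] > row["last_tested"]:
            row["last_tested"] = s["date"]
    return dict(areas)

sessions = [
    {"area": "User Auth", "minutes": 90, "bugs": 2, "date": "2025-10-05"},
    {"area": "Payment", "minutes": 90, "bugs": 3, "date": "2025-10-06"},
]
print(coverage_map(sessions))
```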
Bug Discovery Trends
Track bugs found per session hour over time:
- Week 1: 3.2 bugs/hour (many low-hanging fruit)
- Week 2: 2.1 bugs/hour (major issues found)
- Week 3: 0.8 bugs/hour (diminishing returns)
- Week 4: 0.5 bugs/hour (approaching stability)
Decision Point: At week 4, shift testing focus to unexplored areas or features.
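The trend itself is just bugs found divided by session hours, grouped by week. A short sketch over the same record format used above, with an added 'week' label:

```python
from collections import defaultdict

def bugs_per_hour_by_week(sessions: list[dict]) -> dict[str, float]:
    """sessions: records with 'week', 'minutes', and 'bugs' fields."""
    bugs: dict[str, int] = defaultdict(int)
    hours: dict[str, float] = defaultdict(float)
    for s in sessions:
        bugs[s["week"]] += s["bugs"]
        hours[s["week"]] += s["minutes"] / 60
    return {week: round(bugs[week] / hours[week], 1) for week in bugs}
```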
Productivity Metrics
Session Efficiency:
- Average session duration: 87 minutes (target: 90)
- Setup time %: 12% (target: < 15%)
- Sessions cancelled due to blockers: 8% (investigate environment stability)
Real-World SBTM Implementation
Case Study: Fintech Mobile App Launch
Context: 6-week testing window before launch, complex app with payment processing, biometric auth, and real-time stock trading.
SBTM Approach:
Week 1: Charter Generation
Created 45 charters organized by risk:
- High Risk (15 charters): Payment processing, security, data integrity
- Medium Risk (20 charters): UI workflows, performance, integration
- Low Risk (10 charters): Settings, help screens, animations
Weeks 2-4: Execution Phase
- 2 testers, 6 sessions/day each
- Total: 252 sessions completed
- Found 127 bugs (0.5 bugs/session on average)
Session Distribution:
- 60% high-risk areas
- 30% medium-risk areas
- 10% low-risk areas
Week 5: Targeted Deep Dives
Based on early findings, created focused charters:
- “Explore race conditions in concurrent trading” (spawned after intermittent crash)
- “Explore payment reversal scenarios” (spawned after refund bug)
- “Explore biometric fallback paths” (spawned after authentication edge case)
Week 6: Regression + Final Sweep
- Re-tested high-risk areas with fresh charters
- Verified all critical bugs fixed
- Explored previously low-priority areas
Results:
- Launched on schedule with 96% of critical bugs resolved
- Post-launch defect rate: 0.3 bugs/1000 users (industry avg: 2.5)
- Testing coverage clearly documented for audit trail
- Management visibility into testing progress throughout
Key Success Factors:
- Clear charters prioritized by business risk
- Daily debriefs kept team aligned
- Metrics showed when to shift focus
- Flexibility to pursue high-value tangents
Rapid Software Testing and SBTM
Rapid Software Testing (RST), developed by James Bach and Michael Bolton, is a methodology that heavily incorporates SBTM principles.
RST Core Principles
- Testing is learning: Focus on discovery, not just verification
- Skilled humans are essential: Tools support, but don’t replace, thinking
- Rapid feedback: Shorten cycles between testing and development
- Heuristic-driven: Use rules of thumb, not rigid processes
- Context-driven: Adapt approach to project needs
SBTM in the RST Framework
RST uses SBTM as the primary mechanism for organizing exploratory work:
- Test Strategy: Defined by charter themes
- Test Execution: Organized into sessions
- Test Reporting: Captured via session sheets and debriefs
- Test Management: Tracked through session metrics
RST Heuristics for Charter Creation
SFDPOT (San Francisco Depot) - Dimensions to explore:
- Structure: Code, architecture, data structures
- Function: What the product does
- Data: Inputs, outputs, state
- Platform: OS, browser, hardware
- Operations: How users interact
- Time: Concurrency, sequences, timing
Example Charter Using SFDPOT:
Charter: Explore the shopping cart (Function)
With rapid add/remove operations (Time)
On mobile Safari (Platform)
To discover state management bugs
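The dimensions can also be crossed mechanically to seed a draft charter backlog, which a human then prunes and prioritizes by risk. A sketch; the example values are illustrative:

```python
import itertools

functions = ["the shopping cart", "the checkout workflow", "order history"]
timings = ["rapid add/remove operations", "concurrent sessions", "slow networks"]
platforms = ["mobile Safari", "Chrome on Android", "desktop Firefox"]

# Cross three SFDPOT dimensions into candidate charters (27 drafts here);
# the discovery goal is refined per charter during review.
for func, timing, platform in itertools.product(functions, timings, platforms):
    print(f"Charter: Explore {func} (Function)\n"
          f"With {timing} (Time)\n"
          f"On {platform} (Platform)\n"
          f"To discover state management bugs\n")
```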
Tools Supporting SBTM
SessionTester
Open-source tool for managing SBTM sessions:
- Charter library management
- Session timer with pause/resume
- Integrated note-taking
- Automatic metrics calculation
- Team dashboard
RapidReporter
Lightweight session note-taking tool:
- Simple text-based interface
- Automatic timestamping
- Markdown export
- Minimal overhead
Spreadsheet-Based Tracking
Many teams use simple spreadsheets:
Columns:
- Session ID
- Date
- Tester
- Charter
- Duration
- Areas Tested
- Bugs Found
- Notes
- Test%/Bug%/Setup%
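Appending a finished session to such a spreadsheet takes only a few lines. A minimal sketch using CSV as the storage format, with the combined Test%/Bug%/Setup% column split into three fields for easier sorting:

```python
import csv
import os

FIELDS = ["Session ID", "Date", "Tester", "Charter", "Duration",
          "Areas Tested", "Bugs Found", "Notes", "Test%", "Bug%", "Setup%"]

def append_session(path: str, session: dict) -> None:
    """Append one completed session row to the tracking CSV."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()  # write the header row once
        writer.writerow(session)
```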
Integration with Test Management Systems
Export session data to:
- Jira (create tickets from session sheets; see the sketch after this list)
- TestRail (link sessions to test plans)
- Confluence (maintain charter wiki)
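For the Jira case, a sketch of turning one "Bugs Found" line into a ticket via Jira's REST API v2; the instance URL, credentials, and project key are placeholders, and error handling is minimal:

```python
import requests

JIRA_URL = "https://example.atlassian.net"  # placeholder instance
AUTH = ("qa-bot@example.com", "api-token")  # placeholder credentials

def create_issue_from_session(bug_line: str, charter: str) -> str:
    """Create a Jira bug from one 'Bugs Found' line of a session sheet."""
    payload = {
        "fields": {
            "project": {"key": "QA"},  # placeholder project key
            "issuetype": {"name": "Bug"},
            "summary": bug_line,
            "description": f"Found during SBTM session.\nCharter: {charter}",
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "QA-1234"
```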
Best Practices for SBTM Success
1. Start with Clear Charters
Invest time in charter quality:
- Review charters as a team
- Prioritize based on risk
- Update charters as software evolves
- Retire irrelevant charters
2. Keep Sessions Time-Boxed
Respect the 90-minute limit:
- Prevents fatigue
- Maintains focus
- Enables accurate tracking
- Forces prioritization
3. Debrief Consistently
Never skip debriefs:
- Validates findings immediately
- Transfers knowledge
- Adjusts strategy in real-time
- Maintains team alignment
4. Use Metrics to Guide, Not Judge
Metrics inform decisions, not performance reviews:
- Low bug count might mean stable software, not poor testing
- High setup % might indicate environment issues, not tester inefficiency
- Variance in opportunity % is expected and valuable
5. Balance Charter and Opportunity
Embrace the 70/30 split:
- Charter ensures coverage
- Opportunity enables discovery
- Too much charter = missing important tangents
- Too much opportunity = unfocused testing
6. Document Enough, Not Everything
Session sheets should be:
- Detailed enough for reproducibility
- Concise enough for quick review
- Focused on findings, not play-by-play
7. Iterate on Charters
Charters aren’t static:
- Spawn new charters from findings
- Combine redundant charters
- Split overly broad charters
- Retire completed charters
Combining SBTM with Other Approaches
SBTM complements rather than replaces other testing methodologies:
SBTM + Scripted Testing
- Use scripted tests for regression and compliance
- Use SBTM for new features and risk areas
- Let SBTM findings inform new automated tests
SBTM + Test Automation
- Automate stable, repeatable workflows
- Use SBTM for exploratory investigation
- SBTM identifies edge cases to automate
SBTM + BDD/Specification by Example
- Use BDD for documented requirements
- Use SBTM for scenarios beyond specifications
- SBTM discovers missing specifications
Conclusion: Structure Enables Freedom
The paradox of SBTM is that structure enhances rather than constrains exploratory testing. By organizing work into focused sessions, documenting findings systematically, and tracking metrics consistently, SBTM makes exploratory testing:
- Manageable: Clear visibility into what’s being tested
- Measurable: Data-driven insights into coverage and effectiveness
- Valuable: Documented findings justify investment
- Sustainable: Prevents tester burnout through time-boxing
When stakeholders ask “What have you tested?”, SBTM provides clear answers. When managers ask “Are we done?”, metrics show coverage and bug discovery trends. When testers ask “Where should I focus?”, charters provide direction.
SBTM proves that exploratory testing can be both rigorous and creative, structured and adaptive, measurable and investigative. It’s not scripted testing in disguise—it’s exploratory testing done professionally.
Start with simple charters, run focused sessions, debrief honestly, and let the metrics guide your strategy. The result is exploratory testing that earns respect through results, not rhetoric.