What is Static Testing?
Static testing examines software artifacts (code, requirements, design documents, test cases) without executing the code. Unlike dynamic testing where the application runs, static testing analyzes work products through manual reviews or automated tools to identify defects, inconsistencies, and quality issues.
Key Principle: Find defects as early as possible, ideally before code execution.
Static vs Dynamic Testing
Aspect | Static Testing | Dynamic Testing |
---|---|---|
Code Execution | No execution required | Requires running code |
When Applied | Early stages (requirements, design, coding) | After implementation |
Focus | Structure, syntax, standards, logic | Behavior, functionality, performance |
Defects Found | Logic errors, standard violations, security flaws | Functional bugs, integration issues, performance problems |
Cost of Defects | Low (found early) | Higher (found later) |
Tools | Linters, static analyzers, review checklists | Test frameworks, monitoring tools |
Examples | Code review, requirements inspection | Unit testing, integration testing |
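To make the contrast concrete, here is a small hypothetical Python snippet. A linter flags the mutable default argument in `add_item` without ever executing the code, while the sign error in `apply_discount` passes most static checks and only surfaces when a dynamic test runs the function.

```python
import unittest


def add_item(item, items=[]):  # a linter flags the mutable default argument without running the code
    items.append(item)
    return items


def apply_discount(price, rate):
    # Passes typical static checks, but the sign is wrong:
    # the discount is added instead of subtracted.
    return price + price * rate


class ApplyDiscountTest(unittest.TestCase):
    def test_ten_percent_discount(self):
        # Dynamic testing: the defect only surfaces when the code actually runs.
        self.assertEqual(apply_discount(100, 0.1), 90)  # fails; the function returns 110.0


if __name__ == "__main__":
    unittest.main()
```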
Benefits of Static Testing
✅ Early defect detection: Find issues before code runs, reducing fix costs by 10-100x
✅ Prevents defect injection: Catches issues in requirements/design before implementation
✅ Improves quality: Enforces coding standards and best practices
✅ Knowledge sharing: Team members learn through reviews
✅ Reduces testing time: Fewer defects reach dynamic testing phases
✅ Finds issues dynamic testing misses: Logic flaws, security vulnerabilities, maintainability problems
Types of Static Testing
1. Reviews
Informal Reviews
Characteristics:
- No formal process or documentation
- Quick feedback from peers
- Minimal preparation
Example:
Developer: "Hey, can you quickly look at this function?"
Reviewer: "Sure. *Looks for 5 minutes* Consider extracting this repeated logic and adding error handling for null inputs."
Walkthrough
Characteristics:
- Author leads the review session
- Team members ask questions and provide feedback
- Educational purpose
Process:
- Author presents code/document
- Team asks clarifying questions
- Issues identified informally
- Author takes notes for follow-up
Technical Review
Characteristics:
- Formal process with defined roles
- Focuses on technical correctness
- Documented outcomes
Roles:
- Moderator: Facilitates meeting
- Author: Presents work
- Reviewers: Provide technical feedback
- Scribe: Documents findings
Inspection (Formal Review)
Characteristics:
- Highly structured process
- Multiple phases (planning, preparation, meeting, rework, follow-up)
- Metrics collected (defects/page, review rate)
- Most effective but time-intensive
Inspection Process:
1. Planning
- Define scope and objectives
- Assign roles
- Distribute materials
2. Overview Meeting (optional)
- Author presents context
3. Individual Preparation
- Reviewers study materials
- Note defects using checklist
4. Inspection Meeting
- Discuss findings
- Classify defects
- Make decisions (accept/revise/reject)
5. Rework
- Author addresses defects
6. Follow-up
- Verify fixes
- Collect metrics
2. Static Code Analysis
Automated tools scan source code to detect:
- Syntax errors: Code that won’t compile/run
- Semantic errors: Logically incorrect code
- Code smells: Poor design patterns
- Security vulnerabilities: SQL injection, XSS, buffer overflows
- Complexity issues: High cyclomatic complexity
- Code standard violations: Formatting, naming conventions
Popular Tools
Language | Tools |
---|---|
JavaScript/TypeScript | ESLint (with typescript-eslint; TSLint is deprecated), SonarQube |
Python | Pylint, Flake8, Bandit (security), mypy (type checking) |
Java | SonarQube, Checkstyle, PMD, SpotBugs |
C/C++ | Clang Static Analyzer, Cppcheck, PVS-Studio |
C#/.NET | Roslyn analyzers (successor to FxCop), StyleCop.Analyzers, SonarQube |
Go | go vet, staticcheck, revive (replacement for the deprecated golint) |
Example: Static Analysis with Pylint
```python
# buggy_code.py
def calculate_discount(price, discount):
    if discount > 1:
        raise ValueError("Discount cannot exceed 100%")
    return price - (price * discount)

# Unused variable
unused_var = 42

# Missing docstring
def process_order(order_id):
    return order_id * 2
```
Running Pylint:
```text
$ pylint buggy_code.py
************* Module buggy_code
buggy_code.py:8:0: W0612: Unused variable 'unused_var' (unused-variable)
buggy_code.py:11:0: C0116: Missing function or method docstring (missing-function-docstring)
------------------------------------------------------------------
Your code has been rated at 6.67/10
```
Fixed Code:
"""Module for discount calculations."""
def calculate_discount(price, discount):
"""
Calculate discounted price.
Args:
price: Original price
discount: Discount percentage (0.0 to 1.0)
Returns:
Discounted price
Raises:
ValueError: If discount exceeds 100%
"""
if discount > 1:
raise ValueError("Discount cannot exceed 100%")
return price - (price * discount)
def process_order(order_id):
"""Process order by ID (placeholder logic)."""
return order_id * 2
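Pylint focuses on correctness and style conventions; security-oriented analyzers such as Bandit (listed in the table above) look for dangerous patterns instead. A minimal hypothetical example of code that Bandit typically flags (exact check IDs depend on the Bandit version and configuration):

```python
# insecure_example.py -- hypothetical patterns a security scanner such as Bandit flags
import hashlib
import subprocess

DB_PASSWORD = "hunter2"  # hardcoded credential (Bandit B105)


def run_backup(path):
    # shell=True with interpolated input allows command injection (Bandit B602)
    subprocess.call(f"tar czf backup.tgz {path}", shell=True)


def hash_token(token):
    # MD5 is a weak hash for security purposes (Bandit B324)
    return hashlib.md5(token.encode()).hexdigest()
```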
3. Requirements Review
Checklist:
✅ Completeness: All necessary information included?
✅ Correctness: Requirements accurate and feasible?
✅ Consistency: No contradictions within/between requirements?
✅ Clarity: Unambiguous language, no room for misinterpretation?
✅ Testability: Can requirements be verified through testing?
✅ Traceability: Requirements linked to business goals?
Example Review:
Requirement: "The system should be fast."
Issues Found:
- ❌ Not measurable - what is "fast"?
- ❌ Not testable - no clear acceptance criteria
Improved Requirement:
"The system shall load the dashboard within 2 seconds for 95% of requests under normal load (1000 concurrent users)."
- ✅ Measurable (2 seconds)
- ✅ Testable (can verify with performance tests)
- ✅ Clear acceptance criteria
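Because the improved requirement is measurable, it maps almost directly onto an automated check. A minimal sketch, assuming latency samples have already been collected from a load test (the sample values below are made up):

```python
def p95(samples):
    """Return the 95th-percentile value from a list of latency samples (seconds)."""
    ordered = sorted(samples)
    index = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[index]


def meets_dashboard_requirement(latencies_seconds, threshold_seconds=2.0):
    """Dashboard shall load within 2 seconds for 95% of requests under normal load."""
    return p95(latencies_seconds) <= threshold_seconds


# Hypothetical samples collected from a load test with 1000 concurrent users
samples = [0.8, 1.1, 1.4, 1.9, 2.5, 1.2, 1.0, 1.7, 1.3, 0.9]
print(meets_dashboard_requirement(samples))  # False: the 2.5 s outlier pushes p95 above 2 s
```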
4. Design Review
Focus Areas:
- Architecture: System structure, components, interfaces
- Scalability: Can the design handle growth?
- Security: Are there vulnerabilities in the design?
- Maintainability: Is the design easy to modify and extend?
- Performance: Does the design support performance requirements?
Example:
Design: Single-server deployment for e-commerce platform
Review Findings:
- ❌ Single point of failure (no redundancy)
- ❌ Cannot scale horizontally
- ❌ No load balancing
- ❌ Database and app server on same machine (resource contention)
Recommendation:
- ✅ Multi-server architecture with load balancer
- ✅ Separate database tier
- ✅ Caching layer (Redis) for frequent queries (see the sketch after this list)
- ✅ CDN for static assets
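As an illustration of the caching recommendation above, here is a minimal cache-aside sketch using the redis-py client; the connection settings and the `fetch_product_from_db` helper are assumptions for the example, not part of the reviewed design:

```python
import json

import redis  # pip install redis

# Assumed local Redis instance; adjust host/port for a real deployment
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)


def fetch_product_from_db(product_id):
    # Placeholder for the real (slow) database query
    return {"id": product_id, "name": "Example product", "price": 19.99}


def get_product(product_id, ttl_seconds=300):
    """Cache-aside: serve frequent reads from Redis, fall back to the database."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    product = fetch_product_from_db(product_id)
    cache.set(key, json.dumps(product), ex=ttl_seconds)  # expire to avoid stale data
    return product
```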
Static Testing Best Practices
1. Establish Clear Objectives
Know what you’re looking for:
- Security vulnerabilities?
- Code standards compliance?
- Logic errors?
- Maintainability issues?
2. Use Checklists
Code Review Checklist Example:
```markdown
## Functionality
- [ ] Code implements requirements correctly
- [ ] Edge cases handled
- [ ] Error handling present

## Security
- [ ] Input validation performed
- [ ] SQL injection prevented
- [ ] Sensitive data not logged

## Performance
- [ ] No unnecessary loops
- [ ] Database queries optimized
- [ ] Caching used appropriately

## Maintainability
- [ ] Code follows team standards
- [ ] Functions/classes have single responsibility
- [ ] Magic numbers replaced with constants
- [ ] Adequate documentation present

## Testing
- [ ] Unit tests included
- [ ] Test coverage > 80%
- [ ] Tests are meaningful (not just coverage fillers)
```
3. Integrate into CI/CD
Automate static analysis in your pipeline:
```yaml
# .github/workflows/static-analysis.yml
name: Static Analysis

on: [push, pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.9"

      - name: Install dependencies
        run: |
          pip install pylint flake8 bandit

      - name: Run Pylint
        run: pylint src/ --fail-under=8.0

      - name: Run Flake8
        run: flake8 src/ --max-complexity=10

      - name: Run Bandit (Security)
        run: bandit -r src/ -lll
```
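So that the CI gate is not the first place developers see failures, the same checks can also be run locally before pushing. A small hypothetical wrapper script (assumes pylint, flake8, and bandit are installed in the environment):

```python
#!/usr/bin/env python3
"""Run the same static checks locally that CI enforces (hypothetical helper)."""
import subprocess
import sys

CHECKS = [
    ["pylint", "src/", "--fail-under=8.0"],
    ["flake8", "src/", "--max-complexity=10"],
    ["bandit", "-r", "src/", "-lll"],
]


def main():
    failed = False
    for command in CHECKS:
        print(f"$ {' '.join(command)}")
        result = subprocess.run(command)  # non-zero exit code means the tool reported findings
        if result.returncode != 0:
            failed = True
    sys.exit(1 if failed else 0)


if __name__ == "__main__":
    main()
```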
4. Foster Positive Review Culture
❌ Bad: “This code is terrible. Did you even try?”
✅ Good: “Consider extracting this logic into a helper function for better readability. What do you think?”
Guidelines:
- Critique code, not people
- Explain the “why” behind suggestions
- Offer alternatives, not just criticism
- Balance positive and constructive feedback
5. Track Metrics
Measure static testing effectiveness:
Metric | Description | Target |
---|---|---|
Defect Detection Rate | Defects found per review hour | Varies by project |
Review Coverage | % of code reviewed before merge | 100% |
Defect Density | Defects per 1000 lines of code | < 5 |
Fix Time | Time to address review findings | < 1 day |
Escaped Defects | Issues found in prod that reviews missed | Minimize |
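A short sketch of how a team might compute some of these metrics from its own review data (the input numbers below are purely illustrative):

```python
def defect_detection_rate(defects_found, review_hours):
    """Defects found per review hour."""
    return defects_found / review_hours


def review_coverage(reviewed_changes, total_changes):
    """Percentage of merged changes that went through review."""
    return 100.0 * reviewed_changes / total_changes


def defect_density(defects_found, lines_of_code):
    """Defects per 1000 lines of code (KLOC)."""
    return 1000.0 * defects_found / lines_of_code


# Illustrative numbers for one sprint
print(f"Detection rate: {defect_detection_rate(12, 8):.1f} defects/hour")   # 1.5
print(f"Review coverage: {review_coverage(47, 50):.0f}%")                   # 94%
print(f"Defect density: {defect_density(9, 4200):.1f} per KLOC")            # 2.1
```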
Common Static Testing Pitfalls
Pitfall 1: Reviewing Too Much at Once
Problem: Reviewing 1000+ lines of code in one session leads to fatigue and missed defects.
Solution: Limit review size to 200-400 lines per session. Break large changes into smaller, reviewable chunks.
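One way to make the size limit visible is a small helper that measures the pending diff before a review is requested; a sketch assuming the code lives in a Git repository with a `main` branch:

```python
import subprocess


def changed_lines(base_branch="main"):
    """Count added + deleted lines in the current branch relative to base_branch."""
    output = subprocess.run(
        ["git", "diff", "--numstat", base_branch],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in output.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added != "-":  # binary files report "-" instead of a line count
            total += int(added) + int(deleted)
    return total


size = changed_lines()
if size > 400:
    print(f"Change is {size} lines; consider splitting it into smaller reviews.")
```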
Pitfall 2: Treating Static Analysis as Optional
Problem: Static analysis warnings ignored or bypassed.
Solution: Treat static analysis failures as build failures. Set quality gates in CI/CD.
Pitfall 3: Not Acting on Findings
Problem: Reviews identify issues, but no follow-up action taken.
Solution: Track review findings in ticketing system. Don’t close review until issues addressed.
Pitfall 4: Over-Reliance on Automation
Problem: Assuming automated tools catch everything.
Solution: Combine automated analysis with manual reviews. Tools miss context-specific issues.
Pitfall 5: Skipping Requirements/Design Review
Problem: Only reviewing code, missing upstream defects.
Solution: Review requirements and design documents. Defects found here are cheapest to fix.
Static Testing ROI
Cost of Defect by Phase:
Phase Found | Relative Cost |
---|---|
Requirements | 1x |
Design | 5x |
Implementation | 10x |
Testing | 15x |
Production | 100x |
Example Calculation:
Scenario: 100 defects in a project
Without Static Testing:
- 70 found during implementation and debugging (10x) = 700 units
- 20 found in testing (15x) = 300 units
- 10 found in production (100x) = 1000 units
- Total Cost: 2000 units
With Static Testing:
- 70 found in requirements/design review (1x-5x, ≈4x average) = 280 units
- 20 found in code review (10x) = 200 units
- 10 found in testing (15x) = 150 units
- 0 found in production
- Total Cost: 630 units
Savings: ~68% reduction in defect costs
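The same arithmetic can be scripted so a team can plug in its own defect distribution; a minimal sketch using the multipliers from the table above (the ≈4x figure is the blended 1x-5x average assumed in the example):

```python
# Relative cost multipliers; "early_review" is the blended 1x-5x average used above
COST = {
    "early_review": 4,
    "code_review": 10,
    "implementation": 10,
    "testing": 15,
    "production": 100,
}


def total_cost(defects_by_phase):
    """Sum relative cost units for a {phase: defect_count} distribution."""
    return sum(COST[phase] * count for phase, count in defects_by_phase.items())


without_static = total_cost({"implementation": 70, "testing": 20, "production": 10})
with_static = total_cost({"early_review": 70, "code_review": 20, "testing": 10})
print(without_static, with_static)                           # 2000 630
print(f"Savings: {1 - with_static / without_static:.1%}")    # Savings: 68.5%
```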
Integration with Dynamic Testing
Complementary Approach:
```mermaid
graph TD
    A[Requirements] --> B[Requirements Review - Static]
    B --> C[Design]
    C --> D[Design Review - Static]
    D --> E[Implementation]
    E --> F[Code Review - Static]
    E --> G[Static Analysis - Automated]
    F --> H[Unit Tests - Dynamic]
    G --> H
    H --> I[Integration Tests - Dynamic]
    I --> J[System Tests - Dynamic]
```
Workflow:
- Static: Review requirements
- Static: Review design
- Static: Code review + static analysis
- Dynamic: Unit tests
- Dynamic: Integration tests
- Static + Dynamic: Continuous review and testing
Conclusion
Static testing is a critical component of comprehensive quality assurance, enabling early defect detection at a fraction of the cost of finding issues in later phases. By combining manual reviews with automated static analysis, teams can prevent defects, enforce standards, and improve overall code quality.
Key Takeaways:
- Static testing finds defects without executing code, examining artifacts through reviews and analysis
- Early detection saves costs: Defects found in requirements/design are 10-100x cheaper to fix
- Multiple techniques: Reviews, inspections, static analysis tools
- Automation is essential: Integrate static analysis into CI/CD pipelines
- Complements dynamic testing: Use both for comprehensive coverage
- Foster positive culture: Constructive feedback, not criticism
- Track and improve: Measure effectiveness and refine processes
Invest in static testing early and consistently. The time spent reviewing requirements, designs, and code pays dividends by preventing costly defects from reaching production and improving team knowledge and code quality.