What is Dynamic Testing?
Dynamic testing involves executing code to validate software behavior, functionality, and performance. Unlike static testing, which analyzes code without executing it (as discussed in Static Testing: Finding Defects Without Running Code), dynamic testing runs the application with specific inputs and verifies actual outputs against expected results.
Core Principle: Execute the software to verify it works as intended.
Dynamic vs Static Testing
| Aspect | Dynamic Testing | Static Testing |
|---|---|---|
| Execution | Code runs | Code analyzed without running |
| Timing | After implementation | During any phase |
| Focus | Behavior, outputs, performance | Structure, standards, logic |
| Defects Found | Functional bugs, runtime errors, performance issues | Design flaws, code smells, security vulnerabilities |
| Examples | Unit tests, integration tests, UAT | Code reviews, static analysis |
Both are essential: Dynamic and static testing complement each other for comprehensive quality assurance.
Types of Dynamic Testing
1. Unit Testing
Testing individual components (functions, methods, classes) in isolation (as discussed in White Box Testing: Looking Inside the Code).
Example: Python Unit Test
# calculator.py
def add(a, b):
    return a + b

def divide(a, b):
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

# test_calculator.py
import unittest
from calculator import add, divide

class TestCalculator(unittest.TestCase):
    def test_add_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

    def test_divide_normal(self):
        self.assertEqual(divide(10, 2), 5)

    def test_divide_by_zero(self):
        with self.assertRaises(ValueError):
            divide(10, 0)

if __name__ == '__main__':
    unittest.main()
Benefits:
- Fast execution
- Pinpoints exact failure location
- Enables refactoring with confidence
- Serves as living documentation
Best Practices:
- Test one thing per test
- Use descriptive test names
- Follow the AAA pattern (Arrange, Act, Assert); see the sketch after this list
- Aim for 80%+ code coverage
- Keep tests independent
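As a minimal sketch of the AAA pattern, here is the divide function from the example above tested with each phase made explicit:

import unittest
from calculator import divide

class TestDivideAAA(unittest.TestCase):
    def test_divide_returns_quotient(self):
        # Arrange: set up the inputs
        numerator, denominator = 10, 4
        # Act: call the code under test
        result = divide(numerator, denominator)
        # Assert: verify the outcome
        self.assertEqual(result, 2.5)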
2. Integration Testing
Testing interactions between integrated components or systems.
Approaches:
Big Bang Integration
- Integrate all components at once
- Test as a complete system
- Drawback: Hard to isolate failures
Incremental Integration
Top-Down:
- Start with top-level modules
- Add lower modules progressively
- Use stubs for modules not yet integrated (see the stub sketch after this list)
Bottom-Up:
- Start with lowest-level modules
- Add higher modules progressively
- Use drivers for modules not yet integrated
Sandwich/Hybrid:
- Combine top-down and bottom-up
- Test middle layer from both directions
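To make the stub idea concrete, here is a minimal sketch of top-down integration, where a hypothetical CheckoutService is tested before the real payment module exists (all class and method names are illustrative assumptions):

class PaymentServiceStub:
    """Stub replacing the not-yet-integrated payment module."""
    def charge(self, amount):
        # Return a canned success response so higher-level logic can be exercised.
        return {"status": "success", "transaction_id": "stub-123"}

class CheckoutService:
    """Top-level module under test; depends on the payment module."""
    def __init__(self, payment_service):
        self.payment_service = payment_service

    def checkout(self, cart_total):
        result = self.payment_service.charge(cart_total)
        return result["status"] == "success"

# The top-level module can now be integration-tested in isolation.
assert CheckoutService(PaymentServiceStub()).checkout(99.99) is True

A driver is the mirror image: a small piece of test code that calls a low-level module directly when the higher-level modules that would normally invoke it do not exist yet.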
Example: API Integration Test
import requests
import unittest

class TestUserAPI(unittest.TestCase):
    BASE_URL = "https://api.example.com"

    def test_create_and_retrieve_user(self):
        # Create user
        create_response = requests.post(
            f"{self.BASE_URL}/users",
            json={"name": "John Doe", "email": "john@example.com"}
        )
        self.assertEqual(create_response.status_code, 201)
        user_id = create_response.json()["id"]

        # Retrieve user
        get_response = requests.get(f"{self.BASE_URL}/users/{user_id}")
        self.assertEqual(get_response.status_code, 200)
        user = get_response.json()
        self.assertEqual(user["name"], "John Doe")
        self.assertEqual(user["email"], "john@example.com")

    def test_update_user(self):
        # Create, update, verify workflow
        user_id = self._create_test_user()
        update_response = requests.put(
            f"{self.BASE_URL}/users/{user_id}",
            json={"name": "Jane Doe"}
        )
        self.assertEqual(update_response.status_code, 200)
        updated_user = requests.get(f"{self.BASE_URL}/users/{user_id}").json()
        self.assertEqual(updated_user["name"], "Jane Doe")

    def _create_test_user(self):
        """Helper to create a test user."""
        response = requests.post(
            f"{self.BASE_URL}/users",
            json={"name": "Test User", "email": "test@example.com"}
        )
        return response.json()["id"]
3. System Testing
End-to-end testing of the complete integrated system against requirements.
Types:
Functional Testing (as discussed in Bug Anatomy: From Discovery to Resolution)
Verify the system performs its required functions.
Non-Functional Testing
- Performance: Response times, throughput
- Load: Behavior under expected load
- Stress: Behavior under extreme load
- Security: Vulnerability assessment
- Usability: User experience evaluation
- Compatibility: Cross-browser, cross-platform testing
Example: E2E System Test (Selenium)
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import unittest

class TestCheckoutFlow(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Chrome()
        self.driver.get("https://example-shop.com")

    def test_complete_purchase(self):
        driver = self.driver

        # 1. Search for product
        search_box = driver.find_element(By.ID, "search")
        search_box.send_keys("laptop")
        search_box.submit()

        # 2. Select product
        WebDriverWait(driver, 10).until(
            EC.element_to_be_clickable((By.CLASS_NAME, "product-item"))
        )
        driver.find_element(By.CLASS_NAME, "product-item").click()

        # 3. Add to cart
        add_to_cart_btn = driver.find_element(By.ID, "add-to-cart")
        add_to_cart_btn.click()

        # 4. Proceed to checkout
        driver.find_element(By.ID, "checkout-btn").click()

        # 5. Fill shipping info
        driver.find_element(By.ID, "name").send_keys("John Doe")
        driver.find_element(By.ID, "address").send_keys("123 Main St")
        driver.find_element(By.ID, "city").send_keys("New York")
        driver.find_element(By.ID, "zip").send_keys("10001")

        # 6. Select payment
        driver.find_element(By.ID, "payment-card").click()
        driver.find_element(By.ID, "card-number").send_keys("4111111111111111")
        driver.find_element(By.ID, "cvv").send_keys("123")

        # 7. Place order
        driver.find_element(By.ID, "place-order-btn").click()

        # 8. Verify success
        WebDriverWait(driver, 10).until(
            EC.presence_of_element_located((By.CLASS_NAME, "order-confirmation"))
        )
        confirmation = driver.find_element(By.CLASS_NAME, "order-confirmation").text
        self.assertIn("Order placed successfully", confirmation)

    def tearDown(self):
        self.driver.quit()
4. Acceptance Testing
Validates system meets business requirements and is ready for deployment.
Types:
User Acceptance Testing (UAT)
- Performed by end users or business stakeholders
- Real-world scenario testing
- Final validation before production
Operational Acceptance Testing (OAT)
- Tests operational readiness
- Backup/restore procedures
- Disaster recovery
- Maintenance tasks
UAT Example Scenario:
Feature: Online Banking Transfer

  Scenario: Transfer money between own accounts
    Given I am logged into online banking
    And I have a checking account with $1000
    And I have a savings account with $500
    When I transfer $200 from checking to savings
    Then my checking account balance should be $800
    And my savings account balance should be $700
    And I should see a transfer confirmation
    And I should receive a confirmation email
Dynamic Testing Techniques
1. Black Box Testing
Test without knowledge of internal code structure. Focus on inputs and outputs.
Techniques:
Equivalence Partitioning
Divide inputs into valid and invalid classes.
Example: Age field (valid: 18-65)
Test Cases:
- Invalid: Age < 18 (e.g., 15)
- Valid: 18 ≤ Age ≤ 65 (e.g., 30)
- Invalid: Age > 65 (e.g., 70)
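A minimal sketch of these partitions as unit tests, assuming a hypothetical validate_age function that implements the 18-65 rule:

import unittest

def validate_age(age):
    """Hypothetical validator: accepts ages 18-65 inclusive."""
    return 18 <= age <= 65

class TestAgePartitions(unittest.TestCase):
    def test_below_valid_partition(self):
        self.assertFalse(validate_age(15))  # invalid class: age < 18

    def test_valid_partition(self):
        self.assertTrue(validate_age(30))   # valid class: 18-65

    def test_above_valid_partition(self):
        self.assertFalse(validate_age(70))  # invalid class: age > 65

One representative value per partition is enough; any other value in the same class should behave identically.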
Boundary Value Analysis
Test at boundaries of input ranges.
Example: Age field (valid: 18-65)
Test Cases:
- 17 (just below minimum)
- 18 (minimum)
- 19 (just above minimum)
- 64 (just below maximum)
- 65 (maximum)
- 66 (just above maximum)
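Reusing the hypothetical validate_age from the sketch above, the six boundary values collapse into one table-driven test:

import unittest

class TestAgeBoundaries(unittest.TestCase):
    def test_boundary_values(self):
        cases = [
            (17, False),  # just below minimum
            (18, True),   # minimum
            (19, True),   # just above minimum
            (64, True),   # just below maximum
            (65, True),   # maximum
            (66, False),  # just above maximum
        ]
        for age, expected in cases:
            with self.subTest(age=age):
                self.assertEqual(validate_age(age), expected)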
Decision Table Testing
Test combinations of conditions.
Decision Table: Loan Approval
| Condition | Test 1 | Test 2 | Test 3 | Test 4 |
|-------------------|--------|--------|--------|--------|
| Credit Score ≥700 | Yes | Yes | No | No |
| Income ≥$50k | Yes | No | Yes | No |
| **Action** | Approve| Reject | Reject | Reject |
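Each column of the table becomes one test case. A minimal sketch, assuming a hypothetical approve_loan function that encodes the table's rules:

import unittest

def approve_loan(credit_score, income):
    """Hypothetical rule from the decision table above."""
    return "Approve" if credit_score >= 700 and income >= 50000 else "Reject"

class TestLoanDecisionTable(unittest.TestCase):
    def test_all_rules(self):
        # (credit_score, income, expected action) - one tuple per table column
        rules = [
            (720, 60000, "Approve"),  # Test 1
            (720, 40000, "Reject"),   # Test 2
            (650, 60000, "Reject"),   # Test 3
            (650, 40000, "Reject"),   # Test 4
        ]
        for score, income, expected in rules:
            with self.subTest(score=score, income=income):
                self.assertEqual(approve_loan(score, income), expected)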
2. White Box Testing
Test with knowledge of internal code structure. Focus on code paths and logic.
Techniques:
Statement Coverage
Execute every line of code at least once.
def check_eligibility(age, income):
    if age >= 18:               # Line 2
        if income >= 50000:     # Line 3
            return "Eligible"   # Line 4
    return "Not Eligible"       # Line 5

# Tests for 100% statement coverage:
check_eligibility(20, 60000)  # Executes lines 2, 3, 4
check_eligibility(15, 30000)  # Executes lines 2, 5
Branch Coverage
Execute every branch (true/false) of conditions.
# Requires tests where:
# - age >= 18 is True and income >= 50000 is True
# - age >= 18 is True and income >= 50000 is False
# - age >= 18 is False
Path Coverage
Execute all possible paths through code.
# Requires tests for all combinations:
# Path 1: age >= 18, income >= 50000
# Path 2: age >= 18, income < 50000
# Path 3: age < 18
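For this small function, full branch coverage and full path coverage happen to require the same three tests, since each branch decision leads to a distinct path:

# Three calls that cover every branch and every path of check_eligibility:
check_eligibility(20, 60000)  # age >= 18 True,  income >= 50000 True  -> "Eligible"
check_eligibility(20, 30000)  # age >= 18 True,  income >= 50000 False -> "Not Eligible"
check_eligibility(15, 30000)  # age >= 18 False                        -> "Not Eligible"

In code with sequential rather than nested conditions, paths multiply combinatorially, so path coverage generally demands far more tests than branch coverage.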
3. Grey Box Testing
Combines black box and white box approaches. Partial knowledge of internals.
Use Cases:
- Integration testing with knowledge of APIs
- Database testing with SQL knowledge
- Security testing with architecture awareness
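As a sketch of the grey-box style, the test below drives the system through its public API (black box) and then verifies the persisted state directly in the database using knowledge of the schema (white box). The endpoint, table name, and database file are all assumptions for illustration:

import sqlite3
import unittest
import requests

class TestUserPersistence(unittest.TestCase):
    BASE_URL = "https://api.example.com"  # assumed endpoint

    def test_created_user_is_persisted(self):
        # Black-box step: create a user through the public API.
        response = requests.post(
            f"{self.BASE_URL}/users",
            json={"name": "Test User", "email": "test@example.com"}
        )
        self.assertEqual(response.status_code, 201)
        user_id = response.json()["id"]

        # White-box step: check the row directly, using schema knowledge.
        conn = sqlite3.connect("app.db")  # assumed local database file
        row = conn.execute(
            "SELECT name, email FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        conn.close()
        self.assertEqual(row, ("Test User", "test@example.com"))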
Performance Testing (Dynamic)
Load Testing
Verify system handles expected user load.
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 3)

    @task(3)  # 3x weight
    def view_homepage(self):
        self.client.get("/")

    @task(1)
    def view_product(self):
        self.client.get("/products/123")

    @task(2)
    def add_to_cart(self):
        self.client.post("/cart", json={"product_id": 123, "quantity": 1})

# Run: locust -f loadtest.py --users 1000 --spawn-rate 10
Stress Testing
Push system beyond normal limits to find breaking point.
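Locust can script this too: a custom LoadTestShape keeps adding users in steps until the system fails or a ceiling is reached. A minimal sketch (the step sizes are arbitrary assumptions):

from locust import HttpUser, task, LoadTestShape

class StressUser(HttpUser):
    @task
    def view_homepage(self):
        self.client.get("/")

class StepRampShape(LoadTestShape):
    """Add 100 users every 30 seconds, stopping after 20 steps."""
    step_users = 100
    step_time = 30
    max_steps = 20

    def tick(self):
        run_time = self.get_run_time()
        step = int(run_time // self.step_time) + 1
        if step > self.max_steps:
            return None  # returning None stops the test
        return (step * self.step_users, self.step_users)

The breaking point shows up as the user count at which error rates spike or response times exceed acceptable limits.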
Endurance Testing
Test system stability over extended period.
# Example: Endurance test checking for memory leaks
import psutil
import time
import requests

def endurance_test(url, duration_hours=24):
    end_time = time.time() + (duration_hours * 3600)
    # Note: psutil reports memory for the machine running this script, so the
    # check is meaningful when the service under test runs on the same host.
    initial_memory = psutil.virtual_memory().used

    while time.time() < end_time:
        response = requests.get(url)
        assert response.status_code == 200
        time.sleep(60)  # Request every minute

        # Check memory growth
        current_memory = psutil.virtual_memory().used
        memory_growth = current_memory - initial_memory
        print(f"Memory growth: {memory_growth / 1024 / 1024:.2f} MB")

endurance_test("https://api.example.com/health", duration_hours=24)
Dynamic Testing Best Practices
✅ Automate repetitive tests: Use frameworks (JUnit, pytest, Selenium)
✅ Follow test pyramid: Many unit tests, fewer integration tests, few E2E tests
✅ Isolate tests: Each test independent, no shared state
✅ Use meaningful assertions: Clear failure messages
# Bad
assert result == 5
# Good
assert result == 5, f"Expected discount to be 5%, got {result}%"
✅ Test both positive and negative scenarios: Happy path + error cases
✅ Maintain test data: Clean, consistent test data sets
✅ Run tests in CI/CD: Automated execution on every commit
✅ Monitor test execution time: Keep tests fast (unit < 1s, integration < 10s)
Common Pitfalls
❌ Flaky tests: Tests that pass/fail inconsistently
- Solution: Remove timing dependencies, use explicit waits, fix race conditions
❌ Over-reliance on E2E tests: Slow, brittle, expensive
- Solution: Follow test pyramid, unit test core logic
❌ Testing implementation, not behavior: Tests break on refactoring
- Solution: Test outcomes, not internal methods
❌ Ignoring test maintenance: Outdated tests provide false confidence
- Solution: Review and update tests regularly
❌ No test data strategy: Tests fail due to data issues
- Solution: Implement test data management (next article topic!)
Dynamic Testing Metrics
| Metric | Description | Target |
|---|---|---|
| Test Coverage | % of code executed by tests | 80%+ |
| Pass Rate | % of tests passing | 95%+ |
| Test Execution Time | Time to run full test suite | < 10 min (CI) |
| Defect Detection Rate | Bugs found per test hour | Varies |
| Mean Time to Detection | Time from defect introduction to detection | Minimize |
Conclusion
Dynamic testing validates that software works correctly by actually running it—an essential complement to static testing’s analytical approach. From unit tests verifying individual functions to system tests validating end-to-end workflows, dynamic testing provides confidence that software behaves as intended under real conditions.
Key Takeaways:
- Dynamic testing executes code to verify behavior, unlike static testing
- Multiple levels: Unit, integration, system, acceptance testing
- Both black box and white box techniques have their place
- Automation is critical: Use testing frameworks and CI/CD integration
- Follow test pyramid: More unit tests, fewer E2E tests
- Complements static testing: Use both for comprehensive quality assurance
Invest in a robust dynamic testing strategy with automated tests at multiple levels, and you’ll catch defects early, enable confident refactoring, and deliver reliable software that meets user needs.