In software testing, three terms are often confused: Smoke Testing, Sanity Testing, and Regression Testing. While they may seem similar, each serves a distinct purpose in the software development lifecycle, and understanding when and how to use each type is crucial for efficient testing.

This guide will clarify the differences, provide practical examples, and help you decide which testing type to use in different scenarios.

Quick Comparison

| Aspect | Smoke Testing | Sanity Testing | Regression Testing |
|--------|---------------|----------------|--------------------|
| Purpose | Verify build stability | Verify specific fixes/changes | Ensure no new bugs in existing features |
| Scope | Broad but shallow | Narrow and deep | Broad and deep |
| When | After a new build | After minor changes/bug fixes | After any code change |
| Executed By | Developers or QA | QA team | QA team (often automated) |
| Documented | Usually not | Usually not | Yes, formal test cases |
| Automated | Often | Rarely | Highly recommended |
| Time | 15-30 minutes | 30-60 minutes | Hours to days |
| Depth | Surface-level | Focused deep dive | Comprehensive |

Smoke Testing: “Can We Even Start Testing?”

What is Smoke Testing?

Smoke Testing (also called Build Verification Testing or Confidence Testing) is a preliminary check to determine whether a new build is stable enough for further testing. It’s like powering on a machine to confirm it starts at all before running full diagnostics.

The term comes from hardware testing: if you turn on a device and it starts smoking, something is seriously wrong.

Key Characteristics

  • Quick: 15-30 minutes max
  • Shallow: Tests only critical, high-level functionality
  • Go/No-Go Decision: Pass = continue testing, Fail = reject build
  • Broad Coverage: Touches many features but doesn’t go deep
  • Entry Point: First testing done on a new build

What Smoke Testing Covers

  • ✅ Application launches successfully

  • ✅ Login functionality works

  • ✅ Critical pages/screens load

  • ✅ Navigation between main modules works

  • ✅ No critical crashes or blockers

  • ❌ Detailed functionality

  • ❌ Edge cases

  • ❌ Data validation

  • ❌ Complex workflows

Smoke Testing Example

E-commerce Application Smoke Test (20 minutes)

1. Application Startup
   ✓ Application loads without errors
   ✓ Homepage displays correctly

2. User Authentication
   ✓ Login page accessible
   ✓ Can login with valid credentials
   ✓ Logout works

3. Core Functions (Happy Path Only)
   ✓ Search bar accepts input
   ✓ Product page loads
   ✓ Add to cart works
   ✓ View cart page displays
   ✓ Checkout page accessible

4. Critical APIs
   ✓ User service responds (200 OK)
   ✓ Product service responds (200 OK)
   ✓ Payment gateway responds (200 OK)

Decision: If ANY of these fail → Reject build, send back to dev
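
The “Critical APIs” step is easy to script. Below is a minimal sketch in Python using the requests library; the health-endpoint URLs are assumptions and would be replaced with your actual service endpoints.

# API smoke check (sketch - the endpoint URLs below are hypothetical)
import sys
import requests

SERVICES = {
    "user": "https://staging.app.com/api/users/health",
    "product": "https://staging.app.com/api/products/health",
    "payment": "https://staging.app.com/api/payments/health",
}

def smoke_check() -> bool:
    all_ok = True
    for name, url in SERVICES.items():
        try:
            status = requests.get(url, timeout=5).status_code
        except requests.RequestException as exc:
            status = f"error ({exc})"
        passed = status == 200
        all_ok = all_ok and passed
        print(f"{name} service: {status} -> {'PASS' if passed else 'FAIL'}")
    return all_ok

if __name__ == "__main__":
    # A non-zero exit code lets a CI pipeline reject the build automatically
    sys.exit(0 if smoke_check() else 1)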

Smoke Test Checklist Template

# Smoke Test - Build #v2.5.3
Date: 2025-10-02 | Tester: QA Lead

## Environment
- URL: https://staging.app.com
- Browser: Chrome 120

## Results

| Module | Test | Status | Notes |
|--------|------|--------|-------|
| Startup | App loads | ✅ Pass | |
| Auth | Login works | ✅ Pass | |
| Auth | Logout works | ✅ Pass | |
| Products | Search works | ✅ Pass | |
| Products | Product page loads | ✅ Pass | |
| Cart | Add to cart | ✅ Pass | |
| Checkout | Checkout page loads | ✅ Pass | |

**Decision**: ✅ BUILD ACCEPTED - Proceed with full testing

---

If any critical test fails:
**Decision**: ❌ BUILD REJECTED - Return to development

When to Use Smoke Testing

  • ✅ After every new build deployment
  • ✅ Before starting a full test cycle
  • ✅ After merging major feature branches
  • ✅ In CI/CD pipelines (automated smoke tests)
  • ✅ When deciding whether a build is testable

Smoke Testing Automation Example

// Automated Smoke Test Suite (Playwright)
import { test, expect } from '@playwright/test';

test.describe('Smoke Tests - E-commerce App', () => {
  test('Application should load homepage', async ({ page }) => {
    await page.goto('https://staging.app.com');
    await expect(page).toHaveTitle(/E-Commerce/);
  });

  test('User should be able to login', async ({ page }) => {
    await page.goto('https://staging.app.com/login');
    await page.fill('#email', 'test@example.com');
    await page.fill('#password', 'Test123!');
    await page.click('button[type="submit"]');
    await expect(page).toHaveURL(/dashboard/);
  });

  test('Search functionality should work', async ({ page }) => {
    await page.goto('https://staging.app.com');
    await page.fill('#search', 'laptop');
    await page.press('#search', 'Enter');
    const productCards = page.locator('.product-card');
    expect(await productCards.count()).toBeGreaterThan(0);
  });

  test('Add to cart should work', async ({ page }) => {
    await page.goto('https://staging.app.com/product/123');
    await page.click('button:has-text("Add to Cart")');
    await expect(page.locator('.cart-count')).toHaveText('1');
  });
});

// Run time: ~2-3 minutes
// Runs on: Every deployment

Sanity Testing: “Did We Fix What We Claimed?”

What is Sanity Testing?

Sanity Testing (also called Narrow Regression Testing) is a quick, focused test to verify that a specific bug fix or minor change works as expected. It’s like checking whether a repair you made to a car actually fixed the problem.

Key Characteristics

  • Narrow Scope: Tests only the changed area
  • Unscripted: Usually not documented formally
  • Quick: 30-60 minutes
  • Deep on Changed Area: Goes deeper than smoke testing
  • Post-Fix Verification: Done after bug fixes or small changes

What Sanity Testing Covers

  • ✅ The specific bug that was fixed

  • ✅ Related functionality to the fix

  • ✅ Immediate dependencies

  • ❌ Entire application

  • ❌ Unrelated features

  • ❌ Full regression

Sanity Testing Example

Scenario: Bug Fix - “Password reset email not sending”

Bug Report:

Bug ID: BUG-1234
Issue: Users not receiving password reset emails
Fix: Updated email service configuration
Build: v2.5.4

Sanity Test (45 minutes):

Focus Area: Password Reset Flow

1. Trigger Password Reset
   ✓ Click "Forgot Password" link
   ✓ Enter valid email address
   ✓ Submit form
   ✓ Verify success message displays

2. Verify Email Sent
   ✓ Check email inbox (test account)
   ✓ Email received within 2 minutes
   ✓ Email contains reset link
   ✓ Email formatting is correct

3. Complete Password Reset
   ✓ Click reset link in email
   ✓ Link opens reset page
   ✓ Enter new password
   ✓ Confirm password reset
   ✓ Verify success message

4. Verify New Password Works
   ✓ Login with new password
   ✓ Login successful
   ✓ Old password rejected

5. Related Functionality (Sanity Check)
   ✓ Regular login still works
   ✓ Email notifications for other actions work
   ✓ User profile displays email correctly

Decision: If fix works → Accept change, continue testing
         If fix fails → Reject, send back for re-work
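
Checks like this can be automated when an area breaks repeatedly (see Myth 4 later in this guide). Below is a minimal pytest sketch; the /api/password-reset and /api/login endpoints and the test_mailbox fixture are assumptions to adapt to your application.

# Sanity check for BUG-1234 (sketch - the endpoints and the mailbox
# fixture below are hypothetical)
import requests

BASE = "https://staging.app.com"

def test_password_reset_email_is_sent(test_mailbox):
    # Trigger a reset for a known test account
    resp = requests.post(f"{BASE}/api/password-reset",
                         json={"email": "test@example.com"}, timeout=10)
    assert resp.status_code == 200

    # test_mailbox is a hypothetical fixture that polls a test inbox
    # (e.g. a MailHog-style API); reuse whatever your project provides
    message = test_mailbox.wait_for_message(to="test@example.com", timeout=120)
    assert "reset" in message.subject.lower()
    assert "https://" in message.body  # the email contains the reset link

def test_regular_login_still_works():
    # Related functionality: the fix must not break normal login
    resp = requests.post(f"{BASE}/api/login",
                         json={"email": "test@example.com",
                               "password": "Test123!"}, timeout=10)
    assert resp.status_code == 200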

When to Use Sanity Testing

  • ✅ After a bug fix to verify it’s resolved
  • ✅ After minor code changes
  • ✅ After small configuration updates
  • ✅ Before running full regression
  • ✅ When time is limited (quick validation)

Sanity vs Smoke: Key Difference

Smoke Testing:
"Does the application work at all?"
Scope: Entire app (shallow)
Example: Can I login, search, add to cart?

Sanity Testing:
"Does this specific fix/change work?"
Scope: One feature/area (deep)
Example: Does password reset email now send?

Regression Testing: “Did We Break Anything Else?”

What is Regression Testing?

Regression Testing verifies that recent code changes haven’t broken existing functionality. It’s like making sure that fixing the car’s brakes didn’t somehow break the headlights.

Key Characteristics

  • Comprehensive: Tests entire application
  • Documented: Formal test cases
  • Time-Consuming: Hours to days
  • Automated: Should be highly automated
  • Repeated: Runs frequently (every sprint, every release)
  • Cumulative: Test suite grows over time

What Regression Testing Covers

  • ✅ All existing functionality
  • ✅ Previously fixed bugs (to ensure they don’t reoccur)
  • ✅ Integration points
  • ✅ Critical business workflows
  • ✅ Edge cases and boundary conditions

Types of Regression Testing

1. Complete Regression

  • Tests the entire application
  • Used before major releases
  • Time: Days to weeks

2. Selective Regression

  • Tests affected modules and dependencies
  • Used after moderate changes
  • Time: Hours to days

3. Progressive Regression

  • Tests new features + impacted areas
  • Used in Agile sprints
  • Time: Hours
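
Selective regression is usually driven by a change-to-test impact map. The sketch below shows one illustrative approach in Python: the directory-to-test mapping and file paths are assumptions, and the list of changed files comes from a git diff against the main branch.

# Selective regression (sketch - the impact map and paths are illustrative)
import subprocess

# Maintained by the team: which regression tests cover which source areas
IMPACT_MAP = {
    "src/auth/": ["tests/regression/test_auth.py"],
    "src/cart/": ["tests/regression/test_cart.py",
                  "tests/regression/test_checkout.py"],
    "src/catalog/": ["tests/regression/test_search.py",
                     "tests/regression/test_products.py"],
}

def changed_files(base_branch: str = "main") -> list[str]:
    # Files modified relative to the base branch
    result = subprocess.run(["git", "diff", "--name-only", base_branch],
                            capture_output=True, text=True, check=True)
    return result.stdout.splitlines()

def select_tests() -> set[str]:
    selected: set[str] = set()
    for path in changed_files():
        for prefix, tests in IMPACT_MAP.items():
            if path.startswith(prefix):
                selected.update(tests)
    return selected

if __name__ == "__main__":
    tests = select_tests()
    if tests:
        subprocess.run(["pytest", *sorted(tests)], check=False)
    else:
        print("No mapped tests affected by this change.")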

Regression Test Suite Example

E-commerce Regression Test Suite

# Regression Test Suite v2.5
Total Tests: 450 | Automated: 380 (84%) | Manual: 70 (16%)

## Module Breakdown

### 1. User Authentication (50 tests)
- Login (valid, invalid, edge cases)
- Logout
- Password reset
- Session management
- Multi-factor authentication
- Social login (Google, Facebook)

### 2. Product Catalog (80 tests)
- Search functionality
- Filters and sorting
- Product details page
- Product recommendations
- Inventory updates
- Price changes

### 3. Shopping Cart (60 tests)
- Add/remove items
- Update quantities
- Cart persistence
- Cart expiration
- Apply coupon codes
- Tax calculation

### 4. Checkout (90 tests)
- Guest checkout
- Registered user checkout
- Shipping address
- Payment methods (credit card, PayPal, etc.)
- Order confirmation
- Email notifications

### 5. Order Management (70 tests)
- Order history
- Order tracking
- Cancel order
- Return/refund
- Order status updates

### 6. User Profile (40 tests)
- View/edit profile
- Change password
- Manage addresses
- Payment methods
- Notification preferences

### 7. Integration Tests (60 tests)
- Payment gateway integration
- Shipping provider integration
- Email service integration
- Analytics integration
- Inventory sync

Regression Testing Strategy

Sprint Cadence (2 weeks):

Week 1:
- Day 1-3: Development
- Day 4: Smoke Testing (new build)
- Day 5: Sanity Testing (verify fixes)

Week 2:
- Day 1-2: Feature Testing (new functionality)
- Day 3-4: Regression Testing (automated suite runs nightly)
- Day 5: Bug fixes + re-testing

Before Release:
- Full Regression Suite (all 450 tests)
- Performance testing
- Security testing
- UAT sign-off

Automated Regression Example

# Automated Regression Test Suite (Pytest)

@pytest.mark.regression
class TestUserAuthentication:
    def test_login_with_valid_credentials(self):
        # Test case TC-AUTH-001
        pass

    def test_login_with_invalid_password(self):
        # Test case TC-AUTH-002
        pass

    def test_password_reset_flow(self):
        # Test case TC-AUTH-010
        pass

@pytest.mark.regression
@pytest.mark.critical
class TestCheckoutFlow:
    def test_guest_checkout_credit_card(self):
        # Test case TC-CHECKOUT-001
        pass

    def test_registered_user_checkout_paypal(self):
        # Test case TC-CHECKOUT-005
        pass

# Run full regression:
# pytest -m regression --html=report.html

# Run critical tests only (for quick feedback):
# pytest -m "regression and critical"

# CI/CD Integration:
# Runs nightly at 2 AM
# Slack notification if failures detected
# Blocks deployment if critical tests fail
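
One practical note: pytest warns about unregistered custom marks, so the regression and critical markers used above would typically be declared in pytest.ini (or an equivalent config file):

# pytest.ini - register the custom markers used above
[pytest]
markers =
    regression: tests belonging to the regression suite
    critical: critical-path tests that block deployment on failure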

When to Use Regression Testing

  • ✅ Before every release (mandatory)
  • ✅ After major code refactoring
  • ✅ After integrating third-party libraries
  • ✅ After database schema changes
  • ✅ In Agile: Every sprint
  • ✅ In CI/CD: Automated on every merge to main branch

Real-World Testing Workflow

Scenario: New feature added - “Wishlist functionality”

Phase 1: Smoke Testing (30 min)

✓ App still loads
✓ Login still works
✓ Main features accessible
✓ No critical crashes

Result: ✅ Build is stable, proceed

Phase 2: Feature Testing (2 days)

Test new Wishlist feature:
✓ Add items to wishlist
✓ Remove items
✓ View wishlist page
✓ Move wishlist item to cart
✓ Share wishlist
✓ Wishlist persistence

Result: ✅ Feature works, found 3 minor bugs

Phase 3: Sanity Testing (1 hour)

After bugs fixed:
✓ Verify 3 bugs are resolved
✓ Re-test affected areas
✓ Quick check of related features

Result: ✅ Bugs fixed, no obvious issues

Phase 4: Regression Testing (1 day automated)

Run full regression suite (450 tests):
✓ All authentication tests pass
✓ All cart tests pass
✓ All checkout tests pass
✗ 2 product search tests fail (investigation needed)

Result: ⚠️ Found 2 regressions in search, need fixes

Phase 5: Final Smoke + Regression (after fixes)

Smoke: ✅ Pass (15 min)
Regression: ✅ All 450 tests pass (automated overnight)

Result: ✅ Ready for release

Quick Decision Guide

When Should I Run…

Smoke Testing?

  • ✅ New build deployed
  • ✅ Starting the day (verify environment)
  • ✅ CI/CD pipeline (automated)
  • ✅ Before allocating QA resources

Sanity Testing?

  • ✅ Bug fix deployed
  • ✅ Minor configuration change
  • ✅ Hot fix applied
  • ✅ Need quick verification

Regression Testing?

  • ✅ Before release (mandatory)
  • ✅ End of sprint
  • ✅ After major code changes
  • ✅ After refactoring
  • ✅ Integration of new libraries

Common Misconceptions

❌ Myth 1: “Smoke and Sanity are the same”

Reality: Smoke is broad/shallow, Sanity is narrow/deep

❌ Myth 2: “Regression is just re-running all tests”

Reality: Regression is strategic - prioritize critical paths, automate extensively

❌ Myth 3: “We don’t need smoke tests if we have regression”

Reality: Smoke tests save time - they quickly identify broken builds before wasting hours on regression

❌ Myth 4: “Sanity testing is always manual”

Reality: You can automate sanity checks for frequently fixed areas

❌ Myth 5: “100% regression coverage is required”

Reality: Aim for risk-based coverage - focus on critical business functions

Best Practices

Smoke Testing

  1. Keep it minimal - Only critical paths
  2. Automate it - Run on every build
  3. Fast feedback - Max 30 minutes
  4. Binary outcome - Pass/Fail (no maybe)
  5. Block deployment if fails - Don’t waste QA time on broken builds

Sanity Testing

  1. Focus tightly - Test only what changed
  2. Go deep on the fix - Don’t just click through
  3. Test related areas - Check dependencies
  4. Document if recurring - Consider adding to automation
  5. Quick turnaround - Developers are waiting

Regression Testing

  1. Automate aggressively - 70%+ automation target
  2. Prioritize tests - Critical paths run first
  3. Maintain test suite - Remove obsolete tests, add new ones
  4. Run regularly - Nightly, not just before release
  5. Track metrics - Pass rate, execution time, defect detection
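
Metric tracking can start simple. The sketch below computes the pass rate from a JUnit-style XML report, such as the one pytest writes with --junitxml=report.xml; the report path is an assumption.

# Regression metrics (sketch): pass rate from a JUnit-style XML report,
# e.g. produced by `pytest --junitxml=report.xml`
import xml.etree.ElementTree as ET

def pass_rate(report_path: str) -> float:
    root = ET.parse(report_path).getroot()
    # pytest may wrap a single <testsuite> in a <testsuites> element;
    # iter() handles both layouts
    tests = failures = errors = skipped = 0
    for suite in root.iter("testsuite"):
        tests += int(suite.get("tests", 0))
        failures += int(suite.get("failures", 0))
        errors += int(suite.get("errors", 0))
        skipped += int(suite.get("skipped", 0))
    executed = tests - skipped
    passed = executed - failures - errors
    return passed / executed if executed else 0.0

if __name__ == "__main__":
    print(f"Pass rate: {pass_rate('report.xml'):.1%}")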

Conclusion

All three testing types serve critical but different purposes:

| Type | Question Answered | Time | Scope |
|------|-------------------|------|-------|
| Smoke | “Is it worth testing?” | Minutes | Broad/Shallow |
| Sanity | “Did the fix work?” | < 1 hour | Narrow/Deep |
| Regression | “Did we break anything?” | Hours-Days | Broad/Deep |

The Complete Testing Flow:

New Build → Smoke Test → Feature Test → Sanity Test (if fixes) → Regression Test → Release

Understanding when and how to use each type will make your testing more efficient, catch bugs earlier, and ensure higher quality releases.

Quick Reference Card

┌─────────────────────────────────────────────────┐
│ SMOKE TEST                                       │
├─────────────────────────────────────────────────┤
│ When: Every new build                            │
│ Why: Verify build stability                      │
│ How: Quick checks of critical paths              │
│ Time: 15-30 minutes                              │
│ Automate: Yes                                    │
└─────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────┐
│ SANITY TEST                                      │
├─────────────────────────────────────────────────┤
│ When: After bug fix or minor change              │
│ Why: Verify specific fix works                   │
│ How: Deep dive on changed area                   │
│ Time: 30-60 minutes                              │
│ Automate: Sometimes                              │
└─────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────┐
│ REGRESSION TEST                                  │
├─────────────────────────────────────────────────┤
│ When: Before every release                       │
│ Why: Ensure no new bugs in existing features     │
│ How: Comprehensive test suite                    │
│ Time: Hours to days                              │
│ Automate: Highly recommended (70%+)              │
└─────────────────────────────────────────────────┘

Further Reading

  • ISTQB Foundation Syllabus - Testing Types
  • “Lessons Learned in Software Testing” by Cem Kaner, James Bach, and Bret Pettichord
  • Martin Fowler’s blog on Continuous Integration
  • Google Testing Blog - Test Pyramid concepts