What Is Lighthouse?

Lighthouse is an open-source automated tool developed by Google for auditing web page quality. It runs a series of tests against a page and generates a report with scores and actionable recommendations across five categories: Performance, Accessibility, Best Practices, SEO, and PWA.

For QA engineers, Lighthouse serves as a comprehensive quality gate that can catch issues ranging from slow page loads to missing accessibility attributes to SEO misconfigurations — all from a single tool.

Five Audit Categories

Performance (0-100)

Measures how fast the page loads and becomes interactive. Key metrics include:

Metric                           Weight   Good Threshold
First Contentful Paint (FCP)     10%      <1.8s
Largest Contentful Paint (LCP)   25%      <2.5s
Total Blocking Time (TBT)        30%      <200ms
Cumulative Layout Shift (CLS)    25%      <0.1
Speed Index                      10%      <3.4s

The performance score is a weighted average of these metrics. TBT has the highest weight, meaning JavaScript execution efficiency has the biggest impact on the score.
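As a rough sketch, the final weighting step can be expressed in a few lines of JavaScript. The inputs here are assumed to already be 0-100 per-metric scores (Lighthouse derives them from raw timings via log-normal scoring curves, which this sketch omits):

```javascript
// Sketch of Lighthouse's final weighting step. Each metric is assumed to
// already be scored 0-100 (Lighthouse maps raw values onto log-normal
// curves first; that step is omitted here).
const WEIGHTS = { fcp: 0.10, si: 0.10, lcp: 0.25, tbt: 0.30, cls: 0.25 };

function performanceScore(metricScores) {
  let total = 0;
  for (const [metric, weight] of Object.entries(WEIGHTS)) {
    total += metricScores[metric] * weight;
  }
  return Math.round(total);
}

// Hypothetical page: good everywhere except heavy JavaScript (low TBT score).
console.log(performanceScore({ fcp: 90, si: 85, lcp: 80, tbt: 40, cls: 95 })); // 73
```

Because TBT alone contributes 30% of the total, the low TBT score drags the category down despite strong paint metrics.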

Accessibility (0-100)

Checks for WCAG compliance issues:

  • Color contrast ratios
  • ARIA attribute usage
  • Form label associations
  • Image alt text
  • Heading hierarchy
  • Focus management
  • Keyboard navigation support

Note: Lighthouse catches approximately 30-40% of accessibility issues. Manual testing and screen reader testing are still necessary.

Best Practices (0-100)

Evaluates modern web development practices:

  • HTTPS usage
  • No deprecated APIs
  • No errors logged to the browser console
  • Proper image aspect ratios
  • Safe cross-origin links (rel="noopener")
  • Correct charset declaration
  • No vulnerable JavaScript libraries

SEO (0-100)

Checks basic search engine optimization:

  • Valid meta description
  • Proper title tags
  • Crawlable links
  • Valid robots.txt
  • Proper hreflang for multilingual sites
  • Mobile-friendly tap targets
  • Legible font sizes

PWA (Pass/Fail)

Evaluates Progressive Web App criteria:

  • Valid web app manifest
  • Service worker registration
  • HTTPS
  • Proper icons and splash screens
  • Offline capability

Note: The PWA category was removed in Lighthouse 12 (released in 2024); on older versions it appears as a pass/fail badge rather than a 0-100 score.

Running Lighthouse

Method 1: Chrome DevTools

  1. Open the page in Chrome
  2. Open DevTools (F12) > Lighthouse tab
  3. Select categories and device type (Mobile/Desktop)
  4. Click Analyze page load
  5. Review the generated report

Tip: Always run in incognito mode to prevent extensions from affecting results.

Method 2: Lighthouse CLI

The Node-based CLI is the most repeatable option and works well in scripts:

# Install
npm install -g lighthouse

# Run audit
lighthouse https://example.com --output=html --output-path=./report.html

# Run with specific categories
lighthouse https://example.com --only-categories=performance,accessibility

# Output as JSON for programmatic use
lighthouse https://example.com --output=json --output-path=./report.json

# Use specific Chrome flags
lighthouse https://example.com --chrome-flags="--headless --no-sandbox"

Method 3: PageSpeed Insights

Visit pagespeed.web.dev — it runs Lighthouse on Google’s servers and combines lab data with real-user field data from Chrome UX Report.

Method 4: Lighthouse CI

For CI/CD integration:

npm install -g @lhci/cli

# Initialize config
lhci wizard

# Run in CI pipeline
lhci autorun

Reading a Lighthouse Report

Score Interpretation

Score Range   Color    Meaning
90-100        Green    Good — meets best practices
50-89         Orange   Needs improvement
0-49          Red      Poor — significant issues
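
These bands map directly onto a small helper, handy when post-processing JSON reports (the function name is illustrative):

```javascript
// Sketch: classify a 0-100 category score into Lighthouse's color bands.
function scoreBand(score) {
  if (score >= 90) return 'green';  // good
  if (score >= 50) return 'orange'; // needs improvement
  return 'red';                     // poor
}

console.log(scoreBand(95)); // green
console.log(scoreBand(72)); // orange
console.log(scoreBand(42)); // red
```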

Understanding Opportunities and Diagnostics

Opportunities show specific improvements with estimated time savings:

  • “Serve images in next-gen formats” — Save 2.4s
  • “Eliminate render-blocking resources” — Save 1.2s
  • “Properly size images” — Save 0.8s

Diagnostics provide additional performance information:

  • DOM size (number of elements)
  • Critical request chains
  • Main thread work breakdown
  • JavaScript execution time

Variability in Scores

Lighthouse scores can vary between runs due to:

  • Network conditions
  • Server response time fluctuations
  • CPU load on the testing machine
  • Third-party script timing

Best practice: Run Lighthouse 3-5 times and take the median score, or use Lighthouse CI, which performs multiple runs automatically.
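
Taking the median of repeated runs is easy to script; a minimal sketch:

```javascript
// Sketch: median of several Lighthouse scores to smooth run-to-run noise.
function median(scores) {
  const sorted = [...scores].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Five hypothetical performance scores from repeated runs:
console.log(median([42, 47, 39, 44, 45])); // 44
```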

Exercise: Comprehensive Lighthouse Audit Workflow

Perform a full Lighthouse audit on a web application and create a prioritized action plan.

Step 1: Run the Audit

Run Lighthouse against a website in three configurations:

# Mobile (default)
lighthouse https://your-site.com --output=json --output-path=./mobile.json

# Desktop
lighthouse https://your-site.com --preset=desktop --output=json --output-path=./desktop.json

# Specific page (e.g., product page)
lighthouse https://your-site.com/product/123 --output=json --output-path=./product.json

Or use DevTools Lighthouse tab — run once for Mobile, once for Desktop.

Step 2: Document Scores

Category         Mobile Score   Desktop Score   Product Page
Performance
Accessibility
Best Practices
SEO

Step 3: Prioritize Findings

For each failed audit, assess:

  1. Impact: How many users are affected? How severe is the issue?
  2. Effort: How hard is the fix? (CSS change vs architecture change)
  3. Priority: High impact + low effort = fix first

Create a priority matrix:

Finding   Category   Impact (H/M/L)   Effort (H/M/L)   Priority (1-5)
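
One simple way to turn the matrix into a 1-5 priority is to score impact and effort on a point scale. This scheme is an illustrative convention, not anything Lighthouse defines:

```javascript
// Sketch: derive a 1-5 priority from H/M/L impact and effort ratings.
// High impact and low effort should rank first. The point values are
// an illustrative convention, not part of Lighthouse.
const IMPACT = { H: 3, M: 2, L: 1 };
const EFFORT = { L: 3, M: 2, H: 1 }; // low effort earns more points

function priority(impact, effort) {
  return 7 - (IMPACT[impact] + EFFORT[effort]); // 1 = fix first, 5 = fix last
}

console.log(priority('H', 'L')); // 1 (high impact, low effort: fix first)
console.log(priority('L', 'H')); // 5 (low impact, high effort: fix last)
```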

Step 4: Set Up Lighthouse CI

Create a lighthouserc.js configuration:

module.exports = {
  ci: {
    collect: {
      url: [
        'https://your-site.com/',
        'https://your-site.com/products',
        'https://your-site.com/checkout',
      ],
      numberOfRuns: 3,
    },
    assert: {
      assertions: {
        'categories:performance': ['warn', {minScore: 0.8}],
        'categories:accessibility': ['error', {minScore: 0.9}],
        'categories:best-practices': ['warn', {minScore: 0.9}],
        'categories:seo': ['error', {minScore: 0.9}],
      },
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};

Solution: Sample Audit Action Plan

Site: example-shop.com

Scores:

Category         Mobile   Desktop
Performance      42       78
Accessibility    71       71
Best Practices   83       83
SEO              91       91

Priority 1 (High Impact, Low Effort):

  1. Add width and height to all <img> tags — fixes CLS, improves Performance
  2. Add alt text to 12 images — fixes Accessibility
  3. Add rel="noopener" to external links — fixes Best Practices
  4. Fix color contrast on footer links — fixes Accessibility

Priority 2 (High Impact, Medium Effort):

  5. Convert images to WebP format — improves LCP by ~1.5s
  6. Implement lazy loading for below-fold images — improves Performance
  7. Add missing form labels to checkout form — fixes Accessibility

Priority 3 (Medium Impact, Medium Effort):

  8. Defer non-critical CSS — reduces render-blocking
  9. Implement font-display: swap — reduces FOIT
  10. Add structured data for products — improves SEO

Priority 4 (Medium Impact, High Effort):

  11. Code-split the main JavaScript bundle — reduces TBT
  12. Implement service worker for offline support — enables PWA
  13. Set up CDN for static assets — reduces TTFB globally

Expected results after Priority 1-2 fixes:

  • Performance: 42 → ~65 (mobile), 78 → ~88 (desktop)
  • Accessibility: 71 → ~90
  • Best Practices: 83 → ~92

Lighthouse CI in GitHub Actions

Example workflow for automated Lighthouse checks on every pull request:

name: Lighthouse CI
on: pull_request

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci && npm run build
      # Serve the build so the URLs below resolve (assumes the project's
      # start script listens on port 3000)
      - name: Start server
        run: npm run start & npx wait-on http://localhost:3000
      - name: Run Lighthouse
        uses: treosh/lighthouse-ci-action@v11
        with:
          urls: |
            http://localhost:3000/
            http://localhost:3000/products
          budgetPath: ./budget.json
          uploadArtifacts: true
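
The workflow references a budget.json performance budget. A minimal example in Lighthouse's budget format (the specific thresholds are illustrative):

```json
[
  {
    "path": "/*",
    "timings": [
      { "metric": "first-contentful-paint", "budget": 2000 },
      { "metric": "interactive", "budget": 5000 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 300 },
      { "resourceType": "total", "budget": 1000 }
    ]
  }
]
```

Timing budgets are in milliseconds and resource-size budgets in kilobytes; pages exceeding them are flagged in the PR check.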

This automatically audits every PR and flags performance regressions before they merge to main.

Common Pitfalls

  1. Optimizing for the score, not users. A 100 Lighthouse score does not guarantee a good user experience. It means you passed automated checks.

  2. Testing only the homepage. Product pages, checkout flows, and search results often perform worse. Audit representative pages from each section.

  3. Ignoring mobile. Desktop scores are almost always higher. Mobile performance is what Google uses for ranking.

  4. One-time audits. Lighthouse is most valuable as a continuous monitoring tool, not a one-time check. Set up CI integration.

  5. Treating all findings equally. A 2-second LCP improvement is worth more than fixing a minor console warning. Prioritize by user impact.

Key Takeaways

  • Lighthouse audits five categories: Performance, Accessibility, Best Practices, SEO, and PWA
  • Always run in incognito mode and take the median of multiple runs for reliable scores
  • CLI and Lighthouse CI are preferred over DevTools for reproducible, automated audits
  • Prioritize findings by user impact and implementation effort, not just the score delta
  • Integrate Lighthouse CI into your CI/CD pipeline to catch regressions before they reach production
  • Lighthouse catches many issues but is not a complete substitute for manual testing, especially for accessibility