Introduction to Zebrunner

Zebrunner is a modern test automation reporting and analytics platform designed specifically for engineering teams running large-scale automated test suites. Unlike traditional Test Case Management tools focused on manual testing, Zebrunner targets the “automated tests problem”: as test suites grow to thousands of tests executed across multiple browsers, devices, and environments, understanding failures becomes overwhelming.

Zebrunner solves this through intelligent test result aggregation, ML-powered failure analysis, real-time dashboards, and deep integrations with CI/CD pipelines and test execution infrastructure. The platform’s core value proposition: transform test execution noise into actionable quality signals.

Founded by the team behind the open-source Zebrunner (formerly Zafira) reporting framework, the commercial Zebrunner platform builds on years of community feedback to deliver enterprise-grade test intelligence.

Core Architecture

Test Session Recording

Zebrunner captures every test execution as a Test Session containing:

Execution Metadata: Test name, parameters, tags, environment, build number

Artifacts: Screenshots, videos, logs, network traffic, device logs

Timeline: Step-by-step execution trace with timing

Known Issues: Links to filed bugs or known failure patterns

Example session view:

Test: checkout_with_paypal
Status: Failed
Duration: 42.3s
Browser: Chrome 120 (Linux)
Build: #3456

Timeline:
├─ 0.0s: Navigate to /cart
├─ 1.2s: Click "Checkout"
├─ 2.5s: Select PayPal payment
├─ 8.3s: [FAILED] PayPal iframe timeout
└─ 42.3s: Test terminated

Artifacts:
- screenshot_failure.png
- browser_console.log
- network_traffic.har

Real-Time Dashboard

Zebrunner provides live updates during test execution:

Test Run Progress: Tests queued, running, passed, failed in real-time

Failure Heat Map: Which test classes/features have the highest failure rates

Execution Timeline: When tests started/completed, parallel execution visualization

Environment Status: Device farm availability, Selenium Grid capacity

This enables QA leads to monitor overnight regression runs and triage failures as they occur rather than waiting for CI pipeline completion.

ML-Powered Failure Classification

Zebrunner automatically categorizes test failures:

Product Defects: Reproducible application bugs requiring fix

Test Automation Issues: Flaky locators, race conditions, test data problems

Environment Problems: Infrastructure failures (Grid down, device offline)

Known Issues: Failures matching existing bug tickets

Machine learning models analyze failure patterns (error messages, stack traces, screenshots) to classify new failures, reducing manual triage time by 60-70%.
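
To make the categories concrete, the sketch below shows a deliberately simplified, rule-based classifier over the kind of signals Zebrunner's models consume (error messages and known-issue matches). It is an illustration only, not the platform's actual model; the string patterns and the matchesOpenTicket flag are assumptions.

// Illustrative rule-based failure classification (not Zebrunner's ML model)
public class FailureClassifier {

    public enum Category { PRODUCT_DEFECT, TEST_AUTOMATION_ISSUE, ENVIRONMENT_PROBLEM, KNOWN_ISSUE }

    // errorMessage: the failure's exception message; matchesOpenTicket: whether a
    // known-issue rule (e.g. a linked bug ticket) already matches this failure.
    public static Category classify(String errorMessage, boolean matchesOpenTicket) {
        String msg = errorMessage.toLowerCase();
        if (matchesOpenTicket) {
            return Category.KNOWN_ISSUE;             // failure matches an existing bug ticket
        }
        if (msg.contains("session not created") || msg.contains("node is not available")) {
            return Category.ENVIRONMENT_PROBLEM;     // Grid or device infrastructure failure
        }
        if (msg.contains("nosuchelementexception") || msg.contains("staleelementreference")) {
            return Category.TEST_AUTOMATION_ISSUE;   // typical flaky-locator symptoms
        }
        return Category.PRODUCT_DEFECT;              // default: treat as an application bug pending human review
    }
}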

Test Stability Metrics

Zebrunner tracks test reliability over time:

Stability Score: Percentage of runs with a consistent pass/fail result over a recent window (the inverse of flakiness)

Test: user_login_valid_credentials
Last 100 runs: 97 passed, 3 failed
Stability: 97%
Trend: Stable (was 96% last week)

Flakiness Detection: Tests with inconsistent results flagged automatically

Test: checkout_payment_processing
Last 100 runs: 78 passed, 22 failed (intermittent)
Flakiness: High
Recommended: Quarantine for investigation

Mean Time to Repair (MTTR): Average time from a test's first failure to its next passing run after a fix
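
The sketch below shows one way to compute these window-based metrics from raw pass/fail history. It is an illustrative calculation, not Zebrunner's internal formula; the 100-run window and the 90% flakiness threshold are assumptions.

import java.util.List;

// Illustrative stability/flakiness calculation over the last N runs.
public class StabilityMetrics {

    // Share of the dominant outcome: 97 passes + 3 failures -> 97% stability.
    public static double stabilityScore(List<Boolean> lastRuns) {
        long passed = lastRuns.stream().filter(passedRun -> passedRun).count();
        long failed = lastRuns.size() - passed;
        return 100.0 * Math.max(passed, failed) / lastRuns.size();
    }

    // A test that both passes and fails often in the window is flagged as flaky,
    // e.g. 78 passes + 22 failures -> 78% stability -> flaky.
    public static boolean isFlaky(List<Boolean> lastRuns) {
        return stabilityScore(lastRuns) < 90.0;      // threshold chosen for illustration
    }
}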

Key Features

Multi-Framework Support

Zebrunner integrates with major test frameworks via agents:

Selenium/WebDriver: Java TestNG, JUnit, Python pytest, JavaScript WebdriverIO

Appium: Mobile iOS/Android test frameworks

Playwright: Native Playwright test reporting

Cypress: Via Cypress plugin

REST Assured: API test result import

Example integration (Java TestNG):

pom.xml dependency:

<dependency>
  <groupId>com.zebrunner</groupId>
  <artifactId>agent-testng</artifactId>
  <version>1.9.5</version>
</dependency>

Test method:

@Test
@TestLabel(name = "priority", value = "critical")
@TestLabel(name = "feature", value = "checkout")
public void testGuestCheckout() {
  // Zebrunner automatically captures screenshots on failure
  // Links test to JIRA tickets via @TestCaseKey annotation
}

Smart Test Launcher

Zebrunner can trigger test execution (not just report results):

Scheduled Runs: Cron-based test execution

On-Demand: Manual trigger from UI with parameter selection

Smart Retry: Automatically rerun failed tests to distinguish flaky from real failures

Selective Execution: Run only tests affected by code changes

This eliminates the need for custom CI/CD scripting for common test orchestration scenarios.
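
Smart Retry in particular has a direct client-side analogue in TestNG's retry hook. The sketch below is a minimal TestNG-only approximation (the retry count is an arbitrary choice), not Zebrunner's platform-side implementation:

import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

// Rerun a failed test a couple of times so a single intermittent failure
// does not fail the build outright; repeated failures still surface as real.
public class FlakyRetryAnalyzer implements IRetryAnalyzer {
    private static final int MAX_RETRIES = 2;
    private int attempts = 0;

    @Override
    public boolean retry(ITestResult result) {
        // Returning true asks TestNG to re-execute the failed test.
        return !result.isSuccess() && attempts++ < MAX_RETRIES;
    }
}

Usage: @Test(retryAnalyzer = FlakyRetryAnalyzer.class)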

Integration Ecosystem

CI/CD: Jenkins, GitLab CI, GitHub Actions, CircleCI, Bamboo

Test Infrastructure: Selenium Grid, BrowserStack, Sauce Labs, LambdaTest, device farms

Issue Trackers: JIRA, GitHub Issues, Azure DevOps

Notifications: Slack, Microsoft Teams, email with failure summaries

Test Management: TestRail, Zephyr, qTest (bidirectional sync)

Example Slack notification:

🔴 Regression Suite Failed (Build #3456)
━━━━━━━━━━━━━━━━━━━━━━━━
Passed: 487 | Failed: 13 | Skipped: 2
Pass Rate: 97.4% (was 98.1% yesterday)

Top Failures:
  - checkout_paypal (known issue: PAY-1234)
  - user_profile_edit (flaky: under investigation)
  - search_filtering (NEW: requires triage)

View Report: https://zebrunner.company.com/run/3456

Test Artifacts Management

Zebrunner provides centralized artifact storage:

Video Recording: Automatic video capture for web and mobile tests

Screenshots: On-demand and automatic on failure

Logs: Application logs, browser console, Appium server logs

Network Traffic: HAR files for API call analysis

Custom Artifacts: Test data files, generated reports

Artifacts are automatically associated with test sessions and retained per configured policy (e.g., 30, 90, or 180 days).
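
While the agent captures failure screenshots automatically, the same kind of artifact can be produced by hand when a custom capture point is needed. The sketch below is plain Selenium + TestNG, assuming the WebDriver field is initialized in the test base class:

import java.io.File;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.testng.ITestResult;
import org.testng.annotations.AfterMethod;

public class ScreenshotOnFailure {

    private WebDriver driver;   // assumed to be set up elsewhere in the base class

    @AfterMethod
    public void captureOnFailure(ITestResult result) {
        if (!result.isSuccess() && driver instanceof TakesScreenshot) {
            // Capture the current browser state; the resulting file can then be
            // attached to the test session alongside the automatic artifacts.
            File screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        }
    }
}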

Comparison with Alternatives

Feature             | Zebrunner         | ReportPortal      | Allure TestOps   | TestRail         | Grafana + Custom
Framework Support   | ✅ 10+ frameworks  | ✅ 15+ frameworks  | ✅ 15+ frameworks | ⚠️ Via API        | ⚠️ Custom
ML Failure Analysis | ✅ Built-in        | ✅ Built-in        | ⚠️ Basic          | ❌ No             | ❌ No
Real-Time Dashboard | ✅ Yes             | ✅ Yes             | ✅ Yes            | ❌ Post-run only  | ✅ Yes
Test Orchestration  | ✅ Smart launcher  | ❌ Reporting only  | ✅ Full           | ❌ No             | ⚠️ External
Video/Screenshot    | ✅ Automatic       | ✅ Automatic       | ⚠️ Via sidecar    | ❌ No             | ⚠️ Custom
SaaS + On-Prem      | ✅ Both            | ✅ Both            | ✅ Both           | ✅ Both           | ⚠️ Self-hosted
Price               | $$ Medium         | $ Open-source     | $$$ High         | $$ Medium        | $ Infrastructure

Zebrunner vs. ReportPortal: Zebrunner is a commercial SaaS with vendor support; ReportPortal is fully open-source but requires more setup

Zebrunner vs. Allure TestOps: Similar capabilities; Zebrunner focuses on ML-powered triage, Allure on test case documentation

Zebrunner vs. Grafana: Grafana requires custom dashboards and metric collection; Zebrunner is purpose-built for testing

Pricing and Licensing

Zebrunner Cloud

Startup: $99/month

  • Up to 10,000 test executions/month
  • 30-day data retention
  • Standard integrations
  • Community support

Business: $299/month

  • Up to 50,000 test executions/month
  • 90-day data retention
  • All integrations
  • Email support
  • Smart Test Launcher

Enterprise: Custom pricing

  • Unlimited test executions
  • Custom data retention
  • SSO, audit logs
  • Dedicated support
  • On-premise option

Zebrunner On-Premise

Self-Hosted License: Starting at $12,000/year

  • Perpetual license available
  • Deployment on your infrastructure
  • All Enterprise features

Open-Source Alternative: Zebrunner CE (Community Edition)

  • Free, limited features
  • Self-hosted only
  • Community support
  • Good for evaluation

Cost Example

Team running 100K tests/month:

  • Zebrunner Business: ~$500/month (volume discount)
  • Allure TestOps: ~$1,500-2,000/month (per-user pricing)
  • ReportPortal Open-Source: Free + $200-500/month infrastructure
  • Custom Grafana: $300-1,000/month (development + infrastructure)

Zebrunner offers competitive pricing for high-volume automation scenarios.

Best Practices

Test Labeling Strategy

Use consistent labels for powerful filtering:

@TestLabel(name = "priority", value = "P1")
@TestLabel(name = "feature", value = "payments")
@TestLabel(name = "platform", value = "web")
@TestLabel(name = "smoke", value = "true")

Enables queries such as: “Show all P1 payment tests that failed in the last 7 days”

Failure Triage Workflow

  1. Daily Review: QA lead reviews new failures each morning
  2. Classification: Zebrunner ML suggests classification, human confirms
  3. Assignment: Link to existing JIRA or create new bug
  4. Quarantine: Move flaky tests to separate suite
  5. Weekly Cleanup: Review quarantined tests, fix or remove

This systematic approach prevents test suite decay.

Integration with Test Management

Link Zebrunner (execution) with TestRail (test design):

@TestCaseKey("TC-1234")  // TestRail test case ID
public void testCheckoutFlow() {
  // Zebrunner reports execution to TestRail
}

TestRail holds the test case design while Zebrunner holds the execution history, giving you the best of both worlds.

Custom Dashboard Creation

Build executive dashboards:

Quality Scorecard:

  • Pass rate trend (last 30 days)
  • Test stability score
  • MTTR (mean time to repair)
  • Top 10 flaky tests

Team Velocity:

  • Tests automated per sprint
  • Test execution count per day
  • Defect discovery rate

Zebrunner’s widget system enables no-code dashboard building.

Limitations

Not a Test Case Management Tool: Zebrunner doesn’t manage test case design, only execution results

Requires Automation: No value for pure manual testing teams (use TestRail instead)

ML Ramp-Up: ML-powered failure classification requires training data (100+ executions) before it becomes effective

Cost at Scale: High test execution volumes can become expensive on cloud tier

Limited Mobile-Specific Features: Works with Appium but lacks device farm management features

Implementation Recommendations

Phase 1: Pilot (Week 1-2)

  1. Select test suite: Choose 100-200 automated tests
  2. Instrument tests: Add Zebrunner agent
  3. Run executions: Generate 20-30 runs for ML training
  4. Configure integrations: Connect JIRA, Slack

Phase 2: Rollout (Week 3-4)

  1. Expand coverage: Add remaining test suites
  2. Train team: Failure triage workflow
  3. Set up dashboards: Create quality scorecards
  4. Establish metrics: Define stability/MTTR targets

Phase 3: Optimization (Month 2+)

  1. Tune ML models: Provide feedback on classifications
  2. Quarantine flaky tests: Achieve 95%+ stability
  3. Automate workflows: Slack notifications, auto-retry
  4. Optimize costs: Adjust retention policies

Conclusion

Zebrunner excels for engineering teams with substantial automated test investments who are drowning in test execution data. The platform’s ML-powered failure classification, real-time dashboards, and smart test orchestration transform chaotic test results into structured quality intelligence.

Choose Zebrunner if:

  • Running 10,000+ automated tests monthly
  • Struggling with failure triage overhead
  • Need real-time test execution visibility
  • Want intelligent test retry and orchestration

Choose alternatives if:

  • Primarily manual testing (TestRail better fit)
  • Want fully open-source (ReportPortal)
  • Need requirements traceability (Aqua ALM better fit)
  • Have <1,000 tests (overhead not justified)

For automation-heavy teams, Zebrunner delivers ROI through reduced triage time (60-70% savings), faster feedback loops (real-time vs. post-run), and improved test suite health (stability tracking). The platform represents the evolution from “test execution” to “test intelligence”—understanding not just what failed, but why, and what to do about it.