Introduction to ReportPortal

ReportPortal is an open-source AI-powered test automation dashboard and results aggregation platform maintained by EPAM Systems. Unlike commercial alternatives such as Zebrunner or Allure TestOps, ReportPortal is completely free, with all enterprise features available in the open-source version. The platform distinguishes itself through machine learning capabilities that automatically analyze test failures, detect patterns, and suggest root causes.

Originally created to solve EPAM’s internal test reporting challenges across 1000+ projects, ReportPortal was open-sourced in 2016 and has since been adopted by thousands of organizations worldwide. The platform’s core strength: transforming scattered test execution data from multiple teams, frameworks, and CI pipelines into unified quality intelligence with minimal manual classification effort.

This guide explores ReportPortal’s architecture, AI-powered analysis capabilities, deployment options, and integration strategies, and examines how this free platform compares with commercial test intelligence tools.

Core Architecture

Unified Launch Repository

ReportPortal organizes test executions as Launches, which are collections of test suites executed together:

Launch: Nightly Regression Build #3456
├─ Test Suite: Authentication Tests
│   ├─ Test: Valid login
│   ├─ Test: Invalid credentials
│   └─ Test: Session timeout
├─ Test Suite: Checkout Flow
│   ├─ Test: Guest checkout
│   └─ Test: Registered user checkout
└─ Test Suite: API Validation
    └─ Test: Product catalog endpoints

Each launch contains:

  • Metadata: Build number, environment, test framework, executor
  • Statistics: Total/passed/failed/skipped counts, duration
  • Logs: Hierarchical test logs with severity levels (trace, debug, info, warn, error)
  • Attachments: Screenshots, videos, HAR files, custom artifacts
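
Under the hood, every framework agent reports through ReportPortal’s REST API, so launches can also be created directly. Below is a minimal sketch against the v5 API, assuming a project named my_project and an API token generated from the user profile page; treat it as illustrative rather than a complete client:

import time

import requests

ENDPOINT = "https://reportportal.company.com"
PROJECT = "my_project"        # placeholder project name
TOKEN = "YOUR_API_TOKEN"      # placeholder token

# Start a launch; startTime is epoch milliseconds.
resp = requests.post(
    f"{ENDPOINT}/api/v1/{PROJECT}/launch",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "name": "Nightly Regression Build",
        "startTime": int(time.time() * 1000),
        "attributes": [{"key": "env", "value": "staging"}],
    },
)
resp.raise_for_status()
print("Launch UUID:", resp.json()["id"])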

AI-Powered Auto-Analysis

ReportPortal’s flagship feature is automatic failure analysis using machine learning:

Pattern Recognition: ML models compare new test failures against historical data to identify similar previous failures

Root Cause Suggestions: System suggests defect types based on error messages, stack traces, and log patterns

Auto-Triage Classification:

  • Product Bug (PB): Application defect requiring fix
  • Automation Bug (AB): Test code issue
  • System Issue (SI): Infrastructure problem
  • No Defect (ND): Intended behavior, false alarm
  • To Investigate (TI): Requires human analysis

Example auto-analysis:

Test: checkout_payment_processing
Status: Failed
Error: "Timeout waiting for PayPal iframe"

AI Analysis:
- Similarity: 95% match to test_paypal_integration (Launch #3401)
- Previous Classification: Product Bug (linked to JIRA-5678)
- Confidence: High
- Suggestion: Link to existing defect JIRA-5678

After initial manual classification, the system learns and automatically categorizes similar future failures with 80-90% accuracy.

Pattern Analysis Engine

ReportPortal identifies trends and patterns across launches:

Test Instability Detection: Flags tests with inconsistent pass/fail behavior

Test: user_profile_update
Last 50 runs: Pass=38, Fail=12 (intermittent)
Pattern: Fails during peak hours (infrastructure issue detected)
Recommendation: Investigate timeout settings

Error Clustering: Groups failures by error message similarity

Cluster: "NullPointerException at UserService.java:234"
Affected Tests: 7 tests
First Occurrence: Build #3420
Suspected Cause: Recent code change in UserService

Defect Prioritization: Ranks defects by impact (number of affected tests, failure frequency)
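
Conceptually, this ranking is easy to reproduce. The sketch below is illustrative only; the scoring formula is invented for this example and is not ReportPortal’s internal algorithm:

from dataclasses import dataclass

@dataclass
class DefectCluster:
    error: str
    affected_tests: int
    failures_last_50_runs: int

clusters = [
    DefectCluster("NullPointerException at UserService.java:234", 7, 21),
    DefectCluster("Timeout waiting for PayPal iframe", 3, 40),
    DefectCluster("Stale element reference on search page", 2, 5),
]

# Hypothetical impact score: breadth (tests affected) times frequency (failure rate)
def impact(c: DefectCluster) -> float:
    return c.affected_tests * (c.failures_last_50_runs / 50)

for c in sorted(clusters, key=impact, reverse=True):
    print(f"{impact(c):5.2f}  {c.error}")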

Real-Time Dashboard

ReportPortal provides live execution monitoring:

Launch Progress: Real-time test execution status as tests complete

Failure Heat Map: Visual representation of failure distribution across test suites

Health Check: Overall project quality metrics (pass rate trends, stability scores)

Widgets: Customizable dashboard widgets (top failures, longest tests, flakiest tests, team productivity)

Key Features

Multi-Framework Integration

ReportPortal supports all major test frameworks via agents and listeners:

Java: JUnit 4/5, TestNG, Cucumber, Serenity, Karate

JavaScript: Jest, Mocha, Cypress, WebdriverIO, Playwright, Codecept

Python: pytest, Robot Framework, Behave, nose

.NET (C#): NUnit, xUnit, MSTest, SpecFlow, Gallio

Others: PHP (Codeception, PHPUnit), Go, Ruby (RSpec), Scala (ScalaTest)

Example integration (Python pytest, via the pytest-reportportal agent). Configuration goes in pytest.ini:

# pytest.ini
[pytest]
rp_endpoint = https://reportportal.company.com
rp_api_key = YOUR_API_KEY
rp_project = my_project
rp_launch = Regression Suite
rp_launch_description = Daily regression tests

The tests themselves need no ReportPortal-specific code:

# test_checkout.py
import pytest

@pytest.mark.parametrize("user_type", ["guest", "registered"])
def test_checkout(user_type):
    # ReportPortal captures test logs automatically; checkout_flow is the
    # application-specific helper under test
    assert checkout_flow(user_type) == "success"
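
Runs are reported only when the agent is enabled on the command line (per the pytest-reportportal plugin documentation):

pytest --reportportal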

Defect Management

ReportPortal includes built-in defect tracking with JIRA/Jama/Rally integration:

Defect Lifecycle: Submit defect → Link to test → Track status → Auto-retest when resolved

Bulk Operations: Classify multiple similar failures at once

Defect History: View all test failures associated with specific defect

JIRA Integration: Create JIRA tickets from failures, sync status bidirectionally

Test Failure → Create JIRA Issue (auto-filled with logs/screenshots)
JIRA Resolved → ReportPortal marks test for retest
Test Passes → JIRA issue verified fixed

Multi-Project Management

ReportPortal supports unlimited projects with:

Project Roles: Admin, Project Manager, Member, Customer (view-only)

Shared Launches: Cross-project comparison reporting

Project Settings: Independent configuration per project (analyzers, integrations, retention)

Project Activity: Audit log of user actions

Notification System

Configurable notifications for test results:

Email: Launch completion summaries with failure breakdown

Slack/MS Teams: Real-time failure notifications

Webhooks: Custom integrations with any system via HTTP callbacks

Example Slack notification:

🔴 Regression Suite Failed
━━━━━━━━━━━━━━━━━━━━━━━━
Project: E-commerce Platform
Launch: #3456 | Duration: 42m 15s
Passed: 892 | Failed: 18 | Skipped: 5

New Failures: 3
Known Issues: 12
To Investigate: 3

Top Issues:
1. PayPal integration timeout (PB-5678) - 7 tests
2. Product search flakiness (AB-1234) - 4 tests
3. Database connection pool exhausted (SI-9012) - 2 tests

View Details: https://reportportal.company.com/launch/3456
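
For the webhook channel, any HTTP endpoint works as a receiver. A minimal sketch follows; the payload fields read here are hypothetical, so consult your ReportPortal version’s documentation for the actual schema:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Field names are illustrative, not a documented schema.
        print(f"Launch event: {payload.get('name')} -> {payload.get('status')}")
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 9000), WebhookHandler).serve_forever()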

Deployment Options

Docker Compose (Quickstart)

ReportPortal provides official Docker images for rapid deployment:

# docker-compose.yml (abridged; the official compose file bundles
# additional services such as authorization, analyzer, and storage)
version: '3.7'
services:
  gateway:
    image: reportportal/service-gateway:5.9.0
    ports:
      - "8080:8080"
    environment:
      - RP_PROFILES=docker

  api:
    image: reportportal/service-api:5.9.0
    environment:
      - RP_DB_HOST=postgres
      - RP_AMQP_HOST=rabbitmq

  ui:
    image: reportportal/service-ui:5.9.0

  postgres:
    image: postgres:12-alpine
    environment:
      POSTGRES_DB: reportportal
      POSTGRES_USER: rpuser
      POSTGRES_PASSWORD: rppass

  rabbitmq:
    image: rabbitmq:3.11-management

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.9
    environment:
      - discovery.type=single-node

Bring the stack up and log in:

docker-compose up -d
# Access ReportPortal at http://localhost:8080
# Default credentials: superadmin / erebus

Kubernetes Deployment

For production scale, ReportPortal provides Helm charts:

helm repo add reportportal https://reportportal.io/kubernetes
helm install reportportal reportportal/reportportal \
  --set postgresql.enabled=true \
  --set rabbitmq.enabled=true \
  --set elasticsearch.enabled=true \
  --set gateway.ingress.enabled=true \
  --set gateway.ingress.hosts[0]=reportportal.company.com

Supports:

  • Horizontal scaling of API/analyzer services
  • External databases (AWS RDS, Azure Database)
  • External message brokers (Amazon MQ, CloudAMQP)
  • S3-compatible storage for artifacts

SaaS Option

EPAM offers managed ReportPortal SaaS (ReportPortal.io):

  • No infrastructure management
  • Automatic updates
  • Enterprise support available
  • Pricing: Contact sales (not publicly listed)

Comparison with Alternatives

| Feature | ReportPortal | Allure TestOps | Zebrunner | TestRail | Grafana K6 |
|---|---|---|---|---|---|
| AI Analysis | ✅ Advanced ML | ⚠️ Basic | ✅ ML-powered | ❌ No | ❌ No |
| Open Source | ✅ Fully free | ❌ Commercial | ⚠️ CE limited | ❌ Commercial | ✅ Free |
| Framework Support | ✅ 15+ frameworks | ✅ 15+ frameworks | ✅ 10+ frameworks | ⚠️ Via API | ⚠️ K6 only |
| Real-Time Dashboard | ✅ Yes | ✅ Yes | ✅ Yes | ❌ No | ✅ Yes |
| Self-Hosted | ✅ Full control | ✅ Available | ✅ Available | ✅ Available | ✅ Yes |
| Test Orchestration | ❌ Reporting only | ✅ Full | ✅ Smart launcher | ❌ No | ✅ Yes |
| Enterprise Support | ⚠️ Paid (EPAM) | ✅ Included | ✅ Included | ✅ Included | ⚠️ Cloud only |

ReportPortal vs. Commercial Tools: Free, with features comparable to commercial platforms costing $500-2,000/month

ReportPortal vs. Allure: Allure Report is simpler but ReportPortal has superior ML analysis and multi-project management

ReportPortal Unique Advantage: Only enterprise-grade test intelligence platform that’s completely free

Pricing

ReportPortal Open Source: $0 - Fully free, all features included

Infrastructure Costs (self-hosted):

  • Small Team (1-5 users, 10K tests/month): $50-100/month (AWS t3.medium instances)
  • Medium Team (10-25 users, 100K tests/month): $200-400/month (Kubernetes cluster)
  • Enterprise (100+ users, 1M+ tests/month): $1000-3000/month (multi-AZ, HA setup)

Professional Services (optional):

  • EPAM Consulting: Custom pricing for implementation, customization
  • Community Support: Free (GitHub issues, Slack channel)
  • Enterprise Support SLA: Contact EPAM for pricing

Cost Comparison (100K tests/month):

  • ReportPortal: $200/month (infrastructure only)
  • Allure TestOps: $1,500-2,000/month
  • Zebrunner: $500-800/month
  • TestRail: $1,400/month (20 users)

ReportPortal offers roughly 60-90% cost savings versus commercial alternatives, depending on the tool compared.

Best Practices

Launch Naming Convention

Standardize launch names for better filtering:

Format: [Project]_[Suite]_[Environment]_[Build]
Examples:
- WebApp_Regression_Staging_#3456
- MobileApp_Smoke_Production_v2.5.1
- API_Integration_Dev_PR-789
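
In CI, the launch name is best assembled from pipeline variables rather than hard-coded. A small sketch, where the environment variable names are examples from a hypothetical CI setup:

import os

def launch_name() -> str:
    # Variable names are illustrative; substitute your CI system's variables.
    project = os.getenv("PROJECT_NAME", "WebApp")
    suite = os.getenv("SUITE", "Regression")
    env = os.getenv("TARGET_ENV", "Staging")
    build = os.getenv("BUILD_NUMBER", "local")
    return f"{project}_{suite}_{env}_#{build}"

print(launch_name())  # e.g. WebApp_Regression_Staging_#3456

The result can then be passed to the reporting agent (the pytest agent, for example, accepts a command-line override such as --rp-launch).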

Log Level Strategy

Use appropriate log levels for ML analysis effectiveness:

  • ERROR: Test failures, exceptions
  • WARN: Recoverable issues, retries
  • INFO: Test flow checkpoints
  • DEBUG: Detailed execution steps
  • TRACE: Framework internals

ML models rely on error-level logs for pattern matching—ensure failures log meaningful error messages.
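
With the Python agent, this comes down to ordinary logging configuration. The fixture below follows the pattern documented by pytest-reportportal (treat exact import paths as version-dependent):

# conftest.py
import logging

import pytest
from reportportal_client import RPLogger

@pytest.fixture(scope="session")
def rp_logger():
    # Route standard logging records into ReportPortal
    logging.setLoggerClass(RPLogger)
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)
    return logger

A failing test should then call rp_logger.error(...) with a meaningful message, since error-level entries are what the analyzer matches on.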

Analyzer Configuration

Tune auto-analysis settings:

Minimum Should Match: 80% similarity threshold (lower = more suggestions, less accurate)

Analyzer Mode:

  • Current Launch: Analyze only against items within the current launch
  • All Launches: Analyze against the entire launch history (slower, more data)

Number of Log Lines: Analyze last 5-10 log lines (balance performance vs. accuracy)

Defect Triage Workflow

  1. Daily Review: Team lead reviews “To Investigate” failures
  2. Bulk Classification: Use AI suggestions to classify similar failures quickly (see the API sketch after this list)
  3. JIRA Linking: Link product bugs to tracking systems
  4. Quarantine: Mark flaky tests for investigation
  5. Retest: When defects are resolved, trigger a retest of the affected tests
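
Bulk classification can also be scripted against the REST API. A rough sketch, assuming the v5 endpoint for updating test item issues; the item IDs and credentials are placeholders, and pb001 is ReportPortal’s built-in locator for Product Bug:

import requests

ENDPOINT = "https://reportportal.company.com"
PROJECT = "my_project"      # placeholder project name
TOKEN = "YOUR_API_TOKEN"    # placeholder token

item_ids = [101, 102, 103]  # placeholder failed test item IDs

resp = requests.put(
    f"{ENDPOINT}/api/v1/{PROJECT}/item",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "issues": [
            {
                "testItemId": item_id,
                "issue": {
                    "issueType": "pb001",  # Product Bug locator
                    "comment": "Linked to JIRA-5678",
                },
            }
            for item_id in item_ids
        ]
    },
)
resp.raise_for_status()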

Dashboard Customization

Create role-specific dashboards:

  • QA Engineers: Failed tests, flaky tests, execution timeline
  • QA Leads: Pass rate trends, test health, team productivity
  • Managers: Quality scorecard, defect distribution, ROI metrics
  • Developers: Tests related to their components, recent failures

Limitations

No Test Execution: ReportPortal only aggregates results, doesn’t trigger tests (needs CI/CD integration)

Learning Curve: ML features require time to train (50-100 launches minimum)

Infrastructure Management: Self-hosted deployment requires DevOps expertise

Limited Test Design Features: No test case repository like TestRail (execution-focused)

UI Complexity: Feature-rich interface has steeper learning curve than simpler alternatives

Conclusion

ReportPortal stands out as the most powerful completely free test intelligence platform available. Its AI-powered analysis capabilities rival commercial tools costing $1,500-2,000/month, making it an exceptional value proposition for teams with technical capacity to self-host.

Choose ReportPortal if:

  • Running large-scale test automation (10K+ tests)
  • Need ML-powered failure analysis without budget for commercial tools
  • Have DevOps resources for self-hosted deployment
  • Want enterprise features without enterprise pricing

Choose alternatives if:

  • Need managed SaaS with zero infrastructure (Zebrunner, Allure TestOps)
  • Want test orchestration capabilities (qTest, Allure TestOps)
  • Prefer simpler setup (Allure Report, TestRail)
  • Need test case design features (TestRail, Aqua)

For teams willing to invest in deployment and configuration, ReportPortal delivers extraordinary ROI: enterprise-grade test intelligence at infrastructure-only costs. The platform proves that open-source can compete with—and often exceed—commercial alternatives in the test management space.