TL;DR
- Locust = Python-based load testing (tests are just Python code)
- Define user behavior in `locustfile.py` with `@task` decorators
- Run with web UI (`locust`) or headless (`locust --headless`)
- Distributed mode: one master + N workers for massive scale
- Real-time metrics: RPS, response times, failure rates
Best for: Python teams, API load testing, developers who prefer code over GUI
Skip if: you need GUI-based test building or extensive protocol support (use JMeter)
Reading time: 12 minutes
Your API handles 100 requests per second in production. Black Friday is coming — will it handle 10x? You need to know before customers find out.
Locust lets you write load tests in Python. No XML configs, no complex GUIs. Just Python code that simulates user behavior.
What is Locust?
Locust is an open-source load testing framework written in Python. You define user behavior as Python code, and Locust spawns thousands of concurrent users to stress your system.
Why Locust:
- Python-native — tests are regular Python code
- Scalable — distributed mode for millions of users
- Real-time UI — web dashboard shows metrics live
- Lightweight — low resource overhead per user
- Flexible — test any system with Python libraries
Installation
```bash
pip install locust
```
Verify installation:
```bash
locust --version
```
Your First Load Test
Basic Locustfile
```python
# locustfile.py
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 3)  # Wait 1-3 seconds between tasks

    @task
    def homepage(self):
        self.client.get("/")

    @task(3)  # 3x more likely than homepage
    def api_endpoint(self):
        self.client.get("/api/products")
```
Run the Test
```bash
# Start with web UI
locust -f locustfile.py --host=https://api.example.com

# Open http://localhost:8089
# Set users and spawn rate
# Click Start
```
Headless Mode (CI/CD)
```bash
locust -f locustfile.py \
  --host=https://api.example.com \
  --users 100 \
  --spawn-rate 10 \
  --run-time 5m \
  --headless
```
Writing User Scenarios
Multiple Tasks
```python
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    wait_time = between(0.5, 2)

    @task(5)
    def list_products(self):
        self.client.get("/api/products")

    @task(2)
    def get_product(self):
        self.client.get("/api/products/1")

    @task(1)
    def create_order(self):
        self.client.post("/api/orders", json={
            "product_id": 1,
            "quantity": 2
        })
```
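The `@task(n)` weights above are relative, not absolute: Locust picks each task in proportion to its weight. A quick plain-Python sanity check of the traffic mix those weights produce (no Locust required):

```python
# Expected share of requests for each weighted task
weights = {"list_products": 5, "get_product": 2, "create_order": 1}
total = sum(weights.values())

shares = {name: w / total for name, w in weights.items()}
for name, share in shares.items():
    print(f"{name}: {share:.1%}")
# list_products: 62.5%
# get_product: 25.0%
# create_order: 12.5%
```

Useful when you want the simulated mix to match production traffic: derive the weights from your access logs rather than guessing.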
Setup and Teardown
```python
from locust import HttpUser, task, between

class AuthenticatedUser(HttpUser):
    wait_time = between(1, 3)

    def on_start(self):
        # Run once when user starts
        response = self.client.post("/api/login", json={
            "username": "test",
            "password": "password"
        })
        self.token = response.json()["token"]

    def on_stop(self):
        # Run once when user stops
        self.client.post("/api/logout")

    @task
    def protected_endpoint(self):
        self.client.get("/api/profile", headers={
            "Authorization": f"Bearer {self.token}"
        })
```
Sequential Tasks
```python
from locust import HttpUser, task, between, SequentialTaskSet

class UserFlow(SequentialTaskSet):
    @task
    def browse(self):
        self.client.get("/products")

    @task
    def add_to_cart(self):
        self.client.post("/cart", json={"product_id": 1})

    @task
    def checkout(self):
        self.client.post("/checkout")

class EcommerceUser(HttpUser):
    tasks = [UserFlow]
    wait_time = between(1, 3)
```
Assertions and Validation
```python
from locust import HttpUser, task

class ValidatingUser(HttpUser):
    @task
    def check_api(self):
        with self.client.get("/api/status", catch_response=True) as response:
            if response.status_code != 200:
                response.failure(f"Got {response.status_code}")
            elif response.elapsed.total_seconds() > 2:
                response.failure("Too slow!")
            elif "ok" not in response.text:
                response.failure("Missing 'ok' in response")
            else:
                response.success()
```
Distributed Testing
Master Node
```bash
locust -f locustfile.py --master --host=https://api.example.com
```
Worker Nodes
```bash
# On each worker machine
locust -f locustfile.py --worker --master-host=192.168.1.100
```
Docker Compose
```yaml
# docker-compose.yml
version: '3'
services:
  master:
    image: locustio/locust
    ports:
      - "8089:8089"
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --master -H https://api.example.com
  worker:
    image: locustio/locust
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --worker --master-host master
    deploy:
      replicas: 4
```
Understanding Results
Key Metrics
| Metric | What It Means |
|---|---|
| RPS | Requests per second throughput |
| Median | 50th percentile response time |
| 95%ile | 95% of requests faster than this |
| Failures | Failed request percentage |
| Users | Current simulated user count |
Export Results
```bash
locust -f locustfile.py \
  --headless \
  --users 100 \
  --spawn-rate 10 \
  --run-time 5m \
  --csv=results \
  --html=report.html
```
Generates:
- `results_stats.csv` — request statistics
- `results_failures.csv` — failure details
- `results_stats_history.csv` — time series data
- `report.html` — visual report
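These CSVs are easy to post-process. A sketch of a CI gate that fails the build when the aggregated 95th percentile exceeds a budget — it assumes the `95%` column header and the `Aggregated` row name that recent Locust versions write to `results_stats.csv`, so check your version's output first:

```python
import csv

def p95_exceeds(path, limit_ms):
    """True if the aggregated 95th-percentile column exceeds limit_ms."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("Name") == "Aggregated":
                return float(row["95%"]) > limit_ms
    raise ValueError("no 'Aggregated' row found in stats CSV")
```

In CI, call `p95_exceeds("results_stats.csv", 500)` after the run and exit non-zero when it returns True, so a latency regression fails the pipeline instead of scrolling past in a log.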
CI/CD Integration
GitHub Actions
```yaml
# .github/workflows/load-test.yml
name: Load Tests

on:
  schedule:
    - cron: '0 2 * * *'  # Daily at 2 AM

jobs:
  load-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Install Locust
        run: pip install locust
      - name: Run Load Test
        run: |
          locust -f locustfile.py \
            --host=${{ secrets.API_URL }} \
            --users 50 \
            --spawn-rate 5 \
            --run-time 2m \
            --headless \
            --csv=results
      - name: Upload Results
        uses: actions/upload-artifact@v4
        with:
          name: load-test-results
          path: results*.csv
```
Locust with AI Assistance
AI tools can help write and optimize Locust tests.
What AI does well:
- Generate user scenarios from API specs
- Create data variations for realistic load
- Suggest performance thresholds
- Convert JMeter tests to Locust
What still needs humans:
- Defining realistic user behavior patterns
- Setting appropriate load levels
- Interpreting results in business context
- Debugging production-specific issues
FAQ
What is Locust?
Locust is a Python-based open-source load testing framework. You write user behavior as Python code using `@task` decorators, and Locust spawns concurrent users to generate load. It provides a web UI for real-time monitoring and supports distributed testing for massive scale.
Locust vs JMeter — which is better?
Locust uses Python code (better for developers who want version control and IDE support). JMeter uses a GUI (easier for non-developers). Locust is more lightweight and scales easily with distributed mode. JMeter has broader protocol support and more plugins. Choose Locust for Python teams, JMeter for teams needing a GUI or specific protocols.
Can Locust test WebSockets?
Yes, through custom clients. Locust’s architecture supports any protocol — you implement the connection logic in Python. The community provides libraries for WebSockets, gRPC, and other protocols. Locust handles user simulation and metrics while you control the actual protocol implementation.
How many users can Locust simulate?
A single machine typically handles 1,000-10,000 concurrent users depending on scenario complexity and available resources. For larger tests, use distributed mode with one master coordinating multiple workers. This scales to millions of users across a cluster.
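You can sanity-check how many users a target load needs with a Little's-law-style estimate: each simulated user generates roughly one request per (wait time + response time) cycle, so users ≈ target RPS × cycle time. The numbers below are illustrative:

```python
# Rough sizing: users needed to hit a target request rate
target_rps = 1000     # requests per second you want to generate
avg_wait = 2.0        # seconds between tasks (between(1, 3) averages ~2 s)
avg_response = 0.2    # seconds per request

cycle = avg_wait + avg_response      # one user's full request cycle
users_needed = target_rps * cycle    # Little's law style estimate
print(f"~{users_needed:.0f} users")  # ~2200 users
```

If that count exceeds what one machine handles comfortably, that is your cue to switch to distributed mode.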
See Also
- JMeter Tutorial - GUI-based load testing
- k6 Load Testing - JavaScript load testing
- API Performance Testing - Performance testing basics
- Gatling Tutorial - Scala-based load testing
