Database performance is critical for application responsiveness. Slow queries can cascade into system-wide performance issues, affecting user experience and scalability.
Key Performance Metrics
database_metrics:
  query_performance:
    - execution_time: "< 100ms"
    - throughput: "> 1000 queries/sec"
    - slow_query_threshold: "1 second"
  resource_utilization:
    - cpu_usage: "< 70%"
    - memory_usage: "< 80%"
    - disk_io: "optimized"
    - connection_pool: "healthy"
  reliability:
    - availability: "99.95%"
    - replication_lag: "< 1 second"
    - deadlock_frequency: "minimal"
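Targets like these are easiest to enforce when they are measured the same way on every run. Below is a minimal throughput probe, assuming a local PostgreSQL test database reachable via psycopg2; the connection details and the probe query are placeholders to be swapped for a representative workload.

import time
import psycopg2

def measure_throughput(duration_s=10):
    # Placeholder connection and query; replace "SELECT 1" with a
    # representative query from the real workload
    conn = psycopg2.connect(host="localhost", dbname="testdb")
    cur = conn.cursor()
    executed = 0
    deadline = time.time() + duration_s
    while time.time() < deadline:
        cur.execute("SELECT 1")
        cur.fetchone()
        executed += 1
    cur.close()
    conn.close()
    return executed / duration_s

qps = measure_throughput()
print(f"{qps:.0f} queries/sec (target: > 1000)")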
Query Performance Testing
1. Identify Slow Queries
PostgreSQL:
-- Enable slow query logging for statements longer than 1000 ms
ALTER SYSTEM SET log_min_duration_statement = 1000;
SELECT pg_reload_conf();

-- Find slow queries (requires the pg_stat_statements extension)
SELECT query, calls, mean_exec_time, max_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
MySQL:
-- Enable slow query log
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;
SET GLOBAL log_output = 'TABLE';  -- write to mysql.slow_log instead of a file

-- Analyze slow queries
SELECT * FROM mysql.slow_log
ORDER BY query_time DESC;
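Either source can feed an automated check. Here is a minimal sketch for the PostgreSQL side using psycopg2 and pg_stat_statements; the connection details and the 100 ms threshold are placeholders tied to the targets above, and the same pattern works against mysql.slow_log with a MySQL driver.

import psycopg2

SLOW_QUERY_THRESHOLD_MS = 100  # matches the execution_time target above

def find_slow_queries():
    # Placeholder connection details for a local test database
    conn = psycopg2.connect(host="localhost", dbname="testdb")
    try:
        with conn.cursor() as cur:
            cur.execute("""
                SELECT query, calls, mean_exec_time
                FROM pg_stat_statements
                WHERE mean_exec_time > %s
                ORDER BY mean_exec_time DESC
                LIMIT 10
            """, (SLOW_QUERY_THRESHOLD_MS,))
            return cur.fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    for query, calls, mean_ms in find_slow_queries():
        print(f"{mean_ms:8.1f} ms  {calls:6d} calls  {query[:80]}")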
2. EXPLAIN Analysis
-- PostgreSQL
EXPLAIN ANALYZE
SELECT u.name, COUNT(o.id) as order_count
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
WHERE u.created_at > '2025-01-01'
GROUP BY u.id
ORDER BY order_count DESC;
-- Look for:
-- - Sequential scans on large tables (usually a red flag)
-- - Index scans and index-only scans (good)
-- - Nested loops over large row counts
-- - Hash joins for matching large sets
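Plan checks can also be automated so regressions surface in CI. Below is a minimal sketch using PostgreSQL's EXPLAIN (FORMAT JSON); the connection details, table, and query are placeholders.

import json
import psycopg2

def plan_has_seq_scan(conn, sql, params=None):
    # Ask PostgreSQL for the plan as JSON and walk the plan tree
    with conn.cursor() as cur:
        cur.execute("EXPLAIN (FORMAT JSON) " + sql, params)
        raw = cur.fetchone()[0]
        doc = raw if isinstance(raw, list) else json.loads(raw)
        plan = doc[0]["Plan"]

    def walk(node):
        if node.get("Node Type") == "Seq Scan":
            return True
        return any(walk(child) for child in node.get("Plans", []))

    return walk(plan)

conn = psycopg2.connect(host="localhost", dbname="testdb")
assert not plan_has_seq_scan(
    conn, "SELECT * FROM orders WHERE user_id = %s", (123,)
), "Query fell back to a sequential scan - check indexes"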
3. Index Optimization
-- Create index
CREATE INDEX idx_orders_user_id ON orders(user_id);
CREATE INDEX idx_users_created_at ON users(created_at);
-- Composite index
CREATE INDEX idx_orders_composite ON orders(user_id, status, created_at);
-- Test index usage
EXPLAIN SELECT * FROM orders WHERE user_id = 123;
Connection Pool Testing
connection_pool_config:
  min_connections: 10
  max_connections: 100
  connection_timeout: 30s
  idle_timeout: 600s
  max_lifetime: 1800s

load_test_scenarios:
  normal_load:
    concurrent_connections: 50
    duration: 30m
    assertion: "no connection timeouts"
  peak_load:
    concurrent_connections: 150
    duration: 10m
    assertion: "connections < max_pool_size"
Connection Pool Test:
import threading
import psycopg2
import psycopg2.pool

# Create connection pool
pool = psycopg2.pool.ThreadedConnectionPool(
    minconn=10,
    maxconn=100,
    host="localhost",
    database="testdb"
)

errors = []

def execute_query(query_id):
    try:
        conn = pool.getconn()
    except psycopg2.pool.PoolError as exc:
        # Raised when all 100 pooled connections are in use - expected under overload
        errors.append((query_id, exc))
        return
    try:
        with conn.cursor() as cursor:
            cursor.execute("SELECT * FROM large_table LIMIT 1000")
            cursor.fetchall()
    finally:
        pool.putconn(conn)

# Simulate load: 200 workers competing for 100 pooled connections
threads = []
for i in range(200):
    t = threading.Thread(target=execute_query, args=(i,))
    threads.append(t)
    t.start()

for t in threads:
    t.join()

print(f"{len(errors)} requests could not get a connection from the pool")
Load Testing Tools
1. Sysbench
# Prepare test
sysbench /usr/share/sysbench/oltp_read_write.lua \
--mysql-host=localhost \
--mysql-user=test \
--mysql-password=password \
--mysql-db=testdb \
--tables=10 \
--table-size=1000000 \
prepare
# Run test (reuse the same connection options as prepare)
sysbench /usr/share/sysbench/oltp_read_write.lua \
--mysql-host=localhost \
--mysql-user=test \
--mysql-password=password \
--mysql-db=testdb \
--tables=10 \
--threads=50 \
--time=300 \
run

# Cleanup (drops the generated test tables)
sysbench /usr/share/sysbench/oltp_read_write.lua \
--mysql-host=localhost \
--mysql-user=test \
--mysql-password=password \
--mysql-db=testdb \
cleanup
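To compare runs over time, the summary sysbench prints can be captured programmatically. A rough sketch follows; the regular expressions assume sysbench's current text output format and may need adjusting for other versions.

import re
import subprocess

def run_sysbench(extra_args):
    # Invoke sysbench and pull headline numbers out of its summary output
    cmd = ["sysbench", "/usr/share/sysbench/oltp_read_write.lua", *extra_args, "run"]
    output = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    tps = re.search(r"transactions:\s+\d+\s+\(([\d.]+) per sec", output)
    p95 = re.search(r"95th percentile:\s+([\d.]+)", output)
    return {
        "transactions_per_sec": float(tps.group(1)) if tps else None,
        "latency_p95_ms": float(p95.group(1)) if p95 else None,
    }

stats = run_sysbench(["--mysql-db=testdb", "--mysql-user=test",
                      "--mysql-password=password", "--threads=50", "--time=300"])
print(stats)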
2. Custom Scripts
import random
import time
import psycopg2
from concurrent.futures import ThreadPoolExecutor

QUERIES = [
    "SELECT * FROM users WHERE id = %s",
    "SELECT * FROM orders WHERE user_id = %s",
    "UPDATE users SET last_login = NOW() WHERE id = %s",
]

def run_query_test(num_threads=50, duration=300):
    def worker():
        # psycopg2 connections are not safe to share across threads,
        # so each worker opens its own
        conn = psycopg2.connect(host="localhost", database="testdb")
        executed = 0
        end_time = time.time() + duration
        try:
            while time.time() < end_time:
                with conn.cursor() as cursor:
                    query = random.choice(QUERIES)
                    cursor.execute(query, (random.randint(1, 10000),))
                conn.commit()
                executed += 1
        finally:
            conn.close()
        return executed

    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        futures = [executor.submit(worker) for _ in range(num_threads)]
        total = sum(f.result() for f in futures)

    print(f"{total} queries in {duration}s -> {total / duration:.0f} queries/sec")

run_query_test()
Performance Optimization
1. Query Optimization
-- Bad: N+1 query problem
SELECT * FROM users;
-- Then for each user:
SELECT * FROM orders WHERE user_id = ?;
-- Good: Single query with JOIN
SELECT u.*, o.*
FROM users u
LEFT JOIN orders o ON u.id = o.user_id;
-- Bad: SELECT *
SELECT * FROM large_table WHERE id = 123;
-- Good: Select only needed columns
SELECT id, name, email FROM large_table WHERE id = 123;
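The N+1 pattern usually originates in application code, where a loop issues one query per row. A small sketch contrasting the two approaches; the table and column names are illustrative.

import psycopg2

conn = psycopg2.connect(host="localhost", dbname="testdb")

# Bad: N+1 - one query for the users, then one query per user
with conn.cursor() as cur:
    cur.execute("SELECT id, name FROM users")
    users = cur.fetchall()
    for user_id, name in users:
        cur.execute("SELECT count(*) FROM orders WHERE user_id = %s", (user_id,))
        order_count = cur.fetchone()[0]

# Good: one round trip with a JOIN and aggregation
with conn.cursor() as cur:
    cur.execute("""
        SELECT u.id, u.name, COUNT(o.id) AS order_count
        FROM users u
        LEFT JOIN orders o ON u.id = o.user_id
        GROUP BY u.id, u.name
    """)
    rows = cur.fetchall()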
2. Caching Strategy
caching_layers:
  application_level:
    - redis_cache
    - in_memory_cache
  database_level:
    - query_result_cache
    - prepared_statement_cache
  infrastructure:
    - cdn_caching
    - reverse_proxy_cache
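At the application level, a read-through cache is the most common pattern: check Redis first, fall back to the database, and store the result with a TTL. A minimal sketch using redis-py and psycopg2 follows; the key naming and 60-second TTL are arbitrary choices.

import json
import psycopg2
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
db = psycopg2.connect(host="localhost", dbname="testdb")

def get_user(user_id, ttl=60):
    key = f"user:{user_id}"

    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit - no database round trip

    with db.cursor() as cur:
        cur.execute("SELECT id, name, email FROM users WHERE id = %s", (user_id,))
        row = cur.fetchone()

    if row is not None:
        user = {"id": row[0], "name": row[1], "email": row[2]}
        cache.setex(key, ttl, json.dumps(user))  # expire after ttl seconds
        return user
    return None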
3. Partitioning
-- Table partitioning by date
-- (the parent orders table must be declared with PARTITION BY RANGE (created_at))
CREATE TABLE orders_2025_01 PARTITION OF orders
FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');

CREATE TABLE orders_2025_02 PARTITION OF orders
FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');
Monitoring and Analysis
Real-Time Monitoring
monitoring_tools:
  database_specific:
    postgres: pg_stat_statements, pgBadger
    mysql: Performance Schema, pt-query-digest
    mongodb: profiler, mongostat
  general:
    - prometheus + grafana
    - datadog
    - new_relic
Performance Dashboard
# Key metrics to track
dashboard_metrics = {
    "query_performance": [
        "average_query_time",
        "slow_query_count",
        "query_throughput"
    ],
    "connections": [
        "active_connections",
        "idle_connections",
        "connection_errors"
    ],
    "resources": [
        "cpu_usage",
        "memory_usage",
        "disk_io",
        "network_io"
    ],
    "locks": [
        "deadlock_count",
        "lock_wait_time",
        "blocked_queries"
    ]
}
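These metrics can be exposed directly from a test harness or a small sidecar. A sketch using prometheus_client gauges follows; the collect_metrics function is a placeholder for whatever actually feeds the numbers, such as pg_stat_activity or a provider API.

import time
from prometheus_client import Gauge, start_http_server

# One gauge per dashboard metric; labels could be added per database instance
ACTIVE_CONNECTIONS = Gauge("db_active_connections", "Active database connections")
AVG_QUERY_TIME_MS = Gauge("db_average_query_time_ms", "Average query time in ms")
DEADLOCK_COUNT = Gauge("db_deadlock_count", "Deadlocks since last restart")

def collect_metrics():
    # Placeholder: pull real values from pg_stat_activity, pg_stat_database, etc.
    return {"active_connections": 42, "average_query_time_ms": 12.5, "deadlocks": 0}

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://localhost:8000/metrics
    while True:
        snapshot = collect_metrics()
        ACTIVE_CONNECTIONS.set(snapshot["active_connections"])
        AVG_QUERY_TIME_MS.set(snapshot["average_query_time_ms"])
        DEADLOCK_COUNT.set(snapshot["deadlocks"])
        time.sleep(15)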
Conclusion
Database performance testing ensures queries execute efficiently and systems scale properly. Regular testing, monitoring, and optimization prevent performance degradation and maintain responsive applications.
Key Takeaways:
- Identify and optimize slow queries
- Implement proper indexing strategy
- Test connection pool configuration
- Monitor database metrics continuously
- Use caching and partitioning strategically