Why Kubernetes Matters for QA

Kubernetes (K8s) has become the standard platform for running containerized applications in production. If the application you test runs on Kubernetes, understanding its architecture helps you debug failures, understand deployment behavior, and design more effective tests.

You do not need to become a Kubernetes administrator. But as a QA engineer, you need enough knowledge to read pod logs, check deployment status, understand why a test environment is misbehaving, and communicate effectively with DevOps teams.

Kubernetes Architecture

Cluster Components

A Kubernetes cluster consists of:

  • Control Plane: Manages cluster state, scheduling, and the API server
  • Worker Nodes: Machines that run your application containers
  • etcd: The control plane's distributed key-value store for cluster data
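You can see these components from the command line. Below is a small sketch, assuming `kubectl` is configured against a cluster; the `count_roles` helper and the sample output piped into it are illustrative, not captured from a real cluster:

```shell
# Illustrative helper: count control-plane vs. worker nodes from
# `kubectl get nodes` output. Against a live cluster you would run:
#   kubectl get nodes | count_roles
count_roles() {
  awk 'NR > 1 { if ($3 ~ /control-plane|master/) cp++; else w++ }
       END { print "control-plane:", cp + 0, "workers:", w + 0 }'
}

# Sample (made-up) `kubectl get nodes` output piped through the helper:
printf '%s\n' \
  'NAME    STATUS  ROLES          AGE  VERSION' \
  'cp-1    Ready   control-plane  30d  v1.29.0' \
  'node-1  Ready   <none>         30d  v1.29.0' \
  'node-2  Ready   <none>         30d  v1.29.0' | count_roles
# → control-plane: 1 workers: 2
```

Control plane components themselves (API server, scheduler, etcd) typically run as pods in the `kube-system` namespace, so `kubectl get pods -n kube-system` is another quick way to see them.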

Key Resources

Resource     Purpose                                   QA Relevance
Pod          Smallest unit — one or more containers    Your application runs in pods
Deployment   Manages pod replicas and updates          Rolling updates affect your tests
Service      Stable network endpoint for pods          How tests connect to the application
Namespace    Virtual cluster isolation                 Test environments as namespaces
ConfigMap    Configuration data                        Test environment settings
Secret       Sensitive data                            API keys, database credentials
Ingress      External HTTP/S access                    URL routing for test environments

Essential kubectl Commands for QA

Viewing Resources

# List all pods in current namespace
kubectl get pods

# List pods in a specific namespace
kubectl get pods -n staging

# Get detailed pod information
kubectl describe pod my-app-abc123

# List all services
kubectl get services

# List all deployments
kubectl get deployments

# Watch pods in real-time
kubectl get pods -w

Debugging Applications

# View pod logs
kubectl logs my-app-abc123

# Follow logs in real-time
kubectl logs -f my-app-abc123

# View logs from a previous container (after restart)
kubectl logs my-app-abc123 --previous

# Execute a shell inside a pod (use sh if bash is not installed in the image)
kubectl exec -it my-app-abc123 -- bash

# Port-forward to access a pod locally
kubectl port-forward my-app-abc123 3000:3000

# Check pod resource usage
kubectl top pods
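Pod logs can run to thousands of lines, so a filter helps during triage. A minimal sketch: the keyword pattern in `errors_only` is an assumption about the application's log format, and the sample lines are made up for illustration:

```shell
# Illustrative filter: keep only error-level lines from pod logs.
# The keywords are an assumption about the application's log format.
# Against a live cluster:  kubectl logs my-app-abc123 --tail=200 | errors_only
errors_only() {
  grep -iE 'error|exception|fatal'
}

# Sample (made-up) log lines piped through the filter:
printf '%s\n' \
  '2024-05-01 INFO  server started on :3000' \
  '2024-05-01 ERROR connection refused: db:5432' | errors_only
# → 2024-05-01 ERROR connection refused: db:5432
```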

Checking Deployment Status

# View deployment status
kubectl rollout status deployment/my-app

# View deployment history
kubectl rollout history deployment/my-app

# Check events (recent cluster activity)
kubectl get events --sort-by='.lastTimestamp'

Namespaces for Test Environments

Namespaces provide isolation within a cluster. Teams commonly use namespaces to create separate test environments:

production    → Live application
staging       → Pre-production testing
qa            → QA team's test environment
feature-xyz   → Ephemeral environment for a specific feature

# Create a namespace for testing
kubectl create namespace qa-testing

# Deploy to a specific namespace
kubectl apply -f deployment.yaml -n qa-testing

# Set default namespace
kubectl config set-context --current --namespace=qa-testing

Common QA Scenarios in Kubernetes

Scenario 1: Tests Fail After Deployment

Your E2E tests suddenly fail after a new deployment. Check:

# Is the pod running?
kubectl get pods -n staging

# Are there recent restarts? (CrashLoopBackOff)
kubectl describe pod app-pod-name -n staging

# Check application logs for errors
kubectl logs app-pod-name -n staging --tail=100

# Check events for scheduling or resource issues
kubectl get events -n staging --sort-by='.lastTimestamp'
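A common cause of post-deployment failures is tests starting before the rollout finishes. A minimal wait-and-retry sketch, where `retry_until` is a hypothetical helper and the deployment and namespace names are placeholders:

```shell
# Hypothetical helper: retry a check until it succeeds or the attempt
# budget runs out, pausing one second between attempts.
retry_until() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then return 0; fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Against a live cluster: wait up to ~30s for the rollout before E2E starts.
# retry_until 30 kubectl rollout status deployment/my-app -n staging --timeout=1s
```

Gating the E2E suite on a successful rollout like this removes a whole class of "tests ran against the old (or half-replaced) version" failures.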

Scenario 2: Intermittent Test Failures

Tests pass sometimes and fail others. Possible K8s-related causes:

  • Pod scaling: Requests hitting different pods with different states
  • Resource limits: Pod running out of memory or CPU
  • Liveness probe failures: Pod restarting mid-test

# Check if pods are restarting
kubectl get pods -n staging -o wide

# Check resource limits and usage
kubectl top pods -n staging
kubectl describe pod app-pod -n staging | grep -A5 "Limits:"
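To spot restart-prone pods quickly, you can filter the `kubectl get pods` output. An illustrative sketch: it assumes the default column layout (NAME READY STATUS RESTARTS AGE), and the sample output is made up:

```shell
# Illustrative helper: print pods whose RESTARTS column is non-zero.
# Newer kubectl may append e.g. "(10m ago)" after the count; $4 is still
# the numeric count. Against a live cluster:
#   kubectl get pods -n staging | flag_restarts
flag_restarts() {
  awk 'NR > 1 && $4 > 0 { print $1, "restarts:", $4 }'
}

# Sample (made-up) output piped through the helper:
printf '%s\n' \
  'NAME        READY  STATUS   RESTARTS  AGE' \
  'app-abc123  1/1    Running  4         2h' \
  'app-def456  1/1    Running  0         2h' | flag_restarts
# → app-abc123 restarts: 4
```

A non-zero restart count during a test run is a strong hint that intermittent failures come from the pod cycling, not from the tests themselves.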

Scenario 3: Cannot Connect to Test Environment

# Check if the service exists and has endpoints
kubectl get service my-app -n staging
kubectl get endpoints my-app -n staging

# Check ingress configuration
kubectl get ingress -n staging
kubectl describe ingress my-app-ingress -n staging

# Port-forward as a workaround
kubectl port-forward service/my-app 3000:80 -n staging
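When using the port-forward workaround, point the test suite at the forwarded local port instead of the ingress URL. A sketch, in which `LOCAL_PORT` and the test command are assumptions, not prescribed by the document:

```shell
# Sketch: run tests against a port-forwarded service while ingress is broken.
LOCAL_PORT=3000
BASE_URL="http://localhost:${LOCAL_PORT}"
echo "$BASE_URL"
# → http://localhost:3000

# Against a live cluster (background the port-forward, run tests, clean up):
# kubectl port-forward service/my-app "${LOCAL_PORT}:80" -n staging &
# PF_PID=$!
# BASE_URL="$BASE_URL" npm run test:e2e    # hypothetical test command
# kill "$PF_PID"
```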

Exercise: Debug a Failing Test Environment

Your team deploys an application to a Kubernetes staging namespace. E2E tests that ran fine yesterday now timeout with “connection refused.” Walk through the debugging steps.

Solution

Step 1: Check pod status

kubectl get pods -n staging

Look for: CrashLoopBackOff, ImagePullBackOff, Pending, or 0/1 Ready.

Step 2: Check pod events and logs

kubectl describe pod app-pod-name -n staging
kubectl logs app-pod-name -n staging

Look for: OOM killed, failed health checks, configuration errors.

Step 3: Check the service

kubectl get service my-app -n staging
kubectl get endpoints my-app -n staging

Look for: Missing endpoints (no healthy pods backing the service).

Step 4: Check recent deployments

kubectl rollout status deployment/my-app -n staging
kubectl rollout history deployment/my-app -n staging

Look for: Failed rollout, wrong image tag, missing ConfigMap.

Step 5: Check resource availability

kubectl top pods -n staging
kubectl describe node | grep -A5 "Allocated resources"

Look for: Node at capacity, unable to schedule pods.

Common root causes:

  1. New deployment has a bug — pod crashes on startup
  2. Docker image tag is wrong — ImagePullBackOff
  3. Missing environment variable or secret — application fails to start
  4. Resource quota exceeded — pod cannot be scheduled
  5. Network policy blocks traffic — service is unreachable

Kubernetes Testing Patterns

Pattern 1: Namespace-per-PR

Create an ephemeral namespace for each pull request with the full application stack. Delete after tests pass.
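The lifecycle can be sketched as a short CI script. The naming scheme, manifest path, and test command below are assumptions; only the name derivation is plain shell, while the kubectl lines need a live cluster:

```shell
# Hypothetical helper: derive an ephemeral namespace name from a PR number.
pr_namespace() {
  printf 'pr-%s' "$1"
}

NS=$(pr_namespace 123)
echo "$NS"
# → pr-123

# In CI, roughly:
# kubectl create namespace "$NS"
# kubectl apply -f k8s/ -n "$NS"      # deploy the full stack
# ...run E2E tests against the ephemeral environment...
# kubectl delete namespace "$NS"      # teardown after tests pass
```

Deleting the namespace removes every resource inside it in one step, which is what makes namespace-per-PR cheap to tear down.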

Pattern 2: Shared Staging

A single staging namespace with the latest main branch deployed. All QA tests run here.

Pattern 3: Local Development with Minikube

Run a local Kubernetes cluster for development testing:

# Start Minikube
minikube start

# Deploy your application
kubectl apply -f k8s/

# Access the application
minikube service my-app --url

Key Takeaways

  1. Know enough K8s to debug test failures — kubectl get pods, kubectl logs, and kubectl describe are your primary tools
  2. Namespaces isolate test environments — each team or feature can have its own namespace
  3. Pod lifecycle affects tests — restarts, scaling, and resource limits cause intermittent failures
  4. Services provide stable endpoints — always connect tests to services, not directly to pods
  5. Collaborate with DevOps — QA does not manage the cluster but must understand it to diagnose issues