Introduction to Non-Functional Testing
While functional testing answers “Does it work?”, non-functional testing answers the equally critical questions: “Does it work well? Is it usable? Is it accessible? Does it work everywhere?”
Non-functional testing validates quality attributes that determine user satisfaction, market reach, and long-term success. A perfectly functional application that’s slow, difficult to use, inaccessible to disabled users, or incompatible with certain browsers may fail in the marketplace despite having zero functional bugs.
In this comprehensive guide, we’ll explore the key pillars of non-functional testing: usability, compatibility, localization/internationalization, and accessibility. You’ll learn practical techniques, tools, and best practices to ensure your software not only works correctly but delivers an exceptional experience to all users.
Understanding Non-Functional Testing
What Is Non-Functional Testing?
Non-functional testing evaluates how well a system works rather than what it does (the “what” is the domain of functional testing). It focuses on quality attributes including:
- Usability: How easy and pleasant is the system to use?
- Performance: How fast and responsive is it?
- Reliability: How stable and dependable?
- Compatibility: Does it work across different environments?
- Security: How well is it protected?
- Accessibility: Can all users, including those with disabilities, use it?
- Localization: Does it adapt to different languages and cultures?
Why Non-Functional Testing Matters
Business impact:
- User retention: 88% of users won’t return after a bad user experience
- Legal compliance: Accessibility failures can result in lawsuits
- Market reach: Poor localization limits global expansion
- Brand reputation: Compatibility issues damage credibility
- Competitive advantage: Superior UX differentiates products
Real-world consequences of neglecting non-functional testing:
Example 1: E-commerce site
- 1 second page load delay = 7% reduction in conversions
- Poor mobile compatibility = 50% of potential customers lost
- Inaccessible checkout = Violation of ADA regulations
Example 2: SaaS application
- Complex UI = Higher support costs and user churn
- No right-to-left language support = Middle East market inaccessible
- Browser incompatibility = Enterprise deals lost
Example 3: Mobile app
- High memory usage = 1-star reviews and uninstalls
- Poor localization = Bad app store ratings in international markets
- Accessibility failures = Exclusion of 15% of potential users
Usability Testing: Human-Centered Quality Assurance
Understanding Usability
Definition: Usability measures how effectively, efficiently, and satisfactorily users can accomplish their goals using a system.
The five quality components of usability (Jakob Nielsen):
- Learnability: How easy is it for new users to accomplish tasks?
- Efficiency: How quickly can experienced users perform tasks?
- Memorability: Can users remember how to use it after a break?
- Errors: How many errors do users make, and can they recover?
- Satisfaction: How pleasant is the experience?
Usability Testing Methods
1. Moderated Usability Testing
What: A facilitator guides users through tasks while observing and asking questions.
When to use:
- Early design validation
- Complex workflows
- When you need in-depth insights
- Medical, financial, or critical systems
Process:
1. Preparation:
- Define objectives
- Create test scenarios
- Recruit participants (5-8 users per round)
- Prepare test environment
- Create facilitator script
2. Execution:
- Welcome and consent
- Explain think-aloud protocol
- Present scenarios
- Observe and take notes
- Ask follow-up questions
- Post-test questionnaire
3. Analysis:
- Identify patterns across users
- Categorize issues by severity
- Calculate success rates and time on task
- Document insights and recommendations
Best practices:
- Use realistic scenarios, not “Click the login button”
- Avoid leading questions
- Remain neutral—don’t help unless necessary
- Record sessions (with permission)
- Test with representative users
- Focus on tasks, not features
Sample scenario (good):
"You just finished a workout and want to log it in the app.
Log your 30-minute run from this morning."
Sample scenario (bad):
"Click the 'Add Workout' button and fill in the form."
2. Unmoderated Remote Testing
What: Users complete tasks independently while their interactions are recorded.
When to use:
- Large sample sizes needed
- Distributed user base
- Budget constraints
- Quick feedback needed
Tools:
- UserTesting
- Lookback
- Maze
- UsabilityHub
Pros:
- Cost-effective
- Faster results
- Natural environment
- Larger sample sizes
Cons:
- No real-time clarification
- Can’t ask follow-up questions
- May miss context
- Technical issues harder to resolve
3. A/B Testing
What: Compare two versions to determine which performs better.
When to use:
- Optimizing specific elements
- Data-driven design decisions
- High-traffic applications
- Incremental improvements
What to A/B test:
- Call-to-action buttons (color, text, placement)
- Headlines and copy
- Navigation structure
- Form layouts
- Pricing presentation
- Image vs video content
Best practices:
✓ Test one variable at a time
✓ Ensure statistical significance
✓ Run tests long enough (typically 1-2 weeks minimum)
✓ Segment results by user type
✓ Consider external factors (seasonality, campaigns)
✓ Document and share learnings
Example A/B test:
Hypothesis: Changing CTA from "Submit" to "Get My Free Trial"
will increase conversions
Metric: Conversion rate (form submissions / page views)
Sample size: Minimum 1000 conversions per variant
Duration: 2 weeks
Result: Version B increased conversions by 23%
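Before declaring a winner, confirm that a lift like this is statistically significant. The snippet below is a minimal sketch of a two-proportion z-test in JavaScript; the visitor and conversion counts are invented for illustration, and a dedicated experimentation platform or statistics library is preferable in practice.
// Two-proportion z-test: is the difference between variant conversion rates
// larger than random noise would explain?
function abTestZScore(conversionsA, visitorsA, conversionsB, visitorsB) {
  const rateA = conversionsA / visitorsA;
  const rateB = conversionsB / visitorsB;
  // Pooled rate under the null hypothesis that both variants convert equally
  const pooled = (conversionsA + conversionsB) / (visitorsA + visitorsB);
  const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  return (rateB - rateA) / standardError;
}
// |z| > 1.96 roughly corresponds to p < 0.05 (95% confidence, two-tailed)
const z = abTestZScore(1000, 20000, 1230, 20000); // Hypothetical counts for A and B
console.log(z.toFixed(2), Math.abs(z) > 1.96 ? 'significant' : 'not significant'); // 5.01 significant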
4. Heuristic Evaluation
What: Experts evaluate the interface against established usability principles.
Nielsen’s 10 Usability Heuristics:
Visibility of system status: Keep users informed about what’s happening
- Example: Progress bars, loading indicators, confirmation messages
Match between system and real world: Use familiar language and concepts
- Example: Trash icon for delete, shopping cart icon for purchases
User control and freedom: Provide undo/redo
- Example: “Undo send” in email, edit/delete options
Consistency and standards: Follow platform conventions
- Example: Logo in top left, search in top right, consistent button styles
Error prevention: Preventing problems is even better than good error messages
- Example: Disable invalid options, confirmation for destructive actions
Recognition rather than recall: Minimize memory load
- Example: Dropdown menus, recently used items, autofill
Flexibility and efficiency: Shortcuts for power users
- Example: Keyboard shortcuts, templates, bulk actions
Aesthetic and minimalist design: No irrelevant information
- Example: Progressive disclosure, clean layouts, clear hierarchy
Help users recognize, diagnose, and recover from errors: Clear error messages
- Example: “Password must be 8+ characters” instead of “Invalid input”
Help and documentation: Searchable, task-focused, concise
- Example: Contextual help, tooltips, searchable knowledge base
Conducting heuristic evaluation:
1. Preparation:
- Select 3-5 evaluators
- Define scope (whole app or specific features)
- Provide heuristics checklist
2. Evaluation:
- Each evaluator independently reviews interface
- Document violations with:
* Which heuristic violated
* Severity (cosmetic, minor, major, catastrophic)
* Location (screenshot, page, feature)
* Description of problem
* Suggested fix
3. Debriefing:
- Consolidate findings
- Remove duplicates
- Prioritize issues
- Create action plan
Usability Metrics
Quantitative metrics:
Task success rate: % of users who complete task correctly
Formula: (Successful completions / Total attempts) × 100
Good: >78%
Time on task: How long it takes to complete
Measure: Average time and median time
Compare: Against benchmarks or previous versions
Error rate: Mistakes made during tasks
Formula: (Number of errors / Total task attempts) × 100
Track: Type of errors and recovery
Clicks/taps to completion: Efficiency measure
Best practice: Minimize steps to goal
3-click rule: Critical info accessible within 3 clicks
Qualitative metrics:
System Usability Scale (SUS): 10-question standardized survey
Score range: 0-100
Above 68: Above average
Above 80: Excellent
Net Promoter Score (NPS): “How likely to recommend?”
Scale: 0-10 (Promoters: 9-10, Passives: 7-8, Detractors: 0-6)
NPS = % Promoters - % Detractors
Customer Satisfaction (CSAT): “How satisfied were you?”
Scale: 1-5 or 1-7
Good: >80% satisfied (4-5 on a 5-point scale)
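These survey scores are easy to miscompute, so here is a minimal scoring sketch in JavaScript. The answer arrays are illustrative; the SUS calculation assumes the standard 10 questions on a 1-5 scale with alternating positive/negative wording.
// SUS: odd-numbered items score (answer - 1), even-numbered items score (5 - answer);
// the sum of all ten contributions is multiplied by 2.5 to give a 0-100 score.
function susScore(answers) {
  if (answers.length !== 10) throw new Error('SUS requires exactly 10 answers');
  const sum = answers.reduce((total, answer, i) => {
    const contribution = i % 2 === 0 ? answer - 1 : 5 - answer; // i = 0 is question 1
    return total + contribution;
  }, 0);
  return sum * 2.5;
}
// NPS: percentage of promoters (9-10) minus percentage of detractors (0-6)
function npsScore(ratings) {
  const promoters = ratings.filter((r) => r >= 9).length;
  const detractors = ratings.filter((r) => r <= 6).length;
  return Math.round(((promoters - detractors) / ratings.length) * 100);
}
console.log(susScore([4, 2, 5, 1, 4, 2, 5, 2, 4, 1])); // 85
console.log(npsScore([10, 9, 8, 7, 6, 10, 9, 3, 9, 10])); // 40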
Usability Testing Checklist
Navigation & Information Architecture:
- Users can find what they’re looking for quickly
- Navigation is consistent across pages
- Breadcrumbs help users understand location
- Search functionality works well
- Menu labels are clear and descriptive
Forms & Input:
- Forms are as short as possible
- Labels are clear and positioned properly
- Required fields are clearly marked
- Input validation is helpful, not frustrating
- Error messages are specific and actionable
- Form remembers entered data on error
Content & Readability:
- Text is scannable (headings, bullets, short paragraphs)
- Font size is readable (minimum 16px for body text)
- Contrast ratio meets standards (4.5:1 minimum)
- Important actions are visually prominent
- Content uses plain language
Mobile Usability:
- Touch targets are at least 44×44px
- Text is readable without zooming
- Horizontal scrolling not required
- Forms are mobile-optimized
- Navigation is thumb-friendly
Performance & Feedback:
- Page loads feel fast (<3 seconds)
- Actions provide immediate feedback
- Loading states prevent confusion
- Errors are prevented where possible
- Success confirmations are clear
Compatibility Testing: Ensuring Universal Access
Understanding Compatibility Testing
Definition: Compatibility testing verifies that software works correctly across different environments, including browsers, devices, operating systems, and networks.
Types of compatibility:
- Browser compatibility: Chrome, Firefox, Safari, Edge, etc.
- Device compatibility: Desktop, tablet, mobile, different screen sizes
- Operating system compatibility: Windows, macOS, Linux, iOS, Android
- Network compatibility: Different speeds, offline scenarios
- Database compatibility: Different database systems
- Backward compatibility: Works with older versions
Browser Compatibility Testing
Market share drives priorities (as of 2025):
Desktop:
- Chrome: ~65% (Priority 1)
- Safari: ~15% (Priority 1)
- Edge: ~10% (Priority 2)
- Firefox: ~7% (Priority 2)
- Others: ~3% (Priority 3 or skip)
Mobile:
- Chrome Mobile: ~65% (Priority 1)
- Safari iOS: ~25% (Priority 1)
- Samsung Internet: ~6% (Priority 2)
- Others: ~4% (Priority 3 or skip)
Testing approach:
1. Identify target browsers:
Analyze your analytics:
- Which browsers do your users use?
- What versions?
- What percentage of traffic?
Define support tiers:
Tier 1: Fully supported, all features work (90% of users)
Tier 2: Supported, critical features work (9% of users)
Tier 3: Best effort, content accessible (1% of users)
2. Create test matrix:
Browser | Versions | Operating Systems | Priority
-----------|---------------|-------------------|----------
Chrome | Latest, N-1 | Windows, Mac | P0
Safari | Latest, N-1 | Mac, iOS | P0
Edge | Latest | Windows | P1
Firefox | Latest | Windows, Mac | P1
Samsung | Latest | Android | P2
3. Common compatibility issues:
CSS issues:
/* Flexbox support */
.container {
display: -webkit-box; /* Old Safari */
display: -webkit-flex; /* Safari 6.1+ */
display: -ms-flexbox; /* IE 10 */
display: flex; /* Modern browsers */
}
/* Grid support - check caniuse.com */
@supports (display: grid) {
.grid-container {
display: grid;
}
}
/* Vendor prefixes */
.element {
-webkit-transform: rotate(45deg);
-ms-transform: rotate(45deg);
transform: rotate(45deg);
}
JavaScript issues:
// ES6 features may need polyfills for older browsers
// Check compatibility: caniuse.com
// Arrow functions
const multiply = (a, b) => a * b; // May need Babel
// Promises
fetch('/api/data') // May need polyfill for IE11
.then(response => response.json())
.then(data => console.log(data));
// Use feature detection
if ('IntersectionObserver' in window) {
// Use IntersectionObserver
} else {
// Fallback approach
}
4. Testing tools:
Cross-browser testing platforms:
- BrowserStack: Real devices and browsers in the cloud
- Sauce Labs: Automated and manual testing
- LambdaTest: Live and automated testing
- CrossBrowserTesting: Real browser testing
Local testing:
- BrowserStack Local: Test localhost on real browsers
- Virtual machines: Test on different OS
- Device lab: Physical devices for testing
Automated testing:
// Selenium WebDriver - cross-browser automation
const { Builder, By, Key, until } = require('selenium-webdriver');
async function testCrossBrowser(browserName) {
let driver = await new Builder()
.forBrowser(browserName)
.build();
try {
await driver.get('https://www.google.com');
await driver.findElement(By.name('q')).sendKeys('webdriver', Key.RETURN);
await driver.wait(until.titleContains('webdriver'), 5000);
} finally {
await driver.quit();
}
}
// Test across browsers
testCrossBrowser('chrome');
testCrossBrowser('firefox');
testCrossBrowser('safari');
Device & Responsive Testing
Testing strategy:
1. Define device matrix:
Category | Devices | Priority
----------------|-----------------------------------|----------
Mobile - Small | iPhone SE (375×667) | P0
Mobile - Medium | iPhone 14 (390×844) | P0
Mobile - Large | iPhone 14 Pro Max (430×932) | P1
Tablet | iPad (810×1080) | P1
Desktop - Small | 1366×768 | P0
Desktop - Medium| 1920×1080 | P0
Desktop - Large | 2560×1440 | P2
2. Responsive breakpoints:
/* Mobile first approach */
/* Base styles: Mobile (320px+) */
.container {
padding: 1rem;
}
/* Tablet (768px+) */
@media (min-width: 768px) {
.container {
padding: 2rem;
max-width: 720px;
}
}
/* Desktop (1024px+) */
@media (min-width: 1024px) {
.container {
padding: 3rem;
max-width: 960px;
}
}
/* Large desktop (1280px+) */
@media (min-width: 1280px) {
.container {
max-width: 1200px;
}
}
3. Testing checklist:
Layout:
- No horizontal scrolling at any viewport
- Content is readable without zooming
- Images scale appropriately
- Text doesn’t overflow containers
- Grid/flex layouts work correctly
Touch interactions:
- Tap targets are minimum 44×44px
- Buttons have adequate spacing
- Swipe gestures work (if implemented)
- Dropdown menus are touch-friendly
- Hover states have touch equivalents
Performance:
- Images are optimized for mobile
- Lazy loading implemented
- Minimal JavaScript on mobile
- Fast initial page load
4. Testing tools:
- Chrome DevTools: Device simulation
- Firefox Responsive Design Mode: Multiple viewports
- Real devices: Physical device testing
- BrowserStack: Real device cloud
- Responsively App: All viewports at once
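Several of the layout checks above, such as “no horizontal scrolling at any viewport,” can be automated. Here is a minimal sketch using Puppeteer (assuming it is installed; yoursite.com is a placeholder), with a viewport list that mirrors the device matrix above.
// Load the page at several viewports and flag horizontal overflow
const puppeteer = require('puppeteer');
const viewports = [
  { name: 'Mobile - Small', width: 375, height: 667 },
  { name: 'Tablet', width: 810, height: 1080 },
  { name: 'Desktop - Medium', width: 1920, height: 1080 },
];
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  for (const vp of viewports) {
    await page.setViewport({ width: vp.width, height: vp.height });
    await page.goto('https://yoursite.com', { waitUntil: 'networkidle2' });
    // Horizontal scrolling exists when the document is wider than the viewport
    const overflows = await page.evaluate(
      () => document.documentElement.scrollWidth > document.documentElement.clientWidth
    );
    console.log(`${vp.name} (${vp.width}x${vp.height}): ${overflows ? 'HORIZONTAL OVERFLOW' : 'OK'}`);
  }
  await browser.close();
})();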
Operating System Compatibility
Testing considerations:
Desktop OS differences:
Windows vs macOS:
- Font rendering (Windows uses ClearType subpixel rendering; macOS anti-aliasing looks smoother and heavier)
- Keyboard shortcuts (Ctrl vs Cmd)
- File paths (backslash vs forward slash)
- Default fonts availability
- Form controls styling
Linux:
- Various distributions (Ubuntu, Fedora, etc.)
- Different desktop environments (GNOME, KDE)
- Font availability varies
Mobile OS differences:
iOS vs Android:
- Safari vs Chrome default browser
- Different date/time pickers
- Different form input behaviors
- Touch event handling
- Push notification differences
- Camera/file access APIs
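For web applications, the Ctrl-versus-Cmd difference above affects code as well as testing. A minimal sketch of platform-aware shortcut handling, where saveDocument is a hypothetical application function:
// Cmd on macOS/iOS, Ctrl elsewhere (a simplification; platform sniffing has limits)
const isApplePlatform = /Mac|iPhone|iPad|iPod/.test(navigator.platform);
document.addEventListener('keydown', (event) => {
  const modifierPressed = isApplePlatform ? event.metaKey : event.ctrlKey;
  if (modifierPressed && event.key.toLowerCase() === 's') {
    event.preventDefault(); // Stop the browser's own "Save page" behavior
    saveDocument();         // Hypothetical application function
  }
});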
Compatibility Testing Best Practices
1. Prioritize based on data:
✓ Use analytics to identify user browsers/devices
✓ Test most common configurations first
✓ Define minimum supported versions
✓ Document browser support policy
2. Automate where possible:
✓ Automated cross-browser tests in CI/CD
✓ Visual regression testing
✓ Responsive design tests
✓ Accessibility automation
3. Use progressive enhancement:
✓ Core functionality works everywhere
✓ Enhanced features for modern browsers
✓ Graceful degradation for older browsers
✓ Feature detection over browser detection
4. Document compatibility:
✓ Maintain browser support matrix
✓ Document known issues
✓ Provide workarounds if available
✓ Update regularly
Localization & Internationalization Testing
Understanding L10n and I18n
Internationalization (i18n): Designing software so it can be adapted to various languages and regions without code changes.
Localization (l10n): Adapting internationalized software for a specific region or language.
Think of it as:
- i18n = Making it possible
- l10n = Actually doing it
Internationalization: Building for Global Reach
Key i18n requirements:
1. Text handling:
// DON'T: Hardcoded text
<button>Submit</button>
// DO: Externalized strings
<button>{t('common.submit')}</button>
// Language files
en.json: { "common": { "submit": "Submit" } }
es.json: { "common": { "submit": "Enviar" } }
fr.json: { "common": { "submit": "Soumettre" } }
2. Unicode support:
✓ Use UTF-8 encoding everywhere
✓ Support characters from any language
✓ Handle emojis and special characters
✓ Proper string length calculation (grapheme clusters, not bytes)
Example issues:
"Café": é can be one code point or two (e plus a combining accent) depending on Unicode normalization
Family emoji (👨‍👩‍👧‍👦): looks like one character, but is several code points joined by zero-width joiners
(A code sketch at the end of this list demonstrates both issues.)
3. Date and time:
// DON'T: Hardcoded format
const date = "12/01/2025"; // Ambiguous: Dec 1 or Jan 12?
// DO: Use locale-aware formatting
const date = new Date(2025, 0, 12); // Local-time constructor; month is zero-based
const formatted = new Intl.DateTimeFormat('en-US').format(date);
// Output: 1/12/2025 (US)
const formattedUK = new Intl.DateTimeFormat('en-GB').format(date);
// Output: 12/01/2025 (UK)
const formattedJP = new Intl.DateTimeFormat('ja-JP').format(date);
// Output: 2025/1/12 (Japan)
// Time zones
const options = {
timeZone: 'America/New_York',
hour: '2-digit',
minute: '2-digit'
};
4. Numbers and currency:
// Number formatting
const number = 1234567.89;
new Intl.NumberFormat('en-US').format(number);
// Output: 1,234,567.89
new Intl.NumberFormat('de-DE').format(number);
// Output: 1.234.567,89
new Intl.NumberFormat('fr-FR').format(number);
// Output: 1 234 567,89
// Currency
const amount = 99.99;
new Intl.NumberFormat('en-US', {
style: 'currency',
currency: 'USD'
}).format(amount);
// Output: $99.99
new Intl.NumberFormat('de-DE', {
style: 'currency',
currency: 'EUR'
}).format(amount);
// Output: 99,99 €
new Intl.NumberFormat('ja-JP', {
style: 'currency',
currency: 'JPY'
}).format(amount);
// Output: ¥100 (JPY doesn't use decimals)
5. Text direction:
/* Support RTL (Right-to-Left) languages like Arabic, Hebrew */
/* DON'T: Fixed direction */
.container {
text-align: left;
padding-left: 20px;
}
/* DO: Logical properties */
.container {
text-align: start; /* Adapts to text direction */
padding-inline-start: 20px; /* Left in LTR, right in RTL */
}
/* HTML direction attribute */
<html dir="rtl" lang="ar">
/* RTL-specific styles */
[dir="rtl"] .icon {
transform: scaleX(-1); /* Mirror icons if needed */
}
6. Text expansion:
Translation text length varies:
English "Account" (7 chars)
→ German "Benutzerkonto" (13 chars) - about 85% longer
→ French "Compte d'utilisateur" (20 chars) - about 185% longer
Design considerations:
✓ Don't set fixed widths for text
✓ Allow text to wrap
✓ Test with longest language
✓ Use flexible layouts (flexbox, grid)
✓ Add 30-40% extra space in designs
Common expansion factors:
English → German: +30%
English → French: +15-20%
English → Spanish: +20-30%
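Returning to the Unicode issues noted in requirement 2: modern JavaScript can demonstrate them directly. A minimal sketch, assuming a runtime that supports Intl.Segmenter (recent browsers and Node releases):
const cafeNFC = 'Café'.normalize('NFC'); // é stored as a single code point
const cafeNFD = 'Café'.normalize('NFD'); // é stored as e + combining accent
console.log(cafeNFC.length, cafeNFD.length); // 4 5 — same visible text, different lengths
// Family emoji: one visible glyph, four person code points joined by zero-width joiners
const family = '\u{1F468}\u200D\u{1F469}\u200D\u{1F467}\u200D\u{1F466}';
console.log(family.length); // 11 UTF-16 code units
// Count user-perceived characters (grapheme clusters) instead of code units
const segmenter = new Intl.Segmenter('en', { granularity: 'grapheme' });
console.log([...segmenter.segment(family)].length); // 1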
Localization Testing
What to test:
1. Translation quality:
Checklist:
- [ ] All text is translated (no English in Spanish version)
- [ ] Translations are contextually correct
- [ ] No truncated text
- [ ] No text overflow
- [ ] Terminology consistency
- [ ] Cultural appropriateness
- [ ] No machine translation artifacts
2. Linguistic testing:
Grammar and spelling:
- [ ] Correct grammar
- [ ] Proper punctuation
- [ ] Appropriate formality level (formal vs informal)
- [ ] Gender agreement (languages with gendered nouns)
- [ ] Plural forms (some languages have multiple plural forms)
Example: Russian plurals
1 файл (1 file)
2 файла (2 files)
5 файлов (5 files)
Different endings for 1, 2-4, and 5+ items
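JavaScript's Intl.PluralRules maps a count to the correct plural category for a locale; translation libraries rely on the same CLDR data. A minimal sketch for the Russian example above:
const rules = new Intl.PluralRules('ru-RU');
const forms = { one: 'файл', few: 'файла', many: 'файлов', other: 'файла' };
[1, 2, 5, 21, 104].forEach((count) => {
  console.log(`${count} ${forms[rules.select(count)]}`);
});
// 1 файл, 2 файла, 5 файлов, 21 файл, 104 файла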
3. Formatting:
- [ ] Date formats correct for locale
- [ ] Time formats (12 vs 24 hour)
- [ ] Number formats (decimal separator)
- [ ] Currency symbols and placement
- [ ] Phone number formats
- [ ] Address formats
- [ ] Name order (given name first vs family name first)
4. Cultural adaptation:
- [ ] Images appropriate for culture
- [ ] Colors have correct cultural meaning
- [ ] Icons understood in target culture
- [ ] Examples relevant to region
- [ ] No offensive content
- [ ] Holidays and celebrations localized
Examples of cultural differences to check:
- Thumbs-up emoji: Offensive in some Middle Eastern countries
- Red: Lucky in China, signals danger or errors in many Western countries
- Owl: Wisdom in the West, bad luck in some Asian cultures
- Example addresses: Use the local format and real local city names
5. Locale-specific functionality:
- [ ] Local payment methods supported
- [ ] Local shipping options available
- [ ] Local tax calculation
- [ ] Local legal compliance (GDPR, CCPA, etc.)
- [ ] Local contact information
- [ ] Local customer support hours
6. RTL (Right-to-Left) testing:
For Arabic, Hebrew, Persian:
- [ ] Text flows right to left
- [ ] Layout mirrors correctly
- [ ] Icons mirror where appropriate
- [ ] Forms align correctly
- [ ] Tables read right to left
- [ ] Scrollbars on correct side
- [ ] Navigation menu reversed
Testing tools:
Pseudolocalization: Test i18n without actual translations
English: "Save Changes"
Pseudo: "[§Śàvè Çĥàñĝèś Ţèśţ Éẋţŕà Çĥàŕś§]"
Benefits:
- Shows untranslated strings
- Tests text expansion
- Finds encoding issues
- No need to know target language
(A sketch of a simple pseudolocalizer appears after this list.)
Translation management: Crowdin, Lokalise, Phrase
Automated testing: Check for missing translations, key consistency
Native speakers: Always involve for final validation
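A pseudolocalization transform is simple enough to write yourself. The sketch below is a hypothetical helper (not any specific library's API): it accents vowels so hardcoded English stands out, and pads strings to simulate roughly 40% text expansion.
function pseudolocalize(text, expansion = 0.4) {
  const accents = { a: 'à', e: 'è', i: 'ì', o: 'ò', u: 'ù' };
  const accented = text.replace(/[aeiou]/gi, (ch) => {
    const replacement = accents[ch.toLowerCase()];
    return ch === ch.toLowerCase() ? replacement : replacement.toUpperCase();
  });
  const padding = '·'.repeat(Math.ceil(text.length * expansion)); // Simulate expansion
  return `[${accented}${padding}]`;
}
console.log(pseudolocalize('Save Changes')); // [Sàvè Chàngès·····]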
Localization Best Practices
✓ Plan for i18n from day one
✓ Externalize all strings
✓ Use ICU MessageFormat for complex strings
✓ Test with actual native speakers
✓ Avoid text in images
✓ Use locale-aware libraries
✓ Document translation context
✓ Implement continuous localization
✓ Test early and often
✓ Consider cultural differences beyond language
Accessibility Testing: Inclusive Design
Understanding Accessibility
Definition: Accessibility ensures that people with disabilities can perceive, understand, navigate, and interact with software.
Who benefits from accessibility:
- 15% of global population has some form of disability
- Permanent disabilities: Blindness, deafness, motor impairments
- Temporary disabilities: Broken arm, eye infection
- Situational limitations: Bright sunlight, noisy environment, hands full
Legal and business case:
Legal requirements:
- ADA (Americans with Disabilities Act)
- Section 508 (US government)
- EN 301 549 (EU)
- Potential lawsuits for non-compliance
Business benefits:
- Larger market reach
- Better SEO (accessibility overlaps with SEO)
- Improved usability for all users
- Corporate social responsibility
- Innovation driver
WCAG 2.1 Guidelines
The Web Content Accessibility Guidelines (WCAG) 2.1 are organized around four principles:
1. Perceivable
Information must be presentable to users in ways they can perceive.
1.1 Text Alternatives:
<!-- Images -->
<img src="chart.png" alt="Sales increased 25% in Q4">
<!-- Decorative images -->
<img src="decorative-line.png" alt="">
<!-- Complex images -->
<img src="complex-chart.png"
alt="Bar chart showing quarterly sales"
longdesc="sales-data.html">
<!-- Icons with text -->
<button>
<svg aria-hidden="true">...</svg>
<span>Delete</span>
</button>
<!-- Icons without text -->
<button aria-label="Delete item">
<svg aria-hidden="true">...</svg>
</button>
1.2 Time-based Media:
Video content:
- [ ] Captions for deaf users
- [ ] Audio descriptions for blind users
- [ ] Transcript available
Audio content:
- [ ] Transcript available
1.3 Adaptable:
<!-- Proper heading structure -->
<h1>Main Title</h1>
<h2>Section 1</h2>
<h3>Subsection 1.1</h3>
<h2>Section 2</h2>
<!-- Semantic HTML -->
<nav>...</nav>
<main>...</main>
<aside>...</aside>
<footer>...</footer>
<!-- Lists -->
<ul><!-- Unordered list --></ul>
<ol><!-- Ordered list --></ol>
<!-- Tables -->
<table>
<thead>
<tr>
<th scope="col">Name</th>
<th scope="col">Age</th>
</tr>
</thead>
<tbody>
<tr>
<td>John</td>
<td>30</td>
</tr>
</tbody>
</table>
1.4 Distinguishable:
Color contrast:
- Normal text: Minimum 4.5:1 ratio
- Large text (18pt+): Minimum 3:1 ratio
- UI components: Minimum 3:1 ratio
Test contrast:
- Chrome DevTools
- WebAIM Contrast Checker
- Stark plugin for Figma
Don't use color alone:
❌ "Required fields are in red"
✓ "Required fields are marked with an asterisk (*)"
Resize text:
- [ ] Text can be resized to 200% without loss of functionality
- [ ] Use relative units (rem, em) not fixed pixels
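Contrast ratios can be computed directly from the WCAG definitions of relative luminance and contrast. A minimal sketch for 6-digit hex colors:
// Relative luminance per WCAG: gamma-correct each sRGB channel, then weight R, G, B
function relativeLuminance(hex) {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const channel = parseInt(hex.slice(i, i + 2), 16) / 255;
    return channel <= 0.03928 ? channel / 12.92 : ((channel + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}
// Contrast ratio = (lighter + 0.05) / (darker + 0.05), ranging from 1:1 to 21:1
function contrastRatio(hexA, hexB) {
  const [darker, lighter] = [relativeLuminance(hexA), relativeLuminance(hexB)].sort((a, b) => a - b);
  return (lighter + 0.05) / (darker + 0.05);
}
console.log(contrastRatio('#767676', '#ffffff').toFixed(2)); // ~4.54 — passes 4.5:1 for normal text
console.log(contrastRatio('#999999', '#ffffff').toFixed(2)); // ~2.85 — fails for normal text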
2. Operable
User interface components must be operable.
2.1 Keyboard Accessible:
<!-- All interactive elements must be keyboard accessible -->
<!-- Native elements are keyboard accessible by default -->
<button>Click me</button>
<a href="/page">Link</a>
<input type="text">
<!-- Custom elements need tabindex -->
<div role="button" tabindex="0" onkeydown="handleKey(event)">
Custom button
</div>
<!-- Skip to main content link -->
<a href="#main-content" class="skip-link">
Skip to main content
</a>
<main id="main-content">
...
</main>
<!-- Keyboard trap prevention -->
<!-- Users must be able to navigate away from any component -->
Test keyboard navigation:
- [ ] Tab through all interactive elements
- [ ] Shift+Tab navigates backwards
- [ ] Enter activates buttons/links
- [ ] Space activates buttons/checkboxes
- [ ] Arrow keys work in dropdowns/radio groups
- [ ] Escape closes modals/dropdowns
- [ ] Focus visible on all elements
- [ ] Logical tab order
- [ ] No keyboard traps
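The custom-button example earlier references a handleKey function; a minimal sketch of what it might do is shown below. A div with role="button" must respond to both Enter and Space, just as a native button does.
function handleKey(event) {
  if (event.key === 'Enter' || event.key === ' ') {
    event.preventDefault(); // Stop Space from scrolling the page
    event.target.click();   // Reuse whatever click handler the element already has
  }
}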
2.2 Enough Time:
Timing adjustable:
- [ ] User can extend time limits
- [ ] User can turn off time limits (where possible)
- [ ] User warned before timeout
- [ ] At least 20 seconds to extend
Auto-playing content:
- [ ] User can pause/stop/hide auto-updating content
- [ ] No auto-play unless user can control
2.3 Seizures:
- [ ] Nothing flashes more than 3 times per second
- [ ] No large flashing areas
- [ ] Parallax effects can be disabled
2.4 Navigable:
<!-- Page titles -->
<title>Page Name - Site Name</title>
<!-- Landmarks -->
<header role="banner">...</header>
<nav role="navigation" aria-label="Main">...</nav>
<main role="main">...</main>
<aside role="complementary">...</aside>
<footer role="contentinfo">...</footer>
<!-- Focus order -->
<!-- Visual order should match DOM order -->
<!-- Link purpose -->
❌ <a href="/products">Click here</a>
✓ <a href="/products">View our products</a>
<!-- Multiple ways to find pages -->
- Search
- Site map
- Navigation menu
- Breadcrumbs
3. Understandable
Information and operation must be understandable.
3.1 Readable:
<!-- Language of page -->
<html lang="en">
<!-- Language of parts -->
<p>The French word <span lang="fr">bonjour</span> means hello.</p>
<!-- Clear language -->
- Use plain language
- Define jargon
- Expand abbreviations
- Provide reading level appropriate content
3.2 Predictable:
Consistent navigation:
- [ ] Navigation in same place on every page
- [ ] Same components have same labels
Consistent identification:
- [ ] Icons have consistent meanings
- [ ] Same functionality = same labels
On focus:
- [ ] Focus doesn't trigger unexpected changes
- [ ] Modal doesn't open on focus
On input:
- [ ] Form submission requires explicit action
- [ ] No automatic navigation on selection
3.3 Input Assistance:
<!-- Error identification -->
<label for="email">Email *</label>
<input
type="email"
id="email"
aria-invalid="true"
aria-describedby="email-error"
>
<span id="email-error" role="alert">
Please enter a valid email address
</span>
<!-- Labels and instructions -->
<label for="password">
Password (minimum 8 characters)
</label>
<input type="password" id="password">
<!-- Error prevention -->
- Confirmation for destructive actions
- Ability to review before submission
- Ability to undo submissions
4. Robust
Content must be robust enough to work with assistive technologies.
4.1 Compatible:
<!-- Valid HTML -->
- No duplicate IDs
- Proper nesting
- Closed tags
<!-- ARIA (Accessible Rich Internet Applications) -->
<!-- Name, Role, Value must be programmatically determined -->
<!-- Native <button> already exposes role="button"; aria-label supplies the accessible name -->
<button aria-label="Close dialog">
×
</button>
<div
role="progressbar"
aria-valuenow="25"
aria-valuemin="0"
aria-valuemax="100"
aria-label="Upload progress"
>
25% complete
</div>
Accessibility Testing Methods
1. Automated testing:
Tools:
- axe DevTools (browser extension)
- Lighthouse (Chrome DevTools)
- WAVE (WebAIM)
- Pa11y (command line)
Catches ~30-40% of issues:
✓ Missing alt text
✓ Color contrast
✓ Missing labels
✓ Invalid ARIA
✓ Heading structure
Doesn't catch:
✗ Focus order quality
✗ Alt text quality
✗ Keyboard navigation flow
✗ Screen reader experience
✗ Context and meaning
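A minimal sketch of wiring one of these tools into a script, assuming the pa11y package is installed and using a placeholder URL; the same approach can run in CI and fail the build when violations appear.
const pa11y = require('pa11y');
(async () => {
  const results = await pa11y('https://yoursite.com', {
    standard: 'WCAG2AA', // Test against WCAG 2 level AA
  });
  results.issues.forEach((issue) => {
    console.log(`${issue.code}\n  ${issue.message}\n  at ${issue.selector}\n`);
  });
  console.log(`${results.issues.length} issues found`);
  process.exitCode = results.issues.length > 0 ? 1 : 0; // Non-zero exit fails a CI job
})();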
2. Manual testing:
Keyboard navigation:
1. Unplug mouse
2. Navigate using only keyboard
3. Verify all functionality accessible
4. Check focus indicators
5. Test logical tab order
Screen reader testing:
- NVDA (Windows, free)
- JAWS (Windows, paid)
- VoiceOver (Mac/iOS, built-in)
- TalkBack (Android, built-in)
Basic screen reader test:
1. Turn on screen reader
2. Navigate with screen reader shortcuts
3. Verify all content announced
4. Test forms and interactions
5. Verify alternative text quality
3. User testing:
Test with actual users with disabilities:
- Blind users (screen readers)
- Low vision users (magnification)
- Motor impairment users (keyboard only)
- Deaf users (captions)
- Cognitive disabilities (clear language)
Benefits:
- Real-world usage patterns
- Uncover issues tools miss
- Empathy building
- Innovation insights
Accessibility Checklist
Foundation:
- Valid, semantic HTML
- Logical heading structure
- Landmark regions defined
- Skip links provided
- Page title describes content
Images & Media:
- All images have alt text
- Decorative images have empty alt
- Complex images have long descriptions
- Videos have captions
- Audio has transcripts
Forms:
- All inputs have labels
- Required fields indicated
- Error messages clear and specific
- Errors associated with inputs
- Fieldsets group related inputs
Keyboard:
- All functionality keyboard accessible
- Focus indicators visible
- Logical tab order
- No keyboard traps
- Skip navigation links
Color & Contrast:
- 4.5:1 contrast for normal text
- 3:1 contrast for large text
- Color not sole indicator
- Works in high contrast mode
Assistive Technology:
- Works with screen readers
- ARIA used appropriately
- Live regions for dynamic content
- Status messages announced
Content:
- Plain language used
- Link text descriptive
- Language of page set
- Instructions don’t rely on shape/size/location
Conclusion
Non-functional testing is not optional—it’s essential for creating software that users love and that reaches the widest possible audience. Each dimension we’ve covered contributes to overall product quality:
- Usability testing ensures your software is intuitive and pleasant to use
- Compatibility testing guarantees access across browsers, devices, and platforms
- Localization testing opens global markets and shows cultural respect
- Accessibility testing includes everyone and often improves the experience for all users
The best approach is to integrate non-functional testing throughout development, not as an afterthought. Build accessibility into designs, test usability with prototypes, check compatibility early, and plan for localization from the start.
Remember: Functional correctness gets users in the door, but non-functional excellence keeps them there. In today’s competitive software landscape, superior user experience, universal accessibility, and global reach are the differentiators that drive success.
Resources
Usability:
- Nielsen Norman Group: https://www.nngroup.com/
- Usability.gov
- “Don’t Make Me Think” by Steve Krug
- “The Design of Everyday Things” by Don Norman
Compatibility:
- Can I Use: https://caniuse.com/
- MDN Web Docs Browser Compatibility
- BrowserStack, LambdaTest (testing platforms)
Localization:
- Unicode CLDR (Common Locale Data Repository)
- ICU (International Components for Unicode)
- Crowdin, Lokalise (translation platforms)
Accessibility:
- WCAG 2.1 Guidelines: https://www.w3.org/WAI/WCAG21/quickref/
- WebAIM: https://webaim.org/
- A11y Project: https://www.a11yproject.com/
- “Inclusive Design Patterns” by Heydon Pickering