What Is Ad Hoc Testing?

Ad hoc testing is unplanned, informal testing driven by the tester’s intuition, experience, and knowledge of the application. There are no pre-written test cases, no formal documentation, and no structured approach.

The term “ad hoc” literally means “for this purpose” — tests are invented on the spot for the immediate situation.

When Ad Hoc Testing Adds Value

Quick sanity checks. A developer finishes a fix and asks QA to “take a quick look.” You click around the affected area for a few minutes based on instinct.

Gap filling. After running all scripted test cases, you spend 15 minutes poking around areas not covered by formal tests.

New hire exploration. A new team member tests the application without training to see if the UI is intuitive. Their fresh perspective often catches usability issues that experienced users have become blind to.

Post-deployment smoke. After deploying to staging, you quickly verify critical flows before running the full regression suite.

Ad Hoc Testing Limitations

  • Not repeatable — No documentation means you cannot re-run the same tests
  • Not measurable — You cannot report coverage or progress
  • Depends on skill — Experienced testers find bugs; inexperienced testers waste time
  • Not accountable — You cannot prove what was tested
  • Bugs are harder to report — Without documented steps, reproducing found issues can be difficult

What Is Monkey Testing?

Monkey testing involves providing random, unexpected, or invalid inputs to an application to check if it crashes or behaves unexpectedly. The name comes from the idea of a monkey randomly pressing keys on a keyboard.

The goal is not to verify specific functionality — it is to find crashes, hangs, memory leaks, and unhandled exceptions that structured testing might miss.

Dumb Monkey Testing

A dumb monkey generates completely random events with no understanding of the application:

  • Random screen taps/clicks
  • Random keyboard input
  • Random swipes and gestures
  • Random navigation actions

The dumb monkey does not know what a login form is, what valid data looks like, or how to navigate the app. It just generates noise.

Strength: Finds crashes caused by completely unexpected input combinations that no human tester would think to try.

Weakness: Very inefficient — most random inputs are meaningless and do not exercise interesting code paths. A dumb monkey might spend hours tapping the same blank area.
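To make the idea concrete, here is a minimal sketch of a dumb monkey event generator. The event shapes and the `dispatch` callback are illustrative assumptions, not any particular tool's API — in a real browser run, `dispatch` would forward each event to the page via `dispatchEvent`:

```javascript
// Minimal "dumb monkey" sketch: generates purely random events with
// no knowledge of the application it is poking at.

function randomInt(max) {
  return Math.floor(Math.random() * max);
}

// Produce one random event: a click at arbitrary coordinates, or a
// random printable keystroke. The monkey has no idea what it hits.
function randomEvent(width, height) {
  if (Math.random() < 0.5) {
    return { type: 'click', x: randomInt(width), y: randomInt(height) };
  }
  return { type: 'key', key: String.fromCharCode(32 + randomInt(95)) };
}

// Driver: fire `steps` random events at a dispatch callback, e.g. one
// that translates them into real DOM events.
function unleashDumbMonkey(steps, width, height, dispatch) {
  for (let i = 0; i < steps; i++) {
    dispatch(randomEvent(width, height));
  }
}
```

Note that nothing here targets buttons, forms, or links — most of these events will land on empty space, which is exactly the inefficiency described above.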

Smart Monkey Testing

A smart monkey has knowledge of the application:

  • Knows valid and invalid input formats
  • Understands the navigation structure
  • Can simulate realistic user behavior with occasional mistakes
  • Targets specific areas known to be fragile

Strength: Much more efficient than a dumb monkey. Reaches deeper into the application and exercises more meaningful code paths.

Weakness: Requires setup effort — someone must define the application model.
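The "application model" a smart monkey needs can be as simple as a hand-written state machine. The sketch below is illustrative — the screen names and actions are hypothetical, not a real tool's API — but it shows the key difference: the monkey walks known transitions and logs its path, which is what makes its findings reproducible:

```javascript
// "Smart monkey" sketch: a hand-written model of the app's screens,
// the actions available on each, and where each action leads.
const appModel = {
  login:    { actions: ['submitValid', 'submitInvalid'], next: { submitValid: 'home', submitInvalid: 'login' } },
  home:     { actions: ['openSettings', 'logout'],       next: { openSettings: 'settings', logout: 'login' } },
  settings: { actions: ['save', 'back'],                 next: { save: 'settings', back: 'home' } },
};

function pick(items) {
  return items[Math.floor(Math.random() * items.length)];
}

// Walk the model for `steps` random actions, recording the path taken
// so that any crash can be replayed from the log.
function smartMonkey(steps, startScreen) {
  let screen = startScreen;
  const path = [];
  for (let i = 0; i < steps; i++) {
    const action = pick(appModel[screen].actions);
    path.push(`${screen}:${action}`);
    screen = appModel[screen].next[action];
  }
  return path;
}
```

Defining `appModel` is the "setup effort" mentioned above: someone must enumerate the screens and transitions before the monkey can do anything smart.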

Comparison

  Aspect                 Dumb Monkey                   Smart Monkey
  Setup effort           None                          Moderate to high
  Application knowledge  None                          Partial/full
  Efficiency             Very low                      Moderate to high
  Test depth             Shallow (stays on surfaces)   Deep (reaches inner features)
  Bug types found        Crashes from random input     Logic errors, state issues, edge cases
  Reproducibility        Very difficult                Better (logged paths)

Monkey Testing Tools

Android Monkey (Built-in)

Android includes a built-in monkey testing tool that generates random UI events:

# Send 10,000 random events to an app
adb shell monkey -p com.your.app -v 10000

# With throttle (300ms between events) and specific event types
adb shell monkey -p com.your.app \
  --throttle 300 \
  --pct-touch 40 \
  --pct-motion 25 \
  --pct-nav 15 \
  --pct-majornav 10 \
  --pct-syskeys 5 \
  --pct-anyevent 5 \
  -v -v 50000

Gremlins.js (Web Applications)

Gremlins.js is a JavaScript library that unleashes “gremlins” on a web page:

// Basic usage — release the gremlins!
gremlins.createHorde().unleash();

// Custom configuration (object-style options, per the gremlins.js v2 API)
gremlins.createHorde({
  species: [
    gremlins.species.clicker(),
    gremlins.species.formFiller(),
    gremlins.species.scroller(),
    gremlins.species.typer(),
  ],
  strategies: [
    gremlins.strategies.distribution({
      distribution: [0.3, 0.3, 0.2, 0.2],
    }),
  ],
}).unleash();

Other Tools

  Tool                      Platform            Type
  Android Monkey            Android             Dumb monkey
  Gremlins.js               Web                 Dumb/configurable monkey
  Netflix Chaos Monkey      Infrastructure      Random failure injection
  AFL (American Fuzzy Lop)  Any (binary input)  Smart fuzzer
  Burp Suite Intruder       Web APIs            Smart monkey for security

Monkey Testing vs. Fuzz Testing

Monkey testing and fuzz testing are related but distinct:

Monkey testing generates random user interactions (clicks, taps, navigation). It simulates a chaotic user.

Fuzz testing (fuzzing) generates random or malformed data inputs (file formats, network packets, API payloads). It targets data processing, not user interaction.

Both aim to find crashes and vulnerabilities through unexpected inputs, but they operate at different levels.
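A toy fuzzer makes the distinction tangible. Instead of random clicks, it mutates a valid data input and feeds it to a parser — here `JSON.parse` stands in for the system under test (the seed document and mutation rule are arbitrary choices for the sketch):

```javascript
// Toy mutation fuzzer sketch: flip one character of a valid input to a
// random byte and see how the parser copes.
function mutate(input) {
  const chars = input.split('');
  const pos = Math.floor(Math.random() * chars.length);
  chars[pos] = String.fromCharCode(Math.floor(Math.random() * 256));
  return chars.join('');
}

// Feed `iterations` mutants to JSON.parse and count rejections.
// A clean SyntaxError is the expected outcome; a hang, a crash, or a
// silently wrong result would be the kind of bug fuzzing hunts for.
function fuzzJsonParser(seedInput, iterations) {
  let failures = 0;
  for (let i = 0; i < iterations; i++) {
    try {
      JSON.parse(mutate(seedInput));
    } catch (e) {
      failures++;
    }
  }
  return failures;
}
```

Real fuzzers such as AFL go much further — tracking code coverage to guide mutations — but the level of operation is the same: data in, not clicks in.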

Exercise: Perform Monkey Testing and Document Findings

Part 1: Manual Monkey Testing

Choose any web application you have access to (a personal project, a test environment, or a public test site). Perform 15 minutes of “human monkey” testing:

  • Click random elements rapidly
  • Enter random text (including special characters, emoji, very long strings) into every input field
  • Use the back/forward buttons at unexpected moments
  • Try to submit forms with missing required fields
  • Open multiple tabs of the same page and interact with them simultaneously
  • Resize the browser window rapidly while interacting with the page

Document every crash, error, or unexpected behavior you observe.
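If you want a head start on the "random text" step, a small helper can produce a batch of classically troublesome strings. The specific payloads below are common robustness probes, not an exhaustive or authoritative list:

```javascript
// Helper sketch for Part 1: a batch of "chaotic" strings to paste into
// input fields — special characters, emoji, and very long strings.
function chaoticInputs() {
  return [
    '<script>alert(1)</script>',       // markup injection attempt
    "Robert'); DROP TABLE users;--",   // SQL-flavored payload
    '🐒💥🔥'.repeat(10),               // emoji / multi-byte characters
    '\u0000\u202Eevil',                // control and bidi-override characters
    'A'.repeat(10000),                 // very long string
    '   leading and trailing   ',      // whitespace edge cases
    '-1e308',                          // extreme number in a text field
  ];
}
```

Paste each one into every field you can find, including search boxes and URL parameters, and watch the console while you do it.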

Part 2: Tool-Based Monkey Testing

If you have access to a web application in a test environment, inject Gremlins.js and run it for 5 minutes:

// Paste in browser console
var s = document.createElement('script');
s.src = 'https://unpkg.com/gremlins.js';
// Unleash only once the script has actually loaded
s.onload = function () { gremlins.createHorde().unleash(); };
document.body.appendChild(s);

Observe and document: Did it find any crashes? Console errors? UI glitches? How does the experience compare to manual monkey testing?

Part 3: Analysis

After completing Parts 1 and 2, answer:

  1. What types of bugs did monkey testing find that structured testing would likely miss?
  2. What are the challenges of reproducing the bugs you found?
  3. In what project context would you recommend including monkey testing in the test strategy?
Hint

For Part 1, focus on rapid, unexpected interactions. Real users sometimes double-click when they should single-click, press Enter before the page loads, or paste text into fields that expect numbers. These “chaotic” interactions reveal robustness issues.

For Part 3, consider the cost-benefit: monkey testing is cheap to run but expensive to investigate. When does that tradeoff make sense?

Solution

Part 1: Typical Findings from Manual Monkey Testing

Common issues found:

  • Console errors: JavaScript errors from clicking elements while the page is still loading or transitioning
  • UI overlap: Rapidly resizing the window causes elements to overlap or disappear
  • Form validation bypass: Quickly submitting a form before client-side validation activates
  • Stale state: Opening the same page in two tabs, making changes in one, and trying to save in the other causes a conflict (or silent data loss)
  • Broken layouts: Very long input strings overflow their containers
  • Unhandled states: Pressing the back button after a form submission leads to a “resubmit form” dialog or a broken state

Part 2: Gremlins.js Results

Typical Gremlins.js findings:

  • Multiple console errors as gremlins click on non-interactive elements
  • Modal dialogs opening and closing rapidly, occasionally getting stuck
  • Form fields filled with garbage data triggering various validation errors
  • Scroll position jumping erratically
  • Occasionally, gremlins trigger a navigation away from the page

Part 3: Analysis

  1. Bug types unique to monkey testing:

    • Race conditions from rapid interactions (double-submit, concurrent edits)
    • Crashes from unexpected input combinations (emoji in numeric fields, extremely long strings)
    • State management bugs from unusual navigation sequences (back button, multi-tab)
    • UI robustness issues that only manifest under chaotic usage
  2. Reproduction challenges:

    • The exact sequence of random actions is unknown, making it hard to write “steps to reproduce”
    • Timing matters — some bugs only occur when actions happen within milliseconds of each other
    • Solution: Use browser developer tools to capture console errors, network requests, and DOM state. For Gremlins.js, configure logging to record the sequence of actions.
  3. When to recommend monkey testing:

    • Mobile applications: Android Monkey should be part of every Android app’s CI pipeline — it costs almost nothing to run and catches crashes
    • Consumer-facing web apps: Where diverse and unpredictable user behavior is expected
    • Before production releases: As a final robustness check
    • Not recommended as a primary testing method — use it as a supplement to structured testing
    • Most valuable when combined with crash logging and error monitoring (e.g., Sentry, Crashlytics) that captures the context around failures

Key Takeaways

  • Ad hoc testing is unplanned, intuition-based testing — useful for quick checks but not accountable or repeatable
  • Monkey testing generates random inputs to find crashes and unhandled exceptions
  • Dumb monkeys generate purely random events; smart monkeys understand the application and simulate realistic but chaotic behavior
  • Tools like Android Monkey and Gremlins.js automate random input generation
  • The main challenge is reproducibility — random inputs make it hard to write steps to reproduce
  • Monkey testing is most valuable as a supplement to structured testing, not a replacement
  • Fuzz testing is the data-focused cousin of monkey testing, targeting file formats and API payloads