Software Development

Cognitive Bias in QA: How Assumptions Lead to Bug Leakage


What is Bug Leakage?

Bug leakage is the occurrence of software defects that were not detected during testing but are subsequently identified in later phases, such as User Acceptance Testing (UAT) or post-deployment in the production environment. This reflects the deficiencies in test coverage or effectiveness, which is a key metric in assessing the quality of the testing process.
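As a metric, bug leakage is commonly expressed as the share of total defects that escaped the testing phase. The sketch below assumes that common definition; the function name and sample counts are illustrative, not from the article.

```python
def bug_leakage_pct(bugs_missed_in_testing: int, bugs_found_in_testing: int) -> float:
    """Percentage of all known defects that escaped testing.

    bugs_missed_in_testing: defects first found in UAT or production.
    bugs_found_in_testing:  defects caught by QA before release.
    """
    total = bugs_missed_in_testing + bugs_found_in_testing
    if total == 0:
        return 0.0
    return 100.0 * bugs_missed_in_testing / total

# e.g. 5 defects escaped while 45 were caught in testing: 10% leakage
print(bug_leakage_pct(5, 45))  # 10.0
```

A rising leakage percentage over several releases is the signal that test coverage or effectiveness needs attention.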

Imagine you're baking a cake. You follow the recipe, mix everything just right, and pop it in the oven. But when it comes out, oh no! It’s flat and dense. You forgot the baking soda. Even though you double-checked everything else, that one missing ingredient caused the cake to flop. In software development, that kind of oversight is called bug leakage.

Just like a beautifully decorated cake, software can appear flawless on the surface. But if something vital is missed during testing, a hidden bug may only become apparent once users begin interacting with it - much like realizing your cake hasn’t risen only after it comes out of the oven. 

My Real-Life Experience with Cypress

We developed a UI for a kiosk-based self-service terminal, including custom error pages (e.g., 404, 501). During UAT, the client tested these by intentionally entering misspelled URLs, but the browser’s default error pages appeared instead. This oversight resulted in bug leakage and financial loss. 


Key Assumptions Made While Testing That Result in Bug Leakage

1. Assumption 1: Users Follow the “Happy Path”

  • Assumption: The workflow is designed based on the belief that users will follow steps exactly as intended, without deviation.
  • Leakage Result: Testing only covers expected user actions, so when real users skip steps or use the app differently, defects go unnoticed until the software breaks in production.
  • QA Perspective Example: A tester verifies a file upload feature by selecting a file from the system and clicking “Upload.” However, they don’t test what happens if a user drags and drops a file instead. In production, the drag-and-drop action fails silently, leaving users confused and unable to upload documents.
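The upload scenario above can be made concrete. The handler and event shape below are hypothetical stand-ins (the article does not show the real code): both entry points should funnel into one code path, and the second assertion is exactly the test that was never written.

```python
# Hypothetical upload handler with two entry points: the file-picker dialog
# and drag-and-drop. Event fields here are illustrative, not a real API.
def handle_upload(event: dict) -> str:
    """Accept a file from either the picker dialog or a drag-and-drop event."""
    if event.get("source") == "picker":
        return f"uploaded:{event['filename']}"
    if event.get("source") == "drop":
        # The branch the happy-path test never exercised; without it,
        # drops fail silently in production.
        return f"uploaded:{event['filename']}"
    return "ignored"

# The happy-path test the tester originally ran:
assert handle_upload({"source": "picker", "filename": "cv.pdf"}) == "uploaded:cv.pdf"
# The assumption-breaking test that would have caught the leak:
assert handle_upload({"source": "drop", "filename": "cv.pdf"}) == "uploaded:cv.pdf"
```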

2. Assumption 2: Misinterpreting Requirements

  • Assumption: Developers or testers believe they understand what the user wants without confirming.
  • Leakage Result: Features are built or tested incorrectly, and the mismatch isn’t caught until users complain.
  • QA Perspective Example: A QA engineer is testing a feature that allows users to filter search results by date. The requirement states, “Users should be able to filter results from the past 30 days.” The engineer interprets this as a rolling 30-day window ending today, while the business actually intended the previous full calendar month. Because neither side confirms the interpretation, the mismatch is only discovered when users complain about missing results.

3. Assumption 3: Skipping Edge Cases

  • Assumption: “Users will never do that.”
  • Leakage Result: Testers ignore unusual inputs or behaviors, which later cause crashes or data corruption.
  • QA Perspective Example: “We assumed users wouldn’t multitask. Not testing edge cases like receiving a call mid-transaction led to session loss and repeated debits due to improper state handling.”

4. Assumption 4: Assuming Zero-Based Indexing

  • Assumption: Developers assume arrays always start at index 0, as in languages like C, Java, or Python.
  • Leakage Result: In languages where arrays start at 1 (like MATLAB or Lua), this leads to off-by-one errors or incorrect data access. These issues may not be caught during testing and can cause failures in production.

  • QA Perspective Example: During test case design, a QA engineer validates a report generation feature that pulls data from a table. The test data is indexed starting from 1 (as per the backend system), but the test script assumes zero-based indexing. As a result, the first row of data is skipped in validation, and a critical defect, missing customer records, is not detected until after release. This oversight leads to incorrect reports being sent to clients.
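The off-by-one failure described above can be reproduced in a few lines. The row contents are made up for illustration; the point is the iteration bounds.

```python
rows = ["Alice", "Bob", "Carol"]  # backend report rows, conceptually numbered 1..3

# Buggy validation: the script treats the data as 1-based but Python lists
# start at 0, so range(1, len(rows)) silently skips the first record.
validated_buggy = [rows[i] for i in range(1, len(rows))]
assert validated_buggy == ["Bob", "Carol"]  # "Alice" is never checked

# Correct validation: cover every row regardless of how the backend numbers them.
validated = [rows[i] for i in range(len(rows))]
assert validated == rows
```

A test that asserts on the validated row count, not just on spot-checked values, would have flagged the missing record immediately.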

5. Assumption 5: Assumed Reliable External Integrations

  • Assumption: Teams assume that external systems, APIs, or third-party services will always respond correctly and consistently during real-world use.
  • Leakage Result: Integration scenarios are not thoroughly tested, especially for failures, timeouts, or unexpected responses. As a result, bugs related to broken connections, incorrect data handling, or unhandled errors only appear after release, often affecting critical workflows.
  • QA Perspective Example: The tester assumed the third-party SMS service would always respond instantly, but during a real outage, OTPs were delayed, causing login failures that weren’t caught in testing.
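One way to test the opposite of the “the vendor always responds” assumption is to stand in for the vendor and simulate slowness. `SmsGateway` and `login_flow` below are hypothetical; a real suite would stub the actual vendor SDK the same way.

```python
# Stand-in for a third-party SMS vendor; latency is simulated, not real waiting.
class SmsGateway:
    def __init__(self, latency_s: float):
        self.latency_s = latency_s

    def send_otp(self, phone: str, timeout_s: float = 2.0) -> str:
        # Pretend the vendor takes `latency_s` seconds to respond.
        if self.latency_s > timeout_s:
            raise TimeoutError("SMS provider did not respond in time")
        return "sent"

def login_flow(gateway: SmsGateway) -> str:
    """Return a user-facing status instead of crashing when the vendor is slow."""
    try:
        gateway.send_otp("+15550100")
        return "otp-sent"
    except TimeoutError:
        # Graceful degradation the happy-path test never exercised.
        return "retry-later"

assert login_flow(SmsGateway(latency_s=0.1)) == "otp-sent"      # the assumed case
assert login_flow(SmsGateway(latency_s=10.0)) == "retry-later"  # the outage case
```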

6. Assumption 6: No Regressions Because “It Worked Before.”

  • Assumption: Teams believe that if a feature worked in the past, it will continue to work, even after new changes are made.
  • Leakage Result: Older features are skipped during testing, allowing regression bugs to slip through. These bugs often surface in production when updates unintentionally break existing functionality.
  • QA Perspective Example: After adding a new filter to a product search page, the QA team focused only on testing the new filter logic. They didn’t retest the existing sorting feature, assuming it was unaffected. Post-release, users reported that sorting no longer worked correctly; the cause was a small code change that disrupted the original logic. This regression could have been caught by retesting previously working features.

7. Assumption 7: The Code Is “Too Simple to Fail”

  • Assumption: Teams believe that certain parts of the code are too simple or stable to fail, so they skip writing test cases for them.
  • Leakage Result: These “safe” areas are left untested, allowing hidden bugs to slip through. When those parts behave unexpectedly, especially after code changes, they cause issues in production that weren’t caught during QA.
  • QA Perspective Example:
    A QA team skipped testing a utility function that formats dates, assuming it was too basic to fail. However, after a library update changed how time zones were handled, the function began returning incorrect timestamps. Since no test cases covered this logic, the bug wasn’t caught until users reported scheduling errors in live reports.
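A single pinned test on the “too simple to fail” helper would have caught the change. The helper below is a stand-in using only the standard library (the article does not show the real function or the library update); the idea is that the test pins the UTC-normalization behavior so any dependency change that breaks it fails loudly.

```python
from datetime import datetime, timezone, timedelta

def format_report_timestamp(dt: datetime) -> str:
    """Normalize any timezone-aware datetime to UTC before formatting.

    This conversion is exactly the behavior a regression test should pin:
    if an upstream change starts handing over local-time datetimes,
    the assertion below breaks instead of the live reports.
    """
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%d %H:%M")

ist = timezone(timedelta(hours=5, minutes=30))  # e.g. India Standard Time
local = datetime(2024, 3, 1, 18, 0, tzinfo=ist)
assert format_report_timestamp(local) == "2024-03-01 12:30"
```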

8. Assumption 8: Not replicating real user behavior 

  • Assumption: Testers assume users will interact with the system in a standard, step-by-step way (e.g., entering one User ID at a time), and don’t test bulk or unconventional input methods.
  • Leakage Result: When users perform actions like pasting multiple User IDs at once, the system fails to respond correctly, and only one username is displayed instead of all. This issue goes undetected during testing because the real-world usage pattern wasn’t replicated.
  • QA Perspective Example: During UI testing, the QA team validated the auto-fill functionality by manually entering single User IDs and pressing Enter. However, they didn’t test the scenario where users paste multiple IDs into the field. In production, users trying to process bulk entries encountered incorrect results, leading to confusion and support tickets.  
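Replicating the bulk-paste behavior is cheap once it is named. The parser below is a hypothetical sketch of the fix (the article does not show the real input handler): split pasted text on common delimiters instead of assuming one ID per entry.

```python
import re

def parse_user_ids(raw: str) -> list[str]:
    """Accept one ID, or many pasted at once.

    Splits on commas, spaces, tabs, and newlines, so a bulk paste from a
    spreadsheet yields every ID instead of only the first one.
    """
    return [token for token in re.split(r"[,\s]+", raw.strip()) if token]

# Single entry (what was tested) vs. bulk paste (what users actually did):
assert parse_user_ids("U1001") == ["U1001"]
assert parse_user_ids("U1001, U1002\nU1003") == ["U1001", "U1002", "U1003"]
```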

Tips to Prevent Bug Leakage Due to Assumptions

  1. Document All Assumptions
    • If you have to make an assumption, document it and get it validated. This ensures alignment with the development and product teams.
  2. Encourage a “What If” Mindset
    • During test planning, ask: “What if the user does this?” or “What if this fails?”
  3. Pair Testing with Developers
    • Collaborate during development to uncover hidden assumptions in logic or design.
  4. Clarify Requirements Early
    • Don’t guess; ask questions. Confirm unclear or ambiguous requirements with stakeholders or business analysts.
  5. Test for the Opposite of the Assumption
    • If it is assumed that the API will consistently return valid data, it is equally important to validate scenarios involving invalid, missing, or delayed responses to ensure system resilience.
  6. Peer Review Test Cases
    • Engage a fellow QA engineer or developer to review your test cases; a second perspective can help identify overlooked assumptions or gaps in coverage.
  7. Update Test Cases When Requirements Change
    • Don’t assume old tests are still valid. Regularly review and revise tests when features evolve or specifications are updated.
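Tip 5 is the easiest to turn into code: for every positive test that trusts the payload, add tests for the opposite. The field name and handler below are illustrative, not a real API.

```python
# Sketch of "test the opposite of the assumption": a defensive reader for an
# API field, paired with tests for invalid, missing, and absent responses.
def extract_price(response):
    """Read a price field without trusting the payload shape or types."""
    if not response or "price" not in response:
        return None
    try:
        return float(response["price"])
    except (TypeError, ValueError):
        return None

assert extract_price({"price": "19.99"}) == 19.99  # the assumed, valid case
assert extract_price({"price": "n/a"}) is None     # invalid data
assert extract_price({}) is None                   # missing field
assert extract_price(None) is None                 # no response at all
```

Each negative assertion corresponds to a production failure mode that the original assumption would have left untested.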

Final Thoughts

Given the nature of the QA role, project architecture, and professional experience, forming assumptions is common. While some may prove useful, others can result in critical errors and significant organizational loss. It is therefore essential to consistently validate assumptions and proactively seek clarification from relevant stakeholders when requirements are unclear. Even minor misunderstandings can lead to defect leakage and considerable business impact.

