
Why Didn’t You Find That Bug?

By Stephen Blower 

As a tester, I realise that I’m not going to find every bug. Even so, when a problem does slip by and finds a home in production, I often ask myself: how did I miss it? What could I have done better? What can I do in the future to prevent this from happening again?

These are all good questions, but there needs to be some level of pragmatism: bugs will often sneak past us undetected. There are ways to reduce the number of these undiscovered bugs, but none that I know of will guarantee a completely bug-free product.

When I first started testing, I was very sensitive to others finding bugs in areas of a product I had already tested, and I often heard the dreaded phrase echo around the office: “Who tested this? Why didn’t you find this bug?” Back then I wasn’t experienced enough to argue, in a reasoned manner, why not all bugs could or would be found in any product over any given time period.

In some ways though, these experiences have helped me to become more thorough in my testing approach.

So, before I list some things you can try in order to limit the number of bugs that escape detection, I have a few stories to share about bugs that, like ninjas in the night, have snuck past me undetected. These stories show that even the most obvious bugs (obvious once known about) can and will happen.

How did we miss that?

During a project where a website was being fully redesigned, with additional functionality in key areas such as sign-up and product selection, I spent around two months carrying out various test activities. During this time, familiarity with the various changes and improvements set in. We were aware of the dangers, though, and to mitigate them we brought in some people from customer support to help with the testing. Unfortunately, they were mainly used to verify bug fixes or directed to test specific areas of the site.

Within about the first minute of the website launching in its production environment, a problem was identified: a bug so obvious it created considerable confusion as to how it could possibly have been missed. Finding it required no particular skill beyond being able to spell. On the front page of the website (the shop window, as it’s often called), the first page customers see upon landing on your site, in very large characters in the middle of the screen, was the word “BRAODBAND” instead of “BROADBAND”. A blatantly obvious typo, yet somehow everyone had missed it until the site went live, at which point the bug immediately revealed itself to all.

It’s alive!

I have a heuristic for this now: “It’s Alive”, spoken aloud in the manner of the 1931 film Frankenstein. I use it when a product is in a mature state, to gently nudge my mind into seeing things I might otherwise miss. In this situation the problem was so obvious that it created an odd atmosphere of confusion, and, possibly because the fix was simple, no accusations were made. It does show quite clearly, however, that even the most blatantly obvious bugs can easily be missed by many people.

Shoulda, woulda, coulda.

These bugs are missed not because testing failed to reach them, but because a tester didn’t go through a process, towards the end of the development cycle, of thinking about what would or could happen if X happens. When found late in the day, or once the product is in production, they often provoke a remark like “Yeah, I thought that might happen” or “I knew that would happen.”

These scenarios are sometimes called edge cases (testing at the extremes) or corner cases (very specific, infrequently occurring circumstances). The latter often surprise me the most: something labelled a corner case early in the project can turn out to be something that happens quite often. This usually comes down to not understanding the system and its interactions well enough to make that judgement in the first place.

One bug in this category bit me while I was working on a new application that was going to process hundreds of thousands of customer records. During testing I asked, “What about load testing?” The reply was, “Oh, it’ll be fine; it can easily handle up to a million records.” Yet within the first few minutes of the change being rolled out, significant load was observed, serious problems followed, and the offending code was reverted. In those days I didn’t fully understand the technologies involved, and I assumed the developers knew better than I did and had probably already considered the issue. I learned that day that when I have a gut feeling there may be a problem, I should not accept the quick and easy answer as truth. I should instead seek supporting information from elsewhere to confirm or refute my hypothesis.
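
In hindsight, even a crude experiment would have been better than taking the answer on trust. Here’s a minimal sketch, in Python, of the kind of quick check I mean; process_record is a hypothetical stand-in, not code from the actual system:

    import time

    def process_record(record):
        """Hypothetical stand-in for the real record-processing code."""
        return {**record, "processed": True}

    # Build a synthetic data set of the size the claim mentioned, then
    # time a straight run through the processing path.
    records = ({"id": i, "name": f"customer-{i}"} for i in range(1_000_000))

    start = time.perf_counter()
    count = sum(1 for r in records if process_record(r))
    elapsed = time.perf_counter() - start

    print(f"Processed {count:,} records in {elapsed:.1f}s "
          f"({count / elapsed:,.0f} records/s)")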

My Spidey-Sense is tingling!

I call this my “Spidey-Sense” heuristic: when I have a feeling about something I don’t fully understand, or the information available seems suspicious, I’m not happy to ignore it and move on until I have a reasonable understanding. In these situations I make sure everyone involved with the project knows that I have concerns. This usually helps, as others then start to question why I have them. Whether my assumptions prove wrong due to a lack of knowledge or turn out to be correct doesn’t matter to me. What matters is that I understand what I’m testing; without that, I don’t feel I can do my job effectively.

Easy to replicate. Not easy to find.

Often when testing, the number of scenarios required to cover every aspect of a product or system is prohibitive. Even systems that appear relatively simple to test can have millions upon millions of permutations. There are ways to tackle this; one of them is pairwise (or combinatorial) testing, which I won’t go into in detail here, but Michael Bolton has provided a good treatment of the subject.
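
To give a flavour of the technique, here is a minimal greedy pairwise sketch in Python. The parameter names and values are invented purely for illustration, and dedicated tools (PICT, AllPairs and the like) do this far more capably:

    from itertools import combinations, product

    # Made-up parameters, three values each, purely for illustration.
    parameters = {
        "browser": ["Chrome", "Firefox", "Safari"],
        "account": ["new", "existing", "suspended"],
        "payment": ["card", "paypal", "invoice"],
        "locale":  ["en", "fr", "de"],
    }
    names = sorted(parameters)  # fixed order so pair keys are consistent

    # Every pair of parameter values that must appear together at least once.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in parameters[a]
        for vb in parameters[b]
    }

    def pairs_of(test):
        """All parameter-value pairs exercised by one full test case."""
        return set(combinations(sorted(test.items()), 2))

    # Greedily pick whichever candidate covers the most uncovered pairs.
    suite = []
    while uncovered:
        candidates = (dict(zip(names, vs))
                      for vs in product(*(parameters[n] for n in names)))
        best = max(candidates, key=lambda t: len(pairs_of(t) & uncovered))
        suite.append(best)
        uncovered -= pairs_of(best)

    print(f"Exhaustive combinations: {3 ** 4}")     # 81 test cases
    print(f"Greedy pairwise suite:   {len(suite)}")  # usually 9-13

Greedy selection isn’t optimal, but it shows the scale of the reduction: a dozen or so tests in which every pair of values appears together, versus 81 to cover every full combination.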

The issue I experienced in this category drew the stereotypical negative response: a problem was found that was 100% reproducible. With only five easy steps to follow it could be reproduced every time, and it resulted in a “how the hell wasn’t this found?” accusation.

The problem involved a replication system with multiple folders and files, where each folder can have one of three permissions: Read, Read/Write or Deny. Within each folder, each file can independently have the same three permissions.
By this time I was more experienced and willing to challenge those who made comments like “Why didn’t you find that bug?” I therefore devised a simple scenario to show that, even with a limited number of variables, the number of tests required to cover all permutations was huge. The scenario was this:

  • Three Folders in a hierarchy
  • Each folder contains four files
  • Both folders and files can have different permissions

The number of tests to cover every permutation in this relatively simple scenario is 14,348,907, which is 3¹⁵ (15 entities, each with 3 states: the 3 folders plus their 12 files). With such a large number it’s obvious that you cannot test every permutation in a reasonable amount of time. You could, of course, create an automated check to get through them more quickly, but this scenario isn’t even a real-world situation. It is a simple exercise I created to show how difficult it was to test this apparently simple-to-replicate problem, knowing that in the real world customers had hundreds of folders in large hierarchical structures containing hundreds or thousands of files. Communicating this simple scenario soon helped me to demonstrate how testing won’t find every bug, regardless of how obvious it is with hindsight.
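
The arithmetic is easy to check for yourself. A small Python sketch of the scenario above:

    from itertools import islice, product

    PERMISSIONS = ("Read", "Read/Write", "Deny")

    # The scenario above: 3 folders, each holding 4 files, and every
    # folder and file independently takes one of the three permissions.
    entities = 3 + 3 * 4  # 3 folders + 12 files = 15 entities

    total = len(PERMISSIONS) ** entities
    print(f"{total:,} permutations")  # 14,348,907 = 3 ** 15

    # Trivial to enumerate, hopeless to execute: at one manual test per
    # minute the full run would take roughly 27 years.
    for assignment in islice(product(PERMISSIONS, repeat=entities), 3):
        print(assignment)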

What else?

As these three real-world stories show, even when you think you’re done, you probably are not. There are many reasons for stopping testing, but “everything has been tested” is not one of them. I find it invaluable to keep asking: have I missed anything? What haven’t I thought of? Is there something obvious I’ve overlooked or assumed not worth considering? I will normally try to document these questions. Some will get answers, some won’t, but that doesn’t stop me from applying my “What Else?” heuristic.

Prevent embarrassing mistakes!

  • Name your heuristics so that you can easily recall them in the future.
  • When you’ve got to ask how to do something so you can test it, beware: there may be a problem, both with your understanding and with the answer given.
  • Create many test ideas. There really shouldn’t be a time when you’ve run out of them.
  • Think about the “what-ifs”.
  • Be aware of your emotions. They are telling you something.
  • Have fresh eyes give the change a look over, but don’t instruct them what to test.
  • Be aware of “no one would do that” statements.
  • Pair-up on testing with other testers, developers, product managers, marketing – whoever’s available and willing!

When testing we need a large dose of pragmatism. Bugs are going to be missed no matter what tools and practices we put in place to catch them. The use of heuristics can help to reduce the overall number.

Giving heuristics memorable names or mnemonics helps you both remember them and use them to identify bugs early, including the obvious ones. Name your heuristics according to your own personal tastes: if any of the strategies above seem useful to you, feel free to rename them and use them to help you find those supposedly obvious bugs.

About the author

Stephen Blower has been a tester for 18 years working in various organisations. Currently a Test Manager working at Ffrees Family Finance, he’s in the enviable position of being able to create a test team from scratch. A significant part of his role is to inspire testers, not just create processes. Stephen strongly encourages interactivity and feedback and for testers to take control, empowering them to become valued members of the development group.

Stephen runs a regular Test Gathering in Sheffield where attendees can discuss a variety of testing topics with speakers and guests in an open forum format. His blog can be found at www.stephenblower.co.uk and he tweets as @badbud65.
