How We Ran A Bug Hunt

By Luke Barfield

This article builds on the 99-second talk that I gave at Test Bash 3 about a Bug Hunt that we held at work recently.

The Build Up

When I started in my current role in mid-2013, one of my early tasks was to write down a QA process. Around that time I read somewhere (unfortunately I can’t remember where) about the concept of a bug bash (or hunt); it captured my imagination, and so it was written into my Concise QA Process. My boss, the Director of R&D, loved the idea, and we agreed that when we got close to our first major product launch we would run one.

Fast forward to March this year, about a month before our first customer cut over to our new product, and we started to firm up plans for our first bug hunt. A day was scheduled into people’s calendars, time was allocated in the development pods’ current sprints, and the excitement / apprehension mounted (depending on each person’s involvement in the product’s development). Mindful that we would be handing testing over to our non-tester colleagues for the day, the test team prepared some information to smooth this temporary transfer of testing responsibility and prevent anarchy in the ranks. We did the following:

  • Published some how-to guides on our team wiki site, covering how to write bug reports and possible test techniques (a high-level guide to the heuristic test strategy).
  • Created a special project in our bug tracking software and simplified the form used to create bugs.
  • Catalogued the important areas of the product that we wanted to test based on a discussion with the Product Manager. We then created labels (or tags if you like) to denote what particular area a bug was found in.
  • Queried the bug repository for a list of existing bugs and exported the results into Excel.
  • Set up the test environment, which included installing the product and associated applications, migrating data, creating users in the test Active Directory and configuring the logging. It would be remiss of me not to credit our chief architect and one of the senior developers for helping to set up the environment.
  • Prepared a briefing for the bug hunt itself and distributed a high-level agenda for the day.
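To illustrate the existing-bugs export step above, here is a minimal sketch. Our actual bug tracker and its query interface aren’t described in this article, so the field names and data shape below are assumptions; the point is simply filtering known bugs into a flat file that hunters can check for duplicates.

```python
import csv

def export_known_bugs(bugs, path):
    """Write a list of existing bug records to a CSV file so that
    bug hunters can check for duplicates before raising a report.
    The field names here are illustrative, not a real tracker schema."""
    fields = ["id", "summary", "area", "status"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields, extrasaction="ignore")
        writer.writeheader()
        for bug in bugs:
            writer.writerow(bug)

# Example: two existing (hypothetical) bugs, exported for the hunt
known = [
    {"id": "BUG-101", "summary": "Login page times out", "area": "Auth", "status": "Open"},
    {"id": "BUG-102", "summary": "Report totals rounded wrongly", "area": "Reporting", "status": "Open"},
]
export_known_bugs(known, "known_bugs.csv")
```

In practice we exported to Excel rather than CSV, but the principle is the same: a simple, searchable list of what is already known.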

So What is a Bug Hunt?

In my context a bug hunt is, as the name suggests, a hunt (or search) for bugs. It is important to note that this was not the sole approach to testing this product, nor is it a silver bullet / catch-all / safety net. This activity supported one of our testing missions: to provide information to the product managers to help them make a release decision. They were interested in finding as many bugs as possible in a short space of time so that the bugs could be prioritised and addressed if required.

Who is Involved in a Bug Hunt?

A dedicated test team is a new concept at my company. I was the first tester, and at the time of the bug hunt there were two of us in the test team, so we took great pains to engage colleagues from the various areas of the organisation. Everyone involved in the bug hunt had an interest or a stake in the product, and in the end we managed to secure the following:

  • 10 developers
  • 3 Product Managers
  • The Director of R&D
  • Chief Architect
  • 2 Support Consultants
  • 1 Application Consultant

The test team were facilitators of the bug hunt and did not take part in the actual testing for this one day.

It is worth noting that this approach has both benefits and limitations. The key benefits we realised were:

  1. Diversity – Quality, like beauty, is in the eye of the beholder. We all have biases and preferences, and by collating the opinions of many we were able to concentrate on the problems that would affect our customers.
  2. Coverage – We achieved a really big poke of the product (18 hunters × 4 hours = 72 hours of testing), which would have taken the two of us a week of dedicated testing at 100% utilisation. Clearly this is a dangerous comparison (see Two “Scoops” of “Bugs” by James Bach), so let’s say instead that we were able to cast a wider net over a greater area. We just aren’t too sure if and where there might be holes in our net. The hunt did give us a set of rumours (reported bugs) that we could investigate further.
  3. Peace of mind for Product Managers – The product managers were involved in the bug hunt and could observe first-hand what their product was capable of.
  4. Peace of mind for developers – Developers know the areas of the product that they fear the most (in terms of risk to quality) and this gave them the opportunity to explore these risks and potentially get rewarded (prizes) rather than chastised.
  5. User testing – Our target users (or paying customers) were already represented by the product managers, but our internal users (support staff and consultants) were more heavily represented during the Bug Hunt and their opinion was important for assessing quality.

As with all good things there were some not so good things:

  1. Each bug had to be quickly reviewed for conformance to a standard report format so that it could be understood after the bug hunt; this was both time-consuming and perceived as pedantic.
  2. Each bug then had to be reviewed to see whether it was actually a bug (i.e. does it threaten the value of the product to a stakeholder?). Again, very time-consuming.
  3. The absence of bugs reported for a product area doesn’t mean that there are no bugs.
  4. We were using non-testers for testing tasks and therefore the quality of the testing could be (and had to be) questioned.

The Day of the Bug hunt

The majority of the team are based in the same office; some of the bug hunters who work in different offices made the trip to ours, and the rest were in contact over IM and VoIP (we use MS Lync). To give our Scottish colleagues time to arrive we set a nice leisurely start time of 10:00 am. We declared it a dress-down day, and the Director of R&D offered to pay for a KFC for everyone at lunch.

I kicked off the Bug Hunt with a briefing: a nifty mind-map presentation that I had put together in iMindMap, covering topics such as:

  • The format and expectations of the day.
  • How to login to the test environment.
  • The product areas that the product managers wanted covered.
  • How to raise bugs.
  • Suggested test techniques.
  • The prizes that we would be handing out.

We split the testing into two sessions of two hours each, to try to maintain optimum concentration and to allow some normal duties to be performed. The bug hunters were split into teams of two: one would “drive” while the other led the testing, took notes and wrote bug reports. The teams were expected to switch roles at least once during the day.

Throughout the sessions we projected bug counts (to stoke the competitive spirit) and a heat map of where the bugs were being found. I was on hand to offer support if requested, and my colleague reviewed bugs as they were logged. The review was necessarily limited: nine teams could log bugs at any point in a session, with only one person reviewing them. The review consisted of checking that the bug report was consistent with the documented how-to guide for raising bugs and was not obviously a duplicate of one already raised. A lot of bugs were rejected in the first part of the first session, but as the bug hunters experienced rejection they did start to learn from their mistakes.
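The live bug-count and heat-map display can be sketched roughly like this (the area labels below are invented for illustration; our real display was driven by the labels in our bug tracker):

```python
from collections import Counter

def area_heat_map(bug_areas):
    """Tally bugs per product area and render a crude text 'heat map':
    one bar of #'s per area, hottest area first."""
    counts = Counter(bug_areas)
    lines = []
    for area, n in counts.most_common():
        lines.append(f"{area:<12} {'#' * n} ({n})")
    return counts, "\n".join(lines)

# Example: area labels attached to bugs logged so far in a session
logged = ["Reporting", "Auth", "Reporting", "Search", "Reporting", "Auth"]
counts, display = area_heat_map(logged)
print(display)
```

Projecting something this simple was enough to stoke the competition; the heat map also hinted at where coverage was thin, since a quiet area might mean few bugs or just few testers.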

At the end of the day we ran a debriefing session. The aim of the debriefing was to thank everyone for their participation, summarise the day’s testing and to run through the prizes. The following prizes were awarded:

  • Most Bugs found (with a status of Accepted) – Unsurprisingly this award went to the team with the senior developer responsible for the majority of the development work.
  • Best Bug found – this was a purely subjective opinion of the test team awarded in an Oscar-style nomination for Best Bug.
  • We created a special prize for a team that paired a support technician with one of the main product developers, because the majority of the team’s bugs were logged while the developer was freaking out over something not working properly.
  • Booby Prize – Again a subjective opinion from the test team, but there was only one clear winner – The developer who raised a bug that was a duplicate of a bug that had been outstanding with him for the past month!

We then rounded the day off with poker, beer and pizza.

The Aftermath

The day after the bug hunt was spent reviewing the bugs in much more detail; this involved reproducing them, clarifying steps, de-duplicating and categorising them so that they could be prioritised with the product managers and added to the backlog. At the end of the bug hunt we had approximately 150 bugs logged; this was reduced to around 80 by the end of the following day. Many of the bugs raised during the bug hunt were classified as low priority, but four or five were escalated to critical for go-live. One of our development teams then spent the remaining time in their sprint working through the highest-priority bugs.
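The de-duplication pass was manual, but the flavour of the check can be sketched with a naive title-similarity filter. The threshold and bug summaries below are invented for illustration, and a crude text comparison like this is only a first pass to shortlist candidate pairs for human review, not a substitute for it.

```python
from difflib import SequenceMatcher

def likely_duplicates(summaries, threshold=0.8):
    """Flag pairs of bug summaries whose text similarity exceeds the
    threshold -- a crude shortlist to hand to a human reviewer."""
    pairs = []
    for i in range(len(summaries)):
        for j in range(i + 1, len(summaries)):
            ratio = SequenceMatcher(None, summaries[i].lower(),
                                    summaries[j].lower()).ratio()
            if ratio >= threshold:
                pairs.append((i, j, round(ratio, 2)))
    return pairs

# Example: the first two summaries describe the same problem
bugs = [
    "Save button does nothing on the user edit screen",
    "Save button does nothing on user edit screen",
    "Report totals are rounded incorrectly",
]
dupes = likely_duplicates(bugs)
```

Even a filter like this would have missed the duplicates that were worded completely differently, which is why the real review still took a full day.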

Conclusions and Lessons Learned

The bug hunt was a huge success for our organisation because:

  • It allowed us to look at our product with many different perspectives.
  • Each bug hunter felt involved in the eventual delivery and also took a little bit of ownership for the quality of the product.
  • The bug hunters experienced some of the challenges that testers face on a day-to-day basis.
  • The developers that had worked on the product had the chance to investigate areas of the product that they were concerned with or anxious about.
  • The developers that hadn’t worked on the product yet had the chance to see what the product was about and test for things that they have experienced issues with in the past.
  • The product managers got to experience the product critically for the first time (i.e. not in demo mode).
  • We were able to test random load against our product by allowing each team to perform their own tests independently of each other.
  • We identified gaps between what the Product Manager had asked for and what they intended.
  • It gave our stakeholders a warm fuzzy feeling that the development effort had been worth the investment, with a clear road map for the remaining work prior to going live.

Whilst these are all great benefits, there were some valuable lessons learned:

  • This approach tends to favour quantity over quality in terms of bugs.
  • Much of the time gained by turning non-testers into testers is then used up reviewing and prioritising bugs.
  • Reviewing bugs against a standard bug report template is a thankless task and doesn’t guarantee that the bug is:
    • Valid,
    • Reproducible,
    • Describing a root cause rather than just a symptom.

All in all the bug hunt went well, and we will definitely run more at our company whenever our testing mission is to find important bugs in a short space of time, or to demonstrate the product’s capability to key stakeholders within the company.

Author Bio

Luke Barfield is the Head of Testing within the Research and Development team of an IT Services company. He started as a graduate at an independent software testing consultancy and spent the majority of his early testing career working for clients in the financial, e-commerce and travel industries.

Luke used to have spare time, but now has a new baby son and so spare time is an almost non-existent commodity. When the baby is sleeping he spends his time self-studying in an attempt to be a better tester and if he is really lucky and has spare energy he might be seen climbing or mountain biking!

You can follow him on Twitter at @lukebarfield83.