Defining Story Completion As A Software Tester

Read "Defining Story Completion As A Software Tester" by Elizabeth Zagroba in Ministry of Testing's Testing Planet

By Elizabeth Zagroba

There comes a point in your life when you realize: this story is done. The team has figured out what the feature should be, they’ve made it what it is, and it’s been tested.

The Ticket Itself

As a tester, I get asked to put “Test OK” on stories. For most teams I’ve been on, that’s enough. They trust that I’ve done my due diligence. The acceptance criteria have been met. I’ve asked some questions. Future tickets are updated with the knowledge we now share. Why write anything more? Who’s going to read it anyway?

On some teams, we’ve been blessed to have a product owner very involved in development; we all close the ticket together. In more cases, the product owner is harder to get a hold of, and the team moves the ticket through the workflow to await their approval.

Regardless, I try to summarize the journey we took to get a ticket to done in a comment when I close/move the ticket, including:

  • what was tested (this may feel like repeating the acceptance criteria)
  • what wasn’t tested, and why
  • what required a long time or help from other teams to set up
  • which environment, URL, and login credentials I used to access the feature, or how I was able to reproduce the bug
  • a screenshot (or GIF, I’m addicted to LICEcap) for visual stuff, or a description of how the data changed for less visual stuff
  • the build or commit number of the change
  • who helped me, especially if this isn’t immediately clear from the ticketing system
  • what exploratory testing uncovered, even if those things won’t be resolved as part of this ticket
  • any file I used as part of a test (upload/download features especially) or to aid development (mindmap, spreadsheet, architecture diagram, etc.)

Here’s an example:

Test OK. Tested with <<colleague>> in the test environment. The integration tests in <<repository name and path>> passed. After I delete the object, it's not available at <<link to where it would have been>> and no longer appears in the list <<annotated screenshot of where it's missing with that URL visible>>. I'm not seeing anything suspicious in the logs <<attached>>.

Who Cares?

I don’t include all of these things every time. But I try to think about future me. She knows some things about the system and the people involved, but she’s not going to remember every detail of this story. My closeout comment should answer the question “What does future me need to know about what past me tested?”

It’s not just future me that benefits from this answer. It’s the product owner, who wants to see the feature for themselves. It’s the developer, who wasn’t sure we remembered to run the automation. It’s the technical architect, who’s wondering how long this page took to load. It’s the UX designer, who can look at the screenshot to see what the empty version of this list looks like. It’s the tester, who I’m coaching on how deeply to test and document their testing. And it’s present me, who realizes as she’s attaching the log file that it’s too big to attach, and maybe that’s a problem too.

If quality is value to some person who matters, it helps to be able to recognize when a particular person matters less, or when the value at stake isn’t significantly diminished. Let’s look at the story about deleting something from the database. Going to the URL does not prove it’s gone; it only shows me that I can’t see it anymore. My access has been revoked, but the item could still exist. Querying the database would require credentials from internal IT, and a week for them to process the request. If we don’t check it, the risk is that the database will grow and we’ll see performance degrade. But we’re in the habit of looking at the daily, weekly, and monthly database calls every day, so we’ll discover this mess there before customers could notice.
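
Even when we decide to skip a check like that, it can be worth attaching to the ticket the query we would have asked internal IT to run, so the trade-off stays visible. Here’s a minimal sketch in Python, with a hypothetical objects table and an in-memory database standing in for the real one:

    # A sketch of the check we'd have asked internal IT to run: is the deleted
    # object really gone from the table, or only hidden from my account?
    # The "objects" table, its columns, and the in-memory database are hypothetical.
    import sqlite3

    def object_still_exists(conn, object_id):
        # Count rows with this id; anything above zero means the row is still physically there.
        row = conn.execute(
            "SELECT COUNT(*) FROM objects WHERE id = ?", (object_id,)
        ).fetchone()
        return row[0] > 0

    # Stand-in for the real database, just to show the shape of the check.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE objects (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO objects VALUES (42, 'the thing I deleted via the UI')")

    print(object_still_exists(conn, 42))  # prints True here: the row is still present in this stand-in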

A product owner can help you determine what matters. User personas can be a guide. As much as I like finding weird stuff, it only counts if you can convince someone that it matters. Raising an issue with the people who can do something about it is much more effective than only raising it within your team. IT blocking access? Raise it with the IT manager. Third-party service preventing conclusive debugging? Tell them! When your collaborators are what’s preventing you from doing your best work, you’re allowed to dig in and find out whether you can push back.

I’ve worked in a regulated context: our product tracked side effects people experience during clinical trials. Being able to pinpoint how the software looked and was able to behave for a particular release made going through an audit possible.

Advocating For Bugs

Once somebody finds something that is different from what we expect, we’ve got to decide what we’re going to do about it. Mature developers who encounter small bugs are likely to fix them quickly without even mentioning them to you. This can be great for small things, but it can also lead to scope creep and take transparency out of a process whose details were originally agreed to and understood by the whole team. The more available a product or project person is to help you balance the time you’re spending against the value to the customer, the better.

Easy testing problems are when you know what to do: you know you have to fix it right now, or it’s clear that doing nothing will have no consequences. Complicated testing problems result in bugs you have to advocate for. The argument I often want to make is: if we don’t fix it now, we’re never going to. But that’s not something most product owners want to hear. Sometimes a technical debt argument will work: we’ll never remember why we did this if we leave it this way. By the time this code gets touched again, there will be different people on the team. But usually, the technical debt argument works against picking things up: a customer’s unlikely to be affected. We’re going to rebuild this feature or replace it with a new one in the coming months. As much as you may want every little thing fixed, knowing when to give up is a better way to live.

Accurate Reports Build Confidence

Being specific about what you tested builds confidence in your work and the product. Closing a ticket with a comment of “seems to work ok” gives the reader some indication that the deepest testing possible was not performed. The safety language (“seems” as opposed to “does”) could be a jumping-off point for an interesting conversation, but without clearly stating who was involved in the testing, I’ll have to hope that the person who performed the testing and the person who left the comment are one and the same.

You want to provide enough detail about your testing so you know what to think about for a release. Which configuration variables need to be set up on production? Which teams around the company will want to be informed when this change goes live? It might also be a good point of reflection for you. What have you learned about the product or the team? Pausing to recognize how far you’ve come can give you the energy to keep going.

Report too many things at the highest urgency, and people may begin to think you’re crying wolf or getting hysterical over nothing. (Do men who test ever get described as hysterical??) Report too few things or de-prioritize everything you find and your colleagues may be concerned that you won’t find the important things. Describing your testing can help your colleagues understand why and how you found as many or as few things as you did.

When You Can’t Test It Yourself

I’m lucky enough to be part of teams that allow me to contribute to development. There’s only been one case where I contributed to some production code. It should be a relief to have a helping hand from developers in testing a product thoroughly. But it makes me anxious. I worry that they’re not going to notice what I would notice. I’m concerned they’ll see something and not worry about the consequences. They won’t write it down or mention it to me, so when I or someone else comes across it, we won’t realize it’s a pattern. Pairing has been the best way I’ve found to have my teammates see and practice the skill of going “wait a minute,” continuing down a different thought path, and being able to return to the original track.

To counteract this inclination to dismiss things in an effort to have them done, I encourage developers to start by writing everything down. It may feel excessive or obvious. It’s likely to yield false positives, where the developer thought something was off, but it was actually what’s happening on production, or what we’d expect on the test environment at the moment. I’d prefer to be confident we didn’t miss anything and work through a bit of over-signaling than miss things. The balance here depends on the priorities and risks of your project, or even the personalities of the team. If the company has shown that small bugs can be tolerated (most places) and big things can be tackled after the release because we’ve reserved time to account for the unexpected (nowhere I’ve worked or heard about), then less is ok.

Under Pressure

It’s flattering for your team to see you as someone who understands what a story needs and can think of everything. But as a tester, you are not the gatekeeper for quality. Putting that “Test OK” stamp on the story is not just a stamp; it means something. You’re putting your reputation and your standards on the line when you’re involved. Don’t let the “when will this be done?” and “can we close this already?” messages from your team garner a simple “yes” or “no” answer from you. Take these anti-quality behaviors and turn them into a conversation where your teammates are involved in the decisions about the quality of your product.

Author Bio

Elizabeth Zagroba is a Test Engineer at Mendix in Rotterdam, The Netherlands. She was the keynote speaker at Let’s Test in South Africa in 2018, and she’s spoken at TestBashes and other conferences around North America and Europe. Her article about mind maps became one of the most viewed on Ministry of Testing Dojo in 2017. You can find Elizabeth on the internet on Twitter and Medium, or you can spot her bicycle around town by the Ministry of Testing sticker on the back fender.
