Bug Investigation

I’ve been to many testing conferences and workshops in my career (maybe you have, too), and I’ve noticed an interesting pattern. Speakers talk about all kinds of methods, techniques, approaches, technologies, tools, heuristics, mindsets, skill sets and even theories that will help you find bugs, but rarely about what to do when you succeed in finding one!

I mean, is it a foregone conclusion that all you need to do is file a report about what you did and what you saw?

Well, wait a second…

You can file that bug, but are you ready for the questions that may come back to you? Are you willing to wager your reputation on the details you put in there?

It’s not hard to celebrate and quickly throw in a bug report, especially when time is short. But I’ve seen times when it’s worth spending more time – sometimes just a minute or two – to think before filing. Those few minutes of investigation may increase your credibility.

For example:

Is it isolated to your machine?

Does it happen every time?

Can you make it happen a different way?

Does it happen on another environment?

Is this bug the underlying fault itself, or just the surface failure of a deeper fault?

What tools might help?

Have you seen this before?

Could this be fooling you – might the behavior actually be by design?

Is any part of the system changing underneath your testing?

Are there any dependencies of the feature that might be failing?

Follow-up tests: “what if I try X?”

What are your assumptions?

Are your oracles well-defined?

Would looking at the code help?

What if you re-ran the tests and changed one of the test factors?
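As a hedged sketch of that last question, here’s how “change one factor and re-run” might look with pytest. Everything here – the parse_amount function and the cases – is invented for illustration; the idea is that each case varies exactly one factor from the baseline, so a failure points directly at the factor that matters.

```python
# A minimal sketch (all names and cases invented) of re-running a test
# while changing one factor at a time, using pytest's parametrize.
import pytest

def parse_amount(text: str) -> float:
    # Stand-in for the feature under investigation: a naive parser
    # that only understands US-style grouping ("1,000.50").
    return float(text.replace(",", ""))

# One baseline case, plus variations that each change a single factor:
CASES = [
    ("1,000.50", 1000.50),    # baseline: the input from the bug report
    ("1000.50", 1000.50),     # factor changed: no thousands separator
    (" 1,000.50 ", 1000.50),  # factor changed: surrounding whitespace
    ("-1,000.50", -1000.50),  # factor changed: negative sign
]

@pytest.mark.parametrize("text,expected", CASES)
def test_parse_amount(text, expected):
    assert parse_amount(text) == expected
```

If the whitespace case fails while the others pass, you’ve just isolated a detail worth putting in the bug report.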

The point is: now that you have the bug in your sights, is it really trapped, or is there more you can do to unearth it?

When you’re driving on the interstate, you may have noticed grooved lines in the shoulder pavement. They’re meant to make your car vibrate if you stray off the road, alerting you to the worse problem ahead if you keep veering off course. It’s called the “rumble strip.” Testing has rumble strips, too: behavior that stimulates your notion of a problem. Treat that behavior not as the bug itself, but as an alert to a much bigger bug on the other side of the guard rail. Sure, you can file your account of the behavior you saw initially, but try continuing over the edge the rumble strip is trying to protect you from. It’s just software, and you may find yourself able to tell a better story about a more spectacular crash.
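Here’s a small, invented sketch of that idea: the first anomaly you notice is the rumble strip, and escalating the same factor finds the crash that’s actually worth reporting. (The use of Python’s json module here is just an illustration – substitute whatever you’re testing.)

```python
# An invented sketch of driving past the rumble strip: the first
# anomaly is only a warning, so keep escalating the same factor
# until the more spectacular failure shows itself.
import json

depth = 64
while depth <= 1_000_000:
    doc = "[" * depth + "]" * depth
    try:
        json.loads(doc)  # fine at shallow depths...
    except RecursionError:
        # ...until the parser blows the stack. Now the report can
        # tell the bigger story: not "deep nesting seems slow," but
        # "nesting at depth N crashes the parser."
        print(f"RecursionError at nesting depth {depth}")
        break
    depth *= 2
```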

———————

Jon –

This is in response to your blog post about “bug investigation.” While I am not a software tester or developer myself, this whole notion of diagnosis versus treatment is very common in my line of work as a psychotherapist. Some of the greatest diagnosticians can’t treat to save their souls, and some of the best clinicians have a hard time nailing down an actual diagnosis.

I have often thought that the whole idea behind this is the focus of what one is aiming to see. If one is able to look for bugs, then one can find them. If one is able to fix the bug, usually the “bug seeks him/her” and not the other way around – similar to the concept that when the student is ready, the teacher will appear.

Best regards –

Kathleen B. Shannon

Plenty Questions

Here’s a little game to help you think about context when testing. It’s called “Plenty Questions.”

It goes like this:

I give you a riddle and you figure it out by asking an unlimited number of “yes” or “no” questions.

For example: “A woman died because she was a voracious reader. How is this possible?”

Although you can go straight to guessing the situation I have in mind, one of the game’s powerful lessons is that it encourages you to examine every word of the riddle.

For example: “woman”…

Q: Was this a mother?

A: No.

Q: Was this woman previously a man?

A: No.

Q: Was this woman older than 80?

A: No.

Then “died”…

Q: Was she reading a book about committing suicide?

A: No.

Q: Did she die of a heart attack from reading something alarming?

A: No.

Q: Did she die because she was constantly reading and did not make time to eat or drink?

A: No.

Let’s try the word “because” (an effect of “cause”)…

Q: Did she die while in the act of reading?

A: Yes.

Q: Did she die from an outside influence?

A: Yes.

Q: Would she have died of this particular cause had she never read books?

A: No.

And then “voracious”…

Q: Was she a speed reader?

A: No.

Q: Did she read more than 100 books a day?

A: No.

Q: Was there an aspect to her reading style that is important for me to know?

A: Yes.

And finally, “reader”…

Q: Did she read out loud?

A: No.

Q: Did the volume of pages she read have an effect on her?

A: Yes.

Q: Did the manner in which she was reading cause her death?

A: Yes.

It’s useful to examine the context, but also our assumptions about the definitions of words. Words convey images in our heads, and my images may be different from yours. Examining each word helps you assemble the pieces of the puzzle.

This is important because in testing, it’s useful to push back when someone says:

1) “When are you going to be done?”

Is “you” the test team or you, personally? Does “done” mean done for the day or done with this project?

2) “Try it and see if it works.”

What techniques are involved in “trying”?

What are you supposed to see?

What requirements are in your head at the time, and to what degree should the product meet them?

3) “Perform the following regression tests.”

“Perform” in what amount of time?

Do you want me to follow the steps as written or can I improvise?

How will I know if we’re regressing in the user’s perception of quality?

Since the questions in Plenty Questions are yes/no, the game is also good practice in lawyer-like inquiry – a skill that helps you test assumptions that might be faulty, build a foundation of logic, and spark your imagination about variables that might reveal new problems.

Can vs. Will

Whenever I run a test that passes, I never stop wondering if it will pass when the customer does the same thing.

Up until today, I thought that was just me being a worrier and that I needed to seek counseling about it. But a discussion on software-testing@yahoogroups.com helped convince me otherwise.

Whenever a test passes, it’s only an *indication* that the software CAN meet a requirement or expectation. It is no guarantee that it WILL.

That implies that tests that pass are really just educated opinions about a product’s capability, not an assurance that it will meet customer expectations.

Sure, they may call it Quality Assurance, but I’ve always been uneasy about that term. I can’t assure anything when it comes to software, because I can think of so many contexts that might change or affect it when the customer goes to run it.
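To make that concrete, here’s an invented example: the same test can pass on my machine and fail on a customer’s, purely because of the filesystem underneath it.

```python
# An invented example of context sneaking into a "passing" test: this
# assertion holds on a case-insensitive filesystem (the usual macOS and
# Windows default) and fails on a case-sensitive one (typical Linux).
import os
import tempfile

def test_config_lookup_tolerates_case():
    with tempfile.TemporaryDirectory() as d:
        with open(os.path.join(d, "Config.ini"), "w") as f:
            f.write("[app]\n")
        # A pass here shows the lookup CAN succeed in this context;
        # it is no promise that it WILL succeed on a customer's OS.
        assert os.path.exists(os.path.join(d, "config.ini"))
```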

So forget the idea that passed tests say “the software WILL do x, y, z.”

I can’t assure such a statement any more than I can assure I’ll be in a good mood tomorrow.

I prefer to say “the software CAN do x, y, z in this context.”

I’m not trying to be bulletproof or to dodge responsibility for stakeholders’ concerns; I’m just trying to set their expectations about testing. That way, when the product ships and they come to me with a bug and say “Didn’t you test this?” I can point to my testing and remind them what CAN really means: “I had reason to believe it was capable of x, y, z because I ran a test in this context and got results I considered consistent with my idea of a customer expectation around those features.”

I know… it’s not a sexy sound bite, but it’s the scientific, rational answer – appropriate for a person (a tester) whose job it is to be rational and scientific.

Maybe that’s why people say “I CAN and I WILL” when they want to emphasize both their capability and assurance about something. I just think it’s worth it to separate those two powerful words when we report our testing to stakeholders who may not see a difference.
