A Note About Peer Conferences

This past July 7 and 8, I hosted the 4th Workshop on Heuristic and Exploratory Techniques here at the lab.

That shouldn’t excite you.

But what you may find interesting is WHET’s format. It was modeled after the Los Altos Workshops on Software Testing (LAWST), where attendance is by application or invitation and limited to 15-20 participants. Instead of covering many topics, a LAWST-style conference takes on only a few specific ones, with the goal of working through at least one thoroughly. Discussion does not end until the group is satisfied (or exhausted).

Attendees are strongly encouraged to challenge the presenter’s results, and the presenter must defend their presentation, which is usually an experience report. Papers and blog posts arising from the ensuing discussions are encouraged and belong to everyone in the group.

This was my 13th workshop in the LAWST format, but this post is the first time I’ve written about the concept, because I think there’s a lot of innovation out there waiting to be discovered and inspired through peer conferences like this.

For example, if you think boundary testing is as simple as thinking of a test case just before the boundary, then one on the boundary, then one just after the boundary, you’re not alone. But assemble 15 colleagues to talk about it and you might get 15 different definitions of what a “boundary” is, including:

“A category of software testing focused on determining and/or exploiting changes in the pattern of system behavior based on changing one or more test variables or parameters along a (frequently linear) range of values or settings.”

And

“Observing event driven state transitions in an application, and exploring the relationship between: a) The transition that occurred b) The underlying data associated with the triggering event c) The intended transition that should have occurred in the context of that data”

And

“Exploring changes in behaviour as the values in variables approach or pass critical points.”

And

“Any testing to the extent that it involves the discovery or evaluation of boundaries or boundary-related behavior, with ‘boundary’ defined as a dividing point between two otherwise contiguous regions of data that A) do evoke or represent different product behavior, and/or B) should evoke or represent different product behavior.”

Discuss these for just a few minutes with your colleagues and watch context emerge, in the form of notions like:

  • Time-based boundaries
  • State-based boundaries
  • Soft boundaries
  • Hard boundaries
  • Boundaries seen through different perspectives
  • Testing *for* boundaries vs. testing *at* boundaries
  • Emergent boundaries
  • Implicit vs. explicit boundaries

And you may find yourself inspired to give an impromptu software testing demonstration: perhaps one where an input field accepts 255 characters, but when you click OK to dismiss the dialog and then open it again, the field now accepts only 32 characters. If boundary meant only testing 254, 255, and 256 characters, what do you do now? How does that change your testing? Is it a soft boundary or a hard boundary? Is it an emergent boundary? And if so, what caused it to emerge?
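To make that demonstration concrete, here is a minimal sketch in plain Python. The InputField class is a hypothetical stand-in for the dialog described above, wired to reproduce the 255-then-32 behavior so the checks have something to run against; a real test would drive the actual UI instead.

```python
# Minimal sketch: a hypothetical InputField that truncates to 255 characters on
# entry, but silently re-truncates to 32 characters when the dialog is reopened.

class InputField:
    def __init__(self):
        self._value = ""

    def enter(self, text):
        self._value = text[:255]        # documented limit: 255 characters

    def click_ok_and_reopen(self):
        self._value = self._value[:32]  # emergent limit: only 32 survive

    @property
    def value(self):
        return self._value


def classic_boundary_checks():
    """Test just below, at, and just above the documented 255-char boundary."""
    for length in (254, 255, 256):
        field = InputField()
        field.enter("x" * length)
        expected = min(length, 255)
        assert len(field.value) == expected, (length, len(field.value))


def emergent_boundary_check():
    """Testing *for* boundaries: does the value survive an OK/reopen cycle?"""
    field = InputField()
    field.enter("x" * 255)
    field.click_ok_and_reopen()
    # This assertion fails: the reopen step exposes a second, hidden boundary.
    assert len(field.value) == 255, f"only {len(field.value)} characters survived"


if __name__ == "__main__":
    classic_boundary_checks()
    print("classic 254/255/256 checks passed")
    emergent_boundary_check()  # raises AssertionError: only 32 characters survived
```

The classic 254/255/256 checks all pass, yet the second check fails, which is the point: the interesting boundary only emerges when the test includes the OK-and-reopen step.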

That’s what you can expect from peer conferences like this: years of experience meeting critical debate, imagination, and study. This is where testing’s next innovations (and revolutions) are likely to occur.

As Vice President for Conferences and Training for the Association for Software Testing, I can say that the AST supports peer workshops like this. If you want to start one, send me the details and I can advise you.
