Emergent Complexity

This is from an episode I happened to catch while flipping channels tonight. I usually linger on PBS for a few seconds, and I caught this exchange about something called “emergent complexity”. It immediately made me think of why software often behaves “irrationally”…

From Nova ScienceNow, https://www.pbs.org/wgbh/nova/transcripts/3410_sciencen.html#h03

JOHN HOLLAND: Anything I know that exhibits emergence, involves, a lot of, we might call them agents, a lot of individuals or parts. We could call them parts.

CARLA WOHL: John Holland’s first experience with emergence came from some fairly unsophisticated electronic parts that came together to create something almost intelligent. And he saw it a half century ago with a game of checkers. You used to look at this as child’s play, right?

JOHN HOLLAND: Yes, I did.

CARLA WOHL: I believe it’s your move, too, by the way.

JOHN HOLLAND: Oh, all right.

CARLA WOHL: What changed your mind?

JOHN HOLLAND: What changed my mind was my encounter at IBM – this was in the early ’50s. I was busy at that time simulating neural networks.

CARLA WOHL: Meanwhile, a coworker, Arthur Samuel, was doing something else.

JOHN HOLLAND: He programmed the machine to play checkers. And I thought, “Well, what he’s doing is interesting, but that isn’t anywhere near as deep as simulating neurons.”

CARLA WOHL: It’s checkers, right?

JOHN HOLLAND: Yeah, it’s checkers.

CARLA WOHL: As it turned out, Samuel had achieved something far deeper than anyone at IBM expected.

JOHN HOLLAND: He programmed the rules, and the machine would move according to the rules.

CARLA WOHL: Not only was the computer following the basic rules of checkers, it had another set of rules as well, a strategy to favor moves that might lead to victory.

JOHN HOLLAND: Simply by its experience with him and other players, it favored better moves than he did. That machine learned well enough that it could actually beat Samuel himself. With this learning I have emergence.

CARLA WOHL: It was emergent because when the computer followed simple rules, something as unpredictable and complex as learning emerged, something until then only living things could do.

—————

This may explain why testing is so complex, and why exploratory testing is so useful even when all of your tests are planned: it tends to find the issues that emergent complexity creates.
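To make that concrete for the software-minded, here is a minimal sketch (mine, in Python, and emphatically not Samuel’s actual program) of how a couple of simple rules plus feedback can produce a strategy nobody wrote down. The game, the function names, and the learning rate are my own illustrative choices; the only hard-coded rules are the legal moves of a small Nim-like game and “nudge the value of each position you leave behind toward the game’s outcome.” The well-known winning strategy (leave a multiple of four) tends to emerge on its own.

    import random

    PILE = 21          # counters on the table at the start
    MOVES = (1, 2, 3)  # a legal move removes 1, 2, or 3 counters

    # One learned value per pile size the learner might leave behind.
    values = {n: 0.0 for n in range(PILE + 1)}

    def learner_move(pile, explore=0.1):
        """Usually pick the move that leaves the most-valued pile; sometimes explore."""
        options = [m for m in MOVES if m <= pile]
        if random.random() < explore:
            return random.choice(options)
        return max(options, key=lambda m: values[pile - m])

    def random_move(pile):
        return random.choice([m for m in MOVES if m <= pile])

    def play_and_learn(games=20000, rate=0.05):
        for _ in range(games):
            pile, left_behind, learners_turn = PILE, [], True
            while pile > 0:
                pile -= learner_move(pile) if learners_turn else random_move(pile)
                if learners_turn:
                    left_behind.append(pile)   # positions the learner created
                learners_turn = not learners_turn
            # Whoever took the last counter wins; the turn flag has already flipped.
            reward = 1.0 if not learners_turn else -1.0
            for p in left_behind:
                values[p] += rate * (reward - values[p])

    play_and_learn()
    favorites = sorted(values, key=values.get, reverse=True)[:6]
    print("Pile sizes the learner most likes to leave:", favorites)
    # Typically dominated by 0, 4, 8, 12... a strategy that was never programmed.

Nothing in the code mentions multiples of four, yet that is the behavior that tends to show up; that gap between what was programmed and what the system does is exactly where exploratory testing earns its keep.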

Workshop on Heuristic and Exploratory Techniques (WHET)

At the 4th Workshop on Heuristic and Exploratory Techniques, held here at the lab on July 7 and 8, fifteen colleagues gathered to discuss boundary testing.

Aside from presentations by Keith Stobie, Scott Barber, James Bach, and Robert Sabourin, to name just a few, there was a breakout session that consisted of two teams. One team’s mission was to find ways to discover boundaries. Here is the (unscrubbed) brainstorm from a team consisting of Karen Johnson, James Bach, Scott Barber, Dawn Haynes, Henrik Andersson, David Gilbert, and Michael Bolton.

It’s a list of heuristics, and some items will need explanation from their idea owners (some day), but I wanted you to see the raw list that was assembled in just 30 minutes of discussion. What ideas do they trigger for you? (A sketch of one of them, bisection search, follows the list.)

  • Probing
  • Resource utilization
  • Customs
  • Removable drives
  • Investigation of transitions
  • Defocusing strategies
  • Datatype conversion
  • Purposeful vs. accidental
  • Trends and patterns
  • Push every button, slide every lever
  • Frenetic test execution
  • Database model
  • Confusable sets
  • XML schema
  • Dogpiling
  • Grouping
  • Intentional stress
  • Tools (Nikto, Holodeck)
  • Model validation
  • Continuity and discontinuity
  • Architecture
  • Limits, extents, borders
  • Communication paths
  • Theories of error
  • Licensing
  • Reverse engineering tools
  • Forensic tools
  • Log files
  • Ask developer
  • Cookie expiration
  • Test printer
  • Magic numbers
  • Taxonomy
  • Linguistic analysis
  • Misdeclared boundaries
  • Space
  • Terminator
  • Mismatched interfaces
  • Finished
  • Root and stem effects
  • HTML schema
  • Powers of 10 / powers of 2
  • Opposite
  • Embrace color
  • Dissection
  • Screen resolution
  • Batch
  • HW compatibility
  • Transition to 3rd party
  • Authorship
  • DLLs missing
  • Protocol
  • Code analysis
  • Unit tests
  • API
  • Code coverage methods
  • Import old data
  • States
  • Complexity
  • Events and triggers
  • Exploratory search (for limits or transitions)
  • Periodicity
  • Scatter plots
  • Concurrency
  • Precision and fuzziness
  • Traditional boundaries
  • 1-3-7-11-more
  • Experience
  • Trend graphing
  • Bisection search
  • Round numbers
  • Contracts
  • Google it
  • Reliability
  • Segregation
  • Sorting
  • Visual discontinuity
  • Failure analysis
  • Blink testing
  • OSI layers
  • Comparison
  • Brute cause analysis
  • Exploit each bug
  • Test something similar
  • Trigger
  • Array
  • Alternate functional pathways
  • “Monday morning” disease
  • Incursion
  • Intuition and instinct
  • Reactivating dormant
  • Malicious user
  • Judging
  • Anti-random
  • Retrospective search
  • Fuzz testing
  • Marching and resonance
  • Merging
  • Decomp and recomp
  • Composite data structures
  • Watch the kids
  • Max and min
  • Switching
  • Notice if you have a hard time finding a boundary
  • Should and shouldn’t
  • Built-in tests
  • Internal vs. external
  • Data replication
  • Mirror
  • Look in the spec
  • Look in the secret spec
  • Check the industry standard
  • Controls (third-party)
  • Leaks
  • Author / authorship
  • Configuration Management
  • “Don’t do that”
  • Encapsulation
  • Sloppy
  • Infinite
  • Divide by 0
  • Zero
  • Java pitfalls
  • “Impossible”
  • “Not yet”
  • Marketing claims
  • Check Google groups
  • System requirements
  • Intended use
  • Egg language
  • Price
  • Turn debug info on
  • Monitoring
  • Memory
  • Multiple processors
  • Transaction logging
  • Selection
  • Error codes
  • Check bug taxonomy
  • Roles and permission
  • New build
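To illustrate how an item like “Bisection search” might turn into an actual test, here is a small sketch in Python. The names (find_boundary, accepts, the simulated hidden limit) are hypothetical stand-ins of my own; in practice the accepts predicate would be a call into whatever you are testing (a UI field, an API parameter, a file import).

    def find_boundary(accepts, low, high):
        """Largest value in [low, high] that `accepts`, assuming everything
        below the boundary passes and everything above it fails."""
        assert accepts(low) and not accepts(high), "boundary is not bracketed"
        while high - low > 1:
            mid = (low + high) // 2
            if accepts(mid):
                low = mid      # boundary is at or above mid
            else:
                high = mid     # boundary is below mid
        return low

    # Simulate a system with an undocumented limit; in real testing this
    # predicate would drive the product ("does a string of length n import?").
    HIDDEN_LIMIT = 4096
    accepts = lambda n: n <= HIDDEN_LIMIT

    print(find_boundary(accepts, 0, 1_000_000))   # -> 4096, found in about 20 probes

Twenty probes instead of a million is what makes this practical when each probe is a slow or manual step.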

The Conference of the Association for Software Testing (CAST)

Why aren’t there testing conferences in Seattle?

There are workgroups like QASIG, SASQAG, ASQ, SPIN, and OWASP, but no full-on conferences.

So, I decided to have one.

I’m a member of the Association for Software Testing. It’s a non-profit organization started two years ago by Cem Kaner and a handful of other software testing luminaries. There are more than a handful of us now (about 180 members), and last year we had our first conference in Indianapolis. One hundred people attended the three-day program.

I was Program Chairman last year, helping to review papers and decide the conference schedule. But this year, I told the Association that I’d run for Conference President if they’d let me host the conference in Seattle. They agreed.

So, here we are, 9 months later, and we did it. We had a testing conference! Ok, it was in Bellevue, not Seattle, but it’s the thought that counts.

From July 9 – 11 at the Meydenbauer Center, members of the AST gathered along with anyone else who wanted to discuss this year’s theme: “Testing Techniques: Innovations and Applications.”

Highlights:

  • 176 attendees and 18 sponsors
  • We held the second annual AST tester competition, sponsored by Microsoft. Thirteen teams competed for cash prizes totaling $2,500. Each team’s bugs were videotaped and categorized, to be posted later on the CAST site.
  • We held the first Tester Exhibition, sponsored by Google and the brainchild of Google Software Test Engineer and noted Model-Based Testing expert Harry Robinson. The exhibition featured me, Harry, James Bach, Lydia Ash, Robert Sabourin, Danny Faught, Doug Hoffman, Mike Kelly, and Scott Barber, assembled as a team to approach and discuss the testing of the CAST 2007 registration page. The aim was to show the audience how we so-called “experts” would handle a real testing problem, and to invite them to play along.
  • Q and A sessions for each speaker were led by trained facilitators, and each attendee had numbered, colored placards to raise when they either had a new question or wanted to add to a question raised by another attendee.
  • The program featured 5-minute preambles from each speaker so the audience could get an idea of which talk they wanted to attend next.
  • There was a separate breakout room for conferring, in case a talk ran long and the speaker still had questions to answer.

In the next few weeks, the conference program will be published on the CAST site as well as artifacts from the testing competition. For details, go to https://www.associationforsoftwaretesting.org/conference/

A Note About Peer Conferences

This past July 7 and 8, I hosted the 4th Workshop on Heuristic and Exploratory Techniques here at the lab.

That shouldn’t excite you.

But what you may find interesting is WHET’s format. It was modeled after the Los Altos Workshops on Software Testing (LAWST), where attendance is by application or invitation and limited to 15–20 participants. Instead of covering many topics, a LAWST-style conference takes on only a few specific ones, with the goal of working through at least one thoroughly. Discussion does not end until the group is satisfied (or exhausted).

Attendees are highly encouraged to challenge the results of the presenter, who must defend their presentation, which is usually an experience report. Papers and blogs from the ensuing discussions are encouraged and belong to everyone in the group.

This was my 13th workshop in the LAWST format, but this blog is the first time I’ve decided to write about the concept because I think there’s a lot of innovation out there waiting to be discovered and inspired through peer conferences like this.

For example, if you think boundary testing is as simple as thinking of a test case just before the boundary, then one on the boundary, then one just after the boundary, you’re not alone. But assemble 15 colleagues to talk about it and you might get 15 different definitions of what a “boundary” is, including:

“A category of software testing focused on determining and/or exploiting changes in the pattern of system behavior based on changing one or more test variables or parameters along a (frequently linear) range of values or settings.”

And

“Observing event driven state transitions in an application, and exploring the relationship between: a) The transition that occurred b) The underlying data associated with the triggering event c) The intended transition that should have occurred in the context of that data”

And

“Exploring changes in behaviour as the values in variables approach or pass critical points.”

And

“Any testing to the extent that it involves the discovery or evaluation of boundaries or boundary-related behavior, with ‘boundary’ defined as a dividing point between two otherwise contiguous regions of data that A) do evoke or represent different product behavior, and/or B) should evoke or represent different product behavior.”

Discuss these for just a few minutes with your colleagues and watch new context emerge, such as notions of:

  • Time-based boundaries
  • State-based boundaries
  • Soft boundaries
  • Hard boundaries
  • Boundaries seen through different perspectives
  • Testing *for* boundaries vs. testing *at* boundaries
  • Emergent boundaries
  • Implicit vs. explicit boundaries

And you may find yourself inspired to give an impromptu software testing demonstration: perhaps an input field accepts 255 characters, then you click OK to close the dialog, then you open it again to find that the field now accepts only 32 characters. If boundary testing meant only testing 254, 255, and 256 characters, what do you do now? How does that change your testing? Is it a soft boundary or a hard boundary? Is it an emergent boundary? And if so, what caused it to emerge?
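Here is a minimal sketch of that demonstration in Python, with the surprise simulated so the example is self-contained. Everything in it (FakeField, try_set, reopen) is a hypothetical stand-in for your own manual steps or automation; the point is the shape of the check: measure the boundary, round-trip the dialog, measure it again, and flag any drift.

    class FakeField:
        """Simulates the scenario above: the field accepts 255 characters when
        the dialog is first opened, but only 32 after it is closed and reopened."""
        def __init__(self):
            self.limit = 255
        def reopen(self):
            self.limit = 32           # the emergent surprise
        def try_set(self, text):
            return len(text) <= self.limit

    def max_accepted_length(field, upper=1024):
        """Largest number of characters the field will take (simple linear probe)."""
        best = 0
        for n in range(upper + 1):
            if field.try_set("x" * n):
                best = n
        return best

    field = FakeField()
    before = max_accepted_length(field)
    field.reopen()
    after = max_accepted_length(field)
    print(f"accepted before reopen: {before}, after reopen: {after}")
    if before != after:
        print("The boundary moved; tests at 254/255/256 alone would not have noticed.")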

That’s what you can expect from peer conferences like this: years of experience meeting critical debate, imagination, and study. This is where testing’s next innovations (and revolutions) are likely to occur.

As Vice President for Conferences and Training for the Association for Software Testing, I can say that the AST supports peer workshops like this. If you want to start one, send me the details and I can advise you.

The Seattle Testing Community

You know about Silicon Valley, but up here, we’re just as much a software community.

We have Starbucks and Boeing, Microsoft and Google, RealNetworks and F5 Networks, Getty Images and Corbis, Safeco and Washington Mutual, T-Mobile and Verizon, Amazon and Adobe…

With all of these big name technology companies, there’s lots of room to discuss software testing.

Here’s a sample:

  1. QASIG www.qasig.org – the group that Quardev sponsors, meeting on the second Wednesday of every odd-numbered month here at the lab – see the site for past speakers, all of whom were booked because of their innovative topics.
  2. SASQAG www.sasqag.org – meets on the third Thursday of every month.
  3. Sea-SPIN www.seaspin.org – Seattle Eastside Area Software Process Improvement Network – small group devoted to meeting monthly about process issues – meets at Construx in Bellevue on the third Monday of every month.
  4. OWASP Open Web Application Security Project – https://lists.owasp.org/mailman/listinfo/owasp-seattle – Mike de Libero, OWASP Seattle chapter co-leader.
  5. ASQ American Society for Quality (Seattle chapter) – https://www.asq-seattle.org/
  6. PNSQC Pacific Northwest Software Quality Conference – https://www.pnsqc.org – Now in their 25th year! A three-day conference held in Portland in October.

If you know of something that is not on this list, email me!

Baby on Board

An honest admission:

I became a father this past February and I *still* don’t know what to do when I see a “Baby on Board” sticker on the car in front of me.

Seriously. What does that driver want me to do? NOT carjack them? NOT honk if they cut me off, lest it wake their sleeping baby?

Now when I drive and baby Charlotte’s in the back seat, I *still* don’t know what people want me to do when I see their sign. But I did think of an interesting reason to have one.

One day while I was driving, Charlotte dropped her pacifier and shrieked uncontrollably. As I fussed and impulsively reached back to find it while driving, it occurred to me that I should pull over because I was beginning to drive a bit erratically. In other words, there was an internal system in my car that other drivers couldn’t see.

What would a sign have done? Maybe it’s more to connote risk. “I may start driving like an idiot because I am subject to the whims of a child capable of reaching 150 decibels at any second…”

If only software came with such signs… but then I guess I’d be out of a job and wouldn’t be able to afford one of those nifty little “Baby on Board” signs…
