CAST 2008

For the third year, I’ve been an officer for the non-profit Association for Software Testing (https://www.associationforsoftwaretesting.org). Each year, we’ve staged a small conference of fewer than 200 people who gather to talk about innovative developments in software testing. What makes this gathering unique is that each session has a facilitator to encourage discussion.

This year, CAST is in Toronto, July 14–16. Registration is now open to all, but AST members get a discounted rate.

The theme is “Interdisciplinary Approaches to Software Testing” and we’re proud to have booked the following keynotes:

Gerald Weinberg — Lessons from the Past to Carry into the Future

Cem Kaner — The Value of Checklists and the Danger of Scripts: What Legal Training Suggests for Testers

Rob and Anne Sabourin — Applied Testing Lessons from Delivery Room Labor Triage

Also featured are four one-day tutorials from:

Gerald Weinberg — The Tester’s Communication Clinic

Scott Barber — Performance Testing Software Systems: Analyzing Performance Test Data

Julian Harty — Mobile Wireless Test Automation

Hung Nguyen — From Craftsmanship to Leadership

Other speakers:

Michael Bolton and Jonathan Kohl — Testing and Music: Parallels in Practice, Skills and Learning

Bart Broekman — Testing Fuzzy Interfaces – Can We Learn From Biology And Wargaming?

Morven Gentleman — Measuring File Systems

Adam Goucher — Lessons in Team Leadership from Kids in Armor

Diane Kelly and Rebecca Sanders — The Challenge of Testing Scientific Software

Jeremy Kominar — Sleight-of-Quality — A Magical Approach to Testing

Steve Richardson and Adam Geras — Seeking Data Quality: Using Agile Methods to Test a Data Warehouse

Martin Taylor — Visualization and Statistical Methods In High Volume Test Automation of Embedded Devices

Adam White — Software Testing To Improv

Doug Hoffman — Lessons for Testing from Financial Accounting: Consistency in a self-regulated profession

Scott Barber — Testing Lessons From Civil Engineering

Acceptance Testing Guidance

A few years ago, I met Grigori Melnik at a workshop for software professionals interested in learning new ways to teach testing. He was a programmer and a professor at the University of Calgary, and his presentation was titled “Test Infecting Your Developers.”

I liked him even before I heard him speak because he seemed interested in encouraging developers to understand and think more like testers. His presentation included a demonstration of the Notepad quirk in which the string “this app can break”, saved as a plain text file, is misinterpreted as two-byte Unicode when the file is reopened. (There’s even a Wikipedia article about this.)
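
For the curious, here’s a minimal Python sketch (my own illustration, not from Grigori’s talk) of why those bytes fool an encoding heuristic: the ASCII string is 18 bytes, an even count that also decodes cleanly as UTF-16 little-endian.

```python
# Reproduce the classic Notepad quirk: an ASCII string whose bytes
# also happen to decode cleanly as UTF-16 little-endian text.
text = "this app can break"
raw = text.encode("ascii")         # 18 bytes on disk, an even count

# Notepad's heuristic guessed two-byte Unicode for bytes like these,
# so it displayed CJK characters instead of the original ASCII.
misread = raw.decode("utf-16-le")
print(misread)                     # nine CJK characters, not the ASCII text
```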

Four years later, Grigori is still at it, trying to “test infect” the people he works with on the Patterns and Practices team at Microsoft and to create meaningful ideas that help software professionals around the world.

I’m pleased to be working directly with him on a project that’s dedicated to helping developers, testers, managers, and customers prepare for acceptance testing, which we define as “planned evaluation by a customer (or proxy) to determine whether a system satisfies their expectations.”

The topics we plan to focus on include:

– test objectives and strategy,

– the notion of “readiness” as well as “acceptance,”

– defining and reconciling “good-enough” criteria in various industrial contexts,

– working with customers and customer-proxies,

– supporting stories/requirements with acceptance tests (sketched below).
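
To make that last point concrete, here is a small, hypothetical sketch of a story-backed acceptance test in Python. The story, the Account class, and the amounts are all invented for illustration; they aren’t from our guide.

```python
# Hypothetical acceptance test for the story:
# "As a customer, I can withdraw cash when my balance covers the amount."
# Account and withdraw() are illustrative names, not from a real codebase.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_customer_can_withdraw_within_balance():
    # Given a customer account holding $100
    account = Account(balance=100)
    # When the customer withdraws $40
    account.withdraw(40)
    # Then the remaining balance is $60
    assert account.balance == 60

test_customer_can_withdraw_within_balance()
print("acceptance criterion satisfied")
```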

Our aim is to create guidance artifacts, such as case studies and exercises drawn from the real world. As we do, we’re taking an Agile approach: weekly iterations, standup meetings, backlogs, story cards, personas, and interviews. I’m also keeping a “Dogfood Diary” to show how we anticipate acceptance of the work we produce (that is, how we “eat our own dogfood”).

On behalf of Grigori, Michael Puleio, Rohit Sharma, and myself, we welcome your thoughts on any aspect of acceptance testing, especially if it relates to a pain point you are encountering on a project.

If you have an interesting experience with acceptance testing that you’d like to share and perhaps be profiled in our guide as a case study, we’d like to hear about it!

Grigori’s blog
