Numbers don’t lie?

It’s a simple statement, but I honestly don’t know what it means.

It’s said by testing pundits at conferences and by project managers who want to make the point that there is truth in the numbers they use in their reports. But to me, there’s always fine print – caveats and provisos in 3-point font at the bottom of the contract.

Here is a stat from the Golf Channel:

Joe Durant led the Tour in driving accuracy, hitting 74.9 percent of his fairways.

Furthermore, he was third in hitting greens — 70.58 percent.

Seems like he should have been the top money-maker on the Tour last year, but no: he was 182nd.

Why was that?

Because he was 137th in putting average.

That means he had trouble sinking his putts to get a good score for each hole.

Maybe that’s why golfers say “drive for show, putt for dough.”

Gathering that kind of “fine print” context is known as “framing” – a reference point that makes the data tell a different story. The reason I don’t understand “numbers don’t lie” is that there are all kinds of different stories a statistic can tell, depending on how you frame it and on the context you discover.

People don’t need to be told when to scrutinize numbers. If they don’t believe a number, they will automatically question it. The thing is, they might not know how. When we hear something, images and assumptions run through our minds so fast that we often don’t have time to think.

Given that, let’s take some time right here to slow things down — like an instant replay.

Any number, any statistic is like software. It can be tested.

Here are some statistics to consider. Each of these has a framing context that makes it a half-truth, or that makes it risky to believe:

1) After 500 downloads, zero bugs have been reported in the three weeks since release.

Let’s look at that in slo-mo…

a. No one has actually *deployed* the software yet;

b. Release was on 12/15, so no one was around to test it during the Christmas / Hanukkah / Kwanzaa / New Year holidays;

c. All kinds of bugs were found, but no one has uploaded the reports yet;

d. Due to a bug in the download report, a site visit is mistaken for a download, and due to another bug no one has actually been able to download the software;

e. The bug reporting mechanism is broken and no one knows it.
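Item (e) is more common than it sounds. Here is a minimal, hypothetical sketch (the `submit_bug_report` function and the dashboard counter are invented for illustration) of how a broken reporting pipeline can swallow failures and keep the dashboard at zero:

```python
reported_bugs = []  # what the "zero bugs reported" dashboard counts

def submit_bug_report(report, tracker_reachable=False):
    """Hypothetical reporting client with a classic bug: it swallows errors."""
    try:
        if not tracker_reachable:
            raise ConnectionError("bug tracker unreachable")
        reported_bugs.append(report)
        return True
    except ConnectionError:
        # Bug: the failure is silently dropped, so the caller believes
        # the report went through and never retries or tells anyone.
        return True

# Users hit five bugs and "report" them all...
for i in range(5):
    assert submit_bug_report(f"bug #{i}")  # every call claims success

print(len(reported_bugs))  # the dashboard still shows 0 bugs reported
```

Every caller sees success, so nobody even knows the mechanism is broken – which is exactly the framing problem behind the headline number.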


2) We’ve run 123,000 tests.

Let’s take another look at that…

a. Each character combination on the character map is considered a test case.

b. This includes tests that were run in the past four years, not just the current cycle.

c. 2 million additional tests have not been run.

d. Each line of all the test procedures is considered a “case”.

e. It’s a repetitive stress test: 1 test run 123,000 times.
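Counting rules like (b), (d), and (e) matter because the same activity can yield wildly different totals. A minimal sketch, with invented suite data, of two defensible but very different ways to count “tests run”:

```python
# Hypothetical test-procedure records: steps per procedure plus run counts.
procedures = [
    {"name": "login",    "steps": 12, "runs_this_cycle": 1, "runs_all_time": 40},
    {"name": "checkout", "steps": 30, "runs_this_cycle": 0, "runs_all_time": 15},
]

# Conservative count: procedures actually executed in the current cycle.
runs_this_cycle = sum(p["runs_this_cycle"] for p in procedures)

# Inflated count: every step of every historical run is called a "test".
steps_all_time = sum(p["steps"] * p["runs_all_time"] for p in procedures)

print(runs_this_cycle)  # 1
print(steps_all_time)   # 930 -- the same work, reported 930x larger
```

Neither number is a lie; they just answer different questions, and only the framing tells you which question was asked.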


3) 100% of our tests are passing.

And here’s the slowed down replay…

a. We’ve only run 10 tests out of 3000.

b. We’re only counting automated tests.

c. Who’s “our”? Maybe a lot more important tests from an outside vendor are failing.

d. We’ve only tested on one supported platform or configuration.

e. A bug in the test harness considers the wrong machine state or return code as “passing.”
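Item (e) is easy to picture in code. A hypothetical sketch (the `passed` function is invented for illustration) of a harness check that inverts the Unix exit-code convention, where 0 means success:

```python
def passed(exit_code):
    """Hypothetical harness check with a classic bug: on Unix, an exit
    code of 0 means success, but bool(0) is False, so this check is
    inverted -- crashes read as passes, clean runs read as failures."""
    return bool(exit_code)

results = [1, 1, 1, 1]  # every test crashed with a nonzero exit code
pass_rate = sum(passed(code) for code in results) / len(results)
print(f"{pass_rate:.0%} passing")  # reports "100% passing"; all four failed
```

A one-character logic error in the harness, and the “100% passing” headline survives every run.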

My best advice for testing statements like this might take some courage. It also might take some diplomacy and patience – especially in meetings when everyone wants to get to the heart of why the presenter is using whatever numbers and statistics they’re using.

Remember, you have the right to know what it is you’re “buying” by listening to someone report. You have the right to slow things down, examine some of the key words in the sentence and test them. It may not be a contract you’re signing, but it could help you know what the truth might be.
