The purpose of testing

On his blog, Andy Pohls tells an interesting story of a customer who refused to deploy code that did not have an automated test. The output of the missing test in question was an XML file, and when the customer was shown how to read the XML being passed between systems, he realized that it did not suit the business needs.
But what I found most thought-provoking was one of the comments on the post, written by Michael Bolton:

It’s unusual to hear that the customer learned something and used the information obtained from creating the test. . . This is much closer to my view of what is really important about testing: discovering and revealing information so that people can make informed decisions. Most of the time, we hear about something different: confirming and validating information so that people can feel reassured knowing that last week’s tests are still passing this week. That might be reassuring, but it has enormous potential for self-deception. We need always to ask if our tests are helping us to learn, not just helping us to sleep.

I’ll have to think about how sharing tests with customers can enhance quality.

Don’t be fooled by the coverage report

I just ran across an (older) article about the difference between code coverage and code quality. The author argues that teams shouldn't rely on code coverage statistics alone; they should pay more attention to the quality of the unit tests themselves.
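To make the distinction concrete, here's a minimal sketch (my own hypothetical example, not taken from the article, using Python's unittest) of a test that earns full line coverage while verifying almost nothing:

```python
import unittest


def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)


class TestApplyDiscount(unittest.TestCase):
    def test_apply_discount(self):
        # Exercises the happy path and the error path, so a line-coverage
        # tool reports 100% for apply_discount()...
        apply_discount(100, 10)
        with self.assertRaises(ValueError):
            apply_discount(100, 150)
        # ...but the discounted value is never checked, so the formula could
        # be completely wrong and this test would still pass.


if __name__ == "__main__":
    unittest.main()
```

A coverage report would happily show 100% for this function, yet the test keeps passing even if the discount calculation is broken, which is the kind of false reassurance the author is warning about.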
As a QA engineer, I've found unit testing to be one of the most difficult testing areas to deal with. As a non-developer, and a significantly poorer programmer than the developers I work with, I just don't have the skills to review unit tests myself. And because unit tests traditionally fall on the developer's task list, even though they are testing tasks, it's been difficult to motivate the developers to think like QA engineers and implement unit test reviews or other ways of testing the unit tests beyond basic code coverage stats.
It seems like this sort of process improvement should be easier to implement in an agile environment, due to the team focus and to the softer separations between roles. But so far, I haven’t had much success at fostering interest in delving into the types of issues raised in this article.
I’d love to hear how other non-programmers have helped to improve unit testing in their organizations.

Agile testing with globally distributed resources, part 2

In my previous post, I explained that our company’s team in Singapore performs what we call enterprise testing, and I outlined some of the steps we’re taking to help the enterprise testing team to support the agile R&D teams more effectively.
In this post, I’ll share some specific practices that we’re working to implement.


Agile testing with globally distributed resources

Here at Borland, basic functional testing is the domain of the agile team that develops the functionality, while a dedicated QA team in Singapore is charged with performing what we refer to as enterprise testing: performance and scalability, integration, localization, etc.
This post outlines some of the changes that we’re implementing in regard to the enterprise testing group.


Agile self-righteousness

I posted a question to an agile testing group and one of the replies implied that if we were doing agile correctly, I wouldn’t face the problem that I asked about.
The mildly holier-than-thou tone of the reply sounded very familiar; I realized that I often take the same attitude. But now I see that such self-righteousness doesn't do anyone any good. If someone hasn't already gotten the message through preaching, more of it really isn't going to help. I'll be watching my own responses more carefully in the future.