In InfoWorld, Frank Hayes offers a very brief introduction to agile: Agile programming is no hooey. It might be a good high-level starting point for someone who is new to agile.
In my discussions and reading, I hear a lot of people declare that so-and-so group thinks they’re agile, but they aren’t really. I don’t think these either/or judgments are very useful–there isn’t really a checklist of criteria for judging whether a group is truly ‘doing agile’ or not.
I think there’s a more useful question to ask: has a group understood and embraced the principles of agile or have they viewed it as just a set of practices to be adopted (e.g., short iterations, daily stand-ups, etc.)?
It doesn’t do any good to tell a group, for instance ‘To be agile, you need to do X’ without explaining the principle behind the practice.
Unfortunately, this sounds like a group that doesn’t grasp the principles of agile. I really feel for the tester who posted the frustrated message to the forum. He seems to be trying to embrace the principles alongside team members who don’t see them as clearly.
Throughout my career as a QA engineer, I’ve used the following informal question as a basic sanity check: Does my gut tell me that everyone involved is on the same page? Do the developers, QA engineers, and technical writers all have a common understanding of what they’re developing? Most of my jobs had no particular defined process, so this sanity check was particularly important.
I’ve found this question also to be useful in an agile team, but the dynamics are a little different.
From 37Signals’ blog: If you’re working in a big group, you’re fighting human nature
According to British author Antony Jay, there are centuries of evidence to support the idea that small groups are the most efficient. In “The Corporation Man,” he talks about how humans have worked in small groups, usually five to fifteen people, as hunters and farmers for hundreds of generations.
On his blog, Andy Pohls tells an interesting story of a customer who refused to deploy code that did not have an automated test. Moreover, the output of the missing test in question was an XML file. When the customer was shown how to read the XML that was being passed between systems, he realized that it did not suit the business needs.
But what I found most thought-provoking was one of the comments to the post, written by Michael Bolton:
It’s unusual to hear that the customer learned something and used the information obtained from creating the test. . . This is much closer to my view of what is really important about testing: discovering and revealing information so that people can make informed decisions. Most of the time, we hear about something different: confirming and validating information so that people can feel reassured knowing that last week’s tests are still passing this week. That might be reassuring, but it has enormous potential for self-deception. We need always to ask if our tests are helping us to learn, not just helping us to sleep.
I’ll have to think about how sharing tests with customers can enhance quality.
I just ran across an (older) article about the difference between code coverage and code quality. The author argues that teams should not just rely on code coverage statistics. They should be more interested in the quality of the unit tests themselves.
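To make the distinction concrete, here’s a minimal sketch (the `discount` function and both test classes are hypothetical, not from the article): both tests below execute every line of the code under test, so a coverage tool reports them identically, yet only one of them can actually catch a bug.

```python
import unittest


def discount(price, percent):
    """Apply a percentage discount to a price (example function under test)."""
    return price - price * percent / 100


class WeakTest(unittest.TestCase):
    def test_discount_runs(self):
        # Executes the code, so coverage is 100%, but asserts nothing.
        # This test passes even if discount() returns garbage.
        discount(100, 10)


class MeaningfulTest(unittest.TestCase):
    def test_discount_values(self):
        # Same coverage, but these assertions check actual behavior,
        # including edge cases, so a broken implementation fails here.
        self.assertEqual(discount(100, 10), 90)
        self.assertEqual(discount(100, 0), 100)
        self.assertEqual(discount(0, 50), 0)
```

A coverage report can’t tell these two apart, which is exactly the article’s point: the statistic measures what ran, not what was verified.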
As a QA engineer, unit testing has been one of the most difficult testing areas for me to deal with. As a non-developer and significantly poorer programmer than the developers I work with, I just don’t have the skills to review unit tests myself. And because unit tests traditionally fall into the developer’s task list–even though they are testing tasks–it’s been difficult to motivate the developers to think like QA engineers in order to implement unit test reviews or other means of testing the unit tests beyond basic code coverage stats.
It seems like this sort of process improvement should be easier to implement in an agile environment, given the team focus and the softer separations between roles. But so far, I haven’t had much success at fostering interest in delving into the kinds of issues raised in this article.
I’d love to hear how other non-programmers have helped to improve unit testing in their organizations.
In my previous post, I explained that our company’s team in Singapore performs what we call enterprise testing, and I outlined some of the steps we’re taking to help the enterprise testing team to support the agile R&D teams more effectively.
In this post, I’ll share some specific practices that we’re working to implement.
Here at Borland, basic functional testing is the domain of the agile team that develops the functionality, while a dedicated QA team in Singapore is charged with performing what we refer to as enterprise testing–performance and scalability, integration, localization, etc.
This post outlines some of the changes that we’re implementing in regard to the enterprise testing group.
Check out Slick or Slack (SFW)
This was our development group’s entry to Borland’s video contest for Sales Kickoff 2008. I’m waterfall in the video.