Problem-solvers vs task-doers

James Bach posted the following comment to my recent post about context-driven testing:

[Context-driven testing] is not self-evident, except to problem-solvers. To “task-doers” it is self-evident that they should memorize practices and ignore context– because they will say that context doesn’t really change or matter much.

I’ve been mulling that thought over for the last few days. In my jobs as QA lead/manager/architect, I’ve designed and deployed a lot of QA processes over the years: defect tracking, automated testing, test management, etc.
I would rate my success in this work as only so-so: many of the processes I developed were not followed very closely, by very many people, or for very long. This apparent lack of success has troubled me.
But if you view my process-development work in the context of problem-solvers vs. task-doers, maybe those aren’t the right judgment criteria.
I’ve almost always worked in environments that valued problem-solvers (like me) over task-doers (or process-followers), and when I’ve implemented said processes, my coworkers and I were very clear that we would implement as much process as necessary and no more. Nothing bugs problem-solvers like me more than processes that are followed for their own sake, especially processes that do not seem to serve any valuable purpose.
So, within a group of problem-solvers, processes arise, evolve and die based on need; processes don’t tend to live on if they don’t have clear value. In such an environment, then, the appropriate criteria for judging the success of a process would be: did the process serve its intended purpose? And based on the general success of the groups that used the processes that I implemented, I would say my work was fairly successful: the processes that I developed and implemented usually served the need at hand, and then evolved or died based on changing needs.

Context-driven testing

Cem Kaner, James Bach and Bret Pettichord have been developing a concept that they’re calling context-driven testing. Here’s one version of the emerging definition:

Context-driven testers choose their testing objectives, techniques, and deliverables (including test documentation) by looking first to the details of the specific situation, including the desires of the stakeholders who commissioned the testing. The essence of context-driven testing is project-appropriate application of skill and judgment. The Context-Driven School of testing places this approach to testing within a humanistic social and ethical framework.
Ultimately, context-driven testing is about doing the best we can with what we get. Rather than trying to apply “best practices,” we accept that very different practices (even different definitions of common testing terms) will work best under different circumstances.

It’s good that Cem, Bret and James are trying to define this concept, but I had thought that context-driven testing, like risk-based testing, was self-evident. It’s good to have principles to follow, but I would never dream of following a prescribed process without evaluating its appropriateness to the situation and adapting it accordingly.
One example where context-driven testing is necessary is automated testing. I absolutely believe that as much testing as possible should be automated, but developing automated tests takes time and effort, and there are always competing demands on those resources. With functional automation, this often results in a phased automation strategy: start small and expand based on priorities over time, as in the sketch below.
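As a rough illustration of “start small”: a first automation phase might be nothing more than a handful of smoke tests tagged so they can run on their own, with deeper coverage added in later phases as priorities allow. This sketch uses pytest markers; the apply_discount function is a made-up stand-in for real application code.

    import pytest


    def apply_discount(total, percent):
        """Hypothetical stand-in for the application code under test."""
        return round(total * (1 - percent / 100), 2)


    # Phase 1: a few high-priority smoke tests.
    @pytest.mark.smoke
    def test_discount_applies():
        assert apply_discount(100.0, 10) == 90.0


    @pytest.mark.smoke
    def test_zero_discount_is_identity():
        assert apply_discount(100.0, 0) == 100.0


    # Phase 2, added later as priorities allow: edge cases.
    def test_full_discount():
        assert apply_discount(100.0, 100) == 0.0

Running pytest -m smoke executes only the phase-one suite (the smoke marker would need to be registered in pytest.ini to avoid warnings); later phases simply add tests without disturbing the original set.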

Career reflection

I think the most impressive-looking items on my resume are probably my stints as Director of Quality Assurance at two small companies. In reality, however, my current job as QA Architect at Borland is, by far, the best experience that I’ve had.
At the small companies, I was in charge of both architecting and implementing QA processes. At my current job, however, I am responsible for helping other groups within the company implement the processes that I devise. That’s a much more challenging and educational task than implementing my own processes, and it’s also, not coincidentally, why I sought a QA architect position at a larger company.

Exploratory testing as a structured process

In common usage, I think the term ‘exploratory testing’ is often just a dressed-up name for ad hoc testing: lipstick on a pig. But the big thinkers in quality assurance view ET as a structured testing process. Mike Kelly lists some of the skills necessary for performing exploratory testing. Mike also points to the newest version of James and Jonathan Bach’s Exploratory Testing Dynamics [PDF] document, which is very useful.

Gotcha

In a new post, Scott Barber reminds testers that there are often valid business reasons for making decisions that run counter to the tester’s view of what it takes to build quality software:

Most testers I meet simply have not been exposed to the virtually impossible business challenges regularly facing development projects that often lead to decisions that appear completely counter to a commitment to quality when taken out of context. The fact is that there are a huge number of factors influencing a software development project that, at any particular point in the project, may rightly take precedence over an individual tester’s assessment of quality. Given their lack of exposure, it’s no wonder testers seem to habitually take a “my team doesn’t listen to me” point of view.

When I conduct job interviews with QA engineers, I often test the candidate’s awareness of these factors by asking this question: “Can you name a time when you just had to put your foot down with regard to quality? For example, a time when you declared that the software couldn’t ship due to quality concerns?”
It’s a little bit of a trick question. The answer that I hope to hear is: no, it’s not my job to make those decisions; it’s my job to provide risk-assessment data to the decision makers who do have to make those tough calls. Secondarily, if I’m doing my job correctly throughout the dev cycle, there should not be any surprises of this type. If a situation is building that might result in such a confrontation, then I haven’t done my job of monitoring the situation, trying to resolve it, or at the very least keeping management in the loop on the building crisis, so that they can make appropriate contingency plans. There’s nothing management likes less than getting into a crisis with no warning.

What’s the deal with Gen-Y testers?

Down under, Dean Cornish has been having a hard time finding qualified QA engineers, and in his recent blog post, he ponders why that is.
In his post, Dean throws out a lot of possible reasons for this problem, but the end of the post gets to the heart of the matter for him:

Off the top of my head I cannot recall a single university in this country that talks about a career in testing as an equally viable career choice in the same vein as development. Even though in the workplace, I’d argue that testers have an equally as important role as developers. This discrepancy contributes to our lack of growth in mature and capable candidates, leading us to see the same poor candidates going from shop to shop and always somehow getting through the front door.
It is as though testing has become the place for people who fail at being a dev, a system analyst or business analyst or if you can pull a visa and need something where the demand is so great that the quality of the screening is frequently wavered to get “warm bodies” through the door.

Maybe the situation is different in Australia than in Austin, but I’m not sure I see the same dearth of qualified candidates. And as for Dean’s concern about testing not being seen as “an equally viable career choice . . . as development”, as far as I can tell, that’s always been the case. If anything, the situation might be better than it used to be, as the software industry has matured.
I’d love to hear others’ thoughts and experiences.

Conventional software QA engineers and agile

As I learn more about and gain more experience with testing in an agile environment, I’m becoming increasingly concerned about the suitability of conventional QA engineers in agile environments. Since agile methodologies stress that the entire team is responsible for testing and that as much testing as possible be automated, the QA specialist has limited usefulness on an agile team.
I still believe that agile teams need people whose primary interest is ensuring quality. These team members should be primarily responsible for some traditional QA tasks: recommending the types of testing that need to be performed, recommending and/or planning testing strategies, tracking that all the necessary testing is performed, etc. But on an agile team, not all of the test writing and execution itself is necessarily performed by these same team members.
So, what does this mean for conventional QA engineers?
When asked for career advice, I’ve always recommended that QA engineers become as technical as possible: learn programming languages, testing tools, etc. Quality assurance in an agile environment only strengthens that recommendation. I don’t think it’s necessary for QA specialists to become capable of writing production code (though it helps!), but a more technical QA engineer can take on a larger share of the automated test architecture, test writing, and execution, as in the sketch below.
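To make “more technical” concrete: one place a technical QA engineer adds leverage on an agile team is shared test infrastructure that the whole team, developers included, can build on. This is a made-up example using a pytest fixture and an in-memory SQLite database; all names are illustrative.

    import sqlite3

    import pytest


    @pytest.fixture
    def seeded_db():
        """Hand every test an in-memory database loaded with known data."""
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        conn.executemany("INSERT INTO users (name) VALUES (?)",
                         [("alice",), ("bob",)])
        conn.commit()
        yield conn
        conn.close()


    # Any team member can now write data-dependent tests without
    # duplicating setup code.
    def test_user_count(seeded_db):
        count = seeded_db.execute("SELECT COUNT(*) FROM users").fetchone()[0]
        assert count == 2

The point isn’t the fixture itself; it’s that owning this layer lets the QA specialist multiply the testing the rest of the team can do.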
I’m interested to hear others’ thoughts on this.

On the same page

Throughout my career as a QA engineer, I’ve used the following informal question as a basic sanity check: Does my gut tell me that everyone involved is on the same page? Do the developers, the QA engineers, the technical writers all have a common understanding of what they’re developing? Most of my jobs had no particular defined process, so this sanity check was particularly important.
I’ve found this question also to be useful in an agile team, but the dynamics are a little different.


The purpose of testing

On his blog, Andy Pohls tells an interesting story of a customer who refused to deploy code that did not have an automated test. Moreover, the output that the missing test would have covered was an XML file. When the customer was shown how to read the XML that was being passed between systems, he realized that it did not suit the business needs.
But what I found most thought-provoking was one of the comments to the post, written by Michael Bolton:

It’s unusual to hear that the customer learned something and used the information obtained from creating the test. . . This is much closer to my view of what is really important about testing: discovering and revealing information so that people can make informed decisions. Most of the time, we hear about something different: confirming and validating information so that people can feel reassured knowing that last week’s tests are still passing this week. That might be reassuring, but it has enormous potential for self-deception. We need always to ask if our tests are helping us to learn, not just helping us to sleep.

I’ll have to think about how sharing tests with customers can enhance quality.
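To make that thought concrete, here is a sketch of the kind of test the story suggests: a check over an XML message that a customer could actually read and question. The message format and field names are invented for illustration.

    import xml.etree.ElementTree as ET

    # An invented example of a message passed between systems.
    SAMPLE_MESSAGE = """
    <order>
      <customer id="42"/>
      <total currency="USD">199.00</total>
    </order>
    """


    def test_total_carries_a_currency():
        # A business-level assertion a customer can read and challenge:
        # does the message actually say what currency the total is in?
        root = ET.fromstring(SAMPLE_MESSAGE)
        total = root.find("total")
        assert total is not None
        assert total.get("currency") == "USD"
        assert float(total.text) == 199.00

A test written at this level is information for the customer, not just reassurance for the team.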