To my surprise, I’ve never run across this Wikipedia page before: Category: Software testing. Bookmarked it.
This morning, I received the following email (identifying info redacted; I don’t see any purpose in revealing the sender’s product):
Ran across your blog “Agile Testing” while looking for bloggers in the software testing area. I thought I’d tell you about the launch of [redacted]. It’s a flexible [redacted] system that features [redacted]…
For the first time, IT leaders can realize the benefits of [more marketing text redacted]…
[Redacted] provides a simple way for organizations to [redacted]…
We’d love to hear what you think about [redacted] and will be glad to share more information with you, including a demo. A mention in your blog would be great. (emphasis added)
This is actually a type of software that interests me, but what galls me is the intent of this email. The purpose of this email is not to get the word out about this fantastic software to a blogger whose audience might also be interested in it. At best, it’s an attempt to get free publicity, but I rather suspect the actual goal is search engine manipulation: getting a link to their site from mine in order to boost their site’s search engine ranking.
If the marketing drones had made a good faith effort to educate me about their software, I might well have tried it out and posted my (hopefully favorable) review on this blog. With these cheap tactics, however, all they get is my scorn.
Today, an AmeriGas employee who is involved with the rollout of their self-service propane bottle exchange service contacted me regarding my poor experience with the service at my local Home Depot.
He asked me some specific questions about my experience in order to try to troubleshoot the problem.
That’s just so cool.
[Context-driven testing] is not self-evident, except to problem-solvers. To “task-doers” it is self-evident that they should memorize practices and ignore context– because they will say that context doesn’t really change or matter much.
I’ve been mulling that thought over for the last few days. In my jobs as QA lead/manager/architect, I’ve designed and deployed a lot of QA processes over the years: defect tracking, automated testing, test management, etc.
I would rate my success in this work as only so-so: many of the processes I’ve developed were not followed very well, by very many people, or for very long. This apparent lack of success has troubled me.
But if you view my process-development work in the context of problem-solvers vs. task-doers, maybe those aren’t the right judgment criteria.
I’ve almost always worked in environments that valued problem-solvers (like me) over task-doers (or process-followers), and when I’ve implemented said processes, my coworkers and I were very clear that we would implement as much process as necessary and no more. Nothing bugs problem-solvers like me more than processes that are followed for their own sake, especially processes that do not seem to serve any valuable purpose.
So, within a group of problem-solvers, processes arise, evolve and die based on need; processes don’t tend to live on if they don’t have clear value. In such an environment, then, the appropriate criteria for judging the success of a process would be: did the process serve its intended purpose? And based on the general success of the groups that used the processes that I implemented, I would say my work was fairly successful: the processes that I developed and implemented usually served the need at hand, and then evolved or died based on changing needs.
Over the holidays, I noticed that the Home Depot near our house had installed a self-service propane bottle exchange system. It consists of a computer kiosk and a rack of cages for propane bottles.
I watched the demo:
Step #1: You swipe your credit card and make the purchase.
Step #2: The door to an empty cage pops open.
Step #3: You insert your empty bottle in the cage and close the door.
Step #4: The door to a cage containing a full bottle pops open.
Step #5: You take your full bottle and go on your merry way.
As soon as I saw it, I knew I had to try it. One of the big advantages is that you can replace your propane bottle even when the Home Depot store is closed. As a QA engineer, however, I knew better than to try it after hours.
So, last Sunday afternoon, I took my empty propane bottle over to the store and tried it out. It worked great until the end of step #3. After I deposited my empty bottle and closed the cage door, the kiosk screen thanked me for my transaction and did not continue with steps #4 and #5. No error messages. Nothing. I was left standing in the parking lot with a $19.97 debit and no propane.
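From a testing standpoint, what bit me looks like a classic transactionality bug: the kiosk treated the payment as the end of the transaction, so when the dispensing step silently failed, I was left charged with nothing to show for it. Here’s a minimal sketch of the idea, in Python; the `Kiosk` class, its methods, and the flow are entirely my own invention for illustration, not anything from the actual kiosk software. The point is that payment and dispensing should succeed or fail as a unit.

```python
# Hypothetical sketch: treat the five exchange steps as one atomic
# transaction, refunding the charge if any later step fails.

class Kiosk:
    def __init__(self, full_cage_works=True):
        self.full_cage_works = full_cage_works
        self.charges = {}            # charge_id -> amount currently held
        self.next_id = 0

    def charge(self, card, amount):  # step 1: swipe card, take payment
        self.next_id += 1
        self.charges[self.next_id] = amount
        return self.next_id

    def refund(self, charge_id):     # undo a charge
        self.charges[charge_id] = 0.0

    def open_empty_cage(self):       # steps 2-3: customer deposits empty bottle
        pass

    def open_full_cage(self):        # step 4: the step that silently never happened
        if not self.full_cage_works:
            raise RuntimeError("full-bottle cage did not open")


def exchange(kiosk, card, price=19.97):
    charge_id = kiosk.charge(card, price)
    try:
        kiosk.open_empty_cage()
        kiosk.open_full_cage()
        return charge_id             # step 5: customer leaves with a full bottle
    except Exception:
        kiosk.refund(charge_id)      # don't strand the customer in the parking lot
        raise


broken = Kiosk(full_cage_works=False)
try:
    exchange(broken, card="test-card")
except RuntimeError:
    pass
print(sum(broken.charges.values()))  # 0.0 -- the customer is not out $19.97
```

The real kiosk apparently did the opposite: it committed the charge at step 1 and reported success regardless of whether step 4 ever happened.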
I walked up to the service desk in the store and explained what had happened. The service desk employee told me that it had been doing that. He added, in a matter-of-fact tone, “It’s computerized, you know,” as if that explained everything. He grabbed his keys, took me out to a different rack of cages with padlocks on them, gave me a full bottle and said, “Good thing we kept these around.”
As a software developer, I’m proud to see that the products of my industry inspire such great confidence and excitement in people.
Context-driven testers choose their testing objectives, techniques, and deliverables (including test documentation) by looking first to the details of the specific situation, including the desires of the stakeholders who commissioned the testing. The essence of context-driven testing is project-appropriate application of skill and judgment. The Context-Driven School of testing places this approach to testing within a humanistic social and ethical framework.
Ultimately, context-driven testing is about doing the best we can with what we get. Rather than trying to apply “best practices,” we accept that very different practices (even different definitions of common testing terms) will work best under different circumstances.
It’s good that Cem, Bret and James are trying to define this concept, but I thought that context-based testing, like risk-based testing, was self-evident. It’s good to have principles to follow, but I would never dream of following the prescribed process without evaluating its appropriateness to the situation and adapting it appropriately.
One example where context-based testing is necessary is automated testing. I absolutely believe that as much testing as possible should be automated, but developing automated tests takes time and effort, and other work competes for those resources. With functional test automation, this often leads to a phased strategy: start small, then expand based on priorities over time.