Descriptive vs prescriptive testing

Over at his Collaborative Software Testing blog, Jonathan Kohl has an interesting post about Descriptive and Prescriptive Testing, in which he defines the two styles as follows:
A prescriptive style is a preference towards direction (“do this, do that”) while a descriptive style is more reflective (“this is what we did”). Both involve desired outcomes or goals, but one attempts to plan the path to the outcome in more detail in advance, and the other relies on trying to reach the goals with the tools you have at hand, reflecting on what you did and identifying gaps, improving as you go, and moving towards that end goal.
Jonathan also explores the personality types that are drawn to each type of testing, and he relates how he recently incorporated descriptive testing into a prescriptive environment:

For example, I helped a friend out with a testing project a few weeks ago. They directed me to a test plan and classic scripted test cases. . . Within an hour or two of following test cases, I got worried about my mental state and energy levels. I stopped thinking and engaging actively with the application and I felt bored. I just wanted to hurry up and get through the scripted tests I’d signed on to execute and move on. I wanted to use the scripted test cases as lightweight guidance or test ideas to explore the application in far greater detail than what was described in the test cases. I got impatient and I had to work hard to keep my concentration levels up to do adequate testing. I finally wrapped up later that day, found a couple of problems, and emailed my friend my report.

And then he explains how he shifted to a descriptive approach:

The next day, mission fulfilled, I changed gears and used an exploratory testing approach. I created a coverage outline and used the test cases as a source of information to refer to if I got stuck. I also asked for the user manual and release notes. I did a small risk assessment and planned out different testing techniques that might be useful. I grabbed my favorite automated web testing tool and created some test fixtures with it so I could run through hundreds of tests using random data very quickly. That afternoon, I used my lightweight coverage to help guide my testing and found and recorded much more rich information, more bugs, and I had a lot of questions about vague requirements and inconsistencies in the application.
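Jonathan doesn’t say which tool he used, but the fixtures-plus-random-data idea is easy to sketch. Assuming Playwright as the web testing tool, and a hypothetical contact form (the URL, selectors, and data rules below are all made up), it might look something like this:

```typescript
// A sketch only: drive one scripted flow with many randomized inputs.
// Playwright is an assumption (the post doesn't name the tool), and the
// URL, selectors, and data rules are hypothetical.
import { test, expect } from '@playwright/test';

// Tiny random-data helpers; a real suite might use a seeded generator
// so failures are reproducible.
const pick = <T>(xs: T[]): T => xs[Math.floor(Math.random() * xs.length)];
const randomString = (n: number) =>
  Array.from({ length: n }, () => pick([...'abcdefghijklmnopqrstuvwxyz'])).join('');

// A batch of random contacts, some deliberately edge-case-y.
const contacts = Array.from({ length: 200 }, () => ({
  firstName: randomString(1 + Math.floor(Math.random() * 30)),
  lastName: pick(["O'Brien", randomString(10), randomString(255)]),
  zip: pick(['02134', '0', 'ABCDE', randomString(12)]),
}));

for (const [i, c] of contacts.entries()) {
  test(`add contact #${i}`, async ({ page }) => {
    await page.goto('https://example.test/contacts/new'); // hypothetical
    await page.locator('#firstName').fill(c.firstName);
    await page.locator('#lastName').fill(c.lastName);
    await page.locator('#zip').fill(c.zip);
    await page.locator('button[type=submit]').click();
    // Whatever the input, the app should either accept the contact or
    // show a validation message -- never crash or silently drop data.
    await expect(
      page.locator('.success-message, .validation-error').first()
    ).toBeVisible();
  });
}
```

A couple hundred generated cases like these can run in minutes and surface exactly the sort of inconsistencies he describes.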

Like Jonathan, I definitely lean towards the descriptive testing approach. In fact, you might say that the experience that I recently described in my post Just enough process: the checklist was an attempt to balance out prescriptive and descriptive testing.

Team liturgies

Over at the Agile Software Development blog, Janusz Gorycki reminds us that scrum activities–daily stand-up, planning, retrospectives–play an important role beyond the purported goal of each activity:

The quality of the team and the caliber of its members is more important than the efficiency of its processes – nothing controversial about this statement really, this fact has been well documented in some pretty classic books on team management (“Good To Great” by Jim Collins comes to mind). And the old fashioned rituals are crucial in maintaining the team spirit and identity. This is because people happen to be wired in such a way that it makes them feel safe and comfortable if they can devote some time every day to repeatable old habits. And teams become better integrated if they have some common habits that they care to repeat every day – as a team. Liturgies and rituals do matter a lot – and there is no reason for these rituals to be overly efficient. That’s not their purpose to be efficient.

I love that he calls these scrum activities “liturgies.” As a church-goer, I think it’s an apt analogy. I recognize that there is indeed a certain value in just coming together with my fellow parishioners each week and, at the very least, going through the motions together. Though with church, as with work, you need to try to keep the point of the ritual in mind.

Just enough process: the checklist

Following processes is important for ensuring quality, but often processes become an end in themselves. Therefore, one of the stock statements you’ll hear from me is: we need just enough process and no more.
In one company where I worked, we had a couple of developers on our project who were inexperienced at doing UI work. Every time one of them delivered a UI screen, the QA engineer would immediately find numerous UI defects. It would then take quite some time for the QA engineer to document the defects, then more time for fixing and verifying them.
After we’d gone through this cycle a few times, we realized we needed some process to prevent these defects in the first place. The problem was that the UI defects related to assumed requirements. Our requirements stated, for instance (not a real example), that the user needed to be able to specify the following information when adding a contact to the address book: first name, last name, street address, city, state, ZIP, etc. The UI defects that the QA team was finding, however, lay in details at a lower level than our specifications: tab order, various types of form field validation, consistency of error messages, etc.
The obvious solution would have been to make our requirements more detailed. That approach, however, would have taken as much time as the defect cycle itself, or more, and nobody wanted to deal with requirements at that level of detail.
Instead, I noted that the types of data we were working with were common (in fact, I think basic contact info was one of our areas of functionality) or at least well understood by our team members for the domain of our application. The problem lay not so much in the requirements as in the fact that the inexperienced UI developers were not used to thinking about all the little ‘gotchas’ in UI work.
My solution was a UI checklist for each type of UI screen. For a data input form, for instance, it included things like the following (a rough code sketch of the validation items appears after the list):

  • tab order
  • UI widget is appropriate to data type
  • hot keys on field labels
  • data validation: required fields
  • data validation: min and max length
  • data validation: allowed characters
  • etc.
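To make the validation items concrete, here is a minimal sketch of what checking them in code might look like, assuming a simple declarative field spec; the field names, rules, and messages are illustrative, not something we actually used:

```typescript
// A minimal sketch of the data-validation items from the checklist.
// The field spec shape and the ZIP rules below are illustrative
// assumptions, not from the original project.
interface FieldSpec {
  label: string;
  required: boolean;
  minLength: number;
  maxLength: number;
  allowedChars: RegExp; // matches one permitted character
}

const zipField: FieldSpec = {
  label: 'ZIP',
  required: true,
  minLength: 5,
  maxLength: 10,
  allowedChars: /[0-9-]/,
};

// Returns a list of human-readable problems; empty if the value passes.
function validate(spec: FieldSpec, value: string): string[] {
  const problems: string[] = [];
  if (spec.required && value.length === 0) {
    problems.push(`${spec.label} is required`);
    return problems; // no point checking lengths of an empty value
  }
  if (value.length < spec.minLength) {
    problems.push(`${spec.label} must be at least ${spec.minLength} characters`);
  }
  if (value.length > spec.maxLength) {
    problems.push(`${spec.label} must be at most ${spec.maxLength} characters`);
  }
  for (const ch of value) {
    if (!spec.allowedChars.test(ch)) {
      problems.push(`${spec.label} contains an invalid character: "${ch}"`);
      break; // one message per field is enough
    }
  }
  return problems;
}

// e.g. validate(zipField, '021!') ->
//   ['ZIP must be at least 5 characters',
//    'ZIP contains an invalid character: "!"']
```

The point of a declarative spec like this is the same as the checklist’s: it puts all the easy-to-forget details in one place where they are hard to skip.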

I presented the checklists to the developers as follows: we would like to ask you to make a good faith effort to address all of the items on the appropriate checklists before delivering UI screens to QA. We all understand the domain and data well, so I figure that, for example, the QA engineer will agree with roughly 80% of your choices on these details, disagree with 10% of them, and the remaining 10% of the time there will just be a bug.
After that point, the UIs that these developers delivered to QA were noticeably more mature the first time around, and my 80/10/10 example turned out to be roughly correct.
I think that the last part–making it clear that we trusted the developers–was the key to the success of this program. Instead of focusing on the fact that these developers were inexperienced, we gave them some guidelines and showed them that we trusted them to do the right thing–both in following through on their verbal commitment to make a good faith effort to address these details, and in the choices they made in doing so.