Just enough process: the checklist

Following processes is important for ensuring quality, but often processes become an end in themselves. Therefore, one of the stock statements you’ll hear from me is: we need just enough process and no more.
In one company where I worked, we had a couple of developers on our project who were inexperienced at doing UI work. Every time one of them delivered a UI screen, the QA engineer would immediately find numerous UI defects. It would then take quite some time for the QA engineer to document the defects, then more time for fixing and verifying them.
After we’d gone through this a few times, we realized we needed some process in place to try to prevent these defects in the first place. The problem was, the UI defects related to assumed requirements. Our requirements stated, for instance (not a real example), that the user needed to be able to specify the following information when adding a contact to the address book: first name, last name, street address, city, state, ZIP, etc. However, the UI defects that the QA team was finding lay in details at a lower level than our specifications: tab order, various types of form field validation, consistency of error messages, etc.
The obvious solution would have been to make our requirements more detailed. This approach, however, would have taken as much time as, or more than, the defect process was taking, and nobody wanted to deal with that level of requirements.
Instead, I noted that the types of data we were working with were common (actually, I think basic contact info was in fact one of our areas of functionality) or at least well understood by our team members for the domain of our application. The problem lay not so much in the requirements but in the fact that the inexperienced UI developers were not used to thinking about all the little ‘gotchas’ in UI work.
My solution was a UI checklist for each type of UI screen. For a data input form, for instance, it included things like:

  • tab order
  • UI widget is appropriate to data type
  • hot keys on field labels
  • data validation: required fields
  • data validation: min and max length
  • data validation: allowed characters
  • etc.
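To make the checklist items concrete, here is a minimal sketch of the kinds of per-field validation rules the checklist asks developers to think through before delivering a screen. The field definitions (the ZIP code example, the parameter names) are hypothetical illustrations, not taken from the actual project’s requirements:

```python
# A sketch of checklist-style field validation. The rule names mirror
# the checklist items: required fields, min/max length, allowed characters.
import re

def validate_field(value, required=False, min_len=0, max_len=None,
                   allowed=None):
    """Return a list of checklist violations for a single form field."""
    errors = []
    if required and not value:
        errors.append("required field is empty")
    if value:
        if len(value) < min_len:
            errors.append(f"shorter than minimum length {min_len}")
        if max_len is not None and len(value) > max_len:
            errors.append(f"longer than maximum length {max_len}")
        if allowed is not None and not re.fullmatch(allowed, value):
            errors.append("contains characters outside the allowed set")
    return errors

# Hypothetical example: a ZIP code field -- required, exactly 5 digits.
zip_errors = validate_field("1234A", required=True, min_len=5,
                            max_len=5, allowed=r"[0-9]+")
```

The point of the checklist is exactly this enumeration: each bullet becomes a concrete question the developer answers for every field before QA ever sees the screen.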

I presented the checklists to the developers as follows: we would like to ask you to make a good faith effort to address all of the items on the appropriate checklists before delivering UI screens to QA. We all understand the domain and data well, so I figure that, for example, the QA engineer will agree with 80% of your choices on these details, disagree with 10% of them, and the remaining 10% of the time there will just be a bug.
After that point, the UIs that these developers delivered to QA were noticeably more mature first time around, and my 80/10/10 example turned out to be roughly correct.
I think that the last part–making it clear that we trusted the developers–was the key to the success of this program. Instead of focusing on the fact that these developers were inexperienced, we gave them some guidelines and showed them that we trusted them to do the right thing–both in following through on their verbal commitment to make a good faith effort to address these details and in the choices that they made in doing so.

Concerns about unit test quality

In a recent blog post, Nat Pryce expressed a concern that I’ve long had:

I have observed that few programmers are skeptical enough about the code they write. They focus on success cases, ignore edge cases and error cases, try to bang out as many features as possible, and don’t notice that those features are incomplete or will not work in some situations.

And then Nat goes on to explain how test-driven development can help developers improve their thinking about the tests that they write:

As programmers get more experienced with writing automated tests, they get better at thinking about how their code might fail. That’s because they get plenty of opportunity to observe where their tests are insufficient and think about how to avoid such problems in the future.

I don’t disagree with Nat at all, but I am still concerned that it’s a case of trying to fit a round peg into a square hole: the developer may get better at writing tests, but she’s still a developer, not a testing specialist.
This is, of course, one of the reasons why we perform other types of testing in addition to unit tests. But if we want to make sure unit tests are robust, it seems like we need to involve the testing specialist in some way. Having the developer pair with the testing specialist to review unit tests is the most obvious solution to me. Even if the testing specialist’s code-reading skills are weak, the developer can at least describe the tests to the testing specialist. (If the testing specialist has the skills to write unit tests, having him help write the unit tests would be another alternative.)
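As a hypothetical illustration of the kind of gap such a review can expose (the function and tests below are invented for this post, not from any project I’ve discussed): the developer’s test covers the success case, while the boundary and error cases–the things a testing specialist asks about first–go untested:

```python
# Hypothetical example: a developer-written unit test that only covers
# the success case, plus the edge cases a review would likely add.

def parse_port(text):
    """Parse a TCP port number from a string; raise ValueError if invalid."""
    port = int(text)  # raises ValueError for non-numeric input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

# The success-case test the developer wrote:
assert parse_port("8080") == 8080

# Edge cases a testing specialist would ask about: boundaries and junk input.
for bad in ["0", "65536", "-1", "http", ""]:
    try:
        parse_port(bad)
        raise AssertionError(f"expected ValueError for {bad!r}")
    except ValueError:
        pass
```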
I’ve pursued this type of pairing a few times, but it has never worked out for more than a couple of weeks. The primary reason for this failure, in my opinion, is lack of buy-in from the developers. They don’t see the value of having a testing specialist review their unit tests, or unit testing just somehow remains the property of the development club.
Any advice or experiences you can share would be helpful.

User experience annoyances

I use Microsoft SQL Server Management Studio quite frequently in the course of my work. Most of the time, however, I’m logged in as a user who has broad privileges on the instance I’m managing.
Recently, however, I’ve been managing an internal production application that is hosted by IT. Since this application’s database resides on a SQL Server instance that contains multiple production databases, the SQL Server user that I log in as has pretty limited permissions, relating only to the one database that I need to manage.
My assumption is that if my user does not have permission to perform a function in the SQL Server Management application, the user’s access to that function will be limited–UI components that give the user access to said function will be absent or disabled.
The case in point today was creating other SQL Server users. I assumed that my user would not be able to create other users, but I was surprised to find that I could navigate to the Security/Logins node in the Object Explorer and that the ‘New Login…’ command was presented on the context menu.
Being a tester, I just had to try to create a new user–if for no other reason, to ensure that IT gave this user only the necessary permissions. So, I selected the ‘New Login…’ menu item and lo and behold, the ‘New Login’ dialog appeared. I filled out all the necessary information and submitted the form.
Only when I submitted the New Login form did I get an error message informing me that my user does not have permissions to create another user.
Since most actions in the application result in SQL queries to the database(s), I can imagine Microsoft’s reasoning is as follows: permissions are built into the database; let the database do its work; why duplicate database functionality in the UI? From a user experience standpoint, however, this does not make for the most user-friendly application.
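One way an application can avoid this–a sketch of the general pattern, not a claim about how Management Studio is actually built–is to query the user’s effective permissions once up front and enable or disable commands accordingly. (On SQL Server, the check itself could use the built-in HAS_PERMS_BY_NAME function.) The permission names and menu model below are hypothetical:

```python
# Sketch of permission-aware UI gating: disable commands the user
# cannot execute, rather than letting the database reject them after
# the form is filled out and submitted.

class ContextMenu:
    def __init__(self, user_permissions):
        self.user_permissions = set(user_permissions)
        self.items = []

    def add_item(self, label, required_permission):
        # The enabled flag is computed when the menu is built,
        # from permissions queried once at connection time.
        enabled = required_permission in self.user_permissions
        self.items.append({"label": label, "enabled": enabled})

# A user with rights on only one database, not server-level security:
menu = ContextMenu(user_permissions={"ALTER ON my_database"})
menu.add_item("New Login...", required_permission="ALTER ANY LOGIN")
menu.add_item("Back Up Database...", required_permission="ALTER ON my_database")
```

With this pattern, the ‘New Login…’ command would have appeared disabled, and I would never have filled out a form that was doomed from the start.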

Forest for the trees, from a different angle

Today, Elisabeth Hendrickson has a blog post out titled How Much to Automate? Agile Changes the Equation. In this post, she points out the value of timely, reliable automated test results:

But teams that do practice TDD and ATDD wind up with large suites of automated tests as a side effect. (Yes, it’s a side effect. Contrary to what some people think, TDD is a design technique, not a testing technique. We do TDD because it leads to clean, well-factored, malleable, testable code. The automated tests are just a nice fringe benefit.)
Moreover, the resulting set of automated tests is so much more valuable because the test results are typically reliable and trustworthy. When the tests pass, we know it means that the code still meets our expectations. And when the tests fail, we’re genuinely surprised. We know it means there’s something that broke recently and we need to stop and fix it.

Upon reflection, I realize that I may have assumed too much in my recent discussion (see here and here) with a colleague about his company’s automated tests. I explained the cost/benefit of creating and maintaining different types of automated tests, but I did not discuss the cost and benefit of the information that they provide; I made some assumptions in this regard.
If I had it to do over again, I would have asked him about the value that his group receives from having and running the automated UI tests. From there, we could have discussed more explicitly the value they would derive from having a robust set of unit tests and running them regularly.
As it was, I assumed that the relative worth of unit tests was much higher than that of the GUI tests. For reasons that I’m unaware of, that may not be the case; or, more likely, my colleague may not have thought about the value derived from different types of tests, and my exploration of that aspect would have lent additional credibility to my recommendations.

How did we get into this forest?

In my previous entry, I described a conversation I had recently. A colleague asked for my opinion on how to solve a problem with maintaining automated UI tests. After hearing a few details about the situation, I told him that in my opinion, his team had a larger problem than that: they would be better off focusing first on building and running a suite of lower-level tests, such as unit tests. Once that suite was solid, then they should explore other automated testing options, such as UI tests.
The even bigger issue here is: how did they get into this situation in the first place?
I did not talk to my colleague in detail about this, but I mentioned one fact in my previous post that sheds some light on the source of the problem: this small company contracted out their testing. In their case, they used a company that provides a part-time local QA manager and a test team in Ukraine.
By separating the testing so completely from the rest of the development process, they left the test team to address automated testing by the only means available to them: the UI.
To his credit, this colleague and his coworkers realize that the outsourced testing is not suiting their needs, and they are exploring other options for testing. But it seemed clear to me that my colleague still maintained a mental separation between testing and development. He was looking for solutions to their quality problems that could be implemented within the realm of QA/testing, not solutions to more systemic problems–solutions that would involve changing the way his entire team works.

Forest for the trees

I had a conversation recently with a colleague whose small company had contracted out their testing. One of the (several) problems of this arrangement, he said, was that the testers had created automated UI tests, but they were having trouble keeping the tests up to date. The colleague asked me what I thought they should do about it.
I gave him my standard spiel about the cost/benefit of automated testing: as a general rule, UI testing is the least cost-effective automated testing that you can undertake. The main reason is its high maintenance cost; you have to update your tests pretty much every time you change the UI. And in my experience, the UI changes a whole lot more than the underlying layers.
After delivering this sermon, I asked my colleague what other types of automated testing they have. He said that they have a few unit tests that are not run on a regular basis and do not all pass anyway (so, basically, no other automated tests).
Based on this information and other things that my colleague told me, my conclusion was that the automated UI tests were not their real problem. I recommended that they give up on the automated UI tests altogether and focus on developing a comprehensive suite of lower level tests that run and pass on a daily basis, starting with unit tests. Once they have that base well covered, then they should start thinking about automated UI tests.
I got the distinct impression that my solution was not at all what my colleague wanted to hear. He wanted to solve the specific problem of the automated UI tests using their existing QA resources (the contract test team). I told him instead that they actually have a different, and broader, problem and that his company needs a more comprehensive approach to solving it.
Unfortunately, we tend to get so focused on specific problems that we forget to step back and look at the bigger picture. We miss the forest for the trees.

This chaps my ass

This morning, I received the following email (identifying info redacted; I don’t see any purpose in revealing the sender’s product):

Hi Stan,
Ran across your blog “Agile Testing” while looking for bloggers in the software testing area. I thought I’d tell you about the launch of [redacted]. It’s a flexible [redacted] system that features [redacted]…
For the first time, IT leaders can realize the benefits of [more marketing text redacted]…
[Redacted] provides a simple way for organizations to [redacted]…
We’d love to hear what you think about [redacted] and will be glad to share more information with you, including a demo. A mention in your blog would be great. (emphasis added)

This is actually a type of software that interests me, but what galls me is the intent of this email. Its purpose is not to get the word out about this fantastic software to a blogger whose audience might also be interested in it. At best, it’s an attempt to get free publicity, but I rather suspect the actual goal is search engine manipulation: getting a link to their site from another site in order to boost their site’s search engine ranking.
If the marketing drones had made a good faith effort to educate me about their software, I might well have tried it out and posted my review on this blog. With these cheap tactics, however, all they get is my scorn.