Concerns about unit test quality

In a recent blog post, Nat Pryce expressed a concern that I’ve long had:

I have observed that few programmers are skeptical enough about the code they write. They focus on success cases, ignore edge cases and error cases, try to bang out as many features as possible, and don’t notice that those features are incomplete or will not work in some situations.

And then Nat goes on to explain how test-driven development can help developers improve their thinking about the tests that they write:

As programmers get more experienced with writing automated tests, they get better at thinking about how their code might fail. That’s because they get plenty of opportunity to observe where their tests are insufficient and think about how to avoid such problems in the future.

I don’t disagree with Nat at all, but I am still concerned that it’s a case of trying to fit a round peg into a square hole: the developer may get better at writing tests, but she’s still a developer, not a testing specialist.
This is, of course, one of the reasons why we perform other types of testing in addition to unit tests. But if we want to make sure unit tests are robust, it seems like we need to involve the testing specialist in some way, and having the developer pair with the testing specialist to review unit tests is the most obvious solution to me. Even if the testing specialist’s code-reading skills are weak, the developer can at least describe the tests to the specialist. (If the testing specialist has the skills to write unit tests, having him help write them would be another alternative.)
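To make the concern concrete, here is a minimal sketch, in Python with invented names, of the sort of review I have in mind: the developer’s original happy-path test, followed by the edge and error cases a testing specialist might ask about.

    import unittest

    def parse_age(text):
        """Hypothetical function under test: convert user input to an age."""
        value = int(text)  # raises ValueError for non-numeric input
        if value < 0 or value > 150:
            raise ValueError("age out of range")
        return value

    class TestParseAge(unittest.TestCase):
        # The test a developer might stop at: one success case.
        def test_valid_age(self):
            self.assertEqual(parse_age("42"), 42)

        # The cases a testing specialist might probe for in a review.
        def test_negative_age_is_rejected(self):
            with self.assertRaises(ValueError):
                parse_age("-1")

        def test_non_numeric_input_is_rejected(self):
            with self.assertRaises(ValueError):
                parse_age("forty-two")

        def test_boundary_values_are_accepted(self):
            self.assertEqual(parse_age("0"), 0)
            self.assertEqual(parse_age("150"), 150)

    if __name__ == "__main__":
        unittest.main()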
I’ve pursued this type of pairing a few times, but it has never worked out for more than a couple of weeks. The primary reason for this failure, in my opinion, is lack of buy-in from the developers. They don’t see the value of having a testing specialist review their unit tests, or unit testing just somehow remains the property of the development club.
Any advice or experiences you can share would be helpful.

User experience annoyances

I use Microsoft SQL Server Management Studio quite frequently in the course of my work. Most of the time, however, I’m logged in as a user who has broad privileges on the instance I’m managing.
Recently, though, I’ve been managing an internal production application that is hosted by IT. Since this application’s database resides on a SQL Server instance that contains multiple production databases, the SQL Server user that I log in as has pretty limited permissions, relating only to the one database that I need to manage.
My assumption is that if my user does not have permission to perform a function in SQL Server Management Studio, the user’s access to that function will be limited: UI components that expose the function will be absent or disabled.
The case in point today was creating other SQL Server users. I assumed that my user would not be able to create other users, but I was surprised to find that I could navigate to the Security/Logins node in the Object Explorer and that the ‘New Login…’ command was presented on the context menu.
Being a tester, I just had to try to create a new user, if for no other reason than to ensure that IT had given this user only the necessary permissions. So, I selected the ‘New Login…’ menu item and, lo and behold, the ‘New Login’ dialog appeared. I filled out all the necessary information and submitted the form.
Only when I submitted the New Login form did I get an error message informing me that my user did not have permission to create another user.
Since most actions in the application result in SQL queries to the database(s), I can imagine Microsoft’s reasoning is as follows: permissions are built into the database; let the database do its work; why duplicate database functionality in the UI? From a user experience standpoint, however, this does not make for the most user-friendly application.
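For what it’s worth, the information needed to do better is available to any client: SQL Server’s HAS_PERMS_BY_NAME function reports whether the current login holds a given permission. Here is a rough sketch, in Python, of how a management UI could ask up front instead of letting the operation fail; the connection details are made up, and I’m assuming the pyodbc library.

    import pyodbc  # assumed SQL Server client library

    # Hypothetical connection details for the limited-permission login.
    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=myserver;"
        "UID=limited_user;PWD=example_password"
    )

    cursor = conn.cursor()
    # CREATE LOGIN requires the server-level ALTER ANY LOGIN permission;
    # HAS_PERMS_BY_NAME returns 1 if the current login holds it, else 0.
    cursor.execute("SELECT HAS_PERMS_BY_NAME(NULL, NULL, 'ALTER ANY LOGIN')")
    can_create_logins = cursor.fetchone()[0] == 1

    # A UI could hide or disable the 'New Login...' command based on this,
    # rather than presenting a dialog that is doomed to fail on submit.
    print("Enable 'New Login...':", can_create_logins)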

Forest for the trees, from a different angle

Today, Elisabeth Hendrickson has a blog post out titled How Much to Automate? Agile Changes the Equation. In this post, she points out the value of timely, reliable automated test results:

But teams that do practice TDD and ATDD wind up with large suites of automated tests as a side effect. (Yes, it’s a side effect. Contrary to what some people think, TDD is a design technique, not a testing technique. We do TDD because it leads to clean, well-factored, malleable, testable code. The automated tests are just a nice fringe benefit.)
Moreover, the resulting set of automated tests is so much more valuable because the test results are typically reliable and trustworthy. When the tests pass, we know it means that the code still meets our expectations. And when the tests fail, we’re genuinely surprised. We know it means there’s something that broke recently and we need to stop and fix it.

Upon reflection, I realize that I may have assumed too much in my recent discussion (see here and here) with a colleague about his company’s automated tests. I explained the cost/benefit of creating and maintaining different types of automated tests, but I did not discuss the cost and benefit of the information that they provide; I made some assumptions in this regard.
If I had it to do over again, I would have asked him about the value that his group receives from having and running the automated UI tests. From there, we could have discussed more explicitly the value they would derive from having a robust set of unit tests and running them regularly.
As it was, I assumed that the relative worth of the unit tests was much higher than that of the GUI tests. For reasons that I’m unaware of, that may not be the case. Or, more likely, my colleague may not have thought about the value derived from different types of tests, and my exploration of that aspect would have lent additional credibility to my recommendations.

How did we get into this forest?

In my previous entry, I described a conversation I had recently. A colleague asked for my opinion on how to solve a problem with maintaining automated UI tests. After hearing a few details about the situation, I told him that in my opinion, his team had a larger problem than that: they would be better off focusing first on building and running a suite of lower-level tests, such as unit tests. Once that suite was solid, then they should explore other automated testing options, such as UI tests.
The even bigger issue here is: how did they get into this situation in the first place?
I did not talk to my colleague in detail about this, but I mentioned one fact in my previous post that sheds some light on the source of the problem: this small company contracted out their testing. In their case, they used a company that provides a part-time local QA manager and a test team in Ukraine.
By separating the testing so completely from the rest of the development process, they left the test team to address automated testing by the only means available to them: the UI.
To his credit, this colleague and his coworkers realize that the outsourced testing is not meeting their needs, and they are exploring other options for testing. But it seemed clear to me that my colleague still maintained a mental separation between testing and development. He was looking for solutions to their quality problems that could be implemented within the realm of QA/testing, not solutions to the more systemic problems: solutions that would involve changing the way his entire team works.

Forest for the trees

I had a conversation recently with a colleague whose small company had contracted out their testing. One of the (several) problems of this arrangement, he said, was that the testers had created automated UI tests, but they were having trouble keeping the tests up to date. The colleague asked me what I thought they should do about it.
I gave him my standard spiel about the cost/benefit of automated testing: as a general rule, UI testing is the least cost-effective automated testing that you can undertake. The main reason is its high maintenance cost; you have to update your tests pretty much every time you change the UI. And in my experience, the UI changes a whole lot more than the underlying layers.
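To illustrate with a contrived Python sketch (all names invented): here is the same business rule checked at two levels. The unit-level check depends only on the rule itself; a UI-level check of the same rule must also encode page structure, element IDs, and label text, and every one of those details is a reason for the test to break when the UI changes.

    # Hypothetical business rule: members get 10% off orders of $100 or more.
    def discounted_total(subtotal, is_member):
        if is_member and subtotal >= 100:
            return round(subtotal * 0.90, 2)
        return subtotal

    # Unit-level check: breaks only if the business rule itself changes.
    def test_member_discount():
        assert discounted_total(100.00, is_member=True) == 90.00
        assert discounted_total(99.99, is_member=True) == 99.99
        assert discounted_total(100.00, is_member=False) == 100.00

    # A UI-level check of the same rule (illustrative pseudocode for some
    # browser-automation tool) is coupled to screen details as well:
    #
    #   browser.open("/cart")
    #   browser.fill("#subtotal", "100.00")        # breaks if the ID changes
    #   browser.click("Apply member pricing")      # breaks if the label changes
    #   assert browser.text("#total") == "$90.00"  # breaks if the format changes

    if __name__ == "__main__":
        test_member_discount()
        print("unit-level checks passed")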
After delivering this sermon, I asked my colleague what other types of automated testing they have. He said that they have a few unit tests that are not run on a regular basis and do not all pass anyway (so, basically, no other automated tests).
Based on this information and other things that my colleague told me, my conclusion was that the automated UI tests were not their real problem. I recommended that they give up on the automated UI tests altogether and focus on developing a comprehensive suite of lower-level tests that run and pass on a daily basis, starting with unit tests. Once they have that base well covered, then they should start thinking about automated UI tests.
I got the distinct impression that my solution was not at all what my colleague wanted to hear. He wanted to solve the specific problem of the automated UI tests using their existing QA resources (the contract test team). I told him instead that they actually have a different, and broader, problem and that his company needs a more comprehensive approach to solving it.
Unfortunately, we tend to get so focused on specific problems that we forget to step back and look at the bigger picture. We miss the forest for the trees.