In my previous post, Defect severity vs. priority, I used examples to explain the rationale behind deciding when to fix defects and when not to. Given agile’s focus on not allowing defects to go unaddressed, I now see that some readers may have found those examples confusing.
Please note that the earlier post addressed a general quality assurance concept, that the examples were hypothetical, and that it was not agile-specific.
I should write some more blog posts on my experiences with defects in agile environments.
In my recent post, Unnecessary abstraction, I used defect severity as an example. I also mentioned that a more descriptive (less abstract) name for this information would be something like “Customer severity” or “Impact on user.”
In that post, I assumed a specific definition of severity. Throughout my career, I’ve repeatedly dealt with confusion between defect severity and defect priority, so I thought I should document my preferred definitions here.
I define defect severity, as I mentioned above, as the effect on the software user. If severity is a dropdown field in the defect management software, I usually recommend values such as:
- Critical functionality broken, no workaround
- Non-critical functionality broken, or critical with workaround
- Minor functional defect
- Cosmetic or minor usability issue
As I mentioned in my earlier post, the values for this field don’t have to be hierarchical. Who’s to say that ‘Non-critical functionality broken’ is more or less severe than ‘Critical functionality broken, with a workaround’?
Unless new information is discovered regarding a defect (e.g., a workaround is identified), severity should not change.
When putting together a defect tracking process, I suggest that the person who enters the defect be required to provide a severity.
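To make this concrete, here is a minimal sketch of how the rules above might look in code. All class and value names are my own illustrations, not taken from any particular defect tracker: severity is a closed set of values, and the field has no default, so a defect simply can’t be entered without one.

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    """Impact on the user. These are labels, not a strict ranking."""
    CRITICAL_NO_WORKAROUND = "Critical functionality broken, no workaround"
    NONCRITICAL_OR_CRITICAL_WITH_WORKAROUND = (
        "Non-critical functionality broken, or critical with workaround"
    )
    MINOR_FUNCTIONAL = "Minor functional defect"
    COSMETIC = "Cosmetic or minor usability issue"


@dataclass
class Defect:
    title: str
    severity: Severity  # no default: omitting it at entry raises TypeError


# The reporter must supply a severity when entering the defect:
d = Defect(title="Typo on login screen", severity=Severity.COSMETIC)
```

Modeling severity as an enum rather than a free-text field also makes the “values don’t have to be hierarchical” point explicit: nothing in the type claims one value outranks another.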
Defect priority represents the development team’s priority for addressing the defect. It is a risk-management decision based on the technical and business considerations involved in fixing it. To make the term less abstract, I usually propose calling it ‘Development priority’ or something similar.
Priority can be determined only after technical and business considerations related to fixing the defect are identified; therefore the best time to assess priority is after a short examination of the defect, typically during a ‘bug scrub’ attended by both the product owner and technical representatives.
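The lifecycle difference between the two fields can be sketched as follows (again, names are hypothetical): severity is supplied by the reporter at entry and should stay put, while priority starts out unset and is only assigned at the bug scrub.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TrackedDefect:
    # Set by whoever enters the defect; should not change unless new
    # information (e.g. a workaround) is discovered.
    title: str
    severity: str
    # Unset until the bug scrub, where the product owner and technical
    # representatives weigh the business and technical factors.
    priority: Optional[str] = None


d = TrackedDefect("Login screen typo",
                  severity="Cosmetic or minor usability issue")
assert d.priority is None  # entered, but not yet triaged

# Later, at the bug scrub:
d.priority = "High"  # low severity, high priority: prominent typo, trivial fix
```

Keeping priority optional at entry also reinforces the process point: the reporter assesses severity, but only the triage meeting assigns priority.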
Here are some examples I give when explaining severity and priority:
High severity, low priority – Critical impact on user: nuclear missiles are launched by accident. Factor influencing priority: analysis reveals that this defect can only be encountered on the second Tuesday of the first month of the twentieth year of each millennium, and only then if it’s raining and five other failsafes have failed.
Business decision: the likelihood of the user encountering this defect is so low that we don’t feel it’s necessary to fix it. We can mitigate the situation directly with the user.
High severity, low priority – Critical impact on user: when this error is encountered, the application must be killed and restarted, which can take the application off-line for several minutes. Factors influencing priority: (1) analysis reveals that it would take our dev team six months of full-time refactoring work to fix this defect; we’d have to put all other work on hold for that time. (2) Since this is a mission-critical enterprise application, we tell customers to deploy it in a redundant environment that can handle a server going down, planned or unplanned.
Business decision: it’s a better business investment to make customers aware of the issue, how often they’re likely to encounter it, and how to work through an occurrence of it than to devote the time to fixing it.
Low severity, high priority – Minimal user impact: typo. Factors influencing priority: (1) the typo appears prominently on our login screen; it’s not a terribly big deal for existing customers, but it’s the first thing our sales engineers demo to prospective customers, and (2) the effort to fix the typo is minimal.
Decision: fix it for next release and release it as an unofficial hotfix for our field personnel.
Flash can store cookie-like data on your computer. You can manage the privileges and stored data via this web page.
Today, a Netflix company slideshow titled Reference Guide on our Freedom and Responsibility Culture has been making the blog rounds. As I read the slides, I kept thinking that Netflix is really trying to employ agile principles (and some others) company-wide. There are lots of good ideas in there; I highly recommend it.
During my recent job hunt, test automation came up in practically every interview, typically as some broad question like, “So, how would you go about implementing test automation?”
My standard answer is that you generally get the best bang for your buck the deeper in your code you test. As an example, I contrast the maintenance of unit tests (deep end) with that of automated UI tests: you have to update a UI test almost any time you change the UI, but you only have to update a unit test when you change an existing method, and the UI typically changes much more frequently than individual classes. Furthermore, UI changes frequently force you to update a whole string of user actions in your automated tests, whereas unit tests, by definition, are isolated and therefore typically much quicker to update.
This morning, I ran across a new blog post by B. J. Rollison, a.k.a. I.M. Testy, titled, “UI Automation Out of Control,” in which he lists some of the shortcomings of automated UI tests and some ways you should try to test before you resort to automated UI testing. It’s a good read.