Automated GUI testing in agile

Recently, I’ve had several conversations at work about the cost/benefit analysis of performing automated GUI testing (in our case, using Borland SilkTest, of course) in our agile environment.
In general, automated GUI testing is most useful in two situations: when the UI of the application under test (AUT) does not change much, and when there is significant regression testing to be done, that is, when creating and maintaining automated tests takes less time than repeating the manual regression testing.
I’ve been working with an agile team that is in the first few sprints of a new product; its UI is still changing frequently, and it does not yet have much functionality to regression-test.
So we have a resource-allocation dilemma: the team would not benefit much from automated functional testing right now, but in, say, a year, when the product is more mature, automated UI tests would be very valuable. If we wait until then to start automating, though, we’ll never catch up, so we need to start devoting steady effort to automated testing soon.
In an agile environment, it’s difficult to get resource commitments for efforts whose payoff is entirely long-term.
I’d love to hear whether others have faced this same dilemma and how you dealt with it.

3 thoughts on “Automated GUI testing in agile”

  1. Did you ever get any responses to this question outside of this blog?

  2. It is certainly true that if you wait too long to start developing tests, you’ll get so far behind that you’ll likely never catch up. But with a volatile UI, tests written early are quite likely to be discarded at some point.
    Why not start by writing a suite of tests that is “long but not very wide”: tests that exercise most or all of the forms/dialogs/panels in the UI, but in a very narrow way, probably just the “happy path” through the app? By avoiding corner cases and the bells and whistles that are likely to change, you make the tests less brittle (less affected by UI upheaval), while still getting at least the value of a “smoke test” that can run in your CI build and throw up a red flag if something has gone seriously wrong. Then, as the volatility recedes, you can broaden the tests to cover more paths through the app and add things like validation-error checks and other interesting bits of functionality. A rough sketch of what such a happy-path smoke test might look like follows below.
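    To make that concrete, here is a minimal sketch of such a test. Everything in it is hypothetical: the app URL, the element names, and the page titles are invented for illustration, and Selenium WebDriver stands in as the automation tool (the post’s tool, Borland SilkTest, would play the same role).

        # Minimal happy-path smoke test. All URLs and element names are
        # hypothetical; Selenium WebDriver is used purely for illustration.
        from selenium import webdriver
        from selenium.webdriver.common.by import By

        def test_happy_path_smoke():
            driver = webdriver.Chrome()
            try:
                # Log in: the one flow everything else depends on.
                driver.get("http://localhost:8080/login")
                driver.find_element(By.NAME, "username").send_keys("demo")
                driver.find_element(By.NAME, "password").send_keys("demo")
                driver.find_element(By.ID, "login-button").click()

                # Assert only coarse landmarks (page titles), which are far
                # less likely to change than layout or widget details.
                assert "Dashboard" in driver.title

                # Touch each remaining top-level screen once, very shallowly.
                for path in ("/reports", "/settings"):
                    driver.get("http://localhost:8080" + path)
                    assert "Error" not in driver.title
            finally:
                driver.quit()

    Wired into the CI build, a failure here means something fundamental broke; and because it asserts so little, ongoing UI churn rarely invalidates it.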

Comments are closed.