Where’s s/Waldo/Stan?

As you may have noticed (those of you who still visit here), I haven’t posted anything substantive here in several months. Lots of reasons, both personal and professional.
But let’s not dwell on the past. Let me tell you what I’m up to now.
At the beginning of this year, I started as the lead of a newly forming test automation group at Polycom, working on the HDX video conferencing appliances. Although I’m not directly involved in the agile process here at Polycom, our new automation team is playing a crucial role in allowing the development organization to act in a more agile fashion.
Our product line is based on a codebase that has been around for a number of years, and since we develop entire appliances, it’s a pretty complex development environment with traditional applications, OS development, drivers, video and audio codecs, interoperability with other devices (both ours and other manufacturers’), etc.
Each software change goes through three levels of ‘promotions’, as we call them around here. First, it’s checked into the agile team’s codebase. Then, the team’s codebase is periodically promoted to the ‘integration’ code stream, and finally that code stream gets merged periodically into the global codebase.
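To make the flow concrete, here’s a rough sketch of one promotion cycle, modeled as git merges between three streams. This is purely illustrative: the branch names, and git itself, are hypothetical stand-ins, not our actual tooling.

```python
# Illustrative only: one promotion cycle, modeled as git merges between
# three code streams. All branch names are hypothetical stand-ins.
import subprocess

def promote(source: str, target: str) -> None:
    """Promote `source` into `target` by merging it and pushing the result."""
    subprocess.run(["git", "checkout", target], check=True)
    subprocess.run(["git", "merge", "--no-ff", source], check=True)
    subprocess.run(["git", "push", "origin", target], check=True)

# Level 1: the agile team commits to its own stream (day-to-day work).
# Level 2: periodically, the team's stream is promoted to integration.
promote("team/conferencing", "integration")
# Level 3: periodically, integration is merged into the global codebase.
promote("integration", "global")
```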
Currently, promotions to the higher levels take place infrequently; each one is a significant event, and a lot of manual testing has to be done to ensure that the code being promoted hasn’t broken anything.
This process has several problems. First, as alluded to above, a tremendous amount of time is devoted to manual regression testing. Second, regression issues are frequently not identified at the team level, so high-severity bug crises erupt regularly during promotion testing or, if a bug escapes even that, after the promotion takes place. And finally, since promotions are not daily, teams do not pull down the latest shared code frequently. A team can develop against stale shared code for weeks at a time, and by the time it’s ready to promote, so much has changed in the shared codebase that the merge becomes a significant problem.
The solution to these problems is obvious to agilists: continuous integration and automated testing. And in fact, that’s the initiative my group is involved in. Our goal is to have daily builds at all codebase levels and to run a suite of automated tests on each build. Considering the size and complexity of our environment, it’s an ambitious project, but I’ll keep you posted on the progress.
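To give a feel for what that means mechanically, here’s a minimal sketch of the kind of nightly driver we’re aiming for. The build command, stream names, and test-suite path are all hypothetical placeholders.

```python
# A minimal sketch of a nightly driver: build each code stream, then run
# the automated suite against it. Commands and paths are hypothetical.
import subprocess
import sys

STREAMS = ["team/conferencing", "integration", "global"]  # hypothetical names

def build_and_test(stream: str) -> bool:
    """Build one code stream and run the regression suite against the result."""
    build = subprocess.run(["make", f"STREAM={stream}", "image"])
    if build.returncode != 0:
        print(f"{stream}: build FAILED")
        return False
    tests = subprocess.run([sys.executable, "-m", "pytest", "tests/regression"])
    print(f"{stream}: tests {'passed' if tests.returncode == 0 else 'FAILED'}")
    return tests.returncode == 0

if __name__ == "__main__":
    # Build and test every level each night; report overall status via exit code.
    results = [build_and_test(s) for s in STREAMS]
    sys.exit(0 if all(results) else 1)
```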

One thought on “Where’s s/Waldo/Stan?”

  1. Hi Stan,
    We were experiencing the same problems you note in your post.
    In that context, we extended our typical CI life cycle with a tool that adds continuous deployment and continuous black-box testing. Our system is a highly distributed one deployed in the Amazon cloud, which is a great context for a tool environment like this. Basically, the system detects a new build in Hudson (our CI tool), and on a nightly basis it deploys the whole system to the staging environment in the cloud, runs some smoke integration tests, then executes a battery of automated functional tests, and finally sends an email report with the complete set of results. Executing the tests and validating the results is not that simple, given our system’s domain and the distribution of our components; it takes a lot of design and implementation time and effort. But automating the regression tests is worth it, because it frees our resources, effort, and time for designing and running tests that are specific to the current release candidate. (A rough sketch of this nightly cycle appears below.)
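
A rough sketch of the nightly cycle described in this comment, assuming Hudson’s JSON API (which Hudson/Jenkins really exposes) but with hypothetical deploy and test hooks:

```python
# Sketch of the nightly cycle: find the latest Hudson build, deploy it to
# staging, run smoke tests, then functional tests, and email a report.
# The job URL, addresses, and the three hook functions are hypothetical.
import json
import smtplib
import urllib.request
from email.message import EmailMessage

HUDSON_JOB = "http://hudson.example.com/job/nightly-build"  # hypothetical

def latest_build_number() -> int:
    """Ask Hudson for the number of its last successful build (real API)."""
    url = f"{HUDSON_JOB}/lastSuccessfulBuild/api/json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["number"]

def deploy_to_staging(build: int) -> None:
    """Hypothetical hook: bring up the whole system in the cloud staging env."""
    ...

def run_smoke_tests() -> bool:
    """Hypothetical hook: quick end-to-end checks after deployment."""
    return True

def run_functional_tests() -> str:
    """Hypothetical hook: the full battery of automated functional tests."""
    return "all functional tests passed"

def email_report(subject: str, body: str) -> None:
    """Send the nightly results to the team."""
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = subject, "ci@example.com", "team@example.com"
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

def nightly() -> None:
    build = latest_build_number()
    deploy_to_staging(build)
    if run_smoke_tests():
        results = run_functional_tests()
    else:
        results = "smoke tests failed; functional suite skipped"
    email_report(f"Nightly staging results for build {build}", results)

if __name__ == "__main__":
    nightly()
```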
