SilkTest’s replacement for the Exists() method

I’ve recently started a new SilkTest project testing a web application, using SilkTest’s new open agent and the 4Test scripting language. This post covers an aspect of what I’ve learned.
In the SilkTest ‘classic’ agent, you used the ‘Exists()’ method to test whether an object exists in the application under test, e.g.:

if (Page.Object1.Object2.Object3.Exists())
    // do something...

With the open agent’s dynamic object recognition, the Find() method is what you need instead, but it took me some research to figure out how to use Find() in an if statement. Here’s a test:

if (Desktop.Find("//BrowserApplication//BrowserWindow//INPUT[@id='sysname']", {5, FALSE}) != NULL)
    // do something

You’ll notice that I added an optional argument: {5, FALSE}. These two values constitute a FINDOPTIONS data type record. The first one is the timeout. The second value is the important one for our purposes: it “determines whether the Find method throws an E_WINDOW_NOT_FOUND exception if no object is found or NULL is returned if no object is found.”
So, you set that second value to FALSE and then test whether the Find() method returns NULL. If it doesn’t, the object exists.
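If you do this test in a lot of places, it’s handy to wrap the pattern in a helper function. Here’s a minimal sketch; the name ObjectExists and the hard-coded timeout of 5 are my own choices for illustration, not anything from the SilkTest library:

// Hypothetical helper: mimics the classic Exists() on top of Find()
BOOLEAN ObjectExists (STRING sLocator)
    // {5, FALSE}: wait up to the timeout, and return NULL rather than
    // throwing E_WINDOW_NOT_FOUND if no object matches the locator
    WINDOW wFound = Desktop.Find (sLocator, {5, FALSE})
    return wFound != NULL

The if statement above then collapses to if (ObjectExists("//BrowserApplication//BrowserWindow//INPUT[@id='sysname']")).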

Borland SilkTest’s new open agent

I’ve been a user of Borland SilkTest off and on since the late 1990s. After not having used it for several years, I picked it up again when I worked at Borland from 2006 to 2009. Since it was our company’s own tool, we intended to use it to automate testing of the web UI application I was working on. However, we faced some significant challenges.
The first was the well-known issue of window declarations. Given the depth of object nesting in our web-based UI, maintaining the window declarations was an onerous task. The other problem was that the agent and recorder simply didn’t interact very well with our AJAX-y UI (we were using ExtJS for the UI).
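For those who never worked with the classic agent: a window declaration maps a logical name to a recognition ‘tag’ for every object you want to touch, nested to mirror the application. Something along these lines, with the names invented for illustration:

window BrowserChild MainPage
    tag "Main Page"
    HtmlTable ResultsTable
        tag "#1"
    HtmlPushButton SubmitButton
        tag "Submit"

Multiply that by the depth of our pages and the churn of an AJAX UI, and you can see why keeping the declarations current became a job in itself.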
Since that time, Borland has released the new ‘open agent’ for SilkTest. I began using the open agent right before I left Borland and I’m creating a new project with it here at Polycom. I’m happy to say that the SilkTest dev team has done a really good job of overcoming the previous shortcomings of the ‘classic’ agent.
The biggest problem I’m facing now with my SilkTest open agent automation is access to information. The user documentation for the open agent is not as mature as the documentation for the classic agent, and as a result, I’ve opened numerous support tickets with Micro Focus to figure out how to do things.
I figured I should use my blog to share some of the things I’ve learned about the open agent and its 4Test implementation. Stay tuned…

Testing and Toyota

Testing rock star James Bach has published several good blog posts about the Toyota braking problems: Advice to Lawyers Suing Toyota, Toyota Story Analysis, CNN Believes Whatever Computers Say.
The following passage from the ‘Advice’ post struck me: “‘Extensive testing’ has no fixed meaning. To management, and to anyone not versed in testing, ALL testing LOOKS extensive. This is because testing bores the hell out of most people, and even a little of it seems like a lot.”
That’s very true, except when a bug slips through and gets caught by users.
That’s when it’s so much fun to remind management that they chose the amount of testing that let this bug slip past. Test lead to management: Remember way back when, when I presented you with some options regarding testing? If we have X amount of time, we’ll get Y amount of testing done, with this prioritization of the work and these associated risks; if we have X+A amount of time, we can get Y+B amount of testing done, with a different set of risks.

Where’s s/Waldo/Stan?

As you may have noticed (those of you who still visit here), I haven’t posted anything substantive here in several months. Lots of reasons both personal and professional.
But let’s not dwell on the past. Let me tell you what I’m up to now.
At the beginning of this year, I started as the lead of a newly forming test automation group at Polycom, working on the HDX video conferencing appliances. Although I’m not directly involved in the agile process here at Polycom, our new automation team is playing a crucial role in allowing the development organization to act in a more agile fashion.
Our product line is based on a codebase that has been around for a number of years, and since we develop entire appliances, it’s a pretty complex development environment with traditional applications, OS development, drivers, video and audio codecs, interoperability with other devices (both ours and other manufacturers’), etc.
Each software change goes through three levels of ‘promotions’, as we call them around here. First, it’s checked into the agile team’s codebase. Then, the team’s codebase is periodically promoted to the ‘integration’ code stream, and finally that codebase gets merged periodically into the global codebase.
Currently, promotions to the highest levels take place infrequently; each one is a significant event, and a lot of manual testing has to be done to ensure that the code being promoted hasn’t broken anything.
This process has several problems. First, as alluded to above, a tremendous amount of time is devoted to manual regression testing. Second, regression issues are often not caught at the team level, so high-severity bug crises erupt during promotion testing or, if a bug escapes even that, after the promotion takes place. And finally, since promotions are not daily, teams do not pull down the latest shared code very often. A team may develop against stale shared code for weeks at a time, and by the time it’s ready to promote, so much has changed in the shared codebase that significant merge problems result.
The solution to these problems is obvious to agilists: continuous integration and automated testing. And in fact, that’s the initiative my group is involved in. Our goal is to have daily builds at all codebase levels and to run a suite of automated tests on each build. Considering the size and complexity of our environment, it’s an ambitious project, but I’ll keep you posted on the progress.