Agile Testing

by Stan Taylor

“Just Good Enough”

by Stan on 2011/08/10, one comment

One of the automation engineers on our team is extremely thorough. When she does code reviews, she sends back lengthy emails, and she provides a lot of good information regarding coding practices. Her devotion to detail is a real asset to the team. However, she is getting burned out on code reviews and sometimes I think her time could be better spent on her own work.
As a team lead, I struggle with this type of team member. She’s doing outstanding work, and almost every point she makes is technically correct or reflects a good practice. I can’t very well tell her she’s not doing a good job.
My solution is to realize that she has a different viewpoint from mine. Hers is technical: from a technical point of view she’s almost always right. But I have to balance the technical viewpoint with the business viewpoint. While what she is doing is technically right, from a business viewpoint, it may not be the best use of her time. From a business viewpoint, sometimes the right thing is consciously to let some things slide.
On this team, we’re constantly refining our coding standards and practices. Lately, I’ve introduced the idea of ‘Just good enough.’ This is short-hand for the business viewpoint, a way of balancing the technically correct decisions with the business realities.
A lot of software engineers are happy doing their coding and letting me deal with the business issues. This, however, is one instance where the engineers have to think about the business perspective as well.

How to market yourself

by Stan on 2011/06/08, no comments

I really enjoy helping QA engineers with their careers, but if you’re a stranger asking for help, how you ask makes all the difference.
Back in 2006, I received this email:

Subject: QA in Austin
I am a QA professional in Minneapolis, and I may be moving to Austin in the next few months. I found your resume and web site through Google.
You sound like a pretty interesting, friendly guy based on your website. I’m hoping that you may be able to let me know of some people in Austin who may be hiring for senior software QA positions. I’d also be interested in learning about any professional quality assurance organizations in Austin. I’m currently a member of one in Minneapolis:
I’m not sure what the general salary range in Austin is compared to Minneapolis. I have a feeling that I may need to adjust my expectations downward.
I know that this request is out of the blue, but I would appreciate any time you could give me.

I happily provided him extensive information about Austin and the job market. In our subsequent email correspondence, I connected him with some local recruiters and other QA professionals who I thought might be able to help him.
When he later moved to Austin, he invited me to lunch to thank me for the help. We subsequently became good friends and good professional colleagues.
In contrast, in 2009, I received another request for help from a stranger:

Subject: Can you help me to find a job?
My resume is attached.

Here’s the rest of the correspondence between me and the person who sent the second email above.
My response:

I see from your resume that you’re a QA engineer. I actually help a lot of local QA engineers and others to find jobs, but I might suggest that an email with the entire content “Can you help me to find a job? My resume is attached” is not a great introduction to a stranger who might be in a position to help you.

The inquirer’s response:

I am sorry, that I have not given an introduction.
My name is [redacted], have MS in Mathematics and Diploma in Computer Science I have 6 years of QA experience from Dell and Borland. I also have CSTP certification from IIST.
My resume is attached for your ready reference. Can you please help me to find a job? I appericiate your great help. I found your email and resume when I googled under SQA.
I look forward to hear back from you soon.

A little better, but not much. Me:

Here’s the best help I can give you at this point…
When networking, especially with strangers, you need to do your homework, and to use the info that you uncover to try to make as personal a contact as possible. You’re selling yourself, and in the process showing the other person that you’re thorough, thoughtful, etc.
Your first email to me, and your second one, to a large extent, was like someone coming to my door and just saying, “Hi, I’m selling X. Do you want some?” I’ll just shut the door in that person’s face. That’s why those f***ing door-to-door magazine subscription scammer kids give you some story about how they’ll win a scholarship or some such shit if they sell enough subscriptions; they don’t just come to the door and ask if you want a subscription.
If I were you, I would have written something like this:

Hi Stan,
I see that you have an extensive history of QA in Austin and that you’ve recently worked at Borland. I also noticed from your resume that you have just taken a new job. How was the job hunt? What do you think about the local job market for QA?
My name is X and I am also a QA engineer here in Austin, and in fact, I also once worked for Borland. I am also looking for a new job, and I was wondering if you could offer any advice? [Then, invent some specific question that I might have some insight on, such as] In particular, I was wondering what automated testing tools are in greatest demand right now?
I’d appreciate any insight you can share into the job search in Austin. Please feel free to email me back or call me at xxx.
Regards, X

Good luck on your job hunt. If I can provide any other specific help, let me know.

Status update

by Stan on 2011/01/18, no comments

As you may have noticed, I haven’t been posting much to this blog in the last year or so. I have had a lot of things going on in my life that have kept my attention from this blog.
So, what have I been up to? In January 2010, I started work as a test automation lead at Polycom, working with the HDX video endpoints group. The video endpoints products have been around for a number of years, and several attempts have been made over that time to automate some of the user interfaces (not just graphical UIs), but most of these efforts were started by individual developers or QA engineers to meet an individual’s or small group’s needs at a specific time. As a result, these automation projects were generally not used very widely within the group, or for very long.
In 2009, the company decided to solve this problem by forming a dedicated test automation group, and that’s where I came in. In the past year, our automation group has had several successes: we’ve automated about half of the manual tests for one interface, and we’ve started a solid ongoing automation project for a newly released external input device. We’ve also defined a new test approach for one of the other existing UIs (though we’re not actively developing tests for it at present), and we’re working closely with the architecture team to make automated testability a priority in some new development.
In addition to the automation work, I’ve been playing an instrumental role in QA tooling and reporting.
All in all, it’s been an exciting year, and I’m starting on a second year that looks to be equally exciting. I plan to discuss some of our testing efforts in more detail here in the future.

Scammers HCI International are doing business under other names

by Stan on 2010/09/22, no comments

NOTE: The original version of this post contained much stronger accusatory language. In October 2010, I was contacted by someone who claimed to be associated with the companies in this post. He was very upset at my accusation that the companies were running a scam. While I have no intention of simply buckling to his pressure, I re-read the post and realized that my accusations were not based on my direct experience. Although all the information I’ve gathered leads me to believe strongly that these people are misrepresenting themselves to scam money out of desperate job hunters, I have no direct evidence of it. Therefore, I have changed the language in this post to reflect that distinction.

When I was job hunting in 2009, I had a bad experience with HCI International which I documented on this blog post.
This week, their name came up in a discussion thread in the Austin High-Tech LinkedIn group. Several people pointed out that the same people who run HCI International are running the same questionable business under several names. This Better Business Bureau profile shows that the parent company is called MCW, Inc. and that it does business as HCI International, THE, and in San Antonio as STC International.
Here’s the relevant info from the BBB profile:

Business Contact and Profile for MCW, Inc
Name: MCW, Inc
Phone: (972) 818-5420
Fax: (972) 818-5429
Address: 515 Congress Ave., Suite 2260
Austin, TX 78701
Original Business Start Date: August 2001
Principal: Mr. Ian McClure, President
Customer Contact: Mr. Joe Johnson, Executive Vice President - (512) 474-9466
Type of Business: Career & Outplacement Counseling, Employment Agencies, Employment Counseling, Executive Search Consultants, Personnel Consultants
BBB Accreditation: MCW, Inc is not a BBB Accredited business.
Additional DBA Names:
STC International
HCI International
Additional Locations and Phone Numbers
Additional Addresses
515 Congress Ave
Austin, TX 78701
17950 Preston Road Suite 1070
Dallas, TX 75252
Fax: (972) 733-1601
8200 IH 10 W # 720
San Antonio, TX 78230
Tel: (210) 979-7726
Fax: (210) 979-0971
100 Congress Ave # 760
Austin, TX 78701
Tel: (512) 474-9466
Fax: (512) 474-9491
Additional Phone Numbers
Tel: (512) 476-2333

Awesome programming jargon

by Stan on 2010/05/11, no comments

My favorites from here:
Bugfoot – A bug that isn’t reproducible and has been sighted by only one person.
Hindenbug – A catastrophic data-destroying bug. Oh, the humanity!
Shrug Report – A bug report with no error message or “how to reproduce” steps and only a vague description of the problem. Usually contains the phrase “doesn’t work.”
Smug Report – A bug report submitted by a user who thinks he knows a lot more about the system’s design than he really does. Filled with irrelevant technical details and one or more suggestions (always wrong) about what he thinks is causing the problem and how we should fix it.

SilkTest’s replacement for the Exists() method

by Stan on 2010/03/31, one comment

I’ve recently started a new SilkTest project testing a web application, using SilkTest’s new open agent and the 4Test scripting language. This post covers an aspect of what I’ve learned.
In the SilkTest ‘classic’ agent, you used the ‘Exists()’ method to test whether an object exists in the application under test, e.g.,:

if (Page.Object1.Object2.Object3.Exists())
// do something...

With the open agent’s dynamic object recognition, you use the Find() method instead, but it took me some research to figure out how to use Find() in an if statement. Here’s a test:

if (Desktop.Find("//BrowserApplication//BrowserWindow//INPUT[@id='sysname']",{5, false}) != NULL)
// do something

You’ll notice that I added an optional argument: {5, false}. These two values constitute a FINDOPTIONS record. The first is the timeout. The second is the important one for our purposes: it “determines whether the Find method throws an E_WINDOW_NOT_FOUND exception if no object is found or NULL is returned if no object is found.”
So, you set that second value to FALSE and then test whether Find() returns NULL. If it doesn’t, the object exists.
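
If you check for objects like this in many places, you could wrap the pattern in a small helper. This is only a sketch; the function name ObjExists is my own invention, not part of SilkTest:

BOOLEAN ObjExists (STRING sLocator, NUMBER nTimeout)
	// Returns TRUE if the object identified by the locator exists.
	// false in the FINDOPTIONS record suppresses the exception,
	// so a missing object yields NULL instead of E_WINDOW_NOT_FOUND.
	return (Desktop.Find (sLocator, {nTimeout, false}) != NULL)

// Usage:
if (ObjExists ("//BrowserApplication//BrowserWindow//INPUT[@id='sysname']", 5))
	// do something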

Borland SilkTest’s new open agent

by Stan on 2010/03/31, no comments

I’ve been a user of Borland SilkTest off and on since the late 1990s. After not having used it for several years, I picked it up again when I worked at Borland 2006-2009. Since it was our company’s own tool, we intended to use it to automate testing of the web UI application I was working on. However, we faced some significant challenges.
The first problem was the well-known problem with window declarations. Due to the depth of objects in our web-based UI, maintenance of the window declarations was an onerous task. The other problem was that the agent and recorder simply didn’t interact very well with our AJAX-y UI (we were using ExtJS for the UI).
Since that time, Borland has released the new ‘open agent’ for SilkTest. I began using the open agent right before I left Borland and I’m creating a new project with it here at Polycom. I’m happy to say that the SilkTest dev team has done a really good job of overcoming the previous shortcomings of the ‘classic’ agent.
The biggest problem I’m facing now with my SilkTest open agent automation is access to information. The user documentation for the open agent is not as mature as the documentation for the classic agent, and as a result, I’ve opened numerous support tickets with Micro Focus to figure out how to do things.
I figured I should use my blog to share some of the things I’ve learned about the open agent and its 4Test implementation. Stay tuned…

Testing and Toyota

by Stan on 2010/03/21, no comments

Testing rock star James Bach has published several good blog posts about the Toyota braking problems: Advice to Lawyers Suing Toyota, Toyota Story Analysis, CNN Believes Whatever Computers Say.
The following passage from the ‘Advice’ post struck me:

“Extensive testing” has no fixed meaning. To management, and to anyone not versed in testing, ALL testing LOOKS extensive. This is because testing bores the hell out of most people, and even a little of it seems like a lot.

That’s very true except when a bug slips through and gets caught by users.
That’s when it’s so much fun to remind management that they chose the amount of testing that let this bug slip past. Test lead to management: remember way back when, when I presented you with some options regarding testing? If we have X amount of time, we’ll get Y amount of testing done; here’s how we’ll prioritize the work, and here are the associated risks. Or, if we have X+A amount of time, we can get Y+B amount of testing done, with these other risks.

Where’s s/Waldo/Stan?

by Stan on 2010/03/05, one comment

As you may have noticed (those of you who still visit here), I haven’t posted anything substantive here in several months. Lots of reasons both personal and professional.
But let’s not dwell on the past. Let me tell you what I’m up to now.
At the first of this year, I started as the lead of a newly-forming test automation group at Polycom, working on the HDX video conferencing appliances. Although I’m not directly involved in the agile process here at Polycom, our new automation team is playing a crucial role in allowing the development organization to act in a more agile fashion.
Our product line is based on a codebase that has been around for a number of years, and since we develop entire appliances, it’s a pretty complex development environment with traditional applications, OS development, drivers, video and audio codecs, interoperability with other devices (both ours and other manufacturers’), etc.
Each software change goes through three levels of ‘promotions’, as we call it around here. First, it’s checked into the agile team’s codebase. Then, the team’s codebase is periodically promoted to the ‘integration’ code stream, and finally that codebase gets merged periodically into the global codebase.
Currently, promotions to the highest levels take place infrequently; these promotions are a significant event, and a lot of manual testing has to be done to ensure that the code to be promoted hasn’t broken anything.
This process has several problems. First, as alluded to above, a tremendous amount of time is devoted to manual regression testing. Second, regression issues are frequently not identified at the team level, and there are frequent high severity bug crises during promotions testing or, if a bug escapes through that, after the promotion takes place. And finally, since promotions are not daily, teams do not pull down the latest shared code frequently. So, a team develops against stale shared code for up to weeks at a time, and when they are ready to promote their code, a lot has changed in the shared codebase, leading to significant code merge problems.
The solution to these problems is obvious to agilists: continuous integration and automated testing. And in fact, that’s the initiative my group is involved in. Our goal is to have daily builds at all codebase levels and to run a suite of automated tests on each build. Considering the size and complexity of our environment, it’s an ambitious project, but I’ll keep you posted on the progress.

Documenting code changes with defect reports

by Stan on 2010/01/20, no comments

Today, Rafe Colburn listed four reasons to file the bugs found in code reviews. A commenter points out that defect reports aren’t the only way of communicating about changes to the code:

I guess it depends on the local culture, but in my experience, developers only look at a bug report if it’s assigned to them. The revision control system is a better way to see what’s happened recently.

In my ideal system, I would take things a little further: every commit must have one or more work items (requirements, defect reports) associated with it and an indication of whether each is in progress or completed. The argument for this is pretty simple: if you’re not implementing a requirement or fixing a bug, then why the heck are you changing code?
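A rule like this can be enforced mechanically. Here’s a minimal sketch of how it might look as a Git commit-msg hook; the work-item ID formats (WI-n, BUG-n) are my own assumptions, not any particular tracker’s, so adapt the pattern to whatever your system uses:

```python
#!/usr/bin/env python3
# Sketch of a Git commit-msg hook that rejects commits whose message
# does not reference at least one work item.
import re
import sys

# Assumed ID formats; substitute your tracker's conventions.
WORK_ITEM = re.compile(r"\b(WI|BUG)-\d+\b")

def has_work_item(message: str) -> bool:
    """True if the commit message references at least one work item."""
    return bool(WORK_ITEM.search(message))

def main(msg_path: str) -> int:
    with open(msg_path) as f:
        if has_work_item(f.read()):
            return 0
    sys.stderr.write("Commit rejected: cite a work item (e.g. WI-1234).\n")
    return 1

# Git invokes the hook with the path to the commit message file:
# if __name__ == "__main__":
#     sys.exit(main(sys.argv[1]))
```

A hook only covers the “associated work item” half of the policy; the in-progress/completed status would still live in the tracker or build system.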
Additionally, the build system should display the commits in the build, the work items associated with each commit as well as a list of the changed files and an easy way to view file diffs for changes in each commit.
As a QA engineer, my need to see completed work items is obvious. However, the list of changed files and the diffs provide a different type of equally useful data: an easy way to familiarize myself with the code, input for deciding how to test each change, and openings for discussions with the programmers about their code.
When I describe this system to programmers, their first thought is often that it requires a lot of red tape and documentation. I have worked with a system like this only once in my career, and in that situation the programmers did not find it onerous. It’s true that they had to file defect reports for bugs they found and fixed themselves, but we relaxed our defect report standards in such cases: they didn’t have to fill out severity, steps to reproduce, etc., so filing a report took very little time. We decided that a minimal ‘placeholder’ defect report was good enough in many such cases if it bought buy-in from the developers. Besides, as mentioned above, the reporting in the build system was a backup source of information about code changes.