It’s true that an automation framework is like a puzzle piece that brings a testing practice together, an engine that makes a car zoom, and a conveyor that transforms system analysts into automated testers. But do some frameworks do this better than others? Is it possible that your automation engineers are spending a tremendous amount of time maintaining legacy solutions? Even more important, can regaining that time increase your return on investment? To truly understand the possibilities, we need to recognize that automated testing, like any technology, is an evolving practice. Over 15 years of working at large organizations has led me to believe that test managers fall into one of three camps in how they perceive automation. These perceptions correlate nicely with the stages of automation’s evolution. The stages are cyclical and can be seen both inside individual institutions and across the testing industry as a whole. Typically, the debate goes something like this:
Camp 1: “It never works as advertised and is not worth pursuing.” This is the result of the ‘record & playback’ stage, which has commonly been used as a sales pitch to demonstrate how easily a person can produce a test case with the vendor’s product. With a simple example, a salesperson can dazzle the eye as keystrokes and mouse clicks are automatically recorded into a programming language and then played back at the click of a button. Yet it seems like only minutes after purchasing the product, the claims for this wonderful technique turn into classic snake oil. Hours pile up as test teams struggle to maintain endless piles of recorded test cases. Test managers are left with a mistrust of automation and harshly conclude that it’s not worth the effort. I would also argue that the latest claims from tool vendors offering scriptless automation have reintroduced the same drawbacks as record & playback.
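To illustrate why recorded scripts age so badly, here is a minimal sketch of what a tool typically generates once it translates clicks and keystrokes into code. It uses Selenium’s Python bindings; the URL, element IDs, and credentials are hypothetical. Every locator and value is hard-coded exactly as captured, so any change to the UI or the test data means re-recording or hand-editing the script.

```python
# Sketch of a "recorded" test: every locator and value is hard-coded as
# captured, with no reuse or abstraction. URL, element IDs, and credentials
# are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")

# Each recorded keystroke and click becomes one literal statement.
driver.find_element(By.ID, "username").send_keys("jsmith")
driver.find_element(By.ID, "password").send_keys("P@ssw0rd1")
driver.find_element(By.ID, "loginButton").click()

# The check is also tied to one exact string on one exact page.
assert "Welcome" in driver.page_source

driver.quit()
```

Multiply this by hundreds of recorded cases and the maintenance burden described above becomes obvious.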
Camp 2: “It will work, but with tons of programming and resources.” This is the result of the ‘structured programming’ stage, driven by test managers who didn’t give up on the promise of automatic test execution. They reasoned that investing more by hiring programmers would bring the scripting challenges under control. Although an improvement on record & playback, the cost, the time consumed, and the difficulty of collaborating with other script developers become glaring weaknesses. Test managers realize that the resources needed for meaningful automation will chew up their budget and conclude that perhaps it’s best to find another solution. Unfortunately, that other solution usually ends up being manual testing.
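As a rough sketch of this stage, the test below is hand-written rather than recorded: the steps are factored into functions, which is an improvement, but the helpers live inside a single script, so every author re-implements them and the effort multiplies across the suite. Again, the URL, locators, and data are hypothetical.

```python
# Hand-coded script from the "structured programming" stage: steps are
# factored into functions, but the helpers live only in this file, so other
# script authors copy and re-implement them. URL and locators are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By


def open_login_page(driver):
    driver.get("https://example.com/login")


def log_in(driver, user, password):
    driver.find_element(By.ID, "username").send_keys(user)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "loginButton").click()


def test_standard_user_can_log_in():
    driver = webdriver.Chrome()
    try:
        open_login_page(driver)
        log_in(driver, "jsmith", "P@ssw0rd1")
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()
```

Better than a recorded script, but it still takes a programmer to write and maintain every file like this.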
Camp 3: “You think your framework is better than mine?” This is the result of the ‘framework’ stage, reached by those who expand on structured programming to build robustness into their programming efforts. This stage produces the best results by modularizing code into reusable functions and components and by parameterizing test data. User-friendliness is an important characteristic, so the framework can be handed off to system analysts and reduce the need for expensive programmers. The shortcoming is that the standard for success has been simply to claim that an implementation has reached the framework level. The measures of reusability, maintainability, and user-friendliness vary greatly and are subjective at best. Passionate design debates have spawned conversations such as “You think your automation framework is better than mine?” Unfortunately, this has diluted the viability of good frameworks.
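As a hedged sketch of what “framework level” usually means in practice: the page interactions are factored into a reusable component and the test data is parameterized, so an analyst can add a new case by adding a data row rather than writing new code. The class name, locators, and credentials are hypothetical; pytest’s fixtures and parametrization handle the data-driven part.

```python
# Sketch of the "framework" stage: page interactions live in a reusable
# class, and test data is parameterized so new cases are added as data rows.
# URL, locators, and credentials are hypothetical.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """Reusable component encapsulating the login screen's locators and actions."""

    URL = "https://example.com/login"

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)

    def log_in(self, user, password):
        self.driver.find_element(By.ID, "username").send_keys(user)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "loginButton").click()

    def banner_text(self):
        return self.driver.find_element(By.ID, "banner").text


@pytest.fixture
def driver():
    d = webdriver.Chrome()
    yield d
    d.quit()


# Parameterized test data: coverage grows by adding rows, not code.
@pytest.mark.parametrize("user,password,expected", [
    ("jsmith", "P@ssw0rd1", "Welcome"),
    ("agarcia", "S3cr3t!", "Welcome"),
])
def test_login_shows_welcome_banner(driver, user, password, expected):
    page = LoginPage(driver)
    page.open()
    page.log_in(user, password)
    assert expected in page.banner_text()
```

Whether this counts as a “good” framework is exactly the subjective debate described above; the structure helps, but reusability and maintainability still have to be measured, not just claimed.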
As a test manager with tight budget constraints, it has become extremely important to take a closer look at how new technologies can make testing better, cheaper, and faster. A good solution will provide a healthy balance of tools, process, and governance. Together, these three disciplines form a QA system that can react to the testing demands of mobile, web, and desktop applications. In the spirit of ‘you think your automation framework is better than mine?’, here at FogChain we provide a patented, tool-agnostic framework that simplifies your favorite scripting tools. We provide a user-friendly interface for creating automated test cases but, unlike scriptless vendors, we encourage the use of scripting and programming. In our opinion, QA groups that are given an organized framework (interface and code modules), clear architecture goals, and knowledge from a qualified automation firm will excel at their task. The ability to mix and match commercial and open-source tools under one solution truly answers the question, “How do we automate mobile, web, or the next big thing?”
About The Author
Chief Strategy Officer, FogChain Inc.