By now, it is common knowledge that any organization that wishes to consider itself a DevOps organization needs an automated testing strategy. And while automating tests holds many advantages over performing them manually, it isn't reasonable to automate each and every test in your arsenal. So the next step is to ask two questions: Which tests should be automated, and how many? The answers depend on several important factors.
Below, I will discuss the factors that determine whether a test belongs in your automated testing plan. I will also answer a critical question that arises when implementing that plan: Why can't we just automate them all?
The benefits of automating your software testing are both obvious and numerous. An automated test, provided it is implemented properly, removes the human error that is always a risk when a feature is tested manually. Test automation also lets you find bugs quickly by integrating your automated tests with your CI tool. Each time you commit code, your tests run to verify that your changes haven't broken existing features. If a test fails, the build fails, and a fix can be implemented immediately, preventing broken code from reaching a shared environment in the meantime. This keeps the cost of fixing bugs low, because they are discovered shortly after they are introduced.
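To make this concrete, here is a minimal sketch of the kind of check a CI tool would run on every commit. The PriceCalculator class and its tax logic are invented for illustration, not taken from any particular codebase:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical application code, included only so the example is self-contained.
class PriceCalculator {
    double totalWithTax(double subtotal) {
        return subtotal * 1.10; // flat 10% tax, purely for illustration
    }
}

// A JUnit test like this runs on every commit. If a change breaks the tax
// logic, the assertion fails, the CI build fails, and the bug is caught
// minutes after it is introduced rather than weeks later.
class PriceCalculatorTest {
    @Test
    void totalWithTaxAddsTenPercent() {
        assertEquals(110.00, new PriceCalculator().totalWithTax(100.00), 0.001);
    }
}
```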
So with all of these benefits, what argument could be made against automating tests for every feature in your application? Consider that test automation, while it saves time in discovering bugs, can be expensive in other ways. Developing an automated test script means overhead in design and development during the initial implementation, and again in ongoing maintenance. Whenever a feature changes, no matter how minor the change, the script may need to be reviewed, and possibly adjusted, to properly test the updated feature. For some tests, the cost of that maintenance can outweigh the benefits and peace of mind the test provides.
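To see where that maintenance cost comes from, consider a UI test sketched below with Selenium WebDriver. The URL and element IDs are hypothetical; the point is that even a trivial change, like a renamed button ID, forces someone to revisit the script:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Hypothetical UI test: the URL and element IDs are made up for illustration.
public class CheckoutFlowTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/cart");
            // If a designer renames this button's ID during a minor redesign,
            // this line throws NoSuchElementException even though checkout
            // still works, and the script must be updated before builds pass.
            driver.findElement(By.id("checkout-button")).click();
            driver.findElement(By.id("order-confirmation"));
        } finally {
            driver.quit();
        }
    }
}
```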
In practice, this means you probably won't automate all of your tests. You need to decide which ones make the most sense to automate, and which to keep performing manually.
Drawbacks like the expense of test maintenance mean several factors must be taken into consideration when deciding which tests to automate. As Angie Jones made clear in her presentation at SauceCon 2018, the goal of a test automation engineer isn't to automate everything, but to automate the right things.
Several factors play into whether or not the testing of a particular feature should be automated:
Is the feature of great importance to the overall viability of the application? Every application has features that have little impact on the customer's ability to use it, sometimes to the point where the customer wouldn't even notice if the feature broke. Where the impact is that low, skipping automated testing altogether likely saves time on development and maintenance. In contrast, there are features whose failure would render the application essentially useless. Features with that kind of impact on users should absolutely have automated tests ensuring they are in working order; here, the cost-benefit analysis leans heavily in automation's favor.
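For a feature on the "application is useless without it" end of the spectrum, a test like the sketch below earns its maintenance cost. The AuthService class and its credentials are hypothetical stand-ins for a real login feature:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical stand-in for a feature the application cannot function
// without; real login logic would check a user store, not constants.
class AuthService {
    boolean login(String user, String password) {
        return "admin".equals(user) && "s3cret".equals(password);
    }
}

// If login breaks, the application is useless, so automated coverage
// here is clearly worth the upkeep.
class LoginSmokeTest {
    @Test
    void validCredentialsGrantAccess() {
        assertTrue(new AuthService().login("admin", "s3cret"));
    }

    @Test
    void invalidCredentialsAreRejected() {
        assertFalse(new AuthService().login("admin", "wrong"));
    }
}
```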
Is this feature already tested by proxy by other automated tests? One thing that should be avoided as much as possible in an automated testing strategy is redundancy. When developers get “test automation happy” and automate tests at every turn, they can end up testing the same components multiple times. This proves costly when a component tested five or even ten times is broken by a developer: instead of debugging one failed test, you're now debugging many.
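As a hedged sketch of what that redundancy looks like (the EmailValidator class and the flows named in the comments are hypothetical), consider:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;

// Hypothetical validator, used here only to illustrate redundant coverage.
class EmailValidator {
    boolean isValid(String email) {
        return email != null && email.contains("@"); // simplified rule
    }
}

// One focused test covers the rule directly...
class EmailValidatorTest {
    @Test
    void rejectsAddressWithoutAtSign() {
        assertFalse(new EmailValidator().isValid("not-an-email"));
    }
}

// ...so the signup, checkout, and newsletter test suites should not each
// re-assert the same rule. If they did, one change to EmailValidator
// would break many tests at once, multiplying the debugging effort.
```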
How many automated tests is TOO many? Simply automating everything you intend to test will make builds longer, and it will make maintenance a logistical nightmare, both in the actual upkeep and in determining what needs upkeep with each application change. A balance needs to be found. Automate testing for each feature deemed important to the viability of the application, consider the length of your builds once your tests are integrated with CI, and ask yourself, when weighing the effort to automate: Is the juice worth the squeeze? Is the effort to automate and maintain the test, plus the time it adds to every build, worth the benefit the test provides? The exact number of automated tests will vary with each organization and each application, but these are the questions to keep in mind when determining what's right for yours.
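One hedged way to make the "juice worth the squeeze" question concrete is a back-of-the-envelope break-even calculation. Every figure below is an invented assumption, not a benchmark; plug in your own estimates:

```java
// Break-even sketch: when does automating a test repay its cost?
// All numbers are assumptions for illustration only.
public class AutomationBreakEven {
    public static void main(String[] args) {
        double buildCost = 8.0;       // hours to design and write the script (assumed)
        double upkeepPerMonth = 0.5;  // hours/month adjusting it as the feature changes (assumed)
        double manualRunCost = 0.25;  // hours to execute the same test by hand once (assumed)
        double runsPerMonth = 40;     // how often the test would otherwise run manually (assumed)

        double monthlySavings = manualRunCost * runsPerMonth - upkeepPerMonth;
        System.out.printf("Automation pays for itself after %.1f months%n",
                buildCost / monthlySavings);
        // With these numbers: 8 / (0.25 * 40 - 0.5) = 8 / 9.5, roughly 0.8 months.
        // If the feature changed weekly and upkeep cost 12 hours/month instead,
        // monthlySavings would be negative and manual testing would win.
    }
}
```

The formula is crude on purpose: it ignores the cost of longer builds and the value of faster bug discovery, but it gives you a starting point for the case-by-case analysis.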
A test automation plan is a necessary part of any DevOps organization, and a big part of that plan is choosing which tests to automate and which to run manually when necessary. Factors such as a feature's impact on the user's ability to use the application, redundancy with existing tests, and the test's effect on the length of the application build should all be considered on the path to an effective automated testing strategy. Analyzing the costs and benefits of automating each feature's tests on a case-by-case basis will serve the organization well in the long run.
Scott Fitzpatrick is a Fixate IO Contributor and has over 6 years of experience in software development. He has worked with many languages, including Java, ColdFusion, HTML/CSS, JavaScript and SQL.