One of the most valuable assets for a software development organization is an effective continuous testing strategy. Continuous testing requires end-to-end automated testing, which provides greater insight into application quality at earlier points in the development lifecycle.
It sounds great, and it is. But, as with any other change to the development process, the transition to automated testing comes with common pitfalls that can make the period stressful.
What challenges will a DevOps organization face when trying to implement an effective automated testing strategy? Below, I will answer that question. I'll address challenges ranging from test infrastructure to test data management, identify common automated testing pain points, and provide advice for overcoming them.
Simply put, automated testing serves as part of a larger testing strategy designed to increase application quality while maintaining the speed of delivery. It does so by facilitating the discovery of bugs at earlier points in the development lifecycle. One aspect of such a strategy is unit testing: tests written as components are developed and run by developers within the build process. Another involves integrating more complex automated tests into the CI process, so that as developers write and commit changes to the application code, new and existing functionality is continually validated.
When issues in the code base are identified earlier in the development process, they are often less expensive to resolve. They are likely the result of a recent code change, and the bug can be quickly rectified by one of the developers on the project. This means a lower likelihood of show-stopping bugs making it to the end of the development process and, consequently, less risk to the delivery schedule.
With all the benefits that come with an effective implementation of automated testing, it's clear that the juice is worth the squeeze. That is to say, the effort that will undoubtedly be needed to overcome the challenges associated with the development of a test automation process will be worthwhile. Consider the following challenges that are commonplace when getting started with (or even expanding upon) an automated testing strategy:
Lack of expertise among team members - One common challenge for organizations getting started with test automation is a lack of experience and expertise among current team members. While this concern cannot be completely alleviated without gaining significant experience, there are steps a DevOps team can take to make the transition as smooth as possible. For instance, an organization can choose testing frameworks that support programming languages its team already knows. A good example is Selenium. Selenium, which essentially automates browsers, has bindings for many languages: a JavaScript shop can write its test scripts in JS, a Python shop in Python, and so on. Further, it can initially be challenging for a QA team to determine which test scripts should be automated. Training staff on this topic will help them identify the components and features for which test automation should be a high priority and those where it is a lower one.
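As an illustration of those language bindings, a team comfortable with Python might sketch a browser test like the one below. This is a minimal sketch, not a production test: it assumes the `selenium` package and a local Chrome driver are installed (and skips gracefully when they are not), and the URL and check are hypothetical.

```python
# Minimal Selenium sketch in Python. Assumes `pip install selenium` and a
# Chrome driver on PATH; the target site and assertion are hypothetical.
try:
    from selenium import webdriver
except ImportError:           # selenium not installed in this environment
    webdriver = None

def check_homepage_title(base_url="https://example.com"):
    """Open the home page and verify it renders a non-empty title."""
    if webdriver is None:
        return None           # skip when selenium is unavailable
    driver = webdriver.Chrome()
    try:
        driver.get(base_url)
        title = driver.title
        assert title, "expected a non-empty page title"
        return title
    finally:
        driver.quit()         # always release the browser session
```

The same test could be expressed nearly line-for-line in JavaScript, Java, or C# against the same Selenium WebDriver API, which is what makes the framework approachable for teams with an existing language preference.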
The challenge of building in development time for test automation - While an automated testing strategy certainly saves time in the long run, it also comes with an inevitable level of overhead for developing the test scripts themselves. For some organizations, properly allocating time for this development is a challenge out of the gate. Consider building the overhead into the agile development process: add a task to user stories to develop automated test scripts wherever test automation is determined to be appropriate, and make “has automated testing” part of the acceptance criteria. This drives estimates that properly account for the overhead while fostering a development culture in which automating tests becomes a habit.
Lack of adequate test infrastructure - Another issue for many organizations is the lack of existing infrastructure for providing necessary test coverage and adequate speed of execution. Consider the scenario where an application must be tested against many combinations of browsers and operating systems. In order to run each test against each configuration in a reasonable time frame, the test scripts likely need to be run in parallel. And for that to happen, the infrastructure must exist to support the strategy of parallelization.
While an organization just getting started with test automation may not be prepared to build and maintain an infrastructure of this nature internally, there are other options to fill infrastructure needs. An example is to work with a cloud-based test infrastructure provider such as Sauce Labs. Working with such a provider allows for access to environments with the necessary configurations. This will lead to a higher level of test coverage and efficiency while eliminating the overhead that comes with building out and maintaining these environments internally.
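To make the parallelization idea concrete, here is a sketch of fanning a single test out across several browser/OS configurations at once. The runner is a stub; in a real setup each configuration would be handed to a remote WebDriver session pointed at an in-house grid or a provider such as Sauce Labs. The configuration list and function names are invented for illustration.

```python
# Sketch: run one test against every browser/OS combination in parallel.
from concurrent.futures import ThreadPoolExecutor

CONFIGS = [
    {"browser": "chrome",  "platform": "Windows 11"},
    {"browser": "firefox", "platform": "Windows 11"},
    {"browser": "safari",  "platform": "macOS 14"},
]

def run_login_test(config):
    """Stub: run the login test against one browser/OS combination.
    Real code would open a remote session for `config` and drive it."""
    return (config["browser"], config["platform"], "passed")

def run_in_parallel(configs):
    # One worker per configuration, so all combinations run concurrently
    # instead of multiplying the total wall-clock time.
    with ThreadPoolExecutor(max_workers=len(configs)) as pool:
        return list(pool.map(run_login_test, configs))

results = run_in_parallel(CONFIGS)
```

The key point is that total run time stays close to the slowest single configuration rather than the sum of all of them, which is what makes broad browser/OS coverage feasible.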
Unrealistic goals for an automated testing strategy - Initially, test automation can feel like a permanent replacement for an organization's manual testing and application analysis. It is crucial to set appropriate expectations for an automated testing strategy and to retain other forms of testing and analysis for monitoring application quality. For instance, although automating tests will catch more bugs during development, some will still make it into production, whether due to an oversight in an automated script, scaling issues, or something else. Shift-right testing can help to identify these problems when automated test scripts running as part of the CI process fail to do so. One possibility is to institute performance monitoring that alerts DevOps engineers when the application is failing or experiencing performance issues, so that problems missed in development are still identified quickly. The lesson is that testing is a continuous process and should exist at all stages where possible, including production.
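A toy version of such a shift-right check might look like the following: flag any endpoint whose average observed latency exceeds a budget. The endpoint names, timings, and threshold are invented; a real deployment would rely on a monitoring stack such as Prometheus or a SaaS APM rather than hand-rolled code.

```python
# Toy shift-right check: flag endpoints whose average latency breaks a
# budget. All names and numbers below are illustrative.
LATENCY_BUDGET_MS = 500

def find_slow_endpoints(samples, budget_ms=LATENCY_BUDGET_MS):
    """Return (endpoint, avg_ms) pairs whose average latency is over budget."""
    alerts = []
    for endpoint, timings_ms in samples.items():
        avg = sum(timings_ms) / len(timings_ms)
        if avg > budget_ms:
            alerts.append((endpoint, round(avg, 1)))
    return alerts

# Example samples: /search is over budget, /health is not.
samples = {"/health": [40, 55, 38], "/search": [620, 710, 590]}
alerts = find_slow_endpoints(samples)
```

In practice such a check would feed an alerting channel, turning production issues that slipped past the CI suite into actionable signals instead of user-reported bugs.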
Data dependency problems - Automated testing can create complicated problems related to test data management. When a test script is run, it likely requires certain data to be in a certain state. This can create problems in several scenarios. For instance, what happens if a test script is being run against multiple environment configurations at the same time? Will it fail due to both instances of the script utilizing the same data from the same database? Modification of said data by one instance of the test’s execution may cause another instance to fail. Or how about the scenario where the data for a test script is set up through the execution of another test script? What happens when the test script it depends upon fails?
An effective way to manage these potential issues is to develop test scripts to be self-contained and completely independent of one another. In other words, all test scripts should be developed to create and clean up all data needed for their successful execution. In this manner, testing personnel will remove the possibility of test failures due to data-related issues. In addition, the DevOps team will be properly prepared — from a test data standpoint — to scale up to any level of test parallelization they desire.
While test automation will undoubtedly allow for the quicker identification and resolution of problems within an application’s code base, an effective implementation of an automated testing strategy does not come without its pain points. That being said, there are always ways to overcome the common barriers to a solid automated testing strategy (or avoid them completely). An organization can take big steps towards testing effectively by building time into the process to develop test scripts, ensuring these scripts are developed to be independent of one another, and ensuring the DevOps team has access to quality test infrastructure.
Scott Fitzpatrick is a Fixate IO Contributor and has 7 years of experience in software development. He has worked with many languages and frameworks, including Java, ColdFusion, HTML/CSS, JavaScript and SQL.