What Are the Advantages of Test Automation in DevOps?
Successful DevOps needs automated processes throughout the software delivery life cycle. It relies on a culture and environment where building, testing and releasing software can happen rapidly, frequently and reliably.
Given that, within our DevOps model, testing helps find bugs and improves stability and reliability but generally takes a long time to complete, wouldn’t automating all of the testing always be beneficial? The tester’s or SDET’s objective in DevOps would therefore seem pretty simple: “speed up testing by automating it”.
I believe it’s more subtle than that.
For testers to add the most value to the DevOps model, we must first architect the automated tests. I’m being specific with my language here. Rather than architect, I could have said design, but what I want to convey is that it goes deeper than that. We must make the foundations strong and resilient, while allowing the building (read: tests) on top to flex with the wind and the occasional earthquake. To do that we need to take a step back and think about what we’re trying to achieve, what is happening all around us, what the purpose of our building (read: test) is, and what the future may hold.
Therefore, the tester (who is now filling the role of a test architect) needs to consider what automated testing should try to achieve, what the release pipeline looks like, and which tests should be automated.
Automated tests can deliver the following benefits; these, and the type of test each requires, should be considered before scenarios are identified and test scripting begins.
Repeatable Tasks
By automating routine tasks, a team can ship software sooner and achieve or maintain competitive advantage through increased efficiency. Tests that execute in minutes may take a human being an order of magnitude longer to complete. Additionally, automated tests are less prone to human error (at least once they have been proven to work and are kept under control from then on) and execute the same way every time they are run.
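As a minimal sketch of the idea, the following pytest check (Python is assumed here and in the later examples, and the configuration loader is a hypothetical stand-in for real product code) takes milliseconds to run and behaves identically on every run, where a human repeating the same pre-release check would be slower and could occasionally miss a step:

```python
# Minimal sketch: a routine pre-release check automated with pytest.
# load_config is a hypothetical stand-in for however the real application
# exposes its settings.
REQUIRED_KEYS = {"database_url", "log_level", "feature_flags"}


def load_config():
    """Hypothetical loader; a real test would read the deployed configuration."""
    return {"database_url": "postgres://...", "log_level": "INFO", "feature_flags": {}}


def test_required_config_keys_present():
    # Runs in milliseconds and executes the same way every time.
    assert REQUIRED_KEYS.issubset(load_config().keys())
```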
Early Quality Indicator
Automated tests that run after every build of the source code provide early feedback on possible quality issues in the product. For example, if a test that has historically executed successfully suddenly fails, a code change may have caused a regression. Finding out about potential defects in existing functionality quickly is paramount to keeping the quality of the product high and reducing the cost of fixing the problem.
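One hedged way to wire this in is a small post-build step that runs the fast suite immediately and fails the pipeline on the first regression; this sketch assumes pytest is installed and that the fast tests carry a hypothetical "bvt" marker:

```python
# Hedged sketch of a post-build hook: run the fast suite straight after the
# build and surface the first regression while the offending commit is fresh.
import subprocess
import sys


def run_post_build_checks() -> int:
    result = subprocess.run(
        ["pytest", "-m", "bvt", "--maxfail=1", "-q"],  # the "bvt" marker is an assumption
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    return result.returncode


if __name__ == "__main__":
    sys.exit(run_post_build_checks())
```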
Write Once, Run Anywhere
Automated tests can run many times over with different configurations. For example, the same test can validate different combinations of web browser and operating system for greater coverage of your test matrix.
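For example, a single test body can be parametrised across a browser and operating system matrix; in this sketch start_session is a hypothetical stub, where a real suite would launch the named browser through a tool such as Selenium or Playwright:

```python
# Sketch: one test, nine configurations (3 browsers x 3 operating systems).
import pytest

BROWSERS = ["chrome", "firefox", "edge"]
OPERATING_SYSTEMS = ["windows", "linux", "macos"]


def start_session(browser, operating_system):
    """Hypothetical placeholder that pretends to open the login page."""
    return {"browser": browser, "os": operating_system, "title": "Login"}


@pytest.mark.parametrize("browser", BROWSERS)
@pytest.mark.parametrize("operating_system", OPERATING_SYSTEMS)
def test_login_page_loads(browser, operating_system):
    # The same test body runs once per combination in the matrix.
    session = start_session(browser, operating_system)
    assert session["title"] == "Login"
```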
Accurate Validation
A test may be easy to execute or automate, but validating the results of the test may be time consuming. Consider an accounting system that performs complex calculations to generate a report. A human may have difficulty validating a result in a sea of numbers, but automated validation may take a fraction of a second.
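A sketch of that kind of validation, with generate_report standing in (hypothetically) for the accounting system's output, recomputes the expected totals independently and compares them in a fraction of a second:

```python
# Sketch: automated validation of a generated report using exact decimal arithmetic.
from decimal import Decimal


def generate_report(transactions):
    """Hypothetical system under test: totals transactions per account."""
    totals = {}
    for account, amount in transactions:
        totals[account] = totals.get(account, Decimal("0")) + Decimal(amount)
    return totals


def test_report_totals_match_independent_calculation():
    transactions = [("sales", "100.10"), ("sales", "249.90"), ("fees", "-12.50")]
    report = generate_report(transactions)
    # Expected values a human would struggle to verify in a sea of numbers.
    assert report == {"sales": Decimal("350.00"), "fees": Decimal("-12.50")}
```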
Validate Difficult Scenarios
Attempting to validate the performance and scalability of a product without automation is extremely difficult. How would one simulate thousands of simultaneous transactions to find the breaking point? Test automation helps solve this by simulating many users accessing the software concurrently.
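A real load test would normally use a dedicated tool such as Locust or JMeter, but the principle can be sketched with a thread pool; place_order here is a hypothetical stand-in for a transaction against the system under test, and the 2-second budget is an assumed threshold:

```python
# Hedged sketch: simulate many simultaneous transactions and check latency.
import time
from concurrent.futures import ThreadPoolExecutor


def place_order(order_id):
    """Hypothetical transaction against the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for network and processing latency
    return time.perf_counter() - start


def test_handles_one_thousand_concurrent_orders():
    with ThreadPoolExecutor(max_workers=100) as pool:
        latencies = list(pool.map(place_order, range(1000)))
    # Fail if the slowest simulated transaction exceeds the assumed 2-second budget.
    assert max(latencies) < 2.0
```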
Confirm Product Coverage
Running automated tests at a regular cadence helps indicate how much of the product code is tested. Without automation in place, it is difficult to obtain this data easily. Regular test automation runs collect code coverage data and provide feedback on where additional tests are needed. The coverage percentage should increase over time, providing incremental confidence that regressions are minimised.
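As a hedged sketch, coverage can be collected programmatically with the coverage package; most teams would instead run "coverage run -m pytest" or the pytest-cov plugin in the pipeline, and "myproduct" below is a hypothetical package name:

```python
# Sketch: run the automated suite under coverage and report untested lines.
import coverage
import pytest


def run_suite_with_coverage():
    cov = coverage.Coverage(source=["myproduct"])  # hypothetical product package
    cov.start()
    pytest.main(["-q"])   # execute the automated test suite
    cov.stop()
    cov.save()
    cov.report()          # per-file summary showing where tests are still needed


if __name__ == "__main__":
    run_suite_with_coverage()
```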
Legacy Applications
In scenarios where there is continual enhancement of a legacy application that was not designed with testability in mind, it is difficult to retrofit any kind of automation on top of it, with the possible exception of UI automation. Additionally, the investment in automation for stable areas of the application may not be warranted. For greenfield applications that can be designed for testability, on the other hand, teams should automate a higher percentage of tests, though there is still room for manual verification, and exploratory testing remains mandatory.
Assuming the decision is made not to automate everything, what should be automated and which test cases should be automated first? A useful input for making these decisions is a set of existing test cases that have historically been run manually. This may be a large list, so it is necessary to prioritise what to automate (a sketch of how these priorities might map onto test markers follows the list):
P0 – BVTs (Build Verification Tests). These tests run quickly and are executed as frequently as possible, ideally after every build of the software. BVTs validate the most basic and important functionality of the system. For example, does the application launch? Can a basic user log in? Can the primary use case be initiated? The number of test cases marked as P0 is typically small relative to the other buckets.
P1 – Smoke Tests. These tests go a level deeper than BVTs and typically cover the most important use cases of the software.
P2 – Functional Tests. These tests are executed whenever a regression test pass is required, such as at the end of an iteration or release. This bucket contains the most tests and exercises the majority of user interaction with the software.
P3 – Low Severity Tests. Tests in this bucket reflect lower severity user scenarios, such as boundary cases or areas of the application that are less frequently touched by either the end user or developer.
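One way these buckets might map onto an automated suite is via pytest markers, so each stage of the pipeline selects the right depth of testing; the marker names below are assumptions rather than a standard convention, and they would need registering in pytest.ini to avoid unknown-mark warnings:

```python
# Sketch: priority buckets expressed as pytest markers.
import pytest


@pytest.mark.p0
def test_application_launches():
    ...


@pytest.mark.p1
def test_primary_purchase_flow():
    ...


@pytest.mark.p2
def test_reporting_filters():
    ...


@pytest.mark.p3
def test_maximum_length_customer_name():
    ...
```

The pipeline could then run `pytest -m p0` after every build, `pytest -m "p0 or p1"` nightly, and the full suite for a regression pass before release.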
The bottom line is: will automating a test provide a return on the investment of creating it? If the effort involved in creating and maintaining the test case is greater than the efficiency gains projected over its lifetime, do not automate it. Prioritise the test cases accordingly and start by automating the P0 and P1 tests. Bear in mind that automated tests may miss important issues, such as an improperly rendered chart, unless the appropriate tools are used and checks are made; the cost-benefit of doing this isn't always obvious.
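As a rough illustration of that return-on-investment question, here is a worked sketch using invented numbers:

```python
# Worked sketch of the automation ROI calculation; all figures are invented.
build_cost_hours = 16              # effort to automate the test case
maintenance_hours_per_month = 0.5  # ongoing upkeep of the automated test
manual_run_hours = 0.75            # time for a human to execute it once
runs_per_month = 8                 # e.g. twice-weekly regression passes
lifetime_months = 12

automation_cost = build_cost_hours + maintenance_hours_per_month * lifetime_months
manual_cost = manual_run_hours * runs_per_month * lifetime_months

print(f"automation: {automation_cost}h, manual: {manual_cost}h")
# automation: 22.0h, manual: 72.0h -> automate it; reverse the numbers and it may not pay off.
```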
A few final things to remember:
Redundant Tests Cause Noise
Too much (redundant) automation can have a negative impact on progress. Broken and unnecessary tests clog up the pipeline and require maintenance only to generate noise. Test failures are a mixture of legitimate issues the tests are designed to identify and noise generated by unreliable tests. That noise can become overwhelming, impair the ability to make decisions based upon test results and, more seriously, undermine the trust we have in our solution.
Brittle Tests Kill Returns
Automated tests can be brittle and break when the system under test changes. A robust framework, and test script code that is easy to maintain, are among the most important considerations of an automated test approach.
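One common way to keep UI-facing tests maintainable is to isolate locators behind a page object, so a changed selector is fixed in one place rather than in every test; in this sketch FakeDriver is a hypothetical stand-in for a real browser driver such as Selenium or Playwright:

```python
# Sketch: page object pattern to localise the impact of UI changes.
import pytest


class FakeDriver:
    """Records actions instead of driving a real browser."""

    def __init__(self):
        self.actions = []

    def fill(self, selector, value):
        self.actions.append(("fill", selector, value))

    def click(self, selector):
        self.actions.append(("click", selector))


class LoginPage:
    # Single source of truth for each locator: when the UI changes,
    # only these lines need maintenance.
    USERNAME_FIELD = "#username"
    PASSWORD_FIELD = "#password"
    SUBMIT_BUTTON = "#login-submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        self.driver.fill(self.USERNAME_FIELD, username)
        self.driver.fill(self.PASSWORD_FIELD, password)
        self.driver.click(self.SUBMIT_BUTTON)


@pytest.fixture
def driver():
    return FakeDriver()


def test_user_can_log_in(driver):
    # The test expresses intent and never touches raw selectors.
    LoginPage(driver).login("test-user", "secret")
    assert ("click", LoginPage.SUBMIT_BUTTON) in driver.actions
```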