Testing is an indispensable part of software development, and, as with most technological progress, testing has become more robust, more fragmented, and more complex over time. This progress can be a double-edged sword.
On one hand, the growing use of networked applications in every sphere of life makes those applications increasingly critical. Tolerance for problems shrinks as the world comes to depend on these applications to function. This criticality requires that our testing methods rise to the challenge of preventing and identifying problems earlier and more quickly, and of remediating almost immediately any problems that slip through the cracks. To meet this need, testing has specialized (unit, integration, subcutaneous, UI, UA, etc.) and has been automated to a large degree, especially in the earlier stages of the deployment lifecycle.
On the other hand, this specialization and automation has also made proper testing more expensive to implement and maintain, and has increased the dangers of both improper or incomplete testing and unnecessary duplication of effort across the different layers of testing.
Performance testing (and synthetic monitoring in particular) is especially subject to these dangers. Avoiding improper or incomplete performance testing requires a deep understanding of your measurement goals, and because performance testing (unlike testing earlier in the development lifecycle) looks at an application’s interaction with users, those goals can be harder to define adequately. How long can the application be unavailable before it becomes a critical issue? How slow is too slow? What constitutes the “critical path” through the application, and which features or workflows can receive less attention? There is no single correct answer to these questions, but the types of performance testing required, and the results that matter from that testing, depend entirely on understanding those answers.
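One way to make these goals concrete is to encode them as explicit, testable thresholds rather than leaving them as open questions. The sketch below is purely illustrative and assumes nothing about any particular tool: the `percentile` and `evaluate` names and the 500 ms p95 budget are our own stand-ins for whatever goals a team actually agrees on.

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the value at position ceil(pct/100 * n)."""
    ordered = sorted(samples)
    rank = max(0, math.ceil(pct / 100.0 * len(ordered)) - 1)
    return ordered[rank]

def evaluate(latencies_ms, p95_budget_ms=500.0):
    """Answer 'how slow is too slow?' with an explicit, testable number."""
    observed = percentile(latencies_ms, 95)
    return {"p95_ms": observed, "within_budget": observed <= p95_budget_ms}
```

Turning “how slow is too slow?” into a number like `p95_budget_ms` is exactly the kind of goal definition that the rest of the performance testing effort depends on.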
Performance testing also poses challenges with duplication of effort. There is at least some overlap between the parts of an application tested earlier in the development lifecycle and the parts tested in the later stages where performance testing typically takes place. Unfortunately, because the teams responsible for early-stage and late-stage testing are often different, and their testing tools are different and incompatible, virtually none of the effort spent creating early-stage tests is reused in later-stage performance testing or in production monitoring. As a result, with every incremental change to the application, each test must be recreated not once but several times. This wastes valuable time, and it is a clear recipe for more mistakes.
Forward-thinking performance testing companies have recognized this as an opportunity to “shift left”: to participate earlier and more effectively in the development lifecycle. By integrating into common workflow tools (CI/CD) and by supporting test creation methods familiar to early-stage test creators, modern performance testing tools can help organizations contain the effort and complexity of testing while streamlining the overall testing process across the entire development lifecycle.
To participate in this exciting evolution of the performance testing space, Apica has chosen three broad areas in which to improve our product: CI/CD integration, modern scripting methods, and code repository support for tests. In combination, these three areas will allow our customers to use performance testing in the same (or at least very similar) ways they use earlier forms of testing: automating tests before, during, and after deployments; assigning test creation and maintenance flexibly to the appropriate teams; and reusing tests far more effectively across multiple stages of the development lifecycle.
Additionally, Apica is moving to eliminate the line between load/stress testing and synthetic monitoring. Apica’s load testing and synthetic monitoring solutions have long supported running the same scripts in both tools, and now we are taking the next logical step of bringing these two complementary solutions even closer together. Initially, this will simply mean easier workflows for uploading test scripts to both solutions or sharing the same code repository. In the future, however, a single unified performance monitoring platform will allow any script to be automatically deployed, as part of a complete development lifecycle, to load or stress test a preproduction environment, and then seamlessly redeployed to monitor the application in production.
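As a rough illustration of the “one script, two uses” idea, the sketch below runs the same scripted transaction either once, as a synthetic-monitoring check, or repeatedly, as a simple load driver. The names (`run_scenario`, the `mode` parameter, the `checkout` stand-in) are our own hypothetical choices, not part of Apica’s API or any real tool.

```python
import time

def run_scenario(transaction, mode="monitor", iterations=100):
    """Run one scripted transaction as a one-shot synthetic check
    ("monitor") or repeatedly as a simple load driver ("load").
    Returns the observed latencies in milliseconds."""
    runs = 1 if mode == "monitor" else iterations
    latencies_ms = []
    for _ in range(runs):
        start = time.perf_counter()
        transaction()  # the shared script: e.g. log in, add to cart, check out
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
    return latencies_ms

# The same transaction callable serves both modes:
checkout = lambda: time.sleep(0.001)  # stand-in for a real scripted workflow
monitor_latency = run_scenario(checkout)                            # production check
load_latencies = run_scenario(checkout, mode="load", iterations=50)  # preprod load
```

The point of the sketch is simply that the script itself is identical in both modes; only the execution policy around it changes, which is what makes sharing one repository of scripts across load testing and monitoring practical.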