Test Speedup

Version 1.53 by chrisby on 2024/05/05 18:04

Fast testing not only saves time, but also enables more frequent execution, leading to improved code quality. Optimizing the speed of test execution is therefore critical. While extensive and frequent testing is ideal, it shouldn't excessively slow the pace of development.

Measures

  • Test type segregation: Unit tests tend to run much faster than other types of tests. For large test suites, consider running unit tests regularly on the developer's local machine while scheduling more resource-intensive tests in a CI environment. The CI environment can, for example, run the slower tests in parallel and notify you if something fails. If the tests take too long even for this approach, run them on a fixed schedule, typically once a day at midnight. Also see Types of Tests.
  • Selective testing: You don't need to run all tests every time. It can be sufficient to run only the tests related to recently changed code, or only the fast tests, and then run the full suite once you have finished a major implementation step.
  • Mock slow dependencies to minimize code execution time, especially operations such as I/O, transaction management, and networking.
  • Prefer in-memory databases during testing for cleaner and faster operations compared to standard databases.
  • Identify performance bottlenecks by increasing the number of threads:
    • If execution time remains constant, CPU is the bottleneck. Mitigate with faster CPUs, more cores, or additional machines.
    • If execution time decreases, I/O is the bottleneck. Use more threads, faster storage (such as SSDs), or additional storage for concurrent filesystem operations.
  • Improve I/O speed by using RAM disks, such as tmpfs on Linux. Configure your tests to direct all file interactions to the RAM disk.
  • Parallelize test execution. Multiple threads can improve execution speed even on single-core processors by keeping the CPU busy while other threads wait for disk I/O.
  • Offload CPU-intensive tasks to cloud-based computing resources using automation scripts:
    • Upload project files to the cloud.
    • The cloud service builds the project, runs tests, and generates a test report.
    • Upon completion, download the test report from the cloud.
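Test type segregation can be sketched with the standard library alone. Below, a plain unittest skip splits fast and slow tests, controlled by an environment variable; `RUN_SLOW` and the test names are illustrative choices, not a standard convention.

```python
# Sketch: keep fast unit tests in the default run and gate slow tests
# behind an environment variable (RUN_SLOW is an illustrative name).
import os
import time
import unittest

RUN_SLOW = os.environ.get("RUN_SLOW") == "1"

def add(a, b):
    return a + b

class FastTests(unittest.TestCase):
    def test_add(self):  # runs in microseconds
        self.assertEqual(add(2, 3), 5)

class SlowTests(unittest.TestCase):
    @unittest.skipUnless(RUN_SLOW, "slow tests run only in CI")
    def test_add_under_load(self):
        time.sleep(2)  # stands in for an expensive integration step
        self.assertEqual(add(2, 3), 5)
```

Run `python -m unittest` locally for the fast path and `RUN_SLOW=1 python -m unittest` in CI. With pytest, the same split is usually expressed with markers, e.g. `@pytest.mark.slow` plus `pytest -m "not slow"`.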
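Mocking a slow dependency might look like the following sketch using the standard `unittest.mock` module; `price_in_eur` and the rate service are illustrative stand-ins for real code that would make a network round trip.

```python
# Sketch: replace a slow external service with an in-memory mock that
# answers instantly and deterministically.
from unittest.mock import Mock

# Illustrative production code: the rate service would normally perform
# a network call taking hundreds of milliseconds.
def price_in_eur(amount_usd, rate_service):
    return amount_usd * rate_service.usd_to_eur()

# In the test, the slow service is swapped for a mock.
mock_service = Mock()
mock_service.usd_to_eur.return_value = 0.9

assert price_in_eur(100, mock_service) == 90.0
mock_service.usd_to_eur.assert_called_once()
```

The test now exercises the pricing logic without any I/O, so it runs in microseconds and never fails because of network flakiness.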
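The point about parallelizing I/O-bound work even on a single core can be demonstrated in a few lines; `time.sleep` stands in here for disk or network waits.

```python
# Sketch: four simulated I/O-bound tests run in a thread pool. The four
# 0.2 s waits overlap, so wall time stays close to one wait, not four.
import time
from concurrent.futures import ThreadPoolExecutor

def io_bound_test(_):
    time.sleep(0.2)  # simulated disk/network wait
    return True

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(io_bound_test, range(4)))
elapsed = time.perf_counter() - start

assert all(results)
assert elapsed < 0.6  # a serial run would take about 0.8 s
```

This is also the diagnostic from the bottleneck bullet in practice: if adding workers shrinks the wall time, the suite is I/O-bound; if it doesn't, it is CPU-bound.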
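The offload steps above can be sketched as a small automation script driving standard tools via `subprocess`. The host name, paths, and the remote `run-tests.sh` command are illustrative assumptions, not a real setup; the runner is injectable so the script can be tested without a network.

```python
# Sketch: upload sources, run tests remotely, download the report.
import subprocess

REMOTE = "ci-worker.example.com"  # hypothetical cloud machine

def run(cmd):
    subprocess.run(cmd, check=True)

def offload_tests(runner=run):
    # 1. Upload the project files to the cloud machine.
    runner(["rsync", "-az", "--delete", "./", f"{REMOTE}:project/"])
    # 2. Build the project and run the tests remotely.
    runner(["ssh", REMOTE, "cd project && ./run-tests.sh"])
    # 3. Download the generated test report.
    runner(["rsync", "-az", f"{REMOTE}:project/report/", "report/"])
```

In practice this is usually wrapped in a shell alias or a Makefile target so offloading costs a single command.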

Asynchronous Testing

Synchronous Testing

A simple TDD workflow is to write new code, run the tests locally, wait for them to finish, and move on if they pass. To avoid long wait times, you run only a few very fast tests. This works while you are developing isolated code covered by unit tests, but it becomes a problem as soon as the new code has to be integrated with existing code. Now you have two bad options: either you run only a few fast tests and forgo the full power of your test suite, accepting lower coverage and possibly missing bugs that would have been cheaper to fix had they been caught earlier; or you run all the tests locally and are unproductive for a long time while waiting for them to finish. This problem can be solved with asynchronous testing.

Asynchronous Testing

When you push code into the code repository, a DevOps infrastructure should trigger a CI pipeline that runs all the tests. This allows your code to be tested extensively while you continue to work without waiting. If the CI pipeline succeeds, the test suite has confirmed that your code changes are okay. If the CI pipeline fails, you should receive a notification, such as an SMS, an email, a chat message, or a desktop notification, so you can immediately stop what you are working on and fix the problem first. Apply the fix and continue working, again without waiting for the tests to finish.
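The notification step might be sketched as follows, as a script the pipeline calls on failure. The webhook URL and the JSON payload shape are illustrative assumptions; real chat services (Slack, Mattermost, and others) each define their own formats.

```python
# Sketch: post a failure message to a team chat webhook from CI.
import json
import urllib.request

def build_failure_message(pipeline_url, commit):
    return f"CI pipeline failed for commit {commit}: {pipeline_url}"

def notify(message, webhook_url):
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": message}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # fire-and-forget POST
        return resp.status

# The failure branch of the pipeline would call something like:
# notify(build_failure_message(pipeline_url, commit_sha), webhook_url)
msg = build_failure_message("https://ci.example.com/pipelines/42", "1a2b3c")
```

Most CI systems expose the pipeline URL and commit hash as environment variables, so the script only has to read them and post.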

It is not uncommon for a single developer to have many CI pipelines running simultaneously. While this technique may require advanced DevOps infrastructure, it is often worth the investment to set it up. Alternatively, you can pay hosted services such as GitLab or GitHub to run the pipelines on their infrastructure.