Tips and Tricks


Here are a few general guidelines that help you write better test code and also influence the design of the production code.

  • Frequent Execution: Run tests frequently, ideally before every commit, for optimal quality assurance. In particular, run all relevant tests before pushing code or creating a pull/merge request. Continuous Integration practices help enforce that code uploaded by other developers is tested.
  • Use functional programming for data processing tasks, because it is less error-prone and avoids side effects.
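For example, Java's Stream API supports this style; a minimal sketch with illustrative data:
import java.util.List;
import java.util.stream.Collectors;

List<String> normalized = List.of("Alice ", " bob", "CAROL").stream()
        .map(String::trim)             // pure transformations, no shared mutable state
        .map(String::toLowerCase)
        .collect(Collectors.toList()); // ["alice", "bob", "carol"]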
  • It's common to create test users and test data to facilitate the testing process.
  • Don't reinvent the wheel: use existing test libraries. There are proven solutions that minimize the effort of creating tests.
  • Use a common test structure convention by dividing the test logic into three parts (see the sketch below this list):
    1. given (a context → setting up data, fields, and objects)
    2. when (something happens → define behavior of mocks and execute production code)
    3. then (a certain outcome is expected → check the result via assertions)
    • Alternative common names for the three steps: arrange, act, assert
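A minimal JUnit-style sketch of this structure (Cart and Item are hypothetical classes):
@Test
void totalPriceSumsAllItems() {
    // given: a cart containing two items
    Cart cart = new Cart();
    cart.add(new Item("book", 10));
    cart.add(new Item("pen", 2));

    // when: the production code is executed
    int total = cart.totalPrice();

    // then: the expected outcome is checked via assertions
    assertEquals(12, total);
}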
  • Use additional simple high-level checks. For example, when working with a collection, checking the number of items before examining an item from the collection in detail is a good indicator of unexpected or missing items.
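For instance (service and Order are hypothetical):
List<Order> orders = service.findOpenOrders();
assertEquals(3, orders.size()); // high-level check: catches unexpected or missing items early
assertEquals("42", orders.get(0).getId()); // detailed check afterwards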
  • More is better. When in doubt, it is better to write one test too many than one test too few. Possible duplication is a lesser evil than not testing at all.
  • Keep test resources close to the tests to make their use easy to understand. Test resources should be placed in the same location as the test if the resource is only needed by that test.
  • Avoid threads in tests
    • Concurrent programming is error-prone and very difficult to test and debug properly.
    • If threads are necessary, keep the amount of asynchronous code to a minimum.
    • Separate synchronous and asynchronous logic to test them separately.
    • Prefer thread termination conditions over fixed wait times, as this usually increases test performance dramatically.
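For example, a CountDownLatch released on completion usually replaces a fixed sleep; a sketch in which startWorker is a hypothetical asynchronous unit under test:
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

CountDownLatch done = new CountDownLatch(1);
startWorker(() -> done.countDown()); // the asynchronous code signals its completion
// waits only as long as necessary, with an upper bound instead of a fixed wait time
assertTrue(done.await(5, TimeUnit.SECONDS));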
  • Avoid files in tests
    • I/O is slow, increasing test time unnecessarily.
    • Temporary files may persist because you forgot to delete them or placed them in folders where they will never be found. At the very least, delete such files at the beginning of the test to ensure their absence. Deleting files only at the end of a test is error-prone: if the test fails or aborts, the files may persist and affect subsequent tests. Often, cleanup both before and after a test is useful.
    • Prefer data streams over files for testing.
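For example, an in-memory stream can replace a temporary input file (parser and Report are hypothetical):
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// feed the unit under test from memory instead of the file system
InputStream in = new ByteArrayInputStream(
        "id,name\n1,Alice\n".getBytes(StandardCharsets.UTF_8));
Report report = parser.parse(in);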
  • Don't leave tests unfinished, don't just comment out @Test (annotation in Java to mark a function as a test), and don't leave empty test bodies as these things are confusing and waste the reader's time. If you come across this kind of code, try to learn its purpose, ask people who worked on it, and try to finish the implementation. If that is not possible or worth the time, then delete it as the dead code that it is. To indicate that tests need to be implemented, an issue/work unit could be created and referenced, or a TODO comment with an explanatory description could be added.
  • A test should fail if an expected exception is not thrown. Test libraries usually have methods to handle such cases. If the exception object must be examined, use a try-catch block with a call to fail() (e.g. JUnit's Assert.fail()) at the end of the try block. Without the fail() call to make the test fail, the test would wrongfully pass if no exception were thrown:
try {
    // code that should throw the exception
    fail(); // executed only if no exception was thrown, making the test fail
} catch (Exception e) {
    // examine and assert the exception
}
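Alternatively, JUnit 5's assertThrows fails the test if no exception is thrown and returns the exception for further assertions (the called service is hypothetical):
IllegalArgumentException e = assertThrows(IllegalArgumentException.class,
        () -> service.process(null));
assertEquals("input must not be null", e.getMessage());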
  • Different branches of production code should be checked in different tests.
  • Avoid if-statements in test code altogether; at the very least, be careful with assertions inside if/if-else blocks, as the test may pass without ever executing them. Here are a few suggestions to prevent this:
// bad - still passes when x == false, because the assertion is never executed
if (x) {
    // some logic
    assertTrue(y);
}

// good - passes only if both x and y are true
if (x) {
    // some logic
    assertTrue(y);
} else {
    fail();
}

// also good - asserts the condition instead of branching on it
assertTrue(x);
// some logic
assertTrue(y);
  • Test Case Logging: If certain production code needs to be called several times with slightly different arguments, it may be useful to iterate over a list of these arguments, calling the production code and then asserting the result on each iteration. It is then often good practice to print out each element of the list before each assertion. If an element causes a test to fail, the terminal output will immediately show which element caused the error.
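A sketch of this pattern, with an illustrative map of inputs to expected results and a hypothetical produce() as the production code:
import java.util.Map;

Map<String, Integer> cases = Map.of("a", 1, "bb", 2, "ccc", 3);
for (Map.Entry<String, Integer> tc : cases.entrySet()) {
    System.out.println("Testing input: " + tc.getKey()); // shows the failing element immediately
    assertEquals(tc.getValue().intValue(), produce(tc.getKey()));
}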
  • Bug Coverage Responsibility: If you find a bug or a case that has not yet been tested, it is your duty to create a test that covers it, so that the software is stable against that problem from that moment on.
  • Range-Based Assertions: Potentially inaccurate results should never be asserted against an exact value, but only within an expected range of approximation. This includes all assertions on floating-point numbers, which may vary slightly between different operating systems, compilers, or CPU architectures.
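For example, JUnit's assertEquals for floating-point values takes a tolerance (delta) as a third argument:
// 0.1 + 0.2 != 0.3 exactly, so an exact assertion would fail;
// this passes as long as the result is within 1e-9 of the expected value
assertEquals(0.3, 0.1 + 0.2, 1e-9);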
  • Avoid Cascading Validation: It would be very cumbersome to perform input validations, such as the famous null checks, and corresponding tests for each unit. A common solution is to define a module consisting of several units. The unit at the top, which receives the input for the first time, validates it once. All subsequent units can safely assume that the input is not null when processing it, and no longer need to explicitly check or write tests for such cases.
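A minimal sketch of this idea (all names are hypothetical):
import java.util.Objects;

// the top-level unit of the module validates the input exactly once
public Result handle(Request request) {
    Objects.requireNonNull(request, "request must not be null");
    return format(enrich(request)); // enrich() and format() may assume non-null input
}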
  • Stick to Performance Requirements: Once a performance test passes, there is no need to optimize the performance of the code any further. Any effort in that direction is a waste of resources.
  • Separate code to be unit tested from third-party code. Third-party libraries should be hidden behind a wrapper anyway; for third-party libraries that interfere with unit tests, their wrapper should be the implementation of an interface. The unit under test should use the interface, which is easy to mock, instead of directly using the third-party library or wrapper, which are harder to mock.
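A minimal sketch of this arrangement (all names are hypothetical; here the JDK clock stands in for a hard-to-mock third-party dependency):
// interface used by the unit under test; trivial to mock or fake
public interface Clock {
    long nowMillis();
}

// thin wrapper around the underlying library call; intentionally not unit tested
public class SystemClock implements Clock {
    public long nowMillis() { return System.currentTimeMillis(); }
}

public class TokenService {
    private final Clock clock;
    public TokenService(Clock clock) { this.clock = clock; }
    public boolean isExpired(long expiresAtMillis) {
        return clock.nowMillis() > expiresAtMillis;
    }
}

// in the test, a fake implementation replaces the real one
Clock fixed = () -> 1_000L;
assertTrue(new TokenService(fixed).isExpired(999L));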
  • Proportion of test code in the code base: A comprehensive test suite is critical to keeping the functionality of a software stable when changes are made to the code base. Therefore, it is not uncommon for test code to make up a large portion of the total code base, such as 30-50%.