Tips and Tricks

Version 1.4 by chrisby on 2023/05/29 11:03

  • Regularity: Run tests regularly, ideally before every commit, for optimal quality assurance. In particular, run all relevant tests before pushing code or creating a pull/merge request. Continuous integration (CI) practices help enforce testing of code pushed by other developers.
  • Use functional programming for data processing tasks: pure functions avoid side effects, which makes them less error-prone and easier to test.
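As an illustration, a data-processing step written as a pure function using Java streams — `squaresOfEvens` is a made-up example, not from any particular codebase. Because it mutates no shared state, a test needs only an input and an expected output:

```java
import java.util.List;
import java.util.stream.Collectors;

public class FunctionalExample {
    // Pure function: no shared state is mutated, so there are no side
    // effects to set up or clean up in a test.
    static List<Integer> squaresOfEvens(List<Integer> numbers) {
        return numbers.stream()
                .filter(n -> n % 2 == 0)
                .map(n -> n * n)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(squaresOfEvens(List.of(1, 2, 3, 4))); // prints [4, 16]
    }
}
```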
  • It's common to create test users and test data to facilitate the testing process.
  • Don't reinvent the wheel: use existing test libraries. They are proven solutions that minimize the effort of creating tests.
  • Use a common test structure convention by dividing the test logic into three parts:
    1. given (a context → setting up data, fields, and objects)
    2. when (something happens → execute production code)
    3. then (a certain outcome is expected → check the result via assertions)
    • Alternative common names for the three steps: arrange, act, assert
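The three steps can be sketched in plain Java — the `ShoppingCart` class and its methods are made up for illustration, and a real test would use a framework such as JUnit instead of a `main` method:

```java
public class ShoppingCartTest {
    // Hypothetical production class, inlined here to keep the sketch self-contained.
    static class ShoppingCart {
        private int items = 0;
        void add(int count) { items += count; }
        int itemCount() { return items; }
    }

    public static void main(String[] args) {
        // given: an empty cart
        ShoppingCart cart = new ShoppingCart();

        // when: two items are added
        cart.add(2);

        // then: the cart reports two items
        if (cart.itemCount() != 2) throw new AssertionError("expected 2 items");
        System.out.println("test passed");
    }
}
```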
  • Use additional simple high-level checks. For example, when working with a collection, checking the number of elements is a good indicator of unexpected or missing elements.
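A minimal sketch of such a high-level check — the result list here is hypothetical test data:

```java
import java.util.List;

public class SizeCheckExample {
    public static void main(String[] args) {
        List<String> result = List.of("alice", "bob"); // pretend output of production code

        // High-level check first: a wrong size immediately reveals unexpected
        // or missing elements before the detailed assertions run.
        if (result.size() != 2) throw new AssertionError("expected 2 elements, got " + result.size());
        if (!result.contains("alice")) throw new AssertionError("missing alice");
        if (!result.contains("bob")) throw new AssertionError("missing bob");
        System.out.println("all checks passed");
    }
}
```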
  • More is better. When in doubt, it is better to write one test too many than one test too few. Possible duplication is a lesser evil than not testing at all.
  • Also test non-functional aspects such as security and single-request performance, and run load/stress tests to validate software throughput.
  • Keep test resources close to the tests to make their use easy to understand. Test resources should be placed in the same location as the test if the resource is only needed by that test.
  • Avoid threads where possible: concurrent code is bug-prone and very difficult to test and debug. If threads are necessary, keep the amount of asynchronous code to a minimum. Also, separate synchronous and asynchronous logic so they can be tested separately. Prefer thread termination conditions over fixed wait times, as this usually improves test performance dramatically.
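One standard way to wait on a termination condition instead of a fixed sleep is `java.util.concurrent.CountDownLatch` — a sketch with a trivial worker thread standing in for real asynchronous work:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchExample {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);

        Thread worker = new Thread(() -> {
            // ... asynchronous work would happen here ...
            done.countDown(); // signal the termination condition
        });
        worker.start();

        // Wait on the condition instead of Thread.sleep(fixedMillis): this
        // returns as soon as the work finishes; the timeout is only a safety net.
        boolean finished = done.await(5, TimeUnit.SECONDS);
        if (!finished) throw new AssertionError("worker did not finish in time");
        System.out.println("worker finished");
    }
}
```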
  • Avoid files
    • I/O is slow, increasing test time unnecessarily.
    • Temporary files may persist because you forgot to delete them or put them in folders where they will never be found. At the very least, delete files at the very beginning of the test to ensure their absence. Deleting files at the end of a test is error-prone: if the test fails or aborts, those files may persist and affect subsequent tests.
    • Prefer data streams over files.
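A sketch of stream-friendly design: the hypothetical `firstLine` method accepts any `InputStream`, so a test can feed it an in-memory `ByteArrayInputStream` instead of a temporary file:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class StreamExample {
    // Production code that accepts a stream can be tested without touching disk.
    static String firstLine(InputStream in) throws IOException {
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(in, StandardCharsets.UTF_8))) {
            return reader.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        // In-memory stream instead of a temporary file: fast, nothing to clean up.
        InputStream in = new ByteArrayInputStream("hello\nworld".getBytes(StandardCharsets.UTF_8));
        System.out.println(firstLine(in)); // prints hello
    }
}
```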
  • Don't leave tests unfinished, don't just comment out @Test (annotation in Java to mark a function as a test), and don't leave empty test bodies as these things are confusing and waste the reader's time. If you come across this kind of code, try to learn its purpose, ask people who worked on it, and rebuild it. If that is not possible or worth the time, then delete it as the dead code that it is. To indicate that tests need to be implemented, an issue/work unit could be created and referenced, or a TODO comment with an explanatory description could be added.
  • A test should fail if an expected exception is not thrown. Test libraries usually provide methods to handle such cases. If the exception object must be examined, use a try-catch block with a fail() call (as in JUnit) at the end of the try block. Without the fail() call to make the test fail, the test would wrongly pass if no exception were thrown:
try {
    // code that should throw an exception
    fail(); // executed only when no exception was thrown
} catch (Exception e) {
    // examine and assert the exception
}
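If a test framework is available, prefer its built-in helper: JUnit 5 provides Assertions.assertThrows, which fails when nothing is thrown and returns the exception for inspection. The sketch below uses a hand-rolled stand-in with the same shape, so it runs without any framework on the classpath:

```java
public class AssertThrowsExample {
    // Minimal stand-in for JUnit 5's Assertions.assertThrows, so this
    // sketch runs without a test framework on the classpath.
    static <T extends Throwable> T assertThrows(Class<T> expected, Runnable code) {
        try {
            code.run();
        } catch (Throwable t) {
            if (expected.isInstance(t)) return expected.cast(t);
            throw new AssertionError("unexpected exception type: " + t, t);
        }
        throw new AssertionError("expected " + expected.getSimpleName() + " but nothing was thrown");
    }

    public static void main(String[] args) {
        IllegalArgumentException e = assertThrows(IllegalArgumentException.class,
                () -> { throw new IllegalArgumentException("bad input"); });
        // The returned exception object can then be examined:
        if (!"bad input".equals(e.getMessage())) throw new AssertionError("wrong message");
        System.out.println("exception asserted");
    }
}
```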
  • Different branches of production code should be checked in different tests.
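A sketch of one-test-per-branch, using a made-up classify method with two branches — separate tests keep failures easy to localize:

```java
public class BranchTests {
    // Hypothetical production code with two branches.
    static String classify(int n) {
        return (n >= 0) ? "non-negative" : "negative";
    }

    // One test per branch.
    static void testNonNegativeBranch() {
        if (!classify(5).equals("non-negative")) throw new AssertionError();
    }

    static void testNegativeBranch() {
        if (!classify(-3).equals("negative")) throw new AssertionError();
    }

    public static void main(String[] args) {
        testNonNegativeBranch();
        testNegativeBranch();
        System.out.println("both branches covered");
    }
}
```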
  • Avoid if-statements in test code, and be especially careful with assertions inside if/if-else blocks, as the test may pass without ever executing them. To prevent this, add an else branch that calls fail(), or assert the condition itself:
// bad - passes without checking anything when x is false
if (x) {
    // some logic
    assertTrue(y);
}

// good - fails when x is false
if (x) {
    // some logic
    assertTrue(y);
} else {
    fail();
}

// also good - assert the condition itself
assertTrue(x);
// some logic
assertTrue(y);