- Regular Execution: Run tests regularly, ideally before every commit, for optimal quality assurance. In particular, run all relevant tests before pushing code or creating a pull/merge request. Continuous integration practices help enforce that code pushed by any developer is actually tested.
- Prefer a functional programming style for data processing tasks: avoiding mutable state and side effects makes such code less error-prone and easier to test.
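As a minimal sketch (plain Java streams; `Order`, `isPaid`, and `getId` are hypothetical names), the functional pipeline avoids the shared mutable accumulator of the loop version:
// imperative: mutates an external accumulator
List<String> paidIds = new ArrayList<>();
for (Order order : orders) {
    if (order.isPaid()) {
        paidIds.add(order.getId());
    }
}

// functional: no shared mutable state, easy to test as a single expression
List<String> paidIdsFunctional = orders.stream()
        .filter(Order::isPaid)
        .map(Order::getId)
        .collect(Collectors.toList());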
- It's common to create test users and test data to facilitate the testing process.
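For example (a hypothetical sketch; `User`, `Role`, and the factory method are made-up names), a small factory keeps test data creation in one place and gives every test a ready-to-use user:
// hypothetical helper: creates a test user with sensible defaults
static User createTestUser() {
    return new User("test-user-" + UUID.randomUUID(), "test@example.org", Role.STANDARD);
}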
- Don't reinvent the wheel and use existing test libraries. There are proven solutions that minimize the effort of creating tests.
- Use a common test structure convention by dividing the test logic into three parts (see the sketch after this list):
- given (a context → setting up data, fields, and objects)
- when (something happens → execute production code)
- then (a certain outcome is expected → check the result via assertions)
- Alternative common names for the three steps: arrange, act, assert
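A minimal sketch of this structure, assuming JUnit 5 and a hypothetical `ShoppingCart`/`Item` class:
@Test
void totalPriceIsSumOfItemPrices() {
    // given: a cart with two items
    ShoppingCart cart = new ShoppingCart();
    cart.add(new Item("book", 10));
    cart.add(new Item("pen", 2));

    // when: the total is calculated
    int total = cart.totalPrice();

    // then: the result matches the expected sum
    assertEquals(12, total);
}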
- Use additional simple high-level checks. For example, when working with a collection, checking the number of elements is a good indicator of unexpected or missing elements.
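For instance (hypothetical service and data), asserting the collection size first catches unexpected or missing elements before the detailed assertions run:
List<String> names = service.findActiveUserNames(); // hypothetical unit under test
assertEquals(3, names.size()); // high-level check: nothing missing, nothing extra
assertTrue(names.contains("alice"));
assertTrue(names.contains("bob"));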
- More is better. When in doubt, it is better to write one test too many than one test too few. Possible duplication is a lesser evil than not testing at all.
- Keep test resources close to the tests to make their use easy to understand. Test resources should be placed in the same location as the test if the resource is only needed by that test.
- Avoid threads
- Concurrent programming is prone to introducing errors and very difficult to test and debug properly.
- If threads are necessary, keep the amount of asynchronous code to a minimum.
- Separate synchronous and asynchronous logic to test them separately.
- Prefer waiting on explicit termination conditions over fixed wait times; this usually speeds up tests dramatically and makes them less flaky (see the sketch below).
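A sketch of the last point, assuming JUnit 5 and java.util.concurrent; the worker class and its callback are hypothetical. Waiting on a latch returns as soon as the work is done, while a fixed sleep always pays the full wait time and may still be too short on a slow machine:
// brittle and slow: always waits the full two seconds, and may still be too short
worker.start();
Thread.sleep(2000);
assertTrue(worker.isDone());

// better: returns as soon as the worker signals completion, fails after a timeout
CountDownLatch done = new CountDownLatch(1);
worker.setOnFinished(done::countDown); // hypothetical completion callback
worker.start();
assertTrue(done.await(2, TimeUnit.SECONDS));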
- Avoid files
- I/O is slow and increases test time unnecessarily.
- Temporary files may persist because you forgot to delete them or placed them in folders where they will never be found. At the very least, delete such files at the very beginning of the test to ensure their absence. Deleting files at the end of a test is error-prone: if the test fails or aborts, those files may persist and affect subsequent tests.
- Prefer data streams over files for testing.
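A sketch of the last point (hypothetical `CsvParser`/`CsvRow`; standard java.io classes): feeding the code an in-memory stream avoids creating, locating, and cleaning up temporary files.
// instead of writing and cleaning up a temp file, feed the parser an in-memory stream
String csv = "id;name\n1;alice\n2;bob\n";
InputStream in = new ByteArrayInputStream(csv.getBytes(StandardCharsets.UTF_8));
List<CsvRow> rows = CsvParser.parse(in); // hypothetical unit under test
assertEquals(2, rows.size());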
- Don't leave tests unfinished, don't just comment out @Test (annotation in Java to mark a function as a test), and don't leave empty test bodies as these things are confusing and waste the reader's time. If you come across this kind of code, try to learn its purpose, ask people who worked on it, and rebuild it. If that is not possible or worth the time, then delete it as the dead code that it is. To indicate that tests need to be implemented, an issue/work unit could be created and referenced, or a TODO comment with an explanatory description could be added.
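If a test really cannot be implemented yet, JUnit 5's @Disabled annotation with an explanatory reason, ideally referencing an issue, is clearer than a commented-out @Test or an empty body; the issue number below is made up:
@Test
@Disabled("Needs the new pricing-service test double, see issue #123")
void appliesDiscountForPremiumCustomers() {
    // TODO implement once the pricing test double exists
}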
- A test should fail if an expected exception is not thrown. Test libraries usually provide methods to handle such cases. If the exception object must be examined, one option is a try-catch block with a fail() call (as provided by JUnit, for example) at the end of the try block. Without the fail() call to make the test fail, the test would wrongly pass if no exception were thrown:
try {
    // code that should throw the exception
    fail(); // executed only when no exception was thrown
} catch (Exception e) {
    // examine and assert the exception
}
- Different branches of production code should be checked in different tests.
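For example (hypothetical production method `classify(int)` containing an if/else), each branch gets its own focused test instead of one test walking through both:
@Test
void classifiesNegativeNumbers() {
    assertEquals("negative", classify(-5));
}

@Test
void classifiesNonNegativeNumbers() {
    assertEquals("non-negative", classify(5));
}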
- Avoid if-statements in test code. If they cannot be avoided, be careful with assertions inside if/if-else blocks, as the test may pass without ever executing them. A few ways to prevent this:
// bad - still passes when x == false and y == false
if (x) {
    // some logic
    assertTrue(y);
}

// good - passes only if both conditions are met
if (x) {
    // some logic
    assertTrue(y);
} else {
    fail();
}

// also good
assertTrue(x);
// some logic
assertTrue(y);
- Test Case Logging: If certain production code needs to be called several times with slightly different arguments, it may be useful to iterate over a list of these arguments, calling the production code and then asserting the result on each iteration. It is then often good practice to print out each element before each assertion. If an element causes a test to fail, the terminal output will immediately show which element caused the error.
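A minimal sketch (the `normalize` function is hypothetical): printing each argument before its assertion makes the failing element obvious in the test output.
Map<String, String> cases = Map.of("Foo", "foo", " bar ", "bar", "BAZ", "baz");
for (Map.Entry<String, String> c : cases.entrySet()) {
    System.out.println("checking input: '" + c.getKey() + "'"); // identifies the failing case
    assertEquals(c.getValue(), normalize(c.getKey())); // hypothetical unit under test
}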
- Bug Coverage Responsibility: If you find a bug or a case that has not yet been tested, it is your duty to create a test that covers it, so that the software is stable against that problem from that moment on.
- Range-Based Assertions: Inaccurate results should never be asserted against an exact value, but only within an expected range of approximation. This includes all assertions of floating-point numbers.
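With JUnit, floating-point assertions take a tolerance (delta) as an additional argument; the values here are illustrative:
double result = 0.1 + 0.2;
// asserting exact equality with 0.3 would fail due to floating-point rounding
assertEquals(0.3, result, 1e-9);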
- Avoid Cascading Validation: It would be very cumbersome to perform input validations, such as the famous null checks, and corresponding tests for each unit. A common solution is to define a module consisting of several units. The unit at the top, which receives the input for the first time, validates it once. All subsequent units can safely assume that the input is not null when processing it, and no longer need to explicitly check or write tests for such cases.
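A sketch of this idea (all names hypothetical): the public entry point validates the input once, and the internal units may assume it is valid.
public Report generateReport(Data data) {
    Objects.requireNonNull(data, "data must not be null"); // validated once at the boundary
    Summary summary = summarize(data); // internal units may assume data != null
    return render(summary);
}
// no null checks (and no null-related tests) needed inside summarize() and render()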
- Stick to Performance Requirements: Once a performance test is passed, there is no need to optimize the performance of the code. Any effort in that direction is a waste of resources.