Code Coverage Definition: The percentage of classes/methods/lines of code executed during testing. There are class coverage, method coverage, and line coverage, with line coverage being the most important.
100% code coverage is the goal. Untested code is insecure code and prone to bugs.
Leverage coverage tools that measure which parts of the code are executed during the test run.
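As an illustration, here is a minimal sketch of measuring line coverage with the third-party coverage.py package; the package must be installed, and the "tests" directory name is an assumption:

```python
# Minimal sketch: measuring line coverage with coverage.py
# (assumes the third-party "coverage" package is installed and that
# the test suite lives in a "tests" directory).
import unittest

import coverage

cov = coverage.Coverage()
cov.start()

# Run the whole test suite while coverage data is being recorded.
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()
cov.report(show_missing=True)  # per-file line coverage plus the lines never executed
```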
Meaningful Tests: Don't add tests that provide no value or merely re-test something that is already covered.
Strictness: If even one test fails, it requires the developer's attention. They must not proceed with other work until the problem is fixed and all tests pass without exception.
Falsifiability = The ability to fail.
Wrongfully failing tests are easy to detect, because a good developer will pay attention to all failing tests and fix the problems before moving on.
Wrongfully passing tests lack the ability to fail properly and are hard to detect because everything seems fine, which is why it is so important to focus on preventing them. They do damage by appearing to validate functionality that they do not actually exercise. A properly executed Test-Driven Development workflow is one of the best ways to avoid these kinds of problems, which are described in this article.
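A minimal sketch of the difference, using Python's unittest and a made-up apply_discount function: the first test mirrors the production formula and therefore cannot fail, while the second states the expected result independently and is falsifiable.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    # Buggy production code: the discount is added instead of subtracted.
    return price + price * percent / 100

class DiscountTest(unittest.TestCase):
    def test_wrongfully_passing(self):
        # Cannot fail: the expected value is computed with the same (wrong)
        # formula as the production code, so the test only confirms that the
        # code agrees with itself.
        price = 100.0
        self.assertEqual(apply_discount(price, 10), price + price * 10 / 100)

    def test_falsifiable(self):
        # States the expected result independently; this test fails as long
        # as the bug above is present.
        self.assertEqual(apply_discount(100.0, 10), 90.0)

if __name__ == "__main__":
    unittest.main()
```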
Determinism: Repeated tests should always produce the same result. The success of tests should never depend on anything other than the production code. Avoid sources of randomness that could cause a test to fail even though the production code is fine. Typical sources include random number generators; threads; host-specific infrastructure such as the operating system, absolute paths, or hardware characteristics like I/O speed and CPU load; networking over the local network or the Internet (the network or other hosts may sometimes be down); time and timestamps; and pre-existing values in the database or files on the host system that are not part of the source code.
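One common way to keep such tests deterministic is dependency injection: hand the code a seeded random number generator and a fixed timestamp instead of letting it reach for the real clock. A minimal sketch, where create_token is a hypothetical production function:

```python
import random
import unittest
from datetime import datetime, timezone

def create_token(rng: random.Random, now: datetime) -> str:
    # Hypothetical production function: the RNG and the clock are injected,
    # so tests can control both sources of non-determinism.
    return f"{now:%Y%m%d}-{rng.randint(0, 9999):04d}"

class TokenTest(unittest.TestCase):
    def test_token_is_reproducible(self):
        fixed_time = datetime(2024, 1, 15, tzinfo=timezone.utc)
        # Two RNGs with the same seed produce the same sequence, so the
        # result is identical on every run and on every host.
        token_a = create_token(random.Random(42), fixed_time)
        token_b = create_token(random.Random(42), fixed_time)
        self.assertEqual(token_a, token_b)

if __name__ == "__main__":
    unittest.main()
```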
Single Action Execution: The entire setup and testing process should require a single click or shell command. Otherwise, the tests will not be run regularly, which weakens the quality assurance of the production code.
Independence: Tests should be independent and not influenced by other tests; the order in which tests are executed should not matter.
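A minimal sketch of independence with unittest: every test gets a fresh fixture in setUp, so no test can observe another test's leftovers and the execution order is irrelevant.

```python
import unittest

class ShoppingCartTest(unittest.TestCase):
    def setUp(self):
        # A fresh cart for every test: no shared state survives between tests,
        # so the order in which they run cannot change the outcome.
        self.cart = []

    def test_cart_starts_empty(self):
        self.assertEqual(self.cart, [])

    def test_adding_an_item(self):
        self.cart.append("apple")
        self.assertEqual(len(self.cart), 1)

if __name__ == "__main__":
    unittest.main()
```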
Clean Test Code: Maintain the same high standards for test code as for production code. The only exception is execution speed: tests need to be fast enough to run frequently, but they do not have to be highly optimized like production code.
Easy Bug Tracing: Failed tests should clearly indicate the location and reason for the failure. Measures to achieve this could include using only one assertion per test, using error logs, and using unique exceptions and error messages where appropriate.
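A small sketch of the "one assertion per test plus a descriptive message" idea; the parse_age function and its values are made up for illustration:

```python
import unittest

def parse_age(text: str) -> int:
    age = int(text)
    if age < 0:
        # A specific exception message makes the failure cause obvious.
        raise ValueError(f"age must be non-negative, got {age}")
    return age

class ParseAgeTest(unittest.TestCase):
    def test_valid_age_is_parsed(self):
        # One assertion per test: if it fails, the test name alone says what broke.
        self.assertEqual(parse_age("42"), 42, "parsing a plain positive number failed")

    def test_negative_age_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_age("-1")

if __name__ == "__main__":
    unittest.main()
```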
Test behavior, not implementation: Always test against the public interfaces of the production code rather than the implementation behind them. The developer should be free to modify the implementation as long as it produces the correct results. This also reduces the coupling between test and production code, so fewer tests need to be adapted when the implementation changes. Avoid reflection, and avoid weakening encapsulation by making private members public just for testing. However, weakened encapsulation is a lesser evil than not testing at all.
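A minimal sketch of testing behavior through the public interface: the test only calls the public method and checks the observable result, so the private helper can be renamed or replaced without touching the test. All names are illustrative.

```python
import unittest

class PriceCalculator:
    def total(self, prices: list[float]) -> float:
        # Public behavior under test.
        return self._round(sum(prices))

    def _round(self, value: float) -> float:
        # Private implementation detail; tests should not call this directly.
        return round(value, 2)

class PriceCalculatorTest(unittest.TestCase):
    def test_total_of_several_items(self):
        # Only the public interface is used; the internals may change freely
        # as long as this observable behavior stays correct.
        self.assertAlmostEqual(PriceCalculator().total([1.10, 2.20]), 3.30, places=2)

if __name__ == "__main__":
    unittest.main()
```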
'Program testing can be used to show the presence of bugs, but never to show their absence!' (Edsger W. Dijkstra).
Follow good testing practices to avoid a large number of bugs, but be aware that bugs may still occur and be prepared for them.
Requirement fulfillment over specific solutions: Don't limit tests to a specific solution when multiple valid solutions exist. Test for compliance with the solution requirements, not for one specific output. This provides flexibility by accepting any valid solution. For example, in a pathfinding problem, validate the returned path by its total travel cost (it must equal the known minimum), not by comparing it to one particular route.
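A minimal sketch of testing the requirement rather than one specific solution: the hypothetical find_path function may return any route, and the test only checks that the route starts and ends correctly, uses existing edges, and has the minimum total cost.

```python
import unittest

# Toy graph: edge costs between nodes. Two shortest paths from A to D exist.
GRAPH = {
    ("A", "B"): 1, ("B", "D"): 1,
    ("A", "C"): 1, ("C", "D"): 1,
}

def find_path(start: str, goal: str) -> list[str]:
    # Hypothetical solver; it might return A-B-D or A-C-D, both of cost 2.
    return ["A", "B", "D"]

def path_cost(path: list[str]) -> int:
    # Raises KeyError if the path uses an edge that does not exist.
    return sum(GRAPH[(a, b)] for a, b in zip(path, path[1:]))

class PathfindingTest(unittest.TestCase):
    def test_path_meets_requirements(self):
        path = find_path("A", "D")
        # Requirements: correct start and end, valid edges, minimum cost.
        # Which intermediate nodes are visited is irrelevant.
        self.assertEqual(path[0], "A")
        self.assertEqual(path[-1], "D")
        self.assertEqual(path_cost(path), 2)

if __name__ == "__main__":
    unittest.main()
```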
Conciseness: Tests should be small, fast, and specific. For example, we don't need a million test data points but perhaps five, each covering just one specific use case or border case. The exception is load/stress tests, which are specifically designed to send huge amounts of data to an application.
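As a sketch, a handful of carefully chosen data points, each covering exactly one rule or border case, can be expressed compactly with subTest; the is_leap_year function is illustrative:

```python
import unittest

def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_representative_cases(self):
        # Five data points, each covering one rule or border case,
        # instead of brute-forcing thousands of years.
        cases = {
            2024: True,    # ordinary leap year
            2023: False,   # ordinary non-leap year
            1900: False,   # century not divisible by 400
            2000: True,    # century divisible by 400
            0: True,       # lower border case
        }
        for year, expected in cases.items():
            with self.subTest(year=year):
                self.assertEqual(is_leap_year(year), expected)

if __name__ == "__main__":
    unittest.main()
```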