Acceptance Tests

Guidelines

Acceptance tests provide an unambiguous completion criterion for stories, the Definition of Done. A story is done when its associated acceptance tests pass, along with all other existing tests. Only then may the story's estimated points be added to the charts at the end of an iteration.

  • Integration Of All Components: Acceptance testing is the ultimate test of whether a feature works. To check this, the software must be deployed in production mode and then tested from the user's point of view; if there is a GUI, this is usually done by interacting with it. A common tool for simulating such user interactions is Cypress (see the sketch after this list).
  • Always Production Ready: Passing acceptance tests prove that the software works correctly, which keeps it stable and prevents developers from inadvertently breaking features when changing the code. Functionality therefore cannot regress unnoticed. This does not, however, prevent the code from rotting beneath the surface, so the code must also be kept clean, not just functional.
  • Execution Frequency: Since running all acceptance tests can take a very long time, it is common to schedule an automated run once a day at midnight. Any failures are then addressed immediately the next day.
  • Implementation Timing: Stories should not have to wait for their acceptance tests. Ideally, writing acceptance tests begins in the previous iteration, and no later than right after the IPM, so that they are finished before the story they verify. Since it is impossible to know for certain which stories will be implemented in the next iteration, testers can ask stakeholders which stories are likely to come next. If acceptance tests are consistently late, QA needs to hire more people. QA and developers should communicate and negotiate the test structure, perhaps even pair program, instead of taking a take-it-or-leave-it approach of writing tests without consulting the developers.
  • Level of Detail: The business should specify requirements at a level that is specific enough for developers to know what to do, yet short enough to leave out implementation details. Acceptance tests are the specification of a story.
  • Requirement Automation: Wherever practically feasible, requirements should be written as automated tests.
  • Acceptance Test Anatomy: Acceptance tests always consist of technical acceptance test code and may additionally contain business acceptance test code. Business acceptance test code is written in natural language (English) and, when executed, runs the technical acceptance test code written in a programming language. Acceptance tests are thus the link between easy-to-understand business requirements and test automation.
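
A minimal sketch of such a user-level test in Cypress (TypeScript), assuming a hypothetical login page at /login with data-testid attributes on its elements; all selectors, URLs, and credentials here are illustrative:

// login.cy.ts - drives the deployed application through its GUI like a user would
describe('Login', () => {
  it('shows the dashboard after a successful login', () => {
    cy.visit('/login');                                     // open the login page
    cy.get('[data-testid="username"]').type('alice');       // fill in credentials
    cy.get('[data-testid="password"]').type('correct-password');
    cy.get('[data-testid="login-button"]').click();         // submit the form
    cy.contains('Dashboard').should('be.visible');          // assert from the user's point of view
  });
});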

Sample Acceptance Test

Below is the business acceptance test code for a "Login" scenario, written in the Gherkin language used by the Cucumber acceptance testing framework. For example, the line 'Given I am on the login page' calls the technical test code that requests the login page of the website; a sketch of that technical code follows the scenario.

Feature: User Login
  In order to access my account
  As a user
  I want to be able to log in using my credentials

  Scenario: Successful login
    Given I am on the login page
    When I enter a valid username and password
    And I click the login button
    Then I should see my dashboard
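
The technical acceptance test code behind these steps might look like the following sketch. It assumes the cucumber-js framework with Playwright for browser automation; the URL and selectors are illustrative placeholders, not part of the original example:

// login.steps.ts - wires the Gherkin lines above to executable browser automation
import { Given, When, Then, Before, After } from '@cucumber/cucumber';
import { chromium, Browser, Page } from 'playwright';

let browser: Browser;
let page: Page;

Before(async () => {
  browser = await chromium.launch();   // start a fresh browser for each scenario
  page = await browser.newPage();
});

After(async () => {
  await browser.close();
});

Given('I am on the login page', async () => {
  await page.goto('https://example.com/login');   // request the login page
});

When('I enter a valid username and password', async () => {
  await page.fill('#username', 'alice');
  await page.fill('#password', 'correct-password');
});

When('I click the login button', async () => {    // 'And' steps reuse the previous keyword
  await page.click('#login-button');
});

Then('I should see my dashboard', async () => {
  await page.waitForSelector('text=Dashboard');   // waits until the dashboard is visible
});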

  • Ease of Comprehension: As shown above, technical keywords are replaced with easy-to-understand business vocabulary, which allows business people to read and even write executable tests. Business acceptance test code should always be accessible to stakeholders.

Tester Roles

Larger software teams in particular have distinct roles for people writing acceptance tests. All testers collaborate with the developers to ensure that the acceptance tests are technically meaningful.

  • Business Analysts (BA): Work with customers to gain an accurate understanding of the requirements and translate it into 'happy path' tests, which assume the ideal sequence of events in which everything works as expected: all user inputs are correct, the software is used as the developers intended, and so on. The usual workflow is that the business and the BA write the business acceptance test code together; the BA then automates the tests by writing the technical acceptance test code on their own.
  • Quality Assurance (QA): Writes acceptance tests for the 'unhappy path', i.e. everything that could possibly go wrong: invalid user input, unintended use of the software, and so on. QA usually requires more resources, as there are many more test cases to cover. QA staff should be technically experienced and have a good eye for triggering bugs. Because the business never sees these unhappy-path tests, no business acceptance test code is required, which reduces implementation effort (see the sketch after this list).
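
A sketch of such an unhappy-path test, again in Cypress (TypeScript) with illustrative selectors and messages; note that no Gherkin layer is written on top of it:

// login-unhappy.cy.ts - QA-style test without a business-readable layer
describe('Login (unhappy path)', () => {
  it('rejects an invalid password and stays on the login page', () => {
    cy.visit('/login');
    cy.get('[data-testid="username"]').type('alice');
    cy.get('[data-testid="password"]').type('wrong-password');  // invalid input
    cy.get('[data-testid="login-button"]').click();
    cy.contains('Invalid credentials').should('be.visible');    // error is shown
    cy.url().should('include', '/login');                       // user was not logged in
  });
});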

While BAs provide rather late feedback, checking whether a feature works as expected at the end of story implementation, QAs need to collaborate with developers early: when the unhappy-path tests find problems, the developers need enough time to fix them.