Acceptance Tests

Version 2.10 by chrisby on 2023/10/13 13:09

Guidelines

Acceptance Tests provide an unambiguous completion criterion for stories, the Definition of Done. A story is done when its associated acceptance tests have passed, along with all other existing tests. Only then can the story's estimated points be added to the charts at the end of an iteration.

  • Integration Of All Components: Acceptance testing is the ultimate test of whether a feature works or not. To check this, the software must be deployed in production mode and then tested from the user's point of view; if there is a GUI, this is usually done by interacting with it. Common tools used to simulate such user interactions are Cypress and Selenium.
  • Always Production Ready: Acceptance tests ensure the stability of the software: as long as they pass, the features they cover are proven to work. This prevents developers from inadvertently breaking features when changing the code, so functionality cannot regress. However, passing tests do not prevent the code from rotting beneath the surface, so code should still be kept clean, not just functional.
  • Execution Frequency: Since running acceptance tests is expensive, it is common to schedule an automated run once a day at midnight. If these runs uncover bugs, they are addressed first thing the next day.
  • Implementation Timing: Stories should not have to wait for acceptance tests to be completed. Ideally, writing the acceptance tests begins in the previous iteration, but no later than right after the IPM, so that they are finished before the story they verify. Since it is impossible to know with certainty which stories will be implemented in the next iteration, testers can ask stakeholders which stories are likely to come next. If acceptance tests are consistently late, QA needs to hire more people. QA and developers should communicate and negotiate the test structure, perhaps even pair, instead of a "take it or leave it" tactic of writing tests without consulting the developers.
  • Level of Detail: The business should specify requirements at a level of detail that is specific enough for developers to know what to do, yet short enough to leave out implementation details. Acceptance tests are the story's specification.
  • Requirement Automation: Wherever practically feasible, requirements should be written as automated tests.
  • Acceptance Test Anatomy: Acceptance tests always consist of technical test code and may additionally contain business test code. Business test code is written in natural language and, when executed, runs the technical test code written in a programming language. Acceptance tests are thus the link between easy-to-understand business requirements and test automation.

Sample Acceptance Test

Below is the business test code of a "Login Scenario" written in the "Gherkin" language, which is used, for example, by the Cucumber acceptance testing framework:

Feature: User Login
  In order to access my account
  As a user
  I want to be able to log in using my credentials

  Scenario: Successful login
    Given I am on the login page
    When I enter a valid username and password
    And I click the login button
    Then I should see my dashboard

  • Ease of Comprehension: As shown above, the technical keywords in business test code are replaced with easy-to-understand ones, allowing business people to read and even write executable tests. Business test code should always be available to stakeholders.
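To make the link between business and technical test code concrete, here is a minimal sketch of step definitions the login scenario could drive. In Cucumber (or Python's behave) each step function would be bound to its Gherkin line by a matching annotation or decorator; the plain functions and the `FakeApp` class below are hypothetical simplifications to keep the sketch self-contained, where a real suite would exercise the deployed application through a tool like Selenium or Cypress.

```python
class FakeApp:
    """Hypothetical stand-in for the deployed application under test."""
    VALID_CREDENTIALS = ("alice", "s3cret")  # assumed seeded test account

    def __init__(self):
        self.page = None

    def open_login_page(self):
        self.page = "login"

    def submit_login(self, username, password):
        if (username, password) == self.VALID_CREDENTIALS:
            self.page = "dashboard"

# Given I am on the login page
def given_on_login_page(app):
    app.open_login_page()

# When I enter a valid username and password
# And I click the login button
def when_log_in_with_valid_credentials(app):
    app.submit_login("alice", "s3cret")

# Then I should see my dashboard
def then_dashboard_is_shown(app):
    assert app.page == "dashboard"

# The framework would execute the steps in scenario order:
app = FakeApp()
given_on_login_page(app)
when_log_in_with_valid_credentials(app)
then_dashboard_is_shown(app)
```

In a real framework the mapping from Gherkin line to step function is what lets business people change the scenario text without touching the technical test code.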

Tester Roles

Larger software teams in particular have distinct roles for people writing acceptance tests. All testers collaborate with the developers to ensure that the acceptance tests are technically meaningful.

  • Business Analysts (BA): Work with customers to gain an accurate understanding of the requirements and translate it into 'happy path' tests, which assume the ideal sequence of events where everything works as expected: all user inputs are correct, the software is used as the developers intended, and so on. Typically, the business writes the business test code for these acceptance tests, and the business analyst then automates them by writing the technical test code.
  • Quality Assurance (QA): Writes acceptance tests for the 'unhappy path', i.e. everything that could possibly go wrong: invalid user input, unintended use of the software, etc. QA usually requires more resources, as there are many more test cases to cover. QA staff should be technically experienced, with a good eye for triggering bugs. Because the business does not read these unhappy-path tests, no business test code is required, which reduces overhead.
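A sketch of what QA-style unhappy-path tests for the login feature might look like: purely technical test code with no business-readable layer. The `LoginService` class, its credentials, and the lockout rule are all hypothetical stand-ins for the deployed system; only the shape of the tests is the point.

```python
class LoginService:
    """Hypothetical fake of the application's login behavior."""
    VALID_CREDENTIALS = ("alice", "s3cret")  # assumed seeded test account
    MAX_ATTEMPTS = 3                         # assumed lockout policy

    def __init__(self):
        self.failed_attempts = 0

    def log_in(self, username, password):
        if not username or not password:
            return "error: missing credentials"
        if self.failed_attempts >= self.MAX_ATTEMPTS:
            return "error: account locked"
        if (username, password) != self.VALID_CREDENTIALS:
            self.failed_attempts += 1
            return "error: invalid credentials"
        return "dashboard"

def test_unhappy_paths():
    svc = LoginService()
    # Invalid user input: empty fields are rejected, not crashed on.
    assert svc.log_in("", "") == "error: missing credentials"
    # Wrong password does not grant access.
    assert svc.log_in("alice", "guess") == "error: invalid credentials"
    # Unintended use: repeated failures lock the account,
    # so even the correct password is refused afterwards.
    for _ in range(LoginService.MAX_ATTEMPTS):
        svc.log_in("alice", "guess")
    assert svc.log_in("alice", "s3cret") == "error: account locked"

test_unhappy_paths()
```

Note that each test asserts a safe failure rather than a success, which is why the unhappy path tends to produce far more cases than the happy path.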

While BAs provide rather late feedback, checking whether a feature works as expected at the end of story implementation, QA needs to collaborate with developers early: if the unhappy-path tests uncover problems, the developers need enough time to fix them.