The tests contained here represent the functional end-to-end (E2E) test suite of the OpenLMIS Ref Distro.
This test suite has these goals:
- Verify high-level functional OpenLMIS feature requirements.
- Serve as living documentation for feature acceptance.
- Report on the suitability of OpenLMIS features for use on high-latency (i.e. typical last-mile) networks.
The suite is built with these technologies:
- Taurus in Docker - Dockerized environment for running tests and defining the test context (iterations, provisioning, etc.). Also supports a virtual display for headless testing.
- Cucumber - Allows our tests to be written in human-readable prose, including tabular data input.
- JUnit - Reporting on the test cases that were run.
- Webdriver.io - Node.js wrapper of Selenium's WebDriver. Allows our WebDriver code to be written in JavaScript and includes a basic REPL.
- ES6 - Pages and steps are defined in ECMAScript 6.
- Selenium WebDriver - Drives the browser so that a Selenium test may use it directly through an API.
- BrowserMob-Proxy - Captures all network traffic for reporting on connections, payload size, timing, etc. Also used to simulate slow networks.
- Appium (planned) - API extension to WebDriver for testing mobile (iOS/Android) applications (native or hybrid).
To run the tests in Docker:
- Run tests: `docker-compose run funtest`
- Clean up afterwards: `docker-compose down`
To run the tests without Docker:

Install (once):
- Install yarn.
- Run `yarn install`.

Test:
- Start the local web server (once, before testing): `yarn run local-webserver`
- Run the tests: `yarn run wdio`
Use the REPL (install and start first):
- Start Selenium: `yarn run selenium-standalone start`
- Run the REPL: `yarn run wdio repl`
Results:
- Feature acceptance: `./build/WDIO.xunit...xml`
- Network requests (HAR): `./build/openlmis.har`
- To inspect network results, paste the HAR file into a HAR Viewer.
Tests are broken down by:
- Features - human readable, behavior driven, acceptance definitions.
- Steps - the definition of each step of a feature.
- Pages - encapsulating the "how" of what a page allows a user to do.
Features are written in Gherkin to describe the high-level acceptance criteria for a particular piece of user-expected behavior. Each feature may have one or more scenarios which outline specific uses of that feature.
Features are in `src/features/featureName/` (where `featureName` is the name of the feature):
- Define one feature file per feature - ideally one feature per user story.
- Append `.feature` to the filename.
- Scenarios of a feature are different aspects or uses of that feature.
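For illustration, a minimal feature file might look like the following. The feature, scenario, and step text here are hypothetical, not taken from the actual suite:

```gherkin
Feature: Log in
  As an OpenLMIS user, I want to log in so that I can reach my home page.

  Scenario: Successful login with valid credentials
    Given I am on the login page
    When I log in as "administrator"
    Then I should see the home page
```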
Steps are the definitions behind features. These steps should be human-readable and free of page logic (see Page Objects and Selectors).
Steps follow the typical given, when, then of BDD:
- Given: the state of the world before the behavior under test.
- When: the behavior to test.
- Then: the expected outcome of the behavior.
Steps are located alongside their features in `src/features/featureName/`:
- Break given, when, then into separate files.
- Append `.steps.js` to the end of the filename, e.g. `given.steps.js`.
- Don't declare the same step definition more than once. Steps in Cucumber are in a global namespace - even though step definition files are located near their `.feature` files, no two features may declare the same step definition.
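As a sketch of how a step binds feature text to code: in the real suite the handler is registered with Cucumber's `Given`/`When`/`Then` functions, but the pattern-matching core can be shown standalone. The step text, pattern, and handler below are hypothetical:

```javascript
// Hypothetical step pattern: binds 'I log in as "<user>"' to a handler.
// In a real *.steps.js file this pair would be registered with Cucumber
// (e.g. Given(stepPattern, logInAs)) rather than invoked directly.
const stepPattern = /^I log in as "([^"]*)"$/;

function logInAs(username) {
  // In the real suite this would delegate to a page object,
  // e.g. LoginPage.login(username); here it just reports its argument.
  return 'logging in as ' + username;
}

// Simulate Cucumber matching a Gherkin line against the pattern:
const match = 'I log in as "administrator"'.match(stepPattern);
const result = match ? logInAs(match[1]) : null;
console.log(result); // → logging in as administrator
```

The capture group `([^"]*)` is how the quoted argument in the feature text reaches the step's handler as a parameter.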
The Page Object pattern is used to encapsulate the actions that a user may take in any particular piece of the workflow. As part of this each page object encapsulates selectors and other particulars (e.g. wait, visible, focus, etc) leaving the Step definitions free of the Page Definitions. The hope is that not only is Page logic encapsulated to the Page Objects, but that the step definitions could apply to any UI of OpenLMIS, such as mobile or web.
Pages:
- Expose an "API" that defines what a user can do: give a username and password, click a button, press keys on the keyboard, gesture, etc.
- May represent a full page (i.e. a screen), or a reusable piece such as a modal that shows up in multiple places.
- Define selectors.
- Workflow agnostic - not the place to define steps.
Pages are in `src/pages`, and should follow the naming pattern `pageName.page.js`, where `pageName` is the name of the page.
Other useful commands:
- Clean test output and installed packages: `yarn clean`
- Validate test style: `yarn run test:validate`
E2E tests inevitably end up changing the data which the system under test has (i.e. it changes its state). A couple of conventions are in use for this suite:
- Tests should presume the existence of demo-data.
- One test might end up changing the data that another relies on. If this occurs, we should build a runtime capability to reset demo-data.
This testing project was based on the boilerplate WebDriver.io project cucumber-boilerplate, which we thank for the head start.