As the framework carries significant technical debt, many of the currently configured tests do not reflect the practices we want to follow in newly introduced tests. Please follow this document as the only guideline; it also provides links to tests that serve as good examples to replicate.
Tests are configured with the Mocha test framework and can be run with the following command:

```
npm test
```
All new tests should be configured with the help of the `runServerless` util - it's the only way to test functionality against a completely initialized `serverless` instance, and it's the only scenario that reflects real-world usage.

Check the documentation of `runServerless` at `@serverless/test/docs/run-serverless`. Note that `runServerless`, as configured at `./utils/run-serverless.js`, supports two additional options (`fixture` and `configExt`), which provide an out-of-the-box setup to run a Serverless instance against a prepared fixture with an optionally extended service configuration.
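The `configExt` idea can be sketched as follows. This is a minimal, self-contained illustration, not the real util: the actual `runServerless` at `./utils/run-serverless.js` additionally boots a full Serverless instance and resolves with artifacts such as the generated CloudFormation template. The `deepMerge` helper and the inline fixture config below are stand-ins invented for this sketch.

```javascript
'use strict';

// Sketch only: configExt is deep-merged over the fixture's base service
// configuration, so one fixture can back many test scenarios.
const deepMerge = (base, ext) => {
  const out = { ...base };
  for (const [key, value] of Object.entries(ext)) {
    out[key] =
      value && typeof value === 'object' && !Array.isArray(value)
        ? deepMerge(base[key] || {}, value)
        : value;
  }
  return out;
};

// Stand-in for the "function" fixture's serverless.yml
const fixtureConfig = {
  service: 'fixture-function',
  provider: { name: 'aws' },
  functions: { basic: { handler: 'index.handler' } },
};

// configExt extends the fixture "on the spot" without creating a new one
const merged = deepMerge(fixtureConfig, {
  provider: { runtime: 'nodejs16.x' },
});

console.log(merged.provider); // → { name: 'aws', runtime: 'nodejs16.x' }
```

Extending an existing fixture this way keeps the fixture set small, which is why it is preferred over adding new fixtures.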
As `runServerless` tests are expensive, it's good to keep the number of `runServerless` runs minimal for a given scope of problems. Ideally, a single service example should cover as many test cases as possible (the ALB health check tests are a good example of this approach).
When creating a new test, it is an established practice to name the top-level describe after the path to the file, as shown in AWS Kafka tests.
- Run against config passed inline
- Run against a pre-prepared fixture
- Fixtures can be extended on the spot. Whenever possible, it's better to extend an existing fixture (e.g. the basic `function` fixture) instead of creating a new one (check the ALB health check tests for a good example of this approach)
- If needed, introduce new test fixtures at `test/fixtures`
Examples of test files fully backed by `runServerless`:
If we're about to add new tests to an existing test file with tests written the old way, then it's best to create another `describe` block for the new tests at the bottom (as it's done here).

Note: PRs that rewrite existing tests into the new method are very welcome! (but, ideally, each PR should cover a single test file rewrite)
We aim for (near) 100% test coverage, so make sure your tests cover as much of your code as possible. During development, you can easily check coverage by running `npm run coverage`, then opening the `index.html` file inside the `coverage` directory.
Run all tests via:

```
AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=xxx npm run integration-test-run-all
```
_Note: The home folder is mocked for the test run, therefore relying on `AWS_PROFILE` won't work. The access key and secret key need to be configured directly as environment variables._
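A small pre-flight check can make the credential requirement explicit before kicking off a slow integration run. This is a hypothetical helper (the `check_aws_env` name is invented for this sketch), not part of the repo's tooling:

```shell
# Hypothetical pre-flight check: fail fast if credentials are missing.
# AWS_PROFILE is ignored because the home folder is mocked for the test run,
# so only explicit env variables work.
check_aws_env() {
  if [ -z "${AWS_ACCESS_KEY_ID:-}" ] || [ -z "${AWS_SECRET_ACCESS_KEY:-}" ]; then
    echo "missing AWS credentials in env" >&2
    return 1
  fi
  echo "ok"
}

AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=xxx check_aws_env
```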
Note: Some integration tests depend on a shared infrastructure stack (see below)
Ideally, any feature that integrates with AWS functionality should be backed by an integration test. Check the existing set of AWS integration tests at `test/integration`.
Pass the test file to Mocha directly, as follows:

```
AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=xxx npx mocha test/integration/{chosen}.test.js
```
Because some of the tests require a more complex infrastructure setup, which might be lengthy, two additional commands have been made available:

- `integration-test-setup` - sets up all needed infrastructure dependencies
- `integration-test-teardown` - tears down the infrastructure set up by the above command
Such tests take advantage of the `isDependencyStackAvailable` util to check whether all needed dependencies are ready. If they are not, the given test suite is skipped.
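The skip pattern looks roughly like this. The real `isDependencyStackAvailable` util queries AWS for the shared infrastructure stack and is asynchronous; here it is stubbed as a synchronous function returning `false` so the sketch runs standalone, and in a real Mocha suite the check would typically live in a `before()` hook calling `this.skip()`.

```javascript
'use strict';

// Stub: the real util checks whether the shared infrastructure stack
// (set up by integration-test-setup) is deployed and ready.
const isDependencyStackAvailable = () => false; // pretend: not deployed

const runSuite = () => {
  if (!isDependencyStackAvailable()) {
    // In Mocha: this.skip() inside a before() hook
    return 'skipped';
  }
  return 'ran';
};

console.log(runSuite()); // → skipped
```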
Examples of such tests: