Testing
- CircleCI:
- We’re using CircleCI to run automated tests as a requirement before merging any PRs. These tests are configured here, and are currently set up to run:
- Unit tests - configured here; run the entire test suite.
- DB tests - configured here; a separate set of tests. We’re planning to combine these with the unit test suite, but for the time being they are broken out (see #410).
- End-to-end tests - run using a special Dockerfile; these hit a test endpoint that’s tied to the legacy server-syncstorage repo (see the sketch below for the general shape of that check).
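For orientation, here is a minimal sketch of the kind of smoke check an end-to-end run performs against a test endpoint. The endpoint URL and the success criterion are illustrative assumptions for this example, not the actual configuration used by the Dockerfile:

```python
# Hypothetical end-to-end smoke check: confirm a sync storage test endpoint
# is reachable and returns a successful response. The URL is a placeholder,
# not the real endpoint wired into the Dockerfile.
import sys

import requests

TEST_ENDPOINT = "https://sync-test.example.com/__heartbeat__"  # placeholder


def main() -> int:
    resp = requests.get(TEST_ENDPOINT, timeout=10)
    if resp.status_code != 200:
        print(f"FAIL: unexpected status {resp.status_code}")
        return 1
    print("PASS: endpoint reachable")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```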
- Load Testing:
- We have a Python-based load testing script that can be run both in an automated fashion and on demand. We’ve historically run it against the staging environment. We’re defining rough pass/no-pass thresholds here, and will refine them as needed. Based on a few recent data points (here and here) where we saw a 0.0006-0.0008% failure rate, we are going to start with the thresholds below (see the sketch after this list for how a run’s failure rate maps onto them):
- PASS - Failure rate less than 0.001%
- FAIL - Failure rate greater than or equal to 0.001%.
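As a concrete illustration of these thresholds, here is a minimal sketch that classifies a load test run from its request counts. The function and the example counts are made up for illustration; the 0.001% cut-off is the one defined above:

```python
# Hypothetical pass/fail check using the 0.001% failure-rate threshold above.
FAILURE_RATE_THRESHOLD_PCT = 0.001  # threshold expressed as a percentage


def classify_run(total_requests: int, failed_requests: int) -> str:
    """Return 'PASS' if the run's failure rate is below the threshold, else 'FAIL'."""
    failure_rate_pct = (failed_requests / total_requests) * 100
    return "PASS" if failure_rate_pct < FAILURE_RATE_THRESHOLD_PCT else "FAIL"


# Example: 8 failures out of 1,000,000 requests is a 0.0008% failure rate -> PASS.
print(classify_run(total_requests=1_000_000, failed_requests=8))
```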
- Client Integration Tests:
- FxA automation - FxA + Sync
- App services e2e tests
- Desktop Fenix automation
- Sync iOS automation
- You can perform sync testing against the different environments by adjusting the configuration of the various sync clients.
- If testing on Android, you may wish to enable Android dev mode and point to a local or stage sync server.
- Manual test plans:
- Durable Sync test plan - Test plan used by Softvision for both the initial rollout and user migration for Durable Sync.
- Regular sync testing against staging is currently performed on a weekly cadence by Softvision as part of the Firefox train releases. Here’s a list of the tests performed.
- If you open up one of those tests (i.e. here), you can see a set of testing instructions under “preconditions”. Based on those instructions, we can confirm that Softvision is already testing against the staging instance of Durable Sync here.
- If QA finds an issue in staging, they should contact #services-engineering for triage. We’ll identify whether the issue is worth rolling a new release for staging, or whether it’s safe to allow the issue to roll out to production without being immediately addressed. If a new release needs to be created, we also need to be careful to understand what else may have been merged to master since the previous release, so as not to introduce any unwanted changes that may go untested.
- We have a collection of example bookmark files and tools here that can be useful when performing manual testing.
- There is also a collection of Conditioned Profiles that can be useful in both manual and automated tests.
- Code changes are required as part of the release creation process, which means all the automated tests via CircleCI will be run.
A major/high-risk release is identified by the team as something that either impacts a large section of the codebase (i.e., a sweeping re-write) or is a change that we have other reason to suspect might need extra attention. An example can be found here (identified by the team as high risk because of the pagination optimization); changes to the Spanner queries that we’d like to run additional, specific tests against are another example.
- Same as regular releases AND:
- Additional load tests run as needed.
- The release manager or EM/EPM should ensure we have an appropriate PI request created to coordinate manual testing. We’ll want to get this request in as early as possible to allow time for coordination with the QA team. See here for more details on the PI request process.
- If the release fixes a security issue, make sure to involve Security.
If we’re adding new tests as part of a release, we need to make sure they’re actually being run automatically. If not, they may need to be manually triggered for the first release that includes them.