I am wondering what the specific best practices are around full integration testing (the kind a QA team performs). I am a frontend developer working on a large legacy web app with very little frontend testing: a large monolith Rails app with React added on to the FE as an afterthought. We have a few React-based unit tests for each component, but the components often have complex interdependencies with other components, with the backend (BE), and with third-party services, such as using the Stripe credit card React input field or logging directly from the FE to Datadog. We also have a few “global” integration tests with Cypress, but with the BE mocked out. We mocked the BE because full integration tests were theoretically going to be too slow to run on each pull request (PR). This limits us greatly in what we can test: we can’t test saving records or performing flows in the app, just basic rendering of the full page.
At previous companies I have used tools like RainforestQA to write human-driven QA tests, but that is hard to scale. At my current company, we have a dedicated QA team that uses spreadsheets to write down all the manual tests they need to perform for each new feature (from a large product/feature brief), before they have time to write automated tests. This list of manual tests is easily 50 to 100 scenarios per feature:
- Login as this user with this set of roles for that company with these company features enabled.
- Perform some sequence of actions.
- During each step, check for certain results.
Our data model is fairly complicated and detailed: we have users with roles and all kinds of custom settings, companies with all kinds of configurations and settings, and other objects even more complex than these. So there are lots of even basic variations to test in the integration tests (not counting the full combinatorial explosion; I just mean covering the app’s main features in large swaths with one sweeping integration test each, so to speak).
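To make the scale concrete, one common way to tame this is to express the scenario list as a cross product of a few key dimensions instead of hand-writing every spreadsheet row. This is a minimal sketch; the role and feature names are made-up placeholders, not our real data model:

```ruby
# Sketch: enumerate test scenarios as the cross product of a few key
# dimensions (role x enabled company features), rather than hand-writing
# 50-100 spreadsheet rows. All names here are hypothetical.
ROLES    = [:admin, :manager, :read_only]
FEATURES = [[], ["sso"], ["sso", "mfa"]]

def scenarios
  ROLES.product(FEATURES).map do |role, features|
    { role: role, company_features: features }
  end
end

scenarios.each do |s|
  # In a real suite each scenario would drive one browser test:
  # log in as a user with s[:role] at a company with s[:company_features].
  puts "role=#{s[:role]} features=#{s[:company_features].join(',')}"
end
```

Even three roles times three feature sets already yields nine scenarios, which is roughly how the manual spreadsheets balloon to 50-100 rows.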
When they finally get around to writing automated tests (usually a few product briefs behind, or six months to a year after the feature was actually released, because we are short-staffed), we use Selenium and Capybara on Rails to programmatically log in and do whatever is necessary to automate the test. However, we skip testing integration with third parties, such as:
- Checking if an email was sent (or something failed here).
- Checking if a text message was sent via Twilio (or something failed here).
- Checking if a payment was made to Stripe (or something failed here).
- Checking if custom PDF invoices were generated.
- Checking if some other third party thing happened.
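One pattern I understand is common for exactly this gap is routing every third-party call through a thin adapter, so the test suite can swap in an in-memory fake and assert on the side effects without hitting the real service. A minimal sketch, with entirely made-up class and method names (this is not the real Twilio API):

```ruby
# Sketch: wrap third-party calls in an adapter so tests can substitute an
# in-memory fake and assert that the side effect happened.
# FakeSmsClient/ReminderService are illustrative names, not a real API.
class FakeSmsClient
  attr_reader :sent

  def initialize
    @sent = []
  end

  def send_sms(to:, body:)
    @sent << { to: to, body: body }  # record the message instead of calling Twilio
  end
end

class ReminderService
  def initialize(sms_client)
    @sms = sms_client
  end

  def remind(phone)
    @sms.send_sms(to: phone, body: "Your invoice is ready")
  end
end

sms = FakeSmsClient.new
ReminderService.new(sms).remind("+15551234567")
```

The same shape works for email (an in-memory deliveries array, like ActionMailer's `:test` delivery method), Stripe (tools like stripe-mock exist for this), and PDF generation. Recorded-HTTP tools such as VCR or WebMock are another common option when you want responses closer to the real service.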
In an ideal world, we would be testing these things automatically, as the QA team currently needs to manually test these before every release to make sure nothing is broken.
In an ideal world, we would be able to also play with time somehow, so we don’t have to wait for 30 minutes for an authorized payment to actually be charged and the email to be sent, and stuff like that.
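Playing with time usually comes down to never calling `Time.now` directly in the code under test, and instead injecting a clock the test controls (gems like Timecop, or Rails' `travel_to` helper, do this by stubbing time globally). A minimal hand-rolled sketch, with illustrative names:

```ruby
# Sketch: inject a clock so a test can jump 31 minutes forward instantly
# instead of really waiting for a delayed charge. Names are illustrative.
class FakeClock
  attr_reader :now

  def initialize(start)
    @now = start
  end

  def advance(seconds)
    @now += seconds
  end
end

class AuthorizedPayment
  CAPTURE_DELAY = 30 * 60  # pretend the charge captures 30 minutes after authorization

  def initialize(clock)
    @clock = clock
    @authorized_at = clock.now
  end

  def captured?
    @clock.now - @authorized_at >= CAPTURE_DELAY
  end
end

clock   = FakeClock.new(Time.now)
payment = AuthorizedPayment.new(clock)
clock.advance(31 * 60)  # jump forward; no real waiting
```

The catch is that this only controls *our* clock; a real Stripe delayed capture still happens on Stripe's schedule, which is one more argument for faking the third party in per-PR tests.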
In an ideal world we would be writing tests with Capybara/Cucumber, Cypress, or just plain Puppeteer, and they would be loading up Gmail, checking the Stripe dashboard, or checking the Twilio account (or some fake phone, somehow) to confirm these things were successful.
In an ideal world we would be able to seed the database instantly for each PR on GitHub, so it takes less than 2-5 minutes to run the full suite of tests.
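The usual trick for fast seeding is to pay the seeding cost once, snapshot the result, and restore the snapshot per run (with PostgreSQL that would be `pg_dump`/`pg_restore` or a template database). A toy sketch of the idea, where the "database" is just an in-memory hash and the snapshot is a Marshal dump:

```ruby
# Sketch: build expensive seed data once, snapshot it to disk, and restore
# the snapshot on later runs instead of re-running the slow seed logic.
# The real-world equivalent is pg_dump/pg_restore or a PG template database.
require "tmpdir"

SNAPSHOT = File.join(Dir.tmpdir, "seed_snapshot.bin")

def expensive_seed
  # Stand-in for slow fixture creation (users, companies, settings...).
  { users: 1000.times.map { |i| { id: i, role: :member } } }
end

unless File.exist?(SNAPSHOT)
  File.binwrite(SNAPSHOT, Marshal.dump(expensive_seed))
end

db = Marshal.load(File.binread(SNAPSHOT))  # fast restore for each run
```

In CI this typically means baking the seeded database into a Docker image or cached artifact, so each PR's test job starts from a restore measured in seconds rather than minutes of seed scripts.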
In an ideal world we would be writing full integration tests (including with third-party services) as an FE team, so that QA was just a formality in the end. Writing these automated tests TDD-style from the beginning would greatly simplify our workflow: currently I have to refresh the page and click through 10 or 20 clicks to reproduce some state in the app, just to see if my change to a deeply nested (in terms of flow) screen has the right style or integrates with the BE properly. I would like tests to programmatically perform any complicated interactions needed to reproduce that state.
In an ideal world, I think we should be writing full integration tests at the customer experience/perception level. That is, when given a new feature to work on, such as SSO (single sign-on) or MFA (multi-factor authentication), we should write a test that:
- Configures a company with SSO through the UI.
- Configures a user to prefer SSO through the UI.
- Logs in with SSO (including clicking the appropriate “approve” buttons on Google OAuth and such).
- Checks that we are now logged into the app (through the UI) by trying to perform some authorized action, etc.
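For the SSO step specifically, one approach I believe is workable without real Google OAuth is pointing the app at a fake identity provider the test controls, so "approve" and the token exchange both happen in-process. A heavily simplified sketch (no browser, no real OAuth; every class and method name here is illustrative):

```ruby
# Sketch: a fake identity provider so an SSO login flow can be exercised
# end to end without real Google OAuth. All names are illustrative.
require "securerandom"

class FakeIdentityProvider
  def initialize
    @codes = {}
  end

  # Simulates the user clicking "approve" on the provider's consent screen:
  # issues a one-time authorization code for that email.
  def approve(email)
    code = SecureRandom.hex(8)
    @codes[code] = email
    code
  end

  # Simulates the token exchange the app performs with the auth code.
  # Returns the email, or nil for an unknown/used code.
  def exchange(code)
    @codes.delete(code)
  end
end

class App
  def initialize(idp)
    @idp = idp
    @session = {}
  end

  def login_with_sso(code)
    email = @idp.exchange(code)
    @session[:user] = email if email
  end

  def logged_in?
    !@session[:user].nil?
  end
end

idp  = FakeIdentityProvider.new
app  = App.new(idp)
code = idp.approve("alice@example.com")
app.login_with_sso(code)
```

In a real Capybara or Cypress suite the same fake would run as a tiny HTTP server that the app's OAuth redirect points at, so the browser test still clicks a real "approve" button.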
If that’s not realistic, then what is the state of the art?
- How can you have a full suite of integration tests, including testing integration with third-party services for a reasonably complicated app like this?
- How can you have these integration tests run for each PR in under 5 minutes total?
- If not possible to run that fast, what corners could be cut and how?
For example, we could separate tests into groups that run in parallel. We could perhaps run some of the tests only under certain conditions (just before release as opposed to for each PR). But when/how would we decide this?
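The parallel-groups idea is commonly implemented by splitting test files across CI workers by historical runtime (gems like Knapsack exist for this). A minimal sketch of the greedy longest-first split, with made-up filenames and timings:

```ruby
# Sketch: partition the suite into N parallel groups by historical runtime
# (greedy longest-first bin packing) so each CI worker finishes at roughly
# the same time. Filenames and timings in seconds are made up.
TIMINGS = {
  "login_spec.rb"    => 120,
  "billing_spec.rb"  => 300,
  "invoices_spec.rb" => 90,
  "settings_spec.rb" => 45,
  "sso_spec.rb"      => 150,
}

def partition(timings, workers)
  groups = Array.new(workers) { { files: [], total: 0 } }
  timings.sort_by { |_, t| -t }.each do |file, t|
    group = groups.min_by { |g| g[:total] }  # add to the lightest group
    group[:files] << file
    group[:total] += t
  end
  groups
end

groups = partition(TIMINGS, 2)
```

With two workers the 705 seconds of tests above split into roughly equal halves, which is the same lever that could get a larger suite under the 5-minute PR budget, with the slowest end-to-end flows demoted to a pre-release-only group.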
Basically, I would like to start moving our company down the path of better integration testing of our web app, and I would like to know what is both possible and practical to implement within, say, a year, given just a developer or two.