Integration Tests: What’s in a Name?

Software engineering as a discipline has a terminology problem, and few terms in software testing are more overloaded than ‘integration test’! It’s very likely that my definition is very different from your definition, and if we were to involve a third party, their definition would differ yet again. We all agree, with a little variation, on what a unit test is and what capabilities and limitations that phase of testing has. But what comes next, when we want to move beyond testing functions with direct calls?

The software test pyramid: unit tests on the bottom, then integration tests, and specialty tests on top.

What, then, is an integration test? In my experience, integration testing is primarily focused on how services talk to each other – how they integrate. It is focused not on a functional test matrix of the form ‘input x and receive y’, but instead on answering questions like ‘call service X under condition A and ensure the service calls the correct downstream dependencies and returns Y’. It is about ensuring that communication and protocol boundaries are correct, in both the positive and negative test cases. It is about ensuring errors propagate in a way all services involved can handle. It is about validating that scenarios are correct end to end.

Integration testing guidelines

The most important part of integration testing is crossing boundaries. This does not necessarily mean connecting two or more of your published services together – it can also mean crossing an interaction or machine boundary. An integration test may never even leave the machine! A smoke test that pings localhost to ensure a service is running is STILL an integration test, because you are crossing the HTTP boundary from your web host into your application.
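To make that concrete, here is a minimal sketch of such a localhost smoke test in Python, using pytest-style test functions and the requests library. The port number and the /health endpoint are assumptions for illustration, not part of any particular service.

```python
# Minimal localhost smoke test sketch (pytest + requests).
# The port and the /health endpoint are hypothetical.
import requests

BASE_URL = "http://localhost:8080"

def test_service_responds_to_health_check():
    # Crosses the HTTP boundary from the web host into the application,
    # without ever leaving the machine.
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200
```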

As with any automated test, integration tests must not require manual intervention between runs. If someone must drop tables on your QA SQL environment every second or third run, the suite is not only not an integration test suite – it is not an automated suite at all! Likewise, integration tests need to be reliable. My bar is 99.9% (one false failure every 1,000 runs). This seems like a high bar, but consider that in a continuous integration and deployment environment there may be tens to hundreds of test runs a day, depending on how large the team is. If a team aims for a lower bar like 99% and runs the test suite 100 times a day, that is a false failure every single day. False failures build alert fatigue and rapidly undermine confidence in all test suites – not just the flaky one!
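As a back-of-the-envelope check (the run counts below are illustrative, not measurements):

```python
# Expected false failures per day for a given per-run reliability.
def expected_false_failures_per_day(reliability: float, runs_per_day: int) -> float:
    return (1 - reliability) * runs_per_day

print(expected_false_failures_per_day(0.99, 100))   # 1.0 -> roughly one false failure per day
print(expected_false_failures_per_day(0.999, 100))  # 0.1 -> roughly one every ten days
```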

An integration test is not limited to simple numeric or string assertions on the return values of function calls. Measurable side effects can also be checked – things like counter increments or file updates are excellent behavioral markers for verifying what actually happened during a given test run.
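As a rough illustration, a side-effect assertion might look like the sketch below. The /orders and /metrics endpoints and the counter name are hypothetical, chosen only to show the pattern of comparing a measurable value before and after the action under test.

```python
# Sketch of a side-effect assertion: verify a measurable change in the system,
# not just a return value. Endpoints and the counter name are hypothetical.
import requests

BASE_URL = "http://localhost:8080"

def orders_processed() -> int:
    # Assumes the service publishes a counter via a JSON /metrics endpoint.
    return requests.get(f"{BASE_URL}/metrics", timeout=5).json()["orders_processed"]

def test_submitting_an_order_increments_the_counter():
    before = orders_processed()
    requests.post(f"{BASE_URL}/orders", json={"sku": "ABC-123", "qty": 1}, timeout=5)
    assert orders_processed() == before + 1
```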


What makes a good integration test?

Speed

Developers tend to be impatient people. In my experience, developers will wait about five minutes for a build and unit test pass. They will wait about half an hour for a deployment and integration test pass. A good integration test pass (including ALL suites) should complete in around 20 minutes, assuming you can run all your integration test suites in parallel!

Reliability

Automated tests should have 99.9% reliability. This accounts for false positives and false negatives, as well as any manual cleanup that must occasionally be done. They must return a clear PASS or FAIL; INDETERMINATE results that require human interpretation are to be avoided.

Specificity

An integration test must be a clear answer to a clear question. An integration test suite should have an overarching purpose, and the test cases within it should align with that purpose. The purpose cannot be vague, like ‘does it work?’. Rather, it should be extremely specific, such as ‘the API interface aligns with its specification’.


What falls under the ‘integration test’ umbrella?

Many test suites that commonly masquerade under other names fall within the category of the integration test.

  • Smoke Suite – tests that are run immediately upon starting up a service. They are simple, fast, and pass/fail. If the smoke suite fails, then no further testing is possible, and the deployment will need to be rolled back.

  • API Suite – tests that explore an HTTP API. They are concerned with communication protocol and error handling (see the sketch after this list).

  • End to End Suite (conditional) – included if and only if they can be run quickly and reliably. They explore scenarios from beginning to end (and do not necessarily hit every part of a deployed system).

  • UI Suite (conditional) – included if and only if they can be run quickly and reliably. They drive scenarios from the UI layer and are usually harnessed through Selenium or Cypress.

… and many more! If a test suite meets the requirements above for what makes a good integration test, then it can be an integration test!
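For the API suite in particular, a couple of protocol-and-error-handling tests might look something like the sketch below. The routes, status codes, and error body shape are assumptions made for the example, not a prescription for any specific API.

```python
# Sketch of API-suite style tests: focused on protocol and error handling
# rather than business logic. Routes and response shapes are hypothetical.
import requests

BASE_URL = "http://localhost:8080"

def test_unknown_resource_returns_404_with_an_error_body():
    response = requests.get(f"{BASE_URL}/orders/does-not-exist", timeout=5)
    assert response.status_code == 404
    assert "error" in response.json()

def test_malformed_payload_returns_400():
    response = requests.post(f"{BASE_URL}/orders", json={"qty": "not-a-number"}, timeout=5)
    assert response.status_code == 400
```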


Conclusion

A reliable, fast, and specific integration test suite is a key part of any continuous delivery pipeline! Without a clear way to know whether a given deployed service contains regressions in behavior or functionality, all our continuous delivery pipeline is doing is continuously delivering garbage into production and onto the screens of our users!
