The world runs on apps and platforms built from microservices – and microservices run on APIs. It's no surprise, then, that many software engineering teams are rushing to adopt microservices to make apps smarter, safer, and more frictionless across multiple devices.
As a developer, you know firsthand the mounting pressures to speed up releases for CI/CD pipelines and microservices. However, the need for speed can sometimes force teams to turn to risky shortcuts to save time, like reducing or even eliminating testing sprints. But how do you gain the confidence to send releases to production – and if there is an issue, do you have the ability to quickly diagnose and repair it?
In today’s increasingly decentralized software organizations, efforts to reduce risk in API-first development mainly involve a specification-driven approach to API builds and quality, in which API documentation is formatted in standardized specification files that define the API contract. In this way, it’s more difficult for definitions to get lost in translation during handoffs. OpenAPI has emerged as the most popular specification format, and many teams are embracing an OpenAPI-driven approach to quality engineering – starting with OpenAPI-driven contract testing.
In this article, you’ll learn what OpenAPI-driven contract testing is, how and when contract testing is performed and scaled for massive microservices programs, and how it can provide relief to development teams that have been overwhelmed by testing bottlenecks and increasing rates of code errors in production.
Contract testing is a fast, lightweight form of API testing that checks the content and format of API requests and responses. Complete API contract testing must validate both the API producer (server side) and the API consumer (client side) to detect and diagnose when a contract is broken by either side.
Contract testing is designed to monitor the API conversation that takes place between the API consumer and the API producer. Both parties must agree on specific rules for that conversation. This agreement — with the formal description of the rules that govern it — is the contract (aka, pact).
The contract, or pact, is generally presented as a specification file in a format such as OpenAPI, an open-source specification framework. The API conversation works something like this: the consumer sends a request in the agreed format, and the producer returns a response whose status, content, and format match what the contract promises.
If this contract is broken by either side, bugs and malfunctions can occur. API contract testing is the act of validating that the API producer and the API consumer are respecting the contract.
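As a minimal illustration of the idea (not the Sauce Labs tooling itself), a contract can be treated as a schema both sides must honor. The sketch below hand-writes a tiny fragment in the spirit of an OpenAPI response definition and checks a producer's response against it; the field names and example payloads are hypothetical.

```python
# Minimal sketch of contract validation, independent of any particular tool.
# CONTRACT is a hand-written fragment in the spirit of an OpenAPI response
# schema; all field names and example responses here are hypothetical.

CONTRACT = {
    "status": 200,
    "schema": {  # required fields and their expected JSON types
        "id": int,
        "name": str,
        "active": bool,
    },
}

def validate_response(status, body, contract=CONTRACT):
    """Return a list of contract violations (empty list means compliant)."""
    errors = []
    if status != contract["status"]:
        errors.append(f"expected status {contract['status']}, got {status}")
    for field, expected_type in contract["schema"].items():
        if field not in body:
            errors.append(f"missing required field: {field}")
        elif not isinstance(body[field], expected_type):
            errors.append(f"field {field!r} has wrong type")
    return errors

# A compliant response produces no violations...
print(validate_response(200, {"id": 7, "name": "widget", "active": True}))  # → []
# ...while a broken one (wrong type, missing field) is flagged.
print(validate_response(200, {"id": "7", "name": "widget"}))
```

A real contract test generator derives these checks from the OpenAPI file automatically instead of hand-coding them, but the pass/fail logic it applies is of this structural kind.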
Today, many mobile, web, and cloud-native app development teams want to accelerate the journey from monolith to microservices as part of adopting distributed systems with improved resource sharing, concurrency, scalability, composability, and failover. Essentially, a microservices architecture consists of decoupled, independently run services that communicate via APIs, each executing a single function; together, these services behave like a traditional, monolithic app.
While monolithic applications might have only required a few tests for a few scenarios, microservices involve an exponential increase in test cases due to the many ways that APIs interact with internal, external, partner and/or open APIs. Adding even one microservice on top of an existing microservice program can vastly increase the time, effort, and cost of validating end-to-end scenarios.
Enter OpenAPI-driven contract testing to the rescue. At a high level, contract testing allows you to run basic validation of high numbers of APIs instantly during design and development, in a way that saves teams from needing to create and maintain massive suites of end-to-end tests. Let’s dive in to learn more about contract testing and how it can benefit your team and your business.
Contract testing is designed to provide instant feedback that is usable during the earliest stages of API design and development. It has become popular for helping teams accelerate API-first and microservices development while lowering costs with significant reductions in false negatives, rollbacks, test bottlenecks, errors in production, and more.
While contract testing has traditionally involved some heavy lifting to code the test scripts, the massive scale of today’s microservices and API programs demands contract test generation capabilities that automate contract test writing and maintenance, in addition to streamlining a spec-driven approach to all API development and quality. That’s why Sauce Labs built Sauce Labs Contract Testing with the ability to generate and automatically update both consumer and provider contract tests instantly from OpenAPI specification files, on a platform offering easy handoffs and test reusability throughout the SDLC. The Sauce Labs Piestry tool generates API mocks from OpenAPI specs to mimic a real API server’s requests and responses, allowing users to run contract tests before APIs are available or to isolate microservices dependencies.
Some key benefits of contract testing include:
Greater flexibility for continuously evolving codebases
Rapid time to value, with higher release confidence in CI/CD pipelines and microservices
Low barriers to getting started quickly
Massive scalability without disruption to workflows
Isolation of the consumer side, which reduces the scope of any issue
Contract testing is ideal for developers who need to:
Validate APIs in stable conditions during the early stages of design and development
Create APIs that are internal and/or have a limited number of consumers
Here are a few considerations for when to use contract testing:
If your team controls the development of both the consumer and the provider sides when building or changing internal APIs.
If both the consumer and provider are under active development.
If the provider team needs to more easily control the data returned in the provider’s responses.
If the requirements of the consumer will be used to drive the features of the provider.
If the number of consumers is small enough for a provider so that the provider team can manage an individual relationship without being overwhelmed.
In a Sauce Labs API Testing project, you need to set up the API contract between the API producer and the API consumer with the specification files in OpenAPI (or RAML). We recommend setting up a mocked API producer for testing the API consumer side (see the following sections).
After the contract test environment has been established, a contract test created from the OpenAPI spec files is validated on the API producer side. This is done from the HTTP client (the web side), which generates the test and asserts that responses meet the contract’s validation details.
Functional testing can be done on top of the contract tests to confirm that the API is working as intended. Once the tests run in the continuous integration (CI) stage of your pipeline, you can review the contract test results and move on to testing the consumer side.
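The producer-side flow described above can be sketched as follows, assuming a hypothetical producer service. To keep the example self-contained, Python's stdlib `http.server` stands in for the real producer; in practice the test would target your live service, and the contract fragment, paths, and payloads below are all invented for illustration.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical contract fragment: method, path, and expected response shape.
CONTRACT = {"method": "GET", "path": "/users/1",
            "status": 200, "required_fields": {"id", "name"}}

# Stand-in for the real API producer (in practice this is your service).
class ProducerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"id": 1, "name": "Ada"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), ProducerHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Producer-side contract test: drive the request from the spec, then assert
# that the live response's status and fields match the contract.
with urlopen(f"http://127.0.0.1:{server.server_port}{CONTRACT['path']}") as resp:
    assert resp.status == CONTRACT["status"]
    payload = json.loads(resp.read())
assert CONTRACT["required_fields"] <= payload.keys()
print("producer respects the contract")

server.shutdown()
```

A generated contract test does the same work at scale: the spec supplies the method, URL, and schema, and the assertions are produced and kept in sync automatically.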
Testing the API consumer happens in a protected, static environment and runs against mocked (not live) APIs. The mocked APIs allow the contract tests to compare the API responses to the contract in isolation. This will raise any issues that need immediate attention.
It’s important to note that the same OpenAPI spec used to test the API producer side must be used to test the API consumer side. The Sauce Labs Piestry tool is then used to mock the APIs in the contract and to perform API requests to the mock server.
In a production scenario, unit tests in your code will likely trigger the API calls, ensuring that the mock, rather than the actual service, is being exercised. Sauce Labs Piestry will validate the contract for every one of those inbound API requests for you to review.
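That consumer-side setup can be pictured with the stdlib-only sketch below. Piestry itself builds the mock from the OpenAPI spec; here a Python `http.server` imitates that role, serving the contract's canned example and recording any inbound request that strays from the agreed path. The `fetch_order` function, the `/orders` path, and the payload are hypothetical.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical contract: the mock serves this example response and records
# whether each inbound request matched the agreed path.
CONTRACT = {"method": "GET", "path": "/orders/42",
            "example": {"order_id": 42, "total": 9.99}}
violations = []

class MockProducer(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != CONTRACT["path"]:
            violations.append(f"unexpected path: {self.path}")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(CONTRACT["example"]).encode())

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), MockProducer)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# The consumer under test: a unit test exercises it against the mock, never
# the live service, so contract drift on the consumer side surfaces here.
def fetch_order(order_id):
    with urlopen(f"{base}/orders/{order_id}") as resp:
        return json.loads(resp.read())

order = fetch_order(42)
assert order == CONTRACT["example"]   # consumer can parse the agreed shape
assert not violations, violations     # consumer sent the agreed request
server.shutdown()
```

Because the mock is static and derived from the contract, this test is deterministic and runs in isolation, which is what makes consumer-side contract tests cheap to run on every commit.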
Results logs for both the API producer and API consumer contract tests can be reviewed on your API testing project’s dashboard. A report document for the contract test details how the requests and responses appeared during the transaction and the nature of any test failures. Sauce Labs API testing will validate whether the API consumer side complies with the contract specifications.
Sauce Labs API testing use cases can help illustrate how this type of testing works.
The following illustrations show a contract testing scenario with the specification files in OpenAPI, and the layout of the API consumer with the mocked server standing in for the real API producer:
The following illustration shows the layout of the Sauce Labs API testing validation on the API producer side from the OpenAPI spec file:
From this position, the API request for testing can be sent. The spec file will populate the HTTP client fields, including the HTTP method, request URL, and anything else detailed in the spec file, to generate a test:
On the consumer side, Sauce Labs API testing runs in an isolated environment against mocked APIs on Piestry as described in the contract:
This example in Sauce Labs API testing shows that the API consumer side complies with contract specifications according to the contract:
Contract testing should be rigorous enough to provide confidence that the services running on your application will perform as expected in production. But can contract testing alone drive quality at speed for massive API or microservices programs?
Sauce Labs contract tests can be extended with functional assertions and then reused in any environment throughout the SDLC as end-to-end (E2E) API functional tests, integration tests, load tests, and API monitors that check API function, performance, and side effects in development, testing, and production.
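One way to picture that reuse (with hypothetical names, not the Sauce Labs API itself): a functional assertion is layered on top of the same structural check the contract test already performs, so one test artifact answers both "does the shape match?" and "do the values make sense?".

```python
# Hypothetical sketch of extending a contract check with a functional assertion.

def contract_check(body):
    # Structural check: the fields the contract requires are present.
    return {"id", "balance"} <= body.keys()

def functional_check(body):
    # Functional check: the values also satisfy a business rule.
    return body["balance"] >= 0

response = {"id": 3, "balance": 125.0}
assert contract_check(response)    # shape matches the contract
assert functional_check(response)  # and the data is semantically valid
print("contract + functional checks passed")
```

The same layered test can then be pointed at different environments (development, staging, production monitors) without rewriting the structural half.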
While contract testing covers a lot of ground, some quality use cases may be better served by other types of API testing. Contract testing will not bridge certain QA gaps, such as verifying data semantics and validating complex integrations in real-world scenarios, for example. A large number (or “hairball”) of isolated contract tests would be required to do the work of a single E2E test.