Pro-Active API Testing and Docker Containers

Mehmet Efe
4 min read · Feb 24, 2020

I recently found myself having to think about preventive Quality Assurance for API Services. Think pro-active QA.

Every digital transaction, every information system, every client-server technology utilizes APIs. In recent years, RESTful API services have become the bedrock of innovation.

Imagine you are a QA engineer for a company whose product or service depends on consuming multiple external, 3rd party APIs. Combining forces, cross-selling, news aggregation, geo-data, localized personalization, broadcasting, analytics, 3rd party support services and so on. Think about Amazon, Facebook, Google News, Shopzilla, Ticketmaster, SaaS for supply-chain management, or multi-vendor virtual malls…

Logging Errors Is Not Handling Them!

Say you’re providing a comparison shopping service and you depend on the APIs of many retailers for up-to-date pricing, accurate product inventory, coupons and deals. The bread and butter of your service is aggregating live and accurate information from countless 3rd party APIs. If you’re successful, the number of 3rd party APIs you poll will grow exponentially, and so will the occurrence and cost of responding to errors, failures and change management.

Consuming an API requires a contract, a service definition, standardized response codes and messages, well-formed payload data and so forth; in short, it requires reliability and predictability.

No matter how well architected your internal consumer platform or your micro-services are, you are depending on external APIs: services you have no control over. They can change the version, they can roll out a change that completely alters the payload, tokens get invalidated, a small retailer that can’t afford an IS department doesn’t care about your webhooks and can change response codes, the hierarchy of JSON objects can shift around, the format of the response body can change, and so on.

Catching, recording and reporting changes and failures, then creating tickets and tasks for developers to fix the consuming code, is the traditional sequence. Integration tests, scheduled automated monitoring jobs and an automated error-handling pipeline with an integrated ticketing system are all fine, but can the rate of failures and the resource cost be minimized? Do we really need to interrupt service from an API because some fields shifted around in the response data?
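For example, a consumer doesn’t have to break just because a field moved. Here is a minimal sketch in Node.js of a tolerant field lookup; the field paths and payload shapes are hypothetical, not any real retailer’s schema:

```javascript
// price-extractor.js
// Tolerant field lookup: try the documented location first, then known
// alternatives, instead of failing hard when the payload shape shifts.
// All paths below are hypothetical examples, not a real retailer's schema.

function getByPath(obj, path) {
  return path.split('.').reduce(
    (node, key) => (node == null ? undefined : node[key]),
    obj
  );
}

function extractPrice(payload) {
  const candidatePaths = [
    'product.price.amount',   // the current documented contract
    'product.pricing.value',  // shape seen in an older API version
    'price',                  // flat shape some small retailers return
  ];
  for (const path of candidatePaths) {
    const value = getByPath(payload, path);
    if (typeof value === 'number') return value;
  }
  return null; // signal "price unavailable" instead of throwing
}

module.exports = { extractPrice };
```

A shifted field then degrades into one stale price instead of a dead service, and the mismatch can still be logged and ticketed in the background.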

Collaborating to Fail-early

This is where the QA engineer’s way of thinking needs to get in front of development, or at least come together side-by-side with engineering. Integrating the QA mind with the development mind at development time can greatly reduce the cost of integration failures and service interruptions, and greatly improve an organization’s culture of innovation.

(Yes, QA and Development are two separate minds, and no, you can’t find a software engineer who can do their own QA.)

Engineers follow the requirements, the current contract and the format of the API they’re writing the consumer code for. Engineers can consider various scenarios, but ultimately their top priority is to develop the consumer for the current state of the API, write code for perfect conditions (AKA the happy path), and develop it as fast as possible. QA, on the other hand, makes assertions: it thinks about failures, risks, possible changes, eventual conditions, imperfect transactions… Imagine the two of them working together on developing (or refactoring) an intelligent micro-service!

Docker and Demos…

Here is my thinking at a high level: virtualize the 3rd party APIs and create all failure scenarios against your consumer code.

My favorite stack is MEAN inside Docker containers. Node.js / Express (with modules like the chai and mocha test frameworks) allows me to rapidly recreate the current behavior of any API, then develop tests against it. (You can also use Postman and a proxy application to inject actual 3rd party responses, of course!)
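As a sketch of what that looks like, a virtualized retailer API can be a handful of Express lines. The endpoint, fields and port here are invented for illustration; in practice they would mirror the real API’s current contract:

```javascript
// mock-retailer-api.js
// A minimal virtualized "3rd party" retailer API. The GET /products/:id
// contract and its fields are hypothetical stand-ins for the real thing.
const express = require('express');
const app = express();

app.get('/products/:id', (req, res) => {
  res.json({
    product: {
      id: req.params.id,
      price: { amount: 19.99, currency: 'USD' },
      inStock: true,
    },
  });
});

app.listen(3001, () => console.log('Mock retailer API on :3001'));
```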

Docker containers allow me to emulate any environment instead of taxing myself and my team with cumbersome mock code. I can have a Docker base image of any micro-service, or even separate out any component and test them in isolation or in concert. With Docker Compose I can ‘containerize’ any stack and create an isolated network of containers for any testing scenario, right on my personal computer or on a single server. This also makes my tests reflect production behavior: I can replay actual transactions from production, run multiple tests in parallel and asynchronously, and manipulate any dependency at will.
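A minimal Docker Compose sketch of that idea might look like this; the service names, build paths and URL are placeholders, not a real stack:

```yaml
# docker-compose.yml
# Hypothetical layout: an existing consumer service wired to the mock
# retailer API on an isolated network, all on one machine.
version: "3.8"
services:
  consumer:
    build: ./consumer            # your existing consuming micro-service
    environment:
      RETAILER_API_URL: http://mock-retailer:3001
    networks: [testnet]
  mock-retailer:
    build: ./mock-retailer       # the Express mock sketched above
    networks: [testnet]
networks:
  testnet:                       # isolated network for the test scenario
```

One `docker-compose up` then brings the whole scenario to life on a laptop, with the consumer none the wiser that its “retailer” is an imposter.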

So I’d take the existing consuming services in production and dockerize them, then make them consume my virtualized 3rd party API. Then I’d put my 3rd party API through every possible failure scenario I can think of and observe our consumer services. I must, of course, remember to study past failures and identify repeating failures or common failure patterns beforehand.
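The mock can then be driven through failure scenarios on demand. Here is one way to sketch it; the scenario-switching endpoint and the scenarios themselves are illustrative, not an exhaustive catalog:

```javascript
// failure-scenarios.js
// Extends the mock retailer API with switchable failure modes.
// Scenario names and payload shapes are illustrative examples only.
const express = require('express');
const app = express();

let scenario = 'happy'; // toggled by the test harness

// The test harness flips the mock into a failure mode before each case.
app.post('/__scenario/:name', (req, res) => {
  scenario = req.params.name;
  res.sendStatus(204);
});

app.get('/products/:id', (req, res) => {
  switch (scenario) {
    case 'server-error':
      return res.sendStatus(500);
    case 'invalid-token':
      return res.status(401).json({ error: 'token expired' });
    case 'shifted-fields': // same data, different JSON hierarchy
      return res.json({ product: { pricing: { value: 19.99 } } });
    case 'malformed-payload':
      return res.type('json').send('{"product": '); // truncated JSON
    default: // happy path
      return res.json({ product: { price: { amount: 19.99 } } });
  }
});

app.listen(3001, () => console.log('Failure-injecting mock on :3001'));
```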

Punch-line: I can use our existing micro-services, as well as the requirements for any new service, to create proofs of concept with my engineering counterparts, and then make my imitation 3rd party API demonstrate every possible failure (from HTTP status codes to sudden version changes to malformed payloads), especially the most likely failure patterns, which can be handled gracefully! This could help my esteemed engineering colleagues refactor or architect the micro-services to be more intelligent, adaptive and graceful. They can write (or inoculate) their code against my failure scenarios. It also helps us make the case to our internal stakeholders for changing existing code, or for extra investment in new code.
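Tying it together, a mocha/chai suite can walk the consumer through each scenario and assert that it degrades gracefully instead of falling over. The URLs and the consumer’s /offers endpoint are the hypothetical ones from the sketches above:

```javascript
// consumer-resilience.test.js
// Drives the failure-injecting mock through each scenario and asserts
// that the consumer's (hypothetical) /offers endpoint stays healthy.
const { expect } = require('chai');
const fetch = require('node-fetch');

const MOCK_URL = 'http://localhost:3001';     // failure-injecting mock
const CONSUMER_URL = 'http://localhost:3000'; // service under test

const scenarios = ['happy', 'server-error', 'invalid-token',
                   'shifted-fields', 'malformed-payload'];

describe('consumer resilience against 3rd party failures', () => {
  scenarios.forEach((name) => {
    it(`stays up when the retailer API plays "${name}"`, async () => {
      // Flip the mock into the failure mode under test.
      await fetch(`${MOCK_URL}/__scenario/${name}`, { method: 'POST' });

      // The consumer should answer 200 with degraded data rather than
      // propagating the 3rd party failure as a 5xx of its own.
      const res = await fetch(`${CONSUMER_URL}/offers/42`);
      expect(res.status).to.equal(200);
    });
  });
});
```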

It’s preventive, it’s pro-active, it’s in line with the principle of fail-early, it will reduce service interruptions for our users, and it will save a lot of time and resources down the road.

What would you do?
