At cure.fit, we strongly believe in the microservices philosophy, and have over 100 microservices deployed in production, interacting with each other over HTTP APIs. Given the speed at which we ship features (sometimes more than 30 production deployments a day), regressions in existing functionality started cropping up more often.
We had initially relied on manual testing to ensure correctness, but the sheer volume of deployments meant this could not scale, and automated tests became mandatory to ensure stability. So how did we go from manual to automated testing? What did it take for us to reach where we are today? And what did we learn along the way?
The Problem Statement
In the absence of API test automation, relying on Sentry and Rollbar was our only option for detecting errors in production. And even these tools only catch errors thrown due to unexpected behaviour, not silent failures.
Occasionally, we saw data-quality issues as well, e.g., schedules for cult.fit not being published on time. Given our substantial offline presence (cult centers in more than 20 cities across India), we also needed a way to ensure different configurations worked correctly for every city based on the offerings available. For instance,
- Displaying the cult center tab on the app for all cities where we have cult centers.
- Allowing customers to buy packs available in their respective cities.
- Pack and center images being shown correctly for all centers in each city.
So the ask came down to the following requirements:
- We needed automated testing integrated with our CI/CD pipelines, to find and fix functional issues before the code hits production. These tests would become our quality gatekeepers.
- We needed a way to periodically poll our offerings in production and ensure that all major flows were always present with the right data.
To achieve these, we required an API test automation tool. And, the next big question arose:
Which API Test Automation Tool Should We Use?
API test automation involves testing the collection of APIs and checking if they meet expectations for functionality, reliability, performance and security.
Here are the essential requirements we had in mind while choosing our API Test Automation Tool:
- The API test automation framework, once implemented, should be easily extensible. It should be easy to write tests and to roll the framework out globally across the organization.
- Since our key focus was adding integration tests alongside unit tests to cover end-to-end flows – right from the point when users log in to the application, through browsing the catalog, to finally checking out and paying for products – the tool should make it easy to extract multiple data elements from a response and reuse them in subsequent HTTP calls.
- Data-driven testing that allows the use of dynamic JSON or even CSV as a data source.
- Parallel execution of tests to bring down the test execution time.
- Built-in test reports that include HTTP requests and response logs in-line that help debug test automation failures.
- The ability to validate schema of all elements in a JSON.
- The ability to validate all payload values in a single step.
- Updating JSON should happen through a path expression, so the same JSON can be reused across multiple environments and tests can be executed on any environment just by passing an environment variable at run time.
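To make the data-extraction requirement concrete, here is a minimal sketch of such a chained end-to-end flow, written in Karate (the framework we eventually chose, below). All endpoints, payloads, and field names here are hypothetical, not our actual APIs:

```gherkin
Feature: end-to-end flow with data extraction (illustrative)

  Scenario: login, browse the catalog, and check out
    # hypothetical login endpoint and credentials
    Given url baseUrl
    And path 'auth', 'login'
    And request { email: 'user@example.com', password: 'secret' }
    When method post
    Then status 200
    # extract the auth token for reuse in subsequent calls
    * def authToken = response.token

    # browse the catalog with the extracted token
    Given url baseUrl
    And path 'catalog', 'packs'
    And header Authorization = 'Bearer ' + authToken
    When method get
    Then status 200
    # pick a pack id out of the response
    * def packId = response.packs[0].id

    # check out using both extracted values
    Given url baseUrl
    And path 'checkout'
    And header Authorization = 'Bearer ' + authToken
    And request { packId: '#(packId)' }
    When method post
    Then status 200
```

The `* def` steps capture values from one response so later HTTP calls can embed them, which is exactly the reuse the requirement above asks for.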
Though REST Assured fit almost all our requirements and offered a decent experience, we decided to try a relatively new test automation framework, ‘Karate’, which matched our needs perfectly.
Additionally, the effort of writing and maintaining test automation with Karate was considerably lower than with other frameworks available in the market.
Here is an article that will help you with a detailed comparison between Karate and REST Assured.
Now, Let’s Look at the Key Features of Karate
- Based on the popular Cucumber/Gherkin standard, it features IDE support and syntax-coloring options.
- Eliminates the need for ‘Java Beans’ or ‘helper code’ to represent payloads and HTTP endpoints, dramatically reducing the lines of code needed for a test.
- Simpler and more powerful alternative to JSON-schema for validating payload structure and format. It even supports cross-field/domain validation logic.
- Scripts can call other scripts, meaning you can easily re-use and maintain authentication and ‘set up’ flows efficiently across multiple tests.
- Makes reusing payload-data and user-defined functions across tests so easy that it becomes a natural habit for the test developer.
- Offers built-in support for switching configuration across different environments (e.g., dev, QA, pre-prod).
- Follows a standard Java/Maven project structure, integrates seamlessly into CI/CD pipelines, and supports JUnit 5.
- The multithreaded parallel execution acts as a huge time-saver, especially for integration and end-to-end tests.
- Built-in test reports are compatible with Cucumber, so you can use third-party (open-source) maven plugins for better-looking reports.
- Reports include HTTP request and response logs in-line, making troubleshooting and debugging easier.
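As a minimal sketch of the schema- and payload-validation features above – the endpoint and field names are hypothetical, and `baseUrl` is assumed to be defined per environment in `karate-config.js`:

```gherkin
Feature: payload and schema validation (illustrative)

  Scenario: validate every element and a full payload in one step
    # hypothetical endpoint returning the list of cities we operate in
    Given url baseUrl
    And path 'cities'
    When method get
    Then status 200
    # validate the schema of all elements in the returned JSON array:
    # '#number', '#string', '#boolean' are Karate's built-in markers
    And match each response.cities == { id: '#number', name: '#string', hasCultCenter: '#boolean' }
    # validate all values of one payload object in a single step
    # (the expected values here are made up for illustration)
    And match response.cities[0] == { id: '#number', name: 'Bangalore', hasCultCenter: true }
```

A single `match each` line replaces what would otherwise be a hand-written JSON-schema file plus a loop of per-field assertions.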
So, What Does Our Current API Test Automation Setup Look Like?
- Unit and integration tests for our critical microservices.
- API test automation has been integrated with the dev CI/CD pipeline so that functional issues can be found in the pre-prod environment before code hits production.
- Integrated the performance test with the dev CI/CD pipeline. Refer to this blog post for more details.
- We now catch most configuration issues in the production environment. Previously these went unnoticed, because validating them manually every day was extremely time-consuming and error-prone.
- Failure alerts have also been integrated with the dev pagers (Opsgenie) to notify stakeholders in time.
- Test reports are generated using Cucumber-compatible reporting.
- API requests and responses are shown in-line in the test report, making it easier to debug test failures.
- Every test request carries a custom request-id header, which can be used to search logs in AWS CloudWatch when troubleshooting test failures.
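A minimal sketch of how such a correlation header can be attached in a Karate test – the header name, endpoint, and UUID approach are assumptions for illustration, not our exact implementation:

```gherkin
Feature: correlating test failures with server logs (illustrative)

  Scenario: send a custom request-id header on the call
    # generate a unique id via Karate's Java interop; the services are
    # assumed to log this header, so the same id can be searched in
    # AWS CloudWatch to pull up matching server-side logs
    * def requestId = '' + java.util.UUID.randomUUID()
    Given url baseUrl
    And path 'health'
    And header request-id = requestId
    When method get
    Then status 200
```

Since the id also appears in Karate's in-line request log in the test report, one search string links a failed assertion to the exact server-side log entries.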
What are the Advantages of API Test Automation?
API test automation enabled us to quickly identify customer issues in the pre-production environment, which could have otherwise resulted in a bad user experience.
With regular unit tests alone, we could only ensure that a specific microservice worked in isolation. However, by integrating API test automation with the deployment pipeline, we can now ensure that all upstream and downstream services work together and that no customer flow breaks when code is pushed to production.
What Did We Learn?
- The test data needs to be extensive, taking into account all types of configurations (users, cities, etc.). This ensures that the test automation mimics the actual production environment as closely as possible.
- The tests have to be executed 24×7 against production so that they act as actionable site-reliability alerts.
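Extensive per-configuration test data maps naturally onto Karate's data-driven `Scenario Outline`. A hedged sketch – the endpoint, field name, city ids, and expected values below are all made up for illustration:

```gherkin
Feature: data-driven city configuration checks (illustrative)

  Scenario Outline: cult center tab shown only where centers exist
    # hypothetical per-city config endpoint
    Given url baseUrl
    And path 'cities', '<cityId>', 'config'
    When method get
    Then status 200
    And match response.showCultCenterTab == <expected>

    # one row per city configuration; in practice this table (or a
    # CSV/JSON data source) would cover every city we operate in
    Examples:
      | cityId    | expected |
      | bangalore | true     |
      | delhi     | true     |
      | mysore    | false    |
```

Adding coverage for a new city then becomes a one-row change to the data table rather than a new test.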
Where Do We Go from Here?
Currently, we are looking at increasing the code coverage for each microservice. Furthermore, we plan to integrate API test automation into every microservice deployment, not just the critical ones as is the case today. This will help us ensure quality releases and environment stability.
Credits – Salil Gupta, Test Architect at cure.fit