Performance Insights

Below are some metrics gathered from simple performance tests run to validate how this project handles load.

Test Setup

Load Profile

  • 500 virtual users
  • 60-second ramp-up period
  • 5-minute test duration
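
The page doesn't state which load-testing tool produced these numbers. Purely as an illustration, the same load profile could be expressed as a k6 script along the following lines; the base URL and endpoint paths are placeholders rather than details taken from the actual test.

```typescript
// Minimal k6 sketch of the stated load profile:
// 500 virtual users, 60-second ramp-up, 5-minute total duration.
import http from 'k6/http';

export const options = {
  stages: [
    { duration: '60s', target: 500 }, // ramp up to 500 VUs over 60 seconds
    { duration: '4m', target: 500 },  // hold 500 VUs for the rest of the 5-minute run
  ],
};

export default function () {
  // BASE_URL and the two paths are placeholders, not taken from the actual test.
  const base = __ENV.BASE_URL || 'http://localhost:3000';
  http.get(`${base}/simple-get`);
  http.post(`${base}/simple-post`, JSON.stringify({ hello: 'world' }), {
    headers: { 'Content-Type': 'application/json' },
  });
}
```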

Collection Under Test

Simple collection with 2 requests:

  1. A GET request with a JSON response, and
  2. A POST request with a JSON response.
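
For context, a collection matching this description would look roughly like the sketch below (Postman Collection v2.1 shape). The request names, URLs, and example bodies are illustrative only; the exact collection used in the test is not reproduced on this page.

```typescript
// Rough shape of a two-request collection (Postman Collection v2.1 style).
// Names, URLs, and bodies are illustrative only.
export const collection = {
  info: {
    name: 'perf-test-collection',
    schema: 'https://schema.getpostman.com/json/collection/v2.1.0/collection.json',
  },
  item: [
    {
      name: 'Simple GET',
      request: { method: 'GET', url: 'http://localhost:3000/simple-get' },
      response: [
        {
          name: '200 OK',
          code: 200,
          header: [{ key: 'Content-Type', value: 'application/json' }],
          body: '{"ok": true}',
        },
      ],
    },
    {
      name: 'Simple POST',
      request: { method: 'POST', url: 'http://localhost:3000/simple-post' },
      response: [
        {
          name: '200 OK',
          code: 200,
          header: [{ key: 'Content-Type', value: 'application/json' }],
          body: '{"created": true}',
        },
      ],
    },
  ],
};
```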

Server Specs

This test was run on an Amazon EC2 t2.2xlarge instance (8 vCPUs, 32 GB RAM).

v2.0.0 Results (executed 07-2024)

Overall, the system exceeded expectations, averaging 2,177 transactions per second (TPS) for each endpoint.

[Charts: Transactions Per Second, Response Times Breakdown, Response Time vs Threads]

Performance Insights

v2.0.0 has significantly better performance than v1.0.0, with a more than 10x improvement in TPS.

This improvement came from major changes to the response-matching approach, implemented in PR#21 and PR#23, to better align with the behaviour of the actual Postman application.

The tool now also includes a caching layer based on apicache. However, for this test, with only one response per endpoint, enabling apicache actually decreased throughput (from 2,177 to 2,023 TPS). The caching layer should be more effective with larger numbers of responses.
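
For reference, the sketch below shows how an apicache-based layer is typically wired into an Express app, following the library's documented middleware usage. The route and cache duration are illustrative and not taken from this project's code.

```typescript
// Generic sketch of apicache's documented Express middleware usage.
import express from 'express';
import apicache from 'apicache';

const app = express();
const cache = apicache.middleware;

// Cache successful responses for 5 minutes (duration chosen for illustration).
app.use(cache('5 minutes'));

app.get('/simple-get', (_req, res) => {
  // With only one possible response per endpoint, the cache lookup is mostly
  // overhead, which is consistent with the small TPS drop noted above.
  res.json({ ok: true });
});

app.listen(3000);
```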

v1.0.0 Results (archive - executed in 2023)

Overall, the system performed well, averaging 232 transactions per second (TPS) for each endpoint.

[Charts: Transactions Per Second, Response Times Breakdown, Response Time vs Threads]

Insights

The POST request uses an x-mock-match header that skips a large part of the request-matching logic. This is clearly visible in the Response Time vs Threads result.
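
As a rough client-side sketch of what such a request might look like: the header name is taken from the paragraph above, but the value used here is a placeholder guess and has not been confirmed against the tool's documentation.

```typescript
// Hypothetical client-side view of the POST request used in the test.
async function callPostEndpoint(): Promise<void> {
  const res = await fetch('http://localhost:3000/simple-post', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Header name from the text above; the value 'true' is a placeholder guess.
      'x-mock-match': 'true',
    },
    body: JSON.stringify({ hello: 'world' }),
  });
  console.log(res.status, await res.json());
}

callPostEndpoint().catch(console.error);
```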

The GET request iterates through the response options to find the result. This could benefit from some kind of caching layer to improve performance; however, an average of ~232 TPS is acceptable from a single server.

To get this into the thousands of TPS, the code would need to be optimised to cache similar responses and return them immediately.
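
As a sketch of the idea only (not the project's implementation), such a cache could key responses by method and path so that repeated identical requests bypass the matching loop entirely:

```typescript
// Sketch of a simple in-memory response cache keyed by method + path.
type CachedResponse = { status: number; body: string };

const responseCache = new Map<string, CachedResponse>();

function cacheKey(method: string, path: string): string {
  return `${method} ${path}`;
}

// Return a previously matched response if this request shape has been seen before.
export function getCached(method: string, path: string): CachedResponse | undefined {
  return responseCache.get(cacheKey(method, path));
}

// Store the matched response so the next identical request can return immediately.
export function putCached(method: string, path: string, response: CachedResponse): void {
  responseCache.set(cacheKey(method, path), response);
}
```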