
Please make sure the prerequisites are ready before following these tasks.

Summary

In this case, we're going to demonstrate a blocking HTTP call that waits for data from Redis by polling a key repeatedly, up to a maximum try count, while asynchronously publishing a message to Kafka so that the consumer can write the data into the pre-determined Redis key. At the end, we'll load test the HTTP server to see whether the solution is good enough.
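To make the flow concrete, here is a minimal sketch of the handler-side pattern. This is not the code in this repository: the key name, poll interval, max try count, and the choice of go-redis v8 and sarama as clients are all assumptions for illustration.

package sketch

import (
	"context"
	"errors"
	"time"

	"github.com/Shopify/sarama"
	"github.com/go-redis/redis/v8"
)

const (
	maxTries     = 50                     // assumed maximum poll count
	pollInterval = 100 * time.Millisecond // assumed poll interval
)

// requestAndWait publishes a request to Kafka and then blocks, polling the
// pre-determined Redis key until the consumer writes the result there or
// the try count runs out.
func requestAndWait(ctx context.Context, producer sarama.SyncProducer, rdb *redis.Client, key, payload string) (string, error) {
	// Publish the request; the consumer is expected to process it and
	// SET the result under `key` in Redis.
	if _, _, err := producer.SendMessage(&sarama.ProducerMessage{
		Topic: "poc-test",
		Key:   sarama.StringEncoder(key),
		Value: sarama.StringEncoder(payload),
	}); err != nil {
		return "", err
	}

	// Block the HTTP request by polling the key up to maxTries.
	for i := 0; i < maxTries; i++ {
		val, err := rdb.Get(ctx, key).Result()
		if err == nil {
			return val, nil // the consumer has delivered the data
		}
		if !errors.Is(err, redis.Nil) {
			return "", err // real Redis error (e.g. pool timeout)
		}
		time.Sleep(pollInterval)
	}
	return "", errors.New("timed out waiting for the consumer")
}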

Step 1

Create a topic with 4 partitions using this command

If you have done this before then SKIP this step

$ docker exec -it kafka kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 4 --topic poc-test

Step 2

Make sure the topic exists with the expected number of partitions; you can use this command

$ docker exec -it kafka kafka-topics --zookeeper localhost:2181 --describe --topic poc-test

Or you can navigate to http://localhost:3030/ in your browser and choose Kafka Topics UI
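Alternatively, to just check that the topic exists, you can list all topics with the same CLI:

$ docker exec -it kafka kafka-topics --zookeeper localhost:2181 --list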

Step 3

Increase the number of file descriptors to accommodate the load test

$ ulimit -n 20000
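You can check the current limit by running ulimit -n with no argument. Note that the new limit only applies to the current shell session, so set it in each terminal you will use for the consumer, the HTTP server, and the load test.

$ ulimit -n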

Step 4

Start a new terminal window and change directory to the root of the project.

Run the consumer first

$ go run redis_as_integration_point/*.go consumer

Optional flags:

  • -broker Kafka broker host, default: localhost
  • -topic Kafka topic name, default: poc-test
  • -cg consumer group name, default: testCG
  • -redisAddr Redis address, default: localhost:6379
  • -minD minimum synthetic delay duration, default: 0s
  • -maxD maximum synthetic delay duration, default: 0s
  • -async whether to process each message from Kafka asynchronously, default: true
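For example, this runs the consumer with every flag spelled out (the values shown are just the defaults made explicit, using standard Go flag syntax):

$ go run redis_as_integration_point/*.go consumer -broker=localhost -topic=poc-test -cg=testCG -redisAddr=localhost:6379 -minD=0s -maxD=0s -async=true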

Step 5

Now start another terminal and change directory to the project's root.

Run the HTTP server

$ go run redis_as_integration_point/*.go http

Optional flags:

  • -broker Kafka broker host, default: localhost
  • -topic Kafka topic name, default: poc-test
  • -redisAddr Redis address, default: localhost:6379

You can stop the HTTP server with Ctrl+C

Step 6

Now you can start the load test using the custom vegeta-based load tester:

$ go run load_test/http.go -dur=1m -rps=100 | tee result.bin | vegeta report

Flags:

  • -rps requests per second, default: 100
  • -dur how long the test will run, default: 5s
  • -vars total number of request variants, default: 1000000

You can NOT pass standard vegeta flags to it, but the result (i.e. result.bin) can be inspected with the normal vegeta commands (e.g. vegeta report)

The command above runs the load test for 1 minute at a rate of 100 requests/second.
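The custom load tester is built on vegeta's Go library, which is why its output stays compatible with vegeta report. A minimal sketch of that pattern, assuming vegeta v12's library API and with the target URL and attack name made up for illustration:

package main

import (
	"os"
	"time"

	vegeta "github.com/tsenart/vegeta/v12/lib"
)

func main() {
	// Fire requests at a constant rate for a fixed duration.
	rate := vegeta.Rate{Freq: 100, Per: time.Second}
	duration := time.Minute
	targeter := vegeta.NewStaticTargeter(vegeta.Target{
		Method: "GET",
		URL:    "http://localhost:8080/", // assumption, not the repo's endpoint
	})
	attacker := vegeta.NewAttacker()

	// Encode each result in vegeta's own format so the output file
	// (e.g. result.bin) can later be inspected with `vegeta report`.
	enc := vegeta.NewEncoder(os.Stdout)
	for res := range attacker.Attack(targeter, rate, duration, "poc-test") {
		if err := enc(res); err != nil {
			break
		}
	}
}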

On my machine (MacBook Pro 15-inch 2017, 2.9 GHz Intel Core i7, 16 GB 2133 MHz LPDDR3 RAM), the result is good enough.

BUT

Increase the RPS slightly to 500 and there are a lot of HTTP 500 errors due to Redis connection exhaustion. The success rate is ~40%

$ go run load_test/http.go -dur=1m -rps=500 | tee result.bin | vegeta report

And...

It's even worse if the consumer is started with -async=false, since the consumer then uses only a single goroutine to process all messages instead of a dedicated goroutine for every message, as sketched below
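A rough sketch of what that flag changes in the consume loop (the loop shape and handleMessage are placeholders, not the repo's actual code):

package sketch

// handleMessage stands in for the real per-message work (apply the
// synthetic delay, write the result into the Redis key, etc.).
func handleMessage(msg []byte) { /* ... */ }

// consumeLoop illustrates the -async flag: with async=true every message
// gets its own goroutine so slow handlers overlap; with async=false a
// single goroutine handles messages one by one, so each message blocks
// the next while the HTTP side keeps polling Redis.
func consumeLoop(messages <-chan []byte, async bool) {
	for msg := range messages {
		if async {
			go handleMessage(msg)
		} else {
			handleMessage(msg)
		}
	}
}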

Restart the consumer using this:

$ go run redis_as_integration_point/*.go consumer -async=false

Re-run the load test

$ go run load_test/http.go -dur=1m -rps=100 | tee result.bin | vegeta report

The result was very disappointing

The success rate is 8.22%, with an average latency of ~11 seconds and a 99th percentile of ~13 seconds

The tests above assume each message is processed by the consumer without any delay. If we introduce a synthetic delay of 3-5 seconds per message to simulate real-world processing, even 100 rps is not acceptable.
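Presumably -minD/-maxD just sleep for a random duration in that range before each message is handled; a minimal sketch of that idea (names and the exact behaviour are assumptions):

package sketch

import (
	"math/rand"
	"time"
)

// syntheticDelay sleeps for a random duration in [minD, maxD] to simulate
// real-world processing time per message.
func syntheticDelay(minD, maxD time.Duration) {
	d := minD
	if maxD > minD {
		d += time.Duration(rand.Int63n(int64(maxD-minD) + 1))
	}
	time.Sleep(d)
}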

Restart the consumer using this:

$ go run redis_as_integration_point/*.go consumer -async=false -minD=3s -maxD=5s

Re-run the load test

$ go run load_test/http.go -dur=1m -rps=100 | tee result.bin | vegeta report

The success rate is a mere ~0.03%: too many requests end up waiting for a Redis connection from the pool while polling, resulting in timeouts / HTTP 500 errors.
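For reference, this is roughly where the pressure comes from. Assuming a go-redis v8 client (an assumption, not confirmed by this repo), every in-flight HTTP request grabs a pooled connection on each poll; once PoolSize connections are busy, the next Get waits up to PoolTimeout and then fails, which the handler surfaces as an HTTP 500.

package sketch

import (
	"time"

	"github.com/go-redis/redis/v8"
)

// newRedisClient shows the pool knobs that bound how many requests can
// poll Redis at once; the values here are illustrative, not the repo's
// actual configuration.
func newRedisClient() *redis.Client {
	return redis.NewClient(&redis.Options{
		Addr:        "localhost:6379",
		PoolSize:    100,             // connections shared by all pollers
		PoolTimeout: 3 * time.Second, // Get fails with a pool timeout after this
	})
}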

Seems we can't use this approach...