This repository contains code for various experiments used to benchmark Redis and Memcached. We used memtier-benchmark to evaluate their performance.
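For context, a typical memtier-benchmark invocation looks like the sketch below. The flags shown are standard memtier_benchmark options, but the binary path and the parameter values are assumptions for illustration; the exact values used by the experiments are set in `src/run_experiments.py`.

```sh
# Illustrative run against a local Redis (parameter values are assumptions,
# not the exact values used by the experiments).
bin/memtier-benchmark -s 127.0.0.1 -p 6379 --protocol=redis \
  --threads=4 --clients=50 --requests=10000 --ratio=1:10
```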
- `src/`: This directory contains the source code to run the experiments.
- `logs/`: The logs of execution will be stored here. A new run-id will be created for each run and the logs will be stored in the corresponding directory.
- `outputs/`: All the plots and summary files are saved in this folder under the corresponding run-id of the execution.
- `bin/`: This folder contains the memtier-benchmark binary used for performing the experiments. Note that we changed the source code of memtier-benchmark; the modified source code can be found here.
- `assets/`: This folder contains the configuration files for the different persistence models of Redis. These are used to evaluate the overhead of the various persistence models.
Required Python packages: matplotlib, numpy, pandas, tqdm, redis, pymemcache (the standard-library modules os and sys are also used).
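The third-party packages can be installed with pip (`redis` and `pymemcache` are the PyPI names of the Redis and Memcached client libraries):

```sh
# Install the third-party dependencies; os and sys ship with Python.
pip3 install matplotlib numpy pandas tqdm redis pymemcache
```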
- Before executing the project, make sure the redis and memcached services are running. It is assumed that Redis and Memcached don't require authentication to connect; if they do, please disable authentication or change the commands in `src/run_experiments.py`. It is also assumed that Redis runs on port 6379 and Memcached runs on port 11211. `cd` into the project and run `python3 src/run_experiments.py [experiment_name]`, as in the sketch below.
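A minimal end-to-end sketch, assuming both servers are managed by systemd (service names and start commands vary by platform, and netcat flags vary by implementation):

```sh
# Start both servers on their default ports (systemd service names are platform-dependent).
sudo systemctl start redis
sudo systemctl start memcached

# Sanity-check that both servers accept unauthenticated connections.
redis-cli -p 6379 ping                            # should print PONG
echo stats | nc -q 1 localhost 11211 | head -n 3  # -q is a GNU netcat flag

# Run one of the experiments from the project root.
python3 src/run_experiments.py latency_benchmark
```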
- Available options for `experiment_name`:
  - `latency_benchmark`: Runs latency benchmarks for Redis and Memcached with a varying number of SET and GET operations and plots the average and 99th-percentile latency.
  - `throughput_benchmark`: Runs throughput benchmarks for Redis and Memcached and plots the throughput at each second of the execution.
  - `scalability_benchmark`: Runs the scalability benchmark by varying the number of threads and plots the average latency and the average throughput.
  - `memory_usage`: Runs the memory-usage benchmark by varying the number of keys stored and records the memory used.
  - `persistance_models`: For this benchmark, you need to use the configuration file corresponding to the persistence model of interest (available in the `assets/` directory) and restart Redis; see the sketch after this list. This experiment then saves the average latency and throughput corresponding to that persistence model. You need to do multiple runs to compare different persistence models.
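A sketch of one `persistance_models` run, assuming the files in `assets/` are standard Redis configuration files (the file name `appendonly.conf` is hypothetical; use whichever file in `assets/` matches the persistence model you want to measure):

```sh
# Restart Redis with the persistence configuration under test.
redis-cli -p 6379 shutdown nosave
redis-server assets/appendonly.conf &   # hypothetical file name; pick one from assets/

# Benchmark that persistence model; repeat with a different config file to compare.
python3 src/run_experiments.py persistance_models
```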
- The output plots and the CSV files will be stored in `outputs/{run-id}/`. The detailed logs of the execution can be found at `logs/{run-id}/`.
- Note: Don't remove the `logs/dump` directory. It stores intermediate files during execution.