Throughput collapses on large object batch requests #189

Open
konradgithuup opened this issue May 31, 2024 · 1 comment


@konradgithuup

I measured the throughput of batches of 10_000 object read/write operations. Initially, throughput approaches a limit as the size of the read/written segments increases, but at some point it collapses and remains greatly reduced from there on. This occurs both on my machine and on the OVGU ants cluster (ant14).

The limit itself depends on the I/O interface used. On the cluster, POSIX I/O approached a limit of around 6 GB/s, while an mmap backend approached 3 GB/s. The collapse occurs between segment sizes of 256 and 512 KiB, regardless of the I/O interface.

POSIX sequential write (ant14 curves: ssd, hdd, nvme; local curve: local):

[Figure: POSIX-Impact-of-Environment-(Write)]

The same data with linear scaling for dramatic effect:

[Figure: POSIX-Impact-of-Environmentliner-(Write)]

@konradgithuup (Author)

Local

  • CPU: AMD Ryzen 5 5625U
  • Memory: 16 GB
  • File System: ext4
  • nvme: Micron MTFDKCD512TFK

ant14

  • CPU: AMD EPYC 7443 24-Core Processor
  • Memory: 128 GB
  • File System: ext4
  • nvme: Micron_7450_MTFDKBA960TFR
  • ssd: SAMSUNG MZ7LH960HAJR-00005
  • hdd: ST4000LM024-2U817V
