
CP-52709: use timeslices shorter than 50ms #6177

Open
wants to merge 5 commits into base: feature/perf

Conversation

@edwintorok (Contributor) commented Dec 12, 2024

Changing the default OCaml thread switch timeslice from 50ms

The default OCaml 4.x timeslice for switching between threads is 50ms: if more than one OCaml thread is active, each is allowed to run for up to 50ms, after which (at various safepoints) the runtime can switch to another runnable thread.
When the runtime lock is released (while C code or syscalls run), another OCaml thread is allowed to run immediately, if there is one.

However, 50ms is too long: it inserts large latencies into the handling of API calls.

On the other hand, if the timeslice is too short, we waste CPU time:

  • the overhead of Thread.yield and the cost of switching threads at the OS level
  • potentially more L1/L2 cache misses if multiple OCaml threads take turns on the same CPU
  • potentially losing branch predictor history
  • potentially more L3 cache misses (although on a hypervisor with VMs running, L3 will mostly be taken up by the VMs anyway; we can only rely on L1/L2 staying with us)

A microbenchmark has shown that timeslices as small as 0.5ms may strike the best balance between latency and overhead: shorter timeslices lose performance to the increased switching overhead, and longer ones lose performance to the increased latency:

[plots: auto_p, auto_pr]

(the microbenchmark measures the number of CPU cycles spent simulating an API call with various working set sizes and timeslice settings)
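For illustration, here is a rough, self-contained sketch of that kind of measurement (not the PR's benchmark code): the working-set size, the call count and the use of wall-clock time instead of CPU cycle counters are all placeholders.

```ocaml
(* Sketch only: simulate an "API call" that walks a working set while another
   OCaml thread competes for the CPU, and report the average time per call. *)

let working_set_bytes = 256 * 1024 (* placeholder size *)

let data = Bytes.make working_set_bytes 'x'

(* One simulated API call: touch every cache line of the working set. *)
let simulated_api_call () =
  let acc = ref 0 in
  let i = ref 0 in
  while !i < working_set_bytes do
    acc := !acc + Char.code (Bytes.unsafe_get data !i) ;
    i := !i + 64
  done ;
  ignore (Sys.opaque_identity !acc)

(* A competing thread, so that timeslice-based switching actually happens.
   A plain bool ref is fine under the OCaml 4 runtime lock. *)
let stop = ref false

let competitor () = while not !stop do ignore (Sys.opaque_identity (sin 1.)) done

let () =
  let t = Thread.create competitor () in
  let calls = 10_000 in
  let t0 = Unix.gettimeofday () in
  for _ = 1 to calls do
    simulated_api_call ()
  done ;
  let t1 = Unix.gettimeofday () in
  stop := true ;
  Thread.join t ;
  Printf.printf "avg per simulated call: %.1f us\n"
    ((t1 -. t0) /. float_of_int calls *. 1e6)
```

Running the same loop under different timeslice settings and working-set sizes gives the kind of curves shown in the plots above.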

All of this is hardware-dependent, though, so a future PR will introduce an autotune service that measures the yield overhead and the L1/L2 cache refill overhead, and computes an optimal timeslice for that particular hardware/Xen/kernel combination.
(While we're at it, we can also tweak the minor heap size to match roughly half of the CPU's L2 cache.)

Timeslice change mechanism

Initially I used Unix.set_itimer with a virtual timer, so that a thread would only be switched once it had been actively using the CPU for too long. However, that relies on delivering a signal to the process, and XAPI is very bad at handling signals.
In fact XAPI must not receive any signals at all, because it doesn't handle EINTR well (a typical problem that affects C programs too). Although this is a well-understood problem (described in the OCaml Unix book), and some areas of XAPI make an effort to handle it, others simply assert that they never receive a signal. Fixing that would require changes throughout XAPI (and its dependencies).

So instead of using signals, I rely on Statmemprof (Gc.Memprof) to trigger a hook that runs "periodically", based not purely on time but on allocation activity (i.e. at places where the GC could run). The hook checks the time elapsed since it was last called, and if too much has passed it calls Thread.yield.
Thread.yield is smart enough to be a no-op if there are no other runnable OCaml threads.

Yielding isn't always beneficial for reducing latency, though: if we are holding locks, yielding only increases latency for everyone waiting on those locks.
So a mechanism is introduced to notify the periodic hook when any highly contended lock is held (e.g. the XAPI DB lock), and the yield is skipped in that case.
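The following is a minimal sketch of that mechanism (assuming OCaml ≥ 4.11 for Gc.Memprof); the names timeslice, last_yield and locks_held, and the sampling rate, are illustrative rather than the PR's actual code:

```ocaml
(* Sketch only: piggyback on Gc.Memprof allocation callbacks to check the clock
   "periodically" and yield once the configured timeslice has elapsed, unless a
   highly contended lock is currently held. *)

let timeslice = ref 0.005 (* seconds; the 5ms default discussed below *)

let last_yield = ref (Unix.gettimeofday ())

let locks_held = ref 0 (* incremented/decremented around e.g. the DB lock *)

let maybe_yield () =
  let now = Unix.gettimeofday () in
  if !locks_held = 0 && now -. !last_yield >= !timeslice then begin
    last_yield := now ;
    (* a no-op if there is no other runnable OCaml thread *)
    Thread.yield ()
  end

let start () =
  (* The callbacks return None (no per-block state is kept); they are only
     used as a cheap hook that fires in proportion to allocation activity. *)
  let tracker =
    { Gc.Memprof.null_tracker with
      alloc_minor= (fun _ -> maybe_yield () ; None)
    ; alloc_major= (fun _ -> maybe_yield () ; None)
    }
  in
  Gc.Memprof.start ~sampling_rate:1e-4 ~callstack_size:0 tracker
```

In the real code the clock would be monotonic and the lock bookkeeping would sit next to the contended locks themselves; the sketch only shows the overall shape.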

Plotting code

This PR only includes a very simplified version of the microbenchmark; a separate PR will introduce the full cache plotting code (which is useful for development/troubleshooting purposes but isn't needed at runtime).

Default timeslice value

Set to 5ms for now, just a bit above 4ms = 1/HZ in our Dom0 kernel; the autotuner from a future PR can change this to a more appropriate value.
(The autotuner needs more validation on a wider range of hardware.)
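For context, a hypothetical sketch of how the configured value could be applied at startup; the option name timeslice_config is made up here, and Xapi_timeslice.Timeslice.set is the call referred to in the review below (its exact signature may differ):

```ocaml
(* Hypothetical wiring only; the real names and signatures in the PR may differ. *)
let timeslice_config = ref 0.005 (* assumed config-backed option, in seconds *)

let apply_timeslice () =
  let interval = !timeslice_config in
  (* The built-in 50ms switching cannot be turned off, so the extra yield
     mechanism is only worth installing for strictly shorter intervals. *)
  if interval > 0. && interval < 0.05 then
    Xapi_timeslice.Timeslice.set interval
```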

Results

The cache measurements need to be repeated on a wider variety of hardware, but the timeslice changes here have already proven useful in reducing XAPI DB lock hold times (together with other optimizations).

Uses Gc.Memprof to run a hook function periodically.
The hook checks whether we are holding any locks, and if not, and sufficient time has elapsed since the last yield,
then we yield.

POSIX timers wouldn't work here, because they deliver signals, and there are too many places in XAPI
that don't handle EINTR properly.

Signed-off-by: Edwin Török <[email protected]>
And apply on startup.

Signed-off-by: Edwin Török <[email protected]>
@@ -0,0 +1,48 @@
(* avoid allocating an extra option every time *)
Member commented:
Please add the copyright headers.

let interval = !timeslice in
D.debug "Setting timeslice to %.3fs" interval ;
if interval >= 0.05 then
D.debug "Timeslice same as or larger than default: not setting"
@robhoes (Member) commented Dec 13, 2024:
Then where is Xapi_timeslice.Timeslice.set called for the default case?

Member commented:
Oh you mean the OCaml default of 50 ms.

@edwintorok (Contributor, Author) commented Dec 13, 2024:

Yeah, the default timeslice is always there; this mechanism is in addition to it, and it is not possible to turn off the default 50ms switching.
I should probably document that.
