Add the PoolQueue middleware #2540
Conversation
Tower's Buffer middleware is used to make a Service shareable across tasks. Within Linkerd, we've augmented the Buffer with failfast behavior, etc., as a Queue middleware. This queue plays a vital role in dispatching requests to load balancers, ensuring that load is shed when a balancer has no available endpoints.

The tower Balancer additionally holds a Stream of endpoint updates that are processed as the balancer is driven to readiness. Crucially, this means that updates to the balancer can only occur while requests are being processed. We cannot eagerly drop defunct endpoints, nor can we eagerly connect to new endpoints. We have observed situations where long-lived, mostly-idle balancers can buffer discovery updates indefinitely, bloating memory and even forcing backpressure to the control plane.

To correct this behavior, this change introduces the PoolQueue middleware. The PoolQueue is based on tower's Buffer, but instead of composing over an inner Service, it composes over an inner Pool.

Pool is a new interface that extends the Service interface with methods to update the pool's members and to drive all pending endpoints in the pool to readiness (decoupling the semantics of Service::poll_ready and Pool::poll_pool). Pool implementations will typically hold a ReadyCache of inner endpoints (as the tower balancer does). This change, however, does not include a concrete Pool implementation to replace the balancer; a p2c pool will be introduced in a follow-up change.

This change has the added benefit of simplifying the endpoint discovery pipeline. We currently process Updates (including an array of endpoints) from the discovery API and convert them into a stream of discrete endpoint updates for the balancer, requiring redundant caching. The Pool interface processes Updates directly, so there is no need for the extra translation.
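The Pool interface described above might be sketched roughly as follows. This is an illustrative sketch only: the names, signatures, and the `Update` variants here are hypothetical, and the actual trait in linkerd2-proxy differs in its details.

```rust
use std::task::{Context, Poll};

/// Hypothetical sketch of a discovery update carrying an array of
/// endpoints, mirroring the shape of updates from the discovery API.
pub enum Update {
    Reset(Vec<std::net::SocketAddr>),
    Add(Vec<std::net::SocketAddr>),
    Remove(Vec<std::net::SocketAddr>),
}

/// Hypothetical sketch of the Pool interface: Service-like, but with
/// two extra capabilities a plain Service lacks.
pub trait Pool<Req> {
    type Response;
    type Error;
    type Future;

    /// Apply a discovery update directly, independently of request
    /// processing, so defunct endpoints can be dropped (and new ones
    /// connected) even while the pool is idle.
    fn update_pool(&mut self, update: Update);

    /// Drive *all* pending endpoints toward readiness. Decoupled from
    /// `poll_ready`, which only asks whether a request can be dispatched.
    fn poll_pool(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>>;

    // The usual Service-like methods:
    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>>;
    fn call(&mut self, req: Req) -> Self::Future;
}
```

The key point is that `update_pool` and `poll_pool` give the PoolQueue's worker task a way to process discovery updates and endpoint state without any request traffic flowing.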
Codecov Report

@@            Coverage Diff             @@
##             main    #2540      +/-   ##
==========================================
+ Coverage   68.00%   68.24%   +0.23%
==========================================
  Files         338      344       +6
  Lines       15975    16171     +196
==========================================
+ Hits        10864    11036     +172
- Misses       5111     5135      +24
Following #2540, which introduces a new PoolQueue and Pool interface, this change introduces a P2cPool implementation that replaces Tower's p2c balancer (using the same underlying ReadyCache and p2c implementations). This balancer implementation is currently unused. It will be integrated in a follow-up change.
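The core of power-of-two-choices selection that a p2c pool relies on can be sketched in a few lines. This is a generic illustration, not the linkerd2-proxy code: the real P2cPool reuses tower's ReadyCache and load-tracking services, and here the caller is assumed to supply two randomly chosen indices into the set of ready endpoints.

```rust
/// Power-of-two-choices: given the current load of each ready endpoint
/// and two randomly sampled (distinct) indices `a` and `b`, dispatch to
/// whichever of the two is less loaded. Illustrative sketch only.
fn p2c(loads: &[u32], a: usize, b: usize) -> usize {
    if loads[a] <= loads[b] {
        a
    } else {
        b
    }
}
```

Comparing only two random candidates (rather than scanning all endpoints) keeps selection O(1) while still strongly biasing traffic away from overloaded endpoints.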
This looks great! I had some fairly minor comments but didn't notice any significant concerns.
Remove the error::Closed variant, since it indicates that the inner task panicked. It seems preferable to propagate the panic rather than to handle it gracefully.
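The idea in this suggestion — re-raising a worker's panic in the caller instead of converting it into a "closed" error — can be sketched generically with standard-library threads (the actual middleware uses async tasks; `join_or_propagate` is a hypothetical helper for illustration):

```rust
use std::thread;

/// Generic sketch: join a background worker and, if it panicked,
/// re-raise that panic in the caller rather than surfacing it as a
/// recoverable "closed" error.
fn join_or_propagate(handle: thread::JoinHandle<()>) {
    match handle.join() {
        // Worker exited cleanly; nothing to do.
        Ok(()) => {}
        // Propagate the worker's panic payload into this thread
        // instead of handling it gracefully.
        Err(payload) => std::panic::resume_unwind(payload),
    }
}
```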
This change culminates recent work to restructure the balancer to use a PoolQueue so that balancer changes may occur independently of request processing. The balancer task is now responsible for polling discovery streams directly, replacing the independent discovery buffering. Requests are buffered and processed as soon as the pool has available backends. Fail-fast circuit breaking is enforced on the balancer's queue so that requests can't get stuck in a queue indefinitely.

The new balancer is instrumented directly with metrics, with the relevant metric name prefix and labelset provided by the stack. These include detailed queue metrics (such as request in-queue latency histograms) as well as failfast states, discovery update counts, and balancer endpoint pool sizes.

---

* outbound: Move queues into the concrete stack (linkerd/linkerd2-proxy#2539)
* metrics: Remove unused features (linkerd/linkerd2-proxy#2542)
* Add the PoolQueue middleware (linkerd/linkerd2-proxy#2540)
* ci: Fixup codecov config (linkerd/linkerd2-proxy#2545)
* ci: Cancel prior runs (linkerd/linkerd2-proxy#2546)
* ci: Skip ARM builds during non-release CI (linkerd/linkerd2-proxy#2547)
* deps: Update tokio, tonic, and prost (linkerd/linkerd2-proxy#2544)
* build(deps): bump tj-actions/changed-files from 40.2.0 to 40.2.1 (linkerd/linkerd2-proxy#2549)
* metrics: Use prometheus-client for proxy_build_info (linkerd/linkerd2-proxy#2551)
* balance: Add a p2c Pool implementation (linkerd/linkerd2-proxy#2541)
* metrics: Export process metrics using prometheus-client (linkerd/linkerd2-proxy#2552)
* linkerd_identity: split `linkerd_identity::Id` into DNS and URI variants (linkerd/linkerd2-proxy#2538)
* outbound: Move HTTP balancer into its own module (linkerd/linkerd2-proxy#2554)
* app: Setup prom registry for use in balancers (linkerd/linkerd2-proxy#2555)
* vscode: Move workspace settings to devcontainer (linkerd/linkerd2-proxy#2557)
* build(deps): bump tj-actions/changed-files from 40.2.1 to 40.2.2 (linkerd/linkerd2-proxy#2556)
* balance: Instrument metrics in pool balancer (linkerd/linkerd2-proxy#2558)
* Enable PoolQueue balancer (linkerd/linkerd2-proxy#2559)

Signed-off-by: Oliver Gould <[email protected]>
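The fail-fast behavior described above — shedding queued requests once the pool has been unavailable for too long — can be sketched with a small synchronous state machine. This is a hypothetical illustration of the concept; the real middleware implements this asynchronously inside the PoolQueue worker.

```rust
use std::time::{Duration, Instant};

/// Illustrative failfast gate: once the pool has been continuously
/// unavailable for longer than `timeout`, queued requests should be
/// shed (failed) rather than waiting indefinitely. Hypothetical sketch.
struct Failfast {
    timeout: Duration,
    unavailable_since: Option<Instant>,
}

impl Failfast {
    fn new(timeout: Duration) -> Self {
        Self { timeout, unavailable_since: None }
    }

    /// Record the pool's current availability; returns true when
    /// queued requests should be shed.
    fn poll(&mut self, pool_available: bool, now: Instant) -> bool {
        if pool_available {
            // Any available backend resets the failfast clock.
            self.unavailable_since = None;
            return false;
        }
        // Note when unavailability began, then check the elapsed time.
        let since = *self.unavailable_since.get_or_insert(now);
        now.duration_since(since) >= self.timeout
    }
}
```

Because the gate trips only after a continuous window of unavailability, brief endpoint churn does not fail requests, while a persistently empty pool cannot strand them in the queue.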
This basic implementation does not yet include metrics, though it includes copious diagnostic logging.