Envoy's lb_policy: ROUND_ROBIN is not balancing cluster hosts #37369
Comments
Are there log files showing the behavior?
How much concurrency did you use? Note that round robin is a thread-local load balancer; it only ensures that requests on the same thread are distributed to the hosts in turn.
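The thread-local behavior described above can be illustrated with a small simulation (a sketch of the idea, not Envoy's actual code; the function and parameter names are made up for illustration). With 8 workers, each keeping its own independent position in the host list, 6 requests spread across the workers will generally not land on 6 distinct hosts:

```python
import random
from collections import Counter

def simulate(num_hosts=6, num_threads=8, num_requests=6, seed=0):
    """Model of per-worker round robin: each worker thread keeps its own
    position in the host list, so requests handled by different workers
    can pick the same host."""
    rng = random.Random(seed)
    # Assumption: each worker starts at an independent (random) position.
    position = [rng.randrange(num_hosts) for _ in range(num_threads)]
    hits = Counter()
    for _ in range(num_requests):
        thread = rng.randrange(num_threads)       # request lands on some worker
        hits[position[thread] % num_hosts] += 1   # worker picks its next host
        position[thread] += 1
    return hits

# With only 6 requests spread over 8 independent counters, the
# per-host counts are usually uneven, much like the logs below.
print(simulate())
```

In this model, a single worker (`--concurrency 1`) degenerates to one shared counter, and 6 requests would hit all 6 hosts exactly once.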
@wbpcode I am using a concurrency of 8:
/Envoy $ ps -aef | grep envoy
1001050+ 436 293 0 Nov26 pts/0 00:04:13 /usr/local/bin/envoy --config-path /tmp/envoy_bkp.yaml --base-id 0 --concurrency 8 --drain-time-s 30 --drain-strategy immediate --parent-shutdown-time-s 40 --restart-epoch 1
I tried running Envoy in debug mode and got this log:
[2024-11-27 03:52:00.102][24477][debug][dns] [source/extensions/network/dns_resolver/cares/dns_impl.cc:302] dns resolution for test-ckey-svc-headless.testns.svc.cluster.local. completed with status 0
This is how I am sending requests to port 8129 exposed by Envoy:
Description:
I have an Envoy configuration like this:
I have 6 replicas of the pods which are behind the service - test-ckey-svc-headless.{{ .Release.Namespace }}.{{ .Values.global.servicedomainName }}.
My expectation was that when I send 6 requests to Envoy, it would evenly distribute one request to each of the pods. But it is not distributing them evenly; some pods do not receive any requests at all.
[testvm ~]$ kubectl exec -it test-sd-ckey-0 -n testns -- bash -c "wc -l /var/DebugTrace/envoy/test_polling_service.log"
3 /var/DebugTrace/envoy/test_polling_service.log
[testvm ~]$ kubectl exec -it test-sd-ckey-1 -n testns -- bash -c "wc -l /var/DebugTrace/envoy/test_polling_service.log"
0 /var/DebugTrace/envoy/test_polling_service.log
[testvm ~]$ kubectl exec -it test-sd-ckey-2 -n testns -- bash -c "wc -l /var/DebugTrace/envoy/test_polling_service.log"
2 /var/DebugTrace/envoy/test_polling_service.log
[testvm ~]$ kubectl exec -it test-sd-ckey-3 -n testns -- bash -c "wc -l /var/DebugTrace/envoy/test_polling_service.log"
0 /var/DebugTrace/envoy/test_polling_service.log
[testvm ~]$ kubectl exec -it test-sd-ckey-4 -n testns -- bash -c "wc -l /var/DebugTrace/envoy/test_polling_service.log"
1 /var/DebugTrace/envoy/test_polling_service.log
[testvm ~]$ kubectl exec -it test-sd-ckey-5 -n testns -- bash -c "wc -l /var/DebugTrace/envoy/test_polling_service.log"
0 /var/DebugTrace/envoy/test_polling_service.log
Is there anything wrong in my configuration? Can anyone please help?
I also tried setting max_requests_per_connection: 1 in the cluster section, but that didn't help either.