From #6 (deployment limits):

@kfox1111 So, I'd really like to use chaoskube to force our deployments to exercise their connection tracking/safe shutdown code. Some assurance that too many pods don't get killed would be good, though. Would it be possible to add support for looking at the `.spec.strategy.rollingUpdate.maxUnavailable` field and the `.spec.replicas` field to ensure not too many are out at a time?
@linki I looked into PodDisruptionBudgets yesterday and they are pretty much what you want.
Kubernetes distinguishes voluntary pod evictions (e.g. due to draining, auto-downscaling) from involuntary ones (node failures, etc.).
With those budgets you define a label selector and a minimum number of pods matching that selector that must remain available. If an eviction would drop you below that minimum, the pod cannot be evicted; it can still be deleted directly, though. `kubectl drain` uses evict under the hood in order to honor the disruption budgets. Note that you can still fall below your minimum when an involuntary eviction happens while you are exactly at the minimum defined by your budget.
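For illustration, here is a minimal client-go sketch of creating such a budget (using the current `policy/v1` API; the name, namespace, selector, and minimum are placeholders, not from this thread):

```go
package main

import (
	"context"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createBudget creates a PodDisruptionBudget that keeps at least two pods
// matching app=myapp available during voluntary disruptions such as evictions.
func createBudget(ctx context.Context, client kubernetes.Interface) error {
	minAvailable := intstr.FromInt(2)
	pdb := &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "myapp-pdb", Namespace: "default"},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "myapp"},
			},
		},
	}
	_, err := client.PolicyV1().PodDisruptionBudgets("default").Create(ctx, pdb, metav1.CreateOptions{})
	return err
}
```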
I tested it yesterday with chaoskube and it works as expected. Unfortunately, the Go fake client that I use for writing tests doesn't quite show the same behaviour, even though it's usually very accurate.
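If I read the divergence right, it's because the fake clientset has no admission logic: it records the eviction action but never consults any PodDisruptionBudget. A hedged sketch of what a test would see (assuming a reasonably recent client-go that has `EvictV1`; names are made up):

```go
package chaoskube_test

import (
	"context"
	"testing"

	corev1 "k8s.io/api/core/v1"
	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes/fake"
)

// The fake clientset accepts evictions unconditionally, so a budget that
// would deny the eviction on a real API server has no effect here.
func TestFakeClientIgnoresBudgets(t *testing.T) {
	pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "some-pod", Namespace: "default"}}
	client := fake.NewSimpleClientset(pod)

	err := client.CoreV1().Pods("default").EvictV1(context.TODO(), &policyv1.Eviction{
		ObjectMeta: metav1.ObjectMeta{Name: "some-pod", Namespace: "default"},
	})
	if err != nil {
		t.Fatalf("fake eviction unexpectedly failed: %v", err)
	}
}
```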
The outcome should be that chaoskube can be run in a mode that respects the budgets and in one that doesn't, for true chaos.
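A rough sketch of what that switch could look like, again with client-go (this is my own illustration, not chaoskube's actual code; the `respectBudgets` flag is made up):

```go
package main

import (
	"context"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// killPod terminates a pod either via the Eviction subresource, which the
// API server checks against matching PodDisruptionBudgets (returning
// 429 Too Many Requests if the budget would be violated), or via a plain
// delete, which bypasses budgets entirely.
func killPod(ctx context.Context, client kubernetes.Interface, ns, name string, respectBudgets bool) error {
	if respectBudgets {
		return client.CoreV1().Pods(ns).EvictV1(ctx, &policyv1.Eviction{
			ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
		})
	}
	return client.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{})
}
```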