Fix VPA for latest prom-operator #1497
Conversation
service/controller/resource/monitoring/verticalpodautoscaler/resource.go
As discussed, this will be hard to test without actually creating the
Oh, we can test on a dev branch for sure, or change the bundle version to 1.1.0 to check that it works, yes :)
This is currently breaking cluster creation / deletion on
Could you please not test such changes on
Force-pushed from 3b87d45 to aad9fa7
* fix forgotten verticalpodautoscaler.Config
* move a const to a more suitable place

Co-authored-by: Herve Nicol <[email protected]>
Force-pushed from 8d9b502 to f419f37
testFunc := func(v interface{}) (interface{}, error) {
	c := Config{
		Logger:    logger,
		K8sClient: k8sClient,
		VpaClient: vpa_clientsetfake.NewSimpleClientset(),
		Provider: cluster.Provider{
			Kind:   "aws",
			Flavor: "vintage",
I think this is incorrect. It should be in the flavor loop, right?
Looks like it's already in the flavor loop, so I removed the Provider part of the config.
I did my best to test, by changing the observability-bundle minimum release on a separate branch. Seems to work 😁
* Fix VPA for latest prom-operator
* Update CHANGELOG.md
This PR should fix VPA issues that arise with the latest kube-prometheus-stack 9.0.0 release.

The issues come from the new scale subresource that was added to the Prometheus CR so that the prometheus operator can scale the number of shards of a Prometheus CR using tools like KEDA or the prometheus adapter. VPA also relies on the scale subresource to find the resources it needs to manage (recommend metrics); to do that, the VPA controllers walk up to the topmost controller that has the scale subresource.

Prior to the prometheus operator upgrade, the topmost controller was the StatefulSet controller, but it is now the Prometheus CR.

To be able to support slow CAPI upgrades, this PR actually uses the version of the observability bundle that runs on the MC to know whether PMO should set VPA to use the Prometheus CR as the topmost controller (olly bundle >= 1.2.0 with kube-prometheus-stack >= 9.0.0) or the StatefulSet controller (olly bundle < 1.2.0 with kube-prometheus-stack 8.x or lower).
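The version-gated behavior described above can be sketched as a small Go helper. This is a minimal illustration, not the operator's actual code: the function name `targetControllerKind`, the string-based version check, and the returned kind names are assumptions chosen for the example, and a real implementation would likely use a proper semver library.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// targetControllerKind decides which topmost controller the VPA should
// target, based on the observability bundle version running on the MC.
// Hypothetical helper for illustration only.
func targetControllerKind(bundleVersion string) string {
	if atLeast(bundleVersion, 1, 2) {
		// olly bundle >= 1.2.0 ships kube-prometheus-stack >= 9.0.0,
		// whose Prometheus CR exposes the scale subresource, so VPA
		// can use the Prometheus CR as the topmost controller.
		return "Prometheus"
	}
	// Older bundles: fall back to the StatefulSet controller.
	return "StatefulSet"
}

// atLeast reports whether a "major.minor[.patch]" version string is
// greater than or equal to the given major.minor pair.
func atLeast(v string, major, minor int) bool {
	parts := strings.SplitN(v, ".", 3)
	if len(parts) < 2 {
		return false
	}
	maj, err1 := strconv.Atoi(parts[0])
	min, err2 := strconv.Atoi(parts[1])
	if err1 != nil || err2 != nil {
		return false
	}
	return maj > major || (maj == major && min >= minor)
}

func main() {
	fmt.Println(targetControllerKind("1.2.0")) // Prometheus
	fmt.Println(targetControllerKind("1.1.0")) // StatefulSet
}
```

Comparing parsed major/minor integers (rather than raw strings) avoids the classic pitfall where "1.10.0" would sort before "1.2.0" lexicographically.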
Checklist

I have:
- Updated CHANGELOG.md