Helm does not update CRDs #78

Open
psact opened this issue Dec 9, 2021 · 5 comments


psact commented Dec 9, 2021

I get the following error when trying to run a helm diff upgrade on my k8s couchbase cluster:

Error: Failed to render chart: exit status 1: Error: unable to build kubernetes objects from release manifest: error validating
 "": error validating data: [ValidationError(CouchbaseCluster.spec.networking): unknown field "waitForAddressReachable" in
 com.couchbase.v2.CouchbaseCluster.spec.networking, ValidationError(CouchbaseCluster.spec.networking): unknown field
 "waitForAddressReachableDelay" in com.couchbase.v2.CouchbaseCluster.spec.networking]

It appears this comes from the addition of the waitForAddressReachable field to the CRD, and Helm does not support updating CRDs. From the docs: https://helm.sh/docs/chart_best_practices/custom_resource_definitions/

There is no support at this time for upgrading or deleting CRDs using Helm. This was an explicit decision after much community discussion due to the danger for unintentional data loss. Furthermore, there is currently no community consensus around how to handle CRDs and their lifecycle. As this evolves, Helm will add support for those use cases.

What is the recommended method for upgrading the CRD shipped with the helm chart?


tahmmee commented Dec 9, 2021

That's correct, Helm does not automatically update CRDs. You can get the CRDs by unpacking the release tarball: https://github.com/couchbase-partners/helm-charts/releases/download/couchbase-operator-2.2.201/couchbase-operator-2.2.201.tgz

Or by running helm pull couchbase/couchbase-operator and installing from the crds directory.

Then upgrade the CRDs:
kubectl replace -f couchbase-operator/crds/couchbase.crds.yaml
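Putting the steps above together, a minimal upgrade sketch (the release name `couchbase` in the last command is a placeholder; substitute your own release name, and note that these commands require helm, kubectl, and cluster access):

```shell
# Fetch the chart locally; --untar unpacks it into ./couchbase-operator
helm pull couchbase/couchbase-operator --untar

# Replace the CRDs before upgrading the release. kubectl replace updates
# the definitions in place, so existing custom resources are preserved.
kubectl replace -f couchbase-operator/crds/couchbase.crds.yaml

# With the CRDs updated, the upgrade (or helm diff upgrade) should validate.
helm upgrade couchbase couchbase/couchbase-operator
```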


psact commented Dec 10, 2021

Ok, will give that a try. When upgrading the operator to 2.2.201 with the old CRDs, the upgrade succeeds, but the operator pod goes into CrashLoopBackOff with the following stack trace:

{"level":"error","ts":1639082383.109733,"msg":"Observed a panic: \"invalid memory address or nil pointer dereference\" (runtime error: invalid memory address or nil pointer dereference)\ngoroutine 262 [running]:\nk
8s.io/apimachinery/pkg/util/runtime.logPanic(0x15d9ee0, 0x22de220)\n\t/home/couchbase/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0xa6\nk8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x
0, 0x0, 0x0)\n\t/home/couchbase/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x86\npanic(0x15d9ee0, 0x22de220)\n\t/home/couchbase/jenkins/workspace/couchbase-k8s-microservice-build/golangi
JTE7/go1.16.3/src/runtime/panic.go:965 +0x1b9\ngithub.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).reconcileMemberAlternateAddresses(0xc00085a0c0, 0x0, 0x0)\n\t/home/couchbase/jenkins/workspace/couchbase
-k8s-microservice-build/couchbase-operator/pkg/cluster/reconcile.go:1205 +0x6b\ngithub.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).reconcile(0xc00085a0c0, 0x0, 0x0)\n\t/home/couchbase/jenkins/workspace/
couchbase-k8s-microservice-build/couchbase-operator/pkg/cluster/reconcile.go:224 +0x7f7\ngithub.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).runReconcile(0xc00085a0c0)\n\t/home/couchbase/jenkins/workspac
e/couchbase-k8s-microservice-build/couchbase-operator/pkg/cluster/cluster.go:511 +0xdaf\ngithub.com/couchbase/couchbase-operator/pkg/cluster.New(0x7ffe88a2c453, 0x3, 0xc0000e0b00, 0xc0006b22a0, 0x2, 0x2)\n\t/home/c
ouchbase/jenkins/workspace/couchbase-k8s-microservice-build/couchbase-operator/pkg/cluster/cluster.go:205 +0x7ff\ngithub.com/couchbase/couchbase-operator/pkg/controller.(*CouchbaseClusterReconciler).Reconcile(0xc00
0725350, 0xc00076ed90, 0x9, 0xc0005f3d88, 0x15, 0xc0000a4680, 0xc0000e8ab0, 0xc0000e8a28, 0xc0000e8a20)\n\t/home/couchbase/jenkins/workspace/couchbase-k8s-microservice-build/couchbase-operator/pkg/controller/contro
ller.go:74 +0xbb5\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc00072b290, 0x1630300, 0xc0006b21c0, 0x0)\n\t/home/couchbase/go/pkg/mod/sigs.k8s.io/[email protected]
.4/pkg/internal/controller/controller.go:244 +0x2a9\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc00072b290, 0x203000)\n\t/home/couchbase/go/pkg/mod/sigs.k8s.io/contro
[email protected]/pkg/internal/controller/controller.go:218 +0xb0\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(...)\n\t/home/couchbase/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.
6.4/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0000a45a0)\n\t/home/couchbase/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f\nk8s.io/ap
imachinery/pkg/util/wait.BackoffUntil(0xc0000a45a0, 0x19595a0, 0xc0006b8840, 0x100000001, 0xc0001135c0)\n\t/home/couchbase/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0x9b\nk8s.io/apimachinery
/pkg/util/wait.JitterUntil(0xc0000a45a0, 0x3b9aca00, 0x0, 0x1, 0xc0001135c0)\n\t/home/couchbase/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98\nk8s.io/apimachinery/pkg/util/wait.Until(0xc000
0a45a0, 0x3b9aca00, 0xc0001135c0)\n\t/home/couchbase/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90 +0x4d\ncreated by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func
1\n\t/home/couchbase/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:179 +0x3d6\n","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/couchbase/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nk8s.io/klog/v2.(*loggingT).output\n\t/home/couchbase/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:893\nk8s.io/klog/v2.(*loggingT).printf\n\t/home/couchbase/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:733\nk8s.io/klog/v2.Errorf\n\t/home/couchbase/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:1416\nk8s.io/apimachinery/pkg/util/runtime.logPanic\n\t/home/couchbase/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:78\nk8s.io/apimachinery/pkg/util/runtime.HandleCrash\n\t/home/couchbase/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48\nruntime.gopanic\n\t/home/couchbase/jenkins/workspace/couchbase-k8s-microservice-build/golangiJTE7/go1.16.3/src/runtime/panic.go:965\nruntime.panicmem\n\t/home/couchbase/jenkins/workspace/couchbase-k8s-microservice-build/golangiJTE7/go1.16.3/src/runtime/panic.go:212\nruntime.sigpanic\n\t/home/couchbase/jenkins/workspace/couchbase-k8s-microservice-build/golangiJTE7/go1.16.3/src/runtime/signal_unix.go:734\ngithub.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).reconcileMemberAlternateAddresses\n\t/home/couchbase/jenkins/workspace/couchbase-k8s-microservice-build/couchbase-operator/pkg/cluster/reconcile.go:1205\ngithub.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).reconcile\n\t/home/couchbase/jenkins/workspace/couchbase-k8s-microservice-build/couchbase-operator/pkg/cluster/reconcile.go:224\ngithub.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).runReconcile\n\t/home/couchbase/jenkins/workspace/couchbase-k8s-microservice-build/couchbase-operator/pkg/cluster/cluster.go:511\ngithub.com/couchbase/couchbase-operator/pkg/cluster.New\n\t/home/couchbase/jenkins/workspace/couchbase-k8s-microservice-build/couchbase-operator/pkg/cluster/clus
ter.go:205\ngithub.com/couchbase/couchbase-operator/pkg/controller.(*CouchbaseClusterReconciler).Reconcile\n\t/home/couchbase/jenkins/workspace/couchbase-k8s-microservice-build/couchbase-operator/pkg/controller/controller.go:74\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/couchbase/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/couchbase/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/home/couchbase/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/home/couchbase/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/home/couchbase/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/home/couchbase/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/home/couchbase/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x143ca4b]

goroutine 262 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
        /home/couchbase/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x109
panic(0x15d9ee0, 0x22de220)
        /home/couchbase/jenkins/workspace/couchbase-k8s-microservice-build/golangiJTE7/go1.16.3/src/runtime/panic.go:965 +0x1b9
github.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).reconcileMemberAlternateAddresses(0xc00085a0c0, 0x0, 0x0)
        /home/couchbase/jenkins/workspace/couchbase-k8s-microservice-build/couchbase-operator/pkg/cluster/reconcile.go:1205 +0x6b
github.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).reconcile(0xc00085a0c0, 0x0, 0x0)
        /home/couchbase/jenkins/workspace/couchbase-k8s-microservice-build/couchbase-operator/pkg/cluster/reconcile.go:224 +0x7f7
github.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).runReconcile(0xc00085a0c0)
        /home/couchbase/jenkins/workspace/couchbase-k8s-microservice-build/couchbase-operator/pkg/cluster/cluster.go:511 +0xdaf
github.com/couchbase/couchbase-operator/pkg/cluster.New(0x7ffe88a2c453, 0x3, 0xc0000e0b00, 0xc0006b22a0, 0x2, 0x2)
        /home/couchbase/jenkins/workspace/couchbase-k8s-microservice-build/couchbase-operator/pkg/cluster/cluster.go:205 +0x7ff
github.com/couchbase/couchbase-operator/pkg/controller.(*CouchbaseClusterReconciler).Reconcile(0xc000725350, 0xc00076ed90, 0x9, 0xc0005f3d88, 0x15, 0xc0000a4680, 0xc0000e8ab0, 0xc0000e8a28, 0xc0000e8a20)
        /home/couchbase/jenkins/workspace/couchbase-k8s-microservice-build/couchbase-operator/pkg/controller/controller.go:74 +0xbb5
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc00072b290, 0x1630300, 0xc0006b21c0, 0x0)
        /home/couchbase/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:244 +0x2a9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc00072b290, 0x203000)
        /home/couchbase/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218 +0xb0
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(...)
        /home/couchbase/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:197
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0000a45a0)
        /home/couchbase/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0000a45a0, 0x19595a0, 0xc0006b8840, 0x100000001, 0xc0001135c0)
        /home/couchbase/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0000a45a0, 0x3b9aca00, 0x0, 0x1, 0xc0001135c0)
        /home/couchbase/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc0000a45a0, 0x3b9aca00, 0xc0001135c0)
        /home/couchbase/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
        /home/couchbase/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:179 +0x3d6

Will try to upgrade the CRD and see if it repeats.


dthomasag commented Mar 8, 2022

@tahmmee So for our use case, we're attempting to use Terraform to manage the Helm chart deployment. The manual steps listed make sense, but navigating them would be slightly anti-Terraform. One of the suggestions on that Helm documentation page is splitting the CRDs out into a separate chart; is this something that could be considered?


tahmmee commented Mar 15, 2022

Hi @dthomasag, a separate chart will also work. The slight downside is that you would have to manage it yourself and keep the CRD chart in sync with the Couchbase patch version you are installing.
If something like a kubectl apply/replace fits into your workflow, you could also do something like:

kubectl apply -f https://raw.githubusercontent.com/couchbase-partners/helm-charts/master/charts/couchbase-operator/crds/couchbase.crds.yaml

Just replace the branch (master here) with whatever version you are using.
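For Terraform-driven pinning, the URL can be derived from a release tag rather than the moving master branch; a small sketch (the tag format `couchbase-operator-<version>` matches the release tarball linked earlier, but verify it against the repo's actual tags):

```shell
# Build the raw-GitHub URL for the CRD manifest at a pinned release tag.
TAG="couchbase-operator-2.2.201"   # example tag; set to your chart version
CRD_URL="https://raw.githubusercontent.com/couchbase-partners/helm-charts/${TAG}/charts/couchbase-operator/crds/couchbase.crds.yaml"
echo "${CRD_URL}"

# Then apply it (commented out here, since it needs cluster access):
# kubectl apply -f "${CRD_URL}"
```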

@dthomasag

That's a good suggestion on running a kubectl apply against the repo's CRD manifest, thanks! We may give that a try.
