
Kubernetes 1.16 API breaking changes #22

Closed
vardius opened this issue Dec 17, 2019 · 10 comments

vardius (Owner) commented Dec 17, 2019

Hello and thanks for the fantastic boilerplate. I am using it as a starting point for my own project and am currently testing locally with minikube.
I will try to summarize my discoveries so far:

  1. Following the apiVersion breaking changes introduced in Kubernetes 1.16, some of the Helm charts no longer work for me, since they were not updated according to the deprecation guidelines.
    See: https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/
    Example after originally running make helm-install:
helm install --name go-api-boilerplate --namespace go-api-boilerplate helm/app/
Error: validation failed: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"
make: *** [Makefile:64: helm-install] Error 1

As you can see, with the current script it's impossible to tell which chart specifically failed. Do you observe the same behavior, by any chance?
2. Because of that, I have heavily modified the Helm scripts to fetch the latest packages from Helm Hub rather than use the .tgz archives included in the repo.
3. Even some of the latest stable charts have not yet been updated to the 1.16 guidelines, namely magic-namespace and heapster so far. I have disabled those for the time being.
4. Since I am installing the charts one by one, and not as one whole package like the original, I am facing some unexpected issues. Mostly, some Kubernetes service names don't match the nginx ingress definitions. For example, I expect go-api-boilerplate.user, go-api-boilerplate.auth etc., but I see them as microservice-user, microservice-auth in my services list (see the sketch right after this list).
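A hypothetical sketch of how the names could be aligned, assuming the subcharts honor the common nameOverride/fullnameOverride convention; the keys and names below are guesses for illustration, not taken from the actual charts:

# values override -- only works if the subchart templates use the
# standard fullnameOverride helper
user:
  fullnameOverride: go-api-boilerplate-user
auth:
  fullnameOverride: go-api-boilerplate-auth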

As soon as I have a decent workaround for the hardcoded charts I can create a PR; meanwhile, I would appreciate any ideas on how to make those dependencies more flexible instead of shipping .tgz packages in the repo (one option is sketched below).
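One way to make those dependencies more flexible, assuming the charts keep Helm 2's requirements.yaml mechanism; the chart name and version below are illustrative:

# helm/app/requirements.yaml -- fetch charts from a repository at
# "helm dependency update" time instead of vendoring .tgz archives
dependencies:
  - name: nginx-ingress
    version: 1.26.2
    repository: https://kubernetes-charts.storage.googleapis.com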

Originally posted by @mar1n3r0 in #15 (comment)

vardius (Owner) commented Dec 17, 2019

Thank you @mar1n3r0 for bringing this to my attention. To narrow down the specifics, can you tell me if this happens on a fresh clone of this repository? If not, have you tried using make helm-dependencies (I am not sure if this will do anything if the chart templates are outdated)?

I will try to reproduce/debug this myself today or tomorrow after work. Will share my experience soon.

The charts will possibly need to be updated to call the newer APIs. If you have anything that you would like to push, you are more than welcome to do so. Contributions bring more value to the repository and encourage others to contribute as well.

mar1n3r0 (Collaborator) commented Dec 17, 2019

Most welcome, @vardius. It happens on a fresh clone following the install instructions as per the guide. I am running make helm-dependencies before make helm-install.

I have the option to pin the Kubernetes version in minikube, so now I am testing with v1.14.10, which is the latest release before the new API. This results in CrashLoopBackOffs for quite a lot of containers. Most of them seem to be due to service account issues, for example kubernetes-dashboard complaining that it is not running in the kube-system namespace.
I decided to try the Kubernetes downgrade so that I am able to test with your original setup as intended.

Can you tell me which version of Kubernetes it was originally created and tested with, so we can debug further together?

vardius (Owner) commented Dec 17, 2019

I will let you know ASAP; I can't verify this right now as I am at my work station. Once I get home, I will let you know the exact version of the Kubernetes cluster I've got and whether the issue happens (I suppose it will). Right now I can tell you that I've been using Docker for Mac with Kubernetes.

Edit:
I just remembered that the versions are mentioned in Prerequisites:

In order to run this project you need to have Docker > 1.17.05 for building the production image and Kubernetes cluster > 1.11 for running pods installed.

mar1n3r0 (Collaborator) commented Dec 18, 2019

Yeah, my bad, I overlooked that. I will try all major versions between 1.11 and 1.14.10 on a fresh clone and report back.

Edit:
Tested with Kubernetes 1.14.10 and 1.13.12, which are the stable and regular channels for GKE. The auth and user services are in a constant Init:CrashLoopBackOff due to the migrate init container. The kubernetes-dashboard service is also failing. All of this is happening on a local cluster in minikube.
Going to try once more with 1.12.10, then stick to 1.14.10 afterwards in an effort to get all of them into a healthy state.
I needed to make a few amendments, which I will push as a PR for review. Namely, a repository was not reachable, being migrate:latest instead of migrate/migrate:latest, and the apiVersion of the cert-manager objects was updated from certmanager.k8s.io/v1alpha1 to cert-manager.io/v1alpha2.
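For illustration, the shape of that apiVersion change on an Issuer object; the resource name here is hypothetical, not taken from the actual templates:

# before (cert-manager < 0.11) -- hypothetical example object
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging

# after (cert-manager 0.11+ renamed the API group)
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: letsencrypt-staging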

A question: were your tests done directly on a cloud-provided Kubernetes instance, or locally?
I am trying to understand what could be the reason for the unstable state of most services going up and down, and whether it's related to a lack of resources.

vardius (Owner) commented Dec 18, 2019

All the tests I have performed were local only, deploying to my local Kubernetes cluster.

It is better to use a specific version instead of migrate/migrate:latest; this way any breaking change will not impact the boilerplate itself. Maybe migrate/migrate:v4.7.1?
https://hub.docker.com/r/migrate/migrate/tags
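A minimal sketch of what the pinned init container could look like; the container name, migrate CLI arguments, and DATABASE_URL variable are assumptions for illustration, not taken from the actual chart templates:

# hypothetical init container for the auth service, pinned to a fixed tag
initContainers:
  - name: migrate
    image: migrate/migrate:v4.7.1   # instead of migrate/migrate:latest
    args:
      - "-source"
      - "file:///migrations/auth"
      - "-database"
      - "$(DATABASE_URL)"           # assumed to come from the pod's env
      - "up"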

I believe some Helm chart templates need to be updated to fix the Kubernetes 1.16 API breaking changes, following the tips in the article you linked:
Which Kubernetes apiVersion Should I Use?

apiVersion: certmanager.k8s.io/v1alpha1

apiVersion: extensions/v1beta1

apiVersion: certmanager.k8s.io/v1alpha1

Probably need to change v1 to apps/v1?
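For reference, a minimal sketch of a Deployment on the post-1.16 API; apps/v1 also makes spec.selector mandatory, which extensions/v1beta1 did not (the name, labels, and image below are illustrative):

# Deployment after the 1.16 changes; extensions/v1beta1 was removed for this kind
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user            # illustrative name
spec:
  replicas: 1
  selector:             # required in apps/v1
    matchLabels:
      app: user
  template:
    metadata:
      labels:
        app: user       # must match the selector above
    spec:
      containers:
        - name: user
          image: example/user:1.0.0   # illustrative image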



Also need to make sure these are correct:

We could generate the current deployment templates with Helm as they are right now:
helm install --debug --dry-run
convert them:
kubectl convert -f ./my-deployment.yaml --output-version apps/v1
and then compare the difference.
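A rough sketch of that comparison workflow, assuming Helm 2 syntax and that kubectl convert is available in this kubectl version; the file names are illustrative:

# render the chart templates without installing anything (Helm 2)
helm install --dry-run --debug helm/app/ > current.yaml

# rewrite the manifests to the post-1.16 API groups
kubectl convert -f current.yaml --output-version apps/v1 > converted.yaml

# inspect what actually changed
diff current.yaml converted.yaml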

mar1n3r0 (Collaborator) commented Dec 19, 2019

I have locked migrate/migrate to v4.7.1 and managed to get the specific errors from the init containers:

error: open /migrations/auth: no such file or directory
error: open /migrations/user: no such file or directory

After thoroughly inspecting the Dockerfiles, I noticed that the second stage of the multi-stage build does not include the migrations folder, so I replaced the scratch image with an alpine image in order to have /bin/sh for inspecting the file system in the Docker container. Locally, migrations/auth and migrations/user are copied accordingly after that, but the errors still occur in the cluster pods.

All the tests I have performed were local only, deploying to my local Kubernetes cluster.

It is better to use a specific version instead of migrate/migrate:latest; this way any breaking change will not impact the boilerplate itself. Maybe migrate/migrate:v4.7.1?
https://hub.docker.com/r/migrate/migrate/tags

I believe some Helm chart templates need to be updated to fix the Kubernetes 1.16 API breaking changes, following the tips in the article you linked:
Which Kubernetes apiVersion Should I Use?

apiVersion: certmanager.k8s.io/v1alpha1

I believe it should be cert-manager.io/v1alpha2. See:
cert-manager
Also, the CRDs were updated to 0.12:
kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml

apiVersion: extensions/v1beta1

This one is fine, the ingress object definition didn't change.

apiVersion: certmanager.k8s.io/v1alpha1

I believe it should be cert-manager.io/v1alpha2 as well. See:
cert-manager

Probably need to change v1 to apps/v1?

This one is apps/v1, yes.

Also need to make sure these are correct:

I think all of these remain with the v1 tag, as there were no warnings about them.

We could generate the current deployment templates with Helm as they are right now:
helm install --debug --dry-run
convert them:
kubectl convert -f ./my-deployment.yaml --output-version apps/v1
and then compare the difference.

So far, though, the main issues are with external charts:

magic-namespace (PR open)

And heapster, meanwhile, was deprecated in favor of metrics-server:
heapster deprecated
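Since the testing here happens on minikube, metrics-server can simply be enabled as a minikube addon in place of heapster:

# enable the metrics-server addon in the local minikube cluster
minikube addons enable metrics-server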

vardius (Owner) commented Dec 19, 2019

I have locked migrate/migrate to v4.7.1 and managed to get the specific errors from the init containers:

error: open /migrations/auth: no such file or directory
error: open /migrations/user: no such file or directory

The migrations directory is not copied to the final image. The expected migration files, for example for the auth service:

- 'file:///migrations/auth'

Copied files are:
COPY --from=buildenv /go/bin/app /go/bin/app

Copying the migrations as well should solve the issue:

FROM scratch
COPY --from=buildenv /go/bin/app /go/bin/app
+ COPY --from=buildenv /app/migrations/"$BIN" /migrations/"$BIN"
ENTRYPOINT ["/go/bin/app"]

This needs to be done for both the auth and user services.
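Putting it together, a minimal sketch of the final stage, assuming the build stage is named buildenv, installs the binary to /go/bin/app, keeps the sources (including migrations) under /app, and receives BIN (auth or user) as a build arg:

FROM scratch
ARG BIN                 # selects the service: auth or user
COPY --from=buildenv /go/bin/app /go/bin/app
COPY --from=buildenv /app/migrations/"$BIN" /migrations/"$BIN"
ENTRYPOINT ["/go/bin/app"]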

vardius (Owner) commented Dec 21, 2019

I did some fixes in PR #25. Please verify whether it matches your changes.

Please see the versions and testing process here: #23 (comment)

vardius (Owner) commented Dec 21, 2019

Split into more granular issues:

#24
#27

vardius (Owner) commented Dec 22, 2019

Fixed:
#26
#27

vardius closed this as completed Dec 22, 2019