kubectl 1.20.0 compatibility #776
Comments
Thanks for bringing this up and the patch! I've got a PR going to add CI support for k8s v1.20. I'll do my best to get this patched in today. If not, it might not happen till January because of holiday vacations. |
@sjagoe excellent writeup. I have stumbled into this as well, though my "situation" differs slightly.

```
$ k version --short
Client Version: v1.19.4
Server Version: v1.17.13
```

The configmaps APIGROUP is blank:

```
$ kubectl api-resources -o wide | grep "SHORTNAMES\|configmap"
NAME         SHORTNAMES   APIGROUP   NAMESPACED   KIND        VERBS
configmaps   cm                      true         ConfigMap   [create delete deletecollection get list patch update watch]
```

The blank "becomes":

```
$ k apply -f ops.yaml --prune --prune-whitelist=v1/configmap
error: invalid GroupVersionKind format: v1/configmap, please follow <group/version/kind>
```

Would a fix along the lines of "if apigroup is blank or 'v1', set it to 'core/v1'" suffice? (Please treat this as the naïve question that it is, and apologies for not providing actual code...) |
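A minimal sketch of the fix the commenter is asking about (hypothetical helper names, not Krane's actual implementation): treat a blank APIGROUP, or a bare `v1`, as the core group when building a `<group/version/kind>` prune entry.

```ruby
# Hypothetical sketch of the suggested fix (not Krane's actual code):
# a blank APIGROUP cell from `kubectl api-resources` denotes the core
# group, and a bare "v1" should likewise map to "core".
def normalize_group(apigroup)
  group = apigroup.to_s.strip
  (group.empty? || group == "v1") ? "core" : group
end

# Build a <group/version/kind> entry acceptable to --prune-whitelist.
def prune_entry(apigroup, version, kind)
  "#{normalize_group(apigroup)}/#{version}/#{kind}"
end

prune_entry("", "v1", "ConfigMap")      # => "core/v1/ConfigMap"
prune_entry("apps", "v1", "Deployment") # => "apps/v1/Deployment"
```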
Something like that could work:
|
It looks like this is included in v2.1.4 and was fixed by #777. |
It looks like this is fixed but not noted in the changelog for v2.1.4 |
Bug report
Krane fails to deploy using kubectl v1.20.0 (released 8 Dec 2020). The reason appears to be changed output from `kubectl api-resources` that is now getting processed incorrectly. The new output format is:

contrasted with the old output:

The new output's APIVERSION column can be used (almost) verbatim to build the list of prunable resources, aside from the `v1` API version resources, which need to be specified as `core/v1`.
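The mapping described above can be sketched as follows (a hypothetical helper, not Krane's code): an APIVERSION value that already contains a group, such as `apps/v1`, passes through unchanged, while a bare `v1` is qualified as `core/v1`.

```ruby
# Hypothetical helper (not Krane's code): qualify a kubectl 1.20+
# APIVERSION value so it fits the <group/version> prune-whitelist form.
def qualify_apiversion(apiversion)
  # Core-group resources print a bare version ("v1"); everything else
  # already includes its group ("apps/v1", "batch/v1", ...).
  apiversion.include?("/") ? apiversion : "core/#{apiversion}"
end

qualify_apiversion("v1")      # => "core/v1"
qualify_apiversion("apps/v1") # => "apps/v1"
```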
Expected behavior: [What you expected to happen]

Actual behavior: `krane` deployments fail on a `kubectl` error:

The kubectl error message was retrieved by applying #714 to my krane installation.

Version(s) affected: Krane 2.1.3 (run `krane version`)
Steps to Reproduce
My quick and dirty hack/fix
A quick and dirty hack that made krane work with kubectl 1.20.0. Obviously this isn't a sufficient fix and breaks support for earlier versions of kubectl. I include it here to help give an idea of the issue.
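In the same spirit as the hack described above (the actual patch is not shown here, so the following is an illustrative sketch with assumed helper names): the 1.20 output can be parsed by header offsets, which keeps a blank cell from shifting fields. Like the original hack, this would break on pre-1.20 kubectl, whose wide output has an APIGROUP column rather than APIVERSION.

```ruby
# Illustrative sketch only (not the actual hack from this issue): parse
# the kubectl 1.20+ `api-resources -o wide` output into
# <group/version/kind> prune entries, slicing the APIVERSION column by
# its header offset and qualifying bare core versions as "core/...".
def prunable_entries(api_resources_output)
  lines = api_resources_output.lines.map(&:chomp)
  header = lines.first
  start = header.index("APIVERSION")
  stop  = header.index("NAMESPACED")
  raise "unexpected kubectl api-resources format" if start.nil? || stop.nil?
  lines.drop(1).map do |row|
    name = row.split.first
    apiversion = row[start...stop].to_s.strip
    # Core-group resources print a bare "v1"; qualify them as "core/v1".
    apiversion = "core/#{apiversion}" unless apiversion.include?("/")
    "#{apiversion}/#{name}"
  end
end
```

The header-offset slicing is the point of the sketch: naive whitespace splitting mis-parses rows whenever a column (such as SHORTNAMES) is empty.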