What keywords did you search in NGINX Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):
Unexpected failure reloading the backend
nginx ingress nginx.conf not created
nginx ingress readiness probe failed
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG
NGINX Ingress controller version:
0.26.1
Kubernetes version (use kubectl version):
1.16.3
Environment:
Cloud provider or hardware configuration: Bare Metal - Dell PowerEdge R330
OS (e.g. from /etc/os-release): NAME="Ubuntu" VERSION="18.04.1 LTS (Bionic Beaver)"
Kernel (e.g. uname -a): 4.15.0-70-generic
Install tools: None
Others:
What happened:
I upgraded my cluster (1 master, 2 worker nodes) from Kubernetes 1.13.x to 1.16.3 via a complete purge of every component, because I ran into problems during the process.
On the master I initialized the cluster:

```sh
kubeadm init --pod-network-cidr 10.244.0.0/16
```

Then I installed Flannel:

```sh
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
```

I joined the worker nodes to the cluster.

I applied a ConfigMap matching the name of the future ingress controller, to have a minimal setup of my own:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ait-nginx-ingress-nginx-ingress-controller
data:
  client-max-body-size: "2g"
```

After that I used Helm to install the stable NGINX ingress controller chart:

```sh
helm install ait-nginx-ingress stable/nginx
```
Then I get this from the ingress controller log until it exits and goes into CrashLoopBackOff:
```
I1126 11:29:30.452307 1 flags.go:198] Watching for Ingress class: nginx
W1126 11:29:30.452780 1 flags.go:243] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
W1126 11:29:30.452864 1 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1126 11:29:30.453208 1 main.go:182] Creating API client for https://10.96.0.1:443
I1126 11:29:30.465886 1 main.go:226] Running in Kubernetes cluster version v1.16 (v1.16.3) - git (clean) commit b3cbbae08ec52a7fc73d334838e18d17e8512749 - platform linux/amd64
I1126 11:29:30.468843 1 main.go:90] Validated default/ait-nginx-ingress-nginx-ingress-controller-default-backend as the default backend.
I1126 11:29:31.354300 1 main.go:101] SSL fake certificate created /etc/ingress-controller/ssl/default-fake-certificate.pem
I1126 11:29:31.400407 1 nginx.go:263] Starting NGINX Ingress controller
I1126 11:29:31.413969 1 event.go:255] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"default", Name:"ait-nginx-ingress-nginx-ingress-controller", UID:"5872f069-6eb7-42a0-805b-68a519130f4f", APIVersion:"v1", ResourceVersion:"1617", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap default/ait-nginx-ingress-nginx-ingress-controller
I1126 11:29:32.601278 1 nginx.go:307] Starting NGINX process
I1126 11:29:32.601348 1 leaderelection.go:241] attempting to acquire leader lease default/ingress-controller-leader-nginx...
I1126 11:29:32.602111 1 controller.go:134] Configuration changes detected, backend reload required.
I1126 11:29:32.608888 1 leaderelection.go:251] successfully acquired lease default/ingress-controller-leader-nginx
I1126 11:29:32.609115 1 status.go:86] new leader elected: ait-nginx-ingress-nginx-ingress-controller-d6b9d8cb4-md682
W1126 11:29:32.804960 1 nginx.go:39]
NGINX master process died (-1): signal: illegal instruction (core dumped)
E1126 11:29:32.824932 1 controller.go:146] Unexpected failure reloading the backend:
Error: signal: illegal instruction (core dumped)
W1126 11:29:32.824958 1 queue.go:130] requeuing initial-sync, err
Error: signal: illegal instruction (core dumped)
```

`kubectl describe pod` gives me a `Readiness probe failed: HTTP probe failed with statuscode: 500`.
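For reference, these are the commands I used to inspect the failing pod. The pod name comes from my log output above, and the label selector is an assumption about the chart's labels; both will differ in other setups:

```shell
# List the controller pods and their restart counts
# (label selector is a guess based on the stable chart's conventions)
kubectl get pods -l app=nginx-ingress

# Show events for the pod, including the failing readiness probe
kubectl describe pod ait-nginx-ingress-nginx-ingress-controller-d6b9d8cb4-md682

# Dump the log of the previous (crashed) container instance
kubectl logs ait-nginx-ingress-nginx-ingress-controller-d6b9d8cb4-md682 --previous
```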
What you expected to happen:
I expected both deployments (the controller and the default backend) to come up and work as expected.
How to reproduce it (as minimally and precisely as possible):
I tried to reproduce this on a cluster with the same setup as above (but on older servers), and there it worked fine. You could try to reproduce it by following the steps I did.
Anything else we need to know:
As mentioned, I have a test cluster with the same setup, and there it works fine.
My best guesses are either a problem with the communication from the nodes to the master API services, or some leftover trace of the old cluster doing damage.
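To test the first guess, a quick reachability check could look like this. The address 10.96.0.1 is the in-cluster API address from my log above; the endpoints are standard Kubernetes API paths:

```shell
# From a worker node: is the in-cluster API service reachable at all?
# -k skips TLS verification, since we are hitting the raw service IP
curl -k https://10.96.0.1:443/version

# From a machine with kubectl configured: overall API server health
kubectl get --raw /healthz
```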
I tried to reset the cluster several times with this:

```sh
sudo kubeadm reset
rm -r $HOME/.kube
sudo etcdctl rm --recursive
sudo rm -rf /var/lib/cni
sudo rm -rf /run/flannel
sudo rm -rf /etc/cni
sudo ifconfig cni0 down
sudo brctl delbr cni0
sudo ip link delete cni0
sudo ip link delete flannel.1
```
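To check the second guess (leftovers from the old cluster), something like this would verify that the reset actually removed the CNI interfaces and directories. This is a sketch of a check, not something I have run:

```shell
# Leftover flannel/CNI network interfaces (expect no matches after a clean reset)
ip link show | grep -E 'cni0|flannel' || echo "no CNI interfaces left"

# Leftover CNI state directories (expect all three to be absent)
ls -d /var/lib/cni /run/flannel /etc/cni 2>/dev/null || echo "no CNI directories left"
```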
Sorry for the bad formatting... ;)
I appreciate any help with this.