
EKS installation with ALB, hostname and multi AZ configured, target group healthcheck fails. #120
nabello opened this issue Sep 16, 2020 · 0 comments

Hello everyone,
When deploying a Fusion 5 EKS setup with an ALB, a hostname, multi-AZ, and no monitoring configured on top of the default options, as follows:
./setup_f5_eks.sh -c sandbox-f5 -p eks-sandbox -z us-west-2 -i m5.2xlarge --deploy-alb -h sandbox-f5.example.com --prometheus none --num-solr 1 --solr-disk-gb 50 --create multi_az
the resulting target group fails its health checks, which blocks access to the cluster.

The cause seems to be that the Ingress is deployed without the alb.ingress.kubernetes.io/healthcheck-path annotation set, so the target group falls back to its default health check path, "/".

Adding alb.ingress.kubernetes.io/healthcheck-path: "/auth/" after line 450 of setup_f5_eks.sh fixes the issue, so that the only thing left to do after running the command above (with your own values for the options) is to take care of the DNS mapping.
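For context, here is a sketch of what the Ingress manifest would look like with the annotation added. The resource name and the other annotations shown are illustrative, not copied from setup_f5_eks.sh; only the healthcheck-path annotation is the actual fix:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: fusion-ingress  # illustrative name, not from the script
  annotations:
    kubernetes.io/ingress.class: alb                  # assumed ALB ingress class
    alb.ingress.kubernetes.io/scheme: internet-facing # assumed scheme
    # The fix: point the ALB target group health check at a path
    # that actually returns 200, instead of the default "/"
    alb.ingress.kubernetes.io/healthcheck-path: "/auth/"
spec:
  rules:
    - host: sandbox-f5.example.com  # matches the -h option passed to the script
```

With this annotation in place, the ALB target group probes "/auth/" and the targets register as healthy.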

Hope this helps others. Maybe a PR should be opened for this issue so that everyone using setup_f5_eks.sh to deploy Fusion 5 to EKS with an ALB (--deploy-alb) and a hostname (e.g. -h sandbox-f5.example.com) gets the fix?

@nabello nabello changed the title EKS installation with ALB, hostname and multi AZ configured EKS installation with ALB, hostname and multi AZ configured, target group healthcheck fails. Sep 16, 2020