Hello everyone,
When deploying a Fusion 5 EKS setup with an ALB, a hostname, multi-AZ, and no monitoring configured on top of the default options, as follows:

```sh
./setup_f5_eks.sh -c sandbox-f5 -p eks-sandbox -z us-west-2 -i m5.2xlarge --deploy-alb -h sandbox-f5.example.com --prometheus none --num-solr 1 --solr-disk-gb 50 --create multi_az
```

the resulting target group fails its health checks, which blocks access to the cluster.
The reason seems to be that the Ingress is created with alb.ingress.kubernetes.io/healthcheck-path unset, so the target group falls back to its default value, "/".
Adding

alb.ingress.kubernetes.io/healthcheck-path: "/auth/"

after line 450 of setup_f5_eks.sh seems to fix the issue, so the only thing left after running the command above (with your own values for the options) is to take care of the DNS mapping.
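For context, here is a minimal sketch of what the Ingress generated by the script might look like with the annotation in place. The resource name, namespace, backend service name, and port below are placeholders I made up for illustration, not the actual values produced by setup_f5_eks.sh:

```yaml
# Sketch only: an ALB Ingress with the health-check path annotation added.
# Name, namespace, service, and port are hypothetical placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fusion-proxy-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Without the next line the ALB health check defaults to "/" and the
    # target group reports the targets as unhealthy.
    alb.ingress.kubernetes.io/healthcheck-path: "/auth/"
spec:
  rules:
    - host: sandbox-f5.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: proxy        # placeholder service name
                port:
                  number: 6764     # placeholder port
```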
Hope this helps others. Maybe a PR should be opened for this issue so it's fixed for everyone using setup_f5_eks.sh to deploy Fusion 5 to EKS with an ALB (--deploy-alb) and a hostname (e.g. -h sandbox-f5.example.com)?
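If you have already run the script and want to patch a live cluster rather than re-deploy, something like the following should also work (the ingress name and namespace are placeholders; check yours with kubectl get ingress -A):

```sh
# Placeholders: substitute your actual ingress name and namespace.
kubectl annotate ingress <ingress-name> -n <namespace> \
  alb.ingress.kubernetes.io/healthcheck-path="/auth/" --overwrite
```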