Fresh install fails #22
Can you paste the full output of
Yes, once we have the full logs from the operator install (which runs as part of terraform apply), we should be able to see exactly what failed. There have been changes to the Helm charts and Docker images recently; my guess is that some chart was not updated, but that's just a guess.
This does not look right. Do you have a jx-requirements.yml file in the cluster git repo? Basically, jx git operator failed ...
Yes, I will paste the full jx admin log soon
Here is the jx admin log output: jx_admin_log.txt. Yes, there is a jx-requirements.yml in my jx-eks-vault repo; I will paste the redacted version here. Please let me know what else I can do to help.
I only changed numbers to 7's; everything else is unchanged.
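For reference, a v3 requirements file for EKS with Vault usually has roughly this shape (a minimal sketch, not the reporter's actual file; the cluster name and region are placeholders):

```yaml
apiVersion: core.jenkins-x.io/v4beta1
kind: Requirements
spec:
  cluster:
    provider: eks
    clusterName: my-cluster   # placeholder
    region: us-east-1         # placeholder
  secretStorage: vault        # secrets are synced from Vault by external-secrets
  webhook: lighthouse
```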
Looking at the logs, I don't think this is the first boot job. I see quite a few resources which are unchanged in that boot job:
Do you happen to have the very first boot job? The first boot job would run when you do terraform apply.
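If it helps, the boot jobs can also be pulled straight from the cluster (a sketch assuming the operator runs in the default jx-git-operator namespace):

```sh
# list boot jobs; the oldest entry is the first boot job from terraform apply
kubectl get jobs -n jx-git-operator --sort-by=.metadata.creationTimestamp

# dump the log of one job; replace <job-name> with a name from the list above
kubectl logs -n jx-git-operator job/<job-name> --all-containers
```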
Commit these changes in your cluster git repo, push it, and tail the logs; let's see what happens ... Having said that, the fastest way to debug this would be to get the first boot job log. If not, then recreating the cluster and getting fresh new logs will help a lot (I feel it might potentially save some time as well).
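Concretely, something like this (a sketch; the commit message is arbitrary, and `jx admin log` should tail the most recent boot job):

```sh
# commit and push the change to the cluster git repository
git commit -am "chore: update jx-requirements"
git push

# tail the git operator's boot job log
jx admin log
```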
@ankitm123 My mistake. Here is the new. I pushed the email change but nothing happened in the
I will take a look tonight, and come back to you. |
Starting to think this has to do with Kubernetes 1.21; I will check what the option is to disable the iss check in the Helm charts. I see that you have tried downgrading to 1.20, and it still did not work. Can you post the output of the external-secrets pod in the
I don't see anything suspicious in the boot log 😕
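One way to grab that output (a sketch; it assumes the external-secrets deployment lives in the secret-infra namespace under the default jx3 layout, so adjust the names to whatever `kubectl get pods` shows on your cluster):

```sh
# find the external-secrets pod
kubectl get pods -n secret-infra

# dump its logs (the deployment name here is an assumption; check the listing above)
kubectl logs -n secret-infra deploy/kubernetes-external-secrets
```

If the root cause really is the Kubernetes 1.21 service account token changes, note that Vault's kubernetes auth method does accept a `disable_iss_validation=true` config flag; whether the Helm charts expose it is the open question here.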
I got the same issue over here with v1.15.47.
Use the latest version. There's a weirdness in the release pipeline, so it gets pegged to this version (1.15.47). Uninstall the cluster created using this version and use the latest: https://github.com/jenkins-x/terraform-aws-eks-jx/releases/tag/v1.18.2
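For anyone consuming the module from the Terraform registry, that means pinning the version explicitly (a sketch trimmed to the version pin; the module's other required inputs are omitted):

```hcl
module "eks-jx" {
  source  = "jenkins-x/eks-jx/aws"
  version = "1.18.2"

  # ... your cluster inputs here ...
}
```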
I followed the README with fresh repos and the installation failed. I'm going to include details about everything I did in hopes we can fix this template. I really want jx3 to take off, but adoption of jx3 depends entirely on these templates working out of the box. I'm going to put in some work and make a PR to fix the easy things I found, but I need help with the secrets population issue at the end.
4. Exported the following env vars.
My ~/.aws/credentials file looks like this:
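A minimal sketch of that setup, with placeholder credentials rather than the originals:

```sh
# typical AWS environment variables for a terraform run (placeholder values)
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export AWS_REGION=us-east-1

# ~/.aws/credentials in the standard INI layout
cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
EOF
```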
Ran `terraform init` and received an error. Ran `terraform init -upgrade` to fix it, then ran the code again and it passed:
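The recovery sequence, for anyone following along (a sketch; I'm assuming "ran the code" means `terraform apply`, and the actual error output is omitted):

```sh
terraform init            # first run failed
terraform init -upgrade   # refresh pinned provider/module versions
terraform apply           # passed after the upgrade
```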
Ran `jx admin logs` and saw this error at the end. Random thought: is there some additional step not outlined in the README that I need to perform to populate those secrets manually, or should they be populated automatically?
Ran `jx ui` and gathered these relevant error messages.
Ran `kubectl get pods --all-namespaces` and noticed these failed containers.
Ran `kubectl logs jx-preview-gc-jobs-27197690-6kjcs`, which output only this:
Ran `kubectl describe pod jenkins-x-chartmuseum-79c9b8dcd9-vv9sx -n jx` and `kubectl describe pod lighthouse-foghorn-86b84cb46c-dkrzm -n jx`.
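On the secrets question: in jx3 the boot job is expected to populate secrets automatically from the configured store, and the jx-secret plugin can report what is missing (a sketch; verify the subcommands against `jx secret --help` on your version):

```sh
# list ExternalSecret resources and their sync status across namespaces
kubectl get externalsecrets --all-namespaces

# report which secrets are populated and which are still missing
jx secret verify

# interactively fill in any missing secret values
jx secret edit
```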