
S3 file asset repository URL validation #16760

Open
elliotdobson opened this issue Aug 19, 2024 · 1 comment
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@elliotdobson (Contributor)

/kind bug

1. What kops version are you running? The command kops version will display this information.

Client version: 1.29.2 (git-v1.29.2)

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

Server Version: v1.29.7

3. What cloud provider are you using?
AWS

4. What commands did you run? What is the simplest way to reproduce this issue?
We are configuring a local file asset repository; however, we are running into an issue when trying to update the cluster.

We tried to work around #16759 by specifying fileRepository as an S3 URL (even though the docs suggest this should not work), and to our surprise kOps accepted it and allowed us to apply it to the cluster.

However, upon rolling the first control-plane node, it did not come online and the update failed (somewhat expected).

  1. Enable fileRepository in the Cluster spec (using an S3 URL as shown below)
  2. Copy the file assets kops get assets --copy
  3. Update the cluster kops update cluster
  4. Roll an instance group kops rolling-update cluster

5. What happened after the commands executed?
The new node fails to join the cluster and cluster validation fails.

Upon SSHing into the new node and checking the logs via journalctl -u cloud-final.service, we see:

Aug 18 23:09:04 i-09ec9f0eec1f3fd13 cloud-init[1272]: == nodeup node config starting ==
Aug 18 23:09:04 i-09ec9f0eec1f3fd13 cloud-init[1272]: == Downloading nodeup with hash 73c2808ac814787ccca9671678d2919fc9023322c3b834ea47b3cef01d6841cb from s3://example-k8s-assets/kops/binaries/kops/1.29.2/linux/amd64/nodeup ==
Aug 18 23:09:04 i-09ec9f0eec1f3fd13 cloud-init[1272]: == Downloading s3://example-k8s-assets/kops/binaries/kops/1.29.2/linux/amd64/nodeup using curl -f --compressed -Lo nodeup --connect-timeout 20 --retry 6 --retry-delay 10 ==
Aug 18 23:09:04 i-09ec9f0eec1f3fd13 cloud-init[1272]: curl: (1) Protocol "s3" not supported or disabled in libcurl
Aug 18 23:09:04 i-09ec9f0eec1f3fd13 cloud-init[1272]: == Failed to download s3://example-k8s-assets/kops/binaries/kops/1.29.2/linux/amd64/nodeup using curl -f --compressed -Lo nodeup --connect-timeout 20 --retry 6 --retry-delay 10 ==
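
For context: the bootstrap script hands the fileRepository-derived URL straight to curl, which only speaks schemes like http(s), so an s3:// URL fails before any download is attempted. A minimal Go illustration of the same failure mode (an assumption for illustration, using a plain HTTP client, not the actual nodeup code):

package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	asset := "s3://example-k8s-assets/kops/binaries/kops/1.29.2/linux/amd64/nodeup"

	// The URL itself parses fine, so nothing rejects it up front...
	u, _ := url.Parse(asset)
	fmt.Println("scheme:", u.Scheme) // scheme: s3

	// ...but an HTTP client fails exactly the way curl does:
	// curl: (1) Protocol "s3" not supported or disabled in libcurl
	_, err := http.Get(asset)
	fmt.Println(err) // Get "s3://...": unsupported protocol scheme "s3"
}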

6. What did you expect to happen?
I expected validation to reject an S3 URL in fileRepository before we could apply the changes to the cluster, rather than having new nodes fail to start.
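
A minimal sketch of the kind of scheme check that could run at kops update cluster time; the function name and error wording here are assumptions for illustration, not the actual kOps validation code:

package main

import (
	"fmt"
	"net/url"
)

// validateFileRepository rejects assets.fileRepository values that nodes
// cannot fetch with curl. Hypothetical sketch, not kOps source.
func validateFileRepository(repo string) error {
	u, err := url.Parse(repo)
	if err != nil {
		return fmt.Errorf("assets.fileRepository %q is not a valid URL: %w", repo, err)
	}
	switch u.Scheme {
	case "http", "https":
		return nil
	default:
		return fmt.Errorf("assets.fileRepository %q uses scheme %q, but nodes download file assets with curl, which only supports http and https", repo, u.Scheme)
	}
}

func main() {
	// Would fail fast at update time instead of at node boot:
	fmt.Println(validateFileRepository("s3://example-k8s-assets/kops"))
	// An HTTPS S3 endpoint (virtual-hosted style) would pass the check:
	fmt.Println(validateFileRepository("https://example-k8s-assets.s3.us-east-1.amazonaws.com/kops"))
}

With a check like this, the s3:// value would be caught before the cluster spec is applied, rather than after the first control-plane node is replaced.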

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
spec:
...
  assets:
    fileRepository: s3://example-k8s-assets/kops
...

8. Please run the commands with the most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

9. Anything else we need to know?

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Aug 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 17, 2024