S3 file asset repository CLI unable to read file #16759

Open
elliotdobson opened this issue Aug 19, 2024 · 3 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@elliotdobson
Contributor

elliotdobson commented Aug 19, 2024

/kind bug

1. What kops version are you running? The command kops version will display
this information.

Client version: 1.29.2 (git-v1.29.2)

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

Server Version: v1.29.7

3. What cloud provider are you using?
AWS

4. What commands did you run? What is the simplest way to reproduce this issue?
We are configuring a local file asset repository; however, we are running into an issue when trying to update the cluster.

We have configured an AWS S3 bucket for the file assets to be stored. The S3 bucket is private and has a bucket policy to allow GetObject requests from a VPC Gateway Endpoint that is in the same VPC as the k8s cluster (as vaguely suggested by the docs).
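
For reference, a minimal sketch of the kind of bucket policy we mean, applied here with the AWS SDK for Go v2. The bucket name and VPC endpoint ID are placeholders, and this is our reading of the docs rather than the exact policy we deployed:

// Illustrative sketch only: bucket name and VPC endpoint ID are placeholders,
// not the real values from our environment.
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// Allow GetObject only through a specific VPC Gateway Endpoint, matching the
// setup described above (private bucket, reads permitted from inside the VPC).
const policy = `{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowGetFromVPCEndpoint",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::example-k8s-assets/*",
    "Condition": {"StringEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}}
  }]
}`

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)
	// Apply the policy to the (placeholder) assets bucket.
	_, err = client.PutBucketPolicy(context.TODO(), &s3.PutBucketPolicyInput{
		Bucket: aws.String("example-k8s-assets"),
		Policy: aws.String(policy),
	})
	if err != nil {
		log.Fatal(err)
	}
}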

  1. Enable fileRepository in the Cluster spec
  2. Copy the file assets kops get assets --copy
  3. Update the cluster kops update cluster

5. What happened after the commands executed?

Error: you might have not staged your files correctly, please execute 'kops get assets --copy'

With verbose logging it shows:

I0819 11:21:38.966967   90184 builder.go:260] adding remapped file: "https://example-k8s-assets.s3.ap-southeast-2.amazonaws.com/kops/release/v1.29.7/bin/linux/amd64/kubelet"
I0819 11:21:38.967029   90184 builder.go:342] Trying to read hash fie: "https://example-k8s-assets.s3.ap-southeast-2.amazonaws.com/kops/release/v1.29.7/bin/linux/amd64/kubelet.sha256"
I0819 11:21:38.967046   90184 context.go:243] Performing HTTP request: GET https://example-k8s-assets.s3.ap-southeast-2.amazonaws.com/kops/release/v1.29.7/bin/linux/amd64/kubelet.sha256
I0819 11:21:39.106328   90184 builder.go:346] Unable to read hash file "https://example-k8s-assets.s3.ap-southeast-2.amazonaws.com/kops/release/v1.29.7/bin/linux/amd64/kubelet.sha256": unexpected response code "403 Forbidden" for "https://example-k8s-assets.s3.ap-southeast-2.amazonaws.com/kops/release/v1.29.7/bin/linux/amd64/kubelet.sha256": <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>YY7WWWCZC494R0QJ</RequestId><HostId>HiiCNVsfHRNPM/NNOfZf9v67+BTB9REAIEsK4+vW8sS/tWpdgQcuqF1xRmTC47C1H3WOdOTSN7M=</HostId></Error>
I0819 11:21:39.106407   90184 builder.go:361] Unable to read new sha256 hash file (is this an older/unsupported kubernetes release?)
Error: you might have not staged your files correctly, please execute 'kops get assets --copy'

6. What did you expect to happen?
We expected kops update cluster to use S3-aware parsing like kops get assets --copy does, and to read the file assets with authenticated requests.

The error is not that surprising since:

  1. the S3 bucket is private.
  2. kOps is using HTTPS URLs to read the objects (so no authentication is passed).
  3. we are running kops from our laptop which is outside the VPC that has access to the file assets S3 bucket.

However, since kops get assets --copy worked and the file assets were successfully uploaded to the S3 bucket, this was unexpected.

This makes me think that kOps is handling the file asset URLs differently between the two commands. kops get assets --copy uses S3-aware parsing and adds authentication to upload the assets, whereas kops update cluster just makes unauthenticated HTTP requests.
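
To illustrate the difference we believe we are seeing, a sketch in plain Go (this is not kops code; the bucket, key, and URL are the ones from our logs above):

// Sketch of the two behaviours; not kops internals.
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"net/http"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

const hashURL = "https://example-k8s-assets.s3.ap-southeast-2.amazonaws.com/kops/release/v1.29.7/bin/linux/amd64/kubelet.sha256"

func main() {
	// What kops update cluster appears to do: an anonymous HTTPS GET,
	// which a private bucket answers with 403 Access Denied.
	resp, err := http.Get(hashURL)
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
	fmt.Println("anonymous GET:", resp.Status) // "403 Forbidden" from outside the VPC

	// What kops get assets --copy effectively does: talk to the S3 API with
	// credentials, addressing the object as s3://<bucket>/<key>.
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)
	obj, err := client.GetObject(context.TODO(), &s3.GetObjectInput{
		Bucket: aws.String("example-k8s-assets"),
		Key:    aws.String("kops/release/v1.29.7/bin/linux/amd64/kubelet.sha256"),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer obj.Body.Close()
	sum, _ := io.ReadAll(obj.Body)
	fmt.Println("authenticated GetObject:", string(sum))
}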

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
spec:
...
  assets:
    fileRepository: https://example-k8s-assets.s3.ap-southeast-2.amazonaws.com/kops
...

8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

9. Anything else we need to know?

  1. Is it possible to work around this by using --lifecycle-overrides?
  2. Can kops update cluster use the same S3 awareness as kops get assets --copy?
@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Aug 19, 2024
@elliotdobson
Contributor Author

Looks similar to #15104, but unfortunately there is no information on how that issue was resolved.

@elliotdobson
Contributor Author

Looks like kops get assets --copy has a helper function to translate HTTPS URLs into S3 URLs, hence the difference in behaviour from kops update cluster.

kops/pkg/assets/copyfile.go

Lines 179 to 220 in 5d4d867

// buildVFSPath task a recognizable https url and transforms that URL into the equivalent url with the object
// store prefix.
func buildVFSPath(target string) (string, error) {
	if !strings.Contains(target, "://") || strings.HasPrefix(target, "memfs://") || strings.HasPrefix(target, "file://") {
		return target, nil
	}

	var vfsPath string

	// Matches all S3 regional naming conventions:
	// https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
	// and converts to a s3://<bucket>/<path> vfsPath
	s3VfsPath, err := vfs.VFSPath(target)
	if err == nil {
		vfsPath = s3VfsPath
	} else {
		// These matches only cover a subset of the URLs that you can use, but I am uncertain how to cover more of the possible
		// options.
		// This code parses the HOST and determines gs URLs.
		// For instance you can have the bucket name in the gs url hostname.
		u, err := url.Parse(target)
		if err != nil {
			return "", fmt.Errorf("Unable to parse Google Cloud Storage URL: %q", target)
		}
		if u.Host == "storage.googleapis.com" {
			vfsPath = "gs:/" + u.Path
		}
	}

	if vfsPath == "" {
		klog.Errorf("Unable to determine VFS path from supplied URL: %s", target)
		klog.Errorf("S3, Google Cloud Storage, and File Paths are supported.")
		klog.Errorf("For S3, please make sure that the supplied file repository URL adhere to S3 naming conventions, https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region.")
		klog.Errorf("For GCS, please make sure that the supplied file repository URL adheres to https://storage.googleapis.com/")
		if err != nil { // print the S3 error for more details
			return "", fmt.Errorf("Error Details: %v", err)
		}
		return "", fmt.Errorf("unable to determine vfs type for %q", target)
	}

	return vfsPath, nil
}
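
So the remapping itself lives in the exported vfs.VFSPath helper that buildVFSPath calls. A quick sketch of what we would expect it to return for our repository URL (the import path is assumed from the kops repo layout, and the behaviour is inferred from the comment above, not verified):

// Sketch: not part of kops; just exercising the helper buildVFSPath relies on.
package main

import (
	"fmt"
	"log"

	"k8s.io/kops/util/pkg/vfs" // import path assumed from the kops repo layout
)

func main() {
	target := "https://example-k8s-assets.s3.ap-southeast-2.amazonaws.com/kops/release/v1.29.7/bin/linux/amd64/kubelet.sha256"
	vfsPath, err := vfs.VFSPath(target)
	if err != nil {
		log.Fatal(err)
	}
	// Expected: s3://example-k8s-assets/kops/release/v1.29.7/bin/linux/amd64/kubelet.sha256
	fmt.Println(vfsPath)
}

If kops update cluster ran the hash URL through the same translation before fetching, it could use authenticated S3 requests like the copy path does.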

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 17, 2024