S3Proxy fix volume mounting #1648

Closed
Jose-Matsuda opened this issue Apr 17, 2023 · 3 comments

Jose-Matsuda commented Apr 17, 2023

See this comment.

When you delete the s3proxy pod in your namespace, any data that you had 'uploaded' is lost.
The mounting isn't working correctly.

We need to check the blob-csi controller's mount points. Something there was changed, and the change to naming may have caused the mount point to no longer be correct.

@mathis-marcotte

While investigating, the first thing noticed was that no volumes were getting mounted to the s3proxy pods. So it makes sense that if the pod restarted, it would lose uploaded data.
Upon further investigation, it looked like the value "data.statcan.gc.ca/inject-blob-volumes": "true" was being set in the annotations for the pods, and not in the labels. This meant that the pods weren't getting picked up by the blob-csi-injector for the volume mounts. An update to the code changed the logic to look at labels instead of annotations (StatCan/aaw-blob-csi-injector@29da518), which is likely how this issue was introduced.

After updating the pods to have the label instead of the annotation, they are indeed now getting mounted.
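For reference, a minimal sketch of where the injector key now needs to live, written as a plain object (only the key name comes from the investigation above; everything else is illustrative):

```typescript
// Sketch only: the blob-csi-injector now selects pods by label, so the key has to
// sit under metadata.labels. Leaving it under metadata.annotations (the old
// location) means no blob volumes get mounted.
const s3proxyPodMetadata = {
  labels: {
    "data.statcan.gc.ca/inject-blob-volumes": "true",
  },
  annotations: {
    // previously held the same key/value; no longer honoured by the injector
  },
};
```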

But there is still a persistence issue. That is most likely because s3proxy still expects buckets named unclassified, unclassified-ro and protected-b, but those names were updated to have the aaw- prefix.

mathis-marcotte commented Apr 20, 2023

Updated the dropdown list of buckets shown in the s3-explorer to point to the aaw-* volumes, and data now persists when the pod restarts. But a new bug was introduced: when you create a new folder through the s3-explorer UI, the folder does get created in the volume, but it doesn't appear in the UI.
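For clarity, the renamed buckets the dropdown should now reference (names derived from the earlier comment about the aaw- prefix; the variable name is illustrative):

```typescript
// Illustrative only: bucket names after the aaw- prefix rename described above.
const bucketDropdownOptions = [
  "aaw-unclassified",
  "aaw-unclassified-ro",
  "aaw-protected-b",
];
```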

Also getting this when accessing via kubeflow now, possibly whatever this is? Or even this (I don't think it's this; it doesn't seem like there would be any illegal characters in there). Either way, we need to find out where that XML is and what it looks like.

Mathis found another instance of unclassified hanging around, and upon changing it in my VS we got it working.

mathis-marcotte commented Apr 25, 2023

Did more debugging on the issue where new folders aren't shown in the UI after being created with the New Folder button. The issue is that S3 doesn't display empty directories. To circumvent this, a function was put in place in the service worker JS to create an empty file along with newly created folders.
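Roughly, the workaround looks like this. A minimal sketch assuming the AWS SDK for JavaScript v3; the helper name and the `.keep` placeholder key are illustrative, not the actual s3proxy-web code:

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// S3 has no real directories: an "empty folder" is just a key prefix with no
// objects under it, so listings won't show it. Writing a zero-byte placeholder
// object under the new prefix makes the folder appear in the explorer.
async function createFolderPlaceholder(
  s3: S3Client,
  bucket: string,
  folderPrefix: string // e.g. "my-new-folder/"
): Promise<void> {
  await s3.send(
    new PutObjectCommand({
      Bucket: bucket,
      Key: `${folderPrefix}.keep`, // illustrative placeholder object
      Body: "",
    })
  );
}
```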
This bug only happens when accessing s3proxy through the service directly (s3proxy-web/s3/namespace/), not when using it through kubeflow. The reason is that the service worker is not being registered there. This seems to be because the s3proxy service runs in an insecure context, and a secure context (HTTPS) is required to use service workers.
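A minimal sketch of the registration guard that explains the behaviour (the worker path is illustrative):

```typescript
// Service workers are only available in secure contexts (HTTPS or localhost),
// so over plain HTTP the registration never happens and the folder-placeholder
// logic above never runs.
async function registerExplorerWorker(): Promise<void> {
  if (!window.isSecureContext || !("serviceWorker" in navigator)) {
    console.warn("No secure context: service worker not registered");
    return;
  }
  await navigator.serviceWorker.register("explorer-sw.js"); // illustrative path
}
```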

This bug will be fixed with #1658
