S3Proxy fix volume mounting #1648
Comments
While investigating, the first thing noticed was that no volumes were getting mounted to the s3proxy pods, so it makes sense that a pod restart would lose any uploaded data. After updating the pods to have the label instead of the annotation, the volumes are indeed now getting mounted. But there is still a persistence issue, most likely because s3proxy is still expecting to use buckets named |
Updated the dropdown list of buckets that gets shown in the s3-explorer to point to the
Also getting this when accessing via kubeflow now, possibly whatever this is? Mathis found another instance of |
Debugged further the issue where new folders aren't being shown in the UI after being created with the New Folder button. The issue is that S3 doesn't display empty directories. To circumvent this, a function was put in place in the service worker JS to create an empty file along with each newly created folder. This bug will be fixed with #1658
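Since S3 has no real directories (a "folder" only exists while at least one object key uses it as a prefix), the workaround is to write a zero-byte placeholder object when a folder is created. Below is a minimal sketch of that approach using the AWS SDK v3 for JavaScript; the bucket name, the `.keep` key suffix, and the function name are illustrative, not taken from the actual service worker code.

```ts
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({}); // region/credentials come from the environment

// Write a zero-byte placeholder so the "folder" shows up in listings even
// though it contains no real objects yet. The ".keep" key name is an
// assumption; any empty object under the prefix has the same effect.
async function createFolderPlaceholder(bucket: string, folderPrefix: string): Promise<void> {
  const prefix = folderPrefix.endsWith("/") ? folderPrefix : `${folderPrefix}/`;
  await s3.send(
    new PutObjectCommand({
      Bucket: bucket,
      Key: `${prefix}.keep`,
      Body: "",
      ContentLength: 0,
    })
  );
}

// Example: makes "reports/2024/" appear in the s3-explorer listing.
createFolderPlaceholder("my-bucket", "reports/2024").catch(console.error);
```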
See this comment.
When you delete the s3proxy pod in your namespace, any data that you had 'uploaded' is lost.
The mounting isn't working correctly.
We need to check the blob-csi controller to verify the mount points. Something there was changed, and perhaps the change to naming caused the mount point to no longer be correct.
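To confirm whether the blob-csi volumes are actually being attached, the s3proxy pod specs can be inspected directly. Here is a minimal sketch using @kubernetes/client-node (assuming its 0.x promise-based API and an `app=s3proxy` label selector; both are illustrative rather than taken from the repo):

```ts
import * as k8s from "@kubernetes/client-node";

// List s3proxy pods in a namespace and print their declared volumes and
// where each container mounts them, to spot missing or renamed mounts.
async function checkS3ProxyMounts(namespace: string): Promise<void> {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault();
  const core = kc.makeApiClient(k8s.CoreV1Api);

  const { body } = await core.listNamespacedPod(
    namespace,
    undefined,      // pretty
    undefined,      // allowWatchBookmarks
    undefined,      // continue
    undefined,      // fieldSelector
    "app=s3proxy"   // labelSelector (assumed label)
  );

  for (const pod of body.items) {
    console.log(`Pod: ${pod.metadata?.name}`);
    // Volumes declared on the pod (should include the blob-csi-backed PVCs).
    for (const vol of pod.spec?.volumes ?? []) {
      console.log(`  volume: ${vol.name}`, vol.persistentVolumeClaim ?? vol.csi ?? {});
    }
    // Where each container actually mounts those volumes.
    for (const c of pod.spec?.containers ?? []) {
      for (const m of c.volumeMounts ?? []) {
        console.log(`  ${c.name} mounts ${m.name} at ${m.mountPath}`);
      }
    }
  }
}

checkS3ProxyMounts("my-namespace").catch(console.error);
```

If the pods list no blob-csi-backed volumes, or the mount path differs from the path s3proxy serves data from, that would explain why uploads disappear when the pod restarts.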