Multiple streams writing in parallel does not work #7

Open
JustinKyleJames opened this issue Jan 25, 2019 · 1 comment

@JustinKyleJames (Owner) commented:

When multiple streams are written to a file in parallel, the resulting output is incorrect and does not match the behavior of the unixfilesystem resource.

The output is correct when using an S3FS mount point instead of a cache-less S3 resource, so the problem is not in the S3FS code itself.

Most likely I need to mimic the locking behavior of the POSIX write() call.
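
A minimal sketch of what that could look like, using a POSIX fcntl() record lock to serialize writers across processes (the wrapper name and the raw file descriptor are assumptions for illustration, not code from this plugin):

```cpp
// Sketch only: serialize concurrent writers on a byte range with a POSIX
// fcntl() record lock. locked_pwrite is a hypothetical wrapper name.
#include <fcntl.h>
#include <unistd.h>

ssize_t locked_pwrite(int fd, const void* buf, size_t len, off_t offset) {
    struct flock lk{};
    lk.l_type   = F_WRLCK;                  // exclusive write lock
    lk.l_whence = SEEK_SET;
    lk.l_start  = offset;
    lk.l_len    = static_cast<off_t>(len);  // lock only the range we write

    if (fcntl(fd, F_SETLKW, &lk) == -1) {   // block until the range is free
        return -1;
    }

    ssize_t n = pwrite(fd, buf, len, offset);

    lk.l_type = F_UNLCK;                    // release the range
    fcntl(fd, F_SETLK, &lk);
    return n;
}
```

fcntl() record locks are advisory, so this only helps if every writer goes through the same wrapper; flock() would also work but locks the whole file rather than just the range being written.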

@JustinKyleJames (Owner, Author) commented:

The problem here is that the streams are being written by different processes. If the streams are written by the same process (as is done for the large file upload/download in iRODS), it works, presumably because each process holds its own private copy of the cache, so one process's writes are invisible to the others.

To fix this, the S3FS cache needs to be in shared memory.
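
A minimal sketch of the shared-memory direction, assuming POSIX shm_open()/mmap() plus a process-shared pthread mutex (SharedCache, attach_shared_cache, and the fields shown are hypothetical names, not the actual S3FS cache types):

```cpp
// Sketch only: place the cache in a POSIX shared-memory segment so every
// process sees the same copy, guarded by a process-shared pthread mutex.
#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <unistd.h>

struct SharedCache {
    pthread_mutex_t lock;            // must be PTHREAD_PROCESS_SHARED
    off_t           file_size;
    char            pages[1 << 20];  // simplified fixed-size page area
};

SharedCache* attach_shared_cache(const char* name) {
    // Try to create the segment; if it already exists, just attach to it.
    bool creator = true;
    int fd = shm_open(name, O_CREAT | O_EXCL | O_RDWR, 0600);
    if (fd == -1) {
        creator = false;
        fd = shm_open(name, O_RDWR, 0600);
        if (fd == -1) return nullptr;
    }
    if (creator && ftruncate(fd, sizeof(SharedCache)) == -1) {
        close(fd);
        return nullptr;
    }

    void* p = mmap(nullptr, sizeof(SharedCache),
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    if (p == MAP_FAILED) return nullptr;

    auto* cache = static_cast<SharedCache*>(p);
    if (creator) {
        // The key step: mark the mutex as shareable across processes.
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(&cache->lock, &attr);
        pthread_mutexattr_destroy(&attr);
    }
    return cache;
}
```

This sketch glosses over the race where a second process attaches before the creator has initialized the mutex, and on Linux it needs -lrt and -lpthread; a library such as boost::interprocess takes care of those details.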
