The problem here is that the multiple streams are being written by different processes. If the streams are written by the same process (as is done with the large-file upload/download in iRODS), the output is correct.
To fix this, the S3FS cache needs to be in shared memory.
When writing multiple streams to a file, the output is not as expected and is not consistent with the unixfilesystem resource.
The output is correct when using an S3FS mount point instead of a cache-less S3 resource, so the problem is not in the S3FS code itself.
Most likely the fix will need to mimic the locking behavior of the POSIX write() call.