oci: unpack: slurp up raw layer stream before Close() #360
Conversation
Force-pushed from 0e60e6a to 1a1635d
Hmmm, the failure is real, but it comes from pgzip. We need klauspost/pgzip#40 before we can do this.
Force-pushed from 1a1635d to 6e0a77f
FWIW, this LGTM. Whenever all the stuff you need to land lands, feel free to merge.
Thanks, I might just merge this as-is since the pgzip stuff isn't actually necessary now that I've implemented it by just copying from the actual file stream.
Sure, sounds good to me.
This is to ensure that we handle both good and bad cases sanely -- namely, that we are always digesting the *full* blob when Close() is called. This avoids us producing intermittent errors if blobs have extra trailing bytes (such as garbage at the end of a gzip stream), as well as hardening us against incorrect digests (though it should be noted this case cannot be triggered without disabling size verification -- the size verification we already have blocks this issue). Signed-off-by: Aleksa Sarai <[email protected]>
It's possible there are some trailing bytes after the gzip stream (which, truth be told, probably shouldn't be there), so in order to avoid spurious errors on layer unpacking with slightly-broken images, just slurp up that data. But we should output logging information, since this indicates that the original image builder probably has a bug of some kind. Signed-off-by: Aleksa Sarai <[email protected]>
Force-pushed from 6e0a77f to 6353208
LGTM.
Force-pushed from 3c9687b to b7f13af
Since we are reading blobs, we should definitely be returning errors from the lookup. Signed-off-by: Aleksa Sarai <[email protected]>
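As a tiny self-contained illustration of that change (the blobGetter interface and openLayerBlob helper below are hypothetical stand-ins, not umoci's actual CAS API), the point is simply that a failed lookup is propagated rather than silently dropped:

```go
package blobsketch

import (
	"context"
	"fmt"
	"io"

	"github.com/opencontainers/go-digest"
)

// blobGetter is a hypothetical stand-in for whatever CAS interface the unpack
// path uses to look up blobs by digest.
type blobGetter interface {
	getBlob(ctx context.Context, dgst digest.Digest) (io.ReadCloser, error)
}

// openLayerBlob returns the blob stream for a layer. Since we are about to
// read the blob, any lookup error must be returned to the caller instead of
// being ignored.
func openLayerBlob(ctx context.Context, engine blobGetter, dgst digest.Digest) (io.ReadCloser, error) {
	blob, err := engine.getBlob(ctx, dgst)
	if err != nil {
		return nil, fmt.Errorf("lookup blob %s: %w", dgst, err)
	}
	return blob, nil
}
```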
Force-pushed from b7f13af to 0976fbb
Codecov Report

@@            Coverage Diff             @@
##           master     #360      +/-   ##
==========================================
- Coverage   74.27%   73.27%   -1.01%
==========================================
  Files          59       59
  Lines        3079     3098      +19
==========================================
- Hits         2287     2270      -17
- Misses        463      498      +35
- Partials      329      330       +1
It's possible there are some trailing bytes after the gzip stream (which, truth be told, probably shouldn't be there), so in order to avoid spurious errors on layer unpacking with slightly-broken images, just slurp up that data. But we should output logging information, since this indicates that the original image builder probably has a bug of some kind.
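For illustration, here is a minimal sketch of that slurping step in Go. The extractGzipLayer helper, its logging, and the Multistream(false) call are my own illustrative choices, not umoci's actual unpack code; the sketch assumes raw is the (already digest-wrapped) blob stream:

```go
package layersketch

import (
	"compress/gzip"
	"io"
	"log"
)

// extractGzipLayer decompresses a gzip-compressed layer and then slurps up any
// trailing bytes left on the raw stream, so that digest/size verification on
// Close() always sees the full blob.
func extractGzipLayer(raw io.Reader, dst io.Writer) error {
	gz, err := gzip.NewReader(raw)
	if err != nil {
		return err
	}
	defer gz.Close()
	// Stop at the end of the first gzip member instead of treating trailing
	// garbage as another (broken) gzip stream.
	gz.Multistream(false)

	// Copy the decompressed layer contents to the destination.
	if _, err := io.Copy(dst, gz); err != nil {
		return err
	}

	// Slurp up anything left after the gzip stream. A well-formed layer
	// should have nothing here, so log if we find any -- it usually means the
	// image builder has a bug. (The count may be an undercount, since the
	// gzip reader may already have buffered some of the trailing data.)
	trailing, err := io.Copy(io.Discard, raw)
	if err != nil {
		return err
	}
	if trailing > 0 {
		log.Printf("warning: layer has %d trailing bytes after gzip stream", trailing)
	}
	return nil
}
```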
We also add code to slurp trailing bytes in VerifiedReadCloser.Close(). This is to ensure that we handle both good and bad cases sanely -- namely, that we are always digesting the full blob when Close() is called. This avoids us producing intermittent errors if blobs have extra trailing bytes (such as garbage at the end of a gzip stream), as well as hardening us against incorrect digests (though it should be noted this case cannot be triggered without disabling size verification -- the size verification we already have blocks this issue).
Fixes #358
Signed-off-by: Aleksa Sarai [email protected]
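For context, here is a simplified sketch of the Close() behaviour described above. The field names, init helper, and error text are stand-ins of mine, not umoci's actual VerifiedReadCloser implementation (which, per the description above, also performs size verification, omitted here):

```go
// Simplified sketch only -- not umoci's real hardening package.
package hardeningsketch

import (
	"fmt"
	"io"

	"github.com/opencontainers/go-digest"
)

// VerifiedReadCloser digests everything read from the underlying blob stream
// and checks the digest when it is closed.
type VerifiedReadCloser struct {
	Reader         io.ReadCloser // underlying raw blob stream
	ExpectedDigest digest.Digest // digest the *full* blob must hash to
	digester       digest.Digester
}

func (v *VerifiedReadCloser) init() {
	if v.digester == nil {
		v.digester = v.ExpectedDigest.Algorithm().Digester()
	}
}

func (v *VerifiedReadCloser) Read(p []byte) (int, error) {
	v.init()
	n, err := v.Reader.Read(p)
	if n > 0 {
		// Hash every byte handed out to the caller.
		v.digester.Hash().Write(p[:n])
	}
	return n, err
}

// Close slurps up whatever the caller did not read, so the digest check always
// covers the full blob (including any trailing garbage) before verifying.
func (v *VerifiedReadCloser) Close() error {
	v.init()
	// Drain the remainder through Read() so it is digested as well.
	if _, err := io.Copy(io.Discard, v); err != nil {
		v.Reader.Close()
		return err
	}
	if err := v.Reader.Close(); err != nil {
		return err
	}
	if actual := v.digester.Digest(); actual != v.ExpectedDigest {
		return fmt.Errorf("verified reader: expected digest %s, got %s", v.ExpectedDigest, actual)
	}
	return nil
}
```

Draining through Read() inside Close() is what makes the good and bad cases behave the same: by the time the digest is checked, every byte of the blob -- read or unread by the caller -- has passed through the digester.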