
images created using umoci/layer can include layers that are invalid tar files #436

Closed
mikemccracken opened this issue May 5, 2022 · 7 comments · Fixed by #534

Comments

@mikemccracken (Contributor)

We have hit a situation where images built using stacker (see https://github.com/project-stacker/stacker/blob/master/overlay/pack.go#L279) appear to be fine: they can be unpacked and run by runc, and umoci can extract them without error. But the individual blobs are not all valid .tar.gz files.

Here's the error seen in such a case:

$ zcat ../python3/blobs/sha256/91ae8c2f364ac6a5295646ad5b99fa95eaef3972691fca78482d6128e4c873eb | tar xf -
tar: Removing leading `/' from member names
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now

The use case here is that security scanners which expect to be able to untar individual image layers cannot do so, and our images get flagged as failing.

Ideally we would never produce invalid tarballs, but failing that, I'd like help understanding how this happens and how to be sure that the images aren't actually corrupt somehow.

I've attached a log from umoci --log=debug unpack of the image; here's the full log: umoci-unpack-debug.log.gz

The last few files that are unpacked are:

   • unpacking entry           path=var/log/apt/eipp.log.xz root=python3-1.0.9-2/rootfs type=48
   • unpacking entry           path=var/log/apt/history.log root=python3-1.0.9-2/rootfs type=48
   • unpacking entry           path=var/log/apt/term.log root=python3-1.0.9-2/rootfs type=48
   • unpacking entry           path=var/log/dpkg.log root=python3-1.0.9-2/rootfs type=48
   • ... done                 

If I untar the blob that contains these files, it stops with the EOF error above part-way through writing var/log/dpkg.log, so it is clearly truncated. In this diff, the "-" lines are from the tar-generated tree:

--- var/log/dpkg.log    2022-05-05 16:08:11.725576759 -0700
+++ ../python3-1.0.9/rootfs/var/log/dpkg.log    2022-04-28 15:31:59.000000000 -0700
@@ -97,4 +97,156 @@
 2022-04-28 22:30:45 trigproc libc-bin:amd64 2.31-0ubuntu9.7 <none>
 2022-04-28 22:30:45 status half-configured libc-bin:amd64 2.31-0ubuntu9.7
 2022-04-28 22:30:45 status installed libc-bin:amd64 2.31-0ubuntu9.7
-2022-04-28 22:30:45 trigproc ca-certificates:all 2
\ No newline at end of file
+2022-04-28 22:30:45 trigproc ca-certificates:all 20210119~20.04.2 <none>
+2022-04-28 22:30:45 status half-configured ca-certificates:all 20210119~20.04.2
+2022-04-28 22:30:46 status installed ca-certificates:all 20210119~20.04.2
+2022-04-28 22:31:54 startup archives unpack
+2022-04-28 22:31:54 install libpython3.8-minimal:amd64 <none> 3.8.10-0ubuntu1~20.04.4
+2022-04-28 22:31:54 status half-installed libpython3.8-minimal:amd64 3.8.10-0ubuntu1~20.04.4
...many more lines 
@timkuik commented May 6, 2022

Is dpkg.log something that could be omitted from the image as a workaround?

@cyphar (Member) commented May 7, 2022

Yeah so it looks like the tar file is getting truncated during generation (and then the image itself doesn't fail the many bits of internal validation we do). Very interesting. Are there any errors when generating the image?

It's interesting that there's no error during extraction with umoci -- I guess that means that Go doesn't validate that the data read from an archive entry is the right length... That's a bit concerning...

@mikemccracken (Contributor, Author)

@cyphar I went back to look at the build logs for the image in question and don't see any errors, but it's possible that there were warnings that weren't propagated to the right logs or something.

If there were any hard errors, we wouldn't have completed the build, though.

What's the best way to follow up here? I'm looking into how easy it is to just share the image. If I can reproduce the issue, is there additional logging I could turn on to help?

@cyphar (Member) commented May 10, 2022

I just realised that umoci doesn't have correct EINTR handling (I'll fix that in a separate PR), but even so you should get a hard error if umoci doesn't write the whole file to the tar stream (and if you got EINTR you would see that error anyway).

However, because all of the errors are produced in a goroutine, I have seen cases in tests where writer.CloseWithError() doesn't actually result in an error coming out when the return value of Close() is checked by the main goroutine (this seems like a Go stdlib bug to me, but I've never been able to reproduce it outside of random test failures -- also, it seems to me that golang/go#24283 should've fixed the issue).

If you are having this issue then you should be able to see a warning like generate layer: could not add file 'foobar': error (in addition to setting the error value, I also log a warning). In that case, make sure that the warning logs are being retained properly by your build process and then try to reproduce it.

@cyphar (Member) commented May 11, 2022

@mikemccracken Can you try to see if you can reproduce the issue with #437? If you still can reproduce it, check if the build log has any new warnings (I've added new warnings around each CloseWithError call, which is where I suspect the errors are being dropped).

@mikemccracken (Contributor, Author)

@cyphar thanks, I will give this a try this week.

@mikemccracken (Contributor, Author)

@cyphar "this week" didn't work out, but I finally traced it down. My problem was still happening with #437 - see #534 for my suggested fix.
