Update Dockerfile with template for breaking build cache #204
Conversation
@tianon or @yosifkit, is this approach reasonable?
@@ -16,6 +16,10 @@ RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys D2486D2DD8
RUN . /etc/os-release \
    && echo "deb http://packages.osrfoundation.org/gazebo/$ID-stable `lsb_release -sc` main" > /etc/apt/sources.list.d/gazebo-latest.list

# break build cache for sync
RUN echo "Release: Fri, 09 Nov 2018 14:51:50 UTC" \
    && echo "Digest: 1b1a9870d531fcfee20ae1ab3df60d499d6b206c11fe30e3c89dd111ae31c756"
Instead of just a useless echo to do a cache bust, here's what I had in mind:
# This is an auto generated Dockerfile for gazebo:gzserver7
# generated from docker_images/create_gzserver_image.Dockerfile.em
FROM ubuntu:xenial
# install packages
RUN apt-get update && apt-get install -q -y \
dirmngr \
gnupg2 \
lsb-release \
&& rm -rf /var/lib/apt/lists/*
# setup keys
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys D2486D2DD83DB69272AFE98867170598AF249743
# setup sources.list
RUN . /etc/os-release \
&& echo "deb http://packages.osrfoundation.org/gazebo/$ID-stable `lsb_release -sc` main" > /etc/apt/sources.list.d/gazebo-latest.list
# install gazebo packages
RUN apt-get update \
&& . /etc/os-release \
&& echo "021bafbd47748fa31505efcf66f499f8f9d7601c18f6d8456ec76686b76711df */var/lib/apt/lists/packages.osrfoundation.org_gazebo_$ID-stable_dists_$(lsb_release -sc)_InRelease" | sha256sum -c - \
&& apt-get install -q -y \
gazebo7=7.14.0-1* \
&& rm -rf /var/lib/apt/lists/*
# setup environment
EXPOSE 11345
# setup entrypoint
COPY ./gzserver_entrypoint.sh /
ENTRYPOINT ["/gzserver_entrypoint.sh"]
CMD ["gzserver"]
(Actually verifying the hash of the interesting file in the appropriate place.)
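The digest-pinning trick above boils down to templating a GNU coreutils checksum line into the Dockerfile and letting `sha256sum -c -` fail the build when the repo's InRelease file changes. A minimal sketch of how a template could generate that line from a fetched InRelease file; `inrelease_check_line` and its arguments are hypothetical names, not part of the actual templates:

```python
import hashlib

def inrelease_check_line(inrelease_bytes: bytes, list_path: str) -> str:
    """Build the line a Dockerfile template would embed so that
    `echo "<line>" | sha256sum -c -` fails whenever the repo syncs.
    Format is the standard coreutils checksum-file format:
    '<hex digest> *<path>' (the '*' marks binary mode)."""
    digest = hashlib.sha256(inrelease_bytes).hexdigest()
    return "%s */%s" % (digest, list_path)

# Hypothetical usage: hash the InRelease contents at template time,
# then render the line into the RUN step before apt-get install.
line = inrelease_check_line(
    b"example InRelease contents",
    "var/lib/apt/lists/packages.osrfoundation.org_example_InRelease",
)
```

The resulting `RUN ... | sha256sum -c -` step then both busts the cache (the line changes on every sync) and asserts the repo state at build time.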
Ok @tianon, I've adjusted the break point to check the digest of the InRelease file.
I didn't insert it inline with the last install step, as I wanted the break point to be early, just after the repo is added, so that the rest of the repo's packages have a better chance of staying in sync.
Is there anywhere I can look at the result of this change?
👍
@130s, could you clarify what you mean by "result"? Do you mean to ask:
The docker images produced would include a new layer carrying the history of this InRelease check, so the metadata about the release this was built against could be inferred. It wouldn't be as obvious as reading an environment variable or checking the image tag. I briefly considered having the template set an environment variable instead.

The other thing that 8e4a537 does in addition to 03b69b7 is that it will actually assert that the InRelease in the image matches the digest templated into the Dockerfile, and return non-zero otherwise. So in addition to the version of the relevant pinned package bumping, a sync of any package in the package repo will break the build, helping to more tightly preserve the repeatability of the Dockerfile. This will effectively block newer builds of an image until the Dockerfile itself is updated to reflect the latest sync, i.e. until the CI changes are propagated upstream.

Still, I'm not sure this is best practice, as the blocking might slow the propagation of upstream security patches inherited from base images (e.g. ubuntu, debian, etc). My ideal level of repeatability would be a little looser: pinned packages would be consistent across consecutive builds of a Dockerfile, but the unpinned supported packages could still be subject to upstream security updates. This is why I first opted for just a passive echo in 03b69b7, just to make a change in the Dockerfile that breaks the build cache, but not to necessarily block future builds.

I'd say until #125 is resolved and we have a more streamlined end-to-end PR pipeline, I'd be hesitant to merge 03b69b7, as the demand on maintainers here would increase, given we'd more frequently break the critical build path of security updates with any repo sync, regardless of whether relevant pinned packages were actually bumped.

@nuclearsandwich, is there an easy way now of having bots open new PRs, triggered by other PRs being merged?
#125 would benefit from having something ensure the upstream manifest files always stay in sync with the copy of the manifest we generate locally in this repo.
Can you draw me a little diagram of what you envision here? What upstream repos are you trying to track changes in?
IIUC we'll need to update the sha256 sum and PR https://github.com/docker-library/official-images/ every time a package is released on packages.osrfoundation.org or a sync happens on packages.ros.org. This is pretty frequent, but at least it's visible when a build on Docker Hub fails.
Just checking I understand, it looks like updates from the base images would only be blocked if there was also a release or sync.
Approving the concept, I'll check what the current hashes are.
Indeed, that's why I'd like to further automate the PR process upstream. I think we could use CircleCI's Manual Job Approval in a Workflow to let us control when upstream PRs are deployed, so that our bots don't submit upstream PRs unsupervised, while still not burdening approvers with having to open them manually. Then we'd be able to keep up with the relatively frequent syncs. https://circleci.com/blog/manual-job-approval-and-scheduled-workflow-runs/

@sloretz, I can add the CircleCI logic, but would you be able to add another python script, like the one currently controlling our PR bot, to make PRs targeting upstream master based on our manifest in local master?
I can do that. Where's the script powering the PR bot?
As in, a script that creates a PR like this one? Is this script also updating the hashes and creating the dockerfiles? How should it get the github token from the context it's run in? Keyring? Env var?
The current code for the PR bot resides in docker_images/.travis/travis.py, lines 241 to 246 (at 99b90d8).
Yep, it'll need to be generalized for both the
The script/feature needed doesn't have to update the hashes in the local manifests; this is already achieved by the create_docker_library script run by the existing PR CI. What this script should do is check whether the respective manifest differs from that of upstream, and open a github PR with some kind of bot template if they are out of sync.
You can get the token from env vars like the existing travis.py one does, and we'll just configure them in the CI admin panel as private variables.

The idea in general: I'm thinking we keep the existing CI as-is, which just continues to open PRs when dockerfiles and manifests are generated out of date. The added CircleCI would additionally trigger on each PR, and the workflow would immediately block, waiting on maintainer confirmation. The maintainer checks that the existing CI can build and pass the generated PR, then clicks the github merge button, then clicks the CircleCI approve button to deploy the upstream PR, at which point the CircleCI workflow continues and runs the new script that diffs the manifest the local PR just merged against upstream. If they differ, it then checks whether a template PR is already open; if so it could error out, if not it opens a new one using a template that pings the maintainer group.
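The manifest-sync check described above could be sketched roughly as follows. This is a hedged illustration, not the actual bot: `needs_sync`, `maybe_open_pr`, and the injected `open_pr` callable are hypothetical names, and the real script would read its token from an env var (as travis.py does) and call the GitHub API inside `open_pr`:

```python
import os

def needs_sync(local_manifest: str, upstream_manifest: str) -> bool:
    # The manifests are plain text: any difference means upstream is
    # out of date and a template PR should be opened.
    return local_manifest.strip() != upstream_manifest.strip()

def maybe_open_pr(local_manifest: str, upstream_manifest: str, open_pr) -> bool:
    """Diff the just-merged local manifest against upstream and file a
    PR if they differ. `open_pr` is injected (it would use a token from
    e.g. os.environ["GITHUB_TOKEN"] and the GitHub API), which keeps
    the sync decision testable offline. Returns True if a PR was opened."""
    if needs_sync(local_manifest, upstream_manifest):
        open_pr()
        return True
    return False
```

Checking for an already-open template PR before calling `open_pr` (to error out instead of duplicating) would layer on top of this in the same way.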
I believe this was discussed in the tickets, but I'd like to highlight that this means many more invalidations and rebuilds than the other suggested approaches; situations that come to mind:
As ROS2 now also always installs a single package, it may not be too cluttering to just provide the version number suggested in #112 (comment) and avoid invalidating when not necessary. I agree that automated builds and PR creation would be very beneficial if this is the strategy adopted 👍.
Superseded by osrf/docker_templates#90; example of resulting automatic PR: #459
Fixes: #112