
deploy nightly builds to remote server #144

Closed
paidforby opened this issue Aug 5, 2018 · 6 comments

Comments

@paidforby

After the developments in #137 we have the full build finishing in Travis. This already runs every night and contains the binaries for the N600, so there should be a way to push these builds somewhere useful. I'm guessing we could adapt the already existing send_to_webserver script. Not sure if we can or should use Zenodo, or just https://builds.sudomesh.org?

@gobengo
Contributor

gobengo commented Aug 8, 2018

Let's

  • document how builds have been published
  • reaffirm commitment to that process or raise any concerns
  • automate the process, i.e. make Travis do it

Here's what it looks like we've been doing:

  • New minor versions get code names like 'fledgling', 'disposessed'
    http://builds.sudomesh.org/builds/sudowrt-firmware/{artifact}
  • Build artifacts are published at URL (served from filesystem of host at peoplesopen.net): http://builds.sudomesh.org/builds/sudowrt/{codeName}/{version}/{artifact}
  • Build artifacts are also published to Zenodo (how?) for redundancy
  • remember to update/add build links to critical documentation like https://sudoroom.org/wiki/Mesh/WalkThrough

Anyone have thoughts on this? I have a couple.

  • All for neato codenames. They are fun. But I think each release should also have a simple release number, and that if I know the release number (and project name), I should be able to build the builds.sudomesh.org URL without having to know the codename. So it'd be rad if we could symlink/redirect to make URLs like https://builds.sudomesh.org/{project}/{version}/{artifact}, e.g. https://builds.sudomesh.org/sudowrt-firmware/0.3.0/{artifact}
  • There should be a single script I can run to upload things to the right places (including Zenodo)
  • It would be great to also automate building release candidates or non-release builds like when pushing to a feature branch. For example, the feature branch for zeroconf was around for months and stuff. Would have been invaluable to have automated builds publishing to https://builds.sudomesh.org/sudowrt-firmware/0.3.0-rc.1/{artifact} so everyone could help test @paidforby's hot new stuff.
  • Travis should be able to run that script for 'release builds', e.g. builds in a special 'release' branch we can PR to from master to trigger automated releases. The release could also automate creating and pushing git tags to the same commit that produced a built artifact. Similar work was prototyped here for monitor, but the work has been deemed more appropriate for this repo.
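Sketching that last bullet: Travis exposes `TRAVIS_BRANCH` and `TRAVIS_PULL_REQUEST` to build scripts, so a deploy step could gate on pushes to a 'release' branch. The branch name, tag scheme, and upload command below are assumptions, not an existing script:

```shell
#!/bin/sh
# Hypothetical release gate: only publish when Travis built a push
# (not a PR) to the 'release' branch. Variable names come from Travis CI.
should_release() {
  [ "$TRAVIS_BRANCH" = "release" ] && [ "$TRAVIS_PULL_REQUEST" = "false" ]
}

if should_release; then
  # assumed: VERSION is read from the repo, artifacts sit in built-firmware/
  echo "tagging v$VERSION and uploading built-firmware/"
  # git tag "v$VERSION" "$TRAVIS_COMMIT" && git push origin "v$VERSION"
  # ./send_to_webserver built-firmware/*
fi
```

Running the tag step in the same job that produced the artifact would give us the "tag matches the built commit" property for free.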

@paidforby
Author

Great breakdown @gobengo! As far as I know, there has never been a consistent deployment process, so I think no one is attached to any particular process.

For (at least) the last year the process has gone something like:

  • Someone creates a new feature or fixes a bug (e.g. new dashboard, new tunneldigger version)
  • Someone gets annoyed that new feature or bug patch isn't in the current build
  • Someone decides to run a new build of firmware and maybe tests on a few home nodes (very rarely on an extender node).
  • Someone decides the change is major enough (i.e. could not be done with makenode) and the build seems stable enough for general use.
  • Someone uploads that build to a file server (typically builds.sudomesh.org, @jhpoelen started using Zenodo, mostly because of concerns of storage space on sudomesh server)

I introduced the codenames on the builds server to help differentiate the new autoconf builds from the makenode builds; previously the codename had been the same as OpenWrt's, 'chaos_calmer', which seemed even less helpful. I agree there's no need for codenames on the build server, though it doesn't seem too different from putting '0.3.x' or '0.3.0' as the directory name.
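One way to keep the codenames while also exposing the plain version URLs @gobengo proposed is a symlink on the builds host. A minimal sketch, where the web root, project, and codename paths are all assumptions:

```shell
# Hypothetical layout: codename directories stay the source of truth,
# version-only URLs are served via symlinks. Paths are assumptions;
# on the real server BUILDS_ROOT would be the web root.
BUILDS_ROOT=${BUILDS_ROOT:-$(mktemp -d)}
PROJECT=sudowrt-firmware
CODENAME=fledgling
VERSION=0.3.0

mkdir -p "$BUILDS_ROOT/builds/sudowrt/$CODENAME/$VERSION" \
         "$BUILDS_ROOT/$PROJECT"

# https://builds.sudomesh.org/sudowrt-firmware/0.3.0/{artifact}
# now resolves into the codename directory
ln -sfn "$BUILDS_ROOT/builds/sudowrt/$CODENAME/$VERSION" \
        "$BUILDS_ROOT/$PROJECT/$VERSION"
```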

Love the idea of release candidate builds; I tried to do something similar with http://builds.sudomesh.org/dev-builds/ but it became confusing and unmaintained very quickly.

My ideal dev cycle would read something like this:

  1. Bug reported in firmware issues or sudomesh/bugs
  2. We come up with a great fix
  3. Clone firmware repo and attempt to integrate the bug fix into the firmware files
  4. Pull the latest docker image, copy in new firmware files, and re-run the docker container locally.
  5. After about 5 minutes, the new build is cooked and spit out of the docker container.
  6. Flash at least one home node, try reproducing the bug or otherwise testing bug fix.
  7. Once adequately tested, push commit to master (unless working on something that warrants a branch a la 'autoconf').
  8. The commit is immediately picked up by Travis and a rebuild is triggered.
  9. Built firmware is copied out of Travis to somewhere like builds.sudomesh.org/builds/sudowrt/0.3.0/latest
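Steps 4–5 above could be wrapped in a small helper. The image name `sudomesh/sudowrt-firmware` and the in-container output path are assumptions about the actual setup, not confirmed names:

```shell
# Hypothetical wrapper for steps 4-5: pull the image, run the build,
# copy the finished binaries out, then clean up the container.
build_firmware() {
  image=${1:-sudomesh/sudowrt-firmware:latest}
  docker pull "$image" &&
  docker run --name fw-build "$image" &&
  docker cp fw-build:/sudowrt-firmware/built-firmware ./built-firmware
  status=$?
  docker rm -f fw-build >/dev/null 2>&1
  return $status
}
```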

I don't think space is a concern on the sudomesh server, now that we are only building N600 firmware, though someone should check that.

I'm guessing we can push the build similarly to how we are pushing to Docker Hub, though we may need to create a dummy user/password on the sudomesh server for Travis.
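A single upload helper, modeled loosely on the old send_to_webserver script, could then run from Travis under that dedicated account. The host, user, and remote path below are assumptions; credentials would live in Travis encrypted environment variables:

```shell
# Hypothetical uploader: scp one artifact into a versioned directory
# on the builds server. All names here are illustrative assumptions.
upload_build() {
  artifact=$1
  version=$2
  remote="deploy@builds.sudomesh.org:/var/www/builds/sudowrt-firmware/$version/"
  echo "uploading $artifact to $remote"
  scp -o StrictHostKeyChecking=no "$artifact" "$remote"
}
```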

Hope this info helps. @gobengo, I encourage you to go ahead and restructure the deployment process however you see fit; let me know if you're missing any access you should already have. Thanks!

@paidforby
Author

After ten or so commits, I think I've finally got the firmware binaries deploying to https://builds.sudomesh.org/sudowrt-firmware/latest. I based the deployment on this guide and the old send_to_webserver script. It appears to work well, but we should keep an eye on it as it starts pushing nightly builds.

And as suggested by @gobengo, I've reworked the directory structure for the firmware; check it out at http://builds.sudomesh.org/sudowrt-firmware/. I left the old directory in place until we update links in the wiki.

One other thought: is there any point in committing the docker image back to Docker Hub? Now that we can extract the few files we want, maybe it makes sense to only update the docker image once the build time becomes too long? Not recommitting to Docker Hub might also address the bloated images described in #146.

@paidforby
Author

Mentioned the wrong issue in a commit message; see ae74d96 for the Docker Hub related commit.

@paidforby
Author

This appears to be working consistently. It also helped us detect other, unrelated, issues with the latest build.

@bennlich
Collaborator

bennlich commented Apr 5, 2019

I left the old directory in place until we update links in the wiki.

I updated the links in the wiki and moved the old build directories one level up. If nobody complains that they're missing in the next month, I'll delete 'em.
