
Refactor CI workflows to use GitHub Actions capabilities #1968

Closed · 5 tasks done
ronaldtse opened this issue Jan 10, 2023 · 10 comments · Fixed by #2137

@ronaldtse (Contributor)

We should utilize GHA capabilities instead of shell scripts. The goal is to get rid of the ci/ folder.

The tasks are:

@dewyatt (Contributor) commented Jan 10, 2023

We used to be able to run something like `env GPG_VERSION=beta BUILD_MODE=sanitize ci/run.sh` in a local container, and it would install and build all dependencies with sanitizers, run all tests, etc.
Is this no longer going to be supported?
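
For reference, a rough sketch of that local flow, assuming the ci/ scripts install their own dependencies inside the container; the base image, mount path and variables below are illustrative rather than the actual setup:

# Mount the source tree into a stock distro container and let ci/run.sh
# install dependencies, build with sanitizers and run the tests.
# The base image, mount path and variables are illustrative.
docker run --rm -it \
  -v "$PWD":/rnp -w /rnp \
  -e GPG_VERSION=beta \
  -e BUILD_MODE=sanitize \
  ubuntu:20.04 \
  bash ci/run.sh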

@ronaldtse (Contributor, Author)

Great question. The challenge is that it has been difficult to know what the "local container" actually is, and equally difficult to keep the set of workflows working across the now diverse platforms we support.

Even if we don't have that container-CI script, could we use a repository fork to test out all the configurations using GitHub Actions?

@dewyatt (Contributor) commented Jan 10, 2023

The reason I tried to maintain those scripts previously is that it allowed for much faster iteration.
I could do a single local CI task run with sanitizers enabled in a Linux container on a dev machine w/8+ physical CPU cores, and that would cover 99.99% of potential issues in ~10m, allowing me to proceed with some confidence on my work.
That's opposed to pushing to GitHub, waiting in a queue, spinning up 70 containers/VMs on multiple platforms, and waiting ~2h or so.

Anyways, just wanted to make sure you are aware of the tradeoffs.

@ni4 (Contributor) commented Jan 11, 2023

My idea was to have a registry of RNP-related pre-built Docker images that are periodically refreshed; however, I haven't gotten to this task yet.
This could be done by re-using the current CI scripts, plus some tooling on top of them, which would build the Docker images via CD and upload them somewhere.

This should solve both the GitHub issue (a workflow would just specify the pre-built image) and the local issue (just name an image and run it via a simple command line).
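
On the local side, consuming such a pre-built image could be as simple as the sketch below; the registry path and image name are hypothetical:

# Pull a hypothetical pre-built CI image and run the existing CI entry point
# inside it; the registry path and image name are illustrative only.
docker pull ghcr.io/rnpgp/rnp-ci-centos8-botan:latest
docker run --rm -it \
  -v "$PWD":/rnp -w /rnp \
  ghcr.io/rnpgp/rnp-ci-centos8-botan:latest \
  bash ci/run.sh

On the GitHub side, a workflow job would just reference the same image in its container key.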

@dewyatt (Contributor) commented Jan 11, 2023

@ni4 I think that's a good approach, so long as there is adequate hosting available for the images and such.

@ronaldtse (Contributor, Author) commented Jan 12, 2023

Good point @ni4, we did talk about pre-building Docker images. I believe @maxirmx could utilize pre-built, cached Docker images (for the platforms where that makes sense) as part of this current issue.

GitHub now provides a way to host Docker images (the GitHub Container Registry), so this is appropriate.
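
As a sketch, the CD side of publishing to the GitHub Container Registry could look roughly like this; the repository, Dockerfile path and tag are illustrative, and in a workflow the token would come from the built-in GITHUB_TOKEN secret:

# Log in to ghcr.io, then build and push a CI image.
# The repository, Dockerfile path and tag are illustrative.
echo "$GITHUB_TOKEN" | docker login ghcr.io -u "$GITHUB_ACTOR" --password-stdin
docker build -t ghcr.io/rnpgp/rnp-ci-centos8-openssl3:latest \
  -f docker/centos8-openssl3.Dockerfile .
docker push ghcr.io/rnpgp/rnp-ci-centos8-openssl3:latest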

@maxirmx (Member) commented Jan 12, 2023

Well, all together it sounds like a proposal to have a cmake project wrapped by a set of bash scripts embedded into a Docker container that is run by a GHA script in another container, and possibly in yet another container, with cross-dependencies split across all levels. :)

@ni4 (Contributor) commented Jan 12, 2023

@maxirmx For me it looked simpler: a repository with the current CI scripts and Dockerfiles, one for every setup we need (like centos8-openssl3, centos8-openssl1, centos8-botan, etc.). Each modification to these files should trigger CD, which would build the corresponding image and upload it to the appropriate location. This should also be done periodically to keep the images up to date. CI (for PRs which modify certain Dockerfiles/build scripts) should check whether the images are still buildable.

If we end up with too many setups, those could be parametrized (for example, a single centos8 Dockerfile with some ENV parameters inside, which would be overridden by CI/CD and then used in the scripts).

Another, even simpler approach could be to have a plain Dockerfile, without the current CI scripts, emulating the 'real-world' use case where a user issues certain commands to install the required dependencies. Here is one I used for debugging:

# syntax=docker/dockerfile:1
FROM quay.io/centos/centos:stream9
RUN dnf -y -q install sudo wget git
RUN sudo dnf -y update
RUN sudo dnf -y -q install clang llvm gcc gcc-c++ make autoconf automake libtool bzip2 gzip ncurses-devel bzip2-devel zlib-devel byacc gettext-devel bison python3 ruby-devel cmake
RUN sudo dnf -y -q install epel-release && \
    sudo dnf -y -q install 'dnf-command(config-manager)' && \
    sudo dnf config-manager --set-enabled crb
RUN sudo dnf -y -q install openssl-devel json-c-devel gtest-devel libasan libubsan
RUN mkdir rnp
COPY . rnp/
ENV CFLAGS="-g3 -O0 -fsanitize=address,undefined -fno-omit-frame-pointer -fno-common -Wsign-compare -Wno-error=sign-compare" \
    CXXFLAGS="-g3 -O0 -fsanitize=address,undefined -fno-omit-frame-pointer -fno-common -Wsign-compare -Wno-error=sign-compare" \
    CXX=g++ \
    CC=gcc
# The following helps to symbolize stack traces
ENV LSAN_OPTIONS=fast_unwind_on_malloc=0
RUN mkdir rnp-build
RUN cd rnp-build && \
    cmake -DCMAKE_BUILD_TYPE=Debug -DDOWNLOAD_GTEST=Off -DDOWNLOAD_RUBYRNP=Off -DCRYPTO_BACKEND=openssl ../rnp && \
    make -j4 && \
    make install
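
For reference, building and exercising this image locally might look roughly like the following; the tag is arbitrary and the test command is only a sketch (it assumes the tests were built during the image build):

# Build the image from the repository root and run the test suite inside it;
# the tag name is arbitrary and the ctest invocation is a sketch.
docker build -t rnp-centos9-debug .
docker run --rm -it rnp-centos9-debug \
  bash -c "cd rnp-build && ctest --output-on-failure"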

@maxirmx (Member) commented Jul 4, 2023

We are now at the point where ci/ scripts are required only when

  • we test against a specific version of botan/gpg
  • we need a backport of botan/json-c when the version provided by the system is too old

Considering the above, the suggestion is:

  • Create workflow(s) that build Docker containers for configurations with specific versions of botan/gpg/json-c (as opposed to system-shipped versions)
  • Publish the containers at ghcr.io
  • Run tests in containers when specific version(s) of botan/gpg/json-c are required; use standard GHA runners otherwise (see the sketch after the build sequence below)
  • Fully retire the ci/ build scripts
  • Use an environment-agnostic build sequence:
      - name: Configure
        run: |
          echo CORES="$(nproc --all)" >> $GITHUB_ENV
          cmake -B build   -DBUILD_SHARED_LIBS=${{ matrix.shared_libs}}   \
                           -DCRYPTO_BACKEND=${{ matrix.backend.name }}    \
                           -DDOWNLOAD_GTEST=ON                            \
                           -DCMAKE_BUILD_TYPE=Release  .

      - name: Build
        run: cmake --build build --parallel ${{ env.CORES }}

      - name: Test
        run: |
          mkdir -p "build/Testing/Temporary"
          cp "cmake/CTestCostData.txt" "build/Testing/Temporary"
          export PATH="$PWD/build/src/lib:$PATH"
          ctest --parallel ${{ env.CORES }} --test-dir build --output-on-failure
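
For the configurations that do need pinned versions of botan/gpg/json-c, the corresponding job could simply run inside a published image. A minimal sketch, assuming an image has already been pushed to ghcr.io (the job name and image path are illustrative):

  # Sketch of a job running inside a pre-built container image;
  # the job name and image path are illustrative.
  test-botan-pinned:
    runs-on: ubuntu-latest
    container:
      image: ghcr.io/rnpgp/rnp-ci-botan-pinned:latest
    steps:
      - uses: actions/checkout@v3
      # ...followed by the same Configure / Build / Test steps as above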

@ronaldtse (Contributor, Author)

@maxirmx thank you for the detailed proposal. I think this works perfectly. Let's proceed!
