Parallel GH actions workflow for Nixpkgs eval (#356023)
Mic92 authored Nov 20, 2024
2 parents cbc70ce + fbbe972 commit 6d2d99e
Showing 12 changed files with 510 additions and 87 deletions.
139 changes: 139 additions & 0 deletions .github/workflows/eval.yml
@@ -0,0 +1,139 @@
name: Eval

on: pull_request_target

permissions:
  contents: read

jobs:
  attrs:
    name: Attributes
    runs-on: ubuntu-latest
    outputs:
      mergedSha: ${{ steps.merged.outputs.mergedSha }}
      systems: ${{ steps.systems.outputs.systems }}
    steps:
      # Important: Because of `pull_request_target`, this doesn't check out the PR,
      # but rather the base branch of the PR, which is needed so we don't run untrusted code
      - name: Check out the ci directory of the base branch
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          path: base
          sparse-checkout: ci
      - name: Check if the PR can be merged and get the test merge commit
        id: merged
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          if mergedSha=$(base/ci/get-merge-commit.sh ${{ github.repository }} ${{ github.event.number }}); then
            echo "Checking the merge commit $mergedSha"
            echo "mergedSha=$mergedSha" >> "$GITHUB_OUTPUT"
          else
            # Skipping so that no notifications are sent
            echo "Skipping the rest..."
          fi
          rm -rf base
      - name: Check out the PR at the test merge commit
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        # Add this to _all_ subsequent steps to skip them
        if: steps.merged.outputs.mergedSha
        with:
          ref: ${{ steps.merged.outputs.mergedSha }}
          path: nixpkgs

      - name: Install Nix
        uses: cachix/install-nix-action@08dcb3a5e62fa31e2da3d490afc4176ef55ecd72 # v30
        if: steps.merged.outputs.mergedSha

      - name: Evaluate the list of all attributes and get the systems matrix
        id: systems
        if: steps.merged.outputs.mergedSha
        run: |
          nix-build nixpkgs/ci -A eval.attrpathsSuperset
          echo "systems=$(<result/systems.json)" >> "$GITHUB_OUTPUT"
      - name: Upload the list of all attributes
        uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882 # v4.4.3
        if: steps.merged.outputs.mergedSha
        with:
          name: paths
          path: result/*

  outpaths:
    name: Outpaths
    runs-on: ubuntu-latest
    needs: attrs
    # Skip this and future steps if the PR can't be merged
    if: needs.attrs.outputs.mergedSha
    strategy:
      matrix:
        system: ${{ fromJSON(needs.attrs.outputs.systems) }}
    steps:
      - name: Download the list of all attributes
        uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
        with:
          name: paths
          path: paths

      - name: Check out the PR at the test merge commit
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          ref: ${{ needs.attrs.outputs.mergedSha }}
          path: nixpkgs

      - name: Install Nix
        uses: cachix/install-nix-action@08dcb3a5e62fa31e2da3d490afc4176ef55ecd72 # v30

      - name: Evaluate the ${{ matrix.system }} output paths for all derivation attributes
        run: |
          nix-build nixpkgs/ci -A eval.singleSystem \
            --argstr evalSystem ${{ matrix.system }} \
            --arg attrpathFile ./paths/paths.json \
            --arg chunkSize 10000
          # If it uses too much memory, slightly decrease chunkSize
      - name: Upload the output paths and eval stats
        uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882 # v4.4.3
        if: needs.attrs.outputs.mergedSha
        with:
          name: intermediate-${{ matrix.system }}
          path: result/*

  process:
    name: Process
    runs-on: ubuntu-latest
    # attrs must be a direct dependency, because outputs of transitive
    # dependencies are not available through the `needs` context
    needs: [attrs, outpaths]
    steps:
      - name: Download output paths and eval stats for all systems
        uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
        with:
          pattern: intermediate-*
          path: intermediate

      - name: Check out the PR at the test merge commit
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          ref: ${{ needs.attrs.outputs.mergedSha }}
          path: nixpkgs

      - name: Install Nix
        uses: cachix/install-nix-action@08dcb3a5e62fa31e2da3d490afc4176ef55ecd72 # v30

      - name: Combine all output paths and eval stats
        run: |
          nix-build nixpkgs/ci -A eval.combine \
            --arg resultsDir ./intermediate
      - name: Upload the combined results
        uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882 # v4.4.3
        with:
          name: result
          path: result/*


# TODO: Run this workflow also on `push` (on at least the main development branches)
# Then add an extra step here that waits for the base branch (not the merge base, because that could be very different)
# to have completed the eval, then use
#   gh api --method GET /repos/NixOS/nixpkgs/actions/workflows/eval.yml/runs -f head_sha=<BASE>
# and follow it to the artifact results, where you can then download the outpaths.json from the base branch
# That can then be used to compare the number of changed paths, get evaluation stats and ping appropriate reviewers
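The comparison flow sketched in the TODO could look roughly like the following (a sketch, not part of this commit: the `result` artifact and `outpaths.json` names come from the workflow above, while the base branch name, the jq filter, and the diff logic are assumptions):

```
# Rough sketch of the TODO above; assumes the base branch is master and
# that the base commit already has a completed eval.yml run.
BASE_SHA=$(git -C nixpkgs rev-parse origin/master)

# Find the eval.yml run for the base commit (command taken from the TODO)
run_id=$(gh api --method GET \
  /repos/NixOS/nixpkgs/actions/workflows/eval.yml/runs \
  -f head_sha="$BASE_SHA" \
  --jq '.workflow_runs[0].id')

# Download the combined "result" artifact from that run
gh run download "$run_id" --repo NixOS/nixpkgs --name result --dir base-result

# Compare the base output paths against the PR's to see what changed
diff <(jq -S . base-result/outpaths.json) <(jq -S . result/outpaths.json)
```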
1 change: 1 addition & 0 deletions ci/default.nix
@@ -26,4 +26,5 @@ in
  inherit pkgs;
  requestReviews = pkgs.callPackage ./request-reviews { };
  codeownersValidator = pkgs.callPackage ./codeowners-validator { };
  eval = pkgs.callPackage ./eval { };
}
19 changes: 19 additions & 0 deletions ci/eval/README.md
@@ -0,0 +1,19 @@
# Nixpkgs CI evaluation

The code in this directory is used by the [eval.yml](../../.github/workflows/eval.yml) GitHub Actions workflow to evaluate the majority of Nixpkgs for all PRs, effectively making sure that no evaluation failures are encountered when the development branches are later processed by Hydra.

Furthermore, it also allows local evaluation using
```
nix-build ci -A eval.full \
  --max-jobs 4 \
  --cores 2 \
  --arg chunkSize 10000
```

- `--max-jobs`: The maximum number of derivations to run at the same time. Each [supported system](../supportedSystems.nix) gets its own derivation, so it doesn't make sense to set this higher than the number of supported systems.
- `--cores`: The number of cores to use for each job. It is recommended to set this to the number of cores on your machine divided by `--max-jobs`.
- `chunkSize`: The number of attributes that are evaluated simultaneously on a single core. Lowering this decreases memory usage at the cost of increased evaluation time. Setting it too high also increases evaluation time, since there won't be enough chunks to process in parallel.

A good default is to set `chunkSize` to 10000, which leads to a maximum memory usage of about 3.6GB per core, making it suitable for fully utilising machines with 4 cores and 16GB of memory, 8 cores and 32GB, or 16 cores and 64GB.

Note that 16GB of memory is the recommended minimum; with less than 8GB, evaluation time suffers greatly.
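
For example, following the guidance above, a machine with 16 cores and 64GB of memory could use the invocation below (assuming the default four supported systems, hence `--max-jobs 4` and 16 / 4 = 4 cores per job):
```
nix-build ci -A eval.full \
  --max-jobs 4 \
  --cores 4 \
  --arg chunkSize 10000
```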