Document the policy about performance testing environment and epochs #1206

Merged 3 commits on Oct 21, 2024

docs/team/ci.md (17 additions, 1 deletion)

# Continuous Integration

## Correctness Testing

MMTk core runs CI tests *before* a pull request is merged.

MMTk core sets up two sets of tests, the *minimal tests* and the *extended tests*.

* Extended tests only run for a pull request if the pull request is tagged with the label `PR-extended-testing`. This set of tests
  may take hours, and usually includes integration tests with bindings, which run the language implementation's standard test suite
  as much as possible.

## Performance Testing

We conduct performance testing for each MMTk core commit after it has been merged.

### Testing Environment and Epochs

We track the performance of MMTk over many years. Naturally, changes to the testing environment (hardware or software) or to our methodology are sometimes necessary. Each such change marks the start of a new *epoch* in our performance evaluation.

Since changes in the testing environment can significantly impact performance, we do not directly compare performance results across different epochs. Within an epoch, we ensure that **MMTk does not experience performance regressions**, and **we only update the testing environment when there is no performance regression in the current epoch**.
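As an illustration of this policy (not MMTk's actual tooling), a regression check could group results by epoch and compare each commit only against a baseline from the same epoch. In the sketch below, the record layout, the choice of the epoch's first run as the baseline, and the 3% threshold are all assumptions made for the example:

```python
from collections import defaultdict

# Hypothetical benchmark records: (epoch id, commit, average time in ms).
results = [
    ("epoch-1", "c1a2b3", 105.0),
    ("epoch-1", "d4e5f6", 104.2),
    ("epoch-2", "a7b8c9", 121.9),  # new machine: not comparable to epoch-1
    ("epoch-2", "e0f1a2", 127.0),  # >3% slower than 121.9, so it is flagged
]

THRESHOLD = 1.03  # flag a slowdown of more than 3% as a potential regression

def check_regressions(records):
    """Compare each commit only against the baseline of its own epoch."""
    by_epoch = defaultdict(list)
    for epoch, commit, time_ms in records:
        by_epoch[epoch].append((commit, time_ms))
    for epoch, runs in by_epoch.items():
        baseline = runs[0][1]  # first run of the epoch serves as the baseline
        for commit, time_ms in runs[1:]:
            if time_ms > baseline * THRESHOLD:
                print(f"[{epoch}] possible regression at {commit}: "
                      f"{time_ms:.1f} ms vs baseline {baseline:.1f} ms")

check_regressions(results)
```

Because commits from different epochs never share a baseline, a change of machine shows up as the start of a new epoch rather than as a spurious regression.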

### Regression Test Canary

To catch performance changes that would otherwise go unnoticed, and to measure the level of noise in the testing environment, we use a canary when running performance regression tests with the OpenJDK binding. A "canary" is a chosen revision that is run along with any merged pull request. Since the same revision is run again and again, its performance should be relatively constant, within the range of noise. If we notice a change in the performance of the canary (especially something that resembles a [step function](https://en.wikipedia.org/wiki/Heaviside_step_function) in the line plot), we should inspect our testing environment for hardware or software changes.
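The step-function pattern mentioned above can also be checked mechanically by comparing the mean canary timing before and after each run. The following is a minimal sketch, assuming evenly sampled canary timings; the window size and the 5% threshold are arbitrary illustrative choices, not values used by MMTk:

```python
def find_step_changes(timings, window=10, threshold=0.05):
    """Flag indices where the mean of the next `window` canary runs differs
    from the mean of the previous `window` runs by more than `threshold`
    (relative), suggesting an environment change rather than ordinary noise."""
    changes = []
    for i in range(window, len(timings) - window):
        before = sum(timings[i - window:i]) / window
        after = sum(timings[i:i + window]) / window
        if abs(after - before) / before > threshold:
            changes.append((i, before, after))
    return changes

# Example: synthetic canary timings with an 8% jump at run 20,
# larger than the assumed 5% noise threshold.
canary = [100.0] * 20 + [108.0] * 20
for i, before, after in find_step_changes(canary):
    print(f"possible environment change near run {i}: "
          f"{before:.1f} ms -> {after:.1f} ms")
```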

We keep running the same canary version for as long as possible. When it can no longer be used, for example because the toolchain needed to compile that version is no longer available, we may choose a different canary version or switch to an automatic mechanism for choosing the canary.