From 0e0311380a4f5fb8a8558ae039e2737337a92548 Mon Sep 17 00:00:00 2001 From: Evaan Yaser Ahmed Date: Thu, 22 Aug 2024 15:49:37 -0500 Subject: [PATCH] Eliminating Vale Warnings This commit only edits documentation material. The goal was to eliminate a significant chunk of vale warnings. I was able to resolve 150 warnings, which is about 35%. --- CONTRIBUTING.md | 65 ++++++++++--------- README.md | 18 ++--- SECURITY.md | 11 ++-- deployment-examples/chromium/README.md | 13 ++-- deployment-examples/kubernetes/README.md | 18 ++--- .../deployment-examples/on-prem-overview.mdx | 6 +- .../src/content/docs/explanations/history.mdx | 12 ++-- docs/src/content/docs/faq/lre.mdx | 6 +- .../src/content/docs/faq/remote-execution.mdx | 26 ++++---- docs/src/content/docs/faq/rust.mdx | 12 ++-- .../src/content/docs/introduction/on-prem.mdx | 20 +++--- .../docs/nativelink-cloud/Reclient.mdx | 23 +++---- local-remote-execution/README.md | 50 +++++++------- 13 files changed, 142 insertions(+), 138 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 78107fd70..c7eb6c9a9 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,17 +1,17 @@ # Contributing to NativeLink NativeLink welcomes contribution from everyone. Here are the guidelines if you -are thinking of helping us: +are thinking of helping: ## Contributions -Contributions to NativeLink or its dependencies should be made in the form of -GitHub pull requests. Each pull request will be reviewed by a core contributor -(someone with permission to land patches) and either landed in the main tree or -given feedback for changes that would be required. All contributions should +You can contribute to NativeLink or its dependencies via GitHub pull +requests. Each pull request will undergo review by a core contributor +(someone with permission to land patches) and either land in the main tree +or receive feedback for required changes. All contributions should follow this format, even those from core contributors. 
-Should you wish to work on an issue, please claim it first by commenting on +To work on an issue, please claim it first by commenting on the GitHub issue that you want to work on it. This is to prevent duplicated efforts from contributors on the same issue. @@ -19,7 +19,7 @@ efforts from contributors on the same issue. NativeLink has a somewhat specific contribution process to ensure consistent quality across all commits. If anything in the following guide is unclear to you -please raise an issue so that we can clarify this document. +please raise an issue to help clarify this document. 1. In your [GitHub settings](https://github.com/settings/keys), set up distinct authentication and signing keys. For further information see: @@ -65,8 +65,8 @@ please raise an issue so that we can clarify this document. > upstream git@github.com:TraceMachina/nativelink (push) > ``` -6. Finally, configure `git` to sign your commits with the keys you set up - previously. Create a `.gitconfig` file in your home directory like so: +6. For the last step, configure `git` to sign your commits with the keys + you set up. Create a `.gitconfig` file in your home directory like so: ```ini [user] @@ -86,15 +86,16 @@ please raise an issue so that we can clarify this document. ## Local development setup -NativeLink ships almost all of its tooling in a nix flake which is configured -via a [`flake.nix`](https://github.com/tracemachina/nativelink/tree/main/flake.nix) file in the root of the repository. While it's +NativeLink ships most of its tooling in a nix flake configured +via a [`flake.nix`](https://github.com/tracemachina/nativelink/tree/main/flake.nix) +file in the root of the repository. While it's possible to work on some parts of the codebase without this environment, it'll make your life much easier since it lets you reproduce most of CI locally. 1. 
Install Nix with flakes: https://github.com/NixOS/experimental-nix-installer For further information on Nix Flakes see: https://nixos.wiki/wiki/Flakes. 2. Optionally (but highly recommended), install [`direnv`](https://direnv.net/docs/installation.html) and
-   hook it into your shell:
+   integrate it into your shell:

   ```bash
   nix profile install nixpkgs#direnv
@@ -166,13 +167,13 @@ NativeLink doesn't allow direct commits or human-created side branches in the
    - Use a capital letter to start the commit and use an imperative tone for
      the title.
-   - Don't end the title with a period.
+   - Don't end the title with punctuation.
    - Keep the first line as short as possible. If your feature is complex, add
-     additional information in the commit message body.
+     extra information in the commit message body.
    - If you feel like you need the word `and` in the commit title, the commit
-     might try to do too many things at once and you should consider splitting
+     might try to do a lot of things at once and you should consider splitting
      it into separate commits.
-   - The commit message body should have a maximum line length of 72 characters.
+   - Lines in the commit message body shouldn't exceed 72 characters.
      This is to keep the `git log` readable with raw terminals.

   ```bash
@@ -210,7 +211,7 @@ NativeLink doesn't allow direct commits or human-created side branches in the
   remaining issues. Feel free to ask for help if you have trouble getting CI
   for your pull request green.

-8. If you need to make additional changes, don't use a regular `git commit` on
+8. If you need to make further changes, don't use a regular `git commit` on
   the pull request branch. Instead use `git commit --amend` and `git push -f`
   to update the commit in-place. The changes between the commit versions will
   remain visible in the Reviewable UI.
@@ -302,14 +303,16 @@ add the changed files manually to the staging area. 
### Setting up rust-analyzer -[rust-analyzer](https://rust-analyzer.github.io/) works reasonably well out of the box due to picking up the manifest for the `nativelink` crate, but it isn't integrated with Bazel by default. In order to generate a project configuration for rust-analyzer, +[rust-analyzer](https://rust-analyzer.github.io/) works reasonably well out of the box due to picking up the manifest for the `nativelink` crate, but it isn't integrated with Bazel by default. To generate a project configuration for rust-analyzer, run the `@rules_rust//tools/rust_analyzer:gen_rust_project` target: ```sh bazel run @rules_rust//tools/rust_analyzer:gen_rust_project ``` -This will generate a `rust-project.json` file in the root directory. This file needs to be regenerated every time new files or dependencies are added in order to stay up-to-date. You can configure rust-analyzer can pick it up by setting the [`rust-analyzer.linkedProjects`](https://rust-analyzer.github.io/manual.html#rust-analyzer.linkedProjects) [configuration option](https://rust-analyzer.github.io/manual.html#configuration). +This will generate a `rust-project.json` file in the root directory. + +To stay up-to-date, you need to regenerate this file every time new files or dependencies get added. You can configure rust-analyzer to pick it up by setting the [`rust-analyzer.linkedProjects`](https://rust-analyzer.github.io/manual.html#rust-analyzer.linkedProjects) [configuration option](https://rust-analyzer.github.io/manual.html#configuration). If you use VS Code, you can configure the following `tasks.json` file to automatically generate this file when you open the editor: @@ -388,11 +391,11 @@ bazel test doctests ## Writing documentation -NativeLink largely follows the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/welcome/). +NativeLink primarily follows the [Microsoft Style Guide](https://learn.microsoft.com/en-us/style-guide/welcome/). 
NativeLink implements its documentation style guide via Vale. The pre-commit
-hooks forbid errors but permit warnings and suggestions. To view all of Vale's
-suggestions invoke it directly:
+hooks forbid errors but permit warnings and suggestions. To view every Vale
+suggestion, invoke Vale directly:

```
vale somefile
@@ -400,10 +403,10 @@ vale somefile

## Creating releases

-To keep the release process in line with best practices for open source
-repositories, not all steps are automated. Specifically, tags should be signed
-and pushed manually and the release notes should be human readable beyond what
-most automatically generated changelogs provide.
+To align the release process with best practices for open source repositories,
+some steps remain manual. Specifically, you should sign and push tags by hand,
+and ensure the release notes are human-readable, offering more detail than most
+automatically generated changelogs.

1. Bump the current version in the following files:

@@ -418,7 +422,7 @@ most automatically generated changelogs provide.

3. Create the commit and PR. Call it `Release NativeLink v0.x.y`.

-4. Once the PR is merged, update your local repository and origin:
+4. After merging the PR, update your local repository and origin:

```bash
git switch main
@@ -442,7 +446,7 @@ most automatically generated changelogs provide.
git push origin v0.x.y
```

-7. Pushing the tag triggers an additional GHA workflow which should create the
+7. Pushing the tag triggers an extra GHA workflow which should create the
container images in your own fork. Check that this workflow is functional.
If the CI job in your fork passes, push the tag to upstream:

```bash
git push upstream v0.x.y
```

-8. The images for the release are now being created. Go to the [Tags](https://github.com/TraceMachina/nativelink/tags)
+8. The release images are now building. 
Go to the
+   [Tags](https://github.com/TraceMachina/nativelink/tags)
tab in GitHub and double-check that the tag has a green `Verified` marker
next to it. If it does, select `Create a release from tag` and create release
notes. You can use previous release notes as template by clicking on the
@@ -460,7 +465,7 @@ most automatically generated changelogs provide.

Make sure to include migration instructions for all breaking changes.
Explicitly list whatever changes you think are worth mentioning as `Major
-   changes`. This is a fairly free-form section that doesn't have any explicit
+   changes`. This is a free-form section that doesn't have any explicit
requirements other than being a best-effort summary of notable changes.

9. Once all notes are in line, click `Publish Release`.
diff --git a/README.md b/README.md
index 6b1e52e94..fed7945ac 100644
--- a/README.md
+++ b/README.md
@@ -16,11 +16,11 @@
[![Slack](https://img.shields.io/badge/slack--channel-blue?logo=slack)](https://nativelink.slack.com/join/shared_invite/zt-281qk1ho0-krT7HfTUIYfQMdwflRuq7A#/shared-invite/email)
[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)

-## What's NativeLink?
+## What's NativeLink

-NativeLink is an efficient, high-performance build cache and remote execution system that accelerates software compilation and testing while reducing infrastructure costs. It optimizes build processes for projects of all sizes by intelligently caching build artifacts and distributing tasks across multiple machines.
+NativeLink is an efficient, high-performance build cache and remote execution system that accelerates software compilation and testing while reducing infrastructure costs. It optimizes build processes for projects of all sizes by intelligently caching build artifacts and distributing tasks across several machines. 
-NativeLink is trusted in production environments to reduce costs and developer iteration times--handling over **one billion requests** per month for its customers, including large corporations such as **Samsung**.
+Users trust NativeLink in production environments to reduce costs and developer iteration times--handling over **one billion requests** per month for its customers, including large corporations such as **Samsung**.

NativeLink Explained in 90 seconds

@@ -30,9 +30,9 @@ NativeLink is trusted in production environments to reduce costs and developer i

1. **Advanced Build Cache**:
   - Stores and reuses results of previous build steps for unchanged components
-   - Significantly reduces build times, especially for incremental changes
+   - Dramatically reduces build times, notably for incremental changes

-2. **Efficient Remote Execution**:
+2. **Efficient Remote Execution**:
   - Distributes build and test tasks across a network of machines
   - Parallelizes workloads for faster completion
   - Utilizes remote resources to offload computational burden from local machines
@@ -42,14 +42,14 @@ NativeLink seamlessly integrates with build tools that use the Remote Execution

## 🚀 Quickstart

-To start, you can deploy NativeLink as a Docker image (as shown below) or by using our cloud-hosted solution, [NativeLink Cloud](https://app.nativelink.com). It's **FREE** for individuals, open-source projects, and cloud production environments, with support for unlimited team members.
+To start, you can deploy NativeLink as a Docker image (as shown below) or by using the NativeLink team's cloud-hosted solution, [NativeLink Cloud](https://app.nativelink.com). It's **FREE** for individuals, open-source projects, and cloud production environments, with support for unlimited team members.

The setups below are **production-grade** installations. 
See the [contribution docs](https://docs.nativelink.com/contribute/nix/) for instructions on how to build from source with [Bazel](https://docs.nativelink.com/contribute/bazel/), [Cargo](https://docs.nativelink.com/contribute/cargo/), and [Nix](https://docs.nativelink.com/contribute/nix/). ### 📦 Prebuilt images -Fast to spin up, but currently limited to `x86_64` systems. See the [container +Fast to spin up, but limited to `x86_64` systems at the moment. See the [container registry](https://github.com/TraceMachina/nativelink/pkgs/container/nativelink) for all image tags and the [contribution docs](https://docs.nativelink.com/contribute/nix) for how to build the images yourself. @@ -117,7 +117,7 @@ See the [contribution docs](https://docs.nativelink.com/contribute/nix) for furt ## 🤝 Contributing -Visit our [Contributing](https://github.com/tracemachina/nativelink/blob/main/CONTRIBUTING.md) guide to learn how to contribute to NativeLink. We welcome contributions from developers of all skill levels and backgrounds! +Visit the [Contributing](https://github.com/tracemachina/nativelink/blob/main/CONTRIBUTING.md) guide to learn how to contribute to NativeLink. Contributions from developers of all skill levels and backgrounds are welcome! ## 📊 Stats @@ -125,6 +125,6 @@ Visit our [Contributing](https://github.com/tracemachina/nativelink/blob/main/CO ## 📜 License -Copyright 2020–2024 Trace Machina, Inc. +Copyright 2020 through 2024 Trace Machina, Inc. Licensed under the Apache 2.0 License, SPDX identifier `Apache-2.0`. diff --git a/SECURITY.md b/SECURITY.md index 7164bc78d..7a68e8e70 100644 --- a/SECURITY.md +++ b/SECURITY.md @@ -9,8 +9,7 @@ vulnerabilities. Please send a report if something doesn't look right. ## Supported versions At the moment no version of `nativelink` is officially supported. Consider -using the latest commit on the `main` branch until official production binaries -are released. 
+using the latest commit on the `main` branch until the release of official production binaries.

## Reporting a vulnerability

@@ -31,9 +30,9 @@ for publicly disclosed vulnerabilities.

See the published [OCI images](https://github.com/TraceMachina/nativelink/pkgs/container/nativelink)
for pull commands.

-Images are tagged by nix derivation hash. The most recently pushed image
-corresponds to the `main` branch. Images are signed by the GitHub action that
-produced the image. Note that the [OCI workflow](https://github.com/TraceMachina/nativelink/actions/workflows/image.yaml) might take a few minutes to publish the latest image.
+Nix derivation hashes serve as image tags. The most recently pushed image
+corresponds to the `main` branch. The GitHub action that produces an image
+also signs it. Note that the [OCI workflow](https://github.com/TraceMachina/nativelink/actions/workflows/image.yaml) might take a few minutes to publish the latest image.

### Get the tag for the latest commit
```sh
@@ -58,6 +57,6 @@ export PINNED_TAG=$(nix eval github:TraceMachina/nativelink/#image.ima

> [!TIP]
> The images are reproducible on `X86_64-unknown-linux-gnu`. If you're on such a
> system you can produce a binary-identical image by building the `.#image`
-> flake output locally. Make sure that your `git status` is completely clean and
+> flake output locally. Make sure that your `git status` is clean and
> aligned with the commit you want to reproduce. Otherwise the image will be
> tainted with a `"dirty"` revision label. 
-> - This tutorial has been tested in a Nix environment of version `2.
+> - This tutorial was tested in a Nix environment of version `2.
> 21.0`.
> - You need to install the [Docker](https://docs.docker.com/engine/install/ubuntu/) Engine in Ubuntu.
> - To get your Nix environment set up see the [official Nix installation documentation](https://nix.dev/install-nix).

-All commands should be run from nix to ensure all dependencies exist in the environment.
+Run all commands from nix to ensure all dependencies exist in the environment.

```bash
nix develop
```

-In this example we're using `kind` to set up the cluster `cilium` to provide a
+This example uses `kind` to set up the cluster and `cilium` to provide a
`LoadBalancer` and `GatewayController`.

First set up a local development cluster:

@@ -30,8 +30,7 @@ native up

> The `native up` command uses Pulumi under the hood. You can view and delete
> the stack with `pulumi stack` and `pulumi destroy`.

-Next start a few standard deployments. This part also builds the remote
-execution containers and makes them available to the cluster:
+Next, start some standard deployments. This step also builds the remote execution containers and makes them available to the cluster:

```bash
./01_operations.sh
@@ -42,7 +41,7 @@ execution containers and makes them available to the cluster:

> `nativelink` and worker images. You can view the state of the pipelines with
> `tkn pr ls` and `tkn pr logs`/`tkn pr logs --follow`. 
-Finally, deploy NativeLink:
+Time to deploy NativeLink:

```bash
./02_application.sh
@@ -74,7 +73,7 @@ in [linux/build_instructions.md](https://chromium.googlesource.com/chromium/src/
```

> [!TIP]
-> You can monitor the logs of container groups with `kubectl logs`:
+> Use `kubectl logs` to view container group logs:
> ```bash
> kubectl logs -f -l app=nativelink-cas
> kubectl logs -f -l app=nativelink-scheduler
diff --git a/deployment-examples/kubernetes/README.md b/deployment-examples/kubernetes/README.md
index 15069810a..cf0442371 100644
--- a/deployment-examples/kubernetes/README.md
+++ b/deployment-examples/kubernetes/README.md
@@ -3,7 +3,7 @@

This deployment sets up a 4-container deployment with separate CAS, scheduler
and worker. Don't use this example deployment in production. It's insecure.

-In this example we're using `kind` to set up the cluster `cilium` to provide a
+This example uses `kind` to set up the cluster and `cilium` to provide a
`LoadBalancer` and `GatewayController`.

First set up a local development cluster:

@@ -16,8 +16,8 @@
native up

> The `native up` command uses Pulumi under the hood. You can view and delete
> the stack with `pulumi stack` and `pulumi destroy`.

-Next start a few standard deployments. This part also builds the remote
-execution containers and makes them available to the cluster:
+Next start some standard deployments. This part also builds the remote
+execution containers and makes them available to the cluster:

```bash
./01_operations.sh
@@ -28,7 +28,7 @@

> `nativelink` and worker images. You can view the state of the pipelines with
> `tkn pr ls` and `tkn pr logs`/`tkn pr logs --follow`. 
-Finally, deploy NativeLink: +Time to deploy NativeLink: ```bash ./02_application.sh @@ -86,7 +86,7 @@ kind delete cluster ## Use a published image -[Published images](https://github.com/TraceMachina/nativelink/pkgs/container/nativelink) can be found under the Container registry, which uses the namespace `https://ghcr.io`. When using the Container registry, you can select prebuilt images and avoid building the image yourself. +Find [Published images](https://github.com/TraceMachina/nativelink/pkgs/container/nativelink) under the Container registry, which uses the namespace `https://ghcr.io`. When using the Container registry, you can select prebuilt images and avoid building the image yourself. To pull an existing image, you can run: @@ -102,13 +102,13 @@ To derive the tag of the NativeLink image at a specific commit, run the below co nix eval github:TraceMachina/nativelink/someCommit#image.imageTag --raw ``` -Alternatively, the tag can be derived from the upstream sources at the current state of the upstream main branch by running this command: +You can also derive the tag from the upstream sources at the current state of the upstream main branch by running this command: ```sh nix eval github:TraceMachina/nativelink#image.imageTag --raw ``` -Similarly, you can also clone or checkout a specific version or commit of the NativeLink git repository to evaluate the output of the entire NativeLink flake. For example, assuming you've done the [NativeLink Getting Started Guide](https://github.com/TraceMachina/nativelink?tab=readme-ov-file#getting-started-with-nativelink) and cloned the repository, you can run these sample commands: +Similarly, you can also clone or checkout a specific version or commit of the NativeLink git repository to inspect the output of the entire NativeLink flake. 
For example, assuming you've done the [NativeLink Getting Started Guide](https://github.com/TraceMachina/nativelink?tab=readme-ov-file#getting-started-with-nativelink) and cloned the repository, you can run these sample commands:

```sh
git log
nix eval .#image.imageTag --raw
```

The `--raw` removes the surrounding quotes from the output string.

> [!WARNING]
-> We don't recommend using this command to
+> Don't use this command to
> retrieve an image:
> ```sh
> nix eval github:TraceMachina/nativelink#image.imageTag --raw
@@ -151,7 +151,7 @@ IMAGE_TAG=$(nix eval .#image.imageTag --raw)
nix run .#image.copyTo docker-daemon:"${IMAGE_NAME}":"${IMAGE_TAG}"
```

-You can find more about details around [nix](https://github.com/nlewo/nix2container). Published images are signed using `cosign`. For more details of the verification process of publishing OCI images see [SECURITY.md](https://github.com/TraceMachina/nativelink/blob/main/SECURITY.md)
+You can find more details in the [nix2container](https://github.com/nlewo/nix2container) documentation. `cosign` signs the published images. For details on how published OCI images are verified, see [SECURITY.md](https://github.com/TraceMachina/nativelink/blob/main/SECURITY.md)

## NativeLink Community

diff --git a/docs/src/content/docs/deployment-examples/on-prem-overview.mdx b/docs/src/content/docs/deployment-examples/on-prem-overview.mdx
index 8e55642a6..03e52330b 100644
--- a/docs/src/content/docs/deployment-examples/on-prem-overview.mdx
+++ b/docs/src/content/docs/deployment-examples/on-prem-overview.mdx
@@ -14,15 +14,15 @@ on-premises Kubernetes setup.

These examples aren't intended for production use, but rather to serve as
basic, illustrative guides for using NativeLink in a custom Kubernetes
environment. 
-Each example leverages some of the latest, cutting-edge Kubernetes
+Each example leverages recent, cutting-edge Kubernetes
configurations, but with a simplified architecture that prioritizes
understandability. This approach allows you to grasp the fundamental
concepts and operations involved in deploying NativeLink on-premises,
without getting overwhelmed by complex production-grade setups.

-You can modify and expand these setups to better suit your specific
+You can change and expand these setups to better suit your specific
needs.

If you have any specific questions about adding NativeLink to existing or
-new setups, feel free to ask for help in our
+new setups, feel free to ask for help in the NativeLink
[Slack](https://nativelink.slack.com/join/shared_invite/zt-281qk1ho0-krT7HfTUIYfQMdwflRuq7A#/shared-invite/email).

diff --git a/docs/src/content/docs/explanations/history.mdx b/docs/src/content/docs/explanations/history.mdx
index d58fad79b..c79a13998 100644
--- a/docs/src/content/docs/explanations/history.mdx
+++ b/docs/src/content/docs/explanations/history.mdx
@@ -4,17 +4,17 @@ description: 'What is NativeLink?'
---

This project was first created due to frustration with similar projects not
-working or being extremely inefficient. Rust was chosen as the language to
+working or being inefficient. Rust became the language to
write it in because at the time Rust was going through a revolution in the
new-ish feature async-await. This made making multi-threading more accessible
when paired with a runtime like Tokio while still giving all the lifetime and
-other protections that Rust gives. This pretty much guarantees that we will
-never have crashes due to race conditions. This kind of project seemed perfect,
+other protections that Rust gives. This all but guarantees
+freedom from crashes due to race conditions. This kind of project seemed perfect,
since there is so much asynchronous activity happening and running them on
different threads is most preferable. 
Other languages like Go are good
-candidates, but other similar projects rely heavily on channels and mutex locks
-which are cumbersome and have to be carefully designed by the developer. Rust
+candidates, but other similar projects lean heavily on channels and mutex locks
+which are cumbersome and require careful design by the developer. Rust
doesn't have these issues, since the compiler will always tell you when the code
you are writing might introduce undefined behavior. The last major reason is
-because Rust is extremely fast, comparable to C++ and has no garbage collection
+because Rust is fast, comparable to C++, and has no garbage collection
(like C++, but unlike Java, Go, or Typescript).

diff --git a/docs/src/content/docs/faq/lre.mdx b/docs/src/content/docs/faq/lre.mdx
index 761e11dab..957ea2bb4 100644
--- a/docs/src/content/docs/faq/lre.mdx
+++ b/docs/src/content/docs/faq/lre.mdx
@@ -6,9 +6,9 @@ pagefind: true

Local Remote Execution (LRE) is a system for rapid iteration on toolchain setups.

-It offers a framework for developers to build, distribute, and swiftly alter
+It offers a framework for developers to build, distribute, and alter
custom toolchain configurations. This system aims to be transparent, fully
hermetic, and reproducible across machines with the same system architecture.

-For more detailed information about Local Remote Execution (LRE), please
-visit our [LRE Documentation](https://docs.nativelink.com/explanations/lre/).
+For more detailed information about LRE, please
+visit the [LRE Documentation](https://docs.nativelink.com/explanations/lre/). 
diff --git a/docs/src/content/docs/faq/remote-execution.mdx b/docs/src/content/docs/faq/remote-execution.mdx
index e307e8aa5..157c06975 100644
--- a/docs/src/content/docs/faq/remote-execution.mdx
+++ b/docs/src/content/docs/faq/remote-execution.mdx
@@ -4,23 +4,23 @@ description: "Remote Execution with NativeLink"
pagefind: true
---

-Remote execution is a powerful feature that allows you to distribute build
-and test actions across multiple machines, such as a data center.
-This can significantly speed up your build times and improve your development workflow.
+Remote execution is a powerful feature that allows you to distribute build
+and test actions across different machines, such as a data center.
+This can speed up your build times and improve your development workflow.

-By leveraging remote execution, you can enjoy the following benefits:
+By leveraging remote execution, you can enjoy the following benefits:

-- **Faster build and test execution**: By distributing actions across multiple nodes,
-you can execute builds and tests in parallel, significantly reducing the overall time required.
-- **Consistent execution environment**: Remote execution ensures that all members of
+- **Faster build and test execution**: By distributing actions across different nodes,
+you can run builds and tests in parallel, reducing the time required.
+- **Consistent execution environment**: Remote execution ensures that all members of
a development team are working in the same environment, reducing the
-"it works on my machine" problem.
+"it works on my machine" problem.
-- **Reuse of build outputs**: Build outputs can be shared across a development team,
+- **Reuse of build outputs**: You can share build outputs across a development team,
reducing redundant work and further speeding up the development process. 
For more information on how to get started with remote execution in NativeLink,
-please refer to our [NativeLink On-Prem Guide](https://docs.nativelink.com/introduction/on-prem).
+please refer to the [NativeLink On-Prem Guide](https://docs.nativelink.com/introduction/on-prem).

-For more detailed information about remote execution, you can visit the below links:
-[Bazel Remote Execution Documentation](https://bazel.build/remote/rbe).
-[Buck2 Remote Execution Documentation](https://buck2.build/docs/users/remote_execution/)
+For more detailed information about remote execution, see the links below:
+[Bazel Remote Execution Documentation](https://bazel.build/remote/rbe).
+[Buck2 Remote Execution Documentation](https://buck2.build/docs/users/remote_execution/)
diff --git a/docs/src/content/docs/faq/rust.mdx b/docs/src/content/docs/faq/rust.mdx
index 034887a9d..16df46b79 100644
--- a/docs/src/content/docs/faq/rust.mdx
+++ b/docs/src/content/docs/faq/rust.mdx
@@ -8,8 +8,8 @@ pagefind: true

NativeLink, as a system, demands both speed and safety in its operation. Among
all the languages that are fast and non garbage collected, Rust stands out as the
-only one that provides the necessary guarantees for writing asynchronous code
-for multiple distributed systems that communicate over GRPC.
+sole language that provides the necessary guarantees for writing asynchronous code
+for several distributed systems that communicate over gRPC.

Rust's unique features make it an ideal choice for NativeLink. It offers
unparalleled safety and speed, which are critical for the efficient operation
@@ -18,14 +18,14 @@ collection, zero-cost abstractions, and powerful static analysis tools
contribute to its robustness and reliability.

Moreover, the addition of the Tokio library to Rust's async ecosystem has
-significantly enhanced its capabilities. 
Tokio is a Rust framework for -developing applications with asynchronous I/O, which is particularly useful +further enhanced its capabilities. Tokio is a Rust framework for +developing applications with asynchronous I/O, which is useful for systems like NativeLink that involve a lot of network communication. Tokio provides the foundation that made NativeLink possible. It offers a multi-threaded, work-stealing scheduler, non-blocking I/O, and enables -efficient, high-level asynchronous APIs. This has allowed us to build -NativeLink as a highly efficient, reliable, and scalable system. +efficient, high-level asynchronous APIs. This paved the way to make +NativeLink a highly efficient, reliable, and scalable system. In conclusion, Rust, with its speed, safety, and the powerful async ecosystem provided by Tokio, has been instrumental in the development and success of diff --git a/docs/src/content/docs/introduction/on-prem.mdx b/docs/src/content/docs/introduction/on-prem.mdx index 501d1bf31..15966bd57 100644 --- a/docs/src/content/docs/introduction/on-prem.mdx +++ b/docs/src/content/docs/introduction/on-prem.mdx @@ -5,20 +5,20 @@ pagefind: true --- -While NativeLink offers robust cloud solutions, we understand -that you may need to run NativeLink on-premises for various +While NativeLink offers robust cloud solutions, you may need to +run NativeLink on-premises for various reasons such as compliance requirements, unique scaling needs, or specific infrastructure setups. -To facilitate this, NativeLink is designed to be seamlessly -deployable in an on-premises environment. Our team has worked +To provide this functionality, NativeLink is seamlessly +deployable in an on-premises environment. The team has worked hard to ensure that the process of setting up NativeLink on -your own servers is as straightforward as possible, and we -provide comprehensive documentation to guide you through the process. 
+your own servers is as straightforward as possible, and comprehensive
+documentation guides you through the process.

## Making your First Deployment

-To get started with running NativeLink on-premises, we recommend taking a
-look at our example deployments and NativeLink configurations that may suit
+To get started with running NativeLink on-premises, review the example
+deployments and NativeLink configurations that may suit
your needs.

- [**On-Prem Example Deployments**](/deployment-examples/on-prem-overview):
@@ -29,9 +29,9 @@
in a custom Kubernetes environment.

- [**Configuration Examples**](/config/configuration-intro):
NativeLink uses a JSON file as the configuration format. This section provides
-a few examples of configuration files that you can refer to when setting up
+some examples of configuration files that you can refer to when setting up
your own NativeLink configuration.

If you encounter any issues or have any questions about running NativeLink
on-premises, don't hesitate to
-reach out to us on [Slack](https://nativelink.slack.com/join/shared_invite/zt-281qk1ho0-krT7HfTUIYfQMdwflRuq7A#/shared-invite/email).
+reach out on [Slack](https://nativelink.slack.com/join/shared_invite/zt-281qk1ho0-krT7HfTUIYfQMdwflRuq7A#/shared-invite/email).
diff --git a/docs/src/content/docs/nativelink-cloud/Reclient.mdx b/docs/src/content/docs/nativelink-cloud/Reclient.mdx
index 382a0e845..82d416fbb 100644
--- a/docs/src/content/docs/nativelink-cloud/Reclient.mdx
+++ b/docs/src/content/docs/nativelink-cloud/Reclient.mdx
@@ -8,7 +8,7 @@ import { Tabs, TabItem } from '@astrojs/starlight/components';

### Reclient Configuration

-  Utilize NativeLink's cloud CAS to build your Reclient Chromium projects.
+  Use NativeLink's cloud CAS to build your Reclient Chromium projects.
  The following guide will help you set up your authentication keys
  (using mTLS) and configure Reclient for remote CAS usage.
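The mTLS setup that follows generates a certificate authority plus client key material. As a generic, hypothetical illustration of what that involves (not the guide's actual commands, which appear in the steps below and take precedence), a self-signed CA signing a client certificate with `openssl` looks roughly like this:

```shell
# Hypothetical sketch only: create a self-signed CA, then issue a client
# certificate signed by it. File names (ca.crt, client.key, ...) are
# illustrative and not necessarily what the guide produces.
openssl genrsa -out ca.key 4096
openssl req -new -x509 -key ca.key -out ca.crt -days 365 -subj "/CN=example-ca"
openssl genrsa -out client.key 4096
openssl req -new -key client.key -out client.csr -subj "/CN=example-client"
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out client.crt -days 365
# Sanity check: the client cert should chain to the CA.
openssl verify -CAfile ca.crt client.crt
```

In an mTLS setup like this, the client presents `client.crt`/`client.key` while the server is trusted via `ca.crt`; the exact files and flags Reclient needs are the ones the guide generates below.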
@@ -38,7 +38,7 @@ import { Tabs, TabItem } from '@astrojs/starlight/components';
  ### 3. Generating your mTLS key files

  Follow the instructions below in your terminal to generate the mTLS keys.
- These keys allow your local machine to communicate with our remote CAS:
+ These keys allow your local machine to communicate with NativeLink's remote CAS:
  ```bash
  cd $HOME/nativelink-reclient
  mkdir certs && cd certs
@@ -124,9 +124,9 @@ import { Tabs, TabItem } from '@astrojs/starlight/components';
  autoninja -C out/Default chrome
  ```

- ### 7. Watch the execution
+ ### 7. Watch the build progress

- In a new terminal window, execute the following:
+ In a new terminal window, run the following:
  ```bash
  watch ${HOME}/chromium/src/buildtools/reclient/reproxystatus
  ```
@@ -145,16 +145,17 @@ import { Tabs, TabItem } from '@astrojs/starlight/components';
  mkdir $HOME/chromium
  ```

- To check whether you have XCode properly installed and the Mac SDK present, run:
+ To check whether you have Xcode installed and the Mac SDK present, run:
  ```bash
  ls `xcode-select -p`/Platforms/MacOSX.platform/Developer/SDKs
  ```
- If this command doesn't return MacOSX.sdk (or similar), install the latest version of XCode, and ensure it's in your /Applications directory. If you're only seeing the command line tools, this command may fix that:
+ If this command doesn't return MacOSX.sdk (or similar), install the latest version of Xcode, and ensure it's
+ in your /Applications directory. If the output shows the command line tools instead, this command may fix that:
  ```bash
  sudo xcode-select -switch /Applications/XCode.app/Contents/Developer
  ```
- When you fetch the code, we recommend running the following to speed up your build:
+ When you fetch the code, the following command can speed up your build:
  ```bash
  caffeinate fetch --no-history chromium
  ```
@@ -169,7 +170,7 @@ import { Tabs, TabItem } from '@astrojs/starlight/components';

  ### 3. 
Generating your mTLS key files

- Follow the instructions below in your terminal. This will generate the mTLS keys that allow your local machine to communicate with our remote CAS:
+ Follow the instructions below in your terminal. This will generate the mTLS keys that allow your local machine to communicate with NativeLink's remote CAS:
  ```bash
  cd $HOME/nativelink-reclient
  mkdir certs && cd certs
@@ -237,7 +238,7 @@ import { Tabs, TabItem } from '@astrojs/starlight/components';

  ### 6. Build Chromium

- First, we will run a script to set some final configurations to optimize your build for remote caching. The --src_dir assumes Chromium under the $HOME directory:
+ First, run a script to set some final configurations to optimize your build for remote caching. The --src_dir assumes Chromium under the $HOME directory:
  ```bash
  cd $HOME/nativelink-reclient
  git clone https://github.com/TraceMachina/reclient-configs.git
@@ -253,9 +254,9 @@ import { Tabs, TabItem } from '@astrojs/starlight/components';
  autoninja -C out/Default chrome
  ```

- ### 7. Watch the execution
+ ### 7. Watch the build progress

- In a new terminal window, execute the following:
+ In a new terminal window, run the following:
  ```bash
  brew install watch
  watch ${HOME}/chromium/src/buildtools/reclient/reproxystatus
diff --git a/local-remote-execution/README.md b/local-remote-execution/README.md
index 773fd7a26..f8a1c1e21 100644
--- a/local-remote-execution/README.md
+++ b/local-remote-execution/README.md
@@ -1,10 +1,10 @@
# Local Remote Execution

NativeLink's Local Remote Execution is a framework to build, distribute, and
-rapidly iterate on custom toolchain setups that are transparent, fully hermetic,
-and reproducible across machines of the same system architecture.
+iterate on custom toolchain setups that are transparent, fully hermetic,
+and reproducible across machines of the same system architecture, while saving considerable time.
-Local Remote Execution mirrors toolchains for remote execution in your local
+Local Remote Execution (LRE) mirrors toolchains for remote execution in your local
development environment. This lets you reuse build artifacts with virtually
perfect cache hit rate across different repositories, developers, and CI.

@@ -14,7 +14,7 @@ perfect cache hit rate across different repositories, developers, and CI.
> Slack if you have any questions or ideas for improvement.

> [!NOTE]
-> At the moment LRE only works on `x86_64-linux`.
+> At the moment LRE supports `x86_64-linux` exclusively.

## Pre-Requisites

@@ -59,7 +59,7 @@ imports = [
];
```

-Finally, add the `lre.bazelrc` generator in your `devShell`'s `shellHook`:
+For the last step, add the `lre.bazelrc` generator in your `devShell`'s `shellHook`:

```nix
devShells.default = pkgs.mkShell {
@@ -97,8 +97,8 @@ build --extra_execution_platforms=@local-remote-execution//generated-cc/config:p
build --extra_toolchains=@local-remote-execution//generated-cc/config:cc-toolchain
```

-In the snippet above you can see a warning that no local toolchain is
-configured. LRE needs to know the remote toolchain configuration to make it
+The snippet above shows a warning that the local toolchain is missing.
+LRE needs to know the remote toolchain configuration to make it
available locally. The `local-remote-execution` settings take an `Env` input
and an optional `prefix` input to configure the generated `lre.bazelrc`:

@@ -145,11 +145,11 @@ Bazel's toolchain creates, but for nix to be able to generate the store paths it
needs to fetch the files to your local system. In other words, all paths
printed in `lre.bazelrc` will be available on your local system.

-Let's move on to Bazel's configuration.
+Next up is Bazel's configuration. 
## 🌱 Bazel-side setup -First, hook the generated `lre.bazelrc` into the main `.bazelrc` with a +First, integrate the generated `lre.bazelrc` into the main `.bazelrc` with a `try-import`: ```bash @@ -187,7 +187,7 @@ git_override( ) ``` -Let's use NativeLink's Kubernetes example to verify that the setup worked. +Next, verify that the setup worked using NativeLink's Kubernetes example. ## 🚢 Testing with local K8s @@ -195,7 +195,7 @@ Start the cluster and set up NativeLink in an LRE configuration. For details on this refer to the [Kubernetes example](https://github.com/tracemachina/nativelink/tree/main/deployment-examples/kubernetes): > [!TIP] -> NativeLink's `native` CLI tool is self-contained and can be imported into +> NativeLink's `native` CLI tool is self-contained and integrates into > other nix flakes by adding `inputs.nativelink.packages.${system}.native-cli` > to the `nativeBuildInputs` of your `devShell`. @@ -289,7 +289,7 @@ INFO: 11 processes: 9 internal, <<<2 remote>>>. INFO: Build completed successfully, 11 total actions ``` -Now let's disable remote execution and attempt a local rebuild, but keep access +Now disable remote execution and attempt a local rebuild, but keep access to the remote cache: ```bash @@ -322,7 +322,7 @@ INFO: Build completed successfully, 11 total actions > [!TIP] > You can also do this the other way around, that is, run a local build and > upload artifacts to the remote cache and have a remote build reuse those -> artifacts. Or you can only run local builds on different machines that all +> artifacts. Or you can run local builds on different machines that all > share the same cache. 
>
> If you set up all your projects to use the same LRE configuration you'll be

@@ -330,7 +330,7 @@ INFO: Build completed successfully, 11 total actions

## 🛠️ Rebuilding the toolchains

-This is only relevant if you're updating the base toolchains in the `nativelink`
+This is relevant if you're updating the base toolchains in the `nativelink`
repository itself. If you run `nix flake update` in the `nativelink` repository
you need to update the generated Bazel toolchains as well:

generate-toolchains

## 📐 Architecture

-The original C++ and Java toolchain containers are never really instantiated.
-Instead, their container environments are used and passed through transforming
+The original C++ and Java toolchain containers are never actually instantiated.
+Instead, their container environments pass through transforming
functions that take a container schematic as input and generate some other
output.

flowchart LR
--> |deploy| K{Kubernetes}
```

-In the case of LRE the base image is built with Nix for perfect reproducibility.
-However, you could use a more classic toolchain container like an Ubuntu base
+For LRE, Nix builds the base image for perfect reproducibility, though
+you could use a more classic toolchain container like an Ubuntu base
image as well:

```mermaid
flowchart LR
--> |custom toolchain generation logic| B[RBE client configuration]
```

-In many classic setups the RBE client configurations are handwritten. In the
-case of LRE we're generating Bazel toolchains and platforms using a pipeline of
+In classic setups the RBE client configurations are handwritten.
In the
+case of LRE, Bazel toolchains and platforms are generated by a pipeline of
custom image generators and the `rbe_configs_gen` tool:

```mermaid
flowchart LR
@@ -392,13 +392,13 @@ flowchart LR

When you then invoke your RBE client with the configuration set up to use these
toolchains, the NativeLink scheduler matches actions to the worker they require.
-In Bazel's case this scheduler endpoint is set via the `--remote_executor` flag.
+In Bazel's case, the `--remote_executor` flag sets this scheduler endpoint.

## 🦜 Custom toolchains

-The general approach described works for arbitrary toolchain containers. You
-might need to implement your own logic to get from the toolchain container to
-some usable RBE configuration files (or write them manually), but this can be
-considered an implementation detail specific to your requirements.
+The general approach described works for arbitrary toolchain containers. You might
+need to create your own logic to get from the toolchain container to some usable RBE
+configuration files (or write them manually), but this is an implementation detail
+specific to your requirements.

TODO(aaronmondal): Add an example of a custom toolchain extension around lre-cc.
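The scheduler endpoint described above reaches Bazel through the `--remote_executor` flag. A minimal sketch of how that might look in a `.bazelrc`, with placeholder addresses (the real endpoints depend entirely on your deployment):

```bash
# Sketch of .bazelrc remote-execution flags; the addresses below are
# placeholders, not real NativeLink endpoints.
build --remote_executor=grpcs://scheduler.example.com:443
build --remote_cache=grpcs://cas.example.com:443
```

`--remote_executor` and `--remote_cache` are standard Bazel flags; whether to use `grpc://` or `grpcs://` depends on whether the endpoint terminates TLS.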