Commit

[docs] fix: update README (#34)
leonardo0lyj authored May 12, 2024
1 parent dd44ba5 commit 9047a73
Showing 2 changed files with 1 addition and 3 deletions.
2 changes: 1 addition & 1 deletion docs/texts/introduction.md
@@ -47,7 +47,7 @@ Furthermore, **veScale** is also developing a _Mixed Mode_* of partial _Eager_ a
**veScale** is designed and implemented on top of a primitive called _DTensor_ that provides a global tensor semantic with local shards distributed on multiple devices.
**veScale** extends and enhances the _PyTorch DTensor_ to our production standard, and further develops _Auto-Plan*_ and _Auto-Parallelize_ with a unified configuration and API.

- Furthermore, **veScale** also supports online _Auto-Reshard_ for distributed checkpoints, which will be open-sourced as a new project -- **OmniStore**.
+ Furthermore, **veScale** also supports online _Auto-Reshard_ for distributed checkpoints.

(`*` is under development)

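The _DTensor_ primitive mentioned in the diff above provides a global tensor semantic backed by local shards on multiple devices. The idea can be illustrated with a minimal, hypothetical pure-Python sketch; this is not veScale's actual API, and the helpers `shard_rows` and `gather_rows` are invented for illustration only.

```python
# Hypothetical sketch of the DTensor idea (NOT veScale's real API):
# a logically global tensor is physically stored as local shards,
# one per device, and the global view is recoverable on demand.

def shard_rows(global_tensor, num_devices):
    """Split a global 2-D tensor (list of rows) into per-device row shards."""
    rows_per_device = len(global_tensor) // num_devices
    return [global_tensor[i * rows_per_device:(i + 1) * rows_per_device]
            for i in range(num_devices)]

def gather_rows(shards):
    """Reassemble the global tensor from its local shards."""
    return [row for shard in shards for row in shard]

global_tensor = [[1, 2], [3, 4], [5, 6], [7, 8]]
shards = shard_rows(global_tensor, num_devices=2)
assert shards[0] == [[1, 2], [3, 4]]          # each device holds only its shard
assert gather_rows(shards) == global_tensor   # the global view is recoverable
```

In the real system, the shards live on separate devices and collectives replace the in-process gather; the sketch only shows the global-vs-local relationship.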
2 changes: 0 additions & 2 deletions vescale/checkpoint/README.md
@@ -21,8 +21,6 @@ abstracting away the complexities of underlying details such as process rank and

`vescale.checkpoint` incorporates [fast checkpointing](https://arxiv.org/abs/2402.15627) and various I/O optimization techniques, enhancing I/O efficiency during LLM training.

- `vescale.checkpoint` will be a part of `OmniStore` project, a new open-source project coming soon.
-
`vescale.checkpoint` is built on top of `PyTorch Distributed Checkpoint` with significant differences as discussed above.

## How to use `vescale.checkpoint`?
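The per-rank bookkeeping that `vescale.checkpoint` abstracts away can be sketched as follows. This is a hypothetical illustration of a rank-sharded checkpoint layout, not `vescale.checkpoint`'s real API; `save_checkpoint`, `load_checkpoint`, and the `rank_N.json` file layout are invented for this example.

```python
# Hypothetical sketch (NOT vescale.checkpoint's real API): each rank
# persists only its local shard of the training state, keyed by rank,
# which is the kind of detail the library hides from the user.
import json
import os
import tempfile

def save_checkpoint(ckpt_dir, rank, local_state):
    """Write this rank's local state to its own file in the checkpoint dir."""
    os.makedirs(ckpt_dir, exist_ok=True)
    with open(os.path.join(ckpt_dir, f"rank_{rank}.json"), "w") as f:
        json.dump(local_state, f)

def load_checkpoint(ckpt_dir, rank):
    """Read back the local state belonging to this rank."""
    with open(os.path.join(ckpt_dir, f"rank_{rank}.json")) as f:
        return json.load(f)

ckpt_dir = tempfile.mkdtemp()
save_checkpoint(ckpt_dir, rank=0, local_state={"w": [0.1, 0.2]})
assert load_checkpoint(ckpt_dir, 0) == {"w": [0.1, 0.2]}
```

A production system adds sharded-tensor metadata, resharding on load, and asynchronous I/O; the sketch only shows the rank-to-file mapping a user would otherwise manage by hand.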
